Linux: The Architecture and All Its Flavors
What Linux actually is, how it's layered, why there are hundreds of versions, and how the major families differ - before you ever open a terminal.
Linux is not one thing.
That is the first thing most explanations get wrong - they treat Linux like a single operating system, the way Windows is a single operating system. It is not. Linux is more like a set of building blocks that different people and organisations assemble in different ways.
Understanding Linux means understanding the layers. Once you see the layers, the hundreds of "versions" stop being overwhelming and start making sense.
Part 2 covers the terminal and how to actually use it.
The kernel: what Linux actually is
Linux is a kernel. That is the precise, technical answer.
The kernel is the lowest layer of an operating system. It talks directly to hardware - managing memory, CPU time, storage, and all the devices connected to the machine. It runs underneath everything else and is never touched directly by the user.
On its own, a kernel is not an operating system you can use. It is infrastructure. You need everything else built on top of it - the programs, the tools, the interface - before you have something usable.
The Linux kernel was created by Linus Torvalds in 1991 and released as open source. Anyone can read it, modify it, or build on it. That single decision is why the Linux world looks the way it does today.
The layers above the kernel
A usable Linux system is built up in layers. Each layer depends on the one below it.
The kernel sits at the bottom, managing hardware.
System libraries sit just above. The most important is glibc - the GNU C Library. Almost every program on Linux calls into this library to do basic things: read a file, connect to a network, allocate memory. It is the standard interface between programs and the kernel.
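You can see this dependency directly on a running system: `ldd` lists the shared libraries a program loads, and on a glibc-based distribution almost every binary links against libc.so.6. A quick sketch (assumes a glibc distribution - on Alpine, which uses musl, the output names musl instead):

```shell
# Show which shared libraries a common binary depends on.
# On a glibc-based distribution, libc.so.6 (glibc) appears in the list.
ldd /bin/ls

# Ask glibc itself for its version string (glibc-specific behaviour).
ldd --version | head -n 1
```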
Core utilities are the basic command-line tools: copy a file, list a directory, check a process. On most Linux systems, these come from the GNU coreutils project - the same ls, cp, mv across virtually every distribution.
The shell is the program that reads commands typed in a terminal and runs them. The most common is Bash. Others include Zsh and Fish. The shell sits on top of the core utilities and uses them constantly.
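A flavour of how these two layers combine: the shell reads a line, and the core utilities do the work. The sketch below uses only standard coreutils commands that behave the same on virtually any distribution (the /tmp/demo path is just an example location):

```shell
# Create a scratch directory and a file in it.
mkdir -p /tmp/demo
echo "hello from linux" > /tmp/demo/note.txt

# Core utilities at work: copy, list, read.
cp /tmp/demo/note.txt /tmp/demo/copy.txt   # cp: copy a file
ls /tmp/demo                               # ls: list the directory
cat /tmp/demo/copy.txt                     # cat: print file contents

# The shell glues utilities together with a pipe:
# count the words in the file with no dedicated tool for that job.
cat /tmp/demo/copy.txt | wc -w
```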
The init system is what starts everything when the machine boots. It is the first process the kernel launches, and it is responsible for starting all other services. The dominant init system today is systemd, though some distributions use alternatives by choice.
The package manager is how software gets installed, updated, and removed. Different distributions use different package managers - this is one of the biggest practical differences between them.
The display server handles graphics. It sits between the kernel's graphics drivers and the applications drawing windows on screen. The two main options are X11 (older, ubiquitous) and Wayland (newer, increasingly the default).
The desktop environment is what most people would call the "look and feel" - the taskbar, the app launcher, the file manager, the settings panel. This is entirely separate from everything below it. You can swap desktop environments without changing anything else.
Applications sit at the top.
The key insight: almost every layer is swappable. That is not true on Windows or macOS. On Linux, you can replace the shell, the init system, the package manager, the display server, and the desktop environment independently. Different distributions make different choices at each layer.
How a Linux system boots
Understanding the boot sequence fills in one gap the layer diagram leaves out: how does the machine go from powered-off to a running system?
The firmware (BIOS or its modern replacement, UEFI) runs first. It is software baked into the motherboard. It does a quick hardware check, then hands control to whatever is installed on the boot drive.
The bootloader is the next thing to run. On Linux, this is almost always GRUB. The bootloader's job is simple: find the kernel, load it into memory, and start it. GRUB is also what gives you a menu to choose between operating systems on a dual-boot machine.
The kernel loads, initialises hardware, mounts the root file system, and then immediately hands off to the init system.
The init system (usually systemd) takes over as the first real process - PID 1. From here it starts everything else: networking, the display server, login screens, background services. By the time you see a desktop or a login prompt, systemd has already brought up dozens of services in the right order.
The boot sequence is just the layers starting up from the bottom.
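You can inspect the tail end of this hand-off on any running Linux machine: PID 1 is whatever the kernel started as init. The systemctl commands are shown as comments because they assume a full systemd machine; the /proc lookup works anywhere:

```shell
# PID 1 is the init process the kernel started.
# On a systemd distribution this prints "systemd"; inside a container it
# may be whatever process the container was launched with.
cat /proc/1/comm

# On a systemd machine (not runnable in most containers):
#   systemctl list-units --type=service   # services systemd has started
#   systemctl status                      # overall system state
#   systemd-analyze                       # how long boot took, per stage
```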
What a distribution actually is
A Linux distribution (distro) is a complete, pre-assembled system that makes those layer choices for you.
A distribution takes the Linux kernel, picks a package manager, picks an init system, bundles a set of default software, and ships it as something you can actually install. The maintainers handle updates, security patches, and keeping everything compatible with each other.
Some distributions target desktop users who want things to just work. Some target servers with minimal installs and maximum stability. Some target developers who want the latest software. Some target people who want total control over every choice.
The distribution is the product. Linux is the foundation it is built on.
The major families
Most distributions belong to a family - they share a common ancestor and a package manager. Understanding the families is far more useful than trying to memorise individual distros.
The Debian family
Debian is one of the oldest and most influential distributions. It is known for extreme stability - packages are tested extensively before release, which means they may lag behind the latest versions.
Ubuntu is built directly on top of Debian. Canonical (the company behind it) takes Debian packages, adds its own polish, and releases on a predictable schedule. Ubuntu made Linux dramatically more accessible to desktop users and remains the most popular entry point. It ships every six months, with Long Term Support (LTS) releases every two years.
From Ubuntu, several others branch off:
- Linux Mint - popular for users coming from Windows. Conservative, familiar, extremely beginner-friendly.
- Pop!_OS - made by System76 (a Linux hardware company). Developer-focused, with strong defaults for productivity and gaming.
- elementary OS - designed to feel like a premium, macOS-style desktop. Opinionated and polished.
- Kali Linux - built for security professionals. Comes pre-loaded with penetration testing tools. Not for general use.
Package manager: apt (Advanced Package Tool).
The Red Hat family
Red Hat Enterprise Linux (RHEL) is the dominant Linux in corporate and enterprise environments. Red Hat (now owned by IBM) charges for support subscriptions - the software itself is open source, but the support contract is the product.
Fedora is Red Hat's community distribution. It is where new features land first before making their way to RHEL. Fedora tends to run cutting-edge software and is popular with developers.
CentOS was for a long time the free, community-supported rebuild of RHEL. In late 2020 Red Hat repositioned it as CentOS Stream - a development preview that runs slightly ahead of RHEL rather than tracking it - which led to community forks.
Rocky Linux and AlmaLinux are the main successors - community projects that rebuild RHEL from its open source components, offering RHEL compatibility for free.
Package manager: dnf (and its predecessor yum). Package format is .rpm.
The Arch family
Arch Linux is a minimalist, build-it-yourself distribution. There is no graphical installer. You boot a live environment, partition your own disk, install the kernel, install a bootloader, configure networking, and layer everything else from scratch. It is deliberately not for beginners.
What you get in return: a system that contains exactly what you chose to put in it, nothing more. And access to the AUR (Arch User Repository) - an enormous community-maintained repository of software that other distributions often do not package officially.
Arch uses a rolling release model. There are no version numbers or upgrade cycles. You update continuously, always running the latest software.
Manjaro and EndeavourOS are popular Arch-based distributions that add an installer and sensible defaults, making Arch more accessible without fully hiding it.
Package manager: pacman.
The independent distributions
Some distributions belong to no family and make genuinely different architectural choices.
Gentoo is source-based. Instead of installing pre-compiled software, you download source code and compile everything on your own machine with flags tuned to your hardware. Extreme performance and flexibility - also extreme setup time.
NixOS is built around a completely different idea: the entire system configuration is declared in a single file. Reproducing a system exactly, rolling back to a previous state, running multiple versions of software side by side - these are all first-class features. It has a steep learning curve but a growing following among developers who care about reproducibility.
Void Linux is small, independent, and uses its own package manager (xbps) and init system (runit instead of systemd). A popular choice for people who dislike systemd.
Slackware is the oldest surviving Linux distribution, and it has barely changed its philosophy in 30 years. Manual, minimal, educational.
Server and container distributions
Alpine Linux is tiny. Its base container image is only a few megabytes. It uses musl libc instead of glibc and is designed to run inside containers (Docker images often start from Alpine). Almost nothing is included by default.
Debian and Ubuntu Server are the most common general-purpose server distributions.
CoreOS (now Fedora CoreOS) and similar distributions are designed specifically for running containers at scale - immutable base system, everything runs in containers.
Android and ChromeOS
Android is a Linux distribution. The kernel underneath every Android phone is Linux. The layers above it - the Android Runtime (ART) that runs apps, the application framework, the interface - are completely different from desktop Linux, but the foundation is the same.
ChromeOS is also Linux-based. Modern Chromebooks can run a full Linux environment alongside Chrome apps and Android apps.
These two alone make Linux the most-used operating system in the world by device count.
The choices that actually differentiate distros
When you strip away branding and defaults, most distributions come down to a handful of key decisions.
Package manager and format. This is the most consequential practical difference. apt/.deb (Debian family) and dnf/.rpm (Red Hat family) cover the vast majority of Linux. They are not compatible - a .deb package does not work on a Red Hat system and vice versa. Arch's pacman is its own format entirely. This matters because it determines what software is available and how you install it.
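As a sketch of how the same task looks in each family - the install commands need root privileges on the matching distribution, so they are shown as comments, followed by a small detection loop that runs anywhere:

```shell
# Installing the same tool (htop) in each major family:
#
#   Debian/Ubuntu (apt, .deb):   sudo apt update && sudo apt install htop
#   Fedora/RHEL   (dnf, .rpm):   sudo dnf install htop
#   Arch          (pacman):      sudo pacman -S htop

# Detect which family's package manager this machine actually has:
pm="none"
for candidate in apt dnf pacman; do
  if command -v "$candidate" >/dev/null 2>&1; then
    pm="$candidate"
    break
  fi
done
echo "package manager family: $pm"
```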
Release model. Fixed-release distributions (Ubuntu, Fedora) ship a new version periodically. You upgrade between versions. Rolling-release distributions (Arch, Void) update continuously - there is no version to upgrade to. Fixed releases are more stable. Rolling releases are always current.
Init system. Most major distributions now use systemd. A minority use alternatives (runit, OpenRC, s6). This is a philosophical divide as much as a technical one - systemd is large and opinionated, which some people dislike.
Target audience. A distribution's defaults, documentation, installer, and community norms are shaped by who it is built for. Ubuntu's installer holds your hand. Arch's documentation assumes you will read carefully and figure things out.
Desktop environments: the layer most people see
The desktop environment is what most people mean when they say Linux "looks different."
Because it is its own swappable layer, a single distribution often ships multiple desktop environment options, and you can install others yourself.
GNOME is clean, modern, and opinionated. Ubuntu ships it by default. It leans into a workflow built around activities and workspaces, with minimal clutter. Some love it; others find it too different from what they know.
KDE Plasma is highly customisable. You can change nearly everything - layouts, animations, panels, themes. It is what people reach for when they want a Windows-like experience but with more control.
XFCE is lightweight and traditional. Fast on older hardware. Familiar to anyone used to a classic desktop.
LXDE / LXQt - even lighter than XFCE. Built for low-resource machines.
Cinnamon - Linux Mint's desktop, designed explicitly to feel comfortable for Windows users.
i3, Sway, and other tiling window managers - not desktop environments in the traditional sense. Windows do not float and overlap by default - everything tiles, arranged and controlled from the keyboard. Popular with developers who live in the terminal.
The desktop environment is cosmetic in terms of what Linux is, but it is everything in terms of what Linux feels like to use.
Linux vs Windows vs macOS
These three systems have different origins, different philosophies, and different trade-offs. Understanding where they diverge explains a lot about why Linux behaves the way it does.
Kernel. Windows runs the NT kernel - built entirely by Microsoft, closed source. macOS runs XNU - a hybrid kernel with roots in BSD Unix and Mach; Apple publishes its source, though macOS as a whole is proprietary. Linux runs the Linux kernel - open source, maintained by thousands of contributors.
Unix heritage. macOS is a certified Unix - it descends directly from BSD and is POSIX-compliant. Linux is Unix-like: it follows Unix conventions closely but was written from scratch, not derived from Unix source code. Windows has almost no Unix heritage and operates on fundamentally different conventions.
Architecture. Windows and macOS are products - you get what the company ships, and the layers are not separable. Linux is a set of components that can be mixed and matched. You can swap the shell, the init system, the desktop environment, the display server. Nothing locks you to a particular combination.
File system structure. Windows uses drive letters (C:\, D:\). macOS uses a Unix-style hierarchy but enforces a lot of structure on top of it. Linux uses a single unified tree starting at / - every drive, partition, and device is mounted somewhere inside it.
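The single-tree model is easy to see from the shell. A quick sketch, assuming a standard Linux layout:

```shell
# Everything hangs off the single root directory, /.
ls /            # typical entries: bin, etc, home, usr, var ...

# Devices and partitions are mounted *into* the tree, not given letters.
# findmnt draws the tree of mount points (falls back to mount if absent):
findmnt / || mount | head -n 5

# A path says nothing about physical disks; df reveals which mounted
# file system actually backs a directory.
df /usr
```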
Software installation. On Windows, you download an installer from a website and run it. On macOS, you use the App Store or a .dmg file. On Linux, you use a package manager that pulls from a curated, signed repository. The Linux model is safer and more predictable - and is increasingly what other platforms are copying (see: Microsoft's winget, Apple's App Store).
Customisability. macOS offers very little at the OS level - Apple controls the experience tightly. Windows offers more but within limits Microsoft sets. Linux has no such constraints. Every layer is replaceable.
Cost and ownership. Windows requires a paid licence. macOS is free but requires Apple hardware - the cost is the machine. Linux is free and runs on almost anything.
Where each one dominates. Windows owns the consumer desktop and much of enterprise office computing. macOS dominates among creative professionals and a significant share of software developers. Linux owns servers, cloud infrastructure, mobile (via Android), embedded systems, and supercomputers.
Why there are so many distributions
The short answer: because the pieces are open source and anyone can assemble them differently.
The longer answer: because no single distribution is optimal for every use case.
A distribution designed for stability on long-running servers has different priorities than one designed for a developer wanting the newest language toolchains. A distribution targeting a beginner who needs things to just work looks nothing like one targeting a security researcher who needs a specific set of tools pre-installed.
The diversity is not chaos. It is the natural outcome of open architecture meeting genuinely different needs.
The distributions that survive and grow do so because they serve a real audience well. The ones that were redundant or poorly maintained quietly disappear or get absorbed.
The one rule to remember
Linux is not a product. It is a foundation.
What you actually install is a distribution - a set of choices made on top of that foundation. Different choices for different purposes.
Understand the layers. Understand the families. Then pick the distribution that matches what you are trying to do.
That is really the whole thing.