Linux Introduction

What is Linux?

  • Linux is an open-source operating system (OS) kernel initially developed by Linus Torvalds in 1991.
  • It is the core software that manages a computer’s hardware and allows other programs and users to interact with this hardware.
  • Linux is free to use, modify, and distribute because of its open-source nature.
  • A Linux operating system typically combines the Linux kernel with various GNU tools, libraries, and other software components to create a complete system, often delivered through Linux distributions (distros).

Overview of the Linux ecosystem

  • The Linux ecosystem is vast and varied, built around the Linux kernel and complemented by the GNU project’s software tools and utilities. These components form what is called GNU/Linux.
  • On top of this foundation, Linux distributions package these elements with additional software tailored for different use cases, whether desktops, servers, or embedded devices.
  • The ecosystem includes a rich set of command-line tools, graphical desktop environments, package managers, and diverse applications, all supported by a global community of developers and users.
  • Unix was created in the late 1960s and early 1970s at Bell Labs by a team including Ken Thompson and Dennis Ritchie.
  • Designed as a multi-user, multitasking operating system, Unix introduced many important concepts still foundational today, such as hierarchical file systems and simple, modular utilities.
  • Its design philosophy emphasized simplicity and portability, making it adaptable to various hardware.
  • Unix evolved into many versions, including commercial and academic variants like BSD (Berkeley Software Distribution), System V by AT&T, and others.
  • This evolution spawned a large family of Unix-like operating systems, influencing many modern OS designs, including Linux.
Early Days of Free and Open-Source Software
  • The concept of free software began gaining traction with projects like BSD releasing free Unix source code.
  • Richard Stallman launched the GNU Project in 1983 to develop a complete free software Unix-like OS, emphasizing user freedoms and copyleft licensing to protect software freedom.
Linus Torvalds and the Creation of the Linux Kernel (1991)
  • In 1991, Linus Torvalds, a Finnish university student, started developing Linux as a hobby project to create a free replacement kernel compatible with Unix. He announced his project with humility, inviting others to contribute.
  • Torvalds famously described Linux as “just a hobby,” but it rapidly grew into a major worldwide project.
Milestones in Linux Development and Growth
  • The adoption of the GNU General Public License (GPL) ensured Linux would remain free and open.
  • The release of Linux distributions like Debian and Red Hat broadened its accessibility.
  • Linux became the foundation of many technologies, including Android, supercomputers, and web servers.
  • The community-driven development model allowed rapid iteration and improvements.
The Role of the GNU Project and Richard Stallman
  • The GNU Project, led by Richard Stallman, developed many essential tools and utilities that complete a functional Unix-like system.
  • Linux relies on these GNU components for a full user environment. Stallman’s philosophy focused on software freedom, emphasizing the users’ rights to run, study, modify, and share software.
The Synergy: Linux Kernel + GNU Tools = GNU/Linux
  • The combination of the Linux kernel and GNU software tools forms what is technically called GNU/Linux.
  • While often just called Linux, acknowledging GNU’s crucial role highlights the union of kernel and user-space software that creates a complete OS.

Linux Philosophy and Open Source Principles

  • The Linux philosophy deeply inherits the Unix philosophy, which originated with early Unix developers like Ken Thompson. This philosophy emphasizes building small, modular, and simple programs that do one thing well and can easily be combined with other programs.
  • Key principles include writing clear, maintainable code, favoring composition over monolithic design, and ensuring extensibility and reusability.
  • A central tenet of Linux philosophy is to give users full control of their system without unnecessary restrictions.
  • Linux assumes users have the capability to understand and configure their own systems, offering transparency and freedom to customize.
  • Free Software: As defined by the Free Software Foundation (FSF), free software means the user has the freedom to run, study, modify, and share the software. “Free” refers to liberty, not price.
  • Open Source Software: Open source emphasizes making the source code publicly available to encourage collaborative development and innovation with transparent processes. Open source licenses ensure users can access and modify the code legally.
  • The free and open source movement began to take formal shape in the 1980s with the GNU Project initiated by Richard Stallman in 1983, aimed at creating a fully free Unix-like OS.
  • The Open Source Initiative (OSI) was founded later in 1998 to promote open source software and its benefits.
  • The development of Linux in 1991 catalyzed the movement, embodying these principles in practice by offering a robust, collaborative OS built on open development.
GNU General Public License (GPL) and its Significance
  • The GPL license, written by Stallman, is fundamental to Linux’s development model.
  • It guarantees the freedoms of software usage, modification, and redistribution while requiring derivative works to remain under the same license (copyleft).
  • This legal framework ensures Linux and its ecosystem remain free and openly developed, protecting user rights long-term.

  • An operating system (OS) is the fundamental software layer that manages a computer’s hardware resources and enables interaction between the user and these resources. The OS acts as an intermediary between the user, application programs, and the hardware, managing memory, processes, input/output devices, and file storage.

Difference between OS and Kernel Explained Clearly
  • The kernel is the central core of an operating system. It’s always running and has complete control over everything in the system.
  • The kernel manages low-level tasks such as process scheduling, memory management, and device control. In contrast, the operating system includes the kernel plus system programs (like shells and utilities), user interfaces, and application support.

Duties of Kernel

  • Directly managing CPU, memory, and device controllers
  • Handling interrupts and low-level hardware functions
  • Maintaining isolation between programs to enhance security
  • Providing system calls: the interface for user programs to request kernel services
Kernel Design Principles: Monolithic and Modular Approach
  • The Linux kernel is architecturally monolithic, meaning all core functions (process, memory, driver management) execute in one address space as a single program for high performance and simplicity.
  • However, it is also modular: Linux supports loadable kernel modules, which allows features (like new filesystems or drivers) to be added or removed at runtime without rebuilding the whole kernel or rebooting. This approach offers a blend of speed, flexibility, and extensibility.
  • Linus Torvalds chose a monolithic design to make Linux efficient and easier for contributors to understand and extend, but modularity was introduced to ensure adaptability and broader hardware support.
Multitasking, Multiprocessing, and Process Scheduling
  • Multitasking: The kernel runs multiple programs at once by quickly switching the CPU between them, making it appear as if everything happens in parallel.
  • Multiprocessing: Supports multiple CPUs or CPU cores, distributing work for greater speed and responsiveness.
  • Process scheduling: Sophisticated scheduling algorithms decide which process runs next, balancing responsiveness and fairness under heavy loads.
Interfacing with Hardware - Device Drivers
  • The kernel uses device drivers—special modules that translate generic OS commands into device-specific actions. This is how Linux supports an enormous array of hardware.
  • Most drivers can be built as modules, loaded only when needed, minimizing the kernel’s memory footprint and improving stability.
Kernel Updates and Community Development Model
  • The Linux kernel is updated continuously by thousands of developers and hundreds of organizations worldwide.
  • Updates bring bug fixes, security patches, new features, and hardware support. The development and decision process is public, and Linus Torvalds, as the kernel’s “benevolent dictator,” still oversees major changes.
  • Updates follow a release cycle, and stable versions are supported for long periods, especially for enterprise use.
  • Anyone can inspect, contribute to, or modify the kernel source thanks to its open source license.
What is GNU? What are GNU Tools and Utilities?
  • The GNU Project was launched in 1983 by Richard Stallman to create a complete, free Unix-like operating system that respected user freedom.

  • “GNU” is a recursive acronym for “GNU’s Not Unix,” emphasizing its intention to be compatible with—but not derived from—proprietary Unix.

  • GNU’s philosophy is about giving all users the freedom to use, study, modify, and share software.

Core GNU tools and utilities include:

  • GNU Compiler Collection (GCC): Compiles and builds software for various languages.
  • GNU C Library (glibc): Core library essential to many applications and the OS.
  • GNU Core Utilities (coreutils): Base command-line tools like ls, cat, cp, and rm.
  • GNU Bash (the Bourne Again Shell): A major command-line shell.
  • GNU Emacs: A powerful programmable text editor.
Clarifying The Term GNU/Linux and Why It Matters
  • The term GNU/Linux refers to operating systems that use the Linux kernel in combination with GNU project tools and utilities.
  • While most people casually say “Linux” for the whole OS, “Linux” is technically just the kernel.
  • A complete, usable OS requires both the kernel and user-space software—most of which, in a typical distro, comes from GNU.
  • By the early 1990s, GNU had all the parts for a Unix-like OS—except a functioning kernel. Linus Torvalds’ Linux kernel (1991) filled that crucial gap.
  • The combination meant, for the first time, a fully functional and freely available OS could be assembled entirely from open-source components.
  • GNU tools manage files, execute programs, compile code, and provide shells—enabling developers and users to interact productively with the Linux kernel.
  • The GNU Manifesto, published in 1985, helped rally global developer support for free software decades before “open source” became mainstream.

Linux System Architecture: Levels and Layers of Abstraction

OS Abstraction Layers: Kernel, System Libraries, User Space
  • Hardware Layer: Actual physical devices—CPU, memory, storage, I/O.
  • Kernel Space: The Linux kernel controls hardware, schedules processes, manages memory, and provides an API for higher layers.
  • System Libraries/API: Libraries like glibc offer standardized programming interfaces (POSIX, etc.) so applications don’t have to interact with the kernel directly.
  • User Space: Where user applications, shells, and utilities execute. User programs have limited privilege and interact with hardware only via controlled kernel interfaces.
System Calls and Interface Between User Programs and Kernel
  • User applications operate in user space and cannot access hardware directly. When they need services (reading/writing files, network communication, or process creation), they invoke system calls, usually through thin wrapper functions provided by libraries such as glibc.
  • A system call transfers control to the kernel, which safely executes the operation and returns the result.
  • System calls include read(), write(), open(), exec(), fork(), and many more.
  • This interface is foundational to Linux’s design—programs get kernel privileges only when necessary, reducing risk and errors.

  • Linux distributions, or distros, are complete operating systems built by packaging the Linux kernel together with GNU tools, libraries, additional software, and package management systems.
  • These bundles provide users with ready-to-use Linux environments tailored for different needs, from general desktop use to servers and specialized devices.
  • Each distribution offers a curated collection of software, tools, themes, and configurations alongside regular update and security support.
Why Distros Exist — Different Purposes and Audiences
  • Distributions exist because Linux, being modular and open source, allows people to assemble systems optimized for specific goals.
  • The diversity allows users and organizations to pick distros fitting technical skills, hardware, and use cases.

Types of Distros based on preferences:

  • Desktop distros: Prioritize user-friendly interfaces and multimedia support (e.g., Ubuntu, Linux Mint).
  • Server distros: Focus on stability, security, and performance (e.g., CentOS, Red Hat Enterprise Linux).
  • Specialized distros: Built for security auditing, multimedia production, education, or embedded devices.
  • Rolling release and fixed release: Some distros like Arch Linux provide continuously updated software, while others like Debian use fixed stable releases for reliability.

Popular Linux distributions:

  • Debian: One of the oldest, known for stability and extensive software repositories. It emphasizes free software and community governance.

  • Ubuntu: Based on Debian, it is designed for ease of use with regular releases and broad hardware support. Widely popular among desktop and server users.

  • Mint: Based on Ubuntu, but offers a “de-Ubuntued” experience by removing Ubuntu-specific components such as Snap, resulting in a leaner, less bloated system.

  • Red Hat Enterprise Linux (RHEL): A commercial distro targeting enterprise servers and systems with professional support.

  • Fedora: Community-driven, sponsored by Red Hat, it focuses on integrating the latest free software innovations.

  • SUSE Linux Enterprise: A commercial distro with roots in Germany, designed for business environments.

  • Slackware: One of the earliest Linux distros, valued for simplicity and minimalism, preferred by advanced users.

  • Linux distributions come in many flavors, which refer to variations within distros that cater to different user preferences, requirements, or use cases. These flavors can differ by desktop environment, target audience, default software, or specific optimizations.

Desktop Environment Based:

  • GNOME: Default for many mainstream distros like Fedora and Ubuntu (GNOME edition). Known for simplicity and modern design.

  • KDE Plasma: A feature-rich, highly customizable desktop used by distros such as Kubuntu and openSUSE.

  • XFCE: Lightweight and fast, preferred on older hardware or for minimalism, featured in Xubuntu and Manjaro XFCE.

  • LXDE/LXQt: Ultra-lightweight desktops for very low-resource systems or embedded Linux.

  • MATE: A continuation of the classic GNOME 2 desktop, used in Ubuntu MATE and others.

Use Case Based:

  • Desktop Flavors: Focused on home users, developers, and general-purpose computing.
  • Server Flavors: Optimized for stability, security, and scalability in enterprise environments.
  • Security and Privacy Flavors: Designed for penetration testing, anonymity, or privacy-focused computing (Kali Linux, Tails).
  • Specialized Flavors: Tailored for education, multimedia production, gaming, or performance (Edubuntu, Ubuntu Studio, SteamOS).

Release Based:

  • Rolling Release: Continuously updated packages provide the latest software versions (Arch Linux, openSUSE Tumbleweed).
  • Fixed Release: Periodic, well-tested releases focus on system stability and long-term support (Ubuntu LTS, Debian Stable).

Community vs Commercial Flavors

  • Community Flavors: Developed and maintained by volunteers and communities (Debian, Fedora, Arch Linux).
  • Commercial Flavors: Supported with professional services and certifications, often aiming at business users (Red Hat, SUSE Linux Enterprise).
