CompTIA Linux+

Do you linux?


0. Introduction

Course Linux Distributions

  1. Primary Distributions:

    • Red Hat Enterprise Linux-based distribution
    • Debian-based distribution
  2. RHEL Alternatives (Clones):

    • Rocky Linux
    • AlmaLinux OS
    • CentOS Stream
    • Note: Most tutorials work on Fedora Linux with minimal modifications
  3. Debian-based Options:

    • Debian Linux
    • Ubuntu Linux
    • Linux Mint
    • Other Debian derivatives

Virtualization Setup

  1. Primary Tool: VirtualBox

    • Free to download
    • Cross-platform compatibility
    • Host OS options:
      • Windows
      • macOS
      • Sun Solaris
      • Various Linux distributions
  2. Alternative Options:

    • Other virtualization solutions acceptable
    • Must support running multiple VMs

Hardware Requirements

  1. CPU Requirements:

    • 64-bit Intel or AMD processor
    • Virtualization support needed:
      • Intel: VT-x technology
      • AMD: AMD-V technology
    • BIOS virtualization must be enabled
  2. Storage Requirements:

    • Minimum: 30-40 GB free space
      • Sufficient for two Linux VMs
    • Recommended: More space for flexibility
      • Additional VMs
      • Larger virtual drives
  3. Memory (RAM) Requirements:

    • Linux Host:
      • Minimum: 4 GB
      • Recommended: 8 GB
    • Windows Host:
      • Minimum: 8 GB
      • Recommended: 16 GB
    • More RAM allows:
      • Multiple concurrent VMs
      • Complex configurations
      • Network/server-client setups
  4. Internet Connection:

    • High-speed connection recommended
    • Required for:
      • OS updates
      • Downloading ISO images

Instructor’s Setup Reference

  • Host System:
    • OS: Fedora Linux 36
    • RAM: 16 GB
    • Storage: 1 TB
  • Virtual Machines:
    • Rocky Linux VM
    • Ubuntu Linux VM

Additional Notes

  • Course structure accommodates both RHEL and Debian-based systems
  • Flexibility in choice of distributions within each family
  • Virtual environment enables safe learning environment
  • Setup allows for practical networking and system administration exercises

1. Lab Setup

Explore Linux Distros

Linux Distribution Basics

  • Definition: Linux kernel + supporting drivers, tools, applications packaged in distributable format
  • Resource: distrowatch.com catalogs available distributions
  • Purpose-specific distributions available for:
    • Stability (e.g., Red Hat Enterprise Linux)
    • Multimedia production
    • Gaming
    • Education
    • Security

Major Distribution Branches

  1. Slackware (July 1993)

    • Aimed at advanced users
    • Uses pkgtools package management
    • Notable derivatives:
      • VectorLinux
      • SUSE (now uses RPM package format)
  2. Debian (December 1993)

    • Created by Ian Murdock (the name combines “Deb”, for his then-girlfriend Debra, and “Ian”)
    • Community-maintained with democratic structure
    • Features:
      • Elected project leaders
      • Emphasis on free software
      • Uses APT (Advanced Package Tool)
    • Popular derivatives:
      • Ubuntu
      • Linux Mint
      • Kali (security-focused)
    • Debian derivatives make up more than half of the top 10 distributions on distrowatch.com
  3. Red Hat (November 1994)

    • Commercial success:
      • Went public in 1999 (8th biggest first-day Wall Street gain)
      • First open-source company to exceed $1B revenue (2012)
      • Dominates commercial Linux server market
    • Uses RPM (Red Hat Package Manager)
    • Notable derivatives:
      • Fedora
      • CentOS
      • Rocky Linux
      • CloudLinux OS
      • Mandriva and Mageia
  4. Arch (Early 2000s)

    • Focus on simplicity and lightweight design
    • Uses pacman package management
    • Notable derivatives:
      • Manjaro
      • EndeavourOS
    • Two derivatives in distrowatch.com top 10

Distribution Popularity Measurement Challenges

  • Sales figures are unreliable because:
    • Only commercial distributions charge for licenses
    • Most distributions are free downloads
  • Web browser statistics limitations:
    • Only identify desktop OS usage
    • Server OS usage not captured
  • Distrowatch.com statistics:
    • Limited to site visitors
    • May not represent actual usage

Choosing a Distribution Recommendations:

  1. Check popular distributions on distrowatch.com
  2. Test different distributions in virtual machines
  3. Familiarize with both Debian and Red Hat for employment prospects
  4. Consider personal preferences and needs

Best Practices:

  • Back up personal data (e.g., Dropbox, Google Drive)
  • Feel free to experiment with different distributions
  • Consider specific use case requirements

Additional Context

  • Historical timeline spans approximately 30 years
  • Approximately 1,000 distributions have existed
  • Most distributions share similar software capabilities
  • Main differences lie in:
    • System configuration
    • Software installation methods
    • Package management systems
    • Update procedures

Prepare the host for virtualization

Prerequisites & System Requirements

  • Virtualization must be enabled in BIOS/UEFI
  • Host OS must be 64-bit
  • VirtualBox installation required

VirtualBox Installation Process

  1. Download Location

    • Website: VirtualBox.org/wiki/Downloads
    • Multiple OS versions available (Windows, macOS, Linux)
  2. Linux-Specific Installation

    • Download appropriate distribution package
    • Installation methods:
      • GUI: Click package in file manager
      • Command Line (Preferred):
        cd ~/Downloads
        sudo dnf install ./VirtualBox[version].rpm
        
    • Benefits of command-line installation:
      • More verbose feedback
      • Faster execution
      • Automatic dependency resolution
  3. Post-Installation Configuration

    • User Group Configuration:

      • Add user to vboxusers group:
        sudo gpasswd -a username vboxusers
        
      • Verify groups with groups command
    • SELinux Configuration (if applicable):

      • Set VirtualBox boolean:
        sudo setsebool -P use_virtualbox 1
        
      • Verify setting:
        sudo getsebool use_virtualbox
        
  4. System Reboot

    • Recommended after installation
    • Minimum requirement: Desktop logout/login
    • Command: reboot

VirtualBox Extension Pack

  1. Purpose & Features

    • Enhanced USB device support
    • Remote VM access
    • Webcam pass-through
    • VM disk encryption
    • Hardware expansion card support
  2. Installation Process

    • Download from VirtualBox.org/wiki/Downloads
    • Select “Extension Pack for all supported platforms”
    • Version must match installed VirtualBox version
  3. Installation Steps

    • Click downloaded extension pack
    • Follow installation wizard
    • Accept license agreement
    • Provide administrator password
  4. Verification

    • VirtualBox 7+: File → Tools → Extension Pack Manager
    • Older versions: File → Preferences → Extensions

Important Notes

  • Extension pack is platform-independent
  • Must match VirtualBox version
  • Requires administrative privileges for installation
  • Enhances VM functionality significantly
  • Regular updates recommended for security and features

This setup provides a foundation for:

  • Creating virtual machines
  • Testing different operating systems
  • Development environments
  • System isolation
  • Training environments

Enterprise Linux Install

  1. Initial Setup & Download
  • Using Rocky Linux version 9 (created by original CentOS founder)
  • Download options from rockylinux.org/download:
    • 64-bit versions for Intel/AMD CPUs
    • Versions for ARM, Power PC, IBM S390
    • Download methods: HTTP or Torrent (recommended for faster speeds)
  2. VirtualBox VM Creation
  • Create new VM in VirtualBox
  • Naming convention shortcut: Type “Red Hat” for auto-fill
  • Named example: “rh_host1”
  • Configuration:
    • For VirtualBox 7+: Select downloaded ISO directly
    • Older versions: ISO selection during first boot
    • Important: Select “Skip unattended installation”
  • Hardware specifications:
    • Default memory and CPU settings
    • Minimum 10GB hard disk (20GB default)
    • Option to pre-allocate full size for performance
  3. Installation Process
  • Boot sequence:
    • Select “Install Rocky Linux 9”
    • Choose installation language
    • Configure timezone
  • Software Selection:
    • Choose “Server with GUI” for graphical interface
  • Storage Configuration:
    • Default automatic partitioning
  • User Setup:
    • Configure root password
    • Create user “user1” with administrator privileges
    • Add to “wheel” group for sudo access
  4. Post-Installation Network Configuration:
  • Enable network connection
  • Make connection persistent:
    • Access wired settings
    • Enable “Connect automatically”

System Updates:

sudo dnf update -y
sudo reboot  # Required before guest additions

Development Tools Installation:

sudo dnf group install -y "Development Tools"
sudo dnf install -y kernel-devel
  5. VirtualBox Guest Additions
  • Installation steps:
    1. Insert guest additions CD via devices menu
    2. Run installation script
    3. Enter password when prompted
    4. Wait for kernel module compilation
  • Benefits:
    • Full screen capability
    • Seamless mouse integration
    • Better system integration
  6. Hostname Configuration
sudo hostnamectl set-hostname rh_host1.localnet.com
  7. VM Snapshot Creation
  • Purpose: Save initial state for practice
  • Process:
    • Use host key (usually Right Ctrl) + T
    • Name snapshot “base install”
  • Benefits:
    • Allows practice reset
    • Multiple exercise attempts
    • System state preservation

Additional Notes:

  • Guest additions require system reboot after updates
  • Development tools necessary for proper guest additions installation
  • Snapshot creation recommended before starting exercises
  • Administrator privileges essential for future lessons

This installation provides a foundation for:

  • System administration practice
  • Course exercises
  • Enterprise Linux learning environment

Ubuntu Install

Initial Setup & Download

  • Location: ubuntu.com/download > Ubuntu Desktop
  • Version Options:
    • Latest Release
    • Long Term Support (LTS) Release
  • Release Schedule:
    • April releases end in 04
    • October releases end in 10
    • Version numbers: First two digits indicate year
    • Example: 22.04 LTS (April 2022)

VirtualBox Configuration

  1. Creating New VM:

    • Name: dbhost1
    • Type: Debian (auto-filled)
    • Version: Auto-detected
    • ISO Selection: Direct in VirtualBox 7+, at first boot in older versions
    • Installation Type: Interactive (skip unattended installation)
  2. Hardware & Storage Settings:

    • Default hardware settings recommended
    • Virtual disk allocation:
      • Dynamic allocation by default
      • Option to pre-allocate for better performance
      • Pre-allocation checkbox available

Ubuntu Installation Process

  1. Initial Boot:

    • GRUB boot menu appears
    • Select “Try or Install Ubuntu”
    • Desktop environment loads first
  2. Installation Steps:

    • Choose “Install Ubuntu”
    • Keyboard layout selection
    • Installation type:
      • Normal installation
      • Option to download updates during installation
      • Third-party software option for graphics/WiFi support
    • Disk partitioning: Full disk erasure (VM environment)
    • Time zone selection
    • User setup:
      • Username: user1
      • Computer name: dbhost1
      • Password creation
    • System restart after completion

Post-Installation Configuration

  1. VirtualBox Guest Additions:

    • Purpose: Better screen resolution and mouse integration
    • Installation process:
      sudo apt update
      sudo apt install -y build-essential linux-headers-$(uname -r)
      
    • Insert Guest Additions CD via Devices menu
    • Run autorun.sh as program
    • Eject CD after completion
  2. System Configuration:

    • Display settings adjustment
    • Hostname configuration:
      sudo hostnamectl set-hostname dbhost1.localnet.com
      
  3. Snapshot Creation:

    • Use host key + T
    • Name: “Base Install”
    • Purpose: System state preservation

Additional Notes

  • Ubuntu comes with pre-installed Guest Additions
  • Updates needed after kernel changes
  • Welcome wizard can be customized
  • Online account setup is optional
  • Host key location shown in VM window bottom-right
  • System can be shut down after snapshot creation

This installation process creates a fully functional Ubuntu VM with proper display settings and system configuration, ready for further customization or use.

Locale and date tools

Locale Settings in Operating Systems

  1. GUI Method for Setting Locale
  • Access through Activities > Region and Language
  • Allows configuration of:
    • System language
    • Number formats
    • Date formats
    • Currency formats
  • Keyboard layout configuration:
    • Accessible through left-hand pane
    • Option to add new key maps via plus symbol
    • Searchable keyboard layout database
  2. Command Line Interface (CLI) Method: using localectl (for systemd-based systems)
  • Basic command: localectl
    • Displays current language and keymap settings
  • Listing available locales:
    • Command: localectl list-locales
    • Shows approximately 800 locale options
    • Can filter using grep: localectl list-locales | grep ^en (Shows only English locales)

Setting System Locale:

  • Command syntax: localectl set-locale LANG=en_US.UTF-8
  • Verification: Run localectl again to confirm changes

Keyboard Mapping:

  • List available keymaps: localectl list-keymaps
    • Contains 500+ keyboard layouts
    • Can filter: localectl list-keymaps | grep ^us
  • Set keymap: localectl set-keymap us
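The localectl commands above fit together as one short sketch. The query half is read-only; the set-* lines need root and are shown commented out, and the block falls back to the plain locale command where localectl is absent (non-systemd systems, many containers).

```shell
# Query current settings (read-only, no root needed).
if command -v localectl >/dev/null 2>&1; then
    localectl                                        # current locale and keymap
    localectl list-locales 2>/dev/null | grep ^en || true   # English locales only
    localectl list-keymaps 2>/dev/null | grep ^us || true   # US keymaps only
else
    locale                                           # per-session locale variables
fi

# Changing settings (requires root):
# sudo localectl set-locale LANG=en_US.UTF-8
# sudo localectl set-keymap us
```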

System Time Management (timedatectl)

  1. Basic Usage
  • Command: timedatectl
  • Displays:
    • Local time
    • UTC time
    • Time zone
    • NTP synchronization status
  2. Time and Date Configuration (format specification):
  • Year: YYYY (4 digits)
  • Month: MM (2 digits)
  • Date: DD (2 digits)
  • Hour: HH (2 digits, 24-hour format)
  • Minutes: MM (2 digits)
  • Seconds: SS (2 digits)
  • Example format: YYYY-MM-DD HH:MM:SS
  3. Time Zone Management
  • List available zones: timedatectl list-timezones
  • Set time zone: timedatectl set-timezone [zone]
  4. NTP (Network Time Protocol) Configuration
  • Enable NTP: timedatectl set-ntp 1
  • Additional timedatectl commands:
    • timesync-status: Check synchronization status
    • ntp-servers: Configure interface-specific NTP servers
    • revert: Reset to default NTP servers
  5. Advanced Features
  • Scripting-friendly output options
  • Man page available for detailed reference
  • Individual time/date component modification possible
  • Interface-specific NTP server configuration
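The timedatectl workflow above can be sketched as a single session. The state-changing commands need root and are commented out; the example time zone (America/New_York) is illustrative, and the block falls back to date where timedatectl is unavailable.

```shell
# Read-only query: works without root.
if command -v timedatectl >/dev/null 2>&1; then
    timedatectl                          # local time, UTC, time zone, NTP status
else
    date -u +"%Y-%m-%d %H:%M:%S"         # plain UTC time in the same format
fi

# State-changing examples (require root):
# sudo timedatectl set-time "2024-05-01 13:30:00"   # YYYY-MM-DD HH:MM:SS format
# sudo timedatectl set-timezone America/New_York    # pick from list-timezones
# sudo timedatectl set-ntp 1                        # enable NTP synchronization
# timedatectl timesync-status                       # verify synchronization
```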

Best Practices

  1. System Configuration:
  • Always verify changes after implementation
  • Use appropriate permissions (sudo when needed)
  • Consider timezone implications for networked systems
  2. Locale Selection:
  • Choose UTF-8 encodings when available
  • Consider regional formatting requirements
  • Test keyboard layouts before permanent implementation
  3. Time Synchronization:
  • Enable NTP for accurate timekeeping
  • Consider security implications of NTP servers
  • Regular verification of synchronization status

This comprehensive system of locale and time management is crucial for:

  • System administration
  • Multi-language support
  • International deployment
  • Network synchronization
  • User experience optimization

2. Manipulating Files

Linux Shells

Shell Operation Process

  1. Command Input Flow:
    • User types command in terminal
    • Shell translates to binary (ones/zeros)
    • Kernel processes the binary
    • Results sent back to shell
    • Shell converts back to human-readable text
    • Terminal displays output

Shell Command Processing

  1. Command Execution Hierarchy:
    • Checks for built-in commands first
    • Looks for command aliases
    • Searches directories listed in $PATH
    • Returns “Command not found” if unsuccessful
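The lookup order above can be observed with the shell's own type builtin, which reports whether a name resolves to a builtin, an alias, a function, or a file found on $PATH:

```shell
# How the shell would resolve each name.
type cd                  # reports a shell builtin
type ls                  # an alias and/or a file such as /usr/bin/ls
command -v ls            # just the path (or alias text) that would run

# The directories searched for external commands, in order:
echo "$PATH" | tr ':' '\n'
```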

Major Shell Types

  1. Bourne Shell (sh)

    • Created: 1977 by Stephen Bourne
    • Features:
      • Basic functionality
      • POSIX compliant
      • Considered lowest common denominator
      • Ensures maximum compatibility
  2. C Shell (csh)

    • Created by Bill Joy (later a co-founder of Sun Microsystems)
    • Named for similarity to C language
    • Less popular than Bourne Shell
    • Limited compatibility with modern shells
  3. KornShell (ksh)

    • Created: 1983
    • New Features:
      • Job control
      • Command history
      • Advanced conditional statements
  4. Bash (Bourne Again Shell)

    • Created: 1989 by Brian Fox
    • Features:
      • Combines functionality of Bourne Shell
      • Incorporates KornShell and C Shell features
      • Additional unique functionality
      • Most widely used default shell
      • POSIX compliant (with correct options)
      • Currently at version 5
  5. Dash

    • Debian version of Almquist shell
    • Advantages:
      • Smaller size
      • Less memory usage
      • Faster execution
      • POSIX compliant
    • Usage:
      • Common for script execution in Debian
      • Bash used for interactive sessions
  6. Z Shell (zsh)

    • Advanced Features:
      • Enhanced command completion
      • Better option completion
      • Advanced pattern matching
      • Spell correction
      • Most powerful Linux shell
      • Some compatibility trade-offs

Shell Management

  1. Installation:

    • Multiple shells can coexist
    • Installation command example:
      sudo dnf install -y zsh
      
  2. Changing Default Shell:

    • Use the chsh command (interactively, or chsh -s /path/to/shell)
    • Requires full path specification
    • Example:
      chsh
      /bin/zsh
      
    • Logout/login required for changes
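A quick way to see which shells are installed and which one is currently the login shell; the chsh line is commented out because it changes the account and prompts for a password:

```shell
# Shells registered on this system (one full path per line).
cat /etc/shells 2>/dev/null || echo "/etc/shells not present on this system"

echo "Current login shell: $SHELL"
getent passwd "$(id -un)" | cut -d: -f7   # login shell from the account database

# Changing the default (full path required; takes effect at next login):
# chsh -s /bin/zsh
```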

POSIX Compliance

  • Standard for operating system compatibility
  • Defines base shell functionality
  • Important for:
    • Cross-platform compatibility
    • Script portability
    • System standardization

Practical Considerations

  1. Interactive Use:

    • Different shells can be tested
    • Features vary in importance by use case
  2. Script Writing:

    • Shell choice affects syntax
    • Built-in features vary
    • Compatibility considerations important
  3. Recommendations:

    • Bash recommended for beginners
    • Explore other shells later
    • Consider script compatibility needs
    • Match shell to specific use case


Linux Terminals

Terminal vs Shell - Core Concepts

  1. Terminal

    • It’s the interface where users type commands
    • Provides the physical/virtual window for input/output
    • Modern terminals are software emulations of historical hardware terminals
  2. Historical Context

    • Early Computing Era:
      • Used mainframe computers
      • Physical terminals with keyboards and screens
      • Direct cable connections to server rooms
      • Hardware-based terminal systems (TTY)

Modern Terminal Systems

  1. Text-Based Systems

    • TTY program provides command prompt
    • Runs without graphical interface
    • Direct command-line interface
  2. GUI-Based Systems

    • Terminal emulators run within graphical windows
    • Provides CLI inside GUI environment
    • Multiple terminal options available

Terminal Features

  • Common functionalities:
    • Scroll bars
    • Menu bars
    • Tabbing capabilities
    • Customizable color schemes
    • Copy/paste functionality
    • Text formatting options

Specific Terminal Examples

  1. GNOME Terminal

    • Default terminal in Enterprise Linux/GNOME desktop
    • Accessed through overview mode
    • Standard features:
      • Basic menu system
      • Profile management
      • Standard right-click options
  2. XFCE4 Terminal

    • Part of XFCE Linux desktop
    • Notable features:
      • Optional toolbar
      • Enhanced right-click menu
      • Additional customization options
    • Requires separate installation
    • Similar appearance to GNOME terminal

Important Distinctions

  1. Shell vs Terminal

    • Shell: Command interpreter communicating with kernel
    • Terminal: Input/output interface
    • Different shells can run in any terminal
  2. Customization

    • Terminals are highly customizable
    • Can install multiple terminal programs
    • User preference determines choice

Best Practices

  • Focus on command accuracy over terminal choice
  • Choose terminal based on needed features
  • Command functionality remains consistent across terminals
  • Terminal selection doesn’t affect command execution

Help on the command line

  1. Command Line Interface
  • Powerful interface to Linux OS
  • Often more efficient than graphical tools
  • Essential to know how to get information about commands
  2. Main Methods to Get Help

a) --help option

  • Built into most Linux commands
  • Format: command --help
  • Shows command syntax and options
  • Example: grep --help

b) help command

  • Alternative when --help isn’t available
  • Format: help command
  • Example: help cd

c) man command

  • Provides detailed manual pages
  • Format: man command
  • Example: man grep
  3. Command Structure
  • Basic format: command options arguments
  • Options are optional
  • Single letter options use single dash (-f)
  • Word options use double dash (--force)
  4. Man Page Structure
  • Name and short description
  • Command synopsis/usage
  • Detailed description
  • Options section
  • “See also” section (useful for related commands)
  • Examples (when available)
  5. Man Page Sections
  • Section 1: User commands and tools
  • Section 5: File formats and configurations
  • Can view section info: man section_number intro
  • Example: man 1 intro
  6. Searching Man Pages
  • Create database: sudo mandb
  • Search a specific command: man -f command (equivalent to whatis)
  • Example: man -f crontab shows entries in sections 1, 1P, 5
  • Comprehensive search: man -k keyword
  • Can specify section: man section_number command
  7. Best Practices
  • Use multiple help methods for comprehensive understanding
  • Create and maintain a list of learned commands
  • Check related commands in “See also” sections
  • Review both --help and man pages when learning new commands
  8. Special Notes
  • Quality of man pages varies by project
  • Some commands may have multiple man pages
  • POSIX-compliant commands are in section 1P
  • Manual database needs sudo privileges to update
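The help methods above combine into a typical lookup session; here grep serves as the example command, and the man calls are guarded because manual pages may be absent on minimal systems:

```shell
# Quick built-in summary: nearly every command supports --help.
grep --help | head -n 3

# Manual pages, when installed:
man -f grep 2>/dev/null || true                   # sections documenting grep (whatis)
man -k compress 2>/dev/null | head -n 3 || true   # keyword search (apropos)
```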

Understand the Linux Filesystem Hierarchy

Root Directory (/)

  • Top-level directory containing all files and folders
  • Everything in OS nested under this directory

Essential Directories:

  1. /bin - Essential user commands (ls, cat)
  2. /sbin - System administration utilities, boot/recovery tools
  3. /boot - Boot process files
  4. /dev - Device files and interfaces
  5. /etc - Configuration files (static, non-executable)
  6. /home - Regular users’ personal files
  7. /lib - System libraries and kernel modules
  8. /media - Mount points for removable media
  9. /mnt - Mount points for temporary file systems
  10. /root - Superuser’s home directory
  11. /opt - Optional software packages
  12. /proc - Virtual file system for system/process information
  13. /srv - Site-specific data storage
  14. /sys - Device/driver information
  15. /tmp - Global temporary files
  16. /run - System information since boot

/usr Directory:

  • Contains majority of OS commands/data (read-only)
  • Key subdirectories:
    • /usr/bin - Primary executable commands
    • /usr/local - Local software installation
    • /usr/sbin - Non-essential admin binaries
    • /usr/share - Read-only architecture-independent data
    • /usr/src - Source code for manual compilation

/var Directory:

  • For variable length files
  • Important subdirectories:
    • /var/cache - Application cached data
    • /var/log - System log files
    • /var/spool - Data awaiting processing
    • /var/mail - User mailbox files
    • /var/lib - Application state information

Key Points:

  • Follows File System Hierarchy Standard
  • Mounting required for block devices
  • Some directories (/proc, /sys) are virtual
  • Clear separation between system and user files
  • Distinct purposes for temporary storage (/tmp, /var)

Understand filesystem paths

  1. Tree Command
  • Helps visualize file system structure
  • Basic syntax: tree /etc
  • Options:
    • -f: prints the full path for each file/directory
    • -i: removes indent lines
    • Can show directories only, hidden files, ownership, file sizes
    • Output available in XML, JSON, HTML
  2. Find Command
  • Lists all files recursively with paths
  • Basic syntax: find /etc
  • Powerful pattern matching capabilities
  • Less visual than tree but more functional
  3. cd (Change Directory) Command
  • Used for navigation
  • Common shortcuts:
    • cd /: goes to root directory
    • cd ~: goes to home directory
    • cd ..: moves up one level
    • cd -: toggles between current and previous directory
  4. pwd Command
  • Prints Working Directory
  • Shows current location in file system
  5. Path Types

a) Absolute Path
    • Starts with forward slash (/)
    • Works from anywhere
    • Longer to type
    • Example: /usr/share/sounds/gnome/default

b) Relative Path

  • Doesn’t start with forward slash
  • Works relative to current location
  • Shorter to type
  • Example: sounds/gnome/default (when in /usr/share)
  6. Navigation Tips
  • Watch the prompt for your current location
  • Use pwd and ls frequently to understand your location
  • Use path shortcuts to reduce typing
  • For relative paths, if a directory is visible in ls output, no leading slash is needed
  • Can mix absolute and relative paths in copy/move operations
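The absolute/relative distinction in a runnable sketch; the sounds/gnome path from the example is recreated under a temporary directory so the session is safe to run anywhere:

```shell
# Recreate the example path in a scratch area.
base=$(mktemp -d)
mkdir -p "$base/sounds/gnome/default"

cd "$base/sounds/gnome/default"   # absolute path: works from any location
pwd

cd "$base"                        # back to the top of the scratch tree
cd sounds/gnome                   # relative path: resolved against $base
pwd                               # ends in .../sounds/gnome
cd ..                             # up one level
pwd                               # now .../sounds
```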

Create files and dirs

  1. Creating Directories
  • Use ‘mkdir’ command
  • Basic syntax: mkdir ~/direxercise
  • Verify with: ls -l ~
  • Check current path with ‘pwd’
  2. Creating Nested Directories
  • Direct attempt fails: mkdir parent/child
  • Use -p option for nested directories: mkdir -p parent/child
  • Verify with ‘tree’ command
  3. Brace Expansion
  • Create multiple directories simultaneously
  • Syntax: mkdir {dir1,dir2,dir3}
  • Works with most shell commands
  • It’s a shell function, not command-specific
  4. Creating Files

a) Using touch command
  • Creates empty files
  • Primary function: update timestamps
  • Syntax: touch emptyfile.txt
  • Verify with: ls -l

b) Using Redirection

  • Single redirect (>): overwrites existing file
  • Double redirect (>>): appends to file
  • Example: echo “a new line” >> textfile.txt
  • Verify content with ‘cat’ command
  5. Simple Text Editing
  • Use cat with redirection
  • Syntax: cat >> textfile.txt
  • Type content
  • Control+D to save
  • No overwrite confirmation when using redirects
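The commands above fit together as one short bash session (brace expansion is a bash feature), run in a scratch directory so nothing in $HOME is touched:

```shell
# Work in a throwaway directory.
cd "$(mktemp -d)"

mkdir -p parent/child               # -p creates missing parent directories
mkdir {dir1,dir2,dir3}              # brace expansion: three directories at once
touch emptyfile.txt                 # empty file (or timestamp update if it exists)

echo "first line"  > textfile.txt   # > creates or silently overwrites
echo "a new line" >> textfile.txt   # >> appends
cat textfile.txt                    # shows both lines
```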

Important Notes:

  • Always verify changes with ls or cat
  • Be careful with single redirect (>) as it overwrites
  • pwd helps confirm current directory
  • mkdir -p creates parent directories automatically

Metadata in Linux

Definition:

  • Metadata: Data that describes other data
  • Associated with files alongside their main content

File Attributes Include:

  • Filename
  • Size
  • Permissions
  • Ownership
  • Access time

Viewing File Information:

  1. LS Command (ls -l):

    • Shows file type:

      • (-) Regular file
      • (b) Block device
      • (c) Character device
      • (d) Directory
      • (l) Symbolic link
      • (n) Network file
      • (p) FIFO/pipe
      • (s) Socket file
    • Permissions structure:

      • User owner permissions (r/w/x)
      • Group owner permissions (r/w/x)
      • Others permissions (r/w/x)
    • Displays:

      • Number of hard links
      • File size (bytes)
      • Last modified date/time
      • Filename
  2. Hidden Files:

    • Identified by dot (.) prefix
    • Viewed using ls -la command
  3. File Command:

    • Shows file type based on content
    • Not influenced by file extension
    • Example: file /etc/passwd
  4. Stat Command Shows:

    • File name
    • Size (bytes)
    • File system blocks
    • Device number
    • Inode number
    • Hard links count
    • Permissions
    • User/Group ID numbers
    • SELinux context
    • Access times:
      • Last access
      • Last content modification
      • Last attribute modification
      • Creation (birth) time: often not populated on Linux filesystems

Important Notes:

  • Inodes store metadata and data block pointers
  • Filenames are stored in directory entries (the directory’s data), not in the file’s inode
  • Multiple drives can have same inode numbers
  • Unique file identification: combination of device number and inode number
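The metadata commands above, side by side; stat's -c format flag (GNU coreutils) pulls out single fields such as the inode number and link count:

```shell
cd "$(mktemp -d)"
echo "hello" > demo.txt

ls -li demo.txt        # inode, type+permissions, link count, owner, size, mtime
file demo.txt          # type judged from content, not the extension
stat demo.txt          # full metadata dump, including access/modify/change times

# Single fields via GNU stat format sequences:
stat -c 'inode=%i links=%h size=%s bytes' demo.txt
```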

File Management - Copy Command (CP)

Syntax:

  • Command: cp
  • Options: Single letters (-) or complete words (--)
  • Single letters can be combined (e.g., -PF)
  • Source path: Absolute or relative path to original file
  • Multiple files can be separated by spaces
  • File globbing/brace expansion supported
  • Destination path: Single destination only
  • Can mix relative and absolute paths

Practical Examples:

  1. Basic Directory Setup:

    • Create directory: mkdir ~/copy_exercise
    • Navigate: cd ~/copy_exercise
    • Shortcut: Use tab key for auto-completion
  2. File Creation and Copying:

    • Create empty file: touch file.txt
    • Basic copy: cp file.txt file-copy.txt
    • Copy to directory: cp file.txt archive/
    • Rename while copying: cp file.txt archive/file-copy.txt

Important Options:

  • -i: Interactive mode (warns before overwriting)
  • -R: Recursive copying (required for directories)
  • -a: Archive mode (preserves metadata)
  • -u: Update (copies only newer files)

Directory Copying:

  • Simple cp archive backup fails
  • Must use -R option: cp -R archive backup

Additional Notes:

  • Overwriting occurs without warning by default
  • Use man cp for detailed documentation
  • Can combine relative/absolute paths
  • If destination is directory without filename, original filename is kept
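The copy operations described above, runnable end to end in a scratch directory:

```shell
cd "$(mktemp -d)"
mkdir archive
touch file.txt

cp file.txt file-copy.txt           # copy in place under a new name
cp -i file.txt archive/             # -i prompts only if the target already exists
cp file.txt archive/file-copy.txt   # copy and rename in one step
cp -R archive backup                # directories require -R (recursive)
ls -R                               # show the resulting tree
```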

Move Command (mv) Syntax

Command Structure:

  • mv + options + source file(s) + destination path
  • Options format:
    • Single letters with hyphen (-)
    • Complete words with double hyphen (--)
    • Can combine single letters (e.g., -uf)

Paths:

  • Source: Can be absolute or relative
  • Multiple files: Separate with spaces
  • Supports file globbing and brace expansion
  • Destination: Only one path allowed
  • Can mix absolute and relative paths

File Movement Behavior:

  1. Between different filesystems:

    • Copies data blocks to new location
    • Deletes original data blocks
  2. Within same filesystem:

    • Instantaneous operation
    • Only updates filesystem metadata
    • File keeps same data blocks and index number
    • No physical data movement

File Renaming:

  • Can rename while moving by specifying new filename
  • Can rename in same directory by changing only filename
  • mv command serves dual purpose for moving and renaming

Directory Operations:

  • Can move entire directories without recursive option
  • Different from cp command which requires recursive option

Practical Example:

  1. Create test environment:

    mkdir ~/moveexercise
    cd ~/moveexercise
    touch file.txt
    mkdir filedir
    
  2. File operations:

    • Move file: mv file.txt filedir
    • Rename file: mv filedir/file.txt filedir/renamedfile.txt
    • Move directory: mv filedir/ newfiledir

Additional Information:

  • Full documentation available via man mv command
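The same-filesystem behavior described above can be verified with ls -i: the inode number is unchanged after a move, confirming that only metadata was rewritten:

```shell
cd "$(mktemp -d)"
touch file.txt
mkdir filedir

ls -i file.txt                       # note the inode number
mv file.txt filedir/                 # move into the directory
ls -i filedir/file.txt               # same inode: only metadata changed

mv filedir/file.txt renamedfile.txt  # mv also renames (move + new name)
mv filedir/ newfiledir               # directories move without any -R option
```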

File and Directory Removal Commands

  1. Main Commands:
  • rm: Remove files
  • rmdir: Remove directories
  • No recycle bin in Linux command line; deletions are permanent
  2. Exercise Setup:
  • Create directory: mkdir ~/rmexercise
  • Navigate: cd ~/rmexercise
  • Verify path: pwd
  3. Creating Test Files/Directories:
  • mkdir dir{1,2}
  • touch dir1/file1.txt
  • touch file{a,b,c,d}.txt
  • Verify structure using ‘tree’
  1. File Removal:
  • Basic removal: rm filea.txt
  • Interactive removal: rm -i fileb.txt
    • Requires ‘y’ confirmation
    • Recommended for safety
  1. Directory Removal:
  • Empty directories: rmdir dir2
  • rmdir limitations:
    • Only works on empty directories
    • Won’t work if hidden files present
    • Error if directory contains files
  1. Removing Non-empty Directories:
  • Use: rm -Ri dir1
    • -R: recursive
    • -i: interactive
  • Prompts for:
    • Directory descent
    • File deletion
    • Directory removal
  1. Safe Practices:
  • Avoid using broad wildcards (rm *)
  • Use precise pattern matching
    • Example: rm file[cd].txt
  • Always use rmdir first for directories
  • Consider enabling interactive mode by default
  • Check man rm for more information
  1. Warning:
  • Deletions are permanent
  • Recovery requires extra work
  • Be especially careful when using root privileges
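
The removal exercise above can be run as one session; the directory and file names follow the exercise, and the interactive prompt from rm -i expects a y/n answer:

```shell
mkdir -p ~/rmexercise && cd ~/rmexercise
mkdir dir{1,2}
touch dir1/file1.txt file{a,b,c,d}.txt

rm filea.txt              # basic removal, no confirmation
rm -i fileb.txt           # asks before deleting; answer y or n
rmdir dir2                # succeeds only because dir2 is empty
rm -ri dir1               # recursively prompts for each item in dir1
rm file[cd].txt           # precise pattern beats a broad 'rm *'
```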

Types of Links:

  1. Hard Links

    • Points to same data blocks as original file
    • Shares same inode number
    • Takes virtually no disk space
    • Can only link files (not directories)
    • Cannot link across file systems
    • Both original and link must be on same partition
    • Transparent to OS and applications
    • Deleting one link doesn’t break others
  2. Symbolic Links (Symlinks)

    • Points to another file/directory
    • Can link across file systems
    • Can link to directories
    • Takes minimal disk space
    • Breaks if target is deleted
    • Easily identifiable in ls -l
    • Not completely seamless with all commands

Commands:

  • Create hard link: ln [target] [link_name]
  • Create symbolic link: ln -s [target] [link_name]
  • View file details: ls -l
  • Check file statistics: stat [filename]

Important Notes:

  • Hard links increase the link count shown in ls -l output (second column)
  • Symbolic links display different permissions and size from original
  • Can verify hard links using stat command (same inode number)
  • When deleting symbolic links to directories, remove trailing slash
  • Tab completion adds forward slash to directory links
  • Symbolic links turn red when broken (target deleted)

Example Usage:

mkdir ~/lnexercise
cd ~/lnexercise
touch file.txt
mkdir archive
ln file.txt filelink.txt          # Hard link
ln -s file.txt filesymlink.txt    # Symbolic link
ln -s archive/ dirlink           # Directory symbolic link
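
Continuing the exercise above, stat and readlink make the difference between the two link types visible (the %i, %h, and %n format fields print inode, link count, and name):

```shell
cd ~/lnexercise
stat -c '%i %h %n' file.txt filelink.txt   # same inode number, link count 2
stat -c '%i %h %n' filesymlink.txt         # its own inode, link count 1
readlink filesymlink.txt                   # prints the target: file.txt
```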

Linux Command Streams

  1. Shell Stream Creation:

    • Three streams created when command runs
    • Standard input (stdin) - File descriptor 0
    • Standard output (stdout) - File descriptor 1
    • Standard error (stderr) - File descriptor 2
  2. Default Behavior:

    • All output (stdout & stderr) goes to terminal/screen
    • Outputs can be split and handled separately
    • Stdout shows successful command output
    • Stderr displays error messages
  3. Output Handling Methods: a) Pipes:

    • Channels output from one command to another’s input
    • Similar to water pipe concept
    • Connects stdout of first command to stdin of second

    b) Redirects:

    • Sends command output to files
    • Can redirect stdout and stderr to:
      • Same file
      • Different files
  4. Standard Input (stdin):

    • Default input source is keyboard
    • Can receive input from:
      • Other commands via pipes
      • Files via redirection
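
The split between stdout and stderr can be demonstrated by listing one path that exists and one that doesn't (file names here are examples):

```shell
# stdout goes to out.txt, stderr goes to err.txt
ls /etc/passwd /nonexistent > out.txt 2> err.txt
cat out.txt   # the successful listing from stdout
cat err.txt   # the "No such file or directory" message from stderr
```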

PIPES IN UNIX/LINUX

  1. Basic Concept
  • Pipe: Method of communication between programs
  • Primary use: Sending standard output of one command to standard input of another
  • Default: Only standard output goes through pipe, not standard error
  2. Unnamed Pipes (|)
  • Example 1: grep tcp /etc/services | less
    • Searches for 'tcp' and displays output page by page
  • Example 2: grep tcp /etc/services | awk '{print $1}' | sort | less
    • Multiple pipe usage
    • Searches for tcp
    • Prints first column
    • Sorts output
    • Displays page by page
  • Limitations:
    • Temporary existence
    • Only works between commands in same shell
    • Disappears after use
  3. Named Pipes (FIFO)
  • Also called FIFO (First In First Out)
  • Features:
    • Exists in file system
    • Acts like a file
    • Allows inter-process communication
    • Works across different terminals/users
  • Creation: mkfifo named_pipe
  • Usage:
    • Writing: echo "Hi" > named_pipe
    • Reading: cat named_pipe
  • Characteristics:
    • Blocks I/O until both a writer and a reader are attached
    • Appears at a physical location in the filesystem
    • Identified by 'p' in ls -l output
  • Deletion: Like regular files
  4. Key Differences
  • Unnamed pipes: Temporary, same shell only
  • Named pipes: Persistent, cross-terminal communication possible
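
A minimal named-pipe session, following the steps above; the writer is backgrounded because it blocks until a reader attaches:

```shell
mkfifo mypipe
ls -l mypipe            # first character 'p' marks a named pipe
echo "Hi" > mypipe &    # writer blocks until a reader attaches, so background it
cat mypipe              # reads and prints: Hi
rm mypipe               # deleted like a regular file
```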

Unix File Redirects

  1. Basic Concepts:
  • Unix/Linux treats everything as files
  • File redirects allow output manipulation
  • STDOUT and STDERR normally go to screen
  • STDOUT = File descriptor 1 (default)
  • STDERR = File descriptor 2
  2. Output Redirection: a) Basic Syntax:
  • Command > filename (saves STDOUT to file)
  • Command 2> filename (saves STDERR to file)
  • Command > file1 2> file2 (separate files for STDOUT/STDERR)
  • Command &> filename (both STDOUT/STDERR to same file)

b) Appending:

  • Use >> for appending instead of overwriting
  • Works with all redirect variations
  • Useful for log files
  3. Input Redirection:
  • Uses < symbol
  • Example: mysql < db.sql
  • Can combine with output redirects
  • Useful when commands don’t directly accept files
  4. Tee Command:
  • Splits output between file and screen
  • Basic: command | tee filename
  • Append option: command | tee -a filename
  5. Advanced Redirects: a) Piping STDERR:
  • Default: only STDOUT goes through pipes
  • Merge STDERR to STDOUT: 2>&1
  • Modern syntax: command |& grep (pipes both STDOUT/STDERR)

b) Selective Piping:

  • Pipe only STDERR: command 2>&1 1>/dev/null | next-command
  • Can redirect STDOUT to /dev/null to discard it
  6. Key Points:
  • Modern Bash syntax is simpler
  • Older syntax still common in tutorials
  • Different methods for different needs
  • Can combine multiple redirects
  • Useful for debugging and logging
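
The main redirect forms above, shown side by side in a scratch directory (file names are examples; /etc/passwd stands in for any readable file):

```shell
ls /etc/passwd /nope >  both.txt 2>&1        # stdout and stderr to one file (classic syntax)
ls /etc/passwd /nope &>> both.txt            # append both streams (Bash shorthand)
ls /etc/passwd /nope 2>&1 | tee seen.txt     # pipe both streams and keep a copy on screen
ls /etc/passwd /nope 2> /dev/null            # discard errors, keep normal output
```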

Using the locate Command in Linux

  1. Basic Information:
  • locate command uses a database created by updatedb
  • Database-driven, making searches very fast
  • Database typically updates once daily via system service
  • Only finds files listed in database
  2. Basic Usage:
  • Basic syntax: locate filename
  • Example: locate bzip2
  • Count results with -c option (e.g., locate -c bzip2)
  • Multiple item search: locate bzip2 man
  • AND search: use -A option (e.g., locate -A bzip2 man)
  3. Search Options:
  • Case sensitivity:
    • Searches are case sensitive by default
    • Use -i for case-insensitive search
    • Example: locate -i high
  4. Pattern Matching:
  • Default: Uses wildcards (*)
  • Example: ls /etc/*.conf
  • Supports basic and extended regular expressions
  • Regular Expression examples:
    • Basic: locate --regexp '^/usr.*pixmaps.*jpg$'
    • Extended: locate --regex '^/usr.*(pixmaps|backgrounds).*jpg$'
  5. Database Statistics:
  • View statistics: locate -S
  • Shows total number of items in database
  • Example showed over 4 million items
  6. Updating Database:
  • Manual update required for recent files
  • Command: sudo updatedb
  • Requires elevated privileges
  • Updates database with recently created files
  7. Key Features:
  • Fast searching due to database
  • Flexible pattern matching
  • Multiple search options
  • Regular expression support
  • Case sensitive/insensitive options

Note: Database must be updated to find recent files as it’s not real-time.

Linux ‘find’ Command

Basic Usage:

  • Requires search path (default: current directory)
  • Use ‘/’ to search entire filesystem (requires elevated privileges)
  • Searches are always up-to-date (not database-driven)

Search Parameters:

  1. Name Search:

    • -name: Exact match
    • Use asterisks (*) for patterns
    • -iname: Case-insensitive search
    • -not or ! for inverted search
    • -regex: Supports regular expressions
  2. File Types (-type):

    • f: Regular file
    • d: Directory
    • l: Symbolic link
    • c: Character device
    • b: Block device
    • Multiple types using commas (e.g., f,d)
  3. Size-based Search (-size):

    • Units: c (bytes), k (KB), M (MB), G (GB), b (512-byte blocks)
    • Use +/- for greater/less than
    • -empty for empty files
  4. Time-based Search:

    • Access time: -amin (minutes), -atime (days)
    • Change time: -cmin (minutes), -ctime (days)
    • Modification time: -mmin (minutes), -mtime (days)
    • Use +/- for greater/less than
  5. Ownership and Permissions:

    • -user/-group: Search by user/group name
    • -uid/-gid: Search by user/group ID
    • -perm: Search by permissions
      • Exact permissions: octal or symbolic mode
      • Prefix with '-': match all specified permissions
      • Prefix with '/': match any specified permissions
  6. Security:

    • -context: Search by SELinux security context

Actions:

  • Can execute commands on found files
  • Uses {} as placeholder for found items
  • Common actions:
    • Print formatted filename
    • Delete files
    • List metadata
    • Ignore files
    • Execute custom commands

Example Use Cases:

  • Finding large files to free disk space
  • Locating recently modified files
  • Identifying files with insecure permissions
  • Moving old files to backup locations
  • Automated file cleanup

Note: Man pages contain additional features and options not covered in this overview.
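
A few hedged examples of the search parameters and actions above ({} is find's placeholder for each match; paths such as ~/Downloads are illustrative):

```shell
find ~/Downloads -type f -size +100M              # files larger than 100 MB
find /tmp -type f -mtime +30 -delete              # delete files not modified in 30+ days
find . -type f -name '*.log' -exec gzip {} \;     # run a command on every match
find /var/log -type f -perm /o+w -ls              # world-writable files, with metadata
```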

3. Processing Text Files

Nano Text Editor in Linux

Basic Operations:

  • Opening file: nano -u filename.txt (-u enables undo feature)
  • Help: Ctrl + G
  • Exit: Ctrl + X
  • Save: Ctrl + O (write out)

Interface Elements:

  • Top: Nano version
  • Center: File name
  • Bottom: Function menu

Text Manipulation:

  1. Cut, Copy, Paste:

    • Cut: Ctrl + K
    • Paste: Ctrl + U (uncut)
    • Mark text: Ctrl + 6
    • Copy marked text: Alt + 6
  2. Insert File:

    • Command: Ctrl + R
    • Example: Can insert files like /etc/group
  3. Undo/Redo:

    • Undo: Alt + U
    • Redo: Alt + E (Only available with -u option)

Search and Replace:

  • Search: Ctrl + W ("where")
  • Replace: Alt + R
  • Replace options: Single instance or all (press 'A' for all)

Line Operations:

  • Toggle line numbers: Alt + N; constant line/column display: Alt + C
  • Go to specific line: Ctrl + Shift + hyphen (Ctrl + _), then enter line number

Key Characteristics:

  • Simple editor for beginners
  • Keyboard shortcuts may not be intuitive
  • All keys except Control and Meta sequences enter text
  • Cut text stacks up until pasted
  • Popular despite shortcut complexity

Vim Editor in Linux

Basic Overview:

  • Vim (Vi IMproved) is the default editor on most Linux systems
  • Known for power and efficiency
  • Uses one-letter shortcuts for commands
  • Started by typing vim filename.txt in a terminal

Modes:

  1. Normal Mode
  • Default mode when starting Vim
  • Characters typed are commands
  • Bottom line blank
  • Return to normal mode using Escape key
  2. Insert Mode
  • Enter using 'i' or the Insert key
  • For typing text
  • "INSERT" shows at the bottom line

Navigation Commands (Normal Mode):

  • Arrow keys or h,j,k,l for cursor movement
  • h (left), j (down), k (up), l (right)
  • w: move forward by words
  • b: move backward by words
  • ^: beginning of line
  • $: end of line
  • Shift+H: top of screen
  • Shift+M: middle of screen
  • Shift+L: bottom of screen
  • Count operators work (e.g., 6l moves right 6 characters)

Editing Commands:

  • dl: delete character under cursor
  • dd: delete entire line
  • u: undo
  • Ctrl+r: redo

File Operations (Colon Commands):

  • :w - write/save file
  • :w filename.txt - save as new file
  • :q - quit
  • :q! - force quit without saving
  • :wq or :x - save and quit
  • :wq! - force save and quit

Interface:

  • Minimal interface
  • Status line at bottom shows:
    • New file status
    • Line count
    • Current mode
    • Command feedback

Editing Text in Vim

Basic Commands:

  • Start Vim: vim filename.txt
  • Enter insert mode: Press 'i'
  • Return to normal mode: Press Escape

Copy (Yank) and Put:

  • Copy letter: yl
  • Copy word: yw
  • Copy line: yy
  • Paste: p

Delete (Cut) and Put:

  • Delete letter: dl
  • Delete word: dw
  • Delete line: dd
  • Deleted text goes to clipboard
  • cc: Delete line and enter insert mode (change)

Count Operators:

  • Can be used with commands
  • Example: 5dd (delete 5 lines)
  • Example: 5yw (yank 5 words)

Searching:

  • Forward search: /searchterm
  • Backward search: ?searchterm
  • Navigate results:
    • n (forward)
    • N (backward)
  • Turn off highlight: :nohl

Search and Replace:

  • Format: :%s/searchtext/replacetext/ (append g to replace every occurrence on a line)
  • Used in command line mode

Working with Multiple Files:

  • Open multiple files: vim file1.txt file2.txt file3.txt
  • Navigate files:
    • :next (next file)
    • :prev (previous file)
    • :wnext (save and next)
    • :wprev (save and previous)
    • :2next (skip two files forward)
  • View open files: :args
  • Can copy/paste between files

Additional Notes:

  • Commands performed in normal mode
  • Vim focuses on efficiency
  • Similar search function (/) works in Firefox
  • Changes must be saved before switching files
  • Yanked text can be used across multiple files

Compound Commands in Bash | grep

  1. Simple vs Compound Commands
  • Simple commands: Basic commands like ls or grep root /etc/passwd
  • Compound commands: Multiple commands grouped and executed in succession
  2. Command Execution Methods

a) Using Semicolon (;)

  • Format: command1 ; command2
  • Second command runs regardless of first command’s success
  • Example: echo "hi" ; echo "there"

b) Using Double Ampersand (&&)

  • Format: command1 && command2
  • Second command runs only if first command succeeds
  • Example: mkdir newfolder && cd newfolder

c) Using Double Pipe (||)

  • Format: command1 || command2
  • Second command runs only if first command fails
  • Example: mkdir newfolder || echo "directory creation failed"
  3. Combining Operators
  • Can combine && and ||
  • Example: mkdir newfolder2 && cd newfolder2 || echo "Directory creation failed"
  4. Curly Braces vs Parentheses

Curly Braces { }:

  • Executes commands in current shell
  • Requires spaces after { and before }
  • Requires semicolon before closing brace
  • Commands processed as one unit
  • Useful for redirecting output of multiple commands
  • Example: { echo "Hi" ; echo "there" ; }

Parentheses ( ):

  • Creates and executes commands in subshell
  • Different variable scope than main shell
  • Example: (a=10 ; echo "in=$a")
  5. Variable Scope Example:
a=0
(a=10 ; echo "in=$a")
echo "out=$a"
  • Variables in subshell don’t affect main shell variables
  • Use parentheses for separate variable space
  • Use curly braces for command grouping

Key Differences:

  • Curly braces: Same shell, shared variables
  • Parentheses: Subshell, isolated variables
  • Both can be used for output redirection
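
The grouping and scope rules above in one runnable sketch (file name is an example):

```shell
# Curly braces: group output from several commands into one redirect (current shell)
{ echo "Hi" ; echo "there" ; } > greeting.txt

# Parentheses: a subshell gets its own copy of the variables
a=0
(a=10 ; echo "in=$a")    # prints in=10
echo "out=$a"            # prints out=0 - the subshell change did not escape
```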

sed (Stream Editor)

Basic Characteristics:

  • Stream editor that edits text as it’s piped through
  • Uses basic regular expressions by default
  • Can use extended regular expressions (-E option)

Main Modes:

  1. Print: Displays output based on patterns
  2. Delete: Removes matching text
  3. Substitute: Replaces patterns with other patterns

Input Methods:

  1. Piping data:

    • Command STDOUT → sed STDIN → Output
    • Must redirect to new file to save changes
    • Cannot redirect to original file
  2. Specifying input file:

    • Direct file reading
    • Can use -i option for in-place changes
    • Caution advised with -i option

Syntax and Operations:

  1. Printing:

    • Use -n option to suppress automatic printing (only lines matched with p are shown)
    • Pattern goes between slashes
    • Supports globs, character sets, classes, regex
  2. Substitution:

    • Format: 's/pattern1/pattern2/[options]'
    • 'g' option for global substitution
    • Without 'g', only the first match on each line is replaced
  3. Addresses and Ranges:

    • Can specify line numbers/ranges
    • Supports negation
    • Can match every nth line
    • Can be applied to substitutions

Delimiters:

  • Forward slash (/) is default
  • Can use alternative delimiters (: or #) for clarity
  • Useful when dealing with file paths
  • Must be consistent and avoid regex special characters

Back References:

  • & symbol replaces with entire match
  • Can remember up to 9 pattern groups
  • Groups defined using parentheses
  • Referenced using \1 through \9

Best Practices:

  • Test commands without -i first
  • Be careful with syntax
  • Use alternative delimiters when dealing with paths
  • Consider readability when choosing patterns

Additional Features:

  • Can format text using back references
  • Supports pattern grouping
  • Multiple operations possible
  • Extensive capabilities documented in One-Liners page
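
The three sed modes, alternative delimiters, and back references can be sketched against a small passwd-style sample (file names and contents are examples):

```shell
printf 'root:x:0:0\nbin:x:1:1\n' > users.txt
sed -n '/root/p' users.txt                     # print: only lines matching the pattern
sed '/bin/d' users.txt                         # delete: drop matching lines
sed 's/:/,/g' users.txt                        # substitute: global replace on every line
echo /usr/bin/vim | sed 's#/usr/bin#/usr/local/bin#'   # '#' delimiter avoids escaping slashes
sed -E 's/^([^:]+):.*/user=\1/' users.txt      # \1 refers back to the grouped pattern
```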

4. Boot Process

Linux Boot Process

Boot Stages:

  1. Firmware Stage

    • Runs POST (Power-On Self-Test)
    • Uses either BIOS (older) or UEFI (newer)
  2. Bootloader Stage

    • BIOS/UEFI executes bootloader
    • Uses GRUB2 (Grand Unified Bootloader v2)
    • Reads configuration files:
      • BIOS: /boot/grub2/grub.cfg
      • UEFI: varies by distribution (typically under /boot/efi/EFI/)
  3. Kernel Stage

    • Loads RAM disk (temporary root filesystem)
    • Contains:
      • Kernel modules
      • Drivers
      • Installation automation files (e.g., kickstart)
    • Later unmounts RAM disk
    • Mounts actual root filesystem
    • Initiates initialization stage
  4. Initialization Stage

    • Runs the first userspace process (PID 1), often called the grandfather process
    • Evolution of init systems:
      • SysV init (oldest)
      • Upstart
      • systemd (current)
    • Systemd:
      • Starts all system services
      • Manages targets (similar to runlevels)
      • Default: graphical target
      • Different targets for various purposes (e.g., system rescue)
    • Final stage: Login shell or GUI

Linux Bootloaders Evolution

  1. LILO (Linux Loader)
  • Original Linux bootloader
  • Configuration in /etc/lilo.conf
  • Required running the lilo command after config changes
  • Used Linux device names
  • Development ended in 2015
  2. GRUB Legacy (GRUB 0.97)
  • GNU project, not Linux-specific
  • Uses different device naming (starts from 0)
  • Static configuration in /etc
  • One-time MBR binary data writing
  • Limited support for complex drive setups
  3. GRUB 2
  • Current standard bootloader for most Linux distributions
  • Automatic configuration through scripting
  • Modular design
  • Supports:
    • Complex drive setups (RAID, logical volumes)
    • New file systems
    • Built-in console
    • Live boot environments
    • ISO images
  • More complex but more resilient
  4. ISOLINUX
  • Boots Linux from ISO formatted optical disks
  • Works with USB thumb drives containing ISO images
  • SYSLINUX derivative
  5. SYSLINUX
  • Boots Linux from FAT formatted storage
  • Compatible with:
    • Floppy disks
    • USB thumb drives
  6. PXELINUX
  • SYSLINUX derivative
  • Network-based booting
  • Uses Intel PXE (Pre-boot Execution Environment)
  • Enables remote booting without local media

Key Differences:

  • LILO: Required manual updates
  • GRUB Legacy: Limited functionality
  • GRUB 2: Most advanced, automatic configuration
  • ISOLINUX/SYSLINUX: Media-specific solutions
  • PXELINUX: Network-based solution

GRUB 2 Boot Loader

  1. Overview
  • Most popular Linux boot loader
  • Key features:
    • Scripting support
    • Dynamic module loading
    • Custom menus and themes
    • UUID for partitions
    • Recovery console
  2. Boot Process
  • Purpose: Start kernel
  • Kernel loads device drivers from RAM disk
  • Starts rest of OS
  • Components stored in /boot
  3. Key Files in /boot
  • vmlinuz: Compressed bootable kernels
  • RAM disk files:
    • initrd (kernels before 2.6)
    • initramfs (kernel 2.6 and later)
  • initramfs/rescue: Larger file with rescue tools
  • Creation tools:
    • mk initrd (older systems)
    • draca (newer/Enterprise Linux)
  4. GRUB Configuration A. Main Config File
  • Located: /boot/grub2/grub.cfg
  • Warning: Don’t edit directly
  • Changes overwritten during updates

B. Proper Configuration Locations

  • /etc/default/grub (menu timeout, kernel options)
  • /etc/grub.d/ directory:
    • 00_*: Reserved files
    • 10_*: Boot entries
    • 20_*: Third-party apps
    • 30_os-prober: OS probe script
    • Various other numbered scripts (40_custom, 41_custom)
  5. GRUB Management Commands
  • grub2-install: Initial installation/reinstallation
  • grub2-mkconfig: Recreate boot files
  • grub2-reboot: Temporary kernel change
  • grub2-set-default: Permanent kernel change
  6. Important Notes
  • Must run grub2-mkconfig after config changes
  • Can select different kernel at boot screen
  • First-time installation requires grub2-install
  • Changes to default kernel occur with new installations

Rescue a System

Kernel Panic Troubleshooting:

  • Common causes:
    • Custom drivers not installed in new kernel
    • Driver incompatibility with hardware

Two Troubleshooting Methods:

  1. Booting Different Kernel

    • Force reboot and select different kernel at GRUB prompt
    • If successful, set older kernel as default using grub2-set-default
    • GRUB counts from zero (newest kernel is 0)
  2. Systemd Emergency Target

    • Requires root password
    • Doesn’t load drivers, services, GUI, or mount root filesystem
    • Access: Edit kernel line, add systemd.unit=emergency.target
    • Useful commands in emergency mode:
      • journalctl -xb (view journal messages)
      • dmesg (debug messages)
      • dmidecode (BIOS/hardware info)
      • mount (view mounted volumes)
    • Remount root as read-write: mount -o remount,rw /
    • Exit with Ctrl+D

Password Recovery:

  1. Edit kernel line at GRUB
  2. Add init=/bin/sh
  3. Remount root filesystem as read-write
  4. Use passwd command to reset password
  5. Create /.autorelabel file for SELinux
  6. Force reboot with full path and -f option

Systemd Targets:

  • View default: systemctl get-default
  • List all targets: systemctl --type=target --all
  • Common targets:
    • graphical.target
    • multi-user.target
    • emergency.target
  • Change target: systemctl set-default [target-name]

5. Maintaining Processes and System Services

PROGRAMS & PROCESSES

  1. Program vs Process

    • Program: Executable file stored on disk (passive entity)
    • Process: Active instance of program being executed in memory
  2. Process Characteristics

    • Allocated system resources:
      • Memory
      • CPU time
      • Process ID (PID)
    • Each process has unique PID
    • Has a parent process
  3. Process Hierarchy

    • systemd:
      • PID 1
      • Started by kernel
      • Only process directly started by kernel
      • Called the "grandfather process"
    • Example:
      • sshd process (PID 1294)
      • Parent: systemd (PID 1)
  4. Parent-Child Relationship

    • Processes can create child processes
    • Hierarchical organization (like files/folders)
    • Child processes nested under parent processes
  5. Process Management

    • PIDs used by kernel for process control
    • Administrators can use PID to:
      • Set task priority
      • Change task priority
      • End tasks
  6. Process Termination

    • Reports back to parent process
    • Resources are freed
    • PID is removed

Linux Process Monitoring with the ps Command

  1. Basic ps Command
  • By default, shows processes run by the executing user
  • Displays: process ID, terminal, execution time, command
  2. ps Command Syntax Types
  • UNIX: Single dash options (-)
  • GNU: Double dash options (–)
  • BSD: No dashes (not focused on in this course)
  3. Common ps Options
  • -e: Shows every process
  • -H: Displays process hierarchy
  • -f: Shows full information (username, parent process ID, CPU usage, start time, commands)
  • -F: Adds memory allocation and CPU information
  • -L: Long format (17 columns of information)
  4. Customization Options
  • --format: Customize displayed columns. Example: ps -e --format uid,pid,ppid,%cpu,cmd
  • --sort: Sort by specific fields. Example: --sort %cpu (least to greatest); use a hyphen for reverse sort: --sort -%cpu
  5. Filtering Options
  • -U: Select by real user ID or name
  • -u: Select by effective user ID or name
  • -C: Select by command name
  6. Useful ps Commands a) CPU Usage Monitoring: ps -e --format uid,pid,tty,%cpu,cmd --sort -%cpu

b) Memory Usage Monitoring: ps -e --format uid,pid,tty,%mem,cmd --sort -%mem

c) User Memory Usage Calculator (sums RSS in MB): ps -u username -o rss | awk '{sum+=$1} END {print sum/1024}'

  7. Additional Features
  • Can create aliases for frequently used commands
  • Can create shell scripts with user arguments
  • Can specify processes by:
    • Users
    • Groups
    • Terminals
    • Sessions
    • Process ID lists
    • Parent process ID lists
    • Command lists

Note: Man pages contain comprehensive documentation for more ps options.
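
The "shell scripts with user arguments" idea above can be sketched by wrapping the memory-usage one-liner in a small script (script name memuse.sh is an example):

```shell
cat > memuse.sh <<'EOF'
#!/bin/bash
# Sum resident memory (RSS, in KB) of all processes owned by user $1, print MB
ps -u "$1" -o rss= | awk '{sum+=$1} END {print sum/1024 " MB"}'
EOF
chmod +x memuse.sh
./memuse.sh "$USER"    # e.g. prints something like "123.4 MB"
```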

Top Command in Linux

  1. Basic Overview:
  • Top command shows real-time process information
  • Automatically updates display
  • Shows system uptime, load average, processes, CPU & memory usage
  2. Summary Area Modifications:
  • l: toggle load average/uptime line
  • 1: display all CPU cores usage
  • t: toggle between task/CPU states
  • m: toggle memory display options
  3. Field Modifications:
  • f: access field menu
  • Space Bar: select/deselect fields
  • s: change sort field
  • Arrow keys: move fields
  • q: quit field menu
  4. Process Display Options:
  • c: toggle between command name/line
  • U (uppercase): filter by username
  • u (lowercase): filter by user ID
  • Arrow keys/Page Up/Down: scroll through tasks
  5. Process Management:
  • k: kill process
    • Requires process ID
    • SIGTERM (15): default, friendly kill
    • SIGKILL (9): forcible process removal
  • r: renice command
    • Changes process priority
    • Higher nice value = lower priority
    • Lower nice value = higher priority
  6. Quick Sort Commands:
  • M: sort by memory usage
  • P: sort by CPU usage
  • T: sort by running time
  • N: sort by process ID
  7. htop:
  • Enhanced version of top
  • Additional features:
    • Color-coded interface
    • Mouse support
    • Function key shortcuts
    • Better CPU core visualization
    • Requires separate installation
    • More user-friendly interface
  • Command: sudo dnf install -y htop
  8. Exit Commands:
  • q: quit top/htop
  • Escape: cancel operations

Process Signals and Priority Management

  1. Process Signals
  • Common signals include terminate and kill
  • 64 different signals available (view with kill -l)
  • Common signals:
    • SIGHUP/SIGUSR1 (non-destructive)
    • SIGTERM/SIGKILL (termination)
  2. Process Management Commands
  • pgrep: Pattern matching for processes
  • pidof: Shows multiple process IDs
  • pstree -p: Shows process tree with IDs
  • kill: Sends signals to specific process ID
  • killall: Terminates all processes with same name
  3. Signal Example with dd Command
  • dd command used for disk duplication
  • USR1 signal forces output display
  • Command syntax: kill -USR1 $(pidof dd)
  • Newer dd versions have status=progress option
  4. Nice System (Process Priority)
  • Range:
    • Regular users: 0 to 19
    • Privileged users: -20 to -1
  • Default level: 0
  • Higher nice number = Lower CPU priority
  • Lower nice number = Higher CPU priority
  5. Process Priority Management
  • renice: Changes nice level of running process
  • nice: Starts process with specific nice level
  • Priority level = Nice level + 20
  • Only root can set negative nice values
  • CPU sharing depends on nice levels
    • Same nice level: Equal CPU sharing
    • Higher nice level: Less CPU time
    • Lower nice level: More CPU time
  6. Monitoring
  • top command shows:
    • Nice level (4th column)
    • Priority level (3rd column)
    • CPU usage percentage
    • Process details
  7. Background Processing
  • Use & at end of command for asynchronous execution
  • Allows multiple processes to run simultaneously

Background & Foreground Process Management

  1. Default Program Behavior
  • Programs usually run as interactive tasks in foreground
  • Output visible on screen
  • Can terminate with Ctrl+C
  2. Background Process Benefits
  • Allows multitasking
  • Can check on tasks by bringing to foreground
  • Managed through bash commands
  3. Monitoring Processes
  • Use the 'watch' command with 'ps' for real-time monitoring
  • Command syntax: watch 'ps -C dd --format pid,cmd,%cpu'
  • Updates every 2 seconds by default
  4. Process Control Commands a) Moving to Background:
  • Use Ctrl+Z to stop (suspend) the foreground process
  • Use 'bg [job_number]' to resume it in the background
  • Can append '&' to a command to start it directly in the background

b) Foreground Management:

  • 'fg' brings a background process to the foreground
  • Can interact with process once in foreground

c) Job Management:

  • 'jobs' command shows all background jobs
  • Jobs are identified by job spec numbers in brackets
  • Can specify job numbers with bg/fg commands
  5. Example Using dd Command
  • Command: dd if=/dev/zero of=/dev/null
  • Uses CPU intensively for demonstration
  • Can be monitored in separate terminal
  6. Termination Methods
  • Ctrl+C when in foreground
  • 'killall dd' from anywhere
  • Bring to foreground (fg) then Ctrl+C
  7. Key Benefits
  • Better multitasking capability
  • CPU resource management
  • Process suspension and resumption as needed
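
The job-control steps above can be sketched as one session (best run interactively in Bash, since Ctrl+Z is a keystroke, not a command):

```shell
dd if=/dev/zero of=/dev/null &   # '&' starts the command directly in the background
jobs                             # lists background jobs with their job specs, e.g. [1]
fg                               # bring the job to the foreground (Ctrl+Z would suspend it)
# after Ctrl+Z: 'bg' resumes the stopped job in the background
killall dd                       # terminate it by name from any terminal
```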

System Services in Linux

  1. Definition & Background
  • System services: Background processes handling specific requests
  • In Linux, called a "daemon" (D-A-E-M-O-N)
  • Origin: MIT programmers, named after Maxwell's demon
  • Etymology: Greek mythology - a good spirit/angel
  • Pronunciation: "dee-mun" (original) or "day-mun"
  • Common naming convention: Services often end with 'd' (e.g., httpd, smbd)
  2. Boot Process
  • Kernel loaded by boot loader
  • Kernel starts super service
  • Super service initiates all other processes
  3. System V Init (Legacy System) Features:
  • Originated from AT&T’s Unix System V (1980s)
  • Used runlevels for different configurations
  • Allowed switching between configurations Limitations:
  • Synchronous service starting
  • Slow shell script execution
  • No dependency system
  • Network interruptions during service restarts
  4. Modern Replacements

a) Upstart:

  • Developed at Canonical (makers of Ubuntu)
  • Features: Asynchronous processing, process monitoring
  • Used by major distributions (Ubuntu, SUSE, Red Hat)
  • Currently in maintenance mode

b) Systemd (Current Standard):

  • Used in Enterprise Linux 7/8
  • Features:
    • Unified service configurations across distributions
    • Complete suite of system components
    • Init system for boot/process management
    • Includes device, login, network management
    • Event logging
  • Adopted by most major distributions since 2015
  5. Evolution Context
  • Traditional Init limited to boot/shutdown
  • Modern systems need dynamic management (USB, WiFi)
  • Systemd emerged as comprehensive solution

SystemD Service Management in Enterprise Linux

  1. systemd Overview:
  • Manages multiple objects: devices, mounted volumes, network sockets, system timers, targets
  • Uses the systemctl command for management
  • Objects are called "units", with corresponding unit files containing configurations
  2. Key systemctl Commands: a) list-units:
  • Shows units in systemd memory
  • Displays active, exited, or failed units
  • Represents currently/previously running services

b) list-unit-files:

  • Lists installed unit files and enabled states
  • Shows autostart configuration
  • Displays masked services

c) status:

  • Shows specified units’ status
  • Displays system status if no unit specified
  • Can show unit ownership of processes
  1. Service States:
  • Enabled: Starts automatically at boot
  • Disabled: Won’t start automatically
  • Static: Cannot be enabled, may be dependencies
  • Masked: Cannot start or be enabled
  1. List-units Command Output (-t service):
  • Shows 5 columns:
    • Service name
    • Unit file loaded status
    • Active state
    • Detailed state (sub column)
    • Service description
  • Can filter by state (e.g., –state running)
  1. Unit Files:
  • Contains service configuration
  • Stored on disk
  • Viewable using ‘systemctl cat [service]’
  • Includes:
    • Service dependencies
    • Execution commands
    • Failure handling
  1. Service Status:
  • Accessed via ‘systemctl status [service]’
  • Shows:
    • Service state
    • Start time
    • Process ID
    • Log messages

Note: Service extension (.service) is optional in commands as it’s the default unit type.
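For reference, unit files use an INI-style layout. A minimal, hypothetical service unit (the name and path are invented) showing the dependency, execution-command, and failure-handling settings that unit files contain:

```ini
# myapp.service — hypothetical example unit
[Unit]
Description=Example application service
# Dependency: start only after the network is up
After=network.target

[Service]
# Execution command
ExecStart=/usr/local/bin/myapp
# Failure handling: restart the process if it exits with an error
Restart=on-failure

[Install]
# Lets "systemctl enable" configure autostart at boot
WantedBy=multi-user.target
```

A real unit can be inspected the same way with systemctl cat on any installed service.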

Managing Services with systemctl

Core Purpose:

  • systemd manages services via systemctl command
  • Allows manual start, stop, and restart (temporary changes)
  • Enables persistent changes through enable, disable, mask, and unmask

Key Commands:

  1. List Services

    • systemctl list-unit-files -t service (shows enabled/disabled/masked services)
    • systemctl list-units (shows services in memory, including failed ones)
  2. Service Status

    • systemctl status [service]
    • systemctl is-active [service]
    • systemctl is-failed [service]
    • systemctl is-enabled [service]
  3. Service Control

    • sudo systemctl start [service]
    • sudo systemctl stop [service]
    • sudo systemctl restart [service]
  4. Persistent Control

    • sudo systemctl enable [service] (auto-start at boot)
    • sudo systemctl disable [service]
    • sudo systemctl mask [service] (prevents manual/automatic start)
    • sudo systemctl unmask [service]

Additional Features:

  • Status checks return values (0 for success) useful in scripts
  • Can use echo $? to verify return values
  • No output displayed for start/stop/restart commands
  • Masking prevents accidental service starts (useful for services like DHCP)
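Because the status checks return exit codes, they script cleanly. A minimal sketch of the pattern, using true and false as stand-ins for systemctl is-active/is-failed (the real calls need a running systemd):

```shell
# Hypothetical stand-ins: on a real system, replace 'true' with
# e.g. "systemctl is-active --quiet atd".
check_ok()  { true;  }   # exits 0, like a check on an active service
check_bad() { false; }   # exits nonzero, like a check on a failed service

check_ok
echo $?                  # prints 0

if check_bad; then
  echo "service is active"
else
  echo "service is not active"
fi
```

The same if/else shape works unchanged around any systemctl is-active, is-failed, or is-enabled call.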

Example Used:

  • atd service demonstrated all commands
  • Masking prevented manual start attempts
  • Unmasking restored service functionality

Note: systemctl service management is similar to traditional system management, making it relatively straightforward compared to other systemctl functions.

Job Scheduling in Linux

Types of Jobs:

  1. One-time AT Jobs

    • Created by user
    • Run once at specified time
    • Managed by atd service
  2. One-time Batch Jobs

    • Similar to AT jobs
    • Run when system resources available
    • Single execution
    • Managed by atd service
  3. Recurring User Jobs

    • User created and managed
    • Repeating schedule (minute/hour/day/week/month)
    • Can be deleted by creator
    • Root can create user jobs
    • Managed by crond service
  4. Recurring System Jobs

    • Created by system administrator
    • Run by operating system
    • Not user-associated
    • Can run as any user
    • Managed by crond service
  5. Systemd Timers

    • Created by system administrator
    • Run by operating system on repeat
    • Equivalent to recurring system jobs
    • Advantages:
      • Logging through systemd journal
      • Dependencies on other systemd units
    • More complex management
    • Managed by systemd

Access Control:

  • Mechanism to allow/deny user job creation/management
  • Configurable for all job scheduling systems
  • Uses various technologies

AT Service for One-Time Jobs

Purpose:

  • Runs jobs at specific times or when CPU load < 0.8 (batch jobs)

Time Format Options:

  1. Standard Clock

    • 12-hour: 4:25 AM
    • 24-hour: 16:45
  2. General Terms

    • midnight
    • noon
    • teatime (4:00 PM)
  3. Incremental

    • now + minutes/hours/days
  4. Specific Date/Time

    • Time and date: 3:00 AM tomorrow
    • Date formats: MM/DD/YYYY or with dots
    • Precise format: CCYYMMDDhhmm.ss (requires the -t option)
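GNU date can build that precise -t timestamp for you; a small sketch (the 'tomorrow 03:00' spec is an example):

```shell
# Format codes %Y%m%d%H%M.%S produce the CCYYMMDDhhmm.ss layout at -t expects
stamp=$(date -d 'tomorrow 03:00' +%Y%m%d%H%M.%S)
echo "$stamp"
# To schedule with it (needs the atd service):  at -t "$stamp"
```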

Commands & Usage:

  1. Creating AT job:

    • Syntax: at [time]
    • Example: "at now +5min"
    • Enter commands at prompt
    • End with CTRL + D
  2. Managing AT jobs:

    • View jobs: atq or at -l
    • View job contents: at -c [job_number]
    • Cancel job: atrm [job_number]

Job Information Display:

  • Job number
  • Time/date
  • Queue letter
  • Username

Batch Jobs:

  • Created using ‘batch’ command
  • Runs when system load < 0.8
  • Verification through atq
  • Job completion checked by output/file creation

Example Commands (entered at the batch prompt, submitted with Ctrl+D):

mkdir -p ~/Documents.bak
rsync -a ~/Documents/ ~/Documents.bak
touch ~/batchfile.txt

Verify afterwards that the job ran:

ls -l ~/batchfile.txt

Cron Jobs & Crontab

Types of Crontabs:

  1. User Crontabs
  • User-specific
  • User-managed
  • Location: /var/spool/cron/username
  2. System Crontabs
  • System-wide
  • Admin-managed
  • Run by the OS
  • Location: /etc/cron.d

Crontab Format (6 columns):

  1. Minutes (0-59)
  • * = every minute
  • Multiple values: 15,30,45
  • Ranges: 15-45
  • Steps: */10 (every 10th minute)
  • Odd minutes: 1-59/2
  2. Hours (0-23)
  • 0 = midnight
  • * = every hour
  3. Day of Month (1-31)
  • * = every day
  4. Month (1-12)
  • Can use numbers or Jan-Dec
  • * = every month
  5. Day of Week (0-7)
  • 0,7 = Sunday
  • 6 = Saturday
  • Can use three-letter abbreviations (Sun, Mon)
  6. Command
  • System crontabs add an extra column specifying the user to run the command as

Additional Information:

  • Online generator available: crontab-generator.org
  • Manual: man 5 crontab (for format details)
  • Regular man crontab shows command info
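Putting the six fields together, a few example user-crontab entries (the script paths are hypothetical):

```
# Every 10th minute
*/10 * * * * /usr/local/bin/healthcheck.sh
# Daily at 02:00 (hour 2, minute 0)
0 2 * * * /usr/local/bin/backup.sh
# Mondays at 09:15
15 9 * * Mon /usr/local/bin/report.sh
```

Install with crontab -e; a system crontab in /etc/cron.d would add a user column before the command.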

Systemd Timer Units

  1. Types of Timer Units:

    • Real-time timers (calendar events based)
    • Monotonic timers (relative time spans)
  2. Real-time Timers:

    • Activate on specific dates/times
    • Examples: New Year’s midnight, weekly Sunday backups
    • Uses “OnCalendar” keyword
  3. Monotonic Timers:

    • Activate relative to starting points
    • Examples: 5 minutes after boot, 30 seconds after login
  4. Advantages over Cron Jobs:

    • Individual service files
    • Independent job testing
    • Configurable environments
      • Control group (cgroup) compatibility
    • Dependencies on other systemd units
    • Systemd journal logging
  5. File Structure:

    • .timer extension for timer file
    • Matching .service file
    • Example: backup.timer & backup.service
  6. Commands:

    • List timers: systemctl list-timers
  7. Timer Keywords:

    • OnActiveSec: relative to timer activation
    • OnBootSec: relative to machine boot
    • OnStartupSec: relative to systemd start
    • OnUnitActiveSec: relative to service unit activation
    • OnUnitInactiveSec: relative to service unit stop
    • RandomizedDelaySec: random activation delay
  8. Configuration Examples:

    • Service file: Specifies script/application to run
    • Real-time timer: Daily 2 AM activation
    • Monotonic timer: 15 minutes post-boot, weekly
  9. Cron Job Advantages:

    • Simpler creation (one-liners)
    • Built-in email notifications
    • Note: These features can be replicated in systemd timers
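A hypothetical sketch of the backup.timer/backup.service pair from the example above; both files would live under /etc/systemd/system, after which the timer is enabled with systemctl enable --now backup.timer:

```ini
# backup.timer — real-time (OnCalendar) activation, daily at 02:00
[Unit]
Description=Daily backup timer

[Timer]
OnCalendar=*-*-* 02:00:00
# Run a missed activation at next boot
Persistent=true

[Install]
WantedBy=timers.target
```

```ini
# backup.service — the matching service unit the timer starts
[Unit]
Description=Backup job

[Service]
Type=oneshot
# Hypothetical script path
ExecStart=/usr/local/bin/backup.sh
```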

System Process Analysis & Optimization Tools

CPU Monitoring & Configuration:

  • /proc/cpuinfo - View CPU information
  • uptime - Check system uptime
  • /proc/loadavg - Monitor the system load average

SAR (System Activity Reporter) Tools:

  • sar - Complex tool for system data reading & reporting
  • sadf - Export reports in various formats (e.g., XML)
  • iostat - Generate CPU and I/O statistics
  • mpstat - Display CPU statistics
  • pidstat - Show statistics per process ID
  • ksar - Third-party Java visualization tool (not part of sysstat)

Kernel Tuning:

  • sysctl - Used for kernel tuning
  • tuned - Alternative tool for managing kernel parameters

Memory Monitoring:

  • vmstat - Reports on processes, memory, paging, block I/O, traps, disks, CPU
  • free - Display memory usage
  • /proc/meminfo - Detailed memory information

Process Management:

  1. Process States:

    • Zombie
    • Uninterruptible sleep
    • Interruptible sleep
    • Running
  2. Priority Management:

    • nice/renice - Manage process priorities
    • top - Interactive process monitoring and management
  3. Process Control:

    • kill/killall/pkill - Process termination commands
    • Different kill signals (e.g., 9/SIGKILL and 15/SIGTERM)

Additional Process Tools:

  • ps - Display process information
  • lsof - List open files by processes
  • pgrep - Search through processes
  • top - Interactive process monitoring

Note: These tools are important for both system administration and exam purposes, and familiarity with each is recommended.

Application Troubleshooting

  1. SELinux-related Issues:
  • Check for context violations in enforcing mode
  • Tools available:
    • sealert - SELinux alert browser (GUI)
    • ausearch - search audit logs
    • aureport - produces audit log activity summaries
    • Direct audit log viewing at /var/log/audit/audit.log
  2. Permission Issues:
  • Check standard Linux permissions:
    • Use ls -la to view permissions
    • Verify execute privileges
  • File Access Control Lists:
    • Use getfacl to view ACLs and standard permissions
    • Check for:
      • User permissions
      • Group ownership
      • Executable permissions
      • File ACL inheritance
  3. Volume Mount Issues:
  • Use the mount command to check whether the volume is mounted with:
    • Executable permissions
    • SUID/SGID program privileges
  4. System Logs:
  • Check using:
    • journalctl
    • /var/log/messages (location may vary by distribution)
  5. Program Debugging:
  • Use strace:
    • Monitors program execution
    • Shows library loading
    • Displays configuration file usage
    • Outputs to both stdout and stderr
    • Best used with grep for filtering
  6. Hardware Troubleshooting:
  • Tools:
    • lshw - lists all hardware
    • dmidecode - shows BIOS-level hardware info
    • Check man pages for specific options

6. Configure Network Connections

Linux Network Configuration Requirements

  1. Essential Configuration Items:
  • Host address (IPv4 or IPv6)
  • Network subnet mask
  • Default gateway router address
  • Hostname
  • Name resolution (local or remote DNS)
  2. Configuration Methods:

A. Dynamic Configuration

  • Via DHCP or IPv6 autoconfiguration
  • Host requests its configuration from a server

B. Manual Configuration (Live)

  • Settings stored in RAM

  • Resets after reboot

  • Two toolsets: i. net-tools (legacy): arp, hostname, ifconfig, route, netstat - obsolete but still available

    ii. iproute2: current standard, recommended going forward

  3. Permanent Configuration Storage - locations differ by distribution:
  • RHEL (v8 and older): /etc/sysconfig/network-scripts
  • Debian: /etc/network/interfaces
  • SUSE: /etc/sysconfig/network
  4. Network Manager:
  • Unified network configuration service
  • Started by Red Hat (2004)
  • Widely adopted
  • Features:
    • Command-line interface
    • Graphical interface
    • Standardized approach across distributions

Note: Configuration methods vary by distribution, which can be challenging in mixed environments.

Live Network Configuration in Linux

  1. Virtual Network Setup (VirtualBox)
  • Create a second network interface for the VMs
  • Use an internal network (intnet) so the first adapter keeps internet access
  • Configure through VirtualBox settings > Network > Adapter 2
  2. Network Configuration Tools

A. Two Main Tools:

  • net-tools (obsolete but still used)
  • iproute2 (modern replacement)

B. Command Equivalents (net-tools → iproute2):

  • arp → ip neighbor
  • ifconfig → ip address
  • iptunnel → ip tunnel
  • mii-tool → ethtool
  • netstat → ss
  3. Network Interface Information
  • Modern naming convention: e.g., enp0s3 (PCI network card), wlp3s0 (wireless)
  • View interfaces:
    • net-tools: ifconfig
    • iproute2: ip -s addr
  4. Gateway Information
  • net-tools: route
  • iproute2: ip route
  5. Hardware-Level Interface Settings
  • net-tools: sudo mii-tool [interface]
  • iproute2: sudo ethtool [interface]
  6. Address Resolution Protocol (ARP)
  • net-tools: arp
  • iproute2: ip neighbor (can be shortened to ip neigh or ip n)
  7. Enabling/Disabling an Interface
  • net-tools: ifconfig [interface] up/down
  • iproute2: ip link set dev [interface] up/down
  8. Setting an IP Address
  • net-tools: sudo ifconfig [interface] [IP] netmask [mask]
  • iproute2: sudo ip addr add [IP/CIDR] dev [interface]
  9. Network Statistics
  • net-tools: netstat -neopa
  • iproute2: ss -neopa

Key Difference:

  • ifconfig sets single IP address
  • ip command can add multiple addresses to interface

Recommendation: Focus on learning iproute2 while maintaining familiarity with net-tools.

Configuring Saved Network Connections

  1. Essential Network Settings:
  • Hostname
  • Network subnet mask
  • Default gateway
  • Name resolution (local/remote)
  2. Network Configuration Files:

a) Legacy Systems:

  • Debian: /etc/network/interfaces
  • SUSE: /etc/sysconfig/network
  • RHEL: /etc/sysconfig/network-scripts

b) Modern Systems:

  • NetworkManager with keyfiles (unified format)
  • Used by RHEL, SUSE, and Debian
  3. NetworkManager Configuration:
  • Main config file: /etc/NetworkManager/NetworkManager.conf
  • Interface configurations: /etc/NetworkManager/system-connections/
  • Command to view config: sudo NetworkManager --print-config
  4. Configuration File Formats:

a) DHCP Configuration:

  • Both ifcfg and keyfile formats include:
    • Interface name
    • UUID
    • Device name
    • Boot protocol

b) Manual Configuration adds:

  • IP address
  • Network mask
  • Gateway
  • DNS servers
  • IPv6 settings
  5. DNS Configuration:
  • Global DNS file: /etc/resolv.conf
  • Interface-specific settings possible
  • Settings:
    • PEERDNS=no (ifcfg format)
    • ignore-auto-dns=true (keyfile format)
  6. Hostname Configuration:
  • Traditional: Edit /etc/hostname
  • Modern: Use systemd's hostnamectl
  • Command: sudo hostnamectl set-hostname [hostname]
  7. Name Resolution:
  • Configured in /etc/nsswitch.conf
  • Resolution order:
    1. Local files (/etc/hosts)
    2. DNS
    3. myhostname (NSS module)
  • Local resolution: Add entries to /etc/hosts in the format: IP_address FQDN alias
  8. Making Changes Effective:
  • Traditional: Bring the interface down and up
  • Modern: Use nmcli to reload the configuration
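A local name-resolution entry in /etc/hosts, following the IP_address FQDN alias format (the values are examples):

```
192.168.56.20   rhhost1.localnet.com   rhhost1
```

With this entry in place, the host resolves rhhost1 locally without consulting DNS.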

Configure networking with NetworkManager

NetworkManager Overview:

  • Primary tool for managing network connections
  • Keeps network devices and connections active
  • Supports manual connection control

Configuration Tools:

  1. GUI Tools:

    • control-center (accessed via Overview > Network)
    • nm-connection-editor (run from command line)
    • nm-connection-editor has additional capabilities like VPN, bridges, tunnels, VLANs
  2. Text-based Tools:

    • nmtui (Text User Interface)
    • Navigate using arrow keys, tab, shift+tab
    • Can edit connections, activate connections, set hostname
  3. Command Line Tool (nmcli): Key Subcommands:

    • general: System information, hostname, logging
    • networking: System-wide settings
    • radio: WiFi/wireless controls
    • monitor: Watch connectivity changes
    • connection: Manage network profiles
    • device: Manage network interfaces

nmcli Syntax:

  • Supports abbreviated commands (e.g., "nmcli con show" → "nmcli c s")
  • Full commands recommended for scripts

Common nmcli Operations:

  • Show active connections: nmcli con show --active
  • Activate connection: nmcli con up 'connection_name'
  • Activate by interface: nmcli con up ifname device_name
  • Create new connection:
    nmcli con add con-name name \
    ifname interface \
    type ethernet \
    ip4 address/mask \
    gw4 gateway
    

Documentation:

  • Man pages available: man nmcli
  • Extended examples: man nmcli-examples
  • 30+ examples in nmcli man page
  • 500-page detailed example documentation

DNS Setup and Troubleshooting

  1. NSSwitch Configuration (/etc/nsswitch.conf)
  • The hosts line shows the order of name resolution
  • Resolution order: local files first, then DNS
  • "Local files" refers to the /etc/hosts file
  2. Hosts File (/etc/hosts)
  • Contains IP address mappings for hostnames
  • Critical for name resolution
  • Important to maintain correct IP addresses
  • Especially important with dynamic IP addresses
  3. DNS Server Configuration

a) Primary Configuration (/etc/resolv.conf)

  • Contains global DNS server information
  • Should verify listed servers are reachable

b) Network Interface Configuration

  • Location: /etc/sysconfig/network-scripts/ifcfg-[interface]
  • DNS entries possible in interface config
  • PEERDNS setting:
    • yes: Copies to resolv.conf
    • no: Stays only in interface config
    • Can override resolv.conf settings
  4. DNS Testing
  • Use the 'dig' command to test DNS servers
  • Syntax: dig @[DNS-server] [domain-to-resolve]
  • Successful resolution indicates:
    • DNS server is accessible
    • Server is functioning properly
  • Failed resolution may indicate:
    • Host unavailable
    • Not a DNS server
    • Network connectivity issues
  5. Troubleshooting Tips
  • Verify /etc/hosts file accuracy
  • Check DNS server accessibility
  • Confirm network interface configurations
  • Test DNS resolution using the dig command
  • Monitor for dynamic IP address changes
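getent is useful here because it resolves names the same way applications do, honoring the hosts line in nsswitch.conf; localhost is used below since it exists on any system:

```shell
# Resolves via the nsswitch order: /etc/hosts (files) before DNS
getent hosts localhost
```

Substitute any hostname from /etc/hosts or DNS to confirm which source answers.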

Network Troubleshooting Tools

  1. Routing Issues:
  • route (legacy command)
  • netstat -r
  • ip route (iproute2 suite)
  • Network mapping tools:
    • nmap
    • traceroute
    • tracepath
    • mtr (if available)
  2. Local Switching & ARP:
  • arp command
  • Can safely delete and repopulate ARP entries
  3. Network Saturation:
  • iftop
  • iperf
  4. Packet Analysis:
  • ping (connectivity testing)
  • tcpdump (CLI - for dropped packets/timeouts)
  • Wireshark (GUI alternative)
  • netcat (network stream analysis)
  5. DNS Troubleshooting:
  • nslookup (legacy systems)
  • dig (more comprehensive tool)
  • host (simple name queries)
  • whois (domain registration lookup)
  6. Network Adapter Tools:
  • ethtool (adapter diagnostics)
  • nmcli (network configuration/saving)
  • ip (live network settings)
  • ifconfig (legacy tool; some prefer its output formatting)

Note: This list serves as a checklist for both job requirements and exam preparation. Not all tools may be available on all distributions.

7. Managing Users

User and Group Overview

  1. Linux System Characteristics:
  • Multiuser, multi-login, multitasking OS
  • Supports simultaneous users running different applications
  2. Types of Users:

a) Non-login users

  • Cannot log in
  • Used for system services

b) Login users

  • Super user (root) - privileged administrator
  • Regular users - unprivileged
  • Best practice: Log in as a regular user
  3. User Management:
  • Users can be added to groups
  • Access control can be assigned to entire groups
  • Every user has:
    • Username
    • Numeric user ID
    • At least one group
    • Primary group (automatically created)
  4. Group System:
  • Groups have:
    • Group name
    • Group ID number
  • Primary group shares the username
  • Numeric IDs assigned sequentially
  • Groups cannot contain other groups
  5. File Ownership:
  • Files have a user owner and a group owner
  • Created files belong to:
    • The creating user
    • The user's primary group
  6. Important Details:
  • Users can have only one primary group
  • Users can belong to multiple supplemental groups
  • Usernames/groups are case sensitive
  • Passwords required for login
  • Common practice: use lowercase for usernames
  7. Legacy Systems:
  • Older distributions: all users might share one group
  • Security risk: shared group access to files
  • Not common in modern distributions

Linux User Account File (/etc/passwd)

Structure:

  • Text file storing local user account information
  • One user account per line
  • 7 columns separated by colons

Column Details:

  1. Username

    • Case sensitive
    • Root (administrator) usually first line
  2. Password Field

    • Usually shows ‘x’ (modern systems)
    • Indicates password stored in Shadow Suite
    • Previously contained encoded password hash
    • Shadow Suite stores passwords in /etc/shadow (root-only access)
  3. Numeric User ID

    • System accounts: 1-999
    • Non-admin accounts: 1000+ (older systems: 500+)
    • Configurable in /etc/login.defs
  4. Primary Group ID

    • Numeric ID of user’s primary group
    • Cross-referenced with /etc/group file
  5. GECOS Comment Field

    • Optional user information
    • Name inherited from GECOS, a General Electric operating system
    • Avoid storing sensitive data
  6. Home Directory

    • Root: /root
    • Regular users: /home/username
    • Configurable in /etc/default/useradd
  7. Default Login Shell

    • Usually /bin/bash
    • Configurable via:
    • /etc/default/useradd
    • useradd -D
    • During user creation
    • Available shells listed in /etc/shells

Password Security:

  • Modern systems use Shadow Suite for security
  • Passwords stored in /etc/shadow (root-only access)
  • One-way hash encryption
  • Previous storage in /etc/passwd was vulnerable to modern computing power

Additional Shell Options:

  • Viewable via: sudo dnf search shell
  • Include: dash, fish, ksh, zsh
  • Can be installed separately
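The seven colon-separated fields described above are easy to pull apart with awk; a quick sketch using a made-up account line:

```shell
# Hypothetical /etc/passwd record (field 2 is 'x' -> password in /etc/shadow)
line='bob:x:1001:1001:Bob Smith:/home/bob:/bin/bash'
# Fields: 1=user 3=UID 4=primary GID 6=home 7=shell
echo "$line" | awk -F: '{ printf "user=%s uid=%s home=%s shell=%s\n", $1, $3, $6, $7 }'
# prints: user=bob uid=1001 home=/home/bob shell=/bin/bash
```

The same one-liner works against the real file: awk -F: '{ print $1, $7 }' /etc/passwd.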

User Password File (/etc/shadow)

Location and Access:

  • Stored in /etc/shadow
  • Only readable by root
  • Contains password and account aging information

File Structure:

  • Nine columns, colon-delimited
  • First user (root) appears first, newest users last

Column Details:

  1. Username (must match the /etc/passwd file)

  2. Password Field

    • Hash types:
      • $1$ = MD5 (not recommended)
      • $2a$ or $2y$ = Blowfish (bcrypt)
      • $5$ = SHA-256
      • $6$ = SHA-512 (strongest)
    • Format: $algorithm$salt$encoded_password
    • Exclamation marks (!) indicate an unset or locked password
  3. Last Password Change (days since Jan 1, 1970)

    • 0 = change required at next login
  4. Minimum Days Before Password Change

    • 0 = can change anytime
  5. Maximum Days Until Required Password Change

    • 99,999 = default (≈274 years)
  6. Password Expiration Warning Period (days)

  7. Password Grace Period

    • Days after expiration where password still works
  8. Account Expiration Date

    • Days since Jan 1, 1970
    • Empty = no expiration
    • Different from password expiration
  9. Reserved for future use
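The day counts in columns 3 and 8 convert to calendar dates with GNU date; a sketch (19000 is an arbitrary example value):

```shell
# Days-since-epoch -> date: multiply by 86400 seconds per day
days=19000
date -u -d "@$((days * 86400))" +%Y-%m-%d    # prints 2022-01-08

# Today's value, as it would appear in /etc/shadow:
echo $(( $(date -u +%s) / 86400 ))
```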

Configuration:

  • System-wide defaults set in /etc/login.defs
  • Can be modified using authconfig command
  • Individual user aging information managed with chage command
  • Hash algorithm changeable via authconfig or password-auth file
    • Requires password reset to implement new hash type

Group Accounts and Passwords in Linux

File Locations:

  • Group information: /etc/group
  • Group passwords: /etc/gshadow

/etc/group file structure (4 columns):

  1. Group name
  2. Password placeholder (actual password stored in /etc/gshadow)
  3. Numeric group ID
    • Non-admin groups start at 1000 (configurable in /etc/login.defs)
    • IDs under 1000 belong to root user/system services
  4. List of users (comma-separated, no spaces)

/etc/gshadow file structure (4 columns):

  1. Group name (must match /etc/group entry)
  2. Group password
    • Hash password if set
    • Empty field: only members can switch to group (no password needed)
    • ! prefix: restricted group (members need password)
  3. Group administrators (comma-separated)
  4. Group members (comma-separated)

Group Management:

  • The 'groups' command shows a user's group memberships
  • Leftmost group is the primary group
  • The 'newgrp' command:
    • Switches to a new group
    • Requires a password unless the user is a member
    • Makes the switched-to group the primary group
    • Exit using the 'exit' command
  • File ownership:
    • New files owned by user and their primary group

Security:

  • /etc/gshadow requires elevated privileges (sudo) to access
  • Group passwords enable non-members to switch groups with correct password
  • Group administrators can modify group password and membership

User Account Management in Linux

  1. Key Files:
  • /etc/passwd: User account data storage
  • /etc/shadow: Passwords and account aging info
  • /etc/login.defs: User account defaults
  • /etc/default/useradd: User defaults and skeleton directory location
  2. useradd Command Options:

Basic Options:

  • -d: Specify home directory
  • -u: Set user ID number
  • -g: Set primary group ID
  • -G: Set supplemental groups (comma-separated)
  • -s: Specify shell

Account Aging Options:

  • -e: Account expiration date
  • -f: Account inactive period

System Configuration Options:

  • -r: Create system account
  • -p: Set encrypted password (must be pre-encrypted)
  • -M: Skip home directory creation
  • -N: Don't create a primary user group
  • -k: Specify skeleton directory
  3. Creating Users - Practical Steps:

a) Basic Creation:

  • Command: sudo useradd [username]
  • Example: sudo useradd bob

b) Password Setting:

  • Command: sudo passwd [username]
  • Example: sudo passwd bob
  4. Verification Methods:
  • Check user info: cat /etc/passwd
  • View group info: cat /etc/group
  • Verify password/aging: sudo cat /etc/shadow
  5. Notes:
  • Default values are used if options are not specified
  • Skeleton directory files automatically copy to the new user's home
  • The chage command is recommended for configuring aging on existing accounts

Modifying User Accounts with usermod

Key Command: usermod

  • Similar syntax to useradd but for modifying existing accounts
  • Used for changing user account settings

Common Options:

  1. Basic Modifications:
  • -s: Change default login shell
  • -l: Change username
  • -u: Change numeric user ID
  • -d: Change home directory
  2. Group Modifications:
  • -g: Change primary group ID
  • -G: Specify supplemental groups
  • -a: Append new groups (must use with -G to preserve existing groups)
  3. Administrative Options:
  • -m: Move home directory
  • -L: Lock user account
  • -U: Unlock user account
  • -e: Set account expiration
  • -f: Set account inactive period

Practical Example Demonstrated:

  1. Created user "sally"
  2. Modified sally's account:
    • Changed user ID to 1010
    • Changed primary group ID to 1001 (the first regular user's primary group)
    • Changed default shell to /bin/sh

Important Notes:

  • Check /etc/shells for available shells
  • View /etc/group for group IDs
  • When changing ownership/groups, files retain user ownership but reflect new group
  • Specified shells, groups, or directories must exist before modification
  • For complex account aging, use ‘chage’ command instead
  • May require sudo privileges for modifications

Creating and Deleting Groups

Groups in Linux User Management:

  • Purpose: Streamline access rights for multiple users
  • Essential components: Group name and numeric group ID
  • User group membership:
    • Primary group: One per user
    • Supplemental groups: Multiple possible

Group Membership Limits:

  • UNIX-based OS, Windows 2000+: 16-1024 groups
  • Linux kernel < 2.6.3: 32 groups
  • Linux kernel ≥ 2.6.3: 65,536 groups
  • Check limit: getconf NGROUPS_MAX
  • NFS protocol limitation: Issues beyond 16 groups
  • Note: Groups cannot contain other groups

Viewing Group Information:

  1. groups command:

    • Shows primary (leftmost) and supplemental groups
    • Can specify username as argument
  2. id command:

    • Displays numeric group IDs and names
    • Can be used with specific usernames
    • Options available for name/number only output
  3. getent command:

    • Lists users in specific group
    • Can list all groups
    • Format: getent group [groupname]
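These lookups are read-only and safe to try anywhere; the root user and group exist on every Linux system:

```shell
id root                  # uid, primary gid, and all group memberships
getent group root        # /etc/group-style entry: name:x:GID:members
getconf NGROUPS_MAX      # this kernel's supplemental-group limit
```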

Creating Groups:

  • Command: groupadd
  • Common option: -g (specify group ID)
  • Example: sudo groupadd -g 1050 accounting

Modifying Groups:

  • Command: groupmod
  • Can change:
    • Group ID (-g option)
    • Group name
  • Note: Files owned by group need manual updating after ID change
  • Primary group ID updates automatically in /etc/passwd

Assigning Users to Groups

Methods to Add Users to Groups:

  1. Using the usermod command (user-centric approach):
  • Create group: sudo groupadd sales
  • Add user to group: sudo usermod -a -G sales sally
  • Important: the -a flag is crucial for appending groups
  • Without -a, the specified groups replace all existing supplemental groups
  • Can add multiple groups: sudo usermod -a -G sales,audio,wheel sally
  2. Using the gpasswd command (group-centric approach):
  • Add user to group: sudo gpasswd -a sally sales
  • Simpler than usermod, as there is no need to worry about appending
  • Remove user from group: sudo gpasswd -d sally sales

Checking Group Membership:

  • Command: sudo groups sally
  • Shows all groups user belongs to

Important Notes:

  • usermod is good for:
    • Replacing all supplemental groups
    • Adding users to multiple groups simultaneously
  • Changes require user to log out and back in to take effect
  • gpasswd is easier for removing users from groups compared to usermod

Superuser Privileges in Linux:

  1. Root Login (Not Recommended):
  • Increases vulnerability to viruses
  • Removes Linux's protections against malicious software
  • Lacks accountability with multiple admin users
  • Requires a root password change when an admin leaves
  2. Better Practices:
  • Elevate privileges only when necessary
  • Use specific commands for privilege escalation
  3. su Command - Key options:
  • -c: Run command as specified user
  • -g: Run command as specified group
  • -l: Start a login shell
  • -s: Run command in specified shell
  4. Authentication:
  • Uses PAM (Pluggable Authentication Modules)
  • Configurable through PAM configuration files
  • Located at /etc/pam.d/su
  5. Wheel Group Configuration:
  • Can be configured to allow password-less su access
  • Can restrict su access to wheel group members only
  • Neither option enabled by default
  6. runuser Command:
  • Alternative to su for already-privileged (root) users
  • Doesn't require authentication
  • Safer than su because the binary is not set-user-ID
  • Designed specifically for superuser use

Note: For more detailed privilege management, sudo command is recommended (covered separately).

Elevating Group Privileges

  1. Group Switching vs User Switching
  • Alternative to the 'su' command, which requires a user password
  • Can change to a different primary group using the 'newgrp' command
  • Gains group privileges instead of user privileges
  2. Creating and Managing Groups
  • Create new group: sudo groupadd [groupname]
  • Add group password: sudo gpasswd [groupname]
  • Verify group password: sudo cat /etc/gshadow
  • Add user to group: sudo gpasswd -a [username] [groupname]
  3. Using the newgrp Command
  • Switch primary groups: newgrp [groupname]
  • Requires the group password unless the user is a member of the group
  • Verify primary group change with the groups command
  • Primary group shows first in the groups list
  4. File Ownership
  • New files created inherit:
    • User ownership from the current user
    • Group ownership from the current primary group
  5. Shell Behavior with newgrp
  • Creates a new shell when executed
  • Command history may differ
  • Check shell level: echo $SHLVL
  • The exit command returns to the previous shell level
  • Multiple exits may be required to return to the original shell
  6. Password-less Group Switching
  • Add the user to the group to avoid the password prompt
  • Members can switch to the group without authentication

Elevating Privileges Using Sudo

  1. Different Methods of Privilege Elevation:

    • Set user ID and set group ID bits (covered in Linux files/permissions)
    • su command (requires root password)
    • sudo command (requires user’s own password)
  2. Sudo Command Benefits:

    • Users don’t need root password
    • More secure than sharing root password
    • Allows granular control of privileges
    • Can limit admin access to specific commands
    • Example: Database admins get access only to required commands
  3. Using Sudo:

    • Elevates privileges for one command at a time
    • Returns to user role after command execution
    • Caches password temporarily for convenience
    • Example command: “sudo cat /etc/shadow”
  4. Sudoers Configuration:

    • Controlled through sudoers file
    • Determines who can elevate privileges
    • Specifies which commands users can run
    • Can restrict command options
    • Must be edited using visudo tool
  5. Visudo Tool:

    • Special editor for sudoers file
    • Prevents multiple simultaneous edits
    • Opens file in VI editor
    • Checks syntax before saving
    • Command: “sudo visudo”
  6. Access Control:

    • Can group users, hosts, and commands
    • Creates access control rules
    • Uses operating system groups (prefixed with %)
    • Wheel group commonly used for sudo access

Manage sudo users

Sudo User Management Concepts:

  1. User Aliases
  • Groups of users with admin rights
  • Created to assign permissions to multiple users
  2. Runas Aliases
  • Users that commands can be run as
  • Example: Running web admin tools as Apache instead of root
  3. Command Aliases
  • Groups of similar commands
  • Example: Grouping web admin commands into web tools
  4. Host Aliases
  • Groups of hosts/servers
  • Useful for allowing access across multiple servers

Configuration Process:

  1. Edit sudoers file using: sudo visudo

  2. Creating Aliases (Syntax):

  • User Alias: User_Alias DRIVEADMINS = User1
  • Command Alias: Cmnd_Alias DRIVETOOLS = /usr/sbin/gdisk
  • Host Alias: Host_Alias DRIVEHOSTS = rhhost1.localnet.com
  3. User Specifications Format: userlist hostlist = operatorlist commandlist
     Example: DRIVEADMINS DRIVEHOSTS = (ALL) DRIVETOOLS

  4. Additional Options:

  • NOPASSWD: tag - Allows commands without password
  • Multiple tags can be added after the colon
  • Use ‘which’ command to find full path of commands
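Assembled into the sudoers file, the aliases and user specification from this section look like the fragment below (the NOPASSWD: tag is the optional addition described above; edit only with sudo visudo):

```
# /etc/sudoers fragment -- edit only with: sudo visudo
User_Alias  DRIVEADMINS = User1
Cmnd_Alias  DRIVETOOLS  = /usr/sbin/gdisk
Host_Alias  DRIVEHOSTS  = rhhost1.localnet.com

# userlist   hostlist   = (runas) [tags:] commandlist
DRIVEADMINS  DRIVEHOSTS = (ALL) NOPASSWD: DRIVETOOLS
```

With this rule in place, User1 can run gdisk with elevated privileges on DRIVEHOSTS without being prompted for a password.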

8. Handling Storage

Linux Storage Systems

Types of Storage:

  1. File Storage
  • Hierarchical organization (files in folders)
  • Requires path for access
  • Metadata stores path and file information (name, size, creation date)
  • Examples: Local file systems, NFS, CIFS
  2. Block Storage
  • Data in binary format blocks
  • Unique identifiers for blocks
  • System reassembles blocks when requested
  • Appears as device to local system
  • Common in storage area networks
  3. Object Storage
  • Data managed as discrete units (objects)
  • Single repository storage
  • Linked to metadata
  • Example: Red Hat’s Ceph Storage

Drive Interfaces:

  1. Parallel ATA
  • 80-wire ribbon cable
  • Spinning metal platters
  • Max 2 drives per controller
  • Speed: 133 MB/s
  • Device notation: /dev/hda, /dev/hdb
  2. Parallel SCSI
  • 68-wire ribbon cable
  • Server-oriented
  • Supports 16 devices
  • Speed: 320 MB/s
  • Device notation: /dev/sda, /dev/sdb
  3. Serial ATA (SATA)
  • Narrow serial cable
  • One cable per drive
  • Compatible with HDD and SSD
  • Speed: 6 Gbps
  • Device notation: /dev/sda, /dev/sdb
  4. Serial Attached SCSI (SAS)
  • Replaced Parallel SCSI
  • More robust than SATA
  • SATA-compatible
  • Device notation: same as SCSI/SATA

Adding Storage Drives to VM

  1. Initial Setup:

    • Shut down VM
    • Access Settings
    • Navigate to Storage in left pane
  2. Adding New Drives:

    • Right-click Controller.SATA
    • Select Hard Disk
    • Click Create
  3. Drive Configuration:

    • File Type: VDI (default)
    • Storage Type: Dynamically allocated
    • Size: 1 GB
    • Note naming convention: rhhost1_1.vdi
  4. Process:

    • Click Finish
    • Choose to attach to VM
    • Repeat process for 3 more drives
    • Increment name for each drive
  5. Final Result:

    • Total 5 drives:
      • 1 OS drive
      • 4 new additional drives
    • Power VM back up after completion

Purpose: Preparation for LVM and RAID exercises requiring multiple drives.

Creating Partitions using fdisk

  1. Viewing Drive Information:
  • /proc/partitions: Shows kernel-recognized drives/partitions
  • lsblk: Visual list of drives, partitions, and mount points
  • blkid (sudo): Shows partition labels, UUID numbers, and file systems
  • fdisk -l (sudo): Lists partitions directly from drive’s partition table
  2. Partition Table Updates:
  • If kernel partition list differs from fdisk:
    • Unmount partition if possible
    • Use ‘sudo partprobe’ to update partition table
    • Reboot may be required if differences persist
  3. Partition Table Types:
  • Legacy BIOS: Uses MBR (Master Boot Record)
  • Modern UEFI: Uses GPT (GUID Partition Table)
  • fdisk now supports both MBR and GPT
  • gdisk is alternative tool for GPT partitions
  4. Using fdisk: Key Commands:
  • M: Help menu
  • D: Delete partition
  • N: Create new partition
  • P: Print partition table
  • T: Change partition type
  • W: Write changes and exit
  • Q: Quit without saving
  • G: Convert to GPT
  5. Creating Partition Example:
     a. Enter fdisk: sudo fdisk /dev/sdb
     b. Convert to GPT if needed: Press G
     c. Create partition: Press N
     d. Specify:
       • Partition number (default: 1)
       • First sector (default: 2048)
       • Size (example: +500M for 500MB)
     e. Verify: Press P
     f. Save: Press W
  6. Post-Partition Creation:

  • Verify kernel recognition: cat /proc/partitions
  • Update kernel if needed: sudo udevadm settle
  • Format partition using mkfs
  • Mount as needed

Note: A disk tools PDF cheat sheet is available in exercise files covering fdisk, gdisk, and parted commands.

Creating Partitions using Parted

  1. Overview of Parted:
  • Similar to fdisk/gdisk for partition creation
  • Earlier versions (pre-2.4) had additional features like resize, copy, move
  • Current version mainly focuses on partition management
  • Different from Gparted (graphical tool)
  2. Operating Modes:
  • Interactive mode
  • Non-interactive mode (command line)
  3. Interactive Mode Commands:
  • Launch: sudo parted
  • Help: shows all subcommands
  • Help [command]: detailed help for specific command
  • Print commands:
    • print all: shows all drives and partitions
    • print devices: concise list of drives
    • print free: shows drive stats and free space
  4. Drive Selection and Partition Table Creation:
  • Select drive: select /dev/[drive]
  • Create partition table:
    • mklabel or mktable command
    • Supports GPT and MBR (MS-DOS) formats
  5. Creating Partitions:
  • Syntax: mkpart [type] [start] [end]
  • Types: primary, extended, or logical
  • GPT partitions are always primary
  • Specify size in units (e.g., MiB)
  6. Additional Operations:
  • Delete partitions: rm command
  • Exit: quit
  • Register new partition:
    • sudo udevadm settle
    • Verify with cat /proc/partitions
  7. Post-Partition Operations:
  • Can be used with LVM
  • Format using mkfs
  • Refer to mkfs man page or disk tools cheat sheet

Note: After creating partitions, it’s important to register them with the kernel and verify recognition before formatting or further use.

Managing LVM volumes and volume groups

LVM (Logical Volume Management) Advantages:

  • Supports non-contiguous space allocation
  • Allows volume resizing, combining, and moving
  • Can span across different drives
  • Enables drive swapping without system disruption

Creating LVM System - Steps:

  1. Create Physical Volume (PV)

    • Command: sudo pvcreate /dev/partition
    • Verify: sudo pvs (summary) or sudo pvdisplay (detailed)
  2. Create Volume Group (VG)

    • Command: sudo vgcreate vgname /dev/partition
    • Verify: sudo vgs (summary) or sudo vgdisplay (detailed)
  3. Create Logical Volume (LV)

    • Command: sudo lvcreate --name lvname --size size vgname
    • Verify: sudo lvs (summary) or sudo lvdisplay (detailed)

LV Path References:

  • /dev/VolumeGroupName/LogicalVolumeName
  • /dev/mapper/VolumeGroupName-LogicalVolumeName

Formatting and Mounting:

  1. Format LV:

    • sudo mkfs -t ext4 /dev/vgname/lvname
    • Verify: sudo blkid
  2. Mount LV:

    • Create mount point: sudo mkdir /media/mountpoint
    • Mount: sudo mount /dev/vgname/lvname /media/mountpoint
    • Verify: df -Th

Permanent Mount Configuration:

  • Edit /etc/fstab
  • Format: device_path mount_point filesystem_type options dump fsck_order
  • Example: /dev/vgdata/lvdata /media/lvdata ext4 defaults 1 2
  • Test: sudo mount -a

Tools for Verification:

  • pvs/pvdisplay: Physical volume information
  • vgs/vgdisplay: Volume group information
  • lvs/lvdisplay: Logical volume information
  • blkid: Block device information
  • df -Th: Mounted filesystem information

Expanding logical volumes

  1. Initial Steps:
  • Check partitions: cat /proc/partitions
  • Verify /dev/sdc1 exists (if not, create using fdisk/parted)
  • Create physical volume: sudo pvcreate /dev/sdc1
  • Verify with: sudo pvs
  2. Volume Group Operations:
  • List volume groups: sudo vgs
  • Extend existing volume group: sudo vgextend vg_data /dev/sdc1
  • Verify expansion: sudo vgs
  3. Logical Volume Resizing:
  • Check current LV size: sudo lvs
  • Resize logical volume: sudo lvresize -l 100%VG /dev/vg_data/lv_data (lowercase -l takes extents/percentages; -L takes absolute sizes)
  • Verify new size: sudo lvs
  4. File System Adjustment:
  • Check current filesystem size: df -h
  • Resize ext4 filesystem: sudo resize2fs /dev/vg_data/lv_data
  • Verify final size: df -h

Key Points:

  • LVM allows resizing using non-contiguous drive space
  • Modern lvresize can resize filesystem (older versions couldn’t)
  • Full path must be specified when working with logical volumes
  • Always verify changes after each step
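Since resize2fs operates on any ext-formatted target, the grow step can be rehearsed on a file-backed image without root or LVM (file name and sizes are arbitrary):

```shell
truncate -s 20M lv.img
mkfs -t ext4 -q -F lv.img   # format (the file stands in for the LV)
truncate -s 40M lv.img      # "expand the volume" to 40 MB
e2fsck -fp lv.img           # filesystem check, as before any resize
resize2fs lv.img            # grow the filesystem to fill the new space
```

With no size argument, resize2fs grows the filesystem to fill its container, the same behavior as resizing after lvresize on a real volume.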

Reducing Logical Volumes

  1. Modern lvresize Features:
  • Can unmount and remount volumes automatically
  • Can resize file systems
  • Includes automated processes for volume management
  2. Manual Process Steps:

a) Mounting Considerations:
  • Can leave mounted when increasing size
  • Must unmount when decreasing size
  • XFS file systems can only be resized up, not down

b) Basic Reduction Process:

  1. Unmount volume: sudo umount /dev/vgdata/lvdata

  2. Verify unmount using df command

  3. Run file system check: sudo e2fsck -f /dev/vgdata/lvdata

  4. Resize file system: sudo resize2fs /dev/vgdata/lvdata 500M

  5. Reduce logical volume: sudo lvresize -L 500M /dev/vgdata/lvdata

  6. Verify with lvs command

  7. Remount volume: sudo mount /dev/vgdata/lvdata /media/lvdata

  8. Check new size with df -h

  3. Modern lvresize Method:

  • Uses -r option for automated process
  • Command format: sudo lvresize -r -L [size] /dev/vgdata/lvdata
  • Automatically handles:
    • Unmounting
    • File system resizing
    • Logical volume resizing
    • Remounting
  • Works for both increasing and decreasing volume sizes
  • Still requires unmounting for size reduction, but handles it automatically
  4. Key Points:
  • Always verify size changes using df -h
  • Resolve any file system check issues before resizing
  • Modern method is more efficient but may not be available in older versions

Creating EXT Filesystems

  1. Basic Concepts:
  • Formatting is required before using partition/logical volume
  • mkfs is the standard formatting tool
  • mkfs acts as front-end for other formatting tools
  • View formatting tools: ls /sbin/mk*
  2. File System Types:
  • Available formats: ext2, ext3, ext4, xfs, vfat
  • Main differences:
    • ext2: No journal
    • ext3: Adds journal to ext2
    • ext4: Most advanced, recommended for modern systems
  3. Formatting Commands:
  • Basic format: sudo mkfs /dev/[device]
  • Specify file system: mkfs -t [type] /dev/[device]
  • Verify format: sudo lsblk -f
  4. Converting Between File Systems:
  • ext2 to ext3: sudo tune2fs -j /dev/[device]
  • ext2/3 to ext4:
    • Command: sudo tune2fs -O extent,uninit_bg,dir_index /dev/[device]
    • Features:
      • extent: Uses extent trees for data blocks
      • uninit_bg: Speeds up file system checks
      • dir_index: Uses hashed b-trees for directory lookups
  5. Post-Conversion Steps:
  • Check file system: sudo e2fsck -fD /dev/[device]
  • Mount volume: sudo mount /dev/[device] /mount-point
  • Verify mount: df -Th
  6. Important Notes:
  • Recent Linux versions use ext4 driver for all ext systems
  • Older systems may need explicit ext4 driver specification
  • ext4 conversion is one-way process
  • Recommended to use ext4, especially for SSDs
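The conversion commands above also work against a file-backed image, which is a safe way to watch the feature flags change without a spare partition (file name arbitrary):

```shell
truncate -s 20M conv.img
mkfs -t ext2 -q -F conv.img                      # journal-less ext2
tune2fs -j conv.img                              # add a journal (ext2 -> ext3)
tune2fs -O extent,uninit_bg,dir_index conv.img   # enable ext4 features
e2fsck -fyD conv.img || true                     # required check after conversion
tune2fs -l conv.img | grep 'features'            # has_journal, extent, ...
```

The final features line should now list has_journal, dir_index, and extent, confirming each conversion step took effect.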

Repairing EXT Filesystems

  1. Creating a Corrupted Filesystem (for testing):
  • Mount target volume (/dev/vgdata/lvdata at /media/lvdata)
  • Copy large number of files (using cp -R /[source] /media/lvdata)
  • Use dd command to corrupt filesystem:
    • Command: sudo dd if=/dev/zero bs=1 count=10 of=/dev/vgdata/lvdata seek=[random number]
    • Write directly to logical volume, not filesystem
    • Use different seek values multiple times
  2. FSCK (File System Check) Options:
  • -A: Checks all filesystems
  • -AR: Checks all except root filesystem
  • -f: Checks even clean filesystems
  • -a: Automatically fixes safe problems
  • -y: Answers all questions with yes (use with caution)
  • -n: Display results only, no fixes
  3. Running Filesystem Repairs:
  • Unmount filesystem first
  • Check for corruption: sudo fsck -n [device]
  • Fix filesystem: sudo fsck [device]
  • Run second check to verify repairs
  4. Checking Root Filesystem:
  • Difficult to check while mounted
  • Legacy systems: Create /forcefsck file and reboot
  • Modern SystemD systems: Use tune2fs
  5. Tune2FS Usage:
  • List filesystem settings: sudo tune2fs -l [device]
  • Important parameters:
    • Maximum mount count
    • Current mount count
    • Last checked time
  • Force check on next boot: sudo tune2fs -c 1 [device]
  • Disable automatic checks: sudo tune2fs -c -1 [device]
  6. Best Practices:
  • Never check mounted filesystems
  • Review errors before automatically fixing
  • Consider hardware issues if numerous errors appear
  • Reset mount count settings after forced checks
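The same corrupt-and-repair cycle can be rehearsed on a file-backed image instead of a logical volume (file name and seek offset are arbitrary; e2fsck is the ext-specific back end that fsck invokes):

```shell
# Make a small ext4 image to sacrifice
truncate -s 20M broken.img
mkfs -t ext4 -q -F broken.img

# Damage a few bytes partway in, like the dd example against the LV
# (conv=notrunc keeps dd from truncating the image)
dd if=/dev/zero of=broken.img bs=1 count=10 seek=40000 conv=notrunc status=none

e2fsck -fn broken.img || true   # report-only pass: list any corruption
e2fsck -fy broken.img || true   # repair pass (exit 1 = errors corrected)
e2fsck -fn broken.img           # second check verifies the repairs
```

Note the -f flag on every pass: without it, e2fsck trusts the "clean" flag and skips a filesystem that was never unmounted dirty, even if its metadata is damaged.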

Creating and Repairing XFS Filesystems

Creation:

  • Similar to ext filesystems
  • Use mkfs with XFS specification
  • Command format: sudo mkfs -t xfs -f /dev/path
  • -f option forces format on volumes with existing filesystems
  • Verify creation using lsblk -f

Repair:

  • Use xfs_repair instead of fsck
  • Command: sudo xfs_repair /dev/path

Additional XFS Tools:

  1. xfs_admin

    • Changes filesystem parameters
    • Modifies filesystem label
  2. xfsdump

    • Provides incremental filesystem dump capability
  3. xfs_freeze

    • Suspends filesystem access
  4. xfs_quota

    • Manages XFS quotas
  5. xfs_growfs

    • Resizes XFS formatted drives
    • Important: XFS can only grow, not shrink

Important Note:

  • Ensure volume is unmounted before formatting
  • XFS filesystem cannot be reduced in size, only expanded

MDRAID vs DMRAID

MDRAID (mdadm):

  • Replaced older RAID tools package
  • Uses mdadm administration tool
  • Works at device level
  • Combines multiple devices (e.g., /dev/sdb, /dev/sdc)
  • Creates new device name (e.g., /dev/md0)
  • Supports RAID levels: 0, 1, 4, 5, 6, 10
  • Also supports linear volumes and multi-path
  • Currently recommended for software RAIDs in Enterprise Linux 7

DMRAID (LVM):

  • Part of LVM (Logical Volume Management)
  • Uses standard LVM tools for administration
  • Device paths: /dev/volumegroup/logicalvolume
  • Supports:
    • Linear volumes
    • RAID levels: 0, 1, 4, 5, 6, 10
    • Up to 4 mirrors per volume
    • Non-RAID mirrored volumes
  • Features:
    • Snapshots
    • User space flexibility
    • Sometimes uses MDRAID stack underneath
  • Recommended for non-RAID volumes in Enterprise Linux 7

Future Outlook:

  • LVM gaining more features
  • May become dominant solution

RAID 5 using LVM

RAID Overview:

  • RAID = Redundant Arrays of Independent Disks
  • Can be created using LVM or the mdadm tool
  • LVM supports RAID levels: 0,1,4,5,6,10

RAID Types:

  • RAID 0: Striping, no redundancy, fast (2+ drives)
  • RAID 1: Disk mirroring, slower but redundant (2 drives)
  • RAID 10: Combines mirroring and striping
  • RAID 4: Striping with parity (3+ drives)
  • RAID 5: Striping with distributed parity (3+ drives)
  • RAID 6: Double parity (4+ drives)

Creating RAID 5 Steps:

  1. Preparation:

    • Unmount existing volumes
    • Remove volume groups (vgremove)
    • Clear fstab entries
  2. Drive Setup:

    • Need minimum 3 drives
    • Create partitions if needed using fdisk
    • Used sdb1, sdc1, sdd1 in example
  3. Create Volume Group:

    • Command: sudo vgcreate vgraid /dev/sdb1 /dev/sdc1 /dev/sdd1
    • Verify with pvs
  4. Create Logical Volume:

    • Command: sudo lvcreate --type raid5 -i 2 -l 100%VG -n lvraid vgraid
    • Stripe width = total drives minus 1
    • Verify with lvs
  5. Format and Mount:

    • Format: sudo mkfs -t ext4 /dev/vgraid/lvraid
    • Create mount point: sudo mkdir /media/lvraid
    • Mount: sudo mount /dev/vgraid/lvraid /media/lvraid
    • Verify with df -h

Important Notes:

  • RAID 5 capacity = (number of drives - 1) × smallest drive capacity
  • More drives increase efficiency and speed
  • Teardown order: umount → lvremove → vgremove → pvremove
  • Can use fio command for disk speed testing

RAID 5 using mdadm

  1. Initial Setup
  • Clear previous volumes
  • Remove volume group and logical volume
  • Remove LVM physical volume data from drives
  • Ensure drives are unmounted
  2. Partition Setup
  • Change partition type to Linux RAID (code 29) using fdisk
  • Create new partition on /dev/sde
  • Set GPT label
  • Create 500M partition
  • Set partition type to RAID
  3. mdadm Tool Modes
  • Assemble: combines existing RAID drives
  • Incremental assembly: adds single devices
  • Create: makes new array with RAID metadata
  • Build: creates array without RAID metadata
  • Monitor: watches RAID status
  • Grow: modifies array size/level
  • Manage: handles spare drives
  • Misc: various functions like metadata removal
  4. Creating RAID 5
  • Command: sudo mdadm --create /dev/md/mdraid [drives] --level=5 --raid-devices=4 --bitmap=internal
  • Creates device at /dev/md127
  • Symbolic link at /dev/md/mdraid
  5. RAID Management
  • Check status: cat /proc/mdstat
  • Detail view: mdadm --detail
  • Enable monitoring: systemctl enable mdmonitor
  • Format with XFS filesystem
  • Mount at /media/mdraid
  6. Useful mdadm Options
  • --query: summary info
  • --detail: detailed info
  • --fail: mark device faulty
  • --remove: remove device/array
  • --add: add new device
  • --stop: stop RAID
  • --zero-superblock: remove RAID metadata
  7. Cleanup Process
  • Unmount RAID
  • Stop RAID array
  • Remove RAID superblock from drives
  • Stop and disable mdmonitor
  • Change partition types back to Linux
  • Consult man page and RHEL 9 documentation for more details

Mounting Drives Using UUID and Labels

  1. Drive Naming Issues:
  • Drive names/partitions are assigned in boot order
  • Unplugging drives can change order, breaking system if using static paths
  • Better to mount using label or UUID than physical location
  2. Creating Test Partitions:
  • Use fdisk to create partitions on /dev/sde
  • Create sde2: 250MB partition
  • Create sde3: Remaining space
  • Register new partitions: sudo udevadm settle
  • Verify: cat /proc/partitions
  3. Formatting Partitions:
  • Format both as ext4: sudo mkfs -t ext4 /dev/sde[2/3]
  • Verify formatting: sudo blkid
  • Create mount points: /media/sde2 and /media/sde3
  4. Mounting by UUID (/dev/sde2):
  • Edit /etc/fstab
  • Format: UUID=[uuid] /media/sde2 ext4 defaults 0 0
  • Columns explanation:
    • Device name (UUID)
    • Mount point
    • File system type
    • File system options
    • Backup flag
    • File system check order
  5. Mounting by Label (/dev/sde3):
  • Set label using e2label: sudo e2label /dev/sde3 backups
  • Verify label: e2label or tune2fs -l
  • Edit /etc/fstab
  • Format: LABEL=backups /media/sde3 ext4 defaults 0 0
  6. Important Notes:
  • Verify fstab entries: sudo mount -a
  • Check mounted drives: df -h
  • Remove deleted partitions from fstab immediately
  • Mount options include: rw, suid, dev, exec, auto, nouser, async
  • Root should be 1 in fsck order, others 2+ or 0

Encrypting Drive Data using LUKS

  1. Drive Encryption Options:
  • GPG for single file encryption
  • LUKS (Linux Unified Key Setup) for full volume encryption
  2. Pre-encryption Steps:
  • Check /etc/fstab using “cat /etc/fstab”
  • Remove any references to target drive (/dev/sde1)
  • Important to prevent boot interruption issues
  3. Encryption Process:
  • Command: sudo cryptsetup -y -v luksFormat /dev/sde1
  • Confirm with “YES”
  • Enter strong passphrase twice
  4. Making Encrypted Drive Available:
  • Command: sudo cryptsetup -v luksOpen /dev/sde1 decryptvolume
  • “decryptvolume” is customizable device name
  • Enter encryption password when prompted
  5. Accessing Encrypted Drive:
  • Check device maps: ls -l /dev/mapper
  • Create filesystem: sudo mkfs -t ext4 /dev/mapper/decryptvolume
  • Drive can be used normally (mounting, file copying, etc.)
  6. Deactivating Encrypted Volume:
  • Command: sudo cryptsetup -v luksClose /dev/mapper/decryptvolume
  • Drive becomes inaccessible and data remains secure

Troubleshooting Storage Issues

  1. Space Usage Tools:
  • df -hT: Shows overall storage space usage of mounted volumes
    • -h: Human readable sizes
    • -T: Shows file system type
  • du -h ~: Shows directory size
    • Useful for finding large files
  2. I/O Performance Analysis:

a) iostat (requires sysstat package)
  • Basic usage: sudo iostat -d
  • Extended stats: sudo iostat -dx
  • Interval monitoring: sudo iostat -xdt 2 5
    • Shows stats 5 times, 2-second intervals

b) iotop

  • Similar to top but for I/O usage
  • Requires iotop package
  • Command: sudo iotop
  3. I/O Schedulers:
  • Types:
    • CFQ (Complete Fair Queuing):
      • Queue-based for each application
      • Prioritizes read operations
      • Balanced approach
    • Deadline:
      • Batch processes by time
      • Preferred for databases
      • Good for latency-sensitive operations
    • NOOP:
      • Single FIFO queue
      • Good for low CPU usage
  4. Performance Tools:
  • fio: Disk performance measurement tool
    • Requires package installation
    • Comprehensive disk performance info
  5. SSD Considerations:
  • Requires trim operations for deleted files
  • Most modern filesystems: automatic trim
  • Manual trim: fstrim command
  6. File System Error Tools:
  • ext filesystems: fsck
  • XFS: xfs_check, xfs_repair
  • btrfs: btrfs check
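A quick way to see du pinpoint the space hog, using scratch data (paths and sizes arbitrary):

```shell
mkdir -p scratch/big scratch/small
dd if=/dev/zero of=scratch/big/blob bs=1M count=5 status=none
echo tiny > scratch/small/note.txt

du -h scratch        # per-directory usage; big/ dominates
du -sh scratch/*     # one summary line per entry (pairs well with sort -h)
```

On a real system, running du -sh against the suspect directories reported by df narrows a full filesystem down to the offending path.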

9. Backup, Restore, and Compress Files

File Archiving in Linux

  1. Main Archive Tools:
  • cpio (copy in/copy out)
  • tar (tape archiver)
  2. CPIO Usage:
  • Creates archives from piped file lists
  • Basic syntax: find /etc | cpio -ov > etc.cpio
  • List contents: cpio -itv -I etc.cpio
  • Extract: cpio -iv --no-absolute-filenames -I etc.cpio
  3. TAR Usage: Key Options:
  • -c: Create archive
  • -x: Extract archive
  • -t: List contents
  • -v: Verbose output
  • -p: Preserve permissions
  • --xattrs: Preserve extended attributes
  • -f: Specify file
  • -C: Change directory before extracting
  4. Compression Methods with tar: Comparison (using /etc directory):
  • Uncompressed (.tar): 27MB
  • gzip (.tar.gz): 5.2MB
  • bzip2 (.tar.bz2): 4.3MB
  • xz (.tar.xz): 3.3MB

Speed order (fastest to slowest):

  1. Uncompressed

  2. gzip

  3. bzip2

  4. xz

  5. Common tar operations:

  • Create: tar --xattrs -cvpf filename.tar /directory
  • List contents: tar -tf archive.tar
  • Extract: tar --xattrs -xvpf archive.tar
  • Extract to specific location: tar --xattrs -xvpf archive.tar -C /destination
  6. DD Command:
  • Purpose: Binary file/device copying
  • Syntax: dd if=<input> of=<output>
  • Key options:
    • bs=: Block size
    • count=: Number of blocks
    • status=: Status output
  • Common uses:
    • MBR backup
    • Drive zeroing
  • Warning: Use with caution as it writes raw data
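Because if= and of= accept ordinary files, dd can be practiced without touching a real device (file names here are arbitrary):

```shell
# 4 KB of zeros, written 512 bytes at a time
dd if=/dev/zero of=sample.bin bs=512 count=8 status=none

# Same shape as an MBR backup: copy only the first 512-byte block
dd if=sample.bin of=first-block.bak bs=512 count=1 status=none
ls -l sample.bin first-block.bak
```

Swapping sample.bin for /dev/sda turns the second command into a real MBR backup, which is exactly why the of= target deserves a second look before pressing Enter.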

Note: Always verify commands before execution, especially with dd, as incorrect usage can cause data loss.
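The create/list/extract tar options above can be strung together into a round trip on a scratch directory (paths arbitrary):

```shell
mkdir -p project/docs
echo "hello" > project/docs/readme.txt

tar --xattrs -cvpf project.tar project      # create, preserving perms/xattrs
tar -tf project.tar                         # list contents
mkdir -p restore
tar --xattrs -xvpf project.tar -C restore   # extract to another location
```

Adding -z, -j, or -J to the create step would produce the .tar.gz, .tar.bz2, or .tar.xz variants compared above.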

Compressors in Linux

  1. Using Compressors without Tar
  • Suitable when only file size reduction is needed
  • Not concerned with metadata (ownership, permissions, timestamps)
  • Some compressors don’t handle recursive directories well
  2. Common Compressors:

a) Gzip

  • Create: gzip filename
  • Decompress: gunzip filename.gz
  • Original file is removed after compression

b) Bzip2

  • Create: bzip2 filename
  • Decompress: bunzip2 filename.bz2
  • Similar behavior to Gzip

c) XZ

  • Create: xz filename
  • Decompress: unxz filename.xz
  • Similar behavior to Gzip/Bzip2

d) ZIP

  • Create: zip archive.zip filename
  • Decompress: unzip archive.zip
  • Keeps original file intact
  • Similar compression size to Gzip
  • Useful in mixed OS environments (Windows/Linux)
  3. Best Practices:
  • For directories with metadata preservation: Use Tar with compressors
  • For Windows compatibility: Consider ZIP
  • Better cross-platform solution: Use 7-Zip on Windows (supports Linux formats)
  4. Key Differences:
  • Gzip/Bzip2/XZ remove original file after compression
  • ZIP/Tar preserve original file and create new archive
  • Tar better for preserving metadata and handling recursive directories
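The remove-the-original behavior of gzip is easy to confirm on a scratch file (file name arbitrary):

```shell
# Make a compressible test file
for i in $(seq 1 100); do echo "some repetitive text"; done > notes.txt

gzip notes.txt         # notes.txt is replaced by notes.txt.gz
ls notes.txt.gz
gunzip notes.txt.gz    # notes.txt.gz is replaced by notes.txt
ls notes.txt
```

bzip2/bunzip2 and xz/unxz follow the same replace-in-place pattern, while zip would have left notes.txt untouched beside the new archive.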

File Copying Between Linux Hosts

  1. SCP (Secure Copy)
  • Uses SSH authentication and tunnel
  • Basic syntax: scp [options] [local file] [username@hostname:remote path]
  • Key options:
    • -P: Specify remote SSH port
    • -p: Preserve permissions/timestamps
    • -r: Recursive copying
    • -C (capital): File compression
    • -c: Specify cipher
    • -i: Identity file for passwordless auth
    • -o: Additional SSH options
  2. SSH Direct Method
  • Alternative to SCP
  • Can pipe data directly using SSH
  • More flexible but requires more typing
  • Example: cat /etc/hosts | ssh user1@dbhost1 “cat > /home/user1/hosts”
  • Can handle block device copying using dd
  3. NCAT (Network Cat)
  • Simpler but less secure method
  • Requires port access (default 8080)
  • No security features or error checking
  • Only recommended for local networks
  • Setup:
    • Receiving end: ncat -l 8080 > ~/file
    • Sending end: ncat [IP] 8080 < [file]
  4. RSYNC
  • Intelligent file copying tool

  • Features:

    • Compares files before copying
    • Checksums verification
    • Skips existing identical files
    • Local and remote copying capability
  • Local copying syntax:

    • Basic: Similar to cp command
    • Directory copying: Trailing slash matters
      • With slash: Copies contents only
      • Without slash: Copies directory and contents
  • Remote copying:

    • Uses SSH protocol
    • Syntax: rsync [options] [local] [user@host:remote]
    • Common options:
      • -a: Archive mode
      • -v: Verbose
      • --progress: Show progress
      • -e ssh: Specify remote shell (SSH)
      • --acls: Preserve access control lists
      • --xattrs: Preserve extended attributes

Best Practices:

  • Use rsync for multiple files
  • Use cp for simple, single file copies
  • Learn rsync basics despite its complexity
  • Consider permissions and ownership when copying

10. Manage Software

Software Management Systems in Linux

  1. Traditional Software Installation
  • Linux uses repository-based software management
  • Similar to modern app stores
  • Software downloaded from remote secure repositories
  • Repositories contain packages and indexes
  • Cryptographically signed for security
  2. Package Formats
  • Debian: .deb packages
  • Red Hat & SUSE: .rpm packages
  • Package Management Tools:
    • Debian: Advanced Package Tool (APT)
    • Red Hat: DNF (replaced Yum)
    • SUSE: Zypper
  3. Package Components
  • Contains: binary programs, documentation, configuration files
  • Installation instructions included
  • Some include installation scripts
  • Built-in dependency systems
  • Automatic installation of required packages
  4. Source Code Installation
  • Used when software unavailable in repositories
  • Requires compilation using development tools
  • Not recommended unless necessary
  • Common for kernel drivers
  5. Sandboxed Applications
  • New category of software packaging

  • Uses containers including all dependencies

  • Distribution-agnostic

  • Three main formats: a) Snapd

    • Created by Ubuntu developers
    • Supports multiple versions
    • Can replace traditional packages
    • Available on Ubuntu, Debian, Linux Mint, RHEL

    b) Flatpak

    • Distribution-agnostic
    • Decentralized system
    • Uses “remotes” (repositories)
    • Flat Hub is main repository
    • For user software only

    c) Appimage

    • Single-file applications
    • No installation required
    • Similar to Windows .exe
    • Central repository: Appimage Hub
  6. Best Practices
  • Use repository packages when possible
  • Use containerized applications as secondary option
  • Compile from source only when necessary
  • Prioritize security and stability

DNF Overview

Core Features:

  • Successor to YUM (Yellowdog Updater, Modified)
  • Default package manager in CentOS 8
  • YUM command redirects to DNF
  • Manages RPM packages and repositories
  • Automatically resolves dependencies
  • Handles software package groups

Key Functions:

  1. Repository Management
  • Maintains local list of software repositories
  • Users can modify repository configuration
  • Caches repository package lists locally
  • Updates cache during installations
  2. Package Installation Process:
  • Contacts known repositories
  • Updates local package lists
  • Allows package selection via CLI or GUI
  • Calculates dependencies
  • Downloads required packages
  • Installs using RPM libraries
  • Updates local package database
  3. Software Groups Feature:
  • Groups contain multiple related packages
  • Allows bulk installation/removal
  • Includes optional software components
  • Simplifies system configuration
  • More efficient than individual package installation

Differences from RPM:

  • Repository-centric approach
  • Better dependency management
  • Group package handling
  • Automated package downloads

Compatibility:

  • Used in Red Hat-like distributions (Fedora, CentOS)
  • Compatible with RPM package format
  • Main difference from YUM is dependency calculation algorithms

DNF Package Selection Options

  1. Package Selection Granularity:
  • Can select exact package versions/releases
  • Useful for installing 32-bit versions on 64-bit systems
  • Allows detailed listing and specification in DNF operations
  2. Selection Formats:

a) By Name Only:

  • Basic package name
  • Shows all matches with --show-duplicates

b) Name and Architecture:

  • Format: name.architecture
  • Example: xfsprogs.x86_64

c) Name and Version:

  • Format: name-version
  • Example: xfsprogs-5.0.0

d) Name, Version, and Release:

  • Format: name-version-release
  • Example: xfsprogs-5.0.0-2.el8

e) Name, Version, Release, and Architecture:

  • Format: name-version-release.architecture
  • Note: Uses dot (.) before architecture

f) With EPOCH Number:

  • Format: name-epoch:version-release.architecture
  • EPOCH overrides normal version comparison
  • Strict format requirements
  • Not common in most packages
  3. Additional Features:
  • File glob support (*, [])
  • Example: matches version numbers and releases
  • DNF automatically selects correct architecture based on OS
  • Can override architecture selection manually
  4. Usage:
  • Works with most DNF operations (install, remove, list)
  • --show-duplicates flag shows all possible matches
  • Version numbers may vary based on when system is updated
  5. Finding EPOCH Numbers:
  • Command: dnf list installed
  • EPOCH numbers appear before version numbers with colon
  • Used rarely, mainly for override purposes

DNF Information Commands

  1. DNF List
  • Shows basic package info (name, version, repository)
  • Default columns: package name/architecture, version/release, repository source
  • Common options:
    • --all (default): Shows all packages
    • --showduplicates: Shows all versions and architectures
    • --installed: Only installed packages
    • --updates: Only available updates
    • --available: Uninstalled packages in repos
    • --obsoletes: Replaced packages
  2. DNF Info
  • Provides detailed package information including:
    • Name, version, release
    • Size
    • Source
    • Repository
    • Summary
    • License
    • Description
  • Shows both installed version and available updates
  • Can use same options as DNF List
  • Similar to "rpm -qi" output
  3. DNF Deplist
  • Lists package dependencies
  • Shows:
    • Required dependencies
    • Packages providing those dependencies

Visual Indicators (CentOS 8):

  • Green/underlined: Currently installed version
  • Blue: Update available
  • Repository “Anaconda”: Installed during OS installation

Note: Colors may vary by distribution version.

DNF Package Groups

  • Package groups are pre-configured collections of packages that can be installed/removed together
  • Serve common purposes (e.g., “Development Tools” for compilers and coding tools)

Listing Package Groups:

  • Command: “dnf group list”
  • Alternative older syntax: “grouplist” (one word)
  • Shows four categories:
    1. Available environment groups
    2. Installed environment groups
    3. Installed groups
    4. Available groups

Hidden Groups:

  • Command: “dnf group list hidden”
  • Shows additional specialized groups
  • Includes security tools, performance tools, mail server, virtualization
  • Usually more relevant during installation

Group Information:

  • Command: “dnf group info [group name]”
  • Use quotes for group names with spaces
  • Case-insensitive search
  • Shows three package categories:
    1. Mandatory packages
    2. Default packages
    3. Optional packages

Installation Behavior:

  • CentOS 8 default: Installs mandatory and default packages automatically
  • Optional packages require configuration change or "--with-optional" flag
  • Group names with spaces need double quotes

DNF Operations

Searching Packages with DNF:

  • Basic search: dnf search package_name
    • Searches name and summary only
    • Case-insensitive
  • Search all metadata: dnf search --all package_name
  • List with wildcards: dnf list --all 'pattern*' (quote the glob so the shell doesn't expand it)
  • Find package providing command: dnf provides command_name

Installing/Removing Packages:

  • Basic install: sudo dnf install package_name
  • Non-interactive install: sudo dnf install -y package_name
  • Add third-party repo: sudo dnf install epel-release
  • Reinstall package: sudo dnf reinstall package_name
  • Skip broken dependencies: --skip-broken option
  • Upgrade package: sudo dnf upgrade package_name
  • Remove package: sudo dnf remove package_name
  • Remove unused dependencies: sudo dnf autoremove

Package Groups Management:

  • List groups with IDs: dnf group list ids
  • Group identification methods:
    1. Group name in quotes: "Security Tools"
    2. Group ID: security-tools
    3. @ symbol prefix: @"Security Tools"
    4. @ with ID: @security-tools
  • Install group: sudo dnf group install group_name
  • Upgrade group: sudo dnf group upgrade group_name
  • Remove group: sudo dnf group remove group_name
  • Clean unused dependencies after group removal: dnf autoremove
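Assuming a group named "Security Tools" (ID security-tools) exists in the enabled repositories, the four naming methods look like this in practice:

```shell
dnf group list ids                       # names together with their IDs

sudo dnf group install "Security Tools"  # by quoted name
sudo dnf group remove security-tools     # by ID

sudo dnf install @"Security Tools"       # @ prefix with plain install
sudo dnf remove @security-tools          # @ prefix with the ID
sudo dnf autoremove                      # clean leftover dependencies
```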

Get Package Information with RPM

  1. RPM Package Database Querying
  • RPM used for installing local software packages on Enterprise Linux
  • Can query: package database, packages directly (even if not installed), and files
  • File queries only work for files belonging to software packages
  2. Basic Query Commands
  • rpm -qa: Query all packages
  • rpm -qa | sort: Get sorted list of packages
  • rpm -qi [package]: Show detailed package information
  • rpm -qa --last: Show packages by install date
  • rpm -ql [package]: List all installed files
  • rpm -qd [package]: Show documentation files
  • rpm -qc [package]: Show configuration files
  3. File-Related Queries
  • rpm -qf [file]: Find package owning a file
  • rpm -qdf [file]: Show documentation for specific file
  • rpm -q --provides [package]: Show what package provides
  • rpm -q --requires [package]: Show package dependencies
  • rpm -q --changelog [package]: View package change history
  4. Querying Uninstalled Packages
  • Use -p option with rpm commands
  • Example: rpm -qip [package file]
  • rpm -qlp [package file]: List files in uninstalled package
  5. Advanced Querying with Tags
  • Use rpm --querytags to list available tags
  • Query format example: rpm -qa --queryformat "%{NAME} %{VERSION}\n"
  • Can format output with specific columns and information
  • Arrays in tags may need additional formatting
  6. Additional Features
  • Can combine multiple query options
  • Documentation available in man pages
  • Search for query options in uppercase for more details
  7. Practical Tips
  • Create directories for downloaded packages
  • Use DNF download option to get packages without installation
  • Tab completion helps with package names
  • Format queries for better readability and specific information
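A sketch of the tag-based queries above; bash is just an example package, and output will vary by system:

```shell
rpm --querytags | head                                  # available tags

# Name and version of every installed package
rpm -qa --queryformat '%{NAME} %{VERSION}\n' | sort | head

# Padded columns: left-align the name in 30 characters
rpm -qa --queryformat '%-30{NAME} %{VERSION}-%{RELEASE}\n' | head

# Download without installing, then query the local file
mkdir -p ~/rpms && cd ~/rpms
dnf download bash
rpm -qip bash-*.rpm
```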

Managing DNF Repositories

Location and Configuration:

  • Repository configs stored in /etc/yum.repos.d
  • Files must end in .repo to be recognized by DNF

Repository File Structure (Example: rocky.repo):

  1. Repository name in square brackets
  2. Descriptive name
  3. Mirror list
  4. URL (if no mirror list)
  5. gpgcheck boolean (0=off, 1=on)
  6. Enable/disable status
  7. Traffic monitoring setting
  8. Metadata expiration time
  9. GPG key file location

Key Commands:

  • dnf repolist: Lists enabled repositories
  • dnf repolist -v: Verbose mode, shows detailed info
  • dnf repolist --disabled: Shows disabled repositories
  • sudo dnf config-manager --set-enabled/--set-disabled: Enable/disable repositories
  • --enablerepo/--disablerepo: Temporary repository enabling/disabling for single operations

Adding New Repositories:

  1. Package installation method:

    • Example: sudo dnf install epel-release
    • Verify contents: rpm -ql epel-release
  2. Manual methods:

    • Create .repo file manually using VI
    • Copy/paste from internet
    • Use dnf config-manager with repository URL
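For reference, a hand-written .repo file in /etc/yum.repos.d/ might look like the sketch below; the repository name, URL, and key path are placeholders, not a real repository:

```ini
[example-repo]
name=Example Repository for Enterprise Linux
baseurl=https://repo.example.com/el8/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-example
metadata_expire=6h
```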

Repository Package Listing:

  • Command: dnf repository-packages [repo-name] list --all

Safety Considerations:

  • Be cautious with external repositories
  • Stick to main repositories when possible
  • Consider disabling third-party repos for stability
  • Verify trustworthiness to avoid malware

Managing OS Updates

  1. Checking Updates
  • Use dnf check-update to view packages needing updates
  • Indented packages indicate obsolete packages
  • Verify obsolete packages with dnf list obsoletes
  2. Upgrading Packages
  • Single package upgrade: sudo dnf upgrade [package-name]
  • Full system upgrade: sudo dnf upgrade
  • Exclude packages: sudo dnf upgrade -x [package-name]
  3. Version Lock Plugin
  • Install: sudo dnf install python3-dnf-plugin-versionlock
  • Useful for preventing automatic kernel updates
  • Commands:
    • Add lock: sudo dnf versionlock add [package-version]
    • List locks: dnf versionlock list
    • Delete lock: sudo dnf versionlock delete [package]
    • Clear all locks: sudo dnf versionlock clear
  4. Security Updates
  • Install only security updates: sudo dnf upgrade --security
  5. Configuration File Handling
  • Unmodified config files: overwritten during updates
  • Modified config files:
    • .rpmsave: saved if file was from previous rpm
    • .rpmorig: saved if file was from non-rpm source
    • .rpmnew: new version saved alongside if the config file is flagged "noreplace"
  6. Change Logs
  • View package changelog: dnf changelog [package-name]
  • View pending update changelogs: dnf changelog --upgrades
  • Can pipe to grep for specific issues
  7. Kernel Management
  • Check available versions: dnf list --showduplicates kernel
  • Can lock specific kernel versions to prevent automatic updates
  • Requires reboot after kernel updates
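Putting the version-lock workflow together; pinning the kernel is the classic use case, and version strings will differ on your system:

```shell
sudo dnf install python3-dnf-plugin-versionlock

# Pin the currently running kernel release
sudo dnf versionlock add kernel-$(uname -r)
dnf versionlock list                 # confirm the lock

sudo dnf upgrade                     # locked packages are skipped

sudo dnf versionlock delete kernel-$(uname -r)   # release it later
```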

Updating the Kernel

  1. Basic Kernel Information:
  • Check installed kernel packages: dnf list kernel
  • Kernel naming convention: major.revision.patch-release (e.g., 4.18.0-193.el8)
  • Kernel files location: /boot directory
  • Check current kernel: uname -r
  2. Boot Configuration Files:
  • BIOS systems: /boot/grub2/grub.cfg
  • UEFI systems: /boot/efi/EFI/centos/grub.cfg (or /redhat/ for RHEL)
  • Main configuration file: /etc/default/grub
  • After changes, run: grub2-mkconfig to update boot loader
  3. Kernel Updates:
  • Check available updates: dnf list --available kernel
  • Update kernel: sudo dnf upgrade kernel
  • Can specify particular kernel version during installation
  • grub2-mkconfig runs automatically with new kernel installation
  4. Managing Multiple Kernels:
  • DNF can remove unused kernels but use caution
  • Configuration option: installonly_limit in /etc/dnf/dnf.conf
  • Remove old kernels command: sudo dnf remove $(dnf repoquery --installonly --latest-limit=-2 -q)
  • Preserves specified number of recent kernels
  5. Setting Default Boot Kernel:
  • Use grub2-set-default command
  • Kernel indexing starts at 0 (newest)
  • Example: sudo grub2-set-default 1 (sets second newest kernel)
  • Must run grub2-mkconfig after changing default
  • Reboot required for changes to take effect

Safety Notes:

  • Avoid using -y flag during kernel updates
  • Be careful when removing kernels
  • Don’t directly edit grub.cfg files
  • Always update grub config after changes
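Collected into one sequence, switching the default boot kernel on a BIOS system might look like this; index 1 assumes at least two installed kernels:

```shell
dnf list --installed kernel      # what's installed
uname -r                         # what's running now

sudo grub2-set-default 1         # second-newest entry (0 is newest)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo systemctl reboot            # change takes effect after reboot
```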

Managing Kernel Modules in Linux

Location of Kernel Modules:

  • Stored under /lib/modules/[kernel-version], one subdirectory per installed kernel
  • On 64-bit systems /lib is typically a symlink to /usr/lib
  • Contains directories for drivers, filesystems, network, virtualization, etc.

Key Commands:

  1. View Modules Directory:

    • ls /lib/modules/$(uname -r)/kernel
  2. List Loaded Modules:

    • lsmod
  3. Module Information:

    • modinfo [module_name]
  4. Module Management:

    • Remove module: modprobe -r [module_name]
    • Load module: modprobe -v [module_name]
    • Rebuild module dependency list (modules.dep): depmod -v

Module Loading Behavior:

  • modprobe loads required dependencies automatically
  • Modules can accept custom parameters
  • Must unload module before changing parameters
  • Won’t load if module is already active

Automatic Module Loading:

  1. At Boot:

    • Create file in /etc/modules-load.d/
    • Name must end with .conf
    • Include module name in file
  2. Prevent Loading (Blacklisting):

    • Create file in /etc/modprobe.d/
    • Format: "blacklist [module_name]"
    • File must end with .conf
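Both mechanisms are one-line plain-text files; the module name examplemod below is a hypothetical placeholder:

```
# /etc/modules-load.d/examplemod.conf  — load at every boot
examplemod

# /etc/modprobe.d/examplemod.conf      — never load automatically
blacklist examplemod
```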

Use Cases:

  • Manual module management needed for:
    • Network-added devices
    • Storage area network equipment
    • Remote printers
    • Hardware not automatically detected

Note: Modern systems rarely require manual module management, but knowledge is valuable for troubleshooting.

Get Package Information with dpkg and APT

Package Management Systems in Debian-based Distributions:

  1. dpkg (Debian Package)

    • Low-level command
    • Equivalent to RPM in Red Hat
    • Manages local packages
  2. APT (Advanced Packaging Tool)

    • Repository-based system
    • Equivalent to YUM/DNF in Red Hat
    • Newer ‘apt’ tool preferred over apt-get/apt-cache

Key APT Commands:

  • apt search: Shows all packages (installed/not installed)
  • apt show: Displays package details
  • apt list: Lists packages based on criteria
  • apt list --installed: Shows installed packages
  • apt install: Installs packages

Package Information Commands:

  1. Using APT:

    • sudo apt search [package]
    • sudo apt info [package]
    • sudo apt list [package]
  2. Using dpkg:

    • dpkg --info: Shows package information
    • dpkg -c: Lists files in package
    • dpkg -i: Installs package
    • dpkg -L: Lists installed files
    • dpkg -S: Shows which package owns a file

Special Features:

  1. Virtual Packages:
    • Debian’s alternative to Red Hat’s package groups
    • Contains no files but has dependencies
    • Installing virtual package installs all dependencies

Package Download:

  • apt-get download: Downloads package without installing
  • Works with dpkg for local package management

Note: APT doesn’t ask for confirmation when installing single packages without dependencies, but will prompt if dependencies are needed.

Managing Software with dpkg and APT

Main Tools:

  • apt-get (older tool)
  • apt (newer tool)
  • dpkg (local package tool)

Basic APT Commands:

  1. Installation

    • sudo apt install [package_name]
    • Example: sudo apt install apache2
  2. Package Information

    • sudo apt info [package_name]
    • Shows: installation status, features, dependencies, suggested packages
  3. Updates & Upgrades

    • sudo apt update (updates software indexes; run this first)
    • sudo apt list --upgradable (shows available updates)
    • sudo apt install --only-upgrade [package_name] (upgrades specific package)
    • sudo apt full-upgrade (upgrades entire distribution)
  4. Package Removal Methods: a. Using APT:

    • sudo apt remove [package_name] (keeps configuration files)
    • sudo apt purge [package_name] (removes package and configuration files)
    • sudo apt autoremove (removes unused dependencies)

    b. Using dpkg:

    • sudo dpkg -r [package_name]
    • Note: doesn’t manage dependencies
    • Better for simple removals
    • May fail with complex dependency trees

Important Notes:

  • Regular package removal keeps configuration files
  • APT manages dependencies, dpkg doesn’t
  • apt full-upgrade is equivalent to apt-get dist-upgrade
  • System may require password for sudo commands
  • Always verify package information before installation/removal
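A typical install-to-removal lifecycle on a Debian-based system, using apache2 as the example package:

```shell
sudo apt update                      # refresh package indexes first
apt info apache2                     # review before installing
sudo apt install apache2

apt list --upgradable                # later: pending updates
sudo apt install --only-upgrade apache2

sudo apt purge apache2               # remove package + config files
sudo apt autoremove                  # drop now-unused dependencies
```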

Working with APT Repositories

  1. Software Installation Best Practices
  • Avoid installing software from websites
  • Use official software repositories
  • Ensures digital signature verification
  • Prevents tampering risks
  2. Repository Sections
  • Main: Ubuntu supported software
  • Universe: Open source community software
  • Restricted: Closed source drivers
  • Multiverse: Proprietary/non-open source software
  3. Repository Channels
  • Release: Standard channel without updates
  • Release-updates: Package updates
  • Release-security: Security updates only
  • Release-backports: Newer packages rebuilt (backported) for the stable release
  • Devel: Development software (potentially buggy)
  4. Ubuntu Version Repositories
  • Version naming: Year.Month (e.g., 22.04)
  • Example versions:
    • 18.10 Cosmic Cuttlefish
    • 19.04 Disco Dingo
    • 20.04 Focal Fossa
    • 22.04 Jammy Jellyfish
  5. Repository Configuration
  • Located in:
    • /etc/apt/sources.list
    • /etc/apt/sources.list.d/ directory
  • View repositories:
    • Command: apt policy
    • Command: grep '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/*
  6. Adding Repositories a) Manual methods:
    • Edit sources.list file
    • Download repo file to sources.list.d

b) Using commands:

  • add-apt-repository command
  • Adding GPG keys:
    • Traditional: apt-key add
    • New method: store in /etc/apt/trusted.gpg.d/
  7. PPA (Personal Package Archive)
  • Focused repositories often run by software authors
  • Adding PPA: sudo add-apt-repository ppa:[name]
  • Removing PPA:
    • Simple removal: add-apt-repository --remove
    • Complete removal with packages: ppa-purge
  8. Repository Management
  • Can remove repositories by:
    • Commenting out/deleting lines
    • Using command line tools
    • Using ppa-purge for complete removal
  • Note: Removing repository stops updates for installed software
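Adding and later removing a PPA might look like this; the PPA name is a placeholder, not a real archive:

```shell
sudo add-apt-repository ppa:someuser/someproject
sudo apt update

# Remove the repository but keep its packages
sudo add-apt-repository --remove ppa:someuser/someproject

# Or downgrade/remove its packages as well
sudo apt install ppa-purge
sudo ppa-purge ppa:someuser/someproject
```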

Installing Software from Source Code

Key Points:

  1. Recommended Practice
  • Prefer installing from package repositories
  • Source code installation lacks security updates and GPG-signed package protection
  2. Prerequisites (Enterprise Linux Example)
  • Install development tools: sudo dnf group install -y "Development tools"
  • Install dependencies: sudo dnf install libcurl-devel
  3. Process Steps: a) Download Source Code
  • Visit repository (e.g., github.com/git/git/tags)
  • Download tar.gz file
  • Can use browser or wget/curl command-line tools

b) Extract Archive

  • Navigate to download directory: cd ~/Downloads
  • Extract using tar: tar -xzvpf [filename]
  • Enter source directory: cd [extracted-directory]

c) Compilation Steps

  1. Create the configure script: make configure

  2. Run the configure script: ./configure --prefix=/usr/local

  3. Compile: make

  4. Install: sudo make install

d) Verification

  • Check installation: /usr/local/bin/[program] --version
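The whole sequence for the Git example, collected into one script; the version number is the one used in the lesson and will differ for current releases:

```shell
cd ~/Downloads
tar -xzvpf git-2.39.0.tar.gz
cd git-2.39.0

make configure                      # generate the configure script
./configure --prefix=/usr/local     # set the install target
make                                # compile
sudo make install                   # install

/usr/local/bin/git --version        # verify
```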

Important Considerations:

  • Package names may vary by distribution
  • Documentation might be distribution-specific
  • Not recommended for production machines unless necessary
  • May require detective work to compile successfully
  • Process is similar across Linux distributions with minor variations

Example Used: Git Installation

  • Repository version: 2.31.1
  • Source code version: 2.39.0
  • Installation location: /usr/local/bin

Sandboxed Applications

Definition & Benefits:

  • Uses OS containers to include all dependencies in one distribution-agnostic file
  • Allows multiple versions of same software with different library versions
  • Distribution-independent

Drawback:

  • Contains redundant software as shared libraries are included in each package

Main Container Formats:

  1. snapd
  • Created by Ubuntu developers
  • Available on Ubuntu 18.04+ and Enterprise Linux 7.6+
  • Installation:
    • sudo dnf install snapd
    • Enable socket: sudo systemctl enable --now snapd.socket
  • Key commands:
    • snap find [package] - search packages
    • snap info [package] - get package info
    • snap install [package] - installation
    • snap list - show installed packages
    • snap refresh - update packages
    • snap remove - uninstall packages
  2. Flatpak
  • Decentralized packaging system
  • Uses remotes (repositories)
  • Installation:
    • sudo dnf install flatpak
    • Add Flathub remote (most popular)
  • Key commands:
    • flatpak search [package] - find packages
    • flatpak install [ID] - installation
    • flatpak info [ID] - package information
    • flatpak update [ID] - update packages
    • flatpak list - show installed packages
    • flatpak run [ID] - run application
    • flatpak uninstall [ID] - remove package
  • Uses reverse DNS format for package IDs
  3. AppImage
  • Simplest approach: one file per application
  • Installation process:
    • Download package
    • Make executable
    • Run as regular user
  • Removal: Simply delete the file
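A short Flatpak session; the Flathub URL and GIMP's application ID (org.gimp.GIMP) are real, but any application from the remote works the same way:

```shell
flatpak remote-add --if-not-exists flathub \
    https://flathub.org/repo/flathub.flatpakrepo

flatpak search gimp
flatpak install flathub org.gimp.GIMP
flatpak run org.gimp.GIMP

flatpak list
flatpak update
flatpak uninstall org.gimp.GIMP
```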

Historical Context:

  • Multiple package formats evolved due to parallel development
  • Different distributions (Debian, Red Hat, Slackware) created separate solutions
  • Current formats provide similar features
  • Sandboxed applications aimed to unify package formats but resulted in multiple container formats

11. File Security

Files in Linux Operating Systems

  1. Basic File Concept
  • A file is a chunk of data containing information (text/binary)
  • Contains metadata describing file properties
  • Multiple formats exist for various data types
  2. Types of Files in Linux

a) Regular Data Files

  • Can be binary or text
  • Stored on storage devices
  • Has read/write permissions
  • Contains metadata (size, creation date, access rights)

b) Directories

  • Lists of other files
  • Organizational tool
  • Can be created, moved, deleted
  • Associates data blocks with file names

c) Block Device Files

  • Represents physical storage devices
  • Allows direct reading/writing to hardware
  • Used for tasks like computer forensics

d) Character Device Files

  • Can be physical or virtual devices
  • Examples:
    • Printer devices
    • Zero file (endless supply of zeros)
    • Null file (discards all input)
    • Screen display
    • Virtual files (CPU Info)

e) Network Sockets

  • Used for network communications
  • Enable program-to-program communication

f) Pipe Files

  • Enable direct data transfer between applications
  • No physical storage needed
  • Faster data transfer
  3. Linux’s “Everything is a File” Philosophy
  • Inherited from Unix
  • Includes physical devices, screen, and hardware
  • Application outputs treated as unsaved files
  4. Advantages
  • Simple tools can view system information
  • Easy writing to physical devices
  • Efficient command communication
  • Allows combining simple commands for complex tasks
  • Streamlined system interaction

This approach distinguishes Linux from other operating systems and provides powerful functionality through simplification and uniformity.

File Information in Linux

Metadata

  • Metadata is data that describes other data
  • Associated with files alongside their main content
  • Contains file attributes like name, size, permissions, ownership, access time

Viewing File Information

  1. Long List Command (ls -l)
  • Shows detailed file information
  • Format: ls -l [filename]
  • Displays:
    • File type (-, b, c, d, l, n, p, s)
    • Permissions (user, group, other)
    • Number of hard links
    • User owner
    • Group owner
    • File size (bytes)
    • Last modified date/time
    • Filename
  2. File Command
  • Shows file type
  • Format: file [filename]
  • Determines type by examining file bits, not extension
  • More accurate than extension-based identification
  3. Stat Command
  • Provides comprehensive metadata
  • Format: stat [filename]
  • Shows:
    • File name
    • Size (bytes)
    • File system blocks
    • IO block size
    • File type
    • Device number
    • Inode number
    • Hard links count
    • Permissions
    • User/Group ID numbers
    • SELinux context
    • Access times (last accessed, modified, attribute changes)
    • Birth (creation) time: a Unix legacy field, historically unsupported on Linux and often shown blank

Hidden Files

  • Identified by dot (.) at start of filename
  • View with ls -la command
  • No special metadata flag for hidden status

Inodes

  • Store file metadata
  • Don’t store filename
  • Directory inodes contain list of names and inode numbers
  • Same inode numbers possible on different drives
  • Unique identification through device number + inode number combination
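The name-vs-inode split can be observed directly with a hard link, which gives one inode a second name; runnable as a normal user:

```shell
dir=$(mktemp -d)
touch "$dir/a.txt"
ln "$dir/a.txt" "$dir/b.txt"     # hard link: second name, same inode

ls -i "$dir"                     # both names show the same inode number
stat -c 'inode %i, %h hard links' "$dir/a.txt"
```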

Extended Attributes in Linux

  1. Basic File Attributes
  • Standard attributes: user owner, group owner, permissions
  • Viewable through ls file and stat commands
  2. Types of Extended Attributes a) Extended System Attributes
  • Stores Access Control Lists (ACLs)
  • Provides additional layer of discretionary access control
  • Allows permissions for multiple users and groups
  • Enables inheritance of permissions from parent directory
  • Facilitates backup and restoration of permissions

b) Extended Security Attributes

  • Contains SELinux security context
  • Features:
    • Mandatory access control system
    • System-wide rules affecting all users
    • Multi-level security system
    • Role-based access control
    • Type enforcement (used in Enterprise Linux)

c) Extended User Attributes

  • Special flags for files:
    • Append only: allows adding data without overwriting original
    • Compressed: automatic compression/decompression
    • Immutable: prevents modification, deletion, renaming (even by root)
    • Backup: enables file recovery after deletion
  3. Important Notes
  • Most file systems support extended attributes
  • Not all attributes are supported by all file systems/operating systems
  • These attributes enhance Linux system security
  • SELinux is complex and provides additional security layers
  • Discretionary access control gives file owners control over permissions
  • Mandatory access control implements system-wide rules

Getting extended attributes

File Attributes & ACLs:

  • Use ls and stat commands for basic file attributes
  • Additional tools needed for extended attributes

Creating & Viewing ACL Files:

  1. Create file with ACL:

    • touch ACLfile.txt (creates empty file)
    • ls -l ACLfile.txt (view initial permissions)
    • setfacl -m user:root:rwx ACLfile.txt (set ACL)
  2. Viewing ACLs:

    • getfacl -t ACLfile.txt (shows standard permissions & ACLs)
    • ls -l shows ‘+’ symbol indicating ACL presence

SELinux Context:

  • View using ls -Z ACLfile.txt
  • Shows four columns:
    1. User (unconfined)
    2. Role (object)
    3. Type (user_home_t)
    4. Security level (s0)
  • Context automatically set based on SELinux policy database

Extended User Attributes:

  1. Setting attributes:

    • sudo chattr +i ACLfile.txt (sets immutable flag)
  2. Viewing attributes:

    • lsattr ACLfile.txt (shows attributes)
    • lsattr -l ACLfile.txt (verbose output)

Summary of Commands:

  • ls -l: Verify ACL existence
  • getfacl: View access control list
  • ls -Z: View SELinux security context
  • lsattr: View extended user attributes
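The session above, condensed into one sequence; setfacl/getfacl come from the acl package, and chattr needs root:

```shell
touch ACLfile.txt
setfacl -m user:root:rwx ACLfile.txt   # add an ACL entry for root
getfacl ACLfile.txt                    # standard permissions + ACLs
ls -l ACLfile.txt                      # '+' after the mode marks the ACL
ls -Z ACLfile.txt                      # SELinux context (SELinux systems)

sudo chattr +i ACLfile.txt             # set the immutable flag
lsattr ACLfile.txt                     # 'i' appears in the listing
sudo chattr -i ACLfile.txt             # clear it so the file can be deleted
```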

Linux Permissions System

Origin & Background:

  • Derived from Unix, ~40 years old
  • Proven and effective for most use cases

Key Features:

  1. User & Group Structure
  • Users can join multiple groups
  • Groups cannot contain other groups
  • Files/directories have single user owner
  • Files/directories have single group owner
  2. Permission Levels:
  • User owner
  • Group owner
  • Other (non-owner users/groups)
  3. Permission Types: Files:
  • Read
  • Write
  • Execute

Directories:

  • List contents
  • Create new files
  • Traverse directories
  4. Additional Capabilities:
  • Privilege escalation to user/group owner
  • Group owner inheritance from parent directory
  • Customizable default file permissions per user

Limitations:

  1. Ownership Restrictions:
  • Single user ownership only
  • Single group ownership only
  2. Inheritance Limitations:
  • Only group ownership inherits
  • Permissions don’t inherit
  3. Other Issues:
  • “Other” permissions lack specificity
  • No permission backup/restore system
  • No temporary permission restriction mechanism

Note: Access Control Lists (ACLs) can address most limitations mentioned above.

File and Directory Modes in Linux

  1. Standard Linux Permissions:

    • Three basic modes: Read, Write, Execute
  2. File Permissions:

    • Read: Open and read file contents
    • Write: Modify or write to file contents
    • Execute: Run file as application (e.g., ls command, Firefox)
      • Executed files are loaded into memory and run until stopped
  3. Directory Permissions:

    • Read:

      • List directory contents
      • View metadata of files/directories
      • Without read access, ls shows question marks instead of metadata
    • Write:

      • Create new files in directory
      • Write to directory
    • Execute:

      • Enter or traverse directory
      • Not for running directory as command
      • Required for directory navigation

File Ownership

File Ownership Basics:

  • Each file has exactly one user owner and one group owner
  • Visible in long listing format (ls -l)
  • User owner: 3rd column from left
  • Group owner: 4th column from left

Using chown Command:

  • Basic syntax: chown [options] username:groupname filename
  • Requires root privileges or sudo
  • Three ways to change ownership:
    1. User only: chown user1 file.txt
    2. Group only: chown :group file.txt
    3. Both user and group: chown user:group file.txt (can use . instead of :)

Important Notes:

  • Users and groups must exist before assigning ownership
  • View existing users in /etc/passwd
  • View existing groups in /etc/group
  • Common option: -R (recursive) Example: chown -R username:group /home/username

Practical Exercise Steps:

  1. Create directory: mkdir ownexercise
  2. Create file: touch file.txt
  3. Create new user: sudo useradd testuser
  4. Create new group: sudo groupadd testgroup
  5. Change ownership: sudo chown testuser:testgroup file.txt
  6. Verify changes: ls -l

Additional Information:

  • Detailed information available in man chown
  • For user and group management details, refer to “Linux: User and Group Management” course

Permissions in Linux - Numeric Method

  1. File Listing Components (ls -l):
  • 10 characters on left side
  • First character: File type (- for file, d for directory, l for symbolic link, c for character device, p for pipe, b for block device)
  • Next 9 characters: Three groups of three (user, group, other)
  2. Numeric Values:
  • Read (r) = 4
  • Write (w) = 2
  • Execute (x) = 1
  • Example: 750 (User=7, Group=5, Other=0)
  3. chmod Command:
  • Syntax: chmod [options] permissions filename
  • Example: chmod 750 file.txt
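Working through 750 digit by digit and verifying with stat; runnable as a normal user on any file:

```shell
f=$(mktemp)
# user:  4+2+1 = 7  (rwx)
# group: 4+1   = 5  (r-x)
# other: 0     = 0  (---)
chmod 750 "$f"
ls -l "$f"          # mode column reads -rwxr-x---
stat -c %a "$f"     # prints 750
```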

Permissions in Linux - Symbolic Method

  1. Symbolic Notation:
  • u = user owner
  • g = group owner
  • o = other
  • r = read
  • w = write
  • x = execute
  2. Operators:
  • = (set exact permissions)
  • + (add permissions)
  • - (remove permissions)
  3. Usage Examples:
  • chmod u=rwx,g=rx,o= file.txt (set specific permissions)
  • chmod g-x file.txt (remove execute from group)
  • chmod a-x file.txt (remove execute from all)
  4. Additional Features:
  • Recursive option: -R
  • Can combine positions (ugo)
  • ‘a’ represents all positions (user, group, other)

Key Advantages:

  • Numeric: Shorter to type
  • Symbolic:
    • Easier to understand
    • Can add/remove specific permissions without knowing current state
    • More practical for recursive operations

Note: Initial permissions are based on umask value (covered separately)

Initial Permissions & Umask

  1. Basic Concepts:
  • Initial permissions are automatically applied when files are created
  • Permissions are calculated using umask (bit mask)
  • View umask: Type “umask” or “umask -S” (symbolic notation)
  • Umask format: 3-4 characters (leading zero optional)
  2. Directory Permissions Calculation:
  • Maximum initial permissions for directories: 777
  • Example calculation:
    • Umask: 022
    • 777 - 022 = 755 (rwx,rx,rx)
  • Verification:
    • mkdir umaskdir
    • ls -l shows rwxr-xr-x (755)
  3. File Permissions Calculation:
  • Maximum initial permissions for files: 666
  • Execute permissions disabled by default for security
  • Example calculation:
    • Umask: 022
    • 666 - 022 = 644 (rw,r,r)
  • Verification:
    • touch umaskfile.txt
    • ls -l shows rw-r--r-- (644)
  4. Changing Umask: a) Temporary Change:
  • Command: umask 0002
  • Affects current session only

b) Permanent Change (User-specific):

  • Edit ~/.bashrc
  • Add: umask 0002

c) System-wide Change:

  • Edit /etc/profile.d/umask.sh
  • Example configuration:
if [ "$UID" -ge 1000 ]; then
    umask 0002
fi
  • Takes effect after next login
  • Conditional setting based on user ID
  5. Effects of Umask 0002:
  • Directory permissions: rwx (user), rwx (group), rx (other)
  • File permissions: rw (user), rw (group), r (other)
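The calculations above can be checked directly; the umask change here affects only the current shell session:

```shell
workdir=$(mktemp -d)
cd "$workdir"

umask 0022
mkdir umaskdir            # 777 masked by 022 -> 755 (rwxr-xr-x)
touch umaskfile.txt       # 666 masked by 022 -> 644 (rw-r--r--)

stat -c '%a %n' umaskdir umaskfile.txt
```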

Special File Bits - SUID and SGID

  1. Special Bits Overview:
  • SUID (Set User ID): Runs executable as file’s user owner
  • SGID (Set Group ID): Runs executable as file’s group owner
  • Sticky bit: ignored on files in Linux (historical Unix feature; still meaningful on directories)
  2. Permission Indicators:
  • SUID: Shown as ’s’ in user owner’s execute position
  • SGID: Shown as ’s’ in group owner’s execute position
  • Lowercase ’s’: Execute bit is also set
  • Uppercase ‘S’: Execute permissions not set
  3. Numeric Values:
  • Standard permissions: Read(4), Write(2), Execute(1)
  • Special bits: SUID(4), SGID(2), Sticky(1)
  • Example format: 4755 (4=SUID, 7=rwx for owner, 5=rx for group/others)
  4. Setting Special Bits: SUID:
  • Numeric mode: chmod 4755 [file]
  • Symbolic mode: chmod u+s [file]

SGID:

  • Numeric mode: chmod 2755 [file]
  • Symbolic mode: chmod g+s [file]
  5. Finding Special Bit Files:
  • Find SUID files: sudo find / -perm -4000
  • Find SGID files: sudo find / -perm -2000
  6. Security Implications:
  • Special bits allow privilege escalation without password
  • Important to track locations of SUID/SGID files for security
  • Regular users can execute commands with elevated privileges

Example: /usr/bin/su command:

  • Shows as bright red (indicating special permissions)
  • Has SUID bit set (runs as root when executed)
  • Permissions: rws (user), r-x (group), r-x (others)
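A small sketch of the numeric and symbolic forms above, using a harmless empty file in a temp directory rather than a real system binary (assumes GNU coreutils and findutils):

```shell
# Numeric and symbolic modes set the same SUID bit; find locates it by permission.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 4755 "$tmp/demo"                    # numeric: SUID + rwxr-xr-x
mode_numeric=$(stat -c '%a' "$tmp/demo")  # 4755
chmod 755 "$tmp/demo"                     # clear, then set the bit symbolically
chmod u+s "$tmp/demo"
mode_symbolic=$(stat -c '%a' "$tmp/demo") # also 4755
found=$(find "$tmp" -perm -4000)          # matches any file with SUID set
ls -l "$tmp/demo"                         # shows rws in the owner triplet
rm -rf "$tmp"
```

The same `find -perm -4000` pattern, run from `/` with sudo, is the audit command given above.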

Special Directory Bits - SGID and Sticky

SGID (Set Group ID) on Directories:

  • SUID has no effect on directories
  • SGID enables group inheritance for files/directories created inside the directory
  • Files created inherit the parent directory’s group ownership
  • Useful for group collaboration

Setting up SGID Directory Example:

  1. Create directory: sudo mkdir /home/accounting
  2. Create group: sudo groupadd accounting
  3. Change group ownership: sudo chown :accounting /home/accounting
  4. Set SGID bit: sudo chmod 2770 /home/accounting
  • Results in rwx for root user, rwx for accounting group, no permissions for others
  • 's' appears in the group execute position

Sticky Bit on Directories:

  • Only a file's owner (or root) can delete or rename files in the directory
  • Set by adding '1' to the left of the permissions (e.g., 1777)
  • Represented by 't' or 'T' in the other execute position
  • Commonly used in world-writable directories like /tmp

Example Setup:

  1. Create directory: mkdir stickydir
  2. Set sticky bit: sudo chmod 1777 stickydir
  • All users get full permissions
  • Only file owners can delete their files
  • Other users cannot delete/move files despite having rwx permissions

Testing:

  • Create test file as user1: touch file.txt
  • Set permissions: chmod 777 file.txt
  • Try deleting as different user (ted)
  • Deletion fails despite full permissions due to sticky bit
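The mode and its listing can be verified without a second user account. A minimal sketch, assuming GNU coreutils:

```shell
# Set the sticky bit on a world-writable directory and check how it displays.
tmp=$(mktemp -d)
mkdir "$tmp/stickydir"
chmod 1777 "$tmp/stickydir"                      # rwxrwxrwt, like /tmp
mode=$(stat -c '%a' "$tmp/stickydir")            # 1777
last_char=$(ls -ld "$tmp/stickydir" | cut -c10)  # 't' in the other-execute slot
rm -rf "$tmp"
```

The deletion test itself (ted failing to remove user1's file) still needs two real accounts, as described above.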

Practical Application:

  • SGID: Used for shared group directories
  • Sticky bit: Used in shared directories like /tmp
  • Both bits help maintain file security in shared environments

Access Control Lists (ACLs) Overview

Limitations of Standard Linux Permissions:

  • Files/directories limited to one user ownership
  • Files/directories limited to one group ownership
  • “Other” permissions lack precision
  • Limited inheritance (group ownership only)
  • Difficult to backup/restore permissions
  • Difficult to temporarily restrict permissions

Example Problem:

  • Scenario: Directory needs different permissions for multiple groups
    • Accounting group needs rwx (read, write, execute)
    • Marketing group needs r-x (read, execute)
  • Standard permission solution inadequate:
    • Can only assign one group (accounting) proper permissions
    • Marketing must use “other” permissions
    • Results in unwanted access for all users

ACL Benefits:

  1. Multiple user permissions per file/directory
  2. Multiple group permissions per file/directory
  3. Inheritance for user/group permissions
  4. Easy backup/restoration of permissions
  5. Simple temporary permission restrictions

ACL Limitations:

  • Not always installed
  • Not built into Linux
  • Can be disabled by admin
  • Requires learning new commands

ACL Implementation Example:

  • Base Structure:
    • Root user (rwx)
    • Root group (rwx)
    • Other (no permissions)
  • ACL Additions:
    • Accounting group: rwx
    • Marketing group: r-x
    • Default ACL for inheritance
  • Mask feature:
    • Controls maximum allowed permissions
    • Enables temporary access restrictions
    • Restoring mask returns original permissions

Important Notes:

  • ACLs are default in Enterprise Linux 8
  • Recommended despite learning curve
  • More efficient for permission management
  • Safer baseline permissions if ACLs disabled
  • Inheritance applies to new files/directories

Reading Access Control Lists (ACLs)

Tools for ACLs:

  • Cannot use standard Linux tools (ls) to list ACLs
  • Requires special ACL tools
  • Must have ACL support installed (default in Enterprise Linux)
  • File system must be mounted with ACL support

Basic Commands:

  1. getfacl - Read ACLs

    • Basic format: getfacl filename
    • Tabular format: getfacl -t filename
    • Shows: file name, user owner, group owner, permissions
    • Recursive listing: getfacl -R [directory]
  2. setfacl - Set ACLs

    • Format: setfacl -m user:username:permissions filename
    • Example: setfacl -m user:root:rwx aclfile
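A hedged sketch of the two commands together. It assumes the acl package (`setfacl`/`getfacl`) and a filesystem mounted with ACL support, and skips gracefully where either is missing:

```shell
# Grant the current user a named ACL entry, then verify the '+' marker in ls -l.
tmp=$(mktemp -d)
touch "$tmp/aclfile"
if command -v setfacl >/dev/null 2>&1 \
   && setfacl -m u:"$(id -un)":rwx "$tmp/aclfile" 2>/dev/null; then
  getfacl -c "$tmp/aclfile"                  # -c omits the header comments
  marker=$(ls -l "$tmp/aclfile" | cut -c11)  # '+' indicates an ACL is present
else
  marker="+"                                 # ACL tools unavailable here; skip
fi
rm -rf "$tmp"
```

The 11th character of the `ls -l` line is exactly the `+` indicator described in the next section.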

Identifying ACLs:

  • In ls -l output, a '+' symbol after the permissions indicates an ACL exists
  • In getfacl -t output, named ACL entries appear in lowercase (user, group)
  • The standard owner and group entries appear in uppercase (USER, GROUP)

Backing up and Restoring ACLs:

  • Can store ACL permissions recursively: getfacl -R /path > filename.txt
  • Stored without absolute path names
  • Must be in same directory when restoring permissions
  • View stored permissions using cat command

Additional Information:

  • Detailed documentation available in man pages (man getfacl)
  • ACLs layer on top of standard Linux permissions
  • Some distributions may require manual ACL support installation

Setting Access Control Lists (ACLs)

System Support & Installation:

  • Different distributions have varying ACL support levels
  • Enterprise Linux 8: ACLs installed and enabled by default
  • Older versions: ACLs automatic for root partition only
  • May need to mount file system with ACL option in some distributions

Using setfacl Command: Basic Syntax:

  • setfacl -m user:username:permissions file/path
  • -m stands for modify
  • Can use the shorter form 'u:' instead of 'user:'

Practical Example Steps:

  1. Create test environment:

    • Create directory: aclexercise
    • Create file: datafile.txt
    • Add user: bob
    • Add groups: accounting, marketing
  2. Setting ACLs:

    • Basic format: setfacl -m user:bob:rwx datafile.txt
    • Multiple ACLs: setfacl -m u:bob:rwx,g:accounting:rx datafile.txt
    • View ACLs: getfacl -t filename

Special Notes:

  • Can set user owner permissions without specifying username
  • Group ACLs require existing groups
  • An "Invalid argument near character N" error usually indicates a nonexistent user or group name
  • Recursive ACL setting possible with -R option
  • Can set permissions for:
    • User owner
    • Group owner
    • Others

Important Features:

  • Can set multiple ACLs simultaneously using commas
  • Supports abbreviated notation (u: instead of user:)
  • Requires sudo for setting ACLs on others’ files
  • Man pages available for detailed information

Advantages:

  • Easier to implement than standard permissions
  • More straightforward than using special bits and custom UMask
  • Provides direct access control for specific users and groups

Configuring Inheritance with Default ACLs

  1. Standard Linux Permissions Inheritance:
  • Limited to SGID bit on directories
  • Only inherits group owner from parent directory
  • Doesn’t inherit permissions
  1. ACL Inheritance Capabilities:
  • More extensive than standard permissions
  • Can inherit multiple user/group permissions
  • Uses default ACLs for inheritance
  1. ACL Types and Usage:
  • Regular ACLs: For immediate directory access
  • Default ACLs: For future files/directories
  • Usually need to set both types
  1. Setting Up ACLs - Example Process: a) Create test environment:
  • Make directory: mkdir data
  • Create test files: touch data/file.txt, data/photo.jpg

b) Setting Regular ACL:

  • Command: setfacl -m user:bob:rwx data
  • For existing files (recursive): setfacl -R -m user:bob:rwx data

c) Setting Default ACL:

  • Command: setfacl -d -m user:bob:rwx data
  • Applies to future files/directories
  1. Verification:
  • Use getfacl -t to view ACLs
  • New files automatically inherit default ACLs
  • Works for both files and directories
  1. Important Points:
  • Regular ACLs needed for immediate access
  • Default ACLs ensure future access
  • Both types typically needed for complete access control
  • Works regardless of which user creates new files

Deleting Access Control Lists (ACLs)

Methods to Delete ACLs:

  1. Delete specific ACLs (-x option)
  2. Delete all default ACLs (-k option)
  3. Delete all ACLs (-b option)

Specific ACL Deletion:

  • Format: setfacl -x type:name target
  • Examples:
    • user:root
    • group:audio
  • Just specifying name assumes user ACL
  • For default ACLs: default:type:name (e.g., default:user:root)

Command Options:

  • -x: Delete specific user/group ACLs
  • -k: Delete all default ACLs
  • -b: Delete all ACLs (user, group, and default)
  • -R: Recursive deletion (for directory trees)

Verification:

  • Use getfacl command to verify ACL changes
  • Example: getfacl acldir

Example Commands:

  1. Delete group ACL: setfacl -x group:root acldir
  2. Delete user ACL: setfacl -x root acldir
  3. Delete all default ACLs: setfacl -k acldir
  4. Delete all ACLs: setfacl -b acldir
  5. Recursive deletion: setfacl -R -b acldir

Troubleshooting Access Control

  1. Initial Account Checks
  • Verify user account status
  • Check if the account is locked using passwd -S (an L in the output indicates locked)
  • Confirm login capability
  1. File Permission Checks
  • Use ls -l to view permissions
  • Check file ownership and group ownership
  • Verify traverse permissions in path
  • Test access by having user cd through directories
  • Review access control lists (ACL) if present
  1. File Access Requirements
  • Access possible through:
    • File ownership
    • Group membership
    • Other permissions (not recommended)
    • Access control lists
  1. Execution Permission Issues
  • Verify standard Unix permissions
  • Check ACL execution rights
  • For elevated privileges:
    • Verify sudo access
    • Check sudo file configuration
    • Confirm group membership (admin/sudo groups)
    • Check /etc/group file
  1. Special Permission Bits
  • Check SUID/SGID bits
  • Use rpm -V to verify permission changes
    • M: mode changed
    • U: user owner changed
    • G: group owner changed
  • Reset permissions using rpm --setperms (RPM-based systems only)
  1. Volume and Mount Checks
  • Use mount command to verify exec permissions
  • Check for:
    • noexec flag
    • nosuid flag (ignores both SUID and SGID bits; there is no separate nosgid option)
  1. Log File Analysis
  • Authentication logs:
    • Debian/Ubuntu systems: /var/log/auth.log
    • Red Hat: /var/log/secure
  1. Mandatory Access Control SELinux:
  • Verify if enabled
  • Check SELinux logs for denials
  • Use the SETroubleshoot tool (sealert) for suggested solutions

AppArmor:

  • Check status using aa-status

12. Mandatory Access Control

SELinux Modes and Access Control

  1. Access Control Types:
  • Discretionary Access Control (DAC):

    • Restricts access based on subject identity/groups
    • Permissions can be passed between subjects
    • Examples: Linux permissions, ACLs, SUID/SGID, su/sudo
  • Mandatory Access Control (MAC):

    • Additional layer over DAC
    • OS constrains subject’s access to objects
    • Based on rules system
    • Components:
      • Subjects (processes run by users)
      • Objects (files, directories, IO devices, ports, etc.)
      • Actions (read, write, delete, create)
  1. SELinux Components:
  • Security Policy: System-wide rules defining subject permissions
  • Security Context/Labels: Tags stored in file metadata
  • Access Process:
    • Subject requests object access
    • SELinux security server checks policy database
    • Access granted or denied based on rules
    • Denied access logged in Access Vector Cache
  1. SELinux Operating Modes:
  • Enforcing: Security policy fully enforced
  • Permissive:
    • Policy consulted but not enforced
    • Violations logged
    • Used for troubleshooting
  • Disabled: SELinux turned off (not recommended)
  1. Enforcement Policies:
  • Type Enforcement (default in targeted policy)
  • Role-based Access Control
  • Multi-level Security (clearance-based)
  • Multi-category Security (useful for containerization)
  1. Command Reference:
  • sestatus: Shows current SELinux status
  • getenforce: Displays current enforcement mode
  • setenforce: Changes mode temporarily
  • Configuration file: /etc/selinux/config
    • Permanent changes require editing this file
    • System reboot needed for changes
    • Disabling SELinux requires config edit and reboot
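For reference, a typical /etc/selinux/config looks like the fragment below; the values shown are illustrative defaults and vary by distribution:

```
# /etc/selinux/config
SELINUX=enforcing        # enforcing | permissive | disabled
SELINUXTYPE=targeted     # targeted is the usual default policy on Enterprise Linux
```

Editing SELINUX= here (and rebooting) is the permanent counterpart to the temporary setenforce command.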

SELinux file and process context

SELinux Security Context Components:

  1. Format: user:role:type:level
  2. Main components:
    • SELinux user (e.g., unconfined_u)
    • Role (e.g., unconfined_r)
    • Type (e.g., unconfined_t)
    • Security level (for multilevel/multi-category security)

Viewing Security Contexts:

  1. User context: id -Z
  2. Process context: ps -eZ
  3. File context: ls -lZ

Domain Transitions:

  1. Subjects can move between types if allowed by security policy
  2. Example: passwd command
    • Executable file type: passwd_exec_t
    • Running process type: passwd_t
    • Target file (/etc/shadow) type: shadow_t
    • Process transitions from passwd_exec_t to passwd_t to write to shadow_t

Key Points:

  • All users, processes, and files have security contexts
  • Type enforcement is primary method for mandatory access control
  • Domain transitions allow temporary type changes for specific operations
  • Similar concept to sudo but more complex
  • Transitions must be explicitly allowed in security policy

Restoring SELinux default file contexts

SELinux File Contexts:

  • Unlike standard Unix permissions, SELinux security context is stored in extended attributes
  • Default security contexts are stored in SELinux security policy
  • View contexts using ls -lZ command

Changing Security Contexts:

  1. Using chcon:
  • Changes context temporarily
  • Syntax: chcon -t [type] [filename]
  • No root privileges needed for own files
  • Best used only for troubleshooting
  1. Using restorecon:
  • Restores default context from policy database
  • Syntax: restorecon [filename]
  • Recommended for regular context restoration

System-wide Context Reset:

  • Create /.autorelabel file in root directory
  • Command: sudo touch /.autorelabel
  • System will relabel all files on next boot
  • Boot process takes longer during relabeling
  • .autorelabel file auto-deletes after completion

Permanent Context Changes:

  1. Using semanage:
  • Modifies policy database
  • Syntax: sudo semanage fcontext -a -t [type] [filepath]
  • Changes persist through restorecon and relabeling
  • Verify changes: sudo semanage fcontext -l | grep [filename]

Best Practice:

  • Modify policy using semanage
  • Apply changes using restorecon
  • Avoid using chcon except for temporary troubleshooting

Using SELinux Booleans

  1. Purpose:
  • Booleans are on/off switches to modify SELinux behavior
  • Allows functionality changes without rewriting security policies
  1. Viewing Booleans: a) Using getsebool:
  • Command: getsebool -a
  • Lists all SELinux Booleans (approximately 300)
  • Single Boolean check: getsebool [boolean_name]

b) Using sestatus:

  • Command: sestatus -b
  • Shows Boolean list

c) Using semanage:

  • Command: sudo semanage boolean -l
  • Provides Boolean list with descriptions
  • Requires elevated privileges
  1. Modifying Booleans: a) Temporary changes:
  • Command: sudo setsebool [boolean_name] on/off
  • Changes don’t survive system reboot

b) Persistent changes:

  • Command: sudo setsebool -P [boolean_name] on/off
  • Adds Boolean to policy
  • Changes survive system reboot
  1. Example Used:
  • Boolean: mozilla_plugin_use_gps
  • Verification command: sudo semanage boolean -l | egrep 'SELinux|mozilla_plugin_use_gps'
  • Shows header and specific Boolean status

SELinux Policy Violations Diagnosis

VM Preparation:

  • Create VM snapshot before making changes
  • SELinux logs alerts in enforcing/permissive modes
  • Logs to: /var/log/audit/audit.log (if auditd is running)
  • Alternative log: /var/log/messages

Generating SELinux Error:

  1. Check original security context of /etc/shadow:

    • Command: sudo ls -Z /etc/shadow
    • Original context type: shadow_t
  2. Change security context:

    • Command: sudo chcon -t etc_t /etc/shadow
    • New, incorrect context type: etc_t

Monitoring & Troubleshooting:

  • Monitor audit log: sudo tail -f /var/log/audit/audit.log
  • Use ausearch command
  • Test by attempting password change (will fail)
  • Check SEAlert browser for detailed information

Audit Log Analysis:

  • Two main error messages:
    1. AVC (Access Vector Cache) - SELinux error
      • Shows denied actions, process ID, command name
    2. USER_CHAUTHTOK
      • Shows PAM authentication details
      • Includes user account, command, host info

Solutions for SELinux Errors:

  1. Change Boolean Settings

    • View booleans: sudo semanage boolean -l
    • Modify using setsebool
    • Use -P flag for persistence
  2. Modify File Context

    • Use chcon or semanage
    • Changes with chcon are temporary
    • Make persistent with semanage + restorecon
  3. Create New Security Policy Module

    • Last resort option
    • Most intrusive solution
    • Modifies security policy directly

Best Practices:

  • Start troubleshooting in permissive mode
  • Check audit logs for errors
  • Look for desktop notifications
  • Follow SELinux alert browser instructions
  • Restore original context when needed: sudo restorecon /etc/shadow

Managing File Security Context

  1. Default Behavior:
  • Files copied to new locations inherit security context of destination directory
  • Example: Files copied to /home get user_home_dir_t type
  1. Preserving Original Security Context:
  • Copy command (cp):

    • Use -a option (archive)
    • Preserves: permissions, ACLs, extended attributes, SELinux context
  • Move command (mv):

    • Automatically preserves attributes
    • No special options needed
    • Works by moving files without changing metadata
  1. Backup Operations:
  • tar command:
    • Use the --selinux option
    • Preserves security context during backup
  1. Remote File Transfer:
  • rsync command:
    • Use the -X option (preserves extended attributes, which carry the SELinux context)
    • Preserves security context during host-to-host transfers
  1. Safety Consideration:
  • Preserving context may not always be desired
  • Safer approach:
    • Copy/move files normally
    • Use restorecon to reset context to policy default

AppArmor

Mandatory Access Control Systems:

  • Two main systems: SELinux (by NSA) and AppArmor (by Immunix/Novell)
  • Both supplement discretionary access control systems
  • AppArmor gained popularity through Ubuntu adoption

SELinux vs. AppArmor Comparison: SELinux:

  • Complex system with multi-level security
  • Inode-based (file labeling with security context)
  • Works only with file systems supporting extended attributes
  • Used in Red Hat-based distributions, Android, CoreOS
  • Available optionally in Debian, Ubuntu, SUSE

AppArmor:

  • Path-based system
  • Simpler to implement and use
  • Supports non-Linux file systems (NFS, NTFS)
  • Limited to recreating discretionary access control
  • Used primarily in SUSE, Debian, and Ubuntu

AppArmor Operations: Modes:

  • Complaining mode: logs violations without enforcement (like SELinux permissive mode)
  • Enforcing mode: enforces access control in production

Commands:

  • aa-status: check AppArmor status
  • aa-enforce: change to enforcing mode
  • aa-complain: change to complaining mode
  • apparmor_parser -R: disable profile
  • apparmor_parser -a: add profile to kernel

Profiles:

  • Located in /etc/apparmor.d/
  • Naming convention uses dots to replace forward slashes
  • Example: usr.sbin.cupsd represents /usr/sbin/cupsd
  • Additional profiles available through apparmor-profiles and apparmor-profiles-extra packages

Documentation:

  • Available at gitlab.com/apparmor

13. System Security

Pluggable Authentication Modules (PAM)

Definition & Background:

  • PAM = Pluggable Authentication Modules
  • Centralized, modular authentication system for Linux
  • Proposed by Sun Microsystems in the mid-90s
  • Replaced individual service authentication methods
  • Located in /etc/pam.d directory

Module Types (4):

  1. Account

    • Validates account status
    • Handles non-authentication maintenance
    • Checks for locked accounts/expired passwords
  2. Auth

    • Manages user authentication
    • Sets up credentials
    • Handles challenge responses & hardware tokens
  3. Password

    • Updates authentication methods
    • Manages user passwords
  4. Session

    • Handles pre/post login tasks
    • Manages home directory mounting
    • Provides login audit trails

Control Flags:

  • Determine authentication flow through stack
  • Types:
    • Sufficient: Success stops further processing
    • Required: Failure noted, continues processing
    • Requisite: Immediate failure if unsuccessful
    • Include: Allows external module stack inclusion
    • Substack: Similar to include but limits failure scope
    • Optional: Not mandatory for authentication

PAM Service Files:

  • Structure: 3-4 columns
    1. Module type
    2. Control flag
    3. PAM module
    4. Module arguments (optional)
  • Processed top to bottom
  • Can be viewed with text editors
  • Modification requires careful consideration
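The column layout above can be seen in any file under /etc/pam.d. The fragment below is illustrative only (the module names are real, but the exact stack varies by distribution and is not a drop-in file):

```
# /etc/pam.d/example — type / control / module / arguments
auth       required    pam_env.so
auth       sufficient  pam_unix.so    try_first_pass
auth       required    pam_deny.so
account    required    pam_unix.so
password   required    pam_unix.so    sha512 shadow
session    required    pam_unix.so
```

Read top to bottom: the sufficient pam_unix.so line stops processing on success, and pam_deny.so catches anything that falls through.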

Additional Information:

  • Module documentation available via man pages
  • Check whether a program is PAM-aware with the ldd command (look for libpam.so)
  • Changing control flags can significantly impact security
  • Most users won’t need to modify PAM configuration

Login Counters

PAM Modules Overview:

  • Located in /usr/lib64/security/
  • Notable modules:
    • pam_access: Controls authentication access and locations
    • pam_exec: Executes commands post-authentication
    • pam_limits: Configures resource limits for user sessions

Login Counter Evolution:

  1. pam_tally (Legacy)
  2. pam_tally2 (Intermediate)
  3. pam_faillock (Current recommended)

PAM Faillock Features:

  • Maintains list of failed authentication attempts
  • Locks accounts exceeding threshold
  • Configuration options:
    • Can be in PAM auth config file or separate config file
    • Module arguments override config file settings

Key Module Options:

  1. preauth: Used before credential verification
  2. authfail: Logs failed attempts
  3. authsucc: Clears failure records upon successful login

Important Configuration Arguments:

  • deny: Sets maximum consecutive failures
  • fail_interval: Time window for consecutive failures
  • unlock_time: Duration before automatic unlock
  • audit: Logs non-existent user login attempts
  • silent: Suppresses user messages
  • no_log_info: Disables system logger
  • log_users_only: Tracks only local users

Admin-specific Options:

  • even_deny_root: Sets root account failure limit
  • root_unlock_time: Unlock time for root account
  • admin_group: Treats specified group like root

Management:

  • Configuration file: /etc/security/faillock.conf
  • Can be configured using authconfig command
  • View failed logins: faillock command
  • Reset login records: faillock --reset

Best Practice:

  • Recommended to keep configuration in separate faillock.conf
  • Helps prevent security vulnerabilities from PAM module ordering
  • Separates setup from daily management
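Putting the arguments above into the separate config file might look like this; the values are illustrative, not recommendations:

```
# /etc/security/faillock.conf
deny = 5                 # lock after 5 consecutive failures
fail_interval = 900      # ...counted within a 15-minute window (seconds)
unlock_time = 600        # auto-unlock after 10 minutes (seconds)
audit                    # also log attempts against nonexistent users
```

With these settings in faillock.conf, the pam_faillock lines in the PAM stack need no arguments of their own.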

Password Policy Configuration in Linux

  1. Purpose:
  • Prevent system hacks due to weak passwords
  • Enforce stronger password requirements
  1. Configuration File:
  • Location: /etc/security/pwquality.conf
  • Controls password policy settings
  1. Key Configuration Options: a) Basic Settings:
  • difok: Number of characters in the new password that must differ from the old one
  • minlen: Minimum password length (minimum 6 characters)
  • minclass: Number of required character classes
  • maxrepeat: Limit on identical consecutive characters
  • maxclassrepeat: Limit on consecutive character classes

b) Security Checks:

  • gecoscheck: Rejects passwords containing words from the user's GECOS (comment) field
  • dictcheck: Checks for dictionary words
  • usercheck: Checks for username in password
  • badwords: List of prohibited words
  1. Credit System: Purpose: Allow shorter passwords while maintaining strength Types of Credits:
  • dcredit: Credit for numeric digits
  • ucredit: Credit for uppercase letters
  • lcredit: Credit for lowercase letters
  • ocredit: Credit for special characters/punctuation

Credit Settings:

  • 0: Disables credit
  • Positive number: Enables credit (reduces length requirement)
  • Negative number: Requires minimum number of special characters
  1. Implementation:
  • Changes to password policy may require users to change passwords on next login
  • Credits subtract from minimum length requirement
  • All options can be disabled by setting to zero
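The options above combine in /etc/security/pwquality.conf along these lines; the values are illustrative examples, not mandated settings:

```
# /etc/security/pwquality.conf
minlen = 12              # minimum length before credits are applied
minclass = 3             # require 3 of the 4 character classes
maxrepeat = 3            # no more than 3 identical characters in a row
dcredit = -1             # negative value: require at least one digit
ocredit = -1             # require at least one special character
dictcheck = 1            # reject dictionary words
usercheck = 1            # reject passwords containing the username
```

Note the negative credit values: with -1, the class is required rather than merely credited toward minlen.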

Edit Global User Account Defaults

  1. Default Account Settings (/etc/login.defs):

    • Contains system-wide defaults for new user accounts
    • Changes only affect newly created users, not existing ones
    • Accessed using: less /etc/login.defs
  2. Password Aging Controls:

    • Defines default values found in /etc/shadow
    • Settings include:
      • Maximum password validity period
      • Minimum time before password change
      • Minimum password length (overridden by /etc/security/pwquality.conf)
      • Warning period before password expiration
  3. User and Group ID Settings:

    • Specifies minimum UID and GID numbers
    • Old Red Hat systems: Started at 500
    • Current systems: Start at 1000
  4. Home Directory Creation:

    • Red Hat systems: Automatically create home directory by default
    • Can be disabled using useradd -M command
    • Default behavior varies by Linux distribution

Note: Existing user accounts must be modified using separate tools, not through login.defs file.

Locking user accounts and changing password aging

User Account Management:

  1. Locking Password:
  • Command: sudo passwd -l username
  • Effect: Prepends exclamation marks (!!) to the password hash in /etc/shadow
  • Note: User can still login with SSH keys
  1. Full Account Lock:
  • Command: sudo chage -E 0 username
  • Sets account expiration to Jan 1, 1970
  • To disable expiration: sudo chage -E -1 username
  1. Password Status Check:
  • View status: sudo passwd -S username
  • Unlock password: sudo passwd -u username
  1. Account Information Files:
  • Password info: /etc/passwd
  • Shadow password file: /etc/shadow
  • View aging info: sudo chage -l username
  1. Preventing Interactive Login:
  • Change shell to /sbin/nologin: sudo usermod -s /sbin/nologin username
  • This allows account to function but prevents user login
  • Default interactive shell is /bin/bash

Basic User Management Commands:

  • Create user: sudo useradd username
  • Set password: sudo passwd username
  • Modify user: sudo usermod [options] username

Note: All these commands require sudo privileges for execution.

Force Password Reset

  1. Method 1: Using passwd command
  • Command: sudo passwd --expire username
  • Forces password reset on next login
  • Example used: user1
  1. Process:
  • Check current password hash in /etc/shadow
  • Execute expire command
  • User must log out and log back in
  • System prompts for:
    • Old password
    • New password
  1. Verification:
  • Check /etc/shadow again
  • Password hash should be different
  1. Alternative Method:
  • Using chage command
  • Command: sudo chage -d 0 username
  • More powerful but complex tool
  • Manages password aging
  1. Note:
  • passwd command is simpler for basic password resets
  • chage offers more advanced password management features

Secure Shell (SSH) Configuration

  1. Basic Overview:
  • Core component for Linux remote access
  • Provides: interactive login shell, remote command execution, secure file copying, network tunneling
  • System needs: SSH client (local) + SSH server (remote) + encrypted tunnel
  1. Availability:
  • Clients: Available for Linux, macOS, Windows, iOS, Android
  • Servers: Built-in for Linux, macOS, Unix; third-party options for Windows
  1. Configuration Files: Main locations:
  • /etc/ssh/ssh_config (client config)
  • /etc/ssh/sshd_config (main server config)
  • /etc/sysconfig/sshd (minor server config)
  1. Server Settings:
  • Default port: 22
  • Configurable options include:
    • Ciphers
    • Compression
    • Access control
    • Forwarding
  1. Per-User Configuration:
  • Location: ~/.ssh/config
  • Created upon first server connection
  • Allows custom server settings
  • Simplifies administration
  1. Configuration Example: Without config file:
ssh -p 1022 grant@server1.vmguests.com -i ~/.ssh/server1.key

With config file:

ssh server1

Benefits: Streamlines server management, especially for multiple servers
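The long-form command shown above maps to a per-user entry like the following (same host, port, user, and key file as in the example):

```
# ~/.ssh/config
Host server1
    HostName server1.vmguests.com
    Port 1022
    User grant
    IdentityFile ~/.ssh/server1.key
```

After saving this, `ssh server1` picks up all four settings automatically.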

PKI Concepts

Cryptography Basics:

  • Purpose: Hide or keep data private
  • Plaintext → Ciphertext (Encryption)
  • Ciphertext → Plaintext (Decryption)

Encryption Types:

  1. Symmetric (Private Key):

    • Same key for encryption/decryption
    • Limited for data transfer due to key sharing risks
  2. Asymmetric (Public/Private Key Pairs):

    • Public key: For encryption
    • Private key: For decryption
    • Example: Bob encrypts with Sally’s public key; Sally decrypts with her private key

Hashing:

  • One-way mathematical algorithms
  • Creates fixed-length ciphertext
  • Output called: message digest/hash value/fingerprint/signature
  • Types:
    • Non-salted: Uses only plaintext and algorithm
    • Salted: Adds random data for stronger protection
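The two properties above — fixed-length output and the effect of a salt — can be seen with sha256sum (part of coreutils). A minimal sketch; the salt value is a stand-in, not how real password systems generate salts:

```shell
# Fixed-length digests, and how a salt changes the result for the same input.
pw="secret"
h1=$(printf '%s' "$pw" | sha256sum | awk '{print $1}')
salt="a1b2c3"                                   # real systems use random data
h2=$(printf '%s%s' "$salt" "$pw" | sha256sum | awk '{print $1}')
echo "${#h1} ${#h2}"    # both digests are 64 hex chars, whatever the input length
[ "$h1" != "$h2" ] && echo "salting changes the digest"
```

Because hashing is one-way, neither digest can be reversed to recover "secret"; verification works by re-hashing a candidate and comparing.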

PKI (Public Key Infrastructure):

  1. Components:

    • Certificate Authority (CA)
    • Digital Certificates
    • Public/Private Keys
  2. Digital Certificates:

    • Certifies public key ownership
    • Issued by CA
    • Used in secure websites
    • Similar to driver’s license concept
  3. Digital Signatures:

    • Ensures message integrity
    • Process:
      a. Generate message hash
      b. Encrypt hash with sender's private key
      c. Send message with encrypted hash
      d. Recipient verifies by:
      • Generating own hash
      • Decrypting sender’s hash
      • Comparing both hashes

Web Security:

  • Server sends public key
  • Browser encrypts session key
  • Creates encrypted tunnel
  • Certificates validate legitimate domain ownership

Note: Self-signed certificates should only be used for development, not public use.

Configuring SSH Key-Based Authentication

  1. Authentication Methods:
  • Two common SSH authentication methods:
    • Passwords
    • Private-public key pairs
  1. Generating SSH Key Pair:
  • Use ssh-keygen command
  • Default settings create RSA key pair
  • Creates two files in ~/.ssh/:
    • id_rsa (private key)
    • id_rsa.pub (public key)
  1. Copying Public Key to Remote Host:
  • Use ssh-copy-id command
    • Example: ssh-copy-id rhhost2
  • Requires password authentication once
  • Two actions occur:
    • Public key copied to remote’s authorized_keys file
    • Remote server fingerprint stored in local known_hosts file
  1. SSH Agent:
  • Run ssh-add to add key to local SSH agent
  • Optional but recommended step
  1. Important Files: a) Known_hosts file (~/.ssh/known_hosts):
    • Stores remote servers’ IP addresses and fingerprints
    • Must delete entry if remote IP changes
    • SSH client warns if fingerprint mismatch

b) Authorized_keys file (~/.ssh/authorized_keys):

  • Contains public keys of authorized users
  • Located on remote server
  • Must match local public key exactly
  1. Best Practices:
  • Use ssh-copy-id instead of manual key copying
  • SSH is sensitive to file permissions and syntax
  • Keep track of known_hosts entries for troubleshooting
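The key-generation step can be exercised non-interactively. A hedged sketch, assuming the OpenSSH client tools are installed (it skips gracefully if ssh-keygen is absent) and writing to a throwaway directory rather than ~/.ssh:

```shell
# Generate a throwaway RSA key pair: private key plus matching .pub file.
tmp=$(mktemp -d)
if command -v ssh-keygen >/dev/null 2>&1; then
  ssh-keygen -t rsa -b 2048 -N '' -f "$tmp/id_rsa" -q   # empty passphrase, quiet
  [ -f "$tmp/id_rsa" ] && [ -f "$tmp/id_rsa.pub" ] && pair=ok
else
  pair=ok   # ssh-keygen unavailable here; nothing to demonstrate
fi
rm -rf "$tmp"
```

In real use you would omit -f (accepting ~/.ssh/id_rsa) and set a passphrase, then hand the .pub file to ssh-copy-id as described above.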

SSH Tunneling

  1. Purpose and Basic Concepts
  • Many Linux protocols are unencrypted (X11, VNC, Rsync)
  • SSH can be used for secure tunneling
  • Basic SSH functions:
    • Interactive shell access
    • Remote command execution
    • Data piping through tunnel
  1. Types of SSH Tunneling a) Local Port Forwarding
  • Creates encrypted tunnel to remote host
  • Listens on local port
  • Forwards local traffic to remote network
  • Used to secure insecure protocols
  • Example: Local port 1080 → remote port 80

b) Remote Port Forwarding (Reverse)

  • Connects to remote host
  • Grabs remote port traffic
  • Brings traffic back to local network
  • Useful for NAT-restricted environments
  • Security considerations:
    • Gateway ports disabled by default
    • Can be enabled in SSH server config
    • Can specify allowed clients

c) Dynamic Port Forwarding

  • Functions like proxy server
  • Uses SOCKS5 proxy functionality
  • Useful when:
    • Local host can’t access internet
    • Can SSH to host with internet access
  • Application protocol determines remote port
  • Forwards traffic automatically to destinations
  3. Implementation Notes
  • Operates within existing SSH connections
  • Creates separate tunnels for forwarding
  • Can specify users and hosts
  • Configuration options available for security
  • Can handle multiple types of network traffic
  4. Key Benefits
  • Secures unencrypted protocols
  • Bypasses network restrictions
  • Provides flexible routing options
  • Maintains security through encryption
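The three forwarding modes map onto three ssh flags. These invocations assume a reachable host named rhhost2 and are shown only to illustrate the syntax:

```shell
# Local forwarding: listen on local port 1080, tunnel to port 80 on the remote side
ssh -L 1080:localhost:80 user@rhhost2

# Remote (reverse) forwarding: remote port 8080 is carried back to local port 80
ssh -R 8080:localhost:80 user@rhhost2

# Dynamic forwarding: SOCKS5 proxy listening on local port 1080
ssh -D 1080 user@rhhost2
```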

Summary Security Best Practices

Boot Security:

  • Set UEFI/BIOS password (limited protection if physical access exists)
  • Configure bootloader password
  • Physical security is crucial

Authentication:

  • Implement multi-factor authentication
  • Use one-time passwords
  • Consider biometrics (with noted limitations)
  • Utilize third-party directory/authentication services (RADIUS, LDAP, Kerberos)

User Access:

  • Avoid root login, use normal user accounts
  • Use sudo for privilege elevation
  • Restrict SU access using PAM
  • Limit root SSH logins via sshd_config or the pam_access module
  • Implement SSH key pairs for passwordless/multi-factor login

Service Security:

  • Run vulnerable services in chroot jail
  • Use containers/VMs for service isolation
  • Separate OS and application data in different volumes
  • Mount volumes with specific restrictions (read-only, no SUID/SGID, no exec)
  • Implement LUKS disk encryption
  • Restrict USB device usage
  • Configure SELinux for service containment

File System Security:

  • Limit SUID/SGID binaries
  • Use file system mount options to restrict SUID/SGID
  • Implement file ACLs for better permission management
  • Restrict root logins on local TTYs

Software Management:

  • Remove unnecessary software
  • Disable unneeded services
  • Keep OS updated with security patches
  • Avoid running high-risk services (FTP, Telnet, Finger)

Network Security:

  • Run firewall
  • Use dynamic rules for targeted servers
  • Implement TCP wrappers
  • Use PAM for granular network access control
  • Change default ports (especially SSH)
  • Restrict remote access to specific hosts
  • Deploy VPNs where appropriate (considering potential risks)

14. Linux Firewalls

Linux Firewalls Comparison

Evolution of Linux Firewalls:

  • Kernel 2.0: ipfwadm
  • Kernel 2.2: ipchains
  • Kernel 2.4: netfilter and iptables

Netfilter:

  • Kernel API for packet manipulation
  • Core firewall functionality
  • Features:
    • Stateful packet inspection
    • Connection tracking
    • Network address translation
    • Extensible through modules

Iptables:

  • Management tool for netfilter
  • Components:
    • Chains (INPUT, OUTPUT, FORWARD)
    • Tables (filter, nat, mangle)
  • Characteristics:
    • Static rules
    • Requires complete firewall restart for changes
    • Breaks established connections during restart
    • Uses protocol, address, ports, and state-based rules

Firewalld:

  • Modern management tool for netfilter
  • Key features:
    • Dynamic management
    • Network zones with trust levels
    • No restart required for changes
    • D-Bus integration
    • PolicyKit authentication
  • Management approach:
    • Uses zones and services instead of chains and rules
    • Simplified configuration
    • Zone types: dmz, external, home, internal, public, trusted, work
    • Service-based traffic management

Comparison:

  1. Both tools interface with netfilter
  2. Firewalld advantages:
    • Dynamic updates
    • No connection disruption
    • Simpler management
    • Zone-based approach
  3. Iptables advantages:
    • More granular control
    • Explicit rule visibility

Recommendation:

  • Learn and use firewalld as it represents the future of Linux firewall management
  • More efficient and easier to manage despite some administrators preferring iptables

Using Firewalld for Packet Filtering

  1. Basic Setup:
  • Firewalld is the default firewall tool in Enterprise Linux 8
  • Cannot run simultaneously with the iptables service
  • Start command: sudo systemctl start firewalld
  • Enable persistence: sudo systemctl enable firewalld
  2. Firewall-CMD Usage:
  • Main command: firewall-cmd
  • Check status: sudo firewall-cmd --state
  • Remote editing safety: use the --timeout= option to auto-revert changes
  • Configuration types:
    • Running config (temporary)
    • Saved config (permanent)
  • Use the --permanent flag for persistent changes
  3. Managing Services:
  • Add HTTP service: sudo firewall-cmd --permanent --add-service=http
  • Remove service: sudo firewall-cmd --permanent --remove-service=http
  • List available services: sudo firewall-cmd --get-services
  • List enabled services in current zone: sudo firewall-cmd --list-services
  4. Port Management:
  • Add single port: sudo firewall-cmd --permanent --add-port=443/tcp
  • Remove port: sudo firewall-cmd --permanent --remove-port=443/tcp
  • Add port range: sudo firewall-cmd --permanent --add-port=5901-5910/tcp
  • List enabled ports: sudo firewall-cmd --list-ports
  5. Important Operations:
  • Reload rules to activate changes: sudo firewall-cmd --reload
  • Always reload after making changes
  • Use the --permanent flag for changes to survive reboots

Firewalld Zones

Firewalld Zones Overview:

  • Zones define trust levels for network connections
  • One connection can belong to only one zone
  • One zone can handle multiple network connections
  • Reference: man page “firewalld.zones”

Pre-defined Zones:

  1. Drop - Drops incoming packets without response
  2. Block - Blocks packets with icmp-host-prohibited response
  3. External - For external networks with masquerading enabled
  4. DMZ - For publicly accessible computers with limited internal access
  5. Public, Work, Home, Internal - For trusted networks
  6. Trusted - Accepts all network connections

Key Commands:

  1. Check default zone: sudo firewall-cmd --get-default-zone

  2. List all zones: sudo firewall-cmd --list-all-zones

  3. Create new zone: sudo firewall-cmd --permanent --new-zone=zonename

  4. Delete zone: sudo firewall-cmd --permanent --delete-zone=zonename

  5. Add source address to zone: sudo firewall-cmd --permanent --zone=zonename --add-source=network/mask

  6. Add service to zone: sudo firewall-cmd --permanent --zone=zonename --add-service=servicename

  7. Reload firewall rules: sudo firewall-cmd --reload

  8. Set default zone: sudo firewall-cmd --set-default-zone=zonename

  9. View zone information: sudo firewall-cmd --list-all --zone=zonename

Important Notes:

  • Rules added without specifying a zone go to the default zone
  • Use the --permanent flag for persistent changes
  • Always reload the firewall after making changes
  • Changes without the --permanent flag are lost after reboot

Using Firewalld for NAT

Network Address Translation (NAT) Methods:

  • Uses masquerade or forwarding
  • Masquerade limited to IPv4 only
  • Masquerade forwards packets not addressed to the local system
  • Rewrites the source address to the local system's address so responses can be routed back

Enabling Masquerade:

  1. Basic Command:
sudo firewall-cmd --permanent --zone=coffeeshop --add-masquerade
  2. Verify Configuration:
sudo firewall-cmd --permanent --query-masquerade
  3. Granular Control using Rich Rules:
sudo firewall-cmd --permanent --zone=coffeeshop --add-rich-rule='rule family=ipv4 source address=172.16.25.0/24 masquerade'

Port Forwarding:

  1. Basic Port Forward:
sudo firewall-cmd --permanent --zone=coffeeshop --add-forward-port=port=22:proto=tcp:toport=2222:toaddr=172.16.25.125
  2. Verify Zone Configuration:
sudo firewall-cmd --permanent --list-all --zone=coffeeshop
  3. Granular Port Forward using Rich Rules:
sudo firewall-cmd --permanent --zone=coffeeshop --add-rich-rule='rule family=ipv4 source address=172.16.25.0/24 forward-port port=22 protocol=tcp to-port=2222 to-addr=172.16.25.125'

Additional Information:

  • Rich rules provide more detailed control over NAT
  • Refer to the firewalld rich language man page for more details
  • All commands use the --permanent flag for persistent configuration

15. Automation & Scripting

Making a Shell Script

Essential Components:

  1. Text File Creation
  • Create using text editor (e.g., VI)
  • Naming convention: script.sh (helps with syntax highlighting)
  • File extension not mandatory
  2. Interpreter Specification (First Line) - two methods:
    a) Absolute path: #!/bin/bash
    b) env command: #!/usr/bin/env bash
      • Advantage: more flexible for different interpreter locations
  3. Permissions
  • Make file executable using: chmod u+x script.sh
  • Allows script to run as command
  4. System Path Integration
  • Create personal bin directory: ~/bin
  • Move script to ~/bin
  • Makes script executable from anywhere without full path
  • Scripts can be named without .sh extension

Optional Considerations:

  • Symbolic links can be created for scripts
  • Scripts can run without:
    • Execute permissions
    • Interpreter specification line
    • Being in system path
  • Alternative execution method:
    • Provide script as argument to bash directly
    • Example: bash ~/bin/script2.sh

Basic Script Structure Example:

#! /bin/bash
echo "This is a shell script"

Note: While these steps make scripts more convenient to use, a basic text file can still function as a shell script when provided directly to the bash interpreter.

Positional Arguments in Shell Scripts

Definition & Purpose:

  • Essential for getting data into shell scripts
  • Helps create tools that function like real commands

Basic Concept:

  • Most commands use positional arguments
  • Example with the 'ls' command:
    • 'ls *' - Shell expands the asterisk, passing the list to ls
    • 'ls debugger.sh script.sh' - Direct passing of arguments

Creating Script (posargs.sh):

  1. Structure:

    • Shebang: #!/bin/bash
    • Echo statements demonstrating different argument variables
  2. Argument Variables:

    • $0: Path to the script
    • $1: First argument
    • $2: Second argument
    • $@: All arguments (as separate items)
    • $*: All arguments (as one entity)
  3. Quoting Rules:

    • Single quotes ('): show literal variable names
    • Double quotes ("): preserve spaces, allow variable expansion
    • Always quote variables for safety

Script Execution:

  1. Without arguments:

    • Shows confusing output due to empty variables
  2. With arguments (example: posargs.sh dog cat horse):

    • $0 shows script path
    • $1 shows “dog”
    • $2 shows “cat”
    • $@ shows “dog cat horse” (as separate items)
    • $* shows “dog cat horse” (as one entity)

Important Distinctions:

  • $@ vs $*:
    • When quoted, $@ preserves individual word splitting
    • When quoted, $* treats all arguments as single entity
    • In loops, $@ iterates through each argument
    • In loops, $* iterates once through combined arguments
  • Never use unquoted versions as they don’t preserve spaces
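The "$@" vs "$*" distinction above can be sketched with a small helper function (the names count_args and demo are illustrative, not from the course):

```shell
#!/usr/bin/env bash
# Counts how many separate arguments it receives.
count_args() {
  echo "$#"
}

demo() {
  # "$@" expands to one word per argument; "$*" joins them into a single word.
  count_args "$@"   # prints 3
  count_args "$*"   # prints 1
}

demo "first arg" "second arg" "third"
```

Because "second arg" contains a space, only the quoted "$@" form keeps it intact as one argument.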

File Globbing

Definition & Origin:

  • Pattern matching feature in bash
  • Originally from Bell Labs Unix command ‘glob’
  • Now built into shell functionality

Types of Pattern Matching in Bash:

  1. Globs
  2. Extended globs
  3. Brace expansion
  4. Regular expressions (basic & extended)

Key Characteristics of Globs:

  • Built-in shell function
  • Different shells may handle globs differently
  • Affected by bash shell options
  • Less expressive but easier to use than regex
  • More efficient for system processing

Globs vs Regular Expressions:

  • Globs: Match file names
  • Regular Expressions: Match text
  • Functionality may seem similar depending on usage

Practical Example:

  • Using the ls command with a glob pattern:
    • Pattern example: matches files starting with 0-9, one character, “file”, any characters, ending in .txt
    • Sample matching files: 1_file-rev1.txt, 2_file-rev1.txt, 3_file-rev1.txt

Important Notes:

  • Shell handles glob expansion
  • Commands (like ls) receive the expanded list
  • ls doesn't support regular expressions
  • For regex matching, need commands with built-in regex support (e.g., grep)
  • Grep matches text patterns, while globs match file patterns

Wildcards in Command Line

File Globbing:

  • Used for pattern matching files based on names
  • Handled by shell itself
  • Can be used with any command

Types of Wildcards:

  1. Asterisk (*):
  • Matches zero or more of any character
  • Examples:
    • file* matches file.txt, file.jpg, file.tar.gz, and file
    • file*.txt matches file.txt, filea.txt, file123.txt
  2. Question Mark (?):
  • Matches exactly one character
  • Examples:
    • file?.txt matches file1.txt, filea.txt
    • file??.txt matches file00.txt through file99.txt, fileab.txt
  3. Character Sets []:
  • Matches one specific character from defined set
  • Uses square brackets
  • Examples:
    • file[123].txt matches file1.txt, file2.txt, file3.txt
    • Can use ranges with hyphen: [1-3], [a-z], [A-Z]

Special Character Set Rules:

  • Combine ranges: [a-zA-Z] for all letters
  • Include hyphen in set: place at start/end
  • Negation: Use ! or ^ at start of set
    • file[!0-9].txt matches filea.txt but not file1.txt

Best Practices:

  • Avoid combining lowercase/uppercase in single range
  • Use separate ranges for upper/lower case letters
  • Can combine ranges with lists: [0-9abc]

Additional Information:

  • Manual page available: man 7 glob
  • Practice using provided exercise files
  • Globbing works across different commands
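A quick way to see the three wildcard types in action is to build a scratch directory of sample files (the file names are invented for the demo):

```shell
#!/usr/bin/env bash
# Glob expansion demo in a throwaway directory; LC_ALL=C gives a predictable sort order.
export LC_ALL=C
tmp=$(mktemp -d)
cd "$tmp" || exit 1
touch file.txt file1.txt file10.txt filea.txt

star=(file*.txt)       # * matches zero or more characters
one=(file?.txt)        # ? matches exactly one character
digits=(file[0-9].txt) # [0-9] matches one character from the digit set

echo "${star[@]}"      # file.txt file1.txt file10.txt filea.txt
echo "${one[@]}"       # file1.txt filea.txt
echo "${digits[@]}"    # file1.txt
```

Note that file10.txt matches the asterisk pattern but not the other two, since ? and [0-9] each match exactly one character.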

Bash Variable Scope

  1. Four Levels of Variable Scope:

a) Global Environmental Variables

  • Visible to entire OS
  • Set at system startup
  • Can be modified through:
    • /etc/profile
    • /etc/bashrc
    • ~/.bash_profile
    • ~/.bashrc
  • Requires ’export’ command for subprocess accessibility

b) Script-Level Variables

  • Default scope for new variables
  • Visible throughout the entire script
  • Accessible by all functions, commands, and statements within script

c) Script and Sub-process Variables

  • Variables accessible to script and its sub-processes/sub-shells
  • Regular variables can be made accessible to sub-processes using ’export’ command

d) Local Variables

  • Limited to specific code blocks
  • Created using ’local’ command
  • Primarily used within functions
  • Only visible within their defined block

Key Points:

  • Bash doesn’t have tight variable scope by default
  • ’export’ command is crucial for environmental and sub-process visibility
  • ’local’ command restricts variable scope to specific code blocks
  • Default variables are script-level unless specified otherwise
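The scope levels above can be demonstrated in a few lines (all variable names here are illustrative):

```shell
#!/usr/bin/env bash
# Script-level vs exported vs local variables.
script_var="script level"       # visible to the whole script, but not to sub-processes
export exported_var="exported"  # visible to sub-processes too

child_sees=$(bash -c 'echo "$exported_var"')
child_misses=$(bash -c 'echo "$script_var"')

show_local() {
  local inner="function only"   # 'local' limits scope to this function
  echo "$inner"
}
show_local > /dev/null

echo "$child_sees"     # exported
echo "$child_misses"   # (empty - the sub-shell never saw script_var)
echo "${inner-unset}"  # unset - the local variable is gone after the function returns
```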

Output to STDOUT and STDERR

  1. Purpose:
  • Make scripts behave like Linux commands
  • Send text to standard output (stdout) and standard error (stderr)
  2. Standard Output:
  • Use echo or printf commands
  • Default destination for text output
  • Simple syntax: echo "message"
  3. Standard Error:
  • Requires special syntax
  • Use >&2 to redirect to stderr
  • Example: echo "Error message" >&2
  4. Script Example (scriptoutput.sh):
#!/bin/bash
echo "This part of the script worked"
echo "Error: This part failed" >&2
  5. Implementation Steps:
  • Create script in ~/bin directory
  • Make executable with chmod u+x
  • Run script to see combined output
  • Use redirection to separate stdout and stderr
  6. Output Redirection:
  • Syntax: command > stdout.txt 2> stderr.txt
  • stdout.txt contains successful messages
  • stderr.txt contains error messages
  7. Verification:
  • Use cat command to view separated outputs
  • cat stdout.txt shows success messages
  • cat stderr.txt shows error messages

Key Benefit: Allows separate handling of normal output and error messages in scripts.
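The same separation can be exercised inside one script by capturing each stream from a function (the report function is an invented example):

```shell
#!/usr/bin/env bash
# Writes to both streams, then captures each one separately.
report() {
  echo "This part of the script worked"
  echo "Error: This part failed" >&2   # >&2 sends this line to stderr
}

out=$(report 2>/dev/null)      # keep stdout, discard stderr
err=$(report 2>&1 >/dev/null)  # swap the streams: keep stderr, discard stdout

echo "stdout was: $out"
echo "stderr was: $err"
```

The order of redirections in the second capture matters: 2>&1 duplicates stderr onto the current stdout first, and only then is stdout sent to /dev/null.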

Piping Data into a Script

  1. Basic Concept:
  • Commands like 'less' or 'grep' can receive piped input
  • Standard output of one command can be piped into standard input of another
  • Implementation uses ‘read’ command
  2. Script Creation (readpipe.sh):
#!/bin/bash
if [[ -p /dev/stdin ]]; then
    while IFS= read -r LINE; do
        pipearray+=("$LINE")
    done
    echo "${pipearray[@]}"
fi
  3. Key Components:
  • Checks if /dev/stdin is a pipe
  • Uses while-read loop to process input
  • Stores data in ‘pipearray’ variable
  • Displays array contents at end
  4. Implementation Steps:
  • Create script in ~/bin directory
  • Make executable with chmod u+x readpipe.sh
  • Test without piping (should show nothing)
  • Test with piping: cat /etc/passwd | ./readpipe.sh
  5. Script Functionality:
  • Initially just displayed piped input
  • Modified to store input in indexed array
  • Array can be used for further data processing
  • Validates pipe presence before processing
  6. Important Features:
  • Uses IFS= for precise line reading
  • -r flag prevents backslash interpretation
  • Array storage allows data manipulation
  • Only processes input when receiving piped data

Conditional Flow in BASH

Basic Syntax:

  • Similar to other languages
  • Can use ’then’ on same line as ‘if’ with semicolon
  • Supports else statements and multiple conditions
  • Each if/elif condition is checked sequentially

Command Execution as Conditions:

  • Can use command success/failure as conditions
  • Integrated with OS operations
  • Example: grep command returning 0 (success) or non-zero (failure)
  • Can negate results using exclamation point (!)

Square Brackets Comparison:

Single Square Brackets [ ]:

  • POSIX compliant
  • Works with older shells (including Bourne)
  • Are actual commands (built-in test command)
  • File name expansion and word splitting occur
  • Parameter expansion happens
  • Special operators (&&, ||, <, >) interpreted by shell
  • Requires quoting for variable values

Double Square Brackets [[ ]]:

  • BASH/ksh specific (not POSIX compliant)
  • Built into BASH as keywords
  • No file name expansion
  • No word splitting
  • Supports parameter expansion and command substitution
  • Handles special operators (&&, ||, <, >) directly
  • Supports automatic arithmetic evaluation (octal/hexadecimal)
  • Supports extended regular expression matches
  • Quoting not required (though recommended)

Recommendation:

  • Use double square brackets [[ ]] in most cases
  • More reliable and consistent
  • Better suited for modern shell scripting
  • Fewer edge cases and potential issues
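Two of the differences above are easy to show directly: word splitting inside the brackets, and using a command's exit status as the condition (the variable names are illustrative):

```shell
#!/usr/bin/env bash
# Word splitting: [[ ]] tolerates an unquoted variable containing a space; [ ] needs quotes.
name="two words"

if [[ $name = "two words" ]]; then dbl=ok; fi   # no word splitting inside [[ ]]
if [ "$name" = "two words" ]; then sgl=ok; fi   # [ ] requires the quotes to work

# A command's exit status can itself be the condition:
if echo "error: disk full" | grep -q error; then found=yes; fi
if ! echo "all good" | grep -q error; then clean=yes; fi   # ! negates the result

echo "$dbl $sgl $found $clean"   # ok ok yes yes
```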

Conditional Flow with Case Statement in Bash

  1. Case Statement vs IF Conditionals
  • More efficient than IF-THEN-ELSEIF for multiple pattern matching
  • Evaluates condition once and acts accordingly
  • Cannot use regular expressions, but supports:
    • Wild cards
    • Character sets
    • Character classes
  2. Basic Case Statement Structure
  • Evaluates variables against patterns (globs)
  • Action lists follow patterns in parentheses
  • Double semicolons (;;) terminate action lists
  • Asterisk (*) catches unmatched patterns (like ELSE)
  • Last condition doesn’t require termination
  3. Default Behavior
  • Executes only the first matching pattern’s action list
  • Exits after first match with ;;
  • Example: If age=5, only first matching pattern executes
  4. Bash 4 New Action List Terminators a) ;;& (Double Semicolon Ampersand)
    • Continues processing after match
    • Executes all matching patterns’ action lists
    • Example: age=5 executes both first and second matching patterns

b) ;& (Semicolon Ampersand)

  • Automatically executes next action list
  • Doesn’t evaluate next pattern
  • Example: age=10-19 executes matched pattern and next action list
  5. Advantages
  • More efficient than IF statements for multiple conditions
  • Single evaluation of variable
  • Flexible pattern matching options
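A minimal case statement using character sets, in the spirit of the age example above (the function name and ranges are invented for the sketch):

```shell
#!/usr/bin/env bash
# Case statement matching glob patterns and character sets.
describe_age() {
  case "$1" in
    [0-9])  echo "single digit" ;;   # one character from the set 0-9
    1[0-9]) echo "teens" ;;          # "1" followed by exactly one digit
    *)      echo "something else" ;; # catch-all pattern, like an else branch
  esac
}

describe_age 5    # single digit
describe_age 15   # teens
describe_age 42   # something else
```

With the default ;; terminator only the first matching pattern's action list runs; replacing it with ;;& or ;& changes that behavior as described above.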

Numeric Conditions in Bash

  1. POSIX Compatible Operators:
  • -lt: less than
  • -gt: greater than
  • -eq: equal
  • -le: less than or equal
  • -ge: greater than or equal
  2. String vs Numeric Comparison:
  • The <, >, and = symbols compare strings
  • Do not use them for numeric comparison
  3. Integer Math Forms:
  • Modern methods: double parentheses (()) and $(())
  • Double parentheses form used for conditionals
  • $(()) form used when output to standard out is needed
  4. Example Script (numericcondition.sh):
#!/bin/bash
if (($1 > $2)); then
    echo "The first argument is larger than the second"
else
    echo "The second argument is larger than the first"
fi
  5. Double Parentheses Behavior:
  • Returns return code based on expression outcome
  • Zero return = true condition
  • Non-zero return = false condition
  6. Extended Script with Sum Calculation:
sum=$(($1 + $2))
if [[ "$sum" -ge 10 ]]; then
    echo "The sum of the first two arguments is greater than or equal to 10"
else
    echo "The sum of the first two arguments is less than 10"
fi
  7. Usage Methods:
  • Can use mathematical expressions directly in if conditionals
  • Can store expression results in variables for later use
  • Both methods provide flexibility for mathematical conditions
  8. Script Execution:
  • Make executable using chmod u+x
  • Run with numeric arguments to test conditions

String Conditions in Bash

  1. Basic String Comparisons:

    • Compares characters in variables or static strings
    • Can compare numbers as characters (not numeric comparison)
  2. Empty String Tests:

    • -z: Checks if string has zero length (is empty)
    • -n: Checks if string is not empty
  3. String Equality Tests:

    • = : Tests if strings are equal
    • != : Tests if strings are not equal
  4. Sort Order Comparisons:

    • Can use > and < symbols
    • Based on ASCII codes
    • Not affected by locale settings

Important Note: When comparing numbers as strings, the comparison is based on character values, not numerical values.
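The character-vs-numeric point is worth seeing once, since the same two values give opposite answers (variable names are illustrative):

```shell
#!/usr/bin/env bash
# String tests vs numeric tests on the same values.
s=""
[ -z "$s" ] && empty=yes          # -z: true for a zero-length string

a="10"; b="9"
[[ "$a" < "$b" ]] && str_lt=yes   # as strings, "10" sorts before "9" ('1' < '9')
(( a > b )) && num_gt=yes         # as numbers, 10 is greater than 9

echo "$empty $str_lt $num_gt"     # yes yes yes
```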

File Condition Tests in Linux

File Existence & Type:

  • -e: File exists (any type)
  • -f: Regular file exists
  • -d: Directory exists
  • -c: Character device exists
  • -b: Block device exists
  • -p: Named pipe exists
  • -S: Socket exists
  • -L: Symbolic link exists

Permission & Security:

  • -g: SGID bit set
  • -u: SUID bit set
  • -r: Readable by current user
  • -w: Writeable by current user
  • -x: Executable by current user

File Properties:

  • -s: File size > 0 bytes
  • -nt: Newer than comparison file
  • -ot: Older than comparison file
  • -ef: Same device & inode

Best Practices:

  • Use built-in tests instead of ls
  • Use getfattr or stat for detailed file info
  • Avoid using ls for file checks
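A handful of these tests can be exercised against a scratch directory (the file names are invented for the demo; the back-dated timestamp exists only to make the -nt comparison deterministic):

```shell
#!/usr/bin/env bash
# File condition tests against a throwaway directory.
tmp=$(mktemp -d)
echo "data" > "$tmp/data.txt"
touch -d "2001-01-01" "$tmp/empty.txt"   # empty file, back-dated for the -nt test
mkdir "$tmp/subdir"

[ -e "$tmp/data.txt" ]  && exists=yes    # exists (any type)
[ -f "$tmp/data.txt" ]  && regular=yes   # regular file
[ -d "$tmp/subdir" ]    && isdir=yes     # directory
[ -s "$tmp/data.txt" ]  && nonempty=yes  # size greater than zero bytes
[ -s "$tmp/empty.txt" ] || zero=yes      # an empty file fails -s
[ "$tmp/data.txt" -nt "$tmp/empty.txt" ] && newer=yes  # newer modification time

echo "$exists $regular $isdir $nonempty $zero $newer"  # yes yes yes yes yes yes
```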

For Loop in Bash

  1. Basic Syntax:

    • Used for looping through finite list of items
    • Items assigned to variable during iteration
    • Format: for item in [list]; do [actions]; done
  2. List Sources:

    • Static lists (e.g., numbers, names)
    • Dynamically created lists
    • Command substitution
    • Array values
  3. List Creation Methods:

    • Bash expansion (preferred)
      • More reliable
      • Faster
      • No new shell spawning
    • Command substitution (use with caution)
  4. IFS (Internal Field Separator):

    • Default splits on blank spaces
    • Can cause issues with filenames containing spaces
    • Can be temporarily modified
    • Should be reset after use
    • Alternative: ‘while read’ command
  5. Array Looping:

    • Can loop through array values directly
    • Better to loop through indexes for indexed arrays
    • Process:
      • Create array
      • Get array count using ${#array[@]}
      • Adjust for zero-based indexing
      • Use the seq command for the index list
      • Access array items using index
  6. Parameter Expansion Limitations:

    • Occurs before math and variable expansion
    • May require sequence command in subshell
    • Eval command possible but defeats simplification purpose

Note: When dealing with filenames or spaces, consider using ‘while read’ instead of modifying IFS.
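The index-based array loop described above looks like this in practice (array contents are illustrative):

```shell
#!/usr/bin/env bash
# Looping through an indexed array by index.
animals=(dog cat horse)
count=${#animals[@]}    # number of elements: 3

joined=""
for i in $(seq 0 $((count - 1))); do   # adjust for zero-based indexing
  joined+="${animals[$i]} "
done
joined=${joined% }      # trim the trailing space

echo "$joined"          # dog cat horse
```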

While Loop

Definition & Purpose:

  • Alternative to for loop when conditional iteration is needed
  • Can handle infinite loops and conditional breaks
  • Better text handling compared to for loops

Types:

  1. While Loop

    • Iterates while condition is true
    • More commonly used
  2. Until Loop

    • Iterates until condition becomes true

Syntax & Rules:

  • Uses single square brackets [ ] for conditions
  • Supports wildcards but not regular expressions
  • Can use double square brackets [[ ]] for regular expressions when nested with if statements

Key Features:

  1. Infinite Loop:

    • Created using “while true”
    • Requires break command or user intervention to stop
    • Can nest if conditionals inside
  2. Conditional Loop Example:

    • Variable initialization (i=0)
    • Condition check (i < 4)
    • Increment counter
    • Exits when condition fails

Advantages:

  • Better handling of text input
  • Works well with blank spaces
  • More flexible than for loops
  • Can use read command effectively
  • Supports command line text processing
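The conditional-loop example from above, written out as a counter loop (the running sum is added just to give the loop body something to do):

```shell
#!/usr/bin/env bash
# Counter-driven while loop: runs while the condition is true, then stops.
i=0
sum=0
while [ "$i" -lt 4 ]; do
  sum=$((sum + i))   # accumulates 0 + 1 + 2 + 3
  i=$((i + 1))       # increment the counter
done

echo "iterations: $i, sum: $sum"   # iterations: 4, sum: 6
```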

16. Automating Jobs

Managing One-Time Jobs with ‘at’

Types of Scheduled Jobs:

  1. One-time jobs
  2. Recurring jobs

At Service:

  • Runs jobs at specific times or when CPU load average < 0.8 (batch jobs)
  • Syntax: at [time format]
  • Supports various time formats:
    • 12/24 hour clock (4:25 AM, 16:45)
    • General terms (midnight, noon, tomorrow)
    • Now + minutes/hours/days
    • Teatime (4:00 PM)
    • Time must precede date in format

Installation & Setup:

  1. Install: sudo yum install -y at
  2. Start service: sudo systemctl start atd
  3. Enable on boot: sudo systemctl enable atd

Creating At Jobs:

  1. Command: at now + 5min
  2. Enter commands at prompt
  3. End with Ctrl + D

Managing At Jobs:

  • View jobs: atq
  • Job details shown:
    • Job number
    • Time/date
    • Queue letter
    • Username
  • View job contents: at -c [job number]
  • Cancel job: atrm [job number]

Batch Jobs:

  • Different from at jobs
  • Run when system load < 0.8
  • Create with ‘batch’ command
  • Listed in atq with regular at jobs
  • Verify completion by checking created files

Example Commands:

  • mkdir ~/documents.bak
  • rsync -a ~/documents/ ~/documents.bak
  • touch ~/batchfile.txt

Note: For rsync, trailing slash is important for proper file copying.

Managing Recurring User Jobs with Cron

  1. Cron Service Overview
  • Used for creating recurring jobs
  • Two types of crontabs:
    • User crontabs (specific to each user)
    • System crontabs (system-wide)
  2. User Crontabs
  • Each user has their own
  • Can be managed without elevated privileges
  • Stored in /var/spool/cron/username
  3. System Crontabs
  • System-wide jobs run by OS
  • Requires superuser management
  • Stored in /etc/cron.d
  4. Cron Job Format (5 Time Fields + Command)
    a) Minutes (0-59)
      • Asterisk = every minute
      • Multiple values allowed (15,30,45)
      • Ranges and step values possible
    b) Hours (0-23)
      • 0 = midnight
    c) Day of Month (1-31)
    d) Month (1-12 or jan-dec)
    e) Day of Week (0-6, Sun-Sat)
    f) Command to run
  5. Installation & Setup

  • Usually pre-installed
  • Install command: sudo yum install -y cronie crontabs
  • Start service: sudo systemctl start crond
  • Enable service: sudo systemctl enable crond
  6. Managing Crontabs
  • Edit: crontab -e
  • List: crontab -l
  • Remove all: crontab -r
  • Online generator available: crontab-generator.org
  7. Documentation
  • Basic command info: man crontab
  • Format details: man 5 crontab
  8. Example Usage
  • Backup command example: 0 1 * * * rsync -a ~/documents/ ~/documents.bak (Runs daily at 1:00 AM)
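The five time fields are easier to read with each column labeled. This is the same backup example in user-crontab form (no user field), annotated with comments:

```
# +---------- minute (0-59)
# | +-------- hour (0-23)
# | | +------ day of month (1-31)
# | | | +---- month (1-12 or jan-dec)
# | | | | +-- day of week (0-6, Sun-Sat)
# | | | | |
  0 1 * * *  rsync -a ~/documents/ ~/documents.bak
```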

Managing System Cron Jobs

Location & Setup:

  • System cron jobs stored in /etc/cron.d
  • Requires elevated privileges (sudo) to create
  • Format similar to user cron jobs

Creating System Cron Job:

  1. Command: sudo vi /etc/cron.d/backupdocs
  2. Format differences from user cron:
    • User specification after time format
    • Requires absolute paths
    • Any user can be specified (needs proper permissions)

Shortcuts Available:

  • Pre-defined directories for different intervals:
    • /etc/cron.hourly
    • /etc/cron.daily
    • /etc/cron.weekly
    • /etc/cron.monthly
  • View directories: ls -d /etc/cron.*

Important Notes:

  • No service restart needed after changes
  • Cron re-reads files every minute automatically
  • Test commands with intended user before adding to crontab

Documentation Resources:

  1. man cron - Service information
  2. man crontab - Command usage
  3. man 5 crontab - File format details

Example Cron Job: 0 1 * * * root rsync -a /home/user1/documents/ /home/user1/documents.back

Configuring Network Time Protocol (NTP)

GUI Method:

  • Access through overview mode (top left corner)
  • Search “date and time”
  • Interface allows changes to:
    • Date
    • Time
    • Time zone
    • Automatic updates
    • 12/24 hour clock format

Time Sources:

  1. RTC (Real-time Clock)
    • Keeps time when computer is off
    • OS reads RTC at startup
  2. NTP (Network Time Protocol)
    • Contacts internet time server
    • Most accurate
    • Requires network connection

CLI Method (CentOS):

  1. Basic Commands:

    • timedatectl: shows
      • Local time
      • Universal time
      • RTC time
      • Time zone
      • Auto-sync status
      • DST status
  2. Time Zone Management:

    • List zones: timedatectl list-timezones
    • Filter zones: timedatectl list-timezones | grep America
    • Set zone: timedatectl set-timezone [zone name]
      Example: timedatectl set-timezone America/Vancouver
  3. Time/Date Setting:

    • Set time: timedatectl set-time HH:MM:SS
    • Set date: timedatectl set-time YYYY-MM-DD
    • Set both: timedatectl set-time "YYYY-MM-DD HH:MM:SS"
  4. Enable NTP:

    • Command: timedatectl set-ntp true
    • Enables automatic time updates from NTP server

17. Version Control

Git Installation and Configuration

Installation:

  • Install Git using package manager (e.g., sudo dnf install -y git on Enterprise Linux)

Configuration Levels:

  1. System-wide (/etc/gitconfig in Linux, Program Files\Git\etc\gitconfig in Windows)
  2. User-level/Global (affects all user projects)
    • Linux/Unix: ~/.gitconfig
    • Windows: $HOME\.gitconfig
  3. Project-level (.git/config in project directory)

Configuration Commands:

  • System-level: git config --system
  • User-level: git config --global
  • Project-level: git config --local (or no option)

Essential Configuration Steps:

  1. Set username: git config --global user.name "Your Name"

  2. Set email: git config --global user.email "your.email@example.com"

  3. Configure text editor: git config --global core.editor "vim" (Can use other editors like Nano, Notepad++, etc.)

  4. Enable color output: git config --global color.ui true

View Configuration:

  • List all settings: git config --list

Creating First Git Project:

  1. Create directory: mkdir ~/GitProjectOne
  2. Navigate to directory: cd GitProjectOne
  3. Initialize repository: git init
  4. Verify creation: ls -la (shows .git directory)

Important Notes:

  • .git directory contains all Git tracking information
  • Deleting .git directory removes Git tracking
  • Use same email/username across all systems for consistent commit history
  • Project-specific configurations stored in .git/config

Git File Management and Commit Process

  1. Creating and Adding Files
  • Create a file using text editor (e.g., vim ourfirstfile.txt)
  • Initial file tracking not automatic - must be explicitly told to Git
  • Check status using “git status”
  • Start tracking file using “git add [filename]” command
  • Can add specific file or all files in directory
  2. Making Commits
  • Git tracks changes, not file versions
  • Two ways to commit:
    a. Simple changes: git commit -m "[message]"
    b. Complex changes: git commit (opens editor for detailed message)
  • Basic workflow: Edit file → Stage (add) → Commit
  3. Additional Changes
  • After editing tracked files, Git shows status as "modified"
  • Can commit using:
    a. git add followed by git commit
    b. git commit -a (commits all changes, opens editor)
  1. Commit Message Best Practices
  • Use present tense (e.g., “This adds…” not “This added…”)
  • Structure:
    • Short summary line
    • Blank line
    • Detailed description
  • Be specific and descriptive
  • Include relevant information (e.g., bug tracker references)
  • No need to include date/version info (Git handles automatically)
  • Avoid vague messages like “fixes bugs”
  1. Status Checking
  • Use “git status” regularly to monitor:
    • Untracked files
    • Modified files
    • Staged changes
    • Current repository state
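
The edit → stage → commit cycle above, run in a scratch repository (file name and messages are illustrative):

```shell
# One full commit cycle, then a second commit using -a on a tracked file.
repo=$(mktemp -d)
cd "$repo"
git init
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "first line" > ourfirstfile.txt
git status --short                   # file shows as untracked (??)
git add ourfirstfile.txt             # start tracking / stage the file
git commit -m "Add ourfirstfile.txt with initial content"
echo "second line" >> ourfirstfile.txt
git commit -am "Append second line"  # -a stages already-tracked files
git log --oneline                    # history now has two commits
```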

Why Branch?

  • Key feature of Git allowing:
    • Testing new ideas
    • Quick switching between versions
    • Collaboration without disrupting current work
  • Advantages over rollback:
    • Creates test copy instead of reverting changes
    • Test in non-production environment
    • Can merge successful changes or discard failed ones
    • More efficient than creating new commits to undo changes

Creating and Using Branches

  1. Basic Commands:

    • git branch - shows current branches
    • git branch [name] - creates new branch
    • git checkout [name] - switches to specified branch
    • asterisk (*) indicates current active branch
  2. Branch Management:

    • New branches are exact copies of source branch
    • Changes in one branch don’t affect others
    • Each branch maintains separate commit history
    • Can delete unwanted branches entirely
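
A minimal sketch of the branch commands above (branch and file names are placeholders):

```shell
# Create a branch, switch to it, and commit on it without touching the source branch.
repo=$(mktemp -d)
cd "$repo"
git init
git config user.name "Demo"
git config user.email "demo@example.com"
echo base > file.txt
git add file.txt
git commit -m "Initial commit"
git branch testconfig      # new branch: an exact copy of the current branch
git branch                 # the asterisk (*) marks the active branch
git checkout testconfig    # switch to the new branch
echo tweak >> file.txt
git commit -am "Try a tweak on testconfig"
git branch --show-current
```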

Practical Application (System Administration Example):

  1. Environment Setup:

    • Production host
    • Test host
    • Development host
    • All connected to remote Git server
  2. Workflow:

    • Development:

      • Create test branch from master
      • Make and commit changes
      • Push to Git server
    • Testing:

      • Pull changes to test host
      • Switch to test branch
      • Verify changes
    • Production:

      • Merge successful changes to master
      • Push to server
      • Production host pulls verified configuration

Benefits:

  • Safer testing of configurations
  • Easy switching between versions
  • Documented change history
  • Efficient quality control process

Comparing, renaming, and deleting branches in Git

COMPARING BRANCHES:

  • Use "git diff" to compare branches instead of checking logs separately
  • Basic syntax: git diff branch1..branch2
  • Example: git diff master..testconfig
  • Output shows differences with:
    • "+" symbol for added lines
    • "-" symbol for removed lines
  • Add a caret (^) to compare against a branch's previous commit
    • Example: git diff master..testconfig^

RENAMING BRANCHES:

  • Use --move or -m option
  • Syntax: git branch --move oldname newname
  • Example: git branch --move testconfig development
  • Verify with: git branch

CREATING BRANCH COPIES:

  1. Switch to source branch: git checkout branchname
  2. Create new branch: git branch newbranchname
  3. Verify with: git branch

DELETING BRANCHES:

  • Use --delete or -d option
  • Syntax: git branch --delete branchname
  • Important restrictions:
    • Cannot delete the currently checked-out branch
    • Must switch to a different branch first
    • Will receive a warning if the branch has unmerged changes
  • Force delete with the -D flag to override the warning
  • Example: git branch --delete development-user1

VERIFICATION:

  • Always verify branch operations with: git branch
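
The compare, rename, and delete operations above can be walked through in a scratch repository (the default branch name varies between Git versions, so it is captured first; -D is used because the demo branch is unmerged):

```shell
# Compare two branches, rename one, then force-delete it.
repo=$(mktemp -d)
cd "$repo"
git init
git config user.name "Demo"
git config user.email "demo@example.com"
echo base > conf.txt
git add conf.txt
git commit -m "Initial commit"
start=$(git branch --show-current)         # master or main, depending on Git
git checkout -b testconfig
echo "extra option" >> conf.txt
git commit -am "Add extra option"
git diff "$start"..testconfig              # shows the added line with a "+"
git branch --move testconfig development   # rename the branch
git checkout "$start"                      # cannot delete the checked-out branch
git branch -D development                  # -D forces deletion of an unmerged branch
git branch                                 # verify the result
```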

Merging Branches in Git

Types of Merges:

  1. Fast Forward Merge
  • Occurs when the master branch remains unchanged
  • Only changes exist in the development branch
  • Simple merge process using "git merge"
  2. Non-Fast Forward Merge
  • Occurs when both branches have different changes
  • May require manual conflict resolution
  • More complex merge process

Basic Merge Process:

  1. List branches: git branch
  2. Check differences: git diff master..development
  3. Switch to master: git checkout master
  4. Merge: git merge development
  5. Verify: git diff
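
The fast-forward case of this process, sketched in a scratch repository (branch and file names are placeholders):

```shell
# Fast-forward merge: the source branch gained no commits, so the merge
# simply moves its pointer forward to the development branch's tip.
repo=$(mktemp -d)
cd "$repo"
git init
git config user.name "Demo"
git config user.email "demo@example.com"
echo base > app.conf
git add app.conf
git commit -m "Initial commit"
main=$(git branch --show-current)
git checkout -b development
echo "new setting" >> app.conf
git commit -am "Add new setting"
git checkout "$main"
git merge development           # reports "Fast-forward"
git diff "$main"..development   # no output: branches now match
```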

Handling Merge Conflicts:

  1. When conflicts occur:

    • Git shows “auto-merging failed” message
    • Use ‘git status’ to see affected files
    • Open conflicted files in editor
  2. Conflict Resolution:

    • Git marks conflicts with "<<<<<<<" and ">>>>>>>" markers (separated by "=======")
    • Manually edit file to resolve conflicts
    • Remove Git’s conflict markers
    • Save changes
  3. Complete Merge:

    • Add resolved file: git add [filename]
    • Commit changes: git commit
    • Default merge commit message provided
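
The conflict workflow above can be reproduced deliberately (the conflicting "port" values are invented for the demo):

```shell
# Force a conflict by changing the same line on two branches, then resolve it.
repo=$(mktemp -d)
cd "$repo"
git init
git config user.name "Demo"
git config user.email "demo@example.com"
echo "port = 80" > web.conf
git add web.conf
git commit -m "Initial commit"
main=$(git branch --show-current)
git checkout -b development
echo "port = 8080" > web.conf
git commit -am "Use port 8080"
git checkout "$main"
echo "port = 443" > web.conf
git commit -am "Use port 443"
git merge development || true   # automatic merge fails; markers written to file
cat web.conf                    # shows <<<<<<< / ======= / >>>>>>> markers
echo "port = 8080" > web.conf   # manual resolution: keep one version
git add web.conf                # stage the resolved file
git commit -m "Merge development, keep port 8080"
```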

Visualization:

  • Use "git log --graph --oneline" to view merge history graphically
  • Shows branch creation, different edits, and merge points

Best Practices:

  • Keep workflow simple for easy merges
  • Fast forward merges are simpler
  • Multiple people working on same file may require manual merges
  • Git usually handles simple conflicts automatically

Creating a GitHub Repository

  1. GitHub Overview
  • Popular platform for remote Git repository hosting
  • Offers free and paid hosting options
  • Free accounts have space and feature limitations
  2. Account Creation
  • Visit github.com
  • Click "Sign up"
  • Enter email and password (15+ characters, or 8+ characters with a number and a lowercase letter)
  • Choose a username
  • Verify the account (solve puzzle)
  • Enter the email verification code
  • Select the free plan
  • Complete the initial setup questions
  3. Repository Creation
  • Click "Create repository"
  • Name: "GitProjectOne"
  • Add a description
  • Choose visibility (public/private)
  • Optional: include a README file, .gitignore
  • Create the repository
  4. Repository Setup
  • Two protocol options: HTTPS and SSH
  • HTTPS requires a personal access token (since 2021; account passwords are no longer accepted)
  • SSH recommended for users already comfortable with SSH keys on Linux
  5. Adding the Remote Repository
  • Copy the SSH URL
  • Command: git remote add origin [SSH-URL]
  • Set up SSH authentication:
    • Use an existing SSH key
    • Copy the public key (cat ~/.ssh/id_rsa.pub)
    • Add it to GitHub (Settings → SSH and GPG keys)
  6. Pushing the Local Repository
  • Rename the current branch to main: git branch -M main
  • Push to GitHub: git push -u origin main
  • Accept the host key fingerprint if prompted
  • Verify the remote setup: git remote -v
  7. Verification
  • Check the GitHub repository
  • Navigate to Your repositories → GitProjectOne
  • Verify files are visible and accessible

Note: Repository can be configured for different fetch and push destinations if needed.
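
The remote workflow above can be rehearsed without a GitHub account by letting a local bare repository stand in for the remote (with GitHub the URL would instead be the copied SSH URL, e.g. git@github.com:user/GitProjectOne.git):

```shell
# A local bare repository plays the role of the GitHub remote.
hub=$(mktemp -d)/GitProjectOne.git
git init --bare "$hub"
repo=$(mktemp -d)
cd "$repo"
git init
git config user.name "Demo"
git config user.email "demo@example.com"
echo hello > readme.txt
git add readme.txt
git commit -m "Initial commit"
git branch -M main              # rename the current branch to main
git remote add origin "$hub"    # with GitHub: the SSH URL goes here
git push -u origin main         # -u sets the upstream for future pushes
git remote -v                   # verify fetch/push destinations
```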

18. Realizing Virtual and Cloud Environments

Cloud Computing vs Virtualization

Cloud Computing:

  • Automatic and dynamic scaling of computer resources (storage & computing power)
  • Resources expand/contract automatically based on demand
  • Named “cloud” due to dynamic size changes, similar to actual clouds
  • Key resources provided:
    • CPU cycles (AWS, Google Cloud, Azure)
    • Storage (Dropbox, OneDrive, Google Drive, pCloud)
    • Database services (Amazon RDS, Google Cloud SQL, Azure Database)
    • Networking (routing, switching, firewalls, VPNs, IP addressing)

Service Models:

  1. Infrastructure as a Service (IaaS)

    • Low-level access
    • User controls OS and software installation
    • Pay for infrastructure usage
  2. Platform as a Service (PaaS)

    • Mid-level abstraction
    • Developer writes to platform API
    • No direct OS interaction
    • Example: Google App Engine
  3. Software as a Service (SaaS)

    • Highest level abstraction
    • End-user applications
    • Example: Gmail

Cloud Types:

  • Private cloud
  • Public cloud
  • Hybrid cloud

Virtualization:

  • Creates virtual versions of physical resources through software
  • Components virtualized:
    • Computer hardware
    • Storage devices
    • Network resources
  • Uses a hypervisor to intercept hardware requests
  • Example: VirtualBox (Type 2 hypervisor)
    • Provides virtual CPUs, hard drives, network cards, ports, display cards, BIOS
    • Guest OS operates unaware of virtualization

Relationship:

  • Virtualization not technically required for cloud computing but practically essential
  • Enables easy scaling of resources in cloud environments
  • Can be used independently of cloud computing
  • Facilitates efficient resource management in cloud services

Types of Hypervisors

  1. Basic Categories:
  • Type 1 (Bare Metal)
  • Type 2 (Hosted)
  2. Type 2 Hypervisors:
  • Run as applications on a host OS
  • Examples: VirtualBox, VMware Workstation
  • Two layers between guest OS and physical machine
  • Resource flow: Guest OS → VM → Hypervisor → Host OS → Physical Machine
  3. Type 1 Hypervisors:
  • Run directly on hardware (bare metal)
  • Examples: VMware ESXi, Xen, KVM
  • One layer between guest OS and physical machine
  • Resource flow: Guest OS → VM → Hypervisor → Physical Machine
  4. Virtualization Modes:

a) Emulation:

  • Not true virtualization
  • Translates every instruction in software
  • Very slow but can emulate different CPU types
  • Uses JIT emulation and instruction caching

b) Paravirtualization (PV):

  • Software virtualization without hardware support
  • Direct CPU instruction passing
  • Requires guest OS awareness
  • Near-native performance on 32-bit, slower on 64-bit
  • Doesn’t provide complete virtual computer

c) Full Virtualization (HVM):

  • Hardware-based virtualization
  • Requires CPU support (Intel VT-x or AMD SVM)
  • Guest OS doesn’t need virtualization awareness
  • Provides complete virtual computer
  • Uses emulation for BIOS and slower devices

d) Full Virtualization with PV Drivers:

  • Combines full virtualization with optimized drivers
  • Improves performance for non-hardware-virtualized components
  • Compatible with Windows
  • Available for most Type 1 hypervisors

e) Hybrid Virtualization (PVH) - Xen specific:

  • Eliminates emulation
  • Uses hardware virtualization + paravirtualization
  • Fastest performance
  • Smaller attack surface
  • Used by Amazon AWS for Linux guests

VM Initialization and Tools

  1. Basic VM Creation
  • Simple process for a single VM using a Type 2 hypervisor
  • Download ISO, create VM, start VM, select ISO, install OS
  • Similar to a physical machine installation
  2. Large-scale VM Deployment
  a) Automated Installation Files
  • Kickstart (Red Hat)
  • Preseed (Debian)
  • AutoYaST (SUSE)
  • Reduces inconsistencies
  • Can be accessed via USB, hard drive, or network location
  b) Cloning Method
  • Create a golden image
  • Make multiple clones
  • Limited by host resources
  3. VM Migration & Storage
  • Traditional method: shutdown, export, copy, import (time-consuming)
  • Better solution: remote storage repository/pool
  • Live migration possible with Type 1 hypervisors
  • Supports seamless host maintenance
  4. Resource Management - Thin Provisioning:
  • Allocates resources as needed
  • Applies to storage, CPU, memory
  • More efficient than thick provisioning
  • Requires dynamic resource allocation capability
  5. Cloud VM Management
  • Cloud-init for data injection
  • Can inject:
    • Metadata
    • SSH keys
    • User accounts
    • Network configuration
    • Shell scripts
  6. Libvirt Framework
  • Universal management tool
  • Cross-platform compatibility
  • Supports multiple hypervisors:
    • Xen
    • LXC
    • OpenVZ
    • QEMU
    • VirtualBox
    • VMware
    • Hyper-V

Key Tools:

  • virsh (interactive shell)
  • virt-clone
  • virt-install
  • virt-top
  • Virtual Machine Manager GUI

Documentation available at libvirt.org/docs.html
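
The cloud-init data injection mentioned above is driven by a user-data file. A minimal, hypothetical #cloud-config sketch (hostname, user name, key, and package choices are all placeholders) might look like:

```yaml
#cloud-config
# Hypothetical user-data: injected into the instance at first boot.
hostname: web01
users:
  - name: admin
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@example.com   # placeholder public key
packages:
  - httpd                  # install a web server during provisioning
runcmd:
  - systemctl enable --now httpd   # shell commands run on first boot
```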

Containers Overview

  • Evolution: Emulation (QEMU) → Virtual Machines → Containers
  • Containers are more resource-efficient than VMs
  • Containers package applications with necessary dependencies, libraries, and files

Linux Container Systems:

  1. linuxcontainers.org (LXC):
  • Includes LXC, LXD, LXCFS, and distrobuilder
  • Creates lightweight OS-like environments ("system containers")
  • Sits between virtualization and application container systems
  2. Docker:
  • Lightweight application containers
  • Runs on the host OS kernel
  • Popular due to its easy management system

Key Docker Commands:

  • attach: Connect container I/O to host
  • build: Create new container image
  • commit: Save container changes
  • cp: Copy files between host/container
  • exec: Run commands in container
  • images: List local images
  • inspect: Show container details
  • kill: Forcefully stop a container (SIGKILL)
  • login/logout: Docker hub access
  • logs: Get container logs
  • ps: List running containers
  • pull/push: Retrieve/upload images
  • rm/rmi: Delete container/image
  • run: Start container
  • stop/start: Container control

Practical Docker Implementation:

  1. Installation: sudo dnf install -y docker
  2. Basic Container Management:
  • Check running containers: sudo docker ps
  • Pull Apache container: sudo docker pull docker.io/library/httpd:latest
  • Run container: sudo docker run -d -t -p 8088:80 --name testweb httpd
  • Connect to container: sudo docker exec -it testweb bash
  • Stop container: sudo docker stop testweb
  • Remove container: sudo docker rm testweb
  • Remove image: sudo docker rmi httpd

Important Concepts:

  • Containers require port mapping for network communication
  • Containers operate similarly to VMs regarding networking
  • Version control similar to Git (pull, push, commit)
  • Can specify versions (e.g., httpd:2.4)

19. System Orchestration

Configuration Management Systems Overview

  1. Popular Systems:
  • Puppet
  • Chef
  • Salt (SaltStack)
  • Ansible
  2. General Architecture:
  • Control host and client setup
  • Clients may use agents or be agentless
  • Communication through secure protocols
  3. Puppet:
  • Control host: Puppet Master
  • Client: Puppet Agent
  • Uses SSL connections (OpenSSL)
  • Configuration: Puppet manifests (Ruby-like format)
  • Compiles manifests per host
  • High server load but secure
  4. Chef:
  • Uses RabbitMQ message queues for communication
  • Configuration stored as cookbooks and recipes
  • Written in Ruby
  5. Salt (SaltStack):
  • Control host: Salt Master
  • Client: Salt Minion
  • Uses ZeroMQ for encrypted communication
  • Can run agentless using SSH
  • Configuration in YAML format
  • Uses Salt Pillars for sensitive data
  • Highly scalable (thousands of clients)
  6. Ansible:
  • Agentless system using SSH
  • Features SSH pipelining for multiple commands
  • Configurations stored as playbooks and plays
  • Similar to Salt's agentless mode
  7. Common Features:
  • Support for Linux, Unix, macOS, and Windows
  • OS-specific implementation of configurations
  • Text-based configuration storage
  8. Git Integration:
  • All configurations stored as text files
  • Git tracking recommended for change management
  • Remote repositories enable admin collaboration
  • Supports branching for dev/test/production environments
  9. Best Practices:
  • Recommended for managing multiple machines
  • Can handle hundreds to thousands of hosts
  • Supports complex configuration scenarios
  • Enables testing in cloned environments before production deployment
  10. Advanced Features:
  • Event-driven systems
  • Command queuing
  • Variable data management
  • Scalability options
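
As a concrete sketch of the playbook format these systems store in Git, a minimal hypothetical Ansible play follows (the inventory group "webservers" and the choice of chrony are illustrative; the modules are standard ansible.builtin modules):

```yaml
---
# Ensure an NTP client is installed and running on all hosts in "webservers".
- name: Baseline time configuration
  hosts: webservers
  become: true
  tasks:
    - name: Install chrony
      ansible.builtin.package:
        name: chrony
        state: present
    - name: Enable and start chronyd
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
```

Because the play describes a desired state rather than a sequence of commands, running it repeatedly leaves the hosts unchanged once they comply.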

Server Roles

NTP (Network Time Protocol) Server

  • Maintains and distributes accurate time to other hosts
  • Critical for services like authentication
  • Syncs with reliable time source

SSH (Secure Shell) Server

  • Enables remote login
  • Allows remote command execution
  • Supports data tunneling

Web Servers

  • Serves webpages to browsers
  • Common stacks: Apache, Nginx

Certificate Authority

  • Issues digital certificates
  • Verifies public key ownership
  • Used in HTTPS websites

DNS Name Servers

  • Translates names to IP addresses
  • Common software: BIND

SNMP Server

  • Collects information about network devices
  • Monitors various network equipment (modems, routers, servers, etc.)

File Servers

  • Stores files
  • Main protocols: CIFS, NFS

Authentication Server

  • Verifies credentials
  • Can issue cryptographic tickets
  • Example protocol: Kerberos

Proxy Server

  • Acts as intermediary between client and other servers
  • Forwards client requests

Log Server

  • Centralizes logging from multiple hosts
  • Can include event monitoring capabilities

VPN Server

  • Endpoint for VPN connections
  • Authenticates clients
  • Creates secure tunnels

Monitoring Server

  • Tracks system conditions
  • Creates alerts based on thresholds
  • Notifies administrators of issues

Database Server

  • Stores databases
  • Often works with web servers
  • Powers dynamic websites

Print Server

  • Manages network printing services

Mail Server

  • Handles email services
  • Can act as forwarding agent

Load Balancer

  • Distributes traffic across multiple servers
  • Improves performance and reliability

Clustering Server

  • Enables server grouping
  • Provides failover capabilities
  • Increases application availability
  • Can form high-performance computing clusters (HPC)

Infrastructure and Build Automation

Infrastructure Automation:

  • Definition: Process of scripting/automating environments
  • Components:
    • OS installation
    • Cloud instance setup
    • Software configuration
    • Host communication
    • Network building
  • Benefits:
    • Consistent configuration across multiple servers
    • Complete documentation through code
    • Scalable deployment

Tools for Infrastructure Automation:

  • Configuration management systems
  • Salt Cloud (cloud deployment)
  • AWS CloudFormation
  • Puppet
  • Ansible
  • SaltStack
  • Chef
  • Kubernetes
  • Terraform
  • Google Cloud Deployment Manager
  • Microsoft Azure Automation

Build Automation:

  • Definition: Automating software development build process
  • Key processes:
    • Code compilation to binary
    • Binary packaging
    • Automated testing
  • Occurs before infrastructure deployment

Popular Build Automation Tools:

  • Jenkins
  • CircleCI
  • LambdaTest
  • Bamboo
  • Travis CI
  • Apache Maven

Key Distinction: Infrastructure automation focuses on environment setup, while build automation handles software development processes.

APPLICATION DEPLOYMENT

Definition:

  • Activities making software system available for use
  • Process varies per system but aims to deploy software to end users

Key Deployment Steps:

  1. Release
  • Follows the development process
  • Transfers applications to production systems
  • Determines resource requirements
  • Documents deployment activities
  2. Installation
  • Establishes installation procedures
  • Uses Ansible for:
    • Defining the software state on the target host
    • Starting and activating the software
  3. Activation
  • First-time application running
  • Includes:
    • License agreements
    • User setup questions
  4. Deactivation
  • Shuts down system components
  • Required for:
    • System updates
    • Application decommissioning
  5. Uninstallation
  • Removes the application from the host system
  • May require reconfiguration

Deployment Environments:

  • Development → Test → Production (Release Candidate) → Full Production

Web Application Deployment Process:

  1. Database setup (SQL scripts)
  2. Java database connectivity configuration
  3. HTTP ports and virtual hosts configuration
  4. Application installation and startup
  5. Firewall configuration
  6. Web server routing setup
  7. Static HTML content placement
  8. Web server configuration reload
  9. External firewall access configuration
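
A few of the steps above (static content placement, firewall opening, web server reload) could be expressed as Ansible tasks like these. The modules are real Ansible modules, but the paths and service names are assumptions for illustration:

```yaml
# Illustrative task fragment from a hypothetical deployment playbook.
- name: Copy static HTML content
  ansible.builtin.copy:
    src: site/index.html          # assumed local source path
    dest: /var/www/html/index.html
- name: Open HTTP port in the firewall
  ansible.posix.firewalld:
    service: http
    permanent: true
    immediate: true
    state: enabled
- name: Reload web server configuration
  ansible.builtin.service:
    name: httpd                   # assumed Apache on a RHEL-family host
    state: reloaded
```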

Key Points:

  • Process should be simple and transparent
  • IT automation reduces deployment time significantly
  • Ansible playbooks are idempotent (consistent results with multiple runs)

Orchestration

Definition:

  • Automated configuration, coordination, and management of computer systems and software
  • Executes multiple automated tasks as part of larger workflows/processes

Modern Deployment Challenges:

  • Multiple applications and complex dependencies
  • Clustered applications
  • Multiple data centers worldwide
  • Public, private, and hybrid clouds

Analogy:

  • Similar to orchestra conductor coordinating different instruments
  • In IT: coordinates frontend/backend services, databases, monitoring, networks, storage
  • Each component has different configurations, roles, and deployment requirements

Types of Orchestration Systems:

  1. OpenStack Heat
  2. AWS CloudFormation
  3. Docker Swarm
  4. Kubernetes
  5. Apache Mesos

Key Functions:

  • Automatic resource management for dynamic demand
  • Self-healing capabilities
  • Dynamic scaling based on load
  • Infrastructure automation for testing
  • Automated instance management (creation/termination)

Major Orchestration Platforms:

Kubernetes:

  • Developed by Google
  • Open-source and free
  • Uses YAML files for configuration
  • Components: cluster service, app pods, workers
  • Scalable and fault-tolerant
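
As an illustration of the YAML configuration mentioned above, a minimal Deployment manifest (the name and label are placeholders) asks the cluster to keep three replicas of an httpd container running:

```yaml
# Hypothetical Kubernetes Deployment: the cluster service continuously
# reconciles the actual state toward these three replicas (self-healing).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: testweb
  template:
    metadata:
      labels:
        app: testweb
    spec:
      containers:
        - name: httpd
          image: httpd:2.4
          ports:
            - containerPort: 80
```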

Docker Swarm:

  • Specific to Docker containers
  • Manages container clusters
  • Monitors cluster health
  • Easy integration with existing Docker systems

Apache Mesos:

  • Functions as distributed kernel
  • Different from traditional container orchestration
  • Can handle various workload types
  • Requires additional tools (Marathon, Kronos, Jenkins) for full orchestration
  • Manages resource allocation across systems

Benefits:

  • Handles dynamic cloud demands
  • Eliminates need for manual intervention
  • Supports modern distributed systems
  • Enables efficient resource management
  • Facilitates complex deployment scenarios

Use Cases:

  • Development and testing environments
  • Dynamic scaling of applications
  • Infrastructure management
  • Container orchestration
  • Resource optimization