LFCS - Linux Foundation Certified System Administrator
Do you linux?
- Introduction
- 1. Essential Commands
- Log into local and remote graphical and text mode consoles
- Read and use System Documentation
- Create, delete, copy and move
- Create and manage hard links
- Create and manage soft links
- File permissions
- SUID SGID and sticky bit
- Search for files
- Compare and manipulate file content
- Pages and VI demo
- Search file using grep
- Analyze text using basic regular expressions
- Extended regular expressions
- Archive, back up, compress, unpack, and uncompress files
- Back up files to the remote system
- Use input-output redirection
- Work with SSL Certificates
- Git: Basic operations
- Git: Staging and committing changes
- Git: Branches and remote Repositories
- 2. Operations Deployment
- Boot, reboot, and shutdown a system safely
- Use scripting to automate system maintenance tasks
- Manage startup process and services
- Create Systemd Services
- Diagnose and manage processes
- Locate and analyze system log files
- Manage software with package manager
- Configure the repositories of package manager
- Install software by compiling code
- Verify integrity and availability of resources and processes
- Change kernel runtime parameters, both persistent and non-persistent
- List and identify SELinux file and process contexts
- Create and enforce MAC using SELinux
- Create and manage containers
- Manage and configure virtual machines
- Create and boot a virtual machine
- Installing an operating system on a virtual machine
- 3. Users and Groups
- Create, delete, and modify local user accounts
- Create, delete, and modify local groups and group memberships
- Manage system-wide environment profiles
- Manage template user environment
- Configure user resource limits
- Manage user privileges
- Manage access to root account
- Configure the system to use LDAP user and group accounts
- 4. Networking
- Theory: Configure IPv4 and IPv6 networking and hostname resolution
- Configure IPv4 and IPv6 networking and hostname resolution
- Start, stop, and check status of network services
- Theory: Configure bridge and bonding devices
- Configure bridge and bonding devices
- Configure packet filtering (firewall)
- Port redirection and network address translation (NAT)
- Implement reverse proxies and load balancers
- Set and synchronize system time using time servers
- Configure SSH servers and clients
- 5. Storage
- List, create, delete, and modify physical storage partitions
- Configure and manage swap space
- Create and configure filesystems
- Configure systems to mount filesystems at or during boot
- Filesystem and mount options
- Use remote filesystems: NFS
- Use network block devices: NBD
- Manage and configure LVM storage
- Monitor storage performance
- Create, manage, and diagnose advanced filesystem permissions
Introduction
Course Introduction
Linux Overview
- Linux is a versatile and widely adopted operating system
- Known for stability, security, and flexibility
- Preferred platform for developers due to:
- Robust command line interface
- Extensive package management system
- Vast array of open source software
- Powers everything from personal computers to enterprise servers and embedded systems
Course Structure
- Course is divided into several parts to accommodate varying levels of expertise
- Sections:
- Essential Commands
- Operations Deployment
- Users and Groups
- Networking
- Storage
About LFCS Certification
- Developed by the Linux Foundation to meet the demand for Linux admin talent
- Validates system administration skillset and helps stand out in the market
Course Coverage
- 5 domains:
- Essential Commands (logging in, files/directories)
- Operations Deployment (system boot, task automation, resource management)
- Users and Groups (user/group management, resource quotas, advanced auth)
- Networking (network services, routing, packet filtering)
- Storage Management (LVM, RAID, encrypted storage, advanced file system perms)
Prerequisites
Linux Foundation Certified System Administrator (LFCS) Exam Details
- No prerequisites: anyone can register for and take the exam
- Exam Objectives:
- Essential Commands: 20% of possible points
- Operations Deployment: 25% of possible points
- Users and Groups: 10% of possible points
- Networking: 25% of possible points
- Storage: 20% of possible points
- Exam Format:
- 2-hour duration
- Entirely performance-based
- Simulates on-the-job tasks
- No multiple-choice or true/false questions
- Exam Cost and Validity:
- $395 USD (at the time of recording)
- Valid for 2 years
- Exam Administration:
- Proctored exam
- Available through browser, can be taken from home
- Exam Registration: Details to be discussed towards the end of the course
1. Essential Commands
Log into local and remote graphical and text mode consoles
Logging into a Linux System
- Logging into a Linux system is similar to logging into apps or websites
- 4 ways to log in:
- Local text-mode console
- Local graphical-mode console
- Remote text-mode login
- Remote graphical-mode login
Console, Virtual Terminal, and Terminal Emulator
- Console: a screen where the OS displays text and allows login or command input
- Virtual terminal: a software-based console, e.g., pressing Ctrl + Alt + F2 on a Linux machine
- Terminal emulator: a graphical app that runs in a window, showing text output and allowing command input
Local Logins
- Local: a device in front of you, e.g., a computer or laptop
- Logging into a local Linux system:
- With GUI: choose user, enter password, and log out when finished
- Without GUI: enter username and password, no GUI components
Remote Logins
- Remote: a device not in front of you, e.g., a server in the cloud
- Remote graphical login:
- VNC (Virtual Network Computing) solution: download VNC client, connect to remote server
- RDP (Remote Desktop Protocol) solution: use Windows Remote Desktop Connection
- Remote text-based login: uses OpenSSH daemon on the server and an SSH client on the local machine
- SSH (Secure Shell) protocol: secure, encrypted communication between client and server
- OpenSSH daemon: a program that runs in the background on the server, allowing secure remote logins
SSH Client and Server
- SSH client: a program that runs on the local machine, connecting to the remote SSH daemon
- SSH daemon: a program that runs on the remote server, listening for incoming connections
Local Graphical Login
- Select username and enter password to log in
- Can use graphical interface or terminal emulator
- Log out when finished
Remote Graphical Login using RDP
- Demonstrated from a Windows machine
- Use Remote Desktop Connection to connect to remote Linux machine
- Enter IP address (e.g., 10.0.0.81) and username/password to log in
- Can interact with remote desktop as if physically present
Remote Text Mode Login using SSH
- Demonstrated from a Windows, Mac, or Linux machine
- Use SSH client to connect to remote Linux machine
- Command: `ssh username@ip_address` (e.g., `ssh user@10.0.0.81`)
- Enter password to log in
- Can interact with remote machine using command line interface
Read and use System Documentation
- Linux provides multiple ways to access help manuals and documentation from the command line
- `--help` option: displays a brief help message for a command, including command line options
- `man` command: displays the manual for a command, including a short description, syntax, and detailed description
- Manual pages are categorized into sections (e.g., section 1 for user commands, section 3 for programming functions)
- `apropos` command: searches for man pages containing a specific keyword or phrase
- `mandb` command: updates the manual page database, which is required for `apropos` to work
- Auto completion: pressing Tab can complete commands, file names, or directory names
- Practice using `man` and `--help` to develop the ability to quickly look for help when needed
Tips and Tricks
- Use `--help` for quick reminders of command line options
- Use `man` for more detailed information about a command
- Use `apropos` to search for commands related to a specific topic
- Use auto completion to save time when typing commands
- Practice using system documentation to develop problem-solving skills and to prepare for the LFCS exam
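A quick sketch of these documentation commands (assuming a Linux system with GNU coreutils; `whatis` and `apropos` come from the man-db package and are guarded in case they are not installed):

```shell
# Print the first line of a command's built-in help (a brief usage summary)
ls --help | head -n 1

# One-line description from the man-page database (skipped if man-db is absent)
command -v whatis >/dev/null && whatis ls || true

# Search man-page descriptions for a keyword; run `sudo mandb` first if this
# prints nothing, since apropos depends on the man-page database
command -v apropos >/dev/null && apropos directory | head -n 3 || true
```

The `|| true` guards only keep the sketch from failing on minimal systems; interactively you would simply run `whatis ls` or `apropos directory`.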
Create, delete, copy and move
File System Tree:
- A hierarchical organization of files and directories
- Root directory (/) at the top, with branches (subdirectories) and leaves (files) growing downward
Paths:
- Absolute paths: start with the root directory (/) and specify the full path to a file or directory
- Relative paths: specify a path relative to the current working directory
- .. refers to the parent directory of the current directory
Commands:
- `ls`: list files and directories
  - Options: `-a` (show all files, including hidden files), `-l` (long listing format), `-h` (human-readable file sizes)
- `pwd`: print working directory (show the current directory)
- `cd`: change directory (move to a different directory)
- `touch`: create a new file
- `mkdir`: make a new directory
- `cp`: copy a file or directory
  - Options: `-r` (recursive copy, for directories)
- `mv`: move or rename a file or directory
- `rm`: remove a file or directory
  - Options: `-r` (recursive delete, for directories)
Tips and Reminders:
- Use `cd -` to return to the previous working directory
- Use `cd` without arguments to return to the home directory
- End directory paths with a slash (/) to avoid ambiguity
- Use absolute or relative paths with commands, depending on the context
- Be careful with the `rm` and `mv` commands: `rm` permanently deletes files and directories, and `mv` can silently overwrite an existing destination
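The commands above can be exercised end to end in a throwaway directory (a runnable sketch; all file and directory names are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"                # work in a temporary directory

mkdir -p projects/notes          # make a directory (-p creates parents too)
touch projects/notes/todo.txt    # create an empty file
cp projects/notes/todo.txt todo-copy.txt   # copy a file
cp -r projects projects-backup   # copy a directory (requires -r)
mv todo-copy.txt done.txt        # rename (move) a file
rm done.txt                      # remove a file
rm -r projects-backup            # remove a directory (requires -r)
ls projects/notes                # the original file is still there
```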
Create and manage hard links
Hard Links:
- A hard link is a file that points to an inode, which contains metadata and pointers to data blocks on disk
- Multiple hard links can point to the same inode, allowing multiple filenames to access the same data
- Hard links are created using the `ln` command, with the syntax `ln target_file link_file`
- Hard links are useful for sharing files between users without duplicating data
- When a hard link is deleted, the inode is not affected unless all hard links to it are deleted
- Data is only deleted from the filesystem when there are no more hard links to it
Inodes:
- An inode (Index Node) is a data structure that contains metadata and pointers to data blocks on disk
- Inodes keep track of file permissions, modification times, and other metadata
- Inodes are used by the filesystem to locate and manage files
Stat Command:
- The `stat` command displays file system information about a file, including the inode number and number of hard links
Limitations of Hard Links:
- Hard links can only be created for files, not directories
- Hard links can only be created on the same filesystem
Permissions and Hard Links:
- When creating a hard link, make sure the user has write permissions at the destination
- When hard linking a file, ensure that all users involved have the required permissions to access the file
- Changing permissions on one hard link affects all hard links to the same inode
Note: Soft links (also known as symbolic links) are not discussed in this section, but they are a different type of link that allows linking to directories and across filesystems.
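A minimal demonstration of the behavior described above, run in a temporary directory (assumes GNU `stat`; `%i` is the inode number and `%h` the hard link count):

```shell
set -e
cd "$(mktemp -d)"
echo "shared data" > original.txt
ln original.txt hardlink.txt      # ln target_file link_file

stat -c '%i %h' original.txt      # inode number and link count (now 2)
stat -c '%i %h' hardlink.txt      # same inode: both names reach the same data

rm original.txt                   # deleting one link does not delete the data
cat hardlink.txt                  # the remaining link still reads "shared data"
```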
Create and manage soft links
Soft Links (Symbolic Links):
- A soft link is a file that points to a path instead of an inode
- Soft links are created using the `ln` command with the `-s` or `--symbolic` option
- The syntax is `ln -s target_file link_file`, where `target_file` is the path to the file or directory being linked to, and `link_file` is the name of the soft link file being created
- Soft links can point to files or directories on the same or different filesystems
- Soft links are similar to shortcuts in Windows, redirecting access to the target file or directory
Characteristics of Soft Links:
- Soft links have a different inode number than the target file
- Soft links have permissions that do not affect access to the target file
- Soft links can become broken if the target file or directory is moved or renamed
- Soft links can be created with relative paths, which are relative to the directory where the soft link is created
Commands:
- `ln -s`: creates a soft link
- `ls -l`: displays information about a soft link, including the path it points to
- `readlink`: displays the path stored in a soft link
Best Practices:
- Use relative paths when creating soft links to avoid broken links if the directory structure changes
- Be aware that soft links can become broken if the target file or directory is moved or renamed
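The characteristics above, including how a link breaks, can be seen in a short sandboxed run (file names are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"
echo "hello" > target.txt
ln -s target.txt shortcut.txt       # relative soft link: ln -s target_file link_file

readlink shortcut.txt               # prints the stored path: target.txt
cat shortcut.txt                    # following the link reads "hello"

mv target.txt renamed.txt           # moving the target breaks the link
cat shortcut.txt 2>/dev/null || echo "broken link"   # the link file still exists,
                                                     # but its stored path is dead
```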
File permissions
Listing File Permissions
- Use the `ls -l` command to list file permissions
- The first character on the line indicates the file type (e.g. `-` for a regular file, `d` for a directory, `l` for a soft link)
- The next 9 characters represent the file permissions:
- The first 3 characters represent the permissions for the user who owns the file
- The next 3 characters represent the permissions for the group associated with the file
- The last 3 characters represent the permissions for other users
Setting and Changing File Permissions
- Use the `chgrp` command to change the group associated with a file or directory: `chgrp <group_name> <file_or_directory>`
- Use the `chown` command to change the user owner of a file or directory: `chown <user_name> <file_or_directory>` (only the root user can do this)
- You can change both the user owner and group with the `chown` command using the syntax: `chown <user_name>:<group_name> <file_or_directory>`
- The `r`, `w`, and `x` permissions have different meanings depending on the context:
  - For files:
    - `r` (read): allows the user, group, or other users to read the contents of the file
    - `w` (write): allows the user, group, or other users to write to and modify the file
    - `x` (execute): allows the user, group, or other users to execute the file (e.g. if it's a program or shell script)
  - For directories:
    - `r` (read): allows the user, group, or other users to list the contents of the directory
    - `w` (write): allows the user, group, or other users to create, delete, or modify files within the directory
    - `x` (execute): allows the user, group, or other users to traverse the directory (i.e. change into it)
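A small sketch of reading the permission string (the `chmod` command used here to set the bits is standard coreutils, though it is covered in more depth in the next lesson; the file name is made up):

```shell
set -e
cd "$(mktemp -d)"
touch report.txt
chmod 640 report.txt            # rw- for the owner, r-- for the group, --- for others

ls -l report.txt                # first char '-' = regular file, then rw-r-----
stat -c '%A %U %G' report.txt   # symbolic permissions, owner, and group in one line
```

Changing the owner (`chown`) requires root, and `chgrp` requires membership in the target group, so those are left out of this self-contained sketch.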
SUID SGID and sticky bit
SUID (Set User Identification)
- Allows a user to run an executable with the permissions of the executable’s owner
- Useful for allowing users to perform specific actions that require elevated privileges without granting them full access to the owner’s account
- Example: John can run an application that reads reports from a folder, and the application will assume Emily’s permissions to read the files
- Set using the `chmod` command with a four-digit octal code, where the first digit is the SUID digit (4)
SGID (Set Group Identification)
- Applies to both executables and directories
- When set on an executable, it runs with the permissions of the group owner
- When set on a directory, new files created in that directory will inherit the group ownership
- Useful for collaborative environments where multiple users need to access and modify files
- Example: John and Emily can run an application that has permissions of the reports group, and new files created in the directory will also be part of the reports group
Sticky Bit
- A special permission that can be set on directories
- Restricts file deletion in that directory
- Only the file owner, directory owner, or superuser (root) can delete files in a directory with the sticky bit set
- Useful for shared directories where multiple users can create files, but should not be able to delete or modify files created by others
- Example: Emily creates a file in a directory with the sticky bit set, and John cannot delete the file
These special permissions can be useful in various scenarios, such as:
- Allowing users to perform specific actions that require elevated privileges without granting them full access to an account
- Maintaining group ownership of files in collaborative environments
- Restricting file deletion in shared directories
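The leading octal digit described above can be verified with `stat` (a sandboxed sketch assuming GNU `stat`; the `s` and `t` characters in the symbolic output mark SUID/SGID and the sticky bit):

```shell
set -e
cd "$(mktemp -d)"
touch app
mkdir shared

chmod 4755 app          # 4 = SUID: the program runs with its owner's identity
stat -c '%A' app        # -rwsr-xr-x: 's' replaces the owner's execute 'x'

chmod 3775 shared       # 2 (SGID) + 1 (sticky bit) = 3 as the leading digit
stat -c '%a %A' shared  # 3775 drwxrwsr-t: group 's' plus sticky 't'
```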
Search for files
Why Search for Files?
- Even with a well-organized file system, you may need to search for files in Linux, especially in scenarios where you’re not sure where a file is located or need to find files with specific characteristics.
Typical Scenarios
- Finding all image files in a directory (e.g., /usr/share) with a command like `find /usr/share -name "*.jpg"`
- Finding large files (e.g., larger than 20 GB) to free up disk space
- Finding recently modified files (e.g., in the last minute) to track changes
The `find` Command
- The basic syntax of the `find` command is `find <directory> -<search_parameter> <search_value>`
- The `find` command can be used to search for files based on various criteria, such as:
  - `-name`: searches for files with a specific name (e.g., `find /bin/ -name file1.txt`)
  - `-iname`: searches for files with a specific name, disregarding case sensitivity (e.g., `find /bin/ -iname file1.txt`)
  - `-name` with wildcards: searches for files with a pattern in their names (e.g., `find /bin/ -name "f*"` to find files starting with "f")
Tips and Tricks
- Always specify the directory path before the search parameters (e.g., `find /bin/ -name file1.txt`, not `find -name file1.txt /bin/`)
- Use the analogy "first I have to go there, then I will find it" to remember to specify the directory path before the search parameters
- The `find` command can be used with other parameters, such as `-size` to search for files of a specific size, or `-mtime` to search for files modified within a certain time period
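The search criteria above can be tried against a freshly created directory tree (a runnable sketch with made-up file names; `-size +1M` and `-mmin` are GNU find extensions):

```shell
set -e
cd "$(mktemp -d)"
mkdir -p photos docs
touch photos/cat.jpg photos/dog.JPG docs/report.txt
truncate -s 5M photos/big.jpg    # a sparse 5 MB file to search for by size

find . -name "*.jpg"             # exact-case match: cat.jpg and big.jpg only
find . -iname "*.jpg"            # case-insensitive: also finds dog.JPG
find . -size +1M                 # files larger than 1 MB
find . -mmin -1 -name "*.txt"    # .txt files modified within the last minute
```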
Compare and manipulate file content
Viewing File Content
- Use the `cat` command to view the contents of a small file
- Use the `tac` command to view the file in reverse order (from bottom to top)
- Use the `tail` command to view the last few lines of a file (default is 10 lines; the number can be specified with the `-n` option)
- Use the `head` command to view the first few lines of a file (default is 10 lines; the number can be specified with the `-n` option)
Manipulating File Content
- Use the `sed` command (stream editor) to search and replace text in a file
- The general syntax for `sed` is `sed 's/search_pattern/replacement/g' file_name`
- The `s` command searches for a pattern and replaces it with a replacement string
- The `g` flag at the end of the command means "global" and replaces all occurrences of the pattern on each line, not just the first one
- Use single quotes around the `sed` expression to prevent the command interpreter (bash) from interpreting special characters
Example Usage
- `cat /home/users.txt` to view the contents of a small file
- `tac /home/users.txt` to view the file in reverse order
- `tail -n 5 /home/users.txt` to view the last 5 lines of a file
- `head -n 5 /home/users.txt` to view the first 5 lines of a file
- `sed 's/Canda/Canada/g' userinfo.txt` to replace all occurrences of "Canda" with "Canada" in the file userinfo.txt
Best Practices
- Preview the changes you want to make before applying them with `sed -n 's/Canda/Canada/gp' userinfo.txt`: the `-n` option suppresses normal output and the `p` flag prints only the lines where a substitution was made, so you can check for mistakes. Note that `sed` only prints results to the screen; it does not modify the file unless you use the `-i` option.
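Putting the preview-then-apply workflow together (a runnable sketch using a throwaway copy of the userinfo.txt example):

```shell
set -e
cd "$(mktemp -d)"
printf 'alice Canda\nbob Canda\ncarol USA\n' > userinfo.txt

# Preview: -n suppresses normal output, the p flag prints only changed lines
sed -n 's/Canda/Canada/gp' userinfo.txt

# Apply: without -i sed only prints; -i edits the file in place
sed -i 's/Canda/Canada/g' userinfo.txt
grep -c 'Canada' userinfo.txt       # two lines were corrected
```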
Pages and VI demo
Pagers
- A pager is a program that allows you to open multiple pages of text and navigate through them while on the terminal
- Two common pagers are `less` and `more`
- `less` has more features than `more`
- To use `less`, type `less` followed by the name of the file you want to open (e.g. `sudo less /var/log/syslog`)
- Features of `less`:
  - Use arrow keys to move up and down through the file
  - Press `/` to search for text
  - Press `n` to move to the next instance of the search term
  - Press `N` to move to the previous instance of the search term
  - Press `q` to exit the pager
- `more` is similar to `less`, but with fewer features
- To use `more`, type `more` followed by the name of the file you want to open (e.g. `more /var/log/syslog`)
- Features of `more`:
  - Press the space bar to move to the next page
  - Press `q` to exit the pager
Vim
- Vim stands for Vi IMproved
- Vim is a mode-sensitive text editor
- To open Vim, type `vim` followed by the name of the file you want to edit (e.g. `vim myfile.txt`)
- Vim has three modes:
- Command mode: the default mode, where you can enter commands
- Insert mode: where you can type text into the file
- Visual mode: where you can select text
- To enter insert mode, press the `i` key
- To exit insert mode, press the `Esc` key
- Some basic Vim commands:
  - `i` to enter insert mode
  - `Esc` to exit insert mode
  - `:q` to quit Vim
  - `:wq` to save and quit Vim
Note: This is just a brief overview of pagers and Vim, and there are many more features and commands available in both.
Search file using grep
Basic syntax: `grep 'search_pattern' file_name`
Example: `grep 'password' ssh_config`
This will search for the word “password” in the file “ssh_config” and display all lines that contain the word.
Options:
- `-i`: ignore case (makes the search case-insensitive)
- `-r`: recursive (searches all files in the specified directory and its subdirectories)
- `--color`: forces grep to color-code the output
- `-v`: inverts the search (displays lines that do not contain the search pattern)
Examples with options:
- `grep -i 'password' ssh_config`: searches for "password" in a case-insensitive manner
- `grep -r 'password' /etc/ssh/`: searches for "password" in all files in the /etc/ssh/ directory and its subdirectories
- `grep --color 'password' ssh_config`: searches for "password" and color-codes the output
- `grep -v 'password' ssh_config`: displays lines that do not contain the word "password"
Note: You can combine multiple options to customize your search.
Analyze text using basic regular expressions
Regular Expressions (Regex) Basics
- Regex is a way to specify complex search conditions using operators and patterns.
- In Linux, regex is used with commands like `grep` to search for patterns in files.
Basic Regex Operators
- `^`: Caret, matches the start of a line
- `$`: Dollar sign, matches the end of a line
- `.`: Period, matches any single character
- `*`: Asterisk, matches zero or more occurrences of the preceding pattern
- `+`: Plus sign, matches one or more occurrences of the preceding pattern
- `?`: Question mark, makes the preceding pattern optional
- `|`: Vertical pipe, OR operator, matches either the preceding or following pattern
- `[]`: Square brackets, matches any single character listed within the brackets
- `()`: Parentheses, groups patterns together
- `{}`: Braces, specifies a range of occurrences for the preceding pattern
- Note: in basic regular expressions, `+`, `?`, `|`, `()`, and `{}` must be escaped with a backslash (e.g. `\+`) to act as operators; extended regular expressions (`grep -E`, covered in the next lesson) treat them as operators directly
Examples
- `^#`: Matches lines that start with a `#` character (comments)
- `^PASS`: Matches lines that start with the string "PASS"
- `7$`: Matches lines that end with the digit "7"
- `mail$`: Matches lines that end with the string "mail"
Using Regex with grep
- `grep '^#' file`: Searches for lines that start with a `#` character in the file
- `grep -v '^#' file`: Searches for lines that do not start with a `#` character in the file (inverted results)
- `grep -w '7$' file`: Searches for lines that end with "7" as a whole word (the `-w` option enforces word boundaries)
- `grep 'mail$' file`: Searches for lines that end with the string "mail"
Tips
- Use the `-w` option with `grep` to match whole words only.
- Use the `-v` option with `grep` to invert the search results.
- Combine regex operators to create more complex search patterns.
Extended regular expressions
Extended Regular Expressions (ERE)
- ERE is a more advanced regex syntax that allows for more complex patterns.
- In `grep`, ERE is enabled by adding the `-E` option.
ERE Operators
- `+`: Matches one or more occurrences of the preceding pattern
- `?`: Makes the preceding pattern optional
- `{min,max}`: Matches between `min` and `max` occurrences of the preceding pattern
- `|`: OR operator, matches either the preceding or following pattern
- `()`: Groups patterns together
- `[]`: Matches any single character listed within the brackets
- `*`: Matches zero or more occurrences of the preceding pattern
- `.`: Matches any single character
- `^`: Matches the start of a line
- `$`: Matches the end of a line
Examples
- `grep -E '0+' file`: Finds all lines that contain one or more zeros
- `grep -E '[0]{3,}' file`: Finds all lines that contain at least three consecutive zeros
- `grep -E '1{1,3}0*' file`: Finds all lines that contain one to three ones followed by any number of zeros
- `grep -E 'disabled?' file`: Finds all lines that contain "disable" or "disabled" (the final "d" is optional)
- `grep -E '(enabled|disabled)' file`: Finds all lines that contain "enabled" or "disabled"
Ranges and Sets
- `[a-z]`: Matches any lowercase letter from a to z
- `[0-9]`: Matches any digit from 0 to 9
- `[abz954]`: Matches any one of the characters listed in the set
Subexpressions
- `(pattern)*`: Matches the subexpression zero or more times
- `(pattern)+`: Matches the subexpression one or more times
- `(pattern)?`: Makes the subexpression optional
Example with Subexpressions
- `grep -E '/dev/[a-zA-Z]*([0-9]+[a-zA-Z]*)*' file`: Finds all lines that contain device file names like `/dev/sda1` or `/dev/tty0p0`

Note: The example uses a subexpression to match the device file name pattern, which can be repeated zero or more times using the `*` operator. `grep -E` does not understand the Perl-style `\d` shorthand for digits, so the character class `[0-9]` is used instead.
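A runnable sketch of the ERE operators above, against invented sample lines:

```shell
set -e
cd "$(mktemp -d)"
printf 'disable\ndisabled\nenabled\n1000\n/dev/sda1\n' > conf

grep -E 'disabled?' conf             # ? makes the final "d" optional -> 2 lines
grep -E '(enabled|disabled)' conf    # | alternation inside a group
grep -c -E '0{3,}' conf              # {3,} = at least three zeros in a row
grep -E '/dev/[a-z]+[0-9]+' conf     # device names like /dev/sda1
```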
Archive, back up, compress, unpack, and uncompress files
What is Archiving?
- Archiving is the process of packing multiple files and directories into a single file, called a tarball.
- This makes it easier to move, upload, and download the data.
What is Tar?
- Tar is a popular tool for archiving and unarchiving files in Linux.
- It stands for Tape Archive, and was originally used to prepare data for backup on magnetic tapes.
- Tar is a packer and unpacker that can take multiple files and directories and pack them into a single tar file.
Tar Command Options
- Tar allows specifying command line options in three different ways:
  - Long options (e.g. `--list`)
  - Short options (e.g. `-t`)
  - Single-character options (e.g. `t`)
- It's recommended to use the long options (e.g. `--list`) when starting out, as they are easier to remember.
- The `--file` or `-f` option should always be used to specify the path to the tar file.
Tar Commands
- `tar --create --file archive.tar file1`: Creates a new tar archive called archive.tar containing the file file1.
- `tar --append --file archive.tar file2`: Adds the file file2 to the existing tar archive archive.tar.
- `tar --create --file archive.tar directory/`: Adds the entire directory and its contents to the tar archive archive.tar.
- `tar --list --file archive.tar`: Displays the contents of the tar archive archive.tar.
- `tar --extract --file archive.tar`: Extracts the contents of the tar archive archive.tar into the current directory.
- `tar --extract --file archive.tar -C /tmp`: Extracts the contents of the tar archive archive.tar into the /tmp directory.
Important Notes
- Before extracting from a tar archive, it's a good idea to use the `--list` option to check the paths of the files that will be extracted.
- Tar archives store permission and ownership information of files and directories, but extracting as a non-root user may not preserve these permissions.
- Running tar with `sudo` allows ownership and permission information to be preserved when extracting.
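The tar commands above can be exercised end to end in a temporary directory (a runnable sketch with made-up file names):

```shell
set -e
cd "$(mktemp -d)"
mkdir docs
echo "hello" > docs/a.txt
echo "world" > b.txt

tar --create --file archive.tar docs/    # pack a directory into a new archive
tar --append --file archive.tar b.txt    # add another file to the same archive
tar --list --file archive.tar            # inspect the stored paths before extracting

mkdir restore
tar --extract --file archive.tar -C restore/   # extract into restore/ instead of .
cat restore/docs/a.txt                         # the file came back intact
```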
How to compress and uncompress files in Linux.
- Compression utilities: Most Linux systems come with three compression utilities pre-installed: gzip, bzip2, and xz.
- Compressing files: To compress files, you can use commands like `gzip file1`, `bzip2 file2`, or `xz file3`. These commands create compressed versions of the files (e.g., file1.gz, file2.bz2, etc.) and automatically delete the original files.
- Decompressing files: To decompress files, you can use commands like `gunzip file1.gz`, `bunzip2 file2.bz2`, or `unxz file3.xz`. These commands recreate the original uncompressed files and delete the compressed files.
- Keeping original files: If you want to keep the original files after compressing or decompressing, use the `--keep` or `-k` option, like `gzip --keep file1` or `bzip2 -k file2`.
- Viewing compressed file contents: Some commands support a `--list` option to view the contents of a compressed file.
- Zip utility: The `zip` utility can pack and compress entire directories or multiple files into a single archive, unlike `gzip` and the others, which can only compress a single file.
- Creating a zip archive: To create a zip archive, use commands like `zip archive.zip file1`, or `zip -r archive.zip pictures` to compress a directory recursively.
- Unpacking and decompressing zip files: To unpack and decompress a zip file, use the `unzip` command, like `unzip archive.zip`.
- Using tar and compression utilities together: `tar` can pack files into an archive that is then compressed with utilities like `gzip`, `bzip2`, or `xz`. Alternatively, `tar` can pack and compress in one step using options like `--auto-compress`.
- Unpacking and decompressing tar archives: When unpacking and decompressing `tar` archives, you don't need to specify the decompression utility, as `tar` can figure it out based on the file name extension.
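A runnable sketch combining the pieces above (assumes GNU tar and a gzip recent enough to support `--keep`; file names are made up):

```shell
set -e
cd "$(mktemp -d)"
echo "some data" > file1

gzip --keep file1              # produces file1.gz; -k/--keep preserves the original
ls file1 file1.gz

# Pack and compress in one step: --auto-compress picks gzip from the .tar.gz name
tar --create --auto-compress --file backup.tar.gz file1

mkdir restore
tar --extract --file backup.tar.gz -C restore/   # no need to name the compressor:
                                                 # tar detects it from the extension
cat restore/file1
```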
Back up files to the remote system
Rsync
- `rsync` is a popular tool for backing up data by synchronizing files between two systems over a network connection.
- The remote server must have an SSH daemon running.
- The general syntax is: `rsync -a source destination`
- The `-a` option ensures that `rsync` synchronizes subdirectories, file permissions, modification times, and more.
- Example: `rsync -a /local/pictures/ aaron@9.9.9.9:/remote/pictures/`
- `rsync` will only copy changed data on subsequent runs, making future backups more efficient.
- You can also use `rsync` to synchronize two local directories.
Syncthing Features:
- Continuous file synchronization tool
- Cross-platform (Linux, Windows, macOS)
- Peer-to-peer architecture
Key Capabilities:
- Send-only mode:
- One-way synchronization
- Source remains unchanged
- Useful for backup scenarios
Versioning Options:
- Simple versioning
- Staggered versioning
- Trash can versioning
- External versioning
Security Features:
- TLS encryption
- Device authentication
- Access controls
- No central server required
Configuration:
- Web-based interface
- Folder sharing options
- Conflict resolution
- Bandwidth limits
- Ignore patterns
DD
- `dd` is a tool for backing up an entire disk or partition by creating an exact, bit-by-bit copy (also known as imaging).
- Before using `dd`, unmount the disk or partition to prevent data changes during the backup process.
- Example command: `sudo dd if=/dev/sdb of=/path/to/image/file.img bs=1M status=progress`
- `if` specifies the input file (disk or partition device), `of` specifies the output file (image file), `bs` specifies the block size (at least 1 MB for efficiency), and `status=progress` shows the progress of the backup process.
- To restore a disk image from a file, simply reverse the `if` and `of` options.
- Note: Do not blindly run this example on a virtual machine, as writing to the wrong device will overwrite the virtual disk.
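Since `dd` treats devices and regular files identically, its behavior can be practiced safely by imaging a regular file instead of a real disk (a runnable sketch; `status=none` just silences the transfer statistics):

```shell
set -e
cd "$(mktemp -d)"
# Stand-in for a disk device like /dev/sdb: 1 MB of random data in a regular file
head -c 1M /dev/urandom > disk.img

dd if=disk.img of=backup.img bs=64K status=none   # bit-by-bit copy
cmp disk.img backup.img && echo "identical copy"  # verify the image matches
```

Restoring would swap `if` and `of`, exactly as described above for real devices.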
Use input-output redirection
This lesson covers redirecting input and output in Linux, as well as piping output from one program to another.
Input Redirection
- Input redirection is used to redirect input from a file to a program.
- The less-than sign (`<`) is used to indicate input redirection.
- Example: `sort < file.txt` redirects input from file.txt to the `sort` program.
Output Redirection
- Output redirection is used to redirect output from a program to a file.
- The greater-than sign (`>`) is used to indicate output redirection.
- Example: `sort file.txt > sortedfile.txt` redirects the output of `sort` to sortedfile.txt.
- If you want to append to a file instead of overwriting it, use the double greater-than sign (`>>`).
Standard Input, Output, and Error
- Standard input (stdin) is the default input source for a program.
- Standard output (stdout) is the default output destination for a program.
- Standard error (stderr) is the default error destination for a program.
- The `>` and `<` symbols can be prefixed with a file descriptor number to specify the type of output or input being redirected: `1>` redirects standard output, `2>` redirects standard error.
Piping Output
- Piping output is used to redirect output from one program to another.
- The pipe sign (`|`) is used to indicate piping output.
- Example: `grep pattern file.txt | sort` pipes output from `grep` to `sort`.
- Piping output can be used to chain multiple programs together to perform complex tasks.
Here Documents and Here Strings
- A here document embeds a multi-line block of text directly in the command line as input to a program.
- A here string redirects a single line of text as input to a program.
- Examples: `bc << EOF` (followed by input lines and a closing `EOF`) and `bc <<< "expression"`.
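To make the difference concrete, here is the same idea with common utilities (`sort` and `wc` instead of `bc`, so no extra packages are needed):

```shell
# Here string: a single line fed to wc -w (word count); prints 3
wc -w <<< "one two three"

# Here document: multiple lines fed to sort, ending at the EOF marker
sort << EOF
banana
apple
EOF
```

The `EOF` marker is arbitrary; any word works as long as the opening and closing markers match.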
Work with SSL Certificates
This lesson covers the basics of SSL/TLS certificates, how to create and inspect them using OpenSSL, and clarifies the terminology around SSL and TLS.
SSL vs. TLS
- SSL (Secure Sockets Layer) was used for a long time, but it had many security issues.
- TLS (Transport Layer Security) is an upgrade over SSL, closing many security holes.
- Although SSL is outdated, the name “SSL” is still widely used in tools and documentation.
What are SSL/TLS certificates?
- Certificates authenticate a website and encrypt network traffic between the user and the website.
- They solve two problems:
- How to ensure the website is legitimate and not a clone created by a malicious hacker.
- How to ensure that no one can steal sensitive data as it’s being sent through the network.
Creating certificates with OpenSSL
- OpenSSL is a utility used to create, manage, and inspect TLS certificates.
- The `req` subcommand is used to generate certificate signing requests (CSRs).
- The `x509` subcommand is used to interact with X.509 certificates.
Generating a private key and certificate signing request (CSR)
- The command `openssl req -newkey rsa:2048 -keyout key.pem -out req.pem` generates a private key and a CSR.
- The CSR contains information about the organization, such as country, organization name, and common name (e.g., bharambe.dev).
Generating a self-signed certificate
- The command `openssl req -x509 -newkey rsa:4096 -noenc -days 365 -keyout myprivate.key -out mycertificate.crt` generates a self-signed TLS certificate.
- This certificate is not trusted by default, but can be used for internal or testing purposes.
Inspecting certificates
- The `openssl x509` subcommand can be used to inspect existing certificates.
- The `-in` option specifies the input file, and the `-text` option prints the certificate in text form.
Tips and reminders
- Use `man openssl` to get help on the available subcommands and options.
- Use `man openssl-req` and `man openssl-x509` to get specific help on these subcommands.
- Remember to use `req` for generating CSRs and `x509` for interacting with existing certificates.
Git: Basic operations
This lesson introduces the concept of version control systems, specifically Git, and how they can help manage code changes in a collaborative development environment.
The problem of collaborative development
- In a team of 10 developers, each person makes changes to the codebase, adding, deleting, and modifying files.
- It’s hard to keep track of all the changes, and it’s essential to have a way to update team members on changes made by others.
The solution: Git
- Git is a distributed version control system that allows multiple people to work on the same codebase.
- A Git repository stores code alongside information about each change.
- There are two types of repositories: local (personal) and remote (shared, central location).
How Git works
- When you make changes to your local repository, you can push them to the remote repository to update the project.
- When you want to get the latest changes from the remote repository, you pull them into your local repository.
Setting up Git
- Install Git if it’s not already installed (e.g., `sudo apt install git` on Linux).
- Set your username and email with `git config --global user.name` and `git config --global user.email`.
- These settings are not login credentials, but metadata that identifies who made changes to the code.
Initializing a Git repository
- Create a directory for your project (e.g., `mkdir project`).
- Initialize the Git repository with `git init`.
- This creates a hidden `.git` directory that stores repository information.
Basic Git concepts
- Git doesn’t continuously track changes; you need to inform Git about changes you make.
- You’ll learn more about the process of tracking changes in the next lesson.
This lesson provides a solid introduction to the basics of Git and sets the stage for exploring more advanced topics in future lessons.
Git: Staging and committing changes
Staging and Committing Changes with Git
Three steps to modifying code and tracking changes with Git:
- Make changes in the working area: The working area is the project directory where changes are made. In this lesson, two files (file1 and file2) were added to the project directory.
- Add changes to the staging area: The staging area is where we tell Git about the changes we want to track. You can use `git status` to show what changes have been made to the working area. Then, use `git add file1 file2` to stage the changes. This command tells Git that we want to track these changes in the next commit.
- Commit the changes: Committing creates a snapshot of the project at that point in time, like taking a picture of the staging area. The changes are not tracked until they are committed.
Why Git Doesn’t Automatically Track All Changes
For example, if we add 50 lines of code today and intend to add another 50 lines tomorrow, we don’t want Git to track the partial change. We want to wait until we finish adding all our lines of code. Additionally, if we modify 10 different files, we don’t want Git to track 10 different changes. Instead, we want to track this as a single modification in our project’s lifecycle.
Using git status
Run `git status` to show what changes have been made to the working area. This command is useful when working on a complex project and you want to know what changes you have made.
Adding Multiple Files to the Staging Area
For example, we can use `git add *.html` to add all files with the `.html` extension in the current directory. We can also use `git add products/` to add an entire subdirectory and all its files to the staging area.
Unstaging Files
You can unstage a file using `git reset file2`. Another option is `git rm --cached file2`; however, running `git rm` without the `--cached` parameter has unintended consequences: it removes the file from the project entirely, not just from the staging area.
Committing Changes
It is standard practice to add a message to each commit using the `-m` option, for example `git commit -m "Added files file1 and file2"`. This message helps other team members understand why changes were made.
Deleting Files
You can delete a file from the project using `git rm file3`. This command removes the file from the working area and stages the deletion for the next commit.
Committing the Deletion
We then need to commit the deletion using `git commit -m "Deleted file3"`. This creates a new snapshot of the project without the deleted file.
Tracking Changes
Tracking changes with Git consists of three main steps: making changes in the working area, adding changes to the staging area, and committing the changes. This allows us to track the history of our project and see what changes were made over time.
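The three steps above can be walked through end to end in a scratch repository (the user name and email are placeholders; the local `git config` lines just let the commit succeed on a fresh machine):

```shell
cd "$(mktemp -d)"
git init
git config user.email "student@example.com"   # placeholder identity
git config user.name "Student"
echo "hello" > file1
git status                    # file1 shows as untracked (working area)
git add file1                 # move the change to the staging area
git commit -m "Added file1"   # snapshot the staging area
git rm file1                  # delete the file and stage the deletion
git commit -m "Deleted file1"
git log --oneline             # both commits appear in the history
```

Each commit in the log is one deliberate snapshot, which is exactly why Git does not track changes automatically.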
Branches
Branches allow us to work on different features or versions of our project without affecting the main codebase.
Git: Branches and remote Repositories
Git Branches
- Multiple versions of a project can be worked on simultaneously using different branches
- Each branch can have its own development road, but can be reunited (merged) later
- The master branch is the default branch and is often used to track the stable version of a project
Creating a New Branch
- `git branch <branch-name>` creates a new branch
- `git branch --list` or `git branch` shows all available branches
- `git checkout <branch-name>` switches to the specified branch
Working with Branches
- Changes can be made to a file in a branch without affecting the master branch
- `git add <file>` stages changes, `git commit -m "<message>"` commits changes
- `git log` shows a list of commits, including details of what was changed
- `git log --raw` shows more detailed information about each commit
- `git show <commit-hash>` shows the changes made in a specific commit
Merging Branches
- Merging brings changes from one branch into another
- `git checkout <branch-to-merge-into>` switches to the branch to merge into
- `git merge <branch-to-merge-from>` merges the specified branch into the current branch
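A minimal branch-and-merge round trip looks like this (branch and file names are illustrative; the default branch may be master or main depending on configuration, so it is captured into a variable):

```shell
cd "$(mktemp -d)"
git init
git config user.email "student@example.com"   # placeholder identity
git config user.name "Student"
echo "base" > app.txt
git add app.txt && git commit -m "Initial commit"
default=$(git symbolic-ref --short HEAD)   # master or main, depending on config
git checkout -b feature                    # create and switch in one step
echo "new code" >> app.txt
git commit -am "Add feature"
git checkout "$default"
git merge feature                          # bring the feature commits in
grep "new code" app.txt                    # the change is now on the default branch
```

`git checkout -b` combines `git branch` and `git checkout` into one command.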
Remote Repositories
- A remote repository is a central location where team members can upload and download code changes
- `git remote add <name> <connection-string>` adds a remote repository to the local repository
- `git push origin <branch>` uploads local changes to the remote repository
- `git pull origin <branch>` downloads changes from the remote repository and merges them into the local branch
Configuring SSH Keys
- `ssh-keygen` generates a public and private key pair
- The public key is added to the remote repository (e.g. GitHub)
- The private key is used to authenticate with the remote repository
Cloning a Repository
- `git clone <connection-string>` creates a new local repository from a remote repository
- This is useful for new team members who want to start working on a project
Getting Help with Git
- `git <command> --help` shows help for a specific command
- `man git-<command>` (e.g. `man git-commit`) shows detailed documentation for a specific command
2. Operations Deployment
Boot, reboot, and shutdown a system safely
Booting, Rebooting, and Shutting Down Linux Systems Safely
- Use the `systemctl` command to reboot or shut down a Linux machine.
- `systemctl` requires system administrator privileges, so:
  - Root user: `systemctl reboot` or `systemctl poweroff`
  - Regular user: `sudo systemctl reboot` or `sudo systemctl poweroff`
- Force reboot/shutdown (use with caution):
  - `sudo systemctl reboot --force` or `sudo systemctl poweroff --force`
  - `sudo systemctl reboot --force --force` (last resort, like pressing the reset button)
Scheduling Reboots/Shutdowns
- Use the `shutdown` command for scheduled reboots or shutdowns.
- Examples:
  - Shut down at 2:00 AM: `shutdown 02:00`
  - Shut down X minutes later: `shutdown +X` (e.g., `shutdown +15`)
  - Reboot instead: `shutdown -r` (e.g., `shutdown -r 02:00`)
- Wall message: display a message to logged-in users before reboot/shutdown, e.g., `shutdown -r 02:00 'Scheduled restart to upgrade our Linux kernel.'`
SystemD Targets
- Linux uses SystemD targets to boot up the system and load necessary programs.
- The default target can be checked with: `systemctl get-default`
- The default target is stored in a file, e.g., `graphical.target`, which contains instructions on what programs to load and in what order.
Changing the Default Boot Target
- To change the default target, use: `sudo systemctl set-default <target>`
- Examples:
  - `sudo systemctl set-default multi-user.target` (boot into a text-based interface, no graphical UI)
  - `sudo systemctl set-default graphical.target` (boot into a graphical interface)

Switching Targets without Rebooting
- To switch to a different target without rebooting, use: `sudo systemctl isolate <target>`
- Example: `sudo systemctl isolate graphical.target` (switch to a graphical interface from a text-based interface)
Other Useful Targets
- `emergency.target`: loads only essential programs, mounts the root file system as read-only; useful for debugging
- `rescue.target`: loads a few more programs than `emergency.target`, drops the user into a root shell; can be used for database backups, fixing settings, etc.
Note: To use these targets, a password must be set for the root user.
Use scripting to automate system maintenance tasks
Bash Basics:
- Bash is a command interpreter/shell that executes commands
- Can be used interactively or through scripts
- Scripts are files containing multiple instructions executed in order
Creating Scripts:
- File Creation:
- Use .sh extension (standard practice but not mandatory)
- First line must be shebang (#!/bin/bash)
- No spaces before shebang
- Script Components:
- Comments start with # (interpreter ignores these lines)
- Can use regular command line syntax
- Can use redirection, pipes, etc.
- Making Scripts Executable:
- Must grant execute permissions
- Use chmod command
- Can grant permissions to owner only or everyone
Running Scripts:
- Use full path or ./scriptname.sh from current directory
- . represents current directory
Bash Built-ins:
- View available built-ins using ‘help’ command
- Important built-ins: if and test
- Used for implementing logic in scripts
Script Logic Example:
- If Statement Structure:
if test -f filename; then
commands
else
commands
fi
- Key Components:
- test: checks conditions
- then: executes if true
- else: executes if false
- fi: ends if block
- Use indentation for readability
Exit Status Codes:
- Commands return 0 for success
- Non-zero values indicate errors/not found
- if treats 0 as true, non-zero as false
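The pieces above fit together in a small example (the script path and file name are arbitrary):

```shell
# Create a script that uses the shebang, test/if logic, and exit codes
cat > /tmp/check_file.sh << 'EOF'
#!/bin/bash
# Report whether the file given as the first argument exists
if test -f "$1"; then
    echo "$1 exists"
else
    echo "$1 not found"
fi
EOF
chmod +x /tmp/check_file.sh        # grant execute permission
/tmp/check_file.sh /etc/hostname   # prints "/etc/hostname exists"
```

Note that the script is run with its full path; `./check_file.sh` would also work from the directory containing it.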
Practical Applications:
- Backup creation
- System monitoring
- Log generation
- File management
- Automated maintenance tasks
Manage startup process and services
- Init System Overview
- Manages automatic startup of applications
- Handles application dependencies and order
- Automatically restarts crashed applications
- Uses systemd units (text files with instructions)
- Systemd
- Collection of tools, components, and applications
- Manages Linux-based OS operations
- Acts as init system
- Different unit types: service, socket, device, timer, etc.
- Service Units
- Contains instructions for:
- Program startup commands
- Crash handling
- Restart procedures
- Application lifecycle management
- Example: SSH Daemon service unit
- ExecStart: startup command
- ExecReload: configuration reload command
- Restart settings
- Service Management Commands
- systemctl status [service]: Check service status
- systemctl start/stop [service]: Manual control
- systemctl restart [service]: Complete restart
- systemctl reload [service]: Graceful configuration reload
- systemctl enable/disable [service]: Boot-time startup control
- systemctl mask/unmask [service]: Prevent service startup
- systemctl list-units --type service --all: List all services
- Service States
- Enabled: Automatic startup at boot
- Disabled: Manual startup only
- Active running: Currently running
- Active exited: Completed successfully
- PID: Process identifier for running services
- Important Notes
- Configuration changes require restart/reload
- Some services auto-start others (domino effect)
- Service names may vary by OS
- Debian/Ubuntu often auto-enable services after installation
- Red Hat systems usually require manual enabling
Create Systemd Services
Purpose:
- Manages application lifecycle
- Automatically restarts crashed applications
- Starts applications at system boot
Key Components:
Service File Structure:
- Unit section: Description and dependencies
- Service section: Lifecycle control instructions
- Install section: Target and alias information
Important Options:
- Restart: Controls when to restart (no, on-failure, always)
- RestartSec: Delay before restart
- ExecStartPre: Commands to run before main application
- ExecStart: Main application start command
- KillMode: How to stop processes
- Type: Application behavior type (simple, notify, oneshot, forking)
Implementation Steps:
- Create service file in /etc/systemd/system
- Can copy existing services from /lib/systemd/system as templates
- Reload systemd daemon after changes
- Start service using systemctl
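A minimal sketch of such a service file, assuming a hypothetical binary at /usr/local/bin/myapp (all names and paths below are illustrative):

```ini
# /etc/systemd/system/myapp.service -- illustrative example
[Unit]
Description=My example application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After saving the file, run sudo systemctl daemon-reload, then sudo systemctl start myapp, and sudo systemctl enable myapp to start it at boot.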
Documentation:
- Main manual: man systemd.service
- Additional manuals:
- systemd.unit
- systemd.exec
- systemd.kill
Best Practices:
- Set appropriate restart delays to avoid restart loops
- Define correct dependencies
- Specify proper target assignment
- Review existing service files for reference
Management:
- Use systemctl commands to control services
- Monitor logs for service status
- Reload daemon after service file changes
Diagnose and manage processess
- Basic Concepts:
- Processes are running programs on OS
- Can be short-lived (ls) or long-lived (SSH daemon)
- Each process has a PID (Process Identifier)
- PS Command:
- Basic syntax: ps
- Common usage: ps aux (shows all processes)
- Shows: CPU%, Memory%, STAT, TIME, COMMAND
- Square brackets [] indicate kernel processes
- Process Monitoring:
- top: Real-time process monitoring
- pgrep: Search processes by name
- ps -U username: Show processes by specific user
- Process Niceness:
- Range: -20 to 19 (lower = higher priority)
- nice -n value command: Start process with niceness
- renice: Adjust existing process niceness
- Root required for negative values
- Process Signals:
- kill: Send signals to processes by PID
- pkill: Send signals to processes by name
- Common signals:
- SIGTERM: Graceful shutdown
- SIGKILL: Force terminate
- SIGHUP: Reload configuration
- SIGSTOP: Pause execution
- SIGCONT: Continue execution
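A quick sketch tying niceness and signals together (sleep stands in for a long-running process):

```shell
sleep 300 &                  # launch a long-running process in the background
pid=$!                       # PID of the most recent background process
ps -p "$pid" -o pid,ni,comm  # show its PID, niceness, and command
renice 10 -p "$pid"          # lower its priority (positive values need no root)
kill -SIGTERM "$pid"         # ask it to shut down gracefully
```

If a process ignores SIGTERM, `kill -SIGKILL` forces termination, but it gets no chance to clean up.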
- Process Control:
- Ctrl+C: Terminate process
- Ctrl+Z: Pause process
- bg: Run process in background
- fg: Bring process to foreground
- & : Launch process in background
- jobs: List background processes
- File Usage:
- lsof: List open files
- Can check files used by PID
- Can check processes using specific files
- Requires sudo for root-owned processes
Key Commands Summary:
- ps aux: List all processes
- top: Real-time monitoring
- nice/renice: Manage process priority
- kill/pkill: Process termination
- bg/fg: Background/foreground control
- lsof: File usage tracking
Locate and analyze system log files
- Purpose of Logs
- Record system events, actions, errors, and access
- Stored as text messages
- Generated by kernel and programs
- Logging System
- Managed by logging daemons (e.g., rsyslog, the "rocket-fast system for log processing")
- Main storage location: /var/log directory
- Requires root access to read most logs
- Important Log Files
- /var/log/auth.log: Authentication/authorization logs
- SSH access
- Password changes
- Sudo commands
- /var/log/syslog: System logs
- Log rotation: .1, .2.gz (older versions)
- Log Monitoring
- Live view: tail -F filename
- SystemD Journal Daemon
- journalctl command for analysis
- Filters by command, service unit
- Priority levels: emerg, alert, crit, err, warning, notice, info, debug
- journalctl Features
- Time-based filtering (-S since, -U until)
- Boot-specific logs (-b option)
- Pattern matching (-g option)
- Priority filtering (-p option)
- Persistent storage in /var/log/journal
- User Login History
- last command: Shows login history
- lastlog: Displays last login for each user
- Shows remote login IPs for SSH connections
- Access Requirements
- Root access or specific group membership needed
- wheel (Red Hat)
- ADM/sudo (Ubuntu)
- Use sudo for temporary privileges
- Log Format
- Includes: date, time, hostname, application source, message
- Follows consistent format for system logs
Schedule tasks to run at a set date and time
- Cron Utility
- Best for repetitive jobs
- Can run tasks at specific minutes, hours, days, times
- Uses /etc/crontab for system-wide tasks
- Syntax: minute hour day-of-month month day-of-week username command
- Anacron
- For repetitive jobs with minimum interval of one day
- Can’t run multiple times per day
- Runs missed jobs when system powers on
- Configured in /etc/anacrontab
- Syntax: period delay job-identifier command
- At Utility
- For one-time task execution
- Simple syntax: “at [time]”
- Commands entered interactively
- Use atq to list jobs, atrm to remove them
Cron Syntax Details:
- Time fields: minute (0-59), hour (0-23), day (1-31), month (1-12), weekday (0-6)
- Special characters:
- * = all values
- , = multiple values
- - = range
- / = step values
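Putting the fields and special characters together, entries in a personal cron table (opened with crontab -e) might look like this (the script paths are hypothetical):

```
# m    h    dom  mon  dow   command
0      2    *    *    *     /usr/local/bin/backup.sh        # daily at 02:00
*/15   *    *    *    *     /usr/local/bin/health-check.sh  # every 15 minutes
30     8    *    *    1-5   /usr/local/bin/report.sh        # weekdays at 08:30
```

The system-wide /etc/crontab adds a username field between the day-of-week and the command, as in the syntax shown above.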
Important Notes:
- Use full paths for commands in cron jobs
- Personal cron tables preferred over system-wide
- Access with: crontab -e (edit), crontab -l (list)
- Special directories: /etc/cron.daily, hourly, monthly, weekly
Anacron Features:
- Focuses on job completion rather than exact timing
- Good for systems not running 24/7
- Verifies syntax with anacron -t
At Command Usage:
- Schedule one-time tasks
- Can specify exact times or relative times
- Commands: atq (list jobs), atrm (remove jobs)
- Use CTRL+D to save job after entering commands
Manage software with package manager
Package Management Basics:
- Uses ‘apt’ (Advanced Package Tool) command
- Before installing/upgrading, run: sudo apt update
- Updates local database of available packages from repositories
- Repositories: Servers storing package information maintained by Canonical
Key Commands:
- Update & Upgrade:
- sudo apt update: Refreshes package database
- sudo apt upgrade: Upgrades installed packages
- Can chain commands: sudo apt update && sudo apt upgrade
- Installing Software:
- Basic syntax: sudo apt install [package-name]
- Example: sudo apt install nginx
- Can chain with update: sudo apt update && sudo apt install [package-name]
- Package Information:
- View package files: dpkg --listfiles [package-name]
- Search file origin: dpkg --search [file-path]
- Package details: apt show [package-name]
- Search packages:
- apt search [term]
- Names only: apt search --names-only [term]
- Multiple terms: apt search [term1] [term2]
- Removing Software:
- Basic remove: sudo apt remove [package-name]
- Remove along with unused dependencies: sudo apt autoremove [package-name]
Package Details:
- Contains: binaries, configuration files, documentation
- Includes dependencies (required additional packages)
- Managed through package management system
- Information stored in local database
Configure the repositories of package manager
- Default Repositories
- Located in system configuration files
- Ubuntu 24.04 uses updated file location
- Contains different types of repositories
- Repository Structure a) Types:
- Deb (Debian-style) repositories
- Contains .deb package files (programs, configs, docs, scripts)
b) URI:
- Uses HTTPS URLs
- Example: us.archive.ubuntu.com/ubuntu
c) Suites:
- Noble (main suite)
- Noble updates (bug fixes, security patches)
- Noble backports (newer package versions)
d) Components:
- Main: Official, free, open-source
- Restricted: Free but with usage restrictions
- Universe: Free but unofficially supported
- Multiverse: Non-free with licensing restrictions
Third-Party Repositories Steps to add:
Download public key
Convert key using GPG (dearmoring)
Move to /etc/apt/keyrings
Create config in sources.list.d
Update package manager database
PPAs (Personal Package Archives)
- Simplified third-party repositories
- Command: sudo add-apt-repository ppa:[username]/[repository]
- Management:
- List: add-apt-repository --list
- Remove: add-apt-repository --remove [PPA]
- Best Practices
- Keep main and universe components for servers
- Organize third-party repos in sources.list.d
- Verify signatures and keys for security
- Update package manager after changes
Install software by compiling code
- Installation Methods:
- Third method: Downloading and compiling code from sources (e.g., GitHub)
- Converts human-readable code to executable code
- Process Using htop as Example: a) Prerequisites:
- Git (usually pre-installed in Ubuntu/Linux)
- Clone repository using: git clone [GitHub URL]
b) Initial Steps:
- Check README file for compilation requirements
- Install required library (libncursesw5-dev)
- Install build-essential package for compilation tools
c) Configuration:
- Run autogen.sh script (./autogen.sh)
- Execute configure script
- View options with ./configure --help
- Run without arguments for default configuration
d) Compilation:
- Use ‘make’ utility
- View make targets by pressing tab twice after ‘make’
- Common targets:
- clean (reset build)
- distclean
- install
- uninstall
- Basic compilation: run ‘make’ without arguments
e) Installation:
- Compiled binary appears in current directory
- Install system-wide: sudo make install
- Moves binary to /usr/local/bin
- Allows execution from any directory without full path
- Key Points:
- Different projects have different compilation requirements
- README provides project-specific instructions
- Make clean helps fix failed compilations
- /usr/local/bin is in system PATH by default
Verify integrity and availability of resources and processes
- Storage Space Monitoring
- Use ‘df’ (disk-free) utility
- ‘df -h’ for human-readable format (MB, GB, TB)
- Ignore tmpfs (virtual file systems in memory)
- ‘du -sh’ shows disk usage for specific directories
- RAM Monitoring
- Use ‘free -h’ command
- Shows total, used, and available memory
- “Available” column more important than “free”
- Memory can be freed by OS when needed
- CPU Monitoring
- ‘uptime’ shows load averages for 1, 5, and 15 minutes
- Load average 1.0 = one CPU core at 100% capacity
- ‘lscpu’ shows CPU details
- ‘lspci’ shows hardware details
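These checks combine into a quick health glance (nproc, not mentioned above, prints the core count so the load averages can be interpreted; the du path is just an example):

```shell
df -h /           # free storage space on the root file system
du -sh /usr/share # disk usage of a specific directory
free -h           # RAM: focus on the "available" column
uptime            # load averages over 1, 5, and 15 minutes
nproc             # number of CPU cores, for comparison with the load average
```

A load average equal to the nproc value means all cores are at roughly 100% capacity.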
- File System Integrity XFS File Systems:
- Use command: xfs_repair -v /dev/[device]
- Must unmount first
- -v for verbose output
Ext4 File Systems:
- Use command: fsck.ext4 -v -f -p /dev/[device]
- -p for automatic fixing (preen mode)
- -f forces check
- -v for verbose output
- Process Verification
- ‘systemctl list-dependencies’ shows active/inactive units
- White circle = inactive unit
- Green circle = active unit
- ‘systemctl status [service]’ shows service details
- ‘journalctl’ for detailed service logs
- Important services (like ssh, cron, atd) should be active
Troubleshooting Steps:
- Check service status
- Review logs
- Fix identified issues
- Restart service if needed
Change kernel runtime parameters, both persistent and non-persistent
Definition:
- Kernel runtime parameters are settings that control how Linux kernel performs internal operations
- Mainly deals with memory allocation, network traffic, and other low-level functions
Viewing Parameters:
- Command: sysctl -a (shows all current parameters)
- Use sudo for full access to all parameters
- Parameter naming convention:
- net.* = network-related
- vm.* = virtual memory
- fs.* = file system
Changing Parameters:
Non-persistent changes:
- Command: sysctl -w parameter=value
- Changes don’t survive system reboots
- Check current value: sysctl parameter_name
Persistent changes:
- Create file in /etc/sysctl.d/ with .conf extension
- File format: parameter = value
- Apply changes immediately: sysctl -p
Example: vm.swappiness
- Controls system’s memory swapping behavior
- Value range: 0-100 (old kernels), 0-200 (new kernels)
- Higher value = more frequent swapping
- Lower value = less frequent swapping
- To change permanently:
- Create file (e.g., /etc/sysctl.d/swap-less.conf)
- Add content: vm.swappiness = 20
- Run sysctl -p to apply immediately
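The swappiness example looks like this in practice (writing values requires root; the swap-less.conf file name comes from the lesson):

```shell
sysctl vm.swappiness                        # read the current value
sudo sysctl -w vm.swappiness=20             # non-persistent change
echo "vm.swappiness = 20" | sudo tee /etc/sysctl.d/swap-less.conf
sudo sysctl -p /etc/sysctl.d/swap-less.conf # apply that file immediately
```

Without a file argument, `sysctl -p` loads /etc/sysctl.conf; passing the path applies the new drop-in file right away.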
Alternative Method:
- Edit /etc/sysctl.conf directly
- Contains explanations for important parameters
- Risk: Can be overwritten during OS upgrades
- Recommended: Use /etc/sysctl.d/ for custom changes
List and identify SELinux file and process contexts
- Basic Linux Security Limitations
- Traditional Linux security includes basic file permissions (read/write/execute)
- Root user privileges
- These are too generic for modern cyber security needs
- SELinux Overview
- Kernel security module for advanced access control
- Default enabled on Red Hat systems
- Not default on Ubuntu
- Provides fine-grained control over actions
- SELinux Context/Labels Components (in order):
- User (SELinux user, different from login user)
- Role
- Type
- Level (security clearance level)
- Viewing SELinux Context
- Files/directories: ls -Z
- Processes: ps axZ
- Current user context: id -Z
- User mapping: semanage login -l
- User roles: sudo semanage user -l
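For example (on a system without SELinux the label column shows ? or -, so these are best tried on a Red Hat-family machine):

```shell
ls -Z /etc/passwd   # file label, e.g. system_u:object_r:passwd_file_t:s0
ps axZ | head -n 5  # labels of the first few processes
```

Each label reads user:role:type:level, matching the four context components listed above.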
Security Decision Process
Check SELinux user
Verify if user can assume role
Check if role can transition to type
Verify security level (if applicable)
Type Enforcement
- Most important part of context
- Creates restricted domains for processes
- Example: sshd_t domain for SSH daemon
- Files need specific types to enter certain domains
- Protection Features
- Prevents unauthorized access
- Restricts authorized users to specific actions
- Protects against hijacked programs
- Creates security boundaries
- SELinux States
- Enforcing: actively restricting actions
- Permissive: logging but not restricting
- Disabled: completely inactive
- Practical Example
- Apache web server protection
- Even if compromised, process remains confined to its security domain
- Limits potential damage from attacks
- Unconfined Context
- Label: unconfined_t
- Minimal restrictions
- Default for many user processes
Create and enforce MAC using SELinux
- INTRODUCTION
- SELinux = Mandatory Access Control security module
- Default on Red Hat/CentOS; Ubuntu uses AppArmor
- Cannot run multiple security modules simultaneously
- BASIC SETUP Initial Requirements:
- Disable existing security module (AppArmor)
- Install packages: selinux-basics and auditd
- Configure bootloader
- Relabel filesystem
- System reboot
- OPERATIONAL MODES Permissive Mode:
- Observes and logs violations
- Doesn’t enforce policies
- Good for learning/testing
Enforcing Mode:
- Actively enforces security policies
- Can be set temporarily (setenforce) or permanently (/etc/selinux/config)
- SECURITY CONTEXT STRUCTURE Three-part Label System:
- User (system_u, unconfined_u)
- Role (object_r)
- Type (specific to purpose) Example: system_u:object_r:var_log_t
- POLICY MANAGEMENT Policy Creation:
- audit2allow generates rules from logs
- Creates .pp (policy package) files
- .te files show readable rules
- Can create custom modules
Policy Implementation:
- Start in permissive mode
- Monitor audit logs
- Test thoroughly before enforcing
- Create separate policies per object
- FILE AND PROCESS SECURITY File Labeling:
- Every file needs SELinux labels
- Labels determine access permissions
- Can be restored with restorecon
- Can be changed with chcon
Process Domains:
- Processes run in security domains
- Domains restrict process actions
- Example: sshd_t domain for SSH
- MANAGEMENT TOOLS Key Commands:
- sestatus: Check status
- getenforce/setenforce: Mode management
- chcon: Change context
- restorecon: Restore default context
- semanage: Policy management
- audit2allow: Policy generation
- seinfo: View valid labels
- ADDITIONAL FEATURES
- Boolean switches for quick policy changes
- Port management for services
- Directory context management
- Custom policy creation
- Audit logging system
- BEST PRACTICES
- Test in permissive mode first
- Verify policies before enforcement
- Monitor audit logs
- Document all changes
- Expect learning curve (10-20 hours)
- Regular system testing
Create and manage containers
Docker Containers Overview:
- Containers encapsulate applications making them portable and easy to migrate
- Example: MariaDB in container vs. traditional installation
- All components (daemon, config files, logs, databases) are contained together
Practical Docker Commands:
- Basic Commands:
- docker --help: Shows available commands
- docker search [name]: Find container images
- docker pull [image]: Download container image
- docker images: List downloaded images
- docker rmi: Remove images
- Image Tags:
- Format: image:tag (e.g., nginx:1.22.1)
- “latest” is default tag if unspecified
- Tags specify different versions/variations
- Container Management:
- docker run [options] [image]: Create and run container
- Common options:
- --detach: Run in background
- --publish port:port: Port forwarding
- --name: Assign container name
- --restart always: Auto-restart policy
- docker ps: List running containers
- docker ps --all: List all containers
- docker start/stop: Control container state
- docker rm: Remove containers
- Building Custom Images:
- Requires Dockerfile with instructions
- Basic Dockerfile components:
- FROM: Base image
- COPY: Copy files
- RUN: Execute commands
- CMD/ENTRYPOINT: Startup commands
- Build command: docker build --tag repo/name:tag directory
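A minimal Dockerfile following that structure might look like this (the base image tag, file name, and repository tag are all illustrative):

```dockerfile
# Serve a single static page on top of the official nginx image
FROM nginx:1.25
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

From the directory containing this Dockerfile and index.html, it would be built with something like docker build --tag myrepo/mysite:1.0 . and run with docker run --detach --publish 8080:80 myrepo/mysite:1.0.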
Important Concepts:
- docker run vs. docker start:
- run: Creates new container from image
- start: Starts existing container
- Container removal requires:
- Stop running container
- Remove container
- Remove image (if needed)
- Images can be automatically pulled during run
- Port publishing enables external access
- Restart policies ensure container availability
Manage and configure virtual machines
Definition & Usage:
- Virtual machines (VMs) are simulated computers created on physical computers
- Useful for servers and resource allocation
- Example: 64 CPU core server can host 32 VMs with 2 VCPUs each
Key Software:
QEMU-KVM
- QEMU (Quick Emulator): Simulates virtual computer
- KVM (Kernel-based Virtual Machine): Linux kernel code for VM acceleration
VIRSH
- Command-line tool for VM management
- Installation: sudo apt install virt-manager
Basic VIRSH Commands:
Creation & Viewing
- Define VM: virsh define [filename.xml]
- List VMs: virsh list (active only)
- List all VMs: virsh list --all
Power Management
- Start: virsh start [VMname]
- Reboot: virsh reboot [VMname]
- Shutdown: virsh shutdown [VMname]
- Force power off: virsh destroy [VMname]
- Delete VM: virsh undefine [VMname]
Autostart Configuration
- Enable: virsh autostart [VMname]
- Disable: virsh autostart --disable [VMname]
Resource Management
- View info: virsh dominfo [VMname]
- Set CPU cores:
- virsh setvcpus [VMname] [count] --config
- Set maximum first: virsh setvcpus [VMname] [count] --config --maximum
- Set memory:
- Set maximum: virsh setmaxmem [VMname] [size]
- Set allocation: virsh setmem [VMname] [size]
Important Notes:
- Resource changes require VM restart
- Names with spaces need double quotes
- ‘destroy’ command only powers off VM, doesn’t delete it
- Help available via: virsh help [command]
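A typical virsh session tying these commands together might look as follows; the VM name "TestMachine" is a hypothetical placeholder.

```shell
virsh list --all                                  # all VMs, running or not
virsh start TestMachine                           # power on
virsh autostart TestMachine                       # start automatically at host boot

virsh setvcpus TestMachine 2 --config --maximum   # raise the vCPU ceiling first
virsh setvcpus TestMachine 2 --config             # then the allocation
virsh setmaxmem TestMachine 2G --config           # raise the memory ceiling
virsh setmem TestMachine 1G --config              # then the current allocation
virsh dominfo TestMachine                         # verify the new values

virsh shutdown TestMachine                        # graceful power off (restart applies changes)
virsh undefine TestMachine                        # delete the VM definition entirely
```

The `--config` flag makes changes persistent; as the notes say, they take effect only after the VM is restarted.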
Create and boot a virtual machine
- Initial Setup
- Previous lesson: Created minimal VM without virtual disk
- Goal: Create complete VM with operating system
- Need to download disk image from Ubuntu’s cloud images
- Image Preparation
- Download Ubuntu minimal cloud image using wget
- Verify image integrity using checksum (SHA256)
- Use qemu-img info to inspect image details
- Resize image from 3.5GB to 10GB for additional software
- Storage Configuration
- Default storage pool: /var/lib/libvirt
- Copy disk image to /var/lib/libvirt/images subdirectory
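The image preparation steps above, as a shell sketch; the download URL follows the usual cloud-images.ubuntu.com layout but should be treated as an assumption and checked against the current release page.

```shell
# Download the minimal cloud image (URL pattern is an assumption;
# verify at cloud-images.ubuntu.com)
wget https://cloud-images.ubuntu.com/minimal/releases/noble/release/ubuntu-24.04-minimal-cloudimg-amd64.img

# Verify integrity: compare against the published SHA256SUMS value
sha256sum ubuntu-24.04-minimal-cloudimg-amd64.img

# Inspect format and virtual size, then grow it to 10GB
qemu-img info ubuntu-24.04-minimal-cloudimg-amd64.img
qemu-img resize ubuntu-24.04-minimal-cloudimg-amd64.img 10G

# Place it in the default storage pool
sudo cp ubuntu-24.04-minimal-cloudimg-amd64.img /var/lib/libvirt/images/
```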
- VM Creation using virt-install Key parameters:
- OS info: Specify Ubuntu 24.04
- Name: Identify VM
- Memory: Allocation in MB (1024MB = 1GB)
- vCPUs: Number of virtual CPU cores
- --import: Skip OS installation (pre-installed image)
- --disk: Path to virtual disk
- --graphics none: For text-only interface
- Basic Command Structure:
virt-install --osinfo ubuntu24.04 --name Ubuntu1 --memory 1024 --vcpus 1 --import --disk /path/to/image --graphics none
- Cloud-init Configuration
- Add --cloud-init root-password-generate=on for random root password
- Password displayed for 10 seconds during setup
- Must change password on first login
- VM Management
- Exit console: Ctrl + ]
- Shutdown: virsh shutdown <VM_name>
- Force shutdown: virsh destroy <VM_name>
- Reattach to console: virsh console <VM_name>
- Alternative Options
- OS detection: --osinfo detect=on
- Generic Linux: --osinfo linux2022
- Remove cloud-init for non-cloud images
- Additional Features
- Default networking enabled automatically
- 10GB disk pre-allocated
- Internet connectivity available
Installing an operating system on a virtual machine
- Alternative to Pre-installed OS Images:
- Option to install fresh OS instead of using pre-configured disk images
- Requires creating VM with a virtual CD/DVD-ROM drive
- Needs empty disk image creation
- Virt-Install Command Differences:
- No --import option (no pre-built disk image)
- --disk option specifies size instead of path (e.g., size=10 for 10GB)
- --location points to ISO file (CD/DVD-ROM installation disk)
- --graphics none parameter
- Additional --extra-args "console=ttyS0" needed
- Enables serial console for text-only environment interaction
- Essential for systems without GUI
- Efficient Installation Method:
- Can download required files directly from internet
- Reduces operation time
- Uses web address instead of ISO file
- Requires specific directory/file structure compatible with virt-install
- Installation Process:
- System downloads necessary files
- Boots minimal installation environment
- Allows language selection and basic setup
- Can be controlled through terminal
- Important Notes:
- Terminal output might appear distorted
- Can exit using CTRL + ]
- Cloud images (previous lesson) are generally preferred
- This method is useful as alternative installation approach
- Practical Considerations:
- Method requires proper URL structure
- Terminal display might need adjustments
- Minimal installation environment has limited language options
- Downloads required during installation process
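Putting the installer-based approach together, a command sketch might look like this; the OS detection, VM name, and ISO path are placeholders, not values from the lesson.

```shell
# Fresh OS installation into an empty 10GB disk, driven over the
# serial console (ISO path and name are hypothetical)
virt-install \
  --osinfo detect=on \
  --name installer-vm \
  --memory 1024 --vcpus 1 \
  --disk size=10 \
  --location /path/to/installer.iso \
  --graphics none \
  --extra-args "console=ttyS0"
```

For the network-based variant, `--location` takes a web address pointing at an installer tree laid out in the structure virt-install expects, instead of a local ISO file.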
3. Users and Groups
Create, delete, and modify local user accounts
- User Account Basics:
- Each person needs separate user account
- Provides personal files/directories with permissions
- Allows custom tool settings
- Administrators can limit privileges
- Helps prevent accidental damage
- Enhances system security
- Creating Users: Command: sudo adduser [username] Actions performed:
- Creates new user
- Creates matching group
- Sets primary group
- Creates home directory (/home/username)
- Sets default shell (/bin/bash)
- Copies /etc/skel files to home directory
- Account set to never expire
- Password Management:
- Set password: sudo passwd [username]
- Delete user: sudo deluser [username]
- Delete with home directory: sudo deluser --remove-home [username]
- Account Customization:
- Custom shell/home directory: adduser --shell [path] --home [path] [username]
- Account details stored in /etc/passwd
- Default user IDs start at 1000
- Manual ID selection: adduser --uid [number] [username]
- System Accounts:
- Created with --system option
- Used for programs/daemons
- IDs typically below 1000
- Usually created without home directory
- Modifying Users (usermod):
- Change home directory: usermod --home [path] --move-home [username]
- Change username: usermod --login [newname] [oldname]
- Change shell: usermod --shell [path] [username]
- Lock/unlock account: usermod --lock/--unlock [username]
- Account Expiration:
- Set expiration date: usermod --expiredate [YYYY-MM-DD] [username]
- Remove expiration: usermod --expiredate "" [username]
- Password Policies:
- Force password change: chage -d 0 [username]
- Set password expiration: chage -M [days] [username]
- Check expiration: chage -l [username]
- Useful Commands:
- View user info: id
- Check current user: whoami
- View help: adduser --help
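The lifecycle above, sketched as a command sequence; the usernames and paths are hypothetical examples.

```shell
sudo adduser john                          # interactive: sets password, home, shell
sudo adduser --shell /bin/sh --home /home/jdir jane   # custom shell and home (example values)
sudo adduser --system --no-create-home appdaemon      # system account for a daemon

sudo usermod --login jack john             # rename john to jack
sudo usermod --home /home/jack --move-home jack       # move the home directory to match
sudo usermod --expiredate 2030-12-31 jack  # account expires on that date

sudo chage -d 0 jack                       # force a password change at next login
sudo chage -l jack                         # review the password policy

sudo deluser --remove-home jane            # delete account and its home directory
```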
Create, delete, and modify local groups and group memberships
Purpose of Groups:
- Allow multiple users to share permissions
- Simplify access management
- Assign roles to user accounts
- Grant special system privileges
Example Use Case:
- Instead of managing individual permissions for developers (John, Jack, Jane)
- Create “Developers” group
- Assign group ownership of files to Developers
- Add/remove users from group as needed
- Easier to manage access control
Types of Groups:
Primary (Login) Group:
- Main group assigned at login
- Programs run with primary group privileges
- New files automatically owned by primary group
Secondary/Supplementary Groups:
- Additional groups for extra permissions
- Examples: wheel/sudo group (root privileges), Docker group
Key Commands:
Create Group:
- groupadd [groupname]
Add User to Group:
- sudo gpasswd -a [username] [groupname]
- For secondary groups
Remove User from Group:
- sudo gpasswd -d [username] [groupname]
Change Primary Group:
- usermod -g/--gid [groupname] [username]
- Don’t confuse with -G (changes secondary groups)
View User’s Groups:
- groups [username]
- First group listed is primary group
Rename Group:
- groupmod --new-name/-n [newname] [oldname]
Delete Group:
- groupdel [groupname]
- Cannot delete if it’s anyone’s primary group
- Can delete if only used as secondary group
Important Notes:
- Group passwords rarely used in practice
- Must change primary group before deleting it
- Command syntax varies (username/groupname order)
- Use --help for command reference
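A worked sequence covering the group commands above; the user and group names are hypothetical.

```shell
sudo groupadd developers
sudo gpasswd -a john developers       # add john as a secondary member
groups john                           # first group listed is the primary

sudo usermod -g developers john       # make developers john's primary group
sudo usermod -g john john             # switch the primary back before deleting

sudo gpasswd -d john developers       # remove from the secondary list
sudo groupmod -n devs developers      # rename the group
sudo groupdel devs                    # works now that it is nobody's primary group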
Manage system-wide environment profiles
- Environment Overview
- View current environment using commands: printenv or env
- Environment variables store settings and system information
- Example: HISTSIZE=1000 (controls Bash command history size)
- Variables can be changed directly: HISTSIZE=2000
- Environment Variables Usage
- Used as program settings
- Help applications understand their running environment
- Example: $HOME variable indicates user’s home directory
- Access variable content using $ prefix (e.g., echo $HOME)
- Variable Implementation in Scripts
- Variables dynamically adjust for different users
- Example: Using $HOME in scripts adapts to each user’s directory
- Personal environment variables can be set in .bashrc file
- System-Wide Environment Configuration a) /etc/environment
- Sets variables for all system users
- Changes apply after user logs out and back in
b) /etc/profile.d
- For complex configurations and commands
- Files must have .sh extension
- No shebang (#!) needed as system processes with current shell
- Example script: last_login.sh
- Creates timestamp file in user’s home directory
- Uses $HOME variable for file location
- Testing Changes
- Log out using specified command
- Log back in to verify changes
- Check variable settings or script results
- Practical Applications
- Environment variables provide dynamic system information
- Enable flexible script writing for multiple users
- Allow system-wide configurations
- Support automated tasks at login
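A minimal version of the last_login.sh idea: the single line below is the whole script body, and in practice it would be saved as /etc/profile.d/last_login.sh so every login shell runs it.

```shell
# In practice this line lives in /etc/profile.d/last_login.sh
# (no shebang needed; the login shell sources the file directly).
# $HOME makes the same script adapt to whichever user logs in.
date > "$HOME/.last_login"
```

After logging out and back in, `cat ~/.last_login` shows the timestamp of the last login.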
Manage template user environment
- Default User Account Creation Process:
- Files from /etc/skel directory automatically copy to new user’s /home directory
- Creates template for user environment
- Practical Example: Informing New Users of Policy
- Can add custom files to /etc/skel
- Example: Creating README file with policy information
- Policy message: “Please don’t run CPU-intensive processes between 8:00 AM and 10:00 PM”
- File Visibility:
- Use ’ls -a’ to show all files including hidden ones
- Hidden files start with a dot (.)
- Example: .bashrc not visible in default ls output
- Implementation Process:
- Edit/create README in /etc/skel
- New users receive copy in their home directory
- Users can access information using 'cat README' command
- Purpose of /etc/skel:
- Acts as template directory
- All files placed here automatically copy to new user home directories
- Only affects users created after files are added to /etc/skel
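The template mechanism above takes only two steps to try; the policy text and username are the lesson's example and a hypothetical account.

```shell
# Place the policy file in the template directory
echo "Please don't run CPU-intensive processes between 8:00 AM and 10:00 PM" \
  | sudo tee /etc/skel/README

# Any user created from now on receives a copy
sudo adduser trinity
sudo ls -a /home/trinity      # -a also reveals hidden files like .bashrc
```

Existing accounts are unaffected; /etc/skel is only consulted at account creation time.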
Configure user resource limits
File Location:
- Resource limits configured in /etc/security/limits.conf
Syntax Structure:
Domain Options:
- Username (e.g., trinity)
- Group name (prefix with @, e.g., @developers)
- Asterisk (*) - matches all users not specifically mentioned
Type Fields:
- Hard limit - Maximum absolute value, cannot be exceeded
- Soft limit - Initial value, can be increased up to hard limit
- Dash (-) - Sets both hard and soft limits to same value
Common Items to Limit:
- nproc - Maximum number of processes
- fsize - Maximum file size (in KB)
- CPU - CPU time limit in minutes
- 100% CPU core for 1 second = 1 second allocation
- 50% CPU core for 1 second = 0.5 seconds allocation
Practical Example:
- Setting process limit for user Trinity: trinity hard nproc 3
- This limits Trinity to maximum 3 processes
- Testing showed:
- Successfully ran 3 processes (bash + ps + less)
- Failed when attempting 4th process
Viewing & Modifying Limits:
- View current session limits: ulimit -a
- Modify limits: ulimit -u [value]
- Users can:
- Lower limits by default
- Raise limits up to hard limit once (if soft limit exists)
- Cannot raise limits again after lowering
Note: Comments in limits.conf start with # (pound sign) - ensure no accidental commenting when adding new limits.
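The syntax pieces above combine into lines like the following; the group name and values beyond the trinity example are illustrative assumptions.

```text
# /etc/security/limits.conf   —   domain  type  item  value
trinity       hard  nproc   3        # the lesson's example: at most 3 processes
@developers   soft  nproc   20       # group limit, raisable up to a hard limit
*             -     fsize   102400   # everyone else: 100MB max file size, hard and soft
```

Check the result from the affected user's session with `ulimit -a`.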
Manage user privileges
Sudo and Root Access:
- Root user (superuser) has full system access
- sudo command allows running commands with root privileges
- Users must be in sudo group to use sudo commands
Managing Sudo Access:
- Via Sudo Group:
- Add users to sudo group for full sudo access
- Command: sudo usermod -aG sudo username
- Grants complete system access
- Via Sudoers File (/etc/sudoers):
- Provides fine-tuned control over sudo privileges
- Must use visudo to edit (prevents syntax errors)
- Located at /etc/sudoers
Sudoers File Structure (5 parts):
- User/Group specification
- Host field (usually ALL)
- Run-as-user field
- Run-as-group field
- Allowed commands list
Syntax Examples:
- Basic format: user host=(run-as-user:run-as-group) commands
- For individual user: trinity ALL=(ALL) ALL
- For group access: %developers ALL=(ALL) ALL
- Limited user access: trinity ALL=(aaron,john) ALL
- Command restrictions: trinity ALL=(ALL) /bin/ls, /usr/bin/stat
Special Features:
- Run as specific user: sudo -u username command
- Multiple values separated by commas
- Group specification uses % prefix
- Can remove password requirement with NOPASSWD tag
Best Practices:
- Use visudo instead of direct file editing
- Be specific with permissions when possible
- Consider security implications when granting sudo access
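The five-part sudoers structure above, shown as concrete lines; edit only through `sudo visudo`, which validates the syntax before saving. The group and the NOPASSWD line are illustrative assumptions.

```text
# user/%group   host=(run-as-user:run-as-group)   commands
trinity        ALL=(ALL:ALL)  ALL                        # full sudo rights
%developers    ALL=(ALL)      /bin/ls, /usr/bin/stat     # group, two commands only
trinity        ALL=(aaron,john) ALL                      # may run commands only as aaron or john
jane           ALL=(ALL)      NOPASSWD: /usr/bin/stat    # hypothetical: no password prompt
```

A restricted run-as list is then exercised with, e.g., `sudo -u aaron ls /home/aaron`.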
Manage access to root account
- Temporary Root Access
- Use ‘sudo’ command for temporary root privileges
- Example: sudo ls /root
- Logging in as Root a) For users with sudo access:
- Use ‘sudo -i’
- Exit using ’logout’ command
b) For users without sudo but with root password:
- Use 'su -' or 'su -l' or 'su --login'
- Requires root password
- Locked Root Account
- Common security measure
- Prevents regular password login
- Still allows 'sudo --login' with current user's password
- Cannot use ‘su -’ (requires root password)
- Managing Root Password Access a) Enabling root login:
- For never-set password: Set new password
- For locked account: Unlock using passwd command
- After these steps, ‘su -’ becomes available
b) Disabling root login:
- Lock password-based root logins using ‘sudo passwd -l root’
- Alternative login methods (e.g., SSH key) still work if configured
- SSH private key authentication bypasses password lock
- Important Security Considerations
- Ensure current user has sudo access before locking root
- Without root login and sudo access, system management becomes impossible
- Alternative authentication methods may still work when password login is locked
- Best Practices
- Maintain at least one method of root access
- Consider security implications when changing root access
- Keep track of available authentication methods
Configure the system to use LDAP user and group accounts
- Default User Account Storage in Linux:
- Stored locally in /etc/passwd
- Contains username, user ID, home directory, shell info
- LDAP (Lightweight Directory Access Protocol):
- Centralized solution for managing user/group accounts
- Eliminates need to update accounts on multiple servers
- Changes made on LDAP server reflect automatically across all systems
- Implementation Steps: a) Setting up LDAP Server:
- Can be hosted on Azure, Windows Server, or local system
- In demo: Used LXC (Linux Containers) with pre-configured LDAP server
b) Client Configuration:
- Install libnss-ldapd package
- Configure NSS (Name Service Switch)
- Set up NSLCD (Name Service Local Daemon)
- Configuration Details: a) NSLCD Configuration:
- Requires LDAP server IP address
- Define distinguished name of LDAP search base
- Set domain components (DC)
b) NSS Configuration:
- Modified in /etc/nsswitch.conf
- Enable LDAP lookups for passwd, group, and shadow data
- Home Directory Management:
- Use PAM (Pluggable Authentication Modules)
- Configure automatic home directory creation on first login
- Command: sudo pam-auth-update
- Benefits:
- Centralized user/group management
- Automatic updates across all configured systems
- Simplified administration for multiple servers
- Important Files:
- /etc/nsswitch.conf: Defines data sources
- /etc/nslcd.conf: LDAP connection configuration
- PAM configuration for home directory creation
- Verification Methods:
- Use ‘id’ command to check user existence
- ‘getent’ command to view entries
- Can filter LDAP-specific entries using --service option
4. Networking
Theory: Configure IPv4 and IPv6 networking and hostname resolution
- IP Address Basics
- IP (Internet Protocol) has two versions: IPv4 and IPv6
- Required for network communication
- IPv4
- Format: Four numbers (0-255) separated by dots
- Example: 192.168.1.101
- Each number uses 8 bits (binary representation)
- Total 32 bits
- CIDR Notation
- Format: IP address followed by /number (e.g., 192.168.1.101/24)
- CIDR = Classless Inter-Domain Routing
- Number after slash indicates network prefix length
- Example:
- /24: First 24 bits (first 3 numbers) are network prefix
- /16: First 16 bits (first 2 numbers) are network prefix
- IPv6
- Uses 128 bits (vs IPv4’s 32 bits)
- Characteristics:
- Eight groups of numbers (vs IPv4’s four)
- Hexadecimal format (0-9, A-F)
- Groups separated by colons
- Example: 2001:0db8:0000:0000:0000:0000:0000:0001
- IPv6 Shortening Rules
- Remove leading zeros
- Consecutive zero groups replaced with ::
- Example shortened: 2001:db8::1
- CIDR in IPv6
- Similar concept to IPv4
- Example: IPv6 address/64
- Prefix length indicates network portion
- Multiple of 8 bits easier to understand
- Additional Note
- For complex CIDR calculations, use online CIDR/subnet calculators
Configure IPv4 and IPv6 networking and hostname resolution
- Network Interface Discovery & Basic Commands
- Use ‘ip link’ to show network devices/interfaces
- ’lo’ = loopback interface (127.0.0.1) for internal system connections
- ‘ip address/addr/a’ shows IP addresses for interfaces
- Add ‘-c’ for colored output (must be in middle of command)
- Interface Management
- Activate interface: sudo ip link set dev [interface] up
- Add IPv4: sudo ip address add [IP/CIDR] dev [interface]
- Add IPv6: Similar syntax, different IP format
- Changes using ‘ip’ command are temporary (lost after reboot)
- Netplan Configuration
- Ubuntu’s current networking tool
- Configuration files location: /etc/netplan/
- Files processed alphabetically (prefix with numbers, e.g., 99-)
- YAML format required
- Key commands:
- netplan get: view current configuration
- netplan try: test configuration with rollback option
- netplan apply: apply changes permanently
- Netplan Configuration Elements
- Interface settings
- DHCP configuration
- Static IP assignment
- DNS servers
- Routes configuration
- Proper YAML formatting crucial (spacing/alignment)
- DNS Configuration
- Configure via Netplan or systemd-resolved
- Global DNS: Edit /etc/systemd/resolved.conf
- Check DNS settings: resolvectl status/dns
- Local hostname resolution: /etc/hosts file
- Additional Tips
- Documentation available: man netplan
- Example configurations: /usr/share/doc/netplan/examples/
- File permissions should be restricted (chmod 600)
- Use netplan try --timeout for custom timeout window
- Network Routes
- Configure via Netplan
- Check routes: ip route
- Can set default gateway and specific network routes
This configuration system allows for both temporary (ip command) and permanent (Netplan) network settings management in Ubuntu, with various options for IP addressing, routing, and DNS resolution.
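Tying the Netplan elements together, a static-IP configuration might look like the sketch below; the interface name, addresses, gateway, and DNS servers are assumptions to be adjusted against `ip link` output.

```yaml
# Hypothetical /etc/netplan/99-static.yaml (restrict with chmod 600)
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses:
        - 192.168.1.101/24
      nameservers:
        addresses: [1.1.1.1, 9.9.9.9]
      routes:
        - to: default
          via: 192.168.1.1
```

Validate with `sudo netplan try` (auto-rollback if you lose the connection), then make it permanent with `sudo netplan apply`.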
Start, stop, and check status of network services
Network Service Monitoring Tools:
- SS (modern tool)
- Netstat (older tool, may be deprecated)
SS Command Usage:
- Basic command: sudo ss -ltunp
- Options meaning:
- l - listening connections
- t - TCP connections
- u - UDP connections
- n - numeric values (port numbers)
- p - shows process information
- Mnemonic: “listening, tcp, udp, numeric, process” or “tunl,p” (tunnel programs)
Understanding Network Addresses:
- 127.0.0.1: localhost (internal connections only)
- 0.0.0.0: accepts external connections (IPv4)
- [::]: accepts external connections (IPv6)
Service Management:
Status checking:
- systemctl status servicename.service
- Example: systemctl status mariadb.service
Service control:
- Stop: systemctl stop servicename
- Start: systemctl start servicename
- Enable autostart: systemctl enable servicename
- Disable autostart: systemctl disable servicename
Additional Tools:
- Process inspection: ps
- Open files check: sudo lsof -p [PID]
- Logs can be checked using PID
Netstat Alternative:
- Similar syntax: sudo netstat -ltunp
- Cleaner output format
- May not be available on all systems
Common Services Example:
- SSH Daemon (port 22)
- MariaDB (port 3306)
Note: Service names may vary by distribution
- Ubuntu: ssh.service
- Red Hat: sshd.service
Theory: Configure bridge and bonding devices
- Basic Concept:
- Both methods combine multiple network devices into a virtual one
- Managed under the operating system
- Bridging:
- Connects two or more separate networks
- Allows computers on different networks to communicate
- Components:
- Controller (the bridge itself)
- Ports (network devices part of bridge)
- Similar to physical bridges connecting land masses
- Enables cross-network communication (e.g., Server 1 can contact Server 8)
- Bonding:
- Combines multiple network interfaces
- Benefits:
- Increased resilience
- Higher network throughput
- Improved connection reliability
- Creates single virtual interface from multiple physical ones
- Provides seamless failover if one connection fails
- Bonding Modes (0-6):
- Mode 0 (Round Robin): Sequential use of interfaces
- Mode 1 (Active Backup): One active, others as backup
- Mode 2 (XOR): Interface selection based on source/destination
- Mode 3 (Broadcast): Sends data through all interfaces
- Mode 4 (IEEE 802.3ad, LACP link aggregation): Increases transfer rates
- Mode 5 (Adaptive Transmit Load Balancing): Balances outgoing traffic
- Mode 6 (Adaptive Load Balancing): Balances both incoming/outgoing traffic
- Key Differences:
- Bridge: Connects separate networks for inter-network communication
- Bond: Combines multiple paths to same network for improved performance/reliability
- Terminology:
- Network devices called “Interfaces” in Linux
- In both bridging and bonding, connected devices are called “ports”
- Bond creates virtual interface visible to applications
Note: Mode selection requires thorough understanding of network infrastructure and specific needs.
Configure bridge and bonding devices
- Network Bridging:
- Use example YAML files as templates from Netplan
- Copy bridge.yaml to Netplan configuration directory
- Check interface names using “ip -c link” command
- Configuration steps:
- Define ethernet devices
- Disable DHCP for individual interfaces
- Enable DHCP for bridge
- Name the bridge (e.g., br0)
- List interfaces to bridge
- Bridge interfaces become slaves to master bridge
- Network Bonding:
- Copy bond example config file
- Configuration requirements:
- Define ethernet devices
- Enable DHCPv4
- List interfaces to bond
- Specify bond mode
- Set primary interface if needed
- Bond Modes Available:
- balance-rr (mode 0)
- active-backup (mode 1)
- balance-xor (mode 2)
- broadcast (mode 3)
- 802.3ad (mode 4)
- balance-tlb (mode 5)
- balance-alb (mode 6)
- Important Commands:
- ip -c link: List network interfaces
- man netplan: View Netplan manual
- netplan apply: Apply network configuration
- View bond details: /proc/net/bonding/[bondname]
- Key Points:
- Proper YAML indentation is crucial
- Bridges and bonds appear as regular network interfaces
- Standard IP commands work on bridges and bonds
- IP command changes are temporary (lost after reboot)
- Netplan try command doesn’t work for bonding
- Always have backup access when making network changes
- File Locations:
- Netplan config directory: /etc/netplan/
- Bond information: /proc/net/bonding/
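The bonding requirements above map onto a Netplan file like this sketch; the interface names are assumptions to be checked with `ip -c link`.

```yaml
# Hypothetical /etc/netplan/99-bond.yaml
network:
  version: 2
  ethernets:
    enp0s8: {}          # DHCP stays off on the member interfaces
    enp0s9: {}
  bonds:
    bond0:
      dhcp4: true       # the bond itself gets the address
      interfaces: [enp0s8, enp0s9]
      parameters:
        mode: active-backup
        primary: enp0s8
```

Apply with `sudo netplan apply` (netplan try does not work for bonds), then inspect the result in /proc/net/bonding/bond0.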
Configure packet filtering (firewall)
UFW (Uncomplicated Firewall) Basics:
- UFW is Ubuntu’s default packet filtering firewall
- Disabled by default; check status with: sudo ufw status
- Uses whitelist approach - blocks all incoming traffic by default
- Must allow SSH (port 22) before enabling to maintain access
Key Commands:
- Enable UFW: sudo ufw enable
- Allow port: sudo ufw allow [port]
- Allow specific protocol: sudo ufw allow [port/protocol]
- Check detailed status: sudo ufw status verbose
- View numbered rules: sudo ufw status numbered
Rule Configuration:
- IP-specific rules:
- Allow from specific IP: sudo ufw allow from [IP] to any port [port]
- Allow IP range: Use CIDR notation (e.g., 10.0.0.0/24)
- Deny specific IP: sudo ufw deny from [IP]
- Rule Management:
- Delete by number: sudo ufw delete [rule number]
- Delete by rule: sudo ufw delete [rule specification]
- Insert rule at position: sudo ufw insert [position] [rule]
- Interface-specific rules:
- Format: sudo ufw [allow/deny] [in/out] on [interface]
- Example: sudo ufw deny out on enp0s3 to [IP]
Advanced Rule Structure:
- Full syntax: sudo ufw [allow/deny] [in/out] on [interface] from [source-IP] to [dest-IP] port [port] proto [protocol]
- Direction matters:
- Incoming: ’to’ refers to local machine
- Outgoing: ’to’ refers to external destination
Important Considerations:
- Rules processed top to bottom
- First matching rule is applied
- Order matters for allow/deny rules
- Can specify multiple parameters for precise control
- Manual provides example commands for reference
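A safe order of operations for the rules above; the subnet and addresses are examples from documentation ranges, not values from the lesson.

```shell
sudo ufw allow 22/tcp                              # keep SSH reachable first
sudo ufw enable                                    # whitelist mode: everything else blocked

sudo ufw allow 80/tcp                              # open the web port
sudo ufw allow from 10.0.0.0/24 to any port 3306   # database reachable from one subnet only
sudo ufw deny from 203.0.113.5                     # block a single address

sudo ufw status numbered                           # rules with their positions
sudo ufw insert 1 deny from 203.0.113.6            # evaluated before all existing rules
sudo ufw delete 2                                  # remove a rule by its number
```

Because the first matching rule wins, deny rules that must override an allow belong earlier in the list, hence `insert`.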
Port redirection and network address translation (NAT)
- Port Redirection (Port Forwarding)
- Allows access to private servers through public server
- Public server connects to both internal network and internet
- Rules direct incoming connections to specific private servers
- Example: Port 80 → Server 1, Port 993 → Server 2, Port 3306 → Server 3
- Network Address Translation (NAT)
- Network packets contain:
- Source IP address
- Destination IP address
- Other technical data
- Process:
- Public server changes destination address for internal routing
- Source address may be modified (masquerading)
- Similar to home router operations
- Linux Configuration A. Enable IP Forwarding:
- Edit /etc/sysctl.d/99-sysctl.conf (preferred) or /etc/sysctl.conf
- Uncomment IPv4/IPv6 forwarding lines
- Reload with sysctl --system
- Verify with sysctl command
B. Implementation Tools:
- NetFilter Framework (kernel-level)
- NFT (modern command)
- IPtables (older but still functional)
- Port Redirection Setup A. Prerequisites:
- Check network interfaces (ip a command)
- Verify routing (ip r command)
- Identify default gateway
B. IPtables Structure:
- Uses tables and chains
- Packets processed through specific chains
- NAT table handles address translation
C. Command Structure:
sudo iptables -t nat -A PREROUTING -i [interface] -s [source_range] -p tcp
Key components:
- -t nat: NAT table
- -A PREROUTING: Append to PREROUTING chain
- -i: Input interface
- -s: Source IP range
- -p: Protocol specification
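The command structure above stops at the protocol match; a complete redirection rule also matches the destination port and jumps to the DNAT target. The interface names and addresses below are assumptions.

```shell
# Rewrite incoming TCP port 80 on the public interface to an
# internal server before routing (PREROUTING chain, nat table)
sudo iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.1:80

# Masquerade the source address on the internal interface so
# replies come back through this host
sudo iptables -t nat -A POSTROUTING -o enp0s8 -j MASQUERADE

sudo iptables -t nat -L -n -v    # review the nat table
```

This only works once IP forwarding is enabled via sysctl, as described above.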
Implement reverse proxies and load balancers
- Basic Web Access Flow:
- User requests webpage → Browser sends request to web server → Web server returns content
- Modern high-traffic websites use additional infrastructure between user and web server
- Reverse Proxy:
- Flow: User → Reverse Proxy → Web Server → Reverse Proxy → User
- Advantages:
- Instant traffic switching between servers
- Avoids DNS propagation delays
- Filters web traffic
- Caches pages
- Enables load balancing
- Load Balancer:
- Similar to reverse proxy but redirects to multiple servers
- Distributes traffic evenly across servers
- Prevents server overload
- Essential for high-traffic sites like YouTube
- NGINX Configuration as Reverse Proxy:
- Installation steps:
- Install NGINX
- Create config file in /etc/nginx/sites-available/
- Basic configuration includes:
- Server block with port listening
- Location directive
- Proxy pass directive
- Link config to /etc/nginx/sites-enabled/
- Test configuration
- Reload NGINX
- Load Balancer Configuration:
- Key components:
- Upstream directive defines server collection
- Server block similar to reverse proxy
- Various distribution methods:
- Round robin (default)
- Least connections (least_conn)
- Weighted distribution
- Additional features:
- Server weights
- Backup servers
- Down status for maintenance
- Custom port specification
- Important Configuration Options:
- proxy_params for preserving user information
- Location directive for URL filtering
- Multiple server definitions
- Weight assignments
- Backup server designation
- Port specifications
- Implementation Steps:
- Create configuration file
- Link to sites-enabled
- Test configuration
- Reload NGINX
This setup enables efficient traffic management and server load distribution while providing flexibility for maintenance and scaling.
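The upstream and server blocks described above fit together as in this sketch; the backend addresses, weights, and file name are assumptions.

```nginx
# Hypothetical /etc/nginx/sites-available/loadbalancer
upstream backends {
    least_conn;                   # default is round robin; this picks the least-busy server
    server 10.0.0.2 weight=3;     # receives ~3x the traffic
    server 10.0.0.3;
    server 10.0.0.4:8080 backup;  # used only if the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
        include proxy_params;     # preserves client information headers
    }
}
```

Then link the file into /etc/nginx/sites-enabled/, check it with `nginx -t`, and reload NGINX. A plain reverse proxy is the same structure with `proxy_pass` pointing at a single server instead of an upstream group.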
Set and synchronize system time using time servers
- Hardware Clock Issues
- Computer clocks drift from real time
- Example: Real time 12:00:05, server shows 12:00:06
- Time Servers (NTP)
- NTP (Network Time Protocol) servers provide exact time
- Modern OS include time synchronization software
- Ubuntu uses Systemd-timesyncd by default
- Time Zones
- Important for server management across different locations
- Can be confusing when viewing logs from multiple servers
- Best practice: Set servers to company’s main office time zone
- Time Management Tools
- timedatectl utility for time operations
- Commands:
- timedatectl list-timezones: Show available zones
- sudo timedatectl set-timezone [Zone]: Set time zone
- Format: Continent/City (use underscore for multi-word cities)
- NTP Service Setup
- Install: sudo apt install systemd-timesyncd
- Check status: timedatectl (shows if NTP service is active)
- Configure via /etc/systemd/timesyncd.conf
- NTP Server Configuration
- Format: [number].countrycode.pool.ntp.org
- Example: 0.us.pool.ntp.org
- Can specify multiple servers
- Important Settings
- RootDistanceMaxSec: Maximum server response time
- Poll intervals:
- Minimum: 32 seconds
- Maximum: 2048 seconds
- Adjusts automatically based on clock drift
- Useful Commands
- timedatectl show-timesync: Show NTP servers in use
- timedatectl timesync-status: Show current poll interval and root distance
- Service restart required after configuration changes
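The settings above live in one file; the pool servers and values below are examples, not recommendations.

```text
# /etc/systemd/timesyncd.conf
[Time]
NTP=0.us.pool.ntp.org 1.us.pool.ntp.org
FallbackNTP=ntp.ubuntu.com
RootDistanceMaxSec=5
PollIntervalMinSec=32
PollIntervalMaxSec=2048
```

After editing, restart the service (`sudo systemctl restart systemd-timesyncd`) and confirm with `timedatectl timesync-status`.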
Configure SSH servers and clients
Requirements:
- SSH client (local computer)
- SSH daemon/server (remote Linux server)
SSH Daemon Configuration:
Config file location: /etc/ssh/sshd_config
Key settings:
- Port (default: 22)
- AddressFamily (any/inet/inet6)
- ListenAddress (specify IP)
- PermitRootLogin
- PasswordAuthentication
- KbdInteractiveAuthentication
- X11Forwarding
User-specific settings:
- Can override global settings using "Match User"
- Example: Match User Aaron PasswordAuthentication yes
Important:
- Reload daemon after config changes
- Check sshd_config.d/ for additional config files
SSH Client Configuration:
Local user config:
- Location: ~/.ssh/config
- Permissions must be restricted
- Can create shortcuts for server connections
Global client config:
- Location: /etc/ssh/ssh_config
- Better to add new config in /etc/ssh/ssh_config.d/
SSH Keys:
Generation:
- Command: ssh-keygen
- Creates private and public key pair
- Optional passphrase protection
Key deployment:
- Method 1: ssh-copy-id username@ip
- Method 2: Manually add to ~/.ssh/authorized_keys
- Set proper permissions for authorized_keys
Known Hosts:
Purpose:
- Stores server fingerprints
- Verifies server identity
- Located in ~/.ssh/known_hosts
Management:
- Remove single entry: ssh-keygen -R ip_address
- Remove all: delete known_hosts file
Security Considerations:
- Disable password authentication when using keys
- Use passphrases for private keys
- Restrict file permissions
- Consider limiting root login
- Check for conflicting configs in sshd_config.d/
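A client-side shortcut as described above might look like this; the host alias, address, and key file are hypothetical.

```text
# Hypothetical entry in ~/.ssh/config (file permissions restricted, e.g. chmod 600)
Host web1
    HostName 203.0.113.10
    User aaron
    Port 22
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, `ssh web1` replaces the full `ssh -p 22 -i ~/.ssh/id_ed25519 aaron@203.0.113.10` invocation; keys are created with `ssh-keygen` and deployed with `ssh-copy-id aaron@203.0.113.10`.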
5. Storage
List, create, delete, and modify physical storage partitions
- Basic Concept
- Partitioning allows division of storage space for different operating systems
- Different OS can use different file systems (e.g., Windows-NTFS, Ubuntu-EXT4)
- Viewing Partitions
- Use ’lsblk’ command to list block devices
- Block devices are storage spaces for data
- Only entries with “part” in type column are partitions
- Device Naming Convention
- Virtual devices: names start with 'vd' (e.g., /dev/vda)
- Physical devices:
- sd: SATA/SCSI-connected devices (e.g., sda)
- NVMe: For NVMe storage
- Naming pattern: sda1, sdb2, etc.
- Letters (a,b,c) indicate device number
- Numbers indicate partition number
- Device Files
- Located in /dev directory
- Examples: /dev/sda (entire device), /dev/sda1 (first partition)
- Require root permissions to access
- Partition Management Tools a) fdisk:
- Pre-installed utility
- Shows sector information
- Can create/delete partitions
b) cfdisk:
- More user-friendly interface
- Features:
- Create/delete partitions
- Resize partitions
- Sort partitions
- Change partition types
- Changes only apply after using "Write" button
- Partition Tables
- MBR (Master Boot Record): Older format
- GPT (GUID Partition Table):
- Modern format
- Less corruption prone
- Supports more primary partitions
- Larger partition sizes
- Recommended for modern systems
- Important Notes
- Standard practice: Leave 1MB unpartitioned space at beginning
- Partitions are numbered by creation order, not physical position
- Special partition types:
- Linux filesystem (default)
- Swap partition
- EFI system (for boot partition)
Configure and manage swap space
- What is Swap?
- Area where Linux temporarily moves data from RAM
- Acts as overflow space when RAM is full
- Allows system to continue functioning when RAM is exhausted
- Example Scenario:
- 4GB RAM system
- Video editor uses 2GB
- Audio editor uses 2GB
- Chrome can still open by moving inactive video editor data to swap
- Basic Swap Commands:
- Check swap areas: swapon --show
- Stop using swap: swapoff [partition/file]
- Setting up Swap Partition:
- Format partition as swap using mkswap command
- Enable swap with swapon command
- Changes are temporary until properly configured in system boot
- Creating Swap File:
- Use dd utility to create file
- Command structure: dd if=/dev/zero of=/swap bs=1M count=128 status=progress
- Parameters:
- if=/dev/zero: input file (generates zeros)
- of=/swap: output file location
- bs=1M: block size (1 megabyte)
- count=128: number of blocks
- status=progress: shows writing progress
- Security Considerations:
- Restrict swap file permissions to root user only
- Prevents unauthorized access to memory contents
- Multiple Swap Areas:
- Can use multiple swap partitions and files simultaneously
- System can utilize both partition and file-based swap
- Important Notes:
- Changes are temporary unless configured for boot
- Swap file size depends on system requirements
- Proper permissions are crucial for security
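The swap-file steps above can be run end to end without root, up to the final swapon; the path and size here are scaled-down examples (32 MiB rather than the 128 MiB in the notes):

```shell
# Create a 32 MiB file of zeros; status=progress reports as it writes.
dd if=/dev/zero of=/tmp/swapfile bs=1M count=32 status=progress

# Restrict the file to root-equivalent access only, per the security notes.
chmod 600 /tmp/swapfile

# Write the swap signature onto the file.
mkswap /tmp/swapfile

# Actually enabling it needs root: sudo swapon /tmp/swapfile
```

Remember this remains temporary until the swap file also gets an fstab entry.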
Create and configure filesystems
Default File Systems:
- Red Hat: XFS file system
- Ubuntu: ext4 file system
XFS File System:
- Basic Creation:
- Command: sudo mkfs.xfs /dev/sdb1
- View options: man mkfs.xfs or run command without arguments
- XFS Options:
- Label (-L): Maximum 12 characters
- Example: sudo mkfs.xfs -L BackupVolume /dev/sdb1
- Inode size (-i): Example: sudo mkfs.xfs -i size=512 /dev/sdb1
- Force format (-f): Overwrites existing file system
- XFS Administration:
- xfs_admin utility for managing existing file systems
- View label: xfs_admin -l
- Change label: xfs_admin -L
ext4 File System:
- Basic Creation:
- Command: sudo mkfs.ext4 /dev/sdb2
- Alternative name: mke2fs (original name)
- ext4 Characteristics:
- Inode limitations: Can run out even with free space available
- Each file/directory uses one inode
- Can specify number of inodes during creation
- ext4 Administration:
- tune2fs utility for managing existing file systems
- View properties: tune2fs -l
- Change label: tune2fs -L
Safety Features:
- Both systems warn before overwriting existing file systems
- Can force format if needed
- Options can be combined for customization
Verification:
- Use cfdisk to verify file system types and labels
- Can display multiple partitions and their properties
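A runnable sketch of the ext4 side, using an image file instead of /dev/sdb2 so it needs no root (label and paths are examples):

```shell
# 16 MiB image standing in for a partition.
truncate -s 16M /tmp/fs.img

# -F skips the "not a block special device" prompt, -L sets a label,
# -q keeps the output quiet.
mkfs.ext4 -q -F -L BackupVol /tmp/fs.img

# Inspect the result with tune2fs; prints the "Filesystem volume name" line.
tune2fs -l /tmp/fs.img | grep 'volume name'
```

The XFS equivalents (`mkfs.xfs -L`, `xfs_admin -l`) follow the same shape but need xfsprogs installed.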
Configure systems to mount filesystems at or during boot
Mounting File Systems
- Mounting: Attaching a file system to a directory
- Manual mounting command:
mount /dev/device /mountpoint
- Unmounting command:
umount /mountpoint
- Use lsblk to verify mount status
Automatic Mounting (fstab)
- Location: /etc/fstab
- Purpose: Defines file systems to mount at boot time
fstab Fields (6 total)
- Device identifier (/dev/device or UUID)
- Mount point (directory)
- File system type (ext4, xfs, etc.)
- Mount options (usually "defaults")
- Dump field (0=disabled, 1=enabled)
- fsck order (0=no check, 1=check first, 2=check after)
Important Commands
- systemctl daemon-reload: Update system after fstab changes
- blkid: Check UUID of block devices
- ls -l /dev/disk/by-uuid: View UUID mappings
UUID vs Device Names
- UUIDs preferred over device names (/dev/sdX)
- UUIDs remain constant regardless of connection order
- Device names can change based on connection order
Swap Partition in fstab
- Mount point: "none"
- File system type: "swap"
- Last two fields: "0 0"
- No backup or error scanning needed
Best Practices
- Root file system should have fsck order of 1
- Other file systems should have fsck order of 2
- Use UUIDs for consistent device identification
- Check existing entries for reference
- Consult man fstab for help
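Putting the six fields together, a sketch of /etc/fstab entries — the UUIDs and mount points below are made-up placeholders; take real values from blkid:

```
# <device>                                  <mount>  <type>  <options>  <dump> <fsck>
UUID=0a3407de-014b-458b-b5c1-848e92a327a3   /        ext4    defaults   0      1
UUID=b411dc99-f0a0-4c87-9e05-184977be8539   /data    xfs     defaults   0      2
UUID=f9fe0b69-a8a5-48ee-9ff6-9e20a0d94c19   none     swap    sw         0      0
```

Note the root filesystem gets fsck order 1, the data filesystem 2, and the swap line uses none/swap/0 0 as described above. Run systemctl daemon-reload after editing.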
Filesystem and mount options
- Commands to View Mounted Filesystems:
- lsblk: Basic overview of mounted devices
- findmnt: Detailed view including filesystem type and mount options
- Can filter with -t option (e.g., -t xfs,ext4 for real filesystems only)
- Shows only currently mounted filesystems
- Mount Options:
a) Filesystem Independent Options:
- rw: Read-write access
- ro: Read-only access
- noexec: Prevents program execution from filesystem
- nosuid: Disables SUID permissions
- Multiple options can be combined with commas (no spaces)
b) Filesystem Specific Options:
- Unique to each filesystem type (XFS, ext4, etc.)
- Documentation found in respective man pages (man xfs, man ext4)
- Example: allocsize for XFS
- Mounting with Options:
- Basic syntax: mount -o [options] device mountpoint
- Remounting with new options: mount -o remount,[options] mountpoint
- For filesystem-specific options, full unmount/mount recommended
- Security Applications:
- noexec and nosuid commonly used for security
- Example: Android phones use these options for media storage to prevent malware execution
- Permanent Mount Options:
- Set in /etc/fstab
- Replace "defaults" with desired options
- Applied automatically at boot time
- Important Notes:
- findmnt shows virtual filesystems (like proc) by default
- Mount options control filesystem behavior and rules
- Filesystem-specific options should be applied during initial mount
- Manual pages (man mount) provide comprehensive option documentation
Use remote filesystems: NFS
- Introduction
- Deals with accessing data on different systems
- Uses protocols for client-server communication
- NFS (Network File System) is commonly used between Linux computers
- NFS Server Setup
a) Installation
- Install NFS Kernel Server package: sudo apt install nfs-kernel-server
b) Configuration (/etc/exports file)
- Define shared directories and access permissions
- Basic structure: path hostname(options)
- Client specification options:
- Hostname (e.g., hostname1, example.com)
- IP address (e.g., 10.0.0.9)
- CIDR notation for IP ranges (e.g., 10.0.16.0/24)
c) Export Options
- rw: read/write access
- ro: read-only access
- sync: synchronous writes (guaranteed storage)
- async: asynchronous writes (faster but less secure)
- no_subtree_check: disables subtree checking (default)
- no_root_squash: allows root privileges on client
- NFS Client Setup
a) Installation
- Install NFS common package: sudo apt install nfs-common
b) Mounting NFS Shares
- Basic syntax: sudo mount server_IP:/remote_path /local_path
- Example: sudo mount 127.0.0.1:/etc /mnt
- Unmount: sudo umount /local_path
- Auto-mounting Configuration
- Edit /etc/fstab file
- Format: server_IP:/remote_path /local_path nfs defaults 0 0
- Additional Tips
- Use wildcards in hostname field (*)
- Avoid extra spaces between hostname and options
- Use exportfs -r to apply changes
- Use exportfs -v to view active shares
- Important Commands
- sudo exportfs -r: refresh exports
- sudo exportfs -v: view active shares
- man exports: view detailed documentation
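The export options above combine in /etc/exports like this — the directories, client IP, and subnet are illustrative, not a recommended layout:

```
# /etc/exports — one shared directory per line: path client(options)
/srv/backups   10.0.0.9(rw,sync,no_subtree_check)
/srv/public    10.0.16.0/24(ro,async,no_subtree_check)
/srv/shared    *(rw,sync,no_root_squash,no_subtree_check)
```

Note there is no space between the client and its parenthesized options. Apply changes with `sudo exportfs -r` and confirm with `sudo exportfs -v`.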
Use network block devices: NBD
- Block Device Basics
- Special files reference storage devices (e.g., /dev/sda, /dev/vda)
- Partitions referenced as /dev/sda1, /dev/vda1, etc.
- Network Block Devices (NBD)
- Allows accessing storage devices on remote computers
- Creates special file /dev/nbd0
- Behaves like local block device but redirects to remote storage
- NBD Server Configuration
- Install package: nbd-server
- Configuration file: /etc/nbd-server/config
- Key settings:
- Comment out user/group for root privileges
- Set allowlist=true for export listing
- Define exports with identifier and device path
- Restart daemon after configuration
- NBD Client Setup
- Install package: nbd-client
- Load kernel module: sudo modprobe nbd
- Make module persistent: add 'nbd' to /etc/modules-load.d/modules.conf
- Connecting to Remote Block Device
- Command: sudo nbd-client [IP-address] -N [export-name]
- Creates /dev/nbd0 device
- Can mount like regular block device
- List available exports using -l option
- Disconnecting NBD Device
- Unmount first
- Use command: nbd-client -d [device-path]
- Verify disconnection: device shows 0 byte size in lsblk
- Additional Features
- Can use hostnames instead of IP addresses
- List exports with -l option (requires allowlist=true on server)
- Manual pages available in section 5 for configuration options
Note: NBD requires root privileges for both server and client operations.
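The server settings described above map onto /etc/nbd-server/config roughly as follows — the export identifier and device path are examples:

```
[generic]
# user = nbd      # left commented out so the daemon keeps root privileges
# group = nbd
allowlist = true  # lets clients discover exports with nbd-client -l

[partition2]      # export identifier; the client selects it with -N partition2
exportname = /dev/sdb2
```

Restart the daemon afterwards (e.g., `sudo systemctl restart nbd-server`), then connect from the client with `sudo nbd-client server_ip -N partition2`.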
Manage and configure LVM storage
Main Advantage:
- Solves partition resizing limitations by allowing flexible space allocation
- Can combine non-contiguous spaces to appear as continuous partitions
Key Components (Important Abbreviations):
- PV (Physical Volume)
- Represents real storage devices/disks
- Can be entire disk or partition
- VG (Volume Group)
- Combines multiple PVs
- Acts like a virtual disk
- Can be expanded by adding more PVs
- Allows growth without server downtime
- LV (Logical Volume)
- Similar to partitions in LVM
- Can be resized flexibly
- Located at /dev/[volume_group_name]/[logical_volume_name]
- PE (Physical Extent)
- Basic unit of data division in LVM
Common Commands:
- pvcreate: Create Physical Volume
- vgcreate: Create Volume Group
- vgextend: Add PV to existing VG
- vgreduce: Remove PV from VG
- pvremove: Remove PV from LVM
- lvcreate: Create Logical Volume
- lvresize: Resize Logical Volume
Important Considerations:
- Install LVM using: sudo apt install lvm2
- When resizing LV with filesystem, use --resizefs option
- Some filesystems can be enlarged but not shrunk
- Can view available commands by typing a component prefix (e.g., vg) and pressing Tab twice
File System Management:
- LVs need filesystem creation to store data
- Use mkfs command to create filesystem
- When resizing LV with filesystem, both need to be resized together
Advantages:
- Flexible storage management
- Easy partition resizing
- Can add storage without server shutdown
- Combines multiple physical devices into single logical volume
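A typical build-up of the PV → VG → LV chain might look as follows. This needs root and spare partitions, so it is a sketch rather than something to paste verbatim: the device names /dev/sdb1 and /dev/sdc1 and the vg0/lv_data names are placeholders.

```shell
sudo pvcreate /dev/sdb1 /dev/sdc1        # mark devices as Physical Volumes
sudo vgcreate vg0 /dev/sdb1 /dev/sdc1    # pool them into a Volume Group
sudo lvcreate -n lv_data -L 2G vg0       # carve out a 2 GiB Logical Volume
sudo mkfs.ext4 /dev/vg0/lv_data          # LVs still need a filesystem
sudo lvresize --resizefs -L 3G /dev/vg0/lv_data  # grow LV and filesystem together
```

Later, `sudo vgextend vg0 /dev/sdd1` would add more capacity without downtime, which is the main advantage noted above.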
Monitor storage performance
- Tools for Monitoring:
- sysstat package contains iostat and pidstat
- iostat: Shows I/O (Input/Output) statistics
- pidstat: Shows Process ID statistics
- iostat Command:
- Shows historical usage since system boot
- Key fields:
- tps (transfers per second)
- kB_read/s (kilobytes read per second)
- kB_wrtn/s (kilobytes written per second)
- kB_read and kB_wrtn (total kilobytes read/written)
- Usage: iostat 1 (refreshes every 1 second)
- Storage Device Stress Factors:
- High frequency of reads/writes
- Large volume of data transfers
- Can lead to device overuse and system slowdown
- pidstat Command:
- Shows process-specific I/O statistics
- Usage: pidstat -d
- Key columns:
- PID (Process ID)
- kB_rd/s (kilobytes read/second)
- kB_wr/s (kilobytes written/second)
- Device Mapper (dm):
- Use dmsetup command to get device information
- Example: sudo dmsetup info /dev/dm-0
- lsblk command shows device relationships
- Investigation Process:
- Use iostat to identify active devices
- Use pidstat to identify processes causing high I/O
- Use dmsetup to understand device mapping
- Use ps command to see process details
- Use kill command to stop problematic processes
Note: Storage device reporting may not always reflect exact process data due to how devices handle data blocks.
Create, manage, and diagnose advanced filesystem permissions
- Standard File Permissions:
- Format: rw- rw- r-- (owner, group, others)
- Limited to one user, one group, and others
- Can be restrictive when needing specific permissions
- ACLs (Access Control Lists):
- Allows defining specific permissions for multiple users/groups
- Command: setfacl (set file access control list)
- Installation: sudo apt install acl (on newer Ubuntu versions)
- Using ACLs:
- Basic syntax: sudo setfacl --modify user:username:permissions filename
- Permission options: rwx (read, write, execute)
- Plus sign (+) indicates ACL existence in ls -l output
- View ACLs: getfacl command
- ACL Features:
- Mask: Defines maximum possible permissions
- Recursive application: --recursive option
- Can add/remove permissions for specific users/groups
- Remove all ACL entries: setfacl --remove-all filename
- File Attributes:
- Modified using chattr command
- Common attributes:
- Append only (a): Only allows adding data
- Immutable (i): Prevents any modifications
- View attributes: lsattr command
- Syntax: sudo chattr +/- attribute filename
- Important Notes:
- ACLs provide granular permission control
- Some attributes may not work on certain filesystems
- Root permissions required for many operations
- Mask automatically adjusts to fit permissions unless manually set
- Example Commands:
- Add user permission: setfacl --modify user:jeremy:rw file3
- Add group permission: setfacl --modify group:sudo:rw file3
- Remove ACL: setfacl --remove user:jeremy file3
- Set append attribute: chattr +a filename
- Set immutable attribute: chattr +i filename