Linux Interview Questions
Complete Linux Syllabus with Compact Command Flags
Part 1: Linux Fundamentals (Core - 30% of interviews)
1.1 Linux Philosophy & Architecture
Memory Trigger: "Kernel→Hardware, Shell→You"
Boot Process: BIOS/UEFI → GRUB → Kernel → initrd → systemd → Shell
Runlevels/Targets:
systemctl get-default                    # Current target
systemctl set-default multi-user.target  # CLI mode
systemctl isolate graphical.target       # Switch to GUI

1.2 File System Hierarchy (FHS)
Memory Trigger: "/bin bin, /sbin sys bin, /etc config, /var varies, /usr user, /home home, /root root, /tmp temp, /dev devices, /proc process, /sys system, /opt optional"
1.3 File Types & Permissions
Memory Trigger: "rwx=421, SUID=4, SGID=2, Sticky=1"

chmod 755 file         # rwxr-xr-x
chmod u+x,g-w,o=r file # Symbolic
chmod -R 755 dir/      # Recursive
chown user:group file  # Change owner:group
chgrp group file       # Change group only
umask 022              # Default 755/644
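How umask interacts with the default creation modes (666 for files, 777 for directories) can be checked in a throwaway directory. A minimal sketch, assuming GNU coreutils `stat -c` and a scratch path from `mktemp`:

```bash
#!/usr/bin/env bash
# umask 022 masks the group/other write bits:
# files start from 666 -> 644, directories from 777 -> 755.
set -e
tmp=$(mktemp -d)   # scratch directory (illustrative)
cd "$tmp"
umask 022
touch f
mkdir d
stat -c '%a %n' f d   # prints: 644 f / 755 d
cd / && rm -rf "$tmp"
```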
chmod u+s file # SUID (4) - runs as owner
chmod g+s dir/ # SGID (2) - inherits group
chmod +t dir/  # Sticky (1) - only owner delete

1.4 Essential Linux Commands
Memory Trigger: "ls -la(all details), -lh(human), -lt(time), -ltr(reverse time)"

ls -la, -lh, -lt, -ltr
cp -r(recursive), -p(preserve), -u(update), -i(interactive)
mv -i, -u, -v(verbose)
rm -r(recursive), -f(force), -i(interactive)
mkdir -p(parents), -m(mode)
touch -t(timestamp), -r(reference)
head -n(lines), tail -f(follow), -F(follow+retry)
less -N(lines), -S(chop)
find -name, -type f/d, -size, -mtime, -perm, -user, -exec
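A runnable illustration of combining those predicates (the file names and scratch directory are made up for the demo):

```bash
#!/usr/bin/env bash
# find: filter by type and name, then act on matches with -exec.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/sub"
touch "$tmp/app.log" "$tmp/notes.txt" "$tmp/sub/deep.log"

find "$tmp" -type f -name '*.log'                # matches recursively
find "$tmp" -type f -name '*.txt' -exec rm {} \; # delete the .txt files
find "$tmp" -type f | wc -l                      # 2 files remain

rm -rf "$tmp"
```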
tar -czf(create gzip), -xzf(extract gzip), -cjf(bzip2), -xJf(xz), -tvf(view)
gzip -k(keep), -9(best compression)
uname -a(all), -r(release)
free -h(human), -m(MB)
df -h, -i(inodes)
du -sh(summary human), --max-depth=1
ps aux(all), -ef(full), -eo custom, --sort=-%cpu
top -u(user), -p(pid)
kill -9(force), -15(graceful), -1(reload)
pkill -f(full command), -u(user)
nice -n(priority), renice -n -p PID

1.5 Wildcards & Globbing
Memory Trigger: "* any, ? one, [] range, {} list"

*.txt, ?.txt, [0-9], {a,b,c}, {1..10}

1.6 Redirection & Pipes
Memory Trigger: "> overwrite, >> append, 2> error, &> both, | pipe"

command > file, >> file, 2> file, &> file, < file
command1 | command2
command | tee file # Display and save
find | xargs rm, xargs -n1, xargs -I{}

1.7 Exit Codes
Memory Trigger: "0 success, && AND, || OR, ; always"

command && echo "OK" || echo "Fail"
echo $? # Last exit code
Part 2: Text Processing & Regex (15% of interviews)
2.1 Regular Expressions
Memory Trigger: ". any, * 0+, + 1+, ? 0/1, ^ start, $ end"

grep 'pattern' file  # BRE (needs \ for +, ?, |)
egrep 'pattern' file # ERE (no backslashes)
grep -E, grep -F(fixed string)

2.2 Grep Family
Memory Trigger: "-i(ignore), -v(invert), -r(recursive), -n(number), -c(count), -l(filename), -A(after), -B(before), -C(context)"

grep -irn "error" /var/log/
egrep "error|warning"
fgrep ".*" # Literal search

2.3 Sed (Stream Editor)
Memory Trigger: "s/old/new/g(global), -i(in-place), -e(multiple)"

sed 's/old/new/g' file
sed -i.bak 's/old/new/g' file # Backup then replace
sed '2,5d', sed '/start/,/end/d'
sed -n '10,20p' # Print range

2.4 Awk
Memory Trigger: "print $1(first field), -F(separator), NR(line number), NF(field count)"

awk '{print $1}' file
awk -F: '{print $1}' /etc/passwd
awk 'NR==10, NR==20' file
awk '/error/ {print}'
awk '{sum+=$1} END {print sum}'
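The awk one-liners above can be exercised against inline sample data (the values are invented for the demo):

```bash
#!/usr/bin/env bash
# Field printing, field separators, and an END-block sum.
printf '10 alpha\n20 beta\n30 gamma\n' | awk '{print $1}'   # 10, 20, 30
printf '10\n20\n30\n' | awk '{sum+=$1} END {print sum}'     # 60
printf 'root:x:0:0\nbin:x:1:1\n' | awk -F: '{print $1}'     # root, bin
```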
2.5 Other Text Tools
Memory Trigger: "cut -d(field), -f(fields), -c(chars)"

cut -d: -f1 /etc/passwd
sort -n(numeric), -r(reverse), -k(key), -u(unique)
uniq -c(count), -d(duplicates), -u(unique only)
wc -l(lines), -w(words), -c(chars)
tr 'a-z' 'A-Z', -d(delete), -s(squeeze)
diff -u(unified), -c(context), -r(recursive)

Part 3: Shell Scripting (15% of interviews)
3.1 Shebang

#!/bin/bash, #!/usr/bin/env bash
chmod +x script.sh
./script.sh (subshell), source script.sh (current)

3.2 Variables
Memory Trigger: "$1 first arg, $# count, $? exit code, $$ PID"

name="value" # No spaces!
${name}, "$name"
$0, $1..$9, ${10}, $#, $*, $@, $?, $$, $!
${VAR:-default}  # Use default if unset
${VAR:=default}  # Assign default
${#var}          # Length
${var#pattern}   # Remove shortest prefix
${var##pattern}  # Remove longest prefix
${var%pattern}   # Remove shortest suffix
${var%%pattern}  # Remove longest suffix
${var/old/new}   # Replace first
${var//old/new}  # Replace all
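A runnable sketch of these expansions on a made-up sample path:

```bash
#!/usr/bin/env bash
# Bash parameter expansion on a sample value (path is illustrative).
path="/var/log/app.tar.gz"
echo "${#path}"              # 19 (length)
echo "${path#*/}"            # var/log/app.tar.gz  (shortest prefix cut)
echo "${path##*/}"           # app.tar.gz          (longest prefix cut)
echo "${path%.*}"            # /var/log/app.tar    (shortest suffix cut)
echo "${path%%.*}"           # /var/log/app        (longest suffix cut)
echo "${path/log/LOG}"       # /var/LOG/app.tar.gz (first replacement)
unset missing
echo "${missing:-default}"   # default (fallback when unset)
```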
3.3 Arrays

arr=(a b c) # Indexed
${arr[0]}, ${arr[@]}, ${#arr[@]}
declare -A arr # Associative
arr[key]=value, ${arr[key]}

3.4 Conditionals
Memory Trigger: "-f file, -d dir, -z empty, -eq equal, && and, || or"

if [ -f "$file" ]; then ...; fi
[ -f "$file" ] && echo "exists"
[[ "$str" == pattern ]]  # Pattern matching
[[ "$str" =~ ^[0-9]+$ ]] # Regex
(( a > b ))              # Arithmetic
case "$var" in pattern) ;; esac
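The test forms above, exercised with sample values (bash-specific `[[ ]]` and `(( ))` assumed):

```bash
#!/usr/bin/env bash
# Each conditional form with a value that makes it true.
str="12345"
[[ "$str" =~ ^[0-9]+$ ]] && echo "numeric"
[[ "app.log" == *.log ]] && echo "glob match"
a=5; b=3
(( a > b )) && echo "a bigger"
[ -z "" ] && echo "empty string"
case "$str" in
  [0-9]*) echo "starts with digit" ;;
  *)      echo "other" ;;
esac
```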
3.5 Loops

for i in list; do ...; done
for ((i=0; i<10; i++)); do ...; done
while [ condition ]; do ...; done
until [ condition ]; do ...; done
break, continue

3.6 Functions

func() { echo $1; local var="x"; return 0; }
result=$(func "arg")

3.7 Input/Output
Memory Trigger: "read -p(prompt), -s(silent), -t(timeout)"

read -p "Name: " name
$(command), `command` # Command substitution
cat << EOF ... EOF # Heredoc
<<< "string" # Herestring

3.8 Debugging
Memory Trigger: "set -x(debug), -e(exit on error), -u(unset error)"

set -euxo pipefail
trap 'echo "Error"' ERR
logger -t tag "message"

3.9 Advanced Bash
Memory Trigger: "{1..10} brace, $(( )) arithmetic"

{1..10}, {a..z}, {1..10..2}
$((2+2)), ((count++))
printf "%s %d\n" "text" 10
eval "echo \$var" # Caution!
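Brace expansion and arithmetic are easy to verify interactively; a small sketch (bash assumed):

```bash
#!/usr/bin/env bash
# Brace expansion produces word lists; $(( )) evaluates integer math.
echo {1..5}        # 1 2 3 4 5
echo {a..e}        # a b c d e
echo {1..10..3}    # 1 4 7 10 (step of 3)
echo $((2 ** 10))  # 1024
count=0; ((count++)); ((count++))
printf "count=%d\n" "$count"   # count=2
```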
Part 4: User & Permission Management (10% of interviews)
4.1 User Management
Memory Trigger: "-m(home), -s(shell), -L(lock), -e(expire)"

useradd -m -s /bin/bash user
usermod -aG group user # Append to group
usermod -L(lock), -U(unlock)
userdel -r(remove home)
passwd -e(expire)
chage -l(list), -M(max days)
id, who, w, last, lastlog

4.2 Group Management
groupadd, groupdel
gpasswd -a user group # Add user
gpasswd -d user group # Delete user

4.3 Sudo
Memory Trigger: "-i(login), -s(shell), -l(list), -k(kill timestamp)"

visudo # Edit /etc/sudoers
sudo -i, -s, -l, -k
sudo -u user command

4.4 ACLs

getfacl file
setfacl -m u:user:rwx file
setfacl -x u:user file

4.5 User Profiles

/etc/profile, ~/.bash_profile # Login shells
/etc/bashrc, ~/.bashrc # Non-login shells
source ~/.bashrc # Reload

Part 5: Process Management (10% of interviews)
5.1 Process States
Memory Trigger: "R running, S sleep, D disk I/O, Z zombie, T stopped"

5.2 Process Monitoring
Memory Trigger: "ps aux(all), -ef(full), -eo(custom), top -u(user)"

ps aux --sort=-%cpu
ps -eo pid,ppid,cmd,%cpu,%mem
top -u user, -p PID
pstree -p(pid), -u(user)
lsof -i:port, -p PID, -u user
fuser -v file, -k file
strace -p PID, -e trace=open

5.3 Process Control
command & # Background
Ctrl+Z, jobs, fg %1, bg %2
disown %1, nohup command &
screen -S name, -r reattach
tmux new -s name, attach -t name

5.4 Priority
Memory Trigger: "nice -n(start), renice(change), -20 highest"

nice -n 10 command
renice -n 5 -p PID

5.5 Signals
Memory Trigger: "1 HUP reload, 9 KILL force, 15 TERM graceful"

kill -9, -15, -1, -STOP, -CONT
killall -15 name
pkill -15 pattern
trap 'cmd' INT TERM EXIT

Part 6: Disk & Filesystem (10% of interviews)
6.1 Filesystem Types

df -T, lsblk -f, blkid

6.2 Partitioning
Memory Trigger: "fdisk(MBR), gdisk(GPT), parted(both)"

fdisk -l /dev/sda
gdisk -l /dev/sda
parted /dev/sda print
lsblk -f(FS), -p(path)

6.3 Filesystem Operations
Memory Trigger: "mkfs.ext4, mount -o ro/rw/noexec, fsck -f"

mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt
mount -o ro, -o noexec, -o remount,rw
umount -l(lazy)
fsck -f(force), -y(auto yes)

6.4 LVM
Memory Trigger: "pvcreate→vgcreate→lvcreate"

pvcreate /dev/sda1
vgcreate vg_name /dev/sda1
lvcreate -L 10G -n lv_name vg_name
lvextend -L +5G /dev/vg_name/lv_name
lvreduce -L 5G /dev/vg_name/lv_name
resize2fs, xfs_growfs

6.5 Disk Monitoring

iostat -x 1
iotop -o(only active)
smartctl -a /dev/sda, -H(health)

6.6 Swap

swapon -a(all), swapoff
sysctl vm.swappiness=10

Part 7: System Administration (10% of interviews)
7.1 Boot Process

/etc/default/grub
update-grub # Debian
grub-mkconfig -o /boot/grub/grub.cfg # RHEL

7.2 systemd
Memory Trigger: "start/stop/restart/reload/enable/disable/mask"

systemctl start/stop/restart/reload/enable/disable/mask service
systemctl status/is-active/is-enabled service
systemctl reboot/poweroff/rescue/emergency
systemctl get-default/set-default
systemd-analyze blame

7.3 Package Management
Debian/Ubuntu - "apt update/upgrade/install/remove/purge"

apt update/upgrade/install/remove/purge/autoremove
apt search/show/list --installed
dpkg -i(install), -r(remove), -l(list), -L(list files), -S(search)

RHEL/CentOS - "yum install/update/remove/search"

yum install/update/remove/search/info
yum list installed/provides
dnf (same as yum)
rpm -ivh(install), -e(erase), -qa(query all), -qi(info), -ql(list files), -qf(find)

7.4 Logging
Memory Trigger: "journalctl -u(unit), -f(follow), -p(priority)"

journalctl -u nginx -f
journalctl -b(boot), -b -1(previous)
journalctl --since "1 hour ago"
journalctl -p err
logrotate /etc/logrotate.conf

7.5 Cron
Memory Trigger: "crontab -e(edit), -l(list), -r(remove)"

crontab -e, -l, -r
# * * * * * command (min hour day month dow)
@reboot, @daily, @hourly, @weekly, @monthly
anacron -f(force)

7.6 Monitoring

vmstat 1, -s(stats)
mpstat -P ALL
sar -u(CPU), -r(memory), -b(I/O), -n DEV
dstat -cdng
glances

Part 8: Networking (10% of interviews)
8.1 Network Config
Memory Trigger: "ip addr(show), link(up/down), route(add)"

ip addr show/add/del
ip link set eth0 up/down
ip route show/add/del
hostnamectl set-hostname name

8.2 Network Services

systemctl restart sshd
timedatectl set-ntp true

8.3 Network Tools
Memory Trigger: "ping -c(count), traceroute -n(numeric), ss -tulpn"

ping -c 4 -i 0.5 -s 1400
traceroute -n -w 2
mtr -r -c 10
ss -tulpn, -ta(all TCP)
netstat -tulpn (legacy)
nmap -sV, -p port, -sP(ping scan)
tcpdump -i eth0 -w file.pcap, -r file.pcap
nc -l(listen), -zv(zero I/O verbose)
curl -I(headers), -o(output), -L(follow)
wget -c(resume), -r(recursive)
dig +short, -x(reverse)
nslookup, host -t MX

8.4 Firewall
iptables Memory: "-L(list), -A(append), -D(delete), -p(protocol), -s(source), -d(dest), -j(jump)"

iptables -L -n -v
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
firewall-cmd --add-service=http --permanent
ufw allow 22/tcp

8.5 Routing

ip route add 10.0.0.0/24 via 192.168.1.1
sysctl net.ipv4.ip_forward=1

Part 9: Security (5% of interviews)
9.1 User Security

chage -l user, -M 90, -m 7, -W 7
faillog -u user
lastb
fail2ban-client status sshd

9.2 File Security
Memory Trigger: "chattr +i(immutable), +a(append-only)"

chattr +i/+a/-i file
lsattr file
find / -perm -0002 -type f # World-writable
find / -perm -4000 -type f # SUID files

9.3 Auditing

auditctl -w /etc/passwd -p wa -k key
ausearch -k key
aureport -l(login), -au(auth)

9.4 SELinux
Memory Trigger: "getenforce, setenforce 0/1(permissive/enforcing)"

getenforce, setenforce 0/1
ls -Z, chcon -t type, restorecon -v
getsebool -a, setsebool -P bool on
audit2why, audit2allow

9.5 AppArmor

aa-status
aa-complain /path/to/program
aa-enforce /path/to/program
aa-logprof

Part 10: Advanced Topics (5% of interviews)
10.1 Kernel & Modules
Memory Trigger: "lsmod(list), modprobe(load/unload), modinfo"

uname -r
lsmod
modprobe module, modprobe -r module
modinfo module
sysctl -a, -w parameter=value

10.2 Memory Management

free -h
cat /proc/meminfo
pmap -x PID

10.3 System Calls

strace -p PID -e trace=file,network
ltrace -p PID

10.4 Debugging

gdb program, gdb -p PID
valgrind --leak-check=full ./program
perf top, record, report

10.5 Containers

chroot /newroot /bin/bash
systemd-nspawn -D /path/to/container
machinectl list
virsh list --all, start, shutdown, destroy

Part 11: Distribution-Specific (5% of interviews)
11.1 Debian/Ubuntu
Memory Trigger: "apt update/upgrade/install, dpkg -i"

apt update/upgrade/full-upgrade/install/remove/purge/autoremove
apt search/show
dpkg -i/-r/-l/-L/-S
add-apt-repository ppa:user/name
update-alternatives --config python

11.2 RHEL/CentOS
Memory Trigger: "yum install/update, rpm -ivh"

yum install/update/remove/search/info/provides
dnf (same as yum)
rpm -ivh/-e/-qa/-qi/-ql/-qf
yum install epel-release

11.3 SUSE

zypper refresh/install/update/remove/search

11.4 Arch

pacman -S(install), -Syu(update), -R(remove), -Qs(search), -Qi(info)
yay -S (AUR)

Part 12: Quick Reference
Most Common Commands

ls -la, -lh, -lt
cp -rp, mv -i, rm -rf
mkdir -p, touch
find . -name "*.log" -mtime -7
grep -rin "error" .
ps aux --sort=-%cpu
top -u user
kill -9 PID, -15 PID
df -h, -i
du -sh, --max-depth=1
tar -czf, -xzf
ssh user@host
scp file user@host:/path
rsync -avz
chmod 755, chown user:group
systemctl start/stop/restart/status
journalctl -u service -f
ip addr, ss -tulpn

Key Files

/etc/passwd, /etc/shadow, /etc/group
/etc/fstab, /etc/hosts, /etc/resolv.conf
/etc/crontab, /etc/sudoers
/var/log/syslog, /var/log/auth.log
/proc/cpuinfo, /proc/meminfo

Common Troubleshooting
Section titled “Common Troubleshooting”| Problem | Commands |
|---|---|
| Disk full | df -h, du -sh /*, find / -size +100M |
| High CPU | top, ps aux --sort=-%cpu |
| High memory | free -h, ps aux --sort=-%mem |
| Can’t login | last, lastb, journalctl -u sshd |
| Network down | ip addr, ping, ss -tulpn |
| Permission denied | ls -la, id, groups |
| Service not starting | systemctl status, journalctl -xe |
Linux Interview Questions: 30 Important Questions + 20 Scenario-Based Questions
Part 1: 30 Important Linux Questions with Detailed Answers
1. Explain the Linux boot process step by step.
Answer:
The Linux boot process consists of several stages:
1. BIOS/UEFI (Power-On Self Test):
- Performs hardware initialization and testing
- Locates bootable device (HDD, SSD, USB)
- Loads and executes bootloader from MBR/GPT
2. Bootloader (GRUB2 most common):
- Presents boot menu (optional)
- Loads Linux kernel into memory
- Loads initramfs/initrd (initial RAM disk)
- Passes control to kernel with parameters
3. Kernel Initialization:
- Decompresses and initializes hardware drivers
- Mounts initial root filesystem from initramfs
- Executes /init from initramfs
- Loads necessary kernel modules
- Mounts real root filesystem (switch_root)
4. Init System (systemd on most modern distros):
- Executes default.target (equivalent to runlevel)
- Starts system services in parallel
- Manages dependencies between services
5. User Space:
- Display Manager (GUI login) or Getty (text login)
- User session starts
- Shell or desktop environment loads
Boot Parameters Location:

# GRUB configuration
/etc/default/grub
/boot/grub/grub.cfg

# Kernel command line
cat /proc/cmdline

# Boot messages
dmesg
journalctl -b

Recovery Boot Options:
- Single-user mode: systemctl rescue or init 1
- Emergency mode: systemctl emergency
- Kernel parameters: single, emergency, init=/bin/bash
2. Explain the difference between hard link and soft link (symlink).
Answer:
| Aspect | Hard Link | Soft Link (Symbolic Link) |
|---|---|---|
| Inode | Same inode number | Different inode number |
| Cross filesystem | No | Yes |
| Directory linking | No (except special cases) | Yes |
| Original file deleted | Still accessible | Broken (dangling) |
| Size | Same as original (no extra space) | Small (path stored) |
| Creation | ln target linkname | ln -s target linkname |
Inode Explanation:
# Each file has an inode containing metadata
# Hard links share the same inode (same file)
# Soft links are separate files pointing to a path

# View inode numbers
ls -li
# 12345678 -rw-r--r-- 2 user group 1024 Jan 1 file.txt
# 12345678 -rw-r--r-- 2 user group 1024 Jan 1 hardlink.txt
# 87654321 lrwxrwxrwx 1 user group 8 Jan 1 softlink.txt -> file.txt

Practical Examples:
# Create original file
echo "content" > original.txt

# Create hard link
ln original.txt hard.txt
# Both point to same data (same inode)

# Create soft link
ln -s original.txt soft.txt
# soft.txt contains path "original.txt"

# Check link counts
ls -l
# -rw-r--r-- 2 user group 8 Jan 1 10:00 original.txt
# -rw-r--r-- 2 user group 8 Jan 1 10:00 hard.txt
# lrwxrwxrwx 1 user group 11 Jan 1 10:00 soft.txt -> original.txt

# Delete original
rm original.txt
# hard.txt still works (data still exists)
# soft.txt is broken (dangling symlink)

Use Cases:
- Hard links: Version control, backup deduplication
- Soft links: Shortcuts, library versioning (libc.so.6 -> libc-2.31.so)
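The hard-link versus symlink behavior is easy to verify end to end in a scratch directory; a minimal sketch, assuming GNU coreutils `stat -c`:

```bash
#!/usr/bin/env bash
# Hard links share an inode; symlinks store a path and can dangle.
set -e
tmp=$(mktemp -d)
cd "$tmp"
echo "content" > original.txt
ln original.txt hard.txt      # hard link: same inode
ln -s original.txt soft.txt   # symlink: separate inode, stores a path

stat -c '%i' original.txt hard.txt   # identical inode numbers
cat soft.txt                         # follows the path -> content

rm original.txt
cat hard.txt                         # still prints content
cat soft.txt 2>/dev/null || echo "dangling symlink"
cd / && rm -rf "$tmp"
```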
3. What are file permissions in Linux? Explain SUID, SGID, and Sticky Bit.
Answer:
Standard Permissions (ugo/rwx):
- Read (r=4): View file contents, list directory
- Write (w=2): Modify file, create/delete files in directory
- Execute (x=1): Run file as program, enter directory
Special Permissions:
SUID (Set User ID) - 4000 (4 in first octal):
# When executed, runs with owner's privileges (not user's)
# Typically used for password changing, ping, etc.

ls -l /usr/bin/passwd
# -rwsr-xr-x 1 root root 68208 May 28 2020 /usr/bin/passwd
#    ^ 's' indicates SUID

# Set SUID
chmod u+s file
chmod 4755 file # 4 = SUID, 755 = rwxr-xr-x

# Security risk: SUID on shells or editors can lead to privilege escalation

SGID (Set Group ID) - 2000 (2 in first octal):
# On files: Runs with group owner's privileges
# On directories: New files inherit directory's group

# Example on directory
mkdir shared
chgrp developers shared
chmod g+s shared
# Files created in shared/ will belong to 'developers' group

# Set SGID
chmod g+s file
chmod 2755 file # 2 = SGID, 755 = rwxr-xr-x

# Practical use: Shared team directories

Sticky Bit - 1000 (1 in first octal):
# On directories: Only file owner can delete/modify files
# Classic example: /tmp directory

ls -ld /tmp
# drwxrwxrwt 20 root root 4096 Jan 1 10:00 /tmp
#          ^ 't' indicates sticky bit

# Set sticky bit
chmod +t directory
chmod 1777 directory # 1 = sticky, 777 = rwxrwxrwx

# Without sticky bit, any user could delete others' temp files

Combined Special Permissions:

# All three special bits (rare)
chmod 7777 file # SUID(4)+SGID(2)+sticky(1) = 7, + 777 = 7777

# Check permissions
stat -c "%a %n" file # Show numeric permissions

4. Explain the difference between fork() and exec() system calls.
Answer:
| Aspect | fork() | exec() |
|---|---|---|
| Purpose | Creates child process | Replaces current process |
| PID | New PID for child | Same PID |
| Memory | Copy of parent (COW) | New program loaded |
| Return | Twice (0 in child, PID in parent) | Never returns on success |
| Use | Process creation | Program execution |
Fork() Details:
#include <unistd.h>
pid_t fork(void);

// Returns:
// - Child process: 0
// - Parent process: child's PID
// - Error: -1

Fork Example:
pid_t pid = fork();
if (pid == 0) {
    // Child process
    printf("Child: PID=%d\n", getpid());
    execl("/bin/ls", "ls", "-l", NULL);
} else if (pid > 0) {
    // Parent process
    printf("Parent: Child PID=%d\n", pid);
    wait(NULL); // Wait for child
}

Exec Family Functions:
#include <unistd.h>

// Variants:
execl(path, arg0, arg1, ..., NULL);       // List arguments
execlp(file, arg0, arg1, ..., NULL);      // Uses PATH
execle(path, arg0, arg1, ..., NULL, env); // With environment
execv(path, argv);                        // Vector arguments
execvp(file, argv);                       // Uses PATH
execve(path, argv, env);                  // Full control

Common Pattern (Shell Operation):
# In shell, typing a command:
# 1. Shell calls fork() to create child
# 2. Child calls exec() to run command
# 3. Parent waits for child to complete

# Shell example:
# $ ls -l
# fork() → child process → exec("ls", "ls", "-l", NULL)

Copy-on-Write (COW):
- Modern Linux doesn’t actually copy entire memory on fork()
- Pages marked as read-only, shared between parent and child
- Copy only happens when either process writes to page
- Improves performance and reduces memory usage
5. Explain the difference between soft and hard limits in ulimit.
Answer:
Soft Limit:
- Current enforced limit
- Can be increased up to hard limit by user
- Default operating value
Hard Limit:
- Maximum ceiling for soft limit
- Can only be increased by root
- Set by system administrator
View Limits:
# View all limits
ulimit -a

# View specific limits
ulimit -n # open files
ulimit -u # processes
ulimit -s # stack size
ulimit -c # core file size
ulimit -m # memory size
ulimit -v # virtual memory

# Soft vs Hard
ulimit -Sn # soft open files
ulimit -Hn # hard open files
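One property worth demonstrating: a process can lower its own soft limit (and raise it again up to the hard ceiling), while raising the hard limit needs root. Running the change in a subshell keeps it contained; 512 is an arbitrary value assumed to be below your hard limit:

```bash
#!/usr/bin/env bash
# Lower the soft open-files limit in a subshell only.
(
  ulimit -Sn 512
  echo "soft inside subshell: $(ulimit -Sn)"   # 512
  echo "hard inside subshell: $(ulimit -Hn)"   # unchanged ceiling
)
echo "soft in parent: $(ulimit -Sn)"           # unchanged
```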
Setting Limits:

# Set soft limit (user can increase up to hard)
ulimit -Sn 2048

# Set hard limit (lowering is one-way for non-root; raising requires root)
ulimit -Hn 4096

# Remove limit
ulimit -Sn unlimited # soft (bounded by hard limit)
ulimit -Hn unlimited # hard (root only)

Configuration Files:

# System-wide limits
/etc/security/limits.conf

# Format:
# <domain> <type> <item> <value>
*           soft nofile 4096
*           hard nofile 65536
root        soft nofile 8192
@developers hard nproc  unlimited

# PAM configuration
/etc/pam.d/common-session
# session required pam_limits.so

Common Limit Types:
| Item | Description | Typical Values |
|---|---|---|
| nofile | Open file descriptors | 1024 (soft), 4096 (hard) |
| nproc | Number of processes | unlimited or 4096 |
| core | Core dump size | 0 (disabled) |
| data | Data segment size | unlimited |
| stack | Stack size | 8192 KB |
| memlock | Locked memory | 64 KB |
| rss | Resident set size | unlimited |
Check Current Process Limits:
# Check for running process
cat /proc/$(pidof process)/limits

# Check shell limits
cat /proc/$$/limits

6. What is the difference between static and dynamic linking?
Answer:
| Aspect | Static Linking | Dynamic Linking |
|---|---|---|
| Libraries | Copied into executable | Shared at runtime |
| File Size | Larger | Smaller |
| Memory | More per process | Shared across processes |
| Updates | Need relink | Replace library |
| Portability | Self-contained | Needs libraries present |
| Startup | Faster | Slower (library loading) |
Static Linking:
# Create static executable
gcc -static -o program program.c

# Check if statically linked
file program
# program: ELF 64-bit executable, statically linked

# ldd shows "not a dynamic executable"
ldd program
# not a dynamic executable

# Pros: Portable, no dependencies
# Cons: Larger size, can't share libraries

Dynamic Linking:
# Create dynamically linked executable (default)
gcc -o program program.c

# Check dynamic dependencies
ldd program
# linux-vdso.so.1 (0x00007ffe)
# libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
# /lib64/ld-linux-x86-64.so.2

# Shared library locations
/etc/ld.so.conf
/etc/ld.so.conf.d/
LD_LIBRARY_PATH environment variable

# Update library cache
ldconfig

Dynamic Linker (ld-linux.so):
# Interpreter path in ELF
readelf -l program | grep INTERP
# INTERP 0x000000 0x000000 0x000000 0x000019 0x000019 R 0x1

# Manual execution with custom library path
LD_LIBRARY_PATH=/custom/lib ./program

Memory Sharing Benefits:
# Multiple processes share same physical library pages
# Example: All bash processes share /lib/libc.so.6

# Check shared library memory
pmap $(pidof bash) | grep libc
# 7f1234567000 2048K r-x-- libc-2.31.so # Shared across processes

Use Cases:
- Static: Embedded systems, containers, recovery tools
- Dynamic: Most applications, shared hosting, regular desktop apps
7. Explain the difference between kill, pkill, and killall.
Answer:
| Command | Method | Target | Options |
|---|---|---|---|
| kill | PID number | Specific process by ID | Signal number/name |
| pkill | Pattern | Processes matching name/attributes | Full regex |
| killall | Name | Processes by exact name | Case-sensitive |
Kill (by PID):
# Get PID first
ps aux | grep firefox
# user 12345 2.0 1.5 ... firefox

# Send signals by PID
kill 12345          # SIGTERM (15) - graceful
kill -9 12345       # SIGKILL (9) - force
kill -15 12345      # SIGTERM
kill -SIGTERM 12345 # Same as above

# Signal numbers
kill -l
# 1) SIGHUP 2) SIGINT 3) SIGQUIT 6) SIGABRT
# 9) SIGKILL 15) SIGTERM 18) SIGCONT 19) SIGSTOP

Pkill (by Pattern):
# Kill by name pattern
pkill firefox # Matches firefox, firefox-bin, etc.
pkill -9 firefox # Force kill
pkill -f "python script.py" # Match full command line

# List matching processes without killing (pgrep shares pkill's matching)
pgrep firefox    # Show PIDs only
pgrep -l firefox # Show PID and process name

# Options
pkill -u user # Kill all user's processes
pkill -t pts/2 # Kill processes on terminal
pkill -HUP nginx # Reload nginx config

Killall (by Exact Name):
# Kill by exact process name
killall firefox # Kills ONLY "firefox", not "firefox-bin"

# Case-sensitive by default
killall -I FIREFOX # Case-insensitive match

# Interactive mode
killall -i firefox # Confirm before killing

# Older than time
killall -o 1h firefox # Kill processes older than 1 hour
killall -y 30m firefox # Kill processes younger than 30 minutes

# Wait for process to die
killall -w firefox # Wait until all killed

Safe Killing Practices:
# 1. Try graceful termination first
kill -15 PID

# 2. Wait a few seconds
sleep 5

# 3. Check if still running
kill -0 PID # Returns 0 if running

# 4. Force kill if necessary
kill -9 PID
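The four steps can be wrapped in a small helper (the name `graceful_kill` is ours, not a standard tool); the demo at the end uses a disposable `sleep` child:

```bash
#!/usr/bin/env bash
# Escalate SIGTERM -> SIGKILL only if the process survives the grace period.
graceful_kill() {
  local pid=$1 tries=${2:-5}
  kill -15 "$pid" 2>/dev/null || return 0   # SIGTERM: allow cleanup
  while (( tries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 0  # kill -0: existence check only
    sleep 1
  done
  kill -9 "$pid" 2>/dev/null                # SIGKILL: no cleanup possible
}

# Demo: sleep exits promptly on SIGTERM.
sleep 300 & pid=$!
kill -15 "$pid"
st=0; wait "$pid" 2>/dev/null || st=$?
echo "exit status: $st"   # 143 = 128 + SIGTERM(15)
```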
# Signal meanings:
# SIGTERM (15): Process can clean up (close files, etc.)
# SIGKILL (9): Kernel terminates immediately (no cleanup)
# SIGHUP (1): Reload configuration (daemons)
# SIGINT (2): Interrupt (Ctrl+C)

8. Explain the difference between su and sudo.
Answer:
| Aspect | su | sudo |
|---|---|---|
| Authentication | Target user’s password | User’s own password |
| Command logging | No | Yes |
| Fine-grained control | No | Yes |
| Environment | New shell (usually) | Current environment preserved |
| Audit trail | Minimal | Complete |
su (Switch User):
# Switch to root (requires root password)
su
su -

# Switch to another user
su - username

# Run single command as another user
su -c "command" username

# Without hyphen: keeps current environment
su username

# With hyphen: new login shell (clean environment)
su - username

# Security issue: Users need target's password
# Auditing: Hard to track who did what

sudo (Superuser DO):
# Run command as root (user's own password)
sudo command

# Run as specific user
sudo -u username command

# Open root shell
sudo -i
sudo -s

# Run previous command with sudo
sudo !!

# List user's sudo privileges
sudo -l

# Keep credentials cached
sudo -v # Update timestamp
sudo -k # Invalidate timestamp

sudoers Configuration (/etc/sudoers):
# User specifications
username ALL=(ALL:ALL) ALL
# user host=(run-as:group) commands

# Examples:
# Allow user to run any command
john ALL=(ALL) ALL

# Allow without password
jane ALL=(ALL) NOPASSWD: ALL

# Allow specific commands
webadmin ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/systemctl status nginx

# Allow as specific user
backup ALL=(backup) /usr/bin/rsync

# Group permissions
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL

# Command aliases
Cmnd_Alias WEB_CMDS = /usr/bin/systemctl restart nginx, /usr/bin/systemctl reload nginx
Cmnd_Alias NET_CMDS = /sbin/ifconfig, /bin/ping

# Defaults
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults logfile="/var/log/sudo.log"

Security Differences:
# su logs:
/var/log/auth.log: "su: session opened for user root by user"
# No command details

# sudo logs:
/var/log/auth.log: "sudo: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/ls"
# Full command details

# Better auditing with sudo
# Can restrict to specific commands
# No need to share root password

Best Practices:
- Use sudo instead of su for better auditing
- Disable root login: PermitRootLogin no in /etc/ssh/sshd_config
- Use sudo -i instead of su -
- Never share passwords; use sudo with proper configuration
9. Explain the difference between cron and anacron.
Answer:
| Aspect | Cron | Anacron |
|---|---|---|
| Assumes | System runs 24/7 | System may be off |
| Precision | Minute-level | Day-level |
| Missing jobs | Skipped | Run at next opportunity |
| Root required | No (user crontabs) | Yes |
| Random delay | No | Yes (avoid stampede) |
Cron Syntax:
# Minute Hour Day Month DayOfWeek Command
# 0-59   0-23 1-31 1-12  0-7

# Examples:
# Run every day at 2:30 AM
30 2 * * * /backup/script.sh

# Run every Monday at 5 AM
0 5 * * 1 /scripts/weekly.sh

# Run every hour
0 * * * * /scripts/hourly.sh

# Run every 15 minutes
*/15 * * * * /scripts/check.sh

# Special strings:
@reboot  # Run at startup
@daily   # Run once per day
@hourly  # Run once per hour
@weekly  # Run once per week
@monthly # Run once per month
@yearly  # Run once per year

Cron Files Locations:
# System-wide crontab
/etc/crontab

# User crontabs
/var/spool/cron/crontabs/

# Cron directories
/etc/cron.d/
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/

Anacron Configuration (/etc/anacrontab):
# Format:
# period    delay     job-identifier command
# (in days) (in minutes)

# Example:
1  5  cron.daily   run-parts /etc/cron.daily
7  10 cron.weekly  run-parts /etc/cron.weekly
30 15 cron.monthly run-parts /etc/cron.monthly

# Anacron timestamp files
/var/spool/anacron/

How Anacron Works:
# 1. At boot time, check when each job last ran
# 2. If period has elapsed, schedule job
# 3. Add random delay to prevent all jobs running simultaneously
# 4. Update timestamp after job completes

# Run manually
anacron -f # Force run
anacron -d # Debug mode
anacron -n # Run now (ignore delay)

Practical Example - Laptop:
# Problem: Laptop turned off at 2 AM# Solution: Use anacron for daily tasks
# Run backup within 30 minutes of next boot1 30 daily-backup /usr/local/bin/backup.sh
# Cron for time-sensitive tasks (still needed)# Every 5 minutes check*/5 * * * * /scripts/check.shCombined Usage:
# Modern systems run cron, which runs anacron# Run anacron from cron30 7 * * * root test -x /usr/sbin/anacron && /usr/sbin/anacron
# Best practice:# - Use cron for precise scheduling# - Use anacron for daily/weekly maintenance# - Use systemd timers for modern systems10. Explain the difference between top and htop.
Answer:
| Feature | top | htop |
|---|---|---|
| Interface | Text-based | Colorful, mouse support |
| Navigation | Key bindings | Arrow keys, mouse |
| Scrolling | No horizontal | Yes (horizontal/vertical) |
| Process tree | Limited | Built-in tree view |
| Kill process | Type PID then ‘k’ | F9 key, select signal |
| Setup saving | Manual (W writes ~/.toprc) | Yes (F2 setup, saved to htoprc) |
| Resource graphs | Basic | Colorful meters |
| Platform | Everywhere | Additional install |
Top Key Commands:
# Interactive keys:
h, ?   # Help
q      # Quit
k      # Kill process (enter PID)
r      # Renice process
s      # Change delay (seconds)
t      # Toggle CPU/memory summary
m      # Toggle memory summary
1      # Show each CPU core
c      # Show full command line
u      # Filter by user
P      # Sort by CPU usage
M      # Sort by memory usage
T      # Sort by time
R      # Reverse sort
W      # Write config to ~/.toprc

Top Command Line Options:
top -d 1          # Update every 1 second
top -p 1234,5678  # Monitor specific PIDs
top -u username   # Monitor specific user
top -b -n 1       # Batch mode (1 iteration)
top -H            # Show threads

Htop Features:
# Navigation:
Arrow keys   # Move selection
PgUp/PgDn    # Scroll
F1 / h       # Help
F2 / S       # Setup
F3 / /       # Search
F4 / \       # Filter
F5 / t       # Tree view
F6 / >       # Sort by
F7 / ]       # Increase priority (nice)
F8 / [       # Decrease priority
F9 / k       # Kill process
F10 / q      # Quit

# Display:
# CPU cores with different colors for user/system/IO
# Memory with color-coded usage
# Process list with tree view
# Setup saves to ~/.config/htop/htoprc

Installation:
# Debian/Ubuntu
sudo apt install htop

# RHEL/CentOS
sudo yum install epel-release
sudo yum install htop

# Arch
sudo pacman -S htop

When to Use Which:
- top: Always available, quick checks, scripts
- htop: Interactive monitoring, development, troubleshooting
11. Explain the difference between source and ./ when running scripts.
Answer:
| Aspect | source script.sh | ./script.sh |
|---|---|---|
| Shell | Current shell | New subshell |
| Environment | Modifies current environment | Isolated environment |
| Exit | Exits current shell | Exits subshell only |
| Permission | No execute needed | Execute permission required |
| Variables | Set in current shell | Lost after script ends |
| Use case | Configuration files | Regular scripts |
Source (dot operator):
# Two equivalent forms
source script.sh
. script.sh

# Example script (setenv.sh):
export PATH=$PATH:/custom/bin
MYVAR="hello"

# Run with source
source setenv.sh
echo $MYVAR   # Outputs: hello
echo $PATH    # Contains /custom/bin
# Variables persist in current shell

# Useful for:
# - Setting environment variables
# - Loading shell functions
# - Activating virtual environments
source venv/bin/activate

Subshell Execution:
# Example script (setenv.sh):
export PATH=$PATH:/custom/bin
MYVAR="hello"
exit 1

# Run as executable
chmod +x setenv.sh
./setenv.sh
echo $MYVAR   # Empty (variable lost)
echo $PATH    # Original PATH
# Exit 1 only affects the subshell, not the parent

# New shell process created:
# Parent shell → fork() → child shell → exec() → script
# Script changes child's environment only
# Child exits, parent unchanged

Permission Differences:
# Source works without execute permission
chmod -x script.sh
source script.sh   # Works
./script.sh        # Permission denied

# Execute permission required for direct execution
chmod +x script.sh
./script.sh        # Works

Shebang Effect:
# Script with #!/bin/bash
./script.sh        # Uses /bin/bash interpreter

# Source uses current shell regardless of shebang
source script.sh   # Uses current shell (bash/zsh/dash)

Practical Examples:
# Configuration file (.bashrc, .profile)
source ~/.bashrc   # Reload configuration

# Virtual environment
source venv/bin/activate

# Running script in background
./long_running.sh &   # Subshell, can kill independently
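The variable behavior described above can be verified with a short self-contained sketch (the temp-file setup and `setenv.sh` name are illustrative):

```shell
#!/bin/bash
# Create a throwaway script that only sets a variable.
tmpdir=$(mktemp -d)
printf 'MYVAR="hello"\n' > "$tmpdir/setenv.sh"
chmod +x "$tmpdir/setenv.sh"

# Sourcing runs it in the current shell: MYVAR persists afterwards.
source "$tmpdir/setenv.sh"
echo "after source: ${MYVAR:-unset}"      # after source: hello

# Direct execution forks a subshell: the assignment is lost on return.
unset MYVAR
"$tmpdir/setenv.sh"
echo "after direct run: ${MYVAR:-unset}"  # after direct run: unset

rm -rf "$tmpdir"
```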
# Modifying the current directory
# cd in a script affects the parent only with source
cat cd.sh
# cd /tmp
source cd.sh   # Changes current directory to /tmp
./cd.sh        # Changes subshell directory only

12. Explain the difference between $* and $@ in shell scripting.
Answer:
| Variable | Behavior | Quoted Behavior |
|---|---|---|
| $* | All arguments as single string | "$*" = one string joined by first IFS char |
| $@ | All arguments as separate words | "$@" = each argument quoted separately |
Unquoted Usage (same behavior):
#!/bin/bash
echo "Unquoted \$*:"
for arg in $*; do
  echo "  $arg"
done

echo "Unquoted \$@:"
for arg in $@; do
  echo "  $arg"
done

# Run: ./test.sh "hello world" foo bar
# Both output:
#   hello
#   world
#   foo
#   bar
# (Arguments split on spaces)

Quoted Usage (DIFFERENT):
#!/bin/bash
echo "Quoted \$*:"
for arg in "$*"; do
  echo "  $arg"
done

echo "Quoted \$@:"
for arg in "$@"; do
  echo "  $arg"
done

# Run: ./test.sh "hello world" foo bar
# Output:
# Quoted $*:
#   hello world foo bar   (single string, space-separated)
# Quoted $@:
#   hello world           (preserves quoted arguments)
#   foo
#   bar

IFS (Internal Field Separator) Effect:
# Script showing IFS behavior
#!/bin/bash
IFS=":"
set -- "a b" c d

echo "$*"   # Output: a b:c:d
echo "$@"   # Output: a b c d (separate arguments)

# The first character of IFS is used to join $*
# Default IFS: space, tab, newline

Practical Examples:
# Function to process arguments
process() {
  # Use "$@" to preserve argument boundaries
  for arg in "$@"; do
    echo "Processing: $arg"
  done
}

# Calling with spaces in an argument
process "file with spaces.txt" "another file.txt"
# Output preserves spaces

# Using "$*" for logging
log_message() {
  # Join all arguments with a space
  logger "[INFO] $*"
}
log_message "User" "logged in" "from" "192.168.1.1"
# Single log entry: "[INFO] User logged in from 192.168.1.1"

# Common patterns:
# "$@" - Preferred for argument forwarding
# "$*" - For creating a single string from arguments
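The boundary-preserving difference is easy to check at runtime (`count_args` is a helper name invented for this sketch):

```shell
#!/bin/bash
# count_args reports how many arguments it received.
count_args() { echo $#; }

set -- "hello world" foo   # two positional parameters, one containing a space

echo "\"\$@\" forwards $(count_args "$@") arguments"    # 2: boundaries kept
echo "\"\$*\" forwards $(count_args "$*") argument(s)"  # 1: joined into one word
```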
# Forwarding arguments
wrapper() {
  # Pass all arguments to another command
  /usr/bin/real-command "$@"
}

13. Explain the difference between grep, egrep, fgrep.
Answer:
| Command | Regex Type | Performance | Use Case |
|---|---|---|---|
| grep | Basic (BRE) | Good | Basic patterns |
| egrep / grep -E | Extended (ERE) | Good | Advanced patterns |
| fgrep / grep -F | Fixed strings (no regex) | Fastest | Literal text search |
Note: egrep and fgrep are deprecated aliases in modern GNU grep; prefer grep -E and grep -F.
Grep (Basic Regex - BRE):
# Special characters need escaping
grep '\(foo\|bar\)' file.txt   # OR (needs \)
grep 'foo\+' file.txt          # One or more (needs \)
grep 'foo\?' file.txt          # Zero or one (needs \)
grep 'foo\{2,5\}' file.txt     # Range (needs \)
grep '^foo.*bar$' file.txt     # Anchors and .* work normally

# Common usage
grep "error" logfile
grep -i "warning" logfile
grep -v "debug" logfile
grep -r "pattern" /etc/

Egrep (Extended Regex - ERE):
# Special characters work without escaping
egrep '(foo|bar)' file.txt   # OR (no escaping)
egrep 'foo+' file.txt        # One or more
egrep 'foo?' file.txt        # Zero or one
egrep 'foo{2,5}' file.txt    # Range
egrep 'foo{2,}' file.txt     # Two or more
egrep 'foo{,5}' file.txt     # Up to five

# Advanced patterns
egrep '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' email.txt
egrep '\b(https?|ftp)://\S+' urls.txt

# Equivalent
grep -E "pattern" file.txt   # Same as egrep

Fgrep (Fixed Strings - No Regex):
# Treats everything as literal text
fgrep '*.txt' file.txt      # Finds literal "*.txt"
fgrep 'a.b' file.txt        # Finds "a.b", not "aXb"
fgrep '(foobar)' file.txt   # Finds literal "(foobar)"

# Useful for:
# - Searching for special characters
# - Large files with many patterns
# - When you know the exact string

# Equivalent
grep -F "pattern" file.txt   # Same as fgrep

# Performance example:
time grep -F -f patterns.txt largefile.txt
# Faster than the regex version

Performance Comparison:
# Create test file
seq 1 1000000 > numbers.txt

# fgrep is fastest
time fgrep "500000" numbers.txt   # ~0.1s

# grep (basic) slower
time grep "500000" numbers.txt    # ~0.15s

# egrep (extended) similar to grep
time egrep "500000" numbers.txt   # ~0.15s

# For literal strings, always use -F

Use Cases Summary:
# Simple text search → grep or fgrep
grep "error" log.txt
fgrep "exact string" file.txt

# Pattern with alternation → egrep
egrep "error|warning|critical" log.txt

# Email/URL extraction → egrep
egrep -o '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt
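The three dialects side by side on a throwaway file (the file contents are invented for the example):

```shell
#!/bin/sh
f=$(mktemp)
printf 'version 3.50\nerror: disk full\nwarning: low memory\n' > "$f"

grep -F '3.50' "$f"            # fixed string: the "." is literal, not a wildcard
grep -E 'error|warning' "$f"   # ERE alternation, no backslashes needed
grep 'error\|warning' "$f"     # the same match written as BRE

rm -f "$f"
```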
# Searching for regex metacharacters → fgrep
fgrep ".*" file.txt   # Finds literal ".*"

14. Explain the difference between & and && in Linux.
Answer:
| Symbol | Meaning | Behavior |
|---|---|---|
| & | Background | Runs command in background |
| && | AND | Runs second command only if first succeeds |
Background Operator (&):
# Run command in background
long_running_command &

# Multiple background commands
command1 & command2 & command3 &

# Get background job PID
command &
echo $!   # Prints PID of background process

# Background with output redirection
command > output.log 2>&1 &

# Check background jobs
jobs
# [1]  Running  command1 &
# [2]- Running  command2 &
# [3]+ Running  command3 &

# Bring to foreground
fg %1   # Bring job 1 to foreground
fg      # Bring most recent job

# Send to background again
Ctrl+Z   # Suspend
bg       # Resume in background

# Background in script
{
  sleep 10
  echo "Done"
} &

AND Operator (&&):
# Run command2 only if command1 succeeds (exit 0)
command1 && command2

# Chain multiple commands
make && make install && make clean

# Example with cd
cd /tmp && rm -rf temp_folder
# rm only runs if cd succeeded

# Combined with OR (||)
command1 && echo "Success" || echo "Failed"

# Practical backup example
mkdir -p backup && cp -r important/ backup/ && echo "Backup complete"

Comparison Table:
| Scenario | & | && |
|---|---|---|
| Succeeds | Runs in background | Runs next command |
| Fails | Runs in background | Stops, no next command |
| Waiting | No (returns immediately) | Yes (sequential) |
| Exit code | Doesn’t affect shell | Affects next command |
Combined Usage:
# Run a whole list in the background
command1 && command2 &
# Note: & applies to the entire list, so command1 also runs in the
# background; command2 still runs only if command1 succeeds

# Grouping with subshell
(command1 && command2) &   # Both in background

# Complex example
# Download file in background, then unzip if the download succeeded
curl -O https://example.com/file.zip && unzip file.zip &

Job Control Signals:
# Send SIGCONT to background job
kill -CONT %1

# Send SIGTERM to background job
kill %1

# Disown background job (remove from shell's job table)
disown %1
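Both operators in one runnable sketch:

```shell
#!/bin/bash
# && short-circuits: the right side runs only on success (exit 0).
false && echo "never printed"
true && echo "printed on success"

# & backgrounds the job and returns immediately; $! holds its PID.
sleep 1 &
bgpid=$!
echo "started background job $bgpid"
wait "$bgpid"   # block until the background job exits
echo "background job finished with status $?"
```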
# Run immune to hangups
nohup command &

15. Explain the difference between >> and > redirection.
Answer:
| Operator | Behavior | File Content |
|---|---|---|
| > | Overwrite | Replaces existing content |
| >> | Append | Adds to end of existing content |
Overwrite Operator (>):
# Creates new file or overwrites existing
echo "First line" > file.txt
echo "Second line" > file.txt
# Result: file.txt contains only "Second line"

# Danger: can accidentally delete file contents
> important.txt   # Empties the file!

# Redirect stdout only
ls > files.txt

# Redirect stderr to file
ls non-existent 2> error.log

# Redirect both stdout and stderr
command &> output.txt
command > output.txt 2>&1

Append Operator (>>):
# Adds to end of file
echo "Line 1" >> file.txt
echo "Line 2" >> file.txt
echo "Line 3" >> file.txt
# Result: file.txt contains all three lines

# Append stderr
command 2>> error.log

# Append both
command >> output.txt 2>&1

# Useful for logging
echo "$(date): Backup started" >> backup.log
rsync -av /source/ /dest/ >> backup.log 2>&1
echo "$(date): Backup completed" >> backup.log

Practical Examples:
# Log rotation with overwrite (start fresh)
> access.log
echo "$(date): New log session" >> access.log

# Collecting outputs from multiple commands
echo "System Info:" > system_info.txt
uname -a >> system_info.txt
df -h >> system_info.txt
free -h >> system_info.txt

# Configuration management
# Safe way to update config (append only if the line is missing)
grep -q "alias ll='ls -la'" ~/.bashrc || echo "alias ll='ls -la'" >> ~/.bashrc
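The overwrite/append difference, demonstrated on a temp file:

```shell
#!/bin/sh
f=$(mktemp)

echo "one" > "$f"      # file contains: one
echo "two" > "$f"      # overwritten, file contains: two
echo "three" >> "$f"   # appended, file contains: two, three

cat "$f"
echo "lines: $(wc -l < "$f")"

rm -f "$f"
```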
# Clearing log files without deleting
> /var/log/syslog   # Clear contents while keeping the file

noclobber Option (Prevent Accidental Overwrite):
# Enable protection
set -o noclobber

# Now this fails if the file exists
echo "test" > existing.txt
# -bash: existing.txt: cannot overwrite existing file

# Force overwrite anyway
echo "test" >| existing.txt

# Append still works
echo "test" >> existing.txt

# Disable protection
set +o noclobber

Here Document with Redirection:
# Overwrite with multi-line text
cat > config.txt << EOF
line 1
line 2
line 3
EOF

# Append multi-line text
cat >> config.txt << EOF
line 4
line 5
EOF

16. Explain the difference between kill, pkill, and killall.
Answer:
| Command | Method | Target | Options |
|---|---|---|---|
| kill | PID number | Specific process by ID | Signal number/name |
| pkill | Pattern | Processes matching name/attributes | Full regex |
| killall | Name | Processes by exact name | Case-sensitive |
Kill (by PID):
# Get PID first
ps aux | grep firefox
# user  12345  2.0  1.5 ... firefox

# Send signals by PID
kill 12345            # SIGTERM (15) - graceful
kill -9 12345         # SIGKILL (9) - force
kill -15 12345        # SIGTERM
kill -SIGTERM 12345   # Same as above

# Signal numbers
kill -l
#  1) SIGHUP   2) SIGINT   3) SIGQUIT   6) SIGABRT
#  9) SIGKILL 15) SIGTERM 18) SIGCONT 19) SIGSTOP

Pkill (by Pattern):
# Kill by name pattern
pkill firefox                 # Matches firefox, firefox-bin, etc.
pkill -9 firefox              # Force kill
pkill -f "python script.py"   # Match full command line

# List matching processes without killing (pgrep uses pkill's matching)
pgrep -l firefox   # Show PIDs with process names
pgrep firefox      # Show PIDs only

# Options
pkill -u user      # Kill all user's processes
pkill -t pts/2     # Kill processes on a terminal
pkill -HUP nginx   # Reload nginx config

# Oldest/newest processes
pkill -n firefox   # Kill newest matching process only
pkill -o firefox   # Kill oldest matching process only

Killall (by Exact Name):
# Kill by exact process name
killall firefox   # Kills ONLY "firefox", not "firefox-bin"

# Case-sensitive by default; -I ignores case
killall -I FIREFOX   # Case-insensitive match

# Interactive mode
killall -i firefox   # Confirm before killing

# Older/younger than a given time
killall -o 1h firefox    # Kill processes older than 1 hour
killall -y 30m firefox   # Kill processes younger than 30 minutes

# Wait for processes to die
killall -w firefox   # Wait until all are killed

# Verbose output
killall -v firefox   # Show what's happening

Safe Killing Practices:
# 1. Try graceful termination first
kill -15 PID

# 2. Wait a few seconds
sleep 5

# 3. Check if still running
kill -0 PID   # Returns 0 if running

# 4. Force kill if necessary
kill -9 PID
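The four steps above wrapped in a function (`safe_kill` is a name invented for this sketch, not a standard tool):

```shell
#!/bin/bash
# Try SIGTERM first, poll briefly, then fall back to SIGKILL.
safe_kill() {
  pid=$1
  kill -15 "$pid" 2>/dev/null || return 0    # already gone
  for _ in 1 2 3; do
    kill -0 "$pid" 2>/dev/null || return 0   # exited gracefully
    sleep 1
  done
  kill -9 "$pid" 2>/dev/null                 # force as a last resort
}

sleep 60 &        # demo target process
safe_kill $!
wait $! 2>/dev/null   # reap the child (kill -0 treats an unreaped zombie as alive)
```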
# Signal meanings:
# SIGTERM (15): Process can clean up (close files, etc.)
# SIGKILL (9):  Kernel terminates immediately (no cleanup)
# SIGHUP (1):   Reload configuration (daemons)
# SIGINT (2):   Interrupt (Ctrl+C)
# SIGSTOP (19): Pause process
# SIGCONT (18): Resume paused process

17. Explain the difference between su and sudo.
Answer:
| Aspect | su | sudo |
|---|---|---|
| Authentication | Target user’s password | User’s own password |
| Command logging | No | Yes |
| Fine-grained control | No | Yes |
| Environment | New shell (usually) | Current environment preserved |
| Audit trail | Minimal | Complete |
su (Switch User):
# Switch to root (requires root password)
su
su -

# Switch to another user
su - username

# Run single command as another user
su -c "command" username

# Without hyphen: keeps current environment
su username

# With hyphen: new login shell (clean environment)
su - username

# Security issue: users need the target account's password
# Auditing: hard to track who did what

sudo (Superuser DO):
# Run command as root (user's own password)
sudo command

# Run as specific user
sudo -u username command

# Open root shell
sudo -i
sudo -s

# Run previous command with sudo
sudo !!

# List user's sudo privileges
sudo -l

# Keep credentials cached
sudo -v   # Update timestamp
sudo -k   # Invalidate timestamp

# Run command with preserved environment
sudo -E command

sudoers Configuration (/etc/sudoers):
# User specifications
username ALL=(ALL:ALL) ALL
# user host=(run-as:group) commands

# Allow user to run any command
john ALL=(ALL) ALL

# Allow without password
jane ALL=(ALL) NOPASSWD: ALL

# Allow specific commands
webadmin ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/systemctl status nginx

# Allow as specific user
backup ALL=(backup) /usr/bin/rsync

# Group permissions
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL

# Command aliases
Cmnd_Alias WEB_CMDS = /usr/bin/systemctl restart nginx, /usr/bin/systemctl reload nginx
Cmnd_Alias NET_CMDS = /sbin/ifconfig, /bin/ping

# Defaults
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults logfile="/var/log/sudo.log"

Security Differences:
# su logs:
/var/log/auth.log: "su: session opened for user root by user"
# No command details

# sudo logs:
/var/log/auth.log: "sudo: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/ls"
# Full command details

# Better auditing with sudo
# Can restrict to specific commands
# No need to share root password

Best Practices:
- Use sudo instead of su for better auditing
- Disable root login: PermitRootLogin no in /etc/ssh/sshd_config
- Use sudo -i instead of su -
- Never share passwords; use sudo with proper configuration
18. Explain the difference between soft and hard limits in ulimit.
Answer:
Soft Limit:
- Current enforced limit
- Can be increased up to hard limit by user
- Default operating value
Hard Limit:
- Maximum ceiling for soft limit
- Can only be increased by root
- Set by system administrator
View Limits:
# View all limits
ulimit -a

# View specific limits
ulimit -n   # open files
ulimit -u   # processes
ulimit -s   # stack size
ulimit -c   # core file size
ulimit -m   # memory size
ulimit -v   # virtual memory

# Soft vs hard
ulimit -Sn   # soft open-files limit
ulimit -Hn   # hard open-files limit

Setting Limits:
# Set soft limit (user can raise it again, up to the hard limit)
ulimit -Sn 2048

# Set hard limit (non-root can only lower it; raising requires root)
ulimit -Hn 4096

# Set soft and hard separately (two invocations)
ulimit -Sn 2048
ulimit -Hn 4096

# Remove limit
ulimit -Sn unlimited   # soft
ulimit -Hn unlimited   # hard (root only)

Configuration Files:
# System-wide limits
/etc/security/limits.conf

# Format:
# <domain> <type> <item> <value>
*           soft nofile 4096
*           hard nofile 65536
root        soft nofile 8192
@developers hard nproc  unlimited

# PAM configuration
/etc/pam.d/common-session
# session required pam_limits.so

# systemd limits
# In the service unit file:
[Service]
LimitNOFILE=4096
LimitNPROC=10000

Common Limit Types:
| Item | Description | Typical Values |
|---|---|---|
| nofile | Open file descriptors | 1024 (soft), 4096 (hard) |
| nproc | Number of processes | unlimited or 4096 |
| core | Core dump size | 0 (disabled) |
| data | Data segment size | unlimited |
| stack | Stack size | 8192 KB |
| memlock | Locked memory | 64 KB |
| rss | Resident set size | unlimited |
| cpu | CPU time (minutes) | unlimited |
Check Current Process Limits:
# Check limits of a running process
cat /proc/$(pidof process)/limits

# Check shell limits
cat /proc/$$/limits
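Soft-limit changes are per-process, which a subshell makes easy to see (256 is an arbitrary value below typical defaults):

```shell
#!/bin/bash
echo "parent soft limit: $(ulimit -Sn)"

(
  ulimit -Sn 256   # lowers the soft limit in this subshell only
  echo "child soft limit:  $(ulimit -Sn)"
)

echo "parent unchanged:   $(ulimit -Sn)"   # parent keeps its original limit
```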
# Monitor limit usage
lsof -p $$ | wc -l   # Count open files

19. Explain the difference between export and regular variable assignment.
Answer:
| Aspect | Regular Variable | Exported Variable |
|---|---|---|
| Scope | Current shell only | Current shell and child processes |
| Subshell | Not visible | Visible |
| Scripts | Not accessible | Accessible |
| Permanence | Temporary | Temporary (unless in profile) |
Regular (Shell) Variables:
# Assignment (no spaces around =)
MYVAR="hello"

# Available in current shell
echo $MYVAR   # Output: hello

# NOT available in child process
bash -c 'echo $MYVAR'   # Output: (empty)

# NOT available in script
echo 'echo $MYVAR' > test.sh
chmod +x test.sh
./test.sh   # Output: (empty)

Exported (Environment) Variables:
# Export a variable
export MYVAR="hello"
# OR
MYVAR="hello"
export MYVAR

# Available in current shell
echo $MYVAR   # Output: hello

# Available in child process
bash -c 'echo $MYVAR'   # Output: hello

# Available in script
./test.sh   # Output: hello
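A direct check that only exported variables cross the fork/exec boundary (variable names are invented for the demo):

```shell
#!/bin/bash
LOCAL_ONLY="shell"    # plain shell variable
export SHARED="env"   # exported into the environment

# Single quotes stop the parent from expanding; the child does the lookup.
bash -c 'echo "child sees: LOCAL_ONLY=[$LOCAL_ONLY] SHARED=[$SHARED]"'
# child sees: LOCAL_ONLY=[] SHARED=[env]
```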
# Export multiple variables
export VAR1=val1 VAR2=val2

Viewing Environment:
# Show all environment variables
env
printenv

# Show specific variable
printenv PATH
echo $PATH

# Show all variables (including shell variables)
set

Removing Export:
# Remove variable from environment
export -n MYVAR
# Now MYVAR is a shell variable only

# Unset completely
unset MYVAR

Common Environment Variables:
PATH              # Command search path
HOME              # User's home directory
USER              # Current username
SHELL             # Current shell
TERM              # Terminal type
LANG              # Language/locale
PWD               # Current working directory
OLDPWD            # Previous working directory
EDITOR            # Default editor
DISPLAY           # X11 display
LD_LIBRARY_PATH   # Library search path

Preserving Environment with sudo:
# Reset environment (default)
sudo command

# Preserve environment
sudo -E command
sudo --preserve-env command

# Preserve specific variables
sudo --preserve-env=HOME,PATH command

20. Explain the difference between ps aux and ps -ef.
Answer:
Both show process information but with different syntax and defaults.
| Aspect | ps aux | ps -ef |
|---|---|---|
| Origin | BSD syntax | UNIX/System V syntax |
| Dash | No dash needed | Requires dash |
| Columns | USER, PID, %CPU, %MEM, VSZ, RSS, TTY, STAT, START, TIME, COMMAND | UID, PID, PPID, C, STIME, TTY, TIME, CMD |
| CPU/MEM | Shows percentages | No percentages |
| Parent PID | Not shown | Shows PPID |
ps aux (BSD Style):
ps aux

# Output columns:
# USER  PID %CPU %MEM     VSZ    RSS TTY   STAT START TIME COMMAND
# root    1  0.0  0.1  168812  11208 ?     Ss   Jan01 0:02 /sbin/init
# root  123  0.0  0.0   12345   1234 ?     S    Jan01 0:00 [kthreadd]
# user 4567  0.5  2.3 1234567 234567 pts/0 S+   10:30 0:01 bash

# Options meaning:
# a = all users' processes
# u = user-oriented format (CPU, MEM, etc.)
# x = processes without a terminal

ps -ef (System V Style):
ps -ef

# Output columns:
# UID   PID PPID C STIME TTY   TIME     CMD
# root    1    0 0 Jan01 ?     00:00:02 /sbin/init
# root  123    2 0 Jan01 ?     00:00:00 [kthreadd]
# user 4567 4560 0 10:30 pts/0 00:00:00 bash

# Options meaning:
# -e = all processes
# -f = full format (PPID, STIME, etc.)

Common Variations:
# Show process tree
ps auxf
ps -ef --forest

# Show threads
ps aux -L
ps -eLf

# Custom output format
ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu

# Show specific user's processes
ps -u username
ps -U username

# Show processes by name
ps -C process_name
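`ps -o` output is handy in scripts because the columns are explicit; a trailing `=` suppresses the header line (`--sort` is specific to procps `ps` on Linux):

```shell
#!/bin/sh
# Print just this shell's PID and command name, no header line.
ps -o pid=,comm= -p $$

# Top three CPU consumers with custom columns (procps/Linux).
ps -eo pid,ppid,%cpu,comm --sort=-%cpu | head -4
```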
# Show all processes with full (unwrapped) command line
ps auxww
ps -efww

When to Use Which:
# Use ps aux when:
# - You want CPU/MEM percentages
# - You need to see resource usage
# - You're troubleshooting performance

# Use ps -ef when:
# - You need parent PID relationships
# - You're tracing process ancestry
# - You're on older UNIX systems

# Most Linux systems support both

21. Explain the difference between nice and renice.
Answer:
| Aspect | nice | renice |
|---|---|---|
| When | When starting a process | For running processes |
| Priority | Sets initial priority | Changes existing priority |
| Range | -20 (highest) to 19 (lowest) | Same range |
| Permission | Users can only increase (lower priority) | Users can only increase (except root) |
Nice Values Explained:
# Nice value range: -20 to +19
# -20 = Highest priority (most CPU time)
# +19 = Lowest priority (least CPU time)
# Default = 0

# View process nice values
ps -eo pid,ni,comm
top   # NI column

Nice (Starting Processes):
# Without an argument, nice applies a default adjustment of +10
nice ./long_running.sh

# Start with lower priority (nice 10)
nice -n 10 ./long_running.sh
nice -10 ./long_running.sh   # Same (historic syntax)

# Start with higher priority (-10) - requires root
sudo nice -n -10 ./important.sh

# Start with lowest priority (19)
nice -n 19 ./background_task.sh

Renice (Running Processes):
# Change priority of a running process by PID
renice -n 10 -p 1234

# Change all processes of a user
renice -n 5 -u username

# Change process group
renice -n 10 -g 5678

# Increase priority (requires root)
sudo renice -n -5 -p 1234

# Verify change
ps -o pid,ni,comm -p 1234

Practical Examples:
# Background backup (low priority)
nice -n 19 rsync -av /data/ /backup/ &

# Important real-time process (high priority)
sudo nice -n -15 ./realtime_app

# Find CPU-intensive processes
ps aux --sort=-%cpu | head

# Lower priority of a CPU hog
renice -n 15 -p $(pgrep -f "cpu_hog")

# Reset to default
renice -n 0 -p 1234

Permission Rules:
# Regular users:
# - Can only increase the nice value (lower priority)
# - Cannot decrease the nice value (raise priority)
nice -n 10 ./script.sh   # Allowed
nice -n -5 ./script.sh   # Error: Permission denied

# Root:
# - Can set any nice value
sudo nice -n -20 ./critical.sh
# Renice same rules applyrenice -n 15 -p 1234 # Allowed (increase)renice -n -5 -p 1234 # Error for regular users22. Explain the difference between cron and anacron.
Section titled “22. Explain the difference between cron and anacron.”Answer:
| Aspect | Cron | Anacron |
|---|---|---|
| Assumes | System runs 24/7 | System may be off |
| Precision | Minute-level | Day-level |
| Missing jobs | Skipped | Run at next opportunity |
| Root required | No (user crontabs) | Yes |
| Random delay | No | Yes (avoid stampede) |
Cron Syntax:
# Minute Hour Day Month DayOfWeek Command# 0-59 0-23 1-31 1-12 0-7
# Examples:# Run every day at 2:30 AM30 2 * * * /backup/script.sh
# Run every Monday at 5 AM0 5 * * 1 /scripts/weekly.sh
# Run every hour0 * * * * /scripts/hourly.sh
# Run every 15 minutes*/15 * * * * /scripts/check.sh
# Special strings:@reboot # Run at startup@daily # Run once per day@hourly # Run once per hour@weekly # Run once per week@monthly # Run once per month@yearly # Run once per yearCron Files Locations:
# System-wide crontab/etc/crontab
# User crontabs/var/spool/cron/crontabs/
# Cron directories/etc/cron.d//etc/cron.hourly//etc/cron.daily//etc/cron.weekly//etc/cron.monthly/Anacron Configuration (/etc/anacrontab):
# Format:# period delay job-identifier command# (in days) (in minutes)
# Example:1 5 cron.daily run-parts /etc/cron.daily7 10 cron.weekly run-parts /etc/cron.weekly30 15 cron.monthly run-parts /etc/cron.monthly
# Anacron timestamp file/var/spool/anacron/How Anacron Works:
# 1. At boot time, check when each job last ran# 2. If period has elapsed, schedule job# 3. Add random delay to prevent all jobs running simultaneously# 4. Update timestamp after job completes
# Run manuallyanacron -f # Force runanacron -d # Debug modeanacron -n # Run now (ignore delay)Practical Example - Laptop:
# Problem: Laptop turned off at 2 AM# Solution: Use anacron for daily tasks
# Run backup within 30 minutes of next boot1 30 daily-backup /usr/local/bin/backup.sh
# Cron for time-sensitive tasks (still needed)# Every 5 minutes check*/5 * * * * /scripts/check.shCombined Usage:
# Modern systems run cron, which runs anacron# Run anacron from cron30 7 * * * root test -x /usr/sbin/anacron && /usr/sbin/anacron
# Best practice:# - Use cron for precise scheduling# - Use anacron for daily/weekly maintenance# - Use systemd timers for modern systems23. Explain the difference between systemctl start, enable, and restart.
Answer:
| Command | Effect | Persistence |
|---|---|---|
start | Starts service now | Not persistent (won’t survive reboot) |
enable | Configure service to start at boot | Persistent |
restart | Stops then starts service | Immediate effect |
reload | Reloads config without restart | Immediate effect |
Start (Immediate Only):
# Start service now
sudo systemctl start nginx

# Check status
systemctl status nginx

# Will not start automatically on reboot
# Useful for temporary services or testing

Enable (Boot-time Only):
# Configure to start at boot
sudo systemctl enable nginx

# Creates symlink in /etc/systemd/system/multi-user.target.wants/
ls -l /etc/systemd/system/multi-user.target.wants/nginx.service

# Does NOT start the service now
# Needs start or restart for immediate effect

# Enable and start in one command
sudo systemctl enable --now nginx

Restart (Stop then Start):
# Stops the service (SIGTERM) then starts it again
sudo systemctl restart nginx

# Use when:
# - Configuration changed significantly
# - Service is misbehaving
# - Updated binaries need reloading

# Always works but may cause downtime

Reload (Graceful Configuration Reload):
# Reloads configuration without stopping
sudo systemctl reload nginx

# Use when:
# - Only configuration changed
# - Service supports reload (SIGHUP)
# - Zero downtime needed

# Check if a service supports reload
systemctl show nginx -p CanReload

Other Systemctl Commands:
# Stop service
sudo systemctl stop nginx

# Disable from boot
sudo systemctl disable nginx

# Mask (prevent manual and automatic start)
sudo systemctl mask nginx

# Unmask
sudo systemctl unmask nginx

# Show service status
systemctl status nginx

# Show all units
systemctl list-units

# Show failed units
systemctl --failed

# Show service dependencies
systemctl list-dependencies nginx

Service States:
# Check if service is active (running)
systemctl is-active nginx

# Check if service is enabled (starts at boot)
systemctl is-enabled nginx

# Check if service failed
systemctl is-failed nginx
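For context, all of these commands act on a unit file; a minimal, hypothetical /etc/systemd/system/myapp.service (the name and paths are invented for illustration) might look like:

```ini
[Unit]
Description=Example app (hypothetical)
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
# ExecReload is what makes "systemctl reload" possible for this unit
ExecReload=/bin/kill -HUP $MAINPID

[Install]
# "systemctl enable" creates a symlink under multi-user.target.wants/
WantedBy=multi-user.target
```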
# All status information
systemctl show nginx

24. Explain the difference between journalctl and traditional syslog.
Answer:
| Aspect | journalctl (systemd) | Traditional syslog |
|---|---|---|
| Format | Binary | Plain text |
| Storage | Structured database | Text files |
| Indexing | Automatic | Manual |
| Forwarding | Can forward to syslog | Native |
| Querying | Powerful filters | Basic (grep) |
| Persistence | Configurable | Configurable |
Journalctl Basic Usage:
# View all logs
journalctl

# Follow new logs (like tail -f)
journalctl -f

# Show last N lines
journalctl -n 100

# Show logs since boot
journalctl -b

# Show logs for specific service
journalctl -u nginx

# Show logs for specific time range
journalctl --since "2024-01-01 10:00:00" --until "2024-01-01 11:00:00"
journalctl --since "1 hour ago"
journalctl --since yesterday

# Show logs by priority
journalctl -p err
journalctl -p 3   # 0=emerg,1=alert,2=crit,3=err,4=warning,5=notice,6=info,7=debug

Advanced Journalctl Filters:
# Show logs by specific PID
journalctl _PID=1234

# Show logs by specific user
journalctl _UID=1000

# Show kernel messages
journalctl -k

# Show logs from a specific executable
journalctl _EXE=/usr/bin/nginx

# Combine filters
journalctl -u nginx -p err --since "1 hour ago"

# Show logs in JSON format
journalctl -o json
journalctl -o json-pretty

# Show output without pagination
journalctl --no-pager

# Show unique values of a field
journalctl -F _SYSTEMD_UNIT

Journal Configuration (/etc/systemd/journald.conf):
[Journal]
# Storage: volatile (/run/log/journal), persistent (/var/log/journal), auto, none
Storage=persistent

# Compress logs
Compress=yes

# Maximum log size
SystemMaxUse=2G
SystemMaxFileSize=100M

# Forward to traditional syslog
ForwardToSyslog=yes
ForwardToKMsg=no
ForwardToConsole=no

# Rate limiting
RateLimitIntervalSec=30s
RateLimitBurst=1000

Traditional Syslog Files:
# Common syslog files/var/log/syslog # General system messages/var/log/auth.log # Authentication attempts/var/log/kern.log # Kernel messages/var/log/messages # General messages (RHEL)/var/log/secure # Security/auth (RHEL)/var/log/maillog # Mail server logs/var/log/cron # Cron job logs/var/log/dpkg.log # Package manager logs
# View syslog
tail -f /var/log/syslog
grep "error" /var/log/syslog

Syslog Configuration (/etc/rsyslog.conf):

# Rules: facility.priority    action
mail.info                     /var/log/mail.log
auth.*                        /var/log/auth.log
*.emerg                       :omusrmsg:*    # Broadcast to all users
*.info;mail.none;auth.none    /var/log/messages

Converting Between Formats:

# Export journal to text
journalctl -o short > logs.txt

# Forward journal to syslog
# In journald.conf:
ForwardToSyslog=yes
# Use both systems:
# - systemd journal for local queries
# - rsyslog for central log aggregation

25. Explain the difference between df and du.
Answer:
| Aspect | df (Disk Free) | du (Disk Usage) |
|---|---|---|
| What it shows | Filesystem usage | Directory/file usage |
| Scope | Partition level | Directory level |
| Block size | Shows per filesystem | Shows per file/directory |
| Deleted files | Shows space as used | Doesn’t see them |
| Speed | Fast | Can be slow |
| Accuracy | Filesystem metadata | Actual file sizes |
df (Disk Free) Examples:
# Basic usage
df
# Filesystem     1K-blocks    Used  Available Use% Mounted on
# /dev/sda1       10240000 5120000    5120000  50% /

# Human-readable
df -h
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/sda1        10G  5.0G  5.0G  50% /

# Show inode usage
df -i
# Filesystem     Inodes IUsed  IFree IUse% Mounted on
# /dev/sda1      655360 12345 643015    2% /

# Show specific filesystem
df -h /home

# Show filesystem type
df -T
# Filesystem     Type  Size  Used Avail Use% Mounted on
# /dev/sda1      ext4   10G  5.0G  5.0G  50% /

# Exclude specific types
df -x tmpfs -x devtmpfs

du (Disk Usage) Examples:
# Basic directory usage
du /home/user
# 1234  /home/user/docs
# 5678  /home/user/downloads
# 8912  /home/user

# Human-readable
du -h /home/user
# 1.2M  /home/user/docs
# 5.5M  /home/user/downloads
# 8.7M  /home/user

# Summary (total only)
du -sh /home/user
# 8.7M  /home/user

# Show all files
du -ah /home/user
# 4K    /home/user/.bashrc
# 8K    /home/user/docs/readme.txt
# 1.2M  /home/user/docs/photo.jpg

# Sort by size
du -h /home/user | sort -rh

# Limit depth
du -h --max-depth=1 /home
du -hd 1 /home   # Same

# Exclude patterns
du -h --exclude="*.log" /var

# Apparent size vs actual disk usage
du -h --apparent-size file.txt

Common Problems and Solutions:
# Problem: df shows disk full, but du doesn't add up
# Cause: deleted files still held open by processes

# Find processes holding deleted files
lsof | grep deleted
# Or
lsof +L1

# Solution: restart the process or truncate the file
> /path/to/deleted/file

# Problem: find largest directories
du -sh /* 2>/dev/null | sort -rh | head -10
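The same ranking can be verified against a scratch tree with exact byte counts (`du -sb` reports apparent size in bytes, which avoids block-size rounding); the paths here are temporary stand-ins:

```shell
# Rank directories by exact byte usage in a scratch tree
base=$(mktemp -d)
mkdir -p "$base/big" "$base/small"
head -c 4096 /dev/zero > "$base/big/data"
head -c 512  /dev/zero > "$base/small/data"
du -sb "$base"/* | sort -rn | head -5   # "big" sorts first
```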
# Problem: check a specific mount point
df -h /var
du -sh /var/* 2>/dev/null

Performance Comparison:
# df is fast (reads superblock)
time df -h
# real 0m0.003s

# du can be slow (scans all files)
time du -sh /home
# real 0m5.234s

# Use ncdu for interactive exploration
ncdu /home

26. Explain the difference between tar, gzip, and zip.
Answer:
| Tool | Purpose | Compression | Multiple Files | Archive |
|---|---|---|---|---|
| tar | Archive only | No (by itself) | Yes | Creates .tar |
| gzip | Compression only | Yes | No | Compresses single file |
| zip | Archive + compress | Yes | Yes | Creates .zip |
Tar (Tape Archive):
# Create archive (no compression)
tar -cf archive.tar /path/to/dir

# Extract archive
tar -xf archive.tar

# Create with gzip compression (.tar.gz)
tar -czf archive.tar.gz /path/to/dir

# Create with bzip2 compression (.tar.bz2)
tar -cjf archive.tar.bz2 /path/to/dir

# Create with xz compression (.tar.xz)
tar -cJf archive.tar.xz /path/to/dir

# Extract compressed archive (auto-detects)
tar -xf archive.tar.gz
tar -xf archive.tar.bz2
tar -xf archive.tar.xz

# View contents
tar -tf archive.tar

# Verbose output
tar -xvzf archive.tar.gz

# Extract specific files
tar -xf archive.tar.gz --wildcards "*.txt"
# Exclude patterns
tar -czf backup.tar.gz --exclude="*.log" --exclude="tmp/*" /home

Gzip (GNU Zip):
# Compress single file
gzip file.txt
# Creates file.txt.gz and removes the original

# Keep original
gzip -c file.txt > file.txt.gz
# OR
gzip -k file.txt

# Decompress
gunzip file.txt.gz
# OR
gzip -d file.txt.gz

# Compression levels (1-9, 9=best/slowest)
gzip -9 file.txt

# View compressed file
zcat file.txt.gz
zless file.txt.gz
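A round trip is an easy way to convince yourself that gzip compression is lossless; this sketch works entirely inside a temp directory:

```shell
# gzip round trip: compress, decompress, byte-compare
tmp=$(mktemp -d)
printf 'hello linux\n' > "$tmp/file.txt"
gzip -c "$tmp/file.txt" > "$tmp/file.txt.gz"    # -c leaves the original in place
gunzip -c "$tmp/file.txt.gz" > "$tmp/restored.txt"
cmp "$tmp/file.txt" "$tmp/restored.txt" && echo "round trip OK"
```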
# Combine with tar (most common)
tar -czf archive.tar.gz directory/

Zip:
# Create zip archive
zip -r archive.zip /path/to/dir

# Create with maximum compression
zip -r -9 archive.zip /path/to/dir

# Extract
unzip archive.zip

# Extract to specific directory
unzip archive.zip -d /target/dir

# List contents
unzip -l archive.zip

# Update existing archive
zip -u archive.zip newfile.txt

# Add password protection
zip -e archive.zip file.txt

# Split into multiple files
zip -s 100m -r large.zip /path/to/dir

# Exclude patterns
zip -r archive.zip . -x "*.log" "*.tmp"

Comparison Examples:
# Create compressed archive with different tools
# tar + gzip (Linux standard)
tar -czf backup.tar.gz /home/user
# Size: 100MB
# Time: 5s
# tar + bzip2 (smaller output, slower)
tar -cjf backup.tar.bz2 /home/user

27. Explain the difference between ssh-keygen, ssh-copy-id, and ssh-agent.
Answer:
These three tools work together to provide secure, passwordless SSH authentication.
| Tool | Purpose | Output |
|---|---|---|
| ssh-keygen | Generate key pairs | Private and public keys |
| ssh-copy-id | Install public key on server | Adds to authorized_keys |
| ssh-agent | Manage private keys in memory | Holds decrypted keys for session |
ssh-keygen (Key Generation):
# Generate RSA key pair (default)
ssh-keygen -t rsa -b 4096 -C "user@example.com"
# Creates: ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public)

# Generate ED25519 key (more secure, faster)
ssh-keygen -t ed25519 -C "user@example.com"

# Generate with specific filename
ssh-keygen -t rsa -f ~/.ssh/mykey

# List key fingerprint
ssh-keygen -l -f ~/.ssh/id_rsa.pub

# Change passphrase
ssh-keygen -p -f ~/.ssh/id_rsa

# Convert private key to PEM format
ssh-keygen -p -m PEM -f ~/.ssh/id_rsa

# Generate host keys (for SSH server)
ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""

Key Types Comparison:
| Type | Bit Size | Security | Speed | Compatibility |
|---|---|---|---|---|
| RSA | 2048-4096 | Good | Moderate | Universal |
| ED25519 | 256 | Excellent | Fast | Modern systems only |
| ECDSA | 256-521 | Good | Fast | Varies by curve |
| DSA | 1024 | Weak | Slow | Deprecated |
ssh-copy-id (Public Key Distribution):
# Copy key to remote server (password required once)
ssh-copy-id user@server

# Specify key file
ssh-copy-id -i ~/.ssh/mykey.pub user@server

# Use specific port
ssh-copy-id -p 2222 user@server

# Copy to multiple servers
for server in server1 server2 server3; do
    ssh-copy-id user@$server
done

# What it does behind the scenes:
# 1. Connects to the server using a password
# 2. Creates the ~/.ssh/ directory if needed
# 3. Appends the public key to ~/.ssh/authorized_keys
# 4. Sets correct permissions (700 for .ssh, 600 for authorized_keys)
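Step 3 can be made idempotent so repeated runs never duplicate the key (ssh-copy-id does a similar duplicate check). A sketch against a temp directory standing in for the remote ~/.ssh; the key string is an illustrative placeholder:

```shell
# Idempotent key install: append only if the exact line is absent
sshdir=$(mktemp -d)                                  # stand-in for the remote ~/.ssh
key="ssh-ed25519 AAAAexamplekey user@example.com"    # placeholder key, not a real one
touch "$sshdir/authorized_keys"
grep -qxF "$key" "$sshdir/authorized_keys" || echo "$key" >> "$sshdir/authorized_keys"
grep -qxF "$key" "$sshdir/authorized_keys" || echo "$key" >> "$sshdir/authorized_keys"
wc -l < "$sshdir/authorized_keys"                    # -> 1 (second run was a no-op)
```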
# Manual equivalent:
cat ~/.ssh/id_rsa.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"

ssh-agent (Key Management):
# Start ssh-agent in the current shell
eval $(ssh-agent)
# Agent pid 12345

# Add private key to agent (prompts for passphrase once)
ssh-add ~/.ssh/id_rsa

# Add all default keys
ssh-add

# List fingerprints of loaded keys
ssh-add -l

# List full public keys of loaded keys
ssh-add -L

# Add key with timeout (removed after 1 hour)
ssh-add -t 3600 ~/.ssh/id_rsa

# Remove specific key
ssh-add -d ~/.ssh/id_rsa

# Remove all keys
ssh-add -D

# Lock agent with a password
ssh-add -x

# Unlock agent
ssh-add -X

# Check if agent is running
ssh-add -l 2>/dev/null || echo "Agent not running"

Complete Passwordless SSH Setup:
# 1. Generate key pair
ssh-keygen -t ed25519 -C "work@example.com"

# 2. Start agent and add key
eval $(ssh-agent)
ssh-add ~/.ssh/id_ed25519

# 3. Copy to remote server
ssh-copy-id user@server

# 4. Test connection (no password prompt)
ssh user@server

# 5. Add agent startup to .bashrc
echo 'eval $(ssh-agent) > /dev/null 2>&1' >> ~/.bashrc
echo 'ssh-add -l > /dev/null 2>&1 || ssh-add' >> ~/.bashrc

SSH Agent Forwarding:
# Allow agent forwarding (in ~/.ssh/config)
Host myserver
    ForwardAgent yes

# Or on the command line
ssh -A user@server

# Security note: only forward to trusted servers
# Agent forwarding allows the remote server to use your local keys

Troubleshooting:
# Debug SSH connection
ssh -vvv user@server

# Check key permissions
ls -la ~/.ssh/
# .ssh directory:  700 (drwx------)
# private keys:    600 (-rw-------)
# public keys:     644 (-rw-r--r--)
# authorized_keys: 600 (-rw-------)
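Those permission rules can be enforced mechanically. A sketch that runs against a throwaway directory so it never touches real keys (GNU stat assumed):

```shell
# Enforce and verify standard ~/.ssh permissions on a scratch copy
sshdir=$(mktemp -d)
touch "$sshdir/id_rsa" "$sshdir/id_rsa.pub" "$sshdir/authorized_keys"
chmod 700 "$sshdir"
chmod 600 "$sshdir/id_rsa" "$sshdir/authorized_keys"
chmod 644 "$sshdir/id_rsa.pub"
stat -c '%a %n' "$sshdir"/*   # prints the octal mode per file
```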
# Test key authentication
ssh -o PreferredAuthentications=publickey user@server

# Clear agent keys
ssh-add -D

# Verify key is added
ssh-add -l | grep -q "$(ssh-keygen -lf ~/.ssh/id_rsa.pub | cut -d' ' -f2)"

28. Explain the difference between chroot, systemd-nspawn, and containers.
Answer:
These are isolation technologies with increasing levels of separation.
| Technology | Isolation Level | Use Case |
|---|---|---|
| chroot | Filesystem only | Simple testing, legacy applications |
| systemd-nspawn | Filesystem + Process + Network | Lightweight containers, system testing |
| Containers (Docker/LXC) | Full OS-level virtualization | Application deployment, microservices |
chroot (Change Root):
# Basic chroot setup
mkdir -p /newroot/bin /newroot/lib /newroot/lib64
# Copy essential binaries and libraries
cp /bin/bash /newroot/bin/
ldd /bin/bash   # Check required libraries
cp /lib/x86_64-linux-gnu/libtinfo.so.6 /newroot/lib/
cp /lib/x86_64-linux-gnu/libc.so.6 /newroot/lib/
cp /lib64/ld-linux-x86-64.so.2 /newroot/lib64/
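Copying each library by hand is tedious; the ldd step can be scripted. A hedged sketch that builds a throwaway root under mktemp (it uses /bin/sh to stay small, and actually entering the jail still requires root):

```shell
# Populate a minimal chroot tree by parsing ldd output
root=$(mktemp -d)          # stand-in for /newroot
mkdir -p "$root/bin"
cp /bin/sh "$root/bin/"
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
    mkdir -p "$root$(dirname "$lib")"   # mirror the library's directory
    cp "$lib" "$root$lib"
done
ls "$root/bin"             # the tree now holds sh plus its libraries
# sudo chroot "$root" /bin/sh   # entering the jail needs root
```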
# Enter chroot
sudo chroot /newroot /bin/bash

# Limitations:
# - Still sees the host kernel
# - Can access host devices
# - No process isolation
# - Network access not restricted

# Practical use: password recovery
# Boot from a live CD, mount the root filesystem
sudo chroot /mnt/root
passwd root

systemd-nspawn (Container Manager):
# Create minimal container
sudo debootstrap bullseye /var/lib/machines/debian https://deb.debian.org/debian

# Run a shell in the container
sudo systemd-nspawn -D /var/lib/machines/debian

# Boot the container
sudo systemd-nspawn -D /var/lib/machines/debian -b
# -b = boot with systemd as init

# With a private virtual Ethernet link
sudo systemd-nspawn -D /var/lib/machines/debian --network-veth

# Run a command in the container
sudo systemd-nspawn -D /var/lib/machines/debian /bin/bash -c "apt update"

# List containers
machinectl list

# Start container as a service
machinectl start debian
machinectl shell debian

# Differences from chroot:
# - Process isolation (separate PID namespace)
# - Network isolation (virtual interfaces)
# - systemd as init process
# - Better resource management

Container Technologies Comparison:
LXC (Linux Containers):
# Create container
sudo lxc-create -n mycontainer -t debian

# List containers
lxc-ls

# Start container
lxc-start -n mycontainer -d

# Enter container
lxc-attach -n mycontainer

# Container config
/var/lib/lxc/mycontainer/config

Docker (Application Containers):
# Run container with volume isolation
docker run -it --rm ubuntu:latest /bin/bash

# With resource limits
docker run --cpus=1 --memory=512m -it ubuntu:latest

# Differences from systemd-nspawn:
# - Immutable infrastructure mindset
# - Image-based deployment
# - Registry integration
# - Orchestration ecosystem

Isolation Comparison Table:
| Feature | chroot | systemd-nspawn | Docker/LXC |
|---|---|---|---|
| Filesystem | Limited isolation | Full isolation | Full isolation |
| Processes | Same namespace | Separate namespace | Separate namespace |
| Network | Same namespace | Separate (optional) | Separate (default) |
| User | Same namespace | Can be mapped | User namespaces |
| Devices | Access to host | Limited | Controlled |
| Init system | None | Can run systemd | Custom init |
| Resource limits | None | cgroups | cgroups |
Creating Isolated Environment:
# 1. chroot - simple filesystem isolation
sudo mkdir /jail
sudo chroot /jail /bin/sh

# 2. unshare (Linux namespace tool) - process isolation
sudo unshare --mount --pid --fork --net /bin/bash
# Creates new namespaces without a full container stack

# 3. systemd-nspawn - full container with minimal overhead
sudo systemd-nspawn -D /var/lib/machines/container --boot

# 4. Docker - full container with image management
docker run -it ubuntu:latest

Security Considerations:
# chroot is NOT security isolation
# Escapes are possible via:
# - /proc exposure
# - mounted devices
# - root privileges

# systemd-nspawn adds:
# - Private /tmp
# - Read-only bind mounts
# - Capability dropping
sudo systemd-nspawn -D /container --cap-drop=ALL --cap-add=CAP_NET_BIND_SERVICE

# Best practice: use proper containers for security
# Combine with SELinux/AppArmor for additional hardening

29. Explain the difference between systemd, init, and SysVinit.
Answer:
These are init systems that manage system startup, services, and processes.
| Feature | SysVinit | Upstart | systemd |
|---|---|---|---|
| First release | 1980s | 2006 | 2010 |
| Parallel startup | No | Limited | Yes |
| Dependency handling | Manual order | Events | Declarative units |
| Service management | Scripts | Scripts + events | Unit files |
| Logging | syslog | syslog | journald |
| Process supervision | Basic | Basic | Built-in |
SysVinit (System V Init - Traditional):
# Runlevels (0-6)
# 0 = halt, 1 = single-user, 2-5 = multi-user, 6 = reboot
/etc/inittab   # Runlevel configuration

# Service scripts location
/etc/init.d/
# Example: /etc/init.d/nginx {start|stop|restart|status}

# Script format
#!/bin/sh
### BEGIN INIT INFO
# Provides:          nginx
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO

case "$1" in
  start)   start_service ;;
  stop)    stop_service ;;
  restart) restart_service ;;
  status)  status_service ;;
esac
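The dispatch above can be seen end to end in a runnable skeleton, with echo stubs standing in for real daemon control:

```shell
# Minimal runnable SysVinit-style dispatcher with stub actions
start_service()  { echo "starting"; }
stop_service()   { echo "stopping"; }
status_service() { echo "running"; }

action="${1:-status}"      # default so the sketch runs without arguments
case "$action" in
  start)   start_service ;;
  stop)    stop_service ;;
  restart) stop_service; start_service ;;
  status)  status_service ;;
  *)       echo "Usage: $0 {start|stop|restart|status}" ;;
esac
```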
# Managing services
service nginx start
service nginx status
update-rc.d nginx defaults   # Enable at boot

SysVinit Limitations:
- Sequential startup (slow)
- No dependency resolution
- No automatic restart
- Manual ordering (S##K## files)
- /etc/rc?.d/ directories with symlinks
Upstart (Event-based Init):
# Configuration files live in /etc/init/
description "Nginx web server"
start on (filesystem and net-device-up)
stop on runlevel [!2345]

respawn
respawn limit 10 5

exec /usr/sbin/nginx -g "daemon off;"

# Commands
initctl list           # List services
initctl status nginx
initctl start nginx
initctl stop nginx

systemd (Modern Init):
# Unit file locations
/etc/systemd/system/      # Local configuration
/usr/lib/systemd/system/  # Package-provided
/run/systemd/system/      # Runtime

# Service unit example (/etc/systemd/system/nginx.service)
[Unit]
Description=NGINX web server
Documentation=man:nginx(8)
After=network.target
Wants=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

# Service management
systemctl start nginx
systemctl stop nginx
systemctl restart nginx
systemctl reload nginx
systemctl status nginx
systemctl enable nginx    # Enable at boot
systemctl disable nginx

# View dependencies
systemctl list-dependencies nginx
systemctl list-dependencies --reverse nginx

systemd Target Equivalents:
| SysVinit Runlevel | systemd Target | Description |
|---|---|---|
| 0 | runlevel0.target, poweroff.target | Halt |
| 1 | runlevel1.target, rescue.target | Single-user mode |
| 2 | runlevel2.target, multi-user.target | Multi-user without GUI |
| 3 | runlevel3.target, multi-user.target | Full multi-user |
| 4 | runlevel4.target, multi-user.target | Custom |
| 5 | runlevel5.target, graphical.target | Multi-user with GUI |
| 6 | runlevel6.target, reboot.target | Reboot |
systemd Features:
# Socket activation
systemctl list-sockets

# Timer units (replace cron)
systemctl list-timers

# System analysis
systemd-analyze
systemd-analyze blame                  # Startup time per unit
systemd-analyze critical-chain nginx

# View logs (journald)
journalctl -u nginx
journalctl -u nginx -f   # Follow

# Manage user services (without sudo)
systemctl --user start service

# Resource control (cgroups)
systemctl set-property nginx MemoryLimit=512M
systemctl set-property nginx CPUQuota=50%

Migration Commands:
| SysVinit | systemd |
|---|---|
| service nginx start | systemctl start nginx |
| service nginx status | systemctl status nginx |
| chkconfig nginx on | systemctl enable nginx |
| chkconfig nginx off | systemctl disable nginx |
| init 6 | systemctl reboot |
| init 0 | systemctl poweroff |
30. Explain the difference between rsync and scp for file transfer.
Answer:
| Feature | scp | rsync |
|---|---|---|
| Protocol | SSH (only) | SSH (default) or rsync daemon |
| Incremental | No (full copy each time) | Yes (only changed parts) |
| Compression | Optional (-C) | Built-in (-z) |
| Resume | No | Yes (partial transfers) |
| Sync deletion | No | Yes (--delete) |
| Preserve attributes | Basic (-p) | Comprehensive (-a) |
| Speed | Slower for repeated transfers | Faster for incremental sync |
| Platform | Everywhere with SSH | Requires rsync on both ends |
scp (Secure Copy) - Simple File Transfer:
# Copy file to remote
scp file.txt user@server:/path/

# Copy file from remote
scp user@server:/path/file.txt .

# Copy directory recursively
scp -r /local/dir user@server:/remote/

# Preserve timestamps and permissions
scp -p file.txt user@server:/path/

# Compress during transfer
scp -C largefile.zip user@server:/path/

# Use specific port
scp -P 2222 file.txt user@server:/path/

# Copy between two remotes
scp user1@server1:/file user2@server2:/path/

# Limitations:
# - No partial transfer resume
# - Always copies the entire file
# - No delta transfer (even if only 1 byte changed)

rsync (Remote Sync) - Efficient Synchronization:
# Basic copy (like scp)
rsync file.txt user@server:/path/

# Archive mode (preserves all attributes)
rsync -avz /local/dir/ user@server:/remote/dir/
# -a = archive (permissions, timestamps, etc.)
# -v = verbose
# -z = compress

# Sync with deletion (mirror)
rsync -avz --delete /local/dir/ user@server:/remote/dir/
# Removes files on the remote that don't exist locally

# Dry run (see what would happen)
rsync -avz --dry-run /local/dir/ user@server:/remote/

# Partial transfer (resume)
rsync -avz --partial /large/file user@server:/remote/
# --partial keeps partial files for resume

# Bandwidth limit
rsync -avz --bwlimit=1000 /local/dir/ user@server:/remote/

# Exclude patterns
rsync -avz --exclude="*.log" --exclude="tmp/" /local/ user@server:/remote/

# Include/exclude from a file
rsync -avz --exclude-from=exclude-list.txt /local/ user@server:/remote/

# SSH with custom port
rsync -avz -e "ssh -p 2222" /local/ user@server:/remote/

# Show progress
rsync -avz --progress /large/file user@server:/remote/

# Compare checksums (instead of timestamp + size)
rsync -avz --checksum /local/ user@server:/remote/

# Rsync daemon mode (faster, no encryption)
rsync -avz rsync://server/module/ /local/

rsync Algorithm (Delta Transfer):
How rsync works efficiently:
1. Splits the file into blocks (usually 700-1000 bytes)
2. Computes a rolling checksum (weak) and an MD5 (strong) checksum for each block
3. Sends the checksums to the remote
4. The remote checks for matching blocks
5. Only blocks that don't match are transferred
6. The file is reassembled on the remote
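A toy version of steps 1-5 can be built from split and md5sum. It only compares fixed, aligned blocks (rsync's rolling checksum also matches blocks at shifted offsets), but it shows why a 1-byte edit costs one block rather than the whole file:

```shell
# Toy delta detection: hash fixed-size blocks of two file versions
tmp=$(mktemp -d)
head -c 4096 /dev/zero > "$tmp/v1"
cp "$tmp/v1" "$tmp/v2"
printf 'X' | dd of="$tmp/v2" bs=1 seek=100 conv=notrunc 2>/dev/null   # 1-byte edit
split -b 1024 "$tmp/v1" "$tmp/a_"    # 4 blocks per version
split -b 1024 "$tmp/v2" "$tmp/b_"
differ=0
for a in "$tmp"/a_*; do
    b="$tmp/b_${a##*/a_}"            # matching block of the other version
    [ "$(md5sum < "$a")" = "$(md5sum < "$b")" ] || differ=$((differ + 1))
done
echo "$differ of 4 blocks changed"   # only the block containing the edit
```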
Result: only the changed portions of files are transferred.

Performance Comparison:
# Scenario: syncing a 10GB directory daily with 100MB of changes

# scp (full copy each time)
scp -r /data/ user@server:/backup/
# Transfer: 10GB daily
# Time: ~10-15 minutes

# rsync (delta transfer)
rsync -avz /data/ user@server:/backup/
# Transfer: ~100MB daily
# Time: ~1 minute

# Benchmark example
time rsync -avz /source/ /dest/
time scp -r /source/ /dest/

rsync as Backup Tool:
# Incremental backups with hard links
rsync -avz --delete --link-dest=/backup/previous /source/ /backup/current/

# Remote backup with snapshot
rsync -avz --delete --backup --backup-dir=/backup/$(date +%Y%m%d) /source/ user@server:/backup/current/

# Exclude system files
rsync -avz --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*"} / user@server:/backup/system/

When to Use Which:
# Use scp when:
# - One-time file transfer
# - rsync is not available on the remote
# - A simple copy is all that's needed
# - Transferring a single file

# Use rsync when:
# - Recurring syncs (backups, mirrors)
# - Large directories with small changes
# - You need to preserve all attributes
# - You require resume capability
# - Mirroring with deletion
# - Bandwidth is limited

Advanced rsync Examples:
# Filter files by size (only files > 10MB)
rsync -avz --min-size=10M /source/ /dest/

# Filter by modification time
rsync -avz --files-from=<(find /source -mtime -7) /source/ /dest/

# Use an rsync daemon on the local network (faster)
# Server side: create /etc/rsyncd.conf
[backup]
    path = /backup
    comment = Backup area
    read only = yes

# Start daemon
rsync --daemon
# Client side (no SSH overhead)
rsync -avz rsync://server/backup/ /local/

Part 2: 20 Scenario-Based Linux Questions with Answers
1. Scenario: Disk is full, but can't find what's using space
Situation: df -h shows disk at 98% usage, but du -sh /* doesn't add up to the total.
Investigation:
# 1. Check for deleted files still held open
lsof +L1
# Shows open files with link count < 1 (deleted but still open),
# with process ID, filename, and size

# 2. Find processes holding deleted files
lsof | grep deleted

# 3. Check inode usage
df -i
# High inode usage means many small files

# 4. Find large deleted files
find /proc/*/fd -ls 2>/dev/null | grep deleted

# 5. Check hidden directories
du -sh /tmp /var/tmp /home/*/.cache

# 6. Find the largest files across the system
find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null

# 7. Check specific mount points
du -sh /var/* 2>/dev/null | sort -rh | head -10

Resolution:
# If deleted files are held open, restart the process
kill -HUP $(lsof -t /path/to/deleted/file)
# Or restart the service
systemctl restart service_name

# Clear journal logs
journalctl --vacuum-size=500M

# Clear package cache
apt clean        # Debian/Ubuntu
dnf clean all    # RHEL

# Remove old kernels (Debian/Ubuntu)
apt autoremove --purge

# Remove old kernels (RHEL)
package-cleanup --oldkernels --count=2

2. Scenario: Server responding slowly, high load average
Situation: Load average > number of CPU cores, system feels sluggish.
Investigation:
# 1. Check load average and CPU
uptime
top -bn1 | head -20

# 2. Identify top CPU processes
ps aux --sort=-%cpu | head -10

# 3. Check for processes in D state (uninterruptible sleep)
ps aux | awk '$8=="D" {print}'
# D state = waiting for I/O, can't be killed

# 4. Check I/O wait
iostat -x 1 5
# Look for high %iowait

# 5. Check disk queue
iostat -x | awk '{if($12>10) print}'
# svctm > 10ms indicates a disk bottleneck

# 6. Check memory pressure
free -h
vmstat 1 5
# Look for si/so (swap in/out) > 0

# 7. Check processes waiting on I/O
iotop -o

# 8. Check system logs
journalctl -p err -b | tail -50

Resolution:
# CPU bound: renice CPU-intensive processes
renice +10 -p $(pgrep -f "cpu_hog")

# I/O bound: lower I/O priority
ionice -c2 -n7 -p PID

# Memory pressure: identify memory hogs
ps aux --sort=-%mem | head -10

# Swap usage: check what's swapping
smem -r -k | head -10

# Kill zombie processes
ps aux | grep defunct
# Find the parent and restart/kill it

3. Scenario: Can't SSH into server, but ping works
Situation: Server responds to ping but SSH connection times out or rejects.
Investigation from client:
# 1. Check SSH connectivity with verbose output
ssh -vvv user@server

# 2. Check if the port is open
nc -zv server 22
telnet server 22

# 3. Rule out SSH key issues
ssh -o PreferredAuthentications=password user@server

# 4. Check firewall from the client
nmap -p 22 server

# 5. Check if the SSH service is running (from console if accessible)
# Via out-of-band management or physical console
systemctl status sshd
ss -tlnp | grep :22

Investigation on server (if console accessible):
# 1. Check SSH service
systemctl status sshd
journalctl -u sshd -n 50

# 2. Check network connectivity
ip addr show
ss -tlnp | grep :22

# 3. Check firewall rules
iptables -L -n | grep 22
firewall-cmd --list-all

# 4. Check SSH configuration
grep -E "PermitRootLogin|PasswordAuthentication|Port" /etc/ssh/sshd_config

# 5. Check hosts.allow/deny
cat /etc/hosts.deny
cat /etc/hosts.allow

# 6. Check SELinux
getenforce
ausearch -m avc -ts recent | grep ssh

Common Resolutions:
# 1. Restart SSH service
systemctl restart sshd

# 2. Fix firewall
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
firewall-cmd --add-service=ssh --permanent

# 3. Fix SELinux
restorecon -Rv /etc/ssh
setsebool -P ssh_sysadm_login on

# 4. Add user to allow list
echo "AllowUsers username" >> /etc/ssh/sshd_config
systemctl reload sshd

# 5. Increase verbosity for debugging
# In /etc/ssh/sshd_config:
LogLevel DEBUG
systemctl reload sshd
tail -f /var/log/auth.log

4. Scenario: User can't run sudo commands
Situation: User attempts sudo but gets a "user is not in the sudoers file" error.
Investigation:
# 1. Check the user's groups
id username
groups username

# 2. Check sudoers file syntax
visudo -c

# 3. Check if the user is in the sudo group
getent group sudo
getent group wheel

# 4. Check sudoers includes
grep -r "^%sudo" /etc/sudoers.d/
grep "^%sudo" /etc/sudoers

# 5. Check sudo logs
journalctl -t sudo -n 20
tail -50 /var/log/auth.log | grep sudo

Resolution:
# As root, add the user to the sudo group
usermod -aG sudo username
# Or for RHEL/CentOS
usermod -aG wheel username

# Verify
groups username

# If group membership doesn't work, add directly to sudoers
visudo
# Add line:
username ALL=(ALL:ALL) ALL

# Check for syntax errors
visudo -c

# For NIS/LDAP users, check nsswitch.conf
grep sudoers /etc/nsswitch.conf

5. Scenario: Cron jobs not running
Situation: Scheduled cron jobs are not executing.
Investigation:
# 1. Check cron service
systemctl status cron
systemctl status crond

# 2. Check cron logs
grep CRON /var/log/syslog | tail -50
journalctl -u cron -n 50

# 3. Check user crontab
crontab -l -u username

# 4. Check system crontab
cat /etc/crontab
ls -la /etc/cron.d/

# 5. Check cron directories
ls -la /etc/cron.hourly/
ls -la /etc/cron.daily/

# 6. Check environment issues
# Add to crontab for debugging:
* * * * * env > /tmp/cron_env.txt 2>&1
* * * * * /bin/bash -c "echo 'Running' >> /tmp/cron.log" 2>&1

# 7. Check mail for cron output
mail

# 8. Check permissions
ls -la /var/spool/cron/crontabs/
# Should be 600 (-rw-------)

Common Issues and Fixes:
# 1. Missing environment variables
# In crontab, add:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# 2. Script not executable
chmod +x /path/to/script.sh

# 3. Full paths required
# Bad:  * * * * * myscript.sh
# Good: * * * * * /home/user/bin/myscript.sh

# 4. Redirect output to see errors
* * * * * /path/to/script.sh >> /tmp/script.log 2>&1

# 5. Restart cron
systemctl restart cron

# 6. Check for syntax errors
crontab -e
# Remove empty lines or invalid entries

6. Scenario: Accidentally deleted important file
Situation: You deleted a critical file and need to recover it.
Immediate Actions:
# 1. Stop using the filesystem immediately!
# Any writes may overwrite the deleted data

# 2. Check if the file is still open by a process
lsof | grep deleted | grep filename
# If found, recover from /proc
cp /proc/PID/fd/FD /path/to/recover/file
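The /proc trick can be demonstrated safely end to end: open a file, delete it, then copy it back through the still-open descriptor. Paths here are temporary stand-ins:

```shell
# Recover a deleted-but-open file through /proc/PID/fd (Linux)
tmp=$(mktemp -d)
echo "precious data" > "$tmp/important.txt"
exec 3< "$tmp/important.txt"     # hold the file open on fd 3
rm "$tmp/important.txt"          # directory entry gone, inode still alive
cp "/proc/$$/fd/3" "$tmp/recovered.txt"
exec 3<&-                        # closing the fd finally frees the inode
cat "$tmp/recovered.txt"
```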
# 3. Check if the file is in a backup
# Restore from backup

Recovery Tools:
# extundelete (for ext3/ext4)
umount /dev/sda1   # Unmount first if possible
extundelete /dev/sda1 --restore-file /path/to/file

# TestDisk/PhotoRec
photorec /dev/sda1
# Recovers by file signature, not filename

# debugfs (for ext filesystems)
debugfs /dev/sda1
debugfs: lsdel               # List deleted inodes
debugfs: logdump -i <inode>
debugfs: dump <inode> /recovered/file

# grep the raw device if the file content is known
grep -a -B10 -A10 "unique_string" /dev/sda1 > recovered.txt

Prevention:
# Implement regular backups
# Use version control
# Enable file versioning in the filesystem (ZFS, btrfs)
# Use trash-cli instead of rm
alias rm='trash-put'

7. Scenario: Server running out of inodes
Situation: df -h shows space available but you cannot create new files.
Investigation:
# 1. Check inode usage
df -i
# Filesystem     Inodes  IUsed IFree IUse% Mounted on
# /dev/sda1      655360 655350    10  100% /

# 2. Find directories with huge directory entries
find / -xdev -type d -size +1M -exec ls -la {} \; 2>/dev/null | grep ^d

# 3. Count files per top-level directory under /home
find /home -xdev -type f | cut -d/ -f3 | sort | uniq -c | sort -nr

# 4. Find directories with many files
for dir in /*; do
    echo -n "$dir: "
    find $dir -xdev -type f 2>/dev/null | wc -l
done
# 5. Check for mail spool buildup
ls -la /var/spool/mail/
ls -la /var/mail/

# 6. Check for session files
ls -la /var/tmp/
ls -la /tmp/

Resolution:
# 1. Delete old log files
find /var/log -name "*.log" -mtime +30 -delete

# 2. Clean package cache
apt clean
dnf clean all

# 3. Remove old kernels
apt autoremove --purge
package-cleanup --oldkernels --count=2

# 4. Clear journal logs
journalctl --vacuum-size=500M

# 5. Clean Docker (if used)
docker system prune -a

# 6. Find and delete empty files
find / -type f -empty -delete 2>/dev/null

# 7. For the mail spool, set up logrotate
# /etc/logrotate.d/mail
/var/spool/mail/* {
    daily
    missingok
    rotate 7
    compress
}

8. Scenario: Network interface not coming up after reboot
Situation: Network interface doesn't get an IP after reboot.
Investigation:
# 1. Check interface status
ip link show
ip addr show
ethtool eth0

# 2. Check NetworkManager
nmcli dev status
nmcli con show

# 3. Check network config files
# Debian/Ubuntu
cat /etc/network/interfaces
cat /etc/netplan/*.yaml

# RHEL/CentOS
cat /etc/sysconfig/network-scripts/ifcfg-eth0

# 4. Check if the interface is down; bring it up
ifconfig eth0 up
ip link set eth0 up

# 5. Check DHCP client
systemctl status dhcpcd
systemctl status networking

# 6. Check kernel messages
dmesg | grep -i eth
journalctl -b | grep -i network

Resolution:
# Debian/Ubuntu netplan
cat > /etc/netplan/01-netcfg.yaml << EOF
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      optional: true
EOF
netplan apply

# RHEL/CentOS
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF
systemctl restart network

# Enable NetworkManager
systemctl enable NetworkManager
systemctl start NetworkManager

9. Scenario: Application can't bind to port
Situation: Application fails to start because "address already in use".
Investigation:
# 1. Find what's using the port
ss -tlnp | grep :8080
netstat -tlnp | grep 8080
lsof -i :8080

# 2. Check if the application is already running
ps aux | grep appname

# 3. Check if the port is stuck in TIME_WAIT
ss -tan | grep 8080

# 4. Check for zombie processes
ps aux | grep defunct

# 5. Check socket files
lsof | grep socket | grep appname

Resolution:
# 1. Kill the process using the port
kill -15 PID

10. Scenario: System clock is wrong, affecting logs and SSL certificates
Situation: Application SSL certificates failing, logs show incorrect timestamps, and NTP sync fails.
Investigation:
# 1. Check current system time and hardware clock
date
timedatectl status
hwclock --show

# 2. Check NTP synchronization status
timedatectl show-timesync
systemctl status chronyd
systemctl status ntpd

# 3. Check timezone configuration
ls -la /etc/localtime
cat /etc/timezone
timedatectl list-timezones | grep -i region

# 4. Check if the clock is drifting
chronyc tracking
ntpq -p

# 5. Check system logs for time issues
journalctl -u chronyd -n 50
grep -i time /var/log/syslog | tail -20

Resolution:
# 1. Set correct timezonesudo timedatectl set-timezone Asia/Kolkatasudo timedatectl set-timezone America/New_York
# 2. Enable and start the NTP service
sudo systemctl enable chronyd
sudo systemctl start chronyd

# 3. Force time synchronization
sudo chronyc -a makestep
sudo ntpdate -s time.nist.gov

# 4. Sync hardware clock to system time
sudo hwclock --systohc

# 5. Check synchronization status
sudo timedatectl set-ntp true
timedatectl status

# 6. For virtual machines, disable host time sync if it conflicts
# In /etc/systemd/timesyncd.conf:
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com

11. Scenario: Users can't change their passwords
Situation: Users report "Authentication token manipulation error" when trying to change passwords.
Investigation:
# 1. Check password aging policies
sudo chage -l username

# 2. Check disk space (a full disk prevents password changes)
df -h /
df -h /var

# 3. Check /etc/shadow permissions
ls -la /etc/shadow
# Should be: -rw-r----- 1 root shadow
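The permission check can be scripted for auditing. A sketch using GNU `stat`; `check_mode` is a hypothetical helper:

```shell
# Sketch: compare a file's octal mode against an expected value,
# e.g. 640 for /etc/shadow. check_mode is a hypothetical helper.
check_mode() {
    file="$1"; want="$2"
    have=$(stat -c '%a' "$file")
    if [ "$have" = "$want" ]; then
        echo "OK: $file is $have"
    else
        echo "MISMATCH: $file is $have, expected $want"
    fi
}

check_mode /etc/shadow 640
```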
# 4. Check PAM configuration
sudo grep -r "password" /etc/pam.d/
cat /etc/pam.d/common-password

# 5. Check if the account is locked
sudo passwd -S username
# "LK" means the account is locked

# 6. Check SELinux context
ls -Z /etc/shadow
restorecon -v /etc/shadow

# 7. Check if password expiration is preventing the change
sudo chage -l username
# Look for "Password expires: password must be changed"

Resolution:

# 1. Fix permissions
sudo chmod 640 /etc/shadow
sudo chown root:shadow /etc/shadow

# 2. Unlock the account if locked
sudo passwd -u username

# 3. Force a password change
sudo passwd -e username

# 4. Fix SELinux context
sudo restorecon -v /etc/shadow
sudo restorecon -v /etc/passwd

# 5. Clear any stale locks
sudo rm -f /etc/passwd.lock
sudo rm -f /etc/shadow.lock

# 6. Check disk space and clear if needed
sudo apt clean
sudo journalctl --vacuum-size=100M

# 7. Reset the password as root
sudo passwd username

12. Scenario: NFS mounts failing after reboot
Situation: NFS shares listed in /etc/fstab don't mount after a system reboot.
Investigation:
# 1. Check /etc/fstab entries
grep nfs /etc/fstab
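A quick audit for NFS entries missing `_netdev` (the usual culprit) can be scripted. A sketch; `check_netdev` is a hypothetical helper and the fstab path is passed explicitly so it can be tested:

```shell
# Sketch: flag fstab NFS entries that lack the _netdev mount option.
# check_netdev is a hypothetical helper; pass the fstab path explicitly.
check_netdev() {
    # Field 3 is the filesystem type, field 4 the mount options
    awk '$3 ~ /^nfs/ && $4 !~ /_netdev/ { print "missing _netdev: " $1 }' "$1"
}

check_netdev /etc/fstab
```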
# 2. Check NFS client service status
systemctl status nfs-client.target
systemctl status nfs-common

# 3. Try a manual mount
sudo mount -a
sudo mount -t nfs server:/export /mnt/nfs

# 4. Check NFS server reachability
showmount -e nfs-server
rpcinfo -p nfs-server

# 5. Check network availability during boot
systemctl list-units | grep network
# The network might not be up when the mount is attempted

# 6. Check mount logs
journalctl -u nfs-client.target -n 50
dmesg | grep -i nfs

# 7. Check that the mount point exists
ls -ld /mnt/nfs

Resolution:
# 1. Add the _netdev option to fstab
server:/export /mnt/nfs nfs defaults,_netdev,noauto,x-systemd.automount 0 0

# 2. Use systemd automount
sudo systemctl daemon-reload
sudo systemctl enable mnt-nfs.automount
sudo systemctl start mnt-nfs.automount

# 3. Alternative: add a network dependency via a systemd mount unit
cat > /etc/systemd/system/mnt-nfs.mount << EOF
[Unit]
Description=NFS Mount
After=network-online.target
Wants=network-online.target

[Mount]
What=server:/export
Where=/mnt/nfs
Type=nfs
Options=defaults,_netdev

[Install]
WantedBy=multi-user.target
EOF

# 4. Add timeouts to prevent hanging
# /etc/fstab with timeout options:
server:/export /mnt/nfs nfs defaults,_netdev,timeo=30,retrans=3 0 0

# 5. Check NFS client kernel modules
sudo modprobe nfs
echo "nfs" >> /etc/modules-load.d/nfs.conf

13. Scenario: Logs growing too fast, filling disk
Situation: Application logs are growing at 10GB/hour, triggering disk space alerts.
Investigation:
# 1. Find the largest log files
sudo find /var/log -type f -size +100M -exec ls -lh {} \; 2>/dev/null

# 2. Check log rotation configuration
ls -la /etc/logrotate.d/
cat /etc/logrotate.d/application

# 3. Check what's writing to the log
lsof /var/log/app.log
sudo tail -f /var/log/app.log | head -100

# 4. Check the application logging level
grep -i log /etc/app/config
# Might be set to DEBUG instead of INFO/ERROR

# 5. Check if logrotate is running
sudo systemctl status logrotate
sudo logrotate -d /etc/logrotate.conf

# 6. Check for infinite error loops
sudo tail -1000 /var/log/app.log | sort | uniq -c | sort -nr

Resolution:
# 1. Immediately truncate the log
sudo truncate -s 0 /var/log/app.log
# OR (note: a plain "sudo cat /dev/null > file" fails, because the
# redirection runs as your user, not root)
cat /dev/null | sudo tee /var/log/app.log
# 2. Configure aggressive logrotate
cat > /etc/logrotate.d/application << EOF
/var/log/app.log {
    daily
    rotate 3
    maxsize 100M
    compress
    delaycompress
    missingok
    notifempty
    create 0640 appuser appgroup
    postrotate
        systemctl reload application
    endscript
}
EOF

# 3. Reduce the application logging level
# In the application config, change:
log_level = INFO   # instead of DEBUG

# 4. Implement rate limiting in the application if possible
# Configure max log lines per second

# 5. Set up log monitoring alerts
# Use logwatch or a custom script to alert on rapid growth
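A minimal growth probe for such a script might sample the file size twice. A sketch; `growth_rate` is a hypothetical helper, and the interval and alert threshold are yours to tune:

```shell
# Sketch: measure a file's growth in bytes per second by sampling its
# size twice with GNU stat. growth_rate is a hypothetical helper.
growth_rate() {
    file="$1"; interval="${2:-5}"
    s1=$(stat -c %s "$file")
    sleep "$interval"
    s2=$(stat -c %s "$file")
    echo $(( (s2 - s1) / interval ))
}

# Demo on a scratch file (use your real log path in practice)
demo=$(mktemp)
echo "log line" >> "$demo"
growth_rate "$demo" 1
rm -f "$demo"
```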
# 6. Use systemd journal limits
# /etc/systemd/journald.conf:
SystemMaxUse=1G
SystemMaxFileSize=100M
MaxRetentionSec=1week

14. Scenario: Web server can't write to upload directory

Situation: Web application returns "Permission denied" when users try to upload files.
Investigation:
# 1. Check directory permissions
ls -la /var/www/uploads/

# 2. Check which user the web server runs as
ps aux | grep -E "nginx|apache"
# Usually www-data (Debian) or apache/nobody (RHEL)

# 3. Check SELinux context (RHEL/CentOS)
ls -Z /var/www/uploads/
sudo ausearch -m avc -ts recent | grep uploads

# 4. Check AppArmor (Ubuntu/Debian)
sudo aa-status | grep apache
sudo journalctl -u apparmor | grep DENIED

# 5. Check disk space and inodes
df -h /var
df -i /var

# 6. Check mount options (noexec, nosuid, etc.)
mount | grep /var

Resolution:
# 1. Set correct ownership
sudo chown -R www-data:www-data /var/www/uploads/
# RHEL/CentOS:
sudo chown -R apache:apache /var/www/uploads/

# 2. Set correct permissions
sudo chmod 755 /var/www/uploads/
# For uploads that need public write (careful!):
sudo chmod 1777 /var/www/uploads/
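After fixing ownership and mode, a write probe confirms the directory is actually writable for the calling user (the last step of this scenario does the same via sudo -u www-data). A sketch; `can_write` is a hypothetical helper:

```shell
# Sketch: probe whether the current user can create a file in a
# directory. can_write is a hypothetical helper.
can_write() {
    probe="$1/.write_probe.$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        echo "writable"
    else
        echo "not writable"
    fi
}

can_write /tmp
```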
# 3. Fix SELinux context
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/uploads(/.*)?"
sudo restorecon -Rv /var/www/uploads/
# Allow httpd to write
sudo setsebool -P httpd_unified on

# 4. Fix AppArmor
# In /etc/apparmor.d/usr.sbin.apache2:
/var/www/uploads/ rw,
/var/www/uploads/** rw,
sudo systemctl reload apparmor

# 5. If using PHP, check open_basedir
# /etc/php/*/apache2/php.ini
open_basedir = /var/www/html:/var/www/uploads

# 6. Verify with a test
sudo -u www-data touch /var/www/uploads/test.txt

15. Scenario: Process not responding, won't kill with SIGTERM

Situation: A process hangs and doesn't respond to kill -15, blocking graceful shutdown.
Investigation:
# 1. Check the process state
ps aux | grep PID
# State D = uninterruptible sleep (usually I/O)
# State Z = zombie (already dead, parent not reaping)

# 2. Check what the process is waiting on
cat /proc/PID/stack
cat /proc/PID/wchan
strace -p PID

# 3. Check for I/O hangs
lsof -p PID
cat /proc/PID/io

# 4. Check for network hangs
ss -p | grep PID

# 5. Check if the process is in D state (uninterruptible sleep)
ps -eo pid,stat,wchan,cmd | grep PID
# D state means it is waiting on kernel I/O
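The state letter can also be read straight from /proc. A sketch; `proc_state` is a hypothetical helper:

```shell
# Sketch: print a process's one-letter state (R/S/D/Z/T...) from
# /proc/PID/stat. proc_state is a hypothetical helper; field 3 is the
# state (field 2, the parenthesised command name, is assumed to
# contain no spaces, which would shift the fields).
proc_state() {
    awk '{ print $3 }' "/proc/$1/stat" 2>/dev/null
}

proc_state $$   # state of the current shell
```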
# 6. Check the parent process relationship
pstree -p PID

Resolution:

# 1. Try SIGINT (Ctrl+C equivalent)
kill -2 PID

# 2. Try SIGHUP (reload config; sometimes works)
kill -1 PID

# 3. For D state (I/O hang), wait for the I/O timeout
# If an NFS mount hung, unmount it first
sudo umount -f /mnt/nfs

# 4. For zombie processes, signal the parent
ps -o ppid= -p PID | xargs kill -15

# 5. Force kill only if absolutely necessary
kill -9 PID

# 6. If the process can't be killed and the system is stuck
# Sync disks, then reboot
sudo sync
sudo reboot -f

# 7. For NFS-related hangs
# Kill processes accessing the NFS mount first
fuser -km /mnt/nfs
umount -l /mnt/nfs   # Lazy unmount

16. Scenario: Kernel panic or system freeze

Situation: Server randomly freezes or kernel panics, requiring a hard reboot.
Investigation After Reboot:
# 1. Check last kernel messages
sudo dmesg | grep -i panic
sudo dmesg | grep -i "kernel bug"

# 2. Check system logs from before the crash
sudo journalctl -b -1 -k | grep -i panic
sudo tail -1000 /var/log/kern.log | grep -i "call trace"

# 3. Check hardware errors
sudo journalctl -b -1 | grep -i "hardware error"
sudo mcelog --client   # For machine check exceptions

# 4. Check memory errors
sudo grep -i "machine check" /var/log/messages
sudo dmidecode -t memory

# 5. Check CPU temperature
sensors
ipmitool sensor

# 6. Check for a kernel oops before the panic
sudo dmesg | grep -i oops

Resolution and Prevention:
# 1. Enable kernel crash dumps (kdump)
sudo yum install kexec-tools crash
# Dumps are written to /var/crash
sudo systemctl enable kdump

# 2. Enable sysrq for emergency debugging
echo 1 > /proc/sys/kernel/sysrq
# Persist in /etc/sysctl.conf:
kernel.sysrq = 1

# 3. Use Magic SysRq keys for debugging
# Alt+SysRq+<key>:
# r - raw keyboard mode
# e - SIGTERM all processes
# i - SIGKILL all processes
# s - sync
# u - remount read-only
# b - reboot

# 4. Update kernel and firmware
sudo apt update && sudo apt upgrade linux-firmware
sudo yum update kernel

# 5. Check for known hardware issues
sudo lshw
sudo lspci -v

# 6. Set up remote logging for crash analysis
# Send logs to a central syslog server:
*.* @logs.example.com

17. Scenario: Package manager broken after failed update
Situation: apt or yum fails with dependency conflicts after an interrupted update.
Investigation (Debian/Ubuntu):
# 1. Check the apt error
sudo apt update
sudo apt upgrade
# Note the specific error messages

# 2. Check dpkg status
sudo dpkg --audit
sudo dpkg --configure -a

# 3. Check held packages
sudo apt-mark showhold

# 4. Fix broken dependencies
sudo apt --fix-broken install

# 5. Check dpkg lock files
sudo lsof /var/lib/dpkg/lock
sudo lsof /var/lib/apt/lists/lock

Resolution (Debian/Ubuntu):
# 1. Remove lock files (only if no process holds them)
sudo rm /var/lib/dpkg/lock-frontend
sudo rm /var/lib/dpkg/lock
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock

# 2. Reconfigure dpkg
sudo dpkg --configure -a

# 3. Fix broken packages
sudo apt --fix-broken install

# 4. Clean the package cache
sudo apt clean
sudo apt autoclean

# 5. Force-install a specific package
sudo dpkg -i --force-overwrite /var/cache/apt/archives/package.deb

# 6. Remove a problematic package
sudo dpkg --remove --force-remove-reinstreq package-name

# 7. Update again
sudo apt update
sudo apt upgrade

Resolution (RHEL/CentOS):
# 1. Clean the yum/dnf cache
sudo yum clean all
sudo dnf clean all

# 2. Rebuild the rpm database
sudo rpm --rebuilddb

# 3. Check for broken dependencies
sudo yum check
sudo dnf check

# 4. Remove a problematic package
sudo rpm -e --noscripts package-name

# 5. Force a reinstall
sudo yum reinstall package-name
sudo dnf reinstall package-name

# 6. Check for duplicate packages
sudo package-cleanup --dupes
sudo package-cleanup --cleandupes

18. Scenario: Docker containers can't communicate with each other

Situation: Multiple Docker containers on the same host can't ping or connect to each other.
Investigation:
# 1. Check container networks
docker network ls
docker inspect container1 | grep Network
docker inspect container2 | grep Network

# 2. Check if the containers are on the same network
docker network inspect bridge

# 3. Check container IPs
docker exec container1 ip addr
docker exec container2 ip addr

# 4. Test connectivity
docker exec container1 ping container2-ip
docker exec container1 ping container2-name

# 5. Check DNS resolution
docker exec container1 cat /etc/resolv.conf
docker exec container1 nslookup container2

# 6. Check firewall rules
sudo iptables -L -n | grep DOCKER
sudo firewall-cmd --list-all

Resolution:
# 1. Create a custom bridge network
docker network create --driver bridge my-network

# 2. Connect both containers to the same network
docker network connect my-network container1
docker network connect my-network container2

# 3. Or run the containers on the custom network from the start
docker run -d --network my-network --name app1 app-image
docker run -d --network my-network --name app2 app-image

# 4. Enable inter-container communication
# In /etc/docker/daemon.json:
{
  "icc": true,
  "iptables": true
}
sudo systemctl restart docker

# 5. Add firewall rules if needed
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload

# 6. On user-defined networks, containers resolve each other by name
docker exec app1 ping app2   # Works by container name

19. Scenario: SSH connection drops after successful login

Situation: SSH authenticates successfully but the connection drops immediately after login.
Investigation:
# 1. Run SSH verbosely from the client
ssh -vvv user@server
# Look for the error after authentication

# 2. Check SSH server logs
sudo journalctl -u sshd -n 50
sudo tail -50 /var/log/auth.log | grep ssh

# 3. Check the user's shell
getent passwd username
# Should show a valid shell, not /bin/false or /sbin/nologin

# 4. Check that the shell exists and is allowed
cat /etc/shells
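Steps 3 and 4 combine naturally into one check. A sketch; `shell_ok` is a hypothetical helper:

```shell
# Sketch: check whether an account's login shell is listed in
# /etc/shells. shell_ok is a hypothetical helper.
shell_ok() {
    sh=$(getent passwd "$1" | cut -d: -f7)
    if grep -qx "$sh" /etc/shells; then
        echo "valid: $sh"
    else
        echo "invalid: $sh"
    fi
}

shell_ok username
```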
# 5. Check profile/rc files for errors
# As root, trace the user's login shell startup
sudo -u username bash -x -c 'exit' 2>&1

# 6. Check for forced commands in authorized_keys
cat ~username/.ssh/authorized_keys
# Look for command="..." restrictions

# 7. Check the SSH daemon configuration
grep -E "ForceCommand|Match" /etc/ssh/sshd_config

Resolution:

# 1. Fix the user's shell
sudo usermod -s /bin/bash username

# 2. Create the user's home directory if missing
sudo mkdir -p /home/username
sudo chown username:username /home/username

# 3. Remove problematic .profile/.bashrc entries
# Log in as root, then run a shell as the user to see what fails
sudo -u username bash

# 4. Temporarily disable shell restrictions
# In /etc/ssh/sshd_config:
# Comment out ForceCommand
# Remove Match User restrictions
sudo systemctl reload sshd

# 5. Check for disk quotas preventing shell access
sudo quota username

# 6. Add debug tracing to .bashrc
# At the top of ~/.bashrc, add:
set -x
# Then check the logs

20. Scenario: System slow after kernel update

Situation: After a kernel update, system performance has degraded significantly.
Investigation:
# 1. Check the current kernel version
uname -r

# 2. Check previously installed kernels
rpm -qa kernel
dpkg -l | grep linux-image

# 3. Check CPU frequency scaling
grep MHz /proc/cpuinfo
cpupower frequency-info

# 4. Check kernel boot parameters
cat /proc/cmdline

# 5. Check for driver issues
dmesg | grep -i error
dmesg | grep -i fail
lsmod | grep -E "nouveau|nvidia|i915"

# 6. Check system logs for warnings
journalctl -b | grep -i "warning\|error"

Resolution:
# 1. Boot into the previous kernel
# At the GRUB menu, select "Advanced options", then the previous kernel

# 2. Set the default kernel
# In /etc/default/grub:
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.4.0-42-generic"
sudo update-grub

# 3. Remove the problematic kernel
# Debian/Ubuntu:
sudo apt remove linux-image-5.8.0-45-generic
# RHEL/CentOS:
sudo yum remove kernel-5.8.0-45

# 4. Restore kernel parameters if missing
# In /etc/default/grub, add:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_pstate=disable"
# Or for power management:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash processor.max_cstate=1"
sudo update-grub

# 5. Check and reinstall missing microcode
sudo apt install intel-microcode   # Intel
sudo apt install amd64-microcode   # AMD

# 6. Rebuild the initramfs if needed
sudo update-initramfs -u -k all
sudo dracut -f   # RHEL/CentOS

# 7. Blacklist modules if needed
echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
sudo update-initramfs -u

21. Scenario: Inodes exhausted but disk space available (Bonus)

Situation: Users can't create new files; df -h shows space available, but df -i shows 100% inode usage.
Investigation:
# 1. Confirm inode exhaustion
df -i

# 2. Find unusually large directories (a directory inode over ~1M
# usually means it holds a huge number of entries)
sudo find / -xdev -type d -size +1M 2>/dev/null
# 3. Count files per top-level directory
for dir in /*; do
    echo -n "$dir: "
    sudo find "$dir" -xdev -type f 2>/dev/null | wc -l
done

# 4. Find the directories holding the most files
sudo find / -xdev -type f -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head -20

# 5. Check the mail spool for stuck emails
ls /var/spool/mail/ | wc -l
sudo mailq

# 6. Check for session files
ls /tmp/ | wc -l
ls /var/tmp/ | wc -l
ls /var/lib/php/sessions/ | wc -l
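Checks like these can be folded into one helper that ranks subdirectories by file count. A sketch; `files_per_dir` is a hypothetical name:

```shell
# Sketch: count regular files under each immediate subdirectory of a
# path, highest counts first. files_per_dir is a hypothetical helper.
files_per_dir() {
    for d in "$1"/*/; do
        printf '%s %s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
    done | sort -rn
}

files_per_dir /var/spool
```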
# 7. Check Docker overlay storage if used
sudo find /var/lib/docker -type f | wc -l

Resolution:

# 1. Delete old log files (rotated logs use many inodes)
sudo find /var/log -name "*.log.*" -type f -mtime +30 -delete
sudo find /var/log -name "*.gz" -type f -mtime +90 -delete

# 2. Clean up the mail spool
# For stuck mail queues
sudo postsuper -d ALL   # Postfix
sudo mailq | grep -v "^ " | cut -d" " -f1 | xargs sudo rm -f   # Sendmail

# 3. Clear old session files
sudo find /tmp -type f -atime +7 -delete
sudo find /var/tmp -type f -atime +7 -delete
sudo find /var/lib/php/sessions -type f -atime +7 -delete

# 4. Clean the package manager cache
sudo apt clean
sudo dnf clean all

# 5. Remove old kernels (keep the 2 most recent)
# Debian/Ubuntu:
sudo apt autoremove --purge
# RHEL/CentOS:
sudo package-cleanup --oldkernels --count=2

# 6. Clean up Docker resources
docker system prune -a

# 7. If running a mail server, limit the queue
# /etc/postfix/main.cf:
queue_minfree = 10000000
message_size_limit = 10240000
bounce_queue_lifetime = 1d

# 8. For applications creating many temp files,
# set up tmpwatch or systemd-tmpfiles
cat > /etc/tmpfiles.d/app.conf << EOF
d /var/lib/app/cache 0755 root root 7d
d /tmp/app-temp 1777 root root 1d
EOF

22. Scenario: GRUB bootloader corrupted (Bonus)

Situation: System boots to a "grub rescue>" prompt after disk changes or updates.
Investigation:
# At the grub rescue prompt, list drives and partitions:
grub rescue> ls
# (hd0) (hd0,msdos1) (hd0,msdos2)

# Try to find the boot partition
grub rescue> ls (hd0,1)/
# Look for a grub or boot directory

# Check for the kernel
grub rescue> ls (hd0,1)/boot/

Resolution from Live CD/USB:
# 1. Boot from the live CD
# 2. Mount the root partition
sudo mount /dev/sda1 /mnt

# 3. Mount the boot partition if separate
sudo mount /dev/sda2 /mnt/boot

# 4. Bind-mount the virtual filesystems
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys

# 5. Chroot into the system
sudo chroot /mnt

# 6. Reinstall GRUB
# For BIOS/MBR:
grub-install /dev/sda
update-grub

# For UEFI:
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub

# 7. Check that the system is bootable
lsblk
efibootmgr -v   # For UEFI

# 8. Exit and reboot
exit
sudo umount -R /mnt
sudo reboot

Manual GRUB Boot (Temporary):
# At the grub rescue> prompt:
grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal

# At the GRUB menu, press 'c' for the command line:
grub> set root=(hd0,msdos1)
grub> linux /boot/vmlinuz-5.4.0-42-generic root=/dev/sda1
grub> initrd /boot/initrd.img-5.4.0-42-generic
grub> boot
# After booting, reinstall GRUB permanently