Linux Interview Questions


Complete Linux Syllabus with Compact Command Flags


Part 1: Linux Fundamentals (Core - 30% of interviews)


Memory Trigger: “Kernel→Hardware, Shell→You”

Boot Process: BIOS/UEFI → GRUB → Kernel → initrd → systemd → Shell

Runlevels/Targets:

Terminal window
systemctl get-default # Current target
systemctl set-default multi-user.target # CLI mode
systemctl isolate graphical.target # Switch to GUI

Memory Trigger: “/bin bin, /sbin sys bin, /etc config, /var varies, /usr user, /home home, /root root, /tmp temp, /dev devices, /proc process, /sys system, /opt optional”

Memory Trigger: “rwx=421, SUID=4, SGID=2, Sticky=1”

Terminal window
chmod 755 file # rwxr-xr-x
chmod u+x,g-w,o=r file # Symbolic
chmod -R 755 dir/ # Recursive
chown user:group file # Change owner:group
chgrp group file # Change group only
umask 022 # Default 755/644
chmod u+s file # SUID (4) - runs as owner
chmod g+s dir/ # SGID (2) - inherits group
chmod +t dir/ # Sticky (1) - only owner delete

Memory Trigger: “ls -la(all details), -lh(human), -lt(time), -ltr(reverse time)“

Terminal window
ls -la, -lh, -lt, -ltr
cp -r(recursive), -p(preserve), -u(update), -i(interactive)
mv -i, -u, -v(verbose)
rm -r(recursive), -f(force), -i(interactive)
mkdir -p(parents), -m(mode)
touch -t(timestamp), -r(reference)
head -n(lines), tail -f(follow), -F(follow+retry)
less -N(lines), -S(chop)
find -name, -type f/d, -size, -mtime, -perm, -user, -exec
tar -czf(create gzip), -xzf(extract gzip), -cjf(bzip2), -xJf(xz), -tvf(view)
gzip -k(keep), -9(best compression)
uname -a(all), -r(release)
free -h(human), -m(MB)
df -h, -i(inodes)
du -sh(summary human), --max-depth=1
ps aux(all), -ef(full), -eo custom, --sort=-%cpu
top -u(user), -p(pid)
kill -9(force), -15(graceful), -1(reload)
pkill -f(full command), -u(user)
nice -n(priority), renice -n -p PID

Memory Trigger: “* any, ? one, [] range, {} list”

Terminal window
*.txt, ?.txt, [0-9], {a,b,c}, {1..10}

Memory Trigger: ”> overwrite, >> append, 2> error, &> both, | pipe”

Terminal window
command > file, >> file, 2> file, &> file, < file
command1 | command2
command | tee file # Display and save
find | xargs rm, xargs -n1, xargs -I{}

Memory Trigger: “0 success, && AND, || OR, ; always”

Terminal window
command && echo "OK" || echo "Fail"
echo $? # Last exit code
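A subtlety interviewers like to probe: `A && B || C` is not a true if/else, because C also runs when B itself fails. A minimal demonstration:

```shell
# Pitfall: the || branch fires if EITHER the command OR the && branch fails
true && false || echo "runs even though 'true' succeeded"
# An if/else has no such surprise
if true; then echo "ok"; else echo "fail"; fi
```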

Part 2: Text Processing & Regex (15% of interviews)


Memory Trigger: ”. any, * 0+, + 1+, ? 0/1, ^ start, $ end”

Terminal window
grep 'pattern' file # BRE (needs \ for +, ?, |)
egrep 'pattern' file # ERE (no backslashes)
grep -E, grep -F(fixed string)

Memory Trigger: “-i(ignore), -v(invert), -r(recursive), -n(number), -c(count), -l(filename), -A(after), -B(before), -C(context)“

Terminal window
grep -irn "error" /var/log/
egrep "error|warning"
fgrep ".*" # Literal search

Memory Trigger: “s/old/new/g(global), -i(in-place), -e(multiple)“

Terminal window
sed 's/old/new/g' file
sed -i.bak 's/old/new/g' file # Backup then replace
sed '2,5d', sed '/start/,/end/d'
sed -n '10,20p' # Print range

Memory Trigger: “print $1(first field), -F(separator), NR(line number), NF(field count)“

Terminal window
awk '{print $1}' file
awk -F: '{print $1}' /etc/passwd
awk 'NR==10, NR==20' file
awk '/error/ {print}'
awk '{sum+=$1} END {print sum}'
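The `sum+=$1` pattern above extends naturally to per-key aggregation with an awk array, a common interview task (e.g. total memory per user from `ps`). A sketch on sample input:

```shell
# Group-and-sum: totals per key in column 1
printf 'alice 2.0\nbob 1.5\nalice 3.0\n' |
  awk '{mem[$1] += $2} END {for (u in mem) print u, mem[u]}' | sort
# alice 5
# bob 1.5
# Same idea against live data: ps aux | awk '{m[$1]+=$4} END {for (u in m) print u, m[u]}'
```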

Memory Trigger: “cut -d(field), -f(fields), -c(chars)“

Terminal window
cut -d: -f1 /etc/passwd
sort -n(numeric), -r(reverse), -k(key), -u(unique)
uniq -c(count), -d(duplicates), -u(unique only)
wc -l(lines), -w(words), -c(chars)
tr 'a-z' 'A-Z', -d(delete), -s(squeeze)
diff -u(unified), -c(context), -r(recursive)

Part 3: Shell Scripting (15% of interviews)

#!/bin/bash, #!/usr/bin/env bash
chmod +x script.sh
./script.sh (subshell), source script.sh (current)

Memory Trigger: “$1 first arg, $# count, $? exit code, $$ PID”

Terminal window
name="value" # No spaces!
${name}, "$name"
$0,$1,$9,${10}, $#, $*, $@, $?, $$, $!
${VAR:-default} # Use default if unset
${VAR:=default} # Assign default
${#var} # Length
${var#pattern} # Remove shortest prefix
${var##pattern} # Remove longest prefix
${var%pattern} # Remove shortest suffix
${var%%pattern} # Remove longest suffix
${var/old/new} # Replace first
${var//old/new} # Replace all
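The expansions above are easiest to remember with a concrete value; a quick sketch:

```shell
file="/var/log/app.tar.gz"
echo "${#file}"       # 19 (length)
echo "${file##*/}"    # app.tar.gz (longest */ prefix removed - basename)
echo "${file%/*}"     # /var/log (shortest /* suffix removed - dirname)
echo "${file%.*}"     # /var/log/app.tar (shortest .* suffix removed)
echo "${file%%.*}"    # /var/log/app (longest .* suffix removed)
echo "${file/app/db}" # /var/log/db.tar.gz (first match replaced)
```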
Terminal window
arr=(a b c) # Indexed
${arr[0]}, ${arr[@]}, ${#arr[@]}
declare -A arr # Associative
arr[key]=value, ${arr[key]}

Memory Trigger: “-f file, -d dir, -z empty, -eq equal, && and, || or”

Terminal window
if [ -f "$file" ]; then ...; fi
[ -f "$file" ] && echo "exists"
[[ "$str" == pattern ]] # Pattern matching
[[ "$str" =~ ^[0-9]+$ ]] # Regex
(( a > b )) # Arithmetic
case "$var" in pattern) ;; esac
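The test forms above combine into a runnable snippet (using a temp directory so it is side-effect free):

```shell
tmp=$(mktemp -d)
touch "$tmp/data.txt"
[ -f "$tmp/data.txt" ] && echo "regular file"
[ -d "$tmp" ] && echo "directory"
[ -z "" ] && echo "empty string"
[[ "42" =~ ^[0-9]+$ ]] && echo "numeric"
(( 42 > 7 )) && echo "arithmetic true"
case "$tmp/data.txt" in *.txt) echo "txt via case" ;; esac
rm -rf "$tmp"
```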
Terminal window
for i in list; do ...; done
for ((i=0; i<10; i++)); do ...; done
while [ condition ]; do ...; done
until [ condition ]; do ...; done
break, continue
Terminal window
func() { echo $1; local var="x"; return 0; }
result=$(func "arg")
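One distinction interviewers probe here: `return` only sets an exit status (0-255), while real "return values" travel via stdout and command substitution. A small sketch:

```shell
add() { echo $(( $1 + $2 )); }   # value via stdout
is_even() { (( $1 % 2 == 0 )); } # truth via exit status

sum=$(add 2 3)
echo "$sum"               # 5
is_even 4 && echo "4 is even"
is_even 7 || echo "7 is odd"
```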

Memory Trigger: “read -p(prompt), -s(silent), -t(timeout)“

Terminal window
read -p "Name: " name
$(command), `command` # Command substitution
cat << EOF ... EOF # Heredoc
<<< "string" # Herestring

Memory Trigger: “set -x(debug), -e(exit on error), -u(unset error)“

Terminal window
set -euxo pipefail
trap 'echo "Error"' ERR
logger -t tag "message"

Memory Trigger: “{1..10} brace, $(( )) arithmetic”

Terminal window
{1..10}, {a..z}, {1..10..2}
$((2+2)), ((count++))
printf "%s %d\\n" "text" 10
eval "echo \\$var" # Caution!

Part 4: User & Permission Management (10% of interviews)


Memory Trigger: “-m(home), -s(shell), -L(lock), -e(expire)“

Terminal window
useradd -m -s /bin/bash user
usermod -aG group user # Append to group
usermod -L(lock), -U(unlock)
userdel -r(remove home)
passwd -e(expire)
chage -l(list), -M(max days)
id, who, w, last, lastlog
Terminal window
groupadd, groupdel
gpasswd -a user group # Add user
gpasswd -d user group # Delete user

Memory Trigger: “-i(login), -s(shell), -l(list), -k(kill timestamp)“

Terminal window
visudo # Edit /etc/sudoers
sudo -i, -s, -l, -k
sudo -u user command
Terminal window
getfacl file
setfacl -m u:user:rwx file
setfacl -x u:user file
Terminal window
/etc/profile, ~/.bash_profile # Login shells
/etc/bashrc, ~/.bashrc # Non-login shells
source ~/.bashrc # Reload

Part 5: Process Management (10% of interviews)


Memory Trigger: “R running, S sleep, D disk I/O, Z zombie, T stopped”

Memory Trigger: “ps aux(all), -ef(full), -eo(custom), top -u(user)“

Terminal window
ps aux --sort=-%cpu
ps -eo pid,ppid,cmd,%cpu,%mem
top -u user, -p PID
pstree -p(pid), -u(user)
lsof -i:port, -p PID, -u user
fuser -v file, -k file
strace -p PID, -e trace=open
Terminal window
command & # Background
Ctrl+Z, jobs, fg %1, bg %2
disown %1, nohup command &
screen -S name, -r reattach
tmux new -s name, attach -t name

Memory Trigger: “nice -n(start), renice(change), -20 highest”

Terminal window
nice -n 10 command
renice -n 5 -p PID

Memory Trigger: “1 HUP reload, 9 KILL force, 15 TERM graceful”

Terminal window
kill -9, -15, -1, -STOP, -CONT
killall -15 name
pkill -15 pattern
trap 'cmd' INT TERM EXIT

Part 6: Disk & Filesystem (10% of interviews)

Terminal window
df -T, lsblk -f, blkid

Memory Trigger: “fdisk(MBR), gdisk(GPT), parted(both)“

Terminal window
fdisk -l /dev/sda
gdisk -l /dev/sda
parted /dev/sda print
lsblk -f(FS), -p(path)

Memory Trigger: “mkfs.ext4, mount -o ro/rw/noexec, fsck -f”

Terminal window
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt
mount -o ro, -o noexec, -o remount,rw
umount -l(lazy)
fsck -f(force), -y(auto yes)

Memory Trigger: “pvcreate→vgcreate→lvcreate”

Terminal window
pvcreate /dev/sda1
vgcreate vg_name /dev/sda1
lvcreate -L 10G -n lv_name vg_name
lvextend -L +5G /dev/vg_name/lv_name
lvreduce -L 5G /dev/vg_name/lv_name # Danger: shrink the filesystem first!
resize2fs, xfs_growfs
Terminal window
iostat -x 1
iotop -o(only active)
smartctl -a /dev/sda, -H(health)
Terminal window
swapon -a(all), swapoff
sysctl vm.swappiness=10

Part 7: System Administration (10% of interviews)

Terminal window
/etc/default/grub
update-grub # Debian
grub2-mkconfig -o /boot/grub2/grub.cfg # RHEL

Memory Trigger: “start/stop/restart/reload/enable/disable/mask”

Terminal window
systemctl start/stop/restart/reload/enable/disable/mask service
systemctl status/is-active/is-enabled service
systemctl reboot/poweroff/rescue/emergency
systemctl get-default/set-default
systemd-analyze blame

Debian/Ubuntu - “apt update/upgrade/install/remove/purge”

Terminal window
apt update/upgrade/install/remove/purge/autoremove
apt search/show/list --installed
dpkg -i(install), -r(remove), -l(list), -L(list files), -S(search)

RHEL/CentOS - “yum install/update/remove/search”

Terminal window
yum install/update/remove/search/info
yum list installed/provides
dnf (same as yum)
rpm -ivh(install), -e(erase), -qa(query all), -qi(info), -ql(list files), -qf(find)

Memory Trigger: “journalctl -u(unit), -f(follow), -p(priority)“

Terminal window
journalctl -u nginx -f
journalctl -b(boot), -b -1(previous)
journalctl --since "1 hour ago"
journalctl -p err
logrotate /etc/logrotate.conf

Memory Trigger: “crontab -e(edit), -l(list), -r(remove)“

Terminal window
crontab -e, -l, -r
# * * * * * command (min hour day month dow)
@reboot, @daily, @hourly, @weekly, @monthly
anacron -f(force)
Terminal window
vmstat 1, -s(stats)
mpstat -P ALL
sar -u(CPU), -r(memory), -b(I/O), -n DEV
dstat -cdng
glances

Memory Trigger: “ip addr(show), link(up/down), route(add)“

Terminal window
ip addr show/add/del
ip link set eth0 up/down
ip route show/add/del
hostnamectl set-hostname name
Terminal window
systemctl restart sshd
timedatectl set-ntp true

Memory Trigger: “ping -c(count), traceroute -n(numeric), ss -tulpn”

Terminal window
ping -c 4 -i 0.5 -s 1400
traceroute -n -w 2
mtr -r -c 10
ss -tulpn, -ta(all TCP)
netstat -tulpn (legacy)
nmap -sV, -p port, -sP(ping scan)
tcpdump -i eth0 -w file.pcap -r file.pcap
nc -l(listen), -zv(zero I/O verbose)
curl -I(headers), -o(output), -L(follow)
wget -c(resume), -r(recursive)
dig +short, -x(reverse)
nslookup, host -t MX

iptables Memory: “-L(list), -A(append), -D(delete), -p(protocol), -s(source), -d(dest), -j(jump)“

Terminal window
iptables -L -n -v
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
firewall-cmd --add-service=http --permanent
ufw allow 22/tcp
Terminal window
ip route add 10.0.0.0/24 via 192.168.1.1
sysctl net.ipv4.ip_forward=1

Terminal window
chage -l user, -M 90, -m 7, -W 7
faillog -u user
lastb
fail2ban-client status sshd

Memory Trigger: “chattr +i(immutable), +a(append-only)“

Terminal window
chattr +i/+a/-i file
lsattr file
find / -perm -0002 -type f # World-writable
find / -perm -4000 -type f # SUID files
Terminal window
auditctl -w /etc/passwd -p wa -k key
ausearch -k key
aureport -l(login), -au(auth)

Memory Trigger: “getenforce, setenforce 0/1(permissive/enforcing)“

Terminal window
getenforce, setenforce 0/1
ls -Z, chcon -t type, restorecon -v
getsebool -a, setsebool -P bool on
audit2why, audit2allow
Terminal window
aa-status
aa-complain /path/to/program
aa-enforce /path/to/program
aa-logprof

Part 10: Advanced Topics (5% of interviews)


Memory Trigger: “lsmod(list), modprobe(load/unload), modinfo”

Terminal window
uname -r
lsmod
modprobe module, modprobe -r module
modinfo module
sysctl -a, -w parameter=value
Terminal window
free -h
cat /proc/meminfo
pmap -x PID
Terminal window
strace -p PID -e trace=file,network
ltrace -p PID
Terminal window
gdb program, gdb -p PID
valgrind --leak-check=full ./program
perf top, record, report
Terminal window
chroot /newroot /bin/bash
systemd-nspawn -D /path/to/container
machinectl list
virsh list --all, start, shutdown, destroy

Part 11: Distribution-Specific (5% of interviews)


Memory Trigger: “apt update/upgrade/install, dpkg -i”

Terminal window
apt update/upgrade/full-upgrade/install/remove/purge/autoremove
apt search/show
dpkg -i/-r/-l/-L/-S
add-apt-repository ppa:user/name
update-alternatives --config python

Memory Trigger: “yum install/update, rpm -ivh”

Terminal window
yum install/update/remove/search/info/provides
dnf (same as yum)
rpm -ivh/-e/-qa/-qi/-ql/-qf
yum install epel-release
Terminal window
zypper refresh/install/update/remove/search
Terminal window
pacman -S(install), -Syu(update), -R(remove), -Qs(search), -Qi(info)
yay -S (AUR)

Terminal window
ls -la, -lh, -lt
cp -rp, mv -i, rm -rf
mkdir -p, touch
find . -name "*.log" -mtime -7
grep -rin "error" .
ps aux --sort=-%cpu
top -u user
kill -9 PID, -15 PID
df -h, -i
du -sh, --max-depth=1
tar -czf, -xzf
ssh user@host
scp file user@host:/path
rsync -avz
chmod 755, chown user:group
systemctl start/stop/restart/status
journalctl -u service -f
ip addr, ss -tulpn
Terminal window
/etc/passwd, /etc/shadow, /etc/group
/etc/fstab, /etc/hosts, /etc/resolv.conf
/etc/crontab, /etc/sudoers
/var/log/syslog, /var/log/auth.log
/proc/cpuinfo, /proc/meminfo
| Problem | Commands |
| --- | --- |
| Disk full | df -h, du -sh /*, find / -size +100M |
| High CPU | top, ps aux --sort=-%cpu |
| High memory | free -h, ps aux --sort=-%mem |
| Can’t login | last, lastb, journalctl -u sshd |
| Network down | ip addr, ping, ss -tulpn |
| Permission denied | ls -la, id, groups |
| Service not starting | systemctl status, journalctl -xe |
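For the "disk full" row, a typical triage sequence looks like this (paths and the 100M threshold are illustrative):

```shell
df -h                                                  # which filesystem is full?
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head   # largest top-level dirs
find / -xdev -type f -size +100M 2>/dev/null | head    # individual large files
# Space "missing" from du but not df often means deleted-but-still-open files:
lsof +L1 2>/dev/null | head
```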

Linux Interview Questions: 30 Important Questions + 20 Scenario-Based Questions


Part 1: 30 Important Linux Questions with Detailed Answers


1. Explain the Linux boot process step by step.


Answer:

The Linux boot process consists of several stages:

1. BIOS/UEFI (Power-On Self Test):

  • Performs hardware initialization and testing
  • Locates bootable device (HDD, SSD, USB)
  • Loads and executes bootloader from MBR/GPT

2. Bootloader (GRUB2 most common):

  • Presents boot menu (optional)
  • Loads Linux kernel into memory
  • Loads initramfs/initrd (initial RAM disk)
  • Passes control to kernel with parameters

3. Kernel Initialization:

  • Decompresses and initializes hardware drivers
  • Mounts initial root filesystem from initramfs
  • Executes /init from initramfs
  • Loads necessary kernel modules
  • Mounts real root filesystem (switch_root)

4. Init System (systemd on most modern distros):

  • Executes default.target (equivalent to runlevel)
  • Starts system services in parallel
  • Manages dependencies between services

5. User Space:

  • Display Manager (GUI login) or Getty (text login)
  • User session starts
  • Shell or desktop environment loads

Boot Parameters Location:

Terminal window
# GRUB configuration
/etc/default/grub
/boot/grub/grub.cfg
# Kernel command line
cat /proc/cmdline
# Boot messages
dmesg
journalctl -b

Recovery Boot Options:

  • Single-user mode: systemctl rescue or init 1
  • Emergency mode: systemctl emergency
  • Kernel parameters: single, emergency, init=/bin/bash

2. Explain the difference between hard link and soft link (symlink).

Answer:

| Aspect | Hard Link | Soft Link (Symbolic Link) |
| --- | --- | --- |
| Inode | Same inode number | Different inode number |
| Cross filesystem | No | Yes |
| Directory linking | No (except special cases) | Yes |
| Original file deleted | Still accessible | Broken (dangling) |
| Size | Same as original (no extra space) | Small (path stored) |
| Creation | ln target linkname | ln -s target linkname |

Inode Explanation:

Terminal window
# Each file has an inode containing metadata
# Hard links share the same inode (same file)
# Soft links are separate files pointing to path
# View inode numbers
ls -li
# 12345678 -rw-r--r-- 2 user group 1024 Jan 1 file.txt
# 12345678 -rw-r--r-- 2 user group 1024 Jan 1 hardlink.txt
# 87654321 lrwxrwxrwx 1 user group 8 Jan 1 softlink.txt -> file.txt

Practical Examples:

Terminal window
# Create original file
echo "content" > original.txt
# Create hard link
ln original.txt hard.txt
# Both point to same data (same inode)
# Create soft link
ln -s original.txt soft.txt
# soft.txt contains path "original.txt"
# Check link counts
ls -l
# -rw-r--r-- 2 user group 8 Jan 1 10:00 original.txt
# -rw-r--r-- 2 user group 8 Jan 1 10:00 hard.txt
# lrwxrwxrwx 1 user group 11 Jan 1 10:00 soft.txt -> original.txt
# Delete original
rm original.txt
# hard.txt still works (data still exists)
# soft.txt is broken (dangling symlink)

Use Cases:

  • Hard links: Version control, backup deduplication
  • Soft links: Shortcuts, library versioning (libc.so.6 -> libc-2.31.so)
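A quick way to verify the behavior above on any system (done in a temp directory, so nothing is touched):

```shell
tmp=$(mktemp -d); cd "$tmp"
echo data > original.txt
ln original.txt hard.txt          # hard link: same inode
ln -s original.txt soft.txt       # symlink: separate file storing a path
[ original.txt -ef hard.txt ] && echo "same inode"
readlink soft.txt                 # original.txt
rm original.txt
cat hard.txt                      # data (still reachable via hard link)
cat soft.txt 2>/dev/null || echo "dangling symlink"
cd /; rm -rf "$tmp"
```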

3. What are file permissions in Linux? Explain SUID, SGID, and Sticky Bit.


Answer:

Standard Permissions (ugo/rwx):

  • Read (r=4): View file contents, list directory
  • Write (w=2): Modify file, create/delete files in directory
  • Execute (x=1): Run file as program, enter directory

Special Permissions:

SUID (Set User ID) - 4000 (4 in first octal):

Terminal window
# Classic example: /usr/bin/passwd
# When executed, runs with owner's privileges (not user's)
# Typically used for password changing, ping, etc.
ls -l /usr/bin/passwd
# -rwsr-xr-x 1 root root 68208 May 28 2020 /usr/bin/passwd
# ^ 's' indicates SUID
# Set SUID
chmod u+s file
chmod 4755 file # 4 = SUID, 755 = rwxr-xr-x
# Security risk: SUID on shell or editors can lead to privilege escalation

SGID (Set Group ID) - 2000 (2 in first octal):

Terminal window
# On files: Runs with group owner's privileges
# On directories: New files inherit directory's group
# Example on directory
mkdir shared
chgrp developers shared
chmod g+s shared
# Files created in shared/ will belong to 'developers' group
# Set SGID
chmod g+s file
chmod 2755 file # 2 = SGID, 755 = rwxr-xr-x
# Practical use: Shared team directories

Sticky Bit - 1000 (1 in first octal):

Terminal window
# On directories: Only file owner can delete/modify files
# Classic example: /tmp directory
ls -ld /tmp
# drwxrwxrwt 20 root root 4096 Jan 1 10:00 /tmp
# ^ 't' indicates sticky bit
# Set sticky bit
chmod +t directory
chmod 1777 directory # 1 = sticky, 777 = rwxrwxrwx
# Without sticky bit, any user could delete others' temp files

Combined Special Permissions:

Terminal window
# All three special bits (rare)
chmod 7777 file # SUID(4)+SGID(2)+sticky(1) + 777 = 7777
# Check permissions
stat -c "%a %n" file # Show numeric permissions

4. Explain the difference between fork() and exec() system calls.


Answer:

| Aspect | fork() | exec() |
| --- | --- | --- |
| Purpose | Creates child process | Replaces current process |
| PID | New PID for child | Same PID |
| Memory | Copy of parent (COW) | New program loaded |
| Return | Twice (0 in child, PID in parent) | Never returns on success |
| Use | Process creation | Program execution |

Fork() Details:

#include <unistd.h>
pid_t fork(void);
// Returns:
// - Child process: 0
// - Parent process: child's PID
// - Error: -1

Fork Example:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        // Child process
        printf("Child: PID=%d\n", getpid());
        execl("/bin/ls", "ls", "-l", (char *)NULL);
    } else if (pid > 0) {
        // Parent process
        printf("Parent: Child PID=%d\n", pid);
        wait(NULL); // Wait for child
    }
    return 0;
}

Exec Family Functions:

#include <unistd.h>
// Variants:
execl(path, arg0, arg1, ..., NULL); // List arguments
execlp(file, arg0, arg1, ..., NULL); // Uses PATH
execle(path, arg0, arg1, ..., NULL, env);// With environment
execv(path, argv); // Vector arguments
execvp(file, argv); // Uses PATH
execve(path, argv, env); // Full control

Common Pattern (Shell Operation):

Terminal window
# In shell, typing a command:
# 1. Shell calls fork() to create child
# 2. Child calls exec() to run command
# 3. Parent waits for child to complete
# Shell example:
# $ ls -l
# fork() → child process → exec("ls", "ls", "-l", NULL)

Copy-on-Write (COW):

  • Modern Linux doesn’t actually copy entire memory on fork()
  • Pages marked as read-only, shared between parent and child
  • Copy only happens when either process writes to page
  • Improves performance and reduces memory usage
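The fork/exec split is visible from the shell itself: a subshell is a fork (new PID), while the `exec` builtin replaces the current process image (same PID). A sketch using bash's `$BASHPID`:

```shell
echo "shell PID: $$"
( echo "subshell (forked) PID: $BASHPID" )   # differs from $$
# exec replaces the program but keeps the PID:
bash -c 'echo "before exec: $$"; exec sh -c "echo after exec: \$\$"'
# the "before" and "after" lines print the same PID
```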

5. Explain the difference between soft and hard limits in ulimit.


Answer:

Soft Limit:

  • Current enforced limit
  • Can be increased up to hard limit by user
  • Default operating value

Hard Limit:

  • Maximum ceiling for soft limit
  • Can only be increased by root
  • Set by system administrator

View Limits:

Terminal window
# View all limits
ulimit -a
# View specific limits
ulimit -n # open files
ulimit -u # processes
ulimit -s # stack size
ulimit -c # core file size
ulimit -m # memory size
ulimit -v # virtual memory
# Soft vs Hard
ulimit -Sn # soft open files
ulimit -Hn # hard open files

Setting Limits:

Terminal window
# Set soft limit (user can increase to hard)
ulimit -n 2048
# Set hard limit (requires root)
ulimit -Hn 4096
# Both soft and hard (separate calls)
ulimit -Sn 2048
ulimit -Hn 4096
# Remove limit
ulimit -n unlimited # soft
ulimit -Hn unlimited # hard (root only)

Configuration Files:

Terminal window
# System-wide limits
/etc/security/limits.conf
# Format:
# <domain> <type> <item> <value>
* soft nofile 4096
* hard nofile 65536
root soft nofile 8192
@developers hard nproc unlimited
# PAM configuration
/etc/pam.d/common-session
# session required pam_limits.so

Common Limit Types:

| Item | Description | Typical Values |
| --- | --- | --- |
| nofile | Open file descriptors | 1024 (soft), 4096 (hard) |
| nproc | Number of processes | unlimited or 4096 |
| core | Core dump size | 0 (disabled) |
| data | Data segment size | unlimited |
| stack | Stack size | 8192 KB |
| memlock | Locked memory | 64 KB |
| rss | Resident set size | unlimited |

Check Current Process Limits:

Terminal window
# Check for running process
cat /proc/$(pidof process)/limits
# Check shell limits
cat /proc/$$/limits

6. What is the difference between static and dynamic linking?


Answer:

| Aspect | Static Linking | Dynamic Linking |
| --- | --- | --- |
| Libraries | Copied into executable | Shared at runtime |
| File Size | Larger | Smaller |
| Memory | More per process | Shared across processes |
| Updates | Need relink | Replace library |
| Portability | Self-contained | Needs libraries present |
| Startup | Faster | Slower (library loading) |

Static Linking:

Terminal window
# Create static executable
gcc -static -o program program.c
# Check if statically linked
file program
# program: ELF 64-bit executable, statically linked
# ldd shows "not a dynamic executable"
ldd program
# not a dynamic executable
# Pros: Portable, no dependencies
# Cons: Larger size, can't share libraries

Dynamic Linking:

Terminal window
# Create dynamically linked executable (default)
gcc -o program program.c
# Check dynamic dependencies
ldd program
# linux-vdso.so.1 (0x00007ffe)
# libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
# /lib64/ld-linux-x86-64.so.2
# Shared library locations
/etc/ld.so.conf
/etc/ld.so.conf.d/
LD_LIBRARY_PATH environment variable
# Update library cache
ldconfig

Dynamic Linker (ld-linux.so):

# The dynamic linker itself: /lib64/ld-linux-x86-64.so.2
# Interpreter path in ELF
readelf -l program | grep INTERP
# INTERP 0x000000 0x000000 0x000000 0x000019 0x000019 R 0x1
# Manual execution with custom library path
LD_LIBRARY_PATH=/custom/lib ./program

Memory Sharing Benefits:

Terminal window
# Multiple processes share same physical library pages
# Example: All bash processes share /lib/libc.so.6
# Check shared library memory
pmap $(pidof bash) | grep libc
# 7f1234567000 2048K r-x-- libc-2.31.so # Shared across processes

Use Cases:

  • Static: Embedded systems, containers, recovery tools
  • Dynamic: Most applications, shared hosting, regular desktop apps

7. Explain the difference between kill, pkill, and killall.


Answer:

| Command | Method | Target | Options |
| --- | --- | --- | --- |
| kill | PID number | Specific process by ID | Signal number/name |
| pkill | Pattern | Processes matching name/attributes | Full regex |
| killall | Name | Processes by exact name | Case-sensitive |

Kill (by PID):

Terminal window
# Get PID first
ps aux | grep firefox
# user 12345 2.0 1.5 ... firefox
# Send signals by PID
kill 12345 # SIGTERM (15) - graceful
kill -9 12345 # SIGKILL (9) - force
kill -15 12345 # SIGTERM
kill -SIGTERM 12345 # Same as above
# Signal numbers
kill -l
# 1) SIGHUP 2) SIGINT 3) SIGQUIT 6) SIGABRT
# 9) SIGKILL 15) SIGTERM 18) SIGCONT 19) SIGSTOP

Pkill (by Pattern):

Terminal window
# Kill by name pattern
pkill firefox # Matches firefox, firefox-bin, etc.
pkill -9 firefox # Force kill
pkill -f "python script.py" # Match full command line
# List matching processes without killing
pgrep -l firefox # Show PIDs with names
pgrep firefox # Show PIDs only
# Options
pkill -u user # Kill all user's processes
pkill -t pts/2 # Kill processes on terminal
pkill -HUP nginx # Reload nginx config

Killall (by Exact Name):

Terminal window
# Kill by exact process name
killall firefox # Kills ONLY "firefox", not "firefox-bin"
# Case-sensitive by default
killall -I FIREFOX # Case-insensitive match
# Interactive mode
killall -i firefox # Confirm before killing
# Older than time
killall -o 1h firefox # Kill processes older than 1 hour
killall -y 30m firefox # Kill processes younger than 30 minutes
# Wait for process to die
killall -w firefox # Wait until all killed

Safe Killing Practices:

Terminal window
# 1. Try graceful termination first
kill -15 PID
# 2. Wait a few seconds
sleep 5
# 3. Check if still running
kill -0 PID # Returns 0 if running
# 4. Force kill if necessary
kill -9 PID
# Signal meanings:
# SIGTERM (15): Process can clean up (close files, etc.)
# SIGKILL (9): Kernel terminates immediately (no cleanup)
# SIGHUP (1): Reload configuration (daemons)
# SIGINT (2): Interrupt (Ctrl+C)
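The four steps above fold naturally into a small helper function (a sketch; the 5-second grace period is an arbitrary choice):

```shell
safe_kill() {
  local pid=$1
  kill -15 "$pid" 2>/dev/null || return 0   # no such process: nothing to do
  for _ in 1 2 3 4 5; do                    # grace period for cleanup
    kill -0 "$pid" 2>/dev/null || return 0  # exited cleanly
    sleep 1
  done
  kill -9 "$pid" 2>/dev/null                # still alive: force
}
# usage: safe_kill 12345
```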

8. Explain the difference between su and sudo.


Answer:

| Aspect | su | sudo |
| --- | --- | --- |
| Authentication | Target user’s password | User’s own password |
| Command logging | No | Yes |
| Fine-grained control | No | Yes |
| Environment | New shell (usually) | Current environment preserved |
| Audit trail | Minimal | Complete |

su (Switch User):

Terminal window
# Switch to root (requires root password)
su
su -
# Switch to another user
su - username
# Run single command as another user
su -c "command" username
# Without hyphen: keeps current environment
su username
# With hyphen: new login shell (clean environment)
su - username
# Security issue: Users need target's password
# Auditing: Hard to track who did what

sudo (Superuser DO):

Terminal window
# Run command as root (user's own password)
sudo command
# Run as specific user
sudo -u username command
# Open root shell
sudo -i
sudo -s
# Run previous command with sudo
sudo !!
# List user's sudo privileges
sudo -l
# Keep credentials cached
sudo -v # Update timestamp
sudo -k # Invalidate timestamp

sudoers Configuration (/etc/sudoers):

Terminal window
# User specifications
username ALL=(ALL:ALL) ALL
# user host=(run-as:group) commands
# Examples:
# Allow user to run any command
john ALL=(ALL) ALL
# Allow without password
jane ALL=(ALL) NOPASSWD: ALL
# Allow specific commands
webadmin ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/systemctl status nginx
# Allow as specific user
backup ALL=(backup) /usr/bin/rsync
# Group permissions
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
# Command aliases
Cmnd_Alias WEB_CMDS = /usr/bin/systemctl restart nginx, /usr/bin/systemctl reload nginx
Cmnd_Alias NET_CMDS = /sbin/ifconfig, /bin/ping
# Defaults
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults logfile="/var/log/sudo.log"

Security Differences:

Terminal window
# su logs:
/var/log/auth.log: "su: session opened for user root by user"
# No command details
# sudo logs:
/var/log/auth.log: "sudo: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/ls"
# Full command details
# Better auditing with sudo
# Can restrict to specific commands
# No need to share root password

Best Practices:

  • Use sudo instead of su for better auditing
  • Disable root login: PermitRootLogin no in /etc/ssh/sshd_config
  • Use sudo -i instead of su -
  • Never share passwords; use sudo with proper configuration

9. Explain the difference between cron and anacron.


Answer:

| Aspect | Cron | Anacron |
| --- | --- | --- |
| Assumes | System runs 24/7 | System may be off |
| Precision | Minute-level | Day-level |
| Missed jobs | Skipped | Run at next opportunity |
| Root required | No (user crontabs) | Yes |
| Random delay | No | Yes (avoids stampede) |

Cron Syntax:

Terminal window
# Minute Hour Day Month DayOfWeek Command
# 0-59 0-23 1-31 1-12 0-7
# Examples:
# Run every day at 2:30 AM
30 2 * * * /backup/script.sh
# Run every Monday at 5 AM
0 5 * * 1 /scripts/weekly.sh
# Run every hour
0 * * * * /scripts/hourly.sh
# Run every 15 minutes
*/15 * * * * /scripts/check.sh
# Special strings:
@reboot # Run at startup
@daily # Run once per day
@hourly # Run once per hour
@weekly # Run once per week
@monthly # Run once per month
@yearly # Run once per year

Cron Files Locations:

Terminal window
# System-wide crontab
/etc/crontab
# User crontabs
/var/spool/cron/crontabs/
# Cron directories
/etc/cron.d/
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/

Anacron Configuration (/etc/anacrontab):

Terminal window
# Format:
# period delay job-identifier command
# (in days) (in minutes)
# Example:
1 5 cron.daily run-parts /etc/cron.daily
7 10 cron.weekly run-parts /etc/cron.weekly
30 15 cron.monthly run-parts /etc/cron.monthly
# Anacron timestamp file
/var/spool/anacron/

How Anacron Works:

Terminal window
# 1. At boot time, check when each job last ran
# 2. If period has elapsed, schedule job
# 3. Add random delay to prevent all jobs running simultaneously
# 4. Update timestamp after job completes
# Run manually
anacron -f # Force run
anacron -d # Debug mode
anacron -n # Run now (ignore delay)

Practical Example - Laptop:

/etc/anacrontab
# Problem: Laptop turned off at 2 AM
# Solution: Use anacron for daily tasks
# Run backup within 30 minutes of next boot
1 30 daily-backup /usr/local/bin/backup.sh
# Cron for time-sensitive tasks (still needed)
# Every 5 minutes check
*/5 * * * * /scripts/check.sh

Combined Usage:

/etc/cron.d/anacron
# Modern systems run cron, which runs anacron
# Run anacron from cron
30 7 * * * root test -x /usr/sbin/anacron && /usr/sbin/anacron
# Best practice:
# - Use cron for precise scheduling
# - Use anacron for daily/weekly maintenance
# - Use systemd timers for modern systems

10. Explain the difference between top and htop.


Answer:

| Feature | top | htop |
| --- | --- | --- |
| Interface | Text-based | Colorful, mouse support |
| Navigation | Key bindings | Arrow keys, mouse |
| Scrolling | No horizontal | Yes (horizontal/vertical) |
| Process tree | Limited | Built-in tree view |
| Kill process | Press ‘k’, then enter PID | F9 key, select signal |
| Setup saving | ‘W’ writes ~/.toprc | F2 setup, saved to htoprc |
| Resource graphs | Basic | Colorful meters |
| Platform | Everywhere | Additional install |

Top Key Commands:

Terminal window
# Interactive keys:
h, ? # Help
q # Quit
k # Kill process (enter PID)
r # Renice process
s # Change delay (seconds)
t # Toggle CPU/memory summary
m # Toggle memory summary
1 # Show each CPU core
c # Show full command line
u # Filter by user
P # Sort by CPU usage
M # Sort by memory usage
T # Sort by time
R # Reverse sort
W # Write config to ~/.toprc

Top Command Line Options:

Terminal window
top -d 1 # Update every 1 second
top -p 1234,5678 # Monitor specific PIDs
top -u username # Monitor specific user
top -b -n 1 # Batch mode (1 iteration)
top -H # Show threads

Htop Features:

Terminal window
# Navigation:
Arrow keys # Move selection
PgUp/PgDn # Scroll
F1 / h # Help
F2 / S # Setup
F3 / / # Search
F4 / \ # Filter
F5 / t # Tree view
F6 / > # Sort by
F7 / ] # Increase priority (nice)
F8 / [ # Decrease priority
F9 / k # Kill process
F10 / q # Quit
# Display:
# CPU cores with different colors for user/system/IO
# Memory with color-coded usage
# Process list with tree view
# Setup saves to ~/.config/htop/htoprc

Installation:

Terminal window
# Debian/Ubuntu
sudo apt install htop
# RHEL/CentOS
sudo yum install epel-release
sudo yum install htop
# Arch
sudo pacman -S htop

When to Use Which:

  • top: Always available, quick checks, scripts
  • htop: Interactive monitoring, development, troubleshooting
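For scripted quick checks, top's batch mode (`-b -n 1`) produces plain text; a small sketch (column layout varies between procps versions, so `ps` with explicit columns is more stable for parsing):

```shell
#!/bin/sh
# Non-interactive snapshot of the busiest processes.
if command -v top >/dev/null 2>&1; then
  top -b -n 1 | head -n 12          # batch mode: one iteration, plain text
fi
# Explicit columns are easier to parse reliably:
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6
```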

11. Explain the difference between source and ./ when running scripts.


Answer:

| Aspect | source script.sh | ./script.sh |
|---|---|---|
| Shell | Current shell | New subshell |
| Environment | Modifies current environment | Isolated environment |
| exit | Exits current shell | Exits subshell only |
| Permission | No execute bit needed | Execute permission required |
| Variables | Set in current shell | Lost after script ends |
| Use case | Configuration files | Regular scripts |

Source (dot operator):

Terminal window
# Two equivalent forms
source script.sh
. script.sh
# Example script (setenv.sh):
export PATH=$PATH:/custom/bin
MYVAR="hello"
# Run with source
source setenv.sh
echo $MYVAR # Outputs: hello
echo $PATH # Contains /custom/bin
# Variables persist in current shell
# Useful for:
# - Setting environment variables
# - Loading shell functions
# - Activating virtual environments
source venv/bin/activate

Subshell Execution:

Terminal window
# Example script (setenv.sh):
export PATH=$PATH:/custom/bin
MYVAR="hello"
exit 1
# Run as executable
chmod +x setenv.sh
./setenv.sh
echo $MYVAR # Empty (variable lost)
echo $PATH # Original PATH
# Exit 1 only affects subshell, not parent
# New shell process created:
# Parent shell → fork() → child shell → exec() → script
# Script changes child's environment only
# Child exits, parent unchanged

Permission Differences:

Terminal window
# Source works without execute permission
chmod -x script.sh
source script.sh # Works
./script.sh # Permission denied
# Execute permission required for direct execution
chmod +x script.sh
./script.sh # Works

Shebang Effect:

Terminal window
# Script with #!/bin/bash
./script.sh # Uses /bin/bash interpreter
# Source uses current shell regardless of shebang
source script.sh # Uses current shell (bash/zsh/dash)

Practical Examples:

Terminal window
# Configuration file (.bashrc, .profile)
source ~/.bashrc # Reload configuration
# Virtual environment
source venv/bin/activate
# Running script in background
./long_running.sh & # Subshell, can kill independently
# Modifying current directory
# cd in script affects parent only with source
cat cd.sh
# cd /tmp
source cd.sh # Changes current directory to /tmp
./cd.sh # Changes subshell directory only
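The environment difference is easy to demonstrate with a throwaway script (`/tmp/demo.sh` is an illustrative path):

```shell
#!/bin/sh
# Create a script that only sets a variable.
printf 'MYVAR=from_script\n' > /tmp/demo.sh
MYVAR=original
. /tmp/demo.sh                  # sourced: runs in the current shell
echo "after source: $MYVAR"     # from_script
MYVAR=original
sh /tmp/demo.sh                 # subshell: parent is unaffected
echo "after subshell: $MYVAR"   # original
```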

12. Explain the difference between $* and $@ in shell scripting.


Answer:

| Variable | Unquoted Behavior | Quoted Behavior |
|---|---|---|
| $* | All arguments, word-split | "$*" = single string joined with first char of IFS |
| $@ | All arguments, word-split | "$@" = each argument quoted separately |

Unquoted Usage (same behavior):

test.sh
#!/bin/bash
echo "Unquoted \$*:"
for arg in $*; do
  echo " $arg"
done
echo "Unquoted \$@:"
for arg in $@; do
  echo " $arg"
done
# Run: ./test.sh "hello world" foo bar
# Both output:
# hello
# world
# foo
# bar
# (Arguments split on spaces)

Quoted Usage (DIFFERENT):

test.sh
#!/bin/bash
echo "Quoted \$*:"
for arg in "$*"; do
  echo " $arg"
done
echo "Quoted \$@:"
for arg in "$@"; do
  echo " $arg"
done
# Run: ./test.sh "hello world" foo bar
# Output:
# Quoted $*:
# hello world foo bar (single string, space-separated)
# Quoted $@:
# hello world (preserves quoted arguments)
# foo
# bar

IFS (Internal Field Separator) Effect:

# Script showing IFS behavior
#!/bin/bash
IFS=":"
set -- "a b" c d
echo "$*" # Output: a b:c:d
echo "$@" # Output: a b c d (separate arguments)
# IFS first character used to join $*
# Default IFS: space, tab, newline

Practical Examples:

Terminal window
# Function to process arguments
process() {
# Use "$@" to preserve argument boundaries
for arg in "$@"; do
echo "Processing: $arg"
done
}
# Calling with spaces in argument
process "file with spaces.txt" "another file.txt"
# Output preserves spaces
# Using "$*" for logging
log_message() {
# Join all arguments with space
logger "[INFO] $*"
}
log_message "User" "logged in" "from" "192.168.1.1"
# Single log entry: "[INFO] User logged in from 192.168.1.1"
# Common patterns:
# "$@" - Preferred for argument forwarding
# "$*" - For creating single string from arguments
# Forwarding arguments
wrapper() {
# Pass all arguments to another command
/usr/bin/real-command "$@"
}
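The boundary-preserving behavior can be verified by counting arguments; a minimal sketch:

```shell
#!/bin/sh
# count() reports how many arguments it received.
count() { echo $#; }
set -- "a b" c     # two positional parameters, one contains a space
count "$@"         # 2 -> boundaries preserved
count $*           # 3 -> "a b" was word-split on IFS
```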

13. Explain the difference between grep, egrep, fgrep.


Answer:

| Command | Regex Type | Performance | Use Case |
|---|---|---|---|
| grep | Basic (BRE) | Good | Basic patterns |
| egrep / grep -E | Extended (ERE) | Good | Advanced patterns |
| fgrep / grep -F | Fixed strings (no regex) | Fastest | Literal text search |

Grep (Basic Regex - BRE):

Terminal window
# Special characters need escaping
grep '\(foo\|bar\)' file.txt # OR (needs \)
grep 'foo\+' file.txt # One or more (needs \)
grep 'foo\?' file.txt # Zero or one (needs \)
grep 'foo\{2,5\}' file.txt # Range (needs \)
grep '^foo.*bar$' file.txt # Anchors and .* work normally
# Common usage
grep "error" logfile
grep -i "warning" logfile
grep -v "debug" logfile
grep -r "pattern" /etc/

Egrep (Extended Regex - ERE):

Terminal window
# Special characters work without escaping
egrep '(foo|bar)' file.txt # OR (no escaping)
egrep 'foo+' file.txt # One or more
egrep 'foo?' file.txt # Zero or one
egrep 'foo{2,5}' file.txt # Range
egrep 'foo{2,}' file.txt # Two or more
egrep 'foo{,5}' file.txt # Up to five
# Advanced patterns
egrep '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' email.txt
egrep '\b(https?|ftp)://\S+' urls.txt
# Equivalents
grep -E "pattern" file.txt # Same as egrep

Fgrep (Fixed Strings - No Regex):

Terminal window
# Treats everything as literal text
fgrep '*.txt' file.txt # Finds literal "*.txt"
fgrep 'a.b' file.txt # Finds "a.b", not "aXb"
fgrep '(foobar)' file.txt # Finds literal "(foobar)"
# Useful for:
# - Searching for special characters
# - Large files with many patterns
# - When you know the exact string
# Equivalents
grep -F "pattern" file.txt # Same as fgrep
# Performance example:
time grep -F -f patterns.txt largefile.txt
# Faster than regex version

Performance Comparison:

Terminal window
# Create test file
seq 1 1000000 > numbers.txt
# fgrep is fastest
time fgrep "500000" numbers.txt # ~0.1s
# grep (basic) slower
time grep "500000" numbers.txt # ~0.15s
# egrep (extended) similar to grep
time egrep "500000" numbers.txt # ~0.15s
# For literal strings, always use -F

Use Cases Summary:

Terminal window
# Simple text search → grep or fgrep
grep "error" log.txt
fgrep "exact string" file.txt
# Pattern with alternation → egrep
egrep "error|warning|critical" log.txt
# Email/URL extraction → egrep
egrep -o '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt
# Searching for regex metacharacters → fgrep
fgrep ".*" file.txt # Finds literal ".*"
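The regex-vs-literal difference is visible with a dot in the pattern (`/tmp/fg.txt` is a throwaway demo file):

```shell
#!/bin/sh
# "." in a regex matches any character; -F treats it literally.
printf 'price is 3.50\nversion 3x50\n' > /tmp/fg.txt
grep '3.50' /tmp/fg.txt    # regex: matches both lines ("." matches "x")
grep -F '3.50' /tmp/fg.txt # literal: matches only "3.50"
```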

14. Explain the difference between & and && in Linux.


Answer:

| Symbol | Meaning | Behavior |
|---|---|---|
| & | Background | Runs command in background |
| && | AND | Runs second command only if first succeeds |

Background Operator (&):

Terminal window
# Run command in background
long_running_command &
# Multiple background commands
command1 & command2 & command3 &
# Get background job PID
command &
echo $! # Prints PID of background process
# Background with output redirection
command > output.log 2>&1 &
# Check background jobs
jobs
# [1] Running command1 &
# [2]- Running command2 &
# [3]+ Running command3 &
# Bring to foreground
fg %1 # Bring job 1 to foreground
fg # Bring most recent job
# Send to background again
Ctrl+Z # Suspend
bg # Resume in background
# Background in script
{
sleep 10
echo "Done"
} &

AND Operator (&&):

Terminal window
# Run command2 only if command1 succeeds (exit 0)
command1 && command2
# Chain multiple commands
make && make install && make clean
# Example with cd
cd /tmp && rm -rf temp_folder
# rm only runs if cd succeeded
# Combined with OR (||)
command1 && echo "Success" || echo "Failed"
# Practical backup example
mkdir -p backup && cp -r important/ backup/ && echo "Backup complete"

Comparison Table:

| Scenario | & | && |
|---|---|---|
| First command succeeds | Runs in background | Runs next command |
| First command fails | Runs in background | Stops, no next command |
| Waits for completion | No (returns immediately) | Yes (sequential) |
| Exit code | Doesn't affect shell | Gates the next command |

Combined Usage:

Terminal window
# & applies to the whole && list, not just the last command
command1 && command2 & # entire chain runs in background as one job
# Grouping with a subshell (equivalent, more explicit)
(command1 && command2) & # Both in background
# Complex example
# Download file in background; unzip runs only if the download succeeded
curl -O https://example.com/file.zip && unzip file.zip &
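Because `&&` and `||` are driven purely by exit status, the semantics are easy to verify with `true` and `false`; a quick sketch:

```shell
#!/bin/sh
true  && echo "runs: previous exit status was 0"
false && echo "skipped: never printed"
false || echo "runs: || is the failure branch"
# Caveat: A && B || C is not if/else -- C also runs if B itself fails
true && false || echo "C ran because B failed"
```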

Job Control Signals:

Terminal window
# Send SIGCONT to background job
kill -CONT %1
# Send SIGTERM to background job
kill %1
# Disown background job (remove from shell)
disown %1
# Run immune to hangups
nohup command &

15. Explain the difference between >> and > redirection.


Answer:

| Operator | Behavior | File Content |
|---|---|---|
| > | Overwrite | Replaces existing content |
| >> | Append | Adds to end of existing content |

Overwrite Operator (>):

Terminal window
# Creates new file or overwrites existing
echo "First line" > file.txt
echo "Second line" > file.txt
# Result: file.txt contains only "Second line"
# Danger: Can accidentally delete file contents
> important.txt # Empties the file!
# Redirect stdout only
ls > files.txt
# Redirect stderr to file
ls non-existent 2> error.log
# Redirect both stdout and stderr
command &> output.txt
command > output.txt 2>&1

Append Operator (>>):

Terminal window
# Adds to end of file
echo "Line 1" >> file.txt
echo "Line 2" >> file.txt
echo "Line 3" >> file.txt
# Result: file.txt contains all three lines
# Append stderr
command 2>> error.log
# Append both
command >> output.txt 2>&1
# Useful for logging
echo "$(date): Backup started" >> backup.log
rsync -av /source/ /dest/ >> backup.log 2>&1
echo "$(date): Backup completed" >> backup.log
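The logging pattern above can be wrapped in a tiny helper; a sketch (`log` is a made-up name and `/tmp/app.log` an illustrative path):

```shell
#!/bin/sh
# Minimal timestamped logger built on >> append redirection.
LOG=/tmp/app.log
log() { printf '%s %s\n' "$(date '+%F %T')" "$*" >> "$LOG"; }
: > "$LOG"                 # truncate once at startup (the > behavior)
log "backup started"
log "backup completed"
cat "$LOG"                 # two timestamped lines
```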

Practical Examples:

Terminal window
# Log rotation with overwrite (start fresh)
> access.log
echo "$(date): New log session" >> access.log
# Collecting outputs from multiple commands
echo "System Info:" > system_info.txt
uname -a >> system_info.txt
df -h >> system_info.txt
free -h >> system_info.txt
# Configuration management
# Safe way to update config (append new line)
grep -q "alias ll='ls -la'" ~/.bashrc || echo "alias ll='ls -la'" >> ~/.bashrc
# Clearing log files without deleting
> /var/log/syslog # Clear contents while keeping file

noclobber Option (Prevent Accidental Overwrite):

Terminal window
# Enable protection
set -o noclobber
# Now this fails if file exists
echo "test" > existing.txt
# -bash: existing.txt: cannot overwrite existing file
# Force overwrite anyway
echo "test" >| existing.txt
# Append still works
echo "test" >> existing.txt
# Disable protection
set +o noclobber

Here Document with Redirection:

Terminal window
# Overwrite with multi-line text
cat > config.txt << EOF
line 1
line 2
line 3
EOF
# Append multi-line text
cat >> config.txt << EOF
line 4
line 5
EOF

16. Explain the difference between kill, pkill, and killall.


Answer:

| Command | Method | Target | Options |
|---|---|---|---|
| kill | PID number | Specific process by ID | Signal number/name |
| pkill | Pattern | Processes matching name/attributes | Full regex |
| killall | Name | Processes by exact name | Case-sensitive |

Kill (by PID):

Terminal window
# Get PID first
ps aux | grep firefox
# user 12345 2.0 1.5 ... firefox
# Send signals by PID
kill 12345 # SIGTERM (15) - graceful
kill -9 12345 # SIGKILL (9) - force
kill -15 12345 # SIGTERM
kill -SIGTERM 12345 # Same as above
# Signal numbers
kill -l
# 1) SIGHUP 2) SIGINT 3) SIGQUIT 6) SIGABRT
# 9) SIGKILL 15) SIGTERM 18) SIGCONT 19) SIGSTOP

Pkill (by Pattern):

Terminal window
# Kill by name pattern
pkill firefox # Matches firefox, firefox-bin, etc.
pkill -9 firefox # Force kill
pkill -f "python script.py" # Match full command line
# List matching processes without killing
pgrep -l firefox # Show PIDs with process names
pgrep firefox # Show PIDs only
# Options
pkill -u user # Kill all user's processes
pkill -t pts/2 # Kill processes on terminal
pkill -HUP nginx # Reload nginx config
# Oldest/newest processes
pkill -n firefox # Kill newest process only
pkill -o firefox # Kill oldest process only

Killall (by Exact Name):

Terminal window
# Kill by exact process name
killall firefox # Kills ONLY "firefox", not "firefox-bin"
# Case-sensitive by default
killall -I FIREFOX # Case-insensitive match
# Interactive mode
killall -i firefox # Confirm before killing
# Older than time
killall -o 1h firefox # Kill processes older than 1 hour
killall -y 30m firefox # Kill processes younger than 30 minutes
# Wait for process to die
killall -w firefox # Wait until all killed
# Verbose output
killall -v firefox # Show what's happening

Safe Killing Practices:

Terminal window
# 1. Try graceful termination first
kill -15 PID
# 2. Wait a few seconds
sleep 5
# 3. Check if still running
kill -0 PID # Returns 0 if running
# 4. Force kill if necessary
kill -9 PID
# Signal meanings:
# SIGTERM (15): Process can clean up (close files, etc.)
# SIGKILL (9): Kernel terminates immediately (no cleanup)
# SIGHUP (1): Reload configuration (daemons)
# SIGINT (2): Interrupt (Ctrl+C)
# SIGSTOP (19): Pause process
# SIGCONT (18): Resume paused process
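The graceful-then-force sequence above can be wrapped in a small helper; a sketch (`term_then_kill` is a made-up name and the 2-second grace period is arbitrary):

```shell
#!/bin/sh
term_then_kill() {
  pid=$1
  kill -15 "$pid" 2>/dev/null || return 0   # already gone
  sleep 2                                   # grace period for cleanup
  if kill -0 "$pid" 2>/dev/null; then       # still present?
    kill -9 "$pid" 2>/dev/null              # last resort
  fi
}
# Usage: terminate a throwaway background job
sleep 300 &
term_then_kill "$!"
```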

17. Explain the difference between su and sudo.


Answer:

| Aspect | su | sudo |
|---|---|---|
| Authentication | Target user's password | User's own password |
| Command logging | No | Yes |
| Fine-grained control | No | Yes |
| Environment | New shell (clean login env with `su -`) | Reset by default (env_reset) |
| Audit trail | Minimal | Complete |

su (Switch User):

Terminal window
# Switch to root (requires root password)
su
su -
# Switch to another user
su - username
# Run single command as another user
su -c "command" username
# Without hyphen: keeps current environment
su username
# With hyphen: new login shell (clean environment)
su - username
# Security issue: Users need target's password
# Auditing: Hard to track who did what

sudo (Superuser DO):

Terminal window
# Run command as root (user's own password)
sudo command
# Run as specific user
sudo -u username command
# Open root shell
sudo -i
sudo -s
# Run previous command with sudo
sudo !!
# List user's sudo privileges
sudo -l
# Keep credentials cached
sudo -v # Update timestamp
sudo -k # Invalidate timestamp
# Run command with preserved environment
sudo -E command

sudoers Configuration (/etc/sudoers):

Terminal window
# User specifications
username ALL=(ALL:ALL) ALL
# user host=(run-as:group) commands
# Examples:
# Allow user to run any command
john ALL=(ALL) ALL
# Allow without password
jane ALL=(ALL) NOPASSWD: ALL
# Allow specific commands
webadmin ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/systemctl status nginx
# Allow as specific user
backup ALL=(backup) /usr/bin/rsync
# Group permissions
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
# Command aliases
Cmnd_Alias WEB_CMDS = /usr/bin/systemctl restart nginx, /usr/bin/systemctl reload nginx
Cmnd_Alias NET_CMDS = /sbin/ifconfig, /bin/ping
# Defaults
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults logfile="/var/log/sudo.log"
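sudoers files should never be edited in place; `visudo` locks the file and syntax-checks it before saving (`visudo -c` validates non-interactively). For a quick audit of passwordless grants, a sketch against a sample copy (the real `/etc/sudoers` needs root to read):

```shell
#!/bin/sh
# Write a sample sudoers fragment, then list its NOPASSWD grants.
cat > /tmp/sudoers.sample <<'EOF'
jane ALL=(ALL) NOPASSWD: ALL
john ALL=(ALL) ALL
%admin ALL=(ALL) ALL
EOF
grep -n 'NOPASSWD' /tmp/sudoers.sample
```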

Security Differences:

Terminal window
# su logs:
/var/log/auth.log: "su: session opened for user root by user"
# No command details
# sudo logs:
/var/log/auth.log: "sudo: user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/bin/ls"
# Full command details
# Better auditing with sudo
# Can restrict to specific commands
# No need to share root password

Best Practices:

  • Use sudo instead of su for better auditing
  • Disable root login: PermitRootLogin no in /etc/ssh/sshd_config
  • Use sudo -i instead of su -
  • Never share passwords; use sudo with proper configuration

18. Explain the difference between soft and hard limits in ulimit.


Answer:

Soft Limit:

  • Current enforced limit
  • Can be increased up to hard limit by user
  • Default operating value

Hard Limit:

  • Maximum ceiling for soft limit
  • Can only be increased by root
  • Set by system administrator

View Limits:

Terminal window
# View all limits
ulimit -a
# View specific limits
ulimit -n # open files
ulimit -u # processes
ulimit -s # stack size
ulimit -c # core file size
ulimit -m # memory size
ulimit -v # virtual memory
# Soft vs Hard
ulimit -Sn # soft open files
ulimit -Hn # hard open files

Setting Limits:

Terminal window
# Set soft limit (user can raise it again, up to the hard limit)
ulimit -Sn 2048
# Set hard limit (only root can raise it afterwards)
ulimit -Hn 4096
# Without -S/-H, both soft and hard are set at once
ulimit -n 2048
# Remove a limit where the resource allows it (e.g. core size)
ulimit -Sc unlimited # soft
ulimit -Hc unlimited # hard (root only to raise later)

Configuration Files:

Terminal window
# System-wide limits
/etc/security/limits.conf
# Format:
# <domain> <type> <item> <value>
* soft nofile 4096
* hard nofile 65536
root soft nofile 8192
@developers hard nproc unlimited
# PAM configuration
/etc/pam.d/common-session
# session required pam_limits.so
# Systemd limits
# In service file:
[Service]
LimitNOFILE=4096
LimitNPROC=10000

Common Limit Types:

| Item | Description | Typical Values |
|---|---|---|
| nofile | Open file descriptors | 1024 (soft), 4096 (hard) |
| nproc | Number of processes | unlimited or 4096 |
| core | Core dump size | 0 (disabled) |
| data | Data segment size | unlimited |
| stack | Stack size | 8192 KB |
| memlock | Locked memory | 64 KB |
| rss | Resident set size | unlimited |
| cpu | CPU time (minutes) | unlimited |

Check Current Process Limits:

Terminal window
# Check for running process
cat /proc/$(pidof process)/limits
# Check shell limits
cat /proc/$$/limits
# Monitor limit usage
lsof -p $$ | wc -l # Count open files
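The soft/hard relationship can be checked directly in the current shell; a quick sketch:

```shell
#!/bin/sh
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
# A non-root user may move the soft limit anywhere up to the hard limit:
ulimit -Sn "$(ulimit -Hn)" && echo "soft raised to the hard ceiling"
```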

19. Explain the difference between export and regular variable assignment.


Answer:

| Aspect | Regular Variable | Exported Variable |
|---|---|---|
| Scope | Current shell only | Current shell and child processes |
| Subshell | Not visible | Visible |
| Scripts | Not accessible | Accessible |
| Permanence | Temporary | Temporary (unless set in a profile) |

Regular (Shell) Variables:

Terminal window
# Assignment (no spaces around =)
MYVAR="hello"
# Available in current shell
echo $MYVAR # Output: hello
# NOT available in child process
bash -c 'echo $MYVAR' # Output: (empty)
# NOT available in script
echo 'echo $MYVAR' > test.sh
chmod +x test.sh
./test.sh # Output: (empty)

Exported (Environment) Variables:

Terminal window
# Export a variable
export MYVAR="hello"
# OR
MYVAR="hello"
export MYVAR
# Available in current shell
echo $MYVAR # Output: hello
# Available in child process
bash -c 'echo $MYVAR' # Output: hello
# Available in script
./test.sh # Output: hello
# Export multiple variables
export VAR1=val1 VAR2=val2
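The inheritance rule can be demonstrated in a couple of lines; a sketch:

```shell
#!/bin/sh
PLAIN=one            # shell variable: not inherited by children
export SHARED=two    # environment variable: inherited
sh -c 'echo "child sees PLAIN=[$PLAIN] SHARED=[$SHARED]"'
# child sees PLAIN=[] SHARED=[two]
```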

Viewing Environment:

Terminal window
# Show all environment variables
env
printenv
# Show specific variable
printenv PATH
echo $PATH
# Show all variables (including shell variables)
set

Removing Export:

Terminal window
# Remove variable from environment
export -n MYVAR
# Now MYVAR is shell variable only
# Unset completely
unset MYVAR

Common Environment Variables:

Terminal window
PATH # Command search path
HOME # User's home directory
USER # Current username
SHELL # Current shell
TERM # Terminal type
LANG # Language/locale
PWD # Current working directory
OLDPWD # Previous working directory
EDITOR # Default editor
DISPLAY # X11 display
LD_LIBRARY_PATH # Library search path

Preserving Environment with sudo:

Terminal window
# Reset environment (default)
sudo command
# Preserve environment
sudo -E command
sudo --preserve-env command
# Preserve specific variables
sudo --preserve-env=HOME,PATH command

20. Explain the difference between ps aux and ps -ef.


Answer:

Both show process information but with different syntax and defaults.

| Aspect | ps aux | ps -ef |
|---|---|---|
| Origin | BSD syntax | UNIX/System V syntax |
| Dash | No dash needed | Requires dash |
| Columns | USER, PID, %CPU, %MEM, VSZ, RSS, TTY, STAT, START, TIME, COMMAND | UID, PID, PPID, C, STIME, TTY, TIME, CMD |
| CPU/MEM | Shows percentages | No percentages |
| Parent PID | Not shown | Shows PPID |

ps aux (BSD Style):

Terminal window
ps aux
# Output columns:
# USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
# root 1 0.0 0.1 168812 11208 ? Ss Jan01 0:02 /sbin/init
# root 123 0.0 0.0 12345 1234 ? S Jan01 0:00 [kthreadd]
# user 4567 0.5 2.3 1234567 234567 pts/0 S+ 10:30 0:01 bash
# Options meaning:
# a = all users' processes
# u = user-oriented format (CPU, MEM, etc.)
# x = processes without terminal

ps -ef (System V Style):

Terminal window
ps -ef
# Output columns:
# UID PID PPID C STIME TTY TIME CMD
# root 1 0 0 Jan01 ? 00:00:02 /sbin/init
# root 123 2 0 Jan01 ? 00:00:00 [kthreadd]
# user 4567 4560 0 10:30 pts/0 00:00:00 bash
# Options meaning:
# -e = all processes
# -f = full format (PPID, STIME, etc.)

Common Variations:

Terminal window
# Show process tree
ps auxf
ps -ef --forest
# Show threads
ps auxH # BSD style: threads as if processes
ps -eLf # System V style: one line per thread (LWP column)
# Custom output format
ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu
# Show specific user's processes
ps -u username
ps -U username
# Show processes by name
ps -C process_name
# Show all processes with full command line
ps auxww
ps -efww

When to Use Which:

Terminal window
# Use ps aux when:
# - You want CPU/MEM percentages
# - You need to see resource usage
# - You're troubleshooting performance
# Use ps -ef when:
# - You need parent PID relationships
# - You're tracing process ancestry
# - You're on older UNIX systems
# Most Linux systems support both

21. Explain the difference between nice and renice.


Answer:

| Aspect | nice | renice |
|---|---|---|
| When | When starting a process | For running processes |
| Priority | Sets initial priority | Changes existing priority |
| Range | -20 (highest) to 19 (lowest) | Same range |
| Permission | Non-root users can only raise the nice value (lower priority) | Same rule; only root can lower it |

Nice Values Explained:

Terminal window
# Nice value range: -20 to +19
# -20 = Highest priority (most CPU time)
# +19 = Lowest priority (least CPU time)
# Default = 0
# View process nice values
ps -eo pid,ni,comm
top # NI column

Nice (Starting Processes):

Terminal window
# Start with default priority (0)
nice ./long_running.sh
# Start with lower priority (10)
nice -n 10 ./long_running.sh
nice -10 ./long_running.sh # Same
# Start with higher priority (-10) - requires root
sudo nice -n -10 ./important.sh
# Start with lowest priority (19)
nice -n 19 ./background_task.sh

Renice (Running Processes):

Terminal window
# Change priority of running process by PID
renice -n 10 -p 1234
# Change all processes of a user
renice -n 5 -u username
# Change process group
renice -n 10 -g 5678
# Increase priority (requires root)
sudo renice -n -5 -p 1234
# Verify change
ps -o pid,ni,comm -p 1234

Practical Examples:

Terminal window
# Background backup (low priority)
nice -n 19 rsync -av /data/ /backup/ &
# Important real-time process (high priority)
sudo nice -n -15 ./realtime_app
# Find CPU-intensive processes
ps aux --sort=-%cpu | head
# Lower priority of CPU hog
renice -n 15 -p $(pgrep -f "cpu_hog")
# Reset to default
renice -n 0 -p 1234

Permission Rules:

Terminal window
# Regular users:
# - Can only increase nice value (lower priority)
# - Cannot decrease nice value (increase priority)
nice -n 10 ./script.sh # Allowed
nice -n -5 ./script.sh # Error: Permission denied
# Root users:
# - Can set any nice value
sudo nice -n -20 ./critical.sh
# Renice same rules apply
renice -n 15 -p 1234 # Allowed (increase)
renice -n -5 -p 1234 # Error for regular users
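A quick way to see these rules in action (the `sleep` is just a throwaway workload):

```shell
#!/bin/sh
nice -n 19 sleep 5 &          # start at lowest priority
pid=$!
ps -o pid,ni,comm -p "$pid"   # NI column shows 19
renice -n 19 -p "$pid"        # allowed: not lowering the nice value
kill "$pid" 2>/dev/null
```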

22. Explain the difference between cron and anacron.


Answer:

| Aspect | Cron | Anacron |
|---|---|---|
| Assumes | System runs 24/7 | System may be off |
| Precision | Minute-level | Day-level |
| Missed jobs | Skipped | Run at next opportunity |
| Root required | No (user crontabs) | Yes |
| Random delay | No | Yes (avoids stampede) |

Cron Syntax:

Terminal window
# Minute Hour Day Month DayOfWeek Command
# 0-59 0-23 1-31 1-12 0-7
# Examples:
# Run every day at 2:30 AM
30 2 * * * /backup/script.sh
# Run every Monday at 5 AM
0 5 * * 1 /scripts/weekly.sh
# Run every hour
0 * * * * /scripts/hourly.sh
# Run every 15 minutes
*/15 * * * * /scripts/check.sh
# Special strings:
@reboot # Run at startup
@daily # Run once per day
@hourly # Run once per hour
@weekly # Run once per week
@monthly # Run once per month
@yearly # Run once per year

Cron Files Locations:

Terminal window
# System-wide crontab
/etc/crontab
# User crontabs
/var/spool/cron/crontabs/
# Cron directories
/etc/cron.d/
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/

Anacron Configuration (/etc/anacrontab):

Terminal window
# Format:
# period delay job-identifier command
# (in days) (in minutes)
# Example:
1 5 cron.daily run-parts /etc/cron.daily
7 10 cron.weekly run-parts /etc/cron.weekly
30 15 cron.monthly run-parts /etc/cron.monthly
# Anacron timestamp file
/var/spool/anacron/

How Anacron Works:

Terminal window
# 1. At boot time, check when each job last ran
# 2. If period has elapsed, schedule job
# 3. Add random delay to prevent all jobs running simultaneously
# 4. Update timestamp after job completes
# Run manually
anacron -f # Force run
anacron -d # Debug mode
anacron -n # Run now (ignore delay)

Practical Example - Laptop:

/etc/anacrontab
# Problem: Laptop turned off at 2 AM
# Solution: Use anacron for daily tasks
# Run backup within 30 minutes of next boot
1 30 daily-backup /usr/local/bin/backup.sh
# Cron for time-sensitive tasks (still needed)
# Every 5 minutes check
*/5 * * * * /scripts/check.sh

Combined Usage:

/etc/cron.d/anacron
# Modern systems run cron, which runs anacron
# Run anacron from cron
30 7 * * * root test -x /usr/sbin/anacron && /usr/sbin/anacron
# Best practice:
# - Use cron for precise scheduling
# - Use anacron for daily/weekly maintenance
# - Use systemd timers for modern systems

23. Explain the difference between systemctl start, enable, and restart.


Answer:

| Command | Effect | Persistence |
|---|---|---|
| start | Starts service now | Not persistent (won't survive reboot) |
| enable | Configures service to start at boot | Persistent |
| restart | Stops then starts service | Immediate effect |
| reload | Reloads config without restart | Immediate effect |

Start (Immediate Only):

Terminal window
# Start service now
sudo systemctl start nginx
# Check status
systemctl status nginx
# Will not start automatically on reboot
# Useful for temporary services or testing

Enable (Boot-time Only):

Terminal window
# Configure to start at boot
sudo systemctl enable nginx
# Creates symlink in /etc/systemd/system/multi-user.target.wants/
ls -l /etc/systemd/system/multi-user.target.wants/nginx.service
# Does NOT start the service now
# Need start or restart for immediate effect
# Enable and start in one command
sudo systemctl enable --now nginx

Restart (Stop then Start):

Terminal window
# Stops service (SIGTERM) then starts again
sudo systemctl restart nginx
# Use when:
# - Configuration changed significantly
# - Service is misbehaving
# - Updated binaries need reloading
# Always works but may cause downtime

Reload (Graceful Configuration Reload):

Terminal window
# Reloads configuration without stopping
sudo systemctl reload nginx
# Use when:
# - Only configuration changed
# - Service supports reload (SIGHUP)
# - Zero downtime needed
# Check if service supports reload
systemctl show nginx -p CanReload

Other Systemctl Commands:

Terminal window
# Stop service
sudo systemctl stop nginx
# Disable from boot
sudo systemctl disable nginx
# Mask (prevent manual and automatic start)
sudo systemctl mask nginx
# Unmask
sudo systemctl unmask nginx
# Show service status
systemctl status nginx
# Show all units
systemctl list-units
# Show failed units
systemctl --failed
# Show service dependencies
systemctl list-dependencies nginx

Service States:

Terminal window
# Check if service is active (running)
systemctl is-active nginx
# Check if service is enabled (boot start)
systemctl is-enabled nginx
# Check if service failed
systemctl is-failed nginx
# All status information
systemctl show nginx

24. Explain the difference between journalctl and traditional syslog.


Answer:

| Aspect | journalctl (systemd) | Traditional syslog |
|---|---|---|
| Format | Binary | Plain text |
| Storage | Structured database | Text files |
| Indexing | Automatic | Manual |
| Forwarding | Can forward to syslog | Native |
| Querying | Powerful filters | Basic (grep) |
| Persistence | Configurable | Configurable |

Journalctl Basic Usage:

Terminal window
# View all logs
journalctl
# Follow new logs (like tail -f)
journalctl -f
# Show last N lines
journalctl -n 100
# Show logs since boot
journalctl -b
# Show logs for specific service
journalctl -u nginx
# Show logs for specific time range
journalctl --since "2024-01-01 10:00:00" --until "2024-01-01 11:00:00"
journalctl --since "1 hour ago"
journalctl --since yesterday
# Show logs by priority
journalctl -p err
journalctl -p 3 # 0=emerg,1=alert,2=crit,3=err,4=warning,5=notice,6=info,7=debug

Advanced Journalctl Filters:

Terminal window
# Show logs by specific PID
journalctl _PID=1234
# Show logs by specific user
journalctl _UID=1000
# Show kernel messages
journalctl -k
# Show logs with specific executable
journalctl _EXE=/usr/bin/nginx
# Combine filters
journalctl -u nginx -p err --since "1 hour ago"
# Show logs in JSON format
journalctl -o json
journalctl -o json-pretty
# Show output without pagination
journalctl --no-pager
# Show only unique fields
journalctl -F _SYSTEMD_UNIT

Journal Configuration (/etc/systemd/journald.conf):

Terminal window
[Journal]
# Storage: volatile (/run/log/journal), persistent (/var/log/journal), auto, none
Storage=persistent
# Compress logs
Compress=yes
# Maximum log size
SystemMaxUse=2G
SystemMaxFileSize=100M
# Forward to traditional syslog
ForwardToSyslog=yes
ForwardToKMsg=no
ForwardToConsole=no
# Rate limiting
RateLimitIntervalSec=30s
RateLimitBurst=1000

Traditional Syslog Files:

Terminal window
# Common syslog files
/var/log/syslog # General system messages
/var/log/auth.log # Authentication attempts
/var/log/kern.log # Kernel messages
/var/log/messages # General messages (RHEL)
/var/log/secure # Security/auth (RHEL)
/var/log/maillog # Mail server logs
/var/log/cron # Cron job logs
/var/log/dpkg.log # Package manager logs
# View syslog
tail -f /var/log/syslog
grep "error" /var/log/syslog

Syslog Configuration (/etc/rsyslog.conf):

Terminal window
# Rules: facility.priority action
mail.info /var/log/mail.log
auth.* /var/log/auth.log
*.emerg :omusrmsg:* # Broadcast to all users
*.info;mail.none;auth.none /var/log/messages

Converting Between Formats:

Terminal window
# Export journal to text
journalctl -o short > logs.txt
# Forward journal to syslog
# In journald.conf:
ForwardToSyslog=yes
# Use both systems
# systemd journal for local queries
# rsyslog for central log aggregation

25. Explain the difference between df and du.


Answer:

| Aspect | df (Disk Free) | du (Disk Usage) |
|---|---|---|
| What it shows | Filesystem usage | Directory/file usage |
| Scope | Partition level | Directory level |
| Granularity | Per filesystem | Per file/directory |
| Deleted-but-open files | Counted as used | Not seen |
| Speed | Fast | Can be slow |
| Accuracy | Filesystem metadata | Actual file sizes |

df (Disk Free) Examples:

Terminal window
# Basic usage
df
# Filesystem 1K-blocks Used Available Use% Mounted on
# /dev/sda1 10240000 5120000 5120000 50% /
# Human-readable
df -h
# Filesystem Size Used Avail Use% Mounted on
# /dev/sda1 10G 5.0G 5.0G 50% /
# Show inode usage
df -i
# Filesystem Inodes IUsed IFree IUse% Mounted on
# /dev/sda1 655360 12345 643015 2% /
# Show specific filesystem
df -h /home
# Show filesystem type
df -T
# Filesystem Type Size Used Avail Use% Mounted on
# /dev/sda1 ext4 10G 5.0G 5.0G 50% /
# Exclude specific types
df -x tmpfs -x devtmpfs

du (Disk Usage) Examples:

Terminal window
# Basic directory usage
du /home/user
# 1234 /home/user/docs
# 5678 /home/user/downloads
# 8912 /home/user
# Human-readable
du -h /home/user
# 1.2M /home/user/docs
# 5.5M /home/user/downloads
# 8.7M /home/user
# Summary (total only)
du -sh /home/user
# 8.7M /home/user
# Show all files
du -ah /home/user
# 4K /home/user/.bashrc
# 8K /home/user/docs/readme.txt
# 1.2M /home/user/docs/photo.jpg
# Sort by size
du -h /home/user | sort -rh
# Show depth (max depth)
du -h --max-depth=1 /home
du -hd 1 /home # Same
# Exclude patterns
du -h --exclude="*.log" /var
# Apparent size vs actual
du -h --apparent-size file.txt

Common Problems and Solutions:

Terminal window
# Problem: df shows disk full, but du doesn't add up
# Cause: Deleted files still held open by processes
# Find processes holding deleted files
lsof | grep deleted
# Or
lsof +L1
# Solution: Restart process or empty file
> /path/to/deleted/file
# Problem: Find largest directories
du -sh /* 2>/dev/null | sort -rh | head -10
# Problem: Check specific mount point
df -h /var
du -sh /var/* 2>/dev/null
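
The deleted-but-still-open behaviour can be reproduced safely in a scratch file — a minimal sketch using only coreutils and /proc (the path /tmp/recovered.txt is just an example name):

```shell
tmp=$(mktemp)
echo "important data" > "$tmp"
exec 3< "$tmp"                   # hold the file open on descriptor 3
rm "$tmp"                        # unlink: du no longer sees it, df still counts it
# The content stays reachable through /proc until fd 3 is closed
cp "/proc/$$/fd/3" /tmp/recovered.txt
cat /tmp/recovered.txt
exec 3<&-                        # close the descriptor; the space is now freed
```

This is exactly why restarting the process (or truncating via /proc) releases "invisible" disk usage.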

Performance Comparison:

Terminal window
# df is fast (reads superblock)
time df -h
# real 0m0.003s
# du can be slow (scans all files)
time du -sh /home
# real 0m5.234s
# Use ncdu for interactive exploration
ncdu /home

26. Explain the difference between tar, gzip, and zip.


Answer:

Tool | Purpose | Compression | Multiple Files | Archive
tar | Archive only | No (by itself) | Yes | Creates .tar
gzip | Compression only | Yes | No | Compresses single file
zip | Archive + compress | Yes | Yes | Creates .zip

Tar (Tape Archive):

Terminal window
# Create archive (no compression)
tar -cf archive.tar /path/to/dir
# Extract archive
tar -xf archive.tar
# Create with gzip compression (.tar.gz)
tar -czf archive.tar.gz /path/to/dir
# Create with bzip2 compression (.tar.bz2)
tar -cjf archive.tar.bz2 /path/to/dir
# Create with xz compression (.tar.xz)
tar -cJf archive.tar.xz /path/to/dir
# Extract compressed archive (auto-detects)
tar -xf archive.tar.gz
tar -xf archive.tar.bz2
tar -xf archive.tar.xz
# View contents
tar -tf archive.tar
# Verbose output
tar -xvzf archive.tar.gz
# Extract specific files
tar -xf archive.tar.gz --wildcards "*.txt"
# Exclude patterns
tar -czf backup.tar.gz --exclude="*.log" --exclude="tmp/*" /home

Gzip (GNU Zip):

Terminal window
# Compress single file
gzip file.txt
# Creates file.txt.gz, deletes original
# Keep original
gzip -c file.txt > file.txt.gz
# OR
gzip -k file.txt
# Decompress
gunzip file.txt.gz
# OR
gzip -d file.txt.gz
# Compression levels (1-9, 9=best/slowest)
gzip -9 file.txt
# View compressed file
zcat file.txt.gz
zless file.txt.gz
# Combine with tar (most common)
tar -czf archive.tar.gz directory/

Zip:

Terminal window
# Create zip archive
zip -r archive.zip /path/to/dir
# Create with compression
zip -r -9 archive.zip /path/to/dir
# Extract
unzip archive.zip
# Extract to specific directory
unzip archive.zip -d /target/dir
# List contents
unzip -l archive.zip
# Update existing archive
zip -u archive.zip newfile.txt
# Add password protection
zip -e archive.zip file.txt
# Split into multiple files
zip -s 100m -r large.zip /path/to/dir
# Exclude patterns
zip -r archive.zip . -x "*.log" "*.tmp"

Comparison Examples:

Terminal window
# Create compressed archive with different tools
# tar + gzip (Linux standard)
tar -czf backup.tar.gz /home/user
# Size: 100MB
# Time: 5s
# tar + bzip

27. Explain the difference between ssh-keygen, ssh-copy-id, and ssh-agent.


Answer:

These three tools work together to provide secure, passwordless SSH authentication.

Tool | Purpose | Output
ssh-keygen | Generate key pairs | Private and public keys
ssh-copy-id | Install public key on server | Adds to authorized_keys
ssh-agent | Manage private keys in memory | Holds decrypted keys for session

ssh-keygen (Key Generation):

Terminal window
# Generate RSA key pair (default)
ssh-keygen -t rsa -b 4096 -C "user@example.com"
# Creates: ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public)
# Generate ED25519 key (more secure, faster)
ssh-keygen -t ed25519 -C "user@example.com"
# Generate with specific filename
ssh-keygen -t rsa -f ~/.ssh/mykey
# List key fingerprints
ssh-keygen -l -f ~/.ssh/id_rsa.pub
# Change passphrase
ssh-keygen -p -f ~/.ssh/id_rsa
# Convert private key format
ssh-keygen -p -m PEM -f ~/.ssh/id_rsa
# Generate host keys (for SSH server)
ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""
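
A throwaway key pair can be generated non-interactively to see the artifacts involved (assumes the OpenSSH client tools are installed; the key files use a temp path and are deleted afterwards):

```shell
keyfile=$(mktemp -u)                              # unused path; ssh-keygen creates the files
ssh-keygen -t ed25519 -N "" -C "demo key" -f "$keyfile" -q   # empty passphrase, quiet
ssh-keygen -lf "$keyfile.pub" | tee /tmp/fingerprint.txt     # print the fingerprint
rm -f "$keyfile" "$keyfile.pub"                   # clean up both halves of the pair
```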

Key Types Comparison:

Type | Bit Size | Security | Speed | Compatibility
RSA | 2048-4096 | Good | Moderate | Universal
ED25519 | 256 | Excellent | Fast | Modern systems only
ECDSA | 256-521 | Good | Fast | Varies by curve
DSA | 1024 | Weak | Slow | Deprecated

ssh-copy-id (Public Key Distribution):

Terminal window
# Copy key to remote server (password required once)
ssh-copy-id user@server
# Specify key file
ssh-copy-id -i ~/.ssh/mykey.pub user@server
# Use specific port
ssh-copy-id -p 2222 user@server
# Copy to multiple servers
for server in server1 server2 server3; do
ssh-copy-id user@$server
done
# What it does behind the scenes:
# 1. Connects to server using password
# 2. Creates ~/.ssh/ directory if needed
# 3. Appends public key to ~/.ssh/authorized_keys
# 4. Sets correct permissions (700 for .ssh, 600 for authorized_keys)
# Manual equivalent:
cat ~/.ssh/id_rsa.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"

ssh-agent (Key Management):

Terminal window
# Start ssh-agent in current shell
eval $(ssh-agent)
# Agent pid 12345
# Add private key to agent (prompts for passphrase once)
ssh-add ~/.ssh/id_rsa
# Add all default keys
ssh-add
# List loaded keys
ssh-add -l
# List full public keys
ssh-add -L
# Add key with timeout (removes after 1 hour)
ssh-add -t 3600 ~/.ssh/id_rsa
# Remove specific key
ssh-add -d ~/.ssh/id_rsa
# Remove all keys
ssh-add -D
# Lock agent with password
ssh-add -x
# Unlock agent
ssh-add -X
# Check if agent is running
ssh-add -l 2>/dev/null || echo "Agent not running"

Complete Passwordless SSH Setup:

Terminal window
# 1. Generate key pair
ssh-keygen -t ed25519 -C "work@example.com"
# 2. Start agent and add key
eval $(ssh-agent)
ssh-add ~/.ssh/id_ed25519
# 3. Copy to remote server
ssh-copy-id user@server
# 4. Test connection (no password prompt)
ssh user@server
# 5. Add agent startup to .bashrc
echo 'eval $(ssh-agent) > /dev/null 2>&1' >> ~/.bashrc
echo 'ssh-add -l > /dev/null 2>&1 || ssh-add' >> ~/.bashrc

SSH Agent Forwarding:

Terminal window
# Allow agent forwarding (in ~/.ssh/config)
Host myserver
ForwardAgent yes
# Or command line
ssh -A user@server
# Security note: Only forward to trusted servers
# Agent forwarding allows remote server to use your local keys

Troubleshooting:

Terminal window
# Debug SSH connection
ssh -vvv user@server
# Check key permissions
ls -la ~/.ssh/
# .ssh directory: 700 (drwx------)
# private keys: 600 (-rw-------)
# public keys: 644 (-rw-r--r--)
# authorized_keys: 600 (-rw-------)
# Test key authentication
ssh -o PreferredAuthentications=publickey user@server
# Clear agent keys
ssh-add -D
# Verify key is added
ssh-add -l | grep -q "$(ssh-keygen -lf ~/.ssh/id_rsa.pub | cut -d' ' -f2)"

28. Explain the difference between chroot, systemd-nspawn, and containers.


Answer:

These are isolation technologies with increasing levels of separation.

Technology | Isolation Level | Use Case
chroot | Filesystem only | Simple testing, legacy applications
systemd-nspawn | Filesystem + Process + Network | Lightweight containers, system testing
Containers (Docker/LXC) | Full OS-level virtualization | Application deployment, microservices

chroot (Change Root):

Terminal window
# Basic chroot setup
mkdir -p /newroot/bin /newroot/lib /newroot/lib64
# Copy essential binaries and libraries
cp /bin/bash /newroot/bin/
ldd /bin/bash # Check required libraries
cp /lib/x86_64-linux-gnu/libtinfo.so.6 /newroot/lib/
cp /lib/x86_64-linux-gnu/libc.so.6 /newroot/lib/
cp /lib64/ld-linux-x86-64.so.2 /newroot/lib64/
# Enter chroot
sudo chroot /newroot /bin/bash
# Limitations:
# - Still sees host kernel
# - Can access host devices
# - No process isolation
# - Network access not restricted
# Practical use: Password recovery
# Boot from live CD, mount root filesystem
sudo chroot /mnt/root
passwd root
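
The manual library copying can be automated by parsing ldd output. A sketch that only builds the tree under /tmp/minroot (a name chosen for this example) — actually entering the jail still requires root:

```shell
root=/tmp/minroot
mkdir -p "$root/bin"
cp /bin/sh "$root/bin/"
# Copy every shared object ldd reports for the binary, preserving the directory layout
ldd /bin/sh | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' |
while read -r lib; do
  mkdir -p "$root$(dirname "$lib")"
  cp "$lib" "$root$lib"
done
ls "$root"
# Entering the jail still needs root: sudo chroot "$root" /bin/sh
```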

systemd-nspawn (Container Manager):

Terminal window
# Create minimal container
sudo debootstrap bullseye /var/lib/machines/debian https://deb.debian.org/debian
# Boot container
sudo systemd-nspawn -D /var/lib/machines/debian
# Boot the container with systemd as init
sudo systemd-nspawn -D /var/lib/machines/debian -b
# -b = boot, runs the container's own init (systemd)
# With a private virtual Ethernet link to the host (default is to share host network)
sudo systemd-nspawn -D /var/lib/machines/debian --network-veth
# Run command in container
sudo systemd-nspawn -D /var/lib/machines/debian /bin/bash -c "apt update"
# List containers
machinectl list
# Start container as service
machinectl start debian
machinectl shell debian
# Differences from chroot:
# - Process isolation (separate PID namespace)
# - Network isolation (virtual interfaces)
# - Systemd as init process
# - Better resource management

Container Technologies Comparison:

LXC (Linux Containers):

Terminal window
# Create container
sudo lxc-create -n mycontainer -t debian
# List containers
lxc-ls
# Start container
lxc-start -n mycontainer -d
# Enter container
lxc-attach -n mycontainer
# Container config
/var/lib/lxc/mycontainer/config

Docker (Application Containers):

Terminal window
# Run container with volume isolation
docker run -it --rm ubuntu:latest /bin/bash
# With resource limits
docker run --cpus=1 --memory=512m -it ubuntu:latest
# Differences from systemd-nspawn:
# - Immutable infrastructure mindset
# - Image-based deployment
# - Registry integration
# - Orchestration ecosystem

Isolation Comparison Table:

Feature | chroot | systemd-nspawn | Docker/LXC
Filesystem | Limited isolation | Full isolation | Full isolation
Processes | Same namespace | Separate namespace | Separate namespace
Network | Same namespace | Separate (optional) | Separate (default)
User | Same namespace | Can be mapped | User namespaces
Devices | Access to host | Limited | Controlled
Init system | None | Can run systemd | Custom init
Resource limits | None | cgroups | cgroups

Creating Isolated Environment:

Terminal window
# 1. chroot - Simple filesystem isolation
sudo mkdir /jail
sudo chroot /jail /bin/sh
# 2. Unshare (Linux namespace tool) - Process isolation
sudo unshare --mount --pid --fork --net /bin/bash
# Creates new namespaces without full container stack
# 3. systemd-nspawn - Full container with minimal overhead
sudo systemd-nspawn -D /var/lib/machines/container --boot
# 4. Docker - Full container with image management
docker run -it ubuntu:latest

Security Considerations:

Terminal window
# chroot is NOT security isolation
# Escapes possible via:
# - /proc exposure
# - mounted devices
# - root privileges
# systemd-nspawn adds:
# - Private /tmp
# - Read-only bind mounts
# - Capability dropping
sudo systemd-nspawn -D /container --cap-drop=ALL --cap-add=CAP_NET_BIND_SERVICE
# Best practice: Use proper containers for security
# Combine with SELinux/AppArmor for additional security

29. Explain the difference between systemd, init, and SysVinit.


Answer:

These are init systems that manage system startup, services, and processes.

Feature | SysVinit | Upstart | systemd
First release | 1980s | 2006 | 2010
Parallel startup | No | Limited | Yes
Dependency handling | Manual order | Events | Declarative units
Service management | Scripts | Scripts + events | Unit files
Logging | syslog | syslog | journald
Process supervision | Basic | Basic | Built-in

SysVinit (System V Init - Traditional):

Terminal window
# Runlevels (0-6)
# 0 = halt, 1 = single-user, 2-5 = multi-user, 6 = reboot
/etc/inittab # Runlevel configuration
# Service scripts location
/etc/init.d/
# Example: /etc/init.d/nginx {start|stop|restart|status}
# Script format
#!/bin/sh
### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $network $remote_fs
# Required-Stop: $network $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
### END INIT INFO
case "$1" in
start)
start_service
;;
stop)
stop_service
;;
restart)
restart_service
;;
status)
status_service
;;
esac
# Managing services
service nginx start
service nginx status
update-rc.d nginx defaults # Enable at boot

SysVinit Limitations:

  • Sequential startup (slow)
  • No dependency resolution
  • No automatic restart
  • Manual ordering (S##K## files)
  • /etc/rc?.d/ directories with symlinks

Upstart (Event-based Init):

Terminal window
# Configuration files live in /etc/init/
# Example: /etc/init/nginx.conf
description "Nginx web server"
start on (filesystem and net-device-up)
stop on runlevel [!2345]
respawn
respawn limit 10 5
exec /usr/sbin/nginx -g "daemon off;"
# Commands
initctl list # List services
initctl status nginx
initctl start nginx
initctl stop nginx

systemd (Modern Init):

Terminal window
# Unit files locations
/etc/systemd/system/ # Local configuration
/usr/lib/systemd/system/ # Package-provided
/run/systemd/system/ # Runtime
# Service unit example (/etc/systemd/system/nginx.service)
[Unit]
Description=NGINX web server
Documentation=man:nginx(8)
After=network.target
Wants=network.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
# Service management
systemctl start nginx
systemctl stop nginx
systemctl restart nginx
systemctl reload nginx
systemctl status nginx
systemctl enable nginx # Enable at boot
systemctl disable nginx
# View dependencies
systemctl list-dependencies nginx
systemctl list-dependencies --reverse nginx

systemd Target Equivalents:

SysVinit Runlevel | systemd Target | Description
0 | runlevel0.target, poweroff.target | Halt
1 | runlevel1.target, rescue.target | Single-user mode
2 | runlevel2.target, multi-user.target | Multi-user without GUI
3 | runlevel3.target, multi-user.target | Full multi-user
4 | runlevel4.target, multi-user.target | Custom
5 | runlevel5.target, graphical.target | Multi-user with GUI
6 | runlevel6.target, reboot.target | Reboot

systemd Features:

Terminal window
# Socket activation
systemctl list-sockets
# Timer units (replaces cron)
systemctl list-timers
# System analysis
systemd-analyze
systemd-analyze blame # Startup time per unit
systemd-analyze critical-chain nginx
# View logs (journald)
journalctl -u nginx
journalctl -u nginx -f # Follow
# Manage user services (without sudo)
systemctl --user start service
# Resource control (cgroups)
systemctl set-property nginx MemoryMax=512M # MemoryLimit on older systemd
systemctl set-property nginx CPUQuota=50%

Migration Commands:

SysVinit | systemd
service nginx start | systemctl start nginx
service nginx status | systemctl status nginx
chkconfig nginx on | systemctl enable nginx
chkconfig nginx off | systemctl disable nginx
init 6 | systemctl reboot
init 0 | systemctl poweroff

30. Explain the difference between rsync and scp for file transfer.


Answer:

Feature | scp | rsync
Protocol | SSH (only) | SSH (default) or rsync daemon
Incremental | No (full copy each time) | Yes (only changed parts)
Compression | Optional (-C) | Built-in (-z)
Resume | No | Yes (partial transfers)
Sync deletion | No | Yes (--delete)
Preserve attributes | Basic (-p) | Comprehensive (-a)
Speed | Slower for repeated transfers | Faster for incremental sync
Platform | Everywhere with SSH | Requires rsync on both ends

scp (Secure Copy) - Simple File Transfer:

Terminal window
# Copy file to remote
scp file.txt user@server:/path/
# Copy file from remote
scp user@server:/path/file.txt .
# Copy directory recursively
scp -r /local/dir user@server:/remote/
# Preserve timestamps, permissions
scp -p file.txt user@server:/path/
# Compress during transfer
scp -C largefile.zip user@server:/path/
# Use specific port
scp -P 2222 file.txt user@server:/path/
# Copy between two remotes
scp user1@server1:/file user2@server2:/path/
# Limitations:
# - No partial transfer resume
# - Always copies entire file
# - No delta transfer (even if only 1 byte changed)

rsync (Remote Sync) - Efficient Synchronization:

Terminal window
# Basic copy (like scp)
rsync file.txt user@server:/path/
# Archive mode (preserves all attributes)
rsync -avz /local/dir/ user@server:/remote/dir/
# -a = archive (permissions, timestamps, etc.)
# -v = verbose
# -z = compress
# Sync with deletion (mirror)
rsync -avz --delete /local/dir/ user@server:/remote/dir/
# Removes files in remote that don't exist locally
# Dry run (see what would happen)
rsync -avz --dry-run /local/dir/ user@server:/remote/
# Partial transfer (resume)
rsync -avz --partial /large/file user@server:/remote/
# --partial keeps partial files for resume
# Bandwidth limit
rsync -avz --bwlimit=1000 /local/dir/ user@server:/remote/
# Exclude patterns
rsync -avz --exclude="*.log" --exclude="tmp/" /local/ user@server:/remote/
# Include/exclude with file
rsync -avz --exclude-from=exclude-list.txt /local/ user@server:/remote/
# SSH with custom port
rsync -avz -e "ssh -p 2222" /local/ user@server:/remote/
# Show progress
rsync -avz --progress /large/file user@server:/remote/
# Compare checksums (instead of timestamp + size)
rsync -avz --checksum /local/ user@server:/remote/
# Rsync daemon mode (faster, no encryption)
rsync -avz rsync://server/module/ /local/

rsync Algorithm (Delta Transfer):

How rsync works efficiently:
1. Splits file into blocks (usually 700-1000 bytes)
2. Computes rolling checksum (weak) and MD5 (strong) for each block
3. Sends checksums to remote
4. Remote checks for matching blocks
5. Only transfers blocks that don't match
6. Reassembles file on remote
Result: Only changed portions of files are transferred
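
The block-matching idea can be illustrated locally with coreutils: split two versions of a file into fixed 1 KiB blocks, hash each, and count the blocks that differ. This is only an illustration of the principle — real rsync uses a rolling weak checksum plus a strong hash, not fixed-offset MD5s:

```shell
work=$(mktemp -d)
head -c 8192 /dev/zero > "$work/old"             # 8 KiB file = 8 blocks of 1 KiB
cp "$work/old" "$work/new"
printf 'X' | dd of="$work/new" bs=1 seek=100 conv=notrunc 2>/dev/null  # change one byte
split -b 1024 -d "$work/old" "$work/old."        # old.00 ... old.07
split -b 1024 -d "$work/new" "$work/new."
changed=0
for i in 00 01 02 03 04 05 06 07; do
  a=$(md5sum "$work/old.$i" | cut -d' ' -f1)
  b=$(md5sum "$work/new.$i" | cut -d' ' -f1)
  if [ "$a" != "$b" ]; then changed=$((changed + 1)); fi
done
echo "$changed of 8 blocks changed" | tee /tmp/delta_demo.txt
```

Only the single changed block would need to cross the wire — the other seven match and are reused.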

Performance Comparison:

Terminal window
# Scenario: Syncing 10GB directory daily with 100MB changes
# scp (full copy each time)
scp -r /data/ user@server:/backup/
# Transfer: 10GB daily
# Time: ~10-15 minutes
# rsync (delta transfer)
rsync -avz /data/ user@server:/backup/
# Transfer: ~100MB daily
# Time: ~1 minute
# Benchmark example
time rsync -avz /source/ /dest/
time scp -r /source/ /dest/

rsync as Backup Tool:

Terminal window
# Incremental backups with hard links
rsync -avz --delete --link-dest=/backup/previous /source/ /backup/current/
# Remote backup with snapshot
rsync -avz --delete --backup --backup-dir=/backup/$(date +%Y%m%d) /source/ user@server:/backup/current/
# Exclude system files
rsync -avz --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*"} / user@server:/backup/system/

When to Use Which:

Terminal window
# Use scp when:
# - One-time file transfer
# - rsync not available on remote
# - Simple copy needed
# - Transferring single file
# Use rsync when:
# - Recurring syncs (backups, mirrors)
# - Large directories with small changes
# - Need to preserve all attributes
# - Require resume capability
# - Mirroring with deletion
# - Bandwidth is limited

Advanced rsync Examples:

Terminal window
# Filter files by size (only files > 10MB)
rsync -avz --min-size=10M /source/ /dest/
# Filter by modification time
rsync -avz --files-from=<(find /source -mtime -7) /source/ /dest/
# Use rsync daemon for local network (faster)
# Server side: create /etc/rsyncd.conf
[backup]
path = /backup
comment = Backup area
read only = yes
# Start daemon
rsync --daemon
# Client side (no SSH overhead)
rsync -avz rsync://server/backup/ /local/

Part 2: 20 Scenario-Based Linux Questions with Answers


1. Scenario: Disk is full, but can’t find what’s using space


Situation: df -h shows disk at 98% usage, but du -sh /* doesn’t add up to the total.

Investigation:

Terminal window
# 1. Check for deleted files still held open
lsof +L1
# Shows files with deleted but still open
# Process ID, filename, size
# 2. Find processes holding deleted files
lsof | grep deleted
# 3. Check inode usage
df -i
# If inode usage high, many small files
# 4. Find large deleted files
find /proc/*/fd -ls 2>/dev/null | grep deleted
# 5. Check hidden directories
du -sh /tmp /var/tmp /home/*/.cache
# 6. Find largest files across system
find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null
# 7. Check specific mount points
du -sh /var/* 2>/dev/null | sort -rh | head -10

Resolution:

Terminal window
# If deleted files are held open, restart the process
kill -HUP $(lsof -t /path/to/deleted/file)
# Or restart service
systemctl restart service_name
# Clear journal logs
journalctl --vacuum-size=500M
# Clear package cache
apt clean # Debian/Ubuntu
dnf clean all # RHEL
# Remove old kernels (Debian/Ubuntu)
apt autoremove --purge
# Remove old kernels (RHEL)
package-cleanup --oldkernels --count=2

2. Scenario: Server responding slowly, high load average


Situation: Load average > number of CPU cores, system feels sluggish.

Investigation:

Terminal window
# 1. Check load average and CPU
uptime
top -bn1 | head -20
# 2. Identify top CPU processes
ps aux --sort=-%cpu | head -10
# 3. Check for processes in D state (uninterruptible sleep)
ps aux | awk '$8 ~ /^D/ {print}'
# D state = waiting for I/O, can't be killed
# 4. Check I/O wait
iostat -x 1 5
# Look for high %iowait
# 5. Check per-device latency
iostat -x 1 3
# await consistently > 10ms indicates a disk bottleneck
# 6. Check memory pressure
free -h
vmstat 1 5
# Look for si/so (swap in/out) > 0
# 7. Check process waiting for I/O
iotop -o
# 8. Check system logs
journalctl -p err -b | tail -50
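
The rule of thumb — sustained load above the core count means CPU contention — can be checked directly from /proc with a small sketch:

```shell
cores=$(nproc)
load=$(cut -d' ' -f1 /proc/loadavg)     # 1-minute load average
awk -v l="$load" -v c="$cores" 'BEGIN {
  printf "load %.2f over %d cores = %.2f per core\n", l, c, l / c
  if (l / c > 1) print "WARNING: runnable tasks exceed CPU capacity"
}' | tee /tmp/load_check.txt
```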

Resolution:

Terminal window
# CPU bound: Renice CPU-intensive processes
renice +10 -p $(pgrep -f "cpu_hog")
# I/O bound: Limit I/O priority
ionice -c2 -n7 -p PID
# Memory pressure: Identify memory hogs
ps aux --sort=-%mem | head -10
# Swap usage: Check what's swapping
smem -r -k | head -10
# Kill zombie processes
ps aux | grep defunct
# Find parent and restart/kill it

3. Scenario: Can’t SSH into server, but ping works


Situation: Server responds to ping but SSH connection times out or rejects.

Investigation from client:

Terminal window
# 1. Check SSH connectivity with verbose
ssh -vvv user@server
# 2. Check if port is open
nc -zv server 22
telnet server 22
# 3. Check SSH key issues
ssh -o PreferredAuthentications=password user@server
# 4. Check firewall from client
nmap -p 22 server
# 5. Check if SSH service is running (from console if accessible)
# Via out-of-band management or physical console
systemctl status sshd
ss -tlnp | grep :22

Investigation on server (if console accessible):

Terminal window
# 1. Check SSH service
systemctl status sshd
journalctl -u sshd -n 50
# 2. Check network connectivity
ip addr show
ss -tlnp | grep :22
# 3. Check firewall rules
iptables -L -n | grep 22
firewall-cmd --list-all
# 4. Check SSH configuration
grep -E "PermitRootLogin|PasswordAuthentication|Port" /etc/ssh/sshd_config
# 5. Check hosts.allow/deny
cat /etc/hosts.deny
cat /etc/hosts.allow
# 6. Check SELinux
getenforce
ausearch -m avc -ts recent | grep ssh

Common Resolutions:

Terminal window
# 1. Restart SSH service
systemctl restart sshd
# 2. Fix firewall
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
firewall-cmd --add-service=ssh --permanent
# 3. Fix SELinux
restorecon -Rv /etc/ssh
setsebool -P ssh_sysadm_login on
# 4. Add user to allow list
echo "AllowUsers username" >> /etc/ssh/sshd_config
systemctl reload sshd
# 5. Increase verbosity for debugging
# In /etc/ssh/sshd_config:
LogLevel DEBUG
systemctl reload sshd
tail -f /var/log/auth.log

4. Scenario: User can’t run sudo commands


Situation: User attempts sudo but gets “user is not in the sudoers file” error.

Investigation:

Terminal window
# 1. Check user's groups
id username
groups username
# 2. Check sudoers file syntax
visudo -c
# 3. Check if user is in sudo group
getent group sudo
getent group wheel
# 4. Check sudoers includes
grep -r "^%sudo" /etc/sudoers.d/
grep "^%sudo" /etc/sudoers
# 5. Check sudo logs
journalctl _COMM=sudo -n 20
tail -50 /var/log/auth.log | grep sudo
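
Steps 1 and 3 can be combined into a quick membership check (the group names sudo and wheel cover the Debian- and RHEL-family defaults):

```shell
user=$(id -un)
# id -nG prints the user's groups; match sudo or wheel as whole words
if id -nG "$user" | tr ' ' '\n' | grep -qx -e sudo -e wheel; then
  echo "$user is in an admin group" | tee /tmp/sudo_check.txt
else
  echo "$user is NOT in sudo or wheel" | tee /tmp/sudo_check.txt
fi
```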

Resolution:

Terminal window
# As root, add user to sudo group
usermod -aG sudo username
# Or for RHEL/CentOS
usermod -aG wheel username
# Verify
groups username
# If group membership doesn't work, add directly to sudoers
visudo
# Add line:
username ALL=(ALL:ALL) ALL
# Check for syntax errors
visudo -c
# For NIS/LDAP users, check nsswitch.conf
cat /etc/nsswitch.conf | grep sudoers

5. Scenario: Cron jobs not running

Situation: Scheduled cron jobs are not executing.

Investigation:

Terminal window
# 1. Check cron service
systemctl status cron # Debian/Ubuntu
systemctl status crond # RHEL/CentOS
# 2. Check cron logs
grep CRON /var/log/syslog | tail -50
journalctl -u cron -n 50
# 3. Check user crontab
crontab -l -u username
# 4. Check system crontab
cat /etc/crontab
ls -la /etc/cron.d/
# 5. Check cron directories
ls -la /etc/cron.hourly/
ls -la /etc/cron.daily/
# 6. Check environment issues
# Add to crontab for debugging
* * * * * env > /tmp/cron_env.txt 2>&1
* * * * * /bin/bash -c "echo 'Running' >> /tmp/cron.log" 2>&1
# 7. Check mail for cron output
mail
# 8. Check permissions
ls -la /var/spool/cron/crontabs/
# Should be 600 (-rw-------)

Common Issues and Fixes:

Terminal window
# 1. Missing environment variables
# In crontab, add:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# 2. Script not executable
chmod +x /path/to/script.sh
# 3. Full paths required
# Bad: * * * * * myscript.sh
# Good: * * * * * /home/user/bin/myscript.sh
# 4. Redirect output to see errors
* * * * * /path/to/script.sh >> /tmp/script.log 2>&1
# 5. Restart cron
systemctl restart cron
# 6. Check for syntax errors
crontab -e
# Remove empty lines or invalid entries
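
The missing-environment failure mode can be reproduced without waiting for cron, by wiping the environment the way cron effectively does (a sketch; cron's real default environment varies by distro):

```shell
# Interactive shell: rich PATH inherited from login
echo "login PATH: $PATH"
# cron-like: emptied environment with only a minimal PATH and SHELL
env -i PATH=/usr/bin:/bin SHELL=/bin/sh /bin/sh -c \
  'echo "cron-like PATH: $PATH"' | tee /tmp/cron_env_demo.txt
```

Anything that works interactively but lives outside the minimal PATH will fail under cron — hence the full-paths rule above.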

6. Scenario: Accidentally deleted important file


Situation: You deleted a critical file and need to recover it.

Immediate Actions:

Terminal window
# 1. Stop using the filesystem immediately!
# Any writes may overwrite deleted data
# 2. Check if file is still open by a process
lsof | grep deleted | grep filename
# If found, recover from /proc
cp /proc/PID/fd/FD /path/to/recover/file
# 3. Check if file is in backup
# Restore from backup

Recovery Tools:

Terminal window
# extundelete (for ext3/ext4)
umount /dev/sda1 # Unmount first if possible
extundelete /dev/sda1 --restore-file /path/to/file
# TestDisk/PhotoRec
photorec /dev/sda1
# Recovers by file signature, not filename
# debugfs (for ext filesystems)
debugfs /dev/sda1
debugfs: lsdel # List deleted inodes
debugfs: logdump -i <inode>
debugfs: dump <inode> /recovered/file
# grep the raw block device for known file content
grep -a -B10 -A10 "unique_string" /dev/sda1 > recovered.txt

Prevention:

Terminal window
# Implement regular backups
# Use version control
# Enable file versioning in filesystem (ZFS, btrfs)
# Use trash-cli instead of rm
alias rm='trash-put'

7. Scenario: Disk shows free space but new files can’t be created

Situation: df -h shows space available but cannot create new files.

Investigation:

Terminal window
# 1. Check inode usage
df -i
# Filesystem Inodes IUsed IFree IUse% Mounted on
# /dev/sda1 655360 655350 10 100% /
# 2. Find directory with most files
find / -xdev -type d -size +1M -exec ls -ld {} \; 2>/dev/null
# 3. Count files per directory
find /home -xdev -type f | cut -d/ -f3 | sort | uniq -c | sort -nr
# 4. Find directories with many files
for dir in /*; do
echo -n "$dir: "; find $dir -xdev -type f 2>/dev/null | wc -l
done
# 5. Check for mail spool
ls -la /var/spool/mail/
ls -la /var/mail/
# 6. Check for session files
ls -la /var/tmp/
ls -la /tmp/
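
The "many small files" pattern behind inode exhaustion can be demonstrated on a scratch tree — each file costs one inode regardless of size, so a per-directory file count identifies the culprit:

```shell
base=$(mktemp -d)
mkdir -p "$base/many" "$base/few"
for i in 1 2 3 4 5; do touch "$base/many/f$i"; done
touch "$base/few/f1"
# Count regular files under each top-level directory, biggest first
for d in "$base"/*; do
  printf '%s: %s files\n' "$d" "$(find "$d" -type f | wc -l)"
done | sort -t: -k2 -rn | tee /tmp/inode_demo.txt
```

The same loop pointed at / (as in step 4 above) narrows down which tree is eating inodes.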

Resolution:

Terminal window
# 1. Delete old log files
find /var/log -name "*.log" -mtime +30 -delete
# 2. Clean package cache
apt clean
dnf clean all
# 3. Remove old kernels
apt autoremove --purge
package-cleanup --oldkernels --count=2
# 4. Clear journal logs
journalctl --vacuum-size=500M
# 5. Clean Docker (if used)
docker system prune -a
# 6. Find and delete empty files (careful: some empty lock/placeholder files are needed)
find / -type f -empty -delete 2>/dev/null
# 7. For mail spool, setup logrotate for mail
# /etc/logrotate.d/mail
/var/spool/mail/* {
    daily
    missingok
    rotate 7
    compress
}

8. Scenario: Network interface not coming up after reboot


Situation: Network interface doesn’t get IP after reboot.

Investigation:

Terminal window
# 1. Check interface status
ip link show
ip addr show
ethtool eth0
# 2. Check NetworkManager
nmcli dev status
nmcli con show
# 3. Check network config files
# Debian/Ubuntu
cat /etc/network/interfaces
cat /etc/netplan/*.yaml
# RHEL/CentOS
cat /etc/sysconfig/network-scripts/ifcfg-eth0
# 4. Check if interface is down
ifconfig eth0 up
ip link set eth0 up
# 5. Check DHCP client
systemctl status dhcpcd
systemctl status networking
# 6. Check kernel messages
dmesg | grep -i eth
journalctl -b | grep -i network

Resolution:

Terminal window
# Debian/Ubuntu netplan
cat > /etc/netplan/01-netcfg.yaml << EOF
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      optional: true
EOF
netplan apply
# RHEL/CentOS
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF
systemctl restart network
# Enable NetworkManager
systemctl enable NetworkManager
systemctl start NetworkManager

9. Scenario: Application can’t bind to port


Situation: Application fails to start because “address already in use”.

Investigation:

Terminal window
# 1. Find what's using the port
ss -tlnp | grep :8080
netstat -tlnp | grep 8080
lsof -i :8080
# 2. Check if application is already running
ps aux | grep appname
# 3. Check if port is in TIME_WAIT
ss -tan | grep 8080
# 4. Check for zombie processes
ps aux | grep defunct
# 5. Check socket files
lsof | grep socket | grep appname

Resolution:

Terminal window
# 1. Kill the process using the port (graceful first, then force)
kill -15 PID
sleep 5
kill -9 PID # only if the process is still running

10. Scenario: System clock is wrong, affecting logs and SSL certificates


Situation: Application SSL certificates failing, logs show incorrect timestamps, and NTP sync fails.

Investigation:

Terminal window
# 1. Check current system time and hardware clock
date
timedatectl status
hwclock --show
# 2. Check NTP synchronization status
timedatectl show-timesync
systemctl status chronyd
systemctl status ntpd
# 3. Check timezone configuration
ls -la /etc/localtime
cat /etc/timezone
timedatectl list-timezones | grep -i region
# 4. Check if time is drifting
chronyc tracking
ntpq -p
# 5. Check system logs for time issues
journalctl -u chronyd -n 50
grep -i time /var/log/syslog | tail -20

Resolution:

Terminal window
# 1. Set correct timezone
sudo timedatectl set-timezone Asia/Kolkata
sudo timedatectl set-timezone America/New_York
# 2. Enable and start NTP service
sudo systemctl enable chronyd
sudo systemctl start chronyd
# 3. Force time synchronization
sudo chronyc -a makestep
sudo ntpdate -s time.nist.gov
# 4. Sync hardware clock to system time
sudo hwclock --systohc
# 5. Check synchronization status
sudo timedatectl set-ntp true
timedatectl status
# 6. For virtual machines, disable host time sync if causing issues
# In /etc/systemd/timesyncd.conf:
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com

11. Scenario: Users can’t change their passwords


Situation: Users report “Authentication token manipulation error” when trying to change passwords.

Investigation:

Terminal window
# 1. Check password aging policies
sudo chage -l username
# 2. Check disk space (full disk prevents password change)
df -h /
df -h /var
# 3. Check /etc/shadow permissions
ls -la /etc/shadow
# Should be: -rw-r----- 1 root shadow
# 4. Check PAM configuration
sudo grep -r "password" /etc/pam.d/
cat /etc/pam.d/common-password
# 5. Check if account is locked
sudo passwd -S username
# If shows "LK", account locked
# 6. Check SELinux context
ls -Z /etc/shadow
restorecon -v /etc/shadow
# 7. Check if password expiration is preventing change
sudo chage -l username
# If "Password expires: password must be changed"

Resolution:

Terminal window
# 1. Fix permissions
sudo chmod 640 /etc/shadow
sudo chown root:shadow /etc/shadow
# 2. Unlock account if locked
sudo passwd -u username
# 3. Force password change
sudo passwd -e username
# 4. Fix SELinux context
sudo restorecon -v /etc/shadow
sudo restorecon -v /etc/passwd
# 5. Clear any stale locks
sudo rm -f /etc/passwd.lock
sudo rm -f /etc/shadow.lock
# 6. Check disk space and clear if needed
sudo apt clean
sudo journalctl --vacuum-size=100M
# 7. Reset password as root
sudo passwd username

12. Scenario: NFS mounts failing after reboot


Situation: NFS shares mounted in /etc/fstab don’t mount after system reboot.

Investigation:

Terminal window
# 1. Check /etc/fstab entries
cat /etc/fstab | grep nfs
# 2. Check NFS service status
systemctl status nfs-client.target
systemctl status nfs-common
# 3. Try manual mount
sudo mount -a
sudo mount -t nfs server:/export /mnt/nfs
# 4. Check NFS server reachability
showmount -e nfs-server
rpcinfo -p nfs-server
# 5. Check network availability during boot
systemctl list-units | grep network
# Network might not be ready when mount occurs
# 6. Check mount logs
journalctl -u nfs-client -n 50
dmesg | grep -i nfs
# 7. Check mount point existence
ls -ld /mnt/nfs

Resolution:

Terminal window
# 1. Add _netdev option to fstab
server:/export /mnt/nfs nfs defaults,_netdev,noauto,x-systemd.automount 0 0
# 2. Use systemd automount (x-systemd.automount in fstab generates the unit;
# the unit name must match the mount point: /mnt/nfs -> mnt-nfs.automount)
sudo systemctl daemon-reload
sudo systemctl enable mnt-nfs.automount
sudo systemctl start mnt-nfs.automount
# 3. Alternative: Add network dependency to mount
# Create systemd mount unit
cat > /etc/systemd/system/mnt-nfs.mount << EOF
[Unit]
Description=NFS Mount
After=network-online.target
Wants=network-online.target
[Mount]
What=server:/export
Where=/mnt/nfs
Type=nfs
Options=defaults,_netdev
[Install]
WantedBy=multi-user.target
EOF
# 4. Add timeout to prevent hanging
# /etc/fstab with timeout:
server:/export /mnt/nfs nfs defaults,_netdev,timeo=30,retrans=3 0 0
# 5. Check NFS client kernel modules
sudo modprobe nfs
echo "nfs" | sudo tee /etc/modules-load.d/nfs.conf
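systemd derives mount and automount unit names from the mount path, which is why the unit for /mnt/nfs is mnt-nfs.automount. A rough sketch of the naming rule (simple case only: leading slash dropped, remaining slashes become dashes; the real systemd-escape -p also escapes special characters):

```shell
# Sketch of systemd's path-to-unit-name mapping (simple case only).
path_to_unit() {
  printf '%s' "$1" | sed 's|^/||; s|/|-|g'
}
echo "$(path_to_unit /mnt/nfs).automount"   # → mnt-nfs.automount
```

Getting this name wrong is a common reason `systemctl enable` reports "unit not found" for a mount you just defined.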

13. Scenario: Logs growing too fast, filling disk


Situation: Application logs are growing at 10GB/hour, causing disk space alerts.

Investigation:

Terminal window
# 1. Find largest log files
sudo find /var/log -type f -size +100M -exec ls -lh {} \; 2>/dev/null
# 2. Check log rotation status
ls -la /etc/logrotate.d/
cat /etc/logrotate.d/application
# 3. Check what's writing to logs
lsof /var/log/app.log
sudo tail -f /var/log/app.log | head -100
# 4. Check application logging level
grep -i log /etc/app/config
# Might be set to DEBUG instead of INFO/ERROR
# 5. Check if logrotate is running
sudo systemctl status logrotate.timer # distros trigger it via this timer or cron.daily
sudo logrotate -d /etc/logrotate.conf
# 6. Check for infinite error loops
sudo tail -1000 /var/log/app.log | sort | uniq -c | sort -nr
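The sort | uniq -c pipeline in step 6 is the quickest way to spot an error loop; here it runs against sample data standing in for /var/log/app.log:

```shell
# Sketch: rank repeated log lines by frequency — a tight error loop
# surfaces immediately at the top of the list.
log=$(mktemp)
printf 'ERROR: db timeout\nERROR: db timeout\nERROR: db timeout\nWARN: slow query\n' > "$log"
sort "$log" | uniq -c | sort -rn | head -5
rm -f "$log"
```

A single line repeated thousands of times usually points at a retry loop with no backoff, which is the real fix rather than rotating the log faster.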

Resolution:

Terminal window
# 1. Immediately truncate the log
sudo truncate -s 0 /var/log/app.log
# OR
sudo sh -c '> /var/log/app.log' # the redirection itself needs root, not cat
# 2. Configure aggressive logrotate
cat > /etc/logrotate.d/application << EOF
/var/log/app.log {
    daily
    rotate 3
    maxsize 100M
    compress
    delaycompress
    missingok
    notifempty
    create 0640 appuser appgroup
    postrotate
        systemctl reload application
    endscript
}
EOF
# 3. Reduce application logging level
# In application config, change:
log_level = INFO # instead of DEBUG
# 4. Implement rate limiting in application if possible
# Configure max log lines per second
# 5. Set up log monitoring alerts
# Use logwatch or custom script to alert on rapid growth
# 6. Use systemd journal limits
# /etc/systemd/journald.conf
SystemMaxUse=1G
SystemMaxFileSize=100M
MaxRetentionSec=1week

14. Scenario: Web server can’t write to upload directory


Situation: Web application returns “Permission denied” when users try to upload files.

Investigation:

Terminal window
# 1. Check directory permissions
ls -la /var/www/uploads/
# 2. Check which user web server runs as
ps aux | grep -E "nginx|apache"
# Usually www-data (Debian) or apache/nobody (RHEL)
# 3. Check SELinux context (RHEL/CentOS)
ls -Z /var/www/uploads/
sudo ausearch -m avc -ts recent | grep uploads
# 4. Check AppArmor (Ubuntu/Debian)
sudo aa-status | grep apache
sudo journalctl -u apparmor | grep DENIED
# 5. Check disk space and inodes
df -h /var
df -i /var
# 6. Check mount options (noexec, nosuid, etc.)
mount | grep /var

Resolution:

Terminal window
# 1. Set correct ownership
sudo chown -R www-data:www-data /var/www/uploads/
# RHEL/CentOS:
sudo chown -R apache:apache /var/www/uploads/
# 2. Set correct permissions
sudo chmod 755 /var/www/uploads/
# For uploads that need public write (careful!):
sudo chmod 1777 /var/www/uploads/
# 3. Fix SELinux context
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/uploads(/.*)?"
sudo restorecon -Rv /var/www/uploads/
# Allow httpd to write
sudo setsebool -P httpd_unified on
# 4. Fix AppArmor
# In /etc/apparmor.d/usr.sbin.apache2:
/var/www/uploads/ rw,
/var/www/uploads/** rw,
sudo systemctl reload apparmor
# 5. If using PHP, check open_basedir
# /etc/php/*/apache2/php.ini
open_basedir = /var/www/html:/var/www/uploads
# 6. Verify with test
sudo -u www-data touch /var/www/uploads/test.txt
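To script the permission checks above rather than eyeball ls -la output, GNU stat can emit just the mode and ownership (demonstrated on a scratch directory, not /var/www):

```shell
# Sketch: machine-readable mode/ownership check with GNU stat -c.
dir=$(mktemp -d)
chmod 755 "$dir"
stat -c '%a %U:%G' "$dir"   # prints octal mode, then owner:group
rmdir "$dir"
```

Comparing `stat -c '%a'` against the expected octal value is handy in health-check scripts that guard upload directories.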

15. Scenario: Process not responding, won’t kill with SIGTERM


Situation: A process hangs and doesn’t respond to kill -15, affecting graceful shutdown.

Investigation:

Terminal window
# 1. Check process state
ps aux | grep PID
# State D = uninterruptible sleep (usually I/O)
# State Z = zombie (already dead, parent not reaping)
# 2. Check what the process is waiting on
cat /proc/PID/stack
cat /proc/PID/wchan
strace -p PID
# 3. Check for I/O hangs
lsof -p PID
cat /proc/PID/io
# 4. Check for network hangs
ss -p | grep PID
# 5. Check if process is in D state (uninterruptible sleep)
ps -eo pid,stat,wchan,cmd | grep PID
# D state means waiting for kernel I/O
# 6. Check parent process relationship
pstree -p PID
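The state codes ps reports come straight from /proc: field 3 of /proc/PID/stat (valid as long as the command name contains no spaces). A small sketch, demonstrated on the current shell so no hung process is needed:

```shell
# Sketch: read a process state directly from /proc/PID/stat (field 3).
state=$(awk '{print $3}' "/proc/$$/stat")
case "$state" in
  D) echo "uninterruptible sleep - SIGKILL will not help until the I/O returns" ;;
  Z) echo "zombie - signal the parent so it reaps the child" ;;
  *) echo "state $state - normally killable" ;;
esac
```

Looping this over the PIDs of a stuck service quickly separates the D-state (wait out the I/O) cases from the Z-state (kill the parent) ones.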

Resolution:

Terminal window
# 1. Try SIGINT (Ctrl+C equivalent)
kill -2 PID
# 2. Try SIGHUP (reload config, sometimes works)
kill -1 PID
# 3. For D state (I/O hang) - wait for I/O timeout
# If NFS mount hung, unmount first
sudo umount -f /mnt/nfs
# 4. For zombie processes - kill parent
ps -o ppid= -p PID | xargs kill -15
# 5. Force kill if absolutely necessary
kill -9 PID
# 6. If process can't be killed and system stuck
# Try sync and reboot
sudo sync
sudo reboot -f
# 7. For NFS-related hangs
# Kill processes accessing NFS first
fuser -km /mnt/nfs
umount -l /mnt/nfs # Lazy unmount

16. Scenario: Kernel panic or system freeze


Situation: Server randomly freezes or kernel panics, requiring hard reboot.

Investigation After Reboot:

Terminal window
# 1. Check last kernel messages
sudo dmesg | grep -i panic
sudo dmesg | grep -i "kernel bug"
# 2. Check system logs from before crash
sudo journalctl -b -1 -k | grep -i panic
sudo tail -1000 /var/log/kern.log | grep -i "call trace"
# 3. Check hardware errors
sudo journalctl -b -1 | grep -i "hardware error"
sudo mcelog --client # For machine check exceptions
# 4. Check memory errors
sudo grep -i "machine check" /var/log/messages
sudo dmidecode -t memory
# 5. Check CPU temperature
sensors
ipmitool sensor
# 6. Check for kernel oops before panic
sudo dmesg | grep -i oops

Resolution and Prevention:

Terminal window
# 1. Enable kernel crash dumps (kdump)
sudo yum install kexec-tools crash
# In /etc/kdump.conf, set the dump location: path /var/crash
sudo systemctl enable kdump
# 2. Configure sysrq for emergency debugging
echo 1 | sudo tee /proc/sys/kernel/sysrq
# /etc/sysctl.conf:
kernel.sysrq = 1
# 3. Use Magic SysRq keys for debugging
# Alt+SysRq+? commands:
# r - raw keyboard mode
# e - SIGTERM all processes
# i - SIGKILL all processes
# s - sync
# u - remount read-only
# b - reboot
# 4. Update kernel and firmware
sudo apt update && sudo apt install --only-upgrade linux-firmware
sudo yum update kernel
# 5. Check for known hardware issues
sudo lshw
sudo lspci -v
# 6. Set up remote logging for crash analysis
# In /etc/rsyslog.conf, forward everything to a central syslog server:
*.* @logs.example.com

17. Scenario: Package manager broken after failed update


Situation: apt or yum fails with dependency conflicts after interrupted update.

Investigation (Debian/Ubuntu):

Terminal window
# 1. Check apt error
sudo apt update
sudo apt upgrade
# Note specific error messages
# 2. Check dpkg status
sudo dpkg --audit
sudo dpkg --configure -a
# 3. Check held packages
sudo apt-mark showhold
# 4. Check broken dependencies
sudo apt --fix-broken install
# 5. Check dpkg lock files
sudo lsof /var/lib/dpkg/lock
sudo lsof /var/lib/apt/lists/lock

Resolution (Debian/Ubuntu):

Terminal window
# 1. Remove lock files (if no process holding)
sudo rm /var/lib/dpkg/lock-frontend
sudo rm /var/lib/dpkg/lock
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
# 2. Reconfigure dpkg
sudo dpkg --configure -a
# 3. Fix broken packages
sudo apt --fix-broken install
# 4. Clean package cache
sudo apt clean
sudo apt autoclean
# 5. Force install specific package
sudo dpkg -i --force-overwrite /var/cache/apt/archives/package.deb
# 6. Remove problematic package
sudo dpkg --remove --force-remove-reinstreq package-name
# 7. Update again
sudo apt update
sudo apt upgrade

Resolution (RHEL/CentOS):

Terminal window
# 1. Clean yum cache
sudo yum clean all
sudo dnf clean all
# 2. Check rpm database
sudo rpm --rebuilddb
# 3. Fix broken dependencies
sudo yum check
sudo dnf check
# 4. Remove problematic package
sudo rpm -e --noscripts package-name
# 5. Force reinstall
sudo yum reinstall package-name
sudo dnf reinstall package-name
# 6. Check for duplicate packages
sudo package-cleanup --dupes
sudo package-cleanup --cleandupes

18. Scenario: Docker containers can’t communicate with each other


Situation: Multiple Docker containers on same host can’t ping or connect to each other.

Investigation:

Terminal window
# 1. Check container networks
docker network ls
docker inspect container1 | grep Network
docker inspect container2 | grep Network
# 2. Check if containers are on same network
docker network inspect bridge
# 3. Check container IPs
docker exec container1 ip addr
docker exec container2 ip addr
# 4. Test connectivity
docker exec container1 ping container2-ip
docker exec container1 ping container2-name
# 5. Check DNS resolution
docker exec container1 cat /etc/resolv.conf
docker exec container1 nslookup container2
# 6. Check firewall rules
sudo iptables -L -n | grep DOCKER
sudo firewall-cmd --list-all

Resolution:

Terminal window
# 1. Create custom bridge network
docker network create --driver bridge my-network
# 2. Connect containers to same network
docker network connect my-network container1
docker network connect my-network container2
# 3. Or run containers with custom network
docker run -d --network my-network --name app1 app-image
docker run -d --network my-network --name app2 app-image
# 4. Enable inter-container communication
# In /etc/docker/daemon.json:
{
  "icc": true,
  "iptables": true
}
sudo systemctl restart docker
# 5. Add firewall rules if needed
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload
# 6. For user-defined networks, containers resolve by name
docker exec app1 ping app2 # Works by container name

19. Scenario: SSH connection drops after successful login


Situation: SSH connects successfully but drops immediately after login.

Investigation:

Terminal window
# 1. Verbose SSH from client
ssh -vvv user@server
# Look for error after authentication
# 2. Check SSH server logs
sudo journalctl -u sshd -n 50
sudo tail -50 /var/log/auth.log | grep ssh
# 3. Check user shell
getent passwd username
# Should show valid shell, not /bin/false or /sbin/nologin
# 4. Check if shell exists
cat /etc/shells
# 5. Check profile/rc files for errors
# As root, inspect user's dotfiles
sudo -u username bash -li -x -c 'exit' 2>&1 | tail -40 # -l/-i load profile and rc files
# 6. Check forced commands in authorized_keys
cat ~username/.ssh/authorized_keys
# Look for command="..." restrictions
# 7. Check SSH daemon configuration
grep -E "ForceCommand|Match" /etc/ssh/sshd_config

Resolution:

Terminal window
# 1. Fix user's shell
sudo usermod -s /bin/bash username
# 2. Create user's home directory if missing
sudo mkdir -p /home/username
sudo chown username:username /home/username
# 3. Remove problematic .profile/.bashrc entries
# Login as root, check for errors
sudo -u username bash
# Then check what's failing
# 4. Temporarily disable shell restrictions
# In /etc/ssh/sshd_config:
# Comment out ForceCommand
# Remove Match User restrictions
sudo systemctl reload sshd
# 5. Check for disk quotas preventing shell access
sudo quota username
# 6. Add debug to .bashrc
# In ~/.bashrc, add:
set -x
# Then check logs

20. Scenario: System slow after kernel update


Situation: After a kernel update, system performance degraded significantly.

Investigation:

Terminal window
# 1. Check current kernel version
uname -r
# 2. Check previous kernel versions
rpm -qa kernel
dpkg -l | grep linux-image
# 3. Check CPU frequency scaling
cat /proc/cpuinfo | grep MHz
cpupower frequency-info
# 4. Check kernel parameters
cat /proc/cmdline
# 5. Check for driver issues
dmesg | grep -i error
dmesg | grep -i fail
lsmod | grep -E "nouveau|nvidia|i915"
# 6. Check system logs for warnings
journalctl -b -1 | grep -iE "warning|error"

Resolution:

Terminal window
# 1. Boot into previous kernel
# At GRUB menu, select "Advanced options" then previous kernel
# 2. Set default kernel
# /etc/default/grub:
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.4.0-42-generic"
sudo update-grub
# 3. Remove problematic kernel
# Debian/Ubuntu:
sudo apt remove linux-image-5.8.0-45-generic
# RHEL/CentOS:
sudo yum remove kernel-5.8.0-45
# 4. Fix kernel parameters if missing
# In /etc/default/grub, add:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_pstate=disable"
# Or for power management:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash processor.max_cstate=1"
sudo update-grub
# 5. Check and reinstall missing microcode
sudo apt install intel-microcode # Intel
sudo apt install amd64-microcode # AMD
# 6. Rebuild initramfs if needed
sudo update-initramfs -u -k all
sudo dracut -f # RHEL/CentOS
# 7. Check if modules need blacklisting
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
sudo update-initramfs -u

21. Scenario: Inodes exhausted but disk space available (Bonus)


Situation: Users can’t create new files, df -h shows space available, but df -i shows 100% inode usage.

Investigation:

Terminal window
# 1. Confirm inode exhaustion
df -i
# 2. Find bloated directory entries (a directory file over ~1M holds huge numbers of entries)
sudo find / -xdev -type d -size +1M 2>/dev/null
# 3. Count files per directory recursively
for dir in /*; do
echo -n "$dir: "; sudo find "$dir" -xdev -type f 2>/dev/null | wc -l
done
# 4. Find directory with massive number of small files
sudo find / -xdev -type f -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head -20
# 5. Check mail spool for stuck emails
ls /var/spool/mail/ | wc -l
sudo mailq
# 6. Check for session files
ls /tmp/ | wc -l
ls /var/tmp/ | wc -l
ls /var/lib/php/sessions/ | wc -l
# 7. Check Docker overlay if used
sudo find /var/lib/docker -type f | wc -l
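The -printf '%h' pipeline from step 4 is the workhorse here: it prints each file's parent directory, so uniq -c yields a per-directory file count. A safe demonstration on a scratch tree (GNU find assumed):

```shell
# Sketch: per-directory file counts — the directory hogging inodes
# lands at the top of the list.
base=$(mktemp -d)
mkdir -p "$base/many" "$base/few"
for i in 1 2 3 4 5; do : > "$base/many/f$i"; done
: > "$base/few/f1"
find "$base" -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -3
rm -rf "$base"
```

The -xdev flag keeps the scan on one filesystem, which matters because inodes are exhausted per filesystem, not globally.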

Resolution:

Terminal window
# 1. Delete old log files (logs use many inodes)
sudo find /var/log -name "*.log.*" -type f -mtime +30 -delete
sudo find /var/log -name "*.gz" -type f -mtime +90 -delete
# 2. Clean up mail spool
# For stuck mail queues
sudo postsuper -d ALL # Postfix
sudo find /var/spool/mqueue -type f -delete # Sendmail (queue files live here)
# 3. Clear old session files
sudo find /tmp -type f -atime +7 -delete
sudo find /var/tmp -type f -atime +7 -delete
sudo find /var/lib/php/sessions -type f -atime +7 -delete
# 4. Clean package manager cache
sudo apt clean
sudo dnf clean all
# 5. Remove old kernels (preserve 2 most recent)
# Debian/Ubuntu
sudo apt autoremove --purge
# RHEL/CentOS
sudo package-cleanup --oldkernels --count=2
# 6. Clean Docker resources
docker system prune -a
# 7. If using mail server, limit queue
# /etc/postfix/main.cf:
queue_minfree = 10000000
message_size_limit = 10240000
bounce_queue_lifetime = 1d
# 8. For applications creating many temp files
# Set up tmpwatch or systemd-tmpfiles
cat > /etc/tmpfiles.d/app.conf << EOF
d /var/lib/app/cache 0755 root root 7d
d /tmp/app-temp 1777 root root 1d
EOF
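The age-based `find ... -mtime +7 -delete` cleanups above can be rehearsed safely on a scratch directory before pointing them at /tmp (GNU touch -d assumed for backdating):

```shell
# Sketch: rehearse an age-based cleanup — only the backdated file is removed.
base=$(mktemp -d)
: > "$base/old.tmp"; touch -d '10 days ago' "$base/old.tmp"
: > "$base/new.tmp"
find "$base" -type f -mtime +7 -delete
ls "$base"   # → new.tmp
rm -rf "$base"
```

Remember -mtime +7 means strictly more than 7 full days old, and that it checks modification time; the investigation commands above use -atime, which some mounts (noatime) never update.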

22. Scenario: GRUB bootloader corrupted (Bonus)


Situation: System boots to “grub rescue>” prompt after disk changes or updates.

Investigation:

Terminal window
# At grub rescue prompt:
grub rescue> ls
# (hd0) (hd0,msdos1) (hd0,msdos2)
# Try to find boot partition
grub rescue> ls (hd0,1)/
# Look for grub or boot directory
# Check for kernel
grub rescue> ls (hd0,1)/boot/

Resolution from Live CD/USB:

Terminal window
# 1. Boot from live CD
# 2. Mount root partition
sudo mount /dev/sda1 /mnt
# 3. Mount boot partition if separate
sudo mount /dev/sda2 /mnt/boot
# 4. Mount necessary directories
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
# 5. Chroot into system
sudo chroot /mnt
# 6. Reinstall GRUB
# For BIOS/MBR
grub-install /dev/sda
update-grub
# For UEFI
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub
# 7. Check if bootable
lsblk
efibootmgr -v # For UEFI
# 8. Exit and reboot
exit
sudo umount -R /mnt
sudo reboot

Manual GRUB Boot (Temporary):

Terminal window
# At grub rescue> prompt:
grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> insmod normal
grub rescue> normal
# At GRUB menu, press 'c' for command line:
grub> set root=(hd0,msdos1)
grub> linux /boot/vmlinuz-5.4.0-42-generic root=/dev/sda1
grub> initrd /boot/initrd.img-5.4.0-42-generic
grub> boot
# After booting, reinstall GRUB permanently