All the essential Bash commands: built-ins, variables, loops, arrays, processes, networking, and debugging.
pwd
Print working directory
/home/user/projects
ls -la
List all files with details
Shows hidden files, permissions, size
ls -lh
List files with human-readable sizes
-rw-r--r-- 1 user group 1.2M file.tar
cd -
Go to previous directory
Toggle between two directories
pushd /path && popd
Push/pop directory stack
pushd /tmp; work; popd
find . -name '*.txt'
Find files by name pattern
find /var/log -name '*.log' -mtime -7
find . -type f -size +100M
Find files larger than 100MB
find / -type f -size +100M 2>/dev/null
tree -L 2
Show directory tree (2 levels)
tree -L 3 --dirsfirst
du -sh *
Disk usage of current directory items
du -sh /var/log/*
df -h
Disk free space (human-readable)
Shows all mounted filesystems
ls -lt | head -10
10 most recently modified files
Sort by modification time, newest first
cp -r src/ dest/
Copy directory recursively
cp -rp /etc/nginx /etc/nginx.bak
mv file1 file2
Move/rename file
mv old_name.txt new_name.txt
rm -rf dir/
Remove directory and all contents
Be careful — no recycle bin!
mkdir -p a/b/c
Create nested directories
Creates all intermediate dirs
touch file.txt
Create empty file or update timestamp
touch -t 202501010000 file.txt
ln -s /path/to/file link
Create symbolic link
ln -s /usr/local/bin/python3 /usr/bin/python
tar -czf archive.tar.gz dir/
Create gzipped tarball
tar -czf backup-$(date +%Y%m%d).tar.gz /data
tar -xzf archive.tar.gz
Extract gzipped tarball
tar -xzf archive.tar.gz -C /dest/
zip -r archive.zip dir/
Create zip archive
zip -r site.zip public/
rsync -avz src/ user@host:dest/
Sync files to remote host
rsync -avz --delete /local/ remote:/dest/
diff file1 file2
Show differences between files
diff -u file1.txt file2.txt | less
wc -l file.txt
Count lines in file
wc -l *.log | sort -n
cat file.txt
Print file contents
cat -n file.txt (with line numbers)
less file.txt
Page through file contents
Use arrow keys, q to quit, /search
head -n 20 file.txt
Show first 20 lines
head -n 20 /var/log/syslog
tail -f logfile
Follow file as it grows
tail -f /var/log/nginx/access.log
grep -rn 'pattern' dir/
Recursively search for pattern
grep -rn 'TODO' src/ --include='*.ts'
grep -v 'pattern'
Lines NOT matching pattern
grep -v '^#' config.conf (skip comments)
sed 's/old/new/g' file
Replace text (global)
sed -i 's/localhost/production.com/g' config.yml
awk '{print $1,$3}'
Print specific columns
awk '{sum+=$2} END{print sum}' data.txt
sort -k2 -n file
Sort by column 2 numerically
sort -t',' -k3 -n -r data.csv
sort | uniq -c | sort -rn
Count and rank unique lines
cat access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head
cut -d',' -f1,3
Extract comma-separated fields 1,3
cut -d':' -f1 /etc/passwd
tr '[:lower:]' '[:upper:]'
Convert to uppercase
echo 'hello' | tr '[:lower:]' '[:upper:]'
ps aux | grep process
Find process by name
ps aux | grep nginx
kill -9 PID
Force kill process by PID
kill -TERM PID (graceful first)
pkill -f 'pattern'
Kill processes matching pattern
pkill -f 'python my_script.py'
top / htop
Interactive process viewer
htop for a better interface
nohup cmd &
Run command immune to hangups
nohup ./server.sh > server.log 2>&1 &
jobs / fg / bg
Manage background jobs
Ctrl+Z to suspend, bg to resume
time command
Measure command execution time
time npm run build
watch -n 5 'command'
Repeat command every 5 seconds
watch -n 1 'ps aux | grep myapp'
lsof -p PID
List files opened by process
lsof -i :3000 (port 3000)
strace -p PID
Trace system calls of process
strace -e trace=openat ls
curl -I https://example.com
Fetch HTTP headers only
curl -sI https://api.example.com | head -5
curl -X POST -H 'Content-Type: application/json' -d '{...}' url
POST JSON to endpoint
curl -s -o /dev/null -w '%{http_code}' url
wget -q -O file.tar.gz url
Download file quietly to name
wget -q --show-progress url
ssh user@host -p 2222
SSH with custom port
ssh -i ~/.ssh/key.pem user@host
scp file user@host:/path/
Secure copy to remote
scp -r user@host:/remote/ /local/
netstat -tlnp
Show listening TCP ports + PIDs
ss -tlnp (modern replacement)
ss -s
Socket statistics summary
ss -tlnp | grep :80
ping -c 4 host
Ping host 4 times
ping -i 0.2 -c 100 host
traceroute host
Trace route to host
mtr host (combined ping+traceroute)
nmap -p 80,443 host
Scan specific ports
nmap -sV -p 1-65535 host
chmod 755 file
rwxr-xr-x (owner full, others r+x)
chmod +x script.sh
chmod -R 644 dir/
Set 644 recursively (caution: strips execute from directories)
chmod -R u=rwX,go=rX dir/ (X keeps dirs traversable)
chown user:group file
Change file owner and group
chown -R www-data:www-data /var/www
umask 022
Set default permission mask
umask 027 (group can't write, others nothing)
stat file
Show file metadata and permissions
stat -c '%A %U %G' /etc/passwd
sudo -u user command
Run command as another user
sudo -u postgres psql
visudo
Edit sudoers file safely
e.g. add: user ALL=(ALL) NOPASSWD: ALL
getfacl / setfacl
Get/set POSIX access control lists
setfacl -m u:bob:rw file
VAR="value"
Set local variable (no spaces around =)
NAME="World"; echo "Hello $NAME"
export VAR=value
Export variable to child processes
export NODE_ENV=production
echo ${VAR:-default}
Use default if VAR is unset
PORT=${PORT:-3000}
readonly VAR=value
Create read-only variable
readonly PI=3.14159
VAR=$(command)
Assign command output to variable
DATE=$(date +%Y-%m-%d)
env
List all environment variables
env | grep PATH
unset VAR
Remove variable
unset MY_SECRET
declare -a arr=(a b c)
Declare array
arr=("one" "two" "three"); echo ${arr[0]}
${#VAR}
String length
echo ${#HOME}
${VAR#prefix}
Remove prefix from value
${file%.txt} (remove .txt extension)
if [ condition ]; then cmd; fi
Basic if statement
if [ -f file.txt ]; then echo "exists"; fi
[[ ... ]]
Extended test — prefer over [ ]
[[ -n "$VAR" && "$VAR" =~ ^[0-9]+$ ]]
for i in {1..10}; do
cmd
done
Loop over range
for f in *.txt; do echo "$f"; done
while IFS= read -r line; do echo "$line"; done < file.txt
Read file line by line
while read line; do ...; done < input.txt
case $VAR in 'a') cmd;; *) default;; esac
Case/switch statement
case "$OS" in linux) apt;; darwin) brew;; esac
cmd1 && cmd2
Run cmd2 only if cmd1 succeeds
make && make install
cmd1 || cmd2
Run cmd2 if cmd1 fails
ping -c1 host || echo 'unreachable'
set -e
Exit script on any error
set -euo pipefail (strict mode)
trap 'cleanup' EXIT
Run cleanup on script exit
trap 'rm -f $TMPFILE' EXIT
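A slightly fuller sketch of the trap-cleanup pattern, using mktemp to create the temp file:

```shell
#!/usr/bin/env bash
# Create a temp file and guarantee its removal on exit,
# whether the script ends normally or aborts under set -e.
TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT

echo "working data" > "$TMPFILE"
cat "$TMPFILE"
# on exit, the trap runs and the temp file is gone
```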
function_name() {
local var="value"
echo "$var"
}
Define a function with local vars
greet() { local name="$1"; echo "Hello $name"; }
$1 $2 $@
Function/script arguments ($1=first...)
$# is number of args, $@ is all args
return 0
Return from function (0=success)
Non-zero return = error
source script.sh (or . script.sh)
Load functions/variables from file
source ~/.bashrc
cmd > file.txt
Redirect stdout to file (overwrite)
ls > files.txt
cmd >> file.txt
Append stdout to file
date >> log.txt
cmd 2> err.txt
Redirect stderr to file
make 2> errors.txt
cmd > out.txt 2>&1
Redirect both stdout and stderr
Capture all output to file
cmd 2>/dev/null
Discard stderr
Suppress error messages
cmd1 | cmd2
Pipe stdout of cmd1 to stdin of cmd2
cat file | sort | uniq -c
tee file.txt
Write to file AND stdout simultaneously
cmd | tee output.txt | grep error
xargs
Build commands from stdin
find . -name '*.log' | xargs rm -f
cmd < file.txt
Redirect file to stdin
mysql -u root < dump.sql
cmd <<EOF
text
EOF
Here-doc: multi-line stdin
ssh host <<EOF
cd /app && git pull
EOF
for h in host1 host2; do ssh $h 'uptime'; done
Run command on multiple hosts
Parallel: use & and wait
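A sketch of that parallel variant, with hypothetical hosts host1-host3:

```shell
#!/usr/bin/env bash
# Launch each ssh in the background, then block until all finish.
for h in host1 host2 host3; do
  ssh "$h" 'uptime' &
done
wait    # returns once every background job has exited
echo "all hosts done"
```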
date +%Y-%m-%dT%H:%M:%S
ISO 8601 timestamp
file_$(date +%Y%m%d_%H%M%S).bak
openssl rand -hex 32
Generate random 32-byte hex string
Good for generating secrets/tokens
python3 -m http.server 8080
Quick HTTP file server
Serve current directory
nc -zv host port
Test TCP connectivity to port
nc -zv db.host 5432
jq '.key' file.json
Parse/query JSON (jq)
curl -s api.url | jq '.data[].name'
!! / !$ / !string
History: last cmd / last arg / last cmd starting with 'string'
sudo !! (re-run with sudo)
CTRL+R
Reverse search command history
Type to filter, Enter to run
column -t -s','
Format CSV as aligned table
cat data.csv | column -t -s','
diff <(cmd1) <(cmd2)
Diff output of two commands
diff <(sort file1) <(sort file2)
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
# Script metadata
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
# Logging helpers
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }
warn() { log "WARN: $*" >&2; }
err() { log "ERROR: $*" >&2; exit 1; }
# Cleanup on exit
cleanup() {
log "Cleaning up..."
# remove temp files, etc.
}
trap cleanup EXIT
# Validate dependencies
require_cmd() {
command -v "$1" >/dev/null 2>&1 || err "Required command not found: $1"
}
require_cmd curl
require_cmd jq
# Main logic
main() {
local input="${1:-}"
[[ -z "$input" ]] && err "Usage: $SCRIPT_NAME <input>"
log "Processing: $input"
# ... your logic here
}
main "$@"
set -euo pipefail is bash strict mode: -e exits on any error, -u treats undefined variables as errors, -o pipefail catches failures inside pipes.
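A minimal demonstration of what each strict-mode flag catches:

```shell
#!/usr/bin/env bash
set -euo pipefail

# -u: referencing an unset variable would abort the script:
# echo "$UNDEFINED"

# -o pipefail: a pipeline fails if ANY stage fails, not just the last one.
if ! false | true; then
  echo "pipefail caught the failure"
fi

# -e: any unguarded failing command would exit the script right here.
echo "reached the end"
```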
$0
Name of the script
$1-$9
Positional parameters (arguments)
$@
All arguments as separate words
$*
All arguments as a single word
$#
Number of arguments
$?
Exit status of the last command
$$
PID of the current shell
$!
PID of the last background command
$_
Last argument of the previous command
$BASHPID
PID of the current bash subshell
$LINENO
Current line number in the script
$RANDOM
Random integer 0-32767
sh is the POSIX-compliant shell (Bourne shell), available on every Unix system. Bash (Bourne Again SHell) is a superset of sh with extra features such as arrays, extended conditionals, brace expansion, and more. On most Linux systems, /bin/sh is bash or dash. Use #!/bin/bash for scripts that need bash-specific features.
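The special parameters above can be seen in a quick demo script (demo.sh is a hypothetical name):

```shell
#!/usr/bin/env bash
# demo.sh: print the most common special variables for this invocation.
echo "script name:  $0"
echo "first arg:    ${1:-<none>}"
echo "arg count:    $#"
echo "all args:     $*"
true
echo "last status:  $?"
```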
Add a shebang line at the top (#!/bin/bash or #!/usr/bin/env bash), then run chmod +x script.sh to make it executable. Launch it with ./script.sh or bash script.sh.
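The full sequence, assuming a script named hello.sh:

```shell
# 1. Write the script with a shebang on the first line
cat > hello.sh <<'EOF'
#!/usr/bin/env bash
echo "Hello from $0"
EOF

# 2. Make it executable
chmod +x hello.sh

# 3. Run it either way
./hello.sh
bash hello.sh
```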
2>&1 redirects stderr (file descriptor 2) to the same place as stdout (file descriptor 1). Combined with >, you can redirect all output: command > output.txt 2>&1 captures both stdout and stderr in the file.
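Order matters, and it is a common gotcha (some_command is a placeholder):

```shell
# Correct: stdout is pointed at the file first, then stderr joins it
some_command > all.txt 2>&1

# Wrong order: 2>&1 copies stderr to stdout's OLD target (the terminal)
# before > moves stdout, so errors still reach the screen
some_command 2>&1 > all.txt
```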
Single quotes ('') preserve every character literally: no variable expansion, no special characters. Double quotes ("") allow variable expansion ($VAR), command substitution ($(cmd)), and backslash escapes (\$, \"). Use single quotes for literal strings, double quotes when you need variables.
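Side by side:

```shell
NAME="World"
echo 'Hello $NAME'           # prints: Hello $NAME  (literal)
echo "Hello $NAME"           # prints: Hello World  (expanded)
echo "Now: $(date +%H:%M)"   # command substitution works in double quotes
echo 'Price: $5'             # single quotes keep the $ literal
```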