pipe

Unix-style pipe chain functionality and text processing commands for data filtering, transformation, and analysis.

The pipe system allows chaining multiple commands together using the | operator. Each command processes the output from the previous command, enabling powerful data transformation workflows.

Command    Purpose
bc         Calculator with math functions
cat        Display file contents
grep       Search text patterns
ls         List directory contents
ps         List running processes
sed        Find and replace text
awk        Field extraction and processing
sort       Sort lines
uniq       Remove duplicates
wc         Count words, lines, characters
cut        Extract columns/fields
trim       Remove whitespace
tr         Translate characters
rev        Reverse strings
split      Split strings into arrays
join       Join arrays into strings
head       Display first N lines or bytes
tail       Display last N lines or bytes
echo       Print text
nslookup   DNS hostname lookup
find       Search for files
xclip      Copy to clipboard
decipher   Decode text

data | command1 args | command2 args | command3 args

  • Data flows through the pipe from left to right
  • Each command processes output from the previous command
  • Final command returns the processed result
  • All commands must be connected with pipes (|)
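These rules can be checked with the standard Unix tools that the pipe commands mirror (the in-game versions may differ slightly); a minimal sketch:

```shell
# Three unsorted lines flow left to right:
# sort orders them, then head keeps only the first.
printf 'cherry\napple\nbanana\n' | sort | head -n 1
# → apple
```

Each stage only sees the previous stage's output, so reordering stages changes the result.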

cat /etc/passwd | wc -l

cat access.log | grep "error" | wc -l

ls | grep ".txt" | sort

Powerful calculator with arithmetic operations, mathematical functions, and special features.

5 + 3 | bc    # 8
10 - 4 | bc   # 6
7 * 6 | bc    # 42
20 / 4 | bc   # 5
17 % 5 | bc   # 2 (modulo/remainder)
2 ^ 8 | bc    # 256 (power)

5 + 3 * 2 | bc       # 11 (multiplication first)
2 + 3 * 4 - 5 | bc   # 9
10 / 2 + 3 | bc      # 8

sqrt 16 | bc   # 4
sqrt 2 | bc    # 1.414214

abs -5 | bc    # 5
abs 3.7 | bc   # 3.7

floor 3.7 | bc   # 3
ceil 3.2 | bc    # 4
round 3.5 | bc   # 4
round 3.4 | bc   # 3

sign -5 | bc   # -1
sign 0 | bc    # 0
sign 10 | bc   # 1

sin 0 | bc   # 0
cos 0 | bc   # 1
tan 0 | bc   # 0

sin pi | bc   # ~0 (sin(π))
cos pi | bc   # -1 (cos(π))

asin 0.5 | bc   # 0.523599 (π/6)
acos 0 | bc     # 1.5708 (π/2)
atan 1 | bc     # 0.785398 (π/4)

sum 1 2 3 4 5 | bc   # 15

rnd | bc      # Random 0-1
rnd 10 | bc   # Random 0-10

range 5 | bc        # [0, 1, 2, 3, 4]
range 2 7 | bc      # [2, 3, 4, 5, 6, 7]
range 0 10 2 | bc   # [0, 2, 4, 6, 8, 10]

char 65 | bc       # [ A ]
char 72 105 | bc   # [ Hi ]

code A | bc       # [ 65 ]
code Hello | bc   # [ 72, 101, 108, 108, 111 ]

10 + 5, * 2, - 3 | bc   # 27

100 / 4, sqrt | bc   # 5 (sqrt(25))
2 ^ 4, + 10 | bc     # 26 (16 + 10)

ip | bc     # 192.168.1.45
ip 5 | bc   # Generate 5 random IPs

Search for patterns in text with various options.
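Since grep here mirrors the standard tool, the behavior can be sketched self-contained, with printf standing in for a log file (standard grep shown; the in-game command may differ in detail):

```shell
# Keep only lines containing "error"; -i makes the match case-insensitive,
# so "ERROR" lines are kept too.
printf 'error: disk full\ninfo: all good\nERROR: service down\n' | grep -i "error"
# → error: disk full
# → ERROR: service down
```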

cat /var/log/system.log | grep "error"

Find lines containing "root" in passwd file

cat /etc/passwd | grep "root"

cat /home/user/notes.txt | grep "TODO"

cat file.txt | grep -i "warning"
cat log.txt | grep -i "error"

cat system.log | grep -i "error"

cat file.txt | grep -v "debug"
cat /etc/passwd | grep -v "nologin"

cat config.conf | grep -v "^#"

cat log.txt | grep -iv "debug"

cat file.txt | grep "error" | grep -v "ignore"

ls | grep ".txt"

cat system.log | grep "error" | grep -v "debug"

cat access.log | grep -E "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+"

cat /etc/passwd | grep "/bin/bash"

Find and replace text in streams.

echo "hello world" | sed "s/world/universe/"

cat file.txt | sed "s/old/new/g"

cat config.txt | sed "s/localhost/192.168.1.1/g"

cat document.txt | sed "s/teh/the/g"

cat log.txt | sed "s/ERROR/FIXED/"

cat data.txt | sed "s/  */ /g"

Remove leading/trailing spaces (simplified)

cat file.txt | sed "s/^ *//"   # Leading
cat file.txt | sed "s/ *$//"   # Trailing

cat links.txt | sed "s/oldsite.com/newsite.com/g"

cat config.txt | sed "s/password=.*/password=REDACTED/g"

cat config.txt | sed "s/debug=true/debug=false/" > new_config.txt

cat log.txt | grep "user" | sed "s/username/user_id/g"

Extract and process fields from structured text.

Print first field (default delimiter: space)

echo "one two three" | awk '{print $1}'

echo "one two three" | awk '{print $2}'

echo "one two three" | awk '{print $NF}'

echo "hello world" | awk '{print $0}'

cat /etc/passwd | awk -F: '{print $1}'

cat data.csv | awk -F, '{print $1}'

echo "a:b:c:d" | awk -F: '{print $2}'

cat data.tsv | awk -F"\t" '{print $1 $3}'

cat /etc/passwd | awk -F: '{print $1 $3}'

echo "a b c d" | awk '{print $1, $3}'

cat data.csv | awk -F, '{print $1 "-" $2}'

echo "one two three four" | awk '{print NF}'

echo "a b c d e" | awk '{print $NF}'

cat /etc/passwd | awk -F: '{print $1}'

netstat | awk '{print $5}' | awk -F: '{print $1}'

ls -l | awk '{print $9}'

cat users.csv | awk -F, '{print $1 $3}'

Get second column from space-separated data

cat data.txt | awk '{print $2}'

echo "start middle1 middle2 end" | awk '{print $1 $NF}'

cat names.txt | sort

cat names.txt | sort -r

cat numbers.txt | sort -n

cat scores.txt | sort -rn

cat file.txt | uniq

cat file.txt | uniq -c

cat file.txt | uniq -i

cat names.txt | sort | uniq

cat log.txt | sort | uniq -c

cat numbers.txt | sort -n | uniq

cat access.log | sort | uniq -c | sort -rn

cat /etc/passwd | awk -F: '{print $1}' | sort

cat access.log | awk '{print $1}' | sort | uniq

cat error.log | grep "ERROR" | sort | uniq -c | sort -rn | head -10

cat email_list.txt | sort | uniq > clean_list.txt

echo "hello" | rev

echo "hello world" | rev

cat file.txt | rev

echo "one two three" | split

echo "a,b,c,d" | split ,

echo "name:email:phone" | split :

echo "red,green,blue" | split ,

echo ["hello", "world"] | join

echo ["a", "b", "c"] | join -

echo ["red", "green", "blue"] | join ,

echo ["h", "e", "l", "l", "o"] | join ""

echo " hello " | trim

cat file.txt | trim

cat data.txt | grep "value" | trim

cat data.csv | split , | awk '{print $1}'

echo "hello world" | split | rev | join " "

cat items.txt | trim | sort | join ", "

cat /etc/passwd | wc -l

cat file.txt | grep "error"

cat log.txt | grep "warning" | wc -l

ls

ls /home
ls /etc

Long format (-l): shows permissions, size, filename

ls -l
ls -l /var/log

Show all files (-a): includes hidden files starting with .

ls -a
ls -a /home/user

Human-readable sizes (-h): displays sizes as K, M, G

ls -lh
ls -lh /var/log

ls -la        # Long format with all files
ls -lah       # Long format, all files, human-readable
ls -lh /var   # Long format with human sizes

ls | grep ".txt"
ls /etc | grep "conf"
ls -l | grep "log"

ls | wc -l
ls /var/log | grep ".log" | wc -l

ls | sort
ls -l | sort

ls > filelist.txt
ls -la >> directory_contents.txt

find /home -name "*.txt" | wc -l

find . -type f | grep ".log"

find /var/log -name "*.log" | wc -l

find . -name "config.conf" | wc -l

echo "hello world" | wc -w

echo "test string" | grep "test"

echo "log entry" >> log.txt

echo ["a", "b", "c"] | join ,

echo "example.com" | nslookup

cat domains.txt | nslookup

echo "google.com" | nslookup > ip.txt

List running processes on the computer in a format suitable for pipe processing.

ps

ps -u root

ps -u guest

ps -u root | wc -l

Show processes with "sshd" in command name

ps -C sshd

ps -C bash

ps -C python

ps -u root -C bash

ps -u guest -C python | wc -l

ps | wc -l

ps | grep "nginx"

ps | awk '{print $5}'

ps | grep -v "root"

ps | awk '{print $2}'

ps | sort

ps | awk '{print $5}' | sort | uniq -c

ps -C apache2 | wc -l

ps | grep "mysql"

ps | grep -v "root" | wc -l

ps -u root | wc -l
ps -u guest | wc -l

ps > processes.txt

ps -u root > root_processes.txt

cat file.txt | wc -l

cat log.txt | grep "error" | wc -l

ls | wc -l

cat /etc/passwd | wc -l

cat file.txt | wc -w

echo "hello world test" | wc -w

cat document.txt | grep "chapter" | wc -w

cat file.txt | wc -c

echo "hello" | wc -c

cat file.txt | grep "error" | wc -c

cat system.log | grep -i "error" | wc -l

cat access.log | awk '{print $1}' | sort | uniq | wc -l

cat *.txt | wc -w

cat script.src | grep -v "^//" | wc -l

Extract first column (default delimiter: tab)

cat data.txt | cut -f1

cat data.csv | cut -d, -f1

cat data.csv | cut -d, -f1,3

cat data.csv | cut -d, -f2-4

cat file.txt | cut -c1-10

cat file.txt | cut -c5-15

cat file.txt | cut -c10-

Extract usernames (first field, colon delimiter)

cat /etc/passwd | cut -d: -f1

Extract IP addresses (first field, space delimiter)

cat access.log | cut -d" " -f1

cat log.txt | cut -c1-10

cat emails.txt | cut -d@ -f2

echo "/home/user/file.txt" | cut -d/ -f4

echo "hello" | tr "e" "a"

echo "hello" | tr "el" "ax"

echo "hello world" | tr " " "_"

echo "hello" | tr "a-z" "A-Z"

echo "HELLO" | tr "A-Z" "a-z"

echo "Hello World" | tr "A-Z" "a-z"

cat dosfile.txt | tr -d "\r"

cat file.txt | tr -d "0-9"

echo "hello world" | tr "aeiou" "*****"

Clean up filenames (spaces to underscores)

echo "my file name.txt" | tr " " "_"

Display first or last N lines/bytes from input.
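The `-n N` and `-n +N` forms behave like the standard tools, so a self-contained sketch (printf stands in for a CSV file; the in-game commands may differ in detail):

```shell
# tail -n +2 starts output at line 2 (skips a header row);
# head -n 1 then keeps only the first data row.
printf 'name,age\nana,30\nbob,25\n' | tail -n +2 | head -n 1
# → ana,30
```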

cat file.txt | head -n 5

echo ["line1", "line2", "line3", "line4", "line5"] | head -n 3

Output: ["line1", "line2", "line3"]

cat file.txt | head -c 100

cat log.txt | head

cat file.txt | tail -n 5

echo ["line1", "line2", "line3", "line4", "line5"] | tail -n 2

cat data.csv | tail -n +2

cat file.txt | tail -c 100

cat file.txt | tail -c +50

cat log.txt | tail

cat data.csv | tail -n +2 | head -n 10

ps | tail -n +2 | sort

cat file.txt | head -n 50 | tail -n 10

cat binary.dat | tail -c +100 | head -c 50

cat log.txt | grep "ERROR" | head -n 5

cat access.log | grep "404" | tail -n 10

cat access.log | grep "error" | awk '{print $1}' | sort | uniq -c | sort -rn

cat auth.log | grep "Failed password" | awk '{print $11}' | sort | uniq -c

cat access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -n 10

Recent errors from large log (skip old entries)

cat huge.log | tail -n 10000 | grep "ERROR" | tail -n 50

cat report.csv | tail -n +2 | awk -F, '{print $1}' | sort | uniq

cat contacts.txt | grep "@" | cut -d, -f2 | trim | sort | uniq

cat numbers.txt | awk '{sum+=$1} END {print sum/NR}'

cat data.csv | grep -v "^#" | cut -d, -f3 | sort -n | uniq -c

find /var/log -name "*.log" | xargs ls -lh | grep "M"

find . -name "*.src" | grep -v test | wc -l

find . -type f | awk -F/ '{print $NF}' | sort | uniq -d

cat file.txt | grep -o "[a-z]*@[a-z]*\.[a-z]*" | cut -d@ -f2 | sort | uniq

cat raw_list.txt | trim | sort | uniq | awk '{print "- " $0}'

cat data.txt | awk '{print $1 "," $3}' | sort | sed "s/,/ -> /g"

cat /etc/passwd | cut -d: -f1 | grep -v "^#" | sort | wc -l

cat input.txt | tr "A-Z" "a-z" | sed "s/old/new/g" | sort | uniq

cat access.log | grep "POST" | awk '{print $1}' | sort | uniq -c | sort -rn | head -20

cat data.csv | grep -v "^$" | cut -d, -f1-3 | trim | sort | uniq > clean_data.csv

cat input.txt | grep "error" > errors.txt

Append to file (cat still uses pipe internally)

cat new_log.txt | cat >> main_log.txt

cat data.txt | sort | uniq > unique_data.txt

cat file.txt | xclip

cat log.txt | grep "error" | xclip

cat list.txt | sort | uniq | xclip

  1. Start Simple: Build pipes incrementally, testing each stage
  2. Use grep -v: Exclude unwanted lines before processing
  3. Sort Before uniq: uniq only removes consecutive duplicates
  4. Check Field Numbers: Use awk '{print NF}' to count fields
  5. Test Regex: Verify patterns with simple grep before complex pipes
  6. Use -n for Numbers: Sort numeric data with sort -n
  7. Trim Data: Use trim to clean whitespace before processing
  8. Redirect Output: Save results with > or >>
  9. Count Results: End pipes with wc -l to count matches
  10. Debug Pipes: Remove stages from end to isolate issues
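Tip 3 is worth seeing in action: uniq only collapses adjacent repeats, so non-adjacent duplicates survive unless you sort first (standard-tool sketch; the in-game uniq mirrors this):

```shell
# uniq alone misses the second "a" because it is not adjacent:
printf 'a\nb\na\n' | uniq          # 3 lines: a, b, a
# Sorting first brings duplicates together so uniq can drop them:
printf 'a\nb\na\n' | sort | uniq   # 2 lines: a, b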

cat file.txt | grep "pattern" | awk '{print $2}' | sort | uniq

cat data.txt | awk '{print $1}' | sort | uniq | wc -l

cat file.txt | sort | uniq -c | sort -rn | head -10

cat raw.txt | trim | sort | uniq > clean.txt

cat data.csv | cut -d, -f1,3 | sort

cat list.txt | tr "A-Z" "a-z" | sort | uniq

  • Empty results return []
  • Invalid commands return Error objects
  • File not found returns []
  • Permission denied returns []
  • Invalid regex patterns fail silently

  • Large files: Use grep to filter early in pipeline
  • Memory usage: Avoid loading entire files when possible
  • Sorting: Can be expensive on large datasets
  • Regex: Complex patterns slow down grep
  • Field extraction: awk is efficient for structured data
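Filtering early means placing grep before expensive stages so later commands see fewer lines; both orderings below produce the same output, but the first sorts only the matching lines (standard-tool sketch):

```shell
# Filter first: sort processes 2 lines instead of 3.
printf 'err 1\nok 2\nerr 3\n' | grep err | sort
# Same result, but sort must process every line before grep discards any.
printf 'err 1\nok 2\nerr 3\n' | sort | grep err
```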