Need to know the essential command line tools for Linux admins?
Computers do precisely what we tell them to do. If we tell a system to delete all of its files, it will remove them without question, and quite possibly crash because it deleted itself.
Misused commands can cause a great deal of harm to a server.
The command line terminal, or shell on your Linux server, is a potent tool for deciphering activity on the server, performing operations, or making system changes. But with several thousand executable binaries installed by default, what tools are useful, and how should you use them safely?
We recommend becoming comfortable with the terminal over SSH.
In this article, we will look at the essential commands for Linux admins and show you some even more useful, practical tools.
Vital Command Line for Linux Admins
Moving ahead, let us see a few essential commands our Support Experts suggest to learn about a Linux system.
ls lists files in a directory. In the container space, this command can help determine the container image’s directory and files. Besides this, it can help examine permissions.
Here, we can't run myapp because of a permissions issue. When we check the permissions using ls -l, we see -rw-r--r--: read and write only, with no "x" (execute) bit:
$ ./myapp
bash: ./myapp: Permission denied
$ ls -l myapp
-rw-r--r--. 1 root root 33 Jul 21 18:36 myapp
tail displays the last part of a file. For example, we use tail to check what happens in the logs when we make a request to the Apache HTTP server.
Rather than following the log in real time, we can use tail with the -n option to see the last 100 lines of a file:
$ tail -n 100 /var/log/httpd/access_log
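To watch a log grow in real time instead, tail -f follows the file as new lines arrive. The sketch below builds a throwaway file to demonstrate the -n behavior, since the Apache log path above may not exist on every system:

```shell
# Build a small sample file (a stand-in for a real log such as the Apache access_log)
logfile=$(mktemp)
printf 'line %s\n' 1 2 3 4 5 > "$logfile"

# Print only the last 2 lines, just as -n 100 prints the last 100 of a real log
tail -n 2 "$logfile"    # prints: line 4, then line 5

# To follow a live log instead (press Ctrl+C to stop):
#   tail -f /var/log/httpd/access_log
rm -f "$logfile"
```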
cat concatenates and prints files. We can use it to check the contents of the dependencies file or to confirm the version of the application that we have already built locally.
$ cat requirements.txt
Here, it checks whether the Python Flask application has Flask listed as a dependency.
grep searches files for patterns. When we look for a specific pattern in the output of another command, grep highlights the relevant lines.
We can use this command to search log files, specific processes, and more.
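As a quick, self-contained sketch (using a temporary file in place of a real log), grep's -i flag ignores case and -n prints the line number of each match:

```shell
# Sample log content; a real target might be /var/log/httpd/error_log
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
[notice] Apache started
[error] File does not exist: /var/www/html/favicon.ico
EOF

# -i: case-insensitive match, -n: show the matching line number
grep -in 'ERROR' "$logfile"    # prints: 2:[error] File does not exist: /var/www/html/favicon.ico
rm -f "$logfile"
```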
env sets or prints environment variables. During troubleshooting, it is useful for checking whether a wrong environment variable is preventing the application from starting.
For example, we can run env on the application's host to inspect the variables set there.
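A minimal sketch: env prints every variable, and piping through grep isolates the one we care about. PATH is used here because it exists on every system; on a real host you might grep for an application-specific variable instead:

```shell
# Print all environment variables, filtered to the PATH entry
env | grep '^PATH='

# Set a variable for a single command without changing the shell's own environment
env DEMO_VAR=hello sh -c 'echo "$DEMO_VAR"'    # prints: hello
```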
We can use the id command to check the user identity.
The example below uses Vagrant to test the application and isolate its development environment.
Once we log in to the Vagrant box, if we try to install the Apache HTTP Server, the system states that we need to be root to run the command:

$ yum -y install httpd
Loaded plugins: fastestmirror
You need to be root to perform this command.

To check the current user and group, we issue the id command:

$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
To correct this, we run the command with sudo, which provides elevated (superuser) privileges.
When we run the application binary for the first time on a host, we may receive the error message “permission denied.”
As seen in the example for ls, we can check the permissions of the application binary.
$ ls -l
-rw-rw-r--. 1 vagrant vagrant 34 Jul 11 02:17 test.sh
Here we can see that we don’t have execution rights (no “x”) to run the binary.
In such a case, chmod can correct the permissions:
$ chmod +x test.sh
[vagrant@localhost ~]$ ls -l
-rwxrwxr-x. 1 vagrant vagrant 34 Jul 11 02:17 test.sh
chmod is useful when we load a binary into a container as well; it ensures that the container has the correct permissions to execute the binary.
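chmod also accepts numeric (octal) modes; 755 is the common mode for executables, equivalent to u+rwx,go+rx. A small sketch on a temporary file:

```shell
f=$(mktemp)
chmod 755 "$f"          # rwx for the owner, r-x for group and others
ls -l "$f" | cut -c1-10 # prints: -rwxr-xr-x
rm -f "$f"
```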
The history command shows the commands we have issued since the start of the session. We can use it to log which commands we used while troubleshooting the application.
To re-execute a command from the history, type '!' followed by its number.
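History is enabled by default only in interactive shells, so the sketch below turns it on explicitly to show the numbered list; the '!' expansions noted in the comments apply in an interactive session:

```shell
# Enable history recording (on by default in interactive shells, off in scripts)
set -o history
echo "first"  > /dev/null
echo "second" > /dev/null

# Prints a numbered list; interactively, !N re-runs entry N and !! repeats the last command
history
```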
The pipe operator (|) is possibly the most useful tool in the shell. It feeds the output of one command directly into the input of another, with no temporary files.
This is handy when we are dealing with large command output that we would like to format or process with another program, without writing it to disk first.
Let us connect the commands w and grep using pipe to format the output.
The w command lets us view users logged into the server, and we pass its output to grep to filter for the 'root' user:
# w
08:56:43 up 27 days, 22:17, 2 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w
bob pts/1 18.104.22.168 09:02 1:59 0.07s 0.06s -bash

# w | grep root
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w
The ps command shows a ‘process snapshot’ of all currently running programs on the server.
For instance, let us see if the Apache process ‘httpd’ is running:
# ps faux | grep httpd
root 27242 0.0 0.0 286888 700 ? Ss Aug29 1:40 /usr/sbin/httpd -k start
nobody 77761 0.0 0.0 286888 528 ? S Sep17 0:03 \_ /usr/sbin/httpd -k start
nobody 77783 0.0 1.6 1403008 14416 ? Sl Sep17 0:03 \_ /usr/sbin/httpd -k start
We can see several 'httpd' processes running here; if there were none, we could safely assume Apache is not running.
The common flags used here are 'aux': show processes from all users (a), in a user-oriented format (u), including those without a controlling terminal (x); adding 'f' draws the parent/child relationships as a tree.
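A related convenience, assuming procps is installed: pgrep avoids the classic problem where ps | grep matches the grep process itself. A sketch of both approaches:

```shell
# pgrep lists PIDs whose command name matches; -a adds the full command line
pgrep -a sh || echo "no matching processes"

# With plain ps, the bracket trick keeps grep from matching its own entry:
# '[h]ttpd' matches "httpd" but the grep command line itself contains "[h]ttpd"
ps aux | grep '[h]ttpd' || echo "httpd is not running"
```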
The top command helps determine which processes are consuming resources on a server. Its advantage is a real-time display that can be sorted by several different factors.
In short, it dynamically shows the 'top' resource users. To execute it, we run:

# top

Once inside, we will see the process list updating continuously. By default, it shows the processes using the most CPU at the moment.
If we hold shift and type ‘M’ it will change the sort to processes that are using the most memory.
Hold shift and press ‘P’ to change the sort back to CPU. To quit, we can simply press ‘q’.
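For scripts or one-off snapshots over a plain SSH pipe, top also has a non-interactive batch mode (this assumes the standard procps top):

```shell
# -b: batch (non-interactive) output, -n 1: exit after a single snapshot
top -b -n 1 | head -n 12
```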
netstat shows services running on a server, but in particular, it shows processes that are listening for traffic on any particular network port. It can also display other interface statistics.
To display all publicly listening processes, we run:
# netstat -tunlp
The flags '-tunlp' show program names (p) listening (l) for UDP (u) or TCP (t) traffic, with numeric addresses (n).
We can scope this down by using grep to see, for instance, what program is listening on port 80:
# netstat -tunlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 27242/httpd
tcp 0 0 :::80 :::* LISTEN 27242/httpd
We can use this information to verify that services are listening on the ports our configuration expects.
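On modern distributions netstat is often absent, since it belongs to the deprecated net-tools package; ss from iproute2 accepts the same -tunlp flags and produces equivalent information:

```shell
# t/u: TCP and UDP, n: numeric addresses, l: listening sockets,
# p: owning process (resolving other users' processes may require root)
ss -tunlp

# Scope to a port, as before
ss -tunlp | grep ':80 ' || echo "nothing listening on port 80"
```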
The ip command shows network devices, their routes, and a means of manipulating their interfaces.
We use this command to read the information on the interfaces:
# ip a
It is short for ‘ip address show’, and shows the active interfaces on the server:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0:cp1
inet 192.168.0.2/24 brd 192.168.0.255 scope global secondary eth0:cp2
inet6 fe80::5054:ff:face:b00c/64 scope link
valid_lft forever preferred_lft forever
This interface also supports IPv6; its link-local address is fe80::5054:ff:face:b00c.
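Two other common ip subcommands, shown against the loopback interface since it exists on every Linux host:

```shell
# Show a single interface rather than all of them
ip addr show lo

# Show the kernel routing table ('ip r' is the abbreviated form)
ip route show
```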
lsof stands for ‘list open files.’ It lists the files that are in use by the system.
For example, consider PHP. To find the path of PHP's default error logs, the ps command only tells us whether PHP is running.
However, lsof gives us the details:
# lsof -c php | grep error
php-fpm 13366 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13366 root 2w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13366 root 5w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13395 root 2w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root 7w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log
The ‘-c’ flag will only list processes that match a certain command name. In the output, we can see that there are two open error logs: /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log and /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log. We can check them to see recently logged errors.
If we use the rsync command to transfer a large folder, in this case /backup, we can look for the files the rsync processes have open inside it:
# lsof -c rsync | grep /backup
rsync 48479 root cwd DIR 252,3 4096 4578561 /backup
rsync 48479 root 3r REG 252,3 5899771606 4578764 /backup/2018-09-12/accounts/bob.tar.gz
rsync 48480 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root 1u REG 252,3 150994944 4578600 /backup/temp/2018-09-12/accounts/.bob.tar.gz.yG6Rl2
We can see two regular files open in the /backup directory. Even with quiet output on rsync, we can see that it is currently working on copying the bob.tar.gz file.
df is a quick command that displays the space used on the mounted filesystems of a system. It reads usage from filesystem metadata rather than scanning files, so it can be slightly out of date if we are actively moving files around.
# df -h
The '-h' flag gives human-readable output in nicely rounded numbers:
Filesystem Size Used Avail Use% Mounted on
/dev/vda3 72G 49G 20G 72% /
tmpfs 419M 0 419M 0% /dev/shm
/dev/vda1 190M 59M 122M 33% /boot
/usr/tmpDSK 3.1G 256M 2.7G 9% /tmp
From this output, we can see that the primary partition mounted on / is 72% used, with 20GB free. There is no separate /backup partition mounted on the server, so cPanel backups are filling up the primary partition.
df can also show the inode counts of mounted filesystems with the '-i' flag:
# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda3 4.6M 496K 4.1M 11% /
tmpfs 105K 2 105K 1% /dev/shm
/dev/vda1 50K 44 50K 1% /boot
/usr/tmpDSK 201K 654 201K 1% /tmp
Our main partition has 496,000 inodes used, and just over 4 million inodes free, which is plenty for general use.
If a filesystem runs out of inodes, it cannot record the location of any more files or folders, even if free space remains.
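Inode exhaustion is usually caused by huge numbers of tiny files. A quick sketch of counting likely culprits under a path, using a temporary directory tree here:

```shell
base=$(mktemp -d)
for i in 1 2 3 4 5; do : > "$base/file$i"; done

# Every file and directory consumes an inode; count the files under a path
find "$base" -type f | wc -l    # prints: 5
rm -rf "$base"
```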
The du command reports disk usage by recursively counting the folders and files that we specify:
# du -hs /home/temp/
The flags '-hs' give human-readable output and display only a summary of the enumeration, rather than each nested folder.
Another useful flag is --max-depth, which defines how many levels of folder summaries to list.
# du -hs public_html/
5.5G public_html/
# du -h public_html/ --max-depth=0
# du -h public_html/ --max-depth=1
Then we add this to a pipe along with grep to get only folders that are 1GB or larger:
# du -h public_html/ --max-depth=1 | grep G
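Note that grepping for "G" also matches folder names containing a capital G. Piping into sort -h, which orders human-readable sizes numerically, is a more robust way to surface the largest folders. A self-contained sketch with a temporary tree:

```shell
base=$(mktemp -d)
mkdir -p "$base/big" "$base/small"
head -c 2097152 /dev/zero > "$base/big/file"   # 2 MB
head -c 1024    /dev/zero > "$base/small/file" # 1 KB

# Human-readable sizes, sorted so the largest folders appear last
du -h --max-depth=1 "$base" | sort -h
rm -rf "$base"
```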
The free command gives an instant reading of the free memory on the system.
# free -m
total used free shared buffers cached
Mem: 837 750 86 5 66 201
-/+ buffers/cache: 482 354
Swap: 1999 409 1590
The '-m' flag displays the output in megabytes.
In our output, the total RAM on the system is 837MB, or about 1GB. Of this, 750MB is 'used,' but 66MB of that is buffers and 201MB is cached data; subtracting those, applications are really using about 483MB, leaving roughly 354MB readily available.
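Newer versions of free (procps-ng) do this accounting for us in an 'available' column. The by-hand arithmetic from the output above, as a sketch:

```shell
# Figures (in MB) taken from the example free -m output above
used=750; buffers=66; cached=201
echo "truly used: $((used - buffers - cached)) MB"   # prints: truly used: 483 MB

# Newer free versions add an 'available' column that accounts for reclaimable cache
free -m
```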