Vital Command Line commands for Linux Admins with examples

Need to know some important Command Line commands for Linux Admins?

Read on...

Computers do precisely what we tell them to do. If we tell one to delete every file, it will remove them without question, and may well crash in the process because it deleted its own system files.

If misused, they can cause a great deal of harm to our server.

The command line terminal, or shell on your Linux server, is a potent tool for deciphering activity on the server, performing operations, or making system changes. But with several thousand executable binaries installed by default, what tools are useful, and how should you use them safely?

We recommend becoming comfortable with the terminal over SSH.

Here at Ibmi Media, as part of our Server Management Services, we regularly help our Customers to perform Linux related queries.

In this context, we shall look into the basic commands for Linux Admins and show you even more useful and practical tools.

Vital Command Line for Linux Admins


Moving ahead, let us see a few essential commands our Support Experts suggest to learn about a Linux system.

1. ls

ls lists files in a directory. In the container space, this command can help determine the container image’s directory and files. Besides this, it can help examine permissions.

Here, we can't run myapp because of a permissions issue. When we check the permissions using ls -l, we see there is no "x" (execute) bit in -rw-r--r--; the file is read and write only:

$ ./myapp
bash: ./myapp: Permission denied
$ ls -l myapp
-rw-r--r--. 1 root root 33 Jul 21 18:36 myapp

2. tail

tail displays the last part of a file. For example, we use tail to check what happens in the logs when we make a request to the Apache HTTP server.

Instead of following the log in real time (which tail -f does), we can use the -n option to see just the last 100 lines of the file:

$ tail -n 100 /var/log/httpd/access_log
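As a quick sketch (using a scratch file in /tmp rather than the real access log), we can see the difference between printing the last lines and following the file:

```shell
# Build a scratch log with 200 numbered lines as a stand-in for access_log
seq 1 200 > /tmp/demo_access_log

# Print only the last 100 lines; the first line shown is 101
tail -n 100 /tmp/demo_access_log | head -n 1    # prints: 101

# To watch new entries arrive in real time instead, we would use:
#   tail -f /tmp/demo_access_log
```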

3. cat

cat concatenates and prints files. We can use it to check the contents of the dependencies file or to confirm the version of the application that we have already built locally.

$ cat requirements.txt

Here, printing the file lets us confirm that the Python Flask application has Flask listed as a dependency.
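As a minimal sketch (the file contents below are hypothetical), we can print a requirements file and check the dependency directly:

```shell
# Create a sample requirements file (hypothetical contents)
printf 'Flask==2.0.1\nrequests==2.26.0\n' > /tmp/requirements.txt

# Print the whole file
cat /tmp/requirements.txt

# Or confirm the Flask dependency in one step
grep -i '^flask' /tmp/requirements.txt    # prints: Flask==2.0.1
```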

4. grep

grep searches files for patterns. If we look for a specific pattern in the output of another command, grep shows only the relevant lines.

We can use this command to search log files, specific processes, and more.
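As a small sketch (the log entries below are hypothetical), grep can filter a file by a pattern or simply count the matches:

```shell
# Write a small sample log (hypothetical entries)
printf 'GET /index.html 200\nGET /missing 404\nPOST /login 200\n' > /tmp/demo.log

# Show only the lines containing "404"
grep '404' /tmp/demo.log       # prints: GET /missing 404

# -c counts matching lines instead of printing them
grep -c '200' /tmp/demo.log    # prints: 2
```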

5. env

env allows us to set or print environment variables. During troubleshooting, it is useful for checking whether a wrong environment variable is preventing the application from starting.

For example, the command below checks the environment variables set on the application’s host:

$ env
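env can also run a single command with an extra variable set, without changing the current shell's environment. A brief sketch (APP_MODE is a hypothetical variable name):

```shell
# Run one command with an extra variable set just for that command
env APP_MODE=debug sh -c 'echo "APP_MODE is $APP_MODE"'    # prints: APP_MODE is debug

# The variable is not set in the current shell afterwards
echo "${APP_MODE:-unset}"    # prints: unset
```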

6. id

We can use the id command to check the user identity.

The example below uses Vagrant to test the application and isolate its development environment.

Once we log into the Vagrant box, if we try to install the Apache HTTP Server, the system states that we need to be root to run the command.

To check the user and group, issue the id command:

$ yum -y install httpd
Loaded plugins: fastestmirror
You need to be root to perform this command.

$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

To correct this, we run the command as a superuser, which provides elevated privileges.
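A couple of id flags are worth knowing; uid 0 always identifies root, which scripts can use as a quick privilege check:

```shell
# Print the numeric user ID and the user name
id -u
id -un

# uid 0 is always root; a quick check before doing privileged work:
if [ "$(id -u)" -eq 0 ]; then
    echo "running as root"
else
    echo "not root"
fi
```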

7. chmod

When we run the application binary for the first time on a host, we may receive the error message “permission denied.”

As seen in the example for ls, we can check the permissions of the application binary.

$ ls -l
total 4
-rw-rw-r--. 1 vagrant vagrant 34 Jul 11 02:17

Here we can see that we don’t have execution rights (no “x”) to run the binary.

In such a case, chmod can correct the permissions:

$ chmod +x
[vagrant@localhost ~]$ ls -l
total 4
-rwxrwxr-x. 1 vagrant vagrant 34 Jul 11 02:17

chmod is also useful when we load a binary into a container: it ensures that the container has the correct permissions to execute the binary.
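The whole cycle can be sketched end to end with a stand-in script (the file name is hypothetical):

```shell
# Create a small script as a stand-in for an application binary
printf '#!/bin/sh\necho hello\n' > /tmp/myapp.sh

# Add the execute bit, then run it
chmod +x /tmp/myapp.sh
/tmp/myapp.sh    # prints: hello
```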

8. history

history shows the commands we have issued since the start of the session. We can use it to log which commands we used to troubleshoot the application.

If we want to re-execute a command from the history, we prefix its number with ‘!’ (for example, !42 re-runs command number 42).

9. pipe

The pipe operator (|) is possibly the most useful tool in the shell. It feeds the output of one command directly into the input of another, with no temporary files.

It is useful when we have a large command output that we would like to format, or pass to another program, without first writing it to a file.
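A pipeline can chain any number of commands; each one reads the previous command's output. A small self-contained sketch:

```shell
# Sort the words, drop duplicates, and count the unique ones
printf 'apple\nbanana\napple\ncherry\n' | sort | uniq | wc -l    # prints: 3
```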

Let us connect the commands w and grep using pipe to format the output.

The w command lets us view users logged into the server, and we pass its output to grep to filter for the root user:

# w
08:56:43 up 27 days, 22:17, 2 users, load average: 0.00, 0.00, 0.00
USER TTY LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 08:52 0.00s 0.06s 0.00s w
bob pts/1 09:02 1:59 0.07s 0.06s -bash
# w | grep root
root pts/0 08:52 0.00s 0.06s 0.00s w

10. ps

The ps command shows a ‘process snapshot’ of all currently running programs on the server.

For instance, let us see if the Apache process ‘httpd’ is running:

# ps faux | grep httpd
root 27242 0.0 0.0 286888 700 ? Ss Aug29 1:40 /usr/sbin/httpd -k start
nobody 77761 0.0 0.0 286888 528 ? S Sep17 0:03 \_ /usr/sbin/httpd -k start
nobody 77783 0.0 1.6 1403008 14416 ? Sl Sep17 0:03 \_ /usr/sbin/httpd -k start

We can see several ‘httpd’ processes running here. If there were none, we could safely assume Apache was not running.

The common flag combination for ps is ‘faux’, which shows processes for all users (a), in a user-oriented format (u), including processes without a controlling terminal (x), arranged as a process tree (f).
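One practical wrinkle when combining ps with grep: the grep process itself appears in the process list and matches its own pattern. A common idiom works around this, sketched below:

```shell
# Inspect the current shell's own process entry
ps -p $$ -o pid=,comm=

# When grepping ps output, the grep process itself also matches.
# The bracket trick avoids this: '[h]ttpd' still matches "httpd",
# but the pattern string no longer contains the literal text "httpd".
ps aux | grep '[h]ttpd' || echo "httpd is not running"
```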

11. top

The top command also shows which processes are running on a server, with the advantage of updating in real time and sorting by several different factors.

In short, it dynamically shows the ‘top’ resource consumers. To launch it, we run:

# top

Once inside, we will see a lot of process threads moving around. By default, it will show us processes that use the most CPU at the moment.

If we hold shift and type ‘M’ it will change the sort to processes that are using the most memory.

Hold shift and press ‘P’ to change the sort back to CPU. To quit, we can simply press ‘q’.

12. netstat

netstat shows services running on a server, but in particular, it shows processes that are listening for traffic on any particular network port. It can also display other interface statistics.

To display all publicly listening processes, we run:

# netstat -tunlp

The flags ‘-tunlp’ show TCP (t) and UDP (u) sockets that are listening (l), with numeric addresses (n) and the owning program name (p).

We can scope this down by using grep to see, for instance, what program is listening on port 80:

# netstat -tunlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 27242/httpd
tcp 0 0 :::80 :::* LISTEN 27242/httpd

We can use this information to verify that our services are listening on the ports our configurations specify.

13. ip

The ip command shows network devices, their routes, and a means of manipulating their interfaces.

We use this command to read the information on the interfaces:

# ip a

It is short for ‘ip address show’, and shows the active interfaces on the server:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet brd scope global eth0
inet brd scope global eth0:cp1
inet brd scope global secondary eth0:cp2
inet6 fe80::5054:ff:face:b00c/64 scope link
valid_lft forever preferred_lft forever

This interface supports IPv6, and its link-local address is fe80::5054:ff:face:b00c.

14. lsof

lsof stands for ‘list open files.’ It lists the files that are in use by the system.

For example, consider PHP. To figure out the location of the default PHP error logs, the ps command only tells us whether PHP is running; lsof, however, will give us the details:

# lsof -c php | grep error
php-fpm 13366 root mem REG 252,3 16656 264846 /lib64/
php-fpm 13366 root 2w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13366 root 5w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root mem REG 252,3 16656 264846 /lib64/
php-fpm 13395 root 2w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root 7w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log

The ‘-c’ flag will only list processes that match a certain command name. In the output, we can see that there are two open error logs: /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log and /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log. We can check them to see recently logged errors.

If we use rsync to transfer a large folder, in this case /backup, we can inspect the files rsync has open inside it:

# lsof -c rsync | grep /backup
rsync 48479 root cwd DIR 252,3 4096 4578561 /backup
rsync 48479 root 3r REG 252,3 5899771606 4578764 /backup/2018-09-12/accounts/bob.tar.gz
rsync 48480 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root 1u REG 252,3 150994944 4578600 /backup/temp/2018-09-12/accounts/.bob.tar.gz.yG6Rl2

We can see two regular files open in the /backup directory. Even with quiet output on rsync, we can see that it is currently working on copying the bob.tar.gz file.

15. df

df is a fast command that displays the space used on a system's mounted filesystems. It reads cached usage figures from the filesystem metadata rather than counting files, so it can be slightly out of date if files are actively being moved around.

# df -h

The ‘-h’ flag gives human-readable output in nice round numbers:

Filesystem Size Used Avail Use% Mounted on
/dev/vda3 72G 49G 20G 72% /
tmpfs 419M 0 419M 0% /dev/shm
/dev/vda1 190M 59M 122M 33% /boot
/usr/tmpDSK 3.1G 256M 2.7G 9% /tmp

Here we can see that the primary partition mounted on / is 72% used, with 20GB free. There is no separate /backup partition mounted on the server, so cPanel backups are filling up the primary partition.

df can also show the inode count of mounted filesystems from the same partition table information:

# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda3 4.6M 496K 4.1M 11% /
tmpfs 105K 2 105K 1% /dev/shm
/dev/vda1 50K 44 50K 1% /boot
/usr/tmpDSK 201K 654 201K 1% /tmp

Our main partition has 496,000 inodes used, and just over 4 million inodes free, which is plenty for general use.

If we run out of inodes, the filesystem cannot record the location of any more files or folders, even if free space remains.

16. du

The du command also reports disk usage, but it works by recursively counting the folders and files we specify:

# du -hs /home/temp/
2.4M /home/temp/

The flags ‘-hs’ give human-readable output and display only a summary of the total, rather than each nested folder.

Another useful flag is --max-depth, which defines how many levels of folder summaries to list.

# du -hs public_html/
5.5G public_html/
# du -h public_html/ --max-depth=0
5.5G public_html/
# du -h public_html/ --max-depth=1
8.0K public_html/_vti_txt
8.0K public_html/_vti_cnf
257M public_html/storage
64K public_html/cgi-bin
8.0K public_html/_vti_log
5.0G public_html/images
64K public_html/scripts
8.0K public_html/.well-known
8.0K public_html/_private
5.0M public_html/forum
56K public_html/_vti_pvt
24K public_html/_vti_bin
360K public_html/configs
5.5G public_html/

Then we add this to a pipe along with grep to get only folders that are 1GB or larger:

# du -h public_html/ --max-depth=1 | grep G
5.0G public_html/images
5.5G public_html/
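Note that `grep G` matches any line containing a capital G anywhere, including folder names (e.g. a hypothetical "Gallery" directory). Anchoring the pattern to the size column is safer, sketched here with sample data from the listing above:

```shell
# Sample du output; anchor the pattern to the size column so only
# gigabyte-sized entries match, regardless of the folder names
printf '8.0K\tpublic_html/cgi-bin\n5.0G\tpublic_html/images\n257M\tpublic_html/storage\n' |
    grep -E '^[0-9.]+G'    # prints: 5.0G	public_html/images
```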

17. free

free shows an instantaneous reading of how memory is being used on the system.

# free -m
total used free shared buffers cached
Mem: 837 750 86 5 66 201
-/+ buffers/cache: 482 354
Swap: 1999 409 1590

The -m flag makes free display its output in megabytes.

In our output, the total RAM on the system is 837MB, or about 1GB. Of this, 750MB is ‘used,’ but 66MB of that is buffers and 201MB is cached data the kernel can reclaim. Adding those to the 86MB shown as free, the effectively available RAM on the server is around 354MB, which is what the -/+ buffers/cache line reports.
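The arithmetic from the sample output can be checked directly (newer versions of free print this figure themselves in an "available" column):

```shell
# available ~= free + buffers + cached, with the sample numbers above
awk 'BEGIN { print 86 + 66 + 201 }'    # prints: 353
```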



This article covers a few vital command line tools for Linux admins. The Linux command line is a text interface to your computer. It lets users execute commands by typing them at the terminal, or run commands automatically from shell scripts.

Common commands in Linux:

1. su command

The su command exists on most Unix-like systems. It lets you run a command as another user, provided you know that user's password. When run with no user specified, su defaults to the root account. A specific command to run can be passed using the -c option.

2. which command

The which command locates the executable file associated with a given command by searching the PATH environment variable. It has three return statuses: 0 if all specified commands are found and executable, 1 if one or more could not be found or are not executable, and 2 if an invalid option was given.
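The return statuses are easy to observe in practice (the missing command name below is deliberately made up):

```shell
# Locate an executable on $PATH; status 0 means it was found
which sh && echo "found"

# A nonexistent command yields a non-zero status
which no-such-command-xyz >/dev/null 2>&1 || echo "not found"
```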

3. whoami command

The whoami command is available both on Unix-like operating systems and on Windows. Its name is a concatenation of the words “who am i.” It displays the username of the current user, and is equivalent to running the id command with the -un options.
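The equivalence with id is straightforward to verify:

```shell
# whoami and "id -un" print the same thing: the current user name
whoami
id -un
[ "$(whoami)" = "$(id -un)" ] && echo "same user"    # prints: same user
```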

4. w command

w is a command-line utility that displays information about currently logged in users and what each user is doing. It also gives information about how long the system has been running, the current time, and the system load average.

Facts about the demand for Linux admins

1. The job prospects for Linux System Administrator are favorable. 

2. According to the US Bureau of Labor Statistics (BLS), there is expected to be a growth of 6 percent from 2016 to 2026. 

3. Candidates who have a firm hold on cloud computing and other latest technologies have bright chances.
