GlusterFS install CentOS 7

Are you trying to install GlusterFS on CentOS 7?

This guide is for you.

GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 while others are dual licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.

The main advantage of GlusterFS is that it eliminates the need for a metadata server, which can dramatically improve performance and helps to unify data and objects.

Also, its simplicity, elasticity, scalability, and flexibility make it one of the best distributed file systems.

Here at Ibmi Media, as part of our Server Management Services, we regularly help our Customers to install GlusterFS.

In this context, we shall look into the steps to install GlusterFS on CentOS 7.

How to install GlusterFS on CentOS 7?

Here, you will learn how to set up GlusterFS storage on RHEL 7.x and CentOS 7.x. We are considering 4 servers.

Since we are using hostnames instead of a dedicated DNS server, we add entries for the four servers (server1, server2, server3 and server4) to the /etc/hosts file on each node.
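For illustration, the /etc/hosts entries could look like the following. The IP addresses below are placeholders for this example and must be replaced with the actual addresses of your servers:

```
192.168.1.11 server1
192.168.1.12 server2
192.168.1.13 server3
192.168.1.14 server4
```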

Initially, we set up the Gluster repo and the EPEL repo, and then install the GlusterFS server package. For that, we run the below commands,

yum install wget
yum install centos-release-gluster -y
yum install epel-release -y
yum install glusterfs-server -y

After installing the package, we start and enable the glusterd service using the commands,

systemctl start glusterd
systemctl enable glusterd

Then we allow the ports on the firewall so that the servers can communicate and form storage cluster. For that, we run the following commands,

firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
firewall-cmd --zone=public --add-port=24009/tcp --permanent
firewall-cmd --zone=public --add-service=nfs --add-service=samba --add-service=samba-client --permanent
firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp --permanent
firewall-cmd --reload

Distributing the Volume Setup :

Now, let's form a trusted storage pool consisting of server 1 and server 2, create bricks on them, and then create a distributed volume.

We run the below command from the server 1 console to form a trusted storage pool with server 2.

gluster peer probe server2

Then we check the peer status on server 1 using the command

gluster peer status

Brick 1 creation on server 1

For setting up the brick we need to create the logical volumes on the raw disk (/dev/sdb).

For that, we run the below commands on server 1,

pvcreate /dev/sdb
vgcreate vg_bricks /dev/sdb
lvcreate -L 14G -T vg_bricks/brickpool1

Here, in the last command, the brickpool1 is the name of the thin pool.

Then we create a logical volume of 3GB

lvcreate -V 3G -T vg_bricks/brickpool1 -n dist_brick1

Now we format the logical Volume using xfs file system

mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick1
mkdir -p /bricks/dist_brick1

We then mount the brick using the mount command

mount /dev/vg_bricks/dist_brick1 /bricks/dist_brick1/

If we want to mount it permanently then we add the below line in /etc/fstab

/dev/vg_bricks/dist_brick1 /bricks/dist_brick1 xfs rw,noatime,inode64,nouuid 1 2

Then we create a directory with brick under the mount point.

mkdir /bricks/dist_brick1/brick

Brick 2 creation on server 2

We create brick 2 on server 2, similarly to how we created brick 1, by running the below commands

pvcreate /dev/sdb ; vgcreate vg_bricks /dev/sdb
lvcreate -L 14G -T vg_bricks/brickpool2
lvcreate -V 3G -T vg_bricks/brickpool2 -n dist_brick2
mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick2
mkdir -p /bricks/dist_brick2
mount /dev/vg_bricks/dist_brick2 /bricks/dist_brick2/
mkdir /bricks/dist_brick2/brick

Then we create a distributed volume using the below command

gluster volume create distvol server1:/bricks/dist_brick1/brick server2:/bricks/dist_brick2/brick
gluster volume start distvol

We verify the volume status using the following command

gluster volume info distvol


Mount the Distributed Volume on the Client :

Before mounting the volume, we first have to make sure that the glusterfs-fuse package is installed on the client.

We then log into the client console and run the below command to install glusterfs-fuse

yum install glusterfs-fuse -y

We create a mount point for the distributed volume

mkdir /mnt/distvol

Now we mount the "distvol" volume using the below mount command :

mount -t glusterfs -o acl server1:/distvol /mnt/distvol/

For a permanent mount, we add the below entry in the /etc/fstab file,

server1:/distvol /mnt/distvol glusterfs _netdev 0 0

We run the df command to verify the mounting status of volume.

df -Th


Replicating the Volume Setup :

For the replicated volume setup we will use server 3 and server 4, and we assume an additional disk (/dev/sdb) for GlusterFS is already attached to both servers.

We add the server 3 and server 4 in trusted storage pool

gluster peer probe server3
gluster peer probe server4

We create and mount the brick on server 3. For that, we run the below commands in server 3,

pvcreate /dev/sdb ; vgcreate vg_bricks /dev/sdb
lvcreate -L 14G -T vg_bricks/brickpool3
lvcreate -V 3G -T vg_bricks/brickpool3 -n shadow_brick1
mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick1
mkdir -p /bricks/shadow_brick1
mount /dev/vg_bricks/shadow_brick1 /bricks/shadow_brick1/
mkdir /bricks/shadow_brick1/brick

For permanent mounting, we add the below entry in the /etc/fstab file,

/dev/vg_bricks/shadow_brick1 /bricks/shadow_brick1/ xfs rw,noatime,inode64,nouuid 1 2

We perform the same steps on server 4 to create and mount its brick, from the server 4 console.

pvcreate /dev/sdb ; vgcreate vg_bricks /dev/sdb
lvcreate -L 14G -T vg_bricks/brickpool4
lvcreate -V 3G -T vg_bricks/brickpool4 -n shadow_brick2
mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick2
mkdir -p /bricks/shadow_brick2
mount /dev/vg_bricks/shadow_brick2 /bricks/shadow_brick2/
mkdir /bricks/shadow_brick2/brick

We create the replicated volume using the below gluster command.

gluster volume create shadowvol replica 2 server3:/bricks/shadow_brick1/brick server4:/bricks/shadow_brick2/brick
volume create: shadowvol: success: please start the volume to access data
gluster volume start shadowvol

We verify the Volume info using below gluster command :

gluster volume info shadowvol

Note: One of the limitations of Gluster storage is that the GlusterFS server only supports version 3 of the NFS protocol.

We add the below entry in the file "/etc/nfsmount.conf" on both the storage servers (server 3 and server 4) so that clients default to NFS version 3. Defaultvers is the standard nfsmount.conf option for this; verify it suits your setup:

Defaultvers=3

After making the above entry we reboot both servers once. Then we use the below mount command to mount the volume "shadowvol",

mkdir -p /mnt/shadowvol
mount -t nfs -o vers=3 server3:/shadowvol /mnt/shadowvol/

For a permanent mount, we add the following entry in the /etc/fstab file,

server3:/shadowvol /mnt/shadowvol nfs vers=3 0 0

We verify the size and mounting status of the volume using the command

df -Th


Distribute-Replicate Volume Setup :

For setting up Distribute-Replicate volume we will be using one brick from each server and will form the volume. We will create the logical volume from the existing thin pool on the respective servers.

Create a brick on all 4 servers using following commands

In Server 1

lvcreate -V 3G -T vg_bricks/brickpool1 -n prod_brick1
mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick1
mkdir -p /bricks/prod_brick1
mount /dev/vg_bricks/prod_brick1 /bricks/prod_brick1/
mkdir /bricks/prod_brick1/brick

Server 2

lvcreate -V 3G -T vg_bricks/brickpool2 -n prod_brick2
mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick2
mkdir -p /bricks/prod_brick2
mount /dev/vg_bricks/prod_brick2 /bricks/prod_brick2/
mkdir /bricks/prod_brick2/brick

In Server 3

lvcreate -V 3G -T vg_bricks/brickpool3 -n prod_brick3
mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick3
mkdir -p /bricks/prod_brick3
mount /dev/vg_bricks/prod_brick3 /bricks/prod_brick3/
mkdir /bricks/prod_brick3/brick

Server 4

lvcreate -V 3G -T vg_bricks/brickpool4 -n prod_brick4
mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick4
mkdir -p /bricks/prod_brick4
mount /dev/vg_bricks/prod_brick4 /bricks/prod_brick4/
mkdir /bricks/prod_brick4/brick
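The four per-server blocks above all follow one pattern. As a rough sketch, the repeated steps can be wrapped in a small shell function; make_brick is a hypothetical helper name, not part of GlusterFS, and it assumes the volume group and thin pool already exist on the server where it runs:

```shell
#!/bin/bash
# Hypothetical helper: carve a 3G thin LV out of an existing thin pool,
# format it with XFS, mount it, and create the brick directory inside it.
make_brick() {
  pool="$1"   # thin pool, e.g. vg_bricks/brickpool1
  name="$2"   # brick LV name, e.g. prod_brick1
  lvcreate -V 3G -T "$pool" -n "$name"
  mkfs.xfs -i size=512 "/dev/vg_bricks/$name"
  mkdir -p "/bricks/$name"
  mount "/dev/vg_bricks/$name" "/bricks/$name/"
  mkdir "/bricks/$name/brick"
}

# On server 1, for example:
# make_brick vg_bricks/brickpool1 prod_brick1
```

Running the same helper with brickpool2/prod_brick2 on server 2, and so on, reproduces the four blocks above.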

Now we create the volume with the name "dist-rep-vol" using the below gluster command :

gluster volume create dist-rep-vol replica 2 server1:/bricks/prod_brick1/brick server2:/bricks/prod_brick2/brick server3:/bricks/prod_brick3/brick server4:/bricks/prod_brick4/brick force
gluster volume start dist-rep-vol

We verify volume info using below command :

gluster volume info dist-rep-vol

In this volume, files are first distributed across the two replica pairs, and each file is then replicated between the two bricks of its pair.
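As a toy illustration of that placement, hashing the file name can pick one replica pair, and the file is then written to both bricks of that pair. This is a deliberately simplified sketch, not Gluster's actual DHT algorithm, and place_file is a made-up name:

```shell
#!/bin/bash
# Toy sketch of distribute-replicate placement: hash the file name to choose
# a replica pair, then "replicate" the file to both bricks of that pair.
place_file() {
  name="$1"
  # cksum of the name gives a deterministic number to distribute on
  hash=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
  if [ $(( hash % 2 )) -eq 0 ]; then
    echo "$name -> server1:prod_brick1 + server2:prod_brick2"
  else
    echo "$name -> server3:prod_brick3 + server4:prod_brick4"
  fi
}

place_file "file-a.txt"
place_file "file-b.txt"
```

The key property the sketch shows is that the same name always hashes to the same pair, so reads can find the file without consulting any metadata server.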

Now we mount this volume on the client machine via glusterfs.

We first create the mount point for this volume :

mkdir /mnt/dist-rep-vol
mount.glusterfs server1:/dist-rep-vol /mnt/dist-rep-vol/

Then we add the below entry in the /etc/fstab file for a permanent mount,

server1:/dist-rep-vol /mnt/dist-rep-vol glusterfs _netdev 0 0

After that we verify the Size and volume using df command :

df -Th

That's it.

[Need urgent assistance with GlusterFS installation? – We'll help you. ]


This article will guide you on the steps to install and setup #GlusterFS. 

GlusterFS is a scalable #network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. 

GlusterFS has a client and #server component. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a #volume.

To install GlusterFS:

1. Have at least two nodes running CentOS 7, for example two servers named "server1" and "server2".

2. Format and mount the bricks.

3. Install GlusterFS.

4. Configure #Iptables.

5. Configure the trusted pool.

6. Set up a GlusterFS volume.

7. Test the GlusterFS volume.
