EX236 - Hybrid Cloud Storage
Published on 2014-10-25

This is the first in my series of posts for each of the five exams I will take to earn the RHCA. I'll begin with the EX236 - Hybrid Cloud Storage. This exam is closely related to RH436 - Enterprise Clustering and Storage Management.
- Introduction
- Deploy to physical and virtual hardware
- Configure a Red Hat Storage Server storage pool
- Create individual storage bricks
- Create various Red Hat Storage Server volumes
- Format the volumes with an appropriate file system
- Extend existing storage volumes
- Configure clients to use NFS
- Configure clients to use SMB
- Configure quotas and ACLs
- Configure IP failover for NFS- and SMB-based cluster services
- Configure geo-replication services
- Configure unified object storage
- Troubleshoot Red Hat Storage Server problems
- Monitor Red Hat Storage Server workloads
- Perform management tasks
Introduction
This exam covers the Red Hat Storage Server product, which is the commercially supported solution that Red Hat offers. If you want to study for this exam without having to purchase a support license, there is a solution suggested by the GlusterFS team themselves: use the community version of glusterfs-server. Note: as firewall configuration is not an objective for this exam, I am assuming there is no firewall on either the client or the nodes in the following examples.
The Gluster team provides a link to the main download site for GlusterFS, but since I am using CentOS it is easier to use the repo with yum. It wasn't obvious to me at first, but there is a repo file for CentOS 7 buried a few levels deep:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
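To use it, the repo file just needs to land in /etc/yum.repos.d/ on every node (the same command shows up again later when configuring clients):

[root@node1 ~]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo -O /etc/yum.repos.d/glusterfs-epel.repo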
This document is organized by objective, in the order Red Hat gives them. It is better to use it as a reference for each objective rather than as a walkthrough. For example, the objective to create bricks comes after the objective to configure a storage pool, but normally you would create the bricks before adding them to a pool.
A quick note on terminology
A Gluster cluster is a group, or "pool," of servers, or "nodes," each of which provides one or more shared filesystems, or "bricks." A group of bricks makes up a "volume." A "client" is any device which accesses the resources made available through the pool. More details can be found in the common setup criteria.
Deploy to physical and virtual hardware
The first objective is: "Deploy the Red Hat Storage Server appliance on both physical and virtual hardware and work with existing Red Hat Storage Server appliances."
I'm not actually sure what they mean by "deploy." For now I'm going to assume that they mean "install" so I'll describe installing the glusterfs-server on a node.
There isn't any difference between installing on physical and virtual hardware; the difference comes in how the pools and bricks are configured. The docs on installing to bare metal, for example, detail the infrastructure: DNS, network fabric, etc.
Once the repo (as described in the Introduction) is configured I can follow the guide provided by Server World to install and start the GlusterFS daemon. I will repeat this procedure for as many nodes as I want in the pool.
[root@node1 ~]# yum install glusterfs-server -y
[root@node1 ~]# systemctl start glusterd.service
[root@node1 ~]# systemctl enable glusterd.service
Configure a Red Hat Storage Server storage pool
To create the pool, I'll name each node using a node# pattern: node1.pequeno.in, node2.pequeno.in, etc.
Here, I tell the nodes to learn about each other. Once a node has been added to a pool it can inform other nodes about the ones it already knows about. When node1 probes node2, I get one kind of success message:
[root@node1 ~]# gluster peer probe node2.pequeno.in
peer probe: success.
But when I ask node3 to probe node1, I get a different kind of success message, since node3 already knows about node1.
[root@node3 ~]# gluster peer probe node1.pequeno.in
peer probe: success. Host node1.pequeno.in port 24007 already in peer list
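At any point I can confirm who is in the pool with peer status, which lists each peer's hostname, UUID, and connection state (the same command shows up again in the section on extending volumes):

[root@node1 ~]# gluster peer status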
Create individual storage bricks
The full objective is: "Create individual storage bricks on either physical devices or logical volumes." These nodes have additional block storage attached which will become the bricks; the added block device is /dev/xvdb on each of the nodes.
[root@node1 ~]# mkdir /glusterfs
[root@node1 ~]# fdisk /dev/xvdb
[root@node1 ~]# mkfs -t xfs /dev/xvdb5
[root@node1 ~]# mount /dev/xvdb5 /glusterfs
[root@node1 ~]# tail -1 /etc/mtab >> /etc/fstab
[root@node1 ~]# mkdir /glusterfs/distributed
Create various Red Hat Storage Server volumes
The full objective is:
Create various Red Hat Storage Server volumes such as:
- Distributed
- Replicated
- Distributed-replicated
- Stripe-replicated
- Distributed-striped
- Distributed-striped-replicated
I'll break them down into their own individual sections so I can focus more directly on each. This section borrows heavily from the "Setting up GlusterFS Server Volumes" guide provided by the Gluster team. For the examples I'll be using the same FQDNs (node1.pequeno.in, etc.) for each setup, but for each section assume these are newly created pools and bricks.
Distributed
Distributed is the default mode GlusterFS will use if I don't specify any other configuration. In this example I'm only giving the name of the volume, vol_distributed.
[root@node1 ~]# gluster volume create vol_distributed \
> node1.pequeno.in:/glusterfs/distributed \
> node2.pequeno.in:/glusterfs/distributed \
> node3.pequeno.in:/glusterfs/distributed;
volume create: vol_distributed: success: please start the volume to access data
Start the volume:
[root@node1 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
Even though we started the volume on node1, I can check the status of the volume from any node in the pool. Here, I'm checking the status from node2.
[root@node2 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 72e242ac-6fa8-49a1-a209-e20b80fe90fb
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/distributed
Brick2: node2.pequeno.in:/glusterfs/distributed
Brick3: node3.pequeno.in:/glusterfs/distributed
Replicated
[root@node1 ~]# gluster volume create replication1 replica 2 \
node1.pequeno.in:/glusterfs/replicated \
node2.pequeno.in:/glusterfs/replicated
volume create: replication1: success: please start the volume to access data
[root@node1 ~]# gluster volume start replication1
volume start: replication1: success
I can check the status of the volume on node2:
[root@node2 ~]# gluster volume info

Volume Name: replication1
Type: Replicate
Volume ID: 75182cb0-07c5-4118-86c8-862f2a40d384
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/replicated
Brick2: node2.pequeno.in:/glusterfs/replicated
Distributed-replicated
What is important to note about this configuration is that the syntax is exactly what I used for the replicated volume. The difference is that I am providing more bricks (4) than the replica count (2).
Gluster will replicate automatically across the first two nodes (node1 and node2 each get a copy of file1) and then distribute files on the latter nodes (node3 and node4 each get a copy of file2).
It is also better to have members of a replica set on different servers. While you could put both replicas on the same server, you wouldn't want a physical device replicating to itself. If the replicas are on separate physical devices they are better protected against disk failure.
An important note from the documentation:
Note: The number of bricks should be a multiple of the replica count for a distributed replicated volume.
[root@node1 ~]# gluster volume create dist-repl replica 2 \
> node1.pequeno.in:/glusterfs/distributed-replicated/ \
> node2.pequeno.in:/glusterfs/distributed-replicated/ \
> node3.pequeno.in:/glusterfs/distributed-replicated/ \
> node4.pequeno.in:/glusterfs/distributed-replicated/;
volume create: dist-repl: success: please start the volume to access data

[root@node4 ~]# gluster volume info

Volume Name: dist-repl
Type: Distributed-Replicate
Volume ID: 4a92e432-a1cc-419a-a6a7-7280854619c1
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/distributed-replicated
Brick2: node2.pequeno.in:/glusterfs/distributed-replicated
Brick3: node3.pequeno.in:/glusterfs/distributed-replicated
Brick4: node4.pequeno.in:/glusterfs/distributed-replicated
Stripe-replicated
I'm not sure that image is labeled as clearly as it could be. When I send a file to the volume named strp-repl, it will be striped across the two replica sets: some of the data goes to the first set containing node1 and node2, and the rest goes to the second set containing node3 and node4. To call these sets replicated volumes is misleading, since it's the bricks within each set that are replicated.
Within each set the data is replicated between the bricks: node2 will be a duplicate of the data found on node1, and node4 will be a duplicate of the data found on node3.
[root@node1 ~]# gluster volume create strp-repl stripe 2 replica 2 \
> node1.pequeno.in:/glusterfs/striped-replicated/ \
> node2.pequeno.in:/glusterfs/striped-replicated/ \
> node3.pequeno.in:/glusterfs/striped-replicated/ \
> node4.pequeno.in:/glusterfs/striped-replicated/;
volume create: strp-repl: success: please start the volume to access data

[root@node3 ~]# gluster volume info

Volume Name: strp-repl
Type: Striped-Replicate
Volume ID: c002f35f-68ff-45de-9021-a4e77d68842f
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/striped-replicated
Brick2: node2.pequeno.in:/glusterfs/striped-replicated
Brick3: node3.pequeno.in:/glusterfs/striped-replicated
Brick4: node4.pequeno.in:/glusterfs/striped-replicated
Distributed-striped
This example shows again how distribution is often implied. Here I'm asking GlusterFS to stripe across 2 but I'm giving it 4 bricks. GlusterFS will stripe file1 across node1 and node2, and file2 will be striped across node3 and node4.
[root@node1 ~]# gluster volume create dist-strp stripe 2 \
> node1.pequeno.in:/glusterfs/distributed-striped/ \
> node2.pequeno.in:/glusterfs/distributed-striped/ \
> node3.pequeno.in:/glusterfs/distributed-striped/ \
> node4.pequeno.in:/glusterfs/distributed-striped/;
volume create: dist-strp: success: please start the volume to access data

[root@node2 ~]# gluster volume info

Volume Name: dist-strp
Type: Distributed-Stripe
Volume ID: 5ba0b6fb-8107-473c-88d6-40f75595b2c9
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/distributed-striped
Brick2: node2.pequeno.in:/glusterfs/distributed-striped
Brick3: node3.pequeno.in:/glusterfs/distributed-striped
Brick4: node4.pequeno.in:/glusterfs/distributed-striped
Distributed-striped-replicated
[root@node1 ~]# gluster volume create dist-strp-repl stripe 2 replica 2 \
> node1.pequeno.in:/glusterfs/dist-strp-repl/ \
> node2.pequeno.in:/glusterfs/dist-strp-repl/ \
> node3.pequeno.in:/glusterfs/dist-strp-repl/ \
> node4.pequeno.in:/glusterfs/dist-strp-repl/ \
> node5.pequeno.in:/glusterfs/dist-strp-repl/ \
> node6.pequeno.in:/glusterfs/dist-strp-repl/ \
> node7.pequeno.in:/glusterfs/dist-strp-repl/ \
> node8.pequeno.in:/glusterfs/dist-strp-repl/;
volume create: dist-strp-repl: success: please start the volume to access data

[root@node2 ~]# gluster volume info

Volume Name: dist-strp-repl
Type: Distributed-Striped-Replicate
Volume ID: 2b93ca88-8dc9-401b-bc16-4041b9a733b5
Status: Started
Number of Bricks: 2 x 2 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/dist-strp-repl
Brick2: node2.pequeno.in:/glusterfs/dist-strp-repl
Brick3: node3.pequeno.in:/glusterfs/dist-strp-repl
Brick4: node4.pequeno.in:/glusterfs/dist-strp-repl
Brick5: node5.pequeno.in:/glusterfs/dist-strp-repl
Brick6: node6.pequeno.in:/glusterfs/dist-strp-repl
Brick7: node7.pequeno.in:/glusterfs/dist-strp-repl
Brick8: node8.pequeno.in:/glusterfs/dist-strp-repl
Format the volumes with an appropriate file system
Gluster is flexible when it comes to the type of file system you can format a volume with. The Gluster team suggests XFS or ext4, but there aren't many restrictions from the Gluster side of the fence.
Red Hat, on the other hand, makes a clear choice: XFS.
Red Hat supports only formatting a Logical Volume by using the XFS file system on the bricks with few modifications to improve performance. Ensure to format bricks using XFS on Logical Volume Manager before adding it to a Red Hat Storage volume.
From the documentation, you should format the device with the following command:
# mkfs.xfs -i size=512 <DEVICE>
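As a rough sketch of what "XFS on Logical Volume Manager" looks like in practice (the device, volume group, and logical volume names here are placeholders of mine, not from the docs):

[root@node1 ~]# pvcreate /dev/xvdb
[root@node1 ~]# vgcreate vg_bricks /dev/xvdb
[root@node1 ~]# lvcreate -n brick1 -L 20G vg_bricks
[root@node1 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick1
[root@node1 ~]# mkdir -p /glusterfs/brick1
[root@node1 ~]# mount /dev/vg_bricks/brick1 /glusterfs/brick1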
Extend existing storage volumes
Extend existing storage volumes by adding additional bricks and performing appropriate rebalancing operations
The objectives for this exam don't specify that we should know how to configure a strictly striped volume but I believe that is a useful configuration to test a rebalancing operation.
I'll be using a new node1 and node2 for this example.
[root@node1 ~]# gluster peer probe node2.pequeno.in
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 1

Hostname: node2.pequeno.in
Uuid: 4ce512c2-00d2-4a36-8651-7885588e495a
State: Peer in Cluster (Connected)
Now I'll create a brick on each of the nodes. I'm only showing the commands entered on node1; the exact same commands are run on node2.
[root@node1 ~]# fdisk /dev/xvdb
[root@node1 ~]# mkfs.xfs -i size=512 /dev/xvdb5
[root@node1 ~]# mkdir /glusterfs
[root@node1 ~]# mount /dev/xvdb5 /glusterfs/
[root@node1 ~]# mkdir /glusterfs/striped
I can now create the striped volume strp-vol
[root@node1 ~]# gluster volume create strp-vol stripe 2 \
> node1.pequeno.in:/glusterfs/striped \
> node2.pequeno.in:/glusterfs/striped;
volume create: strp-vol: success: please start the volume to access data
[root@node1 ~]# gluster volume start strp-vol
volume start: strp-vol: success
[root@gluster ~]# gluster volume info

Volume Name: strp-vol
Type: Stripe
Volume ID: 5b54ba8c-706c-48fa-bdf3-befd19095916
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/striped
Brick2: node2.pequeno.in:/glusterfs/striped
I'll have a client use this volume. The gluster docs on installing the native client don't seem to be up to date: they ask you to install the openib package, but that is no longer included in RHEL. It seems that package has been replaced, and it is not needed unless you are planning to use InfiniBand.
The gluster docs say to install several packages for the native client but I didn't have much luck that way. I did have success using the instructions from this guide from server-world.
[root@client ~]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo -O /etc/yum.repos.d/glusterfs-epel.repo
[root@client ~]# yum install glusterfs glusterfs-fuse -y
[root@client ~]# mkdir gluster
[root@client ~]# mount -t glusterfs node1.pequeno.in:/strp-vol gluster/
[root@client gluster]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/xvda1                   20G  1.4G   18G   8% /
devtmpfs                    238M     0  238M   0% /dev
tmpfs                       243M     0  243M   0% /dev/shm
tmpfs                       243M  4.3M  239M   2% /run
tmpfs                       243M     0  243M   0% /sys/fs/cgroup
node1.pequeno.in:/strp-vol  200G   65M  200G   1% /root/gluster
I'll create a bunch of dummy files:
[root@client gluster]# for i in {1..100}; do dd if=/dev/urandom of=$i bs=1M count=1;done
Finally, I can get to the heart of the issue: adding a brick and rebalancing. An important thing to note from this guide:
Note: When expanding distributed replicated and distributed striped volumes, you need to add a number of bricks that is a multiple of the replica or stripe count. For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 4, 6, 8, etc.).
If I attempt to use just 1 brick, I'll get the following error:
[root@node1 ~]# gluster volume add-brick strp-vol node1.pequeno.in:/newbrick/
volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2
So I'll create new bricks on node2 like this:
[root@node2 ~]# fdisk /dev/xvdd
[root@node2 ~]# mkfs.xfs -i size=512 /dev/xvdd5
[root@node2 ~]# mkdir /to-be-added
[root@node2 ~]# mount /dev/xvdd5 /to-be-added/
[root@node2 ~]# cd /to-be-added/
[root@node2 to-be-added]# mkdir brick1
[root@node2 to-be-added]# mkdir brick2
Now on node1 I will add the 2 new bricks to the volume:
[root@node1 ~]# gluster volume add-brick strp-vol \
> node2.pequeno.in:/to-be-added/brick1 \
> node2.pequeno.in:/to-be-added/brick2;
This changes the type of the volume to Distributed-Stripe:
[root@node1 ~]# gluster volume info

Volume Name: strp-vol
Type: Distributed-Stripe
Volume ID: 79ecfcbf-c1d0-4d7a-94e1-78f670a12e0d
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.pequeno.in:/glusterfs/striped
Brick2: node2.pequeno.in:/glusterfs/striped
Brick3: node2.pequeno.in:/to-be-added/brick1
Brick4: node2.pequeno.in:/to-be-added/brick2
Now, we can rebalance:
[root@node1 ~]# gluster volume rebalance strp-vol start
volume rebalance: strp-vol: success: Starting rebalance on volume strp-vol has been successful.
ID: 2a86a8bd-c4f0-42bb-b76e-d4aa1be64ea0
[root@node1 ~]# gluster volume rebalance strp-vol status
                Node  Rebalanced-files        size     scanned    failures     skipped       status   run time in secs
           ---------  ----------------   ---------   ---------   ---------   ---------   ----------   ----------------
           localhost                33      33.0MB         133           0          18    completed              10.00
    node2.pequeno.in                 0      0Bytes         103           0           3    completed               1.00
For more on rebalancing, see http://gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Fix_Layout_Changes and http://www.slashroot.in/gfs-gluster-file-system-complete-tutorial-guide-for-an-administrator.
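One operation from that first link worth knowing: you can rebalance only the layout, without migrating any data, which is cheaper when you just want new bricks to start receiving new files. A minimal sketch against the same strp-vol volume:

[root@node1 ~]# gluster volume rebalance strp-vol fix-layout start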
Configure clients to use NFS
Configure clients to use Red Hat Storage Server appliance volumes using native and network file systems (NFS)
This objective is very straightforward. If you've set up NFS on RHEL before, the process is identical, and there is no considerable difference on the node side of things.
First, node1:
[root@node1 ~]# yum install glusterfs-server -y
[root@node1 ~]# fdisk /dev/xvdb
[root@node1 ~]# mkfs.xfs -i size=512 /dev/xvdb5
[root@node1 ~]# mkdir /glusterfs
[root@node1 ~]# mount /dev/xvdb5 /glusterfs/
[root@node1 ~]# mkdir /glusterfs/distributed
[root@node1 ~]# gluster volume create dist \
> node1.pequeno.in:/glusterfs/distributed/;
volume create: dist: success: please start the volume to access data
[root@node1 ~]# gluster volume start dist
volume start: dist: success
Then the client:
[root@client ~]# yum install nfs-utils -y
[root@client ~]# mkdir remote
[root@client ~]# mount -t nfs node1.pequeno.in:/dist remote/
[root@client ~]# touch remote/testfile
We can verify the file testfile was created on node1:
[root@node1 ~]# ls /glusterfs/distributed/
testfile
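To make the client mount persistent across reboots, an /etc/fstab entry along these lines should work; this is a sketch (the mount point matches the remote/ directory created above, and pinning vers=3 is a precaution since Gluster's built-in NFS server only speaks NFSv3):

node1.pequeno.in:/dist   /root/remote   nfs   defaults,_netdev,vers=3   0 0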
That's about it!
Configure clients to use SMB
Configure clients to use Red Hat Storage Server appliance volumes using SMB
To configure clients to use SMB the process is nearly identical to the way that you would configure a typical SMB share.
Note: You must repeat these steps on each Gluster node. For more advanced configurations, see Samba documentation.
First, node1:
[root@node1 ~]# yum install glusterfs-server -y
[root@node1 ~]# fdisk /dev/xvdb
[root@node1 ~]# mkfs.xfs -i size=512 /dev/xvdb5
[root@node1 ~]# mkdir /glusterfs
[root@node1 ~]# mount /dev/xvdb5 /glusterfs/
[root@node1 ~]# mkdir /glusterfs/distributed
[root@node1 ~]# gluster volume create dist \
> node1.pequeno.in:/glusterfs/distributed/;
volume create: dist: success: please start the volume to access data
[root@node1 ~]# gluster volume start dist
volume start: dist: success
Now, still on node1, we'll have to configure Samba as well. There are ways to create public Samba shares that do not require a username and password for authentication, but that leads a little further into a lesson on how Samba works rather than Gluster, so I'll just create a user to test this configuration.
[root@node1 ~]# yum install samba samba-client -y
[root@node1 ~]# vim /etc/samba/smb.conf
[root@node1 ~]# useradd gfstest
[root@node1 ~]# smbpasswd -a gfstest
Added user gfstest.
[root@node1 ~]# systemctl restart smb.service
[root@node1 ~]# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[gluster]"
Loaded services file OK.
Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

[global]
    workgroup = MYGROUP
    server string = Samba Server Version %v
    log file = /var/log/samba/log.%m
    max log size = 50
    idmap config * : backend = tdb
    cups options = raw

[gluster]
    comment = Gluster over SMB!
    path = /glusterfs/distributed
    valid users = gfstest
    read only = No
    guest ok = Yes
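The edit to /etc/samba/smb.conf is just the share stanza that shows up as the [gluster] section in the testparm dump above; pulled out on its own it looks like this:

[gluster]
    comment = Gluster over SMB!
    path = /glusterfs/distributed
    valid users = gfstest
    read only = No
    guest ok = Yes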
What I'm about to do is BAD AND NOT RECOMMENDED, but to simplify permission issues I'm going to make /glusterfs/distributed accessible to everyone and everything. Granting write access here is necessary if you want to be able to write to the share.
[root@node1 ~]# chmod -R 777 /glusterfs/distributed/
Now I can configure the client to look for that CIFS share:
[root@client ~]# yum install cifs-utils -y
[root@client ~]# mount -t cifs //node1.pequeno.in/gluster -o rw,username=gfstest,password=123456 sambashare/
[root@client ~]# cd sambashare/
[root@client sambashare]# touch testfile
[root@client sambashare]# ls
testfile
Configure quotas and ACLs
Configure Red Hat Storage Server features including disk quotas and POSIX access control lists (ACLs)
I'll begin by configuring node1 and node2 similarly:
[root@node1 ~]# yum install glusterfs-server -y
[root@node1 ~]# fdisk /dev/xvdb
[root@node1 ~]# mkfs.xfs -i size=512 /dev/xvdb5
[root@node1 ~]# mkdir /glusterfs
[root@node1 ~]# mount /dev/xvdb5 /glusterfs/
[root@node1 ~]# mkdir /glusterfs/distributed
I'll make some changes specifically on node1:
[root@node1 ~]# gluster volume create dist \
> node1.pequeno.in:/glusterfs/distributed/ \
> node2.pequeno.in:/glusterfs/distributed/;
volume create: dist: success: please start the volume to access data
[root@node1 ~]# gluster volume start dist
volume start: dist: success
[root@node1 ~]# gluster volume quota dist enable
volume quota : success
[root@node1 ~]# gluster volume quota dist limit-usage / 2GB
volume quota : success
I'll configure the client this way:
[root@client ~]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo -O /etc/yum.repos.d/glusterfs-epel.repo
[root@client ~]# yum install glusterfs glusterfs-fuse -y
[root@client ~]# mkdir gfs
[root@client ~]# mount -t glusterfs node1.pequeno.in:/dist gfs/
And when I try to write a file that is larger than the 2GB limit put in place:
[root@client gfs]# dd if=/dev/zero of=bigfile bs=1M count=3000
dd: error writing ‘bigfile’: Disk quota exceeded
dd: closing output file ‘bigfile’: Disk quota exceeded
[root@client gfs]# ls -lh
total 2.1G
-rw-r--r--. 1 root root 2.1G Oct 19 00:25 bigfile
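Back on one of the nodes, the configured limits and current usage can be checked with the quota list subcommand (output not shown here):

[root@node1 ~]# gluster volume quota dist list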
There are two types of ACLs available for use with gluster, 'access' and 'default'.
You can set two types of POSIX ACLs, that is, access ACLs and default ACLs. You can use access ACLs to grant permission for a specific file or directory. You can use default ACLs only on a directory but if a file inside that directory does not have an ACLs, it inherits the permissions of the default ACLs of the directory. gluster.org
To enable ACLs from the client side, we have to mount the drive using the acl option.
[root@client gfs]# mkdir allow
[root@client gfs]# mkdir deny
[root@client gfs]# useradd alice
[root@client ~]# mkdir /home/shared
[root@client ~]# mount -t glusterfs -o acl node1.pequeno.in:/dist /home/shared/
On the server we must create the users as well. There, we can create the ACLs:
[root@node1 distributed]# useradd alice
[root@node1 distributed]# setfacl -m u:alice:rwx allow/
[root@node1 distributed]# setfacl -m u:alice:--- deny/
From the client we can see the effect of these changes:
[root@client ~]# su alice
[alice@client root]$ cd /home/shared/
[alice@client shared]$ ls
allow  deny
[alice@client shared]$ cd allow/
[alice@client allow]$ touch alicefile
[alice@client allow]$ ls
alicefile
[alice@client allow]$ cd ..
[alice@client shared]$ cd deny/
bash: cd: deny/: Permission denied
[alice@client shared]$ getfacl allow/
# file: allow/
# owner: root
# group: root
user::rwx
user:alice:rwx
user:bob:rwx
group::r-x
mask::rwx
other::r-x
Configure IP failover for NFS- and SMB-based cluster services
This section will follow the Red Hat documentation on this subject very closely. If there is anything that I do not cover with adequate detail I would look there first for further clarification.
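The general shape of that setup is CTDB floating one or more virtual IPs across the Gluster nodes, with a small replicated Gluster volume holding the CTDB lock file. As a heavily abridged sketch only (the file contents and addresses below are my own assumptions, not taken from the Red Hat guide):

[root@node1 ~]# yum install ctdb -y
[root@node1 ~]# cat /etc/ctdb/nodes              # real IPs of the participating nodes, one per line
192.168.122.11
192.168.122.12
[root@node1 ~]# cat /etc/ctdb/public_addresses   # virtual IPs that CTDB will fail over
192.168.122.100/24 eth0
[root@node1 ~]# systemctl start ctdb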
Configure geo-replication services
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
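I haven't worked through the full setup here, but the basic CLI flow from that guide is: establish passwordless SSH to the slave, create the session, start it, and check its status. A sketch with placeholder volume and host names of mine:

[root@node1 ~]# gluster volume geo-replication mastervol slavehost::slavevol create push-pem
[root@node1 ~]# gluster volume geo-replication mastervol slavehost::slavevol start
[root@node1 ~]# gluster volume geo-replication mastervol slavehost::slavevol status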
Configure unified object storage
http://blog.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
Troubleshoot Red Hat Storage Server problems
During the installation of the client I came across this:
[root@client ~]# mount -t glusterfs node1.pequeno.in:/glusterfs/strp-vol gluster/
Mount failed. Please check the log file for more details.
The relevant lines from the logs read:
[2014-10-13 22:41:14.567433] E [glusterfsd-mgmt.c:1297:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2014-10-13 22:41:14.567480] E [glusterfsd-mgmt.c:1398:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/glusterfs/strp-vol)
A quick Google search came up with this mailing list thread, which solves the issue by changing some security settings. Making those changes did not solve the issue for me, but this blog post pointed out that the problem was much simpler. As it turns out, I was attempting to mount using the path when I really just needed to pass the name of the volume:
[root@client ~]# mount -t glusterfs node1.pequeno.in:/strp-vol gluster/
While writing the quotas section I came across this message:
[root@node1 ~]# gluster volume quota dist limit-usage /glusterfs/distributed/ 5GB
quota command failed : Failed to get trusted.gfid attribute on path /glusterfs/distributed/. Reason : No such file or directory
Again I had a syntax error. The gluster docs say:
The directory name should be relative to the volume with the export directory/mount being treated as "/".
So the command I should have run was:
[root@node1 ~]# gluster volume quota dist limit-usage / 5GB
Which will limit the usage of all of dist to 5GB.
While writing the IP failover section I accidentally used a volume before probing for peers. When I tried to use those bricks again I got the following error:
[root@node1 ~]# gluster volume create repl replica 2 \
> node1.pequeno.in:/glusterfs/replicated/ \
> node2.pequeno.in:/glusterfs/replicated/;
volume create: repl: failed: /glusterfs/replicated is already part of a volume
This is a common issue, having to remove bricks for use in another (or in this case the same) volume. In my opinion the solution is ugly and for me it didn't actually work.
I had to rm -rf /glusterfs/replicated and recreate that directory before things would work again. YMMV.
Monitor Red Hat Storage Server workloads
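The built-in profile and top subcommands are the obvious tools for this objective. A minimal sketch, using the dist volume name from earlier sections as a placeholder:

[root@node1 ~]# gluster volume profile dist start
[root@node1 ~]# gluster volume profile dist info
[root@node1 ~]# gluster volume top dist read-perf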
Perform management tasks
Perform Red Hat Storage Server management tasks such as tuning volume options, volume migration, stopping and deleting volumes, and configuring server-side quorum
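As a rough sketch of the commands behind those tasks (volume names and option values are placeholders of mine, not from the exam objectives):

[root@node1 ~]# gluster volume set dist performance.cache-size 256MB          # tune a volume option
[root@node1 ~]# gluster volume remove-brick dist node2.pequeno.in:/glusterfs/distributed start    # migrate data off a brick
[root@node1 ~]# gluster volume remove-brick dist node2.pequeno.in:/glusterfs/distributed commit
[root@node1 ~]# gluster volume stop dist                                      # stop and then delete a volume
[root@node1 ~]# gluster volume delete dist
[root@node1 ~]# gluster volume set dist cluster.server-quorum-type server     # enable server-side quorum
[root@node1 ~]# gluster volume set all cluster.server-quorum-ratio 51%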