Re: [Gluster-users] Problem with SSL/TLS encryption on Gluster 4.0 & 4.1

2018-07-12 Thread David Spisla
Hello Andrei,

I am also using Gluster 4.1 on CentOS and I have the same problem. I tested
it with one volume without network encryption and one with network
encryption, so you are not the only one.
It seems to be a bug. At the moment the only way to get stable I/O is to
disable client.ssl and server.ssl on the volume.
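For reference, the workaround looks roughly like this on the CLI ("vol01" is a placeholder volume name, not one from this thread; some versions require the volume to be stopped before these options can be changed, so stop/start it to be safe):

```shell
# Workaround sketch: disable transport encryption on an existing volume.
# "vol01" is a placeholder; substitute your own volume name.
gluster volume stop vol01
gluster volume set vol01 client.ssl off
gluster volume set vol01 server.ssl off
gluster volume start vol01
```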

Regards
David


2018-07-12 10:45 GMT+02:00 Havriliuc Andrei:

[Gluster-users] Problem with SSL/TLS encryption on Gluster 4.0 & 4.1

2018-07-12 Thread Havriliuc Andrei

Hello,

I am doing some tests with GlusterFS 4.0 and I can't seem to solve some 
SSL/TLS issues. I am trying to set up a two-node replicated Gluster volume 
with SSL/TLS. For this setup, I use 3 KVM VMs (2 storage nodes + 1 
client node). For the networking part, the KVM VMs share a dedicated 
private LAN. Each VM can ping the others, so there is no problem 
with connectivity.


To try to make the procedure I used as clear as possible, I will put all 
commands in chronological order:




=

1. First, I update the systems, install ntp and then reboot:

yum update

yum install ntp

systemctl status ntpd
systemctl start ntpd
systemctl enable ntpd
systemctl status ntpd


=


2. I use a separate partition on the two VM storage nodes. Each of 
the two nodes sees this partition as /dev/sdb. After creating a 
thinly provisioned LV on each node, I create an XFS 
filesystem on it:



pvcreate /dev/sdb1

vgcreate -s 32M vg_glusterfs /dev/sdb1

lvcreate -L 20G --thinpool glusterfs_thin_pool vg_glusterfs

lvcreate -V 15G --thin -n glusterfs_thin_vol1 
vg_glusterfs/glusterfs_thin_pool


mkfs.xfs -i size=512 /dev/vg_glusterfs/glusterfs_thin_vol1
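To sanity-check the LVM steps above before formatting, lvs can show the pool relationship and usage (the -o column names assume a reasonably recent lvm2; this is an added check, not part of the original procedure):

```shell
# Show the thin pool, the thin volume, and how full the pool is.
lvs -o lv_name,lv_size,pool_lv,data_percent vg_glusterfs
```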


=


3. On each node, I configured the brick data partition and added the 
following to /etc/fstab:


mkdir /data

echo "/dev/vg_glusterfs/glusterfs_thin_vol1 /data xfs defaults 1 2" >> 
/etc/fstab


mount -a


=

4. After mounting the volume, I see the following in df -Th, which is 
correct:


[root@gluster1 brick1]# df -Th
Filesystem                                   Type      Size  Used  Avail  Use%  Mounted on
/dev/sda1                                    ext4       46G  1.4G    42G    4%  /
devtmpfs                                     devtmpfs  3.9G     0   3.9G    0%  /dev
tmpfs                                        tmpfs     3.9G     0   3.9G    0%  /dev/shm
tmpfs                                        tmpfs     3.9G  8.6M   3.9G    1%  /run
tmpfs                                        tmpfs     3.9G     0   3.9G    0%  /sys/fs/cgroup
tmpfs                                        tmpfs     783M     0   783M    0%  /run/user/0
/dev/mapper/vg_glusterfs-glusterfs_thin_vol1 xfs        15G   34M    15G    1%  /data



=

5. Create specific volume dirs on all storage nodes:


mkdir -pv /data/glusterfs/${HOSTNAME%%.*}/vol01
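As an aside, ${HOSTNAME%%.*} is plain POSIX parameter expansion: %% deletes the longest suffix matching the pattern ".*", i.e. everything from the first dot onward, leaving only the short host name. A minimal sketch with an example value:

```shell
# "%%.*" removes the longest suffix matching ".*",
# reducing an FQDN to its first label.
fqdn="gluster1.example.com"   # example value, not from this setup
echo "${fqdn%%.*}"            # -> gluster1
```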


=

6. Add entries in /etc/hosts:

vim /etc/hosts

192.168.10.233    gluster1
192.168.10.234    gluster2
192.168.10.237    gluster-client
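A quick way to confirm the entries are actually picked up (getent consults /etc/hosts through nsswitch, so it checks exactly what Gluster will see):

```shell
# Each lookup should print the address configured in /etc/hosts.
getent hosts gluster1 gluster2 gluster-client
```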

=

7. Install gluster from the CentOS SIG:


yum search centos-release-gluster
yum install centos-release-gluster40


yum install glusterfs-server

=

8. Set up TLS/SSL encryption on all nodes and clients (gluster1, 
gluster2, gluster-client):


openssl genrsa -out /etc/ssl/glusterfs.key 2048

In gluster1 node:
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=gluster1" 
-out /etc/ssl/glusterfs.pem

In gluster2 node:
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=gluster2" 
-out /etc/ssl/glusterfs.pem

In gluster-client node:
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj 
"/CN=gluster-client" -out /etc/ssl/glusterfs.pem
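Two checks worth adding at this point (standard openssl invocations, not Gluster-specific): print the subject and expiry of the generated certificate, and confirm the certificate actually matches the private key by comparing moduli:

```shell
# Print subject CN and expiry of the self-signed certificate.
openssl x509 -in /etc/ssl/glusterfs.pem -noout -subject -enddate

# The two digests must be identical if key and cert belong together.
openssl rsa  -in /etc/ssl/glusterfs.key -noout -modulus | openssl md5
openssl x509 -in /etc/ssl/glusterfs.pem -noout -modulus | openssl md5
```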



=

9. On another box, I concatenate all of the .pem certificates into a .ca 
file.


Bring all .pem files locally:

scp gluster1:/etc/ssl/glusterfs.pem gluster01.pem
scp gluster2:/etc/ssl/glusterfs.pem gluster02.pem
scp gluster-client:/etc/ssl/glusterfs.pem gluster-client.pem

For storage nodes, I concatenate all .pem certificates (including the 
client's .pem):


cat gluster01.pem gluster02.pem gluster-client.pem > glusterfs-nodes.ca

For the client, I concatenate only the storage nodes' certificates:

cat gluster01.pem gluster02.pem > glusterfs-client.ca
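A cheap sanity check on the bundles (given the concatenation above, the node bundle should contain three certificates and the client bundle two):

```shell
# Count PEM certificate blocks in each bundle.
grep -c 'BEGIN CERTIFICATE' glusterfs-nodes.ca    # expect 3
grep -c 'BEGIN CERTIFICATE' glusterfs-client.ca   # expect 2
```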


=


10. Put glusterfs-nodes.ca file on all the storage nodes (this includes 
storage nodes .pem + client's .pem):


scp glusterfs-nodes.ca gluster1:/etc/ssl/glusterfs.ca
scp glusterfs-nodes.ca gluster2:/etc/ssl/glusterfs.ca

Put the glusterfs-client.ca file on the client (this contains 
only the storage nodes' .pem certificates):


scp glusterfs-client.ca gluster-client:/etc/ssl/glusterfs.ca


=


11. Enable management encryption on each node and client (command run on 
gluster1, gluster2, gluster-client):



touch /var/lib/glusterd/secure-access


=


12. Start, enable and check status of glusterd on gluster1 and gluster2:

systemctl start glusterd
systemctl enable glusterd
systemctl status glusterd

=
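The message is cut off here. For completeness, the usual continuation with standard Gluster commands would look roughly like the following sketch (volume name "vol01" and the brick paths follow the naming used in step 5; the auth.ssl-allow list matches the certificate CNs from step 8 -- this is a hedged reconstruction, not the poster's exact commands):

```shell
# From gluster1: form the trusted pool.
gluster peer probe gluster2

# Create the 2-way replicated volume on the bricks prepared in step 5.
gluster volume create vol01 replica 2 \
    gluster1:/data/glusterfs/gluster1/vol01 \
    gluster2:/data/glusterfs/gluster2/vol01

# Restrict access to the certificate CNs, enable I/O-path
# encryption, and start the volume.
gluster volume set vol01 auth.ssl-allow 'gluster1,gluster2,gluster-client'
gluster volume set vol01 client.ssl on
gluster volume set vol01 server.ssl on
gluster volume start vol01
```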