Hi All,
I have now copied /var/lib/glusterd/geo-replication/secret.pem.pub (the public
key) from master3 to /root/.ssh/authorized_keys on drtier1data, and I can now
ssh from master node3 to drtier1data using the georep key
(/var/lib/glusterd/geo-replication/secret.pem).
But I am still getting the
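For reference, the geo-replication session state can be checked from the primary
with something like the following (volume names are placeholders; only the
secondary host drtier1data comes from the thread):

gluster volume geo-replication <primary-vol> root@drtier1data::<secondary-vol> status detail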
Hi Strahil,
ok, that's what I did now to create the certificate:
openssl req -x509 -sha256 -key glusterfs.key -out "glusterfs.pem" -days 365 \
  -subj "/C=de/ST=SH/L=St. Michel/O=stka/OU=gluster-nodes/CN=c01.gluster" \
  -addext "subjectAltName = DNS:192.168.56.41"
You didn't specify the IP correctly in the SANs, but I'm not sure if that's the
root cause.
In the SANs section, specify all hosts plus their IPs:
IP.1 = 1.2.3.4
IP.2 = 2.3.4.5
DNS.1 = c01.gluster
DNS.2 = c02.gluster
What is the output from the client of:
openssl s_client -showcerts -connect c02.gluster:24007
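For illustration, a self-signed certificate covering both nodes and their IPs
could be generated roughly like this (hostnames and IPs are placeholders;
-addext requires OpenSSL 1.1.1 or newer):

openssl req -x509 -sha256 -key glusterfs.key -out glusterfs.pem -days 365 \
  -subj "/C=de/ST=SH/L=St. Michel/O=stka/OU=gluster-nodes/CN=c01.gluster" \
  -addext "subjectAltName = DNS:c01.gluster, DNS:c02.gluster, IP:192.168.56.41, IP:192.168.56.42"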
Hi Strahil,
there's no arbiter: 3 servers with 5 bricks each.
Volume Name: workdata
Type: Distributed-Replicate
Volume ID: 7d1e23e5-0308-4443-a832-d36f85ff7959
Status: Started
Snapshot Count: 0
Number of Bricks: 5 x 3 = 15
The "problem" is: the number of files/entries to-be-healed has
Gluster doesn't use the ssh key in /root/.ssh, so you need to exchange the
public key that corresponds to /var/lib/glusterd/geo-replication/secret.pem.
If you don't know the pub key, google how to obtain it from the private key.
Ensure that all hosts can ssh to the secondary before proceeding.
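A quick way to derive that public key from the private key (standard OpenSSH;
the path is taken from the thread):

ssh-keygen -y -f /var/lib/glusterd/geo-replication/secret.pem > secret.pem.pub
# then append the contents of secret.pem.pub to /root/.ssh/authorized_keys on the secondary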
Hi Strahil,
As mentioned in my last email, I have copied the gluster public key from
master3 to the secondary server, and I can now ssh from all master nodes to the
secondary server, but I am still getting the same error.
[root@master1 geo-replication]# ssh root@drtier1data -i
Morning,
a few bad apples - but which ones? I checked glustershd.log on the "bad"
server and counted today's "gfid mismatch" entries (2800 in total):
44 /212>,
44 /174>,
44 /94037803>,
44 /94066216>,
44 /249771609>,
44 /64235523>,
44 /185>,
etc. But as I said, these are
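For reference, a per-entry count like the one above can be produced with
something along these lines (the log path is the usual default; the exact field
to extract depends on the log format, so treat this as a sketch):

grep 'gfid mismatch' /var/log/glusterfs/glustershd.log \
  | grep "$(date +%Y-%m-%d)" \
  | awk '{print $NF}' | sort | uniq -c | sort -rn | head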
2800 is too much. Most probably you are affected by a bug. How old are the
clients? Is only 1 server affected? Have you checked whether a client might not
be able to update all 3 copies?
If it's only 1 system, you can remove the brick, reinitialize it and then bring
it back for a full sync.
Best
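For what it's worth, a rough sketch of that remove/reinitialize/resync procedure
using the reset-brick workflow (volume name from the thread; hostname and brick
path are placeholders; check the documentation for your release before running
anything):

gluster volume reset-brick workdata serverX:/gluster/brick1 start
# wipe and re-create the brick directory on serverX, keeping the same path, then:
gluster volume reset-brick workdata serverX:/gluster/brick1 serverX:/gluster/brick1 commit force
gluster volume heal workdata full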