Hello,

I hope someone can help me with a strange issue I am experiencing:

I run an EC2 stack, call it v1, that hosts a GlusterFS volume with three
single-brick nodes in replica mode.
I take an EBS volume-level snapshot of one of the bricks (not a
Gluster-level snapshot).
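For reference, I take the snapshot with the AWS CLI, roughly like this
(the volume ID and description are placeholders):

    # EBS-level snapshot of one brick's volume, not a gluster snapshot
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
        --description "v1 brick snapshot"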

Now I want to spin up a new EC2 stack, v2, using that single snapshot from
v1 to build all three bricks in v2.
The EBS volumes are built successfully, and I can see that all the data is
there on all three bricks. I then clean each brick using the usual setfattr
and rm -rf .glusterfs routine, as sketched below.
Next I start glusterd, probe the peers, and create a volume using the same
volume name that already exists on the bricks, with the same replica mode;
there are no errors and everything looks fine (the command sequence is
sketched after this paragraph). But when I mount the volume I see only the
top-level directory: no files, no subdirectories. 'gluster volume heal
volume-name full' and 'ls' on the mount do not change the situation;
'volume info' and 'volume status' report that all is well, and heal info
shows zero files.
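The command sequence is roughly this (hostnames, the volume name gvol, and
brick paths are placeholders; exact flags may differ from what I actually
ran):

    service glusterd start
    gluster peer probe node2
    gluster peer probe node3
    # same volume name and replica count as the v1 volume
    gluster volume create gvol replica 3 \
        node1:/export/brick1 node2:/export/brick1 node3:/export/brick1
    gluster volume start gvol
    mount -t glusterfs node1:/gvol /mnt/gvol
    # trigger a full self-heal, then check what it reports
    gluster volume heal gvol full
    gluster volume heal gvol info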
I am using the latest GlusterFS, 3.7.5.

Thank you



