[Gluster-users] sshd raid 5 gluster 3.7.13

2016-07-30 Thread Ricky Venerayan
Have any of you used SSHD (solid-state hybrid drive) RAID 5 with Gluster 3.7.13? If not, I will be using it and will post the outcome.

Re: [Gluster-users] anyone who has used SSHD RAID 5 with Gluster 3.7.13?

2016-07-30 Thread Lindsay Mathieson
On 31/07/2016 10:46 AM, Lenovo Lastname wrote: Has anyone used SSHD RAID 5 with Gluster 3.7.13 with sharding? If not, I will let you know what happens. I will be using Seagate SSHD 1TB x3 with 32G NAND. Do you mean the underlying brick is RAID 5? How many bricks and what

[Gluster-users] anyone who has used SSHD RAID 5 with Gluster 3.7.13?

2016-07-30 Thread Lenovo Lastname
Has anyone used SSHD RAID 5 with Gluster 3.7.13 with sharding? If not, I will let you know what happens. I will be using Seagate SSHD 1TB x3 with 32G NAND.

[Gluster-users] managing slow drives in cluster

2016-07-30 Thread Jay Berkenbilt
We're using glusterfs in Amazon EC2 and observing certain behavior involving EBS volumes. The basic situation is that, in some cases, clients can write data to the file system at a rate such that the gluster daemon on one or more of the nodes may block in disk wait for longer than 42 seconds,
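The 42-second figure here corresponds to GlusterFS's default network.ping-timeout, after which clients treat the unresponsive brick as down. A minimal sketch of inspecting and raising that timeout with the standard CLI, where the volume name "myvol" and the 60-second value are purely illustrative:

    gluster volume get myvol network.ping-timeout      # show the current value (defaults to 42)
    gluster volume set myvol network.ping-timeout 60   # raise it so short EBS stalls no longer trip the timeout

Raising the timeout only papers over brief stalls; it does not address why the EBS-backed brick blocks in disk wait in the first place.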

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Soumya Koduri
Inode stored in the shard xlator local is NULL. CCing Kruthika to comment. Thanks, Soumya
(gdb) bt
#0  0x7f196acab210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x7f196be7bcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2  0x7f195deb1787 in shard_common_inode_write_do
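Reading the trace bottom-up: shard_common_inode_write_do passed a NULL inode into fd_anonymous(), which faults as soon as it tries to take the inode's spinlock (frames #1 and #0). For anyone who wants to pull the same detail out of their own crash, a minimal gdb sketch, where the core file path and the /usr/sbin/glusterfs binary location are assumptions for illustration:

    gdb /usr/sbin/glusterfs /path/to/core   # open the core left behind by the crashed NFS process
    (gdb) bt                                # print a backtrace like the one above
    (gdb) frame 1                           # select the fd_anonymous() frame
    (gdb) info args                         # confirm that inode is 0x0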

[Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Mahdi Adnan
Hi, I would really appreciate it if someone could help me fix my NFS crash; it's happening a lot and causing lots of issues for my VMs. The problem is that every few hours the native NFS server crashes and the volume becomes unavailable from the affected node unless I restart glusterd. The volume is used by VMware ESXi
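A minimal sketch of the kind of check and workaround being described, assuming a volume named "vmstore" (illustrative) on a systemd-based node:

    gluster volume status vmstore nfs   # shows, per node, whether the Gluster NFS server is online and its PID
    less /var/log/glusterfs/nfs.log     # default NFS server log; the crash backtrace usually lands here
    systemctl restart glusterd          # the workaround mentioned above; glusterd respawns the NFS server

Restarting glusterd only restores service on the affected node; the crash itself is the shard/NFS issue analysed in the reply above.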