[Gluster-users] lots of repetitive errors in logs

2013-12-23 Thread Mingfan Lu
I am seeing lots of repetitive errors like the one below in the logs; any thoughts?

[2013-12-24 11:35:19.659143] E [marker-quota-helper.c:230:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e) [0x7f9941ec7a3e] (-->/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x3
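These errors come from the marker/quota translator during lookups. As a first triage step (my suggestion, not from the thread), it may help to confirm whether quota is actually enabled on the volume and whether its limits look sane; a minimal sketch, with myvol as a placeholder volume name:

  # Check volume options; quota shows up under "Options Reconfigured" when enabled
  gluster volume info myvol

  # List the configured quota limits and current usage
  gluster volume quota myvol list

If quota is not intentionally in use, disabling it (gluster volume quota myvol disable) should stop the marker from trying to update contribution xattrs on every lookup.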

Re: [Gluster-users] Unexpected gluster volume owner change during snapshot operation

2013-12-23 Thread Vijay Bellur
On 12/23/2013 07:02 PM, Xiao Bin XB Zhang wrote: Hey, has anyone met such unexpected behavior of GlusterFS? We have Gluster mounted and serving as an OpenStack Glance volume; the owner is, say, glance:glance. However, we did some instance snapshots and then found that the volume owner had changed to root:root

Re: [Gluster-users] Change Volume Transport?

2013-12-23 Thread Vijay Bellur
On 12/23/2013 08:09 PM, Dean Bruhn wrote: Is there a way to change the volume transport type? I have some RDMA volumes that I would like to adjust to RDMA/TCP. The steps outlined in [1] can also be used to convert an RDMA volume to RDMA,TCP. Since a volume stop/delete is involved, the operation
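For reference, a minimal sketch of that stop/delete/recreate flow (the volume name, server, and brick path below are placeholders; the authoritative steps are the ones outlined in [1]):

  # Stop and delete the RDMA-only volume; file data stays on the bricks
  gluster volume stop myvol
  gluster volume delete myvol

  # Each brick keeps a volume-id xattr from the deleted volume; clear it
  # on every brick root before reuse
  setfattr -x trusted.glusterfs.volume-id /export/brick1

  # Recreate the volume with both transports and start it again
  gluster volume create myvol transport tcp,rdma server1:/export/brick1
  gluster volume start myvol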

[Gluster-users] Change Volume Transport?

2013-12-23 Thread Dean Bruhn
Is there a way to change the volume transport type? I have some RDMA volumes that I would like to adjust to RDMA/TCP. - Dean

[Gluster-users] Unexpected gluster volume owner change during snapshot operation

2013-12-23 Thread Xiao Bin XB Zhang
Hey, has anyone met such unexpected behavior of GlusterFS? We have Gluster mounted and serving as an OpenStack Glance volume; the owner is, say, glance:glance. However, we did some instance snapshots and then found that the volume owner had changed to root:root. Several of us saw this strange behavior and
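As an immediate workaround (an assumption on my part, not from the thread), the ownership can be checked and restored on the mounted volume; /var/lib/glance/images below is only the usual Glance store path and may differ in your deployment:

  # Inspect the current owner of the Glance store on the Gluster mount
  ls -ld /var/lib/glance/images

  # Restore the expected owner recursively
  chown -R glance:glance /var/lib/glance/images

This only treats the symptom; it does not explain why the snapshot operation resets the owner to root:root.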

[Gluster-users] How to ensure new data is written to other bricks if one brick of a Gluster distributed volume goes offline

2013-12-23 Thread 张兵
Hi all, how can we ensure that new data is written to the other bricks when one brick of a Gluster distributed volume goes offline? Can the client write data that would originally land on the offline brick to other online bricks instead? The distributed volume breaks even if only one brick is offline; it's so unreliable when the failed brick
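The behavior described matches how a plain distributed volume works: it has no redundancy, so files whose names hash to the offline brick cannot be served until that brick returns. Surviving a single brick failure requires replication, e.g. a distributed-replicated volume. A minimal sketch with hypothetical host and brick names (my illustration, not from the thread):

  # Each file is written to a pair of bricks (replica 2), so any single
  # brick can go offline without blocking reads or new writes
  gluster volume create myvol replica 2 \
      server1:/export/brick1 server2:/export/brick1 \
      server3:/export/brick2 server4:/export/brick2
  gluster volume start myvol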