I am seeing lots of log entries like the one below; any thoughts?
[2013-12-24 11:35:19.659143] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(-->/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(-->/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x3
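Hard to say from the truncated trace, but mq_dict_set_contribution comes from
the quota/marker translator, so it may be worth confirming whether quota is
actually enabled on that volume. A minimal check, assuming a volume named
myvol (placeholder name):

    # Show volume options and check whether the quota feature is on
    gluster volume info myvol | grep -i quota

    # If quota is enabled, list the configured limits
    gluster volume quota myvol list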
On 12/23/2013 07:02 PM, Xiao Bin XB Zhang wrote:
Hey,
has anyone met this unexpected behavior of GlusterFS?
We have gluster mounted and serving as an OpenStack Glance volume; the
owner is, say, glance:glance.
However, after we took some instance snapshots, we found that the volume
owner had changed to root:root. Several people have seen this strange
behavior and
On 12/23/2013 08:09 PM, Dean Bruhn wrote:
Is there a way to change the volume transport type? I have some RDMA volumes
that I would like to adjust to RDMA/TCP.
Steps outlined in [1] could also be used to convert an RDMA volume to
RDMA,TCP. Since a volume stop/delete is involved, the operation is
disruptive and requires downtime.
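For the archives, the flow is roughly as below. This is only a sketch, not
the literal steps from [1]; the volume name myvol, the server names, and the
brick paths are placeholders, and reusing the old bricks assumes you first
clear the old volume metadata on every brick. I'd try it on scratch bricks
first.

    # Stop and delete the volume definition (data on the bricks is untouched)
    gluster volume stop myvol
    gluster volume delete myvol

    # On each server, clear the old volume identity so the bricks can be
    # reused by the new volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs

    # Recreate the volume with both transports, then start it
    gluster volume create myvol transport tcp,rdma server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol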
Hi all,
how can I ensure that new data is written to the other bricks if one brick
of a gluster distributed volume goes offline? Can the client write data
that would originally land on the offline brick to the other online bricks
instead?
The distributed volume breaks even if only one brick goes offline; it seems
so unreliable when the failed brick
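As far as I know, a plain distribute volume has no redundancy: files whose
hash points at the offline brick become inaccessible, and new creates for
that subvolume fail rather than being redirected. If writes need to survive
a brick failure, the usual answer is a replicated (or distributed-replicated)
volume. A minimal sketch with placeholder server and brick names:

    # Two-way replicated volume: every file is stored on both bricks,
    # so either brick can go offline without blocking reads or writes
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol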