Also, `gluster volume status` takes *forever* to run, and returns no useful info.
On 9/18/15 4:10 PM, John Casu wrote:
Hi,
we're seeing multiple entries for nodes in pool list & peer status, each
associated with a unique UUID.
Filesystem seems to be working, but we need to clean up the duplicate entries.
d: on
nfs.disable: true
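One way to see where the duplicates come from is to look at glusterd's peer state files: in a stock install each file under /var/lib/glusterd/peers/ is named after a peer UUID and records that peer's hostname, so a hostname showing up in two files means two UUIDs claim the same node. A small sketch (path and file layout assumed from a default install; back up the directory before removing anything):

```shell
# find_duplicate_peers: print any hostname registered under more than
# one peer UUID in glusterd's peer state directory.
# Assumption: default state dir /var/lib/glusterd/peers, where each
# file is named after a peer UUID and contains a "hostname1=<name>" line.
find_duplicate_peers() {
  local dir="${1:-/var/lib/glusterd/peers}"
  # A hostname emitted here appears in two or more peer files,
  # i.e. two UUIDs claim the same node.
  grep -h '^hostname1=' "$dir"/* | sort | uniq -d
}
```

The usual remedy people report is to stop glusterd on the affected node, remove the stale peer file(s) for the duplicate UUID on every node, and restart glusterd, but treat that as a last resort and keep a copy of the directory first.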
--
Thanks!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
--
John Casu | Principal Solutions Architect
---
Penguin Computing
45800 Northport Loop West
F
Hoping someone can help.
If I have a brick that was created on shared storage (dual-host jbod,
for example), and the node on which that brick was created/mounted goes
down, can I "transparently" mount the brick from another node and have
things behave the same (i.e. glusterfs recognizes the brick)?
Guys,
I'm new to the GlusterFS community, so please forgive any noobishness.
I'm wondering if there are any torture tests or some other collection of
test cases that would allow system builders to exercise their configs
prior to production deployment.
How do folks here go about their stress testing?
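For raw load, tools like fio and iozone are what people commonly point at a mounted volume; for correctness under load, a simple write/checksum/re-read loop run from several clients at once catches a lot. A minimal sketch of the latter (the mount point and file sizes are placeholders, not anything gluster-specific):

```shell
# stress_cycle: write <count> files of pseudo-random data into <dir>
# (e.g. your glusterfs mount), record their checksums, then re-read and
# verify everything. Returns nonzero on any corruption or missing file.
# Run concurrently from multiple clients to exercise the config.
stress_cycle() {
  local dir="$1" count="${2:-10}"
  local sums="$dir/checksums.txt"
  : > "$sums"
  for i in $(seq 1 "$count"); do
    # 256 KiB per file here; bump bs/count for a heavier run.
    dd if=/dev/urandom of="$dir/f$i" bs=64k count=4 2>/dev/null
    sha256sum "$dir/f$i" >> "$sums"
  done
  # Re-read every file and compare against the recorded checksums.
  sha256sum -c --quiet "$sums"
}
```

Usage would be something like `stress_cycle /mnt/glustervol 100` in a loop from each client, ideally while you kill bricks or nodes to see how the volume copes.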