On Thu, Mar 05, 2015 at 03:31:51PM -0500, Tom Young wrote:
Update –
I found that we can enable ACLs on the gluster server, and still have
access to more than 32 groups. I had to remove the acl option from the
client that was mounting the gluster volume, and everything started working
the way we wanted. Thank you
Tom Young
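If it helps anyone hitting the same limit, Tom's fix presumably boils down to the difference between these two client mounts (server and volume names here are placeholders, not from the thread):

```shell
# With -o acl the FUSE client enforces ACLs client-side, which trips
# the 32-auxiliary-group limit described above:
mount -t glusterfs -o acl server:/volname /mnt/volname

# Without the acl option the client mounts normally; ACLs configured
# on the server side still apply, and more than 32 groups keep working:
mount -t glusterfs server:/volname /mnt/volname
```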
segfault on replica1
Mar 3 22:40:08 HFMHVR3 kernel: [11430546.394720] qemu-system-x86[14267]:
segfault at 128 ip 7f4812d945cc sp 7f4816da48a0 error 4 in
qemu-system-x86_64[7f4812a08000+4b1000]
The qemu logs only show the client shutting down on replica1
2015-03-03 23:10:14.928+:
Hi Guys,
I'm running into an fstab problem.
fstab config: gwgfs01:/vol01 /mnt/gluster glusterfs defaults,_netdev 0 0
The mount doesn't take effect; I checked the following:
[root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
mnt-gluster.mount - /mnt/gluster
Loaded: loaded (/etc/fstab)
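Assuming systemd generated the mount unit from that fstab line, a quick troubleshooting checklist might look like this (the unit name comes from the message; nothing here is gluster-specific):

```shell
# Re-generate mount units after editing /etc/fstab
systemctl daemon-reload

# Try the mount directly and read the unit's log if it fails
mount /mnt/gluster
systemctl status mnt-gluster.mount -l
journalctl -u mnt-gluster.mount
```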
On 03/04/2015 11:24 PM, ML mail wrote:
Hello,
I have two gluster nodes in a replicated setup and have connected the two
nodes together directly through a 10 Gbit/s crossover cable. Now I would like
to tell gluster to use this separate private network for any communications
between the two nodes. Does that make sense? Will this bring me
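One common approach (an assumption on my part, not something stated in the thread) is to make the peer hostnames resolve to the private addresses, since gluster talks to peers by the names they were probed with:

```shell
# Hypothetical addresses and hostnames for the crossover link
echo "10.0.0.1  node1-priv" >> /etc/hosts
echo "10.0.0.2  node2-priv" >> /etc/hosts

# Probe the peer by its private name so inter-node traffic
# goes over the 10 Gbit/s link
gluster peer probe node2-priv
```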
Hello,
I would like to use ACLs on my gluster volume, and also not be restricted
by the 32 group limitation if I do. I have noticed that if I enable acl
support on a client, then I am restricted to using 32 groups. I have
several users that are part of more than 32 groups, but they still want
On 03/01/2015 11:44 PM, Atin Mukherjee wrote:
Thanks Fanghuang for your nice words.
Vijay,
Can we try to take this patch in for 3.7 ?
Happy to get this in to 3.7. Could you please rebase this patch to the
latest git HEAD?
Thanks,
Vijay
~Atin
On 03/02/2015 08:01 AM,
Thank you for the detailed explanation. Since splitting the traffic does not
make much difference right now, I will refrain from doing that and simply wait
for the new-style replication. This looks like a very promising feature and I
am looking forward to it. My other concern
On 03/04/2015 03:29 PM, Josh Boon wrote:
We're still losing machines. I was able to get symbols loaded and a stack trace
this time. It looks like a self-heal causes some kind of issue. Stack trace
follows:
Stacktrace:
#0 0x7f4812d945cc in ?? ()
No symbol table info available.
#1
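For reference, a symbolized trace like the one above is typically produced along these lines (the core file path is hypothetical; this is a sketch, not the exact commands used):

```shell
# Install the qemu debug symbol package for your distro first, then:
gdb /usr/bin/qemu-system-x86_64 /var/crash/core.14267
(gdb) bt full        # full backtrace with local variables
(gdb) info threads   # inspect the other qemu threads
```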
On 03/04/2015 09:35 AM, Félix de Lelelis wrote:
Hi,
Does anyone know how to obtain stats on FOPs or I/O operations in gluster?
The idea is to integrate these scripts with Zabbix.
On the servers, you can execute:
gluster volume profile volname .. to collect information.
For collecting statistics
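The full profile cycle alluded to above looks roughly like this (the volume name is a placeholder):

```shell
gluster volume profile myvol start   # begin collecting per-brick FOP stats
gluster volume profile myvol info    # dump cumulative/interval latencies and call counts
gluster volume profile myvol stop    # stop collecting when done
```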
On Thu, Mar 05, 2015 at 05:11:55PM -0500, David F. Robinson wrote:
Niels,
One additional piece of info.
When Tom mounted it with ACL on one client, it stopped allowing more
than 32 groups on ALL the clients, even the ones where it was FUSE
mounted without the ACL option.
To me, this was the biggest issue: a single improper mount point messing up the
On Thu, Mar 05, 2015 at 11:18:07PM +0100, Tamas Papp wrote:
hi,
My client log is full with this messages:
[2015-03-04 17:28:08.036915] W
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-w-vol-client-0: remote
operation failed: No data available
[2015-03-04 17:28:08.036978] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 36483008: