Re: [Gluster-users] ganesha BFS

2015-08-12 Thread Prasun Gera
Thanks. For small files and random I/O, NFS has been recommended over FUSE. Would pNFS, with multiple MDSes in the future, be the recommended approach for small files? On Wed, Aug 12, 2015 at 11:03 PM, Soumya Koduri wrote: > It depends on the workload. Like native NFS, even with NFS-Ganesha, d

Re: [Gluster-users] ganesha BFS

2015-08-12 Thread Soumya Koduri
It depends on the workload. Like native NFS, even with NFS-Ganesha, data is routed through the server it's mounted from. In addition, the NFSv4.x protocol adds more complexity and cannot be directly compared with NFSv3 traffic. However, with pNFS, I/O is routed to data servers directly by the NFS
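To make the routing difference concrete, a pNFS-capable client mounts with NFSv4.1, while a plain NFSv3 mount funnels all I/O through the one server it mounted from. A minimal sketch (the hostname, volume name, and mount points are placeholders, not from the thread):

```shell
# NFSv4.1 mount, so a pNFS-capable client can route I/O to data servers
mount -t nfs -o vers=4.1 ganesha-server:/myvol /mnt/myvol

# plain NFSv3 mount for comparison: all I/O goes through ganesha-server
mount -t nfs -o vers=3 ganesha-server:/myvol /mnt/myvol-v3
```

These are ops commands against a live cluster, shown only to illustrate the version option that selects pNFS behaviour.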

Re: [Gluster-users] glusterd doesn't start

2015-08-12 Thread Atin Mukherjee
On 08/12/2015 09:24 PM, p...@email.cz wrote:
> Hello,
> some logs after reboot :
>
> # systemctl status glusterd
> glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
>    Active: failed (Result: exit-code) since St

Re: [Gluster-users] Balance within node

2015-08-12 Thread Jordan Willis
Ok, so I think I know what my problem is now.

df -h
/dev/sdb  5.5T  5.0T  378G  94%  /export/brick1
/dev/sda   12T  5.6T  6.3T  47%  /export/brick2

They are balanced pretty well as far as total storage, but is there a way to balance the
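The imbalance is easier to see by pulling just the mount point and use% columns out of the df output (the numbers below are reproduced from the message):

```shell
# extract mount point and use% from the df lines quoted above
printf '%s\n' \
  '/dev/sdb 5.5T 5.0T 378G 94% /export/brick1' \
  '/dev/sda 12T 5.6T 6.3T 47% /export/brick2' |
awk '{ print $6, $5 }'
# → /export/brick1 94%
#   /export/brick2 47%
```

A rebalance (`gluster volume rebalance <volname> start`) is the usual tool for moving data between bricks, though how well it evens out bricks of different sizes on one node depends on the volume's layout.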

Re: [Gluster-users] ganesha BFS

2015-08-12 Thread Prasun Gera
And do either of them perform better than FUSE mounts? With native NFS, all data is routed through the server it's mounted from, which makes HA and load balancing difficult. For pNFS, there is a single metadata server. How does that affect HA and load? I thought one of the main goals of glu

Re: [Gluster-users] ganesha BFS

2015-08-12 Thread Joe Julian
NFS-Ganesha is a much more feature-rich NFS server that uses libgfapi to access the gluster volume in userspace. This userspace solution avoids context switches, as the native gluster NFS does, but adds support for pNFS/NFSv4 and UDP. From the development standpoint, they have a full set

[Gluster-users] ganesha BFS

2015-08-12 Thread p...@email.cz
Hello, can anybody explain the advantages / disadvantages of Ganesha NFS? Would you recommend going this way? (4-node GlusterFS) Regards, Pavel ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/g

Re: [Gluster-users] After Centos 6 yum update client fails to mount glusterfs volume

2015-08-12 Thread Atin Mukherjee
There are two possible workarounds; please try either of them. Before upgrading: 1. Set 'client.bind-insecure off' on all volumes. This forces 3.7.3 clients to use secure ports to connect to the servers. This does not affect older clients, as this setting is the default for them.
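Workaround 1 above can be applied to every volume in one pass; a sketch, assuming a standard gluster CLI and that `gluster volume list` returns one volume name per line:

```shell
# set client.bind-insecure off on every volume before upgrading clients
for vol in $(gluster volume list); do
  gluster volume set "$vol" client.bind-insecure off
done
```

This is a configuration fragment to run on one of the cluster nodes, not a standalone script.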

[Gluster-users] Rebuild .glusterfs

2015-08-12 Thread Ryan Clough
Some of these directories have been deleted by accident. Is there a way to rebuild this metadata? ___ ¯\_(ツ)_/¯ Ryan Clough Information Systems Decision Sciences International Corporation
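For context on what was lost: each regular file's entry under .glusterfs is keyed by its gfid, with the first two and next two hex characters of the gfid used as subdirectory names. A minimal sketch of that path layout (the gfid value below is made up for illustration):

```shell
# standard .glusterfs layout: <brick>/.glusterfs/<aa>/<bb>/<full-gfid>,
# where aa and bb are the first four hex chars of the gfid (sample gfid)
gfid="b1f0c1e2-3a4b-4c5d-8e6f-7a8b9c0d1e2f"
echo ".glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}"
# → .glusterfs/b1/f0/b1f0c1e2-3a4b-4c5d-8e6f-7a8b9c0d1e2f
```

On a replicated volume, a full self-heal (`gluster volume heal <volname> full`) may regenerate missing entries from the surviving copy, but whether that applies depends on the volume type and what exactly was deleted.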

Re: [Gluster-users] glusterd doesn't start

2015-08-12 Thread p...@email.cz
Hello, some logs after reboot :

# systemctl status glusterd
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: failed (Result: exit-code) since St 2015-08-12 16:34:22 CEST; 2min 6s ago
  Process: 1355 Exec

[Gluster-users] After Centos 6 yum update client fails to mount glusterfs volume

2015-08-12 Thread Taylor Lewick
I have an 8-node gluster cluster running 3.6.2. Yesterday one of our admins ran a yum update on his CentOS 6.6 machine, and it updated the gluster client to 3.7.3 (client update only on his machine, not the gluster cluster servers). After the update he was not able to mount the gluster volume.

Re: [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-12 Thread Atin Mukherjee
Davy, I will check this with Kaleb and get back to you. -Atin Sent from one plus one On Aug 12, 2015 7:22 PM, "Davy Croonen" wrote: > Atin > > No problem to raise a bug for this, but isn’t this already addressed here: > > Bug 670 - conti

[Gluster-users] Disk space usage

2015-08-12 Thread Thibault Godouet
I have a replicated Gluster 3.7.3 volume composed of two bricks, each on a different server. If I mount the volume as NFS (because it is a lot faster than FUSE for du), and do a 'du -h' on this, it returns 56GB. Yet the disk usage on each brick is quite a lot higher: - a 'du -h' gives me 104GB,
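One common source of du discrepancies in general (not necessarily the cause here) is sparse files, which du counts by allocated blocks unless told otherwise. A quick self-contained demonstration:

```shell
# a 1 GiB sparse file: large apparent size, almost nothing on disk
f=/tmp/sparse.demo
truncate -s 1G "$f"
du -h --apparent-size "$f"   # reports ~1.0G (apparent size)
du -h "$f"                   # reports ~0 (allocated blocks)
rm -f "$f"
```

Comparing `du --apparent-size` on the mount against the bricks can help narrow down whether block accounting or extra backend data explains the gap.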

Re: [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-12 Thread Davy Croonen
Atin No problem to raise a bug for this, but isn’t this already addressed here: Bug 670 - continuous log entries failed to get inode size https://bugzilla.redhat.com/show_bug.cgi?id=670#c2 KR Davy On 12 Aug 2015, at 14:56, Atin Mukhe

Re: [Gluster-users] how to reboot all bricks safely and seamlessly

2015-08-12 Thread Kingsley
On Wed, 2015-08-12 at 17:29 +0530, Ravishankar N wrote: > The statistics option for the heal command should give you the number of > files healed per crawl. You can refer to > https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/afr-statistics.md > > for an explanation. We do not
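The statistics option referred to above is invoked from the gluster CLI; a sketch (the volume name is a placeholder):

```shell
# per-crawl heal statistics, including the number of files healed
gluster volume heal myvol statistics

# just the count of entries still pending heal
gluster volume heal myvol statistics heal-count
```

Both commands are run against a live cluster, so they are shown here only for reference.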

Re: [Gluster-users] [Gluster-devel] Minutes of today's Gluster Community meeting

2015-08-12 Thread Atin Mukherjee
Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-08-12/gluster-meeting.2015-08-12-11.59.html Minutes (text): http://meetbot.fedoraproject.org/gluster-meeting/2015-08-12/gluster-meeting.2015-08-12-11.59.txt Log: http://meetbot.fedoraproject.org/gluster-meeting/2015-08-12/gluster-meetin

Re: [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-12 Thread Atin Mukherjee
Well, this looks like a bug in 3.7 as well. I've posted a fix [1] to address it. [1] http://review.gluster.org/11898 Could you please raise a bug for this? ~Atin On 08/12/2015 01:32 PM, Davy Croonen wrote: > Hi Atin > > Thanks for your answer. The op-version was indeed an old one, 30501 t

Re: [Gluster-users] split-brain files not shown in "gluster volume heal info split-brain" command

2015-08-12 Thread Ravishankar N
On 08/11/2015 03:28 PM, Lakshmi Anusha wrote: From the extended attributes, an entry split-brain seems to have appeared, since the gfid differs between the original file and its replica. Can you please let us know why the split-brain files are not shown in the "gluster volume heal info split-brain" co
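The gfid comparison described above is done by reading the trusted.gfid extended attribute on each brick's copy of the file; a sketch (brick paths are placeholders, and trusted.* xattrs require root on the brick servers):

```shell
# compare the gfids of the two replicas directly on the bricks
getfattr -n trusted.gfid -e hex /export/brick1/path/to/file
getfattr -n trusted.gfid -e hex /export/brick2/path/to/file
```

If the two hex values differ, the file is in gfid split-brain even when data and metadata otherwise look consistent.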

Re: [Gluster-users] how to reboot all bricks safely and seamlessly

2015-08-12 Thread Ravishankar N
On 08/12/2015 05:20 PM, Kingsley wrote: On Wed, 2015-08-12 at 17:09 +0530, Ravishankar N wrote: On 08/11/2015 10:06 PM, Kingsley wrote: Hi, If you need to reboot all bricks in a volume, what's the best way to do this seamlessly? I did this a few days ago by rebooting one, then waiting for "

[Gluster-users] REMINDER: Weekly Gluster Community meeting today at 12:00 UTC (~5 minutes from now)

2015-08-12 Thread Mohammed Rafi K C
Hi All, In about 5 minutes from now we will have the regular weekly Gluster Community meeting. Meeting details: - location: #gluster-meeting on Freenode IRC - date: every Wednesday - time: 12:00 UTC, 14:00 CEST, 17:30 IST (in your terminal, run: date -d "12:00 UTC") - agenda: https://publi

Re: [Gluster-users] how to reboot all bricks safely and seamlessly

2015-08-12 Thread Kingsley
On Wed, 2015-08-12 at 17:09 +0530, Ravishankar N wrote: > > On 08/11/2015 10:06 PM, Kingsley wrote: > > Hi, > > > > If you need to reboot all bricks in a volume, what's the best way to do > > this seamlessly? > > > > I did this a few days ago by rebooting one, then waiting for "gluster > > volume

Re: [Gluster-users] how to reboot all bricks safely and seamlessly

2015-08-12 Thread Ravishankar N
On 08/11/2015 10:06 PM, Kingsley wrote: Hi, If you need to reboot all bricks in a volume, what's the best way to do this seamlessly? I did this a few days ago by rebooting one, then waiting for "gluster volume info" on another brick to show it back online before doing the next, and so on. How
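The wait-between-reboots approach described in this thread can be scripted; a rough sketch, assuming passwordless SSH, a volume named myvol, and hostnames gluster2/gluster3 (all placeholders), run from a node that stays up:

```shell
# reboot brick servers one at a time, waiting for pending heals to drain
for host in gluster2 gluster3; do
  ssh "$host" reboot
  sleep 120   # give the node time to come back up and rejoin the cluster
  # block until "heal info" reports zero pending entries on every brick
  while gluster volume heal myvol info | grep -q 'Number of entries: [1-9]'; do
    sleep 30
  done
done
```

The sleep intervals are arbitrary; the key point is not to reboot the next replica until heal-pending counts have returned to zero.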

Re: [Gluster-users] how to reboot all bricks safely and seamlessly

2015-08-12 Thread Kingsley
Thanks. Who are the AFR team, and how can I contact them? Cheers, Kingsley. On Tue, 2015-08-11 at 22:47 +0530, Atin Mukherjee wrote: > Well as you mentioned you might have rebooted the other node of the > replica pair when the self heal was in progress. AFR team can help you > with details if the

Re: [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-12 Thread Davy Croonen
Hi Atin Thanks for your answer. The op-version was indeed an old one, 30501 to be precise. I’ve updated the op-version to the one you suggested with the command: gluster volume set all cluster.op-version 30603. From testing it seems this issue is solved for the moment. Considering the errors i
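To verify a change like the one above took effect, the op-version glusterd is running with is recorded in its info file (/var/lib/glusterd/glusterd.info on a standard install). A sketch using a sample file in place of the real one:

```shell
# on a live node you would grep /var/lib/glusterd/glusterd.info;
# a sample file with made-up contents stands in for it here
info=/tmp/glusterd.info.sample
printf 'UUID=0000\noperating-version=30603\n' > "$info"
grep 'operating-version' "$info"
# → operating-version=30603
rm -f "$info"
```

Checking the file on each node confirms the whole cluster agrees on the new op-version.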

Re: [Gluster-users] [Gluster-devel] Plans for Gluster 3.8

2015-08-12 Thread Kaushal M
Hi Csaba, These are the updates regarding the requirements, after our meeting last week. The specific updates on the requirements are inline. In general, we feel that the requirements for selective read-only mode and immediate disconnection of clients on access revocation are doable for GlusterFS