Re: [Gluster-users] Initial mount problem - all subvolumes are down

2015-04-01 Thread Rumen Telbizov
Any update here? Can I hope to see a fix incorporated into the release of 3.6.3? On Tue, Mar 31, 2015 at 10:53 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > On 03/31/2015 10:47 PM, Rumen Telbizov wrote: > > Pranith and Atin, > > Thank you for looking

Re: [Gluster-users] Initial mount problem - all subvolumes are down

2015-03-31 Thread Rumen Telbizov
What do you think would be the timeline for fixing this issue? What version do you expect to see this fixed in? In the meantime, is there another workaround that you might suggest besides running a secondary mount later, after the boot is over? Thank you again for your help, Rumen Telbizov On T

[Gluster-users] Initial mount problem - all subvolumes are down

2015-03-30 Thread Rumen Telbizov
imeout: 60 I run Debian 7 and GlusterFS version 3.6.2-2. While I could put together some rc.local-type script that retries to mount the volume until it succeeds or times out, I was wondering if there's a better way to solve this problem? Thank you for your help.
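The rc.local-style workaround mentioned in the thread (retry the mount until it succeeds or gives up) can be sketched as below. This is a sketch only; the server name, volume name, mount point, and retry counts are hypothetical placeholders, not values from the thread:

```shell
#!/bin/sh
# retry MAX_TRIES INTERVAL CMD...: run CMD up to MAX_TRIES times, sleeping
# INTERVAL seconds between attempts; returns 0 on the first success,
# 1 if every attempt fails.
retry() {
    max_tries=$1
    interval=$2
    shift 2
    i=0
    while [ "$i" -lt "$max_tries" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$interval"
    done
    return 1
}

# Hypothetical usage from /etc/rc.local on Debian 7, retrying until the
# glusterd peers are reachable (server1, myvol, and /mnt/gluster are
# placeholders):
#   retry 30 5 mount -t glusterfs server1:/myvol /mnt/gluster \
#       || logger "glusterfs mount failed after 30 attempts"
```

Calling this from rc.local keeps retrying after the network comes up, at the cost of the mount not being available for any earlier boot step.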

Re: [Gluster-users] Access data directly from underlying storage

2015-03-24 Thread Rumen Telbizov
Thank you once again for your input. It's highly appreciated. On Sat, Mar 21, 2015 at 9:53 AM, Melkor Lord wrote: > On Thu, Mar 19, 2015 at 9:11 PM, Rumen Telbizov > wrote: > > Thank you for your answer Melkor. >> > > You're welcome! > > >> Th

Re: [Gluster-users] Access data directly from underlying storage

2015-03-19 Thread Rumen Telbizov
Thank you for your answer Melkor. This is the kind of experience I was looking for actually. I am happy that it has worked fine for you. Anybody coming across any issues while reading directly from the underlying disk? Thank you again, Rumen Telbizov On Thu, Mar 19, 2015 at 12:29 AM, Melkor

[Gluster-users] Access data directly from underlying storage

2015-03-18 Thread Rumen Telbizov
the local disk but I want to be certain that those reads, in terms of correctness and consistency, will be equivalent to reading the shared drive itself. Thank you in advance for sharing your experience. Regards, -- Rumen Telbizov Unix Systems Administrator <http://telbizov.com>

Re: [Gluster-users] Missing 'status fd' and 'top *-perf' details

2015-02-12 Thread Rumen Telbizov
Am I the only one experiencing this? Do you guys have proper statistics? On Wed, Feb 11, 2015 at 1:29 PM, Rumen Telbizov wrote: > Hello everyone, > > I have the following situation. I put some read and write load on my test > GlusterFS setup as follows: > > # dd if=/dev/z

[Gluster-users] Missing 'status fd' and 'top *-perf' details

2015-02-11 Thread Rumen Telbizov
out: 10 nfs.disable: on client.ssl: off server.ssl: off Has anyone else experienced this? Regards, -- Rumen Telbizov Unix Systems Administrator <http://telbizov.com> ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
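For reference, the statistics commands this thread is about can be invoked as below. The volume name `myvol` is a hypothetical placeholder, and these must be run on a server node against a live cluster, so treat this as an illustrative fragment rather than something runnable here:

```shell
# Per-fd statistics for each brick of the volume (the 'status fd' output
# reported missing in this thread):
gluster volume status myvol fd

# Throughput measurements per brick (the 'top *-perf' commands from the
# subject line); bs/count control the test I/O size, list-cnt limits how
# many entries are shown:
gluster volume top myvol read-perf bs 256 count 128 list-cnt 10
gluster volume top myvol write-perf bs 256 count 128 list-cnt 10
```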

Re: [Gluster-users] glusterd 100% cpu upon volume status inode

2015-02-11 Thread Rumen Telbizov
d for stable production work? 3.5 or 3.6? Regards, Rumen Telbizov On Tue, Feb 10, 2015 at 9:27 PM, Kaushal M wrote: > There is nothing wrong with your setup. This is a known issue (at least to > me). > > The problem here lies with how GlusterD collect and collate the > inf

[Gluster-users] glusterd 100% cpu upon volume status inode

2015-02-10 Thread Rumen Telbizov
k02/brick Options Reconfigured: nfs.disable: on network.ping-timeout: 10 I run: # glusterd -V glusterfs 3.5.3 built on Nov 17 2014 15:48:52 Repository revision: git://git.gluster.com/glusterfs.git Thank you for your time. Regards, -- Rumen Telbizov Unix Systems Administrator <http://telbiz
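The subject line's trigger command, with a hypothetical volume name, looks like the fragment below; per the reply quoted in this thread, the expense comes from how glusterd collects and collates the per-brick information:

```shell
# 'volume status inode' asks each brick for its inode table, which glusterd
# then collates; on the 3.5.x setup described above this drove glusterd to
# 100% CPU. Volume name is a placeholder.
gluster volume status myvol inode
```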