Re: [Gluster-users] vfs_gluster broken

2018-09-20 Thread Terry McGuire
> On Sep 19, 2018, at 06:37, Anoop C S wrote:
>
> On Wed, 2018-09-12 at 10:37 -0600, Terry McGuire wrote:
>>> Can you please attach the output of `testparm -s` so as to look through how
>>> Samba is setup?
>
> I have a setup where I could browse and work with a GlusterFS volume share made
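For reference, a minimal smb.conf share using Samba's glusterfs VFS module looks roughly like the sketch below; the share name, volume name, and log path are placeholders, not Terry's actual settings. `testparm -s` then dumps the effective configuration for review:

    # hypothetical share definition for a GlusterFS-backed export
    [gluster-share]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gvol0          # placeholder volume name
        glusterfs:logfile = /var/log/samba/glusterfs-gvol0.log
        glusterfs:loglevel = 7
        kernel share modes = no           # recommended with vfs_glusterfs
        read only = no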

Re: [Gluster-users] vfs_gluster broken

2018-09-20 Thread Terry McGuire
> On Sep 18, 2018, at 14:09, Anoop C S wrote:
>
>>> If so, do you see any issues for files/directories other than those
>>> present directly under root of the share?
>>
>> Interesting! When I use Mac or Windows, I still see issues deeper in the
>> hierarchy, but when I
>> use Linux, I

Re: [Gluster-users] sharding in glusterfs

2018-09-20 Thread Pranith Kumar Karampuri
On Wed, Sep 19, 2018 at 11:37 AM Ashayam Gupta wrote:
> Please find our workload details as requested by you:
>
> * Only 1 write-mount point as of now
> * Read-Mount: Since we auto-scale our machines this can be as big as
>   300-400 machines during peak times
> * multiple concurrent reads
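For readers following along: sharding is a per-volume option. A generic invocation looks like the sketch below; the volume name is a placeholder, and this is not necessarily Ashayam's setup. Note that the shard size only applies to files created after the option is set:

    # enable sharding and pick a shard size (the default is 64MB)
    gluster volume set <volname> features.shard on
    gluster volume set <volname> features.shard-block-size 64MB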

Re: [Gluster-users] Kicking a stuck heal

2018-09-20 Thread Kaleb S. KEITHLEY
On 09/20/2018 04:34 AM, Niels de Vos wrote: > On Thu, Sep 20, 2018 at 10:19:27AM +0200, Patrick Matthäi wrote: > ... >>> Unless I'm very much mistaken, once they pick a version for a >>> distribution (e.g. 3.8 for jessie) then that's what they ship for the >>> life of that distribution. >>

[Gluster-users] Strange error in gluster during workflow

2018-09-20 Thread Zeeshan Ali Shah
We recently deployed GlusterFS for a genomic pipeline. The pipeline has steps that, for example, generate a file and then tar it. A strange issue appears: after a file is generated, when the pipeline jumps to tar it, tar errors out with "file is not completed" -- maybe the data has not propagated completely ---
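One generic first check, offered as an assumption rather than advice from this thread: rule out client-side write caching by making the generating step fsync its output, or by temporarily disabling write-behind on the volume while reproducing the error:

    # diagnostic only: temporarily disable client write caching
    # (an assumption on my part, not a fix suggested in this thread)
    gluster volume set <volname> performance.write-behind off
    gluster volume set <volname> performance.flush-behind off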

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Johan Karlsson
I understand that a 2-way replica can require some fiddling with heal, but how is it possible that all data just vanished, even from the bricks?

---
gluster> volume info

Volume Name: gvol0
Type: Replicate
Volume ID: 17ed4d1c-2120-4fe8-abd6-dd77d7ddac59
Status: Started
Snapshot Count: 0
Number
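A generic way to check what replication thinks happened (gvol0 taken from the volume info above):

    # list entries pending heal and any split-brain entries
    gluster volume heal gvol0 info
    gluster volume heal gvol0 info split-brain
    # confirm both bricks are actually online
    gluster volume status gvol0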

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Pranith Kumar Karampuri
This logfile didn't have any logs about heal. Could you find the same file on the other node as well and attach it to the mail-thread? We should also check the mount logs to confirm whether replication did anything; otherwise we should check the brick logs. Instead of checking it iteratively,

Re: [Gluster-users] Kicking a stuck heal

2018-09-20 Thread Dave Sherohman
I was just about to come over and say that, after talking this through with coworkers, we've decided to upgrade to something outside of Debian stable. And what should I find?

On Thu, Sep 20, 2018 at 10:19:27AM +0200, Patrick Matthäi wrote:
> But you also can use our stable backports [0].

Re: [Gluster-users] Kicking a stuck heal

2018-09-20 Thread Niels de Vos
On Thu, Sep 20, 2018 at 10:19:27AM +0200, Patrick Matthäi wrote:
...
> > Unless I'm very much mistaken, once they pick a version for a
> > distribution (e.g. 3.8 for jessie) then that's what they ship for the
> > life of that distribution.
> Correct.
>
> But you also can use our stable backports
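For anyone wanting to try the backports route, the usual Debian pattern is sketched below; the suite name and the glusterfs version actually available are assumptions that depend on your release:

    # enable backports (stretch shown as an assumed example)
    echo "deb http://deb.debian.org/debian stretch-backports main" \
        >> /etc/apt/sources.list
    apt-get update
    # install the backported packages explicitly by target release
    apt-get -t stretch-backports install glusterfs-server glusterfs-client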

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Pranith Kumar Karampuri
Please also attach the logs for the mount points and the glustershd.logs.

On Thu, Sep 20, 2018 at 11:41 AM Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> How did you do the upgrade?
>
> On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa wrote:
>
>>
>> On Thu, Sep 20, 2018 at
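For reference, on a default install those logs live under /var/log/glusterfs/; the paths below are the stock defaults and may differ on a customized setup:

    /var/log/glusterfs/glustershd.log           # self-heal daemon log
    /var/log/glusterfs/<mount-point>.log        # one log per FUSE mount
    /var/log/glusterfs/bricks/<brick-path>.log  # one log per brick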

Re: [Gluster-users] Data on gluster volume gone

2018-09-20 Thread Pranith Kumar Karampuri
How did you do the upgrade?

On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa wrote:
>
> On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa wrote:
>
>> Can you give volume info? Looks like you are using 2 way replica.
>
> Yes indeed.
> gluster volume create gvol0 replica 2
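As an aside, and not a suggestion made in this thread: plain replica 2 carries a known split-brain risk, and an arbiter brick is the common mitigation. The hostnames and brick paths below are placeholders:

    # replica 3 with an arbiter: the third brick stores metadata only
    gluster volume create gvol0 replica 3 arbiter 1 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/arbiter1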