Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-13 Thread Karthik Subrahmanya
On Wed, Mar 14, 2018 at 4:33 AM, Laura Bailey wrote: > Can we add a smarter error message for this situation by checking volume > type first? Yes we can. I will do that. Thanks, Karthik > > Cheers, > Laura B > > > On Wednesday, March 14, 2018, Karthik Subrahmanya

[Gluster-users] Announcing Gluster release 4.0.0 (Short Term Maintenance)

2018-03-13 Thread Shyam Ranganathan
The Gluster community celebrates 13 years of development with this latest release, Gluster 4.0. This release enables improved integration with containers, an enhanced user experience, and a next-generation management framework. The 4.0 release helps cloud-native app developers choose Gluster as

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-13 Thread Laura Bailey
Can we add a smarter error message for this situation by checking volume type first? Cheers, Laura B On Wednesday, March 14, 2018, Karthik Subrahmanya wrote: > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-13 Thread Karthik Subrahmanya
Hi Anatoliy, The heal command is basically used to heal any mismatching contents between the replica copies of files. For the command "gluster volume heal <volname>" to succeed, you should have the self-heal daemon running, which is true only if your volume is of type replicate/disperse. In your case you
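The checks Karthik describes can be sketched as a short CLI session. This is a hedged illustration, not from the thread: the volume name "gv0" is hypothetical, and exact output formatting varies by Gluster version.

```
# Confirm the volume type is replicate or disperse (heal only applies there)
gluster volume info gv0 | grep Type

# Confirm the self-heal daemon and all brick processes are online
gluster volume status gv0

# Trigger a heal and inspect pending entries
gluster volume heal gv0
gluster volume heal gv0 info
```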

Re: [Gluster-users] trashcan on dist. repl. volume with geo-replication

2018-03-13 Thread Dietmar Putz
Hi Kotresh, ...another test. This time the trashcan was enabled on the master only. As in the previous test it's GlusterFS 3.12.6 on Ubuntu 16.04.4. The geo-rep error appeared again, and disabling the trashcan does not change anything. As in the former test, the error appears when I try to list files in
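For reference, the trashcan toggling described in this test is done through volume options. A minimal sketch, assuming a master volume named "mvol1" (the name and size limit are hypothetical):

```
# Enable the trash translator on the master volume
gluster volume set mvol1 features.trash on

# Disable it again (the step that, per the report, did not clear the error)
gluster volume set mvol1 features.trash off

# Optionally bound the size of files moved to .trashcan
gluster volume set mvol1 features.trash-max-filesize 2GB
```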

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-13 Thread Anatoliy Dmytriyev
Hi, Maybe someone can point me to documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As I see it, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Ondrej Valousek
Sorry - no time to play with that. But it's simple to reproduce: just set up your own async NFS server, take my script, and you will see for yourself. Ondrej From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com] Sent: Tuesday, March 13, 2018 10:41 AM To: Ondrej Valousek

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 2:42 PM, Ondrej Valousek < ondrej.valou...@s3group.com> wrote: > Yes, I have had this in place already (well except of the negative cache, > but enabling that did not make much effect). > > To me, this is no surprise – nothing can match nfs performance for small > files

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Ondrej Valousek
Yes, I have had this in place already (well, except for the negative cache, but enabling that did not have much effect). To me this is no surprise – nothing can match NFS performance for small files, for obvious reasons: 1. Single server, does not have to deal with distributed locks 2.
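For context, the asynchronous server behaviour being compared against is typically enabled per NFS export. A minimal sketch (the export path and client range are hypothetical, not from the thread):

```
# /etc/exports -- "async" lets the server acknowledge writes before they
# reach stable storage, which is why small-file workloads appear so fast
/export/data  192.168.1.0/24(rw,async,no_subtree_check)
```

With "sync" instead, the server must commit each write before replying, which is the behaviour a replicated Gluster volume is closer to.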

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Ondrej Valousek
Well, it might be close to _synchronous_ NFS, but it is still well behind asynchronous NFS performance. Simple script (a bit extreme, I know, but it helps to draw the picture):

#!/bin/csh
set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
    echo ahoj > test.$HOSTNAME.$j
    @ j++
end
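A portable POSIX sh equivalent of the csh script above, reduced to 100 files here for brevity (the original loops 7000 times):

```shell
#!/bin/sh
# Create many tiny files in the current directory to stress small-file
# write latency; each file holds one short line, as in the csh original.
HOSTNAME=$(hostname)
j=1
while [ "$j" -le 100 ]; do
    echo ahoj > "test.$HOSTNAME.$j"
    j=$((j + 1))
done
```

Timing this with `time` on a GlusterFS mount versus sync and async NFS mounts reproduces the comparison being discussed.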

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 12:58 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek < > ondrej.valou...@s3group.com> wrote: > >> Hi, >> >> Gluster will never perform well for small files. >> >> I believe there is nothing you can do with