[Gluster-users] How to sync content to a file through standard Java APIs

2018-03-14 Thread Sven Ludwig
Hello,

we have a GlusterFS volume mounted at a /mnt/... path on a server. The actual
physical storage behind it resides on another server.

Now, the requirement is to write files to this GlusterFS volume in a durable
fashion, i.e. for a write to count as successful, the contents MUST actually
have been synced to disk on at least one of the GlusterFS nodes in the replica
set.

Our questions:

1. Which JDK 8 APIs can we use to fulfill this requirement with a sync that 
actually works?

2. Is java.nio.channels.FileChannel.open with StandardOpenOption#SYNC included 
in the set of options passed to that method sufficient, or would it perhaps not 
actually guarantee the sync?

3. Apart from 2., are there other ways to meet the requirement using only the 
standard JDK 8 APIs?
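
For illustration, a minimal, untested sketch of the approach from question 2,
together with the FileChannel.force() alternative. The path is a made-up
example, and whether the SYNC/force semantics actually reach a brick's disk on
the Gluster side is exactly what we are asking above:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DurableWrite {
    public static void main(String[] args) throws IOException {
        // Hypothetical file on the FUSE-mounted Gluster volume
        Path target = Paths.get("/mnt/glustervol/data.bin");
        ByteBuffer payload = ByteBuffer.wrap("payload".getBytes(StandardCharsets.UTF_8));

        // Variant A: SYNC in the open options, so every write is supposed to be
        // flushed to the underlying storage device before write() returns.
        try (FileChannel ch = FileChannel.open(target,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.SYNC)) {
            ch.write(payload);
        }

        // Variant B: plain write followed by an explicit force(), which maps to
        // fsync/fdatasync on POSIX systems; force(true) also flushes metadata.
        payload.rewind();
        try (FileChannel ch = FileChannel.open(target,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE)) {
            ch.write(payload);
            ch.force(true);
        }
    }
}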

Kind Regards
Sven

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Disperse volume recovery and healing

2018-03-14 Thread Victor T
I have a question about how disperse volumes handle brick failure. I'm running 
version 3.10.10 on all systems. If I have a disperse volume in a 4+2 
configuration with 6 servers each serving 1 brick, and maintenance needs to be 
performed on all systems, are there any general steps that need to be taken to 
ensure data is not lost or service interrupted? For example, can I just reboot 
each system sequentially after making sure the service is running on all 
servers before rebooting the next system? Or is there a need to force/wait for 
a heal after each brick comes back online? If I have two bricks down for 
multiple days and then bring them back in, is there a need to issue a heal or 
something like a rebalance before rebooting the other servers? There's lots of 
documentation about other volume types, but it seems information specific to 
dispersed volumes is a bit hard to find. Thanks a bunch.
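
For reference, the redundancy arithmetic behind the question, assuming the
usual reading of "4+2" as 4 data bricks plus 2 redundancy bricks:

  data bricks (k)            = 4
  redundancy bricks (r)      = 2
  bricks that may be down    <= r = 2
  bricks that must remain up >= k = 4
  usable capacity            = k / (k + r) = 4/6 of raw

Rebooting one server at a time keeps 5 of 6 bricks up, comfortably above the
4-brick minimum, while two bricks down for multiple days is already the most a
4+2 volume can tolerate, which is what makes me unsure whether heals need to
complete before taking the next brick down.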
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-14 Thread Anatoliy Dmytriyev
Thanks 

On 2018-03-14 13:50, Karthik Subrahmanya wrote:

> On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya  
> wrote:
> 
> On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev  
> wrote:
> 
> Hi Karthik, 
> 
> Thanks a lot for the explanation. 
> 
> Does it mean a distributed volume health can be checked only by "gluster 
> volume status " command? 
> Yes. I am not aware of any other command which can give the status of plain 
> distribute volume which is similar to the heal info command for 
> replicate/disperse volumes. 
> 
> And one more question: cluster.min-free-disk is 10% by default. What kind of 
> "side effects" can we face if this option will be reduced to, for example, 
> 5%? Could you point to any best practice document(s)? 
> Yes you can decrease it to any value. There won't be any side effect.

Small correction here: min-free-disk should ideally be set to a value larger
than the largest file size likely to be written. Decreasing it beyond a
point raises the likelihood of the brick getting full, which is a very
bad state to be in.
I will update you if I find a document that explains this. Sorry
for the previous statement. 

> Regards, 
> Karthik 
> 
> Regards, 
> 
> Anatoliy
> 
> On 2018-03-13 16:46, Karthik Subrahmanya wrote: 
> 
> Hi Anatoliy,
> 
> The heal command is basically used to heal any mismatching contents between 
> replica copies of the files. For the command "gluster volume heal " 
> to succeed, you should have the self-heal-daemon running,
> which is true only if your volume is of type replicate/disperse. In your case 
> you have a plain distribute volume where you do not store the replica of any 
> files. So the volume heal will return you the error.
> 
> Regards, Karthik 
> 
> On Tue, Mar 13, 2018 at 7:53 PM, Anatoliy Dmytriyev  
> wrote:
> Hi,
> 
> Maybe someone can point me to a documentation or explain this? I can't find 
> it myself.
> Do we have any other useful resources except doc.gluster.org [1]? As I see 
> many gluster options are not described there or there are no explanation what 
> is doing... 
> 
> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
> 
> We have a very fresh gluster 3.10.10 installation.
> Our volume is created as distributed volume, 9 bricks 96TB in total
> (87TB after 10% of gluster disk space reservation)
> 
> For some reasons I can't "heal" the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please check if all brick
> processes are running.
> 
> Which processes should be run on every brick for heal operation?
> 
> # gluster volume status
> Status of volume: gv0
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152  Y   70850
> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152  Y   
> 102951
> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152  Y   57535
> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152  Y   56676
> Brick cn05-ib:/gfs/gv0/brick1/brick 0 49152  Y   56880
> Brick cn06-ib:/gfs/gv0/brick1/brick 0 49152  Y   56889
> Brick cn07-ib:/gfs/gv0/brick1/brick 0 49152  Y   56902
> Brick cn08-ib:/gfs/gv0/brick1/brick 0 49152  Y   94920
> Brick cn09-ib:/gfs/gv0/brick1/brick 0 49152  Y   56542
> 
> Task Status of Volume gv0
> --
> There are no active volume tasks
> 
> # gluster volume info gv0
> Volume Name: gv0
> Type: Distribute
> Volume ID: 8becaf78-cf2d-4991-93bf-f2446688154f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 9
> Transport-type: rdma
> Bricks:
> Brick1: cn01-ib:/gfs/gv0/brick1/brick
> Brick2: cn02-ib:/gfs/gv0/brick1/brick
> Brick3: cn03-ib:/gfs/gv0/brick1/brick
> Brick4: cn04-ib:/gfs/gv0/brick1/brick
> Brick5: cn05-ib:/gfs/gv0/brick1/brick
> Brick6: cn06-ib:/gfs/gv0/brick1/brick
> Brick7: cn07-ib:/gfs/gv0/brick1/brick
> Brick8: cn08-ib:/gfs/gv0/brick1/brick
> Brick9: cn09-ib:/gfs/gv0/brick1/brick
> Options Reconfigured:
> client.event-threads: 8
> performance.parallel-readdir: on
> performance.readdir-ahead: on
> cluster.nufa: on
> nfs.disable: on 
> -- 
> Best regards,
> Anatoliy
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users [2]

-- 
Best regards,
Anatoliy 

-- 
Best regards,
Anatoliy 

Links:
--
[1] http://doc.gluster.org
[2] http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-14 Thread Karthik Subrahmanya
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya 
wrote:

>
>
> On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev 
> wrote:
>
>> Hi Karthik,
>>
>>
>> Thanks a lot for the explanation.
>>
>> Does it mean a distributed volume health can be checked only by "gluster
>> volume status " command?
>>
> Yes. I am not aware of any other command which can give the status of
> plain distribute volume which is similar to the heal info command for
> replicate/disperse volumes.
>
>> And one more question: cluster.min-free-disk is 10% by default. What kind
>> of "side effects" can we face if this option will be reduced to, for
>> example, 5%? Could you point to any best practice document(s)?
>>
> Yes you can decrease it to any value. There won't be any side effect.
>
Small correction here: min-free-disk should ideally be set to a value larger
than the largest file size likely to be written. Decreasing it beyond a point
raises the likelihood of the brick getting full, which is a very bad state to
be in.
I will update you if I find a document that explains this. Sorry for the
previous statement.

>
> Regards,
> Karthik
>
>>
>> Regards,
>>
>> Anatoliy
>>
>>
>>
>>
>>
>> On 2018-03-13 16:46, Karthik Subrahmanya wrote:
>>
>> Hi Anatoliy,
>>
>> The heal command is basically used to heal any mismatching contents
>> between replica copies of the files.
>> For the command "gluster volume heal " to succeed, you should
>> have the self-heal-daemon running,
>> which is true only if your volume is of type replicate/disperse.
>> In your case you have a plain distribute volume where you do not store
>> the replica of any files.
>> So the volume heal will return you the error.
>>
>> Regards,
>> Karthik
>>
>> On Tue, Mar 13, 2018 at 7:53 PM, Anatoliy Dmytriyev 
>> wrote:
>>
>>> Hi,
>>>
>>>
>>> Maybe someone can point me to a documentation or explain this? I can't
>>> find it myself.
>>> Do we have any other useful resources except doc.gluster.org? As I see
>>> many gluster options are not described there or there are no explanation
>>> what is doing...
>>>
>>>
>>>
>>> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>>>
 Hello,

 We have a very fresh gluster 3.10.10 installation.
 Our volume is created as distributed volume, 9 bricks 96TB in total
 (87TB after 10% of gluster disk space reservation)

 For some reasons I can't "heal" the volume:
 # gluster volume heal gv0
 Launching heal operation to perform index self heal on volume gv0 has
 been unsuccessful on bricks that are down. Please check if all brick
 processes are running.

 Which processes should be run on every brick for heal operation?

 # gluster volume status
 Status of volume: gv0
 Gluster process TCP Port  RDMA Port
 Online  Pid
 
 --
 Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152  Y
  70850
 Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152  Y
  102951
 Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152  Y
  57535
 Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152  Y
  56676
 Brick cn05-ib:/gfs/gv0/brick1/brick 0 49152  Y
  56880
 Brick cn06-ib:/gfs/gv0/brick1/brick 0 49152  Y
  56889
 Brick cn07-ib:/gfs/gv0/brick1/brick 0 49152  Y
  56902
 Brick cn08-ib:/gfs/gv0/brick1/brick 0 49152  Y
  94920
 Brick cn09-ib:/gfs/gv0/brick1/brick 0 49152  Y
  56542

 Task Status of Volume gv0
 
 --
 There are no active volume tasks


 # gluster volume info gv0
 Volume Name: gv0
 Type: Distribute
 Volume ID: 8becaf78-cf2d-4991-93bf-f2446688154f
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 9
 Transport-type: rdma
 Bricks:
 Brick1: cn01-ib:/gfs/gv0/brick1/brick
 Brick2: cn02-ib:/gfs/gv0/brick1/brick
 Brick3: cn03-ib:/gfs/gv0/brick1/brick
 Brick4: cn04-ib:/gfs/gv0/brick1/brick
 Brick5: cn05-ib:/gfs/gv0/brick1/brick
 Brick6: cn06-ib:/gfs/gv0/brick1/brick
 Brick7: cn07-ib:/gfs/gv0/brick1/brick
 Brick8: cn08-ib:/gfs/gv0/brick1/brick
 Brick9: cn09-ib:/gfs/gv0/brick1/brick
 Options Reconfigured:
 client.event-threads: 8
 performance.parallel-readdir: on
 performance.readdir-ahead: on
 cluster.nufa: on
 nfs.disable: on
>>>
>>>
>>> --
>>> Best regards,
>>> Anatoliy
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>> --
>> Best regards,
>> Anatoliy
>>
>
>

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-14 Thread Karthik Subrahmanya
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev 
wrote:

> Hi Karthik,
>
>
> Thanks a lot for the explanation.
>
> Does it mean a distributed volume health can be checked only by "gluster
> volume status " command?
>
Yes. I am not aware of any other command that gives the status of a plain
distribute volume the way the heal info command does for replicate/disperse
volumes.

> And one more question: cluster.min-free-disk is 10% by default. What kind
> of "side effects" can we face if this option will be reduced to, for
> example, 5%? Could you point to any best practice document(s)?
>
Yes, you can decrease it to any value. There won't be any side effect.

Regards,
Karthik

>
> Regards,
>
> Anatoliy
>
>
>
>
>
> On 2018-03-13 16:46, Karthik Subrahmanya wrote:
>
> Hi Anatoliy,
>
> The heal command is basically used to heal any mismatching contents
> between replica copies of the files.
> For the command "gluster volume heal " to succeed, you should
> have the self-heal-daemon running,
> which is true only if your volume is of type replicate/disperse.
> In your case you have a plain distribute volume where you do not store the
> replica of any files.
> So the volume heal will return you the error.
>
> Regards,
> Karthik
>
> On Tue, Mar 13, 2018 at 7:53 PM, Anatoliy Dmytriyev 
> wrote:
>
>> Hi,
>>
>>
>> Maybe someone can point me to a documentation or explain this? I can't
>> find it myself.
>> Do we have any other useful resources except doc.gluster.org? As I see
>> many gluster options are not described there or there are no explanation
>> what is doing...
>>
>>
>>
>> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>>
>>> Hello,
>>>
>>> We have a very fresh gluster 3.10.10 installation.
>>> Our volume is created as distributed volume, 9 bricks 96TB in total
>>> (87TB after 10% of gluster disk space reservation)
>>>
>>> For some reasons I can't "heal" the volume:
>>> # gluster volume heal gv0
>>> Launching heal operation to perform index self heal on volume gv0 has
>>> been unsuccessful on bricks that are down. Please check if all brick
>>> processes are running.
>>>
>>> Which processes should be run on every brick for heal operation?
>>>
>>> # gluster volume status
>>> Status of volume: gv0
>>> Gluster process TCP Port  RDMA Port  Online
>>> Pid
>>> 
>>> --
>>> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  57535
>>> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  56676
>>> Brick cn05-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  56880
>>> Brick cn06-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  56889
>>> Brick cn07-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  56902
>>> Brick cn08-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  94920
>>> Brick cn09-ib:/gfs/gv0/brick1/brick 0 49152  Y
>>>  56542
>>>
>>> Task Status of Volume gv0
>>> 
>>> --
>>> There are no active volume tasks
>>>
>>>
>>> # gluster volume info gv0
>>> Volume Name: gv0
>>> Type: Distribute
>>> Volume ID: 8becaf78-cf2d-4991-93bf-f2446688154f
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 9
>>> Transport-type: rdma
>>> Bricks:
>>> Brick1: cn01-ib:/gfs/gv0/brick1/brick
>>> Brick2: cn02-ib:/gfs/gv0/brick1/brick
>>> Brick3: cn03-ib:/gfs/gv0/brick1/brick
>>> Brick4: cn04-ib:/gfs/gv0/brick1/brick
>>> Brick5: cn05-ib:/gfs/gv0/brick1/brick
>>> Brick6: cn06-ib:/gfs/gv0/brick1/brick
>>> Brick7: cn07-ib:/gfs/gv0/brick1/brick
>>> Brick8: cn08-ib:/gfs/gv0/brick1/brick
>>> Brick9: cn09-ib:/gfs/gv0/brick1/brick
>>> Options Reconfigured:
>>> client.event-threads: 8
>>> performance.parallel-readdir: on
>>> performance.readdir-ahead: on
>>> cluster.nufa: on
>>> nfs.disable: on
>>
>>
>> --
>> Best regards,
>> Anatoliy
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
> --
> Best regards,
> Anatoliy
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-14 Thread Ondrej Valousek
Use DRBD then; that will give you the required redundancy.

From: Andreas Ericsson [mailto:andreas.erics...@findity.com]
Sent: Wednesday, March 14, 2018 11:32 AM
To: Ondrej Valousek 
Cc: Pranith Kumar Karampuri ; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Expected performance for WORM scenario

We can't stick to single server because the law. Redundancy is a legal 
requirement for our business.

I'm sort of giving up on gluster though. It would seem a pretty stupid content 
addressable storage would suit our needs better.

On 13 March 2018 at 10:12, Ondrej Valousek 
> wrote:
Yes, I have had this in place already (well except of the negative cache, but 
enabling that did not make much effect).
To me, this is no surprise – nothing can match nfs performance for small files 
for obvious reasons:

1.   Single server, does not have to deal with distributed locks

2.   Afaik, gluster does not support read/write delegations the same way 
NFS does.

3.   Glusterfs is FUSE based

4.   Glusterfs does not support async writes

Summary: If you do not need to scale out, stick with a single server (+DRBD 
optionally for HA), it will give you the best performance

Ondrej


From: Pranith Kumar Karampuri 
[mailto:pkara...@redhat.com]
Sent: Tuesday, March 13, 2018 9:10 AM

To: Ondrej Valousek 
>
Cc: Andreas Ericsson 
>; 
Gluster-users@gluster.org
Subject: Re: [Gluster-users] Expected performance for WORM scenario



On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek 
> wrote:
Well, it might be close to the _synchronous_ nfs, but it is still well behind 
of the asynchronous nfs performance.
Simple script (bit extreme I know, but helps to draw the picture):

#!/bin/csh

set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
   echo ahoj > test.$HOSTNAME.$j
   @ j++
end
rm -rf test.$HOSTNAME.*


Takes 9 seconds to execute on the NFS share, but 90 seconds on GlusterFS – i.e. 
10 times slower.

Do you have the new features enabled?

performance.stat-prefetch=on
performance.cache-invalidation=on
performance.md-cache-timeout=600
network.inode-lru-limit=5
performance.nl-cache=on
performance.nl-cache-timeout=600
network.inode-lru-limit=5


Ondrej

From: Pranith Kumar Karampuri 
[mailto:pkara...@redhat.com]
Sent: Tuesday, March 13, 2018 8:28 AM
To: Ondrej Valousek 
>
Cc: Andreas Ericsson 
>; 
Gluster-users@gluster.org
Subject: Re: [Gluster-users] Expected performance for WORM scenario



On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek 
> wrote:
Hi,
Gluster will never perform well for small files.
I believe there is  nothing you can do with this.

It is bad compared to a disk filesystem but I believe it is much closer to NFS 
now.

Andreas,
 Looking at your workload, I am suspecting there to be lot of LOOKUPs which 
reduce performance. Is it possible to do the following?

# gluster volume profile  info incremental
#execute your workload
# gluster volume profile  info incremental > 
/path/to/file/that/you/need/to/send/us

If the last line in there is LOOKUP, mostly we need to enable nl-cache feature 
and see how it performs.

Ondrej

From: 
gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org]
 On Behalf Of Andreas Ericsson
Sent: Monday, March 12, 2018 1:47 PM
To: Gluster-users@gluster.org
Subject: [Gluster-users] Expected performance for WORM scenario

Heya fellas.

I've been struggling quite a lot to get glusterfs to perform even halfdecently 
with a write-intensive workload. Testnumbers are from gluster 3.10.7.

We store a bunch of small files in a doubly-tiered sha1 hash fanout directory 
structure. The directories themselves aren't overly full. Most of the data we 
write to gluster is "write once, read probably never", so 99% of all operations 
are of the write variety.

The network between servers is sound. 10gb network cards run over a 10gb (doh) 
switch. iperf reports 9.86Gbit/sec. ping reports a latency of 0.1 - 0.2 ms. 
There is no firewall, no packet inspection and no nothing between the servers, 
and the 10gb switch is the only path between the two machines, so traffic isn't 
going over some 2mbit wifi by accident.

Our main storage has always been really slow (write speed of roughly 1.5MiB/s), 
but I had long attributed that to the 

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-14 Thread Ingard Mevåg
If you could replicate the problem you had and provide the volume info +
profile that the Red Hat guys requested, that would help in understanding
what is happening with your workload. Also, if possible, send the script you
used to generate the load.

We've had our share of difficulties with small files on glusterfs as well.
The last problem we encountered is described here
https://bugzilla.redhat.com/show_bug.cgi?id=1546732 and seems to be a
regression from 3.10 to 3.12, but it is related to stat calls taking a long
time, and that could be your problem as well. The diagnostics profile output
would tell us if that's the case.

That being said, we've started experimenting with
https://github.com/chrislusf/seaweedfs for some of our small-file workloads.

2018-03-14 11:31 GMT+01:00 Andreas Ericsson :

> We can't stick to single server because the law. Redundancy is a legal
> requirement for our business.
>
> I'm sort of giving up on gluster though. It would seem a pretty stupid
> content addressable storage would suit our needs better.
>
> On 13 March 2018 at 10:12, Ondrej Valousek 
> wrote:
>
>> Yes, I have had this in place already (well except of the negative cache,
>> but enabling that did not make much effect).
>>
>> To me, this is no surprise – nothing can match nfs performance for small
>> files for obvious reasons:
>>
>> 1.   Single server, does not have to deal with distributed locks
>>
>> 2.   Afaik, gluster does not support read/write delegations the same
>> way NFS does.
>>
>> 3.   Glusterfs is FUSE based
>>
>> 4.   Glusterfs does not support async writes
>>
>>
>>
>> Summary: If you do not need to scale out, stick with a single server
>> (+DRBD optionally for HA), it will give you the best performance
>>
>>
>>
>> Ondrej
>>
>>
>>
>>
>>
>> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
>> *Sent:* Tuesday, March 13, 2018 9:10 AM
>>
>> *To:* Ondrej Valousek 
>> *Cc:* Andreas Ericsson ;
>> Gluster-users@gluster.org
>> *Subject:* Re: [Gluster-users] Expected performance for WORM scenario
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek <
>> ondrej.valou...@s3group.com> wrote:
>>
>> Well, it might be close to the _*synchronous*_ nfs, but it is still well
>> behind of the asynchronous nfs performance.
>>
>> Simple script (bit extreme I know, but helps to draw the picture):
>>
>>
>>
>> #!/bin/csh
>>
>>
>>
>> set HOSTNAME=`/bin/hostname`
>>
>> set j=1
>>
>> while ($j <= 7000)
>>
>>echo ahoj > test.$HOSTNAME.$j
>>
>>@ j++
>>
>> end
>>
>> rm -rf test.$HOSTNAME.*
>>
>>
>>
>>
>>
>> Takes 9 seconds to execute on the NFS share, but 90 seconds on GlusterFS
>> – i.e. 10 times slower.
>>
>>
>>
>> Do you have the new features enabled?
>>
>>
>>
>> performance.stat-prefetch=on
>> performance.cache-invalidation=on
>> performance.md-cache-timeout=600
>> network.inode-lru-limit=5
>>
>> performance.nl-cache=on
>> performance.nl-cache-timeout=600
>> network.inode-lru-limit=5
>>
>>
>>
>>
>>
>> Ondrej
>>
>>
>>
>> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
>> *Sent:* Tuesday, March 13, 2018 8:28 AM
>> *To:* Ondrej Valousek 
>> *Cc:* Andreas Ericsson ;
>> Gluster-users@gluster.org
>> *Subject:* Re: [Gluster-users] Expected performance for WORM scenario
>>
>>
>>
>>
>>
>>
>>
>> On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
>> ondrej.valou...@s3group.com> wrote:
>>
>> Hi,
>>
>> Gluster will never perform well for small files.
>>
>> I believe there is  nothing you can do with this.
>>
>>
>>
>> It is bad compared to a disk filesystem but I believe it is much closer
>> to NFS now.
>>
>>
>>
>> Andreas,
>>
>>  Looking at your workload, I am suspecting there to be lot of LOOKUPs
>> which reduce performance. Is it possible to do the following?
>>
>>
>>
>> # gluster volume profile  info incremental
>>
>> #execute your workload
>>
>> # gluster volume profile  info incremental >
>> /path/to/file/that/you/need/to/send/us
>>
>>
>>
>> If the last line in there is LOOKUP, mostly we need to enable nl-cache
>> feature and see how it performs.
>>
>>
>>
>> Ondrej
>>
>>
>>
>> *From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@
>> gluster.org] *On Behalf Of *Andreas Ericsson
>> *Sent:* Monday, March 12, 2018 1:47 PM
>> *To:* Gluster-users@gluster.org
>> *Subject:* [Gluster-users] Expected performance for WORM scenario
>>
>>
>>
>> Heya fellas.
>>
>>
>>
>> I've been struggling quite a lot to get glusterfs to perform even
>> halfdecently with a write-intensive workload. Testnumbers are from gluster
>> 3.10.7.
>>
>>
>>
>> We store a bunch of small files in a doubly-tiered sha1 hash fanout
>> directory structure. The directories themselves aren't overly full. Most of
>> the data we write to gluster is "write once, read probably never", so 99%
>> of all 

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-14 Thread Ondrej Valousek
Gluster offers a distributed filesystem. It will NEVER perform as well as a local 
filesystem, because it can't.
I also believe NFS will always outperform Gluster in certain situations, as it 
does not have to deal with distributed locks.

It also uses FUSE, which isn't great performance-wise.

O.


From: Andreas Ericsson [mailto:andreas.erics...@findity.com]
Sent: Wednesday, March 14, 2018 10:43 AM
To: Pranith Kumar Karampuri 
Cc: Ondrej Valousek ; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Expected performance for WORM scenario

That seems unlikely. I pre-create the directory layout and then write to 
directories I know exist.

I don't quite understand how any settings at all can reduce performance to 
1/5000 of what I get when writing straight to ramdisk though, and especially 
when running on a single node instead of in a cluster. Has anyone else set this 
up and managed to get better write performance?

On 13 March 2018 at 08:28, Pranith Kumar Karampuri 
> wrote:


On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek 
> wrote:
Hi,
Gluster will never perform well for small files.
I believe there is  nothing you can do with this.

It is bad compared to a disk filesystem but I believe it is much closer to NFS 
now.

Andreas,
 Looking at your workload, I am suspecting there to be lot of LOOKUPs which 
reduce performance. Is it possible to do the following?

# gluster volume profile  info incremental
#execute your workload
# gluster volume profile  info incremental > 
/path/to/file/that/you/need/to/send/us

If the last line in there is LOOKUP, mostly we need to enable nl-cache feature 
and see how it performs.

Ondrej

From: 
gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org]
 On Behalf Of Andreas Ericsson
Sent: Monday, March 12, 2018 1:47 PM
To: Gluster-users@gluster.org
Subject: [Gluster-users] Expected performance for WORM scenario

Heya fellas.

I've been struggling quite a lot to get glusterfs to perform even halfdecently 
with a write-intensive workload. Testnumbers are from gluster 3.10.7.

We store a bunch of small files in a doubly-tiered sha1 hash fanout directory 
structure. The directories themselves aren't overly full. Most of the data we 
write to gluster is "write once, read probably never", so 99% of all operations 
are of the write variety.

The network between servers is sound. 10gb network cards run over a 10gb (doh) 
switch. iperf reports 9.86Gbit/sec. ping reports a latency of 0.1 - 0.2 ms. 
There is no firewall, no packet inspection and no nothing between the servers, 
and the 10gb switch is the only path between the two machines, so traffic isn't 
going over some 2mbit wifi by accident.

Our main storage has always been really slow (write speed of roughly 1.5MiB/s), 
but I had long attributed that to the extremely slow disks we use to back it, 
so now that we're expanding I set up a new gluster cluster with state of the 
art NVMe SSD drives to boost performance. However, performance only hopped up 
to around 2.1MiB/s. Perplexed, I tried it first with a 3-node cluster using 2GB 
ramdrives, which got me up to 2.4MiB/s. My last resort was to use a single node 
running on ramdisk, just to 100% exclude any network shenanigans, but the write 
performance stayed at an absolutely abysmal 3MiB/s.

Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I don't 
actually remember the numbers, but my test that took 2 minutes with gluster 
completed before I had time to blink). Writing straight to the backing SSD 
drives gives me a throughput of 96MiB/sec.

The test itself writes 8494 files that I simply took randomly from our 
production environment, comprising a total of 63.4MiB (so average file size is 
just under 8k. Most are actually close to 4k though, with the occasional 
2-or-so MB file in there.

I have googled and read a *lot* of performance-tuning guides, but the 3MiB/sec 
on single-node ramdisk seems to be far beyond the crippling one can cause by 
misconfiguration of a single system.

With this in mind; What sort of write performance can one reasonably hope to 
get with gluster? Assume a 3-node cluster running on top of (small) ramdisks on 
a fast and stable network. Is it just a bad fit for our workload?

/Andreas


Re: [Gluster-users] Expected performance for WORM scenario

2018-03-14 Thread Andreas Ericsson
We can't stick to a single server because of the law. Redundancy is a legal
requirement for our business.

I'm sort of giving up on gluster though. It would seem a pretty stupid
content-addressable store would suit our needs better.

On 13 March 2018 at 10:12, Ondrej Valousek 
wrote:

> Yes, I have had this in place already (well except of the negative cache,
> but enabling that did not make much effect).
>
> To me, this is no surprise – nothing can match nfs performance for small
> files for obvious reasons:
>
> 1.   Single server, does not have to deal with distributed locks
>
> 2.   Afaik, gluster does not support read/write delegations the same
> way NFS does.
>
> 3.   Glusterfs is FUSE based
>
> 4.   Glusterfs does not support async writes
>
>
>
> Summary: If you do not need to scale out, stick with a single server
> (+DRBD optionally for HA), it will give you the best performance
>
>
>
> Ondrej
>
>
>
>
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Tuesday, March 13, 2018 9:10 AM
>
> *To:* Ondrej Valousek 
> *Cc:* Andreas Ericsson ;
> Gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Expected performance for WORM scenario
>
>
>
>
>
>
>
> On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek <
> ondrej.valou...@s3group.com> wrote:
>
> Well, it might be close to the _*synchronous*_ nfs, but it is still well
> behind of the asynchronous nfs performance.
>
> Simple script (bit extreme I know, but helps to draw the picture):
>
>
>
> #!/bin/csh
>
>
>
> set HOSTNAME=`/bin/hostname`
>
> set j=1
>
> while ($j <= 7000)
>
>echo ahoj > test.$HOSTNAME.$j
>
>@ j++
>
> end
>
> rm -rf test.$HOSTNAME.*
>
>
>
>
>
> Takes 9 seconds to execute on the NFS share, but 90 seconds on GlusterFS –
> i.e. 10 times slower.
>
>
>
> Do you have the new features enabled?
>
>
>
> performance.stat-prefetch=on
> performance.cache-invalidation=on
> performance.md-cache-timeout=600
> network.inode-lru-limit=5
>
> performance.nl-cache=on
> performance.nl-cache-timeout=600
> network.inode-lru-limit=5
>
>
>
>
>
> Ondrej
>
>
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Tuesday, March 13, 2018 8:28 AM
> *To:* Ondrej Valousek 
> *Cc:* Andreas Ericsson ;
> Gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Expected performance for WORM scenario
>
>
>
>
>
>
>
> On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
> ondrej.valou...@s3group.com> wrote:
>
> Hi,
>
> Gluster will never perform well for small files.
>
> I believe there is  nothing you can do with this.
>
>
>
> It is bad compared to a disk filesystem but I believe it is much closer to
> NFS now.
>
>
>
> Andreas,
>
>  Looking at your workload, I am suspecting there to be lot of LOOKUPs
> which reduce performance. Is it possible to do the following?
>
>
>
> # gluster volume profile  info incremental
>
> #execute your workload
>
> # gluster volume profile  info incremental >
> /path/to/file/that/you/need/to/send/us
>
>
>
> If the last line in there is LOOKUP, mostly we need to enable nl-cache
> feature and see how it performs.
>
>
>
> Ondrej
>
>
>
> *From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@
> gluster.org] *On Behalf Of *Andreas Ericsson
> *Sent:* Monday, March 12, 2018 1:47 PM
> *To:* Gluster-users@gluster.org
> *Subject:* [Gluster-users] Expected performance for WORM scenario
>
>
>
> Heya fellas.
>
>
>
> I've been struggling quite a lot to get glusterfs to perform even
> halfdecently with a write-intensive workload. Testnumbers are from gluster
> 3.10.7.
>
>
>
> We store a bunch of small files in a doubly-tiered sha1 hash fanout
> directory structure. The directories themselves aren't overly full. Most of
> the data we write to gluster is "write once, read probably never", so 99%
> of all operations are of the write variety.
>
>
>
> The network between servers is sound. 10gb network cards run over a 10gb
> (doh) switch. iperf reports 9.86Gbit/sec. ping reports a latency of 0.1 -
> 0.2 ms. There is no firewall, no packet inspection and no nothing between
> the servers, and the 10gb switch is the only path between the two machines,
> so traffic isn't going over some 2mbit wifi by accident.
>
>
>
> Our main storage has always been really slow (write speed of roughly
> 1.5MiB/s), but I had long attributed that to the extremely slow disks we
> use to back it, so now that we're expanding I set up a new gluster cluster
> with state of the art NVMe SSD drives to boost performance. However,
> performance only hopped up to around 2.1MiB/s. Perplexed, I tried it first
> with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My
> last resort was to use a single node running on ramdisk, just to 100%
> exclude any network shenanigans, but the write performance stayed at an
> absolutely abysmal 3MiB/s.
>
>
>
> 

Re: [Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

2018-03-14 Thread Anatoliy Dmytriyev
Hi Karthik, 

Thanks a lot for the explanation. 

Does it mean that the health of a distributed volume can be checked only with
the "gluster volume status" command? 

And one more question: cluster.min-free-disk is 10% by default. What kind of
"side effects" could we face if this option is reduced to, for example, 5%?
Could you point to any best-practice document(s)? 

Regards, 

Anatoliy 

On 2018-03-13 16:46, Karthik Subrahmanya wrote:

> Hi Anatoliy,
> 
> The heal command is basically used to heal any mismatching contents between 
> replica copies of the files. For the command "gluster volume heal " 
> to succeed, you should have the self-heal-daemon running,
> which is true only if your volume is of type replicate/disperse. In your case 
> you have a plain distribute volume where you do not store the replica of any 
> files. So the volume heal will return you the error.
> 
> Regards, Karthik 
> 
> On Tue, Mar 13, 2018 at 7:53 PM, Anatoliy Dmytriyev  
> wrote:
> Hi,
> 
> Maybe someone can point me to a documentation or explain this? I can't find 
> it myself.
> Do we have any other useful resources except doc.gluster.org [1]? As I see 
> many gluster options are not described there or there are no explanation what 
> is doing... 
> 
> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
> 
> We have a very fresh gluster 3.10.10 installation.
> Our volume is created as distributed volume, 9 bricks 96TB in total
> (87TB after 10% of gluster disk space reservation)
> 
> For some reasons I can't "heal" the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please check if all brick
> processes are running.
> 
> Which processes should be run on every brick for heal operation?
> 
> # gluster volume status
> Status of volume: gv0
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152  Y   70850
> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152  Y   
> 102951
> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152  Y   57535
> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152  Y   56676
> Brick cn05-ib:/gfs/gv0/brick1/brick 0 49152  Y   56880
> Brick cn06-ib:/gfs/gv0/brick1/brick 0 49152  Y   56889
> Brick cn07-ib:/gfs/gv0/brick1/brick 0 49152  Y   56902
> Brick cn08-ib:/gfs/gv0/brick1/brick 0 49152  Y   94920
> Brick cn09-ib:/gfs/gv0/brick1/brick 0 49152  Y   56542
> 
> Task Status of Volume gv0
> --
> There are no active volume tasks
> 
> # gluster volume info gv0
> Volume Name: gv0
> Type: Distribute
> Volume ID: 8becaf78-cf2d-4991-93bf-f2446688154f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 9
> Transport-type: rdma
> Bricks:
> Brick1: cn01-ib:/gfs/gv0/brick1/brick
> Brick2: cn02-ib:/gfs/gv0/brick1/brick
> Brick3: cn03-ib:/gfs/gv0/brick1/brick
> Brick4: cn04-ib:/gfs/gv0/brick1/brick
> Brick5: cn05-ib:/gfs/gv0/brick1/brick
> Brick6: cn06-ib:/gfs/gv0/brick1/brick
> Brick7: cn07-ib:/gfs/gv0/brick1/brick
> Brick8: cn08-ib:/gfs/gv0/brick1/brick
> Brick9: cn09-ib:/gfs/gv0/brick1/brick
> Options Reconfigured:
> client.event-threads: 8
> performance.parallel-readdir: on
> performance.readdir-ahead: on
> cluster.nufa: on
> nfs.disable: on 
> -- 
> Best regards,
> Anatoliy
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users [2]

-- 
Best regards,
Anatoliy 

Links:
--
[1] http://doc.gluster.org
[2] http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-14 Thread Andreas Ericsson
That seems unlikely. I pre-create the directory layout and then write to
directories I know exist.

I don't quite understand how any settings at all can reduce performance to
1/5000 of what I get when writing straight to ramdisk, though, especially
when running on a single node instead of in a cluster. Has anyone else set
this up and managed to get better write performance?

On 13 March 2018 at 08:28, Pranith Kumar Karampuri 
wrote:

>
>
> On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
> ondrej.valou...@s3group.com> wrote:
>
>> Hi,
>>
>> Gluster will never perform well for small files.
>>
>> I believe there is  nothing you can do with this.
>>
>
> It is bad compared to a disk filesystem but I believe it is much closer to
> NFS now.
>
> Andreas,
>  Looking at your workload, I am suspecting there to be lot of LOOKUPs
> which reduce performance. Is it possible to do the following?
>
> # gluster volume profile  info incremental
> #execute your workload
> # gluster volume profile  info incremental >
> /path/to/file/that/you/need/to/send/us
>
> If the last line in there is LOOKUP, mostly we need to enable nl-cache
> feature and see how it performs.
>
>
>> Ondrej
>>
>>
>>
>> *From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@
>> gluster.org] *On Behalf Of *Andreas Ericsson
>> *Sent:* Monday, March 12, 2018 1:47 PM
>> *To:* Gluster-users@gluster.org
>> *Subject:* [Gluster-users] Expected performance for WORM scenario
>>
>>
>>
>> Heya fellas.
>>
>>
>>
>> I've been struggling quite a lot to get glusterfs to perform even
>> halfdecently with a write-intensive workload. Testnumbers are from gluster
>> 3.10.7.
>>
>>
>>
>> We store a bunch of small files in a doubly-tiered sha1 hash fanout
>> directory structure. The directories themselves aren't overly full. Most of
>> the data we write to gluster is "write once, read probably never", so 99%
>> of all operations are of the write variety.
>>
>>
>>
>> The network between servers is sound. 10gb network cards run over a 10gb
>> (doh) switch. iperf reports 9.86Gbit/sec. ping reports a latency of 0.1 -
>> 0.2 ms. There is no firewall, no packet inspection and no nothing between
>> the servers, and the 10gb switch is the only path between the two machines,
>> so traffic isn't going over some 2mbit wifi by accident.
>>
>>
>>
>> Our main storage has always been really slow (write speed of roughly
>> 1.5MiB/s), but I had long attributed that to the extremely slow disks we
>> use to back it, so now that we're expanding I set up a new gluster cluster
>> with state of the art NVMe SSD drives to boost performance. However,
>> performance only hopped up to around 2.1MiB/s. Perplexed, I tried it first
>> with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My
>> last resort was to use a single node running on ramdisk, just to 100%
>> exclude any network shenanigans, but the write performance stayed at an
>> absolutely abysmal 3MiB/s.
>>
>>
>>
>> Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I
>> don't actually remember the numbers, but my test that took 2 minutes with
>> gluster completed before I had time to blink). Writing straight to the
>> backing SSD drives gives me a throughput of 96MiB/sec.
>>
>>
>>
>> The test itself writes 8494 files that I simply took randomly from our
>> production environment, comprising a total of 63.4MiB (so average file size
>> is just under 8k. Most are actually close to 4k though, with the occasional
>> 2-or-so MB file in there.
>>
>>
>>
>> I have googled and read a *lot* of performance-tuning guides, but the
>> 3MiB/sec on single-node ramdisk seems to be far beyond the crippling one
>> can cause by misconfiguration of a single system.
>>
>>
>>
>> With this in mind; What sort of write performance can one reasonably hope
>> to get with gluster? Assume a 3-node cluster running on top of (small)
>> ramdisks on a fast and stable network. Is it just a bad fit for our
>> workload?
>>
>>
>>
>> /Andreas
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>
___

Re: [Gluster-users] Expected performance for WORM scenario

2018-03-14 Thread Andreas Ericsson
I no longer have the volume lying around. The most interesting one was a
2GB volume created on ramdisk for a single node. If I couldn't get that to go
faster than 3MB/sec for writes, I figured I wouldn't bother further.

I was using the gluster FUSE client, version 3.10.7. Everything was running on
Ubuntu 16.04 servers.
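
In case it helps, here is a rough, untested Java sketch of the kind of test
described in my original mail (quoted below). It is not the actual script:
the mount path is hypothetical, it uses a fixed 8 KiB payload instead of the
real file-size mix, and it creates directories on the fly rather than
pre-creating the layout:

import java.io.IOException;
import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Random;

public class SmallFileWriteTest {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get("/mnt/glustervol/testdata"); // hypothetical mount point
        int fileCount = 8494;                // number of files in the test
        byte[] payload = new byte[8 * 1024]; // ~8 KiB per file
        new Random(42).nextBytes(payload);

        long start = System.nanoTime();
        for (int i = 0; i < fileCount; i++) {
            // Fake 40-hex-char "sha1" name, deterministic per file index
            String name = String.format("%040x", new BigInteger(160, new Random(i)));
            // Doubly-tiered fanout: <root>/aa/bb/<name>
            Path dir = root.resolve(name.substring(0, 2)).resolve(name.substring(2, 4));
            Files.createDirectories(dir);
            Files.write(dir.resolve(name), payload);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mib = fileCount * (double) payload.length / (1024 * 1024);
        System.out.printf("wrote %.1f MiB in %.1f s (%.2f MiB/s)%n", mib, seconds, mib / seconds);
    }
}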

On 12 March 2018 at 15:30, Nithya Balachandran  wrote:

> Hi,
>
> Can you send us the following details:
> 1. gluster volume info
> 2. What client you are using to run this?
>
> Thanks,
> Nithya
>
> On 12 March 2018 at 18:16, Andreas Ericsson 
> wrote:
>
>> Heya fellas.
>>
>> I've been struggling quite a lot to get glusterfs to perform even
>> halfdecently with a write-intensive workload. Testnumbers are from gluster
>> 3.10.7.
>>
>> We store a bunch of small files in a doubly-tiered sha1 hash fanout
>> directory structure. The directories themselves aren't overly full. Most of
>> the data we write to gluster is "write once, read probably never", so 99%
>> of all operations are of the write variety.
>>
>> The network between servers is sound. 10gb network cards run over a 10gb
>> (doh) switch. iperf reports 9.86Gbit/sec. ping reports a latency of 0.1 -
>> 0.2 ms. There is no firewall, no packet inspection and no nothing between
>> the servers, and the 10gb switch is the only path between the two machines,
>> so traffic isn't going over some 2mbit wifi by accident.
>>
>> Our main storage has always been really slow (write speed of roughly
>> 1.5MiB/s), but I had long attributed that to the extremely slow disks we
>> use to back it, so now that we're expanding I set up a new gluster cluster
>> with state of the art NVMe SSD drives to boost performance. However,
>> performance only hopped up to around 2.1MiB/s. Perplexed, I tried it first
>> with a 3-node cluster using 2GB ramdrives, which got me up to 2.4MiB/s. My
>> last resort was to use a single node running on ramdisk, just to 100%
>> exclude any network shenanigans, but the write performance stayed at an
>> absolutely abysmal 3MiB/s.
>>
>> Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I
>> don't actually remember the numbers, but my test that took 2 minutes with
>> gluster completed before I had time to blink). Writing straight to the
>> backing SSD drives gives me a throughput of 96MiB/sec.
>>
>> The test itself writes 8494 files that I simply took randomly from our
>> production environment, comprising a total of 63.4MiB (so average file size
>> is just under 8k. Most are actually close to 4k though, with the occasional
>> 2-or-so MB file in there.
>>
>> I have googled and read a *lot* of performance-tuning guides, but the
>> 3MiB/sec on single-node ramdisk seems to be far beyond the crippling one
>> can cause by misconfiguration of a single system.
>>
>> With this in mind; What sort of write performance can one reasonably hope
>> to get with gluster? Assume a 3-node cluster running on top of (small)
>> ramdisks on a fast and stable network. Is it just a bad fit for our
>> workload?
>>
>> /Andreas
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-Maintainers] Announcing Gluster release 4.0.0 (Short Term Maintenance)

2018-03-14 Thread Aravinda

On 03/14/2018 07:13 AM, Shyam Ranganathan wrote:

The Gluster community celebrates 13 years of development with this
latest release, Gluster 4.0. This release enables improved integration
with containers, an enhanced user experience, and a next-generation
management framework. The 4.0 release helps cloud-native app developers
choose Gluster as the default scale-out distributed file system.

We’re highlighting some of the announcements, major features and changes
here, but our release notes[1] have announcements, expanded major
changes and features, and bugs addressed in this release.

Major enhancements:

- Management
GlusterD2 (GD2) is a new management daemon for Gluster-4.0. It is a
complete rewrite, with all new internal core frameworks, that make it
more scalable, easier to integrate with and has lower maintenance
requirements. This replaces GlusterD.

A quick start guide [6] is available to get started with GD2.

Although GD2 is in tech preview for this release, it is ready to use for
forming and managing new clusters.

- Monitoring
With this release, GlusterFS enables a lightweight method to access
internal monitoring information.

- Performance
There are several enhancements to performance in the disperse translator
and in the client side metadata caching layers.

- Other enhancements of note
 This release adds: ability to run Gluster on FIPS compliant systems,
ability to force permissions while creating files/directories, and
improved consistency in distributed volumes.

- Developer related
New on-wire protocol version and full type encoding of internal
dictionaries on the wire, Global translator to handle per-daemon
options, improved translator initialization structure, among a few other
improvements, that help streamline development of newer translators.

Release packages (or where to get them) are available at [2] and are
signed with [3]. The upgrade guide for this release can be found at [4]

Related announcements:

- As 3.13 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.0.0 [5].

- Releases that receive maintenance updates post 4.0 release are, 3.10,
3.12, 4.0 [5].

- With this release, the CentOS storage SIG will not build server
packages for CentOS6. Server packages will be available for CentOS7
only. For ease of migrations, client packages on CentOS6 will be
published and maintained, as announced here [7].

References:
[1] Release notes:
https://docs.gluster.org/en/latest/release-notes/4.0.0.md/

The above link is not working; the actual link is:

https://docs.gluster.org/en/latest/release-notes/4.0.0/


[2] Packages: https://download.gluster.org/pub/gluster/glusterfs/4.0/
[3] Packages signed with:
https://download.gluster.org/pub/gluster/glusterfs/4.0/rsa.pub
[4] Upgrade guide:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.0/
[5] Release schedule: https://www.gluster.org/release-schedule/
[6] GD2 quick start:
https://github.com/gluster/glusterd2/blob/master/doc/quick-start-user-guide.md
[7] CentOS Storage SIG CentOS6 support announcement:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html
___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



--
regards
Aravinda VK

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users