Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Sam McLeod
Excellent description, thank you. With performance.write-behind-trickling-writes ON (default): ## 4k randwrite # fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite test: (g=0): rw=randwrite, bs=(R)
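A minimal sketch of how that comparison can be reproduced, assuming a volume named gfsvol (hypothetical name) and a build where the performance.write-behind-trickling-writes option is available:

   ## toggle trickling writes off, then re-run the identical 4k randwrite job
   # gluster volume set gfsvol performance.write-behind-trickling-writes off
   # fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test \
         --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
   ## restore the default when done
   # gluster volume set gfsvol performance.write-behind-trickling-writes on

Running the same fio job with the option in each state isolates the effect of trickling writes from everything else in the stack.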

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Raghavendra Gowdappa
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod wrote: > Hi Raghavendra, > > > On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa > wrote: > > Aggregating a large number of small writes by write-behind into large writes > has been merged on master: >

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Sam McLeod
Hi Raghavendra, > On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa wrote: > > Aggregating a large number of small writes by write-behind into large writes > has been merged on master: > https://github.com/gluster/glusterfs/issues/364 >
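Write-behind itself is a per-volume translator; a minimal sketch of inspecting and tuning it, assuming a volume named gfsvol (hypothetical) and stock option names:

   # gluster volume get gfsvol performance.write-behind
   # gluster volume set gfsvol performance.write-behind on
   # gluster volume set gfsvol performance.write-behind-window-size 1MB

The window size caps how much dirty data write-behind may hold per file before flushing, which bounds any aggregation of small writes into larger ones.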

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Raghavendra Gowdappa
On Tue, Mar 20, 2018 at 1:55 AM, TomK wrote: > On 3/19/2018 10:52 AM, Rik Theys wrote: > >> Hi, >> >> On 03/19/2018 03:42 PM, TomK wrote: >> >>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote: >>> Removing NFS or NFS Ganesha from the equation, not very impressed on my >>> own

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Sam McLeod
Howdy all, Sorry, I'm in Australia so most of your replies came in overnight for me. Note: At the end of this reply is a listing of all our volume settings (gluster volume get volname all). Note 2: I really wish Gluster used Discourse for this kind of community troubleshooting and analysis, using a
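The settings listing mentioned above comes from the volume get command; a sketch, assuming a volume named gfsvol (hypothetical):

   # gluster volume get gfsvol all
   # gluster volume get gfsvol all | grep ^performance.

The second form narrows the output to the performance translator options most relevant to small-file workloads.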

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread TomK
On 3/19/2018 10:52 AM, Rik Theys wrote: Hi, On 03/19/2018 03:42 PM, TomK wrote: On 3/19/2018 5:42 AM, Ondrej Valousek wrote: Removing NFS or NFS Ganesha from the equation, not very impressed on my own setup either. For the writes it's doing, that's a lot of CPU usage in top. Seems

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Rik Theys
Hi, On 03/19/2018 03:42 PM, TomK wrote: > On 3/19/2018 5:42 AM, Ondrej Valousek wrote: > Removing NFS or NFS Ganesha from the equation, not very impressed on my > own setup either. For the writes it's doing, that's a lot of CPU usage > in top. Seems bottle-necked via a single execution core

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread TomK
On 3/19/2018 5:42 AM, Ondrej Valousek wrote: Removing NFS or NFS Ganesha from the equation, not very impressed on my own setup either. For the writes it's doing, that's a lot of CPU usage in top. Seems bottle-necked via a single execution core somewhere trying to facilitate read / writes to
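One way to confirm a single-core bottleneck like the one described is to watch per-thread CPU usage on the brick daemons; a sketch, assuming glusterfsd is the brick process name:

   # top -H -p $(pgrep -d, glusterfsd)

If one thread sits near 100% while the rest idle, the workload is serialized on that thread rather than starved for aggregate CPU.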

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Ondrej Valousek
Hi, As I posted in my previous emails - glusterfs can never match NFS (especially async NFS) performance on small files / latency. That's inherent to the design; nothing you can do about it. Ondrej -Original Message- From: gluster-users-boun...@gluster.org
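The async NFS behaviour referenced here is a server-side export option; a sketch of an /etc/exports entry with a hypothetical path and client:

   /export/data  client.example.com(rw,async,no_subtree_check)

With async, the NFS server acknowledges writes before they reach stable storage, which is why it can beat any design that replicates synchronously before acknowledging, as a replica-3 / arbiter Gluster volume does.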

Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-19 Thread Rik Theys
Hi, I've done some similar tests and experience similar performance issues (see my 'gluster for home directories?' thread on the list). If I read your mail correctly, you are comparing an NFS mount of the brick disk against a gluster mount (using the fuse client)? Which options do you have set
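A sketch of the two mounts being compared, with hypothetical host, export, and volume names:

   # mount -t nfs gfs01:/export/brick1 /mnt/nfs-direct
   # mount -t glusterfs gfs01:/gfsvol /mnt/gluster-fuse

Benchmarking the same file set under both mount points separates the cost of the Gluster stack from the cost of the underlying brick storage.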

Re: [Gluster-users] Disperse volume recovery and healing

2018-03-19 Thread Xavi Hernandez
Hi Victor, On Sun, Mar 18, 2018 at 3:47 AM, Victor T wrote: > > *No. After bringing up one brick and before stopping the next one, you > need to be sure that there are no damaged files. You shouldn't reboot a > node if "gluster volume heal info" shows damaged
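A sketch of the check described, assuming a volume named gfsvol (hypothetical); every brick should report zero entries before the next node is taken down:

   # gluster volume heal gfsvol info
   # gluster volume heal gfsvol info | grep 'Number of entries'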