Re: [Gluster-users] Extremely low performance - am I doing something wrong?

2019-07-05 Thread wkmail
Well, if you are addressing me, that was the point of my post regarding the
original poster's complaint.


If his chosen test gets lousy or inconsistent results on non-Gluster setups,
then it's hard to complain about Gluster, setting aside the known Gluster
issues (i.e. network bandwidth, FUSE context switching, etc.).


There is more involved there.

And yes, my performance IS better inside the VMs, because even though you
use oflag=sync or oflag=direct, KVM/QEMU still caches data underneath
the qcow2 image.
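
One way to see and control that host-side caching, assuming a libvirt-managed
guest (the guest name "vm1" below is only a placeholder):

# show the cache mode configured for the guest's disks
virsh dumpxml vm1 | grep -A2 "driver name='qemu'"

# in the disk definition, cache='none' makes QEMU open the image with
# O_DIRECT, so writes are not absorbed by the host page cache:
#   <driver name='qemu' type='qcow2' cache='none'/>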


So this is his test on an active Gluster replica 2 + arbiter KVM setup, run
within a qcow2 image that is doing real work.


# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
oflag=sync; rm -f ./test.tmp; } done

10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0206228 s, 508 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0152477 s, 688 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0149008 s, 704 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.014808 s, 708 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0147982 s, 709 MB/s


On 7/5/2019 12:13 PM, Strahil wrote:

I don't know what you are trying to test, but I'm sure this test doesn't show
anything meaningful.
Have you tested with your apps' workload?

I have done your test and I get approx. 20 MB/s, but I can assure you that the
performance is way better in my VMs.

Best Regards,
Strahil Nikolov

On Jul 5, 2019 20:17, wkmail wrote:


On 7/4/2019 2:28 AM, Vladimir Melnik wrote:

So, the disk is OK and the network is OK, I'm 100% sure.

Seems to be a GlusterFS-related issue. Either something needs to be
tweaked or it's normal performance for a replica-3 cluster.

There is more to it than Gluster on that particular test.

I have some additional data points, since those numbers seemed low
given how long I have played with Gluster (my first install was 3.3).

So I ran that exact test on some locally mounted hard drive sets (mdadm
RAID1, spinning metal) on CentOS 7 (stock) and Ubuntu 18 (stock) and got the
following:

No Gluster involved.

# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10
oflag=sync; rm -f ./test.tmp; } done
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.0144 s, 10.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.791071 s, 13.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.832186 s, 12.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.80427 s, 13.0 MB/s
10+0 records in

That was reproducible over several machines with different CPUs that we
have in production.

Performance is about 20% better when 7200 rpm drives were involved or
when no RAID was involved, but never above 18 MB/s.

Performance is also MUCH better when I use oflag=direct (roughly 2x).

However, on a U18 VM host testbed machine that has a separate SSD swap
disk I get the following, even though I am writing the test.tmp file to
the metal.

# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10
oflag=sync; rm -f ./test.tmp; } done

10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0949153 s, 110 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0605883 s, 173 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0582863 s, 180 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0604369 s, 173 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0598746 s, 175 MB/s

So something else is going on with that particular test. Clearly,
buffers, elevators, cache, etc. count despite the oflag setting.

For the record, on the Gluster FUSE mount (2x + 1 arbiter volume) on that VM
host I do get reduced performance.

Part of that is due to the Gluster network being 2x1G using teaming on
that testbed, so there is a network bottleneck.

# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10
oflag=sync; rm -f ./test.tmp; } done
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.693351 s, 15.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.349881 s, 30.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.339699 s, 30.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.34202 s, 30.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.337904 s, 31.0 MB/s

So the Gluster FUSE mount negates the advantage of that SSD swap disk,
along with the obvious network bottleneck.

But clearly we all have to agree on the same valid test.

-wk

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Extremely low performance - am I doing something wrong?

2019-07-05 Thread Strahil
I don't know what you are trying to test, but I'm sure this test doesn't show
anything meaningful.
Have you tested with your apps' workload?

I have done your test and I get approx. 20 MB/s, but I can assure you that the
performance is way better in my VMs.

Best Regards,
Strahil Nikolov

On Jul 5, 2019 20:17, wkmail wrote:
>
>
> On 7/4/2019 2:28 AM, Vladimir Melnik wrote:
> > So, the disk is OK and the network is OK, I'm 100% sure.
> >
> > Seems to be a GlusterFS-related issue. Either something needs to be
> > tweaked or it's normal performance for a replica-3 cluster.
>
> There is more to it than Gluster on that particular test.
>
> I have some additional data points, since those numbers seemed low
> given how long I have played with Gluster (my first install was 3.3).
>
> So I ran that exact test on some locally mounted hard drive sets (mdadm
> RAID1, spinning metal) on CentOS 7 (stock) and Ubuntu 18 (stock) and got the
> following:
>
> No Gluster involved.
>
> # for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
> oflag=sync; rm -f ./test.tmp; } done
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB) copied, 1.0144 s, 10.3 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB) copied, 0.791071 s, 13.3 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB) copied, 0.832186 s, 12.6 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB) copied, 0.80427 s, 13.0 MB/s
> 10+0 records in
>
> That was reproducible over several machines with different CPUs that we 
> have in production.
>
> Performance is about 20% better when 7200 rpm drives were involved or
> when no RAID was involved, but never above 18 MB/s.
>
> Performance is also MUCH better when I use oflag=direct (roughly 2x).
>
> However, on a U18 VM host testbed machine that has a separate SSD swap
> disk I get the following, even though I am writing the test.tmp file to
> the metal.
>
> # for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
> oflag=sync; rm -f ./test.tmp; } done
>
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.0949153 s, 110 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.0605883 s, 173 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.0582863 s, 180 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.0604369 s, 173 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.0598746 s, 175 MB/s
>
> So something else is going on with that particular test. Clearly,
> buffers, elevators, cache, etc. count despite the oflag setting.
>
> For the record, on the Gluster FUSE mount (2x + 1 arbiter volume) on that VM
> host I do get reduced performance.
>
> Part of that is due to the Gluster network being 2x1G using teaming on
> that testbed, so there is a network bottleneck.
>
> # for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
> oflag=sync; rm -f ./test.tmp; } done
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.693351 s, 15.1 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.349881 s, 30.0 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.339699 s, 30.9 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.34202 s, 30.7 MB/s
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB, 10 MiB) copied, 0.337904 s, 31.0 MB/s
>
> So the Gluster FUSE mount negates the advantage of that SSD swap disk,
> along with the obvious network bottleneck.
>
> But clearly we all have to agree on the same valid test.
>
> -wk
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Extremely low performance - am I doing something wrong?

2019-07-05 Thread wkmail


On 7/4/2019 2:28 AM, Vladimir Melnik wrote:

So, the disk is OK and the network is OK, I'm 100% sure.

Seems to be a GlusterFS-related issue. Either something needs to be
tweaked or it's normal performance for a replica-3 cluster.


There is more to it than Gluster on that particular test.

I have some additional data points, since those numbers seemed low
given how long I have played with Gluster (my first install was 3.3).


So I ran that exact test on some locally mounted hard drive sets (mdadm
RAID1, spinning metal) on CentOS 7 (stock) and Ubuntu 18 (stock) and got the
following:


No Gluster involved.

# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
oflag=sync; rm -f ./test.tmp; } done

10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.0144 s, 10.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.791071 s, 13.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.832186 s, 12.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.80427 s, 13.0 MB/s
10+0 records in

That was reproducible over several machines with different CPUs that we 
have in production.


Performance is about 20% better when 7200 rpm drives were involved or
when no RAID was involved, but never above 18 MB/s.


Performance is also MUCH better when I use oflag=direct (roughly 2x).

However, on a U18 VM host testbed machine that has a separate SSD swap
disk I get the following, even though I am writing the test.tmp file to
the metal.


# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
oflag=sync; rm -f ./test.tmp; } done


10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0949153 s, 110 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0605883 s, 173 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0582863 s, 180 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0604369 s, 173 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0598746 s, 175 MB/s

So something else is going on with that particular test. Clearly,
buffers, elevators, cache, etc. count despite the oflag setting.
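
One way to take more of those layers out of the picture for this particular
test (a sketch; run as root, and O_DIRECT needs a filesystem that supports it):

# drop page cache, dentries and inodes first
sync; echo 3 > /proc/sys/vm/drop_caches

# same loop, but bypassing the page cache and fsync'ing at the end of each run
for i in {1..5}; do
    dd if=/dev/zero of=./test.tmp bs=1M count=10 oflag=direct conv=fsync
    rm -f ./test.tmp
done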


For the record, on the Gluster FUSE mount (2x + 1 arbiter volume) on that VM
host I do get reduced performance.


Part of that is due to the Gluster network being 2x1G using teaming on
that testbed, so there is a network bottleneck.
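
(A single dd stream will typically ride on only one of the teamed 1G links,
since one TCP connection hashes to one port. A quick way to check the team
state, assuming the device is named team0, which is only a guess here:)

# show the runner, ports and link state of the team device
teamdctl team0 state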


# for i in {1..5}; do { dd if=/dev/zero of=./test.tmp bs=1M count=10 
oflag=sync; rm -f ./test.tmp; } done

10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.693351 s, 15.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.349881 s, 30.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.339699 s, 30.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.34202 s, 30.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.337904 s, 31.0 MB/s

So the Gluster FUSE mount negates the advantage of that SSD swap disk,
along with the obvious network bottleneck.


But clearly we all have to agree on the same valid test.

-wk


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication does not send filesystem changes

2019-07-05 Thread Pasechny Alexey
Thank you for the reply, Kotresh!

I found the root of the issue. I started the geo-rep setup over and erased the
geo-replication.indexing on the Master.
Replication worked fine if the gluster volume is mounted natively or via the
nfs-ganesha server.
But when I tried to make a change on a brick locally, it did not go to the
Slave node.

# mount | grep zdata
zdata on /zdata type zfs (rw,nosuid,noexec,noatime,xattr,noacl)
zdata/cicd on /zdata/cicd type zfs (rw,nosuid,noexec,noatime,xattr,posixacl)

Then I mounted glusterfs locally, and local changes started to go to the Slave
node too.
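
(For reference, a minimal sketch of such a local FUSE mount; the mount point
/mnt/cicd is only a placeholder:)

# mount the volume through the gluster client instead of writing to the brick
mount -t glusterfs gfs-alfa1:/cicd /mnt/cicd
# writes made through this mount go through the changelog and get geo-replicated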

BR, Alexey

From: Kotresh Hiremath Ravishankar [mailto:khire...@redhat.com]
Sent: Friday, July 05, 2019 5:20 PM
To: Pasechny Alexey
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Geo-replication does not send filesystem changes

The session is moved from "history crawl" to "changelog crawl". After this 
point, there are no changelogs to be synced as per the logs.
Please check in the ".processing" directories whether there are any pending
changelogs to be synced at
"/var/lib/misc/gluster/gsyncd///.processing".
If there are no pending changelogs, then please check if the brick is up.

On Fri, Jul 5, 2019 at 5:29 PM Pasechny Alexey <pasec...@iskrauraltel.ru> wrote:
Hi everyone,

I have a problem with a native geo-replication setup. It successfully starts
and makes the initial sync, but does not send any filesystem data changes
afterward.
I'm using CentOS 7.6.1810 with the official glusterfs-6.3-1.el7 build on top of
ZFS on Linux.
It is a single Master node with a single brick, and the Slave node is the same.

# "gluster vol geo-rep status" command gives the following output
 MASTER NODE = gfs-alfa1
 MASTER VOL = cicd
 MASTER BRICK = /zdata/cicd/brick
 SLAVE USER = root
 SLAVE = gfs-alfa2::cicd
 SLAVE NODE = gfs-alfa2
 STATUS = Active
 CRAWL STATUS = Changelog Crawl
 LAST_SYNCED = 2019-07-05 12:08:17
 ENTRY = 0
 DATA = 0
 META = 0
 FAILURES = 0
 CHECKPOINT TIME = 2019-07-05 12:13:46
 CHECKPOINT COMPLETED = No

I enabled the DEBUG log level for gsyncd.log but did not get any error messages
from it. The full log is available here: https://pastebin.com/pXL4dBhZ
On both bricks I disabled the ctime feature because it is incompatible with old
versions of gfs clients; enabling this feature does not help either.

# gluster volume info
 Volume Name: cicd
 Type: Distribute
 Volume ID: 8f959a35-c7ab-4484-a1e8-9fa8e3a713b4
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 1
 Transport-type: tcp
 Bricks:
 Brick1: gfs-alfa1:/zdata/cicd/brick
 Options Reconfigured:
 nfs.disable: on
 transport.address-family: inet
 features.ctime: off
 geo-replication.indexing: on
 geo-replication.ignore-pid-check: on
 changelog.changelog: on

# gluster volume get cicd rollover-time
Option  Value
--  -
changelog.rollover-time 15

# gluster volume get cicd fsync-interval
Option  Value
--  -
changelog.fsync-interval5

Could someone help me with debugging this geo-rep setup?
Thank you!

BR, Alexey





This e-mail and any attachments may contain confidential and/or privileged 
information and is intended solely for the addressee. If you are not the 
intended recipient (or have received this e-mail in error) please notify the 
sender immediately and destroy this e-mail. Any unauthorised use, review, 
retransmissions, dissemination, copying or other use of this information by 
persons or entities other than the intended recipient is strictly prohibited.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


--
Thanks and Regards,
Kotresh H R
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication does not send filesystem changes

2019-07-05 Thread Kotresh Hiremath Ravishankar
The session is moved from "history crawl" to "changelog crawl". After this
point, there are no changelogs to be synced as per the logs.
Please check in the ".processing" directories whether there are any pending
changelogs to be synced at
"/var/lib/misc/gluster/gsyncd///.processing".
If there are no pending changelogs, then please check if the brick is up.
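
A sketch of those two checks (the actual session and brick directory names vary
per setup, so the angle-bracket parts below are placeholders; listing
/var/lib/misc/gluster/gsyncd/ shows the real names):

# look for pending changelogs for the session
ls /var/lib/misc/gluster/gsyncd/<session>/<brick>/.processing

# confirm the brick process is online
gluster volume status cicd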

On Fri, Jul 5, 2019 at 5:29 PM Pasechny Alexey wrote:

> Hi everyone,
>
> I have a problem with a native geo-replication setup. It successfully
> starts and makes the initial sync, but does not send any filesystem data
> changes afterward.
> I'm using CentOS 7.6.1810 with the official glusterfs-6.3-1.el7 build on top
> of ZFS on Linux.
> It is a single Master node with a single brick, and the Slave node is the same.
>
> # "gluster vol geo-rep status" command gives the following output
>  MASTER NODE = gfs-alfa1
>  MASTER VOL = cicd
>  MASTER BRICK = /zdata/cicd/brick
>  SLAVE USER = root
>  SLAVE = gfs-alfa2::cicd
>  SLAVE NODE = gfs-alfa2
>  STATUS = Active
>  CRAWL STATUS = Changelog Crawl
>  LAST_SYNCED = 2019-07-05 12:08:17
>  ENTRY = 0
>  DATA = 0
>  META = 0
>  FAILURES = 0
>  CHECKPOINT TIME = 2019-07-05 12:13:46
>  CHECKPOINT COMPLETED = No
>
> I enabled the DEBUG log level for gsyncd.log but did not get any error
> messages from it. The full log is available here:
> https://pastebin.com/pXL4dBhZ
> On both bricks I disabled the ctime feature because it is incompatible with
> old versions of gfs clients; enabling this feature does not help either.
>
> # gluster volume info
>  Volume Name: cicd
>  Type: Distribute
>  Volume ID: 8f959a35-c7ab-4484-a1e8-9fa8e3a713b4
>  Status: Started
>  Snapshot Count: 0
>  Number of Bricks: 1
>  Transport-type: tcp
>  Bricks:
>  Brick1: gfs-alfa1:/zdata/cicd/brick
>  Options Reconfigured:
>  nfs.disable: on
>  transport.address-family: inet
>  features.ctime: off
>  geo-replication.indexing: on
>  geo-replication.ignore-pid-check: on
>  changelog.changelog: on
>
> # gluster volume get cicd rollover-time
> Option  Value
> --  -
> changelog.rollover-time 15
>
> # gluster volume get cicd fsync-interval
> Option  Value
> --  -
> changelog.fsync-interval5
>
> Could someone help me with debugging this geo-rep setup?
> Thank you!
>
> BR, Alexey
>
>
> 
>
>
> This e-mail and any attachments may contain confidential and/or privileged
> information and is intended solely for the addressee. If you are not the
> intended recipient (or have received this e-mail in error) please notify
> the sender immediately and destroy this e-mail. Any unauthorised use,
> review, retransmissions, dissemination, copying or other use of this
> information by persons or entities other than the intended recipient is
> strictly prohibited.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Thanks and Regards,
Kotresh H R
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Geo-replication does not send filesystem changes

2019-07-05 Thread Pasechny Alexey
Hi everyone,

I have a problem with a native geo-replication setup. It successfully starts
and makes the initial sync, but does not send any filesystem data changes
afterward.
I'm using CentOS 7.6.1810 with the official glusterfs-6.3-1.el7 build on top of
ZFS on Linux.
It is a single Master node with a single brick, and the Slave node is the same.

# "gluster vol geo-rep status" command gives the following output
 MASTER NODE = gfs-alfa1
 MASTER VOL = cicd
 MASTER BRICK = /zdata/cicd/brick
 SLAVE USER = root
 SLAVE = gfs-alfa2::cicd
 SLAVE NODE = gfs-alfa2
 STATUS = Active
 CRAWL STATUS = Changelog Crawl
 LAST_SYNCED = 2019-07-05 12:08:17
 ENTRY = 0
 DATA = 0
 META = 0
 FAILURES = 0
 CHECKPOINT TIME = 2019-07-05 12:13:46
 CHECKPOINT COMPLETED = No

I enabled the DEBUG log level for gsyncd.log but did not get any error messages
from it. The full log is available here: https://pastebin.com/pXL4dBhZ
On both bricks I disabled the ctime feature because it is incompatible with old
versions of gfs clients; enabling this feature does not help either.

# gluster volume info
 Volume Name: cicd
 Type: Distribute
 Volume ID: 8f959a35-c7ab-4484-a1e8-9fa8e3a713b4
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 1
 Transport-type: tcp
 Bricks:
 Brick1: gfs-alfa1:/zdata/cicd/brick
 Options Reconfigured:
 nfs.disable: on
 transport.address-family: inet
 features.ctime: off
 geo-replication.indexing: on
 geo-replication.ignore-pid-check: on
 changelog.changelog: on

# gluster volume get cicd rollover-time
Option  Value
--  -
changelog.rollover-time 15

# gluster volume get cicd fsync-interval
Option  Value
--  -
changelog.fsync-interval5

Could someone help me with debugging this geo-rep setup?
Thank you!

BR, Alexey





This e-mail and any attachments may contain confidential and/or privileged 
information and is intended solely for the addressee. If you are not the 
intended recipient (or have received this e-mail in error) please notify the 
sender immediately and destroy this e-mail. Any unauthorised use, review, 
retransmissions, dissemination, copying or other use of this information by 
persons or entities other than the intended recipient is strictly prohibited.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Isolated Client faster than Server+Client

2019-07-05 Thread Adam C
Hi, I have been doing some testing of GlusterFS. I have 2 servers running
Gluster 6.3 (although the same happens in version 3). One server has 32 GB of
RAM, the other 4 GB. The volume type is Replicated.

On both servers I also have the volume mounted using the FUSE client, and
when I run a small copy of 100 x 1 kB files it takes about 20 seconds to
complete.
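
(For reference, a test along those lines might look like the sketch below; the
mount point /mnt/gv0 and the exact file size are assumptions, not the commands
used above.)

time for i in $(seq 1 100); do
    dd if=/dev/zero of=/mnt/gv0/smallfile.$i bs=1k count=1 2>/dev/null
done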

The strange thing is that if I spin up a 3rd server and set it up as a client
on the volume, the same test completes in 2.7 seconds, almost 7.5 times faster.

Are there any known reasons for this behaviour?
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Parallel process hang on gluster volume

2019-07-05 Thread nico
I compared 4.1.5 and 3.12.15; there is no problem with the 3.12.15 client.

Regards, 
Nicolas. 


De: "Nithya Balachandran"  
À: n...@furyweb.fr 
Cc: "gluster-users"  
Envoyé: Vendredi 5 Juillet 2019 08:09:52 
Objet: Re: [Gluster-users] Parallel process hang on gluster volume 

Did you see this behaviour with previous Gluster versions? 
Regards, 
Nithya 

On Wed, 3 Jul 2019 at 21:41, <n...@furyweb.fr> wrote:


Am I alone in having this problem?

- Original Message -
From: n...@furyweb.fr
To: "gluster-users" <gluster-users@gluster.org>
Sent: Friday, June 21, 2019 09:48:47
Subject: [Gluster-users] Parallel process hang on gluster volume

I encountered an issue on production servers using GlusterFS servers 5.1 and
clients 4.1.5 when several processes write at the same time on a gluster
volume.

With more than 48 processes writing on the volume at the same time, they are
blocked in D state (uninterruptible sleep). I guess some volume settings have
to be tuned, but I can't figure out which.

The client is using op-version 40100 on this volume.
Below are the volume info, volume settings and ps output for the blocked
processes.
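
(A sketch of the kind of parallel write load described above, plus a quick way
to list D-state processes; the mount point and the counts are assumptions:)

# start 64 concurrent writers on the gluster mount
for i in $(seq 1 64); do
    dd if=/dev/zero of=/mnt/glustervol/hang.$i bs=1M count=100 oflag=sync &
done

# from another shell: list processes stuck in uninterruptible sleep (D state)
ps -eo state,pid,wchan:32,cmd | awk '$1 == "D"'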

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Parallel process hang on gluster volume

2019-07-05 Thread Nithya Balachandran
Did you see this behaviour with previous Gluster versions?

Regards,
Nithya

On Wed, 3 Jul 2019 at 21:41,  wrote:

> Am I alone in having this problem?
>
> - Original Message -
> From: n...@furyweb.fr
> To: "gluster-users" 
> Sent: Friday, June 21, 2019 09:48:47
> Subject: [Gluster-users] Parallel process hang on gluster volume
>
> I encountered an issue on production servers using GlusterFS servers 5.1
> and clients 4.1.5 when several processes write at the same time on a gluster
> volume.
>
> With more than 48 processes writing on the volume at the same time, they are
> blocked in D state (uninterruptible sleep). I guess some volume settings
> have to be tuned, but I can't figure out which.
>
> The client is using op-version 40100 on this volume.
> Below are the volume info, volume settings and ps output for the blocked
> processes.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users