[Gluster-users] Community Meeting Host Needed, 16 Jan at 15:00 UTC

2019-01-15 Thread Amye Scavarda
Your friendly neighborhood community meeting host has a conflict for
tomorrow's meeting. Anyone want to take it on?
https://bit.ly/gluster-community-meetings has the agenda.
Time:
  - 15:00  UTC to 15:30 UTC
  - or in your local shell/terminal: `date -d "15:00 UTC"`
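  - for example, with GNU date (a rough sketch; BSD/macOS date needs different
    flags): `date -d "2019-01-16 15:00 UTC"` prints the start time in your
    local timezone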

Thanks!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [External] Too good to be true speed improvements?

2019-01-15 Thread Diego Remolina
This is what I came up with:

< Corresponds to currently running 4.1.6
> Corresponds to old 3.10.12

+ diff /var/lib/glusterd/vols/export/export.10.0.1.6.bricks-hdds-brick.vol
/var/lib/glusterd-20190112/vols/export/export.10.0.1.6.bricks-hdds-brick.vol

3d2
< option shared-brick-count 0
45d43
< option bitrot disable
62d59
< option worm-files-deletable on
93,98d89
< volume export-selinux
< type features/selinux
< option selinux on
< subvolumes export-io-threads
< end-volume
<
108c99
< subvolumes export-selinux
---
> subvolumes export-io-threads
128a120
> option timeout 0
150d141
< option transport.listen-backlog 1024
158a150
> option ping-timeout 42


+ diff /var/lib/glusterd/vols/export/export.10.0.1.7.bricks-hdds-brick.vol
/var/lib/glusterd-20190112/vols/export/export.10.0.1.7.bricks-hdds-brick.vol

3d2
< option shared-brick-count 1
45d43
< option bitrot disable
62d59
< option worm-files-deletable on
93,98d89
< volume export-selinux
< type features/selinux
< option selinux on
< subvolumes export-io-threads
< end-volume
<
108c99
< subvolumes export-selinux
---
> subvolumes export-io-threads
128a120
> option timeout 0
150d141
< option transport.listen-backlog 1024
158a150
> option ping-timeout 42


+ diff /var/lib/glusterd/vols/export/export.tcp-fuse.vol
/var/lib/glusterd-20190112/vols/export/export.tcp-fuse.vol
40d39
< option force-migration off
75d73
< option cache-invalidation on


+ diff /var/lib/glusterd/vols/export/trusted-export.tcp-fuse.vol
/var/lib/glusterd-20190112/vols/export/trusted-export.tcp-fuse.vol
44d43
< option force-migration off
79d77
< option cache-invalidation on

All other volume files were identical.
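
For reference, here is a rough loop (bash sketch, assuming the backup tree
sits at /var/lib/glusterd-20190112 as above) that reproduces all of these
comparisons in one pass:

  # compare each generated volfile against its backed-up copy
  for f in /var/lib/glusterd/vols/export/*.vol; do
      echo "+ diff $f ${f/glusterd/glusterd-20190112}"
      diff "$f" "${f/glusterd/glusterd-20190112}"
  done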

HTH,

Diego


On Tue, Jan 15, 2019 at 2:04 PM Davide Obbi  wrote:

> I think you can find the volume options by running grep -R option
> /var/lib/glusterd/vols/; the .vol files show the options
>
> On Tue, Jan 15, 2019 at 2:28 PM Diego Remolina  wrote:
>
>> Hi Davide,
>>
>> The options information was already provided in a prior e-mail; see the
>> termbin.com link for the options of the volume after the 4.1.6 upgrade.
>>
>> The gluster options set on the volume are:
>> https://termbin.com/yxtd
>>
>> This is the other piece:
>>
>> # gluster v info export
>>
>> Volume Name: export
>> Type: Replicate
>> Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.0.1.7:/bricks/hdds/brick
>> Brick2: 10.0.1.6:/bricks/hdds/brick
>> Options Reconfigured:
>> performance.stat-prefetch: on
>> performance.cache-min-file-size: 0
>> network.inode-lru-limit: 65536
>> performance.cache-invalidation: on
>> features.cache-invalidation: on
>> performance.md-cache-timeout: 600
>> features.cache-invalidation-timeout: 600
>> performance.cache-samba-metadata: on
>> transport.address-family: inet
>> server.allow-insecure: on
>> performance.cache-size: 10GB
>> cluster.server-quorum-type: server
>> nfs.disable: on
>> performance.io-thread-count: 64
>> performance.io-cache: on
>> cluster.lookup-optimize: on
>> cluster.readdir-optimize: on
>> server.event-threads: 5
>> client.event-threads: 5
>> performance.cache-max-file-size: 256MB
>> diagnostics.client-log-level: INFO
>> diagnostics.brick-log-level: INFO
>> cluster.server-quorum-ratio: 51%
>>
>> Now I did create a backup of /var/lib/glusterd so if you tell me how to
>> pull information from there to compare I can do it.
>>
>> I compared the file /var/lib/glusterd/vols/export/info and it is the same
>> in both, though entries are in different order.
>>
>> Diego
>>
>>
>>
>>
>> On Tue, Jan 15, 2019 at 5:03 AM Davide Obbi 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jan 15, 2019 at 2:18 AM Diego Remolina 
>>> wrote:
>>>
 Dear all,

 I was running gluster 3.10.12 on a pair of servers and recently
 upgraded to 4.1.6. There is a cron job that runs nightly in one machine,
 which rsyncs the data on the servers over to another machine for backup
 purposes. The rsync operation runs on one of the gluster servers, which
 mounts the gluster volume via fuse on /export.

 When using 3.10.12, this process would start at 8:00PM nightly, and
 usually end up at around 4:30AM when the servers had been freshly rebooted.
 From this point, things would start taking a bit longer and stabilize
 ending at around 7-9AM depending on actual file changes and at some point
 the servers would start eating up so much ram (up to 30GB) and I would have
 to reboot them to bring things back to normal as the file system would
 become extremely slow (perhaps the memory leak I have read was present on
 3.10.x).

 After upgrading to 4.1.6 over the weekend, I was shocked to see the
 rsync process finish in about 1 hour and 26 minutes. This is compared to 8
 hours 30 mins with the older version. This is a nice speed-up; however, I

Re: [Gluster-users] [External] Too good to be true speed improvements?

2019-01-15 Thread Davide Obbi
I think you can find the volume options by running grep -R option
/var/lib/glusterd/vols/; the .vol files show the options
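
For example, something like this (just a rough sketch, assuming the volume is
named "export") prints the options actually baked into the generated volfiles:

  grep -Rh '^ *option ' /var/lib/glusterd/vols/export/*.vol | sort -u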

On Tue, Jan 15, 2019 at 2:28 PM Diego Remolina  wrote:

> Hi Davide,
>
> The options information was already provided in a prior e-mail; see the
> termbin.com link for the options of the volume after the 4.1.6 upgrade.
>
> The gluster options set on the volume are:
> https://termbin.com/yxtd
>
> This is the other piece:
>
> # gluster v info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.1.7:/bricks/hdds/brick
> Brick2: 10.0.1.6:/bricks/hdds/brick
> Options Reconfigured:
> performance.stat-prefetch: on
> performance.cache-min-file-size: 0
> network.inode-lru-limit: 65536
> performance.cache-invalidation: on
> features.cache-invalidation: on
> performance.md-cache-timeout: 600
> features.cache-invalidation-timeout: 600
> performance.cache-samba-metadata: on
> transport.address-family: inet
> server.allow-insecure: on
> performance.cache-size: 10GB
> cluster.server-quorum-type: server
> nfs.disable: on
> performance.io-thread-count: 64
> performance.io-cache: on
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> server.event-threads: 5
> client.event-threads: 5
> performance.cache-max-file-size: 256MB
> diagnostics.client-log-level: INFO
> diagnostics.brick-log-level: INFO
> cluster.server-quorum-ratio: 51%
>
> Now I did create a backup of /var/lib/glusterd so if you tell me how to
> pull information from there to compare I can do it.
>
> I compared the file /var/lib/glusterd/vols/export/info and it is the same
> in both, though entries are in different order.
>
> Diego
>
>
>
>
> On Tue, Jan 15, 2019 at 5:03 AM Davide Obbi 
> wrote:
>
>>
>>
>> On Tue, Jan 15, 2019 at 2:18 AM Diego Remolina 
>> wrote:
>>
>>> Dear all,
>>>
>>> I was running gluster 3.10.12 on a pair of servers and recently upgraded
>>> to 4.1.6. There is a cron job that runs nightly in one machine, which
>>> rsyncs the data on the servers over to another machine for backup purposes.
>>> The rsync operation runs on one of the gluster servers, which mounts the
>>> gluster volume via fuse on /export.
>>>
>>> When using 3.10.12, this process would start at 8:00PM nightly, and
>>> usually end up at around 4:30AM when the servers had been freshly rebooted.
>>> From this point, things would start taking a bit longer and stabilize
>>> ending at around 7-9AM depending on actual file changes and at some point
>>> the servers would start eating up so much ram (up to 30GB) and I would have
>>> to reboot them to bring things back to normal as the file system would
>>> become extremely slow (perhaps the memory leak I have read was present on
>>> 3.10.x).
>>>
>>> After upgrading to 4.1.6 over the weekend, I was shocked to see the
>>> rsync process finish in about 1 hour and 26 minutes. This is compared to 8
>>> hours 30 mins with the older version. This is a nice speed-up; however, I
>>> can only ask myself what has changed so drastically that this process is
>>> now so fast. Have there really been improvements in 4.1.6 that could speed
>>> this up so dramatically? In both of my test cases, there would not really
>>> have been much to copy via rsync, given that the fresh reboots are done on
>>> Saturday after the sync from the previous day has finished.
>>>
>>> In general, the servers (which are accessed via samba for windows
>>> clients) are much faster and responsive since the update to 4.1.6. Tonight
>>> I will have the first rsync run which will actually have to copy the day's
>>> changes and will have another point of comparison.
>>>
>>> I am still using fuse mounts for Samba, due to prior problems with
>>> vfs=gluster, which are currently present in Samba 4.8.3-4, and already
>>> documented in bugs, for which patches exist, but no official updated samba
>>> packages have been released yet. Since I was going from 3.10.12 to 4.1.6 I
>>> also did not want to change other things to make sure I could track any
>>> issues just related to the change in gluster versions and eliminate other
>>> complexity.
>>>
>>> The file system currently has about 16TB of data in
>>> 5142816 files and 696544 directories
>>>
>>> I've just run the following code to count files and dirs, and it took
>>> 67 minutes 38.957 seconds to complete on this gluster volume:
>>> https://github.com/ChristopherSchultz/fast-file-count
>>>
>>> # time ( /root/sbin/dircnt /export )
>>> /export contains 5142816 files and 696544 directories
>>>
>>> real    67m38.957s
>>> user    0m6.225s
>>> sys     0m48.939s
>>>
>>> The gluster options set on the volume are:
>>> https://termbin.com/yxtd
>>>
>>> # gluster v status export
>>> Status of volume: export
>>> Gluster process TCP Port  RDMA Port  Online
>>> Pid
>>>
>>> --

Re: [Gluster-users] [External] Too good to be true speed improvements?

2019-01-15 Thread Diego Remolina
Hi Davide,

The options information was already provided in a prior e-mail; see the
termbin.com link for the options of the volume after the 4.1.6 upgrade.

The gluster options set on the volume are:
https://termbin.com/yxtd

This is the other piece:

# gluster v info export

Volume Name: export
Type: Replicate
Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.1.7:/bricks/hdds/brick
Brick2: 10.0.1.6:/bricks/hdds/brick
Options Reconfigured:
performance.stat-prefetch: on
performance.cache-min-file-size: 0
network.inode-lru-limit: 65536
performance.cache-invalidation: on
features.cache-invalidation: on
performance.md-cache-timeout: 600
features.cache-invalidation-timeout: 600
performance.cache-samba-metadata: on
transport.address-family: inet
server.allow-insecure: on
performance.cache-size: 10GB
cluster.server-quorum-type: server
nfs.disable: on
performance.io-thread-count: 64
performance.io-cache: on
cluster.lookup-optimize: on
cluster.readdir-optimize: on
server.event-threads: 5
client.event-threads: 5
performance.cache-max-file-size: 256MB
diagnostics.client-log-level: INFO
diagnostics.brick-log-level: INFO
cluster.server-quorum-ratio: 51%

Now I did create a backup of /var/lib/glusterd so if you tell me how to
pull information from there to compare I can do it.

I compared the file /var/lib/glusterd/vols/export/info and it is the same
in both, though entries are in different order.
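
One rough way I could do the comparison (assuming the backup sits at
/var/lib/glusterd-20190112; the second diff re-checks the info file while
ignoring line order) would be:

  diff -ru /var/lib/glusterd-20190112/vols/export /var/lib/glusterd/vols/export
  diff <(sort /var/lib/glusterd-20190112/vols/export/info) \
       <(sort /var/lib/glusterd/vols/export/info)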

Diego




On Tue, Jan 15, 2019 at 5:03 AM Davide Obbi  wrote:

>
>
> On Tue, Jan 15, 2019 at 2:18 AM Diego Remolina  wrote:
>
>> Dear all,
>>
>> I was running gluster 3.10.12 on a pair of servers and recently upgraded
>> to 4.1.6. There is a cron job that runs nightly in one machine, which
>> rsyncs the data on the servers over to another machine for backup purposes.
>> The rsync operation runs on one of the gluster servers, which mounts the
>> gluster volume via fuse on /export.
>>
>> When using 3.10.12, this process would start at 8:00PM nightly, and
>> usually end up at around 4:30AM when the servers had been freshly rebooted.
>> From this point, things would start taking a bit longer and stabilize
>> ending at around 7-9AM depending on actual file changes and at some point
>> the servers would start eating up so much ram (up to 30GB) and I would have
>> to reboot them to bring things back to normal as the file system would
>> become extremely slow (perhaps the memory leak I have read was present on
>> 3.10.x).
>>
>> After upgrading to 4.1.6 over the weekend, I was shocked to see the rsync
>> process finish in about 1 hour and 26 minutes. This is compared to 8 hours
>> 30 mins with the older version. This is a nice speed-up; however, I can
>> only ask myself what has changed so drastically that this process is now so
>> fast. Have there really been improvements in 4.1.6 that could speed this up
>> so dramatically? In both of my test cases, there would not really have been
>> much to copy via rsync, given that the fresh reboots are done on Saturday
>> after the sync from the previous day has finished.
>>
>> In general, the servers (which are accessed via samba for windows
>> clients) are much faster and responsive since the update to 4.1.6. Tonight
>> I will have the first rsync run which will actually have to copy the day's
>> changes and will have another point of comparison.
>>
>> I am still using fuse mounts for Samba, due to prior problems with
>> vfs=gluster, which are currently present in Samba 4.8.3-4, and already
>> documented in bugs, for which patches exist, but no official updated samba
>> packages have been released yet. Since I was going from 3.10.12 to 4.1.6 I
>> also did not want to change other things to make sure I could track any
>> issues just related to the change in gluster versions and eliminate other
>> complexity.
>>
>> The file system currently has about 16TB of data in
>> 5142816 files and 696544 directories
>>
>> I've just run the following code to count files and dirs, and it took
>> 67 minutes 38.957 seconds to complete on this gluster volume:
>> https://github.com/ChristopherSchultz/fast-file-count
>>
>> # time ( /root/sbin/dircnt /export )
>> /export contains 5142816 files and 696544 directories
>>
>> real    67m38.957s
>> user    0m6.225s
>> sys     0m48.939s
>>
>> The gluster options set on the volume are:
>> https://termbin.com/yxtd
>>
>> # gluster v status export
>> Status of volume: export
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>>
>> --
>> Brick 10.0.1.7:/bricks/hdds/brick   49157 0  Y
>>  13986
>> Brick 10.0.1.6:/bricks/hdds/brick   49153 0  Y
>>  9953
>> Self-heal Daemon on localhost   N/A   N/AY
>>  21934
>> Self-heal Daemon on 10.0.1.5N/A   N/AY
>>  4598
>> Self-heal Daemon on 10.0.1.6  

Re: [Gluster-users] [External] Too good to be true speed improvements?

2019-01-15 Thread Davide Obbi
On Tue, Jan 15, 2019 at 2:18 AM Diego Remolina  wrote:

> Dear all,
>
> I was running gluster 3.10.12 on a pair of servers and recently upgraded
> to 4.1.6. There is a cron job that runs nightly in one machine, which
> rsyncs the data on the servers over to another machine for backup purposes.
> The rsync operation runs on one of the gluster servers, which mounts the
> gluster volume via fuse on /export.
>
> When using 3.10.12, this process would start at 8:00PM nightly, and
> usually end up at around 4:30AM when the servers had been freshly rebooted.
> From this point, things would start taking a bit longer and stabilize
> ending at around 7-9AM depending on actual file changes and at some point
> the servers would start eating up so much ram (up to 30GB) and I would have
> to reboot them to bring things back to normal as the file system would
> become extremely slow (perhaps the memory leak I have read was present on
> 3.10.x).
>
> After upgrading to 4.1.6 over the weekend, I was shocked to see the rsync
> process finish in about 1 hour and 26 minutes. This is compared to 8 hours
> 30 mins with the older version. This is a nice speed-up; however, I can
> only ask myself what has changed so drastically that this process is now so
> fast. Have there really been improvements in 4.1.6 that could speed this up
> so dramatically? In both of my test cases, there would not really have been
> much to copy via rsync, given that the fresh reboots are done on Saturday
> after the sync from the previous day has finished.
>
> In general, the servers (which are accessed via samba for windows clients)
> are much faster and responsive since the update to 4.1.6. Tonight I will
> have the first rsync run which will actually have to copy the day's changes
> and will have another point of comparison.
>
> I am still using fuse mounts for Samba, due to prior problems with
> vfs=gluster, which are currently present in Samba 4.8.3-4, and already
> documented in bugs, for which patches exist, but no official updated samba
> packages have been released yet. Since I was going from 3.10.12 to 4.1.6 I
> also did not want to change other things to make sure I could track any
> issues just related to the change in gluster versions and eliminate other
> complexity.
>
> The file system currently has about 16TB of data in
> 5142816 files and 696544 directories
>
> I've just run the following code to count files and dirs, and it took
> 67 minutes 38.957 seconds to complete on this gluster volume:
> https://github.com/ChristopherSchultz/fast-file-count
>
> # time ( /root/sbin/dircnt /export )
> /export contains 5142816 files and 696544 directories
>
> real    67m38.957s
> user    0m6.225s
> sys     0m48.939s
>
> The gluster options set on the volume are:
> https://termbin.com/yxtd
>
> # gluster v status export
> Status of volume: export
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> --
> Brick 10.0.1.7:/bricks/hdds/brick   49157 0  Y
>  13986
> Brick 10.0.1.6:/bricks/hdds/brick   49153 0  Y
>  9953
> Self-heal Daemon on localhost   N/A   N/AY
>  21934
> Self-heal Daemon on 10.0.1.5N/A   N/AY
>  4598
> Self-heal Daemon on 10.0.1.6N/A   N/AY
>  14485
>
> Task Status of Volume export
>
> --
> There are no active volume tasks
>
> Truth, there is a 3rd server here, but no bricks on it.
>
> Thoughts?
>
> Diego
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users


Hi Diego,

Besides the actual improvements made in the code, I think new releases might
set some volume options by default that previously had different values. It
would have been interesting to diff "gluster volume get <volname> all" before
and after the upgrade. Out of curiosity, and because I am trying to figure out
volume options for rsync-style workloads, can you share that command's output
anyway, along with gluster volume info?
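
For example (a rough sketch; the pre-upgrade snapshot would have had to be
saved in advance, and the file names here are just illustrative):

  gluster volume get export all > /root/volume-get-all-4.1.6.txt
  diff /root/volume-get-all-3.10.12.txt /root/volume-get-all-4.1.6.txt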

thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users