Re: [Gluster-users] GlusterFS absolute max. storage size in total?

2017-04-19 Thread Mohamed Pakkeer
Hi Amar,

Currently, we are running a 40-node cluster, and each node has
36 * 6TB (210TB). As of now we have not faced any issues with read and write
performance, except for folder listing and disk-failure healing. We are planning
to start another cluster with 36 * 10TB (360TB) per node. What kind of problems
will we face if we go with 36 * 10TB (360TB) per node? The planned 360TB per node
is far beyond the recommended 64TB. We are using GlusterFS with a disperse volume
for video archival.

Regards
K. Mohamed Pakkeer

On 20-Apr-2017 9:00 AM, "Amar Tumballi"  wrote:

>
> On Wed, 19 Apr 2017 at 11:07 PM, Peter B.  wrote:
>
>> Hello,
>>
>> Could you tell me what the technical maximum capacity of a GlusterFS
>> storage system is?
>> I cannot find any official statement on this...
>>
>> Somewhere it says "several Petabytes", but that's a bit vague.
>
>
> It is mostly vague because the total storage depends on two things: how much
> per-node storage you can get, and how many nodes you have in the cluster.
>
> It scales comfortably to 128 nodes, and each node can hold 64TB of storage,
> making it 8PB.
>
>
>
>> I'd need it to argue for GlusterFS in comparison to other systems.
>
>
> Hope the above info helps you. BTW, we are targeting more scalability-related
> improvements in Gluster 4.0, so the number should be much larger from next
> year.
>
> -Amar
>
>
>
>>
>> Thank you very much in advance,
>> Peter B.
>>
> --
> Amar Tumballi (amarts)
>

Re: [Gluster-users] GlusterFS absolute max. storage size in total?

2017-04-19 Thread Amar Tumballi
On Wed, 19 Apr 2017 at 11:07 PM, Peter B.  wrote:

> Hello,
>
> Could you tell me what the technical maximum capacity of a GlusterFS
> storage system is?
> I cannot find any official statement on this...
>
> Somewhere it says "several Petabytes", but that's a bit vague.


It is mostly vague because the total storage depends on two things: how much
per-node storage you can get, and how many nodes you have in the cluster.

It scales comfortably to 128 nodes, and each node can hold 64TB of storage,
making it 8PB.
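As a quick back-of-the-envelope check of that figure:

  # 128 nodes x 64 TB per node
  echo $((128 * 64))   # 8192 TB of raw capacity, i.e. roughly 8 PB

Usable capacity will of course be lower once replication or disperse
redundancy is taken into account.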



> I'd need it to argue for GlusterFS in comparison to other systems.


Hope the above info helps you. BTW, we are targeting more scalability-related
improvements in Gluster 4.0, so the number should be much larger from next
year.

-Amar



>
> Thank you very much in advance,
> Peter B.
>
-- 
Amar Tumballi (amarts)

[Gluster-users] Issue installing Gluster on CentOS 7.2

2017-04-19 Thread Eric K. Miller
We have a requirement to stay on CentOS 7.2 for a while (due to some
bugs in 7.3 components related to libvirt), so we have the yum repos
pinned to CentOS 7.2, not 7.3.  When installing Gluster (the latest version
in the repo, which turns out to be 3.8.10), the install fails on a dependency
on firewalld-filesystem.  From what I have read, firewalld-filesystem is only
available in CentOS 7.3.

 

Has anyone else run into this?  Is there a workaround?  Or is this a bug,
and should we consider installing an earlier version of Gluster?
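For what it's worth, a couple of standard yum queries should confirm where the
dependency is supposed to come from (a sketch; the output depends on which
repositories are enabled):

  # which repository, if any, provides the missing dependency?
  yum provides firewalld-filesystem

  # which GlusterFS server builds are visible with the repos pinned to 7.2?
  yum --showduplicates list glusterfs-server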

 

I'm new to this list, so if there is a better list to post this issue
to, please let me know.

 

Thanks!

 

Eric

 


Re: [Gluster-users] current Version

2017-04-19 Thread Kaleb S. KEITHLEY
On 04/19/2017 02:49 PM, lemonni...@ulrar.net wrote:
> Hi,
> 
> Look there : 
> https://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.11/Debian/jessie/

Those are amd64/x86_64 only — they won't work on a Raspberry Pi.

> 
> On Wed, Apr 19, 2017 at 06:54:37AM +0200, Mario Roeber wrote:
>> Hello Everyone, 
>>
>> my name is Mario and I have been using GlusterFS for around 1 year at home
>> with some Raspberry Pis and USB HDs. My question is where I can find the
>> current 3.8.11 install .deb for Debian/Jessie.
>> Maybe someone knows.

The last time I built packages for Raspbian Jessie was over a year and a
half ago (glusterfs-3.7.4). Nobody AFAIK has asked for newer packages.
TBPH it seemed pretty obvious that (almost) nobody was using them.

But I think you could easily take the 3.8.x or 3.10.x .dsc files [1] on
download.gluster.org and build your own packages with them.

And/or you can use the source from [2] and the packaging bits from [3]
and build with those.

[1] E.g.
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.1/Debian/stretch/apt/pool/main/g/glusterfs/glusterfs_3.10.1-1.dsc
[2] E.g.
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.1/glusterfs-3.10.1.tar.gz
[3] https://github.com/gluster/glusterfs-debian
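A minimal sketch of the .dsc route, using the example URL from [1] (run on the
Pi or in an armhf chroot; dget is in the devscripts package, and
dpkg-buildpackage will list any missing build dependencies before it starts):

  # download the .dsc and the files it references, then unpack the source tree
  dget -u -d https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.1/Debian/stretch/apt/pool/main/g/glusterfs/glusterfs_3.10.1-1.dsc
  dpkg-source -x glusterfs_3.10.1-1.dsc
  cd glusterfs-3.10.1

  # build unsigned binary packages for the local architecture
  dpkg-buildpackage -us -uc -b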

-- 

Kaleb




Re: [Gluster-users] current Version

2017-04-19 Thread lemonnierk
Hi,

Look there : 
https://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.11/Debian/jessie/

On Wed, Apr 19, 2017 at 06:54:37AM +0200, Mario Roeber wrote:
> Hello Everyone, 
> 
> my name is Mario and I have been using GlusterFS for around 1 year at home
> with some Raspberry Pis and USB HDs. My question is where I can find the
> current 3.8.11 install .deb for Debian/Jessie.
> Maybe someone knows.
> 
> Thanks for the help.
> 
> 
> Mario Roeber
> er...@port-x.de
> 
> Would you like to exchange encrypted emails with me? Here is my
> public key.
> 
> 


>  
> 





[Gluster-users] GlusterFS absolute max. storage size in total?

2017-04-19 Thread Peter B.
Hello,

Could you tell me what the technical maximum capacity of a GlusterFS
storage system is?
I cannot find any official statement on this...

Somewhere it says "several Petabytes", but that's a bit vague.

I'd need it to argue for GlusterFS in comparison to other systems.


Thank you very much in advance,
Peter B.



[Gluster-users] Bugfix release GlusterFS 3.8.11 has landed

2017-04-19 Thread Niels de Vos
[repost from 
http://blog.nixpanic.net/2017/04/bugfix-release-glusterfs-3811-has-landed.html]

Bugfix release GlusterFS 3.8.11 has landed

   Another month has passed, and more bugs have been squashed in the
3.8 release. Packages should be available or arrive soon at the usual
repositories. The next 3.8 update is expected to be made available just
after the 10th of May.
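On an RPM-based system the update itself is the usual routine once the
packages land (a sketch; update one server at a time and follow the normal
rolling-upgrade steps for your setup):

  gluster --version        # confirm the currently installed version
  yum update 'glusterfs*'  # pull in 3.8.11 once it reaches your repository

After updating, restart the Gluster services on that node before moving on
to the next one.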

Release notes for Gluster 3.8.11

   This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2,
3.8.3, 3.8.4, 3.8.5, 3.8.6, 3.8.7, 3.8.8, 3.8.9 and 3.8.10 contain a
listing of all the new features that were added and bugs fixed in the
GlusterFS 3.8 stable release.

Bugs addressed

   A total of 15 patches have been merged, addressing 13 bugs:
 * #1422788: [Replicate] "RPC call decoding failed" leading to IO hang & 
mount inaccessible
 * #1427390: systemic testing: seeing lot of ping time outs which would 
lead to splitbrains
 * #1430845: build/packaging: Debian and Ubuntu don't have /usr/libexec/; 
results in bad packages
 * #1431592: memory leak in features/locks xlator
 * #1434298: [Disperse] Metadata version is not healing when a brick is down
 * #1434302: Move spit-brain msg in read txn to debug
 * #1435645: Disperse: Provide description of disperse.eager-lock option.
 * #1436231: Undo pending xattrs only on the up bricks
 * #1436412: Unrecognized filesystems (i.e. btrfs, zfs) log many errors 
about "getinode size"
 * #1437330: Sharding: Fix a performance bug
 * #1438424: [Ganesha + EC] : Input/Output Error while creating LOTS of 
smallfiles
 * #1439112: File-level WORM allows ftruncate() on read-only files
 * #1440635: Application VMs with their disk images on sharded-replica 3 
volume are unable to boot after performing rebalance




[Gluster-users] current Version

2017-04-19 Thread Mario Roeber
Hello Everyone, 

my name is Mario and I have been using GlusterFS for around 1 year at home
with some Raspberry Pis and USB HDs. My question is where I can find the
current 3.8.11 install .deb for Debian/Jessie.
Maybe someone knows.

Thanks for the help.


Mario Roeber
er...@port-x.de

Would you like to exchange encrypted emails with me? Here is my
public key.





Re: [Gluster-users] rebalance fix layout necessary

2017-04-19 Thread Amar Tumballi
On Wed, 19 Apr 2017 at 4:58 PM, Amudhan P  wrote:

> Hi,
>
> Does rebalance fix-layout trigger automatically by any chance?
>
> My cluster is currently showing a rebalance in progress, and running the
> command "rebalance status" shows "fix-layout in progress" on the nodes added
> recently to the cluster and "fix-layout completed" on the old nodes.
>
> Checking the rebalance log on the new nodes shows it was started on 12th April.
>
> It is strange; what would have triggered the rebalance process?
>

Have you done remove-brick?
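A remove-brick also starts its own data migration, which shows up much like a
rebalance. A quick way to compare (a sketch, using the volume name that
appears in your logs):

  # per-node fix-layout / migration progress for the volume
  gluster volume rebalance gfs-vol status

  # a remove-brick migration reports its progress separately, e.g.:
  #   gluster volume remove-brick gfs-vol <host>:<brick> status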

>
> regards
> Amudhan
>
> On Thu, Apr 13, 2017 at 12:51 PM, Amudhan P  wrote:
>
>> I have another issue now: after expanding the cluster, folder listing time
>> has increased by 400%.
>>
>> I also tried enabling readdir-ahead & parallel-readdir, but it showed no
>> improvement in folder listing and instead introduced an issue where random
>> folders disappeared from the listing and data reads returned IO errors.
>>
>> I tried disabling cluster.readdir-optimize and remounting the FUSE client,
>> but the issue continued. So I disabled readdir-ahead & parallel-readdir and
>> enabled cluster.readdir-optimize, and everything works fine.
>>
>> How do I bring down folder listing time?
>>
>>
>> Below is my config in Volume :
>> Options Reconfigured:
>> nfs.disable: yes
>> cluster.disperse-self-heal-daemon: enable
>> cluster.weighted-rebalance: off
>> cluster.rebal-throttle: aggressive
>> performance.readdir-ahead: off
>> cluster.min-free-disk: 10%
>> features.default-soft-limit: 80%
>> performance.force-readdirp: no
>> dht.force-readdirp: off
>> cluster.readdir-optimize: on
>> cluster.heal-timeout: 43200
>> cluster.data-self-heal: on
>>
>> On Fri, Apr 7, 2017 at 7:35 PM, Amudhan P  wrote:
>>
>>> Volume type:
>>> Disperse Volume  8+2  = 1080 bricks
>>>
>>> The first time, I added 8+2 * 3 sets and it started giving issues with
>>> folder listing; after remounting the mount point it was working fine.
>>>
>>> The second time, I added 8+2 * 13 sets and it had the same issue.
>>>
>>> When listing folders it was returning an empty folder or not showing all
>>> the folders.
>>>
>>> When an ongoing write was interrupted, it threw an error that the
>>> destination folder was not available.
>>>
>>> Adding a few more lines from the log; let me know if you need the full
>>> log file.
>>>
>>> [2017-04-05 13:40:03.702624] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
>>> 0-mgmt: Volume file changed
>>> [2017-04-05 13:40:04.970055] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-123: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.971194] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-122: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.972144] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-121: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.973131] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-120: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.974072] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-119: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.975005] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-118: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.975936] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-117: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.976905] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-116: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.977825] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-115: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.978755] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-114: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.979689] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-113: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:04.980626] I [MSGID: 122067]
>>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-112: Using 'sse' CPU
>>> extensions
>>> [2017-04-05 13:40:07.270412] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>>> 2-gfs-vol-client-736: changing port to 49153 (from 0)
>>> [2017-04-05 13:40:07.271902] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>>> 2-gfs-vol-client-746: changing port to 49154 (from 0)
>>> [2017-04-05 13:40:07.272076] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>>> 2-gfs-vol-client-756: changing port to 49155 (from 0)
>>> [2017-04-05 13:40:07.273154] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>>> 2-gfs-vol-client-766: changing port to 49156 (from 0)
>>> [2017-04-05 13:40:07.273193] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>>> 2-gfs-vol-client-776: changing port to 49157 (from 0)
>>> [2017-04-05 13:40:07.273371] I [MSGID: 114046]
>>> [client-handshake.c:1216:client_setvolume_cbk] 2-gfs-vol-client-579:
>>> Connected to gfs-vol-client-579, attached to remote volume
>>> '/media/disk22/brick22'.
>>> [2017-04-05 

Re: [Gluster-users] Replica 2 Quorum and arbiter

2017-04-19 Thread Karthik Subrahmanya
Hi,
Comments inline.

On Tue, Apr 18, 2017 at 1:11 AM, Mahdi Adnan 
wrote:

> Hi,
>
>
> We have a replica 2 volume and we have an issue with setting a proper quorum.
>
> The volumes are used as datastores for VMware/oVirt; the current settings
> for the quorum are:
>
>
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.server-quorum-ratio: 51%
>
>
> Losing the first node, which hosts the first bricks, takes the storage
> domain in oVirt offline, but the FUSE mount point works fine (read/write).
>
This is not possible. When the client quorum is set to auto and the replica
count is 2, the first node has to be up for read/write to happen from the
mount. Maybe you are missing something here.

> Losing the second node, or any other node that hosts only the second bricks
> of the replication, will not affect the oVirt storage domain, i.e. the 2nd
> or 4th nodes.
>
Since the server-quorum-ratio is set to 51%, this is also not possible.
Can you share the volume info here?

> As I understand it, losing the first brick in a replica 2 volume will render
> the volume read-only
>
Yes, you are correct; losing the first brick will make the volume
read-only.

> , but then how does the FUSE mount work in read/write?
> Also, can we add an arbiter node to the current replica 2 volume without
> losing data? If yes, does the rebalance bug "Bug 1440635" affect this
> process?
>
Yes, you can add an arbiter brick without losing data, and bug 1440635 will
not affect that, since only the metadata needs to be replicated on the
arbiter brick.
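For reference, the conversion itself is a single add-brick call; a rough
sketch with hypothetical host and brick paths (one new arbiter brick is
needed per replica pair):

  gluster volume add-brick <VOLNAME> replica 3 arbiter 1 \
      arb-node:/bricks/arb1 arb-node:/bricks/arb2
  gluster volume heal <VOLNAME> info   # the arbiter bricks receive metadata only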

> And what happens if we set "cluster.quorum-type: none" and the first node
> goes offline?
>
If you set the quorum-type to none in a replica 2 volume, you will be able
to read/write even when only one brick is up.
For an arbiter volume, quorum-type is auto by default and that is the
recommended setting.
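For completeness, the current value can be checked and changed per volume (a
sketch; replace the volume name):

  gluster volume get <VOLNAME> cluster.quorum-type
  gluster volume set <VOLNAME> cluster.quorum-type auto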

HTH,
Karthik

>
> Thank you.
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
>

Re: [Gluster-users] rebalance fix layout necessary

2017-04-19 Thread Amudhan P
Hi,

Does rebalance fix-layout trigger automatically by any chance?

My cluster is currently showing a rebalance in progress, and running the
command "rebalance status" shows "fix-layout in progress" on the nodes added
recently to the cluster and "fix-layout completed" on the old nodes.

Checking the rebalance log on the new nodes shows it was started on 12th April.

It is strange; what would have triggered the rebalance process?

regards
Amudhan

On Thu, Apr 13, 2017 at 12:51 PM, Amudhan P  wrote:

> I have another issue now: after expanding the cluster, folder listing time
> has increased by 400%.
>
> I also tried enabling readdir-ahead & parallel-readdir, but it showed no
> improvement in folder listing and instead introduced an issue where random
> folders disappeared from the listing and data reads returned IO errors.
>
> I tried disabling cluster.readdir-optimize and remounting the FUSE client,
> but the issue continued. So I disabled readdir-ahead & parallel-readdir and
> enabled cluster.readdir-optimize, and everything works fine.
>
> How do I bring down folder listing time?
>
>
> Below is my config in Volume :
> Options Reconfigured:
> nfs.disable: yes
> cluster.disperse-self-heal-daemon: enable
> cluster.weighted-rebalance: off
> cluster.rebal-throttle: aggressive
> performance.readdir-ahead: off
> cluster.min-free-disk: 10%
> features.default-soft-limit: 80%
> performance.force-readdirp: no
> dht.force-readdirp: off
> cluster.readdir-optimize: on
> cluster.heal-timeout: 43200
> cluster.data-self-heal: on
>
> On Fri, Apr 7, 2017 at 7:35 PM, Amudhan P  wrote:
>
>> Volume type:
>> Disperse Volume  8+2  = 1080 bricks
>>
>> The first time, I added 8+2 * 3 sets and it started giving issues with
>> folder listing; after remounting the mount point it was working fine.
>>
>> The second time, I added 8+2 * 13 sets and it had the same issue.
>>
>> When listing folders it was returning an empty folder or not showing all
>> the folders.
>>
>> When an ongoing write was interrupted, it threw an error that the
>> destination folder was not available.
>>
>> Adding a few more lines from the log; let me know if you need the full
>> log file.
>>
>> [2017-04-05 13:40:03.702624] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
>> 0-mgmt: Volume file changed
>> [2017-04-05 13:40:04.970055] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-123: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.971194] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-122: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.972144] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-121: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.973131] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-120: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.974072] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-119: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.975005] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-118: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.975936] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-117: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.976905] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-116: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.977825] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-115: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.978755] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-114: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.979689] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-113: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:04.980626] I [MSGID: 122067]
>> [ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-112: Using 'sse' CPU
>> extensions
>> [2017-04-05 13:40:07.270412] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>> 2-gfs-vol-client-736: changing port to 49153 (from 0)
>> [2017-04-05 13:40:07.271902] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>> 2-gfs-vol-client-746: changing port to 49154 (from 0)
>> [2017-04-05 13:40:07.272076] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>> 2-gfs-vol-client-756: changing port to 49155 (from 0)
>> [2017-04-05 13:40:07.273154] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>> 2-gfs-vol-client-766: changing port to 49156 (from 0)
>> [2017-04-05 13:40:07.273193] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
>> 2-gfs-vol-client-776: changing port to 49157 (from 0)
>> [2017-04-05 13:40:07.273371] I [MSGID: 114046]
>> [client-handshake.c:1216:client_setvolume_cbk] 2-gfs-vol-client-579:
>> Connected to gfs-vol-client-579, attached to remote volume
>> '/media/disk22/brick22'.
>> [2017-04-05 13:40:07.273388] I [MSGID: 114047]
>> [client-handshake.c:1227:client_setvolume_cbk] 2-gfs-vol-client-579:
>> Server and Client lk-version numbers are not same, reopening the fds
>> [2017-04-05 13:40:07.273435] I [MSGID: 114035]
>> 

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-19 Thread Pranith Kumar Karampuri
Some thoughts based on this mail thread:
1) At the moment heals happen in parallel only for files, not directories;
i.e. the same shd process doesn't heal 2 directories at a time, but it can do
as many file heals as the shd-max-threads option allows (see the example after
this list). That could be the reason why Amudhan saw better performance after
a while, but it is a bit difficult to confirm without data.

2) When a file is undergoing I/O, both shd and the mount will contend for
locks to do I/O from the bricks; this is probably the reason for the slowness
in I/O. It will last only until the file is healed, in parallel with the I/O
from users.

3) Serkan, Amudhan, it would be nice to have feedback about what you feel the
bottlenecks are, so that we can come up with the next set of performance
improvements. One of the newer enhancements Sunil is working on is being able
to heal larger chunks in one go rather than ~128KB chunks. It will be
configurable up to 128MB I think; this will improve throughput. The next set
of enhancements would concentrate on reducing network round trips during heal
and on parallel heals of directories.
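For reference, the option in (1) is set per volume; a rough example with a
hypothetical volume name, assuming a release that carries these options
(cluster.shd-max-threads for replicate, disperse.shd-max-threads for EC):

  gluster volume set myvol cluster.shd-max-threads 4    # replicate volumes
  gluster volume set myvol disperse.shd-max-threads 4   # disperse (EC) volumes
  gluster volume heal myvol info                        # watch the pending-heal backlog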


On Tue, Apr 18, 2017 at 6:22 PM, Serkan Çoban  wrote:

> >Is this by design ? Is it tuneable ? 10MB/s/brick is too low for us.
> >We will use 10GB ethernet, healing 10MB/s/brick would be a bottleneck.
>
> That is the maximum if you are using EC volumes; I don't know about
> other volume configurations.
> With 3.9.0, parallel self-heal of EC volumes should be faster though.
>
>
>
> On Tue, Apr 18, 2017 at 1:38 PM, Gandalf Corvotempesta
>  wrote:
> > 2017-04-18 9:36 GMT+02:00 Serkan Çoban :
> >> Nope, healing speed is 10MB/sec/brick, each brick heals with this
> >> speed, so one brick or one server each will heal in one week...
> >
> > Is this by design ? Is it tuneable ? 10MB/s/brick is too low for us.
> > We will use 10GB ethernet, healing 10MB/s/brick would be a bottleneck.



-- 
Pranith