Re: [Gluster-users] High load on glusterfsd process

2017-04-20 Thread ABHISHEK PALIWAL
If the patch provided in that case will resolve my bug as well, then please provide the patch so that I can backport it to 3.7.6. On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL wrote: > Hi Team, > > I have noticed that there are so many glusterfsd threads running in my > system and we obs
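A minimal sketch of how such a backport is typically prepared, assuming the fix exists as an upstream commit; the branch name and commit id below are placeholders, not values from this thread:

  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  # start from the 3.7.6 release tag and apply the upstream fix on top
  git checkout -b backport-3.7.6 v3.7.6
  git cherry-pick <upstream-commit-id>
  # rebuild and test the patched packages before rolling them out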

Re: [Gluster-users] supermicro 60/90 bay servers

2017-04-20 Thread Ingard Mevåg
Our use case is archive storage, so we are mainly after high-capacity setups with some sort of resiliency. We do have 10-gig NICs on these boxes, but that's mainly for resync etc. For our end-user traffic we don't need more than 1 gig per server. Is it possible to change the EC setup when/if we double
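On the question of changing the EC setup: the disperse geometry of an existing volume cannot be changed in place, but capacity can be grown by adding bricks in multiples of the disperse count. A hedged sketch, with placeholder volume, host, and brick names:

  # add another full disperse set of bricks (e.g. one brick on each of ten new servers for an 8+2 volume)
  gluster volume add-brick archivevol \
      newserver{1..10}:/bricks/brick1/archivevol
  # spread the existing layout and data across the new subvolume
  gluster volume rebalance archivevol start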

[Gluster-users] High load on glusterfsd process

2017-04-20 Thread ABHISHEK PALIWAL
Hi Team, I have noticed that there are so many glusterfsd threads running in my system, and we observed some of those threads consuming more CPU. I ran “strace” on two such threads (before the problem disappeared by itself) and found continuous activity like the below: lstat("/opt/l
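A minimal sketch of how such per-thread CPU usage is usually pinned down; the thread id is a placeholder:

  # list the busiest threads of a glusterfsd process (pgrep -o picks the oldest if there are several bricks)
  top -H -p $(pgrep -o glusterfsd)
  # summarise the system calls of one hot thread for ten seconds
  timeout 10 strace -c -p <tid>
  # or stream the metadata calls with timestamps
  strace -tt -p <tid> -e trace=lstat,stat,getxattr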

Re: [Gluster-users] Bugfix release GlusterFS 3.8.11 has landed

2017-04-20 Thread mabi
Thanks for pointing me to the documentation. That's perfect, I can now plan my upgrade to 3.8.11. By the way, I was wondering why a self-heal is part of the upgrade procedure? Is it just in case, or is it mandatory? Regards M. Original Message Subject: Re: [Gluster-users] Bugfix
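For context on the self-heal step: during a rolling upgrade each server misses writes while its bricks are offline, and those pending entries should be healed before the next server is taken down so redundancy is not lost. A hedged sketch of the usual check, with a placeholder volume name:

  # after upgrading one server, wait until no entries remain to be healed on any brick
  gluster volume heal myvol info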

Re: [Gluster-users] supermicro 60/90 bay servers

2017-04-20 Thread Serkan Çoban
What is your use case? Disperse is good for archive workloads and big files. I suggest you buy 10 servers and use an 8+2 EC configuration. This way you can handle two node failures. We are using 28-disk servers, but our next cluster will use 68-disk servers. On Thu, Apr 20, 2017 at 1:19 PM, Ingard Me
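A minimal sketch of the suggested layout, assuming one brick per server; host names and brick paths are placeholders:

  # 8+2 dispersed volume across ten servers: any two servers can fail
  gluster volume create archivevol disperse-data 8 redundancy 2 \
      server{1..10}:/bricks/brick1/archivevol
  gluster volume start archivevol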

Re: [Gluster-users] GlusterFS absolute max. storage size in total?

2017-04-20 Thread Amar Tumballi
On Thu, Apr 20, 2017 at 12:32 PM, Peter B. wrote: > Thanks Amar and Mohamed! > > My question was mainly aiming at things like programmatic limitations. > We're already running 2 Gluster clusters with 4 nodes each. > 3 bricks = 100 TB/node = 400 TB total. > > So with Gluster 3.x it's 8 PB, possib

Re: [Gluster-users] Bugfix release GlusterFS 3.8.11 has landed

2017-04-20 Thread Niels de Vos
On Wed, Apr 19, 2017 at 06:31:45PM +, Mahdi Adnan wrote: > Hi, > > > I think bug 1440635 has not been fixed yet. > https://bugzilla.redhat.com/show_bug.cgi?id=1440635 Indeed, that bug has been re-opened. Some fixes were merged for the bug, so there might be specific corner cases where the pr

Re: [Gluster-users] Bugfix release GlusterFS 3.8.11 has landed

2017-04-20 Thread Niels de Vos
On Wed, Apr 19, 2017 at 01:46:14PM -0400, mabi wrote: > Sorry for insisting, but where can I find the guide for upgrading to 3.8? > This is the only guide missing from the docs... I would like to > upgrade from 3.7 and would like to follow the documentation to make > sure everything goes well. The upgr

Re: [Gluster-users] GlusterFS absolute max. storage size in total?

2017-04-20 Thread Peter B.
Thanks Amar and Mohamed! My question was mainly aiming at things like programmatic limitations. We're already running 2 Gluster clusters with 4 nodes each. 3 bricks = 100 TB/node = 400 TB total. So with Gluster 3.x it's 8 PB, possibly more with Gluster 4.x. Right? Thank you very much again! P

Re: [Gluster-users] current Version

2017-04-20 Thread Mario Roeber
Hello and thanks, yes I know this repository, but I need it for the Raspberry Pi; for normal Debian 8 I have it. The current Raspberry Pi repository only has version 3.5.2. Mario Roeber er...@port-x.de Would you like to exchange encrypted emails with me? Here is my public key.

Re: [Gluster-users] Issue installing Gluster on CentOS 7.2

2017-04-20 Thread Niels de Vos
On Thu, Apr 20, 2017 at 06:58:51AM -0400, Kaleb S. KEITHLEY wrote: > On 04/19/2017 04:11 PM, Eric K. Miller wrote: > > We have a requirement to stay on CentOS 7.2 for a while (due to some > > bugs in 7.3 components related to libvirt). So we have the yum repos > > set to CentOS 7.2, not 7.3. When

Re: [Gluster-users] Issue installing Gluster on CentOS 7.2

2017-04-20 Thread Kaleb S. KEITHLEY
On 04/19/2017 04:11 PM, Eric K. Miller wrote: We have a requirement to stay on CentOS 7.2 for a while (due to some bugs in 7.3 components related to libvirt). So we have the yum repos set to CentOS 7.2, not 7.3. When installing Gluster (latest version in the repo, which turns out to be 3.8.10),
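A hedged sketch of one way to keep the base repos on 7.2 while pulling Gluster from the CentOS Storage SIG; the vault path and SIG package name are assumptions based on the conventions of the time, not taken from this thread:

  # point the base/updates repos at the 7.2.1511 tree on vault.centos.org
  # (e.g. baseurl=http://vault.centos.org/7.2.1511/os/x86_64/ in the .repo file)
  # then enable the Storage SIG repo and install the 3.8 series
  yum install centos-release-gluster38
  yum install glusterfs-server
  # inspect which dependencies would still be resolved from 7.3 packages
  yum deplist glusterfs-server | less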

[Gluster-users] supermicro 60/90 bay servers

2017-04-20 Thread Ingard Mevåg
Hi We've been looking at Supermicro 60- and 90-bay servers. Is anyone else using these models (or similar density) for Gluster? Specifically, I'd like to set up a distributed disperse volume with 8 of these servers. Any insight, dos and don'ts, or best-practice guidelines would be appreciated :) ki
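A minimal sketch of one possible distributed disperse layout on eight such servers, assuming a 6+2 configuration; volume name, hosts, and brick paths are placeholders:

  # two 6+2 subvolumes (16 bricks); consecutive bricks form a subvolume,
  # so each set spans all eight servers and survives two server failures
  gluster volume create densevol disperse-data 6 redundancy 2 \
      server{1..8}:/bricks/b1/densevol \
      server{1..8}:/bricks/b2/densevol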

Re: [Gluster-users] transport disconnected on Disperse volume

2017-04-20 Thread Amar Tumballi
On Wed, Apr 19, 2017 at 8:42 AM, Tom Zhou wrote: > Setup: > > server: Ubuntu 16.04 > glusterfs version: 3.10 > > volume type: Disperse volume (4+2) nodes > > mount type: glusterfs fuse > > > Problem: > > when grepping heavily on a mounted Disperse volume, a "transport disconnected" > error happens. >
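A hedged sketch of the usual first diagnostics for fuse-client disconnects; host, volume, and mount point names are placeholders:

  # remount the client with a more verbose log level
  mount -t glusterfs -o log-level=DEBUG server1:/dispvol /mnt/dispvol
  # the fuse client log file is named after the mount point
  grep -i disconnect /var/log/glusterfs/mnt-dispvol.log
  # confirm all brick processes are still online
  gluster volume status dispvol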

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-20 Thread Amudhan P
Hi Pranith, > 1) At the moment heals happen in parallel only for files, not directories, i.e. the same shd process doesn't heal 2 directories at a time. But it > can do as many file heals as the shd-max-threads option allows. That could be the reason why Amudhan faced better performance after a while, but > it
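A minimal sketch of the option referred to above; the volume name and thread count are placeholders:

  # allow the self-heal daemon to heal more files in parallel
  gluster volume set myvol cluster.shd-max-threads 8
  # confirm the value that is in effect
  gluster volume get myvol cluster.shd-max-threads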

Re: [Gluster-users] rebalance fix layout necessary

2017-04-20 Thread Amudhan P
No, but a few disk failures happened. Since my volume type is disperse, I have replaced disks in one of the disperse sets, mounted the disks at the same mount points on the nodes, and started the volume with force to bring them back into service. On Wed, Apr 19, 2017 at 9:46 PM, Amar Tumballi wrote: > > On Wed, 1
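A hedged sketch of the replacement steps described above; the device, volume name, and brick path are placeholders:

  # recreate the filesystem on the replacement disk and remount it
  # at the original brick mount point
  mkfs.xfs -f /dev/sdX
  mount /dev/sdX /bricks/brick3
  # restart the brick process and rebuild the data on the fresh brick
  gluster volume start dispvol force
  gluster volume heal dispvol full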