If the patch provided in that case will resolve my bug as well, please
share it so that I can backport it to 3.7.6.
On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL
wrote:
> Hi Team,
>
> I have noticed that many glusterfsd threads are running in my
> system and we obs
Our use case is archive storage, so we are mainly after high-capacity setups
with some sort of resiliency. We do have 10GbE NICs on these boxes, but
that's mainly for resync etc. For our end-user traffic we don't need more
than 1GbE per server.
Is it possible to change the EC setup when/if we double
Hi Team,
I have noticed that many glusterfsd threads are running in my
system, and we observed some of those threads consuming more CPU. I ran
"strace" on two such threads (before the problem disappeared by itself) and
found continuous activity like the following:
lstat("/opt/l
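For anyone hitting the same symptom, a diagnostic sketch for finding which glusterfsd threads are burning CPU and what syscalls they make. The process name is real; the thread ID is a placeholder, and the commands are shown commented out since they only make sense on a node actually running glusterfsd:

```shell
# Show per-thread CPU usage for the oldest glusterfsd process:
# top -b -H -n 1 -p "$(pgrep -o glusterfsd)" | head -n 20

# Attach to one hot thread ID (TID) and count its syscalls for 10 seconds:
# timeout 10 strace -c -p <TID>

# Or trace only lstat calls, with their arguments, as seen above:
# strace -e trace=lstat -p <TID>
```

`strace -c` prints a per-syscall summary on detach, which is usually less disruptive than full tracing on a busy brick process.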
Thanks for pointing me to the documentation. That's perfect; I can now plan my
upgrade to 3.8.11. By the way, I was wondering why a self-heal is part of the
upgrade procedure. Is it just a precaution, or is it mandatory?
Regards
M.
Original Message
Subject: Re: [Gluster-users] Bugfix
What is your use case? Disperse is good for archive workloads, big files.
I suggest buying 10 servers and using an 8+2 EC configuration. This way you can
handle two node failures. We are using 28-disk servers, but our next
cluster will use 68-disk servers.
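For reference, a minimal sketch of what such a setup could look like. The hostnames, volume name, and brick path below are hypothetical, and the commented commands assume a cluster where all ten nodes are already peer-probed:

```shell
# Usable capacity of an 8+2 disperse set: data bricks / total bricks.
DATA=8
REDUNDANCY=2
TOTAL=$((DATA + REDUNDANCY))
echo "usable fraction: ${DATA}/${TOTAL}"   # 80% of raw brick space

# Hypothetical volume creation, one brick per node (node01..node10):
# gluster volume create archive disperse ${TOTAL} redundancy ${REDUNDANCY} \
#     node{01..10}:/bricks/brick1/archive
# gluster volume start archive
```

With `redundancy 2`, any two bricks of the set can fail simultaneously without data loss, matching the two-node-failure claim above.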
On Thu, Apr 20, 2017 at 1:19 PM, Ingard Me
On Thu, Apr 20, 2017 at 12:32 PM, Peter B. wrote:
> Thanks Amar and Mohamed!
>
> My question was mainly aiming at things like programmatic limitations.
> We're already running 2 Gluster-Clusters with 4 nodes each.
> 3 bricks = 100 TB/node = 400 TB total.
>
> So with Gluster 3.x it's 8PB, possib
On Wed, Apr 19, 2017 at 06:31:45PM +, Mahdi Adnan wrote:
> Hi,
>
>
> I think bug 1440635 has not been fixed yet.
> https://bugzilla.redhat.com/show_bug.cgi?id=1440635
Indeed, that bug has been re-opened. Some fixes were merged for the bug,
so there might be specific corner cases where the pr
On Wed, Apr 19, 2017 at 01:46:14PM -0400, mabi wrote:
> Sorry for insisting but where can I find the upgrading to 3.8 guide?
> This is the only guide missing from the docs... I would like to
> upgrade from 3.7 and would like to follow the documentation to make
> sure everything goes well.
The upgr
Thanks Amar and Mohamed!
My question was mainly aiming at things like programmatic limitations.
We're already running 2 Gluster-Clusters with 4 nodes each.
3 bricks = 100 TB/node = 400 TB total.
So with Gluster 3.x it's 8PB, possibly more with Gluster 4.x.
Right?
Thank you very much again!
P
Hello and thanks,
yes, I know this repository, but I need it for the Raspberry Pi; for normal
Debian 8 I have it. The current Raspberry Pi repository only has version 3.5.2.
Mario Roeber
er...@port-x.de
Would you like to exchange encrypted emails with me? Here is my public
key.
On Thu, Apr 20, 2017 at 06:58:51AM -0400, Kaleb S. KEITHLEY wrote:
> On 04/19/2017 04:11 PM, Eric K. Miller wrote:
> > We have a requirement to stay on CentOS 7.2 for a while (due to some
> > bugs in 7.3 components related to libvirt). So we have the yum repos
> > set to CentOS 7.2, not 7.3. When
On 04/19/2017 04:11 PM, Eric K. Miller wrote:
We have a requirement to stay on CentOS 7.2 for a while (due to some
bugs in 7.3 components related to libvirt). So we have the yum repos
set to CentOS 7.2, not 7.3. When installing Gluster (latest version in
the repo, which turns out to be 3.8.10),
Hi
We've been looking at Supermicro 60- and 90-bay servers. Is anyone else
using these models (or similar density) for Gluster?
Specifically I'd like to setup a distributed disperse volume with 8 of
these servers.
Any insight, dos and don'ts, or best-practice guidelines would be
appreciated :)
ki
On Wed, Apr 19, 2017 at 8:42 AM, Tom Zhou wrote:
> Setup:
>
> server : ubuntu 16.04
> glusterfs version: 3.10
>
> volume type: Disperse volume (4+2) nodes
>
> mount type: glusterfs fuse
>
>
> Problem:
>
> when grepping heavily on a mounted Disperse volume, "transport disconnected"
> errors happen.
>
Hi Pranith,
> 1) At the moment heals happen in parallel only for files, not directories,
i.e. the same shd process doesn't heal 2 directories at a time. But it can
do as many file heals as the shd-max-threads option allows. That could be
the reason why Amudhan saw better performance after a while, but it
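For anyone wanting to experiment with this, the option mentioned above can be tuned per volume. A sketch, with the volume name and value as placeholders; this assumes the disperse variant of the option available in recent releases:

```shell
# Raise the number of parallel file heals per self-heal daemon:
# gluster volume set <volname> disperse.shd-max-threads 4

# Check the current value:
# gluster volume get <volname> disperse.shd-max-threads
```

Raising this trades heal throughput against client-visible load, so it is worth watching CPU and latency on the bricks after changing it.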
No, but a few disk failures happened. Since my volume type is
disperse, I replaced the disks in one of the disperse sets, mounted the new
disks at the same mount points on the node, and started the volume with force
to bring it back into service.
On Wed, Apr 19, 2017 at 9:46 PM, Amar Tumballi wrote:
>
> On Wed, 1