Hello,
I can't find explicit information, so I am trying here...
Is caching only done on the client side? I saw it mentioned for the quota feature (which I am
not using).
My bricks have 32GB of RAM, but only the kernel cache is using it. The Gluster
daemons are using very little memory (0.4% max for one of them).
I have 5 volumes,
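For context, most GlusterFS caching happens in client-side translators (io-cache, quick-read, write-behind); the bricks themselves lean on the kernel page cache, which matches what you're seeing. A hedged sketch of tuning the client-side cache ("myvol" is a placeholder volume name; run against your actual volumes):

```shell
# Enable and size the client-side io-cache translator.
# The bricks will still use the kernel page cache regardless.
gluster volume set myvol performance.io-cache on
gluster volume set myvol performance.cache-size 1GB
# Confirm the options took effect:
gluster volume info myvol | grep performance
```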
Yes, I'm aware of that. That's a nuclear option though. In the past, there
haven't been any conflicts.
On Tue, Jun 16, 2015 at 5:59 PM, 何亦军 wrote:
> you can disable epel repos,
>
>
>
> [epel]
>
> ….
>
> enabled=0
>
> ….
>
>
>
> *From:* gluster-users-boun...@gluster.org [mailto:
> gluster-users-b
you can disable epel repos,
[epel]
….
enabled=0
….
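The edit above can be scripted rather than done by hand; a minimal sketch (the repo path is the conventional EPEL location, adjust for your system, and the `yum-config-manager` alternative assumes yum-utils is installed):

```shell
# Flip enabled=1 to enabled=0, but only inside the [epel] section,
# leaving any other sections in the file untouched.
sed -i '/^\[epel\]/,/^\[/ s/^enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo

# Equivalent, if yum-utils is installed:
# yum-config-manager --disable epel
```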
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Prasun Gera
Sent: 17 June 2015 1:36
To: gluster-users@gluster.org
Subject: [Gluster-users] RHS 3 vdsm package conflicts with EPEL
Wanted to bring this to the attenti
Thank you, Niels, for taking the time to chase down the issue. It is important to
have working files, since people try things and "move on" when they don't work.
Not everyone is as persistent as Alessandro!
Regards, Malahal.
Niels de Vos [nde...@redhat.com] wrote:
> On Mon, Jun 15, 2015 at 06:50:21PM -0500, Malahal Nai
On Mon, Jun 15, 2015 at 06:50:21PM -0500, Malahal Naineni wrote:
> Kaleb Keithley [kkeit...@redhat.com] wrote:
> > But note that nfs-ganesha in EPEL[67] is built with a) glusterfs-api-3.6.x
> > from Red Hat's "downstream" glusterfs, and b) the "bundled" static version
> > of ntirpc, not the share
Wanted to bring this to the attention of RH folks. This started happening
sometime last week I think:
Error: Package: vdsm-gluster-4.16.8.1-6.2.el6rhs.noarch
(@rhel-x86_64-server-6-rhs-3)
Requires: vdsm = 4.16.8.1-6.2.el6rhs
Removing: vdsm-4.16.8.1-6.2.el6rhs.x86_64
(@rhel-x8
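A less drastic workaround than disabling EPEL outright is to exclude just the conflicting packages from the EPEL repo. A hedged sketch; the `vdsm*` glob is an assumption based on the error above, so substitute whichever package EPEL is actually pulling in:

```shell
# Add an exclude line under the [epel] section so yum never considers
# its conflicting packages during dependency resolution.
sed -i '/^\[epel\]/a exclude=vdsm*' /etc/yum.repos.d/epel.repo
grep -A1 '^\[epel\]' /etc/yum.repos.d/epel.repo
```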
Brett,
Did you ever resolve this? I am seeing similar crashes on my system,
with glusterfs-3.6.3-3.fc22.x86_64.
- Mike
> Hoping someone can help me out with this. I've been running GlusterFS
> for awhile now and everything was great. Now for about the last month
> I'm lucky if it runs for
Sent from one plus one
On Jun 16, 2015 9:32 PM, "Andreas Hollaus" wrote:
>
> Hi,
>
> I discovered this strange situation when I rebooted one of the nodes. After the
> reboot I removed the brick on node 2, but for some reason it seems like that
> information didn't reach node 1.
> Any idea what coul
Hi,
I discovered this strange situation when I rebooted one of the nodes. After the
reboot I removed the brick on node 2, but for some reason it seems like that
information didn't reach node 1.
Any idea what could have gone wrong and how to troubleshoot? Up until now I've
never seen that the nodes
On 06/16/2015 10:42 AM, Gene Liverman wrote:
> I have servers set to pull from
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64
> yet when I go there and work back up the path to the EPEL.repo folder I
> only see 6 & 7 now. Is this a mistake or was support for EPE
I have servers set to pull from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64
yet when I go there and work back up the path to the EPEL.repo folder I
only see 6 & 7 now. Is this a mistake or was support for EPEL 5 dropped?
Thanks,
*Gene Liverman*
System
Thanks, that was exactly the issue. I had assumed that the default for
auth.ssl-allow was *, which was what I wanted. Setting that and stopping
and starting my volume fixed things. Thanks!
David
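For anyone hitting the same symptom, a sketch of the fix described above ("myvol" is a placeholder; auth.ssl-allow takes '*' or a comma-separated list of certificate CNs):

```shell
# Explicitly allow SSL clients, then restart the volume for it to take effect.
gluster volume set myvol auth.ssl-allow '*'
gluster volume stop myvol
gluster volume start myvol
```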
On Mon, Jun 15, 2015, 6:19 PM Jeff Darcy wrote:
> Hi all,
>
> I'm just installing my first ever glus
Hello,
Could you please tell me what I should do to enable/fix synchronous
replication between two GlusterFS nodes? At the moment my files are
syncing only about every 7 minutes.
Here is my configuration:
Gluster01 server = 10.75.3.43 (and also 10.75.2.41 for clients)
Gluster02 server = 10.
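For reference, a replica volume written through a Gluster mount replicates synchronously; delayed syncing usually means clients are not writing through such a mount. A hedged sketch of the expected setup (volume name, brick paths, and the second hostname are placeholders; never write to the brick directories directly):

```shell
# Two-way replica volume: every write through the mount goes to both bricks.
gluster volume create gvol0 replica 2 gluster01:/bricks/gvol0 gluster02:/bricks/gvol0
gluster volume start gvol0
# Clients must use a Gluster (or NFS) mount, not the brick paths:
mount -t glusterfs gluster01:/gvol0 /mnt/gvol0
```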
On 06/16/2015 04:37 PM, Atin Mukherjee wrote:
> Hi all,
>
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?chann
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
Thanks Atin for the quick reply. Appreciated.
Date: Tue, 16 Jun 2015 16:13:01 +0530
Subject: Re: [Gluster-users] Peers moving to disconnected state while running
rebalance on a replica volume
From: atin.mukherje...@gmail.com
To: suneelg...@outlook.com
CC: gluster-users@gluster.org
This is a know
This is a known issue: glusterd crashed on the node which is showing as
disconnected. The fix will be released in 3.7.2.
Atin
Sent from one plus one
On Jun 16, 2015 4:06 PM, "Suneel Gali" wrote:
> Hi Team,
>
> Recently I started working glusterfs and created the following home setup:
>
> Total nodes