- Original Message -
> From: "Laurent Chouinard"
> To: gluster-users@gluster.org
> Sent: Thursday, May 22, 2014 9:16:01 PM
> Subject: [Gluster-users] Unavailability during self-heal for large volumes
>
>
>
> Hi,
>
>
>
> Digging in the archives of this list and bugzilla, it seems th
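For anyone watching a large heal like this, the heal queue can be inspected,
and the background heal load throttled, from the CLI; "myvol" below is only a
placeholder volume name:

  # list files still queued for self-heal on each brick
  gluster volume heal myvol info

  # throttle how many files a client heals in the background at once
  gluster volume set myvol cluster.background-self-heal-count 4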
On 30/05/2014, at 6:14 AM, Humble Devassy Chirammal wrote:
> Hi Eco,
>
> Is there any plan to have a "FORUMS" subpage on gluster.org?
Personally, I hope not. Forums take a massive amount of admin
effort to keep spam-free. :(
+ Justin
--
Open Source and Standards @ Red Hat
twitter.com/realjust
Hi Eco,
Is there any plan to have a "FORUMS" subpage on gluster.org?
I think it will be beneficial.
--Humble
On Fri, May 30, 2014 at 4:40 AM, Eco Willson wrote:
> Dear Community members,
>
> We have been working on a new site design and we would love to get your
> feedback. You can check thi
hi,
We are taking the initiative to put together a list of easy bugs that
volunteers in the community can send patches for.
Goals of this initiative:
- Each maintainer needs to come up with a list of bugs that are easy to fix in
their components.
- All the developers who are already
Awesome!!!
+10 :)
On Fri, May 30, 2014 at 4:40 AM, Eco Willson wrote:
> Dear Community members,
>
> We have been working on a new site design and we would love to get your
> feedback. You can check things out at staging.gluster.org. Things are
> still very much in beta (a few pages not displ
This end-user guy over here likes it a lot. It's much more CxO-friendly
when a guy in a suit asks me, "What's this Gluster thing we use?"
-Dan
Dan Mons
Unbreaker of broken things
Cutting Edge
http://cuttingedge.com.au
On 30 May 2014 09:13, Harshavardhana wrote:
> Excellent stuff! +1
>
Web client: http://fpaste.org/105784/14014066/
Gluster storage server: http://fpaste.org/105786/40140677/
The fpastes above show a situation I have experienced twice, where my
8-node 2x4 distributed-replicate Gluster setup was playing the
connect/disconnect game.
My servers are RHEL 6.4 on gluster
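A couple of quick checks that usually help when chasing this kind of
flapping; the client log path below is an example, the real name is derived
from your mount point:

  # confirm every peer is connected, from each server in turn
  gluster peer status

  # look for disconnect/reconnect churn in the client log
  grep -i disconnect /var/log/glusterfs/mnt-gluster.log | tail

  # and in the brick logs on the servers
  grep -i disconnect /var/log/glusterfs/bricks/*.log | tail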
Excellent stuff! +1
On Thu, May 29, 2014 at 4:10 PM, Eco Willson wrote:
> Dear Community members,
>
> We have been working on a new site design and we would love to get your
> feedback. You can check things out at staging.gluster.org. Things are still
> very much in beta (a few pages not disp
Dear Community members,
We have been working on a new site design and we would love to get your
feedback. You can check things out at staging.gluster.org. Things are still
very much in beta (a few pages not displaying properly or at all, etc), but we
decided to roll things out so that we can
On 29/05/2014, at 8:04 PM, Ben Turner wrote:
>> From: "James"
>> Sent: Wednesday, May 28, 2014 5:21:21 PM
>> On Wed, May 28, 2014 at 5:02 PM, Justin Clift wrote:
>>> Hi all,
>>>
>>> Are there any Community members around who can test the GlusterFS 3.4.4
>>> beta (rpms are available)?
>>
>> I've
- Original Message -
> From: "James"
> To: "Justin Clift"
> Cc: gluster-users@gluster.org, "Gluster Devel"
> Sent: Wednesday, May 28, 2014 5:21:21 PM
> Subject: Re: [Gluster-users] [Gluster-devel] Need testers for GlusterFS 3.4.4
>
> On Wed, May 28, 2014 at 5:02 PM, Justin Clift wrote:
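For anyone who'd like to help test but hasn't done so before, a minimal smoke
test of the beta rpms looks something like this; the hostnames, brick paths,
and volume name are only examples:

  # on each server, after installing the beta rpms
  service glusterd start
  gluster peer probe server2

  # create, start, and mount a small replica 2 volume
  gluster volume create testvol replica 2 server1:/bricks/testvol server2:/bricks/testvol
  gluster volume start testvol
  mount -t glusterfs server1:/testvol /mnt/testvol

  # basic sanity: write data, then confirm nothing is pending heal
  cp -a /etc /mnt/testvol/
  gluster volume heal testvol info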
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1892 root 20 0 10.2g 4.7g 1900 S 15 61.1 8980:27 glusterfs
10 GBytes is too much for one process, and we want to know if there is
anything we can do to solve this situation.
In general VIRT (virtual) is not the m
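The number to watch is the resident set (RES/RSS) rather than VIRT; for the
pid from the top output above:

  # resident vs. virtual size for the glusterfs process
  ps -o pid,vsz,rss,comm -p 1892

  # the same counters straight from /proc
  grep -E 'VmSize|VmRSS' /proc/1892/status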
Hello everyone,
I have a situation here: I have two nodes, and they share only one volume.
But after many months we have noticed high memory consumption by the
glusterfs process.
You can see the top output from our server:
# top
top - 08:55:08 up 350 days, 23:36, 2 users, load average: 1.5
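If this does turn out to be a leak, a statedump shows the per-translator
allocations; dumps land under /var/run/gluster by default, and the volume
name and pid below are only examples:

  # server side: dump state for the volume's brick processes
  gluster volume statedump myvol

  # client side: a glusterfs mount process dumps state on SIGUSR1
  kill -USR1 1892

  # then inspect the newest dump
  ls -lt /var/run/gluster | head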
There are a couple of things here:
1. With 3.5, geo-replication also takes care of keeping the GFIDs of
files in sync. Syncing data with rsync this way would have mangled the
GFIDs. This is very similar to an upgrade-to-3.5 scenario, where
data synced by geo-rep pre-3.5 would not h
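For reference, a file's GFID can be checked directly with getfattr, which
makes it easy to verify whether two copies of a file really share a GFID;
the paths below are examples:

  # on a brick: the GFID lives in the trusted.gfid xattr
  getfattr -n trusted.gfid -e hex /bricks/myvol/path/to/file

  # on a FUSE mount: the same value via the virtual xattr
  getfattr -n glusterfs.gfid.string /mnt/myvol/path/to/file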