[+gluster-users]
On 09/24/2014 11:59 AM, Demeter Tibor wrote:
Hi,
Is there any method in glusterfs, like raid-5?
I have three nodes, each node has 5 TB of disk. I would like to utilize
all of the space with redundancy, like raid-5.
If that is not possible, can I make raid-6-like redundancy within three
On 2014-09-22 11:57, Justin Clift wrote:
On 21/09/2014, at 7:47 PM, Vijay Bellur wrote:
On 09/18/2014 05:42 PM, Humble Devassy Chirammal wrote:
Greetings,
As decided in our last GlusterFS meeting and the 3.6 planning schedule,
we shall conduct GlusterFS 3.6 test days starting from next
On 24/09/2014, at 10:17 AM, Anders Blomdell wrote:
On 2014-09-24 11:22, Justin Clift wrote:
Hi,
I am running the Gluster Test Framework on ZFS and XFS, and most of the
test cases are failing.
Gluster version: v3.4.5
Operating System: CentOS 7
Please let us know how to fix this.
What could be the major changes in CentOS 7 that are causing this issue?
Or is Gluster the culprit here?
I
Reminder!!!
The weekly Gluster Community meeting is in 25 minutes, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
To add Agenda items
***
Add new items under the 'Other items to discuss' point on the
On 24/09/2014, at 12:35 PM, Justin Clift wrote:
Thanks to everyone for attending. Lots of
On 24/09/2014, at 2:07 PM, Kiran Patil wrote:
Some of the reasons I have found so far are as below,
1. The cleanup operation does not work, since killall is not part of a default CentOS 7 install
2. I used pkill instead, and test cases still fail at the first step, e.g. TEST glusterd
3. Subsequent running of test cases does
Thanks for the info!
I started the remove-brick start and, of course, the brick went read-only
in less than an hour.
This morning I checked the status a couple of minutes apart and found:
Node    Rebalanced-files    size    scanned    failures    status
------  ----------------    ----    -------    --------    ------
On 23/09/2014, at 4:43 PM, Justin Clift wrote:
Cool, I've not gotten around to testing EL7 yet. :)
Would you have the time / interest to add CentOS 7 steps to the page?
Thanks for adding CentOS 7 steps to the page. :)
+ Justin
--
GlusterFS - http://www.gluster.org
An open source,
On 09/24/2014 07:35 PM, james.bellin...@icecube.wisc.edu wrote:
On 09/24/2014 07:35 PM, james.bellin...@icecube.wisc.edu wrote:
Hi,
Could anybody help me?
Tibor
FYI
http://blog.gluster.org/category/erasure-coding/
Alex
On 24/09/14 21:24, Demeter Tibor wrote:
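The erasure-coding link above corresponds to GlusterFS dispersed volumes, which arrived with 3.6. A minimal sketch of what a raid-5-like setup could look like for the three-node case — the volume name, host names, and brick paths below are made up for illustration, and the create command is shown commented out since it needs a running GlusterFS 3.6+ cluster:

```shell
#!/bin/sh
# Hypothetical dispersed-volume sketch (GlusterFS 3.6+); names are invented:
#
#   gluster volume create dispvol disperse 3 redundancy 1 \
#       node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/b3
#
# Usable capacity of a dispersed volume is (bricks - redundancy) * brick size,
# so three 5 TB bricks with one redundancy brick give 10 TB usable, versus
# only 5 TB for a 3-way replicated volume.
BRICKS=3
REDUNDANCY=1
BRICK_TB=5
USABLE=$(( (BRICKS - REDUNDANCY) * BRICK_TB ))
echo "usable capacity: ${USABLE} TB"
```

Like raid-5, a redundancy of 1 tolerates the loss of any single brick.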
pkill expects only one pattern, so I did as below in the tests/include.rc file
and the test cases started working fine.
pkill glusterfs 2>/dev/null || true;
pkill glusterfsd 2>/dev/null || true;
pkill glusterd 2>/dev/null || true;
Thanks,
Kiran.
On Wed, Sep 24, 2014 at 6:48 PM, Justin Clift
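Kiran's three pkill lines can also be wrapped in a small loop. A minimal sketch, assuming pkill from procps is on PATH (as on CentOS 7); the function name is invented here, not part of the test framework:

```shell
#!/bin/sh
# Hypothetical cleanup helper equivalent to Kiran's fix.
kill_gluster_procs () {
    # pkill takes a single pattern, so kill each daemon separately;
    # "|| true" keeps cleanup from failing when nothing matches.
    for proc in glusterfs glusterfsd glusterd; do
        pkill "$proc" 2>/dev/null || true
    done
}

kill_gluster_procs
echo "cleanup done"
```

This avoids the dependency on killall (from psmisc), which the stock test framework assumed.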