Hi Atin,

On 25/06/15 14:34, Atin Mukherjee wrote:
> On 06/25/2015 10:01 AM, John Gardeniers wrote:
>> Hi Atin,
>>
>> On 25/06/15 14:24, Atin Mukherjee wrote:
>>> On 06/25/2015 03:07 AM, John Gardeniers wrote:
>>>> No takers on this one?
>>>>
>>>> On 22/06/15 14:37, John Gardeniers wrote:
>>>>> Until last weekend we had a simple 1x2 replicated volume, consisting
>>>>> of a single brick on each peer. After a drive failure screwed the
>>>>> brick on one peer we decided to create a new peer and swap the bricks.
>>>>> Running "gluster volume
>>>
>>> Hi John,
>>> If you either replace a brick of a replica or increase the replica count
>>> by adding another brick, you will need to perform `gluster volume heal
>>> <volname> full` to sync the data into the new/replaced brick.
>>> If you are ru
Hello,
A split-brain happened a few hours ago. How would you determine which copy is the newest?
# gluster volume heal 1KVM12_P3 info
Brick 1kvm1:/STORAGES/g1r5p3/GFS/
/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids - Is in split-brain
Number of entries: 1
Brick 1kvm2:/STORAGES/g1r5p3/GFS/
/7e
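A sketch of one way to answer that, assuming the brick and file paths shown above and the CLI-based split-brain resolution available in 3.7; the choice of 1kvm1 as the source is only an example:

  # compare the two copies directly on the bricks (run on each server)
  stat /STORAGES/g1r5p3/GFS/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids
  # once you know which brick holds the good copy, heal from that brick
  gluster volume heal 1KVM12_P3 split-brain source-brick \
      1kvm1:/STORAGES/g1r5p3/GFS /7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids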
As every week, we had our Gluster Community Meeting earlier today. The
agenda for next week can be found here:
https://public.pad.fsfe.org/p/gluster-community-meetings
Please add topics to the "Open Floor / BYOT" item around line 66 of
the etherpad and attend the meeting next week to discuss
Hi,
I just upgraded to gluster 3.7.2 from 3.7.1, and now the brick logs of
my replicated volumes are filling very quickly with messages like this:
[2015-06-24 14:08:13.989147] I [dict.c:467:dict_get] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f02d7cf7ef6] (--> /lib64/libglusterfs.so
Hi Humble,
I did, but the error persists. The CentOS repo is OK, but the EPEL.repo one
still gets a 404 error. The filenames differ from the repodata.xml, e.g.:
e0a86d586e6a64f58d9d08ce77e75cda4d17da839a464c3ebdb726ffd
2a6c0873530bd20e70d4f5ed5893a7a5ebcab8c15de3d6f4a7b43675d9b0052d
136fe9a1433236660.01
Hello, this is my first post.
In a test environment, I have been testing likely failures and how to recover from them.
I was testing the scenario where a disk begins to fail and we decide to remove the brick, along with its counterpart replica brick, from the volume.
Steps:
1. Call 'gluster volume re
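The command in step 1 is cut off above; for reference, a minimal sketch of removing one replica pair from a hypothetical distributed-replicated volume "testvol" (hostnames and brick paths are made up):

  gluster volume remove-brick testvol server3:/bricks/b1 server4:/bricks/b1 start
  gluster volume remove-brick testvol server3:/bricks/b1 server4:/bricks/b1 status
  # commit only after status reports the data migration as completed
  gluster volume remove-brick testvol server3:/bricks/b1 server4:/bricks/b1 commit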
Hi Frank,
Can you please retry now ?
--Humble
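Assuming the repodata was fixed on the server side, the retry on the client usually needs the cached metadata flushed first; a minimal sketch:

  # force yum to re-download repomd.xml and the file lists
  yum clean metadata
  yum update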
Hi there,
trying yum update and getting an error about missing files. It seems like
/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7//repodata/repodata.xml has
wrong entries -> filename mismatch with the directory entries.
yum:
ovirt-3.5-glusterfs-epel/7/x86 FAILED
ht
Hi All,
As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of GlusterFS, with an
average of one release per week, we need more helping hands on this task.
The responsibility includes building Fedora and EPEL rpms using the koji build
system and deploying the rpms to download.gluster.org
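For anyone unfamiliar with koji, the build step is roughly the following; the target and package names are only illustrative:

  # scratch build against a Fedora 22 target (not tagged into any repo)
  koji build --scratch f22 glusterfs-3.7.2-1.fc22.src.rpm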
Hi all,
I am testing pNFS performance (gluster-3.7.1 + ganesha-2.2) on Fedora 22.
There are 4 glusterfs nodes with ganesha.
I am following
https://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
The clients (Fedora 21) are fine to mount and commit some small
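For context, pNFS is an NFSv4.1 feature, so the clients need a v4.1 mount; the hostname and volume name below are made up:

  # pNFS needs NFSv4.1, hence minorversion=1
  mount -t nfs4 -o minorversion=1 ganesha-node1:/gvol0 /mnt/pnfs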
Hello,
For testing and scaling purposes I gave it a try with 2 Ubuntu 14.04 machines
on VMware Workstation. I installed GlusterFS 3.5 on both of them. I started
out by creating identical LVM volumes on both, formatted them with XFS, and
created replicated volumes:
# gluster volume create gvola replica 2 glus
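The create command above is cut off; a complete invocation typically looks like the following, with hypothetical hostnames and brick paths:

  gluster volume create gvola replica 2 gluster1:/data/brick1/gvola gluster2:/data/brick1/gvola
  gluster volume start gvola
  gluster volume info gvola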