Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Tuesday
- time: 12:00 UTC, 13:00 CET (run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.o
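The `date` hint in the meeting details can be used to get the meeting time in your own timezone (GNU coreutils `date`):

```shell
# Show 12:00 UTC in your local timezone (as hinted in the meeting details):
date -d "12:00 UTC"

# Double-check the CET figure: CET is UTC+01:00, so 13:00 +0100 is 12:00 UTC.
date -u -d "13:00 +0100" +%H:%M   # → 12:00
```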
Thanks, I will go and take a look at that :)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
I have my glusterfs configured in replicated mode. When I add a new brick,
it does not copy all of the data from the other brick. Is there a command to
sync this brick?
Thanks!!
> I took the recommendation of disabling the stripes. Now I just have encryption
> (at rest) and SSL enabled. The test I am running is a bwa indexing. Basic dd
> read/writes work fine and I don't see any errors in the gluster logs. Then
> when I try the bwa index I see the following:
> /shared/perf
As I understand it, this error appears because BWA opens hg19.fa.pac twice:
one handle for writing out and one for reading.
The second thread is trying to read some data which is not present on disk
yet.
On Mon, Mar 9, 2015 at 11:49 AM, Adam wrote:
> Hi Jeff/all,
>
> I took the recommendation of di
Hi Jeff/all,
I took the recommendation of disabling the stripes. Now I just have
encryption (at rest) and SSL enabled. The test I am running is a bwa
indexing. Basic dd read/writes work fine and I don't see any errors in the
gluster logs. Then when I try the bwa index I see the following:
/shared/
You need to do a rebalance
Sent from my BlackBerry 10 smartphone.
Original Message
From: Carlos J. Herrera
Sent: Monday, March 9, 2015 10:43
To: gluster-users@gluster.org
Subject: [Gluster-users] carlos
I have my glusterfs configured in replicated mode. When I add a new brick,
it does not copy all
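Depending on the volume type, one of the following should do it (the volume name `myvol` is hypothetical; check yours with `gluster volume info`):

```shell
# Distributed volume: a rebalance spreads existing files onto the new brick.
gluster volume rebalance myvol start
gluster volume rebalance myvol status

# Replicated volume: a full self-heal copies data onto the new replica brick.
gluster volume heal myvol full
gluster volume heal myvol info
```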
Uli Zumbuhl [Mon, Mar 09, 2015 at 01:37:50PM +0100]:
> If we are already speaking about stable packages I have a question: is the
> 3.6 release "safe-enough" to run in production?
We are testing this out at the moment and so far it looks good (also
thanks to the great support of the community).
If we are already speaking about stable packages I have a question: is the 3.6
release "safe-enough" to run in production?
> Sent: Monday, 9 March 2015 at 12:32
> From: "Kaleb S. KEITHLEY"
> To: "Osborne, Paul (paul.osbo...@canterbury.ac.uk)"
> , "gluster-users@gluster.org"
>
> Subject
On 03/09/2015 08:24 AM, Osborne, Paul (paul.osbo...@canterbury.ac.uk) wrote:
Hi,
At the moment for my testing I am using version: 3.5.3-1 using the
gluster.org 3.5 LATEST repository.
Is it safe to assume that there will be continued critical fixes sent
out to this version? Or is there another
Thanks Jeff for this blog post, looking forward to NSR and its chain
replication!
On Monday, March 9, 2015 1:00 PM, Jeff Darcy wrote:
> I would be very interested to read your blog post as soon as it's out and I
> guess many others too. Please do post the link to this list as soon as it's
> on
On 9 Mar 2015, at 12:00, Jeff Darcy wrote:
>> I would be very interested to read your blog post as soon as it's out and I
>> guess many others too. Please do post the link to this list as soon as it's
>> online.
>
> Sorry, forgot to do this earlier. It's here:
>
> http://pl.atyp.us/2015-03-life-o
Hi,
At the moment for my testing I am using version: 3.5.3-1 using the gluster.org
3.5 LATEST repository.
Is it safe to assume that there will be continued critical fixes sent out to
this version? Or is there another point release (3.6) that I should be using
that will knowingly have reasonab
Hi,
I am in the process of setting up some gluster backed fileservers as part of a
LAMP stack, where a pair of web servers using autofs weighting mount a pair of
gluster servers. It occurs to me that in order to help overcome split brain in
the event of a DR scenario a quorum server would be us
> I would be very interested to read your blog post as soon as it's out and I
> guess many others too. Please do post the link to this list as soon as it's
> online.
Sorry, forgot to do this earlier. It's here:
http://pl.atyp.us/2015-03-life-on-the-server-side.html
I would guess that it's because of the way Gluster manages its
replication internally. I'm trying to locate some reference on that
topic but right now I can't.
Maybe open a bug with RedHat?
Thanks,
JF
On 09/03/15 11:33, Michaël Couren wrote:
>
> If I set this option to "none", writing on S1 is OK while S2 is rebooting,
> but writing to S2 while S1 is rebooting is not possible.
Edit: stopping the current write on S2 and restarting it works...
I don't need to do that on S1...
--
Cordialement / Best regards, Michaël Couren,
ABES, M
- On 9 Mar 15, at 11:05, JF Le Fillâtre jean-francois.lefilla...@uni.lu
wrote:
> Hello,
>
> I believe that what you are looking for are the quorum options:
>
> http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type
>
> Thanks
>
As requested I now opened a bug for that issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1199936
Thanks for your help.
> Sent: Monday, 9 March 2015 at 09:14
> From: "Krishnan Parthasarathi"
> To: "Uli Zumbuhl"
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade f
Hello,
I believe that what you are looking for are the quorum options:
http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type
Thanks
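For reference, a sketch of the client-quorum settings from that page (the volume name `myvol` is hypothetical). Note that with `replica 2` and `quorum-type auto`, quorum requires a majority of bricks, with the first brick acting as tiebreaker when exactly half are up; that would explain writes failing on S2 while S1 reboots, but not the other way around:

```shell
# Majority-based client quorum; with replica 2, the first brick (on S1)
# must be reachable for writes to proceed.
gluster volume set myvol cluster.quorum-type auto

# Alternatively, require a fixed number of live replica bricks:
gluster volume set myvol cluster.quorum-type fixed
gluster volume set myvol cluster.quorum-count 1
```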
JF
On 09/03/15 10:52, Michaël Couren wrote:
> Hi,
> I built a simple configuration (glusterfs 3.6.2) w
Hi,
I built a simple configuration (glusterfs 3.6.2) with only 2 machines.
First I defined a trusted server pool of 2 servers, S1 and S2:
S1# gluster peer probe S2
Then I defined a replicated volume with 2 bricks. S1 has brick1 and S2 brick2:
S1# gluster volume create test replica 2 transport
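The steps above can be sketched end-to-end; the brick paths and the `tcp` transport are assumptions, since the original command is truncated:

```shell
# On S1: form the trusted pool and create a 2-brick replicated volume.
gluster peer probe S2
gluster volume create test replica 2 transport tcp \
    S1:/export/brick1 S2:/export/brick2
gluster volume start test
gluster volume info test
```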
> Actually I could solve my issue with mounting the volume by
> re-installing Gluster 3.6 from scratch. Before that I tried to update the
> volfile as I read in a bug report, but that did not work. So the only
> remaining problem is this socket warning, as reported.
>
> As I
Actually I could solve my issue with mounting the volume by
re-installing Gluster 3.6 from scratch. Before that I tried to update the
volfile as I read in a bug report, but that did not work. So the only
remaining problem is this socket warning, as reported.
As I did indee
On 03/05/2015 06:33 PM, Vijay Bellur wrote:
> On 03/01/2015 11:44 PM, Atin Mukherjee wrote:
>> Thanks Fanghuang for your nice words.
>>
>> Vijay,
>>
>> Can we try to take this patch in for 3.7 ?
>>
>
> Happy to get this in to 3.7. Could you please rebase this patch to the
> latest git HEAD?
I've