On 06/04/2012 07:15 PM, Amar Tumballi wrote:
Do you know if I'll be able to convert a distribute to
distribute-replicate this way?
1) delete the distribute volume
2) create a distribute-replicate volume
3) run the self-heal, which hopefully results in the data moved to the
other brick, *not*
hi,
Not necessarily. The first time a file is accessed, whichever brick responds
fastest is the one that serves reads/stat etc. Writes/create/rm etc. happen
on both bricks.
Pranith.
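For what it's worth, 3.3 also appears to allow raising the replica count in place with add-brick, which would avoid deleting the volume entirely. A sketch (the volume and brick names here are made up, and I haven't verified this on your setup):

```shell
# Turn a plain distribute volume into distribute-replicate by raising the
# replica count while adding one matching brick per existing brick (3.3+):
gluster volume add-brick myvol replica 2 server2:/bricks/b1

# Then crawl the mount once so AFR copies existing data onto the new brick:
find /mnt/myvol -noleaf -print0 | xargs --null stat >/dev/null
```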
- Original Message -
From: "Костырев Александр Алексеевич"
To: gluster-users@gluster.org
Se
I am on: glusterfs 3.3.0 built on Jun 1 2012 12:08:38
On 5 June 2012 11:30, Anand Avati wrote:
> Are you on 3.2.x? If so can you try 'gluster volume set <volname>
> performance.stat-prefetch off' and try again?
>
> Avati
>
>
> On Mon, Jun 4, 2012 at 10:58 PM, Sabyasachi Ruj wrote:
>>
>> Did not make
Are you on 3.2.x? If so can you try 'gluster volume set <volname>
performance.stat-prefetch off' and try again?
Avati
On Mon, Jun 4, 2012 at 10:58 PM, Sabyasachi Ruj wrote:
> Did not make any difference using --attribute-timeout=0. Strangely I
> am noticing that this problem happens if I use tab-co
Did not make any difference using --attribute-timeout=0. Strangely I
am noticing that this problem happens if I use tab-completion to
complete the name of "sqlite.org"!
On 5 June 2012 04:15, Anand Avati wrote:
> You probably are treading within the narrow boundary of fuse's
> entry/attribute tim
I tried it to host Virtual Machine images and it didn't work at all. I was
hoping to be able to spread the IOPS more through the cluster.
That's part of what I was trying to say on the email I sent earlier today.
I saw that mail and I agree that the target of 3.3.0 was to make
glusterfs more
I tried it to host Virtual Machine images and it didn't work at all. I was
hoping to be able to spread the IOPS more through the cluster.
That's part of what I was trying to say on the email I sent earlier today.
Fernando
-Original Message-
From: gluster-users-boun...@gluster.org
[mailt
Hello! A question about IOs:
I've fired up glusterfs with a distributed-replicated volume,
like this:
gluster volume create test-mail replica 2 transport tcp 10.0.1.132:/mnt/ld0
10.0.1.133:/mnt/ld0 10.0.1.132:/mnt/ld1 10.0.1.133:/mnt/ld1
server1 is 10.0.1.132
server2 is 10.0.1.133
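If it helps, with "replica 2" the bricks are paired in the order they appear on the command line, so the create command above gives the following layout (just restating it, nothing new):

```shell
# replica 2 pairs consecutive bricks from the create command:
#   pair 1: 10.0.1.132:/mnt/ld0  <->  10.0.1.133:/mnt/ld0
#   pair 2: 10.0.1.132:/mnt/ld1  <->  10.0.1.133:/mnt/ld1
# Each pair spans both servers, so losing one server keeps all data online.
gluster volume info test-mail   # prints the brick order actually in use
```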
glu
On 06/04/2012 11:21 AM, Amar Tumballi wrote:
On 06/01/2012 10:18 PM, Travis Rhoden wrote:
Did an answer to Christian's question pop up? I was going to write in
with the exact same one.
If I created a replicated striped volume, what would keep it from
working in a non-Hadoop environment? Does th
You probably are treading within the narrow boundary of fuse's
entry/attribute timeout. Can you mount with --attribute-timeout=0 and
--entry-timeout=0 and see if it eliminates the behavior?
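Concretely, that would look something like this (server/volume names are placeholders; both the fuse client binary and mount.glusterfs accept these timeouts):

```shell
# Direct fuse client invocation with entry/attribute caching disabled:
glusterfs --volfile-server=server1 --volfile-id=testvol \
          --attribute-timeout=0 --entry-timeout=0 /mnt/testvol

# Or via mount(8):
mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 \
      server1:/testvol /mnt/testvol
```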
Avati
On Mon, Jun 4, 2012 at 6:26 AM, Sabyasachi Ruj wrote:
> So here is the situation (this is not repro
Hi,
I have been reading and trying to test (without much success) Gluster 3.3 for
Virtual Machine storage, and from what I could see it isn't yet quite ready for
running virtual machines.
One great improvement about the granular locking which was essential for these
types of environments was ac
I have the same problem with 3.2.6: from time to time, on a random basis, some
server gives me "Transport endpoint is not connected".
I have to reboot the server to make it connect again.
I run Fedora 16 and Gluster 3.2.6-2.
- Original Message -
From: "Brian Candler"
To: "Amar Tumballi"
- Original Message -
> Doesn't sound like the solution we need for a large cluster. We would
> like to keep it simple and stupid. Squeeze has libssl version
> 0.9.8. Maybe you can work with "Toby Corkindale" since he managed
> to create a deb for squeeze?
I hear what you're saying. I e
On Fri, May 04, 2012 at 01:27:35PM +0530, Amar Tumballi wrote:
> Are you sure the clients are not automatically remounted within 10
> seconds of servers coming up? This was working fine from the time we
> had networking code written.
>
> Internally, there is a timer thread which makes sure we
> au
Doesn't sound like the solution we need for a large cluster. We would like
to keep it simple and stupid. Squeeze has libssl version 0.9.8. Maybe you
can work with "Toby Corkindale" since he managed to create a deb for
squeeze?
2012/6/2 John Mark Walker
> Philip -
>
> Gluster.org is only nominal
So here is the situation (this is not reproducible every time, but I
got it twice).
directory "sqlite3.org" has already been renamed from client2.
executing these commands from client1 gives this output:
# stat sqlite.org
stat: cannot stat `sqlite.org': No such file or directory
This pro
Hi David
Thanks for clearing that up.
With regard to the "self-heal":
find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null
a) Do I run this on server1, server2, or the client, or does it not matter?
Assume server1 is the one with the latest copy of the data.
b) Would the self-heal I've been reading about in 3.3 no
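For (b), my understanding is that in 3.3 the self-heal daemon can replace the find|stat crawl; a sketch, run from any server in the trusted pool ("testvol" stands in for your volume name):

```shell
gluster volume heal testvol full   # proactively crawl and heal everything
gluster volume heal testvol info   # list entries that still need healing
```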
On 06/04/2012 05:21 PM, David Coulson wrote:
Question (4.2)
- Is it safe to create a brick in a directory that already has files in
it?
As long as you force a self-heal on it before you use it.
Do you know if I'll be able to convert a distribute to
distribute-replicate this way?
1) delet
A bug has already been logged for this issue and we are working on it.
https://bugzilla.redhat.com/show_bug.cgi?id=762989
Regards,
Raghavendra Bhat
- Original Message -
From: "David Coulson"
To: "Raghavendra Bhat"
Cc: "Gluster General Discussion List"
Sent: Monday, June 4, 2012 3
On 6/4/12 4:05 AM, Jacques du Rand wrote:
Hi guys,
This all applies to Gluster 3.3.
I love gluster, but I'm having some difficulties understanding some
things.
1. Replication (with existing data):
Two servers in simple single-brick replication, i.e. 1 volume (testvol)
-server1:/data/ && server2:/d
On 06/04/2012 04:46 PM, Raghavendra Bhat wrote:
>
> Glusterfs client starts binding from port number 1023, and if a port
> is not available it tries to bind to the next lower port. (I.e. suppose
> 1023 is not available because some other process is already using it,
> then it decrements the port to
Is there a way to change this behavior? It's particularly frustrating
having Gluster mount a filesystem before the service starts up, only to
find it steps on the top end of the <1024 ports often - IMAPS and POP3S are
typical victims at 993 and 995.
Why does it not use ports within the
/proc/sys
The Glusterfs client starts binding from port number 1023, and if a port is not
available it tries to bind to the next lower port. (I.e. suppose 1023 is not
available because some other process is already using it; then it decrements
the port to 1022 and tries to bind.) So in this case it has tried
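Raghavendra's descending-port scan can be sketched in shell (illustration only: the real client does this in C via bind(), and `pick_port` plus the busy-port list here are made up for the demo):

```shell
# Return the first port <= $1 that is not in the space-separated busy list $2,
# mimicking the client's "start at 1023 and count down" strategy.
pick_port() {
  local port=$1 in_use=" $2 "
  while [ "$port" -ge 1 ]; do
    case "$in_use" in
      *" $port "*) port=$((port - 1)) ;;   # busy: step down and retry
      *) echo "$port"; return 0 ;;         # free: this is the one we bind
    esac
  done
  return 1
}

pick_port 1023 "1023 1022"   # -> 1021, the first free port below the busy ones
```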
Hi guys,
This all applies to Gluster 3.3.
I love gluster, but I'm having some difficulties understanding some things.
1. Replication (with existing data):
Two servers in simple single-brick replication, i.e. 1 volume (testvol)
-server1:/data/ && server2:/data/
-server1 has a few million files in the /d