Brent,
True, it is a nice idea, as the size of the directory inode is not used
anywhere (is it used?). We will keep this in mind.
Krishna
On Fri, Mar 20, 2009 at 7:31 AM, Brent A Nelson br...@phys.ufl.edu wrote:
Sage Weil with the Ceph filesystem came up with a clever idea, and it might
be a tempting
Paul,
I could not find a mem leak bug previously reported by you. Can you
give more details about your setup? The kind of operations that run
on the FS? If possible, what triggers the mem leak?
Krishna
On Sun, Mar 8, 2009 at 12:15 PM, Paul Rawson plr...@gmail.com wrote:
Just letting you know
Dan,
I think the ping-timeout value is not large enough for you. Can you put
option ping-timeout 50 in the client volumes and see if you still get
the error? It is presently 10 secs; if it works fine for you, we will
increase the default value in the code.
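A sketch of where the option would go, assuming a typical protocol/client volume (the host and subvolume names here are placeholders):
volume client
type protocol/client
option transport-type tcp/client
option remote-host 192.168.0.1 # placeholder server address
option remote-subvolume brick # placeholder remote volume name
option ping-timeout 50 # raise from the 10-second default
end-volume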
Thanks
Krishna
On Tue, Mar 10, 2009 at 11:04
Mickey,
Did you delete the directory at the back end and recreate it?
(assuming you are using DHT)
Can you tell us exactly what you did?
Krishna
On Fri, Feb 20, 2009 at 12:04 AM, Mickey Mazarick
m...@digitaltadpole.com wrote:
Is there an easy way to fix this error if it's on a directory?
I
On Wed, Feb 18, 2009 at 1:41 AM, Gordan Bobic gor...@bobich.net wrote:
Is there any reason to prefer:
# find /gluster/mountpoint -type f -exec head -c1 '{}' \;
This was necessary in the old code, as the selfheal code was implemented in
the open() flow.
to
# ls -laR /gluster/mountpoint
This works
If you want to use TLA repository instead of tar balls:
tla register-archive http://arch.savannah.gnu.org/archives/gluster/
After this:
tla get -A glus...@sv.gnu.org glusterfs--mainline--3.0 glusterfs
This will get the latest source into the glusterfs directory.
If you want a specific version you
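(As a hypothetical illustration only, assuming tla's usual category--branch--version--patch-N revision naming; the patch number below is made up:)
tla get -A glus...@sv.gnu.org glusterfs--mainline--3.0--patch-800 glusterfs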
192.168.240.228 # IP address of the remote brick
option remote-subvolume ser022 # name of the remote volume
end-volume
volume afr
type cluster/afr
subvolumes cli01 cli02
end-volume
Regards
2009/2/13 Krishna Srinivas kris...@zresearch.com
On Fri, Feb 13, 2009 at 1:50 PM
Alain,
Are those your actual vol files? Just want to confirm.
option remote-host 192.168.x.x # IP address of server2
You need to give the proper IP address here.
option auth.ip.brick1.allow *all
option auth.ip.afr.allow *all
*all is incorrect here.
If these are not your actual vol
Is there a way to specify --disable-direct-io-mode in fstab?
volume fuse
type mount/fuse
option direct-io-mode 1
option entry-timeout 1
option attr-timeout 1
option mount-point /mnt/storage
subvolumes unify
end-volume
You can give the options to fuse in the vol file.
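If you still want an fstab entry, a minimal sketch of the usual form, assuming the client vol file is saved at a hypothetical path (the fuse options themselves stay in the vol file):
# /etc/fstab -- hypothetical entry
/etc/glusterfs/client.vol /mnt/storage glusterfs defaults 0 0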
On
On Sun, Feb 8, 2009 at 7:14 PM, Gordan Bobic gor...@bobich.net wrote:
Another one - tail -f doesn't appear to work correctly for logs on
GlusterFS. The logs themselves seem to be OK (not corrupted), but tail -f
doesn't seem to properly list them incrementally, the output ends up
corrupted
Melvin,
Can you try with the latest code? There were crucial bug fixes in HA
after 2.0.0rc1
Krishna
2009/2/6 Melvin Wong melvin.w...@muvee.com:
Hi,
I'm setting up server-side AFR using glusterfs-2.0.0rc1 and the clients are
using the HA translator to connect to the 2 servers. After running for some
Ruslan,
Indeed, it is a strange error. Is it an easy bug to reproduce? By the
way, don't use a single process for server and client; we found issues
regarding locking. If the bug is easy to reproduce, you can also check
whether it is seen when server and client are different processes.
Krishna
On Thu, Feb 5,
Nicolas,
You can get back to us if you still have the problem.
Krishna
2009/2/7 Cory Meyer cory.me...@gmail.com:
I ran into this same issue; it should be fixed in
glusterfs--mainline--3.0--patch-899.
On Fri, Feb 6, 2009 at 6:12 AM, nicolas prochazka
prochazka.nico...@gmail.com wrote:
Thanks Brent.
Avati, if the lookup on the root has not been done, the current operation
can be paused and continued later after looking up the root. That would
fix the issue. Alternatively, when the glusterfs client is started, fuse-bridge
can initiate a lookup on the root.
PS Are some of Filipe Maia's recent patches
If io-threads is used even on the client side there should be a positive
effect on performance, as more threads will be serving requests
(even with non-blocking I/O on the sockets). Not sure why it was stated
that performance will not be affected. Let me check with them.
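For example, a minimal sketch of loading io-threads on the client stack (the volume names and thread count are hypothetical):
volume iot
type performance/io-threads
option thread-count 4 # hypothetical; more threads serve requests concurrently
subvolumes client # assumes a protocol/client volume named client
end-volume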
Krishna
On Sat, Feb 7, 2009
, I think, because with my two servers:
I stop the first, then restart it, wait, stop the second, restart it, and all is
KO.
If I just stop the first and test, then all is OK.
Nicolas
On Tue, Feb 3, 2009 at 3:50 PM, Krishna Srinivas kris...@zresearch.com
wrote:
Nicolas,
When you restart
Nicolas,
When you restart the server, logs indicating EBADFD are fine; AFR will
try the operation on the other server. When you have the situation
where the glusterfs client hangs, can you attach gdb to the glusterfs
process and mail us the backtrace?
gdb -p <pid of glusterfs>
and type bt at the gdb prompt.
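Concretely, something like this (a sketch; pidof is one way to find the pid):
gdb -p $(pidof glusterfs)   # attach to the running client
(gdb) bt                    # print the backtrace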
On Mon, Feb 2, 2009 at 8:30 AM, Ben Mok ben...@powerallnetworks.com wrote:
Hi All,
I am doing a redundancy test: when I remove a directory before one server goes
down, the directory is still kept on that server even after running the self-heal script. If
the directory has no files, it can be deleted
Jordi,
With the information you have given, it is difficult to guess what
might be causing the problem. The Connection refused message
indicates that the server process was not running. Can you check?
About the stale mount point: were the commands hanging when you tried to
operate on the mount
Josef,
Avati is trying to get access to a Mac machine to test the latest
code. Previous releases used to work fine on Mac but some of the
recent code changes might have changed that scenario.
Krishna
On Wed, Jan 14, 2009 at 8:37 PM, At Work ad...@matphot.com wrote:
Hello,
I'm having an odd
Dan,
Will investigate as soon as possible. Can you paste the backtrace using gdb?
gdb -c <path to core file> glusterfs, and then type bt.
Is this problem easily reproducible?
Krishna
On Fri, Jan 16, 2009 at 5:49 AM, Dan Parsons dpars...@nyip.net wrote:
I just had the glusterfs client crash on a
David,
The information you have given is not enough to analyze what might be happening.
The setup is easily understood: you have a lot of afrs and each afr has
4 subvols; DHT uses the afrs.
You create 1000 directories on the mount point but see only 100 when you do ls.
Are you bringing any of the
Hi Corin,
What do you mean by a stack-based design?
On Wed, Jan 14, 2009 at 8:59 PM, Corin Langosch cor...@gmx.de wrote:
Hi again,
I thought glusterfs was using a stack-based design instead of a
threaded one to simplify the internal design etc., for example to get rid
of nasty thread
Nicolas,
It might be a bug. Let me try to reproduce the problem here and get back to you.
Krishna
On Wed, Jan 14, 2009 at 6:59 PM, nicolas prochazka
prochazka.nico...@gmail.com wrote:
hello again,
To finish with this issue and the information I can send you:
if I stop glusterfsd (on server B)
Dan,
Are you using the same disk for both dht and stripe at the back end? i.e.,
do you create two directories on the disk and export them, using one for dht
and one for stripe?
Can you mail the vol files?
Krishna
On Tue, Jan 13, 2009 at 4:56 AM, Dan Parsons dpars...@nyip.net wrote:
I'm following the directions on
Dan,
Can you remove the backend directories, re-create them, and try this again?
Krishna
On Tue, Jan 13, 2009 at 6:31 AM, Dan Parsons dpars...@nyip.net wrote:
I'm unable to make dht work. I simply can't copy files to it without getting
the below error. If I make a subdirectory and put files
Gordon,
Anything you notice in the logs?
Krishna
On Wed, Jan 14, 2009 at 12:16 AM, Gordan Bobic gor...@bobich.net wrote:
It would appear that Firefox and GlusterFS don't mix too well. Every once in
a while it'll corrupt its URL history, and it point-blank refuses to save
any bookmarks if the
On Thu, Jan 8, 2009 at 6:25 PM, Daniel Maher dma+glus...@witbe.net wrote:
Krishna Srinivas wrote:
HA is also useful when we use server-side AFRs.
This statement is highly interesting. Would it be possible to have more
information on how the HA translator could be intelligently implemented
Melvin,
Patch 841 fixes the issue.
Regards
Krishna
On Fri, Jan 9, 2009 at 11:06 AM, Melvin Wong melvin.w...@muvee.com wrote:
Hi,
Does anyone know what these logs mean?
2009-01-09 13:30:35 E [ha.c:4301:notify] glusterfs-ha: GF_EVENT_CHILD_UP
from cluster
2009-01-09 13:30:35 E
On Thu, Jan 8, 2009 at 1:00 PM, Dan Parsons dpars...@nyip.net wrote:
Anyone? :)
--Original Message--
From: Dan Parsons
Sender: gluster-devel-bounces+dparsons=nyip@nongnu.org
To: Gluster Developers Discussion List
Subject: [Gluster-devel] upgrading; 1.4 vs waiting for 2.x
Sent:
% /locfs
/dev/sdb1 459G 199M 435G 1% /locfsb
df: `/mnt/new': Transport endpoint is not connected
Thanks,
Yaomin
--
From: Krishna Srinivas kris...@zresearch.com
Sent: Tuesday, January 06, 2009 1:09 PM
To: yaomin @ gmail
yangyao @ gmail yangyao...@gmail.com wrote:
Krishna,
1, The version is 1.3.9
2, the client and server vol files are in the attachments.
3, The result is No Stack
Thanks,
Yaomin
--
From: Krishna Srinivas kris...@zresearch.com
Sent: Tuesday, January
, January 05, 2009 10:52 PM
To: Krishna Srinivas
Cc: gluster-devel@nongnu.org
Subject: Re: [Gluster-devel] Cascading different translator doesn't work as
expectation
Krishna,
Thank you for your quick response.
There are two log information in the client's log file when setting up
the client
Melvin,
1.3.11 is very old and a lot of bug fixes have gone in since. Can you try the
latest release on 1.4?
http://www.gluster.org/download.php
Krishna
On Mon, Jan 5, 2009 at 4:33 PM, Melvin Wong melvin.w...@muvee.com wrote:
Hi,
I have a setup using server-side afr (glusterfs-1.3.11). Is it
Alfred,
Can you check the client logs for any error messages?
You are using ALU; it might be creating the files on the disk with max
space (which would be your storage nodes 3 and 4).
You can check with the RR scheduler to see if all the nodes are participating.
How much memory do the servers and client use?
Alfred,
It looks like a bug with iozone. I tried it on ext3:
[r...@client01 test]# /opt/benchmarks/iozone-3.315/bin/iozone -s 3k
-i 0 -i 1 -f test1
Iozone: Performance Test of File I/O
Version $Revision: 3.315 $
Compiled for 64 bit mode.
Schomburg,
You have 4 servers and one client. Each server has to export 2
directories, /raid01a and /raid01b (FUSE does not play any role on the
servers). On the client machine, glusterfs mounts using the client
vol file combining all the exported directories. This would be a
typical setup in
On Thu, Jan 1, 2009 at 1:59 AM, Martin Fick mogul...@yahoo.com wrote:
I am a bit curious about the new HA translator and how it is supposed to
work? I have looked through the code a bit and this is my naive
interpretation of the way it is designed:
It appears that the HA translator keeps
Melvin,
HA is better than heartbeat as it makes sure that the failover is
smooth on the open file descriptors, taking care of the problems you
mentioned. Give it a try and let us know if something does not work.
Krishna
On Wed, Dec 24, 2008 at 9:25 AM, Melvin Wong melvin.w...@muvee.com wrote:
Dan
Gowda has already made a fix which is available in the 1.4 release.
You can try this release
http://ftp.gluster.com/pub/gluster/glusterfs/1.4/glusterfs-1.4.0rc6.tar.gz
Krishna
On Tue, Dec 23, 2008 at 5:00 AM, Dan Parsons dpars...@nyip.net wrote:
OK, yes, the bug has come back. What do I try
Dan, Lukas, Thomas,
Any updates on this thread? Shall we conclude that it is not a memory
leak and io-cache is working fine?
Regards
Krishna
On Sat, Nov 8, 2008 at 10:32 AM, Krishna Srinivas [EMAIL PROTECTED] wrote:
Dan, Lukas, Thomas,
Internally io-cache limits the cache size to 120
, at 1:05 AM, Lukas Hejtmanek wrote:
Hello,
On Tue, Nov 04, 2008 at 12:37:03PM +0530, Krishna Srinivas wrote:
We want to reproduce the leak in our setup to fix it. What is your
setup on the client side? How many servers do you have? What are the
applications you run on the mount point? Do you
the 'cat' process before things got
out of hand. But, there's your test.
[EMAIL PROTECTED] ~]# glusterfs --version
glusterfs 1.3.11 built on Aug 21 2008 11:26:38
Repository revision: glusterfs--mainline--2.5--patch-795
Dan Parsons
On Nov 7, 2008, at 11:31 AM, Krishna Srinivas wrote:
Lukas
Thomas,
We want to reproduce the leak in our setup to fix it. What is your
setup on the client side? How many servers do you have? What are the
applications you run on the mount point? Do you observe leak only when
certain operations are done? (I am just looking for more clues)
Thanks
Krishna
On
Hi Gordan,
Next pre release will fix this, excuse us for not responding to this thread.
Thanks
Krishna
On Mon, Nov 3, 2008 at 7:05 AM, Gordan Bobic [EMAIL PROTECTED] wrote:
Hi,
1.4.0pre5 doesn't seem to work for me at all.
When I try to ls the share, I get broken directory entries. This
On Tue, Oct 21, 2008 at 5:54 PM, Gordan Bobic [EMAIL PROTECTED] wrote:
I'm starting to see lock-ups when using a single-file client/server setup.
machine1 (x86): =
volume home2
type protocol/client
option transport-type tcp/client
option
Chris,
Can you check if the logs give you a clue when you start glusterfs
and when you start the copy?
Krishna
On Sun, Oct 19, 2008 at 5:38 AM, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Glusterfs version 1.3.10
Krishna Srinivas wrote:
Chris,
Which glusterfs version are you using
Rommer,
Thanks, we are working on the solution; for now please use separate processes
for client and server.
Krishna
On Fri, Oct 17, 2008 at 5:21 AM, Rommer [EMAIL PROTECTED] wrote:
On Wed, 15 Oct 2008 16:36:25 +0530
Krishna Srinivas [EMAIL PROTECTED] wrote:
Rommer,
Thanks for that, we
Oops, forgot to CC gluster-devel...
On Fri, Oct 17, 2008 at 12:58 AM, Krishna Srinivas
[EMAIL PROTECTED] wrote:
On Fri, Oct 17, 2008 at 12:50 AM, Mickey Mazarick
[EMAIL PROTECTED] wrote:
Ultimately we want to use it permanently. We are looking for a system that
lets our infiniband system
Mickey,
Wait for the announcement :D the tla code is not stable.
Krishna
On Fri, Oct 17, 2008 at 2:56 AM, Mickey Mazarick [EMAIL PROTECTED] wrote:
Our first test didn't go so well; we got the error below.
If you remind me how to display the stack trace using the core file, I'll send
that as well
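A sketch of the usual way, as given elsewhere in these threads (the core path is a placeholder):
gdb -c /path/to/core glusterfs   # load the core against the glusterfs binary
(gdb) bt                         # print the stack trace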
Rommer,
Thanks for that, we will get back to you.
Krishna
On Wed, Oct 15, 2008 at 4:27 PM, Rommer [EMAIL PROTECTED] wrote:
Hello,
### afr #
volume afr
type cluster/afr
subvolumes io-thr remote
Here, change the order of the subvolumes, i.e., as:
subvolumes remote
On Tue, Oct 14, 2008 at 9:14 PM, Rommer [EMAIL PROTECTED] wrote:
On Tue, 14 Oct 2008 21:04:02 +0530
Krishna Srinivas [EMAIL PROTECTED] wrote:
Hi Rommer,
Can you paste the spec file of the other server too?
Thanks
Krishna
The config is the same except for the IP addresses:
### local brick
Hi Snezhana,
What is happening is: when one node was down, an entry
(dir/file/symlink) was deleted and
another entry of a different type was created. During selfheal this
condition is not being handled
and an I/O error is being returned. We will take care of this in the
coming release. For now please
with the
client...! In terms of the time and its execution.
KwangErn
On Sat, Sep 13, 2008 at 1:18 PM, Krishna Srinivas [EMAIL PROTECTED]
wrote:
KwangErn,
Can you give us the setup details?
Thanks
Krishna
On Sat, Sep 13, 2008 at 2:12 PM, KwangErn Liew [EMAIL PROTECTED]
wrote:
Hm
Hi Lukas,
Which version of glusterfs are you using?
Did you restart the glusterfsd server on 192.168.1.40 after
creating the directories (to rule out the mkdirs contributing to
the write data)?
Apparently just the find and ls are creating the writes?
Thanks
Krishna
On Tue, Sep 16, 2008 at 1:55
KwangErn,
Can you give us the setup details?
Thanks
Krishna
On Sat, Sep 13, 2008 at 2:12 PM, KwangErn Liew [EMAIL PROTECTED] wrote:
Hm, it seems to be 'normal'. I have just run dd across the network and this
is what I have...
$ dd if=/dev/zero of=/home/storage/testfile bs=16384k count=100
of August 2008 12:39:03 you wrote:
On Thu, Aug 28, 2008 at 3:01 PM, Łukasz Mierzwa [EMAIL PROTECTED]
wrote:
Thursday 28 of August 2008 07:06:30 Krishna Srinivas wrote:
On Wed, Aug 27, 2008 at 10:55 PM, Łukasz Mierzwa
[EMAIL PROTECTED]
wrote:
Tuesday 26
Krishna Srinivas [EMAIL PROTECTED]
To: James E Warner/DEF/[EMAIL PROTECTED
On Thu, Aug 28, 2008 at 3:01 PM, Łukasz Mierzwa [EMAIL PROTECTED] wrote:
Thursday 28 of August 2008 07:06:30 Krishna Srinivas wrote:
On Wed, Aug 27, 2008 at 10:55 PM, Łukasz Mierzwa [EMAIL PROTECTED]
wrote:
Tuesday 26 August 2008 16:28:41 Łukasz Mierzwa wrote:
Hi,
I am testing
Hi Brent,
Yes, it is due to the recent AFR change. We made the change to hold locks during
writes. This is to avoid a race condition that is seen when two afrs are
writing to the same region. We are seeing how we can improve the performance.
What we do now is:
hold locks on all subvols (till we
On Thu, Aug 28, 2008 at 12:45 AM, James E Warner [EMAIL PROTECTED] wrote:
Hi,
I'm currently testing gluster to see if I can make it work for our HA
filesystem needs. And in initial testing things seem to be very good
especially with client side AFR performing replication to our server
On Wed, Aug 27, 2008 at 10:55 PM, Łukasz Mierzwa [EMAIL PROTECTED] wrote:
Tuesday 26 August 2008 16:28:41 Łukasz Mierzwa wrote:
Hi,
I am testing glusterfs for small-file storage. First I set up a single-disk
gluster server, connected to it from another machine, and served those files
Of
Dmitriy Kotkin
Sent: Thursday, August 07, 2008 11:54 AM
To: 'Krishna Srinivas'
Cc: 'Gluster Developers Discussion List'
Subject: RE: [Gluster-devel] Unify/AFR crashes
Hello Krishna.
I got the same bug yesterday after you asked me to change the transport timeout.
Namespace is activating bailing
Dmitriy,
Did you get any help on IRC? What kind of operations do you do
when you say intense fops? Does the first node crash? Do you
have the core file? If yes, can you get the backtrace?
Krishna
On Tue, Aug 5, 2008 at 7:23 PM, Dmitriy Kotkin [EMAIL PROTECTED] wrote:
Hello guys!
I'm using
Hi Rohan,
I suspect that this is a bug which we already know of. Can you
cd out of the file system and back again and see if things
work? If it's an application running on the system, stop and
start it again and see if you still get the error logs that you
have pasted below.
Thanks
Krishna
On
is not getting created.
We also tried creating /mailbox/0/2, which got created.
Any workaround? Can we create it on the EXT3 file system manually?
Rohan
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Krishna Srinivas
Sent: Thursday, July 31, 2008 5:20 PM
PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Krishna Srinivas
Sent: Thursday, July 17, 2008 12:48 PM
To: Rohan
Cc: Amar S. Tumballi; Gluster-devel@nongnu.org
Subject: Re: [Gluster-devel] RE: Help needed
Rohan,
Can access be given to these machines? Does the log have
any useful
Martin,
If you are modifying the backend directly, you shouldn't do that.
Krishna
On Thu, Jul 17, 2008 at 9:15 PM, Martin Fick [EMAIL PROTECTED] wrote:
--- On Thu, 7/17/08, Tomáš Siegl [EMAIL PROTECTED] wrote:
Step1: Client1: cp test_file.txt /mnt/gluster/
Step2: Brick1 and Brick4: has
Rohan,
Can access be given to these machines? Does the log have
any useful information?
Krishna
On Thu, Jul 17, 2008 at 12:41 PM, Rohan [EMAIL PROTECTED] wrote:
No these are not symlinks.
_
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Amar S.
Tumballi
Sent:
Kotkin,
Can you reproduce this problem consistently and easily?
Krishna
On Mon, Jun 30, 2008 at 4:19 PM, Kotkin Dmitriy [EMAIL PROTECTED] wrote:
Hello guys!
When I copy many files to the unify-over-2-afrs mount point, it sometimes
places files on afr1 and one node of afr2. So I'm getting
Joshua,
There are two meanings for read scheduling:
1) already implemented: load-balance reads such that a file is always read
from the same subvol.
2) not implemented: load-balance a single read such that, for a read call of 1000 bytes,
500 bytes are read from the 1st subvol and the next 500 bytes
Ah! That kind of read scheduling already happens now, i.e., a file is read from
a particular subvol depending on the inode number; this gives
us fair scheduling. This is the default. However, if you want to
read from a particular subvol for all files, you can specify it with
option read-subvolume
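A minimal sketch, assuming an afr volume with two hypothetical subvols:
volume afr
type cluster/afr
option read-subvolume brick1 # always read from this subvol
subvolumes brick1 brick2
end-volume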
Onyx,
Just thinking about it: using snapshots as glusterfs storage volumes
is not advisable unless you know what you are doing. If you roll back
a volume, it will depend on how the selfheal of unify/afr handles it. AFR
would just update the rolled-back volume with the other copy.
Better use of LVM +
Rohan,
What do you mean by connections here?
On Thu, Jun 12, 2008 at 2:30 PM, Rohan [EMAIL PROTECTED] wrote:
Hi,
We are using GlusterFS as the home directory on an FTP server. It's a heavy ftp site
and we are using standard VSFTP. We found that connections are not getting
closed. And after a few hrs
.
Krishna Srinivas wrote:
Victor,
Can you paste spec files? and which version are you using?
Krishna
On Wed, Jun 11, 2008 at 1:54 PM, Victor San Pedro [EMAIL PROTECTED] wrote:
Hello.
Finally I managed to obtain good time results with my old computers with
the booster volume in unify
Nicolas... we will get back on this.
Krishna
On Mon, Jun 9, 2008 at 12:57 PM, nicolas prochazka
[EMAIL PROTECTED] wrote:
Hi,
My conf files for client and server.
You can reproduce this problem with qemu and the qcow2 format:
one big file (2G) on the server.
Regards,
Nicolas
Victor,
Can you paste spec files? and which version are you using?
Krishna
On Wed, Jun 11, 2008 at 1:54 PM, Victor San Pedro [EMAIL PROTECTED] wrote:
Hello.
Finally I managed to obtain good time results with my old computers with
the booster volume in unify over afr...
It was important for
Nicolas,
Just for the record, can you give your spec files?
How many dirs and files do you have? (to get an
idea, to reproduce the problem in our setup)
Krishna
On Fri, Jun 6, 2008 at 10:05 PM, nicolas prochazka
[EMAIL PROTECTED] wrote:
Hi,
I'm using glusterfs with sparse files, read is ok and
On Fri, May 30, 2008 at 12:32 PM, Amar S. Tumballi [EMAIL PROTECTED] wrote:
AFR ( STRIPE (server1, server2), server3)
Let afr serve over the stripe for high performance by default, and
server3
be a backup. This works well for read-heavy workloads.
How will afr over stripe help for
On Fri, May 30, 2008 at 1:47 PM, Krishna Srinivas [EMAIL PROTECTED] wrote:
On Fri, May 30, 2008 at 12:32 PM, Amar S. Tumballi [EMAIL PROTECTED] wrote:
AFR ( STRIPE (server1, server2), server3)
Let afr serve over the stripe for high performance by default, and
server3
be a backup
Daniel,
As you guessed, unify+AFR already provides the functionality you are talking
about for the balance translator.
Let's fix the selfheal problem you faced when you started this thread;
is it still valid?
Krishna
On Thu, May 29, 2008 at 2:13 AM, Daniel Wirtz [EMAIL PROTECTED] wrote:
I
Victor,
Yes, you can use stripe and afr, preferably stripe over a bunch of AFRs.
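A sketch of that layout, with hypothetical brick names (each afr pair mirrors, and stripe spreads data across the pairs):
volume afr1
type cluster/afr
subvolumes server1 server2
end-volume
volume afr2
type cluster/afr
subvolumes server3 server4
end-volume
volume stripe
type cluster/stripe
subvolumes afr1 afr2
end-volume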
Krishna
On Wed, May 28, 2008 at 9:29 PM, Kevan Benson [EMAIL PROTECTED] wrote:
Victor San Pedro wrote:
Hello. I would like to ask you the following question:
Is it possible to use cluster STRIPING to build a
Hi Forcey,
Theoretically, AFR over unify has to work, but it is not a well-tested
scenario. We have tested it just enough so that it works, but we can
fix the issues you might face if you go ahead with that setup.
Thanks
Krishna
On Mon, May 26, 2008 at 9:49 AM, Forcey [EMAIL PROTECTED] wrote:
Hi All,
Shaofeng,
Regarding the hang when you do ls: can you see if there is anything in the
logs? How many files are there in that directory? Can you try with the latest
version of glusterfs and see if it still hangs?
Regarding how it works, I will write in another mail, or update the wiki
doc.
Thanks
Victor,
Here are your steps:
1. open file
2. read
3. bring first child down
4. read continues with a seamless failover
5. bring first child up, and bring second child down
6. read fails
Now retrying again from the first server would be difficult, as all the state
associated with that server would have
Gordan,
Which patch set is this? Can you run glusterfs server side with -L DEBUG
and send the logs?
Thanks
Krishna
On Tue, May 20, 2008 at 1:56 AM, Gordan Bobic [EMAIL PROTECTED] wrote:
Hi,
I'm having rather major problems getting single-process AFR to work between
two servers. When both
? Or does it mount the outermost volume that _isn't_ a
protocol/[client|server] (which is home in this case)?
Thanks.
Gordan
On Tue, 20 May 2008 13:18:07 +0530, Krishna Srinivas
[EMAIL PROTECTED] wrote:
Gordan,
Which patch set is this? Can you run glusterfs server side with -L
DEBUG
, and simplify my config somewhat.
Thanks.
Gordan
On Tue, 20 May 2008, Krishna Srinivas wrote:
In this setup, home1 is sending the CHILD_UP event to the server xlator instead
of the home afr xlator (and home2 is not up). This makes afr think none
of its subvols are up. We will fix it to handle
On Fri, May 9, 2008 at 12:57 AM, Krishna Srinivas [EMAIL PROTECTED] wrote:
On Thu, May 8, 2008 at 9:19 PM, Gerry Reno [EMAIL PROTECTED] wrote:
Krishna Srinivas wrote:
Gerry,
In your client spec, client-local does not have any purpose, right?
This is your setup:
server1
Jeroen,
Can you give details on the setup, including the conf files?
Thanks
Krishna
On Sat, May 10, 2008 at 9:53 PM, Jeroen Koekkoek [EMAIL PROTECTED] wrote:
Hi Everybody,
I'm planning on using glusterfs for the mailservers in my organisation. I
configured glusterfs to do afr on the server side
On Mon, May 12, 2008 at 3:01 AM, Martin Fick [EMAIL PROTECTED] wrote:
--- Derek Price [EMAIL PROTECTED] wrote:
Never mind, I thought that, using your algorithm,
it was possible to create files in two different
directories with identical version numbers, then
move the directories around to
On Fri, May 9, 2008 at 2:33 PM, Marcus Herou [EMAIL PROTECTED] wrote:
Oooops. Didn't think of that with AFR. However, I think Lucene always creates
new files when documents are flushed to disk, so on a commit basis there will
be low impact. But the scenario you're talking about will most definitely
2008 15:37:40 +0530 Krishna Srinivas
[EMAIL PROTECTED] wrote:
Do you plan to do any AFR (automatic file replication) ? If so,
consider that even a one-byte change to your big index files will
cause the /entire/ file to be AFR'd between all participating
nodes.
Marcus, what do you
Chris,
Do you see clues in the log files?
Krishna
On Wed, Apr 30, 2008 at 8:22 PM, Anand Avati [EMAIL PROTECTED] wrote:
Chris,
can you get the glusterfs client logs from your ramdisk when the servers
are being pulled out and you try to access the mount point?
avati
2008/4/30
On Wed, May 7, 2008 at 4:29 AM, Gordan Bobic [EMAIL PROTECTED] wrote:
Kevan Benson wrote:
Gordan Bobic wrote:
I suspect this isn't a problem that can be solved without having a
proper journal of metadata per directory, so that upon connection, the whole
journal can be replayed.
We are thinking of a solution for this; there will be a performance hit
(so this can be kept as a config option). We will get back shortly.
Krishna
On Wed, May 7, 2008 at 1:28 PM, Daniel Maher [EMAIL PROTECTED] wrote:
On Tue, 6 May 2008 19:40:02 -0700 (PDT) Martin Fick
[EMAIL PROTECTED] wrote:
On Tue, May 6, 2008 at 9:45 PM, Derek Price [EMAIL PROTECTED] wrote:
Krishna Srinivas wrote:
Is this an issue in server-side-only AFR? I have two servers which
are also
clients of themselves, and they both list their local subvolume first
and
remote subvolume second
On Wed, May 7, 2008 at 2:07 PM, [EMAIL PROTECTED] wrote:
On Wed, 7 May 2008, Krishna Srinivas wrote:
Is this an issue in server-side-only AFR? I have two servers
which
are also clients of themselves, and they both list their local
subvolume first and
remote
On Fri, May 2, 2008 at 6:33 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Thu, May 1, 2008 at 5:59 PM, Amar S. Tumballi [EMAIL PROTECTED] wrote:
On Thu, May 1, 2008 at 5:37 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
Was there an option added somewhere to be able to choose whether to
Brandon,
Can you start glusterfsd on server1 with -l /tmp/log -L DEBUG,
and put option debug on in the afr volume definition in the server1 spec?
When you do the cat, can you check the logs and mail them to us?
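Something like the following (the vol file path is a placeholder):
glusterfsd -f /etc/glusterfs/server1.vol -l /tmp/log -L DEBUG
and in the server1 spec, a hypothetical afr volume with the option added:
volume afr
type cluster/afr
option debug on # extra AFR logging, as suggested above
subvolumes brick1 brick2
end-volume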
Krishna
On Sun, May 4, 2008 at 2:30 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Sat, May 3, 2008 at