Re: [Gluster-users] intended use cases

2009-07-01 Thread Vikas Gorur

- "Kent Tong"  wrote:

> Hi,
> 
> What are the intended use cases for gluster? For example, is it
> suitable for,
> say, replacing a SAN? For example, is it good for the following?
> [ ] Storing a huge volume of seldom accessed files (file archive)
> [ ] Storing frequently read/write files (file server)
> [ ] Storing frequently read files (web server)
> [ ] Hosting databases
> [ ] Hosting VM images

All of the above.

Vikas
-- 
Engineer - http://gluster.com/

A: Because it messes up the way people read text.
Q: Why is a top-posting such a bad thing?
--

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Performance question

2009-07-01 Thread Joe Julian

I'm using an unpatched fuse 2.7.4-1 and glusterfs 2.0.2-1 with the
following configs, and got a result that surprised me:

# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 14.1538 seconds, 37.9 MB/s
# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 24.4553 seconds, 22.0 MB/s

Why is it slower if the file exists? Should it be?
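One thing that can skew a comparison like this is buffering above the posix layer: the second run hits the overwrite path through write-behind/io-cache, and the kernel page cache state also differs between runs. A minimal sketch (the target directory is an assumption; point it at your glusterfs mount, it defaults to /tmp only so it runs anywhere) that forces data to disk on both runs so they can be compared more fairly:

```shell
#!/bin/sh
# Sketch: compare creating a new file vs. overwriting an existing one,
# forcing data to disk (conv=fdatasync) so write-behind/io-cache buffering
# doesn't skew the numbers. TARGET is an assumption -- point it at your
# glusterfs mount.
set -e
TARGET=${TARGET:-/tmp}

rm -f "$TARGET/ddtest"
CREATE_OUT=$(dd if=/dev/zero of="$TARGET/ddtest" bs=512k count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "create:    $CREATE_OUT"

# second run overwrites the now-existing file
OVERWRITE_OUT=$(dd if=/dev/zero of="$TARGET/ddtest" bs=512k count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "overwrite: $OVERWRITE_OUT"

rm -f "$TARGET/ddtest"
```

If the overwrite numbers stay consistently worse with synced writes, the difference is in the filesystem path itself rather than in caching.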


#
Servers
#
volume posix0
 type storage/posix
 option directory /cluster/0
end-volume

volume locks0
 type features/locks
 subvolumes posix0
end-volume

volume brick0
 type performance/io-threads
 option thread-count 8
 subvolumes locks0
end-volume

volume posix1
 type storage/posix
 option directory /cluster/1
end-volume

volume locks1
 type features/locks
 subvolumes posix1
end-volume

volume brick1
 type performance/io-threads
 option thread-count 8
 subvolumes locks1
end-volume

volume posix2
 type storage/posix
 option directory /cluster/2
end-volume

volume locks2
 type features/locks
 subvolumes posix2
end-volume

volume brick2
 type performance/io-threads
 option thread-count 8
 subvolumes locks2
end-volume

volume posix3
 type storage/posix
 option directory /cluster/3
end-volume

volume locks3
 type features/locks
 subvolumes posix3
end-volume

volume brick3
 type performance/io-threads
 option thread-count 8
 subvolumes locks3
end-volume

volume server
 type protocol/server
 option transport-type tcp
 subvolumes brick0 brick1 brick2 brick3
 option auth.addr.brick0.allow *
 option auth.addr.brick1.allow *
 option auth.addr.brick2.allow *
 option auth.addr.brick3.allow *
end-volume


Client

volume ewcs2_cluster0
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick0
end-volume
volume ewcs2_cluster1
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick1
end-volume
volume ewcs2_cluster2
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick2
end-volume
volume ewcs2_cluster3
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick3
end-volume

volume ewcs4_cluster0
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick0
end-volume
volume ewcs4_cluster1
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick1
end-volume
volume ewcs4_cluster2
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick2
end-volume
volume ewcs4_cluster3
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick3
end-volume

volume ewcs7_cluster0
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick0
end-volume
volume ewcs7_cluster1
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick1
end-volume
volume ewcs7_cluster2
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick2
end-volume
volume ewcs7_cluster3
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick3
end-volume

volume repl1
 type cluster/replicate
 subvolumes ewcs2_cluster0 ewcs4_cluster0 ewcs7_cluster0
end-volume

volume repl2
 type cluster/replicate
 subvolumes ewcs2_cluster1 ewcs4_cluster1 ewcs7_cluster1
end-volume

volume repl3
 type cluster/replicate
 subvolumes ewcs2_cluster2 ewcs4_cluster2 ewcs7_cluster2
end-volume

volume repl4
 type cluster/replicate
 subvolumes ewcs2_cluster3 ewcs4_cluster3 ewcs7_cluster3
end-volume

volume distribute
 type cluster/distribute
 subvolumes repl1 repl2 repl3 repl4
end-volume

volume writebehind
 type performance/write-behind
 option aggregate-size 128KB
 option cache-size 1MB
 subvolumes distribute
end-volume

volume ioc
 type performance/io-cache
 option cache-size 512MB
 subvolumes writebehind
end-volume

###

mount -t glusterfs /etc/glusterfs/glusterfs-client.vol /mnt/gluster





[Gluster-users] intended use cases

2009-07-01 Thread Kent Tong
Hi,

What are the intended use cases for gluster? For example, is it suitable for,
say, replacing a SAN? For example, is it good for the following?
[ ] Storing a huge volume of seldom accessed files (file archive)
[ ] Storing frequently read/write files (file server)
[ ] Storing frequently read files (web server)
[ ] Hosting databases
[ ] Hosting VM images





[Gluster-users] Client detect when server comes back up

2009-07-01 Thread Simon Liang
Hi,

 

I have a basic 2-server (serverA and serverB) replication setup with 1
client.

Sometimes serverB will go offline, so the client writes directly to
serverA.

However, when serverB comes back online, the client does not detect this
and ignores serverB until I restart the client. Is this meant to happen?

Regards,

Simon



Re: [Gluster-users] Use of mod_glusterfs

2009-07-01 Thread Jasper van Wanrooy - Royalfish eSolutions

Hi,

> Thanks for your reply.
> I don't know how to use it after installing mod_glusterfs into
> Apache. Could you explain in detail?
> Should I mount gluster on a directory, or is no mount needed? How do
> I write or read data from gluster through mod_glusterfs?

We only used gluster for reading purposes through Apache. There is a
short description available in the wiki; I would suggest you try that:
http://www.gluster.org/docs/index.php/Getting_modglusterfs_to_work

The module doesn't need a locally mounted gluster share. It connects
directly with libgluster to the storage servers.


Regards,
Jasper


Re: [Gluster-users] HadoopFS-like gluster setup

2009-07-01 Thread Vijay Bellur

Peng Zhao wrote:


[2009-07-01 18:36:25] E [socket.c:206:__socket_server_bind] server: 
binding to failed: Address already in use
[2009-07-01 18:36:25] E [socket.c:209:__socket_server_bind] server: 
Port is already in use
[2009-07-01 18:36:25] E [server-protocol.c:7631:init] server: failed 
to bind/listen on socket
[2009-07-01 18:36:25] E [xlator.c:736:xlator_init_rec] xlator: 
Initialization of volume 'server' failed, review your volfile again
[2009-07-01 18:36:25] E [glusterfsd.c:498:_xlator_graph_init] 
glusterfs: initializing translator failed
[2009-07-01 18:36:25] E [glusterfsd.c:1191:main] glusterfs: translator 
initialization failed. exiting

Looks like the server port is already in use.
GlusterFS tries to bind to port 6996 by default if you do not specify 
one. Can you please check with a different port?
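If memory serves, in the 2.0.x volfile format the listen port can be overridden on the protocol/server volume, and the clients then have to point at the same non-default port. Treat the option names and the host name below as assumptions to verify against your version's documentation:

```
# server side -- sketch, assuming 2.0.x option names
volume server
 type protocol/server
 option transport-type tcp
 option listen-port 6997        # 6996 is the default
 subvolumes brick0
 option auth.addr.brick0.allow *
end-volume

# client side must match the non-default port (hypothetical host name)
volume client0
 type protocol/client
 option transport-type tcp
 option remote-host server1.example.com
 option remote-port 6997
 option remote-subvolume brick0
end-volume
```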


Regards,
Vijay



Re: [Gluster-users] HadoopFS-like gluster setup

2009-07-01 Thread Peng Zhao
BTW, has anyone configured gluster like HDFS (unify with automatic
replication)? Could someone share the volfile here?
I think I'm not the only fan waiting here ;-)
Gnep
On Wed, Jul 1, 2009 at 6:39 PM, Peng Zhao  wrote:

> OK, my mistake. There was no fuse module. I built one and modprobed fuse.
> The previous error is gone, with some new ones.
> Here are the DEBUG-level messages:
> [long debug log snipped; the full log appears in the original message below]

Re: [Gluster-users] HadoopFS-like gluster setup

2009-07-01 Thread Peng Zhao
OK, my mistake. There was no fuse module. I built one and modprobed fuse.
The previous error is gone, with some new ones.
Here are the DEBUG-level messages:
[2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify)
on /usr/lib64/glusterfs/2.0.2/xlator/features/locks.so: undefined symbol:
notify -- neglecting
[2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify)
on /usr/lib64/glusterfs/2.0.2/xlator/performance/io-threads.so: undefined
symbol: notify -- neglecting
[2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify)
on /usr/lib64/glusterfs/2.0.2/xlator/performance/write-behind.so: undefined
symbol: notify -- neglecting
[2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator: dlsym(notify)
on /usr/lib64/glusterfs/2.0.2/xlator/performance/io-cache.so: undefined
symbol: notify -- neglecting
[2009-07-01 18:36:25] D [glusterfsd.c:1179:main] glusterfs: running in pid
6874
[2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-0:
defaulting frame-timeout to 30mins
[2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-0:
defaulting ping-timeout to 10
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-1:
defaulting frame-timeout to 30mins
[2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-1:
defaulting ping-timeout to 10
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-2:
defaulting frame-timeout to 30mins
[2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-2:
defaulting ping-timeout to 10
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-3:
defaulting frame-timeout to 30mins
[2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-3:
defaulting ping-timeout to 10
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [unify.c:4288:init] unified: namespace node
specified as compute-5-4
[2009-07-01 18:36:25] D [scheduler.c:48:get_scheduler] scheduler: attempt to
load file rr.so
[2009-07-01 18:36:25] D [unify.c:4320:init] unified: Child node count is 2
[2009-07-01 18:36:25] D [rr-options.c:188:rr_options_validate] rr: using
scheduler.limits.min-free-disk = 15 [default]
[2009-07-01 18:36:25] D [rr-options.c:216:rr_options_validate] rr: using
scheduler.refresh-interval = 10 [default]
[2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-4:
defaulting frame-timeout to 30mins
[2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-4:
defaulting ping-timeout to 10
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [write-behind.c:1859:init] writebehind: disabling
write-behind for first 1 bytes
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-0: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-0: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-1: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-1: got
GF_EVENT_PARENT_UP, attempting connect on transport
[2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-2: got
GF_EVENT_PARENT_UP, attempting

Re: [Gluster-users] HadoopFS-like gluster setup

2009-07-01 Thread Pavel Riha
On Wednesday 01 of July 2009 03:50, Peng Zhao wrote:
> [2009-07-01 09:37:36] E [xlator.c:736:xlator_init_rec] xlator:
> Initialization of volume 'fuse' failed, review your volfile again

I got this error when the fuse module was not loaded, so check that first:


modprobe fuse
ls -l /dev/fuse
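A small pre-flight check along the same lines (just a sketch) that could run before starting glusterfs:

```shell
#!/bin/sh
# Sketch: verify the fuse kernel module is available before starting
# glusterfs, trying to load it if it isn't already registered.
if grep -qw fuse /proc/filesystems; then
    FUSE_STATUS="available"
elif modprobe fuse 2>/dev/null && grep -qw fuse /proc/filesystems; then
    FUSE_STATUS="loaded"
else
    FUSE_STATUS="missing"
fi
echo "fuse: $FUSE_STATUS"
if [ "$FUSE_STATUS" = "missing" ]; then
    echo "build/install the fuse kernel module first"
fi
ls -l /dev/fuse 2>/dev/null || echo "note: /dev/fuse not present"
```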


Pavel



[Gluster-users] DHT and adding bricks

2009-07-01 Thread Daniel
hello there

sorry for the newbie question, but I couldn't find an answer thru the
old messages

I am researching glusterfs for a personal project involving a
constantly growing storage farm

my config is simple: several bricks and a server doing DHT using
cluster/distribute just like
http://www.gluster.org/docs/index.php/Hash_Across_Four_Storage_Servers

I learned that when you add a brick to an existing configuration,
files copied into folders that already existed (including the root
folder) are not distributed to the new bricks

new files in new folders are

what happens if I keep loading files into a folder that already
existed before the new brick was added?

will the new files eventually get distributed to the new bricks when
the old bricks get full ?

if not, is there a way around this ?

after I added the bricks, I restarted the whole server, but nothing changed

thanks in advance

daniel



Re: [Gluster-users] HELP: problem of stripe expansion !!!!!!!!!!!!!!!!!!!!!!!!

2009-07-01 Thread eagleeyes
If I need high availability with AFR, should I use distribute + stripe + afr,
just like distribute(stripe( afr(2) 4)+stripe( afr(2) 4)+stripe( afr(2) 4)) ?


2009-07-01 



eagleeyes 



From: Anand Babu Periasamy 
Sent: 2009-07-01 14:01:14 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] HELP: problem of stripe expansion 
 
 
You cannot expand stripe directly.  You have to use
distribute + stripe, where you scale in stripe sets.
For example, if you have 8 nodes, you create
=>  distribute(stripe(4)+stripe(4))
Now if you want to scale your storage cluster, you should do so
in stripe sets. Add 4 more nodes like this:
=>  distribute(stripe(4)+stripe(4)+stripe(4))
Distributed-stripe not only makes stripe scalable, but also improves
load balancing and reduces disk contention.
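As a client-side sketch of the 8-node layout described above (the
client volume names are hypothetical; substitute your own
protocol/client definitions):

```
# distribute(stripe(4) + stripe(4)) -- assumed names client1..client8
volume stripe0
 type cluster/stripe
 subvolumes client1 client2 client3 client4
end-volume

volume stripe1
 type cluster/stripe
 subvolumes client5 client6 client7 client8
end-volume

volume distribute
 type cluster/distribute
 subvolumes stripe0 stripe1
end-volume
```

To grow, define a stripe2 over four new nodes and append it to the
distribute subvolumes line.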
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]
eagleeyes wrote:
> Hello:
> Today I tested stripe expansion: expanding two volumes to four
> volumes. When I vi or cat a file, the log shows:
> [2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
> client8 returned error No such file or directory
> [2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
> client7 returned error No such file or directory
> [2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client7 
> returned error No such file or directory
> [2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client8 
> returned error No such file or directory
> [2009-07-01 11:25:55] W [fuse-bridge.c:639:fuse_fd_cbk] glusterfs-fuse: 149: 
> OPEN() /file => -1 (No such file or directory)
>  
> Is this a bug like the DHT expansion one? What should we do to deal
> with this problem?
>
> My client config change was from "subvolumes client5 client6" to
> "subvolumes client5 client6 client7 client8".
>  
>  
> 2009-07-01
> 
> eagleeyes
> 
> 
> 
> 