Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-15 Thread Wei Dong

You guys are really fast.  Thanks a lot.

- Wei

Shehjar Tikoo wrote:

Wei Dong wrote:
I've attached all configuration files.  The log file is empty.  The 
attached configuration is a simplified version of what I tried to 
do.  It causes the same problem.  Basically, a single server exports 2 
volumes and the client imports the 2 volumes and runs DHT over them.


Hi

The fix will be available in a day, at most. Please track the bug
here:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=260

Thanks
-Shehjar



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:
I've attached all configuration files.  The log file is empty.  The 
attached configuration is a simplified version of what I tried to do.  
It causes the same problem.  Basically, a single server exports 2 volumes 
and the client imports the 2 volumes and runs DHT over them.


Hi

The fix will be available in a day, at most. Please track the bug
here:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=260

Thanks
-Shehjar




Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:
I've attached all configuration files.  The log file is empty.  The 
attached configuration is a simplified version of what I tried to do.  
It causes the same problem.  Basically, a single server exports 2 volumes 
and the client imports the 2 volumes and runs DHT over them.



Ok thanks. I am looking into this now.

-Shehjar




Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Wei Dong
I further found that the problem occurs only when the client side is 
wrapped with a performance/write-behind translator.  Everything works 
correctly if that translator is removed.


- Wei



Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Wei Dong
I've attached all configuration files.  The log file is empty.  The 
attached configuration is a simplified version of what I tried to do.  
It causes the same problem.  Basically, a single server exports 2 volumes 
and the client imports the 2 volumes and runs DHT over them.


Thanks,

- Wei


/memex/gluster/state/run/c.vol /gluster glusterfs subvolume=client,logfile=/root/gfs.log/log,loglevel=WARNING
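For reference, a booster fstab entry follows the same field order as a regular fstab line: volfile path, mount point, the literal filesystem type glusterfs, and comma-separated options. A hypothetical well-formed example (these paths are illustrative only, not from this setup):

```
/etc/glusterfs/client.vol /mnt/gluster glusterfs subvolume=client,logfile=/var/log/booster.log,loglevel=WARNING
```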

volume brick-0-0-0
type protocol/client
option transport-type tcp
option remote-host c8-0-0
option remote-port 7001
option remote-subvolume brick0
end-volume

volume rep-0-0
type cluster/replicate
subvolumes brick-0-0-0
end-volume

volume brick-0-1-0
type protocol/client
option transport-type tcp
option remote-host c8-0-0
option remote-port 7001
option remote-subvolume brick1
end-volume

volume rep-0-1
type cluster/replicate
subvolumes brick-0-1-0
end-volume

volume union
type cluster/distribute
subvolumes rep-0-0 rep-0-1
end-volume

volume client
type performance/write-behind
option cache-size 64MB
option flush-behind on
subvolumes union
end-volume

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume



Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:

Hi All,

I'm experiencing a problem with booster when the server-side nodes have 
more than one volume exported.  The symptom is that when I run "ls 
MOUNT_POINT" with booster, I get something like the following:


ls: closing directory MOUNT_POINT: File descriptor in bad state.


Please post the contents of booster FSTAB file. It'll tell us
which subvolume from the client volfile gets used by booster.

If the log file is available, do post that also.

Thanks
-Shehjar



- Wei


[Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-13 Thread Wei Dong

Hi All,

I'm experiencing a problem with booster when the server-side nodes have 
more than one volume exported.  The symptom is that when I run "ls 
MOUNT_POINT" with booster, I get something like the following:


ls: closing directory MOUNT_POINT: File descriptor in bad state.

The server configuration file is the following:

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume


On the client side, the bricks on the same server are imported separately.


The problem only appears when I use booster.  Nothing seems to go wrong 
when I mount GlusterFS normally.  Everything is also fine if I only export 
one brick from each server.  There are no warnings or errors in the log 
file in any of these cases.
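For context, booster works by preloading a shared library that intercepts libc file calls, so no FUSE mount is involved. The exact library path and the environment variable naming the booster fstab file vary by version and install prefix; the following is only an illustrative sketch, not a verified invocation:

```
# Illustrative; adjust the library path and variable name to your install.
LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so \
GLUSTERFS_BOOSTER_FSTAB=/etc/glusterfs/booster.fstab \
ls MOUNT_POINT
```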


Does anyone have an idea of what's happening?

- Wei