[Gluster-users] ERROR with 2.1.0pre1

2009-09-10 Thread eagleeyes
[New Thread 22973]
[New Thread 22972]
[New Thread 22971]
[New Thread 22970]
[New Thread 22969]
[New Thread 22968]
[New Thread 22967]
[New Thread 22966]
[New Thread 22965]
[New Thread 22964]
[New Thread 22963]
[New Thread 22962]
[New Thread 22961]
[New Thread 22960]
[New Thread 22959]
[New Thread 22958]
[New Thread 22957]
[New Thread 22956]
[New Thread 22955]
[New Thread 23198]
[New Thread 23022]
[New Thread 23036]
[New Thread 22802]
[New Thread 23247]
[New Thread 23294]
[New Thread 23032]
[New Thread 22996]
[New Thread 23174]
[New Thread 22911]
[New Thread 22801]
Core was generated by `glusterfsd -f glusterfsd.vol'.
Program terminated with signal 11, Segmentation fault.
#0  0xb809c1c0 in ?? ()
(gdb) bt 
#0  0xb809c1c0 in ?? ()
#1  0xb80aca33 in ?? ()
#2  0xb809b41a in ?? ()
#3  0xb8094cd2 in ?? ()
#4  0xb806d17c in ?? ()
#5  0xb80a2b2e in ?? ()
#6  0xb807034b in ?? ()
#7  0xb8095daa in ?? ()
#8  0xb7694b7b in ?? ()
#9  0xb807720e in ?? ()
#10 0xb7693f00 in ?? ()
#11 0xb80411b5 in ?? ()
#12 0xb7faf3be in ?? ()
(gdb)  quit


2009-09-10 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] ERROR in glusterfs2.0.6 using afr+dht+quota

2009-08-19 Thread eagleeyes
Hi all:
I ran into an error in glusterfs 2.0.6:
   
[2009-08-19 15:56:54] E [dht-common.c:1573:dht_setxattr] dht: invalid argument: loc->inode
[2009-08-19 15:56:54] C [quota.c:794:quota_setxattr_cbk] quota: failed to set the disk-usage value: Invalid argument


The client config is:
   
volume dht
  type cluster/dht
  option min-free-disk 15%
  subvolumes afr1 afr2 afr3 afr4 afr5 afr6 afr7 afr8 afr9 afr10 afr11
end-volume

volume writeback
 type performance/write-behind
  option cache-size 64MB
  option flush-behind on
  subvolumes dht   
end-volume

volume quota
  type features/quota
  option disk-usage-limit 51200MB
  subvolumes writeback
end-volume


   Why does this happen? Could someone help me?


2009-08-19 



eagleeyes 


Re: [Gluster-users] ERROR in glusterfs2.0.4 using afr+dht

2009-08-19 Thread eagleeyes
Thanks. Was that patch merged into 2.0.6?

When will version 2.1 be released? I am very eager to start using it.


2009-08-19 



eagleeyes 



From: Vikas Gorur 
Sent: 2009-08-19  16:04:09 
To: eagleeyes 
Cc: gluster-users; Anand Babu 
Subject: Re: [Gluster-users] ERROR in glusterfs2.0.4 using afr+dht 
 
- eagleeyes eaglee...@126.com wrote:
 Hi all:
  My glusterfs storage is used for Xen images, configured with
 afr+dht, but it hit an error.
Thanks for reporting this crash. This has been fixed in release-2.0 by:
commit f4513b4de104f1c6f40f7bbe0a4bd698340db805
Author: Anand Avati av...@gluster.com
Date:   Fri Jul 17 15:34:14 2009 +
Do not failover readdir in replicate

Backport of http://patches.gluster.com/patch/561/ to release-2.0

Also, the failover version of afr_readdir_cbk is buggy and
crashes inevitably when it is called after a failover.

Signed-off-by: Anand V. Avati av...@dev.gluster.com

BUG: 150 (AFR readdir should not failover to other subvolume)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=150
and in mainline by:
commit 77d8cfeab52cd19f3b70abd9c3a2c4bbc9219bff
Author: Vikas Gorur vi...@gluster.com
Date:   Thu Jun 11 08:47:06 2009 +
Do not fail over readdir in replicate.

If readdir fails on a subvolume, do not
fail-over to the next subvolume, since the
order of entries and offsets won't be same
on all subvolumes.

Signed-off-by: Anand V. Avati av...@dev.gluster.com
Vikas
-- 
Engineer - http://gluster.com/
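The commit message above explains that the order of entries and their offsets differ across replicas. A minimal sketch (a hypothetical model, not GlusterFS code; `readdir` and the replica lists here are invented for illustration) of why resuming a directory listing on a different subvolume corrupts it:

```python
# Hypothetical sketch (not GlusterFS code): two replicas hold the same
# entries, but in different on-disk order, so a numeric offset into one
# listing does not mean the same position in the other.
def readdir(subvolume, offset, count):
    """Return (entries, next_offset) from one subvolume's listing."""
    chunk = subvolume[offset:offset + count]
    return chunk, offset + len(chunk)

replica_a = ["11", "22", "33", "44"]   # order on the first replica
replica_b = ["44", "11", "33", "22"]   # same entries, different order

# Read the first half from replica A, then "fail over" to replica B
# and continue at the same numeric offset:
first, off = readdir(replica_a, 0, 2)   # ["11", "22"]
rest, _ = readdir(replica_b, off, 2)    # ["33", "22"]

listing = first + rest
# "22" appears twice and "44" is missing: the listing is corrupt,
# which is why replicate must not fail over readdir.
print(listing)
```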


[Gluster-users] Error with trash

2009-08-11 Thread eagleeyes

Hi all:
 I hit an error when using trash.
 When I go into .trashcan on a glusterfs client and rename a directory which
had been deleted, the glusterfs server goes down. The gdb log was:

[r...@localhost /]# gdb glusterfscore.8011
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux-gnu...Using host libthread_db 
library /lib/tls/libthread_db.so.1.
warning: core file may not match specified executable file.
Core was generated by `glusterfsd -f /etc/glusterfs/glusterfsd-server.vol -l 
/var/log/glusterfs/gluste'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libglusterfs.so.0...done.
Loaded symbols for /lib/libglusterfs.so.0
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/tls/libpthread.so.0...done.
Loaded symbols for /lib/tls/libpthread.so.0
Reading symbols from /lib/tls/libc.so.6...done.
Loaded symbols for /lib/tls/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/glusterfs/2.0.4/xlator/storage/posix.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/storage/posix.so
Reading symbols from 
/lib/glusterfs/2.0.4/xlator/testing/features/trash.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/testing/features/trash.so
Reading symbols from /lib/glusterfs/2.0.4/xlator/features/posix-locks.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/features/posix-locks.so
Reading symbols from /lib/glusterfs/2.0.4/xlator/protocol/server.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/protocol/server.so
Reading symbols from /lib/glusterfs/2.0.4/transport/socket.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/transport/socket.so
Reading symbols from /lib/glusterfs/2.0.4/auth/addr.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/auth/addr.so
Reading symbols from /lib/libgcc_s.so.1...done.
Loaded symbols for /lib/libgcc_s.so.1
#0  trash_common_unwind_cbk (frame=0x96897d0, cookie=0x9686228, this=0x959a0d0, 
op_ret=0, op_errno=0) at trash.c:80
80  if (local->loc1.path)
(gdb) bt
#0  trash_common_unwind_cbk (frame=0x96897d0, cookie=0x9686228, this=0x959a0d0, 
op_ret=0, op_errno=0) at trash.c:80
#1  0x0039a954 in posix_unlink (frame=0x9686228, this=0x95996c0, loc=0x9689978) 
at posix.c:896
#2  0x00a8a2be in trash_unlink (frame=0x96897d0, this=0x959a0d0, loc=0x9689978) 
at trash.c:250
#3  0x00121c4c in default_unlink (frame=0x95a1838, this=0x959aad0, 
loc=0x9689978) at defaults.c:461
#4  0x00e81f87 in server_unlink_resume (frame=0x9687610, this=0x959acc8, 
loc=0x9689978) at server-protocol.c:4316
#5  0x0012e96c in call_resume (stub=0x9689960) at call-stub.c:2329
#6  0x00e82111 in server_unlink (frame=0x9687610, bound_xl=0x959aad0, 
hdr=0x9689be8, hdrlen=172, iobuf=0x0)
at server-protocol.c:4362
#7  0x00e8879a in protocol_server_interpret (this=0x959acc8, trans=0x9649180, 
hdr_p=0x9689be8 , hdrlen=172, iobuf=0x0)
at server-protocol.c:7473
#8  0x00e891f1 in protocol_server_pollin (this=0x959acc8, trans=0x9649180) at 
server-protocol.c:7754
#9  0x00e89329 in notify (this=0x959acc8, event=2, data=0x9649180) at 
server-protocol.c:7810
#10 0x00d30c67 in socket_event_poll_in (this=0x9649180) at socket.c:714
#11 0x00d30eef in socket_event_handler (fd=13, idx=7, data=0x9649180, 
poll_in=1, poll_out=0, poll_err=0) at socket.c:814
#12 0x00132c71 in event_dispatch_epoll (event_pool=0x95949c0) at event.c:804
#13 0x00132f41 in event_dispatch (event_pool=0x95949c0) at event.c:975
#14 0x0804b6b1 in main (argc=5, argv=0xbff770e4) at glusterfsd.c:1226


   Why does this happen?
2009-08-12 



eagleeyes 


[Gluster-users] Error with glusterfs2.0.4 which add trash

2009-08-11 Thread eagleeyes

HI all:
 I hit this error again, even though there was no activity on the client.
 
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux-gnu...Using host libthread_db 
library /lib/tls/libthread_db.so.1.
warning: core file may not match specified executable file.
Core was generated by `glusterfsd -f /etc/glusterfs/glusterfsd-server.vol -l 
/var/log/glusterfs/gluste'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libglusterfs.so.0...done.
Loaded symbols for /lib/libglusterfs.so.0
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/tls/libpthread.so.0...done.
Loaded symbols for /lib/tls/libpthread.so.0
Reading symbols from /lib/tls/libc.so.6...done.
Loaded symbols for /lib/tls/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/glusterfs/2.0.4/xlator/storage/posix.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/storage/posix.so
Reading symbols from 
/lib/glusterfs/2.0.4/xlator/testing/features/trash.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/testing/features/trash.so
Reading symbols from /lib/glusterfs/2.0.4/xlator/features/posix-locks.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/features/posix-locks.so
Reading symbols from /lib/glusterfs/2.0.4/xlator/protocol/server.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/xlator/protocol/server.so
Reading symbols from /lib/glusterfs/2.0.4/transport/socket.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/transport/socket.so
Reading symbols from /lib/glusterfs/2.0.4/auth/addr.so...done.
Loaded symbols for /lib/glusterfs/2.0.4/auth/addr.so
Reading symbols from /lib/libgcc_s.so.1...done.
Loaded symbols for /lib/libgcc_s.so.1
#0  trash_common_unwind_cbk (frame=0x9a3d600, cookie=0x9a0ca48, this=0x987bd28, 
op_ret=0, op_errno=0) at trash.c:80
80  if (local->loc1.path)
(gdb) bt
#0  trash_common_unwind_cbk (frame=0x9a3d600, cookie=0x9a0ca48, this=0x987bd28, 
op_ret=0, op_errno=0) at trash.c:80
#1  0x00a14954 in posix_unlink (frame=0x9a0ca48, this=0x987b440, loc=0x9a51768) 
at posix.c:896
#2  0x0096f2be in trash_unlink (frame=0x9a3d600, this=0x987bd28, loc=0x9a51768) 
at trash.c:250
#3  0x0025cc4c in default_unlink (frame=0x9a07b58, this=0x987c6e0, 
loc=0x9a51768) at defaults.c:461
#4  0x00374f87 in server_unlink_resume (frame=0x9a49020, this=0x987ccc8, 
loc=0x9a51768) at server-protocol.c:4316
#5  0x0026996c in call_resume (stub=0x9a51750) at call-stub.c:2329
#6  0x00375111 in server_unlink (frame=0x9a49020, bound_xl=0x987c6e0, 
hdr=0x9a51028, hdrlen=167, iobuf=0x0)
at server-protocol.c:4362
#7  0x0037b79a in protocol_server_interpret (this=0x987ccc8, trans=0x9964e48, 
hdr_p=0x9a51028 , hdrlen=167, iobuf=0x0)
at server-protocol.c:7473
#8  0x0037c1f1 in protocol_server_pollin (this=0x987ccc8, trans=0x9964e48) at 
server-protocol.c:7754
#9  0x0037c329 in notify (this=0x987ccc8, event=2, data=0x9964e48) at 
server-protocol.c:7810
#10 0x00158c67 in socket_event_poll_in (this=0x9964e48) at socket.c:714
#11 0x00158eef in socket_event_handler (fd=19, idx=12, data=0x9964e48, 
poll_in=1, poll_out=0, poll_err=0) at socket.c:814
#12 0x0026dc71 in event_dispatch_epoll (event_pool=0x98769c0) at event.c:804
#13 0x0026df41 in event_dispatch (event_pool=0x98769c0) at event.c:975
#14 0x0804b6b1 in main (argc=5, argv=0xbff313b4) at glusterfsd.c:1226
(gdb) quit

   Why does this happen?
2009-08-12 



eagleeyes 


Re: [Gluster-users] Error with glusterfs2.0.4 which add trash

2009-08-11 Thread eagleeyes
Thanks a lot. Is this patch included in glusterfs 2.0.6rc4?


2009-08-12 



eagleeyes 



From: Vijay Bellur 
Sent: 2009-08-12  12:37:43 
To: eagleeyes 
Cc: gluster-users; Anand Babu 
Subject: Re: [Gluster-users] Error with glusterfs2.0.4 which add trash 
 
 #0  trash_common_unwind_cbk (frame=0x9a3d600, cookie=0x9a0ca48, 
 this=0x987bd28, op_ret=0, op_errno=0) at trash.c:80
 80  if (local->loc1.path)
 (gdb) bt
This is a known issue. Patch 586 [http://patches.gluster.com/patch/586/]
fixes this.
Regards,
Vijay


[Gluster-users] What error happend ?

2009-08-03 Thread eagleeyes
Hi:
   What error happened here?
   When I run ls -R /data, the log warns:
   

2009-08-03 15:07:54 W [dht-common.c:238:dht_revalidate_cbk] dht: mismatching 
filetypes 04 v/s 010 for /kaopu/web/www/accessory/icon
2009-08-03 15:07:54 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: 
revalidate of /kaopu/web/www/accessory/icon failed (Invalid argument)
2009-08-03 15:07:54 W [dht-common.c:238:dht_revalidate_cbk] dht: mismatching 
filetypes 04 v/s 010 for /kaopu/web/www/accessory/file/20090515
2009-08-03 15:07:54 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: 
revalidate of /kaopu/web/www/accessory/file/20090515 failed (Invalid argument)
2009-08-03 15:07:56 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/accessory/file/zip/gaochao.zip
2009-08-03 15:07:56 W [dht-common.c:238:dht_revalidate_cbk] dht: mismatching 
filetypes 04 v/s 010 for /kaopu/web/www/accessory/photo/20090510
2009-08-03 15:07:56 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: 
revalidate of /kaopu/web/www/accessory/photo/20090510 failed (Invalid argument)
2009-08-03 15:08:04 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/img/bj/Thumbs.db
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/img/btn/Thumbs.db
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/img/btn2/Thumbs.db
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/img/icon/Thumbs.db
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/js/kaopu/disk/text/protoTypeMySelf.js
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/js/kaopu/disk/text/prototype-1.6.0.2.js
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/js/kaopu/friend/friend_city.js
2009-08-03 15:08:05 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/static/js/lib/citys/city.js
2009-08-03 15:08:06 W [dht-common.c:254:dht_revalidate_cbk] dht: linkfile found 
in revalidate for /kaopu/web/www/www/class/libraries/rss/code/gb-unicode.table
2009-08-03 15:08:50 W [dht-common.c:238:dht_revalidate_cbk] dht: mismatching 
filetypes 04 v/s 010 for 
/kaopu/web/www/www/ui/templates/default/member/invite
2009-08-03 15:08:50 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge: 
revalidate of /kaopu/web/www/www/ui/templates/default/member/invite failed 
(Invalid argument)
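For context (my reading, not stated in the thread): the "04 v/s 010" values appear to be the octal file-type bits of st_mode, i.e. a directory versus a regular file, meaning dht found a directory on one subvolume where another subvolume has a regular file at the same path. A quick check with Python's stat module:

```python
import stat

# The file-type nibble of st_mode, printed in octal, matches the
# "04 v/s 010" pair in the dht log: 04 = directory, 010 = regular file.
def filetype_code(mode):
    return oct((mode & 0o170000) >> 12)

print(filetype_code(stat.S_IFDIR))  # the "04" in the log
print(filetype_code(stat.S_IFREG))  # the "010" in the log
```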


2009-08-03 



eagleeyes 


Re: [Gluster-users] Question of min-free-disk

2009-07-29 Thread eagleeyes
But my configuration was:
volume brick1
   type storage/posix   # POSIX FS translator
   option directory /home/data# Export this directory
end-volume

volume brick2
   type storage/posix   # POSIX FS translator
   option directory /home/data2# Export this directory
end-volume
volume server
   type protocol/server
   option transport-type tcp
  option transport.socket.bind-address 172.20.92.95 # Default is to listen 
on all interfaces
  option transport.socket.listen-port 6996  # Default is 6996

   subvolumes brick1 brick2
   option auth.addr.brick1.allow * # Allow access to brick volume
   option auth.addr.brick2.allow * # Allow access to brick volume
end-volume

The client configuration was:
volume client1
   type protocol/client
   option transport-type tcp
   option remote-host 172.20.92.95 # IP address of the remote brick
   option transport.socket.remote-port 6996  # default server port 
is 6996
   option remote-subvolume brick1# name of the remote volume
end-volume
volume client2
   type protocol/client
   option transport-type tcp
   option remote-host 172.20.92.95 # IP address of the remote brick
   option transport.socket.remote-port 6996  # default server port 
is 6996
   option remote-subvolume brick2# name of the remote volume
end-volume
volume dht
  type  cluster/dht
  option min-free-disk 90%
  subvolumes client1 client2 
end-volume


They are all on the same disk.
2009-07-30 



eagleeyes 



From: Raghavendra G 
Sent: 2009-07-30  01:55:48 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] Question of min-free-disk 
 
Yes, min-free-disk option to dht works. But please note following points
1. Availability of disk space is checked only during creation of new files. If 
the disk space availability falls below configured limit, writes to files 
already on disk will continue without any warnings/errors.
2. If the subvolume on which the file being created (hashed subvolume), has 
free disk space less than min-free-disk, the file will be created on the node 
which has maximum free disk among all the children. Note that the file will be 
created even though this node has less than the min-free-disk space. Please 
note that, dht does not treat non-availability of min-free-disk of disk space 
as an error. On the other hand it tries its best to spread the files to other 
nodes, when the hashed subvolume does not have required space.
regards,
Raghavendra.
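The placement policy described above can be sketched as follows (an illustrative model under my reading of the explanation; names like `pick_subvolume` are hypothetical, not dht internals):

```python
# Illustrative model of dht file placement with min-free-disk
# (structure and names are hypothetical, not the actual dht code).
def pick_subvolume(hashed, subvolumes, free_pct, min_free_pct):
    """Return the subvolume a new file would be created on."""
    if free_pct[hashed] >= min_free_pct:
        return hashed  # normal case: use the hashed subvolume
    # The hashed subvolume is too full: spill to the subvolume with the
    # most free space, even if it is also below the threshold. Lack of
    # min-free-disk headroom is never treated as an error.
    return max(subvolumes, key=lambda s: free_pct[s])

free = {"client1": 5, "client2": 40}  # percent free on each subvolume
print(pick_subvolume("client1", ["client1", "client2"], free, 15))
print(pick_subvolume("client2", ["client1", "client2"], free, 15))
```

Note that existing files keep growing on their current subvolume regardless; the check only happens at file creation.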
- Original Message -
From: eagleeyes eaglee...@126.com
To: gluster-users gluster-users@gluster.org
Sent: Monday, July 27, 2009 9:28:55 AM GMT +04:00 Abu Dhabi / Muscat
Subject: [Gluster-users] Question of min-free-disk
Hi
   Does the min-free-disk option actually work? Today I used dht's
min-free-disk, but it didn't work.
   I used one server as both glusterfsd and glusterfs.
   The client configuration:
   
volume client1
   type protocol/client
   option transport-type tcp
   option remote-host 172.20.92.95 # IP address of the remote brick
   option transport.socket.remote-port 6996  # default server port 
is 6996
   option remote-subvolume brick1# name of the remote volume
end-volume
volume client2
   type protocol/client
   option transport-type tcp
   option remote-host 172.20.92.95 # IP address of the remote brick
   option transport.socket.remote-port 6996  # default server port 
is 6996
   option remote-subvolume brick2# name of the remote volume
end-volume
volume dht
  type  cluster/dht
  option min-free-disk 90%
  subvolumes client1 client2 
end-volume
2627:/data # df  -h 
FilesystemSize  Used Avail Use% Mounted on
/dev/sda2 9.9G  6.0G  3.5G  64% /
udev  447M   72K  447M   1% /dev
/dev/sda3  62G  7.6G   51G  13% /home
glusterfs.vol.sample  124G   16G  102G  13% /data   
The data exceeded 90%, and I wrote a 1GB file into /data with no error.
Why didn't min-free-disk work?
   
2009-07-27 
eagleeyes 


[Gluster-users] So tangly of Glusterfs 2.0.4 distributing after dht expansion !!!!

2009-07-14 Thread eagleeyes
Hi:
   Today I tested glusterfs 2.0.4 with fuse-2.8.0-pre3 for dht expansion, and I
found a problem.
As data directories I use:
[r...@nio1 data6]# ls /data/
data1  data2  data3  data4  data5  data6

First I used data1+data2+data3, mounted at /mnt, ran touch 11 22 33 44 55 66,
and created ddd eee jjj pp uuu yy with some words in them.
The files in /data were then:

ll -h /data/* 
/data/data1:
total 0
-rw-r--r-- 1 root root 0 Jul 14 17:14 11
-rw-r--r-- 1 root root 0 Jul 14 17:14 33
/data/data2:
total 12K
-rw-r--r-- 1 root root   0 Jul 14 17:14 22
-rw-r--r-- 1 root root   0 Jul 14 17:14 44
-rw-r--r-- 1 root root 116 Jul 14 17:15 ddd
-rw-r--r-- 1 root root 116 Jul 14 17:15 jjj
-rw-r--r-- 1 root root 116 Jul 14 17:15 yy
/data/data3:
total 12K
-rw-r--r-- 1 root root   0 Jul 14 17:14 55
-rw-r--r-- 1 root root   0 Jul 14 17:14 66
-rw-r--r-- 1 root root 116 Jul 14 17:15 eee
-rw-r--r-- 1 root root 116 Jul 14 17:15 pp
-rw-r--r-- 1 root root 116 Jul 14 17:15 uuu

Then I expanded the dht client to use data1+data2+data3+data4+data5+data6
and mounted it at /mnt.
This time I find:
ll -h /data/* 
/data/data1:
total 0
-rw-r--r-- 1 root root 0 Jul 14 16:57 11
-rw-r--r-- 1 root root 0 Jul 14 16:57 33
/data/data2:
total 16K
---------T 1 root root   0 Jul 14 16:58 11
-rw-r--r-- 1 root root   0 Jul 14 16:57 22
-rw-r--r-- 1 root root   0 Jul 14 16:57 44
-rw-r--r-- 1 root root 126 Jul 14 16:56 ddd
-rw-r--r-- 1 root root 126 Jul 14 16:56 jjj
-rw-r--r-- 1 root root 126 Jul 14 16:56 yy
/data/data3:
total 24K
---------T 1 root root   0 Jul 14 16:58 44
-rw-r--r-- 1 root root   0 Jul 14 16:57 55
-rw-r--r-- 1 root root   0 Jul 14 16:57 66
---------T 1 root root   0 Jul 14 16:58 ddd
-rw-r--r-- 1 root root 126 Jul 14 16:56 eee
---------T 1 root root   0 Jul 14 16:58 jjj
-rw-r--r-- 1 root root 126 Jul 14 16:56 pp
-rw-r--r-- 1 root root 126 Jul 14 16:56 uuu
/data/data4:
total 8.0K
---------T 1 root root 0 Jul 14 16:58 22
---------T 1 root root 0 Jul 14 16:58 yy
/data/data5:
total 4.0K
---------T 1 root root 0 Jul 14 16:58 55
/data/data6:
total 16K
---------T 1 root root 0 Jul 14 16:58 66
---------T 1 root root 0 Jul 14 16:58 eee
---------T 1 root root 0 Jul 14 16:58 pp
---------T 1 root root 0 Jul 14 16:58 uuu

The file names appeared in data4, data5, and data6, but with zero bytes. Why?
For directories I could understand it, but these were files, and the names that
appeared were not even all of the files in data1+data2+data3.
Why is the file distribution so tangled?
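Not an answer from the thread, but for context: the zero-byte sticky-bit ("T" mode) entries are what DHT link files look like, placeholder files on the subvolume a name now hashes to, pointing at the subvolume that actually holds the data. A rough recognizer, assuming that convention:

```python
import stat

# A dht linkfile is (by convention) a zero-byte regular file whose only
# permission bit is the sticky bit, shown by ls as mode "---------T".
def looks_like_dht_linkfile(mode, size):
    return stat.S_ISREG(mode) and bool(mode & stat.S_ISVTX) and size == 0

linkfile_mode = stat.S_IFREG | 0o1000  # ---------T, as in the listing above
normal_mode = stat.S_IFREG | 0o644     # -rw-r--r--

print(looks_like_dht_linkfile(linkfile_mode, 0))
print(looks_like_dht_linkfile(normal_mode, 116))
```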

2009-07-14 



eagleeyes 


[Gluster-users] Question of 2.0.3 with fuse 2.8.x

2009-07-07 Thread eagleeyes
HI:
   Does gluster 2.0.3 with fuse 2.8.x require FUSE API 7.6, or
kernel 2.6.26?


2009-07-08 



eagleeyes 


Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse 2.8 in kernel2.6.30 ,help !!!!!

2009-07-06 Thread eagleeyes
Hi,

  1. I use gluster 2.0.3rc2 with fuse init (API version 7.11) on SUSE 10 SP1,
kernel 2.6.30.
There were some error logs:
pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
patchset: 65524f58b29f0b813549412ba6422711a505f5d8
signal received: 11
configuration details:argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.3rc2
[0xe400]
/usr/local/lib/libfuse.so.2(fuse_session_process+0x26)[0xb752fb56]
/lib/glusterfs/2.0.3rc2/xlator/mount/fuse.so[0xb755de25]
/lib/libpthread.so.0[0xb7f0d2ab]
/lib/libc.so.6(__clone+0x5e)[0xb7ea4a4e]
-
  2. Using glusterfs 2.0.3rc2 with fuse init (API version 7.6) on SUSE 10 SP1,
kernel 2.6.16.21-0.8-smp:
when I expanded the dht volumes from four to six and then ran rm * in the
gluster directory, there were errors:
  
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 
1636: RMDIR() /scheduler = -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 
1643: RMDIR() /transport = -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 
1655: RMDIR() /xlators/cluster = -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 
1666: RMDIR() /xlators/debug = -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 
1677: RMDIR() /xlators/mount = -1 (No such file or directory) 

  Also, new files were not written to the new volumes after the expansion.




2009-07-06 



eagleeyes 



From: Anand Avati 
Sent: 2009-07-06  12:09:13 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in kernel2.6.30 
,help ! 
 
Please use 2.0.3 stable, or upgrade to the next rc2 until then. This
has been fixed in rc2.
Avati
On Mon, Jul 6, 2009 at 8:31 AM, eagleeyes eaglee...@126.com wrote:
 Hi,
I use gluster 2.0.3rc1 with fuse 2.8 on kernel
 2.6.30 (SUSE Linux Enterprise Server 10 SP1). The mount
 output was:

 /dev/hda4 on /data type reiserfs (rw,user_xattr)
 glusterfs-client.vol.dht on /home type fuse.glusterfs 
 (rw,allow_other,default_permissions,max_read=131072)



  There was an error when I ran touch 111 in the gluster directory; the error
 was:
  /home: Transport endpoint is not connected

 pending frames:
 patchset: e0db4ff890b591a58332994e37ce6db2bf430213
 signal received: 11
 configuration details:argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 2.0.3rc1
 [0xe400]
 /lib/glusterfs/2.0.3rc1/xlator/mount/fuse.so[0xb75c6288]
 /lib/glusterfs/2.0.3rc1/xlator/performance/write-behind.so(wb_create_cbk+0xa7)[0xb75ccad7]
 /lib/glusterfs/2.0.3rc1/xlator/performance/io-cache.so(ioc_create_cbk+0xde)[0xb7fbe8ae]
 /lib/glusterfs/2.0.3rc1/xlator/performance/read-ahead.so(ra_create_cbk+0x167)[0xb7fc78b7]
 /lib/glusterfs/2.0.3rc1/xlator/cluster/dht.so(dht_create_cbk+0xf7)[0xb75e25b7]
 /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(client_create_cbk+0x2ad)[0xb76004ad]
 /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_interpret+0x1ef)[0xb75ef8ff]
 /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_pollin+0xcf)[0xb75efaef]
 /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(notify+0x1ec)[0xb75f6ddc]
 /lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_poll_in+0x3b)[0xb75b775b]
 /lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_handler+0xae)[0xb75b7b8e]
 /lib/libglusterfs.so.0[0xb7facbda]
 /lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7fabac1]
 glusterfs(main+0xc2e)[0x804b6ae]
 /lib/libc.so.6(__libc_start_main+0xdc)[0xb7e6087c]
 glusterfs[0x8049c11]
 -

 the server configuration

 gfs1:/ # cat /etc/glusterfs/glusterfsd-sever.vol
 volume posix1
   type storage/posix   # POSIX FS translator
   option directory /data/data1# Export this directory
 end-volume
 volume posix2
   type storage/posix   # POSIX FS translator
   option directory /data/data2# Export this directory
 end-volume
 volume posix3
   type storage/posix   # POSIX FS translator
   option directory /data/data3# Export this directory
 end-volume
 volume posix4
   type storage/posix   # POSIX FS translator
   option directory /data/data4# Export this directory
 end-volume
 volume posix5
   type storage/posix   # POSIX FS translator
   option directory /data/data5# Export this directory
 end-volume
 volume posix6
   type storage/posix   # POSIX FS translator
   option directory /data/data6# Export this directory
 end-volume
 volume posix7
   type storage/posix   # POSIX FS

[Gluster-users] Error : gluster2.0.0 with fuse2.8 in kernel 2.6.30

2009-07-06 Thread eagleeyes
-protocol.c:6327:client_setvolume_cbk] client3: 
connection and handshake succeeded
2009-07-07 10:45:00 W [dht-layout.c:485:dht_layout_normalize] dht: directory / 
looked up first time
2009-07-07 10:45:00 W [dht-common.c:163:dht_lookup_dir_cbk] dht: fixing 
assignment on /
pending frames:
patchset: 7b2e459db65edd302aa12476bc73b3b7a17b1410
signal received: 11
configuration details:argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.0
[0xe400]
/usr/local/lib/libfuse.so.2(fuse_session_process+0x17)[0xb776125f]
/lib/glusterfs/2.0.0/xlator/mount/fuse.so[0xb778fee2]
/lib/tls/libpthread.so.0[0x8bc341]
/lib/tls/libc.so.6(__clone+0x5e)[0x74e6fe]
-
  

2009-07-07 



eagleeyes 


Re: [Gluster-users] Error : gluster2.0.0 with fuse2 .8 in kernel2.6.30

2009-07-06 Thread eagleeyes
I want to use Java NIO with mmap, so I had to update to kernel 2.6.27 or newer.
The fuse in kernel 2.6.30 is API 7.11.

  How can I give you the details with gdb? What should I do?


2009-07-07 



eagleeyes 



From: Anand Avati 
Sent: 2009-07-07  11:33:56 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] Error : gluster2.0.0 with fuse2.8 in kernel2.6.30 
 
  dmesg |grep -i fuse
 fuse init (API version 7.11)

We have not yet tested against 7.11 API. Our test cluster still uses
2.6.18 kernel. Our tests for 2.6.30 have not yet started. If you can
give us more details about the crash by giving a backtrace of the
coredump with gdb, we might be able to fix this sooner.
Avati


Re: [Gluster-users] Error : gluster2.0.0 with fuse2 .8 inkernel2.6.30

2009-07-06 Thread eagleeyes
gluster 2.0.3rc2, kernel 2.6.30, on SUSE Linux Enterprise Server 10 SP1 (i586) 

fuse init (API version 7.11)
 FUSE_MINOR_VERSION 8 

gfs1:/ # gdb glusterfs core 
GNU gdb 6.6
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i586-suse-linux...
Using host libthread_db library /lib/libthread_db.so.1.
warning: Can't read pathname for load map: Input/output error.
Reading symbols from /lib/libglusterfs.so.0...done.
Loaded symbols for /lib/libglusterfs.so.0
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libpthread.so.0...done.
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/glusterfs/2.0.3rc2/xlator/protocol/client.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/xlator/protocol/client.so
Reading symbols from /lib/glusterfs/2.0.3rc2/xlator/cluster/dht.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/xlator/cluster/dht.so
Reading symbols from 
/lib/glusterfs/2.0.3rc2/xlator/performance/read-ahead.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/xlator/performance/read-ahead.so
Reading symbols from 
/lib/glusterfs/2.0.3rc2/xlator/performance/io-cache.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/xlator/performance/io-cache.so
Reading symbols from 
/lib/glusterfs/2.0.3rc2/xlator/performance/write-behind.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/xlator/performance/write-behind.so
Reading symbols from /lib/glusterfs/2.0.3rc2/xlator/mount/fuse.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/xlator/mount/fuse.so
Reading symbols from /usr/local/lib/libfuse.so.2...done.
Loaded symbols for /usr/local/lib/libfuse.so.2
Reading symbols from /lib/librt.so.1...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/glusterfs/2.0.3rc2/transport/socket.so...done.
Loaded symbols for /lib/glusterfs/2.0.3rc2/transport/socket.so
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Reading symbols from /lib/libgcc_s.so.1...done.
Loaded symbols for /lib/libgcc_s.so.1
Core was generated by `glusterfs -f /etc/glusterfs/glusterfs-client.vol.dht --disable-direct-io-mode /'.
Program terminated with signal 11, Segmentation fault.
#0  0xb7584d38 in fuse_ll_process (data=0x805b6a8, buf=0xb7cae000 Y?, 
len=16217, ch=0x805b348) at fuse_lowlevel.c:1049
1049    if (curr->u.i.unique == req->unique) {
(gdb) bt
#0  0xb7584d38 in fuse_ll_process (data=0x805b6a8, buf=0xb7cae000 Y?, 
len=16217, ch=0x805b348) at fuse_lowlevel.c:1049
#1  0xb7587b56 in fuse_session_process (se=0x805b540, buf=0xb7cae000 Y?, 
len=16217, ch=0x805b348) at fuse_session.c:80
#2  0xb75b5e25 in fuse_thread_proc (data=0x804fd18) at fuse-bridge.c:2480
#3  0xb7f652ab in start_thread () from /lib/libpthread.so.0
#4  0xb7efca4e in clone () from /lib/libc.so.6
(gdb)  exit


2009-07-07 



eagleeyes 



From: Anand Avati 
Sent: 2009-07-07  11:44:28 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] Error : gluster2.0.0 with fuse2.8 in kernel 2.6.30 
 
 
 I want to use Java NIO with mmap, so I had to update to kernel 2.6.27 or
 newer. The FUSE in kernel 2.6.30 is API 7.11.
 
 How can I give you the details with gdb? What should I do?
Do you have a file in your system / with a name like /core. ? If you do, 
run this command -
sh$ gdb glusterfs /core.XXX
...
(gdb) bt
and give us the output.
Thanks,
Avati
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-05 Thread eagleeyes
gfs1:~ # cat /etc/glusterfs/glusterfsd-sever.vol 
volume posix1
  type storage/posix   # POSIX FS translator
  option directory /data/data1# Export this directory
end-volume
volume posix2
  type storage/posix   # POSIX FS translator
  option directory /data/data2# Export this directory
end-volume
volume posix3
  type storage/posix   # POSIX FS translator
  option directory /data/data3# Export this directory
end-volume
volume posix4
  type storage/posix   # POSIX FS translator
  option directory /data/data4# Export this directory
end-volume
volume posix5
  type storage/posix   # POSIX FS translator
  option directory /data/data5# Export this directory
end-volume
volume posix6
  type storage/posix   # POSIX FS translator
  option directory /data/data6# Export this directory
end-volume
volume posix7
  type storage/posix   # POSIX FS translator
  option directory /data/data7# Export this directory
end-volume
volume posix8
  type storage/posix   # POSIX FS translator
  option directory /data/data8# Export this directory
end-volume
volume brick1
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix1
end-volume
volume brick2
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix2
end-volume
volume brick3
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix3
end-volume
volume brick4
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix4
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.20.92.240 # Default is to listen 
on all interfaces
  option transport.socket.listen-port 6996  # Default is 6996
  subvolumes brick1 brick2 brick3 brick4  
  option auth.addr.brick1.allow * # Allow access to brick volume
  option auth.addr.brick2.allow * # Allow access to brick volume
  option auth.addr.brick3.allow * # Allow access to brick volume
  option auth.addr.brick4.allow * # Allow access to brick volume
end-volume


2009-07-06 



eagleeyes 



From: Sachidananda 
Sent: 2009-07-04  11:39:03 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion 
 
Hi,
eagleeyes wrote:
  When I updated to gluster 2.0.3, after DHT expansion, duplicate directories
  appear in the gluster directory. Why?
 
  client configure
  volume dht
type cluster/dht
option lookup-unhashed yes
option min-free-disk 10%
subvolumes client1 client2  client3 client4 client5 client6 client7 
client8
#subvolumes client1 client2  client3 client4
  end-volume
 
 
Can you please send us your server/client volume files?
--
Sachidananda.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-05 Thread eagleeyes
The  server configuration file is:

gfs1:~ # cat /etc/glusterfs/glusterfsd-sever.vol 
volume posix1
  type storage/posix   # POSIX FS translator
  option directory /data/data1# Export this directory
end-volume
volume posix2
  type storage/posix   # POSIX FS translator
  option directory /data/data2# Export this directory
end-volume
volume posix3
  type storage/posix   # POSIX FS translator
  option directory /data/data3# Export this directory
end-volume
volume posix4
  type storage/posix   # POSIX FS translator
  option directory /data/data4# Export this directory
end-volume
volume posix5
  type storage/posix   # POSIX FS translator
  option directory /data/data5# Export this directory
end-volume
volume posix6
  type storage/posix   # POSIX FS translator
  option directory /data/data6# Export this directory
end-volume
volume posix7
  type storage/posix   # POSIX FS translator
  option directory /data/data7# Export this directory
end-volume
volume posix8
  type storage/posix   # POSIX FS translator
  option directory /data/data8# Export this directory
end-volume
volume brick1
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix1
end-volume
volume brick2
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix2
end-volume
volume brick3
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix3
end-volume
volume brick4
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix4
end-volume

volume brick5
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix5
end-volume

volume brick6
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix6
end-volume

volume brick7
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix7
end-volume

volume brick8
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix8
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.20.92.240 # Default is to listen 
on all interfaces
  option transport.socket.listen-port 6996  # Default is 6996
  subvolumes brick1 brick2 brick3 brick4  brick5 brick6 brick7 brick8
  option auth.addr.brick1.allow * # Allow access to brick volume
  option auth.addr.brick2.allow * # Allow access to brick volume
  option auth.addr.brick3.allow * # Allow access to brick volume
  option auth.addr.brick4.allow * # Allow access to brick volume
  option auth.addr.brick5.allow * # Allow access to brick volume
  option auth.addr.brick6.allow * # Allow access to brick volume
  option auth.addr.brick7.allow * # Allow access to brick volume
  option auth.addr.brick8.allow * # Allow access to brick volume
end-volume


2009-07-06 



eagleeyes 



From: Sachidananda 
Sent: 2009-07-04  11:39:03 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion 
Hi,
eagleeyes wrote:
  When I updated to gluster 2.0.3, after DHT expansion, duplicate directories
  appear in the gluster directory. Why?
 
  client configure
  volume dht
type cluster/dht
option lookup-unhashed yes
option min-free-disk 10%
subvolumes client1 client2  client3 client4 client5 client6 client7 
client8
#subvolumes client1 client2  client3 client4
  end-volume
 
 
Can you please send us your server/client volume files?
--
Sachidananda.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Error : gluster2.0.3rc1 with fuse 2.8 in kernel 2.6.30 ,help !!!!!

2009-07-05 Thread eagleeyes
.allow * # Allow access to brick volume
  option auth.addr.brick7.allow * # Allow access to brick volume
  option auth.addr.brick8.allow * # Allow access to brick volume
end-volume

the client configuration:

gfs1:/ # cat /etc/glusterfs/glusterfs-client.vol.dht 
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240# IP address of the remote brick2
  option remote-port 6996
  option remote-subvolume brick1   # name of the remote volume
end-volume
volume client2
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240 # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick2   # name of the remote volume
end-volume
volume client3
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick3   # name of the remote volume
end-volume
volume client4
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick4   # name of the remote volume
end-volume
volume client5
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick1   # name of the remote volume
end-volume
volume client6
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick2   # name of the remote volume
end-volume
volume client7
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick3   # name of the remote volume
end-volume
volume client8
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick4   # name of the remote volume
end-volume
#volume afr3
#  type cluster/afr
#  subvolumes client3 client6
#end-volume
volume dht 
  type cluster/dht
  option lookup-unhashed yes
  subvolumes client1 client2  client3 client4  
end-volume

Could you help me ?



2009-07-06 



eagleeyes 



From: Sachidananda 
Sent: 2009-07-04  11:39:03 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion 
 
Hi,
eagleeyes wrote:
  When I updated to gluster 2.0.3, after DHT expansion, duplicate directories
  appear in the gluster directory. Why?
 
  client configure
  volume dht
type cluster/dht
option lookup-unhashed yes
option min-free-disk 10%
subvolumes client1 client2  client3 client4 client5 client6 client7 
client8
#subvolumes client1 client2  client3 client4
  end-volume
 
 
Can you please send us your server/client volume files?
--
Sachidananda.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-03 Thread eagleeyes
When I updated to gluster 2.0.3, after DHT expansion, duplicate directories appear 
in the gluster directory. Why?

client configure
volume dht 
  type cluster/dht
  option lookup-unhashed yes
  option min-free-disk 10%
  subvolumes client1 client2  client3 client4 client5 client6 client7 client8 
  #subvolumes client1 client2  client3 client4
end-volume


2009-07-02 



eagleeyes 



From: Anand Babu Periasamy 
Sent: 2009-07-01  13:41:20 
To: Anand Avati 
Cc: eagleeyes 
Subject: Re: HELP : Files lost after DHT expansion 
2.0.3 is scheduled for release tomorrow evening PST.
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]
Anand Avati wrote:
 
 Sorry, it is an intranet. Just as you say, the data is simply
 invisible on the mountpoint; when I specify a file name, it becomes
 visible. But in a production environment this behavior will cause
 issues for applications.

 
 This problem is solved in 2.0.3. Please upgrade and you should be able to see 
 your files again.
 
 Avati
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] HELP: problem of stripe expansion !!!!!!!!!!!!!!!!!!!!!!!!

2009-07-01 Thread eagleeyes
If I need high availability with AFR, should I use distribute + stripe + afr, 
like distribute(stripe(afr(2) 4) + stripe(afr(2) 4) + stripe(afr(2) 4))?


2009-07-01 



eagleeyes 



From: Anand Babu Periasamy 
Sent: 2009-07-01  14:01:14 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] HELP: problem of stripe expansion 
 
 
You cannot expand stripe directly.  You have to use
distribute + stripe, where you scale in stripe sets.
For example, if you have 8 nodes, you create
=  distribute(stripe(4)+stripe(4))
Now if you want to scale your storage cluster, you should do so
in stripe sets. Add 4 more nodes like this:
=  distribute(stripe(4)+stripe(4)+stripe(4))
Distributed-stripe not only makes stripe scalable, but also improves
load balancing and reduces disk contention.
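In volfile form, the 8-node example above could be sketched roughly like this (the volume names and client subvolumes here are illustrative, not taken from this thread):

```
# two stripe sets of four, distributed
volume stripe1
  type cluster/stripe
  subvolumes client1 client2 client3 client4
end-volume

volume stripe2
  type cluster/stripe
  subvolumes client5 client6 client7 client8
end-volume

volume dist
  type cluster/dht
  subvolumes stripe1 stripe2
end-volume
```

Scaling then means adding another stripe set (e.g. a stripe3 over four new clients) to the dht subvolumes line, rather than widening an existing stripe.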
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]
eagleeyes wrote:
 Hello:
    Today I tested stripe expansion: expanding two volumes to four volumes.
 When I vi or cat a file, the log shows:
 [2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
 client8 returned error No such file or directory
 [2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
 client7 returned error No such file or directory
 [2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client7 
 returned error No such file or directory
 [2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client8 
 returned error No such file or directory
 [2009-07-01 11:25:55] W [fuse-bridge.c:639:fuse_fd_cbk] glusterfs-fuse: 149: 
 OPEN() /file = -1 (No such file or directory)
  
 Is this a bug like the DHT expansion one? What should we do to deal with this 
 problem?
  
 My client config change was from subvolumes client5 client6 to 
 subvolumes client5 client6 client7 client8.
  
  
 2009-07-01
 
 eagleeyes
 
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Use of mod_glusterfs

2009-06-30 Thread eagleeyes
HELLO:
  Has anyone here used mod_glusterfs?
  I installed mod_glusterfs with Apache 2.2 following 
http://www.gluster.org/docs/index.php/Getting_modglusterfs_to_work step by 
step, but how do I use it? And how does its authentication work?

2009-06-30 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] HELP: problem of stripe expansion !!!!!!!!!!!!!!!!!!!!!!!!

2009-06-30 Thread eagleeyes
Hello:
   Today I tested stripe expansion: expanding two volumes to four volumes. When 
I vi or cat a file, the log shows:
[2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
client8 returned error No such file or directory
[2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
client7 returned error No such file or directory
[2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client7 
returned error No such file or directory
[2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client8 
returned error No such file or directory
[2009-07-01 11:25:55] W [fuse-bridge.c:639:fuse_fd_cbk] glusterfs-fuse: 149: 
OPEN() /file = -1 (No such file or directory)

Is this a bug like the DHT expansion one? What should we do to deal with this problem?
 
My client config change was from subvolumes client5 client6 to subvolumes 
client5 client6 client7 client8.


2009-07-01 



eagleeyes
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] HELP mmap or java nio

2009-06-29 Thread eagleeyes
HELLO
  An old question: does GlusterFS support FUSE's mmap operation? 
  Does anyone use Java NIO with mmap on Gluster? 


2009-06-29 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] HELP !!!!!!Files lost after DHT expansion!!!!!!!!!!

2009-06-28 Thread eagleeyes
Hello all:
 I have a problem with DHT expansion.
 When I expanded the DHT volumes, after remounting the filesystem on the 
clients, a bunch of the content in the cluster 
became unavailable! I tried option lookup-unhashed on and ls -aR, 
but to no effect. Am I missing something obvious in the setup and procedure for a
DHT expansion?
 I saw that someone else here has hit the same problem; is there a solution for 
it? Who can help us? Waiting for your help, thanks a lot.
 
My change, to expand DHT, was from subvolumes client1 client2 client3 client4 to 
subvolumes client1 client2 client3 client4 client5 client6 client7 client8.
 


2009-06-26 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Error when expand dht model volumes

2009-06-26 Thread eagleeyes
HI all:
I hit a problem expanding DHT volumes: I wrote into a DHT storage 
directory until it grew to 90% full, so I added four new volumes to the config 
file.
   But after starting again, some of the data in the directory disappeared. Why? Is 
there a special action required before expanding the volumes?
   
My client config file is this:
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240# IP address of the remote brick2
  option remote-port 6996
  option remote-subvolume brick1   # name of the remote volume
end-volume
volume client2
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240 # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick2   # name of the remote volume
end-volume
volume client3
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick3   # name of the remote volume
end-volume
volume client4
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick4   # name of the remote volume
end-volume
volume client5
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.184  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick1   # name of the remote volume
end-volume
volume client6
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.184  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick2   # name of the remote volume
end-volume
volume client7
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.184  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick3   # name of the remote volume
end-volume
volume client8
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.184  # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick4   # name of the remote volume
end-volume

volume dht 
  type cluster/dht
  #option lookup-unhashed yes
  option min-free-disk 10%
  subvolumes client1 client2  client3 client4 client5 client6 client7 client8 
end-volume
 
My server config file is this:
volume posix1
  type storage/posix   # POSIX FS translator
  option directory /data/data1# Export this directory
end-volume
volume posix2
  type storage/posix   # POSIX FS translator
  option directory /data/data2# Export this directory
end-volume
volume posix3
  type storage/posix   # POSIX FS translator
  option directory /data/data3# Export this directory
end-volume
volume posix4
  type storage/posix   # POSIX FS translator
  option directory /data/data4# Export this directory
end-volume
volume brick1
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix1
end-volume
volume brick2
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix2
end-volume
volume brick3
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix3
end-volume
volume brick4
  type features/posix-locks
  option mandatory-locks on  # enables mandatory locking on all files
  subvolumes posix4
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.20.92.240 # Default is to listen 
on all interfaces
  option transport.socket.listen-port 6996  # Default is 6996
  subvolumes brick1 brick2 brick3 brick4 
  option auth.addr.brick1.allow * # Allow access to brick volume
  option auth.addr.brick2.allow * # Allow access to brick volume
  option auth.addr.brick3.allow * # Allow access to brick volume
  option auth.addr.brick4.allow * # Allow access to brick volume
end-volume
2009-06-26 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files lost after DHT expansion!

2009-06-26 Thread eagleeyes
Is there a solution? I've hit the same problem. Does anyone have a fix? 


2009-06-26 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] help

2009-06-26 Thread eagleeyes
Hi friend:
  Could you tell me how you used 'features/trash'? I haven't managed to use it 
successfully; the error is:

[2009-06-26 17:16:15] D [xlator.c:598:xlator_set_type] xlator: 
/usr/local/lib/glusterfs/2.0.2/xlator/features/trash.so: cannot open shared 
object file: No such file or directory
[2009-06-26 17:16:15] E [spec.y:208:section_type] parser: Volume 'trash', line 
47: type 'features/trash' is not valid or not found on this machine

 Thanks a lot, waiting for your reply.
2009-06-26 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Limit of Glusterfs help

2009-06-24 Thread eagleeyes
HI:
Is there a limit on the number of servers that can be used as storage in Gluster?


2009-06-24 



eagleeyes 



From: gluster-users-request 
Sent: 2009-06-24  03:00:42 
To: gluster-users 
Cc: 
Subject: Gluster-users Digest, Vol 14, Issue 34 
Send Gluster-users mailing list submissions to
gluster-users@gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
gluster-users-requ...@gluster.org
You can reach the person managing the list at
gluster-users-ow...@gluster.org
When replying, please edit your Subject line so it is more specific
than Re: Contents of Gluster-users digest...
Today's Topics:
   1. Re: bailout after period of inactivity (mki-gluste...@mozone.net)
   2. AFR problem (maurizio oggiano)
--
Message: 1
Date: Tue, 23 Jun 2009 05:48:11 -0700
From: mki-gluste...@mozone.net
Subject: Re: [Gluster-users] bailout after period of inactivity
To: Vikas Gorur vi...@gluster.com
Cc: gluster-users@gluster.org
Message-ID: 20090623124811.gl3...@cyclonus.mozone.net
Content-Type: text/plain; charset=us-ascii
On Tue, Jun 23, 2009 at 07:22:31AM -0500, Vikas Gorur wrote:
  So there is a timeout, but whatever the cause is, it's triggered by
  long term inactivity. We never had any network problems.
  
  Other machines that access the filesystem on a regular basis do not
  show this problem. It's only the machine that get's used once in a while.
  The problem is reproducable, not a one time event.
  
 Thanks everyone for the reports. We will try to reproduce this and resolve
 the issue.
Perhaps adding SO_KEEPALIVE may help maintain the socket connection in the
event that there's some form of tcp session timeout happening on the
network and/or router (such as in the case of NAT)?
Just a thought.
Mohan
--
Message: 2
Date: Tue, 23 Jun 2009 16:50:10 +0200
From: maurizio oggiano oggiano.mauri...@gmail.com
Subject: [Gluster-users] AFR problem
To: gluster-users@gluster.org
Message-ID:
b251017f0906230750m747c8f0er9bc4e7968dd98...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1
Hi all,
I have a problem with the automatic file replication translator (AFR). I have two 
servers, A and B. Both servers have the AFR client configured.
If I stop one server, for example B, the filesystem managed by AFR is not
available for 30 seconds on server A.
 Below is the gluster-client.vol of one of the servers.
  volume TSU-1.localdomain-disk
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-subvolume disk
  end-volume
  volume TSU-2.localdomain-disk
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.1.48.51
  option remote-subvolume disk
  option transport-timeout 5
  end-volume
  volume disk-afr
  type cluster/afr
  subvolumes TSU-1.localdomain-disk TSU-2.localdomain-disk
  option favorite-child TSU-1.localdomain-disk
  end-volume
  volume writeback-disk
  type performance/write-behind
  option aggregate-size 131072
  subvolumes disk-afr
  end-volume
  volume readahead-disk
  type performance/read-ahead
  option page-size 65536
  option page-count 16
  subvolumes writeback-disk
  end-volume
the server has the following configuration:
# Volume #
  volume local-disk
  type storage/posix
  option directory /glusterfs/shared
  end-volume
  volume disk
  type features/posix-locks
  subvolumes local-disk
  end-volume
# Access Control #
  volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes disk
  option auth.ip.disk.allow *
  end-volume
Is there a way to avoid this behaviour?
Thanks
Maurizio
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
End of Gluster-users Digest, Vol 14, Issue 34
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Limit of Glusterfs help

2009-06-24 Thread eagleeyes
 Thanks a lot, but in unify, dht, and stripe modes, is there a limit on the 
number of servers?


2009-06-24 



eagleeyes 



From: Vikas Gorur 
Sent: 2009-06-24  15:17:55 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] Limit of Glusterfs help 
 
- eagleeyes eaglee...@126.com wrote:
 HI:
 Is there a limit on the number of servers that can be used as storage in
 Gluster?
No, there is no limit on the number of servers that can be used as storage.
Vikas
--
Engineer - http://gluster.com/
A: Because it messes up the way people read text.
Q: Why is a top-posting such a bad thing?
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Limit of Glusterfs help

2009-06-24 Thread eagleeyes
Thanks.
  Another question:
   1. In glusterfs-2.0.2/xlators/cluster I saw a translator named map. Can we 
use it? What should we do? 
   2. In glusterfs-2.0.0/xlators/features there is a trash translator; if we 
want to use it, what should we do? 
   3. Could you give us a complete list of configuration parameters for server 
and client? I found that when I use page-size and page-count, the log shows they 
have no effect.


2009-06-24 



eagleeyes 



From: Vikas Gorur 
Sent: 2009-06-24  15:47:27 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] Limit of Glusterfs help 
 
- eagleeyes eaglee...@126.com wrote:
 Thanks a lot, but in unify, dht, and stripe modes, is there a limit on
 the number of servers?
No.
--
Engineer - http://gluster.com/
A: Because it messes up the way people read text.
Q: Why is a top-posting such a bad thing?
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] features/trash

2009-06-18 Thread eagleeyes
Hello:
 I want to use features/trash on gluster 2.0.0 servers, but when I start the 
gluster server it tells me this:
 
2009-06-18 15:41:57 E [xlator.c:598:xlator_set_type] xlator: 
/lib/glusterfs/2.0.0/xlator/features/trash.so: cannot open shared object file:
 No such file or directory
2009-06-18 15:41:57 E [spec.y:211:section_type] parser: volume 'trash1', line 
27: type 'features/trash' is not valid or not found on this machine
 So I found that trash.so is in 
/lib/glusterfs/2.0.0/xlator/testing/features/trash.so. What should I do if I 
want to use features/trash? I tried copying the file from 
/lib/glusterfs/2.0.0/xlator/testing/features/ into 
/lib/glusterfs/2.0.0/xlator/features/, but it didn't work.
 Who can help me? Waiting for your reply! Thanks a lot.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] features/trash

2009-06-18 Thread eagleeyes
Thanks a lot 


2009-06-18 



eagleeyes 



From: Ate Poorthuis 
Sent: 2009-06-18  16:08:16 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] features/trash 
 
Hi,

You can just define the type as 'testing/features/trash' instead of 
'features/trash' in the vol file. Please realize that it is probably in testing 
for a reason - I don't know which one though.

Ate
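In volfile terms that change is just the type line; a sketch (the volume and subvolume names here are illustrative, matching the error message above rather than any config in this thread):

```
volume trash1
  type testing/features/trash   # note the testing/ prefix
  subvolumes posix1
end-volume
```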


2009/6/18 eagleeyes eaglee...@126.com

Hello:
 I want to use features/trash on gluster 2.0.0 servers, but when I start the 
gluster server it tells me this:
 
2009-06-18 15:41:57 E [xlator.c:598:xlator_set_type] xlator: 
/lib/glusterfs/2.0.0/xlator/features/trash.so: cannot open shared object file:
 No such file or directory
2009-06-18 15:41:57 E [spec.y:211:section_type] parser: volume 'trash1', line 
27: type 'features/trash' is not valid or not found on this machine
 so i find the trash.so was in 
/lib/glusterfs/2.0.0/xlator/testing/features/trash.so ,what should i do if i 
want use  features/trash  ? I try copy the file from 
/lib/glusterfs/2.0.0/xlator/testing/features/ into the 
/lib/glusterfs/2.0.0/xlator/features/,but it didn't work .
 Who can help me ? waiting for your return ! thanks a lot .



[Gluster-users] Issue with java nio when using GFS2.0.0

2009-06-13 Thread eagleeyes
Hello:
 When I use GFS 2.0.0 (fuse-2.7.4glfs11) with Alfresco, which uses NIO,
the error log says that FUSE doesn't support mmap.
What should I do? I found a related patch via Google:
http://git.kernel.org/?p=linux/kernel/git/tj/misc.git;a=commitdiff;h=0100688fe273d03fe79cff3901106a2ab2088ef3
  
 But I think FUSE already has mmap support in fuse-2.7.4glfs11; how can I
use it?



2009-06-13 



eagleeyes


[Gluster-users] ERROR in glusterfs 2.0.0 with suse xen creating image

2009-05-13 Thread eagleeyes
:31:18 D [inode.c:292:__inode_destroy] fuse/inode: destroy 
inode(93847579) [...@0x2ae020b0]
2009-05-14 11:31:18 D [inode.c:328:__inode_passivate] fuse/inode: passivating 
inode(95682574) lru=4/0 active=2 purge=0
2009-05-14 11:31:18 N [client-protocol.c:6327:client_setvolume_cbk] client2: 
connection and handshake succeeded
2009-05-14 11:31:18 N [afr.c:2120:notify] afr2: subvolume client2 came up
2009-05-14 11:31:18 N [client-protocol.c:6327:client_setvolume_cbk] client2: 
connection and handshake succeeded
2009-05-14 11:31:18 N [afr.c:2120:notify] afr2: subvolume client2 came up
2009-05-14 11:31:23 D [client-protocol.c:6413:client_protocol_reconnect] 
client2: breaking reconnect chain
2009-05-14 11:31:24 D [client-protocol.c:6413:client_protocol_reconnect] 
client2: breaking reconnect chain

The debug log on server :
2009-05-14 11:30:27 D [common.c:514:pl_setlk] brick1: Lock (pid=-1427108160) 0 
- 0 = OK
2009-05-14 11:30:27 D [server-protocol.c:4374:server_flush] brick1: 393329: 
FLUSH 'fd=1 (22003715)'
2009-05-14 11:30:27 D [server-protocol.c:6171:server_finodelk] brick1: 393330: 
FINODELK 'fd=1 (22003715)'
2009-05-14 11:30:27 D [common.c:514:pl_setlk] brick1: Unlock (pid=-1427108160) 
0 - 0 = OK
2009-05-14 11:30:27 D [inode.c:328:__inode_passivate] brick1/inode: passivating 
inode(22003715) lru=3/1024 active=3 purge=0
2009-05-14 11:30:27 D [server-protocol.c:4284:server_release] brick1: 393331: 
RELEASE 'fd=1'
2009-05-14 11:30:28 D [server-protocol.c:6200:server_entrylk_resume] brick1: 
154: ENTRYLK '/windows (22003717) '
2009-05-14 11:30:28 D [server-protocol.c:4931:server_xattrop_resume] brick1: 
393332: XATTROP '/windows (22003717)'
2009-05-14 11:30:28 D [inode.c:309:__inode_activate] brick1/inode: activating 
inode(22003718), lru=2/1024 active=4 purge=0
2009-05-14 11:30:28 D [server-protocol.c:4595:server_unlink_resume] brick1: 
39: UNLINK '22003717//windows/disk0 (22003718)'
2009-05-14 11:30:44 D [server-protocol.c:1563:server_unlink_cbk] brick1: 
39: UNLINK_CBK 22003717/disk0 (22003718)
2009-05-14 11:30:44 D [inode.c:112:__dentry_unhash] brick1/inode: dentry 
unhashed disk0 (22003718)
2009-05-14 11:30:44 D [inode.c:125:__dentry_unset] brick1/inode: unset dentry 
disk0 (22003718)
2009-05-14 11:30:44 D [inode.c:328:__inode_passivate] brick1/inode: passivating 
inode(22003718) lru=3/1024 active=3 purge=0
2009-05-14 11:30:44 D [socket.c:90:__socket_rwv] server: EOF from peer 
192.168.69.6:1019
2009-05-14 11:30:44 D [socket.c:562:__socket_proto_state_machine] server: read 
(Transport endpoint is not connected) in state 1 (192.168.69.6:1019)
2009-05-14 11:30:44 N [server-protocol.c:8272:notify] server: 192.168.69.6:1019 
disconnected
2009-05-14 11:30:44 D [internal.c:721:pl_entrylk] brick1: releasing locks for 
transport 0x9d63a10
2009-05-14 11:30:44 D [inode.c:328:__inode_passivate] brick1/inode: passivating 
inode(22003717) lru=4/1024 active=2 purge=0
2009-05-14 11:30:44 D [socket.c:1332:fini] server: transport 0x9d2bcb0 destroyed
2009-05-14 11:30:44 E [socket.c:102:__socket_rwv] server: writev failed (Broken 
pipe)
2009-05-14 11:30:44 D [addr.c:174:gf_auth] brick1: allowed = *, received addr 
= 192.168.69.6
2009-05-14 11:30:44 N [server-protocol.c:7502:mop_setvolume] server: accepted 
client from 192.168.69.6:1015
2009-05-14 11:30:44 D [socket.c:90:__socket_rwv] server: EOF from peer 
192.168.69.6:1017
2009-05-14 11:30:44 D [socket.c:562:__socket_proto_state_machine] server: read 
(Transport endpoint is not connected) in state 1 (192.168.69.6:1017)
2009-05-14 11:30:44 N [server-protocol.c:8272:notify] server: 192.168.69.6:1017 
disconnected
2009-05-14 11:30:44 D [socket.c:1332:fini] server: transport 0x9d2bf40 destroyed
2009-05-14 11:30:44 D [addr.c:174:gf_auth] brick1: allowed = *, received addr 
= 192.168.69.6
2009-05-14 11:30:44 N [server-protocol.c:7502:mop_setvolume] server: accepted 
client from 192.168.69.6:1014



2009-05-14 



eagleeyes 


[Gluster-users] Problem of afr in glusterfs 2.0.0rc1

2009-05-10 Thread eagleeyes
Hello:
 I have met this problem twice when copying files into the GFS space.
 I have five clients and two servers. When I copy files into /data (the
GFS space) on client A, the problem appears: in the same path, client A can
see all the files, but B, C, and D can't see all of them - some files seem
to be missing. But when I mount again, the files appear. Why? Has anybody
met a problem like that?
 My config file is this:
  
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.134 # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick1   # name of the remote volume
end-volume
volume client2
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.134 # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick2   # name of the remote volume
end-volume
volume client3
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.134 # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick3   # name of the remote volume
end-volume
volume client4
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.135 # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick1   # name of the remote volume
end-volume
volume client5
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.135 # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick2   # name of the remote volume
end-volume
volume client6
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.135 # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10  # seconds to wait for a reply
  option remote-subvolume brick3   # name of the remote volume
end-volume
volume afr1
  type cluster/afr
  subvolumes client1 client4
  option favorite-child client1
end-volume
volume afr2
  type cluster/afr
  subvolumes client2 client5
  option favorite-child client2
end-volume
volume afr3
  type cluster/afr
  subvolumes client3 client6
  option favorite-child client3
end-volume
volume dht
  type cluster/dht
  subvolumes afr1 afr2 afr3
end-volume
### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 1MB # unit in bytes
  option page-count 2   # cache per file  = (page-count x page-size)
  subvolumes dht
end-volume
### Add IO-Cache feature
volume iocache
  type performance/io-cache
  option page-size 256KB
  option page-count 2
  subvolumes readahead
end-volume
### Add writeback feature
volume writeback
  type performance/write-behind
  option block-size 1MB
  option cache-size 2MB
  option flush-behind off
  subvolumes iocache   
end-volume
 
 Is "option flush-behind off" the reason?
 Waiting for your help - it's urgent.
 Thanks a lot.
2009-05-11 



eagleeyes 


[Gluster-users] Stripe mode question

2009-04-13 Thread eagleeyes

Hello:
 I have a question about stripe mode. When I use stripe like this:

volume bricks
  type cluster/stripe
  option block-size 1MB
  subvolumes client1 client2 client3
end-volume

When I copy a 329 MB file, the file on client1, client2, and client3 is the
same size, not three fragments. What does stripe mode mean, then?
I want the file broken into 1 MB pieces.
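A note on what is likely being observed here: cluster/stripe does split the file into block-size chunks, but it typically writes each subvolume's file as a sparse file, so `ls -l` on every brick reports the full apparent size even though each brick only stores its own 1 MB blocks (compare with `du`). The round-robin placement can be sketched like this (an illustrative model, not GlusterFS source):

```python
# Sketch of round-robin block placement in cluster/stripe
# (illustrative model only, not the actual GlusterFS implementation).

BLOCK_SIZE = 1 << 20  # 1 MB, matching "option block-size 1MB"

def subvolume_for_offset(offset, n_subvolumes, block_size=BLOCK_SIZE):
    """Return the index of the subvolume that holds the byte at `offset`."""
    return (offset // block_size) % n_subvolumes

# With 3 subvolumes: blocks 0,3,6,... land on client1,
# blocks 1,4,7,... on client2, and blocks 2,5,8,... on client3.
print(subvolume_for_offset(0, 3))        # -> 0 (first 1 MB block)
print(subvolume_for_offset(1 << 20, 3))  # -> 1 (second block)
print(subvolume_for_offset(3 << 20, 3))  # -> 0 (fourth block wraps around)
```

So each brick holds roughly a third of a 329 MB file's data, even though the file's apparent size on each brick is the full 329 MB.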

2009-04-13 



eagleeyes 


[Gluster-users] ISCSI and Gluster

2009-03-23 Thread eagleeyes
Hello:
Has anyone used iSCSI with Gluster?
Or could Gluster work with iSCSI?


2009-03-23 



eagleeyes 


Re: [Gluster-users] Gluster2.0.rc1 and openfiler 2.3

2009-03-23 Thread eagleeyes
   
Who has used Openfiler with Gluster? Nobody?

2009-03-23 



eagleeyes 


Re: [Gluster-users] ISCSI and Gluster

2009-03-23 Thread eagleeyes
   What I have in mind is using a GlusterFS mount point as an iSCSI target,
not using an iSCSI drive as a GlusterFS backend.
   Do you have a solution like that?



2009-03-24 



eagleeyes 



From: Krishna Srinivas 
Sent: 2009-03-23  22:53:37 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] ISCSI and Gluster 
 
Sure, you can use it. Mount the iSCSI drive at a mount point and use that
path for "option directory" in the definition of a storage/posix volume.
Do you intend to set up iSCSI + glusterfs in any other way?
Krishna
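For example (device and path are assumed for illustration), after mounting the LUN on the server, the export would look like this:

```
# on the server, after e.g. `mount /dev/sdb1 /mnt/iscsi` (device/path assumed)
volume posix1
  type storage/posix
  option directory /mnt/iscsi   # the mounted iSCSI drive
end-volume
```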


[Gluster-users] ERROR of gluster.2.0.rc1 client on suse reiserfs

2009-03-18 Thread eagleeyes
HELLO:
   Has anybody met an error like "/bin/ls: /data: Structure needs cleaning",
which happens when running ls or other Linux commands in the GlusterFS
client mount directory? The client system is SUSE Linux Enterprise Server
10 SP2 (x86_64), kernel 2.6.16.60-0.21-xen.
glusterfs-2.0.0rc1 + fuse-2.7.4glfs11 + SUSE10 SP2 
~ # mount 
/dev/xvda3 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,size=8g)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/xvda1 on /boot type reiserfs (rw,acl,user_xattr)
securityfs on /sys/kernel/security type securityfs (rw)
glusterfs on /data type fuse 
(rw,max_read=1048576,allow_other,default_permissions)

Waiting for your help, thanks a lot.
   


2009-03-18 



eagleeyes 


[Gluster-users] Gluster2.0.rc1 and openfiler 2.3

2009-03-13 Thread eagleeyes
Hello:
 I have an Openfiler server which I configured as a GFS client, but I
couldn't see the GFS space in the Openfiler web interface. What should I do?
 Waiting for help, thanks a lot.


2009-03-13 



eagleeyes 


[Gluster-users] question of UNIFY

2009-03-11 Thread eagleeyes
Hello:
 I have a question about unify: how many servers can it support?
 And is the Distribute Hash Table translator still alive? I don't see it
in the examples for GFS 2.0.rc1.
2009-03-11 



eagleeyes 


Re: [Gluster-users] question of UNIFY

2009-03-11 Thread eagleeyes
Thanks a lot 


2009-03-12 



eagleeyes 



From: Basavanagowda Kanur 
Sent: 2009-03-11  18:13:58 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] question of UNIFY 
 
Eagleeyes,
  You can use as many servers as you want for unify, but the namespace will
be a bottleneck: you can only create as many files as the namespace allows.
  The distributed hash table (distribute) translator is in a working state,
and we recommend you use distribute instead of unify.

--
Gowda
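For reference, a minimal distribute setup over existing protocol/client volumes looks like this (the subvolume names are examples, in the style of the configs elsewhere in this thread):

```
volume dht
  type cluster/dht
  subvolumes client1 client2 client3   # example protocol/client volumes
end-volume
```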




[Gluster-users] What was the matter with my GFS 2.0

2009-03-04 Thread eagleeyes
Hello,
The DEBUG log looks like this:
 
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep-ns: no range check required for 'option metadata-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep-ns: no range check required for 'option entry-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep-ns: no range check required for 'option data-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep2: no range check required for 'option metadata-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep2: no range check required for 'option entry-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep2: no range check required for 'option data-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep1: no range check required for 'option metadata-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep1: no range check required for 'option entry-lock-server-count 2'
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] rep1: no range check required for 'option data-lock-server-count 2'
2009-03-04 16:16:47 D [client-protocol.c:6221:init] client1: setting transport-timeout to 5
2009-03-04 16:16:47 D [client-protocol.c:6235:init] client1: defaulting ping-timeout to 10
2009-03-04 16:16:47 D [transport.c:141:transport_load] transport: attempt to load file /lib/glusterfs/2.0.0rc2/transport/socket.so
2009-03-04 16:16:47 W [xlator.c:426:validate_xlator_volume_options] client1: option 'transport.socket.remote-port' is deprecated, preferred is 'remote-port', continuing with correction
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] client1: no range check required for 'option remote-port 6996'
2009-03-04 16:16:47 D [transport.c:141:transport_load] transport: attempt to load file /lib/glusterfs/2.0.0rc2/transport/socket.so
2009-03-04 16:16:47 D [xlator.c:154:_volume_option_value_validate] client1: no range check required for 'option remote-port 6996'
2009-03-04 16:16:47 D [xlator.c:595:xlator_init_rec] client1: Initialization done

 What does "no range check required for" mean? And why is the option
'transport.socket.remote-port' deprecated?
I modified the configuration files to use their own options.

   
   
  
2009-03-04 



eagleeyes 


Re: [Gluster-users] 1client+2server performance problem

2009-03-03 Thread eagleeyes
Thanks for your help.



2009-03-04 



eagleeyes 



From: Anand Avati 
Sent: 2009-03-03  19:18:22 
To: eagleeyes 
Cc: gluster-users 
Subject: Re: [Gluster-users] 1client+2server performance problem 
 

 How could I improve my performance? What should I do? Waiting for your
 reply, thanks a lot

Use the write-behind translator to improve write performance, and try a
newer version of GlusterFS.
Avati
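A write-behind volume of the kind suggested, modeled on the configs elsewhere in this thread (the sizes and the subvolume name are examples):

```
volume writeback
  type performance/write-behind
  option cache-size 2MB       # example size
  option flush-behind on
  subvolumes dht              # example: whatever volume sits below it
end-volume
```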