Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Joe Warren-Meeks
Hey guys,

Any clues or pointers with this problem? It's occurring every 6 hours or
so. Anything else I can do to help debug it?

Kind regards

 -- joe.
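
P.S. For the next occurrence I'm planning to capture more detail along
these lines -- a sketch, where the core_pattern path and DEBUG log level
are guesses at a sensible setup, not what we currently run:

  # let the glusterfs client dump core, written somewhere findable
  ulimit -c unlimited
  echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
  # remount with a more verbose log level than NORMAL
  /usr/local/sbin/glusterfs --log-level=DEBUG \
      --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import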


 -Original Message-
 From: gluster-users-boun...@gluster.org [mailto:gluster-users-
 boun...@gluster.org] On Behalf Of Joe Warren-Meeks
 Sent: 26 April 2010 12:31
 To: Vijay Bellur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Transport endpoint not connected
 
 Here is the relevant crash section:
 
 patchset: v3.0.4
 signal received: 11
 time of crash: 2010-04-23 21:40:40
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.0.4
 /lib/libc.so.6[0x7ffd0d809100]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7ffd0c968d22]
 /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7ffd0c5570a3]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7ffd0c346adb]
 /usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7ffd0df7cf60]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7ffd0c349938]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7ffd0c348251]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7ffd0c34a87a]
 /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
 /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf23a36]
 /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf246b6]
 /lib/libpthread.so.0[0x7ffd0db3f3f7]
 /lib/libc.so.6(clone+0x6d)[0x7ffd0d8aeb4d]
 
 And Startup section:
 
 =========================================================================
 
 Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
 git: v3.0.4
 Starting Time: 2010-04-26 10:00:59
 Command line : /usr/local/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import
 PID  : 5910
 System name  : Linux
 Nodename : w2
 Kernel Release : 2.6.24-27-server
 Hardware Identifier: x86_64
 
 Given volfile:

 +------------------------------------------------------------------------------+
   1: ## file auto generated by /usr/local/bin/glusterfs-volgen (mount.vol)
   2: # Cmd line:
   3: # $ /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 10.10.130.11:/data/export 10.10.130.12:/data/export
   4:
   5: # RAID 1
   6: # TRANSPORT-TYPE tcp
   7: volume 10.10.130.12-1
   8: type protocol/client
   9: option transport-type tcp
  10: option remote-host 10.10.130.12
  11: option transport.socket.nodelay on
  12: option transport.remote-port 6996
  13: option remote-subvolume brick1
  14: end-volume
  15:
  16: volume 10.10.130.11-1
  17: type protocol/client
  18: option transport-type tcp
  19: option remote-host 10.10.130.11
  20: option transport.socket.nodelay on
  21: option transport.remote-port 6996
  22: option remote-subvolume brick1
  23: end-volume
  24:
  25: volume mirror-0
  26: type cluster/replicate
  27: subvolumes 10.10.130.11-1 10.10.130.12-1
  28: end-volume
  29:
  30: volume readahead
  31: type performance/read-ahead
  32: option page-count 4
  33: subvolumes mirror-0
  34: end-volume
  35:
  36: volume iocache
  37: type performance/io-cache
  38: option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
  39: option cache-timeout 1
  40: subvolumes readahead
  41: end-volume
  42:
  43: volume quickread
  44: type performance/quick-read
  45: option cache-timeout 1
  46: option max-file-size 64kB
  47: subvolumes iocache
  48: end-volume
  49:
  50: volume writebehind
  51: type performance/write-behind
  52: option cache-size 4MB
  53: subvolumes quickread
  54: end-volume
  55:
  56: volume statprefetch
  57: type performance/stat-prefetch
  58: subvolumes writebehind
  59: end-volume
  60:
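
Side note on volfile line 38: the backtick expression sizes the io-cache
from MemTotal. A worked example of the arithmetic, assuming a box with
8 GB of RAM (MemTotal of roughly 8388608 kB):

  # MemTotal is reported in kB; dividing by 5120 gives ~MemTotal/5 in MB
  echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))MB
  # 8388608 / 5120 = 1638  ->  prints "1638MB", about a fifth of RAM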
 
  -Original Message-
  From: Vijay Bellur [mailto:vi...@gluster.com]
  Sent: 22 April 2010 18:40
  To: Joe Warren-Meeks
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] Transport endpoint not connected
 
  Hi Joe,
 
  Can you please share the complete client log file?
 
  Thanks,
  Vijay
 
 
  Joe Warren-Meeks wrote:
   Hey guys,
  
  
  
   I've recently implemented gluster to share web content read-write
  between
   two servers.
  
  
  
   Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
  
   Fuse: 2.7.2-1ubuntu2.1
  
   Platform: ubuntu 8.04LTS
  
  
  
   I used the following command to generate my configs:
  
   /usr/local/bin/glusterfs-volgen --name repstore1 

Re: [Gluster-users] heavy metadata issues?

2010-04-28 Thread Moore, Michael
Hi Joe,

  We've been seeing the same thing here with several GlusterFS setups. It is
really bad with GigE under load. From looking at our setups, it appears to be
a network latency/contention issue. It is definitely not disk contention, as
there is not much load on the disks when the metadata performance issues
occur.

  Like you, we've tried a number of different combinations to help improve the 
performance.  statprefetch will help with subsequent accesses to the same 
metadata, but it doesn't help with the initial access.

  We have not tried Infiniband yet to see if it improves the performance, but
I am concerned that in a high-IOPS environment (like we run here) even IB
will not be able to sustain performance.
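
  For what it's worth, the comparison below is roughly how we rule the
disks out; a sketch, with placeholder paths standing in for one of our
volumes:

  # time the same metadata-heavy operation against the brick's local
  # filesystem and through the gluster mount; a large gap while the disks
  # sit idle points at the network rather than disk contention
  time ls -l /data/export/somedir > /dev/null   # local brick (placeholder)
  time ls -l /mnt/gluster/somedir > /dev/null   # gluster mount (placeholder)
  iostat -x 5 3                                 # disk utilization meanwhile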

- Mike


Michael Moore
Staff Engineer - Network
SOLiD Advanced Research and Collaborations
500 Cummings Center
Suite 2400
Beverly, MA 01915
T: 978-232-7886

michael.mo...@lifetech.com
mike.mo...@appliedbiosystems.com




-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Joe Landman
Sent: Tuesday, April 27, 2010 7:19 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] heavy metadata issues?

Hi folks

  We are seeing some metadata slowdowns in some cases on GlusterFS 3.0.3.
These slowdowns manifest as long ls, stat, and rm times for directories with
a few thousand files.

  We tried statprefetch at the end of the stack, as well as right after
protocol/client, and tried increasing read-ahead, etc.

  Is there anything in particular we should be doing in order to get
reasonable metadata performance under load?
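
  For concreteness, the pattern looks like this; the directory names are
placeholders:

  # on the gluster mount, in a directory with a few thousand files
  cd /mnt/gluster/bigdir       # placeholder path
  time ls -l > /dev/null       # the long listing forces a stat per file
  time stat ./* > /dev/null
  time rm -rf scratch/         # removing a few thousand files drags too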

  Thanks!

Joe

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] server ver 3.0.4 crashes

2010-04-28 Thread Mickey Mazarick
Did a straight install, and the ibverbs instance crashes after a single
connection attempt. Are there any bugs that would cause this behavior?


All the log tells me is:

pending frames:
frame : type(2) op(SETVOLUME)

patchset: v3.0.4
signal received: 11
time of crash: 2010-04-28 10:41:08
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.4
/lib64/tls/libc.so.6[0x33c0a2e2b0]
/usr/local/lib/libglusterfs.so.0(dict_unserialize+0x111)[0x2b662221d281]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(mop_setvolume+0xa2)[0x2b6622f246e2]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(protocol_server_interpret+0x1aa)[0x2b6622f25a0a]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(protocol_server_pollin+0x8b)[0x2b6622f268cb]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(notify+0x100)[0x2b6622f26aa0]
/usr/local/lib/libglusterfs.so.0(xlator_notify+0x94)[0x2b661b24]
/usr/local/lib/glusterfs/3.0.4/transport/ib-verbs.so[0x2aaaf048]
/lib64/tls/libpthread.so.0[0x33c1906137]
/lib64/tls/libc.so.6(__clone+0x73)[0x33c0ac7113]
-

--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] server ver 3.0.4 crashes

2010-04-28 Thread Tejas N. Bhise
Hi Mickey,

Please open a defect in bugzilla. Someone from the dev team will have a look at 
it soon.

Regards,
Tejas.
- Original Message -
From: Mickey Mazarick m...@digitaltadpole.com
To: Gluster Users gluster-users@gluster.org
Sent: Wednesday, April 28, 2010 8:14:53 PM
Subject: [Gluster-users] server ver 3.0.4 crashes

Did a straight install, and the ibverbs instance crashes after a single
connection attempt. Are there any bugs that would cause this behavior?

All the log tells me is:

pending frames:
frame : type(2) op(SETVOLUME)

patchset: v3.0.4
signal received: 11
time of crash: 2010-04-28 10:41:08
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.4
/lib64/tls/libc.so.6[0x33c0a2e2b0]
/usr/local/lib/libglusterfs.so.0(dict_unserialize+0x111)[0x2b662221d281]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(mop_setvolume+0xa2)[0x2b6622f246e2]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(protocol_server_interpret+0x1aa)[0x2b6622f25a0a]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(protocol_server_pollin+0x8b)[0x2b6622f268cb]
/usr/local/lib/glusterfs/3.0.4/xlator/protocol/server.so(notify+0x100)[0x2b6622f26aa0]
/usr/local/lib/libglusterfs.so.0(xlator_notify+0x94)[0x2b661b24]
/usr/local/lib/glusterfs/3.0.4/transport/ib-verbs.so[0x2aaaf048]
/lib64/tls/libpthread.so.0[0x33c1906137]
/lib64/tls/libc.so.6(__clone+0x73)[0x33c0ac7113]
-

-- 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Anand Avati
Joe,
  Do you have access to the core dump from the crash? If you do,
please post the output of 'thread apply all bt full' within gdb on the
core.

Thanks,
Avati

On Wed, Apr 28, 2010 at 2:26 PM, Joe Warren-Meeks
j...@encoretickets.co.uk wrote:
 Hey guys,

 Any clues or pointers with this problem? It's occurring every 6 hours or
 so. Anything else I can do to help debug it?

 Kind regards

  -- joe.


 -Original Message-
 From: gluster-users-boun...@gluster.org [mailto:gluster-users-
 boun...@gluster.org] On Behalf Of Joe Warren-Meeks
 Sent: 26 April 2010 12:31
 To: Vijay Bellur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Transport endpoint not connected

 Here is the relevant crash section:

 patchset: v3.0.4
 signal received: 11
 time of crash: 2010-04-23 21:40:40
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.0.4
 /lib/libc.so.6[0x7ffd0d809100]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7ffd0c968d22]
 /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7ffd0c5570a3]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7ffd0c346adb]
 /usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7ffd0df7cf60]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7ffd0c349938]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7ffd0c348251]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7ffd0c34a87a]
 /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
 /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf23a36]
 /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf246b6]
 /lib/libpthread.so.0[0x7ffd0db3f3f7]
 /lib/libc.so.6(clone+0x6d)[0x7ffd0d8aeb4d]

 And Startup section:

 =========================================================================
 
 Version      : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
 git: v3.0.4
 Starting Time: 2010-04-26 10:00:59
 Command line : /usr/local/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import
 PID          : 5910
 System name  : Linux
 Nodename     : w2
 Kernel Release : 2.6.24-27-server
 Hardware Identifier: x86_64

 Given volfile:

 +------------------------------------------------------------------------------+
   1: ## file auto generated by /usr/local/bin/glusterfs-volgen (mount.vol)
   2: # Cmd line:
   3: # $ /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1 10.10.130.11:/data/export 10.10.130.12:/data/export
   4:
   5: # RAID 1
   6: # TRANSPORT-TYPE tcp
   7: volume 10.10.130.12-1
   8:     type protocol/client
   9:     option transport-type tcp
  10:     option remote-host 10.10.130.12
  11:     option transport.socket.nodelay on
  12:     option transport.remote-port 6996
  13:     option remote-subvolume brick1
  14: end-volume
  15:
  16: volume 10.10.130.11-1
  17:     type protocol/client
  18:     option transport-type tcp
  19:     option remote-host 10.10.130.11
  20:     option transport.socket.nodelay on
  21:     option transport.remote-port 6996
  22:     option remote-subvolume brick1
  23: end-volume
  24:
  25: volume mirror-0
  26:     type cluster/replicate
  27:     subvolumes 10.10.130.11-1 10.10.130.12-1
  28: end-volume
  29:
  30: volume readahead
  31:     type performance/read-ahead
  32:     option page-count 4
  33:     subvolumes mirror-0
  34: end-volume
  35:
  36: volume iocache
  37:     type performance/io-cache
  38:     option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
  39:     option cache-timeout 1
  40:     subvolumes readahead
  41: end-volume
  42:
  43: volume quickread
  44:     type performance/quick-read
  45:     option cache-timeout 1
  46:     option max-file-size 64kB
  47:     subvolumes iocache
  48: end-volume
  49:
  50: volume writebehind
  51:     type performance/write-behind
  52:     option cache-size 4MB
  53:     subvolumes quickread
  54: end-volume
  55:
  56: volume statprefetch
  57:     type performance/stat-prefetch
  58:     subvolumes writebehind
  59: end-volume
  60:

  -Original Message-
  From: Vijay Bellur [mailto:vi...@gluster.com]
  Sent: 22 April 2010 18:40
  To: Joe Warren-Meeks
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] Transport endpoint not connected
 
  Hi Joe,
 
  Can you please share the complete client log file?
 
  Thanks,
  Vijay
 
 
  Joe Warren-Meeks wrote:
   Hey guys,
  
  
  
    I've recently implemented gluster to share web content read-write
  between
   two servers.
  
  
  
   Version      : 

Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Anand Avati
 Here you go!

 Anything else I can do?

Joe, can you please rerun the gdb command as:

# gdb /usr/local/sbin/glusterfs -c /core.13560

Without the glusterfs binary given as a parameter, the backtrace is missing
all the symbols, and the numerical addresses alone are not very useful.
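
A non-interactive equivalent, if that's easier to capture (the output file
name is just a suggestion):

# gdb /usr/local/sbin/glusterfs -c /core.13560 --batch \
      -ex 'thread apply all bt full' > backtrace.txt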

Thanks,
Avati
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Joe Warren-Meeks
Oops, I'm an idiot, sorry about that... here you go!


Thread 3 (process 13560):
#0  0x7ffd0db44402 in ?? () from /lib/libpthread.so.0
No symbol table info available.
#1  0x7ffd0c7600ff in ioc_open_cbk (frame=0x122c6a0, cookie=0x122c7d0,
    this=0x6104e0, op_ret=0, op_errno=117, fd=0x12af1f0) at io-cache.c:474
        tmp_ioc_inode = 0
        local = (ioc_local_t *) 0x122c700
        table = <value optimized out>
        ioc_inode = <value optimized out>
        inode = (inode_t *) 0x61d220
        weight = <value optimized out>
        path = 0x12af130 "/docs/cbolds/PAGES/V1/THEATRENET"
        __FUNCTION__ = "ioc_open_cbk"
#2  0x7ffd0c96a395 in ra_open_cbk (frame=0x122c7d0,
    cookie=<value optimized out>, this=<value optimized out>, op_ret=0,
    op_errno=117, fd=0x12af1f0) at read-ahead.c:116
        fn = (fop_open_cbk_t) 0x7ffd0c760090 <ioc_open_cbk>
        _parent = (call_frame_t *) 0x122c6a0
        old_THIS = (xlator_t *) 0x60fbe0
        file = <value optimized out>
        __FUNCTION__ = "ra_open_cbk"
#3  0x7ffd0cb8f4a2 in afr_open_cbk (frame=0x122c830, cookie=0x1,
    this=<value optimized out>, op_ret=<value optimized out>,
    op_errno=<value optimized out>, fd=0x12af1f0) at afr-open.c:140
        fn = (fop_open_cbk_t) 0x7ffd0c96a1f0 <ra_open_cbk>
        _parent = (call_frame_t *) 0x122c7d0
        old_THIS = (xlator_t *) 0x60f390
        __local = (afr_local_t *) 0x122c890
        __this = (xlator_t *) 0x60f390
        local = (afr_local_t *) 0x122c890
        ctx = 140724576331344
        fd_ctx = <value optimized out>
        ret = <value optimized out>
        call_count = <value optimized out>
        __FUNCTION__ = "afr_open_cbk"
#4  0x7ffd0cdbe27b in client_open_cbk (frame=0x12af600,
    hdr=<value optimized out>, hdrlen=<value optimized out>,
    iobuf=<value optimized out>) at client-protocol.c:4090
        fn = (ret_fn_t) 0x7ffd0cb8f280 <afr_open_cbk>
        _parent = (call_frame_t *) 0x122c830
        old_THIS = (xlator_t *) 0x60e110
        op_ret = 0
        op_errno = 0
        fd = (fd_t *) 0x12af1f0
        local = (client_local_t *) 0x12af840
        fdctx = <value optimized out>
        ino = 3793176
        gen = 5462270833604952119
#5  0x7ffd0cdb8a2a in protocol_client_pollin (this=0x60e110,
    trans=0x6151d0) at client-protocol.c:6827
        conf = (client_conf_t *) 0x614bd0
        ret = 0
        iobuf = (struct iobuf *) 0x0
        hdr = 0x7ffcfe6249a0 
        hdrlen = 116
#6  0x7ffd0cdbfc7a in notify (this=0x61d228, event=<value optimized out>,
    data=0x6151d0) at client-protocol.c:6946
        ret = <value optimized out>
        child_down = <value optimized out>
        was_not_down = <value optimized out>
        trans = (transport_t *) 0x122c7d0
        conn = <value optimized out>
        conf = (client_conf_t *) 0x614bd0
        parent = <value optimized out>
        __FUNCTION__ = "notify"
#7  0x7ffd0df6e033 in xlator_notify (xl=0x60e110, event=2, data=0x6151d0)
    at xlator.c:924
        old_THIS = (xlator_t *) 0x7ffd0e19ec00
        ret = <value optimized out>
#8  0x7ffd0bd1345b in socket_event_handler (fd=<value optimized out>,
    idx=3, data=0x6151d0, poll_in=1, poll_out=0, poll_err=0) at socket.c:831
        this = (transport_t *) 0x61d228
        priv = (socket_private_t *) 0x615540
        ret = 0
#9  0x7ffd0df8784a in event_dispatch_epoll (event_pool=0x609930)
    at event.c:804
        events = (struct epoll_event *) 0x6166e0
        i = 1
        ret = 2
        __FUNCTION__ = "event_dispatch_epoll"
#10 0x00404533 in main (argc=4, argv=0x7fff5b5ccac8) at glusterfsd.c:1425
        ctx = (glusterfs_ctx_t *) 0x608010
        cmd_args = <value optimized out>
        stbuf = {st_dev = 0, st_ino = 140734726193120, st_nlink = 0,
          st_mode = 236629549, st_uid = 32765, st_gid = 0, pad0 = 0,
          st_rdev = 0, st_size = 0, st_blksize = 140724840091158,
          st_blocks = 140734726194672,
          st_atim = {tv_sec = 140724840082928, tv_nsec = 140734726194735},
          st_mtim = {tv_sec = 140734726194720, tv_nsec = 140734726194712},
          st_ctim = {tv_sec = 140724842257208, tv_nsec = 4199943},
          __unused = {0, 0, 140724829864661}}
        tmp_logfile = '\0' <repeats 1023 times>
        tmp_logfile_dyn = <value optimized out>
        tmp_logfilebase = <value optimized out>
        timestr = '\0' <repeats 255 times>
        utime = 1271949306
        tm = <value optimized out>
        ret = 0
        lim = {rlim_cur = 18446744073709551615,
          rlim_max = 18446744073709551615}
        specfp = <value optimized out>
        graph = (xlator_t *) 0x609e70
        trav = <value optimized out>
        fuse_volume_found = 0
        xl_count = <value optimized out>
        pipe_fd = {6, 7}
        gf_success = 0
        gf_failure = -1
        __FUNCTION__ = "main"

Thread 2 (process 13561):
#0  0x7ffd0d874b81 in nanosleep () from /lib/libc.so.6
No symbol table info available.
#1  0x7ffd0d8a8584 in usleep () from /lib/libc.so.6
No symbol table info available.
#2  0x7ffd0df79d63 in 

Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Anand Avati
On Wed, Apr 28, 2010 at 11:15 PM, Joe Warren-Meeks
j...@encoretickets.co.uk wrote:
 Oops, I'm an idiot, sorry about that... here you go!


Thanks! We have a good understanding of the issue now. Please add
yourself to the CC list at
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=868 to get
fix updates.

Avati
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] modprobe fuse; device or resource busy

2010-04-28 Thread Dan Bretherton
I am getting the following error when trying to load fuse.ko built
from fuse-2.7.4:

FATAL: Error inserting fuse
(/lib/modules/2.6.21.5-clustervision-181_cvos/kernel/fs/fuse/fuse.ko):
Device or resource busy

Upgrading the kernel would be difficult as the machine in question is
a network booting cluster compute node.  It has a customised kernel
without fuse built in.  Can anybody tell me what I am doing wrong?  I
have successfully used fuse-2.7.4 and earlier versions with older
kernels in the past, but have never encountered this error.
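
Is there a way to tell whether fuse is already registered?  This is the
sort of check I have in mind (a sketch; the module path is my guess):

  lsmod | grep fuse              # is a fuse module already loaded?
  grep fuse /proc/filesystems    # is fuse registered, perhaps built in?
  modprobe -r fuse               # unload any stale module first
  insmod /lib/modules/$(uname -r)/kernel/fs/fuse/fuse.ko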

-Dan Bretherton.

-- 
Mr. D.A. Bretherton
Reading e-Science Centre
Environmental Systems Science Centre
Harry Pitt Building
3 Earley Gate
University of Reading
Reading, RG6 6AL
UK

Tel. +44 118 378 7722
Fax: +44 118 378 6413
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster bandwidth usage -

2010-04-28 Thread Craig Carl

Gluster-users -

First I'd like to introduce myself: I'm Craig Carl, the new sales engineer
here at Gluster Inc. I've been a Gluster user and a community/mailing list
participant for a long time, and I'm very excited to be working with the
incredible team here at Gluster Inc. and all of the members of the Gluster
community. A strong community is vital to the success of any open source
software project, and I want to make sure you all know that your input and
support are recognized and very much appreciated. Everyone here in the
office is subscribed to this list, and we are always glad to get feedback
from you, good or otherwise.

More on topic -

Disk and processor speeds have increased by leaps and bounds in the last ten
years, while network connection speeds have stagnated; 1000Base-T is over
ten years old (1). Because of this, Gluster users usually hit some sort of
bandwidth limit long before the storage node's CPU, memory, or disk I/O
limits are reached. Bonding 1Gb interfaces can help, but the law of
diminishing returns kicks in very quickly.

We would like to know if bandwidth saturation is impacting your Gluster 
cluster. 


• Do you monitor and log the storage nodes' IP or IB interfaces for
  bandwidth utilization? (A minimal sampling sketch follows this list.)
• Do the interfaces look saturated? How does a saturated link affect your
  access to the data?
• Do you have MRTG, Cacti, or any other bandwidth monitoring tools
  installed, and do you suspect network saturation is an issue at your site?
• Have you had a bandwidth problem and solved it?
• Would you be willing to share that data with us?
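
If you don't have monitoring in place, even a rough sample is useful;
something along these lines, with the interface name as a placeholder:

  # sample throughput once per second for ten seconds with sysstat;
  # sustained values near line rate suggest the link is saturated
  sar -n DEV 1 10 | awk '$2 == "eth0" {print $5, $6}'   # rxkB/s txkB/s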

Please let us know!

Any information you have related to bandwidth and Gluster would be very
helpful. If you want to send a response directly to me, please do; your
data will be kept private.

If you live or work in the San Francisco Bay Area, I'd love to talk to you
about how you are using Gluster; the beer(s) is on me, so please get in
touch!

Thank you all very much for your continued support.

Craig

(1) http://en.wikipedia.org/wiki/802.3ab#History

-- 
Thanks, 

Craig Carl 
Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Office - (408) 770-1884 
Gtalk - craig.c...@gmail.com 
Twitter - @gluster 
http://www.gluster.com/files/installation-demo/demo.html 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Data

2010-04-28 Thread Tejas N. Bhise
Hi Brad,

GlusterFS does not proactively migrate data when a node is added, but there
is a defrag script that allows an admin to do that.

Please see the defrag and scale-and-defrag scripts here -

http://ftp.gluster.com/pub/gluster/glusterfs/misc/defrag/
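
If it saves anyone a step, the scripts can be fetched and reviewed like
this (a sketch; the exact file names under that directory are assumptions,
so list the directory first):

  wget -r -np -nd -P /tmp/gluster-defrag \
      http://ftp.gluster.com/pub/gluster/glusterfs/misc/defrag/
  less /tmp/gluster-defrag/*.sh    # read before running against live data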

Regards,
Tejas.
- Original Message -
From: Brad Alexander b...@servosity.com
To: gluster-users@gluster.org
Sent: Saturday, April 24, 2010 11:36:10 PM
Subject: [Gluster-users] Data

Good Afternoon,

Does gluster proactively migrate data onto new devices in order to
maintain a balanced distribution of data when new bricks are added?

Thanks.

Brad Alexander


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users