Re: [Gluster-users] df causes hang

2011-02-07 Thread Joe Warren-Meeks

And that fixed my issue too!

Kind regards

 -- joe.

-Original Message-
From: phil cryer [mailto:p...@cryer.us] 
Sent: 04 February 2011 17:49
To: Anand Avati
Cc: Joe Warren-Meeks; gluster-users@gluster.org
Subject: Re: [Gluster-users] df causes hang

On Thu, Feb 3, 2011 at 11:02 PM, Anand Avati anand.av...@gmail.com wrote:
 Ah! You must be mounting it wrong. Please mount it from a server (not using
 a volfile):
 mount -t glusterfs SERVER:/vol /mnt
 or
 glusterfs -s SERVER --volfile-id vol /mnt
 That should fix it.
 Avati

And that's it! The command I was using was getting info from the old
(pre-3.x) setup in fstab. This command worked for me:
mount -t glusterfs clustr-01:bhl-volume /mnt/glusterfs

and now df -h works, and I can see my files:
df -h | tail -n1
clustr-01:bhl-volume   96T   85T   11T  90% /mnt/glusterfs
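For completeness, a server-based mount can also be made persistent in /etc/fstab. A hypothetical entry matching the command above (the _netdev option is my addition; it is commonly used so the mount waits for networking at boot):

```
clustr-01:bhl-volume  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0
```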

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] df causes hang

2011-01-17 Thread Joe Warren-Meeks
Hey chaps,

Anyone got any pointers as to what this might be? This is still causing
a lot of problems for us whenever we attempt to do df.

 -- joe.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Joe Warren-Meeks
Sent: 15 January 2011 11:41
To: gluster-users@gluster.org
Subject: [Gluster-users] df causes hang

Hey guys,

 

I've been using glusterfs to share a volume between two webservers
happily for quite a while.

 

However, for some reason, they've got into a bit of a state such that
typing 'df -k' causes both to hang, resulting in a loss of service for 42
seconds. I see the following messages in the log files:

 

Any ideas what might be causing this?

 

Server1

 

Glusterfs.log: (i.e. the client log)

[2011-01-15 11:22:54] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(LOOKUP)

[2011-01-15 11:22:54] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(LOOKUP)

[2011-01-15 11:22:54] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(LOOKUP)

[2011-01-15 11:22:54] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(LOOKUP)

[2011-01-15 11:22:54] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(2) op(PING)

[2011-01-15 11:22:54] N [client-protocol.c:6976:notify] 10.10.130.11-1:
disconnected

[2011-01-15 11:22:54] N [client-protocol.c:6228:client_setvolume_cbk]
10.10.130.11-1: Connected to 10.10.130.11:6996, attached to remote
volume 'brick1'.

[2011-01-15 11:22:54] N [client-protocol.c:6228:client_setvolume_cbk]
10.10.130.11-1: Connected to 10.10.130.11:6996, attached to remote
volume 'brick1'.

 

Glusterfsd.log:

[2011-01-15 11:22:54] N [server-protocol.c:6748:notify] server-tcp:
10.10.130.12:1023 disconnected

[2011-01-15 11:22:54] N [server-protocol.c:6748:notify] server-tcp:
10.10.130.11:1022 disconnected

[2011-01-15 11:22:54] N [server-protocol.c:6748:notify] server-tcp:
10.10.130.12:1022 disconnected

[2011-01-15 11:22:54] N [server-helpers.c:842:server_connection_destroy]
server-tcp: destroyed connection of
w3-4176-2010/10/19-06:35:34:26343-10.10.130.11-1

[2011-01-15 11:22:54] N [server-protocol.c:6748:notify] server-tcp:
10.10.130.11:1018 disconnected

[2011-01-15 11:22:54] N [server-helpers.c:842:server_connection_destroy]
server-tcp: destroyed connection of
w2-827-2011/01/15-11:09:38:7996-10.10.130.11-1

[2011-01-15 11:22:54] N [server-protocol.c:5812:mop_setvolume]
server-tcp: accepted client from 10.10.130.12:1019

[2011-01-15 11:22:54] N [server-protocol.c:5812:mop_setvolume]
server-tcp: accepted client from 10.10.130.12:1018

[2011-01-15 11:22:54] N [server-protocol.c:5812:mop_setvolume]
server-tcp: accepted client from 10.10.130.11:1023

[2011-01-15 11:22:54] N [server-protocol.c:5812:mop_setvolume]
server-tcp: accepted client from 10.10.130.11:1019

 

 

Server2

Client log:

[2011-01-15 11:21:47] E
[client-protocol.c:415:client_ping_timer_expired] 10.10.130.11-1: Server
10.10.130.11:6996 has not responded in the last 42 seconds,
disconnecting.

[2011-01-15 11:21:47] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(STATFS)

[2011-01-15 11:21:47] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(LOOKUP)

[2011-01-15 11:21:47] E [saved-frames.c:165:saved_frames_unwind]
10.10.130.11-1: forced unwinding frame type(1) op(LOOKUP)

[2011-01-15 11:21:47] N [client-protocol.c:6976:notify] 10.10.130.11-1:
disconnected

[2011-01-15 11:22:54] N [client-protocol.c:6228:client_setvolume_cbk]
10.10.130.11-1: Connected to 10.10.130.11:6996, attached to remote
volume 'brick1'.

[2011-01-15 11:22:54] N [client-protocol.c:6228:client_setvolume_cbk]
10.10.130.11-1: Connected to 10.10.130.11:6996, attached to remote
volume 'brick1'.

 

Note that the second server doesn't show anything in its server log.

 

My glusterfsd.vol:

volume posix1

  type storage/posix

  option directory /data/export

end-volume

 

volume brick1

type features/locks

subvolumes posix1

end-volume

 

volume server-tcp

type protocol/server

option transport-type tcp

option auth.addr.brick1.allow *

option transport.socket.listen-port 6996

option transport.socket.nodelay on

subvolumes brick1

end-volume

 

 

repstore.vol

## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)

# Cmd line:

# $ /usr/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export 10.10.130.12:/data/export

 

# RAID 1

# TRANSPORT-TYPE tcp

volume 10.10.130.12-1

type protocol/client

option transport-type tcp

option remote-host 10.10.130.12

option transport.socket.nodelay on

option transport.remote-port 6996

option remote-subvolume brick1

end-volume

 

volume 10.10.130.11-1

type protocol/client

Re: [Gluster-users] df causes hang

2011-01-17 Thread Joe Warren-Meeks

(Sorry about top-posting.)

Just changing the timeout would only mask the problem. The real issue is
that running 'df' on either node causes a hang.

All other operations seem fine, files can be created and deleted as
normal with the results showing up on both.

I'd like to work out why it's hanging on df so I can fix it and get my
monitoring and cron scripts running again :)

 -- joe.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Daniel Maher
Sent: 17 January 2011 12:48
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] df causes hang

On 01/17/2011 10:47 AM, Joe Warren-Meeks wrote:
 Hey chaps,

 Anyone got any pointers as to what this might be? This is still
causing
 a lot of problems for us whenever we attempt to do df.

   -- joe.

 -Original Message-

 However, for some reason, they've got into a bit of a state such that
 typing 'df -k' causes both to hang, resulting in a loss of service
for42
 seconds. I see the following messages in the log files:



42 seconds is the default TCP timeout for any given node - you 
could try tuning that down and seeing how it works for you.

http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
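For reference, the 42-second window mentioned above corresponds to the network.ping-timeout volume option in the 3.1+ CLI. A hedged sketch of lowering it (the value 10 is illustrative, and the volume name is the one from this thread; tuning it too low can cause spurious disconnects under load):

```shell
gluster volume set repstore1 network.ping-timeout 10
```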


-- 
Daniel Maher dma+gluster AT witbe DOT net


[Gluster-users] df causes hang

2011-01-15 Thread Joe Warren-Meeks

subvolumes mirror-0

end-volume

 

volume iocache

type performance/io-cache

option cache-size `grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB

option cache-timeout 60

subvolumes writebehind

end-volume
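The backtick expression embedded in the cache-size option above computes roughly 20% of the machine's RAM, in MB (assuming it is expanded by a shell when the volfile is generated or processed). It can be evaluated standalone to see what value the volfile ends up with:

```shell
# Evaluate the expression the volfile embeds:
# MemTotal in /proc/meminfo is reported in kB; take 20% and convert to MB.
grep 'MemTotal' /proc/meminfo | awk '{print int($2 * 0.2 / 1024)}'
```

On a machine with 8 GB of RAM this prints a value around 1600.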

 

  -- joe.

 

Joe Warren-Meeks

Director Of Systems Development

ENCORE TICKETS LTD

Encore House, 50-51 Bedford Row, London WC1R 4LR

Direct line:  +44 (0)20 7492 1506
Reservations: +44 (0)20 7492 1500
Fax:          +44 (0)20 7831 4410
Email:        j...@encoretickets.co.uk
Web:          www.encoretickets.co.uk

 

 

Copyright in this message and any attachments remains with us. It is
confidential and may be legally privileged. If this message is not
intended for you it must not be read, copied or used by you or disclosed
to anyone else. Please advise the sender immediately if you have
received this message in error. Although this message and any
attachments are believed to be free of any virus or other defect that
might affect any computer system into which it is received and opened it
is the responsibility of the recipient to ensure that it is virus free
and no responsibility is accepted by Encore Tickets Limited for any loss
or damage in any way arising from its use.

 


Re: [Gluster-users] Upgrading from 3.0.4 to 3.0.5

2010-07-21 Thread Joe Warren-Meeks

I've done just this in our integration environment and our production
environment.

I halted my application on the first server, unmounted the filesystem,
halted glusterfsd, upgraded it, restarted glustersfd and mounted the
filesystem. I checked that read/writes were fine and then brought it
back into service.

I left this for 24 hours, then repeated the process on the secondary
node.
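The rolling-upgrade order described above, as a pseudocode sketch (service names and mount points are illustrative; adjust to your init system and layout):

```
# On each node in turn, one node at a time:
stop application              # take the node out of service
umount /data/import           # unmount the glusterfs client mount
stop glusterfsd               # stop the brick server
upgrade glusterfs packages    # e.g. 3.0.4 -> 3.0.5
start glusterfsd
mount /data/import
verify reads and writes       # smoke-test before returning to service
wait ~24 hours                # let it soak before touching the next node
```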

You shouldn't have any problems.

 -- joe.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Wilson
Sent: 21 July 2010 17:22
To: gluster-users@gluster.org
Subject: [Gluster-users] Upgrading from 3.0.4 to 3.0.5

I am planning to upgrade a Gluster installation from 3.0.4 to 3.0.5.  
Can I do so in a piecemeal manner without taking the volume out of 
service?  In other words, can I bring down one replication server, 
upgrade it, and bring it up again while the other servers are running?  
Also, will 3.0.4 clients work with 3.0.5 servers and vice-versa?  If so,

then it sounds like I could slide in the upgrade without major 
disruption of service.

Thanks,
Steve



Re: [Gluster-users] bugfix in 3.0.5

2010-07-19 Thread Joe Warren-Meeks
Ok, fantastic. I'm actually testing the Ubuntu .deb of 3.0.5 now, which
should have the fix in?

Thanks for all your good work.

 -- joe.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Tejas N. Bhise
Sent: 19 July 2010 19:06
To: Gluster General Discussion List
Subject: Re: [Gluster-users] bugfix in 3.0.5

Hi Joe,

Yes. We mark it in the 'target release' field. If it had not made it into
3.0.5, the target release would have been deferred to a future release. The
other thing you can use to check is the 'regression' field: RTP means
'regression test passed'.

The bottom of the bug text also shows the patches that were used to fix
it.

The tarball release of 3.0.5 is out there. The RPM release is about to
happen; we want to repackage and get the InfiniBand (IB) support out as a
separate RPM, since not many people use IB. It should be released soon.

Regards,
Tejas.
- Original Message -
From: Joe Warren-Meeks j...@encoretickets.co.uk
To: gluster-users@gluster.org
Sent: Monday, July 19, 2010 10:14:16 PM
Subject: [Gluster-users] bugfix in 3.0.5

Hey there,

 

Can I check if
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=868 made it
into 3.0.5?

 

Kind regards

 

  -- joe.

 


 




[Gluster-users] http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=868

2010-05-24 Thread Joe Warren-Meeks
Hey guys,

 

I notice bug 868 is now marked as fixed and committed on the release-3.0
branch. Do you know when the next release containing this fix will be
available?

 

Kind regards

 

 -- joe.

 


 



Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Joe Warren-Meeks
Hey guys,

Any clues or pointers with this problem? It's occurring every 6 hours or
so. Anything else I can do to help debug it?

Kind regards

 -- joe.


 -Original Message-
 From: gluster-users-boun...@gluster.org [mailto:gluster-users-
 boun...@gluster.org] On Behalf Of Joe Warren-Meeks
 Sent: 26 April 2010 12:31
 To: Vijay Bellur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Transport endpoint not connected
 
 Here is the relevant crash section:
 
 patchset: v3.0.4
 signal received: 11
 time of crash: 2010-04-23 21:40:40
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.0.4
 /lib/libc.so.6[0x7ffd0d809100]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7ffd0c968d22]
 /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7ffd0c5570a3]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7ffd0c346adb]
 /usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7ffd0df7cf60]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7ffd0c349938]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7ffd0c348251]
 /usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7ffd0c34a87a]
 /usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
 /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf23a36]
 /usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf246b6]
 /lib/libpthread.so.0[0x7ffd0db3f3f7]
 /lib/libc.so.6(clone+0x6d)[0x7ffd0d8aeb4d]
 
 And Startup section:
 
 ================================================================
 
 Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
 git: v3.0.4
 Starting Time: 2010-04-26 10:00:59
 Command line : /usr/local/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import
 PID  : 5910
 System name  : Linux
 Nodename : w2
 Kernel Release : 2.6.24-27-server
 Hardware Identifier: x86_64
 
 Given volfile:

 +----------------------------------------------------------------------+
   1: ## file auto generated by /usr/local/bin/glusterfs-volgen
 (mount.vol)
   2: # Cmd line:
   3: # $ /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
 10.10.130.11:/data/export 10.10.130.12:/data/export
   4:
   5: # RAID 1
   6: # TRANSPORT-TYPE tcp
   7: volume 10.10.130.12-1
   8: type protocol/client
   9: option transport-type tcp
  10: option remote-host 10.10.130.12
  11: option transport.socket.nodelay on
  12: option transport.remote-port 6996
  13: option remote-subvolume brick1
  14: end-volume
  15:
  16: volume 10.10.130.11-1
  17: type protocol/client
  18: option transport-type tcp
  19: option remote-host 10.10.130.11
  20: option transport.socket.nodelay on
  21: option transport.remote-port 6996
  22: option remote-subvolume brick1
  23: end-volume
  24:
  25: volume mirror-0
  26: type cluster/replicate
  27: subvolumes 10.10.130.11-1 10.10.130.12-1
  28: end-volume
  29:
  30: volume readahead
  31: type performance/read-ahead
  32: option page-count 4
  33: subvolumes mirror-0
  34: end-volume
  35:
  36: volume iocache
  37: type performance/io-cache
  38: option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo |
 sed 's/[^0-9]//g') / 5120 ))`MB
  39: option cache-timeout 1
  40: subvolumes readahead
 41: end-volume
  42:
  43: volume quickread
  44: type performance/quick-read
  45: option cache-timeout 1
  46: option max-file-size 64kB
  47: subvolumes iocache
  48: end-volume
  49:
  50: volume writebehind
  51: type performance/write-behind
  52: option cache-size 4MB
  53: subvolumes quickread
  54: end-volume
  55:
  56: volume statprefetch
  57: type performance/stat-prefetch
  58: subvolumes writebehind
  59: end-volume
  60:
 
  -Original Message-
  From: Vijay Bellur [mailto:vi...@gluster.com]
  Sent: 22 April 2010 18:40
  To: Joe Warren-Meeks
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] Transport endpoint not connected
 
  Hi Joe,
 
  Can you please share the complete client log file?
 
  Thanks,
  Vijay
 
 
  Joe Warren-Meeks wrote:
   Hey guys,
  
  
  
  I've recently implemented gluster to share web content read-write
  between
   two servers.
  
  
  
   Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
  
   Fuse: 2.7.2-1ubuntu2.1
  
   Platform: ubuntu 8.04LTS
  
  
  
   I used the following command to generate my configs:
  
   /usr/local/bin/glusterfs-volgen --name repstore1

Re: [Gluster-users] Transport endpoint not connected

2010-04-28 Thread Joe Warren-Meeks
 = 0x7ffcfb0c, iov_len = 131072}}
msg = (void *) 0x0
ret = value optimized out
now = {tv_sec = 1271949306, tv_usec = 169347}
timeout = {tv_sec = 1271949307, tv_nsec = 169347000}
__FUNCTION__ = fuse_thread_proc
#11 0x7ffd0db3f3f7 in start_thread () from /lib/libpthread.so.0
No symbol table info available.
#12 0x7ffd0d8aeb4d in clone () from /lib/libc.so.6
No symbol table info available.
#13 0x in ?? ()
No symbol table info available.




 -Original Message-
 From: Anand Avati [mailto:anand.av...@gmail.com]
 Sent: 28 April 2010 18:24
 To: Joe Warren-Meeks
 Cc: Vijay Bellur; gluster-users@gluster.org
 Subject: Re: [Gluster-users] Transport endpoint not connected
 
  Here you go!
 
  Anything else I can do?
 
 Joe, can you please rerun the gdb command as:
 
 # gdb /usr/local/sbin/glusterfs -c /core.13560
 
 Without passing the glusterfs binary as a parameter, the backtrace is
 missing all the symbols, and the bare numerical addresses alone are not
 very useful.
 
 Thanks,
 Avati




Re: [Gluster-users] Transport endpoint not connected

2010-04-26 Thread Joe Warren-Meeks
Here is the relevant crash section:

patchset: v3.0.4
signal received: 11
time of crash: 2010-04-23 21:40:40
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.4
/lib/libc.so.6[0x7ffd0d809100]
/usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7ffd0c968d22]
/usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
/usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7ffd0c5570a3]
/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7ffd0c346adb]
/usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7ffd0df7cf60]
/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7ffd0c349938]
/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7ffd0c348251]
/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7ffd0c34a87a]
/usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7ffd0df7411b]
/usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf23a36]
/usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7ffd0bf246b6]
/lib/libpthread.so.0[0x7ffd0db3f3f7]
/lib/libc.so.6(clone+0x6d)[0x7ffd0d8aeb4d]

And Startup section:

-


Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
git: v3.0.4
Starting Time: 2010-04-26 10:00:59
Command line : /usr/local/sbin/glusterfs --log-level=NORMAL --volfile=/etc/glusterfs/repstore1-tcp.vol /data/import
PID  : 5910
System name  : Linux
Nodename : w2
Kernel Release : 2.6.24-27-server
Hardware Identifier: x86_64

Given volfile:
+----------------------------------------------------------------------+
  1: ## file auto generated by /usr/local/bin/glusterfs-volgen
(mount.vol)
  2: # Cmd line:
  3: # $ /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export 10.10.130.12:/data/export
  4: 
  5: # RAID 1
  6: # TRANSPORT-TYPE tcp
  7: volume 10.10.130.12-1
  8: type protocol/client
  9: option transport-type tcp
 10: option remote-host 10.10.130.12
 11: option transport.socket.nodelay on
 12: option transport.remote-port 6996
 13: option remote-subvolume brick1
 14: end-volume
 15: 
 16: volume 10.10.130.11-1
 17: type protocol/client
 18: option transport-type tcp
 19: option remote-host 10.10.130.11
 20: option transport.socket.nodelay on
 21: option transport.remote-port 6996
 22: option remote-subvolume brick1
 23: end-volume
 24: 
 25: volume mirror-0
 26: type cluster/replicate
 27: subvolumes 10.10.130.11-1 10.10.130.12-1
 28: end-volume
 29: 
 30: volume readahead
 31: type performance/read-ahead
 32: option page-count 4
 33: subvolumes mirror-0
 34: end-volume
 35: 
 36: volume iocache
 37: type performance/io-cache
 38: option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo |
sed 's/[^0-9]//g') / 5120 ))`MB
 39: option cache-timeout 1
 40: subvolumes readahead
41: end-volume
 42: 
 43: volume quickread
 44: type performance/quick-read
 45: option cache-timeout 1
 46: option max-file-size 64kB
 47: subvolumes iocache
 48: end-volume
 49: 
 50: volume writebehind
 51: type performance/write-behind
 52: option cache-size 4MB
 53: subvolumes quickread
 54: end-volume
 55: 
 56: volume statprefetch
 57: type performance/stat-prefetch
 58: subvolumes writebehind
 59: end-volume
 60:

 -Original Message-
 From: Vijay Bellur [mailto:vi...@gluster.com]
 Sent: 22 April 2010 18:40
 To: Joe Warren-Meeks
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Transport endpoint not connected
 
 Hi Joe,
 
 Can you please share the complete client log file?
 
 Thanks,
 Vijay
 
 
 Joe Warren-Meeks wrote:
  Hey guys,
 
 
 
  I've recently implemented gluster to share web content read-write
 between
  two servers.
 
 
 
  Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
 
  Fuse: 2.7.2-1ubuntu2.1
 
  Platform: ubuntu 8.04LTS
 
 
 
  I used the following command to generate my configs:
 
  /usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
  10.10.130.11:/data/export 10.10.130.12:/data/export
 
 
 
  And mount them on each of the servers as so:
 
  /etc/fstab:
 
  /etc/glusterfs/repstore1-tcp.vol  /data/import  glusterfs  defaults
 0
  0
 
 
 
 
 
  Every 12 hours or so, one or other of the servers will lose the
mount
  and error with:
 
  df: `/data/import': Transport endpoint is not connected
 
 
 
  And I get the following in my logfile:
 
  patchset: v3.0.4
 
  signal received: 11
 
  time of crash: 2010-04-22 11:41:10
 
  configuration details:
 
  argp 1
 
  backtrace 1
 
  dlfcn 1
 
  fdatasync 1
 
  libpthread 1

[Gluster-users] Transport endpoint not connected

2010-04-22 Thread Joe Warren-Meeks
Hey guys,

 

I've recently implemented gluster to share web content read-write between
two servers.

 

Version  : glusterfs 3.0.4 built on Apr 19 2010 16:37:50

Fuse: 2.7.2-1ubuntu2.1

Platform: ubuntu 8.04LTS

 

I used the following command to generate my configs:

/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export 10.10.130.12:/data/export

 

And mount them on each of the servers as so:

/etc/fstab:

/etc/glusterfs/repstore1-tcp.vol  /data/import  glusterfs  defaults  0
0

 

 

Every 12 hours or so, one or other of the servers will lose the mount
and error with:

df: `/data/import': Transport endpoint is not connected

 

And I get the following in my logfile:

patchset: v3.0.4

signal received: 11

time of crash: 2010-04-22 11:41:10

configuration details:

argp 1

backtrace 1

dlfcn 1

fdatasync 1

libpthread 1

llistxattr 1

setfsid 1

spinlock 1

epoll.h 1

xattr.h 1

st_atim.tv_nsec 1

package-string: glusterfs 3.0.4

/lib/libc.so.6[0x7f2eca39a100]

/usr/local/lib/glusterfs/3.0.4/xlator/performance/read-ahead.so(ra_fstat+0x82)[0x7f2ec94f9d22]

/usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7f2ecab0511b]

/usr/local/lib/glusterfs/3.0.4/xlator/performance/quick-read.so(qr_fstat+0x113)[0x7f2ec90e80a3]

/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat_helper+0xcb)[0x7f2ec8ed7adb]

/usr/local/lib/libglusterfs.so.0(call_resume+0x390)[0x7f2ecab0df60]

/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_resume_other_requests+0x58)[0x7f2ec8eda938]

/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_process_queue+0xe1)[0x7f2ec8ed9251]

/usr/local/lib/glusterfs/3.0.4/xlator/performance/write-behind.so(wb_fstat+0x20a)[0x7f2ec8edb87a]

/usr/local/lib/libglusterfs.so.0(default_fstat+0xcb)[0x7f2ecab0511b]

/usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7f2ec8ab4a36]

/usr/local/lib/glusterfs/3.0.4/xlator/mount/fuse.so[0x7f2ec8ab56b6]

/lib/libpthread.so.0[0x7f2eca6d03f7]

/lib/libc.so.6(clone+0x6d)[0x7f2eca43fb4d]

 

 

If I umount and remount, things work again, but it isn't ideal.

 

Any clues, pointers, hints?

 

Kind regards

 

 -- joe.

 


 
