Public bug reported:

The glusterfsd daemon crashes after a couple of seconds. The configuration files were generated with glusterfs-volgen.


[2010-03-30 14:59:19] D [glusterfsd.c:1370:main] glusterfs: running in pid 32083
[2010-03-30 14:59:19] D [transport.c:145:transport_load] transport: attempt to load file /usr/lib/glusterfs/3.0.2/transport/socket.so
[2010-03-30 14:59:19] D [xlator.c:284:_volume_option_value_validate] server-tcp: no range check required for 'option transport.socket.listen-port 6996'
[2010-03-30 14:59:19] D [io-threads.c:2841:init] brick1: io-threads: Autoscaling: off, min_threads: 8, max_threads: 8
[2010-03-30 14:59:19] N [glusterfsd.c:1396:main] glusterfs: Successfully started
pending frames:
frame : type(2) op(SETVOLUME)

patchset: v3.0.2
signal received: 11
time of crash: 2010-03-30 14:59:25
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.2
/lib/libc.so.6(+0x33af0)[0x7fa970195af0]
/usr/lib/libglusterfs.so.0(dict_unserialize+0xff)[0x7fa970913dff]
/usr/lib/glusterfs/3.0.2/xlator/protocol/server.so(mop_setvolume+0x8f)[0x7fa96f11656f]
/usr/lib/glusterfs/3.0.2/xlator/protocol/server.so(protocol_server_pollin+0x7a)[0x7fa96f10d76a]
/usr/lib/glusterfs/3.0.2/xlator/protocol/server.so(notify+0x83)[0x7fa96f10d7f3]
/usr/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7fa970918dc3]
/usr/lib/glusterfs/3.0.2/transport/socket.so(socket_event_handler+0x7a)[0x7fa96e6fe09a]
/usr/lib/libglusterfs.so.0(+0x2e31d)[0x7fa97093331d]
glusterfsd(main+0x852)[0x4044f2]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7fa970180c4d]
glusterfsd[0x402ab9]
---------
Segmentation fault (core dumped)
a...@homer:/data/export$ 


## file auto generated by /usr/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name store1 homer.vertel.se:/data/export/store1 agata.vertel.se:/srv/export/store1

volume posix1
    type storage/posix
    option directory /data/export/store1
end-volume

volume locks1
    type features/locks
    subvolumes posix1
end-volume

volume brick1
    type performance/io-threads
    option thread-count 8
    subvolumes locks1
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option auth.addr.brick1.allow *
    option transport.socket.listen-port 6996
    option transport.socket.nodelay on
    subvolumes brick1
end-volume
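
The server side is presumably started against this generated export volfile, roughly as in the sketch below (the volfile path, log file, and debug log level are assumptions based on the debug output above, not taken from the report):

    # start the brick daemon with the generated server volfile (path assumed)
    glusterfsd -f /etc/glusterfs/export.vol -l /var/log/glusterfs/glusterfsd.log -L DEBUG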

## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name store1 homer.vertel.se:/data/export/store1 agata.vertel.se:/srv/export/store1

# TRANSPORT-TYPE tcp
volume hymer.vertel.se-1
    type protocol/client
    option transport-type tcp
    option remote-host 8.8.8.6
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume cooler.vertel.se-1
    type protocol/client
    option transport-type tcp
    option remote-host cooler.vertel.se
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume distribute
    type cluster/distribute
    subvolumes hymer.vertel.se-1 cooler.vertel.se-1
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes distribute
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size `grep 'MemTotal' /proc/meminfo  | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
    option cache-timeout 1
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
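
The client side is then mounted using this generated mount volfile, along the lines of the sketch below (the volfile path and mount point are assumptions; the crash on the server appears to coincide with such a client connecting, given the pending SETVOLUME frame):

    # mount the distribute volume on a client (paths assumed)
    glusterfs -f /etc/glusterfs/mount.vol /mnt/store1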

** Affects: glusterfs (Ubuntu)
     Importance: Undecided
         Status: New

-- 
Server crashes after a couple of seconds
https://bugs.launchpad.net/bugs/551663