[Gluster-users] Gluster Mount point on client disconnects on 1-node.

2014-01-27 Thread Bobby Jacob
Hi,

We have a 2-node replica gluster setup. The gluster volume is mounted on 2
client servers using the same IP (mount -t glusterfs 172.16.98.223:/glustervol
/cloud_data). We ran the same mount command on both clients.

But on 1 client, we always have this issue of the mount point getting
disconnected. See the logs below.

Jan 27 23:47:54 KWTPROCAPP002 GlusterFS[1154]: [2014-01-27 20:47:54.436441] C
[client-handshake.c:126:rpc_client_ping_timer_expired] 0-owncloud-client-0:
server 172.16.98.223:24009 has not responded in the last 42 seconds, disconnecting.
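
For reference, the "42 seconds" above is the client's ping timeout (the
network.ping-timeout volume option, 42 seconds by default). A sketch of how to
inspect and tune it, using the volume name from the mount command above
(raising the timeout only masks the problem if the server is genuinely
unresponsive):

    gluster volume info glustervol                         # reconfigured options show up here
    gluster volume set glustervol network.ping-timeout 60  # illustrative value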

Please advise.


Thanks & Regards,
Bobby Jacob

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterd dont start

2014-01-27 Thread Franco Broi

I think Jefferson's problem might have been due to corrupted config
files, maybe because the /var partition was full as suggested by Paul
Boven, but as has been pointed out before, the error messages don't make
it obvious what's wrong.

He got glusterd started but now the peers can't communicate, probably
because a uuid is wrong. This is a weird problem to debug because the
clients can see the data but df may not show the full size, and you
wouldn't know anything was wrong until, like Jefferson, you looked in the
gluster log file.

[2014-01-27 15:48:19.580353] E [socket.c:2788:socket_connect] 0-management: 
connection attempt failed (Connection refused)
[2014-01-27 15:48:19.583374] I 
[glusterd-utils.c:1079:glusterd_volume_brickinfo_get] 0-management: Found brick
[2014-01-27 15:48:22.584029] E [socket.c:2788:socket_connect] 0-management: 
connection attempt failed (Connection refused)
[2014-01-27 15:48:22.607477] I 
[glusterd-utils.c:1079:glusterd_volume_brickinfo_get] 0-management: Found brick
[2014-01-27 15:48:25.608186] E [socket.c:2788:socket_connect] 0-management: 
connection attempt failed (Connection refused)
[2014-01-27 15:48:25.612032] I 
[glusterd-utils.c:1079:glusterd_volume_brickinfo_get] 0-management: Found brick
[2014-01-27 15:48:28.612638] E [socket.c:2788:socket_connect] 0-management: 
connection attempt failed (Connection refused)
[2014-01-27 15:48:28.615509] I 
[glusterd-utils.c:1079:glusterd_volume_brickinfo_get] 0-management: Found brick

I think the advice should be: if you have a working peer, use a peer
probe and a glusterd restart to restore the files. But in order for this to
work, you have to remove all the config files first so that glusterd
will start in the first place.
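
A rough sketch of that sequence (hostnames and paths are placeholders; back
everything up first, and keep glusterd.info so the node retains its UUID):

    service glusterd stop
    cp -a /var/lib/glusterd /root/glusterd.backup
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    service glusterd start
    # then, from a working peer:
    gluster peer probe <broken-node>
    # and finally restart glusterd on the broken node to pull the volume config:
    service glusterd restart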


On Tue, 2014-01-28 at 08:32 +0530, shwetha wrote: 
> Hi Jefferson, 
> 
> glusterd doesn't start because it's not able to find the brick path for
> the volume, or the brick path doesn't exist any more. 
> 
> Please refer to the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1036551 
> 
> Check if the brick path is available. 
> 
> -Shwetha
> 
> On 01/27/2014 05:23 PM, Jefferson Carlos Machado wrote:
> 
> > Hi, 
> > 
> > Please, help me!! 
> > 
> > After rebooting my system, the service glusterd doesn't start. 
> > 
> > Here is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log: 
> > 
> > [2014-01-27 09:27:02.898807] I [glusterfsd.c:1910:main]
> > 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version
> > 3.4.2 (/usr/sbin/glusterd -p /run/glusterd.pid) 
> > [2014-01-27 09:27:02.909147] I [glusterd.c:961:init] 0-management:
> > Using /var/lib/glusterd as working directory 
> > [2014-01-27 09:27:02.913247] I [socket.c:3480:socket_init]
> > 0-socket.management: SSL support is NOT enabled 
> > [2014-01-27 09:27:02.913273] I [socket.c:3495:socket_init]
> > 0-socket.management: using system polling thread 
> > [2014-01-27 09:27:02.914337] W [rdma.c:4197:__gf_rdma_ctx_create]
> > 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such
> > device) 
> > [2014-01-27 09:27:02.914359] E [rdma.c:4485:init] 0-rdma.management:
> > Failed to initialize IB Device 
> > [2014-01-27 09:27:02.914375] E
> > [rpc-transport.c:320:rpc_transport_load] 0-rpc-transport: 'rdma'
> > initialization failed 
> > [2014-01-27 09:27:02.914535] W
> > [rpcsvc.c:1389:rpcsvc_transport_create] 0-rpc-service: cannot create
> > listener, initing the transport failed 
> > [2014-01-27 09:27:05.337557] I
> > [glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd:
> > retrieved op-version: 2 
> > [2014-01-27 09:27:05.373853] E
> > [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown
> > key: brick-0 
> > [2014-01-27 09:27:05.373927] E
> > [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown
> > key: brick-1 
> > [2014-01-27 09:27:06.166721] I [glusterd.c:125:glusterd_uuid_init]
> > 0-management: retrieved UUID: 28f232e9-564f-4866-8014-32bb020766f2 
> > [2014-01-27 09:27:06.169422] E
> > [glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd:
> > resolve brick failed in restore 
> > [2014-01-27 09:27:06.169491] E [xlator.c:390:xlator_init]
> > 0-management: Initialization of volume 'management' failed, review
> > your volfile again 
> > [2014-01-27 09:27:06.169516] E [graph.c:292:glusterfs_graph_init]
> > 0-management: initializing translator failed 
> > [2014-01-27 09:27:06.169532] E
> > [graph.c:479:glusterfs_graph_activate] 0-graph: init failed 
> > [2014-01-27 09:27:06.169769] W [glusterfsd.c:1002:cleanup_and_exit]
> > (-->/usr/sbin/glusterd(main+0x3df) [0x7f23c76588ef]
> > (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb0) [0x7f23c765b6e0]
> > (-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103)
> > [0x7f23c765b5f3]))) 0-: received signum (0), shutting down 
> > 
> > ___ 
> > Gluster-users mailing list 
> > Gluster-users@gluster.org 
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users 

Re: [Gluster-users] yum issue with rhel 6, gluster-repo

2014-01-27 Thread John Mark Walker
Ah, the libcrypto version mismatch:

--> Finished Dependency Resolution
Error: Package: glusterfs-3.4.2-1.el6.x86_64 (glusterfs-epel)
   Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
Error: Package: glusterfs-libs-3.4.2-1.el6.x86_64 (glusterfs-epel)
   Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
Error: Package: glusterfs-3.4.2-1.el6.x86_64 (glusterfs-epel)
   Requires: libssl.so.10(libssl.so.10)(64bit)


I recall someone else running into this, although I've forgotten the solution - can
anyone pitch in?
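
A few diagnostics that might help narrow it down (a sketch; the error above
just says the RPMs want the openssl 1.0 sonames, so check what the system
provides):

    rpm -q --whatprovides 'libcrypto.so.10()(64bit)'   # should name an openssl package
    rpm -q openssl
    yum update openssl    # worth a try if the installed openssl is older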

-JM


- Original Message -
> Hi All,
> 
> I have tried to install the gluster RPMs in a couple of ways and have been
> unsuccessful on RHEL 6 (I was successful with F20 after I disabled firewalld
> and installed iptables).
> 
> I ran into issues using the provided yum repo configuration
> (http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo);
> I have captured my log here: http://www.fpaste.org/72143/08590291/
> 
> I also tried to run the wget command and pull the RPMs in, and it failed as
> well. I have captured my log here: http://www.fpaste.org/72146/85953013/
> 
> Any help would be appreciated!
> 
> Thanks,
> Jason
> 
> Jason D. Marley, RHCJA, RHCSA
> Senior Consultant
> Red Hat, Inc
> jason.mar...@redhat.com | http://www.redhat.com
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] yum issue with rhel 6, gluster-repo

2014-01-27 Thread James
On Mon, Jan 27, 2014 at 4:54 PM, Jason Marley  wrote:
> Hi All,
>
> I have tried to install the gluster RPMs in a couple of ways and have been
> unsuccessful on RHEL 6 (I was successful with F20 after I disabled firewalld
> and installed iptables).

My Puppet-Gluster module can configure your repo automatically (to the
latest stable version) or build the correct repo file from a version
string.

You can either use that piece of the module directly, or browse it to
learn what pattern to use. Alternatively, you can use it to generate your
own repo file.

https://github.com/purpleidea/puppet-gluster/blob/master/manifests/repo.pp#L18

I've attached the gluster.repo file that it generated for CentOS6.5.
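
For readers of the archive, where attachments may not come through: the
generated file is plain yum INI of roughly this shape. This is an illustrative
sketch, not the attached file; the baseurl is an assumption based on the
download.gluster.org layout mentioned earlier in the thread.

    [glusterfs-epel]
    name=GlusterFS
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
    enabled=1
    gpgcheck=0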

Try doing a 'sudo yum clean all' before/after switching to this file.

HTH,
James


>
> I ran into issues using the provided yum repo configuration
> (http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo);
> I have captured my log here: http://www.fpaste.org/72143/08590291/
>
> I also tried to run the wget command and pull the RPMs in, and it failed as
> well. I have captured my log here: http://www.fpaste.org/72146/85953013/
>
> Any help would be appreciated!
>
> Thanks,
> Jason
>
> Jason D. Marley, RHCJA, RHCSA
> Senior Consultant
> Red Hat, Inc
> jason.mar...@redhat.com | http://www.redhat.com
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


gluster.repo
Description: Binary data
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterd dont start

2014-01-27 Thread shwetha

Hi Jefferson,

glusterd doesn't start because it's not able to find the brick path for
the volume, or the brick path doesn't exist any more.


Please refer to the bug https://bugzilla.redhat.com/show_bug.cgi?id=1036551

Check if the brick path is available.
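
A quick way to check (a sketch; the volume name and brick path are
placeholders, and the layout assumes the standard /var/lib/glusterd working
directory):

    grep -h '^path=' /var/lib/glusterd/vols/<volname>/bricks/*   # paths glusterd expects
    ls -ld <brick-path>                                          # does each one still exist?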

-Shwetha

On 01/27/2014 05:23 PM, Jefferson Carlos Machado wrote:

Hi,

Please, help me!!

After rebooting my system, the service glusterd doesn't start.

Here is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-01-27 09:27:02.898807] I [glusterfsd.c:1910:main] 
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.2 
(/usr/sbin/glusterd -p /run/glusterd.pid)
[2014-01-27 09:27:02.909147] I [glusterd.c:961:init] 0-management: 
Using /var/lib/glusterd as working directory
[2014-01-27 09:27:02.913247] I [socket.c:3480:socket_init] 
0-socket.management: SSL support is NOT enabled
[2014-01-27 09:27:02.913273] I [socket.c:3495:socket_init] 
0-socket.management: using system polling thread
[2014-01-27 09:27:02.914337] W [rdma.c:4197:__gf_rdma_ctx_create] 
0-rpc-transport/rdma: rdma_cm event channel creation failed (No such 
device)
[2014-01-27 09:27:02.914359] E [rdma.c:4485:init] 0-rdma.management: 
Failed to initialize IB Device
[2014-01-27 09:27:02.914375] E 
[rpc-transport.c:320:rpc_transport_load] 0-rpc-transport: 'rdma' 
initialization failed
[2014-01-27 09:27:02.914535] W [rpcsvc.c:1389:rpcsvc_transport_create] 
0-rpc-service: cannot create listener, initing the transport failed
[2014-01-27 09:27:05.337557] I 
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd: 
retrieved op-version: 2
[2014-01-27 09:27:05.373853] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown 
key: brick-0
[2014-01-27 09:27:05.373927] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown 
key: brick-1
[2014-01-27 09:27:06.166721] I [glusterd.c:125:glusterd_uuid_init] 
0-management: retrieved UUID: 28f232e9-564f-4866-8014-32bb020766f2
[2014-01-27 09:27:06.169422] E 
[glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: 
resolve brick failed in restore
[2014-01-27 09:27:06.169491] E [xlator.c:390:xlator_init] 
0-management: Initialization of volume 'management' failed, review 
your volfile again
[2014-01-27 09:27:06.169516] E [graph.c:292:glusterfs_graph_init] 
0-management: initializing translator failed
[2014-01-27 09:27:06.169532] E [graph.c:479:glusterfs_graph_activate] 
0-graph: init failed
[2014-01-27 09:27:06.169769] W [glusterfsd.c:1002:cleanup_and_exit] 
(-->/usr/sbin/glusterd(main+0x3df) [0x7f23c76588ef] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb0) [0x7f23c765b6e0] 
(-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103) 
[0x7f23c765b5f3]))) 0-: received signum (0), shutting down


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] yum issue with rhel 6, gluster-repo

2014-01-27 Thread Jason Marley
Hi All,

I have tried to install the gluster RPMs in a couple of ways and have been
unsuccessful on RHEL 6 (I was successful with F20 after I disabled firewalld and
installed iptables).

I ran into issues using the provided yum repo configuration
(http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo);
I have captured my log here: http://www.fpaste.org/72143/08590291/

I also tried to run the wget command and pull the RPMs in, and it failed as well.
I have captured my log here: http://www.fpaste.org/72146/85953013/

Any help would be appreciated!

Thanks,
Jason  

Jason D. Marley, RHCJA, RHCSA
Senior Consultant
Red Hat, Inc
jason.mar...@redhat.com | http://www.redhat.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
About 200+ clients.

How do I print lock info in the bricks, and lock request info in afr & dht?
Thanks.



On Tue, Jan 28, 2014 at 9:51 AM, haiwei.xie-soulinfo <
haiwei@soulinfo.com> wrote:

>
> hi,
> Looks like a FINODELK deadlock. How many clients, and nfs or fuse?
> Maybe the best way is to print lock info in the bricks, and lock request info
> in afr & dht.
>
> > All my clients hang when they create a dir
> >
> >
> > On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu wrote:
> >
> > > I ran 'gluster volume profile my_volume info' and got something like this:
> > >
> > >   0.00        0.00 us       0.00 us          0.00 us     30      FORGET
> > >   0.00        0.00 us       0.00 us          0.00 us    185     RELEASE
> > >   0.00        0.00 us       0.00 us          0.00 us     11  RELEASEDIR
> > >   0.00       66.50 us      54.00 us         79.00 us      2     SETATTR
> > >   0.00       44.83 us      25.00 us         94.00 us      6     READDIR
> > >   0.00       57.55 us      28.00 us         95.00 us     11     OPENDIR
> > >   0.00      325.00 us     279.00 us        371.00 us      2      RENAME
> > >   0.00      147.80 us      84.00 us        206.00 us      5        LINK
> > >   0.00      389.50 us      55.00 us        724.00 us      2    READDIRP
> > >   0.00      164.25 us      69.00 us        287.00 us      8      UNLINK
> > >   0.00       37.46 us      18.00 us         87.00 us     50       FSTAT
> > >   0.00       70.32 us      29.00 us        210.00 us     37    GETXATTR
> > >   0.00       77.75 us      42.00 us        216.00 us     55    SETXATTR
> > >   0.00       36.39 us      11.00 us        147.00 us    119       FLUSH
> > >   0.00       51.22 us      24.00 us        139.00 us    275        OPEN
> > >   0.00      180.14 us      84.00 us        457.00 us     96     XATTROP
> > >   0.00     3847.20 us     231.00 us      18218.00 us      5       MKNOD
> > >   0.00       70.08 us      15.00 us       6539.00 us    342     ENTRYLK
> > >   0.00    10338.86 us     184.00 us      34813.00 us      7      CREATE
> > >   0.00      896.65 us      12.00 us      83103.00 us    235     INODELK
> > >   0.00      187.86 us      50.00 us        668.00 us   1526       WRITE
> > >   0.00       40.66 us      13.00 us        400.00 us  10400      STATFS
> > >   0.00      313.13 us      66.00 us       2142.00 us   2049    FXATTROP
> > >   0.00     2794.97 us      26.00 us      78048.00 us    295        READ
> > >   0.00    24469.82 us     206.00 us     176157.00 us     34       MKDIR
> > >   0.00       40.49 us      13.00 us        507.00 us  21420        STAT
> > >   0.00      190.90 us      40.00 us     330032.00 us  45820      LOOKUP
> > > 100.00 72004815.62 us       8.00 us 5783044563.00 us   3994    FINODELK
> > >
> > > What happened?
> > >
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread haiwei.xie-soulinfo

hi, 
   Looks like a FINODELK deadlock. How many clients, and nfs or fuse?
Maybe the best way is to print lock info in the bricks, and lock request info in
afr & dht.
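
One way to capture that lock state is a statedump (a sketch; the dump
directory varies by version, often /tmp or /var/run/gluster, and the files are
named after the brick path and pid):

    gluster volume statedump <volname>
    grep -A5 inodelk /var/run/gluster/*.dump.*   # per-brick lock tables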

> All my clients hang when they create a dir
> 
> 
> On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu  wrote:
> 
> > I ran 'gluster volume profile my_volume info' and got something like this:
> >
> >   0.00        0.00 us       0.00 us          0.00 us     30      FORGET
> >   0.00        0.00 us       0.00 us          0.00 us    185     RELEASE
> >   0.00        0.00 us       0.00 us          0.00 us     11  RELEASEDIR
> >   0.00       66.50 us      54.00 us         79.00 us      2     SETATTR
> >   0.00       44.83 us      25.00 us         94.00 us      6     READDIR
> >   0.00       57.55 us      28.00 us         95.00 us     11     OPENDIR
> >   0.00      325.00 us     279.00 us        371.00 us      2      RENAME
> >   0.00      147.80 us      84.00 us        206.00 us      5        LINK
> >   0.00      389.50 us      55.00 us        724.00 us      2    READDIRP
> >   0.00      164.25 us      69.00 us        287.00 us      8      UNLINK
> >   0.00       37.46 us      18.00 us         87.00 us     50       FSTAT
> >   0.00       70.32 us      29.00 us        210.00 us     37    GETXATTR
> >   0.00       77.75 us      42.00 us        216.00 us     55    SETXATTR
> >   0.00       36.39 us      11.00 us        147.00 us    119       FLUSH
> >   0.00       51.22 us      24.00 us        139.00 us    275        OPEN
> >   0.00      180.14 us      84.00 us        457.00 us     96     XATTROP
> >   0.00     3847.20 us     231.00 us      18218.00 us      5       MKNOD
> >   0.00       70.08 us      15.00 us       6539.00 us    342     ENTRYLK
> >   0.00    10338.86 us     184.00 us      34813.00 us      7      CREATE
> >   0.00      896.65 us      12.00 us      83103.00 us    235     INODELK
> >   0.00      187.86 us      50.00 us        668.00 us   1526       WRITE
> >   0.00       40.66 us      13.00 us        400.00 us  10400      STATFS
> >   0.00      313.13 us      66.00 us       2142.00 us   2049    FXATTROP
> >   0.00     2794.97 us      26.00 us      78048.00 us    295        READ
> >   0.00    24469.82 us     206.00 us     176157.00 us     34       MKDIR
> >   0.00       40.49 us      13.00 us        507.00 us  21420        STAT
> >   0.00      190.90 us      40.00 us     330032.00 us  45820      LOOKUP
> > 100.00 72004815.62 us       8.00 us 5783044563.00 us   3994    FINODELK
> >
> > What happened?
> >


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] File (setuid) permission changes during volume heal - possible bug?

2014-01-27 Thread Chalcogen

Hi,

I am working on a twin-replicated setup (server1 and server2) with 
glusterfs 3.4.0. I perform the following steps:


1. Create a distributed volume 'testvol' with the XFS brick
   server1:/brick/testvol on server1, and mount it using the glusterfs
   native client at /testvol.

2. I copy the following file to /testvol:
   server1:~$ ls -l /bin/su
   -rwsr-xr-x 1 root root 84742 Jan 17  2014 /bin/su
   server1:~$ cp -a /bin/su /testvol

3. Within /testvol if I list out the file I just copied, I find its
   attributes intact.

4. Now, I add the XFS brick server2:/brick/testvol.
   server2:~$ gluster volume add-brick testvol replica 2
   server2:/brick/testvol

   At this point, heal kicks in and the file is replicated on server 2.

5. If I list out su in testvol on either server now, this is what
   I see.
   server1:~$ ls -l /testvol/su
   -rwsr-xr-x 1 root root 84742 Jan 17  2014 /testvol/su

   server2:~$ ls -l /testvol/su
   -rwxr-xr-x 1 root root 84742 Jan 17  2014 /testvol/su

That is, the 's' file mode gets changed to plain 'x'; in other words, not all
the attributes are preserved upon heal completion. Would you consider
this a bug? Is the behavior different in a later release?
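
A quick way to see where the modes diverge, bypassing the mount entirely (a
sketch using the brick paths from the steps above):

    server1:~$ stat -c '%A %n' /brick/testvol/su   # expect -rwsr-xr-x
    server2:~$ stat -c '%A %n' /brick/testvol/su   # shows -rwxr-xr-x after the heal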


Thanks a lot.
Anirban
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] 3.5.0beta2 RPMs are available now

2014-01-27 Thread Kaleb S. KEITHLEY


3.5.0beta2 RPMs for el6, el7, fedora 19, fedora 20, and fedora 21
(rawhide) are available at
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta2/
(el5 available momentarily).


Gluster Test Week starts now.

Debian and Ubuntu dpkgs coming soon too (I hope)

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] could I disable updatedb in brick server?

2014-01-27 Thread Ted Miller


On 1/26/2014 6:13 AM, Mingfan Lu wrote:
we have lots of files (really a lot) on our gluster brick servers, and every day
we generate many more; the number of files increases very quickly. Could I
disable updatedb on the brick servers? If I do that, will the glusterfs servers be impacted?
I don't have the docs open now, but there is a setup file where you can tell
updatedb what to index and what not to index.  I altered my configuration to
add some things to what it ignores by default, but it should be just as simple
to tell it not to index your brick(s).
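
With mlocate, that setup file is /etc/updatedb.conf, and adding the brick to
PRUNEPATHS should do it (a sketch; the brick path is a placeholder and your
distribution's default PRUNEPATHS line will differ):

    # /etc/updatedb.conf
    PRUNEPATHS = "/tmp /var/spool /media /brick"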


Ted Miller
Elkhart, IN, USA

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Bug 1057645] ownership of diskimage changes during livemigration, livemigration with kvm/libvirt fails

2014-01-27 Thread BGM
Hi Paul & all,
I'm really keen on getting this solved;
right now it's a nasty show stopper.
I could try different gluster versions,
as long as I can get the .debs for them;
I wouldn't want to start compiling
(although, has a config option changed in the package build?).
You reported that 3.4.0 on ubuntu 13.04 was working, right?
Code diff, config options for the package build.
Another approach: can anyone verify or falsify
https://bugzilla.redhat.com/show_bug.cgi?id=1057645
on another distro than ubuntu/debian?
Thinking of it... could it be AppArmor interference?
I had fun with AppArmor and mysql on ubuntu 12.04 once...
I will have a look at that tomorrow.
As mentioned before, a straight drbd/ocfs2 setup works (with only 1/4 the speed
and the pain of maintenance), so AFAIK I have to blame the ownership change
on gluster, not on an issue with my general setup.
Best regards
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] intresting issue of replication and self-heal

2014-01-27 Thread Ted Miller


On 1/21/2014 11:05 PM, Mingfan Lu wrote:

I have a volume (distributed-replicated, replica 3); today I found an interesting problem.

node22, node23 and node24 form the replica-7 subvolume as seen from client A,
but the annoying thing is when I create a dir or write a file from the client to
replica-7:


 date;dd if=/dev/zero of=49 bs=1MB count=120
Wed Jan 22 11:51:41 CST 2014
120+0 records in
120+0 records out
120000000 bytes (120 MB) copied, 1.96257 s, 61.1 MB/s

but I could only find that node23 & node24 have the file
---
node23,node24
---
/mnt/xfsd/test-volume/test/49

On client A, I used the find command.

I used another machine as client B, mounted the test volume (newly mounted),
and ran: find /mnt/xfsd/test-volume/test/49

From client A, the three nodes have the file now.

---
node22,node23.node24
---
/mnt/xfsd/test-volume/test/49

But when, on client A, I delete the file /mnt/xfsd/test-volume/test/49, node22
still has the file in its brick.


---
node22
---
/mnt/xfsd/test-volume/test/49

(but it works if I delete the newly created files from client B)
My question is: why does node22 have none of the newly created/written dirs/files?
Do I have to use find to trigger the self-heal to fix that?


From client A's log, I find something like:

 I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-test-volume-replicate-7: no
active sinks for performing self-heal on file /test/49

Is it harmless, since it is information level?

I also see something like:
[2014-01-19 10:23:48.422757] E 
[afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 
0-test-volume-replicate-7: Non Blocking entrylks failed for  
/test/video/2014/01.
[2014-01-19 10:23:48.423042] E 
[afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 
0-test-volume-replicate-7: background  entry self-heal failed on 
/test/video/2014/01
From the paths you are listing, it looks like you may be mounting the 
bricks, not the gluster volume.


You MUST mount the gluster volume, not the bricks that make up the volume.  
In your example, the mount looks like it is mounting the xfs volume.  Your 
mount command should be something like:


   mount -t glusterfs <server>:/test-volume /mount/gluster/test-volume

If a brick is part of a gluster volume, the brick must NEVER be written to 
directly.  Yes, what you write MAY eventually be duplicated over to the other 
nodes, but if and when that happens is unpredictable.  It will give the 
unpredictable replication results that you are seeing.


The best way to test is to run "mount".  If the line where you are mounting 
the gluster volume doesn't say "glusterfs" on it, you have it wrong.  Also, 
the line you use in /etc/fstab must say "glusterfs", not "xfs" or "ext4".
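
For example (illustrative output; hostname and paths assumed):

    $ mount | grep test-volume
    server:/test-volume on /mount/gluster/test-volume type fuse.glusterfs (rw,allow_other,max_read=131072)

    # and the matching /etc/fstab line:
    server:/test-volume  /mount/gluster/test-volume  glusterfs  defaults,_netdev  0 0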


If you are in doubt, include the output of "mount" in your next email to the 
list.


Ted Miller
Elkhart, IN, USA

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] glusterfs-3.5.0beta2 released

2014-01-27 Thread Gluster Build System


SRC: 
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.0beta2.tar.gz

This release is made off jenkins-release-57

-- Gluster Build System
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
All my clients hang when they create a dir


On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu  wrote:

> I ran 'gluster volume profile my_volume info' and got something like this:
>
>   0.00        0.00 us       0.00 us          0.00 us     30      FORGET
>   0.00        0.00 us       0.00 us          0.00 us    185     RELEASE
>   0.00        0.00 us       0.00 us          0.00 us     11  RELEASEDIR
>   0.00       66.50 us      54.00 us         79.00 us      2     SETATTR
>   0.00       44.83 us      25.00 us         94.00 us      6     READDIR
>   0.00       57.55 us      28.00 us         95.00 us     11     OPENDIR
>   0.00      325.00 us     279.00 us        371.00 us      2      RENAME
>   0.00      147.80 us      84.00 us        206.00 us      5        LINK
>   0.00      389.50 us      55.00 us        724.00 us      2    READDIRP
>   0.00      164.25 us      69.00 us        287.00 us      8      UNLINK
>   0.00       37.46 us      18.00 us         87.00 us     50       FSTAT
>   0.00       70.32 us      29.00 us        210.00 us     37    GETXATTR
>   0.00       77.75 us      42.00 us        216.00 us     55    SETXATTR
>   0.00       36.39 us      11.00 us        147.00 us    119       FLUSH
>   0.00       51.22 us      24.00 us        139.00 us    275        OPEN
>   0.00      180.14 us      84.00 us        457.00 us     96     XATTROP
>   0.00     3847.20 us     231.00 us      18218.00 us      5       MKNOD
>   0.00       70.08 us      15.00 us       6539.00 us    342     ENTRYLK
>   0.00    10338.86 us     184.00 us      34813.00 us      7      CREATE
>   0.00      896.65 us      12.00 us      83103.00 us    235     INODELK
>   0.00      187.86 us      50.00 us        668.00 us   1526       WRITE
>   0.00       40.66 us      13.00 us        400.00 us  10400      STATFS
>   0.00      313.13 us      66.00 us       2142.00 us   2049    FXATTROP
>   0.00     2794.97 us      26.00 us      78048.00 us    295        READ
>   0.00    24469.82 us     206.00 us     176157.00 us     34       MKDIR
>   0.00       40.49 us      13.00 us        507.00 us  21420        STAT
>   0.00      190.90 us      40.00 us     330032.00 us  45820      LOOKUP
> 100.00 72004815.62 us       8.00 us 5783044563.00 us   3994    FINODELK
>
> What happened?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
I ran 'gluster volume profile my_volume info' and got something like this:

  0.00        0.00 us       0.00 us          0.00 us     30      FORGET
  0.00        0.00 us       0.00 us          0.00 us    185     RELEASE
  0.00        0.00 us       0.00 us          0.00 us     11  RELEASEDIR
  0.00       66.50 us      54.00 us         79.00 us      2     SETATTR
  0.00       44.83 us      25.00 us         94.00 us      6     READDIR
  0.00       57.55 us      28.00 us         95.00 us     11     OPENDIR
  0.00      325.00 us     279.00 us        371.00 us      2      RENAME
  0.00      147.80 us      84.00 us        206.00 us      5        LINK
  0.00      389.50 us      55.00 us        724.00 us      2    READDIRP
  0.00      164.25 us      69.00 us        287.00 us      8      UNLINK
  0.00       37.46 us      18.00 us         87.00 us     50       FSTAT
  0.00       70.32 us      29.00 us        210.00 us     37    GETXATTR
  0.00       77.75 us      42.00 us        216.00 us     55    SETXATTR
  0.00       36.39 us      11.00 us        147.00 us    119       FLUSH
  0.00       51.22 us      24.00 us        139.00 us    275        OPEN
  0.00      180.14 us      84.00 us        457.00 us     96     XATTROP
  0.00     3847.20 us     231.00 us      18218.00 us      5       MKNOD
  0.00       70.08 us      15.00 us       6539.00 us    342     ENTRYLK
  0.00    10338.86 us     184.00 us      34813.00 us      7      CREATE
  0.00      896.65 us      12.00 us      83103.00 us    235     INODELK
  0.00      187.86 us      50.00 us        668.00 us   1526       WRITE
  0.00       40.66 us      13.00 us        400.00 us  10400      STATFS
  0.00      313.13 us      66.00 us       2142.00 us   2049    FXATTROP
  0.00     2794.97 us      26.00 us      78048.00 us    295        READ
  0.00    24469.82 us     206.00 us     176157.00 us     34       MKDIR
  0.00       40.49 us      13.00 us        507.00 us  21420        STAT
  0.00      190.90 us      40.00 us     330032.00 us  45820      LOOKUP
100.00 72004815.62 us       8.00 us 5783044563.00 us   3994    FINODELK

What happened?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread John Mark Walker


- Original Message -
> On Mon, Jan 27, 2014 at 9:41 AM, Kaushal M  wrote:
> > This. I hadn't done a recursive clone. I cloned the repo correctly
> > again and everything works now. The vms are being provisioned as I
> > type this. Finally, time to test puppet-gluster.
> 
> 
> Awesome! I'm actually recording a screencast of the whole process. I
> figured, I might as well in the hopes visualizing the process helps
> others!
> 
> I'll post shortly, it might help with any other confusion.


Perfect! That will be awesome. Post it on YouTube - I'll make sure Josephus Ant
pulls it in :)

-JM


> 
> Cheers,
> James
> 
> ___
> Gluster-devel mailing list
> gluster-de...@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread Jay Vyas
Thanks James.  Also, looking at the code, there is a lot of use of advanced
Puppet features (Hiera, for example). During the screencast, some hints
about how the Puppet modules are coded would also be awesome.

Thanks again for all this work.




On Mon, Jan 27, 2014 at 10:07 AM, James  wrote:

> On Mon, Jan 27, 2014 at 9:41 AM, Kaushal M  wrote:
> > This. I hadn't done a recursive clone. I cloned the repo correctly
> > again and everything works now. The vms are being provisioned as I
> > type this. Finally, time to test puppet-gluster.
>
>
> Awesome! I'm actually recording a screencast of the whole process. I
> figured, I might as well in the hopes visualizing the process helps
> others!
>
> I'll post shortly, it might help with any other confusion.
>
> Cheers,
> James
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
Jay Vyas
http://jayunit100.blogspot.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread James
On Mon, Jan 27, 2014 at 9:41 AM, Kaushal M  wrote:
> This. I hadn't done a recursive clone. I cloned the repo correctly
> again and everything works now. The vms are being provisioned as I
> type this. Finally, time to test puppet-gluster.


Awesome! I'm actually recording a screencast of the whole process. I
figured, I might as well in the hopes visualizing the process helps
others!

I'll post shortly, it might help with any other confusion.

Cheers,
James
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Problem creating/editing files with samba gluster vfs module..

2014-01-27 Thread Ira Cooper
There is, but before disabling flock, I'd like to understand why we are doing 
it.  I thought we implemented posix locking in gluster?

The parameter will be named "locking", and I believe it can be set to posix, 
use a TDB, or nothing.

Thanks,

-Ira

- Original Message -
> Hi,
> 
> It seems to be a bug; this may have something to do with flock: in the
> vfs_glusterfs
> module, the function vfs_gluster_kernel_flock() returns an ENOSYS error.
> To solve this issue, either modify the above-mentioned function to return 0
> instead of ENOSYS, or there may be some parameter in smb.conf that disables
> flock
> (I am not sure of the parameter, though).
> 
> Regards,
> Poornima
> 
> - Original Message -
> From: "B.K.Raghuram" 
> To: "gluster-users" 
> Sent: Friday, January 24, 2014 12:16:46 PM
> Subject: [Gluster-users] Problem creating/editing files with samba gluster
>   vfs module..
> 
> Hi,
> 
> We have built samba 4.1 from source with the gluster vfs module
> enabled. I am able to access (read) and browse a volume from a windows
> machine. However, when I try to create or edit a file that resides on
> the volume from a windows box, it hangs forever. On the backend, I see
> that many temporary files are being created. I suspect that the
> windows box is trying to create the file but is not getting a
> confirmation of it having been created and so it tries to create one
> again. However, I do not have any problems with creating directories.
> We are using gluster 3.4.1.
> 
> Any ideas on what may be the issue?
> 
> Thanks,
> -Ram
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Glusterd dont start

2014-01-27 Thread Jefferson Carlos Machado

Hi,

Please, help me!!

After rebooting my system, the service glusterd doesn't start.

Here is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-01-27 09:27:02.898807] I [glusterfsd.c:1910:main] 
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.2 
(/usr/sbin/glusterd -p /run/glusterd.pid)
[2014-01-27 09:27:02.909147] I [glusterd.c:961:init] 0-management: Using 
/var/lib/glusterd as working directory
[2014-01-27 09:27:02.913247] I [socket.c:3480:socket_init] 
0-socket.management: SSL support is NOT enabled
[2014-01-27 09:27:02.913273] I [socket.c:3495:socket_init] 
0-socket.management: using system polling thread
[2014-01-27 09:27:02.914337] W [rdma.c:4197:__gf_rdma_ctx_create] 
0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-01-27 09:27:02.914359] E [rdma.c:4485:init] 0-rdma.management: 
Failed to initialize IB Device
[2014-01-27 09:27:02.914375] E [rpc-transport.c:320:rpc_transport_load] 
0-rpc-transport: 'rdma' initialization failed
[2014-01-27 09:27:02.914535] W [rpcsvc.c:1389:rpcsvc_transport_create] 
0-rpc-service: cannot create listener, initing the transport failed
[2014-01-27 09:27:05.337557] I 
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd: 
retrieved op-version: 2
[2014-01-27 09:27:05.373853] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: 
brick-0
[2014-01-27 09:27:05.373927] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: 
brick-1
[2014-01-27 09:27:06.166721] I [glusterd.c:125:glusterd_uuid_init] 
0-management: retrieved UUID: 28f232e9-564f-4866-8014-32bb020766f2
[2014-01-27 09:27:06.169422] E 
[glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve 
brick failed in restore
[2014-01-27 09:27:06.169491] E [xlator.c:390:xlator_init] 0-management: 
Initialization of volume 'management' failed, review your volfile again
[2014-01-27 09:27:06.169516] E [graph.c:292:glusterfs_graph_init] 
0-management: initializing translator failed
[2014-01-27 09:27:06.169532] E [graph.c:479:glusterfs_graph_activate] 
0-graph: init failed
[2014-01-27 09:27:06.169769] W [glusterfsd.c:1002:cleanup_and_exit] 
(-->/usr/sbin/glusterd(main+0x3df) [0x7f23c76588ef] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb0) [0x7f23c765b6e0] 
(-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103) 
[0x7f23c765b5f3]))) 0-: received signum (0), shutting down


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread Kaushal M
On Mon, Jan 27, 2014 at 4:40 PM, James  wrote:
> On Mon, Jan 27, 2014 at 12:04 AM, Kaushal M  wrote:
>> I finally got vagrant working.
> Great! Now the fun stuff starts!
>
>> Had to roll back to v1.3.5 to get it
> As suspected. I've updated my original blog post to make it clearer
> that only 1.3.5 should be used.
>
>> working. I can get the puppet vm up, but the provisioning step always
>> fails with a puppet error [1]. Puppet complains about not finding a
>> declared class.
> Interesting! I don't see this error. The puppet server always builds
> properly for me. Can you verify that you did these steps:
>
> 1) git clone --recursive https://github.com/purpleidea/puppet-gluster

This. I hadn't done a recursive clone. I cloned the repo correctly
again and everything works now. The vms are being provisioned as I
type this. Finally, time to test puppet-gluster.

> 2) cd puppet-gluster/vagrant/gluster/
> 3) vagrant up puppet
>
> In particular can you verify that you used --recursive and that the
> puppet-gluster/vagrant/gluster/puppet/modules/ directory contains a
> puppet/ folder?
>
> Other than those things, I'm looking into this too... It seems some of
> the time, I've been getting similar errors too. I'm not quite sure
> what's going on. I got the feeling that maybe the puppet server didn't
> have enough memory. Now I'm not sure. Maybe there's a libvirt
> networking bug? Do you get the same errors when you repeat the
> process, or different errors each time?
>
>> This happens with and without vagrant-cachier. I'm
>> using the latest box (uploaded on 22-Jan).
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterd dont start

2014-01-27 Thread Enrico Valsecchi

Hello Carlos,
in the log you have some errors on rdma.

Have you installed gluster rdma?

Bye,

Enrico

On 27/01/2014 12:53, Jefferson Carlos Machado wrote:
[2014-01-27 09:27:02.913273] I [socket.c:3495:socket_init] 
0-socket.management: using system polling thread
[2014-01-27 09:27:02.914337] W [rdma.c:4197:__gf_rdma_ctx_create] 
0-rpc-transport/rdma: rdma_cm event channel creation failed (No such 
device)
[2014-01-27 09:27:02.914359] E [rdma.c:4485:init] 0-rdma.management: 
Failed to initialize IB Device
[2014-01-27 09:27:02.914375] E 
[rpc-transport.c:320:rpc_transport_load] 0-rpc-transport: 'rdma' 
initialization failed 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterd dont start

2014-01-27 Thread Paul Boven

Hi Jefferson,

Did you perhaps run out of diskspace at some point in the past? I've had
a similar thing happen to me this weekend: the machine worked fine for
weeks after fixing the diskspace issue, but glusterd wouldn't start
after a reboot. Here's how I managed to get things working again.


Amongst all the noise, the error message that 'resolve brick failed' 
seems to be the key here.


Check these files by comparing them between your gluster servers:

/var/lib/glusterd/peers/
Each file in this directory should contain the uuid, IP-address etc. for a
peer. Look at how it looks on your other node, and adapt the UUID and IP
address as appropriate.


/var/lib/glusterd/glusterd.info
This file should contain the UUID of the host, and you might be able to 
retrieve it from the other side.
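
For reference, the two files look roughly like this (the host UUID below is
taken from Jefferson's log; the peer UUID and address are made-up examples):

    # /var/lib/glusterd/glusterd.info -- this host's identity
    UUID=28f232e9-564f-4866-8014-32bb020766f2

    # /var/lib/glusterd/peers/<peer-uuid> -- one file per peer
    uuid=<peer-uuid>
    state=3
    hostname1=192.168.1.2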


I got mine back in working order after fixing these two files.

Regards, Paul Boven.

On 01/27/2014 12:53 PM, Jefferson Carlos Machado wrote:

Hi,

Please, help me!!

After rebooting my system, the service glusterd doesn't start.

Here is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-01-27 09:27:02.898807] I [glusterfsd.c:1910:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.2
(/usr/sbin/glusterd -p /run/glusterd.pid)
[2014-01-27 09:27:02.909147] I [glusterd.c:961:init] 0-management: Using
/var/lib/glusterd as working directory
[2014-01-27 09:27:02.913247] I [socket.c:3480:socket_init]
0-socket.management: SSL support is NOT enabled
[2014-01-27 09:27:02.913273] I [socket.c:3495:socket_init]
0-socket.management: using system polling thread
[2014-01-27 09:27:02.914337] W [rdma.c:4197:__gf_rdma_ctx_create]
0-rpc-transport/rdma: rdma_cm event channel creation failed (No such
device)
[2014-01-27 09:27:02.914359] E [rdma.c:4485:init] 0-rdma.management:
Failed to initialize IB Device
[2014-01-27 09:27:02.914375] E [rpc-transport.c:320:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed
[2014-01-27 09:27:02.914535] W [rpcsvc.c:1389:rpcsvc_transport_create]
0-rpc-service: cannot create listener, initing the transport failed
[2014-01-27 09:27:05.337557] I
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd:
retrieved op-version: 2
[2014-01-27 09:27:05.373853] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-0
[2014-01-27 09:27:05.373927] E
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key:
brick-1
[2014-01-27 09:27:06.166721] I [glusterd.c:125:glusterd_uuid_init]
0-management: retrieved UUID: 28f232e9-564f-4866-8014-32bb020766f2
[2014-01-27 09:27:06.169422] E
[glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve
brick failed in restore
[2014-01-27 09:27:06.169491] E [xlator.c:390:xlator_init] 0-management:
Initialization of volume 'management' failed, review your volfile again
[2014-01-27 09:27:06.169516] E [graph.c:292:glusterfs_graph_init]
0-management: initializing translator failed
[2014-01-27 09:27:06.169532] E [graph.c:479:glusterfs_graph_activate]
0-graph: init failed
[2014-01-27 09:27:06.169769] W [glusterfsd.c:1002:cleanup_and_exit]
(-->/usr/sbin/glusterd(main+0x3df) [0x7f23c76588ef]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb0) [0x7f23c765b6e0]
(-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103)
[0x7f23c765b5f3]))) 0-: received signum (0), shutting down

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



--
Paul Boven  +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.nl
VLBI - It's a fringe science
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Glusterd dont start

2014-01-27 Thread Jefferson Carlos Machado

Hi,

Please, help me!!

After rebooting my system, the service glusterd doesn't start.

Here is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-01-27 09:27:02.898807] I [glusterfsd.c:1910:main] 
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.2 
(/usr/sbin/glusterd -p /run/glusterd.pid)
[2014-01-27 09:27:02.909147] I [glusterd.c:961:init] 0-management: Using 
/var/lib/glusterd as working directory
[2014-01-27 09:27:02.913247] I [socket.c:3480:socket_init] 
0-socket.management: SSL support is NOT enabled
[2014-01-27 09:27:02.913273] I [socket.c:3495:socket_init] 
0-socket.management: using system polling thread
[2014-01-27 09:27:02.914337] W [rdma.c:4197:__gf_rdma_ctx_create] 
0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-01-27 09:27:02.914359] E [rdma.c:4485:init] 0-rdma.management: 
Failed to initialize IB Device
[2014-01-27 09:27:02.914375] E [rpc-transport.c:320:rpc_transport_load] 
0-rpc-transport: 'rdma' initialization failed
[2014-01-27 09:27:02.914535] W [rpcsvc.c:1389:rpcsvc_transport_create] 
0-rpc-service: cannot create listener, initing the transport failed
[2014-01-27 09:27:05.337557] I 
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd: 
retrieved op-version: 2
[2014-01-27 09:27:05.373853] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: 
brick-0
[2014-01-27 09:27:05.373927] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: 
brick-1
[2014-01-27 09:27:06.166721] I [glusterd.c:125:glusterd_uuid_init] 
0-management: retrieved UUID: 28f232e9-564f-4866-8014-32bb020766f2
[2014-01-27 09:27:06.169422] E 
[glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve 
brick failed in restore
[2014-01-27 09:27:06.169491] E [xlator.c:390:xlator_init] 0-management: 
Initialization of volume 'management' failed, review your volfile again
[2014-01-27 09:27:06.169516] E [graph.c:292:glusterfs_graph_init] 
0-management: initializing translator failed
[2014-01-27 09:27:06.169532] E [graph.c:479:glusterfs_graph_activate] 
0-graph: init failed
[2014-01-27 09:27:06.169769] W [glusterfsd.c:1002:cleanup_and_exit] 
(-->/usr/sbin/glusterd(main+0x3df) [0x7f23c76588ef] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb0) [0x7f23c765b6e0] 
(-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103) 
[0x7f23c765b5f3]))) 0-: received signum (0), shutting down


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Problem creating/editing files with samba gluster vfs module..

2014-01-27 Thread Poornima Gurusiddaiah
Hi,

It seems to be a bug; this may have something to do with flock: in the
vfs_glusterfs
module, the function vfs_gluster_kernel_flock() returns an ENOSYS error.
To solve this issue, either modify the above-mentioned function to return 0
instead of ENOSYS, or there may be some parameter in smb.conf that disables
flock
(I am not sure of the parameter, though).
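
For illustration, the change would land in Samba's modules/vfs_glusterfs.c,
along these lines. A sketch only, based on the description above; the exact
signature in a given 4.1 tree may differ:

    /* Samba's kernel-flock hook in the gluster VFS module. Returning ENOSYS
     * here is what the Windows clients trip over; the workaround described
     * above is to claim success instead. Not a verified patch. */
    static int vfs_gluster_kernel_flock(struct vfs_handle_struct *handle,
                                        files_struct *fsp,
                                        uint32_t share_mode, uint32_t access_mask)
    {
            /* was: errno = ENOSYS; return -1; */
            return 0;
    }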

Regards,
Poornima

- Original Message -
From: "B.K.Raghuram" 
To: "gluster-users" 
Sent: Friday, January 24, 2014 12:16:46 PM
Subject: [Gluster-users] Problem creating/editing files with samba gluster  
vfs module..

Hi,

We have built samba 4.1 from source with the gluster vfs module
enabled. I am able to access (read) and browse a volume from a windows
machine. However, when I try to create or edit a file that resides on
the volume from a windows box, it hangs forever. On the backend, I see
that many temporary files are being created. I suspect that the
windows box is trying to create the file but is not getting a
confirmation of it having been created and so it tries to create one
again. However, I do not have any problems with creating directories.
We are using gluster 3.4.1.

Any ideas on what may be the issue?

Thanks,
-Ram
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread James
On Mon, Jan 27, 2014 at 12:04 AM, Kaushal M  wrote:
> I finally got vagrant working.
Great! Now the fun stuff starts!

> Had to roll back to v1.3.5 to get it
As suspected. I've updated my original blog post to make it clearer
that only 1.3.5 should be used.

> working. I can get the puppet vm up, but the provisioning step always
> fails with a puppet error [1]. Puppet complains about not finding a
> declared class.
Interesting! I don't see this error. The puppet server always builds
properly for me. Can you verify that you did these steps:

1) git clone --recursive https://github.com/purpleidea/puppet-gluster
2) cd puppet-gluster/vagrant/gluster/
3) vagrant up puppet

In particular can you verify that you used --recursive and that the
puppet-gluster/vagrant/gluster/puppet/modules/ directory contains a
puppet/ folder?

Other than those things, I'm looking into this too... It seems some of
the time, I've been getting similar errors too. I'm not quite sure
what's going on. I got the feeling that maybe the puppet server didn't
have enough memory. Now I'm not sure. Maybe there's a libvirt
networking bug? Do you get the same errors when you repeat the
process, or different errors each time?

> This happens with and without vagrant-cachier. I'm
> using the latest box (uploaded on 22-Jan).
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client crash

2014-01-27 Thread Mingfan Lu
Sorry, our customer remounted, so it is not possible to get the full backtrace.
The full version is glusterfs-3.3.0.5rhs_iqiyi_6-1.el6.x86_64;
it is a self-built version.

The code started from this change:

%changelog
* Wed May 9 2012 Kaleb S. KEITHLEY 
- Add BuildRequires: libxml2-devel so that configure will DTRT on for
- Fedora's Koji build system

with some backports:

1. backport patch to avoid an fd leak during rebalance, see
http://review.gluster.org/4888

2. backport http://review.gluster.org/#/c/4459/ to avoid gfid mismatch
during concurrent mkdir



On Mon, Jan 27, 2014 at 6:04 PM, Vijay Bellur  wrote:

> On 01/27/2014 01:34 PM, Mingfan Lu wrote:
>
>> the volume is distributed (replication = 1)
>>
>
> Is it possible to obtain a full backtrace using gdb?
>
> Also, what is the complete version string of this glusterfs release?
>
> Thanks,
> Vijay
>
>
>>
>> On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu wrote:
>>
>> One of our clients (3.3.0.5) crashed when writing data; the log is:
>>
>> pending frames:
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(LOOKUP)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> frame : type(1) op(WRITE)
>> patchset: git://git.gluster.com/glusterfs.git
>> 
>> signal received: 6
>> time of crash: 2014-01-27 15:36:32
>> configuration details:
>> argp 1
>> backtrace 1
>> dlfcn 1
>> fdatasync 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 3.3.0.5rhs
>> /lib64/libc.so.6[0x32c5a32920]
>> /lib64/libc.so.6(gsignal+0x35)[0x32c5a328a5]
>> /lib64/libc.so.6(abort+0x175)[0x32c5a34085]
>> /lib64/libc.so.6[0x32c5a707b7]
>> /lib64/libc.so.6[0x32c5a760e6]
>> /usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-
>> behind.so(+0x42be)[0x7f79a63012be]
>> /usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-
>> behind.so(wb_sync_cbk+0xa0)[0x7f79a6307ab0]
>> /usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/quota.so(
>> quota_writev_cbk+0xed)[0x7f79a651864d]
>> /usr/lib64/glusterfs/3.3.0.5rhs/xlator/cluster/
>> distribute.so(dht_writev_cbk+0x14f)[0x7f79a6753aaf]
>> /usr/lib64/glusterfs/3.3.0.5rhs/xlator/protocol/client.
>> so(client3_1_writev_cbk+0x600)[0x7f79a6995340]
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x31b020f4f5]
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x31b020fdb0]
>> /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x31b020aeb8]
>> /usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(
>> socket_event_poll_in+0x34)[0x7f79a79d4784]
>> /usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(
>> socket_event_handler+0xc7)[0x7f79a79d4867]
>> /usr/lib64/libglusterfs.so.0[0x31afe3e4e4]
>> /usr/sbin/glusterfs(main+0x590)[0x407420]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x32c5a1ecdd]
/usr/sbin/glusterfs[0x404289]

Re: [Gluster-users] Replication delay

2014-01-27 Thread Fabio Rosati
Vijay,

   I can confirm that enabling network.remote-dio on the volume solves the
problem.
Do you think this is the best option for performance as well?
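
For the archive, the setting is applied like this (volume name is a placeholder):

    gluster volume set <volname> network.remote-dio enable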


Thanks a lot
Fabio

- Original Message -
From: "Vijay Bellur" 
To: "Pranith Kumar Karampuri" 
Cc: "Fabio Rosati" , "Gluster-users@gluster.org
List" 
Sent: Saturday, 25 January 2014 11:25:01
Subject: Re: [Gluster-users] Replication delay

On 01/25/2014 03:36 PM, Pranith Kumar Karampuri wrote:
>
>
> - Original Message -
>> From: "Vijay Bellur" 
>> To: "Pranith Kumar Karampuri" 
>> Cc: "Fabio Rosati" , 
>> "Gluster-users@gluster.org List" 
>> Sent: Saturday, January 25, 2014 3:32:24 PM
>> Subject: Re: [Gluster-users] Replication delay
>>
>> On 01/25/2014 02:28 PM, Pranith Kumar Karampuri wrote:
>>> But it seems like self-heal's fd is able to perform 'writes'.
>>> Shouldn't it be uniform if it is the problem with xfs?
>>>Shouldn't it be uniform if it is the problem with xfs?
>>
>> The problem is not with xfs alone. It is due to a combination of several
>> factors including disk sector size, xfs sector size and the nature of
>> writes being performed. With cache=none, qemu does O_DIRECT open() which
>> necessitates proper alignment for write operations to happen
>> successfully. Self-heal does not open() with O_DIRECT and hence write
>> operations initiated by self-heal go through.
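
To make the alignment point concrete, a minimal sketch (not gluster or qemu
code; the file name and the 512-byte alignment are assumptions, real code
should query the actual sector size):

    #define _GNU_SOURCE   /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        void *buf;
        /* With O_DIRECT, the buffer address, length and file offset must all
         * be suitably aligned, or the write fails with EINVAL. */
        if (posix_memalign(&buf, 512, 4096) != 0)
            return 1;
        memset(buf, 0, 4096);
        int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
            return 1;
        ssize_t n = write(fd, buf, 4096);   /* aligned: ok; misaligned: EINVAL */
        close(fd);
        free(buf);
        return n == 4096 ? 0 : 1;
    }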
>
> I was also guessing it could be related to O_DIRECT. Any way to fix that?

One option might be to enable the option "network.remote-dio" on the
glusterfs volume. Fabio - can you please check if this works?

> Wonder why it has to happen only on one of the bricks.

I suspect that the bricks are not completely identical. Hence it does go 
through on one and fails on the other.

-Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client crash

2014-01-27 Thread Vijay Bellur

On 01/27/2014 01:34 PM, Mingfan Lu wrote:

the volume is distributed (replication = 1)


Is it possible to obtain a full backtrace using gdb?

Also, what is the complete version string of this glusterfs release?

Thanks,
Vijay




On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu <mingfan...@gmail.com> wrote:

One of our clients (3.3.0.5) crashed when writing data; the log is:

[crash log and backtrace snipped; see the original post below for the full log]




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users





Re: [Gluster-users] Antw: Re: MS Office/Samba/Glusterfs/Centos

2014-01-27 Thread Adrian Valeanu
Hi,
over the weekend I installed ctdb from here 
https://ftp.samba.org/pub/ctdb/packages/redhat/RHEL6/2.5.1/ .
Now the situation is even worse than before. The tdb databases of Samba are 
clustered now, but all of them are clustered, not just the locking ones.
I can join the first server into the AD domain without problems. When I join 
the second one (the tdbs being clustered), it breaks the join of the first.
This might be desirable in an HA situation where an IP takeover happens, but 
for me it is counterproductive. This is mentioned somewhere in the samba 
documentation. It would be nice to cluster only the tdbs where the locking 
happens, but maybe this use case is too unusual.
And of course this is no longer a GlusterFS problem.
 


>>> Paul Robert Marino  23.01.2014 23:12 >>>
Ira

In clustered mode, Samba stores locking information in a TDB; that's why you
need to configure CTDB and tell Samba it is clustered in the config, so it
will keep those databases in sync across the nodes.
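
The smb.conf side of that is small; a minimal sketch, assuming CTDB itself
is already configured and running:

  [global]
      clustering = yes

With clustering = yes, Samba goes through CTDB for its clustered TDBs
instead of opening the local files directly.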

By the way, Samba does still use spinlocks, or at least it did last year.
Here is an interesting thread where they were talking about spinlocks,
fcntl locks and TDB-based locks:
https://groups.google.com/forum/#!topic/mailing.unix.samba-technical/hM-t2pBf_Hs

That said, spinlocks were just the first thing that came to mind
(because they were part of many discussions back in the 2.x days) and
were not the best choice of words on my part.
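
For anyone following along, a minimal CTDB configuration sketch; the
recovery lock file must live on storage shared by all nodes, e.g. a
fuse-mounted gluster path (all paths, IPs and the sysconfig location are
placeholders that vary by packaging):

  # /etc/sysconfig/ctdb
  CTDB_RECOVERY_LOCK=/mnt/gluster/ctdb/.recovery.lock
  CTDB_NODES=/etc/ctdb/nodes
  CTDB_MANAGES_SAMBA=yes

  # /etc/ctdb/nodes -- one internal IP per cluster node
  192.168.10.1
  192.168.10.2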




On Thu, Jan 23, 2014 at 4:48 PM, Paul Robert Marino  wrote:
> Check this article, it's fairly straightforward: http://ctdb.samba.org/samba.html
> If you don't get this configured properly, your locking won't work.
>
> By the way, oplocks need to be enabled; based on another post it looks
> like you turned off all locking support.
>
>
> On Thu, Jan 23, 2014 at 3:05 PM, Paul Robert Marino  
> wrote:
>> Are you using CTDB on a shared gluster volume instead of TDB on a local
>> volume, which is Samba's default?
>>
>> If not, this may explain your issue, because Samba stores, or at least did
>> at one time store, spinlocks in the TDB for speed.
>>
>>
>>
>> -- Sent from my HP Pre3
>>
>> 
>> On Jan 23, 2014 10:31, Adrian Valeanu  wrote:
>>
>> Hi,
>> I am trying to replicate some data that resides on two CentOS servers. The
>> replicated data should be shared with the users via Samba. Two users must be
>> prevented from working on the same file with MS Office at the same time. If
>> two users access the same file on one server, one of them is not able to
>> modify the file; this is possible (and the normal operation) when a single
>> Samba server is used. I assumed that this kind of lock would be replicated too.
>>
>> I did not know about libgfapi and had not used it yesterday. I used the
>> fuse-mounted directory as the data source for Samba.
>>
>> Last night I updated both CentOS servers; they are CentOS 6.5 now. Now
>> locking does not happen at all any more. After you mentioned libgfapi I
>> found this:
>> https://www.mail-archive.com/gluster-users@gluster.org/msg13033.html
>>
>> I managed to compile the module and switched to samba-glusterfs-vfs. But I
>> still have no locking.
>> My Samba configuration looks like this:
>> [glusterdata-vfs]
>>    # route this share's I/O through the glusterfs VFS module (libgfapi)
>>    vfs object = glusterfs
>>    glusterfs:volume = gv0
>>    # path is relative to the root of the gluster volume
>>    path = /
>>    glusterfs:loglevel = 2
>>    glusterfs:logfile = /var/log/samba/glusterdata-vfs.log
>>
>>read only = no
>>browseable = yes
>>guest ok = no
>>printable = no
>>nt acl support = yes
>>acl map full control = yes
>>
>> Thank you for your attention.
>>
>>
>> >>> Lalatendu Mohanty  22.01.2014 16:10 >>>
>>>
>> On 01/22/2014 08:30 PM, Adrian Valeanu wrote:
>>
>> Hi,
>> I have set up glusterfs 3.4.2 over a 10Gig xfs filesystem on two CentOS 6
>> servers. The gluster filesystem is shared through Samba on both servers.
>> Replication is working like a charm but file locking is not. Is it possible
>> to have file locking working in this configuration in a way that Microsoft
>> Office 2010 behaves as if the files were on the same server? Does somebody
>> have such a configuration?
>> I tried a lot of the Samba configurations found on the mailing list but none
>> showed the expected results.
>>
>>
>> Are you using Samba with libgfapi? I am not sure if I understand your
>> expectation on locking through Samba. Some more context would be nice.
>>
>> -Lala
>>
>> Thank you
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster client crash

2014-01-27 Thread Mingfan Lu
the volume is distributed (replication = 1)


On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu  wrote:

> One of our clients (3.3.0.5) crashed when writing data, the log is:
>
> [crash log and backtrace snipped; see the original post below for the full log]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster client crash

2014-01-27 Thread Mingfan Lu
One of our clients (3.3.0.5) crashed when writing data, the log is:

pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(LOOKUP)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)

patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash: 2014-01-27 15:36:32
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0.5rhs
/lib64/libc.so.6[0x32c5a32920]
/lib64/libc.so.6(gsignal+0x35)[0x32c5a328a5]
/lib64/libc.so.6(abort+0x175)[0x32c5a34085]
/lib64/libc.so.6[0x32c5a707b7]
/lib64/libc.so.6[0x32c5a760e6]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(+0x42be)[0x7f79a63012be]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(wb_sync_cbk+0xa0)[0x7f79a6307ab0]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/quota.so(quota_writev_cbk+0xed)[0x7f79a651864d]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/cluster/distribute.so(dht_writev_cbk+0x14f)[0x7f79a6753aaf]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/protocol/client.so(client3_1_writev_cbk+0x600)[0x7f79a6995340]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x31b020f4f5]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x31b020fdb0]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x31b020aeb8]
/usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f79a79d4784]
/usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f79a79d4867]
/usr/lib64/libglusterfs.so.0[0x31afe3e4e4]
/usr/sbin/glusterfs(main+0x590)[0x407420]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x32c5a1ecdd]
/usr/sbin/glusterfs[0x404289]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users