[Gluster-users] Mount Gluster on Block Device?

2016-02-24 Thread Samuel Hall
Hi,

I have a block device at /dev/blkdev. Does Gluster have some kind of
function to mount it on this block device?
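
For context, GlusterFS bricks normally sit on top of an already-mounted local
filesystem rather than on a raw block device, so the usual pattern is to
format and mount the device first and then export the mount point as a brick.
A minimal sketch (hostname, volume name and paths are illustrative):

mkfs.xfs /dev/blkdev                  # put a local filesystem on the device
mkdir -p /export/brick1
mount /dev/blkdev /export/brick1      # mount it like any other disk
gluster volume create myvol host1:/export/brick1
gluster volume start myvol
mount -t glusterfs host1:/myvol /mnt/myvol   # clients mount the volume, not the device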

 

Kind regards

Samuel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] FUSE fuse_main

2016-02-10 Thread Samuel Hall
Hello everyone,

I am trying to find where FUSE is initialized and the fuse_main function is
called by Gluster.

 

Normally, the call in file systems using FUSE looks like this:

int main(int argc, char *argv[])
{
    ...
    return fuse_main(argc, argv, &prefix_oper, NULL);
}

I can't find any similar pattern in Gluster.

I am also looking for the operations struct (prefix_oper in the example above),
which contains the FUSE operations. I found similar structs in different
Gluster translators. Where can I find the one being used for FUSE?
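
For reference, GlusterFS does not appear to call libfuse's fuse_main() at all:
the FUSE side lives in the mount/fuse translator (fuse-bridge.c, the file that
shows up in the client logs quoted elsewhere in this list), which reads
requests from /dev/fuse itself. A rough way to locate the entry points,
assuming a checkout of the glusterfs sources:

git clone https://github.com/gluster/glusterfs.git
grep -rn "fuse_main" glusterfs/               # expect little or nothing: libfuse's loop is not used
ls glusterfs/xlators/mount/fuse/src/          # fuse-bridge.c holds the init and opcode dispatch
grep -n "fuse_thread_proc" glusterfs/xlators/mount/fuse/src/fuse-bridge.c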

 

The reason for these questions is that I'm trying to integrate Gluster into a
microkernel OS to which FUSE has been ported. But to be able to use FUSE,
some changes have to be made in the call to fuse_main.

 

Kind regards
Hall Samuel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] openstack's nova integration with libgfapi

2014-07-24 Thread samuel
Hi folks,

I've been trying to set up nova to use glusterfs volumes directly via
libgfapi but I haven't been able to configure it on the Icehouse version.
The Gluster version is 3.5.1, and I can manually create gluster files via
qemu-img commands from the CLI of the compute nodes.

I've been trying to find information, but the only thing I've found is to
manually mount gluster on the nova node and use it via FUSE.

Can anyone provide information on how to set up this environment?
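
For reference, the configuration usually cited for Icehouse-era libgfapi
access goes through Cinder's GlusterFS driver plus a nova/libvirt option. The
snippets below are a hedged sketch only: the share name is illustrative, and
the exact option names and sections should be checked against the release
documentation.

# /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares

# /etc/cinder/glusterfs_shares
gluster-node1:/nova-volumes

# /etc/nova/nova.conf -- lets qemu open gluster:// URLs via libgfapi
qemu_allowed_storage_drivers = gluster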

Thanks in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] compatibility between 3.3 and 3.4

2013-12-09 Thread samuel
Hi all,

We're playing around with new versions and upgrading options. We currently
have a 2x2x2 striped-distributed-replicated volume based on 3.3.0 and
we're planning to upgrade to version 3.4.

We've tried upgrading the clients first, with 3.4.0, 3.4.1
and 3.4.2qa2, but all of them produced the same error:

Failed to get stripe-size

So it seems as if 3.4 clients are not compatible with 3.3 volumes. Is this
assumption right?

Is there any procedure to upgrade Gluster from 3.3 to 3.4 without
stopping the service?
Where are the compatibility limitations between these 2 versions?

Any hint or link to documentation would be highly appreciated.

Thank you in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] mixing versions 3.3 and 3.4?

2013-10-08 Thread samuel
Hi all,

We're experimenting with a 3.3 gluster environment with 8 nodes in a
replicated-distributed-striped structure.
We'd like to try the next 3.4 version, and we've read that it's backwards
compatible with 3.3.

I've got several questions:

1. Is it possible to add 3.4 version bricks to an existing 3.3 version
volume?
2. Is it possible to connect 3.4 native gluster clients to an existing 3.3
volume?
3. In case we add a 3.4 volume, would a 3.4 client be able to mount both
existing 3.3 volume and the future 3.4 volume?

We're trying to find the best way to increase both the version and the
nodes in the current system.

Any answer, or a hint on where to find the above information, is more than welcome.

Thanks in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] 3.3.0 split-brain: how to set flags manually

2013-05-12 Thread samuel
After a few problems with the power supply, we ended up with a few split-brain
situations on gluster 3.3.0.

We've got 8 bricks holding the data in a replicated distributed topology.
We've got several files in split brain, and one of them that we tried to fix
does not heal automatically. We have deleted both the file and the
.gfs attr file.

The problem is that the self-heal daemon recreates the file but does not
copy the data. The flags are the following:

on first brick:
getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.afr.storage-client-0=0x
trusted.afr.storage-client-1=0x
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84
trusted.storage-stripe-0.stripe-count=0x3200
trusted.storage-stripe-0.stripe-index=0x3000
trusted.storage-stripe-0.stripe-size=0x31333130373200

on second brick:
getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.afr.storage-client-0=0x
trusted.afr.storage-client-1=0x
trusted.afr.storage-io-threads=0x
trusted.afr.storage-replace-brick=0x
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84
trusted.storage-stripe-0.stripe-count=0x3200
trusted.storage-stripe-0.stripe-index=0x3000
trusted.storage-stripe-0.stripe-size=0x31333130373200

on the third brick, to which we want the data to be healed:
getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84

on the fourth brick:
getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.afr.storage-client-2=0x00010001
trusted.afr.storage-client-3=0x
trusted.afr.storage-io-threads=0x
trusted.afr.storage-replace-brick=0x
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84
trusted.storage-stripe-0.stripe-count=0x3200
trusted.storage-stripe-0.stripe-index=0x3100
trusted.storage-stripe-0.stripe-size=0x31333130373200

The problem, as far as I can see, is the flags set
on trusted.afr.storage-client-2. Is there any documentation on what each flag
means, and how we can set the split-brain one to 0 so that the
self-heal daemon copies the data?
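
For reference, the 3.3-era procedure referenced elsewhere in this list resets
the accusing changelog xattr directly on the brick and then lets self-heal
copy the data. A sketch only: the brick path is illustrative, and the xattr
name and value must match what getfattr actually shows on your brick.

setfattr -n trusted.afr.storage-client-2 \
         -v 0x000000000000000000000000 \
         /export/brick/fd04a34a7aa503052503b65ab6eaea5f
# then stat the file through a client mount to trigger the self-heal daemon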

As a side note, we've got around 150 files with similar issues. Is there
any limit on the number of files the self-heal daemon can handle? Would
it be safe to manually copy the data from one brick to the other?

Thanks a lot in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS 3.3.1 split-brain rsync question

2013-04-11 Thread samuel
You might try this:
http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/

and wait for the self-heal to replace the file.
In most cases it works, but sometimes the gluster client still reports
split brain (even though all xattr flags are cleared).

In that case you may try to clear the cache on the clients by issuing
(http://linux-mm.org/Drop_Caches):

echo 3 > /proc/sys/vm/drop_caches

If the client still sees the file as split brain, you will have to unmount
and remount the gluster volume.
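
For example (mount point, server and volume name are illustrative):

umount /mnt/storage
mount -t glusterfs server1:/storage /mnt/storage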

Hope it helps,
Samuel.


On 11 April 2013 01:48, Robert Hajime Lanning lann...@lanning.cc wrote:

 On 04/10/13 03:44, Daniel Mons wrote:
 [snip]


 Option 1) Delete the file from the bad brick


 I would do this.  Then trigger a self-heal.

 --
 Mr. Flibble
 King of the Potato People

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] split brain recovery 3.3.0

2013-03-08 Thread samuel
No one around can help with this situation?

The file finally got corrupted and it's impossible to read it from any of
the currently mounted native gluster clients.

Reading the attributes, all flags are set to 0:

# file: dd29900f215b175939816f94c907a31b
trusted.afr.storage-client-6=0x
trusted.afr.storage-client-7=0x
trusted.gfid=0x59311906edf04464be5f00f505b3aebb
trusted.storage-stripe-1.stripe-count=0x3200
trusted.storage-stripe-1.stripe-index=0x3100
trusted.storage-stripe-1.stripe-size=0x31333130373200

so the file should not be seen as split brain by the clients, but we get the
following logs:
split brain detected during lookup of

thanks in advance,
Samuel.


On 4 March 2013 14:37, samuel sam...@gmail.com wrote:

 Hi folks,

 We have detected a split-brain on a 3.3.0 striped replicated 8-node
 cluster.
 We've followed the instructions from:
 http://www.gluster.org/2012/07/fixing-split-brain-with-glusterfs-3-3/
 and we've manually recovered the information.

 The problem is that we've got 4 gluster native clients and from 2 of them
 we got
 0-storage-replicate-3: failed to open as split brain seen, returning EIO
 but from 2 of them we can access the recovered file.

 Is this information stored locally on the clients, so that we can update all
 of them and recover the information in a coherent manner?

 Thanks a lot in advance,
 Samuel.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] split brain recovery 3.3.0

2013-03-08 Thread samuel
Dear Patric,

That seemed to do the trick: we can now see the file and access it.

Is there any place where this could be documented?

Thanks a lot,
Samuel

On 8 March 2013 10:40, Patric Uebele pueb...@redhat.com wrote:

 Hi Samuel,

 can you try to drop caches on the clients:

 “echo 3 > /proc/sys/vm/drop_caches”

 /Patric

 On Fri, 2013-03-08 at 10:13 +0100, samuel wrote:
  No one around can help with this situation?
 
  The file finally got corrupted and it's impossible to read it from any
  of the currently mounted native gluster clients.
 
  Reading the attributes, all flags are set as 0:
 
  # file: dd29900f215b175939816f94c907a31b
  trusted.afr.storage-client-6=0x
  trusted.afr.storage-client-7=0x
  trusted.gfid=0x59311906edf04464be5f00f505b3aebb
  trusted.storage-stripe-1.stripe-count=0x3200
  trusted.storage-stripe-1.stripe-index=0x3100
  trusted.storage-stripe-1.stripe-size=0x31333130373200
 
  so the file should not be seen as split brain by the clients, but we get the
  following logs:
  split brain detected during lookup of
 
  thanks in advance,
  Samuel.
 
 
  On 4 March 2013 14:37, samuel sam...@gmail.com wrote:
  Hi folks,
 
  We have detected a split-brain on a 3.3.0 striped replicated 8-node
  cluster.
  We've followed the instructions from:
 
 http://www.gluster.org/2012/07/fixing-split-brain-with-glusterfs-3-3/
  and we've manually recovered the information.
 
  The problem is that we've got 4 gluster native clients and
  from 2 of them we got
  0-storage-replicate-3: failed to open as split brain seen,
  returning EIO
  but from 2 of them we can access the recovered file.
 
  Is this information stored locally on the clients, so that we can
  update all of them and recover the information in a
  coherent manner?
 
  Thanks a lot in advance,
  Samuel.
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users

 --
 Patric Uebele
 Solution Architect Storage

 Red Hat GmbH
 Technopark II, Haus C
 Werner-von-Siemens-Ring 14
 85630 Grasbrunn
 Germany

 Office:+49 89 205071-162
 Cell:  +49 172 669 14 99
 mailto:patric.ueb...@redhat.com

 gpg keyid: 48E64CC1
 gpg fingerprint: C63E 6320 A03B 4410 D208  4EE7 12FC D0E6 48E6 4CC1

 
 Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 14, 85630 Grasbrunn
 Handelsregister: Amtsgericht Muenchen HRB 153243
 Geschaeftsfuehrer: Mark Hegarty, Charlie Peters, Michael Cunningham,
 Charles Cachera

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] split brain recovery 3.3.0

2013-03-04 Thread samuel
Hi folks,

We have detected a split-brain on a 3.3.0 striped replicated 8-node cluster.
We've followed the instructions from:
http://www.gluster.org/2012/07/fixing-split-brain-with-glusterfs-3-3/
and we've manually recovered the information.

The problem is that we've got 4 gluster native clients, and from 2 of them
we get
0-storage-replicate-3: failed to open as split brain seen, returning EIO
but from the other 2 we can access the recovered file.

Is this information stored locally on the clients, so that we can update all of
them and recover the information in a coherent manner?

Thanks a lot in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] server3_1-fops.c:1240 (Cannot allocate memory)

2013-02-12 Thread samuel
Hi folks,

We're using gluster 3.2.2 in an old environment, and we've just started
seeing the following errors in /usr/local/var/log/glusterfs/bricks/gfs-log:

[2013-02-12 19:15:05.214430] I [server3_1-fops.c:1240:server_writev_cbk]
0-cloud-server: 2400444243: WRITEV 7 (37067) ==> -1 (Cannot allocate memory)
[2013-02-12 19:15:19.463087] I [server3_1-fops.c:1240:server_writev_cbk]
0-cloud-server: 2249406582: WRITEV 15 (52493) ==> -1 (Cannot allocate
memory)

in a distributed replicated 2-node environment.

Both nodes have enough disk space (1.3T), but it looks like used memory is
quite high:

node1:
             total       used       free     shared    buffers     cached
Mem:       4047680    4012700      34980          0        276    3728652
-/+ buffers/cache:     283772    3763908
Swap:      3906244      19916    3886328

node2:
             total       used       free     shared    buffers     cached
Mem:       4047680    4012488      35192          0       1088    3713512
-/+ buffers/cache:     297888    3749792
Swap:      3905532      25244    3880288

Could it be a memory issue?
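
Note that in the output above most of the "used" memory is page cache. A
quick sketch for checking how much the gluster processes themselves hold
(process names as used by 3.2):

ps -C glusterfsd -o pid,rss,vsz,args   # brick (server) processes
ps -C glusterfs  -o pid,rss,vsz,args   # client / NFS processes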

Best regards,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Striped Replicated Volumes: create files error.

2013-02-01 Thread samuel
It's a problem with striped volumes in 3.3.1.
It does not appear in 3.3.0, and it is solved in the upcoming 3.4.

Best regards,
Samuel.

On 25 January 2013 14:41, axel.we...@cbc.de wrote:

  Hi there,
 each time I copy (or dd or similar) a file to a striped replicated volume
 I get an error: the argument is not valid.
 An empty file is created.
 If I now re-run the copy, it works.
 This is independent of the client platform.
 We are using version 3.3.1

 Mit freundlichen Grüßen / Kind regards

 Axel Weber

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Question regarding RDMA support, in gluster 3.3.1

2013-01-02 Thread samuel
Hi all,

Beyond our own negative experience on this topic, there's this post:

http://community.gluster.org/q/how-i-can-troubleshoot-rdma-performance-issues-in-3-3-0/

So it is expected that version 3.3 does not work with transport rdma.

Best regards,
Samuel.
On 30 December 2012 14:44, Ayelet Shemesh shemesh.aye...@gmail.com wrote:

 Hi,

 I've recently installed glusterfs on a cluster of several machines, and
 although it works very nicely when I use TCP connections, I completely fail
 to use RDMA connections.

 I have 10 machines with IB NICs, and RDMA does work between them (verified
 using ib_write_bw -r and several other apps).

 When I configure a volume with RDMA I have a problem with the port at the
 client side. It keeps trying to get to port 24008, no matter what port the
 server actually listens on (in my case 24024).

 I've tried many options to set the ports; it doesn't seem to work either
 using the files under /etc/glusterd/vols/my_vol_name/ or using
 gluster volume set my_vol_name transport.socket.listen-port
 (or any other option I tried).

 Any help will be appreciated.

 Thanks,
 Ayelet

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] striped volume in 3.4.0qa5 with horrible read performance

2012-12-17 Thread samuel
Dear folks,

I had been trying to use replicated striped volumes with 3.3, unsuccessfully,
due to https://bugzilla.redhat.com/show_bug.cgi?id=861423, and I then
proceeded to try 3.4.0qa5. I found out that the bug was solved and I
could use a replicated striped volume with the new version. Write
performance was quite astonishing.

The problem I'm facing now is in the read path: it's horribly slow. When
I open a file to edit it using the gluster native client, it takes a few
seconds, and sometimes I get an error saying the file has been modified
while I was editing it. There's a Ruby application reading the files and I
continuously get timeout errors.

I'm using 4 bricks with CentOS 6.3 with the following structure:
Type: Striped-Replicate
Volume ID: 23dbb8dd-5cb3-4c71-9702-7c16ee9a3b3b
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.51.31:/gfs
Brick2: 10.0.51.32:/gfs
Brick3: 10.0.51.33:/gfs
Brick4: 10.0.51.34:/gfs
Options Reconfigured:
performance.quick-read: on
performance.io-thread-count: 32
performance.cache-max-file-size: 128MB
performance.cache-size: 256MB
performance.io-cache: on
cluster.stripe-block-size: 2MB
nfs.disable: on

I started profiling and found out one node with absurd latency figures. I
stopped the node and the problem moved to another brick:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
     99.94   551292.41 us      10.00 us  1996709.00 us           361    FINODELK
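
For reference, the figures above come from volume profiling; a sketch of the
commands (the volume name is a placeholder, since it is not shown in the
volume info above):

gluster volume profile myvol start
gluster volume profile myvol info    # per-brick latency per FOP, as quoted above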

Could anyone provide some information on how to debug this problem? Currently
the volume is not usable due to the horrible delay.

Thank you very much in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] striped volume in 3.4.0qa5 with horrible read performance

2012-12-17 Thread samuel
Done.

https://bugzilla.redhat.com/show_bug.cgi?id=888174

While testing the system, we found that 3.3.0 allows striped-replicated
volumes and seems to offer correct read behaviour in some tests.

Thanks in advance and, please, contact me in case I can offer further help.

Best regards,
Samuel.

On 17 December 2012 16:20, John Mark Walker johnm...@redhat.com wrote:

 Please file a bug. There might be time to fix read performance before the
 1st beta release.

 -JM


 --

 Dear folks,

 I had been trying to use replicated striped volumes with 3.3, unsuccessfully,
 due to https://bugzilla.redhat.com/show_bug.cgi?id=861423, and I then
 proceeded to try 3.4.0qa5. I found out that the bug was solved and I
 could use a replicated striped volume with the new version. Write
 performance was quite astonishing.

 The problem I'm facing now is in the read path: it's horribly slow. When
 I open a file to edit it using the gluster native client, it takes a few
 seconds, and sometimes I get an error saying the file has been modified
 while I was editing it. There's a Ruby application reading the files and I
 continuously get timeout errors.

 I'm using 4 bricks with Centos 6.3 with the following structure:
 Type: Striped-Replicate
 Volume ID: 23dbb8dd-5cb3-4c71-9702-7c16ee9a3b3b
 Status: Started
 Number of Bricks: 1 x 2 x 2 = 4
 Transport-type: tcp
 Bricks:
 Brick1: 10.0.51.31:/gfs
 Brick2: 10.0.51.32:/gfs
 Brick3: 10.0.51.33:/gfs
 Brick4: 10.0.51.34:/gfs
 Options Reconfigured:
 performance.quick-read: on
 performance.io-thread-count: 32
 performance.cache-max-file-size: 128MB
 performance.cache-size: 256MB
 performance.io-cache: on
 cluster.stripe-block-size: 2MB
 nfs.disable: on

 I started profiling and found out one node with absurd latency figures. I
 stopped the node and the problem moved to another brick:
  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
  ---------   -----------   -----------   -----------   ------------        ----
      99.94   551292.41 us      10.00 us  1996709.00 us           361    FINODELK

 Could anyone provide some information on how to debug this problem? Currently
 the volume is not usable due to the horrible delay.

 Thank you very much in advance,
 Samuel.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] infiniband bonding

2012-09-21 Thread samuel
Hi folks,

Reading this post:
http://community.gluster.org/q/port-bonding-link-aggregation-transport-rdma-ib-verbs/

It says that gluster 3.2 does not support bonding of infiniband ports.

Does anyone know whether 3.3 has changed this limitation? Is there any
other place to find information about this subject?

Thanks in advance!

Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS performance degradation in 3.3

2012-07-19 Thread samuel
I've just run more tests: without any error in the logs, the glusterfs NFS
server's load rose to 6.00 (on a 4-core server), and the 2 bricks where the
real files were stored reached loads of 10. There were no error messages in
the log files (nfs, bricks, gluster).

Would deactivating NLM improve performance? Any other options?
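
For reference, NLM can be switched off per volume to test this (volume name
taken from the volfile quoted below; option availability should be checked
against the 3.3 build in use):

gluster volume set cloud nfs.nlm off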

Thanks in advance for any hint,
Samuel.

On 19 July 2012 08:44, samuel sam...@gmail.com wrote:

 These are the parameters that are set:

  59: volume nfs-server
  60: type nfs/server
  61: option nfs.dynamic-volumes on
  62: option nfs.nlm on
  63: option rpc-auth.addr.cloud.allow *
  64: option nfs3.cloud.volume-id 84fcec8c-d11a-43b6-9689-3f39700732b3
  65: option nfs.enable-ino32 off
  66: option nfs3.cloud.volume-access read-write
  67: option nfs.cloud.disable off
  68: subvolumes cloud
  69: end-volume

 And some errors are:
 [2012-07-18 17:57:00.391104] W [socket.c:195:__socket_rwv]
 0-socket.nfs-server: readv failed (Connection reset by peer)
 [2012-07-18 17:57:29.805684] W [socket.c:195:__socket_rwv]
 0-socket.nfs-server: readv failed (Connection reset by peer)
 [2012-07-18 18:04:08.603822] W [nfs3.c:3525:nfs3svc_rmdir_cbk] 0-nfs:
 d037df6: /one/var/datastores/0/99/disk.0 => -1 (Directory not empty)
 [2012-07-18 18:04:08.625753] W [nfs3.c:3525:nfs3svc_rmdir_cbk] 0-nfs:
 d037dfe: /one/var/datastores/0/99 => -1 (Directory not empty)

 The "Directory not empty" error is just an attempt to delete a directory with
 files inside, but I guess that should not increase the CPU load.

 The above case is just one of the many times the NFS daemon started using
 CPU, but it's not the only scenario (deleting a non-empty directory) that
 causes the degradation. Sometimes it has happened without any concrete error
 in the log files. I'll try to run more tests and provide more debug
 information.

 Thanks for your answer so far,
 Samuel.


 On 18 July 2012 21:54, Anand Avati anand.av...@gmail.com wrote:

 Is there anything in the nfs logs?

 Avati

 On Wed, Jul 18, 2012 at 9:44 AM, samuel sam...@gmail.com wrote:

 Hi all,

 We're experimenting with a 4-node distributed-replicated environment
 (replica 2). We were using the gluster native client to access the volumes, but
 we were asked to add NFS accessibility to the volume. We then started the
 NFS daemon on the bricks. Everything went OK, but we started experiencing
 some performance degradation accessing the volume.
 We debugged the problem and found out that quite often the NFS glusterfs
 process (NOT the glusterfsd) eats up all the CPU and the server where the
 NFS is being exported starts offering really bad performance.

 Is there any issue with 3.3 and NFS performance? Are there any NFS
 parameters to play with that can mitigate this degradation (standard R/W
 values drop to a quarter of the usual values)?

 Thanks in advance for any help,

 Samuel.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] managing split brain in 3.3

2012-06-25 Thread samuel
Hi all,

We've been using gluster 3.2.X without much issue, and we are now trying the
next version (3.3), compiled from source on an Ubuntu 12.04 server:

glusterfs 3.3.0 built on Jun  7 2012 11:19:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.

We're using a replicated distributed architecture with 8 nodes in a
2-replica configuration.

On the client side we're using the gluster native client to mount the
gluster volume, and recently we found an issue with one file:

[2012-06-25 14:58:22.161036] W
[afr-self-heal-data.c:831:afr_lookup_select_read_child_by_txn_type]
0-cloud-replicate-2:$FILE: Possible split-brain
[2012-06-25 14:58:22.161098] W
[afr-common.c:1226:afr_detect_self_heal_by_lookup_status]
0-cloud-replicate-2: split brain detected during lookup of $FILE
[2012-06-25 14:58:22.161881] E
[afr-self-heal-common.c:2156:afr_self_heal_completion_cbk]
0-cloud-replicate-2: background  data gfid self-heal failed on $FILE

I located the 2 bricks (servers) where the file was stored, and the file
was OK on both nodes, as expected. I tried deleting both the file and the
hard link on one node and performing a self-heal from the client; the file
was recreated on the missing node but was still not accessible from
the client.

I followed the same procedure on the other node (deleted the file and the
hard link), launched self-heal, and the file is still not accessible.

Is there any guide or procedure to handle split brains on 3.3?
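
For reference, the commonly cited 3.3 procedure (the gluster.org article
linked in the other split-brain threads on this list) removes both the file
and its .glusterfs gfid hard link on the brick holding the bad copy, then
triggers a heal from a client. A sketch with purely illustrative paths:

# on the brick holding the bad copy
rm /export/brick/path/to/FILE
rm /export/brick/.glusterfs/<first-2-hex-of-gfid>/<next-2-hex>/<full-gfid>
# from a native client mount, trigger self-heal
stat /mnt/cloud/path/to/FILE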

Thanks in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] enabling NFS on a running gluster system

2012-04-18 Thread samuel
Hi all,

We currently have a 2-node replicated-distributed gluster system
(version 3.2.2) where all the clients connect via the native gluster
client. There is now a requirement to connect to the existing gluster via
NFS, and I'd like to ask whether NFS can be dynamically enabled.

Is it required to restart services on the server?
Is it required to remount existing clients?
There's a geo-replication backend which I guess will not be affected, but is it
required to restart the replication?


As a side effect, would the existing gluster performance be degraded by
enabling NFS compatibility?
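
For reference, the Gluster NFS server is a separate per-volume glusterfs
process; a hedged sketch of enabling and checking it (volume and server names
are illustrative, and the option should be confirmed to exist in 3.2.2):

gluster volume set myvol nfs.disable off
showmount -e server1    # confirm the volume is now exported over NFS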

Thank you in advance.

Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] fd cleanup and Bad file descriptor

2011-09-13 Thread samuel
happen again?

Thank you very much in advance for any link or information,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs and pacemaker

2011-07-18 Thread samuel
I don't know from which version onwards but, if you use the native client to
mount the volumes, the IP only needs to be reachable at mount time. After
that, the native client will transparently handle node failures.
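
In other words, only the volfile fetch at mount time needs the address to be
reachable; afterwards the client talks to all bricks directly. A sketch of
mounting through a floating (pacemaker-managed) address, with illustrative
names:

mount -t glusterfs vip.example.com:/myvol /mnt/myvol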

Best regards,
Samuel.

On 18 July 2011 13:14, Marcel Pennewiß mailingli...@pennewiss.de wrote:

 On Monday 18 July 2011 12:10:36 Uwe Weiss wrote:
  My second node is 192.168.50.2. But in the Filesystem RA I have
 referenced
  to 192.168.50.1 (see above). During my first test node1 was up and
 running,
  but what happens if node1 is completely away and the address is
  inaccessible?

 We're using replicated setup and both nodes share an IPv4/IPv6-address (via
 pacemaker) which is used for accessing/mounting glusterfs-share and
 nfs-share
 (from backup-server).

 Marcel
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] high wa% values

2011-07-01 Thread samuel
Hi folks,

I'm just starting to play with gluster because I'd like to move from an NFS (v3)
storage system to glusterfs. I downloaded the sources of the latest 3.2 version
and compiled them on all the clients and servers (Ubuntu 11.04). I installed
with default parameters for both NFS and gluster in order to compare basic
installations.

In a KVM virtual environment using the same instance (copying the disk image
and running both of them), the performance using NFS is significantly
better, especially regarding the percentage of processes in waiting status.
Absolute values from hdparm or fio are fine, but gluster-backed virtual
machines stall from time to time when processes are in waiting status.

The backend filesystems are:
1) for NFS, RAID 1+0 formatted with XFS;
2) for gluster, plain SATA disks formatted with XFS.

I've read that gluster has lower performance than NFS when accessing a huge
number of small files but that, in theory, it offers better performance in the
other scenarios.
Can anyone point me to documentation on how to improve gluster's behaviour?
Or any suggestion or idea to improve the storage system?

Thank you very much in advance,
Samuel.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Errors in logs and partition crash

2011-01-06 Thread Samuel Hassine
Hi there,

I have a problem here: in all my gluster client log files, I have been seeing
the following errors for the last one or two days:

[2011-01-06 11:52:33.396075] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319109: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:33.455547] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319223: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:33.499089] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319254: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:33.841787] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319756: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)
[2011-01-06 11:52:34.38679] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 228319819: GETXATTR (null)/46853712 (security.capability)
(fuse_loc_fill() failed)

Here is more of the same: http://pastebin.com/kYzcD3qq

And after a few hours of this, the gluster partition freezes and crashes.

Here is the config:

r...@on-001:~# gluster volume info
Volume Name: dns
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: on-001.olympe-network.com:/store1
Brick2: on-002.olympe-network.com:/store1
Brick3: on-003.olympe-network.com:/store1
Brick4: on-004.olympe-network.com:/store1
Options Reconfigured:
performance.cache-refresh-timeout: 0
performance.cache-size: 6144MB

Does anyone know what this means?

Regards.
Samuel



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Invalid argument on delete with GlusterFS 3.1.1 client

2011-01-04 Thread Samuel Hassine
Hi there,

I just want to add that we have exactly the same problem, with many many
files on our infrastructure.

If I try to delete a file, DHT returns "Invalid argument".

And in the log file:

[2011-01-04 15:20:43.641438] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-1 returned -1 (Invalid argument)
[2011-01-04 15:20:52.510538] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-0 returned -1 (Invalid argument)

(and all over again...).

Regards.
Sam

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Dan Bretherton
Sent: Tuesday, 4 January 2011, 14:29
To: Lana Deere
Cc: gluster-users
Subject: Re: [Gluster-users] Invalid argument on delete with GlusterFS 3.1.1
client

No I don't think it is load dependent.  The user reported the problem again
during the Christmas holiday when very few other people (if any) were using
the clients or the servers.
-Dan,

On 03/01/2011 17:28, Lana Deere wrote:
 I have seen this same problem but have not been able to find a 
 workaround other than to delete the file from the server directly.  I 
 was not able to figure out a way to reproduce the symptom reliably, 
 but in my case I suspect it was related to heavy concurrent access.
 Does that seem plausible in light of your access patterns?

 .. Lana (lana.de...@gmail.com)






 On Wed, Dec 29, 2010 at 3:30 PM, Dan Bretherton 
 d.a.brether...@reading.ac.uk  wrote:
 We have an occasional problem that prevents deletion of certain 
 GlusterFS mounted files.  See the following, for example, with 
 corresponding log file message.

 ke...@cd /glusterfs/atmos/users/kih/ECHAM5/TS4-TEMP
 ke...@rm HYBRID_TEMP_207212
 rm: cannot remove `HYBRID_TEMP_207212': Invalid argument

 [2010-12-28 00:59:04.298331] W [fuse-bridge.c:888:fuse_unlink_cbk]
 glusterfs-fuse: 3997: UNLINK() 
 /users/kih/ECHAM5/TS4-TEMP/HYBRID_TEMP_207212
 => -1 (Invalid argument)

 The file was deleted without error on a machine where the volume was 
 mounted via NFS.  I have four compute servers that are using the 
 GlusterFS client for performance reasons.  Operating system and 
 GlusterFS package details are as follows.

 [r...@nemo1 TS4-TEMP]# cat /etc/redhat-release CentOS release 5.5 
 (Final)
 [r...@nemo1 TS4-TEMP]# uname -a
 Linux nemo1.nerc-essc.ac.uk 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 
 EDT
 2010 x86_64 x86_64 x86_64 GNU/Linux
 [r...@nemo1 TS4-TEMP]# rpm -qa | grep -i gluster
 glusterfs-fuse-3.1.1-1
 glusterfs-core-3.1.1-1

 Is there anything I can do to stop this from happening, other than 
 using NFS instead of GlusterFS client?

 -Dan.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Errors with Gluster 3.1.2qa2

2010-12-15 Thread Samuel Hassine
Hi all,

 

I have just migrated my old gluster partition to a fresh one with 4 nodes
with:

 

Type: Distributed-Replicate

Status: Started

Number of Bricks: 2 x 2 = 4

 

It solves my latency and disk error problems (like input/output errors
or "file descriptor in bad state"), but now I have many, many errors like this:

 

[2010-12-15 12:01:12.711136] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-0 returned -1 (Invalid argument)

[2010-12-15 12:01:21.228062] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-1 returned -1 (Invalid argument)

[2010-12-15 12:01:28.677286] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-0 returned -1 (Invalid argument)

[2010-12-15 12:01:31.741818] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-0 returned -1 (Invalid argument)

[2010-12-15 12:01:56.247185] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-0 returned -1 (Invalid argument)

[2010-12-15 12:02:01.278899] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-1 returned -1 (Invalid argument)

[2010-12-15 12:02:25.228251] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate-1 returned -1 (Invalid argument)

 

What are these errors? Can you help me?

 

Regards.

Sam

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] diagnostics.latency-measurement

2010-12-06 Thread Samuel Hassine
Hi,

 

I want to know why my Gluster is a little slow when accessing many small
files, such as MySQL databases. I set the option
diagnostics.latency-measurement to yes.

 

Options Reconfigured:

diagnostics.latency-measurement: yes

cluster.self-heal-window-size: 1024

performance.cache-refresh-timeout: 10

performance.cache-size: 4096MB

 

What is the log file or the command to view this diagnostic?

 

Regards.

Samuel Hassine

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 3.1 bailout error

2010-12-06 Thread Samuel Hassine
I sometimes have the same errors.

I attach my entire client and server log files.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Raghavendra G
Sent: Monday, 6 December 2010, 08:55
To: Matt Keating
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.1 bailout error

Hi Matt,

Can you attach entire client and server log files?

regards,
- Original Message -
From: Matt Keating matt.keating.li...@gmail.com
To: gluster-users@gluster.org
Sent: Sunday, December 5, 2010 3:39:34 AM
Subject: [Gluster-users] Gluster 3.1 bailout error

Hi,

I've got a GlusterFS share serving web pages, and I'm finding that imagecache
isn't always able to create new files on the mount.
Since upgrading to GlusterFS 3.1, I'm seeing a lot of these errors appearing
in the logs:

logs/EBS-drupal-shared-.log:[2010-11-29 10:29:18.141045] E
[rpc-clnt.c:199:call_bail] drupal-client-0: bailing out frame type(GlusterFS
3.1) op(FINODELK(30)) xid = 0xb69cc sent = 2010-11-29 09:59:10.112834.
timeout = 1800
logs/EBS-drupal-shared-.log:[2010-11-29 10:42:58.365735] E
[rpc-clnt.c:199:call_bail] drupal-client-0: bailing out frame type(GlusterFS
3.1) op(FINODELK(30)) xid = 0xb863e sent = 2010-11-29 10:12:54.584124.
timeout = 1800
logs/EBS-drupal-shared-.log:[2010-11-29 12:00:02.572679] E
[rpc-clnt.c:199:call_bail] drupal-client-0: bailing out frame type(GlusterFS
3.1) op(FINODELK(30)) xid = 0xbe36f sent = 2010-11-29 11:29:57.497653.
timeout = 1800


Could anyone shed any light on what's happening/wrong?

Thanks,
Matt

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Abnormal Gluster shutdown

2010-12-03 Thread Samuel Hassine
Craig,

I am using Debian Lenny (Proxmox 1.7)

r...@on-003:/# uname -a
Linux on-003 2.6.32-3-pve #1 SMP Fri Sep 17 17:56:13 CEST 2010 x86_64
GNU/Linux

On all Gluster nodes and gluster clients.

For hardware, it is SATA disks with a 2.2 TB LVM partition, in a
distributed-replicated Gluster setup.

I tested this morning and the problem is still here.

Regards.
Sam

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Craig Carl
Sent: Friday, 3 December 2010, 09:02
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Abnormal Gluster shutdown

Samuel -
I can't reproduce this issue locally, can you send me operating system
and hardware details for both the Gluster servers and the client?

Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster



On 12/02/2010 05:59 AM, Samuel Hassine wrote:
 Hi all,



 The GlusterFS partition automatically shuts down when unmounting a bind
 mount point with the -f option (without -f it works).



 How to reproduce:



 mounted Gluster partition on /gluster (any config):



 df: localhost:/gluster  4.5T  100G  4.4T  3% /gluster

 mount: localhost:/gluster on /gluster type fuse.glusterfs

 (rw,allow_other,default_permissions,max_read=131072)



 commands:



 mkdir /test

 mount -n --bind /gluster /test

 ls /test (verify you have the Gluster)



 and:



 umount -f /test



 ===



 df: `/gluster': Transport endpoint is not connected

 [2010-12-02 14:48:56.38309] I [fuse-bridge.c:3138:fuse_thread_proc] fuse:

 unmounting /gluster

 [2010-12-02 14:48:56.38364] I [glusterfsd.c:672:cleanup_and_exit]
 glusterfsd:

 shutting down



 Before 3.1.x I did not have this bug.



 Regards.

 Sam




 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Incomprehensible errors

2010-12-03 Thread Samuel Hassine
Hi there,

 

I have a lot of these errors (one per second):

 

[2010-12-03 16:33:53.600610] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 248602: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:53.658075] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 248696: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:53.685461] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 248748: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:53.992316] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 248905: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:54.22034] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 248960: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:54.287419] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 249502: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:54.744890] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 249514: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:56.148194] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 250798: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:56.186352] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 250892: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:57.307090] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 251415: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:57.978952] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 252582: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

[2010-12-03 16:33:58.596328] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 252715: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)

 

What do they mean?

 

Thanks for your answers.



Regards.

Samuel Hassine

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Abnormal Gluster shutdown

2010-12-02 Thread Samuel Hassine
Hi all,

 

The GlusterFS partition automatically shuts down when unmounting a bind mount
point with the -f option (without -f it works).

 

How to reproduce:

 

mounted Gluster partition on /gluster (any config):

 

df: localhost:/gluster  4.5T  100G  4.4T  3% /gluster

mount: localhost:/gluster on /gluster type fuse.glusterfs

(rw,allow_other,default_permissions,max_read=131072)

 

commands:

 

mkdir /test

mount -n --bind /gluster /test

ls /test (verify you have the Gluster)

 

and:

 

umount -f /test

 

===

 

df: `/gluster': Transport endpoint is not connected

[2010-12-02 14:48:56.38309] I [fuse-bridge.c:3138:fuse_thread_proc] fuse:

unmounting /gluster

[2010-12-02 14:48:56.38364] I [glusterfsd.c:672:cleanup_and_exit]
glusterfsd:

shutting down

 

Before 3.1.x I did not have this bug.

 

Regards.

Sam

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster crash

2010-11-06 Thread Samuel Hassine
Hi all,

Our service using GlusterFS has been in production for one week and we are
handling huge traffic. Last night, one of the Gluster clients (on a
physical node with a lot of virtual engines) crashed. Can you give me
more information about the crash log?

Here is the log: 

pending frames:
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(CREATE)
frame : type(1) op(CREATE)

patchset: v3.0.6
signal received: 6
time of crash: 2010-11-06 05:38:11
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.6
/lib/libc.so.6[0x7f7644e76f60]
/lib/libc.so.6(gsignal+0x35)[0x7f7644e76ed5]
/lib/libc.so.6(abort+0x183)[0x7f7644e783f3]
/lib/libc.so.6(__assert_fail+0xe9)[0x7f7644e6fdc9]
/lib/libpthread.so.0(pthread_mutex_lock+0x686)[0x7f76451a0b16]
/lib/glusterfs/3.0.6/xlator/performance/io-cache.so(ioc_create_cbk
+0x87)[0x7f7643dcd3f7]
/lib/glusterfs/3.0.6/xlator/performance/read-ahead.so(ra_create_cbk
+0x1a2)[0x7f7643fd9322]
/lib/glusterfs/3.0.6/xlator/cluster/replicate.so(afr_create_unwind
+0x126)[0x7f76441f1866]
/lib/glusterfs/3.0.6/xlator/cluster/replicate.so(afr_create_wind_cbk
+0x10f)[0x7f76441f25ef]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(client_create_cbk
+0x5aa)[0x7f764443a00a]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(protocol_client_pollin
+0xca)[0x7f76444284ba]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(notify
+0xe0)[0x7f7644437d70]
/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7f76455cd483]
/lib/glusterfs/3.0.6/transport/socket.so(socket_event_handler
+0xe0)[0x7f76433819e0]
/lib/libglusterfs.so.0[0x7f76455e7e0f]
/sbin/glusterfs(main+0x82c)[0x40446c]
/lib/libc.so.6(__libc_start_main+0xe6)[0x7f7644e631a6]
/sbin/glusterfs[0x402a29]

I just want to know why Gluster crashed.

Regards.
Sam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Problem in replication on Gluster 3.1

2010-10-24 Thread Samuel Hassine
Hello all,

I have just upgraded from Gluster 3.0 to Gluster 3.1 with a simple
configuration: 2 nodes with replication between them.

I now have many problems during replication and file lookups, like:

on /com/**/speedcoolandfun/administrator/templates/khepri/images/toolbar/icon-32-upload.png:
[2010-10-24 09:53:48.724063] I [afr-common.c:662:afr_lookup_done]
dns-replicate-0: entries are missing in lookup

And many, many errors of "permission denied" / "resource temporarily
unavailable", etc.

Can somebody help me, or does anyone have the same problem?

Thanks for your answer.

Regards.
Sam


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Many problems in 3.1

2010-10-23 Thread Samuel Hassine
Hello there,

I have just upgraded a GlusterFS filesystem in production after one week
of tests in an independent environment. First of all, I want to
congratulate the Gluster team for the work and the new filesystem
capabilities; it is just awesome :)

But I am also very disappointed in the new internal replication management
and the integrity checking. In production, we have a simple GlusterFS
system with 2 nodes in a replicated filesystem.

Since we migrated to Gluster 3.1 (from Gluster 3.0) with exactly the same
configuration, we have gone from a daily partition log of 50k to 350M! We
constantly see errors like:

[2010-10-23 11:59:11.497818] W [fuse-bridge.c:1674:fuse_readv_cbk]
glusterfs-fuse: 56975335: READ => -1 (No such file or directory)
[2010-10-23 11:59:11.498753] W [fuse-bridge.c:1674:fuse_readv_cbk]
glusterfs-fuse: 56975336: READ => -1 (No such file or directory)
[2010-10-23 12:08:29.152259] W [fuse-bridge.c:1674:fuse_readv_cbk]
glusterfs-fuse: 57056236: READ => -1 (No such file or directory)
[2010-10-23 12:08:29.153387] W [fuse-bridge.c:1674:fuse_readv_cbk]
glusterfs-fuse: 57056237: READ => -1 (No such file or directory)
[2010-10-23 12:15:50.933772] W [fuse-bridge.c:1674:fuse_readv_cbk]
glusterfs-fuse: 57112661: READ => -1 (No such file or directory)

or like this:


[2010-10-21 00:05:41.180371] W [fuse-bridge.c:1748:fuse_writev_cbk]
glusterfs-fuse: 36601886: WRITE => -1 (Permission denied)
[2010-10-21 00:05:41.974738] W [fuse-bridge.c:1748:fuse_writev_cbk]
glusterfs-fuse: 36601993: WRITE => -1 (Permission denied)
[2010-10-21 00:05:42.289002] W [fuse-bridge.c:1748:fuse_writev_cbk]
glusterfs-fuse: 36602074: WRITE => -1 (Permission denied)
[2010-10-21 00:05:43.79197] W [fuse-bridge.c:1748:fuse_writev_cbk]
glusterfs-fuse: 36602190: WRITE => -1 (Permission denied)
[2010-10-21 00:05:43.302434] W [fuse-bridge.c:1748:fuse_writev_cbk]

or like:

[2010-10-21 19:21:54.884030] I [afr-common.c:716:afr_lookup_done]
dns-replicate-0: background  meta-data data entry self-heal triggered.
path: **/images/toolbar/icon-32-delete.png
[2010-10-21 19:21:54.889428] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
dns-replicate-0: background  meta-data data entry self-heal completed on
**/images/toolbar/icon-32-delete.png
[2010-10-21 19:21:54.917167] I [afr-common.c:662:afr_lookup_done]
dns-replicate-0: entries are missing in lookup of
***/images/toolbar/icon-32-new.png.
[2010-10-21 19:21:54.917215] I [afr-common.c:716:afr_lookup_done]
dns-replicate-0: background  meta-data data entry self-heal triggered.
path: /images/toolbar/icon-32-new.png
[2010-10-21 19:21:54.922384] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
dns-replicate-0: background  meta-data data entry self-heal completed on
***/images/toolbar/icon-32-new.png
[2010-10-21 19:21:54.936605] I [afr-common.c:662:afr_lookup_done]
dns-replicate-0: entries are missing in lookup of
**/images/toolbar/icon-32-new.png.

So our filesystem is unusable...

Does somebody have the same problem? Is there a solution?

Thanks for your answer.

Regards.
Sam


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Bug in Gluster .deb

2010-10-17 Thread Samuel Hassine
Hello all,

I've been using gluster for 2 years, and I am just testing the latest
version (3.1) from the .deb on a new pool. (I usually compile and configure
by hand.)

I have just done:

wget GLUSTER.deb
dpkg -i GLUSTER.deb
/etc/init.d/glusterd start

And I have the following error:

Starting glusterd service: glusterd.
/usr/sbin/glusterd: option requires an argument -- f
Try `glusterd --help' or `glusterd --usage' for more information.

Is this a bug, or do I have to configure something before launching glusterd?
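
A hedged way to diagnose this: the error means glusterd was started with -f
but no argument, so check what the init script actually runs and whether the
volfile it points at exists (paths may differ between packages):

grep -n glusterd /etc/init.d/glusterd    # see the exact command line used
ls -l /etc/glusterfs/glusterd.vol        # the volfile the package normally installs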

Thanks for help.
Sam

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users