[Gluster-users] mv not atomic in glusterfs

2012-05-25 Thread anthony garnier

Hi all,
I'm using a distributed-replicated volume and it seems that mv is not atomic
on a volume mounted via GlusterFS (it works fine over NFS).

Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3540:/users3/test
Brick2: ylal3520:/users3/test
Brick3: ylal3530:/users3/test
Brick4: ylal3510:/users3/test
Options Reconfigured:
nfs.port: 2049
performance.cache-size: 6GB
performance.cache-refresh-timeout: 10
performance.cache-min-file-size: 1kB
performance.cache-max-file-size: 4GB
network.ping-timeout: 10
features.quota: off
features.quota-timeout: 60


mount -t glusterfs ylal3530:/test /users/glusterfs_mnt/

# while :; do echo test > /users/glusterfs_mnt/tmp/atomic1; mv /users/glusterfs_mnt/tmp/atomic1 /users/glusterfs_mnt/tmp/atomic2; done

# while :; do cat /users/glusterfs_mnt/tmp/atomic2 1>/dev/null; done
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
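For contrast, POSIX rename(2) guarantees that mv within a single local filesystem is atomic: a reader sees either the old contents or the new, never a missing file. A minimal local sketch of the same write-then-rename pattern (the temporary directory path is illustrative):

```shell
#!/bin/sh
# Same pattern as the reproducer above, but on a local filesystem, where
# rename(2) makes the replacement atomic: once seeded, atomic2 always exists.
dir=$(mktemp -d)
printf 'old\n' > "$dir/atomic2"    # seed the destination
printf 'test\n' > "$dir/atomic1"   # write the new contents under a temp name
mv "$dir/atomic1" "$dir/atomic2"   # atomic replace; no window with no file
cat "$dir/atomic2"                 # prints: test
rm -rf "$dir"
```

On a local fs the `cat` loop from the report would never hit "No such file or directory"; the errors above show the GlusterFS mount briefly exposing a state where neither name resolves.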

  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 'remove-brick' is removing more bytes than are in the brick(?)

2012-05-25 Thread Amar Tumballi

Hi Harry,

Thanks for taking time and testing 3.3.0qa42.


So this was a big improvement over the previous trial. The only glitches
were the 120 failures (which mean...?)


You can open the log file and search for the reason for the failure
(/var/lib/glusterfs/gli-rebalance.log). There may currently be a lot of log
lines to search through, but a grep for '\ E\ ' will pull out the relevant
error entries.
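For instance, a quick filter along those lines (the sample log lines below are made up for the demo, not real gluster output; the log path is the one quoted above):

```shell
#!/bin/sh
# Gluster log lines carry a one-letter severity (I/W/E), so grepping for
# ' E ' keeps only the error entries. Sample lines are illustrative.
log=$(mktemp)
cat > "$log" <<'EOF'
[2012-05-25 10:00:01] I [dht-rebalance.c:100] migration started
[2012-05-25 10:00:02] E [dht-rebalance.c:200] failed to migrate file
[2012-05-25 10:00:03] W [dht-rebalance.c:300] subvolume busy
EOF
grep ' E ' "$log"    # prints only the error line
rm -f "$log"
```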



and the directory skeleton left on the removed brick, which may be a feature...?


It is a limitation of the current design. Rebalance and remove-brick don't 
deal with directories at all; both operations act only on files.




So it seems to have been fixed in qa42.



Thanks for confirming.

Regards,
Amar


[Gluster-users] gluster volume replace-brick ... status breaks when executed on multiple nodes

2012-05-25 Thread Tomasz Chmielewski
I've executed 'gluster volume replace-brick ... status' on multiple peers at 
the same time, which resulted in quite an interesting breakage.

It's no longer possible to pause/abort/status/start the replace-brick operation.

Please advise. I'm running glusterfs 3.2.6.

root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs 
ca2-int:/data/ca1 status
replace-brick status unknown
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs 
ca2-int:/data/ca1 pause
replace-brick pause failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs 
ca2-int:/data/ca1 abort
replace-brick abort failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs 
ca2-int:/data/ca1 start
replace-brick failed to start



-- 
Tomasz Chmielewski
http://www.ptraveler.com


[Gluster-users] Questions regarding rearrangements in the gluster file backend

2012-05-25 Thread Philip Poten
Hello,

since we need to restructure our gluster infrastructure a wee bit, I'd
need to use each disk twice, i.e. two logical bricks on one physical disk.

Current layout, mountpoint = brick:
/srv/disk1/volumecontents
/srv/disk2/volumecontents
/srv/disk3/volumecontents
/srv/disk4/volumecontents

What my plan is currently:
mount /mnt/disk1
mkdir /mnt/disk1/disk1
mv /mnt/disk1/* /mnt/disk1/disk1
mount --bind /mnt/disk1/disk1 /srv/disk1
[repeat...]

so that the final layout looks something like this:
/mnt/disk1/disk1 -> /srv/disk1
/mnt/disk2/disk2 -> /srv/disk2
/mnt/disk3/disk3 -> /srv/disk3
/mnt/disk4/disk4 -> /srv/disk4
/mnt/disk1/disk5 -> /srv/disk5 *new
/mnt/disk2/disk6 -> /srv/disk6 *new
/mnt/disk3/disk7 -> /srv/disk7 *new
/mnt/disk4/disk8 -> /srv/disk8 *new
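The per-disk steps above could be scripted roughly as follows (paths as in the plan; this is an untested sketch, not verified against a live volume). Note that the glob has to skip the freshly created diskN subdirectory, and mount --bind requires root:

```shell
#!/bin/sh
# Shuffle each brick's contents one level down, then bind-mount the new
# subdirectory back onto the old brick path. Run with the volume stopped.
for i in 1 2 3 4; do
    mkdir -p "/mnt/disk$i/disk$i" "/srv/disk$i"
    for entry in "/mnt/disk$i"/*; do
        # skip the target subdirectory itself
        [ "$entry" = "/mnt/disk$i/disk$i" ] && continue
        mv "$entry" "/mnt/disk$i/disk$i/"
    done
    mount --bind "/mnt/disk$i/disk$i" "/srv/disk$i"
done
```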

Now, my questions:
* Can I do that without destroying/altering the relevant dht
information/the distribution of the files? The pathnames of everything that
was there before stay the same as far as gluster is concerned.
* A mv should carry over the extended attributes, shouldn't it?
* Can I use a symlink instead of a bind mount?
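On the xattr question, one way to check empirically: a mv within one filesystem is just a rename(2), so the inode and its extended attributes are untouched. A small demo (needs the attr tools; user.* attributes are used here because reading gluster's trusted.* xattrs requires root):

```shell
#!/bin/sh
# Set a test xattr, rename the file within the same filesystem, read it back.
d=$(mktemp -d)
touch "$d/src"
setfattr -n user.demo -v hello "$d/src"
mv "$d/src" "$d/dst"                                # same fs: plain rename(2)
getfattr -n user.demo --only-values "$d/dst"; echo  # prints: hello
rm -rf "$d"
```

A mv *across* filesystems is a copy-and-delete instead, and whether xattrs survive then depends on the mv implementation and the target filesystem.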

Thank you for any insight on this,
Philip
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users