Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-09 Thread Alvin Starr
The performance needs to be compared between the two in a real environment. For example, I have a system where xfsdump takes something like 4 hours for a complete dump to /dev/null, but a "find . -type f > /dev/null" takes well over a day. So it seems that xfsdump is very efficient at reading the disk.
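A minimal sketch of how to time such a comparison, assuming a hypothetical brick mount point (runtimes depend entirely on hardware and file count):

  # time a full xfsdump of the brick filesystem, discarding the stream
  time xfsdump -J - /bricks/brick1 > /dev/null
  # time a plain metadata walk of the same tree for comparison
  time find /bricks/brick1 -type f > /dev/null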

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-08 Thread Ashish Pandey
- Original Message - From: "Poornima Gurusiddaiah" To: "Tom Fite" Cc: "Gluster-users" Sent: Tuesday, April 9, 2019 9:53:02 AM Subject: Re: [Gluster-users] Rsync in place of heal after brick failure On Mon, Apr 8, 2019, 6:31 PM Tom Fite < tomf

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-08 Thread Poornima Gurusiddaiah
On Mon, Apr 8, 2019, 6:31 PM Tom Fite wrote: > Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore > is much faster than rsync since it handles small files much better. I don't > have extra space to store the dumps but I was able to figure out how to > pipe the xfsdump and

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-08 Thread Aravinda
On Mon, 2019-04-08 at 09:01 -0400, Tom Fite wrote: > Thanks for the idea, Poornima. Testing shows that xfsdump and > xfsrestore is much faster than rsync since it handles small files > much better. I don't have extra space to store the dumps but I was > able to figure out how to pipe the xfsdump

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-08 Thread Tom Fite
Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore is much faster than rsync since it handles small files much better. I don't have extra space to store the dumps but I was able to figure out how to pipe the xfsdump and restore via ssh. For anyone else that's interested: On
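The exact command line is cut off above; a minimal sketch of piping xfsdump into xfsrestore over ssh, with hypothetical brick paths and peer hostname rather than Tom's actual flags:

  # dump the local brick filesystem to stdout and restore it on the replica peer
  xfsdump -J - /data/brick1 | ssh root@gluster2 'xfsrestore -J - /data/brick1'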

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-01 Thread Poornima Gurusiddaiah
You could also try xfsdump and xfsrestore if your brick filesystem is xfs and the destination disk can be attached locally? This will be much faster. Regards, Poornima On Tue, Apr 2, 2019, 12:05 AM Tom Fite wrote: > Hi all, > > I have a very large (65 TB) brick in a replica 2 volume that needs
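Where the destination disk really is attached locally, the same idea works without the network hop; a sketch with hypothetical mount points:

  # copy the old brick filesystem onto a locally attached replacement disk
  xfsdump -J - /bricks/old-brick | xfsrestore -J - /mnt/new-brick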

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-01 Thread Jim Kinney
Nice! I didn't use -H -X and the system had to do some clean up. I'll add this in my next migration as I move 120TB to new hard drives. On Mon, 2019-04-01 at 14:27 -0400, Tom Fite wrote: > Hi all, > I have a very large (65 TB) brick in a replica 2 volume that needs to > be re-copied from

[Gluster-users] Rsync in place of heal after brick failure

2019-04-01 Thread Tom Fite
Hi all, I have a very large (65 TB) brick in a replica 2 volume that needs to be re-copied from scratch. A heal will take a very long time with performance degradation on the volume so I investigated using rsync to do the brunt of the work. The command: rsync -av -H -X --numeric-ids --progress
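The command is truncated above; a hedged sketch of an rsync invocation with those flags, using hypothetical brick paths and peer hostname:

  # -a archive mode, -H preserve hard links, -X preserve extended attributes,
  # --numeric-ids keep raw uid/gid numbers instead of mapping names
  rsync -av -H -X --numeric-ids --progress /data/brick1/ root@replica-peer:/data/brick1/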

[Gluster-users] Rsync - should I rsync from mount point or vol directory

2016-06-15 Thread John Lewis
Hi :) I have a /glusterdata dir that is mounted to /var/www/mydir. rsync seems slow reading from /var/www/mydir, so I think I will use /glusterdata as the rsync source dir. My questions: 1. Is that ok? Is it safe? :) 2. I noticed I can exclude the .glusterfs and .trashcan dir. Is that correct? Is there
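On the exclusion question, a sketch of reading from the brick directory while skipping GlusterFS's internal directories (the destination path is hypothetical, and whether reading straight from the brick is safe for a given workload is the open question of the thread):

  rsync -av --exclude='/.glusterfs' --exclude='/.trashcan' /glusterdata/ /backup/mydir/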

[Gluster-users] Rsync on bricks filesystem?

2016-03-30 Thread Yannick Perret
Hello, we have some replica-2 volumes and they work fine at this time. For some of the volumes I need to set up daily incremental backups (on another filesystem, which doesn't need to be on glusterfs). As 'rsync' or similar is not very efficient on glusterfs volumes I tried to use direct

Re: [Gluster-users] rsync question about --inplace

2016-03-19 Thread Joe Julian
https://joejulian.name/blog/dht-misses-are-expensive/ On 03/16/2016 01:14 PM, Mark Selby wrote: I used rsync to copy files (10TB) from a local disk to a replicated gluster volume. I DID NOT use --inplace option during the copy. Someone mentioned this may have a long term adverse read
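The linked post describes the cost: without --inplace, rsync writes each file under a temporary dotted name and then renames it, and on a distributed (DHT) volume the temporary name usually hashes to a different brick than the final name, leaving a linkto pointer that makes later lookups more expensive. A sketch of the --inplace variant, with hypothetical paths:

  # write directly to the final filename instead of a temporary name plus rename
  rsync -av --inplace /local/disk/data/ /mnt/glusterfs/data/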

[Gluster-users] rsync question about --inplace

2016-03-19 Thread Mark Selby
I used rsync to copy files (10TB) from a local disk to a replicated gluster volume. I DID NOT use the --inplace option during the copy. Someone mentioned this may have a long-term adverse read performance impact because there would be an extra hard link that would lead to extra FS ops during

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-18 Thread Joe Julian
On 11/17/2015 08:19 AM, Tiemen Ruiten wrote: I double-checked my config and found out that the filesystem of the brick on the arbiter node doesn't support ACLs: the underlying fs is ext4 without the acl mount option, while the other bricks are XFS (where it's always enabled). Do all the bricks need
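On the practical side, ext4 can gain ACL support without reformatting; a sketch, with a hypothetical device and brick path:

  # check whether acl is already among the filesystem's default mount options
  tune2fs -l /dev/vdb1 | grep 'Default mount options'
  # enable ACLs on the running mount; add 'acl' to the fstab entry to persist it
  mount -o remount,acl /bricks/arbiter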

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-17 Thread Tiemen Ruiten
ot;Ben Turner" <btur...@redhat.com> > > Cc: "gluster-users" <gluster-users@gluster.org> > > Sent: Monday, November 16, 2015 5:00:20 AM > > Subject: Re: [Gluster-users] rsync to gluster mount: self-heal and bad > performance > > > > Hello Ben, >

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-16 Thread Ben Turner
- Original Message - > From: "Tiemen Ruiten" <t.rui...@rdmedia.com> > To: "Ben Turner" <btur...@redhat.com> > Cc: "gluster-users" <gluster-users@gluster.org> > Sent: Monday, November 16, 2015 5:00:20 AM > Subject: Re: [Gluster-

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-16 Thread Tiemen Ruiten
<btur...@redhat.com> wrote: > - Original Message - > > From: "Tiemen Ruiten" <t.rui...@rdmedia.com> > > To: "gluster-users" <gluster-users@gluster.org> > > Sent: Sunday, November 15, 2015 5:22:08 AM > > Subject: Re: [Gluster-use

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-15 Thread Tiemen Ruiten
Any other suggestions? On 13 November 2015 at 09:56, Tiemen Ruiten wrote: > Hello Ernie, list, > > No, that's not the case. The volume is mounted through glusterfs-fuse - on > the same server running one of the bricks. The fstab: > > # /etc/fstab > # Created by anaconda on

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-13 Thread Tiemen Ruiten
Hello Ernie, list, No, that's not the case. The volume is mounted through glusterfs-fuse - on the same server running one of the bricks. The fstab: # /etc/fstab # Created by anaconda on Tue Aug 18 18:10:49 2015 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man
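The pasted fstab is cut off above; a glusterfs-fuse entry of the kind under discussion, with made-up server and volume names, looks roughly like:

  server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,acl  0 0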

Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-12 Thread Ernie Dunbar
Hi Tiemen It sounds like you're trying to rsync files onto your Gluster server, rather than to the Gluster filesystem. You want to copy these files into the mounted filesystem (typically on some other system than the Gluster servers), because Gluster is designed to handle it that way. I
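In other words, mount the volume with the native client and point rsync at that mount rather than at a brick directory; a sketch with hypothetical names:

  mount -t glusterfs server1:/myvol /mnt/myvol
  rsync -av /source/data/ /mnt/myvol/data/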

[Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-12 Thread Tiemen Ruiten
Hello, While rsyncing to a directory mounted through glusterfs fuse, performance is very bad and it appears every synced file generates a (metadata) self-heal. The volume is mounted with the acl option and ACLs are set on a subdirectory. Setup is as follows: Two CentOS 7 VMs (KVM), with Gluster

[Gluster-users] rsync + stale file handle

2014-05-19 Thread David F. Robinson
When I do an rsync to back up my workstations onto a gluster-mounted file system, I end up with thousands of healing problems. The heal status repeatedly shows the same number of healed/failed during a 'gluster volume heal homegfs info statistics' check. There are over 9,000 files healed and

Re: [Gluster-users] rsync + stale file handle

2014-05-19 Thread David F. Robinson
Forgot to mention gluster version and O/S... Both client and server use: Scientific Linux 6.4 (Kernel 2.6.32-431.11.2.el6.x86_64) [root@gfs01a ~]# rpm -qa | grep gluster glusterfs-libs-3.5.0-2.el6.x86_64 glusterfs-server-3.5.0-2.el6.x86_64 glusterfs-3.5.0-2.el6.x86_64

[Gluster-users] Rsync error when brick becomes online after a reboot / crash

2012-03-08 Thread Xavier Normand
I'm currently doing some HA testing on glusterfs and I noticed a problem when a cluster node becomes online after a reboot or a crash. Here is my setup: Volume Name: data Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: gluster1:/data Brick2: gluster2:/data

Re: [Gluster-users] rsync for WAN replication (active/active)

2011-03-24 Thread Mohit Anchlia
Thanks for pointing that out. I think rsync also has options to sync based on time, md5 hash and other attributes, if I am not wrong. If we can preserve times and only sync the latest file then I think we should be ok? What do you think? I can't think of any other option other than looking at some
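rsync does offer both behaviours; a sketch with hypothetical site paths (whether last-writer-wins is acceptable for two-way WAN replication is the real question of this thread):

  # -u (--update) skips files that are already newer on the receiver
  rsync -avu /mnt/gluster-siteA/ siteB:/mnt/gluster-siteB/
  # -c (--checksum) compares checksums instead of size and mtime (much slower: every file is read)
  rsync -avc /mnt/gluster-siteA/ siteB:/mnt/gluster-siteB/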

Re: [Gluster-users] rsync for WAN replication (active/active)

2011-03-24 Thread phil cryer
On Thu, Mar 24, 2011 at 12:31 PM, Mohit Anchlia mohitanch...@gmail.com wrote: Thanks for pointing that out. I think rsync also has option to sync based on time,md5hash and other attributes if I am not wrong. If we can preserve time and only sync the most latest file then I think we should be

Re: [Gluster-users] rsync for WAN replication (active/active)

2011-03-17 Thread Jonathan Barber
On 17 March 2011 00:39, Mohit Anchlia mohitanch...@gmail.com wrote: I've had several discussions with different set of people about using rsync and everyone thinks it's ok to use rsync (2 way) for WAN replication in active/active data centers as long as it's done using file system mounted on

Re: [Gluster-users] rsync for WAN replication (active/active)

2011-03-17 Thread Mohit Anchlia
Thanks! I was going to trigger it through cron, say every 10 minutes, if rsync is not currently running. Regarding point 3) I thought of it also! I think this problem cannot be solved even when using bricks. If someone is editing 2 files at the same time only one will win (always). Only way we can

[Gluster-users] rsync for WAN replication (active/active)

2011-03-16 Thread Mohit Anchlia
I've had several discussions with different sets of people about using rsync, and everyone thinks it's ok to use rsync (2-way) for WAN replication in active/active data centers as long as it's done using the file system mounted on the client. I am sending this out to this user list in case anyone sees

Re: [Gluster-users] rsync causing gluster to crash

2010-03-22 Thread Benjamin Hudgens
: Friday, March 19, 2010 2:54 PM Cc: Gluster Users Subject: Re: [Gluster-users] rsync causing gluster to crash Hello Vikas, Thank you for your help. Gluster does not seem to core dump when this occurrence happens thus it is not creating a dump file. Hopefully I can provide all the information you

Re: [Gluster-users] rsync causing gluster to crash

2010-03-19 Thread Joe Grace
@gluster.org Subject: Re: [Gluster-users] rsync causing gluster to crash Just to throw my experience on this, I'm currently using rsync over ssh (rsync -av /mnt/data u...@server:/mnt/glusterfs) to sync data from a file server to two glusterfs servers running in raid1 mode. I've moved about 2 TB

Re: [Gluster-users] rsync causing gluster to crash

2010-03-19 Thread Vikas Gorur
On Mar 19, 2010, at 9:47 AM, Joe Grace wrote: Thank you for the reply. Version 3.0.2 from source on Debian squeeze. Here is the client log: That is the client volume file. What I meant was the client log file you can usually find in /usr/local/var/log/glusterfs/glusterfs.log. If you

Re: [Gluster-users] rsync causing gluster to crash

2010-03-19 Thread Benjamin Hudgens
bhudg...@photodex.com (512) 674-9920 -Original Message- From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Vikas Gorur Sent: Friday, March 19, 2010 2:02 PM To: Joe Grace Cc: Gluster Users Subject: Re: [Gluster-users] rsync causing gluster

Re: [Gluster-users] rsync causing gluster to crash

2010-03-18 Thread Jan Pisacka
Isn't the /mnt/control directory also exported via NFS? Do you have any .nfsX... files there? The second command, which you successfully used, doesn't attempt to rsync such files. Regards, Jan Pisacka Inst. of Plasma Physics AS CR On 18.3.2010 01:13, Joe Grace wrote: Hello, I am
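Those .nfsXXXX files are NFS "silly rename" placeholders for files that were deleted while still open, and they can vanish mid-transfer; a sketch of excluding them (paths and host are hypothetical):

  rsync -av --exclude='.nfs*' /mnt/control/ backuphost:/backup/control/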

Re: [Gluster-users] Rsync

2009-10-12 Thread Hiren Joshi
Message- From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi Sent: 05 October 2009 11:01 To: Pavan Vilas Sondur Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Rsync Just a quick update: The rsync is *still* not finished

Re: [Gluster-users] Rsync

2009-10-07 Thread Hiren Joshi
: [Gluster-users] Rsync Remember, the gluster-team does not like my way of data-feeding. If your setup blows up, don't blame them (or me :-) I can only tell you what I am doing: simply move (or copy) the initial data to the primary server of the replication setup and then start

Re: [Gluster-users] Rsync

2009-10-06 Thread Stephan von Krawczynski
@gluster.org Subject: Re: [Gluster-users] Rsync It would be nice to remember my thread about _not_ copying data initially to gluster via the mountpoint. And one major reason for _local_ feed was: speed. Obviously a lot of cases are merely impossible because of the pure waiting

Re: [Gluster-users] Rsync

2009-10-05 Thread Hiren Joshi
: [Gluster-users] Rsync Thanks! I'm keeping a close eye on the "is glusterfs DHT really distributed?" thread =) I tried nodelay on and unhashd no. I tarred about 400G to the share in about 17 hours (~6MB/s?) and am running an rsync now. Will post the results when it's done. -Original

Re: [Gluster-users] Rsync

2009-10-05 Thread Stephan von Krawczynski
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi Sent: 01 October 2009 16:50 To: Pavan Vilas Sondur Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Rsync Thanks! I'm keeping a close eye on the is glusterfs DHT really distributed? thread =) I

Re: [Gluster-users] Rsync

2009-10-05 Thread Hiren Joshi
To: Hiren Joshi Cc: Pavan Vilas Sondur; gluster-users@gluster.org Subject: Re: [Gluster-users] Rsync It would be nice to remember my thread about _not_ copying data initially to gluster via the mountpoint. And one major reason for _local_ feed was: speed. Obviously a lot of cases

Re: [Gluster-users] Rsync

2009-10-01 Thread Hiren Joshi
? -Original Message- From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi Sent: 24 September 2009 13:05 To: Pavan Vilas Sondur Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Rsync

Re: [Gluster-users] Rsync

2009-09-28 Thread Hiren Joshi
September 2009 13:05 To: Pavan Vilas Sondur Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Rsync -Original Message- From: Pavan Vilas Sondur [mailto:pa...@gluster.com] Sent: 24 September 2009 12:42 To: Hiren Joshi Cc: gluster-users@gluster.org Subject: Re: Rsync

Re: [Gluster-users] Rsync

2009-09-24 Thread Hiren Joshi
@gluster.org Subject: [Gluster-users] Rsync Hello all, I'm getting what I think is bizarre behaviour I have about 400G to rsync (rsync -av) onto a gluster share, the data is in a directory structure which has about

Re: [Gluster-users] Rsync

2009-09-23 Thread Pavan Vilas Sondur
...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi Sent: 22 September 2009 11:40 To: gluster-users@gluster.org Subject: [Gluster-users] Rsync Hello all, I'm getting what I think is bizarre behaviour I have about 400G to rsync (rsync -av) onto

Re: [Gluster-users] Rsync

2009-09-23 Thread Hiren Joshi
-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi Sent: 22 September 2009 11:40 To: gluster-users@gluster.org Subject: [Gluster-users] Rsync Hello all, I'm getting what I think is bizarre behaviour I have about 400G

Re: [Gluster-users] Rsync

2009-09-23 Thread Pavan Vilas Sondur
Subject: [Gluster-users] Rsync Hello all, I'm getting what I think is bizarre behaviour I have about 400G to rsync (rsync -av) onto a gluster share, the data is in a directory structure which has about 1000 directories per parent and about 1000 directories

Re: [Gluster-users] Rsync

2009-09-23 Thread Hiren Joshi
] On Behalf Of Hiren Joshi Sent: 22 September 2009 11:40 To: gluster-users@gluster.org Subject: [Gluster-users] Rsync Hello all, I'm getting what I think is bizarre behaviour I have about 400G to rsync (rsync -av) onto a gluster share, the data

Re: [Gluster-users] Rsync

2009-09-23 Thread Hiren Joshi
:02 To: Pavan Vilas Sondur Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Rsync Below. I've found that I get a performance hit if I add read cache or write-behind. Server conf: ##Open vols volume posix1 type storage/posix option directory /gluster/export1 end
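The quoted server volfile is cut off above; for readers unfamiliar with the legacy volfile syntax, a minimal storage/posix block of the kind being quoted looks like this (only the directory path comes from the snippet above, the rest is illustrative):

  volume posix1
    type storage/posix
    option directory /gluster/export1
  end-volume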

[Gluster-users] Rsync

2009-09-22 Thread Hiren Joshi
Hello all, I'm getting what I think is bizarre behaviour. I have about 400G to rsync (rsync -av) onto a gluster share; the data is in a directory structure which has about 1000 directories per parent and about 1000 directories in each of them. When I try to rsync an end leaf directory (this

Re: [Gluster-users] Rsync

2009-09-22 Thread Hiren Joshi
Subject: [Gluster-users] Rsync Hello all, I'm getting what I think is bizarre behaviour I have about 400G to rsync (rsync -av) onto a gluster share, the data is in a directory structure which has about 1000 directories per parent and about 1000 directories in each of them. When I try

[Gluster-users] rsync and glusterfs

2009-04-29 Thread Sean Davis
I'm using 2.0.0rc8 on openSUSE. I have a system of four servers combined via distribute (I can send volfiles if anyone wants to see them). I have been using rsync to transfer some files over to a glusterfs volume from an NFS server. I use a command like: rsync -av dir host1:/mnt/glusterfs/ This is