The performance needs to be compared between the two in a real environment.
For example, I have a system where xfsdump takes something like 4 hours
for a complete dump to /dev/null, but a "find . -type f > /dev/null"
takes well over a day.
So it seems that xfsdump is very efficient at disk reads.
- Original Message -
From: "Poornima Gurusiddaiah"
To: "Tom Fite"
Cc: "Gluster-users"
Sent: Tuesday, April 9, 2019 9:53:02 AM
Subject: Re: [Gluster-users] Rsync in place of heal after brick failure
On Mon, Apr 8, 2019, 6:31 PM Tom Fite wrote:
Thanks for the idea, Poornima. Testing shows that xfsdump and xfsrestore are
much faster than rsync since they handle small files much better. I don't
have extra space to store the dumps, but I was able to figure out how to
pipe the xfsdump and restore via ssh. For anyone else that's interested:
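The exact command was cut off in the archive; a sketch of such a pipeline,
with a placeholder hostname and brick path (not the original poster's),
would be:
xfsdump -J - /data/brick1 | ssh root@newserver "xfsrestore -J - /data/brick1"
(-J skips the dump/restore inventory, which a one-off copy doesn't need.)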
You could also try xfsdump and xfsrestore if your brick filesystem is XFS
and the destination disk can be attached locally? This will be much faster.
Regards,
Poornima
Nice! I didn't use -H -X, and the system had to do some cleanup.
I'll add this to my next migration process as I move 120 TB to new hard
drives.
Hi all,
I have a very large (65 TB) brick in a replica 2 volume that needs to be
re-copied from scratch. A heal will take a very long time with performance
degradation on the volume so I investigated using rsync to do the brunt of
the work.
The command:
rsync -av -H -X --numeric-ids --progress
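The source and destination were truncated in the archive; a hypothetical
full invocation, with placeholder brick path and hostname, might be:
rsync -av -H -X --numeric-ids --progress /data/brick1/ root@newhost:/data/brick1/
(-H preserves hard links, which the .glusterfs directory relies on, and -X
preserves the extended attributes Gluster stores on every file.)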
Hi :)
I have a /glusterdata dir that is mounted to /var/www/mydir.
rsync seems slow reading from /var/www/mydir, so I think I will use
/glusterdata as the rsync source dir.
My questions:
1. Is that ok? Is it safe? :)
2. I noticed I can exclude the .glusterfs and .trashcan dir. Is that
correct? Is there
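A hedged sketch of such a copy, reading from the /glusterdata path mentioned
above and excluding Gluster's internal directories; the destination host and
path are placeholders:
rsync -av --exclude='.glusterfs' --exclude='.trashcan' /glusterdata/ backuphost:/backup/mydir/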
Hello,
we have some replica-2 volumes and they work fine at this time.
For some of the volumes I need to set up daily incremental backups (on
another filesystem, which doesn't need to be on glusterfs).
As 'rsync' or similar is not very efficient on glusterfs volumes, I tried
to use direct
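One common pattern for daily incrementals onto a plain filesystem is rsync
with --link-dest, which hard-links files unchanged since the previous day's
copy; the mount point and directory names below are placeholders, not from
the original post:
rsync -a --delete --link-dest=/backups/day1 /mnt/glustervol/ /backups/day2/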
https://joejulian.name/blog/dht-misses-are-expensive/
On 03/16/2016 01:14 PM, Mark Selby wrote:
I used rsync to copy files (10TB) from a local disk to a replicated
gluster volume. I DID NOT use the --inplace option during the copy.
Someone mentioned this may have a long-term adverse read performance
impact because there would be an extra hard link that would lead to
extra FS ops during
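For reference, an --inplace copy writes changes directly into the existing
destination file instead of building a temporary file and renaming it over
the original; the paths here are placeholders:
rsync -av --inplace /localdisk/data/ /mnt/glustervol/data/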
On 11/17/2015 08:19 AM, Tiemen Ruiten wrote:
I double-checked my config and found out that the filesystem of the
brick on the arbiter node doesn't support ACLs: the underlying fs is ext4
without the acl mount option, while the other bricks are XFS (where it's
always enabled). Do all the bricks need
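If the arbiter brick does need ACL support, a hedged fix is to remount the
ext4 filesystem with the acl option and make it permanent in /etc/fstab;
the device and mount point below are placeholders:
mount -o remount,acl /dev/vdb1 /bricks/arbiter
/dev/vdb1  /bricks/arbiter  ext4  defaults,acl  0 2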
- Original Message -
From: "Tiemen Ruiten" <t.rui...@rdmedia.com>
To: "Ben Turner" <btur...@redhat.com>
Cc: "gluster-users" <gluster-users@gluster.org>
Sent: Monday, November 16, 2015 5:00:20 AM
Subject: Re: [Gluster-users] rsync to gluster mount: self-heal and bad
performance
Hello Ben,
- Original Message -
From: "Tiemen Ruiten" <t.rui...@rdmedia.com>
To: "gluster-users" <gluster-users@gluster.org>
Sent: Sunday, November 15, 2015 5:22:08 AM
Subject: Re: [Gluster-users] rsync to gluster mount: self-heal and bad
performance
Any other suggestions?
Hello Ernie, list,
No, that's not the case. The volume is mounted through glusterfs-fuse - on
the same server running one of the bricks. The fstab:
# /etc/fstab
# Created by anaconda on Tue Aug 18 18:10:49 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man
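The Gluster line itself is cut off above; a typical glusterfs-fuse fstab
entry, with placeholder server and volume names and the acl option mentioned
in this thread, would look something like:
server1:/myvolume  /mnt/myvolume  glusterfs  defaults,_netdev,acl  0 0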
Hi Tiemen
It sounds like you're trying to rsync files onto your Gluster server,
rather than to the Gluster filesystem. You want to copy these files into
the mounted filesystem (typically on some other system than the Gluster
servers), because Gluster is designed to handle it that way.
I
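In other words, mount the volume with the native client and rsync into that
mount point; a minimal sketch with placeholder names:
mount -t glusterfs server1:/myvolume /mnt/myvolume
rsync -av /source/data/ /mnt/myvolume/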
Hello,
While rsyncing to a directory mounted through glusterfs fuse, performance
is very bad and it appears every synced file generates a (metadata)
self-heal.
The volume is mounted with the acl option and ACLs are set on a subdirectory.
The setup is as follows:
Two CentOS 7 VMs (KVM), with Gluster
When I do an rsync to backup my workstations onto a gluster mounted file
system, I end up with thousands of healing problems. The heal status
repeatedly shows the same number of healed/failed during a gluster
volume heal homegfs info statistics check. There are over 9,000 files
healed and
Forgot to mention gluster version and O/S... Both client and server use:
Scientific Linux 6.4 (Kernel 2.6.32-431.11.2.el6.x86_64)
[root@gfs01a ~]# rpm -qa | grep gluster
glusterfs-libs-3.5.0-2.el6.x86_64
glusterfs-server-3.5.0-2.el6.x86_64
glusterfs-3.5.0-2.el6.x86_64
I'm currently doing some HA testing on glusterfs and I notice a problem
when a cluster node comes back online after a reboot or a crash. Here is my
setup:
Volume Name: data
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/data
Brick2: gluster2:/data
Thanks for pointing that out. I think rsync also has options to sync
based on time, MD5 hash and other attributes, if I am not wrong. If we
can preserve times and only sync the latest file then I think we
should be ok? What do you think? I can't think of any other option
other than looking at some
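rsync compares mtime and size by default; -u/--update skips files that are
newer on the receiving side, and -c/--checksum forces a full checksum
comparison. A hedged sketch of a two-way pass between two client-side
mounts, with assumed mount points:
rsync -au /mnt/gluster-siteA/ /mnt/gluster-siteB/
rsync -au /mnt/gluster-siteB/ /mnt/gluster-siteA/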
Thanks! I was going to trigger it through cron, say every 10 minutes, if
rsync is not currently running.
Regarding point 3) I thought of it also! I think this problem cannot
be solved even when using bricks. If someone is editing 2 files at the
same time only one will win (always). Only way we can
I've had several discussions with different sets of people about using
rsync and everyone thinks it's ok to use rsync (2-way) for WAN
replication in active/active data centers, as long as it's done using
a file system mounted on the client. I am sending this out to this user
list in case anyone sees
Subject: Re: [Gluster-users] rsync causing gluster to crash
Hello Vikas,
Thank you for your help. Gluster does not seem to core dump when this
occurs, thus it is not creating a dump file. Hopefully I can
provide all the information you
Subject: Re: [Gluster-users] rsync causing gluster to crash
Just to throw my experience on this, I'm currently using rsync over
ssh (rsync -av /mnt/data u...@server:/mnt/glusterfs) to sync data from
a file server to two glusterfs servers running in raid1 mode. I've
moved about 2 TB
On Mar 19, 2010, at 9:47 AM, Joe Grace wrote:
Thank you for the reply.
Version 3.0.2 from source on Debian squeeze.
Here is the client log:
That is the client volume file. What I meant was the client log file you can
usually find in /usr/local/var/log/glusterfs/glusterfs.log.
If you
Isn't the /mnt/control directory also exported via NFS? Do you have any
.nfsX... files there? The second command, which you successfully
used, doesn't attempt to rsync such files.
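If those files turn out to be the problem, a hedged workaround is to exclude
NFS silly-rename files explicitly; the destination is a placeholder:
rsync -av --exclude='.nfs*' /mnt/control/ backuphost:/backup/control/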
Regards,
Jan Pisacka
Inst. of Plasma Physics AS CR
On 18.3.2010 01:13, Joe Grace wrote:
Hello,
I am
From: Hiren Joshi
Sent: 05 October 2009 11:01
To: Pavan Vilas Sondur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Rsync
Just a quick update: The rsync is *still* not finished
Subject: Re: [Gluster-users] Rsync
Remember, the gluster-team does not like my way of data-feeding. If your
setup blows up, don't blame them (or me :-)
I can only tell you what I am doing: simply move (or copy) the initial data
to the primary server of the replication setup and then start
Subject: Re: [Gluster-users] Rsync
It would be nice to remember my thread about _not_ copying data initially
to gluster via the mountpoint. And one major reason for _local_ feed was:
speed. Obviously a lot of cases are merely impossible because of the pure
waiting
Subject: Re: [Gluster-users] Rsync
Thanks!
I'm keeping a close eye on the "is glusterfs DHT really distributed?"
thread =)
I tried nodelay on and unhashd no. I tarred about 400G to the share in
about 17 hours (~6MB/s?) and am running an rsync now. Will post the
results when it's done.
Subject: Re: [Gluster-users] Rsync
Below.
I've found that I get a performance hit if I add read cache or
write-behind.
Server conf:
##Open vols
volume posix1
type storage/posix
option directory /gluster/export1
end
Hello all,
I'm getting what I think is bizarre behaviour. I have about 400G to
rsync (rsync -av) onto a gluster share; the data is in a directory
structure which has about 1000 directories per parent and about 1000
directories in each of them.
When I try to rsync an end leaf directory (this
I'm using 2.0.0rc8 on openSuse. I have a system of four servers combined
via distribute (I can send volfiles if anyone wants to see). I have been
using rsync to transfer some files over to a glusterfs volume from an nfs
server. I use a command like:
rsync -av dir host1:/mnt/glusterfs/
This is