Hi Daniel,
Did you happen to investigate why rsync -S is taking so much time? If it
doesn't deal with sparse files the way one expects, this option is probably
broken. Also, have you already tried something like the advice in
http://lists.samba.org/archive/rsync/2003-August/007000.html ?
David Zeillinger wrote:
Did you happen to investigate why rsync -S is taking so much time? If it
doesn't deal with sparse files the way one expects, this option is
probably broken. Also, have you already tried something like the advice
in
On 11/11/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
2.3 ==
Now using scp, as many times it can also be used for a quick sync of
changed files. Here, however, we are in for a big surprise as well, for
sure. Here we can't even do it, as the sparse file like in rsync example
#1 will stop
knitti wrote:
if I'm not completely wrong, you could always tar -czf the sparse file, scp the
archive and then tar -xzf the file in place on the other side. this should also
create a new sparse file. of course, you lose the rsyncability and you have to
identify your sparse file in advance. But
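knitti's three-step idea could be sketched roughly like this (a sketch, not a tested recipe: the -S/--sparse flag is GNU tar's and may not exist in base tar on OpenBSD, the file name is made up, and the scp hop is replaced by a local restore just to show the round trip):

```shell
set -e
cd "$(mktemp -d)"
# stand-in for the real sparse file: one byte written at a ~17MB offset,
# so everything before it is a hole
dd if=/dev/zero of=big.db bs=1 count=1 seek=17825791 2>/dev/null
tar -Sczf big.db.tgz big.db        # -S keeps the holes out of the archive
# (here you would scp big.db.tgz to the other box and untar it there)
mkdir restore
tar -Sxzf big.db.tgz -C restore    # extraction re-creates the holes
cmp big.db restore/big.db && echo contents match
```

The archive stays tiny because the holes are never stored, which is exactly why the rsyncability is lost: the remote side gets a fresh file, not a delta.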
Quoting Daniel Ouellet [EMAIL PROTECTED]:
Only two things here.
1. you have to identify your sparse file in advance.
That is the question. Look at the title.
Hi, Daniel.
Did you look at the Perl script I sent?
[code]
use strict;
use warnings;
use File::Find;

# print files whose allocated blocks (512-byte units) cover less than st_size
sub process_file {
    my @s = stat($_) or return;
    print "$File::Find::name\n" if $s[12] * 512 < $s[7];
}
find( \&process_file, $ARGV[0] // '.' );
[/code]
[EMAIL PROTECTED] wrote:
Quoting Daniel Ouellet [EMAIL PROTECTED]:
Only two things here.
1. you have to identify your sparse file in advance.
That is the question. Look at the title.
Hi, Daniel.
Did you look at the Perl script I sent?
I am playing with it and looking into whether that can help
[EMAIL PROTECTED] wrote:
Did you look at the Perl script I sent?
I should also add, in my previous emails in regards to the good and bad parts
of it, that it is actually a much better idea than what I was doing, by
the way! I think my emails didn't come out right in regard to the idea
express
On Sun, Nov 11, 2007 at 09:18:34PM +0100, knitti wrote:
if I'm not completely wrong, you could always tar -czf the sparse file, scp the
archive and then tar -xzf the file in place on the other side. this should
also create a new sparse file. of course, you lose the rsyncability and you
Douglas A. Tutty wrote:
I tried making a very sparse file (100 MB data, 1000 GB sparseness) and
gave up trying to compress it. gzip has to process the whole thing,
sparseness and all. Sure it would probably end up with a very small
file, but the whole thing has to be processed.
Yes it does
On Sun, 11 Nov 2007 22:31:13 -0500, Daniel Ouellet wrote:
Douglas A. Tutty wrote:
I tried making a very sparse file (100 MB data, 1000 GB sparseness) and
gave up trying to compress it. gzip has to process the whole thing,
sparseness and all. Sure it would probably end up with a very small
RW wrote:
What has not been addressed here is the question of what created those
files. It isn't something you do with a shell script usually.
Many things can do this, or could use this.
So if you have, just as an example, a database program that makes
such a file, it is often possible to
On Fri, Nov 09, 2007 at 08:40:15PM +0200, Enache Adrian wrote:
On Fri, Nov 09, 2007 at 11:03:31AM +0100, Otto Moerbeek wrote:
So your problem seems to be that rsync -S is inefficient to the point
where it is not useable. I do not use rsync a lot, so I do not know
if there's a solution to
On 10/11/2007, at 10:05 AM, Daniel Ouellet wrote:
Otto Moerbeek wrote:
stat -s gives the raw info in one go. Some shell script hacking should
make it easy to detect sparse files.
Thanks Otto for the suggestion. That might help until it can be
addressed for good. It would help speed up some
On Fri, Nov 09, 2007 at 03:47:10PM -0500, Daniel Ouellet wrote:
Ted Unangst wrote:
On 11/9/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
Just for example, a source file that is badly sparse doesn't really have
allocated disk blocks yet, but when copied over via scp or rsync it will
actually use
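The effect described above is easy to reproduce locally; a rough sketch, with plain `cat` standing in for scp or plain rsync, since all three rewrite the holes as real zero blocks on the destination:

```shell
cd "$(mktemp -d)"
# sparse original: one real byte at a 1MB offset, the rest is a hole
dd if=/dev/zero of=orig bs=1 count=1 seek=1048575 2>/dev/null
cat orig > copy          # naive byte copy, as scp would do over the wire
ls -l orig copy          # apparent sizes are identical...
du -k orig copy          # ...but the copy actually allocates ~1MB on disk
```

Comparing `ls -l` against `du` on both files shows the allocation difference directly.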
On 10/11/2007, at 9:11 PM, Richard Toohey wrote:
(my $dev, my $ino, my $mode, my $nlink, my $uid, my $gid, my $rdev,
my $size, my $atime, my $mtime, my $ctime, my $blksize, my $blocks)
=sat($f);
Oops - should end with:
=stat($f);
not
=sat($f);
On Sat, Nov 10, 2007 at 09:11:27PM +1300, Richard Toohey wrote:
On 10/11/2007, at 10:05 AM, Daniel Ouellet wrote:
Otto Moerbeek wrote:
stat -s gives the raw info in one go. Some shell script hacking should
make it easy to detect sparse files.
Thanks Otto for the suggestion. That might
On 10/11/2007, at 9:32 PM, Otto Moerbeek wrote:
yeah, look at stat(2):
int64_t   st_blocks;  /* blocks allocated for file */
u_int32_t st_blksize; /* optimal file sys I/O ops blocksize */
actually st_blocks's unit is disk sectors, to be precise.
I don't read perl, so I cannot comment on
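Putting Otto's two hints together (st_blocks counted in 512-byte sectors, compared against st_size), a single-file check can be sketched in shell. This sketch uses GNU stat's -c format; the thread's OpenBSD `stat -s` instead emits the same fields as shell variable assignments, as noted in the comment:

```shell
# a file is sparse when its allocated sectors cover less than its length
is_sparse() {
  set -- $(stat -c '%b %s' "$1")   # %b = 512-byte blocks, %s = st_size
  [ $(($1 * 512)) -lt "$2" ]       # OpenBSD: eval "$(stat -s "$f")" instead
}
cd "$(mktemp -d)"
dd if=/dev/zero of=holey bs=1 count=1 seek=1048575 2>/dev/null
is_sparse holey && echo "holey is sparse"
```

The 512-byte unit matters: using st_blksize here instead of 512 would misclassify most files.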
Forgot to send to the list.
-Otto
- Forwarded message from Otto Moerbeek [EMAIL PROTECTED] -
Date: Sat, 10 Nov 2007 10:36:20 +0100
From: Otto Moerbeek [EMAIL PROTECTED]
To: Richard Toohey [EMAIL PROTECTED]
Subject: Re: identifying sparse files and get ride of them trick
Would people say that this edit is a decent description of these issues?
http://en.wikipedia.org/w/index.php?title=Sparse_file&diff=170645177&oldid=168346326
On 10/11/2007, Otto Moerbeek [EMAIL PROTECTED] wrote:
Your example just shows copying big files takes long. The point being,
if the file was not sparse, it would take at least the same time.
Blaming sparseness for the long cp time is not fair.
-Otto
But of course it would be
Hi,
Before we go nuts on this issue, or look for the wrong things, or create
misunderstanding.
Just allow me a little bit more time to try to come up with a viable
example showing the problem, or the issue here.
Obviously, as Otto pointed out to me, it looks like I can't explain it too well.
I
Hi,
I will try to make this very simple and show the issue by example only
when possible. I use two old servers on the Internet for the tests. The
source uses a real example sparse file, but one that has only ~1GB of usable
data in it. The size shown by 'ls -al' as an example gives ~17GB. That's
ropers wrote:
Would people say that this edit is a decent description of these issues?
http://en.wikipedia.org/w/index.php?title=Sparse_file&diff=170645177&oldid=168346326
I can't really comment well for proper writing for sure. (;
But one thing that is not right as Otto pointed out to me and
On Fri, Nov 09, 2007 at 02:00:14AM -0500, Daniel Ouellet wrote:
Hi,
I am trying to find a way to identify sparse files properly and
quickly, and find a way to rectify the situation.
Any trick to do this?
The problem is that over time it looks like I am ending up with lots of
them and
Any clue as to how to tackle this problem, or any trick around it?
I really do not understand the problem here. But you might be able to
detect sparse files by comparing the size vs the number of blocks they use.
Without making a big write-up out of it. Let's say that the problem is for
now a
On Fri, Nov 09, 2007 at 04:27:49AM -0500, Daniel Ouellet wrote:
Any clue as to how to tackle this problem, or any trick around it?
I really do not understand the problem here. But you might be able to
detect sparse files by comparing the size vs the number of blocks they use.
Without
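For the "lots of them" case, the same size-vs-blocks comparison can scan a whole tree. A sketch using GNU find's -printf, with an example path (OpenBSD's find lacks -printf, so there a small loop over `stat -s` output would be needed instead):

```shell
# list files whose allocated 512-byte blocks cover less than their length
# /var/www is just an example starting point
find /var/www -type f -printf '%b %s %p\n' 2>/dev/null |
awk '$1 * 512 < $2 { print $3 }'
```

Running this once per server gives the inventory of sparse files before deciding how to sync them.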
On Fri, Nov 09, 2007 at 11:03:31AM +0100, Otto Moerbeek wrote:
So your problem seems to be that rsync -S is inefficient to the point
where it is not useable. I do not use rsync a lot, so I do not know
if there's a solution to that problem. It does seem strange that a
feature to solve a
Ted Unangst wrote:
On 11/9/07, Daniel Ouellet [EMAIL PROTECTED] wrote:
Just for example, a source file that is badly sparse doesn't really have
allocated disk blocks yet, but when copied over via scp or rsync it will
actually use that space on the destination servers. All the servers are
identical
Otto Moerbeek wrote:
So your problem seems to be that rsync -S is inefficient to the point
where it is not useable. I do not use rsync a lot, so I do not know
if there's a solution to that problem. It does seem strange that a
feature to solve a problem actually makes the problem worse.
Well,
Hi,
I am trying to find a way to identify sparse files properly and
quickly, and find a way to rectify the situation.
Any trick to do this?
The problem is that over time it looks like I am ending up with lots of
them, and because I have to sync multiple servers together the sparse files