Could rsync support specifying a source IP address when the current host has multiple NICs?

2014-03-06 Thread Emre He
Hi,

Could rsync support specifying the source IP address to use when the local
host has multiple NICs? In some complex network configurations, we need this
kind of restriction on rsync's traffic.

thanks,
Emre
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

backup-dir over nfs

2014-03-06 Thread Philippe LeCavalier
Hi All.

Since it's a lengthy one to explain, I'll jump right into my issue:

I run rsync on 3 Debian systems to back up approximately 500GB of various
types of data. The general strategy is that two systems hold full backups and
the third holds incrementals, using --backup with a date-based --backup-dir.
The directory in question is an NFS mount.

Here's the problem: randomly, but shortly after the sync starts, it hangs.
At this point the NFS host (hosting the --backup-dir) responds just fine.
However, the rsync host can only be accessed via console (no ssh, no
ping... nothing). Now here's the kicker: once I access the rsync host, it
resumes the transfer. If I stay logged in and wait for it to hang again, all
that is required for the sync to resume is a touch of the keyboard, as if it
was somehow waiting to be woken up.

One important thing I should mention: if I unmount the NFS share and run
the command with the --backup-dir on the local filesystem of the rsync host,
it runs as expected. Furthermore, I can confirm that I can rsync large
amounts of data in both directions via the same NFS share with a plain
rsync -a /local/dir nfs/share/dir. To me this rules out any physical and/or
config issues in the LAN and on the hosts.

All three systems are in the same subnet of my local LAN. The original data
is being pulled from various remote locations over ssh. However, I should
also mention that I've tried the same command using a USB drive and
experienced exactly the same hang, so in my mind that rules out the origin
of the data as a potential source of the issue.

Thanks, Phil

Re: Could rsync support specifying a source IP address when the current host has multiple NICs?

2014-03-06 Thread devzero
If you use ssh as the transport, you can try

rsync -e 'ssh -oBindAddress=<local interface IP address>'

man 5 ssh_config says:

 BindAddress
         Use the specified address on the local machine as the source
         address of the connection.  Only useful on systems with more than
         one address.  Note that this option does not work if
         UsePrivilegedPort is set to "yes".
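
For example, something like this (address and paths invented for
illustration):

rsync -av -e 'ssh -o BindAddress=192.168.1.10' /srv/data/ user@backuphost:/srv/data/

If you are talking to an rsync daemon rather than going over ssh, rsync's
own --address option should serve the same purpose, if I read the manpage
right.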

regards
roland



Re: strange behavior of --inplace on ZFS

2014-03-06 Thread devzero
Hi Pavel, 

Maybe that's related to zfs compression?

On a compressed zfs filesystem, zeroes are not written to disk.

# dd if=/dev/zero of=test.dat bs=1024k count=100

/zfspool # ls -la
total 8
drwxr-xr-x  3 root root         4 Feb 26 10:18 .
drwxr-xr-x 27 root root      4096 Mar 29  2013 ..
drwxr-xr-x 25 root root        25 Mar 29  2013 backup
-rw-r--r--  1 root root 104857600 Feb 26 10:18 test.dat

/zfspool # du -k test.dat
1       test.dat

/zfspool # du -k --apparent-size test.dat
102400  test.dat

That said, space calculation on a compressed fs is a difficult thing...
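
For what it's worth, you can check whether compression is actually enabled
directly (dataset name taken from the prompt above):

zfs get compression,compressratio zfspool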

If that gives no pointer, I think that question is better placed on a zfs
mailing list.

regards
roland



List:       rsync
Subject:    strange behavior of --inplace on ZFS
From:       Pavel Herrmann morpheus.ibis () gmail ! com
Date:       2014-02-25 3:26:03
Message-ID: 5129524.61kVAFkjCM () bloomfield

Hi

I am extending my ZFS+rsync backup to be able to handle large files (think
virtual machine disk images) in an efficient manner. However, during testing I
have found some very strange behavior of the --inplace flag (which seems to be
what I am looking for).

What I did: create a 100MB file, rsync, snapshot, change 1k at a random
location, rsync, snapshot, change 1k at another random location, repeat a
couple of times, then `zfs list` to see how large my volume actually is.

The strange thing here is that the resulting size was wildly different
depending on how I created the initial file. All modifications were done by
the same command, namely
dd if=/dev/urandom of=testfile count=1 bs=1024 seek=some_num conv=notrunc

Situation A:
the file was created by running
dd if=/dev/zero of=testfile bs=1024 count=102400
The resulting size of the volume is approximately 100MB times the number of
snapshots.

Situation B:
the file was created by running
dd if=/dev/urandom of=testfile count=102400 bs=1024
The resulting size of the volume is just a bit over 100MB.

The rsync command used was
rsync -aHAv --delete --inplace root@remote:/test/ .
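
To make the sequence concrete, one round of the test looks roughly like this
(a sketch of the steps above; the snapshot name and the 1k offset are
placeholders):

# on the remote (source) machine:
dd if=/dev/zero of=/test/testfile bs=1024 count=102400   # situation A
# on the backup (ZFS) machine:
rsync -aHAv --delete --inplace root@remote:/test/ .
zfs snapshot zraid/test@snap1
# on the remote again: overwrite 1k at a random offset
dd if=/dev/urandom of=/test/testfile count=1 bs=1024 seek=4242 conv=notrunc
# ...then rsync + snapshot again, repeat, and finally check usage:
zfs list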

rsync on the backup machine (the destination) is 3.1.0; the remote has 3.0.9.

There is no compression or dedup enabled on the zfs volume.

Has anyone seen this behavior before? Is it a bug? Can I avoid it? Can I
make rsync give me disk I/O statistics to confirm?

regards
Pavel Herrmann

Re: strange behavior of --inplace on ZFS

2014-03-06 Thread Hendrik Visage
Question: The source and destination folder host OS and are both sides ZFS?

I'd like to see some stats that rsync said it transfered, also add the
-S flag as an extra set of tests.

The other Question that would be interested (both with and without -S)
is when you use the dd if=/dev/urandom created file, but change some
places with dd =/dev/zero (ie the reverse of the A test case, creatin
with dd if=/dev/zero and changes with dd if=/dev/urandom)
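
In other words, something like this for the reverse case (the 1k offset is
arbitrary):

dd if=/dev/urandom of=testfile bs=1024 count=102400                 # create from random data
dd if=/dev/zero of=testfile count=1 bs=1024 seek=854 conv=notrunc   # overwrite 1k with zeroes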

When you are on Solaris, also check the impact of a test case using mkfile
instead of dd if=/dev/zero.


Re: strange behavior of --inplace on ZFS

2014-03-06 Thread Pavel Herrmann
Hi

apologies for reordering the message parts, and for the long bit at the end

On Thursday 06 March 2014 23:45:22 Hendrik Visage wrote:
 Question: what OS hosts the source and destination folders, and are both
 sides ZFS?

No, the remote (source) was ext4. However, I plan to use it against at
least one NTFS machine as well.

 I'd like to see the stats rsync says it transferred; also add the -S flag
 as an extra set of tests.

--sparse had the opposite result: minimal size for the zeroed file, no space
saved for the random file.
The combination of --sparse and --inplace is not supported.

 When you are on Solaris, also check the impact of a test case using mkfile
 instead of dd if=/dev/zero.

Sadly, I have no Solaris in my environment; all tests were done on Linux.

 
 On Thu, Mar 6, 2014 at 11:17 PM,  devz...@web.de wrote:
  Hi Pavel,
  
  Maybe that's related to zfs compression?

  On a compressed zfs filesystem, zeroes are not written to disk.

Compression was not turned on on the volume (unless this is enabled even
when compression is set to off).

  
 That said, space calculation on a compressed fs is a difficult thing...
  

Space was as reported by 'zfs list', on a volume created specifically for
this test. I would assume that is the most reliable way to get space usage.
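
For an itemized view (how much is held by snapshots vs. the live dataset),
something like this should also work:

zfs list -o space zraid/test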


 The other question that would be interesting (both with and without -S)
 is what happens when you use the dd if=/dev/urandom created file but
 change some places with dd if=/dev/zero (i.e. the reverse of test case A,
 creating with dd if=/dev/zero and changing with dd if=/dev/urandom)

I just reran all the tests with --sparse and with --inplace; results follow
(cleanup is done after each 'zfs list', not shown).


thanks
Pavel Herrmann





zero-inited file

remote runs:
# dd if=/dev/zero of=testfile bs=1024 count=102400
# dd if=/dev/urandom of=testfile count=1 bs=1024 seek=854 conv=notrunc
# dd if=/dev/urandom of=testfile count=1 bs=1024 seek=45368 conv=notrunc
# dd if=/dev/urandom of=testfile count=50 bs=1024 seek=9647 conv=notrunc


# rsync -aHAv --stats --delete --sparse root@remote:/test/ .
receiving incremental file list
./
testfile

Number of files: 2 (reg: 1, dir: 1)
Number of created files: 1 (reg: 1)
Number of regular files transferred: 1
Total file size: 104,857,600 bytes
Total transferred file size: 104,857,600 bytes
Literal data: 104,857,600 bytes
Matched data: 0 bytes
File list size: 50
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 33
Total bytes received: 104,870,500

sent 33 bytes  received 104,870,500 bytes  41,948,213.20 bytes/sec
total size is 104,857,600  speedup is 1.00
# zfs snapshot zraid/test@a
# rsync -aHAv --stats --delete --sparse root@remote:/test/ .
receiving incremental file list
testfile

Number of files: 2 (reg: 1, dir: 1)
Number of created files: 0
Number of regular files transferred: 1
Total file size: 104,857,600 bytes
Total transferred file size: 104,857,600 bytes
Literal data: 10,240 bytes
Matched data: 104,847,360 bytes
File list size: 50
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 71,710
Total bytes received: 51,301

sent 71,710 bytes  received 51,301 bytes  82,007.33 bytes/sec
total size is 104,857,600  speedup is 852.42
# zfs snapshot zraid/test@b
# rsync -aHAv --stats --delete --sparse root@remote:/test/ .
receiving incremental file list
testfile

Number of files: 2 (reg: 1, dir: 1)
Number of created files: 0
Number of regular files transferred: 1
Total file size: 104,857,600 bytes
Total transferred file size: 104,857,600 bytes
Literal data: 10,240 bytes
Matched data: 104,847,360 bytes
File list size: 50
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 71,710
Total bytes received: 51,301

sent 71,710 bytes  received 51,301 bytes  82,007.33 bytes/sec
total size is 104,857,600  speedup is 852.42
# zfs snapshot zraid/test@c
# rsync -aHAv --stats --delete --sparse root@remote:/test/ .
receiving incremental file list
testfile

Number of files: 2 (reg: 1, dir: 1)
Number of created files: 0
Number of regular files transferred: 1
Total file size: 104,857,600 bytes
Total transferred file size: 104,857,600 bytes
Literal data: 61,440 bytes
Matched data: 104,796,160 bytes
File list size: 50
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 71,710
Total bytes received: 102,489

sent 71,710 bytes  received 102,489 bytes  116,132.67 bytes/sec
total size is 104,857,600  speedup is 601.94

# zfs list
NAME  

Re: strange behavior of --inplace on ZFS

2014-03-06 Thread Kevin Korb

The Linux equivalent to mkfile is truncate --size.
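
A quick illustration of the difference (file name made up):

truncate --size=100M testfile    # a 100MB file with no blocks allocated (fully sparse)
du -k --apparent-size testfile   # reports the logical size: 102400
du -k testfile                   # reports the (near-zero) space actually stored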

Re: strange behavior of --inplace on ZFS

2014-03-06 Thread Pavel Herrmann
Hi

On Thursday 06 March 2014 18:52:44 Kevin Korb wrote:
 The Linux equivalent to mkfile is truncate --size.

Thanks for the tip.

However, this produces exactly the same results as dd if=/dev/zero...

regards
Pavel Herrmann

 
Re: strange behavior of --inplace on ZFS

2014-03-06 Thread Kevin Korb

Correct. It is the shorter way to make a 100% sparse file, like dd with seek=.

Problem using --debug=FUZZY[2] flags

2014-03-06 Thread Graham 5915
Hi,

I'm having a problem trying to get the --debug flags to work with 3.1.0. I
wanted to check with the mailing list to make sure this is really a bug and
not something I'm misunderstanding.

This is what I'm running (as a test):

rsync --fuzzy --fuzzy -vv --debug=FUZZY,FUZZY2 C_VOL-b001-i3818.spi rsync://user@localhost/SHARENAME/dest

The dest folder on the destination side has a file named
C_VOL-b001-i3816.spi, which is just a copy of C_VOL-b001-i3818.spi that I
made and modified in a few places with a hex editor.

I don't get any fuzzy messages when running the command that way, but it
does utilize that file. To confirm, I commented out the if
(DEBUG_GTE(FUZZY, 2)) line in generator.c (line 767, I believe) and rebuilt;
I then get the "fuzzy size/modtime match for ..." message in the output as
expected.

Does this look like a bug, or am I misunderstanding the new documentation
regarding how to use the --debug flags?

Thanks
Graham

Problem using --fuzzy with --*-dest flags

2014-03-06 Thread Graham 5915
Hi,

I'm trying to get the --fuzzy option to work with --compare-dest with rsync
3.1.0.

I'm testing with two files: C_VOL-b001-i3818.spi and C_VOL-b001-i3816.spi
are copies of one another, but I've modified the latter a bit with a hex
editor as a test. The modified date is still the same.
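
For reference, the test pair was set up roughly like this (the hex edit
itself not shown; touch -r keeps the mtime identical):

cp C_VOL-b001-i3818.spi C_VOL-b001-i3816.spi
# ...change a few bytes in the copy with a hex editor...
touch -r C_VOL-b001-i3818.spi C_VOL-b001-i3816.spi   # copy the original's mtime onto the edited file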

This works for me (fuzzy is utilized against the 3816 test file):

rsync --fuzzy --fuzzy -vv C_VOL-b001-i3818.spi rsync://user@localhost/SHARENAME/dest
Contents of SHARENAME/dest: C_VOL-b001-i3816.spi

This does not work:

rsync --fuzzy --fuzzy -vv --copy-dest=../alt C_VOL-b001-i3818.spi rsync://user@localhost/SHARENAME/dest
Contents of SHARENAME/dest: empty
Contents of SHARENAME/alt: C_VOL-b001-i3816.spi

I do get complaints that:
file has vanished: "[...]/alt/C_VOL-b001-i3816.spi" (in SHARENAME)
...so it does appear to be seeing the contents of the alt folder. Fuzzy
isn't being run against it, however.

I'd appreciate any insight on this.

PS: I posted another thread having to do with issues with the --debug=FUZZY
flag, but I think that's a separate issue so I'm posting this separately
for clarity.

Thanks
Graham