Re: Aw: Re: rsync very very slow with multiple instances at the same time.

2018-03-23 Thread Kevin Korb via rsync
Right, latency is the problem here.  Every stat() is a tiny read
operation, but it is one that has to come back over the network in the
case of iSCSI.  I also think that is a pretty big slowdown, but I don't
have much iSCSI experience and I have only used it on gigabit Ethernet.
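
One rough way to confirm that the time really is going into stat()-family calls (a sketch, not from the thread: it assumes a Linux client with strace installed, and /path is just a placeholder for the tree being scanned) is to let strace summarize the syscalls a find makes:

  # Summarize syscall counts and time; the lstat/newfstatat rows are the stat() traffic.
  # -w (wall-clock summary) needs a reasonably recent strace; without it the summary
  # shows CPU time rather than time spent waiting on the storage.
  strace -f -c -w find /path -type f -ls > /dev/null

The per-call average in that summary is roughly the round-trip cost that gets multiplied across millions of files.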

On 03/23/2018 04:01 PM, devzero--- via rsync wrote:
>>The difference is not crazy. But the find itself takes so much time!
>  
> 38m for a find across 2.8m files looks a little bit slow; I'm getting
> 14k lines/s when doing "find . | pv -l -a >/dev/null" on my btrfs
> volume, located via iSCSI on a Synology storage (3.5" ordinary SATA
> disks) - and that while the VM I'm running this in is being backed up
> at hypervisor level, i.e. there is additional load on the storage...
>  
> Anyway, you are comparing apples to oranges here. I guess the iSCSI
> storage isn't SSD, is it? And there is more: iSCSI introduces
> additional latency on top...
>  
> regards
> roland
>  
>  
> *Sent:* Friday, 23 March 2018, 17:52
> *From:* "Jayce Piel via rsync"
> *To:* "Kevin Korb via rsync"
> *Subject:* Re: rsync very very slow with multiple instances at the same
> time.
> OK, so I did some tests.
> find /path -type f -ls > /dev/null
>  
>  
> First, on my local SSD disk (1.9 million files):
> 1 find:
> real    2m16.743s
> user    0m7.607s
> sys     0m45.952s
>  
> 10 concurrent finds (approx. the same results for each):
> real    4m48.629s
> user    0m11.013s
> sys     2m0.288s
>  
> Almost double the time, which is more or less logical.
>  
>  
> Now the same test on my server, on the iSCSI disk (when there is no
> other activity) (2.8 million files):
> 1 find:
> real    38m54.964s
> user    0m35.626s
> sys     4m33.593s
>  
> 10 concurrent finds:
> real    76m34.781s
> user    0m47.848s
> sys     5m42.034s
>  
> The difference is not crazy. But the find itself takes so much time!
> I now see I have a real issue on that server. Transfer time is not a
> problem, but access time seems to be terribly slow.
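
For scale: 2m16.7s for 1.9 million files works out to roughly 0.07 ms per file on the SSD, while 38m55s (about 2335 s) for 2.8 million files is roughly 0.83 ms per file on the iSCSI volume, i.e. on the order of 12 times as much time spent per stat().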
>  
> 
> On 21 March 2018 at 16:59, Jayce Piel wrote:
>  
> Thanks for the answer.
> I will do some tests of the stat() thing at a time when there is
> nothing else running.
>  
> For the cipher (-c), I tried to pick the lowest common denominator
> between the clients and the server. The server is the older one for now.
> I used to use -c arcfour-128 before it stopped being an option.
>  
> The two ciphers you mention are available on the clients but
> not on the server, sadly.
> But I will keep this in mind for when I upgrade the server (or move
> the backup destination).
>  
>  
> 
> On 21 March 2018 at 16:39, Kevin Korb via rsync wrote:
>  
> When rsync has a lot of files to look through but not many to actually
> transfer, most of the work will be gathering information from the
> stat() function call.  You can simulate just the stat call with:
> find /path -type f -ls > /dev/null
> You can run one, then a few of those, to see if your storage has
> issues with lots of stats all at once.
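
As a concrete sketch of that test (assuming bash; /path again stands in for the backup source), the single run and the 10-way run can be timed back to back:

  # one find, timed on its own
  time find /path -type f -ls > /dev/null

  # ten finds in parallel, timing the whole batch
  time ( for i in $(seq 1 10); do find /path -type f -ls > /dev/null & done; wait )

If the 10-way batch takes far longer than the single run, the storage is struggling to serve concurrent metadata reads.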
> 
> Also, why -c aes128-ctr?  If your OpenSSH is current then the default
> of chacha20-poly1305@openssh.com is much faster.  If your systems have
> AES-NI in the CPU then aes128-gcm@openssh.com is much faster.  If your
> OpenSSH is too old for chacha to be the default then aes128-ctr was the
> default anyway.
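
To check what each end actually offers, and to compare raw ssh throughput between two candidate ciphers, something like the following can be used (a sketch; "backuphost" is a placeholder, ssh -Q needs a not-too-old OpenSSH, and aes128-gcm@openssh.com will only negotiate if both sides support it):

  # list the ciphers this OpenSSH build supports; run the same command on the server
  ssh -Q cipher

  # push 1 GB of zeros through ssh and compare the rates dd reports
  dd if=/dev/zero bs=1M count=1000 | ssh -c aes128-ctr backuphost 'cat > /dev/null'
  dd if=/dev/zero bs=1M count=1000 | ssh -c aes128-gcm@openssh.com backuphost 'cat > /dev/null'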
> 
> On 03/21/2018 09:49 AM, Jayce Piel via rsync wrote:
> 
> 
> Here are my options:
> 
> /usr/local/bin/rsync3 --rsync-path=/usr/local/bin/rsync3 -aHXxvE --stats
> --numeric-ids --delete-excluded --delete-before --human-readable
> --rsh="ssh -T -c aes128-ctr -o Compression=no -x" -z
> --skip-compress=gz/bz2/jpg/jpeg/ogg/mp3/mp4/mov/avi/vmdk/vmem --inplace
> --chmod=u+w --timeout=60 --exclude='Caches' --exclude='SyncService'
> --exclude='.FileSync' --exclude='IMAP*' --exclude='.Trash' --exclude='Saved
> Application State' --exclude='Autosave Information'
> --exclude-from=/Users/pabittan/.UserSync/exclude-list --max-size=1000M
> /Users/pabittan/ xserve.local.fftir:./
>  
> 
>  
> -- 
> Jayce Piel   —    jayce.p...@gmail.com   --   0616762431
>    Responsable Informatique F.F.Tir
> 
> -- 
> Jayce Piel   —    jayce.p...@gmail.com   --   0616762431
>    Responsable Informatique F.F.Tir

Aw: Re: rsync very very slow with multiple instances at the same time.

2018-03-23 Thread devzero--- via rsync
>The difference is not crazy. But the find itself takes so much time!

38m for a find across 2.8m files looks a little bit slow; I'm getting 14k lines/s when doing "find . | pv -l -a >/dev/null" on my btrfs volume, located via iSCSI on a Synology storage (3.5" ordinary SATA disks) - and that while the VM I'm running this in is being backed up at hypervisor level, i.e. there is additional load on the storage...

Anyway, you are comparing apples to oranges here. I guess the iSCSI storage isn't SSD, is it? And there is more: iSCSI introduces additional latency on top...
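
For scale, at ~14k entries/s a walk over 2.8 million files would finish in roughly 2,800,000 / 14,000 ≈ 200 seconds, a bit over three minutes, whereas the observed ~38m55s (≈ 2335 s) is more than ten times that, i.e. close to a millisecond of latency per entry.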

 

regards
roland

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html