On 07/02/2014 04:40 PM, Nick Holland wrote:
On 07/02/14 09:08, Gregory Edigarov wrote:
On 07/01/2014 02:20 PM, Nick Holland wrote:
On 07/01/14 07:00, Gregory Edigarov wrote:
Hello,

Just out of curiosity:
what is the fastest and lightest (in CPU terms) cipher algorithm in ssh?
As someone who has worked with lots of really old and weak processors
(and still used the defaults)...I must ask, why?  If this matters to
you, I'd suggest getting a better computer, not dumbing-down SSH.  Yes,
using ssh on a 25MHz SPARC is annoying, but then, so is almost
everything else you do on those machines.  A 20% change one way or
another won't change the annoying factor enough to worry about.

And maybe more important: why aren't you just testing what YOU care
about on YOUR system and answering your own question?  I suspect you may
see different answers on different processors and different tasks.
I.e., what matters? connection time?  throughput?  On the client or server?

And if you have difficulty answering, maybe the answer is "doesn't
really matter, just use the defaults".
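
If you do want to measure it, one rough approach (hypothetical host and
file names; run "ssh -Q cipher" to see what your OpenSSH build actually
offers) is to time the same copy with a few different ciphers:

    # compare wall-clock and CPU time for the same file with each cipher
    for c in aes128-ctr aes128-gcm@openssh.com chacha20-poly1305@openssh.com; do
        echo "== $c"
        time scp -c "$c" bigfile.dat user@remotehost:/dev/null
    done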

Nick.

because I need to scp some 90-100G of data from a VERY busy server over
the internet on a regular basis, and I don't want scp to eat any CPU at
all (which, with encryption, is unavoidable).

then, in the middle, there is a firewall that is out of my control,
only allowing connections to port 22 on that server.

Hope my explanation is enough.
not really, but regardless, YOU still need to do experiments on YOUR
systems.  And I still think fiddling with the encryption knob is the
wrong knob.  Will it change something?  Sure.  Not much, however.

What is busy?  If "busy" is CPU, nice(1) is your friend.  If busy is
disk, chewing some CPU or even rate limiting may be your friend.  If you
are generating that much new data regularly, you may well have more of a
disk issue than a CPU issue.  If it isn't all new data, look at rsync --
more CPU for less disk and network I/O.
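
For example (hypothetical paths and host), something along these lines
runs the transfer at the lowest CPU priority and only ships what has
changed since the last run:

    # lowest CPU priority; rsync resends only files that changed
    nice -n 19 rsync -a --partial /data/export/ user@remotehost:/incoming/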

Try compression on vs. off (the results of this are usually easier to
explain after the fact than to predict beforehand.  Shouldn't be the
case, I know, but I've bet wrong too many times).
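
The only honest way to find out is to time both (hypothetical names
again):

    time scp    bigfile.dat user@remotehost:/incoming/   # compression off (default)
    time scp -C bigfile.dat user@remotehost:/incoming/   # compression on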

Fiddle with the rate limiting of scp.  Note that the number you specify
is not terribly absolute -- don't take your available bandwidth, claim
80% of it, and think magic will happen; you will have to experiment with
values, and let it sit for a while to let the buffers do their thing.
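
The knob is scp's -l flag, which takes Kbit/s; the value below is just a
starting guess to adjust from:

    # cap the transfer at roughly 20 Mbit/s and watch what it actually does
    scp -l 20000 bigfile.dat user@remotehost:/incoming/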

Then of course, there's the "if you don't like the answers, change the
question" strategy -- drop another machine behind the firewall with a
lower-impact way of transferring data -- NFS? FTP?  You are again going
to have to experiment -- then scp off that machine instead of your
overloaded box.  If the data is logs, you probably want to be syslogging
to another box anyway.
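
As a sketch of that idea (hypothetical machine and path names), the
extra box could mount the busy server's data read-only over NFS and do
the encryption work itself:

    # on the staging box behind the firewall
    mount -t nfs -o ro busyserver:/data /mnt/busydata
    scp -r /mnt/busydata user@remotehost:/incoming/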

Some time back, TedU@ wrote a nifty little programlette he called
"disknice" -- google for that, you'll find it.  It periodically yanks
the program it is running away from the CPU (and thus, disk, etc.),
letting other tasks have at it.  I use it to back up some data from my
laptop's disk to an SD card on boot with rsync; before, it killed the
system performance until it was done.  Now it takes longer, but I don't
feel it happening.  Maybe this helps you in some way.
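
I won't vouch for disknice's exact invocation from memory, but the
underlying trick is roughly this (a sketch, not the real tool):
periodically stop and continue the transfer so everything else gets a
turn:

    # pause the copy for a moment every few seconds
    rsync -a /home/me/data/ /mnt/sdcard/backup/ &
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        sleep 2                      # let it work...
        kill -STOP "$pid" 2>/dev/null
        sleep 1                      # ...then give the system a breather
        kill -CONT "$pid" 2>/dev/null
    done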

Thanks for the insight, Nick.  I will seriously think about the
second-machine approach.  The data I need to copy is in a way something
like logs, although it comes from some technical equipment.
