John Jolet wrote:

>
> On Jan 24, 2006, at 9:10 PM, Ow Mun Heng wrote:
>
>> On Tue, 2006-01-24 at 17:23 +0000, Francesco Riosa wrote:
>>
>>> Jeff wrote:
>>>
>>>> Hey guys.
>>>>
>>>> I've got this big fat backup server with no space left on the hard
>>>> drive to store a tar file. I'd like to pipe a tar through ssh, but
>>>> not sure what the command would be. Something to the effect of:
>>>>
>>>> # cat /var/backup | ssh backup.homelan.com 'tar data.info.gz'
>>>>
>>>> So that the data is actually being sent over ssh, and then archived
>>>> on the destination machine.
>>>>
>>> tar -zcf - /var/backup | ssh backup.homelan.com "( cat > data.info.gz )"
>>>
>>
>> There's another way. This assumes your originating server's CPU is
>> slow/precious and you have a 16-way node on a backup server (HAHA!!)
>>
>> tar cf - /var/backup | ssh backup.homelan.com "gzip -c > filename.tar.gz"
>>
>> But you transfer the stream uncompressed, so more bits get transferred.
>>
> you're kidding, right?  Unless you've got a PII on the originating 
> end and are using gigabit ethernet between the two nodes, compressing 
> the data before transmission will almost always be faster.  I tested 
> this scenario extensively about 3 years ago, using Linux, AIX, and
> Solaris hosts.  In no case was transferring uncompressed data faster
> than compressing (at least to some degree) the data on the 
> originating server.  And frankly, no matter what you do...wouldn't 
> you hope ALL the bits get transferred? :)

I read something a while back suggesting that if you transfer an
already-compressed file over a compressed SFTP connection, for example,
it takes longer than if only the data or only the connection is
compressed. The reason, as I recall, had to do with compressing
already-compressed data: the second pass just burns CPU without
shrinking the stream any further, so it only adds overhead to the
connection.

Did you look at this situation in your tests? If so, what were the results?
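
If the goal is to compress exactly once, something along these lines
ought to do it (untested sketch, just reusing Jeff's hostname and
filename as placeholders):

tar -cf - /var/backup | gzip -c | ssh -o Compression=no backup.homelan.com "cat > data.info.gz"

(ssh leaves compression off by default; the -o just makes it explicit.)
Or send the stream raw and let ssh's -C option do the only compression,
on the wire:

tar -cf - /var/backup | ssh -C backup.homelan.com "cat > data.tar"

though in that case the file lands uncompressed on the remote disk
unless you also gzip it on the far end, like Ow Mun Heng's command does.
Either way the data only gets compressed once, which should avoid the
compress-the-compressed overhead described above.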
-- 
gentoo-user@gentoo.org mailing list
