[ 
https://issues.apache.org/jira/browse/HDFS-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026863#comment-14026863
 ] 

Tom Panning commented on HDFS-6323:
-----------------------------------

Hi Andrew,

It could be related to those two issues, depending on how they are implemented. 
If files are compressed transparently, and they remain compressed until the 
-get and -put commands decompress them on the local machine, that would solve 
my problem. But if the files are transparently decompressed as they are read 
off the HDFS disk, then it wouldn't.

Allowing webhdfs to use compression would also solve my problem.
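In the meantime, a manual workaround along these lines may get part of the benefit by compressing before transfer, at the cost of storing the file compressed on HDFS. This is only a sketch; the paths and filenames are illustrative, and it assumes gzip and the standard hadoop CLI are available:

```shell
# Upload: compress locally and stream the smaller file over the network.
# "-" tells -put to read from stdin. (Illustrative paths.)
gzip -c ./big-input.csv | hadoop fs -put - /data/big-input.csv.gz

# Download: stream the compressed bytes back and decompress locally,
# so only the compressed data crosses the slow link or VPN.
hadoop fs -cat /data/big-input.csv.gz | gunzip -c > ./big-input.csv
```

The trade-off is that the file lives on HDFS in gzip form, so cluster-side jobs must either handle the .gz codec or decompress it first, which is exactly why a compress-in-transit-only option would be preferable.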

> Compress files in transit for fs -get and -put operations
> ---------------------------------------------------------
>
>                 Key: HDFS-6323
>                 URL: https://issues.apache.org/jira/browse/HDFS-6323
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>            Reporter: Tom Panning
>            Priority: Minor
>
> For the {{hadoop fs -get}} and {{hadoop fs -put}} commands, it would be nice 
> if there was an option to compress the file(s) in transit. For some people, 
> the Hadoop cluster is far away (in terms of the network) or must be accessed 
> through a VPN, and many files that are put on or retrieved from the cluster 
> are very large compared to the available bandwidth.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
