Convert your data to Parquet; it saves both space and time.
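
For reference, here is a minimal sketch of the conversion using the Spark 1.x
DataFrame API. The input path, output path, and the Record schema are
placeholders for your own data, not anything from your setup:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Placeholder schema; replace with the real structure of your records.
case class Record(key: String, value: Long)

object ToParquet {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ToParquet"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Hadoop decompresses the input transparently based on the file extension.
    val df = sc.textFile("hdfs:///data/input")
      .map(_.split(","))
      .map(a => Record(a(0), a(1).toLong))
      .toDF()

    // Parquet is columnar and compressed on disk; when you cache a
    // Parquet-backed DataFrame, Spark SQL also compresses the in-memory
    // columnar cache by default (spark.sql.inMemoryColumnarStorage.compressed).
    df.write.parquet("hdfs:///data/output.parquet")
  }
}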

Thanks and best regards

On Mon, Oct 19, 2015 at 11:43 PM, ahaider3 <ahaid...@hawk.iit.edu> wrote:

> Hi,
> A lot of the data I have in HDFS is compressed. I noticed that when I load
> this data into Spark and cache it, Spark unrolls the data as usual but
> stores it uncompressed in memory. For example, suppose data is an RDD with
> compressed partitions on HDFS, and I cache it. When I call data.count(),
> the data is rightly decompressed, since count() needs the actual values.
> But the data that ends up cached is also decompressed. Can a partition be
> kept compressed in Spark? I know Spark allows data to be compressed after
> serialization, but what if I only want the partitions compressed?
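
If you'd rather keep an RDD cache, the serialized-compression route you
mention works: persist with a serialized storage level and enable
spark.rdd.compress, and Spark compresses each serialized partition with the
codec configured via spark.io.compression.codec. A rough sketch (path and app
name are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object CompressedCache {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("CompressedCache")
      .set("spark.rdd.compress", "true") // compress serialized cached partitions
    val sc = new SparkContext(conf)

    // Decompressed on read, as you observed.
    val data = sc.textFile("hdfs:///data/input")

    // MEMORY_ONLY_SER stores each partition as a serialized byte buffer;
    // with spark.rdd.compress=true those buffers are compressed in memory.
    data.persist(StorageLevel.MEMORY_ONLY_SER)

    println(data.count()) // first action materializes the compressed cache
  }
}

The trade-off is extra CPU to decompress and deserialize partitions on every
access, in exchange for a much smaller memory footprint.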
