I believe spark.rdd.compress requires the data to be serialized. In my
case, the data is already compressed on disk, but it gets decompressed when
I try to cache it. I believe that even when I set spark.rdd.compress to
*true*, Spark will still decompress the data, serialize it, and then
compress the serialized data.
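
If I understand the configuration correctly, something like the sketch
below is what would serialize and then compress the cached blocks. The app
name and HDFS path are just placeholders, and I am assuming the usual
pattern of pairing spark.rdd.compress with a *_SER storage level:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.storage.StorageLevel

  // spark.rdd.compress only applies to serialized cached blocks,
  // so a *_SER storage level is needed for it to have any effect.
  val conf = new SparkConf()
    .setAppName("compressed-cache-sketch")   // placeholder name
    .set("spark.rdd.compress", "true")       // compress serialized blocks
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  val sc = new SparkContext(conf)

  // The input splits are decompressed as records are read, regardless of
  // spark.rdd.compress; the original HDFS codec is not kept in the cache.
  val data = sc.textFile("hdfs:///path/to/compressed/data") // placeholder path
  data.persist(StorageLevel.MEMORY_ONLY_SER)
  data.count()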

Although Parquet is an option, I believe it only makes sense when running
Spark SQL. Will it help if I am using GraphX or MLlib?
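
In case it matters, what I had in mind for the Parquet route is roughly the
following (the path and column layout are made up, and sc is the same
context as above); I am not sure whether converting back to an RDD for
MLlib keeps any of the storage benefits:

  import org.apache.spark.sql.SQLContext
  import org.apache.spark.mllib.linalg.Vectors
  import org.apache.spark.mllib.regression.LabeledPoint
  import org.apache.spark.storage.StorageLevel

  val sqlContext = new SQLContext(sc)

  // Parquet is compressed and columnar on disk, but once rows are
  // materialized for MLlib, the cached data is plain JVM objects again
  // unless a serialized storage level is used.
  val df = sqlContext.read.parquet("hdfs:///path/to/data.parquet") // placeholder
  val points = df.rdd.map { row =>
    // hypothetical schema: label: Double, f1: Double, f2: Double
    LabeledPoint(row.getDouble(0), Vectors.dense(row.getDouble(1), row.getDouble(2)))
  }
  points.persist(StorageLevel.MEMORY_ONLY_SER)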

Thanks, Adnan Haider
B.S Candidate, Computer Science
Illinois Institute of Technology

On Thu, Oct 22, 2015 at 7:15 AM, Igor Berman <igor.ber...@gmail.com> wrote:

> check spark.rdd.compress
>
> On 19 October 2015 at 21:13, ahaider3 <ahaid...@hawk.iit.edu> wrote:
>
>> Hi,
>> A lot of the data I have in HDFS is compressed. I noticed that when I load
>> this data into Spark and cache it, Spark unrolls the data as usual but
>> stores it uncompressed in memory. For example, suppose data is an RDD with
>> compressed partitions on HDFS. I then cache the data. When I call
>> data.count(), the data is rightly decompressed, since Spark needs it to
>> compute the count. But the data that is cached is also decompressed. Can a
>> partition be compressed in Spark? I know Spark allows data to be compressed
>> after serialization, but what if I only want the partitions compressed?
>>
>
