There is only column-level encoding (run-length encoding, delta encoding,
dictionary encoding) and no generic compression.
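
For reference, a minimal Scala sketch of the setup the question describes, using the Spark 1.x SQLContext API that was current at the time of this thread; the parquet path and table name are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object CacheParquetExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("cache-parquet"))
        val sqlContext = new SQLContext(sc)

        // Column-level encodings (RLE, delta, dictionary) are applied to the
        // in-memory columnar cache when this flag is true; no general-purpose
        // codec (lzo/gzip/snappy) is layered on top of it.
        sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

        // Hypothetical path and table name: load lzo-compressed parquet and cache it.
        val data = sqlContext.parquetFile("/data/events.parquet")
        data.registerTempTable("events")
        sqlContext.cacheTable("events")

        // Force materialization of the cache; the size shown in the Storage tab
        // reflects only the column-level encodings, so it can approach the
        // uncompressed size of the data.
        sqlContext.sql("SELECT COUNT(*) FROM events").collect()
      }
    }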

On Thu, Dec 18, 2014 at 12:07 PM, Sadhan Sood <sadhan.s...@gmail.com> wrote:
>
> Hi All,
>
> Wondering whether, when caching a table backed by lzo-compressed parquet
> data, Spark also compresses it (using lzo/gzip/snappy) along with the
> column-level encoding, or just does the column-level encoding when
> "spark.sql.inMemoryColumnarStorage.compressed" is set to true. I ask because
> when I try to cache the data, I notice the memory being used is almost as
> much as the uncompressed size of the data.
>
> Thanks!
>
