Interesting, exactly the same question was just posted on another thread :) My answer there is quoted below:

> For the following code:
>
>     val df = sqlContext.parquetFile(path)
>
> `df` remains columnar (actually it just reads from the columnar Parquet file on disk). For the following code:
>
>     val cdf = df.cache()
>
> `cdf` is also columnar, but in a format different from Parquet's. When a DataFrame is cached, Spark SQL turns it into a private in-memory columnar format.
>
> So for your last question, the answer is: yes.

Some more details about the in-memory columnar structure: it's columnar, but much simpler than the format Parquet uses. The cached data is split into batches with a fixed row count (configured by `spark.sql.inMemoryColumnarStorage.batchSize`), and within each batch each column is stored as a byte array. Also, each column is compressed with a compression scheme chosen according to the data type and statistics of that column. Supported compression schemes include RLE, DeltaInt, DeltaLong, BooleanBitSet, and DictionaryEncoding.
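
As a quick illustration, here is a minimal sketch of where the batch size is configured and when the in-memory columnar cache actually gets built (assuming a Spark 1.x shell where `sc` is already defined; the Parquet path is hypothetical):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    // Each cached batch holds at most this many rows per column.
    sqlContext.setConf("spark.sql.inMemoryColumnarStorage.batchSize", "10000")

    val df = sqlContext.parquetFile("/path/to/data.parquet")  // columnar on disk
    val cdf = df.cache()   // marks df for in-memory columnar caching (lazy)
    cdf.count()            // the first action materializes the compressed column batches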

You may find the implementation here: https://github.com/apache/spark/tree/master/sql/core/src/main/scala/org/apache/spark/sql/columnar
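
And regarding the `df.cache().map { row => ... }` question in the mail quoted below: even though the cached data is stored as column batches, the closure still sees one logical `Row` at a time, reassembled from those batches during the scan. A small sketch, continuing the hypothetical example above and assuming its first column is a string:

    import org.apache.spark.sql.Row

    // Each element handed to the closure is an org.apache.spark.sql.Row,
    // reconstructed from the cached column batches when the table is scanned.
    val firstColumn = cdf.map { row: Row => row.getString(0) }
    firstColumn.take(5).foreach(println)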

Cheng

On 6/3/15 10:40 PM, kiran lonikar wrote:
When Spark reads Parquet files (`sqlContext.parquetFile`), it creates a DataFrame RDD. I would like to know if the resulting DataFrame has a columnar structure (many rows of a column coalesced together in memory) or a row-wise structure like a plain Spark RDD. The section Spark SQL and DataFrames <http://spark.apache.org/docs/latest/sql-programming-guide.html#caching-data-in-memory> says you need to call `sqlContext.cacheTable("tableName")` or `df.cache()` to make it columnar. What exactly is this columnar structure?

To be precise: what does `row` represent in the expression `df.cache().map { row => ... }`?

Is it a logical row which maintains an array of columns, where each column in turn is an array of values for `batchSize` rows?

-Kiran

