Hi guys,

Another question: what is the recommended approach to working with wide,
column-oriented data, i.e. data with more than 1000 columns? Storing it in
Parquet should be fine, but how well does Spark SQL handle such a large
number of columns? Is there a limit? Should we use the core Spark API (RDDs)
instead?
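For context, this is roughly the access pattern we have in mind (a minimal
sketch for spark-shell, where `spark` is the predefined SparkSession; the
path and column names are made up):

    import org.apache.spark.sql.functions.avg

    // Hypothetical Parquet dataset with 1000+ columns.
    val df = spark.read.parquet("/data/wide_table.parquet")

    // Parquet is columnar, so selecting a few columns should only
    // read those columns from disk (column pruning).
    val subset = df.select("col_0", "col_1", "col_2")
    subset.show(5)

    // A simple aggregation touching two of the many columns.
    df.groupBy("col_0").agg(avg("col_1")).show()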

Thanks for any insights,
- Marius

