Thank you!

The Hive solution seems more like a workaround. I was wondering whether native 
Spark SQL support for computing statistics on Parquet files will become available.
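For anyone landing on this thread later: newer Spark releases added an explicit
`broadcast()` hint in `org.apache.spark.sql.functions`, which marks one side of a
join for broadcast and sidesteps the statistics-based
`spark.sql.autoBroadcastJoinThreshold` decision entirely. A minimal sketch
(paths, table contents, and the join key `key` are hypothetical; requires a
running Spark installation):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object BroadcastJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("broadcast-join-sketch")
      .getOrCreate()

    // Hypothetical Parquet inputs; the small side must fit in executor memory.
    val large = spark.read.parquet("/data/large_table")
    val small = spark.read.parquet("/data/small_table")

    // broadcast(small) forces a broadcast hash join regardless of whether
    // size statistics were ever computed for the Parquet files.
    val joined = large.join(broadcast(small), Seq("key"))
    joined.show()

    spark.stop()
  }
}
```

Without the hint, Spark only chooses a broadcast join when the estimated size of
one side falls below `spark.sql.autoBroadcastJoinThreshold`, which is where the
Hive `ANALYZE TABLE` statistics come in.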

Dima



Sent from my iPhone

> On Feb 11, 2015, at 3:34 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> 
> See earlier thread:
> http://search-hadoop.com/m/JW1q5BZhf92
> 
>> On Wed, Feb 11, 2015 at 3:04 PM, Dima Zhiyanov <dimazhiya...@gmail.com> 
>> wrote:
>> Hello
>> 
>> Has Spark implemented computing statistics for Parquet files? Or is there
>> any other way I can enable broadcast joins between Parquet file RDDs in
>> Spark SQL?
>> 
>> Thanks
>> Dima
>> 
>> 
>> 
>> 
> 
