Hello,

I am wondering how "join" works in Spark SQL. Does it co-partition the two
tables, or does it always create a wide dependency (a full shuffle)?

I have two big tables to join; the query creates more than 150 GB of temporary
data, so it fails because I have no space left on my disk.
I guess I could use a HashPartitioner in order to join with co-partitioned
inputs, along these lines (a Scala sketch follows the list):

1/ Read my two tables into two SchemaRDDs
2/ Transform the two SchemaRDDs into two RDD[(Key, Value)]
3/ Repartition both RDDs with the same partitioner: rdd.partitionBy(new
HashPartitioner(100))
4/ Join the two RDDs
5/ Transform the result back into a SchemaRDD
6/ Reconstruct my Hive table.
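
Concretely, here is the kind of code I have in mind for steps 1 to 6,
written against the Spark 1.2 API (HiveContext / SchemaRDD). The table and
column names, the key type (Long) and the partition count (100) are only
placeholders, and I assume sc is an existing SparkContext (e.g. from
spark-shell):

    import org.apache.spark.HashPartitioner
    import org.apache.spark.SparkContext._   // pair RDD implicits (partitionBy, join)
    import org.apache.spark.sql.hive.HiveContext

    val hc = new HiveContext(sc)
    import hc.createSchemaRDD   // implicit RDD[Product] => SchemaRDD, used in step 5

    // 1/ Read the two tables as SchemaRDDs
    val t1 = hc.sql("SELECT key, value FROM table1")
    val t2 = hc.sql("SELECT key, value FROM table2")

    // 2/ Turn each SchemaRDD into an RDD[(Key, Value)]
    val kv1 = t1.map(r => (r.getLong(0), r.getString(1)))
    val kv2 = t2.map(r => (r.getLong(0), r.getString(1)))

    // 3/ Co-partition both sides with the same partitioner
    val part = new HashPartitioner(100)
    val p1 = kv1.partitionBy(part)
    val p2 = kv2.partitionBy(part)

    // 4/ Join; both inputs now share the partitioner, so the join
    //    itself is a narrow dependency (no extra shuffle)
    val joined = p1.join(p2)

    // 5/ Back to a SchemaRDD via a case class and the implicit above
    case class Joined(key: Long, v1: String, v2: String)
    val result = joined.map { case (k, (a, b)) => Joined(k, a, b) }

    // 6/ Write the result back as a Hive table
    result.saveAsTable("joined_table")

As I understand it, the two partitionBy calls each shuffle their table once,
but the join itself should then be narrow.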

Is there an easier way to do this via Spark SQL (HiveContext)?


Thanks for your help.

