Which function in Spark is used to combine two RDDs by keys

2014-11-13 Thread Blind Faith
Let us say I have the following two RDDs, with the following key-value
pairs.

rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]

and

rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]

Now I want to combine them by key, so that the result would be the
following:

ret = [ (key1, [value1, value2, value5, value6]),
        (key2, [value3, value4, value7]) ]

How can I do this in Spark, using Python or Scala? One way is to use
join, but join would create a tuple inside a tuple, whereas I want only
one flat list of values per key.
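
For illustration, this is roughly what join produces here (a sketch,
assuming PySpark, an existing SparkContext named sc, and string keys
and values):

# sc is assumed to be a live SparkContext; keys/values are plain strings.
rdd1 = sc.parallelize([("key1", ["value1", "value2"]),
                       ("key2", ["value3", "value4"])])
rdd2 = sc.parallelize([("key1", ["value5", "value6"]),
                       ("key2", ["value7"])])

rdd1.join(rdd2).collect()
# [('key1', (['value1', 'value2'], ['value5', 'value6'])),
#  ('key2', (['value3', 'value4'], ['value7']))]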


Re: Which function in Spark is used to combine two RDDs by keys

2014-11-13 Thread Sonal Goyal
Check cogroup.
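
A minimal PySpark sketch, assuming the rdd1 and rdd2 from the question:
cogroup gives, per key, one iterable of values from each RDD, so both
sides still need flattening into a single list.

# cogroup returns (key, (values_from_rdd1, values_from_rdd2)),
# where each side is an iterable of the original lists; flatten both.
ret = rdd1.cogroup(rdd2).mapValues(
    lambda sides: [v for vs in sides[0] for v in vs] +
                  [v for vs in sides[1] for v in vs])

ret.collect()
# [('key1', ['value1', 'value2', 'value5', 'value6']),
#  ('key2', ['value3', 'value4', 'value7'])]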

Best Regards,
Sonal
Founder, Nube Technologies http://www.nubetech.co

http://in.linkedin.com/in/sonalgoyal






Re: Which function in Spark is used to combine two RDDs by keys

2014-11-13 Thread Davies Liu
rdd1.union(rdd2).groupByKey()
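
A sketch of how this plays out in PySpark on the sample data, assuming
the rdd1 and rdd2 from the question: groupByKey collects the original
lists into an iterable, so one extra flattening step yields exactly the
requested shape; alternatively, since every value is already a list,
reduceByKey can concatenate them directly.

# groupByKey yields (key, iterable-of-lists); flatten to one list per key.
ret = rdd1.union(rdd2).groupByKey().mapValues(
    lambda lists: [v for vs in lists for v in vs])

# Equivalent here, because each value is already a list:
ret = rdd1.union(rdd2).reduceByKey(lambda a, b: a + b)

ret.collect()
# [('key1', ['value1', 'value2', 'value5', 'value6']),
#  ('key2', ['value3', 'value4', 'value7'])]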

