No, this is not needed; look at the map/reduce operations and the standard Spark word count.
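
For reference, the standard Spark word count looks roughly like this in spark-shell
(sc is the SparkContext the shell provides; "input.txt" is just a placeholder path):

    val counts = sc.textFile("input.txt")
      .flatMap(_.split("\\s+"))   // split each line into words
      .map(word => (word, 1))     // emit a (word, 1) pair per word
      .reduceByKey(_ + _)         // sum the counts per word

    counts.collect().foreach(println)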

> On 25 May 2016, at 12:57, Priya Ch <learnings.chitt...@gmail.com> wrote:
> 
> Let's say I have an RDD A of strings {"hi","bye","ch"} and another RDD B of
> strings {"padma","hihi","chch","priya"}. For every string in RDD A I need to
> check the matches found in RDD B; for example, for the string "hi" I have to
> check the matches against all strings in RDD B, which means I need to generate
> every possible combination r
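
A sketch of the pairing described above, runnable in spark-shell. Note that "match"
is assumed here to mean substring containment, which the original question does not
state, and that cartesian() materialises all |A| x |B| pairs:

    val a = sc.parallelize(Seq("hi", "bye", "ch"))
    val b = sc.parallelize(Seq("padma", "hihi", "chch", "priya"))

    // cartesian() produces every (A, B) pair -- the "every possible
    // combination" described above -- so it grows as |A| * |B|.
    val matches = a.cartesian(b).filter { case (x, y) => y.contains(x) }

    matches.collect().foreach(println)   // e.g. (hi,hihi), (ch,chch)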
