Scala is a language; Spark is an OO/functional, distributed framework that 
facilitates parallel programming in a distributed environment 

 

Any “Scala parallelism” occurs within the parallel model imposed by the Spark 
OO framework – i.e. it is limited in what it can achieve in terms of 
influencing the Spark framework's behavior – that is the nature of programming 
with/for frameworks 

 

When RDD1 and RDD2 are partitioned and different actions are applied to them, 
this results in parallel pipelines / DAGs within the Spark framework:

RDD1 = RDD.filter()

RDD2 = RDD.filter()
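Concretely, the two-filter approach keeps one source dataset and derives each output by predicate. A minimal sketch, using plain Scala collections to stand in for RDDs (the predicate `isTypeA` and the data are made up for illustration; with Spark you would typically `.cache()` the source first so it is not recomputed for each filter):

```scala
// Hypothetical classification: even records are "type A", odd are "type B".
val source = Seq(1, 2, 3, 4, 5, 6)       // with Spark: sc.parallelize(...).cache()
val isTypeA = (x: Int) => x % 2 == 0

// Two derived datasets from the same source, one filter pass each.
val typeA = source.filter(isTypeA)       // with Spark: source.filter(isTypeA)
val typeB = source.filterNot(isTypeA)
```

Each derived dataset can then be saved independently; the cost is two passes over the source, which caching keeps cheap.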

 

 

From: Bill Q [mailto:bill.q....@gmail.com] 
Sent: Thursday, May 7, 2015 4:55 PM
To: Evo Eftimov
Cc: user@spark.apache.org
Subject: Re: Map one RDD into two RDD

 

Thanks for the replies. We decided to use concurrency in Scala to do the two 
mappings using the same source RDD in parallel. So far, it seems to be working. 
Any comments?
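For reference, the concurrent variant described above can be sketched with Scala `Future`s; with Spark, each future would trigger its own action on the same (ideally cached) source RDD, and the scheduler can run the resulting jobs in parallel. Here plain collections stand in for RDDs, and the two mapping functions are hypothetical:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val source = Seq(1, 2, 3)                 // with Spark: a cached source RDD

// Submit the two mappings concurrently; with Spark each Future body
// would end in an action, producing a separate job.
val jobA = Future(source.map(_ * 2))      // hypothetical mapping A
val jobB = Future(source.map(_ + 10))     // hypothetical mapping B

val resultA = Await.result(jobA, 10.seconds)
val resultB = Await.result(jobB, 10.seconds)
```

Note this only changes when the jobs are submitted from the driver; whether they actually overlap depends on available cluster resources and the scheduling mode.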

On Wednesday, May 6, 2015, Evo Eftimov <evo.efti...@isecc.com> wrote:

RDD1 = RDD.filter()

RDD2 = RDD.filter()

 

From: Bill Q [mailto:bill.q....@gmail.com] 
Sent: Tuesday, May 5, 2015 10:42 PM
To: user@spark.apache.org
Subject: Map one RDD into two RDD

 

Hi all,

I have a large RDD that I map a function over. Based on the nature of each 
record in the input RDD, I generate two types of data, and I would like to 
save each type into its own RDD. But I can't seem to find an efficient way to 
do it. Any suggestions?

 

Many thanks.

 

 

Bill



-- 

Many thanks.

Bill

 




 
