My code seemed to deadlock when I tried to do this:

object MoreRdd extends Serializable {
  def apply(i: Int) = {
    val rdd2 = sc.parallelize(0 to 10)
    rdd2.map(j => i * 10 + j).collect
  }
}

val rdd1 = sc.parallelize(0 to 10)
val y = rdd1.map(i => MoreRdd(i)).collect
          
y.toString()


It never reached the last line. The code seemed to deadlock somewhere,
since my CPU load stayed quite low.

Is there a restriction against creating an RDD while another one is still
active? Is it because one worker can only handle one task at a time? How
do I work around this?
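
In case it clarifies the intent, here is what I'm effectively trying to
compute, rewritten without the inner RDD (a plain Scala collection stands
in for rdd2). This version runs fine for me in the spark-shell, but it no
longer distributes the inner loop, which is what I was hoping the nested
RDD would give me:

// Same result computed without a nested RDD: the inner range is a
// plain Scala collection, so only the outer map runs on the cluster.
val rdd1 = sc.parallelize(0 to 10)
val y = rdd1.map(i => (0 to 10).map(j => i * 10 + j).toArray).collect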




