Hi Jörn, first of all, thanks for your willingness to help.

The external service is a stateless native component that performs
calculations on the data I provide. The data is in an RDD.

I have that component on each worker node, and I would like to get as
much parallelism as possible on a single worker node.
Using Scala Futures I can achieve this, with at least as many threads as
my machine allows. But how do I do the same in Spark? Is there a way to
call that native component on each worker in multiple threads?
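To illustrate, here is roughly the approach I have in mind: fan out the
native calls inside each partition with Scala Futures via mapPartitions.
The names NativeComponent.calculate, rdd and sc are placeholders for my
actual setup, not real APIs:

import org.apache.spark.rdd.RDD
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object NativeComponent {
  // Stand-in for the real stateless native call.
  def calculate(input: Double): Double = input * 2
}

// rdd stands for my input data; sc is the SparkContext.
val rdd: RDD[Double] = sc.parallelize(1.0 to 100.0 by 1.0)

val results = rdd.mapPartitions { iter =>
  // One Future per record, so calls to the native component run
  // concurrently within this partition, bounded by the thread pool.
  val futures = iter.map(x => Future(NativeComponent.calculate(x))).toList
  Await.result(Future.sequence(futures), 10.minutes).iterator
}

The downside I see is that this materializes the whole partition in
memory, and the global execution context caps concurrency at roughly the
number of cores, which is why I'm asking whether Spark offers a better way.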

Thanks in advance.



