> [...] 1.0:2 as TID 2 on executor 12: neuro-1-4.local (PROCESS_LOCAL)
> 14/05/12 17:17:32 INFO TaskSetManager: Serialized task 1.0:2 as 4890 bytes in 1 ms
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Forcing-spark-to-send-exactly-one-element-to-each-worker-node-tp5605p5616.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
I was under the impression that pipe() would just run the
C++ application on the remote node: is the application supposed to run
slower if you use pipe() to execute it?
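For context, RDD.pipe() forks the external command on the executor and streams each element of a partition to the process's stdin (newline-delimited), reading its stdout lines back as the result. That process-launch and stdin/stdout serialization overhead is the usual reason a piped binary runs slower than invoking it directly. A minimal stand-alone Python sketch of that per-partition behaviour (using subprocess directly rather than Spark, with a child Python process standing in for the C++ binary):

```python
import subprocess
import sys

def pipe_partition(elements, command):
    """Mimic RDD.pipe() for one partition: feed each element to the external
    process's stdin (one per line) and collect its stdout lines."""
    proc = subprocess.run(
        command,
        input="\n".join(str(e) for e in elements) + "\n",
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout.splitlines()

# Stand-in for the external C++ application: a child Python that upper-cases
# whatever it reads on stdin.
child = [sys.executable, "-c",
         "import sys\nfor line in sys.stdin: sys.stdout.write(line.upper())"]

print(pipe_partition(["alpha", "beta"], child))  # -> ['ALPHA', 'BETA']
```

Each call to pipe_partition pays the process-launch cost once, which is why Spark forks per partition rather than per element; fewer, larger partitions amortize that overhead.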
[...]: is there any way to
force Spark to send only one element to each worker? I've tried coalescing and
repartitioning the RDD so that the number of partitions equals the number of
elements in the RDD, but that hasn't worked.
Thanks!
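One reason repartition alone may not give one element per partition is that Spark's default placement (hash- or round-robin-based, depending on the operation) does not guarantee a distinct partition per element. A commonly suggested pattern, hedged since it depends on the Spark API in use, is to key each element by its index and supply a partitioner that sends key i to partition i (in PySpark, rdd.zipWithIndex() followed by partitionBy with a partitionFunc). A local Python stand-in for that partitioning logic:

```python
def partition_one_each(elements):
    """Place element i in partition i -- a local stand-in for keying an RDD
    by zipWithIndex and partitioning with partitionFunc = lambda key: key."""
    n = len(elements)
    partitions = [[] for _ in range(n)]
    for index, element in enumerate(elements):
        partitions[index].append(element)  # partitioner maps key i -> partition i
    return partitions

parts = partition_one_each(["a", "b", "c"])
print(parts)  # -> [['a'], ['b'], ['c']], exactly one element per partition
```

With an identity partitioner over unique integer keys there are no hash collisions, so every partition receives exactly one element by construction.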