[ https://issues.apache.org/jira/browse/SPARK-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189825#comment-14189825 ]

zzc commented on SPARK-2468:
----------------------------

Hi Reynold Xin, "[SPARK-3453] Netty-based BlockTransferService, extracted from 
Spark core" was committed yesterday. I compiled the latest code from GitHub 
master, and when I set spark.shuffle.blockTransferService=netty, I get this error:

ERROR - org.apache.spark.Logging$class.logError(Logging.scala:75) - 
sparkDriver-akka.actor.default-dispatcher-14 - Lost executor 17 on np05: remote 
Akka client disassociated 

When I set spark.shuffle.blockTransferService=nio instead, the job runs successfully.
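
For reference, this is how the property was toggled between the two transfer services (a spark-submit sketch; in this Spark version the valid values are "netty" and "nio"):

```shell
# Select the shuffle block transfer service for the job.
# "netty" triggers the executor-lost error above; "nio" runs successfully.
spark-submit \
  --conf spark.shuffle.blockTransferService=nio \
  --class <main-class> <application-jar>
```

The same line can also be placed in conf/spark-defaults.conf to apply it to all jobs.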

In addition, when will the shuffle performance improvement issue be resolved?

> Netty-based block server / client module
> ----------------------------------------
>
>                 Key: SPARK-2468
>                 URL: https://issues.apache.org/jira/browse/SPARK-2468
>             Project: Spark
>          Issue Type: Improvement
>          Components: Shuffle, Spark Core
>            Reporter: Reynold Xin
>            Assignee: Reynold Xin
>            Priority: Critical
>
> Right now shuffle send goes through the block manager. This is inefficient 
> because it requires loading a block from disk into a kernel buffer, then into 
> a user space buffer, and then back into a kernel send buffer before it reaches 
> the NIC. It makes multiple copies of the data and context-switches between 
> kernel and user space. It also creates unnecessary buffers in the JVM, which 
> increases GC pressure.
> Instead, we should use FileChannel.transferTo, which handles this in kernel 
> space with zero-copy. See 
> http://www.ibm.com/developerworks/library/j-zerocopy/
> One potential solution is to use Netty. Spark already has a Netty-based 
> network module implemented (org.apache.spark.network.netty). However, it 
> lacks some functionality and is turned off by default. 
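
The zero-copy path described in the quoted issue can be sketched with plain java.nio (a minimal illustration only, not Spark's actual transfer code; the class and method names here are hypothetical). transferTo asks the kernel to move the bytes directly, which on Linux maps to sendfile and avoids the kernel-to-user-to-kernel copy round trip:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ZeroCopyDemo {
    // Copy src to dst via FileChannel.transferTo: the kernel moves the bytes
    // directly to the target channel, skipping the user-space buffer.
    // Note: transferTo may transfer fewer bytes than requested for very large
    // files, so production code would loop until the full length is sent.
    static long zeroCopy(String src, String dst) throws IOException {
        try (FileInputStream fis = new FileInputStream(src);
             FileOutputStream fos = new FileOutputStream(dst)) {
            return fis.getChannel().transferTo(0, fis.getChannel().size(),
                                               fos.getChannel());
        }
    }

    public static void main(String[] args) throws IOException {
        // Round-trip demo on temp files standing in for a shuffle block.
        Path src = Files.createTempFile("block", ".dat");
        Files.write(src, "shuffle block bytes".getBytes());
        Path dst = Files.createTempFile("block", ".out");
        long sent = zeroCopy(src.toString(), dst.toString());
        System.out.println("transferred " + sent + " bytes");
    }
}
```

When the destination is a socket channel rather than a file, the same call lets the NIC read straight from the page cache, which is the win the issue is after.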



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
