[ 
https://issues.apache.org/jira/browse/SPARK-2468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204222#comment-14204222
 ] 

zzc commented on SPARK-2468:
----------------------------

@Aaron Davidson, thank you for the suggestion.
The configuration and code used to compare Hadoop and Spark performance were 
not shown above. The job simply runs a wordcount over 240 GB of 
snappy-compressed files and writes 500 GB of shuffle files. The configuration 
is as follows: 

command "--driver-memory 10g --num-executors 17 --executor-memory 12g 
--executor-cores 3 --driver-library-path :/usr/local/hadoop/lib/native/ 
/opt/wsspark.jar 24G_10_20g_1c 1 100 hdfs://wscluster/zzc_test/in/snappy8/ 100 
100 hdfs://wscluster/zzc_test/out/i007"

Configuration:
spark.default.parallelism 204
spark.shuffle.consolidateFiles false
spark.shuffle.spill.compress true
spark.shuffle.compress true
spark.storage.memoryFraction 0.3
spark.shuffle.memoryFraction 0.5
spark.shuffle.file.buffer.kb 100
spark.reducer.maxMbInFlight 48
spark.shuffle.blockTransferService nio
spark.shuffle.manager HASH
spark.scheduler.mode FIFO
spark.akka.frameSize 10
spark.akka.timeout 100
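As an aside, these properties can equivalently be kept in
conf/spark-defaults.conf instead of being set programmatically. A sketch with
a subset of the values reported above (the file location assumes a standard
Spark 1.x layout):

```
# conf/spark-defaults.conf -- subset of the settings from the run above
spark.shuffle.manager               HASH
spark.shuffle.blockTransferService  nio
spark.shuffle.consolidateFiles      false
spark.default.parallelism           204
```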

> Netty-based block server / client module
> ----------------------------------------
>
>                 Key: SPARK-2468
>                 URL: https://issues.apache.org/jira/browse/SPARK-2468
>             Project: Spark
>          Issue Type: Improvement
>          Components: Shuffle, Spark Core
>            Reporter: Reynold Xin
>            Assignee: Reynold Xin
>            Priority: Critical
>             Fix For: 1.2.0
>
>
> Right now shuffle send goes through the block manager. This is inefficient 
> because it requires loading a block from disk into a kernel buffer, then into 
> a user space buffer, and then back to a kernel send buffer before it reaches 
> the NIC. This makes multiple copies of the data and incurs context switches 
> between kernel and user space. It also creates unnecessary buffers in the 
> JVM that increase GC pressure.
> Instead, we should use FileChannel.transferTo, which handles this in the 
> kernel space with zero-copy. See 
> http://www.ibm.com/developerworks/library/j-zerocopy/
> One potential solution is to use Netty. Spark already has a Netty-based 
> network module implemented (org.apache.spark.network.netty). However, it 
> lacks some functionality and is turned off by default. 
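The zero-copy path described in the issue can be sketched in plain Java NIO.
This is an illustration of the FileChannel.transferTo API only, not Spark's
actual network code; it copies file-to-file for simplicity, whereas a shuffle
server would transfer into a SocketChannel:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Copies a file using FileChannel.transferTo, which lets the kernel move
    // bytes directly (sendfile(2) on Linux) without staging them in a
    // user-space buffer. For a shuffle send, the target would be a
    // SocketChannel instead of another FileChannel.
    static void transfer(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            long pos = 0;
            long size = in.size();
            // transferTo may move fewer bytes than requested, so loop until
            // the whole file has been handed to the kernel.
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("block", ".dat");
        Path dst = Files.createTempFile("copy", ".dat");
        Files.write(src, "shuffle block payload".getBytes());
        transfer(src, dst);
        System.out.println(new String(Files.readAllBytes(dst)));
    }
}
```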



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
