[ https://issues.apache.org/jira/browse/SPARK-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235813#comment-14235813 ]

Aaron Davidson commented on SPARK-4740:
---------------------------------------

I think we have ourselves a winner. NIO is reading from disk with 20 threads (handle-message-executor-*), so it's certainly saturating all disks. Netty is only reading with 3 threads.

It's a kind of silly idea, but at the top of TransportClientFactory#createClient, you could change this code (which reuses existing clients):

{code}
if (cachedClient.isActive()) {
  logger.trace("Returning cached connection to {}: {}", address, cachedClient);
  return cachedClient;
} else {
  logger.info("Found inactive connection to {}, closing it.", address);
  connectionPool.remove(address, cachedClient); // Remove inactive clients.
}
{code}

to randomly discard the existing client, by simply changing it to this:

{code}
if (cachedClient.isActive() && Math.random() > 0.1) {
  logger.trace("Returning cached connection to {}: {}", address, cachedClient);
  return cachedClient;
} else {
  logger.info("Found inactive connection to {}, closing it.", address);
  connectionPool.remove(address, cachedClient); // Remove inactive clients.
}
{code}

This would cause us to create a new connection (and not close the old one) 1/10 of the time, hopefully leaving us with ~5 concurrent clients per host rather than 1. Not a real solution, of course, but it could demonstrate very clearly what the issue is if it works.

> Netty's network throughput is about 1/2 of NIO's in spark-perf sortByKey
> ------------------------------------------------------------------------
>
>                 Key: SPARK-4740
>                 URL: https://issues.apache.org/jira/browse/SPARK-4740
>             Project: Spark
>          Issue Type: Improvement
>          Components: Shuffle, Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Zhang, Liye
>         Attachments: Spark-perf Test Report 16 Cores per Executor.pdf, Spark-perf Test Report.pdf, TestRunner sort-by-key - Thread dump for executor 1_files (Netty-48 Cores per node).zip, TestRunner sort-by-key - Thread dump for executor 1_files (Nio-48 cores per node).zip
>
> When testing current Spark master (1.3.0-snapshot) with spark-perf (sort-by-key, aggregate-by-key, etc.), the Netty-based shuffle transferService takes much longer than the NIO-based one. The network throughput of Netty is only about half that of NIO.
> We tested in standalone mode, and the data set we used is 20 billion records, about 400GB in total. The spark-perf test runs on a 4-node cluster with 10G NICs, 48 CPU cores per node, and 64GB of memory per executor. The number of reduce tasks is set to 1000.
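For reference, the non-hacky version of the same idea would be for the factory to keep a small fixed-size pool of connections per remote address and pick one at random on each createClient call. Below is a minimal, self-contained sketch of that; MultiClientPool, Dialer, dial, and numConnectionsPerPeer are made-up names for illustration, not the actual TransportClientFactory API:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Sketch only: a per-address pool holding up to N live connections, choosing one
// at random per request. This is what the random-discard hack above approximates,
// minus the never-closed leftover connections.
public class MultiClientPool<C> {

  /** Abstracts connection creation/liveness so the sketch stays self-contained. */
  public interface Dialer<T> {
    T dial(String address);        // open a fresh connection to the remote host
    boolean isActive(T client);    // is this connection still usable?
  }

  private final int numConnectionsPerPeer;  // e.g. 5, matching the ~5 clients above
  private final Dialer<C> dialer;
  private final ConcurrentHashMap<String, Object[]> pool = new ConcurrentHashMap<>();

  public MultiClientPool(int numConnectionsPerPeer, Dialer<C> dialer) {
    this.numConnectionsPerPeer = numConnectionsPerPeer;
    this.dialer = dialer;
  }

  @SuppressWarnings("unchecked")
  public C getClient(String address) {
    Object[] slots = pool.computeIfAbsent(address, a -> new Object[numConnectionsPerPeer]);
    int i = ThreadLocalRandom.current().nextInt(numConnectionsPerPeer);  // random slot
    synchronized (slots) {
      C client = (C) slots[i];
      if (client == null || !dialer.isActive(client)) {
        client = dialer.dial(address);  // lazily (re)dial a dead or empty slot
        slots[i] = client;
      }
      return client;
    }
  }
}
{code}

With numConnectionsPerPeer = 5 this yields roughly the ~5 concurrent clients per host that the random-discard hack is aiming for, without leaving the discarded connections open indefinitely.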
Aaron Davidson commented on SPARK-4740: --------------------------------------- I think we have ourselves a winner. NIO is reading from disk with 20 threads (handle-message-executor-*), so it's certainly saturating all disks. Netty is only reading with 3 threads. It's a kind of silly idea, but at the top of TransportClientFactory#createClient, you could change this code (which reuses existing clients): {code} if (cachedClient.isActive()) { logger.trace("Returning cached connection to {}: {}", address, cachedClient); return cachedClient; } else { logger.info("Found inactive connection to {}, closing it.", address); connectionPool.remove(address, cachedClient); // Remove inactive clients. } {code} to randomly discard the existing client, by simply changing it to this: {code} if (cachedClient.isActive() && Math.random() > 0.1) { logger.trace("Returning cached connection to {}: {}", address, cachedClient); return cachedClient; } else { logger.info("Found inactive connection to {}, closing it.", address); connectionPool.remove(address, cachedClient); // Remove inactive clients. } {code} This would cause us to create a new connection (and not close the old one) 1/10 of the time, hopefully causing us to use ~5 concurrent clients per host rather than 1. Not a real solution, of course, but could demonstrate very clearly what the issue is if it works. > Netty's network throughput is about 1/2 of NIO's in spark-perf sortByKey > ------------------------------------------------------------------------ > > Key: SPARK-4740 > URL: https://issues.apache.org/jira/browse/SPARK-4740 > Project: Spark > Issue Type: Improvement > Components: Shuffle, Spark Core > Affects Versions: 1.2.0 > Reporter: Zhang, Liye > Attachments: Spark-perf Test Report 16 Cores per Executor.pdf, > Spark-perf Test Report.pdf, TestRunner sort-by-key - Thread dump for > executor 1_files (Netty-48 Cores per node).zip, TestRunner sort-by-key - > Thread dump for executor 1_files (Nio-48 cores per node).zip > > > When testing current spark master (1.3.0-snapshot) with spark-perf > (sort-by-key, aggregate-by-key, etc), Netty based shuffle transferService > takes much longer time than NIO based shuffle transferService. The network > throughput of Netty is only about half of that of NIO. > We tested with standalone mode, and the data set we used for test is 20 > billion records, and the total size is about 400GB. Spark-perf test is > Running on a 4 node cluster with 10G NIC, 48 cpu cores per node and each > executor memory is 64GB. The reduce tasks number is set to 1000. -- This message was sent by Atlassian JIRA (v6.3.4#6332) --------------------------------------------------------------------- To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org