[jira] [Commented] (SPARK-13352) BlockFetch does not scale well on large block
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15245209#comment-15245209 ]

Davies Liu commented on SPARK-13352:
------------------------------------

corrected, thanks

> BlockFetch does not scale well on large block
> ---------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>            Reporter: Davies Liu
>            Assignee: Zhang, Liye
>            Priority: Critical
>             Fix For: 1.6.2, 2.0.0
>
> BlockManager.getRemoteBytes() performs poorly on large blocks:
> {code}
> test("block manager") {
>   val N = 500 << 20
>   val bm = sc.env.blockManager
>   val blockId = TaskResultBlockId(0)
>   val buffer = ByteBuffer.allocate(N)
>   buffer.limit(N)
>   bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>   val result = bm.getRemoteBytes(blockId)
>   assert(result.isDefined)
>   assert(result.get.limit() === N)
> }
> {code}
> Here are the runtimes for different block sizes:
> {code}
> 50M     3 seconds
> 100M    7 seconds
> 250M    33 seconds
> 500M    2 min
> {code}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236426#comment-15236426 ]

Zhang, Liye commented on SPARK-13352:
-------------------------------------

[~davies], the last result for 500M should be 7.8 seconds, not 7.8 min, right?
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234668#comment-15234668 ]

Apache Spark commented on SPARK-13352:
--------------------------------------

User 'liyezhang556520' has created a pull request for this issue:
https://github.com/apache/spark/pull/12296
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234559#comment-15234559 ]

Davies Liu commented on SPARK-13352:
------------------------------------

The result is much better now:
{code}
50M     2.2 seconds
100M    2.8 seconds
250M    3.7 seconds
500M    7.8 min
{code}
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234488#comment-15234488 ]

Zhang, Liye commented on SPARK-13352:
-------------------------------------

Hi [~davies], I think this JIRA is related to [SPARK-14242|https://issues.apache.org/jira/browse/SPARK-14242] and [SPARK-14290|https://issues.apache.org/jira/browse/SPARK-14290]. Can you test with Spark master again to see if this issue still exists?
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1503#comment-1503 ]

Davies Liu commented on SPARK-13352:
------------------------------------

cc [~adav]
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222199#comment-15222199 ]

Davies Liu commented on SPARK-13352:
------------------------------------

After more investigation, it turned out that the block fetcher in 1.6+ is two times slower than the one in 1.5: it took 44 seconds to fetch a 289M block (22 seconds in 1.5).

In 1.5:
{code}
16/04/01 11:58:33 DEBUG BlockManager: Getting block taskresult_5 from memory
16/04/01 11:58:34 DEBUG TransportClient: Sending fetch chunk request 0 to localhost/127.0.0.1:54202
16/04/01 11:58:35 DEBUG Cleaner0: java.nio.ByteBuffer.cleaner(): available
16/04/01 11:58:56 DEBUG BlockManagerSlaveEndpoint: removing block taskresult_5
16/04/01 11:58:56 DEBUG BlockManager: Removing block taskresult_5
16/04/01 11:58:56 DEBUG MemoryStore: Block taskresult_5 of size 289281861 dropped from memory (free 933912)
16/04/01 11:58:56 INFO BlockManagerInfo: Removed taskresult_5 on localhost:54202 in memory (size: 275.9 MB, free: 2.1 GB)
16/04/01 11:58:56 DEBUG BlockManagerMaster: Updated info of block taskresult_5
16/04/01 11:58:56 DEBUG BlockManager: Told master about block taskresult_5
16/04/01 11:58:56 DEBUG BlockManagerSlaveEndpoint: Done removing block taskresult_5, response is true
16/04/01 11:58:56 DEBUG BlockManagerSlaveEndpoint: Sent response: true to AkkaRpcEndpointRef(Actor[akka://sparkDriver/temp/$I])
{code}

In 1.6 or master:
{code}
16/04/01 11:55:47 DEBUG BlockManager: Getting remote block taskresult_5 as bytes
16/04/01 11:55:47 DEBUG BlockManager: Getting remote block taskresult_5 from BlockManagerId(driver, localhost, 54181)
16/04/01 11:55:47 DEBUG TransportClientFactory: Creating new connection to localhost/127.0.0.1:54181
16/04/01 11:55:47 DEBUG ResourceLeakDetector: -Dio.netty.leakDetectionLevel: simple
16/04/01 11:55:47 DEBUG TransportClientFactory: Connection to localhost/127.0.0.1:54181 successful, running bootstraps...
16/04/01 11:55:47 DEBUG TransportClientFactory: Successfully created connection to localhost/127.0.0.1:54181 after 31 ms (0 ms spent in bootstraps)
16/04/01 11:55:47 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 262144
16/04/01 11:55:47 DEBUG BlockManager: Level for block taskresult_5 is StorageLevel(true, true, false, false, 1)
16/04/01 11:55:47 DEBUG BlockManager: Getting block taskresult_5 from memory
16/04/01 11:55:48 DEBUG TransportClient: Sending fetch chunk request 0 to localhost/127.0.0.1:54181
16/04/01 11:55:58 DEBUG Cleaner0: java.nio.ByteBuffer.cleaner(): available
16/04/01 11:56:31 DEBUG BlockManagerSlaveEndpoint: removing block taskresult_5
16/04/01 11:56:31 DEBUG BlockManager: Removing block taskresult_5
16/04/01 11:56:31 DEBUG MemoryStore: Block taskresult_5 of size 289281861 dropped from memory (free 2851511312)
16/04/01 11:56:31 INFO BlockManagerInfo: Removed taskresult_5 on localhost:54181 in memory (size: 275.9 MB, free: 2.7 GB)
16/04/01 11:56:31 DEBUG BlockManagerMaster: Updated info of block taskresult_5
16/04/01 11:56:31 DEBUG BlockManager: Told master about block taskresult_5
16/04/01 11:56:31 DEBUG BlockManagerSlaveEndpoint: Done removing block taskresult_5, response is true
16/04/01 11:56:31 DEBUG BlockManagerSlaveEndpoint: Sent response: true to 192.168.0.143:54179
{code}
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176594#comment-15176594 ]

Reynold Xin commented on SPARK-13352:
-------------------------------------

I think the proper fix is to break up large blocks into small chunks.
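The chunking idea Reynold describes can be sketched as below. This is an illustrative standalone snippet, not Spark's actual fix: the object name `ChunkedFetch`, the `split` helper, and the 4 MB chunk size are all assumptions made for the example. It slices a large `ByteBuffer` into fixed-size views without copying, which is the shape a fetcher would need to stream a big block as many small chunks:

```scala
import java.nio.ByteBuffer
import scala.collection.mutable.ArrayBuffer

object ChunkedFetch {
  // Hypothetical chunk size; the real fix's granularity may differ.
  val DefaultChunkSize: Int = 4 << 20 // 4 MB

  /** Split a large buffer into fixed-size read-only slices without copying. */
  def split(buffer: ByteBuffer, chunkSize: Int = DefaultChunkSize): Seq[ByteBuffer] = {
    require(chunkSize > 0, "chunk size must be positive")
    val src = buffer.duplicate() // don't disturb the caller's position/limit
    val chunks = ArrayBuffer.empty[ByteBuffer]
    while (src.hasRemaining) {
      val len = math.min(chunkSize, src.remaining())
      val slice = src.slice()
      slice.limit(len) // cap this slice at one chunk
      chunks += slice.asReadOnlyBuffer()
      src.position(src.position() + len) // advance past this chunk
    }
    chunks.toSeq
  }

  def main(args: Array[String]): Unit = {
    val parts = split(ByteBuffer.allocate(10 << 20)) // 10 MB -> 4 + 4 + 2 MB
    println(parts.map(_.remaining()).mkString(", ")) // prints 4194304, 4194304, 2097152
  }
}
```

Because each chunk is a view over the original buffer, memory stays bounded by the source block and each network write handles at most one chunk at a time.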
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176589#comment-15176589 ]

Davies Liu commented on SPARK-13352:
------------------------------------

[~rxin] Can someone help to look into this one? This is one of the reasons broadcast hash join is slow with a large broadcast.