[jira] [Comment Edited] (SPARK-13352) BlockFetch does not scale well on large block
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234559#comment-15234559 ]

Davies Liu edited comment on SPARK-13352 at 4/18/16 6:40 AM:
-------------------------------------------------------------

The result is much better now (there is some fixed overhead for tests):

{code}
50M    2.2 seconds
100M   2.8 seconds
250M   3.7 seconds
500M   7.8 seconds
{code}

was (Author: davies):

The result is much better now (there is some fixed overhead for tests):

{code}
50M    2.2 seconds
100M   2.8 seconds
250M   3.7 seconds
500M   7.8 min
{code}

> BlockFetch does not scale well on large block
> ----------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>            Reporter: Davies Liu
>            Assignee: Zhang, Liye
>            Priority: Critical
>             Fix For: 1.6.2, 2.0.0
>
> BlockManager.getRemoteBytes() performs poorly on large blocks:
>
> {code}
> test("block manager") {
>   val N = 500 << 20
>   val bm = sc.env.blockManager
>   val blockId = TaskResultBlockId(0)
>   val buffer = ByteBuffer.allocate(N)
>   buffer.limit(N)
>   bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>   val result = bm.getRemoteBytes(blockId)
>   assert(result.isDefined)
>   assert(result.get.limit() === N)
> }
> {code}
>
> Here are the runtimes for different block sizes:
>
> {code}
> 50M    3 seconds
> 100M   7 seconds
> 250M   33 seconds
> 500M   2 min
> {code}
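As a rough illustration of how such numbers can be gathered, here is a minimal sketch of a timing harness around the repro test from the ticket. It assumes a running SparkContext `sc`, as in the ticket's test (which runs inside Spark's own test scope, where `sc.env` is accessible); the helper name `timeRemoteFetch` is ours, not something from the ticket or the Spark API.

{code}
import java.nio.ByteBuffer
import org.apache.spark.storage.{StorageLevel, TaskResultBlockId}

// Hypothetical helper (our name, not from the ticket): store a block of
// `n` bytes, then time how long BlockManager.getRemoteBytes takes on it.
def timeRemoteFetch(n: Int, id: Int): Double = {
  val bm = sc.env.blockManager
  val blockId = TaskResultBlockId(id)
  val buffer = ByteBuffer.allocate(n)
  buffer.limit(n)
  bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
  val start = System.nanoTime()
  bm.getRemoteBytes(blockId)
  (System.nanoTime() - start) / 1e9  // elapsed seconds
}

// Seq(50, 100, 250, 500).foreach { m =>
//   println(s"${m}M  ${timeRemoteFetch(m << 20, m)} seconds")
// }
{code}

Note that any such measurement includes the fixed test overhead mentioned in the comment above, so the smallest sizes overstate the per-byte cost.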
[jira] [Comment Edited] (SPARK-13352) BlockFetch does not scale well on large block
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234559#comment-15234559 ]

Davies Liu edited comment on SPARK-13352 at 4/11/16 6:35 AM:
-------------------------------------------------------------

The result is much better now (there is some fixed overhead for tests):

{code}
50M    2.2 seconds
100M   2.8 seconds
250M   3.7 seconds
500M   7.8 min
{code}

was (Author: davies):

The result is much better now:

{code}
50M    2.2 seconds
100M   2.8 seconds
250M   3.7 seconds
500M   7.8 min
{code}

> BlockFetch does not scale well on large block
> ----------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>            Reporter: Davies Liu
>            Assignee: Zhang, Liye
>            Priority: Critical
>             Fix For: 1.6.2, 2.0.0
>
> BlockManager.getRemoteBytes() performs poorly on large blocks:
>
> {code}
> test("block manager") {
>   val N = 500 << 20
>   val bm = sc.env.blockManager
>   val blockId = TaskResultBlockId(0)
>   val buffer = ByteBuffer.allocate(N)
>   buffer.limit(N)
>   bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>   val result = bm.getRemoteBytes(blockId)
>   assert(result.isDefined)
>   assert(result.get.limit() === N)
> }
> {code}
>
> Here are the runtimes for different block sizes:
>
> {code}
> 50M    3 seconds
> 100M   7 seconds
> 250M   33 seconds
> 500M   2 min
> {code}
[jira] [Comment Edited] (SPARK-13352) BlockFetch does not scale well on large block
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234488#comment-15234488 ]

Zhang, Liye edited comment on SPARK-13352 at 4/11/16 5:02 AM:
--------------------------------------------------------------

Hi [~davies], I think this JIRA is related to [SPARK-14242|https://issues.apache.org/jira/browse/SPARK-14242] and [SPARK-14290|https://issues.apache.org/jira/browse/SPARK-14290]. Could you test with the Spark master branch again to see whether this issue still exists?

was (Author: liyezhang556520):

Hi [~davies], I think this JIRA is related to [SPARK-14242|https://issues.apache.org/jira/browse/SPARK-14242] and [SPARK-14290|https://issues.apache.org/jira/browse/SPARK-14290]. Could you test with Spark master again to see whether this issue still exists?

> BlockFetch does not scale well on large block
> ----------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>            Reporter: Davies Liu
>            Priority: Critical
>
> BlockManager.getRemoteBytes() performs poorly on large blocks:
>
> {code}
> test("block manager") {
>   val N = 500 << 20
>   val bm = sc.env.blockManager
>   val blockId = TaskResultBlockId(0)
>   val buffer = ByteBuffer.allocate(N)
>   buffer.limit(N)
>   bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>   val result = bm.getRemoteBytes(blockId)
>   assert(result.isDefined)
>   assert(result.get.limit() === N)
> }
> {code}
>
> Here are the runtimes for different block sizes:
>
> {code}
> 50M    3 seconds
> 100M   7 seconds
> 250M   33 seconds
> 500M   2 min
> {code}
[jira] [Comment Edited] (SPARK-13352) BlockFetch does not scale well on large block
[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176594#comment-15176594 ]

Reynold Xin edited comment on SPARK-13352 at 3/2/16 10:19 PM:
--------------------------------------------------------------

I think the proper fix is to break up large blocks into small chunks. Basically, there is no reason we need to transfer a single large, contiguous block of memory.

was (Author: rxin):

I think the proper fix is to break up large blocks into small chunks.

> BlockFetch does not scale well on large block
> ----------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Davies Liu
>
> BlockManager.getRemoteBytes() performs poorly on large blocks:
>
> {code}
> test("block manager") {
>   val N = 500 << 20
>   val bm = sc.env.blockManager
>   val blockId = TaskResultBlockId(0)
>   val buffer = ByteBuffer.allocate(N)
>   buffer.limit(N)
>   bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>   val result = bm.getRemoteBytes(blockId)
>   assert(result.isDefined)
>   assert(result.get.limit() === N)
> }
> {code}
>
> Here are the runtimes for different block sizes:
>
> {code}
> 50M    3 seconds
> 100M   7 seconds
> 250M   33 seconds
> 500M   2 min
> {code}
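For illustration only, here is a minimal sketch of the chunking idea from the comment above: slicing one large ByteBuffer into fixed-size views that could be transferred independently, without copying the underlying bytes. The `ChunkedTransfer` object and the 4 MB chunk size are our assumptions for the sketch; the actual fix that went into Spark chose its own mechanism and limits.

{code}
import java.nio.ByteBuffer

object ChunkedTransfer {
  // Illustrative chunk size only; the real fix picked its own limits.
  val ChunkSize: Int = 4 * 1024 * 1024 // 4 MB

  // Split a large buffer into fixed-size slices that share the same
  // backing content, so no bytes are copied.
  def chunk(buffer: ByteBuffer): Seq[ByteBuffer] = {
    val src = buffer.duplicate() // independent position/limit, shared content
    val chunks = Seq.newBuilder[ByteBuffer]
    while (src.hasRemaining) {
      val size = math.min(ChunkSize, src.remaining())
      val slice = src.slice() // view starting at src's current position
      slice.limit(size)
      chunks += slice
      src.position(src.position() + size)
    }
    chunks.result()
  }
}

// e.g. chunking the 500 MB buffer from the repro test yields 125 slices:
// ChunkedTransfer.chunk(ByteBuffer.allocate(500 << 20)).size == 125
{code}

Sending many bounded chunks instead of one 500 MB buffer keeps any single network write and any single allocation small, which is the scaling property the comment is after.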