Github user wypoon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23058#discussion_r237320541
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -718,13 +718,9 @@ private[spark] class BlockManager
Github user wypoon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23058#discussion_r237320424
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -789,21 +785,31 @@ private[spark] class BlockManager
Github user wypoon commented on the issue:
https://github.com/apache/spark/pull/23058
Thanks @squito. I updated the testing section of the PR.
---
Github user wypoon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23058#discussion_r235603798
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -693,9 +693,9 @@ private[spark] class BlockManager
Github user wypoon commented on the issue:
https://github.com/apache/spark/pull/23058
> can we also make the same change to `TaskResultGetter`?
I had a conversation off-line with Imran. As we end up deserializing the
value of the task result into a ByteBuffer any
Github user wypoon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23058#discussion_r235238066
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -693,9 +693,9 @@ private[spark] class BlockManager
GitHub user wypoon opened a pull request:
https://github.com/apache/spark/pull/23058
[SPARK-25905][CORE] When getting a remote block, avoid forcing a conversion
to a ChunkedByteBuffer
## What changes were proposed in this pull request?
In `BlockManager`, `getRemoteValues
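The snippet above is truncated, but the PR title states the intent: when fetching a remote block, avoid forcing the fetched data into a single `ChunkedByteBuffer`. A minimal, hypothetical sketch of that idea, using only `java.nio` (not Spark's actual `BlockManager` API), contrasts a forced contiguous copy with consuming the chunks directly:

```scala
import java.nio.ByteBuffer

// Illustrative only: the fetched block arrives as several chunks.
object ChunkedReadSketch {
  // Simulate a remote block delivered as small chunks.
  def fetchChunks(): Seq[ByteBuffer] =
    Seq("hello ", "chunked ", "world").map(s => ByteBuffer.wrap(s.getBytes("UTF-8")))

  // Forced conversion: merge everything into one contiguous buffer
  // (an extra allocation plus a full copy of the block).
  def toContiguous(chunks: Seq[ByteBuffer]): ByteBuffer = {
    val total = chunks.map(_.remaining).sum
    val out = ByteBuffer.allocate(total)
    chunks.foreach(c => out.put(c.duplicate()))
    out.flip()
    out
  }

  // Avoiding the conversion: operate over the chunks as-is,
  // with no merged copy ever materialized.
  def totalBytes(chunks: Seq[ByteBuffer]): Int = chunks.map(_.remaining).sum
}
```

The point of the sketch is the allocation pattern, not the API: deserializing straight from the chunked form skips the `toContiguous`-style copy entirely.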
Github user wypoon commented on the issue:
https://github.com/apache/spark/pull/22818
Looks good to me. I reran the test that encountered this issue on a secure
cluster after deploying a build with this change, and now it passes.
Github user wypoon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19703#discussion_r150030572
--- Diff:
examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredKafkaWordCount.scala
---
@@ -46,11 +51,13 @@ object
Github user wypoon commented on the issue:
https://github.com/apache/spark/pull/19703
@srowen This change is indeed just a workaround for an underlying problem,
as explained in the JIRA. @zsxwing suggested improving the
StructuredKafkaWordCount example as a workaround. He did
GitHub user wypoon opened a pull request:
https://github.com/apache/spark/pull/19703
[SPARK-22403][SS] Add optional checkpointLocation argument to
StructuredKafkaWordCount example
## What changes were proposed in this pull request?
When run in YARN cluster mode
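The description is cut off, but the PR title says what was added: an optional `checkpointLocation` argument to the `StructuredKafkaWordCount` example. A hedged sketch of that argument-handling pattern (names are illustrative, not the example's exact code):

```scala
// Hypothetical sketch: accept an optional trailing checkpoint-location
// argument, falling back to None (a temporary default) when omitted.
object ArgsSketch {
  // Returns (bootstrapServers, topics, checkpointLocation).
  def parse(args: Array[String]): (String, String, Option[String]) = args match {
    case Array(servers, topics)             => (servers, topics, None)
    case Array(servers, topics, checkpoint) => (servers, topics, Some(checkpoint))
    case _ => throw new IllegalArgumentException(
      "usage: <bootstrap-servers> <topics> [<checkpoint-location>]")
  }
}
```

Making the checkpoint location explicit matters in YARN cluster mode, where a driver-local temporary default is not a usable checkpoint directory across runs.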