Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17789
@mridulm and @zsxwing thank you so much! This will help us out a lot! Much
appreciated. :)
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17789
@zsxwing Sorry for the delay! Thank you so much for your review. I saw a
bit of your patch and it looks very nice. I have just one question: would it be
a good idea to separate the codecs
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17024
@mridulm Waiting for @tdas and @zsxwing has taken more than a month now. Is
there any other way we can reach them, or is there anyone else who can take a
look at this merge request?
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17024
@mridulm Sure I can add in a file extension based on the codec being used.
But is there a specific use case that adding an extension would solve?
---
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17024
@mridulm I've added a new commit. Thank you for the review! :)
---
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17024#discussion_r102592127
--- Diff: core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala ---
@@ -169,14 +174,24 @@ private[spark] object ReliableCheckpointRDD
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17024
@mridulm Thank you so much! I will definitely update with your suggestions.
---
GitHub user aramesh117 opened a pull request:
https://github.com/apache/spark/pull/17024
[SPARK-19525][CORE] Compressing checkpoints.
Spark's checkpointing performance improves significantly if we compress
checkpoint data, since less data is written to and read back from the
checkpoint directory.
## What changes were proposed in this pull request?
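The idea behind the change is to route checkpoint data through a compression codec on write and decompress it on read. Below is a hypothetical, self-contained illustration of that technique in plain Scala using java.util.zip GZIP streams; it is not Spark's actual CompressionCodec API, and the function names are invented for this sketch.

```scala
import java.io._
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

// Serialize records through a compressing stream, as a checkpoint writer would.
def writeCompressed(records: Seq[String], file: File): Unit = {
  val out = new ObjectOutputStream(new GZIPOutputStream(new FileOutputStream(file)))
  try out.writeObject(records.toList) finally out.close()
}

// Read the checkpoint back, decompressing transparently.
def readCompressed(file: File): Seq[String] = {
  val in = new ObjectInputStream(new GZIPInputStream(new FileInputStream(file)))
  try in.readObject().asInstanceOf[List[String]] finally in.close()
}
```

In the merged change this behavior ended up gated behind a configuration flag (spark.checkpoint.compress in released Spark versions), so existing checkpoints written without compression remain readable by default.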
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/11122
@zsxwing and @zzcclp thank you so much. This is much appreciated. :)
---
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/11122
@zsxwing and @zzcclp please take a look when you can - I've fixed all
conflicts. I've run all tests and verified that all tests relevant to my change
have passed. I would greatly appreciate
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/11122
@zzcclp and @zsxwing I will fix conflicts now.
---
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/11122
@zzcclp This PR was completed and approved by @zsxwing but hasn't been
merged for some reason. I've tried to reach @tdas but he hasn't responded. What
should we do? Now, there appear to be merge conflicts.
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/11122
@zsxwing Seems like @tdas is busy. Can you merge this if possible before
Spark 2.0?
---
Github user aramesh117 commented on the pull request:
https://github.com/apache/spark/pull/11122#issuecomment-196966683
@tdas Have you had a chance to look at the pull request?
---
Github user aramesh117 commented on the pull request:
https://github.com/apache/spark/pull/11122#issuecomment-195154957
I've addressed your comments. Sorry for the delay.
---
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11122#discussion_r55783412
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/dstream/PairDStreamFunctions.scala ---
@@ -452,11 +452,15 @@ class PairDStreamFunctions[K
Github user aramesh117 commented on the pull request:
https://github.com/apache/spark/pull/11122#issuecomment-183791928
@zsxwing Any more issues that you can see?
---
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11122#discussion_r52401410
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/dstream/StateDStream.scala ---
@@ -108,7 +109,8 @@ class StateDStream[K: ClassTag, V
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11122#discussion_r52400939
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/dstream/StateDStream.scala ---
@@ -42,8 +42,8 @@ class StateDStream[K: ClassTag, V
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11122#discussion_r52401878
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/dstream/PairDStreamFunctions.scala ---
@@ -453,8 +453,14 @@ class PairDStreamFunctions[K
Github user aramesh117 commented on the pull request:
https://github.com/apache/spark/pull/11122#issuecomment-182154259
@zsxwing Sure.
---
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11122#discussion_r52401894
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/dstream/PairDStreamFunctions.scala ---
@@ -499,9 +505,35 @@ class PairDStreamFunctions[K
Github user aramesh117 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11122#discussion_r52403725
--- Diff: streaming/src/test/scala/org/apache/spark/streaming/BasicOperationsSuite.scala ---
@@ -467,6 +467,73 @@ class BasicOperationsSuite extends
GitHub user aramesh117 opened a pull request:
https://github.com/apache/spark/pull/11122
[SPARK-13027][STREAMING] Added batch time as a parameter to updateStateByKey
Added RDD batch time as an input parameter to the update function in
updateStateByKey.
You can merge this pull
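The proposed API change can be modeled outside Spark. Below is a hypothetical, self-contained sketch in plain Scala (not Spark's actual DStream API; the function name and shapes are invented for illustration) of an updateStateByKey-style fold whose update function also receives the batch time, as SPARK-13027 proposes:

```scala
type BatchTime = Long

// Fold a sequence of (batchTime, keyedValues) batches into per-key state,
// passing the batch time to the update function alongside the new values
// and the previous state. Returning None from `update` drops the key.
def updateStateByKeyWithTime[K, V, S](
    batches: Seq[(BatchTime, Map[K, Seq[V]])],
    update: (BatchTime, Seq[V], Option[S]) => Option[S]
): Map[K, S] = {
  batches.foldLeft(Map.empty[K, S]) { case (state, (time, batch)) =>
    val keys = state.keySet ++ batch.keySet
    keys.flatMap { k =>
      update(time, batch.getOrElse(k, Seq.empty), state.get(k)).map(k -> _)
    }.toMap
  }
}
```

With the batch time available, the update function can, for example, timestamp each state entry or expire entries older than a threshold, which the time-unaware update signature cannot do directly.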