Repository: spark
Updated Branches:
refs/heads/branch-1.6 ab7da0eae -> d33f18c42
[SPARK-11333][STREAMING] Add executorId to ReceiverInfo and display it in UI
Expose executorId to `ReceiverInfo` and the UI since it's helpful when there are
multiple executors running on the same host. Screenshot:
Repository: spark
Updated Branches:
refs/heads/master 6502944f3 -> 1431319e5
Add mockito as an explicit test dependency to spark-streaming
While sbt compiles successfully since it properly pulls in the mockito
dependency, Maven builds are broken. We need this in, ASAP.
tdas
Author: Burak Ya
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d33f18c42 -> d6f4b56a6
Add mockito as an explicit test dependency to spark-streaming
While sbt compiles successfully since it properly pulls in the mockito
dependency, Maven builds are broken. We need this in, ASAP.
tdas
Author: Bu
- [x] state creating, updating, removing
- [ ] emitting
- [ ] checkpointing
- [x] Misc unit tests for State, TrackStateSpec, etc.
- [x] Update docs and experimental tags
Author: Tathagata Das
Closes #9256 from tdas/trackStateByKey.
Project: http://git-wip-us.apache.org/repos
Repository: spark
Updated Branches:
refs/heads/branch-1.6 85bc72908 -> daa74be6f
http://git-wip-us.apache.org/repos/asf/spark/blob/daa74be6/streaming/src/test/scala/org/apache/spark/streaming/rdd/TrackStateRDDSuite.scala
--
dif
Repository: spark
Updated Branches:
refs/heads/master bd70244b3 -> 99f5f9886
http://git-wip-us.apache.org/repos/asf/spark/blob/99f5f988/streaming/src/test/scala/org/apache/spark/streaming/rdd/TrackStateRDDSuite.scala
--
diff --
- [x] state creating, updating, removing
- [ ] emitting
- [ ] checkpointing
- [x] Misc unit tests for State, TrackStateSpec, etc.
- [x] Update docs and experimental tags
Author: Tathagata Das
Closes #9256 from tdas/trackStateByKey.
(cherry picked from commit
t a similar problem, but missed it here :( Submitting the
fix using a waiter.
cc tdas
Author: Burak Yavuz
Closes #9605 from brkyvz/fix-flaky-test.
(cherry picked from commit 27029bc8f6246514bd0947500c94cf38dc8616c3)
Signed-off-by: Tathagata Das
Project: http://git-wip-us.apache.org/repos/asf/sp
lar problem, but missed it here :( Submitting the
fix using a waiter.
cc tdas
Author: Burak Yavuz
Closes #9605 from brkyvz/fix-flaky-test.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/27029bc8
Tree: http://git-
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d6d31815f -> f7c6c95f9
[SPARK-11335][STREAMING] update kafka direct python docs on how to get the
offset ranges for a KafkaRDD
tdas koeninger
This updates the Spark Streaming + Kafka Integration Guide doc with a working
method
Repository: spark
Updated Branches:
refs/heads/master a9a6b80c7 -> dd77e278b
[SPARK-11335][STREAMING] update kafka direct python docs on how to get the
offset ranges for a KafkaRDD
tdas koeninger
This updates the Spark Streaming + Kafka Integration Guide doc with a working
method to acc
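The documented pattern for reading offset ranges from a direct Kafka stream can be sketched in plain Python. The classes below are stand-ins for pyspark/kafka objects; in a real job, `stream.transform(store_offset_ranges)` receives a KafkaRDD whose `offsetRanges()` returns the ranges consumed in that batch:

```python
# Sketch of the transform-based offset-range capture described above.
# FakeKafkaRDD and OffsetRange are stand-ins, not real pyspark objects.
from collections import namedtuple

OffsetRange = namedtuple("OffsetRange", "topic partition from_offset until_offset")

class FakeKafkaRDD:
    def __init__(self, ranges):
        self._ranges = ranges
    def offsetRanges(self):
        return self._ranges

offset_ranges = []

def store_offset_ranges(rdd):
    # Capture the ranges on the driver, then pass the RDD through unchanged.
    global offset_ranges
    offset_ranges = rdd.offsetRanges()
    return rdd

rdd = FakeKafkaRDD([OffsetRange("logs", 0, 100, 150)])
store_offset_ranges(rdd)
for r in offset_ranges:
    print(r.topic, r.partition, r.from_offset, r.until_offset)
```

In PySpark, the same capture function would be attached with `stream.transform(...)` before the output operation, so the ranges are available when each batch's results are processed.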
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f5c66d163 -> 340ca9e76
[SPARK-11290][STREAMING][TEST-MAVEN] Fix the test for maven build
Should not create SparkContext in the constructor of `TrackStateRDDSuite`. This
is a follow up PR for #9256 to fix the test for maven build.
Auth
Repository: spark
Updated Branches:
refs/heads/master 767d288b6 -> f0d3b58d9
[SPARK-11290][STREAMING][TEST-MAVEN] Fix the test for maven build
Should not create SparkContext in the constructor of `TrackStateRDDSuite`. This
is a follow up PR for #9256 to fix the test for maven build.
Author:
Repository: spark
Updated Branches:
refs/heads/master 41bbd2300 -> 0f1d00a90
[SPARK-11663][STREAMING] Add Java API for trackStateByKey
TODO
- [x] Add Java API
- [x] Add API tests
- [x] Add a function test
Author: Shixiong Zhu
Closes #9636 from zsxwing/java-track.
Project: http://git-wip-u
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6c1bf19e8 -> 05666e09b
[SPARK-11663][STREAMING] Add Java API for trackStateByKey
TODO
- [x] Add Java API
- [x] Add API tests
- [x] Add a function test
Author: Shixiong Zhu
Closes #9636 from zsxwing/java-track.
(cherry picked from co
Repository: spark
Updated Branches:
refs/heads/master 0f1d00a90 -> 7786f9cc0
[SPARK-11419][STREAMING] Parallel recovery for FileBasedWriteAheadLog + minor
recovery tweaks
The support for closing WriteAheadLog files after writes was just merged in.
Closing every file after a write is a very e
Repository: spark
Updated Branches:
refs/heads/branch-1.6 05666e09b -> 199e4cb21
[SPARK-11419][STREAMING] Parallel recovery for FileBasedWriteAheadLog + minor
recovery tweaks
The support for closing WriteAheadLog files after writes was just merged in.
Closing every file after a write is a ve
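The recovery idea can be sketched as reading the log's segment files concurrently instead of one at a time. The file layout and names below are invented for illustration, not Spark's actual WAL format:

```python
# Hedged sketch: recover write-ahead-log segments in parallel with a
# thread pool, closing each file promptly after it is read.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_segment(path):
    with open(path, "rb") as f:   # file is closed as soon as it is read
        return f.read()

def recover(paths, parallelism=4):
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        # map() preserves input order, so replay order stays deterministic
        return list(pool.map(read_segment, paths))

tmp = tempfile.mkdtemp()
paths = []
for i in range(3):
    p = os.path.join(tmp, "log-%d" % i)
    with open(p, "wb") as f:
        f.write(b"segment-%d" % i)
    paths.append(p)

recovered = recover(paths)
```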
is defined as "no data for a while",
not "no state update for a while".
Fix: update the timestamp only when a timeout is specified; otherwise it is not
needed.
Also refactored the code for better testability and added unit tests.
Author: Tathagata Das
Closes #9648 from tdas/SPARK
is defined as "no data for a while",
not "no state update for a while".
Fix: update the timestamp only when a timeout is specified; otherwise it is not
needed.
Also refactored the code for better testability and added unit tests.
Author: Tathagata Das
Closes #9648 from tda
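The fix's idea can be modeled in a few lines: touch a key's last-seen timestamp only when a timeout is configured, since without a timeout the bookkeeping serves no purpose. This is a toy model, not Spark's actual StateMap:

```python
# Toy state map illustrating the fix described above: timestamps are
# tracked only when a timeout is configured.
class StateMap:
    def __init__(self, timeout_ms=None):
        self.timeout_ms = timeout_ms
        self.data = {}   # key -> (state, last_update_ms or None)

    def put(self, key, state, now_ms):
        if self.timeout_ms is not None:
            self.data[key] = (state, now_ms)   # needed to detect idle keys
        else:
            self.data[key] = (state, None)     # no timeout: skip the timestamp

    def expired(self, now_ms):
        if self.timeout_ms is None:
            return []
        return [k for k, (_, ts) in self.data.items()
                if now_ms - ts > self.timeout_ms]

m = StateMap(timeout_ms=1000)
m.put("a", 1, now_ms=0)
m.put("b", 2, now_ms=500)
print(m.expired(now_ms=1200))   # ['a']
```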
Repository: spark
Updated Branches:
refs/heads/master ad960885b -> ec80c0c2f
[SPARK-11706][STREAMING] Fix the bug that Streaming Python tests cannot report
failures
This PR just checks the test results and returns 1 if the test fails, so that
`run-tests.py` can mark it as failed.
Author: Shixion
Repository: spark
Updated Branches:
refs/heads/branch-1.6 aff44f9a8 -> c3da2bd46
[SPARK-11706][STREAMING] Fix the bug that Streaming Python tests cannot report
failures
This PR just checks the test results and returns 1 if the test fails, so that
`run-tests.py` can mark it as failed.
Author: Shi
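The idea can be sketched in plain Python (the names here are hypothetical; the real PR touches Spark's streaming Python test harness): run the suite, then report failure through the process exit code so a wrapper script can detect it.

```python
# Illustrative sketch: return a nonzero exit code on test failure so an
# outer runner (like run-tests.py) can mark the build as failed.
import sys
import unittest

class ExampleTest(unittest.TestCase):
    def test_passes(self):
        self.assertEqual(1 + 1, 2)

def run_and_report():
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    # 0 on success, 1 on any failure or error.
    return 0 if result.wasSuccessful() else 1

exit_code = run_and_report()
# A real harness would then call sys.exit(exit_code).
```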
Repository: spark
Updated Branches:
refs/heads/master b0c3fd34e -> de5e531d3
[SPARK-11731][STREAMING] Enable batching on Driver WriteAheadLog by default
Using batching on the driver for the WriteAheadLog should be an improvement for
all environments and use cases. Users will be able to scale
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f14fb291d -> 38673d7e6
[SPARK-11731][STREAMING] Enable batching on Driver WriteAheadLog by default
Using batching on the driver for the WriteAheadLog should be an improvement for
all environments and use cases. Users will be able to sc
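The batching idea can be illustrated with a toy writer that queues records and flushes them as one aggregated write, trading a little latency for far fewer filesystem operations. This is a sketch of the concept only, not Spark's BatchedWriteAheadLog:

```python
# Toy driver-side WAL batching: many logical writes become one physical write.
class BatchedWriteAheadLog:
    def __init__(self, flush_every=5):
        self.flush_every = flush_every
        self._buffer = []
        self.writes = []   # each element represents one physical write

    def write(self, record):
        self._buffer.append(record)
        if len(self._buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        if self._buffer:
            self.writes.append(list(self._buffer))   # single aggregated write
            self._buffer.clear()

wal = BatchedWriteAheadLog(flush_every=3)
for i in range(7):
    wal.write("event-%d" % i)
wal.flush()
print([len(w) for w in wal.writes])   # [3, 3, 1]
```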
Repository: spark
Updated Branches:
refs/heads/master de5e531d3 -> ace0db471
[SPARK-6328][PYTHON] Python API for StreamingListener
Author: Daniel Jalova
Closes #9186 from djalova/SPARK-6328.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/rep
Repository: spark
Updated Branches:
refs/heads/branch-1.6 38673d7e6 -> c83177d30
[SPARK-6328][PYTHON] Python API for StreamingListener
Author: Daniel Jalova
Closes #9186 from djalova/SPARK-6328.
(cherry picked from commit ace0db47141ffd457c2091751038fc291f6d5a8b)
Signed-off-by: Tathagata Da
Repository: spark
Updated Branches:
refs/heads/master 3c025087b -> bcea0bfda
[SPARK-11742][STREAMING] Add the failure info to the batch lists
https://cloud.githubusercontent.com/assets/1000778/11162322/9b88e204-8a51-11e5-8c57-a44889cab713.png
Author: Shixiong Zhu
Closes #9711 from zsxwin
Repository: spark
Updated Branches:
refs/heads/branch-1.6 64439f7d6 -> 3bd72eafc
[SPARK-11742][STREAMING] Add the failure info to the batch lists
https://cloud.githubusercontent.com/assets/1000778/11162322/9b88e204-8a51-11e5-8c57-a44889cab713.png
Author: Shixiong Zhu
Closes #9711 from zs
Repository: spark
Updated Branches:
refs/heads/master 936bc0bcb -> 928d63162
[SPARK-11740][STREAMING] Fix the race condition of two checkpoints in a batch
We will do checkpoint when generating a batch and completing a batch. When the
processing time of a batch is greater than the batch interv
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1a5dfb706 -> fa9d56f9e
[SPARK-11740][STREAMING] Fix the race condition of two checkpoints in a batch
We will do checkpoint when generating a batch and completing a batch. When the
processing time of a batch is greater than the batch in
Repository: spark
Updated Branches:
refs/heads/branch-1.5 bdcbbdac6 -> e26dc9642
[SPARK-11740][STREAMING] Fix the race condition of two checkpoints in a batch
We will do checkpoint when generating a batch and completing a batch. When the
processing time of a batch is greater than the batch in
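The race can be pictured with a toy checkpoint writer that refuses to write a checkpoint for a batch time it has already covered; Spark's actual fix is more involved, so treat this as an illustration of the invariant, not the implementation:

```python
# Toy CheckpointWriter: skip a checkpoint if a newer-or-equal batch time
# has already been written, so the generate-time and complete-time writes
# for the same batch cannot race each other.
class CheckpointWriter:
    def __init__(self):
        self.written = []   # batch times actually checkpointed
        self._latest = None

    def write(self, batch_time):
        if self._latest is not None and batch_time <= self._latest:
            return False    # stale or duplicate checkpoint: skip
        self._latest = batch_time
        self.written.append(batch_time)
        return True

w = CheckpointWriter()
w.write(1000)   # checkpoint on batch generation
w.write(1000)   # duplicate on batch completion: skipped
w.write(2000)
print(w.written)   # [1000, 2000]
```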
Repository: spark
Updated Branches:
refs/heads/branch-1.5 e26dc9642 -> f33e277f9
[HOTFIX][STREAMING] Add mockito to fix the compilation error
Added mockito to the test scope to fix the compilation error in branch 1.5
Author: Shixiong Zhu
Closes #9782 from zsxwing/1.5-hotfix.
Project: htt
Repository: spark
Updated Branches:
refs/heads/master b362d50fc -> 75a292291
[SPARK-9065][STREAMING][PYSPARK] Add MessageHandler for Kafka Python API
Fixed the merge conflicts in #7410
Closes #7410
Author: Shixiong Zhu
Author: jerryshao
Author: jerryshao
Closes #9742 from zsxwing/pr7410.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3133d8bd1 -> a7fcc3117
[SPARK-9065][STREAMING][PYSPARK] Add MessageHandler for Kafka Python API
Fixed the merge conflicts in #7410
Closes #7410
Author: Shixiong Zhu
Author: jerryshao
Author: jerryshao
Closes #9742 from zsxwing/pr7
Repository: spark
Updated Branches:
refs/heads/master 8fb775ba8 -> 446738e51
[SPARK-11761] Prevent the call to StreamingContext#stop() in the listener bus's
thread
See discussion toward the tail of https://github.com/apache/spark/pull/9723
From zsxwing:
```
The user should not call stop or
```
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c13f72316 -> 737f07172
[SPARK-11761] Prevent the call to StreamingContext#stop() in the listener bus's
thread
See discussion toward the tail of https://github.com/apache/spark/pull/9723
From zsxwing:
```
The user should not call stop
```
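The guard can be sketched with toy threading classes (illustrative names, not Spark's actual StreamingContext): stop() joins the listener bus thread, so calling it from that thread would self-deadlock, and it is better to fail fast.

```python
# Toy sketch of the guard described above.
import threading

class StreamingContext:
    def __init__(self):
        # Stand-in for the listener bus's dedicated delivery thread.
        self.listener_thread = threading.Thread(target=lambda: None,
                                                name="listener-bus")

    def stop(self):
        # stop() would join the listener thread; fail fast instead of
        # deadlocking when called from that thread itself.
        if threading.current_thread() is self.listener_thread:
            raise RuntimeError(
                "Cannot call StreamingContext.stop() from the listener bus thread")
        return "stopped"

ssc = StreamingContext()
result = ssc.stop()   # fine from the main thread

errors = []
def stop_from_listener():
    try:
        ssc.stop()
    except RuntimeError as e:
        errors.append(str(e))

# Simulate a listener calling stop() on the bus thread itself.
ssc.listener_thread = threading.Thread(target=stop_from_listener,
                                       name="listener-bus")
ssc.listener_thread.start()
ssc.listener_thread.join()
```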
Repository: spark
Updated Branches:
refs/heads/master 94624eacb -> 31921e0f0
[SPARK-4557][STREAMING] Spark Streaming foreachRDD Java API method should
accept a VoidFunction<...>
Currently streaming foreachRDD Java API uses a function prototype requiring a
return value of null. This PR depre
Repository: spark
Updated Branches:
refs/heads/branch-1.6 899106cc6 -> c130b8626
[SPARK-4557][STREAMING] Spark Streaming foreachRDD Java API method should
accept a VoidFunction<...>
Currently streaming foreachRDD Java API uses a function prototype requiring a
return value of null. This PR d
interval =
batch interval, and RDDs get checkpointed every batch.
This PR is to set the checkpoint interval of trackStateByKey to 10 * batch
duration.
Author: Tathagata Das
Closes #9805 from tdas/SPARK-11814.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.ap
interval =
batch interval, and RDDs get checkpointed every batch.
This PR is to set the checkpoint interval of trackStateByKey to 10 * batch
duration.
Author: Tathagata Das
Closes #9805 from tdas/SPARK-11814.
(cherry picked from commit a402c92c92b2e1c85d264f6077aec8f6d6a08270)
Signed-off-by: T
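The arithmetic behind the change is simple and worth making explicit: default to 10x the batch duration, and since a DStream can only checkpoint on batch boundaries, any requested interval should round up to a whole number of batches. The helper below is a hypothetical illustration:

```python
# Sketch of the checkpoint-interval default described above.
import math

def checkpoint_interval(batch_ms, requested_ms=None):
    """Default to 10x the batch duration; otherwise round the requested
    interval up to a multiple of the batch duration, since checkpoints
    happen only on batch boundaries."""
    if requested_ms is None:
        return 10 * batch_ms
    return math.ceil(requested_ms / batch_ms) * batch_ms

print(checkpoint_interval(2000))         # 20000
print(checkpoint_interval(2000, 3000))   # 4000
```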
solution would be to implement a custom mockito matcher that sorts and
then compares the results, but that kind of sounds like overkill to me. Let me
know what you think tdas zsxwing
Author: Burak Yavuz
Closes #9790 from brkyvz/fix-flaky-2.
Project: http://git-wip-us.apache.org/repos/asf/s
Another solution would be to implement a custom mockito matcher that sorts and
then compares the results, but that kind of sounds like overkill to me. Let me
know what you think tdas zsxwing
Author: Burak Yavuz
Closes #9790 from brkyvz/fix-flaky-2.
(cherry picked fr
Repository: spark
Updated Branches:
refs/heads/master 470007453 -> 599a8c6e2
[SPARK-11812][PYSPARK] invFunc=None works properly with python's
reduceByKeyAndWindow
invFunc is optional and can be None. Instead of invFunc (the parameter)
invReduceFunc (a local function) was checked for trueness
Repository: spark
Updated Branches:
refs/heads/branch-1.5 9957925e4 -> 001c44667
[SPARK-11812][PYSPARK] invFunc=None works properly with python's
reduceByKeyAndWindow
invFunc is optional and can be None. Instead of invFunc (the parameter)
invReduceFunc (a local function) was checked for true
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fdffc400c -> abe393024
[SPARK-11812][PYSPARK] invFunc=None works properly with python's
reduceByKeyAndWindow
invFunc is optional and can be None. Instead of invFunc (the parameter)
invReduceFunc (a local function) was checked for true
Repository: spark
Updated Branches:
refs/heads/branch-1.4 eda1ff4ee -> 5118abb4e
[SPARK-11812][PYSPARK] invFunc=None works properly with python's
reduceByKeyAndWindow
invFunc is optional and can be None. Instead of invFunc (the parameter)
invReduceFunc (a local function) was checked for true
Repository: spark
Updated Branches:
refs/heads/branch-1.3 5278ef0f1 -> 387d81891
[SPARK-11812][PYSPARK] invFunc=None works properly with python's
reduceByKeyAndWindow
invFunc is optional and can be None. Instead of invFunc (the parameter)
invReduceFunc (a local function) was checked for true
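The bug can be reproduced in miniature: the guard tested a local wrapper function for truthiness (a function object is always truthy) instead of testing the `invFunc` parameter itself. Names below are illustrative, not PySpark's actual internals:

```python
# Minimal reproduction of the invFunc=None bug described above.
def reduce_by_key_and_window_plan(func, invFunc=None):
    def invReduceFunc(a, b):          # local wrapper: always truthy
        return invFunc(a, b)
    # Buggy version: `if invReduceFunc:` was always True, so the inverse
    # (incremental) path ran even when invFunc was None.
    if invFunc is not None:           # the fix: test the parameter itself
        return "incremental-window"
    return "recompute-window"

plan_no_inverse = reduce_by_key_and_window_plan(lambda a, b: a + b)
plan_with_inverse = reduce_by_key_and_window_plan(lambda a, b: a + b,
                                                  lambda a, b: a - b)
print(plan_no_inverse)     # recompute-window
print(plan_with_inverse)   # incremental-window
```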
Repository: spark
Updated Branches:
refs/heads/branch-1.5 9a906c1c3 -> e9ae1fda9
[SPARK-11870][STREAMING][PYSPARK] Rethrow the exceptions in TransformFunction
and TransformFunctionSerializer
TransformFunction and TransformFunctionSerializer don't rethrow the exception,
so when any exception
Repository: spark
Updated Branches:
refs/heads/master 9ed4ad426 -> be7a2cfd9
[SPARK-11870][STREAMING][PYSPARK] Rethrow the exceptions in TransformFunction
and TransformFunctionSerializer
TransformFunction and TransformFunctionSerializer don't rethrow the exception,
so when any exception happ
Repository: spark
Updated Branches:
refs/heads/branch-1.6 9c8e17984 -> 0c23dd52d
[SPARK-11870][STREAMING][PYSPARK] Rethrow the exceptions in TransformFunction
and TransformFunctionSerializer
TransformFunction and TransformFunctionSerializer don't rethrow the exception,
so when any exception
Repository: spark
Updated Branches:
refs/heads/branch-1.4 5118abb4e -> 94789f374
[SPARK-11870][STREAMING][PYSPARK] Rethrow the exceptions in TransformFunction
and TransformFunctionSerializer
TransformFunction and TransformFunctionSerializer don't rethrow the exception,
so when any exception
Repository: spark
Updated Branches:
refs/heads/branch-1.6 b4cf318ab -> 849ddb6ae
[SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and
TransformFunctionSerializer to Java
The Python exception track in TransformFunction and TransformFunctionSerializer
is not sent back to
Repository: spark
Updated Branches:
refs/heads/master 88875d941 -> d29e2ef4c
[SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and
TransformFunctionSerializer to Java
The Python exception track in TransformFunction and TransformFunctionSerializer
is not sent back to Java
Repository: spark
Updated Branches:
refs/heads/master 2d6612cc8 -> cfdadcbd2
[SPARK-7430] [STREAMING] [TEST] General improvements to streaming tests to
increase debuggability
Author: Tathagata Das
Closes #5961 from tdas/SPARK-7430 and squashes the following commits:
d654978 [Tathagata
Repository: spark
Updated Branches:
refs/heads/branch-1.4 2337ccc15 -> 065d114c6
[SPARK-7430] [STREAMING] [TEST] General improvements to streaming tests to
increase debuggability
Author: Tathagata Das
Closes #5961 from tdas/SPARK-7430 and squashes the following commits:
d654978 [Tathag
the existing behavior
does not change for existing users.
Author: Tathagata Das
Closes #5929 from tdas/SPARK-7217 and squashes the following commits:
869a763 [Tathagata Das] Changed implementation.
685fe00 [Tathagata Das] Added configuration
Project: http://git-wip-us.apache.org/repos/asf/sp
hat the existing behavior
does not change for existing users.
Author: Tathagata Das
Closes #5929 from tdas/SPARK-7217 and squashes the following commits:
869a763 [Tathagata Das] Changed implementation.
685fe00 [Tathagata Das] Added configuration
(cherry picked from com
Repository: spark
Updated Branches:
refs/heads/branch-1.4 99897fe3e -> 2e8a141b5
[SPARK-7305] [STREAMING] [WEBUI] Make BatchPage show friendly information when
jobs are dropped by SparkListener
If jobs are dropped by SparkListener, at least we can show the job ids in
BatchPage. Screenshot:
Repository: spark
Updated Branches:
refs/heads/master 88063c626 -> 22ab70e06
[SPARK-7305] [STREAMING] [WEBUI] Make BatchPage show friendly information when
jobs are dropped by SparkListener
If jobs are dropped by SparkListener, at least we can show the job ids in
BatchPage. Screenshot:
![b1
Repository: spark
Updated Branches:
refs/heads/branch-1.3 f2b138dfb -> 2dc3ca67a
[SPARK-7341] [STREAMING] [TESTS] Fix the flaky test:
org.apache.spark.streaming.InputStreamsSuite.socket input stream (backport for
branch 1.3)
Remove non-deterministic "Thread.sleep" and use deterministic strat
s a confusing exception that the action name JobScheduler is already
registered. Instead it's best to throw a proper exception as it is not supported.
Author: Tathagata Das
Closes #5907 from tdas/SPARK-7361 and squashes the following commits:
fb81c4a [Tathagata Das] Fix typo
a9cd5bb [Tathagata Das] Ad
ing exception that the action name JobScheduler is already
registered. Instead it's best to throw a proper exception as it is not supported.
Author: Tathagata Das
Closes #5907 from tdas/SPARK-7361 and squashes the following commits:
fb81c4a [Tathagata Das] Fix typo
a9cd5bb [Tathagata Das] Ad
Repository: spark
Updated Branches:
refs/heads/master 8e674331d -> 25c01c548
[STREAMING] [MINOR] Close files correctly when iterator is finished in
streaming WAL recovery
Currently there's no chance to close the file correctly after the iteration is
finished; change to `CompletionIterator` t
Repository: spark
Updated Branches:
refs/heads/branch-1.4 1538b10e8 -> 9e226e1c8
[STREAMING] [MINOR] Close files correctly when iterator is finished in
streaming WAL recovery
Currently there's no chance to close the file correctly after the iteration is
finished; change to `CompletionIterato
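The pattern referenced above is easy to sketch: wrap an iterator so a cleanup callback (such as closing the WAL file) runs exactly once when iteration is exhausted. This is a toy Python version of the Scala utility:

```python
# Toy CompletionIterator: run a completion callback once when the wrapped
# iterator is exhausted (e.g. to close the underlying file).
class CompletionIterator:
    def __init__(self, it, on_complete):
        self._it = iter(it)
        self._on_complete = on_complete
        self._done = False

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._it)
        except StopIteration:
            if not self._done:
                self._done = True
                self._on_complete()   # cleanup runs exactly once
            raise

closed = []
records = CompletionIterator([b"rec1", b"rec2"], lambda: closed.append(True))
out = list(records)
print(out, closed)
```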
Repository: spark
Updated Branches:
refs/heads/master 35fb42a0b -> f9c7580ad
[SPARK-7530] [STREAMING] Added StreamingContext.getState() to expose the
current state of the context
Author: Tathagata Das
Closes #6058 from tdas/SPARK-7530 and squashes the following commits:
80ee0e6 [Tathag
Repository: spark
Updated Branches:
refs/heads/branch-1.4 f18881598 -> c16b47f9e
[SPARK-7530] [STREAMING] Added StreamingContext.getState() to expose the
current state of the context
Author: Tathagata Das
Closes #6058 from tdas/SPARK-7530 and squashes the following commits:
80ee
Repository: spark
Updated Branches:
refs/heads/branch-1.4 56016326c -> 2bbb685f4
[SPARK-7532] [STREAMING] StreamingContext.start() made to logWarning and not
throw exception
Author: Tathagata Das
Closes #6060 from tdas/SPARK-7532 and squashes the following commits:
6fe2e83 [Tathagata
Repository: spark
Updated Branches:
refs/heads/master f3e8e6006 -> ec6f2a977
[SPARK-7532] [STREAMING] StreamingContext.start() made to logWarning and not
throw exception
Author: Tathagata Das
Closes #6060 from tdas/SPARK-7532 and squashes the following commits:
6fe2e83 [Tathagata
hubusercontent.com/assets/1000778/7504129/9c57f710-f3fc-11e4-9c6e-1b79c17c546d.png)
![screen shot 2015-05-06 at 2 24 35
pm](https://cloud.githubusercontent.com/assets/1000778/7504140/b63bb216-f3fc-11e4-83a5-6dfc6481d192.png)
tdas as we discussed offline
Author: zsxwing
Closes #5952 from zs
hubusercontent.com/assets/1000778/7504129/9c57f710-f3fc-11e4-9c6e-1b79c17c546d.png)
![screen shot 2015-05-06 at 2 24 35
pm](https://cloud.githubusercontent.com/assets/1000778/7504140/b63bb216-f3fc-11e4-83a5-6dfc6481d192.png)
tdas as we discussed offline
Author: zsxwing
Closes #5952 from zs
ort in the Scala API only so that it can be used in Scala
REPLs and notebooks.
Author: Tathagata Das
Closes #6070 from tdas/SPARK-7553 and squashes the following commits:
731c9a1 [Tathagata Das] Fixed style
a797171 [Tathagata Das] Added more unit tests
19fc70b [Tathagata Das] Added :: Experimen
in the Scala API only so that it can be used in Scala
REPLs and notebooks.
Author: Tathagata Das
Closes #6070 from tdas/SPARK-7553 and squashes the following commits:
731c9a1 [Tathagata Das] Fixed style
a797171 [Tathagata Das] Added more unit tests
19fc70b [Tathagata Das] Added :: Experimen
Repository: spark
Updated Branches:
refs/heads/master 2713bc65a -> 23f7d66d5
[SPARK-7554] [STREAMING] Throw exception when an active/stopped
StreamingContext is used to create DStreams and output operations
Author: Tathagata Das
Closes #6099 from tdas/SPARK-7554 and squashes the follow
Repository: spark
Updated Branches:
refs/heads/branch-1.4 6c292a213 -> bb81b1500
[SPARK-7554] [STREAMING] Throw exception when an active/stopped
StreamingContext is used to create DStreams and output operations
Author: Tathagata Das
Closes #6099 from tdas/SPARK-7554 and squashes
Repository: spark
Updated Branches:
refs/heads/master 0da254fb2 -> bec938f77
[SPARK-7589] [STREAMING] [WEBUI] Make "Input Rate" in the Streaming page
consistent with other pages
This PR makes "Input Rate" in the Streaming page consistent with Job and Stage
pages.
![screen shot 2015-05-12 at
Repository: spark
Updated Branches:
refs/heads/branch-1.4 42cf4a2a5 -> 10007fbe0
[SPARK-7589] [STREAMING] [WEBUI] Make "Input Rate" in the Streaming page
consistent with other pages
This PR makes "Input Rate" in the Streaming page consistent with Job and Stage
pages.
![screen shot 2015-05-1
ing SparkContext when the StreamingContext
tries to create a SparkContext.
Author: Tathagata Das
Closes #6096 from tdas/SPARK-6752 and squashes the following commits:
53f4b2d [Tathagata Das] Merge remote-tracking branch 'apache-github/master'
into SPARK-6752
f024b77 [Tathagata Das] Removed
ing SparkContext when the StreamingContext
tries to create a SparkContext.
Author: Tathagata Das
Closes #6096 from tdas/SPARK-6752 and squashes the following commits:
53f4b2d [Tathagata Das] Merge remote-tracking branch 'apache-github/master'
into SPARK-6752
f024b77 [Tathagata Das] Removed extra
ify kinesis application name explicitly
SPARK-7679 - Upgrade to latest KCL and AWS SDK.
Author: Tathagata Das
Closes #6147 from tdas/kinesis-api-update and squashes the following commits:
f23ea77 [Tathagata Das] Updated versions and updated APIs
373b201 [Tathagata Das] Updated Kinesis API
Proj
ify kinesis application name explicitly
SPARK-7679 - Upgrade to latest KCL and AWS SDK.
Author: Tathagata Das
Closes #6147 from tdas/kinesis-api-update and squashes the following commits:
f23ea77 [Tathagata Das] Updated versions and updated APIs
373b201 [Tathagata Das] Updated Kinesis API
(che
Repository: spark
Updated Branches:
refs/heads/master 32fbd297d -> 0b6f503d5
[SPARK-7658] [STREAMING] [WEBUI] Update the mouse behaviors for the timeline
graphs
1. If the user clicks one point of a batch, scroll down to the corresponding
batch row and highlight it. And recover the batch row
Repository: spark
Updated Branches:
refs/heads/branch-1.4 a8332098c -> 39add3dd5
[SPARK-7658] [STREAMING] [WEBUI] Update the mouse behaviors for the timeline
graphs
1. If the user clicks one point of a batch, scroll down to the corresponding
batch row and highlight it. And recover the batch
Repository: spark
Updated Branches:
refs/heads/master 4fb52f954 -> 0a7a94eab
[SPARK-7621] [STREAMING] Report Kafka errors to StreamingListeners
PR per [SPARK-7621](https://issues.apache.org/jira/browse/SPARK-7621), which
makes both `KafkaReceiver` and `ReliableKafkaReceiver` report their errors
Repository: spark
Updated Branches:
refs/heads/branch-1.4 60cb33d12 -> 9188ad8dd
[SPARK-7621] [STREAMING] Report Kafka errors to StreamingListeners
PR per [SPARK-7621](https://issues.apache.org/jira/browse/SPARK-7621), which
makes both `KafkaReceiver` and `ReliableKafkaReceiver` report their er
Repository: spark
Updated Branches:
refs/heads/branch-1.3 0d4cd30b8 -> fc1b4a414
[SPARK-7621] [STREAMING] Report Kafka errors to StreamingListeners
PR per [SPARK-7621](https://issues.apache.org/jira/browse/SPARK-7621), which
makes both `KafkaReceiver` and `ReliableKafkaReceiver` report their er
249 from tdas/kinesis-examples and squashes the following commits:
7cc307b [Tathagata Das] More tweaks
f080872 [Tathagata Das] More cleanup
841987f [Tathagata Das] Small update
011cbe2 [Tathagata Das] More fixes
b0d74f9 [Tathagata Das] Updated examples.
Project: http://git-wip-us.apache.org/repos/
249 from tdas/kinesis-examples and squashes the following commits:
7cc307b [Tathagata Das] More tweaks
f080872 [Tathagata Das] More cleanup
841987f [Tathagata Das] Small update
011cbe2 [Tathagata Das] More fixes
b0d74f9 [Tathagata Das] Updated examples.
(cherry picked from com
Repository: spark
Updated Branches:
refs/heads/master 191ee4745 -> 9b84443dd
[SPARK-7237] [SPARK-7741] [CORE] [STREAMING] Clean more closures that need
cleaning
SPARK-7741 is the equivalent of SPARK-7237 in streaming. This is an alternative
to #6268.
Author: Andrew Or
Closes #6269 from an
Repository: spark
Updated Branches:
refs/heads/branch-1.4 096cb127a -> 23356dd0d
[SPARK-7237] [SPARK-7741] [CORE] [STREAMING] Clean more closures that need
cleaning
SPARK-7741 is the equivalent of SPARK-7237 in streaming. This is an alternative
to #6268.
Author: Andrew Or
Closes #6269 fro
to debug and therefore it's best to fail fast at
`start()` when checkpointing is enabled and the checkpoint is not serializable.
Author: Tathagata Das
Closes #6292 from tdas/SPARK-7767 and squashes the following commits:
51304e6 [Tathagata Das] Addressed comments.
c35237b [Tathagata Das] Added t
bug and therefore it's best to fail fast at
`start()` when checkpointing is enabled and the checkpoint is not serializable.
Author: Tathagata Das
Closes #6292 from tdas/SPARK-7767 and squashes the following commits:
51304e6 [Tathagata Das] Addressed comments.
c35237b [Tathagata Das] Added test
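The fail-fast check can be sketched with `pickle` standing in for Java serialization: validate at start() that the checkpoint state is serializable, so the failure surfaces immediately instead of at the first checkpoint write. Function names are hypothetical:

```python
# Hedged sketch: verify checkpoint data is serializable before starting.
import pickle

def validate_checkpoint(state):
    try:
        pickle.dumps(state)
    except Exception as e:
        raise ValueError(
            "DStream checkpointing is enabled but the checkpoint data "
            "is not serializable: %s" % e)

validate_checkpoint({"batch_interval_ms": 1000})   # plain data: fine

rejected = False
try:
    validate_checkpoint({"handler": lambda x: x})  # lambdas don't pickle
except ValueError:
    rejected = True
print("rejected:", rejected)
```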
Repository: spark
Updated Branches:
refs/heads/master a70bf06b7 -> 895baf8f7
[SPARK-] [STREAMING] Fix the flaky test in
org.apache.spark.streaming.BasicOperationsSuite
Just added a guard to make sure a batch has completed before moving to the next
batch.
Author: zsxwing
Closes #6306 f
Repository: spark
Updated Branches:
refs/heads/branch-1.4 0d061ff9e -> b6182ce89
[SPARK-] [STREAMING] Fix the flaky test in
org.apache.spark.streaming.BasicOperationsSuite
Just added a guard to make sure a batch has completed before moving to the next
batch.
Author: zsxwing
Closes #63
Repository: spark
Updated Branches:
refs/heads/master 947ea1cf5 -> 1ee8eb431
[SPARK-7745] Change asserts to requires for user input checks in Spark Streaming
Assertions can be turned off. `require` throws an `IllegalArgumentException`
which makes more sense when it's a user-set variable.
Aut
Repository: spark
Updated Branches:
refs/heads/branch-1.4 64762444e -> f08c6f319
[SPARK-7745] Change asserts to requires for user input checks in Spark Streaming
Assertions can be turned off. `require` throws an `IllegalArgumentException`
which makes more sense when it's a user-set variable.
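The same distinction exists in Python: `assert` statements disappear under `python -O`, so user-input checks should raise an explicit exception instead, mirroring Scala's `require` throwing `IllegalArgumentException`. A hypothetical example:

```python
# Python analog of assert -> require: explicit exceptions survive -O.
def set_batch_duration(ms):
    # Before: assert ms > 0, "batch duration must be positive"
    if ms <= 0:
        raise ValueError("batch duration must be positive, got %d" % ms)
    return ms

print(set_batch_duration(500))   # 500

bad_input_rejected = False
try:
    set_batch_duration(0)
except ValueError:
    bad_input_rejected = True
```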
ied through KinesisUtils.
Author: Tathagata Das
Closes #6316 from tdas/SPARK-7787 and squashes the following commits:
248ca5c [Tathagata Das] Fixed serializability
(cherry picked from commit 4b7ff3092c53827817079e0810563cbb0b9d0747)
Signed-off-by: Tathagata Das
Project: http://git-wip-us.apache.
ugh KinesisUtils.
Author: Tathagata Das
Closes #6316 from tdas/SPARK-7787 and squashes the following commits:
248ca5c [Tathagata Das] Fixed serializability
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/4b7ff309
Tree: h
Repository: spark
Updated Branches:
refs/heads/branch-1.4 7e0912b1d -> 33e0e
[SPARK-7722] [STREAMING] Added Kinesis to style checker
Author: Tathagata Das
Closes #6325 from tdas/SPARK-7722 and squashes the following commits:
9ab35b2 [Tathagata Das] Fixed styles in Kinesis
(che
Repository: spark
Updated Branches:
refs/heads/master cdc7c055c -> 311fab6f1
[SPARK-7722] [STREAMING] Added Kinesis to style checker
Author: Tathagata Das
Closes #6325 from tdas/SPARK-7722 and squashes the following commits:
9ab35b2 [Tathagata Das] Fixed styles in Kinesis
Project: h
les/streaming/SqlNetworkWordCount.scala
This can be solved by {{SQLContext.getOrCreate}} which gets or creates a new
singleton instance of SQLContext using either a given SparkContext or a given
SparkConf.
rxin marmbrus
Author: Tathagata Das
Closes #6006 from tdas/SPARK-7478 and squashes
ark/examples/streaming/SqlNetworkWordCount.scala
This can be solved by {{SQLContext.getOrCreate}} which gets or creates a new
singleton instance of SQLContext using either a given SparkContext or a given
SparkConf.
rxin marmbrus
Author: Tathagata Das
Closes #6006 from tdas/SPARK-7478 and squashes