Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10931#issuecomment-175400213
I read the thread referenced by SPARK-11615 again.
But there was no clear statement on why shading the annotations wouldn't work.
Test suite passed.
I
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10931
Shade jackson core
See the thread for background information:
http://search-hadoop.com/m/q3RTtYuufRO7LLG
This PR shades com.fasterxml.jackson.core as org.spark-project.jackson
You can
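Such a relocation is normally expressed through the maven-shade-plugin. A minimal sketch of what that configuration could look like (an illustration of the technique, not the PR's actual diff):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- rewrite jackson-core class references into a Spark-private package -->
        <pattern>com.fasterxml.jackson.core</pattern>
        <shadedPattern>org.spark-project.jackson</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```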
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10931#issuecomment-175283734
I can create a JIRA if this PR makes sense.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10931#issuecomment-175315586
Jenkins, test this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10931#issuecomment-175339111
jackson-annotations is in com.fasterxml.jackson.core, right ?
bq. shading annotations doesn't work well with Scala code
If you can give a bit more detail
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10893#discussion_r50774861
--- Diff:
common/sketch/src/main/java/org/apache/spark/util/sketch/CountMinSketchImpl.java
---
@@ -256,13 +324,48 @@ public CountMinSketch mergeInPlace
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10906#issuecomment-174763618
Looking at current code, that is not the case.
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10906
[SPARK-12934] use try-with-resources for streams
@liancheng please take a look
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
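The pattern in question can be sketched as follows (the stream names and copy helper here are illustrative, not the PR's actual code): both resources are closed automatically, in reverse order, even when an exception is thrown mid-copy.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class TryWithResourcesDemo {
    // Drains a stream into a byte array; both streams are closed
    // automatically even if read() or write() throws.
    static byte[] copy(InputStream in) throws IOException {
        try (InputStream input = in;
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = input.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = copy(new ByteArrayInputStream(new byte[]{1, 2, 3}));
        System.out.println(data.length); // prints 3
    }
}
```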
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10906#issuecomment-174778367
Jenkins, test this please.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10906#issuecomment-174778309
```
ERROR: Timeout after 15 minutes
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from
https://github.com/apache
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10766#discussion_r49899520
--- Diff: NOTICE ---
@@ -610,7 +610,43 @@ Vis.js uses and redistributes the following
third-party libraries
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10725
SPARK-12778 Use of Java Unsafe should take endianness into account
In Platform.java, methods of Java Unsafe are called directly without
considering endianness.
In thread, 'Tungsten
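The underlying issue can be illustrated without Unsafe, using plain java.nio (a sketch of the general technique, not Platform.java's code): a value laid out in a fixed byte order must be byte-swapped when the host's native order differs, e.g. via Integer.reverseBytes.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    // Reads an int that was stored in little-endian layout. A raw read in
    // the host's native order is correct on little-endian machines but must
    // be byte-swapped on big-endian ones.
    static int readLittleEndianInt(byte[] bytes) {
        int raw = ByteBuffer.wrap(bytes).order(ByteOrder.nativeOrder()).getInt();
        return ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN
            ? raw
            : Integer.reverseBytes(raw);
    }

    public static void main(String[] args) {
        // 0x04030201 stored little-endian: lowest byte first.
        byte[] bytes = {0x01, 0x02, 0x03, 0x04};
        System.out.println(Integer.toHexString(readLittleEndianInt(bytes)));
    }
}
```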
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10725#issuecomment-170999515
In HBase, an if branch is used as shown in this PR.
We don't observe much performance impact.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10725#issuecomment-171106786
I am waiting for the outcome of the discussion on whether mixed-endian
environments should be supported.
Closing for now, until demand arises.
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/10725
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10605#discussion_r49219913
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/Window.scala ---
@@ -307,27 +314,63 @@ case class Window(
// Collect all
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10605#discussion_r49218188
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/Window.scala ---
@@ -307,27 +314,63 @@ case class Window(
// Collect all
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10435#discussion_r48869485
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1863,6 +1863,17 @@ object functions extends LegacyFunctions
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-168756031
@marmbrus
Gentle ping
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-168059853
The test failure was not related to the patch.
Looks like HiveThriftBinaryServerSuite timed out:
```
[info] HiveThriftBinaryServerSuite:
[info] - GetInfo
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-168059897
@marmbrus :
Is there anything I need to do ?
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/10368
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-167802141
I think the first two days' work aligns with SPARK-12414.
Is it possible to keep that (without modification to KryoSerializer and the
new test) ?
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48507734
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -109,6 +111,9 @@ class KryoSerializer(conf: SparkConf
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48513944
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala ---
@@ -81,6 +81,16 @@ class TaskResultGetterSuite extends SparkFunSuite
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10366#discussion_r48442537
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala ---
@@ -99,7 +99,11 @@ private[mesos] class MesosSubmitRequestServlet
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10333#discussion_r48442383
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -87,18 +87,21 @@ object Cast {
private def
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10430#issuecomment-167264161
Can this PR be merged so that maven build becomes green again ?
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-167086261
```
sbt.ForkMain$ForkError:
org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to
eventually never returned normally. Attempted 209 times over
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-167079477
Jenkins, test this please
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48389541
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskResult.scala
---
@@ -46,6 +46,8 @@ class DirectTaskResult[T](var valueBytes: ByteBuffer, var
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-167018650
Here was why I implemented equals for Map (from #48257):
{code}
[info] - Bug: SPARK-12415 *** FAILED *** (3 milliseconds)
[info] java.nio.HeapByteBuffer[pos
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-167046527
Jenkins, test this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-167046506
```
[info] PostgresIntegrationSuite:
[info] Exception encountered when attempting to run a suite with class
name
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-166387628
@andrewor14 @zsxwing
Please take another look.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10284#issuecomment-166386063
See
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48119/consoleFull
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10284#issuecomment-166033110
I think the following exception, seen in the unit test run, was related to this
PR:
```
[info] - Simple replay (70 milliseconds)
java.lang.NullPointerException
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-166033780
Jenkins, test this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-166052576
Jenkins, test this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-166060830
Jenkins, test this please.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-166060825
HeartbeatReceiverSuite.'expire dead hosts' failed
Doesn't seem to be related to the PR.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-165987346
Modified new test in KryoSerializerSuite.scala
Looks like DirectTaskResult#equals should be defined to compare the fields.
Let me know if the above
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-165858582
I took a look at
./core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala
but may need some hint how such result should be formed.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-165871041
See if the new test makes sense
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48078649
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala ---
@@ -81,6 +81,16 @@ class TaskResultGetterSuite extends SparkFunSuite
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48070627
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala ---
@@ -81,6 +81,16 @@ class TaskResultGetterSuite extends SparkFunSuite
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10320#issuecomment-165544488
Looks like the case is covered by NextIterator already
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10320#issuecomment-165516736
@zsxwing @srowen
Anything I need to do for this PR ?
Thanks
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/10320
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-165643949
```
ERROR: Timeout after 15 minutes
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from
https://github.com/apache
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10368#issuecomment-165644041
Jenkins, test this please
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10368
[SPARK-12415] Do not use closure serializer to serialize task result
As the name suggests, closure serializer is for closures. We should be able
to use the generic serializer for task results. If we
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10320#issuecomment-165209976
I compared JDBCRDD.scala with JdbcRDD.scala
From what I can tell from the usage of java.sql.Connection and
java.sql.ResultSet, the proposed change is needed.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-165076390
Created #10325
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10320#issuecomment-165006066
@andrewor14 @zsxwing
Please take a look
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10320
[SPARK-12048][SQL] Part 2 Prevent to close JDBC resources twice
PR #10101 dealt with
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
This PR applies
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10313#issuecomment-164988105
There're still 4 references to Runtime.getRuntime.addShutdownHook() in the
code base.
Should they be replaced with ShutdownHookManager ?
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9862#issuecomment-162783007
I can send out another PR if other people think that variant is needed.
This PR has been closed.
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10175
[SPARK-12181] Check Cached unaligned-access capability before using Unsafe
For MemoryMode.OFF_HEAP, Unsafe.getInt etc. are used with no restriction.
However, the Oracle implementation uses
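One common way to probe this capability, used by several JVM projects, is to reflectively call the package-private java.nio.Bits#unaligned() and fall back to an architecture allow-list when reflection is blocked. A hedged sketch of that approach (not necessarily what this PR does):

```java
import java.lang.reflect.Method;

public class UnalignedCheck {
    // Probes whether the JVM reports support for efficient unaligned
    // memory access; falls back to an architecture allow-list if the
    // reflective call is not permitted.
    static boolean unaligned() {
        try {
            Class<?> bits = Class.forName("java.nio.Bits");
            Method m = bits.getDeclaredMethod("unaligned");
            m.setAccessible(true);
            return (Boolean) m.invoke(null);
        } catch (Throwable t) {
            String arch = System.getProperty("os.arch", "");
            return arch.matches("^(i[3-6]86|x86(_64)?|amd64|aarch64)$");
        }
    }

    public static void main(String[] args) {
        System.out.println("unaligned access supported: " + unaligned());
    }
}
```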
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10177
[SPARK-12074] Avoid memory copy involving
ByteBuffer.wrap(ByteArrayOutputStream.toByteArray)
SPARK-12060 fixed JavaSerializerInstance.serialize
This PR applies the same technique on two other
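The copy comes from ByteArrayOutputStream.toByteArray(), which clones the internal buffer before ByteBuffer.wrap ever sees it. One way to avoid it, sketched here with an illustrative class name (not necessarily the PR's exact approach), is a subclass that wraps the live buffer directly:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class ExposedByteArrayOutputStream extends ByteArrayOutputStream {
    public ExposedByteArrayOutputStream(int size) {
        super(size);
    }

    // Wraps the live internal buffer: no defensive copy is made, so the
    // returned buffer is only valid while no further writes occur.
    public ByteBuffer toByteBuffer() {
        return ByteBuffer.wrap(buf, 0, count);
    }

    public static void main(String[] args) {
        ExposedByteArrayOutputStream out = new ExposedByteArrayOutputStream(32);
        out.write(42);
        ByteBuffer bb = out.toByteBuffer();
        System.out.println(bb.remaining()); // prints 1
    }
}
```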
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10177#issuecomment-162691480
```
Had test failures in pyspark.streaming.tests with python2.6; see logs.
```
I don't think the above failure was caused by this PR
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10177#issuecomment-162691511
Jenkins, retest this please.
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/10175
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10069#issuecomment-162328379
Will create new one when SPARK-12060 is put back.
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/10069
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10164#issuecomment-162329570
Jenkins, test this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10069#issuecomment-162324872
With the change in BlockManager.scala, BlockManagerReplicationSuite passes,
thanks to Shixiong's fix in SPARK-12084
```
BlockManagerReplicationSuite:
- get peers
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10164
[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling
setConf
This is continuation of SPARK-12056 where change is applied to
SqlNewHadoopRDD.scala
@andrewor14
FYI
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10069#discussion_r46429985
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -193,7 +192,7 @@ private[spark] object Task {
dataOut.flush()
val
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10081#discussion_r46432025
--- Diff:
core/src/main/scala/org/apache/spark/memory/UnifiedMemoryManager.scala ---
@@ -130,6 +130,12 @@ private[spark] class UnifiedMemoryManager
private
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10069#issuecomment-161109455
BlockManagerReplicationSuite.scala, line 384:
```
assert(!blockStatus.storageLevel.useMemory || blockStatus.memSize
>= blockSize,
s"
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10069#issuecomment-161116101
Jenkins, test this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10069#issuecomment-161123401
I assume you will create another JIRA for fixing ByteBuffer.array
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10069#issuecomment-161171534
Does the fix involve passing arrayOffset to UploadBlock ctor ?
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/10069
Avoid memory copy involving ByteBuffer.wrap(ByteArrayOutputStream.toByteArray)
SPARK-12060 fixed JavaSerializerInstance.serialize
This PR applies the same technique on two other
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10011#issuecomment-160038407
Thanks for the quick fix, shixiong
Happy Thanksgiving
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/9950
Start py4j callback server for Java Gateway
See the thread 'pyspark does not seem to start py4j callback server'
This PR starts py4j callback server for Java Gateway
@davies
You
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/9950
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9852#issuecomment-158959495
@zsxwing @andrewor14
Is there anything I need to do ?
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9852#issuecomment-158465022
@andrewor14 @zsxwing
Kindly review
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9852#issuecomment-158485837
Should I add test in StreamingListenerSuite.scala (stopping SparkContext
instead of stopping StreamingContext) ?
Which notification should be overridden
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9852#issuecomment-158531675
```
[error] Failed: Total 378, Failed 1, Errors 0, Passed 377, Ignored 2
[error] Failed tests:
[error] org.apache.spark.streaming.CheckpointSuite
[info
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9852#issuecomment-158534826
Manually ran org.apache.spark.streaming.CheckpointSuite (with this change)
which passed.
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/9635
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/9852
Prevent the call to SparkContext#stop() in the listener bus's thread
This is continuation of SPARK-11761
Andrew suggested adding this protection. See tail of
https://github.com/apache/spark
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9635#issuecomment-158254700
stopExecutorDelegationTokenRenewer() is called unconditionally in stop()
method at line 191
By the time startExecutorDelegationTokenRenewer() is called again, token
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-158239143
I agree.
New PR coming
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9812#issuecomment-157758507
Jenkins, please test this
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157553204
Please take a look at the test and see what should be improved.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157585091
For both 46132 and 46136:
```
[info] StreamingListenerSuite:
[info] - batch info reporting (752 milliseconds)
[info] - receiver info reporting (369
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157589468
Jenkins, retest this please
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157589455
```
ERROR: Timeout after 15 minutes
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from
https://github.com/apache
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157599928
DirectKafkaStreamSuite failed in maven Jenkins:
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=spark-test
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157588328
DirectKafkaStreamSuite passed locally:
```
Run starting. Expected test count is: 6
DirectKafkaStreamSuite:
- basic stream receiving with multiple topics
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157508125
@tdas @zsxwing
Any feedback ?
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/9741
Prevent the call to StreamingContext#stop() in the listener bus's thread
See discussion toward the tail of https://github.com/apache/spark/pull/9723
You can merge this pull request into a Git
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/9723
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157138122
@tdas @zsxwing
Please take a look
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9741#issuecomment-157154693
@andrewor14
Was pulled into a meeting when filing the JIRA.
Will pay attention next time.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/9723#issuecomment-156964895
Sounds like we should prevent the call to StreamingContext#stop() in the
listener bus's thread.
How about setting a ThreadLocal Boolean to indicate to StreamingContext
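The ThreadLocal idea can be sketched in isolation (the class and field names below are illustrative, not Spark's actual code): the listener bus marks its own dispatch thread, and stop() refuses to proceed when the flag is set.

```java
public class ListenerBusGuard {
    // True only on the listener bus's own dispatch thread.
    private static final ThreadLocal<Boolean> inListenerThread =
        ThreadLocal.withInitial(() -> Boolean.FALSE);

    // Runs a task with the flag set, as the listener bus would for
    // listener callbacks on its dispatch thread.
    static void runOnListenerThread(Runnable r) {
        inListenerThread.set(Boolean.TRUE);
        try {
            r.run();
        } finally {
            inListenerThread.set(Boolean.FALSE);
        }
    }

    // Simulates StreamingContext#stop(): reject calls from the listener thread,
    // which would otherwise deadlock waiting for the bus to drain.
    static void stop() {
        if (inListenerThread.get()) {
            throw new IllegalStateException("Cannot call stop() from listener bus thread");
        }
        // ... actual shutdown would go here ...
    }

    public static void main(String[] args) {
        stop(); // fine from the main thread
        try {
            runOnListenerThread(ListenerBusGuard::stop);
        } catch (IllegalStateException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```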