GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18684
[SPARK-21475][Core] Use NIO's Files API to replace
FileInputStream/FileOutputStream in some critical paths
## What changes were proposed in this pull request?
Java's `FileInputStream
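The entry above is truncated, but the PR's premise is that Java's `FileInputStream`/`FileOutputStream` override `finalize()`, which delays garbage collection of the stream objects, while NIO's `Files` factory methods return channel-backed streams without finalizers. A minimal, hypothetical sketch of the replacement (not the PR's actual code):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class NioStreams {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("spark-demo", ".bin");
        // Instead of new FileOutputStream(tmp.toFile()), use the NIO
        // factory: the returned stream has no finalize() method.
        try (OutputStream out = Files.newOutputStream(tmp)) {
            out.write(new byte[]{1, 2, 3});
        }
        // Likewise, Files.newInputStream replaces new FileInputStream(...).
        try (InputStream in = Files.newInputStream(tmp)) {
            byte[] buf = in.readAllBytes();
            System.out.println(buf.length); // prints 3
        }
        Files.deleteIfExists(tmp);
    }
}
```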
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18649
Sorry, I'm not familiar with this part, so I cannot give you a valid comment;
you could ask others to help review your patch :).
---
If your project is set up for it, you can reply
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18663
LGTM. I tried it locally; it looks like executors can now be ramped up soon after
the AM restarts.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18663#discussion_r128091929
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -262,13 +259,7 @@ private[spark
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18649
So it looks like a basic copy-paste from Hive code to support this feature
in STS (like what I did before to support SPNEGO). It looks fine to me based on
my limited knowledge :).
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18663#discussion_r128088140
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -262,13 +259,7 @@ private[spark
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18630
One layer for different cluster managers seems promising, but it looks like it
requires a lot of refactoring work (building Spark's own distributed
cache and changing the existing way
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18628#discussion_r127821540
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIService.scala
---
@@ -57,6 +59,20 @@ private[hive
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18616#discussion_r127778711
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -438,6 +441,24 @@ private[spark] class
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18616
@vanzin , do you have further comments?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18630
I'm wondering if we can prepare all the resources in `SparkSubmit` and
upload them to HDFS's staging folder; then the driver could download them from
HDFS and add them to the classpath. This is basically what YARN
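The stage-then-pull flow suggested above can be illustrated with local directories standing in for HDFS (all paths and names here are hypothetical; a real implementation would use the Hadoop `FileSystem` API against the YARN staging directory):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class StagingFlow {
    public static void main(String[] args) throws Exception {
        // Stand-in for the HDFS staging directory (a local temp dir here).
        Path staging = Files.createTempDirectory("sparkStaging");
        Path localJar = Files.createTempFile("app", ".jar");
        Files.write(localJar, new byte[]{0x50, 0x4b}); // fake jar bytes

        // "SparkSubmit" side: upload the resource into the staging folder.
        Path staged = staging.resolve("app.jar");
        Files.copy(localJar, staged, StandardCopyOption.REPLACE_EXISTING);

        // "Driver" side: download from staging before building the classpath.
        Path driverCopy = Files.createTempFile("driver-app", ".jar");
        Files.copy(staged, driverCopy, StandardCopyOption.REPLACE_EXISTING);
        System.out.println(Files.size(driverCopy)); // prints 2
    }
}
```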
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18633#discussion_r127565024
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/HadoopDelegationTokenManager.scala
---
@@ -42,7 +42,7 @@ import
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18630#discussion_r127562278
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverWrapper.scala ---
@@ -66,4 +75,50 @@ object DriverWrapper {
System.exit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18630#discussion_r127561962
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -473,6 +474,12 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18630#discussion_r127563323
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverWrapper.scala ---
@@ -66,4 +75,50 @@ object DriverWrapper {
System.exit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18630
Are you trying to support `--packages` in standalone cluster?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18616
Thanks @vanzin for your review, I will update it soon.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18633
Jenkins, retest this please.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18628
@cloud-fan would you please help to review, thanks a lot!
---
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18633
[SPARK-21411][YARN] Lazily create FS within kerberized UGI to avoid token
acquiring failure
## What changes were proposed in this pull request?
In the current
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18628
[SPARK-21407][ThriftServer] Add spnego auth support for ThriftServer
thrift/http protocol
## What changes were proposed in this pull request?
Spark ThriftServer doesn't support spnego
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18616#discussion_r127321876
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/HadoopDelegationTokenManager.scala
---
@@ -42,7 +42,7 @@ import
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18616#discussion_r127319048
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/HadoopDelegationTokenManager.scala
---
@@ -42,7 +42,7 @@ import
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18616#discussion_r127317541
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/HadoopDelegationTokenManager.scala
---
@@ -42,7 +42,7 @@ import
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18617#discussion_r127309324
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -192,16 +193,29 @@ private[spark] class Client
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18616#discussion_r127298989
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/HadoopDelegationTokenManager.scala
---
@@ -42,7 +42,7 @@ import
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18617
@tgravescs @vanzin can you please help to review, thanks!
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18616
CC @vanzin @tgravescs can you please help to review, thanks!
---
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18617
[SPARK-21376][YARN] Fix yarn client token expire issue when cleaning the
staging files in long running scenario
## What changes were proposed in this pull request?
This issue happens
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18616
[SPARK-21377][YARN] Make jars specified with --jars/--packages loadable in
AM's credential renewer
## What changes were proposed in this pull request?
In this issue we have a long running
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18602
Thanks @vanzin. Based on the comment on the JIRA, I will try another approach,
so I'm closing this for now.
---
Github user jerryshao closed the pull request at:
https://github.com/apache/spark/pull/18602
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18602
CC @vanzin @tgravescs would you please help to review? Thanks!
---
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18602
[SPARK-21377][YARN] Add a new configuration to extend AM classpath in yarn
client mode
## What changes were proposed in this pull request?
This PR proposes a new configuration
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18555#discussion_r126293302
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -35,10 +35,21 @@ package object config {
ConfigBuilder
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18555#discussion_r126293500
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -175,6 +246,8 @@ package object config {
ConfigBuilder
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18555#discussion_r126293557
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -231,6 +315,9 @@ package object config {
private[spark] val
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18555#discussion_r126293363
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -35,10 +35,21 @@ package object config {
ConfigBuilder
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18555#discussion_r126293375
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -51,29 +62,63 @@ package object config {
ConfigBuilder
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18555#discussion_r126292668
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -51,29 +62,63 @@ package object config {
ConfigBuilder
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18573#discussion_r126292609
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -60,6 +60,10 @@ private[spark] class TaskSetManager(
val
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18552
@jiangxb1987 @cloud-fan would you please help to review this doc fix,
thanks! It just adds some words to clarify how to set configurations correctly.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/11994#discussion_r126284098
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala ---
@@ -50,30 +52,30 @@ private[spark] class GraphiteSink(val property
GitHub user jerryshao reopened a pull request:
https://github.com/apache/zeppelin/pull/2459
[ZEPPELIN-2716] Change the default value of zeppelin.livy.displayAppInfo to
true
### What is this PR for?
Since it is quite useful to expose the application info for user to monitor
Github user jerryshao commented on the issue:
https://github.com/apache/zeppelin/pull/2459
Thanks, guys, for your review; it seems there are always some UT failures. I will
reopen this PR to try again.
---
Github user jerryshao closed the pull request at:
https://github.com/apache/zeppelin/pull/2459
---
GitHub user jerryshao reopened a pull request:
https://github.com/apache/zeppelin/pull/2459
[ZEPPELIN-2716] Change the default value of zeppelin.livy.displayAppInfo to
true
### What is this PR for?
Since it is quite useful to expose the application info for user to monitor
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18552
[Minor][Doc] Improve the docs about how to correctly set configurations
## What changes were proposed in this pull request?
Spark provides several ways to set configurations, either from
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18235#discussion_r125828051
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -520,7 +520,7 @@ private[deploy] class SparkSubmitArguments(args
Github user jerryshao commented on the issue:
https://github.com/apache/zeppelin/pull/2459
There's no failure in my local test; please help to check.
---
Github user jerryshao closed the pull request at:
https://github.com/apache/zeppelin/pull/2459
---
GitHub user jerryshao reopened a pull request:
https://github.com/apache/zeppelin/pull/2459
[ZEPPELIN-2716] Change the default value of zeppelin.livy.displayAppInfo to
true
### What is this PR for?
Since it is quite useful to expose the application info for user to monitor
GitHub user jerryshao reopened a pull request:
https://github.com/apache/zeppelin/pull/2459
[ZEPPELIN-2716] Change the default value of zeppelin.livy.displayAppInfo to
true
### What is this PR for?
Since it is quite useful to expose the application info for user to monitor
Github user jerryshao closed the pull request at:
https://github.com/apache/zeppelin/pull/2459
---
GitHub user jerryshao reopened a pull request:
https://github.com/apache/zeppelin/pull/2459
[ZEPPELIN-2716] Change the default value of zeppelin.livy.displayAppInfo to
true
### What is this PR for?
Since it is quite useful to expose the application info for user to monitor
Github user jerryshao closed the pull request at:
https://github.com/apache/zeppelin/pull/2459
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18308
Is this the final modified code, @ihazem? Why do you check
`hasCachedBlocks` both inside and outside of the logInfo statement? Also, the code
is too long.
Can you please at least do a round
Github user jerryshao commented on the issue:
https://github.com/apache/zeppelin/pull/2459
CC @zjffdu , please help to review, thanks!
---
GitHub user jerryshao opened a pull request:
https://github.com/apache/zeppelin/pull/2459
[ZEPPELIN-2716] Change the default value of zeppelin.livy.displayAppInfo to
true
### What is this PR for?
Since it is quite useful to expose the application info for user to monitor
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18430
Let's do this in another PR; it seems to be a different threading issue.
Also, would you please change the PR title to something like: Change fileToAppInfo
in FsHistoryProvider to fix a concurrency issue
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18430
What's the issue of SPARK-13988?
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18235#discussion_r124949735
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -841,36 +845,132 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18235#discussion_r124949549
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -841,36 +845,132 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r124689398
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/StatsdReporter.scala ---
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r124699178
--- Diff:
core/src/test/scala/org/apache/spark/metrics/sink/StatsdSinkSuite.scala ---
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r124689716
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/StatsdReporter.scala ---
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18435
@jiangxb1987 @cloud-fan Can you please review this JIRA? The changes should
be safe, from my understanding.
Also, can we change the title to: Considering CPUS_PER_TASK when allocating
task
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18351
Yes, I'm fine with the current solution, since there's no better one.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18430
You're saying you're fixing a threading issue, so my question is: did you
see any inconsistent behavior regarding thread contention?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18435
Can you please change your PR title and description to reflect the real
issue here?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18430
Can you please fix your PR description to remove the template words? BTW, did
you actually see the inconsistency here?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18435
Besides, is it enough to use `o.cores / CPUS_PER_TASK`? I'm not sure why
we need to use `ceil`; for example, if we have 10 cores in the worker offer and
CPUS_PER_TASK is 3, then 3 slots should
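The arithmetic in the comment above can be checked directly: with integer (floor) division, 10 offered cores at 3 CPUs per task yield 3 full slots, while `ceil` would claim a fourth slot that has only 1 core behind it. A hedged illustration, not Spark's actual scheduler code:

```java
public class TaskSlots {
    // Integer division floors the result: a partial slot cannot run a task.
    static int slots(int offeredCores, int cpusPerTask) {
        return offeredCores / cpusPerTask;
    }

    public static void main(String[] args) {
        System.out.println(slots(10, 3));             // prints 3
        System.out.println((int) Math.ceil(10.0 / 3.0)); // prints 4: over-allocates
    }
}
```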
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18435
From my understanding, this looks like a bug here: we didn't consider the
CPUS_PER_TASK configuration. Rather than saving CPU or memory, I think this PR is
more like fixing a bug. As for saving memory, yes
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18406
So the key point is whether the metrics system should understand dynamically
registered metrics, am I understanding right? If the registered metrics are
static ones, then I think the current code should
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18406
@robert3005 based on your description, this feature is more like your own
customized requirement, not a requirement of Spark itself. I'm wondering whether
it can be done outside of Spark?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17937
This is already fixed in https://github.com/apache/spark/pull/18230. CC
@gatorsmile.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18394
LGTM.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18406
@robert3005 would you please elaborate on the usage scenario of your proposal?
AFAIK Spark internally doesn't have such a requirement.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Jenkins, retest this please.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r124165462
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -529,4 +529,59 @@ class BlacklistTrackerSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r124165102
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -145,6 +146,74 @@ private[scheduler] class BlacklistTracker
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18414
From my understanding, only when the Master is fully recovered can it
receive new application registrations. In the recovery period, all the
recovered applications have already run, so
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18235#discussion_r123932628
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -310,33 +310,28 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18235#discussion_r123922136
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -310,33 +310,28 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Jenkins, retest this please.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18414
@srini-daruna I think I already addressed this issue in SPARK-12552, here
is the
[code](https://github.com/srini-daruna/spark/blob/b3ea3358a7bf55cedaa5cd7d08860bc625e83cd2/core/src/main/scala/org
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18329
CC @zsxwing @tdas would you please help to review this PR? Thanks a lot.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123705179
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -235,6 +239,21 @@ final class DataStreamWriter[T] private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123698342
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -235,6 +239,21 @@ final class DataStreamWriter[T] private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123698373
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -17,17 +17,21 @@
package
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123698894
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -553,6 +554,32 @@ class StreamSuite extends StreamTest
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18322
LGTM!
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r123657729
--- Diff:
core/src/test/scala/org/apache/spark/metrics/sink/StatsdSinkSuite.scala ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r123657675
--- Diff:
core/src/test/scala/org/apache/spark/metrics/sink/StatsdSinkSuite.scala ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r123656934
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/StatsdReporter.scala ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Jenkins, retest this please.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Looks like Jenkins is not so stable.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123454797
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -235,6 +239,21 @@ final class DataStreamWriter[T] private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123453429
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -264,12 +281,12 @@ final class DataStreamWriter[T
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123448787
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -235,6 +239,21 @@ final class DataStreamWriter[T] private