Github user saturday-shi commented on the issue:
https://github.com/apache/spark/pull/18230
@vanzin [Xing Shi
(saturday_s)](https://issues.apache.org/jira/secure/ViewProfile.jspa?name=saturday_s),
thanks.
---
If your project is set up for it, you can reply to this email and have
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122866332
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -328,6 +333,9 @@ case class DataSource(
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122867252
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -328,6 +333,9 @@ case class DataSource(
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18025
haha. I like the `\emph`
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18355
**[Test build #78270 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78270/testReport)**
for PR 18355 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17758
**[Test build #78267 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78267/testReport)**
for PR 17758 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17758
Yeah, I've already found the cause; to fix the issue, it's okay to check
name duplication for partition columns in `getOrInferFileFormatSchema`, as
@gatorsmile suggested.
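The kind of duplicate-name check being discussed can be sketched as follows (a minimal illustration in Python; the function name and signature are assumptions, not Spark's actual `SchemaUtils` API):

```python
# Illustrative sketch of a case-(in)sensitive duplicate-column-name check,
# similar in spirit to what the PR adds; not Spark's real implementation.
def check_column_name_duplication(columns, case_sensitive=True):
    names = columns if case_sensitive else [c.lower() for c in columns]
    seen, dups = set(), set()
    for n in names:
        if n in seen:
            dups.add(n)
        seen.add(n)
    if dups:
        raise ValueError("Found duplicate column(s): " + ", ".join(sorted(dups)))
```

Under case-insensitive analysis, `["a", "A"]` would be rejected; under case-sensitive analysis it is accepted.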
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18328#discussion_r122872220
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala ---
@@ -106,6 +105,11 @@ class CacheManager extends Logging {
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/18356
[SPARK-21144][SQL][BRANCH-2.2] Check column name duplication in read/write
paths
## What changes were proposed in this pull request?
This pr fixed unexpected results when the data schema and
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18356
cc: @gatorsmile
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18320
**[Test build #78269 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78269/testReport)**
for PR 18320 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122860370
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18357
Can one of the admins verify this patch?
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18343
thanks, merging to master/2.2!
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18320
I also tested the current state on CentOS for sure.
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122866890
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/util/SchemaUtils.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122866830
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -355,12 +356,12 @@ object ViewHelper {
analyzedPlan:
Github user darionyaphet commented on a diff in the pull request:
https://github.com/apache/spark/pull/18288#discussion_r122869702
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMRelation.scala ---
@@ -91,12 +91,10 @@ private[libsvm] class LibSVMFileFormat
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18025
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122855545
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,15 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking = TRUE, timeout =
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78266/
Test FAILed.
---
Github user fjh100456 commented on the issue:
https://github.com/apache/spark/pull/18351
Yes, it should be. @ajbozarth
The screenshot: @zhuoliu
![default](https://user-images.githubusercontent.com/26785576/27312007-89a3eca6-5597-11e7-81fe-7dcff2c2a861.png)
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18025
The AppVeyor failure is unfortunate, but it passed before this doc-only change.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18140
you can close and re-open this PR on github here
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18140
How do I do that?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17758
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78267/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17758
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18359
Can one of the admins verify this patch?
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122875289
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17758
**[Test build #78267 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78267/testReport)**
for PR 17758 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15821
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78265/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15821
Merged build finished. Test PASSed.
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122862830
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -248,6 +249,10 @@ private[hive] class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18320
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78269/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18320
**[Test build #78269 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78269/testReport)**
for PR 18320 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18320
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18358
Can one of the admins verify this patch?
---
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18358
[SPARK-21148] [CORE] Set SparkUncaughtExceptionHandler to the Master
## What changes were proposed in this pull request?
Adding the default UncaughtExceptionHandler to the Master as
GitHub user lawlietAi opened a pull request:
https://github.com/apache/spark/pull/18359
Update Word2Vec.scala
## What changes were proposed in this pull request?
The word2vec model needs an independent function to calculate the cosine
similarity. We also desire a function
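The cosine similarity the PR asks for can be sketched like this (a minimal illustration in Python; the function name is hypothetical, not the proposed Spark API):

```python
import math

# Illustrative sketch of cosine similarity between two dense word vectors,
# the operation the PR proposes exposing as a standalone function.
def cosine_similarity(v1, v2):
    assert len(v1) == len(v2), "vectors must have the same dimension"
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Define similarity with a zero vector as 0.0 to avoid division by zero.
    return 0.0 if n1 == 0.0 or n2 == 0.0 else dot / (n1 * n2)
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is the usual convention for word-vector similarity.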
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18356#discussion_r122874896
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -222,12 +223,10 @@ case class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18356
To avoid potential issues, could you revert all the unrelated changes?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18355
**[Test build #78270 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78270/testReport)**
for PR 18355 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78272 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78272/testReport)**
for PR 18114 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18343
Agreed. The `hugeBlockSizes` map is not supposed to have too many records,
only a few huge blocks.
LGTM
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15821
**[Test build #78265 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78265/testReport)**
for PR 15821 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122860937
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122862386
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -687,4 +688,52 @@ class DataFrameReaderWriterSuite
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18356
**[Test build #78268 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78268/testReport)**
for PR 18356 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18356
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78268/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18356
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18356
**[Test build #78268 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78268/testReport)**
for PR 18356 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122861395
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122863353
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -328,6 +333,9 @@ case class DataSource(
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122867659
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -62,13 +63,8 @@ case class
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18025
merged to master, thanks!
---
GitHub user actuaryzhang reopened a pull request:
https://github.com/apache/spark/pull/18140
[SPARK-20917][ML][SparkR] SparkR supports string encoding consistent with R
## What changes were proposed in this pull request?
Add `stringIndexerOrderType` to `spark.glm` and
Github user actuaryzhang closed the pull request at:
https://github.com/apache/spark/pull/18140
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18114
For the `column_datetime_diff_functions`:
![image](https://user-images.githubusercontent.com/11082368/27315654-9ba01c08-552f-11e7-973e-f8351cb50aae.png)
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18114
For the date time functions, I create two groups: one for arithmetic
functions that work with two columns `column_datetime_diff_functions`, and the
other for functions that work with only one
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122853060
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -182,6 +183,10 @@ case class DataSource(
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18348
@srowen
Sorry, over the last two or three days I did not deal with my JIRA in time.
Please help to review the code, thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18355
**[Test build #78266 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78266/testReport)**
for PR 18355 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18356
@gatorsmile This pr includes all the changes from #17758 though; did you
originally mean this pr should include only the part needed to fix this issue?
---
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18357
[SPARK-21146] [CORE] Worker should handle and shutdown when any thread gets
UncaughtException
## What changes were proposed in this pull request?
Adding the default
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122860310
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18140
can you kick AppVeyor?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78271 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78271/testReport)**
for PR 18114 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18356#discussion_r122875121
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -181,6 +182,10 @@ case class DataSource(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18356#discussion_r122874997
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -181,6 +182,10 @@ case class DataSource(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78271 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78271/testReport)**
for PR 18114 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78271/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78270/
Test FAILed.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122879650
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78272 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78272/testReport)**
for PR 18114 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18346
Btw, even if we can evaluate all children expressions of `CodegenFallback`
with the codegen path, we still can't do whole-stage codegen for plans
including `CodegenFallback` expressions. We just can do
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/14085
@zenglinxi0615
This pr is about adding all files in a directory recursively, so there is no
need to enumerate all the filenames, right? I think this can be pretty useful,
especially in a production env.
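The recursive listing described above can be sketched as follows (illustrative Python, not the PR's actual implementation):

```python
import os

# Illustrative sketch: collect every regular file under a directory
# recursively, so callers need not enumerate filenames by hand.
def list_files_recursively(root):
    result = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            result.append(os.path.join(dirpath, name))
    return result
```

Files in nested subdirectories are returned alongside those at the top level, which is the behavior the comment argues is useful in production.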
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18343
---
Github user yssharma commented on the issue:
https://github.com/apache/spark/pull/18029
@budde @brkyvz could you suggest if the current patch seems ok, or I should
make something similar to the case class/ trait ?
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17758
I think we should figure out
https://issues.apache.org/jira/browse/SPARK-21144 first. It doesn't make sense
to have duplicated columns between partition columns and data columns.
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122868692
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -62,13 +63,8 @@ case class
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18320
> Also I'd suggest not committing this to branch-2.2 -- if we want to just
fix the CentOS tests we can have a different change for the older branches
agreed, this won't run as a part of
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/18350
thanks @srowen
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122865721
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/util/SchemaUtils.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122865863
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -355,12 +356,12 @@ object ViewHelper {
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/17395
---
Github user lawlietAi commented on the issue:
https://github.com/apache/spark/pull/18359
Sorry, I'm confused about how to operate GitHub. What should I do?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78272/
Test PASSed.
---
GitHub user timvw opened a pull request:
https://github.com/apache/spark/pull/18353
Corrected kafka dependencies
## What changes were proposed in this pull request?
Currently spark-streaming-kafka-0-10 has a dependency on the full kafka
distribution (but only uses and
Github user timvw commented on a diff in the pull request:
https://github.com/apache/spark/pull/18353#discussion_r122722841
--- Diff:
external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala
---
@@ -20,25 +20,24 @@ package
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14739
@srowen should we close this?
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/11994
Are you still working on this? @jerryshao
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13863
Are you still working on this? @nezihyigitbasi
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16803
ping @cloud-fan @gatorsmile @dongjoon-hyun Any thoughts on this?
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/17074
retest this please
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/17620
Should we move forward with this PR or should we close this? @jerryshao
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17074
**[Test build #78258 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78258/testReport)**
for PR 17074 at commit