Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14247
**[Test build #62463 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62463/consoleFull)**
for PR 14247 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14150
**[Test build #62459 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62459/consoleFull)**
for PR 14150 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14246
**[Test build #62460 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62460/consoleFull)**
for PR 14246 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14245
**[Test build #62462 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62462/consoleFull)**
for PR 14245 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14137
**[Test build #62464 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62464/consoleFull)**
for PR 14137 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13051
**[Test build #62465 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62465/consoleFull)**
for PR 13051 at commit
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/13894
@krishnakalyan3 I think the merge conflicts still need to be resolved - also the
Python style issue. Subject to those, this LGTM now.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user ahmed-mahran commented on the issue:
https://github.com/apache/spark/pull/14234
Fine, ignoring this
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71131209
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14234
Don't bother if you don't have Office and it's any trouble
---
Github user ahmed-mahran commented on the issue:
https://github.com/apache/spark/pull/14234
The slides render badly in the LibreOffice version I have; I'll try something else
and see.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71130049
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14234
Oh nice. It might be a matter of exporting the image at a higher
resolution, but I still wouldn't worry if it's just a trivial typo and takes
any non-trivial time to figure out. (You can fix and
Github user ahmed-mahran commented on the issue:
https://github.com/apache/spark/pull/14234
I can find a pptx at "docs/img/structured-streaming.pptx" where there is a
corresponding slide for each image.
---
Github user philipphoffmann commented on a diff in the pull request:
https://github.com/apache/spark/pull/13051#discussion_r71129439
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -105,11 +105,14 @@ private[mesos]
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71129335
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user philipphoffmann commented on a diff in the pull request:
https://github.com/apache/spark/pull/13051#discussion_r71129270
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala
---
@@ -150,6 +150,7 @@ class
Github user philipphoffmann commented on a diff in the pull request:
https://github.com/apache/spark/pull/13051#discussion_r71129217
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -119,21 +122,25 @@ private[mesos]
Github user philipphoffmann commented on a diff in the pull request:
https://github.com/apache/spark/pull/13051#discussion_r71128900
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -408,8 +408,11 @@
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71127773
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14222
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62456/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14222
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14222
**[Test build #62456 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62456/consoleFull)**
for PR 14222 at commit
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/14150
Jenkins, test this please.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14222
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14222
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62454/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14222
**[Test build #62454 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62454/consoleFull)**
for PR 14222 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71126016
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14247
Seems reasonable.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14137
LGTM, will leave open for a bit for comments
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14137
Jenkins retest this please
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71124928
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -195,18 +202,40 @@ private[sql] class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71124645
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -195,18 +202,40 @@ private[sql] class
GitHub user zhengruifeng opened a pull request:
https://github.com/apache/spark/pull/14247
[MINOR] Remove unused arg in als.py
## What changes were proposed in this pull request?
The second arg in method `update()` is never used, so I removed it.
## How was this
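The cleanup described in the PR above can be sketched as follows. This is a hypothetical before/after illustration of removing a parameter the body never references; neither function is the actual `update()` from als.py, and all names here are made up:

```python
# Hypothetical sketch: dropping a parameter that is never used.
# These are illustrative functions, not the real als.py code.

def update_before(block, unused, ratings):
    # "unused" is never referenced, so callers pass it for nothing.
    return sum(ratings) / len(ratings)

def update_after(block, ratings):
    # Same behavior with the dead parameter removed.
    return sum(ratings) / len(ratings)

print(update_after("block-1", [1.0, 2.0, 3.0]))  # 2.0
```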
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14176
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62452/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14174
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14176
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14174
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62453/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14176
**[Test build #62452 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62452/consoleFull)**
for PR 14176 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14174
**[Test build #62453 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62453/consoleFull)**
for PR 14174 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71124186
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71124234
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user wesolowskim commented on the issue:
https://github.com/apache/spark/pull/14137
Is it sufficient right now or should I do something more?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71123633
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -195,18 +202,40 @@ private[sql] class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14136
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14136
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62455/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14136
**[Test build #62455 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62455/consoleFull)**
for PR 14136 at commit
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14243
Will this test always run, regardless of whether the "sparkr" profile is specified?
In other words, does R need to be installed for all Spark tests to pass?
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14246
OK, especially if you've searched for other similar LaTeX issues. It
probably doesn't even need a JIRA, but OK.
---
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/14245
Reused JIRA number SPARK-16303 and renamed the Scala/Java example files.
Python examples have not yet been updated to use the `include_example` tag. That
PR (#14098) is still in WIP status.
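For context, Spark's documentation pages embed example source files with a Liquid `include_example` tag, roughly of this shape (the file path below is illustrative, not taken from this PR):

```
{% include_example scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
```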
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14169
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14169
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62451/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14169
**[Test build #62451 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62451/consoleFull)**
for PR 14169 at commit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r71121047
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/security/ConfigurableCredentialManager.scala
---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71120468
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -195,18 +202,40 @@ private[sql] class
GitHub user WeichenXu123 opened a pull request:
https://github.com/apache/spark/pull/14246
[SPARK-16600][MLLib] fix some LaTeX formula syntax errors
## What changes were proposed in this pull request?
`\partial\x` ==> `\partial x`
`har{x_i}` ==> `hat{x_i}`
##
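Both fixes above are plain LaTeX syntax repairs: `\x` is not a control sequence, and `\har` is a typo for the accent command `\hat`. A minimal sketch of the corrected forms (the surrounding fraction is illustrative, not from the patched file):

```latex
% \partial\x fails with "Undefined control sequence \x";
% separating the tokens compiles:
\frac{\partial f}{\partial x}
% \har{x_i} is a typo for the accent command:
\hat{x_i}
```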
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14222
**[Test build #62456 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62456/consoleFull)**
for PR 14222 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14222
**[Test build #62454 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62454/consoleFull)**
for PR 14222 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14136
**[Test build #62455 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62455/consoleFull)**
for PR 14136 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14150#discussion_r71114168
--- Diff: mllib/src/test/java/org/apache/spark/ml/feature/JavaPCASuite.java
---
@@ -107,7 +107,11 @@ public VectorPair call(Tuple2 pair) {
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/12983
@MechCoder @srowen There is no performance difference. There is only one
little difference: Py2 has 'xrange' and 'range', while Py3 only has 'range'.
So unifying all cases to 'range' may be
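The Py2/Py3 difference mentioned above can be sketched like this; the shim is a common compatibility pattern, not code from the PR under discussion:

```python
import sys

# Python 2 had both xrange (lazy) and range (builds a full list);
# Python 3 dropped xrange and made range itself lazy.
if sys.version_info[0] == 2:
    range = xrange  # noqa: F821  -- shim only needed on Python 2

# Using range everywhere is portable and, on Python 3, has no
# performance cost relative to the removed xrange.
total = sum(i * i for i in range(5))
print(total)  # 30
```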
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14234
Ah OK I now see the nature of the problem in the original code blocks.
Great, that's an important fix. The rest look good. I wouldn't worry about the
image just for that; I don't know where the
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14238
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14238
OK. Merged to master/2.0 to match the previous changes
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14235
Looks pretty good now. Just couple minor comments.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r7198
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/security/AMDelegationTokenRenewer.scala
---
@@ -193,8 +200,14 @@ private[yarn] class
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14235#discussion_r7090
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/catalyst/LogicalPlanToSQLSuite.scala
---
@@ -17,15 +17,32 @@
package
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14086
Yeah interesting discussion for sure. It sounds like it's not valid to
assume TRUNCATE is OK if DROP/CREATE is OK. It sounds like it's useful to maybe
let the user choose via a little config option
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14235#discussion_r7037
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/catalyst/LogicalPlanToSQLSuite.scala
---
@@ -17,15 +17,32 @@
package
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14174
**[Test build #62453 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62453/consoleFull)**
for PR 14174 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14235
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14235
**[Test build #62450 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62450/consoleFull)**
for PR 14235 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14235
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62450/
Test FAILed.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/13912
This needs a rebase.
Pardon, but where does this affect parsing of date strings? I'm missing
that but I'm sure it's here.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14176
**[Test build #62452 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62452/consoleFull)**
for PR 14176 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13912#discussion_r71110451
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -328,6 +328,10 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14054
Interesting, well maybe `READ_UNCOMMITTED` is still a pretty good setting,
because the transaction here doesn't even read anything and doesn't care about
what it might read, but sounds like it could
Github user gurvindersingh commented on the issue:
https://github.com/apache/spark/pull/13950
@tgravescs @ajbozarth any update on this PR ?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14242#discussion_r71109485
--- Diff:
examples/src/main/scala/org/apache/spark/examples/SparkKMeans.scala ---
@@ -75,7 +74,10 @@ object SparkKMeans {
val data =
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14169
**[Test build #62451 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62451/consoleFull)**
for PR 14169 at commit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r71107536
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/security/AMDelegationTokenRenewer.scala
---
@@ -69,6 +71,9 @@ private[yarn] class
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14245
FWIW I think it'd be better to name the file SparkSQLExamples, rather than
SparkSqlExamples. It just feels weird to have SparkSql. And I'm talking about
both Scala and Python.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Thank you so much, @gatorsmile. And sorry for the late response. Definitely,
I have many things to do. Now it's my turn. Let's see how many of them I can
handle. :)
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14235
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14235
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62448/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14235
**[Test build #62450 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62450/consoleFull)**
for PR 14235 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14098
Please be careful with case sensitivity. It broke the release candidate
last time.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14235
**[Test build #62448 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62448/consoleFull)**
for PR 14235 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14179
LGTM.
btw, for future reference, you could either
```
# sep defaults to " "
paste("sparkPackages has no effect when using spark-submit or sparkR
shell,",
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r71104553
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -390,8 +393,22 @@ private[spark] class Client(
// Upload Spark and
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14207
Does it mean that if users do not issue a refresh when the table location is
changed, the schema will be wrong when Spark is restarted?
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14243
Looks good, interesting approach - this won't run on CRAN since it is part
of the Scala tests?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14244
cc @hvanhovell Is this test included in the other suite?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14245
**[Test build #62449 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62449/consoleFull)**
for PR 14245 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14245
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14245
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62449/
Test PASSed.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14243#discussion_r71104052
--- Diff: R/pkg/inst/tests/testthat/jarTest.R ---
@@ -16,17 +16,17 @@
#
library(SparkR)
-sparkR.session()
+sc <- sparkR.session()
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14207
@viirya The problem it tries to resolve is from the comment of @rxin in
another PR: https://github.com/apache/spark/pull/14148#issuecomment-232273833
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14116
Finished my first pass. The major concern is that the handling of
`INFORMATION_SCHEMA` is not clean to me; it looks hacky, and many holes are
caused by it. More test cases are needed.
---
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/14098
@wangmiao1981 I guess it's not ready yet. You may put a `[WIP]` tag in the
PR title while it's a work in progress and remove it when it is ready for review.
---