Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16326
Instead of appending the new rows, Hive will overwrite the previous files
at the specified location, even when we use `INSERT INTO`. See the output:
```
hive> create table test(c1
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16326
Even if we want to make it consistent with managed Hive serde tables, the
existing behavior still does not match.
```
scala> spark.sql(s"create table newTab (fieldOne long, partCol
Github user wzhfy closed the pull request at:
https://github.com/apache/spark/pull/15544
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/16323
cc @rxin @cloud-fan
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16326
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16326
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70315/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16326
**[Test build #70315 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70315/testReport)**
for PR 16326 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16326
cc @ericl @cloud-fan @mallman
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16326
**[Test build #70315 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70315/testReport)**
for PR 16326 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16326
[SPARK-18915] [SQL] Automatic Table Repair when Creating a Partitioned Data
Source Table with a Specified Path
### What changes were proposed in this pull request?
In Spark 2.1 (the default
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15983
Regarding the concern about the repair cost, I think we still face the same
issue. Each time we append an extra row, we also repair the table, right?
That is still expensive.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15983
Yeah, table repair is expensive, but this causes an external behavior
change. I tried it in 2.0. It can show the whole data source table without
repairing the table. In 2.1, it returns empty
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16290
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70314/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16290
**[Test build #70314 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70314/testReport)**
for PR 16290 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16290
Merged build finished. Test FAILed.
---
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/11211#discussion_r92932905
--- Diff: python/pyspark/context.py ---
@@ -163,10 +163,8 @@ def _do_init(self, master, appName, sparkHome,
pyFiles, environment, batchSize,
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16252
@srowen I have restored it.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15515#discussion_r92932400
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
@@ -0,0 +1,137 @@
+/*
+ * Licensed to
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/15983
It's the Hive behavior not to repair the table. Otherwise, `create table` can
have an unbounded cost if there are many partitions.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16290
**[Test build #70314 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70314/testReport)**
for PR 16290 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16290
Thanks @gatorsmile - Addressed your comments.
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r92932104
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -964,10 +970,16 @@ object StaticSQLConf {
}
}
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r92932105
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -221,6 +221,19 @@ class SQLConfSuite extends QueryTest with
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r92932097
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -819,7 +819,13 @@ private[sql] class SQLConf extends Serializable with
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r92932095
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -221,6 +221,19 @@ class SQLConfSuite extends QueryTest with
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Hi, @gatorsmile .
Could you review this PR?
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r92931894
--- Diff: R/pkg/inst/tests/testthat/test_context.R ---
@@ -72,6 +72,20 @@ test_that("repeatedly starting and stopping
SparkSession", {
}
})
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16312
Hmm, from what I can see, I'm pretty sure it used to work with
sparkR.session.stop() calls for enableHiveSupport = F sessions. Do we know why
this is suddenly causing issues?
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16301
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16301
merged to master and branch-2.1, thanks
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16325
cc @cloud-fan
---
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/16194
I lean towards doing nothing, unless we can find a solution that is both
generic AND lists all/only relevant information.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16324
This is to allow using jars accessible through the HDFS API, not just HDFS
itself, right? In that case it sounds like a good idea too ... but we need a
test case for it.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16325
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16325
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70313/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16325
**[Test build #70313 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70313/testReport)**
for PR 16325 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16282
retest this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16313#discussion_r92928331
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -459,7 +459,7 @@ abstract class HadoopFsRelationTest
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16314
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16314
**[Test build #70312 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70312/testReport)**
for PR 16314 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16314
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70312/
Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16313#discussion_r92928215
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -133,6 +133,16 @@ case class BucketSpec(
if
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16313#discussion_r92928127
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -157,39 +156,74 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16313#discussion_r92928101
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -157,39 +156,74 @@ case class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16252
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16252
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70308/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16252
**[Test build #70308 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70308/testReport)**
for PR 16252 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16312
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70310/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16312
**[Test build #70310 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70310/testReport)**
for PR 16312 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16312
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16312
**[Test build #70310 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70310/testReport)**
for PR 16312 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16313
**[Test build #70309 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70309/testReport)**
for PR 16313 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16309
**[Test build #3509 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3509/testReport)**
for PR 16309 at commit
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
The tests run locally on my laptop finished after... `7431 s`, which is about
2 hours (!)
```
[error] (sql/test:test) sbt.TestsFailedException: Tests unsuccessful
[error]
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/16301
LGTM
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16309
**[Test build #3509 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3509/testReport)**
for PR 16309 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16295
CC @jkbradley for approval
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16313
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70307/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16313
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16313
**[Test build #70307 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70307/testReport)**
for PR 16313 at commit
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
@srowen Please help as I'm stuck with the `OutOfMemoryError: GC overhead
limit exceeded` error. Should Jenkins run the tests with 6g?
What's even more interesting is that the tests
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16252
**[Test build #70308 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70308/testReport)**
for PR 16252 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15996
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70305/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15996
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15996
**[Test build #70305 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70305/testReport)**
for PR 15996 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15915
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15915
Merged to master
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16194
To move towards a resolution, I'd either implement one generic `toString`
that enumerates params, or not do anything at this stage. Up to you
@zhengruifeng
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16252
Thanks, so @wangyum I think you can restore that `!unrolled.hasNext` check.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16323
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70304/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16323
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16323
**[Test build #70304 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70304/testReport)**
for PR 16323 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16309
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70306/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16309
Merged build finished. Test FAILed.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15018#discussion_r92920572
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/regression/IsotonicRegression.scala
---
@@ -328,74 +336,68 @@ class IsotonicRegression private
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16309
**[Test build #70306 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70306/testReport)**
for PR 16309 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15018#discussion_r92920619
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/regression/IsotonicRegression.scala
---
@@ -328,74 +336,68 @@ class IsotonicRegression private
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15018#discussion_r92920879
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/regression/IsotonicRegression.scala
---
@@ -328,74 +336,68 @@ class IsotonicRegression private
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16313
**[Test build #70307 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70307/testReport)**
for PR 16313 at commit
Github user jaceklaskowski commented on the issue:
https://github.com/apache/spark/pull/16309
Rebasing with master to trigger tests on Jenkins...(hoping this time they
pass)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16309
**[Test build #70306 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70306/testReport)**
for PR 16309 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15996
**[Test build #70305 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70305/testReport)**
for PR 15996 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16323
**[Test build #70304 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70304/testReport)**
for PR 16323 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16323
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16323
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70300/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16323
**[Test build #70300 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70300/testReport)**
for PR 16323 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16312
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16312
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70302/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16312
**[Test build #70302 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70302/testReport)**
for PR 16312 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16301
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16301
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70303/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16301
**[Test build #70303 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70303/testReport)**
for PR 16301 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16301
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70301/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16301
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16301
**[Test build #70301 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70301/testReport)**
for PR 16301 at commit
Github user shenh062326 commented on the issue:
https://github.com/apache/spark/pull/16324
Currently, we can create a UDF with a jar in HDFS, but using it fails.
The Spark driver won't download the jar from HDFS; it only adds the path to
the classLoader.
If we don't support
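The distinction described above — registering a remote path versus actually fetching the artifact — can be sketched with a minimal, hypothetical Python analogy (this is not Spark's actual code; the file names and directories are made up for illustration): appending a path the loader cannot reach does not make the code available, while copying the file to a reachable location first does.
```python
import importlib
import pathlib
import shutil
import sys
import tempfile

# A module stored in a "remote" location the loader cannot see,
# mirroring a UDF jar that lives in HDFS.
remote_store = pathlib.Path(tempfile.mkdtemp(prefix="remote_"))
local_dir = pathlib.Path(tempfile.mkdtemp(prefix="local_"))
(remote_store / "my_udf.py").write_text("def apply(x):\n    return x + 1\n")

# Merely registering an unreachable path (like adding an hdfs:// path
# to the classLoader) does not make the module loadable.
sys.path.append(str(remote_store / "does-not-exist"))
try:
    importlib.import_module("my_udf")
    loaded_without_download = True
except ImportError:
    loaded_without_download = False

# "Downloading" (here: copying) the artifact to a reachable location
# first is what actually makes it usable.
shutil.copy(remote_store / "my_udf.py", local_dir / "my_udf.py")
sys.path.append(str(local_dir))
my_udf = importlib.import_module("my_udf")
print(loaded_without_download, my_udf.apply(41))  # False 42
```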
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16312
**[Test build #70302 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70302/testReport)**
for PR 16312 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16301
**[Test build #70303 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70303/testReport)**
for PR 16301 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16324
First, I am not sure whether we should support reading UDF jars from HDFS.
Second, if we want to support it, the best reviewers are @zsxwing @tdas. They
added the file
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16301
**[Test build #70301 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70301/testReport)**
for PR 16301 at commit