Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18730
**[Test build #81020 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81020/testReport)**
for PR 18730 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18266
**[Test build #81019 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81019/testReport)**
for PR 18266 at commit
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18971#discussion_r134659662
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/ReplayListenerSuite.scala ---
@@ -151,7 +153,10 @@ class ReplayListenerSuite extends SparkFunSuite
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18945
BTW, I think it'd be nicer if we could go with the approach above ^ (checking for nulls in the data and setting the correct type). I am okay with any form of the approach above as we have a decent
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19025#discussion_r134657843
--- Diff: core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java
---
@@ -74,7 +74,7 @@
* Maximum supported data page size (in bytes). In
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19025#discussion_r134657758
--- Diff: core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java
---
@@ -53,7 +53,7 @@
* retrieve the base object.
*
* This
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19021
cc @rednaxelafx
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/18953#discussion_r134657662
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcUtils.scala ---
@@ -0,0 +1,316 @@
+/*
+ * Licensed to the Apache Software
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19021
LGTM
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19021#discussion_r134657177
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -577,10 +577,10 @@ object SQLConf {
.doc("The maximum
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18730
Merged build finished. Test FAILed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18730#discussion_r134656903
--- Diff:
core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala ---
@@ -63,6 +65,19 @@ private[spark] class ChunkedByteBuffer(var chunks:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18730
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81018/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18730
**[Test build #81018 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81018/testReport)**
for PR 18730 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18730
ok to test
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18730
**[Test build #81018 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81018/testReport)**
for PR 18730 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18730
looks reasonable, do you have some performance numbers?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18266
**[Test build #81017 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81017/testReport)**
for PR 18266 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18266
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81017/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18266
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18266
**[Test build #81017 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81017/testReport)**
for PR 18266 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81013/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #81013 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81013/testReport)**
for PR 18953 at commit
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
@HyukjinKwon @viirya I replaced the functional transformations with a while
loop.
What do you think about this? Thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19017
**[Test build #81016 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81016/testReport)**
for PR 19017 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17373
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17373
Merging with master
Thanks @WeichenXu123 !
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18961
**[Test build #81011 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81011/testReport)**
for PR 18961 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18266
**[Test build #81015 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81015/testReport)**
for PR 18266 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18266
**[Test build #81015 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81015/testReport)**
for PR 18266 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17373
**[Test build #3895 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3895/testReport)**
for PR 17373 at commit
Github user caneGuy commented on the issue:
https://github.com/apache/spark/pull/18957
Yes, I agree with you @jiangxb1987, since we do not have retry logic for broadcast. Actually, my original idea is that when the length of the Array `localDirs` in DiskBlockManager is larger than 1, we can retry to
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16774
**[Test build #3896 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3896/testReport)**
for PR 16774 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18962
**[Test build #81014 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81014/testReport)**
for PR 18962 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18492
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18492
thanks, merging to master!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19025
Can one of the admins verify this patch?
---
GitHub user Geek-He opened a pull request:
https://github.com/apache/spark/pull/19025
[SPARK-21813][core] Modify TaskMemoryManager.MAXIMUM_PAGE_SIZE_BYTES
comments
## What changes were proposed in this pull request?
The variable "TaskMemoryManager.MAXIMUM_PAGE_SIZE_BYTES"
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19011
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/19011
Merged into master. Thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17373
**[Test build #3895 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3895/testReport)**
for PR 17373 at commit
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17373
Thanks! Will merge after rerunning tests
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
@HyukjinKwon That's a good point, thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #81012 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81012/testReport)**
for PR 18953 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18953
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81012/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #81013 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81013/testReport)**
for PR 18953 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18953
**[Test build #81012 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81012/testReport)**
for PR 18953 at commit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18962#discussion_r134645792
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -330,19 +332,21 @@ object SparkSubmit extends CommandLineUtils {
Github user vistep commented on the issue:
https://github.com/apache/spark/pull/19023
@falaki Hi, do you mean the pipeline models in the Scala API? I agree with you about this. However, as far as I can see, there are few feature transformer functions in SparkR, which makes building a
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18313
+1 for merging https://github.com/apache/spark/pull/16774 before proceeding with the other work since it will affect everything else.
@MLnick I'd be OK with adding options for best/all/k
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18973
---
Github user heary-cao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18961#discussion_r134643234
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -31,11 +31,22 @@ object InterpretedPredicate {
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18966#discussion_r134643155
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -582,6 +582,15 @@ object SQLConf {
.intConf
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18972
ping @cloud-fan
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18935
@squito, as the next step, I would expose these metrics with MetricsSystem; I'm thinking of exposing shuffle-related Netty memory usage. For RPC-related memory usage, I'm not fully
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/18973
Merging this to master.
---
Github user heary-cao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18961#discussion_r134641150
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -31,11 +31,22 @@ object InterpretedPredicate {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18961
**[Test build #81011 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81011/testReport)**
for PR 18961 at commit
Github user heary-cao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18961#discussion_r134639568
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -2029,4 +2029,15 @@ class DataFrameSuite extends QueryTest with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18973
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81010/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18973
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18973
**[Test build #81010 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81010/testReport)**
for PR 18973 at commit
Github user janewangfb commented on a diff in the pull request:
https://github.com/apache/spark/pull/18492#discussion_r134637603
--- Diff:
core/src/test/scala/org/apache/spark/ExecutorAllocationManagerSuite.scala ---
@@ -188,6 +188,40 @@ class ExecutorAllocationManagerSuite
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r134636110
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -138,6 +138,54 @@ class DetermineTableStats(session:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19012
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81009/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19012
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19012
**[Test build #81009 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81009/testReport)**
for PR 19012 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r134635562
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -138,6 +138,54 @@ class DetermineTableStats(session:
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r134635355
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -138,6 +138,54 @@ class DetermineTableStats(session:
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17849
Merged to master, thanks everyone :) (There is also a follow up JIRA
https://issues.apache.org/jira/browse/SPARK-21812 for explicitly defining all
of the params in Python).
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18492#discussion_r134634721
--- Diff:
core/src/test/scala/org/apache/spark/ExecutorAllocationManagerSuite.scala ---
@@ -188,6 +188,40 @@ class ExecutorAllocationManagerSuite
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17849
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18968
SGTM
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18971#discussion_r134633585
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/ReplayListenerSuite.scala ---
@@ -151,7 +153,10 @@ class ReplayListenerSuite extends
Github user jinxing64 closed the pull request at:
https://github.com/apache/spark/pull/18866
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18973
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81008/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18973
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18866
@cloud-fan
Thanks for the reply. Looks like #19001 continues working on this and it's more comprehensive. I will close this PR for now.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18973
**[Test build #81008 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81008/testReport)**
for PR 18973 at commit
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19021#discussion_r134631466
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -577,10 +577,10 @@ object SQLConf {
.doc("The maximum
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18784
@ArtRand @susanxhuynh pls review.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18975
Will review it tonight.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18961
Will review it tonight.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/18896
@jkbradley OK. (Can this be merged directly to 2.2?)
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18896
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18896
@WeichenXu123 would you mind sending a backport PR for 2.2?
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18896
Merging with master and branch-2.2
Thanks @WeichenXu123 @MLnick @sethah !
---
Github user falaki commented on the issue:
https://github.com/apache/spark/pull/19023
I suggest we look at this problem holistically. Basically, what is missing is MLlib pipelines.
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/15435#discussion_r134628661
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -1324,90 +1350,136 @@ private[ml] class
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/15435#discussion_r134627545
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -1324,90 +1354,147 @@ private[ml] class
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18931
@maropu Yes, I agree. This PR changes the code-gen'd code. It is not going to detect the length of the generated code and decide whether to change it or not. That's because the generation of code is a
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18968
Maybe we can still have a case for ListQuery, but it is simpler and mainly for a better message?
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18968
@dilipbiswal @gatorsmile Regarding the error message, I do think so.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19001
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81005/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19001
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19001
**[Test build #81005 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81005/testReport)**
for PR 19001 at commit
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r134622727
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -154,13 +154,13 @@ case class StaticInvoke(
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
Found this in the Janino documentation; it explains the need for explicit casting: "Type arguments: Are parsed, but otherwise ignored. The most significant restriction that follows is that you must