Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19271
**[Test build #81907 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81907/testReport)**
for PR 19271 at commit
[`94b63fb`](https://github.com/apache/spark/commit/9
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19271
Merged build finished. Test FAILed.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19271
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81907/
Test FAILed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15544#discussion_r139599361
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervals.scala
---
@@ -0,0 +1,232 @@
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
Updated with zstd-jni version 1.3.1-1 and also updated the license to
include the zstd-jni license. @srowen - How does that look from a licensing
perspective?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18805
**[Test build #81911 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81911/testReport)**
for PR 18805 at commit
[`38d4840`](https://github.com/apache/spark/commit/38
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18805
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81911/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18805
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18805
**[Test build #81911 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81911/testReport)**
for PR 18805 at commit
[`38d4840`](https://github.com/apache/spark/commit/3
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15544#discussion_r139600490
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervalsSuite.scala
---
@@ -0,0 +1,207 @
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15544#discussion_r139600676
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervalsSuite.scala
---
@@ -0,0 +1,207 @
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19130
**[Test build #81912 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81912/testReport)**
for PR 19130 at commit
[`9a2c8c7`](https://github.com/apache/spark/commit/9a
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19243#discussion_r139601709
--- Diff: R/pkg/R/DataFrame.R ---
@@ -984,12 +984,12 @@ setMethod("unique",
#' of the total count of of the given SparkDataFrame.
#'
#' @p
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19243#discussion_r139601790
--- Diff: R/pkg/R/DataFrame.R ---
@@ -998,33 +998,39 @@ setMethod("unique",
#' sparkR.session()
#' path <- "path/to/file.json"
#' df <- re
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19243#discussion_r139602407
--- Diff: R/pkg/R/DataFrame.R ---
@@ -998,33 +998,39 @@ setMethod("unique",
#' sparkR.session()
#' path <- "path/to/file.json"
#' df <- re
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19243#discussion_r139602239
--- Diff: R/pkg/R/DataFrame.R ---
@@ -998,33 +998,39 @@ setMethod("unique",
#' sparkR.session()
#' path <- "path/to/file.json"
#' df <- re
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
@WeichenXu123 I see. You're correct that this change is not Java compatible.
Thanks for pointing that out. I'm merging the changes into `Bucketizer`.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
Btw, the reason this change isn't Java compatible is not mainly the
addition of a trait to `Bucketizer`. It looks like it is the params
setter methods, such as `setInputCols`.
---
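A rough illustration of the setter issue being discussed, in plain Java (class and method names here are illustrative, not the actual Spark ML classes): a Scala setter declared in a trait as returning `this.type` erases, for Java callers, to the trait's own type, so chained calls from Java lose the concrete class. The same shape can be sketched with an interface, along with the usual fix of re-declaring the setter in the concrete class with a covariant return type.

```java
// Hypothetical sketch, not the actual Spark code. If setInputCols were
// only declared in the super-type returning HasInputCols, a Java caller
// could not chain it with Bucketizer-specific setters:
//     new Bucketizer().setInputCols(cols).setSplits(splits);  // would not compile
// which is roughly how a Scala trait setter returning `this.type`
// appears to Java after erasure.
interface HasInputCols {
    HasInputCols setInputCols(String[] cols);  // returns the interface type
}

class Bucketizer implements HasInputCols {
    private String[] inputCols = new String[0];
    private double[] splits = new double[0];

    @Override
    public Bucketizer setInputCols(String[] cols) {  // covariant return restores chaining
        this.inputCols = cols;
        return this;
    }

    public Bucketizer setSplits(double[] s) {
        this.splits = s;
        return this;
    }

    public String[] getInputCols() { return inputCols; }
}
```

With the covariant override in place, `new Bucketizer().setInputCols(...).setSplits(...)` compiles from Java, which is why moving or re-declaring such setters in the concrete class resolves the compatibility problem.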
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19229
@WeichenXu123 I'm not sure I understand it correctly. This change only
replaces the chain of `withColumn` calls with a single pass of `withColumns`. We don't have an
RDD version for this, so I'm not sure what version
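The difference being discussed can be sketched in plain Java (this is an analogy with rows as maps, not the Spark `DataFrame` API): each chained single-column update rewrites every row once, so N added columns mean N passes over the data, while a batched update adds all N columns in a single pass.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative analogy only: "rows" are maps from column name to value.
class ColumnUpdates {
    // N chained updates: one full traversal of the rows per new column.
    static List<Map<String, Integer>> withColumnChain(
            List<Map<String, Integer>> rows,
            Map<String, Function<Map<String, Integer>, Integer>> updates) {
        List<Map<String, Integer>> result = rows;
        for (Map.Entry<String, Function<Map<String, Integer>, Integer>> u : updates.entrySet()) {
            List<Map<String, Integer>> next = new ArrayList<>();
            for (Map<String, Integer> row : result) {
                Map<String, Integer> copy = new LinkedHashMap<>(row);
                copy.put(u.getKey(), u.getValue().apply(row));
                next.add(copy);
            }
            result = next;  // the whole dataset is rebuilt for each column
        }
        return result;
    }

    // One batched update: a single traversal adds every new column.
    static List<Map<String, Integer>> withColumns(
            List<Map<String, Integer>> rows,
            Map<String, Function<Map<String, Integer>, Integer>> updates) {
        List<Map<String, Integer>> result = new ArrayList<>();
        for (Map<String, Integer> row : rows) {
            Map<String, Integer> copy = new LinkedHashMap<>(row);
            for (Map.Entry<String, Function<Map<String, Integer>, Integer>> u : updates.entrySet()) {
                copy.put(u.getKey(), u.getValue().apply(row));  // each column reads the original row
            }
            result.add(copy);
        }
        return result;
    }
}
```

For updates that don't depend on each other, the two produce the same result; the batched form simply avoids building an intermediate projection per column, which is the spirit of replacing a `withColumn` chain with one `withColumns` call.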
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/17819
Yes, you could move only `setInputCols` into the outer class to resolve this
issue, but I prefer merging it together. I think we can unify the `transform`
method. (First we check the param `inputCol` an
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
@WeichenXu123 Yeah, I'm merging it. I just want to clarify that adding a trait to
a class doesn't necessarily make it Java incompatible. :) Thanks.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18704#discussion_r139605958
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/vectorized/ColumnarBatchSuite.scala
---
@@ -1311,4 +1314,172 @@ class ColumnarBatchSuite
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18704
LGTM, I think eventually we should simplify the columnar cache module and
codegen most of it to reduce code duplication.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19211#discussion_r139606603
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/AsyncEventQueue.scala ---
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Found
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19229
Oh, that's what was done in the old PR #18902. (Because the RDD version
is not in the master branch, only a personal impl; sorry for putting the wrong link, the
code link is here:
https://github.com/apa
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
>But if we restart the RM, then the lost containers in the NM will be
reported to the RM as lost again because of recovery
Since you have already enabled RM and NM recovery, IIUC the failure of the RM
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139607620
--- Diff: docs/running-on-yarn.md ---
@@ -212,6 +212,15 @@ To use a custom metrics.properties for the application
master and executors, upd
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139607663
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v =>
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
And based on your fix:
1. It looks like you don't have a retention mechanism, which will potentially
introduce a memory leak.
2. I don't see your logic to avoid requesting new containers, is yo
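On the retention point above, one common shape for such a mechanism (a sketch only, not the PR's actual code) is a size-bounded insertion-order map, so per-container bookkeeping is evicted once a cap is reached instead of growing without bound:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: a map that evicts its oldest entry once it
// exceeds maxEntries, bounding the memory used for tracked state.
class BoundedRetentionMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedRetentionMap(int maxEntries) {
        super(16, 0.75f, false);  // false => insertion-order iteration
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;  // drop the oldest entry past the cap
    }
}
```

`LinkedHashMap.removeEldestEntry` is invoked on each `put`, so the cap is enforced automatically; an access-order variant (passing `true` to the constructor) would instead behave as a small LRU cache.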
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139608374
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v =>
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19135
thanks, merging to master!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139609285
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v =>
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19135
---