Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19273
retest this please
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19130
**[Test build #81914 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81914/testReport)**
for PR 19130 at commit
Github user buryat commented on a diff in the pull request:
https://github.com/apache/spark/pull/19266#discussion_r139616346
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/Word2Vec.scala ---
@@ -344,7 +344,7 @@ class Word2Vec extends Serializable with Logging {
Github user buryat commented on a diff in the pull request:
https://github.com/apache/spark/pull/19266#discussion_r139616411
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/BlockMatrix.scala
---
@@ -304,8 +304,8 @@ class BlockMatrix @Since("1.3.0") (
Github user buryat commented on a diff in the pull request:
https://github.com/apache/spark/pull/19266#discussion_r139616165
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/CompactBuffer.scala ---
@@ -126,22 +126,20 @@ private[spark] class CompactBuffer[T: ClassTag]
Github user buryat commented on a diff in the pull request:
https://github.com/apache/spark/pull/19266#discussion_r139615624
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/array/LongArray.java ---
@@ -39,7 +39,7 @@
private final long length;
Github user buryat commented on a diff in the pull request:
https://github.com/apache/spark/pull/19266#discussion_r139615418
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/PartitionedPairBuffer.scala
---
@@ -96,5 +96,5 @@ private[spark] class
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/19068
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19271
**[Test build #81913 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81913/testReport)**
for PR 19271 at commit
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18754#discussion_r139612110
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowWriter.scala
---
@@ -224,6 +226,25 @@ private[arrow] class DoubleWriter(val
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19273
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81908/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19273
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19130
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81912/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19068
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81910/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19068
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19130
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19273
**[Test build #81908 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81908/testReport)**
for PR 19273 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19068
**[Test build #81910 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81910/testReport)**
for PR 19068 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19130
**[Test build #81912 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81912/testReport)**
for PR 19130 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19135
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139609285
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v =>
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19135
thanks, merging to master!
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139608374
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v =>
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
And based on your fix:
1. It looks like you don't have a retention mechanism, which will potentially
introduce a memory leak.
2. I don't see your logic to avoid requesting new containers, is
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139607663
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v =>
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139607620
--- Diff: docs/running-on-yarn.md ---
@@ -212,6 +212,15 @@ To use a custom metrics.properties for the application
master and executors, upd
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
> But if we restart the RM, then the lost containers in the NM will be
reported to the RM as lost again because of recovery

Since you already enabled RM and NM recovery, IIUC the failure of
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19229
Oh, that's what was done in the old PR #18902. (Because the RDD version
is not in the master branch, only a personal impl here; sorry for putting the
wrong link, the code link is here:
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19211#discussion_r139606603
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/AsyncEventQueue.scala ---
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18704
LGTM, I think eventually we should simplify the columnar cache module and
codegen most of it to reduce code duplication.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18704#discussion_r139605958
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/vectorized/ColumnarBatchSuite.scala
---
@@ -1311,4 +1314,172 @@ class
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
@WeichenXu123 Yeah, I'm merging it. I just want to clarify that adding a trait
to a class doesn't necessarily make it Java-incompatible. :) Thanks.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/17819
Yes, you could move only `setInputCols` into the outer class to resolve this
issue, but I prefer to merge it together. I think we can unify the `transform`
method. (First we check the param `inputCol`
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19229
@WeichenXu123 I'm not sure I understand it correctly. This change only
replaces the chain of `withColumn` calls with a single pass of `withColumns`.
We don't have an RDD version of this, so I'm not sure what version
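The chain-vs-single-pass distinction above can be modeled with a toy immutable frame, where each `withColumn` call produces a whole new value (analogous to adding one projection per call) while a `withColumns`-style call adds every column in one step. `ToyFrame` and its `copies` counter are illustrative assumptions, not Spark's API:

```scala
// Toy sketch: counts how many new frames are built per approach.
final case class ToyFrame(cols: Map[String, Seq[Int]], copies: Int = 0) {
  // One new frame per added column, like a chain of withColumn calls.
  def withColumn(name: String, values: Seq[Int]): ToyFrame =
    ToyFrame(cols + (name -> values), copies + 1)

  // One new frame for the whole batch, like a single withColumns pass.
  def withColumns(newCols: Map[String, Seq[Int]]): ToyFrame =
    ToyFrame(cols ++ newCols, copies + 1)
}
```

Both routes end with the same columns; the batched route just builds fewer intermediate values, which is the shape of the optimization being discussed.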
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
Btw, the reason this change isn't Java-compatible is not mainly the addition
of a trait to `Bucketizer`; it looks like it is because of the param setter
methods such as `setInputCols`.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
@WeichenXu123 I see, that's correct: this change is not Java-compatible.
Thanks for pointing that out. I'm merging the changes into `Bucketizer`.
---
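The setter pattern at issue in the thread above looks roughly like this. A hedged sketch, assuming a trait-hosted param setter that returns `this.type`; `HasMultipleInputs` and `SimpleBucketizer` are made-up illustrative names, not Spark's actual classes:

```scala
// Sketch of a param setter defined in a trait. From Scala, this.type means
// the setter returns the concrete subclass, so calls chain naturally. From
// Java, the erased return type is the trait, which is the kind of friction
// that motivates moving such setters into the concrete class.
trait HasMultipleInputs {
  protected var inputCols: Array[String] = Array.empty

  def setInputCols(cols: Array[String]): this.type = {
    inputCols = cols
    this
  }
}

class SimpleBucketizer extends HasMultipleInputs {
  def describe: String = inputCols.mkString(",")
}
```

In Scala the chained call `new SimpleBucketizer().setInputCols(...)` still has static type `SimpleBucketizer`, which is exactly what `this.type` buys and what plain Java callers lose.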