Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14818
We may want to use a different name? A glmnet-related name could be confusing
if it is actually only multiclass logistic regression.
---
If your project is set up for it, you can reply to this email and have
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/13584
Sounds good. That's also what we meant.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/13584
Yeah, I was going to say that we need to handle cases where `labels_output`
is also used. We can just add a numeric suffix, maybe?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14753
Thanks. Overall looks good. I am merging this to master. Let me tweak the
interface later.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14811
@mallman Could you close this PR since GitHub won't close it automatically,
please?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14814
**[Test build #6 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14814 at commit
Github user Parth-Brahmbhatt commented on the issue:
https://github.com/apache/spark/pull/14817
@hvanhovell The behavior in case this `fallbackToHdfs` is not enabled (and
by default it is not enabled for performance reasons) is to return the value
specified via
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/13584
@shivaram Does it sound reasonable to you? Just discussed this with
@jkbradley.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14811
Thanks! Merging into branch 2.0.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14811
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64443 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64443/consoleFull)**
for PR 14816 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64437/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14811
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64438/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14811
**[Test build #64438 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64438/consoleFull)**
for PR 14811 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14815
**[Test build #64437 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64437/consoleFull)**
for PR 14815 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14816
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14816
test this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14710
**[Test build #64442 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64442/consoleFull)**
for PR 14710 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14710
wow a core dump in the build. retest this please
---
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/13584
Sure, I'll try to scan through all the MLlib algorithms
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14813
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14818
**[Test build #64441 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64441/consoleFull)**
for PR 14818 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14818
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64441/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14818
**[Test build #64441 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64441/consoleFull)**
for PR 14818 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14818
Merged build finished. Test FAILed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14813
Merging to master.
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76341601
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14817
@Parth-Brahmbhatt we are currently working on Cost-Based Optimization in
Spark. An important input will be the actual size of the table. Having partial
statistics (what you are suggesting) will not
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/14818
[SPARK-17157][SPARKR][WIP]: Add multiclass logistic regression SparkR
Wrapper
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14809
https://github.com/apache/spark/pull/13060 This change is based on another
PR. If users specify the location in `CREATE TABLE`, we always set the table
type to `EXTERNAL`.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14793
@rxin How about this one? This is porting our existing test cases, instead
of HiveCompatibilitySuite
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14782
I see. Thanks! Let me close this one.
---
Github user gatorsmile closed the pull request at:
https://github.com/apache/spark/pull/14782
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76340251
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -518,15 +573,30 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76340181
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -518,15 +573,30 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76339927
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/13584
@keypointt Can we keep searching (in a random or sequential way) until an
unused column name has been found?
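The suffix-search idea discussed in this thread could look like the following minimal sketch. This is a hypothetical helper for illustration only, not code from the PR; the method name `uniqueColumnName` is an assumption.

```java
import java.util.Set;

public class ColumnNames {
    // Hypothetical helper illustrating the suggestion above: append an
    // increasing numeric suffix until the candidate no longer clashes
    // with an existing column name.
    public static String uniqueColumnName(String base, Set<String> existing) {
        if (!existing.contains(base)) {
            return base;
        }
        int suffix = 1;
        // The loop always terminates: only finitely many names can clash.
        while (existing.contains(base + suffix)) {
            suffix++;
        }
        return base + suffix;
    }
}
```

For example, with existing columns `label` and `label1`, the helper would return `label2`.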
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64435/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Merged build finished. Test PASSed.
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76339716
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14637
**[Test build #64435 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64435/consoleFull)**
for PR 14637 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14638
**[Test build #64440 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64440/consoleFull)**
for PR 14638 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14638
Hi, @rxin. Until now, I couldn't find a more suitable class in the `hive`
package. I added the logic to check the table's InputFormat. Now, the
`skip.header.line.count` option is applied for the table
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/14728#discussion_r76339023
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamOptions.scala
---
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14817
Can one of the admins verify this patch?
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76338880
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76338652
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -279,9 +280,14 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76338535
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -279,9 +280,14 @@ case class HashAggregateExec(
GitHub user Parth-Brahmbhatt opened a pull request:
https://github.com/apache/spark/pull/14817
[SPARK-17247][SQL]: when calculating size of a relation from hdfs, th…
## What changes were proposed in this pull request?
when calculating the size of a relation from hdfs, the size
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64436/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14815
**[Test build #64436 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64436/consoleFull)**
for PR 14815 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14655
Thank you!
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14815#discussion_r76336122
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1386,15 +1386,17 @@ object EliminateOuterJoin
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11192
Can one of the admins verify this patch?
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76335953
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64439 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64439/consoleFull)**
for PR 14816 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64439/
Test FAILed.
---
Github user Parth-Brahmbhatt commented on the issue:
https://github.com/apache/spark/pull/14655
@gatorsmile not sure if it will simplify much in this case, as most of the
complexity is in figuring out which partitions can be pruned, which I don't
think will go away. We will rely on hive
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14777
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14813
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14813
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64433/
Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14815#discussion_r76334971
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1386,15 +1386,17 @@ object EliminateOuterJoin
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76334925
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14813
**[Test build #64433 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64433/consoleFull)**
for PR 14813 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14777
LGTM - merging to master/2.0. Thanks!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64439 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64439/consoleFull)**
for PR 14816 at commit
Github user Parth-Brahmbhatt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14720#discussion_r76334577
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -865,6 +865,16 @@ class HiveQuerySuite extends
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76334586
--- Diff:
yarn/src/test/scala/org/apache/spark/security/IOEncryptionSuite.scala ---
@@ -0,0 +1,332 @@
+/*
+ * Licensed to the Apache Software
Github user Parth-Brahmbhatt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14720#discussion_r76334535
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -87,6 +88,9 @@ private[hive] class HiveClientImpl(
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/8880
> They could be different sizes (different config options control that). We
could change it so both use the same configs / same code to generate the keys,
but in general if they're used for
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14816
[SPARK-17245] [SQL] [BRANCH-1.6] Do not rely on Hive's session state to
retrieve HiveConf
## What changes were proposed in this pull request?
Right now, we rely on Hive's `SessionState.get()` to
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76334131
--- Diff:
yarn/src/test/scala/org/apache/spark/security/IOEncryptionSuite.scala ---
@@ -0,0 +1,332 @@
+/*
+ * Licensed to the Apache Software
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76332385
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any,
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76332258
--- Diff: docs/configuration.md ---
@@ -559,6 +559,39 @@ Apart from these, the following properties are also
available, and may be useful
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/8880
> Why do we need to generate a new key for IO encryption?
They could be different sizes (different config options control that). We
could change it so both use the same configs / same code to
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
`args.primaryResource` is good for this purpose. I can make a change similar
to my initial commit but check against `args.primaryResource`.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14812
Merged build finished. Test PASSed.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/8880
Looks pretty good overall. Just a high-level question: why do we need to
generate a new key for IO encryption? Can we just use `SecurityManager.getSecretKey`?
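The answer later in the thread is that the IO-encryption key and the existing secret can have different sizes, driven by different config options. A minimal sketch of generating a dedicated key of a configured size, using the standard `javax.crypto` API. This is an illustration of the point under discussion, not Spark's actual `CryptoStreamUtils` code, and the method name `createIoKey` is an assumption:

```java
import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public class IoKeySketch {
    // Generate a dedicated IO-encryption key whose size comes from its own
    // config value, independent of any other secret in the application.
    public static byte[] createIoKey(int keySizeBits) throws NoSuchAlgorithmException {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(keySizeBits); // e.g. 128 or 256, read from a config option
        return keyGen.generateKey().getEncoded();
    }
}
```

A 128-bit request yields a 16-byte key, a 256-bit request a 32-byte key, which is why reusing a secret of a fixed size would not cover both configurations.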
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14812
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64432/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14812
**[Test build #64432 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64432/consoleFull)**
for PR 14812 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76328447
--- Diff:
core/src/main/scala/org/apache/spark/security/CryptoStreamUtils.scala ---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14796
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14796
LGTM - merging to master. Thanks!
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76326911
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any,
Github user markgrover commented on the issue:
https://github.com/apache/spark/pull/14270
Sure.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14815
**[Test build #64437 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64437/consoleFull)**
for PR 14815 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14811
**[Test build #64438 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64438/consoleFull)**
for PR 14811 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14811
retest this please
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14811
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14811
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64431/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14811
**[Test build #64431 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64431/consoleFull)**
for PR 14811 at commit
Github user Ianwww commented on the issue:
https://github.com/apache/spark/pull/14270
Can this configuration be set in `spark-defaults.conf` as
`spark.metrics.namespace=${spark.app.name}`?
@markgrover
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76323741
--- Diff:
core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala ---
@@ -103,16 +108,44 @@ private[spark] class
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76323295
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -522,8 +521,9 @@ private[spark] class ExternalSorter[K, V, C](
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76323076
--- Diff:
core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala ---
@@ -103,16 +108,44 @@ private[spark] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14815
**[Test build #64436 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64436/consoleFull)**
for PR 14815 at commit
Github user sameeragarwal commented on the issue:
https://github.com/apache/spark/pull/14815
cc @brkyvz who found this bug
---