Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18025
This is what the `'column_aggregate_functions.Rd'` doc looks like:
![image](https://cloud.githubusercontent.com/assets/11082368/26190195/fd353224-3b5c-11e7-9a78-2607cc665f49.png)
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18000
LGTM. Is Parquet going to fix it in the future, or is there an official
way to support filter pushdown for column names containing a dot?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117169168
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -232,7 +446,8 @@ class StatisticsSuite extends
StatisticsCollectionT
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117168812
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -215,6 +218,217 @@ class StatisticsSuite extends
StatisticsCollectio
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117156770
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -414,6 +415,50 @@ private[hive] class HiveClientImpl(
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117169136
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveComparisonTest.scala
---
@@ -192,13 +192,7 @@ abstract class HiveComparisonTest
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117172781
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -215,6 +218,217 @@ class StatisticsSuite extends
StatisticsCollectio
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18002#discussion_r117174531
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/ColumnStats.scala
---
@@ -53,219 +53,299 @@ private[columnar] sealed trait Colu
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/17996#discussion_r117174667
--- Diff: docs/ml-guide.md ---
@@ -72,35 +72,26 @@ MLlib is under active development.
The APIs marked `Experimental`/`DeveloperApi` may change in future
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18000
Based on the discussion in https://github.com/apache/parquet-mr/pull/361,
it does not sound like Parquet will support it in the short term. We might
need to live with it for a long time.
---
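As context for the dot-in-column-name problem discussed above, here is a minimal, hypothetical sketch (plain Python, not Spark or Parquet code; the `resolve` helper and toy schema are invented for illustration) of why a literal dot is ambiguous once a pushdown layer addresses fields by dotted paths:

```python
# Toy illustration of the ambiguity: a naive pushdown layer splits
# predicate paths on '.', so it cannot tell a top-level column literally
# named "a.b" apart from field "b" inside struct column "a".

def resolve(schema, path):
    """Resolve a dotted path against a nested schema (dicts model structs)."""
    node = schema
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

schema = {
    "a": {"b": "int"},   # struct column `a` with field `b`
    "a.b": "string",     # top-level column literally named `a.b`
}

# The dotted lookup finds the nested field, silently shadowing the
# top-level column -- a pushed-down filter would target the wrong data.
print(resolve(schema, "a.b"))  # resolves to the nested field's type
# Only an exact, unsplit lookup reaches the dotted column:
print(schema["a.b"])
```

This is the same ambiguity that makes pushing filters on dotted column names unsafe unless the reader tracks full, unsplit column paths.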
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117174966
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -414,6 +415,50 @@ private[hive] class HiveClientImpl(
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18000
retest this please
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117170904
--- Diff: core/src/main/scala/org/apache/spark/memory/MemoryManager.scala
---
@@ -20,7 +20,7 @@ package org.apache.spark.memory
import javax.annotati
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117171397
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -278,4 +278,21 @@ package object config {
"spark.io.compr
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117172780
--- Diff:
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
---
@@ -175,33 +197,54 @@ final class ShuffleBlockFetcherIterato
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117170816
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -278,4 +278,21 @@ package object config {
"spark.io.compr
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117170463
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
---
@@ -126,4 +150,50 @@ private void failRema
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117169752
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/server/OneForOneStreamManager.java
---
@@ -95,6 +97,25 @@ public ManagedBuffer get
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117174623
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +217,19 @@ private[spark] object HighlyCompressedMapStatus {
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117171649
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -278,4 +278,21 @@ package object config {
"spark.io.compr
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117170538
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
---
@@ -126,4 +150,50 @@ private void failRema
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117172062
--- Diff:
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
---
@@ -395,7 +438,6 @@ final class ShuffleBlockFetcherIterator(
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117172461
--- Diff:
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
---
@@ -129,6 +137,12 @@ final class ShuffleBlockFetcherIterator
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117175176
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -128,41 +133,60 @@ private[spark] class CompressedMapStatus(
* @param
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117174283
--- Diff: docs/configuration.md ---
@@ -954,12 +971,12 @@ Apart from these, the following properties are also
available, and may be useful
spark.me
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117174976
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +217,19 @@ private[spark] object HighlyCompressedMapStatus {
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117176108
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -175,7 +178,7 @@ class StatisticsSuite extends
StatisticsCollectionT
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117173550
--- Diff: core/src/test/scala/org/apache/spark/MapOutputTrackerSuite.scala
---
@@ -29,7 +29,11 @@ import org.apache.spark.shuffle.FetchFailedException
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14971#discussion_r117175951
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -175,7 +178,7 @@ class StatisticsSuite extends
StatisticsCollectionT
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18000
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18000
test this please
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18012
Jenkins test this please
---
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/16989
A few more high-level thoughts about this PR:
- It seems like the benefits here come from three interrelated changes:
- Improving the accuracy of map output size reporting for large s
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18002#discussion_r117177275
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/ColumnStats.scala
---
@@ -53,219 +53,299 @@ private[columnar] sealed trait ColumnSt
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117178061
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils {
*/
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117178267
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils {
*/
Github user VishnuGowthemT commented on the issue:
https://github.com/apache/spark/pull/10405
Can this fix be added in 1.6 as well?
https://github.com/apache/spark/blob/branch-1.6/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/18024
jenkins test this please
---
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/16989
Update: I realize that I overlooked the change to set a default for
`spark.memory.offHeap.size`. Thus I'll retract my original objections regarding
`MemoryMode.OFF_HEAP` but I'd still like to addr
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/16989
Also, I noticed that the PR description doesn't quite align with the
implementation, AFAIK:
> Track average size and also the outliers(which are larger than 2*avgSize)
in MapStatus;
d
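The quoted scheme can be sketched as follows. This is a toy Python model under stated assumptions, not Spark's actual `MapStatus` code: blocks larger than twice the average are recorded exactly, and the remaining blocks are summarized by their own average.

```python
# Minimal sketch of "track average size plus outliers larger than
# 2 * avgSize": outliers keep exact sizes, the rest share one average.

def compress_map_status(sizes):
    """Return (avg of non-outlier blocks, {block_id: exact outlier size})."""
    if not sizes:
        return 0, {}
    avg = sum(sizes) / len(sizes)
    huge = {i: s for i, s in enumerate(sizes) if s > 2 * avg}
    rest = [s for i, s in enumerate(sizes) if i not in huge]
    avg_rest = sum(rest) / len(rest) if rest else 0
    return avg_rest, huge

def estimated_size(avg_rest, huge, block_id):
    # Outliers are reported exactly; everything else as the average.
    return huge.get(block_id, avg_rest)

sizes = [100, 120, 80, 5000, 110]         # one outlier block (id 3)
avg_rest, huge = compress_map_status(sizes)
print(estimated_size(avg_rest, huge, 3))  # exact size for the outlier
print(estimated_size(avg_rest, huge, 0))  # shared average for a normal block
```

The point of the design is that reducers fetching a huge skewed block no longer see a badly underestimated size, while the per-map-status memory cost stays small because only the outliers are stored individually.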
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117180644
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils {
*/
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18000
Seems Jenkins isn't working right now.
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117181219
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils {
*/
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117181192
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +217,19 @@ private[spark] object HighlyCompressedMapStatus {
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16989
+1 on @JoshRosen's suggestion; we can integrate it with the memory manager
later.
cc @JoshRosen shall we put this patch into branch-2.2?
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12162
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12085
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12419
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12420
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14481
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17872
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15594
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14091
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14557
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17001
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17971
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16652
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15918
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18017
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17303
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15850
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13959
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17272
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13762
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13851
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16975
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12491
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/11129
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14547
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16743
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15652
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16893
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17119
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13881
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15914
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14686
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13837
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13891
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16285
---
GitHub user jaceklaskowski opened a pull request:
https://github.com/apache/spark/pull/18026
[SPARK-16202][SQL][DOC] Follow-up to Correct The Description of
CreatableRelationProvider's createRelation
## What changes were proposed in this pull request?
Follow-up to SPARK-162
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16389
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17778
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17088
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/17996
@felixcheung @yanboliang By the way, I haven't added any SparkR stuff here,
as I'm not sure whether breaking changes, deprecations, etc. go here or in
the SparkR guide.
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17723
I'm working on this now, and am definitely willing to execute the plan
we've agreed on, but the more I think about it, the more I think it makes sense
to make `ServiceCredentialProvider` private an
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/17819
I will try to take a look soon. My main concern is whether we should really
have a new class - it starts to make things really messy if we introduce
`Multi` versions of everything (e.g. we may want t
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17723
@mgummelt We have an in-house delegation provider for HiveServer2 and for
multi-HBase-cluster setups. I think this is useful in the Hadoop world, so
it's better to keep it.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
@MLnick That's right. I also have concerns about this. However, keeping the
original single-column Bucketizer and a multi-column Bucketizer in one class
also seems to produce messy code.
I'
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17999
Merged build finished. Test FAILed.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18022#discussion_r117191129
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/recommendation/ALSSuite.scala ---
@@ -78,7 +79,7 @@ class ALSSuite
val k = 2
val ne0 = n
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18022#discussion_r117190338
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/recommendation/ALSSuite.scala ---
@@ -348,6 +349,37 @@ class ALSSuite
}
/**
+ * T
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18022#discussion_r117192420
--- Diff: mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala
---
@@ -1624,15 +1628,15 @@ object ALS extends DefaultParamsReadable[ALS] with
L
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18022#discussion_r117191375
--- Diff: mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala
---
@@ -795,8 +799,8 @@ object ALS extends DefaultParamsReadable[ALS] with
Loggi
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18022#discussion_r117192700
--- Diff: mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala
---
@@ -763,11 +763,15 @@ object ALS extends DefaultParamsReadable[ALS] with
Log
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/16478
Is there no SQL committer support for this? It seems like a critical feature
for Spark users, yet there has been no response from any SQL folks.
Making UDT public in some way is pretty important, no?
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/17094
In terms of the high-level intention of this, I agree we definitely need it,
and it should clean things up substantially. I will start taking a look
through ASAP. Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18011
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18011
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77040/
Test FAILed.
---
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117203261
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
---
@@ -126,4 +150,50 @@ private void failRemain
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r117203833
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/server/OneForOneStreamManager.java
---
@@ -95,6 +97,25 @@ public ManagedBuffer getCh