Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13413
@yhuai All comments addressed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13591
No performance benefits but improves readability.
On Fri, Jun 10, 2016 at 12:49 PM, Reynold Xin <notificati...@github.com>
wrote:
> Is this actually an improveme
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r66564698
--- Diff: python/pyspark/sql/tests.py ---
@@ -1481,17 +1481,7 @@ def test_list_functions(self):
spark.sql("CREATE DATABASE so
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13591
[Minor] Replace all occurrences of None: Option[X] with Option.empty[X]
## What changes were proposed in this pull request?
Replace all occurrences of None: Option[X] with Option.empty[X
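The change described above is purely syntactic. A minimal, dependency-free sketch of the two styles (names here are illustrative, not taken from the PR):

```scala
object OptionEmptyDemo {
  // Old style: ascribe the Option type onto None.
  val ascribed: Option[String] = None: Option[String]
  // Style proposed in the PR: Option.empty[X] names the type directly.
  val empty: Option[String] = Option.empty[String]

  def main(args: Array[String]): Unit =
    println(s"equal: ${ascribed == empty}")
}
```

Both expressions produce the same `None` value; `Option.empty[X]` simply avoids the type-ascription syntax, which is the readability point made in the thread below.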
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13577
[Minor][Doc] Improve SQLContext Documentation and Fix SparkSession and
sql.functions Documentation
## What changes were proposed in this pull request?
1. In SparkSession, add emptyDataset
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13567
@rxin Done.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13567
@rxin Sure
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/12913
ping @JoshRosen
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13567
@rxin I'm also biased against self-links, but there are already so
many self-links in the Dataset docs, so we should either remove those or add these
to make everything consistent.
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13567
[Minor][Doc] Dataset.reduce Scaladoc Link to Dataset
## What changes were proposed in this pull request?
Documentation Fix
## How was this patch tested?
You can merge
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13559
retest this please
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13464
Created an updated PR with a couple more Java lint fixes: #13559
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13559
cc: @srowen
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13559
Minor 3
## What changes were proposed in this pull request?
revived #13464
Fix Java Lint errors introduced by #13286 and #13280
Before:
```
Using `mvn` from path
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13413
@maropu Thanks for the review, addressed all the comments
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r66192955
--- Diff: python/pyspark/sql/tests.py ---
@@ -1481,17 +1481,7 @@ def test_list_functions(self):
spark.sql("CREATE DATABASE so
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13464
@andrewor14 As @dongjoon-hyun suggested, simply adding the file to
`dev/checkstyle-suppressions.xml` should work
---
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13464
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13464
@andrewor14 btw, we can either make it lowercase (which was my first fix) or add an
exception to the checkstyle suppressions; please let me know which is the way to go
here
On Friday 3 June 2016
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13413
@rxin @andrewor14 It's ready for review again. Thanks in advance.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13286
@tdas updated my PR with exclusions for Append and Complete
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13464
cc: @tdas @srowen
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13286
@tdas @marmbrus this is failing `dev/lint-java`
So we should change `Append` and `Complete` to `append` and `complete`
```
[ERROR]
src/main/java/org/apache/spark/sql/streaming
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13464
[Minor] Fix Java Lint errors introduced by #13286 and #13280
## What changes were proposed in this pull request?
Fix Java Lint errors introduced by #13286 and #13280
1. remove unused
GitHub user techaddict reopened a pull request:
https://github.com/apache/spark/pull/13413
[SPARK-15663][SQL][WIP] SparkSession.catalog.listFunctions shouldn't
include the list of built-in functions
## What changes were proposed in this pull request
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13413
---
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13334
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13413
@rxin Sure, will keep that in mind next time. Will filtering out non-temp
functions work in this case?
```scala
val loadedFunctions =
StringUtils.filterPattern
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13413
[SPARK-15663] SparkSession.catalog.listFunctions shouldn't include the list
of built-in functions
## What changes were proposed in this pull request?
SparkSession.catalog.listFunctions
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13347
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13347#issuecomment-222048243
@rxin Yup, that's what I was about to ask: for calls where we just need
`.zero`, what do we do?
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13347
[SPARK-15598] Change Aggregator.zero to Aggregator.init
## What changes were proposed in this pull request?
org.apache.spark.sql.expressions.Aggregator currently requires defining the
zero
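For context, the contract under discussion can be sketched with a simplified, dependency-free stand-in for `org.apache.spark.sql.expressions.Aggregator` (the real abstract class also declares encoders; `MiniAggregator` and `SumAgg` here are purely illustrative):

```scala
// Simplified stand-in for Spark's Aggregator; the real class also
// declares bufferEncoder and outputEncoder members.
trait MiniAggregator[-IN, BUF, OUT] {
  def zero: BUF                        // the member the PR proposes renaming to init
  def reduce(b: BUF, a: IN): BUF       // fold one input into the buffer
  def merge(b1: BUF, b2: BUF): BUF     // combine two partial buffers
  def finish(b: BUF): OUT              // produce the final result
}

object SumAgg extends MiniAggregator[Int, Long, Long] {
  def zero: Long = 0L
  def reduce(b: Long, a: Int): Long = b + a
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(b: Long): Long = b
}

object AggregatorDemo {
  def run(xs: Seq[Int]): Long =
    SumAgg.finish(xs.foldLeft(SumAgg.zero)(SumAgg.reduce))

  def main(args: Array[String]): Unit =
    println(run(Seq(1, 2, 3)))
}
```

The question quoted earlier in the thread ("for calls where we just need `.zero`, what do we do?") refers to call sites that use only the initial buffer value, like the `foldLeft` seed above.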
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13334
[SPARK-15576] Add back hive tests blacklisted by SPARK-15539
## What changes were proposed in this pull request?
Add back hive tests blacklisted by SPARK-15539
## How was this patch
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13325
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13325#issuecomment-221867688
@srowen Okay, makes sense; I'll keep this open and add more fixes as I
encounter them.
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13325#issuecomment-221850830
@srowen
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13325
[MINOR] Convert match to pattern matching anonymous function…
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12913#issuecomment-221259367
cc: @srowen @rxin @JoshRosen
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13249#issuecomment-220834246
@srowen It's failing `dev/lint-java`
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13242#discussion_r64144856
--- Diff: python/pyspark/ml/clustering.py ---
@@ -933,21 +933,20 @@ def getKeepLastCheckpoint(self):
if __name__ == "__main__":
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13249
[MINOR] More than 100 chars in line.
## What changes were proposed in this pull request?
More than 100 chars in line.
## How was this patch tested?
You can merge this pull request
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13223#issuecomment-220746454
@rxin @srowen I've removed qualifiers
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13223#issuecomment-220627182
@srowen All tests passed.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13223#discussion_r64026868
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -134,7 +135,14 @@ final class Decimal extends Ordered[Decimal
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13223#discussion_r64026748
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -134,7 +135,14 @@ final class Decimal extends Ordered[Decimal
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13223#issuecomment-220579058
cc: @srowen
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13223
[HOTFIX][SPARK-15445] Build fails for java 1.7 after adding
java.mathBigInteger support
## What changes were proposed in this pull request?
Using longValue() and then checking whether
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13101#issuecomment-220497162
@mengxr @andrewor14 @srowen Addressed all comments. Thanks for reviewing.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13101#discussion_r63977568
--- Diff: mllib/src/test/java/org/apache/spark/SharedSparkSession.java ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13101#discussion_r63977203
--- Diff: mllib/src/test/java/org/apache/spark/SharedSparkSession.java ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13202#issuecomment-220450350
cc: @jkbradley
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13202
[SPARK-15414][MLlib] Make the mllib,ml linalg type conversion APIs public
## What changes were proposed in this pull request?
Open up APIs for converting between new, old linear algebra
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13101#issuecomment-220442073
@andrewor14 comments addressed + ran `lint-java` locally and fixed all the
issues.
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13101#issuecomment-220348713
ping @srowen @andrewor14 @mengxr
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13168#issuecomment-220272524
ping @srowen
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13168#issuecomment-219991565
@srowen I checked; this is the only instance of `.set("master"` in the whole codebase
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13168
[Core][MINOR] Remove redundant set master in
OutputCommitCoordinatorIntegrationSuite
## What changes were proposed in this pull request?
Remove redundant set master
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13101#issuecomment-219801934
ping @andrewor14 @mengxr
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13108#issuecomment-219198790
@smungee Can you please add tests for this?
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/10125#discussion_r63227670
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ScalaReflectionRelationSuite.scala
---
@@ -34,7 +34,13 @@ case class ReflectData
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13100
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13101#issuecomment-219100845
cc: @andrewor14
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13101
[SPARK-15296][MLlib] Refactor All Java Tests that use SparkSession
## What changes were proposed in this pull request?
Refactor All Java Tests that use SparkSession, to extend
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13100#issuecomment-219071205
cc: @jkbradley @mengxr
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13100
[SPARK-15314][MLlib] Enable tests that required save/load for Pipeline API
## What changes were proposed in this pull request?
Enable test that were commented out in
https://github.com
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12913#issuecomment-219031508
ping @rxin @JoshRosen @andrewor14
---
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13080
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13080#issuecomment-218814372
cc: @cloud-fan
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13080
[MINOR][SQL] Pattern match using Singleton to check if object is a scala
object
## What changes were proposed in this pull request?
To check if `obj` is a Scala object, instead of doing
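The PR body is truncated above, but the general technique it names (matching a value directly against a singleton `case object`) can be sketched as follows; all names here are illustrative, not from the PR:

```scala
sealed trait Mode
case object On extends Mode
case object Off extends Mode

object SingletonMatchDemo {
  // Matching directly on the singleton: no class-name or reflection
  // comparison, and the compiler checks exhaustiveness over the
  // sealed trait.
  def describe(m: Mode): String = m match {
    case On  => "on"
    case Off => "off"
  }

  def main(args: Array[String]): Unit =
    println(describe(On))
}
```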
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13063#issuecomment-218613152
Hi @andrewor14, support for `enableHiveSupport` was added
(https://github.com/apache/spark/commit/0903a185c7ebc57c75301a27d215b08efd347f99)
after I started working
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13063
[SPARK-15072][SQL][PySpark] FollowUp: Remove SparkSession.withHiveSupport
in PySpark
## What changes were proposed in this pull request?
This is a followup of https://github.com/apache
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13056#issuecomment-218571511
@andrewor14 Okay. Nope, not WIP; I was waiting for tests to finish.
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13056#issuecomment-218568690
cc: @davies @andrewor14
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13056
[SPARK-15270][SQL][MINOR][WIP] Use SparkSession Builder to build a session
with HiveSupport
## What changes were proposed in this pull request?
Before:
Creating a hiveContext
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12936#discussion_r62861908
--- Diff:
core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala ---
@@ -98,18 +98,19 @@ class AccumulatorV2Suite extends SparkFunSuite
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13044#issuecomment-218436442
cc: @andrewor14 @davies Can you review the PR?
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13044
[SPARK-15037][SQL][MLLIB] Part2 Use SparkSession instead of SQLContext in
Python TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12936#discussion_r62793263
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetrics.scala
---
@@ -29,9 +29,18 @@ class SQLMetric(val metricType: String
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12936#discussion_r62792987
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -291,12 +291,23 @@ private[spark] object TaskMetrics extends Logging
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12936#discussion_r62792294
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetrics.scala
---
@@ -29,9 +29,18 @@ class SQLMetric(val metricType: String
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12936#issuecomment-218266772
jenkins retest this please
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13024#discussion_r62732815
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/functionResources.scala
---
@@ -20,16 +20,24 @@ package
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13030#issuecomment-218253680
@dongjoon-hyun Thanks for doing this, merging should have caused problems
since it was a very big patch. ð
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13024#issuecomment-218103693
cc: @yhuai
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13024
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
## What changes were proposed in this pull request?
Use
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12907#discussion_r62617908
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -66,7 +66,8 @@ class DDLSuite extends QueryTest
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/13014#issuecomment-218014291
My bad, closed the PR. Just curious: why are case classes harder to maintain?
On Tue, May 10, 2016 at 4:23 AM, andrewor14 <notificati...@github.com>
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13014
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13014
[SPARK-15234][SQL] spark.catalog.listDatabases.show() is not formatted
correctly
## What changes were proposed in this pull request?
Make `Database`, `Table`, `Column`, `Function` classes
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12770#discussion_r62582839
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -52,6 +52,11 @@ object SQLConf {
}
+ val
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12907#issuecomment-218003214
@andrewor14 test failed because `WAREHOUSE_PATH` is set to
`${system:user.dir}/spark-warehouse` by default
(https://github.com/apache/spark/blob/master/sql/core/src
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12936#discussion_r62558356
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -291,11 +291,20 @@ private[spark] object TaskMetrics extends Logging
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12913#issuecomment-217656238
@JoshRosen @rxin can you review this.
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12936#issuecomment-217656176
cc: @rxin @cloud-fan
---
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12977#issuecomment-217643688
cc: @tgravescs @rxin
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/12977
[SPARK-15178][CORE] Remove LazyFileRegion instead use netty's
DefaultFileRegion
## What changes were proposed in this pull request?
Remove LazyFileRegion instead use netty's
Github user techaddict commented on the pull request:
https://github.com/apache/spark/pull/12907#issuecomment-217642912
cc: @andrewor14 @rxin @dongjoon-hyun PR is ready with Scala/Java Changes.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12953#discussion_r62342197
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -389,11 +389,10 @@ private[spark] class TaskSchedulerImpl
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12953#discussion_r62330597
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -389,11 +389,10 @@ private[spark] class TaskSchedulerImpl
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/12953
[SPARK-15087][MINOR][DOC] Follow Up: Remove the Comment, It no longer
applies
## What changes were proposed in this pull request?
Remove the comment, since it no longer applies. See