Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r102569401
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -470,12 +470,25 @@ class SparkContext(config: SparkConf) extends Logging
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101887609
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101887589
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101857385
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101857230
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/16975
[SPARK-19522] Fix executor memory in local-cluster mode
## What changes were proposed in this pull request?
```
bin/spark-shell --master local-cluster[2,1,2048]
```
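The `local-cluster` master string packs three numbers: worker count, cores per worker, and memory per worker in MB. As a rough sketch (the regex below is illustrative, not Spark's exact source), it can be parsed like this:

```python
import re

# Sketch of parsing a local-cluster master string of the form
# local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB].
# The regex is an assumption modeled on the documented format.
LOCAL_CLUSTER = re.compile(r"local-cluster\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]")

def parse_local_cluster(master: str):
    """Return (num_workers, cores_per_worker, memory_per_worker_mb), or None."""
    m = LOCAL_CLUSTER.fullmatch(master)
    return tuple(int(g) for g in m.groups()) if m else None

print(parse_local_cluster("local-cluster[2,1,2048]"))  # (2, 1, 2048)
```

So the command above asks for two workers, one core each, and 2048 MB of memory per worker — the memory figure the PR title says was not being honored.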
Github user andrewor14 closed the pull request at:
https://github.com/apache/spark/pull/13899
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/13899
Closing for now; too many conflicts.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16819
I agree. Resource managers generally already expect applications to request more
than what's available, so we don't have to handle that again ourselves in
Spark.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16823
This is a bad idea! First it breaks backward compatibility, and second, we
intentionally didn't want to make it so general that the user can pass in any
objects. Can you please close this PR
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15396#discussion_r98218538
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1589,7 +1589,8 @@ abstract class RDD[T: ClassTag](
* This is introduced
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16081
and 2.0
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1b1c849bf -> 5ecd3c23a
[SPARK][EXAMPLE] Added missing semicolon in quick-start-guide example
## What changes were proposed in this pull request?
Added a missing semicolon in the quick-start guide's Java example code, which
wasn't compiling
Repository: spark
Updated Branches:
refs/heads/branch-2.0 8b33aa089 -> 1b1c849bf
[SPARK-18640] Add synchronization to TaskScheduler.runningTasksByExecutors
## What changes were proposed in this pull request?
The method `TaskSchedulerImpl.runningTasksByExecutors()` accesses the mutable
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16073
and 2.0
Repository: spark
Updated Branches:
refs/heads/branch-2.1 eae85da38 -> 7c0e2962d
[SPARK-18640] Add synchronization to TaskScheduler.runningTasksByExecutors
## What changes were proposed in this pull request?
The method `TaskSchedulerImpl.runningTasksByExecutors()` accesses the mutable
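The race this patch addresses is a common one: a read-only accessor on a shared mutable map still needs the writers' lock, or callers can observe the map mid-update. Here is an illustrative Python analogue; the class and method names are hypothetical, not Spark's actual `TaskSchedulerImpl` code:

```python
import threading

# Illustrative analogue of the SPARK-18640 fix: the read path takes the
# same lock as the write path and returns a snapshot copy.
class TaskTracker:
    def __init__(self):
        self._lock = threading.Lock()
        self._running = {}  # executor_id -> running task count

    def task_started(self, executor_id):
        with self._lock:
            self._running[executor_id] = self._running.get(executor_id, 0) + 1

    def task_finished(self, executor_id):
        with self._lock:
            self._running[executor_id] -= 1

    def running_tasks_by_executors(self):
        # Copy under the lock so callers never see a half-applied update.
        with self._lock:
            return dict(self._running)
```

Returning a copy also keeps callers from mutating internal state after the lock is released.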
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16073
LGTM merging into master 2.1.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16081
Ok, merging into master 2.1
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15462
@kiszk is there a JIRA associated specifically with adding tests for
`InMemoryRelation`?
Repository: spark
Updated Branches:
refs/heads/branch-2.1 81e3f9711 -> b386943b2
[SPARK-17680][SQL][TEST] Added test cases for InMemoryRelation
## What changes were proposed in this pull request?
This pull request adds test cases for the following cases:
- keep all data types with null or
Repository: spark
Updated Branches:
refs/heads/master 0f5f52a3d -> ad67993b7
[SPARK-17680][SQL][TEST] Added test cases for InMemoryRelation
## What changes were proposed in this pull request?
This pull request adds test cases for the following cases:
- keep all data types with null or
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15462
LGTM, merging into master 2.1 thanks.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15993
Sounds good. Merging into master 2.1.
Repository: spark
Updated Branches:
refs/heads/master 70ad07a9d -> f129ebcd3
[SPARK-18050][SQL] do not create default database if it already exists
## What changes were proposed in this pull request?
When we try to create the default database, we ask Hive to do nothing if it
already exists.
Repository: spark
Updated Branches:
refs/heads/branch-2.1 599dac159 -> 835f03f34
[SPARK-18050][SQL] do not create default database if it already exists
## What changes were proposed in this pull request?
When we try to create the default database, we ask Hive to do nothing if it
already
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205894
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -20,18 +20,83 @@ package
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15462
retest this please
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205780
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -58,6 +123,12 @@ class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205541
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -20,18 +20,83 @@ package
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205861
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -246,4 +317,59 @@ class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205730
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -20,18 +20,83 @@ package
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15978
(Oops never mind, not my fault! :p)
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15978
@cloud-fan can you make a patch for 2.0?
Repository: spark
Updated Branches:
refs/heads/branch-2.1 0e624e990 -> fa360134d
[SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable
once
## What changes were proposed in this pull request?
HiveExternalCatalog.listPartitions should only call `getTable` once,
Repository: spark
Updated Branches:
refs/heads/master 45ea46b7b -> 702cd403f
[SPARK-18507][SQL] HiveExternalCatalog.listPartitions should only call getTable
once
## What changes were proposed in this pull request?
HiveExternalCatalog.listPartitions should only call `getTable` once, instead
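The pattern behind the fix is hoisting a repeated metadata lookup out of the per-partition path and reusing the result. A hypothetical sketch — the helper names are illustrative, not Spark's API:

```python
# Hypothetical sketch of the SPARK-18507 idea: fetch the table once and
# reuse it for every partition, instead of calling get_table per partition.
calls = {"get_table": 0}

def get_table(name):
    calls["get_table"] += 1  # counts round-trips to the metastore stand-in
    return {"name": name, "partitions": ["p=1", "p=2", "p=3"]}

def list_partitions(table_name):
    table = get_table(table_name)  # one lookup, up front
    return [f"{table['name']}/{part}" for part in table["partitions"]]

print(list_partitions("sales"))
```

With N partitions, the metastore sees one call instead of N, which is exactly the kind of round-trip reduction the commit title describes.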
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15978
Oops, that was my fault. Thanks merging into master 2.1
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
I did
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15896
I personally think `UNCACHE TABLE IF EXISTS` is best. It preserves the old
behavior but lets the user make sure a table is not cached if they really want.
Repository: spark
Updated Branches:
refs/heads/branch-2.1 b0a73c9be -> 406f33987
[SPARK-18361][PYSPARK] Expose RDD localCheckpoint in PySpark
## What changes were proposed in this pull request?
Expose RDD's localCheckpoint() and associated functions in PySpark.
## How was this patch tested?
Repository: spark
Updated Branches:
refs/heads/branch-2.1 251a99276 -> b0a73c9be
[SPARK-18517][SQL] DROP TABLE IF EXISTS should not warn for non-existing tables
## What changes were proposed in this pull request?
Currently, `DROP TABLE IF EXISTS` shows warning for non-existing tables.
Repository: spark
Updated Branches:
refs/heads/master 70176871a -> ddd02f50b
[SPARK-18517][SQL] DROP TABLE IF EXISTS should not warn for non-existing tables
## What changes were proposed in this pull request?
Currently, `DROP TABLE IF EXISTS` shows warning for non-existing tables.
However,
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15953
Makes sense. This LGTM merging into master and 2.0. Thanks.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
LGTM merging into master thanks.
Repository: spark
Updated Branches:
refs/heads/master 07beb5d21 -> 70176871a
[SPARK-18361][PYSPARK] Expose RDD localCheckpoint in PySpark
## What changes were proposed in this pull request?
Expose RDD's localCheckpoint() and associated functions in PySpark.
## How was this patch tested?
I
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
Looks good, just one question.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15811#discussion_r88115031
--- Diff: python/pyspark/rdd.py ---
@@ -181,6 +181,7 @@ def __init__(self, jrdd, ctx,
jrdd_deserializer=AutoBatchedSerializer(PickleSeri
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
ok to test
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15833#discussion_r88113719
--- Diff: core/src/main/scala/org/apache/spark/deploy/Client.scala ---
@@ -221,7 +221,9 @@ object Client {
val conf = new SparkConf
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15766
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15756
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15739
Also there's another patch trying to solve the same issue: #15742
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15756
LGTM. That's a massive amount of time spent in `Class.getSimpleName`!
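The hotspot is a name computed from a class on every serialized event; the usual cure is to compute it once per class and cache it. A Python analogue using memoization — the function name is illustrative, not the patch's actual code:

```python
from functools import lru_cache

# Illustrative analogue: on the JVM, Class.getSimpleName does string work
# on every call, so a per-class cache turns a hot path into a dict lookup.
@lru_cache(maxsize=None)
def simple_name(cls):
    return cls.__qualname__.rsplit(".", 1)[-1]

class Outer:
    class Inner:
        pass

print(simple_name(Outer.Inner))  # Inner
```

Classes are hashable, so they make safe cache keys, and the number of distinct classes is bounded, so an unbounded cache is fine here.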
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15756#discussion_r86422590
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -540,7 +544,8 @@ private[spark] object JsonProtocol {
def
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15756#discussion_r86422652
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -540,7 +544,8 @@ private[spark] object JsonProtocol {
def
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15756#discussion_r86422521
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -540,7 +544,8 @@ private[spark] object JsonProtocol {
def
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15739
ok to test @vanzin
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15698
LGTM retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15410
We shouldn't display file names but we should display application names and
IDs, something the user understands. We don't have to do that as part of this
issue.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
I see. Then maybe we should add a comment above the config to note that
several commands don't work (e.g. ALTER TABLE) if this is turned on, even if
it's only internal.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
Yes that's why it's `internal`
Repository: spark
Updated Branches:
refs/heads/master db8784fea -> 7bf8a4049
[SPARK-17686][CORE] Support printing out scala and java version with
spark-submit --version command
## What changes were proposed in this pull request?
In our universal gateway service we need to specify different
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15456
Merging into master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
JK, actually it doesn't merge in 2.0.
Repository: spark
Updated Branches:
refs/heads/master 6f2fa6c54 -> db8784fea
[SPARK-17899][SQL] add a debug mode to keep raw table properties in
HiveExternalCatalog
## What changes were proposed in this pull request?
Currently `HiveExternalCatalog` will filter out the Spark SQL internal
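In spirit, the debug mode gates the filtering step: normally internal properties are stripped from table metadata before it is returned, and with the flag on they are kept raw. A hypothetical sketch — the key prefix and flag are illustrative, not the exact rule `HiveExternalCatalog` uses:

```python
# Hypothetical sketch of a debug mode that keeps raw table properties.
def visible_properties(props: dict, debug: bool = False) -> dict:
    if debug:
        return dict(props)  # keep everything, internal keys included
    # Normal path: strip keys treated as internal (prefix is illustrative).
    return {k: v for k, v in props.items() if not k.startswith("spark.sql.")}

raw = {"owner": "alice", "spark.sql.sources.provider": "parquet"}
print(visible_properties(raw))              # internal key filtered out
print(visible_properties(raw, debug=True))  # raw properties preserved
```

Such a switch is useful when diagnosing metadata corruption, since the filtered view hides exactly the keys you need to inspect.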
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
Cool beans. Merging into master 2.0.
Repository: spark
Updated Branches:
refs/heads/master 7222a25a1 -> 6f2fa6c54
[SPARK-11272][WEB UI] Add support for downloading event logs from HistoryServer
UI
## What changes were proposed in this pull request?
This is a reworked PR based on feedback in #9238 after it was closed and not
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15400
This one LGTM I'm merging it into master. Thanks for working on this.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15400
Usually we retest the PR if it's been a few days since it last ran tests.
We have had build breaks before where we merged a PR that passed tests a long
time ago.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15410
ok to test
I think the idea is good, but it would be a better UX if we display the
pending applications as rows in the existing table (or a new one) and indicate
there that it's still
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15410#discussion_r83044768
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryPage.scala ---
@@ -38,6 +39,13 @@ private[history] class HistoryPage(parent
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15396
Looks good. I left a suggestion that I think will make the code cleaner.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15396#discussion_r83043442
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1589,7 +1589,8 @@ abstract class RDD[T: ClassTag](
* This is introduced
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15396#discussion_r83042522
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1589,7 +1589,8 @@ abstract class RDD[T: ClassTag](
* This is introduced
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15400
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15353
@keypointt by "working" I mean it should be replaced by a line break, not a
space
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15405
Thanks for working on this. It's great to see how small the patch turned
out to be!
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15405#discussion_r82671287
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -637,6 +637,16 @@ private[deploy] class Master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15405
add to whitelist
Repository: spark
Updated Branches:
refs/heads/master 2b01d3c70 -> e56614cba
[SPARK-16827] Stop reporting spill metrics as shuffle metrics
## What changes were proposed in this pull request?
Fix a bug where spill metrics were being reported as shuffle metrics.
Eventually these spill metrics
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15347
OK, this change by itself LGTM. @dafrista would you mind creating a
separate JIRA (or point me to an existing one) about the TODO then? Merging
this into master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15353
But this isn't the original intention, which is to actually add a line
break where `\n` is today. IIRC this works correctly on Chrome but not on
Safari (or the other way round?). If you can make
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15353
Also this is a more general problem, not just for streaming
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15347#discussion_r82042420
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -145,7 +145,9 @@ private UnsafeExternalSorter
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15350
I think that's OK. This is supposed to be a unit test for the BlockManager,
not how BlockManager interacts with the rest of the system. LGTM
Repository: spark
Updated Branches:
refs/heads/master cb87b3ced -> 027dea8f2
[SPARK-17715][SCHEDULER] Make task launch logs DEBUG
## What changes were proposed in this pull request?
Ramp down the task launch logs from INFO to DEBUG. Task launches can happen
orders of magnitude more than
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15290
Merging into master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15290
ok to test
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81218774
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/ApplicationHistoryProvider.scala
---
@@ -109,4 +109,11 @@ private[history] abstract class
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f7839e47c -> 7c9450b00
[SPARK-17672] Spark 2.0 history server web UI takes too long for a single
application
Added a new API getApplicationInfo(appId: String) in class
ApplicationHistoryProvider and class SparkUI to get app info. In
Repository: spark
Updated Branches:
refs/heads/master 7f779e743 -> cb87b3ced
[SPARK-17672] Spark 2.0 history server web UI takes too long for a single
application
Added a new API getApplicationInfo(appId: String) in class
ApplicationHistoryProvider and class SparkUI to get app info. In this
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81219143
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/ApplicationHistoryProvider.scala
---
@@ -109,4 +109,11 @@ private[history] abstract class
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15247
LGTM merging into master 2.0, thanks.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81219049
--- Diff:
core/src/main/scala/org/apache/spark/status/api/v1/ApiRootResource.scala ---
@@ -222,6 +222,7 @@ private[spark] object ApiRootResource
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81218992
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala ---
@@ -182,6 +182,10 @@ class HistoryServer
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15295
retest this please
Repository: spark
Updated Branches:
refs/heads/master 958200497 -> 7f779e743
[SPARK-17648][CORE] TaskScheduler really needs offers to be an IndexedSeq
## What changes were proposed in this pull request?
The Seq[WorkerOffer] is accessed by index, so it really should be an
IndexedSeq,
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15221
This looks reasonable. Merging into master. I will leave it out from
branch-2.0 just in case.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15295#discussion_r81150854
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -791,7 +791,7 @@ object SparkSession {
// Get the session
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15295
LGTM. Pretty straightforward.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 8a58f2e8e -> f4594900d
[Docs] Update spark-standalone.md to fix link
Corrected a link to the configuration.html page; it was pointing to a page that
does not exist (configurations.html).
Documentation change, verified in preview.