Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12004#issuecomment-214747671
The latest version of this does, among other things, call
FileSystem.toString after operations. In HADOOP-13028, along with seek
optimisation
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11394#issuecomment-214746710
this is obsolete now: nothing broke. Unless someone wants regression tests,
I'm closing this
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-214245234
thanks for reviewing this,
Steve
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12435#issuecomment-212823001
@zsxwing if you have q's about the quirks of s3* APIs and endpoints, feel
free to email me direct, stevel @ hortonworks.
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12435#discussion_r60548244
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLog.scala
---
@@ -0,0 +1,241 @@
+/*
+ * Licensed to
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12435#discussion_r60547975
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -443,6 +444,27 @@ object SQLConf {
.booleanConf
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r60461183
--- Diff: docs/monitoring.md ---
@@ -253,10 +273,24 @@ for a running application, at
`http://localhost:4040/api/v1`.
Summary metrics of all
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r60457665
--- Diff: docs/monitoring.md ---
@@ -253,10 +273,24 @@ for a running application, at
`http://localhost:4040/api/v1`.
Summary metrics of all
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r60452742
--- Diff: docs/monitoring.md ---
@@ -222,17 +222,33 @@ both running applications, and in the history server.
The endpoints are mounted
for the
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r60452582
--- Diff: docs/monitoring.md ---
@@ -269,17 +303,24 @@ for a running application, at
`http://localhost:4040/api/v1`.
Details for the storage
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12435#discussion_r60377889
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLogSuite.scala
---
@@ -0,0 +1,201
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12487#issuecomment-211956689
Thomas: regarding credential printing, what about having some regexps of
things not to print. In particular, credentials to talk to S3, Azure &c are
things
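The regexp-based redaction suggested above can be sketched roughly as follows; the property names and patterns here are hypothetical illustrations, not the actual list discussed in the PR:

```python
import re

# Hypothetical credential-bearing property patterns -- illustrative only.
SECRET_PATTERNS = [
    re.compile(r"(fs\.s3a?\.(?:secret|access)\.key\s*=\s*)\S+"),
    re.compile(r"(fs\.azure\.account\.key\.[^=\s]+\s*=\s*)\S+"),
]

def redact(line: str) -> str:
    """Replace the value of any matching credential property before printing."""
    for pat in SECRET_PATTERNS:
        line = pat.sub(r"\g<1>*REDACTED*", line)
    return line

assert redact("fs.s3a.secret.key = AbCd123") == "fs.s3a.secret.key = *REDACTED*"
```

Non-matching lines pass through untouched, so the filter can sit in front of any existing log or dump path.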
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12435#discussion_r60229120
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLogSuite.scala
---
@@ -0,0 +1,201
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12435#discussion_r60228316
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLog.scala
---
@@ -0,0 +1,255 @@
+/*
+ * Licensed to
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12487#issuecomment-211840057
I don't see anything that would cause problems with YARN. Good to see that
`ApplicationMaster` does log what it's creating ... that'll be c
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11242#discussion_r60199645
--- Diff: core/src/main/scala/org/apache/spark/rdd/UnionRDD.scala ---
@@ -62,8 +62,14 @@ class UnionRDD[T: ClassTag](
var rdds: Seq[RDD[T
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-211563858
OK: for the jekyll site all is well with and without vtable; it's just the
dev tools where things go wrong. I'll push up to github to see what decisions
it
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-211560297
Here's a couple of screenshots of two MD editors: IntelliJ IDEA and Mou.
Both of these are placing the /applications text in the centre of all the text,
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-211554153
OK, I'll move to {{<br>}} tags
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r60124829
--- Diff: docs/monitoring.md ---
@@ -269,17 +310,23 @@ for a running application, at
`http://localhost:4040/api/v1`.
Details for the storage
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-207759561
@squito have you had a chance to review this?
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12229#discussion_r58835511
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/WriterContainer.scala
---
@@ -129,16 +129,17 @@ private[sql] abstract
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12229#discussion_r58834890
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/WriterContainer.scala
---
@@ -129,16 +129,17 @@ private[sql] abstract
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11326#issuecomment-205753480
thx for this; helps with cross platform dev & test
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12171#issuecomment-205744467
without looking at the details, I appreciated the ambition, and like that
wildcard artifact exclusion.
what is the dependency graph that SBT generates
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12076#issuecomment-205739991
Makes sense. Note that getting the SBT dependencies to match maven's was a
complete nightmare; it'd probably have been easier to write a new version
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12076#issuecomment-204588620
I don't recall doing any reflection related stuff to work with kryo;
problems were showing up in compilation.
There's one more thing to worry a
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12076#issuecomment-204325755
...Thinking about this; it might be possible to go to hive with a shaded
kryo, with the invocation of those methods which exchange kryo types referring
to the
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12076#issuecomment-204324891
1. Hive uses Kryo "the guava of serialization" internally; I don't know the
specifics, but it's not insignificant.
1. they moved ahead of spa
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12004#issuecomment-203424016
build failing as SBT needs to be conditional on the spark/cloud module
being Hadoop 2.6+
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12004#issuecomment-202793164
test failures are in hive; unrelated
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/10991#issuecomment-202530245
There's a more fundamental issue which the history server has too: log
replay is too expensive for long-lived applications. replay time is
O(jobs)+O(s
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12004#discussion_r57614456
--- Diff: cloud/pom.xml ---
@@ -0,0 +1,141 @@
+
+
+http://maven.apache.org/POM/4.0.0";
xmlns:xsi="http://www.w3.org/2001
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/12004#issuecomment-202517351
Note that as this patch is playing with the maven build and the
hadoop-2.6 and hadoop-2.7 profiles, the SparkQA builds aren't going to pick up
on much
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/12004
[SPARK-7481][build][WIP] Add Hadoop 2.6+ profile to pull in object store FS
accessors
## What changes were proposed in this pull request?
[SPARK-7481] Add Hadoop 2.6+ profile to pull
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r57308003
--- Diff: docs/running-on-yarn.md ---
@@ -452,3 +452,104 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r56998643
--- Diff: docs/running-on-yarn.md ---
@@ -452,3 +452,104 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11806#issuecomment-198322447
Summary: use an optimised storage format and dataframes, worry about
compression afterwards
1. you need to use a compression codec that lets you seek into
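The point about seekable compression can be illustrated outside Spark: a plain gzip stream has no internal block boundaries, so a reader cannot start at an arbitrary split point (a rough Python illustration of the property, not the Hadoop codec API):

```python
import gzip

data = b"some log records, repeated " * 4000
blob = gzip.compress(data)

# Whole-stream decompression from the start works:
assert gzip.decompress(blob) == data

# But skipping past the 10-byte gzip header leaves a stream with no gzip
# magic number, so decoding from a mid-stream offset fails; a non-splittable
# codec forces a single worker to scan the whole file.
try:
    gzip.decompress(blob[10:])
    splittable = True
except OSError:
    splittable = False
assert not splittable
```

Splittable formats (bzip2, or container formats like ORC/Parquet with their own block indexes) avoid this by marking resynchronisation points.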
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11672#issuecomment-198332302
This didn't cut the relevant doc/streaming-*.md files BTW
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/10821#issuecomment-196426595
I'm not sure how that crept in on the patch; it wasn't something
intentional.
1. It is needed for the spark timeline stuff, but that can be
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11033#issuecomment-195783350
1. I'll update
2. I think the extra credential dump should be pulled up into
{{SparkHadoopUtil}}; it's not yarn-specific
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-195435446
Fixed up the endpoints, added some more detail on app-id vs app-attempt,
using base-app-id for the log retrieval. Also mentioned that after job/stage
GC, there
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r55810810
--- Diff: docs/monitoring.md ---
@@ -273,8 +309,8 @@ for a running application, at
`http://localhost:4040/api/v1`.
Download the event logs
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11152#discussion_r55810591
--- Diff: docs/monitoring.md ---
@@ -229,10 +229,28 @@ for a running application, at
`http://localhost:4040/api/v1`.
A list of all
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-194948428
@squito : think this is ready for a merge now. I've been testing code
against it and not found any inconsistencies
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r55576542
--- Diff: docs/running-on-yarn.md ---
@@ -441,3 +441,91 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r55574388
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -326,6 +330,65 @@ class YarnSparkHadoopUtil extends
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r55507050
--- Diff: docs/running-on-yarn.md ---
@@ -441,3 +441,81 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r55506512
--- Diff: docs/running-on-yarn.md ---
@@ -441,3 +441,81 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/11446
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/11473
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11129#issuecomment-193200945
Yarn and labels, joy.
1. Currently, a node can have exactly one label. That may change at a time
in the future, a time called, provisionally "the
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11473#issuecomment-193185188
will do
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191917862
thx
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11326#discussion_r54892443
--- Diff:
core/src/test/resources/HistoryServerExpectations/one_app_multi_attempt_json_expectation.json
---
@@ -11,6 +14,9 @@
"comp
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11326#issuecomment-191800346
LGTM. @srowen ?
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191747615
[Groovy and Xstream
attack](https://www.contrastsecurity.com/security-influencers/serialization-must-die-act-2-xstream).
Assume that you can do the same in Kryo
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191746949
FWIW, we're backporting it in-house. Without it, downstream applications
which try to add a secure groovy to their jobs won't know which version is
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191746623
I should add that as well as the org.codehaus.groovy package, there's various
shaded things in groovy/ *and* an unshaded copy of antlr. This *may* create
versi
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191741549
The risk is deserialization; Groovy CVE-2015-3253 shows how groovy < 2.4.4
makes it straightforward to use a class in Groovy to run arbitrary shell
commands
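The class of attack is not Groovy-specific: any serializer that instantiates attacker-chosen classes is exposed. The same pattern can be illustrated with Python's pickle (an analogous sketch of the principle, not the Groovy/Kryo gadget itself):

```python
import pickle

class Exploit:
    # __reduce__ tells pickle to call an arbitrary callable on load;
    # the Groovy CVE-2015-3253 gadget exploits the same idea via
    # a method closure that survives deserialization.
    def __reduce__(self):
        return (eval, ("7 * 6",))

payload = pickle.dumps(Exploit())

# Merely *loading* the untrusted bytes runs the attacker's expression:
assert pickle.loads(payload) == 42
```

This is why stripping an unneeded, vulnerable deserialization library from the classpath is itself a security fix, independent of any application code.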
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/10545
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-19178
Added PR #11473 to cover only the `` bit of the patch, as that is
all that applies to 1.6.x
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/11473
[SPARK-13599] [BUILD] remove transitive groovy dependencies from Hive
This is just the patch of #11449 cherry picked to branch-1.6; the enforcer
and dep/ diffs are cut
You can merge this
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191320445
+1 for 1.6.x.
W.r.t 1.6.2, it'll keep the tar smaller, maybe even load faster. And
eliminate the risk of a CVE.
if you set spark.authent
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-191239336
...let me work out how to do enforcer rules
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/11346
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11346#issuecomment-190841889
#11449 should render this obsolete
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11446#issuecomment-190841976
#11449 supersedes this
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/11449
[SPARK-13599] [BUILD] remove transitive groovy dependencies from Hive
## What changes were proposed in this pull request?
Modifies the dependency declarations of the all the hive
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/11446
[SPARK-13471] [SQL] WiP update hive version to 1.2.1.1.spark
## What changes were proposed in this pull request?
This is the patch of #11346 against master; the details are covered
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/9168#discussion_r54569902
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -130,6 +130,21 @@ class SparkHadoopUtil extends Logging
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/10821#issuecomment-190217279
OK...so the question is "where is the 1 coming from"
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/10821#issuecomment-190175923
If you get a link like appId/1 then it means that the web UI/spark doesn't
have an instance ID; that's the default "single" link. So the
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11394#issuecomment-190159845
I was making sure it wouldn't break; writing the explicit tests to verify
the corner cases (spans, quarters, etc.). Some last minute checks. If the tests
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11394#issuecomment-189412745
Failing tests are all in :
`org.apache.spark.sql.hive.execution.HiveCompatibilitySuite`; it'd be very hard
for them to be related
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11326#issuecomment-189347993
maybe use `Epoch` as the suffix; it is unix epoch values, after all.
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11326#issuecomment-189280578
@srowen I'd like this as the REST API publishes the times as strings, not
long values which can be consistently deserialized across all platforms.
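The string-vs-long distinction matters because a formatted timestamp needs format-aware parsing on every client, while an epoch value round-trips unambiguously. A rough illustration (the string value here is hypothetical, not the API's actual format):

```python
from datetime import datetime, timezone

# A formatted string must be parsed with an explicit pattern on each client...
s = "2016-02-25T14:30:00"  # hypothetical REST-style timestamp
parsed = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)

# ...whereas an epoch-millisecond long deserializes identically everywhere.
epoch_ms = int(parsed.timestamp() * 1000)
assert datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc) == parsed
```

Publishing the epoch long alongside (or instead of) the string removes locale and format-pattern mismatches across client platforms.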
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/8744#issuecomment-189278459
I'm closing this for now as there's enough in Spark 2 to let me hook it up
independently; I'm also switching to the ATS1.5 publisher, which pushe
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/8744
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/11394
[SPARK-13513] [SQL] verify Feb 29 works on a leap year
## What changes were proposed in this pull request?
Some tests to verify that [SQLDate] and [SQLTimestamp] don't break
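Independent of the Spark internals, the leap-year arithmetic the PR exercises is easy to state (a minimal sketch of the Gregorian rule, not the [SQLDate]/[SQLTimestamp] test code):

```python
from datetime import date, timedelta

def is_leap(year: int) -> bool:
    # Gregorian rule: every 4th year, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Feb 29 2016 exists, and day arithmetic steps over it cleanly.
assert date(2016, 2, 29) + timedelta(days=1) == date(2016, 3, 1)
assert is_leap(2016) and is_leap(2000) and not is_leap(1900)
```

The 1900/2000 pair is the usual regression trap: both are divisible by 4, but only 2000 is a leap year.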
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11346#issuecomment-188792908
master is going to need work if/when the kryo version in chill is bumped to
3.x; there's an opportunity to have hive and spark using the same kryo ve
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/11346
[SPARK-13471] [SQL]: WiP update hive version to 1.2.1.1.spark
## What changes were proposed in this pull request?
This updates the hive dependency from 1.2.1.spark to 1.2.1.1.spark
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r53528784
--- Diff: docs/running-on-yarn.md ---
@@ -441,3 +441,81 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r53528281
--- Diff: docs/running-on-yarn.md ---
@@ -441,3 +441,81 @@ If you need a reference to the proper location to put
log files in the YARN so t
- In
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11033#issuecomment-186285168
Thomas: here's things trimmed back to do nothing but dump the credentials,
and the docs updated to cover what needs to be done, and how to troubleshoot i
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11033#issuecomment-184202371
I know that you can skip credential pickup for hbase & hive, this patch
also skips trying to collect any HDFS tokens for NNs, though of course that
coul
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11033#issuecomment-183672135
Yes: oozie gets all the tokens, but it has to hand them down. That env var
is how it does it. As it is the spark client always tries to retrieve those
tokens if
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-183331075
Endpoint                       Meaning
/applications                  A list of all applications
?status=[completed|running]    list only
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11152#issuecomment-183330853
done. Also put the examples on new lines with a <br> tag
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/6935
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-183262779
Good Q. We thought it'd be simple at first too.
1. We need a notion of "out-of-dateness" which (a) supports different back
ends, and (b)
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11033#issuecomment-182809024
This is to handle the launch situation
* secure cluster
* Oozie acquires tokens, sets to env var {{HADOOP_TOKEN_FILE_LOCATION}}
* launches client
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/11152
[SPARK-13267] [Web UI] document the ?param arguments of the REST API; lift
the…
Add to the REST API details on the ? args and examples from the test suite.
I've used the exi
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/8#discussion_r52465521
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -256,6 +269,215 @@ class HistoryServerSuite extends
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/8#discussion_r52465565
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -256,6 +269,215 @@ class HistoryServerSuite extends
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-182403322
LGTM; unifying the different probes for new-ness makes sense.
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/8#discussion_r52461352
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -551,6 +597,8 @@ private[history] class FsHistoryProvider
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/8#discussion_r52461110
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -511,6 +545,14 @@ private[history] class
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/10780#issuecomment-182386654
thanks!
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/8#discussion_r52220649
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/ApplicationCache.scala ---
@@ -0,0 +1,669 @@
+/*
+ * Licensed to the Apache