Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/969#discussion_r13777641
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -45,6 +44,12 @@ class ClientArguments(val args: Array[String
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-46074277
My point is that cluster and client mode should be consistent. Having `spark.yarn.dist.*` work only in client mode is not ideal.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/969#discussion_r13784281
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -45,6 +44,12 @@ class ClientArguments(val args: Array[String
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/991#discussion_r13790399
--- Diff: core/src/main/java/org/apache/spark/Service.java ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-46198873
@tgravescs Should the code look like this?
ClientArguments:
```scala
// --archives/--files via spark-submit or yarn-client defaults to using "file"
```
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-46326177
| spark-defaults.conf | command | path |
| - | - | - |
| spark.yarn.dist.archives /other/path | `./bin/spark-submit --archives /some/path` | |
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-46327168
Concentrating the `spark.yarn.dist.*`-related logic in `ClientArguments` is a
good idea.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1112
SPARK-1291: Link the spark UI to RM ui in yarn-client mode
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1291
Alternatively
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1124#discussion_r13951327
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -212,7 +208,14 @@ private[spark] class Executor(
val
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1124#discussion_r13951554
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -212,7 +208,14 @@ private[spark] class Executor(
val
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1135
The keys for sorting the columns of Executor page in SparkUI are incorrect
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-2181
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1135#discussion_r14005150
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/ExecutorTable.scala
---
@@ -67,18 +67,20 @@ private[ui] class ExecutorTable(stageId: Int, parent
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1153
Resolve sbt warnings during build
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark expectResult
Alternatively you can review
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1031#issuecomment-46654800
Jenkins, retest this please.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1031
---
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/1031
[WIP][SPARK-1720]use LD_LIBRARY_PATH instead of -Djava.library.path
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1720
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1031
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1031#issuecomment-4248
This solution won't work
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/598
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1174
SizeBasedRollingPolicy throw an java.lang.IllegalArgumentException in jdk6
...
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1180
[WIP]Spark 2037: yarn client mode doesn't support
spark.yarn.max.executor.failures
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14077844
--- Diff: core/src/main/scala/org/apache/spark/ui/UIUtils.scala ---
@@ -135,7 +135,16 @@ private[spark] object UIUtils extends Logging
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14077980
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -139,7 +139,8 @@ private[spark] object JettyUtils extends Logging
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14080567
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -139,7 +139,8 @@ private[spark] object JettyUtils extends Logging
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14081769
--- Diff:
yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -116,4 +118,15 @@ private[spark] class
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1194
SPARK-2248: spark.default.parallelism does not apply in local mode
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-2248
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-46977329
It should be related to
[SPARK-1693](https://issues.apache.org/jira/browse/SPARK-1693)
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-47056084
Jenkins, test this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1153#issuecomment-47056690
This PR is a follow-up optimization of #713 and should only be merged into
master.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1208
SPARK-1470: Use the scala-logging wrapper instead of the slf4j API directly
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/332
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1210#discussion_r14174587
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala ---
@@ -49,6 +49,17 @@ class HashShuffleReader[K, C](
} else
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1208#issuecomment-47118382
The main benefit is a unified logging interface. Right now the code uses
`scala-logging-slf4j` and `slf4j-api` at the same time.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/991#discussion_r14232502
--- Diff: core/src/main/java/org/apache/spark/Service.java ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/991#issuecomment-47217893
Jenkins, retest this please.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1233
[WIP] Loading spark-defaults.conf when creating SparkConf instances
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark defaults-conf
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1112
---
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/1112
SPARK-1291: Link the spark UI to RM ui in yarn-client mode
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1291
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1233#issuecomment-47372715
@vanzin The situation is that the `sbin/start-*.sh` scripts do not support
`spark-defaults.conf`,
e.g. `sbin/start-history-server.sh` cannot load
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1233#issuecomment-47373587
You're right; the corresponding code should be submitted over the weekend.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1256
[WIP]SPARK-2098: All Spark processes should support spark-defaults.conf,
config file
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1233
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1233#issuecomment-47428611
@vanzin I submitted a new PR, #1256, so I am closing this one.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1302#issuecomment-48038212
I ran some simple tasks and there seem to be no problems.
Jenkins, test this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1069#issuecomment-48285811
@srowen If so, I will close this PR and submit a new one that meets what you described.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1069
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1330
Resolve sbt warnings during build
At the same time, importing both `scala.language.postfixOps` and
`org.scalatest.time.SpanSugar._` causes `scala.language.postfixOps` to stop working
You can merge
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1341
*-history-server.sh load spark-env.sh
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark history_env
Alternatively you can review
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1341#issuecomment-48466773
Jenkins, retest this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1256#issuecomment-48492962
Jenkins, retest this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1341#issuecomment-48550851
```
if [ $# != 0 ]; then
  echo "Using command line arguments for setting the log directory is deprecated. Please"
  echo "set the spark.history.fs.logDirectory"
```
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1341#issuecomment-48586438
Use `load-spark-env.sh` to ensure `spark-env.sh` is loaded only once.
`start-master.sh` does the same.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14775753
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scala
---
@@ -142,9 +149,20 @@ class ExecutorLauncher(args
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14776233
--- Diff: core/src/main/scala/org/apache/spark/ui/UIUtils.scala ---
@@ -135,7 +135,16 @@ private[spark] object UIUtils extends Logging
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/1208
SPARK-1470: Use the scala-logging wrapper instead of the slf4j API directly
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1208
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1369
Use the scala-logging wrapper instead of the slf4j API directly
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1470_new
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1369#issuecomment-48696680
Jenkins, test this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1369#issuecomment-48708268
#332 can't be tested automatically.
#1208 got messed up and I do not know how to fix it. :sweat:
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/332#issuecomment-48708675
It can't be tested automatically. I submitted a new PR, #1369.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1256#discussion_r14850885
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala ---
@@ -38,19 +39,24 @@ private[spark] class MasterArguments(args:
Array
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1387
[WIP] When the executor throws an OutOfMemoryError exception, the driver runs
garbage collection
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-48841019
Right now `SparkContext.cleaner` does not take executor memory usage into account.
This can cause Spark to fail when memory runs short.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-48841151
@srowen
[Executor.scala#L253](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L253)
handle exceptions
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1393#issuecomment-48843123
How much does this increase overall memory usage? Is there a detailed comparison?
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1403
[WIP][SQL] By default does not run hive compatibility tests
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark hive_compatibility
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1404
Remove NOTE: SPARK_YARN is deprecated, please use -Pyarn flag
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark run-tests
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1404#issuecomment-48981721
Jenkins, test this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-48985219
I agree with your point.
But when an OutOfMemoryError is thrown, the error Spark gives is:
```
org.apache.spark.SparkException: Job aborted due to stage
```
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-48985713
```
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError=kill %p
# Executing /bin/sh -c kill 44942...
14/07/15 10:38:29 ERROR
```
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-49045150
@tgravescs The code has been submitted. Because I don't have a Hadoop
0.23.x cluster, the code has not been strictly tested.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1404#issuecomment-49048633
Jenkins, test this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49061792
`SparkContext.cleaner` will clean up unreferenced RDDs, shuffle data, and
broadcast variables.
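As a rough illustration of that idea (a minimal sketch under assumed names, not Spark's actual `ContextCleaner`; `SimpleCleaner`, `register`, and `pollOnce` are hypothetical), a cleaner can track objects through weak references and run a cleanup action once the JVM has collected them:

```scala
import java.lang.ref.{ReferenceQueue, WeakReference}
import scala.collection.concurrent.TrieMap

// Hypothetical sketch of weak-reference-driven cleanup, loosely modeled on
// the ContextCleaner idea discussed above; not Spark's real implementation.
class SimpleCleaner {
  private val queue = new ReferenceQueue[AnyRef]
  private val actions = TrieMap.empty[WeakReference[AnyRef], () => Unit]

  // Register a cleanup action to run after `obj` becomes unreachable.
  def register(obj: AnyRef, cleanup: () => Unit): Unit =
    actions.put(new WeakReference[AnyRef](obj, queue), cleanup)

  // Drain the reference queue once, running cleanup for collected objects.
  def pollOnce(): Int = {
    var cleaned = 0
    var ref = queue.poll()
    while (ref != null) {
      actions.remove(ref.asInstanceOf[WeakReference[AnyRef]]).foreach { f =>
        f(); cleaned += 1
      }
      ref = queue.poll()
    }
    cleaned
  }
}
```

A real driver-side cleaner would poll from a background thread rather than on demand; `pollOnce` keeps the sketch self-contained.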
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49064468
Explicitly clearing resources means keeping references to all of those
objects, which is very unfriendly for Java programmers.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49067472
Yes, `System.gc()` is just a hint and may not actually free resources. But RDD
has no close method, so it can only be cleaned up by `ContextCleaner`.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49072476
Yes, this solution is not perfect. I have been thinking about this problem.
BTW, the `runGC` method runs GC and makes sure it has actually run.
reference
https
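The thread does not include `runGC` itself; a minimal sketch of the idea described above (assumed names; the approach is to trigger a GC and wait until a weak reference to a throwaway object is cleared, proving a collection actually happened) might look like:

```scala
import java.lang.ref.WeakReference

// Sketch (hypothetical; based only on the description in the thread):
// request a GC and block until we can observe that one actually ran,
// by waiting for a weak reference to a sacrificial object to be cleared.
object GcUtil {
  def runGC(timeoutMillis: Long = 10000L): Unit = {
    val ref = new WeakReference[AnyRef](new Object())
    val deadline = System.currentTimeMillis() + timeoutMillis
    while (ref.get() != null) {
      System.gc()
      if (System.currentTimeMillis() > deadline) {
        sys.error(s"GC did not appear to run within $timeoutMillis ms")
      }
      Thread.sleep(100)
    }
  }
}
```

As the later comments note, this can spin for a long time if the JVM declines to collect, so the timeout is essential.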
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1387#discussion_r14953652
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskEventListener.scala ---
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49074742
I'm sorry, my English is poor. The problem now is that we do not have a
reliable way to make sure the RDD is cleared. Shall I close this first?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49075444
The main problem with the `runGC` method is that it may run for a long time
and still not work.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49076040
In my tests, the `runGC` method works normally on JDK 7u45.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1387#issuecomment-49077845
OK, tomorrow or the day after I will try it the way you suggested. I have only
tested the default GC configuration; I will test the others.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1330#issuecomment-49117596
As a result of #772, master has already fixed this problem. But we should
remove the line `<arg>-language:postfixOps</arg>` in
[pom.xml#L807](https://github.com/apache/spark
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1022
---
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/1022
SPARK-1719: spark.*.extraLibraryPath isn't applied on yarn
Fix: spark.executor.extraLibraryPath isn't applied on yarn
You can merge this pull request into a Git repository by running:
$ git
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1482
SPARK-2491: Fix: when an OOM is thrown, the executor does not stop properly.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-2491
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1409#issuecomment-49513420
@aarondav @pwendell
In my tests, it seems there is still a deadlock.
A possible reason can be found here: [Executor.scala#L189](https://github.com
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1501#issuecomment-49546564
cc @tgravescs
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1501
[YARN]In some cases, pages display incorrect in WebUI
The issue is caused by #1112 .
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1501#issuecomment-49549682
Jenkins, test this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/163#issuecomment-37902518
PR #150 can add filters to static resources.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/150#issuecomment-37903246
This has a high degree of overlap with #163. Once #163 is merged into master,
we can see whether to reopen this one.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/150#issuecomment-37947800
This is a necessary improvement. This PR works under both Jetty 7 and Jetty 9.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/150#discussion_r10734467
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -111,10 +111,13 @@ private[spark] object JettyUtils extends Logging {
Option
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/150#discussion_r10737769
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/ui/MasterWebUI.scala ---
@@ -61,7 +61,7 @@ class MasterWebUI(val master: Master, requestedPort: Int
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/150#discussion_r10737808
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -111,10 +111,13 @@ private[spark] object JettyUtils extends Logging {
Option
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/180#issuecomment-38417874
Who can merge this web UI improvement?
@aarondav
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/180#issuecomment-38422713
```scala
val pairs = sc.parallelize(Array((1, 1), (1, 2), (1, 3), (2, 1)))
pairs.take(1)
```
http://host:4040/stages/
Completed Stages table, Description column
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/180#issuecomment-38524035
I think the stack trace looks like this:
the call in the UI at
[StageTable.scala#L79](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/ui
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/180
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/234#discussion_r11008188
--- Diff: pom.xml ---
@@ -430,6 +430,16 @@
<version>${scala.version}</version>
</dependency>
<dependency>