GitHub user Zariel commented on the pull request:
https://github.com/apache/spark/pull/9663#issuecomment-156115289
@vanzin could you take a look please?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
GitHub user Zariel opened a pull request:
https://github.com/apache/spark/pull/9663
[SPARK-11695][CORE] Set s3a credentials
Set s3a credentials when creating a new default hadoop configuration.
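A minimal sketch of the idea, assuming the change copies the standard AWS environment variables into the corresponding `fs.s3a.*` keys when a default configuration is built. A mutable `Map` stands in for Hadoop's `Configuration`, and `setS3aCredentials` is an illustrative helper name, not Spark's actual API.

```scala
import scala.collection.mutable

// Copy AWS credentials from the environment into s3a configuration keys,
// only when the corresponding environment variable is present.
def setS3aCredentials(conf: mutable.Map[String, String],
                      env: Map[String, String]): Unit = {
  for (id <- env.get("AWS_ACCESS_KEY_ID")) conf("fs.s3a.access.key") = id
  for (secret <- env.get("AWS_SECRET_ACCESS_KEY")) conf("fs.s3a.secret.key") = secret
}

val conf = mutable.Map.empty[String, String]
setS3aCredentials(conf, Map(
  "AWS_ACCESS_KEY_ID" -> "example-id",
  "AWS_SECRET_ACCESS_KEY" -> "example-secret"))
```

If either variable is missing from the environment, the corresponding key is simply left unset, so an explicitly configured credential provider is not clobbered.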
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user Zariel commented on a diff in the pull request:
https://github.com/apache/spark/pull/8358#discussion_r42352139
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -655,15 +655,19 @@ private[spark] object Utils extends Logging {
// created
GitHub user Zariel commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-149158601
@andrewor14 that sounds like a good idea, log it at INFO or WARN?
---
GitHub user Zariel commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-147414856
@tnachen is there anything on my end that needs to be done to merge this?
---
GitHub user Zariel commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-140457896
@tnachen done
---
GitHub user Zariel commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-139751123
What should be the correct order for the local dirs? Currently, as far as I
can see, the priority is `YARN_LOCAL_DIRS > LOCAL_DIRS > SPARK_EXECUTOR_DIRS >
SPARK_L
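The precedence being discussed can be sketched as a first-match lookup over the candidate environment variables. This is an illustration of the ordering from the comment, not Spark's actual implementation; the fourth, truncated name is assumed to be `SPARK_LOCAL_DIRS`, and `resolveLocalDirs` is a hypothetical helper.

```scala
// Resolve local dirs by checking candidate env vars in priority order and
// taking the first one that is set; values are comma-separated paths.
def resolveLocalDirs(env: Map[String, String]): Option[Seq[String]] =
  Seq("YARN_LOCAL_DIRS", "LOCAL_DIRS", "SPARK_EXECUTOR_DIRS", "SPARK_LOCAL_DIRS")
    .flatMap(env.get(_))
    .headOption
    .map(_.split(",").toSeq)

// LOCAL_DIRS outranks SPARK_EXECUTOR_DIRS, so its paths win here.
val dirs = resolveLocalDirs(Map(
  "LOCAL_DIRS" -> "/yarn/a,/yarn/b",
  "SPARK_EXECUTOR_DIRS" -> "/exec/c"))
```

With this ordering, a YARN-managed deployment always wins over user-supplied executor settings; whether that is the desired behavior is exactly the question raised in the comment.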
GitHub user Zariel commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-138854320
I don't think there is an issue with running Mesos and wanting to run over
multiple disks, as it is the responsibility of whoever manages Mesos to set
up this space.
GitHub user Zariel opened a pull request:
https://github.com/apache/spark/pull/8358
[SPARK-9708] [MESOS] Spark should create local temporary directories in
Mesos sandbox when launched with Mesos
This is my own original work and I license this to the project under the
project's open source license.
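The intent described in the PR title can be sketched as preferring the Mesos task sandbox over the JVM's default temp directory. This is a hedged illustration, assuming the sandbox path reaches the executor via the `MESOS_DIRECTORY` environment variable that Mesos exports to tasks; `defaultLocalDir` is a hypothetical helper, not Spark's API.

```scala
// Prefer the Mesos sandbox for temporary storage when launched under Mesos,
// falling back to any user-set SPARK_LOCAL_DIRS, then the JVM temp dir.
def defaultLocalDir(env: Map[String, String]): String =
  env.getOrElse("MESOS_DIRECTORY",
    env.getOrElse("SPARK_LOCAL_DIRS", System.getProperty("java.io.tmpdir")))

val sandboxed = defaultLocalDir(Map("MESOS_DIRECTORY" -> "/mnt/mesos/sandbox"))
val fallback  = defaultLocalDir(Map("SPARK_LOCAL_DIRS" -> "/data/spark"))
```

Writing into the sandbox keeps scratch files under Mesos's disk accounting and lets the framework garbage-collect them with the task, instead of leaving them in `/tmp` on the host.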