Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/21213
@gengliangwang Thanks. But changing the URI format might introduce
incompatibilities with other versions, so we need to consider other aspects
more carefully.
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/21213
Thanks for reviewing this PR. Concerning your comments:
1. It makes sense. I'll extend the time to wait for the current page.
1. Personally, I wondered if redirecting page would
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/21213
Please review this PR
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/21213
test this please
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/21213
As I described in the issue description, the YARN filter couldn't pass query
strings when redirecting to the YARN application. It's already fixed in the
latest version of YARN, but some old versions still
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/21213
[SPARK-24120]
## What changes were proposed in this pull request?
Change the case where `jobId` is not passed as a parameter so that it
redirects to the `Jobs` page.
## How
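The redirect rule described in the PR summary can be sketched as below. This is an illustrative Java sketch, not the actual Spark UI code; the method name and paths are assumptions.

```java
// Illustrative sketch (not the actual Spark UI code) of the rule above:
// a request without a jobId parameter redirects to the Jobs overview page.
public class JobPageRedirect {
    static String redirectTarget(String jobId) {
        if (jobId == null || jobId.isEmpty()) {
            return "/jobs/";                 // no jobId: fall back to the Jobs page
        }
        return "/jobs/job/?id=" + jobId;     // jobId present: job detail page
    }

    public static void main(String[] args) {
        System.out.println(redirectTarget(null)); // prints "/jobs/"
        System.out.println(redirectTarget("3"));  // prints "/jobs/job/?id=3"
    }
}
```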
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/20860
@dongjoon-hyun Thanks. I'll do that next time. :-)
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/20860
@jerryshao Is there anything I should fix for that failure?
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/20860#discussion_r176654007
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -179,7 +179,7 @@ private[hive] class
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/20860#discussion_r176652812
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -179,7 +179,7 @@ private[hive] class
Github user jongyoul commented on the issue:
https://github.com/apache/spark/pull/20860
Thank you for reviewing this PR, @dongjoon-hyun @HyukjinKwon
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/20860
[SPARK-23743][SQL] Changed a comparison logic from containing 'slf4j' to
starting with 'org.slf4j'
## What changes were proposed in this pull request?
isSharedClass returns if some classes
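The comparison change named in the PR title can be sketched as below. This is a minimal Java sketch with hypothetical method names, not the actual IsolatedClientLoader code.

```java
// Minimal sketch of the comparison change: a substring match on "slf4j" is
// too broad (it also matches unrelated classes whose names merely contain
// "slf4j"), so the check is narrowed to a package-prefix match.
public class SharedClassCheck {
    static boolean isSharedOld(String className) {
        return className.contains("slf4j");       // before: substring match
    }

    static boolean isSharedNew(String className) {
        return className.startsWith("org.slf4j"); // after: prefix match
    }

    public static void main(String[] args) {
        String wrapper = "com.example.myslf4jbridge.Logger";
        System.out.println(isSharedOld(wrapper)); // true  (too broad)
        System.out.println(isSharedNew(wrapper)); // false (correctly excluded)
    }
}
```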
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93898567
@andrewor14 I've fixed what you raised. Please review and merge this.
---
If your project is set up for it, you can reply to this email and have your
reply appear
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93731580
I've rebased it onto the current master first.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93872085
Jenkins, retest this please.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93206397
@andrewor14 Thanks for the overall review. I'll address what you raised.
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/5063#discussion_r28381413
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -220,10 +222,9 @@ private[spark] class
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-88293005
@sryza Could you please merge this PR for spark 1.3.1?
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-87544727
@sryza I hope this will be the final review :-) I'm sorry for making typos
again and again.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86841550
@sryza Review it again, please.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86795209
Jenkins, retest this please
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/5063#discussion_r27275980
--- Diff: docs/running-on-mesos.md ---
@@ -211,6 +211,14 @@ See the [configuration page](configuration.html) for
information on Spark config
/td
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86357482
@sryza Do you know who can help with this build issue? I only edited docs.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86357586
Jenkins, retest this please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85920174
@tnachen @elyast I've updated it to support a fractional number of cores for
executors. Please review this PR. I've tested it on my cluster. My setting is
```
AVA_HOME
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86345441
I've rebased from master. Never mind the mistaken commit log message; it was
written while I was working on another issue.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86344972
Jenkins, retest this please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86327232
Jenkins, retest this please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85500449
@elyast @tnachen Do you think `CPUS_PER_TASK` should also support fractional
values? If not, I may support executorCores as a fractional value without
huge changes
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85317877
@tnachen I see.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5126#issuecomment-84614077
@srowen Yes, all tests passed in the test log. Do you know what the problem
is?
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/5126
[SPARK-6453][Mesos] Some Mesos*Suites have a different package from their
classes
- Moved Suites from o.a.s.s.mesos to o.a.s.s.cluster.mesos
You can merge this pull request into a Git repository
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-84761674
Jenkins, retest this please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-84769168
I rebased this from master
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-84769305
Jenkins, retest this please
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/5099#discussion_r26843969
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MemoryUtilsSuite.scala
---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-84258060
@elyast That's enough to explain why we can set this property. @sryza This
feature exists only in Mesos mode now. How about making this property
Mesos-specific for now
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/5088
[SPARK-6286][Mesos][minor] Handle missing Mesos case TASK_ERROR
- Added TaskState.isFailed to handle TASK_LOST and TASK_ERROR, and
synchronized CoarseMesosSchedulerBackend
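The helper described in the PR summary can be sketched as below, assuming it simply folds the two terminal failure states into one predicate. The enum here is illustrative, not the real Mesos protobuf TaskState.

```java
// Minimal sketch of a TaskState.isFailed-style helper: instead of checking
// TASK_LOST and TASK_ERROR separately at every call site, fold both terminal
// failure states into one predicate.
public class TaskStateCheck {
    enum TaskState { TASK_RUNNING, TASK_FINISHED, TASK_KILLED, TASK_LOST, TASK_ERROR }

    static boolean isFailed(TaskState state) {
        return state == TaskState.TASK_LOST || state == TaskState.TASK_ERROR;
    }

    public static void main(String[] args) {
        System.out.println(isFailed(TaskState.TASK_ERROR));    // true
        System.out.println(isFailed(TaskState.TASK_FINISHED)); // false
    }
}
```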
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5088#issuecomment-83433890
@srowen Please review this PR, which is related to #5000, for handling
`TASK_ERROR`.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5088#issuecomment-83440190
jenkins, test this, please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5099#issuecomment-83861862
@srowen I made a PR to handle the unintended behaviour raised by #5065.
Please review it.
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/5099
[SPARK-6423][Mesos] MemoryUtils should use memoryOverhead if it's set
- Fixed calculateTotalMemory to use spark.mesos.executor.memoryOverhead
- Added testCase
You can merge this pull request
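The fix described in the PR summary can be sketched as below. The 10% fraction and 384 MB floor are assumptions for illustration, not confirmed constants from the PR; only the "explicit memoryOverhead wins when set" rule comes from the summary above.

```java
// Minimal sketch of a calculateTotalMemory-style calculation: prefer an
// explicit spark.mesos.executor.memoryOverhead value when set, otherwise
// fall back to a default fraction of executor memory with a floor.
public class MemoryOverhead {
    static int totalMemoryMb(int executorMemoryMb, Integer overheadMbOrNull) {
        int overhead = (overheadMbOrNull != null)
                ? overheadMbOrNull                                // explicit setting wins
                : Math.max((int) (executorMemoryMb * 0.10), 384); // assumed default: 10% with a floor
        return executorMemoryMb + overhead;
    }

    public static void main(String[] args) {
        System.out.println(totalMemoryMb(5120, null)); // 5632 (default overhead 512)
        System.out.println(totalMemoryMb(1024, 512));  // 1536 (explicit overhead)
    }
}
```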
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-83887545
I don't think the parameter name is a critical issue. It looks like
@sryza doesn't want a new configuration parameter used only in a specific
cluster, but he wants
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5088#issuecomment-83619131
@srowen Thanks.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-82812556
@sryza I mean the former one. I agree with your opinion on
`spark.executor.frameworkCores`, and I have one more question. Do you think
it's good to update
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-82821343
@sryza What do you think of @tnachen's opinion? To avoid this
confusion, I added comments noting that this configuration is used in Mesos
fine-grained mode
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5065#issuecomment-83030138
@srowen No, the Mesos doc is not wrong; I mean that Mesos also uses MB in the
sense of MiB.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5000#issuecomment-83027537
@srowen Two missing points remain. We should handle
`MesosTaskState.TASK_LOST` in `MesosSchedulerBackend` and
`CoarseMesosSchedulerBackend`. I will follow up
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5000#issuecomment-83276749
@dragos Yes, right. I'll add a `TaskState.isFailed` method.
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/5065
SPARK-6085 Part. 2 Increase default value for memory overhead
- fixed a description of spark.mesos.executor.memoryOverhead from 7% to 10%
- This is a second part of SPARK-6085
You can merge
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5065#issuecomment-82193193
@srowen Please review this PR. This is the second part of SPARK-6085, which
you committed. I just fixed a small piece of documentation, so I didn't file
another issue
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/5063
[SPARK-6350][Mesos] Make mesosExecutorCores configurable in mesos
fine-grained mode
- Defined executorCores from spark.mesos.executor.cores
- Changed the amount of mesosExecutor's cores
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-82141199
Jenkins, test this, please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-82189318
@elyast I submitted a PR based on #4170. @tnachen Could you please review
my PR?
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/5063#discussion_r26556299
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -67,6 +67,8 @@ private[spark] class
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5065#issuecomment-82701925
@srowen I'm sorry for not fully understanding your opinion - I'm not a
native English speaker - do you mean that it's proper to use MB instead of MiB?
I've found the Mesos project
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-82694857
@sryza Thanks. Do you think it is reasonable to use `spark.executor.cores`
as the Mesos executor cores? In fine-grained mode on Mesos, each task has its
own cores
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-82031855
@tnachen @elyast I made a new issue about configuring Mesos executor cores.
https://issues.apache.org/jira/browse/SPARK-6350
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4361#issuecomment-80797514
@srowen Master and branch-1.3 depend on Mesos 0.21.0. Is it OK
to merge this into branch-1.3? And MESOS-1688 is resolved by Mesos 0.21.0, so
Spark isn't
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-80798498
@elyast Thanks for your interest in this PR, which was about core and memory
resources. I misunderstood how Mesos works, especially on the memory side, so
I closed this PR
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4361#issuecomment-78812902
@pwendell @srowen Please review this PR. It's about SPARK-3619, which is
already completed but is missing some documentation about the env variable
name. From mesos-0.21.0
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5000#issuecomment-78759592
@dragos You will find more code that you could fix in *SchedulerBackend,
which handles MesosTaskState directly.
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/4361#discussion_r25142832
--- Diff: conf/spark-env.sh.template ---
@@ -15,7 +15,7 @@
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/4361#discussion_r25141239
--- Diff: conf/spark-env.sh.template ---
@@ -15,7 +15,7 @@
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4361#issuecomment-74405146
@tnachen I've already changed docs/running_on_mesos.md. Could you please
tell me if any other docs need their description changed?
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/4361
[SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688
- MESOS_NATIVE_LIBRARY became deprecated
- Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
You can merge
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71767996
@mateiz That's a sample screenshot.
![screen shot 2015-01-28 at 10 55 07
am](https://cloud.githubusercontent.com/assets/3612566/5931130/a76d4d98-a6dc-11e4-8876
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71765405
@mateiz We agree that one executor with multiple tasks is the intended
behaviour. In this situation, MesosScheduler offers CPUS_PER_TASK resources to
the executor when we launch
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71765614
Thus, I believe the executor has its cores and memory set on ExecutorInfo,
and each task has its own cores and memory set on TaskInfo, while
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71770002
Sorry, I should have shown you my configuration. My configuration is 5G for
SPARK_EXECUTOR_MEMORY and 5 for spark.task.cpus. In my screenshot, we launch
two tasks on the same
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71772544
I don't know the behaviour in coarse-grained mode, but in fine-grained
mode, we use multiple JVMs to run tasks. We run spark-class via the launcher.
This means we launch
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71773477
I believed that when the Mesos driver calls launchTasks, the container runs
the command `bin/spark-class` every time a task runs. And in my Q&A email to
the Mesos list, @tnachen
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71775404
@tnachen @mateiz So sorry for taking up a lot of your time. I've found that
only one executor process runs at any time, and I understand an executor can
have multiple
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71773877
@tnachen Yes, I fully understand reusing an executor while a framework is
alive. However, do we launch two tasks on the same executor? What you've
answered is that they are launched
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71776099
I'll close this PR. It's the wrong approach.
Github user jongyoul closed the pull request at:
https://github.com/apache/spark/pull/3994
Github user jongyoul closed the pull request at:
https://github.com/apache/spark/pull/4170
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-71776175
I'll also close this PR. I've misunderstood Mesos; see #4170
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71411958
/cc @mateiz Could you please review this PR, which is about offering
resources to the executor and tasks?
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4172#issuecomment-71161988
retest this please
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/4170
[SPARK-5376][Mesos] MesosExecutor should have correct resources
- Divided task and executor resources
- Added `spark.mesos.executor.cpus` and fixed docs
You can merge this pull request
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-71150639
/cc @tnachen @pwendell This PR addresses @pwendell's TODO. Please review it.
Github user jongyoul commented on a diff in the pull request:
https://github.com/apache/spark/pull/4170#discussion_r23433986
--- Diff: docs/configuration.md ---
@@ -341,6 +341,13 @@ Apart from these, the following properties are also
available, and may be useful
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4172#issuecomment-71158869
/cc @tdas This PR is a second part of
[SPARK-5058](https://issues.apache.org/jira/browse/SPARK-5058) which you
already reviewed and resolved.
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/4172
[SPARK-5058] Part 2. Typos and broken URL
- Also fixed java link
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jongyoul/spark SPARK-FIXDOC
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4119#issuecomment-70633473
I've added license information.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4119#issuecomment-70639628
retest this please
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4119#issuecomment-70632731
/cc @JoshRosen This is related to SPARK-4104 / #3849. Please review this PR.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4119#issuecomment-70633023
retest this, please.
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/4119
[SPARK-5333][Mesos] MesosTaskLaunchData causes BufferUnderflowException
- Rewind ByteBuffer before making ByteString
You can merge this pull request into a Git repository by running:
$ git
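The failure mode and fix named in the PR summary ("Rewind ByteBuffer before making ByteString") can be demonstrated as below. This illustrates the general ByteBuffer behaviour, not the actual MesosTaskLaunchData code.

```java
import java.nio.ByteBuffer;

// Reading from a ByteBuffer advances its position, so a later "whole buffer"
// read sees fewer remaining bytes (or underflows). Rewinding first restores
// the full view of the buffer's contents.
public class RewindDemo {
    static byte[] safeCopy(ByteBuffer buf) {
        buf.rewind();                          // reset position to 0 before re-reading
        byte[] copy = new byte[buf.remaining()];
        buf.get(copy);                         // now copies the entire content
        return copy;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        buf.getInt();                          // consumes all 4 bytes; remaining() is now 0
        System.out.println(safeCopy(buf).length); // prints 4
    }
}
```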
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70594275
@tnachen Yes, I've also found what you mentioned about the timeout. I'll
check it again by changing that value. But changing the executorId is needed.
If there are two tasks running
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70597055
@tnachen In my case - the logs above - tasks 34 and 63 are assigned to the
same executor, and also the same container on the same node. Task 34 hits an
error about registration timeout
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70600230
@tnachen Ok, I see. It happens when the executor couldn't get launched,
doesn't it? I'll change that setting first.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3897#issuecomment-70444764
Rebase is not finished.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3897#issuecomment-70445944
retest this please.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3897#issuecomment-70455615
@mateiz I've rebased this PR and the tests finished successfully. Please
merge this.
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70358767
@mateiz I don't think I know exactly how you intend fine-grained mode to
behave. What would help me understand more? I don't know how multiple
executors break Spark's intended
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70359541
@tnachen
- Slave page
![screen shot 2015-01-17 at 5 38 20
pm](https://cloud.githubusercontent.com/assets/3612566/5788288/a87230de-9e6f-11e4-8e18-972d6b3b9204
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70359713
@tnachen And the slave's logs around tasks 34 and 63. It looks like if any
task hits an error while running, the executor running that task is terminated.
Check
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3994#issuecomment-70352301
@mateiz I have one slave per node, and the problem occurs when two tasks are
launched at the same time. The two tasks run in the same container, so it
makes both tasks leave