SuYan created SPARK-4167:
Summary: Task scheduling on executors becomes imbalanced when tasks
run for less than the locality-wait time
Key: SPARK-4167
URL: https://issues.apache.org/jira/browse/SPARK-4167
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-4167:
-
Description:
Recently, when running a Spark-on-YARN job, executor scheduling became imbalanced.
The procedure is
[
https://issues.apache.org/jira/browse/SPARK-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan closed SPARK-4167.
Resolution: Not a Problem
Task scheduling on executors becomes imbalanced when tasks run for less than
the locality-wait time
SuYan created SPARK-4200:
Summary: akka.loglevel
Key: SPARK-4200
URL: https://issues.apache.org/jira/browse/SPARK-4200
Project: Spark
Issue Type: Question
Components: Spark Core
SuYan created SPARK-4471:
Summary: blockManagerIdFromJson throws an exception when the BlockManagerId
in a MetadataFetchFailedException is null
Key: SPARK-4471
URL: https://issues.apache.org/jira/browse/SPARK-4471
SuYan created SPARK-4714:
Summary: Check whether the block is null after acquiring info.lock in the
remove-block method
Key: SPARK-4714
URL: https://issues.apache.org/jira/browse/SPARK-4714
Project: Spark
SuYan created SPARK-4721:
Summary: Improve handling when the first thread to put a block fails
Key: SPARK-4721
URL: https://issues.apache.org/jira/browse/SPARK-4721
Project: Spark
Issue Type: Improvement
SuYan created SPARK-4777:
Summary: Some block memory after unrollSafely is not counted into used
memory (memoryStore.entries or unrollMemory)
Key: SPARK-4777
URL: https://issues.apache.org/jira/browse/SPARK-4777
[
https://issues.apache.org/jira/browse/SPARK-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14249330#comment-14249330
]
SuYan commented on SPARK-4777:
--
Hi Sean Owen, I intended to close that patch, but after
[
https://issues.apache.org/jira/browse/SPARK-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-5259:
-
Description:
1. While a shuffle stage is retried, there may be 2 TaskSets running.
We call the 2
SuYan created SPARK-5259:
Summary: Add Task equals() and hashCode() to keep stage.pendingTasks accurate
when a stage is retried
Key: SPARK-5259
URL: https://issues.apache.org/jira/browse/SPARK-5259
[
https://issues.apache.org/jira/browse/SPARK-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-5259:
-
Summary: Fix endless stage retry by adding Task equals() and hashCode() to
avoid stage.pendingTasks staying non-empty while
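The summaries above propose giving tasks an equals()/hashCode() so that stage.pendingTasks stays accurate across retried TaskSets. A minimal sketch of that idea, in Java for illustration: TaskKey is a hypothetical stand-in for a task's identity (not Spark's actual Task class), keyed on (stageId, partitionId) so a completion reported from any TaskSet attempt clears the matching pending entry.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical task identity keyed by (stageId, partitionId): a pending-task
// set then treats the same partition from a retried TaskSet as the same task.
final class TaskKey {
    final int stageId;
    final int partitionId;

    TaskKey(int stageId, int partitionId) {
        this.stageId = stageId;
        this.partitionId = partitionId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TaskKey)) return false;
        TaskKey t = (TaskKey) o;
        return stageId == t.stageId && partitionId == t.partitionId;
    }

    @Override
    public int hashCode() {
        return Objects.hash(stageId, partitionId);
    }
}

public class PendingTasksDemo {
    public static void main(String[] args) {
        Set<TaskKey> pending = new HashSet<>();
        pending.add(new TaskKey(1, 0));    // task from TaskSet attempt 1.0
        pending.add(new TaskKey(1, 0));    // same partition, retried attempt 1.1: no duplicate entry
        pending.remove(new TaskKey(1, 0)); // a completion from either attempt clears it
        System.out.println(pending.size()); // 0
    }
}
```

Without value-based equals()/hashCode(), each attempt's task object would be a distinct set element, and the stage could never see its pending set drain.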
SuYan created SPARK-5132:
Summary: The name used to get the stage info attempt ID from JSON was wrong
Key: SPARK-5132
URL: https://issues.apache.org/jira/browse/SPARK-5132
Project: Spark
Issue Type: Bug
SuYan created SPARK-6606:
Summary: Accumulator is deserialized twice because NarrowCoGroupSplitDep
contains the RDD object
Key: SPARK-6606
URL: https://issues.apache.org/jira/browse/SPARK-6606
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan closed SPARK-6606.
Resolution: Duplicate
Duplicate of SPARK-5360; see https://github.com/apache/spark/pull/4145
Accumulator
[
https://issues.apache.org/jira/browse/SPARK-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14341285#comment-14341285
]
SuYan commented on SPARK-5945:
--
I encountered infinite stage retries when an executor was lost
[
https://issues.apache.org/jira/browse/SPARK-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14346818#comment-14346818
]
SuYan edited comment on SPARK-6156 at 3/4/15 12:08 PM:
---
Sean Owen,
SuYan created SPARK-6157:
Summary: An unsuccessfully unrolled MEMORY_AND_DISK-level block should release
its reserved unroll memory after being successfully put to disk
Key: SPARK-6157
URL: https://issues.apache.org/jira/browse/SPARK-6157
[
https://issues.apache.org/jira/browse/SPARK-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-6156:
-
Affects Version/s: 1.2.1
Fix Version/s: (was: 1.3.0)
Refine Put Memory_And_Disk block
[
https://issues.apache.org/jira/browse/SPARK-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-6157:
-
Component/s: (was: Spark Core)
Block Manager
An unsuccessfully unrolled MEMORY_AND_DISK-level
SuYan created SPARK-6156:
Summary: Refine Put Memory_And_Disk block
Key: SPARK-6156
URL: https://issues.apache.org/jira/browse/SPARK-6156
Project: Spark
Issue Type: Bug
Reporter: SuYan
[
https://issues.apache.org/jira/browse/SPARK-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-6156:
-
Fix Version/s: 1.3.0
Refine Put Memory_And_Disk block
Key:
[
https://issues.apache.org/jira/browse/SPARK-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-6156:
-
Component/s: Spark Core
Refine Put Memory_And_Disk block
[
https://issues.apache.org/jira/browse/SPARK-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-6156:
-
Summary: Do not cache a MEMORY_AND_DISK-level block in memory again after
putting it to disk following an unsuccessful unroll
[
https://issues.apache.org/jira/browse/SPARK-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578839#comment-14578839
]
SuYan commented on SPARK-8101:
--
Sorry for seeing that so late; that problem was fixed in
SuYan created SPARK-8100:
Summary: Make it possible to view lost executors' logs
Key: SPARK-8100
URL: https://issues.apache.org/jira/browse/SPARK-8100
Project: Spark
Issue Type: Improvement
Affects
SuYan created SPARK-8101:
Summary: Upgrade Netty to avoid a memory leak, per Netty issue #3837
Key: SPARK-8101
URL: https://issues.apache.org/jira/browse/SPARK-8101
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572715#comment-14572715
]
SuYan commented on SPARK-8101:
--
We may upgrade it at some point
Upgrade Netty to avoid a memory leak
SuYan created SPARK-8044:
Summary: Avoid using directMemory when putting or getting a block from file
Key: SPARK-8044
URL: https://issues.apache.org/jira/browse/SPARK-8044
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-8044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-8044:
-
Summary: Avoid using directMemory when putting or getting a disk-level block
from file (was: Invoid use directMemory
[
https://issues.apache.org/jira/browse/SPARK-8044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-8044:
-
Description:
1. I found that if we use getChannel to put or get data, it will create a
DirectBuffer anyway, which is
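The description above is cut off, but the behavior it refers to is observable in the JDK: reading through a FileChannel into a heap ByteBuffer makes the runtime stage the bytes in a temporary internal direct buffer first, so channel I/O consumes direct memory even when the caller only asked for a heap buffer. A small sketch (the temporary-direct-buffer copy is JDK implementation behavior and is not directly visible from this code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelReadDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("block", ".bin");
        Files.write(tmp, new byte[]{1, 2, 3, 4});

        ByteBuffer heap = ByteBuffer.allocate(4); // heap buffer, not direct
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            // The JDK copies through a temporary *direct* buffer internally here,
            // even though the destination is a heap buffer.
            ch.read(heap);
        }
        heap.flip();
        System.out.println(heap.isDirect());   // false: the caller's buffer is heap
        System.out.println(heap.remaining());  // 4
        Files.delete(tmp);
    }
}
```

Reading via a plain InputStream into a byte[] avoids that hidden direct-memory staging, which seems to be the trade-off the issue is weighing.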
[
https://issues.apache.org/jira/browse/SPARK-8044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-8044:
-
Summary: Avoid using directMemory while putting or getting a disk-level block
from file (was: Invoid use directMemory while
[
https://issues.apache.org/jira/browse/SPARK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612786#comment-14612786
]
SuYan commented on SPARK-5594:
--
Did you write something like:
object XXX {
val sc = new
[
https://issues.apache.org/jira/browse/SPARK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612785#comment-14612785
]
SuYan commented on SPARK-5594:
--
Did you write something like:
object XXX {
val sc = new
[
https://issues.apache.org/jira/browse/SPARK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-5594:
-
Comment: was deleted
(was: Did you write something like:
object XXX {
val sc = new SparkContext()
def main {
SuYan created SPARK-10052:
-
Summary: KafkaDirectDStream should filter out tasks or RDDs for empty
partitions
Key: SPARK-10052
URL: https://issues.apache.org/jira/browse/SPARK-10052
Project: Spark
Issue Type:
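The summary above is about skipping work for partitions with no new data. A hedged sketch of the idea, in Java for illustration: OffsetRange here is a hypothetical stand-in for Kafka's per-partition offset range, not Spark's real class; a partition is empty for a batch when fromOffset == untilOffset, so no task needs to be launched for it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical per-partition offset range for one streaming batch.
final class OffsetRange {
    final int partition;
    final long fromOffset, untilOffset;

    OffsetRange(int partition, long fromOffset, long untilOffset) {
        this.partition = partition;
        this.fromOffset = fromOffset;
        this.untilOffset = untilOffset;
    }

    long count() {
        return untilOffset - fromOffset; // records available in this batch
    }
}

public class FilterEmptyPartitions {
    // Keep only partitions that actually received records this batch.
    static List<OffsetRange> nonEmpty(List<OffsetRange> ranges) {
        List<OffsetRange> out = new ArrayList<>();
        for (OffsetRange r : ranges) {
            if (r.count() > 0) out.add(r); // drop partitions with no new data
        }
        return out;
    }

    public static void main(String[] args) {
        List<OffsetRange> ranges = new ArrayList<>();
        ranges.add(new OffsetRange(0, 100, 100)); // empty: no task needed
        ranges.add(new OffsetRange(1, 50, 75));   // 25 records: keep
        System.out.println(nonEmpty(ranges).size()); // 1
    }
}
```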
SuYan created SPARK-11746:
-
Summary: Use the cache-aware method 'dependencies' instead of
'getDependencies'
Key: SPARK-11746
URL: https://issues.apache.org/jira/browse/SPARK-11746
Project: Spark
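The summary above contrasts a memoized accessor with its recompute-every-time counterpart. A minimal sketch of the difference, in Java for illustration (Node, computeCount, and the method bodies are hypothetical, mirroring the pattern rather than Spark's actual RDD code):

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// getDependencies() recomputes on every call; dependencies() computes once
// and caches, which is the cache-aware accessor the summary recommends.
class Node {
    final AtomicInteger computeCount = new AtomicInteger(); // recomputation counter
    private List<String> cached; // memoized result

    // Expensive recomputation path.
    List<String> getDependencies() {
        computeCount.incrementAndGet();
        return Collections.singletonList("parent-rdd");
    }

    // Cache-aware accessor: computes at most once.
    List<String> dependencies() {
        if (cached == null) cached = getDependencies();
        return cached;
    }
}

public class DependenciesDemo {
    public static void main(String[] args) {
        Node n = new Node();
        n.dependencies();
        n.dependencies();
        System.out.println(n.computeCount.get()); // 1: computed once, then served from cache
    }
}
```

Calling the memoized accessor in hot paths avoids rebuilding the dependency list on every traversal.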
[
https://issues.apache.org/jira/browse/SPARK-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10842:
--
Affects Version/s: 1.5.0
Priority: Minor (was: Major)
Description: When we traverse
SuYan created SPARK-10842:
-
Summary: Eliminate duplicate stages
Key: SPARK-10842
URL: https://issues.apache.org/jira/browse/SPARK-10842
Project: Spark
Issue Type: Improvement
Reporter:
[
https://issues.apache.org/jira/browse/SPARK-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10842:
--
Description:
When we traverse the RDD graph to generate the stage DAG, Spark skips judging
whether the stage was
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909990#comment-14909990
]
SuYan edited comment on SPARK-10796 at 9/28/15 2:59 AM:
Running Stage 0, running
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909990#comment-14909990
]
SuYan commented on SPARK-10796:
---
Running Stage 0, running TaskSet 0.0, finished task 0.0 on ExecA, running
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14909990#comment-14909990
]
SuYan edited comment on SPARK-10796 at 9/28/15 3:00 AM:
Running Stage 0.0,
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10796:
--
Affects Version/s: 1.4.0
1.5.0
> A stage's TaskSets may all be removed while the stage
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933176#comment-14933176
]
SuYan commented on SPARK-10796:
---
[~sowen] Hi, I have reproduced that problem in the latest version; already
SuYan created SPARK-10796:
-
Summary: A stage's TaskSets may all be removed while the stage still has
pending partitions after losing some executors
Key: SPARK-10796
URL:
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10796:
--
Description:
We met that problem in Spark 1.3.0; I also checked the latest Spark code,
and I think that
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907532#comment-14907532
]
SuYan edited comment on SPARK-10796 at 9/25/15 4:07 AM:
I have already refined that
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907532#comment-14907532
]
SuYan edited comment on SPARK-10796 at 9/25/15 5:11 AM:
I have already refined that
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907532#comment-14907532
]
SuYan commented on SPARK-10796:
---
I have already refined that description. A simple example will be added later.
If
If
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10796:
--
Description:
We met that problem in Spark 1.3.0; I also checked the latest Spark code,
and I think that
SuYan created SPARK-12419:
-
Summary: Should an executor lost with FetchFailed = false not be allowed to
re-register with the BlockManager Master again?
Key: SPARK-12419
URL: https://issues.apache.org/jira/browse/SPARK-12419
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029427#comment-15029427
]
SuYan commented on SPARK-12009:
---
Ran on Spark 1.4.0 and checked the current 1.5.2; that problem still exists,
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029410#comment-15029410
]
SuYan commented on SPARK-12009:
---
= =, the log is based on 1.4.0
{code}
override def
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-12009:
--
Description:
Log based on 1.4.0
2015-11-26,03:05:16,176 WARN
SuYan created SPARK-12009:
-
Summary: Avoid re-allocating YARN containers while the driver wants to stop
all executors
Key: SPARK-12009
URL: https://issues.apache.org/jira/browse/SPARK-12009
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028288#comment-15028288
]
SuYan commented on SPARK-12009:
---
The user had called sc.stop in the main program
> Avoid re-allocating YARN
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031298#comment-15031298
]
SuYan edited comment on SPARK-12009 at 11/30/15 3:59 AM:
-
I still think it is
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031297#comment-15031297
]
SuYan commented on SPARK-12009:
---
The default time is 10 min... after 10 min, YARN will mark the AM as expired, and
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031283#comment-15031283
]
SuYan commented on SPARK-12009:
---
[~jerryshao]
I will take some time to look into whether YARN lost the heartbeat
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031298#comment-15031298
]
SuYan commented on SPARK-12009:
---
I still think it is better to only stop requesting new containers
> Avoid
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028363#comment-15028363
]
SuYan commented on SPARK-12009:
---
The AM has not exited; it will exit when the driver executes its user code in
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028363#comment-15028363
]
SuYan edited comment on SPARK-12009 at 11/26/15 8:42 AM:
-
The AM has not exited; it will
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028363#comment-15028363
]
SuYan edited comment on SPARK-12009 at 11/26/15 8:42 AM:
-
The AM has not exited; it will
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320131#comment-15320131
]
SuYan commented on SPARK-15815:
---
Got stage-partition blacklisted executors, to find whether the task can run
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-15815:
--
Comment: was deleted
(was: Got stage-partition blacklisted executors, to find whether the task can
run successfully
SuYan created SPARK-15815:
-
Summary: Hang when enabling blacklistExecutor and
DynamicExecutorAllocator
Key: SPARK-15815
URL: https://issues.apache.org/jira/browse/SPARK-15815
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331285#comment-15331285
]
SuYan edited comment on SPARK-15815 at 6/15/16 7:17 AM:
I see... although it can
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331285#comment-15331285
]
SuYan commented on SPARK-15815:
---
I see... although it can solve the hang problem, for dynamic allocation,
[
https://issues.apache.org/jira/browse/SPARK-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311938#comment-15311938
]
SuYan commented on SPARK-12757:
---
[~joshrosen]
Hi, can someone do some work to merge this patch and
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333198#comment-15333198
]
SuYan edited comment on SPARK-15815 at 6/16/16 6:24 AM:
eh...yes, still have the
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333198#comment-15333198
]
SuYan commented on SPARK-15815:
---
eh... yes, there is still uncertainty in getting other executors; how can we
[
https://issues.apache.org/jira/browse/SPARK-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-13060:
--
Description:
Hi Josh,
I am a Spark user, and I am currently confused about executor registration.
{code}
Why
SuYan created SPARK-13060:
-
Summary: Should CoarseGrainedExecutorBackend registration with the driver
wait until the Executor is ready?
Key: SPARK-13060
URL: https://issues.apache.org/jira/browse/SPARK-13060
Project: Spark
SuYan created SPARK-13112:
-
Summary: CoarseGrainedExecutorBackend registration with the driver should
wait until the Executor is ready
Key: SPARK-13112
URL: https://issues.apache.org/jira/browse/SPARK-13112
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-14957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-14957:
--
Description:
It always adopts the first dir, and never tests whether the dir exists or is
readable or writable.
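The description above asks for exactly the probe the code skips. A hedged sketch of that fix, in Java for illustration: instead of blindly taking the first configured directory, test each candidate for existence, readability, and writability, and take the first one that passes. firstUsableDir is a hypothetical helper, not the shuffle service's real API.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class DirProbe {
    // Return the first candidate directory that actually exists and is usable.
    static Optional<Path> firstUsableDir(List<Path> candidates) {
        for (Path d : candidates) {
            if (Files.isDirectory(d) && Files.isReadable(d) && Files.isWritable(d)) {
                return Optional.of(d); // first directory we can actually use
            }
        }
        return Optional.empty(); // caller must handle "no usable dir" explicitly
    }

    public static void main(String[] args) {
        List<Path> dirs = Arrays.asList(
                Paths.get("/nonexistent-dir"),                    // fails the probe
                Paths.get(System.getProperty("java.io.tmpdir"))); // usually usable
        System.out.println(firstUsableDir(dirs).orElseThrow(IllegalStateException::new));
    }
}
```

Returning Optional.empty() instead of a bad path forces the caller to fail fast rather than storing executor metadata in a directory that was never there.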
SuYan created SPARK-14957:
-
Summary: Can't connect to the YARN shuffle service because it adopts a
non-existent dir to store executor metadata
Key: SPARK-14957
URL: https://issues.apache.org/jira/browse/SPARK-14957
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253237#comment-15253237
]
SuYan commented on SPARK-14750:
---
yarn.log-aggregation-enable
true
// if this =true, means
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280044#comment-15280044
]
SuYan commented on SPARK-14750:
---
[~vanzin], I just saw your comment today; enabling the MR JobHistoryServer
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281144#comment-15281144
]
SuYan edited comment on SPARK-14750 at 5/12/16 3:08 AM:
[~vanzin] Ah, thanks for
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281144#comment-15281144
]
SuYan commented on SPARK-14750:
---
Ah, thanks for helping me find a simple way to do this...
> Make
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278124#comment-15278124
]
SuYan commented on SPARK-10796:
---
Main changes:
1. Make DAGScheduler only receive task-resubmit events from
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10796:
--
Description:
{code}
test("Resubmit stage while lost partition in ZombieTasksets or
RemovedTaskSets") {
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10796:
--
Description:
Description:
1. We know a running ShuffleMapStage can have multiple TaskSets: one active
TaskSet,
[
https://issues.apache.org/jira/browse/SPARK-10796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-10796:
--
Description:
We met that problem in Spark 1.3.0; I also checked the latest Spark code,
and I think that
SuYan created SPARK-14750:
-
Summary: Make the history server link to application logs in HDFS
Key: SPARK-14750
URL: https://issues.apache.org/jira/browse/SPARK-14750
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251184#comment-15251184
]
SuYan commented on SPARK-14750:
---
In YARN mode, Spark users can't view executor logs via the history server
logUrl,
SuYan created SPARK-14804:
-
Summary: Graph VertexRDD/EdgeRDD checkpoint results in
ClassCastException:
Key: SPARK-14804
URL: https://issues.apache.org/jira/browse/SPARK-14804
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-14804:
--
Description:
{code}
graph3.vertices.checkpoint()
graph3.vertices.count()
[
https://issues.apache.org/jira/browse/SPARK-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-14804:
--
Priority: Minor (was: Major)
> Graph VertexRDD/EdgeRDD checkpoint results in ClassCastException:
>
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251608#comment-15251608
]
SuYan commented on SPARK-14750:
---
# but there's not a reason to expect they also remain wherever they were
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251607#comment-15251607
]
SuYan commented on SPARK-14750:
---
# but there's not a reason to expect they also remain wherever they were
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
SuYan updated SPARK-14750:
--
Comment: was deleted
(was:
# but there's not a reason to expect they also remain wherever they were logged
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251558#comment-15251558
]
SuYan edited comment on SPARK-14750 at 4/21/16 8:37 AM:
historyServer for spark
[
https://issues.apache.org/jira/browse/SPARK-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251558#comment-15251558
]
SuYan commented on SPARK-14750:
---
For the history server with Spark on YARN, the logUrl was something like:
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397271#comment-15397271
]
SuYan commented on SPARK-15815:
---
The current temporary solution: when all executors have been timed out for
60s, we will
[
https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397271#comment-15397271
]
SuYan edited comment on SPARK-15815 at 7/28/16 8:50 AM:
Current temp solution is
[
https://issues.apache.org/jira/browse/SPARK-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365543#comment-15365543
]
SuYan commented on SPARK-3630:
--
Maybe the reason was that snappy 1.0.4.1 does not support concatenation?
Because the code