[
https://issues.apache.org/jira/browse/SPARK-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115634#comment-17115634
]
liupengcheng commented on SPARK-21172:
--
[~fanyunbojerry] I think you can check your
liupengcheng created SPARK-31202:
Summary: Improve SizeEstimator for AppendOnlyMap
Key: SPARK-31202
URL: https://issues.apache.org/jira/browse/SPARK-31202
Project: Spark
Issue Type: Improveme
liupengcheng created SPARK-31107:
Summary: Extend FairScheduler to support pool level resource
isolation
Key: SPARK-31107
URL: https://issues.apache.org/jira/browse/SPARK-31107
Project: Spark
liupengcheng created SPARK-31105:
Summary: Respect sql execution id when scheduling taskSets
Key: SPARK-31105
URL: https://issues.apache.org/jira/browse/SPARK-31105
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-30849:
-
Issue Type: Bug (was: Improvement)
> Application failed due to failed to get MapStatuses broadc
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-30849:
-
Description:
Currently, we encountered an issue in Spark 2.1. The exception is as follows:
{nof
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038197#comment-17038197
]
liupengcheng edited comment on SPARK-30849 at 2/17/20 9:42 AM:
---
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038197#comment-17038197
]
liupengcheng commented on SPARK-30849:
--
I found an issue related to this
[SPARK-5594
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-30849:
-
Description:
Currently, we encountered an issue in Spark 2.1. The exception is as follows:
{nofo
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-30849:
-
Attachment: image-2020-02-16-11-17-32-103.png
> Application failed due to failed to get MapStatu
[
https://issues.apache.org/jira/browse/SPARK-30849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-30849:
-
Attachment: image-2020-02-16-11-13-18-195.png
> Application failed due to failed to get MapStatu
liupengcheng created SPARK-30849:
Summary: Application failed due to failed to get MapStatuses
broadcast
Key: SPARK-30849
URL: https://issues.apache.org/jira/browse/SPARK-30849
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-30712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17032076#comment-17032076
]
liupengcheng commented on SPARK-30712:
--
[~hyukjin.kwon] SPARK-24914 seems already c
[
https://issues.apache.org/jira/browse/SPARK-30394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-30394:
-
Description:
Currently, if `spark.sql.statistics.fallBackToHdfs` is enabled, then Spark will
sc
[
https://issues.apache.org/jira/browse/SPARK-30712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17032066#comment-17032066
]
liupengcheng commented on SPARK-30712:
--
OK, thanks! [~hyukjin.kwon].
> Estimate si
[
https://issues.apache.org/jira/browse/SPARK-30712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17031644#comment-17031644
]
liupengcheng commented on SPARK-30712:
--
[~hyukjin.kwon] We use the rowCount info in
[
https://issues.apache.org/jira/browse/SPARK-30712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17031636#comment-17031636
]
liupengcheng commented on SPARK-30712:
--
[~hyukjin.kwon] Yes, in our customized Spark
liupengcheng created SPARK-30713:
Summary: Respect mapOutputSize in memory in adaptive execution
Key: SPARK-30713
URL: https://issues.apache.org/jira/browse/SPARK-30713
Project: Spark
Issue T
liupengcheng created SPARK-30712:
Summary: Estimate sizeInBytes from file metadata for parquet files
Key: SPARK-30712
URL: https://issues.apache.org/jira/browse/SPARK-30712
Project: Spark
Iss
liupengcheng created SPARK-30470:
Summary: Uncache table in tempViews if needed on session closed
Key: SPARK-30470
URL: https://issues.apache.org/jira/browse/SPARK-30470
Project: Spark
Issue
liupengcheng created SPARK-30394:
Summary: Skip collecting stats in DetermineTableStats rule when
hive table is convertible to datasource tables
Key: SPARK-30394
URL: https://issues.apache.org/jira/browse/SPARK-3
liupengcheng created SPARK-30346:
Summary: Improve logging when events dropped
Key: SPARK-30346
URL: https://issues.apache.org/jira/browse/SPARK-30346
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-27802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875945#comment-16875945
]
liupengcheng edited comment on SPARK-27802 at 7/2/19 6:23 AM:
liupengcheng created SPARK-28220:
Summary: join foldable condition not pushed down when parent
filter is totally pushed down
Key: SPARK-28220
URL: https://issues.apache.org/jira/browse/SPARK-28220
Pro
[
https://issues.apache.org/jira/browse/SPARK-27802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875945#comment-16875945
]
liupengcheng commented on SPARK-27802:
--
[~shahid] yes, but I checked master branch,
[
https://issues.apache.org/jira/browse/SPARK-28195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-28195:
-
Description:
Currently, we encountered an issue when executing
`InsertIntoDataSourceDirCommand`
liupengcheng created SPARK-28195:
Summary: CheckAnalysis not working for Command and report
misleading error message
Key: SPARK-28195
URL: https://issues.apache.org/jira/browse/SPARK-28195
Project: Sp
[
https://issues.apache.org/jira/browse/SPARK-27802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-27802:
-
Description:
Recently, we hit this issue when testing Spark 2.3. It reported the following
error m
liupengcheng created SPARK-27802:
Summary: SparkUI throws NoSuchElementException when inconsistency
appears between `ExecutorStageSummaryWrapper`s and `ExecutorSummaryWrapper`s
Key: SPARK-27802
URL: https://issues
[
https://issues.apache.org/jira/browse/SPARK-27214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-27214:
-
Description:
Currently, Spark locality wait mechanism is not friendly for large jobs, when
numbe
liupengcheng created SPARK-27214:
Summary: Upgrading locality level when lots of pending tasks have
been waiting more than locality.wait
Key: SPARK-27214
URL: https://issues.apache.org/jira/browse/SPARK-27214
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16777836#comment-16777836
]
liupengcheng commented on SPARK-26927:
--
[~Ngone51]
Let's say we got the following
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Issue Type: Bug (was: Improvement)
> Race condition may cause dynamic allocation not working
>
[
https://issues.apache.org/jira/browse/SPARK-26941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26941:
-
Summary: incorrect computation of maxNumExecutorFailures in
ApplicationMaster for streaming (w
[
https://issues.apache.org/jira/browse/SPARK-26941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26941:
-
Component/s: YARN
Summary: maxNumExecutorFailures should be computed with
spark.streamin
liupengcheng created SPARK-26941:
Summary: maxNumExecutorFailures should be computed with
spark.streaming.dynamicAllocation.maxExecutors in streaming
Key: SPARK-26941
URL: https://issues.apache.org/jira/browse/SP
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Description:
Recently, we caught a bug that caused our production Spark thriftserver to hang:
Ther
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Attachment: Selection_046.jpg
> Race condition may cause dynamic allocation not working
> --
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Attachment: Selection_045.jpg
> Race condition may cause dynamic allocation not working
> --
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Attachment: Selection_043.jpg
> Race condition may cause dynamic allocation not working
> --
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Attachment: Selection_044.jpg
> Race condition may cause dynamic allocation not working
> --
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26927:
-
Attachment: Selection_042.jpg
> Race condition may cause dynamic allocation not working
> --
liupengcheng created SPARK-26927:
Summary: Race condition may cause dynamic allocation not working
Key: SPARK-26927
URL: https://issues.apache.org/jira/browse/SPARK-26927
Project: Spark
Issue
liupengcheng created SPARK-26892:
Summary: saveAsTextFile throws NullPointerException when null row
present
Key: SPARK-26892
URL: https://issues.apache.org/jira/browse/SPARK-26892
Project: Spark
liupengcheng created SPARK-26877:
Summary: Support user staging directory in yarn mode
Key: SPARK-26877
URL: https://issues.apache.org/jira/browse/SPARK-26877
Project: Spark
Issue Type: Impro
[
https://issues.apache.org/jira/browse/SPARK-26877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26877:
-
Summary: Support user-level app staging directory in yarn mode when
spark.yarn.stagingDir specif
[
https://issues.apache.org/jira/browse/SPARK-26712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26712:
-
Summary: Single disk broken causing YarnShuffleService not available (was:
Disk broken causing
[
https://issues.apache.org/jira/browse/SPARK-26768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26768:
-
Description:
Recently, when I was reading some code of `BlockManager.getBlockData`, I found
tha
[
https://issues.apache.org/jira/browse/SPARK-26768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26768:
-
Attachment: Selection_037.jpg
> Remove useless code in BlockManager
> --
liupengcheng created SPARK-26768:
Summary: Remove useless code in BlockManager
Key: SPARK-26768
URL: https://issues.apache.org/jira/browse/SPARK-26768
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-26750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26750:
-
Summary: Estimate memory overhead should take multi-cores into account
(was: Estimate memory
liupengcheng created SPARK-26750:
Summary: Estimate memory overhead with multi-cores
Key: SPARK-26750
URL: https://issues.apache.org/jira/browse/SPARK-26750
Project: Spark
Issue Type: Improve
[
https://issues.apache.org/jira/browse/SPARK-26689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26689:
-
Summary: Single disk broken causing broadcast failure (was: Disk broken
causing broadcast failu
[
https://issues.apache.org/jira/browse/SPARK-26689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752961#comment-16752961
]
liupengcheng commented on SPARK-26689:
--
[~tgraves] In production environment, yarn.
[
https://issues.apache.org/jira/browse/SPARK-26728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26728:
-
Summary: Make rdd.unpersist blocking configurable (was: Make rdd.unpersist
and broadcast.unpers
liupengcheng created SPARK-26728:
Summary: Make rdd.unpersist and broadcast.unpersist blocking
configurable
Key: SPARK-26728
URL: https://issues.apache.org/jira/browse/SPARK-26728
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-26712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26712:
-
Description:
Currently, `ExecutorShuffleInfo` can be recovered from file if NM recovery
enabled
[
https://issues.apache.org/jira/browse/SPARK-26689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26689:
-
Summary: Disk broken causing broadcast failure (was: Bad disk causing
broadcast failure)
> Dis
[
https://issues.apache.org/jira/browse/SPARK-26712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26712:
-
Issue Type: Bug (was: Improvement)
> Disk broken causing YarnShuffleService not available
> ---
[
https://issues.apache.org/jira/browse/SPARK-26712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26712:
-
Summary: Disk broken causing YarnShuffleService not available (was: Disk
broken caused NM recov
liupengcheng created SPARK-26712:
Summary: Disk broken caused NM recovery failure causing
YarnShuffleService not available
Key: SPARK-26712
URL: https://issues.apache.org/jira/browse/SPARK-26712
Proje
[
https://issues.apache.org/jira/browse/SPARK-26689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750669#comment-16750669
]
liupengcheng commented on SPARK-26689:
--
[~tgraves] We use yarn as the resource mana
liupengcheng created SPARK-26689:
Summary: Bad disk causing broadcast failure
Key: SPARK-26689
URL: https://issues.apache.org/jira/browse/SPARK-26689
Project: Spark
Issue Type: Bug
liupengcheng created SPARK-26684:
Summary: Add logs when allocating large memory for
PooledByteBufAllocator
Key: SPARK-26684
URL: https://issues.apache.org/jira/browse/SPARK-26684
Project: Spark
liupengcheng created SPARK-26674:
Summary: Consolidate CompositeByteBuf when reading large frame
Key: SPARK-26674
URL: https://issues.apache.org/jira/browse/SPARK-26674
Project: Spark
Issue T
[
https://issues.apache.org/jira/browse/SPARK-26660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26660:
-
Attachment: screenshot-1.png
> Add warning logs for large taskBinary size
>
liupengcheng created SPARK-26660:
Summary: Add warning logs for large taskBinary size
Key: SPARK-26660
URL: https://issues.apache.org/jira/browse/SPARK-26660
Project: Spark
Issue Type: Improv
[
https://issues.apache.org/jira/browse/SPARK-26634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26634:
-
Affects Version/s: (was: 2.4.0)
> OutputCommitCoordinator may allow task of FetchFailureStag
[
https://issues.apache.org/jira/browse/SPARK-26634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26634:
-
Description:
In our production Spark cluster, we encountered a case where the task of a retry
stage
liupengcheng created SPARK-26634:
Summary: OutputCommitCoordinator may allow task of
FetchFailureStage commit again
Key: SPARK-26634
URL: https://issues.apache.org/jira/browse/SPARK-26634
Project: Spa
[
https://issues.apache.org/jira/browse/SPARK-26614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26614:
-
Fix Version/s: 2.3.1
2.4.0
> Speculation kill might cause job failure
> -
[
https://issues.apache.org/jira/browse/SPARK-26612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26612:
-
Fix Version/s: 2.3.1
2.4.0
> Speculation kill causing finished stage recomput
[
https://issues.apache.org/jira/browse/SPARK-26614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16743569#comment-16743569
]
liupengcheng commented on SPARK-26614:
--
Already resolved by https://github.com/apac
[
https://issues.apache.org/jira/browse/SPARK-26612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16743571#comment-16743571
]
liupengcheng commented on SPARK-26612:
--
Already resolved by https://github.com/apac
[
https://issues.apache.org/jira/browse/SPARK-26612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng resolved SPARK-26612.
--
Resolution: Fixed
Fix Version/s: 2.2.2
> Speculation kill causing finished stage recomp
[
https://issues.apache.org/jira/browse/SPARK-26614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng resolved SPARK-26614.
--
Resolution: Fixed
Fix Version/s: 2.2.2
> Speculation kill might cause job failure
> ---
[
https://issues.apache.org/jira/browse/SPARK-26530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26530:
-
Summary: Validate heartbeat arguments in HeartbeatReceiver (was: Validate
heartbeat arguments i
liupengcheng created SPARK-26614:
Summary: Speculation kill might cause job failure
Key: SPARK-26614
URL: https://issues.apache.org/jira/browse/SPARK-26614
Project: Spark
Issue Type: Bug
liupengcheng created SPARK-26612:
Summary: Speculation kill causing finished stage recomputed
Key: SPARK-26612
URL: https://issues.apache.org/jira/browse/SPARK-26612
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-26614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26614:
-
Description:
This issue is similar to SPARK-26612.
Some odd exceptions might be thrown in specul
[
https://issues.apache.org/jira/browse/SPARK-26126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng resolved SPARK-26126.
--
Resolution: Not A Problem
> Put scala-library deps into root pom instead of spark-tags module
[
https://issues.apache.org/jira/browse/SPARK-26529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26529:
-
Summary: Add debug logs for confArchive when preparing local resource
(was: Add logs for IOExc
[
https://issues.apache.org/jira/browse/SPARK-26126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26126:
-
Summary: Put scala-library deps into root pom instead of spark-tags module
(was: Should put sca
[
https://issues.apache.org/jira/browse/SPARK-26126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733836#comment-16733836
]
liupengcheng edited comment on SPARK-26126 at 1/4/19 5:41 AM:
[
https://issues.apache.org/jira/browse/SPARK-26126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733836#comment-16733836
]
liupengcheng commented on SPARK-26126:
--
[~hyukjin.kwon] Yes, there is no actual pro
liupengcheng created SPARK-26530:
Summary: Validate heartbeat arguments in SparkSubmitArguments
Key: SPARK-26530
URL: https://issues.apache.org/jira/browse/SPARK-26530
Project: Spark
Issue Ty
liupengcheng created SPARK-26529:
Summary: Add logs for IOException when preparing local resource
Key: SPARK-26529
URL: https://issues.apache.org/jira/browse/SPARK-26529
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-26126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733738#comment-16733738
]
liupengcheng commented on SPARK-26126:
--
[~hyukjin.kwon] it's an issue, it's really
[
https://issues.apache.org/jira/browse/SPARK-26525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liupengcheng updated SPARK-26525:
-
Description:
Currently, Spark would not release ShuffleBlockFetcherIterator until the whole
tas