[
https://issues.apache.org/jira/browse/SPARK-27169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796039#comment-16796039
]
acupple commented on SPARK-27169:
---------------------------------
The full event log is too large to attach.
Log for the unknown stage:
[^stage_3511.log]
> number of active tasks is negative on executors page
> ----------------------------------------------------
>
> Key: SPARK-27169
> URL: https://issues.apache.org/jira/browse/SPARK-27169
> Project: Spark
> Issue Type: Bug
> Components: Web UI
> Affects Versions: 2.3.2
> Reporter: acupple
> Priority: Minor
> Attachments: QQ20190315-102215.png, QQ20190315-102235.png,
> image-2019-03-19-15-17-25-522.png, image-2019-03-19-15-21-03-766.png,
> job_1924.log, stage_3511.log
>
>
> I use Spark to process data in HDFS and HBase. One thread consumes
> messages from a queue and submits them to a fixed-size thread pool (16
> threads) for Spark processing.
> After running for some time, the Web UI shows thousands of active jobs,
> and the number of active tasks on the executors page is negative.
> These jobs have actually already completed according to the driver logs.
>
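For clarity, the submission pattern described above can be sketched as follows. This is a minimal illustration only, not the reporter's actual code: the `process` function is a hypothetical stand-in for the real Spark job, and the queue/pool wiring simply mirrors the "one consumer thread feeding a 16-thread fixed pool" setup from the report.

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def process(msg):
    # Hypothetical placeholder for the real Spark processing; in the
    # reported setup each submitted task triggers a Spark job.
    return f"processed {msg}"

def run_driver(messages, pool_size=16):
    q = queue.Queue()
    pool = ThreadPoolExecutor(max_workers=pool_size)  # 16 fixed-size pool, as in the report
    futures = []

    # Single consumer thread: drain the queue and hand each message to the pool.
    def consume():
        while True:
            msg = q.get()
            if msg is None:  # sentinel: stop consuming
                break
            futures.append(pool.submit(process, msg))

    consumer = threading.Thread(target=consume)
    consumer.start()
    for m in messages:
        q.put(m)
    q.put(None)  # signal the consumer to stop
    consumer.join()

    # Wait for all in-flight tasks to finish before shutting down.
    results = [f.result() for f in futures]
    pool.shutdown()
    return results
```

Under this pattern many concurrent jobs run against one SparkContext, which is the situation in which the stale active-job and negative active-task counts were observed in the UI.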
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)