[ https://issues.apache.org/jira/browse/SPARK-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314140#comment-14314140 ]

Twinkle Sachdeva commented on SPARK-4705:
-----------------------------------------

Hi,

So here is the final approach I have taken regarding the UI.

If no application has its events logged per attempt, the previous UI will 
continue to appear. As soon as there is at least one application whose events 
have been logged per attempt (even if it has only one attempt), the UI will 
switch to the per-attempt UI (please see the attachment).

By "logging per attempt" I mean the changed folder structure, where each 
attempt's events go into their own directory.
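
For illustration only, a minimal Scala sketch of how such a per-attempt 
directory name could be derived; buildEventLogDir and the attempt-ID suffix 
convention are assumptions for this example, not the actual code in the patch:

{noformat}
// Hypothetical helper: derive a per-attempt event log directory name.
// The "_<attemptId>" suffix shown here is an assumption, not Spark's actual layout.
def buildEventLogDir(baseDir: String, appId: String, attemptId: Option[String]): String = {
  val name = attemptId match {
    case Some(attempt) => s"${appId}_${attempt}"  // e.g. application_1417554558066_0003_1
    case None          => appId                   // legacy layout without an attempt suffix
  }
  s"$baseDir/$name"
}
{noformat}

With a layout like this, a second attempt of the same application writes to 
its own directory instead of colliding with the logs of the first.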

Please note that in the UI without attempt-specific entries, the anchor was on 
the application ID value. In the new UI (UI - 2), the anchor will appear on 
the attempt ID.

Thanks,

> Driver retries in yarn-cluster mode always fail if event logging is enabled
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-4705
>                 URL: https://issues.apache.org/jira/browse/SPARK-4705
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.2.0
>            Reporter: Marcelo Vanzin
>         Attachments: Screen Shot 2015-02-10 at 6.27.49 pm.png, multi-attempts 
> with no attempt based UI.png
>
>
> yarn-cluster mode will retry running the driver in certain failure modes. If 
> event logging is enabled, this will most probably fail, because:
> {noformat}
> Exception in thread "Driver" java.io.IOException: Log directory hdfs://vanzin-krb-1.vpc.cloudera.com:8020/user/spark/applicationHistory/application_1417554558066_0003 already exists!
>         at org.apache.spark.util.FileLogger.createLogDir(FileLogger.scala:129)
>         at org.apache.spark.util.FileLogger.start(FileLogger.scala:115)
>         at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:74)
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:353)
> {noformat}
> The event log path should be "more unique", or perhaps retries of the same app 
> should clean up the old logs first.
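
As a rough sketch of the "clean up the old logs first" option (the helper 
below is hypothetical; only the Hadoop FileSystem/Path calls are real API), a 
retry could remove the stale directory before the event logging listener 
starts:

{noformat}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical cleanup step for a retried attempt: delete the directory left
// behind by the previous attempt so the listener can recreate it cleanly.
def prepareEventLogDir(logDirUri: String): Unit = {
  val path = new Path(logDirUri)
  val fs = path.getFileSystem(new Configuration())
  if (fs.exists(path)) {
    fs.delete(path, true)  // recursive delete of the stale attempt's logs
  }
}
{noformat}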


