[ https://issues.apache.org/jira/browse/HIVE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941350#comment-15941350 ]
Xuefu Zhang commented on HIVE-13517:
------------------------------------

[~stakiar], thanks for working on this. The patch looks good to me. However, I'm a little concerned about its usability: as an end user, I would need to create a dedicated log4j file just to get the thread-id logged. Compared to directly modifying the log4j file in the Spark installation, I'm not sure of the advantage here. Instead, I'm wondering if we should change the default log4j file in Spark so that this works out of the box. I'd think the thread-id is useful across all Spark applications. Thoughts?

> Hive logs in Spark Executor and Driver should show thread-id.
> -------------------------------------------------------------
>
>                 Key: HIVE-13517
>                 URL: https://issues.apache.org/jira/browse/HIVE-13517
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1, 2.0.0
>            Reporter: Szehon Ho
>            Assignee: Sahil Takiar
>         Attachments: executor-driver-log.PNG, HIVE-13517.1.patch, HIVE-13517.2.patch
>
> In Spark, there might be more than one task running in one executor.
> Similarly, there may be more than one thread running in the Driver.
> This makes debugging through the logs a nightmare. It would be great if there
> could be thread-ids in the logs.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
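For context, the change being debated boils down to including the thread name in the log4j conversion pattern. A minimal sketch of what such an override in Spark's conf/log4j.properties might look like (log4j 1.x syntax; the `%t` specifier prints the thread name, the rest mirrors a typical Spark console appender and is an assumption, not the exact default):

```properties
# Sketch only: console appender with thread name ([%t]) added to the pattern.
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
# %t emits the thread name, so concurrent executor tasks / driver threads
# can be told apart in the log output.
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p [%t] %c{1}: %m%n
```

With a pattern like this in the default Spark log4j file, every application would get thread-ids in its logs without a dedicated per-job configuration, which is the out-of-the-box behavior suggested above.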