[ https://issues.apache.org/jira/browse/SPARK-26058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-26058:
------------------------------
       Priority: Minor  (was: Major)
    Description: 
In order to make the bug more evident, change the log4j configuration to use this pattern instead of the default:
{code}
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %C: %m%n
{code}

The logging class recorded in the log is:
{code}
INFO org.apache.spark.internal.Logging$class
{code}
instead of the actual logging class.

Sample output of the logs after applying the above log4j configuration change:
{code}
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: Stopped Spark web UI at http://9.234.206.241:4040
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: MapOutputTrackerMasterEndpoint stopped!
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: MemoryStore cleared
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: BlockManager stopped
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: BlockManagerMaster stopped
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: OutputCommitCoordinator stopped!
18/11/14 13:44:48 INFO org.apache.spark.internal.Logging$class: Successfully stopped SparkContext
{code}
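
The behaviour is easy to reproduce outside Spark with a simplified stand-in for the trait. The sketch below is illustrative only; Spark's actual org.apache.spark.internal.Logging is more elaborate, and the Logging$class frame is an artifact of how Scala 2.11 compiles trait bodies.
{code}
import org.slf4j.{Logger, LoggerFactory}

// Simplified stand-in for org.apache.spark.internal.Logging.
trait Logging {
  protected lazy val log: Logger = LoggerFactory.getLogger(getClass.getName)

  // The Logger is invoked here, inside the trait. With the %C pattern,
  // log4j resolves the "caller" by walking the stack to the frame that
  // invoked the Logger, and on Scala 2.11 that frame belongs to the
  // synthetic Logging$class, not to the class that mixed the trait in.
  protected def logInfo(msg: => String): Unit = {
    if (log.isInfoEnabled) log.info(msg)
  }
}

class BlockManagerLike extends Logging {
  // %C prints Logging$class here, even though the logger name is correct.
  def stop(): Unit = logInfo("BlockManager stopped")
}
{code}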


This happens because the actual logging call is made inside the Logging trait, and the trait is therefore picked up as the logging class for the message. It can be corrected either by using the `log` variable directly instead of the delegating logInfo-style methods, or, if we do not want to give up the theoretical performance benefit of pre-checking logXYZ.isEnabled, by using a Scala macro to inject those checks at the call site; both options are sketched below. The latter has the disadvantage that wrong line number information may be produced during debugging.
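
A hedged sketch of the first option, reusing the stand-in trait from the sketch above and calling the underlying Logger directly (names are illustrative):
{code}
class BlockManagerLike extends Logging {
  def stop(): Unit = {
    // Calling the Logger directly puts this class in the stack frame that
    // reaches log4j, so %C reports BlockManagerLike. The price is writing
    // the isInfoEnabled guard by hand at every call site.
    if (log.isInfoEnabled) log.info("BlockManager stopped")
  }
}
{code}
And a minimal sketch of the macro option (a Scala 2 blackbox macro, requiring scala-reflect; the macro must be compiled before the code that uses it, and LogMacros is a hypothetical name, not an existing Spark class):
{code}
import scala.language.experimental.macros
import scala.reflect.macros.blackbox
import org.slf4j.Logger

object LogMacros {
  // Expands to `if (log.isInfoEnabled) log.info(msg)` at the call site,
  // so the guarded Logger call runs inside the caller's own class and %C
  // resolves correctly, while keeping the isEnabled pre-check.
  def logInfo(log: Logger, msg: String): Unit = macro logInfoImpl

  def logInfoImpl(c: blackbox.Context)(log: c.Tree, msg: c.Tree): c.Tree = {
    import c.universe._
    q"if ($log.isInfoEnabled) $log.info($msg)"
  }
}
{code}
Because the expansion happens at compile time, positions inside the generated tree can map back to the macro rather than the call site, which is the line number caveat mentioned above.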

     Issue Type: Improvement  (was: Bug)

I don't think that's a bug, but if there's a way to get the logging class 
properly without changing all the logging code, OK.

> Incorrect logging class loaded for all the logs.
> ------------------------------------------------
>
>                 Key: SPARK-26058
>                 URL: https://issues.apache.org/jira/browse/SPARK-26058
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.0, 3.0.0
>            Reporter: Prashant Sharma
>            Priority: Minor
>


