[jira] [Updated] (SPARK-17802) Lots of "java.lang.ClassNotFoundException: org.apache.hadoop.ipc.CallerContext" In spark logs
[ https://issues.apache.org/jira/browse/SPARK-17802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-17802:
------------------------------
    Assignee: Shuai Lin

> Lots of "java.lang.ClassNotFoundException: org.apache.hadoop.ipc.CallerContext" In spark logs
> ---------------------------------------------------------------------------------------------
>
>          Key: SPARK-17802
>          URL: https://issues.apache.org/jira/browse/SPARK-17802
>      Project: Spark
>   Issue Type: Improvement
>   Components: Spark Core
>     Reporter: Shuai Lin
>     Assignee: Shuai Lin
>     Priority: Minor
>      Fix For: 2.1.0
>
> SPARK-16757 sets the hadoop {{CallerContext}} when calling hadoop/hdfs APIs to make spark applications more diagnosable in hadoop/hdfs logs. However, the {{CallerContext}} class was only added in [hadoop 2.8|https://issues.apache.org/jira/browse/HDFS-9184], which has not been officially released yet. So each time {{utils.CallerContext.setCurrentContext()}} is called (e.g. [when a task is created|https://github.com/apache/spark/blob/b678e46/core/src/main/scala/org/apache/spark/scheduler/Task.scala#L95-L96]), a "java.lang.ClassNotFoundException: org.apache.hadoop.ipc.CallerContext" error is logged, which pollutes the spark logs when there are lots of tasks. We should improve this so it's only logged once.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
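The improvement the issue asks for — logging the {{ClassNotFoundException}} only once instead of once per task — can be sketched as below. This is a hypothetical standalone helper, not Spark's actual implementation (Spark resolves the Hadoop class via reflection in its own utilities); the class and method names here are illustrative only. The key idea is a one-shot flag guarding the warning, so the reflective lookup can fail on every call while the log message appears at most once.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical helper illustrating the "log the failure only once" pattern.
public class CallerContextHelper {

    // Flips to true the first time the warning is emitted; compareAndSet
    // guarantees at most one log line even with many concurrent tasks.
    private static final AtomicBoolean warningLogged = new AtomicBoolean(false);

    /**
     * Tries to load the Hadoop 2.8+ CallerContext class reflectively.
     * Returns true if the class is available; on failure, logs the
     * ClassNotFoundException only on the first call.
     */
    public static boolean setCurrentContext(String context) {
        try {
            // The class only exists on a Hadoop 2.8+ classpath (HDFS-9184).
            Class.forName("org.apache.hadoop.ipc.CallerContext");
            // ... reflectively build and install the caller context here ...
            return true;
        } catch (ClassNotFoundException e) {
            if (warningLogged.compareAndSet(false, true)) {
                System.err.println(
                    "CallerContext not available (requires Hadoop 2.8+): " + e);
            }
            return false;
        }
    }
}
```

With this shape, scheduling thousands of tasks against a pre-2.8 Hadoop classpath produces a single warning line rather than one per task, while callers can still branch on the boolean result.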