[ https://issues.apache.org/jira/browse/SPARK-28005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16861410#comment-16861410 ]

Imran Rashid commented on SPARK-28005:
--------------------------------------

cc [~cltlfcjin]

> SparkRackResolver should not log for resolving empty list
> ---------------------------------------------------------
>
>                 Key: SPARK-28005
>                 URL: https://issues.apache.org/jira/browse/SPARK-28005
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 3.0.0
>            Reporter: Imran Rashid
>            Priority: Major
>
> After SPARK-13704, {{SparkRackResolver}} generates an INFO message every 
> time it is called with an empty list of hosts:
> https://github.com/apache/spark/blob/master/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/SparkRackResolver.scala#L73-L76
> That actually happens every 1s when there are no active executors, because of 
> the repeated offers that happen as part of delay scheduling:
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L134-L139
> While this is relatively benign, it's pretty annoying to log at INFO level 
> every second.
> This is easy to reproduce -- in spark-shell, with dynamic allocation, set the 
> log level to INFO and the messages appear every second.  Then run something 
> and the messages stop.  After the executors time out, the messages reappear.
> {noformat}
> scala> :paste
> // Entering paste mode (ctrl-D to finish)
> sc.setLogLevel("info")
> Thread.sleep(5000)
> sc.parallelize(1 to 10).count()
> // Exiting paste mode, now interpreting.
> 19/06/11 12:43:40 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:43:41 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:43:42 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:43:43 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:43:44 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:43:45 INFO spark.SparkContext: Starting job: count at <pastie>:28
> 19/06/11 12:43:45 INFO scheduler.DAGScheduler: Got job 0 (count at 
> <pastie>:28) with 2 output partitions
> 19/06/11 12:43:45 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 
> (count at <pastie>:28)
> ...
> 19/06/11 12:43:54 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose 
> tasks have all completed, from pool 
> 19/06/11 12:43:54 INFO scheduler.DAGScheduler: ResultStage 0 (count at 
> <pastie>:28) finished in 9.548 s
> 19/06/11 12:43:54 INFO scheduler.DAGScheduler: Job 0 finished: count at 
> <pastie>:28, took 9.613049 s
> res2: Long = 10                                                               
>   
> scala> 
> ...
> 19/06/11 12:44:56 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:44:57 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:44:58 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> 19/06/11 12:44:59 INFO yarn.SparkRackResolver: Got an error when resolving 
> hostNames. Falling back to /default-rack for all
> ...
> {noformat}
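> A minimal sketch of one possible fix (hypothetical names, not the actual 
> Spark code): guard the resolver so an empty host list returns immediately, 
> before the fallback log line, so the 1-second no-executor offer loop stays 
> quiet. The {{println}} stands in for {{logInfo}}, and the simulated failure 
> stands in for the real Hadoop rack-resolution call:
> {code:scala}
> def resolveRacks(hostNames: Seq[String]): Seq[String] = {
>   // Guard first: an empty list has nothing to resolve, so return
>   // silently instead of logging the fallback message on every call.
>   if (hostNames.isEmpty) {
>     Nil
>   } else {
>     try {
>       // The real implementation would invoke the rack resolver here;
>       // this sketch just simulates a resolution failure.
>       throw new RuntimeException("resolver unavailable")
>     } catch {
>       case _: Exception =>
>         // Log once per non-empty batch, as before.
>         println("Got an error when resolving hostNames. " +
>           "Falling back to /default-rack for all")
>         hostNames.map(_ => "/default-rack")
>     }
>   }
> }
> {code}
> With this guard, the repeated offers made while there are no active 
> executors never reach the logging path at all.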



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
