The solution:
Edit /opt/spark-0.9.0-incubating-bin-hadoop2/conf/log4j.properties and change
the root log level from INFO to WARN. Done!
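
If conf/log4j.properties doesn't exist yet, the distribution ships a
log4j.properties.template you can copy. A minimal sketch of the change,
assuming the stock console appender from that template:

    # Log to the console at WARN instead of INFO to silence the periodic status chatter
    log4j.rootCategory=WARN, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n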

Refer to:
https://github.com/amplab-extras/SparkR-pkg/blob/master/pkg/src/src/main/resources/log4j.properties#L8
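
If you'd rather not touch the conf file, a hedged alternative (not from the
original thread) is to raise the level from inside spark-shell via the log4j
1.x API; this only affects the current session:

    import org.apache.log4j.{Level, Logger}

    // Quiet Spark's periodic INFO messages (MetadataCleaner, BlockManager, DAGScheduler, ...)
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    // Raise the "org" logger too if other libraries are also noisy
    Logger.getLogger("org").setLevel(Level.WARN)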




eduardocalfaia wrote
> Have you already tried in conf/log4j.properties?
> log4j.rootCategory=OFF
> 
> On 4/3/14, 13:46, weida xu wrote:
>> Hi, all
>>
>> When I start Spark in the shell, it automatically outputs some system 
>> info every minute, see below. Can I stop or block this output? I tried 
>> the ":silent" command, but the automatic output remains.
>>
>> 14/04/03 19:34:30 INFO MetadataCleaner: Ran metadata cleaner for 
>> SHUFFLE_BLOCK_MANAGER
>> 14/04/03 19:34:30 INFO BlockManager: Dropping non broadcast blocks 
>> older than 1396524270698
>> 14/04/03 19:34:30 INFO MetadataCleaner: Ran metadata cleaner for 
>> BLOCK_MANAGER
>> 14/04/03 19:34:30 INFO BlockManager: Dropping broadcast blocks older 
>> than 1396524270701
>> 14/04/03 19:34:30 INFO MetadataCleaner: Ran metadata cleaner for 
>> BROADCAST_VARS
>> 14/04/03 19:34:30 INFO MetadataCleaner: Ran metadata cleaner for 
>> HTTP_BROADCAST
>> 14/04/03 19:34:30 INFO MetadataCleaner: Ran metadata cleaner for 
>> MAP_OUTPUT_TRACKER
>> 14/04/03 19:34:31 INFO MetadataCleaner: Ran metadata cleaner for 
>> SPARK_CONTEXT
>> 14/04/03 19:34:31 INFO DAGScheduler: shuffleToMapStage 0 --> 0
>> 14/04/03 19:34:31 INFO DAGScheduler: stageIdToStage 0 --> 0
>> 14/04/03 19:34:31 INFO DAGScheduler: stageIdToJobIds 0 --> 0
>> 14/04/03 19:34:31 INFO DAGScheduler: pendingTasks 0 --> 0
>> 14/04/03 19:34:31 INFO DAGScheduler: jobIdToStageIds 0 --> 0
>> 14/04/03 19:34:31 INFO DAGScheduler: stageToInfos 0 --> 0
>> 14/04/03 19:34:31 INFO MetadataCleaner: Ran metadata cleaner for 
>> DAG_SCHEDULER
>>
>>
>>
> 
> 




