Deadlock between Spark logging and WildFly logging

2014-11-28 Thread Charles
We create a Spark context in an application running inside a WildFly container.
When the Spark context is created, we see the following entries in the WildFly
log. After log4j-defaults.properties is loaded, every entry from Spark is
printed out twice, and after running for a while we start to see a deadlock
between the Spark logging thread and the WildFly logging thread.

Can I control the Spark logging in the driver application? How can I turn
it off in the driver application? How can I control the level of Spark logs
in the driver application?
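Since the log shows Spark falling back to its bundled org/apache/spark/log4j-defaults.properties, one way to control the driver's Spark log level is to put your own log4j.properties on the driver's classpath. A minimal sketch of such a file, modeled on Spark's own defaults (whether WildFly's jboss-logmanager actually honors it depends on your module/classloader setup, so treat this as an assumption to verify):

```properties
# Hypothetical log4j.properties for the driver classpath (log4j 1.2 syntax,
# which Spark 1.x uses). Overrides Spark's bundled log4j-defaults.properties.
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Quiet Spark specifically; raise or lower as needed.
log4j.logger.org.apache.spark=WARN
```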

2014-11-27 14:39:26,719 INFO  [akka.event.slf4j.Slf4jLogger]
(spark-akka.actor.default-dispatcher-4) Slf4jLogger started
2014-11-27 14:39:26,917 INFO  [Remoting]
(spark-akka.actor.default-dispatcher-2) Starting remoting
2014-11-27 14:39:27,719 INFO  [Remoting]
(spark-akka.actor.default-dispatcher-2) Remoting started; listening on
addresses :[akka.tcp://spark@172.32.1.12:43918]
2014-11-27 14:39:27,733 INFO  [Remoting]
(spark-akka.actor.default-dispatcher-2) Remoting now listens on addresses:
[akka.tcp://spark@172.32.1.12:43918]
2014-11-27 14:39:27,892 INFO  [org.apache.spark.SparkEnv] (MSC service
thread 1-16) Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
2014-11-27 14:39:27,895 ERROR [stderr] (MSC service thread 1-16) 14/11/27
14:39:27 INFO SparkEnv: Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
2014-11-27 14:39:27,896 INFO  [org.apache.spark.SparkEnv] (MSC service
thread 1-16) Registering BlockManagerMaster
2014-11-27 14:39:27,896 ERROR [stderr] (MSC service thread 1-16) 14/11/27
14:39:27 INFO SparkEnv: Registering BlockManagerMaster
2014-11-27 14:39:28,041 INFO  [org.apache.spark.storage.DiskBlockManager]
(MSC service thread 1-16) Created local directory at
/tmp/spark-local-20141127143928-d33c
2014-11-27 14:39:28,041 ERROR [stderr] (MSC service thread 1-16) 14/11/27
14:39:28 INFO DiskBlockManager: Created local directory at
/tmp/spark-local-20141127143928-d33c
2014-11-27 14:39:28,055 INFO  [org.apache.spark.storage.MemoryStore] (MSC
service thread 1-16) MemoryStore started with capacity 4.3 GB.
2014-11-27 14:39:28,055 ERROR [stderr] (MSC service thread 1-16) 14/11/27
14:39:28 INFO MemoryStore: MemoryStore started with capacity 4.3 GB.
2014-11-27 14:39:28,117 INFO  [org.apache.spark.network.ConnectionManager]
(MSC service thread 1-16) Bound socket to port 34018 with id =
ConnectionManagerId(ip-172-32-1-12,34018)
2014-11-27 14:39:28,118 ERROR [stderr] (MSC service thread 1-16) 14/11/27
14:39:28 INFO ConnectionManager: Bound socket to port 34018 with id =
ConnectionManagerId(ip-172-32-1-12,34018)
2014-11-27 14:39:28,162 INFO  [org.apache.spark.storage.BlockManagerMaster]
(MSC service thread 1-16) Trying to register BlockManager
2014-11-27 14:39:28,163 ERROR [stderr] (MSC service thread 1-16) 14/11/27
14:39:28 INFO BlockManagerMaster: Trying to register BlockManager
2014-11-27 14:39:28,181 INFO 
[org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo]
(spark-akka.actor.default-dispatcher-3) Registering block manager
ip-172-32-1-12:34018 with 4.3 GB RAM
2014-11-27 14:39:28,185 ERROR [stderr]
(spark-akka.actor.default-dispatcher-3) 14/11/27 14:39:28 INFO
BlockManagerMasterActor$BlockManagerInfo: Registering block manager
ip-172-32-1-12:34018 with 4.3 GB RAM



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Deadlock-between-spark-logging-and-wildfly-logging-tp20009.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Deadlock between Spark logging and WildFly logging

2014-11-28 Thread Sean Owen
Are you sure it's a deadlock? Print the thread dump (from kill -QUIT) of
the thread(s) that are deadlocked, I suppose, to show where the issue
is. It seems unlikely that a logging thread would be holding locks
that the app uses.
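As an aside, the same check can be done programmatically from inside the JVM instead of via kill -QUIT; a minimal sketch using the JDK's ThreadMXBean (the class name DeadlockCheck is mine, not part of Spark or WildFly):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    // Returns the ids of threads in a monitor/ownable-synchronizer deadlock,
    // or null if none were found.
    static long[] findDeadlocks() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.findDeadlockedThreads();
    }

    public static void main(String[] args) {
        long[] ids = findDeadlocks();
        if (ids == null) {
            System.out.println("No deadlocked threads found.");
        } else {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            // Full stack depth, so the report shows which lock each thread
            // holds and which one it is waiting for.
            for (ThreadInfo info : threads.getThreadInfo(ids, Integer.MAX_VALUE)) {
                System.out.println(info);
            }
        }
    }
}
```

This could be triggered from a servlet or JMX bean inside the container when the hang occurs.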

On Fri, Nov 28, 2014 at 4:01 PM, Charles charles...@cenx.com wrote:
 We create spark context in an application running inside wildfly container.
 When spark context is created, we see following entires in the wildfly log.
 After the log4j-default.properties is loaded, every entry from spark is
 printed out twice. And after running for a while, we start to see deadlock
 between spark logging thread and wildfly logging thread.

  Can I control the spark logging in the driver application? How can I turn
 it off in the driver application? How can I control the level of spark logs
 in the driver application?




Re: Deadlock between Spark logging and WildFly logging

2014-11-28 Thread Charles
Here you go. 

Result resolver thread-3 - Thread t@35654
   java.lang.Thread.State: BLOCKED
    at java.io.PrintStream.flush(PrintStream.java:335)
    - waiting to lock 104f7200 (a java.io.PrintStream) owned by null_Worker-1 t@1022
    at org.jboss.stdio.StdioContext$DelegatingPrintStream.flush(StdioContext.java:216)
    at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
    at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
    - locked 13e0275f (a java.io.OutputStreamWriter)
    at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
    at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
    at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
    at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
    - locked 1af95e38 (a org.apache.log4j.ConsoleAppender)
    at org.apache.log4j.JBossAppenderHandler.doPublish(JBossAppenderHandler.java:42)
    at org.jboss.logmanager.ExtHandler.publish(ExtHandler.java:79)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:296)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:304)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:304)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:304)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:304)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:304)
    at org.jboss.logmanager.Logger.logRaw(Logger.java:721)
    at org.slf4j.impl.Slf4jLogger.log(Slf4jLogger.java:326)
    at org.slf4j.impl.Slf4jLogger.log(Slf4jLogger.java:320)
    at org.slf4j.impl.Slf4jLogger.info(Slf4jLogger.java:180)
    at org.apache.spark.Logging$class.logInfo(Logging.scala:50)


null_Worker-1 - Thread t@1022
   java.lang.Thread.State: BLOCKED
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
    - waiting to lock 1af95e38 (a org.apache.log4j.ConsoleAppender) owned by Result resolver thread-3 t@35654
    at org.apache.log4j.JBossAppenderHandler.doPublish(JBossAppenderHandler.java:42)
    at org.jboss.logmanager.ExtHandler.publish(ExtHandler.java:79)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:296)
    at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:304)
    at org.jboss.logmanager.Logger.logRaw(Logger.java:721)
    at org.jboss.logmanager.Logger.log(Logger.java:506)
    at org.jboss.stdio.AbstractLoggingWriter.write(AbstractLoggingWriter.java:71)
    - locked 2a11d902 (a java.lang.StringBuilder)
    at org.jboss.stdio.WriterOutputStream.finish(WriterOutputStream.java:143)
    at org.jboss.stdio.WriterOutputStream.flush(WriterOutputStream.java:164)
    - locked 2c765985 (a sun.nio.cs.US_ASCII$Decoder)
    at java.io.PrintStream.write(PrintStream.java:482)
    - locked 104f7200 (a java.io.PrintStream)
    at org.jboss.stdio.StdioContext$DelegatingPrintStream.write(StdioContext.java:264)
    at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
    at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
    at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
    at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
    - locked 15f65ea5 (a java.io.OutputStreamWriter)
    at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
    at java.io.PrintWriter.flush(PrintWriter.java:320)
    - locked 15f65ea5 (a java.io.OutputStreamWriter)
    at clojure.core$flush.invoke(core.clj:3429)
    at taoensso.timbre$str_println.doInvoke(timbre.clj:15)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at taoensso.timbre$fn__3179.invoke(timbre.clj:170)
    at clojure.core$juxt$fn__4209.invoke(core.clj:2433)
    at taoensso.timbre$wrap_appender_juxt$fn__3244$fn__3248.invoke(timbre.clj:297)
    at taoensso.timbre$wrap_appender_juxt$fn__3229$fn__3231.invoke(timbre.clj:319)
    at taoensso.timbre$send_to_appenders_BANG_.doInvoke(timbre.clj:398)
    at clojure.lang.RestFn.invoke(RestFn.java:866)
    at cenx.levski.performance_exception_calculation$scheduled_exception_calculation.doInvoke(performance_exception_calculation.clj:207)

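The dump shows a classic lock-order inversion: "Result resolver thread-3" holds the ConsoleAppender monitor (1af95e38) and waits for the PrintStream monitor (104f7200), while "null_Worker-1" holds the PrintStream monitor via JBoss's stdio wrapper and waits for the ConsoleAppender monitor. A minimal sketch of that inversion (all names are mine, and tryLock with a timeout is used here instead of plain monitors so the demo does not hang):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockInversionDemo {
    // Stand-ins for the two monitors in the dump: the ConsoleAppender lock
    // (taken by log4j's synchronized doAppend) and the PrintStream lock
    // (taken when stdout is flushed).
    static final ReentrantLock appenderLock = new ReentrantLock();
    static final ReentrantLock streamLock = new ReentrantLock();

    // Runs the two threads and reports whether each managed to take the
    // second lock. With this lock ordering, neither can: {false, false}.
    static boolean[] race() throws InterruptedException {
        boolean[] gotSecondLock = new boolean[2];
        CountDownLatch bothHoldFirst = new CountDownLatch(2); // both first locks held
        CountDownLatch bothTried = new CountDownLatch(2);     // both tryLocks finished

        Thread resolver = new Thread(() -> {          // like "Result resolver thread-3"
            appenderLock.lock();                      // holds the appender...
            try {
                bothHoldFirst.countDown();
                bothHoldFirst.await();
                // ...then needs the stream, like PrintStream.flush in the dump
                gotSecondLock[0] = streamLock.tryLock(100, TimeUnit.MILLISECONDS);
                if (gotSecondLock[0]) streamLock.unlock();
                bothTried.countDown();
                bothTried.await();
            } catch (InterruptedException ignored) {
            } finally {
                appenderLock.unlock();
            }
        });
        Thread worker = new Thread(() -> {            // like "null_Worker-1"
            streamLock.lock();                        // holds the stream...
            try {
                bothHoldFirst.countDown();
                bothHoldFirst.await();
                // ...then needs the appender, like doAppend in the dump
                gotSecondLock[1] = appenderLock.tryLock(100, TimeUnit.MILLISECONDS);
                if (gotSecondLock[1]) appenderLock.unlock();
                bothTried.countDown();
                bothTried.await();
            } catch (InterruptedException ignored) {
            } finally {
                streamLock.unlock();
            }
        });
        resolver.start(); worker.start();
        resolver.join(); worker.join();
        return gotSecondLock;
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = race();
        System.out.println("resolver got stream: " + r[0] + ", worker got appender: " + r[1]);
    }
}
```

The real fix is to break the cycle, e.g. by not routing Spark's stderr back through a log4j appender that the container's logging also uses.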


--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Deadlock-between-spark-logging-and-wildfly-logging-tp20009p20013.html