Re: Log4J2 NoSQL query

2015-07-21 Thread Ralph Goers
I am not familiar with Logstash or Fluentd, but I am familiar with Apache Flume.

Log4j 2 provides the ability to write directly to SQL databases and to some 
NoSQL databases. Some users prefer to do this, but it carries some risk: what do 
you do if the target destination becomes unavailable? As Remko pointed out, the 
failover appender can be used to mitigate this somewhat. While we would like all 
the Appenders to be able to recover from errors connecting to their 
destinations, I am not sure whether all of them do at this time.

The main advantages you get by using the appenders are simplicity and speed. If 
they can recover from errors then you would have pretty good reliability, 
assuming you have a plan for what to do while the connection is unavailable.

The reason Log4j comes with a Flume Appender is that Flume can provide more 
reliability than you will get with the Appenders out of the box.  That said, 
the Flume Appender has three variations, two of which involve writing log events 
to local disk before they are forwarded on to Flume.  The same technique could 
be used with other Appenders to ensure that the log events are actually 
delivered.  I should note that the current version of Flume supports the 
embedded Flume agent, which can currently only write to a remote Flume using 
Avro. There has been discussion on the Flume mailing list about relaxing this, 
so in the future it may be possible to have the Flume Appender use the embedded 
Flume Agent to send data to any of the NoSQL databases Flume supports.
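
For reference, the persistent variation is selected in the configuration 
roughly like this (a sketch from memory of the FlumeAppender page of the 
manual, so worth double-checking the attribute names; the host, port and data 
directory are placeholders):

<!-- Sketch only: "type" selects the variation; "persistent" writes events to
     dataDir before forwarding them to the listed agents. -->
<Flume name="FlumeForwarder" compress="true" type="persistent" dataDir="./flumeData">
  <Agent host="flume-host-1" port="8800"/>   <!-- placeholder primary agent -->
  <Agent host="flume-host-2" port="8800"/>   <!-- placeholder backup agent -->
  <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>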

Ralph


> On Jul 21, 2015, at 11:01 PM, kusmanjali  wrote:
> 
> 1. Most of the logging frameworks use a log forwarder + queue
> mechanism (Logstash, Fluentd) to store logs into a database. What is the
> advantage of using this over using the Log4j NoSQL appenders to write directly
> to the database?
> 
> 2. Any material/link to get more insight into how Log4j 2 handles the
> database connection and failover? And how can we scale this architecture to
> store logs from multiple servers into a single database?
> 
> Our idea is to build a central logging system just by using Log4j and doing
> away with forwarders and queues.
> 
> -- 
> with regards
> Kusmanjali Jenamoni






Re: FATAL Unable to register shutdown hook because JVM is shutting down.

2015-07-21 Thread Ralph Goers
The currentContext value indicates whether the LoggerContext stored in the 
ThreadLocal should be used. That is usually the wrong choice, which is why the 
value is usually set to false.  Instead, the LoggerContext is located via the 
ClassLoaderContextSelector, which finds the LoggerContext for the ClassLoader of 
the class that owns the Logger.
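
For reference, the difference looks like this (just a sketch; the demo class is 
made up, and the returned type is the log4j-api LoggerContext interface):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.spi.LoggerContext;

public class ContextLookupDemo {
    public static void main(String[] args) {
        // currentContext = false (the usual choice): let the ContextSelector, by
        // default the ClassLoaderContextSelector, pick the context that belongs
        // to this class's ClassLoader.
        LoggerContext selected =
                LogManager.getContext(ContextLookupDemo.class.getClassLoader(), false);

        // currentContext = true: use the LoggerContext bound to the current thread
        // (the ThreadLocal mentioned above), regardless of the ClassLoader.
        LoggerContext threadBound = LogManager.getContext(true);

        System.out.println(selected + " / " + threadBound);
    }
}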

It looks to me like something is trying to log after logging has already shut 
down.  That causes it to retrieve the LoggerContext, which has already been 
stopped, and then try to restart it. Because the LoggerContext is normally 
stopped by the execution of the shutdown hook, it is a very bad sign that the 
shutdown hook is being re-registered.  My guess is that Spark has its own 
shutdown hook and is trying to log from it after our shutdown hook has already 
run.

When Matt wrote this code I believe he allowed for this situation, but I’d 
really have to dig into the code more to figure it out. However, I did find 
http://stackoverflow.com/questions/17400136/how-to-log-within-shutdown-hooks-with-log4j2
which says to implement your own ShutdownCallbackRegistry.
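
Something along those lines would look roughly like this (an untested sketch; 
the class name is made up, it would be selected with 
-Dlog4j.shutdownCallbackRegistry=com.example.StaticShutdownCallbackRegistry, and 
depending on the Log4j version the context factory may also expect the registry 
to implement LifeCycle):

import java.util.Collection;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.logging.log4j.core.util.Cancellable;
import org.apache.logging.log4j.core.util.ShutdownCallbackRegistry;

// Collects shutdown callbacks instead of registering a JVM shutdown hook, so the
// application can run them explicitly at the very end of its own shutdown sequence.
public class StaticShutdownCallbackRegistry implements ShutdownCallbackRegistry {

    private static final Collection<Cancellable> CALLBACKS = new ConcurrentLinkedQueue<>();

    @Override
    public Cancellable addShutdownCallback(final Runnable callback) {
        final Cancellable cancellable = new Cancellable() {
            private volatile boolean cancelled;

            @Override
            public void cancel() {
                cancelled = true;
                CALLBACKS.remove(this);
            }

            @Override
            public void run() {
                if (!cancelled) {
                    callback.run();
                }
            }
        };
        CALLBACKS.add(cancellable);
        return cancellable;
    }

    // Call this from the application once all logging is done.
    public static void invoke() {
        for (final Cancellable callback : CALLBACKS) {
            callback.run();
        }
        CALLBACKS.clear();
    }
}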

It might be a good idea to add the variation of the ShutdownCallbackRegistry 
mentioned in that answer so that it can be configured.

Ralph

> On Jul 21, 2015, at 8:57 PM, Remko Popma  wrote:
> 
> Looks like the context returned by
> AbstractLoggerAdapter.java:102
> is the result of LogManager.getContext(someClass, false);
> 
> The boolean "false" indicates that a different LoggerContext may be created 
> (and started). I suspect this is what causes the problem. 
> 
> Ralph understands this part better than me. Ralph, should the slf4j 
> Log4jLoggerFactory pass "false" here? The default for LogManager seems to be 
> "true"...
> 
> Remko
> 
> Sent from my iPhone
> 
>> On 2015/07/21, at 22:58, Mikael Ståldal  wrote:
>> 
>> AbstractLoggerAdapter.java:102
> 



Re: Log4J2 NoSQL query

2015-07-21 Thread Remko Popma
Hi,

1. I am not familiar with Logstash and Fluentd, but in general I would assume 
that most databases may not be able to keep up with log events during a burst, 
and these mechanisms provide a persisted queue from which your log events can be 
recovered in case the process or box crashes before all events are saved to the 
database. 

If you have a NoSQL database that is as fast as the intermediate storage, then I 
don't see any advantage to the intermediate storage. But again, I'm not familiar 
with these products, so I may be missing something. 

2. Log4j has a failover appender (which must be configured and available when 
Log4j starts up). If appending to the primary appender fails, Log4j will append 
the event to the failover appender. I would encourage you to study the 
documentation and experiment with some common failure scenarios. Note that the 
failover appender never "becomes primary": Log4j first attempts to append every 
event to the primary appender, and only if an exception occurs is the event sent 
to the failover appender. It may be better to rely on the clustering/failover 
capabilities of your database. 
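
As a sketch (in the XML form the manual uses; the appender names here are 
placeholders), the wiring looks roughly like this. Note that the primary 
appender must be configured with ignoreExceptions="false" so that failures 
actually propagate to the Failover appender, and loggers should reference the 
Failover appender rather than the database appender directly:

<!-- Sketch: try the database appender first, fall back to a local file appender. -->
<Failover name="FailoverToFile" primary="MyDatabaseAppender">
  <Failovers>
    <AppenderRef ref="LocalFileBackup"/>
  </Failovers>
</Failover>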

About scalability: this depends on (a) how many transactions per second your 
database can handle, and (b) how many log events per second you expect each 
application to send during bursts, and how many applications you have running 
simultaneously. If one app gets busy, do they all get busy, or is the load 
independent? 

If during peaks the apps generate more transactions than the database can 
handle, then having an intermediate queue may still be valuable. 

Remko

Sent from my iPhone

> On 2015/07/22, at 15:01, kusmanjali  wrote:
> 
> 1. Most of the logging frameworks use a log forwarder + queue
> mechanism (Logstash, Fluentd) to store logs into a database. What is the
> advantage of using this over using the Log4j NoSQL appenders to write directly
> to the database?
> 
> 2. Any material/link to get more insight into how Log4j 2 handles the
> database connection and failover? And how can we scale this architecture to
> store logs from multiple servers into a single database?
> 
> Our idea is to build a central logging system just by using Log4j and doing
> away with forwarders and queues.
> 
> -- 
> with regards
> Kusmanjali Jenamoni




Log4J2 NoSQL query

2015-07-21 Thread kusmanjali
1. Most of the logging frameworks use a log forwarder + queue
mechanism (Logstash, Fluentd) to store logs into a database. What is the
advantage of using this over using the Log4j NoSQL appenders to write directly
to the database?

2. Any material/link to get more insight into how Log4j 2 handles the
database connection and failover? And how can we scale this architecture to
store logs from multiple servers into a single database?

Our idea is to build a central logging system just by using Log4j and doing
away with forwarders and queues.

-- 
with regards
Kusmanjali Jenamoni


Re: FATAL Unable to register shutdown hook because JVM is shutting down.

2015-07-21 Thread Remko Popma
Looks like the context returned by
AbstractLoggerAdapter.java:102
is the result of LogManager.getContext(someClass, false);

The boolean "false" indicates that a different LoggerContext may be created 
(and started). I suspect this is what causes the problem. 

Ralph understands this part better than me. Ralph, should the slf4j 
Log4jLoggerFactory pass "false" here? The default for LogManager seems to be 
"true"...

Remko

Sent from my iPhone

> On 2015/07/21, at 22:58, Mikael Ståldal  wrote:
> 
> AbstractLoggerAdapter.java:102




Re: JDBC Appender : how to have the pattern produce NULL?

2015-07-21 Thread Remko Popma
Have you tried ThreadContext.remove("NUM")?
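
For example, something like this (sketch only; the helper class is made up, and 
whether an empty %X{NUM} then maps to a database NULL for the BIGINT column is 
something to verify):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class NullableNumExample {   // hypothetical helper class
    private static final Logger LOGGER = LogManager.getLogger(NullableNumExample.class);

    static void logRow(long id, Long num) {   // num may be null
        ThreadContext.put("ID", String.valueOf(id));
        if (num != null) {
            ThreadContext.put("NUM", num.toString());
        } else {
            // Clear the key instead of putting "NULL" or "" so that %X{NUM}
            // does not resolve to a literal string for the NUM column.
            ThreadContext.remove("NUM");
        }
        LOGGER.info("storing row");
    }
}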

Remko

Sent from my iPhone

> On 2015/07/22, at 7:45, Benjamin Jaton  wrote:
> 
> Hello,
> 
> I am using the JDBCAppender:
> 
> {
>  "type" : "Jdbc",
>  "name" : "MyDatabaseAppender",
>  "tableName" : "LOGS",
>  "ignoreExceptions" : "false",
>  "ConnectionFactory" : {
>"class" : "test.ConnectionFactory",
>"method" : "getDatabaseAppenderDataSource"
>  },
>  "Column" : [
> { "name" : "ID", "pattern" : "%X{ID}", "isUnicode" : "false" },
> { "name" : "NUM", "pattern" : "%X{NUM}", "isUnicode" : "false" }
> ]
> }
> 
> As you can see, I am using the ThreadContext to pass the variables.
> 
> ThreadContext.put("ID", id++);
> ThreadContext.put("NUM", "41");
> 
> The NUM column is of type BIGINT, and is nullable.
> 
> I am trying to set it to NULL with:
>  ThreadContext.put("NUM", "NULL");
> or
>  ThreadContext.put("NUM", null);
> or
>  ThreadContext.put("NUM", "");
> 
> Every time I get
> Caused by: org.apache.derby.client.am.SqlException: Error for batch element
> #0: Invalid character string format for type BIGINT.
>at org.apache.derby.client.am.Statement.completeExecute(Unknown Source)
> 
> I believe this is because, in
> org.apache.logging.log4j.core.appender.db.jdbc.JdbcDatabaseManager.writeInternal()
> we always do
>  this.statement.setString(i++, column.layout.toSerializable(event));
> 
> So we always end up feeding the String "" or "NULL" to the statement.
> 
> Is there anything I can do to alter this behavior?
> 
> Thanks




JDBC Appender : how to have the pattern produce NULL?

2015-07-21 Thread Benjamin Jaton
Hello,

I am using the JDBCAppender:

{
  "type" : "Jdbc",
  "name" : "MyDatabaseAppender",
  "tableName" : "LOGS",
  "ignoreExceptions" : "false",
  "ConnectionFactory" : {
"class" : "test.ConnectionFactory",
"method" : "getDatabaseAppenderDataSource"
  },
  "Column" : [
 { "name" : "ID", "pattern" : "%X{ID}", "isUnicode" : "false" },
 { "name" : "NUM", "pattern" : "%X{NUM}", "isUnicode" : "false" }
 ]
}

As you can see, I am using the ThreadContext to pass the variables.

ThreadContext.put("ID", id++);
ThreadContext.put("NUM", "41");

The NUM column is of type BIGINT, and is nullable.

I am trying to set it to NULL with:
  ThreadContext.put("NUM", "NULL");
or
  ThreadContext.put("NUM", null);
or
  ThreadContext.put("NUM", "");

Every time I get
Caused by: org.apache.derby.client.am.SqlException: Error for batch element
#0: Invalid character string format for type BIGINT.
at org.apache.derby.client.am.Statement.completeExecute(Unknown Source)

I believe this is because, in
org.apache.logging.log4j.core.appender.db.jdbc.JdbcDatabaseManager.writeInternal()
we always do
  this.statement.setString(i++, column.layout.toSerializable(event));

So we always end up feeding the String "" or "NULL" to the statement.

Is there anything I can do to alter this behavior?

Thanks


Re: Print the LoggerContext name

2015-07-21 Thread Clément Guillaume
Yes, I have the log4j-web jar on my classpath. I'm in a Servlet 3.0
environment, so I don't need any particular config, or did I miss something?


On Sun, Jul 19, 2015 at 11:26 AM, Gary Gregory wrote:

> And you have your web app configured per
> https://logging.apache.org/log4j/2.x/manual/webapp.html ?
>
> Gary
>
> On Sat, Jul 18, 2015 at 8:30 PM, Clément Guillaume wrote:
>
> > Yes, I am trying to print the servletContext.servletContextName using the
> > WebLookup (${web:servletContextName}). But it doesn't work for now.
> >
> > On Sat, Jul 18, 2015 at 8:26 PM, Gary Gregory wrote:
> >
> > > So you are using servletContextName?
> > >
> > > Gary
> > >
> > > On Sat, Jul 18, 2015 at 8:19 PM, Clément Guillaume <cguilla...@hotpads.com> wrote:
> > >
> > > > Hi, thank you for your answers,
> > > >
> > > > My primary goal is to print the WebApplication name. I found here
> > > > <http://logging.apache.org/log4j/2.x/manual/webapp.html#ContextParams>
> > > > that by default the LoggerContext name is the WebApplication name.
> > > >
> > > > And I just found that the WebLookup can achieve exactly what I want.
> > > > But for now I can't make it work: WebLookup.getServletContext() is
> > > > returning null, and I'm investigating more (I'm using a 3.0 servlet).
> > > >
> > > > I'm also trying to make my own custom Lookup, but it's currently not
> > > > found in the registered Lookups (even though the annotation processor
> > > > finds it and puts it into the Log4j2Plugins.dat file).
> > > >
> > > > On Fri, Jul 17, 2015 at 6:44 PM, Remko Popma wrote:
> > > >
> > > > > One way to achieve this would be to make a custom Lookup
> > > > > (https://logging.apache.org/log4j/2.x/manual/extending.html#Lookups)
> > > > > and include it in your PatternLayout's pattern.
> > > > >
> > > > > Sent from my iPhone
> > > > >
> > > > > > On 2015/07/18, at 9:39, Gary Gregory wrote:
> > > > > >
> > > > > > Hm... I do not think so. Patches welcome of course. What are your
> > > > > > expectations for such a feature?
> > > > > >
> > > > > > Gary
> > > > > >
> > > > > > On Mon, Jul 13, 2015 at 8:24 PM, Clément Guillaume <cguilla...@hotpads.com> wrote:
> > > > > >
> > > > > >> Is it possible to print the LoggerContext name in the output?
> > > > > >>
> > > > > >> Clément
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > >
> >
>
>
>
>
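
For reference, a minimal custom Lookup along the lines Remko suggested could 
look like this (a sketch; the plugin name, package and returned value are 
placeholders):

package com.example.logging;   // hypothetical package

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.lookup.StrLookup;

// Registered as a Lookup plugin; the class must be visible to the plugin scanner
// (e.g. via the packages attribute of the configuration or Log4j2Plugins.dat).
@Plugin(name = "myapp", category = StrLookup.CATEGORY)
public class MyAppLookup implements StrLookup {

    @Override
    public String lookup(final String key) {
        // Placeholder value; a real implementation would resolve the key from
        // wherever the application keeps its name.
        return "appName".equals(key) ? "my-web-app" : null;
    }

    @Override
    public String lookup(final LogEvent event, final String key) {
        return lookup(key);
    }
}

It would then be referenced as ${myapp:appName} in the configuration or layout 
pattern, the same way as ${web:servletContextName}.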


Re: FATAL Unable to register shutdown hook because JVM is shutting down.

2015-07-21 Thread Mikael Ståldal
There is only one JVM process involved as far as I know. I have a script
which looks like this:

#!/bin/sh
date "+%F %T,%N Before JVM start"
java -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager \
    com.magine.maglev.Job
date "+%F %T,%N After JVM end"


I tried to make it run longer by adding a Thread.sleep(60*1000), but I get the same problem:

2015-07-21 15:51:04,599538113 Before JVM start
2015-07-21 15:51:05,366  INFO com.magine.maglev.Job$: Startup
[... more logging ...]
2015-07-21 15:52:10,799 FATAL Unable to register shutdown hook because JVM
is shutting down. java.lang.IllegalStateException: Cannot add new shutdown
hook as this is not started. Current state: STOPPED
at
org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:244)
at
org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:182)
at
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:143)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:146)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
at
org.apache.logging.log4j.LogManager.getContext(LogManager.java:175)
at
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:102)
at
org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
at
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
at
org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
at org.apache.spark.Logging$class.log(Logging.scala:52)
at
org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.log(AkkaRpcEnv.scala:93)
at
org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:55)
at
org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at
scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at
org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at
org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:93)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

2015-07-21 15:52:11,145821352 After JVM end



On Tue, Jul 21, 2015 at 3:18 PM, Remko Popma  wrote:

> That's strange: if you are logging at startup the shutdown hook should be
> registered before the call to logger.log() returns... Does the batch start
> another JVM process?
>
> Sent from my iPhone
>
> > On 2015/07/21, at 17:18, Mikael Ståldal wrote:
> >
> > The app does some logging at startup as well, and quite some logging in
> > between. However, it is a short-running batch job, and in this case the
> > JVM only runs for 6 seconds in total.
> >
> > See below to get some more context about the timing. The first and last
> > log lines are not from Log4j; they are from a bash script used to start
> > the JVM.
> >
> > I think that this is not one problem but two: first, that this occurs in
> > the first place; second, that it is logged at FATAL level.
> >
> >
> > 2015-07-21 10:11:24,022  Before JVM start
> >
> > 2015-07-21 10:11:24,771  INFO com.magine.maglev.Job$: Startup
> > [... more logging here ...]
> > 2015-07-21 10:11:30,216 FATAL Unable to register shutdown hook because
> JVM
> > is shutting down. java.lang.IllegalStateException: Cannot add new
> shutdown
> > hook as this is not started. Current state: STOPPED
> >at
> >
> org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
> >at
> >
> org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:244)
> >at
> >
> org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:182)
> >at
> > org.apache.logging.log4j.core.LoggerContext.start(Logge

Re: FATAL Unable to register shutdown hook because JVM is shutting down.

2015-07-21 Thread Remko Popma
That's strange: if you are logging at startup the shutdown hook should be 
registered before the call to logger.log() returns... Does the batch start 
another JVM process?

Sent from my iPhone

> On 2015/07/21, at 17:18, Mikael Ståldal  wrote:
> 
> The app does some logging at startup as well, and quite some logging in
> between. However, it is a short-running batch job, and in this case the JVM
> only runs for 6 seconds in total.
> 
> See below to get some more context about the timing. The first and last log
> lines are not from Log4j; they are from a bash script used to start the JVM.
> 
> I think that this is not one problem but two: first, that this occurs in the
> first place; second, that it is logged at FATAL level.
> 
> 
> 2015-07-21 10:11:24,022  Before JVM start
> 
> 2015-07-21 10:11:24,771  INFO com.magine.maglev.Job$: Startup
> [... more logging here ...]
> 2015-07-21 10:11:30,216 FATAL Unable to register shutdown hook because JVM
> is shutting down. java.lang.IllegalStateException: Cannot add new shutdown
> hook as this is not started. Current state: STOPPED
>at
> org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
>at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:244)
>at
> org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:182)
>at
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:143)
>at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:146)
>at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
>at
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:175)
>at
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:102)
>at
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
>at
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
>at
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
>at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
>at org.apache.spark.Logging$class.log(Logging.scala:52)
>at
> org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.log(AkkaRpcEnv.scala:93)
>at
> org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:55)
>at
> org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
>at
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>at
> org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
>at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>at
> org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:93)
>at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 
> 2015-07-21 10:11:30,558 After JVM end
> 
> 
>> On Tue, Jul 21, 2015 at 12:58 AM, Remko Popma  wrote:
>> 
>> Gary, enjoy your holiday!
>> 
>> Mikael,
>> At first glance it seems to me that this problem happens because the
>> application does not start logging until it has done all the work and the
>> JVM is shutting down.
>> 
>> However, I'm not sure it really is a problem: Log4j tried to register a
>> shutdown hook, but couldn't. It then notified you of the problem and
>> continued to work. (The stack trace was generated from a try/catch block.)
>> 
>> If this error report is annoying, a workaround may be to change your app
>> to start logging at startup.
>> 
>> Remko
>> 
>> On Mon, Jul 20, 2015 at 6:25 PM, Mikael Ståldal > wrote:
>> 
>>> 2015-07-20 11:23:31,741 FATAL Unable to register shutdown hook because
>> JVM
>>> is shutting down. java.lang.IllegalStateException: Cannot add new
>> shutdown
>>> hook as this is not started. Current state: STOPPED
>>>at
>> org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
>>>at
>> org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4j

Re: FATAL Unable to register shutdown hook because JVM is shutting down.

2015-07-21 Thread Mikael Ståldal
The app does some logging at startup as well, and quite some logging in
between. However, it is a short-running batch job, and in this case the JVM
only runs for 6 seconds in total.

See below to get some more context about the timing. The first and last log
lines are not from Log4j; they are from a bash script used to start the JVM.

I think that this is not one problem but two: first, that this occurs in the
first place; second, that it is logged at FATAL level.


2015-07-21 10:11:24,022  Before JVM start

2015-07-21 10:11:24,771  INFO com.magine.maglev.Job$: Startup
[... more logging here ...]
2015-07-21 10:11:30,216 FATAL Unable to register shutdown hook because JVM
is shutting down. java.lang.IllegalStateException: Cannot add new shutdown
hook as this is not started. Current state: STOPPED
at
org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:244)
at
org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:182)
at
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:143)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:146)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
at
org.apache.logging.log4j.LogManager.getContext(LogManager.java:175)
at
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:102)
at
org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
at
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
at
org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
at org.apache.spark.Logging$class.log(Logging.scala:52)
at
org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.log(AkkaRpcEnv.scala:93)
at
org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:55)
at
org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at
scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at
org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at
org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:93)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

2015-07-21 10:11:30,558 After JVM end


On Tue, Jul 21, 2015 at 12:58 AM, Remko Popma  wrote:

> Gary, enjoy your holiday!
>
> Mikael,
> At first glance it seems to me that this problem happens because the
> application does not start logging until it has done all the work and the
> JVM is shutting down.
>
> However, I'm not sure it really is a problem: Log4j tried to register a
> shutdown hook, but couldn't. It then notified you of the problem and
> continued to work. (The stack trace was generated from a try/catch block.)
>
> If this error report is annoying, a workaround may be to change your app
> to start logging at startup.
>
> Remko
>
> On Mon, Jul 20, 2015 at 6:25 PM, Mikael Ståldal  >
> wrote:
>
> > 2015-07-20 11:23:31,741 FATAL Unable to register shutdown hook because
> JVM
> > is shutting down. java.lang.IllegalStateException: Cannot add new
> shutdown
> > hook as this is not started. Current state: STOPPED
> > at
> >
> >
> org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
> > at
> >
> >
> org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:244)
> > at
> >
> >
> org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:182)
> > at
> > org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:143)
> > at
> >
> >
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:146)
> > at
> >
> >
> or