Hi Ralph, thanks for your reply. I'm using Log4j2 version 2.7. Which
behavior does this version have? Shutdown completes in a reasonable amount
of time, yet the compression action thread cannot finish. Is there a
version that does wait for the threads? Is there anything I can do to help
on this issue? Thanks for working on it; I'm looking forward to a fix.
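
For reference, the kind of rollover setup under discussion (a size-based trigger at 4GB with gzip compression selected via the `.gz` suffix in the filePattern) looks roughly like this. The 4GB limit and the `log.%i.gz` naming mirror what is described further down in this thread; the file paths, pattern layout, and max index are illustrative only, not the original configuration:

```xml
<Configuration status="warn">
  <Appenders>
    <!-- Rolls the file at 4 GB; the .gz suffix in filePattern is what
         triggers the asynchronous compression action discussed here. -->
    <RollingFile name="rolling" fileName="logs/log"
                 filePattern="logs/log.%i.gz">
      <PatternLayout pattern="%d %p %c - %m%n"/>
      <SizeBasedTriggeringPolicy size="4 GB"/>
      <DefaultRolloverStrategy max="10"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="rolling"/>
    </Root>
  </Loggers>
</Configuration>
```

With a setup like this, each rollover spawns the compression work described below; if the JVM exits before that work finishes, a truncated `.gz` archive can be left behind.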

Robert

On Sun, Jan 22, 2017 at 10:07 PM, Apache <ralph.go...@dslextreme.com> wrote:

> It is actually interesting that you mention that as I am working on that
> code right now.
>
> This is a bit of code that has been troublesome for us and the behavior
> depends on which version of Log4j you are using.  Log4j used to spawn a
> thread to do the compression and the thread did not always complete before
> shutdown. In 2.7 the code was modified to use an ExecutorService. However,
> that implementation had the undesirable side effect of causing shutdown to
> wait for a long period of time for no good reason, so that code was just
> recently reverted. Since our unit tests now regularly fail again because
> the test app completes before the compression action does, I am looking at
> this again to find a better solution: one that waits for the compression
> action to complete but does not delay shutdown when there is nothing to do.
>
> Ralph
>
> > On Jan 22, 2017, at 1:06 PM, Robert Schmidtke <ro.schmid...@gmail.com> wrote:
> >
> > Hi everyone,
> >
> > I am currently debugging an issue, and I would like to know how an
> > asynchronous compression action that is currently running in a thread
> > created through the rolling file manager (
> > https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/rolling/RollingFileManager.java#L326)
> > reacts to JVM shutdown. I could not find any code that would block until
> > the action is done during shutdown.
> >
> > Some background to the problem: I'm creating large log files in a Spark
> > on Yarn application, and I roll over at a size of 4GB, using gz. When
> > analyzing the log files, I see that I get log.1.gz, log.2.gz (just like
> > the pattern I defined) but also log.5 and log.5.gz. The log.5.gz archive
> > is not readable, so I'm guessing that the compression action could not
> > finish its work because the JVM it was running in was shut down by Yarn
> > too early. I suspected that calling LogManager.shutdown() would block
> > until all threads are done, but that does not seem to be the case.
> >
> > What am I missing here? What needs to be the appropriate setup for having
> > Log4j2 finish its compression actions before its JVM is shut down? Many
> > thanks in advance!
> >
> > Robert
> >
> > --
> > My GPG Key ID: 336E2680
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: log4j-user-unsubscr...@logging.apache.org
> For additional commands, e-mail: log4j-user-h...@logging.apache.org
>
>


-- 
My GPG Key ID: 336E2680
