It appears version 2.8 is close to being released. Will it be possible to
get the RollingFileManager threads out of the thread group and join() them,
like I had tried before? 2.7 uses thread groups in its factory as well, yet
I don't see the RollingFileManager threads during shutdown.
Anyway, thanks a bunch for your advice; looking forward to fixing this :)
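The approach described above, enumerating a group's threads and joining them, might be sketched roughly as follows. This is a generic JDK sketch under stated assumptions: the class name, helper name, and group name are illustrative, not Log4j API.

```java
// Sketch: wait for all live threads in a ThreadGroup to finish.
// Names here (ThreadGroupJoiner, "rolling-tasks") are illustrative only.
public final class ThreadGroupJoiner {

    // Enumerate the group's threads and join() each one, bounded by an
    // overall timeout so shutdown cannot hang indefinitely.
    public static void joinGroupThreads(ThreadGroup group, long timeoutMillis)
            throws InterruptedException {
        Thread[] threads = new Thread[group.activeCount() + 8]; // slack for races
        int count = group.enumerate(threads, true); // include subgroups
        long deadline = System.currentTimeMillis() + timeoutMillis;
        for (int i = 0; i < count; i++) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                break; // overall timeout spent; stop waiting
            }
            threads[i].join(remaining);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("rolling-tasks");
        Thread worker = new Thread(group, () -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        });
        worker.start();
        joinGroupThreads(group, 5000);
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```

Note that `activeCount()` is only an estimate, hence the slack in the array size; the deadline bound keeps a stuck thread from blocking shutdown forever.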

Robert

On Mon, Jan 23, 2017 at 11:59 AM, Robert Schmidtke <ro.schmid...@gmail.com>
wrote:

> Hi all, thanks for all the replies and details. I looked into the source,
> and it would seem that for a shutdown timeout of 0 the executor service
> does not wait for all threads to complete, because the awaitTermination()
> call is missing:
> https://logging.apache.org/log4j/log4j-2.7/log4j-core/xref/org/apache/logging/log4j/core/util/ExecutorServices.html#L76
> I guess I will try adding a large enough timeout value and see if that
> helps.
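The pattern behind the missing call can be sketched with a plain java.util.concurrent executor. This illustrates the general shutdown-then-wait idiom, not the actual Log4j ExecutorServices code; the class and method names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of an orderly executor shutdown that actually waits for queued
// tasks to finish. Illustrative names; not the Log4j implementation.
public final class OrderlyShutdown {

    // Returns true if all tasks finished within the timeout.
    public static boolean shutdownAndWait(ExecutorService executor,
                                          long timeout, TimeUnit unit)
            throws InterruptedException {
        executor.shutdown(); // reject new tasks, keep running queued ones
        // Without this call, the caller proceeds (and the JVM may exit)
        // while tasks such as log compression are still running.
        return executor.awaitTermination(timeout, unit);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        boolean finished = shutdownAndWait(pool, 5, TimeUnit.SECONDS);
        System.out.println("finished: " + finished);
    }
}
```

The key point is that `shutdown()` alone does not block; only `awaitTermination()` makes the caller wait for in-flight tasks.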
>
> On Mon, Jan 23, 2017 at 9:53 AM, Mikael Ståldal <mikael.stal...@magine.com>
> wrote:
>
>> The problem with Log4j 2.7 is that it uses a pool of non-daemon threads
>> for the RollingFileManager tasks. This needlessly blocks application
>> shutdown when the application exits by returning from the main method
>> (rather than calling System.exit()). If you use System.exit(), you will
>> not experience this problem.
>>
>> However, if you use System.exit(), you might instead abort a background
>> task that happens to be in progress.
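The daemon/non-daemon distinction drawn above can be illustrated with a generic ThreadFactory: daemon threads never keep the JVM alive after main returns, at the cost of being killed mid-task. This is a sketch, not the Log4j 2.7 factory; the class and thread names are illustrative.

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a factory producing daemon threads: they never block JVM exit,
// but any task still running when main returns is killed mid-flight.
public final class DaemonThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public Thread newThread(Runnable task) {
        Thread t = new Thread(task, "rolling-" + counter.incrementAndGet());
        t.setDaemon(true); // JVM may exit even while this thread runs
        return t;
    }

    public static void main(String[] args) {
        Thread t = new DaemonThreadFactory().newThread(() -> { });
        System.out.println("daemon: " + t.isDaemon());
    }
}
```

With non-daemon threads the trade-off reverses: tasks always finish, but a returning main method waits for them, which is exactly the blocked-shutdown behavior described above.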
>>
>> On Sun, Jan 22, 2017 at 10:16 PM, Robert Schmidtke <ro.schmid...@gmail.com>
>> wrote:
>>
>> > Hi Ralph, thanks for your reply. I'm using Log4j2 version 2.7. Which
>> > behavior does this version have? Shutdown completes in a reasonable
>> > amount of time, yet the compression action thread cannot finish. Is
>> > there a version that does wait for the threads? Is there anything I can
>> > do to help with this issue? Thanks for working on it; I'm looking
>> > forward to a fix.
>> >
>> > Robert
>> >
>> > On Sun, Jan 22, 2017 at 10:07 PM, Apache <ralph.go...@dslextreme.com>
>> > wrote:
>> >
>> > > It is actually interesting that you mention this, as I am working on
>> > > that code right now.
>> > >
>> > > This is a bit of code that has been troublesome for us, and the
>> > > behavior depends on which version of Log4j you are using. Log4j used
>> > > to spawn a thread to do the compression, and the thread did not
>> > > always complete before shutdown. In 2.7 the code was modified to use
>> > > an ExecutorService. However, that implementation had the undesirable
>> > > side effect of causing shutdown to wait for a long period of time for
>> > > no good reason, so that code was just recently reverted. Since our
>> > > unit tests now regularly fail again because the test app completes
>> > > before the compression action does, I am now looking at this again to
>> > > find a better solution: one that waits for the compression action to
>> > > complete, but does not delay shutdown when there is nothing to do.
>> > >
>> > > Ralph
>> > >
>> > > > On Jan 22, 2017, at 1:06 PM, Robert Schmidtke
>> > > > <ro.schmid...@gmail.com> wrote:
>> > > >
>> > > > Hi everyone,
>> > > >
>> > > > I am currently debugging an issue, and I would like to know how an
>> > > > asynchronous compression action, running in a thread created
>> > > > through the rolling file manager
>> > > > (https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/rolling/RollingFileManager.java#L326),
>> > > > reacts to JVM shutdown. I could not find any code that would block
>> > > > during shutdown until the action is done.
>> > > >
>> > > > Some background to the problem: I'm creating large log files in a
>> > > > Spark on Yarn application, and I roll over at a size of 4GB, using
>> > > > gz. When analyzing the log files, I see that I get log.1.gz and
>> > > > log.2.gz (just like the pattern I defined), but also log.5 and
>> > > > log.5.gz. The log.5.gz archive is not readable, so I'm guessing
>> > > > that the compression action could not finish its work because the
>> > > > JVM it was running in was shut down by Yarn too early. I suspected
>> > > > that calling LogManager.shutdown() would block until all threads
>> > > > are done, but that does not seem to be the case.
>> > > >
>> > > > What am I missing here? What is the appropriate setup for having
>> > > > Log4j2 finish its compression actions before its JVM is shut down?
>> > > > Many thanks in advance!
>> > > >
>> > > > Robert
>> > > >
>> > > > --
>> > > > My GPG Key ID: 336E2680
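One configuration-side mitigation, related to the timeout discussed in this thread: the Configuration element in log4j2.xml accepts shutdownHook and shutdownTimeout attributes (the latter in milliseconds), and a larger value gives pending background tasks such as compression time to finish before shutdown. Worth verifying against the exact Log4j version in use; the value below is illustrative.

```xml
<!-- Give Log4j background tasks (e.g. rollover compression) time to
     finish on shutdown. shutdownTimeout is in milliseconds; verify that
     your Log4j version supports this attribute. -->
<Configuration status="warn" shutdownHook="enable" shutdownTimeout="10000">
  ...
</Configuration>
```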
>> > >
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> >
>>
>>
>>
>> --
>>
>> *Mikael Ståldal*
>> Senior software developer
>>
>> *Magine TV*
>> mikael.stal...@magine.com
>> Grev Turegatan 3  | 114 46 Stockholm, Sweden  |   www.magine.com
>>
>>
>
>
>
>



-- 
My GPG Key ID: 336E2680
