On 2019/01/28 21:23:00, Matt Sicker <[email protected]> wrote:
> I like the idea in general, though I wonder if this is already doable
> with an existing plugin?
>
> On Mon, 28 Jan 2019 at 04:43, 于得水 <[email protected]> wrote:
> >
> > Hello, Log4j developers
> > We hit a problem when debugging an online production system. Our
> > production system manages and distributes data across multiple worker
> > machines. A bug can cause unbalanced data placement, or even data
> > unavailability, under heavy workload. In this scenario, DEBUG-level logs
> > would help us a lot in diagnosing the issue. However, we cannot always set
> > the logger's level to DEBUG, because that would store too many logs on
> > disk and slow down the production service, especially since the bug only
> > occurs occasionally.
> >
> > I wonder if we could add a new type of memory appender to Log4j. This
> > appender would first store log entries in a memory queue, with a
> > configurable maximum queue size and a policy (like FIFO) to roll out stale
> > log entries once the queue is full. If a problem occurs, such as a type of
> > exception we're interested in being thrown, the user could trigger a dump
> > of this appender to flush the in-memory logs to a file for later
> > diagnostic use. This way it records only 'useful' DEBUG logs and their
> > related context on disk, avoiding wasted disk space and a slowdown of the
> > production service.
> >
> > If you think it's worth doing, I can create a JIRA issue and submit my
> > prototype Pull Request for review.
> >
> > Thanks,
> > Deshui
>
>
>
> --
> Matt Sicker <[email protected]>
Hi, Matt. Thanks for the reply. I checked the existing appenders; it seems no
existing components can be combined to trigger dumping a size-limited log
event queue (like the Ring Buffer Remko mentioned in this JIRA:
https://issues.apache.org/jira/browse/LOG4J2-1137) from memory.