[
https://issues.apache.org/jira/browse/LOG4J2-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Gary Gregory updated LOG4J2-1230:
---------------------------------
Description:
Under heavy traffic load, several SYSLOG messages can be concatenated, which is
not correct.
The problem is that several threads in AbstractOutputStreamAppender can reach
write() in the code below before any one of them reaches flush(): readLock.lock()
acquires a shared read lock, so it provides no mutual exclusion at all.
{code:java}
@Override
public void append(final LogEvent event) {
    readLock.lock();
    try {
        final byte[] bytes = getLayout().toByteArray(event);
        if (bytes.length > 0) {
            manager.write(bytes);
            if (this.immediateFlush || event.isEndOfBatch()) {
                manager.flush();
            }
        }
    } catch (final AppenderLoggingException ex) {
        error("Unable to write to stream " + manager.getName() + " for appender " + getName());
        throw ex;
    } finally {
        readLock.unlock();
    }
}
{code}
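As an illustration of why the read lock gives no protection here, the following
standalone sketch (not Log4j code; the class and thread names are made up for the
example) shows that two threads can hold the read lock of a ReentrantReadWriteLock
at the same time, so both can sit between write() and flush() concurrently:
{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Demonstrates that read locks are shared: both threads enter the "protected"
// section at the same time, which is exactly what happens to append() under load.
public class ReadLockDemo {
    public static void main(final String[] args) throws InterruptedException {
        final ReadWriteLock rwLock = new ReentrantReadWriteLock();
        final CountDownLatch bothInside = new CountDownLatch(2);

        final Runnable task = () -> {
            rwLock.readLock().lock();
            try {
                bothInside.countDown(); // this thread is now inside the read-locked section
                // This await() only returns because the other thread can also acquire
                // the read lock; an exclusive lock would make this deadlock instead.
                bothInside.await();
                System.out.println(Thread.currentThread().getName()
                        + " is inside the read-locked section concurrently");
            } catch (final InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                rwLock.readLock().unlock();
            }
        };

        final Thread t1 = new Thread(task, "appender-thread-1");
        final Thread t2 = new Thread(task, "appender-thread-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
{code}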
In DatagramOutputStream, all messages are accumulated in the instance variable
private byte[] data;
until the stream is flushed. Its synchronized methods do not help: the
synchronization inside DatagramOutputStream is useless on its own, because the
correct synchronization, keeping write() and flush() together, should be
provided in AbstractOutputStreamAppender.
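One possible direction for a fix (a sketch only, not the actual Log4j change; it
reuses the fields and methods from the append() shown above) is to make the
write-and-flush pair atomic per event, for example by holding one exclusive
monitor across both calls:
{code:java}
// Sketch: serialize write() + flush() so a flush can only ever send the bytes of
// the event that was just written, never bytes buffered by another thread.
@Override
public void append(final LogEvent event) {
    final byte[] bytes = getLayout().toByteArray(event);
    if (bytes.length == 0) {
        return;
    }
    try {
        synchronized (manager) { // exclusive, unlike the shared read lock
            manager.write(bytes);
            if (this.immediateFlush || event.isEndOfBatch()) {
                manager.flush();
            }
        }
    } catch (final AppenderLoggingException ex) {
        error("Unable to write to stream " + manager.getName() + " for appender " + getName());
        throw ex;
    }
}
{code}
With both calls made under one exclusive lock, each datagram built up in
DatagramOutputStream can only contain the bytes of a single event.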
> Concatenated SYSLOG Messages
> ----------------------------
>
> Key: LOG4J2-1230
> URL: https://issues.apache.org/jira/browse/LOG4J2-1230
> Project: Log4j 2
> Issue Type: Bug
> Components: Core
> Affects Versions: 2.5
> Environment: Linux, Java7, WebLogic 12
> Reporter: Vladimir Hudec
> Original Estimate: 8h
> Remaining Estimate: 8h
>