Hi,

To improve the performance of the JDBCAppender, we could do the
following: write a client logging SDK that sends the data to a socket
using the SocketAppender class, and then write an audit server that
listens on that socket and writes the events to the database using the
JDBCAppender.

This improves performance because the audit client just puts the data
on the socket and proceeds.

Any comments on this?

Regards,
Pradeep
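One way to sketch this with stock log4j components is a SocketAppender
on the client side and SimpleSocketServer on the server side, with the
server's configuration routing events to the JDBC appender. The host
name, port, and file names below are illustrative assumptions, not a
tested setup:

```
# client-side log4j.properties (hypothetical host/port)
log4j.rootLogger=DEBUG, socket
log4j.appender.socket=org.apache.log4j.net.SocketAppender
log4j.appender.socket.RemoteHost=auditserver.example.com
log4j.appender.socket.Port=4560

# server side: receive the serialized events and log them through
# whatever appenders server.properties configures (e.g. a JDBC appender)
#   java org.apache.log4j.net.SimpleSocketServer 4560 server.properties
```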







Ceki Gülcü <[EMAIL PROTECTED]> on 05/21/2002 11:17:22 PM

Please respond to "Log4J Developers List" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:    (bcc: Pradeep NB Kumar/India/KINDLE)
Subject:  Comments on JDBCAppender




Kevin and others,

Given the recent remarks on JDBCAppender performance, one possibility
for improvement is to use prepared statements and batch processing.

I have created a table called logEvents for testing purposes:

CREATE TABLE logEvents (
    loggerName varchar(200),
    timestamp bigint,
    levelName varChar(32),
    message varchar(1024),
    NDC varchar(1024)
);

(I am using PostgreSQL 7.1.3).

Here is some JDBC code to exercise the table:

import java.sql.*;
import org.apache.log4j.*;
import org.apache.log4j.spi.*;

public class JDBCTest {

   public static void main(String[] args) throws Exception {

     Logger root = Logger.getRootLogger();

     Connection conn = null;
     String driver = "org.postgresql.Driver";

     Class.forName(driver).newInstance();

     conn = DriverManager.getConnection(args[0], args[1], args[2]);

     double start;
     double end;
     int LOOP_LEN = 100;
     int counter = 0;

     // -------------------------- Normal statement:
     start = System.currentTimeMillis();
     Statement s = null;

     for(int i = 0; i <  LOOP_LEN; i++) {
       s = conn.createStatement();
       NDC.push("counter "+(counter++));
       LoggingEvent event = new LoggingEvent(Category.class.getName(),
                                             root, Level.DEBUG,
                                             "message " + i,
                                             null);

       s.executeUpdate("INSERT INTO logEvents (loggerName, timestamp, "
                       + "levelName, message, NDC) "
                       + "VALUES ('" + event.logger.getName() + "', "
                       + event.timeStamp + ", '"
                       + event.level.toString() + "', '"
                       + event.getRenderedMessage() + "', '"
                       + event.getNDC() + "')");
       NDC.pop();
     }
     s.close();
     end = System.currentTimeMillis();
     System.out.println("Overall (simple statement) : " + (end-start));
     System.out.println("Average: " + ((end-start)*1000)/LOOP_LEN
                        + " microsecs.");

     PreparedStatement stmt;

     // Prepared statement
     start = System.currentTimeMillis();
     stmt = conn.prepareStatement("INSERT INTO logEvents (loggerName, "
                                  + "timestamp, levelName, message, NDC) "
                                  + "VALUES (?, ?, ?, ?, ?)");

     for(int i = 0; i <  LOOP_LEN; i++) {
       NDC.push("counter "+(counter++));
       MDC.put("hello", "x");
       LoggingEvent event = new LoggingEvent(Category.class.getName(),
                                             root, Level.DEBUG,
                                             "message " + i,
                                             null);

       stmt.setString(1, event.logger.getName());
       stmt.setLong(2, event.timeStamp);
       stmt.setString(3, event.level.toString());
       stmt.setString(4, event.getRenderedMessage());

       stmt.setString(5, event.getNDC());
       NDC.pop();
       stmt.executeUpdate();
     }
     stmt.close();
     end = System.currentTimeMillis();
     System.out.println("Overall (prepared statement) : " + (end-start));
     System.out.println("Average: " + ((end-start)*1000)/LOOP_LEN
                        + " microsecs.");

     // --- Batch mode -----------------------
     start = System.currentTimeMillis();
     stmt = conn.prepareStatement("INSERT INTO logEvents (loggerName, "
                                  + "timestamp, levelName, message, NDC) "
                                  + "VALUES (?, ?, ?, ?, ?)");

     for(int i = 0; i <  LOOP_LEN; i++) {
       NDC.push("counter "+(counter++));
       LoggingEvent event = new LoggingEvent(Category.class.getName(),
                                             root, Level.DEBUG,
                                             "message" + i,
                                             null);

       stmt.setString(1, event.logger.getName());
       stmt.setLong(2, event.timeStamp);
       stmt.setString(3, event.level.toString());
       stmt.setString(4, event.getRenderedMessage());

       stmt.setString(5, event.getNDC());
       NDC.pop();
       stmt.addBatch();
     }
     stmt.executeBatch();
     stmt.close();
     end = System.currentTimeMillis();
     System.out.println("Overall (batch prepared statement) : " + (end-start));
     System.out.println("Average: " + ((end-start)*1000)/LOOP_LEN
                        + " microsecs.");

     conn.close();

   }
}


Running this test code gives:

~/ >java JDBCTest  jdbc:postgresql://somehost/someDatabaseName ceki ****

Overall (simple statement) : 411.0
Average: 4110.0 microsecs.

Overall (prepared statement) : 421.0
Average: 4210.0 microsecs.

Overall (batch prepared statement) : 150.0
Average: 1500.0 microsecs.


As you can see, batched prepared statements are significantly faster
(about 3x) than plain prepared statements, whereas prepared statements
are only marginally faster than simple statements. These results depend
on the database, and your mileage may vary.
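The speedup comes from accumulating rows and sending them in one round
trip. A JDBC appender could exploit this by buffering events and only
calling executeBatch() when the buffer fills. Here is a minimal,
hedged sketch of that buffering logic; BatchBuffer and its fields are
illustrative names, not part of log4j, and the Consumer stands in for
the actual JDBC batch execution:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the buffering a batch-oriented appender could do: events
// accumulate until bufferSize is reached, then the whole batch is
// flushed at once (where a real appender would call addBatch() per
// event followed by a single executeBatch()).
public class BatchBuffer<E> {
    private final int bufferSize;
    private final Consumer<List<E>> flusher;
    private final List<E> buffer = new ArrayList<>();

    public BatchBuffer(int bufferSize, Consumer<List<E>> flusher) {
        this.bufferSize = bufferSize;
        this.flusher = flusher;
    }

    public void append(E event) {
        buffer.add(event);
        if (buffer.size() >= bufferSize) {
            flush();
        }
    }

    // Must also be called on close() so trailing events are not lost.
    public void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

The trade-off is that buffered events sit in memory until the next
flush, so a crash can lose up to bufferSize - 1 events.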

Notice that I did not insert the MDC or the throwable string
representation. IMHO, these fields are best represented as BLOBs, which
PostgreSQL 7.1.3 does not support, although 7.2 reportedly does.

(The only alternatives I see to BLOBs are bit fields or arrays.)

Once we settle on the best representation, I think the table structure
(table name and column names) should be fixed once and for all. This
would allow other components to query the database and present the
results to the user in a convenient form, which cannot be done if the
underlying table name and columns are not fixed.
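For instance, with a fixed layout a reporting component could pull
recent events with a plain query; the column names below come from the
logEvents table above, and the ERROR filter is just an illustration:

```
SELECT loggerName, timestamp, levelName, message, NDC
  FROM logEvents
 WHERE levelName = 'ERROR'
 ORDER BY timestamp DESC;
```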

That is it for the moment.









--
To unsubscribe, e-mail:   <mailto:[EMAIL PROTECTED]>
For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>