By default, iBATIS caches the PreparedStatements for a batch in a list and
always re-uses the last statement it prepared. This model works fine when a
batch inserts into a single table. However, we have a requirement to do batch
inserts into several tables within a single transaction. Because iBATIS only
re-uses the most recently prepared statement, interleaving inserts for
different tables gives us very poor performance: iBATIS creates a new
PreparedStatement for almost every call (occupying a database cursor for each
one as well).
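
For reference, the relevant check in the stock addBatch() looks roughly like
this (a simplified sketch, not the exact source):

    // simplified sketch of the stock behaviour: the executor only remembers
    // the SQL of the last statement it prepared
    if (currentSql != null && currentSql.equals(sql)) {
        // same SQL as the previous addBatch() call: re-use the last statement
        int last = statementList.size() - 1;
        ps = (PreparedStatement) statementList.get(last);
    } else {
        // different SQL: prepare a brand new statement, so interleaving
        // inserts for table A and table B prepares one on every call
        ps = prepareStatement(statementScope.getSession(), conn, sql);
        currentSql = sql;
        statementList.add(ps);
    }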
I looked at the source code and modified the SqlExecutor addBatch() method to
use a Map keyed by the SQL string instead of the last entry of the list;
executeBatch() is changed correspondingly to execute every statement cached in
the map. Following is the code.
private Map<String, PreparedStatement> statementMap =
        new HashMap<String, PreparedStatement>();

public void addBatch(StatementScope statementScope, Connection conn,
        String sql, Object[] parameters) throws SQLException {
    PreparedStatement ps = null;
    // was: if (currentSql != null && currentSql.equals(sql)) { re-use the last statement }
    if (sql != null && statementMap.containsKey(sql)) {
        // re-use the statement already prepared for this SQL, even if
        // statements for other tables were added to the batch in between
        ps = statementMap.get(sql);
    } else {
        ps = prepareStatement(statementScope.getSession(), conn, sql);
        setStatementTimeout(statementScope.getStatement(), ps);
        currentSql = sql;
        statementList.add(ps);
        statementMap.put(sql, ps);
        batchResultList.add(new BatchResult(statementScope.getStatement().getId(), sql));
    }
    statementScope.getParameterMap().setParameters(statementScope, ps, parameters);
    ps.addBatch();
    size++;
}
public int executeBatch() throws SQLException {
    int totalRowCount = 0;
    // was: iterate statementList by index; now every statement cached in
    // the map is executed once
    for (PreparedStatement ps : statementMap.values()) {
        int[] rowCounts = ps.executeBatch();
        for (int j = 0; j < rowCounts.length; j++) {
            if (rowCounts[j] == Statement.SUCCESS_NO_INFO) {
                // the driver executed the statement but did not report a row count
            } else if (rowCounts[j] == Statement.EXECUTE_FAILED) {
                throw new SQLException("The batched statement at index " + j
                        + " failed to execute.");
            } else {
                totalRowCount += rowCounts[j];
            }
        }
    }
    return totalRowCount;
}
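
For context, the application code that drives these batches looks roughly like
this; the statement ids and the Order/OrderLine classes are made-up names
standing in for our real SQL maps and domain classes:

    public void saveOrders(SqlMapClient sqlMapClient, List<Order> orders)
            throws SQLException {
        sqlMapClient.startTransaction();
        try {
            sqlMapClient.startBatch();
            for (Order order : orders) {
                // inserts for the two tables are interleaved inside one batch
                sqlMapClient.insert("insertOrderHeader", order);
                for (OrderLine line : order.getLines()) {
                    sqlMapClient.insert("insertOrderLine", line);
                }
            }
            sqlMapClient.executeBatch();
            sqlMapClient.commitTransaction();
        } finally {
            sqlMapClient.endTransaction();
        }
    }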
Question: After making these changes everything works for us and the
performance is very good, since the cached PreparedStatements are now re-used.
Does this look like a proper implementation? Will it have any other impact
that I may not have seen yet?
-- Nitin