I have recently been doing some microbenchmarking to evaluate various O/R mapping technologies vs. JDBC using Derby (10.0.2.1), and I think I have identified a resource leak/performance issue that appears only when using Derby in network server mode in conjunction with the IBM DB2 driver (although I have been unable to isolate the problem to either the client or the server). The problem does not exhibit itself when running embedded.
The benchmark itself is very simple:

    open db connection
    create table pojo (id int not null primary key, int0 int);
    for (50x) {
        insert into pojo values (0, 0);
        for (1000x) {
            select * from pojo where id = 0 for update;
            update pojo set int0 = int0 + 1 where id = 0;
        }
        delete from pojo where id = 0;
    }
    close db connection

Using the TSC on the x86 architecture it is possible to get clock-frequency-resolution timings of these operations. The benchmark was run on an AMD Athlon XP 2400+ running Fedora Core 1 and JDK 1.4.2_04.

In embedded mode, the benchmark demonstrated an essentially constant time for the select+update operation, on the order of 0.85 ms.

Running client/server with the DB2 type 4 universal driver (in the same VM, in a different VM on the same machine, and on a separate machine), the benchmark demonstrated linear growth with each iteration: the minimum was 7.15 ms, the maximum 22.69 ms, with an increment of about 0.31 ms per 1000 iterations.

As a comparison, I ran Derby using C-JDBC (embedding Derby in a C-JDBC controller) in the same configurations, and it demonstrated a constant average access time of approximately 8.33 ms per 1000 iterations.

So I believe there to be a problem in either the client-side or server-side DRDA code. Has anyone else seen this behavior?

Regards - Larry Cable
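For reference, the loop described above can be sketched in JDBC as follows. This is a modern-Java sketch, not the original harness: the class and method names are mine, the original run used JDK 1.4.2 with TSC-based timing rather than System.nanoTime, and the JDBC URL in the comment is only an illustrative Derby network URL (the actual runs used the DB2 universal driver). The helper at the bottom just checks the per-1000-iteration growth implied by the quoted min/max figures.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PojoBench {

    // The benchmark loop from the post: 50 outer iterations, each doing
    // 1000 timed select-for-update + update pairs against a single row.
    static void runBenchmark(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("create table pojo (id int not null primary key, int0 int)");
        }
        try (PreparedStatement sel = conn.prepareStatement(
                     "select * from pojo where id = ? for update");
             PreparedStatement upd = conn.prepareStatement(
                     "update pojo set int0 = int0 + 1 where id = ?");
             Statement st = conn.createStatement()) {
            for (int outer = 0; outer < 50; outer++) {
                st.executeUpdate("insert into pojo values (0, 0)");
                for (int i = 0; i < 1000; i++) {
                    long t0 = System.nanoTime();
                    sel.setInt(1, 0);
                    try (ResultSet rs = sel.executeQuery()) {
                        rs.next();
                    }
                    upd.setInt(1, 0);
                    upd.executeUpdate();
                    long elapsedUs = (System.nanoTime() - t0) / 1_000;
                    // record elapsedUs per iteration to observe any growth trend
                }
                st.executeUpdate("delete from pojo where id = 0");
            }
        }
    }

    // Growth per 1000-iteration batch implied by the observed min/max over
    // 50 batches: (max - min) divided by the 49 intervals between batches.
    static double slopeMsPer1000(double minMs, double maxMs, int batches) {
        return (maxMs - minMs) / (batches - 1);
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0) {
            // e.g. jdbc:derby://localhost:1527/benchdb;create=true
            try (Connection conn = DriverManager.getConnection(args[0])) {
                runBenchmark(conn);
            }
        } else {
            // (22.69 - 7.15) / 49 = 0.317... ms, consistent with the
            // reported ~0.31 ms increment per 1000 iterations.
            System.out.printf("%.2f ms per 1000 iterations%n",
                    slopeMsPer1000(7.15, 22.69, 50));
        }
    }
}
```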