I am running a benchmark that inserts 100 million (100M) rows into a
single table, and I am seeing performance I don't understand. Graph:
http://imgur.com/hH1Jr. Can anyone explain:

1. Why does write speed (writes/second) slow down dramatically around 28M
items?
2. Are there parameters (perhaps related to table size) that would change
this write performance?

=======================

Create and insert statements:

create table if not exists t_foo (
  key binary(16) primary key,
  value binary(16));

insert or replace into t_foo (key, value) values (?, ?)

key and value are each 16-byte arrays.

I turn auto-commit off and commit every 1000 inserts.
I set synchronous mode to OFF and journaling mode to WAL (write-ahead log).
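
In case it helps, here is a simplified sketch of the benchmark loop as
described above (batched commits every 1000 inserts, synchronous=OFF,
WAL). Note this is Python with the stdlib sqlite3 module, not the
Java/JDBC code I'm actually running, and the counter-derived keys are
just placeholders for my real 16-byte keys:

```python
import os
import sqlite3
import struct
import tempfile

def run_benchmark(db_path, n_items, batch_size=1000):
    # isolation_level=None puts the connection in autocommit mode so we
    # can manage BEGIN/COMMIT ourselves, mirroring "auto-commit off,
    # commit every 1000 inserts".
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("PRAGMA synchronous = OFF")
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("""create table if not exists t_foo (
                      key binary(16) primary key,
                      value binary(16))""")
    conn.execute("BEGIN")
    for i in range(n_items):
        # Placeholder 16-byte key/value derived from the loop counter;
        # the real benchmark uses arbitrary 16-byte arrays.
        key = struct.pack(">QQ", 0, i)
        conn.execute(
            "insert or replace into t_foo (key, value) values (?, ?)",
            (key, key))
        if (i + 1) % batch_size == 0:
            conn.execute("COMMIT")
            conn.execute("BEGIN")
    conn.execute("COMMIT")  # flush any partial final batch
    return conn

# Usage, e.g.:
#   run_benchmark(os.path.join(tempfile.mkdtemp(), "bench.db"), 100_000)
```

(Timing writes/second per batch around the commit points is how the
graph above was produced.)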

I am using sqlite 3.7.15 through the Xerial JDBC driver (see
https://bitbucket.org/xerial/sqlite-jdbc). I built the driver myself due
to a glibc incompatibility (see
https://groups.google.com/d/msg/Xerial/F9roGuUjH6c/6RuxqmG6UK4J).

I am running on Gentoo. Output of uname -a:

Linux mymachine 3.2.1-c42.31 #1 SMP Mon Apr 30 10:55:12 CDT 2012 x86_64
Quad-Core AMD Opteron(tm) Processor 1381 AuthenticAMD GNU/Linux

The machine has 8 GB of memory.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
