On Fri, Dec 28, 2012 at 03:34:02PM -0600, Dan Frankowski wrote:
> I am running a benchmark of inserting 100 million (100M) items into a
> table. I am seeing performance I don't understand. Graph:
> http://imgur.com/hH1Jr. Can anyone explain:
> 
> 1. Why does write speed (writes/second) slow down dramatically around 28M
> items?

Most probably, the indices have become too large to fit in the in-memory
page cache. You can verify this by tracing system activity: the threshold
should manifest itself as a drastic increase in _read_ operations on the
disk(s).
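
Besides OS-level tracing, SQLite can report this from inside the process.
A minimal sketch in C (assuming SQLite 3.7.9 or later, which added the
SQLITE_DBSTATUS_CACHE_MISS counter; call it periodically from the insert
loop):

  #include <stdio.h>
  #include <sqlite3.h>

  /* Print pager-cache hit/miss counters for an open connection.
   * A sharp rise in misses as the table grows would point to the
   * indices no longer fitting in the page cache. */
  static void report_cache_stats(sqlite3 *db)
  {
      int hits = 0, misses = 0, hiwtr = 0;

      sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_HIT,  &hits,   &hiwtr, 0);
      sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_MISS, &misses, &hiwtr, 0);

      printf("page cache hits: %d  misses: %d\n", hits, misses);
  }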

> 2. Are there parameters (perhaps related to table size) that would change
> this write performance?

The cache_size pragma (PRAGMA cache_size). It makes sense to enlarge it to
use all of the available memory.
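
For example, a sketch of enlarging the cache from the application at
connection-open time (the -2000000 below, roughly 2 GiB, is only
illustrative; size it to the memory actually available, and it assumes a
SQLite recent enough to interpret a negative cache_size as a size in KiB):

  #include <sqlite3.h>

  /* Open the database with a much larger page cache for the bulk insert. */
  static int open_with_big_cache(const char *path, sqlite3 **db)
  {
      int rc = sqlite3_open(path, db);
      if (rc != SQLITE_OK)
          return rc;

      /* PRAGMA cache_size: positive = number of pages, negative = -N KiB. */
      return sqlite3_exec(*db, "PRAGMA cache_size = -2000000;", 0, 0, 0);
  }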

Valentin Davydov.