Hello Bob !

I'm using the default SQLite page size, but I also tried a 32KB page size: I got a slightly smaller overall database size but no visible performance gain in terms of time or I/O.
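For reference, the page-size experiment can be reproduced with a minimal sketch like the one below (an in-memory database stands in for the real file; PRAGMA page_size must be set before the database is populated, or be followed by a VACUUM so existing pages are rebuilt):

```python
import sqlite3

# Sketch: switch a database to a 32KB page size.
conn = sqlite3.connect(":memory:")  # stand-in for the real database file
conn.execute("PRAGMA page_size = 32768")
conn.execute("VACUUM")  # rebuilds the database using the new page size
page = conn.execute("PRAGMA page_size").fetchone()[0]
print(page)  # 32768
conn.close()
```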

Also, memory usage skyrocketed, even forcing the system into swap.

The OS was OS X Yosemite. I also posted earlier a small program with a sample of only the problematic data, which ends up with a database of around 340MB and the same poor performance.

Cheers !


On 01/10/16 19:34, Bob Friesenhahn wrote:
On Sat, 1 Oct 2016, Domingo Alvarez Duarte wrote:

Hello !

I'm using SQLite (trunk) for a database (see below); on a final database file of 22GB a "vacuum" was executed, and doing so produced a lot of I/O (134GB read and 117GB written in 2h30min).

What means are you using to evaluate the total amount of I/O?

At what level (e.g. OS system call, individual disk I/O) are you measuring the I/O?

If the problem is more physical disk I/O than expected, is it possible that the underlying filesystem block size does not match the block size that SQLite is using? You may have an issue with write amplification at the filesystem level.
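One way to check for such a mismatch is to compare PRAGMA page_size against the filesystem's reported block size. A sketch (the database path here is a throwaway temp file, not the 22GB database discussed above):

```python
import os
import sqlite3
import tempfile

# Compare SQLite's page size with the filesystem's preferred block size
# for a given database file. db_path is a hypothetical throwaway file.
db_path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE t(x)")  # force the file to be created on disk
conn.commit()
sqlite_page = conn.execute("PRAGMA page_size").fetchone()[0]
conn.close()

fs_block = os.statvfs(db_path).f_frsize  # filesystem fragment/block size
print(sqlite_page, fs_block)
if sqlite_page % fs_block and fs_block % sqlite_page:
    print("page size and filesystem block size do not align")
```

If neither size evenly divides the other, each SQLite page write can touch more filesystem blocks than necessary, which is one source of write amplification.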

Bob

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
