Re: [sqlite] What is wrong with these queries?

2012-12-29 Thread Igor Korot
Yuriy,

On Sat, Dec 29, 2012 at 8:49 PM, Yuriy Kaminskiy wrote:
> Igor Korot wrote:
>> Hi, ALL,
>>
>> sqlite> CREATE TABLE leagueplayers(id integer, playerid integer, value integer,
>> currvalue double, foreign key(id) references leagues(id), foreign key(playerid)
>> references players(playerid)); ...

Re: [sqlite] What is wrong with these queries?

2012-12-29 Thread Yuriy Kaminskiy
Igor Korot wrote:
> Hi, ALL,
>
> sqlite> CREATE TABLE leagueplayers(id integer, playerid integer, value integer,
> currvalue double, foreign key(id) references leagues(id), foreign key(playerid)
> references players(playerid));
> sqlite> INSERT INTO leagueplayers VALUES(1,(SELECT playerid,value,currvalue FROM players)); ...

[sqlite] What is wrong with these queries?

2012-12-29 Thread Igor Korot
Hi, ALL,

sqlite> CREATE TABLE leagueplayers(id integer, playerid integer, value integer,
currvalue double, foreign key(id) references leagues(id), foreign key(playerid)
references players(playerid));
sqlite> INSERT INTO leagueplayers VALUES(1,(SELECT playerid,value,currvalue FROM players));
Error: ...
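The INSERT above hands a three-column sub-select to a single value slot of a
four-column table, which SQLite rejects. A minimal sketch of what the
statement appears to intend (assuming only the two tables quoted above) is
INSERT ... SELECT, with the league id supplied as a constant:

sqlite> INSERT INTO leagueplayers (id, playerid, value, currvalue)
   ...> SELECT 1, playerid, value, currvalue FROM players;

Writing the column list out explicitly also keeps the statement valid if the
table gains columns later.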

[sqlite] System.Data.SQLite version 1.0.83.0 released

2012-12-29 Thread Joe Mistachkin
System.Data.SQLite version 1.0.83.0 (with SQLite 3.7.15.1) is now available
on the System.Data.SQLite website:

http://system.data.sqlite.org/

Further information about this release can be seen at

http://system.data.sqlite.org/index.html/doc/trunk/www/news.wiki

Please post on the SQLite mailing list ...

Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Simon Slavin
On 29 Dec 2012, at 9:45pm, Michael Black wrote:
> During the 1M commit the CPU drops to a couple % and the disk I/O is pretty
> constant... albeit slow

For the last few years, since multi-core processors have been common on
computers, SQLite performance has usually been limited by the performance ...
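For the disk-bound case Simon describes, two standard pragmas (not part of
his message, but documented SQLite settings) reduce how often a commit must
wait on a synchronous disk write; whether the durability trade-off is
acceptable depends on the application:

sqlite> PRAGMA journal_mode=WAL;    -- append commits to a write-ahead log
sqlite> PRAGMA synchronous=NORMAL;  -- in WAL mode, sync at checkpoints, not on every COMMIT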

Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Michael Black
Referencing the C program I sent earlier... I've found a COMMIT every 1M
records does best. I had an extra zero on my 100,000 which gives the EKG
appearance. I averaged 25,000 inserts/sec over 50M records with no big knees
in the performance (there is a noticeable knee on the commit though around ...
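The batching pattern being measured, sketched in shell form with a
hypothetical table t: one explicit transaction per large batch, so SQLite
pays for one durable commit per million rows instead of one per row:

sqlite> CREATE TABLE t(key INTEGER PRIMARY KEY, value TEXT);
sqlite> BEGIN;
sqlite> INSERT INTO t VALUES(1, 'a');
sqlite> INSERT INTO t VALUES(2, 'b');
sqlite> -- ... keep inserting until roughly 1M rows are queued ...
sqlite> COMMIT;
sqlite> BEGIN;  -- start the next batch

Without the explicit BEGIN/COMMIT, each INSERT runs in its own transaction
and each one waits for the disk.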

Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Michael Black
I wrote a C program doing your thing (with random data so each key is
unique). I see some small knees at 20M and 23M -- but nothing like what
you're seeing, as long as I don't do the COMMIT. Seems the COMMIT is what's
causing the sudden slowdown. When doing the COMMIT I see your dramatic
slowdown (a ...

Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Valentin Davydov
On Fri, Dec 28, 2012 at 03:35:17PM -0600, Dan Frankowski wrote:
>
> 3. Would horizontal partitioning (i.e. creating multiple tables, each for a
> different key range) help?

This would seriously impair read performance (you'd have to access two
indices instead of one).

Valentin Davydov.
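To make the objection concrete, a sketch with hypothetical tables t_low and
t_high standing in for two key-range partitions: a reader that does not know
which range holds a key must probe both indices:

sqlite> SELECT value FROM t_low WHERE key = 12345
   ...> UNION ALL
   ...> SELECT value FROM t_high WHERE key = 12345;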

Re: [sqlite] Fwd: Write performance question for 3.7.15

2012-12-29 Thread Valentin Davydov
On Fri, Dec 28, 2012 at 03:34:02PM -0600, Dan Frankowski wrote:
> I am running a benchmark of inserting 100 million (100M) items into a
> table. I am seeing performance I don't understand. Graph:
> http://imgur.com/hH1Jr. Can anyone explain:
>
> 1. Why does write speed (writes/second) slow down dramatically ...
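A common first experiment for this shape of slowdown (not something the
quoted reply mentions): once the index on randomly distributed keys outgrows
the page cache, most inserts start touching uncached B-tree pages, so
enlarging the cache moves the knee. The cache_size pragma takes a page
count, or kibibytes when negative:

sqlite> PRAGMA cache_size = -1048576;  -- ask for roughly 1 GiB of page cache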

Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Simon Slavin
On 29 Dec 2012, at 12:37pm, Stephen Chrzanowski wrote:
> My guess would be the OS slowing things down with write caching. The
> system will hold so much data in memory as a cache to write to the disk,
> and when the cache gets full, the OS slows down and waits on the HDD. Try
> doing a [dd] to a few gig worth of random data and see if you get the same
> kind of ...

Re: [sqlite] Fwd: Write performance question for 3.7.15

2012-12-29 Thread Stephen Chrzanowski
My guess would be the OS slowing things down with write caching. The system
will hold so much data in memory as a cache to write to the disk, and when
the cache gets full, the OS slows down and waits on the HDD. Try doing a
[dd] to a few gig worth of random data and see if you get the same kind of ...