Ok, I want to convert a large application which uses ad-hoc data
structures and file formats to an SQLite database.  But before I do
that, I am working on a proof-of-concept test to show that, yes, it
will work once the conversion is done.

But alas, my first test is failing miserably.  Well, it works, it
just takes way too long: at least all night.
I create the table with:

    create table tbl2 (
        f1 varchar(30),
        f2 varchar(30),
        f3 varchar(30),
        identity integer primary key autoincrement
    );

That works; I have the table.
I then insert rows with:

    insert into tbl2 values ('00000001', '00000001', '00000001', NULL);
    insert into tbl2 values ('00000002', '00000002', '00000002', NULL);

Ok, I did each insert as a separate query, following the 5-minute
example, and it took all night and never completed.
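
For reference, the per-row loop is essentially this (just a sketch:
the row count is a stand-in and error handling is trimmed):

/* Sketch of the first test: one sqlite3_exec() call per row,
   modeled on the quickstart example.  NROWS is a placeholder for
   the real row count; error handling trimmed. */
#include <stdio.h>
#include <sqlite3.h>

#define NROWS 1000000L    /* placeholder row count */

int main(void)
{
    sqlite3 *db;
    char sql[128];
    long row;

    sqlite3_open("test.db", &db);

    for (row = 1; row <= NROWS; row++) {
        snprintf(sql, sizeof sql,
            "insert into tbl2 values ('%08ld', '%08ld', '%08ld', NULL);",
            row, row, row);
        sqlite3_exec(db, sql, NULL, NULL, NULL);  /* one statement per call */
    }

    sqlite3_close(db);
    return 0;
}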

So then I decided: ok, the fsync() done to serialize each query is
the problem.

So I looked at limits.html and made a single query out of as many
inserts as would fit in a 1,000,000 byte buffer.  That turned out to
be 15625 inserts.  That should be one fsync per query, so it should
be much faster... but alas, I don't think it is.
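
Roughly how I build and run the batch (again just a sketch; error
handling trimmed):

/* Sketch of the batched test: pack insert statements into one
   ~1,000,000 byte buffer and run it as a single sqlite3_exec()
   call.  Error handling trimmed. */
#include <stdio.h>
#include <string.h>
#include <sqlite3.h>

#define BUF_MAX 1000000   /* limit taken from limits.html */

int main(void)
{
    static char buf[BUF_MAX];
    sqlite3 *db;
    size_t used = 0;
    long row = 1;

    sqlite3_open("test.db", &db);

    /* Pack as many inserts as will fit in the buffer. */
    for (;;) {
        char stmt[128];
        int n = snprintf(stmt, sizeof stmt,
            "insert into tbl2 values ('%08ld', '%08ld', '%08ld', NULL);\n",
            row, row, row);
        if (used + (size_t)n + 1 > BUF_MAX)
            break;
        memcpy(buf + used, stmt, (size_t)n + 1);  /* keep buf NUL-terminated */
        used += (size_t)n;
        row++;
    }

    /* One call for the whole batch. */
    sqlite3_exec(db, buf, NULL, NULL, NULL);

    sqlite3_close(db);
    return 0;
}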

My second test will be a random query into this large table to see
how long that takes, then random queries from a large number of
threads.  If I can get a query time under 1 second for pulling up a
record, this product goes into my application.
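
For the lookup I have something along these lines in mind (sketch;
which column I key on is still up in the air):

/* Sketch of the single-record lookup to be timed: prepare, bind
   the key, step once, read the row. */
#include <stdio.h>
#include <sqlite3.h>

int lookup(sqlite3 *db, const char *key)
{
    sqlite3_stmt *stmt;
    int found = 0;

    sqlite3_prepare_v2(db,
        "select f1, f2, f3 from tbl2 where f1 = ?", -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, key, -1, SQLITE_STATIC);

    if (sqlite3_step(stmt) == SQLITE_ROW) {
        printf("%s %s %s\n",
               (const char *)sqlite3_column_text(stmt, 0),
               (const char *)sqlite3_column_text(stmt, 1),
               (const char *)sqlite3_column_text(stmt, 2));
        found = 1;
    }
    sqlite3_finalize(stmt);
    return found;
}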

Any ideas?
Thanks,
David
