On 24 Feb 2011, at 10:51pm, Greg Barker wrote:

> What do
> you do if there could be anywhere between 30-150 columns?

I would never have any table with 150 columns.  It should be possible to keep 
the schema for your table in your head.
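
If you do end up with that many attributes, one common alternative is to keep the handful of core columns in a narrow table and store the sparse, rarely-queried attributes as rows in a key/value side table. A minimal sketch of that idea, assuming a hypothetical `sample` table (none of these names come from the original thread):

```python
import sqlite3

# Hypothetical schema: a narrow core table plus a side table holding
# one row per (sample, attribute), instead of 150 mostly-NULL columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sample (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE sample_attr (
        sample_id INTEGER REFERENCES sample(id),
        attr      TEXT,
        value,                         -- untyped is fine in SQLite
        PRIMARY KEY (sample_id, attr)
    );
""")
conn.execute("INSERT INTO sample (id, name) VALUES (1, 'first')")
conn.executemany(
    "INSERT INTO sample_attr VALUES (1, ?, ?)",
    [("colour", "red"), ("weight", 12.5)],
)

# Fetch one sample together with its sparse attributes.
row = conn.execute("SELECT name FROM sample WHERE id = 1").fetchone()
attrs = dict(conn.execute(
    "SELECT attr, value FROM sample_attr WHERE sample_id = 1"))
print(row[0], attrs)
```

The trade-off is the usual one: queries over many attributes at once become joins or pivots, but the schema stays small enough to keep in your head, and adding an attribute no longer means altering the table.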

> Optimizing performance
> for an application where both the queries and data can take many different
> shapes and sizes is beginning to seem like quite a daunting task.

For 'daunting' read 'impossible', since the optimal setup will vary over the 
life of the system you're writing, as the tables get bigger, the operating 
system gets slower and the hardware gets faster.  If there were a good general 
solution it would be built into SQLite as the default setting.

There's a big difference between 'fast enough to do what's wanted' and 'as fast 
as possible without heroic measures'.  The first is vital to the success of the 
project.  The second is just bragging rights.  I almost never make performance 
tweaks because although they might save me 600 milliseconds on the sort of 
lookup my users actually do, my users don't notice a difference of just over 
half a second.

My employer pays about 240 dollars a day for my time.  So for 480 dollars I 
could spend two days optimizing one of my schemas, or that money could buy a 
faster computer (or hard disk?) and speed up not only database access but also 
everything else done with that computer.  Since I have too much to do as it is, 
I'm unlikely to spend the extra two days and emerge with a program which does 
weird things in a weird way just to save a second or two.  I'm curious what my 
customers would have chosen back when I was contracting and being paid a great 
deal more than 240 dollars a day for my time.

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users