Hi Jay,

I have a database that is around 250 MB, with a table of around 140,000
records. A count(*) on that table takes 473 seconds, which seems much
longer than it should.

Here is the schema -

create table ES_TASK_DETAILS (
GID INTEGER PRIMARY KEY AUTOINCREMENT,
TASK_INFO_GID INTEGER NOT NULL,
FILE_TYPE_GID INTEGER NOT NULL,
EXTRACTED_PATH VARCHAR(1024),       -- normally around 256 chars
TARGET_PATH VARCHAR(1024),          -- normally around 256 chars
MEDIA_PATH VARCHAR(1024) NOT NULL,  -- normally around 256 chars
SIZE LONG,
CREATION_TIME LONG,
MODIFICATION_TIME LONG,
JOBID_GID NOT NULL,
IS_IN_DU INTEGER NOT NULL,
NAME VARCHAR(255)                   -- normally around 128 chars
);
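
One workaround I am considering (just a sketch; the ES_TASK_COUNT table
and trigger names below are made up) is to cache the row count in a
one-row table maintained by triggers, so getting the count becomes a
single-row read instead of a full table scan:

-- One-row table holding the cached count, seeded from the current data
CREATE TABLE ES_TASK_COUNT (N INTEGER NOT NULL);
INSERT INTO ES_TASK_COUNT VALUES ((SELECT count(*) FROM ES_TASK_DETAILS));

-- Keep the cached count in step with inserts and deletes
CREATE TRIGGER ES_TASK_DETAILS_INS AFTER INSERT ON ES_TASK_DETAILS
BEGIN
  UPDATE ES_TASK_COUNT SET N = N + 1;
END;

CREATE TRIGGER ES_TASK_DETAILS_DEL AFTER DELETE ON ES_TASK_DETAILS
BEGIN
  UPDATE ES_TASK_COUNT SET N = N - 1;
END;

-- Getting the count is now a single-row lookup
SELECT N FROM ES_TASK_COUNT;

Every insert and delete then pays a small extra cost for the trigger, so
this only makes sense if the total count is needed often.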

$dbh->do("PRAGMA page_size=4096");
$dbh->commit;
$dbh->do("PRAGMA synchronous=0");
$dbh->commit;
$dbh->do("PRAGMA default_cache_size=32768");
$dbh->commit;
$dbh->do("PRAGMA cache_size=8192");
$dbh->commit;
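
To confirm these actually take effect (as far as I know, page_size only
sticks if it is set before the database file has any content in it), the
values can be read back, for example from the sqlite3 shell:

-- Report the values currently in effect
PRAGMA page_size;
PRAGMA cache_size;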

Raj

On 7/17/06, Jay Sprenkle <[EMAIL PROTECTED]> wrote:

On 7/17/06, Sripathi Raj <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I read in one of the threads that count(*) has to be hand-optimized
> since SQLite doesn't optimize it. Any pointers on how to do that?

When doing joins, put the table that contributes the smallest number of
rows to the result set first.
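
For example (made-up table names), if lookup only matches a handful of
rows it should come first:

-- lookup contributes few rows, events contributes many,
-- so lookup is listed first
SELECT e.message
FROM lookup l, events e
WHERE l.category = 'errors'
  AND e.lookup_id = l.id;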

Don't use inexact comparisons in the WHERE clause (LIKE, not-equal, and
so on).
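
For instance (made-up names, assuming an index exists on NAME), an exact
comparison can use the index while an inexact one can't:

CREATE INDEX IDX_NAME ON MY_TABLE(NAME);

-- exact comparison: the index can be used
SELECT * FROM MY_TABLE WHERE NAME = 'report.txt';

-- inexact comparisons: every row has to be examined
SELECT * FROM MY_TABLE WHERE NAME LIKE '%report%';
SELECT * FROM MY_TABLE WHERE NAME <> 'report.txt';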

The EXPLAIN keyword in the command-line shell helps you compare different
statements and see how they are executed.
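
For example, from the sqlite3 shell:

-- prefix any statement with EXPLAIN to see how SQLite will run it
EXPLAIN SELECT count(*) FROM MY_TABLE;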

--
SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com
