dbikash <[EMAIL PROTECTED]> writes:

> 
> 
> Hello,
> 
> My database contains a 'time' field where I store the unix epoch
> timestamp (generated programmatically using the time() function in a C/Linux
> environment). Hence the database actually contains records sorted by time,
> though there may be multiple records with the same timestamp. I also have an
> index on this time field.
> 
> I will have to keep rolling my database such that it contains only 5 days of
> data. I also have a restriction on the database size. Therefore I have a
> function that gets invoked periodically and decides if any data needs to be
> deleted. If yes, it calculates the time from which data will be deleted,
> i.e. all records prior to this time can be deleted.
> 
> However, the delete command "DELETE FROM table WHERE time < XXX" is very
> slow (XXX is in time_t format). VACUUM is even slower. (I need to VACUUM
> since I have a restriction on the database size.) My database has 1 million
> records and is more than 100 MB in size.
> 
> What would be the optimal way to perform the deletion?
> 
> Thanks,
> Bikash

An alternative approach might be to 'reuse' the old entries rather
than deleting old rows and inserting new ones.

So, to add a 'new' entry, you could instead recycle an expired row.
One caveat: stock SQLite builds do not accept a LIMIT clause on UPDATE
(that requires the SQLITE_ENABLE_UPDATE_DELETE_LIMIT compile-time
option), but you can get the same effect by targeting a single rowid
with a subquery:

UPDATE table SET <new column contents>
WHERE rowid = (SELECT rowid FROM table
               WHERE table.timestamp is old LIMIT 1);

Obviously you would need to replace "table.timestamp is old" with a valid
criterion matching your requirements.

You would also need a fallback INSERT for the case where no 'old'
record is available to be replaced.
That fallback is also where your size restriction can be enforced most
effectively.
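As a rough sketch of the recycle-with-fallback idea (using Python's
sqlite3 module purely for illustration; the table name, columns, and
five-day cutoff are made-up stand-ins for your schema):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (time INTEGER, value REAL)")
conn.execute("CREATE INDEX idx_time ON readings(time)")

FIVE_DAYS = 5 * 24 * 3600  # retention window from the original post

def add_reading(conn, value):
    now = int(time.time())
    cutoff = now - FIVE_DAYS
    # Try to recycle one expired row instead of DELETE + INSERT.
    # Stock SQLite has no UPDATE ... LIMIT, so we target a single
    # rowid chosen by a subquery (the index on time keeps it cheap).
    cur = conn.execute(
        """UPDATE readings SET time = ?, value = ?
           WHERE rowid = (SELECT rowid FROM readings
                          WHERE time < ? LIMIT 1)""",
        (now, value, cutoff))
    if cur.rowcount == 0:
        # No expired row to reuse: fall back to a plain INSERT.
        # This branch is also the natural place to check the
        # database-size restriction before growing the table.
        conn.execute("INSERT INTO readings(time, value) VALUES (?, ?)",
                     (now, value))

add_reading(conn, 1.5)
```

Because rows are overwritten in place, the file never grows past its
steady-state size, so the expensive VACUUM step drops out entirely.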

Just a thought ...

MikeW

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
