As I have previously written, we have a use case where a process continually 
inserts records into a table at a rate of about 8 per second right now, every 
second of every day.  These are test measurement records being received from 
devices in a network.  Because of disk space requirements, only 30 days of 
data are kept, and older data is purged out each day.   The purging has a 
significant impact on the insertion rate and makes it hard to sustain 
throughput.  In addition, since this is a continuous process, there is no 
down time except maybe once a month.  

It seems this use case is not handled that well by the database, and from 
searching the net, it seems that many other databases have the same issue.  
So I was wondering if it might be possible to attack this problem at the 
application level.  Instead of having one table that is constantly being 
inserted into and deleted from, what if I had 6 tables, one per week, and the 
insertion always goes into the proper table for the week associated with the 
data.  When the 6th week's table starts to be inserted into, the oldest table 
can be dropped and recreated as the next week's available table.   It does 
make insertion and querying trickier, but purging old results becomes a 
matter of dropping and recreating a table.
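To make the rotation concrete, here is a minimal sketch of the routing logic in Python (the table name prefix `measurements_` and the Monday epoch are my own placeholders, not anything from your schema). With 6 slots and 5 weeks of retained data, each date maps to a slot by week number modulo 6, and when a new week's table starts receiving inserts, the slot one step ahead holds the oldest week and is the one to drop and recreate:

```python
from datetime import date

NUM_TABLES = 6  # 5 full weeks of history plus the week currently being filled

def week_index(d: date) -> int:
    """Map a date to one of NUM_TABLES rotating slots: whole weeks elapsed
    since a fixed Monday epoch, taken modulo the number of tables."""
    epoch = date(2001, 1, 1)  # a Monday; any fixed Monday works
    return ((d - epoch).days // 7) % NUM_TABLES

def table_for(d: date) -> str:
    """Name of the weekly table that should receive a record dated d."""
    return f"measurements_{week_index(d)}"

def table_to_drop(d: date) -> str:
    """Once week d's table is in use, the next slot around the ring holds
    the oldest (5-week-old) data; drop and recreate it so it is empty and
    ready when the following week begins."""
    return f"measurements_{(week_index(d) + 1) % NUM_TABLES}"
```

The insert path would call `table_for()` to pick the target table, and a weekly maintenance job would issue a `DROP TABLE` / `CREATE TABLE` on whatever `table_to_drop()` returns; queries spanning more than one week would need a `UNION` over the relevant tables, which is the trickiness mentioned above.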

Any thoughts on an approach like this?  This almost seems like something that 
the database could support...

Brett 
