Thanks a lot for this information.

We have disabled the writing of data to the rollback journal file before 
writing to the database (could this account for 10 writes instead of 12?). We 
did this to speed up writes to the database, and it has helped. From your 
email, though, it seems this could be a bad idea and open up the possibility 
of database corruption. We really need to limit the number of writes made to 
the flash, as it is a NOR flash chip and write speed is quite slow. Is it 
possible to create this journal file in RAM somehow? This assumes the journal 
file will not be much bigger than one page (1024 bytes), as we are running on 
an embedded system and are a little limited for RAM.
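For example, the sort of thing we are hoping is possible is sketched below.
The "journal_mode = MEMORY" pragma is only our guess at how such an option
might be exposed; we have not verified that it exists in the SQLite version
we are using, so please treat it as an illustration of the question rather
than working code.

    /* Illustrative sketch only: assumes an SQLite build that offers a way
    ** to keep the rollback journal in memory (e.g. a "journal_mode" pragma);
    ** we have not confirmed this is available in our version. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        char *err = 0;

        if (sqlite3_open("data.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Ask for an in-memory rollback journal instead of a journal file
        ** on the NOR flash.  The exact pragma name is our assumption. */
        if (sqlite3_exec(db, "PRAGMA journal_mode = MEMORY;", 0, 0, &err)
                != SQLITE_OK) {
            fprintf(stderr, "pragma failed: %s\n", err);
            sqlite3_free(err);
        }

        /* ... normal inserts/updates would go here ... */

        sqlite3_close(db);
        return 0;
    }

Even if that exact pragma is not available, any equivalent way of keeping the
journal off the NOR flash (while still getting atomic commits) would solve our
problem.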

Thanks again for your help

Mark

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: 04 October 2005 11:47
> To: [email protected]
> Subject: Re: [sqlite] Why so many page writes to database file on
> inserts and updates?
> 
> 
> "Mark Allan" <[EMAIL PROTECTED]> wrote:
> > Can someone out there advise me as to why SQlite needs to perform 
> > so many writes to disk when executing an insert. The insert consists
> > of 6 columns. There is a total of 5 indexes on the table. 
> The average 
> > size of one of these records is around 800 bytes. We have an SQlite 
> > page size of 1024 bytes. But it seems that 10 pages are 
> being written to. 
> > 
> 
> The table itself and all indices are stored separately from
> one another.  So right away there are 6 pages that need updating
> with any change.  Then each page must be written twice - 
> once to the rollback journal and once to the main database.
> This is necessary so that updates are atomic and so that
> an unexpected crash or power loss does not corrupt the
> database file.  So for inserting into a table with 5
> indices, I count 12 pages that need to be written, not 10.
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
> 
