Hi,

I am no expert, but try increasing your btree page size to match the default
page size of your storage. I think SQLite defaults to a 1K page size, but I'm
sure you can bump it up to 4K and see if that helps. I work with rather large
databases (5-8 GB), and although increasing my page size from 1K to 8K (Unix
server with an 8K page size) made no difference for me, it seems like it
should.
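
From memory, setting it up on a new database looks something like this (the
table here is just a placeholder, and as far as I know the page size can only
be changed before the file contains any data, so check the pragma docs):

  PRAGMA page_size = 4096;   -- must run before any tables or data exist
  CREATE TABLE t(id INTEGER PRIMARY KEY, payload BLOB);
  PRAGMA page_size;          -- read it back to confirm the new size took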

Messing with the default_cache_size does make a massive difference,
especially on systems with lots of applications hogging the I/O. I also
started by just increasing it to an arbitrary number such as 20000, but try
to work out how much of the system's memory you are willing to allocate to
SQLite and set it accordingly. For my 4-5 GB database I set my cache size so
that SQLite can use up to about 1 GB of memory. It is easy to work out: take
the amount you want to allocate, say 500 MB, divide it by the btree page
size, and that is the number you set default_cache_size to.
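
For example, with a 4K page size and a 500 MB budget, that works out to
500 * 1024 * 1024 / 4096 = 128000 pages, so roughly:

  -- 500 MB of cache / 4096-byte pages = 128000 pages
  PRAGMA default_cache_size = 128000;  -- persistent, stored in the database file
  PRAGMA cache_size = 128000;          -- or set it for the current connection only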

Also, if you can somehow guarantee that the system won't crash (which you
can't), or if losing the database doesn't matter because you can just rebuild
it (or you have a backup), then do a "PRAGMA synchronous = OFF" or the safer
"PRAGMA synchronous = NORMAL". This makes a massive difference, but at the
cost of possibly losing your database if something goes wrong.
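
Both of these are per-connection settings, so as far as I know you need to
issue them each time you open the database, e.g.:

  PRAGMA synchronous = OFF;     -- fastest; an OS or power failure can corrupt the db
  PRAGMA synchronous = NORMAL;  -- safer compromise; still fewer syncs than the default FULL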

I hope it helps.


Kastuar, Abhitesh wrote:
> 
> Hi,
> 
> I am running into some issues that seem related to the current database
> file size.
> 
>  
> 
> Our application is periodically saving about 150MB of data to the
> database. Starting around the 30th interval or so, the time to insert
> the data grows significantly - it initially goes up by 3-4x but then
> each subsequent save cycle continues to grow eventually leading to
> performance that is 20x slower than it is at the beginning.
> 
>  
> 
> Some of the changes I have tried: 
> 
> - larger transactions, i.e. more inserts coalesced into one "giant"
> transaction
> 
> - changing the default_cache_size to 20000
> 
>  
> 
> My next thought was to try "VACUUM"ing the database after each save
> cycle (because I am seeing extensive disk fragmentation). However, I am
> not sure if that will eliminate the drastic performance degradation.
> 
>  
> 
> Our application is running on Windows XP (Core 2 Duo with 2 GB of RAM and a
> 200 GB hard drive).
> 
>  
> 
> Would appreciate any pointers....
> 
>  
> 
> Thanks.
> 
> -Abhitesh.
> 
