Yes, that does help me. Thank you for sharing!
-Dan
Rajesh Nair-5 wrote:
>
> I have a real-time program which logs more than 30,000 records, each
> record of about 200 bytes, per day, and the company where it is
> installed operates 24/365. I installed the project in August 2005 and

I have a real-time program which logs more than 30,000 records, each
record of about 200 bytes, per day, and the company where it is
installed operates 24/365. I installed the project in August 2005 and
it has been working fine to date. It performs some report generation
(4 or 5 reports) every day.
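(For a rough sense of scale, assuming the records stay near 200 bytes:
30,000 records/day x 200 bytes is about 6 MB/day, so on the order of
2 GB of raw row data per year before indexes and page overhead.)
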
Hello!
In the message of Monday 16 February 2009 22:14:03, Jay A. Kreibich wrote:
> > Of course, write operations must be grouped because memory allocation
> > for a write transaction is proportional to database size (see the
> > official site).
>
> This limitation was removed about a year ago, around 3.5.7.

On Mon, Feb 16, 2009 at 07:55:33PM +0300, Alexey Pechnikov scratched on
the wall:
> Of course, write operations must be grouped because memory allocation
> for a write transaction is proportional to database size (see the
> official site).
This limitation was removed about a year ago, around 3.5.7. Rather
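For readers who have not done the grouping before, here is a minimal
sketch of batched inserts inside one explicit transaction, using the
SQLite C API from C++; the log table and its two columns are invented
for the example:

#include <sqlite3.h>
#include <cstdio>
#include <ctime>

/* Insert a batch of rows inside one explicit transaction so the
   journal and commit work happen once per batch instead of once per
   row. The "log" table and its columns are invented for the example. */
static int insert_batch(sqlite3 *db, int n)
{
    char *err = 0;
    if (sqlite3_exec(db, "BEGIN", 0, 0, &err) != SQLITE_OK) {
        std::fprintf(stderr, "BEGIN failed: %s\n", err);
        sqlite3_free(err);
        return 1;
    }

    sqlite3_stmt *stmt = 0;
    if (sqlite3_prepare_v2(db, "INSERT INTO log(ts, payload) VALUES(?, ?)",
                           -1, &stmt, 0) != SQLITE_OK) {
        std::fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
        return 1;
    }

    for (int i = 0; i < n; i++) {
        sqlite3_bind_int64(stmt, 1, (sqlite3_int64)std::time(0));
        sqlite3_bind_text(stmt, 2, "sample record", -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) != SQLITE_DONE)
            std::fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));
        sqlite3_reset(stmt);
    }

    sqlite3_finalize(stmt);
    return sqlite3_exec(db, "COMMIT", 0, 0, 0) == SQLITE_OK ? 0 : 1;
}

Batching a few hundred or a few thousand rows per COMMIT keeps the
per-row journal and fsync overhead small, whichever SQLite version you
are on.
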
Hello!
In the message of Monday 16 February 2009 17:42:25, danjenkins wrote:
> I fully understand that performance will depend on the coding, database
> structure and indexing (& hardware) but, assuming these are taken care of,
> should a 100 million record table perform loosely in the same
> performance
Hi. I've started a SQLite C++ project that could peak at 100 million records
(250 bytes per record spread over 20 fields) and would like to ask if anyone
has seen SQLite projects of this magnitude.
The Windows data logging project will add up to 1 million records per day
and run queries containing
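
To make the question a bit more concrete, here is a rough sketch of the
kind of schema and index I would start from for a table like that,
again through the SQLite C API from C++. Every name in it (the file,
the table, the columns, the index) is a placeholder, and the real table
would carry the full 20 fields:

#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3 *db = 0;
    if (sqlite3_open("datalog.db", &db) != SQLITE_OK) {  /* placeholder file name */
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* Trimmed-down stand-in for the real 20-column record. An INTEGER
       PRIMARY KEY keeps the rowid compact, and the index on the
       timestamp column is what the report queries would hit instead of
       scanning all 100 million rows. */
    const char *ddl =
        "CREATE TABLE IF NOT EXISTS log("
        "  id INTEGER PRIMARY KEY,"
        "  ts INTEGER NOT NULL,"
        "  channel INTEGER,"
        "  value REAL,"
        "  note TEXT);"
        "CREATE INDEX IF NOT EXISTS log_ts ON log(ts);";

    char *err = 0;
    if (sqlite3_exec(db, ddl, 0, 0, &err) != SQLITE_OK) {
        std::fprintf(stderr, "schema failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}

With an index that matches what the reports filter on, query time
scales with the rows actually touched rather than with the full table;
the trade-off is that every extra index slows the 1-million-rows-per-day
insert load, so I would only index the columns the reports really need.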