Dennis,

I think I will go the normalized way, since the objects we are talking about
are in fact sub-objects, and I can keep a list of them.

Do you (or anyone else) have any experience with those large DBs in terms of
SELECT execution times?
ms, seconds or minutes?

Since I would mainly use field numbers in the WHERE clause, I'd create an
index on them, I guess?
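A minimal sketch of that idea, using Python's built-in sqlite3 module. The table and column names (field_data, object_id, field_number, value) are illustrative assumptions, not taken from the thread:

```python
import sqlite3

# Hypothetical normalized schema: one row per (object, field) pair.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE field_data (
        object_id    INTEGER,
        field_number INTEGER,
        value        TEXT
    )
""")
# An index on the column used in the WHERE clause lets SQLite find
# matching rows without scanning the whole table.
con.execute("CREATE INDEX idx_field_number ON field_data(field_number)")

con.executemany(
    "INSERT INTO field_data VALUES (?, ?, ?)",
    [(1, 7, "alpha"), (1, 8, "beta"), (2, 7, "gamma")],
)

# The query planner can satisfy this WHERE clause via the index.
rows = con.execute(
    "SELECT object_id, value FROM field_data WHERE field_number = 7"
).fetchall()
print(rows)  # [(1, 'alpha'), (2, 'gamma')]
```

With an index like this, lookups by field number stay close to logarithmic in table size rather than degrading to a full scan.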


The client first wanted to store that data in an XML file,
but I was able to convince him to use a DB.

The client is not always right,
but he's always the client ;)

Thanks again,

André 

-----Original Message-----
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 10, 2006 10:23 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to handle large amount of data?

André Goliath wrote:

>Dennis,
>I didn't know references were possible in SQLite 3.2,
>thanks for pointing that out!
>  
>
The reference keyword in the column declaration doesn't really do 
anything in SQLite. I use it more as a comment for human readers so they 
know which column to use when joining tables.
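That point can be demonstrated with a small sketch (schema names are made up for illustration): with foreign-key enforcement off, which is SQLite's default, a REFERENCES clause is accepted but not enforced, so it serves purely as documentation for readers.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE field_data (
        object_id    INTEGER REFERENCES objects(id),  -- hint for human readers
        field_number INTEGER,
        value        TEXT
    );
""")
# This row points at a nonexistent objects.id, yet the insert succeeds,
# because the REFERENCES clause is not being enforced.
con.execute("INSERT INTO field_data VALUES (999, 1, 'dangling')")
count = con.execute("SELECT COUNT(*) FROM field_data").fetchone()[0]
print(count)  # 1
```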

>Do you have more specific information on how that is implemented in SQLite?
>Would you think the amount of memory needed would be acceptable for a
>field_data table with
>1,000,000 objects * 21 fields * 1,000 chars?
>It is quite unlikely to reach that DB size,
>but I have to expect the unexpected ;)
>
>  
>
Well, that would be a large table, about 21 GB plus overhead, say 25-30 
GB. But if that's the amount of data you have to store, you will have to 
store it somewhere. It doesn't take any more space to store the data 
in a normalized table.
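The back-of-envelope arithmetic behind that 21 GB figure (raw character data only, before any per-row or index overhead):

```python
# 1,000,000 objects, each with 21 fields of up to 1,000 characters.
objects, fields, chars = 1_000_000, 21, 1_000
raw_bytes = objects * fields * chars
print(raw_bytes / 1e9)  # 21.0 GB of raw character data
```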

As Jay said, the table is never loaded into memory, so memory should not 
be an issue.

HTH
Dennis Cote
