On 14 Aug 2009, at 3:24pm, Dmitri Priimak wrote:

> I have a database with a few simple tables. The database is updated
> regularly and then distributed to the clients, which only use it
> for reading, so concurrency is not an issue. But the database is
> already quite large: the file is about 2.6GB, one table has about
> 1,800,000 rows, and another is even larger. Counting all rows in
> all tables it has about 5 million rows. In a few years it will grow
> to about 80 million rows. So, do you think SQLite can scale to that
> level?

No problem.  Your problems, if any, will come from the underlying  
hardware and OS: enough memory, for a start.
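
If the client machines have RAM to spare, raising SQLite's page  
cache is the cheapest first step.  A minimal sketch; the 100000  
figure is made up, tune it to the client hardware:

    -- cache_size is measured in pages; at the default 1KB page
    -- size this is roughly 100MB of cache
    PRAGMA cache_size = 100000;
    -- keep temporary sorting and indexing structures in memory
    PRAGMA temp_store = MEMORY;

Run those once per connection on the client side; they are not  
stored in the database file.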

You could look into compression.  I assume the copy of the database  
you distribute has no unnecessary indices in it, and that if you  
delete records or indexes you VACUUM occasionally, as in the sketch  
below.
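
Something like this before you cut the distribution copy (a sketch;  
idx_unused is a made-up name standing in for whatever indices the  
read-only clients never touch):

    -- drop indices the clients do not need, then rebuild the file
    -- so the freed pages are actually given back and it shrinks
    DROP INDEX IF EXISTS idx_unused;
    VACUUM;

VACUUM rewrites the whole database, so do it on the master copy  
before distribution, not on the clients.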

Dr Hipp has his own extension that allows you to read directly from a  
compressed and encrypted database:

<http://www.hwaci.com/sw/sqlite/prosupport.html#compress>

It'll cost money, but it means you can continue to distribute on a  
single DVD (assuming your data is compressible enough), and it gives  
you encryption as well.  Your only problem is that you're at Stanford  
and Dr Hipp was at Duke, so he hates you.

Simon.