On Fri, Aug 14, 2009 at 07:24:31AM -0700, Dmitri Priimak scratched on the wall:

> I have a database with a few simple tables. The database is updated 
> regularly and then distributed to the clients, which only use it for 
> reading. So concurrency is not an issue. But the database is already 
> quite large. The file is about 2.6GB; one table has about 1,800,000 rows 
> and another is even larger. Counting all rows in all tables, it has 
> about 5 million rows. In a few years it will grow to about 80 million 
> rows. So, do you think that SQLite can scale to that level?

  Yes.  SQLite can handle the numbers.  That's not an issue.  The
  question is really just one of performance.  ~40GB is a fair amount
  of data to move on and off disk, although the cost depends heavily
  on how the data is used.  Databases of that size are uncommon in the
  SQLite world, but not unheard of.

  The good news is that if the database is distributed as a
  more-or-less "read-only" image, you can do a lot to prep it for
  performance.  For example, VACUUM the database when you're done
  building it, tune the page size, add lots of indexes to trade space
  for read performance, and so on.  As long as you're getting
  performance you're happy with, SQLite can handle the size.
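  A minimal sketch of that prep step, using Python's stdlib sqlite3
  module (the table and index names here are hypothetical, and 8192 is
  just an example page size -- pick values that match your own schema
  and benchmarks):

```python
import sqlite3

def prep_readonly_image(path):
    """Prep a database image before distribution: set a larger page
    size, add an extra index, then VACUUM to rebuild the file.
    Table/index names below are illustrative, not from the thread."""
    con = sqlite3.connect(path)
    # The new page size only takes effect when the file is rebuilt,
    # so set the pragma *before* running VACUUM.
    con.execute("PRAGMA page_size = 8192")
    # Trade disk space for read performance with extra indexes.
    con.execute("CREATE INDEX IF NOT EXISTS idx_items_name ON items(name)")
    con.commit()
    # VACUUM rewrites the whole database, compacting it and applying
    # the new page size; do this as the last step before shipping.
    con.execute("VACUUM")
    con.close()
```

  Running VACUUM last also defragments the tables and indexes, which
  matters for a file the clients will only ever read.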

    -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users