On Mon, Jul 18, 2011 at 05:01:55PM +0200, Stephan Beal scratched on the wall:
> On Mon, Jul 18, 2011 at 4:58 PM, [email protected]
> <[email protected]>wrote:
> 
> > These are addresses accessed by a program. There will be 100 billion
> > entries.
> 
> You won't be able to fit that many in your database - sqlite3 cannot scale
> to the file size you will need for that. Assuming 10-byte addresses (as you
> demonstrated), 10 bytes x 100B records = 1 terabyte JUST for the addresses
> (not including any sqlite3-related overhead per record, which is probably
> much larger than the 10 bytes you're saving).

  In theory, the maximum size of an SQLite database is 128 TB.

  2^31 pages (2 giga-pages) @ 2^16 bytes (64 KiB) = 128 TB, or ~140e12 bytes.

  (I know http://sqlite.org/limits.html says 14TB, but I think they
   dropped a digit)
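  For the record, the arithmetic works out like this (a quick sketch of
  the calculation above, nothing SQLite-specific):

```python
# Maximum theoretical SQLite database size, per the figures above.
max_pages = 2 ** 31    # maximum page count (2 giga-pages)
page_size = 2 ** 16    # maximum page size: 64 KiB
max_bytes = max_pages * page_size

print(max_bytes)             # 140737488355328 bytes, i.e. ~140e12
print(max_bytes / 2 ** 40)   # 128.0 -> 128 TiB
```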

  Whether your file system can handle this is a different story.

  Using SQLite for this type of data seems very questionable, however.
  As Stephan points out, the database with just the addresses is likely
  to be in the 3 to 4 TB range. You said "There will be 100 billion
  entries or so like this, which makes it necessary to use the
  database," but I think just the opposite is true.  If you have a
  *very* large number of data points with a very specific access
  pattern, using a general purpose tool seems like exactly the wrong
  choice.  You need some custom system that is highly optimized for
  both storage space and your specific access patterns.
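  A back-of-the-envelope version of that 3-4 TB estimate, assuming
  10-byte addresses as in the example; the ~25 bytes of per-row overhead
  is my assumption (rowid, cell header, b-tree bookkeeping), not a
  measured SQLite figure:

```python
# Rough storage estimate for 100 billion address rows in SQLite.
rows = 100_000_000_000    # 100 billion entries
addr_bytes = 10           # 10-byte addresses, as in the example
overhead_bytes = 25       # ASSUMED per-row overhead, not measured

total_bytes = rows * (addr_bytes + overhead_bytes)
print(total_bytes / 1e12)   # 3.5 -> ~3.5 TB, squarely in the 3-4 TB range
```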

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson