On 17/04/2009 12:01 AM, Vinnie wrote:
> Dear Group:
> 
> I've done some calculations and it's a fairly likely scenario that my users 
> will end up with sqlite databases that are over 1 gigabyte in size, in some 
> cases 4 gigabytes. An upper limit on the number of rows in a table could be 
> as high as 100,000 (yeah, that's not very high). There are rows containing 
> blobs that average around 50 kilobytes in size.
> 
> Is there a limit to the database size on Windows or Macintosh? I did a search 
> and the only thing I came up with was that large file support was enabled for 
> Unix in one of the releases.
> 
> I'm looking at sqlite.c from the amalgamation and it says that >2GB file 
> support is enabled on POSIX if the underlying OS supports it. And "Similar is 
> true for Mac OS X". But there is no mention of Windows.

IIRC: earlier this week, Richard Hipp, in response to a question about 
scalability, gave the impression that databases up to 2 TiB would behave 
linearly. So the only remaining question is whether your filesystem can 
handle a file as large as you need (a) at all, (b) reliably, and (c) fast 
enough.

My *guess* is that you shouldn't have any problem (except on a Windows 
"FAT" filesystem, where FAT32 caps a single file at 4 GiB, but you 
wouldn't be using that, would you?).

Irrespective of what people tell you and how authoritative they seem, I 
would recommend that you do some simple tests: create an ordinary file 
of size 3.9 GiB, then say 6 GiB (4 GiB is a magic hurdle because it is 
2^32 bytes). If that's OK, write a couple of quick scripts: one to 
populate the database with typical rows, and one to query the database, 
retrieving both low-rowid and high-rowid rows and comparing the timings. 
You may wish to experiment with varying the page size (upwards). A rough 
sketch of such a test follows below.
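
Something along these lines might do as a starting point. This is a 
minimal sketch using Python's standard sqlite3 module; the file names, 
row count, blob size, and page size are illustrative assumptions only, 
not recommendations:

import os
import sqlite3
import time

DB_PATH = "bigtest.db"     # hypothetical file name
BLOB_SIZE = 50 * 1024      # ~50 KB blobs, as described in the post
NUM_ROWS = 100000          # upper row count mentioned in the post

def big_file_test(path, size_bytes):
    """Sanity-check that the filesystem accepts a file of this size."""
    with open(path, "wb") as f:
        f.seek(size_bytes - 1)   # may be created sparse on some filesystems
        f.write(b"\0")
    os.remove(path)

def populate(db_path):
    """Create a table of NUM_ROWS rows, each carrying a ~50 KB blob."""
    if os.path.exists(db_path):
        os.remove(db_path)
    conn = sqlite3.connect(db_path)
    # page_size only takes effect if set before the first table is
    # created (or after a VACUUM), so it must come first.
    conn.execute("PRAGMA page_size = 8192")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data BLOB)")
    blob = os.urandom(BLOB_SIZE)   # one sample blob; real tests should
                                   # use rows typical of your application
    with conn:
        conn.executemany(
            "INSERT INTO t (data) VALUES (?)",
            ((blob,) for _ in range(NUM_ROWS)),
        )
    conn.close()

def time_fetch(conn, rowid):
    """Return the wall-clock time to fetch a single row by rowid."""
    start = time.perf_counter()
    conn.execute("SELECT data FROM t WHERE id = ?", (rowid,)).fetchone()
    return time.perf_counter() - start

if __name__ == "__main__":
    big_file_test("bigfile.tmp", 6 * 1024**3)   # 6 GiB probe
    populate(DB_PATH)
    conn = sqlite3.connect(DB_PATH)
    print("low rowid :", time_fetch(conn, 1))
    print("high rowid:", time_fetch(conn, NUM_ROWS))
    conn.close()

If the low-rowid and high-rowid timings stay comparable as the file 
grows past 4 GiB, you've confirmed the linear behaviour on your own 
filesystem rather than taking anyone's word for it.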

HTH,
John
