According to http://en.wikipedia.org/wiki/File_Allocation_Table, the limit
on FAT16 is 2 gigabytes per file, on FAT32 it's 4 gigabytes per file, and on
NTFS it's very, very large.

In my application I needed to deal with splitting my data into 2 gigabyte
(maximum) database file sizes, and I had two options:

I could implement the DISKIO subfeature of SQLite3, which would let me
emulate a very large file system on top of smaller file chunks, or...

I could just implement a "Collection" object which implements the same C++
interface to my database, but splits the data across multiple databases,
each of which is limited in size to 2 gigabytes.

I found for my application that the latter choice was much easier and faster
to implement.
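
Roughly, the shape of it is something like this (an illustrative sketch
only, not my actual code; the class name, chunk-naming scheme, and routing
policy are made up for the example):

// Illustrative sketch: a "Collection" that presents one database interface
// but spreads data across several size-capped SQLite files.
#include <sqlite3.h>
#include <cstdio>
#include <string>
#include <vector>

class Collection {
public:
    // Open (or create) the next chunk file: "data.000", "data.001", ...
    bool open_next_chunk(const std::string &base_name) {
        char suffix[8];
        std::snprintf(suffix, sizeof(suffix), ".%03u",
                      static_cast<unsigned>(dbs_.size()));
        sqlite3 *db = nullptr;
        if (sqlite3_open((base_name + suffix).c_str(), &db) != SQLITE_OK)
            return false;
        dbs_.push_back(db);
        return true;
    }

    // Writes go to the newest chunk; when an INSERT fails with SQLITE_FULL
    // (the page-count cap was hit), open another chunk and retry.
    sqlite3 *current() { return dbs_.empty() ? nullptr : dbs_.back(); }

    // Reads have to consult each chunk in turn (omitted here).

    ~Collection() {
        for (sqlite3 *db : dbs_) sqlite3_close(db);
    }

private:
    std::vector<sqlite3 *> dbs_;
};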

For each database file opened on a new connection, my code executes "PRAGMA
max_page_count=XXX;" after figuring out how large a page is and dividing my
desired maximum size by it.  As of the 3.4.0 release, this max_page_count is
per-connection and not per-file.  And it's only checked when allocating new
pages to the file through the pager allocation routines.
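
A sketch of that per-connection setup might look like the following (the
helper name and the 2 GiB figure are just for illustration):

// Illustrative sketch of capping a database file's size on a new connection:
// read the page size, divide the desired maximum size by it, and apply the
// result with PRAGMA max_page_count.
#include <sqlite3.h>
#include <string>

static bool apply_size_cap(sqlite3 *db, sqlite3_int64 max_bytes)
{
    // Ask SQLite how big a page is in this database.
    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, "PRAGMA page_size;", -1, &stmt, nullptr) != SQLITE_OK)
        return false;
    sqlite3_int64 page_size = 0;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        page_size = sqlite3_column_int64(stmt, 0);
    sqlite3_finalize(stmt);
    if (page_size <= 0)
        return false;

    // e.g. 2 GiB / 1024-byte pages = 2,097,152 pages.
    sqlite3_int64 max_pages = max_bytes / page_size;
    std::string pragma = "PRAGMA max_page_count=" + std::to_string(max_pages) + ";";
    return sqlite3_exec(db, pragma.c_str(), nullptr, nullptr, nullptr) == SQLITE_OK;
}

// Usage (error handling elided): apply_size_cap(db, 2147483648LL);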

--andy



On 7/4/07, Ian Frosst <[EMAIL PROTECTED]> wrote:

Is the file system holding your file FAT32 or NTFS?  If it's FAT32, it may
be the source of your problem, as it doesn't support very large files.

Ian

On 7/4/07, Krishnamoorthy, Priya (IE10) <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> I am using the SQLite3 database in my application.
>
> My application runs on Windows XP (32-bit). I am not able to store more
> than 2 GB of data in my database. Is it not possible to store more than
> 2 GB of data on Windows XP?
>
> I used SQLite3 on Linux and could store more than 2 GB.
>
> Please help me in this regard.
>
> Best regards,
>
> Priya
