On 3/1/16 7:07 AM, Stephan Beal wrote:
> On Tue, Mar 1, 2016 at 12:59 PM, Matthias-Christian Ott <ott at mirix.org>
> wrote:
>
>> Unfortunately, this limits the maximum number of elements that can ever
>> be inserted during a table's life-time to 2^63 - 1. While this might be
>> acceptable in some cases it is an artificial limitation.
>>
> Artificial, yes, but so is "64 bits." You will likely hit other limitations
> far before getting anywhere near 2^63-1 insertions:
>
> https://www.sqlite.org/limits.html
>
> e.g. point #13:
>
> *Maximum Number Of Rows In A Table*
>
> The theoretical maximum number of rows in a table is 2^64 (18446744073709551616
> or about 1.8e+19). This limit is unreachable since the maximum database
> size of 140 terabytes will be reached first. A 140 terabytes database can
> hold no more than approximately 1e+13 rows, and then only if there are no
> indices and if each row contains very little data.
>
You can hit 2^63 insertions well before hitting the size limit of the 
database if you have also been doing deletions.

A simple repeated pattern of "add record N, remove record N-1" will 
eventually use up the limit on record numbers.
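
A minimal illustration of that pattern (untested, using a hypothetical 
AUTOINCREMENT table called "log" purely as an example):

    -- AUTOINCREMENT means rowids are never reused, so the counter climbs
    -- by one per insert even though the table only ever holds one row.
    CREATE TABLE log(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
    -- Repeat this pair of statements indefinitely:
    INSERT INTO log(payload) VALUES ('record N');
    DELETE FROM log WHERE id < (SELECT max(id) FROM log);
    -- sqlite_sequence records the highest id ever handed out:
    SELECT seq FROM sqlite_sequence WHERE name = 'log';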

The answer here is to detect when the record number is getting near to 
'too big' and then go through and renumber the records back into sequence 
from 1, resetting the next record number (roughly like the sketch below). 
This might require changing data elsewhere if record numbers have been 
saved anywhere, which means it might make sense not to wait until overflow 
is imminent, but to do it proactively when minimal records (ideally no 
records) are referenced elsewhere (and only in known locations).
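
Something along these lines could do the renumbering offline (untested 
sketch, again assuming the hypothetical "log" table above and assuming no 
other table stores its id values):

    BEGIN;
    CREATE TABLE log_new(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
    -- Copying only the data columns lets SQLite assign fresh ids from 1,
    -- preserving the original order.
    INSERT INTO log_new(payload) SELECT payload FROM log ORDER BY id;
    DROP TABLE log;
    ALTER TABLE log_new RENAME TO log;
    -- If AUTOINCREMENT is in use, check the sqlite_sequence entry for the
    -- renamed table afterwards; behaviour can depend on the SQLite version.
    COMMIT;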

2^63 is such a big number that you can likely find a window during 
standard preventive maintenance to do this with the database offline, well 
before things overflow. At one insertion per nanosecond, you have almost 
3 centuries before this needs to be done (if I am doing my math right), so 
you can do it when you migrate to a new box.
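
(Checking that arithmetic: 2^63 is about 9.2e18, so at 1e9 insertions per 
second that is roughly 9.2e9 seconds; with about 3.15e7 seconds in a year, 
that works out to around 292 years.)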

With 32 bit numbers, these limits were readily reachable; 64 bits was a 
quantum leap in range, adding 32 doublings to the limit, and it will take 
a while for usage to catch up (though it likely will). The 140 terabyte 
limit is something that I could see being hit now in some applications.

-- 
Richard Damon
