On 01/03/16 12:07, Stephan Beal wrote:
> On Tue, Mar 1, 2016 at 12:59 PM, Matthias-Christian Ott <ott at mirix.org>
> wrote:
> 
>> Unfortunately, this limits the maximum number of elements that can ever
>> be inserted during a table's life-time to 2^63 - 1. While this might be
>> acceptable in some cases it is an artificial limitation.
>>
> 
> Artificial, yes, but so is "64 bits." You will likely hit other limitations
> far before getting anywhere near 2^63-1 insertions:
> 
> https://www.sqlite.org/limits.html
> 
> e.g. point #13:
> 
> *Maximum Number Of Rows In A Table*
> 
> The theoretical maximum number of rows in a table is 2^64 (18446744073709551616
> or about 1.8e+19). This limit is unreachable since the maximum database
> size of 140 terabytes will be reached first. A 140 terabytes database can
> hold no more than approximately 1e+13 rows, and then only if there are no
> indices and if each row contains very little data.

I use SQLite as a buffer for small amounts of data, so I'm nowhere near
the limits of the database. However, the 2^63 - 1 limit for autoincrement
columns applies to the life-time of the table, not to the size of the
database. I'm aware that pushing 2^63 - 1 elements through the queue is
not realistic even in the largest deployments. But saying that your
software breaks after 2^63 - 1 elements feels somewhat like saying that
a name can only be 20 characters long because some mainframe software
can't handle more than that: it feels like admitting that you have
failed as a software developer to deliver a proper solution and are now
pretending that the improper solution is proper because nobody will hit
its limits.
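For what it's worth, the life-time nature of the limit is easy to
demonstrate: with AUTOINCREMENT, SQLite never reuses a rowid, so once the
maximum rowid (2^63 - 1) has ever been assigned, further inserts fail
with SQLITE_FULL even though the table may be nearly empty. A minimal
sketch using Python's sqlite3 module (the table and column names are
made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE queue (id INTEGER PRIMARY KEY AUTOINCREMENT, data TEXT)"
)

# Simulate a table whose autoincrement counter has been exhausted by
# explicitly inserting a row at the maximum rowid, 2^63 - 1.
conn.execute(
    "INSERT INTO queue (id, data) VALUES (9223372036854775807, 'x')"
)

# Even after that row is deleted, AUTOINCREMENT refuses to go backwards:
conn.execute("DELETE FROM queue WHERE id = 9223372036854775807")
try:
    conn.execute("INSERT INTO queue (data) VALUES ('y')")
except sqlite3.OperationalError as e:
    # SQLITE_FULL surfaces as "database or disk is full"
    print("insert failed:", e)
```

Without the AUTOINCREMENT keyword, a plain INTEGER PRIMARY KEY table
would instead start picking unused rowids at random once the maximum is
reached, which is exactly the rowid-reuse behaviour AUTOINCREMENT exists
to prevent.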
