On 15 Jun 2019, at 2:42pm, Dan Kaminsky <dan.kamin...@medal.com> wrote:

[about the 32767 hard limit on the number of columns in a table]

> I spent quite a bit of time hacking large column support into a working
> Python pipeline, and I'd prefer never to run that in production.
> Converting this compile time variable into a runtime knob would be
> appreciated.

Something you should know about SQLite is that if it needs to find the 2001st 
column of a row it has to read the entire row from storage and walk through all 
2000 columns before the one it wants.  So both storing and recalling data in 
wide tables are very inefficient.

To compensate for this problem, which occurs in many SQL engines, you can turn 
your wide table into a thin table (key/value pairs) by adding the column name 
to the key.  SQLite is extremely good at handling tall thin tables.
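As a minimal sketch of that reshaping (the table and column names wide_table, 
tall_table, row_id, sensor_a and sensor_b are hypothetical, just for 
illustration):

    -- Hypothetical wide table: one column per attribute.
    CREATE TABLE wide_table (
        row_id    INTEGER PRIMARY KEY,
        sensor_a  REAL,
        sensor_b  REAL
        -- ... thousands more columns ...
    );

    -- The same data as a tall, thin key/value table: the former
    -- column name becomes part of the key.
    CREATE TABLE tall_table (
        row_id  INTEGER NOT NULL,
        attr    TEXT    NOT NULL,   -- former column name
        value   REAL,
        PRIMARY KEY (row_id, attr)
    );

    -- Copy existing data across, one statement per former column.
    INSERT INTO tall_table (row_id, attr, value)
        SELECT row_id, 'sensor_a', sensor_a FROM wide_table;
    INSERT INTO tall_table (row_id, attr, value)
        SELECT row_id, 'sensor_b', sensor_b FROM wide_table;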

If you think about what you're really doing with your data, you'll find that 
although it's classically drawn out as a huge 2D grid, the data is closer to an 
Entity–attribute–value model, and more suited to a tall table with a long key.
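With that layout, recalling a single attribute for one entity is an indexed 
lookup rather than a walk across thousands of columns, and you can still 
reassemble a wide-looking row when you need one (again using the hypothetical 
names from the sketch above):

    -- Fetch one attribute for one entity.
    SELECT value
      FROM tall_table
     WHERE row_id = 42 AND attr = 'sensor_a';

    -- Pivot a few attributes back into a wide-looking row on demand.
    SELECT row_id,
           MAX(CASE WHEN attr = 'sensor_a' THEN value END) AS sensor_a,
           MAX(CASE WHEN attr = 'sensor_b' THEN value END) AS sensor_b
      FROM tall_table
     WHERE row_id = 42
     GROUP BY row_id;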

There's no reason why your library should have to know how SQLite is being 
used to store the data.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
