Hello all,

I'm considering using SQLite for a new application. The schema will contain
a bunch of small tables with few columns (~10), plus one large table with
many columns and somewhere between 1000 and 10000 rows.

'Many columns' typically fits within the default limit of 2000 columns, but
on some of the foreseen databases it will exceed that. It will never exceed
the theoretical / compile-time selectable maximum of 32767 columns.
Queries on this big table will be rather straightforward: either on the
table alone (SELECT * FROM table_large) or a single join on one field to one
of the smaller tables.

The Limits page (http://www.sqlite.org/limits.html) warns: "There are places
in the SQLite code generator that use algorithms that are O(N²) where N is
the number of columns." That is rather discouraging for raising the maximum
column count at compile time, but the page is not very specific about when
this happens...

I now have two design options:
- increase the maximum column count at compile time (possibly lowering it
again at runtime via sqlite3_limit() with SQLITE_LIMIT_COLUMN on databases
where I don't need more than, say, 1000 columns) and accept the quoted
performance degradation;
- alternatively, handle cases with more than 2000 columns in the client by
splitting the storage into two (or more) tables.
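For the second option, here is a minimal sketch of what I have in mind,
using Python's built-in sqlite3 module for brevity (the table and column
names are made up for illustration):

```python
import sqlite3

# Option 2: split one logically wide table into two physical tables that
# share a primary key; the client reassembles rows with a single join.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wide_a (id INTEGER PRIMARY KEY, c1, c2)")
conn.execute("CREATE TABLE wide_b (id INTEGER PRIMARY KEY, c3, c4)")
conn.execute("INSERT INTO wide_a VALUES (1, 'a', 'b')")
conn.execute("INSERT INTO wide_b VALUES (1, 'c', 'd')")

# Reassemble the full logical row; each physical table stays well under
# the column limit, at the cost of one extra join per query.
row = conn.execute(
    "SELECT a.id, a.c1, a.c2, b.c3, b.c4 "
    "FROM wide_a AS a JOIN wide_b AS b ON a.id = b.id"
).fetchone()
print(row)  # (1, 'a', 'b', 'c', 'd')
```

The downside is that every "SELECT * FROM table_large" in the client has to
become a join, and inserts have to be split across the partial tables.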

Any advice, experience, or more specifics on the "O(N²)" remark are highly
welcome!

Thanks for your help - Stefan
-- 
View this message in context: 
http://old.nabble.com/Limitation-on-Column-count-tp27117364p27117364.html
Sent from the SQLite mailing list archive at Nabble.com.

