I cannot imagine ever needing more than 2000 columns in a table; if I did, I could always create a parallel table.

As currently implemented, there is no fixed limit to the number of columns you can put in a table in SQLite.  If the CREATE TABLE statement will fit in memory, then SQLite will accept it.  Call the number of columns in a table K.  I am proposing to limit the value of K to something like 2000.
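
For illustration, here is a rough sketch of how one could probe that behavior through the C API: build a CREATE TABLE statement with K columns and hand it to sqlite3_exec().  The table and column names here are made up; the only ceiling today is the memory needed to hold the statement text.

    /* Rough probe, not a benchmark: create a table with K columns.
     * The names "wide" and "c0".."cN" are made up for illustration. */
    #include <sqlite3.h>
    #include <stdio.h>
    #include <stdlib.h>

    int create_wide_table(sqlite3 *db, int K){
      size_t cap = 32 + (size_t)K*16;  /* generous bound on SQL length */
      char *zSql = malloc(cap);
      size_t n;
      int rc, i;
      if( zSql==0 ) return SQLITE_NOMEM;
      n = sprintf(zSql, "CREATE TABLE wide(");
      for(i=0; i<K; i++){
        n += sprintf(zSql+n, "%sc%d", i ? "," : "", i);
      }
      sprintf(zSql+n, ")");
      /* No fixed column limit applies here; only malloc can stop us. */
      rc = sqlite3_exec(db, zSql, 0, 0, 0);
      free(zSql);
      return rc;
    }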

Would this cause anyone any grief?

Note that SQLite is optimized for a K that is small - a few dozen at most.  There are algorithms in the parser that run in time O(K*K).  These could be changed to O(K), but with K small the constant of proportionality is such that it isn't worthwhile.  So, even though SQLite will work on a table with a million or more columns, it is not a practical thing to do, in general.
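
To make the O(K*K) point concrete, here is an illustrative sketch - not the actual parser code: resolving each of K column references with a linear scan over a K-entry column list costs O(K) per lookup, O(K*K) overall.  A hash table would make the total O(K), but for a few dozen columns the scan's tiny constant factor wins.

    /* Illustration only -- not SQLite's parser.  One linear scan per
     * reference; resolving K references costs O(K*K) total. */
    #include <string.h>

    typedef struct Column { const char *zName; } Column;

    static int findColumn(const Column *aCol, int K, const char *zName){
      int i;
      for(i=0; i<K; i++){
        if( strcmp(aCol[i].zName, zName)==0 ) return i;
      }
      return -1;  /* not found */
    }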

The largest value of K I have seen in the wild is in the low 100s.  I thought that I was testing with K values in the thousands, but I just checked and I think the test scripts only go as high as K=1000 in one place.


The reason it would be good to limit K to about 2000 is that if I do so there are some places where I can increase the run-time performance some.  It would also reduce code complexity in a few spots.
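
As a hypothetical sketch of what such a cap could look like (the macro name and error message below are made up, not a committed design): a single check at the end of CREATE TABLE parsing, after which the rest of the code may assume K is bounded.

    /* Hypothetical cap -- the macro name and message are assumptions,
     * not a committed design. */
    #define MAX_COLUMN 2000

    static int checkColumnCount(int K, const char **pzErr){
      if( K>MAX_COLUMN ){
        *pzErr = "too many columns in table definition";
        return 1;   /* reject the CREATE TABLE */
      }
      return 0;     /* ok: later code may assume K<=2000, so column
                     * indices fit in 16 bits and per-column scratch
                     * arrays can be sized at compile time */
    }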

So who out there needs a value of K larger than 2000?  What is the largest K that anybody is using?  Who would object if I inserted a limit on K that was in the range of 1000 or 2000?
--
D. Richard Hipp <[EMAIL PROTECTED]>