Thanks for your answers! I am very happy with SQLite as it is; I was
just wondering if I could improve it for this case.

I am using tables with this configuration for performance reasons. I
have to support an indeterminate number of columns (user data), and a
"normal" design was not as fast as this solution. I can't remember the
exact results of the tests right now, but the differences in loading
data into the database and reading it back into memory were very large.
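
Just to illustrate the kind of comparison I ran (a simplified sketch,
not my real schema or benchmark; the table and column names are
invented and the column count is scaled down):

    import sqlite3
    import time

    con = sqlite3.connect(":memory:")
    ncols = 500  # scaled down; my real tables have far more columns

    # Wide layout: one column per user field, one row per record.
    cols = ", ".join(f"c{i} REAL" for i in range(ncols))
    con.execute(f"CREATE TABLE wide (id INTEGER PRIMARY KEY, {cols})")

    # "Normal" layout: one row per (record, field) pair.
    con.execute("CREATE TABLE narrow (id INTEGER, field INTEGER, "
                "value REAL, PRIMARY KEY (id, field))")

    placeholders = ", ".join("?" * (ncols + 1))
    wide_row = [0] + [1.0] * ncols

    t0 = time.perf_counter()
    con.execute(f"INSERT INTO wide VALUES ({placeholders})", wide_row)
    t1 = time.perf_counter()
    con.executemany("INSERT INTO narrow VALUES (?, ?, ?)",
                    ((0, i, 1.0) for i in range(ncols)))
    t2 = time.perf_counter()

    print(f"wide insert:   {t1 - t0:.4f} s")
    print(f"narrow insert: {t2 - t1:.4f} s")

My real tests timed loading and reading many records, but this is the
shape of the comparison I was making.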

Thanks,
Jose

On 9/13/06, Dennis Cote <[EMAIL PROTECTED]> wrote:
jose simas wrote:
> My application uses SQLite as its file format and up to two of the
> tables can have several thousand columns (up to 20 or 30 thousand at
> times).
>
> When I open a connection there's a noticeable pause (around one second
> on a file with a table of 7,000 columns, for example). There is also a
> noticeable delay in sorting them by the primary key.
>
> Is there anything I can do to favour this kind of table?
>
Jose,

What can you possibly be doing with tables that have that many columns?
Are you sure you don't mean 20K-30K rows? In SQL a row corresponds to a
record, and a column corresponds to a field in a record.

If you really mean columns, then your best approach is probably to
redesign your tables to move much of the data into other related tables.
Can you give us some idea of your table schema and how it is used?
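
For example, a common redesign along these lines moves the user fields
into a related table with one row per field rather than one column per
field (just a rough sketch; the names are placeholders, not a
prescription for your schema):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        -- Small fixed-width main table: one row per record.
        CREATE TABLE record (
            record_id INTEGER PRIMARY KEY,
            name      TEXT
        );
        -- Related table: one row per (record, field) pair.
        CREATE TABLE record_field (
            record_id INTEGER REFERENCES record(record_id),
            field_no  INTEGER,
            value     REAL,
            PRIMARY KEY (record_id, field_no)
        );
    """)

    con.execute("INSERT INTO record VALUES (1, 'sample')")
    con.executemany("INSERT INTO record_field VALUES (1, ?, ?)",
                    [(0, 3.14), (1, 2.71), (2, 1.41)])

    # Read back all fields for one record, in field order.
    for field_no, value in con.execute(
            "SELECT field_no, value FROM record_field "
            "WHERE record_id = 1 ORDER BY field_no"):
        print(field_no, value)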

There will be little or no benefit to compiling SQLite yourself.

Dennis Cote
