Hi Pavel,
yes - you are quite right, I wasn't precise here! Sorry about that!
What I meant was a join on unit_id, which will obviously have an index if
one is needed.
Thanks for your help!
Kind regards Stefan
Btw., in the same context, I noticed the following.
This is from your last e-mail:
> So, I can't see the advantage of normalization here. (Note that I do not
> plan to join or index
> on measurement columns!)
This is from your first e-mail:
> Queries on this big table will be rather straight-forward: either on the
> table alone (SELECT * FROM tab
Hi all,
based on Pavel's input, I did some quick trials and thought I'd let you know
the results.
I'm running on an old Pentium @ 3.2 GHz with 3 GB memory and Win-7, so don't
expect super-fast times!
1. I have one DB with 5411 units and 1517 measurements, with columns
unit_id, m_1, ..., m_1517
> When doing so: Any idea of what would be the worst case column select
> strategy on the huge table to compare with?
Take some query from your typical usage where only the last column of the
huge table is used, and all other columns are either not used at all or
only one or two of them appear in other conditions.
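Pavel's "worst case" for the wide layout can be scripted directly: a query that wants only the last measurement column still forces SQLite to walk past every preceding value in each row it reads. A minimal sketch in Python's sqlite3, with the column count scaled down to 100 and all names my own (not from the thread):

```python
import sqlite3
import random

N_COLS = 100  # stand-in for the real ~1517 measurement columns
con = sqlite3.connect(":memory:")
cur = con.cursor()

cols = ", ".join(f"m_{i} REAL" for i in range(1, N_COLS + 1))
cur.execute(f"CREATE TABLE tab (unit_id INTEGER PRIMARY KEY, {cols})")

random.seed(0)
for unit in range(1, 101):
    vals = [random.random() for _ in range(N_COLS)]
    placeholders = ", ".join("?" for _ in range(N_COLS + 1))
    cur.execute(f"INSERT INTO tab VALUES ({placeholders})", [unit] + vals)

# Worst case for a row store: only the LAST column is selected and
# filtered, but every row's first 99 values must still be skipped over.
rows = cur.execute(
    f"SELECT unit_id, m_{N_COLS} FROM tab WHERE m_{N_COLS} > 0.9"
).fetchall()
print(len(rows))
```

Timing this against the same filter on a normalized table would give the comparison Stefan is after.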
Hi Pavel,
thanks a lot; I will reconsider this option and run some test cases to
compare.
But that will be tomorrow I guess...
When doing so: Any idea of what would be the worst case column select
strategy on the huge table to compare with?
Stefan
Pavel Ivanov-2 wrote:
>
>> so normalization w
One may be able to make "measurement type" a column, thus eliminating the
need for a column for each type.
Some speed may be recoverable with indexing.
regards,
Adam
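Adam's idea of a "measurement type" column corresponds to a key/value-style layout: one row per (unit, measurement type) instead of one column per type. A minimal sketch in Python's sqlite3 (table and column names are my own assumptions, not from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# One row per (unit, measurement type) replaces one column per type.
cur.execute("""
    CREATE TABLE measurement (
        unit_id INTEGER NOT NULL,
        m_type  INTEGER NOT NULL,   -- which of the ~1517 measurements
        value   REAL,
        PRIMARY KEY (unit_id, m_type)
    )
""")

# Two measurements for unit 1.
cur.executemany(
    "INSERT INTO measurement VALUES (?, ?, ?)",
    [(1, 1, 0.5), (1, 1517, 2.5)],
)

# "Some speed may be recoverable with indexing": the composite primary
# key above already indexes (unit_id, m_type), so this is a point query.
row = cur.execute(
    "SELECT value FROM measurement WHERE unit_id = 1 AND m_type = 1517"
).fetchone()
print(row[0])  # 2.5
```

This also sidesteps the 2000-column limit entirely, at the storage cost Stefan objects to below.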
On Tue, Jan 12, 2010 at 1:21 PM, Pavel Ivanov wrote:
> > so normalization would lead to a doubling
> > of the storage space (add
> so normalization would lead to a doubling
> of the storage space (add a measurement_id to each measurement).
My strong belief is that when you try this normalization you'll see
that such doubling of storage is a good enough trade-off for the speed
you'll achieve. I don't think that speed of quer
Hi Adam,
thanks for your suggestion. Unfortunately, it doesn't help in my case.
Essentially, we are talking about a time series (rows) of n different
measurements (columns) - so normalization would lead to a doubling
of the storage space (add a measurement_id to each measurement).
Second, the
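Stefan's "doubling" estimate can be checked with back-of-envelope arithmetic using the counts from his later benchmark mail (5411 units, 1517 measurements). The byte sizes below are illustrative assumptions, not measured values; the exact factor depends on how wide the values and keys actually are:

```python
# Wide layout: one row per unit, one column per measurement value.
units, measurements = 5411, 1517
value_bytes = 8   # assume an 8-byte REAL per measurement
id_bytes = 4      # assume a 4-byte measurement_id key

wide = units * measurements * value_bytes
# Normalized layout: one row per (unit, measurement); each value now
# carries a measurement_id alongside it.
normalized = units * measurements * (value_bytes + id_bytes)

print(round(normalized / wide, 2))  # 1.5 under these assumptions
```

With narrower values (or per-row overhead counted in), the overhead moves toward the 2x Stefan describes.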
Good day,
In general I try to work within the limits of any database engine that I am
using. Often, the limits are there for good reasons (such as speed
problems). I would suggest seeing if there is a way to normalize the big
tables such that infrequently used columns are split into tables that
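The splitting Adam describes could look like the following sketch (all table and column names are hypothetical): frequently used columns stay in the main table, rarely used ones move to a side table, and both are keyed on unit_id so the occasional join uses the primary-key index, as Stefan later confirms.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hot columns in the main table, rarely-used columns split out;
# both keyed on unit_id so the join is resolved via the PK index.
cur.executescript("""
    CREATE TABLE tab_hot  (unit_id INTEGER PRIMARY KEY, m_1 REAL, m_2 REAL);
    CREATE TABLE tab_cold (unit_id INTEGER PRIMARY KEY, m_1500 REAL);
    INSERT INTO tab_hot  VALUES (1, 0.1, 0.2);
    INSERT INTO tab_cold VALUES (1, 9.9);
""")

# The occasional query needing a cold column pays one indexed join.
row = cur.execute("""
    SELECT h.m_1, c.m_1500
    FROM tab_hot h JOIN tab_cold c USING (unit_id)
    WHERE h.unit_id = 1
""").fetchone()
print(row)  # (0.1, 9.9)
```

Each side table also stays comfortably under the 2000-column limit, which was the original concern.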
Hello all,
I'm considering using SQLite for a new application. The schema will contain
a bunch of small tables with few columns (~10) plus one large table with
many columns and 1000...1 rows.
'Many columns' typically fits into the default 2000 column limit, but can
exceed it at times (that i