On a large table it will be much faster to use an index than to force a
row scan, as your initial query did.
L B wrote:
> I have obtained a great performance improvement now
> just adding an index in the 2 columns
>
> dtfrom and idcatalogue,
>
> removing the 2 single indexes on the columns dtfrom and
> idcatalogue.
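The fix described above (one composite index replacing the two single-column ones) can be sketched with Python's built-in sqlite3 module. Table and column names come from the thread; the index name and sample rows are illustrative, not from the original database:

```python
import sqlite3

# In-memory toy database; the real one in the thread is a 500 MB file.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE CRONOCATALOGUES (
        IDCATALOGUE INTEGER,
        DTFROM TEXT
    )
""")
con.executemany(
    "INSERT INTO CRONOCATALOGUES (IDCATALOGUE, DTFROM) VALUES (?, ?)",
    [(1, "2004-01-01"), (1, "2004-06-01"), (2, "2004-03-15")],
)

# One composite index over both columns, instead of two single-column
# indexes on IDCATALOGUE and DTFROM separately.
con.execute("CREATE INDEX IDX_CRONO ON CRONOCATALOGUES (IDCATALOGUE, DTFROM)")

SQL = ("SELECT MAX(DTFROM) AS MAXDT, IDCATALOGUE "
       "FROM CRONOCATALOGUES GROUP BY IDCATALOGUE")

# The composite index covers every column the query touches, so SQLite
# can answer it from the index alone instead of scanning the table.
plan = con.execute("EXPLAIN QUERY PLAN " + SQL).fetchall()
print(plan)

rows = con.execute(SQL).fetchall()
print(rows)  # expected: [('2004-06-01', 1), ('2004-03-15', 2)]
```

The key point is that the index's column order (IDCATALOGUE first, then DTFROM) matches the GROUP BY column followed by the MAX() column, which is what lets the optimizer use it here.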
I have obtained a great performance improvement now
just by adding an index on the 2 columns
dtfrom and idcatalogue,
and removing the 2 single indexes on the columns dtfrom and
idcatalogue.
I could not believe it: now the query
SELECT MAX(DTFROM) AS MAXDT, IDCATALOGUE FROM
CRONOCATALOGUES GROUP BY IDCATALOGUE
is fast.
I have investigated more.
It seems that the performance problem is simply
related to the query:
SELECT MAX(DTFROM) AS MAXDT, IDCATALOGUE FROM
CRONOCATALOGUES GROUP BY IDCATALOGUE
which takes, at first execution, 9 seconds in Access and
30 seconds in SQLite, and it takes, after the first
execution (OS c
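The first-run versus repeat-run timing comparison described above can be reproduced with a small measurement sketch. This uses an in-memory toy table (so it will not show the OS file-cache effect the poster is seeing on a 500 MB file database); only the measurement pattern is the point, and the row counts are illustrative:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CRONOCATALOGUES (IDCATALOGUE INTEGER, DTFROM TEXT)")
# Illustrative data: 100 catalogues, 10,000 date rows spread across them.
con.executemany(
    "INSERT INTO CRONOCATALOGUES VALUES (?, ?)",
    [(i % 100, f"2004-{i % 12 + 1:02d}-01") for i in range(10_000)],
)

QUERY = ("SELECT MAX(DTFROM) AS MAXDT, IDCATALOGUE "
         "FROM CRONOCATALOGUES GROUP BY IDCATALOGUE")

# Run the same query several times and time each run; on a file
# database the first run pays the cost of reading cold pages from disk.
timings = []
for run in range(3):
    start = time.perf_counter()
    con.execute(QUERY).fetchall()
    timings.append(time.perf_counter() - start)

for run, elapsed in enumerate(timings, 1):
    print(f"run {run}: {elapsed:.4f}s")
```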
I'm no expert, but I believe I recall reading in the SQLite
docs that the performance of joins can be a bit slower than on other
DB systems. Seeing as you have a couple of tables being joined, this
could be the case. I also think I recall reading some posts on this
list about people changing
Hi all,
I have a 500 MB database in Access, migrated to SQLite.
The structure is:
--this table has 100 records
CREATE TABLE CATALOGUES
(
IDCATALOGUE INTEGER PRIMARY KEY AUTOINCREMENT,
CATALOGUENAME TEXT,
IDPERSONALDATA INTEGER,
TYPE INTEGER,
ACTIVE INTEGER
);
C