Re: [sqlite] How to speed up database open

2014-12-12 Thread Paul
> > On Thu, Dec 11, 2014 at 10:58 AM, Paul  wrote:
> > 
> > >
> > > I have yet to try and test if dropping stat tables worth the effort.
> > >
> > 
> > Most of the work is involved in loading sqlite_stat4. On the other hand,
> > most of the benefit comes from sqlite_stat1. So consider compiling without
> > SQLITE_ENABLE_STAT4. You will still probably get good query plans, but the
> > startup time should be reduced.
> > 
> 
> Thank you for the analysis of the data, Richard.
> I am going to try your advice and post the results.
> 

I've run different tests, and my results are:

 - Compiling without stat4 (or stat3) is equivalent to simply dropping the
   sqlite_stat4 (or sqlite_stat3) table, and as stated before, on average it
   takes 1.2 ms (with) vs 0.4 ms (without) to open the database. That is very
   good, because I don't have to compile two different versions of the library.

 - It takes roughly the same time to read sqlite_stat3 as it takes to read
   sqlite_stat4; sqlite_stat3 wins by a few microseconds. So if I ever chose to
   accept the additional open overhead, I'd definitely go for sqlite_stat4.

 - Compiling with the USE_PREAD macro saves a dozen more microseconds.


I'll stick to the advice and drop the sqlite_stat4 table, leaving only sqlite_stat1.
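
For reference, a minimal sketch of that cleanup (SQLite allows the sqlite_stat
tables to be dropped with plain SQL, and an ANALYZE run under a build compiled
with SQLITE_ENABLE_STAT4 recreates them):

-- Refresh statistics first, then drop the tables that are expensive to load
-- at open time, keeping only sqlite_stat1:
ANALYZE;
DROP TABLE IF EXISTS sqlite_stat4;
DROP TABLE IF EXISTS sqlite_stat3;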

Thanks, everyone, for the help!

Best regards,
Paul


Re: [sqlite] How to speed up database open

2014-12-11 Thread Paul

> 
> On 2014/12/11 17:58, Paul wrote:
> >
> >> On 2014/12/11 13:51, Paul wrote:
> >> I have yet to try and test if dropping the stat tables is worth the effort.
> >> Some databases in fact can grow pretty big, up to a few hundred megabytes //
> 
> In that case maybe keep the Stat1 table. There is also the option of using
> things like "USING" clauses and "LIKELY" or "UNLIKELY" planner directives in
> your queries to force a tried and tested query plan on the QP in lieu of
> using the stat tables - but then you are getting very hands-on with your
> data and no longer leaving it up to the internal wisdom of SQLite -
> something I don't usually advocate, but as you rightly observed, your case
> is quite special.
> 

I am going to disable stat4 and keep only stat1. Yes, I could have used USING
and co. But as you correctly pointed out, SQLite does a very good job of
optimizing queries. I trust SQLite more than I trust my intuition.
From my personal experience, intuition does not perform very well in complex
systems with a lot of variables.

Best regards,
Paul


Re: [sqlite] How to speed up database open

2014-12-11 Thread RSmith


On 2014/12/11 17:58, Paul wrote:



On 2014/12/11 13:51, Paul wrote:
I have yet to try and test if dropping the stat tables is worth the effort. Some databases in fact can grow pretty big, up to a few
hundred megabytes //


In that case maybe keep the Stat1 table. There is also the option of using things like "USING" clauses and "LIKELY" or "UNLIKELY"
planner directives in your queries to force a tried and tested query plan on the QP in lieu of using the stat tables - but then you
are getting very hands-on with your data and no longer leaving it up to the internal wisdom of SQLite - something I don't usually
advocate, but as you rightly observed, your case is quite special.
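
For example, a hedged sketch of such a planner directive, with made-up table
and column names:

SELECT id FROM orders WHERE unlikely(flagged = 1) AND customer = ?;

unlikely() returns its argument unchanged but tells the planner the expression
is rarely true, nudging its cost estimates without changing the result.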




Re: [sqlite] How to speed up database open

2014-12-11 Thread Paul
> On Thu, Dec 11, 2014 at 10:58 AM, Paul  wrote:
> 
> >
> > I have yet to try and test if dropping stat tables worth the effort.
> >
> 
> Most of the work is involved in loading sqlite_stat4. On the other hand,
> most of the benefit comes from sqlite_stat1. So consider compiling without
> SQLITE_ENABLE_STAT4. You will still probably get good query plans, but the
> startup time should be reduced.
> 

Thank you for the analysis of the data, Richard.
I am going to try your advice and post the results.

Best regards,
Paul



Re: [sqlite] How to speed up database open

2014-12-11 Thread Richard Hipp
On Thu, Dec 11, 2014 at 10:58 AM, Paul  wrote:

>
> I have yet to try and test if dropping stat tables worth the effort.
>

Most of the work is involved in loading sqlite_stat4.  On the other hand,
most of the benefit comes from sqlite_stat1.  So consider compiling without
SQLITE_ENABLE_STAT4.  You will still probably get good query plans, but the
startup time should be reduced.
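
As a hedged aside, a quick way to check which way a given library was built:

PRAGMA compile_options;

This lists ENABLE_STAT4 among its rows only when the library was compiled
with SQLITE_ENABLE_STAT4.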


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] How to speed up database open

2014-12-11 Thread Paul
Hi Ryan, thanks for the reply.

> 
> On 2014/12/11 13:51, Paul wrote:
> > In my specific case I need to open the database as fast as possible.
> > The usual working cycle is: open -> select a small data set -> close.
> > It is irrelevant how much time it takes to open the database when
> > data is being added or updated, since that happens infrequently.
> 
> /Snipped for brevity/
> 
> Hi Paul,
> 
> You seem to know your way around systems so I will not waste time on
> details. The core of the problem is that it takes time to open a database
> file, and extra CPU cycles, because upon opening an SQLite database much
> CPU and I/O time is spent reading and interpreting the entire schema (which
> sounds complex even though the actual data might be small) and then checking
> for hot journals, opening accompanying file objects (possibly WAL journals
> etc.) and some basic maintenance. It then loads the stat tables and not
> only reads the data but uses it to set up certain query planner adjustments,
> which eats a few more cycles (this is something I think happens on startup,
> but I might be wrong). Also, I think the start-up routines are surely not
> an area of waste but probably not a great focus of fine-tuning performance
> and optimizations (Richard or Dan might shed more light if I am wrong about
> this). Either way, once it is open, the speed is lightning quick, as you
> have noticed. The point being: it is the opening that eats the CPU time.
> 
> Next point is that you cannot do much about that in terms of your explanation 
> of how you access data, and I won't try to dissuade 
> you from the way you use it. One thing that I notice as a definitive 
> possibility is simply dropping all stat tables from your system 
> and at least saving those reading and adjusting steps. The reason why I 
> suggest this is that you have already done the research and 
> noticed the time degradation due to it, but more importantly, there is no 
> need.
> 
> The stat tables help the query planner (NGQP as it is officially known
> these days) to make decisions that make queries on large datasets a bit
> faster (sometimes a LOT faster). You, however, do not have any large
> datasets or performance-bending queries; you have no need for this, your
> need is more to save those cycles at file-open time. You need something we
> do not often see in database designs: quantity over quality - and dropping
> the stat tables (and the maintenance routines causing them to exist) should
> do you a favour.
> 
> Best of luck with the implementation!
> Ryan
> 

Thanks for confirming my mental model of the internals of SQLite :)
SQLite does all the preparations needed for the most optimal performance. And
I agree that this is the best choice in the overwhelming majority of cases.
But my case is very, very specific.
I have yet to try and test whether dropping the stat tables is worth the
effort. Some databases in fact can grow pretty big, up to a few hundred
megabytes. It is not yet known how well they will perform.
But so far, ignoring the CPU overhead, we have a big gain in disk performance.
SQLite performs much, much better than the original storage.


Best regards,
Paul


Re: [sqlite] How to speed up database open

2014-12-11 Thread RSmith


On 2014/12/11 13:51, Paul wrote:

In my specific case I need to open the database as fast as possible.
The usual working cycle is: open -> select a small data set -> close.
It is irrelevant how much time it takes to open the database when
data is being added or updated, since that happens infrequently.


/Snipped for brevity/

Hi Paul,

You seem to know your way around systems so I will not waste time on details. The core of the problem is that it takes time to open
a database file, and extra CPU cycles, because upon opening an SQLite database much CPU and I/O time is spent reading and
interpreting the entire schema (which sounds complex even though the actual data might be small) and then checking for hot journals,
opening accompanying file objects (possibly WAL journals etc.) and some basic maintenance. It then loads the stat tables and not
only reads the data but uses it to set up certain query planner adjustments, which eats a few more cycles (this is something I think
happens on startup, but I might be wrong).  Also, I think the start-up routines are surely not an area of waste but probably not a
great focus of fine-tuning performance and optimizations (Richard or Dan might shed more light if I am wrong about this).  Either
way, once it is open, the speed is lightning quick, as you have noticed. The point being: it is the opening that eats the CPU time.


Next point is that you cannot do much about that in terms of your explanation of how you access data, and I won't try to dissuade 
you from the way you use it. One thing that I notice as a definitive possibility is simply dropping all stat tables from your system 
and at least saving those reading and adjusting steps. The reason why I suggest this is that you have already done the research and 
noticed the time degradation due to it, but more importantly, there is no need.


The stat tables help the query planner (NGQP as it is officially known these days) to make decisions that make queries on large
datasets a bit faster (sometimes a LOT faster). You, however, do not have any large datasets or performance-bending queries; you
have no need for this, your need is more to save those cycles at file-open time. You need something we do not often see in database
designs: quantity over quality - and dropping the stat tables (and the maintenance routines causing them to exist) should do you a
favour.


Best of luck with the implementation!
Ryan






Re: [sqlite] How to speed up database open

2014-12-11 Thread Paul
PART 2

INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','676 1 1','0 330 330','0 327 330',X'0408040253be558403a9');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','676 1 1','0 661 661','0 655 661',X'04080402547bf6900b13');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 720 723','1 691 723',X'0409040253aaffe802ac');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 915 917','1 805 917',X'0409040253bfe67d03d2');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 955 956','1 824 956',X'0409040253bff7c203fa');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 962 966','1 826 966',X'0409040253bff8040404');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 974 975','1 831 975',X'0409040253c00551040d');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 6 1','676 979 982','1 832 982',X'0409040253c005520414');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 3 1','676 992 992','1 836 992',X'0409040253c0116e041f');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 999 999','1 840 999',X'0409040253c02e080426');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 4 1','676 1320 1323','1 1069 1323',X'04090402541ace35076c');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 6 1','676 1375 1379','1 1101 1379',X'04090402542c0ede0817');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 1419 1421','1 1121 1421',X'040904025432eca90867');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 1428 1432','1 1124 1432',X'04090402543309980873');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 1438 1438','1 1129 1438',X'040904025433fb84087d');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 1622 1623','1 1246 1623',X'040904025451099009d5');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 5 1','676 1635 1637','1 1251 1637',X'04090402545763c60a07');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 1 1','676 1654 1654','1 1260 1654',X'04090402545786690a1c');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1093 7 1','676 1668 1671','1 1273 1671',X'04090402546614990a75');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1002 1 1','1769 1985 1985','2 1559 1985',X'040104020353980b10020e');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1002 1 1','1769 2316 2316','2 1887 2316',X'040104020354047722065a');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','1002 1 1','1769 2647 2647','2 2218 2647',X'040104020354597abf0a2c');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','158 1 1','2771 2802 2802','3 2373 2802',X'04020401271152e64c8f7b');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL_idx','44 37 1','2929 2932 2956','4 2503 2956',X'0402040227145487fdf10a65');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','17 17','13 17',X'0301080e');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','110 110','99 110',X'03010864');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','330 330','316 330',X'030201013d03');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','661 661','643 661',X'0302090284');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','992 992','974 992',X'03020903cf');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','1323 1323','1300 1323',X'0302090515');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','1549 1549','1526 1549',X'03020805f7');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','1652 1652','1624 1652',X'0302080659');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','1654 1654','1625 1654',X'030201065a03');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','1738 1738','1704 1738',X'03020806a9');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','1982 1982','1944 1982',X'030201079903');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','1985 1985','1946 1985',X'030202079b2714');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','1991 1992','1952 1992',X'03020207a12711');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2015 2015','1974 2015',X'03020107b703');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2023 2024','1981 2024',X'03020207be2711');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2025 2025','1982 2025',X'03020107bf03');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','2316 2316','2269 2316',X'03020908de');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2555 2555','2508 2555',X'03020809cd');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','1 1','2647 2647','2598 2647',X'0302010a2703');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2703 2703','2654 2703',X'0302080a5f');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2850 2850','2800 2850',X'0302080af1');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2856 2856','2805 2856',X'0302010af603');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2870 2870','2817 2870',X'0302010b0203');
INSERT INTO sqlite_stat4 VALUES('LLL','LLL','2 1','2956 2957','2900 2957',X'0302020b582711');
INSERT INTO sqlite_stat4 

Re: [sqlite] How to speed up database open

2014-12-11 Thread Paul
Hello, Richard, thanks for the quick reply.

Unfortunately, no, there is no way. On our servers we have a big number
of entities that represent client data. Data for different clients can
be read at any given point in time by clients and by a bunch of daemons
responsible for different maintenance jobs. Recently we switched our
central index file from a hand-made, transaction-less format to SQLite.
We have a daemon that provides an API both for clients and for maintenance
daemons. So there are roughly 500 requests per second to the daemon. Most
requests require simple data to be extracted, but since it now takes more
time to read that data, we observe considerable CPU usage growth. This
growth is not severe for us at all. But I am a performance-paranoid type
of person :D
So that's why I am curious whether anything can be tuned inside SQLite.
And seeing how removing the stat tables reduces the time 4x, I thought
maybe there are other ways around it.

Unfortunately, the database structure is a commercial secret.
But I hope the modified content of sqlite_stat* will shed some light.

PART 1

INSERT INTO sqlite_stat1 VALUES('AAA','AAA','4711 2 2 1');
INSERT INTO sqlite_stat1 VALUES('BBB','BBB_idx','18249 3');
INSERT INTO sqlite_stat1 VALUES('BBB','BBB','18249 22 1');
INSERT INTO sqlite_stat1 VALUES('CCC','CCC_idx','54 3 1');
INSERT INTO sqlite_stat1 VALUES('DDD',NULL,'478');
INSERT INTO sqlite_stat1 VALUES('EEE','EEE_idx','836 1');
INSERT INTO sqlite_stat1 VALUES('FFF',NULL,'16');
INSERT INTO sqlite_stat1 VALUES('GGG','GGG','7 1');
INSERT INTO sqlite_stat1 VALUES('HHH',NULL,'13');
INSERT INTO sqlite_stat1 VALUES('III','III_idx','2918 2 1');
INSERT INTO sqlite_stat1 VALUES('JJJ','JJJ','2915 1 1');
INSERT INTO sqlite_stat1 VALUES('KKK','KKK_idx','7070 3');
INSERT INTO sqlite_stat1 VALUES('KKK','KKK','7070 1');
INSERT INTO sqlite_stat1 VALUES('LLL','LLL_idx','2973 595 2 1');
INSERT INTO sqlite_stat1 VALUES('LLL','LLL','2973 2 1');
INSERT INTO sqlite_stat1 VALUES('MMM','MMM_idx1','7942 3 3');
INSERT INTO sqlite_stat1 VALUES('MMM','MMM_idx2','7942 1324 1');
INSERT INTO sqlite_stat1 VALUES('MMM','MMM_idx3','7942 3 1');
INSERT INTO sqlite_stat1 VALUES('NNN','NNN','223 2 1');
INSERT INTO sqlite_stat1 VALUES('OOO',NULL,'6');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','14 14 1','306 306 315','216 216 315',X'0402010200d9020098');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','12 12 1','445 445 449','286 286 449',X'04020101011f0270');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','520 520 523','313 313 523',X'04020101013a0267');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','9 9 1','600 600 602','341 341 602',X'0402010101560261');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','702 702 709','386 386 709',X'040201020183020096');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','797 797 802','432 432 802',X'0402010201b1020094');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','1 1 1','1047 1047 1047','557 557 1047',X'04020101022e0209');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','2 2 1','1571 1571 1571','946 946 1571',X'0402010103b30206');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','12 12 1','1733 1733 1739','1090 1090 1739',X'040201020443020094');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','15 15 1','1882 1882 1889','1168 1168 1889',X'0402010104910277');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','9 9 1','1916 1916 1917','1183 1183 1917',X'0402010104a00261');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','2040 2040 2042','1250 1250 2042',X'0402010104e30276');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','4 4 1','2092 2092 2095','1277 1277 2095',X'0402010204fe020223');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','2210 2210 2220','1363 1363 2220',X'040201020554020234');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','9 9 1','2606 2606 2611','1621 1621 2611',X'0402010206560200a3');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','4 4 1','2618 2618 2619','1625 1625 2619',X'04020102065a020093');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','9 9 1','2932 2932 2935','1801 1801 2935',X'04020101070b0270');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','1 1 1','3143 3143 3143','1930 1930 3143',X'04020102078f020163');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','3276 3276 3285','1992 1992 3285',X'0402010207d302024d');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','1 1 1','3667 3667 3667','2262 2262 3667',X'0402010208e4020163');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','3727 3727 3731','2292 2292 3731',X'0402010109020275');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','3950 3950 3955','2436 2436 3955',X'0402010209970200f4');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','4 4 1','4189 4189 4191','2578 2578 4191',X'040201020a2b020089');
INSERT INTO sqlite_stat4 VALUES('AAA','AAA','11 11 1','4649 4649 4659','2836 2836 4659',X'040201020b46020340');
INSERT INTO sqlite_stat4 VALUES('BBB','BBB_idx','15 1','204 216','88 216',X'030601526e1cde35d6764d25');
INSERT INTO sqlite_stat4 

Re: [sqlite] How to speed up database open

2014-12-11 Thread Paul
Hello, Simon.

> 
> On 11 Dec 2014, at 11:51am, Paul  wrote:
> 
> > I understand that having them is a must for decent performance.
> > In my specific case I have millions of individual database files.
> > This is one reason, among others, that I can't keep them open all the time.
> > There are just too many of them. These databases are being opened
> > frequently. Let's say 500 times per second. In most cases, just to query a
> > single row. Ironically, querying takes only a handful of microseconds, and
> > most CPU time is spent reading the same database structure over and over
> > again.
> > 
> > Can you please offer some advice on what can be done to reduce this overhead?
> 
> The problem with this is that it cannot be solved by SQLite's programmers 
> because most of the time is taken by operating system calls. Merely opening a 
> file (which you no doubt know is not done by sqlite3_open() but delayed until 
> the first access) is a time-consuming procedure. Once SQLite has access to 
> the data it is, as you have shown, very fast.

But my test shows that opening is actually stunningly fast.
On the other hand, parsing the database structure is slow and CPU-bound.
75% of the parsing time is spent reading the stat tables.
System calls take only a fraction of the CPU time, to be more specific: 10%.

> 
> You explain that you have millions of individual database files. Is that the 
> only reason you can't open the database and keep it open, or are there others 
> ? Also, do all these separate databases have the same tables with the same 
> columns in ?

This is not the only reason, but it is the most important one.
Fitting all the database connections in RAM would be impossible.
Yes, the database files all have the same structure.

> 
> My normal advice would be that before you start querying you merge your 
> millions of separate database files into one big one. Judging by the degree 
> of technical information in your question you don't need me to suggest ways 
> of programming or scripting this, or of keeping a merged central copy 
> up-to-date. The only question is whether it is appropriate to your 
> circumstances.

Unfortunately, it is not possible in our case. We have separate directories as
a means of encapsulating an entity whose central index is based on SQLite.
Also, having one single index would reduce concurrency dramatically.

Thanks,
Paul



Re: [sqlite] How to speed up database open

2014-12-11 Thread Richard Hipp
On Thu, Dec 11, 2014 at 6:51 AM, Paul  wrote:

>
> Hello.
>
> In my specific case I need to open the database as fast as possible.
> The usual working cycle is: open -> select a small data set -> close.
> It is irrelevant how much time it takes to open the database when
> data is being added or updated, since that happens infrequently.
> But for selects it's a different story: selects are frequent.
> The database structure is pretty complex, but I would say not overly so:
> 21 tables (including sqlite_stat1 and sqlite_stat4) and 11 indices.
>
> For me, it takes 1.2 ms to read the database structure. And the process
> of reading the database structure is strictly CPU-bound. When I run a
> profiler I see 90% of the time is spent in the sqlite3InitOne() function and
> 64% inside loadStatTbl(), which is called by sqlite3InitOne().
> Times are measured by performing hundreds of thousands of cycles:
> OPEN -> SELECT -> CLOSE
> This procedure is not I/O bound; the database is small (2 MiB) and easily
> fits in the file system cache. I have double-checked with iostat.
>
> The weird thing is that when I drop either sqlite_stat1 or sqlite_stat4,
> or both of them, the average time drops from 1.2 ms to 0.4 ms. Why is it
> so expensive to read those tables?
>

Can you send us a copy of your schema and the STAT tables?  You can
generate the content we need by running "sqlite3 YOURDATABASE .fullschema"
and capturing the output.  With that, we can reproduce your problems and
perhaps offer some hope of (minor) improvement.

But really, the process of opening the database connection does require
reading and parsing the entire database schema and then running "SELECT *
FROM sqlite_stat1; SELECT * FROM sqlite_stat4;" queries to extract the STAT
data.  That will all take *some* time.  Is there no way that you can open
the database connections in advance and have them standing by?


>
> I understand that having them is a must for decent performance.
> In my specific case I have millions of individual database files.
> This is one reason, among others, that I can't keep them open all the time.
> There are just too many of them. These databases are being opened frequently.
> Let's say 500 times per second. In most cases, just to query a single row.
> Ironically, querying takes only a handful of microseconds, and most
> CPU time is spent reading the same database structure over and over again.
>
> Can you please offer some advice on what can be done to reduce this overhead?
> Even reducing it by deleting the stat tables, which I do not consider an
> option at all, is not enough. 400 microseconds for database
> initialization is a lot if it takes only 5 microseconds to query the data.
>
> What specifically I mean by initialization:
> 1) sqlite3_open(...);
> 2) sqlite3_exec(..." PRAGMA page_size = 4096; "...);
> 3) sqlite3_exec(..." PRAGMA temp_store = MEMORY; "...);
> 4) sqlite3_exec(..." PRAGMA cache_size = 1; "...);
> 5) sqlite3_exec(..." PRAGMA foreign_keys = ON; "...);
> 6) sqlite3_exec(..." PRAGMA synchronous = NORMAL; "...);
>
> The sequence is actually irrelevant. And if I skip all the pragmas, the
> initialization is just delayed until the first query, so I guess
> there is nothing specific about these pragmas.
>
> Thanks,
> Paul
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] How to speed up database open

2014-12-11 Thread Simon Slavin

On 11 Dec 2014, at 11:51am, Paul  wrote:

> I understand that having them is a must for decent performance.
> In my specific case I have millions of individual database files.
> This is one reason, among others, that I can't keep them open all the time.
> There are just too many of them. These databases are being opened frequently.
> Let's say 500 times per second. In most cases, just to query a single row.
> Ironically, querying takes only a handful of microseconds, and most
> CPU time is spent reading the same database structure over and over again.
> 
> Can you please offer some advice on what can be done to reduce this overhead?

The problem with this is that it cannot be solved by SQLite's programmers 
because most of the time is taken by operating system calls.  Merely opening a 
file (which you no doubt know is not done by sqlite3_open() but delayed until 
the first access) is a time-consuming procedure.  Once SQLite has access to the 
data it is, as you have shown, very fast.

You explain that you have millions of individual database files.  Is that the 
only reason you can't open the database and keep it open, or are there others ? 
 Also, do all these separate databases have the same tables with the same 
columns in ?

My normal advice would be that before you start querying you merge your 
millions of separate database files into one big one.  Judging by the degree of 
technical information in your question you don't need me to suggest ways of 
programming or scripting this, or of keeping a merged central copy up-to-date.  
The only question is whether it is appropriate to your circumstances.

Simon.


[sqlite] How to speed up database open

2014-12-11 Thread Paul

Hello.

In my specific case I need to open the database as fast as possible.
The usual working cycle is: open -> select a small data set -> close.
It is irrelevant how much time it takes to open the database when
data is being added or updated, since that happens infrequently.
But for selects it's a different story: selects are frequent.
The database structure is pretty complex, but I would say not overly so:
21 tables (including sqlite_stat1 and sqlite_stat4) and 11 indices.

For me, it takes 1.2 ms to read the database structure. And the process
of reading the database structure is strictly CPU-bound. When I run a
profiler I see 90% of the time is spent in the sqlite3InitOne() function and
64% inside loadStatTbl(), which is called by sqlite3InitOne().
Times are measured by performing hundreds of thousands of cycles:
OPEN -> SELECT -> CLOSE
This procedure is not I/O bound; the database is small (2 MiB) and easily
fits in the file system cache. I have double-checked with iostat.

The weird thing is that when I drop either sqlite_stat1 or sqlite_stat4,
or both of them, the average time drops from 1.2 ms to 0.4 ms. Why is it
so expensive to read those tables?

I understand that having them is a must for decent performance.
In my specific case I have millions of individual database files.
This is one reason, among others, that I can't keep them open all the time.
There are just too many of them. These databases are being opened frequently.
Let's say 500 times per second. In most cases, just to query a single row.
Ironically, querying takes only a handful of microseconds, and most
CPU time is spent reading the same database structure over and over again.

Can you please offer some advice on what can be done to reduce this overhead?
Even reducing it by deleting the stat tables, which I do not consider an
option at all, is not enough. 400 microseconds for database
initialization is a lot if it takes only 5 microseconds to query the data.

What specifically I mean by initialization:
1) sqlite3_open(...);
2) sqlite3_exec(..." PRAGMA page_size = 4096; "...);
3) sqlite3_exec(..." PRAGMA temp_store = MEMORY; "...);
4) sqlite3_exec(..." PRAGMA cache_size = 1; "...);
5) sqlite3_exec(..." PRAGMA foreign_keys = ON; "...);
6) sqlite3_exec(..." PRAGMA synchronous = NORMAL; "...);

The sequence is actually irrelevant. And if I skip all the pragmas, the
initialization is just delayed until the first query, so I guess
there is nothing specific about these pragmas.

Thanks,
Paul


Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread Drago, William @ MWG - NARDAEAST
rvalCount = 0;
}
}// End while.
stopWatch.Stop();
insertTime = stopWatch.Elapsed;
stopWatch.Reset();
System.Console.Write("Processed " + lineCount + " lines.\r\n");
}// End myReader.
System.Console.WriteLine("\nCommitting data...");
stopWatch.Start();
transaction.Commit();
stopWatch.Stop();
commitTime = stopWatch.Elapsed;
}// End transaction.
}// End SQLCommand.
}// End connection.

elapsedTime = String.Format(timePattern,
insertTime.Hours, insertTime.Minutes, insertTime.Seconds,
insertTime.Milliseconds / 10);
System.Console.WriteLine("Insert time: " + elapsedTime);

elapsedTime = String.Format(timePattern,
commitTime.Hours, commitTime.Minutes, commitTime.Seconds,
commitTime.Milliseconds / 10);
System.Console.WriteLine("Commit time: " + elapsedTime);
}
}
}
---END---

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of J Trahair
Sent: Monday, April 07, 2014 2:33 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] How to speed up a bulk import

Hi

I am using SQLite running under VB.net in an import routine for a series of csv
files, but it seems to be slow, and I don't mean microseconds, I mean minutes.

A typical INSERT might be:

INSERT INTO AllSales (Source, MachineName, Location, UserPIN, TariffName, 
CustomerID, DateOfSale, TimeOfSale, TicketGroup, Action, HasBeenVerified, 
EPOSRecNo, CrossingName, QuantitySold) VALUES ('Terminal', 'HH098016', 'Test 
Company', '6992', 'Test Company', '20140326-135946', '2014-03-26', '13:59:46', 
'Test Company', 'Logout:
1#867.90$2#$3#$

Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread RSmith


On 2014/04/07 20:33, J Trahair wrote:

Hi

I am using SQLite running under VB.net in an import routine for a series of csv files, but it seems to be slow, and I don't mean
microseconds, I mean minutes.


A typical INSERT might be:

INSERT INTO AllSales (Source, MachineName, Location, UserPIN, TariffName, CustomerID, DateOfSale, TimeOfSale, TicketGroup, Action, 
HasBeenVerified, EPOSRecNo, CrossingName, QuantitySold) VALUES ('Terminal', 'HH098016', 'Test Company', '6992', 'Test Company', 
'20140326-135946', '2014-03-26', '13:59:46', 'Test Company', 'Logout: 

Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread Simon Slavin

On 7 Apr 2014, at 7:33pm, J Trahair  wrote:

> Would having a field index or key help?

Adding more indexes and keys makes searches faster, at the expense of making the
original INSERT slower.

As a test, instead of executing the INSERT commands, write the commands
themselves to a text file.  Then add a BEGIN at the beginning and an END at
the end.  Then use the shell tool to .read that file.  Is it faster or slower
than your VB code ?
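
A minimal sketch of such a test file, using hypothetical values for a few of
the columns named in the INSERT above:

BEGIN;
INSERT INTO AllSales (Source, MachineName, Location) VALUES ('Terminal', 'HH098016', 'Test Company');
INSERT INTO AllSales (Source, MachineName, Location) VALUES ('Terminal', 'HH098017', 'Test Company');
END;

Wrapping all the INSERTs in one transaction means one journal commit for the
whole batch instead of one per row, which is usually where the minutes go.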

Simon.


Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread Petite Abeille

On Apr 7, 2014, at 8:33 PM, J Trahair  wrote:

> Any suggestions welcome. Thank you.

One word: transaction.





[sqlite] How to speed up a bulk import

2014-04-07 Thread J Trahair

Hi

I am using SQLite running under VB.net in an import routine for a series
of csv files, but it seems to be slow, and I don't mean microseconds, I
mean minutes.


A typical INSERT might be:

INSERT INTO AllSales (Source, MachineName, Location, UserPIN, 
TariffName, CustomerID, DateOfSale, TimeOfSale, TicketGroup, Action, 
HasBeenVerified, EPOSRecNo, CrossingName, QuantitySold) VALUES 
('Terminal', 'HH098016', 'Test Company', '6992', 'Test Company', 
'20140326-135946', '2014-03-26', '13:59:46', 'Test Company', 'Logout: 

Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane

> Select
>   H1.ID
> from
>   HASH1 H1
> where
>   x1_y1 BETWEEN min11 AND max11 AND
>   x1_y2 BETWEEN min12 AND max12 AND
>   x1_y3 BETWEEN min13 AND max13 AND
>   x1_y4 BETWEEN min14 AND max14 AND
>   x1_y5 BETWEEN min15 AND max15;
>
>

No, it does not work at all! Without an rtree index the speed is a total
disaster (several minutes).


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane
I did it... but it changed nothing :(

On 12/24/2010 3:47 PM, Simon Slavin wrote:
> On 24 Dec 2010, at 8:17am, Vander Clock Stephane wrote:
>
>> I have a key like this
>>
>> 123-098-230-120-111, where (123), (098), (230), (120), (111) are what I
>> call nodes.
>> Nodes are integers between 0 and 255 (bytes).
> You mean from 000 to 254.  255 breaks your system because you are using 
> modulus 255.
>
>> And I need to find "similar" keys.
>> A similar key is one where each node is at most +/- 10 higher or lower than
>> the corresponding node of the other key.
>>
>> So
>>
>> 123-098-230-120-111
>> is similar to
>> 120-097-235-118-110
>>
>> but not similar to
>> 180-197-215-018-010
>>
>> It's for our graphical software.
> Precalculate five sets of minimum and maximum bounds:
>
> min11 = max((<#randomnumber>   % 255)-10,0)
> max11 = min((<#randomnumber>   % 255)+10,255)
>
> Then you can just seize the ten values you need from the table and use them 
> to make up your SELECT command.
>
> Select
>   H1.ID
> from
>   HASH1 H1
> where
>   x1_y1 BETWEEN min11 AND max11 AND
>   x1_y2 BETWEEN min12 AND max12 AND
>   x1_y3 BETWEEN min13 AND max13 AND
>   x1_y4 BETWEEN min14 AND max14 AND
>   x1_y5 BETWEEN min15 AND max15;
>
> This will be faster than trying to get SQLite to do the calculations for 
> every row of your 50 000 000 row table.
>
> It will work much faster with an index on x1_y1.
> It may or may not work faster with an index on 
> (x1_y1,x1_y2,x1_y3,x1_y4,x1_y5).  Try it.
>
> Simon.
>


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Simon Slavin

On 24 Dec 2010, at 12:47pm, Simon Slavin wrote:

> Precalculate five sets of minimum and maximum bounds:
> 
> min11 = max((<#randomnumber>  % 255)-10,0)
> max11 = min((<#randomnumber>  % 255)+10,255)
> 
> Then you can just seize the ten values you need from the table and use them 
> to make up your SELECT command.

Sorry, replace the 'seize the ten values you need from the table' with 'use 
those ten values'.

Simon.


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Simon Slavin

On 24 Dec 2010, at 8:17am, Vander Clock Stephane wrote:

> I have a key like this
> 
> 123-098-230-120-111, where (123), (098), (230), (120), (111) are what I
> call nodes.
> Nodes are integers between 0 and 255 (bytes).

You mean from 000 to 254.  255 breaks your system because you are using modulus 
255.

> And I need to find "similar" keys.
> A similar key is one where each node is at most +/- 10 higher or lower than
> the corresponding node of the other key.
> 
> So
> 
> 123-098-230-120-111
> is similar to
> 120-097-235-118-110
> 
> but not similar to
> 180-197-215-018-010
> 
> It's for our graphical software.

Precalculate five sets of minimum and maximum bounds:

min11 = max((<#randomnumber>  % 255)-10,0)
max11 = min((<#randomnumber>  % 255)+10,255)

Then you can just seize the ten values you need from the table and use them to 
make up your SELECT command.

Select
  H1.ID
from
  HASH1 H1
where
  x1_y1 BETWEEN min11 AND max11 AND
  x1_y2 BETWEEN min12 AND max12 AND
  x1_y3 BETWEEN min13 AND max13 AND
  x1_y4 BETWEEN min14 AND max14 AND
  x1_y5 BETWEEN min15 AND max15;

This will be faster than trying to get SQLite to do the calculations for every 
row of your 50 000 000 row table.

It will work much faster with an index on x1_y1.
It may or may not work faster with an index on (x1_y1,x1_y2,x1_y3,x1_y4,x1_y5). 
 Try it.
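
A sketch of the two indexes to try, with made-up index names:

CREATE INDEX HASH1_x1y1_idx ON HASH1 (x1_y1);
CREATE INDEX HASH1_xy_all_idx ON HASH1 (x1_y1, x1_y2, x1_y3, x1_y4, x1_y5);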

Simon.


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane
Can you give me the name of a good SSD you would advise me to buy?
I've decided to give it a try!

Thanks again
stéphane


On 12/24/2010 12:24 AM, John Drescher wrote:
> On Thu, Dec 23, 2010 at 4:06 PM, Vander Clock Stephane
>   wrote:
>> that very very much expensive :( how much you thing ?
>>
> $500 to $600 US.
>
> John
>


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane

> Can you describe what you're trying to do with that command ?

Of course!

I have a key like this:

123-098-230-120-111, where (123), (098), (230), (120), (111) are what I
call nodes.
Nodes are integers between 0 and 255 (bytes).

And I need to find "similar" keys.
A similar key is one where each node is at most +/- 10 higher or lower than
the corresponding node of the other key.

So

123-098-230-120-111
is similar to
120-097-235-118-110

but not similar to
180-197-215-018-010.

It's for our graphical software.

So I add the key into an rtree table like this:

insert into rtreetable
Node1_min = 123-10,
Node1_max = 123+10,
Node2_min = 098-10,
Node2_max = 098+10,
etc..

and then, to detect similar keys, I simply do:

select from rtreetable
where
Node1_min >= 120 and
Node1_max <= 120 and
Node2_min >= 098 and
Node2_max <= 098 and
etc...


Is that OK? (A runnable sketch of the idea follows below.)

Thank you in advance!
stéphane
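
A minimal runnable sketch of the scheme described above, with made-up values.
Note the comparison direction for a "stored range contains the probe value"
lookup: the stored minimum must be at or below the probe value and the stored
maximum at or above it.

CREATE VIRTUAL TABLE rtreetable USING rtree(
  id, n1min, n1max, n2min, n2max, n3min, n3max, n4min, n4max, n5min, n5max);
-- key 123-098-230-120-111, each node widened by +-10:
INSERT INTO rtreetable VALUES(1, 113,133, 88,108, 220,240, 110,130, 101,121);
-- probe with the similar key 120-097-235-118-110; returns id 1:
SELECT id FROM rtreetable
WHERE n1min <= 120 AND n1max >= 120
  AND n2min <= 97  AND n2max >= 97
  AND n3min <= 235 AND n3max >= 235
  AND n4min <= 118 AND n4max >= 118
  AND n5min <= 110 AND n5max >= 110;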



Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
Hmmm,

how can a spatial index help me more than the SQLite rtree index?

I just need to do:

Select
 H1.ID
from
 HASH1 H1
where
 x1_y1_min >= x and
 x1_y1_max <= y and
 x1_y2_min >= z and
 x1_y2_max <= w and
 x1_y3_min >= a and
 x1_y3_max <= b and
 x1_y4_min >= c and
 x1_y4_max <= d and
 x1_y5_min >= e and
 x1_y5_max <= f;

Thanks in advance
stéphane



On 12/24/2010 12:40 AM, stormtrooper wrote:
> I'd be surprised if you run out of memory, updating a field, even with a
> 50,000,000 row table.
>
> Also, if you are working with real geospatial data such as Imagery or vector
> data, there are many applications that may be suited for these calculations.
>
> Spatialite is a Sqlite extension that has spatial indexes(rtree) and spatial
> functions such as buffer, intersect,etc
>
>
>
> Vander Clock Stephane wrote:
>> I don't know, but I'm quite sure not, because the cost of updating all the
>> rows in the Hash table would be much, much more expensive ...
>> and also this solution is absolutely not multi-threaded :(
>>
>> On 12/23/2010 11:46 PM, stormtrooper wrote:
>>> would it run faster if you add two columns to the Hash table - randmax
>>> and
>>> randmin
>>>
>>> update Hash set randmax = max((<#randomnumber>   % 255)-10,0)
>>> update Hash set randmin = min((<#randomnumber>   % 255)+10,255)
>>>
>>> CREATE INDEX HASH_RMIN_IDX ON HASH (RANDMIN);
>>> CREATE INDEX HASH_RMAX_IDX ON HASH (RANDMAX);
>>>
>>> Select
>>>   H1.ID
>>> from
>>>   HASH1 H1
>>> where
>>>   x1_y1_min >= randmax and
>>>   x1_y1_max <= randmin and
>>>   x1_y2_min >= randmax and
>>>   x1_y2_max <= randmin and
>>>   x1_y3_min >= randmax and
>>>   x1_y3_max <= randmin and
>>>   x1_y4_min >= randmax and
>>>   x1_y4_max <= randmin and
>>>   x1_y5_min >= randmax and
>>>   x1_y5_max <= randmin;
>>>
>>>
>>>
>>
>>


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Simon Slavin

On 23 Dec 2010, at 8:19pm, Vander Clock Stephane wrote:

> in my test all the random numbers are different ...

Can you describe what you're trying to do with that command ?  Or can anyone 
else explain to me what Stephane is trying to do ?  I don't see the point of a 
construction like this

Select
  H1.ID
from
  HASH1 H1
where
  x1_y1_min >= max((<#randomnumber> % 255)-10,0) and
  x1_y1_max <= min((<#randomnumber> % 255)+10,255) and
  x1_y2_min >= max((<#randomnumber> % 255)-10,0) and
  x1_y2_max <= min((<#randomnumber> % 255)+10,255) and
  x1_y3_min >= max((<#randomnumber> % 255)-10,0) and
  x1_y3_max <= min((<#randomnumber> % 255)+10,255) and
  x1_y4_min >= max((<#randomnumber> % 255)-10,0) and
  x1_y4_max <= min((<#randomnumber> % 255)+10,255) and
  x1_y5_min >= max((<#randomnumber> % 255)-10,0) and
  x1_y5_max <= min((<#randomnumber> % 255)+10,255);


Re: [sqlite] how to speed up this ?

2010-12-23 Thread stormtrooper

I'd be surprised if you run out of memory, updating a field, even with a
50,000,000 row table. 

Also, if you are working with real geospatial data such as Imagery or vector
data, there are many applications that may be suited for these calculations.

Spatialite is an SQLite extension that has spatial indexes (rtree) and spatial
functions such as buffer, intersect, etc.



Vander Clock Stephane wrote:
> 
> I don't know, but I'm quite sure not, because the cost of updating all the
> rows in the Hash table would be much, much more expensive ...
> and also this solution is absolutely not multi-threaded :(
> 
> On 12/23/2010 11:46 PM, stormtrooper wrote:
>> would it run faster if you add two columns to the Hash table - randmax
>> and
>> randmin
>>
>> update Hash set randmax = max((<#randomnumber>  % 255)-10,0)
>> update Hash set randmin = min((<#randomnumber>  % 255)+10,255)
>>
>> CREATE INDEX HASH_RMIN_IDX ON HASH (RANDMIN);
>> CREATE INDEX HASH_RMAX_IDX ON HASH (RANDMAX);
>>
>> Select
>>   H1.ID
>> from
>>   HASH1 H1
>> where
>>   x1_y1_min >= randmax and
>>   x1_y1_max <= randmin and
>>   x1_y2_min >= randmax and
>>   x1_y2_max <= randmin and
>>   x1_y3_min >= randmax and
>>   x1_y3_max <= randmin and
>>   x1_y4_min >= randmax and
>>   x1_y4_max <= randmin and
>>   x1_y5_min >= randmax and
>>   x1_y5_max <= randmin;
>>
>>
>>
> 
> 




Re: [sqlite] how to speed up this ?

2010-12-23 Thread John Drescher
On Thu, Dec 23, 2010 at 4:06 PM, Vander Clock Stephane
 wrote:
> That is very, very expensive :( How much do you think?
>

$500 to $600 US.

John


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
That is very, very expensive :( How much do you think?



On 12/23/2010 11:55 PM, John Drescher wrote:
>> I'm afraid so ... but what will it be with 50 000 000 rows? I don't
>> have 100 gigabytes of memory :(
> I would get a 256GB SSD.
>
> John
>


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
I don't know, but I'm quite sure not, because the cost of updating all the
rows in the Hash table would be much, much more expensive ...
and also this solution is absolutely not multi-threaded :(

On 12/23/2010 11:46 PM, stormtrooper wrote:
> would it run faster if you add two columns to the Hash table - randmax and
> randmin
>
> update Hash set randmax = max((<#randomnumber>  % 255)-10,0)
> update Hash set randmin = min((<#randomnumber>  % 255)+10,255)
>
> CREATE INDEX HASH_RMIN_IDX ON HASH (RANDMIN);
> CREATE INDEX HASH_RMAX_IDX ON HASH (RANDMAX);
>
> Select
>   H1.ID
> from
>   HASH1 H1
> where
>   x1_y1_min >= randmax and
>   x1_y1_max <= randmin and
>   x1_y2_min >= randmax and
>   x1_y2_max <= randmin and
>   x1_y3_min >= randmax and
>   x1_y3_max <= randmin and
>   x1_y4_min >= randmax and
>   x1_y4_max <= randmin and
>   x1_y5_min >= randmax and
>   x1_y5_max <= randmin;
>
>
>


Re: [sqlite] how to speed up this ?

2010-12-23 Thread John Drescher
> I'm afraid so ... but what will it be with 50 000 000 rows? I don't
> have 100 gigabytes of memory :(

I would get a 256GB SSD.

John


Re: [sqlite] how to speed up this ?

2010-12-23 Thread stormtrooper

Would it run faster if you add two columns to the Hash table - randmax and
randmin?

update Hash set randmax = max((<#randomnumber> % 255)-10,0)
update Hash set randmin = min((<#randomnumber> % 255)+10,255)

CREATE INDEX HASH_RMIN_IDX ON HASH (RANDMIN);
CREATE INDEX HASH_RMAX_IDX ON HASH (RANDMAX); 

Select
   H1.ID
from
   HASH1 H1
where
   x1_y1_min >= randmax and
   x1_y1_max <= randmin and
   x1_y2_min >= randmax and
   x1_y2_max <= randmin and
   x1_y3_min >= randmax and
   x1_y3_max <= randmin and
   x1_y4_min >= randmax and
   x1_y4_max <= randmin and
   x1_y5_min >= randmax and
   x1_y5_max <= randmin; 






Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane

> Right.  So you have a database with 2 000 000 rows that is 1.8GB
> So your first 1 000 000 rows takes up about 1GB.
> And your test case with just 1 000 000 rows in runs really fast.
>
> So what is happening is that most of the first 1 000 000 rows fits in memory. 
>  Once the database gets bigger than that the application has to keep fetching 
> information from disk all the time, and that's far slower.
>

I'm afraid so ... but what will it be with 50 000 000 rows? I don't
have 100 gigabytes of memory :(
With 50 000 000 rows and a btree index, like WHERE fieldA = yyy, the result
is immediate! So why not also
with an rtree index?

Thank you in advance
stéphane

>> how to speed up this query :
>>
>> Select
>>   H1.ID
>> from
>>   HASH1 H1
>> where
>>   x1_y1_min >= max((<#randomnumber> % 255)-10,0) and
>>   x1_y1_max <= min((<#randomnumber> % 255)+10,255) and
>>   x1_y2_min >= max((<#randomnumber> % 255)-10,0) and
>>   x1_y2_max <= min((<#randomnumber> % 255)+10,255) and
>>   x1_y3_min >= max((<#randomnumber> % 255)-10,0) and
>>   x1_y3_max <= min((<#randomnumber> % 255)+10,255) and
>>   x1_y4_min >= max((<#randomnumber> % 255)-10,0) and
>>   x1_y4_max <= min((<#randomnumber> % 255)+10,255) and
>>   x1_y5_min >= max((<#randomnumber> % 255)-10,0) and
>>   x1_y5_max <= min((<#randomnumber> % 255)+10,255);
> If there is only one random number involved in all that, precalculate all the 
> max and mins:
>
> ll = max((<#randomnumber>  % 255)-10,0)
> mm = min((<#randomnumber>  % 255)+10,255)

In my test all the random numbers are different ...


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Simon Slavin

On 23 Dec 2010, at 7:56pm, Vander Clock Stephane wrote:

> Windows 2008 R2 with 8GB of memory.
> 
> but actually I run the test on a beta server with only 1 GB of memory and
> win2003 ...
> 
> the database with 2 000 000 rows is 1.8 GB

Right.  So you have a database with 2 000 000 rows that is 1.8 GB.
So your first 1 000 000 rows takes up about 1GB.
And your test case with just 1 000 000 rows in runs really fast.

So what is happening is that most of the first 1 000 000 rows fits in memory.  
Once the database gets bigger than that the application has to keep fetching 
information from disk all the time, and that's far slower.

> how to speed up this query :
> 
> Select
>   H1.ID
> from
>   HASH1 H1
> where
>   x1_y1_min >= max((<#randomnumber> % 255)-10,0) and
>   x1_y1_max <= min((<#randomnumber> % 255)+10,255) and
>   x1_y2_min >= max((<#randomnumber> % 255)-10,0) and
>   x1_y2_max <= min((<#randomnumber> % 255)+10,255) and
>   x1_y3_min >= max((<#randomnumber> % 255)-10,0) and
>   x1_y3_max <= min((<#randomnumber> % 255)+10,255) and
>   x1_y4_min >= max((<#randomnumber> % 255)-10,0) and
>   x1_y4_max <= min((<#randomnumber> % 255)+10,255) and
>   x1_y5_min >= max((<#randomnumber> % 255)-10,0) and
>   x1_y5_max <= min((<#randomnumber> % 255)+10,255);

If there is only one random number involved in all that, precalculate all the 
max and mins:

ll = max((<#randomnumber> % 255)-10,0)
mm = min((<#randomnumber> % 255)+10,255)

Simon.


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
Windows 2008 R2 with 8GB of memory.

But actually I run the test on a beta server with only 1 GB of memory and
win2003 ...

The database with 2 000 000 rows is 1.8 GB.

Thanks in advance!
stéphane

On 12/23/2010 10:52 PM, Simon Slavin wrote:
> On 23 Dec 2010, at 7:36pm, Vander Clock Stephane wrote:
>
>> when there are fewer than 1 000 000 rows it returns in less than 10 ms;
>> with more than 1 000 000 rows it returns in around 350 ms :(
>> and I need more than 50 000 000 rows :( :(
> How much memory do you have in that computer ?  What operating system are you 
> running ?
>
> Simon.
>


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Simon Slavin

On 23 Dec 2010, at 7:36pm, Vander Clock Stephane wrote:

> when there are fewer than 1 000 000 rows it returns in less than 10 ms;
> with more than 1 000 000 rows it returns in around 350 ms :(
> and I need more than 50 000 000 rows :( :(

How much memory do you have in that computer ?  What operating system are you 
running ?

Simon.


Re: [sqlite] How to speed up a query between two tables?

2009-09-15 Thread Kermit Mei
On Tue, 2009-09-15 at 22:31 +1000, John Machin wrote:
> On 15/09/2009 7:25 PM, Kermit Mei wrote:
> > On Tue, 2009-09-15 at 17:37 +1000, John Machin wrote:
> >> On 15/09/2009 4:47 PM, Kermit Mei wrote:
> >>>
> >>> sqlite> SELECT HomeDev.text, ZPhDev.id
> >>>...> FROM ZPhDev
> >>>...> INNER JOIN HomeDev ON (HomeDev.id = ZPhDev.id)
> >> Are you sure that you mean ZPhDev.id in the ON clause?? 
> > 
> > Oh, I'm sorry, that should be:
> > CREATE VIEW ZHView AS SELECT HomeDev.text, ZPhDev.id FROM ZPhDev INNER
> > JOIN HomeDev ON (HomeDev.id = ZPhDev.HomeDevId) ORDER By HomeDev.text;
> > 
> > 
> >> If so, what's 
> >> the point of having the ZPhDev.HomeDevId which isn't used and whose name 
> >> suggests that it should be used? 
> 
> 
> You didn't answer this question:
> 
>  >> If not, what's the point of having
>  >> ZphDev.id? Could the second table be collapsed into an isaZPhDev
>  >> (true/false) column in the first table?

ZPhDev.id is used in other tables to reference the items in ZPhDev;
the HomeDev table doesn't use it directly.

> 
> >>> Then, how can I speed up this kind of operation?
> >> Standard advice: ensure that you have indexes on whatever the join 
> >> columns should be. You might get more help if you showed the CREATE 
> >> TABLE (including pk and any other constraints) and CREATE INDEX 
> >> statements, and added a note about the relationship between the two tables.
> 
> And the relationship between the tables is ..??

In HomeDev, some items are descriptions of the items in ZPhDev.


> > The homeDev table:
> > CREATE TABLE HomeDev (color TEXT, secId NUMERIC, background TEXT,
> > pageNum int, icon_y int, icon_x int, size_y int, size_x int, pos_y int,
> > pos_x int, id INTEGER PRIMARY KEY, text varchar(15), icon varchar(50),
> > type int);
> > 
> > The ZPhDev table:
> > CREATE TABLE ZPhDev (HomeDevId NUMERIC, id INTEGER PRIMARY KEY);
> 
> No relationship comments, no foreign key clause, ... no useful info.
> 
> > How can I create index for them?
> 
> With a CREATE INDEX statement. Please consider reading an SQL text ... 
> anyone have any suggestions for Kermit?

I'm sorry for my poor English. I'm not asking about SQL syntax; I just
want to know which columns to create an index on.

> 
> HomeDev.id already has an index, automatically created because it is a 
> primary key and in this case it's not an extra index, it's the rowid index.
> 
> >>>  Use view like this:
> >>>
> >>> sqlite> CREATE VIEW ZHview AS
> >>>...> SELECT HomeDev.text, ZPhDev.id
> >>>...> FROM ZPhDev
> >>>...> INNER JOIN HomeDev ON (HomeDev.id = ZPhDev.id)
> >>>...> ORDER By HomeDev.text;
> >>>
> >>> And can it speed up my operation?
> >> Read this: http://en.wikipedia.org/wiki/View_%28database%29 and see if 
> >> you can find any indication that a (non-materialised) view speeds 
> >> anything up at runtime.
> > 
> > Thanks, but I can't find anything about how to speed it up on the wiki. 
> 
> Three possibilities, any one or more or all of which could be true: the 
> information about how a non-materialised view can speed up a query is:
> (1) there but you can't find it.
> (2) not there because the wiki authors left it out for some reason.
> (3) not there because such a view can't speed up a query.
> 
> Here's a hint: re-read the section on Equivalence, which says in essence 
> that selecting from a view makes the programmer's job easier, but at 
> runtime it's just a text substitution exercise. There's a very close 
> parallel to using functions in procedural programming languages -- 
> replacing say 3 copies of the same code with one copy plus 3 calls to it 
>   has little effect on the runtime speed.

Yes, I see. 

Thank you very much. I'm sorry if I have wasted your time.

Kermit


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up a query between two tables?

2009-09-15 Thread John Machin
On 15/09/2009 7:25 PM, Kermit Mei wrote:
> On Tue, 2009-09-15 at 17:37 +1000, John Machin wrote:
>> On 15/09/2009 4:47 PM, Kermit Mei wrote:
>>>
>>> sqlite> SELECT HomeDev.text, ZPhDev.id
>>>...> FROM ZPhDev
>>>...> INNER JOIN HomeDev ON (HomeDev.id = ZPhDev.id)
>> Are you sure that you mean ZPhDev.id in the ON clause?? 
> 
> Oh, I'm sorry, that should be:
> CREATE VIEW ZHView AS SELECT HomeDev.text, ZPhDev.id FROM ZPhDev INNER
> JOIN HomeDev ON (HomeDev.id = ZPhDev.HomeDevId) ORDER By HomeDev.text;
> 
> 
>> If so, what's 
>> the point of having the ZPhDev.HomeDevId which isn't used and whose name 
>> suggests that it should be used? 


You didn't answer this question:

 >> If not, what's the point of having
 >> ZphDev.id? Could the second table be collapsed into an isaZPhDev
 >> (true/false) column in the first table?

>>> Then, How can I speed up this kind of operation?
>> Standard advice: ensure that you have indexes on whatever the join 
>> columns should be. You might get more help if you showed the CREATE 
>> TABLE (including pk and any other constraints) and CREATE INDEX 
>> statements, and added a note about the relationship between the two tables.

And the relationship between the tables is ..??

> The homeDev table:
> CREATE TABLE HomeDev (color TEXT, secId NUMERIC, background TEXT,
> pageNum int, icon_y int, icon_x int, size_y int, size_x int, pos_y int,
> pos_x int, id INTEGER PRIMARY KEY, text varchar(15), icon varchar(50),
> type int);
> 
> The ZPhDev table:
> CREATE TABLE ZPhDev (HomeDevId NUMERIC, id INTEGER PRIMARY KEY);

No relationship comments, no foreign key clause, ... no useful info.

> How can I create an index for them?

With a CREATE INDEX statement. Please consider reading an SQL text ... 
anyone have any suggestions for Kermit?

HomeDev.id already has an index, automatically created because it is a 
primary key and in this case it's not an extra index, it's the rowid index.
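
For the other join column (the one being asked about), a minimal sketch
would be:

CREATE INDEX ZPhDev_HomeDevId ON ZPhDev(HomeDevId);
-- optional; may help the ORDER BY HomeDev.text step, worth measuring:
CREATE INDEX HomeDev_text ON HomeDev(text);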

>>>  Use view like this:
>>>
>>> sqlite> CREATE VIEW ZHview AS
>>>...> SELECT HomeDev.text, ZPhDev.id
>>>...> FROM ZPhDev
>>>...> INNER JOIN HomeDev ON (HomeDev.id = ZPhDev.id)
>>>...> ORDER By HomeDev.text;
>>>
>>> And can it speed up my operation?
>> Read this: http://en.wikipedia.org/wiki/View_%28database%29 and see if 
>> you can find any indication that a (non-materialised) view speeds 
>> anything up at runtime.
> 
> Thanks, but I can't find anything about how to speed it up on the wiki. 

Three possibilities, any one or more or all of which could be true: the 
information about how a non-materialised view can speed up a query is:
(1) there but you can't find it.
(2) not there because the wiki authors left it out for some reason.
(3) not there because such a view can't speed up a query.

Here's a hint: re-read the section on Equivalence, which says in essence 
that selecting from a view makes the programmer's job easier, but at 
runtime it's just a text substitution exercise. There's a very close 
parallel to using functions in procedural programming languages -- 
replacing say 3 copies of the same code with one copy plus 3 calls to it 
  has little effect on the runtime speed.
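
A minimal illustration of that equivalence, using the tables from this
thread:

CREATE VIEW ZHView AS
  SELECT HomeDev.text, ZPhDev.id
  FROM ZPhDev INNER JOIN HomeDev ON HomeDev.id = ZPhDev.HomeDevId;

-- these two run with the same query plan; the view is expanded like a macro:
SELECT * FROM ZHView ORDER BY text;
SELECT HomeDev.text, ZPhDev.id
  FROM ZPhDev INNER JOIN HomeDev ON HomeDev.id = ZPhDev.HomeDevId
  ORDER BY HomeDev.text;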
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up a query between two tables?

2009-09-15 Thread Kermit Mei
On Tue, 2009-09-15 at 17:37 +1000, John Machin wrote:
> On 15/09/2009 4:47 PM, Kermit Mei wrote:
> > Hello community!
> > 
> > I have two tables:
> > 
> > HomeDev(id, text, icon)
> > 
> > and
> > 
> > ZPhDev(id,HomeDevId)
> > 
> > Now, I must usually access(read-only) the items:
> > 
> > sqlite> SELECT HomeDev.text, ZPhDev.id
> >...> FROM ZPhDev
> >...> INNER JOIN HomeDev ON (HomeDev.id = ZPhDev.id)
> 
> Are you sure that you mean ZPhDev.id in the ON clause?? 

Oh, I'm sorry, that should be:
CREATE VIEW ZHView AS SELECT HomeDev.text, ZPhDev.id FROM ZPhDev INNER
JOIN HomeDev ON (HomeDev.id = ZPhDev.HomeDevId) ORDER By HomeDev.text;


> If so, what's 
> the point of having the ZPhDev.HomeDevId which isn't used and whose name 
> suggests that it should be used? If not, what's the point of having 
> ZphDev.id? Could the second table be collapsed into an isaZPhDev 
> (true/false) column in the first table?
> 
> >...> ORDER By HomeDev.text;
> > 
> > Then, How can I speed up this kind of operation?
> 
> Standard advice: ensure that you have indexes on whatever the join 
> columns should be. You might get more help if you showed the CREATE 
> TABLE (including pk and any other constraints) and CREATE INDEX 
> statements, and added a note about the relationship between the two tables.

The homeDev table:
CREATE TABLE HomeDev (color TEXT, secId NUMERIC, background TEXT,
pageNum int, icon_y int, icon_x int, size_y int, size_x int, pos_y int,
pos_x int, id INTEGER PRIMARY KEY, text varchar(15), icon varchar(50),
type int);

The ZPhDev table:
CREATE TABLE ZPhDev (HomeDevId NUMERIC, id INTEGER PRIMARY KEY);

How can I create an index for them?



> >  Use view like this:
> > 
> > sqlite> CREATE VIEW ZHview AS
> >...> SELECT HomeDev.text, ZPhDev.id
> >...> FROM ZPhDev
> >...> INNER JOIN HomeDev ON (HomeDev.id = ZPhDev.id)
> >...> ORDER By HomeDev.text;
> > 
> > And can it speed up my operation?
> 
> Read this: http://en.wikipedia.org/wiki/View_%28database%29 and see if 
> you can find any indication that a (non-materialised) view speeds 
> anything up at runtime.

Thanks, but I can't find anything about how to speed it up on the wiki. 



Regards
Kermit Mei

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-29 Thread MikeW
Julian Bui <[EMAIL PROTECTED]> writes:

> 
> Hey MikeW,
> 
> The article you posted seems like something I might want to try.  I am
> currently using JDBC to embed sqlite in my java app.  Do you know if there
> are equivalent statements for java?
> 
> Please let me know if you do.
> 
> Thanks,
> Julian
> 

Just issue the PRAGMA command(s) like any other SQL commands,
http://www.sqlite.org/pragma.html
presumably with a
 connection.createStatement().execute("PRAGMA temp_store = 2") etc.
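
The PRAGMAs most often suggested for a read-only database, in plain SQL
(the values here are illustrative, not tuned):

PRAGMA temp_store = 2;            -- keep temporary tables/indices in memory
PRAGMA cache_size = 10000;        -- page cache size in pages; raise it to keep indices hot
PRAGMA locking_mode = EXCLUSIVE;  -- take the file lock once instead of per transaction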

Regards,
MikeW

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-28 Thread Julian Bui
Hey MikeW,

The article you posted seems like something I might want to try.  I am
currently using JDBC to embed sqlite in my java app.  Do you know if there
are equivalent statements for java?

Please let me know if you do.

Thanks,
Julian

On Mon, Oct 27, 2008 at 7:58 AM, MikeW <[EMAIL PROTECTED]> wrote:

> Christophe Leske <[EMAIL PROTECTED]> writes:
>
> > How about the cache size? or does this only pertain to databases which
> > get inserts?
> >
> > Christophe Leske
>
> Speedup tip:
> http://article.gmane.org/gmane.comp.db.sqlite.general/41990
>
> You may have seen it !
>
> MikeW
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-28 Thread Christophe Leske

> How many memory has your embedded project? You can create a new 
> in-memory database and copy there your database data.
>   
That's what I am currently doing, but we are using too much memory this 
way; we are out of spec.



-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-28 Thread Eduardo Morras
At 13:47 15/10/2008, you wrote:
>Hi there,
>
>i am using a 120MB database in an embedded project (a DVD-ROM project)
>and was wondering what I can do to speed up its reading using diverse
>PRAGMA statements.
>The database is locked, meaning that no data is being inserted or
>deleted from it. I am solely after speeding up its reading performance.
>
>Indices have been set, would augmenting the cache size for Sqlite do
>something?

How much memory does your embedded project have? You can create a new 
in-memory database and copy your database data into it.
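
One way to do that from SQL alone, sketched with placeholder table and
column names:

ATTACH DATABASE ':memory:' AS mem;
CREATE TABLE mem.cities AS SELECT * FROM main.cities;  -- copy the hot table
CREATE INDEX mem.cities_name ON cities(name);          -- re-create the indices you need
-- ...then point the frequent queries at mem.cities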

>Grateful for any info,
>
>--
>Christophe Leske

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-28 Thread Christophe Leske

> Speedup tip:
> http://article.gmane.org/gmane.comp.db.sqlite.general/41990
>   
Hello Mike,

first of all, thank you for your tips.

Yes, i saw that posting, and i am already using it in my code.

But thanks again,

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-27 Thread MikeW
Christophe Leske <[EMAIL PROTECTED]> writes:

> How about the cache size? or does this only pertain to databases which 
> get inserts?
> 
> Christophe Leske

Speedup tip:
http://article.gmane.org/gmane.comp.db.sqlite.general/41990

You may have seen it !

MikeW

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-23 Thread Griggs, Donald
Regarding: ... my criteria for a lean and slim SQlite3 db access ... I
can use in my DVD-ROM project. 

I don't know for sure, but I'd suspect that any time saved by using
prepared statements would be vastly overwhelmed by even one extra
DVD-rom seek.

I imagine you VACUUM the database before burning it to disk, right?
Setting as large a cache size as practical might help to keep some
indices in RAM.

If by chance this is reasonable for your application, you could even
copy some tables to a database in temporary space on the target system's
hard drive, and ATTACH this db to your DVD-rom DB.
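
A sketch of that ATTACH idea (the path and table names are hypothetical):

ATTACH DATABASE 'C:/temp/dvdcache.db' AS hd;  -- lives on the hard drive
CREATE TABLE hd.hot_table AS SELECT * FROM main.hot_table;
CREATE INDEX hd.hot_table_key ON hot_table(key);
-- point the frequent queries at hd.hot_table to avoid DVD seeks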

If there are some long strings (or blobs) in the database, then it may
help to define key columns first, and put the lengthier columns last --
or in their own separate tables.

Even if you have plenty of space on the DVD-ROM, there might even be a
performance improvement from using a compressed file system, so as to
reduce seeks.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-23 Thread Christophe Leske

> Or you could have a look at the Perl SQLite functionality:
> http://search.cpan.org/~msergeant/DBD-SQLite-0.31/lib/DBD/SQLite.pm
>
> Yes, Perl is weird initially if you have only (say) written VB !!
> Improves the résumé though ... and your ninja status !
>   
Perl is fine, no problem, but it does not satisfy my criteria for a lean 
and slim SQlite3 db access (read "command line interpreter") I can use 
in my DVD-ROM project.

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-23 Thread MikeW
Christophe Leske <[EMAIL PROTECTED]> writes:

> 
> Yes, but I am using Adobe Director as a production environment. This is 
> a single threaded application, which also doesn't allow for threaded 
> calls to a database. Plus, I have no access to the source code of the 
> so-called Xtra (=DLL) which emits the call to the DB.
> 
> All I have is an Xtra which spawns a new thread in which the command line 
> executable is run. I need the thread in order to keep my application 
> running smoothly, which otherwise stalls.
> 
> Regarding the pre-recording of statements: can this be achieved somehow 
> if the parameters of the call change all the time?
> 

Or you could have a look at the Perl SQLite functionality:
http://search.cpan.org/~msergeant/DBD-SQLite-0.31/lib/DBD/SQLite.pm

Yes, Perl is weird initially if you have only (say) written VB !!
Improves the résumé though ... and your ninja status !

Regards,
MikeW



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-20 Thread John Stanton
Christophe Leske wrote:
> John Stanton schrieb:
> 
>>The sqlite3.exe program is set up as a utility and maintenance tool, not 
>>a production environment and is designed to that end.  If you want 
>>maximum performance it is not the way to go; instead embed the Sqlite 
>>calls inside your application and optimize access.  If you are 
>>performing ad-hoc DB tasks then it or one of the many similar function 
>>SQLite tools are appropriate.  One is the Firefox plug-in.
>>
>>You can imagine that having to compile the SQL over and over instead 
>>of storing and re-using the compiled code adds considerably to overhead 
>>on frequently run jobs.
> 
> Yes, but I am using Adobe Director as a production environment. This is 
> a single threaded application, which also doesn't allow for threaded 
> calls to a database. Plus, I have no access to the source code of the 
> so-called Xtra (=DLL) which emits the call to the DB.

You can build your own command line interface embedding SQLite.
> 
> All I have is an Xtra which spawns a new thread in which the command line 
> executable is run. I need the thread in order to keep my application 
> running smoothly, which otherwise stalls.
> 
> Regarding the pre-recording of statements: can this be achieved somehow 
> if the parameters of the call change all the time?

This is where the sqlite3_bind functions come in.
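
That is, the statement is prepared once with placeholders and only the
bound values change between runs. A sketch, with placeholder names:

SELECT * FROM items WHERE category = ?1 AND price < ?2;
-- prepare once; on each run bind new values for ?1 and ?2, step
-- through the results, then reset the statement for reuse
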
> 
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-20 Thread Christophe Leske
John Stanton schrieb:
> The sqlite3.exe program is set up as a utility and maintenance tool, not 
> a production environment and is designed to that end.  If you want 
> maximum performance it is not the way to go; instead embed the Sqlite 
> calls inside your application and optimize access.  If you are 
> performing ad-hoc DB tasks then it or one of the many similar function 
> SQLite tools are appropriate.  One is the Firefox plug-in.
>
> You can imagine that having to compile the SQL over and over instead 
> of storing and re-using the compiled code adds considerably to overhead 
> on frequently run jobs.
Yes, but I am using Adobe Director as a production environment. This is 
a single threaded application, which also doesn't allow for threaded 
calls to a database. Plus, I have no access to the source code of the 
so-called Xtra (=DLL) which emits the call to the DB.

All I have is an Xtra which spawns a new thread in which the command line 
executable is run. I need the thread in order to keep my application 
running smoothly, which otherwise stalls.

Regarding the pre-recording of statements: can this be achieved somehow 
if the parameters of the call change all the time?


-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-19 Thread John Stanton
The sqlite3.exe program is set up as a utility and maintenance tool, not 
a production environment and is designed to that end.  If you want 
maximum performance it is not the way to go; instead embed the Sqlite 
calls inside your application and optimize access.  If you are 
performing ad-hoc DB tasks then it or one of the many similar function 
SQLite tools are appropriate.  One is the Firefox plug-in.

You can imagine that having to compile the SQL over and over instead 
of storing and re-using the compiled code adds considerably to overhead 
on frequently run jobs.
JS

Christophe Leske wrote:
> John,
> 
> thanks for your suggestions, but i am wondering if any of your 
> suggestions can be applied to the sqlite3.exe command line application?
> 
> 
> 
>>Prepare your statements only once and then use bind.  Do not use 
>>  
> 
> How is this done? Can the command line executable be modified in such a way?
> 
> 
>>sqlite3_exec.  Do not open and close the DB for each read, instead open 
>>once and let the cache work.  
> 
> That's already done.
> 
>>Avoid row scans by defining indices. 
> 
> Already there.
> 
>> Use 
>>the new index selection functionality to force the use of the best 
>>index. 
> 
> Would you care to provide more information about this? A simple link 
> would be enough.
> 
> 
>>Place large and less frequently accessed columns at the end of 
>>the Sqlite rows.
>>  
> 
> Ok, thanks. Will do. However, I read out the whole row all the time.
> BTW, does it help to specify which columns I would like instead of all?
> I would like to get all minus one column in general.
> 
>>When you obey these rules you will get very good read performance from 
>>Sqlite.  The caching is important if you are using a slow disk or flash 
>>memory.  Look at shared cache mode if you have multiple users
> 
> Nope, just one from DVD.
> 
> Thanks again,
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-19 Thread Christophe Leske
John,

thanks for your suggestions, but i am wondering if any of your 
suggestions can be applied to the sqlite3.exe command line application?


> Prepare your statements only once and then use bind.  Do not use 
>   
How is this done? Can the command line executable be modified in such a way?

> sqlite3_exec.  Do not open and close the DB for each read, instead open 
> once and let the cache work.  
That's already done.
> Avoid row scans by defining indices. 
Already there.
>  Use 
> the new index selection functionality to force the use of the best 
> index. 
Would you care to provide more information about this? A simple link 
would be enough.

> Place large and less frequently accessed columns at the end of 
> the Sqlite rows.
>   
Ok, thanks. Will do. However, I read out the whole row all the time.
BTW, does it help to specify which columns I would like instead of all?
I would like to get all minus one column in general.
> When you obey these rules you will get very good read performance from 
> Sqlite.  The caching is important if you are using a slow disk or flash 
> memory.  Look at shared cache mode if you have multiple users
Nope, just one from DVD.

Thanks again,

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-17 Thread John Stanton
Prepare your statements only once and then use bind.  Do not use 
sqlite3_exec.  Do not open and close the DB for each read, instead open 
once and let the cache work.  Avoid row scans by defining indices.  Use 
the new index selection functionality to force the use of the best 
index.  Place large and less frequently accessed columns at the end of 
the Sqlite rows.

When you obey these rules you will get very good read performance from 
Sqlite.  The caching is important if you are using a slow disk or flash 
memory.  Look at shared cache mode if you have multiple users.
JS
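
The "new index selection functionality" mentioned above is presumably
the INDEXED BY clause (added around SQLite 3.6.4); a sketch, with names
invented for illustration:

CREATE INDEX items_by_key ON items(key);
SELECT * FROM items INDEXED BY items_by_key WHERE key = ?1;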

MikeW wrote:
> Christophe Leske <[EMAIL PROTECTED]> writes:
> 
> 
>>Hi there,
>>
>>i am using a 120MB database in an embedded project (a DVD-ROM project) 
>>and was wondering what I can do to speed up its reading using diverse 
>>PRAGMA statements.
>>The database is locked, meaning that no data is being inserted or 
>>deleted from it. I am solely after speeding up its reading performance.
>>
>>Indices have been set, would augmenting the cache size for Sqlite do 
>>something?
>>
>>Grateful for any info,
>>
> 
> Did you say that the PRAGMA locking_mode=EXCLUSIVE was set?
> The doc suggests this speeds things up ...
> 
> You are not clear about which aspect is slow - are you already using 
> sqlite3_bind_x() and placeholders (?) in your SQL statements ?
> Doing a sqlite3_prepare every time you do a query can slow things
> down.
> Sorry if you are already using all code-level best practice!!
> 
> MikeW
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-17 Thread Christophe Leske

> When you say 'sqlite.exe' I presume you are referring to 'sqlite3.exe' ?
> (http://www.sqlite.org/sqlite.html)
>   
Yes. And I am using v3.6.4.

> Using the correct INDEX can speed queries up vastly, so if you can
> identify how you are accessing the data, and then set that/those
> columns as INDEXed, that will help ...
>   
Have done that. It almost doubles my database, but it is worth it.

How about the cache size? or does this only pertain to databases which 
get inserts?

> (Techie note 
> http://20bits.com/2008/05/13/interview-questions-database-indexes/)
>   
 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-17 Thread MikeW
 <[EMAIL PROTECTED]> writes:

> 
> Hello Mike, 
> 
> Did you say that the PRAGMA locking_mode=EXCLUSIVE was set?
> The doc suggests this speeds things up ...
> 
> I have that set, yes.
> 
> You are not clear about which aspect is slow - are you already using 
> sqlite3_bind_x() and placeholders (?) in your SQL statements ?
> Doing a sqlite3_prepare every time you do a query can slow things
> down.
> 
> I am using the sqlite.exe command line executable in a separate thread to
> query my database. I don't have
> code level control over the access to it.
> 
> I was thus wondering if there is anything i can do on a query level (as input
> to the command line executable) to
> speed things up.
> 
> Any help is much appreciated, 
> 
> christophe Leske
> 

When you say 'sqlite.exe' I presume you are referring to 'sqlite3.exe' ?
(http://www.sqlite.org/sqlite.html)
If not, update your software !

Using the correct INDEX can speed queries up vastly, so if you can
identify how you are accessing the data, and then set that/those
columns as INDEXed, that will help ...
http://www.sqlite.org/lang_createindex.html
Best choice for index could be
a) field that directly identifies record, eg. serial number
b) field that allows quick rejection of most non-matching records,
reducing amount of data to search by other methods, eg. surname

Of course, creating an index uses more space in the DB, so there
is always a tradeoff.
(Techie note http://20bits.com/2008/05/13/interview-questions-database-indexes/)

MikeW



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-17 Thread leske
Hello Mike, 


Did you say that the PRAGMA locking_mode=EXCLUSIVE was set?
The doc suggests this speeds things up ...


I have that set, yes.



You are not clear about which aspect is slow - are you already using 
sqlite3_bind_x() and placeholders (?) in your SQL statements ?
Doing a sqlite3_prepare every time you do a query can slow things
down.


I am using the sqlite.exe command line executable in a separate thread to query 
my database. I don't have code level control over the access to it.

I was thus wondering if there is anything i can do on a query level (as input 
to the command line executable) to speed things up.

Any help is much appreciated, 

christophe Leske
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-17 Thread MikeW
Christophe Leske <[EMAIL PROTECTED]> writes:

> 
> Hi there,
> 
> i am using a 120MB database in an embedded project (a DVD-ROM project) 
> and was wondering what I can do to speed up its reading using diverse 
> PRAGMA statements.
> The database is locked, meaning that no data is being inserted or 
> deleted from it. I am solely after speeding up its reading performance.
> 
> Indices have been set, would augmenting the cache size for Sqlite do 
> something?
> 
> Grateful for any info,
> 
Did you say that the PRAGMA locking_mode=EXCLUSIVE was set?
The doc suggests this speeds things up ...

You are not clear about which aspect is slow - are you already using 
sqlite3_bind_x() and placeholders (?) in your SQL statements ?
Doing a sqlite3_prepare every time you do a query can slow things
down.
Sorry if you are already using all code-level best practice!!

MikeW



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-15 Thread Cory Nelson
That's a tricky one.  Databases like random access, but DVD-ROM not so
much :)  Maybe delete indexes to force a full table scan?

On Wed, Oct 15, 2008 at 5:47 AM, Christophe Leske <[EMAIL PROTECTED]> wrote:
> Hi there,
>
> i am using a 120MB database in an embedded project (a DVD-ROM project)
> and was wondering what I can do to speed up its reading using diverse
> PRAGMA statements.
> The database is locked, meaning that no data is being inserted or
> deleted from it. I am solely after speeding up its reading performance.
>
> Indices have been set, would augmenting the cache size for Sqlite do
> something?
>
> Grateful for any info,
>
> --
> Christophe Leske

-- 
Cory Nelson
http://www.int64.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to speed up read-only databases?

2008-10-15 Thread Christophe Leske
Hi there,

i am using a 120MB database in an embedded project (a DVD-ROM project) 
and was wondering what I can do to speed up its reading using diverse 
PRAGMA statements.
The database is locked, meaning that no data is being inserted or 
deleted from it. I am solely after speeding up its reading performance.

Indices have been set, would augmenting the cache size for Sqlite do 
something?

Grateful for any info,

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Stephen Oberholtzer
On Wed, Jun 4, 2008 at 7:12 AM, Christophe Leske <[EMAIL PROTECTED]> wrote:

> Wilson, Ron P schrieb:
> > I'm not a guru yet, but I think you are not using the latlon index in
> > your query.  Perhaps if you index on lat and lon separately your query
> > will use those indices.  I think the lines below indicate using the
> > indices on class_dds and rowid.
> >
>
> Thanks also for the tiling hint, but my application is already written,
> and I have to stick to the database given. I might reconsider though if
> the performance is still bad.
>

Does that mean you can't change the application at all?

Remember, the tiling (I call it "blockmap" since that's what Doom called it)
method doesn't require changing your existing table layout; it only requires
adding new tables.  Furthermore, the contents of these new tables can be
computed from the existing data in the Cities table -- all your application
would need to do is check for the existence of those tables, and if it
doesn't find them, create them and populate them.  It'd be an expensive
initial setup, but you only have to pay it once.

Also recall that SQLite supports multiple databases:

attach 'blockmap.db' as bm;
-- the following two tables are created in the blockmap.db file
-- because they have the 'bm.' qualifier
create table bm.blockmap (id integer primary key, lat real, long real);
create table bm.blockmapCity (blockmapId integer not null, cityId integer
not null);

-- but you don't need to use the 'bm.' qualifier, *unless* more than one
-- of the attached databases has a blockmapCity
-- by the way: tmp_blockfilter is a temporary table containing blockmap
-- IDs.  SQLite won't use an index for an IN clause, but it will use one
-- if you're joining against another table.
select c.* from Cities c join blockmapCity bc on bc.cityId=c.id join
tmp_blockfilter tb on tb.blockmapId = bc.blockmapId;

-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Griggs, Donald
Hi Christophe,

Regarding:  What I find to be weird is that just ONE index seems to
yield the same results as several fields indexed:

Perhaps you're using this already, but prefixing your SELECT with
"EXPLAIN QUERY PLAN" will quickly identify exactly which indices, if
any, are used.  Its output is much briefer than that of the plain
"EXPLAIN" prefix.

Also, Sqlite uses at most *one* index per table per SELECT, I believe.
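
For example, against the Cities query discussed in this thread:

EXPLAIN QUERY PLAN
SELECT * FROM Cities
 WHERE longitude_DDS BETWEEN 6.765103 AND 7.089129
   AND latitude_DDS BETWEEN 44.261771 AND 44.424779;
-- the output names the table scanned and, if one is chosen, the index used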

 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske

> class_dds has a maximum value of 6, so the where-clause "class_dds<11" 
> is totally unnecessary - if I ditch this part, the response time 
> comes down to 900 ms from 2700 ms for my request.
> I will now time again.
>   
Some new timings - I basically got it. What I find weird is that 
just ONE index seems to yield the same results as several fields indexed:

no indices:
-- "TIME for Between statement:2548"
-- 11
-- "TIME for <> statement:2528"
-- 11
-- "TIME for <> statement without ORDER BY-clause:1070"
-- 11
-- "TIME for <> statement without ORDER BY-clause:987"
-- 11

On index (longitude) - WHOAA!:

-- "TIME for Between statement:18"
-- 11
-- "TIME for <> statement:11"
-- 11
-- "TIME for <> statement without ORDER BY-clause:10"
-- 11
-- "TIME for <> statement without ORDER BY-clause:9"
-- 11

two indices (longitude and latitude):
-- "TIME for Between statement:11"
-- 11
-- "TIME for <> statement:11"
-- 11
-- "TIME for <> statement without ORDER BY-clause:10"
-- 11
-- "TIME for <> statement without ORDER BY-clause:10"
-- 11


3 field index (class_dds, longitude, latitude) - what gives?:
-- "TIME for Between statement:2540"
-- 11
-- "TIME for <> statement:2528"
-- 11
-- "TIME for <> statement without ORDER BY-clause:999"
-- 11
-- "TIME for <> statement without ORDER BY-clause:991"
-- 11

a 2 field index (longitude, latitude):
-- "TIME for Between statement:10"
-- 11
-- "TIME for <> statement:11"
-- 11
-- "TIME for <> statement without ORDER BY-clause:10"
-- 11
-- "TIME for <> statement without ORDER BY-clause:9"
-- 11


-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske
Christophe Leske schrieb:
>> Question, have you tried an index on class_dds, longitude_DDS, and 
>> latitude_DDS?
>>
>> CREATE INDEX tableidx ON table (class_dds, longitude_DDS, latitude_DDS);
>>
>> Since all three fields are used in the query, I am curious if that would 
>> help in any way.
>>   
>> 
> Doesn't do anything; there is something else going on here, I think - it 
> might well be that because of the ORDER BY statement, none of the indices 
> is actually being used:
>   
I got it:

SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
44.424779) ORDER BY class_dds ASC Limit 20

class_dds has a maximum value of 6, so the where-clause "class_dds<11" 
is totally unnecessary - if I ditch this part, the response time 
comes down to 900 ms from 2700 ms for my request.

I will now time again.

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske

> Question, have you tried an index on class_dds, longitude_DDS, and 
> latitude_DDS?
>
> CREATE INDEX tableidx ON table (class_dds, longitude_DDS, latitude_DDS);
>
> Since all three fields are used in the query, I am curious if that would 
> help in any way.
>   
Doesn't do anything; there is something else going on here, I think - it 
might well be that because of the ORDER BY statement, none of the indices 
is actually being used:

Here are some more timings (yours is at the end - the number between the 
timings is a count of the result sets, just to make sure that they all 
return the right set of 11 results):

No indices whatsoever:
-- "TIME for Between statement:2794"
-- 11
-- "TIME for <> statement:2775"
-- 11

One index (longitude):
-- "TIME for Between statement:2776"
-- 11
-- "TIME for <> statement:2770"
-- 11

two indices (longitude and latitude separately):
-- "TIME for Between statement:2786"
-- 11
-- "TIME for <> statement:2792"
-- 11

A 3 field index (class_dds, latitude_dds, longitude_dds):
-- "TIME for Between statement:2783"
-- 11
-- "TIME for <> statement:2762"
-- 11



-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread John Elrick
Christophe Leske wrote:
> There is virtually no difference in using indices or not in my query.
>
> I also tried to reformulate my statement in order not to use BETWEEN but 
> a sandwiched > and < statement:
>
> SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
> 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
> 44.424779) ORDER BY class_dds ASC Limit 20
>
> became
>
> SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS>6.765103 and 
> longitude_DDS<7.089129) AND (latitude_DDS>44.261771 and 
> latitude_DDS<44.424779) ORDER BY class_dds ASC Limit 20
>
>   
> The timing with a latlon index (latitude and longitude indexed):
> TIME:2814 ms
>
> The timing with just one index (latitude):
> TIME:2797 ms
>
> Timing with two indices (latitude and longitude separately):
> TIME:2787 ms
>
>
> Timing with two indices (lat/lon) and the reformulated query above:
> TIME:2763 ms
>
>
>   

Question, have you tried an index on class_dds, longitude_DDS, and 
latitude_DDS?

CREATE INDEX tableidx ON table (class_dds, longitude_DDS, latitude_DDS);

Since all three fields are used in the query, I am curious if that would 
help in any way.


John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske
Federico Granata schrieb:
> can you post those rows with
> .mode insert
> so I can do a fast try ?
>   
INSERT INTO table VALUES('Pietraporzio',5,-1,7.032936,44.345913);
INSERT INTO table VALUES('Sambuco',5,-1,7.081367,44.33763);
INSERT INTO table VALUES('Le Pra',6,-1,6.88,44.316667);
INSERT INTO table VALUES('Bagni',6,-1,7.08,44.3);
INSERT INTO table VALUES('Argentera',6,-1,6.937569,44.396168);
INSERT INTO table VALUES('Bersezio',6,-1,6.970739,44.377898);
INSERT INTO table VALUES('Saint-Dalmas-le-Selvage',6,-1,6.867705,44.285194);
INSERT INTO table VALUES('Ferrere',6,-1,6.950052,44.354508);
INSERT INTO table VALUES('San Bernolfo',6,-1,7.039278,44.263371);
INSERT INTO table VALUES('Murenz',6,-1,6.998868,44.348969);
INSERT INTO table VALUES('Bagni di Vinadio',6,-1,7.074884,44.290033);

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Federico Granata
can you post those rows with
.mode insert
so I can do a fast try ?
Tnx.

--
The three great virtues of a programmer: laziness, impatience and hubris.
(Larry Wall).

2008/6/4 Christophe Leske <[EMAIL PROTECTED]>:

>
> > Can you give me some row of your db (also fake data are ok) so I try to
> > populate a db with 840k row and test your query on my machine ...
> >
> You can either take these rows here:
>
> Pietraporzio|5|-1|7.032936|44.345913
> Sambuco|5|-1|7.081367|44.33763
> Le Pra|6|-1|6.88|44.316667
> Bagni|6|-1|7.08|44.3
> Argentera|6|-1|6.937569|44.396168
> Bersezio|6|-1|6.970739|44.377898
> Saint-Dalmas-le-Selvage|6|-1|6.867705|44.285194
> Ferrere|6|-1|6.950052|44.354508
> San Bernolfo|6|-1|7.039278|44.263371
> Murenz|6|-1|6.998868|44.348969
> Bagni di Vinadio|6|-1|7.074884|44.290033
>
> or you can download a 170 MB database similar to the one I am using,
> but populated with free data from geonames.org from
>
> http://www.multimedial.de/earth/DB/free.7z
>
> That one is 1.4 GB unpacked, and has no indices yet (I think; I prepared
> it some time ago).
>
> It should yield similar results.
>
> Thanks for your time,
>
> --
> Christophe Leske
>
> www.multimedial.de - [EMAIL PROTECTED]
> http://www.linkedin.com/in/multimedial
> Lessingstr. 5 - 40227 Duesseldorf - Germany
> 0211 261 32 12 - 0177 249 70 31
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske
There is virtually no difference in using indices or not in my query.

I also tried to reformulate my statement in order not to use BETWEEN but 
a sandwiched > and < statement:

SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
44.424779) ORDER BY class_dds ASC Limit 20

became

SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS>6.765103 and 
longitude_DDS<7.089129) AND (latitude_DDS>44.261771 and 
latitude_DDS<44.424779) ORDER BY class_dds ASC Limit 20

The timing with a latlon index (latitude and longitude indexed):
TIME:2814 ms

The timing with just one index (latitude):
TIME:2797 ms

Timing with two indices (latitude and longitude separetly):
TIME:2787 ms


Timing with two indices (lat/lon) and the reformulated query above:
TIME:2763 ms


So all in all, there is no substantial speed gain to be found - it 
probably has to do with the fact that i am sorting at the end?

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske

> Can you give me some rows of your db (fake data are also ok) so I can try to
> populate a db with 840k rows and test your query on my machine ...
>   
You can either take these rows here:

Pietraporzio|5|-1|7.032936|44.345913
Sambuco|5|-1|7.081367|44.33763
Le Pra|6|-1|6.88|44.316667
Bagni|6|-1|7.08|44.3
Argentera|6|-1|6.937569|44.396168
Bersezio|6|-1|6.970739|44.377898
Saint-Dalmas-le-Selvage|6|-1|6.867705|44.285194
Ferrere|6|-1|6.950052|44.354508
San Bernolfo|6|-1|7.039278|44.263371
Murenz|6|-1|6.998868|44.348969
Bagni di Vinadio|6|-1|7.074884|44.290033

or you can download a 170 MB database similar to the one I am using, 
but populated with free data from geonames.org from

http://www.multimedial.de/earth/DB/free.7z

That one is 1.4 GB unpacked, and has no indices yet (I think; I prepared 
it some time ago).

It should yield similar results.

Thanks for your time,

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Federico Granata
>
> Can someone tell me what kind of performance one is to expect from a
> 40Mb Sqlite database like the one I have?
>
if you put it on a floppy and throw it out of the window it accelerates at 9.8 m/s²
...

Can you give me some rows of your db (fake data are also ok) so I can try to
populate a db with 840k rows and test your query on my machine ...
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-04 Thread Christophe Leske
Wilson, Ron P schrieb:
> I'm not a guru yet, but I think you are not using the latlon index in
> your query.  Perhaps if you index on lat and lon separately your query
> will use those indices.  I think the lines below indicate using the
> indices on class_dds and rowid.
>   

Thanks to everyone who responded, i got some pretty good feedback.

Thanks also for the tiling hint, but my application is already written, 
and I have to stick to the database given. I might reconsider though if 
the performance is still bad.

Can someone tell me what kind of performance one is to expect from a 
40Mb Sqlite database like the one I have?

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread P Kishor
On 6/3/08, D. Richard Hipp <[EMAIL PROTECTED]> wrote:
>
>  On Jun 3, 2008, at 10:27 AM, Christophe Leske wrote:
>  >
>  > We have a city database that is being queried regurlarly depending on
>  > the lat/long position of the viewport in order to show city names and
>  > labels.
>
>
>
> SQLite has an optional R-Tree engine.  The R-Tree is a new addition
>  and has not appeared in any release of SQLite, but it is available via
>  CVS.  There is little documentation other than a README file in the
>  source directory.  Nevertheless, an R-Tree is specifically designed to
>  solve the kind of query you describe above.  If you are willing to
>  work on the bleeding edge, you might want to investigate SQLite's R-
>  Tree capabilities.

Richard,

I am very interested in this. A few of us GIS-types have been
tinkering with this (http://sqlitegis.org), but working off what you
have been doing would likely be a lot better.


>
>
>
>  D. Richard Hipp
>  [EMAIL PROTECTED]
>
>
>
>
>  ___
>  sqlite-users mailing list
>  sqlite-users@sqlite.org
>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread D. Richard Hipp

On Jun 3, 2008, at 10:27 AM, Christophe Leske wrote:
>
> We have a city database that is being queried regurlarly depending on
> the lat/long position of the viewport in order to show city names and
> labels.


SQLite has an optional R-Tree engine.  The R-Tree is a new addition  
and has not appeared in any release of SQLite, but it is available via  
CVS.  There is little documentation other than a README file in the  
source directory.  Nevertheless, an R-Tree is specifically designed to  
solve the kind of query you describe above.  If you are willing to  
work on the bleeding edge, you might want to investigate SQLite's R- 
Tree capabilities.
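
A sketch of what that looks like (names invented for illustration; a
point is stored with min = max):

CREATE VIRTUAL TABLE city_rtree USING rtree(id, minLon, maxLon, minLat, maxLat);
INSERT INTO city_rtree
  SELECT rowid, longitude_DDS, longitude_DDS, latitude_DDS, latitude_DDS
  FROM Cities;
SELECT c.* FROM Cities c JOIN city_rtree r ON c.rowid = r.id
 WHERE r.minLon >= 6.765103 AND r.maxLon <= 7.089129
   AND r.minLat >= 44.261771 AND r.maxLat <= 44.424779;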


D. Richard Hipp
[EMAIL PROTECTED]



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Eric Minbiole
> -- Even if you only go down to 1°-by-1° granularity, you've divided the
> world into 64,800 blocks.  Assuming that your 840K cities are all over the
> globe, and that about 70% of Earth is covered by water, that means that only
> about 20,000 blocks would actually have cities in them.  But with 840K
> cities, that means you're only considering about 42 cities for a single
> block.

Nice!  Though I haven't been part of this thread, I wanted to comment 
that this is a very elegant & efficient solution.  Kind of like a 2-D 
hash table.  I can see other areas where a similar approach would be 
helpful.  Thanks for a good tip-- I'm sure it will come in handy at some 
point.

~Eric

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Stephen Oberholtzer
On Tue, Jun 3, 2008 at 1:27 PM, Christophe Leske <[EMAIL PROTECTED]> wrote:

> Hi,
>
> i am a new member of this list and interested in speeding up my sqlite
> queries.
>
> I am using SQlite in a 3d environment which is close to Google Earth or
> Nasa WorldWind.
>
> We have a city database that is being queried regurlarly depending on
> the lat/long position of the viewport in order to show city names and
> labels.




From what I gather, you've got something very similar to the 2D-spatial
problem:

"I need to find cities that are within R miles of (X,Y)."

This translates as

"I need to find objects that are within the circle with origin (X, Y) and
radius=R".

This is fundamentally a collision-detection algorithm, and I have a
suggestion that might help, based on the way old DOS game Doom optimized its
collision detection code.

Here's the setup: You divide the world into equally-sized blocks of width W
and height H. Let's say, for the sake of argument, that W=1° and H=1° (this
is huge, but it helps illustrate the point).
Since the world is 180 degrees north-to-south and 360 degrees around the
equator, this gives 64,800 blocks.  So, for example:

CREATE TABLE blockmap (id integer not null primary key, lat real, long
real);

Then you need to build a correspondence table:

CREATE TABLE blockmapCity(blockmapId, cityId);

A naive implementation might only mark a city's center, while a more
advanced version might get fancy and have an approximate size of the city
and place it in multiple blockmaps, in case it was big enough to spill over
into adjacent blocks.

What you do then, in order to do a lookup, is to find all of the blocks that
intersect with your circle.  This can be done easily with the right math.
Then, once you've figured out which blocks to include, you just filter out
the relevant cities from blockmapCity.  Once you have *those* cities you can
filter them out as precisely as you were doing before.

Some notes:

-- Even if you only go down to 1°-by-1° granularity, you've divided the
world into 64,800 blocks.  Assuming that your 840K cities are all over the
globe, and that about 70% of Earth is covered by water, that means that only
about 20,000 blocks would actually have cities in them.  But with 840K
cities, that means you're only considering about 42 cities for a single
block.
-- The algorithm used to prune down the set of blocks to include doesn't
need to be perfect. Remember, this is all an optimization; even if you
return every blockmap in the same *hemisphere*, you'd still be searching
through only 420K cities instead of 840K!
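
A worked sketch of the whole round trip at 1-degree granularity (the
block-id formula is one arbitrary convention, not part of the scheme
above, and cityId is assumed to hold the Cities rowid):

-- block id = (floor(lat) + 90) * 360 + (floor(lon) + 180)
-- a viewport around (44.3 N, 7.0 E) maps to block (44+90)*360 + (7+180) = 48427;
-- with its east/west neighbours the filter becomes:
CREATE TEMP TABLE tmp_blockfilter (blockmapId INTEGER PRIMARY KEY);
INSERT INTO tmp_blockfilter VALUES (48426);
INSERT INTO tmp_blockfilter VALUES (48427);
INSERT INTO tmp_blockfilter VALUES (48428);

SELECT c.*
  FROM Cities c
  JOIN blockmapCity bc ON bc.cityId = c.rowid
  JOIN tmp_blockfilter tb ON tb.blockmapId = bc.blockmapId;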


If you need any more help implementing something like this, go ahead and
reply to the list.
If you can provide a concrete set of data (for example, all or most or at
least a significant number of the cities in the US) I can help put together
a more concrete example.



>
> Plus, there are additional databases for special features, like natural
> hazards and catastrophies.
>
> The city database has around 840.000 records,  the following schema and
> weights currently short under 40Mb:
>
> sqlite> .schema cities
> CREATE TABLE Cities (NAME_DDS TEXT, CLASS_DDS NUMERIC, POPEST_DDS
> NUMERIC, LONGI
> TUDE_DDS NUMERIC, LATITUDE_DDS NUMERIC);
> CREATE INDEX class ON Cities(CLASS_DDS ASC);
> CREATE INDEX latlon on Cities(latitude_dds,longitude_dds);
>
> My questions are:
>
> - how do I speed up the queries? For small lat/long windows, and high
> classes for the cities, i get long query times (e.g. about 600ms)
> Is this reasonable to ask for, or IS that already a top speed for this
> kind of query?
>
> - I have indexed latitude AND longitude,as you can see above. Is this ok?
>
> - I came across the EXLPAIN command, and have read an email by someone
> on this list on how to analyze my queries. I should probably do that,
> yet i am unfamiliar with reading the output of the Explain command.
>
> Thanks for your time and eventual help,
>
> --
> Christophe Leske
>
> www.multimedial.de - [EMAIL PROTECTED]
> http://www.linkedin.com/in/multimedial
> Lessingstr. 5 - 40227 Duesseldorf - Germany
> 0211 261 32 12 - 0177 249 70 31
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Jay A. Kreibich
On Tue, Jun 03, 2008 at 07:56:11PM +0200, Christophe Leske scratched on the 
wall:

> A typical query that causes problems would be:
> 
> SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
> 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
> 44.424779) ORDER BY class_dds ASC Limit 20
> 
> Am i right that no matter what limit is given to the SQL statement, the 
> complete query is executed first, AND THEN filtered according to the 
> limit? This is what I think I am seeing here...

  The limit is applied *after* the ORDER so the system has no choice
  but to find every match for the WHERE statement, then ORDER it, then
  LIMIT it.

  In theory, the system could walk the index on class_dds to get the
  ORDER BY "for free" (and could then terminate the query as soon as
  the LIMIT is reached), but I can guess the nature of class_dds will
  prevent this.  Basically if any one value is contained in 5 to 10% of
  the rows, an index won't be used and the system will do a full
  table-scan (this isn't unique to SQLite; nearly all DBs do this
  because it is faster in the general case).

  It also seems unlikely that the index will be of much use unless
  you're looking for specific values.  An index can be used for a
  range, but not a double-range like you've got going here.  This is
  part of the reason why many databases offer GIS extensions... the
  indexing problem for space is non-trivial.

  I'd try dropping the latlon index and just making one on lat.  Put
  the lat and long conditions first, as they're going to give you the
  most filtering for the least cost.  Try using dual GT/LT clauses
  rather than BETWEEN if the lat index still isn't used.

> I am therefore also after something that cuts off the query after a 
> certain amount of results have been found.

  That isn't going to happen unless you can get rid of the ORDER *or*
  make the ORDER on something that is used as an index.
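
  A sketch of that index-walk variant, forcing the existing "class"
  index with the INDEXED BY clause of later SQLite versions (whether it
  wins depends on how selective class_dds is):

    SELECT * FROM Cities INDEXED BY class
     WHERE longitude_DDS BETWEEN 6.765103 AND 7.089129
       AND latitude_DDS BETWEEN 44.261771 AND 44.424779
     ORDER BY class_dds
     LIMIT 20;
    -- rows come off the class index already ordered by class_dds, so
    -- the scan can stop as soon as 20 matching rows have been found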

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"'People who live in bamboo houses should not throw pandas.' Jesus said that."
   - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Petite Abeille

On Jun 3, 2008, at 7:27 PM, Christophe Leske wrote:

> i am a new member of this list and interested in speeding up my  
> sqlite queries.

There are no magic bullets, but "The SQLite Query Optimizer Overview"  
is a good read:

http://www.sqlite.org/optoverview.html

As well as "Query Plans":

http://www.sqlite.org/cvstrac/wiki?p=QueryPlans

Once the basics are covered and you still need to speed up things,  
take a look at physically partitioning your data in several database  
files. This might reduce quite noticeably the overall processing time  
for a given query at the cost of a bit of added code complexity.
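
A sketch of that partitioning (file and table names are hypothetical):

ATTACH DATABASE 'cities_west.db' AS west;
ATTACH DATABASE 'cities_east.db' AS east;
-- the application picks the shard from the viewport, e.g. by longitude:
SELECT * FROM west.Cities
 WHERE longitude_DDS BETWEEN -10.0 AND -9.5;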

--
PA.
http://alt.textdrive.com/nanoki/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Wilson, Ron P
PS. Also, I am not sure about the BETWEEN operator - does it use indices?
If not, you could rewrite the query without BETWEEN.
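
For reference, BETWEEN is defined as an inclusive pair of comparisons, so
the rewrite is mechanical; a sketch of the two equivalent forms:

SELECT * FROM Cities WHERE longitude_DDS BETWEEN 6.765103 AND 7.089129;
SELECT * FROM Cities WHERE longitude_DDS >= 6.765103 AND longitude_DDS <= 7.089129;

The SQLite optimizer overview describes converting BETWEEN into just such
a pair internally, but checking the actual plan with EXPLAIN is the sure
test.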

RW

Ron Wilson, S/W Systems Engineer III, Tyco Electronics, 434.455.6453

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Christophe Leske
Sent: Tuesday, June 03, 2008 1:56 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] How to speed up my queries?

Wilson, Ron P schrieb:
> Hi Christophe,
>
> 1. Please give us an example query.  SELECT * FROM Cities where
> LONGITUDE_DDS=? AND LATITUDE_DDS=?
> 2. Paste in the EXPLAIN results from the command line tool.
> 3. Is the database file local or are you accessing it over a network?
>   
Hi,

the database file is local, right next to the app using it. I am using 
the sqlite3.exe command line tool for the queries, but would eventually 
like to ditch it for the native support.
However, since the app I am using is a single-threaded application 
(Adobe Director), queries that take too long to complete will 
completely block the app, which is why I have threaded the queries using 
a multi-threaded shell extension that runs the queries and then reports 
back the results.

A typical query that causes problems would be:

SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
44.424779) ORDER BY class_dds ASC Limit 20

Am I right that no matter what limit is given to the SQL statement, the 
complete query is executed first, AND THEN filtered according to the 
limit? This is what I think I am seeing here...

I am therefore also after something that cuts off the query after a 
certain number of results has been found.

The explain results from the command line tool:

sqlite> EXPLAIN SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 44.424779) ORDER BY class_dds ASC Limit 20;
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Trace  0 0 0 EXPLAIN SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 44.424779) ORDER BY class_dds ASC Limit 20;  00
1 Noop   0 0 000
2 Integer201 000
3 MustBeInt  1 0 000
4 IfZero 1 42000
5 Integer112 000
6 Real   0 3 0 6.765103   00
7 Real   0 4 0 7.089129   00
8 Real   0 5 0 44.2617710001  00
9 Real   0 6 0 44.424779  00
10Goto   0 43000
11SetNumColumns  0 6 000
12OpenRead   0 3 000
13SetNumColumns  0 2 000
14OpenRead   2 6 0 keyinfo(1,BINARY)  00
15Rewind 2 408 0  00
16SCopy  2 8 000
17IsNull 8 40000
18Affinity   8 1 0 cb 00
19IdxGE  2 408 1  00
20Column 2 0 11   00
21IsNull 1139000
22IdxRowid   2 11000
23MoveGe 0 0 11   00
24Column 0 3 12   00
25Lt 3 3912collseq(BINARY)  6b
26Gt 4 3912collseq(BINARY)  6b
27Column 0 4 17   00
28Lt 5 3917collseq(BINARY)  6b
29Gt 6 3917collseq(BINARY)  6b
30Column 0 0 22   00
31Column 2 0 23   00
32Column 0 2 24   00
33Column 0 3 25   00
34Column 0 4 26   00
35Column 0 5 27   00
36ResultRow  226 000
37AddImm 1 -1000
38IfZero 1 40000
39Next   2 19000
40Close  0 0 000
41Close  2 0 000
42Halt   0 0 000
43Transaction0 0 000
44VerifyCookie   0 202   000
45TableLock  0 3 0 Cities 00
46Goto  

Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Wilson, Ron P
I'm not a guru yet, but I think you are not using the latlon index in
your query.  Perhaps if you index on lat and lon separately your query
will use those indices.  I think the lines below indicate using the
indices on class_dds and rowid.

19IdxGE  2 408 1  00
22IdxRowid   2 11000

LIMIT 20 should also limit the query to the first 20 matches; i.e. I
don't think it is actually finding N results and filtering down to the
first 20.  At least I think that's what this means:

37AddImm 1 -1000
38IfZero 1 40000

RW

Ron Wilson, S/W Systems Engineer III, Tyco Electronics, 434.455.6453

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Christophe Leske
Sent: Tuesday, June 03, 2008 1:56 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] How to speed up my queries?

Wilson, Ron P schrieb:
> Hi Christophe,
>
> 1. Please give us an example query.  SELECT * FROM Cities where
> LONGITUDE_DDS=? AND LATITUDE_DDS=?
> 2. Paste in the EXPLAIN results from the command line tool.
> 3. Is the database file local or are you accessing it over a network?
>   
Hi,

the database file is local, right next to the app using it. I am using 
the sqlite3.exe command line tool for the queries, but would eventually 
like to ditch it for the native support.
However, since the app I am using is a single-threaded application 
(Adobe Director), queries that take too long to complete will 
completely block the app, which is why I have threaded the queries using 
a multi-threaded shell extension that runs the queries and then reports 
back the results.

A typical query that causes problems would be:

SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
44.424779) ORDER BY class_dds ASC Limit 20

Am I right that no matter what limit is given to the SQL statement, the 
complete query is executed first, AND THEN filtered according to the 
limit? This is what I think I am seeing here...

I am therefore also after something that cuts off the query after a 
certain number of results has been found.

The explain results from the command line tool:

sqlite> EXPLAIN SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 44.424779) ORDER BY class_dds ASC Limit 20;
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Trace  0 0 0 EXPLAIN SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 44.424779) ORDER BY class_dds ASC Limit 20;  00
1 Noop   0 0 000
2 Integer201 000
3 MustBeInt  1 0 000
4 IfZero 1 42000
5 Integer112 000
6 Real   0 3 0 6.765103   00
7 Real   0 4 0 7.089129   00
8 Real   0 5 0 44.2617710001  00
9 Real   0 6 0 44.424779  00
10Goto   0 43000
11SetNumColumns  0 6 000
12OpenRead   0 3 000
13SetNumColumns  0 2 000
14OpenRead   2 6 0 keyinfo(1,BINARY)  00
15Rewind 2 408 0  00
16SCopy  2 8 000
17IsNull 8 40000
18Affinity   8 1 0 cb 00
19IdxGE  2 408 1  00
20Column 2 0 11   00
21IsNull 1139000
22IdxRowid   2 11000
23MoveGe 0 0 11   00
24Column 0 3 12   00
25Lt 3 3912collseq(BINARY)  6b
26Gt 4 3912collseq(BINARY)  6b
27Column 0 4 17   00
28Lt 5 3917collseq(BINARY)  6b
29Gt 6 3917collseq(BINARY)  6b
30Column 0 0 22   00
31Column 2 0 23   00
32Column 0 2 24   00
33Column 0 3 25   00
34Column 0 4 26   00
35Column 0 5 27   00
36ResultRow  226 0

Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Christophe Leske
Wilson, Ron P schrieb:
> Hi Christophe,
>
> 1. Please give us an example query.  SELECT * FROM Cities where
> LONGITUDE_DDS=? AND LATITUDE_DDS=?
> 2. Paste in the EXPLAIN results from the command line tool.
> 3. Is the database file local or are you accessing it over a network?
>   
Hi,

the database file is local, right next to the app using it. I am using 
the sqlite3.exe command line tool for the queries, but would eventually 
like to ditch it for the native support.
However, since the app I am using is a single-threaded application 
(Adobe Director), queries that take too long to complete will 
completely block the app, which is why I have threaded the queries using 
a multi-threaded shell extension that runs the queries and then reports 
back the results.

A typical query that causes problems would be:

SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 
6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 
44.424779) ORDER BY class_dds ASC Limit 20

Am I right that no matter what limit is given to the SQL statement, the 
complete query is executed first, AND THEN filtered according to the 
limit? This is what I think I am seeing here...

I am therefore also after something that cuts off the query after a 
certain number of results has been found.

The explain results from the command line tool:

sqlite> EXPLAIN SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 44.424779) ORDER BY class_dds ASC Limit 20;
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Trace  0 0 0 EXPLAIN SELECT * FROM Cities WHERE class_dds<11 and (longitude_DDS BETWEEN 6.765103 and 7.089129) AND (latitude_DDS BETWEEN 44.261771 and 44.424779) ORDER BY class_dds ASC Limit 20;  00
1 Noop   0 0 000
2 Integer201 000
3 MustBeInt  1 0 000
4 IfZero 1 42000
5 Integer112 000
6 Real   0 3 0 6.765103   00
7 Real   0 4 0 7.089129   00
8 Real   0 5 0 44.2617710001  00
9 Real   0 6 0 44.424779  00
10Goto   0 43000
11SetNumColumns  0 6 000
12OpenRead   0 3 000
13SetNumColumns  0 2 000
14OpenRead   2 6 0 keyinfo(1,BINARY)  00
15Rewind 2 408 0  00
16SCopy  2 8 000
17IsNull 8 40000
18Affinity   8 1 0 cb 00
19IdxGE  2 408 1  00
20Column 2 0 11   00
21IsNull 1139000
22IdxRowid   2 11000
23MoveGe 0 0 11   00
24Column 0 3 12   00
25Lt 3 3912collseq(BINARY)  6b
26Gt 4 3912collseq(BINARY)  6b
27Column 0 4 17   00
28Lt 5 3917collseq(BINARY)  6b
29Gt 6 3917collseq(BINARY)  6b
30Column 0 0 22   00
31Column 2 0 23   00
32Column 0 2 24   00
33Column 0 3 25   00
34Column 0 4 26   00
35Column 0 5 27   00
36ResultRow  226 000
37AddImm 1 -1000
38IfZero 1 40000
39Next   2 19000
40Close  0 0 000
41Close  2 0 000
42Halt   0 0 000
43Transaction0 0 000
44VerifyCookie   0 202   000
45TableLock  0 3 0 Cities 00
46Goto   0 11000

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Wilson, Ron P
Hi Christophe,

1. Please give us an example query.  SELECT * FROM Cities where
LONGITUDE_DDS=? AND LATITUDE_DDS=?
2. Paste in the EXPLAIN results from the command line tool.
3. Is the database file local or are you accessing it over a network?

RW

Ron Wilson, S/W Systems Engineer III, Tyco Electronics, 434.455.6453
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Christophe Leske
Sent: Tuesday, June 03, 2008 1:27 PM
To: General Discussion of SQLite Database
Subject: [sqlite] How to speed up my queries?

Hi,
 
I am a new member of this list and am interested in speeding up my SQLite 
queries.

I am using SQLite in a 3D environment similar to Google Earth or 
NASA WorldWind.

We have a city database that is being queried regularly depending on 
the lat/long position of the viewport in order to show city names and 
labels.

Plus, there are additional databases for special features, like natural 
hazards and catastrophes.

The city database has around 840,000 records, has the following schema, and 
currently weighs in at just under 40 MB:

sqlite> .schema cities
CREATE TABLE Cities (NAME_DDS TEXT, CLASS_DDS NUMERIC, POPEST_DDS NUMERIC, LONGITUDE_DDS NUMERIC, LATITUDE_DDS NUMERIC);
CREATE INDEX class ON Cities(CLASS_DDS ASC);
CREATE INDEX latlon on Cities(latitude_dds,longitude_dds);

My questions are:

- How do I speed up the queries? For small lat/long windows and high 
city classes, I get long query times (e.g. about 600 ms). 
Is it reasonable to ask for better, or is that already top speed for this 
kind of query?

- I have indexed latitude AND longitude, as you can see above. Is this
OK?

- I came across the EXPLAIN command, and have read an email by someone 
on this list on how to analyze my queries. I should probably do that, 
yet I am unfamiliar with reading the output of the EXPLAIN command.

Thanks for your time and any help,

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to speed up my queries?

2008-06-03 Thread Christophe Leske
Hi,
 
I am a new member of this list and am interested in speeding up my SQLite 
queries.

I am using SQLite in a 3D environment similar to Google Earth or 
NASA WorldWind.

We have a city database that is being queried regularly depending on 
the lat/long position of the viewport in order to show city names and 
labels.

Plus, there are additional databases for special features, like natural 
hazards and catastrophes.

The city database has around 840,000 records, has the following schema, and 
currently weighs in at just under 40 MB:

sqlite> .schema cities
CREATE TABLE Cities (NAME_DDS TEXT, CLASS_DDS NUMERIC, POPEST_DDS NUMERIC, LONGITUDE_DDS NUMERIC, LATITUDE_DDS NUMERIC);
CREATE INDEX class ON Cities(CLASS_DDS ASC);
CREATE INDEX latlon on Cities(latitude_dds,longitude_dds);

My questions are:

- How do I speed up the queries? For small lat/long windows and high 
city classes, I get long query times (e.g. about 600 ms). 
Is it reasonable to ask for better, or is that already top speed for this 
kind of query?

- I have indexed latitude AND longitude, as you can see above. Is this OK?

- I came across the EXPLAIN command, and have read an email by someone 
on this list on how to analyze my queries. I should probably do that, 
yet I am unfamiliar with reading the output of the EXPLAIN command.

Thanks for your time and any help,

-- 
Christophe Leske

www.multimedial.de - [EMAIL PROTECTED]
http://www.linkedin.com/in/multimedial
Lessingstr. 5 - 40227 Duesseldorf - Germany
0211 261 32 12 - 0177 249 70 31


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] how to speed up copy all the data in file db to memory db?

2006-12-24 Thread Joe Wilson
VACUUM ought to work on a :memory: database, I would think.
Give it a try and see what happens.
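
A minimal sketch of such a test (assuming an already-populated :memory:
database; the table name is made up):

DELETE FROM big_table;   -- marks the pages as free inside the database
VACUUM;                  -- rebuilds the database without the free pages

Whether that actually hands memory back to the allocator in 3.3.5 is
exactly what the experiment would show.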

--- PY <[EMAIL PROTECTED]> wrote:
> Do you have any ideas about my question 2?
> >  2. How can I release the unused pages in the memory database, just like
> > VACUUM in the file database?


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] how to speed up copy all the data in file db to memory db?

2006-12-24 Thread PY

Thanks. I will try it.

Do you have any ideas about my question 2?

 2. How can I release the unused pages in the memory database, just like
VACUUM in the file database?




Thanks.






On 12/24/06, Joe Wilson <[EMAIL PROTECTED]> wrote:


This mailing list thread may be helpful:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg15902.html

--- PY <[EMAIL PROTECTED]> wrote:
> I am using a memory database in an embedded device with sqlite_3.3.5.
> For performance reasons, all database access runs in
> the SDRAM.
> I need to perform both CopyFileDBToMemDB and CopyMemDBToFileDB with high
> performance and low memory use.
>
> My current solution is to create the schema and then insert all data into
> the destination table. But when I create the index after inserting all the
> data, almost all the CPU and memory resources are exhausted.
>
>
> The problems are:
> 1. Could I perform a page-by-page copy for synchronization purposes?
> 2. How can I release the unused pages in the memory database, just like
> VACUUM in the file database?
>
>
> Would you please tell me how to improve this?
>
> Thanks for your great help.
>
>
> Thanks,
> VK
>


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com


-
To unsubscribe, send email to [EMAIL PROTECTED]

-




Re: [sqlite] how to speed up copy all the data in file db to memory db?

2006-12-23 Thread Joe Wilson
This mailing list thread may be helpful:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg15902.html
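
One common shape for such a copy (a sketch only; the path and table name
are hypothetical, and whether it beats your current approach is untested)
is to ATTACH the file database to the :memory: connection and copy in SQL:

ATTACH DATABASE '/data/app.db' AS disk;
CREATE TABLE main.t AS SELECT * FROM disk.t;   -- bulk copy one table
CREATE INDEX main.idx_t_x ON t(x);             -- build indexes afterwards
DETACH DATABASE disk;

Building the indexes after the bulk copy is usually cheaper than
maintaining them row by row, though as you note the index build itself is
the expensive step.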

--- PY <[EMAIL PROTECTED]> wrote:
> I am using a memory database in an embedded device with sqlite_3.3.5.
> For performance reasons, all database access runs in the
> SDRAM.
> I need to perform both CopyFileDBToMemDB and CopyMemDBToFileDB with high
> performance and low memory use.
> 
> My current solution is to create the schema and then insert all data into
> the destination table. But when I create the index after inserting all the
> data, almost all the CPU and memory resources are exhausted.
> 
> 
> The problems are:
> 1. Could I perform a page-by-page copy for synchronization purposes?
> 2. How can I release the unused pages in the memory database, just like
> VACUUM in the file database?
> 
> 
> Would you please tell me how to improve this?
> 
> Thanks for your great help.
> 
> 
> Thanks,
> VK
> 


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] how to speed up copy all the data in file db to memory db?

2006-12-23 Thread PY

Hi All,

I am using a memory database in an embedded device with sqlite_3.3.5.
For performance reasons, all database access runs in the
SDRAM.
I need to perform both CopyFileDBToMemDB and CopyMemDBToFileDB with high
performance and low memory use.

My current solution is to create the schema and then insert all data into
the destination table. But when I create the index after inserting all the
data, almost all the CPU and memory resources are exhausted.


The problems are:
1. Could I perform a page-by-page copy for synchronization purposes?
2. How can I release the unused pages in the memory database, just like
VACUUM in the file database?


  Would you please tell me how to improve this?

Thanks for your great help.


Thanks,
VK


Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-20 Thread Michael Scharf

PY wrote:

In fact, I use sqlite in an embedded device, so the memory problem is really
critical for us.
To improve performance, all of the databases run as
in-memory databases.


Hmm, I am not 100% sure, but I think queries like
  select distinct x from foo
or
  select distinct x from foo order by x
have to create an internal temp table with
the results anyway, to find duplicates or to
sort the table.


Furthermore, I need to sort the result; which of the following SQL commands
has better performance?
- Create Table tmp as select distinct x from foo;
 Select x from tmp where OID>0 and OID<=20 order by x;


This one will only sort rows 1 to 20, not the
entire table, so it does not solve your problem...

- Create Table tmp as select distinct x from foo order by x;
 Select x from tmp where OID>0 and OID<=20;


This will put the rows into the temp table already sorted. That's
actually how I use it. In fact, this is not guaranteed to work with
arbitrary SQL databases. But it works well with SQLite, because the
SQLite optimizer does not remove the ORDER BY clause (it could,
because SQL gives no guarantee about the order in which tables are
stored) and it inserts into the tmp table in order.

However, I don't know what the memory overhead is for this
kind of query. Your original approach with LIMIT has to
create an internal result table anyway, but putting it into
a temp table might double the amount of memory needed.

Well, the best way to find out is to try it ;-)

Michael
--
http://MichaelScharf.blogspot.com/


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-20 Thread PY

Dear Chandrashekar,

Sorry, I don't understand what a precompiled query is.
Would you please describe it for me?


Thanks for your great help.


VK



On 9/19/06, Chandrashekar H S <[EMAIL PROTECTED]> wrote:


Hi
Are you using precompiled queries? If not, try using them...


On 9/19/06, PY <[EMAIL PROTECTED]> wrote:
>
> Hi All,
> I have a problem with LIMIT & OFFSET performance.
> Due to memory limitations, I cannot get all of the query results
> at a time.
> In our solution, we use LIMIT and OFFSET to avoid
> memory issues.
>
> We observed the performance of LIMIT & OFFSET; the response time appears
> to grow linearly. Our table has only 300~500 records.
>
>
>
> Here is the dummy script
> Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
> create index idx_foo on foo(x);
>
> insert into foo(x) values('text001');
> :
> :
> insert into foo(x) values('text300');
>
>
> Select distinct x from foo LIMIT 20 OFFSET 0;
> Select distinct x from foo LIMIT 20 OFFSET 20;
> Select distinct x from foo LIMIT 20 OFFSET 40;
> Select distinct x from foo LIMIT 20 OFFSET 60;
> :
> :
> Select distinct x from foo LIMIT 20 OFFSET 280;
>
>
>
> Would you please tell me how to improve this?
>
> Thanks for your great help.
>
>
> Thanks,
> VK
>
>




Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-20 Thread PY

Hi all,

Thanks for your suggestion.

In fact, I use sqlite in an embedded device, so the memory problem is really
critical for us.
To improve performance, all of the databases run as
in-memory databases.
In my case, will it improve the query performance if I create the temp
table from FOO?

Furthermore, I need to sort the result; which of the following SQL commands
has better performance?
- Create Table tmp as select distinct x from foo;
 Select x from tmp where OID>0 and OID<=20 order by x;

 or

- Create Table tmp as select distinct x from foo order by x;
 Select x from tmp where OID>0 and OID<=20;



Thanks for your clarification.



VK.


Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Michael Scharf

Hi,

I had a similar problem at a larger scale. One trick is to create
a temporary table from which you can fetch the rows quickly:

  Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
  create index idx_foo on foo(x);

  insert into foo(x) values('text001');
  :
  :
  insert into foo(x) values('text300');

Now create a temp table from your query:

  Create Table tmp as select distinct x from foo;

This table has the nice property that the (implicitly
created) OID is in order. OID starts with 1!
But now you can say:

  Select x from tmp where OID>0 and OID<=20;
  Select x from tmp where OID>20 and OID<=40;
  ...

Calculating the size of the table is
also very fast:

  Select max(OID) from tmp;

This is *much* faster for huge tables than

  Select count(*) from tmp;

Because max simply jumps to the last record, but
count has to touch every record.

Don't modify the temp table: drop it and recreate it.
Otherwise the trick with the OID ordering
won't work.
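
A sketch of that refresh pattern (DROP TABLE IF EXISTS needs SQLite
3.3.0 or later):

  DROP TABLE IF EXISTS tmp;
  CREATE TABLE tmp AS SELECT DISTINCT x FROM foo ORDER BY x;

Rebuilding from scratch guarantees the OIDs come out 1..N in sorted
order again.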

BTW: how can 500 records be a memory problem?

Michael



I have a problem with LIMIT & OFFSET performance.
Due to memory limitations, I cannot get all of the query results at
a time.
In our solution, we use LIMIT and OFFSET to avoid
memory issues.

We observed the performance of LIMIT & OFFSET; the response time appears
to grow linearly. Our table has only 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please tell me how to improve this?

Thanks for your great help.


Thanks,
VK




--
http://MichaelScharf.blogspot.com/


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Christian Smith

PY uttered:


Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to memory limitations, I cannot get all of the query results at
a time.
In our solution, we use LIMIT and OFFSET to avoid
memory issues.

We observed the performance of LIMIT & OFFSET; the response time appears
to grow linearly. Our table has only 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please tell me how to improve this?



I could only observe the issue on the slowest machine I have (SUN 
SPARCclassic). I guess this is an embedded device, based on the fact 
you're also limited in memory.


Using your query, the index for x is ignored. To do the "DISTINCT" 
processing, SQLite creates a temporary table which is populated with 
distinct rows from the base table. This is done using a full table scan, 
so the index wouldn't help anyway.


Your best bet, if you always want distinct x, is to add a UNIQUE 
constraint to x:

CREATE TABLE foo (id INTEGER PRIMARY KEY, x TEXT UNIQUE);

Now your selects, minus the distinct clause, will run in constant time:
SELECT x FROM foo ORDER BY x LIMIT 20 OFFSET 60;

Note, using "ORDER BY" will make SQLite use the implicit index on x, and 
also ensure the ordering of the result set, which is not defined 
otherwise.
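
An alternative worth naming here is keyset pagination (a sketch, not part
of the advice above): instead of counting past OFFSET rows each time,
remember the last x of the previous page and seek to it:

SELECT x FROM foo WHERE x > 'text020' ORDER BY x LIMIT 20;

With the UNIQUE constraint's implicit index this is one index seek plus 20
steps per page, so the cost no longer grows with the page number the way
the measured LIMIT/OFFSET queries do.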





Thanks for your great help.


Thanks,
VK



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Jay Sprenkle

On 9/18/06, PY <[EMAIL PROTECTED]> wrote:

Hi All,
I have a problem with LIMIT & OFFSET performance.


Is there any chance you can use an index on your select?
If it has an index it might be able to calculate the position of the
offset and not have to read the entire result set. This is just a
guess though. I'm not sure if sqlite is able to do that.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Chandrashekar H S

Hi
Are you using precompiled queries? If not, try using them...


On 9/19/06, PY <[EMAIL PROTECTED]> wrote:


Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to memory limitations, I cannot get all of the query results
at a time.
In our solution, we use LIMIT and OFFSET to avoid
memory issues.

We observed the performance of LIMIT & OFFSET; the response time appears
to grow linearly. Our table has only 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please tell me how to improve this?

Thanks for your great help.


Thanks,
VK




[sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-18 Thread PY

Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to memory limitations, I cannot get all of the query results at
a time.
In our solution, we use LIMIT and OFFSET to avoid
memory issues.

We observed the performance of LIMIT & OFFSET; the response time appears
to grow linearly. Our table has only 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please tell me how to improve this?

Thanks for your great help.


Thanks,
VK


Re: [sqlite] how to speed up da insertion?

2006-05-10 Thread Jay Sprenkle

On 5/9/06, Will <[EMAIL PROTECTED]> wrote:



I insert like the following:

rc = sqlite3_prepare(m_pDB, szSQL, -1, &pStmt, 0);
sqlite3_bind_int64(pStmt, 1, m_pTBDataDBData->lTime);
...

rc = sqlite3_step(pStmt);

rc = sqlite3_finalize(pStmt);


Putting your insert statements inside a transaction will speed them up
a lot:

begin transaction;
insert into mytable values( '10:30 2004/04/01');
insert into mytable values( '10:31 2004/04/01');
insert into mytable values( '10:32 2004/04/01');
insert into mytable values( '10:33 2004/04/01');
commit;


A large number of indexes on the tables can also slow down inserts.
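
A common bulk-load pattern that follows from both points (a sketch only;
the index name is made up, it assumes mytable's single column is named ts,
and DROP INDEX IF EXISTS needs SQLite 3.3.0 or later):

drop index if exists idx_mytable_ts;
begin transaction;
insert into mytable values( '10:30 2004/04/01');
-- ... the rest of the batch ...
commit;
create index idx_mytable_ts on mytable(ts);

Building the index once after the load is usually much cheaper than
updating it for every one of 400+ inserts per second.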


[sqlite] how to speed up da insertion?

2006-05-09 Thread Will


I insert like the following:

rc = sqlite3_prepare(m_pDB, szSQL, -1, &pStmt, 0);
sqlite3_bind_int64(pStmt, 1, m_pTBDataDBData->lTime);
...

rc = sqlite3_step(pStmt);

rc = sqlite3_finalize(pStmt);



I feel this is not the best way to insert.
I insert over 400 times per second.

I'm a beginner in SQL.

Are there any suggestions?

Re: [sqlite] How to speed up create index on temp database?

2005-11-12 Thread 黄涛

Jay Sprenkle wrote:


On 11/10/05, Huang Tao <[EMAIL PROTECTED]> wrote:
 


Hello:

I run sqlite on an embedded system which uses NAND flash, so I have to
reduce the write count. Saving the index in the master database causes
many writes. I tried dynamically creating the index in the temp database,
but the speed is not very good.
example:
create table employee (id int primary key not null, name);
insert 1 record into the table;
create index temp.idx_name on employee (name);
creating the index takes 24.519761 sec

With gprof, I see most of the time was spent in sqlite3BtreeInsert, which
was called 10001 times.
Can anyone give me a suggestion to improve the speed?
   



Can you move your temp tables into an in-memory (RAM) database?
 


You may have misunderstood my question: the _table_ must be in NAND flash,
and most of the time is spent in creating the index.
In my test case, both table and index are in RAM (the table in tmpfs, which 
is a RAM-based file system in Linux; the index in the temp database, which 
is RAM-based in SQLite).
Creating a big btree may simply need that much time, but I hope I can 
speed it up. I believe I can, but I need some hints.
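
A couple of knobs that sometimes help here (a sketch; both pragmas exist
in 3.2.x, but the right values depend on your 16 MB budget):

PRAGMA temp_store = MEMORY;   -- keep temporary structures in RAM
PRAGMA cache_size = 4000;     -- more page cache while the index b-tree grows
CREATE INDEX temp.idx_name ON employee(name);

This does not change the b-tree construction cost itself, but a larger
page cache can reduce the page churn that shows up as time spent in
sqlite3BtreeInsert.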


RE: [sqlite] How to speed up create index on temp database?

2005-11-11 Thread roger



> 
> Can you move your temp tables to in an in-memory(RAM) database?

In the package I posted earlier (the uSQLiteServer), the example
database supplied with the system shows memory tables used
in conjunction with disk-based tables, and how you can set up the
memory tables from data in the disk database (you can also use similar
techniques to update the disk from memory).
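
The basic shape of that setup, as a sketch (the table name and filter are
hypothetical):

ATTACH DATABASE ':memory:' AS mem;
CREATE TABLE mem.hot_rows AS SELECT * FROM main.big_table WHERE flag = 1;
-- read from mem.hot_rows; push changes back with INSERT ... SELECT

Attaching ':memory:' gives the connection a scratch database that lives
only in RAM, which is what makes the memory/disk split cheap to try.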

Note that when using the uSQLiteServer all users see the memory based
tables, which is not the case in a normal 'shared' SQLite scenario.
However, the techniques are the same if you have a single user (of
course if you had distributed users you could use uSQLite server ;-) )

See http://users.iol.it/irwin





Re: [sqlite] How to speed up create index on temp database?

2005-11-11 Thread Jay Sprenkle
On 11/10/05, Huang Tao <[EMAIL PROTECTED]> wrote:
> Hello:
>
> I run sqlite on an embedded system which uses NAND flash, so I have to
> reduce the write count. Saving the index in the master database causes
> many writes. I tried dynamically creating the index in the temp database,
> but the speed is not very good.
> example:
> create table employee (id int primary key not null, name);
> insert 1 record into the table;
> create index temp.idx_name on employee (name);
> creating the index takes 24.519761 sec
>
> With gprof, I see most of the time was spent in sqlite3BtreeInsert, which
> was called 10001 times.
> Can anyone give me a suggestion to improve the speed?

Can you move your temp tables into an in-memory (RAM) database?


[sqlite] How to speed up create index on temp database?

2005-11-10 Thread Huang Tao
Hello:

I run sqlite on an embedded system which uses NAND flash, so I have to
reduce the write count. Saving the index in the master database causes
many writes. I tried dynamically creating the index in the temp database,
but the speed is not very good.
example:
create table employee (id int primary key not null, name);
insert 1 record into the table;
create index temp.idx_name on employee (name);
creating the index takes 24.519761 sec

With gprof, I see most of the time was spent in sqlite3BtreeInsert, which
was called 10001 times.
Can anyone give me a suggestion to improve the speed?

CPU: 96MHz mips32
Memory: 16M
File System:  tmpfs (so all run in memory)
SQLite: 3.2.7


  1   2   >