No, when you use OFFSET you are reading and discarding rows.  There is no 
difference between:

select * from t;

and discarding all the results except the last row, and,

select * from t limit 99999999,1;

for a table containing 100000000 rows.  In both cases you have to read the 
entire table in order to find the last row.  
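
To make the equivalence concrete, here is a minimal sketch (Python with the 
standard sqlite3 module; the file name mytemp.db and table name t are just 
placeholders) of the work done either way -- step over every row and keep only 
the last one:

import sqlite3, time

con = sqlite3.connect("mytemp.db")     # placeholder database file

# Approach 1: read everything, discard all but the last row ourselves.
t0 = time.perf_counter()
last = None
for row in con.execute("SELECT * FROM t"):   # every row fetched, then dropped
    last = row
print("manual discard:", last, time.perf_counter() - t0)

# Approach 2: let OFFSET do the discarding -- the engine still visits every row.
t0 = time.perf_counter()
last = con.execute("SELECT * FROM t LIMIT 99999999,1").fetchone()
print("LIMIT/OFFSET  :", last, time.perf_counter() - t0)

Both runs walk the whole table; the only difference is where the discarding 
happens.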

However,

select * from t order by _rowid_ desc limit 1;

returns the last row directly, without reading the entire table, and will be 
fast whether or not the file is cached.
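
The same idea extends to paging: instead of a growing OFFSET, remember the last 
rowid you saw and seek past it (keyset pagination -- a general technique, not 
anything new in 3.24).  A rough sketch, again with placeholder names:

import sqlite3

con = sqlite3.connect("mytemp.db")     # placeholder database file

# Last row directly: walk the table b-tree in descending rowid order and stop.
last = con.execute("SELECT * FROM t ORDER BY _rowid_ DESC LIMIT 1").fetchone()

# Paging without OFFSET: each page seeks to just past the previous page's
# last rowid, so no rows are read only to be thrown away.
def page_after(last_rowid, page_size=100):
    return con.execute(
        "SELECT _rowid_, * FROM t WHERE _rowid_ > ? ORDER BY _rowid_ LIMIT ?",
        (last_rowid, page_size)).fetchall()

rows = page_after(0)                   # first page
while rows:
    rows = page_after(rows[-1][0])     # next page starts after the last rowid seen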

The optimization was for LIMIT without an OFFSET -- that is, for when you know 
you only want so many rows of the result from the beginning, not starting at 
some offset.  An OFFSET increases the number of rows that must be computed: the 
skipped rows are simply not returned to you (they are discarded), but they 
still have to be read and processed.  The optimization works by not even 
calculating the rows beyond the LIMIT, where that can be determined, and it 
primarily affects complicated queries.
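
For illustration only -- this is a guess at the kind of query that benefits, 
not a description of the actual change -- a bare LIMIT on a sorted result lets 
the engine keep only enough candidate rows to satisfy the LIMIT while it scans, 
instead of producing the full sorted result; adding a large OFFSET means all of 
the skipped rows must still be produced:

import sqlite3

con = sqlite3.connect("mytemp.db")

# Using the 100-million-row "test" table from the transcript quoted below:
# only the first 10 rows of the sorted output are wanted.
top10 = con.execute("SELECT x FROM test ORDER BY x DESC LIMIT 10").fetchall()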

I have fed the bug report for the Windows cache handling back to the OEM, from 
where it will get to Microsoft.  While not severe, it does affect use of the 
FileSystem cache and, when triggered under memory pressure, can force page 
eviction (or swapping) that is unnecessary or even detrimental.  I suspect it 
was not found previously because the specific access pattern for the file is 
rather uncommon -- however, that should also make it easy to find and fix, I 
should think.

However, what you are talking about here is now the difference between doing 
physical I/O and avoiding I/O by reading from a cache in RAM.  

I/O is slow, and the best way to optimize it (speed it up) is not to do it.  
(That adage is old -- from the 1960s, I believe -- though I cannot remember who 
to attribute it to.)

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-----Original Message-----
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of x
>Sent: Sunday, 17 June, 2018 06:35
>To: SQLite mailing list
>Subject: Re: [sqlite] .timer
>
>It’s like having a world class soccer team and having to play in a
>league where the players are forced to wear stiletto heels. And it
>gets worse.
>
>Richard was saying he had sped up LIMIT queries in 3.24. I checked
>this out and was running LIMIT queries in sqlite expert (3.23.1) and
>my app (3.24). The former was taking 3 times as long to run the
>queries (not sure how much of that was down to improvements or sqlite
>expert). Anyway, I was getting quite excited until I realised I was
>conducting the test with a warm cache. When I tried the same with a
>cold cache I felt depressed even though the factor of 3 was still
>evident.
>
>The code below will illustrate. In one case it takes 109 secs to
>retrieve one record from a table that only took 77 secs to create.
>Keith mentioned earlier he was unable to test FILE_FLAG_NO_BUFFERING
>because of some setting in sqlite3. While I’ve no experience of the
>ramifications of setting this flag the LIMIT OFFSET looks like it’s
>tailor made for such a setting. Short of getting Microsoft to fix
>this, maybe what we need is a ‘pragma buffering = on/off’ although
>I’ve no idea if that’s possible or the difficulty involved.
>
>SQLite version 3.24.0 2018-06-04 19:24:41
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> .timer on
>sqlite> .open mytemp.db
>sqlite> create table test as -- creates test table with 100 million
>rows
>   ...> with recursive cte(x,y) as
>   ...> (values(1,'012345678901234567890123456789')
>   ...> union all select x+1,y from cte where x<100000000)
>   ...> select * from cte;
>Run Time: real 77.348 user 68.156250 sys 8.234375
>
>sqlite> .shell flushmem
>Available RAM - pre flush = 13.6147 GBs - post flush = 14.1428 GBs
>sqlite> select rowid from test order by rowid limit 99999999,1;
>100000000
>Run Time: real 30.722 user 3.515625 sys 17.609375
>
>sqlite> .shell flushmem
>Available RAM - pre flush = 14.2898 GBs - post flush = 14.4573 GBs
>sqlite> select rowid from test order by rowid limit 99999999,1;
>100000000
>Run Time: real 33.748 user 5.000000 sys 18.078125
>
>sqlite> .shell flushmem
>Available RAM - pre flush = 14.4758 GBs - post flush = 14.4825 GBs
>sqlite> select rowid from test order by rowid limit 99999999,1;
>100000000
>Run Time: real 44.493 user 5.281250 sys 25.625000
>
>Above results unimpressive and inconsistent.
>
>sqlite> select rowid from test order by rowid limit 99999999,1;
>100000000
>Run Time: real 7.269 user 2.609375 sys 4.656250
>
>sqlite> select rowid from test order by rowid limit 99999999,1;
>100000000
>Run Time: real 7.230 user 2.859375 sys 4.375000
>
>Above 2 with warm cache.
>
>sqlite> .shell flushmem
>Available RAM - pre flush = 14.497 GBs - post flush = 14.4306 GBs
>sqlite> select rowid from test order by rowid desc limit 99999999,1;
>1
>Run Time: real 103.339 user 4.062500 sys 20.671875
>
>sqlite> .shell flushmem
>Available RAM - pre flush = 10.1498 GBs - post flush = 14.4109 GBs
>sqlite> select rowid from test order by rowid desc limit 99999999,1;
>1
>Run Time: real 92.210 user 3.812500 sys 15.500000
>
>sqlite> .shell flushmem
>Available RAM - pre flush = 10.3382 GBs - post flush = 14.5637 GBs
>sqlite> select rowid from test order by rowid desc limit 99999999,1;
>1
>Run Time: real 109.676 user 3.796875 sys 21.562500
>
>Woeful results and again inconsistent.
>
>sqlite> select rowid from test order by rowid desc limit 99999999,1;
>1
>Run Time: real 7.405 user 2.062500 sys 5.343750
>
>sqlite> select rowid from test order by rowid desc limit 99999999,1;
>1
>Run Time: real 7.440 user 2.546875 sys 4.890625
>
>Above 2 with warm cache.


