> On Jan 9, 2017, at 1:54 PM, Kevin Grittner <kgri...@gmail.com> wrote:
> 
> On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster <isr...@ravnalaska.net> 
> wrote:
> 
>> [load of new data]
> 
>> Limit  (cost=354643835.82..354643835.83 rows=1 width=9) (actual
>> time=225998.319..225998.320 rows=1 loops=1)
> 
>> [...] I ran the query again [...]
> 
>> Limit  (cost=354643835.82..354643835.83 rows=1 width=9) (actual
>> time=9636.165..9636.166 rows=1 loops=1)
> 
>> So from four minutes on the first run to around 9 1/2 seconds on the second.
>> Presumably this difference is due to caching?
> 
> It is likely to be, at least in part.  Did you run VACUUM on the
> data before the first run?  If not, hint bits may be another part
> of it.  The first access to each page after the bulk load would
> require some extra work for visibility checking and would cause a
> page rewrite for the hint bits.

That could be it; I had planned to run a VACUUM ANALYZE after creating the 
indexes, but forgot. By the time I got around to running the second query, 
autovacuum should have kicked in and done it for me.
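For anyone following along, the usual sequence looks something like this (table and file names here are hypothetical, just to sketch the idea): do the bulk load, build the indexes, then run VACUUM ANALYZE explicitly so the hint bits get set and the planner statistics are fresh before the first real query, rather than leaving both to the first reader and to autovacuum.

```sql
-- Sketch of a bulk-load workflow; "flights" and the CSV path are placeholders.
COPY flights FROM '/tmp/flights.csv' WITH (FORMAT csv, HEADER true);

-- Build indexes after the load, not before, so the load itself is faster.
CREATE INDEX ON flights (departure_time);

-- Sets hint bits and refreshes statistics in one pass, so the first
-- query doesn't pay for visibility checks and page rewrites.
VACUUM ANALYZE flights;
```

Running EXPLAIN (ANALYZE, BUFFERS) before and after the VACUUM should make the difference visible in the shared read/written buffer counts.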

> 
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)