On Thu, 11 Jun 2020 at 07:41, Mikkel Lauritsen <ren...@tala.dk> wrote:
> I have a query that runs much slower in Postgres on Windows than on
> Linux
>
> Using explain analyze on the database running on Windows I get
>
> ->  Index Scan using event_pkey on event t1  (cost=0.56..0.95 rows=1
>     width=295) (actual time=0.075..0.075 rows=0 loops=229227)
>
> If I copy the database to my laptop running Linux (Postgres 12 on Fedora
> 32, i7-9750H, 16 GB ram) I get the exact same execution plan. Explain
> analyze says
>
> ->  Index Scan using event_pkey on event t1  (cost=0.56..0.95 rows=1
>     width=295) (actual time=0.008..0.008 rows=0 loops=229227)
>
> The table is freshly vacuumed. It has about 10M rows and takes about
> 2.6G disk space; the index is about 600M. Everything is cached; there's
> basically no disk I/O happening while the query is executing.

Can you confirm what:

SELECT pg_relation_size('event_pkey'), pg_relation_size('event');

says on each?

> The only Postgres configuration difference between the Windows and Linux
> environments is shared_buffers, which is 4G on my laptop and 512M on the
> Windows server, and effective_cache_size which are 8G on the laptop and
> 16G on the server.

There is some slight advantage to having the buffers directly in
shared_buffers. Having them only in the kernel's page cache still
requires copying them into shared_buffers before they can be used.
Going by these sizes, it seems much more likely that the Linux instance
can keep all of the relevant buffers in shared_buffers, whereas the
Windows instance likely can't. I can't imagine that accounts for a 10x
difference, but it surely must count for something.

It would be good to see:

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) <the query>

David
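
PS: if the pg_buffercache contrib extension is available on both machines,
a query along these lines (just a sketch, adapted from the documented
pg_buffercache example; the relation names are the ones from your plan)
would show how much of the table and index is actually resident in
shared_buffers on each instance:

-- bytes of each relation currently held in this instance's shared_buffers
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname,
       count(*) * current_setting('block_size')::int AS bytes_in_shared_buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
WHERE c.relname IN ('event', 'event_pkey')
GROUP BY c.relname;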