Re: FPGA optimization ...

2019-11-04 Thread Gunther
Hi Tomas, you said: For the record, this is not exactly a new thing. Netezza (a PostgreSQL fork started in 1999, later acquired by IBM) used FPGAs. Now there's Swarm64 [1], another PostgreSQL fork, also using FPGAs with newer PostgreSQL releases. Yes, I found the Swarm64 thing on Google, and heard about Netezza

Re: FPGA optimization ...

2019-11-04 Thread Tomas Vondra
On Mon, Nov 04, 2019 at 06:33:15PM -0500, Gunther wrote: The time has come. FPGA optimization is in the palm of our hands (literally, a 2 TB, 40 GB/s I/O PostgreSQL server fits into less than a shoe box), and on Amazon AWS F1 instances. Some demos are beginning to exist:

FPGA optimization ...

2019-11-04 Thread Gunther
The time has come. FPGA optimization is in the palm of our hands (literally, a 2 TB, 40 GB/s I/O PostgreSQL server fits into less than a shoe box), and on Amazon AWS F1 instances. Some demos are beginning to exist: https://github.com/Xilinx/data-analytics.

Re: Huge shared hit for small table

2019-11-04 Thread Jeff Janes
On Mon, Nov 4, 2019 at 3:38 PM Scott Rankin wrote: > Definitely no long-running transactions on this table; > Any long running transactions at all? The lock on the table is only necessary to explain why the problem would have gone away at the same time as the reindex finished. If there is a
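A minimal sketch of the kind of check being suggested, assuming a stock pg_stat_activity view on 9.6 (the 5-minute threshold is an arbitrary example):

  -- List backends whose transaction has been open for a while
  SELECT pid, usename, state, xact_start,
         now() - xact_start AS xact_age,
         left(query, 60) AS current_query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
    AND now() - xact_start > interval '5 minutes'
  ORDER BY xact_start;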

Re: Huge shared hit for small table

2019-11-04 Thread Scott Rankin
I think we have a winner. I looked in and found a process that had been 'idle in transaction' for a couple of days, and once I killed it, query performance went back to normal. Thank you all for the very quick responses on this.
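The exact query Scott ran isn't shown in the thread; a plausible sketch of finding and killing such a session, assuming sufficient privileges for pg_terminate_backend:

  -- Find sessions stuck 'idle in transaction' for more than a day
  SELECT pid, usename, xact_start, state_change
  FROM pg_stat_activity
  WHERE state = 'idle in transaction'
    AND now() - state_change > interval '1 day';

  -- Terminate an offending backend by pid (12345 is a placeholder)
  SELECT pg_terminate_backend(12345);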

Re: Huge shared hit for small table

2019-11-04 Thread Peter Geoghegan
On Mon, Nov 4, 2019 at 12:38 PM Scott Rankin wrote: > Definitely no long-running transactions on this table; in fact, this table is > pretty infrequently updated – on the order of a few tens of rows updated per > day. But a long running transaction will have an impact on all tables -- not just
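One way to see the scope of that impact is the xmin horizon each backend holds back, which limits cleanup for every table in the cluster; a hedged sketch against pg_stat_activity:

  -- Backends holding back the global xmin horizon, oldest first
  SELECT pid, datname, state, backend_xmin, age(backend_xmin) AS xmin_age
  FROM pg_stat_activity
  WHERE backend_xmin IS NOT NULL
  ORDER BY age(backend_xmin) DESC;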

Re: Huge shared hit for small table

2019-11-04 Thread Scott Rankin
Definitely no long-running transactions on this table; in fact, this table is pretty infrequently updated – on the order of a few tens of rows updated per day.

Re: Huge shared hit for small table

2019-11-04 Thread Peter Geoghegan
On Mon, Nov 4, 2019 at 12:32 PM Jeff Janes wrote: > Could there be a long-open transaction, which is preventing hint-bits from > getting set on the table rows, as well as on the index rows? Contention on a small number of rows may also be a factor. > A reindex would not by itself fix the

Re: Huge shared hit for small table

2019-11-04 Thread Jeff Janes
On Mon, Nov 4, 2019 at 2:38 PM Scott Rankin wrote: > Hello all, > > We are trying to debug some slow performance in our production environment > (Amazon RDS, PostgreSQL 9.6.11), and we’re looking at a particular EXPLAIN > node that seems… weird. This is a very large query involving a number

Re: Huge shared hit for small table

2019-11-04 Thread Scott Rankin
Thanks to Justin for the clarification around pgstatindex: Staging: version 2, tree_level 1, index_size 425984, root_block_no 3, internal_pages 1, leaf_pages 50, empty_pages 0, deleted_pages 0, avg_leaf_density 70.86, leaf_fragmentation 16. Production: version 2, tree_level 1, index_size 360448, root_block_no 3
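For reference, a minimal sketch of how such figures are produced, assuming the pgstattuple extension is installed (the quoted index name matches the index definition shown elsewhere in the thread):

  -- pgstatindex() reports index structure: tree level, page counts,
  -- leaf density and leaf fragmentation
  CREATE EXTENSION IF NOT EXISTS pgstattuple;
  SELECT * FROM pgstatindex('"programPK"');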

Re: Huge shared hit for small table

2019-11-04 Thread Andres Freund
Hi, On 2019-11-04 19:56:57 +0000, Scott Rankin wrote: > The index is exceedingly simple: > CREATE UNIQUE INDEX "programPK" ON program(id int8_ops); > From pg_stat_user_indexes: > Staging: > idx_scan: 5826745 > idx_tup_read: 52715470 > idx_tup_fetch: 52644465 > Production: > idx_scan

Re: Huge shared hit for small table

2019-11-04 Thread Peter Geoghegan
On Mon, Nov 4, 2019 at 11:56 AM Justin Pryzby wrote: > I think it's because some heap pages are being visited many times, due to the > index tuples being badly "fragmented". Note, I'm not talking about > fragmentation of index *pages*, which is what pgstattuple reports (which > wouldn't have
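Not from the thread itself, but one hedged way to check whether repeated heap visits are landing on accumulated dead row versions, again assuming the pgstattuple extension:

  -- Dead row versions in the heap can force an index scan to visit
  -- the same heap pages over and over
  SELECT table_len, tuple_count, dead_tuple_count, dead_tuple_percent
  FROM pgstattuple('program');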

Re: Huge shared hit for small table

2019-11-04 Thread Justin Pryzby
On Mon, Nov 04, 2019 at 07:38:40PM +0000, Scott Rankin wrote: > In the staging environment, we get this: > Index Scan using "programPK" on public.program prog (cost=0.29..0.35 rows=1 > width=16) (actual time=0.002..0.003 rows=1 loops=21965) > Output: prog.id, prog.version, prog.active,

Re: Huge shared hit for small table

2019-11-04 Thread Scott Rankin
The index is exceedingly simple: CREATE UNIQUE INDEX "programPK" ON program(id int8_ops); From pg_stat_user_indexes: Staging: idx_scan: 5826745, idx_tup_read: 52715470, idx_tup_fetch: 52644465. Production: idx_scan: 7277919087, idx_tup_read: 90612605047, idx_tup_fetch: 5207807880.
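A minimal sketch of the query behind those counters, assuming the standard pg_stat_user_indexes view:

  -- Cumulative usage counters for the index in question
  SELECT idx_scan, idx_tup_read, idx_tup_fetch
  FROM pg_stat_user_indexes
  WHERE indexrelname = 'programPK';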

Re: Huge shared hit for small table

2019-11-04 Thread Andres Freund
Hi, On 2019-11-04 19:38:40 +0000, Scott Rankin wrote: > In the staging environment, we get this: > Index Scan using "programPK" on public.program prog (cost=0.29..0.35 rows=1 > width=16) (actual time=0.002..0.003 rows=1 loops=21965) > Output: prog.id, prog.version, prog.active,

Huge shared hit for small table

2019-11-04 Thread Scott Rankin
Hello all, We are trying to debug some slow performance in our production environment (Amazon RDS, PostgreSQL 9.6.11), and we’re looking at a particular EXPLAIN node that seems… weird. This is a very large query involving a number of joins, but it performs pretty well in our staging
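The full query isn't included in the preview; a generic sketch of how such a per-node buffer figure is obtained, using only columns visible in the plan excerpt quoted later in the thread (the literal id is a placeholder):

  -- BUFFERS adds "Buffers: shared hit=..." to each plan node, which is
  -- where an unexpectedly large shared-hit count shows up
  EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
  SELECT prog.id, prog.version, prog.active
  FROM program prog
  WHERE prog.id = 12345;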