[GENERAL] Question on notifications

2012-04-29 Thread Alexander Reichstadt
Hi,

From the documentation I was able to build a trigger that fires upon deletion of a 
record and calls a function that delivers tablename_operation as a notification one 
can subscribe to. So in a terminal I can say LISTEN persons_delete and will 
instantly receive

Asynchronous notification "persons_delete" received from server process 
with PID 54790.

if there was a delete. But what I don't fully understand is how to do this with 
PQnotifies. Following the documentation I get no notifications, even though I 
subscribe to them after successfully connecting to the server in the same way I do 
from the terminal.

Googling didn't give me examples I was able to use. Please, can someone help?

Thanks
Alex



Re: [GENERAL] Bug? Query plans / EXPLAIN using gigabytes of memory

2012-04-29 Thread Toby Corkindale

On 30/04/12 11:26, Rob Sargentg wrote:

On 04/29/2012 07:19 PM, Toby Corkindale wrote:

On 27/04/12 09:33, Tom Lane wrote:

Toby Corkindale writes:

I've created a bit of a test case now.
There's a Perl script here:
http://dryft.net/postgres/


AFAICT, what is happening is that we're repeating the planning of that
messy nest of views for each child table of foo. For most of the
children the planner eventually decides that the join degenerates to
nothing because of constraint exclusion, but not until it's expended a
fair amount of time and memory space per child.

I looked at whether we could improve that by having inheritance_planner
use a temporary memory context per child, but that doesn't look very
practical: it would add a good deal of extra data-copying overhead,
and some of the data structures involved are not easily copiable.

The general scheme of replanning per child might be questioned as well,
but IMO it's fairly important given the looseness of inheritance
restrictions --- it's not unlikely that you *need* different plans for
different children. We might be able to reconsider that approach
whenever we invent an explicit concept of partitioned tables, since
presumably the partitions would all be essentially alike.

In the meantime, the best advice I can come up with is to reconsider
whether you need so many partitions. That mechanism is really designed
for only a dozen or two partitions at most.



Hi Tom,
Thanks for looking into this, I appreciate you spending the time.

The system I've come up with for partitioning this data requires quite
a lot of partitions - say thirty to seventy - but I didn't realise it
would cause trouble down the line, so I'll see if it can be reworked
to reduce the number.

For what it's worth, the actual query that was blowing out to
gigabytes was only hitting a couple of dozen partitions per table it
was touching - but it was hitting three such tables, about sixteen
times (!) each.

I'm still curious about why I can do a SELECT * FROM complexview
without using much memory, but an UPDATE foo FROM complexview causes
all the memory to get exhausted?

Thanks,
Toby


Does

UPDATE foo set 
where foo.id in (select id from complexview...)

also swallow the memory?


Yes, definitely. (See an earlier post of mine for several variations on 
the query)


However a two-stage process doesn't, ie.
create temp table t as select id from complexview;
update foo set ... where foo.id in (select id from t);
(or the same thing with FROM)



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Bug? Query plans / EXPLAIN using gigabytes of memory

2012-04-29 Thread Tom Lane
Toby Corkindale  writes:
> I'm still curious about why I can do a SELECT * FROM complexview without 
> using much memory, but an UPDATE foo FROM complexview causes all the 
> memory to get exhausted?

The former only requires the complexview to get planned once.

regards, tom lane



Re: [GENERAL] Bug? Query plans / EXPLAIN using gigabytes of memory

2012-04-29 Thread Rob Sargentg

On 04/29/2012 07:19 PM, Toby Corkindale wrote:

On 27/04/12 09:33, Tom Lane wrote:

Toby Corkindale  writes:

I've created a bit of a test case now.
There's a Perl script here:
http://dryft.net/postgres/


AFAICT, what is happening is that we're repeating the planning of that
messy nest of views for each child table of foo.  For most of the
children the planner eventually decides that the join degenerates to
nothing because of constraint exclusion, but not until it's expended a
fair amount of time and memory space per child.

I looked at whether we could improve that by having inheritance_planner
use a temporary memory context per child, but that doesn't look very
practical: it would add a good deal of extra data-copying overhead,
and some of the data structures involved are not easily copiable.

The general scheme of replanning per child might be questioned as well,
but IMO it's fairly important given the looseness of inheritance
restrictions --- it's not unlikely that you *need* different plans for
different children.  We might be able to reconsider that approach
whenever we invent an explicit concept of partitioned tables, since
presumably the partitions would all be essentially alike.

In the meantime, the best advice I can come up with is to reconsider
whether you need so many partitions.  That mechanism is really designed
for only a dozen or two partitions at most.



Hi Tom,
Thanks for looking into this, I appreciate you spending the time.

The system I've come up with for partitioning this data requires quite 
a lot of partitions - say thirty to seventy - but I didn't realise it 
would cause trouble down the line, so I'll see if it can be reworked 
to reduce the number.


For what it's worth, the actual query that was blowing out to 
gigabytes was only hitting a couple of dozen partitions per table it 
was touching - but it was hitting three such tables, about sixteen 
times (!) each.


I'm still curious about why I can do a SELECT * FROM complexview 
without using much memory, but an UPDATE foo FROM complexview causes 
all the memory to get exhausted?


Thanks,
Toby


Does

UPDATE foo set 
where foo.id in (select id from complexview...)

also swallow the memory?




Re: [GENERAL] Bug? Query plans / EXPLAIN using gigabytes of memory

2012-04-29 Thread Toby Corkindale

On 27/04/12 09:33, Tom Lane wrote:

Toby Corkindale  writes:

I've created a bit of a test case now.
There's a Perl script here:
http://dryft.net/postgres/


AFAICT, what is happening is that we're repeating the planning of that
messy nest of views for each child table of foo.  For most of the
children the planner eventually decides that the join degenerates to
nothing because of constraint exclusion, but not until it's expended a
fair amount of time and memory space per child.

I looked at whether we could improve that by having inheritance_planner
use a temporary memory context per child, but that doesn't look very
practical: it would add a good deal of extra data-copying overhead,
and some of the data structures involved are not easily copiable.

The general scheme of replanning per child might be questioned as well,
but IMO it's fairly important given the looseness of inheritance
restrictions --- it's not unlikely that you *need* different plans for
different children.  We might be able to reconsider that approach
whenever we invent an explicit concept of partitioned tables, since
presumably the partitions would all be essentially alike.

In the meantime, the best advice I can come up with is to reconsider
whether you need so many partitions.  That mechanism is really designed
for only a dozen or two partitions at most.



Hi Tom,
Thanks for looking into this, I appreciate you spending the time.

The system I've come up with for partitioning this data requires quite a 
lot of partitions - say thirty to seventy - but I didn't realise it 
would cause trouble down the line, so I'll see if it can be reworked to 
reduce the number.


For what it's worth, the actual query that was blowing out to gigabytes 
was only hitting a couple of dozen partitions per table it was touching 
- but it was hitting three such tables, about sixteen times (!) each.


I'm still curious about why I can do a SELECT * FROM complexview without 
using much memory, but an UPDATE foo FROM complexview causes all the 
memory to get exhausted?


Thanks,
Toby
