Re: [HACKERS] Proposal: Select ... AS OF Savepoint

2007-11-04 Thread Simon Riggs
On Mon, 2007-11-05 at 11:58 +0530, Gokulakannan Somasundaram wrote:

> > The idea was to write a syncpoint every N seconds where we record the
> > time and a snapshot of what's in progress.
>
> What exactly is getting recorded here? Will the Syncpoint be similar
> to the Undo Log at distinct intervals?

Postgres needs to record the snapshot to allow visibility checks. There
is no Undo log; the database records themselves represent the logical
equivalent of an Undo log in other databases.

But as I said, I'm not working on this, so I'm not going into detailed
design anytime soon.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Proposal: Select ... AS OF Savepoint

2007-11-04 Thread Gokulakannan Somasundaram
On 11/4/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
>
> On Fri, 2007-11-02 at 13:40 +0100, Hans-Juergen Schoenig wrote:
> > >
> > > I think Simon Riggs is already working on that idea. This one is
> > > fairly easy to implement. I think these are some of the features
> > > only a time-stamp based database can implement. I think database
> > > standards were formed during the time when data consistency was
> > > provided with lock-based mechanisms. Moreover, I have already
> > > committed on the indexes with snapshot and I am still waiting for
> > > its approval from -hackers. If that does go through, then I need to
> > > work on the reverse mapping hash tables, which is really a long
> > > task. So I may not be able to take up time-travel now.
> >
> > If I remember my last talk with Simon correctly, the idea is to have
> > timetravel across transactions.
> > Having this feature inside a transaction will not make it into CVS as
> > it is basically of no practical use.
> > I would suggest putting some effort into making it work across
> > transactions. Just saving the snapshot is not enough
> > here - there are a couple of other things which have to be taken into
> > consideration (transaction wraparound, etc.).
> >
> > If you want to work on timetravel, my team and I can provide some
> > assistance, as we wanted to help in this area anyway.



Thanks for your inputs, Simon.

> Yeh, I'd want to do that for recovery purposes though, not for general
> access.


I guessed it.


> The idea was to write a syncpoint every N seconds where we record the
> time and a snapshot of what's in progress.


What exactly is getting recorded here? Will the Syncpoint be similar to the
Undo Log at distinct intervals?
This may be a stupid question, but is it not a good idea to implement
time-travel through the replication server?

> The syncpoints would need to
> be visible in the system like prepared transactions. A superuser could
> reconnect to one of the syncpoints and see data as it was at the
> previous time. Difficulties being dropped objects and the negative
> effects on vacuuming, both of which are surmountable, but are big
> current blockers.
>
> I'm not working on this currently, maybe an 8.5+ feature.
>
> --
>   Simon Riggs
>   2ndQuadrant  http://www.2ndQuadrant.com
>
>


-- 
Thanks,
Gokul.
CertoSQL Project,
Allied Solution Group.
(www.alliedgroups.com)


Re: [HACKERS] type money causes unrestorable dump

2007-11-04 Thread D'Arcy J.M. Cain
On Sun, 04 Nov 2007 20:38:11 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> "D'Arcy J.M. Cain" <[EMAIL PROTECTED]> writes:
> > Hmm.  I think I like Tom's version better.  However, since my primary
> > goal here is to remove the deprecation I will let you guys duke it out
> > over the additional clause.  :-)
> 
> Just pick the wording you like and commit it; we've spent more than
> enough time on this already.

OK.  I can't seem to connect at the moment but I will commit tomorrow
morning if not before.

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.



Re: [HACKERS] Proposal: real procedures again (8.4)

2007-11-04 Thread Bruce Momjian

This has been saved for the 8.4 release:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

---

Pavel Stehule wrote:
> Hello,
> 
> I found a lot of discussion about this topic.
> 
> http://www.postgresql.org/docs/faqs.TODO.html
> http://archives.postgresql.org/pgsql-hackers/2003-08/msg00501.php
> http://archives.postgresql.org/pgsql-hackers/2004-09/msg00734.php
> http://archives.postgresql.org/pgsql-hackers/2004-08/msg00872.php
> http://archives.postgresql.org/pgsql-hackers/2004-09/msg00702.php
> 
> There is one result - OUT params for functions. I propose starting with
> simple goals that we can enhance in the future.
> 
> First goal: Procedures support for better conformance with ANSI SQL:
> 
> * a procedure returns values only through OUT and INOUT params,
> * a procedure has its own executor, which allows byref params (and its
> own transaction management in the future),
> * a procedure can be overloaded,
> * a procedure cannot return a recordset or multiple recordsets,
> * a procedure doesn't support default parameters,
> * the SQL statement CALL allows only expressions (this proposal doesn't
> need session variables) for older environments,
> * a new SPI_exec_procedures API (allows binding to host variables) and
> something similar in libpq, to allow a CALL implementation in pgsql and
> others,
> * a new internal exec_exec_proc (allows a ref on a datum variable) used in
> the plpgsql CALL statement,
> * a new V2 calling convention (maybe based on SQL/CLI),
> * no changes in current functions support.
> 
> Later:
> * a procedure can manage transactions,
> * a procedure can return a recordset or multiple recordsets,
> * a procedure allows default parameters,
> * the CALL statement allows session variables,
> * no changes in current functions support
> 
> Why a new calling convention? I would like to support byref variables, and
> then I have to carry memory context info ... and maybe some others.
> 
> Have a nice weekend,
> 
> Pavel Stehule
> 
> p.s.
> 
> Why procedures? New parts of ANSI SQL use them, and what is worse, they
> use methods:
> http://www.wiscorp.com/H2-2005-350-WG4-wlg005-Part-7-History-2nd-Edition-Text-for-Working-Draft.pdf
> 

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] should I worry?

2007-11-04 Thread Tom Lane
I wrote:
>> Hmm, this is messier than I thought.  What evidently has happened is
>> that at one time or another, one of the two tables involved in an FK
>> relationship has been dropped and re-created.  If you'd had proper
>> FK constraints the constraints would have gone away cleanly, but with
>> these old trigger definitions there was no mechanism to make that
>> happen, and so the triggers on the other table remained in place.

I looked back in our CVS history and found that the above statement
is incorrect, or at least an oversimplification.  Ever since PG 7.0
(the first release with FOREIGN KEY support) there has been a mechanism
to auto-drop the FK triggers on the other side of the relationship.
Since 7.3 the pg_depend mechanism has handled it, but before that DROP
TABLE did a scan of pg_trigger for entries having tgconstrrelid pointing
to the doomed table.  So how come it broke?  Further study provided a
pathway that could cause this: 7.0 pg_dump failed to include
tgconstrrelid (the "FROM table" clause) in its CREATE CONSTRAINT TRIGGER
commands.  This was fixed in 7.1 and all later releases, but was never
back-patched to 7.0.x.  In 7.3 and later, the backend has a hack to
regenerate the missing tgconstrrelid value when loading an RI constraint
trigger definition, but 7.0-7.2 will just load the definition and set
tgconstrrelid = 0.

So the scenario must have gone like this:

1. Create some FOREIGN KEY constraints in 7.0.
2. Dump the database using 7.0's pg_dump.
3. Load into 7.0, 7.1, or 7.2.
4. Drop and recreate the table on one side of the FK relationship,
   but don't re-create the FK constraint.
5. Continue to use the database up to the present day without ever
   noticing that the FK constraint was only partially enforced.
   (Dumps and reloads would not have changed its status.)

It's still not clear to me how the perso table managed to have only a
DELETE trigger and no UPDATE trigger for the FK reference from
perso_competences, but all the other inconsistencies seem to be
explainable by this mechanism.

Also, I was wondering why some of the trigger definitions had
names like "RI_ConstraintTrigger_28974011" rather than the underlying
constraint name, which is always "" in this set of triggers.
I now notice that these entries are also associated with duplicated
sets of triggers, which makes me think they are the result of manual
attempts at patching broken RI trigger sets.

I don't suppose there is enough history of schema changes in this DB
to confirm or deny these theories?

Anyway, the conclusion that the breakage must have gone undetected
since 7.2 or before makes me feel that maybe this isn't quite as
critical as I thought before.  There can't be a huge number of people
in such situations, and the FKs aren't working per spec for them
anyway.

regards, tom lane



Re: [HACKERS] type money causes unrestorable dump

2007-11-04 Thread Tom Lane
"D'Arcy J.M. Cain" <[EMAIL PROTECTED]> writes:
> Hmm.  I think I like Tom's version better.  However, since my primary
> goal here is to remove the deprecation I will let you guys duke it out
> over the additional clause.  :-)

Just pick the wording you like and commit it; we've spent more than
enough time on this already.

regards, tom lane



Re: [HACKERS] should I worry?

2007-11-04 Thread Tom Lane
I wrote:
> Hmm, this is messier than I thought.  What evidently has happened is
> that at one time or another, one of the two tables involved in an FK
> relationship has been dropped and re-created.  If you'd had proper
> FK constraints the constraints would have gone away cleanly, but with
> these old trigger definitions there was no mechanism to make that
> happen, and so the triggers on the other table remained in place.

That seems to have happened several times, in fact.  After tweaking
ConvertTriggerToFK() to be more verbose and to insist on finding all
three triggers, I get this:

NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_competences(pcomp_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_position(ppos_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY objet_position(pobj_obj_cod) REFERENCES objets(obj_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY objet_position(pobj_obj_cod) REFERENCES objets(obj_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  converting trigger group into constraint "" FOREIGN KEY 
objet_position(pobj_obj_cod) REFERENCES objets(obj_cod)
DETAIL:  Found referenced table's UPDATE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_objets(perobj_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_objets(perobj_obj_cod) REFERENCES objets(obj_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_objets(perobj_obj_cod) REFERENCES objets(obj_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  converting trigger group into constraint "" FOREIGN KEY 
perso_objets(perobj_obj_cod) REFERENCES objets(obj_cod)
DETAIL:  Found referenced table's UPDATE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY messages_dest(dmsg_msg_cod) REFERENCES messages(msg_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY messages_dest(dmsg_msg_cod) REFERENCES messages(msg_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  converting trigger group into constraint "" FOREIGN KEY 
messages_dest(dmsg_msg_cod) REFERENCES messages(msg_cod)
DETAIL:  Found referenced table's UPDATE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY messages_dest(dmsg_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY messages_exp(emsg_msg_cod) REFERENCES messages(msg_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY messages_exp(emsg_msg_cod) REFERENCES messages(msg_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  converting trigger group into constraint "" FOREIGN KEY 
messages_exp(emsg_msg_cod) REFERENCES messages(msg_cod)
DETAIL:  Found referenced table's UPDATE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY messages_exp(emsg_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_competences(pcomp_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referencing table's trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_competences(pcomp_perso_cod) REFERENCES perso(perso_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_position(ppos_pos_cod) REFERENCES positions(pos_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY perso_position(ppos_pos_cod) REFERENCES positions(pos_cod)
DETAIL:  Found referenced table's UPDATE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY objet_position(pobj_pos_cod) REFERENCES positions(pos_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY objet_position(pobj_pos_cod) REFERENCES positions(pos_cod)
DETAIL:  Found referenced table's UPDATE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY or_position(por_pos_cod) REFERENCES positions(pos_cod)
DETAIL:  Found referenced table's DELETE trigger.
NOTICE:  ignoring incomplete trigger group for constraint "" FOREIGN 
KEY or_position(por_pos_cod) REFERENCES positions(pos_cod)
DET

Re: [HACKERS] type money causes unrestorable dump

2007-11-04 Thread D'Arcy J.M. Cain
On Sun, 4 Nov 2007 17:24:10 -0500 (EST)
Bruce Momjian <[EMAIL PROTECTED]> wrote:
> D'Arcy J.M. Cain wrote:
> > +   
> > +Since the output of this data type is locale-sensitive, it may not
> > +work to load money data into a database that has a different
> > +setting of lc_monetary.  To avoid problems, before
> > +restoring a dump make sure lc_monetary has the same or
> > +equivalent value as in the database that was dumped.
> > +   
> > +
> 
> How about:
> 
> > +restoring a dump make sure lc_monetary has a value similar
> > +to the dumped database.

Hmm.  I think I like Tom's version better.  However, since my primary
goal here is to remove the deprecation I will let you guys duke it out
over the additional clause.  :-)

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.



Re: [HACKERS] "bad key in cancel request"

2007-11-04 Thread Neil Conway
On Sun, 2007-11-04 at 11:06 -0500, Tom Lane wrote:
> No, if it's intended for the log it should be LOG.  Your other proposals
> are actually *less* likely to get to where the DBA could see them.

Good point. I suggested WARNING because that suggests that something is
awry, whereas LOG is used for routine log messages.

I've changed the messages from DEBUG2 to LOG in CVS HEAD -- I didn't
backport the change, but I can if anyone feels strongly about it.

-Neil





Re: [HACKERS] type money causes unrestorable dump

2007-11-04 Thread Bruce Momjian
D'Arcy J.M. Cain wrote:
> +   
> +Since the output of this data type is locale-sensitive, it may not
> +work to load money data into a database that has a different
> +setting of lc_monetary.  To avoid problems, before
> +restoring a dump make sure lc_monetary has the same or
> +equivalent value as in the database that was dumped.
> +   
> +

How about:

> +restoring a dump make sure lc_monetary has a value similar
> +to the dumped database.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] [HACKERS] Text <-> C string

2007-11-04 Thread Bruce Momjian

This has been saved for the 8.4 release:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

---

Brendan Jurd wrote:
> As discussed on -hackers, I'm trying to get rid of some redundant code
> by creating a widely useful set of functions to convert between text
> and C string in the backend.
> 
> The new extern functions, declared in include/utils/builtins.h and
> defined in backend/utils/adt/varlena.c, are:
> 
> char * text_cstring(const text *t)
> char * text_cstring_limit(const text *t, int len)
> text * cstring_text(const char *s)
> text * cstring_text_limit(const char *s, int len)
> 
> Within varlena.c, the actual conversions are performed by:
> 
> char * do_text_cstring(const text *t, const int len)
> text * do_cstring_text(const char *s, const int len)
> 
> These functions now do the work for the fmgr functions textin and
> textout, as well as being directly accessible by backend code.
> 
> I've searched through the backend for any code which converted between
> text and C string manually (with memcpy and VARDATA), replacing with
> calls to one of the four new functions as appropriate.
> 
> I came across some areas which were using the same, or similar,
> conversion technique on other varlena data types, such as bytea or
> xmltype.  In cases where the conversion was completely identical I
> used the new functions.  In cases with any differences (even if they
> seemed minor) I played it safe and left them alone.
> 
> I'd now like to submit my work so far for review.  This patch compiled
> cleanly on Linux and passed all parallel regression tests.  It appears
> to be performance-neutral based on a few rough tests; I haven't tried
> to profile the changes in detail.
> 
> There is still a lot of code out there using DirectFunctionCall1 to
> call text(in|out)).  I've decided to wait for some community feedback
> on the patch as it stands before replacing those calls.  There are a
> great many, and it would be a shame to have to go through them more
> than once.
> 
> I would naively expect that replacing fmgr calls with direct calls
> would lead to a performance gain (no fmgr overhead), but honestly I'm
> not sure whether that would actually make a difference.
> 
> Thanks for your time,
> BJ

[ Attachment, skipping... ]


-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] should I worry?

2007-11-04 Thread Tom Lane
> On Sun, 4 Nov 2007, Tom Lane wrote:
>> Would it be possible for you to send me (off-list) all of the CREATE
>> CONSTRAINT TRIGGER commands appearing in the dump?

> [done]

Hmm, this is messier than I thought.  What evidently has happened is
that at one time or another, one of the two tables involved in an FK
relationship has been dropped and re-created.  If you'd had proper
FK constraints the constraints would have gone away cleanly, but with
these old trigger definitions there was no mechanism to make that
happen, and so the triggers on the other table remained in place.
In some cases it looks like the FK constraint was re-established
with a fresh ALTER TABLE ADD CONSTRAINT command, leading to redundant
sets of triggers on one side of the relationship, while in other cases
it wasn't, leading to a partially functional FK constraint :-(
This explains why your dump contains some invalid data: the constraint
wasn't being enforced against the FK table.

So the question is what to do when we see a situation like this.

As the code stands, it will try to re-create an FK constraint after
seeing the two triggers on the PK table, regardless of whether a
trigger is present on the FK table.  That has a couple of disadvantages:
* it may try to create an FK constraint that was only partially enforced
before, leading to errors like we saw in Olivier's report.
* in the situation where the user had re-established the constraint,
we may create redundant FK constraints.

The only thing I can think to do differently is to insist on seeing
all three matching triggers before we create the FK constraint.
This could be programmed so that we make only one constraint not two
when there's redundant triggers in the input.  The downside is that
we'd fail to translate a constraint that was only partially enforced
in the source database.  Maybe that's the best thing; it's symmetric
with what will happen when we see only the trigger on the FK table
and none for the PK table.  (We can't do anything else in that case,
for lack of sufficient information.)

It would be nice if we could throw warnings for incomplete trigger
sets, but I see no very helpful way to do that --- we'd only be
sure we could emit the warning upon client disconnect, and then
it's too late to be sure the user would see the warning.

Comments, better ideas?

regards, tom lane



Re: [HACKERS] Test lab

2007-11-04 Thread Stefan Kaltenbrunner
Greg Smith wrote:
> On Sat, 3 Nov 2007, Stefan Kaltenbrunner wrote:
> 
>> there is the various dbt workloads,sysbench, jans tpc-w
>> implementation, hell even pgbench
> 
> The DBT workloads are good for simulating disk-bound operations, but I
> don't think they're sufficient by themselves for detecting performance
> regressions because of that.  TPC-W might serve to better simulate when
> things are CPU-bound, that particular implementation felt a bit out of
> date when I tried using it and I think it could use a round of polishing.

sure, it might need work, but it is still a noteworthy thing we could use
(or at least seriously evaluate), and it seems a bit wrong to judge in
advance what might (or might not) detect a regression in that regard.
Especially since we don't have any long-term consistent tracking yet, so
we don't really know what is good and what is not.

> 
> I never got the database tests in SysBench to produce useful results,
> the minute I cranked the number of simultaneous clients up there were
> deadlock issues that suggested the PostgreSQL porting effort still
> needed work. Lots of general crashes when I was testing that as well.

hmm, I have not seen that, and the recent FreeBSD-related scalability
benchmarks (http://people.freebsd.org/~kris/scaling/) seem to indicate
that it works quite well, at least for some workloads.

> 
> pgbench can work well for testing low-level operations.  I use it
> frequently to see how fast a system can execute individual statements,
> and in that context I've found it useful for finding performance
> regressions. If you run it enough to average out the noise the results
> can be stable (I've been working on some pgbench tools to do just that:
> http://www.westnet.com/~gsmith/content/postgresql/pgbench-tools.htm )
> The main problem I've run into is that the pgbench binary itself becomes
> increasingly a bottleneck once the client load increases.  The simple,
> single select()/parse/execute loop it runs breaks down around 50 clients
> on the systems I've tested, and you really need to run pgbench on
> another server to reach even 100 usefully.

well it might still give us a baseline to compare against - but point taken.

> 
> The big problem with all these benchmarks is that none of them stress
> query planning in any useful way.  One thing I've been looking for is a
> public data set and tests that depend on the planner working correctly
> in order to work efficiently.  For example, it would be great to be able
> to show someone how to test whether they had correctly analyzed the
> tables and set shared_buffers + effective_cache_size usefully on a test
> system. I envision loading a bunch of data, then running a difficult
> plan that will only execute effectively if the underlying components are
> tuned properly.  Sadly I don't actually know enough about that area to
> write such a test myself.

well, one thing I have been wondering about is whether it might make sense
as a start to just troll -hackers and -bugs from the past few years,
collect all the bug/regression reproduction samples posted (especially
the planner-related ones), and do benchmarking/testing with a special
focus on plan changes or planning time/quality regressions.


Stefan



Re: [HACKERS] Test lab

2007-11-04 Thread Simon Riggs
On Fri, 2007-11-02 at 10:42 -0700, Joshua D. Drake wrote:

> The test lab is finally starting to come to fruition. We (the
> community) have been donated hardware via MyYearbook and Hi5. It is my
> understanding that we may also have some coming from HP.
> 
> We are currently setting up a Trac for management and publishing of
> results etc... I have also spoken with Mark Wong and he is going to be
> helping with DBT and such. 
> 
> The first machine we are going to have up and have ready access to is a
> HP DL 585. It has 8 cores (Opteron), 32GB of ram and 28 spindles over 4
> channels.
> 
> My question is -hackers, is who wants first bite and what do they
> want :) 

I'll take a few slots, probably 3 x 1 days, at least a week apart. Won't
be able to start before 19th Nov.

I want to look at scaling issues on some isolated workloads on in-memory
databases, as well as WAL writing. I'll generate the data directly on
the system.

Any chance we can validate the I/O config and publish bonnie results
first, please? 

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com




Re: [HACKERS] Test lab

2007-11-04 Thread Simon Riggs
On Fri, 2007-11-02 at 17:25 -0700, Mark Wong wrote:
> On Fri, 02 Nov 2007 15:20:27 -0400
> Tom Lane <[EMAIL PROTECTED]> wrote:
> 
> > "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > > My question is -hackers, is who wants first bite and what do they
> > > want :) 
> > 
> > Something I'd like to have back real soon is the daily DBT run against
> > CVS HEAD that Mark Wong was doing at OSDL.  Maybe we don't need a
> > particularly enormous machine for that, but comparable runs day after
> > day are real nice for noting when patches had unexpected performance
> > impacts...
> 
> I expect the processors in this system to be faster than what I was using but 
> this system does have about a third of the number of spindles I had 
> previously.  In my spare time I am trying to complete a TPC-E implementation 
> (dbt5) to the current spec revision and it is supposed to have significantly 
> less disk requirements than the TPC-C derivative (dbt2) I was using in the 
> past.  If we believe TPC-E achieved all its goals, I think it would be 
> appropriate to start using that as soon as the kit is ready.
> 
> Anyone want to help with the kit? :)  It's the C stored functions that need
> to be revised.

Mark,

Why don't you post a TODO list for TPC-E somewhere, so people can bite
small pieces off of the list. I'm sure there's lots of people can help
if we do it that way.

I'm more interested now in less disk-bound workloads, so TPC-E is good.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com




Re: [HACKERS] [GENERAL] AutoVacuum Behaviour Question

2007-11-04 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
>>> No, it isn't.  Please add a TODO item about it:
>>> * Prevent long-lived temp tables from causing frozen-Xid advancement
>>> starvation

> Jeff Amiel wrote:
>> Can somebody explain this one to me?  Because of our auditing technique, we
>> have many LONG-lived temp tables (one per pooled connection)... so as
>> long as the pool isn't disturbed, these temp tables can exist for a long
>> time (weeks... months?)

> Hmm.  The problem is that the system can't advance the frozen Xid for a
> database when there are temp tables that live for long periods of time.
> Autovacuum can't vacuum those tables; if the app vacuums them itself
> then there's no problem, but you can only vacuum them in the same
> session that created them.

I'm not convinced there's a huge problem here.  Surely Jeff's app is
going to either vacuum or truncate those temp tables occasionally;
otherwise they'll bloat to the point of uselessness.  Either action
will fix the problem.

The real issue is that the app has to remember to do that.  Perhaps
a better TODO item would be
* Find a way to autovacuum temp tables
though I admit I have no clue how to do that without giving up most
of the performance advantages of temp tables.

regards, tom lane



Re: [HACKERS] Intel x64 vs AMD x64 pgdata

2007-11-04 Thread Joshua D. Drake

On Sun, 04 Nov 2007 19:28:46 +0100
Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Joshua D. Drake wrote:
> 
> > 
> > x86_64 is x86_64, regardless of intel or amd.
> 
> Not exactly, ask kernel guys ;-). But for user space yes.

For the context of the discussion...

Joshua D. Drake

> 
>   Zdenek
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [HACKERS] Intel x64 vs AMD x64 pgdata

2007-11-04 Thread Zdenek Kotala

Joshua D. Drake wrote:

> x86_64 is x86_64, regardless of intel or amd.

Not exactly, ask kernel guys ;-). But for user space yes.

   Zdenek

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] "bad key in cancel request"

2007-11-04 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> ereport(DEBUG2,
>         (errmsg_internal("bad key in cancel request for process %d",
>                          backendPID)));

> I think this ought to be logged at a higher level than DEBUG2: for one
> thing, it is a potential security issue the DBA might want to be aware
> of. It could also indicate a misconfigured or buggy client application.
> Therefore, I think we should raise the level to WARNING, or perhaps
> NOTICE. Any objections?

No, if it's intended for the log it should be LOG.  Your other proposals
are actually *less* likely to get to where the DBA could see them.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Test lab

2007-11-04 Thread Greg Smith

On Sat, 3 Nov 2007, Stefan Kaltenbrunner wrote:

> there are the various DBT workloads, sysbench, Jan's TPC-W
> implementation, hell, even pgbench

The DBT workloads are good for simulating disk-bound operations, but I 
don't think they're sufficient by themselves for detecting performance 
regressions because of that.  TPC-W might serve better to simulate when 
things are CPU-bound, but that particular implementation felt a bit out of 
date when I tried using it, and I think it could use a round of polishing.


I never got the database tests in SysBench to produce useful results: the 
minute I cranked up the number of simultaneous clients, there were deadlock 
issues that suggested the PostgreSQL porting effort still needed work. 
There were lots of general crashes when I was testing it as well.


pgbench can work well for testing low-level operations.  I use it 
frequently to see how fast a system can execute individual statements, and 
in that context I've found it useful for finding performance regressions. 
If you run it enough to average out the noise, the results can be stable 
(I've been working on some pgbench tools to do just that: 
http://www.westnet.com/~gsmith/content/postgresql/pgbench-tools.htm ).  The 
main problem I've run into is that the pgbench binary itself increasingly 
becomes the bottleneck as the client load increases.  The simple, 
single select()/parse/execute loop it runs breaks down around 50 clients 
on the systems I've tested, and you really need to run pgbench on another 
server to reach even 100 usefully.


The big problem with all these benchmarks is that none of them stress 
query planning in any useful way.  One thing I've been looking for is a 
public data set and tests that depend on the planner working correctly in 
order to work efficiently.  For example, it would be great to be able to 
show someone how to test whether they had correctly analyzed the tables 
and set shared_buffers + effective_cache_size usefully on a test system. 
I envision loading a bunch of data, then running a difficult plan that 
will only execute effectively if the underlying components are tuned 
properly.  Sadly I don't actually know enough about that area to write 
such a test myself.
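A hypothetical sketch of the kind of planner-sensitive check described above (table names, columns, and settings are invented for illustration, not part of any existing test kit):

```sql
-- After loading the public data set, make sure statistics and cache
-- settings are in place before examining a difficult plan:
ANALYZE customers;
ANALYZE orders;

SET effective_cache_size = '1GB';   -- value is illustrative

EXPLAIN ANALYZE
SELECT c.name, count(*)
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.placed_at > now() - interval '30 days'
GROUP BY c.name;
-- On a mis-tuned or un-analyzed system this query should pick a
-- visibly worse join strategy, making the problem easy to spot.
```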


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] type money causes unrestorable dump

2007-11-04 Thread D'Arcy J.M. Cain
On Sat, 03 Nov 2007 15:47:40 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Greg's objection caused me to rethink that.  Doing it would be a problem
> when transporting dump files across platforms: what if the appropriate
> locale name is spelled differently on the new machine?  We should
> probably leave it in the user's hands to get this right.  So the added
> text could be used as I suggested, or tweaked to say that you must set
> lc_monetary to something equivalent to the prior setting.

OK, how is this?

Index: doc/src/sgml/datatype.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v
retrieving revision 1.211
diff -u -p -u -r1.211 datatype.sgml
--- doc/src/sgml/datatype.sgml  21 Oct 2007 20:04:37 -0000      1.211
+++ doc/src/sgml/datatype.sgml  4 Nov 2007 17:09:03 -0000
@@ -834,14 +834,6 @@ ALTER SEQUENCE
    <title>Monetary Types</title>
 
-   <note>
-    <para>
-     The <type>money</type> type is deprecated. Use
-     <type>numeric</type> or <type>decimal</type> instead, in
-     combination with the <function>to_char</function> function.
-    </para>
-   </note>
-
    <para>
     The <type>money</type> type stores a currency amount with a fixed
     fractional precision; see
     <xref linkend="datatype-money-table">.
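In practical terms, the fix described above falls to the user at restore time: set lc_monetary to something equivalent to the locale under which the dump was made, before loading it (the locale name below is only an example):

```sql
-- In the restoring session, before loading a dump that contains
-- money columns; the locale must be equivalent to the dumping
-- server's lc_monetary setting:
SET lc_monetary TO 'en_US.UTF-8';
```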

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] should I worry?

2007-11-04 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I've tried it and got those logs:

BTW, is that a complete list of the NOTICEs you got?  I'd expect to see
exactly two "ignoring" messages for each "converting" message, and it's
a bit worrisome that that's not what you seem to have.

Another thing that's strange is that some of the messages reference
auto-generated trigger names instead of constraint names:

Nov  4 16:02:26 sun postgres[12505]: [227-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "RI_ConstraintTrigger_28974035" on 
table
Nov  4 16:02:26 sun postgres[12505]: [227-2]  "perso_competences"

That's fairly unhelpful since it makes it harder to match up the
messages, but I didn't see any such cases when I was testing the patch
here.

Would it be possible for you to send me (off-list) all of the CREATE
CONSTRAINT TRIGGER commands appearing in the dump?  I don't need to
see anything else, but I'm curious to look at those.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] should I worry?

2007-11-04 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I've got two problems:

> Looking at the errors, ISTM the foreign-key statement is the other way
> round: levt_tevt_cod is in ligne_evt, NOT in type_evt

No, that's just how we've worded FK violation errors for some time.
The real question is how did FK violations get into your dump?

> And, looking at the generated foreign keys, I don't see ON UPDATE/ON
> DELETE clauses

The ones we can see here are default (NO ACTION) cases, so I'm not
convinced you're describing a real problem.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] should I worry?

2007-11-04 Thread ohp
Dear Tom,
On Sat, 3 Nov 2007, Tom Lane wrote:

> Date: Sat, 03 Nov 2007 21:21:20 -0400
> From: Tom Lane <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: Heikki Linnakangas <[EMAIL PROTECTED]>,
>  pgsql-hackers list 
> Subject: Re: [HACKERS] should I worry?
>
> [EMAIL PROTECTED] writes:
> > Is there a query I can use to find all the unnamed triggers, delete them and
> > recreate them with the right statement?
>
> I've applied a patch that should persuade the backend to convert the old
> CREATE CONSTRAINT TRIGGER commands into proper foreign-key constraints.
> I'd suggest applying the patch and re-loading the dump instead of trying
> to fix things manually.
>
> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/commands/trigger.c.diff?r1=1.219;r2=1.220
>
>   regards, tom lane
>
I've tried it and got those logs:

Nov  4 16:02:24 sun postgres[12505]: [189-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "RI_ConstraintTrigger_28974025" on 
table
Nov  4 16:02:24 sun postgres[12505]: [189-2]  "perso_competences"
Nov  4 16:02:24 sun postgres[12505]: [190-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "perso_position"
Nov  4 16:02:24 sun postgres[12505]: [191-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "objet_position"
Nov  4 16:02:24 sun postgres[12505]: [192-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "objet_position"
Nov  4 16:02:24 sun postgres[12505]: [193-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "objet_position"
Nov  4 16:02:25 sun postgres[12505]: [194-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "perso_objets"
Nov  4 16:02:25 sun postgres[12505]: [195-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "perso_objets"
Nov  4 16:02:25 sun postgres[12505]: [196-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "perso_objets"
Nov  4 16:02:25 sun postgres[12505]: [197-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "perso_objets"
Nov  4 16:02:25 sun postgres[12505]: [198-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "messages_dest"
Nov  4 16:02:25 sun postgres[12505]: [199-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "messages_dest"
Nov  4 16:02:25 sun postgres[12505]: [200-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "messages_dest"
Nov  4 16:02:25 sun postfix/smtpd[12751]: connect from 
82-32-100-168.cable.ubr01.hawk.blueyonder.co.uk[82.32.100.168]
Nov  4 16:02:26 sun postgres[12505]: [201-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "messages_dest"
Nov  4 16:02:26 sun postgres[12505]: [202-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "messages_exp"
Nov  4 16:02:26 sun postgres[12505]: [203-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "messages_exp"
Nov  4 16:02:26 sun postgres[12505]: [204-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "messages_exp"
Nov  4 16:02:26 sun postgres[12505]: [205-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "messages_exp"
Nov  4 16:02:26 sun postgres[12505]: [206-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "RI_ConstraintTrigger_28974598" on 
table
Nov  4 16:02:26 sun postgres[12505]: [206-2]  "perso_competences"
Nov  4 16:02:26 sun postgres[12505]: [207-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "RI_ConstraintTrigger_28974610" on 
table
Nov  4 16:02:26 sun postgres[12505]: [207-2]  "perso_competences"
Nov  4 16:02:26 sun postgres[12505]: [208-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "perso_position"
Nov  4 16:02:26 sun postgres[12505]: [209-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "perso_position"
Nov  4 16:02:26 sun postgres[12505]: [210-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "objet_position"
Nov  4 16:02:26 sun postgres[12505]: [211-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "objet_position"
Nov  4 16:02:26 sun postgres[12505]: [212-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "or_position"
Nov  4 16:02:26 sun postgres[12505]: [213-1] NOTICE:  converting foreign-key 
trigger group into constraint "" on table "or_position"
Nov  4 16:02:26 sun postgres[12505]: [214-1] NOTICE:  ignoring incomplete 
foreign-key trigger group for constraint "" on table "ligne_evt"
Nov  4 16:02:26 sun postgres[12505]: [215-1] NOTICE:  converting foreign-key 
trigger group into constraint "" 

Re: [HACKERS] Segmentation fault using digest from pg_crypto

2007-11-04 Thread Bruce Momjian
Marko Kreen wrote:
> On 8/24/07, Manuel Sugawara <[EMAIL PROTECTED]> wrote:
> > Tom Lane <[EMAIL PROTECTED]> writes:
> > > Manuel Sugawara <[EMAIL PROTECTED]> writes:
> > >> "Marko Kreen" <[EMAIL PROTECTED]> writes:
> > >>> In 8.0 the pgcrypto functions were non-strict and checked for NULLs.
> > >>> In 8.1 they were made STRICT.
> > >>> In 8.2 the NULL check were removed from code.
> > >
> > >> Not a smart move IMHO; I didn't create the function, it was created
> > >> as part of my upgrade process. May I suggest putting back the check?
> > >
> > > That's the standard way of doing things in C functions (ie, rely on
> > > STRICT markings) and I see nothing wrong with it.
> > >
> > > If you were using an upgrade process that failed to incorporate
> > > version-to-version changes in the definitions of contrib functions,
> > > this is hardly going to be the only problem you encounter.
> >
> > I was under the impression that the standard procedure for upgrading
> > was to read the release notes searching for this kind of gotcha. I've
> > reviewed (again) the release notes for 8.1 and 8.2 and didn't find
> > anything related, but maybe I'm not searching hard enough.
> >
> > If something is not going to work (or is going to work in a different
> > way) in some version after loading a dump from a previous one, I think
> > it should be documented or some kind of backwards-compatibility
> > mechanism should be provided.
> 
> That's a valid complaint and I take the blame.
> 
> The problem was that such "evolution" was not actually planned.
> So when I noticed the 8.2 commit, I did not think of the implications
> hard enough to realize the need for release note for it.
> 
> Tom, how about putting a note about that into the next 8.2 minor
> release notes?  (8.3 too?)  Something like "You need to refresh
> pgcrypto functions, because since rel 8.2 the code depends
> on functions being tagged STRICT."

Seems 8.2.5 was released without this release-note mention, but we
haven't gotten any complaints about it, so perhaps we don't need to add
anything.
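For the archives: the "refresh" amounts to re-declaring the pgcrypto functions with the STRICT marking, as the 8.2 contrib script does. For digest() alone it looks roughly like this (shape follows the 8.2 contrib/pgcrypto install script; the module path can differ per installation):

```sql
-- With STRICT, NULL arguments short-circuit to a NULL result in the
-- executor instead of reaching the C code, which no longer checks:
CREATE OR REPLACE FUNCTION digest(text, text)
RETURNS bytea
AS '$libdir/pgcrypto', 'pg_digest'
LANGUAGE C IMMUTABLE STRICT;
```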

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Proposal: Select ... AS OF Savepoint

2007-11-04 Thread Simon Riggs
On Fri, 2007-11-02 at 13:40 +0100, Hans-Juergen Schoenig wrote:
> > 
> > I think Simon Riggs is already working on that idea. This one is
> > fairly easy to implement. I think these are some of the features
> > only a time-stamp based database can implement. I think database
> > standards were formed during the time, when the data consistency was
> > provided with Lock based mechanisms. And moreover i have already
> > committed on the indexes with snapshot and i am still waiting for
> > its approval from hackers. If that does go through, then i need to
> > work on the reverse mapping hash tables, which is really a long
> > task. So i may not be able to take  up  time-travel now. 
> > 
> 
> 
> 
> 
> if i remember my last talk with Simon correctly the idea is to have
> timetravel across transactions.
> having this feature inside a transaction will not make it into CVS as
> it is basically of no practical use.
> i would suggest to put some effort into making it work across
> transactions. just saving the snapshot is not enough
> here - there are a couple of other things which have to be taken into
> consideration (transaction wraparound, etc.)
> 
> 
> if you want to work on timetravel my team and i can provide some
> assistance as we wanted to help in this area anyway.

Yeh, I'd want to do that for recovery purposes though, not for general
access.

The idea was to write a syncpoint every N seconds, where we record the
time and a snapshot of what's in progress. The syncpoints would need to
be visible in the system, like prepared transactions. A superuser could
reconnect to one of the syncpoints and see data as it was at the
previous time. The difficulties are dropped objects and the negative
effects on vacuuming, both of which are surmountable but are big
blockers at the moment.

I'm not working on this currently, maybe an 8.5+ feature.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] xlogdump

2007-11-04 Thread Simon Riggs
On Fri, 2007-11-02 at 10:54 +, Gregory Stark wrote:

> Incidentally I would like to call xlog.c:RecordIsValid() which is currently a
> static function. Any objection to exporting it? It doesn't depend on any
> external xlog.c state.

You'll have some fun with that because most of the stuff in xlog.c
depends upon module-level state.

We should change that, but I'd like to see Florian's code go in first
then refactor things.

If you change those programs too much we'll need completely separate
programs for each release, which might be a bad thing.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Asynchronous commit documentation gap

2007-11-04 Thread Simon Riggs
On Fri, 2007-11-02 at 11:13 +0100, Florian Weimer wrote:
> The documentation doesn't really tell how to disable synchronous
> commits for a single commit.  I believe the correct command is
> 
>   SET LOCAL synchronous_commit TO OFF;
> 
> just before the COMMIT statement.

Yes, in fact anywhere within the transaction will do.
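Spelled out, the per-transaction form is (table name is illustrative):

```sql
BEGIN;
SET LOCAL synchronous_commit TO OFF;  -- reverts automatically at COMMIT/ROLLBACK
INSERT INTO audit_log (msg) VALUES ('non-critical entry');
COMMIT;  -- returns without waiting for the WAL flush
```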

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] "bad key in cancel request"

2007-11-04 Thread Neil Conway
I noticed that processCancelRequest() emits a log message at DEBUG2 when
it receives a cancel request with a bad key or for a non-existent PID.
For example,

ereport(DEBUG2,
(errmsg_internal("bad key in cancel request for process %d",
 backendPID)));

I think this ought to be logged at a higher level than DEBUG2: for one
thing, it is a potential security issue the DBA might want to be aware
of. It could also indicate a misconfigured or buggy client application.
Therefore, I think we should raise the level to WARNING, or perhaps
NOTICE. Any objections?

-Neil



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate