Re: [HACKERS] [pgsql-cluster-hackers] 3rd Cluster Hackers Summit, May 15th in Ottawa

2012-02-26 Thread Simon Riggs
On Sun, Feb 12, 2012 at 8:33 PM, Joshua Berkus j...@agliodbs.com wrote:

 = Project Reports: 5 minutes from each project
   * Hot Standby/Binary Replication
   * pgPoolII
   * PostgresXC
   * Your Project Here

I'd like some time to discuss my new project: Bi-Directional
Replication for Core. I don't have all the answers for it yet, but
expect to have something solid to discuss in May.

If I could have 90 minutes, that would be useful.

Thanks.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ISO8601 nitpicking

2012-02-26 Thread Peter Eisentraut
On fre, 2012-02-24 at 10:40 -0800, Daniel Farina wrote:
 On Fri, Feb 24, 2012 at 4:45 AM, Peter Eisentraut pete...@gmx.net wrote:
  On tor, 2012-02-23 at 23:41 -0800, Daniel Farina wrote:
  As it turns out, evidence would suggest that the ISO output in
  Postgres isn't, unless there's an ISO standard for date and time that
  it is referring to other than 8601.
 
  Yes, ISO 9075, the SQL standard.  This particular issue has been
  discussed many times; see the archives.
 
 
 I did try searching, but this did not come up quickly, except as the
 "T is not necessary", as is commonly repeated on the web.

This thread for example:
http://archives.postgresql.org/message-id/ec26f5ce-9f3b-40c9-bf23-f0c2b96e3...@gmail.com

 The manual is misleading to me on this admittedly very fine point:

Yes, that should probably be cleaned up.  I repeat my contribution to
the above thread:

So we'd have a setting called "ECMA" that's really ISO, and a
setting called "ISO" that's really SQL, and a setting called
"SQL" that's really Postgres, and a setting called "Postgres"
that's also Postgres but different.

Maybe we should just rename the settings to A, B, C, and D.
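
To make the naming tangle concrete: the only difference at issue is the separator between date and time. A small illustration outside Postgres (using Python's datetime as a stand-in; the DateStyle behavior is as described in the thread, not verified against every server version):

```python
from datetime import datetime

ts = datetime(2012, 2, 26, 14, 30, 0)

# Strict ISO 8601 puts a literal 'T' between the date and the time:
iso_8601 = ts.isoformat()          # -> '2012-02-26T14:30:00'

# What the "ISO" DateStyle actually emits follows ISO 9075 (the SQL
# standard), which uses a space instead of the 'T':
sql_style = ts.isoformat(sep=" ")  # -> '2012-02-26 14:30:00'

print(iso_8601)
print(sql_style)
```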




Re: [HACKERS] leakproof

2012-02-26 Thread Peter Eisentraut
On fre, 2012-02-24 at 23:00 -0500, Noah Misch wrote:
 I also liked Kevin's suggestion of DISCREET

That would probably create too much confusion with discrete.




Re: [HACKERS] Triggers with DO functionality

2012-02-26 Thread Peter Eisentraut
On fre, 2012-02-24 at 13:55 -0600, Kevin Grittner wrote:
  By default, a trigger function runs as the table owner, ie it's
 implicitly SEC DEF
  to the table owner.
  
 Really?  That's certainly what I would *want*, but it's not what I've
 seen. 

Yes, you're right, that was my recollection as well.  I was doubly
confused.




[HACKERS] check constraint validation takes access exclusive locks

2012-02-26 Thread Pavel Stehule
Hello

I rechecked Depesz's article -
http://www.depesz.com/2011/07/01/waiting-for-9-2-not-valid-checks/

The behavior of current HEAD is different from the behavior described
in the article.

"alter table a validate constraint a_a_check" needs an access
exclusive lock and blocks table modifications - I tested inserts.

Is this expected behavior?

session one:

postgres=# create table a(a int);
CREATE TABLE
postgres=# alter table a add check (a > 0) not valid;
ALTER TABLE
postgres=# begin;
BEGIN
postgres=# alter table a validate constraint a_a_check;
ALTER TABLE

session two:

postgres=# update a set a = 100; -- it waits to commit in session one
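
The blocking observed here is what one would expect if VALIDATE CONSTRAINT takes AccessExclusiveLock, whereas the article anticipated a weaker lock. A toy model of the relevant conflict (lock mode names as in the Postgres docs; the conflict table is deliberately reduced to the two pairs that matter here):

```python
# Minimal excerpt of the lock conflict table: does a held lock block a
# requested one?  AccessExclusiveLock conflicts with everything, including
# the RowExclusiveLock taken by INSERT/UPDATE; ShareUpdateExclusiveLock
# does not conflict with RowExclusiveLock, so writes would proceed.
CONFLICTS = {
    ("AccessExclusiveLock", "RowExclusiveLock"): True,
    ("ShareUpdateExclusiveLock", "RowExclusiveLock"): False,
}

def blocks(held, requested):
    return CONFLICTS.get((held, requested), False)

# Session one holds the table lock from VALIDATE; session two's UPDATE waits:
print(blocks("AccessExclusiveLock", "RowExclusiveLock"))  # True
```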

Regards

Pavel Stehule



Re: [HACKERS] Command Triggers, patch v11

2012-02-26 Thread Dimitri Fontaine
Thanks for your further testing!

Thom Brown t...@linux.com writes:
 Further testing reveals a problem with FTS configurations when using
 the example function provided in the docs:

Could you send me your tests so that I add them to the proper regression
test?  I've been lazy on one or two object types and obviously that's
where I have to check some more.

 Also command triggers for DROP CONVERSION aren't working.  A glance at
 pg_cmdtrigger shows that the system views the command as DROP
 CONVERSION_P.

That's easy to fix, that's a typo in gram.y.  I'm not seeing other ones
like this though.

-  | DROP CONVERSION_P   { $$ = "DROP CONVERSION_P"; }
+  | DROP CONVERSION_P   { $$ = "DROP CONVERSION"; }


 What is DROP ASSERTION?  It's showing as a valid command for a command
 trigger, but it's not documented.

It's a Not Implemented Feature for which we have the grammar support,
to be able to tick a standard-compliance checkbox, or something like
that.  Would it be better for me to remove explicit support for it in
the command triggers patch?

 I've noticed that ALTER <object> <name> OWNER TO <role> doesn't result in
 any trigger being fired except for tables.

 ALTER OPERATOR FAMILY  RENAME TO ... doesn't fire command triggers.

 ALTER OPERATOR CLASS with RENAME TO or OWNER TO doesn't fire command
 triggers, but with SET SCHEMA it does.

It seems I've forgotten to add some support here, that happens in
alter.c and is easy enough to check and complete, thanks for the
testing.

 And there's no command trigger available for ALTER VIEW.

Will add.

 I'll hold off on testing any further until a new patch is available.

That should happen soon. Ah, the joys of coding while kids are at home
thanks to school holidays. I can't count how many times I've been killed
by a captain and married to a princess while writing that patch, sorry
about those hiccups here.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



[HACKERS] WARNING: concurrent insert in progress within table resource

2012-02-26 Thread Pavel Stehule
Hello

I tested creating some larger indexes

There was a warning:

postgres=# CREATE INDEX idx_resource_name ON resource (name, tid);
WARNING:  concurrent insert in progress within table resource

I am sure that there was only one active session, so this warning is strange.


postgres=# select version();
 version
──
 PostgreSQL 9.2devel on i686-pc-linux-gnu, compiled by gcc (GCC) 4.5.1
20100924 (Red Hat 4.5.1-4), 32-bit
(1 row)

Regards

Pavel



Re: [HACKERS] Runtime SHAREDIR for testing CREATE EXTENSION

2012-02-26 Thread Peter Eisentraut
On lör, 2012-02-25 at 14:21 +0100, Christoph Berg wrote:
 Well, I'm trying to invoke the extension's "make check" target at
 extension build time. I do have a temporary installation I own
 somewhere in my $HOME, but that is still trying to find extensions in
 /usr/share/postgresql/9.1/extension/*.control, because I am using the
 system's postgresql version. The build process is not running as root,
 so I cannot do an install of the extension to its final location.
 Still it would be nice to run regression tests. All that seems to be
 missing is the ability to put
 
 extension_control_path = /home/buildd/tmp/extension
 
 into the postgresql.conf of the temporary PG installation, or some
 other way like CREATE EXTENSION foobar WITH CONTROL
 '/home/buildd/...'. 

Yeah, of course, the extension path is not related to the data
directory.  So we do need some kind of path setting, just like
dynamic_library_path.




Re: [HACKERS] leakproof

2012-02-26 Thread Peter Eisentraut
On ons, 2012-02-22 at 10:56 -0500, Andrew Dunstan wrote:
 The trouble with leakproof is that it 
 doesn't point to what it is that's not leaking, which is information 
 rather than memory, as many might imagine (and I did) without further 
 hints. I'm not sure any single English word would be as descriptive as
 I'd like. 

Well, we have RETURNS NULL ON NULL INPUT, so maybe DOES NOT LEAK
INFORMATION. ;-)




Re: [HACKERS] How to know a table has been modified?

2012-02-26 Thread Kevin Grittner
Tatsuo Ishii  wrote:
 
 For TRIGGER, I cannot think of any way. Any idea will be
 welcome.
 
It would require creating cooperating triggers in the database and
having a listener, but you might consider the
triggered_change_notifications() trigger function included in 9.2. 
It works at least as far back as 9.0; I haven't tried it any further
back.
 
-Kevin



Re: [HACKERS] COPY with hints, rebirth

2012-02-26 Thread Heikki Linnakangas

On 24.02.2012 22:55, Simon Riggs wrote:

A long time ago, in a galaxy far away, we discussed ways to speed up
data loads/COPY.
http://archives.postgresql.org/pgsql-hackers/2007-01/msg00470.php

In particular, the idea that we could mark tuples as committed while
we are still loading them, to avoid negative behaviour for the first
reader.

Simple patch to implement this is attached, together with test case.

 ...

What exactly does it do? Previously, we optimised COPY when it was
loading data into a newly created table or a freshly truncated table.
This patch extends that and actually sets the tuple header flag
HEAP_XMIN_COMMITTED during the load. Doing so is a simple 2 lines of
code. The patch also adds some tests for corner cases that would make
that action break MVCC - though those cases are minor and typical data
loads will benefit fully from this.


This doesn't work with subtransactions:

postgres=# create table a as select 1 as id;
SELECT 1
postgres=# copy a to '/tmp/a';
COPY 1
postgres=# begin;
BEGIN
postgres=# truncate a;
TRUNCATE TABLE
postgres=# savepoint sp1;
SAVEPOINT
postgres=# copy a from '/tmp/a';
COPY 1
postgres=# select * from a;
 id
----
(0 rows)

The query should return the row copied in the same subtransaction.



In the link above, Tom suggested reworking HeapTupleSatisfiesMVCC()
and adding current xid to snapshots. That is an invasive change that I
would wish to avoid at any time and explains the long delay in
tackling this. The way I've implemented it is just as a short test
during XidInMVCCSnapshot() so that we trap the case when the xid ==
xmax and so would appear to be running. This is much less invasive and
just as performant as Tom's original suggestion.


TransactionIdIsCurrentTransactionId() can be fairly expensive if you 
have a lot of subtransactions open...
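
For readers following the thread, the snapshot test being discussed can be modeled very roughly as follows (a Python sketch with illustrative names; the real logic lives in XidInMVCCSnapshot() and TransactionIdIsCurrentTransactionId() and is considerably more involved):

```python
def xid_in_mvcc_snapshot(xid, snap_xmin, snap_xmax, running_xids):
    """Return True if xid must be treated as still in progress by a snapshot."""
    if xid >= snap_xmax:
        # Anything at or beyond the snapshot's xmax had not completed when
        # the snapshot was taken -- this is the trap for xid == xmax that
        # keeps tuples pre-marked HEAP_XMIN_COMMITTED from becoming visible
        # too early.
        return True
    if xid < snap_xmin:
        return False  # completed before the snapshot's horizon
    return xid in running_xids

# A loader running as xid 105 looks in-progress to a snapshot with xmax 105:
print(xid_in_mvcc_snapshot(105, snap_xmin=100, snap_xmax=105, running_xids=set()))
```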


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] Checkpointer vs pg_stat_bgwriter

2012-02-26 Thread Magnus Hagander
Hi!

I admit to not having actually tested this since I don't have a good
cluster to test it on right now, but from what I can tell the code in
the new checkpointer process only sends statistics to the collector
once the checkpoint is finished (checkpointer.c, line 549). The 9.1
and earlier code sent this every time it entered a delay state (in
BgWriterNap() called from CheckpointWriteDelay()).

So in 9.1 and earlier we could see how a checkpoint wrote things as it
was running, but in 9.2 we'll get it all as one big block at the end
of the checkpoint - which can be a lot later in the spread case.

Am I reading the code right?

And if so, was this an intentional change, and if so, why? To me it
seems like a loss of functionality that should be fixed.
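
A toy model of the difference (Python, purely illustrative; the real delay points are the BgWriterNap()/CheckpointWriteDelay() calls named above):

```python
def run_checkpoint(nbuffers, report, incremental):
    """Write nbuffers buffers, hitting a delay point every 10 writes."""
    written = 0
    for i in range(nbuffers):
        written += 1
        if incremental and (i % 10 == 9):
            report(written)  # 9.1-style: stats flow out at each delay point
    report(written)          # 9.2-style: only this final report happens

seen = []
run_checkpoint(30, seen.append, incremental=True)
print(seen)   # [10, 20, 30, 30] -- an observer can watch progress

seen = []
run_checkpoint(30, seen.append, incremental=False)
print(seen)   # [30] -- one big block at the end of the checkpoint
```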

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Command Triggers, patch v11

2012-02-26 Thread Thom Brown
On 26 February 2012 14:12, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Thanks for your further testing!

 Thom Brown t...@linux.com writes:
 Further testing reveals a problem with FTS configurations when using
 the example function provided in the docs:

 Could you send me your tests so that I add them to the proper regression
 test?  I've been lazy on one or two object types and obviously that's
 where I have to check some more.

Which tests?  The FTS Config test was what I posted before.  I haven't
gone to any great effort to set up tests for each command.  I've just
been making them up as I go along.

 What is DROP ASSERTION?  It's showing as a valid command for a command
 trigger, but it's not documented.

 It's a Not Implemented Feature for which we have the grammar support to
 be able to fill a standard compliant checkbox, or something like that.
 It could be better for me to remove explicit support for it in the
 command triggers patch?

Well considering there are commands that exist which we don't allow
triggers on, it seems weird to support triggers on commands which
aren't implemented.  DROP ASSERTION doesn't appear anywhere else in
the documentation, so I can't think of how supporting a trigger for it
could be useful.

 I've noticed that ALTER <object> <name> OWNER TO <role> doesn't result in
 any trigger being fired except for tables.

 ALTER OPERATOR FAMILY  RENAME TO ... doesn't fire command triggers.

 ALTER OPERATOR CLASS with RENAME TO or OWNER TO doesn't fire command
 triggers, but with SET SCHEMA it does.

 It seems I've forgotten to add some support here, that happens in
 alter.c and is easy enough to check and complete, thanks for the
 testing.

So would the fix cover many cases at once?

 I'll hold off on testing any further until a new patch is available.

 That should happen soon. Ah, the joys of coding while kids are at home
 thanks to school holidays. I can't count how many times I've been killed
 by a captain and married to a princess while writing that patch, sorry
 about those hiccups here.

Being killed by a captain does make things more difficult, yes.

-- 
Thom



Re: [HACKERS] psql \i tab completion initialization problem on HEAD

2012-02-26 Thread Peter van Hardenberg
On Fri, Feb 24, 2012 at 9:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Actually, what I should have asked is "are you running Lion?".
 Because with libedit on Lion, tab completion is 100% broken, as per
 http://archives.postgresql.org/pgsql-hackers/2011-07/msg01642.php
 This is just the latest installment in a long and sad story of
 libedit being mostly not up to snuff on OS X.

 I can reproduce the behavior you mention on my own Mac, but the fact
 that it appears to work after the first time is probably just blind
 luck from happenstance locations of malloc results :-(

 As for GNU readline, I suspect you weren't actually testing it.
 Note that the thing called /usr/lib/libreadline.dylib is not GNU
 readline, it's only a symlink to libedit.


I am indeed running Lion. Thanks for helping me track down the cause.

-- 
Peter van Hardenberg
San Francisco, California
Everything was beautiful, and nothing hurt. -- Kurt Vonnegut



Re: [HACKERS] Speed dblink using alternate libpq tuple storage

2012-02-26 Thread Marko Kreen
On Fri, Feb 24, 2012 at 05:46:16PM +0200, Marko Kreen wrote:
 - rename to PQrecvRow() and additionally provide PQgetRow()

I tried it and it seems to work as an API - there is valid behaviour
for both sync and async connections.

Sync connection - PQgetRow() waits for data from network:

if (!PQsendQuery(db, q))
    die(db, "PQsendQuery");
while (1) {
    r = PQgetRow(db);
    if (!r)
        break;
    handle(r);
    PQclear(r);
}
r = PQgetResult(db);

Async connection - PQgetRow() does PQisBusy() loop internally,
but does not read from network:

on read event:
    PQconsumeInput(db);
    while (1) {
        r = PQgetRow(db);
        if (!r)
            break;
        handle(r);
        PQclear(r);
    }
    if (!PQisBusy(db))
        r = PQgetResult(db);
    else
        waitForMoredata();


As it seems to simplify life for quite a few potential users,
it seems worth including in libpq properly.

The attached patch is on top of v20120223 of the row-processor
patch.  The only change in general code is allowing an
early exit for a synchronous connection, as we now have a
valid use-case for it.

-- 
marko

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 0087b43..b2779a8 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -4115,6 +4115,111 @@ int PQflush(PGconn *conn);
read-ready and then read the response as described above.
   </para>
 
+  <para>
+   Above-mentioned functions always wait until the full resultset has
+   arrived before making row data available as a PGresult.  Sometimes it's
+   more useful to process rows as soon as they arrive from the network.
+   For that, the following functions can be used:
+   <variablelist>
+    <varlistentry id="libpq-pqgetrow">
+     <term>
+      <function>PQgetRow</function>
+      <indexterm>
+       <primary>PQgetRow</primary>
+      </indexterm>
+     </term>
+
+     <listitem>
+      <para>
+       Waits for the next row from a prior
+       <function>PQsendQuery</function>,
+       <function>PQsendQueryParams</function>, or
+       <function>PQsendQueryPrepared</function> call, and returns it.
+       A null pointer is returned when no more rows are available or
+       some error happened.
+<synopsis>
+PGresult *PQgetRow(PGconn *conn);
+</synopsis>
+      </para>
+
+      <para>
+       If this function returns a non-NULL result, it is a
+       <structname>PGresult</structname> that contains exactly 1 row.
+       It needs to be freed later with <function>PQclear</function>.
+      </para>
+      <para>
+       On a synchronous connection, the function will wait for more
+       data from the network until the whole resultset is done.  So it
+       returns NULL only if the resultset has been completely received or
+       some error happened.  In both cases, call
+       <function>PQgetResult</function> next to get the final status.
+      </para>
+
+      <para>
+       On an asynchronous connection the function does not read more data
+       from the network.  So after a NULL, call <function>PQisBusy</function>
+       to see whether the final <structname>PGresult</structname> is available
+       or more data needs to be read from the network via
+       <function>PQconsumeInput</function>.  Do not call
+       <function>PQisBusy</function> before <function>PQgetRow</function>
+       has returned NULL, as <function>PQisBusy</function> will parse
+       any available rows and add them to the main
+       <structname>PGresult</structname> that will be returned later by
+       <function>PQgetResult</function>.
+      </para>
+
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-pqrecvrow">
+     <term>
+      <function>PQrecvRow</function>
+      <indexterm>
+       <primary>PQrecvRow</primary>
+      </indexterm>
+     </term>
+
+     <listitem>
+      <para>
+       Gets row data without constructing a PGresult for it.  This is the
+       underlying function for <function>PQgetRow</function>.
+<synopsis>
+int PQrecvRow(PGconn *conn, PGresult **hdr_p, PGrowValue **row_p);
+</synopsis>
+      </para>
+
+      <para>
+       It returns row data as pointers into the network buffer.
+       All structures are owned by <application>libpq</application>'s
+       <structname>PGconn</structname> and must not be freed or stored
+       by the user.  Instead, row data should be copied to user structures
+       before any <application>libpq</application> result-processing
+       function is called.
+      </para>
+      <para>
+       It returns 1 when row data is available.
+       Argument <parameter>hdr_p</parameter> will contain a pointer
+       to an empty <structname>PGresult</structname> that describes
+       the row contents.  Actual data is in <parameter>row_p</parameter>.
+       For a description of the structure <structname>PGrowValue</structname>
+       see <xref linkend="libpq-altrowprocessor">.
+      </para>
+      <para>It returns 0 when no more rows are available.  On a synchronous
+       connection, it means the resultset has fully arrived.  Call
+       <function>PQgetResult</function> to get the final 

Re: [HACKERS] CLOG contention, part 2

2012-02-26 Thread Robert Haas
On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas robertmh...@gmail.com wrote:
 Given that, I obviously cannot test this at this point,

 Patch with minor corrections attached here for further review.

All right, I will set up some benchmarks with this version, and also
review the code.

As a preliminary comment, Tom recently felt that it was useful to
reduce the minimum number of CLOG buffers from 8 to 4, to benefit very
small installations.  So I'm guessing he'll object to an
across-the-board doubling of the amount of memory being used, since
that would effectively undo that change.  It also makes it a bit hard
to compare apples to apples, since of course we expect that by using
more memory we can reduce the amount of CLOG contention.  I think it's
really only meaningful to compare contention between implementations
that use approximately the same total amount of memory.  It's true
that doubling the maximum number of buffers from 32 to 64 straight up
does degrade performance, but I believe that's because the buffer
lookup algorithm is just straight linear search, not because we can't
in general benefit from more buffers.

 pgbench loads all the data in one go, then pretends the data got there
 one transaction at a time. So pgbench with no mods is actually the
 theoretically most unreal imaginable. You have to run pgbench for 1
 million transactions before you even theoretically show any gain from
 this patch, and it would need to be a long test indeed before the
 averaged effect of the patch was large enough to avoid the zero
 contribution from the first million transactions.

Depends on the scale factor.  At scale factor 100, the first million
transactions figure to have replaced a sizeable percentage of the rows
already.  But I can use your other patch to set up the run.  Maybe
scale factor 300 would be good?

 However, there is a potential fly in the ointment: in other cases in
 which we've reduced contention at the LWLock layer, we've ended up
 with very nasty contention at the spinlock layer that can sometimes
 eat more CPU time than the LWLock contention did.   In that light, it
 strikes me that it would be nice to be able to partition the
 contention N ways rather than just 2 ways.  I think we could do that
 as follows.  Instead of having one control lock per SLRU, have N
 locks, where N is probably a power of 2.  Divide the buffer pool for
 the SLRU N ways, and decree that each slice of the buffer pool is
 controlled by one of the N locks.  Route all requests for a page P to
 slice P mod N.  Unlike this approach, that wouldn't completely
 eliminate contention at the LWLock level, but it would reduce it
 proportional to the number of partitions, and it would reduce spinlock
 contention according to the number of partitions as well.  A down side
 is that you'll need more buffers to get the same hit rate, but this
 proposal has the same problem: it doubles the amount of memory
 allocated for CLOG.  Of course, this approach is all vaporware right
 now, so it's anybody's guess whether it would be better than this if
 we had code for it.  I'm just throwing it out there.

 We've already discussed that, and my patch for that has already been
 ruled out by us for this CF.

I'm not aware that anybody's coded up the approach I'm talking about.
You've proposed splitting this up a couple of ways, but AFAICT they
all boil down to splitting up CLOG into multiple SLRUs, whereas what
I'm talking about is to have just a single SLRU, but with multiple
control locks.  I feel that approach is a bit more flexible, because
it could be applied to any SLRU, not just CLOG.  But I haven't coded
it, let alone tested it, so I might be all wet.
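
The single-SLRU, N-control-locks layout described above can be sketched as a toy model (Python; names and sizes are illustrative, not backend code):

```python
import threading

N = 4  # number of lock partitions; a power of 2, per the proposal

class PartitionedSlru:
    """One buffer pool split N ways; each slice has its own control lock."""
    def __init__(self):
        self.locks = [threading.Lock() for _ in range(N)]
        self.slices = [{} for _ in range(N)]  # page number -> buffer

    def read_page(self, page):
        part = page % N         # route all requests for page P to slice P mod N
        with self.locks[part]:  # only ~1/N of the traffic contends on each lock
            return self.slices[part].setdefault(page, bytearray(8192))

pool = PartitionedSlru()
buf = pool.read_page(7)         # page 7 lands in partition 7 % 4 == 3
```

As the text notes, the trade-off is that each slice caches only its own pages, so more total buffers are needed for the same hit rate.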

 I agree with you that we should further analyse CLOG contention in
 following releases but that is not an argument against making this
 change now.

No, but the fact that this approach is completely untested, or at
least that no test results have been posted, is an argument against
it.  Assuming this version compiles and works I'll try to see what I
can do about bridging that gap.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Command Triggers, patch v11

2012-02-26 Thread Thom Brown
On 26 February 2012 19:49, Thom Brown t...@linux.com wrote:
 On 26 February 2012 14:12, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Thanks for your further testing!

 Thom Brown t...@linux.com writes:
 Further testing reveals a problem with FTS configurations when using
 the example function provided in the docs:

 Could you send me your tests so that I add them to the proper regression
 test?  I've been lazy on one or two object types and obviously that's
 where I have to check some more.

 Which tests?  The FTS Config test was what I posted before.  I haven't
 gone to any great effort to set up tests for each command.  I've just
 been making them up as I go along.

 What is DROP ASSERTION?  It's showing as a valid command for a command
 trigger, but it's not documented.

 It's a Not Implemented Feature for which we have the grammar support to
 be able to fill a standard compliant checkbox, or something like that.
 It could be better for me to remove explicit support for it in the
 command triggers patch?

 Well considering there are commands that exist which we don't allow
 triggers on, it seems weird to support triggers on commands which
 aren't implemented.  DROP ASSERTION doesn't appear anywhere else in
 the documentation, so I can't think of how supporting a trigger for it
 could be useful.

 I've noticed that ALTER <object> <name> OWNER TO <role> doesn't result in
 any trigger being fired except for tables.

 ALTER OPERATOR FAMILY  RENAME TO ... doesn't fire command triggers.

 ALTER OPERATOR CLASS with RENAME TO or OWNER TO doesn't fire command
 triggers, but with SET SCHEMA it does.

 It seems I've forgotten to add some support here, that happens in
 alter.c and is easy enough to check and complete, thanks for the
 testing.

 So would the fix cover many cases at once?

 I'll hold off on testing any further until a new patch is available.

 That should happen soon. Ah, the joys of coding while kids are at home
 thanks to school holidays. I can't count how many times I've been killed
 by a captain and married to a princess while writing that patch, sorry
 about those hiccups here.

 Being killed by a captain does make things more difficult, yes.

I've got a question regarding the function signatures required for
command triggers, and apologies if it's already been discussed to
death (I didn't see all the original conversations around this).
These differ from regular trigger functions which don't require any
arguments, and instead use special variables.  Why aren't we doing the
same for command triggers?  So instead of having the parameters
tg_when, cmd_tag, objectid, schemaname and objectname, using pl/pgsql
as an example, we'd have the variables TG_WHEN (already exists), TG_OP
(already exists and equivalent to cmd_tag), TG_RELID (already exists,
although maybe not directly equivalent), TG_REL_SCHEMA (doesn't exist
but would replace schemaname) and TG_RELNAME (this is actually
deprecated but could be re-used for this purpose).

Advantages of implementing it like this are that there's consistency in
the trigger system, it's easier as no function parameters are required,
and any future options you may wish to add won't break functions from
previous versions, meaning more room for adding stuff later on.

Disadvantages are that there's more maintenance overhead for
supporting multiple languages using special variables.

-- 
Thom



Re: [HACKERS] leakproof

2012-02-26 Thread A.M.

On Feb 26, 2012, at 10:39 AM, Peter Eisentraut wrote:

 On ons, 2012-02-22 at 10:56 -0500, Andrew Dunstan wrote:
 The trouble with leakproof is that it 
 doesn't point to what it is that's not leaking, which is information 
 rather than memory, as many might imagine (and I did) without further 
 hints. I'm not sure any single English word would be as descriptive as
 I'd like. 
 
 Well, we have RETURNS NULL ON NULL INPUT, so maybe DOES NOT LEAK
 INFORMATION. ;-)

If you are willing to go full length, then the computer science term is 
"referential transparency", no? 

http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)

So a function could be described as REFERENTIALLY TRANSPARENT.

Cheers,
M


Re: [HACKERS] Misleading CREATE TABLE error

2012-02-26 Thread Robert Haas
On Fri, Feb 24, 2012 at 4:03 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tis, 2011-11-29 at 06:33 +0200, Peter Eisentraut wrote:
   I'm not trying to inherit a relation, I'm trying to base a table on
   it.  As it happens, cows is a foreign table, which *is* a table,
   just not a regular table.  It might be useful to add support to clone
   foreign tables into regular tables, the use-case being that you may
   wish to import all the data locally into a table of the same
   structure.  But the gripe here is the suggestion that the relation
   would have been inherited, which would actually be achieved using
   INHERITS.
 
  Interesting.  I agree that there's no obvious reason why that
  shouldn't be allowed to work.  Could be useful with views, too.

 I recently came across a situation where LIKE with a composite type
 might have been useful.

 This was the last piece of the puzzle that was missing in this area, for
 which I have now developed a fix.  The problem was that
 parserOpenTable() rejected composite types.  But the only thing it was
 really adding over using relation_open() directly was nicer error
 pointers.  So I removed a few levels of indirection there, and
 integrated the error pointer support directly into
 transformTableLikeClause().  This also has the advantage that the "...
 is not a table, view, or ..." message now has error pointer support.

Looks reasonable.  The only thing you didn't copy from
parserOpenTable() is the special error-handling for CTEs, but AFAICS
that's irrelevant here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] leakproof

2012-02-26 Thread Robert Haas
On Sun, Feb 26, 2012 at 6:44 PM, A.M. age...@themactionfaction.com wrote:
 If you are willing to go full length, then the computer science term is 
 referential transparency, no?

 http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)

 So a function could be described as REFERENTIALLY TRANSPARENT.

Hmm, I think that's very close to what we're looking for.  It might be
slightly stronger, in that it could conceivably be OK for a leakproof
function to read, but not modify, global variables... but I can't
think of any particular reason why we'd want to allow that case.
OTOH, it seems to imply that referential transparency is a property of
expressions built from pure functions, and since what we're labeling
here are functions, that brings us right back to PURE.

I'm thinking we should go with PURE.  I still can't think of any real
use case for pushing down anything other than an immutable function,
and I think that immutable + no-side-effects = pure.
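
To illustrate what is at stake in the naming debate (a sketch, not settled
syntax -- the keyword was still under discussion here, and 9.2 ultimately
kept the spelling LEAKPROOF): a function whose error messages echo its
argument can leak values from rows the caller is not entitled to see, so the
planner may only push such a qualifier below a security-barrier view when
the function promises not to do that.

```sql
-- A "leaky" function: the error message reveals the datum, so pushing
-- this qual below a security_barrier view could expose hidden rows.
CREATE FUNCTION leaky_check(t text) RETURNS boolean AS $$
BEGIN
    IF length(t) > 100 THEN
        RAISE EXCEPTION 'bad value: %', t;   -- leaks the input value
    END IF;
    RETURN true;
END;
$$ LANGUAGE plpgsql;

-- The property under discussion: no side effects, and no data-dependent
-- errors or messages, so it is safe to evaluate against unseen rows.
CREATE FUNCTION quiet_check(t text) RETURNS boolean
    AS $$ SELECT length($1) <= 100 $$
    LANGUAGE sql IMMUTABLE LEAKPROOF;
```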

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] leakproof

2012-02-26 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sun, Feb 26, 2012 at 6:44 PM, A.M. age...@themactionfaction.com wrote:
 http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)
 So a function could be described as REFERENTIALLY TRANSPARENT.

 Hmm, I think that's very close to what we're looking for.  It might be
 slightly stronger, in that it could conceivably be OK for a leakproof
 function to read, but not modify, global variables... but I can't
 think of any particular reason why we'd want to allow that case.
 OTOH, it seems to imply that referential transparency is a property of
 expressions built from pure functions, and since what we're labeling
 here are functions, that brings us right back to PURE.

Yeah.  Comparing that page to the one on pure functions, there doesn't
seem to be any difference that is relevant to what we're concerned
about.  And neither page directly addresses the question of error
conditions, though if you hold your head at the proper angle you might
argue that that's implicit in the no side effect rule.  But I think
we're going to have to clearly document that requirement no matter
what term we choose.

 I'm thinking we should go with PURE.

Works for me.

regards, tom lane



Re: [HACKERS] leakproof

2012-02-26 Thread Andrew Dunstan



On 02/26/2012 08:23 PM, Tom Lane wrote:
> Robert Haas robertmh...@gmail.com writes:
>> On Sun, Feb 26, 2012 at 6:44 PM, A.M. age...@themactionfaction.com wrote:
>>> http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)
>>> So a function could be described as REFERENTIALLY TRANSPARENT.
>
>> Hmm, I think that's very close to what we're looking for.  It might be
>> slightly stronger, in that it could conceivably be OK for a leakproof
>> function to read, but not modify, global variables... but I can't
>> think of any particular reason why we'd want to allow that case.
>> OTOH, it seems to imply that referential transparency is a property of
>> expressions built from pure functions, and since what we're labeling
>> here are functions, that brings us right back to PURE.
>
> Yeah.  Comparing that page to the one on pure functions, there doesn't
> seem to be any difference that is relevant to what we're concerned
> about.  And neither page directly addresses the question of error
> conditions, though if you hold your head at the proper angle you might
> argue that that's implicit in the no side effects rule.  But I think
> we're going to have to clearly document that requirement no matter
> what term we choose.
>
>> I'm thinking we should go with PURE.
>
> Works for me.

Not for me. My objection is the same as when I started this thread - 
that the term doesn't convey to someone just looking at it the salient 
point about the feature, any more than LEAKPROOF does. SILENT strikes me 
as much closer to what is actually described.


cheers

andrew



Re: [HACKERS] foreign key locks, 2nd attempt

2012-02-26 Thread Robert Haas
On Thu, Feb 23, 2012 at 11:01 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 This
 seems like a horrid mess that's going to be unsustainable both from a
 complexity and a performance standpoint.  The only reason multixacts
 were tolerable at all was that they had only one semantics.  Changing
 it so that maybe a multixact represents an actual updater and maybe
 it doesn't is not sane.

 As far as complexity, yeah, it's a lot more complex now -- no question
 about that.

 Regarding performance, the good thing about this patch is that if you
 have an operation that used to block, it might now not block.  So maybe
 a multixact-related operation is a bit slower than before, but if it
 allows you to continue operating rather than sit waiting until some
 other transaction releases you, it's much better.

That's probably true, although there is some deferred cost that is
hard to account for.  You might not block immediately, but then later
somebody might block either because the mxact SLRU now needs fsyncs or
because they've got to decode an mxid long after the relevant segment
has been evicted from the SLRU buffers.  In general, it's hard to
bound that latter cost, because you only avoid blocking once (when the
initial update happens) but you might pay the extra cost of decoding
the mxid as many times as the row is read, which could be arbitrarily
many.  How much of a problem that is in practice, I'm not completely
sure, but it has worried me before and it still does.  In the worst
case scenario, a handful of frequently-accessed rows with MXIDs all of
whose members are dead except for the UPDATE they contain could result
in continual SLRU cache-thrashing.

From a performance standpoint, we really need to think not only about
the cases where the patch wins, but also, and maybe more importantly,
the cases where it loses.  There are some cases where the current
mechanism (taking SHARE locks for foreign keys), is adequate.  In
particular, it's adequate whenever the parent table is not updated at
all, or only very lightly.  I believe that those people will pay
somewhat more with this patch, and especially in any case where
backends end up waiting for fsyncs in order to create new mxids, but
also just because I think this patch will have the effect of
increasing the space consumed by each individual mxid, which imposes a
distributed cost of its own.

I think we should avoid having a theoretical argument about how
serious these problems are; instead, you should try to construct
somewhat-realistic worst case scenarios and benchmark them.  Tom's
complaint about code complexity is basically a question of opinion, so
I don't know how to evaluate that objectively, but performance is
something we can measure.  We might still disagree on the
interpretation of the results, but I still think having some real
numbers to talk about based on carefully-thought-out test cases would
advance the debate.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Runtime SHAREDIR for testing CREATE EXTENSION

2012-02-26 Thread Robert Haas
On Sun, Feb 26, 2012 at 10:36 AM, Peter Eisentraut pete...@gmx.net wrote:
 On lör, 2012-02-25 at 14:21 +0100, Christoph Berg wrote:
 Well, I'm trying to invoke the extension's make check target at
 extension build time. I do have a temporary installation I own
 somewhere in my $HOME, but that is still trying to find extensions in
 /usr/share/postgresql/9.1/extension/*.control, because I am using the
 system's postgresql version. The build process is not running as root,
 so I cannot do an install of the extension to its final location.
 Still it would be nice to run regression tests. All that seems to be
 missing is the ability to put

 extension_control_path = /home/buildd/tmp/extension

 into the postgresql.conf of the temporary PG installation, or some
 other way like CREATE EXTENSION foobar WITH CONTROL
 '/home/buildd/...'.

 Yeah, of course, the extension path is not related to the data
 directory.  So we do need some kind of path setting, just like
 dynamic_library_path.

That logic seems sound to me, so +1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Memory usage during sorting

2012-02-26 Thread Robert Haas
On Sat, Feb 25, 2012 at 4:31 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 I'm not sure about the conclusion, but given this discussion, I'm
 inclined to mark this Returned with Feedback.

 OK, thanks.  Does anyone have additional feedback on how tightly we
 wish to manage memory usage?  Is trying to make us use as much memory
 as we are allowed to without going over a worthwhile endeavor at all,
 or is it just academic nitpicking?

I'm not sure, either.  It strikes me that, in general, it's hard to
avoid a little bit of work_mem overrun, since we can't know whether
the next tuple will fit until we've read it, and then if it turns out
to be big, well, the best thing we can do is free it, but perhaps
that's closing the barn door after the horse has gotten out already.

Having recently spent quite a bit of time looking at tuplesort.c as a
result of Peter Geoghegan's work and some other concerns, I'm inclined
to think that it needs more than minor surgery.  That file is peppered
with numerous references to Knuth which serve the dual function of
impressing the reader with the idea that the code must be very good
(since Knuth is a smart guy) and rendering it almost completely
impenetrable (since the design is justified by reference to a textbook
most of us probably do not have copies of).

A quick Google search for external sorting algorithms suggests that the
typical way of doing an external sort is to read data until you fill
your in-memory buffer, quicksort it, and dump it out as a run.  Repeat
until end-of-data; then, merge the runs (either in a single pass, or
if there are too many, in multiple passes).  I'm not sure whether that
would be better than what we're doing now, but there seem to be enough
other people doing it that we might want to try it out.  Our current
algorithm is to build a heap, bounded in size by work_mem, and dribble
tuples in and out, but maintaining that heap is pretty expensive;
there's a reason people use quicksort rather than heapsort for
in-memory sorting.  As a desirable side effect, I think it would mean
that we could dispense with retail palloc and pfree altogether.  We
could just allocate a big chunk of memory, copy tuples into it until
it's full, using a pointer to keep track of the next unused byte, and
then, after writing the run, reset the allocation pointer back to the
beginning of the buffer.  That would not only avoid the cost of going
through palloc/pfree, but also the memory overhead imposed by
bookkeeping and power-of-two rounding.
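
The run-building scheme described above can be sketched in a few lines
(Python purely for illustration; the real implementation would live in
tuplesort.c and write runs to tape files rather than keeping them in lists):

```python
import heapq
from typing import Iterator, List

def external_sort(items: Iterator[int], buffer_size: int) -> List[int]:
    """Sort more data than fits in memory: fill a fixed-size buffer,
    sort it (standing in for quicksort), emit it as a run, repeat until
    end-of-data, then merge all the runs in a single pass."""
    runs = []
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) >= buffer_size:   # buffer full: sort and dump a run
            buf.sort()
            runs.append(buf)
            buf = []
    if buf:                           # final partial run
        buf.sort()
        runs.append(buf)
    # Merge all runs; heapq.merge streams them lazily.  With too many
    # runs to merge at once, this would become multiple merge passes.
    return list(heapq.merge(*runs))
```

Note that each run here is exactly buffer_size tuples long, which is the
trade-off Tom raises downthread against the current heap-based approach.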

If we do want to stick with the current algorithm, there seem to be
some newer techniques for cutting down on the heap maintenance
overhead.  Heikki's been investigating that a bit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] FDW system columns

2012-02-26 Thread Robert Haas
On Sat, Feb 25, 2012 at 3:56 PM, Thom Brown t...@linux.com wrote:
 If there seems to be a consensus on removing system columns from foreign
 tables, I'd like to work on this issue.  Attached is a halfway patch,
 and ISTM there is no problem so far.


 I can say that at least PgAdmin doesn't use these columns.

 So we still have all of these columns for foreign tables.  I've tested
 Hanada-san's patch and it removes all of the system columns.  Could we
 consider applying it, or has a use-case for them since been
 discovered?

Not to my knowledge, but Hanada-san described his patch as a halfway
patch, implying that it wasn't done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Website stylesheet for local docs

2012-02-26 Thread Robert Haas
On Sat, Feb 25, 2012 at 7:54 AM, Magnus Hagander mag...@hagander.net wrote:
 I've asked for this a few times before, but it seems others aren't as
 keen on it as me :-) Personally, I find the docs easier to read when
 formatted with the new website styles that Thom put together, and I
 also like to see things the way they're going to look when they go up
 there.

Agreed.

 Attached patch makes it possible to say make STYLE=website for the
 docs, which will then simply replace the stylesheet reference with one
 that goes to fetch docs.css on the website.

Wouldn't it be better to include the stylesheet in our tree, if we're
going to depend on it?

 I'm not suggesting we
 change the default or anything, just making it reasonably easy to get
 it done for one-off builds.

Why not change the default?  Does anyone really prefer the bare bones
doc output?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Memory usage during sorting

2012-02-26 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 A quick Google search for external sorting algorithms suggests that the
 typical way of doing an external sort is to read data until you fill
 your in-memory buffer, quicksort it, and dump it out as a run.  Repeat
 until end-of-data; then, merge the runs (either in a single pass, or
 if there are too many, in multiple passes).  I'm not sure whether that
 would be better than what we're doing now, but there seem to be enough
 other people doing it that we might want to try it out.  Our current
 algorithm is to build a heap, bounded in size by work_mem, and dribble
 tuples in and out, but maintaining that heap is pretty expensive;
 there's a reason people use quicksort rather than heapsort for
 in-memory sorting.

Well, the reason the heapsort approach is desirable here is that you end
up with about half as many runs to merge, because the typical run length
is significantly more than what will fit in work_mem.  Don't get too
excited about micro-optimization if you haven't understood the larger
context.
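
The property Tom is referring to is the classic replacement-selection
result: by keeping a heap and emitting the smallest element that can still
extend the current run, runs average about twice the memory size on random
input (and already-sorted input yields a single run).  A sketch, again in
Python for illustration only:

```python
import heapq

def replacement_selection(items, heap_size):
    """Produce sorted runs via replacement selection.  An incoming value
    >= the last value emitted can join the current run; smaller values
    are held back for the next run, so total memory stays bounded by
    heap_size while runs typically grow to about 2x heap_size."""
    it = iter(items)
    heap = []
    for _ in range(heap_size):        # initial fill
        try:
            heap.append(next(it))
        except StopIteration:
            break
    heapq.heapify(heap)

    runs = []
    current = []
    deferred = []                     # values too small for the current run
    while heap:
        smallest = heapq.heappop(heap)
        current.append(smallest)
        try:
            nxt = next(it)
            if nxt >= smallest:
                heapq.heappush(heap, nxt)   # can still join this run
            else:
                deferred.append(nxt)        # must wait for the next run
        except StopIteration:
            pass
        if not heap:                  # current run exhausted; start the next
            runs.append(current)
            current = []
            heap = deferred
            heapq.heapify(heap)
            deferred = []
    if current:
        runs.append(current)
    return runs
```

Since heap and deferred together never exceed heap_size entries, this keeps
the work_mem bound while producing longer runs than the quicksort scheme.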

regards, tom lane



Re: [HACKERS] pgstat documentation tables

2012-02-26 Thread Robert Haas
On Sat, Feb 25, 2012 at 9:33 AM, Magnus Hagander mag...@hagander.net wrote:
 On Mon, Jan 16, 2012 at 02:03, Greg Smith g...@2ndquadrant.com wrote:
 On 01/15/2012 12:20 PM, Tom Lane wrote:

 Please follow the style already used for system catalogs; ie I think
 there should be a summary table with one entry per view, and then a
 separate description and table-of-columns for each view.


 Yes, that's a perfect precedent.  I think the easiest path forward here is
 to tweak the updated pg_stat_activity documentation, since that's being
 refactoring first anyway.  That can be reformatted until it looks just like
 the system catalog documentation.  And then once that's done, the rest of
 them can be converted over to follow the same style.  I'd be willing to work
 on doing that in a way that improves what is documented, too.  The
 difficulty of working with the existing tables has been the deterrent for
 improving that section to me.

 I've applied a patch that does this now. Hopefully, I didn't create
 too many spelling errors or such :-)

 I also applied a separate patch that folded the list of functions into
 the list of views, since that's where they are called, as a way to
 reduce duplicate documentation. I did it as a separate patch to make
 it easier to back out if people think that was a bad idea...

I think it's a little awkward this way; maybe it would be better as a
separate table column.  Or maybe it was better the way it was; I'm not
sure.  Or maybe we could have a separate table that just gives the
equivalences between stats table-column pairs and functions.  Of those
ideas, I think I like separate table in the same column the best.

Also, I wonder if we should promote section 27.2.2.1, "Other Statistics
Functions", to 27.2.3.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-02-26 Thread Robert Haas
On Fri, Feb 24, 2012 at 9:43 AM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 Tom's example does not seem to be problematic to me - the cast
 *should* blame the 42 const token, as the cast doesn't work as a
 result of its representation, which is in point of fact why the core
 system blames the Const node and not the coercion one.

I think I agree with Tom's position upthread: blaming the coercion seems to
me to make more sense.  But if that's what we're trying to do, then
why does parse_coerce() say this?

/*
 * Set up to point at the constant's text if the input routine throws
 * an error.
 */

/me is confused.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Initial 9.2 pgbench write results

2012-02-26 Thread Robert Haas
On Fri, Feb 24, 2012 at 5:35 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On Thu, Feb 23, 2012 at 11:59 PM, Robert Haas robertmh...@gmail.com wrote:
 this doesn't feel like the right time to embark on a bunch of new
 engineering projects.

 IMHO this is exactly the right time to do full system tuning. Only
 when we have major projects committed can we move towards measuring
 things and correcting deficiencies.

Ideally we should measure things as we do them.  Of course there will
be cases that we fail to test which slip through the cracks, as Greg
is now finding, and I agree we should try to fix any problems that we
turn up during testing.  But, as I said before, so far Greg hasn't
turned up anything that can't be fixed by adjusting settings, so I
don't see a compelling case for change on that basis.

As a side point, there's no obvious reason why the problems Greg is
identifying here couldn't have been identified before committing the
background writer/checkpointer split.  The fact that we didn't find
them then suggests to me that we need to be more not less cautious in
making further changes in this area.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-02-26 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I think I agree with Tom's position upthread: blaming the coercion seems to
 me to make more sense.  But if that's what we're trying to do, then
 why does parse_coerce() say this?

 /*
  * Set up to point at the constant's text if the input routine throws
  * an error.
  */

 /me is confused.

There are two cases that are fundamentally different in the eyes of the
system:

'literal string'::typename defines a constant of the named type.
The string is fed to the type's input routine de novo, that is, it never
really had any other type.  (Under the hood, it had type UNKNOWN for a
short time, but that's an implementation detail.)  In this situation it
seems appropriate to point at the text string if the input routine
doesn't like it, because it is the input string and nothing else that is
wrong.

On the other hand, when you cast something that already had a known type
to some other type, any failure seems reasonable to blame on the cast
operator.

So in these terms there's a very real difference between what
'42'::bigint means and what 42::bigint means --- the latter implies
forming an int4 constant and then converting it to int8.
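
A concrete illustration of the distinction (behavior as described above;
exact error wording and pointer placement may vary by version):

```sql
-- The string is fed straight to bigint's input routine; a failure
-- would point at the literal itself:
SELECT '42'::bigint;           -- one constant, type bigint from the start

-- Here 42 is first an int4 constant, then converted; a failure would
-- be blamed on the cast:
SELECT 42::bigint;             -- int4 constant, then int4 -> int8 coercion

-- Equivalent spellings of the first form, with different leftmost tokens:
SELECT bigint '42';
SELECT CAST('42' AS bigint);
```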

I think that what Peter is on about in
http://archives.postgresql.org/pgsql-hackers/2012-02/msg01152.php
is the question of what location to use for the *result* of
'literal string'::typename, assuming that the type's input function
doesn't complain.  Generally we consider that we should use the
leftmost token's location for the location of any expression composed
of more than one input token.  This is of course the same place for
'literal string'::typename, but not for the alternate syntaxes
typename 'literal string' and cast('literal string' as typename).
I'm not terribly impressed by the proposal to put in an arbitrary
exception to that general rule for the convenience of this patch.

Especially not when the only reason it's needed is that Peter is
doing the fingerprinting at what is IMO the wrong place anyway.
If he were working on the raw grammar output it wouldn't matter
what parse_coerce chooses to do afterwards.

regards, tom lane



Re: [HACKERS] Website stylesheet for local docs

2012-02-26 Thread Magnus Hagander
On Mon, Feb 27, 2012 at 04:37, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Feb 25, 2012 at 7:54 AM, Magnus Hagander mag...@hagander.net wrote:
 I've asked for this a few times before, but it seems others aren't as
 keen on it as me :-) Personally, I find the docs easier to read when
 formatted with the new website styles that Thom put together, and I
 also like to see things the way they're going to look when they go up
 there.

 Agreed.

 Attached patch makes it possible to say make STYLE=website for the
 docs, which will then simply replace the stylesheet reference with one
 that goes to fetch docs.css on the website.

 Wouldn't it be better to include the stylesheet in our tree, if we're
 going to depend on it?

Probably; I just took the easiest route, and that way it gets updated
automatically.  It was meant as an optional feature, and that stylesheet
depends on other stylesheets, which in turn depend on images etc., with
some fairly fixed paths in them...


 I'm not suggesting we
 change the default or anything, just making it reasonably easy to get
 it done for one-off builds.

 Why not change the default?  Does anyone really prefer the bare bones
 doc output?

Yes, Peter made a point about preferring that back when we changed the
developer docs to be on the main website (he felt it got worse, but at
least he could work on his local build).

But it would be easy enough to flip the switch and instead have a make
STYLE=light or something like that...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
