Re: [HACKERS] Crash dumps

2011-07-07 Thread Radosław Smogura
Craig Ringer cr...@postnewspapers.com.au Thursday 07 of July 2011 01:05:48
 On 6/07/2011 11:00 PM, Radosław Smogura wrote:
  I think IPC for fast shutdown of all backends and waiting for report
  processing is quite enough.
 
 How do you propose to make that reliable, though?
 
 --
 Craig Ringer
 
 POST Newspapers
 276 Onslow Rd, Shenton Park
 Ph: 08 9381 3088 Fax: 08 9388 2258
 ABN: 50 008 917 717
 http://www.postnewspapers.com.au/

I want to add an IPC layer to PostgreSQL; a few approaches seem worth considering:
1. System IPC
2. Urgent data on a socket
3. Sockets (at least descriptors) + threads
4. Not portable: fork() in the segfault handler (I think the forked process
would start in the segfault handler too).

I actually think option 3, sockets (on Linux, pipes) + threads, will be the best
and most portable. For each backend, PostgreSQL will open two channels, urgent and
normal; for each channel a thread will be spawned that just waits for data. A
backend will not start if it cannot connect to the postmaster. Some security
issues must be accounted for when opening a plain socket.

In the context of a crash, the segfaulting backend sends information on the
urgent channel, and the postmaster kills all backends except the sender, giving
it time to work inside the segfault handler.

Normal channels may be used for scheduling some async operations, like reading
the next n blocks when a sequential scan starts.

By the way, getting reports on segfault isn't something unusual; your favorite
software, the Java(tm) Virtual Machine, does it.

Regards,
Radosław Smogura




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] dropping table in testcase alter_table.sql

2011-07-07 Thread Ashutosh Bapat
Hi,
I noticed that the test alter_table.sql creates two tables, tab1 and tab2,
and does not drop them. Any test which follows this one and tries to
create tables with the names tab1 and tab2 will fail (unless it drops those
tables first, but even that may not work, since tab2.y depends upon tab1 in
alter_table.sql).

PFA a patch which drops these two tables from alter_table.sql, with the
corresponding expected-output change. The regression tests run clean with this
patch.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 9ab84f9..4f626dd 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -1503,20 +1503,22 @@ select * from another;
  two more   | 20
  three more | 30
 (3 rows)
 
 drop table another;
 -- table's row type
 create table tab1 (a int, b text);
 create table tab2 (x int, y tab1);
 alter table tab1 alter column b type varchar; -- fails
 ERROR:  cannot alter table tab1 because column tab2.y uses its row type
+drop table tab2;
+drop table tab1;
 -- disallow recursive containment of row types
 create temp table recur1 (f1 int);
 alter table recur1 add column f2 recur1; -- fails
 ERROR:  composite type recur1 cannot be made a member of itself
 alter table recur1 add column f2 recur1[]; -- fails
 ERROR:  composite type recur1 cannot be made a member of itself
 create domain array_of_recur1 as recur1[];
 alter table recur1 add column f2 array_of_recur1; -- fails
 ERROR:  composite type recur1 cannot be made a member of itself
 create temp table recur2 (f1 int, f2 recur1);
diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql
index b5d76ea..9d5ec63 100644
--- a/src/test/regress/sql/alter_table.sql
+++ b/src/test/regress/sql/alter_table.sql
@@ -1116,20 +1116,22 @@ alter table another
   alter f2 type bigint using f1 * 10;
 
 select * from another;
 
 drop table another;
 
 -- table's row type
 create table tab1 (a int, b text);
 create table tab2 (x int, y tab1);
 alter table tab1 alter column b type varchar; -- fails
+drop table tab2;
+drop table tab1;
 
 -- disallow recursive containment of row types
 create temp table recur1 (f1 int);
 alter table recur1 add column f2 recur1; -- fails
 alter table recur1 add column f2 recur1[]; -- fails
 create domain array_of_recur1 as recur1[];
 alter table recur1 add column f2 array_of_recur1; -- fails
 create temp table recur2 (f1 int, f2 recur1);
 alter table recur1 add column f2 recur2; -- fails
 alter table recur1 add column f2 int;

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Expression Pruning in postgress

2011-07-07 Thread HarmeekSingh Bedi
Hi .

  Apologies for the bandwidth; I did not know whom to ask. I am
interested in a brief explanation of how PostgreSQL does expression
pruning between nodes. The basic question is as follows.

Scenario

   Take a plan where Node 1 is the parent (say a join) and Node 2 is the
child (say a scan). If node 1 has an expression, say foo(column), then the
scan will project 'column' for sure and the join will evaluate foo(column).
Now, what if the node above the join does not need column? How does
PostgreSQL remove the column from the join's projection list? I am seeing
that it does not in many cases, especially when a sort appears above.

 Question
 In general, is there something in the code that can process a plan
tree and find out where expressions are generated and where their
references are no longer required, thus removing expressions that are
not needed? In my case the column is huge, so it matters that it is
removed from the projection list as soon as possible.

Any insights are welcome, and thanks for all the help.

Regards
Harmeek

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: enhanced get diagnostics statement 2

2011-07-07 Thread Pavel Stehule
Hello

thank you very much for review.

I cleaned up the patch and merged your documentation patch.

I hope this is all - language corrections should be done by a native speaker.

Regards

Pavel Stehule

2011/7/6 Shigeru Hanada shigeru.han...@gmail.com:
 (2011/06/02 17:39), Pavel Stehule wrote:
 This patch enhances the GET DIAGNOSTICS statement functionality. It adds
 the possibility of access to an exception's data. These data are stored on the
 stack when an exception handler is activated - and these data are
 accessible everywhere inside the handler. It has different behaviour (the
 content is immutable inside the handler) and therefore it has a modified
 syntax (using the keyword STACKED). This implementation is in conformance
 with ANSI SQL and SQL/PSM - it implements two standard fields -
 RETURNED_SQLSTATE and MESSAGE_TEXT - and three PostgreSQL-specific
 fields - PG_EXCEPTION_DETAIL, PG_EXCEPTION_HINT and
 PG_EXCEPTION_CONTEXT.

 The GET STACKED DIAGNOSTICS statement is allowed only inside an
 exception handler. When it is used outside a handler, the diagnostics
 exception 0Z002 is raised.

 This patch has no impact on performance. It is just an interface to the
 existing stacked 'edata' structure. This patch doesn't change the
 current behaviour of the GET DIAGNOSTICS statement.

 CREATE OR REPLACE FUNCTION public.stacked_diagnostics_test02()
   RETURNS void
   LANGUAGE plpgsql
 AS $function$
 declare _detail text; _hint text; _message text;
 begin
    perform ...
 exception when others then
    get stacked diagnostics
          _message = message_text,
          _detail = pg_exception_detail,
          _hint = pg_exception_hint;
    raise notice 'message: %, detail: %, hint: %', _message, _detail, _hint;
 end;
 $function$

 All regression tests passed.

 Hi Pavel,

 I've reviewed your patch according to the "Reviewing a patch" page.
 During the review, I referred to Working-Draft of SQL 2003 to confirm
 the SQL specs.

 Submission review
 =
 * The patch is in context diff format.
 * The patch couldn't be applied cleanly to the current head.  But it
 requires only one hunk to be offset, and it can be fixed easily.
 I noticed that the new variables needs_xxx, which were added to struct
 PLpgSQL_condition, are not used at all.  They should be removed, or
 something might have been overlooked.
 * The patch includes reasonable regression tests.  The patch also
 includes hunks for the pl/pgsql documentation which describe the new
 feature.  But it would need some corrections:
  - folding too-long lines
  - fixing some grammatical errors (maybe)
  - clarifying the difference between CURRENT and STACKED
 I think that adding a new section for GET STACKED DIAGNOSTICS would help
 to clarify the difference, because the keyword STACKED can be used only
 in an exception clause, and the available information is different from that
 available for GET CURRENT DIAGNOSTICS.  Please find attached a patch
 which includes a proposal for the documentation, though it still needs
 review by an English speaker.

 Usability review
 
 * The patch extends GET DIAGNOSTICS syntax to accept new keywords
 CURRENT and STACKED, which are described in the SQL/PSM standard.  This
 feature allows us to retrieve exception information in EXCEPTION clause.
 Naming of PG-specific fields might be debatable.
 * I think it's useful to get detailed information inside EXCEPTION clause.
 * We don't have this feature yet.
 * This patch follows the SQL spec of GET DIAGNOSTICS, and extends it with
 PG-specific variables.
 * pg_dump support is not required for this feature.
 * AFAICS, this patch doesn't have any danger, such as breakage of
 backward compatibility.

 Feature test
 
 * The new feature introduced by the patch works well.
 I tested that:
  - CURRENT doesn't affect the existing feature
  - STACKED can't be used outside an EXCEPTION clause
  - values can be retrieved via RETURNED_SQLSTATE, MESSAGE_TEXT,
    PG_EXCEPTION_DETAIL, PG_EXCEPTION_HINT and PG_EXCEPTION_CONTEXT
  - invalid item names properly cause errors.
 * I'm not so familiar with pl/pgsql, but ISTM that enough cases are
 considered for the newly added diagnostics items.
 * I didn't see any crash during my tests.

 In conclusion, this patch still needs some effort to be Ready for
 Committer, so I'll push it back to Waiting on Author.

 Regards,
 --
 Shigeru Hanada

*** ./doc/src/sgml/plpgsql.sgml.orig	2011-07-07 09:03:07.135669770 +0200
--- ./doc/src/sgml/plpgsql.sgml	2011-07-07 09:12:20.443762372 +0200
***************
*** 1387,1393 ****
   command, which has the form:
  
  <synopsis>
! GET DIAGNOSTICS <replaceable>variable</replaceable> = <replaceable>item</replaceable> <optional> , ... </optional>;
  </synopsis>
  
   This command allows retrieval of system status indicators.  Each
--- 1387,1393 ----
   command, which has the form:
  
  <synopsis>
! GET <optional> CURRENT </optional> DIAGNOSTICS <replaceable>variable</replaceable> = <replaceable>item</replaceable> <optional> , ... </optional>;
  </synopsis>
  
   This command allows retrieval of system status 

Re: [HACKERS] spinlock contention

2011-07-07 Thread Florian Pflug
On Jul7, 2011, at 03:35 , Robert Haas wrote:
 On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug f...@phlo.org wrote:
 On Jun12, 2011, at 23:39 , Robert Haas wrote:
 So, the majority (60%) of the excess spinning appears to be due to
 SInvalReadLock.  A good chunk are due to ProcArrayLock (25%).
 
 Hm, sizeof(LWLock) is 24 on X86-64, making sizeof(LWLockPadded) 32.
 However, cache lines are 64 bytes large on recent Intel CPUs AFAIK,
 so I guess that two adjacent LWLocks currently share one cache line.
 
 Currently, the ProcArrayLock has index 4 while SInvalReadLock has
 index 5, so if I'm not mistaken, exactly the two locks where you saw
 the largest contention are on the same cache line...
 
 Might make sense to try and see if these numbers change if you
 either make LWLockPadded 64 bytes or arrange the locks differently...
 
 I fooled around with this a while back and saw no benefit.  It's
 possible a more careful test would turn up something, but I think the
 only real way forward here is going to be to eliminate some of that
 locking altogether.
 
 I also tried inserting an
 acquire-and-release cycle on MyProc-backendLock, which was just as
 good.  That seems like a pretty encouraging result, since there appear
 to be several ways of reimplementing SIGetDataEntries() that would
 work with a per-backend lock rather than a global one.

I did some research and found TLRW: Return of the Read-Write Lock
by D. Dice and N. Shavit. PDF can be found here
  http://transact09.cs.washington.edu/19_paper.pdf

They describe a read-write lock called a bytelock, with a set of slots
for shared lockers, where each shared locker only needs to increment its
slot, and thus doesn't interfere with other concurrent shared locking
attempts. The price is that, after incrementing its slot, a shared
locker must issue a fencing instruction and then verify that no writer
has taken the lock in the meantime. In their application, they
distinguish between slotted threads - those which own a designated
slot - and unslotted threads - those which operate on the normal
shared counter.

I implemented that idea, but with a few modifications. First, I don't
distinguish between slotted and unslotted threads, but instead allow
multiple backends to share a slot. This means my implementation cannot
use a simple unlocked increment to update the slot, but instead uses a
locked xadd. I also moved the slots out of the lock itself, into
a separate set of LWLockPart structures. Each such structure contains
the counters for one slot id and multiple locks.

In effect, the resulting thing is an LWLock with a partitioned shared
counter. The partition one backend operates on for shared locks is
determined by its backend id.

I've added the implementation to the lock benchmarking tool at
  https://github.com/fgp/lockbench
and also pushed a patched version of postgres to
  https://github.com/fgp/postgres/tree/lwlock_part

The number of shared counter partitions is currently 4, but can easily
be adjusted in lwlock.h. The code uses GCC's atomic fetch-and-add
intrinsic if available; otherwise it falls back to using a per-partition
spin lock.

Benchmarking results look promising so far. This is the throughput
I get, in acquisition/release cycles per second, for the LWLock
implementations on a 4-core machine. pg_lwlock_part,N,X is the
partitioned lock with N lock partitions, using LOCKED XADD if
X == yes. The numbers are normalized to the throughput in the
single-worker case.

-- Cycles / Second (Wall-Time) -
1   2   4   8   16  worker
wallwallwallwallwalltime
1   1.9 3.9 3.9 3.9 none
1   0.2 0.9 0.8 0.6 pg_lwlock_orig
1   0.4 0.3 0.3 0.3 pg_lwlock_cas
1   0.3 1.0 0.8 0.6 pg_lwlock_part,1,no
1   0.4 0.4 0.4 0.4 pg_lwlock_part,1,yes
1   2.0 0.4 0.4 0.3 pg_lwlock_part,2,no
1   2.0 0.7 0.8 0.7 pg_lwlock_part,2,yes
1   2.0 4.1 2.1 1.0 pg_lwlock_part,4,no
1   2.1 3.8 1.8 2.2 pg_lwlock_part,4,yes

These numbers show that as long as the number of workers does not
exceed the number of lock partitions, throughput increases approximately
linearly. It drops off afterwards, but that's to be expected - 
the test executes LWLockAcquire/Release in a tight loop, so once there's
any kind of contention (even cache-line bouncing), performance drops.
This is also why the original LWLock beats CAS and the partitioned
lock with fewer than 4 partitions here - effectively, at least half of
the workers are sleeping at any given time, as the following
user vs. wall time numbers show.

-- User-Time / Wall-Time 
1   2   4   8   16  worker
1.0 1.9 3.9 3.9 3.9 none
1.0 1.9 1.1 1.3 1.8 

Re: [HACKERS] WIP: Fast GiST index build

2011-07-07 Thread Alexander Korotkov
New version of the patch, with a readme and some bugfixes.
Some new tests with variations of the fast-build parameters are on the wiki page.
I'm still in doubt about how to interpret some of the test results, because they
look strange to me. In particular, I can't explain why a decrease of the buffer
size affects index quality so much (in my expectation, decreasing the buffer size
should make all build parameters behave closer to a regular build). I'm going to
recheck my experiments; probably I'm missing something.

--
With best regards,
Alexander Korotkov.


gist_fast_build-0.5.0.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving the community git server

2011-07-07 Thread Magnus Hagander
This move should now be completed. If you still see any errors - other
than the change of SSH key - please verify that you have the new IP
address (204.145.120.222); if not, it could be excessive DNS caching.

If you have verified both the SSH key and the IP address and still see
problems, please let me know via email or IRC and I'll take a look as
soon as I can!

//Magnus

On Tue, Jul 5, 2011 at 23:35, Magnus Hagander mag...@hagander.net wrote:
 Sometime later this week, the community git server
 (git.postgresql.org) will be migrated to a new server in the same
 datacenter. This does *not* affect the master git server for the
 project, just the anonymous mirror and those that run standalone
 projects on it, such as pgAdmin.

 When this move is done, the IP will change to a different address in
 the same subnet, and the server will have a new SSH key. That means
 you will get a security warning if you use the ssh protocol. The
 fingerprint of the new server is going to be:
 cc:0c:27:40:d7:a2:98:f4:69:07:0c:b6:3a:05:19:71 - verify that this is
 the one when you get a warning.

 When the server has been migrated, the old one will be switched to
 read-only access through http and git protocols for a day or two. The
 ssh protocol will be completely turned off on the old server. If you
 get an error about ssh login, it may simply mean that your DNS hasn't
 yet updated.

 I'll post again here with more details when the actual move happens,
 but I wanted to get a notice out early since it will cause security
 warnings for anybody trying to access it after the move.

 --
  Magnus Hagander
  Me: http://www.hagander.net/
  Work: http://www.redpill-linpro.com/




-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Crash dumps

2011-07-07 Thread Tom Lane
Radosław Smogura rsmog...@softperience.eu writes:
 Craig Ringer cr...@postnewspapers.com.au Thursday 07 of July 2011 01:05:48
 How do you propose to make that reliable, though?

 I want to add an IPC layer to PostgreSQL; a few approaches seem worth considering:
 1. System IPC
 2. Urgent data on a socket
 3. Sockets (at least descriptors) + threads
 4. Not portable: fork() in the segfault handler (I think the forked process
 would start in the segfault handler too).

An IPC layer to be invoked during segfaults?  Somehow I don't think
that's going to pass the reliability threshold.  It doesn't sound
promising from a portability standpoint either, since not one of your
suggestions will work everywhere, even without the already-segfaulted
context to worry about.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Expression Pruning in postgress

2011-07-07 Thread Tom Lane
HarmeekSingh Bedi harmeeksi...@gmail.com writes:
In a plan where Node 1 is the parent (say a join) and Node 2 is the child
 (say a scan), if node 1 has an expression, say foo(column), then the scan
 will project 'column' for sure and the join will
 evaluate foo(column).  Now if the node above the join does not need
 column, how does PostgreSQL remove the column from the join's projection
 list?

See build_joinrel_tlist() in relnode.c.

 I am seeing that it does not in many
 cases, especially when a sort appears above.

Please show a concrete example.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review of VS 2010 support patches

2011-07-07 Thread Craig Ringer

On 7/07/2011 8:26 AM, Brar Piening wrote:

As before perltidy_before.patch has to be applied first and
VS2010v9.patch second.


OK, I've gone through builds with way too many versions of the Windows 
SDK and have test results to report.


The short version: please commit so I never, ever, ever have to do this 
again ;-). I don't see anything newly broken; the only issues I hit 
were in master as well, and are probably related to local configuration 
issues and/or the sheer profusion of Windows SDK releases I've burdened 
my poor laptop with.


Note that x64 builds reported below are configured for plperl and 
plpython only. Other config.pl options are left at 'undef'.


Test results:
=

VS 2005
---

- SDK 6.0 (VS 2005) x86: OK, vcregress check passed

- SDK 6.0 (VS 2005) x64: OK, vcregress check passed

VS 2008
---

- SDK 6.1 (VS 2008) x86: OK, vcregress check passed

- SDK 6.1 (VS 2008) x64: Failed - vcbuild exited with code 255.
 (Also fails on unpatched git master x64)
 Since I'm getting crash report dialogs from
 vcbuild, I'm not inclined to blame Pg for this
 issue.

- SDK 6.1 (VS 2008) x64 (only plperl enabled): OK, vcregress passed

VS 2010
---

- SDK 7.0A (VS 2010) x86: OK, vcregress passed
- SDK 7.0A (VS 2010) x64: [Pending, missing x64 tools]

Latest Windows SDK
--

- SDK 7.1 x86: OK, vcregress passed
- SDK 7.1 x64: OK (incl. plpython), vcregress passed



Won't test:
===

- Itanium. Does Pg build for Itanium as things stand, anyway? Would 
anybody notice or care if it didn't?


Not tested yet, unsure if I'll have time


- vcregress plcheck, vcregress contrib for each combo

- x64 builds with anything more than plperl and plpython enabled. 
Library availability is a bit of an issue, and building all those 
libraries for x64 is outside what I can currently commit to, especially 
as they all require different build methods and some appear to require 
patches/fixes to build at all.


- ossp-uuid. No binaries available, doesn't have an NMakefile or
  VS project, and
Frankly, I suggest leaving these tests for the buildfarm to sort out. I 
don't see any sign of build process issues; they all build fine, and 
it's pretty darn unlikely that build changes would cause them to break 
at runtime. Windows buildfarm coverage looks pretty decent these days.


--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [v9.2] Fix leaky-view problem, part 1

2011-07-07 Thread Kohei KaiGai
2011/7/7 Noah Misch n...@2ndquadrant.com:
 On Wed, Jul 06, 2011 at 10:25:12PM +0200, Kohei KaiGai wrote:
 *** a/src/backend/commands/view.c
 --- b/src/backend/commands/view.c

  --- 227,257 ----
  				atcmd->def = (Node *) lfirst(c);
  				atcmds = lappend(atcmds, atcmd);
  			}
  		}
  
  +		/*
  +		 * If optional parameters are specified, we must set options
  +		 * using ALTER TABLE SET OPTION internally.
  +		 */
  +		if (list_length(options) > 0)
  +		{
  +			atcmd = makeNode(AlterTableCmd);
  +			atcmd->subtype = AT_SetRelOptions;
  +			atcmd->def = (List *) options;
  +
  +			atcmds = lappend(atcmds, atcmd);
  +		}
  +		else
  +		{
  +			atcmd = makeNode(AlterTableCmd);
  +			atcmd->subtype = AT_ResetRelOptions;
  +			atcmd->def = (Node *) list_make1(makeDefElem("security_barrier", NULL));
  +		}
  +		if (atcmds != NIL)
  +			AlterTableInternal(viewOid, atcmds, true);
  +
  +		/*
  		 * Seems okay, so return the OID of the pre-existing view.
  		 */
  		relation_close(rel, NoLock);	/* keep the lock! */

 That gets the job done for today, but DefineVirtualRelation() should not need
 to know all view options by name to simply replace the existing list with a
 new one.  I don't think you can cleanly use the ALTER TABLE SET/RESET code for
 this.  Instead, compute an option list similar to how DefineRelation() does so
 at tablecmds.c:491, then update pg_class.

My opinion is that the ALTER TABLE SET/RESET code should be enhanced to
accept an operation that resets all the existing options, rather than
doing tricky updates of pg_class.
How about adding AT_ResetAllRelOptions for internal use only?

I'll fix up the regression test also.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving the community git server

2011-07-07 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 This move should now be completed.

Something weird seems to have happened to the gitweb service at the same
time, which broke some of my bookmarks.  On investigation, I think it's
gotten pickier about the local-part of the URLs.  For example

http://git.postgresql.org/gitweb/?p=postgresql.git  OK
http://git.postgresql.org/gitweb?p=postgresql.git   404

Is the latter supposed to work?  The former looks a bit odd to me,
but I'm no expert.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI 2PC coverage

2011-07-07 Thread Heikki Linnakangas

On 05.07.2011 20:06, Kevin Grittner wrote:

[resending after gzip of test patch]

In reviewing the recent fix to 2PC coverage in SSI, I found some
cases which didn't seem to be covered.  Dan bit the bullet and came
up with an additional isolation test to rigorously cover all the
permutations, to find *all* 2PC statement orderings which weren't
working right.  Because it was so big, he pared out tests which were
redundant, in that they exercised the same code paths and pointed at
the same issues.  A patch to add this test is attached.  Run against
HEAD it shows the errors.  It's kinda big, but I think it's worth
having.


I agree it'd be very nice to have this test, but 2.3 MB of expected 
output is a bit excessive. Let's try to cut that down into something 
more palatable.


There are two expected-output files for this: one for max_prepared_xacts=0 
and another for the normal case. The max_prepared_xacts=0 case isn't 
very interesting, since all the PREPARE TRANSACTION commands fail. I 
think we should adjust the test harness to not run these tests at all if 
max_prepared_xacts=0. It would be better to skip the test and print out 
a notice pointing out that it was not run; it just gives a false sense 
of security to run the test and report success when it didn't test 
anything useful.


That alone cuts the size of the expected output to about 1 MB. That's 
much better, although it's still a lot of weight just for expected 
output. However, it compresses extremely well, to about 16 KB, so this 
isn't an issue for the size of distribution tarballs and such, only for 
git checkouts and the on-disk size of extracted tarballs. I think that 
would be acceptable, although we could easily cut it a bit further if we 
want to. For example, leaving out the word "step" from all the lines of 
executed test steps would cut it by about 80 KB.



Attached is also a patch to fix those, so that all permutations
work.


Thanks, committed the bug fix with some additional comments.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving the community git server

2011-07-07 Thread Magnus Hagander
On Thu, Jul 7, 2011 at 17:09, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 This move should now be completed.

 Something weird seems to have happened to the gitweb service at the same
 time, which broke some of my bookmarks.  On investigation, I think it's
 gotten pickier about the local-part of the URLs.  For example

 http://git.postgresql.org/gitweb/?p=postgresql.git              OK
 http://git.postgresql.org/gitweb?p=postgresql.git               404

 Is the latter supposed to work?  The former looks a bit odd to me,
 but I'm no expert.

I can confirm it broke. Both really should work I think - I'll take a
look at why later this evening, hopefully it's a quick fix.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Make relation_openrv atomic wrt DDL

2011-07-07 Thread Robert Haas
On Wed, Jul 6, 2011 at 10:44 PM, Noah Misch n...@2ndquadrant.com wrote:
 Attached.  I made the counter 64 bits wide, handled the nothing-found case per
 your idea, and improved a few comments cosmetically.  I have not attempted to
 improve the search_path interposition case.  We can recommend the workaround
 above, and doing better looks like an excursion much larger than the one
 represented by this patch.

I looked at this some more and started to get uncomfortable with the
whole idea of having RangeVarLockRelid() be a wrapper around
RangeVarGetRelid().  This hazard exists everywhere the latter function
gets called, not just in relation_open().  So it doesn't seem right to
fix the problem only in those places.

So I went through and incorporated the logic proposed for
RangeVarLockRelid() into RangeVarGetRelid() itself, and then went
through and examined all the callers of RangeVarGetRelid().  There are
some, such as has_table_privilege(), where it's really impractical to
take any lock, first because we might have no privileges at all on
that table and second because that could easily lead to a massive
amount of locking for no particular good reason.  I believe Tom
suggested that the right fix for these functions is to have them
index-scan the system catalogs using the caller's MVCC snapshot, which
would be right at least for pg_dump.  And there are other callers that
cannot acquire the lock as part of RangeVarGetRelid() for a variety of
other reasons.  However, having said that, there do appear to be a
number of cases that can be fixed fairly easily.

So here's a (heavily) updated patch that tries to do that, along with
adding comments to the places where things still need more fixing.  In
addition to the problems corrected by your last version, this fixes
LOCK TABLE, ALTER SEQUENCE, ALTER TABLE .. RENAME, the whole-table
variant of REINDEX, CREATE CONSTRAINT TRIGGER (which is flat-out wrong
as it stands, since it acquires *no lock at all* on the table
specified in the FROM clause, never mind the question of doing so
atomically), CREATE RULE, and (partially) DROP TRIGGER and DROP RULE.

Regardless of exactly how we decide to proceed here, it strikes me
that there is a heck of a lot more work that could stand to be done in
this area...  :-(

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


atomic-rangevargetrelid.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI 2PC coverage

2011-07-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 On 05.07.2011 20:06, Kevin Grittner wrote:
 
 In reviewing the recent fix to 2PC coverage in SSI, I found some
 cases which didn't seem to be covered.  Dan bit the bullet and
 came up with an additional isolation test to rigorously cover all
 the permutations, to find *all* 2PC statement orderings which
 weren't working right.  Because it was so big, he pared out tests
 which were redundant, in that they exercised the same code paths
 and pointed at the same issues.  A patch to add this test is
 attached.  Run against HEAD it shows the errors.  It's kinda big,
 but I think it's worth having.
 
 I agree it'd be very nice to have this test, but 2.3 MB of
 expected output is a bit excessive. Let's try to cut that down
 into something more palatable.
 
OK
 
 There's two expected output files for this, one for
 max_prepared_xacts=0 and another for the normal case. The
 max_prepared_xacts=0 case isn't very interesting, since all the
 PREPARE TRANSACTION commands fail. I think we should adjust the
 test harness to not run these tests at all if
 max_prepared_xacts=0. It would be better to skip the test and
 print out a notice pointing out that it was not run, it'll just
 give a false sense of security to run the test and report success,
 when it didn't test anything useful.
 
 That alone cuts the size of the expected output to about 1 MB.
 
OK.  I'll work on this tonight unless Dan jumps in to claim it.
 
 That's much better, although it's still a lot of weight just for
 expected output. However, it compresses extremely well, to about
 16 KB, so this isn't an issue for the size of distribution
 tarballs and such, only for git checkouts and on-disk size of
 extracted tarballs. I think that would be acceptable, although we
 could easily cut it a bit further if we want to. For example
 leaving out the word step from all the lines of executed test
 steps would cut it by about 80 KB.
 
That seems simple enough.  Any other ideas?
 
 Attached is also a patch to fix those, so that all permutations
 work.
 
 Thanks, committed the bug fix with some additional comments.
 
Thanks!
 
-Kevin



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Heikki Linnakangas

On 05.07.2011 20:03, Kevin Grittner wrote:

In reviewing the 2PC changes mentioned in a separate post, both Dan
and I realized that these were dependent on the assumption that
SSI's commitSeqNo is assigned in the order in which the transactions
became visible.


This comment in the patch actually suggests a stronger requirement:


+ * For correct SERIALIZABLE semantics, the commitSeqNo must appear to be set
+ * atomically with the work of the transaction becoming visible to other
+ * transactions.


So, is it enough for the commitSeqNos to be set in the order the 
transactions become visible to others? I'm assuming 'yes' for now, as 
the approach being discussed to assign commitSeqNo in 
ProcArrayEndTransaction() without also holding SerializableXactHashLock 
is not going to work otherwise, and if I understood correctly you didn't 
see any correctness issue with that. Please shout if I'm missing something.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] spinlock contention

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug f...@phlo.org wrote:
 In effect, the resulting thing is an LWLock with a partitioned shared
 counter. The partition one backend operates on for shared locks is
 determined by its backend id.

 I've added the implementation to the lock benchmarking tool at
  https://github.com/fgp/lockbench
 and also pushed a patched version of postgres to
  https://github.com/fgp/postgres/tree/lwlock_part

 The number of shared counter partitions is currently 4, but can easily
 be adjusted in lwlock.h. The code uses GCC's atomic fetch and add
 intrinsic if available, otherwise it falls back to using a per-partition
 spin lock.

I think this is probably a good trade-off for locks that are most
frequently taken in shared mode (like SInvalReadLock), but it seems
like it could be a very bad trade-off for locks that are frequently
taken in both shared and exclusive mode (e.g. ProcArrayLock,
BufMappingLocks).

I don't want to fiddle with your git repo, but if you attach a patch
that applies to the master branch I'll give it a spin if I have time.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] dropping table in testcase alter_table.sql

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 3:05 AM, Ashutosh Bapat
ashutosh.ba...@enterprisedb.com wrote:
 I noticed that the test alter_table.sql is creating two tables tab1 and tab2
 and it's not dropping it. Any test which follows this test and tries to
 create tables with names tab1 and tab2 will fail (unless it drops those
 tables first, but that may not work, since tab2.y depends upon tab1 in
 alter_table.sql).

 PFA patch which drops these two tables from alter_table.sql and
 corresponding OUT change. The regression run clean with this patch.

The regression tests leave lots of objects lying around in the
regression database... why drop these two, as opposed to any of the
others?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [RRR] 9.2 CF2: 20 days in

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 12:02 AM, Josh Berkus j...@agliodbs.com wrote:
 * 1/2 of patches are still pending development: 12 waiting on author,
 and 18 waiting for review. In addition, 7 patches are waiting for a
 committer.

We need to start marking the patches that are Waiting on Author as
Returned with Feedback, ideally after checking that the status in
the CF application is in fact up to date.   With a week left in the
CommitFest at this point, anything that has been reviewed and still
has issues is pretty much going to have to wait for the next round.
We need to focus on (a) reviewing the patches that haven't been
reviewed yet and (b) getting the stuff that is basically in good shape
committed.  Otherwise, we're still going to be doing this CommitFest
in September.

I have been attempting to keep somewhat on top of the stuff that has
become Ready for Committer, but there is too much of it for me to
handle by myself.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Extra check in 9.0 exclusion constraint unintended consequences

2011-07-07 Thread Robert Haas
On Tue, Jul 5, 2011 at 12:26 PM, Jeff Davis pg...@j-davis.com wrote:
 In the 9.0 version of exclusion constraints, we added an extra check to
 ensure that, when searching for a conflict, a tuple at least found
 itself as a conflict. This extra check is not present in 9.1+.

 It was designed to help diagnose certain types of problems, and is fine
 for most use cases. A value is equal to itself (and therefore conflicts
 with itself), and a value overlaps with itself (and therefore conflicts
 with itself), which were the primary use cases. We removed the extra
 check in 9.1 because there are other operators for which that might not
 be true, like <>, but the use case is a little more obscure.

 However, values don't always overlap with themselves -- for instance the
 empty period (which was an oversight by me). So, Abel Abraham Camarillo
 Ojeda ran into a rather cryptic error message when he tried to do that:

 ERROR:  failed to re-find tuple within index t_period_excl
 HINT:  This may be because of a non-immutable index expression.

 I don't think we need to necessarily remove the extra check in 9.0,
 because the workaround is simple: add a WHERE clause to the constraint
 eliminating empty periods. Perhaps we could improve the error message
 and hint, and add a note in the documentation.

I think it's probably too late to go fiddling with the behavior of 9.0
at this point.  If we change the text of error messages, there is a
chance that it might break applications; it would also require those
messages to be re-translated, and I don't think the issue is really
important enough to justify a change.  I am happy to see us document
it better, though, since it's pretty clear that there is more
likelihood of hitting that error than we might have suspected at the
outset.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 On 05.07.2011 20:03, Kevin Grittner wrote:
 In reviewing the 2PC changes mentioned in a separate post, both
 Dan and I realized that these were dependent on the assumption
 that SSI's commitSeqNo is assigned in the order in which the
 transactions became visible.
 
 This comment in the patch actually suggests a stronger
 requirement:
 
 + * For correct SERIALIZABLE semantics, the commitSeqNo must
 appear to be set
 + * atomically with the work of the transaction becoming visible
 to other
 + * transactions.
 
 So, is it enough for the commitSeqNos to be set in the order the 
 transactions become visible to others? I'm assuming 'yes' for now,
 as the approach being discussed to assign commitSeqNo in 
 ProcArrayEndTransaction() without also holding
 SerializableXactHashLock is not going to work otherwise, and if I
 understood correctly you didn't see any correctness issue with
 that. Please shout if I'm missing something.
 
Hmm.  The more strict semantics are much easier to reason about and
ensure correctness.  I think that the looser semantics can be made
to work, but there needs to be more fussy logic in the SSI code to
deal with the fact that we don't know whether the visibility change
has occurred.  Really what pushed us to the patch using the stricter
semantics was how much the discussion of how to cover the edge
conditions with the looser semantics made our heads spin.  Anything
that confusing seems more prone to subtle bugs.
 
If people really think that route is better I can put a patch
together so that the two approaches can be compared side-by-side. 
If we go that way, it seemed that maybe we should move
LastSxactCommitSeqNo to VariableCacheData, right below
latestCompletedXid.  We need some way to communicate that the
commitSeqNo has been set -- probably a volatile boolean in local
memory, and we need to acquire ProcArrayLock in some places we
currently don't within the SSI code to get at the value.  Off-hand,
I don't see how that can be done without holding
SerializableXactHashLock while grabbing ProcArrayLock; and if we
need to do that to grab the assigned value, I'm not sure how much
we're buying.  Do you see some way to avoid that?
 
The other thing we need to do if we go with the looser semantics is
be pessimistic -- in all tests for dangerous structures we need to
assume that any transaction which is prepared *may* also be
committed, and may have committed before any other transaction which
is prepared or committed.  It would be good to put some bounds on
this.  I guess any monotonically increasing number in the system
which can be grabbed just before the commit would do.
 
Unless you've come up with some clever approach we missed, I fear
that a patch based on the looser semantics will be bigger and more
fragile.
 
Do you agree that the patch Dan and I posted *does not add any LW
locking* to a normal (not prepared) transaction if it either has no
XID or is not SERIALIZABLE?  And that ProcArrayLock is not held
during COMMIT PREPARED or ROLLBACK PREPARED unless the transaction
is SERIALIZABLE?  I'm a bit concerned that we're headed down this
other path because people aren't clear about these facts.
 
-Kevin



Re: [HACKERS] Moving the community git server

2011-07-07 Thread Magnus Hagander
On Thu, Jul 7, 2011 at 17:12, Magnus Hagander mag...@hagander.net wrote:
 On Thu, Jul 7, 2011 at 17:09, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 This move should now be completed.

 Something weird seems to have happened to the gitweb service at the same
 time, which broke some of my bookmarks.  On investigation, I think it's
 gotten pickier about the local-part of the URLs.  For example

 http://git.postgresql.org/gitweb/?p=postgresql.git              OK
 http://git.postgresql.org/gitweb?p=postgresql.git               404

 Is the latter supposed to work?  The former looks a bit odd to me,
 but I'm no expert.

 I can confirm it broke. Both really should work I think - I'll take a
 look at why later this evening, hopefully it's a quick fix.

Should be fixed with a redirect now.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] [v9.2] Fix leaky-view problem, part 1

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 10:56 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 My opinion is ALTER TABLE SET/RESET code should be enhanced to accept
 an operation to reset all the existing options, rather than tricky
 updates of pg_class.
 How about an idea to add AT_ResetAllRelOptions for internal use only?

 I'll fix up the regression test also.

On an only semi-related note, ISTM that you may as well merge parts 0,
1, and 2 into a single patch, since there is no way we are going to
apply any of them without the others.  I suggest closing one of the
CommitFest entries and revising the other one to point to the
consolidated patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Inconsistency between postgresql.conf and docs

2011-07-07 Thread Tom Lane
Fujii Masao masao.fu...@gmail.com writes:
 On Thu, Jun 30, 2011 at 1:34 AM, Josh Berkus j...@agliodbs.com wrote:
 My preference would be to have:
 
 # REPLICATION
 
 # - Master Settings -
 # these settings affect the master role in replication
 # they will be ignored on the standby
 
 ... settings ...
 
 # - Standby Settings -
 # these settings affect the standby role in replication
 # they will be ignored on the master
 
 ... settings ...
 
 
 That's how I've been setting up the file for my customers; it's fairly
 clear and understandable.

 Looks better than it does now. Anyway, if you change those, you would
 need to change also the config_group in guc.c.

OK, so the plan is to move these settings into a separate top-level
group Replication, and sub-divide into master and standby settings,
removing the current classification for streaming versus
synchronous?  That makes sense to me too, but it touches code as
well as docs ... last call for objections ...

regards, tom lane



Re: [HACKERS] [RRR] 9.2 CF2: 20 days in

2011-07-07 Thread Joshua Berkus
Robert,
 
 We need to start marking the patches that are Waiting on Author as
 Returned with Feedback, ideally after checking that the status in
 the CF application is in fact up to date.   With a week left in the
 CommitFest at this point, anything that has been reviewed and still
 has issues is pretty much going to have to wait for the next round.
 We need to focus on (a) reviewing the patches that haven't been
 reviewed yet and (b) getting the stuff that is basically in good
 shape
 committed.  Otherwise, we're still going to be doing this CommitFest
 in September

Sure, I only want to do that for ones which have been waiting on author for 
more than a couple days though.  Working on that.

 I have been attempting to keep somewhat on top of the stuff that has
 become Ready for Committer, but there is too much of it for me to
 handle by myself.

Yeah, given that we're still in beta, I expected committing to be a problem.  
Not a surprise.

--Josh 



Re: [HACKERS] Moving the community git server

2011-07-07 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Thu, Jul 7, 2011 at 17:09, Tom Lane t...@sss.pgh.pa.us wrote:
 http://git.postgresql.org/gitweb/?p=postgresql.git  OK
 http://git.postgresql.org/gitweb?p=postgresql.git   404

 Should be fixed with a redirect now.

Thanks, my bookmarks seem OK again.

regards, tom lane



Re: [HACKERS] [RRR] 9.2 CF2: 20 days in

2011-07-07 Thread Tom Lane
Joshua Berkus j...@agliodbs.com writes:
 I have been attempting to keep somewhat on top of the stuff that has
 become Ready for Committer, but there is too much of it for me to
 handle by myself.

 Yeah, given that we're still in beta, I expected committing to be a problem.  
 Not a surprise.

Between beta and vacation, I've not been able to provide any help in
this CF.  I'm still catching up from vacation but should be able to
provide some help next week.

regards, tom lane



Re: [HACKERS] Latch implementation that wakes on postmaster death on both win32 and Unix

2011-07-07 Thread Peter Geoghegan
I now think that we shouldn't change the return value format from the
most recent revisions of the patch (i.e. returning a bitfield). We
should leave it as-is, while documenting that it's possible, although
extremely unlikely, for it to incorrectly report Postmaster death, and
that clients therefore have an onus to check that themselves using
PostmasterIsAlive(). We already provide fairly weak guarantees as to
the validity of that return value (Note that if multiple wake-up
conditions are true, there is no guarantee that we return all of them
in one call, but we will return at least one). Making them a bit
weaker still seems acceptable.

In addition, we'd change the implementation of PostmasterIsAlive() to
/just/ perform the read() test as already described.

I'm not concerned about the possibility of spurious extra cycles of
auxiliary process event loops - should I be?

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] [v9.2] Fix leaky-view problem, part 1

2011-07-07 Thread Noah Misch
On Thu, Jul 07, 2011 at 03:56:26PM +0100, Kohei KaiGai wrote:
 2011/7/7 Noah Misch n...@2ndquadrant.com:
  On Wed, Jul 06, 2011 at 10:25:12PM +0200, Kohei KaiGai wrote:
  *** a/src/backend/commands/view.c
  --- b/src/backend/commands/view.c
 
  --- 227,257 ----
                                atcmd->def = (Node *) lfirst(c);
                                atcmds = lappend(atcmds, atcmd);
                        }
                }
 
                /*
  +              * If optional parameters are specified, we must set options
  +              * using ALTER TABLE SET OPTION internally.
  +              */
  +             if (list_length(options) > 0)
  +             {
  +                     atcmd = makeNode(AlterTableCmd);
  +                     atcmd->subtype = AT_SetRelOptions;
  +                     atcmd->def = (List *) options;
  +
  +                     atcmds = lappend(atcmds, atcmd);
  +             }
  +             else
  +             {
  +                     atcmd = makeNode(AlterTableCmd);
  +                     atcmd->subtype = AT_ResetRelOptions;
  +                     atcmd->def = (Node *) list_make1(makeDefElem("security_barrier",
  +                                                                  NULL));
  +             }
  +             if (atcmds != NIL)
  +                     AlterTableInternal(viewOid, atcmds, true);
  +
  +             /*
                 * Seems okay, so return the OID of the pre-existing view.
                 */
                relation_close(rel, NoLock);    /* keep the lock! */
 
  That gets the job done for today, but DefineVirtualRelation() should not 
  need
  to know all view options by name to simply replace the existing list with a
  new one.  I don't think you can cleanly use the ALTER TABLE SET/RESET code 
  for
  this.  Instead, compute an option list similar to how DefineRelation() does 
  so
  at tablecmds.c:491, then update pg_class.
 
 My opinion is ALTER TABLE SET/RESET code should be enhanced to accept
 an operation to reset all the existing options, rather than tricky
 updates of pg_class.

The pg_class update has ~20 lines of idiomatic code; see tablecmds.c:7931-7951.

 How about an idea to add AT_ResetAllRelOptions for internal use only?

If some operation is purely internal and does not otherwise benefit from the
ALTER TABLE infrastructure, there's no benefit in involving ALTER TABLE.
DefineVirtualRelation() uses ALTER TABLE to add columns because all that code
needs to exist anyway.  You could make a plain function to do the update that
gets called from both ATExecSetRelOptions() and DefineVirtualRelation().

Thanks,
nm



Re: [HACKERS] [RRR] 9.2 CF2: 20 days in

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 1:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Joshua Berkus j...@agliodbs.com writes:
 I have been attempting to keep somewhat on top of the stuff that has
 become Ready for Committer, but there is too much of it for me to
 handle by myself.

 Yeah, given that we're still in beta, I expected committing to be a problem. 
  Not a surprise.

 Between beta and vacation, I've not been able to provide any help in
 this CF.  I'm still catching up from vacation but should be able to
 provide some help next week.

That would be great, especially because I am leaving for a week's
vacation on Saturday, and so my availability to help with the final
push is going to be limited.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Inconsistency between postgresql.conf and docs

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 1:26 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Fujii Masao masao.fu...@gmail.com writes:
 On Thu, Jun 30, 2011 at 1:34 AM, Josh Berkus j...@agliodbs.com wrote:
 My preference would be to have:

 # REPLICATION

 # - Master Settings -
 # these settings affect the master role in replication
 # they will be ignored on the standby

 ... settings ...

 # - Standby Settings -
 # these settings affect the standby role in replication
 # they will be ignored on the master

 ... settings ...


 That's how I've been setting up the file for my customers; it's fairly
 clear and understandable.

 Looks better than it does now. Anyway, if you change those, you would
 need to change also the config_group in guc.c.

 OK, so the plan is to move these settings into a separate top-level
 group Replication, and sub-divide into master and standby settings,
 removing the current classification for streaming versus
 synchronous?  That makes sense to me too, but it touches code as
 well as docs ... last call for objections ...

None here.  +1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Latch implementation that wakes on postmaster death on both win32 and Unix

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 1:41 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 I now think that we shouldn't change the return value format from the
 most recent revisions of the patch (i.e. returning a bitfield). We
 should leave it as-is, while documenting that it's possible, although
 extremely unlikely, for it to incorrectly report Postmaster death, and
 that clients therefore have a onus to check that themselves using
 [sic; an onus]
 PostmasterIsAlive(). We already provide fairly weak guarantees as to
 the validity of that return value (Note that if multiple wake-up
 conditions are true, there is no guarantee that we return all of them
 in one call, but we will return at least one). Making them a bit
 weaker still seems acceptable.

I agree - that seems like a good way to handle it.

 In addition, we'd change the implementation of PostmasterIsAlive() to
 /just/ perform the read() test as already described.

 I'm not concerned about the possibility of spurious extra cycles of
 auxiliary process event loops - should I be?

A tight loop would be bad, but an occasional spurious wake-up seems harmless.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE

2011-07-07 Thread Robert Haas
On Wed, Jul 6, 2011 at 2:50 PM, Noah Misch n...@2ndquadrant.com wrote:
 Drat; fixed in this version.  My local branches contain a large test battery
 that I filter out of the diff before posting.  This time, that filter also
 removed an essential part of the patch.

OK, I'm pretty happy with this version, with the following minor caveats:

1. I still think the documentation changes could use some
word-smithing.  If I end up being the one who commits this, I'll take
a look at that as part of the commit.

2. I think it would be helpful to add a comment explaining why
CheckIndexCompatible() is calling

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 2:43 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Jul 6, 2011 at 2:50 PM, Noah Misch n...@2ndquadrant.com wrote:
 Drat; fixed in this version.  My local branches contain a large test battery
 that I filter out of the diff before posting.  This time, that filter also
 removed an essential part of the patch.

 OK, I'm pretty happy with this version, with the following minor caveats:

 1. I still think the documentation changes could use some
 word-smithing.  If I end up being the one who commits this, I'll take
 a look at that as part of the commit.

 2. I think it would be helpful to add a comment explaining why
 CheckIndexCompatible() is calling

Woops, sorry.  Hit send too soon.

...why CheckIndexCompatible() is calling ComputeIndexAttrs().

Rather than committing this immediately, I'm going to mark this Ready
for Committer, just in case Tom or someone else wants to look this
over and weigh in.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Heikki Linnakangas

On 07.07.2011 19:41, Kevin Grittner wrote:

Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:

On 05.07.2011 20:03, Kevin Grittner wrote:

In reviewing the 2PC changes mentioned in a separate post, both
Dan and I realized that these were dependent on the assumption
that SSI's commitSeqNo is assigned in the order in which the
transactions became visible.


This comment in the patch actually suggests a stronger
requirement:


+ * For correct SERIALIZABLE semantics, the commitSeqNo must
 appear to be set
+ * atomically with the work of the transaction becoming visible
 to other
+ * transactions.


So, is it enough for the commitSeqNos to be set in the order the
transactions become visible to others? I'm assuming 'yes' for now,
as the approach being discussed to assign commitSeqNo in
ProcArrayEndTransaction() without also holding
SerializableXactHashLock is not going to work otherwise, and if I
understood correctly you didn't see any correctness issue with
that. Please shout if I'm missing something.


Hmm.  The more strict semantics are much easier to reason about and
ensure correctness.


True.


 I think that the looser semantics can be made
to work, but there needs to be more fussy logic in the SSI code to
deal with the fact that we don't know whether the visibility change
has occurred.  Really what pushed us to the patch using the stricter
semantics was how much the discussion of how to cover the edge
conditions with the looser semantics made our heads spin.  Anything
that confusing seems more prone to subtle bugs.


Let's step back and analyse a bit closer what goes wrong with the 
current semantics. The problem is that commitSeqNo is assigned too late, 
sometime after the transaction has become visible to others.


Looking at the comparisons that we do with commitSeqNos, they all have 
conservative direction that we can err to, when in doubt. So here's an idea:


Let's have two sequence numbers for each transaction: prepareSeqNo and 
commitSeqNo. prepareSeqNo is assigned when a transaction is prepared (in 
PreCommit_CheckForSerializableConflicts), and commitSeqNo is assigned 
when it's committed (in ReleasePredicateLocks). They are both assigned 
from one counter, LastSxactCommitSeqNo, so that is advanced twice per 
transaction, and prepareSeqNo is always smaller than commitSeqNo for a 
transaction. Modify operations that currently use commitSeqNo to use 
either prepareSeqNo or commitSeqNo, so that we err on the safe side.


That yields a much smaller patch (attached). How does this look to you, 
am I missing anything?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c
index f0c8ee4..134a0a0 100644
--- a/src/backend/storage/lmgr/predicate.c
+++ b/src/backend/storage/lmgr/predicate.c
@@ -1184,6 +1184,7 @@ InitPredicateLocks(void)
 		}
 		PredXact->OldCommittedSxact = CreatePredXact();
 		SetInvalidVirtualTransactionId(PredXact->OldCommittedSxact->vxid);
+		PredXact->OldCommittedSxact->prepareSeqNo = 0;
 		PredXact->OldCommittedSxact->commitSeqNo = 0;
 		PredXact->OldCommittedSxact->SeqNo.lastCommitBeforeSnapshot = 0;
 		SHMQueueInit(&PredXact->OldCommittedSxact->outConflicts);
@@ -1644,6 +1645,7 @@ RegisterSerializableTransactionInt(Snapshot snapshot)
 	/* Initialize the structure. */
 	sxact->vxid = vxid;
 	sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
+	sxact->prepareSeqNo = InvalidSerCommitSeqNo;
 	sxact->commitSeqNo = InvalidSerCommitSeqNo;
 	SHMQueueInit(&(sxact->outConflicts));
 	SHMQueueInit(&(sxact->inConflicts));
@@ -4401,18 +4403,13 @@ OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
 		{
 			SERIALIZABLEXACT *t2 = conflict->sxactIn;
 
-			/*
-			 * Note that if T2 is merely prepared but not yet committed, we
-			 * rely on t2->commitSeqNo being InvalidSerCommitSeqNo, which is
-			 * larger than any valid commit sequence number.
-			 */
 			if (SxactIsPrepared(t2)
 				&& (!SxactIsCommitted(reader)
-					|| t2->commitSeqNo <= reader->commitSeqNo)
+					|| t2->prepareSeqNo <= reader->commitSeqNo)
 				&& (!SxactIsCommitted(writer)
-					|| t2->commitSeqNo <= writer->commitSeqNo)
+					|| t2->prepareSeqNo <= writer->commitSeqNo)
 				&& (!SxactIsReadOnly(reader)
-			   || t2->commitSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
+			   || t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
 			{
 				failure = true;
 				break;
@@ -4453,17 +4450,11 @@ OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
 		{
 			SERIALIZABLEXACT *t0 = conflict->sxactOut;
 
-			/*
-			 * Note that if the writer is merely prepared but not yet
-			 * committed, we rely on writer->commitSeqNo being
-			 * InvalidSerCommitSeqNo, which is larger than any valid commit
-			 * sequence number.
-			 */
 			if (!SxactIsDoomed(t0)
 				&& (!SxactIsCommitted(t0)
-					|| t0->commitSeqNo <= writer->commitSeqNo)
+					|| t0->commitSeqNo <= writer->prepareSeqNo)
				&&

Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE

2011-07-07 Thread Noah Misch
On Thu, Jul 07, 2011 at 02:44:49PM -0400, Robert Haas wrote:
 On Thu, Jul 7, 2011 at 2:43 PM, Robert Haas robertmh...@gmail.com wrote:
  On Wed, Jul 6, 2011 at 2:50 PM, Noah Misch n...@2ndquadrant.com wrote:
  Drat; fixed in this version.  My local branches contain a large test battery
  that I filter out of the diff before posting.  This time, that filter also
  removed an essential part of the patch.
 
  OK, I'm pretty happy with this version, with the following minor caveats:
 
  1. I still think the documentation changes could use some
  word-smithing.  If I end up being the one who commits this, I'll take
  a look at that as part of the commit.
 
  2. I think it would be helpful to add a comment explaining why
  CheckIndexCompatible() is calling
 
 Woops, sorry.  Hit send too soon.
 
 ...why CheckIndexCompatible() is calling ComputeIndexAttrs().

CheckIndexCompatible() calls ComputeIndexAttrs() to resolve the new operator
classes, collations and exclusion operators for each index column.  It then
checks those against the existing values for the same.  I figured that was
obvious enough, but do you want a new version noting that?

 Rather than committing this immediately, I'm going to mark this Ready
 for Committer, just in case Tom or someone else wants to look this
 over and weigh in.

Great.  Thanks for reviewing.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 Let's have two sequence numbers for each transaction: prepareSeqNo
 and commitSeqNo. prepareSeqNo is assigned when a transaction is
 prepared (in PreCommit_CheckForSerializableConflicts), and
 commitSeqNo is assigned when it's committed (in
 ReleasePredicateLocks). They are both assigned from one counter,
 LastSxactCommitSeqNo, so that is advanced twice per transaction,
 and prepareSeqNo is always smaller than commitSeqNo for a
 transaction. Modify operations that currently use commitSeqNo to
 use either prepareSeqNo or commitSeqNo, so that we err on the safe
 side.
 
 That yields a much smaller patch (attached). How does this look to
 you, am I missing anything?
 
Very clever.  I'll need to study this and think about it.  I'll try
to post a response before I go to bed tonight.  Hopefully Dan can
weigh in, too.  (I know he was traveling with limited Internet
access -- not sure about his return date.)
 
-Kevin



Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 2:55 PM, Noah Misch n...@2ndquadrant.com wrote:
 CheckIndexCompatible() calls ComputeIndexAttrs() to resolve the new operator
 classes, collations and exclusion operators for each index column.  It then
 checks those against the existing values for the same.  I figured that was
 obvious enough, but do you want a new version noting that?

I guess one question I had was... are we depending on the fact that
ComputeIndexAttrs() performs a bunch of internal sanity checks?  Or
are we just expecting those to always pass, and we're going to examine
the outputs after the fact?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Small SSI issues

2011-07-07 Thread Robert Haas
On Tue, Jul 5, 2011 at 12:03 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Robert Haas robertmh...@gmail.com wrote:

 Well, as long as we can verify that OLDSERXID_MAX_PAGE has the
 same value for BLCKSZ=8K before and after this patch, I don't see
 any real downside to applying it.  If, hypothetically, it's buggy,
 it's only going to break things for non-default block sizes which
 are, by your description, not correct right now anyway.

 Outside of a code comment, the entire patch consists of replacing
 the definition of the OLDSERXID_MAX_PAGE macro.  The old definition
 is:

 (SLRU_PAGES_PER_SEGMENT * 0x10000 - 1)

 SLRU_PAGES_PER_SEGMENT is defined to be 32.  So this is:

  (32 * 0x10000) - 1 = 2097151

 The new definition is the min of the old one and a value based on
 BLCKSZ:

  ((MaxTransactionId + 1) / OLDSERXID_ENTRIESPERPAGE - 1)

 Where entries per page is BLCKSZ / sizeof(uint64).

 For an 8K BLCKSZ this is:

  ((0xFFFFFFFF + 1) / 1024) - 1 = 4194303

 So the macro is guaranteed to have the same value as it currently
 does for BLCKSZ of 16KB or lower.

I went ahead and committed this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE

2011-07-07 Thread Noah Misch
On Thu, Jul 07, 2011 at 03:06:46PM -0400, Robert Haas wrote:
 On Thu, Jul 7, 2011 at 2:55 PM, Noah Misch n...@2ndquadrant.com wrote:
  CheckIndexCompatible() calls ComputeIndexAttrs() to resolve the new operator
  classes, collations and exclusion operators for each index column.  It then
  checks those against the existing values for the same.  I figured that was
  obvious enough, but do you want a new version noting that?
 
 I guess one question I had was... are we depending on the fact that
 ComputeIndexAttrs() performs a bunch of internal sanity checks?  Or
 are we just expecting those to always pass, and we're going to examine
 the outputs after the fact?

Those checks can fail; consider an explicit operator class or collation that
does not support the destination type.  At that stage, we neither rely on those
checks nor mind if they do fire.  If we somehow miss the problem at that stage,
DefineIndex() will detect it later.  Likewise, if we hit an error in
CheckIndexCompatible(), we would also hit it later in DefineIndex().



Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 3:21 PM, Noah Misch n...@2ndquadrant.com wrote:
 On Thu, Jul 07, 2011 at 03:06:46PM -0400, Robert Haas wrote:
 On Thu, Jul 7, 2011 at 2:55 PM, Noah Misch n...@2ndquadrant.com wrote:
   CheckIndexCompatible() calls ComputeIndexAttrs() to resolve the new operator
   classes, collations and exclusion operators for each index column.  It then
   checks those against the existing values for the same.  I figured that was
   obvious enough, but do you want a new version noting that?

 I guess one question I had was... are we depending on the fact that
 ComputeIndexAttrs() performs a bunch of internal sanity checks?  Or
 are we just expecting those to always pass, and we're going to examine
 the outputs after the fact?

  Those checks can fail; consider an explicit operator class or collation that
  does not support the destination type.  At that stage, we neither rely on those
  checks nor mind if they do fire.  If we somehow miss the problem at that stage,
  DefineIndex() will detect it later.  Likewise, if we hit an error in
  CheckIndexCompatible(), we would also hit it later in DefineIndex().

OK.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Inconsistency between postgresql.conf and docs

2011-07-07 Thread Peter Eisentraut
On tor, 2011-07-07 at 13:26 -0400, Tom Lane wrote:
 OK, so the plan is to move these settings into a separate top-level
 group Replication, and sub-divide into master and standby settings,

Most of the messages use the term "primary" rather than "master".  I
think there was a discussion in 9.0 in favor of that term.




Re: [HACKERS] Make relation_openrv atomic wrt DDL

2011-07-07 Thread Noah Misch
On Thu, Jul 07, 2011 at 11:43:30AM -0400, Robert Haas wrote:
 On Wed, Jul 6, 2011 at 10:44 PM, Noah Misch n...@2ndquadrant.com wrote:
  Attached.  I made the counter 64 bits wide, handled the nothing-found case per
  your idea, and improved a few comments cosmetically.  I have not attempted to
  improve the search_path interposition case.  We can recommend the workaround
  above, and doing better looks like an excursion much larger than the one
  represented by this patch.
 
 I looked at this some more and started to get uncomfortable with the
 whole idea of having RangeVarLockRelid() be a wrapper around
 RangeVarGetRelid().  This hazard exists everywhere the latter function
 gets called, not just in relation_open().  So it doesn't seem right to
 fix the problem only in those places.

Yes; I wished to focus on the major case for this round, then address the
other callers later.  We can do it this way, though.

It does make long-term sense to expose only the lock-taking form, making
otherwise-unaffected callers say NoLock explicitly.

 So I went through and incorporated the logic proposed for
 RangeVarLockRelid() into RangeVarGetRelid() itself, and then went
 through and examined all the callers of RangeVarGetRelid().  There are
 some, such as has_table_privilege(), where it's really impractical to
 take any lock, first because we might have no privileges at all on
 that table and second because that could easily lead to a massive
 amount of locking for no particular good reason.  I believe Tom
 suggested that the right fix for these functions is to have them
 index-scan the system catalogs using the caller's MVCC snapshot, which
 would be right at least for pg_dump.  And there are other callers that
 cannot acquire the lock as part of RangeVarGetRelid() for a variety of
 other reasons.  However, having said that, there do appear to be a
 number of cases that can be fixed fairly easily.
 
 So here's a (heavily) updated patch that tries to do that, along with
 adding comments to the places where things still need more fixing.  In
 addition to the problems corrected by your last version, this fixes
 LOCK TABLE, ALTER SEQUENCE, ALTER TABLE .. RENAME, the whole-table
 variant of REINDEX, CREATE CONSTRAINT TRIGGER (which is flat-out wrong
 as it stands, since it acquires *no lock at all* on the table
 specified in the FROM clause, never mind the question of doing so
 atomically), CREATE RULE, and (partially) DROP TRIGGER and DROP RULE.

Looks basically sound; see a few code comments below.

 Regardless of exactly how we decide to proceed here, it strikes me
 that there is a heck of a lot more work that could stand to be done in
 this area...  :-(

Yes.  DDL-DDL concurrency is a much smaller practical concern, but it is a
quality-of-implementation hole.

 --- a/src/backend/catalog/namespace.c
 +++ b/src/backend/catalog/namespace.c

 @@ -238,37 +246,121 @@ RangeVarGetRelid(const RangeVar *relation, bool failOK)
  	}
 
  	/*
 -	 * Some non-default relpersistence value may have been specified.  The
 -	 * parser never generates such a RangeVar in simple DML, but it can happen
 -	 * in contexts such as CREATE TEMP TABLE foo (f1 int PRIMARY KEY).  Such
 -	 * a command will generate an added CREATE INDEX operation, which must be
 -	 * careful to find the temp table, even when pg_temp is not first in the
 -	 * search path.
 +	 * If lockmode = NoLock, the caller is assumed to already hold some sort
 +	 * of heavyweight lock on the target relation.  Otherwise, we're proceeding
 +	 * here with no lock at all, which means that any answers we get must be
 +	 * viewed with a certain degree of suspicion.  In particular, any time we
 +	 * AcceptInvalidationMessages(), the answer might change.  We handle that
 +	 * case by retrying the operation until either (1) no more invalidation
 +	 * messages show up or (2) the answer doesn't change.

The third sentence is true for all lock levels.  The fourth sentence is true
for lock levels _except_ NoLock.

 +	/*
 +	 * If no lock requested, we assume the caller knows what they're
 +	 * doing.  They should have already acquired a heavyweight lock on
 +	 * this relation earlier in the processing of this same statement,
 +	 * so it wouldn't be appropriate to AcceptInvalidationMessages()
 +	 * here, as that might pull the rug out from under them.
 +	 */

What sort of rug-pullout do you have in mind?  Also, I don't think it matters
if the caller acquired the lock during this _statement_.  It just needs to
hold a lock, somehow.

 --- a/src/backend/commands/lockcmds.c
 +++ b/src/backend/commands/lockcmds.c

 @@ -69,67 +70,10 @@ LockTableCommand(LockStmt *lockstmt)
   * rv is NULL and we should just silently ignore any dropped child rel.

This comment refers to a now-removed argument.

   */
  static void
 -LockTableRecurse(Oid 

[HACKERS] excessive backpatching of gitignore files

2011-07-07 Thread Peter Eisentraut
I noticed that the 8.4 branch has a file contrib/pg_upgrade/.gitignore
even though that version does not ship pg_upgrade.  Apparently, a
few .gitignore files have been backpatched without checking whether they
apply.  pg_archivecleanup and unaccent are also affected, and there
might be other directories introduced in other branches.  Should
probably be cleaned up.





Re: [HACKERS] Inconsistency between postgresql.conf and docs

2011-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On tor, 2011-07-07 at 13:26 -0400, Tom Lane wrote:
 OK, so the plan is to move these settings into a separate top-level
 group Replication, and sub-divide into master and standby settings,

 Most of the messages use the term primary rather than master.  I
 think there was a discussion in 9.0 in favor of that term.

Well, there seems to be a lot more usage of the term "master" than
the other in the docs ...

regards, tom lane



Re: [HACKERS] spurious use of %m format in pg_upgrade

2011-07-07 Thread Peter Eisentraut
On tor, 2011-07-07 at 00:22 -0400, Tom Lane wrote:
 Is there a way to persuade gcc to complain about such extensions when
 used in contexts where we don't know they work?

I don't think so.

First of all, the comment in pg_config_manual.h says that we *want* the
compiler to recognize %m as valid, and we apply the same attribute
globally.  And secondly, the difference between the gnu_printf attribute
and the plain printf attribute is that the latter checks for what the
target's C library accepts, which is equivalent to gnu_printf if you're
on glibc, at least.  According to the gcc source code, the target is
supposed to override the conversion specifier list accordingly, but it
looks like, for example, FreeBSD doesn't do that, which could be
considered a bug there.  The only override in the gcc source code itself
is MinGW, which is where the whole trouble in pg_config_manual.h stems
from in the first place.  (Look for TARGET_OVERRIDES_FORMAT_ATTRIBUTES
in the gcc source if you want to investigate.)

There does not, unfortunately, appear to be an attribute that says to
check conversion specifiers according to some standard (even though the
internal structures of gcc appear to support that, but they'd currently
have %m in the wrong category there anyway).





Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 That yields a much smaller patch (attached). How does this look to
 you, am I missing anything?
 
 Very clever.  I'll need to study this and think about it.  I'll try
 to post a response before I go to bed tonight.  Hopefully Dan can
 weigh in, too.  (I know he was traveling with limited Internet
 access -- not sure about his return date.)

Just so everybody's clear --- this isn't making beta3, if it's not going
to get committed today.

regards, tom lane



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Heikki Linnakangas

On 07.07.2011 21:50, Heikki Linnakangas wrote:

On 07.07.2011 19:41, Kevin Grittner wrote:

Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:

On 05.07.2011 20:03, Kevin Grittner wrote:

In reviewing the 2PC changes mentioned in a separate post, both
Dan and I realized that these were dependent on the assumption
that SSI's commitSeqNo is assigned in the order in which the
transactions became visible.


This comment in the patch actually suggests a stronger
requirement:


+ * For correct SERIALIZABLE semantics, the commitSeqNo must appear to be set
+ * atomically with the work of the transaction becoming visible to other
+ * transactions.


So, is it enough for the commitSeqNos to be set in the order the
transactions become visible to others? I'm assuming 'yes' for now,
as the approach being discussed to assign commitSeqNo in
ProcArrayEndTransaction() without also holding
SerializableXactHashLock is not going to work otherwise, and if I
understood correctly you didn't see any correctness issue with
that. Please shout if I'm missing something.


Hmm. The more strict semantics are much easier to reason about and
ensure correctness.


True.


I think that the looser semantics can be made
to work, but there needs to be more fussy logic in the SSI code to
deal with the fact that we don't know whether the visibility change
has occurred. Really what pushed us to the patch using the stricter
semantics was how much the discussion of how to cover the edge
conditions with the looser semantics made our heads spin. Anything
that confusing seems more prone to subtle bugs.


Let's step back and analyse a bit closer what goes wrong with the
current semantics. The problem is that commitSeqNo is assigned too late,
sometime after the transaction has become visible to others.

Looking at the comparisons that we do with commitSeqNos, they all have
conservative direction that we can err to, when in doubt. So here's an
idea:

Let's have two sequence numbers for each transaction: prepareSeqNo and
commitSeqNo. prepareSeqNo is assigned when a transaction is prepared (in
PreCommit_CheckForSerializableConflicts), and commitSeqNo is assigned
when it's committed (in ReleasePredicateLocks). They are both assigned
from one counter, LastSxactCommitSeqNo, so that is advanced twice per
transaction, and prepareSeqNo is always smaller than commitSeqNo for a
transaction. Modify operations that currently use commitSeqNo to use
either prepareSeqNo or commitSeqNo, so that we err on the safe side.

That yields a much smaller patch (attached). How does this look to you,
am I missing anything?


Committed, so that we get this into beta3. We had some quick offlist 
discussion with Kevin and Dan about this; Kevin pointed out a few more 
places that needed to use prepareSeqNo instead of commitSeqNo, out of 
which one seemed legitimate to me, and was included in the committed patch.


Kevin and Dan also pointed out that a 2PC transaction can stay in 
prepared state for a long time, and we could optimize that by setting 
prepareSeqNo only at the final COMMIT PREPARED. I objected to that, for 
the reason that it's currently very convenient for testing purposes that 
a transaction prepared with PREPARE TRANSACTION is in the exact same 
state as regular transaction that's going through regular commit processing.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Tom Lane t...@sss.pgh.pa.us wrote:
 Just so everybody's clear --- this isn't making beta3, if it's not
 going to get committed today.
 
 Yeah, Heikki let me know that off-list.  I thought the last mention
 on the -hackers list of a cutoff date for that was the 11th.  Did I
 misunderstand or was there some later change which didn't get
 mentioned on the list?

The 11th is the announce date.  We normally wrap the Thursday before,
so that packagers (particularly the Windows-installer crew) have some
time to do their thing.

regards, tom lane



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 Kevin and Dan also pointed out that a 2PC transaction can stay in 
 prepared state for a long time, and we could optimize that by setting 
 prepareSeqNo only at the final COMMIT PREPARED. I objected to that, for 
 the reason that it's currently very convenient for testing purposes that 
 a transaction prepared with PREPARE TRANSACTION is in the exact same 
 state as regular transaction that's going through regular commit processing.

Seems to me there's a more fundamental reason not to do that, which is
that once you've done PREPARE it is no longer legitimate to decide to
roll back the transaction to get out of a dangerous structure --- ie,
you have to target one of the other xacts involved instead.  Shouldn't
the assignment of a prepareSeqNo correspond to the point where the xact
is no longer a target for SSI rollback?

regards, tom lane



[HACKERS] -Wformat-zero-length

2011-07-07 Thread Peter Eisentraut
I was adding gcc printf attributes to more functions in obscure places,
and now I'm seeing this in pg_upgrade:

relfilenode.c:72:2: warning: zero-length gnu_printf format string [-Wformat-zero-length]

So the options I can see are either adding the compiler option
-Wno-format-zero-length (with configure check and all), or hack up the
code to do something like this instead:  Before:

prep_status("");

After:

prep_status("%s", "");

Comments?





Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Dan Ports
On Thu, Jul 07, 2011 at 04:48:55PM -0400, Tom Lane wrote:
 Seems to me there's a more fundamental reason not to do that, which is
 that once you've done PREPARE it is no longer legitimate to decide to
 roll back the transaction to get out of a dangerous structure --- ie,
 you have to target one of the other xacts involved instead.  Shouldn't
 the assignment of a prepareSeqNo correspond to the point where the xact
 is no longer a target for SSI rollback?

That part is already accomplished by setting SXACT_FLAG_PREPARED (and
choosing a new victim if we think we want to abort a transaction with
that flag set).

prepareSeqNo is being used as a lower bound on the transaction's commit
sequence number. It's currently set at the same time as the PREPARED
flag, but it doesn't have to be.

Dan

-- 
Dan R. K. Ports  MIT CSAILhttp://drkp.net/



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 Kevin and Dan also pointed out that a 2PC transaction can stay
 in prepared state for a long time, and we could optimize that by
 setting prepareSeqNo only at the final COMMIT PREPARED. I
 objected to that, for the reason that it's currently very
 convenient for testing purposes that a transaction prepared with
 PREPARE TRANSACTION is in the exact same state as regular
 transaction that's going through regular commit processing.
 
 Seems to me there's a more fundamental reason not to do that,
 which is that once you've done PREPARE it is no longer legitimate
 to decide to roll back the transaction to get out of a dangerous
 structure --- ie, you have to target one of the other xacts
 involved instead. Shouldn't the assignment of a prepareSeqNo
 correspond to the point where the xact is no longer a target for
 SSI rollback?
 
No, it wouldn't be useful at all if we had an accurate commitSeqNo. 
It's being used to bracket the moment of visibility, which with
Heikki's patch now falls between the prepareSeqNo and commitSeqNo. 
The name is perhaps misleading.  It's useful only for determining
what *other* transactions require rollback.
 
In any event, SSI will never cause a transaction to fail after it
has prepared.  This patch doesn't change that, nor would the change
Dan and I suggested.
 
-Kevin



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Dan Ports
We should also apply the attached patch, which corrects a minor issue
with the conditions for flagging transactions that could potentially
make a snapshot unsafe.

There's a small window wherein a transaction is committed but not yet
on the finished list, and we shouldn't flag it as a potential conflict
if so. We can also skip adding a doomed transaction to the list of
possible conflicts because we know it won't commit.

This is not really a related issue, but Kevin and I found it while
looking into this issue, and it was included in the patch we sent out.

Dan

-- 
Dan R. K. Ports  MIT CSAILhttp://drkp.net/
*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
*************** RegisterSerializableTransactionInt(Snapshot snapshot)
*** 1669,1676 ****
  			 othersxact != NULL;
  			 othersxact = NextPredXact(othersxact))
  		{
! 			if (!SxactIsOnFinishedList(othersxact) &&
! 				!SxactIsReadOnly(othersxact))
  			{
  				SetPossibleUnsafeConflict(sxact, othersxact);
  			}
--- 1676,1684 ----
  			 othersxact != NULL;
  			 othersxact = NextPredXact(othersxact))
  		{
! 			if (!SxactIsCommitted(othersxact)
! 				&& !SxactIsDoomed(othersxact)
! 				&& !SxactIsReadOnly(othersxact))
  			{
  				SetPossibleUnsafeConflict(sxact, othersxact);
  			}



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Kevin Grittner
Dan Ports d...@csail.mit.edu wrote:
 We should also apply the attached patch, which corrects a minor
 issue with the conditions for flagging transactions that could
 potentially make a snapshot unsafe.
 
 There's a small window wherein a transaction is committed but not
 yet on the finished list, and we shouldn't flag it as a potential
 conflict if so. We can also skip adding a doomed transaction to
 the list of possible conflicts because we know it won't commit.
 
 This is not really a related issue, but Kevin and I found it while
 looking into this issue, and it was included in the patch we sent
 out.
 
Agreed -- that was in the SSI atomic commit patch, but probably
should have been submitted separately.  The issue is really
orthogonal -- it's just that we found it while looking at the other
race condition.
 
-Kevin



Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space

2011-07-07 Thread Cédric Villemain
2011/6/24 Mark Kirkwood mark.kirkw...@catalyst.net.nz:

 This  version moves the check *before* we write the new buffer, so should
 take care of issues about really large write buffers, plugins etc. Also I
 *think* that means there is no need to amend the documentation.

 Cheers

 Mark

 P.s: Hopefully I've got a context diff this time, just realized that git
 diff -c does *not* produce a context diff (doh).



Mark, I have this (hopefully) final review in my TODO list, I won't be
able to check until CHAR(11) is done, and probably not before the 19th
of July.

So... if anyone wants to make progress in the meantime, please do.
-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Heikki Linnakangas

On 08.07.2011 00:21, Dan Ports wrote:

We should also apply the attached patch, which corrects a minor issue
with the conditions for flagging transactions that could potentially
make a snapshot unsafe.

There's a small window wherein a transaction is committed but not yet
on the finished list, and we shouldn't flag it as a potential conflict
if so. We can also skip adding a doomed transaction to the list of
possible conflicts because we know it won't commit.


Hmm, it's now also possible for the transaction to be prepared, and 
already visible to others, but not yet flagged as committed. Shouldn't 
those be included too?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Heikki Linnakangas

On 08.07.2011 00:33, Heikki Linnakangas wrote:

On 08.07.2011 00:21, Dan Ports wrote:

We should also apply the attached patch, which corrects a minor issue
with the conditions for flagging transactions that could potentially
make a snapshot unsafe.

There's a small window wherein a transaction is committed but not yet
on the finished list, and we shouldn't flag it as a potential conflict
if so. We can also skip adding a doomed transaction to the list of
possible conflicts because we know it won't commit.


Hmm, it's now also possible for the transaction to be prepared, and
already visible to others, but not yet flagged as committed. Shouldn't
those be included too?


Oh, never mind, I read the test backwards. They are included, as the 
patch stands. I'll commit this too..


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] SSI atomic commit

2011-07-07 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 I'll commit this too..
 
Thanks much for staying up past midnight to get these into beta3!
 
-Kevin



Re: [HACKERS] -Wformat-zero-length

2011-07-07 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 I was adding gcc printf attributes to more functions in obscure places,
 and now I'm seeing this in pg_upgrade:

 relfilenode.c:72:2: warning: zero-length gnu_printf format string 
 [-Wformat-zero-length]

 So the options I can see are either adding the compiler option
 -Wno-format-zero-length (with configure check and all), or hack up the
 code to do something like this instead:  Before:

 prep_status("");

 After:

 prep_status("%s", "");

 Comments?

Shouldn't it be prep_status("\n")?  If not, why not?  (Right offhand, it
looks to me like prep_status could stand to be redefined in a less
bizarre fashion anyhow.)

regards, tom lane



Re: [HACKERS] -Wformat-zero-length

2011-07-07 Thread Tom Lane
I wrote:
 Peter Eisentraut pete...@gmx.net writes:
 I was adding gcc printf attributes to more functions in obscure places,
 and now I'm seeing this in pg_upgrade:

 relfilenode.c:72:2: warning: zero-length gnu_printf format string 
 [-Wformat-zero-length]

 Shouldn't it be prep_status("\n")?  If not, why not?

On closer inspection, it appears to me that prep_status should never be
called with a string containing a newline, period, and the test it
contains for that case is just brain damage.  The only reason to call it
at all is to produce a line like

message ..

where something more is expected to be added to the line later.  Calls
that are meant to produce a complete line could go directly to pg_log.

This in turn implies that transfer_all_new_dbs's use of the function is
broken and needs to be rethought.

regards, tom lane



Re: [HACKERS] [v9.2] Fix leaky-view problem, part 2

2011-07-07 Thread Noah Misch
On Sun, Jul 03, 2011 at 11:41:47AM +0200, Kohei KaiGai wrote:
 The simplified version of the fix-leaky-view patch. The reloptions support
 for views got split out into the part-0 patch, so that needs to be applied
 prior to this patch. The rest of the logic to prevent unexpected push-down
 across the security barrier is not changed.
 
 Thanks,
 
 2011/6/6 Kohei Kaigai kohei.kai...@emea.nec.com:
  This patch makes it possible to fix the leaky-view problem for qualifiers
  that reference only one side of a join inside the view definition.
 
  The point of this scenario is the criteria by which qualifiers of the
  scanning plan are distributed in distribute_qual_to_rels(). When a
  qualifier references only one side of a join, the optimizer may push it
  down inside the join, even if that crosses the boundary of a subquery
  expanded from a view intended for row-level security. This behavior
  allows the whole of one side of the join to be referenced using functions
  with side effects.
  The solution is quite simple: prohibit distributing qualifiers over the
  boundary of the subquery. However, the performance cost is not ignorable,
  because it also prevents utilizing obviously indexable qualifiers such as
  (id=123), so this patch requires users to hint whether a particular view
  is for row-level security or not.
 
  This patch adds a new CREATE SECURITY VIEW statement that sets a flag
  showing the view was defined for row-level security purposes. This flag
  is stored in reloptions.
  If this flag is set, the optimizer does not distribute qualifiers over
  the boundary of subqueries expanded from security views, except for
  obviously safe qualifiers.
  (Right now, we consider built-in indexable operators safe, but that might
  be arguable.)

I took a moderately-detailed look at this patch.  This jumped out:

 --- a/src/backend/optimizer/util/clauses.c
 +++ b/src/backend/optimizer/util/clauses.c

 +static bool
 +contain_leakable_functions_walker(Node *node, void *context)
 +{
 +	if (node == NULL)
 +		return false;
 +
 +	if (IsA(node, FuncExpr))
 +	{
 +		/*
 +		 * Right now, we have no way to distinguish safe functions from
 +		 * leakable ones, so we treat every function call as possibly
 +		 * leakable.
 +		 */
 +		return true;
 +	}
 +	else if (IsA(node, OpExpr))
 +	{
 +		OpExpr *expr = (OpExpr *) node;
 +
 +		/*
 +		 * Right now, we assume operators implemented by built-in functions
 +		 * are not leakable, so there is no need to prevent optimization.
 +		 */
 +		set_opfuncid(expr);
 +		if (get_func_lang(expr->opfuncid) != INTERNALlanguageId)
 +			return true;
 +		/* else fall through to check args */
 +	}

Any user can do this:

CREATE OPERATOR !-! (PROCEDURE = int4in, RIGHTARG = cstring);
SELECT !-! 'foo';

Making a distinction based simply on the call being an operator vs. a function
is a dead end.  I see these options:

1. The user defining a security view can be assumed to trust the operator class
members of indexes defined on the tables he references.  Keep track of which
those are and treat only them as non-leakable.  This covers many interesting
cases, but it's probably tricky to implement and/or costly at runtime.

2. Add a pg_proc flag indicating whether the function is known leak-free.
Simple, but tedious and perhaps error-prone.

3. Trust operators owned by PGUID.  This is simple and probably covers the
essential cases, but it's an ugly hack.

4. Trust nothing as leak-free.  Simple; performance will be unattractive.

There are probably others.



Re: [HACKERS] Review of VS 2010 support patches

2011-07-07 Thread Brar Piening

 Original Message  
Subject: Re: [HACKERS] Review of VS 2010 support patches
From: Craig Ringer cr...@postnewspapers.com.au
To: Brar Piening b...@gmx.de
Date: 07.07.2011 16:44



Frankly, I suggest leaving these tests for the buildfarm to sort out. 
I don't see any sign of build process issues; they all build fine, and 
it's pretty darn unlikely that build changes would cause them to break 
at runtime. Windows buildfarm coverage looks pretty decent these days.


As I had no idea whether the buildfarm is even ready to work with VS 
2010, I set out and tested it.


I can happily tell you that I have just now completed my first 
successful buildfarm run using the attached build-farm.conf


Regards,

Brar

# -*-perl-*- hey - emacs - this is a perl file

=comment

Copyright (c) 2003-2010, Andrew Dunstan

See accompanying License file for license details

=cut 


package PGBuild;

use strict;

use vars qw(%conf);

# use vars qw($VERSION); $VERSION = 'REL_4.5';

my $branch;
{
no warnings qw(once);
$branch = $main::branch;
}

%conf =
(
 scm => 'git', # or 'cvs'
 scmrepo => undef, # default is community repo for either type
 scm_url => undef, # webref for diffs on server - use default for community
 # git_reference => undef, # for --reference on git repo
 # cvsmethod => 'update', # or 'export'
 use_git_cvsserver => undef, # or 'true' if repo is a git cvsserver
 make => 'make', # or gmake if required. can include path if necessary.
 tar_log_cmd => undef, # default is "tar -z -cf runlogs.tgz *.log"
   # replacement must have the same effect
 # must be absolute, can be either Unix or Windows style for MSVC
 build_root => 'P:\PgBuildFarm\build_root',
 use_vpath => undef, # set true to do vpath builds

 # path to directory with auxiliary web script
 # if relative, it must be relative to buildroot/branch
 # possibly only necessary now on Windows, if at all
 aux_path => "../..",

 keep_error_builds => 0,
 target => "http://www.pgbuildfarm.org/cgi-bin/pgstatus.pl",
 upgrade_target => "http://www.pgbuildfarm.org/cgi-bin/upgrade.pl",
 animal => "CHANGEME",
 secret => "CHANGEME",

 # change this to a true value if using MSVC, in which case also
 # see MSVC section below

 using_msvc => 1,

 # if force_every is a scalar it will be used on all branches, like this
 # for legacy reasons:
 # force_every => 336 , # max hours between builds, undef or 0 = unforced
 # we now prefer it to be a hash with branch names as the keys, like this
 #
 # this setting should be kept conservatively high, or not used at all -
 # for the most part it's best to let the script decide if something
 # has changed that requires a new run for the branch.
 #
 # an entry with a name of 'default' matches any branch not named
 force_every => {
 # HEAD => 48,
 # REL8_3_STABLE => 72,
 # default => 168,
 },


 # alerts are triggered if the server doesn't see a build on a branch after
 # this many hours, and then sent out every so often,

 alerts => {
 #HEAD  => { alert_after => 72,  alert_every => 24 },
 # REL8_1_STABLE => { alert_after => 240, alert_every => 48 },
 },

 print_success => undef,

 # pattern for files whose changes do NOT trigger a build
 # use qr[/(doc|po)/] to ignore changes to docs and po files (recommended)
 # undef means don't ignore anything.
 trigger_filter => qr[/(doc|po)/],

 # settings for mail notices - default to notifying nobody
 # these lists contain addresses to be notified
 # must be complete email addresses, as the email is sent from the server

 mail_events =>
 {
 all => [], # unconditional
 fail => [], # if this build fails
 change => [], # if this build causes a state change
 green => [], # if this build causes a state change to/from OK
 },

 # env settings to apply within build/report process
 # these settings will be seen by all the processes, including the
 # configure process.

 build_env =>
 {
 # use a dedicated cache for the build farm. this should give us
 # very high hit rates and slightly faster cache searching.

 CCACHE_DIR => "/home/andrew/pgfarmbuild/ccache/$branch",

 ### use these settings for CYGWIN
 # CYGWIN => 'server',
 # MAX_CONNECTIONS => '3',

 ### set this if you need a proxy setting for the
 # outbound web transaction that reports the results
 # BF_PROXY = 

Re: [HACKERS] Online base backup from the hot-standby

2011-07-07 Thread Jun Ishiduka

 As you proposed, adding new field which stores the backup end location
 taken from minRecoveryPoint, into pg_control sounds good idea.

Update patch.

Regards.


Jun Ishizuka
NTT Software Corporation
TEL:045-317-7018
E-Mail: ishizuka@po.ntts.co.jp



standby_online_backup_03.patch
Description: Binary data



Re: [HACKERS] Make relation_openrv atomic wrt DDL

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 3:54 PM, Noah Misch n...@2ndquadrant.com wrote:
 Yes.  DDL-DDL concurrency is a much smaller practical concern, but it is a
 quality-of-implementation hole.

Agreed, although I'm not too pleased about the fact that this doesn't
fix nextval().  That seems like a fairly significant hole (but one to
be addressed by a later patch).

 The third sentence is true for all lock levels.  The fourth sentence is true
 for lock levels _except_ NoLock.

I rewrote this whole blurb.  See what you think.

 +             /*
 +              * If no lock requested, we assume the caller knows what 
 they're
 +              * doing.  They should have already acquired a heavyweight 
 lock on
 +              * this relation earlier in the processing of this same 
 statement,
 +              * so it wouldn't be appropriate to 
 AcceptInvalidationMessages()
 +              * here, as that might pull the rug out from under them.
 +              */

 What sort of rug-pullout do you have in mind?  Also, I don't think it matters
 if the caller acquired the lock during this _statement_.  It just needs to
 hold a lock, somehow.

What I'm specifically concerned about here is the possibility that we
have code which does RangeVarGetRelid() multiple times and expects to
get the same relation every time.  Now, granted, any such places are
bugs.  But I am not eager to change the logic here without looking
harder for them (and also for performance reasons).

 ... also making it cleaner to preserve this optimization.

That optimization is now gone anyway.

 Incidentally, you've added in many places this pattern of commenting that a
 lock level must match another lock level used elsewhere.  Would it be better
 to migrate away from looking up a relation oid in one function and opening the
 relation in another function, instead passing either a Relation or a RangeVar?
 You did substitute passing a Relation in a couple of places.

Well, I've got these:

- ReindexTable() must match reindex_relation()
- ExecRenameStmt() must match RenameRelation()
- DropTrigger() must match RemoveTriggerById()
- DefineRule() must match DefineQueryRewrite()
- RemoveRewriteRule() must match RemoveRewriteRuleById()

(Whoever came up with these names was clearly a genius...)

RemoveTriggerById() and RemoveRewriteRuleById() are part of the
drop-statement machinery - they can be called either because that
rule/trigger itself gets dropped, or because some other object gets
dropped and it cascades to the rule/trigger.  I'm pretty sure I don't
want to go start mucking with that machinery.

On the other hand, RenameRelation() is only called in one place other
than ExecRenameStmt(), and it looks to me like the other caller could
just as well use RenameRelationInternal().  If we change that, then we
can redefine RenameRelation() to take a RangeVar instead of an Oid and
push the rest of the logic from ExecRenameStmt() into it.  That would
probably be better all around.

The situation with DefineRule and DefineQueryRewrite is a bit more
nuanced.  As you suggest, those could probably be merged into one
function taking both an Oid and a RangeVar.  If the Oid is not
InvalidOid, we do heap_open(); else, we do heap_openrv().  That's
slightly ugly, but there are only two callers, so maybe it's not so
bad.

Offhand, I don't see any good way to rearrange reindex_relation()
along those lines.  ReindexTable() wants to check permissions, not
just open the relation.  And passing a Relation to reindex_relation()
rather than an Oid or RangeVar is no good either; you still end up
with multiple people needing to know what lock level to use.

At any rate, I'm not inventing this problem; there are plenty of
existing instances where this same phenomenon occurs.  At least I'm
documenting the ones I'm adding.  There's probably room for further
improvement and restructuring of this code, but right at the moment I
feel like the reasonable alternatives are (a) to pass a lock level
that matches what will ultimately be taken later on or (b) pass
NoLock.  I'm voting for the former.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


atomic-rangevargetrelid-v2.patch
Description: Binary data



Re: [HACKERS] patch: Allow \dd to show constraint comments

2011-07-07 Thread Robert Haas
On Wed, Jun 29, 2011 at 12:15 PM, Merlin Moncure mmonc...@gmail.com wrote:
 I was kind of hoping to avoid dealing with this can of worms with this
 simple patch, which by itself seems uncontroversial. If there's
 consensus that \dd and the other backslash commands need further
 reworking, I can probably devote a little more time to this. But let's
 not have the perfect be the enemy of the good.

 Patch applies clean, does what it is supposed to do, and matches other
 conventions in describe.c  Passing to committer.   pg_comments may be
 a better way to go, but that is a problem for another day...

I am inclined to say that we should reject this patch as it stands.
With or without pg_comments, I think we need a plan for \dd, and
adding one object type is not a plan.  The closest thing I've seen to
a plan is this comment from Josh:

--
ISTM that \dd is best suited as a command to show the comments for
objects for which we don't have an individual backslash command, or
objects for which it's not practical to show the comment in the
backslash command.
--

If someone wants to implement that, I'm OK with it, though I think we
should also consider the alternative of abolishing \dd and just always
display the comments via the per-object type commands (e.g. \d+ would
display the table, constraint, trigger, and rule comments).  I don't
want to, as Josh says, let the perfect be the enemy of the good, but
if we change this as proposed we're just going to end up changing it
again.  That's going to be more work than just doing it once, and if
it happens over the course of multiple releases, then it creates more
annoyance for our users, too.  I don't really think this is such a
large project that we can't get it right in one try.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [GENERAL] Creating temp tables inside read only transactions

2011-07-07 Thread Darren Duncan

Guillaume Lelarge wrote [on pgsql-general]:

On Thu, 2011-07-07 at 16:01 +, mike beeper wrote [on pgsql-general]:

I have a function that creates a temp table, populate it with results
during intermediate processing, and reads from it at the end.  When
the transaction is marked as read only, it does not allow creation of
temp table, even though there are no permanent writes to the db.  Are
there any workarounds? The following block errors out.

SET TRANSACTION ISOLATION LEVEL READ COMMITTED READ ONLY;
create temp table test(test int);


When you create a temporary table, PostgreSQL needs to add rows in
pg_class, pg_attribute, and probably other system catalogs. So there are
writes, which aren't possible in a read-only transaction. Hence the
error. And no, there is no workaround.


That sounds like a deficiency to overcome.

It should be possible for those system catalogs to be virtual, defined like 
union views over similar immutable tables for the read-only database plus 
mutable in-memory ones for the temporary tables.


Are there any plans in the works to do this?

On the other hand, if one can have lexically scoped tables (table-typed routine 
variables), and given that Pg 8.4+ has named subqueries which handle a lot of cases 
where temp tables would otherwise be used, I would certainly expect those to 
work when you're dealing with a read-only database.


-- Darren Duncan



[HACKERS] proposal: new contrib module plpgsql's embeded sql validator

2011-07-07 Thread Pavel Stehule
Hello all,

lazy, deep validation of the SQL inside plpgsql functions is an
interesting attribute. It allows working with temporary tables, but it
makes testing and debugging harder, because a lot of errors in embedded
queries are detected too late. I wrote a simple module that can help a
little bit. It is based on the plpgsql plugin API and it ensures deep
validation of embedded SQL early - right after execution starts. I
think this plugin is really useful, and it is an example of a plpgsql
plugin - something that is missing in contrib.

Example:

buggy function - raises an error when par > 10


CREATE OR REPLACE FUNCTION public.kuku(a integer)
 RETURNS integer
 LANGUAGE plpgsql
AS $function$
begin
  if (a > 10) then
return b + 1;
  else
return a + 1;
  end if;
end;
$function$

but it works for par <= 10

postgres=# select kuku(1);
 kuku
--
2
(1 row)

postgres=# load 'plpgsql';
LOAD
postgres=# load 'plpgsql_esql_checker';
LOAD
postgres=# select kuku(1);
ERROR:  column b does not exist
LINE 1: SELECT b + 1
   ^
QUERY:  SELECT b + 1
CONTEXT:  PL/pgSQL function kuku line 3 at RETURN

with the esql checker, this bug is identified without depending on the
parameter value used

What do you think about this idea?

The code contains a plpgsql_statement_tree walker - it should be moved
to core and used generally - statistics, coverage tests, ...

Regards

Pavel Stehule


plpgsql-checker-9.0.tgz
Description: GNU Zip compressed data



Re: [HACKERS] Extra check in 9.0 exclusion constraint unintended consequences

2011-07-07 Thread Jeff Davis
On Thu, 2011-07-07 at 12:36 -0400, Robert Haas wrote:
 I think it's probably too late to go fiddling with the behavior of 9.0
 at this point.  If we change the text of error messages, there is a
 chance that it might break applications; it would also require those
 messages to be re-translated, and I don't think the issue is really
 important enough to justify a change.

Good point on the error messages -- I didn't really think of that as a
big deal.

 I am happy to see us document
 it better, though, since it's pretty clear that there is more
 likelihood of hitting that error than we might have suspected at the
 outset.

Doc patch attached, but I'm not attached to the wording. Remember that
we only need to update the 9.0 docs, I don't think you want to apply
this to master (though I'm not sure how this kind of thing is normally
handled).

Regards,
Jeff Davis


excl-oper-doc.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] [GENERAL] Creating temp tables inside read only transactions

2011-07-07 Thread Jeff Davis
On Thu, 2011-07-07 at 20:56 -0700, Darren Duncan wrote:
  When you create a temporary table, PostgreSQL needs to add rows in
  pg_class, pg_attribute, and probably other system catalogs. So there are
  writes, which aren't possible in a read-only transaction. Hence the
  error. And no, there is no workaround.
 
 That sounds like a deficiency to overcome.
 
 It should be possible for those system catalogs to be virtual, defined like 
 union views over similar immutable tables for the read-only database plus 
 mutable in-memory ones for the temporary tables.

Ideally, yes, from a logical standpoint there are catalog entries that
are only interesting to one backend.

But that doesn't mean it's easy to do. Remember that catalog lookups
(even though most go through a cache) are a path that is important to
performance. Also, more complex catalog interpretations may introduce
some extra bootstrapping challenges.

 Are there any plans in the works to do this?

I don't think so. It sounds like some fairly major work for a
comparatively minor benefit.

Suggestions welcome, of course, to either make the work look more minor
or the benefits look more major ;)

Regards,
Jeff Davis




Re: [HACKERS] dropping table in testcase alter_table.sql

2011-07-07 Thread Ashutosh Bapat
On Thu, Jul 7, 2011 at 9:41 PM, Robert Haas robertmh...@gmail.com wrote:

 On Thu, Jul 7, 2011 at 3:05 AM, Ashutosh Bapat
 ashutosh.ba...@enterprisedb.com wrote:
  I noticed that the test alter_table.sql is creating two tables tab1 and
 tab2
  and it's not dropping it. Any test which follows this test and tries to
  create tables with names tab1 and tab2 will fail (unless it drops those
  tables first, but that may not work, since tab2.y depends upon tab1 in
  alter_table.sql).
 
  PFA patch which drops these two tables from alter_table.sql and
  corresponding OUT change. The regression run clean with this patch.

 The regression tests leave lots of objects lying around in the
 regression database... why drop these two, as opposed to any of the
 others?


I think tab1 and tab2 are names too common for anyone to pick for their
tables. Also, the test alter_table.sql drops many other tables (even
those which have undergone renaming), so why not these two?



 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company