[HACKERS] Varlena and binary

2011-02-06 Thread Radosław Smogura
Hi,

I'm sending a small patch for textsend. It reduces unnecessary copies, and 
memory usage, when duplicating varlena data. Would you take a look?

Kind regards,
Radosław Smogura
diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c
index e111d26..f24bbcd 100644
--- a/src/backend/utils/adt/varlena.c
+++ b/src/backend/utils/adt/varlena.c
@@ -442,12 +442,20 @@ textrecv(PG_FUNCTION_ARGS)
 Datum
 textsend(PG_FUNCTION_ARGS)
 {
-	text	   *t = PG_GETARG_TEXT_PP(0);
-	StringInfoData buf;
-
-	pq_begintypsend(&buf);
-	pq_sendtext(&buf, VARDATA_ANY(t), VARSIZE_ANY_EXHDR(t));
-	PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
+	text	   *t = PG_GETARG_TEXT_PP(0);
+	const char *textData = VARDATA_ANY(t);
+	const int	textSize = VARSIZE_ANY_EXHDR(t);
+	char	   *textConverted = pg_server_to_client(textData, textSize);
+	/* Logic based on pq_sendtext */
+	if (textConverted == textData) {
+		PG_RETURN_BYTEA_P(t);
+	} else {
+		StringInfoData buf;
+		pq_begintypsend(&buf);
+		appendBinaryStringInfo(&buf, textConverted, strlen(textConverted));
+		pfree(textConverted);
+		PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
+	}
 }
 
 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Spread checkpoint sync

2011-02-06 Thread Greg Smith

Robert Haas wrote:

With the fsync queue compaction patch applied, I think most of this is
now not needed.  Attached please find an attempt to isolate the
portion that looks like it might still be useful.  The basic idea of
what remains here is to make the background writer still do its normal
stuff even when it's checkpointing.  In particular, with this patch
applied, PG will:

1. Absorb fsync requests a lot more often during the sync phase.
2. Still try to run the cleaning scan during the sync phase.
3. Pause for 3 seconds after every fsync.
  


Yes, the bits you extracted were the remaining useful parts from the 
original patch.  Today was quiet here because there were sports on or 
something, and I added full auto-tuning magic to the attached update.  I 
need to kick off benchmarks and report back tomorrow to see how well 
this does, but any additional patch here would only be code cleanup on 
the messy stuff I did in here (plus proper implementation of the pair of 
GUCs).  This has finally gotten to the exact logic I've been meaning to 
complete as spread sync since the idea was first postponed in 8.3, with 
the benefit of some fsync absorption improvements along the way too.


The automatic timing is modeled on the existing 
checkpoint_completion_target concept, except with a new tunable (not yet 
added as a GUC) currently called CheckPointSyncTarget, set to 0.8 right 
now.  What I think I want to do is make the existing 
checkpoint_completion_target now be the target for the end of the sync 
phase, matching its name; people who bumped it up won't necessarily even 
have to change anything.  Then the new guc can be 
checkpoint_write_target, representing the target that is in there right now.


I tossed the earlier idea of counting relations to sync based on the 
write phase data as too inaccurate after testing, and with it for now 
goes checkpoint sorting.  Instead, I just take a first pass over 
pendingOpsTable to get a total number of things to sync, which will 
always match the real count barring strange circumstances (like dropping 
a table).


As for automatically determining the interval, I take the number of 
syncs that have finished so far, divide by the total, and get a number 
between 0.0 and 1.0 that represents progress on the sync phase.  I then 
use the same basic CheckpointWriteDelay logic that is there for 
spreading writes out, except with the new sync target.  I realized that 
if we assume the checkpoint writes should have finished in 
CheckPointCompletionTarget worth of time or segments, we can compute a 
new progress metric with the formula:


progress = CheckPointCompletionTarget + (1.0 - CheckPointCompletionTarget) * finished / goal;


Where "finished" is the number of segments written out, while "goal" is 
the total.  To turn this into an example, let's say the default 
parameters are set, the writes are done, and 1 out of 4 syncs has 
finished; that much work will be considered:


progress = 0.5 + (1.0 - 0.5) * 1 / 4 = 0.625

On a scale that effectively aims to have the sync work finished by 0.8.

I don't use quite the same logic as CheckpointWriteDelay though.  It 
turns out the existing checkpoint_completion implementation doesn't 
always work like I thought it did, which provided some very interesting 
insight into why my attempts to work around checkpoint problems haven't 
worked as well as expected the last few years.  I thought that what it 
did was wait until an amount of time determined by the target was 
reached before it did the next write.  That's not quite it; what it 
actually does is check progress against the target, then sleep exactly 
one nap interval if it is ahead of schedule.  That is only the same 
thing if you have a lot of buffers to write relative to the amount of 
time involved.  There's some alternative logic if you don't have 
bgwriter_lru_maxpages set, but in the normal situation it effectively 
means that:


maximum write spread time = bgwriter_delay * checkpoint dirty blocks

No matter how far apart you try to spread the checkpoints.  Now, 
typically, when people run into these checkpoint spikes in production, 
reducing shared_buffers improves that.  But I now realize that doing so 
will then reduce the average number of dirty blocks participating in the 
checkpoint, and therefore potentially pull the spread down at the same 
time!  Also, if you try to tune bgwriter_delay down to get better 
background cleaning, you're also reducing the maximum spread.  Between 
this issue and the bad behavior when the fsync queue fills, no wonder 
this has been so hard to tune out of production systems.  At some point, 
lowering the amount of data involved shrinks the spread enough to defeat 
further attempts to reduce what's written at checkpoint time.


What I do instead is nap until just after the planned schedule, then 
execute the sync.  What ends up happening then is that there can be a 
long pause between the end of the write phase and 

Re: [HACKERS] SQL/MED - file_fdw

2011-02-06 Thread Shigeru HANADA
On Fri, 4 Feb 2011 10:10:56 -0500
Robert Haas  wrote:
> Was there supposed to be a patch attached here?  Or where is it?  We
> are past out of time to get this committed, and there hasn't been a
> new version in more than two weeks.
Sorry for the delay in posting patches.
Attached is a revised version of the file_fdw patch.

This patch is based on the latest FDW API patches, which are posted in
another thread, "SQL/MED FDW API", and on copy_export-20110104.patch,
which was posted by Itagaki-san.

Regards,
--
Shigeru Hanada


file_fdw.patch.gz
Description: Binary data



Re: [HACKERS] review: FDW API

2011-02-06 Thread Shigeru HANADA
On Mon, 31 Jan 2011 22:00:55 +0900
Shigeru HANADA  wrote:
> I'll post FDW API patches which reflect comments first, and then I'll
> rebase postgresql_fdw against them.

Sorry for the delay; attached are revised versions of the FDW API
patches, which reflect Heikki's comments except for removing the catalog
lookup via IsForeignTable().  ISTM the point is to avoid a catalog
lookup during planning, but I have not found a way to set the "foreign
table" flag without a catalog lookup during RelOptInfo generation.

Please apply attached patches in this order.

1) fdw_catalog_lookup.patch
2) fdw_handler.patch
3) foreign_scan.patch

To execute SELECT queries for foreign tables, you need an FDW which has
a valid fdwhandler function.  The file_fdw posted in another
thread, "SQL/MED file_fdw", would help.

Changes from last patches are:

1) SELECT FOR UPDATE checks for foreign tables are now done properly in
the executor phase, in ExecLockTuple().  Or should such checks be done
in the parser or planner?

2) Server version is checked in pg_dump (>= 90100).

3) ReScan is no longer required.  If ReScan is not supplied, ForeignScan
uses EndScan + BeginScan instead.

4) FDW-Info in EXPLAIN is always shown, unless the FDW sets
explainInfo to NULL.

Regards,
--
Shigeru Hanada


fdw_catalog_lookup.patch.gz
Description: Binary data


fdw_handler.patch.gz
Description: Binary data


foreign_scan.patch.gz
Description: Binary data



Re: [HACKERS] pg_dump directory archive format / parallel pg_dump

2011-02-06 Thread Jaime Casanova
On Sun, Feb 6, 2011 at 2:12 PM, Jaime Casanova  wrote:
> On Tue, Feb 1, 2011 at 11:32 PM, Joachim Wieland  wrote:
>> On Sun, Jan 30, 2011 at 5:26 PM, Robert Haas  wrote:
>>> The parallel pg_dump portion of this patch (i.e. the still-uncommitted
>>> part) no longer applies.  Please rebase.
>>
>> Here is a rebased version with some minor changes as well. I haven't
>> tested it on Windows now but will do so as soon as the Unix part has
>> been reviewed.
>>
>
> code review:
>

ah! two other things I forgot:

- there are no docs
- pg_dump and pg_restore are inconsistent:
  pg_dump requires the directory to be provided with the -f option:
pg_dump -Fd -f dir_dump
  pg_restore passes the directory as an argument to -Fd: pg_restore -Fd dir_dump

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte y capacitación de PostgreSQL



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Noah Misch
On Mon, Feb 07, 2011 at 12:04:02AM -0500, Robert Haas wrote:
> On Sun, Feb 6, 2011 at 8:18 PM, Noah Misch  wrote:
> >> Or how about passing an ObjectType?  Then we could specify
> >> OBJECT_TABLE, OBJECT_FOREIGN_TABLE, or OBJECT_TYPE.
> >
> > Could this be done without a several-line blob of code at each call site to
> > determine the answer?  If and only if so, this sounds better.
> 
> Yeah, that's a problem.  New thought: how about we go back more or
> less to the original coding, except replacing the second argument
> (only) with a Relation?  In other words, callers will pass either a
> Relation (which might be a table or foreign table) or a type name.
> Not particularly elegant, but no worse than what we had before.

Sounds good.  Thanks.



Re: [HACKERS] How to make contrib/sepgsql on Ubuntu Maverick ?

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 5:03 PM, Andrew Dunstan  wrote:
> Yeah, this bit of the Makefile looks bogus:
>
>   sepgsql-regtest.pp: sepgsql-regtest.te
>        $(MAKE) -f $(DESTDIR)/usr/share/selinux/devel/Makefile $@
>

I agree.  That looks bogus.  KaiGai?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] little mistakes in HS/SR

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 6:44 PM, Bruce Momjian  wrote:
> Robert Haas wrote:
>> On Thu, Jul 22, 2010 at 1:41 AM, Fujii Masao  wrote:
>> > We should enclose -1 with  tag.
>>
>> A quick survey of the documentation as a whole suggests that we
>> enclose -1 with  in a few places but more commonly we don't.
>> I have no position on whether we should do it or not, but maybe we
>> should try to be consistent throughout the docs?  Or at least have a
>> consistent rule for deciding what to do in a particular case?
>
> Excellent question.  I went through the documentation and removed
>  tags where appropriate --- there are cases where we are
> referencing an actual number, and there  makes sense.  Applied
> patch attached.
>
> I think the larger question is whether we should say "zero" for 0 and
> "one" for 1, etc.  Prose typography suggests we should, but for
> technical manuals, I am not sure.  Ideas?

I am doubtful that this makes sense in general.  I suspect it depends
somewhat on context.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] REVIEW: PL/Python table functions

2011-02-06 Thread Hitoshi Harada
2011/2/7 Jan Urbański :
> On 04/02/11 16:26, Hitoshi Harada wrote:
>> 2011/1/28 Jan Urbański :
>>> On 27/01/11 00:41, Jan Urbański wrote:
 I'm also attaching an updated version that should apply on top of my
 github refactor branch (or incrementally over the new set of refactor
 patches that I will post shortly to the refactor thread).
>>>
>>> Attached is a patch for master, as the refactorings have already been
>>> merged.
>>
>> Sorry, but could you update your patch? Patching it against HEAD today
>> produces .rej files.
>
> Sure, here's an updated patch.

Thanks,

I revisited the problem of the typeinfo cache, and I guess this is not
what you want:

db1=# create function func1(t text) returns record as $$ return {'v1':1,'v2':2,t:3} $$ language plpythonu;
CREATE FUNCTION
db1=# select * from func1('v3') as (v3 int, v2 int, v1 int);
 v3 | v2 | v1
----+----+----
  3 |  2 |  1
(1 row)

db1=# select * from func1('v3') as (v1 int, v2 int, v3 int);
 v1 | v2 | v3
----+----+----
  3 |  2 |  1
(1 row)

db1=# select * from func1('v4') as (v1 int, v2 int, v3 int);
ERROR:  key "v3" not found in mapping
HINT:  To return null in a column, add the value None to the mapping
with the key named after the column.
CONTEXT:  while creating return value
PL/Python function "func1"
db1=# select * from func1('v4') as (v1 int, v2 int, v4 int);
ERROR:  key "v3" not found in mapping
HINT:  To return null in a column, add the value None to the mapping
with the key named after the column.
CONTEXT:  while creating return value
PL/Python function "func1"

The PL/pgSQL case you pointed out earlier is consistent because it
fetches the values positionally; the column name is only on-demand
labeling.  Mapping a Python dict into a table row, however, should
always be done by key.  At least that is what the function author
(including me :P) expects.

Regards,


-- 
Hitoshi Harada



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 8:18 PM, Noah Misch  wrote:
>> Or how about passing an ObjectType?  Then we could specify
>> OBJECT_TABLE, OBJECT_FOREIGN_TABLE, or OBJECT_TYPE.
>
> Could this be done without a several-line blob of code at each call site to
> determine the answer?  If and only if so, this sounds better.

Yeah, that's a problem.  New thought: how about we go back more or
less to the original coding, except replacing the second argument
(only) with a Relation?  In other words, callers will pass either a
Relation (which might be a table or foreign table) or a type name.
Not particularly elegant, but no worse than what we had before.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] A different approach to extension NO USER DATA feature

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 1:23 PM, Tom Lane  wrote:
> I've not attempted to implement this idea, but offhand it looks like
> it would take a day or two's work, which seems well worthwhile to
> expend now in the hope of getting this feature right the first time.
>
> Comments?

The design you propose seems pretty good, and I don't think we can
commit this without this change.  The scheduling implications are not
great, but since this seems like an important patch I guess we should
live with it, so +1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [COMMITTERS] pgsql: remove tags.

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 11:12 PM, Tom Lane  wrote:
> Bruce Momjian  writes:
>> Bruce Momjian wrote:
>>> remove tags.
>
>> Sorry, vague commit message (I forgot squash).
>
>> Can I use git amend to improve this message?

Absolutely not.

> How about git revert, instead?  It's not apparent to me that these
> changes were improvements.

I'll buy that one.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [COMMITTERS] pgsql: remove tags.

2011-02-06 Thread Tom Lane
Bruce Momjian  writes:
> Bruce Momjian wrote:
>> remove tags.

> Sorry, vague commit message (I forgot squash).

> Can I use git amend to improve this message?

How about git revert, instead?  It's not apparent to me that these
changes were improvements.

regards, tom lane



[HACKERS] Re: [COMMITTERS] pgsql: Force strings passed to and from plperl to be in UTF8 encoding.

2011-02-06 Thread Andrew Dunstan



On 02/06/2011 09:13 PM, Alex Hunsaker wrote:

I would have loved to add some regression
tests for some of this (like the example hek2cstr states). Is there
any way to do that?



I can't think of an obvious way. Anyone else?

cheers

andrew



[HACKERS] Re: [COMMITTERS] pgsql: Force strings passed to and from plperl to be in UTF8 encoding.

2011-02-06 Thread Alex Hunsaker
On Sun, Feb 6, 2011 at 18:02, Andrew Dunstan  wrote:
>
>
>
>
>
> On 02/06/2011 05:31 PM, Andrew Dunstan wrote:
>>
>> Force strings passed to and from plperl to be in UTF8 encoding.
>>
>> Strings are converted to UTF8 on the way into perl and to the
>> database encoding on the way back. This avoids a number of
>> observed anomalies, and ensures Perl has a consistent view of the
>> world.
>>
>> Some minor code cleanups are also accomplished.
>>
>> Alex Hunsaker, reviewed by Andy Colson.
>
>
> This has broken the buildfarm :-(

Drat.

> perl ppport.h reports:
>
>   *** WARNING: Uses HeUTF8, which may not be portable below perl
>   5.11.0, even with 'ppport.h'
> Experimentation on a CentOS machine suggests we can cure it with this:
>
>   #ifndef HeUTF8
>   #define HeUTF8(he)  ((HeKLEN(he) == HEf_SVKEY) ? \
>                        SvUTF8(HeKEY_sv(he)) : \
>                        (U32)HeKUTF8(he))
>   #endif

Yeah, that should work. BTW I would have loved to add some regression
tests for some of this (like the example hek2cstr states). Is there
any way to do that?



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Noah Misch
On Sun, Feb 06, 2011 at 12:54:19PM -0500, Robert Haas wrote:
> On Sun, Feb 6, 2011 at 12:13 PM, Robert Haas  wrote:
> > That's not quite so good for translators, I think.
> >
> > Another option is that we could just say "relation" (table, foreign
> > table, etc...) or "type".  We use the word relation as a more generic
> > version of table in a few other places.

Seems fine.

> Or how about passing an ObjectType?  Then we could specify
> OBJECT_TABLE, OBJECT_FOREIGN_TABLE, or OBJECT_TYPE.

Could this be done without a several-line blob of code at each call site to
determine the answer?  If and only if so, this sounds better.



[HACKERS] Re: [COMMITTERS] pgsql: Force strings passed to and from plperl to be in UTF8 encoding.

2011-02-06 Thread Andrew Dunstan






On 02/06/2011 05:31 PM, Andrew Dunstan wrote:

Force strings passed to and from plperl to be in UTF8 encoding.

Strings are converted to UTF8 on the way into perl and to the
database encoding on the way back. This avoids a number of
observed anomalies, and ensures Perl has a consistent view of the
world.

Some minor code cleanups are also accomplished.

Alex Hunsaker, reviewed by Andy Colson.



This has broken the buildfarm :-(

perl ppport.h reports:

   *** WARNING: Uses HeUTF8, which may not be portable below perl
   5.11.0, even with 'ppport.h'


Experimentation on a CentOS machine suggests we can cure it with this:

   #ifndef HeUTF8
   #define HeUTF8(he)  ((HeKLEN(he) == HEf_SVKEY) ? \
                        SvUTF8(HeKEY_sv(he)) : \
                        (U32)HeKUTF8(he))
   #endif


cheers

andrew



Re: [HACKERS] A different approach to extension NO USER DATA feature

2011-02-06 Thread David E. Wheeler
On Feb 6, 2011, at 10:23 AM, Tom Lane wrote:

> I've not attempted to implement this idea, but offhand it looks like
> it would take a day or two's work, which seems well worthwhile to
> expend now in the hope of getting this feature right the first time.

Seems worthwhile, and a much better approach. But do we really want to limit it 
only to tables?

Best,

David




Re: [HACKERS] little mistakes in HS/SR

2011-02-06 Thread Bruce Momjian
Robert Haas wrote:
> On Thu, Jul 22, 2010 at 1:41 AM, Fujii Masao  wrote:
> > We should enclose -1 with  tag.
> 
> A quick survey of the documentation as a whole suggests that we
> enclose -1 with  in a few places but more commonly we don't.
> I have no position on whether we should do it or not, but maybe we
> should try to be consistent throughout the docs?  Or at least have a
> consistent rule for deciding what to do in a particular case?

Excellent question.  I went through the documentation and removed
 tags where appropriate --- there are cases where we are
referencing an actual number, and there  makes sense.  Applied
patch attached.

I think the larger question is whether we should say "zero" for 0 and
"one" for 1, etc.  Prose typography suggests we should, but for
technical manuals, I am not sure.  Ideas?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml
index 9cb7173..11859b4 100644
--- a/doc/src/sgml/advanced.sgml
+++ b/doc/src/sgml/advanced.sgml
@@ -533,7 +533,7 @@ WHERE pos < 3;
 
 
 The above query only shows the rows from the inner query having
-rank less than 3.
+rank less than 3.

 

diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml
index bb4657e..b3ecda5 100644
--- a/doc/src/sgml/array.sgml
+++ b/doc/src/sgml/array.sgml
@@ -259,7 +259,7 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill';
 
   If any dimension is written as a slice, i.e., contains a colon, then all
   dimensions are treated as slices.  Any dimension that has only a single
-  number (no colon) is treated as being from 1
+  number (no colon) is treated as being from 1
   to the number specified.  For example, [2] is treated as
   [1:2], as in this example:
 
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index dcbb2c8..62830cb 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -835,8 +835,8 @@ SELECT pg_stop_backup();
 GNU tar return an error code indistinguishable from
 a fatal error if a file was truncated while tar was
 copying it.  Fortunately, GNU tar versions 1.16 and
-later exit with 1 if a file was changed during the backup,
-and 2 for other errors.
+later exit with 1 if a file was changed during the backup,
+and 2 for other errors.

 

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 09152e6..eda82c5 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -1455,7 +1455,7 @@
return the cast destination type as their result type.  A cast
function can have up to three arguments.  The second argument,
if present, must be type integer; it receives the type
-   modifier associated with the destination type, or -1
+   modifier associated with the destination type, or -1
if there is none.  The third argument,
if present, must be type boolean; it receives true
if the cast is an explicit cast, false otherwise.
@@ -6541,8 +6541,7 @@
OID of the object within its system catalog, or null if the
object is not a general database object.
For advisory locks it is used to distinguish the two key
-   spaces (1 for an int8 key, 2 for two
-   int4 keys).
+   spaces (1 for an int8 key, 2 for two int4 keys).
   
  
  
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index a7bb2cd..a2c0494 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -921,7 +921,7 @@ omicron bryanh  guest1
   include_realm
   

-If set to 1, the realm name from the authenticated user
+If set to 1, the realm name from the authenticated user
 principal is included in the system user name that's passed through
 user name mapping (). This is
 useful for handling users from multiple realms.
@@ -992,7 +992,7 @@ omicron bryanh  guest1
   include_realm
   

-If set to 1, the realm name from the authenticated user
+If set to 1, the realm name from the authenticated user
 principal is included in the system user name that's passed through
 user name mapping (). This is
 useful for handling users from multiple realms.
@@ -1161,7 +1161,7 @@ omicron bryanh  guest1
   include_realm
   

-If set to 1, the realm name from the authenticated user
+If set to 1, the realm name from the authenticated user
 principal is included in the system user name that's passed through
 user name mapping (). This is
 useful for handling users from multiple realms.
@@ -1372,7 +1372,7 @@ omicron bryanh  guest1
   ldaptls
   

-Set to 1 to make the conn

Re: [HACKERS] multiple -f support

2011-02-06 Thread Simon Riggs
On Sun, 2011-02-06 at 12:07 -0500, Robert Haas wrote:
> On Sun, Feb 6, 2011 at 11:16 AM, Bruce Momjian  wrote:
> > I assume having psql support multiple -f files is not a high priority or
> > something we don't want.
> 
> IIRC, nobody objected to the basic concept, and it seems useful.  I
> thought we were pretty close to committing something along those lines
> at one point, actually.  I don't remember exactly where the wheels
> came off.
> 
> Maybe a TODO?

As Andrew pointed out, IIRC, you can do this with a little shell
programming already.
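For the archives, one shape that shell workaround can take (the file and database names are made up, and grep stands in for psql so the sketch runs anywhere):

```shell
#!/bin/sh
# Feed several SQL files to a single session by concatenating them.
# Real usage would pipe into:  psql -d mydb
set -e
tmp=$(mktemp -d)
printf 'SELECT 1;\n' > "$tmp/first.sql"
printf 'SELECT 2;\n' > "$tmp/second.sql"
# cat "$tmp/first.sql" "$tmp/second.sql" | psql -d mydb
stmts=$(cat "$tmp/first.sql" "$tmp/second.sql" | grep -c 'SELECT')
echo "$stmts"
rm -rf "$tmp"
```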

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Visual Studio 2010/Windows SDK 7.1 support

2011-02-06 Thread Brar Piening
On Sun, 30 Jan 2011 21:26:22 +0100, Magnus Hagander 
 wrote:

it's not something we should hold up the CF / release for.


I agree.
At least it should get some more testing besides mine.

I've set up virtual machines with VS 2003, VS 2005 Express, VS 2008 
Express (+ my PC with VS 2010) for testing purposes but I didn't test 
all possible build paths with respect to the external libraries to include.
While I didn't change much of the existing VS 2005/8 code, I currently 
can't guarantee that the VS 2010 build will work for every possible 
external library one could include (though I didn't stumble into any 
failure while testing), and I could still have broken some VS 2005/8 
build path too.
The patch could also be extended to automatically support building libpq 
when VS 2003 is detected, or to support other desirable features that 
aren't really in the scope of supporting VS 2010.


Being somewhat short of time in the next weeks I'm at least willing to 
rebase the patch on request and do some more testing or fix issues 
someone else has detected before the next release (9.2?) goes beta.


If there's some pressure to support VS 2010 asap - please let me know 
and I'll see what I can do.


Best regards,

Brar



Re: [HACKERS] How to make contrib/sepgsql on Ubuntu Maverick ?

2011-02-06 Thread Andrew Dunstan



On 02/05/2011 12:58 AM, Vladimir Kokovic wrote:


make -C sepgsql all
make[2]: Entering directory 
`/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'
sed 's,MODULE_PATHNAME,$libdir/sepgsql,g' \
/media/sda5/postgresql-9.1devel/build/../../tmp/postgresql-git/postgresql/contrib/sepgsql/sepgsql.sql.in >sepgsql.sql

make -f /usr/share/selinux/devel/Makefile sepgsql-regtest.pp
make[3]: Entering directory 
`/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'

make[3]: /usr/share/selinux/devel/Makefile: No such file or directory
make[3]: *** No rule to make target 
`/usr/share/selinux/devel/Makefile'. Stop.
make[3]: Leaving directory 
`/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'

make[2]: *** [sepgsql-regtest.pp] Error 2
make[2]: Leaving directory 
`/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'

make[1]: *** [all-sepgsql-recurse] Error 2
make[1]: Leaving directory `/media/sda5/postgresql-9.1devel/build/contrib'
make: *** [world-contrib-recurse] Error 2


Yeah, this bit of the Makefile looks bogus:

   sepgsql-regtest.pp: sepgsql-regtest.te
        $(MAKE) -f $(DESTDIR)/usr/share/selinux/devel/Makefile $@


Shouldn't configure be working out the location of this Makefile and 
setting a variable accordingly?
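One possible shape for that, with a configure-derived variable instead of the hard-coded $(DESTDIR) path. The variable name and its default are hypothetical, just sketching the suggestion:

```makefile
# SELINUX_DEVEL_MAKEFILE would be determined by configure; the name and
# fallback default here are made up for illustration.
SELINUX_DEVEL_MAKEFILE ?= /usr/share/selinux/devel/Makefile

sepgsql-regtest.pp: sepgsql-regtest.te
	$(MAKE) -f $(SELINUX_DEVEL_MAKEFILE) $@
```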


I'm trying to work out a sane way to come up with buildfarm coverage of 
this module. I'm glad somebody at least is testing it.


cheers

andrew







[HACKERS] How to make contrib/sepgsql on Ubuntu Maverick ?

2011-02-06 Thread Vladimir Kokovic
Hi,

My build script:
-
#!/bin/sh
set -v
set -e
cd /media/sda5/postgresql-9.1devel/build
export CFLAGS="-g3 -gdwarf-2"
../../tmp/postgresql-git/postgresql/configure \
'--srcdir=../../tmp/postgresql-git/postgresql' '--enable-cassert' \
'--enable-nls' '--enable-integer-datetimes' '--with-perl' '--with-python' \
'--with-tcl' '--with-krb5' '--with-openssl' \
'--enable-thread-safety' '--with-ldap' '--with-gssapi' '--with-pam' \
'--with-libxml' '--with-libxslt' '--with-selinux' \
'--prefix=/media/sda5/postgresql-9.1devel/20110204' > configure-out1.log 2>&1
make world > make-out1.log 2>&1
make install-world > make-install-out1.log 2>&1
exit 0


tail make-out1.log:

make -C sepgsql all
make[2]: Entering directory `/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'
sed 's,MODULE_PATHNAME,$libdir/sepgsql,g' /media/sda5/postgresql-9.1devel/build/../../tmp/postgresql-git/postgresql/contrib/sepgsql/sepgsql.sql.in >sepgsql.sql
make -f /usr/share/selinux/devel/Makefile sepgsql-regtest.pp
make[3]: Entering directory `/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'
make[3]: /usr/share/selinux/devel/Makefile: No such file or directory
make[3]: *** No rule to make target `/usr/share/selinux/devel/Makefile'. Stop.
make[3]: Leaving directory `/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'
make[2]: *** [sepgsql-regtest.pp] Error 2
make[2]: Leaving directory `/media/sda5/postgresql-9.1devel/build/contrib/sepgsql'
make[1]: *** [all-sepgsql-recurse] Error 2
make[1]: Leaving directory `/media/sda5/postgresql-9.1devel/build/contrib'
make: *** [world-contrib-recurse] Error 2

Best regards,
Vladimir Kokovic, DP senior, Belgrade, Serbia


Re: [HACKERS] SAVEPOINTs and COMMIT performance

2011-02-06 Thread Simon Riggs
On Sun, 2011-02-06 at 12:11 -0500, Bruce Momjian wrote:
> Did this ever get addressed?

Patch attached.

Seems like the easiest fix I can come up with.

> Simon Riggs wrote:
> > 
> > As part of a performance investigation for a customer I've noticed an
> > O(N^2) performance issue on COMMITs of transactions that contain many
> > SAVEPOINTs. I've consistently measured COMMIT times of around 9 seconds,
> > with 49% CPU, mostly in LockReassignCurrentOwner().
> > 
> > BEGIN;
> > INSERT...
> > SAVEPOINT ...
> > INSERT...
> > SAVEPOINT ...
> > ... (repeat 10,000 times)
> > COMMIT;
> > 
> > The way SAVEPOINTs work is that each is nested within the previous one,
> > so that at COMMIT time we must recursively commit all the
> > subtransactions before we issue final commit.
> > 
> > That's a shame because ResourceOwnerReleaseInternal() contains an
> > optimisation to speed up final commit, by calling ProcReleaseLocks().
> > 
> > What we actually do is recursively call LockReassignCurrentOwner() which
> > sequentially scans LockMethodLocalHash at each level of transaction. The
> > comments refer to this as "retail" rather than the wholesale method,
> > which never gets to execute anything worthwhile in this case.
> > 
> > This issue does NOT occur in PLpgSQL functions that contain many
> > EXCEPTION clauses in a loop, since in that case the subtransactions are
> > started and committed from the top level so that the subxact nesting
> > never goes too deep.
> > 
> > Fix looks like we need special handling for the depth-first case, rather
> > than just a recursion loop in CommitTransactionCommand().
> > 
> > Issues looks like it goes all the way back, no fix for 9.0.
> > 
> > I notice also that the nesting model of SAVEPOINTs also means that
> > read-only subtransactions will still generate an xid when followed by a
> > DML statement. That's unnecessary, but required given current design.
> > 
> > -- 
> >  Simon Riggs   www.2ndQuadrant.com
> >  PostgreSQL Development, 24x7 Support, Training and Services
> > 
> > 
> 

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
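The pathological workload Simon describes is easy to regenerate for benchmarking. A hypothetical driver script (the default iteration count and table name are made up for illustration, not taken from the report):

```python
# Hypothetical driver for the O(N^2) COMMIT case: one transaction
# with many INSERT + SAVEPOINT pairs.

def make_savepoint_script(n=10000, table="t"):
    """Generate a psql script with n INSERT/SAVEPOINT pairs."""
    lines = ["BEGIN;"]
    for i in range(n):
        lines.append("INSERT INTO %s VALUES (%d);" % (table, i))
        lines.append("SAVEPOINT s%d;" % i)
    lines.append("COMMIT;")
    return "\n".join(lines)

if __name__ == "__main__":
    # pipe the output into psql and time the final COMMIT
    print(make_savepoint_script())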
 
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 18e9ce1..07c84b4 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -269,7 +269,7 @@ static TransactionId RecordTransactionAbort(bool isSubXact);
 static void StartTransaction(void);
 
 static void StartSubTransaction(void);
-static void CommitSubTransaction(void);
+static void CommitSubTransaction(bool isTopLevel);
 static void AbortSubTransaction(void);
 static void CleanupSubTransaction(void);
 static void PushTransaction(void);
@@ -2518,7 +2518,7 @@ CommitTransactionCommand(void)
 		case TBLOCK_SUBEND:
 			do
 			{
-CommitSubTransaction();
+CommitSubTransaction(true);
 s = CurrentTransactionState;	/* changed by pop */
 			} while (s->blockState == TBLOCK_SUBEND);
 			/* If we had a COMMIT command, finish off the main xact too */
@@ -3664,7 +3664,7 @@ ReleaseCurrentSubTransaction(void)
 			 BlockStateAsString(s->blockState));
 	Assert(s->state == TRANS_INPROGRESS);
 	MemoryContextSwitchTo(CurTransactionContext);
-	CommitSubTransaction();
+	CommitSubTransaction(false);
 	s = CurrentTransactionState;	/* changed by pop */
 	Assert(s->state == TRANS_INPROGRESS);
 }
@@ -3925,9 +3925,13 @@ StartSubTransaction(void)
  *
  *	The caller has to make sure to always reassign CurrentTransactionState
  *	if it has a local pointer to it after calling this function.
+ *
+ *	isTopLevel means that this CommitSubTransaction() is being issued as a
+ *	sequence of actions leading directly to a main transaction commit
+ *	allowing some actions to be optimised.
  */
 static void
-CommitSubTransaction(void)
+CommitSubTransaction(bool isTopLevel)
 {
 	TransactionState s = CurrentTransactionState;
 
@@ -3974,15 +3978,21 @@ CommitSubTransaction(void)
 
 	/*
 	 * The only lock we actually release here is the subtransaction XID lock.
-	 * The rest just get transferred to the parent resource owner.
 	 */
 	CurrentResourceOwner = s->curTransactionOwner;
 	if (TransactionIdIsValid(s->transactionId))
 		XactLockTableDelete(s->transactionId);
 
+	/*
+	 * Other locks should get transferred to their parent resource owner.
+	 * Doing that is an O(N^2) operation, so if isTopLevel then we can just
+	 * leave the lock records as they are, knowing they will all get released
+	 * by the top level commit using ProcReleaseLocks(). We only optimize
+	 * this for commit; aborts may need to do other cleanup.
+	 */
 	ResourceOwnerRelease(s->curTransactionOwner,
 		 RESOURCE_RELEASE_LOCKS,
-		 true, false);
+		 true, isTopLevel);
 	ResourceOwnerRelease(s->curTransactionOwner,
 		 RESOURCE_RELEASE_A

Re: [HACKERS] SSI patch version 14

2011-02-06 Thread Kevin Grittner
"Kevin Grittner"  wrote:
> Jeff Davis wrote:
>  
>> What does PredicateLockTuple do that needs a share lock? Does a
>> pin suffice?
> 
> If one process is reading a tuple and another is writing it (e.g.,
> UPDATE or DELETE) the concern is that we need to be able to
> guarantee that either the predicate lock appears in time for the
> writer to see it on the call to CheckForSerializableConflictIn or
> the reader sees the MVCC changes in
> CheckForSerializableConflictOut. It's OK if the conflict is
> detected both ways, but if it's missed both ways then we could have
> a false negative, allowing anomalies to slip through. It didn't
> seem to me on review that acquiring the predicate lock after
> releasing the shared buffer lock (and just having the pin) would be
> enough to ensure that a write which followed that would see the
> predicate lock.
> 
> reader has pin and shared lock
> writer increments pin count
> reader releases shared lock
> writer acquires exclusive lock
> writer checks predicate lock and fails to see one
> reader adds predicate lock
> we have a problem
 
Hmmm...  Or do we?  If both sides were careful to record what they're
doing before checking for a conflict, the pin might be enough.  I'll
check for that.  In at least one of those moves I was moving the
predicate lock acquisition from after the conflict check to before,
but maybe I didn't need to move it quite so far.
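The "record before check" intuition can be verified exhaustively on a toy model. This is an illustration of the ordering argument only, not the actual buffer-lock code: each side performs a "record" step (publish its predicate lock or MVCC change) and a "check" step (look for the other side's record).

```python
from itertools import permutations

def detected(schedule):
    """Run one interleaving; True if at least one side saw the other."""
    recorded = {"reader": False, "writer": False}
    seen = {"reader": False, "writer": False}
    for who, op in schedule:
        other = "writer" if who == "reader" else "reader"
        if op == "record":
            recorded[who] = True
        else:  # op == "check"
            seen[who] = recorded[other]
    return seen["reader"] or seen["writer"]

def all_interleavings(reader_ops, writer_ops):
    """Every merge of the two per-side sequences, preserving each order."""
    ops = [("reader", o) for o in reader_ops] + \
          [("writer", o) for o in writer_ops]
    result = set()
    for perm in permutations(ops):
        if [o for w, o in perm if w == "reader"] == list(reader_ops) and \
           [o for w, o in perm if w == "writer"] == list(writer_ops):
            result.add(perm)
    return result

# record-then-check on both sides: every interleaving detects the conflict
safe = all(detected(s) for s in
           all_interleavings(("record", "check"), ("record", "check")))
# check-then-record on the reader side: some interleavings miss it both ways
buggy = all(detected(s) for s in
            all_interleavings(("check", "record"), ("record", "check")))
print(safe, buggy)  # True False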
 
-Kevin



Re: [HACKERS] pl/python custom exceptions for SPI

2011-02-06 Thread Jan Urbański
On 27/01/11 23:24, Jan Urbański wrote:
> On 11/01/11 12:20, Jan Urbański wrote:
>> On 11/01/11 01:27, Tom Lane wrote:
>>> Hannu Krosing  writes:
 On 10.1.2011 17:20, Jan Urbański wrote:
> I changed that patch to use Perl instead of sed to generate the
> exceptions, which should be a more portable.
> 
> Updated as an incremental patch on to of the recently sent version of
> explicit-subxacts.

Updated again.
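For a sense of what the generator script produces, the name mapping can be sketched in plain Python: an errcodes condition name like "not_null_violation" becomes an exception class spiexceptions.NotNullViolation carrying its five-character SQLSTATE. The two sample entries below are illustrative, not the full errcodes list:

```python
def class_name(condition_name):
    """Turn an errcodes condition name into a CamelCase class name."""
    return "".join(part.capitalize() for part in condition_name.split("_"))

def build_exceptions(entries, base=Exception):
    """Build one exception class per (sqlstate, condition) entry."""
    classes = {}
    for sqlstate, condition in entries:
        cls = type(class_name(condition), (base,), {"sqlstate": sqlstate})
        classes[cls.__name__] = cls
    return classes

entries = [("23502", "not_null_violation"), ("23505", "unique_violation")]
spiexceptions = build_exceptions(entries)
print(sorted(spiexceptions))  # ['NotNullViolation', 'UniqueViolation']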
diff --git a/src/pl/plpython/Makefile b/src/pl/plpython/Makefile
index 33dddc6..b3f0d0e 100644
*** a/src/pl/plpython/Makefile
--- b/src/pl/plpython/Makefile
*** PSQLDIR = $(bindir)
*** 86,94 
--- 86,102 
  
  include $(top_srcdir)/src/Makefile.shlib
  
+ # Force this dependency to be known (see src/pl/plpgsql/src/Makefile)
+ plpython.o: spiexceptions.h
+ 
+ # Generate spiexceptions.h from utils/errcodes.h
+ spiexceptions.h: $(top_srcdir)/src/backend/utils/errcodes.txt generate-spiexceptions.pl
+ 	$(PERL) $(srcdir)/generate-spiexceptions.pl $< > $@
  
  all: all-lib
  
+ distprep: spiexceptions.h
+ 
  install: all installdirs install-lib
  ifeq ($(python_majorversion),2)
  	cd '$(DESTDIR)$(pkglibdir)' && rm -f plpython$(DLSUFFIX) && $(LN_S) $(shlib) plpython$(DLSUFFIX)
*** endif
*** 134,143 
  submake:
  	$(MAKE) -C $(top_builddir)/src/test/regress pg_regress$(X)
  
! clean distclean maintainer-clean: clean-lib
  	rm -f $(OBJS)
  	rm -rf results
  	rm -f regression.diffs regression.out
  ifeq ($(PORTNAME), win32)
  	rm -f python${pytverstr}.def
  endif
--- 142,156 
  submake:
  	$(MAKE) -C $(top_builddir)/src/test/regress pg_regress$(X)
  
! clean distclean: clean-lib
  	rm -f $(OBJS)
  	rm -rf results
  	rm -f regression.diffs regression.out
+ 
+ # since we distribute spiexceptions.h, only remove it in maintainer-clean
+ maintainer-clean: clean distclean
+ 	rm -f spiexceptions.h
+ 
  ifeq ($(PORTNAME), win32)
  	rm -f python${pytverstr}.def
  endif
diff --git a/src/pl/plpython/expected/plpython_error.out b/src/pl/plpython/expected/plpython_error.out
index 7597ca7..afbc6db 100644
*** a/src/pl/plpython/expected/plpython_error.out
--- b/src/pl/plpython/expected/plpython_error.out
*** CREATE FUNCTION sql_syntax_error() RETUR
*** 32,38 
  'plpy.execute("syntax error")'
  LANGUAGE plpythonu;
  SELECT sql_syntax_error();
! ERROR:  plpy.SPIError: syntax error at or near "syntax"
  LINE 1: syntax error
  ^
  QUERY:  syntax error
--- 32,38 
  'plpy.execute("syntax error")'
  LANGUAGE plpythonu;
  SELECT sql_syntax_error();
! ERROR:  spiexceptions.SyntaxError: syntax error at or near "syntax"
  LINE 1: syntax error
  ^
  QUERY:  syntax error
*** CREATE FUNCTION exception_index_invalid_
*** 54,60 
  return rv[0]'
  	LANGUAGE plpythonu;
  SELECT exception_index_invalid_nested();
! ERROR:  plpy.SPIError: function test5(unknown) does not exist
  LINE 1: SELECT test5('foo')
 ^
  HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
--- 54,60 
  return rv[0]'
  	LANGUAGE plpythonu;
  SELECT exception_index_invalid_nested();
! ERROR:  spiexceptions.UndefinedFunction: function test5(unknown) does not exist
  LINE 1: SELECT test5('foo')
 ^
  HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
*** return None
*** 74,80 
  '
  	LANGUAGE plpythonu;
  SELECT invalid_type_uncaught('rick');
! ERROR:  plpy.SPIError: type "test" does not exist
  CONTEXT:  PL/Python function "invalid_type_uncaught"
  /* for what it's worth catch the exception generated by
   * the typo, and return None
--- 74,80 
  '
  	LANGUAGE plpythonu;
  SELECT invalid_type_uncaught('rick');
! ERROR:  spiexceptions.UndefinedObject: type "test" does not exist
  CONTEXT:  PL/Python function "invalid_type_uncaught"
  /* for what it's worth catch the exception generated by
   * the typo, and return None
*** SELECT valid_type('rick');
*** 140,145 
--- 140,183 
   
  (1 row)
  
+ /* Check catching specific types of exceptions
+  */
+ CREATE TABLE specific (
+ i integer PRIMARY KEY
+ );
+ NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "specific_pkey" for table "specific"
+ CREATE FUNCTION specific_exception(i integer) RETURNS void AS
+ $$
+ from plpy import spiexceptions
+ try:
+ plpy.execute("insert into specific values (%s)" % (i or "NULL"));
+ except spiexceptions.NotNullViolation, e:
+ plpy.notice("Violated the NOT NULL constraint, sqlstate %s" % e.sqlstate)
+ except spiexceptions.UniqueViolation, e:
+ plpy.notice("Violated the UNIQUE constraint, sqlstate %s" % e.sqlstate)
+ $$ LANGUAGE plpythonu;
+ SELECT specific_exception(2);
+  specific_exception 
+ 
+  
+ (1 row)
+ 
+ SELECT specific_exception(NULL);
+ NOTICE:  Violated the NOT NULL constraint, sqlstate 23502
+ CONTEXT:  PL/Python function "specific_exception"
+  

Re: [HACKERS] SSI patch version 14

2011-02-06 Thread Kevin Grittner
Jeff Davis  wrote:
> On Sat, 2011-02-05 at 14:43 -0600, Kevin Grittner wrote:
 
>> In working on this I noticed the apparent need to move two calls
>> to PredicateLockTuple a little bit to keep them inside the buffer
>> lock.  Without at least a share lock on the buffer, it seems that
>> here is a window where a read could miss the MVCC from a write and
>> the write could fail to see the predicate lock.
 
> What does PredicateLockTuple do that needs a share lock? Does a pin
> suffice?
 
If one process is reading a tuple and another is writing it (e.g.,
UPDATE or DELETE) the concern is that we need to be able to guarantee
that either the predicate lock appears in time for the writer to see
it on the call to CheckForSerializableConflictIn or the reader sees
the MVCC changes in CheckForSerializableConflictOut.  It's OK if the
conflict is detected both ways, but if it's missed both ways then we
could have a false negative, allowing anomalies to slip through.  It
didn't seem to me on review that acquiring the predicate lock after
releasing the shared buffer lock (and just having the pin) would be
enough to ensure that a write which followed that would see the
predicate lock.

reader has pin and shared lock
writer increments pin count
reader releases shared lock
writer acquires exclusive lock
writer checks predicate lock and fails to see one
reader adds predicate lock
we have a problem

-Kevin



Re: [HACKERS] Range Types - cache lookup failed for function

2011-02-06 Thread Erik Rijkers
On Sun, February 6, 2011 07:41, Jeff Davis wrote:
> New patch. All known TODO items are closed, although I should do a



I've been testing and find the patch to be
generally very stable.

More review follows ASAP, but I wanted to mention
this curious 'bug' already.

(below all with latest git + patch 2011.02.05; but
occurred also in 2011.01.30 version)

I was trying
where intrange @> integer

which admittedly is not in the documentation,
but does already half work, and would be really
convenient to have.  As it stands the construct
seems to fail after ANALYZE, when there is more
than 1 row:

-- file t/cache_lookup_failure2.sql
drop table if exists t;
create table t (ir int4range);
insert into t values (range(1,11));
select count(*) from t;
select * from t where ir @> 5; --> OK
analyze t;
select * from t where ir @> 5; --> OK
insert into t values (range(1,11));
select * from t where ir @> 5;
analyze t;
select * from t where ir @> 5;

running that file gives me

PGPORT=6563
PGDATA=/var/data1/pg_stuff/pg_installations/pgsql.range_types/data
PGDATABASE=testdb

2011.02.06 20:03:03 
rijkers@denkraam:/var/data1/pg_stuff/pg_sql/pgsql.range_types [0]
$ clear; psql -Xqa -d testdb -f t/cache_lookup_failure2.sql
drop table if exists t;
create table t (ir int4range);
insert into t values (range(1,11));
select count(*) from t;
 count
---
 1
(1 row)

select * from t where ir @> 5; --> OK
ir
---
 [ 1, 11 )
(1 row)

analyze t;
select * from t where ir @> 5; --> OK
ir
---
 [ 1, 11 )
(1 row)

insert into t values (range(1,11));
select * from t where ir @> 5;
ir
---
 [ 1, 11 )
 [ 1, 11 )
(2 rows)

analyze t;
select * from t where ir @> 5;
psql:t/cache_lookup_failure2.sql:11: ERROR:  cache lookup failed for function 0


(of course I realize that
   ... WHERE ir @> range(5);

 "fixes" the problem, but it would be
really nice to have it work on integers)

Thanks,

Erik Rijkers






Re: [HACKERS] SSI patch version 14

2011-02-06 Thread Jeff Davis
On Sat, 2011-02-05 at 14:43 -0600, Kevin Grittner wrote:
> "Kevin Grittner"  wrote:
>  
> > So now that I'm sure we actually do need code there, I'll add it.
>  
> In working on this I noticed the apparent need to move two calls to
> PredicateLockTuple a little bit to keep them inside the buffer lock. 
> Without at least a share lock on the buffer, it seems that here is a
> window where a read could miss the MVCC from a write and the write
> could fail to see the predicate lock.  Please see whether this seems
> reasonable:
>  
> http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=7841a22648c3f4ae46f674d7cf4a7c2673cf9ed2

What does PredicateLockTuple do that needs a share lock? Does a pin
suffice?

Regards,
Jeff Davis




Re: [HACKERS] pg_dump directory archive format / parallel pg_dump

2011-02-06 Thread Jaime Casanova
On Tue, Feb 1, 2011 at 11:32 PM, Joachim Wieland  wrote:
> On Sun, Jan 30, 2011 at 5:26 PM, Robert Haas  wrote:
>> The parallel pg_dump portion of this patch (i.e. the still-uncommitted
>> part) no longer applies.  Please rebase.
>
> Here is a rebased version with some minor changes as well. I haven't
> tested it on Windows now but will do so as soon as the Unix part has
> been reviewed.
>

code review:

Something I found, and a very simple one, is this warning (there's
a similar issue in _StartMasterParallel with the buf variable):
"""
pg_backup_directory.c: In function ‘_EndMasterParallel’:
pg_backup_directory.c:856: warning: ‘status’ may be used uninitialized
in this function
"""

I guess the huge amount of debug output means the patch is still
instrumented for debugging and that it will be removed before commit, right?

functional review:

it works well most of the time; just a few points:
- if I interrupt the process, the connections stay open; I guess it
could catch the signal and close the connections
- if I have an exclusive lock on a table and a worker starts dumping
it, it fails because it can't take the lock, but it just says it was
OK; I would prefer an error
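The signal cleanup suggested above is usually done with a flag set in the handler and checked in the work loop. A rough sketch of that pattern (the Connection class here is a stand-in for the worker connections, not pg_dump's actual code):

```python
import signal

# The handler only sets a flag (the async-signal-safe pattern);
# the work loop notices it and closes connections before exiting.

shutdown_requested = False

def handle_sigint(signum, frame):
    global shutdown_requested
    shutdown_requested = True

class Connection:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def run(connections, work_items):
    signal.signal(signal.SIGINT, handle_sigint)
    done = 0
    for item in work_items:
        if shutdown_requested:
            break  # interrupted: stop handing out work
        done += 1  # dump one table here
    for conn in connections:  # clean up whether interrupted or not
        conn.close()
    return done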

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte y capacitación de PostgreSQL



Re: [HACKERS] pl/python tracebacks

2011-02-06 Thread Jan Urbański
On 27/01/11 22:58, Jan Urbański wrote:
> On 23/12/10 14:56, Jan Urbański wrote:
>> Here's a patch implementing traceback support for PL/Python mentioned in
>> http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
>> an incremental patch on top of the plpython-refactor patch sent eariler.
> 
> Updated to master.

Updated to master again.
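For reference, plain Python's standard traceback module exposes the same frame data the patch formats into the DETAIL line. A rough approximation outside the backend (the 'PL/Python function "..."' prefix and the function name are assumptions mimicking the expected-output files):

```python
import traceback

def format_plpython_style(exc, funcname):
    """Format an exception's frames roughly like the patch's DETAIL line."""
    lines = ["Traceback (most recent call last):"]
    for frame in traceback.extract_tb(exc.__traceback__):
        lines.append('  PL/Python function "%s", line %d, in %s'
                     % (funcname, frame.lineno, frame.name))
        if frame.line:  # source may be unavailable; guard against None
            lines.append("    " + frame.line)
    return "\n".join(lines)

def fun1():
    raise ValueError("boom")

def fun2():
    fun1()

try:
    fun2()
except ValueError as exc:
    captured = exc

text = format_plpython_style(captured, "nested_error")
print(text)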
diff --git a/src/pl/plpython/expected/plpython_do.out b/src/pl/plpython/expected/plpython_do.out
index a21b088..fb0f0e5 100644
*** a/src/pl/plpython/expected/plpython_do.out
--- b/src/pl/plpython/expected/plpython_do.out
*** NOTICE:  This is plpythonu.
*** 3,6 
--- 3,9 
  CONTEXT:  PL/Python anonymous code block
  DO $$ nonsense $$ LANGUAGE plpythonu;
  ERROR:  NameError: global name 'nonsense' is not defined
+ DETAIL:  Traceback (most recent call last):
+   PL/Python anonymous code block, line 1, in <module>
+ nonsense 
  CONTEXT:  PL/Python anonymous code block
diff --git a/src/pl/plpython/expected/plpython_error.out b/src/pl/plpython/expected/plpython_error.out
index 7597ca7..08b6ba4 100644
*** a/src/pl/plpython/expected/plpython_error.out
--- b/src/pl/plpython/expected/plpython_error.out
*** SELECT sql_syntax_error();
*** 35,40 
--- 35,43 
  ERROR:  plpy.SPIError: syntax error at or near "syntax"
  LINE 1: syntax error
  ^
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "sql_syntax_error", line 1, in <module>
+ plpy.execute("syntax error")
  QUERY:  syntax error
  CONTEXT:  PL/Python function "sql_syntax_error"
  /* check the handling of uncaught python exceptions
*** CREATE FUNCTION exception_index_invalid(
*** 45,50 
--- 48,56 
  	LANGUAGE plpythonu;
  SELECT exception_index_invalid('test');
  ERROR:  IndexError: list index out of range
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "exception_index_invalid", line 1, in <module>
+ return args[1]
  CONTEXT:  PL/Python function "exception_index_invalid"
  /* check handling of nested exceptions
   */
*** SELECT exception_index_invalid_nested();
*** 57,62 
--- 63,71 
  ERROR:  plpy.SPIError: function test5(unknown) does not exist
  LINE 1: SELECT test5('foo')
 ^
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "exception_index_invalid_nested", line 1, in <module>
+ rv = plpy.execute("SELECT test5('foo')")
  HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
  QUERY:  SELECT test5('foo')
  CONTEXT:  PL/Python function "exception_index_invalid_nested"
*** return None
*** 75,80 
--- 84,92 
  	LANGUAGE plpythonu;
  SELECT invalid_type_uncaught('rick');
  ERROR:  plpy.SPIError: type "test" does not exist
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "invalid_type_uncaught", line 3, in <module>
+ SD["plan"] = plpy.prepare(q, [ "test" ])
  CONTEXT:  PL/Python function "invalid_type_uncaught"
  /* for what it's worth catch the exception generated by
   * the typo, and return None
*** return None
*** 121,126 
--- 133,141 
  	LANGUAGE plpythonu;
  SELECT invalid_type_reraised('rick');
  ERROR:  plpy.Error: type "test" does not exist
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "invalid_type_reraised", line 6, in <module>
+ plpy.error(str(ex))
  CONTEXT:  PL/Python function "invalid_type_reraised"
  /* no typo no messing about
   */
*** SELECT valid_type('rick');
*** 140,145 
--- 155,255 
   
  (1 row)
  
+ /* error in nested functions to get a traceback
+ */
+ CREATE FUNCTION nested_error() RETURNS text
+ 	AS
+ 'def fun1():
+ 	plpy.error("boom")
+ 
+ def fun2():
+ 	fun1()
+ 
+ def fun3():
+ 	fun2()
+ 
+ fun3()
+ return "not reached"
+ '
+ 	LANGUAGE plpythonu;
+ SELECT nested_error();
+ ERROR:  plpy.Error: boom
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "nested_error", line 10, in <module>
+ fun3()
+   PL/Python function "nested_error", line 8, in fun3
+ fun2()
+   PL/Python function "nested_error", line 5, in fun2
+ fun1()
+   PL/Python function "nested_error", line 2, in fun1
+ plpy.error("boom")
+ CONTEXT:  PL/Python function "nested_error"
+ /* raising plpy.Error is just like calling plpy.error
+ */
+ CREATE FUNCTION nested_error_raise() RETURNS text
+ 	AS
+ 'def fun1():
+ 	raise plpy.Error("boom")
+ 
+ def fun2():
+ 	fun1()
+ 
+ def fun3():
+ 	fun2()
+ 
+ fun3()
+ return "not reached"
+ '
+ 	LANGUAGE plpythonu;
+ SELECT nested_error_raise();
+ ERROR:  plpy.Error: boom
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "nested_error_raise", line 10, in <module>
+ fun3()
+   PL/Python function "nested_error_raise", line 8, in fun3
+ fun2()
+   PL/Python function "nested_error_raise", line 5, in fun2
+ fun1()
+   PL/Python function "nested_error_raise", line 2, in fun1
+ raise plpy.Error("boom")
+ CONTEXT:  PL/Python function "nested_error_raise"
+ /* usin

Re: [HACKERS] 64-bit pgbench V2

2011-02-06 Thread Euler Taveira de Oliveira

On 06-02-2011 13:09, Bruce Momjian wrote:


What happened to this idea/patch?


I refactored the patch [1] to not depend on strtoll.


[1] http://archives.postgresql.org/message-id/4d2cccd9@timbira.com


--
  Euler Taveira de Oliveira
  http://www.timbira.com/



Re: [HACKERS] Add support for logging the current role

2011-02-06 Thread Stephen Frost
* Itagaki Takahiro (itagaki.takah...@gmail.com) wrote:
> Could you try to "make html" in the doc directory?

Yeah, doesn't seem to work for me (missing '/bin/collateindex.pl',
apparently...).

> Your new decumentation after
> | These columns may be included in the CSV output:
> will be unaligned plain text without some tags.

Ok, I've cleaned up that part of the documentation to be a table instead
of the listings that were there, seems like a better approach anyway.
Updated patch attached, which has also been rebased against HEAD.

Thanks!

Stephen

commit d81c425a4c320540769084ceeb7d23bc3662
Author: Stephen Frost 
Date:   Sun Feb 6 14:02:05 2011 -0500

Change log_csv_options listing to a table

This patch changes the listing of field options available to
log_csv_options into a table, which will hopefully both look
better and be clearer.

commit f9851cdfaeb931f01c015f5651b72d16957c7114
Merge: 3e71e33 5ed45ac
Author: Stephen Frost 
Date:   Sun Feb 6 13:26:17 2011 -0500

Merge branch 'master' of git://git.postgresql.org/git/postgresql into 
log_csv_options

commit 3e71e338a2b9352d730f59a989027e33d99bea50
Author: Stephen Frost 
Date:   Fri Jan 28 22:44:33 2011 -0500

Cleanup log_csv_options patch

Clean up of various function declarations to hopefully be correct
and clean and matching PG conventions.  Also move TopMemoryContext
usage to later, since the local variables don't need to be in
TopMemoryContext.  Lastly, ensure that a comma is not produced
after the last CSV field, and that one is produced if
application_name is not the last field.

Review by Itagaki Takahiro, thanks!

commit 1825def11badd661d219fa4c516f06e0ad423443
Merge: ff249ae 847e8c7
Author: Stephen Frost 
Date:   Wed Jan 19 06:50:03 2011 -0500

Merge branch 'master' of git://git.postgresql.org/git/postgresql into 
log_csv_options

commit ff249aeac7216da623bf77840380d5e767f681fc
Author: Stephen Frost 
Date:   Wed Jan 19 00:26:52 2011 -0500

Add log_csv_fields GUC for CSV output & curr_role

This patch adds a new GUC called 'log_csv_fields'.  This GUC allows
the user to control the set of fields written to the CSV output as
well as the order in which they are written.  The default set of
fields remains those that were included in 9.0, to avoid breaking
existing user configurations.

In passing, update 'user_name' for log_line_prefix and log_csv_fields
to mean 'session user' (which could be reset by a superuser with
set session authorization), and add a 'role_name' option (%U) to
log_line_prefix and log_csv_fields, to allow users to log the
current role (as set by SET ROLE- not impacted by SECURITY DEFINER
functions).
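One practical consequence of a configurable field list: any csvlog import script must use exactly the names and order configured in log_csv_fields. A minimal sketch of such a consumer (the three-field list and the sample line below are invented for illustration; the patch's default list has 23 fields):

```python
import csv
import io

# Field names must match the server's log_csv_fields setting, in order.
FIELDS = ["log_time", "user_name", "message"]

def parse_csvlog(data, fields=FIELDS):
    """Parse csvlog text into dicts keyed by the configured field names."""
    return [dict(zip(fields, row)) for row in csv.reader(io.StringIO(data))]

sample = '2011-02-06 14:02:05 EST,stephen,"row is ""quoted"", with comma"\n'
rows = parse_csvlog(sample)
print(rows[0]["message"])  # row is "quoted", with comma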
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***
*** 3519,3525  local0.*/var/log/postgresql
  
  
   %u
!  User name
   yes
  
  
--- 3519,3538 
  
  
   %u
!  Session user name, typically the user name which was used
!  to authenticate to PostgreSQL with,
!  but can be changed by a superuser, see SET SESSION
!  AUTHORIZATION
!  yes
! 
! 
!  %U
!  Current role name, when set with SET ROLE;
!  the current role identifier is relevant for permission checking;
!  Returns 'none' if the current role matches the session user.
!  Note: Log messages from inside SECURITY DEFINER
!  functions will show the calling role, not the effective role
!  inside the SECURITY DEFINER function
   yes
  
  
***
*** 3636,3641  FROM pg_stat_activity;
--- 3649,3677 

   
  
+  
+   log_csv_fields (string)
+   
+log_csv_fields configuration parameter
+   
+   
+
+ Controls the set and order of the fields which are written out in
+ the CSV-format log file.
+ 
+ The default is: log_time, user_name, database_name, process_id,
+ connection_from, session_id, session_line_num, command_tag,
+ session_start_time, virtual_transaction_id, transaction_id,
+ error_severity, sql_state_code, message, detail, hint,
+ internal_query, internal_query_pos, context, query, query_pos,
+ location, application_name
+ 
+ For details on what these fields are, refer to the log_line_prefix
+ and CSV logging documentation.
+
+   
+  
+ 
   
log_lock_waits (boolean)

***
*** 3743,3776  FROM pg_stat_activity;
  Including csvlog in the log_destination list
  provides a convenient way to import log files into a database table.
  This option emits log lines in comma

Re: [HACKERS] pl/python quoting functions

2011-02-06 Thread Jan Urbański
On 06/02/11 10:54, Jan Urbański wrote:
> On 04/02/11 18:10, Hitoshi Harada wrote:
>> 2011/1/11 Jan Urbański :
>>> Here's a patch that adds a few PL/Python functions for quoting strings.
>>> It's an incremental patch on top of the plpython-refactor patch sent in
>>> http://archives.postgresql.org/message-id/4d135170.3080...@wulczer.org.
>>>
>>> Git branch for this patch:
>>> https://github.com/wulczer/postgres/tree/functions
>>>
>>> The new functions are plpy.quote_literal, plpy.quote_nullable and
>>> plpy.quote_ident, and work just like their sql or plperl equivalents.
>>>
>>
>> I reviewed this.
>>
>> The patch applies and compiles cleanly and all the tests are passed.
>> The patch adds 3 functions which works as the corresponding SQL
>> functions. The test is enough, without any additional docs. No
>> feature/performance issues found.
>>
>> I mark this "Reader for Committer".
> 
> Thanks!
> 
> I guess a short paragraph in the Utility Functions section of the
> PL/Python docs would be in order, I'll try to add it today.

Added docs and merged with master.
diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml
index e05c293..8a995b2 100644
*** a/doc/src/sgml/plpython.sgml
--- b/doc/src/sgml/plpython.sgml
*** $$ LANGUAGE plpythonu;
*** 997,1002 
--- 997,1018 
  configuration
 variables. See  for more information.

+   
+Another set of utility functions
+are plpy.quote_literal(string),
+plpy.quote_nullable(string)
+and plpy.quote_ident(string). They are
+equivalent to the builtin quoting functions described
+in . They are useful when constructing
+ad-hoc queries. A PL/Python equivalent of dynamic SQL
+from  would be:
+ 
+ plpy.execute("UPDATE tbl SET %s = %s where key = %s" % (
+ plpy.quote_ident(colname),
+ plpy.quote_nullable(newvalue),
+ plpy.quote_literal(keyvalue)))
+ 
+   
   
  
   
diff --git a/src/pl/plpython/Makefile b/src/pl/plpython/Makefile
index 16d78ae..292e360 100644
*** a/src/pl/plpython/Makefile
--- b/src/pl/plpython/Makefile
*** REGRESS = \
*** 79,84 
--- 79,85 
  	plpython_types \
  	plpython_error \
  	plpython_unicode \
+ 	plpython_quote \
  	plpython_drop
  # where to find psql for running the tests
  PSQLDIR = $(bindir)
diff --git a/src/pl/plpython/expected/plpython_quote.out b/src/pl/plpython/expected/plpython_quote.out
index ...b33ee3f .
*** a/src/pl/plpython/expected/plpython_quote.out
--- b/src/pl/plpython/expected/plpython_quote.out
***
*** 0 
--- 1,87 
+ -- test quoting functions
+ CREATE FUNCTION quote(t text, how text) RETURNS text AS $$
+ if how == "literal":
+ return plpy.quote_literal(t)
+ elif how == "nullable":
+ return plpy.quote_nullable(t)
+ elif how == "ident":
+ return plpy.quote_ident(t)
+ else:
+ raise plpy.Error("unrecognized quote type %s" % how)
+ $$ LANGUAGE plpythonu;
+ SELECT quote(t, 'literal') FROM (VALUES
+('abc'),
+('a''bc'),
+('''abc'''),
+(''),
+(),
+('xyzv')) AS v(t);
+quote   
+ ---
+  'abc'
+  'a''bc'
+  '''abc'''
+  ''
+  
+  'xyzv'
+ (6 rows)
+ 
+ SELECT quote(t, 'nullable') FROM (VALUES
+('abc'),
+('a''bc'),
+('''abc'''),
+(''),
+(),
+(NULL)) AS v(t);
+quote   
+ ---
+  'abc'
+  'a''bc'
+  '''abc'''
+  ''
+  
+  NULL
+ (6 rows)
+ 
+ SELECT quote(t, 'ident') FROM (VALUES
+('abc'),
+('a b c'),
+('a " ''abc''')) AS v(t);
+ quote 
+ --
+  abc
+  "a b c"
+  "a "" 'abc'"
+ (3 rows)
+ 
+ -- test errors
+ SELECT quote(NULL::text, 'literal');
+ ERROR:  TypeError: argument 1 must be string, not None
+ CONTEXT:  PL/Python function "quote"
+ SELECT quote(NULL::text, 'ident');
+ ERROR:  TypeError: argument 1 must be string, not None
+ CONTEXT:  PL/Python function "quote"
+ SELECT quote('abc', 'random');
+ ERROR:  plpy.Error: unrecognized quote type random
+ CONTEXT:  PL/Python function "quote"
+ DO $$ plpy.quote_literal(3) $$ LANGUAGE plpythonu;
+ ERROR:  TypeError: argument 1 must be string, not int
+ CONTEXT:  PL/Python anonymous code block
+ DO $$ plpy.quote_literal(None) $$ LANGUAGE plpythonu;
+ ERROR:  TypeError: argument 1 must be string, not None
+ CONTEXT:  PL/Python anonymous code block
+ DO $$ plpy.quote_literal({'a': 'b'}) $$ LANGUAGE plpythonu;
+ ERROR:  TypeError: argument 1 must be string, not dict
+ CONTEXT:  PL/Python anonymous code block
+ DO $$ plpy.quote_literal() $$ LANGUAGE plpythonu;
+ ERROR:  TypeError: function takes exactly 1 argument (0 given)
+ CONTEXT:  PL/Python anonymous code block
+ DO $$ plpy.quote_literal(1, 2) $$ LANGUAGE plpythonu;
+ ERROR:  TypeError: function takes exactly 1 argument (2 given)
+ CONTEXT:  PL/Python anonymous code block
+ DO $$ plpy.quote_nullable([1, 2]) $$ LANGUAGE plpythonu;
+ ERROR:  TypeError: argument 1 must be string or None, not list
+ CONTEXT:  PL/P
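
For reference, the quoting semantics exercised by the tests above can be sketched in pure Python. This is only an illustration of the expected behavior (standard SQL quote doubling; the reserved-keyword check the real quote_ident also performs is omitted), not the server-side implementation:

```python
import re

def quote_literal(s):
    # SQL string literal: wrap in single quotes and double any
    # embedded single quotes; None is rejected, as in the tests above.
    if s is None:
        raise TypeError("argument 1 must be string, not None")
    return "'" + s.replace("'", "''") + "'"

def quote_nullable(s):
    # Like quote_literal, but None becomes the keyword NULL.
    return "NULL" if s is None else quote_literal(s)

def quote_ident(s):
    # SQL identifier: quote only when needed, doubling embedded
    # double quotes.  (The real code also quotes reserved keywords.)
    if re.match(r'^[a-z_][a-z0-9_]*$', s):
        return s
    return '"' + s.replace('"', '""') + '"'
```

This reproduces the expected output above, e.g. a''bc quoting and the "a "" 'abc'" identifier case.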

Re: [HACKERS] SSI patch version 14

2011-02-06 Thread Kevin Grittner
"Kevin Grittner"  wrote:
 
> I'm working on it now, and hope to have it all settled down today.
 
Done and pushed to the git repo.  Branch serializable on:
 
git://git.postgresql.org/git/users/kgrittn/postgres.git
 
Heikki: I'm back to not having any outstanding work on the patch.
 
Does anyone want me to post a new version of the patch with recent
changes?
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] A different approach to extension NO USER DATA feature

2011-02-06 Thread Tom Lane
As I work through the extensions patch, the aspect of it that I like the
least is the NO USER DATA clause and related functions.  I think it's
badly designed, badly implemented, and doesn't solve the problem.
If we accept it as-is, we'll soon have to rework it, and at that point
we'll be left with a large backwards-compatibility issue.

To recap: the problem we need to solve is the possibility that an
extension has configuration tables whose contents might get altered
within a particular database; if so, we need to have pg_dump dump out
those alterations so that the configuration will be preserved.  This
isn't just an academic issue; contrib/tsearch2 for example had tables
exactly like that, before it got absorbed into core, and I believe
PostGIS has such today.

The solution offered by the submitted patch looks like this:

1. The CREATE EXTENSION command has an optional clause NO USER DATA,
which users are not supposed to include when initially loading an
extension, but which pg_dump will always include when emitting CREATE
EXTENSION in dump scripts.

2. There is a function pg_extension_with_user_data() that an extension's
SQL script can call to detect whether NO USER DATA was specified.

3. There is a function pg_extension_flag_dump(oid) that an extension's
SQL script can call to mark a created object as (sort of) not part of
the extension after all, so that pg_dump will dump that object anyway.

The way that you'd use these features to solve the problem is to make
the extension's SQL script flag the configuration table for dumping, and
also avoid creating the table or loading any predefined entries into it
if NO USER DATA is active.  (Which seems to mean that that option is
named backwards, but I digress.)

IMO this approach has got numerous serious problems:

1. Because the flag operates at the whole-SQL-object level, the only
way to have a configuration table is to exclude the whole table
definition from being treated as part of the extension.  This makes
it impossible to upgrade the table definition (eg, add columns or
indexes) when installing a new version of the extension.  This also
makes it really easy to break the extension during selective restores
... there's no assurance the table will get created at all.

2. Also, because we're working at the whole-SQL-object level, there's
no good way to deal with the possibility that the configuration table
contains some rows created by the extension itself and others added by
the user.  All such rows will get dumped and reloaded separately from
the extension, meaning there's no way to upgrade the extension-provided
initial data either.

3. Use of pg_extension_with_user_data() requires the extension SQL
script to contain conditional logic, which is not easy unless you want
to assume plpgsql is available.  Which extensions really should not do.
(Once again: plpgsql may be installed by default, but that doesn't mean
it's guaranteed to be present.)

4. If the SQL script uses pg_extension_with_user_data() to decide not to
create the table, then it has no way to apply pg_extension_flag_dump,
with the result that after a dump and reload cycle, the configuration
table doesn't appear to be related to the extension at all.  Thus the
user has another way to break the extension, which is to accidentally
drop the configuration table.

So basically I think this approach doesn't work, can't be made to
work, and imposes unreasonable burdens on extension authors anyway.
If we're going to support user-modifiable configuration tables,
we need to do it a different way.

After a bit of thought I believe that we can fix this if we are willing
to teach pg_dump explicitly about extension configuration tables.
The behavior we want for those is for the table schema definition to
never be dumped (the table should always be created by CREATE EXTENSION),
but for some subset of the table data to get dumped, excluding any
system-provided rows.  An extension that wants to make use of that
ability would probably need to add a boolean column "system_data" or
something similar to its configuration tables.  Having done so,
the extension's SQL script could call a function that identifies the
configuration table and tells how to decide which rows to dump,
along the lines of

pg_extension_partial_dump (table_name regclass, where_condition text)

for example,

SELECT pg_extension_partial_dump('my_table', 'WHERE NOT system_data');

I'm envisioning that this function would add that information to the
pg_extension entry, and then pg_dump would use it to select out a subset
of rows that it would dump as user data, even though it's not dumping
the table definition.

One interesting point is that for this to work during pg_upgrade,
we'd need to include those user-data rows in the upgrade dump, otherwise
they'd get lost.  So at least for pg_upgrade, those rows need to be
considered schema not data.  I'm not sure if we should handle them that
way all the time or whether pg_dump should special-case this
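
To make the proposal concrete, here is a hedged sketch of the bookkeeping involved: pg_extension_partial_dump() records a (table, filter) pair, and pg_dump later turns each pair into a COPY of only the matching rows, never dumping the table definition itself. All names mirror the hypothetical API above and are purely illustrative:

```python
# Stand-in for the per-extension list that would live in pg_extension.
extension_config_tables = []

def pg_extension_partial_dump(table_name, where_condition):
    # Called from the extension's SQL script to register a
    # configuration table and the filter selecting user-added rows.
    extension_config_tables.append((table_name, where_condition))

def dump_user_data():
    # What pg_dump would emit: data only, no CREATE TABLE, since the
    # table itself is recreated by CREATE EXTENSION on restore.
    return ["COPY (SELECT * FROM %s %s) TO STDOUT" % (table, cond)
            for table, cond in extension_config_tables]
```

For the example in the mail, registering ('my_table', 'WHERE NOT system_data') yields a COPY of just the rows where system_data is false.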

Re: [HACKERS] REVIEW: Determining client_encoding from client locale

2011-02-06 Thread Stephen Frost
Ibrar,

* Ibrar Ahmed (ibrar.ah...@gmail.com) wrote:
> I have reviewed/tested this patch.

Great, thanks for that!

> In my point code should be like this
> 
>  *if (conn->client_encoding_initial && conn->client_encoding_initial[0])
>{
>if (packet)
>{
>strcpy(packet + packet_len, "client_encoding");
>packet_len += strlen("client_encoding") + 1;
>strcpy(packet + packet_len,
> conn->client_encoding_initial);
>packet_len += strlen(conn->client_encoding_initial) +
> 1;
>   }
> }*

Makes sense to me, just reading through this email.  Have you tested
this change..?  Could you provide it as an additional patch or a new
patch including the change against head, assuming it still works well in
your testing?

> I will test this patch on Windows and will send results.

That would be great, it's not easy for me to test under Windows.

Thanks!

Stephen
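
For readers following the C snippet, the startup-packet section it builds is simply a run of NUL-terminated name/value strings. A sketch of the same logic in Python (illustrative only; libpq does this in C with strcpy and a running packet_len, and the parameter values here are made up):

```python
def append_param(packet, name, value):
    # Each startup-packet parameter is name NUL value NUL, appended
    # to the packet body just like the strcpy/packet_len code above.
    return packet + name.encode('ascii') + b'\x00' + value.encode('ascii') + b'\x00'

packet = b''
packet = append_param(packet, 'user', 'alice')

# Mirror the guard in the review: only send client_encoding when an
# initial value was actually derived from the client locale.
client_encoding_initial = 'UTF8'  # hypothetical derived value
if client_encoding_initial:
    packet = append_param(packet, 'client_encoding', client_encoding_initial)
```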


signature.asc
Description: Digital signature


Re: [HACKERS] Add support for logging the current role

2011-02-06 Thread Stephen Frost
* Itagaki Takahiro (itagaki.takah...@gmail.com) wrote:
> I agree that it's logically good design, but we could not accept it
> as long as it breaks tools in the real world...

If it does, I think it's pretty clear that those tools are themselves
broken..

> Will it break "SHOW ALL" and "SELECT * FROM pg_settings" output?
> I'm worried that they are not designed to display such a long value.

It certainly won't break those commands in PostgreSQL.  If tools made
assumptions about how long a string could be that don't match PG's
understand, those tools are broken.  This also isn't the only string and
such tools could be broken by, eg, setting log_prefix to a very long
value (entirely possible to do..).

> >> I think it depends default log filename, that contains %S (seconds)
> >> suffix. We can remove %S from log_filename; if we use a log per-day,
> >> those log might contain different columns even after restart. If we
> >> cannot avoid zigged csv fields completely, SIGHUP seems reasonable for it.
> >
> > This problem is bigger than SIGHUP, but at least with a restart
> > required, the default will work properly.  The default configuration
> > wouldn't work w/ a change to the log line and a SIGHUP done.
> 
> "Only works with the default settings" is just wrong design.

It's also not anywhere close to what I said.  If you have a suggestion
about how to fix it that's reasonable, please suggest it.  Suggesting
that "because we could, given a complicated enough configuration,
create jagged files anyway, we should encourage it to happen by allowing
the change on SIGHUP" isn't a solution to the problem and will just
create more problems.  It will certainly work just fine with more than
just the default settings, but, yes, there are some settings which would
cause PG to create jagged CSV files.

If we keep it as requiring restart and then automatically move or
truncate log files which are in the way on that restart, or decide to
not start at all, we could make it so PG doesn't create a jagged CSV
file.  I don't particularly care for any of those options.  At least
one of those would be a solution, however none of those include
allowing it on SIGHUP, which is what you're advocating.

> If we cannot provide a perfect solution, we should allow users to
> control everything as they like. I still think PGC_SIGHUP is the
> best mode for the parameter, with a note of caution in the docs.

Doing it on a SIGHUP would be *guaranteed* to create jagged CSV files
which then couldn't be loaded into PG.  I don't agree with this and the
impression I got from Tom and Andrew was that they agreed to not do it
on SIGHUP.  It's unfortunate that we seem to be at an impasse here, but
of everyone voicing opinions on it so far, it would appear that you're
in the minority.

Thanks,

Stephen




Re: [HACKERS] pl/python custom datatype parsers

2011-02-06 Thread Jan Urbański
On 04/02/11 17:19, Hitoshi Harada wrote:
> 2011/1/28 Jan Urbański :
>> On 23/12/10 15:15, Jan Urbański wrote:
>>> Here's a patch implementing custom parsers for data types mentioned in
>>> http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
>>> an incremental patch on top of the plpython-refactor patch sent eariler.
>>
>> Updated to master.
> 
> I reviewed this for some time today.

Thank you.

> The patch applies with hunks, compiles and tests are passed, though it
> looks like not having additional test along with it.

I added a simple test. I had to add an expected file for the case when
hstore is compiled without PL/Python integration.

> - in hstore_plpython.c,
>   PLyParsers parsers = {
>   .in = hstore_to_dict,
>   .out = dict_to_hstore
>   };
>   I'm not sure if this coding style is used anywhere in the core.
> Isn't this the C99 style?

Ooops, you're right. Fixed.

> - You need define custom variable class to use this feature.
> plpython.hstore = 'public.hstore'. I wonder why it's called
> plpython[u].hstore = 'public.hstore' (with 'u') because the language
> is called "plpythonu".

I think plpython.hstore was what showed up in discussion... I'd be fine
with calling the variable plpythonu.hstore, if that's the consensus.

> - typo in plpython.h,
>   Types for parsres functions that ...

Fixed.

> - I tried the sample you mention upthread,
>   regression=# select pick_one('a=>3, b=>4', 'b');
>   ERROR:  TypeError: string indices must be integers
>   CONTEXT:  PL/Python function "pick_one"
> 
>   My python is 2.4.3 again.

Hm, this means that the hstore has not been transformed into a Python
dict, but into a string, which is what happens if you *don't* have
plpython hstore integration enabled. I think that was because of an
issue with my changes to hstore's Makefile, that made it compile without
Python support, even if the sources were configured with --with-python.

There's also a gotcha: if you set plpython.hstore to 'public.hstore',
you will have to DROP (or CREATE OR REPLACE again) all functions that
accept or return hstores, because their I/O routines are already cached.
Not sure how big of a problem that is (or how to fix it in an elegant
manner). Making the parameter PGC_POSTMASTER is an easy solution... but
not very nice.

> That's it for now. It is an exciting feature and plpython will be the
> first language to think of when you're building "object database" if
> this feature is in. The design here will affect following pl/perl and
> other so it is important enough to discuss.

Yes, I ended up writing this patch as a PoC of how you can integrate
procedural languages with arbitrary addon modules, so it would be good
to have a discussion about the general mechanisms. I'm aware that this
discussion, and subsequently this patch, might be punted to 9.2
(although that would be a shame).

Cheers,
Jan
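
As background for the AttributeError seen above: without the integration enabled, the function receives the hstore in its text form (e.g. 'a=>3, b=>4') rather than a dict. A naive sketch of the text-to-dict transformation the patch performs in C (handling only simple pairs; the real hstore_to_dict works from hstore's binary representation and copes with quoting, escapes and NULL values):

```python
def hstore_text_to_dict(s):
    # Parse a simple hstore text representation into a dict.
    # Only unquoted or trivially quoted 'key=>value' pairs are handled.
    result = {}
    for pair in s.split(','):
        if not pair.strip():
            continue
        key, _, value = pair.partition('=>')
        result[key.strip().strip('"')] = value.strip().strip('"')
    return result
```

Parsed this way, the pick_one('a=>3, b=>4', 'b') example from upthread would yield '4' even without the C-level integration.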
diff --git a/contrib/hstore/Makefile b/contrib/hstore/Makefile
index 1d533fd..fd85052 100644
*** a/contrib/hstore/Makefile
--- b/contrib/hstore/Makefile
***
*** 1,8 
  # contrib/hstore/Makefile
  
  MODULE_big = hstore
  OBJS = hstore_io.o hstore_op.o hstore_gist.o hstore_gin.o hstore_compat.o \
! 	crc32.o
  
  DATA_built = hstore.sql
  DATA = uninstall_hstore.sql
--- 1,9 
  # contrib/hstore/Makefile
  
  MODULE_big = hstore
+ 
  OBJS = hstore_io.o hstore_op.o hstore_gist.o hstore_gin.o hstore_compat.o \
! 	hstore_plpython.o crc32.o
  
  DATA_built = hstore.sql
  DATA = uninstall_hstore.sql
*** top_builddir = ../..
*** 18,20 
--- 19,28 
  include $(top_builddir)/src/Makefile.global
  include $(top_srcdir)/contrib/contrib-global.mk
  endif
+ 
+ ifeq ($(with_python), yes)
+ override CFLAGS += -I$(srcdir) -I$(top_builddir)/src/pl/plpython \
+ 			$(python_includespec) -DHSTORE_PLPYTHON_SUPPORT
+ SHLIB_LINK = $(python_libspec) $(python_additional_libs) \
+ 		$(filter -lintl,$(LIBS)) $(CPPFLAGS)
+ endif
diff --git a/contrib/hstore/expected/hstore.out b/contrib/hstore/expected/hstore.out
index 354fff2..049fdd5 100644
*** a/contrib/hstore/expected/hstore.out
--- b/contrib/hstore/expected/hstore.out
*** select count(*) from testhstore where h
*** 1461,1463 
--- 1461,1502 
   1
  (1 row)
  
+ -- plpython integration
+ set plpython.hstore = 'public.hstore';
+ create language plpythonu;
+ create or replace function select_one(h hstore, idx text) returns text as $$
+ try:
+ return h.get(idx)
+ except AttributeError:
+ # h has not been transformed into a dict and is a string
+ return None
+ $$ language plpythonu;
+ select coalesce(select_one(h, 'node'), '') as node from testhstore where select_one(h, 'line')::integer < 10;
+   node  
+ 
+  AA
+  CBB
+  
+  
+  CBA
+  CBC
+  
+  
+  
+ (9 rows)
+ 
+ reset plpython.hstore;
+ create or replace function select_one(h hstore, idx text) returns text as $$
+ try:
+ return h.get(idx)
+ except AttributeError:
+ # h has not been transformed into a dict and 

Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 12:13 PM, Robert Haas  wrote:
> That's not quite so good for translators, I think.
>
> Another option is that we could just say "relation" (table, foreign
> table, etc...) or "type".  We use the word relation as a more generic
> version of table in a few other places.

Or how about passing an ObjectType?  Then we could specify
OBJECT_TABLE, OBJECT_FOREIGN_TABLE, or OBJECT_TYPE.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 8:58 AM, Noah Misch  wrote:
> On Sun, Feb 06, 2011 at 08:40:44AM -0500, Noah Misch wrote:
>> On Sun, Feb 06, 2011 at 07:54:52AM -0500, Robert Haas wrote:
>> > Yeeah, that's actually a little ugly.   It's actually a domain
>> > over a composite type, not a composite type proper, IIUC. Better
>> > ideas?
>>
>> There are no domains over composite types.  get_rels_with_domain() finds all
>> relations having columns of the (scalar) domain type.  It then calls
>> find_composite_type_dependencies to identify uses of the composite types
>> discovered in the previous step.
>>
>> Honestly, RELKIND_COMPOSITE_TYPE is a reasonable choice despite the technical
>> mismatch.  One more-correct approach would be to have two arguments, a 
>> catalog
>> OID (pg_type or pg_class, currently) and a relkind, 0 when the catalog OID !=
>> pg_class.  Might be an improvement, albeit a minor one.
>
> Scratch that.  How about classid and objid arguments, passing them to
> getObjectDescription() internally?  We already do something very similar in
> ATExecAlterColumnType for a related case.

That's not quite so good for translators, I think.

Another option is that we could just say "relation" (table, foreign
table, etc...) or "type".  We use the word relation as a more generic
version of table in a few other places.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SAVEPOINTs and COMMIT performance

2011-02-06 Thread Bruce Momjian

Did this ever get addressed?

---

Simon Riggs wrote:
> 
> As part of a performance investigation for a customer I've noticed an
> O(N^2) performance issue on COMMITs of transactions that contain many
> SAVEPOINTs. I've consistently measured COMMIT times of around 9 seconds,
> with 49% CPU, mostly in LockReassignCurrentOwner().
> 
> BEGIN;
> INSERT...
> SAVEPOINT ...
> INSERT...
> SAVEPOINT ...
> ... (repeat 10,000 times)
> COMMIT;
> 
> The way SAVEPOINTs work is that each is nested within the previous one,
> so that at COMMIT time we must recursively commit all the
> subtransactions before we issue final commit.
> 
> That's a shame because ResourceOwnerReleaseInternal() contains an
> optimisation to speed up final commit, by calling ProcReleaseLocks().
> 
> What we actually do is recursively call LockReassignCurrentOwner() which
> sequentially scans LockMethodLocalHash at each level of transaction. The
> comments refer to this as "retail" rather than the wholesale method,
> which never gets to execute anything worthwhile in this case.
> 
> This issue does NOT occur in PLpgSQL functions that contain many
> EXCEPTION clauses in a loop, since in that case the subtransactions are
> started and committed from the top level so that the subxact nesting
> never goes too deep.
> 
> Fix looks like we need special handling for the depth-first case, rather
> than just a recursion loop in CommitTransactionCommand().
> 
> Issues looks like it goes all the way back, no fix for 9.0.
> 
> I notice also that the nesting model of SAVEPOINTs also means that
> read-only subtransactions will still generate an xid when followed by a
> DML statement. That's unnecessary, but required given current design.
> 
> -- 
>  Simon Riggs   www.2ndQuadrant.com
>  PostgreSQL Development, 24x7 Support, Training and Services
> 
> 

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
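
The quadratic cost Simon describes can be modeled directly: committing each of the N nested subtransactions scans the whole local lock table once, so N savepoints that each took a lock cost on the order of N*N probes. A sketch of the two paths (an illustrative cost model, not the backend code):

```python
def retail_commit_cost(num_locks, depth):
    # Model of the "retail" LockReassignCurrentOwner() path: each
    # nested subtransaction commit scans the whole local lock table
    # to reassign its locks one resource owner up.
    probes = 0
    for _level in range(depth):
        probes += num_locks  # one pass over LockMethodLocalHash
    return probes

def wholesale_commit_cost(num_locks):
    # The ProcReleaseLocks() optimisation at top-level commit:
    # one pass, regardless of nesting depth.
    return num_locks
```

With 10,000 savepoints each holding a lock, the retail path does on the order of 10^8 probes, consistent with the multi-second COMMIT times reported.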



Re: [HACKERS] multiple -f support

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 11:16 AM, Bruce Momjian  wrote:
> I assume having psql support multiple -f files is not a high priority or
> something we don't want.

IIRC, nobody objected to the basic concept, and it seems useful.  I
thought we were pretty close to committing something along those lines
at one point, actually.  I don't remember exactly where the wheels
came off.

Maybe a TODO?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] keeping a timestamp of the last stats reset (for a db, table and function)

2011-02-06 Thread Tomas Vondra
On 6.2.2011 08:17, Greg Smith wrote:
> Below is what changed since the last posted version, mainly as feedback
> for Tomas:
> 
> -Explained more clearly that pg_stat_reset and
> pg_stat_reset_single_counters will both touch the database reset time,
> and that it's initialized upon first connection to the database.
> 
> -Added the reset time to the list of fields in pg_stat_database and
> pg_stat_bgwriter.
> 
> -Fixed some tab/whitespace issues.  It looks like you had tab stops set
> at 8 characters during some points when you were editing non-code
> files.  Also, there were a couple of spot where you used a tab while
> text in the area used spaces.  You can normally see both types of errors
> if you read a patch, they showed up as misaligned things in the context
> diff.
> 
> -Removed some extra blank lines that didn't fit the style of the
> surrounding code.
> 
> Basically, all the formatting bits I'm nitpicking about I found just by
> reading the patch itself; they all stuck right out.  I'd recommend a
> pass of that before submitting things if you want to try and avoid those
> in the future.

Thanks for fixing the issues and for the review in general. I have to
tune the IDE a bit to produce the right format and be a bit more careful
when creating the patches.

regards
Tomas



Re: [HACKERS] SSI patch version 14

2011-02-06 Thread Kevin Grittner
I wrote:
 
> First cut [of having predicate locks linked to later row versions
> for conflict detection purposes]
 
> It passes all the usual regression tests, the new isolation tests,
> and the example posted earlier in the thread of a test case which
> was allowing an anomaly. (That is to say, the anomaly is now
> prevented.)
> 
> I didn't get timings, but it *seems* noticeably slower; hopefully
> that's either subjective or fixable. Any feedback on whether this
> seems a sane approach to the issue is welcome.
 
After sleeping on it and reviewing the patch this morning, I'm
convinced that this is fundamentally right, but the recursion in
RemoveTargetIfNoLongerUsed() can't work.  I'll have to just break the
linkage there and walk the HTAB in the general cleanup to eliminate
orphaned targets.
 
I'm working on it now, and hope to have it all settled down today.  I
fear I've messed up Heikki's goal of a commit this weekend, but I
think this is a "must fix".
 
-Kevin



Re: [HACKERS] pl/python explicit subtransactions

2011-02-06 Thread Jan Urbański
On 02/02/11 14:16, Steve Singer wrote:
> On 11-01-27 05:11 PM, Jan Urbański wrote:
>> On 23/12/10 15:32, Jan Urbański wrote:
>>> Here's a patch implementing explicitly starting subtransactions
>>> mentioned in
>>> http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
>>> an incremental patch on top of the spi-in-subxacts patch sent eariler.
>>
>> Updated to the spi-in-subxacts version sent earlier.
>>
> 
> Submission Review
> -
> 
> The patch applies against master.
> Test updates are included.
> 
> The patch doesn't included any documentation updates.  The author did
> mention that he'll do these if it looks like the patch is going to be
> accepted.

PFA an updated patch with documentation.

> The plpython_subxact regression test you addded is failing on both
> python3 and 2.4 for me.  It seems to be creating functions with the same
> name twice and the second time is failing with "ERROR: function ."
> already exists.  I think this is just an issue with your expect files.

The expect files for older Pythons were broken by the change to include
HINT and DETAIL messages when errors are reported from Python. I fixed
them and now the regression tests are passing for me on Python 2.4, 2.6
and 3.1.

> Code Review
> 
> 
> 
> PLy_abort_open_subtransactions(...) line 1215:
> 
> ereport(WARNING,
> (errmsg("Forcibly aborting a subtransaction "
> "that has not been exited")));
> 
> "Forcibly" should be "forcibly" (lower case)
> 
> Similarly in PLy_subxact_enter and PLy_subxact_exit a few
> PLy_exception_set calls start with an upper case character when I think
> we want it to be lower case.

Yeah, changed them.

Thanks,
Jan
diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml
index e05c293..9cd4879 100644
*** a/doc/src/sgml/plpython.sgml
--- b/doc/src/sgml/plpython.sgml
*** $$ LANGUAGE plpythonu;
*** 943,949 
  

  
!   
 Trapping Errors
  
 
--- 943,949 
  

  
!   
 Trapping Errors
  
 
*** CREATE FUNCTION try_adding_joe() RETURNS
*** 966,971 
--- 966,1084 
  $$ LANGUAGE plpythonu;
  
 
+   
+  
+ 
+  
+   Explicit subtransactions
+   
+ Recovering from errors caused by database access as described
+ in  can lead to an undesirable situation
+ where some operations succeed before one of them fails and after recovering
+ from that error the data is left in an inconsistent state. PL/Python offers
+ a solution to this problem in the form of explicit subtransactions.
+   
+ 
+   
+Subtransaction context managers
+
+  Consider a function that implements a transfer between two accounts:
+ 
+ CREATE FUNCTION transfer_funds() RETURNS void AS $$
+ try:
+ plpy.execute("UPDATE accounts SET balance = balance - 100 WHERE account_name = 'joe'")
+ plpy.execute("UPDATE accounts SET balance = balance + 100 WHERE account_name = 'mary'")
+ except plpy.SPIError, e:
+ result = "error transferring funds: %s" % e.args
+ else:
+ result = "funds transferred correctly"
+ plpy.execute("INSERT INTO operations(result) VALUES ('%s')" % result)
+ $$ LANGUAGE plpythonu;
+ 
+  If the second UPDATE statement results in an exception
+  being raised, this function will report the error, but the result of the
+  first UPDATE will nevertheless be committed. In other
+  words, the funds will be withdrawn from Joe's account, but will not be
+  transferred to Mary's account.
+
+
+  To avoid such issues, you can wrap your plpy.execute
+  calls in an explicit subtransaction. The plpy module
+  provides a helper object to manage explicit subtransactions that gets
+  created with the plpy.subtransaction() function. Objects created
+  by this function implement the context manager interface as defined by
+  <ulink url="http://www.python.org/dev/peps/pep-0343/">PEP 343</ulink>.
+  Using explicit subtransactions, we can rewrite our function as:
+ 
+ CREATE FUNCTION transfer_funds2() RETURNS void AS $$
+ try:
+ with plpy.subtransaction():
+ plpy.execute("UPDATE accounts SET balance = balance - 100 WHERE account_name = 'joe'")
+ plpy.execute("UPDATE accounts SET balance = balance + 100 WHERE account_name = 'mary'")
+ except plpy.SPIError, e:
+ result = "error transferring funds: %s" % e.args
+ else:
+ result = "funds transferred correctly"
+ plpy.execute("INSERT INTO operations(result) VALUES ('%s')" % result)
+ $$ LANGUAGE plpythonu;
+ 
+  Note that the use of try/except is still
+  required. Otherwise the exception would propagate to the top of the Python
+  stack and would cause the whole function to abort with
+  a PostgreSQL error.
+  The operations table would not have any row inserted
+  into it. The subtransaction context manager does not trap errors, it only
+  assures that all database operations execute
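
To spell out the PEP 343 protocol the documentation patch refers to, here is a minimal, purely illustrative context manager with the same shape as plpy.subtransaction(): entering starts a subtransaction, a clean exit commits it, and an exception rolls it back and propagates (it is not swallowed, which is why the try/except in the example is still needed). The in-memory "database" list is a stand-in for real subtransaction machinery:

```python
class Subtransaction(object):
    # Illustrative model of plpy.subtransaction(); the real object
    # drives PostgreSQL subtransactions, not a Python list.
    def __init__(self, db):
        self.db = db

    def __enter__(self):
        self.snapshot = list(self.db)  # "begin subtransaction"
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            self.db[:] = self.snapshot  # "rollback subtransaction"
        # Returning False re-raises the exception to the caller,
        # matching plpy.subtransaction()'s behaviour.
        return False

accounts = ['joe:100']
try:
    with Subtransaction(accounts):
        accounts.append('mary:+100')
        raise RuntimeError("second UPDATE failed")
except RuntimeError:
    pass
# The append was rolled back along with the failed operation.
```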

Re: [HACKERS] multiple -f support

2011-02-06 Thread Bruce Momjian

I assume having psql support multiple -f files is not a high priority or
something we don't want.

---

Mark Wong wrote:
> Hi all,
> 
> I took a stab at changing this up a little bit.  I pushed the logic
> that David introduced down into process_file().  In doing so I changed
> up the declaration of process_file() to accept an additional parameter
> specifying how many files are being passed to the function.  Doing it
> this way also makes use of the logic for the current single
> transaction flag without turning it into dead code.
> 
> I also added a check to return EXIT_FAILURE if any error was thrown
> from any of the sql files used, thus stopping execution of any further
> file.  Gabrielle tells me the error echoed in this event is not clear
> so there is still a little more work that needs to be done.
> 
> Regards,
> Mark

[ Attachment, skipping... ]

> 

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] REVIEW: PL/Python table functions

2011-02-06 Thread Jan Urbański
On 04/02/11 16:26, Hitoshi Harada wrote:
> 2011/1/28 Jan Urbański :
>> On 27/01/11 00:41, Jan Urbański wrote:
>>> I'm also attaching an updated version that should apply on top of my
>>> github refactor branch (or incrementally over the new set of refactor
>>> patches that I will post shortly to the refactor thread).
>>
>> Attached is a patch for master, as the refactorings have already been
>> merged.
> 
> Sorry, but could you update your patch? Patching it against HEAD today
> makes rej.

Sure, here's an updated patch.

Jan 
diff --git a/src/pl/plpython/Makefile b/src/pl/plpython/Makefile
index 16d78ae..167393e 100644
*** a/src/pl/plpython/Makefile
--- b/src/pl/plpython/Makefile
*** REGRESS = \
*** 79,84 
--- 79,85 
  	plpython_types \
  	plpython_error \
  	plpython_unicode \
+ 	plpython_composite \
  	plpython_drop
  # where to find psql for running the tests
  PSQLDIR = $(bindir)
diff --git a/src/pl/plpython/expected/plpython_composite.out b/src/pl/plpython/expected/plpython_composite.out
index ...1576588 .
*** a/src/pl/plpython/expected/plpython_composite.out
--- b/src/pl/plpython/expected/plpython_composite.out
***
*** 0 
--- 1,309 
+ CREATE FUNCTION multiout_simple(OUT i integer, OUT j integer) AS $$
+ return (1, 2)
+ $$ LANGUAGE plpythonu;
+ SELECT multiout_simple();
+  multiout_simple 
+ -
+  (1,2)
+ (1 row)
+ 
+ SELECT * FROM multiout_simple();
+  i | j 
+ ---+---
+  1 | 2
+ (1 row)
+ 
+ SELECT i, j + 2 FROM multiout_simple();
+  i | ?column? 
+ ---+--
+  1 |4
+ (1 row)
+ 
+ SELECT (multiout_simple()).j + 3;
+  ?column? 
+ --
+ 5
+ (1 row)
+ 
+ CREATE FUNCTION multiout_simple_setof(n integer = 1, OUT integer, OUT integer) RETURNS SETOF record AS $$
+ return [(1, 2)] * n
+ $$ LANGUAGE plpythonu;
+ SELECT multiout_simple_setof();
+  multiout_simple_setof 
+ ---
+  (1,2)
+ (1 row)
+ 
+ SELECT * FROM multiout_simple_setof();
+  column1 | column2 
+ -+-
+1 |   2
+ (1 row)
+ 
+ SELECT * FROM multiout_simple_setof(3);
+  column1 | column2 
+ -+-
+1 |   2
+1 |   2
+1 |   2
+ (3 rows)
+ 
+ CREATE FUNCTION multiout_record_as(typ text,
+first text, OUT first text,
+second integer, OUT second integer,
+retnull boolean) RETURNS record AS $$
+ if retnull:
+ return None
+ if typ == 'dict':
+ return { 'first': first, 'second': second, 'additionalfield': 'must not cause trouble' }
+ elif typ == 'tuple':
+ return ( first, second )
+ elif typ == 'list':
+ return [ first, second ]
+ elif typ == 'obj':
+ class type_record: pass
+ type_record.first = first
+ type_record.second = second
+ return type_record
+ $$ LANGUAGE plpythonu;
+ SELECT * from multiout_record_as('dict', 'foo', 1, 'f');
+  first | second 
+ ---+
+  foo   |  1
+ (1 row)
+ 
+ SELECT multiout_record_as('dict', 'foo', 1, 'f');
+  multiout_record_as 
+ 
+  (foo,1)
+ (1 row)
+ 
+ SELECT *, s IS NULL as snull from multiout_record_as('tuple', 'xxx', NULL, 'f') AS f(f, s);
+   f  | s | snull 
+ -+---+---
+  xxx |   | t
+ (1 row)
+ 
+ SELECT *, f IS NULL as fnull, s IS NULL as snull from multiout_record_as('tuple', 'xxx', 1, 't') AS f(f, s);
+  f | s | fnull | snull 
+ ---+---+---+---
+|   | t | t
+ (1 row)
+ 
+ SELECT * from multiout_record_as('obj', NULL, 10, 'f');
+  first | second 
+ ---+
+| 10
+ (1 row)
+ 
+ CREATE FUNCTION multiout_setof(n integer,
+OUT power_of_2 integer,
+OUT length integer) RETURNS SETOF record AS $$
+ for i in range(n):
+ power = 2 ** i
+ length = plpy.execute("select length('%d')" % power)[0]['length']
+ yield power, length
+ $$ LANGUAGE plpythonu;
+ SELECT * FROM multiout_setof(3);
+  power_of_2 | length 
+ +
+   1 |  1
+   2 |  1
+   4 |  1
+ (3 rows)
+ 
+ SELECT multiout_setof(5);
+  multiout_setof 
+ 
+  (1,1)
+  (2,1)
+  (4,1)
+  (8,1)
+  (16,2)
+ (5 rows)
+ 
+ CREATE FUNCTION multiout_return_table() RETURNS TABLE (x integer, y text) AS $$
+ return [{'x': 4, 'y' :'four'},
+ {'x': 7, 'y' :'seven'},
+ {'x': 0, 'y' :'zero'}]
+ $$ LANGUAGE plpythonu;
+ SELECT * FROM multiout_return_table();
+  x |   y   
+ ---+---
+  4 | four
+  7 | seven
+  0 | zero
+ (3 rows)
+ 
+ CREATE FUNCTION multiout_array(OUT integer[], OUT text) RETURNS SETOF record AS $$
+ yield [[1], 'a']
+ yield [[1,2], 'b']
+ yield [[1,2,3], None]
+ $$ LANGUAGE plpythonu;
+ SELECT * FROM multiout_array();
+  column1 | column2 
+ -+-
+  {1} | a
+  {1,2}   | b
+  {1,2,3} | 
+ (3 rows)
+ 
+ CREATE FUNCTION singleout_composite(OUT type_record) AS $$
+ return {'first': 1, 'second

Re: [HACKERS] 64-bit pgbench V2

2011-02-06 Thread Bruce Momjian

What happened to this idea/patch?

---

Greg Smith wrote:
> Tom Lane wrote:
> > Please choose a way that doesn't introduce new portability assumptions.
> > The backend gets along fine without strtoll, and I don't see why pgbench
> > should have to require it.
> >   
> 
> Funny you should mention this...it turns out there is some code already 
> there, I just didn't notice it before because it's only the unsigned 
> 64-bit strtoul used, not the signed one I was looking for, and it's only 
> called in one place I didn't previously check.  
> src/interfaces/ecpg/ecpglib/data.c does this:
> 
> *((unsigned long long int *) (var + offset * act_tuple)) = 
> strtoull(pval, &scan_length, 10);
> 
> The appropriate autoconf magic was in the code all along for both 
> versions, so my bad not noticing it until now.  It even transparently 
> remaps the BSD-ism of calling it strtoq.
> 
> I suspect that this alone isn't sufficient to make the code I'm trying 
> to wedge into pgbench to always work on the platforms I consider must 
> haves, because of the weird names like _strtoi64 that Windows uses:  
> http://msdn.microsoft.com/en-us/library/h80404d3(v=VS.80).aspx  In fact, 
> I wouldn't be surprised to discover the ECPG code above doesn't do the 
> right thing if compiled with a 64-bit MSVC version.  Don't expect that's 
> a popular combination to explicitly test in a way that hits the code 
> path where this line is at.
> 
> The untested (I need to setup for building Windows to really confirm 
> this works) next patch attempt I've attached does what I think is the 
> right general sort of thing here.  It extends the autoconf remapping 
> that was already being done to include the second variation on how the 
> function needed can be named in a MSVC build.  This might improve the 
> ECPG compatibility issue I theorize could be there on that platform.  
> Given the autoconf stuff and use of the unsigned version was already a 
> dependency, I'd rather improve that code (so it's more obvious when it 
> is broken) than do the refactoring work suggested to re-use the server's 
> internal 64-bit parsing method instead.  I could split this into two 
> patches instead--"add 64-bit strtoull/strtoll support for MSVC" on the 
> presumption it's actually broken now (possibly wrong on my part) and 
> "make pgbench use 64-bit values"--but it's not so complicated as one.
> 
> I expect there is almost zero overlap between "needs pgbench setshell to 
> return >32 bit return values" and "not on a platform with a working 
> 64-bit strtoull variation".  What I did to hedge against that was add a 
> little check to pgbench that lets you confirm whether setshell lines are 
> limited to 32 bits or not, depending on whether the appropriate function 
> was found.  It tries to fall back to the existing strtol in that case, 
> and I've put a note when that happens (and matching documentation to 
> look for it) into the debug output of the program.
> 
> I'll continue with testing work here, but what's attached is now the 
> first form I think this could potentially be committed in once it's 
> known to be free of obvious bugs (testing at this database scale takes 
> forever).  I can revisit not using the library function instead if Tom 
> or someone else really opposes this new approach.  Given most of the 
> autoconf bits are already there and the limited number of platforms 
> where this is a problem, I think there's little gain for doing that work 
> though.
> 
> Style/functional suggestions appreciated.
> 
> -- 
> Greg Smith  2ndQuadrant US  Baltimore, MD
> PostgreSQL Training, Services and Support
> g...@2ndquadrant.com   www.2ndQuadrant.us
> 


> 

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Add support for logging the current role

2011-02-06 Thread Itagaki Takahiro
On Sun, Feb 6, 2011 at 23:31, Stephen Frost  wrote:
> * Itagaki Takahiro (itagaki.takah...@gmail.com) wrote:
>> I think we need to improve postgresql.conf.sample a bit more, especially
>> the long line for #log_csv_fields = '...'. 330 characters in it!
>>   #1. Leave the long line because it is needed.
> It's needed to match what the current default is..

I agree that it's logically a good design, but we cannot accept it
as long as it breaks tools in the real world...
Will it break "SHOW ALL" and "SELECT * FROM pg_settings" output?
I'm worried that they are not designed to display such a long value.

>> I think it depends on the default log filename, which contains a %S (seconds)
>> suffix. We can remove %S from log_filename; if we use one log per day,
>> those logs might contain different columns even after a restart. If we
>> cannot avoid jagged CSV fields completely, SIGHUP seems reasonable for it.
>
> This problem is bigger than SIGHUP, but at least with a restart
> required, the default will work properly.  The default configuration
> wouldn't work w/ a change to the log line and a SIGHUP done.

"Only works with the default settings" is just wrong design.
If we cannot provide a perfect solution, we should allow users to
control everything as they like. I still think PGC_SIGHUP is the
best mode for the parameter, with a note of caution in the docs.

-- 
Itagaki Takahiro



Re: [HACKERS] Add support for logging the current role

2011-02-06 Thread Stephen Frost
* Itagaki Takahiro (itagaki.takah...@gmail.com) wrote:
> I think we need to improve postgresql.conf.sample a bit more, especially
> the long line for #log_csv_fields = '...'. 330 characters in it!
>   #1. Leave the long line because it is needed.

It's needed to match what the current default is..

>   #2. Hide the variable from the default conf.

I don't like that idea.

>   #3. Use short %x mnemonic both in log_line_prefix and log_csv_fields.
>   (It might require many additional mnemonics.)

It would require a lot more, and wouldn't scale well either (this was
discussed previously..).

> I think it depends on the default log filename, which contains a %S (seconds)
> suffix. We can remove %S from log_filename; if we use one log per day,
> those logs might contain different columns even after a restart. If we
> cannot avoid jagged CSV fields completely, SIGHUP seems reasonable for it.

This problem is bigger than SIGHUP, but at least with a restart
required, the default will work properly.  The default configuration
wouldn't work w/ a change to the log line and a SIGHUP done.  If you
change the log filename then it could generate jagged CSV files even on
restart.  We'd have to move the old log file out of the way when/if we
detect that it had a different CSV format and that's really just not
practical.  We don't want to cause problems for people, but I don't
think there's a way to prevent jagged CSVs if they're going to go out of
their way to configure PG to create them.

> How about changing the type of csv_log_fields from List* to fixed
> array of LogCSVFields? If so, we can use an array-initializer
> instead of build_default_csvlog_list() ?  The code will be simplified.
> Fixed length won't be a problem because it would be rare that the
> same field are specified many times.

I really don't think changing it to an array is necessary or even
particularly helpful, and I don't agree that the code would actually
be simpler- you still have to make sure the CSV fields are kept in
the order requested by the user.  Also, you'd have to remember to update
the length of the static array every time a new log option was added or
risk writing past the end of the array..

> Could you try to "make html" in the doc directory?
> Your new documentation after
> | These columns may be included in the CSV output:
> will be unaligned plain text without some tags.

Alright, I can take a look at improving the documentation situation,
though I certainly wouldn't complain if someone who has it all working
already were to help..

Thanks,

Stephen




Re: [HACKERS] Re: patch: fix performance problems with repeated decompression of varlena values in plpgsql

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 8:47 AM, Noah Misch  wrote:
> On Sun, Feb 06, 2011 at 08:15:30AM -0500, Robert Haas wrote:
>> On Sun, Feb 6, 2011 at 5:52 AM, Noah Misch  wrote:
>> > 1. Add PLpgSQL_var.should_be_detoasted; check it in plpgsql_param_fetch().
>> > Essentially Pavel's original patch, only with the check logic moved up from
>> > exec_eval_datum() to plpgsql_param_fetch() to avoid bothering a couple other
>> > callers that would not benefit.  Tom and Robert objected to the new
>> > bookkeeping.
>>
>> I don't understand why it's necessary.  It seems to me that the case
>> we're concerned about is when someone is referencing a variable that
>> is toasted which they might later want to reference again.  We're
>> going to have to notice that the value is toasted and detoast it
>> anyway before we can really do anything with it.  So why can't we
>> arrange to overwrite the *source* of the data we're fetching with the
>> detoasted version?
>>
>> I know this is probably a stupid question, but i don't understand the
>> code well enough to see why this can't work.
>
> The detoast currently happens well after PL/pgSQL has handed off the datum.
> Consider this function, my original benchmark when reviewing this patch:
>
>  CREATE OR REPLACE FUNCTION f(runs int) RETURNS void LANGUAGE plpgsql AS $$
>  DECLARE
>        foo     text;
>  BEGIN
>        SELECT c INTO foo FROM t;
>        FOR n IN 1 .. runs LOOP
>                PERFORM foo < 'x';
>        END LOOP;
>  END
>  $$;
>
> Suppose "foo" is toasted.  As the code stands in master, it gets detoasted in
> text_lt().  Certainly we won't overwrite the source back in PL/pgSQL from the
> detoast point in text_lt().

Right, that much seems obvious...

> Pavel's optimization requires that we identify the
> need to detoast the datum earlier and do so preemptively.

I guess I need to look at the patch more.  I'm failing to understand
why that can't be done within one or two functions, without passing
around a new piece of state.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Noah Misch
On Sun, Feb 06, 2011 at 08:40:44AM -0500, Noah Misch wrote:
> On Sun, Feb 06, 2011 at 07:54:52AM -0500, Robert Haas wrote:
> > Yeah, that's actually a little ugly.  It's actually a domain
> > over a composite type, not a composite type proper, IIUC. Better
> > ideas?
> 
> There are no domains over composite types.  get_rels_with_domain() finds all
> relations having columns of the (scalar) domain type.  It then calls
> find_composite_type_dependencies to identify uses of the composite types
> discovered in the previous step.
> 
> Honestly, RELKIND_COMPOSITE_TYPE is a reasonable choice despite the technical
> mismatch.  One more-correct approach would be to have two arguments, a catalog
> OID (pg_type or pg_class, currently) and a relkind, 0 when the catalog OID !=
> pg_class.  Might be an improvement, albeit a minor one.

Scratch that.  How about classid and objid arguments, passing them to
getObjectDescription() internally?  We already do something very similar in
ATExecAlterColumnType for a related case.



Re: [HACKERS] Re: patch: fix performance problems with repeated decompression of varlena values in plpgsql

2011-02-06 Thread Noah Misch
On Sun, Feb 06, 2011 at 08:15:30AM -0500, Robert Haas wrote:
> On Sun, Feb 6, 2011 at 5:52 AM, Noah Misch  wrote:
> > 1. Add PLpgSQL_var.should_be_detoasted; check it in plpgsql_param_fetch().
> > Essentially Pavel's original patch, only with the check logic moved up from
> > exec_eval_datum() to plpgsql_param_fetch() to avoid bothering a couple other
> > callers that would not benefit.  Tom and Robert objected to the new
> > bookkeeping.
> 
> I don't understand why it's necessary.  It seems to me that the case
> we're concerned about is when someone is referencing a variable that
> is toasted which they might later want to reference again.  We're
> going to have to notice that the value is toasted and detoast it
> anyway before we can really do anything with it.  So why can't we
> arrange to overwrite the *source* of the data we're fetching with the
> detoasted version?
> 
> I know this is probably a stupid question, but i don't understand the
> code well enough to see why this can't work.

The detoast currently happens well after PL/pgSQL has handed off the datum.
Consider this function, my original benchmark when reviewing this patch:

  CREATE OR REPLACE FUNCTION f(runs int) RETURNS void LANGUAGE plpgsql AS $$
  DECLARE
foo text;
  BEGIN
SELECT c INTO foo FROM t;
FOR n IN 1 .. runs LOOP
PERFORM foo < 'x';
END LOOP;
  END
  $$;

Suppose "foo" is toasted.  As the code stands in master, it gets detoasted in
text_lt().  Certainly we won't overwrite the source back in PL/pgSQL from the
detoast point in text_lt().  Pavel's optimization requires that we identify the
need to detoast the datum earlier and do so preemptively.

Thanks,
nm



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Noah Misch
On Sun, Feb 06, 2011 at 07:54:52AM -0500, Robert Haas wrote:
> On Sun, Feb 6, 2011 at 4:15 AM, Noah Misch  wrote:
> >> That didn't quite work, because there's a caller in typecmds.c that
> >> doesn't have the relation handy.  So I made it take a relkind and a
> >> name, which works fine.
> >
> Hmm, indeed.  In get_rels_with_domain(), it's a scalar type.
> 
> Yeah, that's actually a little ugly.  It's actually a domain
> over a composite type, not a composite type proper, IIUC. Better
> ideas?

There are no domains over composite types.  get_rels_with_domain() finds all
relations having columns of the (scalar) domain type.  It then calls
find_composite_type_dependencies to identify uses of the composite types
discovered in the previous step.

Honestly, RELKIND_COMPOSITE_TYPE is a reasonable choice despite the technical
mismatch.  One more-correct approach would be to have two arguments, a catalog
OID (pg_type or pg_class, currently) and a relkind, 0 when the catalog OID !=
pg_class.  Might be an improvement, albeit a minor one.

> >> The attached patch takes this approach.  It's very slightly more code,
> >> but it reduces the amount of spooky action at a distance.
> >
> >> Comments?
> >
> > Your patch improves the code.  My standard for commending a refactor-only
> > patch is rather high, though, and this patch doesn't reach it.  The ancestral
> > code placement wasn't obviously correct, but neither is this.  So I'd vote -0.
> 
> Well, what's your suggestion, then?  Your patch pops the test up from
> ATRewriteTable() up to ATRewriteTables(), but that's not obviously
> correct either, and it's an awkward place to do it because we don't
> have the Relation object handy at the point where the check needs to
> be done.

Agreed.  I couldn't think of any grand improvements, so my patch bore the
conceptually-smallest change I could come up with to handle that particular
issue.  That is, I felt it was the change that could most easily be verified as
rejecting the same cases as the old test.

nm



Re: [HACKERS] Re: patch: fix performance problems with repeated decompression of varlena values in plpgsql

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 5:52 AM, Noah Misch  wrote:
> 1. Add PLpgSQL_var.should_be_detoasted; check it in plpgsql_param_fetch().
> Essentially Pavel's original patch, only with the check logic moved up from
> exec_eval_datum() to plpgsql_param_fetch() to avoid bothering a couple other
> callers that would not benefit.  Tom and Robert objected to the new 
> bookkeeping.

I don't understand why it's necessary.  It seems to me that the case
we're concerned about is when someone is referencing a variable that
is toasted which they might later want to reference again.  We're
going to have to notice that the value is toasted and detoast it
anyway before we can really do anything with it.  So why can't we
arrange to overwrite the *source* of the data we're fetching with the
detoasted version?

I know this is probably a stupid question, but i don't understand the
code well enough to see why this can't work.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Robert Haas
On Sun, Feb 6, 2011 at 4:15 AM, Noah Misch  wrote:
>> That didn't quite work, because there's a caller in typecmds.c that
>> doesn't have the relation handy.  So I made it take a relkind and a
>> name, which works fine.
>
> Hmm, indeed.  In get_rels_with_domain(), it's a scalar type.

Yeah, that's actually a little ugly.  It's actually a domain
over a composite type, not a composite type proper, IIUC. Better
ideas?

>> The attached patch takes this approach.  It's very slightly more code,
>> but it reduces the amount of spooky action at a distance.
>
>> Comments?
>
> Your patch improves the code.  My standard for commending a refactor-only 
> patch
> is rather high, though, and this patch doesn't reach it.  The ancestral code
> placement wasn't obviously correct, but neither is this.  So I'd vote -0.

Well, what's your suggestion, then?  Your patch pops the test up from
ATRewriteTable() up to ATRewriteTables(), but that's not obviously
correct either, and it's an awkward place to do it because we don't
have the Relation object handy at the point where the check needs to
be done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Re: patch: fix performance problems with repeated decompression of varlena values in plpgsql

2011-02-06 Thread Noah Misch
Let's see if I can summarize the facts we've collected so far.  I see four
options based on the discussion:

1. Add PLpgSQL_var.should_be_detoasted; check it in plpgsql_param_fetch().
Essentially Pavel's original patch, only with the check logic moved up from
exec_eval_datum() to plpgsql_param_fetch() to avoid bothering a couple other
callers that would not benefit.  Tom and Robert objected to the new bookkeeping.

2. Deduce the need to detoast and do so in plpgsql_param_fetch().  Avoids the
new bookkeeping.  Tom objected to the qualitative performance implications, and
Pavel measured a 3% performance regression.

3. Deduce the need to detoast and do so in exec_assign_value().  Tom's proposal.
This avoids the new bookkeeping and does not touch a hot path.  It could
eliminate a datum copy in some cases.  Pavel, Noah, Heikki and Robert objected
to the detoasts of never-referenced variables.

4. Change nothing.


Or to perhaps put it even simpler:
1. Swallow some ugly bookkeeping.
2. Slow down a substantial range of PL/pgSQL code to a small extent.
3. Slow down unused-toasted-variable use cases to a large extent.
4. Leave the repeated-detoast use cases slow to a large extent.

In principle, given access to a global profile of PL/pgSQL usage, we could
choose objectively between #2, #3 and #4.  I can't see an objective method for
choosing between #1 and the others; we'd need a conversion factor between the
value of the performance improvement and the cost of that code.  In practice,
we're in wholly subjective territory.

nm
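[Editor's note: the trade-off behind options 1-3 versus option 4 can be illustrated outside the backend with a toy model. This is plain Python with hypothetical names; the real patch operates on PL/pgSQL datums and heap_tuple_untoast_attr(), not Python objects.]

```python
# Toy model of the detoast-caching trade-off discussed above.
# detoast() stands in for the (expensive) decompression of a toasted
# varlena value; Var models a PL/pgSQL variable.

DETOAST_CALLS = 0

def detoast(value):
    """Stand-in for detoasting: count calls, apply a visible transform."""
    global DETOAST_CALLS
    DETOAST_CALLS += 1
    return value.upper()

class Var:
    """A variable that may hold a 'toasted' (compressed) value."""
    def __init__(self, raw):
        self.raw = raw
        self.cached = None          # options 1-3: remember the detoasted form

    def fetch_repeated(self):
        # status quo (option 4): every reference detoasts again
        return detoast(self.raw)

    def fetch_cached(self):
        # options 1-3: detoast once, reuse on later references
        if self.cached is None:
            self.cached = detoast(self.raw)
        return self.cached

v = Var("abc")
for _ in range(1000):
    v.fetch_repeated()
repeated = DETOAST_CALLS            # 1000 detoasts

DETOAST_CALLS = 0
for _ in range(1000):
    v.fetch_cached()
cached = DETOAST_CALLS              # 1 detoast

print(repeated, cached)             # 1000 1
```

The options differ only in *where* the caching decision is made (new bookkeeping, the hot parameter-fetch path, or assignment time), not in this basic save.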



Re: [HACKERS] pl/python quoting functions

2011-02-06 Thread Jan Urbański
On 04/02/11 18:10, Hitoshi Harada wrote:
> 2011/1/11 Jan Urbański :
>> Here's a patch that adds a few PL/Python functions for quoting strings.
>> It's an incremental patch on top of the plpython-refactor patch sent in
>> http://archives.postgresql.org/message-id/4d135170.3080...@wulczer.org.
>>
>> Git branch for this patch:
>> https://github.com/wulczer/postgres/tree/functions
>>
>> The new functions are plpy.quote_literal, plpy.quote_nullable and
>> plpy.quote_ident, and work just like their sql or plperl equivalents.
>>
> 
> I reviewed this.
> 
> The patch applies and compiles cleanly and all the tests are passed.
> The patch adds 3 functions which works as the corresponding SQL
> functions. The test is enough, without any additional docs. No
> feature/performance issues found.
> 
> I mark this "Ready for Committer".

Thanks!

I guess a short paragraph in the Utility Functions section of the
PL/Python docs would be in order, I'll try to add it today.

Jan
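[Editor's note: plpy.quote_literal, plpy.quote_nullable and plpy.quote_ident only exist inside the server, but their behaviour mirrors the SQL functions of the same names. The sketch below is a simplified plain-Python approximation so it can run standalone; the real quote_literal additionally handles backslashes with an E'' prefix.]

```python
# Simplified approximation of the semantics exposed by the new
# plpy.quote_* wrappers (mirroring the SQL quote_* functions).

def quote_literal(s):
    """Wrap a string as a SQL literal, doubling embedded single quotes."""
    return "'" + s.replace("'", "''") + "'"

def quote_nullable(s):
    """Like quote_literal, but render None as the keyword NULL."""
    return "NULL" if s is None else quote_literal(s)

def quote_ident(s):
    """Wrap a string as a SQL identifier, doubling embedded double quotes."""
    return '"' + s.replace('"', '""') + '"'

print(quote_literal("O'Reilly"))    # 'O''Reilly'
print(quote_nullable(None))         # NULL
print(quote_ident('weird "name"'))  # "weird ""name"""
```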



Re: [HACKERS] pl/python explicit subtransactions

2011-02-06 Thread Jan Urbański
On 02/02/11 14:16, Steve Singer wrote:
> On 11-01-27 05:11 PM, Jan Urbański wrote:
>> On 23/12/10 15:32, Jan Urbański wrote:
>>> Here's a patch implementing explicitly starting subtransactions
>>> mentioned in
>>> http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
>>> an incremental patch on top of the spi-in-subxacts patch sent eariler.
>>
>> Updated to the spi-in-subxacts version sent earlier.
>>
> [review]
> 
> Other than that I don't see any issues.  I am marking this as waiting
> for author since the documentation is still outstanding.

Thanks Steve, I'm writing docs and changing error messages to lowercase
as you suggested and will send an updated patch today.

Jan

PS: Oh, I just saw Peter's suggestion to rename the function from
plpy.subxact() to plpy.subtransaction(). Fine with me, will do that too.

J
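[Editor's note: the explicit-subtransaction object under review is used as a Python context manager: entering begins a subtransaction, a clean exit releases it, and an exception rolls it back before propagating. The sketch below models that protocol with a fake backend so it can run outside the server; all names are illustrative, not the patch's actual internals.]

```python
# Sketch of the context-manager behaviour of the proposed
# plpy.subtransaction() object, against a fake backend.

class FakeBackend:
    """Records subtransaction commands instead of executing them."""
    def __init__(self):
        self.log = []
    def begin_subxact(self):
        self.log.append("BEGIN")
    def release_subxact(self):
        self.log.append("RELEASE")
    def rollback_subxact(self):
        self.log.append("ROLLBACK")

class Subtransaction:
    def __init__(self, backend):
        self.backend = backend
    def __enter__(self):
        self.backend.begin_subxact()
        return self
    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.backend.release_subxact()
        else:
            self.backend.rollback_subxact()
        return False                # propagate any exception

be = FakeBackend()
with Subtransaction(be):
    pass                            # work that succeeds
try:
    with Subtransaction(be):
        raise ValueError("boom")    # work that fails
except ValueError:
    pass
print(be.log)  # ['BEGIN', 'RELEASE', 'BEGIN', 'ROLLBACK']
```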



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-02-06 Thread Noah Misch
On Sun, Feb 06, 2011 at 02:15:47AM -0500, Robert Haas wrote:
> On Sat, Feb 5, 2011 at 7:44 PM, Noah Misch  wrote:
> >> But this is a little unsatisfying, for two reasons.  First, the error
> >> message will be subtly wrong: we can make it complain about a table or
> >> a type, but not a foreign table.  At a quick look, it looks like the right
> >> fix might be to replace the second and third arguments to
> >> find_composite_type_dependencies() with a Relation.
> >
> > Seems like a clear improvement.
> 
> That didn't quite work, because there's a caller in typecmds.c that
> doesn't have the relation handy.  So I made it take a relkind and a
> name, which works fine.

Hmm, indeed.  In get_rels_with_domain(), it's a scalar type.

> >> Second, I wonder
> >> if we shouldn't refactor things so that all the checks fire in
> >> ATRewriteTables() rather than doing them in different places. ?Seems
> >> like that might be cleaner.
> >
> > Offhand, this seems reasonable, too.  I assumed there was some good reason
> > it couldn't be done there for non-tables, but nothing comes to mind.
> 
> Actually, thinking about this more, I'm thinking if we're going to
> change anything, it seems we ought to go the other way.  If we ever
> actually did support recursing into wherever the composite type
> dependencies take us, we'd want to detect that before phase 3 and add
> work-queue items for each table that we needed to futz with.
> 
> The attached patch takes this approach.  It's very slightly more code,
> but it reduces the amount of spooky action at a distance.

> Comments?

Your patch improves the code.  My standard for commending a refactor-only patch
is rather high, though, and this patch doesn't reach it.  The ancestral code
placement wasn't obviously correct, but neither is this.  So I'd vote -0.

nm
