Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Heikki Linnakangas

On 25.01.2011 22:53, Kevin Grittner wrote:

Heikki Linnakangas  wrote:

On 25.01.2011 05:30, Kevin Grittner wrote:



The readme says this:

4. PostgreSQL supports subtransactions -- an issue not mentioned
in the papers.


But I don't see any mention anywhere else on how subtransactions
are handled. If a subtransaction aborts, are its predicate locks
immediately released?


No.  Here's the reasoning.  Within a top level transaction, you
might start a subtransaction, read some data, and then decide based
on what you read that the subtransaction should be rolled back.  If
the decision as to what is part of the top level transaction can
depend on what is read in the subtransaction, predicate locks taken
by the subtransaction must survive rollback of the subtransaction.

Does that make sense to you?
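A hypothetical SQL illustration of that reasoning (the accounts table and the business logic are invented for the example):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;

SAVEPOINT check_balance;
SELECT balance FROM accounts WHERE id = 1;  -- takes SIREAD predicate locks
ROLLBACK TO SAVEPOINT check_balance;        -- subtransaction aborted ...

-- ... yet what the top-level transaction does next depends on what was
-- read inside the subtransaction, so the predicate locks taken there
-- must survive its rollback.
UPDATE accounts SET flagged = true WHERE id = 2;
COMMIT;
```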


Yes, that's what I suspected. And I gather that all the data structures 
in predicate.c work with top-level xids, not subxids. When looking at an 
xid that comes from a tuple's xmin or xmax, for example, you always call 
SubTransGetTopmostTransaction() before doing much else with it.



 Is there somewhere you would like to
see that argument documented?


README-SSI.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] REVIEW: WIP: plpgsql - foreach in

2011-01-25 Thread Pavel Stehule
2011/1/26 Itagaki Takahiro :
> On Mon, Jan 24, 2011 at 20:10, Pavel Stehule  wrote:
>> we have to iterate over array's items because it allow seq. access to
>> array's data. I need a global index for function "array_get_isnull". I
>> can't to use a buildin functions like array_slize_size or
>> array_get_slice, because there is high overhead of array_seek
>> function. I redesigned the initial part of main cycle, but code is
>> little bit longer :(, but I hope, it is more readable.
>
> The attached patch only includes changes in plpgsql. I tested it
> combined with the previous one, that includes other directories.
>
> * src/pl/plpgsql/src/gram.y
> | for_variable K_SLICE ICONST
> The parameter for SLICE must be an integer literal. Is it a design?
> I got a syntax error when I wrote a function like below. However,
> FOREACH returns different types on SLICE = 0 or SLICE >= 1.
> (element type for 0, or array type for >=1)  So, we cannot write
> a true generic function that accepts all SLICE values even if
> the syntax supports an expr for the parameter.

Yes, it is by design, for the reason you gave. A parametrized SLICE
would need only a few lines more, but a) I really don't know a use
case, and b) there is a significant difference between SLICE 0 and
SLICE 1.
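To spell out that difference, here is a sketch using the patch's syntax as quoted in this thread (the output shapes are my reading of the patch, not tested against it):

```sql
DO $$
DECLARE
  arr int[] := '{{1,2},{3,4}}';
  x   int;     -- with SLICE 0 (the default), the target is an element
  s   int[];   -- with SLICE 1, the target is a one-dimensional slice
BEGIN
  FOREACH x IN arr LOOP          -- visits 1, 2, 3, 4
    RAISE NOTICE 'element: %', x;
  END LOOP;
  FOREACH s SLICE 1 IN arr LOOP  -- visits {1,2}, then {3,4}
    RAISE NOTICE 'slice: %', s;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```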

>
> CREATE FUNCTION foreach_test(int[], int) RETURNS SETOF int[] AS
> $$
> DECLARE
>  a integer[];
> BEGIN
>  FOREACH a SLICE $2 IN $1 LOOP -- syntax error
>    RETURN NEXT a;
>  END LOOP;
> END;
> $$ LANGUAGE plpgsql;
>
> * /doc/src/sgml/plpgsql.sgml
> + FOREACH target  SCALE
> s/SCALE/SLICE/ ?
>
> * Why do you use "foreach_a" for some identifiers?
> We use "foreach" only for arrays, so "_a" suffix doesn't seem needed.

Yes, it isn't needed. But FOREACH is a new construct that could be used
for other kinds of iteration in the future -- maybe iteration over an
hstore, over a set of elements, ...

So "_a" means: this is the implementation for arrays.



>
> * accumArrayResult() for SLICE has a bit overhead.
> If we want to optimize it, we could use memcpy() because slices are
> placed in continuous memory. But I'm not sure the worth; I guess
> FOREACH will be used with SLICE = 0 in many cases.
>

I agree with you that SLICE > 0 will not be used often.

accumArrayResult isn't an expensive function for non-varlena types.
Cutting a slice out of an array would need redundant code around
CopyArrayEls and construct_md_array. None of the core functions suits
our purpose, because they don't expect sequential array reading :(.
Also, a simple copy can be done only for arrays without a null bitmap;
otherwise you have to do some bitmap shifting :(. So I would rather not
do this optimization at the moment. It makes sense for multidimensional
numeric arrays, but that can be solved in a later version. This is the
last commitfest, and I don't want to make this patch any more complex
than it is now.

Regards

Pavel Stehule


> --
> Itagaki Takahiro
>



Re: [HACKERS] REVIEW: WIP: plpgsql - foreach in

2011-01-25 Thread Itagaki Takahiro
On Mon, Jan 24, 2011 at 20:10, Pavel Stehule  wrote:
> we have to iterate over array's items because it allow seq. access to
> array's data. I need a global index for function "array_get_isnull". I
> can't to use a buildin functions like array_slize_size or
> array_get_slice, because there is high overhead of array_seek
> function. I redesigned the initial part of main cycle, but code is
> little bit longer :(, but I hope, it is more readable.

The attached patch only includes changes in plpgsql. I tested it
combined with the previous one, that includes other directories.

* src/pl/plpgsql/src/gram.y
| for_variable K_SLICE ICONST
The parameter for SLICE must be an integer literal. Is it a design?
I got a syntax error when I wrote a function like below. However,
FOREACH returns different types on SLICE = 0 or SLICE >= 1.
(element type for 0, or array type for >=1)  So, we cannot write
a true generic function that accepts all SLICE values even if
the syntax supports an expr for the parameter.

CREATE FUNCTION foreach_test(int[], int) RETURNS SETOF int[] AS
$$
DECLARE
  a integer[];
BEGIN
  FOREACH a SLICE $2 IN $1 LOOP -- syntax error
RETURN NEXT a;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

* /doc/src/sgml/plpgsql.sgml
+ FOREACH target  SCALE
s/SCALE/SLICE/ ?

* Why do you use "foreach_a" for some identifiers?
We use "foreach" only for arrays, so "_a" suffix doesn't seem needed.

* accumArrayResult() for SLICE has a bit overhead.
If we want to optimize it, we could use memcpy() because slices are
placed in continuous memory. But I'm not sure the worth; I guess
FOREACH will be used with SLICE = 0 in many cases.

-- 
Itagaki Takahiro



[HACKERS] log_checkpoints and restartpoint

2011-01-25 Thread Fujii Masao
Hi,

When log_checkpoints is enabled, a checkpoint logs the number of
WAL files created/deleted/recycled, but a restartpoint doesn't.
This was OK before 9.0 because a restartpoint never created/
deleted/recycled WAL files. But, in 9.0 or later, a restartpoint might
do that while walreceiver is running. So I think that it's now useful
to log those numbers for restartpoints too.

The attached patch changes LogCheckpointEnd so that it logs
the number of WAL files created/deleted/recycled for restartpoints
as well.

I also found a problem with the initialization of the CheckpointStats
struct. The attached patch fixes this too.

Comments?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


log_restartpoint_v1.patch
Description: Binary data



Re: [HACKERS] pl/python SPI in subtransactions

2011-01-25 Thread Steve Singer

On 10-12-23 08:45 AM, Jan Urbański wrote:

Here's a patch implementing execution of SPI calls in a subtransaction,
as mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.

Git branch for this patch:
https://github.com/wulczer/postgres/tree/spi-in-subxacts.

Without it the error handling in PL/Python is really broken, as we jump
from a caught longjmp back into Python without any cleanup. As an
additional bonus you no longer get all the ugly "unrecognized error
in PLy_spi_execute_query" errors.



I see you've merged the changes from the refactoring branch down but 
haven't yet posted an updated patch.  This review is based on 
2f2b4a33bf344058620a5c684d1f2459e505c727


As a disclaimer, I have worked with Python before but have not used
PL/Python for anything substantial.



Submission Review
---
I think Jan intends to post an updated patch once the refactor branch 
has been dealt with.


The patch updates the expected results of the regression tests so they
no longer expect the 'unrecognized error' warnings. No new regression
tests are added to verify that the behaviour changes will continue to
function as intended (though they could be).


No documentation updates are included. The current documentation is
silent on the behaviour of PL/Python when SPI calls generate errors, so
this change doesn't invalidate any documentation, but it would be nice
if we described what effect SQL invoked through SPI from these
functions has on the transaction. Maybe a "Trapping Errors" section?



Usability Review
---
Does the patch implement what it says:  yes

Do we want this:  yes I think so.  This patch allows pl/python functions 
to catch errors from the SQL they issue and deal with them as the 
function author sees fit.  I see this being useful.


A concern I have is that some users might find this surprising. For
plpgsql, exception handling will roll back SQL from before the
exception, and I suspect the other PLs are the same. It would be good
if a few more people chimed in on whether they see this as a good idea.


Another concern is that we are probably breaking some people's code.

Consider the following:

BEGIN;
create table foo(a int4 primary key);
DO $$
r=plpy.execute("insert into foo values ('1')")
try :
  r=plpy.execute("insert into foo values ('1')")
except:
  plpy.log("something went wrong")
$$ language plpython2u;
select * FROM foo;
a
---
 1
(1 row)


This code is going to behave differently with the patch. Without the
patch the SELECT fails because the transaction has rolled back, which
rolls back both the inserts and the CREATE TABLE. With the patch the
first row shows up in the SELECT. How concerned are we with changing
the behaviour of existing PL/Python functions? This needs discussion.



I am finding the treatment of savepoints very strange.
If as a function author I'm able to recover from errors, then I'd
expect (or maybe want) to be able to manage them through savepoints.



BEGIN;
create table foo(a int4 primary key);
DO $$
plpy.execute("savepoint save")
r=plpy.execute("insert into foo values ('1')")
try :
  r=plpy.execute("insert into foo values ('1')")
except:
  plpy.execute("rollback to save")
  plpy.log("something went wrong")
$$ language plpython2u;
select * FROM foo;
a
---
 1
(1 row)

When I wrote the above I was expecting either an error when I tried to
use the savepoint (like in plpgsql) or for it to roll back the insert.
Without the patch I get:

PL/Python: plpy.SPIError: SPI_execute failed: SPI_ERROR_TRANSACTION

This is much better than silently ignoring the command.

I've only glanced at your transaction manager patch; from what I can
tell it will give me another way of managing the inner transaction, but
I'm not sure it will make the savepoint calls work (will it?). I also
wonder how useful this patch will be in practice if the other patch
doesn't also get applied (function authors will be stuck with
all-or-nothing as their options for error handling).




Code Review
---
I don't see any issues with the code.






Cheers,
Jan









Re: [HACKERS] sepgsql contrib module

2011-01-25 Thread KaiGai Kohei
(2011/01/26 12:23), KaiGai Kohei wrote:
>>> Yikes.  On further examination, exec_object_restorecon() is pretty
>>> bogus.  Surely you need some calls to quote_literal_cstr() in there
>>> someplace.
>>
> Are you concerning about the object name being supplied to
> selabel_lookup_raw() in exec_object_restorecon()?
> I also think this quoting you suggested is reasonable.
> 
What about the case where the object name contains only alphabetic and
numeric characters?

Thanks,
-- 
KaiGai Kohei 



Re: [HACKERS] Per-column collation, the finale

2011-01-25 Thread Noah Misch
On Wed, Jan 26, 2011 at 12:35:07AM +0200, Peter Eisentraut wrote:
> On tis, 2011-01-25 at 10:14 +0900, Itagaki Takahiro wrote:
> > and I have an almost empty pg_collation catalog with it:
> > 
> > =# SELECT * FROM pg_collation;
> >  collname | collnamespace | collencoding | collcollate | collctype
> > --+---+--+-+---
> >  default  |11 |0 | |
> > (1 row)
> 
> The initdb output should say something about how it got there.

FWIW, I tried and had the same problem.  initdb gave:

  creating collations ... not supported on this platform

"configure" was failing to detect locale_t for me, and this fixed it:

*** a/configure.in
--- b/configure.in
***
*** 1116,1122  AC_TYPE_INTPTR_T
  AC_TYPE_UINTPTR_T
  AC_TYPE_LONG_LONG_INT
  
! AC_CHECK_TYPES([locale_t],
  [#include ])
  
  AC_CHECK_TYPES([struct cmsgcred, struct fcred, struct sockcred], [], [],
--- 1116,1122 
  AC_TYPE_UINTPTR_T
  AC_TYPE_LONG_LONG_INT
  
! AC_CHECK_TYPES([locale_t], [], [],
  [#include ])
  
  AC_CHECK_TYPES([struct cmsgcred, struct fcred, struct sockcred], [], [],



Re: [HACKERS] sepgsql contrib module

2011-01-25 Thread KaiGai Kohei
(2011/01/24 12:49), Robert Haas wrote:
> On Sun, Jan 23, 2011 at 9:56 PM, Robert Haas  wrote:
>> On Sun, Jan 23, 2011 at 8:53 PM, Robert Haas  wrote:
>>> 2011/1/21 KaiGai Kohei:
 The attached patch is a revised version.
>>>
>>> I've committed this.  Cleanup coming...
>>
>> Yikes.  On further examination, exec_object_restorecon() is pretty
>> bogus.  Surely you need some calls to quote_literal_cstr() in there
>> someplace.
>
Are you concerned about the object name being supplied to
selabel_lookup_raw() in exec_object_restorecon()?
I also think the quoting you suggested is reasonable.

>>  And how about using getObjectDescriptionOids() for the
>> error message, instead of the entirely bogus construction that's there
>> now?
> 
That seems like a good idea to me. I'll try to revise the corresponding code.

> Also, shouldn't a bunch of these messages end in ": %m"?
> 
When these messages result from unexpected operating system errors,
such as failures in communication with SELinux, "%m" will give us good
information.

I'll submit these fixes within a few days; please wait for a while.

Thanks,
-- 
KaiGai Kohei 



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-01-25 Thread Noah Misch
On Tue, Jan 25, 2011 at 06:40:08PM -0500, Robert Haas wrote:
> On Mon, Jan 24, 2011 at 7:10 PM, Noah Misch  wrote:
> > * at1.1-default-composite.patch
> > Remove the error when the user adds a column with a default value to a table
> > whose rowtype is used in a column elsewhere.
> 
> Can we fix this without moving the logic around quite so much?  I'm
> worried that could introduce bugs.
> 
> It strikes me that the root of the problem here is that this test is
> subtly wrong:
> 
> if (newrel)
>     find_composite_type_dependencies(oldrel->rd_rel->reltype,
>                                      RelationGetRelationName(oldrel),
>                                      NULL);

Correct.

> So it seems
> to me that we could fix this with something like the attached.
> Thoughts?

I'm fine with this patch.  A few notes based on its context in the larger
project:

> --- a/src/backend/commands/tablecmds.c
> +++ b/src/backend/commands/tablecmds.c

> @@ -3366,14 +3367,14 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
>   }
>  
>   /*
> -  * If we need to rewrite the table, the operation has to be propagated to
> -  * tables that use this table's rowtype as a column type.
> +  * If we change column data types or add/remove OIDs, the operation has to
> +  * be propagated to tables that use this table's rowtype as a column type.
>*
>* (Eventually this will probably become true for scans as well, but at
>* the moment a composite type does not enforce any constraints, so it's
>* not necessary/appropriate to enforce them just during ALTER.)
>*/
> - if (newrel)
> + if (tab->new_changetypes || tab->new_changeoids)

The next patch removed new_changeoids, so we would instead be keeping it with
this as the only place referencing it.

>   find_composite_type_dependencies(oldrel->rd_rel->reltype,
>                                    RelationGetRelationName(oldrel),
>                                    NULL);
> @@ -6347,6 +6348,7 @@ ATPrepAlterColumnType(List **wqueue,
>   newval->expr = (Expr *) transform;
>  
>   tab->newvals = lappend(tab->newvals, newval);
> + tab->new_changetypes = true;

The at2v2 patch would then morph to do something like:

if (worklevel != WORK_NONE)
tab->new_changetypes = true;

That weakens the name "new_changetypes" a bit.



Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-01-25 Thread Itagaki Takahiro
On Wed, Jan 26, 2011 at 02:57, David E. Wheeler  wrote:
>>> Other than adminpack, I think it makes sense to say that extensions
>>> are not going into pg_catalog…
>>
>> Given this, we should maybe see about either making adminpack part of
>> PostgreSQL's core distribution (probably a good idea) or moving it out
>> of pg_catalog so we don't have an exception to the rule.
>
> +1

I doubt it can solve the real problem.
It is my understanding that we install them in pg_catalog because
they are designed to be installed in template1 for pgAdmin, right?

We have a problem in pg_dump when we install extension modules in
template1. If we create a database, the installed functions are copied
from template1. However, they are also dumped by pg_dump unless they
are in pg_catalog, so we encounter "function already exists" errors at
pg_restore time.

Since pg_dump won't dump user objects in pg_catalog, adminpack can
avoid the above errors by installing functions in pg_catalog.
CREATE EXTENSION might have the same issue -- Can EXTENSION work
without errors when we install extensions in template databases?
To avoid errors, pg_dump might need to dump extensions as
"CREATE OR REPLACE EXTENSION" or "CREATE EXTENSION IF NOT EXISTS"
rather than "CREATE EXTENSION".

-- 
Itagaki Takahiro



Re: [HACKERS] Change pg_last_xlog_receive_location not to move backwards

2011-01-25 Thread Fujii Masao
Thanks for the review!

On Wed, Jan 26, 2011 at 3:40 AM, Jeff Janes  wrote:
> I believe the new walrcv->receiveStart was introduced to divide up those
> behaviors that should go backwards from those that should not.

Yes.

> The non-retreating value is used in 3 places (via GetWalRcvWriteRecPtr)
> in xlog.c.  (And some other places I haven't examined yet.)

Yes.

> The third seems more problematic.  In the XLogPageRead,
> it checks to see if more records have been received beyond what
> has been applied.  By using the non-retreating value here, it seems
> like the xlog replay could start replaying records that the wal
> receiver is in the process of overwriting.  Now, I've argued to myself
> that this is not a problem, because the receiver is overwriting them
> with identical data to that which is already there.

Yes, I don't think that it's a problem either.

> But by that logic, why does any part of it (walrcv->receiveStart in
> the patch, walrcv->receivedUpto unpatched) need to retreat?  From
> the previous discussion, I understand that the concern is that we don't
> want to retrieve and write out partial xlog files.  What I don't understand
> is how we could get our selves into the position in which we are doing
> that, other than by someone going in and doing impermissible things to
> the PGDATA directory behind our backs.

That logic exists because we'd like to check that newly-received WAL
data is consistent with the previous data by validating the header of
the new WAL file. Since we need the header of the new WAL file, we move
the replication starting location back to the beginning of that file
when reconnecting to the primary.

The following code (in XLogPageRead) validates the header of the new
WAL file.

--
if (switched_segment && targetPageOff != 0)
{
	/*
	 * Whenever switching to a new WAL segment, we read the first page of
	 * the file and validate its header, even if that's not where the
	 * target record is.  This is so that we can check the additional
	 * identification info that is present in the first page's "long"
	 * header.
	 */
	readOff = 0;
	if (read(readFile, readBuf, XLOG_BLCKSZ) != XLOG_BLCKSZ)
	{
		ereport(emode_for_corrupt_record(emode, *RecPtr),
				(errcode_for_file_access(),
				 errmsg("could not read from log file %u, segment %u, offset %u: %m",
						readId, readSeg, readOff)));
		goto next_record_is_invalid;
	}
	if (!ValidXLOGHeader((XLogPageHeader) readBuf, emode))
		goto next_record_is_invalid;
}
--

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-01-25 Thread Robert Haas
On Mon, Jan 24, 2011 at 7:10 PM, Noah Misch  wrote:
> * at1.2-doc-set-data-type.patch
> The documentation used "ALTER TYPE" when it meant "SET DATA TYPE", a
> subform of "ALTER TABLE" or "ALTER FOREIGN TABLE".  Fixes just that.

Committed this part.  For reasons involving me being tired, I
initially thought that only the first part was correct, which is why I
did it as two commits.  But it's obviously right, so now it's all
committed.  I back-patched the ALTER TABLE part to 9.0.X so it'll show
up in the web site docs after the next minor release.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] ALTER TYPE 2: skip already-provable no-work rewrites

2011-01-25 Thread Robert Haas
On Mon, Jan 24, 2011 at 7:10 PM, Noah Misch  wrote:
> * at1.1-default-composite.patch
> Remove the error when the user adds a column with a default value to a table
> whose rowtype is used in a column elsewhere.

Can we fix this without moving the logic around quite so much?  I'm
worried that could introduce bugs.

It strikes me that the root of the problem here is that this test is
subtly wrong:

if (newrel)
    find_composite_type_dependencies(oldrel->rd_rel->reltype,
                                     RelationGetRelationName(oldrel),
                                     NULL);

So what this is saying is: If the user has performed an operation that
requires a rewrite, then we can't carry out that operation if the
rowtype is used elsewhere, because we wouldn't be able to propagate
the rewrite to those other objects.  That's correct, unless the
operation in question is one which isn't supported by composite types
anyway.  We trigger a rewrite if there is a has-OIDs change or if
tab->newvals contains any elements, which can happen if either there
is a type change or if a column with a default is added.  So it seems
to me that we could fix this with something like the attached.
Thoughts?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


defaults-are-not-so-evil.patch
Description: Binary data



[HACKERS] SSL Client Certificate pass phrases

2011-01-25 Thread Andrew Dunstan


I had a requirement the other day to support a connection using an SSL
client certificate. I set this up, and it worked nicely. But there's a
fly in the ointment. While the OpenSSL libraries will ask for a pass
phrase for the key file if required when running psql, this is not
usable in other circumstances. pgAdmin III fails on it miserably, and
so does a dblink connection. The first is especially important, because
it makes the use of client certificates in fact quite dangerous when
the client is running on a laptop computer which is liable to be
stolen. I actually have requirements to make both these cases work if
possible.

ISTM we need to be able to supply a pass phrase to libpq (via the
options?) which would allow libpq to call
SSL_CTX_set_default_passwd_cb_userdata or something similar, allowing
the key file to be unlocked.


Thoughts?

cheers

andrew



Re: [HACKERS] Perl 5.12 complains about ecpg parser-hacking scripts

2011-01-25 Thread Peter Eisentraut
On sön, 2011-01-23 at 12:23 -0500, Tom Lane wrote:
> Andy Colson  writes:
> > Is there anyway to make bison/yacc/gcc/etc spit out the rule names?
> 
> bison -v produces a debug output file that includes nicely cleaned-up
> versions of all the rules.  But it includes a lot of other stuff too,
> and I'm not at all sure that the file format is stable across bison
> versions, so maybe depending on that isn't a hot idea.

Having maintained a gram.y -> keywords.sgml script based on that idea
for N years, I can report that this is not stable enough to work here.





Re: [HACKERS] Per-column collation, the finale

2011-01-25 Thread Peter Eisentraut
On tis, 2011-01-25 at 10:14 +0900, Itagaki Takahiro wrote:
> On Sat, Jan 15, 2011 at 06:41, Peter Eisentraut  wrote:
> > I've been going over this patch with a fine-tooth comb for the last two
> > weeks, fixed some small bugs, added comments, made initdb a little
> > friendlier, but no substantial changes.
> 
> What can I do to test the patch?
> No regression tests are included in it,

Please refer to the regress.sgml hunk about running the test.

> and I have an almost empty pg_collation catalog with it:
> 
> =# SELECT * FROM pg_collation;
>  collname | collnamespace | collencoding | collcollate | collctype
> --+---+--+-+---
>  default  |11 |0 | |
> (1 row)

The initdb output should say something about how it got there.





Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Kevin Grittner
Jeff Davis  wrote:
 
> It's OK to leave it to 9.2. But if it's a RO deferrable
> transaction, it's just going to go to sleep in that case anyway;
> so why not look for an opportunity to get a safe snapshot right
> away?
 
If you're talking about doing this only for DEFERRABLE transactions
it *might* make sense for 9.1.  I'd need to look at what's involved.
We make similar checks for all read only transactions, so they can
withdraw from SSI while running, if their snapshot *becomes* safe.
I don't think I'd want to consider messing with that code at this
point.
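For context, the DEFERRABLE syntax under discussion (as proposed for 9.1) looks like this; the query is an invented placeholder:

```sql
-- A read-only deferrable serializable transaction may wait until it can
-- obtain a snapshot that is guaranteed safe, and then runs without
-- taking predicate locks or risking serialization failure.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
SELECT count(*) FROM pg_class;
COMMIT;
```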
 
-Kevin



Re: [HACKERS] a regression

2011-01-25 Thread Peter Eisentraut
On tis, 2011-01-25 at 16:19 -0500, Tom Lane wrote:
> What would probably be reasonable to do is a one-time run over a lot
> of locales, and then collate the results to see how many distinct
> outputs we see and over what sets of locales.  Then we could make some
> reasoned decisions about which cases are worth carrying variant
> expected files for.

We already did that and the current state is the result of that.

http://archives.postgresql.org/message-id/20090254.03722.pete...@gmx.net




Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Kevin Grittner
Jeff Davis  wrote:
 
> I think just annotating RWConflict.in/outLink and
> PredTranList.available/activeList with the types of things they
> hold would be a help.
> 
> Also, you say something about RWConflict and "when the structure
> is not in use". Can you elaborate on that a bit?
 
Please let me know whether this works for you:
 
http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=325ec55e8c9e5179e2e16ff303af6afc1d6e732b
 
-Kevin



Re: [HACKERS] a regression

2011-01-25 Thread Tom Lane
Andrew Dunstan  writes:
> On 01/25/2011 12:30 PM, Tom Lane wrote:
>> It's only a regression if it used to pass in that locale.  We can't
>> realistically try to support every possible locale in the tests.

> Maybe someone would like to set up a buildfarm member that tests a whole 
> slew of locales. We've had the capability for a couple of years now.

I don't want to promise that we will fix any random locale that anyone
sets up on a buildfarm member.

What would probably be reasonable to do is a one-time run over a lot of
locales, and then collate the results to see how many distinct outputs
we see and over what sets of locales.  Then we could make some reasoned
decisions about which cases are worth carrying variant expected files
for.

regards, tom lane



Re: [HACKERS] a regression

2011-01-25 Thread David Fetter
On Tue, Jan 25, 2011 at 02:43:31PM -0500, Andrew Dunstan wrote:
> On 01/25/2011 12:30 PM, Tom Lane wrote:
> >>marcin@skpsms:~/postgresql$ locale
> >>LC_COLLATE="pl_PL.UTF-8"
> >It's only a regression if it used to pass in that locale.  We can't
> >realistically try to support every possible locale in the tests.
> 
> Maybe someone would like to set up a buildfarm member that tests a
> whole slew of locales. We've had the capability for a couple of
> years now.

Is it just a matter of setting a flock of environment variables as
part of the setup?

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] Patch to add a primary key using an existing index

2011-01-25 Thread Tom Lane
Gurjeet Singh  writes:
> Attached is the updated patch with doc changes and test cases.

Applied with assorted corrections.  Aside from the refactoring I wanted,
there were various oversights.

> I have consciously disallowed the ability to specify storage_parameters
> using the WITH clause, if somebody thinks it is wise to allow that and is
> needed, I can do that.

AFAICS, WITH would be supplied at the time of index creation; it's not
appropriate to include it here, any more than INDEX TABLESPACE.

A point that may or may not have gotten discussed back when is that it's
important that the result of this process be dumpable by pg_dump, ie
there not be any hidden discrepancies between the state after ADD
CONSTRAINT USING INDEX and the state you'd get from straight ADD
CONSTRAINT, because the latter is the syntax pg_dump is going to emit.
ADD CONSTRAINT can handle WITH and INDEX TABLESPACE, so carrying those
over from the original index specification is no problem, but
non-default index opclasses or sort ordering options would be a big
problem.  That would in particular completely break pg_upgrade, because
the on-disk index wouldn't match the catalog entries created by running
pg_dump.  I added some code to check and disallow non-default opclass
and options.

regards, tom lane



Re: [HACKERS] a regression

2011-01-25 Thread Peter Eisentraut
On tis, 2011-01-25 at 12:30 -0500, Tom Lane wrote:
> marcin mank  writes:
> > On Tue, Jan 25, 2011 at 5:46 PM, marcin mank  wrote:
> >> I did:
> >> git clone git://git.postgresql.org/git/postgresql.git && cd postgresql
> >> && ./configure --prefix=/home/marcin/pg91 --enable-cassert
> >> --enable-debug && make check
> >> 
> >> which gave me the attached regression.diffs
> 
> > uh, this may have something to do with :
> 
> > marcin@skpsms:~/postgresql$ locale
> > LC_COLLATE="pl_PL.UTF-8"
> 
> It's only a regression if it used to pass in that locale.  We can't
> realistically try to support every possible locale in the tests.

I can say with some authority that we don't support the pl (glibc)
locale in the regression tests.





Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Kevin Grittner
Heikki Linnakangas  wrote:
> On 25.01.2011 05:30, Kevin Grittner wrote:
 
> The readme says this:
>> 4. PostgreSQL supports subtransactions -- an issue not mentioned
>>in the papers.
> 
> But I don't see any mention anywhere else on how subtransactions
> are handled. If a subtransaction aborts, are its predicate locks
> immediately released?
 
No.  Here's the reasoning.  Within a top level transaction, you
might start a subtransaction, read some data, and then decide based
on what you read that the subtransaction should be rolled back.  If
the decision as to what is part of the top level transaction can
depend on what is read in the subtransaction, predicate locks taken
by the subtransaction must survive rollback of the subtransaction.
 
Does that make sense to you?  Is there somewhere you would like to
see that argument documented?
 
-Kevin



Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Heikki Linnakangas

On 25.01.2011 05:30, Kevin Grittner wrote:

Attached.


The readme says this:

+   4. PostgreSQL supports subtransactions -- an issue not mentioned
+in the papers.


But I don't see any mention anywhere else on how subtransactions are 
handled. If a subtransaction aborts, are its predicate locks immediately 
released?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Patch to add a primary key using an existing index

2011-01-25 Thread Gurjeet Singh
On Wed, Jan 26, 2011 at 5:31 AM, Tom Lane  wrote:

> In the end I think this is mainly an issue of setting appropriate
> expectations in the documentation.  I've added the following text to
> the ALTER TABLE manual page:
>
> 
>  After this command is executed, the index is owned by the
>  constraint, in the same way as if the index had been built by
>  a regular ADD PRIMARY KEY or ADD UNIQUE
>  command.  In particular, dropping the constraint will make the index
>  disappear too.
> 
>

I'd change that last sentence to:

... dropping the constraint will drop the index too.

'disappear' doesn't seem accurate in the context.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [HACKERS] Patch to add a primary key using an existing index

2011-01-25 Thread Tom Lane
Gurjeet Singh  writes:
> On Tue, Jan 25, 2011 at 9:01 AM, Tom Lane  wrote:
>> One other issue that might be worthy of discussion is that as things
>> stand, execution of the ADD CONSTRAINT USING INDEX syntax will cause
>> the constraint to absorb the index as an INTERNAL dependency.  That
>> means dropping the constraint would make the index go away silently ---
>> it no longer has any separate life.

> Since we rename the index automatically to match the constraint name,
> implying that the index now belongs to the system, I think the user should
> expect the index to go away with the constraint; else we have to remember
> index's original name and restore that name on DROP CONSTRAINT, which IMHO
> will be even more unintuitive.

Yeah, that's a good point.  Also, the documented example usage of this
feature is

   To recreate a primary key constraint, without blocking updates while the
   index is rebuilt:

CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx on distributors (dist_id);
ALTER TABLE distributors DROP CONSTRAINT distributors_pkey,
ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx;


with the implication that after you do that, the installed index is
exactly like you would have gotten from straight ADD PRIMARY KEY.
If there's something funny about it, then it's not just a replacement.

In the end I think this is mainly an issue of setting appropriate
expectations in the documentation.  I've added the following text to
the ALTER TABLE manual page:

 
  After this command is executed, the index is owned by the
  constraint, in the same way as if the index had been built by
  a regular ADD PRIMARY KEY or ADD UNIQUE
  command.  In particular, dropping the constraint will make the index
  disappear too.
 

regards, tom lane



Re: [HACKERS] Include WAL in base backup

2011-01-25 Thread Magnus Hagander
On Tue, Jan 25, 2011 at 16:34, Magnus Hagander  wrote:
> On Tue, Jan 25, 2011 at 15:04, Fujii Masao  wrote:
>> On Tue, Jan 25, 2011 at 10:28 PM, Fujii Masao  wrote:
 (the discussed changes above have been applied and pushed to github)
>>>
>>> Thanks! I'll test and review that.
>>
>> WAL file might get recycled or removed while walsender is reading it.
>> So the WAL file which pg_basebackup seemingly received successfully
>> might be actually invalid. Shouldn't we need to check that what we read
>> is valid as XLogRead does?

We should, and the easiest way is to actually use XLogRead() since the
code is already there. How about the way in this patch?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 73f26b4..0775b7a 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1460,7 +1460,7 @@ The commands accepted in walsender mode are:
   
 
   
-BASE_BACKUP [LABEL 'label'] [PROGRESS] [FAST]
+BASE_BACKUP [LABEL 'label'] [PROGRESS] [FAST] [WAL]
 
  
   Instructs the server to start streaming a base backup.
@@ -1505,6 +1505,18 @@ The commands accepted in walsender mode are:
  
 

+
+   
+WAL
+
+ 
+  Include the necessary WAL segments in the backup. This will include
+  all the files between start and stop backup in the
+  pg_xlog directory of the base directory tar
+  file.
+ 
+
+   
   
  
  
@@ -1561,7 +1573,10 @@ The commands accepted in walsender mode are:


 
- pg_xlog (including subdirectories)
+ pg_xlog, including subdirectories. If the backup is run
+ with wal files included, a synthesized version of pg_xlog will be
+ included, but it will only contain the files necessary for the
+ backup to work, not the rest of the contents.
 

   
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index 321c8ca..60ffa3c 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -145,6 +145,31 @@ PostgreSQL documentation
  
 
  
+  -x
+  --xlog
+  
+   
+Includes the required transaction log files (WAL files) in the
+backup. This will include all transaction logs generated during
+the backup. If this option is specified, it is possible to start
+a postmaster directly in the extracted directory without the need
+to consult the log archive, thus making this a completely standalone
+backup.
+   
+   
+
+ The transaction log files are collected at the end of the backup.
+ Therefore, it is necessary for the
+  parameter to be set high
+ enough that the log is not removed before the end of the backup.
+ If the log has been rotated when it's time to transfer it, the
+ backup will fail and be unusable.
+
+   
+  
+ 
+
+ 
   -Z level
   --compress=level
   
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index 251ed8e..819419e 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -37,6 +37,7 @@ typedef struct
 	const char *label;
 	bool		progress;
 	bool		fastcheckpoint;
+	bool		includewal;
 }	basebackup_options;
 
 
@@ -46,11 +47,17 @@ static void _tarWriteHeader(char *filename, char *linktarget,
 struct stat * statbuf);
 static void send_int8_string(StringInfoData *buf, int64 intval);
 static void SendBackupHeader(List *tablespaces);
-static void SendBackupDirectory(char *location, char *spcoid);
 static void base_backup_cleanup(int code, Datum arg);
 static void perform_base_backup(basebackup_options *opt, DIR *tblspcdir);
 static void parse_basebackup_options(List *options, basebackup_options *opt);
 
+/*
+ * Size of each block sent into the tar stream for larger files.
+ *
+ * XLogSegSize *MUST* be evenly dividable by this
+ */
+#define TAR_SEND_SIZE 32768
+
 typedef struct
 {
 	char	   *oid;
@@ -78,7 +85,10 @@ base_backup_cleanup(int code, Datum arg)
 static void
 perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 {
-	do_pg_start_backup(opt->label, opt->fastcheckpoint);
+	XLogRecPtr	startptr;
+	XLogRecPtr	endptr;
+
+	startptr = do_pg_start_backup(opt->label, opt->fastcheckpoint);
 
 	PG_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);
 	{
@@ -87,12 +97,6 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 		struct dirent *de;
 		tablespaceinfo *ti;
 
-
-		/* Add a node for the base directory */
-		ti = palloc0(sizeof(tablespaceinfo));
-		ti->size = opt->progress ? sendDir(".", 1, true) : -1;
-		tablespaces = lappend(tablespaces, ti);
-
 		/* Collect information about all tablespaces */
 		while ((de = ReadDir(tblspcd

Re: [HACKERS] Patch to add a primary key using an existing index

2011-01-25 Thread Gurjeet Singh
Sorry for not being on top of this.

On Tue, Jan 25, 2011 at 9:01 AM, Tom Lane  wrote:

> I wrote:
> > ... If that's the only issue then I don't see any need to wait on
> > the author, so will take this one.
>
> I find myself quite dissatisfied with the way that this patch adds yet
> another bool flag to index_create (which has too many of those already),
> with the effect of causing it to exactly *not* do an index creation.
> That's a clear violation of the principle of least astonishment IMNSHO.
> I think what's needed here is to refactor things a bit so that the
> constraint-creation code is pulled out of index_create and called
> separately where needed.  Hacking on that now.
>

Thanks.

One other issue that might be worthy of discussion is that as things
> stand, execution of the ADD CONSTRAINT USING INDEX syntax will cause
> the constraint to absorb the index as an INTERNAL dependency.  That
> means dropping the constraint would make the index go away silently ---
> it no longer has any separate life.  If the intent is just to provide a
> way to get the effect of ALTER ADD PRIMARY KEY CONCURRENTLY, then this
> behavior is probably fine.  But someone who believes DROP CONSTRAINT
> exactly reverses the effects of ADD CONSTRAINT might be surprised.
> Comments?
>

Since we rename the index automatically to match the constraint name,
implying that the index now belongs to the system, I think the user should
expect the index to go away with the constraint; else we have to remember
index's original name and restore that name on DROP CONSTRAINT, which IMHO
will be even more unintuitive.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [HACKERS] SSI, simplified

2011-01-25 Thread Kevin Grittner
Joshua Tolley  wrote:
 
>> http://www.xtranormal.com/watch/8285377/?listid=20642536 
 
> I foresee a whole new set of animated postgres tutorials...
 
Seriously, I was thinking the same thing.  I threw this together
while waiting to see if DBT-2 could shake out any new problems after
some minor changes to the SSI patch.  It was my first time playing
with the site and it took about an hour and a half, and didn't quite
use up all the free credits you get with the trial.  With some
actual *planning* someone with some experience could put together
some nice training clips, I would think.
 
-Kevin



Re: [HACKERS] SSI, simplified

2011-01-25 Thread Joshua Tolley
On Sun, Jan 23, 2011 at 07:48:04PM -0600, Kevin Grittner wrote:
> http://www.xtranormal.com/watch/8285377/?listid=20642536

I foresee a whole new set of animated postgres tutorials...

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [HACKERS] a regression

2011-01-25 Thread Andrew Dunstan



On 01/25/2011 12:30 PM, Tom Lane wrote:
>> marcin@skpsms:~/postgresql$ locale
>> LC_COLLATE="pl_PL.UTF-8"
> It's only a regression if it used to pass in that locale.  We can't
> realistically try to support every possible locale in the tests.




Maybe someone would like to set up a buildfarm member that tests a whole 
slew of locales. We've had the capability for a couple of years now.


cheers

andrew



Re: [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Richard Broersma
On Tue, Jan 25, 2011 at 11:15 AM, Josh Berkus  wrote:


> For 9.1, what about doing a bug-finding bounty when we get into the 9.1
> beta cycle?  Mozilla has been using bug bounties and they've been
> surprisingly successful.
>


This is doable.  We just have to present this in a way that will meet the
requirements of the 501c.

It needs to be a learning experience, and there needs to be a well-defined
set of criteria for what will be delivered by the person awarded the grant.


-- 
Regards,
Richard Broersma Jr.


Re: [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Josh Berkus

> However, having a mentee begin work early in the 9.2 commit-fest cycle
> might be advantageous.  I imagine that there is less pressure to review
> all of the patches in the early commit-fests.  Perhaps this will give
> prospective mentors the ability to spend more time with mentees.

For 9.1, what about doing a bug-finding bounty when we get into the 9.1
beta cycle?  Mozilla has been using bug bounties and they've been
surprisingly successful.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Richard Broersma
On Tue, Jan 25, 2011 at 9:46 AM, Josh Berkus  wrote:


> In several weeks, the review period for 9.1 will be over.  Is this a plan
> for 9.2?


Yes.  Our timing for this grant is unfortunate as it will likely be issued
too late to be useful for the 9.1 commit-fests.  The delay is mostly my
fault.  I wasn't able to devote enough time to the grant process late last
year.

However, having a mentee begin work early in the 9.2 commit-fest cycle might
be advantageous.  I imagine that there is less pressure to review all of the
patches in the early commit-fests.  Perhaps this will give prospective
mentors the ability to spend more time with mentees.

-- 
Regards,
Richard Broersma Jr.


Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Jeff Davis
On Tue, 2011-01-25 at 09:41 -0600, Kevin Grittner wrote:
> Yep.  For the visual thinkers out there, the whole concept can be
> understood by looking at the jpeg file that's in the Wiki page:
>  
> http://wiki.postgresql.org/images/e/eb/Serialization-Anomalies-in-Snapshot-Isolation.png

Yes, that helped a lot throughout the review process. Good job keeping
it up-to-date!
 
> Yes, that would work.  It would lower one type of overhead a little
> and allow RO transactions to be released from SSI tracking earlier. 
> The question is how to determine it without taking too much time
> scanning the finished transaction list for every active read write
> transaction every time you start a RO transaction.  I don't think
> that it's a trivial enough issue to consider for 9.1; it's certainly
> one to put on the list to look at for 9.2.

It's OK to leave it to 9.2. But if it's a RO deferrable transaction,
it's just going to go to sleep in that case anyway; so why not look for
an opportunity to get a safe snapshot right away?

Regards,
Jeff Davis




Re: [HACKERS] Change pg_last_xlog_receive_location not to move backwards

2011-01-25 Thread Jeff Janes
On Mon, Jan 24, 2011 at 11:25 AM, Robert Haas  wrote:
> On Thu, Jan 13, 2011 at 2:40 AM, Fujii Masao  wrote:
>> On Thu, Jan 13, 2011 at 4:24 PM, Fujii Masao  wrote:
>>> So I'm thinking to change pg_last_xlog_receive_location not to
>>> move backwards.
>>
>> The attached patch does that.
>
> It looks to me like this is changing more than just the return value
> of pg_last_xlog_receive_location.  receivedUpto isn't only used for
> that one function, and there's no explanation in your email or in the
> patch of why the new behavior is correct and/or better for the other
> places where it's used.

I believe the new walrcv->receiveStart was introduced to divide up those
behaviors that should go backwards from those that should not.

The non-retreating value is used in 3 places (via GetWalRcvWriteRecPtr)
in xlog.c.  (And some other places I haven't examined yet.)

One place is to decide whether to remove/recycle XLog files, and I think the
non-retreating value is at least as suitable for this usage.

One is to feed the pg_last_xlog_receive_location() function.

The third seems more problematic.  In the XLogPageRead,
it checks to see if more records have been received beyond what
has been applied.  By using the non-retreating value here, it seems
like the xlog replay could start replaying records that the wal
receiver is in the process of overwriting.  Now, I've argued to myself
that this is not a problem, because the receiver is overwriting them
with identical data to that which is already there.

But by that logic, why does any part of it (walrcv->receiveStart in
the patch, walrcv->receivedUpto unpatched) need to retreat?  From
the previous discussion, I understand that the concern is that we don't
want to retrieve and write out partial xlog files.  What I don't understand
is how we could get our selves into the position in which we are doing
that, other than by someone going in and doing impermissible things to
the PGDATA directory behind our backs.

I've been spinning my wheels on that part for a while.  Since I don't
understand what we are defending against in the original code, I can't
evaluate if the patch maintains that defense.

Cheers,

Jeff



Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Jeff Davis
On Tue, 2011-01-25 at 11:17 -0600, Kevin Grittner wrote:
> > The heavy use of SHMQueue is quite reasonable, but for some reason
> > I find the API very difficult to read. I think it would help (at
> > least me) quite a lot to annotate (i.e. with a comment in the
> > struct) the various lists and links with the types of data being
> > held.
>  
> We've tried to comment enough, but when you have your head buried in
> code you don't always recognize how mysterious something can look. 
> Can you suggest some particular places where more comments would be
> helpful?

I think just annotating RWConflict.in/outLink and
PredTranList.available/activeList with the types of things they hold
would be a help.

Also, you say something about RWConflict and "when the structure is not
in use". Can you elaborate on that a bit?

I'll address the rest of your comments in a later email.

Regards,
Jeff Davis




Re: [RRR] [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Joshua D. Drake
On Tue, 2011-01-25 at 14:14 +, Dave Page wrote:
> On Tue, Jan 25, 2011 at 1:39 PM, Richard Broersma
>  wrote:
> > On Tue, Jan 25, 2011 at 12:42 AM, Dave Page  wrote:
> >>
> >> Will the scheme be open to everyone, or just .USians?
> >
> > I do believe that such grants are limited to members of PgUS.  Although, I
> > should mention that there's no restriction for residents of any country
> > becoming a member of PgUS.
> 
> OK.

Correct. You will have to be a member of PgUS. However, we don't
restrict who can be a member.

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [HACKERS] Patch to add a primary key using an existing index

2011-01-25 Thread Joshua Tolley
On Mon, Jan 24, 2011 at 07:01:13PM -0500, Tom Lane wrote:
> One other issue that might be worthy of discussion is that as things
> stand, execution of the ADD CONSTRAINT USING INDEX syntax will cause
> the constraint to absorb the index as an INTERNAL dependency.  That
> means dropping the constraint would make the index go away silently ---
> it no longer has any separate life.  If the intent is just to provide a
> way to get the effect of ALTER ADD PRIMARY KEY CONCURRENTLY, then this
> behavior is probably fine.  But someone who believes DROP CONSTRAINT
> exactly reverses the effects of ADD CONSTRAINT might be surprised.
> Comments?

So you'd manually create an index, attach it to a constraint, drop the
constraint, and find that the index had disappeared? ISTM since you created
the index explicitly, you should have to drop it explicitly as well.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-01-25 Thread David E. Wheeler
On Jan 25, 2011, at 7:27 AM, David Fetter wrote:

>> Other than adminpack, I think it makes sense to say that extensions
>> are not going into pg_catalog…
> 
> Given this, we should maybe see about either making adminpack part of
> PostgreSQL's core distribution (probably a good idea) or moving it out
> of pg_catalog so we don't have an exception to the rule.

+1

David




Re: [HACKERS] a regression

2011-01-25 Thread Tom Lane
marcin mank  writes:
> On Tue, Jan 25, 2011 at 5:46 PM, marcin mank  wrote:
>> I did:
>> git clone git://git.postgresql.org/git/postgresql.git && cd postgresql
>> && ./configure --prefix=/home/marcin/pg91 --enable-cassert
>> --enable-debug && make check
>> 
>> which gave me the attached regression.diffs

> uh, this may have something to do with :

> marcin@skpsms:~/postgresql$ locale
> LC_COLLATE="pl_PL.UTF-8"

It's only a regression if it used to pass in that locale.  We can't
realistically try to support every possible locale in the tests.

regards, tom lane



Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Kevin Grittner
Jeff Davis  wrote:
 
> For starters, the above structures require unlimited memory, while
> we have fixed amounts of memory. The predicate locks are
> referenced by a global shared hash table as well as per-process
> SHMQueues. It can adapt memory usage downward in three ways:
>   * increase lock granularity -- e.g. change N page locks into a
> table lock
>   * "summarization" -- fold multiple locks on the same object
> across many old committed transactions into a single lock
>   * use the SLRU
 
Those last two are related -- the summarization process takes the
oldest committed-but-still-significant transaction and does two
things with it:
 
(1)  We move predicate locks to the "dummy" transaction, kept just
for this purpose.  We combine locks against the same lock target,
using the more recent commit point to determine when the resulting
lock can be removed.
 
(2)  We use SLRU to keep track of the top level xid of the old
committed transaction, and the earliest commit point of any
transactions to which it had an out-conflict.
 
The above reduces the information available to avoid serialization
failure in certain corner cases, and is more expensive to access
than the other structures, but it keeps us running in pessimal
circumstances, even if at a degraded level of performance.
 
> The heavy use of SHMQueue is quite reasonable, but for some reason
> I find the API very difficult to read. I think it would help (at
> least me) quite a lot to annotate (i.e. with a comment in the
> struct) the various lists and links with the types of data being
> held.
 
We've tried to comment enough, but when you have your head buried in
code you don't always recognize how mysterious something can look. 
Can you suggest some particular places where more comments would be
helpful?
 
> The actual checking of conflicts isn't quite so simple, either,
> because we want to detect problems before the victim transaction
> has done too much work. So, checking is done along the way in
> several places, and it's a little difficult to follow exactly how
> those smaller checks add up to the overall serialization-anomaly
> check (the third point in my simple model).
> 
> There are also optimizations around transactions declared READ
> ONLY. Some of these are a little difficult to follow as well, and
> I haven't followed them all.
 
Yeah.  After things were basically working correctly, in terms of
not allowing any anomalies, we still had a lot of false positives. 
Checks around the order and timing of commits, as well as read-only
status, helped bring these down.  The infamous receipting example
was my main guide here.  There are 210 permutations in the way the
statements can be interleaved, of which only six can result in
anomalies.  At first we only managed to commit a couple dozen
permutations.  As we added code to cover optimizations for various
special cases the false positive rate dropped a little at a time,
until that test hit 204 commits, six rollbacks.  Although, all the
tests in the dcheck target are useful -- if I made a mistake in
implementing an optimization there would sometimes be just one or
two of the other tests which would fail.  Looking at which
permutations got it right and which didn't was a really good way to
figure out what I did wrong.
 
> There is READ ONLY DEFERRABLE, which is a nice feature that waits
> for a "safe" snapshot, so that the transaction will never be
> aborted.
 
*And* will not contribute to causing any other transaction to be
rolled back, *and* dodges the overhead of predicate locking and
conflict checking.  Glad you like it!  ;-)
 
> Now, on to my code comments (non-exhaustive):
> 
> * I see that you use a union as well as using "outLinks" to also
> be a free list. I suppose you did this to conserve shared memory
> space, right?
 
Yeah, we tried to conserve shared memory space where we could do so
without hurting performance.  Some of it gets to be a little
bit-twiddly, but it seemed like a good idea at the time.  Does any of
it seem like it creates a confusion factor which isn't worth it
compared to the shared memory savings?
 
> * Still some compiler warnings:
> twophase.c: In function *FinishPreparedTransaction*:
> twophase.c:1360: warning: implicit declaration of function
> *PredicateLockTwoPhaseFinish*
 
Ouch!  That could cause bugs, since the implicit declaration didn't
actually *match* the actual definition.  Don't know how we missed
that.  I strongly recommend that anyone who wants to test 2PC with
the patch add this commit to it:
 
http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=8e020d97bc7b8c72731688515b6d895f7e243e27
 
> * You have a function called CreatePredTran. We are already using
> "Transaction" and "Xact" -- we probably don't need "Tran" as well.
 
OK.  Will rename if you like.  Since that creates the PredTran
structure, I assume you want that renamed, too?
 
> * HS error has a typo:
> "ERROR:  cannot set transaction isolationn level to seria

Re: [HACKERS] a regression

2011-01-25 Thread marcin mank
On Tue, Jan 25, 2011 at 5:46 PM, marcin mank  wrote:
> Hello.
> I did:
> git clone git://git.postgresql.org/git/postgresql.git && cd postgresql
> && ./configure --prefix=/home/marcin/pg91 --enable-cassert
> --enable-debug && make check
>
> which gave me the attached regression.diffs
>

uh, this may have something to do with :

marcin@skpsms:~/postgresql$ locale
LANG=pl_PL.UTF-8
LC_CTYPE="pl_PL.UTF-8"
LC_NUMERIC="pl_PL.UTF-8"
LC_TIME="pl_PL.UTF-8"
LC_COLLATE="pl_PL.UTF-8"
LC_MONETARY="pl_PL.UTF-8"
LC_MESSAGES="pl_PL.UTF-8"
LC_PAPER="pl_PL.UTF-8"
LC_NAME="pl_PL.UTF-8"
LC_ADDRESS="pl_PL.UTF-8"
LC_TELEPHONE="pl_PL.UTF-8"
LC_MEASUREMENT="pl_PL.UTF-8"
LC_IDENTIFICATION="pl_PL.UTF-8"

Because
LC_COLLATE=C make check

passes.

If this is expected, sorry for the noise.

Greetings
Marcin Mańk



[HACKERS] a regression

2011-01-25 Thread marcin mank
Hello.
I did:
git clone git://git.postgresql.org/git/postgresql.git && cd postgresql
&& ./configure --prefix=/home/marcin/pg91 --enable-cassert
--enable-debug && make check

which gave me the attached regression.diffs

marcin@skpsms:~/postgresql$ gcc -v
Using built-in specs.
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian
4.3.2-1.1' --with-bugurl=file:///usr/share/doc/gcc-4.3/README.Bugs
--enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr
--enable-shared --with-system-zlib --libexecdir=/usr/lib
--without-included-gettext --enable-threads=posix --enable-nls
--with-gxx-include-dir=/usr/include/c++/4.3 --program-suffix=-4.3
--enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc
--enable-mpfr --enable-cld --enable-checking=release
--build=x86_64-linux-gnu --host=x86_64-linux-gnu
--target=x86_64-linux-gnu
Thread model: posix
gcc version 4.3.2 (Debian 4.3.2-1.1)
marcin@skpsms:~/postgresql$ uname -a
Linux skpsms 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 GNU/Linux

Greetings
Marcin Mańk


regression.diffs
Description: Binary data



Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-01-25 Thread Pavel Stehule
2011/1/25 Tom Lane :
> Robert Haas  writes:
>> On Tue, Jan 25, 2011 at 10:03 AM, Tom Lane  wrote:
>>> The arguments that were made against this were about maintenance costs
>>> and code footprint.  Claims about performance are not really relevant,
>>> especially when they're entirely unsupported by evidence.
>
>> How much evidence do you need to the effect that detoasting a value
>> that's never used will hurt performance?
>
> I agree that at some microscopic level it will cost something.  What's
> not been shown is that there's any significant cost in any real-world
> use pattern.  As Pavel said upthread, the main thing here is to get rid
> of cases where there are many many repeated detoastings.  Adding an
> occasional detoast that wouldn't have happened before is a good tradeoff
> for that.  To convince me that we should contort the code to go further,
> you need to show that that's more than a negligible cost.

I did a few tests:

create table a1(a int[]);
create table a2(a int[]);

insert into a1 select array_fill(1, array[100]) from generate_series(1,1);
insert into a2 select array_fill(1, array[1]) from generate_series(1,1);

create or replace function s(int[]) returns int as $$
declare s int = 0; i int;
begin
  for i in array_lower($1,1) .. array_upper($1,1)
  loop
s := s + $1[i];
  end loop;
  return s;
end;
$$ language plpgsql immutable;

next I tested queries

1, select sum(s(a)) from a1;
2, select sum(s(a)) from a2;

variant a) my first patch - detoast on first use, avoiding useless
detoast checks
variant b) my first patch - detoast on first use, without avoiding
useless detoast checks

time for 1 - about 300 ms; a is about 1.5% faster than b
time for 2 - about 3 ms; a is about 3% faster than b

Regards

Pavel Stehule



Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-01-25 Thread Robert Haas
On Tue, Jan 25, 2011 at 10:47 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Tue, Jan 25, 2011 at 10:03 AM, Tom Lane  wrote:
>>> The arguments that were made against this were about maintenance costs
>>> and code footprint.  Claims about performance are not really relevant,
>>> especially when they're entirely unsupported by evidence.
>
>> How much evidence do you need to the effect that detoasting a value
>> that's never used will hurt performance?
>
> I agree that at some microscopic level it will cost something.  What's
> not been shown is that there's any significant cost in any real-world
> use pattern.  As Pavel said upthread, the main thing here is to get rid
> of cases where there are many many repeated detoastings.  Adding an
> occasional detoast that wouldn't have happened before is a good tradeoff
> for that.  To convince me that we should contort the code to go further,
> you need to show that that's more than a negligible cost.

Well, what if somebody calls the function like this?

SELECT foo(a, b) FROM huge_table

This is not a particularly uncommon thing to do, and it might easily
be the case that foo accesses b for some values of a but not all.
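
The tradeoff here — eagerly detoasting a value the function may never
read, versus detoasting lazily on first use — can be sketched outside
PostgreSQL. This is a toy model in Python, not the plpgsql patch;
`Datum`, the zlib "toasting", and the counter are illustrative
stand-ins:

```python
import zlib

class Datum:
    """A variable-length value that may be stored compressed ("toasted")."""
    def __init__(self, compressed: bytes):
        self._compressed = compressed
        self._plain = None          # filled in on first access
        self.detoast_count = 0      # instrumentation for the sketch

    def value(self) -> bytes:
        # Detoast on first use only; repeated reads hit the cache.
        if self._plain is None:
            self._plain = zlib.decompress(self._compressed)
            self.detoast_count += 1
        return self._plain

d = Datum(zlib.compress(b"x" * 10_000))

# A function that never reads the value pays nothing...
def foo_skips(_d):
    return 42

foo_skips(d)
assert d.detoast_count == 0

# ...while a loop reading it many times detoasts exactly once.
total = sum(len(d.value()) for _ in range(1000))
assert d.detoast_count == 1
```

The lazy variant preserves Robert's case (b never touched, no cost)
while still collapsing repeated detoastings into one.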

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-01-25 Thread Tom Lane
Robert Haas  writes:
> On Tue, Jan 25, 2011 at 10:03 AM, Tom Lane  wrote:
>> The arguments that were made against this were about maintenance costs
>> and code footprint.  Claims about performance are not really relevant,
>> especially when they're entirely unsupported by evidence.

> How much evidence do you need to the effect that detoasting a value
> that's never used will hurt performance?

I agree that at some microscopic level it will cost something.  What's
not been shown is that there's any significant cost in any real-world
use pattern.  As Pavel said upthread, the main thing here is to get rid
of cases where there are many many repeated detoastings.  Adding an
occasional detoast that wouldn't have happened before is a good tradeoff
for that.  To convince me that we should contort the code to go further,
you need to show that that's more than a negligible cost.

regards, tom lane



Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Kevin Grittner
Thanks for the review, Jeff!

Dan Ports  wrote:
> On Tue, Jan 25, 2011 at 01:07:39AM -0800, Jeff Davis wrote:
>> At a high level, there is a nice conceptual simplicity. Let me
>> try to summarize it as I understand it:
>> * RW dependencies are detected using predicate locking.
>> * RW dependencies are tracked from the reading transaction (as an
>> "out" conflict) and from the writing transaction (as an "in"
>> conflict).
>> * Before committing a transaction, then by looking only at the RW
>> dependencies (and predicate locks) for current and past
>> transactions, you can determine if committing this transaction
>> will result in a cycle (and therefore a serialization anomaly);
>> and if so, abort it.
> 
> This summary is right on. I would add one additional detail or
> clarification to the last point, which is that rather than
> checking for a cycle, we're checking for a transaction with both
> "in" and "out" conflicts, which every cycle must contain.
 
Yep.  For the visual thinkers out there, the whole concept can be
understood by looking at the jpeg file that's in the Wiki page:
 
http://wiki.postgresql.org/images/e/eb/Serialization-Anomalies-in-Snapshot-Isolation.png
 
>> * In RegisterSerializableTransactionInt(), for a RO xact, it
>> considers any concurrent RW xact a possible conflict. It seems
>> like it would be possible to know whether a RW transaction may
>> have overlapped with any committed RW transaction (in
>> finishedLink queue), and only add those as potential conflicts.
>> Would that work? If so, that would make more snapshots safe.
> 
> Interesting idea. That's worth some careful thought. I think it's
> related to the condition that the RW xact needs to commit with a
> conflict out to a transaction earlier than the RO xact. My first
> guess is that this wouldn't make more transactions safe, but could
> detect safe snapshots faster.
 
Yes, that would work.  It would lower one type of overhead a little
and allow RO transactions to be released from SSI tracking earlier. 
The question is how to determine it without taking too much time
scanning the finished transaction list for every active read write
transaction every time you start a RO transaction.  I don't think
that it's a trivial enough issue to consider for 9.1; it's certainly
one to put on the list to look at for 9.2.
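
For intuition, the safe-snapshot question can be modeled as simple
bookkeeping over commit times. The sketch below is a deliberately
simplified toy with invented field names, not the SSI implementation:

```python
def snapshot_is_safe(ro_start, rw_xacts):
    """ro_start: time the read-only snapshot was taken.
    rw_xacts: list of dicts with 'commit' (None if still running) and
    'conflict_out_commit' (commit time of the xact it has a conflict
    out to, or None).
    The snapshot stays provisionally unsafe while a concurrent RW
    xact is still running, and is definitively unsafe if one
    committed with a conflict out to something that committed before
    ro_start."""
    for t in rw_xacts:
        if t["commit"] is not None and t["commit"] <= ro_start:
            continue                 # not concurrent with the snapshot
        if t["commit"] is None:
            return False             # can't decide yet: treat as unsafe
        out = t["conflict_out_commit"]
        if out is not None and out < ro_start:
            return False             # dangerous structure exists
    return True

# No overlapping RW activity at all: the snapshot is safe.
assert snapshot_is_safe(10, [{"commit": 5, "conflict_out_commit": None}])
```

Jeff's suggestion corresponds to pruning the `rw_xacts` list up front to
only those that could have overlapped a committed RW transaction.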
 
>> * When a transaction finishes, then PID should probably be set to
>> zero. You only use it for waking up a deferrable RO xact waiting
>> for a snapshot, right?
> 
> Correct. It probably wouldn't hurt to clear that field when
> releasing the transaction, but we don't use it after.
 
I thought they might show up in the pid column of pg_locks, but I
see they don't.  Should they?  If so, should we still see the pid
after a commit, for as long as the lock survives?
 
-Kevin



Re: [HACKERS] Include WAL in base backup

2011-01-25 Thread Magnus Hagander
On Tue, Jan 25, 2011 at 15:04, Fujii Masao  wrote:
> On Tue, Jan 25, 2011 at 10:28 PM, Fujii Masao  wrote:
>>> (the discussed changes above have been applied and pushed to github)
>>
>> Thanks! I'll test and review that.
>
> WAL file might get recycled or removed while walsender is reading it.
> So the WAL file that pg_basebackup seemingly received successfully
> might actually be invalid. Don't we need to check that what we read
> is valid, as XLogRead does?

Yikes. Yes, you are correct. We do catch the case when it is removed,
but not when it's recycled.

I'll need to look at that. Not sure if I'll have time today, but I'll
do it as soon as I can :-)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-01-25 Thread Robert Haas
On Tue, Jan 25, 2011 at 10:03 AM, Tom Lane  wrote:
> Heikki Linnakangas  writes:
>> Detoasting on first usage, ie. exec_eval_datum(), seems the best to me.
>> Compared to detoasting on assignment, it avoids the performance
>> regression if the value is never used, and I don't think checking if the
>> value is toasted at every exec_eval_datum() call adds too much overhead.
>
> The arguments that were made against this were about maintenance costs
> and code footprint.  Claims about performance are not really relevant,
> especially when they're entirely unsupported by evidence.

How much evidence do you need to the effect that detoasting a value
that's never used will hurt performance?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-01-25 Thread David Fetter
On Tue, Jan 25, 2011 at 04:23:41PM +0100, Dimitri Fontaine wrote:
> Itagaki Takahiro  writes:
> > * Extensions installed in pg_catalog:
> > If we install an extension into pg_catalog,
> >   CREATE EXTENSION xxx WITH SCHEMA pg_catalog
> > pg_dump dumps nothing about the extension. We might need special care
> > for modules that install functions only in pg_catalog.
> 
> In src/backend/catalog/pg_depend.c we find the code for
> recordDependencyOn() which is in fact in recordMultipleDependencies(),
> and says so:
> 
> 	/*
> 	 * If the referenced object is pinned by the system, there's no real
> 	 * need to record dependencies on it.  This saves lots of space in
> 	 * pg_depend, so it's worth the time taken to check.
> 	 */
> 
> So I'm not sure about what we can do here other than error on using
> pg_catalog in CREATE EXTENSION?  How do we want to manage adminpack?
> 
> Other than adminpack, I think it makes sense to say that extensions
> are not going into pg_catalog…

Given this, we should maybe see about either making adminpack part of
PostgreSQL's core distribution (probably a good idea) or moving it out
of pg_catalog so we don't have an exception to the rule.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] Add ENCODING option to COPY

2011-01-25 Thread Hitoshi Harada
2011/1/25 Itagaki Takahiro :
> On Sat, Jan 15, 2011 at 02:25, Hitoshi Harada  wrote:
>> The patch overrides client_encoding by the added ENCODING option, and
>> restores it as soon as copy is done.
>
> We cannot do that because error messages should be encoded in the original
> encoding even during COPY commands with encoding option. Error messages
> could contain non-ASCII characters if lc_messages is set.

Agreed.

>> I see some complaints asking to use
>> pg_do_encoding_conversion() instead of
>> pg_client_to_server/server_to_client(), but the former will surely add
>> slight overhead per line read
>
> If we want to reduce the overhead, we should cache the conversion procedure
> in CopyState. How about adding something like "FmgrInfo file_to_server_conv"
> into it?

I looked into the code and found that we cannot pass FmgrInfo * to
any functions defined in pg_wchar.h, since the header file is shared
with libpq, too.

For the record, I also tried pg_do_encoding_conversion() instead of
pg_client_to_server/server_to_client(), and the simple benchmark shows
it is too slow.

with 300 lines with 3 columns (~22MB tsv) COPY FROM

*utf8 -> utf8 (no conversion)
13428.233ms
13322.832ms
15661.093ms

*euc_jp -> utf8 (client_encoding)
17527.470ms
16457.452ms
16522.337ms

*euc_jp -> utf8 (pg_do_encoding_conversion)
20550.983ms
21425.313ms
20774.323ms
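
The caching idea under discussion — resolve the conversion routine once
per COPY and reuse it for every line, rather than re-resolving each
time — looks generically like this. Python's codecs stand in for the
server's conversion procs; `CopyState` here is a hypothetical stand-in,
not the real struct:

```python
import codecs

class CopyState:
    """Caches the conversion routine once per COPY, not per line."""
    def __init__(self, file_encoding, server_encoding="utf-8"):
        if file_encoding == server_encoding:
            self.convert = lambda raw: raw            # no-conversion fast path
        else:
            decode = codecs.getdecoder(file_encoding)  # looked up once
            self.convert = lambda raw: decode(raw)[0].encode(server_encoding)

    def copy_from(self, lines):
        # Per-line work is just the cached call, no lookup.
        return [self.convert(line) for line in lines]

state = CopyState("euc_jp")
rows = state.copy_from([b"\xa4\xa2"])   # EUC-JP bytes for "あ"
assert rows == ["あ".encode("utf-8")]
```

The fast path mirrors the "utf8 -> utf8" numbers above: when encodings
match, no conversion machinery runs at all.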

I'll check the code more if we have better alternatives.

Regards,


-- 
Hitoshi Harada



Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-01-25 Thread Dimitri Fontaine
Itagaki Takahiro  writes:
> * Extensions installed in pg_catalog:
> If we install an extension into pg_catalog,
>   CREATE EXTENSION xxx WITH SCHEMA pg_catalog
> pg_dump dumps nothing about the extension. We might need special care
> for modules that install functions only in pg_catalog.

In src/backend/catalog/pg_depend.c we find the code for
recordDependencyOn() which is in fact in recordMultipleDependencies(),
and says so:

	/*
	 * If the referenced object is pinned by the system, there's no real
	 * need to record dependencies on it.  This saves lots of space in
	 * pg_depend, so it's worth the time taken to check.
	 */

So I'm not sure about what we can do here other than error on using
pg_catalog in CREATE EXTENSION?  How do we want to manage adminpack?

Other than adminpack, I think it makes sense to say that extensions are
not going into pg_catalog…

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-01-25 Thread Tom Lane
Heikki Linnakangas  writes:
> Detoasting on first usage, ie. exec_eval_datum(), seems the best to me. 
> Compared to detoasting on assignment, it avoids the performance 
> regression if the value is never used, and I don't think checking if the 
> value is toasted at every exec_eval_datum() call adds too much overhead.

The arguments that were made against this were about maintenance costs
and code footprint.  Claims about performance are not really relevant,
especially when they're entirely unsupported by evidence.

regards, tom lane



Re: [HACKERS] Is there a way to build PostgreSQL client libraries with MinGW

2011-01-25 Thread Andrew Dunstan



On 01/25/2011 09:15 AM, Xiaobo Gu wrote:

Hi Andrew,
The config.log is as following



So what is the declaration of accept at
d:/amber/devtool/rtools/mingw64/lib/gcc/../../x86_64-w64-mingw32/include/winsock2.h:1035:37 ?

cheers

andrew




[HACKERS] relhasexclusion is in the wrong place

2011-01-25 Thread Tom Lane
I've just noticed that we were guilty of very sloppy thinking in
defining pg_class.relhasexclusion.  The only place that actually
*uses* that value, rather than jumping through hoops to maintain it,
is BuildIndexInfo --- and what it's looking at is not the pg_class
entry of the table, but the pg_class entry of the index.  There is
no need whatsoever to maintain such a flag at the table level.

This being the case, I think we should move the flag to pg_index
(and rename it to indisexclusion).  That will get rid of all the
semantic fuzziness around it, the need to update it in VACUUM,
etc.

regards, tom lane



Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-01-25 Thread Dimitri Fontaine
Hi,

Itagaki Takahiro  writes:
> Hi, I read the patch and tested it to some degree.  It works as expected
> generally, but still needs a bit more development in edge cases.

Thanks for the review!

> * commands/extension.h needs cleanup.
> - Missing "extern" declarations for create_extension and
> create_extension_with_user_data variables.
> - ExtensionControlFile and ExtensionList can be hidden from the header.
> - extension_objects_fctx is not used at all.

Fixed in git.  I'm not yet used to the project style of declaring some
structures in the C code rather than the headers… so that's cleaned up
now.

> * There are many recordDependencyOn(&myself, &CreateExtensionAddress, ...)
> in some places, but I'm not sure whether we forgot something -- TRIGGERs?

I think we're good here.  The extensions I know of that use triggers
only install the functions; it's still the user who runs CREATE TRIGGER
and uses the function.  And even if we have an extension that contains
some CREATE TRIGGER commands in its script, the trigger will depend on
a function and the function on the extension.

If there's a use case for CREATE TRIGGER in an extension's script and
having the trigger not depend on another extension's object, like a
function, then I didn't see it.  I'm OK with adding triggers as
first-class dependency objects in the extension patch if there's a need…

> * Will we still need uninstaller scripts?
> DROP EXTENSION depends on object dependency to drop objects,
> but we have a few configurations that cannot be described in the
> dependency. For example, rows inserted into pg_am for user AMs
> or reverting default settings modified by the extension.

Well, I'm unconvinced by index AM extensions.  Unfortunately, if you
want a crash-safe AM, you need to patch core code; it's not an
extension any more.  About reverting default settings, I'm not following.

What I saw is that existing extensions didn't need an uninstall script
once you can DROP EXTENSION foo; but of course it would be quite easy to
add a parameter for that in the control file.  Do we need it, though?

> * Extensions installed in pg_catalog:
> If we install an extension into pg_catalog,
>   CREATE EXTENSION xxx WITH SCHEMA pg_catalog
> pg_dump dumps nothing about the extension. We might need special care
> for modules that install functions only in pg_catalog.

I didn't spot that, I just didn't think about that use case.  On a quick
test here it seems like the dependency is not recorded in this
case. Will fix.

> * I can install all extensions successfully, but there is a warning
> in hstore. The warning was not reported to the client, but the whole
> contents of hstore.sql.in was recorded in the server log.
>
>   WARNING:  => is deprecated as an operator name
>
> We set client_min_messages for the issue in hstore.sql.in, but I think
> we also need an additional "SET log_min_messages TO ERROR" around it.

We can do that, but what's happening now is my understanding of the
consensus we reached on the list.

> * Does it support nested CREATE EXTENSION calls?
> It's a rare case, but we should have some error checks for it.

In fact earthdistance could CREATE EXTENSION cube; itself in its install
script.  As you say, though, it's a rare case and it was agreed that
this feature could wait until a later version, so I didn't spend time on
it.

> * It passes all regression tests for both backend and contrib modules,
> but there are a couple of warnings during build and installation.
>
> Three compiler warnings found. Please check.
> pg_proc.c: In function 'ProcedureCreate':
> pg_proc.c:110:8: warning: 'extensionOid' may be used uninitialized in
> this function
> pg_shdepend.c: In function 'shdepReassignOwned':
> pg_shdepend.c:1335:6: warning: implicit declaration of function
> 'AlterOperatorOwner_oid'
> operatorcmds.c:369:1: warning: no previous prototype for
> 'AlterOperatorOwner_oid'

Oops, sorry about that; I miss them too easily.  What's the trick to
turn warnings into errors again, please?

Fixed in the git repository.

> * Five warnings also found during make install in contrib directory.
> ../../config/install-sh: ./uninstall_btree_gist.sql does not exist.
> ../../config/install-sh: ./uninstall_pageinspect.sql does not exist.
> ../../config/install-sh: ./uninstall_pgcrypto.sql does not exist.
> ../../config/install-sh: ./uninstall_pgrowlocks.sql does not exist.
> ../../config/install-sh: ./uninstall_pgstattuple.sql does not exist.

That's the DATA = uninstall_ lines in the Makefile.  Removed in the git
repository.  Cleared.

> * You use "const = expr" in some places, ex. if( InvalidOid != ...),
>   but we seem to prefer "expr = const".

It yields a compiler error when you mistakenly use = rather than ==,
because the left-hand side is a constant, so I got used to writing
things this way.  Should I review my patch and adapt?  Will do after
some votes :)

> * [off topic] I found some installer scripts are inconsistent.
> They uses CREATE FUNCTION for some of functions, but CREA

Re: [RRR] [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Dave Page
On Tue, Jan 25, 2011 at 1:39 PM, Richard Broersma
 wrote:
> On Tue, Jan 25, 2011 at 12:42 AM, Dave Page  wrote:
>>
>> Will the scheme be open to everyone, or just .USians?
>
> I believe such grants are limited to members of PgUS, although I
> should mention that there's no restriction on residents of any country
> becoming a member of PgUS.

OK.

>>
>> If the latter,
>> I'd be a little concerned that it may have a negative effect on
>> attracting reviewers from outside the US.
>
> Hmm...  I hadn't considered the possibility of PgUS grants being a turn-off
> to potential non-US residents.  Would PgUS's open enrollment alleviate your
> concern?

Yes, I think so.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Include WAL in base backup

2011-01-25 Thread Fujii Masao
On Tue, Jan 25, 2011 at 10:28 PM, Fujii Masao  wrote:
>> (the discussed changes above have been applied and pushed to github)
>
> Thanks! I'll test and review that.

WAL file might get recycled or removed while walsender is reading it.
So the WAL file that pg_basebackup seemingly received successfully
might actually be invalid. Don't we need to check that what we read
is valid, as XLogRead does?
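
The pattern XLogRead uses, which is being asked for here, is
read-then-verify: read the segment, then check afterwards that the file
slot wasn't recycled underneath the read, retrying if it was. A generic
sketch with hypothetical helpers (`read_bytes` and `current_segment_id`
are stand-ins, not PostgreSQL APIs):

```python
def read_wal_segment(read_bytes, current_segment_id, wanted_seg, retries=2):
    """read_bytes(seg): returns the segment's contents.
    current_segment_id(seg): returns which logical segment currently
    occupies that file slot (segment files get renamed/recycled in place).
    Verify *after* reading: if the slot was recycled mid-read, the bytes
    just read may belong to a different segment and must be discarded."""
    for _ in range(retries + 1):
        data = read_bytes(wanted_seg)
        if current_segment_id(wanted_seg) == wanted_seg:
            return data              # contents are known-good
    raise IOError("WAL segment %r recycled while reading" % (wanted_seg,))

# Stable segment: the first read is accepted.
assert read_wal_segment(lambda s: b"ok", lambda s: s,
                        "000000010000000000000001") == b"ok"
```

Checking only before the read is not enough, since recycling can happen
at any point during it; the verification must come after.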

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Is there a way to build PostgreSQL client libraries with MinGW

2011-01-25 Thread Andrew Dunstan



On 01/25/2011 06:40 AM, Xiaobo Gu wrote:

I also tried 64bit 4.5.2 GCC shipped with Rtools, the same error

checking types of arguments for accept()... configure: error: could not determine argument types




I don't have this setup, so I can't debug it.  Neither does anyone else 
I know of. I suggest you look in config.log to see where it's failing.


cheers

andrew



Re: [RRR] [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Richard Broersma
On Tue, Jan 25, 2011 at 12:42 AM, Dave Page  wrote:

>
> Will the scheme be open to everyone, or just .USians?


I believe such grants are limited to members of PgUS, although I
should mention that there's no restriction on residents of any country
becoming a member of PgUS.



> If the latter,
> I'd be a little concerned that it may have a negative effect on
> attracting reviewers from outside the US.
>

Hmm...  I hadn't considered the possibility of PgUS grants being a turn-off
to potential non-US residents.  Would PgUS's open enrollment alleviate your
concern?


-- 
Regards,
Richard Broersma Jr.


Re: [HACKERS] Include WAL in base backup

2011-01-25 Thread Fujii Masao
On Tue, Jan 25, 2011 at 8:21 PM, Magnus Hagander  wrote:
>> +               elog(DEBUG1, "Going to send wal from %i.%i to %i.%i",
>> +                        logid, logseg,
>> +                        endlogid, endlogseg);
>>
>> %u should be used instead of %i. Or how about logging the filename?
>
> I actually plan to remove that DEBUG output completely (sorry, forgot
> to mention that), I'm not sure it's useful to have it around once we
> apply. Or do you think it would be useful to keep?

Nope. I'm OK to remove that.

>> +                       XLogFileName(xlogname, ThisTimeLineID, logid, logseg);
>> +                       sprintf(fn, "./pg_xlog/%s", xlogname);
>>
>> How about using XLogFilePath? instead?
>
> Can't do that, because we need the "./" part when we call sendFile() -
> it's structured around having that one, since we get it from the file
> name looping.

Understood.

>> +                       if (logid > endptr.xlogid ||
>> +                               (logid == endptr.xlogid && logseg >= endptr.xrecoff / XLogSegSize))
>>
>> Why don't you use endlogseg?
>
> Honestly? Because I thought I added endlogseg just for the debugging
> output, didn't realize I had it down here.
>
> But if I do, then I should not use the XLByteToPrevSeg() per your
> suggestion above, right? Keep it the way it is.

Yes. If we check "logseg >= endlogseg", we should use XLByteToSeg.

>> So, what about separating the progress report into two parts: one for
>> $PGDATA and tablespaces, another for WAL files?
>
> We can't really do that. We need to send the progress report before we
> begin the tar file, and we don't know how many xlog segments we will
> have at that time. And we don't want to send pg_xlog as a separate tar
> stream, because if we do we won't be able to get the single-tar-output
> in the simple case - thus no piping etc. I actually had the xlog data
> as it's own tar stream in the beginning, but Heikki convinced me of
> the advantage of keeping it in the main one...
>
> What we could do, I guess, is to code pg_basebackup to detect when it
> starts receiving xlog files and then stop increasing the percentage at
> that point, and instead just show a counter?

Umm... not advancing the progress report while receiving WAL files
also seems odd. But I don't have another good idea... For now, I'm OK
if the behavior of the progress report is well documented.
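
The fallback being discussed — stop trusting the percentage once bytes
beyond the initial size estimate start arriving, because trailing WAL
makes the true total unknowable up front — might look like this on the
client side (a sketch only, no pg_basebackup internals):

```python
def progress_lines(estimated_total, chunk_sizes):
    """Yield one status string per received chunk. The percentage is
    clamped at 99% so it never claims completion while extra data
    (e.g. WAL beyond the estimate) is still streaming in."""
    received = 0
    for size in chunk_sizes:
        received += size
        pct = min(received * 100 // estimated_total, 99)
        yield "%d bytes (%d%%)" % (received, pct)

lines = list(progress_lines(100, [50, 40, 30]))
assert lines[-1] == "120 bytes (99%)"   # past the estimate, yet not "100%"
```

The byte counter keeps moving even after the percentage freezes, so the
user still sees that the backup is making progress.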

> (the discussed changes above have been applied and pushed to github)

Thanks! I'll test and review that.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-01-25 Thread Itagaki Takahiro
On Mon, Jan 24, 2011 at 18:22, Dimitri Fontaine  wrote:
> Following recent commit of the hstore Makefile cleaning, that I included
> into my extension patch too, I have a nice occasion to push version 27
> of the patch.  Please find it enclosed.

Hi, I read the patch and tested it to some degree.  It works as expected
generally, but still needs a bit more development in edge cases.

* commands/extension.h needs cleanup.
- Missing "extern" declarations for create_extension and
create_extension_with_user_data variables.
- ExtensionControlFile and ExtensionList can be hidden from the header.
- extension_objects_fctx is not used at all.

* There are many recordDependencyOn(&myself, &CreateExtensionAddress, ...)
in some places, but I'm not sure whether we forgot something -- TRIGGERs?

* Will we still need uninstaller scripts?
DROP EXTENSION relies on object dependencies to drop objects,
but we have a few configurations that cannot be described as
dependencies. For example, rows inserted into pg_am for user AMs,
or reverting default settings modified by the extension.

* Extensions installed in pg_catalog:
If we install an extension into pg_catalog,
  CREATE EXTENSION xxx WITH SCHEMA pg_catalog
pg_dump dumps nothing about the extension. We might need special care
for modules that install functions only in pg_catalog.

* I can install all extensions successfully, but there is a warning
in hstore. The warning was not reported to the client, but the whole
contents of hstore.sql.in was recorded in the server log.

  WARNING:  => is deprecated as an operator name

We set client_min_messages for the issue in hstore.sql.in, but I think
we also need an additional "SET log_min_messages TO ERROR" around it.

* Does it support nested CREATE EXTENSION calls?
It's a rare case, but we should have some error checks for it.

* It passes all regression tests for both backend and contrib modules,
but there are a couple of warnings during build and installation.

Three compiler warnings found. Please check.
pg_proc.c: In function 'ProcedureCreate':
pg_proc.c:110:8: warning: 'extensionOid' may be used uninitialized in
this function
pg_shdepend.c: In function 'shdepReassignOwned':
pg_shdepend.c:1335:6: warning: implicit declaration of function
'AlterOperatorOwner_oid'
operatorcmds.c:369:1: warning: no previous prototype for
'AlterOperatorOwner_oid'

* Five warnings also found during make install in contrib directory.
../../config/install-sh: ./uninstall_btree_gist.sql does not exist.
../../config/install-sh: ./uninstall_pageinspect.sql does not exist.
../../config/install-sh: ./uninstall_pgcrypto.sql does not exist.
../../config/install-sh: ./uninstall_pgrowlocks.sql does not exist.
../../config/install-sh: ./uninstall_pgstattuple.sql does not exist.

* You use "const = expr" in some places, ex. if( InvalidOid != ...),
  but we seem to prefer "expr = const".

* [off topic] I found some installer scripts are inconsistent.
They use CREATE FUNCTION for some functions, but CREATE OR REPLACE
FUNCTION for others. We'd better write documentation about how to write
installer scripts, because CREATE EXTENSION makes some implicit assumptions
about them. For example, "Don't use transactions", etc.

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Is there a way to build PostgreSQL client libraries with MinGW

2011-01-25 Thread Xiaobo Gu
I also tried the 64-bit GCC 4.5.2 shipped with Rtools, and got the same error:
> checking types of arguments for accept()... configure: error: could not
> determine argument types



On Tue, Jan 25, 2011 at 7:03 PM, Xiaobo Gu  wrote:
> Hi,
> I have successfully built 32bit PostgreSQL 9.0.2 using 32bit GCC 4.5.0
> and MinGW packaged by tdm64-gcc-4.5.1 from
> http://tdm-gcc.tdragon.net/download.
>
> But for 64bit there is only 4.5.1 GCC, which is not stable now, and
> the configure script does not pass.
>

Re: [HACKERS] Include WAL in base backup

2011-01-25 Thread Magnus Hagander
On Tue, Jan 25, 2011 at 10:56, Fujii Masao  wrote:
> On Tue, Jan 25, 2011 at 12:33 AM, Magnus Hagander  wrote:
>> Here's an updated version of the patch:
>>
>> * rebased on the current git master (including changing the switch
>> from -w to -x since -w was used now)
>> * includes some documentation
>> * use XLogByteToSeg and uint32 as mentioned here
>> * refactored to remove SendBackupDirectory(), removing the ugly closetar 
>> boolean
>
> I reviewed the patch. Here are comments.
>
>
> +               {"xlog", no_argument, NULL, 'w'},
>
> Typo: s/w/x

Ah, oops. Thanks.

> /*
>  * BASE_BACKUP [LABEL ] [PROGRESS] [FAST]
>  */
>
> In repl_gram.y, the above comment needs to be updated.

Same here - oops, thanks. It was also missing the quotes around .


> Both this patch and the multiple-backup one remove SendBackupDirectory
> in almost the same way. So, how about committing that common part first?

I think they're small enough that we'll just commit it along with
whichever comes first and then have the other one merge with it.

> +               XLByteToSeg(endptr, endlogid, endlogseg);
>
> XLByteToPrevSeg should be used for the backup end location. Because
> when it's a boundary byte, all the required WAL records exist in the
> previous WAL segment. This is why pg_stop_backup uses XLByteToPrevSeg
> for the backup end location.

Well, it's just for debugging output, really, but see below.


> +               elog(DEBUG1, "Going to send wal from %i.%i to %i.%i",
> +                        logid, logseg,
> +                        endlogid, endlogseg);
>
> %u should be used instead of %i. Or how about logging the filename?

I actually plan to remove that DEBUG output completely (sorry, forgot
to mention that). I'm not sure it's useful to have it around once we
apply. Or do you think it would be useful to keep?


> +                       char    xlogname[64];
>
> How about using MAXFNAMELEN instead of 64?
>

Agreed.


> +                       XLogFileName(xlogname, ThisTimeLineID, logid, logseg);
> +                       sprintf(fn, "./pg_xlog/%s", xlogname);
>
> How about using XLogFilePath instead?

Can't do that, because we need the "./" part when we call sendFile() -
it's structured around having that prefix, since we get it from the
filename loop.


> +                       if (logid > endptr.xlogid ||
> +                               (logid == endptr.xlogid && logseg >= endptr.xrecoff / XLogSegSize))
>
> Why don't you use endlogseg?

Honestly? Because I thought I added endlogseg just for the debugging
output; I didn't realize I had it down here.

But if I do use it, then I should not use XLByteToPrevSeg() per your
suggestion above, right? I'll keep it the way it is.

> The estimated size of $PGDATA doesn't include WAL files, but the
> actual one does.
> This inconsistency messes up the progress report as follows:
>
>    33708/17323 kB (194%) 1/1 tablespaces

Yeah, that is definitely a potential problem. I think we're just going
to have to document it - and in a big database, it's not going to be
quite as bad...


> So, what about separating the progress report into two parts: one for
> $PGDATA and tablespaces, another for WAL files?

We can't really do that. We need to send the progress report before we
begin the tar file, and we don't know how many xlog segments we will
have at that time. And we don't want to send pg_xlog as a separate tar
stream, because if we do we won't be able to get the single-tar-output
in the simple case - thus no piping etc. I actually had the xlog data
as its own tar stream in the beginning, but Heikki convinced me of
the advantage of keeping it in the main one...

What we could do, I guess, is to code pg_basebackup to detect when it
starts receiving xlog files and then stop increasing the percentage at
that point, instead just doing a counter?

(the discussed changes above have been applied and pushed to github)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Is there a way to build PostgreSQL client libraries with MinGW

2011-01-25 Thread Xiaobo Gu
Hi,
I have successfully built 32bit PostgreSQL 9.0.2 using 32bit GCC 4.5.0
and MinGW packaged by tdm64-gcc-4.5.1 from
http://tdm-gcc.tdragon.net/download.

But for 64bit there is only 4.5.1 GCC, which is not stable now, and
the configure script does not pass.

$ configure --without-zlib
checking build system type... i686-pc-mingw32
checking host system type... i686-pc-mingw32
checking which template to use... win32
checking whether to build with 64-bit integer date/time support... yes
checking whether NLS is wanted... no
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... gcc
checking for C compiler default output file name... a.exe
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables... .exe
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking if gcc supports -Wdeclaration-after-statement... yes
checking if gcc supports -Wendif-labels... yes
checking if gcc supports -fno-strict-aliasing... yes
checking if gcc supports -fwrapv... yes
checking whether the C compiler still works... yes
checking how to run the C preprocessor... gcc -E
checking allow thread-safe client libraries... yes
checking whether to build with Tcl... no
checking whether to build Perl modules... no
checking whether to build Python modules... no
checking whether to build with GSSAPI support... no
checking whether to build with Kerberos 5 support... no
checking whether to build with PAM support... no
checking whether to build with LDAP support... no
checking whether to build with Bonjour support... no
checking whether to build with OpenSSL support... no
configure: WARNING: *** Readline does not work on MinGW --- disabling
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ld used by GCC... d:/amber/devtool/mingw64/x86_64-w64-mingw32/bin/l
d.exe
checking if the linker (d:/amber/devtool/mingw64/x86_64-w64-mingw32/bin/ld.exe)
is GNU ld... yes
checking for ranlib... ranlib
checking for strip... strip
checking whether it is possible to strip libraries... yes
checking for ar... ar
checking for dlltool... dlltool
checking for dllwrap... dllwrap
checking for windres... windres
checking for tar... /bin/tar
checking whether ln -s works... no, using cp -p
checking for gawk... gawk
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for bison... /bin/bison
configure: using bison (GNU Bison) 2.4.2
checking for flex... /bin/flex
configure: using flex 2.5.35
checking for perl... /bin/perl
configure: using perl 5.6.1
configure: WARNING:
*** The installed version of Perl, /bin/perl, is too old to use with PostgreSQL.

*** Perl version 5.8 or later is required, but this is 5.6.1.
configure: WARNING:
*** Without Perl you will not be able to build PostgreSQL from Git.
*** You can obtain Perl from any CPAN mirror site.
*** (If you are using the official distribution of PostgreSQL then you do not
*** need to worry about this, because the Perl output is pre-generated.)
checking for main in -lm... yes
checking for library containing setproctitle... no
checking for library containing dlopen... no
checking for library containing socket... -lwsock32
checking for library containing shl_load... no
checking for library containing getopt_long... none required
checking for library containing crypt... no
checking for library containing fdatasync... no
checking for library containing gethostbyname_r... no
checking for library containing shmget... no
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking crypt.h usability... no
checking crypt.h presence... no
checking for crypt.h... no
checking dld.h usability... no
checking dld.h presence... no
checking for dld.h... no
checking fp_class.h usability... no
checking fp_class.h presence... no
checking for fp_class.h... no
checking getopt.h usability... yes
checking getopt.h presence... yes
checking for getopt.h... yes
checking ieeefp.h usability... yes
checking ieeefp.h presence... yes
checking for ieeefp.h... yes
checking ifaddrs.h usability... no
checking ifaddrs.h presence... no
checking for ifaddrs.h... no
checking langinfo.h usability... no
checking langinfo.h presence... no
checking for langinfo.h... no
checking poll.h usability... no
checking poll.h presence... no
checking for poll.h... no
checking pwd.h usability... yes
checking pwd.h presence... yes
checking for pwd.h... yes
checking sys/ioctl.h usability... no
checking sys/ioctl.h 

Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Dan Ports
Thanks for working your way through this patch. I'm certainly well
aware that that's not a trivial task!

I'm suffering through a bout of insomnia, so I'll respond to some of
your high-level comments in hopes that serializability will help put me
to sleep (as it often does). I'll leave the more detailed code comments
for later when I'm actually looking at the code, or better yet Kevin
will take care of them and I won't have to. ;-)

On Tue, Jan 25, 2011 at 01:07:39AM -0800, Jeff Davis wrote:
> At a high level, there is a nice conceptual simplicity. Let me try to
> summarize it as I understand it:
>   * RW dependencies are detected using predicate locking.
>   * RW dependencies are tracked from the reading transaction (as an
> "out") conflict; and from the writing transaction (as an "in"
> conflict).
>   * Before committing a transaction, then by looking only at the RW 
> dependencies (and predicate locks) for current and past 
> transactions, you can determine if committing this transaction will
> result in a cycle (and therefore a serialization anomaly); and if
> so, abort it.

This summary is right on. I would add one additional detail or
clarification to the last point, which is that rather than checking for
a cycle, we're checking for a transaction with both "in" and "out"
conflicts, which every cycle must contain.

> That's where the simplicity ends, however ;)

Indeed!

> Tracking of RW conflicts of current and past transactions is more
> complex. Obviously, it doesn't keep _all_ past transactions, but only
> ones that overlap with a currently-running transaction. It does all of
> this management using SHMQueue. There isn't much of an attempt to
> gracefully handle OOM here as far as I can tell, it just throws an error
> if there's not enough room to track a new transaction (which is
> reasonable, considering that it should be quite rare and can be
> mitigated by increasing max_connections). 

If the OOM condition you're referring to is the same one from the
following comment, then it can't happen: (Apologies if I've
misunderstood what you're referring to.)

> * In RegisterSerializableTransactionInt, if it doesn't get an sxact, it
> goes into summarization. But summarization assumes that it has at least
> one finished xact. Is that always true? If you have enough memory to
> hold a transaction for each connection, plus max_prepared_xacts, plus
> one, I think that's true. But maybe that could be made more clear?

Yes -- the SerializableXact pool is allocated up front and it
definitely has to be bigger than the number of possible active
transactions. In fact, it's much larger: 10 * (MaxBackends +
max_prepared_xacts) to allow some room for the committed transactions
we still have to track.

> * In RegisterSerializableTransactionInt(), for a RO xact, it considers
> any concurrent RW xact a possible conflict. It seems like it would be
> possible to know whether a RW transaction may have overlapped with any
> committed RW transaction (in finishedLink queue), and only add those as
> potential conflicts. Would that work? If so, that would make more
> snapshots safe.

Interesting idea. That's worth some careful thought. I think it's
related to the condition that the RW xact needs to commit with a
conflict out to a transaction earlier than the RO xact. My first guess
is that this wouldn't make more transactions safe, but could detect
safe snapshots faster.

> * When a transaction finishes, then PID should probably be set to zero.
> You only use it for waking up a deferrable RO xact waiting for a
> snapshot, right?

Correct. It probably wouldn't hurt to clear that field when releasing
the transaction, but we don't use it after.

> * I'm a little unclear on summarization and writing to the SLRU. I don't
> see where it's determining that the predicate locks can be safely
> released. Couldn't the oldest transaction still have relevant predicate
> locks?

When a SerializableXact gets summarized to the SLRU, its predicate
locks aren't released; they're transferred to the dummy
OldCommittedSxact.

> I'll keep working on this patch. I hope I can be of some help getting
> this committed, because I'm looking forward to this feature. And I
> certainly think that you and Dan have applied the amount of planning,
> documentation, and careful implementation necessary for a feature like
> this.

Hopefully my comments here will help clarify the patch. It's not lost
on me that there's no shortage of complexity in the patch, so if you
found anything particularly confusing we should probably add some
documentation to README-SSI.

Dan

-- 
Dan R. K. Ports    MIT CSAIL    http://drkp.net/



Re: [HACKERS] Include WAL in base backup

2011-01-25 Thread Fujii Masao
On Tue, Jan 25, 2011 at 12:33 AM, Magnus Hagander  wrote:
> Here's an updated version of the patch:
>
> * rebased on the current git master (including changing the switch
> from -w to -x since -w was used now)
> * includes some documentation
> * use XLogByteToSeg and uint32 as mentioned here
> * refactored to remove SendBackupDirectory(), removing the ugly closetar 
> boolean

I reviewed the patch. Here are comments.


+   {"xlog", no_argument, NULL, 'w'},

Typo: s/w/x


/*
 * BASE_BACKUP [LABEL ] [PROGRESS] [FAST]
 */

In repl_gram.y, the above comment needs to be updated.


Both this patch and the multiple-backup one remove SendBackupDirectory
in almost the same way. So, how about committing that common part first?


+   XLByteToSeg(endptr, endlogid, endlogseg);

XLByteToPrevSeg should be used for the backup end location. Because
when it's a boundary byte, all the required WAL records exist in the
previous WAL segment. This is why pg_stop_backup uses XLByteToPrevSeg
for the backup end location.


+   elog(DEBUG1, "Going to send wal from %i.%i to %i.%i",
+        logid, logseg,
+        endlogid, endlogseg);

%u should be used instead of %i. Or how about logging the filename?


+   char    xlogname[64];

How about using MAXFNAMELEN instead of 64?


+   XLogFileName(xlogname, ThisTimeLineID, logid, logseg);
+   sprintf(fn, "./pg_xlog/%s", xlogname);

How about using XLogFilePath instead?


+   if (logid > endptr.xlogid ||
+       (logid == endptr.xlogid && logseg >= endptr.xrecoff / XLogSegSize))

Why don't you use endlogseg?


The estimated size of $PGDATA doesn't include WAL files, but the
actual one does.
This inconsistency messes up the progress report as follows:

33708/17323 kB (194%) 1/1 tablespaces

So, what about separating the progress report into two parts: one for $PGDATA
and tablespaces, another for WAL files?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] Re: Bug in parse_basebackup_options Re: [COMMITTERS] pgsql: Make walsender options order-independent

2011-01-25 Thread Magnus Hagander
On Tue, Jan 25, 2011 at 10:32, Fujii Masao  wrote:
> On Mon, Jan 24, 2011 at 7:41 AM, Magnus Hagander  wrote:
>> Make walsender options order-independent
>>
>> While doing this, also move base backup options into
>> a struct instead of increasing the number of parameters
>> to multiple functions for each new option.
>
> When I read the include-WAL-in-backup patch, I found that this
> commit had a bug: parse_basebackup_options fails to parse
> options because of incorrect usage of MemSet. Here is the patch.

Wow. I know I fixed that at one point; it must've gone bad again in a
merge. Interestingly enough it works in my environment, but that must
be pure luck.

So thanks - applied.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Fwd: [JDBC] Weird issues when reading UDT from stored function

2011-01-25 Thread Lukas Eder
>
> > Here, we've somehow got the first two fields of u_address_type - street
> > and zip - squashed together into one column named 'street', and all the
> > other columns nulled out.
>
> I think this is the old problem of PL/pgsql having two forms of SELECT
> INTO.  You can either say:
>
> SELECT col1, col2, col3, ... INTO recordvar FROM ...
>
> Or you can say:
>
> SELECT col1, col2, col3, ... INTO nonrecordvar1, nonrecordvar2,
> nonrecordvar3, ... FROM ...
>
> In this case, since address is a recordvar, it's expecting the first
> form - thus the first select-list item gets matched to the first
> column of the address, rather than to address as a whole.  It's not
> smart enough to consider the types of the items involved - only
> whether they are records.  :-(
>

So what you're suggesting is that the plpgsql code is causing the issues?
Are there any indications about how I could re-write this code? The
important thing for me is to have the aforementioned signature of the
plpgsql function with one UDT OUT parameter. Even if this is a bit awkward
in general, in this case, I don't mind rewriting the plpgsql function
content to create a workaround for this problem...


[HACKERS] Bug in parse_basebackup_options Re: [COMMITTERS] pgsql: Make walsender options order-independent

2011-01-25 Thread Fujii Masao
On Mon, Jan 24, 2011 at 7:41 AM, Magnus Hagander  wrote:
> Make walsender options order-independent
>
> While doing this, also move base backup options into
> a struct instead of increasing the number of parameters
> to multiple functions for each new option.

When I read the include-WAL-in-backup patch, I found that this
commit had a bug: parse_basebackup_options fails to parse
options because of incorrect usage of MemSet. Here is the patch.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


parse_basebackup_options_bugfix_v1.patch
Description: Binary data



Re: [HACKERS] SSI patch version 14

2011-01-25 Thread Jeff Davis
On Mon, 2011-01-24 at 21:30 -0600, Kevin Grittner wrote: 
> Attached.
>  
> Dan and I have spent many, many hours desk-checking and testing,
> including pounding for many hours in DBT-2 at high concurrency
> factors on a 16 processor box.  In doing so, we found and fixed a few
> issues.  Neither of us is aware of any bugs or deficiencies
> remaining, even after a solid day of pounding in DBT-2, other than
> the failure to extend any new functionality to hot standby.
>  
> At Heikki's suggestion I have included a test to throw an error on an
> attempt to switch to serializable mode during recovery.  Anything
> more to address that issue can be a small follow-up patch -- probably
> mainly a few notes in the docs.

Here is my first crack at a review:

First of all, I am very impressed with the patch. I was pleased to see
that both READ ONLY DEFERRABLE and 2PC work! Also, I found the patch
very usable and did not encounter any bugs or big surprises.

Second, I do not understand this patch entirely, so the following
statements can be considered questions as much as answers.

At a high level, there is a nice conceptual simplicity. Let me try to
summarize it as I understand it:
  * RW dependencies are detected using predicate locking.
  * RW dependencies are tracked from the reading transaction (as an
"out") conflict; and from the writing transaction (as an "in"
conflict).
  * Before committing a transaction, then by looking only at the RW 
dependencies (and predicate locks) for current and past 
transactions, you can determine if committing this transaction will
result in a cycle (and therefore a serialization anomaly); and if
so, abort it.

That's where the simplicity ends, however ;)

For starters, the above structures require unlimited memory, while we
have fixed amounts of memory. The predicate locks are referenced by a
global shared hash table as well as per-process SHMQueues. It can adapt
memory usage downward in three ways:
  * increase lock granularity -- e.g. change N page locks into a
table lock
  * "summarization" -- fold multiple locks on the same object across
many old committed transactions into a single lock
  * use the SLRU

Tracking of RW conflicts of current and past transactions is more
complex. Obviously, it doesn't keep _all_ past transactions, but only
ones that overlap with a currently-running transaction. It does all of
this management using SHMQueue. There isn't much of an attempt to
gracefully handle OOM here as far as I can tell, it just throws an error
if there's not enough room to track a new transaction (which is
reasonable, considering that it should be quite rare and can be
mitigated by increasing max_connections). 

The heavy use of SHMQueue is quite reasonable, but for some reason I
find the API very difficult to read. I think it would help (at least me)
quite a lot to annotate (i.e. with a comment in the struct) the various
lists and links with the types of data being held.

The actual checking of conflicts isn't quite so simple, either, because
we want to detect problems before the victim transaction has done too
much work. So, checking is done along the way in several places, and
it's a little difficult to follow exactly how those smaller checks add
up to the overall serialization-anomaly check (the third point in my
simple model).

There are also optimizations around transactions declared READ ONLY.
Some of these are a little difficult to follow as well, and I haven't
followed them all.

There is READ ONLY DEFERRABLE, which is a nice feature that waits for a
"safe" snapshot, so that the transaction will never be aborted.

Now, on to my code comments (non-exhaustive):

* I see that you use a union as well as using "outLinks" to also be a
free list. I suppose you did this to conserve shared memory space,
right?

* In RegisterSerializableTransactionInt(), for a RO xact, it considers
any concurrent RW xact a possible conflict. It seems like it would be
possible to know whether a RW transaction may have overlapped with any
committed RW transaction (in finishedLink queue), and only add those as
potential conflicts. Would that work? If so, that would make more
snapshots safe.

* When a transaction finishes, then PID should probably be set to zero.
You only use it for waking up a deferrable RO xact waiting for a
snapshot, right?

* Still some compiler warnings:
twophase.c: In function ‘FinishPreparedTransaction’:
twophase.c:1360: warning: implicit declaration of function
‘PredicateLockTwoPhaseFinish’

* You have a function called CreatePredTran. We are already using
"Transaction" and "Xact" -- we probably don't need "Tran" as well.

* HS error has a typo:
"ERROR:  cannot set transaction isolationn level to serializable during
recovery"

* I'm a little unclear on summarization and writing to the SLRU. I don't
see where it's determining that the predicate locks can be safely
released. Couldn't the oldest transaction still have relevant predicate
locks?

* 

Re: [HACKERS] Fwd: [JDBC] Weird issues when reading UDT from stored function

2011-01-25 Thread rsmogura

Hi,
I don't know if this is a bug, but at least I haven't found any clear
statement about it in the documentation; this should be written in big,
bold letters.


In any case I think this is a bug or a big inconsistency because, as was
stated in a previous mail:
test=# CREATE FUNCTION p_enhance_address3 (address OUT u_address_type,
i1 OUT int)
AS $$
BEGIN
SELECT t_author.address
INTO address
FROM t_author
WHERE first_name = 'George';
i1 = 12;
END;
$$ LANGUAGE plpgsql;
test=# select * from p_enhance_address3();
                 address                 | i1
-----------------------------------------+----
 ("(""(Parliament Hill,77)"",NW31A9)",)  | 12
(1 row),
but if you create the above function without the last parameter, i1,
(SELECT * FROM p_enhance_address2();) then the result will be
               street                | zip | city | country | since | code
-------------------------------------+-----+------+---------+-------+------
 ("(""Parliament Hill"",77)",NW31A9) |     |      |         |       |
In the last case, I think, the result should be "packed" into one column,
because it is clearly an "unpacked" record.


On Tue, 25 Jan 2011 14:39:51 +0700, Lukas Eder wrote:

Here, we've somehow got the first two fields of u_address_type -

street and



zip - squashed together into one column named 'street', and all

the other

columns nulled out.

I think this is the old problem of PL/pgsql having two forms of SELECT
INTO.  You can either say:

SELECT col1, col2, col3, ... INTO recordvar FROM ...

Or you can say:

SELECT col1, col2, col3, ... INTO nonrecordvar1, nonrecordvar2,
nonrecordvar3, ... FROM ...

In this case, since address is a recordvar, it's expecting the first
form - thus the first select-list item gets matched to the first column
of the address, rather than to address as a whole.  It's not smart
enough to consider the types of the items involved - only whether they
are records.  :-(
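
A minimal sketch of the two forms, using hypothetical names (table t,
record variable r, scalar variables x and y) rather than the reporter's
schema:

```sql
-- Assume: CREATE TABLE t (a int, b text);
-- and these plpgsql declarations: r record; x int; y text;

-- Form 1: record target - the select-list items become the
-- record's columns, matched position by position.
SELECT a, b INTO r FROM t WHERE a = 1;

-- Form 2: scalar targets - each item maps to one variable.
SELECT a, b INTO x, y FROM t WHERE a = 1;
```

With a composite-typed target, plpgsql picks Form 1 purely because the
target is a record, which is why a single composite select-list item gets
spread across the record's first column instead of filling the whole record.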


 
So what you're suggesting is that the plpgsql code is causing the
issues? Are there any indications about how I could re-write this
code? The important thing for me is to have the aforementioned
signature of the plpgsql function with one UDT OUT parameter. Even
if this is a bit awkward in general, in this case, I don't mind
rewriting the plpgsql function content to create a workaround for
this problem... 
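
One possible workaround, sketched against the function shown earlier in the
thread (untested here, so treat it as an assumption): bypass SELECT INTO's
record-matching entirely by assigning the composite column with a scalar
subquery, so the whole composite value lands in the OUT parameter:

```sql
CREATE FUNCTION p_enhance_address3 (address OUT u_address_type, i1 OUT int)
AS $$
BEGIN
    -- Direct assignment treats the subquery result as one composite
    -- value, avoiding SELECT INTO's column-by-column matching for records.
    address := (SELECT t_author.address
                FROM t_author
                WHERE first_name = 'George');
    i1 := 12;
END;
$$ LANGUAGE plpgsql;
```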



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [RRR] [HACKERS] Seeking Mentors for Funded Reviewers

2011-01-25 Thread Dave Page
On Tue, Jan 25, 2011 at 2:43 AM, Richard Broersma
 wrote:
>
>
> On Mon, Jan 24, 2011 at 5:53 PM, Josh Berkus  wrote:
>>
>> On 1/24/11 12:17 PM, Richard Broersma wrote:
>> > PgUS is preparing to fund a grant for PgUS members to learn and
>> > participate in the patch review process.  We looking for experienced
>> > reviewers that can assist a candidate through to process of testing a
>> > patch - to submitting the final review.  The ultimate deliverable
>> > would be the actual review posted to Hackers.
>> >
>> > Would anyone be available to assist with this?
>>
>> Do we have candidate mentees?
>>
>
> Not at the moment.  We're still in the process of getting ready.
>
> We have the funding.  We're looking for mentors.  Next we'll be just
> about ready to open the application process.  But I'd expect several
> weeks to pass before we're ready to look at applicants.

Will the scheme be open to everyone, or just .USians? If the latter,
I'd be a little concerned that it may have a negative effect on
attracting reviewers from outside the US.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
