[HACKERS] some questions about SELECT FOR UPDATE/SHARE

2009-02-12 Thread Tao Ma
Hi,
I noticed Tom Lane made SELECT FOR UPDATE/SHARE work on inheritance trees,
but the TODO list has not been updated to reflect this change.
http://wiki.postgresql.org/wiki/Todo#Inheritance

Actually, there is a patch that tries to remove the ONLY clause from the
foreign key checks on inheritance tables. Matt Newell, the author of the patch,
removed too many things (both ONLY and FOR SHARE) to achieve his aim. At that
time, SELECT FOR UPDATE/SHARE could not work on inheritance tables, and without
FOR UPDATE/SHARE there would be bugs under concurrent updates (as Tom Lane
said).

The discussion of Matt Newell's patch is here:
http://archives.postgresql.org/pgsql-patches/2005-11/msg00062.php

My question is: Is it possible to remove the ONLY from the RI checks after 
Tom Lane made SELECT FOR UPDATE/SHARE work on inheritance trees?
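
(For context, the internally generated RI check in question looks roughly like
the following; the real query text is built in ri_triggers.c, and the table and
column names here are only placeholders:)

SELECT 1 FROM ONLY pktable x WHERE x.pkcol = $1 FOR SHARE OF x;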

Best wishes 





Re: [HACKERS] temporarily stop autovacuum

2009-02-12 Thread Bernd Helmle
--On Wednesday, February 11, 2009 13:18:11 -0500 Robert Haas 
robertmh...@gmail.com wrote:



In any case it's not difficult to write a script that loops over all
of your tables with ALTER TABLE.  It's probably not as fast as a
single UPDATE statement, but I suspect you'd need to have an enormous
number of tables for that to matter much.


Agreed; we often recommend this for all kinds of GRANTs, REVOKEs and so on.
But while we don't (yet) have any facility to achieve this behavior with
those commands, a possible solution does exist for autovacuum, and although
it is a crude, temporary one, I know people see pg_autovacuum as a feature
meant for exactly this kind of maintenance.
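
(As an illustration only, a minimal sketch of such a loop, assuming 8.4-style
per-table storage parameters; the function name is made up:)

CREATE OR REPLACE FUNCTION set_autovacuum_all(enabled boolean) RETURNS void AS $$
DECLARE
    t record;
BEGIN
    FOR t IN SELECT n.nspname, c.relname
               FROM pg_class c
               JOIN pg_namespace n ON n.oid = c.relnamespace
              WHERE c.relkind = 'r'
                AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    LOOP
        EXECUTE 'ALTER TABLE ' || quote_ident(t.nspname) || '.'
                || quote_ident(t.relname)
                || ' SET (autovacuum_enabled = '
                || CASE WHEN enabled THEN 'true' ELSE 'false' END || ')';
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- SELECT set_autovacuum_all(false);   -- pause autovacuum for all user tables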



--
 Thanks

   Bernd



Re: [HACKERS] WIP: fix SET WITHOUT OIDS, add SET WITH OIDS

2009-02-12 Thread Greg Stark
Well for one thing because they don't scale well to billions of  
records. For another they're even less like the standard or anything  
any other database has.


I agree with you that there's no reason to actively deprecate OIDs or  
hurt users who use them. But we should make it as easy as possible for  
users who want to move to a normal primary key, not put obstacles in  
their way like large full table rewrites.


--
Greg


On 10 Feb 2009, at 01:49, Tom Lane t...@sss.pgh.pa.us wrote:


Greg Stark st...@enterprisedb.com writes:

I think what you propose would be a mistake. We want to encourage
people to move *away* from OIDS.


Why?  I don't agree with that premise, and therefore not with any
of the rest of your argument.

   regards, tom lane




Re: [HACKERS] Optimization rules for semi and anti joins

2009-02-12 Thread Greg Stark

On 11 Feb 2009, at 00:03, Tom Lane t...@sss.pgh.pa.us wrote:


Actually, that makes less sense than the antijoin case.  For antijoin
there is a well-defined value for the extended columns, ie null.  For
a semijoin the RHS values might come from any of the rows that happen
to join to the current LHS row, so I'm just as happy that it's
syntactically impossible to reference them.


Actually I think the way mysql users used to spell EXISTS/IN before  
mysql supported them would qualify as a semijoin where you can access  
the columns:


SELECT distinct a.* from a,b WHERE a.id = b.id

To access columns from b in postgres you would have to use DISTINCT ON. 
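
(A sketch of that DISTINCT ON spelling, with a made-up payload column on b:)

SELECT DISTINCT ON (a.id) a.*, b.val
  FROM a JOIN b ON a.id = b.id
 ORDER BY a.id;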




Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Matteo Beccati
Hi Tom,

 Given I was using 8.3, it seemed quite right to set the reset statement
 to ABORT; DISCARD ALL. Everything works fine, until a load spike
 happens and pgpool-II reset queries start to lag behind, with DISCARD
 ALL failing to acquire an exclusive lock on the pg_listen system table,
 although the application isn't using any LISTEN/NOTIFY. The reason was
 not obvious to me, but looking at the man page explained a lot: DISCARD
 ALL also performs an UNLISTEN *.
 
 Seems like we could/should fix UNLISTEN * to not do anything if it is
 known that the current backend never did any LISTENs.

Here's my proposed patch, both for HEAD and 8.3:

http://www.beccati.com/misc/pgsql/async_unlisten_skip_HEAD.patch
http://www.beccati.com/misc/pgsql/async_unlisten_skip_REL8_3_STABLE.patch

I tried to write a regression test, but couldn't find a suitable way to
get the regression framework to cope with trace_notify printing the backend
pid. I even tried to add a @backend_pid@ variable to pg_regress, but
soon realised that the pid is not available to psql when variable
substitution happens.

So, here's the output of some tests I made:

http://www.beccati.com/misc/pgsql/async_unlisten_skip.out

Note: DISCARD doesn't produce any debug output, because the GUC
variables are reset before Async_UnlistenAll is called.


Cheers

-- 
Matteo Beccati

OpenX - http://www.openx.org



Re: [HACKERS] Hot Standby: subxid cache changes

2009-02-12 Thread Simon Riggs

On Thu, 2009-02-12 at 09:50 +0200, Heikki Linnakangas wrote:

 It occurs to me that we don't need this patch for hot standby if we 
 abuse the main xid array (SnapshotData.xip) to store the unobserved xids 
 instead of the subxid array. That one is always scanned in 
 XidInMVCCSnapshot. I think we should do that rather than try to salvage 
 this patch.

At this stage, yes.

 So far so good, but what about all the other callers of 
 SubTransGetParent()? For example, XactLockTableWait will fail an 
 assertion if asked to wait on a subtransaction which is then released.

I agree that it could fail the assertion, though it is clear that the
assertion should now be removed.

The logic is: if there is no lock table entry for that xid *and* it is
not in progress *and* it is not in pg_subtrans, then it must have been
an aborted subtransaction of a currently active xact or it has otherwise
completed.

I think we can rework the other aspects also.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




[HACKERS] Need help in porting Oracle PL/SQL's OUT paramater based procedures

2009-02-12 Thread Gurjeet Singh
Hi All,

I am involved in porting Spacewalk's (https://fedorahosted.org/spacewalk/)
backend DB schema from Oracle to Postgres. We are almost done with table
migration, and are now attempting procedure/function porting.

A few things have been sorted out
(link: https://www.redhat.com/archives/spacewalk-devel/2009-February/msg00123.html),
and I am now stuck with the OUT parameters! I saw the example of converting the
PL/SQL function cs_parse_url
(http://www.postgresql.org/docs/8.1/static/plpgsql-porting.html#PLPGSQL-PORTING-EX3),
and see that in the end it is advised to use the ported version as

SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz');

that is, not passing anything for the OUT or INOUT parameters. This works
fine for a simple SELECT usage, but does not play well when this function is
to be called from another function, (and assuming that it'd break the
application code too, which uses Oracle syntax of calling functions)!

I have a simple function f() which I'd like to be ported in such a way that
it works when called from other plpgsql code, as well as when the
application uses the Oracle like syntax. Here's a sample usage of the
function f() in Oracle:

snip
create or replace function f( a in out int, b out varchar ) return char as
begin
a := 10;
b := 'some string';
return 'c';
end;
/

create or replace function f_caller return int as
a int;
b varchar(32);
begin
dbms_output.put_line( f( a, b ) );
dbms_output.put_line( a );
dbms_output.put_line( b );
return 0;
end;
/

set serveroutput on

select f_caller from dual;

  F_CALLER
----------
 0

c
10
some string
/snip

Has anyone attempted porting PL/SQL, and if so, please share your experience 
with the OUT parameters.

Thanks and best regards,

PS: Postgres 8.1 is going to be the oldest supported version by Spacewalk.
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


[HACKERS] Fwd: Need help in porting Oracle PL/SQL's OUT paramater based procedures

2009-02-12 Thread Gurjeet Singh
(forgot to include general list earlier)

Hi All,

I am involved in porting Spacewalk's (https://fedorahosted.org/spacewalk/)
backend DB schema from Oracle to Postgres. We are almost done with table
migration, and are now attempting procedure/function porting.

A few things have been sorted out
(link: https://www.redhat.com/archives/spacewalk-devel/2009-February/msg00123.html),
and I am now stuck with the OUT parameters! I saw the example of converting the
PL/SQL function cs_parse_url
(http://www.postgresql.org/docs/8.1/static/plpgsql-porting.html#PLPGSQL-PORTING-EX3),
and see that in the end it is advised to use the ported version as

SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz');

that is, not passing anything for the OUT or INOUT parameters. This works
fine for a simple SELECT usage, but does not play well when this function is
to be called from another function, (and assuming that it'd break the
application code too, which uses Oracle syntax of calling functions)!

I have a simple function f() which I'd like to be ported in such a way that
it works when called from other plpgsql code, as well as when the
application uses the Oracle like syntax. Here's a sample usage of the
function f() in Oracle:

snip
create or replace function f( a in out int, b out varchar ) return char as
begin
a := 10;
b := 'some string';
return 'c';
end;
/

create or replace function f_caller return int as
a int;
b varchar(32);
begin
dbms_output.put_line( f( a, b ) );
dbms_output.put_line( a );
dbms_output.put_line( b );
return 0;
end;
/

set serveroutput on

select f_caller from dual;

  F_CALLER
----------
 0

c
10
some string
/snip

Has anyone attempted porting PL/SQL, and if so, please share your experience 
with the OUT parameters.

Thanks and best regards,

PS: Postgres 8.1 is going to be the oldest supported version by Spacewalk.
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [HACKERS] Hot Standby: subxid cache changes

2009-02-12 Thread Heikki Linnakangas

Simon Riggs wrote:

On Thu, 2009-02-12 at 09:50 +0200, Heikki Linnakangas wrote:
So far so good, but what about all the other callers of 
SubTransGetParent()? For example, XactLockTableWait will fail an 
assertion if asked to wait on a subtransaction which is then released.


I agree that it could fail the assertion, though it is clear that the
assertion should now be removed.


No, then you just get an infinite loop instead, trying to get the parent 
of 0 over and over again.



The logic is: if there is no lock table entry for that xid *and* it is
not in progress *and* it is not in pg_subtrans, then it must have been
an aborted subtransaction of a currently active xact or it has otherwise
completed.


Right, we got it right that far. But after the subtransaction has 
completed, the question is: what's its parent? That's what the patch got 
wrong.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] Re: [GENERAL] Fwd: Need help in porting Oracle PL/SQL's OUT paramater based procedures

2009-02-12 Thread Richard Huxton
Gurjeet Singh wrote:
 that is, not passing anything for the OUT or INOUT parameters. This works
 fine for a simple SELECT usage, but does not play well when this function is
 to be called from another function, (and assuming that it'd break the
 application code too, which uses Oracle syntax of calling functions)!
 
 I have a simple function f() which I'd like to be ported in such a way that
 it works when called from other plpgsql code, as well as when the
 application uses the Oracle like syntax. Here's a sample usage of the
 function f() in Oracle:

If you really want Oracle-compatible functions I think there's a company
that might sell you a solution :-)

However, failing that you'll want an example of OUT parameters in
PostgreSQL code - see below. The main thing to remember is that the OUT
is really just a shortcut way of defining a record type that gets
returned. It's nothing like passing by reference in insert real
programming language here.


BEGIN;

CREATE OR REPLACE FUNCTION f1(IN a integer, INOUT b integer, OUT c
integer) RETURNS RECORD AS $$
BEGIN
c := a + b;
b := b + 1;
-- No values in RETURN
RETURN;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION f2() RETURNS boolean AS $$
DECLARE
a integer := 1;
b integer := 2;
c integer := -1;
r RECORD;
BEGIN
r := f1(a, b);
-- Original variables unaffected
RAISE NOTICE 'a=%, b=%, c=%', a,b,c;
-- OUT params are here instead
RAISE NOTICE 'r.b=%, r.c=%', r.b, r.c;

-- This works, though notice we treat the function as a row-source
SELECT (f1(a,b)).* INTO b,c;
RAISE NOTICE 'a=%, b=%, c=%', a,b,c;

RETURN true;
END;
$$ LANGUAGE plpgsql;

SELECT f2();

ROLLBACK;

-- 
  Richard Huxton
  Archonet Ltd



Re: [HACKERS] mingw check hung

2009-02-12 Thread Magnus Hagander
On Mon, Feb 02, 2009 at 07:37:46AM -0500, Andrew Dunstan wrote:
 
 
 Magnus Hagander wrote:
 Hmm. Actually, if I look at how things were before, I think we only
 called SetEnvironmentVariable() in case we set a variable, and never if
 we removed one. I'm not sure that's correct behavior, but it's
 apparently non-crashing behavior. Perhaps we need to restore that one?
 
 I'd be in favor of restoring it for both mingw and msvc in that case -
 that way we keep the platforms as close to each other as possible.
 
 Comments?
 
 
   
 
 works for me.

Patch applied for this.

//Magnus



Re: [HACKERS] Optimization rules for semi and anti joins

2009-02-12 Thread Kevin Grittner
 Tom Lane t...@sss.pgh.pa.us wrote: 
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 (A semijoin B on (Pab)) antijoin C on (Pbc)
 = A semijoin (B antijoin C on (Pbc)) on (Pab)
  
 I think this one is true, and it doesn't seem to be mentioned,
 unless I'm missing something.  It seems potentially useful.
 
 Hmm, it doesn't seem terribly well-defined --- the values of B are
 indeterminate above the semijoin in the first case, so having Pbc
 refer to them doesn't seem like a good idea.  In particular, it
 seems like in the first case the semijoin could randomly choose a B
 row that has a join partner in C, causing the A row to disappear
 from the result, when the same A row has another B partner that does
 not join to C --- and the second form would find that B partner and
 allow the A row to be output.
 
I was looking at it from the abstraction that A semijoin B could be
treated as the equivalent of an inner join with duplicate A rows from
the result removed before the final result of the enclosing query.  It
seems you've been interpreting it as meaning the inner join of A to
the first (arbitrarily chosen) row of B found.  It appears that these
two views of it generate the same results for the other identities,
but not this one.
 
The first case here could be implemented as an inner join which
included (in addition to any columns needed for other purposes) a row
identifier for A and all the columns of B which are needed for the Pbc
predicate.  The antijoin could be performed on that result, after
which duplicate A rows would be eliminated, as well as the row
identifier and the B columns.  A simplified (and only slightly
artificial) example of where this could buy orders of magnitude
improvement in run time follows.
 
Imagine that A is a Party table with ten million rows.  Imagine that B
and C are sets of rows within a hundred million row CaseHist table
which records events on cases, some of which are associated with
parties.  B represents warrant issuing events, each of which is
related to a party.  C represents warrant disposition events, each of
which is related to a warrant issuing event.  Both tables are indexed
on case number and a sequence number.  Party has a name index.
 
You've got a name, and you want a list of outstanding warrants for
parties with a matching name.
 
The second case above would be the natural way to write the query. 
The first case, implemented as I describe, would be orders of
magnitude faster.
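 
(A sketch of the second form with made-up table and column names; the point is
that an optimizer able to apply the identity above could evaluate it the first
way instead:)
 
SELECT p.*
  FROM party p
 WHERE p.search_name = 'PUBLIC, JOHN Q'
   AND EXISTS (SELECT 1
                 FROM casehist w          -- warrant-issued events (the B side)
                WHERE w.party_id = p.party_id
                  AND w.event_type = 'WARRANT_ISSUED'
                  AND NOT EXISTS (SELECT 1
                                    FROM casehist d   -- disposition events (the C side)
                                   WHERE d.case_no = w.case_no
                                     AND d.ref_seq_no = w.seq_no
                                     AND d.event_type = 'WARRANT_DISPOSED'));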
 
-Kevin



[HACKERS] Re: [GENERAL] Fwd: Need help in porting Oracle PL/SQL's OUT paramater based procedures

2009-02-12 Thread Gurjeet Singh
On Thu, Feb 12, 2009 at 6:18 PM, Richard Huxton d...@archonet.com wrote:

 Gurjeet Singh wrote:
  that is, not passing anything for the OUT or INOUT parameters. This works
  fine for a simple SELECT usage, but does not play well when this function
 is
  to be called from another function, (and assuming that it'd break the
  application code too, which uses Oracle syntax of calling functions)!
 
  I have a simple function f() which I'd like to be ported in such a way
 that
  it works when called from other plpgsql code, as well as when the
  application uses the Oracle like syntax. Here's a sample usage of the
  function f() in Oracle:

 If you really want Oracle-compatible functions I think there's a company
 that might sell you a solution :-)


:) Spacewalk is not interested you see.




 However, failing that you'll want an example of OUT parameters in
 PostgreSQL code - see below. The main thing to remember is that the OUT
 is really just a shortcut way of defining a record type that gets
 returned. It's nothing like passing by reference in [insert real
 programming language here].


 BEGIN;

 CREATE OR REPLACE FUNCTION f1(IN a integer, INOUT b integer, OUT c
 integer) RETURNS RECORD AS $$
 BEGIN
c := a + b;
b := b + 1;
-- No values in RETURN
RETURN;
 END;
 $$ LANGUAGE plpgsql;

 CREATE OR REPLACE FUNCTION f2() RETURNS boolean AS $$
 DECLARE
a integer := 1;
b integer := 2;
c integer := -1;
r RECORD;
 BEGIN
r := f1(a, b);
-- Original variables unaffected
RAISE NOTICE 'a=%, b=%, c=%', a,b,c;
-- OUT params are here instead
RAISE NOTICE 'r.b=%, r.c=%', r.b, r.c;

-- This works, though notice we treat the function as a row-source
SELECT (f1(a,b)).* INTO b,c;
RAISE NOTICE 'a=%, b=%, c=%', a,b,c;

RETURN true;
 END;
 $$ LANGUAGE plpgsql;

 SELECT f2();

 ROLLBACK;


You see, you did not pass the third (OUT) parameter when calling the
function:

   r := f1(a, b);

This differs from Oracle syntax where you _need_ to pass the third
parameter.

And what if the Oracle function actually returns a value too? How do we
handle that in the application, because we can't declare RECORD vars in
Java/perl/python etc.

Thanks and best regards,
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [HACKERS] Optimization rules for semi and anti joins

2009-02-12 Thread Kevin Grittner
 I wrote: 
 You've got a name, and you want a list of outstanding warrants for
 parties with a matching name.
 
Correction, if that was the list you wanted, you would use an inner
join, not a semijoin.  For purposes of this illustration I guess you
would be looking for a list of parties who have outstanding warrants,
not a list of the warrants themselves.
 
-Kevin



Re: [HACKERS] pg_upgrade project status

2009-02-12 Thread Peter Eisentraut
On Thursday 12 February 2009 05:15:02 Bruce Momjian wrote:
 I am working on pg_migrator TODO items.  My current list is:
[long]

Can we clarify now that this is not going to ship with the PostgreSQL 8.4 
tarball?

 I am not aware of any server changes needed for 8.3-8.4 migration.

OK, Zdenek, any concerns, or can we consider this chapter closed?



Re: [HACKERS] pg_upgrade project status

2009-02-12 Thread Bruce Momjian
Peter Eisentraut wrote:
 On Thursday 12 February 2009 05:15:02 Bruce Momjian wrote:
  I am working on pg_migrator TODO items.  My current list is:
 [long]
 
 Can we clarify now that this is not going to ship with the PostgreSQL 8.4 
 tarball?

That is not really my decision, but I am not going to advocate its
inclusion; unless there is some unusual ground-swell of demand, others
will have to advocate its inclusion, not me.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Peter Eisentraut
I've been examining multi-major-version binary packaging again, and I was 
wondering whether we have a good overview over which pieces of the 
installation are backward compatible (that is, they can be shared between all 
major versions) and which are not.  For example, psql 8.4 can now presumably 
be shared between all major versions, so previous schemes to have several 
psqls installed can be simplified.

Here is a start:

Item            Compatible across major versions?
                (i.e. the 8.4 version works with 7.4+ server)

clusterdb       yes?
createdb        yes?
createlang      yes?
createuser      yes?
dropdb          yes?
droplang        yes?
dropuser        yes?
ecpg            no?
initdb          NO
pg_config       (should be kept around for each major version)
pg_controldata  NO
pg_ctl          ?
pg_dump         read yes, write NO
pg_dumpall      read yes, write NO
pg_resetxlog    NO
pg_restore      yes?
pgtcl_*mod      YES
postgres        NO
psql            YES
reindexdb       yes?
vacuumdb        yes?

If no such list exists yet, perhaps we can complete the above one, document 
it, and pass it on to the packagers.



Re: [HACKERS] WIP: hooking parser

2009-02-12 Thread Peter Eisentraut
On Wednesday 11 February 2009 12:05:03 Pavel Stehule wrote:
 It works. And I think this should solve a lot of special tasks
 related to increasing compatibility with Oracle, Informix, or it could
 be useful for some others (JSON support).

 postgres=# load 'decode';
 LOAD
 postgres=# select
 decode(null::integer,2,'ahoj',3,'Petr',1,'Pavel',null, 'jaja');
  decode
 
  jaja
 (1 row)

I think what you want here is some way to define a function that takes an 
arbitrary number of arguments of arbitrary type and let the function figure 
everything out.  I see no reason why this can't be a variant on CREATE 
FUNCTION, except that of course you need to figure out some API and function 
resolution details.  But it doesn't have to be a completely different concept 
like a binary plugin.
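
(A runnable sketch of how far today's polymorphic/variadic machinery goes; the
function name is made up. A decode()-style function mixing argument types would
still need a C-level variadic "any" function with a fixed declared result type:)

CREATE FUNCTION first_non_null(VARIADIC vals anyarray) RETURNS anyelement AS $$
    SELECT v FROM unnest($1) AS v WHERE v IS NOT NULL LIMIT 1;
$$ LANGUAGE sql;

SELECT first_non_null(NULL::int, 2, 3);   -- 2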



Re: [HACKERS] SE-PostgreSQL and row level security

2009-02-12 Thread KaiGai Kohei

BogDan Vatra wrote:

I've tested your patch on Windows and on Linux and it just works; it's a
killer feature. I have to thank you and all who worked on this.
On Windows I have one little problem: mingw does not have the strtok_r
function and I had to add it myself (see attached file).


Indeed, I could not find strtok_r in any other implementation.
PostgreSQL adopts a multi-process model, so it might not be necessary
to use a thread-safe interface.


A message for postgresql decision board:

   Dear postgresql hackers, if I can do something to push row-level ACLs
for 8.4 please tell me; I'll do anything to have this feature. It will
help me, and I hope many others; this feature will help to develop
client-to-postgres applications without a server application or tons
of triggers and views.


I can understand your pain, and that you want the row-level security stuff
to be merged into the vanilla v8.4. However, I would like you to
understand that we don't have infinite time to review proposed features
for the upcoming v8.4.
Thus, I separated out a few features (including the row-level facility) to
reduce the scale of the patches, and the slimmed-down patches are now under
review.
If we change our strategy *now*, it will break everything. :(

At least, I'll provide row-level facilities (both DAC and MAC) for the
first CommitFest of v8.5 development cycle. It might not be the best
for you, but it is better than nothing in v8.4.

Thanks,


BogDan,


BogDan Vatra wrote:

Hi,
[...]

In my understanding, the row-level ACLs feature (plus a bit
enhancement)

can

help your requirements. I developed it with SE-PostgreSQL in parallel,

but also postponed to v8.5 series.

It enables to assign database ACLs on individual tuples, and filter out

violated tupled from the result set of SELECT, UPDATE and DELETE.

So, it is not very hard. At least, we already have an implementation.
:)

Where is it? I'd like to try it.

The latest full-functional revision (r1467) is here:
   http://code.google.com/p/sepgsql/downloads/list

However, it has a few conflicts with the latest CVS HEAD, so I modified
the patch a bit and added a feature to support the replacement rule in
the default ACL configuration. It is the attached one (r1537).

  If it is working, why is it not included in 8.4?
  IMHO this is a killer feature. I'd like to try this, and if you want I'd
  like to give you more feedback.

We are working within an open source project, so it is impossible to do
everything my own way.

However, I guess it will match with what you want to do.

 Example: drink table is shared by several normal users

postgres=# CREATE TABLE drink (
postgres(# id  serial primary key,
postgres(# nametext,
postgres(# price   int
postgres(# ) with (row_level_acl=on, default_row_acl='{%u=rwx/kaigai}');
NOTICE:  CREATE TABLE will create implicit sequence drink_id_seq for
serial column drink.id
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
drink_pkey for table drink
CREATE TABLE
postgres=# GRANT SELECT, UPDATE, INSERT, DELETE ON drink TO public;
GRANT
postgres=# GRANT USAGE ON drink_id_seq TO public;
GRANT
postgres=# INSERT INTO drink (name, price) VALUES ('coke', 120);
INSERT 0 1
postgres=# SELECT security_acl, * FROM drink;
     security_acl     | id | name | price
----------------------+----+------+-------
 {kaigai=rwx/kaigai}  |  1 | coke |   120
(1 row)

-- NOTE: %u in the default_row_acl is replaced by 'kaigai'

postgres=# \q
[kai...@saba ~]$ psql postgres -Uymj
psql (8.4devel)
Type help for help.

postgres= SELECT security_acl, * FROM drink;
 security_acl | id | name | price
--------------+----+------+-------
(0 rows)

-- NOTE: violated tuples are filtered out from the viewpoint of 'ymj'.

postgres= INSERT INTO drink (name, price) VALUES ('juice', 140), ('beer',
240);
INSERT 0 2
postgres= SELECT security_acl, * FROM drink;
   security_acl   | id | name  | price
------------------+----+-------+-------
 {ymj=rwx/kaigai} |  2 | juice |   140
 {ymj=rwx/kaigai} |  3 | beer  |   240
(2 rows)

postgres= \q
[kai...@saba ~]$ psql postgres -Utak
psql (8.4devel)
Type help for help.

postgres= SELECT security_acl, * FROM drink;
 security_acl | id | name | price
--------------+----+------+-------
(0 rows)

postgres= INSERT INTO drink (name, price) VALUES ('tea', 120), ('water',
100);
INSERT 0 2
postgres= SELECT security_acl, * FROM drink;
   security_acl   | id | name  | price
------------------+----+-------+-------
 {tak=rwx/kaigai} |  4 | tea   |   120
 {tak=rwx/kaigai} |  5 | water |   100
(2 rows)

-- NOTE: A normal user 'tak' cannot see tuples by others.

postgres= UPDATE drink SET price = price * 1.2;
UPDATE 2
postgres= SELECT security_acl, * FROM drink;
   security_acl   | id | name  | price
------------------+----+-------+-------
 {tak=rwx/kaigai} |  4 | tea   |   144
 {tak=rwx/kaigai} |  5 | water |   120
(2 rows)

-- NOTE: Only his tuples are affected.

postgres= UPDATE drink SET security_acl = '{=rwx/tak}';
ERROR:  Only owner or superuser can set 

Re: [HACKERS] WIP: hooking parser

2009-02-12 Thread Pavel Stehule
2009/2/12 Peter Eisentraut pete...@gmx.net:
 On Wednesday 11 February 2009 12:05:03 Pavel Stehule wrote:
  It works. And I think this should solve a lot of special tasks
  related to increasing compatibility with Oracle, Informix, or it could
  be useful for some others (JSON support).

 postgres=# load 'decode';
 LOAD
 postgres=# select
 decode(null::integer,2,'ahoj',3,'Petr',1,'Pavel',null, 'jaja');
  decode
 
  jaja
 (1 row)

 I think what you want here is some way to define a function that takes an
 arbitrary number of arguments of arbitrary type and let the function figure
 everything out.  I see no reason why this can't be a variant on CREATE
 FUNCTION, except that of course you need to figure out some API and function
 resolution details.  But it doesn't have to be a completely different concept
 like a binary plugin.


Actually I need to add some metadata to the parameter list. The question is
what is simpler and more readable - a descriptive or a procedural solution -
and what we are able to implement.

example DECODE(any1, any2, (asany1, asany2).,(asany2)+)

Actually I think that with some hook at the parser transform stage we could
do this task more simply.

I found another example that could be solved via a hook - emulation of
the Oracle behavior where '' is null.

Regards
Pavel



Re: [HACKERS] [patch] fix for regression tests (locale cs_CZ)

2009-02-12 Thread Peter Eisentraut
On Monday 02 February 2009 21:42:36 Zdenek Kotala wrote:
 I attached fix for regression tests and Czech locale. It is not complete
 yet, because I fighting with foreign_data test. But it fix three other
 tests.

done



Re: [HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Michael Meskes
On Thu, Feb 12, 2009 at 04:16:05PM +0200, Peter Eisentraut wrote:
 Item  Compatible across major versions?
   (i.e. the 8.4 version works with 7.4+ server)
 ...
 ecpg  no?

It depends on which kind of compatibility you're looking for. The grammar
accepted will surely be different. But running a new application with a new
ecpglib against an old server works as long as libpq works. 

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: mes...@jabber.org
Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Andrew Dunstan



Peter Eisentraut wrote:

pg_restore  yes?

  


I don't know how far back pg_restore works, but the reason I have not 
produced a backport of parallel restore is that the HEAD version works 
with dumps and servers at least as far back as 8.2 (and I was careful to 
make sure it supported older dumps).


cheers

andrew



Re: [HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Euler Taveira de Oliveira
Peter Eisentraut escreveu:
 If no such list exists yet, perhaps we can complete the above one, document 
 it, and pass it on to the packagers.
 
Are you suggesting that if a user has 7.4 and installs 8.3, then the package
will replace psql-7.4 with psql-8.3? It will confuse users more than help them
(psql --version?), and we will have the extra effort of maintaining the
compatibility version list for each binary.

IMHO, we shouldn't advise packagers to do it and instead put some efforts in
the in-place-upgrade project.


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/



Re: [HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Alvaro Herrera
Euler Taveira de Oliveira wrote:
 Peter Eisentraut escreveu:
  If no such list exists yet, perhaps we can complete the above one, document 
  it, and pass it on to the packagers.
  
 Are you suggesting that if a user has 7.4 and installs 8.3, then the package
 will replace psql-7.4 with psql-8.3? It will confuse users more than help them
 (psql --version?), and we will have the extra effort of maintaining the
 compatibility version list for each binary.

psql is already compatible with old versions.  I don't think we should
throw away that work by not maintaining it.



Re: [HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Euler Taveira de Oliveira
Alvaro Herrera escreveu:
 Euler Taveira de Oliveira wrote:
 Peter Eisentraut escreveu:
 If no such list exists yet, perhaps we can complete the above one, document 
 it, and pass it on to the packagers.

 Are you suggesting that if a user has 7.4 and installs 8.3, then the package
 will replace psql-7.4 with psql-8.3? It will confuse users more than help them
 (psql --version?), and we will have the extra effort of maintaining the
 compatibility version list for each binary.
 
 psql is already compatible with old versions.  I don't think we should
 throw away that work by not maintaining it.
 
I think we shouldn't either. But I think newbie users will be confused when they
don't find the exact binary version on their machine.


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/



[HACKERS] pg_restore --multi-thread

2009-02-12 Thread Peter Eisentraut
I know we've already had a discussion on the naming of the pg_restore -m 
option, but in any case this description in pg_restore --help is confusing:

-m, --multi-thread=NUM   use this many parallel connections to restore

Either it is using that many threads in the client, or it is using that many 
connections to the server.  I assume the implementation does approximately 
both, but we should be clear about what we promise to the user.  Either: 
Reserve this many connections on the server.  Or: Reserve this many threads 
in the kernel of the client.  The documentation in the reference/man page is 
equally confused.

Also, the term "multi" is redundant, because whether it is multi or single is
obviously determined by the value of NUM.



Re: [HACKERS] some questions about SELECT FOR UPDATE/SHARE

2009-02-12 Thread Tom Lane
Tao Ma feng_e...@163.com writes:
 My question is: Is it possible to remove the ONLY from the RI checks after 
 Tom Lane made SELECT FOR UPDATE/SHARE work on inheritance trees?

No.  The main thing standing in the way of RI on inheritance trees is
the lack of cross-tree uniqueness constraints.
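
(For illustration, a minimal sketch of the gap, with made-up table names:)

CREATE TABLE pktab (id int PRIMARY KEY);
CREATE TABLE pktab_child () INHERITS (pktab);

INSERT INTO pktab       VALUES (1);
INSERT INTO pktab_child VALUES (1);   -- accepted: the unique index covers only pktab

SELECT id FROM pktab;                 -- the inheritance tree now yields 1 twice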

regards, tom lane



[HACKERS] Writing and Reading bytea

2009-02-12 Thread Amit Gupta
We need to insert rows into a catalog table that would store partition info:

CATALOG(pg_partition,2336) BKI_WITHOUT_OIDS
{
    Oid     partrelid;      /* partition table Oid */
    Oid     parentrelid;    /* Parent table Oid */
    int2    parttype;       /* Type of partition, list, hash, range */
    int2    partkey;        /* partition key */
    Oid     keytype;        /* type of partition key */
    int2    keyorder;       /* order of the key in multi-key partitions */
    bytea   minval;
    bytea   maxval;         /* min and max for range partition */
    bytea   listval;
    int2    hashval;        /* hash value */
} FormData_pg_partition;



The following code is used to write bytea:
...
min_ba = (bytea *) palloc(len+VARHDRSZ);
memcpy(VARDATA(min_ba), b_min, len);
SET_VARSIZE(min_ba, len+VARHDRSZ);
values[Anum_pg_partition_minval-1]= (Datum)min_ba ;
...
Relation r = heap_open(PartitionRelationId, RowExclusiveLock);
   TupleDesc tupDesc = r->rd_att;
HeapTuple tup = heap_form_tuple(tupDesc, values, nulls);
simple_heap_insert(r, tup);
CatalogUpdateIndexes(r, tup);
heap_close(r, RowExclusiveLock);

We can see the correct bytes in the pg_partition table after exectuing
the above code. However, retrieving the bytea datatypes seems
problematic.
The following code didn't work:

pg_partrel = heap_open(PartitionRelationId, AccessShareLock);
pg_partscan = systable_beginscan(pg_partrel, PartitionParentIndexId, true,
SnapshotNow, 1, skey);
while (HeapTupleIsValid(pg_parttup = systable_getnext(pg_partscan)))
{
Form_pg_partition pg_part = (Form_pg_partition) GETSTRUCT(pg_parttup);
Datum attr = heap_getattr(tuple, pg_part->partkey, rel->rd_att, &isnull);
Datum min_part_attr = (Datum) (pg_part->minval);
Datum max_part_attr = (Datum) (pg_part->maxval);
  ..

}

max_part_attr is not pointing to the right memory location.  After doing some
investigation, we found that since minval extends to 10 bytes (instead of the 5
bytes occupied by struct varlena), the max_part_attr value is not correct.
We also tried doing a hack:
max_part_attr = (Datum)
(((void *) (pg_part->minval)) + VARSIZE_ANY(pg_part->minval));

but still we are facing problems.

Any pointers in this respect will be helpful.

Thanks,
Amit
Persistent Systems



Re: [HACKERS] Writing and Reading bytea

2009-02-12 Thread Heikki Linnakangas

Amit Gupta wrote:

The following code didn't work:

pg_partrel = heap_open(PartitionRelationId, AccessShareLock);
pg_partscan = systable_beginscan(pg_partrel, PartitionParentIndexId, true,
SnapshotNow, 1, skey);
while (HeapTupleIsValid(pg_parttup= systable_getnext(pg_partscan)))
{
Form_pg_partition pg_part = (Form_pg_partition) GETSTRUCT(pg_parttup);
Datum attr = heap_getattr(tuple, pg_part->partkey, rel->rd_att, &isnull);
Datum min_part_attr = (Datum) (pg_part->minval);
Datum max_part_attr = (Datum) (pg_part->maxval);
  ..

}


You need to use heap_getattr to access columns after the first variable 
length column.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Writing and Reading bytea

2009-02-12 Thread Heikki Linnakangas

Amit Gupta wrote:

We need insert rows to a catalog table that would store partitions info:

CATALOG(pg_partition,2336) BKI_WITHOUT_OIDS
{
    Oid     partrelid;      /* partition table Oid */
    Oid     parentrelid;    /* Parent table Oid */
    int2    parttype;       /* Type of partition, list, hash, range */
    int2    partkey;        /* partition key */
    Oid     keytype;        /* type of partition key */
    int2    keyorder;       /* order of the key in multi-key partitions */
    bytea   minval;
    bytea   maxval;         /* min and max for range partition */
    bytea   listval;
    int2    hashval;        /* hash value */
} FormData_pg_partition;


I realize you're still in an early phase of hacking, but let me just point
out that bytea is hardly the right data type for the min/max values, unless
the partitioning key is actually a bytea column. I can't suggest a
better alternative off the top of my head. We have hacked around that
problem in pg_statistic's stavalues columns, but it really is a hack.
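
(One way to see the mismatch, as a sketch: comparisons on a bytea image follow
byte order, not the key type's own operators:)

SELECT 9 < 10                   AS int_order,    -- true
       '9'::bytea < '10'::bytea AS bytea_order;  -- false: byte order, not numeric order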


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Andrew Dunstan



Peter Eisentraut wrote:
I know we've already had a discussion on the naming of the pg_restore -m 
option, but in any case this description in pg_restore --help is confusing:


-m, --multi-thread=NUM   use this many parallel connections to restore

Either it is using that many threads in the client, or it is using that many 
connections to the server.  I assume the implementation does approximately 
both, but we should be clear about what we promise to the user.  Either: 
Reserve this many connections on the server.  Or: Reserve this many threads 
in the kernel of the client.  The documentation in the reference/man page is 
equally confused.


Also, the term multi is redundant, because whether it is multi or single is 
obviously determined by the value of NUM.


  



The implementation is actually different across platforms: on Windows 
the workers are genuine threads, while elsewhere they are forked 
children in the same fashion as the backend (non-EXEC_BACKEND case). In 
either case, the program will use up to NUM concurrent connections to 
the server.


I'm not sure what you mean about reserving threads in the client kernel.

I also don't really understand what is confusing about the description.

cheers

andrew



Re: [HACKERS] WIP: hooking parser

2009-02-12 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 I think what you want here is some way to define a function that takes an 
 arbitrary number of arguments of arbitrary type and let the function figure 
 everything out.  I see no reason why this can't be a variant on CREATE 
 FUNCTION, except that of course you need to figure out some API and function 
 resolution details.

We've already got variadic any functions --- the problem is to tell
the parser what the function's result type will be, given a particular
parameter list.  I agree that hooking transformExpr is not exactly the
most ideal way to attack that from a performance or complexity
standpoint.

regards, tom lane



[HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Bruce Momjian
bruce wrote:
 Peter Eisentraut wrote:
  Bruce Momjian wrote:
   Now that pg_migrator is BSD licensed, and already in C, I am going to
   spend my time trying to improve pg_migrator for 8.4:
   
 http://pgfoundry.org/projects/pg-migrator/
  
  What is the plan now?  Get pg_upgrade working, get pg_migrator working, 
  ship pg_migrator in core or separately?  Is there any essential 
  functionality that we need to get into the server code before release? 
  Should we try to get dropped columns working?  It's quite late to be 

I have thought about how to handle dropped columns and would like to get
some feedback on this.

It is easy to find the dropped columns with 'pg_attribute.attisdropped 
= true'.

The basic problem is that dropped columns do not appear in the pg_dump
output schema, but still exist in the data files.  While the missing
data is not a problem, the dropped column's existence affects all
subsequent columns, increasing their attno values and their placement in
the data files.

I can think of three possible solutions, all involve recreating and
dropping the dropped column in the new schema:

1  modify the pg_dumpall --schema-only output file before
   loading to add the dropped column
2  drop/recreate the table after loading to add the dropped
   column
3  modify the system tables directly to add the dropped column,
   perhaps using pg_depend information

#1 seems like the best option, though it requires parsing the pg_dump
file to some extent.  #2 is a problem because dropping/recreating the
table might be difficult because of foreign key relationships, even for
empty tables. #3 seems prone to maintenance requirements every time we
change system object relationships.

Once the dropped column is created in the new server, it can be dropped
to match the incoming data files.
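
(In SQL terms, per table, the idea is roughly the following; the table and
placeholder column names are made up:)

-- old cluster: column "b" was dropped, so attnums are 1, 2 (dropped), 3
CREATE TABLE t (a int, b int, c text);
ALTER TABLE t DROP COLUMN b;

-- new cluster: recreate a placeholder in the same position, then drop it,
-- so column "c" ends up at attnum 3 again and matches the old data files
CREATE TABLE t (a int, dropped_b int, c text);
ALTER TABLE t DROP COLUMN dropped_b;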

Comments?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 The implementation is actually different across platforms: on Windows 
 the workers are genuine threads, while elsewhere they are forked 
 children in the same fashion as the backend (non-EXEC_BACKEND case). In 
 either case, the program will use up to NUM concurrent connections to 
 the server.

How about calling it --num-connections or something like that?  I agree
with Peter that thread is not the best terminology on platforms where
there is no threading involved.

regards, tom lane



Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Joshua D. Drake
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
 Andrew Dunstan and...@dunslane.net writes:
  The implementation is actually different across platforms: on Windows 
  the workers are genuine threads, while elsewhere they are forked 
  children in the same fashion as the backend (non-EXEC_BACKEND case). In 
  either case, the program will use up to NUM concurrent connections to 
  the server.
 
 How about calling it --num-connections or something like that?  I agree
 with Peter that thread is not the best terminology on platforms where
 there is no threading involved.

--num-workers or --num-connections would both work.

Joshua D. Drake

 
   regards, tom lane
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




Re: [HACKERS] some questions about SELECT FOR UPDATE/SHARE

2009-02-12 Thread Bruce Momjian
Tao Ma wrote:
 Hi,
 I noticed Tom Lane made SELECT FOR UPDATE/SHARE work on inheritance trees,
 but the TODO list has not been updated to reflect this change.
 http://wiki.postgresql.org/wiki/Todo#Inheritance

OK, TODO updated to mark this as done.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Andrew Dunstan



Joshua D. Drake wrote:

On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
  

Andrew Dunstan and...@dunslane.net writes:

The implementation is actually different across platforms: on Windows 
the workers are genuine threads, while elsewhere they are forked 
children in the same fashion as the backend (non-EXEC_BACKEND case). In 
either case, the program will use up to NUM concurrent connections to 
the server.
  

How about calling it --num-connections or something like that?  I agree
with Peter that thread is not the best terminology on platforms where
there is no threading involved.



--num-workers or --num-connections would both work.

  


*shrug* whatever. What should the short option be (if any?). -n is 
taken, so -N ?


cheers

andrew



Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Joshua D. Drake
On Thu, 2009-02-12 at 11:47 -0500, Andrew Dunstan wrote:
 
 Joshua D. Drake wrote:
  On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:

  Andrew Dunstan and...@dunslane.net writes:
  
  The implementation is actually different across platforms: on Windows 
  the workers are genuine threads, while elsewhere they are forked 
  children in the same fashion as the backend (non-EXEC_BACKEND case). In 
  either case, the program will use up to NUM concurrent connections to 
  the server.

  How about calling it --num-connections or something like that?  I agree
  with Peter that thread is not the best terminology on platforms where
  there is no threading involved.
  
 
  --num-workers or --num-connections would both work.
 

 
 *shrug* whatever. What should the short option be (if any?). -n is 
 taken, so -N ?

Works for me.

 
 cheers
 
 andrew
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




[HACKERS] 64 bit PostgreSQL 8.3.6 build on AIX 5300 with GCC 4.2.0 - fail on pg_regress

2009-02-12 Thread Dickson S. Guedes
Hi all,

I'm compiling PostgreSQL 8.3.6 on AIX 5.3.0.0 with GCC 4.2.0 and it is working.

But, after the message "All of PostgreSQL successfully made. Ready to
install.", when it starts the regression tests, something unexpected (?) occurs.

First, I ran only a *make*, but it seems to be doing *make install* instead of just make.

Second, there is a pg_regress execution that fails trying to connect
to a pg instance that is not there; should (must?) it be there?

The steps I'm following, with some cuts, is:

---
postg...@db01 $ uid=204(postgres) gid=1(staff)

postg...@db01 $ export CC=gcc
postg...@db01 $ export CC=gcc
postg...@db01 $ export CFLAGS="-maix64"
postg...@db01 $ export LDFLAGS="-maix64 -Wl,-bbigtoc"
postg...@db01 $ export OBJECT_MODE=64
postg...@db01 $ export AR="ar -X64"
postg...@db01 $ ./configure --enable-integer-datetimes
--prefix=/srv/postgresql/8.3.6

postg...@db01 $ make
...
...
gmake[1]: Leaving directory `/srv/postgresql/work/postgresql-8.3.6/config'
All of PostgreSQL successfully made. Ready to install.
...
...
gmake[3]: Leaving directory `/srv/postgresql/work/postgresql-8.3.6/contrib/spi'
rm -rf ./testtablespace
mkdir ./testtablespace
./pg_regress --temp-install=./tmp_check --top-builddir=../../..
--srcdir=/srv/postgresql/work/postgresql-8.3.6/src/test/regress
--temp-port=55432 --schedule=./parallel_schedule --multibyte=SQL_ASCII
--load-language=plpgsql
== creating temporary installation==
== initializing database system   ==
== starting postmaster==
running on port 55432 with pid 348354
== creating database regression ==
CREATE DATABASE
ALTER DATABASE
== installing plpgsql ==
CREATE LANGUAGE
== running regression test queries==
...
...
...
== shutting down postmaster   ==
server stopped

===
 All 114 tests passed.
===
...
...
...
gmake[3]: Leaving directory `/srv/postgresql/work/postgresql-8.3.6/contrib/spi'
rm -rf ./testtablespace
mkdir ./testtablespace
./pg_regress --psqldir=/srv/postgresql/8.3.6/bin
--schedule=./serial_schedule
--srcdir=/srv/postgresql/work/postgresql-8.3.6/src/test/regress
--multibyte=SQL_ASCII --load-language=plpgsql
(using postmaster on Unix socket, default port)
== dropping database regression ==
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket /tmp/.s.PGSQL.5432?
command failed: /srv/postgresql/8.3.6/bin/psql -X -c DROP DATABASE
IF EXISTS \regression\ postgres
gmake[2]: *** [installcheck] Error 2
gmake[2]: Leaving directory
`/srv/postgresql/work/postgresql-8.3.6/src/test/regress'
gmake[1]: *** [installcheck] Error 2
gmake[1]: Leaving directory `/srv/postgresql/work/postgresql-8.3.6/src/test'
gmake: *** [installcheck] Error 2
make: 1254-004 The error code from the last command is 2.
Stop.
---

Am I doing anything wrong?

Thanks!
-- 
Dickson S. Guedes
-
mail/xmpp: gue...@guedesoft.net  -  skype: guediz
http://guedesoft.net - http://planeta.postgresql.org.br



Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 I can think of three possible solutions, all involve recreating and
 dropping the dropped column in the new schema:

(4) add a switch to pg_dump to include dropped columns in its
schema output and then drop them.  This seems far more maintainable
than writing separate code that tries to parse the output.

regards, tom lane



Re: [HACKERS] Which installation parts are backward compatible?

2009-02-12 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 I've been examining multi-major-version binary packaging again, and I was 
 wondering whether we have a good overview over which pieces of the 
 installation are backward compatible (that is, they can be shared between all
 major versions) and which are not.  For example, psql 8.4 can now presumably 
 be shared between all major versions, so previous schemes to have several 
 psqls installed can be simpified.

ISTM that having psql alone be cross-version-compatible will be just
about completely uninteresting to packagers.  If we could make *all*
the user-facing executables be cross-version, then we'd be getting
somewhere; it would be possible to install them all in /usr/bin and
just have a version-specific subdirectory under /usr/libexec or
someplace for the rest of the stuff, which users wouldn't need to
have in their PATH anyway.

Looking at your list, it seems the only part of that that might not
be within reach is that pg_dump output from version N typically
doesn't load into server versions < N.  pg_dump is complicated enough
without trying to make it handle that too :-(.

The other parts to worry about are libraries (but existing shlib
versioning schemes may be enough for that) and include files.
Not sure what to do with include files.

regards, tom lane



Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Tom Lane
Matteo Beccati p...@beccati.com writes:
 Seems like we could/should fix UNLISTEN * to not do anything if it is
 known that the current backend never did any LISTENs.

 Here's my proposed patch, both for HEAD and 8.3:

I'll take a look.

regards, tom lane



Re: [HACKERS] GIN fast insert database hang

2009-02-12 Thread Teodor Sigaev

This freezes the whole system even with autovacuum = off in
postgresql.conf.  As before, the backends wait on a semop() call.
Fixed. There was a deadlock between LockBufferForCleanup and LockBuffer(SHARE).
I redesigned that place to downgrade LockBufferForCleanup to LockBuffer(EXCLUSIVE),
with a correction of the page locking during the scan of the pending list.




I was able to reproduce the recovery failure this way once as well,
but that part of the problem seems to be much more erratic.  Most of

Fixed too. I missed comments on XLogInsert:
 * NB: this routine feels free to scribble on the XLogRecData structs,
 * though not on the data they reference.  This is OK since the XLogRecData
 * structs are always just temporaries in the calling code.

and I reused a once-initialized XLogRecData many times in a loop.
--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/


fast_insert_gin-0.27.gz
Description: Unix tar archive

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  I can think of three possible solutions, all involve recreating and
  dropping the dropped column in the new schema:
 
 (4) add a switch to pg_dump to include dropped columns in its
 schema output and then drop them.  This seems far more maintainable
 than writing separate code that tries to parse the output.

That would certainly be the easiest.  I was going to have trouble
generating the exact column creation string anyway in pg_migrator.

I assume I would also drop the column in the pg_dump output.  
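
For illustration, a rough sketch (table and placeholder names are hypothetical)
of the SQL such an upgrade-oriented dump might emit, so that attribute numbers
in the new cluster line up with the old heap files:

CREATE TABLE t (
    a integer,
    dropped_col_2 integer,   -- stand-in for the column dropped in the old cluster
    b text
);
ALTER TABLE t DROP COLUMN dropped_col_2;  -- leaves attnum 2 marked dropped, matching the old table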

Is this acceptable to everyone?  We could name the option
-u/--upgrade-compatible.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN fast insert

2009-02-12 Thread Teodor Sigaev

But the *real* problem is that you simply can not guarantee that
someone doesn't increase the size of the pending list between the time


If the insertion process has a bigger work_mem — agreed.


What did you think of the idea of simply abandoning support for
conventional indexscans in GIN?  I agree that we could probably kluge
something to make conventional scans still work reliably, but it seems
to me that it'd be ugly, fragile, and quite possibly slow enough to not
ever beat bitmap scans anyway.


I don't like this idea because it forbids conventional indexscans even with 
fastupdate=off.


May a read-only query change the index? The index doesn't use xmin/xmax/cmin/cmax 
anyhow, so it doesn't depend on transaction state. If so, gingettuple could clean up 
the pending list when it gets a lossy bitmap and then repeat the search. Although that 
could be slow, it will never produce a failure and it will happen only very rarely 
(and GIN could emit a WARNING/NOTICE/LOG message). This solution also allows removing 
the disabling of the indexscan in gincostestimate.


--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Tom Lane
Matteo Beccati p...@beccati.com writes:
 Seems like we could/should fix UNLISTEN * to not do anything if it is
 known that the current backend never did any LISTENs.

 Here's my proposed patch, both for HEAD and 8.3:

This seems a bit overcomplicated.  I had in mind something like this...

Index: src/backend/commands/async.c
===
RCS file: /cvsroot/pgsql/src/backend/commands/async.c,v
retrieving revision 1.145
diff -c -r1.145 async.c
*** src/backend/commands/async.c1 Jan 2009 17:23:37 -   1.145
--- src/backend/commands/async.c12 Feb 2009 18:28:43 -
***
*** 277,282 
--- 277,286 
if (Trace_notify)
elog(DEBUG1, Async_Unlisten(%s,%d), relname, MyProcPid);
  
+   /* If we couldn't possibly be listening, no need to queue anything */
+   if (pendingActions == NIL  !unlistenExitRegistered)
+   return;
+ 
queue_listen(LISTEN_UNLISTEN, relname);
  }
  
***
*** 291,296 
--- 295,304 
if (Trace_notify)
elog(DEBUG1, Async_UnlistenAll(%d), MyProcPid);
  
+   /* If we couldn't possibly be listening, no need to queue anything */
+   if (pendingActions == NIL  !unlistenExitRegistered)
+   return;
+ 
queue_listen(LISTEN_UNLISTEN_ALL, );
  }
  



regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Tom Lane wrote:
 (4) add a switch to pg_dump to include dropped columns in its
 schema output and then drop them.  This seems far more maintainable
 than writing separate code that tries to parse the output.

 I assume I would also drop the column in the pg_dump output.  

Right, that's what I meant --- do all the work within pg_dump.

 Is this acceptable to everyone?  We could name the option
 -u/--upgrade-compatible.

If the switch is specifically for pg_upgrade support (enabling this as
well as any other hacks we find necessary), which seems like a good
idea, then don't chew up a short option letter for it.  There should be
a long form only.  And probably not even list it in the user
documentation.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Joshua D. Drake
On Thu, 2009-02-12 at 13:39 -0500, Tom Lane wrote:

 Right, that's what I meant --- do all the work within pg_dump.
 
  Is this acceptable to everyone?  We could name the option
  -u/--upgrade-compatible.
 
 If the switch is specifically for pg_upgrade support (enabling this as
 well as any other hacks we find necessary), which seems like a good
 idea, then don't chew up a short option letter for it.  There should be
 a long form only.  And probably not even list it in the user
 documentation.

Why wouldn't we want to list it?

Joshua D. Drake

 
   regards, tom lane
 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN fast insert

2009-02-12 Thread Tom Lane
Teodor Sigaev teo...@sigaev.ru writes:
 What did you think of the idea of simply abandoning support for
 conventional indexscans in GIN?

 I don't like this idea because it forbids conventional indexscans even with 
 fastupdate=off.

So?  Barring some evidence that there's a significant performance win
from a conventional indexscan, this is a weak argument.  AFAICS the only
significant advantage of the conventional API is to support ordered
scans, and GIN doesn't do that anyway.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Tom Lane wrote:
  (4) add a switch to pg_dump to include dropped columns in its
  schema output and then drop them.  This seems far more maintainable
  than writing separate code that tries to parse the output.
 
  I assume I would also drop the column in the pg_dump output.  
 
 Right, that's what I meant --- do all the work within pg_dump.
 
  Is this acceptable to everyone?  We could name the option
  -u/--upgrade-compatible.
 
 If the switch is specifically for pg_upgrade support (enabling this as
 well as any other hacks we find necessary), which seems like a good
 idea, then don't chew up a short option letter for it.  There should be
 a long form only.  And probably not even list it in the user
 documentation.

OK, works for me;  any objections from anyone?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Bruce Momjian
Joshua D. Drake wrote:
 On Thu, 2009-02-12 at 13:39 -0500, Tom Lane wrote:
 
  Right, that's what I meant --- do all the work within pg_dump.
  
   Is this acceptable to everyone?  We could name the option
   -u/--upgrade-compatible.
  
  If the switch is specifically for pg_upgrade support (enabling this as
  well as any other hacks we find necessary), which seems like a good
  idea, then don't chew up a short option letter for it.  There should be
  a long form only.  And probably not even list it in the user
  documentation.
 
 Why wouldn't we want to list it?

Because it is for internal use by upgrade utilities, not for user use.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Tom Lane
Joshua D. Drake j...@commandprompt.com writes:
 On Thu, 2009-02-12 at 13:39 -0500, Tom Lane wrote:
 a long form only.  And probably not even list it in the user
 documentation.

 Why wouldn't we want to list it?

Because it's for internal use only.  Although the effect we're
discussing here is relatively harmless, it seems possible that
further down the road we might find a need for hacks that would
render the output entirely unfit for ordinary dump purposes.
I don't see a need to encourage people to play with fire.

It's hardly unprecedented for us to have undocumented internal
options --- there are some in postgres.c for example.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN fast insert

2009-02-12 Thread Robert Haas
On Thu, Feb 12, 2009 at 1:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Teodor Sigaev teo...@sigaev.ru writes:
 What did you think of the idea of simply abandoning support for
 conventional indexscans in GIN?

 I don't like this idea because it forbids conventional indexscans even with
 fastupdate=off.

 So?  Barring some evidence that there's a significant performance win
 from a conventional indexscan, this is a weak argument.  AFAICS the only
 significant advantage of the conventional API is to support ordered
 scans, and GIN doesn't do that anyway.

Wouldn't it force you to recheck all tuples on the page, instead of
just rechecking the one of interest?

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Cédric Villemain
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Joshua D. Drake wrote:
 On Thu, 2009-02-12 at 11:47 -0500, Andrew Dunstan wrote:
 Joshua D. Drake wrote:
 On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
   
 Andrew Dunstan and...@dunslane.net writes:
 
 The implementation is actually different across platforms: on Windows 
 the workers are genuine threads, while elsewhere they are forked 
 children in the same fashion as the backend (non-EXEC_BACKEND case). In 
 either case, the program will use up to NUM concurrent connections to 
 the server.
   
 How about calling it --num-connections or something like that?  I agree
 with Peter that thread is not the best terminology on platforms where
 there is no threading involved.
 
 --num-workers or --num-connections would both work.

   
 *shrug* whatever. What should the short option be (if any?). -n is 
 taken, so -N ?
 
 Works for me.

Is -j already taken?


 
 cheers

 andrew



- --
Cédric Villemain
Database Administrator
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkmUcvUACgkQo/dppWjpEvzT5gCg44yo8CbfT3AAevzbPXphqu3K
oeUAnAy6/epLlwe7DWWneIB8XVeDIu/+
=Q8iq
-END PGP SIGNATURE-

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Jonah H. Harris
On Thu, Feb 12, 2009 at 11:37 AM, Joshua D. Drake j...@commandprompt.comwrote:

 --num-workers or --num-connections would both work.


--num-parallel?

-- 
Jonah H. Harris, Senior DBA
myYearbook.com


Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Matteo Beccati

Tom Lane wrote:

Matteo Beccati p...@beccati.com writes:

Seems like we could/should fix UNLISTEN * to not do anything if it is
known that the current backend never did any LISTENs.



Here's my proposed patch, both for HEAD and 8.3:


This seems a bit overcomplicated.  I had in mind something like this...


Much easier indeed... I didn't notice the unlistenExitRegistered variable.


Cheers

--
Matteo Beccati

OpenX - http://www.openx.org

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN fast insert

2009-02-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Feb 12, 2009 at 1:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 So?  Barring some evidence that there's a significant performance win
 from a conventional indexscan, this is a weak argument.  AFAICS the only
 significant advantage of the conventional API is to support ordered
 scans, and GIN doesn't do that anyway.

 Wouldn't it force you to recheck all tuples on the page, instead of
 just rechecking the one of interest?

In the scenario at hand you'd have to do that anyway.

Bear in mind that if the query is predicted to return more than a few
rows, the planner is always going to pick bitmap scan anyway.  So this
whole issue is really only going to arise when you have a very bad
rowcount prediction (or a very stale plan), leading to a choice of
indexscan plan followed by enough rows actually returned to make the TID
bitmap become lossy.  That's certainly within the realm of possibility,
particularly since we often don't have good estimators for
GIN-compatible operators.  But I think designing to squeeze every last
bit of performance out of the case is a mistake.  We should be satisfied
to have correctness.
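
(As a hedged illustration with a made-up table: for a GIN-indexable predicate
expected to match more than a handful of rows, EXPLAIN will normally show a
bitmap scan rather than a plain indexscan.)

CREATE INDEX docs_body_gin ON docs USING gin (body_tsv);  -- hypothetical tsvector column
EXPLAIN SELECT * FROM docs WHERE body_tsv @@ to_tsquery('english', 'postgres');
-- Typically produces a Bitmap Heap Scan driven by a Bitmap Index Scan on docs_body_gin.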

In the end this is a tradeoff: how much complexity and loss of
maintainability are we willing to accept to squeeze out a bit more
performance?  I'm leaning to the KISS end of that choice.  The tests
I did yesterday suggested to me that it would be difficult even to
measure a performance gain from supporting conventional indexscan in
GIN.  IMHO the kinds of schemes that are being tossed around here are
not remotely sane to choose if they don't lead to *big* wins.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread Michael Glaesemann


On 2009-02-12, at 14:15 , Jonah H. Harris wrote:

On Thu, Feb 12, 2009 at 11:37 AM, Joshua D. Drake j...@commandprompt.com 
wrote:



--num-workers or --num-connections would both work.



--num-parallel?


--num-concurrent?

Michael Glaesemann
michael.glaesem...@myyearbook.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Tom Lane
Matteo Beccati p...@beccati.com writes:
 Tom Lane wrote:
 This seems a bit overcomplicated.  I had in mind something like this...

 Much easier indeed... I didn't notice the unlistenExitRegistered variable.

Just for completeness, I attach another form of the patch that I thought
about for a bit.  This adds the ability for UNLISTEN ALL to revert the
backend to the state where subsequent UNLISTENs don't cost anything.
This could be of value in a scenario where you have pooled connections
and just a small fraction of the client threads are using LISTEN.  That
seemed like kind of an unlikely use-case though.  The problem is that
this patch adds some cycles to transaction commit/abort for everyone,
whether they ever use LISTEN/UNLISTEN/DISCARD or not.  It's not a lot of
cycles, but even so I'm thinking it's not a win overall.  Comments?
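
To make the pooling scenario concrete (the channel name is invented), the
sequence in question looks roughly like:

LISTEN app_events;   -- one pooled client thread starts listening
-- ... the thread does its work and releases the connection to the pool ...
DISCARD ALL;         -- pooler reset; this internally issues UNLISTEN *
-- With this form of the patch, later UNLISTENs on the same backend become
-- cheap no-ops again until somebody issues another LISTEN.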

regards, tom lane

Index: src/backend/access/transam/xact.c
===
RCS file: /cvsroot/pgsql/src/backend/access/transam/xact.c,v
retrieving revision 1.272
diff -c -r1.272 xact.c
*** src/backend/access/transam/xact.c   20 Jan 2009 18:59:37 -  1.272
--- src/backend/access/transam/xact.c   12 Feb 2009 18:24:12 -
***
*** 1703,1708 
--- 1703,1709 
AtEOXact_SPI(true);
AtEOXact_xml();
AtEOXact_on_commit_actions(true);
+   AtEOXact_Notify(true);
AtEOXact_Namespace(true);
/* smgrcommit already done */
AtEOXact_Files();
***
*** 1939,1944 
--- 1940,1946 
AtEOXact_SPI(true);
AtEOXact_xml();
AtEOXact_on_commit_actions(true);
+   AtEOXact_Notify(true);
AtEOXact_Namespace(true);
/* smgrcommit already done */
AtEOXact_Files();
***
*** 2084,2089 
--- 2086,2092 
AtEOXact_SPI(false);
AtEOXact_xml();
AtEOXact_on_commit_actions(false);
+   AtEOXact_Notify(false);
AtEOXact_Namespace(false);
AtEOXact_Files();
AtEOXact_ComboCid();
Index: src/backend/commands/async.c
===
RCS file: /cvsroot/pgsql/src/backend/commands/async.c,v
retrieving revision 1.145
diff -c -r1.145 async.c
*** src/backend/commands/async.c1 Jan 2009 17:23:37 -   1.145
--- src/backend/commands/async.c12 Feb 2009 18:24:13 -
***
*** 167,172 
--- 167,178 
  /* True if we've registered an on_shmem_exit cleanup */
  static bool unlistenExitRegistered = false;
  
+ /* True if this backend has (or might have) an active LISTEN entry */
+ static bool haveActiveListen = false;
+ 
+ /* True if current transaction is trying to commit an UNLISTEN ALL */
+ static bool committingUnlistenAll = false;
+ 
  bool  Trace_notify = false;
  
  
***
*** 277,282 
--- 283,292 
if (Trace_notify)
elog(DEBUG1, Async_Unlisten(%s,%d), relname, MyProcPid);
  
+   /* If we couldn't possibly be listening, no need to queue anything */
+   if (pendingActions == NIL  !haveActiveListen)
+   return;
+ 
queue_listen(LISTEN_UNLISTEN, relname);
  }
  
***
*** 291,296 
--- 301,310 
if (Trace_notify)
elog(DEBUG1, Async_UnlistenAll(%d), MyProcPid);
  
+   /* If we couldn't possibly be listening, no need to queue anything */
+   if (pendingActions == NIL  !haveActiveListen)
+   return;
+ 
queue_listen(LISTEN_UNLISTEN_ALL, );
  }
  
***
*** 493,499 
heap_freetuple(tuple);
  
/*
!* now that we are listening, make sure we will unlisten before dying.
 */
if (!unlistenExitRegistered)
{
--- 507,526 
heap_freetuple(tuple);
  
/*
!* Remember that this backend has at least one active LISTEN.  Also,
!* this LISTEN negates the effect of any earlier UNLISTEN ALL in the
!* same transaction.
!*
!* Note: it's still possible for the current transaction to fail before
!* we reach commit.  In that case haveActiveListen might be uselessly
!* left true; but that's OK, if not optimal, so we don't expend extra
!* effort to cover that corner case.
!*/
!   haveActiveListen = true;
!   committingUnlistenAll = false;
! 
!   /*
!* Now that we are listening, make sure we will unlisten before dying.
 */
if (!unlistenExitRegistered)
{
***
*** 569,574 
--- 596,608 
simple_heap_delete(lRel, lTuple-t_self);
  
heap_endscan(scan);
+ 
+   /*
+* Remember that we're trying to commit UNLISTEN ALL.  Since we might
+* still fail before reaching commit, we can't reset haveActiveListen
+* immediately.
+*/
+   committingUnlistenAll = true;
  }
  
  /*
***
*** 

Re: [HACKERS] pg_migrator and handling dropped columns

2009-02-12 Thread Bruce Momjian
Tom Lane wrote:
 Joshua D. Drake j...@commandprompt.com writes:
  On Thu, 2009-02-12 at 13:39 -0500, Tom Lane wrote:
  a long form only.  And probably not even list it in the user
  documentation.
 
  Why wouldn't we want to list it?
 
 Because it's for internal use only.  Although the effect we're
 discussing here is relatively harmless, it seems possible that
 further down the road we might find a need for hacks that would
 render the output entirely unfit for ordinary dump purposes.
 I don't see a need to encourage people to play with fire.
 
 It's hardly unprecedented for us to have undocumented internal
 options --- there are some in postgres.c for example.

The important point is that we add comments in the source code about why
it is undocumented.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Robert Haas
On Thu, Feb 12, 2009 at 2:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Just for completeness, I attach another form of the patch that I thought
 about for a bit.  This adds the ability for UNLISTEN ALL to revert the
 backend to the state where subsequent UNLISTENs don't cost anything.
 This could be of value in a scenario where you have pooled connections
 and just a small fraction of the client threads are using LISTEN.  That
 seemed like kind of an unlikely use-case though.  The problem is that
 this patch adds some cycles to transaction commit/abort for everyone,
 whether they ever use LISTEN/UNLISTEN/DISCARD or not.  It's not a lot of
 cycles, but even so I'm thinking it's not a win overall.  Comments?

This is so lightweight I'd be inclined to go for it, even if the use
case is pretty narrow.  Do you think you can actually construct a
benchmark where the difference is measurable?

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Feb 12, 2009 at 2:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Just for completeness, I attach another form of the patch that I thought
 about for a bit.  This adds the ability for UNLISTEN ALL to revert the
 backend to the state where subsequent UNLISTENs don't cost anything.
 This could be of value in a scenario where you have pooled connections
 and just a small fraction of the client threads are using LISTEN.  That
 seemed like kind of an unlikely use-case though.  The problem is that
 this patch adds some cycles to transaction commit/abort for everyone,
 whether they ever use LISTEN/UNLISTEN/DISCARD or not.  It's not a lot of
 cycles, but even so I'm thinking it's not a win overall.  Comments?

 This is so lightweight I'd be inclined to go for it, even if the use
 case is pretty narrow.  Do you think you can actually construct a
 benchmark where the difference is measurable?

Almost certainly not, but a cycle saved is a cycle earned ...

The real problem I'm having with it is that I don't believe the
use-case.  The normal scenario for a listener is that you LISTEN and
then you sit there waiting for events.  In the above scenario, a client
thread would only be able to receive events when it actively had control
of its pool connection; so it seems like it would be at risk of missing
things when it didn't.  It seems much more likely that you'd design the
application so that listening clients aren't pooled but are listening
continuously.  The guys sending NOTIFY events might well be pooled, but
they're not the issue.

If someone can show me a plausible use-case that gets a benefit from
this form of the patch, I don't have a problem with making other people
pay a few cycles for that.  I'm just fearing that nobody would get a win
at all, and then neither the cycles nor the extra complexity would give
us any benefit.  (The extra hooks into xact.c are actually bothering me
as much as the cycles.  Given that we're intending to throw all this
code away and reimplement LISTEN/NOTIFY completely pretty soon, I'd just
as soon keep down the number of contact points with the rest of the
system.)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PERFORM] GIST versus GIN indexes for intarrays

2009-02-12 Thread Tom Lane
Rusty Conover rcono...@infogears.com writes:
 The gist__int_ops is the default operator class for integer[] arrays,  
 as shown at:
 http://www.postgresql.org/docs/current/static/intarray.html

Ah, so you have contrib/intarray installed.

[ pokes at it... ]  Seems like what we have here is another iteration
of this ancient bug:
http://archives.postgresql.org/pgsql-committers/2004-01/msg00073.php
to wit, contrib/intarray is defining its own @> and <@ operators that
conflict with those since added to the core.  In the case Rusty is
showing, the @> gets resolved as intarray's @> (because that's an
exact match, where the core provides anyarray @> anyarray) and then
this operator is NOT a member of the core-provided GIN opclass for
integer arrays.

The short-term workaround for Rusty is probably to create his GIN
index using the intarray-provided gin__int_ops opclass.  But it
seems to me that we ought to get rid of intarray's @> and <@ operators
and have the module depend on the core anyarray operators, just as we
have already done for = and <>.  Comments?
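
A minimal sketch of that short-term workaround (table and column names invented):

-- Build the GIN index with the intarray-provided opclass instead of the core
-- default, so intarray's operators are members of the index's opclass.
CREATE INDEX test_ids_gin ON test_table USING gin (ids gin__int_ops);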

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Synch Replication

2009-02-12 Thread Alvaro Herrera
Fujii Masao wrote:

I noticed two very minor issues while reading your docs:

This is because WAL files generated in the primary server before this built-in
replication starts have to be transferred to the standby server by
using file-based log shipping. When archive_mode is unsent,

You probably mean unset here.

 enable_replication (boolean)

It has been said that variables that enable/disable features should only
be named after the feature that they affect, omitting the enable verb.
So in this case it should be set as replication=off or
replication=on.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] fillfactor for toast tables is useless?

2009-02-12 Thread Alvaro Herrera
ITAGAKI Takahiro wrote:
 With reloption patch, we can set WITH options to toast tables.
 However, fillfactor for toast tables is useless, no?

Maybe what we should do is just reject fillfactor for toast tables for
now.  I think this is easy to do.
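
For concreteness, a hedged sketch of what rejecting it would look like at the
SQL level (the table name is invented; the exact error wording is up to the patch):

ALTER TABLE my_table SET (fillfactor = 70);        -- heap fillfactor: still accepted
ALTER TABLE my_table SET (toast.fillfactor = 70);  -- would now be rejected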

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] database corruption help

2009-02-12 Thread Tom Lane
John Lister john.lister...@kickstone.com writes:
 GMT LOG:  relation pg_class TID 15538/4: dead HOT-updated tuple --- 
 cannot shrink relation

Hmm.  The comments in vacuum.c about this case suppose that it could
only be a transient condition, ie the tuple became dead while we were
looking at the page.  Evidently it's persistent for you, which means
that for some reason heap_page_prune() is failing to remove an
already-dead tuple.  I suspect this implies a corrupt HOT chain, but
maybe the data is okay and what you've got is really a bug in
heap_page_prune.

Could you send a dump of page 15538 of pg_class, using pg_filedump?
The source code for it is here:
http://sources.redhat.com/rhdb/utilities.html
Best would be -i -f format, ie
pg_filedump -i -f -R 15538 $PGDATA/base/something/1259
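
(Two quick queries, just for convenience, to fill in the pieces of that path:)

SELECT oid FROM pg_database WHERE datname = current_database();  -- the base/ subdirectory
SELECT relfilenode FROM pg_class WHERE relname = 'pg_class';     -- the file name, normally 1259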

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] connection logging dtrace probe

2009-02-12 Thread Robert Treat
I whipped up a quick dtrace probe for one of our servers to monitor connection 
attempts.  My goal was to monitor for any connection attempts from a specific 
role within the database.  Unfortunatly you can't set logging of connections 
for a specific user, and logging all connections on that machine would be 
quite the logfile bloater... enter dtrace. With the probe, I can do something 
like this:

-bash-3.00$ /opt/csw/bin/sudo dtrace -n 'postgresql*:::connection 
{printf("connection attempt: %s@%s\n",copyinstr(arg0),copyinstr(arg1)) }' | 
grep robert
dtrace: description 'postgresql*:::connection ' matched 5 probes
  2  18984ServerLoop:connection connection attempt: rob...@robert
  2  16222ServerLoop:connection connection attempt: rob...@robert
  1  16876ServerLoop:connection connection attempt: rob...@pagila

which can be piped to logfile or whatever. I'm attaching a patch against 8.4 
as an idea of what I've implemented (actual implementation was against a 
custom build) but should be close to working (don't have a working pg repo on 
any solaris machines atm). Any feedback appreciated (mostly wondering about 
probe name or location). TIA

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 3380b80..ddf23d8 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -118,6 +118,7 @@
 #include utils/datetime.h
 #include utils/memutils.h
 #include utils/ps_status.h
+#include pg_trace.h
 
 #ifdef EXEC_BACKEND
 #include storage/spin.h
@@ -3142,6 +3143,8 @@ BackendInitialize(Port *port)
 		elog(FATAL, could not disable timer for authorization timeout);
 	PG_SETMASK(BlockSig);
 
+	TRACE_POSTGRESQL_CONNECTION_ATTEMPT(port-user_name, port-database_name);
+
 	if (Log_connections)
 		ereport(LOG,
 (errmsg(connection authorized: user=%s database=%s,
diff --git a/src/backend/utils/probes.d b/src/backend/utils/probes.d
index f68a7d2..d8b418a 100644
--- a/src/backend/utils/probes.d
+++ b/src/backend/utils/probes.d
@@ -91,4 +91,6 @@ provider postgresql {
 	probe xlog__switch();
 	probe wal__buffer__write__dirty__start();
 	probe wal__buffer__write__dirty__done();
+
+	probe connection__attempt(char *, char *);
 };

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DISCARD ALL failing to acquire locks on pg_listen

2009-02-12 Thread Matteo Beccati
Tom Lane wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Thu, Feb 12, 2009 at 2:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Just for completeness, I attach another form of the patch that I thought
 about for a bit.  This adds the ability for UNLISTEN ALL to revert the
 backend to the state where subsequent UNLISTENs don't cost anything.
 This could be of value in a scenario where you have pooled connections
 and just a small fraction of the client threads are using LISTEN.  That
 seemed like kind of an unlikely use-case though.  The problem is that
 this patch adds some cycles to transaction commit/abort for everyone,
 whether they ever use LISTEN/UNLISTEN/DISCARD or not.  It's not a lot of
 cycles, but even so I'm thinking it's not a win overall.  Comments?
 
 This is so lightweight I'd be inclined to go for it, even if the use
 case is pretty narrow.  Do you think you can actually construct a
 benchmark where the difference is measurable?
 
 Almost certainly not, but a cycle saved is a cycle earned ...
 
 The real problem I'm having with it is that I don't believe the
 use-case.  The normal scenario for a listener is that you LISTEN and
 then you sit there waiting for events.  In the above scenario, a client
 thread would only be able to receive events when it actively had control
 of its pool connection; so it seems like it would be at risk of missing
 things when it didn't.  It seems much more likely that you'd design the
 application so that listening clients aren't pooled but are listening
 continuously.  The guys sending NOTIFY events might well be pooled, but
 they're not the issue.
 
 If someone can show me a plausible use-case that gets a benefit from
 this form of the patch, I don't have a problem with making other people
 pay a few cycles for that.  I'm just fearing that nobody would get a win
 at all, and then neither the cycles nor the extra complexity would give
 us any benefit.  (The extra hooks into xact.c are actually bothering me
 as much as the cycles.  Given that we're intending to throw all this
 code away and reimplement LISTEN/NOTIFY completely pretty soon, I'd just
 as soon keep down the number of contact points with the rest of the
 system.)

Imagine a web application interacting with a daemon using LISTEN/NOTIFY.
I happened to build one in the past, so I guess it could be a fairly
common scenario, which you already described. Now if both the front end
and the daemon use the same pooler to have a common failover process,
previously listening connections could be reused by the web app if the
daemon is restarted and the pooler is not. Does that look plausible?

That said, I don't mind if we go with the previous two-liner fix :)


Cheers

-- 
Matteo Beccati

OpenX - http://www.openx.org

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore --multi-thread

2009-02-12 Thread David Fetter
On Thu, Feb 12, 2009 at 02:16:39PM -0500, Michael Glaesemann wrote:

 On 2009-02-12, at 14:15 , Jonah H. Harris wrote:

 On Thu, Feb 12, 2009 at 11:37 AM, Joshua D. Drake j...@commandprompt.com 
 wrote:

 --num-workers or --num-connections would both work.

 --num-parallel?

 --num-concurrent?

--num-bikeshed? ;)

Cheers,
David (purple!)
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Missing files after make install ?

2009-02-12 Thread Olivier Thauvin
Hi,

I am the packager of postgresql for mandriva, and I received this bug
today:

https://qa.mandriva.com/show_bug.cgi?id=47722

postgresql8.3-contribe missed some files like uuid-ossp.sql

Description of problem:
uuid-ossp was not found in the .spec file

Taking a look at the /contrib/README file, some items are missing:
intagg 
intarray 
spi 
start-scripts 
xml2 (libxml2 ?)

***

I just checked, and truly, files provided in contrib, mainly *.sql, are
not installed during %make install from the contrib/ subdirectory.

Is this an error? A mistake? If this should be fixed, can you provide a
patch quickly (before I release 8.3.6 :) )?

Thanks in advance.


pgpQOc8yggzb6.pgp
Description: PGP signature


[HACKERS] The science of optimization in practical terms?

2009-02-12 Thread Joshua D. Drake
Hello,

I was helping a customer today with what is becoming a common theme with
a lot of work we do. Basically, "It was working fine until recently."
Now 90% of the time it is as simple as running an ANALYZE VERBOSE,
picking apart relations that aren't being maintained properly, and adjusting
autovacuum or vacuum appropriately. If it isn't that, it is usually
something like increasing effective_cache_size or
default_statistics_target.
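
A hedged sketch of that first round of checks (values are examples only):

ANALYZE VERBOSE;                      -- shows per-relation page and row estimates
SET default_statistics_target = 100;  -- then re-ANALYZE the tables that need it
SET effective_cache_size = '4GB';     -- sized to the OS cache actually available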

However, in recent times I have found increasing cpu_tuple_cost,
cpu_operator_cost and cpu_index_tuple_cost to be very useful. This is
always in the scenario of "queries were running fine for months and
then all of a sudden, they are not". It is also always on systems that
we are already maintaining and thus (in theory) are in good shape.

So my question is, what is the science in practical terms behind those
parameters? Normally I would just accept it as another PostgreSQL
idiosyncrasy but the performance differences I am talking about are
large. After changing cpu_tuple_cost and cpu_operator_cost today to 0.5
I decreased two queries from 10 seconds and 15 seconds to 2 seconds and
~900 ms respectively.
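
For the cost parameters themselves, the per-session experiment described above
looks roughly like this (the query is a stand-in; the built-in defaults are
cpu_tuple_cost = 0.01, cpu_index_tuple_cost = 0.005, cpu_operator_cost = 0.0025):

SET cpu_tuple_cost = 0.5;
SET cpu_operator_cost = 0.5;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;  -- stand-in for the slow query
RESET cpu_tuple_cost;
RESET cpu_operator_cost;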

Sincerely,

Joshua D. Drake
 

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Missing files after make install ?

2009-02-12 Thread Tom Lane
Olivier Thauvin olivier.thau...@aerov.jussieu.fr writes:
 By take a look in the /contrib/README file, some items are missed
 intagg 
 intarray 
 spi 
 start-scripts 
 xml2 (libxml2 ?)

Your complainant seems to be assuming that there's a one-to-one mapping
between contrib module names and .sql files.  Not so, at least not in
these cases.

The names I see in the Fedora RPM are

./usr/share/pgsql/contrib/_int.sql  = intarray
./usr/share/pgsql/contrib/adminpack.sql
./usr/share/pgsql/contrib/autoinc.sql
./usr/share/pgsql/contrib/btree_gist.sql
./usr/share/pgsql/contrib/chkpass.sql
./usr/share/pgsql/contrib/cube.sql
./usr/share/pgsql/contrib/dblink.sql
./usr/share/pgsql/contrib/dict_int.sql
./usr/share/pgsql/contrib/dict_xsyn.sql
./usr/share/pgsql/contrib/earthdistance.sql
./usr/share/pgsql/contrib/fuzzystrmatch.sql
./usr/share/pgsql/contrib/hstore.sql
./usr/share/pgsql/contrib/insert_username.sql
./usr/share/pgsql/contrib/int_aggregate.sql = intagg
./usr/share/pgsql/contrib/isn.sql
./usr/share/pgsql/contrib/lo.sql
./usr/share/pgsql/contrib/ltree.sql
./usr/share/pgsql/contrib/moddatetime.sql
./usr/share/pgsql/contrib/pageinspect.sql
./usr/share/pgsql/contrib/pg_buffercache.sql
./usr/share/pgsql/contrib/pg_freespacemap.sql
./usr/share/pgsql/contrib/pg_trgm.sql
./usr/share/pgsql/contrib/pgcrypto.sql
./usr/share/pgsql/contrib/pgrowlocks.sql
./usr/share/pgsql/contrib/pgstattuple.sql
./usr/share/pgsql/contrib/pgxml.sql = xml2
./usr/share/pgsql/contrib/refint.sql
./usr/share/pgsql/contrib/seg.sql
./usr/share/pgsql/contrib/sslinfo.sql
./usr/share/pgsql/contrib/tablefunc.sql
./usr/share/pgsql/contrib/test_parser.sql
./usr/share/pgsql/contrib/timetravel.sql
./usr/share/pgsql/contrib/tsearch2.sql

Maybe this should be cleaned up sometime, but that's how it is in 8.3.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Updates of SE-PostgreSQL 8.4devel patches (r1530)

2009-02-12 Thread KaiGai Kohei

KaiGai Kohei wrote:

The series of SE-PostgreSQL patches are updated:
[1/5] 
http://sepgsql.googlecode.com/files/sepostgresql-sepgsql-8.4devel-3-r1530.patch
[2/5] 
http://sepgsql.googlecode.com/files/sepostgresql-utils-8.4devel-3-r1530.patch
[3/5] 
http://sepgsql.googlecode.com/files/sepostgresql-policy-8.4devel-3-r1530.patch
[4/5] 
http://sepgsql.googlecode.com/files/sepostgresql-docs-8.4devel-3-r1530.patch
[5/5] 
http://sepgsql.googlecode.com/files/sepostgresql-tests-8.4devel-3-r1530.patch


BTW, what is the current status of reviewing the patches?
Is it necessary to wait for a few days more?

If you have anything unclear, please feel free to ask me anything.

Thanks,


- List of updates:
* These are rebased to the latest CVS HEAD because of conflicts.
  - The src/include/catalog/pg_proc.h got a conflict due to the
    newly added SQL functions.
  - The src/bin/pg_dump/pg_dump.c got a conflict due to the stuff
    to dump toast_reloptions.
* bugfix: An incorrect procedure entry for sepgsql_server_getcon().
* cleanup: A strange error message in testcases.

Rest of parts are unchanged.

Please comment anything.

Thanks,


--
OSS Platform Development Division, NEC
KaiGai Kohei kai...@ak.jp.nec.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Missing files after make install ?

2009-02-12 Thread Andrew Dunstan



Olivier Thauvin wrote:

Hi,

I am the packager of postgresql for mandriva, and I received this bug
today:

https://qa.mandriva.com/show_bug.cgi?id=47722

postgresql8.3-contribe missed some files like uuid-ossp.sql

Description of problem:
uuid-ossp was not found in the .spec file

Taking a look at the /contrib/README file, some items are missing:
intagg 
intarray 
spi 
start-scripts 
xml2 (libxml2 ?)


***

I just checked, and truly, files provided in contrib, mainly *.sql, are
not installed during %make install from the contrib/ subdirectory.

Is this an error? A mistake? If this should be fixed, can you provide a
patch quickly (before I release 8.3.6 :) )?

Thanks in advance.
  


It's up to you to decide what you want packaged, not us. For example, as 
a packager, you surely don't want to use our initscript - you should be 
using one that is appropriate for your distro. All the other packagers 
manage to make these decisions. Maybe you could get some inspiration 
from the spec files used by other RPM-based distros.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] fillfactor for toast tables is useless?

2009-02-12 Thread ITAGAKI Takahiro

Alvaro Herrera alvhe...@commandprompt.com wrote:

 ITAGAKI Takahiro wrote:
  With reloption patch, we can set WITH options to toast tables.
  However, fillfactor for toast tables is useless, no?
 
 Maybe what we should do is just reject fillfactor for toast tables for
 now.  I think this is easy to do.

Hmmm... this might have been discussed already, but I think it would be better
to have relopt_kind not as a sequence number but as a bit flag.
If it were flags, we could reject fillfactor for toast tables in a natural way
without duplicated autovacuum_* definitions:

  - fillfactor   -> RELOPT_KIND_HEAP
  - autovacuum_* -> RELOPT_KIND_HEAP | RELOPT_KIND_TOAST

26 entries remain even after we use 6 kinds in the core.
(HEAP, TOAST, BTREE, HASH, GIN and GIST)

The attached is a patch to change 3 things:
  - Reject toast.fillfactor.
  - Modify relopt_kind to bit flags.
  - Report relation type on unrecognized parameter errors.

Comments welcome.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



reject_toast_fillfactor.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers