Re: [HACKERS] MERGE command for inheritance

2010-08-12 Thread Boxuan Zhai
On Thu, Aug 12, 2010 at 12:49 AM, Greg Smith g...@2ndquadrant.com wrote:

 Tom Lane wrote:

 Do we really think this is anywhere near committable now?



 There's a relatively objective standard for the first thing needed for
 commit--does it work?--in the form of the regression tests Simon put
 together before development.  I just tried the latest merge_v102.patch
 (regression diff attached) to see how that's going.  There are still a
 couple of errors in there.  It looks to me like the error handling and
 related DO NOTHING support are the next pair of things that patch needs work
 on.  I'd rather see that sorted out than to march onward to inheritance
 without the fundamentals even nailed down yet.



Sorry for the regression mismatch problem. In fact, I am still unable to
make the regression test run on my machine. When I try the command
pg_regress in /src/test/regress/, it always gives a FATAL error:

FATAL:  parameter "port" cannot be changed without restarting the server
psql: FATAL:  parameter "port" cannot be changed without restarting the
server
command failed: C:/msys/1.0/local/pgsql/bin//psql -X -c "DROP DATABASE IF
EXISTS \"regression\"" postgres

However, I can run this command directly in the MinGW command line
interface. I guess this is because the psql_command() function does some
checks before accepting commands, and the MinGW environment cannot pass
these checks.

All the SQL commands in the .sql file have been tested by hand, and they are
all correct. However, the .out file was not automatically generated by pgsql.

I may need to find a Linux system to try to generate the correct .out file
some other time. Or, could someone help me generate an .out file through
pg_regress?


 --
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us http://www.2ndquadrant.us/


 ***
 /home/postgres/pgwork/repo/git/postgresql/src/test/regress/expected/merge.out
   2010-08-11 12:23:50.0 -0400
 ---
 /home/postgres/pgwork/repo/git/postgresql/src/test/regress/results/merge.out
2010-08-11 12:33:27.0 -0400
 ***
 *** 44,57 
  WHEN MATCHED THEN
UPDATE SET balance = t.balance + s.balance
  ;
 ! SELECT * FROM target;
 !  id | balance
 ! +-
 !   1 |  10
 !   2 |  25
 !   3 |  50
 ! (3 rows)
 !
  ROLLBACK;
  -- do a simple equivalent of an INSERT SELECT
  BEGIN;
 --- 44,50 
  WHEN MATCHED THEN
UPDATE SET balance = t.balance + s.balance
  ;
 ! NOTICE:  one tuple is ERROR
  ROLLBACK;
  -- do a simple equivalent of an INSERT SELECT
  BEGIN;
 ***
 *** 61,66 
 --- 54,61 
  WHEN NOT MATCHED THEN
INSERT VALUES (s.id, s.balance)
  ;
 + NOTICE:  one tuple is ERROR
 + NOTICE:  one tuple is ERROR
  SELECT * FROM target;
   id | balance
  +-
 ***
 *** 102,107 
 --- 97,103 
  WHEN MATCHED THEN
DELETE
  ;
 + NOTICE:  one tuple is ERROR
  SELECT * FROM target;
   id | balance
  +-
 ***
 *** 165,176 
  ERROR:  multiple actions on single target row

  ROLLBACK;
 !
  -- This next SQL statement
  --  fails according to standard
  --  suceeds in PostgreSQL implementation by simply ignoring the second
  --  matching row since it activates no WHEN clause
  BEGIN;
  MERGE into target t
  USING (select * from source) AS s
  ON t.id = s.id
 --- 161,175 
  ERROR:  multiple actions on single target row

  ROLLBACK;
 ! ERROR:  syntax error at or near ERROR
 ! LINE 1: ERROR:  multiple actions on single target row
 ! ^
  -- This next SQL statement
  --  fails according to standard
  --  suceeds in PostgreSQL implementation by simply ignoring the second
  --  matching row since it activates no WHEN clause
  BEGIN;
 + ERROR:  current transaction is aborted, commands ignored until end of
 transaction block
  MERGE into target t
  USING (select * from source) AS s
  ON t.id = s.id
 ***
 *** 179,184 
 --- 178,184 
  WHEN NOT MATCHED THEN
INSERT VALUES (s.id, s.balance)
  ;
 + ERROR:  current transaction is aborted, commands ignored until end of
 transaction block
  ROLLBACK;
  -- Now lets prepare the test data to generate 2 non-matching rows
  DELETE FROM source WHERE id = 3 AND balance = 5;
 ***
 *** 188,195 
  +-
2 |   5
3 |  20
 -   4 |   5
4 |  40
  (4 rows)

  -- This next SQL statement
 --- 188,195 
  +-
2 |   5
3 |  20
4 |  40
 +   4 |   5
  (4 rows)

  -- This next SQL statement
 ***
 *** 203,216 
  WHEN NOT MATCHED THEN
INSERT VALUES (s.id, s.balance)
  ;
  SELECT * FROM target;
   id | balance
  +-
1 |  10
2 |  20
3 |  30
 -   4 |   5
4 |  40
  (5 rows)

  ROLLBACK;
 --- 203,218 
  WHEN NOT MATCHED THEN
INSERT VALUES (s.id, s.balance)
  ;
 + NOTICE:  one tuple is ERROR
 + NOTICE: 

Re: [HACKERS] review: psql: edit function, show function commands patch

2010-08-12 Thread Pavel Stehule
2010/8/11 Robert Haas robertmh...@gmail.com:
 On Wed, Aug 11, 2010 at 12:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Aug 10, 2010 at 11:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 BTW, at least in the usage in that loop, get_functiondef_dollarquote_tag
 seems grossly overdesigned.  It would be clearer, shorter, and faster if
 you just had a strncmp test for "AS $function" there.
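
For illustration, such a test amounts to something like the following minimal
sketch (illustrative only; "line" is assumed to point at the start of a line of
the pg_get_functiondef() output, and string.h is assumed to be included):

    /* Illustrative sketch: detect the line that opens the dollar-quoted body */
    if (strncmp(line, "AS $function", strlen("AS $function")) == 0)
    {
        /* line numbering of the function body would start here */
    }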

 As far as I can see, the only purpose of that code is to support the
 desire to have \sf+ display  rather than a line number for the
 lines that FOLLOW the function body.  But I'm wondering if we should
 just forget about that and let the numbering run continuously from the
 first "AS $function" line to end of file.  That would get rid of a
 bunch of rather grotty code in the \sf patch, also.

 Oh?  Considering that in the standard pg_get_functiondef output, the
 ending $function$ delimiter is always on the very last line, that sounds
 pretty useless.  +1 for just numbering forward from the start line.

 OK.

 BTW, the last I looked, \sf+ was using what I thought to be a quite ugly
 and poorly-considered formatting for the line number.  I would suggest
 eight blanks for a header line and "%-7d " as the prefix format for a
 numbered line.  The reason for making sure the prefix is 8 columns rather
 than some other width is to not mess up tab-based formatting of the
 function body.  I would also prefer a lot more visual separation between
 the line number and the code than "%4d " will offer; and as for the
 stars, they're just useless and distracting.

 I don't have a strong preference, but that seems reasonable.  I
 suggest that we punt the \sf portion of this patch back for rework for
 the next CommitFest, and focus on getting the \e and \ef changes
 committed.  I think the \sf code can be a lot simpler if we get rid of
 the code that's intended to recognize the ending delimeter.


the proposed changes are not complex, and there is no reason to move
\sf to the next commitfest. I am thinking about a little simplification
- there can be only one loop instead of two. After the \e commit the
remaining code is more complex. If some code isn't clean, it is because
of the \o and pager support.

 Another thought is that we might want to add a comment to
 pg_get_functiondef() noting that anyone changing the output format
 should be careful not to break the line-number-finding form of \ef in
 the process.


+1

Regards

Pavel Stehule

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array with an empty input string

2010-08-12 Thread Pavel Stehule
 Really?

 FOR var IN SELECT UNNEST(arr) LOOP ... END LOOP

 I mean, doing everything is sort of clunky in PL/pgsql, but this
 doesn't seem particularly bad as PL/pgsql idioms go.


this simple construction can take much more memory than the alternative. I
proposed a FOREACH statement two or three years ago

FOREACH var IN array LOOP END LOOP

this statement can be implemented very efficiently - and I think it can
be combined with some form of the string_to_array function, because var
specifies the target element type.

FOREACH var IN parse('',...) LOOP END LOOP

Regards

Pavel Stehule

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array with an empty input string

2010-08-12 Thread Dimitri Fontaine
 I definitely agree that PL/pgsql could be more usable.  Or if not,
 then some other PL with a better overall design could be more usable.
 I am not entirely sure how to create such a thing, however.

Would the standard plpsm be just that? Pavel maintains a pg implementation of
it, I believe.

Regards,
-- 
dim
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array with an empty input string

2010-08-12 Thread Pavel Stehule
2010/8/12 Dimitri Fontaine dfonta...@hi-media.com:
 I definitely agree that PL/pgsql could be more usable.  Or if not,
 then some other PL with a better overall design could be more usable.
 I am not entirely sure how to create such a thing, however.

 Would the standard plpsm be just that? Pavel maintains a pg implementation of
 it, I believe.

there isn't anything about iteration over collections :(

Regards

Pavel


 Regards,
 --
 dim
 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] typos in HS source code comment

2010-08-12 Thread Fujii Masao
Hi,

When I was enjoying reading the HS source code,
I found some typos. Attached patch fixes them.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


hs_typo_v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Libpq: PQftype, PQfsize

2010-08-12 Thread Bozena Potempa
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
[..]
Bozena Potempa bozena.pote...@otc.pl writes:
 I have a test table with varchar(40) column. After executing the 
 following
 query: 
 select substr(fc,1,2) from test
 PQftype returns for the result column PG_TYPE_TEXT and 
PQfsize returns -1. 
 Is it the expected behaviour?

Yes.  substr() returns text.  But even if it returned varchar, 
you'd probably get -1 for the fsize.  PG does not make any 
attempt to predict the result width of functions.

Thank you. In this case (substring) there is not much to predict, just a
simple calculation, but I understand that it is part of a larger and more
complicated piece of functionality. I tried to find a workaround with a type cast:
select substr(fc,1,2)::varchar(2) from test
Now the type returned is varchar, but the size is still -1. I think that is
not a correct return value: the size is specified explicitly in the query and
could be used by PQfsize.

Bozena


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Regression tests versus the buildfarm environment

2010-08-12 Thread Peter Eisentraut
On Wed, 2010-08-11 at 16:17 -0400, Tom Lane wrote:
 We should have the buildfarm configuration such that any one run uses
 the same port number for both installed and uninstalled regression
 tests.

I'm getting lost here as to what the actual requirements are.  The above is
obviously not going to work as a default for pg_regress, because the
port number for an installed test is determined by the user and is
likely to be in the public range, whereas the uninstalled test should
use something from the private range.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Regression tests versus the buildfarm environment

2010-08-12 Thread Peter Eisentraut
On Wed, 2010-08-11 at 15:06 -0400, Andrew Dunstan wrote:
 Your original email said:
 
 For some historic reasons, I have my local scripts set up so
 that they build development instances using the hardcoded port
 65432.
 
 I think my response would be "Don't do that."

Do you have a concrete suggestion for a different way to handle it?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Regression tests versus the buildfarm environment

2010-08-12 Thread Andrew Dunstan



On 08/12/2010 08:43 AM, Peter Eisentraut wrote:

On Wed, 2010-08-11 at 15:06 -0400, Andrew Dunstan wrote:

Your original email said:

 For some historic reasons, I have my local scripts set up so
 that they build development instances using the hardcoded port
 65432.

I think my response would be "Don't do that."

Do you have a concrete suggestion for a different way to handle it?




Well, I do all my builds under a common directory, and my setup shell 
script has stuff like this to choose a port:


   for port in `seq -w 5701 5799` ; do
       grep -q -- --with-pgport=$port $base/*/config.log || break
   done

It's worked fairly well for me for about five years now. No doubt there 
could be many variations on this theme.


cheers

andrew


Re: [HACKERS] review: psql: edit function, show function commands patch

2010-08-12 Thread Pavel Stehule
Hello

Attached is an updated \sf implementation. It is a little bit simplified, with
support for a pager and output forwarding. Formatting was updated per Tom's
request.

Regards

Pavel Stehule


 BTW, the last I looked, \sf+ was using what I thought to be a quite ugly
 and poorly-considered formatting for the line number.  I would suggest
 eight blanks for a header line and "%-7d " as the prefix format for a
 numbered line.  The reason for making sure the prefix is 8 columns rather
 than some other width is to not mess up tab-based formatting of the
 function body.  I would also prefer a lot more visual separation between
 the line number and the code than "%4d " will offer; and as for the
 stars, they're just useless and distracting.

                        regards, tom lane

*** ./doc/src/sgml/ref/psql-ref.sgml.orig	2010-08-12 02:40:59.0 +0200
--- ./doc/src/sgml/ref/psql-ref.sgml	2010-08-12 15:01:04.339404200 +0200
***
*** 2100,2105 
--- 2100,2131 
  
  
varlistentry
+ termliteral\sf[+] optional replaceable class=parameterfunction_description/ optional  replaceable class=parameterline_number/ /optional /optional /literal/term
+ 
+ listitem
+ para
+  This command fetches and shows the definition of the named function,
+  in the form of a commandCREATE OR REPLACE FUNCTION/ command.
+  If literal+/literal is appended to the command name, then output
+  lines have line numbers.
+ /para
+ 
+ para
+  The target function can be specified by name alone, or by name
+  and arguments, for example literalfoo(integer, text)/.
+  The argument types must be given if there is more
+  than one function of the same name.
+ /para
+ 
+ para
+ If a line number is specified, applicationpsql/application will
+ show the specified line as the first line. Previous lines are skipped.
+ /para
+ /listitem
+   /varlistentry
+ 
+ 
+   varlistentry
  termliteral\set [ replaceable class=parametername/replaceable [ replaceable class=parametervalue/replaceable [ ... ] ] ]/literal/term
  
  listitem
*** ./src/bin/psql/command.c.orig	2010-08-12 02:40:59.0 +0200
--- ./src/bin/psql/command.c	2010-08-12 14:39:22.334403954 +0200
***
*** 32,37 
--- 32,38 
  #ifdef USE_SSL
  #include openssl/ssl.h
  #endif
+ #include signal.h
  
  #include portability/instr_time.h
  
***
*** 46,51 
--- 47,53 
  #include input.h
  #include large_obj.h
  #include mainloop.h
+ #include pqsignal.h
  #include print.h
  #include psqlscan.h
  #include settings.h
***
*** 1083,1088 
--- 1085,1232 
  		free(opt0);
  	}
  
+ 	/* \sf = show a function source code */
+ 	else if (strcmp(cmd, sf) == 0 || strcmp(cmd, sf+) == 0)
+ 	{
+ 		bool show_lineno;
+ 		int	first_visible_line = -1;
+ 		
+ 		show_lineno = (strcmp(cmd, sf+) == 0);
+ 		
+ 		if (!query_buf)
+ 		{
+ 			psql_error(no query buffer\n);
+ 			status = PSQL_CMD_ERROR;
+ 		}
+ 		else
+ 		{
+ 			char	*func;
+ 			Oid		foid = InvalidOid;
+ 			
+ 			func = psql_scan_slash_option(scan_state,
+ 	OT_WHOLE_LINE, NULL, true);
+ 			first_visible_line = strip_lineno_from_funcdesc(func);
+ 			if (first_visible_line == 0)
+ 			{
+ /* error already reported */
+ status = PSQL_CMD_ERROR;
+ 			}
+ 			else if (!func)
+ 			{
+ psql_error(missing a function name\n);
+ status = PSQL_CMD_ERROR;
+ 			}
+ 			else if (!lookup_function_oid(pset.db, func, foid))
+ 			{
+ /* error already reported */
+ status = PSQL_CMD_ERROR;
+ 			}
+ 			else if (!get_create_function_cmd(pset.db, foid, query_buf))
+ 			{
+ /* error already reported */
+ status = PSQL_CMD_ERROR;
+ 			}
+ 			
+ 			if (func)
+ free(func);
+ 			
+ 			if (status != PSQL_CMD_ERROR)
+ 			{
+ FILE *output;
+ bool	is_pager = false;
+ 
+ /*
+  * Count a lines in function definition - it's used for opening
+  * a pager. Get a output stream - stdout, pager or forwarded output.
+  */
+ if (pset.queryFout == stdout)
+ {
+ 	int	lc = 0;
+ 	const char *lines = query_buf->data;
+ 	
+ 	while (*lines != '\0')
+ 	{
+ 		lc++;
+ 		/* find start of next line */
+ 		lines = strchr(lines, '\n');
+ 		if (!lines)
+ 			break;
+ 		lines++;
+ 	}
+ 	
+ 	output = PageOutput(lc, pset.popt.topt.pager);
+ 	is_pager = output != stdout;
+ }
+ else
+ {
+ 	/* use a prepared query output, pager isn't activated */
+ 	output = pset.queryFout;
+ 	is_pager = false;
+ }
+ 
+ if (first_visible_line > 0 || show_lineno)
+ {
+ 	bool	is_header = true;		/* true, when header lines is processed */
+ 	int	lineno = 0;
+ 	char *lines = query_buf->data;
+ 	
+ 	/*
+ 	 * lineno 1 should correspond to the first line of the function
+ 	 * body. We expect that pg_get_functiondef() 

Re: [HACKERS] Libpq: PQftype, PQfsize

2010-08-12 Thread Tom Lane
Bozena Potempa bozena.pote...@otc.pl writes:
 Thank you. In this case (substring) there is not much to predict, just a
 simple calculation, but I understand that it is part of a larger and more
 complicated functionality. I tried to find a workaround with a type cast: 
 select substr(fc,1,2)::varchar(2) from test
 Now the type returned is varchar, but the size is still -1. I think that it
 is not a correct return: the size is specified explicitly in the query and
 could be used by PQfsize. 

Oh ... actually the problem there is that you have the wrong idea about
what PQfsize means.  What that returns is pg_type.typlen for the data
type, which is always going to be -1 for a varlena type like varchar.

The thing that you need to look at if you want to see information like
the max length of a varchar is typmod (PQfmod).  The typmod generally
has some funny datatype-specific encoding; for varchar and char it
works like this:
-1: max length unknown or unspecified
n > 0: max length is n-4 characters

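For reference, on the client side that decoding amounts to something like the
following minimal sketch (assuming a PGresult obtained from an earlier query;
meant only as an illustration of the rule above):

    #include <libpq-fe.h>

    /* Sketch only: declared max length of a varchar/char result column,
     * following the typmod encoding described above. */
    static int
    varchar_max_length(const PGresult *res, int col)
    {
        int typmod = PQfmod(res, col);

        return (typmod >= 4) ? typmod - 4 : -1;   /* -1: unknown/unspecified */
    }
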
regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Regression tests versus the buildfarm environment

2010-08-12 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On Wed, 2010-08-11 at 16:17 -0400, Tom Lane wrote:
 We should have the buildfarm configuration such that any one run uses
 the same port number for both installed and uninstalled regression
 tests.

 I'm getting lost here as to what the actual requirements are.  The above is
 obviously not going to work as a default for pg_regress, because the
 port number for an installed test is determined by the user and is
 likely to be in the public range, whereas the uninstalled test should
 use something from the private range.

Certainly, but the buildfarm's installed test doesn't try to start on
5432.  It starts on whatever branch_port the buildfarm owner has
specified for that animal and that branch.  It's the owner's
responsibility to make that nonconflicting across the services and
buildfarm critters he has running on a given machine.  What I'm saying
is that *in the buildfarm* we want the "make check" case to also use
this predetermined safe port number.  That has nothing whatever to do
with what is good practice for other cases.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MERGE command for inheritance

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 2:24 AM, Boxuan Zhai bxzhai2...@gmail.com wrote:
 Sorry for the regression mismatch problem. In fact, I am still unable to
 make the regression test run on my machine. When I try the command
 pg_regress in /src/test/regress/, it always gives a FATAL error:

The intention is that you should run "make check" in that directory.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Regression tests versus the buildfarm environment

2010-08-12 Thread Andrew Dunstan



On 08/12/2010 10:22 AM, Tom Lane wrote:

Peter Eisentrautpete...@gmx.net  writes:

On Wed, 2010-08-11 at 16:17 -0400, Tom Lane wrote:

We should have the buildfarm configuration such that any one run uses
the same port number for both installed and uninstalled regression
tests.

I'm getting lost here as to what the actual requirements are.  The above is
obviously not going to work as a default for pg_regress, because the
port number for an installed test is determined by the user and is
likely to be in the public range, whereas the uninstalled test should
use something from the private range.

Certainly, but the buildfarm's installed test doesn't try to start on
5432.  It starts on whatever branch_port the buildfarm owner has
specified for that animal and that branch.  It's the owner's
responsibility to make that nonconflicting across the services and
buildfarm critters he has running on a given machine.  What I'm saying
is that *in the buildfarm* we want the "make check" case to also use
this predetermined safe port number.  That has nothing whatever to do
with what is good practice for other cases.




Well, I think the steps to do that are:

   * change src/test/GNUmakefile to provide for a TMP_PORT setting that
 gets passed to pg_regress
   * change src/tools/msvc/vcregress.pl to match
   * backpatch these changes to all live branches
   * change the buildfarm script to use the new options.


I don't think this should preclude changing the way we calculate the 
default port for pg_regress, for the reasons mentioned upthread.


cheers

andrew


[HACKERS] libpq

2010-08-12 Thread Dmitriy Igrishin
Hey all,

Is it guaranteed that functions which accept a PGconn* (PGresult*) work
properly if the PGconn* (PGresult*) is NULL, or is it better to check the
pointer for NULL before calling those functions?

Regards,
Dmitriy


Re: [HACKERS] libpq

2010-08-12 Thread Tom Lane
Dmitriy Igrishin dmit...@gmail.com writes:
 Is it guaranteed that functions which accept a PGconn* (PGresult*) work
 properly if the PGconn* (PGresult*) is NULL, or is it better to check the
 pointer for NULL before calling those functions?

I think they all do check for NULL, but whether they give back a default
result that's useful for you is harder to say.
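
In other words, the conservative approach is to check the pointer yourself
rather than relying on libpq's internal NULL handling. A minimal sketch
(the function name is illustrative, not part of libpq):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: check for NULL explicitly before asking libpq anything. */
    static void
    report_status(PGconn *conn)
    {
        if (conn == NULL)
        {
            fprintf(stderr, "no connection object (out of memory?)\n");
            return;
        }
        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection problem: %s", PQerrorMessage(conn));
    }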

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] libpq and TOO MANY CONNECTIONS

2010-08-12 Thread Dmitriy Igrishin
Hey all,

It seems that all of the error codes should be obtained by calling
PQresultErrorField().
But this function works only with results. So in what situations may a
TOO MANY CONNECTIONS error be thrown after a successful connection? :)
In the case of dblink?

Regards,
Dmitriy


[HACKERS] CommitFest 2010-07 week four progress report

2010-08-12 Thread Kevin Grittner
New numbers on where we are with this CommitFest, at the end of the
fourth week:
 
72 patches were submitted
 3 patches were withdrawn (deleted) by their authors
12 patches were moved to CommitFest 2010-09
--
57 patches in CommitFest 2010-07
--
 3 committed to 9.0
--
54 patches for 9.1
--
 1 rejected
18 returned with feedback
28 committed for 9.1
--
47 disposed
--
 7 pending
 2 ready for committer
--
 5 will still need reviewer attention
 1 waiting on author to respond to review
--
 4 patches need review now and have a reviewer assigned
 
With about 56 hours left until the close of the CommitFest, we're
down to two Ready for Committer and three other potentially
committable patches.  Do we have a plan (with a time line) for
producing a 9.1alpha1 release after the CF closes?  (Have we even
set a policy on whether we do that when we're still in beta testing
for the release which hit feature freeze six months ago?)
 
No patches were moved to the next CF this week, seven were
committed, and one was returned with feedback.  Two of the four
patches in Needs Review status are WiP patches, and for both a
review is long overdue by CF guidelines.  The other two in Needs
Review status had new patches posted yesterday which respond to
committer feedback.  The last action for the one patch in Waiting
on Author status was feedback from Tom five days ago.
 
 
Last week:
 
 72 patches were submitted
  3 patches were withdrawn (deleted) by their authors
 12 patches were moved to CommitFest 2010-09
 --
 57 patches in CommitFest 2010-07
 --
  3 committed to 9.0
 --
 54 patches for 9.1
 --
  1 rejected
 17 returned with feedback
 21 committed for 9.1
 --
 39 disposed
 --
 15 pending
  9 ready for committer
 --
  6 will still need reviewer attention
  1 waiting on author to respond to review
 --
  5 patches need review now and have a reviewer assigned


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Here's an updated patch, with the invalidation changes merged in and
 hopefully-suitable adjustments elsewhere.

I haven't tested this patch, but I read through it (and have I mentioned
how unbelievably illegible -u format patches are?).  It seems to be
close to commitable, but I found a few issues.  In no particular order:

storage.sgml needs to be updated to describe this file-naming scheme.

BackendRelFileNode isn't a particularly nice struct name; it seems like
it puts the most important aspect of the struct's purpose somewhere in
the middle of the name.  Maybe RelFileNodeBackend would be better, or
RelFileNodeFull, or something like that.

In GetNewRelFileNode, you've changed some code that is commented
/* This logic should match RelationInitPhysicalAddr */, but there
is not any corresponding change in RelationInitPhysicalAddr.  I find
this bothersome.  I think the problem is that you've made it look
like the assignment of rnode.backend is part of the logic that ought
to match RelationInitPhysicalAddr.  Perhaps set that off, at least
by a blank line, if not its own comment.

The relpath() and relpathperm() macros are a tad underdocumented for my
taste; at least put comments noting that one takes a RelFileNode and the
other a BackendRelFileNode.  And I wonder if you couldn't find better
names for relpathperm and relpathbackend.

assign_maxconnections and assign_autovacuum_max_workers need to be fixed
for MAX_BACKENDS; in fact I think all the occurrences of INT_MAX / 4 in
guc.c need to be replaced.  (And if I were you I'd grep to see if
anyplace outside guc.c knows that value...)  Also, as a matter of style,
the comment currently attached to max_connections needs to be attached
to the definition of the constant, not just modified where it is.

As an old bit-shaver this sorta bothers me:

+++ b/src/include/utils/rel.h
@@ -127,7 +127,7 @@ typedef struct RelationData
struct SMgrRelationData *rd_smgr;   /* cached file handle, or NULL 
*/
int rd_refcnt;  /* reference count */
boolrd_istemp;  /* rel is a temporary relation 
*/
-   boolrd_islocaltemp; /* rel is a temp rel of this session */
+   BackendId   rd_backend; /* owning backend id, if 
temporary relation */
boolrd_isnailed;/* rel is nailed in cache */
boolrd_isvalid; /* relcache entry is valid */
charrd_indexvalid;  /* state of rd_indexlist: 0 = not 
valid, 1 =

The struct is going to be less efficiently packed with that field order.
Maybe swap rd_istemp and rd_backend?
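
To illustrate the padding issue (a toy example only, not the actual
RelationData layout):

    #include <stdio.h>

    /* A 4-byte field sandwiched between 1-byte fields forces alignment padding. */
    struct bad  { char a; int  b; char c; char d; };   /* typically 12 bytes */
    struct good { int  b; char a; char c; char d; };   /* typically 8 bytes  */

    int
    main(void)
    {
        printf("%zu %zu\n", sizeof(struct bad), sizeof(struct good));
        return 0;
    }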

+   Assert(relation->rd_backend != InvalidOid);
ought to be InvalidBackendId, no?  Both new calls of GetTempNamespaceBackendId
have this wrong.  You might also want to change the comment of
GetTempNamespaceBackendId to note that its failure result is not just -1
but InvalidBackendId.

Lastly, it bothers me that you've put in code to delete files belonging
to temp rels during crash restart, without any code to clean up their
catalog entries.  This will therefore lead to dangling pg_class
references, with uncertain but probably not very nice consequences.
I think that probably shouldn't go in until there's code to drop the
catalog entries too.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread David Fetter
On Thu, Aug 12, 2010 at 12:27:45PM -0400, Tom Lane wrote:
 Robert Haas robertmh...@gmail.com writes:
  Here's an updated patch, with the invalidation changes merged in and
  hopefully-suitable adjustments elsewhere.
 
 I haven't tested this patch, but I read through it (and have I mentioned
 how unbelievably illegible -u format patches are?).

I have every confidence that you, of all people, can arrange to use
'filterdiff --format=context' on attached patches automatically,
saving you some time and us some boredom :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Joshua D. Drake
On Thu, 2010-08-12 at 09:44 -0700, David Fetter wrote:
 On Thu, Aug 12, 2010 at 12:27:45PM -0400, Tom Lane wrote:
  Robert Haas robertmh...@gmail.com writes:
   Here's an updated patch, with the invalidation changes merged in and
   hopefully-suitable adjustments elsewhere.
  
  I haven't tested this patch, but I read through it (and have I mentioned
  how unbelievably illegible -u format patches are?).
 
 I have every confidence that you, of all people, can arrange to use
 'filterdiff --format=context' on attached patches automatically,
 saving you some time and us some boredom :)

I was under the impression that the project guideline was that we only
accepted context diffs?

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread David Fetter
On Thu, Aug 12, 2010 at 09:53:54AM -0700, Joshua D. Drake wrote:
 On Thu, 2010-08-12 at 09:44 -0700, David Fetter wrote:
  On Thu, Aug 12, 2010 at 12:27:45PM -0400, Tom Lane wrote:
   Robert Haas robertmh...@gmail.com writes:
Here's an updated patch, with the invalidation changes merged
in and hopefully-suitable adjustments elsewhere.
   
   I haven't tested this patch, but I read through it (and have I
   mentioned how unbelievably illegible -u format patches are?).
  
  I have every confidence that you, of all people, can arrange to
  use 'filterdiff --format=context' on attached patches
  automatically, saving you some time and us some boredom :)
 
 I was under the impression that the project guideline was that we
 only accepted context diffs?

Since they're trivially producible from unified diffs, this is a
pretty silly reason to bounce--or even comment on--patches.  It's less
a guideline than a personal preference, namely Tom's.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Alvaro Herrera
Excerpts from David Fetter's message of Thu Aug 12 12:59:33 -0400 2010:
 On Thu, Aug 12, 2010 at 09:53:54AM -0700, Joshua D. Drake wrote:

  I was under the impression that the project guideline was that we
  only accepted context diffs?
 
 Since they're trivially producible from unified diffs, this is a
 pretty silly reason to bounce--or even comment on--patches.  It's less
 a guideline than a personal preference, namely Tom's.

Not that I review that many patches, but I do dislike unified diffs too.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 12:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Here's an updated patch, with the invalidation changes merged in and
 hopefully-suitable adjustments elsewhere.

 I haven't tested this patch, but I read through it (and have I mentioned
 how unbelievably illegible -u format patches are?).

Sorry, I'll run it through filterdiff for you next time.  As you know,
I have the opposite preference, but since I was producing this
primarily for you to read

I can clean up the rest of this stuff, but not this:

 Lastly, it bothers me that you've put in code to delete files belonging
 to temp rels during crash restart, without any code to clean up their
 catalog entries.  This will therefore lead to dangling pg_class
 references, with uncertain but probably not very nice consequences.
 I think that probably shouldn't go in until there's code to drop the
 catalog entries too.

I thought about this pretty carefully, and I don't believe that there
are any unpleasant consequences.  The code that assigns relfilenode
numbers is pretty careful to check that the newly assigned value is
unused BOTH in pg_class and in the directory where the file will be
created, so there should be no danger of a number getting used over
again while the catalog entries remain.  Also, the drop-object code
doesn't mind that the physical storage doesn't exist; it's perfectly
happy with that situation.  It is true that you will get an ERROR if
you try to SELECT from a table whose physical storage has been
removed, but that seems OK.

We have two existing mechanisms for removing the catalog entries: when
a backend is first asked to access a temporary file, it does a DROP
SCHEMA ... CASCADE on any pre-existing temp schema.  And if a table is in
wraparound trouble and the owning backend is no longer running,
autovacuum will drop it.  Improving on this seems difficult: if you
wanted to *guarantee* that the catalog entries were removed before we
started letting in connections, you'd need to fork a backend per
database and have each one iterate through all the temp schemas and
drop them.  Considering that the existing code seems to have been
pretty careful about how this stuff gets handled, I don't think it's
worth making the whole startup sequence slower for it.  What might be
worth considering is changing the autovacuum policy to eliminate the
wraparound check, and just have it drop temp table catalog entries for
any backend not currently running, period.

Thoughts?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Libpq: PQftype, PQfsize

2010-08-12 Thread Bozena Potempa
 From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
 Thank you. In this case (substring) there is not much to 
predict, just 
 a simple calculation, but I understand that it is a part of 
larger and 
 more complicated functionality. I tried to find a workaround 
with a type cast:
 select substr(fc,1,2)::varchar(2) from test Now the type returned is 
 varchar, but the size is still -1. I think that it is not a correct 
 return: the size is specified explicitly in the query and could be 
 used by PQfsize.

Oh ... actually the problem there is that you have the wrong 
idea about what PQfsize means.  What that returns is 
pg_type.typlen for the data type, which is always going to be 
-1 for a varlena type like varchar.

The thing that you need to look at if you want to see 
information like the max length of a varchar is typmod 
(PQfmod).  The typmod generally has some funny 
datatype-specific encoding; for varchar and char it works like this:
   -1: max length unknown or unspecified
   n > 0: max length is n-4 characters

Thank you very much, Tom. PQfmod returns the correct value when using a type
cast, so it solves my current problem. 
Perhaps the exact column size for queries with character functions will be
implemented somewhere down the road. It is a nice feature, which is implemented
by Oracle and MS SQL Server.  I do not know about MySQL.

Regards,
Bozena Potempa


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to show individual statement latencies in pgbench output

2010-08-12 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Florian Pflug wrote:
 Attached is an updated version (v4).

 I've attached a v5.  No real code changes from Florian's version, just 
 some wording/style fixes and rework on the documentation.

I'm looking through this patch now.  It looks mostly good, but I am
wondering just exactly what is the rationale for adding comment
statements to the data structures, rather than ignoring them as before.
It seems like a complete waste of logic, memory space, and cycles;
moreover it renders the documentation's statement that comments
are ignored incorrect.  I did not find anything in the patch history
explaining the point of that change.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: xml_is_well_formed

2010-08-12 Thread Pavel Stehule
Hello

I checked last version:

* there is no problem with the regression and contrib regression tests
* the design is simple and clean now - well documented

notes:
* don't fetch the patch via copy/paste from the mailing list archive - the
xml2 tests get broken that way!
* I didn't find a sentence saying that the default for xml_is_well_formed is
content checking - something like "without a change of xmloption, the
behaviour is the same as xml_is_well_formed_content"

Regards

Pavel Stehule



2010/8/11 Mike Fowler m...@mlfowler.com:
 On 11/08/10 21:27, Tom Lane wrote:

 Robert Haasrobertmh...@gmail.com  writes:

 On Mon, Aug 9, 2010 at 10:41 AM, Robert Haasrobertmh...@gmail.com
  wrote:

 There's also the fact that it would probably end up parsing the data
 twice.  Given xmloption, I'm inclined to think Tom has it right:
 provided xml_is_well_formed() that follows xmloption, plus a specific
 version for each of content and document.

 Another reasonable option here would be to forget about having
 xml_is_well_formed() per se and ONLY offer
 xml_is_well_formed_content() and xml_is_well_formed_document().

 We already have xml_is_well_formed(); just dropping it doesn't seem like
 a helpful choice.

 As a project management note, this CommitFest is over in 4 days, so
 unless we have a new version of this patch real soon now we need to
 defer it to the September 15th CommitFest

 Yes.  Mike, are you expecting to submit a new version before the end of
 the week?


 Yes and here it is, apologies for the delay. I have re-implemented
 xml_is_well_formed such that it is sensitive to the XMLOPTION. The
 additional _document and _content methods are now present. Tests and
 documentation adjusted to suit.

 Regards,

 --
 Mike Fowler
 Registered Linux user: 379787


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Aug 12, 2010 at 12:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Lastly, it bothers me that you've put in code to delete files belonging
 to temp rels during crash restart, without any code to clean up their
 catalog entries.  This will therefore lead to dangling pg_class
 references, with uncertain but probably not very nice consequences.

 I thought about this pretty carefully, and I don't believe that there
 are any unpleasant consequences.  The code that assigns relfilenode
 numbers is pretty careful to check that the newly assigned value is
 unused BOTH in pg_class and in the directory where the file will be
 created, so there should be no danger of a number getting used over
 again while the catalog entries remain.  Also, the drop-object code
 doesn't mind that the physical storage doesn't exist; it's perfectly
 happy with that situation.

Well, okay, but I'd suggest adding comments to the drop-table code
pointing out that it is now NECESSARY for it to not complain if the file
isn't there.  This was never a design goal before, AFAIR --- the fact
that it works like that is kind of accidental.  I am also pretty sure
that there used to be at least warning messages for that case, which we
would now not want.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 2:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Thu, Aug 12, 2010 at 12:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Lastly, it bothers me that you've put in code to delete files belonging
 to temp rels during crash restart, without any code to clean up their
 catalog entries.  This will therefore lead to dangling pg_class
 references, with uncertain but probably not very nice consequences.

 I thought about this pretty carefully, and I don't believe that there
 are any unpleasant consequences.  The code that assigns relfilenode
 numbers is pretty careful to check that the newly assigned value is
 unused BOTH in pg_class and in the directory where the file will be
 created, so there should be no danger of a number getting used over
 again while the catalog entries remain.  Also, the drop-object code
 doesn't mind that the physical storage doesn't exist; it's perfectly
 happy with that situation.

 Well, okay, but I'd suggest adding comments to the drop-table code
 pointing out that it is now NECESSARY for it to not complain if the file
 isn't there.  This was never a design goal before, AFAIR --- the fact
 that it works like that is kind of accidental.  I am also pretty sure
 that there used to be at least warning messages for that case, which we
 would now not want.

That seems like a good idea.  I'll post an updated patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] libpq and TOO MANY CONNECTIONS

2010-08-12 Thread Tom Lane
Dmitriy Igrishin dmit...@gmail.com writes:
 It seems that all of the error codes should be obtained by calling
 PQresultErrorField().
 But this function works only with results. So in what situations may a
 TOO MANY CONNECTIONS error be thrown after a successful connection? :)

It isn't.

The lack of a way to report a SQLSTATE code for a connection failure is
an API shortcoming in libpq; it's not the backend's problem.
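
So for now a client can only look at the connection status and the
human-readable message. A minimal sketch of what that looks like (the
connection string is only an example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");

        /* No PGresult exists at this point, so there is no
         * PQresultErrorField() and no SQLSTATE; a "too many connections"
         * failure is visible only as a bad status plus the error text. */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return EXIT_FAILURE;
        }
        PQfinish(conn);
        return EXIT_SUCCESS;
    }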

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest 2010-07 week four progress report

2010-08-12 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 With about 56 hours left until the close of the CommitFest, we're
 down to two Ready for Committer and three other potentially
 committable patches.

I'm working on the pgbench latency patch now, and expect to have it
committed today.  I'll look at xml_is_well_formed next, unless somebody
beats me to it.  pg_stat_get_backend_server_addr is Peter's to commit,
and I suppose Robert will commit the BackendId-in-relpath patch after
another round of tweaking.  gincostestimate may as well be moved to
Returned With Feedback, since Teodor doesn't seem to be responding
(on vacation, perhaps).

 Do we have a plan (with a time line) for
 producing a 9.1alpha1 release after the CF closes?

I don't think anyone's thought about it much.  I'm tempted to propose
that we delay it until after the git conversion, so that alpha1 is the
first tarball we try to produce from the git repository.  I would just
as soon that that first time be with something noncritical ;-)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to show individual statement latencies in pgbench output

2010-08-12 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Florian Pflug wrote:
 Attached is an updated version (v4).

 I've attached a v5.

BTW, I discovered a rather nasty shortcoming of this patch on platforms
without ENABLE_THREAD_SAFETY.  It doesn't work, and what's worse, it
*looks* like it's working, because it gives you plausible-looking
numbers.  But actually the numbers are only averages across the first
worker thread.  The other threads are in sub-processes where they can't
affect the contents of the parent's arrays.

Since platforms without ENABLE_THREAD_SAFETY are only a small minority
these days, this is probably not sufficient reason to reject the patch.
What I plan to do instead is reject the combination of -r with -j larger
than 1 on such platforms:

if (is_latencies)
{
/*
 * is_latencies only works with multiple threads in thread-based
 * implementations, not fork-based ones, because it supposes that the
 * parent can see changes made to the command data structures by child
 * threads.  It seems useful enough to accept despite this limitation,
 * but perhaps we should FIXME someday.
 */
#ifndef ENABLE_THREAD_SAFETY
    if (nthreads > 1)
    {
        fprintf(stderr, "-r does not work with -j larger than 1 on this platform.\n");
        exit(1);
    }
#endif

It could be fixed with enough effort, by having the child threads pass
back their numbers through the child-to-parent pipes.  I'm not
interested in doing that myself though.

If anyone thinks this problem makes it uncommittable, speak up now.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] postgres 9.0 crash when bringing up hot standby

2010-08-12 Thread Robert Haas
On Wed, Aug 11, 2010 at 10:25 AM, Robert Haas robertmh...@gmail.com wrote:
 [request form more information]

Per off-list discussion with Alanoly, we've determined the following:

dblink was compiled with the same flags as libpqwalreceiver
dblink works
libpqwalreceiver crashes

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to show individual statement latencies in pgbench output

2010-08-12 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 I've attached a v5.  No real code changes from Florian's version, just 
 some wording/style fixes and rework on the documentation.

I've committed this with some editorialization.  The main non-cosmetic
change was that I pulled the latency statistics counters out of the
per-Command data structures and put them into per-thread arrays instead.
I did this for two reasons:

1. Having different threads munging adjacent array entries without any
locking makes me itch.  On some platforms that could possibly fail
entirely, and in any case it's likely to be a performance hit on
machines where processors lock whole cache lines (which is most of them
these days, I think).

2. It should make it a lot easier to pass the per-thread results back up
to the parent in a fork-based implementation, should anyone desire to
fix the limitation I mentioned before.
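
As a rough illustration of the idea (hypothetical names, not the committed
pgbench code):

    #include <stdint.h>

    #define MAX_COMMANDS 128                /* hypothetical fixed upper bound */

    /* Each worker writes only to its own ThreadStats, so no locking is
     * needed and no two threads ever update adjacent shared entries. */
    typedef struct
    {
        int64_t total_usec[MAX_COMMANDS];   /* summed latency per command */
        int64_t count[MAX_COMMANDS];        /* executions per command */
    } ThreadStats;

    /* The parent folds the per-thread arrays together once the workers finish. */
    static void
    merge_stats(ThreadStats *sum, const ThreadStats *per_thread, int nthreads)
    {
        for (int t = 0; t < nthreads; t++)
            for (int c = 0; c < MAX_COMMANDS; c++)
            {
                sum->total_usec[c] += per_thread[t].total_usec[c];
                sum->count[c] += per_thread[t].count[c];
            }
    }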

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Alvaro Herrera
Excerpts from Robert Haas's message of Thu Aug 12 13:29:57 -0400 2010:

 We have two existing mechanisms for removing the catalog entries: when
 a backend is first asked to access a temporary file, it does a DROP
 SCHEMA ... CASCADE on any pre-existing temp schema.  And if a table is in
 wraparound trouble and the owning backend is no longer running,
 autovacuum will drop it.  Improving on this seems difficult: if you
 wanted to *guarantee* that the catalog entries were removed before we
 started letting in connections, you'd need to fork a backend per
 database and have each one iterate through all the temp schemas and
 drop them.  Considering that the existing code seems to have been
 pretty careful about how this stuff gets handled, I don't think it's
 worth making the whole startup sequence slower for it.  What might be
 worth considering is changing the autovacuum policy to eliminate the
 wraparound check, and just have it drop temp table catalog entries for
 any backend not currently running, period.

What about having autovacuum silently drop the catalog entry if it's a
temp entry for which the underlying file does not exist?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] postgres 9.0 crash when bringing up hot standby

2010-08-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Per off-list discussion with Alanoly, we've determined the following:

 dblink was compiled with the same flags as libpqwalreceiver
 dblink works
 libpqwalreceiver crashes

I wonder if the problem is not so much libpqwalreceiver as the
walreceiver process.  Maybe an ordinary backend process does some
prerequisite initialization that walreceiver is missing.  Hard to
guess what, though ... I can't think of anything dlopen() depends on
that should be under our control.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] postgres 9.0 crash when bringing up hot standby

2010-08-12 Thread Tom Lane
I wrote:
 I wonder if the problem is not so much libpqwalreceiver as the
 walreceiver process.  Maybe an ordinary backend process does some
 prerequisite initialization that walreceiver is missing.  Hard to
 guess what, though ... I can't think of anything dlopen() depends on
 that should be under our control.

Actually, that idea is easily tested: try doing
LOAD 'libpqwalreceiver';
in a regular backend process.

If that still crashes, it might be useful to truss or strace the backend
while it runs the command, and compare that to the trace of
LOAD 'dblink';

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 5:29 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of Thu Aug 12 13:29:57 -0400 2010:

 We have two existing mechanisms for removing the catalog entries: when
 a backend is first asked to access a temporary file, it does a DROP
 SCHEMA ... CASCADE on any pre-existing temp schema.  And if a table is in
 wraparound trouble and the owning backend is no longer running,
 autovacuum will drop it.  Improving on this seems difficult: if you
 wanted to *guarantee* that the catalog entries were removed before we
 started letting in connections, you'd need to fork a backend per
 database and have each one iterate through all the temp schemas and
 drop them.  Considering that the existing code seems to have been
 pretty careful about how this stuff gets handled, I don't think it's
 worth making the whole startup sequence slower for it.  What might be
 worth considering is changing the autovacuum policy to eliminate the
 wraparound check, and just have it drop temp table catalog entries for
 any backend not currently running, period.

 What about having autovacuum silently drop the catalog entry if it's a
 temp entry for which the underlying file does not exist?

I think that would be subject to race conditions.  The current
mechanism is actually pretty good, and I think we can build on it if
we want to do more, rather than inventing something new.  We just need
to be specific about what problem we're trying to solve.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Aug 12, 2010 at 5:29 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 What about having autovacuum silenty drop the catalog entry if it's a
 temp entry for which the underlying file does not exist?

 I think that would be subject to race conditions.

Well, autovacuum's willingness to drop sufficiently old temp tables
would already risk any such race conditions.  However ...

 The current
 mechanism is actually pretty good, and I think we can build on it if
 we want to do more, rather than inventing something new.  We just need
 to be specific about what problem we're trying to solve.

... I agree with this point.  This idea wouldn't fix the concern I had
about the existence of pg_class entries with no underlying file: if that
causes any issues we'd have to fix them anyway.  So I'm not sure what
the gain is.

regards, tom lane



Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Tom Lane
One other thought about all this: in the past, one objection that's been
raised to deleting files during crash restart is the possible loss of
forensic evidence.  It might be a good idea to provide some fairly
simple way for developers to disable that deletion subroutine.  I'm not
sure that it rises to the level of needing a GUC, though.
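
For illustration only, something as simple as a compile-time symbol would
probably do; this is just a sketch, and the symbol name is invented here:

    #include <stdbool.h>

    /*
     * Hedged sketch, not actual backend code: a developer-only compile-time
     * switch that skips the cleanup pass, so leftover files survive a crash
     * restart for forensic inspection.
     */
    static bool
    should_remove_leftover_files_at_restart(void)
    {
    #ifdef DEBUG_PRESERVE_CRASH_FILES
        return false;           /* developer build: keep the evidence */
    #else
        return true;            /* normal build: clean up as usual */
    #endif
    }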

regards, tom lane



Re: [HACKERS] Patch to show individual statement latencies in pgbench output

2010-08-12 Thread Florian Pflug
On Aug 12, 2010, at 19:48, Tom Lane wrote:
 Greg Smith g...@2ndquadrant.com writes:
 Florian Pflug wrote:
 Attached is an updated version (v4).
 
 I've attached a v5.  No real code changes from Florian's version, just 
 some wording/style fixes and rework on the documentation.
 
 I'm looking through this patch now.  It looks mostly good, but I am
 wondering just exactly what is the rationale for adding comment
 statements to the data structures, rather than ignoring them as before.
 It seems like a complete waste of logic, memory space, and cycles;
 moreover it renders the documentation's statement that comments
 are ignored incorrect.  I did not find anything in the patch history
 explaining the point of that change.

To be able to include the comments (with an average latency of zero) in the
latency report. This makes the latency report as self-explanatory as the
original script was (think of the latency report copied and pasted into an
e-mail or wiki). It also has the benefit of making the line numbers of the
latency report agree with those of the original script, which seemed like a
natural thing to do, and might make some sorts of post-processing easier. It
does make doCustom() a bit more complex, though.

Anyway, I guess the chance of adding this back is slim now that the patch is 
committed. Oh well.

Thanks for committing this, and
best regards,
Florian Pflug




Re: [HACKERS] typos in HS source code comment

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 5:02 AM, Fujii Masao masao.fu...@gmail.com wrote:
 When I was enjoying reading the HS source code,
 I found some typos. Attached patch fixes them.

I've committed all of this except for the following, which I'm not
certain is correct:

--- a/src/backend/access/transam/clog.c
+++ b/src/backend/access/transam/clog.c
@@ -355,10 +355,10 @@ TransactionIdSetStatusBit(TransactionId xid, XidStatus sta
/*
 * Update the group LSN if the transaction completion LSN is higher.
 *
-* Note: lsn will be invalid when supplied during InRecovery processing,
-* so we don't need to do anything special to avoid LSN updates during
-* recovery. After recovery completes the next clog change will set the
-* LSN correctly.
+* Note: lsn will be invalid when supplied while we're performing
+* recovery but hot standby is disabled, so we don't need to do
+* anything special to avoid LSN updates in that case. After recovery
+* completes the next clog change will set the LSN correctly.
 */
if (!XLogRecPtrIsInvalid(lsn))
{

TransactionIdSetStatusBit is called from TransactionIdSetPageStatus,
which seems to think that the validity of lsn is based on whether
we're doing an async commit.  Your change may be correct, but I'm not
certain of it...
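
For reference, here is how I read the commit-time side, heavily simplified;
the guard condition is a placeholder and the names are from memory, so this
may not match xact.c exactly:

    /*
     * Hedged sketch, not actual xact.c code: only the asynchronous-commit
     * path hands a real commit-record LSN down to clog; the synchronous
     * path effectively passes InvalidXLogRecPtr.
     */
    if (synchronous_commit_in_effect)       /* placeholder condition */
    {
        XLogFlush(XactLastRecEnd);
        TransactionIdCommitTree(xid, nchildren, children);
    }
    else
    {
        TransactionIdAsyncCommitTree(xid, nchildren, children,
                                     XactLastRecEnd);
    }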

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 6:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Thu, Aug 12, 2010 at 5:29 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 What about having autovacuum silently drop the catalog entry if it's a
 temp entry for which the underlying file does not exist?

 I think that would be subject to race conditions.

 Well, autovacuum's willingness to drop sufficiently old temp tables
 would already risk any such race conditions.  However ...

I don't think we do.  It only drops them if the backend isn't running.
 And even if the backend starts running after we check and before we
drop it, that's OK.  We're only dropping the OLD table, which by
definition isn't relevant to the new session.  Perhaps we should be
taking a lock on the relation first, but I think that can only really
bite us if the relation were dropped and a new relation were created
with the same OID - in that case, we might drop an unrelated table,
though it's vanishingly unlikely in practice.
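
Roughly, the check I mean looks like this; it is a sketch from memory, not
the actual do_autovacuum() code, so the details may differ:

    /*
     * Hedged sketch: a temp table is only considered orphaned when the
     * backend that owns its temp namespace is no longer running.
     */
    if (classForm->relistemp)
    {
        int     backendID = GetTempNamespaceBackendId(classForm->relnamespace);

        if (backendID != InvalidBackendId &&
            backendID != MyBackendId &&
            BackendIdGetProc(backendID) == NULL)
            orphaned = true;    /* owner is gone; safe to consider dropping */
    }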

 The current
 mechanism is actually pretty good, and I think we can build on it if
 we want to do more, rather than inventing something new.  We just need
 to be specific about what problem we're trying to solve.

 ... I agree with this point.  This idea wouldn't fix the concern I had
 about the existence of pg_class entries with no underlying file: if that
 causes any issues we'd have to fix them anyway.  So I'm not sure what
 the gain is.

Right.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Patch to show individual statement latencies in pgbench output

2010-08-12 Thread Greg Smith

Tom Lane wrote:

It could be fixed with enough effort, by having the child threads pass
back their numbers through the child-to-parent pipes.  I'm not
interested in doing that myself though.
  


That does seem like an improvement, though one applicable to a very limited 
user base.  Probably the next useful improvement to this feature is to get 
per-statement data into the latency log files when requested.  If this issue 
gets in the way there somehow, maybe it's worth squashing then.  I don't 
think it will, though.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [HACKERS] RecordTransactionCommit() and SharedInvalidationMessages

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 12:11 AM, Fujii Masao masao.fu...@gmail.com wrote:
 It appears to me that RecordTransactionCommit() only needs to WAL-log
 shared invalidation messages when wal_level is hot_standby, but I
 don't see a guard to prevent it from doing it in all cases.
[...]
 The fix looks pretty simple (see attached), although I don't have any
 clear idea how to test it.
 Should use XLogStandbyInfoActive() macro, for the sake of consistency.
 And, RelcacheInitFileInval should be initialized with false just in case.

How about this?
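
In other words, something along these lines -- a sketch of the idea rather
than the patch text itself:

    /*
     * Hedged sketch: only gather and WAL-log invalidation messages when
     * standby information is being generated at all.
     */
    nmsgs = 0;
    RelcacheInitFileInval = false;      /* initialize, per review comment */
    if (XLogStandbyInfoActive())
        nmsgs = xactGetCommittedInvalidationMessages(&invalMessages,
                                                     &RelcacheInitFileInval);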

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


record_transaction_commmit-v2.patch
Description: Binary data



Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 7:04 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 One other thought about all this: in the past, one objection that's been
 raised to deleting files during crash restart is the possible loss of
 forensic evidence.  It might be a good idea to provide some fairly
 simple way for developers to disable that deletion subroutine.  I'm not
 sure that it rises to the level of needing a GUC, though.

See http://archives.postgresql.org/pgsql-hackers/2010-06/msg00622.php :

In this version of the patch, I've improved the temporary file cleanup
mechanism.  In CVS HEAD, when a transaction commits or aborts, we
write an XLOG record with all relations that must be unlinked,
including temporary relations.  With this patch, we no longer include
temporary relations in the XLOG records (which may be a tiny
performance benefit for some people); instead, on every startup of the
database cluster, we just nuke all temporary relation files (which is
now easy to do, since they are named differently than files for
non-temporary relations) at the same time that we nuke other temp
files.  This produces slightly different behavior.  In CVS HEAD,
temporary files get removed whenever an xlog redo happens, so either
at cluster start or after a backend crash, but only to the extent that
they appear in WAL.  With this patch, we can be sure of removing ALL
stray files, which is maybe ever-so-slightly leaky in CVS HEAD.  But
on the other hand, because it hooks into the existing temporary file
cleanup code, it only happens at cluster startup, NOT after a backend
crash.  The existing coding leaves temporary sort files and similar
around after a backend crash for forensics purposes.  That might or
might not be worth rethinking for non-debug builds, but I don't think
there's any very good reason to think that temporary relation files
need to be handled differently than temporary work files.
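
To make the file-naming point concrete, here is a standalone sketch of the
kind of startup sweep I mean; the "t<backendID>_<relfilenode>" naming is
assumed for illustration and may not match the patch exactly:

    #include <dirent.h>
    #include <stdio.h>
    #include <unistd.h>

    /*
     * Hedged sketch, not the committed code: because temp relation files
     * carry a distinctive name, stray ones can be removed with a simple
     * directory scan at cluster start.
     */
    static void
    remove_stray_temp_rel_files(const char *dbdir)
    {
        DIR            *dir = opendir(dbdir);
        struct dirent  *de;
        char            path[1024];

        if (dir == NULL)
            return;

        while ((de = readdir(dir)) != NULL)
        {
            int     backend_id;
            long    relfilenode;

            /* only touch names of the assumed form t<backendID>_<relfilenode> */
            if (sscanf(de->d_name, "t%d_%ld", &backend_id, &relfilenode) == 2)
            {
                snprintf(path, sizeof(path), "%s/%s", dbdir, de->d_name);
                unlink(path);
            }
        }
        closedir(dir);
    }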

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] typos in HS source code comment

2010-08-12 Thread Fujii Masao
On Fri, Aug 13, 2010 at 8:28 AM, Robert Haas robertmh...@gmail.com wrote:
 I've committed all of this except for the following, which I'm not
 certain is correct:

Thanks for the commit.

 --- a/src/backend/access/transam/clog.c
 +++ b/src/backend/access/transam/clog.c
 @@ -355,10 +355,10 @@ TransactionIdSetStatusBit(TransactionId xid, XidStatus 
 sta
        /*
         * Update the group LSN if the transaction completion LSN is higher.
         *
 -        * Note: lsn will be invalid when supplied during InRecovery 
 processing,
 -        * so we don't need to do anything special to avoid LSN updates during
 -        * recovery. After recovery completes the next clog change will set 
 the
 -        * LSN correctly.
 +        * Note: lsn will be invalid when supplied while we're performing
 +        * recovery but hot standby is disabled, so we don't need to do
 +        * anything special to avoid LSN updates in that case. After recovery
 +        * completes the next clog change will set the LSN correctly.
         */
        if (!XLogRecPtrIsInvalid(lsn))
        {

 TransactionIdSetStatusBit is called from TransactionIdSetPageStatus,
 which seems to think that the validity of lsn is based on whether
 we're doing an async commit.  Your change may be correct, but I'm not
 certain of it...

Before 9.0, since xact_redo_commit always calls TransactionIdCommitTree,
TransactionIdSetStatusBit always receives InvalidXLogRecPtr. In 9.0,
xact_redo_commit calls TransactionIdCommitTree only when hot standby is
disabled. OTOH, when hot standby is enabled, xact_redo_commit calls
TransactionIdAsyncCommitTree, so TransactionIdSetStatusBit receives a
valid lsn.
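
In code form, roughly -- simplified from the 9.0 xact_redo_commit, with
variable names from memory, so this may not match the source exactly:

    /*
     * Hedged sketch of the branch in question: without hot standby the clog
     * update gets no LSN; with hot standby the commit record's LSN is
     * passed down to TransactionIdSetStatusBit().
     */
    if (standbyState == STANDBY_DISABLED)
        TransactionIdCommitTree(xid, xlrec->nsubxacts, sub_xids);
    else
        TransactionIdAsyncCommitTree(xid, xlrec->nsubxacts, sub_xids, lsn);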

Regards,

PS. I'll be unable to read hackers from Aug 13th to 20th because of
a vacation.

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] RecordTransactionCommit() and SharedInvalidationMessages

2010-08-12 Thread Fujii Masao
On Fri, Aug 13, 2010 at 10:24 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Aug 12, 2010 at 12:11 AM, Fujii Masao masao.fu...@gmail.com wrote:
 It appears to me that RecordTransactionCommit() only needs to WAL-log
 shared invalidation messages when wal_level is hot_standby, but I
 don't see a guard to prevent it from doing it in all cases.
 [...]
 The fix looks pretty simple (see attached), although I don't have any
 clear idea how to test it.
 Should use XLogStandbyInfoActive() macro, for the sake of consistency.
 And, RelcacheInitFileInval should be initialized with false just in case.

 How about this?

Looks good to me.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] Patch to show individual statement latencies in pgbench output

2010-08-12 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 On Aug 12, 2010, at 19:48, Tom Lane wrote:
 I'm looking through this patch now.  It looks mostly good, but I am
 wondering just exactly what is the rationale for adding comment
 statements to the data structures, rather than ignoring them as before.

 To be able to include the comments (with an average latency of zero)
 in the latency report. This makes the latency report as
 self-explanatory as the original script was (think of the latency
 report copied and pasted into an e-mail or wiki). It also has the
 benefit of making the line numbers of the latency report agree with
 those of the original script, which seemed like a natural thing to do,
 and might make some sorts of post-processing easier. It does make
 doCustom() a bit more complex, though.

I had wondered if the original form of the patch printed line numbers
rather than the actual line contents.  If that were true then it'd make
sense to include comment lines.  In the current form of the patch,
though, I think the output is just as readable without comment lines ---
and I'm not thrilled with having this patch materially affect the
behavior for cases where -r wasn't even specified.

regards, tom lane



Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Aug 12, 2010 at 7:04 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 One other thought about all this: in the past, one objection that's been
 raised to deleting files during crash restart is the possible loss of
 forensic evidence.

 ...  With this patch, we can be sure of removing ALL
 stray files, which is maybe ever-so-slightly leaky in CVS HEAD.  But
 on the other hand, because it hooks into the existing temporary file
 cleanup code, it only happens at cluster startup, NOT after a backend
 crash.

Doh.  Thanks.

regards, tom lane



Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-08-12 Thread Robert Haas
On Thu, Aug 12, 2010 at 2:12 PM, Robert Haas robertmh...@gmail.com wrote:
 That seems like a good idea.  I'll post an updated patch.

Here is an updated patch.  It's in context-diff format this time, but
that made it go over 100K, so I gzip'd it because I can't remember
what the attachment size limit is these days.  I'm not sure whether
that works out to a win or not.

I think this addresses all previous review comments with the exception
that I've been unable to devise a more clever name for relpathperm()
and relpathbackend().
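
For reviewers who haven't opened the patch, the interface is roughly the
following; this is paraphrased from memory, not the patch text, so the
details may differ:

    /* Hedged sketch of the naming being discussed. */
    extern char *relpathbackend(RelFileNode rnode, BackendId backend,
                                ForkNumber forknum);

    /* Shorthand for permanent (non-temp) relations. */
    #define relpathperm(rnode, forknum) \
        relpathbackend((rnode), InvalidBackendId, (forknum))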

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


temprelnames-v5.patch.gz
Description: GNU Zip compressed data
