Re: [HACKERS] Get more from indices.

2013-11-22 Thread Kyotaro HORIGUCHI
Hello. I found a bug(?) in this patch while considering another
patch.

In truncate_useless_pathkeys, if query_pathkeys is longer than
the pathkeys made from the index columns, the old patch picked up
the latter as the IndexPath's pathkeys. But the former is more
suitable in this context (a small illustrative example follows
the list below).

The attached pathkey_and_uniqueindx_v4_20131122.patch is changed
as follows.

 - Rebased to current master.

 - truncate_useless_pathkeys returns root->query_pathkeys when
   the index is fully-ordered and query_pathkeys contains the
   pathkeys made from index columns.
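
As a small illustrative example (the table, index and column names are
made up for this sketch, they are not taken from the patch): with a
unique index on NOT NULL columns (a, b), two rows can never compare
equal on (a, b), so an index scan ordered by (a, b) is in fact ordered
by (a, b, c) as well, and the planner may claim the full query_pathkeys
for the index path instead of adding a Sort.

| create table t (a int not null, b int not null, c int, d text);
| create unique index i_t_ab on t (a, b);
| -- query_pathkeys here is (a, b, c); the index pathkeys are only (a, b),
| -- but the unique key already fixes the order of the trailing column.
| select * from t order by a, b, c;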

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 606734a..43be0a5 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -953,7 +953,8 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel,
 		index_pathkeys = build_index_pathkeys(root, index,
 			  ForwardScanDirection);
 		useful_pathkeys = truncate_useless_pathkeys(root, rel,
-	index_pathkeys);
+	index_pathkeys,
+	index->full_ordered);
 		orderbyclauses = NIL;
 		orderbyclausecols = NIL;
 	}
@@ -1015,7 +1016,8 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel,
 		index_pathkeys = build_index_pathkeys(root, index,
 			  BackwardScanDirection);
 		useful_pathkeys = truncate_useless_pathkeys(root, rel,
-	index_pathkeys);
+	index_pathkeys,
+	index->full_ordered);
 		if (useful_pathkeys != NIL)
 		{
 			ipath = create_index_path(root, index,
diff --git a/src/backend/optimizer/path/pathkeys.c b/src/backend/optimizer/path/pathkeys.c
index 9c8ede6..fd104b2 100644
--- a/src/backend/optimizer/path/pathkeys.c
+++ b/src/backend/optimizer/path/pathkeys.c
@@ -369,6 +369,29 @@ get_cheapest_path_for_pathkeys(List *paths, List *pathkeys,
 }
 
 /*
+ * path_is_ordered
+ * Return true if the path is apparently ordered on the pathkeys.
+ */
+bool
+path_is_ordered(Path *path, List *pathkeys)
+{
+	if (pathkeys_contained_in(pathkeys, path->pathkeys))
+		return true;
+
+	/*
+	 * If IndexPath is fully ordered, it is sufficiently ordered if index
+	 * pathkeys is a subset of target pathkeys
+	 */
+	if (pathkeys && path->pathkeys &&
+		IsA(path, IndexPath) &&
+		((IndexPath *) path)->indexinfo->full_ordered &&
+		(pathkeys_contained_in(path->pathkeys, pathkeys)))
+		return true;
+
+	return false;
+}
+
+/*
  * get_cheapest_fractional_path_for_pathkeys
  *	  Find the cheapest path (for retrieving a specified fraction of all
  *	  the tuples) that satisfies the given pathkeys and parameterization.
@@ -381,6 +404,8 @@ get_cheapest_path_for_pathkeys(List *paths, List *pathkeys,
  * 'pathkeys' represents a required ordering (in canonical form!)
  * 'required_outer' denotes allowable outer relations for parameterized paths
  * 'fraction' is the fraction of the total tuples expected to be retrieved
+ * path->pathkeys might be replaced with pathkeys so as to declare that the
+ * path is ordered on it.
  */
 Path *
 get_cheapest_fractional_path_for_pathkeys(List *paths,
@@ -403,9 +428,17 @@ get_cheapest_fractional_path_for_pathkeys(List *paths,
 			compare_fractional_path_costs(matched_path, path, fraction) <= 0)
 			continue;
 
-		if (pathkeys_contained_in(pathkeys, path->pathkeys) &&
+		if (path_is_ordered(path, pathkeys) &&
 			bms_is_subset(PATH_REQ_OUTER(path), required_outer))
+		{
+			/*
+			 * Set required pathkeys as the path's pathkeys so as to declare
+			 * that this path is ordered on the pathkeys.
+			 */
+			if (list_length(path->pathkeys) < list_length(pathkeys))
+				path->pathkeys = pathkeys;
 			matched_path = path;
+		}
 	}
 	return matched_path;
 }
@@ -798,7 +831,7 @@ build_join_pathkeys(PlannerInfo *root,
 	 * contain pathkeys that were useful for forming this joinrel but are
 	 * uninteresting to higher levels.
 	 */
-	return truncate_useless_pathkeys(root, joinrel, outer_pathkeys);
+	return truncate_useless_pathkeys(root, joinrel, outer_pathkeys, false);
 }
 
 /
@@ -1455,7 +1488,8 @@ right_merge_direction(PlannerInfo *root, PathKey *pathkey)
  * So the result is always either 0 or list_length(root->query_pathkeys).
  */
 static int
-pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys)
+pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys,
+			 bool pk_full_ordered)
 {
 	if (root->query_pathkeys == NIL)
 		return 0;				/* no special ordering requested */
@@ -1469,23 +1503,36 @@ pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys)
 		return list_length(root->query_pathkeys);
 	}
 
+	/*
+	 * Given that the pathkeys compose a unique key, they are useful for
+	 * ordering when they are a sublist of query_pathkeys. Expand pathkeys to
+	 * root->query_pathkeys in this case.
+	 */
+	if (pk_full_ordered &&
+		pathkeys_contained_in(pathkeys, root->query_pathkeys))
+		return list_length(root->query_pathkeys);
+
 	

Re: [HACKERS] Using indices for UNION.

2013-11-22 Thread Kyotaro HORIGUCHI
Hello, As I was reconsidering after your advice, I came up with
an idea of hacking on the query tree transformation phase in
subquery_planner, not on the plan generation phase as
before. (And the older patch is totally scrapped :-)


Currently flatten_simple_union_all() flattens a 'simple' UNION
ALL at the top level of the 'current' query. I noticed that a
little modification there could also flatten a simple UNION (w/o
ALL), which finally has almost the same effect regarding wider
usage of indexes for UNION. Additionally, applying the
'UNION ALL and inheritance table' patch on top of that even covers
UNIONs on inheritance tables (a sketch of the equivalent rewrite
follows below).
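
As a rough sketch of the semantic equivalence the flattening relies on
(this is not the actual planner transformation, only the idea behind it):
a simple UNION behaves like a UNION ALL with duplicate elimination on top,
so once the branches are flattened each branch can use its own index.

| -- A simple UNION ...
| select a, b from cu11 union select a, b from cu12;
|
| -- ... is semantically equivalent to UNION ALL plus duplicate removal,
| -- which is the shape that can become Append/Merge Append + Unique.
| select distinct a, b from (
|   select a, b from cu11
|   union all
|   select a, b from cu12
| ) s;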

This patch is far smaller and neater (though I say so myself :-) than
the previous ones. The following plans are what we get from unpatched
PostgreSQL.

original=# explain (analyze on, costs off)
| select a, b from cu11 union select a, b from cu12 order by a, b limit 10;
|   QUERY PLAN
| -
|  Limit (actual time=272.252..272.259 rows=10 loops=1)
|  ->  Unique (actual time=272.251..272.258 rows=10 loops=1)
|   ->  Sort (actual time=272.249..272.252 rows=10 loops=1)
| Sort Key: cu11.a, cu11.b
| Sort Method: external sort  Disk: 3520kB
| ->  Append (actual time=0.020..70.935 rows=20 loops=1)
|  ->  Seq Scan on cu11 (actual time=0.019..26.741 rows=10 loops=1)
|  ->  Seq Scan on cu12 (actual time=0.006..25.183 rows=10 loops=1)
|  Total runtime: 273.162 ms

And of course for * (= a, b, c, d) this,

original=# explain (analyze on, costs off)
|select * from cu11 union select * from cu12 order by a, b limit 10;
|QUERY PLAN
| 
|  Limit (actual time=234.880..234.891 rows=10 loops=1)
|  ->  Unique (actual time=234.879..234.889 rows=10 loops=1)
|   ->  Sort (actual time=234.878..234.881 rows=10 loops=1)
|Sort Key: cu11.a, cu11.b, cu11.c, cu11.d
|Sort Method: external sort  Disk: 5280kB
|->  Append (actual time=0.017..49.502 rows=20 loops=1)
|   ->  Seq Scan on cu11 (actual time=0.017..15.649 rows=10 loops=1)
|   ->  Seq Scan on cu12 (actual time=0.007..14.939 rows=10 loops=1)
|  Total runtime: 236.029 ms


 We'll get the following changed plan with this patch plus the
latest pathkey_and_uniqueindx_v4_20131122.patch. (The latter got
a small fix in how pathkeys are set for index paths.)

https://commitfest.postgresql.org/action/patch_view?id=1280

patched=# explain (analyze on, costs off)
|   select * from cu11 union select * from cu12 order by a, b limit 10;
|  QUERY PLAN
| --
|  Limit (actual time=0.059..0.081 rows=10 loops=1)
|  ->  Unique (actual time=0.058..0.079 rows=10 loops=1)
|   ->  Merge Append (actual time=0.056..0.065 rows=10 loops=1)
| Sort Key: cu11.a, cu11.b, cu11.c, cu11.d
| ->  Index Scan ... on cu11 (actual time=0.032..0.037 rows=10 loops=1)
| ->  Index Scan ... on cu12 (actual time=0.021..0.021 rows=1 loops=1)
|  Total runtime: 0.162 ms

Moreover, with the 'UNION ALL' patches, say,
union_inh_idx_typ1_v1_20131024.patch and
union_inh_idx_typ1_add_v1_20131024.patch, we'll get the following
plan for UNION on inheritance tables,

https://commitfest.postgresql.org/action/patch_view?id=1270


patched2=# explain (analyze on, costs off)
|  select * from pu1 union select * from pu2 order by a, b limit 10;
| QUERY PLAN
| ---
|  Limit (actual time=0.209..0.234 rows=10 loops=1)
|  ->  Unique (actual time=0.206..0.230 rows=10 loops=1)
|   ->  Merge Append (actual time=0.204..0.216 rows=10 loops=1)
|Sort Key: pu1.a, pu1.b, pu1.c, pu1.d
|->  Index Scan..on pu1 (actual time=0.004..0.004 rows=0 loops=1)
|->  Index Scan..on cu11 (actual time=0.050..0.058 rows=10 loops=1)
|->  Index Scan..on cu12 (actual time=0.052..0.052 rows=1 loops=1)
|->  Index Scan..on pu2 (actual time=0.002..0.002 rows=0 loops=1)
|->  Index Scan..on cu21 (actual time=0.046..0.046 rows=1 loops=1)
|->  Index Scan..on cu22 (actual time=0.046..0.046 rows=1 loops=1)
|  Total runtime: 0.627 ms

The attached union_uses_idx_v4_20131122.patch is changed as
follows.

 - Rebased to current master.

 - Scrapped the whole old stuff.

 - flatten_simple_union_all() is renamed to
   flatten_simple_union() and modified to also flatten plain UNION.
   A UNION is flattened only if sortClause exists and distinctClause is NIL.

==
Test tables can be created with the commands below,

| create table pu1 (a int not null, b int not null, c int, d text);
| create unique index i_pu1_ab on pu1 (a, b);
| create table cu11 (like pu1 including all) inherits (pu1);
| create table cu12 (like 

Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Pavel Stehule
Hello


2013/11/21 Peter Eisentraut pete...@gmx.net

 On 11/21/13, 2:09 AM, Pavel Stehule wrote:
  Hello
 
  I wrote new styles for  psql table borders.
 
  http://postgres.cz/wiki/Pretty_borders_in_psql
 
  This patch is simple and I think some styles can be interesting
  for the final presentation.
 
  Do you think this feature is generally interesting and should be in
 core?

 Maybe make the border setting a string containing the various characters
 by index.  Then everyone can create their own crazy borders.


I seriously thought about it, but I am not sure it is a good way.

- We would still have to deliver some basic set of styles anyway.
- People prefer a prefabricated solution with simple activation - so
customization would be used by only a few people.
- A built-in style requires about 110 bytes and it is safe and verified;
dynamic styling requires some parser, maybe some other checks.

So for this use case I prefer the very primitive (and simple) design as was
proposed.

Regards

Pavel


Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Pavel Stehule
Hello


2013/11/21 Merlin Moncure mmonc...@gmail.com

 On Thu, Nov 21, 2013 at 1:09 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  Hello
 
  I wrote new styles for  psql table borders.
 
  http://postgres.cz/wiki/Pretty_borders_in_psql
 
  This patch is simple and I think some styles can be interesting for
  the final presentation.
 
 great. hm, maybe we could integrate color? (see:

 http://merlinmoncure.blogspot.com/2012/09/psql-now-with-splash-of-color.html
 ).


it is a possible next enhancement - I would like to go forward in small steps,
please :)

minimally (and independently of the proposed patch) we can introduce something like
final regexp filtering - that can be used for this or other purposes.

Regards

Pavel




 merlin



Re: [HACKERS] gaussian distribution pgbench

2013-11-22 Thread Fabien COELHO


3. That said, this could be handy. But it would be even more handy if you 
could get Gaussian random numbers with \setrandom, so that you could use this 
with custom scripts. And once you implement that, do we actually need the -g 
flag anymore? If you want TPC-B transactions with gaussian distribution, you 
can write a custom script to do that. The documentation includes a full 
script that corresponds to the built-in TPC-B script.


So what I'd actually like to see is \setgaussian, for use in custom scripts.


Indeed, great idea! That looks pretty elegant! It would be something like:

  \setgauss var min max sigma

I'm not sure whether sigma should be relative to max-min, or absolute.
I would say relative is better...

A concern I raised is that what one should really want is a pseudo-randomized 
(discretized) gaussian, i.e. you want the probability of each 
value to follow a gaussian distribution, *but* no direct frequency correlation 
between neighbors. Otherwise, you may have unwanted/unrealistic positive 
cache effects. Maybe this could be achieved by an independent built-in, 
say either:


  \randomize var min max [parameter ?]
  \randomize var min max val [parameter]

Which would mean: take variable var, which must be in [min,max], and apply a 
pseudo-random transformation whose result is also in [min,max].


From a probabilistic point of view, it seems to me that a randomized 
(discretized) exponential would be more meaningful for modeling a server 
load.


  \setexp var min max lambda...

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Pavel Stehule
2013/11/21 Szymon Guz mabew...@gmail.com

 On 21 November 2013 21:15, Szymon Guz mabew...@gmail.com wrote:




 On 21 November 2013 20:20, Pavel Stehule pavel.steh...@gmail.com wrote:

 So here is patch for 9.4

 7 new line styles, 2 new border styles, \pset border autocomplete

 Regards

 Pavel




 2013/11/21 Szymon Guz mabew...@gmail.com

 On 21 November 2013 08:09, Pavel Stehule pavel.steh...@gmail.comwrote:

 Hello

 I wrote new styles for  psql table borders.

 http://postgres.cz/wiki/Pretty_borders_in_psql

 This patch is simple and I think some styles can be interesting
 for the final presentation.

 Do you think this feature is generally interesting and should be in
 core?

 Regards

 Pavel


 YES!

 - Szymon



 That's pretty cool, I'd love to see it in the core; however it doesn't
 contain any documentation, so I'm afraid it will be hard for people to use.

 thanks,
 Szymon


 Hi Pavel,
 I've found two errors in the documentation at
 http://postgres.cz/wiki/Pretty_borders_in_psql

 1)

 The unicode-double5 style looks like:

 x=# select * from t;
 ┌───┬───┬───┐
 │ a │ b │ t │
 ╞═══╪═══╪═══╡
 │ 1 │ 1 │ a │
 ├───┼───┼───┤
 │ 2 │ 2 │ b │
 ├───┼───┼───┤
 │ 3 │ 3 │ c │
 └───┴───┴───┘
 (3 rows)

 (There are horizontal lines between rows)

 2) There is no unicode-double6 in psql, however it exists on the website.


the website is related to the patch for 9.3 (I added a note there)

the patch for 9.4 is fixed - and now with a small doc

Regards

Pavel



 regards,
 Szymon

commit 43f0026617891485f5ceddce6024b164622ac4cc
Author: root root@localhost.localdomain
Date:   Thu Nov 21 20:16:00 2013 +0100

initial

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 264cfe6..aad2304 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1947,7 +1947,8 @@ lo_import 152801
   <acronym>HTML</acronym> format, this will translate directly
   into the <literal>border=...</literal> attribute; in the
   other formats only values 0 (no border), 1 (internal dividing lines),
-  and 2 (table frame) make sense.
+  and 2 (table frame), 3 (internal dividing lines with row
+  separators), and 4 (table frame with internally divided cells) make sense.
   <literal>latex</literal> and <literal>latex-longtable</literal>
   also support a <literal>border</literal> value of 3 which adds
   a dividing line between each row.
@@ -2087,8 +2088,11 @@ lo_import 152801
   <listitem>
   <para>
   Sets the border line drawing style to one
-  of <literal>ascii</literal>, <literal>old-ascii</literal>
-  or <literal>unicode</literal>.
+  of <literal>ascii</literal>, <literal>old-ascii</literal>,
+  <literal>unicode</literal>, <literal>unicode-double1</literal>,
+  <literal>unicode-double2</literal>, <literal>unicode-double3</literal>,
+  <literal>unicode-double4</literal>, <literal>unicode-double5</literal>,
+  <literal>unicode-bold1</literal> or <literal>unicode-bold2</literal>.
   Unique abbreviations are allowed.  (That would mean one
   letter is enough.)
   The default setting is <literal>ascii</>.
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 638d8cb..7b2b621 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -2271,7 +2271,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			psql_error("\\pset: allowed formats are unaligned, aligned, wrapped, html, latex, troff-ms\n");
 			return false;
 		}
-
 	}
 
 	/* set table line style */
@@ -2285,12 +2284,27 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt->topt.line_style = pg_asciiformat_old;
 		else if (pg_strncasecmp("unicode", value, vallen) == 0)
 			popt->topt.line_style = pg_utf8format;
+		else if (pg_strncasecmp("unicode2", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8format2;
+		else if (pg_strncasecmp("unicode-double1", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8double1format;
+		else if (pg_strncasecmp("unicode-double2", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8double2format;
+		else if (pg_strncasecmp("unicode-double3", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8double3format;
+		else if (pg_strncasecmp("unicode-double4", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8double4format;
+		else if (pg_strncasecmp("unicode-double5", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8double5format;
+		else if (pg_strncasecmp("unicode-bold1", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8bold1format;
+		else if (pg_strncasecmp("unicode-bold2", value, vallen) == 0)
+			popt->topt.line_style = pg_utf8bold2format;
 		else
 		{
 			psql_error("\\pset: allowed line styles are ascii, old-ascii, unicode\n");
 			return false;
 		}
-
 	}
 
 	/* set border style/width */
@@ -2298,7 +2312,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 	{
 		if (value)
 			popt->topt.border = 

Re: [HACKERS] PL/Python: domain over array support

2013-11-22 Thread Marko Kreen
On Sat, Oct 26, 2013 at 11:17:19AM -0200, Rodolfo Campero wrote:
 The attached patch adds support for domains over arrays to PL/Python (eg:
 CREATE DOMAIN my_domain AS integer[]).
 
 Basically it just uses get_base_element_type instead of get_element_type
 in plpy_typeio.c, and uses domain_check before returning a sequence as
 an array in PLySequence_ToArray whenever appropriate.

Generally looks fine.  Please lose the C++ comments though, this style
is not used in Postgres sources.

 There's one line I'm not sure about; I modified a switch statement (line
 427):
 switch (element_type ? element_type : getBaseType(arg->typoid))
 The rationale is that when element_type is set, it is already a base type,
 because there is no support for arrays of domains in PostgreSQL, but this
 may not hold true in the future.

Was there any actual need to modify that?  Or was it just performance
optimization?  ATM it creates asymmetry between PLy_output_datum_func2
and PLy_input_datum_func2.

If it's just a performance optimization, then it should be done in both
functions, but it seems a bad idea to do it in this patch.  So I think
it's better to leave it out.

-- 
marko



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Get more from indices.

2013-11-22 Thread Etsuro Fujita
Kyotaro HORIGUCHI wrote:
 Hello. I found a bug(?) in this patch while considering another patch.

 In truncate_useless_pathkeys, if query_pathkeys is longer than the pathkeys
made
 from index columns, the old patch picked up the latter as IndexPath's pathkeys.

 But the former is more suitable according to the context here.

 the attached pathkey_and_uniqueindx_v4_20131122.patch is changed as
follows.

  - Rebased to current master.

  - truncate_useless_pathkeys returns root->query_pathkeys when
the index is fully-ordered and query_pathkeys contains the
pathkeys made from index columns.

OK, I'd like to look at this version of the patch more closely, then.

Thanks,

Best regards,
Etsuro Fujita



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Florian Weimer

On 11/21/2013 12:45 AM, Craig Ringer wrote:

I'm really concerned by this post on Linux's fsync and disk flush behaviour:

http://milek.blogspot.com.au/2010/12/linux-osync-and-write-barriers.html

and seeking opinions from folks here who've been deeply involved in
write reliability work.


With ext4 and XFS on plain/LVM/md block devices, this issue should 
really be a thing of the past.  I think the kernel folks would treat 
this as a bug nowadays, too.


--
Florian Weimer / Red Hat Product Security Team


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL/Python: domain over array support

2013-11-22 Thread Rodolfo Campero
Marko,

2013/11/22 Marko Kreen mark...@gmail.com

 On Sat, Oct 26, 2013 at 11:17:19AM -0200, Rodolfo Campero wrote:
  The attached patch adds support for domains over arrays to PL/Python (eg:
  CREATE DOMAIN my_domain AS integer[]).
 
  Basically it just uses get_base_element_type instead of get_element_type
  in plpy_typeio.c, and uses domain_check before returning a sequence as
  an array in PLySequence_ToArray whenever appropriate.

 Generally looks fine.  Please lose the C++ comments though, this style
 is not used in Postgres sources.


Done.


  There's one line I'm not sure about; I modified a switch statement (line
  427):
  switch (element_type ? element_type : getBaseType(arg->typoid))
  The rationale is that when element_type is set, it is already a base
 type,
  because there is no support for arrays of domains in PostgreSQL, but this
  may not hold true in the future.

 Was there any actual need to modify that?  Or was it just performance
 optimization?  ATM it creates asymmetry between PLy_output_datum_func2
 and PLy_input_datum_func2.

 If it's just a performance optimization, then it should be done in both
 functions, but it seems a bad idea to do it in this patch.  So I think
 it's better to leave it out.


There was no actual need to modify that, so I dropped that change in this
new patch.

There are other cosmetic changes in this patch wrt the previous version (not
preexistent code):
 * adjusted alignment of the variable name rv in line 12
 * reworded the comment in line 850, resulting in more than 80 characters, so I
split the comment into a multiline comment following the surrounding
style.

Thanks for your review.
diff --git a/src/pl/plpython/expected/plpython_types.out b/src/pl/plpython/expected/plpython_types.out
index 91106e0..785ffca 100644
--- a/src/pl/plpython/expected/plpython_types.out
+++ b/src/pl/plpython/expected/plpython_types.out
@@ -664,6 +664,34 @@ SELECT * FROM test_type_conversion_array_error();
 ERROR:  return value of function with array return type is not a Python sequence
 CONTEXT:  while creating return value
 PL/Python function test_type_conversion_array_error
+CREATE DOMAIN ordered_pair_domain AS integer[] CHECK (array_length(VALUE,1)=2 AND VALUE[1] < VALUE[2]);
+CREATE FUNCTION test_type_conversion_array_domain(x ordered_pair_domain) RETURNS ordered_pair_domain AS $$
+plpy.info(x, type(x))
+return x
+$$ LANGUAGE plpythonu;
+SELECT * FROM test_type_conversion_array_domain(ARRAY[0, 100]::ordered_pair_domain);
+INFO:  ([0, 100], <type 'list'>)
+CONTEXT:  PL/Python function test_type_conversion_array_domain
+ test_type_conversion_array_domain 
+---
+ {0,100}
+(1 row)
+
+SELECT * FROM test_type_conversion_array_domain(NULL::ordered_pair_domain);
+INFO:  (None, <type 'NoneType'>)
+CONTEXT:  PL/Python function test_type_conversion_array_domain
+ test_type_conversion_array_domain 
+---
+ 
+(1 row)
+
+CREATE FUNCTION test_type_conversion_array_domain_check_violation() RETURNS ordered_pair_domain AS $$
+return [2,1]
+$$ LANGUAGE plpythonu;
+SELECT * FROM test_type_conversion_array_domain_check_violation();
+ERROR:  value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check"
+CONTEXT:  while creating return value
+PL/Python function test_type_conversion_array_domain_check_violation
 ---
 --- Composite types
 ---
diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c
index caccbf9..0a2307a 100644
--- a/src/pl/plpython/plpy_typeio.c
+++ b/src/pl/plpython/plpy_typeio.c
@@ -373,7 +373,7 @@ PLy_output_datum_func2(PLyObToDatum *arg, HeapTuple typeTup)
 	arg->typioparam = getTypeIOParam(typeTup);
 	arg->typbyval = typeStruct->typbyval;
 
-	element_type = get_element_type(arg->typoid);
+	element_type = get_base_element_type(arg->typoid);
 
 	/*
 	 * Select a conversion function to convert Python objects to PostgreSQL
@@ -427,7 +427,8 @@ static void
 PLy_input_datum_func2(PLyDatumToOb *arg, Oid typeOid, HeapTuple typeTup)
 {
 	Form_pg_type typeStruct = (Form_pg_type) GETSTRUCT(typeTup);
-	Oid			element_type = get_element_type(typeOid);
+	/* It's safe to handle domains of array types as its base array type. */
+	Oid			element_type = get_base_element_type(typeOid);
 
 	/* Get the type's conversion information */
 	perm_fmgr_info(typeStruct->typoutput, &arg->typfunc);
@@ -808,6 +809,7 @@ static Datum
 PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv)
 {
 	ArrayType  *array;
+	Datum   rv;
 	int			i;
 	Datum	   *elems;
 	bool	   *nulls;
@@ -844,8 +846,15 @@ PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv)
 
 	lbs = 1;
 	array = construct_md_array(elems, nulls, 1, &len, &lbs,
-			   get_element_type(arg->typoid), arg->elm->typlen, arg->elm->typbyval, arg->elm->typalign);
-	return PointerGetDatum(array);
+			   get_base_element_type(arg->typoid), arg->elm->typlen, arg->elm->typbyval, arg->elm->typalign);
+	/*
+	 * If the result type is a domain of array, the resulting 

Re: [HACKERS] [PERFORM] Cpu usage 100% on slave. s_lock problem.

2013-11-22 Thread Heikki Linnakangas

On 21.11.2013 21:37, Merlin Moncure wrote:

On Thu, Nov 21, 2013 at 10:37 AM, Heikki Linnakangas

In my patch, I put the barrier inside the if (!LocalRecoveryInProgress)
block. That codepath can only execute once in a backend, so performance is
not an issue there. Does that look sane to you?


oh right -- certainly!


Ok, committed.

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PL/Python: domain over array support

2013-11-22 Thread Marko Kreen
On Fri, Nov 22, 2013 at 08:45:56AM -0200, Rodolfo Campero wrote:
 There are other cosmetic changes in this patch wrt the previous version (not
 preexistent code):
  * adjusted alignment of the variable name rv in line 12
  * reworded the comment in line 850, resulting in more than 80 characters, so I
 split the comment into a multiline comment following the surrounding
 style.

Good.

One more thing - please update Python 3 regtests too.

-- 
marko



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-22 Thread Heikki Linnakangas

On 19.11.2013 16:20, Andres Freund wrote:

On 2013-11-18 23:15:59 +0100, Andres Freund wrote:

Afaics it's likely a combination/interaction of bugs and fixes between:
* the initial HS code
* 5a031a5556ff83b8a9646892715d7fef415b83c3
* f44eedc3f0f347a856eea8590730769125964597


Yes, the combination of those is guilty.

Man, this is (to a good part my) bad.


But that'd mean nobody noticed it during 9.3's beta...


It's fairly hard to reproduce artificially since a) there have to be
enough transactions starting and committing from the start of the
checkpoint the standby is starting from to the point it does
LogStandbySnapshot() to cross a 32768 boundary b) hint bits often save
the game by not accessing clog at all anymore and thus not noticing the
corruption.
I've reproduced the issue by having an INSERT ONLY table that's never
read from. It's helpful to disable autovacuum.


For the archive, here's what I used to reproduce this. It creates a master 
and a standby, and also uses an INSERT-only table. To make it trigger 
more easily, it helps to insert sleeps in CreateCheckpoint(), around the 
LogStandbySnapshot() call.


- Heikki


test-hot-standby-bug.sh
Description: Bourne shell script

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Store Extension Options

2013-11-22 Thread Fabrízio de Royes Mello
On Thu, Nov 21, 2013 at 11:06 AM, Robert Haas robertmh...@gmail.com wrote:

 On Wed, Nov 20, 2013 at 9:35 PM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
   So, with this patch we can do this:
  
   ALTER TABLE foo
  SET (ext.somext.do_replicate=true);
  
   Where 'ext' is the fixed prefix, 'somext' is the extension name,
   'do_replicate' is the
   extension option and 'true' is the value.
 
  This doesn't seem like a particular good choice of syntax;
 
  What's your syntax suggestion?

 I dunno, but I doubt that hardcoding ext as an abbreviation for
 extension is a good decision.  I also doubt that any fixed prefix is a
 good decision.


I used this form to simplify the implementation and avoid changing SQL syntax, but
we can discuss another way or syntax.


  and I also have my doubts about the usefulness of the feature.
 
  This feature can be used for replication solutions, but it can also be
 used for
  any extension that needs to do some specific configuration for tables,
  attributes and/or indexes.

 So, create your own configuration table with a column of type regclass.


This can be a solution (a sketch of such a table is shown below), but with a
config table we have some problems:
a) no dependency tracking (pg_depend)
b) no correct locking
c) no relcache
d) harder to do correctly for columns
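
For reference, a minimal sketch of the kind of extension-owned
configuration table being suggested (the schema, table and column names
are hypothetical and assume the extension installs its objects into a
schema of its own; this is not code from any patch):

| -- Hypothetical table owned by the extension 'somext'; the regclass
| -- column ties a setting to a specific relation.
| create table somext.table_options (
|     relid        regclass primary key,
|     do_replicate boolean not null default false
| );
|
| -- Rough equivalent of: ALTER TABLE foo SET (ext.somext.do_replicate=true);
| insert into somext.table_options (relid, do_replicate)
| values ('foo'::regclass, true);

As listed above, this approach gets none of the dependency tracking,
locking, or relcache integration that real reloptions would provide.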

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] information schema parameter_default implementation

2013-11-22 Thread Rodolfo Campero
Review: information schema parameter_default implementation (v2)

This is a review of the patch submitted in
http://archives.postgresql.org/message-id/1384483678.5008.1.ca...@vanquo.pezone.net
(information schema parameter_default implementation).

Previous review from Amit Khandekar covers technical aspects:
http://www.postgresql.org/message-id/CACoZds0eB3-yZ+pVLp9Sf5Xs_9xsDZ=jdp1d83ra-hjvjjo...@mail.gmail.com

Submission review
=
 * Is the patch in a patch format which has context? (eg: context diff
format)
Yes

 * Does it apply cleanly to the current git master?
I had to apply fromdos to remove trailing whitespace.
After that, the patch applies cleanly to HEAD.
Make builds without warnings, except for:

In file included from gram.y:13675:0:
scan.c: In function ‘yy_try_NUL_trans’:
scan.c:10185:23: warning: unused variable ‘yyg’ [-Wunused-variable]

but from previous messages in this mailing list I think that's unrelated to
this patch and normal.
The regression tests all pass successfully against the new patch.

 * Does it include reasonable tests, necessary doc patches, etc?
Yes

Usability review

 * Does the patch actually implement that?
The patch implements the column parameter_default of the information schema
view parameters, defined in the SQL:2011 standard.
I could not get hold of the spec, but I found a document where it is
mentioned: http://msdn.microsoft.com/en-us/library/jj191733(v=sql.105).aspx

 * Do we want that?
I think we do, as it is defined in the spec.

 * Do we already have it?
No

 * Does it follow SQL spec, or the community-agreed behavior?
SQL:2011.

 * Does it include pg_dump support (if applicable)?
N/A

 * Are there dangers?
None AFAICS.

 * Have all the bases been covered?
Yes.

Feature test

 * Does the feature work as advertised?
Yes

 * Are there corner cases the author has failed to consider?
None that I can see.

 * Are there any assertion failures or crashes?
No

Performance review
==
N/A

Coding review
=
I'm not skilled enough to do a code review; see previous review from Amit:
http://www.postgresql.org/message-id/CACoZds0eB3-yZ+pVLp9Sf5Xs_9xsDZ=jdp1d83ra-hjvjjo...@mail.gmail.com



2013/11/21 Peter Eisentraut pete...@gmx.net

 On 11/20/13, 8:39 PM, Rodolfo Campero wrote:
  2013/11/20 Peter Eisentraut pete...@gmx.net mailto:pete...@gmx.net
 
  Updated patch
 
 
  I can't apply the patch; maybe I'm doing something wrong?

 It looks like you are not in the right directory.




-- 
Rodolfo Campero
Anachronics S.R.L.
Tel: (54 11) 4899 2088
rodolfo.camp...@anachronics.com
http://www.anachronics.com


Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-11-22 Thread Andrew Gierth
>>>>> "Tom" == Tom Lane t...@sss.pgh.pa.us writes:

 Tom I've committed this patch after some significant editorialization, but
 Tom leaving the use of TABLE( ... ) syntax in-place.  If we decide that we
 Tom don't want to risk doing that, we can change to some other syntax later.

Is this intended:

create function foo() returns setof footype language plpgsql
  as $f$ begin return next row(1,true); end; $f$;

select pg_typeof(f), row_to_json(f) from foo() with ordinality f(p,q);
 pg_typeof |           row_to_json
-----------+---------------------------------
 record    | {"p":1,"q":true,"ordinality":1}
(1 row)

select pg_typeof(f), row_to_json(f) from foo() f(p,q);
 pg_typeof |   row_to_json
-----------+------------------
 footype   | {"a":1,"b":true}
(1 row)

-- 
Andrew (irc:RhodiumToad)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Minor patch for the uuid-ossp extension

2013-11-22 Thread roadrunner6

When trying to add the extension with \i it writes an error message:
Use "CREATE EXTENSION uuid-ossp" to load this file.

Unfortunately this does not work for extensions with dashes. One must write CREATE 
EXTENSION "uuid-ossp". The proposed patch is attached.


Regards
Mario

diff -Nurb contrib.orig/uuid-ossp/uuid-ossp--1.0.sql 
contrib/uuid-ossp/uuid-ossp--1.0.sql
--- contrib.orig/uuid-ossp/uuid-ossp--1.0.sql   2013-11-22 13:48:21.588674030 
+0100
+++ contrib/uuid-ossp/uuid-ossp--1.0.sql2013-11-22 13:52:13.232387782 
+0100
@@ -1,7 +1,7 @@
 /* contrib/uuid-ossp/uuid-ossp--1.0.sql */

 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION uuid-ossp" to load this file. \quit
+\echo Use 'CREATE EXTENSION "uuid-ossp"' to load this file. \quit

 CREATE FUNCTION uuid_nil()
 RETURNS uuid
diff -Nurb contrib.orig/uuid-ossp/uuid-ossp--unpackaged--1.0.sql 
contrib/uuid-ossp/uuid-ossp--unpackaged--1.0.sql
--- contrib.orig/uuid-ossp/uuid-ossp--unpackaged--1.0.sql   2013-11-22 
13:44:04.589862871 +0100
+++ contrib/uuid-ossp/uuid-ossp--unpackaged--1.0.sql2013-11-22 
13:52:19.480164238 +0100
@@ -1,7 +1,7 @@
 /* contrib/uuid-ossp/uuid-ossp--unpackaged--1.0.sql */

 -- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use "CREATE EXTENSION uuid-ossp" to load this file. \quit
+\echo Use 'CREATE EXTENSION "uuid-ossp"' to load this file. \quit

 ALTER EXTENSION "uuid-ossp" ADD function uuid_nil();
 ALTER EXTENSION "uuid-ossp" ADD function uuid_ns_dns();
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add \i option to bring in the specified file as a quoted literal

2013-11-22 Thread Alvaro Herrera
Amit Kapila escribió:
 On Fri, Nov 22, 2013 at 1:33 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:

  \ib homer ~/photos/homer.jpg
  insert into people (name, photo) values ('Homer', :homer);
 
  Isn't something similar already supported as mentioned in docs:
 
 One example use of this mechanism is to copy the contents of a file
 into a table column. First load the file into a variable and then
 interpolate the variable's value as a quoted string:
 
 testdb= \set content `cat my_file.txt`
 testdb= INSERT INTO my_table VALUES (:'content');
 
 or do you prefer an alternative without any kind of quote using \ib?

If the only use case of the feature proposed in this thread is to load
stuff from files to use as column values, then we're pretty much done,
and this patch is not needed -- except, maybe, that the `` is unlikely
to work on Windows, as already mentioned elsewhere.  But if the OP had
something else in mind, let's hear what it is.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-22 Thread Heikki Linnakangas

On 21.11.2013 22:55, Andres Freund wrote:

On 2013-11-20 12:48:50 +0200, Heikki Linnakangas wrote:

On 19.11.2013 16:22, Andres Freund wrote:

On 2013-11-19 15:20:01 +0100, Andres Freund wrote:

Imo something like the attached patch should be done. The description I
came up with is:

 Fix Hot-Standby initialization of clog and subtrans.


Looks ok for a back-patchable fix.


Do you plan to commit this? Or who is going to?


Ok, committed.

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Handling GIN incomplete splits

2013-11-22 Thread Michael Paquier
On Thu, Nov 21, 2013 at 9:44 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 Here's a new version. To ease the review, I split the remaining patch again
 into two, where the first patch is just yet more refactoring.

 I also fixed some bugs in WAL logging and replay that I bumped into while
 testing.
Cool. Here is the review of the two remaining patches.
1) More refactoring, general remarks:
- Code compiles without warnings
- Passes make check
- If I got it correctly... this patch separates the part managing the data
to be inserted from ginbtree. I can understand the meaning behind that
if we consider that GinBtree is used only to define methods and search
conditions (flags and keys). However I am wondering if this does not
make the code more complicated... In particular, the flag isDelete that
is only passed to ginxlogInsert merits its comment IMO. Note that I
hadn't read the 2nd patch when writing that :)
- With this patch, the previous SELECT example takes 5.236 ms on average;
the runtime does not change.
1-1) This block of code is repeated several times and should be
refactored into a single function:
/* During index build, count the new page */
if (buildStats)
{
if (btree->isData)
buildStats->nDataPages++;
else
buildStats->nEntryPages++;
}
Something with a function like that perhaps:
static void ginCountNewPage(GinBtree btree, GinStatsData *buildStats);
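A possible body for that helper, as a sketch only (it simply wraps the
repeated block above in the suggested signature; it is not code taken
from the patch):

static void
ginCountNewPage(GinBtree btree, GinStatsData *buildStats)
{
	/* During index build, count the new page */
	if (buildStats)
	{
		if (btree->isData)
			buildStats->nDataPages++;
		else
			buildStats->nEntryPages++;
	}
}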
1-2) Could it be possible to change the variable name of
GinBtreeEntryInsertData *entry in entryIsEnoughSpace? entry->entry
is kind of hard to apprehend... One option is renaming it to insertEntry.
Another idea would be to rename entry in GinBtreeEntryInsertData to
entryData or entryTuple.
1-3) This block of code is present two times:
+   if (!isleaf)
+   {
+   PostingItem *pitem = GinDataPageGetPostingItem(lpage, off);
+   PostingItemSetBlockNumber(pitem, updateblkno);
+   }
Should the update of a downlink to point to the next page be a
separate function?
2) post recovery cleanup:
- OK, so roughly the soul of this patch is to change the update
mechanism for a left child gin page so that the parent split is always
done first, before any new data is inserted in this child. And this
ensures that we can remove the xlog cleanup mechanism for gin page
splits in the same fashion as gist... The xlog redo mechanism is then
adapted according to that.
- Compilation fails because the flags GIN_SPLIT_ISLEAF and
GIN_SPLIT_ISDATA are missing in gin_private.h. The patch attached
fixes that though.
- With my additional patch, it passes make check, compilation shows no warnings.
- I did some tests with the patch:
-- Index creation time
vanilla: 3266.834 ms
with the two patches: 3412.473 ms
-- Tried the new redo mechanism by simulating some server failures a
couple of times and saw no failures
- I am seeing similar run times for queries like the example used in
previous emails of this thread. So no problem on this side.
- And... Here are some comments about the code:
2-1) In ginFindParents, is the case where the stack has no parent
possible (aka the stack is the root itself)? Shouldn't this code path
check if root is NULL or not?
2-2) Not sure that this structure is in line with the project policy:
struct
{
   BlockNumber left;
   BlockNumber right;
} children;
Why not add a complementary structure in gin_private.h for that (a
sketch follows below)? It could be used as well in ginxlogSplit to
specify a left/right family of block numbers.
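A minimal sketch of what such a shared struct could look like (the name
and its placement in gin_private.h are assumptions of this sketch, not
something taken from the patch):

/* Hypothetical shared definition: the pair of sibling block numbers
 * produced by a page split. */
typedef struct
{
	BlockNumber left;
	BlockNumber right;
} GinSplitChildren;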
2-3) s/kepy/kept (comments of ginFinishSplit)
Other than that, the patch looks great. I particularly like the new
redo logic in ginxlog.c, the code is more easily understandable with
the split redo removed. Even if I am just a noob for this code, it is
nicely built and structured.

Regards,
-- 
Michael
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index d27ff7d..382a23c 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -48,7 +48,9 @@ typedef GinPageOpaqueData *GinPageOpaque;
 #define GIN_META		(1 << 3)
 #define GIN_LIST		(1 << 4)
 #define GIN_LIST_FULLROW	(1 << 5)	/* makes sense only on GIN_LIST page */
-#define GIN_INCOMPLETE_SPLIT (1 << 6)	/* page was split, but parent not updated */
+#define GIN_SPLIT_ISLEAF	(1 << 6)
+#define GIN_SPLIT_ISDATA	(1 << 7)
+#define GIN_INCOMPLETE_SPLIT (1 << 8)	/* page was split, but parent not updated */
 
 /* Page numbers of fixed-location pages */
 #define GIN_METAPAGE_BLKNO (0)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] commit fest 2013-11 week 1 report

2013-11-22 Thread Peter Eisentraut
We started with

Fri Nov 15

Status Summary. Needs Review: 79, Waiting on Author: 7, Ready for Committer: 5, 
Committed: 7, Returned with Feedback: 3, Rejected: 1. Total: 102.

We are now at

Fri Nov 22

Status Summary. Needs Review: 47, Waiting on Author: 28, Ready for Committer: 
10, Committed: 18, Returned with Feedback: 3, Rejected: 3. Total: 109.

(some late arrivals, some patches split)

Progress has been quite good.

Almost all patch authors responded to my call to sign up for reviewing
someone else's patch.

20 patches are still without reviewer.  Most of those are the typical
difficult topics (indexes, replication), so now might be a good time
for experts in those areas to start picking up the remaining patches.

In the coming week, we will be following up with reviewers to send in
their first review if they haven't already done so.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-11-22 Thread Alvaro Herrera
Bruce Momjian escribió:

 OK, here is a patch which changes ABORT from NOTICE to WARNING, and SET
 from ERROR (which is new in 9.4) to WARNING.

I don't like that this patch changes RequireTransactionChain() from
actually requiring one, to a function that maybe requires a transaction
chain, and maybe it only complains about there not being one.  I mean,
it's like you had named the new throwError boolean as notReally or
something like that.  Also, the new comment paragraph is bad because it
explains who must pass true/false, instead of what's the effect of each
value (and let the callers choose which value to pass).

I would create a separate function to implement this, maybe
WarnUnlessInTransactionBlock() or something like that.  That would make
the patch a good deal smaller (because not changing existing callers).
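
For illustration only, a rough sketch of what such a separate function
might look like (the name comes from the suggestion above; the body and
error code are assumptions of this sketch, not the actual patch):

static bool
WarnUnlessInTransactionBlock(const char *stmtType)
{
	/* Inside a transaction block: nothing to warn about. */
	if (IsTransactionBlock())
		return true;

	/* Otherwise warn, instead of the ERROR RequireTransactionChain raises. */
	ereport(WARNING,
			(errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION),
			 errmsg("%s can only be used in transaction blocks", stmtType)));
	return false;
}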

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-22 Thread Andres Freund
On 2013-11-22 15:01:10 +0200, Heikki Linnakangas wrote:
 On 21.11.2013 22:55, Andres Freund wrote:
 On 2013-11-20 12:48:50 +0200, Heikki Linnakangas wrote:
 Looks ok for a back-patchable fix.
 
 Do you plan to commit this? Or who is going to?
 
 Ok, committed.

Thanks!

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] address sanitizer crash

2013-11-22 Thread Peter Eisentraut
AddressSanitizer (http://clang.llvm.org/docs/AddressSanitizer.html)
(also available through gcc, but I used clang), reports one bug, which
I traced down to this code in ruleutils.c:

[elsewhere:]
#define ViewSelectRuleName "_RETURN"


/*
 * Get the pg_rewrite tuple for the view's SELECT rule
 */
args[0] = ObjectIdGetDatum(viewoid);
args[1] = PointerGetDatum(ViewSelectRuleName);
nulls[0] = ' ';
nulls[1] = ' ';
spirc = SPI_execute_plan(plan_getviewrule, args, nulls, true, 2);

[I also think that the 2 here is wrong, probably on the assumption that it means 2
arguments, but it actually means 2 result rows, and only one is needed.  But
that's unrelated to the bug.]

The datums end up in datumCopy(), which tries to copy 64 bytes of
name data, overrunning the end of the string.

I think the correct code is something like

args[1] = DirectFunctionCall1(namein, CStringGetDatum(ViewSelectRuleName));

which indeed makes the crash go away.  (A more explicit string-to-name
function might be useful and could also be used in other places.)

This is the only remaining issue found by AddressSanitizer that is
clearly attributable to the PostgreSQL source code (after the recently
fixed issue with memcpy with identical arguments).  There are crashes
in PL/Perl and PL/Python, but it's not clear to me yet whose fault
they are.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] commit fest 2013-11 week 1 report

2013-11-22 Thread Pavel Golub
Hello, Peter.

Is it possible to add a small patch to the current commit fest?

You wrote:

PE We started with

PE Fri Nov 15

PE Status Summary. Needs Review: 79, Waiting on Author: 7, Ready for
PE Committer: 5, Committed: 7, Returned with Feedback: 3, Rejected: 1. Total: 
102.

PE We are now at

PE Fri Nov 22

PE Status Summary. Needs Review: 47, Waiting on Author: 28, Ready
PE for Committer: 10, Committed: 18, Returned with Feedback: 3, Rejected: 3. 
Total: 109.

PE (some late arrivals, some patches split)

PE Progress has been quite good.

PE Almost all patch authors responded to my call to sign up for reviewing
PE someone else's patch.

PE 20 patches are still without reviewer.  Most of those are the typical
PE difficult topics (indexes, replication), so now might be a good time
PE for experts in those areas to start picking up the remaining patches.

PE In the coming week, we will be following up with reviewers to send in
PE their first review if they haven't already done so.





-- 
With best wishes,
 Pavel  mailto:pa...@gf.microolap.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: Custom Scan APIs (Re: [HACKERS] Custom Plan node)

2013-11-22 Thread Jim Mlodgenski
KaiGai


On Tue, Nov 19, 2013 at 9:41 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:

 Thanks for your review.

 2013/11/19 Jim Mlodgenski jimm...@gmail.com:
  My initial review on this feature:
  - The patches apply and build, but it produces a warning:
  ctidscan.c: In function ‘CTidInitCustomScanPlan’:
  ctidscan.c:362:9: warning: unused variable ‘scan_relid’
 [-Wunused-variable]
 
 This variable was only used in an Assert() macro, so it causes a warning if
 you
 don't pass --enable-cassert to the configure script.
 Anyway, I adjusted the code to check the relid of RelOptInfo directly.



The warning is now gone.


  I'd recommend that you split the part1 patch containing the ctidscan
 contrib
  into its own patch. It is more than half of the patch and it certainly
  stands on its own. IMO, I think ctidscan fits a very specific use case
 and
  would be better off being an extension instead of in contrib.
 
 OK, I split them off. The part-1 is custom-scan API itself, the part-2 is
 ctidscan portion, and the part-3 is remote join on postgres_fdw.


Attached is a patch for the documentation. I think the documentation still
needs a little more work, but it is pretty close. I can add some more
detail to it once I finish adapting the hadoop_fdw to use the custom scan
API and have a better understanding of all of the calls.


 Thanks,
 --
 KaiGai Kohei kai...@kaigai.gr.jp

*** a/doc/src/sgml/custom-scan.sgml	2013-11-18 17:50:02.652039003 -0500
--- b/doc/src/sgml/custom-scan.sgml	2013-11-22 09:09:13.624254649 -0500
***************
*** 8,47 ****
    <secondary>handler for</secondary>
   </indexterm>
   <para>
!   Custom-scan API enables extension to provide alternative ways to scan or
!   join relations, being fully integrated with cost based optimizer,
!   in addition to the built-in implementation.
!   It consists of a set of callbacks, with a unique name, to be invoked during
!   query planning and execution. Custom-scan provider should implement these
!   callback functions according to the expectation of API.
   </para>
   <para>
!   Overall, here is four major jobs that custom-scan provider should implement.
!   The first one is registration of custom-scan provider itself. Usually, it
!   shall be done once at <literal>_PG_init()</literal> entrypoint on module
!   loading.
!   The other three jobs shall be done for each query planning and execution.
!   The second one is submission of candidate paths to scan or join relations,
!   with an adequate cost, for the core planner.
!   Then, planner shall chooses a cheapest path from all the candidates.
!   If custom path survived, the planner kicks the third job; construction of
!   <literal>CustomScan</literal> plan node, being located within query plan
!   tree instead of the built-in plan node.
!   The last one is execution of its implementation in answer to invocations
!   by the core executor.
   </para>
   <para>
!   Some of contrib module utilize the custom-scan API. It may be able to
!   provide a good example for new development.
    <variablelist>
     <varlistentry>
      <term><xref linkend="ctidscan"></term>
      <listitem>
       <para>
!   Its logic enables to skip earlier pages or terminate scan prior to
!   end of the relation, if inequality operator on <literal>ctid</literal>
!   system column can narrow down the scope to be scanned, instead of
!   the sequential scan that reads a relation from the head to the end.
       </para>
      </listitem>
     </varlistentry>
--- 8,46 ----
    <secondary>handler for</secondary>
   </indexterm>
   <para>
!   The custom-scan API enables an extension to provide alternative ways to scan
!   or join relations leveraging the cost based optimizer. The API consists of a
!   set of callbacks, with unique names, to be invoked during query planning 
!   and execution. A custom-scan provider should implement these callback 
!   functions according to the expectation of the API.
   </para>
   <para>
!   Overall, there are four major tasks that a custom-scan provider should 
!   implement. The first task is the registration of the custom-scan provider itself.
!   Usually, this needs to be done once at the <literal>_PG_init()</literal> 
!   entrypoint when the module is loading. The remaining three tasks are all done
!   when a query is being planned and executed. The second task is the submission of
!   candidate paths to either scan or join relations with an adequate cost for
!   the core planner. Then, the planner will choose the cheapest path from all of
!   the candidates. If the custom path survives, the planner starts the third 
!   task; construction of a <literal>CustomScan</literal> plan node, located
!   within the query plan tree instead of the built-in plan node. The last task
!   is the execution of its implementation in answer to invocations by the core
!   executor.
   </para>
   <para>
!   Some of the contrib modules utilize the custom-scan API. They may provide a good
!   example for new development.
    <variablelist>
     <varlistentry>
      <term><xref linkend="ctidscan"></term>
 

Re: [HACKERS] Status of FDW pushdowns

2013-11-22 Thread Merlin Moncure
On Thu, Nov 21, 2013 at 6:43 PM, Shigeru Hanada
shigeru.han...@gmail.com wrote:
 2013/11/22 Tom Lane t...@sss.pgh.pa.us:
 Merlin Moncure mmonc...@gmail.com writes:
 On Thu, Nov 21, 2013 at 9:05 AM, Bruce Momjian br...@momjian.us wrote:
 I know join pushdowns seem insignificant, but it helps to restrict what
 data must be passed back because you would only pass back joined rows.

 By 'insignificant' you mean 'necessary to do any non-trivial real
 work'.   Personally, I'd prefer it if FDW was extended to allow
 arbitrary parameterized queries like every other database connectivity
 API ever made ever.

 [ shrug... ]  So use dblink.  For better or worse, the FDW stuff is
 following the SQL standard's SQL/MED design, which does not do it
 like that.

 Pass-through mode mentioned in SQL/MED standard might be what he wants.

happen to have a link handy?

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Merlin Moncure
On Fri, Nov 22, 2013 at 2:23 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello


 2013/11/21 Merlin Moncure mmonc...@gmail.com

 On Thu, Nov 21, 2013 at 1:09 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  Hello
 
  I wrote new styles for  psql table borders.
 
  http://postgres.cz/wiki/Pretty_borders_in_psql
 
  This patch is simple and I think some styles can be interesting
  for
  the final presentation.
 
 great. hm, maybe we could integrate color? (see:

 http://merlinmoncure.blogspot.com/2012/09/psql-now-with-splash-of-color.html).


 it is next possible enhancing - I would to go forward in small steps, please
 :)

 minimally (and independent on proposed patch) we can introduce some like
 final regexp filtering  - that can be used for this or other purposes.

Yeah.  A per field regexp would do the trick.  As you have it, I like
Peter's idea best.  Being able to specify the various character codes
makes a lot of sense.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Pavel Stehule
2013/11/22 Merlin Moncure mmonc...@gmail.com

 On Fri, Nov 22, 2013 at 2:23 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  Hello
 
 
  2013/11/21 Merlin Moncure mmonc...@gmail.com
 
  On Thu, Nov 21, 2013 at 1:09 AM, Pavel Stehule pavel.steh...@gmail.com
 
  wrote:
   Hello
  
   I wrote new styles for  psql table borders.
  
   http://postgres.cz/wiki/Pretty_borders_in_psql
  
   This patch is simple and I think some styles can be interesting
   for
   the final presentation.
  
  great. hm, maybe we could integrate color? (see:
 
 
 http://merlinmoncure.blogspot.com/2012/09/psql-now-with-splash-of-color.html
 ).
 
 
  it is next possible enhancing - I would to go forward in small steps,
 please
  :)
 
  minimally (and independent on proposed patch) we can introduce some like
  final regexp filtering  - that can be used for this or other purposes.

 Yeah.  A per field regexp would do the trick.  As you have it, I like
 Peter's idea best.  Being able to specify the various character codes
 makes a lot of sense.


there is another issue - a simple parser will be really user unfriendly, and
a user friendly parser will not be simple :(

do you have some idea about the input format?

Regards

Pavel



 merlin



Re: [HACKERS] address sanitizer crash

2013-11-22 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 AddressSanitizer (http://clang.llvm.org/docs/AddressSanitizer.html)
 (also available through gcc, but I used clang), reports one bug, which
 I traced down to this code in ruleutils.c:

 [elsewhere:]
 #define ViewSelectRuleName "_RETURN"

 /*
  * Get the pg_rewrite tuple for the view's SELECT rule
  */
 args[0] = ObjectIdGetDatum(viewoid);
 args[1] = PointerGetDatum(ViewSelectRuleName);
 nulls[0] = ' ';
 nulls[1] = ' ';
 spirc = SPI_execute_plan(plan_getviewrule, args, nulls, true, 2);

Yes, the plan clearly is built expecting $2 to be of type name,
so this has been wrong since day 1.  Amazing that no actual bug reports
have surfaced from it.

 I think the correct code is something like
 args[1] = DirectFunctionCall1(namein, 
 CStringGetDatum(ViewSelectRuleName));

That would be OK.  The more usual coding pattern is to declare a local
NameData variable, namestrcpy into that, and then PointerGetDatum on
the variable's address is actually correct.  However, that's just
micro-optimization that I don't think we care about here.
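
For readers following the fix, here is a minimal sketch of the pattern Tom
describes, assuming the usual backend environment (postgres.h in scope); it
illustrates the idiom only and is not the committed change:

	/*
	 * The "usual coding pattern": copy the C string into a local NameData
	 * buffer (fixed NAMEDATALEN size) and pass the buffer's address as the
	 * Datum.  The buffer must stay in scope until SPI_execute_plan returns.
	 */
	NameData	rulename;

	namestrcpy(&rulename, ViewSelectRuleName);	/* i.e. "_RETURN" */
	args[1] = PointerGetDatum(&rulename);		/* Datum of type name */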

 [I also think that the 2 here is wrong, probably thinking it means 2
 arguments, but it means 2 result rows, but only one is needed.  But
 that's unrelated to the bug.]

Yes, while harmless that's clearly in error, should be zero.  The other
call of SPI_execute_plan in ruleutils.c has the same thinko.

Please fix and back-patch.

regards, tom lane




Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Alvaro Herrera
Pavel Stehule escribió:

 2013/11/21 Peter Eisentraut pete...@gmx.net

  Maybe make the border setting a string containing the various characters
  by index.  Then everyone can create their own crazy borders.
 
 I seriously thought about it, but not sure if it is a good way.

How about having a single unicode line style, and then have a
different \pset setting to determine exactly what chars to print?  This
wouldn't allow for programmability, but it seems better UI to me.
This proliferation of unicode line style names seems odd.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] commit fest 2013-11 week 1 report

2013-11-22 Thread Euler Taveira
On 22-11-2013 11:07, Pavel Golub wrote:
 Is it possible to add a small patch to the current commit fest?
 
No. Deadline was 11/15. Add it to next CF [1].


[1] https://commitfest.postgresql.org/action/commitfest_view?id=21


-- 
   Euler Taveira   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento




Re: [HACKERS] MultiXact pessmization in 9.3

2013-11-22 Thread Alvaro Herrera
Andres Freund wrote:

 While looking at the multixact truncation code (mail nearby), I've
 noticed that there's a significant difference in the way multixact
 members are accessed since fkey locks were introduced:
 
 <9.3 when heap_lock_tuple finds a XMAX_IS_MULTI tuple it executes
 MultiXactIdWait() which in turn uses GetMultiXactIdMembers() to get the
 xids to wait for. But it skips the lookup if the mxid we lookup is older
 than OldestVisibleMXactId.
 
 9.3+ instead always looks up the members because GetMultiXactIdMembers()
 is also used in cases where we need to access old xids (to check whether
 members have committed in non-key updates).

But HeapTupleSatisfiesUpdate(), called at the top of heap_lock_tuple,
has already called MultiXactIdIsRunning() (which calls GetMembers)
before that's even considered.  So the call heap_lock_tuple should have
members obtained from the multixact.c cache.

Am I misunderstanding which code path you mean?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Merlin Moncure
On Fri, Nov 22, 2013 at 8:45 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Pavel Stehule escribió:

 2013/11/21 Peter Eisentraut pete...@gmx.net

  Maybe make the border setting a string containing the various characters
  by index.  Then everyone can create their own crazy borders.
 
 I seriously thought about it, but not sure if it is a good way.

 How about having a single unicode line style, and then have a
 different \pset setting to determine exactly what chars to print?  This
 wouldn't allow for programmability, but it seems better UI to me.
 This proliferation of unicode line style names seems odd.

That makes sense to me, especially if you could pass escapes.

merlin




Re: [HACKERS] Minor patch for the uuid-ossp extension

2013-11-22 Thread Tom Lane
roadrunn...@gmx.at writes:
 When trying to add the extension with \i it writes an error message:
  Use "CREATE EXTENSION uuid-ossp" to load this file.

 Unfortunately this does not work for extensions with dashes. Must CREATE 
 EXTENSION "uuid-ossp". Proposed patch is attached.

[ memo to self: never, ever accept another contrib module whose name
isn't a plain SQL identifier ]

Yeah, that's a problem, but I don't find your solution acceptable:

-\echo Use "CREATE EXTENSION uuid-ossp" to load this file. \quit
+\echo Use CREATE EXTENSION "uuid-ossp" to load this file. \quit

That's just ignoring the English text quoting convention that these
messages are trying to follow.  I guess we could shade the convention a
bit by using single not double quotes around the recommended command.
psql doesn't make that tremendously easy, but a bit of experimentation
says this works:

regression=# \echo Use '''CREATE EXTENSION "uuid-ossp"''' to load this file. 
Use 'CREATE EXTENSION "uuid-ossp"' to load this file.

Does that look reasonable to people?

regards, tom lane




Re: [HACKERS] Minor patch for the uuid-ossp extension

2013-11-22 Thread Alvaro Herrera
Tom Lane wrote:
 roadrunn...@gmx.at writes:

 regression=# \echo Use '''CREATE EXTENSION "uuid-ossp"''' to load this file. 
 Use 'CREATE EXTENSION "uuid-ossp"' to load this file.
 
 Does that look reasonable to people?

+1

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Minor patch for the uuid-ossp extension

2013-11-22 Thread Andrew Dunstan


On 11/22/2013 10:19 AM, Alvaro Herrera wrote:

Tom Lane wrote:

roadrunn...@gmx.at writes:
regression=# \echo Use '''CREATE EXTENSION "uuid-ossp"''' to load this file.
Use 'CREATE EXTENSION "uuid-ossp"' to load this file.

Does that look reasonable to people?

+1



+1

cheers

andrew




Re: [HACKERS] MultiXact pessmization in 9.3

2013-11-22 Thread Andres Freund
On 2013-11-22 12:04:31 -0300, Alvaro Herrera wrote:
 Andres Freund wrote:
 
  While looking at the multixact truncation code (mail nearby), I've
  noticed that there's a significant difference in the way multixact
  members are accessed since fkey locks were introduced:
  
  <9.3 when heap_lock_tuple finds a XMAX_IS_MULTI tuple it executes
  MultiXactIdWait() which in turn uses GetMultiXactIdMembers() to get the
  xids to wait for. But it skips the lookup if the mxid we lookup is older
  than OldestVisibleMXactId.
  
  9.3+ instead always looks up the members because GetMultiXactIdMembers()
  is also used in cases where we need to access old xids (to check whether
  members have committed in non-key updates).
 
 But HeapTupleSatisfiesUpdate(), called at the top of heap_lock_tuple,
 has already called MultiXactIdIsRunning() (which calls GetMembers)
 before that's even considered.  So the call heap_lock_tuple should have
 members obtained from the multixact.c cache.
 
 Am I misunderstanding which code path you mean?

Yes, somewhat: <9.3 GetMultiXactIdMembers() didn't do any work if the
multixact was old enough:
	if (MultiXactIdPrecedes(multi, OldestVisibleMXactId[MyBackendId]))
	{
		debug_elog2(DEBUG2, "GetMembers: it's too old");
		*xids = NULL;
		return -1;
	}
so, in OLTP style workloads, doing a heap_lock_tuple() often didn't have
to perform any IO when locking a tuple that previously had been locked
using a multixact. We knew that none of the members could still be
running and thus didn't have to ask the SLRU for them since a
not-running member cannot still have a row locked.

Now, fkey locks changed that because for !HEAP_XMAX_LOCK_ONLY mxacts, we
need to look up the members to get the update xid and check whether it
has committed or aborted, even if we know that it isn't currently
running anymore due to OldestVisibleMXactId. But there's absolutely no
need to do that for HEAP_XMAX_LOCK_ONLY mxacts, the old reasoning for
not looking up the members if old is just fine.

Additionally, we don't ever set hint bits indicating that a
HEAP_XMAX_IS_MULTI && !HEAP_XMAX_LOCK_ONLY mxact has committed, so we'll
do HeapTupleGetUpdateXid(), TransactionIdIsCurrentTransactionId(),
TransactionIdIsInProgress(), TransactionIdDidCommit() calls for every
HeapTupleSatisfiesMVCC().

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Building on S390

2013-11-22 Thread Michael Meskes
Hi,

I spent some time trying to figure out why PostgreSQL builds on
S390-Linux, but Postgres-XC doesn't. Well at least this holds for the Debian
packages. So far I haven't figured it out.  However, it appears to me that the
build should fail for both. I'm not an S390 expert by any means, but I was told
that S390 requires -fPIC and the build failure in the Debian package of XC came
from a stray -fpic that was used together with -fPIC. But alas the PostgreSQL
build has both as well.

Anyway, I changed src/makefiles/Makefile.linux to include

ifeq ($(findstring s390,$(host_cpu)),s390)
CFLAGS_SL = -fPIC
else

before setting CFLAGS_SL = -fpic et voila XC builds just fine on S390. 

So I wonder shouldn't we use -fPIC instead of -fpic for S390 in general?

Michael

-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Greg Stark
On Thu, Nov 21, 2013 at 1:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Also, it's not that hard to do plug-pull testing to verify that your
 system is telling the truth about fsync.  This really ought to be part
 of acceptance testing for any new DB server.


I've never tried it but I always wondered how easy it was to do. How would
you ever know you had tested it enough?


The original mail was referencing a problem with syncing *meta* data
though. The semantics around meta data syncs are much less clearly
specified, in part because file systems traditionally made nearly all meta
data operations synchronous. Doing plug-pull testing on Postgres would not
test meta data syncing very well since Postgres specifically avoids doing
much meta data operations by overwriting existing files and blocks as much
as possible. You would have to test doing table extensions or pulling the
plug immediately after switching xlog files repeatedly to have any coverage
at all there.
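
For anyone wanting to try this, here is a minimal, self-contained sketch of a
metadata-heavy plug-pull test (my illustration, not an existing tool): each
iteration creates a new file, writes a counter, fsyncs the file and then its
directory, and only then reports the counter as durable; after a power cut,
every reported counter must still be present on disk.

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        int     dirfd = open(".", O_RDONLY);
        char    name[64], buf[64];

        if (dirfd < 0)
            return 1;
        for (long i = 0;; i++)
        {
            snprintf(name, sizeof(name), "probe-%ld", i);
            snprintf(buf, sizeof(buf), "%ld\n", i);
            int fd = open(name, O_CREAT | O_WRONLY, 0600);

            if (fd < 0 || write(fd, buf, strlen(buf)) < 0 ||
                fsync(fd) < 0 || fsync(dirfd) < 0)
                return 1;
            close(fd);
            printf("durable: %ld\n", i);    /* claimed only after both fsyncs */
            fflush(stdout);
        }
    }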


-- 
greg


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 On Thu, Nov 21, 2013 at 1:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Also, it's not that hard to do plug-pull testing to verify that your
 system is telling the truth about fsync.  This really ought to be part
 of acceptance testing for any new DB server.

 I've never tried it but I always wondered how easy it was to do. How would
 you ever know you had tested it enough?

I used the program Greg Smith recommends on our wiki (can't remember the
name offhand) when I got a new house server this spring.  With the RAID
card configured for writethrough and no battery, it failed all over the
place.  Fixed those configuration bugs, it was okay three or four times
in a row, which was good enough for me.

 The original mail was referencing a problem with syncing *meta* data
 though. The semantics around meta data syncs are much less clearly
 specified, in part because file systems traditionally made nearly all meta
 data operations synchronous. Doing plug-pull testing on Postgres would not
 test meta data syncing very well since Postgres specifically avoids doing
 much meta data operations by overwriting existing files and blocks as much
 as possible.

True.  You're better off with a specialized testing program.  (Though
now you mention it, I wonder whether that program was stressing metadata
or not.)

regards, tom lane




Re: [HACKERS] Building on S390

2013-11-22 Thread Tom Lane
Michael Meskes mes...@postgresql.org writes:
 I spent some time trying to figure out why PostgreSQL builds on
 S390-Linux, but Postgres-XC doesn't. Well at least this holds for the Debian
 packages. So far I haven't figured it out.  However, it appears to me that the
 build should fail for both. I'm not an S390 expert by any means, but I was
 told that S390 requires -fPIC and the build failure in the Debian package of
 XC came from a stray -fpic that was used together with -fPIC. But alas the
 PostgreSQL build has both as well.

I think this is probably nonsense.  I spent ten years maintaining Postgres
for Red Hat, and I never saw any such failure on s390 in their packages.
If -fpic weren't good enough for shared libraries on s390, how'd any of
those builds get through their regression tests?

It may well be that *mixing* -fpic and -fPIC is a bad idea, but I'd say
that points to an error in something XC is doing, because the core
Postgres build doesn't use -fPIC anywhere for Linux/s390, AFAICS.
Furthermore, if we change that convention now, we're going to increase
the risk of such mixing failures for other people.

regards, tom lane




Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Claudio Freire
On Fri, Nov 22, 2013 at 1:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The original mail was referencing a problem with syncing *meta* data
 though. The semantics around meta data syncs are much less clearly
 specified, in part because file systems traditionally made nearly all meta
 data operations synchronous. Doing plug-pull testing on Postgres would not
 test meta data syncing very well since Postgres specifically avoids doing
 much meta data operations by overwriting existing files and blocks as much
 as possible.

 True.  You're better off with a specialized testing program.  (Though
 now you mention it, I wonder whether that program was stressing metadata
 or not.)


You can always stress metadata by leaving atime updates in their full
setting (whatever it is for that filesystem).




Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-11-22 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 Is this intended:

[ I assume you forgot a create type footype here ]

 create function foo() returns setof footype language plpgsql
   as $f$ begin return next row(1,true); end; $f$;

 select pg_typeof(f), row_to_json(f) from foo() with ordinality f(p,q);
  pg_typeof |           row_to_json           
 -----------+---------------------------------
  record    | {"p":1,"q":true,"ordinality":1}
 (1 row)

 select pg_typeof(f), row_to_json(f) from foo() f(p,q);
  pg_typeof |   row_to_json    
 -----------+------------------
  footype   | {"a":1,"b":true}
 (1 row)

Well, it's not insane on its face.  The rowtype of f in the first example
is necessarily a built-on-the-fly record, but in the second case using
the properties of the underlying named composite type is possible, and
consistent with what happens in 9.3 and earlier.  (Not that I'm claiming
we were or are totally consistent ...)

regards, tom lane




Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-11-22 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 10:24:35AM -0300, Alvaro Herrera wrote:
 Bruce Momjian escribió:
 
  OK, here is a patch which changes ABORT from NOTICE to WARNING, and SET
  from ERROR (which is new in 9.4) to WARNING.
 
 I don't like that this patch changes RequireTransactionChain() from
 actually requiring one, to a function that maybe requires a transaction
 chain, and maybe it only complains about there not being one.  I mean,
 it's like you had named the new throwError boolean as notReally or
 something like that.  Also, the new comment paragraph is bad because it
 explains who must pass true/false, instead of what's the effect of each
 value (and let the callers choose which value to pass).
 
 I would create a separate function to implement this, maybe
 WarnUnlessInTransactionBlock() or something like that.  That would make
 the patch a good deal smaller (because not changing existing callers).

Good points.  I have modified the attached patch to do as you suggested.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/ref/abort.sgml b/doc/src/sgml/ref/abort.sgml
new file mode 100644
index 246e8f8..f3a2fa8
*** a/doc/src/sgml/ref/abort.sgml
--- b/doc/src/sgml/ref/abort.sgml
*** ABORT [ WORK | TRANSACTION ]
*** 63,70 ****
   </para>
  
   <para>
!    Issuing <command>ABORT</> when not inside a transaction does
!    no harm, but it will provoke a warning message.
   </para>
   </refsect1>
  
--- 63,69 ----
   </para>
  
   <para>
!    Issuing <command>ABORT</> outside of a transaction block has no effect.
   </para>
   </refsect1>
  
diff --git a/doc/src/sgml/ref/rollback.sgml b/doc/src/sgml/ref/rollback.sgml
new file mode 100644
index b265545..4f79621
*** a/doc/src/sgml/ref/rollback.sgml
--- b/doc/src/sgml/ref/rollback.sgml
*** ROLLBACK [ WORK | TRANSACTION ]
*** 59,66 ****
   </para>
  
   <para>
!    Issuing <command>ROLLBACK</> when not inside a transaction does
!    no harm, but it will provoke a warning message.
   </para>
   </refsect1>
  
--- 59,66 ----
   </para>
  
   <para>
!    Issuing <command>ROLLBACK</> outside of a transaction
!    block has no effect.
   </para>
   </refsect1>
  
diff --git a/doc/src/sgml/ref/set.sgml b/doc/src/sgml/ref/set.sgml
new file mode 100644
index 6290c9d..5a84f69
*** a/doc/src/sgml/ref/set.sgml
--- b/doc/src/sgml/ref/set.sgml
*** SET [ SESSION | LOCAL ] TIME ZONE { rep
*** 110,118 ****
      <para>
       Specifies that the command takes effect for only the current
       transaction.  After <command>COMMIT</> or <command>ROLLBACK</>,
!      the session-level setting takes effect again.
!      <productname>PostgreSQL</productname> reports an error if
!      <command>SET LOCAL</> is used outside a transaction block.
      </para>
     </listitem>
    </varlistentry>
--- 110,117 ----
      <para>
       Specifies that the command takes effect for only the current
       transaction.  After <command>COMMIT</> or <command>ROLLBACK</>,
!      the session-level setting takes effect again.  This has no effect
!      outside of a transaction block.
      </para>
     </listitem>
    </varlistentry>
diff --git a/doc/src/sgml/ref/set_constraints.sgml b/doc/src/sgml/ref/set_constraints.sgml
new file mode 100644
index 895a5fd..a33190c
*** a/doc/src/sgml/ref/set_constraints.sgml
--- b/doc/src/sgml/ref/set_constraints.sgml
*** SET CONSTRAINTS { ALL | replaceable cla
*** 99,108 ****
  
   <para>
    This command only alters the behavior of constraints within the
!   current transaction. Thus, if you execute this command outside of a
!   transaction block
!   (<command>BEGIN</command>/<command>COMMIT</command> pair), it will
!   generate an error.
   </para>
   </refsect1>
  
--- 99,105 ----
  
   <para>
    This command only alters the behavior of constraints within the
!   current transaction.  This has no effect outside of a transaction block.
   </para>
   </refsect1>
  
diff --git a/doc/src/sgml/ref/set_transaction.sgml b/doc/src/sgml/ref/set_transaction.sgml
new file mode 100644
index 391464a..e90ff4a
*** a/doc/src/sgml/ref/set_transaction.sgml
--- b/doc/src/sgml/ref/set_transaction.sgml
*** SET SESSION CHARACTERISTICS AS TRANSACTI
*** 185,191 ****
   <para>
    If <command>SET TRANSACTION</command> is executed without a prior
    <command>START TRANSACTION</command> or <command>BEGIN</command>,
!   it will generate an error.
   </para>
  
   <para>
--- 185,191 ----
   <para>
    If <command>SET TRANSACTION</command> is executed without a prior
    <command>START TRANSACTION</command> or <command>BEGIN</command>,
!   it will have no effect.
   </para>
  
   <para>
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
new file mode 100644
index 0591f3f..bab048d
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
*** static void CallSubXactCallbacks(SubXact
*** 265,270 
--- 265,272 
  	 

Re: [HACKERS] Building on S390

2013-11-22 Thread Michael Meskes
On Fri, Nov 22, 2013 at 11:27:45AM -0500, Tom Lane wrote:
 I think this is probably nonsense.  I spent ten years maintaining Postgres
 for Red Hat, and I never saw any such failure on s390 in their packages.
 If -fpic weren't good enough for shared libraries on s390, how'd any of
 those builds get through their regression tests?

You've got a point here.

 It may well be that *mixing* -fpic and -fPIC is a bad idea, but I'd say
 that points to an error in something XC is doing, because the core
 Postgres build doesn't use -fPIC anywhere for Linux/s390, AFAICS.

I actually only compared to the Debian build which *does* have -fPIC and indeed
it seems it adds -fPIC unconditionally. But then the PostgreSQL package works
flawlessly which obviously points to XC for the problem. I give you that.

 Furthermore, if we change that convention now, we're going to increase
 the risk of such mixing failures for other people.

Sure, but if this is a bug we should. I'm not saying it is, I simply don't know.

The thread is starting with my email here
http://lists.debian.org/debian-s390/2013/10/msg8.html and the reply said:

It uses -fpic instead of -fPIC.

No, I'm not shortening that email reply here. :)

Checking the Debian logs it appears that all calls use *both* which seems to do
the right thing. And yes, it appears there is a change in XC that makes it
break. But still, I would think there has to be a correct set of options.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-11-22 Thread Andrew Gierth
 Tom == Tom Lane t...@sss.pgh.pa.us writes:

 Tom [ I assume you forgot a create type footype here ]

yeah, sorry

 Tom Well, it's not insane on its face.  The rowtype of f in the
 Tom first example is necessarily a built-on-the-fly record, but in
 Tom the second case using the properties of the underlying named
 Tom composite type is possible, and consistent with what happens in
 Tom 9.3 and earlier.  (Not that I'm claiming we were or are totally
 Tom consistent ...)

Right, but your changes to the code make it look like there was an
intended change there - with the scan type tupdesc being forced to
RECORD type and its column names changed.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-11-22 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 12:17:41PM -0500, Bruce Momjian wrote:
 Good points.  I have modified the attached patch to do as you suggested.

Also, I have read through the thread and summarized the positions of the
posters:

	                 9.3      WARNING                 ERROR
	SET              none     Tom, DavidJ, AndresF    Robert, Kevin
	SAVEPOINT        error                            Tom, DavidJ, Robert,
	                                                  AndresF, Kevin
	LOCK, DECLARE    error                            Tom, DavidJ, Robert,
	                                                  AndresF, Kevin

Everyone seems to agree that SAVEPOINT, LOCK, and DECLARE should remain
as errors.  Everyone also seems to agree that BEGIN and COMMIT should
remain warnings, and ABORT should be changed from notice to warning.

Our only disagreement seems to be how to handle the SET commands, which
used to report nothing.  Would anyone else like to correct or express an
opinion?  Given the current vote count and backward-compatibility,
warning seems to be the direction we are heading.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] COPY table FROM STDIN doesn't show count tag

2013-11-22 Thread Rajeev rastogi
On 19 November 2013, Amit Khandekar wrote:
On 18 November 2013 18:00, Rajeev rastogi 
rajeev.rast...@huawei.com wrote:
On 18 November 2013, Amit Khandekar wrote:
 Please find the patch for the same and let me know your suggestions.
In this call :
  success = handleCopyIn(pset.db,
                         pset.cur_cmd_source,
                         PQbinaryTuples(*results), &intres) && success;
  if (success && intres)
      success = PrintQueryResults(intres);
Instead of handling of the result status this way, what if we use the 
ProcessResult()  argument 'result' to pass back the COPY result status to 
the caller ? We already call PrintQueryResults(results) after the 
ProcessResult() call. So we don't have to  have a COPY-specific 
PrintQueryResults() call. Also, if there is a subsequent SQL command in the 
same query string, the consequence of the patch is that the client prints 
both COPY output and the last command output. So my suggestion would also 
allow us to be consistent with the general behaviour that only the last SQL 
command output is printed in case of multiple SQL commands. Here is how it 
gets printed with your patch :
 Thank you for valuable comments. Your suggestion is absolutely correct.
 psql -d postgres -c "\copy tab from '/tmp/st.sql' delimiter ' '; insert 
 into tab values ('lll', 3)"
COPY 1
INSERT 0 1
This is not harmful, but just a matter of consistency.
I hope you meant to write test case as psql -d postgres -c \copy tab from 
stdin; insert into tab values ('lll', 3), as if we are reading from file, 
then the above issue does not come.
I meant COPY with a slash. \COPY is equivalent to COPY FROM STDIN. So the 
issue can also be reproduced by :
\COPY tab from 'client_filename' ...

 I have modified the patch as per your comment and same is attached with this 
 mail.

Thanks. The COPY FROM looks good.
OK..Thanks

With the patch applied, \COPY TO 'data_file' command outputs the  COPY status 
into the data file, instead of printing it in the psql session.

postgres=# \copy tab to '/tmp/fout';
postgres=#

$ cat /tmp/fout
ee  909
COPY 1
This is probably because client-side COPY overrides the pset.queryFout with 
its own destination file, and while printing the COPY status, the overridden 
file pointer is not yet reverted back.

This looks to be an issue without our new patch also. Like I tried following 
command and output was as follows:
rajeev@linux-ltr9:~/9.4gitcode/install/bin> ./psql -d postgres -c "\copy tbl to 
'new.txt';insert into tbl values(55);"
rajeev@linux-ltr9:~/9.4gitcode/install/bin> cat new.txt
5
67
5
67
2
2
99
1
1
INSERT 0 1

I have fixed the same as per your suggestion by resetting the pset.queryFout 
after the function call handleCopyOut.

Please let me know in-case of any other issues.

Thanks and Regards,
Kumar Rajeev Rastogi


copydefectV3.patch
Description: copydefectV3.patch



Re: [HACKERS] PostgreSQL Service on Windows does not start. ~ is not a valid Win32 application

2013-11-22 Thread Rajeev rastogi
ON 11 November 2013, Naoya Anzai Wrote:

 
 Hi Amit,
 
  I have uploaded your patch for next commit fest, hope you can support
  it if there is any feedback for your patch by reviewer/committer.
 Thanks! Okay, I will support you.

1. Patch applies cleanly to master HEAD.
2. No Compilation Warning.
3. It works as per the patch expectation.

One suggestion:
Instead of using sizeof(cmdLine),
a. Can't we use strlen  (hence small 'for' loop).
b. Or use memmove to move one byte. 
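
To illustrate suggestion (b), shifting the tail of a NUL-terminated buffer
right by one byte wants memmove rather than memcpy because the regions
overlap; the helper name and context below are made up, since the patch
itself isn't shown in this mail:

    #include <string.h>

    /* Open a one-byte gap at "pos" and drop in the new character.
     * The caller must ensure the buffer has room for one extra byte. */
    static void
    insert_char_at(char *buf, size_t pos, char c)
    {
        size_t  tail = strlen(buf + pos) + 1;   /* include terminating NUL */

        memmove(buf + pos + 1, buf + pos, tail);
        buf[pos] = c;
    }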

Thanks and Regards,
Kumar Rajeev Rastogi




Re: [HACKERS] Add min and max execute statement time in pg_stat_statement

2013-11-22 Thread Rajeev rastogi
On 14 November 2013, Kondo Mitsumasa wrote:

 Subject: Re: [HACKERS] Add min and max execute statement time in
 pg_stat_statement
 
 Oh! Sorry...
 I forgot to attach my latest patch.

* Is the patch in a patch format which has context?
No

* Does it apply cleanly to the current git master?
Yes. 

* Does it compile without any warning?
No. Compilation fails on the Windows platform. 
.\contrib\pg_stat_statements\pg_stat_statements.c(1232) : error 
C2102: '&' requires l-value
1232 Line is: values[i++] = Float8GetDatumFast(sqrt(sqtime - avtime * 
avtime));
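
A plausible reason for the error, and a sketch of one possible fix (this is a
guess, not the patch author's code): Float8GetDatumFast(X) can expand to
PointerGetDatum(&(X)) on builds where float8 is not passed by value, so its
argument must be an l-value; computing the expression into a local double
first satisfies that.

    /* sketch only, at the line reported above */
    double      stddev = sqrt(sqtime - avtime * avtime);

    values[i++] = Float8GetDatumFast(stddev);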

Thanks and Regards,
Kumar Rajeev Rastogi




Re: [HACKERS] COPY table FROM STDIN doesn't show count tag

2013-11-22 Thread Rajeev rastogi
On 20 November, Amit Khandekar wrote:
I hope you meant to write test case as psql -d postgres -c \copy tab from 
stdin; insert into tab values ('lll', 3), as if we are reading from file, 
then the above issue does not come.
I meant COPY with a slash. \COPY is equivalent to COPY FROM STDIN. So the 
issue can also be reproduced by :
\COPY tab from 'client_filename' ...

 I have modified the patch as per your comment and same is attached with 
 this mail.

Thanks. The COPY FROM looks good.
OK..Thanks
With the patch applied, \COPY TO 'data_file' command outputs the  COPY 
status into the data file, instead of printing it in the psql session.
postgres=# \copy tab to '/tmp/fout';
postgres=#
 $ cat /tmp/fout
ee  909
COPY 1
This is probably because client-side COPY overrides the pset.queryFout with 
its own destination file, and while printing the COPY status, the overridden 
file pointer is not yet reverted back.
 This looks to be an issue without our new patch also. Like I tried following 
 command and output was as follows:
rajeev@linux-ltr9:~/9.4gitcode/install/bin> ./psql -d postgres -c "\copy tbl to 
'new.txt';insert into tbl values(55);"
rajeev@linux-ltr9:~/9.4gitcode/install/bin> cat new.txt
5
67
5
67
2
2
99
1
1
INSERT 0 1

Ok. Yes it is an existing issue. Because we are now printing the COPY status 
even for COPY TO, the existing issue surfaces too easily with the patch. \COPY 
TO is a pretty common scenario. And it does not need a subsequent command 
to reproduce the issue. Just a single \COPY TO command reproduces the issue.
I have fixed the same as per your suggestion by resetting the pset.queryFout 
after the function call handleCopyOut.
!  pset.queryFout = stdout;

The original pset.queryFout may not be stdout. psql -o option can override the 
stdout default.

I think solving the \COPY TO part is going to be a  different (and an 
involved) issue to solve than the COPY FROM. Even if we manage to revert back 
the queryFout, I think ProcessResult() is not the right place to do it. 
ProcessResult() should not
 assume that  somebody else has changed queryFout. Whoever has changed it 
 should revert it. Currently, do_copy() is indeed doing this correctly:

  save_file = *override_file;
  *override_file = copystream;
  success = SendQuery(query.data);
  *override_file = save_file;

But the way SendQuery() itself processes the results and prints them, is 
conflicting with the above.

So I think it is best to solve this as a different issue, and we should, for 
this commitfest, fix only COPY FROM. Once the existing \COPY issue is solved, 
only then we can start printing the \COPY TO status as well.

You mean to say that I should change the patch to keep only COPY FROM related 
changes and remove changes related to COPY TO.
If yes, then I shall change the patch accordingly  and also mention same in 
documentation also.
Please let me know about this so that I can share the modified patch.

Thanks and Regards,
Kumar Rajeev Rastogi


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Bruce Momjian

Sending to hackers for comment; it seems setting
default_transaction_read_only to true via ALTER database/user or
postgresql.conf can cause pg_dump, pg_dumpall, and pg_upgrade failures. 
We are considering the right solution:

---

On Fri, Nov 22, 2013 at 01:32:30PM -0500, Bruce Momjian wrote:
 On Thu, Nov 21, 2013 at 06:22:50AM -0800, Kevin Grittner wrote:
   Well, pg_upgrade can't handle every possible configuration.  How
   do we even restore into such a database?  You marked the database
   as read-only, and pg_upgrade is going to honor that and not
   modify it.
  
  That interpretation makes no sense to me.  I know of users who have
  databases where 90% of their transactions don't modify data, so
  they set the *default* for transactions to read only, and override
  that for transactions that are read write.  The default is not, and
  never has been, a restriction on what is allowed -- it is a default
  that is quite easy to override.  If we have tools that don't handle
  that correctly, I consider that a bug.
 
 OK, this is good information to hear.
 
   I believe a pg_dumpall restore might fail in the same way.
  
  Then it should also be fixed.
 
 Yes, that is easy to do.
 
   You need to change the default on the old cluster before
   upgrading.  It is overly cumbersome to set the
   default_transaction_read_only for every database connection
  
  Why is this any different from other settings we cover at the front
  of pg_dump output?:
  
  | SET statement_timeout = 0;
  | SET lock_timeout = 0;
  | SET client_encoding = 'UTF8';
  | SET standard_conforming_strings = on;
  | SET check_function_bodies = false;
  | SET client_min_messages = warning;
  
   and there are many other settings that might also cause failures.
  
  You mean, like the above?
  
   What you might be able to do is to set PGOPTIONS to -c
   default_transaction_read_only=false and run pg_upgrade.  If more
   people report this problem, I could document this work-around.
  
  This is most likely to bite those using serializable transactions
  for data integrity, because declaring transactions read only makes
  a huge difference in performance in those cases.  That is where I
  have seen people set the default for read only to on; they want to
  explicitly set it off only when needed.
  
  I would be happy to supply a patch to treat
  default_transaction_read_only the same as statement_timeout or
  standard_conforming_strings in pg_dump and related utilities. 
  Since it causes backup/restore failure on perfectly valid databases
  I even think this is a bug which merits back-patching.
 
 Not sure about backpatching.  default_transaction_read_only has been
 around since 7.4.  Setting it to true would cause pg_dump to fail unless
 you changed the database setting, and pg_dumpall would fail completely
 as there is no way to turn off the database setting.
 
 The problem is that I don't remember any report of this failing in
 pg_dump, pg_dumpall, or pg_upgrade, so saying it is a major issue is
 hard to accept.
 
 However, looking forward, I think we should add it to pg_dump (and hence
 pg_dumpall), and we should fix pg_upgrade so it is more reliable.  I am
 thinking we should either document in pg_upgrade the use of PGOPTIONS to
 avoid this issue, or have pg_upgrade append to PGOPTIONS in its
 environment to use some of pg_dump's resets, and that will be passed to
 libpq connections, psql, and all the utilities pg_upgrade calls.
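
As a sketch of that last idea (illustrative only, using POSIX setenv, not
pg_upgrade's actual code), appending the reset to PGOPTIONS before any
connections are made would look roughly like:

    #include <stdio.h>
    #include <stdlib.h>

    static void
    force_read_write_connections(void)
    {
        const char *old = getenv("PGOPTIONS");
        const char *extra = "-c default_transaction_read_only=false";
        char        buf[1024];

        if (old && *old)
            snprintf(buf, sizeof(buf), "%s %s", old, extra);
        else
            snprintf(buf, sizeof(buf), "%s", extra);
        setenv("PGOPTIONS", buf, 1);    /* inherited by psql/pg_dump children */
    }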

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] [PoC] pgstattuple2: block sampling to reduce physical read

2013-11-22 Thread firoz e v
On 16/09/13 16:20, Satoshi Nagayasu wrote:
 Thanks for checking. Fixed to eliminate SnapshotNow.

Looking forward to getting a new patch incorporating the comments that are 
already given in the following mails:

1. Jaime Casanova: The name pgstattuple2, doesn't convince me... maybe you can 
use pgstattuple() if you use a second argument (percentage of the sample) to 
overload the function.
(http://www.postgresql.org/message-id/5265ad16.3090...@catalyst.net.nz)

The comment related to having an argument, to mention the sampling number, is 
also given by Greg Smith: There should be an input parameter to the function 
for how much sampling to do
(http://www.postgresql.org/message-id/51ee62d4.7020...@2ndquadrant.com)

2. Yourself: I think it could be improved by sorting sample block numbers 
before physical block reads in order to eliminate random access on the disk.
(http://www.postgresql.org/message-id/525779c5.2020...@uptime.jp) for which 
Mark Kirkwood has given a rough patch.
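
As a side note on comment 2, the sorting step itself is cheap and standalone;
a minimal sketch (illustrative only, not taken from the patch in question):

    #include <stdlib.h>

    typedef unsigned int BlockNumber;   /* matches PostgreSQL's 32-bit block numbers */

    static int
    blkno_cmp(const void *a, const void *b)
    {
        BlockNumber ba = *(const BlockNumber *) a;
        BlockNumber bb = *(const BlockNumber *) b;

        return (ba > bb) - (ba < bb);   /* avoids overflow of plain subtraction */
    }

    /* sort the sampled block numbers so the reads sweep the relation in order */
    static void
    sort_sample_blocks(BlockNumber *blocks, size_t nblocks)
    {
        qsort(blocks, nblocks, sizeof(BlockNumber), blkno_cmp);
    }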

Regards,
Firoz EV


Re: [HACKERS] Logging WAL when updating hintbit

2013-11-22 Thread Dilip kumar
On 19 November 2013 22:19, Sawada Masahiko Wrote

 
  Thank you for comment.
  Actually, I had thought to add separate parameter.
 
  I think that he said that if you can proof that amount of WAL is
  almost same and without less performance same as before, you might
 not
  need to separate parameter in your patch.
 
 
 Thanks!
 I took it wrong.
 I think that there are quite a few difference amount of WAL.
 
  Did you test about amount of WAL size in your patch?
 
 Not yet. I will do that.

1. Patch applies cleanly to master HEAD.
2. No Compilation Warning.
3. It works as per the patch expectation.

Some Suggestion:
1. Add new WAL level (all) in comment in postgresql.conf 
   wal_level = hot_standby # minimal, archive, or 
hot_standby


Performance Test Result:
Run with pgbench for 300 seconds

WAL level : hot_standby
WAL Size  :   111BF8A8
TPS   :   125

WAL level : all
WAL Size  :   11DB5AF8
TPS   :   122 

* TPS is almost constant but WAL size is increased around 11M.

This is the first level of observation, I will continue to test few more 
scenarios including performance test on standby.

Regards,
Dilip Kumar












Re: [HACKERS] Logging WAL when updating hintbit

2013-11-22 Thread Dilip kumar
On 20 November 2013 22:12, Sawada Masahiko Wrote

 
  1. Patch applies cleanly to master HEAD.
  2. No Compilation Warning.
  3. It works as per the patch expectation.
 
  Some Suggestion:
  1. Add new WAL level (all) in comment in postgresql.conf
 wal_level = hot_standby # minimal, archive,
 or hot_standby
 
 Thank you for reviewing the patch.
 Yes, I will do that. And I'm going to implement documentation patch.

OK, once I get it, I will review the same.


 
  Performance Test Result:
  Run with pgbench for 300 seconds
 
  WAL level : hot_standby
  WAL Size  :   111BF8A8
  TPS   :   125
 
  WAL level : all
  WAL Size  :   11DB5AF8
  TPS   :   122
 
  * TPS is almost constant but WAL size is increased around 11M.
 
  This is the first level of observation, I will continue to test few
 more scenarios including performance test on standby.
 
 Thank you for performance testing.
 According to the test result, TPS of 'all' is lower than 'hot_standby', but
 WAL size is increased.
 I think that it should be a separate parameter.
 And TPS on the master side is almost constant, so this patch might have
 several benefits for performance improvement on the standby side if the
 result of the performance test on the standby side is improved.

[Performance test on standby:]

I have executed pgbench on master with WAL LEVEL hot_stanby and all option, 
and after that run pgbench on standby with select-only option.

WAL LEVEL (on master): hot_standby  
Select TPS (on standby)  : 4098

WAL LEVEL (on master): all  
Select TPS (on standby)  : 4115


Regards,
Dilip










[HACKERS] Why is UPDATE with column-list syntax not implemented

2013-11-22 Thread AK
9.3 documentation says:

According to the standard, the column-list syntax should allow a list of
columns to be assigned from a single row-valued expression, such as a
sub-select:

UPDATE accounts SET (contact_last_name, contact_first_name) =
(SELECT last_name, first_name FROM salesmen
 WHERE salesmen.id = accounts.sales_id);
This is not currently implemented — the source must be a list of independent
expressions.

Why is this not implemented? Is it considered inconvenient to use, or
difficult to implement, or not important enough, or some other reason?







Re: [HACKERS] COPY table FROM STDIN doesn't show count tag

2013-11-22 Thread Rajeev rastogi
On 21 November 2013, Amit Khandekar 
amit.khande...@enterprisedb.com wrote:
Ok. we will then first fix the \COPY TO issue where it does not revert back 
the overriden psql output file handle. Once this is solved, fix for both COPY 
FROM and COPY TO, like how it is done in the patch earlier 
(copydefectV2.patch).

I analyzed the solution to fix the \COPY TO issue but unfortunately I observed that 
do_copy is already resetting the value of cur_cmd_source and queryFout, but the 
result status is printed before that. So we'll have to reset the value 
before the result status is displayed.

So as other alternative solutions, I have two approaches:

1.  We can store current file destination queryFout in some local variable 
and pass the same to SendQuery function as a parameter. Same can be used to 
reset the value of queryFout after return from ProcessResult

From all other callers of SendQuery , we can pass NULL value for this new 
parameter.

2.  We can add new structure member variable FILE *prevQueryFout in 
structure struct _psqlSettings, which hold the value of queryFout before 
being changed in do_copy. And then same can be used to reset value in SendQuery 
or ProcessResult.

Please let me know which approach is OK, or if any other approach is suggested.
Based on the feedback I shall prepare the new patch and share the same.
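
To make approach 2 concrete, a rough sketch of the idea (using the
prevQueryFout name proposed above; everything else is illustrative, not a
tested psql patch):

    /* in do_copy(), before overriding the output stream for \copy ... to 'file' */
    pset.prevQueryFout = pset.queryFout;
    pset.queryFout = copystream;

    /* in the result-processing path, once the client-side COPY has finished */
    if (pset.prevQueryFout)
    {
        pset.queryFout = pset.prevQueryFout;
        pset.prevQueryFout = NULL;
    }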

Thanks and Regards,
Kumar Rajeev Rastogi




Re: [HACKERS] Status of FDW pushdowns

2013-11-22 Thread David Fetter
On Fri, Nov 22, 2013 at 08:25:05AM -0600, Merlin Moncure wrote:
 On Thu, Nov 21, 2013 at 6:43 PM, Shigeru Hanada
 shigeru.han...@gmail.com wrote:
  2013/11/22 Tom Lane t...@sss.pgh.pa.us:
  Merlin Moncure mmonc...@gmail.com writes:
  On Thu, Nov 21, 2013 at 9:05 AM, Bruce Momjian br...@momjian.us wrote:
  I know join pushdowns seem insignificant, but it helps to restrict what
  data must be passed back because you would only pass back joined rows.
 
  By 'insignificant' you mean 'necessary to do any non-trivial real
  work'.   Personally, I'd prefer it if FDW was extended to allow
  arbitrary parameterized queries like every other database connectivity
  API ever made ever.
 
  [ shrug... ]  So use dblink.  For better or worse, the FDW stuff is
  following the SQL standard's SQL/MED design, which does not do it
  like that.
 
  Pass-through mode mentioned in SQL/MED standard might be what he wants.
 
 happen to have a link handy?

http://www.wiscorp.com/sql20nn.zip

You'll want to look at the PDF with MED in its title.

Passthrough mode, which is how the standard handles this problem, is
basically a thing where you set it to be on, then everything you send
until setting it to off is passed through to the remote side.  The
people writing the standard didn't think too much about the
possibility that the remote side might speak a broader or different
dialect of SQL from the local server.  They also didn't imagine cases
where what's being passed isn't SQL at all.

In addition to breaking any possible parser, the feature as
described in the standard is just ripe for un-patchable exploits *in
its design*.

Of all the misdesign-by-committee contained in the standard, this
piece is far and away the stupidest I've encountered to date.  We
should not even vaguely attempt to implement it.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] MultiXact pessmization in 9.3

2013-11-22 Thread Alvaro Herrera
Andres Freund wrote:
 On 2013-11-22 12:04:31 -0300, Alvaro Herrera wrote:

 Yes, somewhat: <9.3 GetMultiXactIdMembers() didn't do any work if the
 multixact was old enough:
 	if (MultiXactIdPrecedes(multi, OldestVisibleMXactId[MyBackendId]))
 	{
 		debug_elog2(DEBUG2, "GetMembers: it's too old");
 		*xids = NULL;
 		return -1;
 	}
 so, in OLTP style workloads, doing a heap_lock_tuple() often didn't have
 to perform any IO when locking a tuple that previously had been locked
 using a multixact. We knew that none of the members could still be
 running and thus didn't have to ask the SLRU for them since a
 not-running member cannot still have a row locked.

Right.

 Now, fkey locks changed that because for !HEAP_XMAX_LOCK_ONLY mxacts, we
 need to look up the members to get the update xid and check whether it
 has committed or aborted, even if we know that it isn't currently
 running anymore due to OldestVisibleMXactId. But there's absolutely no
 need to do that for HEAP_XMAX_LOCK_ONLY mxacts, the old reasoning for
 not looking up the members if old is just fine.

Correct.  The only difficulty here is that we would need to pass down
the fact that we know for certain that this is only a locking Multixact.
There are some callers that go to it indirectly via MultiXactIdWait or
MultiXactIdExpand, but now that I look I think it's fine for those to
pass false (i.e. assume there might be an update and disable the
optimization), since those aren't hot compared to the other cases.

This patch implements this idea, but I haven't tested it much beyond
compiling and ensuring it passes the existing tests.

Regarding the outdated comment, I had a rewritten version of that
somewhere which I evidently forgot to push :-(  I noticed it was
outdated a couple of weeks after I pushed the main patch.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
*** a/contrib/pgrowlocks/pgrowlocks.c
--- b/contrib/pgrowlocks/pgrowlocks.c
***
*** 168,174  pgrowlocks(PG_FUNCTION_ARGS)
  
  allow_old = !(infomask & HEAP_LOCK_MASK) &&
  	(infomask & HEAP_XMAX_LOCK_ONLY);
! nmembers = GetMultiXactIdMembers(xmax, &members, allow_old);
  if (nmembers == -1)
  {
  	values[Atnum_xids] = "{0}";
--- 168,175 ----
  
  allow_old = !(infomask & HEAP_LOCK_MASK) &&
  	(infomask & HEAP_XMAX_LOCK_ONLY);
! nmembers = GetMultiXactIdMembers(xmax, &members, allow_old,
!  false);
  if (nmembers == -1)
  {
  	values[Atnum_xids] = "{0}";
*** a/src/backend/access/heap/heapam.c
--- b/src/backend/access/heap/heapam.c
***
*** 3987,3993  l3:
  			 * the case, HeapTupleSatisfiesUpdate would have returned
  			 * MayBeUpdated and we wouldn't be here.
  			 */
! 			nmembers = GetMultiXactIdMembers(xwait, &members, false);
  
  			for (i = 0; i < nmembers; i++)
  			{
--- 3987,3995 ----
  			 * the case, HeapTupleSatisfiesUpdate would have returned
  			 * MayBeUpdated and we wouldn't be here.
  			 */
! 			nmembers =
! 				GetMultiXactIdMembers(xwait, &members, false,
! 									  HEAP_XMAX_IS_LOCKED_ONLY(infomask));
  
  			for (i = 0; i < nmembers; i++)
  			{
***
*** 4008,4014  l3:
  }
  			}
  
! 			pfree(members);
  		}
  
  		/*
--- 4010,4017 
  }
  			}
  
! 			if (members)
! pfree(members);
  		}
  
  		/*
***
*** 4157,4163  l3:
   * been the case, HeapTupleSatisfiesUpdate would have returned
   * MayBeUpdated and we wouldn't be here.
   */
! nmembers = GetMultiXactIdMembers(xwait, &members, false);
  
  if (nmembers >= 0)
  {
--- 4160,4168 
   * been the case, HeapTupleSatisfiesUpdate would have returned
   * MayBeUpdated and we wouldn't be here.
   */
! nmembers =
! 	GetMultiXactIdMembers(xwait, &members, false,
! 		  HEAP_XMAX_IS_LOCKED_ONLY(infomask));
  
  if (nmembers >= 0)
  {
***
*** 5217,5223  GetMultiXactIdHintBits(MultiXactId multi, uint16 *new_infomask,
  	 * We only use this in multis we just created, so they cannot be values
  	 * pre-pg_upgrade.
  	 */
! 	nmembers = GetMultiXactIdMembers(multi, &members, false);
  
  	for (i = 0; i < nmembers; i++)
  	{
--- 5222,5228 
  	 * We only use this in multis we just created, so they cannot be values
  	 * pre-pg_upgrade.
  	 */
! 	nmembers = GetMultiXactIdMembers(multi, &members, false, false);
  
  	for (i = 0; i < nmembers; i++)
  	{
***
*** 5293,5299  MultiXactIdGetUpdateXid(TransactionId xmax, uint16 t_infomask)
  	 * Since we know the LOCK_ONLY bit is not set, this cannot be a multi from
  	 * pre-pg_upgrade.
  	 */
! 	nmembers = GetMultiXactIdMembers(xmax, &members, false);
  
  	if (nmembers > 0)
  	{
--- 5298,5304 
  	 * Since we know the LOCK_ONLY bit is not set, this cannot be a multi from
  	 * pre-pg_upgrade.
  	 */
! 	nmembers = GetMultiXactIdMembers(xmax, &members, false, 

Re: [HACKERS] new unicode table border styles for psql

2013-11-22 Thread Pavel Stehule
2013/11/22 Alvaro Herrera alvhe...@2ndquadrant.com

 Pavel Stehule escribió:

  2013/11/21 Peter Eisentraut pete...@gmx.net

   Maybe make the border setting a string containing the various
 characters
   by index.  Then everyone can create their own crazy borders.
  
  I seriously thought about it, but not sure if it is a good way.

 How about having a single unicode line style, and then have a
 different \pset setting to determine exactly what chars to print?  This
 wouldn't allow for programmability, but it seems better UI to me.
 This proliferation of unicode line style names seems odd.


-1

After thinking about it I don't see any value for the common user. Users like
you, me, and Merlin are able to parametrize the output or patch the source code.

Any parametrization expects some secure store that will support exchange of
styles. And it expects a robust parser of unicode strings, or ascii strings
with unicode escaped chars.

We cannot parse escaped unicode chars on the client side now, and the cost of
the parser is higher than the benefit of externally parametrized borders.

Regards

Pavel



 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] Replication Node Identifiers and crashsafe Apply Progress

2013-11-22 Thread Robert Haas
On Thu, Nov 21, 2013 at 8:26 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-21 08:22:05 -0500, Robert Haas wrote:
 On Thu, Nov 21, 2013 at 6:15 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
   WRT performance: I agree that fixed-width identifiers are more
   performant, that's why I went for them, but I am not sure it's that
   important. The performance sensitive parts should all be done using the
   internal id the identifier maps to, not the public one.
 
  But I thought the internal identifier was exactly what we're creating.
 
  Sure. But how often are we a) going to create such an identifier b)
  looking it up?

 Never.  Make that the replication solution's problem.  Make the core
 support deal only with UUIDs or pairs of 64-bit integers or something
 like that, and let the replication solution decide what they mean.

 I think we're misunderstanding each other. I was commenting on your fear
 that strings longer than NAMEDATALEN or something would be bad for
 performance - which I don't think is very relevant because the lookups
 from public to internal identifier shouldn't be in any performance
 critical path.

 I personally would prefer a string because it'd allow me to build an
 identifier using the criterions I'd originally outlined outside of this
 infrastructure.

Yeah, there's some confusion here.  I don't care at all about the
performance characteristics of long strings here, because we shouldn't
be using them anywhere in the core code.  What I do care about is
making sure that whatever core support we use here is agnostic to how
the internal identifiers - relatively short bit strings - are
generated.  The patch as proposed puts forward a particular way of
doing that, and I think that neither that method *nor any other*
should be part of core.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-11-22 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 Tom == Tom Lane t...@sss.pgh.pa.us writes:
  Tom Well, it's not insane on its face.  The rowtype of f in the
  Tom first example is necessarily a built-on-the-fly record, but in
  Tom the second case using the properties of the underlying named
  Tom composite type is possible, and consistent with what happens in
  Tom 9.3 and earlier.  (Not that I'm claiming we were or are totally
  Tom consistent ...)

 Right, but your changes to the code make it look like there was an
 intended change there - with the scan type tupdesc being forced to
 RECORD type and its column names changed.

I did set things up so that if you have a RECORD result, the column names
will be those of the query's alias list; this was in response to the
comment in the patch that complained that we were inconsistent about where
we were getting the names from if you had a mix of named-composite
functions and other functions.  I believe what is happening in the case
you show is that the function is returning a composite Datum that's marked
with the composite type's OID, and the upstream consumers are looking at
that, not at the scan tupdesc.  I'm not really excited about tracing down
exactly what the data flow is ...
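
For example, with the multi-argument unnest from this patch, the alias list
supplies the column names when the scan type is an anonymous record (a small
sketch; output abbreviated in comments):

    SELECT * FROM unnest(ARRAY[1,2,3], ARRAY['a','b','c']) AS t(id, label);
    --  id | label
    -- ----+-------
    --   1 | a
    --   2 | b
    --   3 | c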

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replication Node Identifiers and crashsafe Apply Progress

2013-11-22 Thread Andres Freund
On 2013-11-22 14:43:15 -0500, Robert Haas wrote:
 The patch as proposed puts forward a particular way of
 doing that, and I think that neither that method *nor any other*
 should be part of core.

Working on something like that, updated the patch state to waiting on
author.

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Bruce Momjian br...@momjian.us wrote:

 Not sure about backpatching.  default_transaction_read_only has been
 around since 7.4.  Setting it to true would cause pg_dump to fail unless
 you changed the database setting, and pg_dumpall would fail completely
 as there is no way to turn off the database setting.

See the attached patch.  It seems to fix pg_dump and pg_dumpall.  I
don't think it will cause any problem for *dumping* earlier
versions.  Backpatching would mean that if you try to restore a
dump made by 8.4 or later software to a 7.3 or earlier database,
you would get an error; but I don't think that's supported, and I
would be amazed if that were the *only* error you got if you tried
that.

 The problem is that I don't remember any report of this failing in
 pg_dump, pg_dumpall, or pg_upgrade, so saying it is a major issue is
 hard to accept.

Any time that you can appear to successfully dump a database, and
the restore attempt fails, I consider that to be a major issue.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Why is UPDATE with column-list syntax not implemented

2013-11-22 Thread Tom Lane
AK alk...@gmail.com writes:
 9.3 documentation says:
 According to the standard, the column-list syntax should allow a list of
 columns to be assigned from a single row-valued expression, such as a
 sub-select:

 UPDATE accounts SET (contact_last_name, contact_first_name) =
 (SELECT last_name, first_name FROM salesmen
  WHERE salesmen.id = accounts.sales_id);
 This is not currently implemented — the source must be a list of independent
 expressions.

 Why is this not implemented? Is it considered inconvenient to use, or
 difficult to implement. or not important enough, or some other reason?

It's difficult to implement.  You'd need to do some significant
restructuring of the way UPDATE is handled.  Probably someone will
attempt it at some point.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Building on S390

2013-11-22 Thread Tom Lane
Michael Meskes mes...@postgresql.org writes:
 On Fri, Nov 22, 2013 at 11:27:45AM -0500, Tom Lane wrote:
 Furthermore, if we change that convention now, we're going to increase
 the risk of such mixing failures for other people.

 Sure, but if this is a bug we should. I'm not saying it is, I simply don't know.

Well, *if* it's a bug in core PG then we should do something about it,
but at the moment there's no evidence of that.  What seems the most
likely theory here is that the Debian maintainer has broken their package
with an ill-considered patch.  We can't take responsibility for other
people's hacks.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:

 See the attached patch.

Trying that again.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 63a8009..199ffb0 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -2586,6 +2586,9 @@ _doSetFixedOutputState(ArchiveHandle *AH)
 	/* Likewise for lock_timeout */
 	ahprintf(AH, "SET lock_timeout = 0;\n");
 
+	/* Restore will need to write to the target database */
+	ahprintf(AH, "SET default_transaction_read_only = off;\n");
+
 	/* Select the correct character set encoding */
 	ahprintf(AH, "SET client_encoding = '%s';\n",
 			 pg_encoding_to_char(AH->public.encoding));

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Tom Lane
Kevin Grittner kgri...@ymail.com writes:
 Kevin Grittner kgri...@ymail.com wrote:
 See the attached patch.

 Trying that again.

That looks sane for pg_dump, but I would rather have expected that
pg_dumpall would need to emit the same thing (possibly more than once
due to reconnections).

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:

 That looks sane for pg_dump, but I would rather have expected
 that pg_dumpall would need to emit the same thing (possibly more
 than once due to reconnections).

I was kinda surprised myself.  I changed it for pg_dump, tested
that, and then tested pg_dumpall to get a baseline, and the setting
was taken care of.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Building on S390

2013-11-22 Thread Peter Eisentraut
On 11/22/13, 12:41 PM, Michael Meskes wrote:
 Checking the Debian logs it appears that all calls use *both* which seems to do
 the right thing. And yes, it appears there is a change in XC that makes it
 break. But still, I would think there has to be a correct set of options.

According to the Debian build logs, postgres-xc compiles the entire
backend with -fPIC.  Not sure what sense that makes.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Andres Freund
On 2013-11-22 12:45:25 -0800, Kevin Grittner wrote:
 Tom Lane t...@sss.pgh.pa.us wrote:
 
  That looks sane for pg_dump, but I would rather have expected
  that pg_dumpall would need to emit the same thing (possibly more
  than once due to reconnections).
 
 I was kinda surprised myself.  I changed it for pg_dump, tested
 that, and then tested pg_dumpall to get a baseline, and the setting
 was taken care of.

pg_dumpall is lazy and just executes pg_dump for every database, that's
the reason... But are you sure it also unsets default_transaction_read_only for
the roles etc. it creates?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 are you sure it also unsets default_transaction_read_only for
 the roles etc. it creates?

I'm not sure I understand.  Could you give an example of what
you're concerned about?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Andres Freund
On 2013-11-22 13:07:29 -0800, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
 
  are you sure it also unsets default_transaction_read_only for
  the roles etc. it creates?
 
 I'm not sure I understand.  Could you give an example of what
 you're concerned about?

pg_dumpall first spits out global data (users, databases, tablespaces)
and then invokes pg_dump for every single database. So I'd very strongly
suspect that your patch will issue the CREATE ROLE etc. calls without
unsetting default_transaction_read_only.

E.g. output looks like:
--
-- PostgreSQL database cluster dump
--
..
CREATE ROLE andres;
ALTER ROLE andres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN
REPLICATION;
...
\connect postgres

--
-- PostgreSQL database dump
--


CREATE TABLE pgbench_accounts (
aid integer NOT NULL,
bid integer,
abalance integer,
filler character(84)
)
WITH (fillfactor=100);

...
\connect regression
...


Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-11-22 13:07:29 -0800, Kevin Grittner wrote:
 I'm not sure I understand.  Could you give an example of what
 you're concerned about?

 pg_dumpall first spits out global data (users, databases, tablespaces)
 and then invokes pg_dump for every single database. So I'd very strongly
 suspect that your patch will issue the CREATE ROLE etc. calls without
 unsetting default_transaction_read_only.

Yeah, that's what I was wondering about.  I don't think pg_dumpall -g
invokes pg_dump at all, so how could that case work?  Maybe it would
only fail if the postgres database is read-only, though.  Try it with
default-read-only set in postgresql.conf instead of as a database
property.
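
The two variants being compared, roughly (the database name is only an
example):

    -- as a per-database property:
    ALTER DATABASE postgres SET default_transaction_read_only = on;

    -- versus the cluster-wide default, set in postgresql.conf:
    --   default_transaction_read_only = on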

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-22 13:07:29 -0800, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:

 are you sure it also unsets default_transaction_read_only for
 the roles etc. it creates?

 I'm not sure I understand.  Could you give an example of what
 you're concerned about?

 pg_dumpall first spits out global data (users, databases, tablespaces)
 and then invokes pg_dump for every single database. So I'd very strongly
 suspect that your patch will issue the CREATE ROLE etc. calls without
 unsetting default_transaction_read_only.

I changed my postgres database to default to read only (which is
not insane, given that I've seen so many people accidentally run
DDL there rather than in the database they intended), and got these
errors when I used it for my connection to restore pg_dumpall
output:

ERROR:  cannot execute COMMENT in a read-only transaction
ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
ERROR:  cannot execute COMMENT in a read-only transaction
ERROR:  cannot execute REVOKE in a read-only transaction
ERROR:  cannot execute REVOKE in a read-only transaction
ERROR:  cannot execute GRANT in a read-only transaction
ERROR:  cannot execute GRANT in a read-only transaction

Oddly, it didn't complain about creating users within a read-only
transaction.  That seems like a potential bug.

Will look at covering pg_dumpall for that initial connection.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Why is UPDATE with column-list syntax not implemented

2013-11-22 Thread AK
Claudio,

Can you elaborate how rules can help?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-is-UPDATE-with-column-list-syntax-not-implemented-tp5779600p5779896.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Why is UPDATE with column-list syntax not implemented

2013-11-22 Thread AK
Thank you, Tom!



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-is-UPDATE-with-column-list-syntax-not-implemented-tp5779600p5779899.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Andres Freund
On 2013-11-22 13:34:18 -0800, Kevin Grittner wrote:
 Oddly, it didn't complain about creating users within a read-only
 transaction.  That seems like a potential bug.

There's lots of things that escape XactReadOnly. I've thought (and I
think suggested) before that we should put in another layer of defense
by also putting a check in AssignTransactionId(). Imo the compatibility
woes (like not being able to run SELECT txid_current();) are well worth
the nearly ironclad guarantee that we're not writing.
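
For instance, this currently succeeds and assigns an XID even though the
transaction is nominally read only; with a check in AssignTransactionId() it
would fail instead (a sketch of current behaviour, not of the proposed
change):

    BEGIN TRANSACTION READ ONLY;
    SELECT txid_current();   -- allowed today, and it assigns a transaction id
    COMMIT;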

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:

This covers pg_dumpall globals.  Tested with a read-only postgres
database and with default_transaction_read_only = on in the
postgresql.conf file.

It does nothing about pg_upgrade, which is sort of a separate
issue.  My inclination is that connections to the new cluster
should set this and connections to the old should not.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 63a8009..199ffb0 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -2586,6 +2586,9 @@ _doSetFixedOutputState(ArchiveHandle *AH)
 	/* Likewise for lock_timeout */
 	ahprintf(AH, "SET lock_timeout = 0;\n");
 
+	/* Restore will need to write to the target database */
+	ahprintf(AH, "SET default_transaction_read_only = off;\n");
+
 	/* Select the correct character set encoding */
 	ahprintf(AH, "SET client_encoding = '%s';\n",
 			 pg_encoding_to_char(AH->public.encoding));
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 336ae58..4540a19 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -452,6 +452,9 @@ main(int argc, char *argv[])
 	 * database we're connected to at the moment is fine.
 	 */
 
+	/* Restore will need to write to the target database */
+	fprintf(OPF, "SET default_transaction_read_only = off;\n");
+
 	/* Replicate encoding and std_strings in output */
 	fprintf(OPF, "SET client_encoding = '%s';\n",
 			pg_encoding_to_char(encoding));
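
With both hunks applied, the start of the emitted script would look roughly
like this (a sketch; the exact preamble, encoding and role names depend on
the version and the source cluster):

    --
    -- PostgreSQL database cluster dump
    --

    SET default_transaction_read_only = off;
    SET client_encoding = 'UTF8';

    CREATE ROLE andres;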

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Andres Freund
On 2013-11-22 13:34:18 -0800, Kevin Grittner wrote:
 I changed my postgres database to default to read only (which is
 not insane, given that I've seen so many people accidentally run
 DDL there rather than in the database they intended)

FWIW, I am less than convinced that it is correct for pg_dump[all] to
emit SET default_transaction_read_only = off; The user might explicitly
have set that to prevent against somebody restoring into the wrong
database or somesuch. Why is it our business to override such decisions?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Why is UPDATE with column-list syntax not implemented

2013-11-22 Thread Claudio Freire
On Fri, Nov 22, 2013 at 6:36 PM, AK alk...@gmail.com wrote:
 Claudio,

 Can you elaborate how rules can help?


Well... that specific example:

 UPDATE accounts SET (contact_last_name, contact_first_name) =
 (SELECT last_name, first_name FROM salesmen
  WHERE salesmen.id = accounts.sales_id);

Can be rewritten as

UPDATE accounts SET contact_last_name = t.last_name,
contact_first_name = t.first_name
FROM (SELECT salesmen.id as salesmen_id, last_name, first_name FROM salesmen) t
WHERE t.salesmen_id = accounts.sales_id;

That's not 100% general, but it's quite general enough, transforming:

UPDATE T SET (field_list) = (SELECT field_list_b from_expr
WHERE T.F = join_expr filter_expr)

Into

UPDATE T SET field_n = tmp.field_b_n for all n FROM (SELECT
join_expr AS T_F, field_list_b from_expr WHERE
filter_expr) tmp WHERE T.F = tmp.T_F;

That's *almost* a regex.

It's possible the transformation can be done at the AST level more
generally, but I don't know enough of the Postgres parser to go deeper
down that path.  The general idea is that it can be done even more
generally with CTEs, if the WHERE clause terms that relate to the
updated table can be pinpointed and extracted into the CTE (as long as
they're stable).
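
A hedged sketch of that CTE shape, reusing the accounts/salesmen example from
above (an illustration only, not a general rewrite, and it assumes the
extracted terms are stable):

    WITH src AS (
        SELECT id AS sales_id, last_name, first_name
        FROM salesmen
    )
    UPDATE accounts
    SET contact_last_name  = src.last_name,
        contact_first_name = src.first_name
    FROM src
    WHERE src.sales_id = accounts.sales_id;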


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] why semicolon after begin is not allowed in postgresql?

2013-11-22 Thread AK
I am reading the following in the documentation: Tip: A common mistake is to
write a semicolon immediately after BEGIN. This is incorrect and will result
in a syntax error.

So, "common mistake" means semicolons after BEGIN seem consistent to many
people - it seems consistent to me as well. If PostgreSQL allowed them, we
would have one less rule to memorize, shorter documentation, fewer mistakes
and so on. In other words, without this limitation PostgreSQL would be
slightly more useful, right?

What am I missing? Why do we need this rule? How is it making PostgreSQL
better?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/why-semicolon-after-begin-is-not-allowed-in-postgresql-tp5779905.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 FWIW, I am less than convinced that it is correct for
 pg_dump[all] to emit SET default_transaction_read_only = off;

It doesn't change the database setting, just the connection which
is issuing commands to create global object -- ones that don't
reside in the database we are connected to.  As the comment in
pg_dumpall.c says, right above where I added this:

    /*
 * We used to emit \connect postgres here, but that served no purpose
 * other than to break things for installations without a postgres
 * database.  Everything we're restoring here is a global, so whichever
 * database we're connected to at the moment is fine.
 */

 The user might explicitly have set that to prevent against
 somebody restoring into the wrong database or somesuch. Why is it
 our business to override such decisions?

Hmm.  If your argument had been that the user intended that the
database be a read-only database, then I would say that your
argument held no water.  Any user can choose to make any
transaction (or all subsequent transactions on the connection)
read-write any time they want.  The problem with pg_dumpall as it
stands is that it sets the databases to that default and then tries
to load data into them, which fails.
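
In SQL terms (a small sketch):

    -- given a database whose default is read only, e.g.
    --   ALTER DATABASE postgres SET default_transaction_read_only = on;
    -- any user can still opt back in for their own session:
    SET default_transaction_read_only = off;
    CREATE TEMP TABLE scratch (x int);   -- writes succeed again in this session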

But you have a point regarding adding what I proposed to pg_dump. 
The more I think about it, the more I'm inclined to think that
pg_dumpall should insert this right after the \connect to a
database it is going to write to, rather than affecting pg_dump
output at all. That would allow you default protection against
writing pg_dump output to a database that was flagged this way. 
You could get around it by connecting interactively with psql,
issuing a SET command, and bringing in dump output with \i from a
text file.  If we go this way, would we want options on pg_restore
and/or psql to issue this set when reading in a file (or piped
input)?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why semicolon after begin is not allowed in postgresql?

2013-11-22 Thread Mike Blackwell
I believe the section you are reading refers to the BEGIN keyword in the
procedural language plpgsql, not the SQL 'BEGIN' command.  The issue stems
from confusing two distinct languages both of which, along with several
more procedural languages, are documented in the same manual.

__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com


http://www.rrdonnelley.com/
* mike.blackw...@rrd.com*


On Fri, Nov 22, 2013 at 4:24 PM, AK alk...@gmail.com wrote:

 I am reading the following in the documentation: Tip: A common mistake is
 to
 write a semicolon immediately after BEGIN. This is incorrect and will
 result
 in a syntax error.

 So, common mistake means semicolons after BEGIN seem consistent to many
 people - it seems consistent to me as well. If PostgreSql allowed them, we
 would have one less rule to memorize, shorter documentation, less mistakes
 and so on. In other words, without this limitation PostgreSql would be
 slightly more useful, right?

 What am I missing? Why do we need this rule? How is it making PostgreSql
 better?



 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/why-semicolon-after-begin-is-not-allowed-in-postgresql-tp5779905.html
 Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] why semicolon after begin is not allowed in postgresql?

2013-11-22 Thread Adrian Klaver

On 11/22/2013 02:24 PM, AK wrote:

I am reading the following in the documentation: Tip: A common mistake is to
write a semicolon immediately after BEGIN. This is incorrect and will result
in a syntax error.

So, common mistake means semicolons after BEGIN seem consistent to many
people - it seems consistent to me as well. If PostgreSql allowed them, we
would have one less rule to memorize, shorter documentation, less mistakes
and so on. In other words, without this limitation PostgreSql would be
slightly more useful, right?


In Postgresql it is allowed:

test=> BEGIN ;
BEGIN

In plpgsql it is not, which is where you got the above documentation. 
That is because SQL BEGIN != plpgsql BEGIN





What am I missing? Why do we need this rule? How is it making PostgreSql
better?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/why-semicolon-after-begin-is-not-allowed-in-postgresql-tp5779905.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why semicolon after begin is not allowed in postgresql?

2013-11-22 Thread Merlin Moncure
On Fri, Nov 22, 2013 at 4:34 PM, Mike Blackwell mike.blackw...@rrd.com wrote:
 I believe the section you are reading refers to the BEGIN keyword in the
 procedural language plpgsql, not the SQL 'BEGIN' command.  The issue stems
 from confusing two distinct languages both of which, along with several more
 procedural languages, are documented in the same manual.

This is an inherited constraint from Oracle pl/sql which pl/pgsql is, uh,
inspired by.  In pl/sql, all block opening constructs (THEN, LOOP,
BEGIN) do not get semi-colons.  BEGIN is a weird case because it's
(quite unfortunately) also the same thing that explicitly opens a
transaction in vanilla SQL; you use a semi-colon there as with any
standard SQL statement.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why semicolon after begin is not allowed in postgresql?

2013-11-22 Thread Kevin Grittner
AK alk...@gmail.com wrote:

 I am reading the following in the documentation: Tip: A common
 mistake is to write a semicolon immediately after BEGIN. This is
 incorrect and will result in a syntax error.

 So, common mistake means semicolons after BEGIN seem consistent
 to many people - it seems consistent to me as well. If PostgreSql
 allowed them, we would have one less rule to memorize, shorter
 documentation, less mistakes and so on. In other words, without
 this limitation PostgreSql would be slightly more useful, right?

 What am I missing? Why do we need this rule? How is it making
 PostgreSql better?

I think it only seems confusing because PostgreSQL also uses BEGIN
as a synonym for START TRANSACTION (and people tend to use the
shorter synonym to save keystrokes).  In plpgsql BEGIN is not a
command, it is part of declaring a code block.  Wouldn't these look
funny to you?:

IF x = 1 THEN;
  ...
END IF;

CASE;
  WHEN x = 1 THEN
    ...
  WHEN x = 2 THEN
    ...
  ELSE
    ...
END;

LOOP;
  ...
END LOOP;

etc.

Why should BEGIN be different from the above when it is not a
command, but part of declaring a code block?

In the nearest analog in the SQL standard, the BEGIN/END block is
called a compound statement, and like any other statement it is
ended by a semicolon; the standard does not allow a semicolon after
the BEGIN.
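
A minimal plpgsql sketch of the same point: BEGIN only opens the compound
statement, and the terminating semicolon belongs to the END that closes it:

    DO $$
    DECLARE
        n integer := 0;
    BEGIN          -- no semicolon here: this merely opens the block
        n := n + 1;
    END;           -- the compound statement ends here, with its semicolon
    $$;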

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why semicolon after begin is not allowed in postgresql?

2013-11-22 Thread Andrew Dunstan


On 11/22/2013 05:24 PM, AK wrote:

I am reading the following in the documentation: Tip: A common mistake is to
write a semicolon immediately after BEGIN. This is incorrect and will result
in a syntax error.

So, common mistake means semicolons after BEGIN seem consistent to many
people - it seems consistent to me as well. If PostgreSql allowed them, we
would have one less rule to memorize, shorter documentation, less mistakes
and so on. In other words, without this limitation PostgreSql would be
slightly more useful, right?

What am I missing? Why do we need this rule? How is it making PostgreSql
better?






You're referring specifically to plpgsql, not to Postgres or SQL generally.

plpgsql is derived from PLSQL which is derived from Ada which has this 
grammatical rule.


The explanation is this: only complete statements are followed by 
semicolons. But in these languages, begin on its own is not a complete 
statement. It's the start of a compound statement, the end of which will 
be end, which is indeed followed by a semicolon.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch for fail-back without fresh backup

2013-11-22 Thread Bruce Momjian
On Thu, Nov 21, 2013 at 11:43:34PM +0100, Andres Freund wrote:
 On 2013-11-21 14:40:36 -0800, Jeff Janes wrote:
  But if the transaction would not have otherwise generated WAL (i.e. a
  select that did not have to do any HOT pruning, or an update with zero rows
  matching the where condition), doesn't it now have to flush and wait when
  it would otherwise not?
 
 We short circuit that if there's no xid assigned. Check
 RecordTransactionCommit().

OK, that was my question, now answered.  Thanks.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 11:16:06AM -0500, Tom Lane wrote:
 Greg Stark st...@mit.edu writes:
  On Thu, Nov 21, 2013 at 1:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Also, it's not that hard to do plug-pull testing to verify that your
  system is telling the truth about fsync.  This really ought to be part
  of acceptance testing for any new DB server.
 
  I've never tried it but I always wondered how easy it was to do. How would
  you ever know you had tested it enough?
 
 I used the program Greg Smith recommends on our wiki (can't remember the
 name offhand) when I got a new house server this spring.  With the RAID
 card configured for writethrough and no battery, it failed all over the
 place.  Fixed those configuration bugs, it was okay three or four times
 in a row, which was good enough for me.
 
  The original mail was referencing a problem with syncing *meta* data
  though. The semantics around meta data syncs are much less clearly
  specified, in part because file systems traditionally made nearly all meta
  data operations synchronous. Doing plug-pull testing on Postgres would not
  test meta data syncing very well since Postgres specifically avoids doing
  much meta data operations by overwriting existing files and blocks as much
  as possible.
 
 True.  You're better off with a specialized testing program.  (Though
 now you mention it, I wonder whether that program was stressing metadata
 or not.)

The program is diskchecker:

http://brad.livejournal.com/2116715.html

I got the author to re-host the source code on github a few years ago.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Peter Geoghegan
On Fri, Nov 22, 2013 at 2:57 PM, Bruce Momjian br...@momjian.us wrote:
 The program is diskchecker:

 http://brad.livejournal.com/2116715.html

 I got the author to re-host the source code on github a few years ago.

It might be worth re-implementing this for -contrib. The fact that we
mention diskchecker.pl in the docs, and it is a pretty obscure Perl
script on some guy's personal website doesn't inspire much confidence.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 01:55:10PM -0800, Kevin Grittner wrote:
 It does nothing about pg_upgrade, which is sort of a separate
 issue.  My inclination is that connections to the new cluster
 should set this and connections to the old should not.

It is going to be tricky to conditionally set/reset an environment
variable for default_transaction_read_only for old/new clusters.  We
never write to the old cluster, so I am not sure setting/resetting
default_transaction_read_only is needed.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 03:06:31PM -0800, Peter Geoghegan wrote:
 On Fri, Nov 22, 2013 at 2:57 PM, Bruce Momjian br...@momjian.us wrote:
  The program is diskchecker:
 
  http://brad.livejournal.com/2116715.html
 
  I got the author to re-host the source code on github a few years ago.
 
 It might be worth re-implementing this for -contrib. The fact that we
 mention diskchecker.pl in the docs, and it is a pretty obscure Perl
 script on some guy's personal website doesn't inspire much confidence.

Well, it was his idea, and quite a good one.  I guess we could
reimplement this in C if someone wants to do the legwork.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Josh Berkus
On 11/22/2013 03:23 PM, Bruce Momjian wrote:
 On Fri, Nov 22, 2013 at 03:06:31PM -0800, Peter Geoghegan wrote:
 On Fri, Nov 22, 2013 at 2:57 PM, Bruce Momjian br...@momjian.us wrote:
 The program is diskchecker:

 http://brad.livejournal.com/2116715.html

 I got the author to re-host the source code on github a few years ago.

 It might be worth re-implementing this for -contrib. The fact that we
 mention diskchecker.pl in the docs, and it is a pretty obscure Perl
 script on some guy's personal website doesn't inspire much confidence.
 
 Well, it was his idea, and quite a good one.  I guess we could
 reimplement this in C if someone wants to do the legwork.

Yeah, too bad Brad didn't post a license for it.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 03:27:29PM -0800, Josh Berkus wrote:
 On 11/22/2013 03:23 PM, Bruce Momjian wrote:
  On Fri, Nov 22, 2013 at 03:06:31PM -0800, Peter Geoghegan wrote:
  On Fri, Nov 22, 2013 at 2:57 PM, Bruce Momjian br...@momjian.us wrote:
  The program is diskchecker:
 
  http://brad.livejournal.com/2116715.html
 
  I got the author to re-host the source code on github a few years ago.
 
  It might be worth re-implementing this for -contrib. The fact that we
  mention diskchecker.pl in the docs, and it is a pretty obscure Perl
  script on some guy's personal website doesn't inspire much confidence.
  
  Well, it was his idea, and quite a good one.  I guess we could
  reimplement this in C if someone wants to do the legwork.
 
 Yeah, too bad Brad didn't post a license for it.

We can ask him.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Building on S390

2013-11-22 Thread Greg Stark
On Fri, Nov 22, 2013 at 8:51 PM, Peter Eisentraut pete...@gmx.net wrote:

 On 11/22/13, 12:41 PM, Michael Meskes wrote:
  Checking the Debian logs it appears that all calls use *both* which
 seems to do the right thing. And yes, it appears there is a change in XC
 that makes it break. But still, I would think there has to be a correct
 set of options.

 According to the Debian build logs, postgres-xc compiles the entire
 backend with -fPIC.  Not sure what sense that makes.


Debian policy is to always use -fPIC

IIRC -fpic is good enough as long as the total size of the library is below
some limit. I'm not sure precisely what it is that has to stay below the
limit, but if I recall correctly it's something you have no way to
determine in advance for a general-purpose library. So Debian decided long
long ago to just use -fPIC always.


-- 
greg


Re: [HACKERS] Can we trust fsync?

2013-11-22 Thread Michael Paquier
On Sat, Nov 23, 2013 at 8:06 AM, Peter Geoghegan p...@heroku.com wrote:
 On Fri, Nov 22, 2013 at 2:57 PM, Bruce Momjian br...@momjian.us wrote:
 The program is diskchecker:

 http://brad.livejournal.com/2116715.html

 I got the author to re-host the source code on github a few years ago.

 It might be worth re-implementing this for -contrib. The fact that we
 mention diskchecker.pl in the docs, and it is a pretty obscure Perl
 script on some guy's personal website doesn't inspire much confidence.
Yes, having that in contrib would be useful. It would be a plus when
testing disks for Postgres.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Bruce Momjian br...@momjian.us wrote:
 On Fri, Nov 22, 2013 at 01:55:10PM -0800, Kevin Grittner wrote:

 It does nothing about pg_upgrade, which is sort of a separate
 issue.  My inclination is that connections to the new cluster
 should set this and connections to the old should not.

 It is going to be tricky to conditionally set/reset an
 environment variable for default_transaction_read_only for
 old/new clusters.

Why?  If you know you have connected to the new cluster, set it to
off; if you know you have connected to the old cluster, don't touch
it.

 We never write to the old cluster, so I am not sure
 setting/resetting default_transaction_read_only is needed.

I'm sure it isn't.  That's why I said that connections to the old
cluster should not set it -- by which I meant they should not do
anything with it.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-22 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-22 13:34:18 -0800, Kevin Grittner wrote:
 Oddly, it didn't complain about creating users within a read-only
 transaction.  That seems like a potential bug.

 There's lots of things that escape XactReadOnly. I've thought (and I
 think suggested) before that we should put in another layer of defense
 by also putting a check in AssignTransactionId(). Imo the compatibility
 woes (like not being able to run SELECT txid_current();) are well worth
 the nearly ironclad guarantee that we're not writing.

I agree that something like that would be a good idea; however,
I'm sure you would agree that it would not be material for a
back-patch to a stable branch.

Another thing I've mused about is having some way to lock a
database to read-only, such that only the owner or a superuser
could change that.  Another setting which I know some people would
like to lock is transaction isolation level.  I haven't really
thought of a good UI for that sort of thing, though.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

