Re: [HACKERS] Shortcoming in CLOBBER_FREED_MEMORY coverage: disk buffer pointers

2015-01-24 Thread Peter Geoghegan
On Sat, Jan 24, 2015 at 1:31 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Another idea is to teach Valgrind that whenever a backend reduces its
 pin count on a shared buffer to zero, that buffer should become undefined
 memory.

That should be fairly straightforward to implement.

 But I don't know if that will help --- if the buffer is then
 re-accessed, is Valgrind able to distinguish freshly-computed pointers
 into it from stale ones?

I don't think so. However, I think that
VALGRIND_CHECK_VALUE_IS_DEFINED() might be used. I believe you could
have Valgrind builds dereference a pointer, and make sure that it
pointed into defined memory. But what would the generally useful choke
points for such a check be?
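
For concreteness, a minimal sketch of what such a check could look like,
assuming the wrappers in utils/memdebug.h (VALGRIND_CHECK_MEM_IS_DEFINED and
friends), which compile to no-ops in non-Valgrind builds; the helper name
below is made up:

#include "postgres.h"
#include "utils/memdebug.h"

/*
 * Hypothetical choke-point helper: have Valgrind complain (with a backtrace)
 * if any byte in [ptr, ptr + len) is not defined memory -- e.g. called on a
 * tuple pointer just before it is copied into a long-lived cache entry.
 */
static inline void
check_points_into_defined_memory(const void *ptr, Size len)
{
	VALGRIND_CHECK_MEM_IS_DEFINED(ptr, len);
}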

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Shortcoming in CLOBBER_FREED_MEMORY coverage: disk buffer pointers

2015-01-24 Thread Tom Lane
Peter Geoghegan p...@heroku.com writes:
 On Sat, Jan 24, 2015 at 1:31 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Another idea is to teach Valgrind that whenever a backend reduces its
 pin count on a shared buffer to zero, that buffer should become undefined
 memory.

 That should be fairly straightforward to implement.

 But I don't know if that will help --- if the buffer is then
 re-accessed, is Valgrind able to distinguish freshly-computed pointers
 into it from stale ones?

 I don't think so. However, I think that
 VALGRIND_CHECK_VALUE_IS_DEFINED() might be used. I believe you could
 have Valgrind builds dereference a pointer, and make sure that it
 pointed into defined memory. But what would the generally useful choke
 points for such a check be?

Not sure.  There are wide swaths of the system where it would be perfectly
valid to see a pointer into buffer storage, so long as you still had a pin
on that page.

However, after further consideration it seems like even without solving
the buffer-reaccess problem, a Valgrind tweak such as above would have
caught this bug, and probably most other similar bugs.  Running with a
large shared_buffers value actually works in our favor for this: you're
unlikely to get aliasing between different pages occupying the same
buffer.  And most queries don't (intentionally) re-access the same page,
so while detection of a stale pointer wouldn't be certain it'd be fairly
probable.
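
For what it's worth, a rough sketch of that tweak, using the wrappers from
utils/memdebug.h (the function names here are only illustrative; the real
change would live in bufmgr.c's pin/unpin paths):

#include "postgres.h"
#include "storage/bufmgr.h"
#include "utils/memdebug.h"

/* When a backend's local pin count on a shared buffer drops to zero ... */
static void
forget_buffer_contents(Buffer buffer)
{
	/*
	 * ... tell Valgrind the page is now undefined, so any read through a
	 * stale pointer gets reported.  VALGRIND_MAKE_MEM_NOACCESS would be the
	 * stricter choice, flagging writes as well.
	 */
	VALGRIND_MAKE_MEM_UNDEFINED(BufferGetPage(buffer), BLCKSZ);
}

/* When the buffer is pinned again, its contents are valid page data. */
static void
remember_buffer_contents(Buffer buffer)
{
	/*
	 * This is also the re-access caveat above: once any new pin re-validates
	 * the page, Valgrind can no longer tell a stale pointer from a fresh one.
	 */
	VALGRIND_MAKE_MEM_DEFINED(BufferGetPage(buffer), BLCKSZ);
}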

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] numeric access out of bounds

2015-01-24 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 I can see two possible fixes: one to correct the assumptions in the
 macros, the other to check for NaN before calling init_var_from_num in
 numeric_send (all the other functions seem to do this check explicitly).
 Which would be preferable?

I'm inclined to think special-casing NaN in numeric_send is the thing to
do, since it is that way everywhere else.  If we could push down all the
special casing into init_var_from_num then that would probably be better,
but in most cases that looks unattractive.

Perhaps it'd make sense to add an Assert that the input isn't NaN in
init_var_from_num?  That wouldn't really do all that much to forestall
future similar errors, since it wouldn't expose an oversight unless the
code was actually tested with NaN input ... but it's better than nothing.
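
Roughly like this, I suppose (a sketch only, assuming the existing
NUMERIC_IS_NAN() macro and the current field-copying shape of
init_var_from_num):

static void
init_var_from_num(Numeric num, NumericVar *dest)
{
	/* NaN carries no digit array; callers must special-case it first */
	Assert(!NUMERIC_IS_NAN(num));

	dest->ndigits = NUMERIC_NDIGITS(num);
	dest->weight = NUMERIC_WEIGHT(num);
	dest->sign = NUMERIC_SIGN(num);
	dest->dscale = NUMERIC_DSCALE(num);
	dest->digits = NUMERIC_DIGITS(num);
	dest->buf = NULL;			/* digits array is not palloc'd */
}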

Looks like set_var_from_num has the same issue, too.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Shortcoming in CLOBBER_FREED_MEMORY coverage: disk buffer pointers

2015-01-24 Thread Tom Lane
While the CLOBBER_FREED_MEMORY hack does a fairly good job of catching
stale pointers to already-freed memory, commit fd496129d160950e exhibits
a case that is not caught at all: RelationBuildRowSecurity was copying
*pointers into disk buffers* into backend-local relcaches.  This would
of course work just as long as the relevant system catalog pages stayed
in shared buffers ... which is probably long enough that you'd never
notice it in typical developer testing.

I wonder if there's anything we can do to catch such cases more
mechanically?

One brute-force answer is to run the regression tests with a very small
shared_buffers setting; but it's not clear what is small enough, nor
what might be so small as to cause failures.

Another idea is to teach Valgrind that whenever a backend reduces its
pin count on a shared buffer to zero, that buffer should become undefined
memory.  But I don't know if that will help --- if the buffer is then
re-accessed, is Valgrind able to distinguish freshly-computed pointers
into it from stale ones?

Ideas?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New CF app deployment

2015-01-24 Thread Tomas Vondra
Hi,

It seems the CF app uses an invalid e-mail address when sending messages to
pgsql-hackers - I've added a comment to one of the patches and got this:

  pgsql-hackers-testing@localhost
Unrouteable address

Maybe that's expected as the CF app is new, but I haven't seen it
mentioned in this thread.

regards
Tomas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical column ordering

2015-01-24 Thread Tomas Vondra
On 20.1.2015 22:30, Alvaro Herrera wrote:
 I've decided to abandon this patch.  I have spent too much time looking
 at it now.
 
 If anyone is interested in trying to study it, I can provide the patches I
 came up with, explanations, and references to prior discussion -- feel
 free to ask.

I'll take a look. Can you share the patches etc. - either here, or maybe
send them to me directly?

regards
Tomas

-- 
Tomas Vondra                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_upgrade and rsync

2015-01-24 Thread Bruce Momjian
On Fri, Jan 23, 2015 at 02:34:36PM -0500, Stephen Frost wrote:
   You'd have to replace the existing data directory on the master to do
   that, which pg_upgrade was designed specifically to not do, in case
   things went poorly.
  
  Why? Just rsync the new data directory onto the old directory on the
  standbys. That's fine and simple.
 
 That still doesn't address the need to use --size-only, it would just
 mean that you don't need to use -H.  If anything the -H part is the
 aspect which worries me the least about this approach.

I can now confirm that it works, just as Stephen said.  I was able to
upgrade a standby cluster that contained the regression database, and
the pg_dump output was perfect.

I am attaching doc instructions that I will add to all branches as soon
as someone else confirms my results.  You will need to use rsync
--itemize-changes to see the hard links being created, e.g.:

   hf+ pgsql/data/base/16415/28188 => pgsql.old/data/base/16384/28188

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                           http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/pgupgrade.sgml b/doc/src/sgml/pgupgrade.sgml
new file mode 100644
index e1cd260..91f40ce
*** a/doc/src/sgml/pgupgrade.sgml
--- b/doc/src/sgml/pgupgrade.sgml
*** pg_upgrade.exe
*** 409,414 
--- 409,484 
 </step>
 
 <step>
+ <title>Upgrade any Log-Shipping Standby Servers</title>
+ 
+ <para>
+  If you have Log-Shipping Standby Servers (<xref
+  linkend="warm-standby">), follow these steps to upgrade them (before
+  starting any servers):
+ </para>
+ 
+ <procedure>
+ 
+  <step>
+   <title>Install the new PostgreSQL binaries on standby servers</title>
+ 
+   <para>
+    Make sure the new binaries and support files are installed
+    on all the standby servers.  Do <emphasis>not</> run
+    <application>initdb</>.  If <application>initdb</> was run, delete
+    the standby server data directories.  Also, install any custom
+    shared object files on the new standbys that you installed in the
+    new master cluster.
+   </para>
+  </step>
+ 
+  <step>
+   <title>Run <application>rsync</></title>
+ 
+   <para>
+    From a directory that is above the old and new database cluster
+    directories, run this for each slave:
+ 
+ <programlisting>
+ rsync --archive --hard-links --size-only old_dir new_dir remote_dir
+ </programlisting>
+ 
+    where <option>old_dir</> and <option>new_dir</> are relative to the
+    current directory, and <option>remote_dir</> is <emphasis>above</>
+    the old and new cluster directories on the standby server.  The old
+    and new relative cluster paths must match on the master and standby
+    server.  Consult the <application>rsync</> manual page for details
+    on specifying <option>remote_dir</>.  <application>rsync</> will
+    be fast when <option>--link</> mode is used because it will create
+    hard links on the remote server rather than transferring user data.
+   </para>
+  </step>
+ 
+  <step>
+   <title>Configure log-shipping to standby servers</title>
+ 
+   <para>
+    Configure the servers for log shipping.  (You do not need to run
+    <function>pg_start_backup()</> and <function>pg_stop_backup()</>
+    or take a file system backup as the slaves are still synchronized
+    with the master.)
+   </para>
+  </step>
+ 
+ </procedure>
+ 
+</step>
+ 
+<step>
+ <title>Start the new server</title>
+ 
+ <para>
+  The new server and any <application>rsync</>'ed standby servers can
+  now be safely started.
+ </para>
+</step>
+ 
+<step>
 <title>Post-Upgrade processing</title>
 
 <para>
*** psql --username postgres --file script.s
*** 548,562 
   </para>
 
   <para>
-    A Log-Shipping Standby Server (<xref linkend="warm-standby">) cannot
-    be upgraded because the server must allow writes.  The simplest way
-    is to upgrade the primary and use <command>rsync</> to rebuild the
-    standbys.  You can run <command>rsync</> while the primary is down,
-    or as part of a base backup (<xref linkend="backup-base-backup">)
-    which overwrites the old standby cluster.
-   </para>
- 
-   <para>
 If you want to use link mode and you do not want your old cluster
 to be modified when the new cluster is started, make a copy of the
 old cluster and upgrade that in link mode. To make a valid copy
--- 618,623 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Failback to old master

2015-01-24 Thread Maeldron T.
2014-11-13 9:05 GMT+01:00 Heikki Linnakangas hlinnakan...@vmware.com:


 Right. You have to be careful to make sure the standby really did fully
 catch up with the master, though. If it happens that the replication
 connection is momentarily down when you shut down the master, for example,
 then the master won't wait for the standby. You can use pg_controldata to
 verify that, before promoting the standby.


 - Heikki


Dear Heikki,

would you please tell me which line I should check to be 100% sure that
everything was sent to the slave when the master was shut down?

Latest checkpoint location:           1F/B842C3D8
Prior checkpoint location:            1F/B837B9B8
Latest checkpoint's REDO location:    1F/B841A050
Latest checkpoint's REDO WAL file:    000000010000001F000000B8
Latest checkpoint's TimeLineID:       1
Latest checkpoint's PrevTimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:          0/15845855
Latest checkpoint's NextOID:          450146
Latest checkpoint's NextMultiXactId:  2250
Latest checkpoint's NextMultiOffset:  4803
Latest checkpoint's oldestXID:        984
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  15845855
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1
Is it the first line (Latest checkpoint location), or do I have to check
anything more than that?

I plan to do this on the weekend.

Thank you.

M.


[HACKERS] Re: Abbreviated keys for Datum tuplesort (was: Re: B-Tree support function number 3 (strxfrm() optimization))

2015-01-24 Thread Robert Haas
On Fri, Jan 23, 2015 at 4:13 PM, Andrew Gierth
and...@tao11.riddles.org.uk wrote:
 [pruning the Cc: list and starting a new thread]

 Here's the cleaned-up version of the patch to allow abbreviated keys
 when sorting a single Datum. This also removes comments that suggest
 that the caller of tuplesort_begin_datum should ever have to care
 about the abbreviated key optimization.

 I'll add this to the CF.

I think this is a good idea.  Do you have a test case that shows the benefit?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Abbreviated keys for Datum tuplesort (was: Re: B-Tree support function number 3 (strxfrm() optimization))

2015-01-24 Thread Peter Geoghegan
On Sat, Jan 24, 2015 at 6:19 PM, Robert Haas robertmh...@gmail.com wrote:
 I think this is a good idea.  Do you have a test case that shows the benefit?

I agree. It seems likely that this will show a similar benefit to
other cases already tested, but I'd like to see a test case too.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: searching in array function - array_position

2015-01-24 Thread Pavel Stehule
Hi

with array_offsets - returns an array of offsets

Regards

Pavel

2015-01-20 21:32 GMT+01:00 Jim Nasby jim.na...@bluetreble.com:

 On 1/20/15 11:12 AM, Pavel Stehule wrote:

 I am sending an updated version - it allows a third optional argument that
 specifies where searching should start. With it, it is possible to call
 this function repeatedly.


 What happened to returning an array of offsets? I think that would be both
 easier to use than this version and perform better.

 I see you dropped multi-dimension support, but I think that's fine.

 --
 Jim Nasby, Data Architect, Blue Treble Consulting
 Data in Trouble? Get it in Treble! http://BlueTreble.com

diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml
new file mode 100644
index 9ea1068..b3630b4
*** a/doc/src/sgml/array.sgml
--- b/doc/src/sgml/array.sgml
*** SELECT * FROM sal_emp WHERE pay_by_quart
*** 600,605 
--- 600,614 
    index, as described in <xref linkend="indexes-types">.
   </para>
 
+  <para>
+   You can also search for any value in an array using the <function>array_offset</>
+   function (it returns the position of the first occurrence of the value in the array):
+ 
+ <programlisting>
+ SELECT array_offset(ARRAY['sun','mon','tue','wed','thu','fri','sat'], 'mon');
+ </programlisting>
+  </para>
+ 
   <tip>
    <para>
     Arrays are not sets; searching for specific array elements
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index d57243a..c563165
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*** SELECT NULLIF(value, '(none)') ...
*** 11474,11479 
--- 11474,11482 
    <primary>array_lower</primary>
   </indexterm>
   <indexterm>
+    <primary>array_offset</primary>
+   </indexterm>
+   <indexterm>
    <primary>array_prepend</primary>
   </indexterm>
   <indexterm>
*** SELECT NULLIF(value, '(none)') ...
*** 11592,11597 
--- 11595,11626 
    </row>
    <row>
     <entry>
+  <literal>
+   <function>array_offset</function>(<type>anyarray</type>, <type>anyelement</type> <optional>, <type>int</type></optional>)
+  </literal>
+ </entry>
+ <entry><type>int</type></entry>
+ <entry>returns the offset of the first occurrence of an element in an array. It uses
+ the <literal>IS NOT DISTINCT FROM</> operator for comparison. The third,
+ optional argument specifies the initial offset at which searching starts.</entry>
+ <entry><literal>array_offset(ARRAY['sun','mon','tue','wed','thu','fri','sat'], 'mon')</literal></entry>
+ <entry><literal>2</literal></entry>
+</row>
+<row>
+ <entry>
+  <literal>
+   <function>array_offsets</function>(<type>anyarray</type>, <type>anyelement</type>)
+  </literal>
+ </entry>
+ <entry><type>int[]</type></entry>
+ <entry>returns an array of the offsets of all occurrences of an element in an array. It uses
+ the <literal>IS NOT DISTINCT FROM</> operator for comparison. Returns an empty array
+ when there is no occurrence of the element in the input array.</entry>
+ <entry><literal>array_offsets(ARRAY['A','A','B','A'], 'A')</literal></entry>
+ <entry><literal>{1,2,4}</literal></entry>
+</row>
+<row>
+ <entry>
   <literal>
    <function>array_prepend</function>(<type>anyelement</type>, <type>anyarray</type>)
   </literal>
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
new file mode 100644
index 600646e..61a080d
*** a/src/backend/utils/adt/array_userfuncs.c
--- b/src/backend/utils/adt/array_userfuncs.c
***
*** 12,20 
--- 12,24 
   */
  #include "postgres.h"
  
+ #include "catalog/pg_type.h"
  #include "utils/array.h"
  #include "utils/builtins.h"
  #include "utils/lsyscache.h"
+ #include "utils/typcache.h"
+ 
+ static bool array_offset_common(FunctionCallInfo fcinfo, int *result);
  
  
  /*-
*** array_agg_array_finalfn(PG_FUNCTION_ARGS
*** 612,614 
--- 616,879 
  
  	PG_RETURN_DATUM(result);
  }
+ 
+ 
+ /*
+  * array_offset - returns the offset of the given element in an array.
+  * Returns NULL when the value is not an element of the array. Searching
+  * for a NULL value is allowed, since the IS NOT DISTINCT FROM operator
+  * is used for comparison.
+  *
+  * The biggest difference from width_array is that the input array may
+  * be unsorted.
+  */
+ Datum
+ array_offset(PG_FUNCTION_ARGS)
+ {
+ 	int	result;
+ 
+ 	if (array_offset_common(fcinfo, &result))
+ 		PG_RETURN_INT32(result);
+ 
+ 	PG_RETURN_NULL();
+ }
+ 
+ 
+ Datum
+ array_offset_start(PG_FUNCTION_ARGS)
+ {
+ 	int	result;
+ 
+ 	if (array_offset_common(fcinfo, &result))
+ 		PG_RETURN_INT32(result);
+ 
+ 	PG_RETURN_NULL();
+ }
+ 
+ /*
+  * Common part for functions array_offset and array_offset_start
+  */
+ static bool
+ array_offset_common(FunctionCallInfo fcinfo,
+ 		 int 	*result)
+ {
+ 	ArrayType	*array;
+ 	Oid		collation = PG_GET_COLLATION();
+ 	Oid			element_type;
+ 	Datum		searched_element = (Datum) 0,
+ value;
+ 	bool		

Re: [HACKERS] hung backends stuck in spinlock heavy endless loop

2015-01-24 Thread Martijn van Oosterhout
On Thu, Jan 22, 2015 at 03:50:03PM -0600, Merlin Moncure wrote:
 Quick update:  not done yet, but I'm making consistent progress, with
 several false starts.  (for example, I had a .conf problem with the
 new dynamic shared memory setting and git merrily bisected down to the
 introduction of the feature.)
 I have to triple check everything :(. The problem is generally
 reproducible but I get false negatives that throw off the bisection.
 I estimate that early next week I'll have it narrowed down
 significantly if not to the exact offending revision.

I've never used it, but the BBChop project claims to be able to bisect
even in the case of intermittent failure.

https://github.com/Ealdwulf/bbchop/

It claims to be slow, but I don't think that's the limiting factor here...

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer



[HACKERS] numeric access out of bounds

2015-01-24 Thread Andrew Gierth
Happened to notice this:

postgres=# select numeric_send('NaN');
 numeric_send 
--------------
 \x7f7ec000
(1 row)

7f7e obviously screams accessing memory beyond the end of data, and
indeed this is so: init_var_from_num, when passed a NaN, accesses two
bytes after the input. This probably goes unnoticed because a NaN is 6
bytes including varlena header, so the next two bytes wouldn't cause a
segfault (and clients shouldn't care about the value since the NaN flag
is set), but it's still clearly wrong.

I can see two possible fixes: one to correct the assumptions in the
macros, the other to check for NaN before calling init_var_from_num in
numeric_send (all the other functions seem to do this check explicitly).
Which would be preferable?
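
For illustration, the second option might look something like this (a sketch
only, in utils/adt/numeric.c; it assumes the existing NUMERIC_IS_NAN and
NUMERIC_NAN macros, sends the zero-digit NaN representation that numeric_recv
already accepts, and reproduces the non-NaN tail only approximately):

Datum
numeric_send(PG_FUNCTION_ARGS)
{
	Numeric		num = PG_GETARG_NUMERIC(0);
	StringInfoData buf;
	NumericVar	x;
	int			i;

	pq_begintypsend(&buf);

	if (NUMERIC_IS_NAN(num))
	{
		/* NaN has no digits; don't let init_var_from_num go looking for any */
		pq_sendint(&buf, 0, sizeof(int16));				/* ndigits */
		pq_sendint(&buf, 0, sizeof(int16));				/* weight */
		pq_sendint(&buf, NUMERIC_NAN, sizeof(int16));	/* sign */
		pq_sendint(&buf, 0, sizeof(int16));				/* dscale */
		PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
	}

	init_var_from_num(num, &x);

	pq_sendint(&buf, x.ndigits, sizeof(int16));
	pq_sendint(&buf, x.weight, sizeof(int16));
	pq_sendint(&buf, x.sign, sizeof(int16));
	pq_sendint(&buf, x.dscale, sizeof(int16));
	for (i = 0; i < x.ndigits; i++)
		pq_sendint(&buf, x.digits[i], sizeof(NumericDigit));

	PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}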

-- 
Andrew (irc:RhodiumToad)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers