Re: [HACKERS] Materialized views WIP patch

2013-02-22 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-02-21 14:11:10 -0800, Kevin Grittner wrote:
 DISABLE MATERIALIZED VIEW mv;  -- ALTER clause for constraints
 DISCARD MATERIALIZED VIEW DATA mv;  -- session state
 RELEASE MATERIALIZED VIEW DATA mv;  -- savepoint
 RESET MATERIALIZED VIEW DATA mv;  -- run-time parameter
 
 I think any of these could work.  I'm personally most inclined
 toward DISABLE MATERIALIZED VIEW.  It seems to convey the semantics
 better, especially if you leave out DATA as an additional word. 

 I vote for RESET or DISCARD. DISABLE sounds more like you disable
 automatic refreshes or somesuch.

Yeah, I don't much like DISABLE either.  I'm also concerned about
overloading RESET this way --- that statement has complicated-enough
syntax already, not to mention way too many shades of meaning.  So that
leaves me voting for DISCARD M.V. DATA, which seems pretty precise.
It's a bit verbose, but since when has SQL been succinct?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] use_remote_explain missing in docs of postgres_fdw

2013-02-22 Thread Tom Lane
Michael Paquier michael.paqu...@gmail.com writes:
 While testing this feature a bit, I noticed that use_remote_explain is
 available in the list of options for FOREIGN TABLE and SERVER, but it is
 not specified in the docs:
 http://www.postgresql.org/docs/devel/static/postgres-fdw.html

Wups ... for some reason it's use_remote_estimate in the docs.  Thinko
on my part probably --- although on reflection, maybe that's a better
name anyway?

regards, tom lane




Re: [HACKERS] OSSP UUID present but cannot be compiled

2013-02-22 Thread Tom Lane
David E. Wheeler da...@justatheory.com writes:
 While building 9.2.3 on OS X 10.8.2 today:
 checking ossp/uuid.h usability... no
 checking ossp/uuid.h presence... yes
 configure: WARNING: ossp/uuid.h: present but cannot be compiled
 configure: WARNING: ossp/uuid.h: check for missing prerequisite headers?

 I think I have reported this before. Maybe it's not worth worrying about? I 
 seem to be able to install the uuid-ossp extension and it works. So ignore?

Yeah, you did:
http://www.postgresql.org/message-id/62fdd311-1afc-4296-95d8-5751d1407...@justatheory.com

I still think it's incumbent on OSSP to fix this, not us.  Or if
they've gone dead, maybe we need to start looking for some other
source of uuid support.

regards, tom lane




Re: [HACKERS] Review : Add hooks for pre- and post-processor executables for COPY and \copy

2013-02-22 Thread Etsuro Fujita
Hi Amit,

Thank you for your careful review!

 -Original Message-
 From: Amit Kapila [mailto:amit.kap...@huawei.com]
 Sent: Friday, February 22, 2013 7:18 PM
 To: 'Etsuro Fujita'; 'pgsql-hackers'
 Subject: RE: [HACKERS] Review : Add hooks for pre- and post-processor
 executables for COPY and \copy
 
 On Wednesday, February 20, 2013 5:25 PM Etsuro Fujita wrote:
  Hi Amit,
 
  Thank you for the review.
 
 Etsuro-san, you are welcome.
 
   From: Amit Kapila [mailto:amit.kap...@huawei.com]
 
Test case issues:
-----------------
1. Broken pipe is not handled in the case of the psql \copy command.
   The issues are as follows (verified on SuSE-Linux 10.2):
   1) psql exits when a "\copy ... TO" command is issued and the
      command/script is not found.  When popen is called in write mode it
      creates a valid file descriptor, and when psql then tries to write
      to it a "Broken pipe" error occurs which is not handled.
          psql# \copy pgbench_accounts TO PROGRAM '../compress.sh pgbench_accounts4.txt'
   2) When a \copy command is in progress and the program/command is
      killed/crashes due to any problem, psql exits.
  
   This is a headache.  I have no idea how to solve this.
  
   I think we can keep it for committer to take a call on this issue.
 
  Agreed.
 
   I have found few more minor issues as below:
  
   1. The comment above do_copy can be modified to address the new
   functionality it can handle.

   /*
    * Execute a \copy command (frontend copy). We have to open a file,
    * then submit a COPY query to the backend and either feed it data from
    * the file or route its response into the file.
    */
   bool
   do_copy(const char *args)
 
  Done.
 
   2.
   @@ -256,8 +273,14 @@ do_copy(const char *args)
   +    if (options->file == NULL && options->program)
   +    {
   +        psql_error("program is not supported to stdout/pstdout or from stdin/pstdin\n");
   +        return false;
   +    }

   should call free_copy_options(options); before return false;
 
  Good catch!  Done.
 
   3. The \copy command doesn't need a semicolon at the end; it was working
   prior to your patch, but now it gives an error.
   postgres=# \copy t1 from 'e:\pg_git_code\Data\t1_Data.txt';
   e:/pg_git_code/Data/t1_Data.txt';: No such file or directory
   e:/pg_git_code/Data/t1_Data.txt';: No such file or directory
 
  Sorry, I've fixed the bug.
 
   4. Please check whether OpenPipeStream() needs to call
   ReleaseLruFile().
 
  OpenPipeStream() calls ReleaseLruFile() by itself if necessary.
 
  I asked this thinking that ReleaseLruFile() might not be useful for
  OpenPipeStream(), as I was not sure how the new file descriptors get
  allocated for popen.  But rereading the popen specs, I see that it can
  be useful.
 
   5. The following in copy.sgml can be changed to be more meaningful, as
   the first line looks a little ad hoc.
   + <para>
   +  The command that input comes from or that output goes to.
   +  The command for COPY FROM, which input comes from, must write its
   +  output to standard output.  The command for COPY TO, which output
   +  goes to, must read its input from standard input.
   + </para>
 
  I've struggled to make the document more meaningful.
 
  To be honest, I am not sure whether introducing pre-/post-processor
  terminology is right or not, but again I shall let the committer decide
  on this point.

Agreed.

   6. Can we have one example of this new syntax, it can make it more
   meaningful.
 
  Done.
 
  Sorry for the long delay.
 
 All the reported issues are handled in the new patch.
 
  I have one other small doubt: in the function parse_slash_copy, you
  avoided tilde expansion for the program case, and I am not sure whether
  that is the right thing or not.

Sorry, I'm not sure about that either.  I'd like to leave this for the committers.

 I am marking this patch as Ready For Committer.

Thanks!

Best regards,
Etsuro Fujita

 Notes For Committer
 -------------------
 1. Broken pipe is not handled in the case of the psql \copy command;
    this is currently documented.
 2. Documentation needs to be checked, especially with focus on whether
    introducing pre-/post-processor terminology is okay.
 3. In the function parse_slash_copy, tilde expansion is avoided; is that
    okay?
 
 
 With Regards,
 Amit Kapila.





Re: [HACKERS] 9.2.3 crashes during archive recovery

2013-02-22 Thread Heikki Linnakangas

On 22.02.2013 02:13, Michael Paquier wrote:

On Thu, Feb 21, 2013 at 11:09 PM, Heikki Linnakangas
hlinnakan...@vmware.com  wrote:


On 15.02.2013 15:49, Heikki Linnakangas wrote:


Attached is a patch for git master. The basic idea is to split
InArchiveRecovery into two variables, InArchiveRecovery and
ArchiveRecoveryRequested. ArchiveRecoveryRequested is set when
recovery.conf exists. But if we don't know how far we need to recover,
we first perform crash recovery with InArchiveRecovery=false. When we
reach the end of WAL in pg_xlog, InArchiveRecovery is set, and we
continue with normal archive recovery.



New version of this attached, with a few bugs fixed.

I'm thinking that this should be back-patched to 9.2, but not to earlier
branches. Before 9.2, we don't PANIC at a reference to a non-existent page
until end of recovery, even if we've already reached consistency. The same
basic issue still exists in earlier versions, though: if you have
hot_standby=on, the system will open for read-only queries too early,
before the database is consistent. But this patch is invasive enough that
I'm wary of back-patching it further, when the worst that can happen is
that there's a small window right after startup when you can see an
inconsistent database in hot standby mode. Maybe after we get some more
testing of this in 9.2 and master. Opinions on that?


People have not yet complained about this problem with versions prior to
9.1. Is it worth backpatching in this case?


Possibly not..

Anyway, I've committed this to master and 9.2 now.

- Heikki




Re: [HACKERS] Review : Add hooks for pre- and post-processor executables for COPY and \copy

2013-02-22 Thread Amit Kapila
On Wednesday, February 20, 2013 5:25 PM Etsuro Fujita wrote:
 Hi Amit,
 
 Thank you for the review.

Etsuro-san, you are welcome.
 
  From: Amit Kapila [mailto:amit.kap...@huawei.com]
 
   Test case issues:
   -----------------
   1. Broken pipe is not handled in the case of the psql \copy command.
      The issues are as follows (verified on SuSE-Linux 10.2):
      1) psql exits when a "\copy ... TO" command is issued and the
         command/script is not found.  When popen is called in write mode it
         creates a valid file descriptor, and when psql then tries to write
         to it a "Broken pipe" error occurs which is not handled.
             psql# \copy pgbench_accounts TO PROGRAM '../compress.sh pgbench_accounts4.txt'
      2) When a \copy command is in progress and the program/command is
         killed/crashes due to any problem, psql exits.
 
  This is a headache.  I have no idea how to solve this.
 
  I think we can keep it for committer to take a call on this issue.
 
 Agreed.
 
  I have found few more minor issues as below:
 
  1. The comment above do_copy can be modified to address the new
  functionality it can handle.

  /*
   * Execute a \copy command (frontend copy). We have to open a file,
   * then submit a COPY query to the backend and either feed it data from
   * the file or route its response into the file.
   */
  bool
  do_copy(const char *args)
 
 Done.
 
  2.
  @@ -256,8 +273,14 @@ do_copy(const char *args)
  +    if (options->file == NULL && options->program)
  +    {
  +        psql_error("program is not supported to stdout/pstdout or from stdin/pstdin\n");
  +        return false;
  +    }

  should call free_copy_options(options); before return false;
 
 Good catch!  Done.
 
  3. The \copy command doesn't need a semicolon at the end; it was working
  prior to your patch, but now it gives an error.
  postgres=# \copy t1 from 'e:\pg_git_code\Data\t1_Data.txt';
  e:/pg_git_code/Data/t1_Data.txt';: No such file or directory
  e:/pg_git_code/Data/t1_Data.txt';: No such file or directory
 
 Sorry, I've fixed the bug.
 
  4. Please check whether OpenPipeStream() needs to call
  ReleaseLruFile().
 
 OpenPipeStream() calls ReleaseLruFile() by itself if necessary.

I asked this thinking that ReleaseLruFile() might not be useful for
OpenPipeStream(), as I was not sure how the new file descriptors get
allocated for popen.  But rereading the popen specs, I see that it can
be useful.

  5. The following in copy.sgml can be changed to be more meaningful, as
  the first line looks a little ad hoc.
  + <para>
  +  The command that input comes from or that output goes to.
  +  The command for COPY FROM, which input comes from, must write its
  +  output to standard output.  The command for COPY TO, which output
  +  goes to, must read its input from standard input.
  + </para>
 
 I've struggled to make the document more meaningful.

To be honest, I am not sure whether introducing pre-/post-processor
terminology is right or not, but again I shall let the committer decide
on this point.

  6. Can we have one example of this new syntax, it can make it more
  meaningful.
 
 Done.
 
 Sorry for the long delay.

All the reported issues are handled in the new patch.

I have one other small doubt: in the function parse_slash_copy, you
avoided tilde expansion for the program case, and I am not sure whether
that is the right thing or not.


I am marking this patch as Ready For Committer.


Notes For Committer
-------------------
1. Broken pipe is not handled in the case of the psql \copy command;
   this is currently documented.
2. Documentation needs to be checked, especially with focus on whether
   introducing pre-/post-processor terminology is okay.
3. In the function parse_slash_copy, tilde expansion is avoided; is that
   okay?


With Regards,
Amit Kapila.





Re: [HACKERS] 9.2.3 crashes during archive recovery

2013-02-22 Thread Heikki Linnakangas

On 14.02.2013 19:18, Fujii Masao wrote:

Yes. And the resource agent for streaming replication in Pacemaker (it's the
OSS clusterware) is the user of that archive recovery scenario, too. When it
starts up the server, it always creates the recovery.conf and starts the server
as the standby. It cannot start the master directly, IOW the server is always
promoted to the master from the standby. So when it starts up the server
after the server crashes, obviously it executes the same recovery scenario
(i.e., force archive recovery instead of crash one) as Kyotaro described.

The reason why that resource agent cannot start up the master directly is
that it manages three server states, called Master, Slave and Down. It can
move the server state from Down to Slave, and the reverse direction.
Also it can move the state from Slave to Master, and the reverse direction.
But there is no way to move the state between Down and Master directly.
This kind of state transition model is an isolated case in
clusterware, I think.


I don't have much sympathy for that to be honest. Seems like something 
that should be fixed in Pacemaker or the scripts used to glue it with 
PostgreSQL. However, this patch should make that work, so I guess 
everyone is happy.


- Heikki




Re: [HACKERS] 9.2.3 crashes during archive recovery

2013-02-22 Thread Heikki Linnakangas

On 15.02.2013 10:33, Kyotaro HORIGUCHI wrote:

Sorry, I omitted to show how we found this issue.

In an HA DB cluster consisting of Pacemaker and PostgreSQL, PostgreSQL
is stopped by 'pg_ctl stop -m i' regardless of the situation.


That seems like a bad idea. If nothing else, crash recovery can take a 
long time. I don't know much about Pacemaker, but wouldn't it make more 
sense to at least try fast shutdown first, falling back to immediate 
shutdown after a timeout?


- Heikki




Re: [HACKERS] use_remote_explain missing in docs of postgres_fdw

2013-02-22 Thread Michael Paquier
On Fri, Feb 22, 2013 at 5:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Michael Paquier michael.paqu...@gmail.com writes:
  While testing a bit this feature, I noticed that use_remote_explain is
  available in the list of options for FOREIGN TABLE and SERVER but this is
  not specified in the docs:
  http://www.postgresql.org/docs/devel/static/postgres-fdw.html

 Wups ... for some reason it's use_remote_estimate in the docs.  Thinko
 on my part probably --- although on reflection, maybe that's a better
 name anyway?

Indeed, use_remote_estimate is better. use_remote_explain makes it sound
like an EXPLAIN query would be sent to the remote server. That is at least
the first impression I had when testing the feature without reading the docs.
-- 
Michael


Re: [HACKERS] FDW for PostgreSQL

2013-02-22 Thread Thom Brown
On 21 February 2013 10:30, Tom Lane t...@sss.pgh.pa.us wrote:
 Shigeru Hanada shigeru.han...@gmail.com writes:
 [ postgres_fdw.v5.patch ]

 Applied with a lot of revisions.

Bit of an issue with selecting rows:

postgres=# SELECT * FROM animals;
 id | animal_name | animal_type | lifespan
----+-------------+-------------+----------
  1 | cat         | mammal      |       20
  2 | dog         | mammal      |       12
  3 | robin       | bird        |       12
  4 | dolphin     | mammal      |       30
  5 | gecko       | reptile     |       18
  6 | human       | mammal      |       85
  7 | elephant    | mammal      |       70
  8 | tortoise    | reptile     |      150
(8 rows)

postgres=# SELECT animals FROM animals;
 animals
---------
 (,,,)
 (,,,)
 (,,,)
 (,,,)
 (,,,)
 (,,,)
 (,,,)
 (,,,)
(8 rows)

postgres=# SELECT animals, animal_name FROM animals;
    animals    | animal_name
---------------+-------------
 (,cat,,)      | cat
 (,dog,,)      | dog
 (,robin,,)    | robin
 (,dolphin,,)  | dolphin
 (,gecko,,)    | gecko
 (,human,,)    | human
 (,elephant,,) | elephant
 (,tortoise,,) | tortoise
(8 rows)

postgres=# EXPLAIN (ANALYSE, VERBOSE) SELECT animals FROM animals;
                              QUERY PLAN
----------------------------------------------------------------------
 Foreign Scan on public.animals  (cost=100.00..100.24 rows=8 width=45)
   (actual time=0.253..0.255 rows=8 loops=1)
   Output: animals.*
   Remote SQL: SELECT NULL, NULL, NULL, NULL FROM public.animals
 Total runtime: 0.465 ms
(4 rows)

--
Thom




Re: [HACKERS] use_remote_explain missing in docs of postgres_fdw

2013-02-22 Thread Tom Lane
Michael Paquier michael.paqu...@gmail.com writes:
 On Fri, Feb 22, 2013 at 5:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Wups ... for some reason it's use_remote_estimate in the docs.  Thinko
 on my part probably --- although on reflection, maybe that's a better
 name anyway?

 Indeed, use_remote_estimate is better. use_remote_explain makes it sound
 like an EXPLAIN query would be sent to the remote server. That is at least
 the first impression I had when testing the feature without reading the docs.

Well, it does do that, at least for the part of the query that will be
executed on the remote server.  But if you read it to mean the whole
query would be EXPLAINed on the remote server, that's wrong.

The objection I have to use_remote_explain is that it's focusing too
much on the mechanism (ie, send an EXPLAIN command for execution) rather
than the result (ie, get a cost estimate from the remote server).
So I'm inclined to change the code to match the docs rather than vice
versa.  Anyone have a contrary opinion?

regards, tom lane




Re: [HACKERS] FDW for PostgreSQL

2013-02-22 Thread Tom Lane
Thom Brown t...@linux.com writes:
 Bit of an issue with selecting rows:

Ooops, looks like I screwed up the logic for whole-row references.
Will fix, thanks for the test case!

regards, tom lane




Re: [HACKERS] FDW for PostgreSQL

2013-02-22 Thread Thom Brown
On 22 February 2013 14:10, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 Bit of an issue with selecting rows:

 Ooops, looks like I screwed up the logic for whole-row references.
 Will fix, thanks for the test case!

Retried after your changes and all is well.  Thanks Tom.

-- 
Thom




Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-02-22 Thread Dimitri Fontaine
Hi,

Please find attached v3 of the Extension Templates patch, with full
pg_dump support thanks to having merged default_full_version, appended
with some regression tests now that it's possible.

The patch also implements ALTER RENAME and OWNER facilities for those
new templates objects.

Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Now, back to Extension Templates: the pg_dump output from the attached
 patch is not smart enough to cope with an extension that has been
 upgraded, it will only install the *default* version of it.

That's been fixed by merging in the default_full_version patch.

 There are two ways that I see about addressing that point:

   - implement default_full_version support for CREATE EXTENSION and have
 it working both in the case of file based installation and template
 based installation, then pg_dump work is really straightforward;

 CREATE EXTENSION pair VERSION '1.2'; -- will install 1.0 then update

And that just works at pg_restore time, automatically, without pg_dump
having to know anything about how.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



templates.v3.patch.gz
Description: Binary data



Re: [HACKERS] Review: create extension default_full_version

2013-02-22 Thread Dimitri Fontaine
Ibrar Ahmed ibrar.ah...@gmail.com writes:
 * In the case where we have an hstore--1.3.sql file and want to install
 that file, it failed because of default_full_version.

That's now fixed, please see the Extension Templates patch at

  http://www.postgresql.org/message-id/m21uc8l4j8@2ndquadrant.fr

Where you will even find regression tests for that problem.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] OSSP UUID present but cannot be compiled

2013-02-22 Thread David E. Wheeler
On Feb 22, 2013, at 12:42 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Yeah, you did:
 http://www.postgresql.org/message-id/62fdd311-1afc-4296-95d8-5751d1407...@justatheory.com
 
 I still think it's incumbent on OSSP to fix this, not us.  Or if
 they've gone dead, maybe we need to start looking for some other
 source of uuid support.

The OSSP UUID library has not been updated since 2008, and their FTP server has 
been down for a couple of years. Ralf posts to Twitter now and then, but has 
never responded to any of my DMs or emails.

Yeah, I think it's time to look elsewhere -- or to fork it.

David





Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-02-22 Thread Fujii Masao
On Thu, Feb 21, 2013 at 11:55 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 A ShareUpdateExclusiveLock is taken on index or table that is going to be
 rebuilt just before calling ReindexRelationConcurrently. So the solution I
 have here is to make REINDEX CONCURRENTLY fail for session 2. REINDEX
 CONCURRENTLY is made to allow a table to run DML in parallel to the
 operation so it doesn't look strange to me to make session 2 fail if REINDEX
 CONCURRENTLY is done in parallel on the same relation.

Thanks for updating the patch!

With the updated patch, REINDEX CONCURRENTLY seems to fail even when
ShareUpdateExclusiveLock is taken by a command other than REINDEX
CONCURRENTLY, for example, VACUUM. Is this intentional? This behavior
should be avoided. Otherwise, users might need to disable autovacuum
whenever they run REINDEX CONCURRENTLY.

With the updated patch, unfortunately, I got a similar deadlock error when
I ran REINDEX CONCURRENTLY in session 1 and ANALYZE in session 2.

ERROR:  deadlock detected
DETAIL:  Process 70551 waits for ShareLock on virtual transaction
3/745; blocked by process 70652.
Process 70652 waits for ShareUpdateExclusiveLock on relation 17460 of
database 12293; blocked by process 70551.
Process 70551: REINDEX TABLE CONCURRENTLY pgbench_accounts;
Process 70652: ANALYZE pgbench_accounts;
HINT:  See server log for query details.
STATEMENT:  REINDEX TABLE CONCURRENTLY pgbench_accounts;

Like the original problem that I reported, the temporary index created by
REINDEX CONCURRENTLY was NOT marked as INVALID.

=# \di pgbench_accounts*
                    List of relations
 Schema |           Name            | Type  |  Owner   |      Table
--------+---------------------------+-------+----------+------------------
 public | pgbench_accounts_pkey     | index | postgres | pgbench_accounts
 public | pgbench_accounts_pkey_cct | index | postgres | pgbench_accounts
(2 rows)

Regards,

-- 
Fujii Masao




Re: [HACKERS] JSON Function Bike Shedding

2013-02-22 Thread Robert Haas
On Thu, Feb 21, 2013 at 1:16 PM, Merlin Moncure mmonc...@gmail.com wrote:
 Well, for the case of the operator, it means whatever we reserve it to mean.
 Very much agree on the limitations of symbolic representation of behaviors
 (especially since some of the best ones were reserved by SQL or other
 actors), so I think there is growing consensus that such things
 should get moved to functions.   But functions are a lot less terse
 than operators so functions describing clearly defined behaviors are
 appreciated.

 So, get() means whatever we *define it to mean*, but the definition should be
 consistent. If it's shorthand for get from some multiple key/value
 container then fine.  If get() is just not specific enough -- let's
 at least try and go for something behavior specific (such as getMember
 or some such) before punting and resolving type specific function
 names.

 In fact, an awful lot of $proposal's behaviors are direct
 proxies for hstore behaviors, and a superficial think suggests
 that around 90% of the hstore API would make sense in JSON terms (even
 though Andrew didn't implement all those behaviors and we're not going
 to ask him to).  That to me is suggesting that tuple manipulation is a
 pretty general problem (hstore AKA tuple) and json only brings a
 couple of things to the table that isn't already covered there.
 Isn't it nice that you can document functions like avals/svals ONCE
 and not have to rewrite your triggers when you swap out hstore for
 json to get a couple extra behavior bits?

Naming the JSON stuff the same way we've already named the hstore
stuff is a somewhat promising idea, but it's hard for me to believe
we'd truly resist the urge to tinker.  avals and svals are completely
opaque to me; without reading the manual I have no idea what those
things mean.  If they had longer, more descriptive names it would be
more tempting.  Still, if the behaviors line up closely enough for
government work and we want to match the names up as well, I think
that'd be tolerable.

What I think is NOT tolerable is choosing a set of short but arbitrary
names which are different from anything that we have now and
pretending that we'll want to use those again for the next data type
that comes along.  That's just wishful thinking.  Programmers who
believe that their decisions will act as precedent for all future code
are almost inevitably disappointed.  Precedent grows organically out
of what happens; it's very hard to create it ex nihilo, especially
since we have no clear idea what future data types we'll likely want
to add.  Sure, if we add something that's just like JSON but with a
few extra features, we'll be able to reuse the names no problem.  But
that's unlikely, because we typically resist the urge to add things
that are too much like what we already have.  The main reason we're
adding JSON when we already have hstore is because JSON has become
something of a standard.  We probably WILL add more container types
in the future, but I'd guess that they are likely to be as different
from JSON as JSON is from XML, or from arrays.  I'm not convinced we
can define a set of semantics that are going to sweep that broadly.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] JSON Function Bike Shedding

2013-02-22 Thread David E. Wheeler
On Feb 22, 2013, at 9:37 AM, Robert Haas robertmh...@gmail.com wrote:

 What I think is NOT tolerable is choosing a set of short but arbitrary
 names which are different from anything that we have now and
 pretending that we'll want to use those again for the next data type
 that comes along.  That's just wishful thinking.  Programmers who
 believe that their decisions will act as precedent for all future code
 are almost inevitably disappointed.  Precedent grows organically out
 of what happens; it's very hard to create it ex nihilo, especially
 since we have no clear idea what future data types we'll likely want
 to add.  Sure, if we add something that's just like JSON but with a
 few extra features, we'll be able to reuse the names no problem.  But
 that's unlikely, because we typically resist the urge to add things
 that are too much like what we already have.  The main reason we're
 adding JSON when we already have hstore is because JSON has become
 something of a standard.  We probably WILL add more container types
 in the future, but I'd guess that they are likely to be as different
 from JSON as JSON is from XML, or from arrays.  I'm not convinced we
 can define a set of semantics that are going to sweep that broadly.

Maybe. I would argue, however, that a key/value-oriented data type will always 
call those things keys and values. So keys() and vals() (or get_keys() and 
get_vals()) seems pretty reasonable to me.

Anyway, back to practicalities, Andrew last posted:

 I am going to go the way that involves the least amount of explicit casting 
 or array construction. So get_path() stays, but becomes non-variadic. get() 
 can take an int or variadic text[], so you can do:
 
get(myjson,0)
get(myjson,'f1')
get(myjson,'f1','2','f3')
get_path(myjson,'{f1,2,f3}')

I would change these to mention the return types:

   get_json(myjson,0)
   get_json(myjson,'f1')
   get_json(myjson,'f1','2','f3')
   get_path_json(myjson,'{f1,2,f3}')

And then the complementary text-returning versions:

   get_text(myjson,0)
   get_text(myjson,'f1')
   get_text(myjson,'f1','2','f3')
   get_path_text(myjson,'{f1,2,f3}')

I do think that something like length() has pretty good semantics across data 
types, though. So to update the proposed names, taking in the discussion, I now 
propose:

Existing Name               Proposed Name
-------------------------   ---------------
json_array_length()         length()
json_each()                 each_json()
json_each_as_text()         each_text()
json_get()                  get_json()
json_get_as_text()          get_text()
json_get_path()             get_path_json()
json_get_path_as_text()     get_path_text()
json_object_keys()          get_keys()
json_populate_record()      to_record()
json_populate_recordset()   to_records()
json_unnest()               get_values()
json_agg()                  json_agg()

I still prefer to_record() and to_records() to populate_record(). It just feels 
more like a cast to me. I dislike json_agg(), but assume we're stuck with it.

But at this point, I’m happy to leave Andrew to it. The functionality is 
awesome.

Best,

David





Re: [HACKERS] pg_xlogdump

2013-02-22 Thread Alvaro Herrera
Andres Freund wrote:
 On 2013-02-13 12:09:37 -0300, Alvaro Herrera wrote:
  Here's an updated version of pg_xlogdump.  This is rebased on top of the
  committed xlogreader, palloc restructuring and libpgcommon, PG_RMGR
  stuff, and is basically a revamped version of what Andres submitted in
  http://www.postgresql.org/message-id/1357672187-7693-5-git-send-email-and...@2ndquadrant.com
 
 Two tiny followup bits, I had fixed since:
 * one copy-and-paste-o in an error message
 * replace stupid directory verification implementation
 * fix include in compat.c to include utils/timestamp.h instead of
   datatype/

Applied with some additional fixes.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_xlogdump

2013-02-22 Thread Andres Freund
On 2013-02-22 16:58:37 -0300, Alvaro Herrera wrote:
 Andres Freund wrote:
  On 2013-02-13 12:09:37 -0300, Alvaro Herrera wrote:
   Here's an updated version of pg_xlogdump.  This is rebased on top of the
   committed xlogreader, palloc restructuring and libpgcommon, PG_RMGR
   stuff, and is basically a revamped version of what Andres submitted in
   http://www.postgresql.org/message-id/1357672187-7693-5-git-send-email-and...@2ndquadrant.com
  
  Two tiny followup bits, I had fixed since:
  * one copy-and-paste-o in an error message
  * replace stupid directory verification implementation
  * fix include in compat.c to include utils/timestamp.h instead of
datatype/
 
 Applied with some additional fixes.

Thanks!

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] sql_drop Event Trigger

2013-02-22 Thread Robert Haas
On Thu, Feb 21, 2013 at 12:47 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 You're misunderstanding.  If you do DROP EVENT TRIGGER, the DDL_DROP
 event won't fire at all.  So no matter how messed up your system is, you
 can always fix it by simply dropping the event trigger.

 What I was saying is that if you have some command other than DROP EVENT
 TRIGGER, which happens to drop an event trigger, said event trigger will
 not be present in the pg_dropped_objects results.

Hmm.  But, that means that if some other object manages to depend on
an event trigger, and you drop the event trigger with CASCADE taking
the other object with it, then some other event trigger being used
for, say, replication might fail to see the drop.  Right now that's
not possible but it seems potentially fragile.  Not that I have a
great idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] sql_drop Event Trigger

2013-02-22 Thread Robert Haas
On Thu, Feb 21, 2013 at 12:52 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 What if the object that gets whacked around is one of the named
 objects rather than some dependency thereof?  Suppose for example that
 the event trigger drops the same object that the user tried to drop.
 We need to error out cleanly in that case, not blindly proceed with
 the drop.

 An error is raised, which I think is sane.  I think this peculiar
 situation warrants its own few lines in the new regression test.

Definitely.

 One funny thing I noticed is that if I add a column in a table being
 dropped, the targetObjects list does not change after the trigger has
 run.  The reason for this is that the table's attributes are not present
 in the targetObjects list; instead they are dropped manually by calling
 DeleteAttributeTuples().  I saw that you can end up with lingering
 pg_attribute entries that way.

I venture to guess that this is exactly the sort of thing that made
Tom argue upthread that we shouldn't be putting a firing point in the
middle of the drop operation.  Any slip-ups here will result in
corrupt catalogs, and it's not exactly future-proof either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Show type in psql SELECT

2013-02-22 Thread Mike Toews
Hi hackers,

Type info can be viewed with \d mytable; however, I'd often like to see
the type (and typmod) info in SELECT queries with psql, similar to pgAdmin
III. For example:

my_db=# \pset type
my_db=# SELECT * FROM my_table;
 gid     |      description       |    width
 integer | character varying(255) | numeric(6,3)
---------+------------------------+--------------
       1 | Hello                  |        3.220
(1 row)

or in expanded form:

my_db=# \x
my_db=# SELECT * FROM my_table;
-[ RECORD 1 ]--
gid : integer| 1
description : character varying(255) | Hello
width : numeric(6,3) | 3.220

Has anyone else thought this was a missing feature?

-Mike


Re: [HACKERS] Materialized views WIP patch

2013-02-22 Thread Josh Berkus

 That feels completely wrong to me.  For one thing, I can't think of
 any ALTER commands to populate or remove data.  What did you think
 of the idea of something like DISCARD MATERIALIZED VIEW DATA as a
 new statement?  Or maybe RESET MATERIALIZED VIEW?

I prefer RESET, especially since it could eventually support RESET ALL
MATERIALIZED VIEWS if that turns out to be useful.  How does the parser
like that?

BTW, to contradict Peter E., for my part I would NOT want matview resets
to be logged as DDL.  I would only want matview definition changes to
be so logged.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-02-22 Thread Stephen Frost
Zoltán,

* Zoltán Böszörményi (z...@cybertec.at) wrote:
 The patch now passed make check in both cases.

Is v29 the latest version of this patch..?

Looking through the patch, I've noticed a couple of things:

First off, it's not in context diff format, which is the PG standard for
patch submission.  Next, the documentation has a few issues: 

- Heavy-weight should really be defined in terms of specific lock
  types or modes.  We don't use the 'heavyweight' term anywhere else in
  the documentation that I've found.

- I'd reword this:

  Abort any statement that tries to acquire a heavy-weight lock on rows,
  pages, tables, indices or other objects and the lock(s) has to wait
  more than the specified number of milliseconds.

  as:

  Abort any statement which waits longer than the specified number of
  milliseconds while attempting to acquire a lock on ...

- I don't particularly like lock_timeout_option, for a couple of
  reasons.  First is simply the name is terrible, but beyond that, it
  strikes me that wanting to set both a 'per-lock timeout' and a
  'overall waiting-for-locks timeout' at the same time would be a
  reasonable use-case.  If we're going to have 2 GUCs and we're going to
  support each of those options, why not just let the user specify
  values for each?

- This is a bit disingenuous:

  If <literal>NOWAIT</literal> option is not specified and
  <varname>lock_timeout</varname> is set and the lock or statement
  (depending on <varname>lock_timeout_option</varname>) needs to wait
  more than the specified value in milliseconds, the command reports
  an error after timing out, rather than waiting indefinitely.

  The SELECT would simply continue to wait until the lock is available.
  That's a bit more specific than 'indefinitely'.  Also, we might add a
  sentence about statement_timeout as well, if we're going to document
  what can happen if you don't use NOWAIT with your SELECT-FOR-UPDATE.
  Should we add documentation to the other commands that wait for locks?

- Looks like this was ended mid-thought...:

+ * Lock a semaphore (decrement count), blocking if count would be < 0
+ * until a timeout triggers. Returns true if

- Not a big fan of this:

+* See notes in PGSemaphoreLock.

- I'm not thrilled with the new API for defining the timeouts.
  Particularly, I believe the more common convention for passing around
  arrays of structures is to have an empty array at the end, which
  avoids having to remember to update the # of entries every time it
  gets changed.  Of course, another option would be to use our existing
  linked list implementation and its helper macros such as our
  foreach() construct.

- As I've mentioned in other places/times, comments should be about why
  we're doing something, not what we're doing- the code tells you that.
  As such, comments like this really aren't great:
  /* Assert request is sane */
  /* Now re-enable the timer, if necessary. */

- Do we really need TimestampTzPlusMicroseconds..?

In general, I like this feature and a number of things above are pretty
small issues.  The main questions, imv, are if we really need both
'options', and, if so, how they should work, and the API for defining
timers.
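
To illustrate the question about the two 'options': a 'per-lock timeout' bounds each individual wait, while an 'overall waiting-for-locks timeout' bounds the sum of all waits in a statement. This is a sketch in Python of that distinction, not the patch's C implementation; the function and GUC-like parameter names here are made up for illustration:

```python
import threading, time

class LockTimeout(Exception):
    pass

def acquire_all(locks, lock_timeout=None, total_timeout=None):
    """Illustrative only: lock_timeout caps each individual wait,
    total_timeout caps the whole sequence of waits -- the two
    behaviors the proposed GUCs would select between (or combine)."""
    deadline = time.monotonic() + total_timeout if total_timeout else None
    for lk in locks:
        # Start from the per-lock cap (-1 means "wait forever" to acquire()).
        per_wait = lock_timeout if lock_timeout is not None else -1
        if deadline is not None:
            # Shrink the wait to whatever overall budget remains.
            remaining = deadline - time.monotonic()
            per_wait = remaining if per_wait < 0 else min(per_wait, remaining)
            if per_wait <= 0:
                raise LockTimeout("overall waiting-for-locks budget exhausted")
        if not lk.acquire(timeout=per_wait):
            raise LockTimeout("timed out waiting for a single lock")

# A lock already held elsewhere makes the per-lock timeout fire:
held = threading.Lock()
held.acquire()
try:
    acquire_all([threading.Lock(), held], lock_timeout=0.05)
except LockTimeout as e:
    print(e)   # prints: timed out waiting for a single lock
```

With both caps supplied, each wait is bounded by the smaller of the per-lock cap and the remaining overall budget, which is the use-case for setting the two GUCs simultaneously.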

Thanks,

Stephen




Re: [HACKERS] OSSP UUID present but cannot be compiled

2013-02-22 Thread Peter Eisentraut
On Fri, 2013-02-22 at 03:42 -0500, Tom Lane wrote:
 I still think it's incumbent on OSSP to fix this, not us.  Or if
 they've gone dead, maybe we need to start looking for some other
 source of uuid support. 

ossp-uuid still works, in spite of its age, but I have started a
replacement here if people are interested:
https://github.com/petere/pglibuuid





Re: [HACKERS] bugfix: --echo-hidden is not supported by \sf statements

2013-02-22 Thread Stephen Frost
Pavel,

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 We cannot remove minimal_error_message() because there
 are two SQL queries, and even with a fault-tolerant oid lookup,
 pg_get_functiondef can still raise an exception.

Why is that?  lookup_function_oid() only collects the oid to pass to
get_create_function_cmd(), why not just issue one query to the backend?
And use PSQLexec() to boot and get --echo-hidden, etc, for free?  And
eliminate the one-off error handling for just this case?

Thanks,

Stephen




Re: [HACKERS] bugfix: --echo-hidden is not supported by \sf statements

2013-02-22 Thread Pavel Stehule
2013/2/23 Stephen Frost sfr...@snowman.net:
 Pavel,

 * Pavel Stehule (pavel.steh...@gmail.com) wrote:
 We cannot remove minimal_error_message() because there
 are two SQL queries, and even with a fault-tolerant oid lookup,
 pg_get_functiondef can still raise an exception.

 Why is that?  lookup_function_oid() only collects the oid to pass to
 get_create_function_cmd(), why not just issue one query to the backend?
 And use PSQLexec() to boot and get --echo-hidden, etc, for free?  And
 eliminate the one-off error handling for just this case?

Yes, we can do it. There is only one issue:

the routines for parsing function signatures in regproc and
regprocedure would need to be updated, and I would like to get some
agreement before I start modifying core.

Regards

Pavel



 Thanks,

 Stephen




[HACKERS] New hooks for snapshot and transaction ID acquisition from external source

2013-02-22 Thread Michael Paquier
Hi,

What do you think about adding hooks for transaction ID and snapshot
acquisition? Those hooks could be located in AssignTransactionId:xact.c for
transaction ID and GetTransactionSnapshot:snapmgr.c for snapshots.
This is useful for multi-master applications that use external tools to
feed with unique transaction IDs and global snapshots all the nodes of a
cluster in order to maintain global data consistency and visibility, one of
the examples being cluster applications like Postgres-XC.

Opinions?
-- 
Michael


Re: [HACKERS] Show type in psql SELECT

2013-02-22 Thread Erik Rijkers
On Sat, February 23, 2013 00:09, Mike Toews wrote:

 Type info can be viewed with \d mytable, however often I'd like to see
 the type (and typmod) info in SELECT queries with psql, similar to pgAdmin
 III. For example:

 my_db=# \pset type
 my_db=# SELECT * FROM my_table;
  gid     |      description       |    width
  integer | character varying(255) | numeric(6,3)
 ---------+------------------------+--------------
        1 | Hello                  |        3.220
 (1 row)

 Has anyone else thought this was a missing feature?


As an occasional pgAdmin user, I have often thought that this feature should be
added to psql.  It would save time going back and forth between different views
(especially with broad tables).  I would be glad to have it.  It would have to
be optional, of course.

In short: +1 from me.


thanks,

Erik Rijkers




