Re: [HACKERS] VALID UNTIL

2012-03-22 Thread Tom Lane
Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 2012/3/14 Tom Lane t...@sss.pgh.pa.us
 For most people it won't
 matter, but for people who are using the feature, it seems like
 important information.  Per the OP's complaint, it's particularly
 important for those who have forgotten they're using the feature
 (and hence would not think to specify + ...)

 You're right, so I've attached a new patch with your suggestion.

Applied with minor corrections (mostly, ensuring the displayed text
is translatable).
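For anyone who has forgotten what the feature looks like, it's the role
expiration clause; a quick example (role name and date are made up):

CREATE ROLE report_user LOGIN PASSWORD 'secret' VALID UNTIL '2012-12-31';

Once that date passes the role's password stops being accepted, which is
exactly why the display change matters for people who set this long ago and
would not think to ask for the extra detail.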

regards, tom lane



Re: [HACKERS] Reconstructing Insert queries with indirection

2012-03-22 Thread Ashutosh Bapat
On Wed, Mar 21, 2012 at 10:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ashutosh Bapat ashutosh.ba...@enterprisedb.com writes:
  Consider following sequence of commands

  create type complex as (r float8, i float8);
  create type quad as (c1 complex, c2 complex);
  create temp table quadtable(f1 int, q quad);

  insert into quadtable (f1, q.c1.r, q.c2.i) values(44,55,66);

  While parsing the INSERT query, we parse the query with three columns and
  three values in the target list, but during rewriting we combine q.c1.r
 and
  q.c2.i into a single column in the form of FieldStore structure. In
  Postgres-XC, we deparse these parse trees, to be sent to other PostgreSQL
  servers.

 Well, basically you have a broken design there.  We are not going to
 adopt a restriction that post-rewrite trees are necessarily exactly
 representable as SQL, so there are going to be corner cases where this
 approach fails.


That's an optimization, and in the cases where it fails, we fall back to basics.
If there are known differences, please let us know.


  The assertion is added by commit 858d1699. The notes for the commit have
  following paragraph related to FieldStore deparsing.

  I chose to represent an assignment ArrayRef as array[subscripts] :=
  source,
  which is fairly reasonable and doesn't omit any information.
  However,
  FieldStore is problematic because the planner will fold multiple
  assignments
  to fields of the same composite column into one FieldStore, resulting
  in a
  structure that is hard to understand at all, let alone display
  comprehensibly.
  So in that case I punted and just made it print the source
  expression(s).

  So, there doesn't seem to be any serious reason behind the restriction.

 If you have a proposal for some reasonable way to print the actual
 meaning of the expression (and a patch to do it), we can certainly
 consider changing that code.  I don't think it's possible to display it
 as standard SQL, though.  The ArrayRef case is already not standard SQL.


Let me try to come up with a patch.

   regards, tom lane




-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company


Re: [HACKERS] Proposal: Create index on foreign table

2012-03-22 Thread Etsuro Fujita
(2012/03/22 9:24), Tom Lane wrote:
 What's at stake in the current discussion is whether it would be
 advantageous to an FDW if we were to store some information about
 remote indexes in the local catalogs.  It would still be the FDW's
 responsibility, and nobody else's, to make use of that information.
 I can believe that we might eventually decide to do that; but I do not
 think we have enough experience with different sorts of FDWs to define
 a good solution today.  And I think that most likely a good solution
 will *not* conflate such remote-index information with local indexes.
 
 So basically my reaction to Etsuro-san's proposal is "this is
 premature".  I think he should be hacking together some FDW-private
 facilities for individual FDWs instead (with the full understanding
 that these might be throwaway prototypes), and then looking for a
 common abstraction after he's done a few of those.

OK.  I'd like to focus first on the file FDW and the Postgres FDW.  I'd like
to thank everyone who commented on this topic.  Thanks!

Best regards,
Etsuro Fujita



Re: [HACKERS] checkpoint patches

2012-03-22 Thread Robert Haas
On Wed, Mar 21, 2012 at 3:38 PM, Robert Haas robertmh...@gmail.com wrote:
 It looks like I neglected to record that information for the last set
 of runs.  But I can try another set of runs and gather that
 information.

OK.  On further review, my previous test script contained several
bugs.  So you should probably ignore the previous set of results.  I
did a new set of runs, and this time bumped up checkpoint_segments a
bit more, in the hopes of giving the cache a bit more time to fill up
with dirty data between checkpoints.  Here are the full settings I used:

shared_buffers = 8GB
maintenance_work_mem = 1GB
synchronous_commit = off
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
wal_writer_delay = 20ms
log_line_prefix = '%t [%p] '
log_checkpoints='on'
checkpoint_segments='1000'
checkpoint_sync_pause='3'  # for the checkpoint-sync-pause-v1 branch only

With that change, each of the 6 tests (3 per branch) involved exactly
2 checkpoints, all triggered by time rather than by xlog.  The tps
results are:

checkpoint-sync-pause-v1: 2613.439217, 2498.874628, 2477.282015
master: 2479.955432, 2489.480892, 2458.600233
writeback-v1: 2386.394628, 2457.038789, 2410.833487

The 90th percentile latency results are:

checkpoint-sync-pause-v1: 1481, 1490, 1488
master: 1516, 1499, 1483
writeback-v1: 1497, 1502, 1491

However, looking at this a bit more, I think the
checkpoint-sync-pause-v1 patch contains an obvious bug - the GUC is
supposedly represented in seconds (though not marked with GUC_UNIT_S,
oops) but what the sleep implements is actually *tenths of a second*.
So I think I'd better rerun these tests with checkpoint_sync_pause=30
so that I get a three-second delay rather than a
three-tenths-of-a-second delay between each fsync.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] HOT updates REDIRECT line pointers

2012-03-22 Thread Simon Riggs
On Thu, Mar 22, 2012 at 1:28 AM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Mar 21, 2012 at 9:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 It strikes me that it likely wouldn't be any
 worse than, oh, say, flipping the default value of
 standard_conforming_strings,

 Really?  It's taking away functionality and not supplying any substitute
 (or at least you did not propose any).  In fact, you didn't even suggest
 exactly how you propose to not break joined UPDATE/DELETE.

 Oh, hmm, interesting.  I had been thinking that you were talking about
 a case where *user code* was relying on the semantics of the TID,
 which has always struck me as an implementation detail that users
 probably shouldn't get too attached to.  But now I see that you're
 talking about something much more basic - the fundamental
 implementation of UPDATE and DELETE relies on the TID not changing
 under them.  That pretty much kills this idea dead in the water.

Surely it just stops you using that idea 100% of the time. I don't see
why you can't have this co-exist with the current mechanism. So it
doesn't kill it for the common case.

But would the idea deliver much value? Is line pointer bloat a
problem? (I have no idea if it is/is not)

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] HOT updates REDIRECT line pointers

2012-03-22 Thread Merlin Moncure
On Wed, Mar 21, 2012 at 8:28 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Mar 21, 2012 at 9:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 It strikes me that it likely wouldn't be any
 worse than, oh, say, flipping the default value of
 standard_conforming_strings,

 Really?  It's taking away functionality and not supplying any substitute
 (or at least you did not propose any).  In fact, you didn't even suggest
 exactly how you propose to not break joined UPDATE/DELETE.

 Oh, hmm, interesting.  I had been thinking that you were talking about
 a case where *user code* was relying on the semantics of the TID,
 which has always struck me as an implementation detail that users
 probably shouldn't get too attached to.

small aside: tid usage is the best method for kludging a delete/limit:
delete from del where ctid = any (array(select ctid from del limit 10));
(via http://postgres.cz/wiki/PostgreSQL_SQL_Tricks)

merlin



Re: [HACKERS] Faster compression, again

2012-03-22 Thread Simon Riggs
On Thu, Mar 15, 2012 at 10:34 PM, Daniel Farina dan...@heroku.com wrote:

 I'd really like to find a way to layer both message-oblivious and
 message-aware transport under FEBE with both backend and frontend
 support without committing the project to new code for-ever-and-ever.
 I guess I could investigate it in brief now, unless you've already
 thought about/done some work in that area.

Not done anything in that area myself.

I think its important that we have compression for the COPY protocol
within libpq, so I'll add that to my must-do list - but would be more
than happy if you wanted to tackle that yourself.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] HOT updates REDIRECT line pointers

2012-03-22 Thread Robert Haas
On Thu, Mar 22, 2012 at 9:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Surely it just stops you using that idea 100% of the time. I don't see
 why you can't have this co-exist with the current mechanism. So it
 doesn't kill it for the common case.

I guess you could use it if you knew that there were no DELETE or
UPDATE statements in progress on that table, but it seems like
figuring that out would be more trouble than it's worth.

 But would the idea deliver much value? Is line pointer bloat a
 problem? (I have no idea if it is/is not)

Good question.  I think that all things being equal it would be worth
getting rid of, but I'm not sure it's worth paying a lot for.  Maybe
when I have time I'll try to gather some statistics.
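For whoever gets to it first: contrib/pageinspect makes it fairly easy to
count redirect line pointers on a page; a sketch (relation name and block
number are arbitrary):

SELECT count(*) AS redirect_pointers
FROM heap_page_items(get_raw_page('pgbench_accounts', 0))
WHERE lp_flags = 2;   -- 2 = LP_REDIRECT

Summing that over a sample of blocks would give a rough idea of how much
space the redirects are actually eating.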

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Trivial libpq refactoring patch (was: Re: [HACKERS] Another review of URI for libpq, v7 submission)

2012-03-22 Thread Tom Lane
Alex Shulgin a...@commandprompt.com writes:
 While working on this fix I've figured out that I need my
 conninfo_uri_parse to support use_defaults parameter, like
 conninfo(_array)_parse functions do.

 The problem is that the block of code which does the defaults handling
 is duplicated in both of the above functions.  What I'd like to do is
 extract it into a separate function to call.  What I wouldn't like is
 bloating the original URI patch with this unrelated change.

Applied with minor adjustments --- notably, I thought
conninfo_add_defaults was a better name for the function.

regards, tom lane



Re: [HACKERS] Gsoc2012 Idea --- Social Network database schema

2012-03-22 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Well, the standard syntax apparently aims to reduce the number of
 returned rows, which ORDER BY does not.  Maybe you could do it
 with ORDER BY .. LIMIT, but the idea here I think is that we'd
 like to sample the table without reading all of it first, so that
 seems to miss the point.
 
 I think actually the traditional locution is more like
   WHERE random() < constant
 where the constant is the fraction of the table you want.  And
 yeah, the presumption is that you'd like it to not actually read
 every row.  (Though unless the sampling density is quite a bit
 less than 1 row per page, it's not clear how much you're really
 going to win.)
 
It's all going to depend on the use cases, which I don't think I've
heard described very well yet.
 
I've had to pick random rows from, for example, a table of
disbursements to support a financial audit.  In those cases it has
been the sample size that mattered, and order didn't.  One
interesting twist there is that for some of these financial audits
they wanted the probability of a row being selected to be
proportional to the dollar amount of the disbursement.  I don't
think you can do this without a first pass across the whole data
set.
 
I've also been involved in developing software to pick random people
for jury selection processes.  In some of these cases, you don't
want a certain percentage, you want a particular number of people,
and you want the list to be ordered so that an initial set of
potential jurors can be seated from the top of the list and then as
the voir dire[1] process progresses you can replace excused jurors
from progressive positions on the randomized list.
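A minimal SQL sketch of that jury-list shape (table and column names are
hypothetical, and this variant does read the whole table):

SELECT row_number() OVER (ORDER BY r) AS list_position,
       person_id
FROM (SELECT person_id, random() AS r FROM juror_pool) AS shuffled
ORDER BY r;

The initial panel is seated from the top of the list, and excused jurors are
replaced from successive positions.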
 
In both cases you need to be able to explain the technique used in
lay terms and show why it is very hard for anyone to predict or
control which rows are chosen, but also use statistical analysis to
prove that there is no significant correlation between the rows
chosen and identifiable characteristics of the rows.  For example,
selecting all the rows from random blocks would be right out for
juror selection because a list from the state DOT of people with
driver's licenses and state ID cards would probably be in license
number order when loaded, and since the start of the driver's
license number is a soundex of the last name, there is a strong
correlation between blocks of the table and ethnicity.
 
One technique which might be suitably random without reading the
whole table would be to figure out a maximum block number and tuple
ID for the table, and generate a series of random ctid values to
read.  If the tuple doesn't exist or is not visible to the snapshot,
you ignore it and continue, until you have read the requisite number
of rows.  You could try to generate them in advance and sort them by
block number, but then you need to solve the problems of what to do
if that set of ctids yields too many rows or too few rows, both of
which have sticky issues.
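A rough SQL sketch of that ctid-probing idea (the table name and the
tuples-per-page guess are hypothetical; probes that miss or hit invisible
tuples simply return nothing):

SELECT d.*
FROM disbursements d
WHERE d.ctid = ANY (ARRAY(
        SELECT format('(%s,%s)',
                      floor(random() * c.relpages)::int,
                      1 + floor(random() * 50)::int)::tid
        FROM pg_class c, generate_series(1, 1000)  -- oversample; many probes miss
        WHERE c.relname = 'disbursements'))
LIMIT 100;  -- the requisite sample size

If the first batch of probes comes up short you have to loop and generate
more, which is exactly the too-few-rows problem mentioned above.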
 
-Kevin
 
[1] http://en.wikipedia.org/wiki/Voir_dire#Use_in_the_United_States




Re: [HACKERS] Gsoc2012 Idea --- Social Network database schema

2012-03-22 Thread Christopher Browne
On Thu, Mar 22, 2012 at 12:38 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Well, the standard syntax apparently aims to reduce the number of
 returned rows, which ORDER BY does not.  Maybe you could do it
 with ORDER BY .. LIMIT, but the idea here I think is that we'd
 like to sample the table without reading all of it first, so that
 seems to miss the point.

 I think actually the traditional locution is more like
        WHERE random() < constant
 where the constant is the fraction of the table you want.  And
 yeah, the presumption is that you'd like it to not actually read
 every row.  (Though unless the sampling density is quite a bit
 less than 1 row per page, it's not clear how much you're really
 going to win.)

 It's all going to depend on the use cases, which I don't think I've
 heard described very well yet.

 I've had to pick random rows from, for example, a table of
 disbursements to support a financial audit.  In those cases it has
 been the sample size that mattered, and order didn't.  One
 interesting twist there is that for some of these financial audits
 they wanted the probability of a row being selected to be
 proportional to the dollar amount of the disbursement.  I don't
 think you can do this without a first pass across the whole data
 set.

This one was commonly called Dollar Unit Sampling, though the
terminology has gradually gotten internationalized.
http://www.dummies.com/how-to/content/how-does-monetary-unit-sampling-work.html

What the article doesn't mention is that some particularly large items
might wind up covering multiple samples.  In the example, they're
looking for a sample every $3125 down the list.  If there was a single
transaction valued at around ten times that interval, it would cover
roughly 10 of the desired samples.
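For what it's worth, the running-total mechanics are easy to express with a
window function; a sketch (table and column names are hypothetical), where
samples_covered shows how many of the $3125 sampling points one large item
absorbs:

WITH running AS (
    SELECT id, amount,
           sum(amount) OVER (ORDER BY id) AS cum
    FROM disbursements
)
SELECT id, amount,
       floor(cum / 3125) - floor((cum - amount) / 3125) AS samples_covered
FROM running
WHERE floor(cum / 3125) > floor((cum - amount) / 3125);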

It isn't possible to do this without scanning across the entire table.

If you want repeatability, you probably want to instantiate a copy of
enough information to indicate the ordering chosen.  That's probably
something that needs to be captured as part of the work of the audit,
so not only does it need to involve a pass across the data, it
probably requires capturing a fair bit of data for posterity.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-03-22 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 [ pg_stat_statements_norm_2012_02_29.patch ]

I started to look at this patch (just the core-code modifications so
far).  There are some things that seem not terribly well thought out:

* It doesn't look to me like it will behave very sanely with rules.
The patch doesn't store queryId in a stored rule tree, so a Query
retrieved from a stored rule will have a zero queryId, and that's
what will get pushed through to the resulting plan tree as well.
So basically all DO ALSO or DO INSTEAD operations are going to get
lumped together by pg_stat_statements, and separated from the queries
that triggered them, which seems pretty darn unhelpful.
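For concreteness, the sort of case in question (object names are made up):

CREATE TABLE orders     (id int, note text);
CREATE TABLE orders_log (id int, logged_at timestamptz);

CREATE RULE orders_audit AS ON INSERT TO orders
    DO ALSO INSERT INTO orders_log VALUES (NEW.id, now());

INSERT INTO orders VALUES (1, 'hello');

The INSERT is rewritten into two Queries, and with queryId left as zero on the
rule-generated one, its execution is tracked apart from the statement that
triggered it instead of being attributed to it.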

I don't know that storing queryId would be better, since after a restart
that'd mean there are query IDs running around in the system that the
current instance of pg_stat_statements has never heard of.  Permanently
stored query IDs would also be a headache if you needed to change the
fingerprint algorithm, or if there were more than one add-on trying to
use the query ID support.

I'm inclined to think that the most useful behavior is to teach the
rewriter to copy queryId from the original query into all the Queries
generated by rewrite.  Then, all rules fired by a source query would
be lumped into that query for tracking purposes.  This might not be
the ideal behavior either, but I don't see a better solution.

* The patch injects the query ID calculation code by redefining
parse_analyze and parse_analyze_varparams as hookable functions and
then getting into those hooks.  I don't find this terribly sane either.
pg_stat_statements has no interest in the distinction between those two
methods of getting into parse analysis.  Perhaps more to the point,
those are not the only two ways of getting into parse analysis: some
places call transformTopLevelStmt directly, for instance
pg_analyze_and_rewrite_params.  While it might be that the code paths
that do that are not of interest for fingerprinting queries, it's far
from obvious that these two are the correct and only places to do such
fingerprinting.

I think that if we are going to take the attitude that we only care
about fingerprinting queries that come in from the client, then we
ought to call the fingerprinting code in the client-message-processing
routines in postgres.c.  But in that case we need to be a little clearer
about what we are doing with unfingerprinted queries.  Alternatively,
we might take the position that we want to fingerprint every Query
struct, but in that case the existing hooks are clearly insufficient.
This seems to boil down to what you want to have happen with queries
created/executed inside functions, which is something I don't recall
being discussed.

Either way, I think we'd be a lot better advised to define a single
hook post_parse_analysis_hook and make the core code responsible
for calling it at the appropriate places, rather than supposing that
the contrib module knows exactly which core functions ought to be
the places to do it.

Thoughts?

regards, tom lane



Re: [HACKERS] Finer Extension dependencies

2012-03-22 Thread Dimitri Fontaine
Hi,

Again, thanks very much for the review.  Here's an updated patch (just
merged against master) fixing most of your comments here.  I couldn't
reproduce previous problems with the attached:

 - DROP EXTENSION was broken, asking to cascade to self
 - CREATE EXTENSION was bypassing requires

I could reproduce the second problem, then fix it with the following
one-liner. I missed it because, without this fix, my test case still fails
for not finding the cube type rather than the cube extension:

-   if (!OidIsValid(featoid)  !missing_ok)
+   if (!OidIsValid(*featoid)  !missing_ok)

Thank you all for your patience while I was busy elsewhere, it's
definitely not a show stopper in my book :)

dim=# create extension earthdistance;
ERROR:  feature "cube" is not currently provided
HINT:  Please install an extension that provides it first

dim=# create extension cube;
CREATE EXTENSION

dim=# create extension earthdistance;
CREATE EXTENSION

dim=# drop extension cube cascade;
NOTICE:  drop cascades to extension earthdistance
DROP EXTENSION

Hitoshi Harada umi.tan...@gmail.com writes:
 - There are some mixture of pg_extension_feature and pg_extension_features

Fixed.

 - The doc says pg_extension_features has four columns but it's not true.

Well the SGML table describing the catalog has 4 cols :)

 - Line 608 is bad. In the loop, provides_itself is repeatedly changed
 to true and false and I guess that's not what you meant.

Fixed.

 - Line 854+, you can fold two blocks into one.  The two blocks are
 similar and by giving provides list with list_make1 when
 control->provides == NIL you can do it in one block.

Fixed.

 - s/trak/track/

Fixed, I guess the English would need rephrasing.

 - Line 960, you missed updating classId for dependency.

I don't think so.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

diff --git a/contrib/pg_upgrade_support/pg_upgrade_support.c b/contrib/pg_upgrade_support/pg_upgrade_support.c
index 472f152..b5633d4 100644
--- a/contrib/pg_upgrade_support/pg_upgrade_support.c
+++ b/contrib/pg_upgrade_support/pg_upgrade_support.c
@@ -151,6 +151,7 @@ create_empty_extension(PG_FUNCTION_ARGS)
 	Datum		extConfig;
 	Datum		extCondition;
 	List	   *requiredExtensions;
+	List   *features = NIL;	/* FIXME, get features from catalogs */
 
 	if (PG_ARGISNULL(4))
 		extConfig = PointerGetDatum(NULL);
@@ -190,7 +191,8 @@ create_empty_extension(PG_FUNCTION_ARGS)
 		 text_to_cstring(extVersion),
 		 extConfig,
 		 extCondition,
-		 requiredExtensions);
+		 requiredExtensions,
+		 features);
 
 	PG_RETURN_VOID();
 }
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9564e01..bf7dd74 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -149,6 +149,11 @@
  /row
 
  row
+  entrylink linkend=catalog-pg-extensionstructnamepg_extension_feature/structname/link/entry
+  entryfeatures provided by installed extensions/entry
+ /row
+
+ row
   entrylink linkend=catalog-pg-foreign-data-wrapperstructnamepg_foreign_data_wrapper/structname/link/entry
   entryforeign-data wrapper definitions/entry
  /row
@@ -3058,6 +3063,51 @@
   /para
  /sect1
 
+ sect1 id=catalog-pg-extension-feature
+  titlestructnamepg_extension_feature/structname/title
+
+  indexterm zone=catalog-pg-extension-feature
+   primarypg_extension_feature/primary
+  /indexterm
+
+  para
+   The catalog structnamepg_extension_feature/structname stores
+   information about the features provided by installed extensions.
+   See xref linkend=extend-extensions for details about extensions.
+  /para
+
+  table
+   titlestructnamepg_extension_feature/ Columns/title
+
+   tgroup cols=4
+thead
+ row
+  entryName/entry
+  entryType/entry
+  entryReferences/entry
+  entryDescription/entry
+ /row
+/thead
+
+tbody
+ row
+  entrystructfieldextoid/structfield/entry
+  entrytypeoid/type/entry
+  entryliterallink linkend=catalog-pg-extensionstructnamepg_extension/structname/link.oid/literal/entry
+  entryOid of the extension that provides this feature/entry
+ /row
+
+ row
+  entrystructfieldextfeature/structfield/entry
+  entrytypename/type/entry
+  entry/entry
+  entryName of the feature/entry
+ /row
+
+/tbody
+   /tgroup
+  /table
+ /sect1
 
  sect1 id=catalog-pg-foreign-data-wrapper
   titlestructnamepg_foreign_data_wrapper/structname/title
@@ -6827,11 +6877,17 @@
  row
   entrystructfieldrequires/structfield/entry
   entrytypename[]/type/entry
-  entryNames of prerequisite extensions,
+  entryNames of prerequisite features,
or literalNULL/literal if none/entry
  /row
 
  row
+  entrystructfieldprovides/structfield/entry
+  entrytypename[]/type/entry
+  entryNames of provided features/entry
+ /row
+
+ row
   entrystructfieldcomment/structfield/entry
   

Re: [HACKERS] HOT updates REDIRECT line pointers

2012-03-22 Thread Pavan Deolasee
On Thu, Mar 22, 2012 at 6:58 AM, Robert Haas robertmh...@gmail.com wrote:

 On Wed, Mar 21, 2012 at 9:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  It strikes me that it likely wouldn't be any
  worse than, oh, say, flipping the default value of
  standard_conforming_strings,
 
  Really?  It's taking away functionality and not supplying any substitute
  (or at least you did not propose any).  In fact, you didn't even suggest
  exactly how you propose to not break joined UPDATE/DELETE.

 Oh, hmm, interesting.  I had been thinking that you were talking about
 a case where *user code* was relying on the semantics of the TID,
 which has always struck me as an implementation detail that users
 probably shouldn't get too attached to.  But now I see that you're
 talking about something much more basic - the fundamental
 implementation of UPDATE and DELETE relies on the TID not changing
 under them.  That pretty much kills this idea dead in the water.


IIRC in early versions of HOT, I tried to swap the TIDs of newer versions
with the older version to handle this problem, but soon realized that it
might turn out too tricky and error-prone. The UPDATE/DELETE problem, and
any other piece of code that works with TIDs and caches them across buffer
lock/unlock, could face the issue. But it would be worthwhile to revisit the
issue and see if there is some easy way to reclaim those redirect line
pointers. If the HOT chain does not become dead, there will always be the
overhead of an additional line pointer.

Thanks,
Pavan


Re: [HACKERS] Finer Extension dependencies

2012-03-22 Thread Alvaro Herrera

Excerpts from Dimitri Fontaine's message of Thu Mar 22 14:38:29 -0300 2012:
 Hi,
 
 Again, thanks very much for the review.  Here's an updated patch (just
 merged against master) fixing most of your comments here.  I couldn't
 reproduce previous problems with the attached:

get_available_versions_for_extension seems to contain a bunch of
commented-out lines ...

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Finer Extension dependencies

2012-03-22 Thread Dimitri Fontaine
Alvaro Herrera alvhe...@commandprompt.com writes:
 get_available_versions_for_extension seems to contain a bunch of
 commented-out lines ...

Damn. Sorry about that.  Here's a cleaned-up version of the patch.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

diff --git a/contrib/pg_upgrade_support/pg_upgrade_support.c b/contrib/pg_upgrade_support/pg_upgrade_support.c
index 472f152..b5633d4 100644
--- a/contrib/pg_upgrade_support/pg_upgrade_support.c
+++ b/contrib/pg_upgrade_support/pg_upgrade_support.c
@@ -151,6 +151,7 @@ create_empty_extension(PG_FUNCTION_ARGS)
 	Datum		extConfig;
 	Datum		extCondition;
 	List	   *requiredExtensions;
+	List   *features = NIL;	/* FIXME, get features from catalogs */
 
 	if (PG_ARGISNULL(4))
 		extConfig = PointerGetDatum(NULL);
@@ -190,7 +191,8 @@ create_empty_extension(PG_FUNCTION_ARGS)
 		 text_to_cstring(extVersion),
 		 extConfig,
 		 extCondition,
-		 requiredExtensions);
+		 requiredExtensions,
+		 features);
 
 	PG_RETURN_VOID();
 }
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9564e01..bf7dd74 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -149,6 +149,11 @@
  /row
 
  row
+  entrylink linkend=catalog-pg-extensionstructnamepg_extension_feature/structname/link/entry
+  entryfeatures provided by installed extensions/entry
+ /row
+
+ row
   entrylink linkend=catalog-pg-foreign-data-wrapperstructnamepg_foreign_data_wrapper/structname/link/entry
   entryforeign-data wrapper definitions/entry
  /row
@@ -3058,6 +3063,51 @@
   /para
  /sect1
 
+ sect1 id=catalog-pg-extension-feature
+  titlestructnamepg_extension_feature/structname/title
+
+  indexterm zone=catalog-pg-extension-feature
+   primarypg_extension_feature/primary
+  /indexterm
+
+  para
+   The catalog structnamepg_extension_feature/structname stores
+   information about the features provided by installed extensions.
+   See xref linkend=extend-extensions for details about extensions.
+  /para
+
+  table
+   titlestructnamepg_extension_feature/ Columns/title
+
+   tgroup cols=4
+thead
+ row
+  entryName/entry
+  entryType/entry
+  entryReferences/entry
+  entryDescription/entry
+ /row
+/thead
+
+tbody
+ row
+  entrystructfieldextoid/structfield/entry
+  entrytypeoid/type/entry
+  entryliterallink linkend=catalog-pg-extensionstructnamepg_extension/structname/link.oid/literal/entry
+  entryOid of the extension that provides this feature/entry
+ /row
+
+ row
+  entrystructfieldextfeature/structfield/entry
+  entrytypename/type/entry
+  entry/entry
+  entryName of the feature/entry
+ /row
+
+/tbody
+   /tgroup
+  /table
+ /sect1
 
  sect1 id=catalog-pg-foreign-data-wrapper
   titlestructnamepg_foreign_data_wrapper/structname/title
@@ -6827,11 +6877,17 @@
  row
   entrystructfieldrequires/structfield/entry
   entrytypename[]/type/entry
-  entryNames of prerequisite extensions,
+  entryNames of prerequisite features,
or literalNULL/literal if none/entry
  /row
 
  row
+  entrystructfieldprovides/structfield/entry
+  entrytypename[]/type/entry
+  entryNames of provided features/entry
+ /row
+
+ row
   entrystructfieldcomment/structfield/entry
   entrytypetext/type/entry
   entryComment string from the extension's control file/entry
diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml
index 8d5b9d0..af5fc4c 100644
--- a/doc/src/sgml/extend.sgml
+++ b/doc/src/sgml/extend.sgml
@@ -463,9 +463,30 @@
   termvarnamerequires/varname (typestring/type)/term
   listitem
para
-A list of names of extensions that this extension depends on,
-for example literalrequires = 'foo, bar'/literal.  Those
-extensions must be installed before this one can be installed.
+A list of features that this extension depends on, for
+example literalrequires = 'foo, bar'/literal. Those features
+must be provided by an already installed extension before this one
+can be installed.
+   /para
+  /listitem
+ /varlistentry
+
+ varlistentry
+  termvarnameprovides/varname (typestring/type)/term
+  listitem
+   para
+A list of names of features that this extension provides, for
+example literalprovides = 'foo, extname_bugfix_12345'/literal.
+Those features can help providing finer dependencies: when updating
+an existing extension you can add new features in this list so that
+it's possible to depend on those new features. It also makes it
+possible to deprecate features that an extension would no longer
+provide.
+   /para
+   para
+The extension's name itself is always considered a member of
+the literalprovides/literal list, so that you can 

Re: [HACKERS] renaming domain constraint

2012-03-22 Thread Peter Eisentraut
On Wed, 2012-03-21 at 11:57 -0300, Alvaro Herrera wrote:
 Excerpts from Robert Haas's message of Wed Mar 21 11:43:17 -0300 2012:
  On Fri, Mar 16, 2012 at 1:34 PM, Peter Eisentraut pete...@gmx.net wrote:
   Here is a patch for being able to rename constraints of domains.  It
   goes on top of the previously committed patch for renaming table
   constraints.
  
  I don't like the way you've modified get_constraint_oid(), which is
  currently parallel to many other get_whatever_oid() functions and with
  this patch, would no longer be.  There seems to be little point in
  shoehorning the new functionality into the existing function anyway,
  considering that you've conditionalized basically every piece of logic
  in the function.  I think you should just invent a completely separate
  function and be done with it.
 
 get_relation_constraint_oid() plus get_domain_constraint_oid()?

Makes sense.  Updated patch attached.
diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml
index 2511a12..c59975a 100644
--- a/doc/src/sgml/ref/alter_domain.sgml
+++ b/doc/src/sgml/ref/alter_domain.sgml
@@ -32,6 +32,8 @@ ALTER DOMAIN replaceable class=PARAMETERname/replaceable
 ALTER DOMAIN replaceable class=PARAMETERname/replaceable
 DROP CONSTRAINT [ IF EXISTS ] replaceable class=PARAMETERconstraint_name/replaceable [ RESTRICT | CASCADE ]
 ALTER DOMAIN replaceable class=PARAMETERname/replaceable
+ RENAME CONSTRAINT replaceable class=PARAMETERconstraint_name/replaceable TO replaceable class=PARAMETERnew_constraint_name/replaceable
+ALTER DOMAIN replaceable class=PARAMETERname/replaceable
 VALIDATE CONSTRAINT replaceable class=PARAMETERconstraint_name/replaceable
 ALTER DOMAIN replaceable class=PARAMETERname/replaceable
 OWNER TO replaceable class=PARAMETERnew_owner/replaceable
@@ -103,6 +105,15 @@ ALTER DOMAIN replaceable class=PARAMETERname/replaceable
/varlistentry
 
varlistentry
+termRENAME CONSTRAINT/term
+listitem
+ para
+  This form changes the name of a constraint on a domain.
+ /para
+/listitem
+   /varlistentry
+
+   varlistentry
 termVALIDATE CONSTRAINT/term
 listitem
  para
@@ -182,7 +193,7 @@ ALTER DOMAIN replaceable class=PARAMETERname/replaceable
   termreplaceable class=PARAMETERconstraint_name/replaceable/term
   listitem
para
-Name of an existing constraint to drop.
+Name of an existing constraint to drop or rename.
/para
   /listitem
  /varlistentry
@@ -226,6 +237,15 @@ ALTER DOMAIN replaceable class=PARAMETERname/replaceable
  /varlistentry
 
  varlistentry
+  termreplaceable class=PARAMETERnew_constraint_name/replaceable/term
+  listitem
+   para
+The new name for the constraint.
+   /para
+  /listitem
+ /varlistentry
+
+ varlistentry
   termreplaceable class=PARAMETERnew_owner/replaceable/term
   listitem
para
@@ -289,6 +309,13 @@ ALTER DOMAIN zipcode DROP CONSTRAINT zipchk;
   /para
 
   para
+   To rename a check constraint on a domain:
+programlisting
+ALTER DOMAIN zipcode RENAME CONSTRAINT zipchk TO zip_check;
+/programlisting
+  /para
+
+  para
To move the domain into a different schema:
 programlisting
 ALTER DOMAIN zipcode SET SCHEMA customers;
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index e6e0347..250069f 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -753,7 +753,7 @@ get_object_address_relobject(ObjectType objtype, List *objname,
 			case OBJECT_CONSTRAINT:
 address.classId = ConstraintRelationId;
 address.objectId =
-	get_constraint_oid(reloid, depname, missing_ok);
+	get_relation_constraint_oid(reloid, depname, missing_ok);
 address.objectSubId = 0;
 break;
 			default:
diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c
index 342cf75..bf174b6 100644
--- a/src/backend/catalog/pg_constraint.c
+++ b/src/backend/catalog/pg_constraint.c
@@ -736,12 +736,12 @@ AlterConstraintNamespaces(Oid ownerId, Oid oldNspId,
 }
 
 /*
- * get_constraint_oid
+ * get_relation_constraint_oid
  *		Find a constraint on the specified relation with the specified name.
  *		Returns constraint's OID.
  */
 Oid
-get_constraint_oid(Oid relid, const char *conname, bool missing_ok)
+get_relation_constraint_oid(Oid relid, const char *conname, bool missing_ok)
 {
 	Relation	pg_constraint;
 	HeapTuple	tuple;
@@ -794,6 +794,64 @@ get_constraint_oid(Oid relid, const char *conname, bool missing_ok)
 }
 
 /*
+ * get_domain_constraint_oid
+ *		Find a constraint on the specified domain with the specified name.
+ *		Returns constraint's OID.
+ */
+Oid
+get_domain_constraint_oid(Oid typid, const char *conname, bool missing_ok)
+{
+	Relation	pg_constraint;
+	HeapTuple	tuple;
+	SysScanDesc scan;
+	ScanKeyData skey[1];
+	Oid			conOid = InvalidOid;
+
+	/*
+	 * Fetch the constraint tuple from 

Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-03-22 Thread Peter Geoghegan
On 22 March 2012 17:19, Tom Lane t...@sss.pgh.pa.us wrote:
 I'm inclined to think that the most useful behavior is to teach the
 rewriter to copy queryId from the original query into all the Queries
 generated by rewrite.  Then, all rules fired by a source query would
 be lumped into that query for tracking purposes.  This might not be
 the ideal behavior either, but I don't see a better solution.

+1. This behaviour seems fairly sane. The lumping together of DO ALSO
and DO INSTEAD operations was a simple oversight.

 This seems to boil down to what you want to have happen with queries
 created/executed inside functions, which is something I don't recall
 being discussed.

Uh, well, pg_stat_statements is clearly supposed to monitor execution
of queries from within functions - there is a GUC,
pg_stat_statements.track, which can be set to 'all' to track nested
queries. That being the case, we should clearly be fingerprinting
those query trees too.

The fact that we'll fingerprint these queries even though we usually
don't care about them doesn't seem like a problem, since in practice
the vast majority will be prepared.
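To spell out the nested case (function and table names are made up):

SET pg_stat_statements.track = 'all';

CREATE FUNCTION count_widgets() RETURNS bigint
LANGUAGE plpgsql AS $$
BEGIN
    RETURN (SELECT count(*) FROM widgets);
END;
$$;

SELECT count_widgets();

With track = 'all', the SELECT inside the function is tracked alongside the
top-level call, so its Query tree needs a fingerprint just like any other.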

 Either way, I think we'd be a lot better advised to define a single
 hook post_parse_analysis_hook and make the core code responsible
 for calling it at the appropriate places, rather than supposing that
 the contrib module knows exactly which core functions ought to be
 the places to do it.

I agree.

Since you haven't mentioned the removal of parse-analysis Const
location alterations, I take it that you do not object to that, which
is something I'm glad of.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-03-22 Thread Tom Lane
Marco Nenciarini marco.nenciar...@2ndquadrant.it writes:
 Attached is v5, which should address all the remaining issues.

I started to look at this patch a bit.  I'm quite confused by the fact
that some, but not all, of the possible FK action types now come in an
EACH variant.  This makes no sense at all to me.  ISTM that EACH is a
property of the FK constraint as a whole, that is that it says the
constraint is from array elements on the referencing side to column
values on the referenced side, rather than the normal case of column
values to column values.  Why would the possible actions be affected,
and why only these?  The patch documentation is extremely unclear about
this.  It's even less clear about what the semantics are in multi-key
cases.  Right offhand I would say that multi-key cases are nonsensical
and should be forbidden outright, because there is no way to figure out
which collections of elements of different arrays should be considered
to be a referencing item.
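To spell the ambiguity out (tables are hypothetical; no EACH syntax is shown,
since the syntax is exactly what's in question):

CREATE TABLE pk_side (a int, b int, PRIMARY KEY (a, b));
CREATE TABLE fk_side (a_elems int[], b_elems int[]);

For a row with a_elems = '{1,2}' and b_elems = '{3,4}', which element
combinations would be the referencing items: (1,3) and (2,4), or (1,4) and
(2,3), or every cross pairing?  Nothing in the data determines it.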

Could we see a specification of what the referencing semantics are
intended to be, please?

regards, tom lane



Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-03-22 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 Since you haven't mentioned the removal of parse-analysis Const
 location alterations, I take it that you do not object to that, which
 is something I'm glad of.

I remain un-thrilled about it, but apparently nobody else cares, so
I'll yield the point.  (I do however object to your removal of the
cast location value from the param_coerce_hook signature.  The fact
that one current user of the hook won't need it anymore doesn't mean
no others would.  Consider throwing a "can't coerce" error from within
the hook function, for instance.)

Will you adjust the patch for the other issues?

regards, tom lane



Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-03-22 Thread Peter Geoghegan
On 22 March 2012 19:07, Tom Lane t...@sss.pgh.pa.us wrote:
 Will you adjust the patch for the other issues?

Sure. I'll take a look at it now.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



[HACKERS] COPY / extend ExclusiveLock

2012-03-22 Thread Stephen Frost
Greetings,

  I've recently become a bit annoyed and frustrated looking at this in
  top:

23296 postgres  20   0 3341m 304m 299m S   12  0.9   1:50.02 postgres: sfrost gis [local] COPY waiting
24362 postgres  20   0 3353m 298m 285m D   12  0.9   1:24.99 postgres: sfrost gis [local] COPY
24429 postgres  20   0 3340m 251m 247m S   11  0.8   1:13.79 postgres: sfrost gis [local] COPY waiting
24138 postgres  20   0 3341m 249m 244m S   10  0.8   1:28.09 postgres: sfrost gis [local] COPY waiting
24153 postgres  20   0 3340m 246m 241m S   10  0.8   1:24.44 postgres: sfrost gis [local] COPY waiting
24166 postgres  20   0 3341m 318m 313m S   10  1.0   1:40.52 postgres: sfrost gis [local] COPY waiting
24271 postgres  20   0 3340m 288m 283m S   10  0.9   1:34.12 postgres: sfrost gis [local] COPY waiting
24528 postgres  20   0 3341m 290m 285m S   10  0.9   1:21.23 postgres: sfrost gis [local] COPY waiting
24540 postgres  20   0 3340m 241m 236m S   10  0.7   1:15.91 postgres: sfrost gis [local] COPY waiting

  Has anyone been working on or considering how to improve the logic
  around doing extends on relations to perhaps make larger extensions
  for larger tables?  Or make larger extensions when tables are growing
  very quickly?  
  
  I haven't looked at the code, but I'm guessing we extend relations
  when they're full (that part makes sense..), but we extend them an
  itty-bitty bit at a time, which very quickly ends up being not fast
  enough for the processes that want to get data into the table.
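
  As a quick sanity check from another session, the blocked backends should
  show up in pg_locks as waiters on the relation extension lock (nothing here
  is specific to my tables):

  SELECT pid, mode, granted
  FROM pg_locks
  WHERE locktype = 'extend';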

  My gut feeling is that we could very easily and quickly improve this
  situation by having a way to make larger extensions, and then using
  that method when we detect that a table is growing very quickly.

Thoughts?

Thanks,

Stephen




Re: [HACKERS] checkpoint patches

2012-03-22 Thread Robert Haas
On Thu, Mar 22, 2012 at 9:07 AM, Robert Haas robertmh...@gmail.com wrote:
 However, looking at this a bit more, I think the
 checkpoint-sync-pause-v1 patch contains an obvious bug - the GUC is
 supposedly represented in seconds (though not marked with GUC_UNIT_S,
 oops) but what the sleep implements is actually *tenths of a second*.
 So I think I'd better rerun these tests with checkpoint_sync_pause=30
 so that I get a three-second delay rather than a
 three-tenths-of-a-second delay between each fsync.

OK, I did that, rerunning the test with just checkpoint-sync-pause-v1
and master, still with scale factor 1000 and 32 clients.  Tests were
run on the two branches in alternation, so checkpoint-sync-pause-v1,
then master, then checkpoint-sync-pause-v1, then master, etc.; with a
new initdb and data load each time.  TPS numbers:

checkpoint-sync-pause-v1: 2594.448538, 2600.231666, 2580.041376
master: 2466.31, 2450.752843, 2291.613305

90th percentile latency:

checkpoint-sync-pause-v1: 1487, 1488, 1481
master: 1493, 1519, 1507

That's about a 6% increase in throughput and about a 1.3% reduction in
90th-percentile latency.  On the other hand, the two timed checkpoints
on the master branch, on each test run, are exactly 15 minutes apart,
whereas with the patch, they're 15 minutes and 30-40 seconds apart,
which may account for some of the difference.  I'm going to do a bit
more testing to try to isolate that.

I'm attaching a possibly-interesting graph comparing the first
checkpoint-sync-pause-v1 run against the second master run; I chose
that particular combination because those are the runs with the median
tps results.  It's interesting how eerily similar the two runs are to
each other; they have spikes and dips in almost the same places, and
what looks like random variation is apparently not so random after
all.  The graph attached here is based on tps averaged over ten second
intervals.

Thoughts?  Comments?  Ideas?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
attachment: tps-10s.png


Re: [HACKERS] checkpoint patches

2012-03-22 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 TPS numbers:
 
 checkpoint-sync-pause-v1: 2594.448538, 2600.231666, 2580.041376
 master: 2466.31, 2450.752843, 2291.613305
 
 90th percentile latency:
 
 checkpoint-sync-pause-v1: 1487, 1488, 1481
 master: 1493, 1519, 1507

Well, those numbers just aren't that exciting. :/

Then again, I'm a bit surprised that the latencies aren't worse, perhaps
the previous improvements have made the checkpoint pain go away for the
most part..

 The graph attached here is based on tps averaged over ten second
 intervals.

The graph is definitely interesting..  Would it be possible for you to
produce a graph of latency over the time of the run?  I'd be looking for
spikes in latency near/around the 15m checkpoint marks and/or fsync
times.  If there isn't really one, then perhaps the checkpoint spreading
is doing a sufficient job.  Another option might be to intentionally
tune the checkpoint spreading down to force it to try and finish the
checkpoint faster and then see if the patches improve the latency in
that situation.  Perhaps, in whatever workloads there are which are
better suited to faster checkpoints (there must be some, right?
otherwise there wouldn't be a GUC for it..), these patches would help.

Thanks,

Stephen




Re: [HACKERS] Finer Extension dependencies

2012-03-22 Thread Alvaro Herrera

Excerpts from Dimitri Fontaine's message of Thu Mar 22 15:08:27 -0300 2012:
 Alvaro Herrera alvhe...@commandprompt.com writes:
  get_available_versions_for_extension seems to contain a bunch of
  commented-out lines ...
 
 Damn. Sorry about that.  Here's a cleaned-up version of the patch.

Hmm .. feature names should be globally unique, right?  If so I think
you're missing a UNIQUE index on the new catalog, covering just the
feature name.  You have a two-column index (extoid, featurename), so you
could have two different extensions providing the same feature.  Is this
okay?  If it is, then there is a bit of bogus code in
get_extension_feature_oids because it says it assumes that there is
only one row.  Now maybe you just want to return the first one found
and that's okay, but in that case the comment is bogus.

I noticed that you've left unspecified whether an extension should have
a provides entry for itself or not -- I mean the code adds one if it's
not there.  I'm not sure about this, maybe it's okay.  But this makes it
impossible for it to say "provides: extname-0.5" and have a dependent
extension fail if it only requires "extname", because that one will be
provided automatically whether extname's author wants it or not.
Again, maybe this is okay.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Another review of URI for libpq, v7 submission

2012-03-22 Thread Alex
Alex a...@commandprompt.com writes:

 Marko Kreen mark...@gmail.com writes:

 On Thu, Mar 15, 2012 at 11:29:31PM +0200, Alex wrote:
 https://github.com/a1exsh/postgres/commits/uri

 The point of the patch is to have one string with all connection options,
 in standard format, yes?  So why does not this work:

   db = PQconnectdb("postgres://localhost");

 ?

 Good catch.

 I've figured out that we'll need a bit more intrusive change than simply
 overriding the expand_dbname check in conninfo_array_parse (like the
 current version does) to support URIs in all PQconnect* variants.

Okay, at last here's v9, rebased against current master branch.

What's new in this version is parse_connection_string function to be
called instead of conninfo_parse.  It will check for possible URI in the
connection string and dispatch to conninfo_uri_parse if URI was found,
otherwise it will fall back to conninfo_parse.

In two places in code we don't want to parse the string unless it is
deemed a real connection string and not plain dbname.  The new function
recognized_connection_string is added to check this.

Thanks Marko for spotting the problem and thanks Tom for committing the
refactoring patch!

--
Alex



attachment: libpq-uri-v9.patch



Re: [HACKERS] Finer Extension dependencies

2012-03-22 Thread Dimitri Fontaine
Alvaro Herrera alvhe...@commandprompt.com writes:
 Hmm .. feature names should be globally unique, right?  If so I think
 you're missing an UNIQUE index on the new catalog, covering just the
 feature name.  You have a two column index (extoid, featurename), so you
 could have two different extensions providing the same feature.  Is this
 okay?  If it is, then there is a bit of a bogus code in

You're right, this looks like something I forgot to go back to, the
unique index should be global on the feature's name.  I will go make
that happen (tomorrow).

 I noticed that you've left unspecified whether an extension should have
 a provides entry for itself or not -- I mean the code adds one if it's
 not there.  I'm not sure about this, maybe it's okay.  But this makes it
 impossible for it to say provides: extname-0.5 and have a dependent
 extension fail if it only requires extname, because that one will be
 provided automatically whether extname's author wants it or not.
 Again, maybe this is okay.

I think it is ok, at least that's how I intended the feature to work.
The use case I want to allow is for the other extension's author to say
its extension depends on the extname-0.5 feature. In fact I guess that
you would rather provide feature names, not version, so as not to have
to look up a feature matrix each time.

For the use case you're concerned with, I think that if an extension's
upgrade is not compatible with the previous version, the extension name
itself should be changed (extname2 or extname-1.0 or whatever).

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



[HACKERS] Uppercase tab completion keywords in psql?

2012-03-22 Thread Bruce Momjian
Postgres 9.2 has been modified so psql no longer uppercases SQL keywords
when using tab completion, by this commit:

commit 69f4f1c3576abc535871c6cfa95539e32a36120f
Author: Peter Eisentraut pete...@gmx.net
Date:   Wed Feb 1 20:16:40 2012 +0200

psql: Case preserving completion of SQL key words

Instead of always completing SQL key words in upper case, look at 
the word being completed and match the case.

reviewed by Fujii Masao

For example, in 9.1:

test=> sel<TAB>

becomes

test=> SELECT

However, in 9.2, this will produce:

test=> select

FYI, fortunately this will still complete as upper case:

test=> Sel<TAB>

Robert Haas and I are disappointed by this change.  I liked the fact
that I could post nice-looking SQL queries without having to use my
capslock key (which I use as a second control key).  Any chance of
reverting this change?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] checkpoint patches

2012-03-22 Thread Robert Haas
On Thu, Mar 22, 2012 at 3:45 PM, Stephen Frost sfr...@snowman.net wrote:
 Well, those numbers just aren't that exciting. :/

Agreed.  There's clearly an effect, but on this test it's not very big.

 Then again, I'm a bit surprised that the latencies aren't worse, perhaps
 the previous improvements have made the checkpoint pain go away for the
 most part..

I think it's pretty obvious from the graph that the checkpoint pain
hasn't gone away; it's just that this particular approach doesn't do
anything to address the pain associated with this particular test.

 The graph is definitely interesting..  Would it be possible for you to
 produce a graph of latency over the time of the run?  I'd be looking for
 spikes in latency near/around the 15m checkpoint marks and/or fsync
 times.  If there isn't really one, then perhaps the checkpoint spreading
 is doing a sufficient job.  Another option might be to intentionally
 tune the checkpoint spreading down to force it to try and finish the
 checkpoint faster and then see if the patches improve the latency in
 that situation.  Perhaps, in whatever workloads there are which are
 better suited to faster checkpoints (there must be some, right?
 otherwise there wouldn't be a GUC for it..), these patches would help.

See attached.  It looks a whole lot like the tps graph, if you look at
the tps graph upside down with your 1/x glasses on.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
attachment: latency-10s.png


Re: [HACKERS] Uppercase tab completion keywords in psql?

2012-03-22 Thread Andres Freund
On Thursday, March 22, 2012 10:49:55 PM Bruce Momjian wrote:
 Postgres 9.2 has been modified so psql no longer uppercases SQL keywords
 when using tab completion, by this commit:
 
   commit 69f4f1c3576abc535871c6cfa95539e32a36120f
   Author: Peter Eisentraut pete...@gmx.net
   Date:   Wed Feb 1 20:16:40 2012 +0200
 
   psql: Case preserving completion of SQL key words

 Robert Haas and I are disappointed by this change.  I liked the fact
 that I could post nice-looking SQL queries without having to use my
 capslock key (which I use as a second control key).  Any chance of
 reverting this change?
Seconded.

Andres

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Uppercase tab completion keywords in psql?

2012-03-22 Thread Andrew Dunstan



On 03/22/2012 05:49 PM, Bruce Momjian wrote:


 Robert Haas and I are disappointed by this change.  I liked the fact
 that I could post nice-looking SQL queries without having to use my
 capslock key (which I use as a second control key).  Any chance of
 reverting this change?

Should it be governed by a setting?

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Speed dblink using alternate libpq tuple storage

2012-03-22 Thread Tom Lane
Marko Kreen mark...@gmail.com writes:
 [ patches against libpq and dblink ]

I think this patch needs a lot of work.

AFAICT it breaks async processing entirely, because many changes have been
made that fail to distinguish "insufficient data available as yet" from
hard error.  As an example, this code at the beginning of
getAnotherTuple:
  
/* Get the field count and make sure it's what we expect */
    if (pqGetInt(&tupnfields, 2, conn))
!   return EOF;

is considering three cases: it got a 2-byte integer (and can continue on),
or there aren't yet 2 more bytes available in the buffer, in which case it
should return EOF without doing anything, or pqGetInt detected a hard
error and updated the connection error state accordingly, in which case
again there is nothing to do except return EOF.  In the patched code we
have:

/* Get the field count and make sure it's what we expect */
    if (pqGetInt(&tupnfields, 2, conn))
!   {
!   /* Whole the message must be loaded on the buffer here */
!   errmsg = libpq_gettext("protocol error\n");
!   goto error_and_forward;
!   }

which handles neither the second nor third case correctly: it thinks that
"data not here yet" is a hard error, and then makes sure it is an error by
destroying the parsing state :-(.  And if in fact pqGetInt did log an
error, that possibly-useful error message is overwritten with an entirely
useless "protocol error" text.

I don't think the error return cases for the row processor have been
thought out too well either.  The row processor is not in charge of what
happens to the PGresult, and it certainly has no business telling libpq to
just exit immediately from the topmost libpq function.  If we do that
we'll probably lose sync with the data stream and be unable to recover use
of the connection at all.  Also, do we need to consider any error cases
for the row processor other than out-of-memory?  If so it might be a good
idea for it to have some ability to store a custom error message into the
PGconn, which it cannot do given the current function API.

In the same vein, I am fairly uncomfortable with the blithe assertion that
a row processor can safely longjmp out of libpq.  This was never foreseen
in the original library coding and there are any number of places that
that might break, now or in the future.  Do we really need to allow it?
If we do, it would be a good idea to decorate the libpq functions that are
now expected to possibly longjmp with comments saying so.  Tracing all the
potential call paths that might be aborted by a longjmp is an essential
activity anyway.
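
As a contrived illustration of the general hazard (this is not libpq code,
just a toy example of why longjmp'ing out of a library callback is risky:
any cleanup the library expected to run afterwards is silently skipped):

    #include <setjmp.h>
    #include <stdlib.h>

    static jmp_buf env;

    static void
    row_callback(void)
    {
        longjmp(env, 1);        /* bails out of the library mid-operation */
    }

    static void
    library_work(void (*cb) (void))
    {
        char   *buf = malloc(1024);     /* library-owned state */

        cb();                   /* if cb longjmps, the cleanup below never runs */
        free(buf);
    }

    int
    main(void)
    {
        if (setjmp(env) == 0)
            library_work(row_callback);
        return 0;               /* buf leaked; libpq could likewise lose protocol sync */
    }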

Another design deficiency is PQskipResult().  This is badly designed for
async operation because once you call it, it will absolutely not give back
control until it's read the whole query result.  (It'd be better to have
it set some kind of flag that makes future library calls discard data
until the query result end is reached.)  Something else that's not very
well-designed about it is that it might throw away more than just incoming
tuples.  As an example, suppose that the incoming data at the time you
call it consists of half a dozen rows followed by an ErrorResponse.  The
ErrorResponse will be eaten and discarded, leaving the application no clue
why its transaction has been aborted, or perhaps even the entire session
cancelled.  What we probably want here is just to transiently install a
row processor that discards all incoming data, but the application is
still expected to eventually fetch a PGresult that will tell it whether
the server side of the query completed or not.
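
For comparison, an application can already drain pending results without
blocking, and without losing an ErrorResponse, using only the existing
async API; a rough sketch (not part of the patch under review):

    #include <libpq-fe.h>

    /*
     * Returns 1 once every remaining PGresult has been consumed, 0 if the
     * caller should wait for the socket to become readable and call again,
     * -1 on connection trouble.
     */
    static int
    drain_results_nonblocking(PGconn *conn)
    {
        for (;;)
        {
            PGresult   *res;

            if (!PQconsumeInput(conn))
                return -1;      /* connection failure */
            if (PQisBusy(conn))
                return 0;       /* not enough data yet; don't block */
            res = PQgetResult(conn);
            if (res == NULL)
                return 1;       /* no more results pending */
            /* inspect PQresultStatus(res) if desired, then discard */
            PQclear(res);
        }
    }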

In the dblink patch, given that you have to worry about elogs coming out
of BuildTupleFromCStrings and tuplestore_puttuple anyway, it's not clear
what is the benefit of using malloc rather than palloc to manage the
intermediate buffers in storeHandler --- what that seems to mostly
accomplish is increase the risk of permanent memory leaks.  I don't see
much value in per-column buffers either; it'd likely be cheaper to just
palloc one workspace that's big enough for all the column strings
together.  And speaking of leaks, doesn't storeHandler leak the
constructed tuple on each call, not to mention whatever might be leaked by
the called datatype input functions?

It also seems to me that the dblink patch breaks the case formerly handled
in materializeResult() of a PGRES_COMMAND_OK rather than PGRES_TUPLES_OK
result.  The COMMAND case is supposed to be converted to a single-column
text result, and I sure don't see where that's happening now.

BTW, am I right in thinking that some of the hunks in this patch are meant
to fix a pre-existing bug of failing to report the correct connection
name?  If so, it'd likely be better to split those out and submit as a
separate patch, instead of conflating them with a feature addition.

Lastly, as to the pqgetrow patch, the lack of any demonstrated test case
for these functions makes me uncomfortable 

Re: [HACKERS] Publish checkpoint timing and sync files summary data to pg_stat_bgwriter

2012-03-22 Thread Peter Geoghegan
On 23 January 2012 02:08, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Jan 21, 2012 at 6:32 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 I'm finding the backend_writes column pretty unfortunate.  The only
 use I know of for it is to determine if the bgwriter is lagging
 behind.  Yet it doesn't serve even this purpose because it lumps
 together the backend writes due to lagging background writes, and the
 backend writes by design due to the use buffer access strategy
 during bulk inserts.

 +1 for separating those.

I decided to have a go myself. Attached patch breaks out strategy
allocations in pg_stat_bgwriter, but not strategy writes. My thinking
is that this may serve to approximate non-BAS_NORMAL writes, with the
considerable advantage of not requiring that, when a backend locally
syncs (yes, syncs, not writes) a buffer, I work backwards from the
block to figure out which strategy object references the buffer. The
bookkeeping that that would likely entail seems to make it infeasible.

Incidentally, it seems Postgres doesn't currently record backend
writes when the buffer doesn't go on to be sync'd. That seems
problematic to me, or at the very least a misrepresentation, since
temporary tables will be written out by the backend for example. Not
sure if it's worth fixing, though I've added a comment to that effect
at the site where backend_writes is bumped.

I have corrected the md.c bug. This previously would have prevented
the sync_files (number of relation segments synced) value from being
valid in non-log_checkpoints configurations.

I'm not currently confident that the strategy_alloc field is a very
useful proxy for a strategy_backend_writes field. I think that rather
than bumping the strategy allocation count analogously to the way the
overall count is bumped (within StrategyGetBuffer()), I should have
bumped earlier within BufferAlloc() so that it'd count if the buffer
was requested with non-BAS_NORMAL strategy but was found in
shared_buffers (so control never got as far as StrategyGetBuffer() ).
That might make the value line up more closely with the would-be value
of a strategy_backend_writes column. What do you think?
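
To illustrate the distinction with a toy model (names invented, this is
not the patch): counting at request time includes strategy requests that
are satisfied from shared_buffers, counting at miss time does not.

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t strat_requests;  /* analogue of bumping in BufferAlloc() */
    static uint64_t strat_misses;    /* analogue of bumping in StrategyGetBuffer() */

    static void
    buffer_request(bool non_default_strategy, bool found_in_shared_buffers)
    {
        if (non_default_strategy)
            strat_requests++;        /* counted even on a shared_buffers hit */
        if (non_default_strategy && !found_in_shared_buffers)
            strat_misses++;          /* counted only when we must allocate a buffer */
    }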

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
new file mode 100644
index 840e54a..f714cb8
*** a/doc/src/sgml/monitoring.sgml
--- b/doc/src/sgml/monitoring.sgml
*** SELECT pg_stat_get_backend_pid(s.backend
*** 806,811 
--- 806,818 
   the <function>pg_stat_get_buf_alloc</function> function.</entry>
   </row>
   <row>
+   <entry>buffers_strat_alloc</entry>
+   <entry><type>bigint</type></entry>
+   <entry>Number of buffers allocated with non-default buffer access strategy.
+  This value can also be returned by directly calling
+  the <function>pg_stat_get_buf_strat_alloc</function> function.</entry>
+  </row>
+  <row>
<entry>stats_reset</entry>
<entry><type>bigint</type></entry>
<entry>The last time these statistics were reset.
*** SELECT pg_stat_get_backend_pid(s.backend
*** 1703,1708 
--- 1710,1741 
</entry>
   </row>

+  <row>
+   <entry><literal><function>pg_stat_get_bgwriter_write_time()</function></literal></entry>
+   <entry><type>bigint</type></entry>
+   <entry>
+    Total amount of time that has been spent in the part of checkpoint
+    processing where files are written to disk, in milliseconds.
+   </entry>
+  </row>
+ 
+  <row>
+   <entry><literal><function>pg_stat_get_bgwriter_sync_time()</function></literal></entry>
+   <entry><type>bigint</type></entry>
+   <entry>
+    Total amount of time that has been spent in the part of checkpoint
+    processing where files are synchronized to disk, in milliseconds.
+   </entry>
+  </row>
+ 
+  <row>
+   <entry><literal><function>pg_stat_get_bgwriter_sync_files()</function></literal></entry>
+   <entry><type>bigint</type></entry>
+   <entry>
+    Total number of files that have been synchronized to disk during
+    checkpoint processing.
+   </entry>
+  </row>

   <row>
<entry><literal><function>pg_stat_get_wal_senders()</function></literal></entry>
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
new file mode 100644
index ff7f521..e481be3
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
*** LogCheckpointStart(int flags, bool resta
*** 7492,7501 
  }
  
  /*
!  * Log end of a checkpoint.
   */
  static void
! LogCheckpointEnd(bool restartpoint)
  {
  	long		write_secs,
  sync_secs,
--- 7492,7501 
  }
  
  /*
!  * Time and potentially log the end of a checkpoint.
   */
  static void
! TimeCheckpointEnd(bool restartpoint)
  {
  	long		write_secs,
  sync_secs,
*** LogCheckpointEnd(bool restartpoint)
*** 7511,7519 
  
  	CheckpointStats.ckpt_end_t = GetCurrentTimestamp();
  
- 	

Re: [HACKERS] checkpoint patches

2012-03-22 Thread Jeff Janes
On Thu, Mar 22, 2012 at 6:07 AM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Mar 21, 2012 at 3:38 PM, Robert Haas robertmh...@gmail.com wrote:
 It looks like I neglected to record that information for the last set
 of runs.  But I can try another set of runs and gather that
 information.

 OK.  On further review, my previous test script contained several
 bugs.  So you should probably ignore the previous set of results.  I
 did a new set of runs, and this time bumped up checkpoint_segments a
 bit more, in the hopes of giving the cache a bit more time to fill up
 with dirty data between checkpoints.  Here's the full settings I used:

 shared_buffers = 8GB
 maintenance_work_mem = 1GB
 synchronous_commit = off
 checkpoint_timeout = 15min
 checkpoint_completion_target = 0.9
 wal_writer_delay = 20ms
 log_line_prefix = '%t [%p] '
 log_checkpoints='on'
 checkpoint_segments='1000'
 checkpoint_sync_pause='3'  # for the checkpoint-sync-pause-v1 branch only

 With that change, each of the 6 tests (3 per branch) involved exactly
 2 checkpoints, all triggered by time rather than by xlog.

Are you sure this is the case?  If the server was restarted right
before the pgbench -T 1800, then there should be 15 minutes of no
checkpoint, followed by about 15*0.9 minutes + some sync time of one
checkpoint, and maybe just a bit of the starting of another
checkpoint.  If the server was not bounced between pgbench -i and
pgbench -T 1800, then the first checkpoint would start at some
unpredictable time into the benchmark run.

Cheers,

Jeff

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] checkpoint patches

2012-03-22 Thread Robert Haas
On Thu, Mar 22, 2012 at 7:03 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Thu, Mar 22, 2012 at 6:07 AM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Mar 21, 2012 at 3:38 PM, Robert Haas robertmh...@gmail.com wrote:
 It looks like I neglected to record that information for the last set
 of runs.  But I can try another set of runs and gather that
 information.

 OK.  On further review, my previous test script contained several
 bugs.  So you should probably ignore the previous set of results.  I
 did a new set of runs, and this time bumped up checkpoint_segments a
 bit more, in the hopes of giving the cache a bit more time to fill up
 with dirty data between checkpoints.  Here's the full settings I used:

 shared_buffers = 8GB
 maintenance_work_mem = 1GB
 synchronous_commit = off
 checkpoint_timeout = 15min
 checkpoint_completion_target = 0.9
 wal_writer_delay = 20ms
 log_line_prefix = '%t [%p] '
 log_checkpoints='on'
 checkpoint_segments='1000'
 checkpoint_sync_pause='3'  # for the checkpoint-sync-pause-v1 branch only

 With that change, each of the 6 tests (3 per branch) involved exactly
 2 checkpoints, all triggered by time rather than by xlog.

 Are you sure this is the case?  If the server was restarted right
 before the pgbench -T 1800, then there should be 15 minutes of no
 checkpoint, followed by about 15*0.9 minutes + some sync time of one
 checkpoint, and maybe just a bit of the starting of another
 checkpoint.  If the server was not bounced between pgbench -i and
 pgbench -T 1800, then the first checkpoint would start at some
 unpredictable time into the benchmark run.

I didn't stop and restart the server after pgbench -i; it fires off
the test pgbench right after initializing.  That seems to provide
enough padding to ensure two checkpoints within the actual run.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Standbys, txid_current_snapshot, wraparound

2012-03-22 Thread Daniel Farina
Some time ago I reported bug 6291[0], which reported a Xid wraparound,
both as reported in pg_controldata and by txid_current_snapshot.
Unfortunately, nobody could reproduce it.

Today, the same system of ours passed the wraparound mark
successfully, incrementing the epoch.  However, two
standbys have not done the same: they have wrapped to a low txid.  At
this time, pg_controldata does report the correct epoch, as I read it,
unlike the original case.

I have not yet tried to reproduce this in a minimal way, but I wanted
to relate this information as soon as possible.

These systems are 9.0.6, on Ubuntu 10.04 LTS, amd64.

[0]: http://archives.postgresql.org/pgsql-bugs/2011-11/msg00094.php

-- 
fdr

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] checkpoint patches

2012-03-22 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 On Thu, Mar 22, 2012 at 3:45 PM, Stephen Frost sfr...@snowman.net wrote:
  Well, those numbers just aren't that exciting. :/
 
 Agreed.  There's clearly an effect, but on this test it's not very big.

Ok, perhaps that was because of how you were analyzing it using the 90th
percentile...?

 See attached.  It looks a whole lot like the tps graph, if you look at
 the tps graph upside with your 1/x glasses on.

Well, what I'm looking at with this graph are the spikes on master up to
near 100ms latency (as averaged across 10 seconds) while
checkpoint-sync-pause-v1 stays down closer to 60 and never above 70ms.
That makes this patch look much more interesting, in my view..

I'm assuming there's some anomaly or inconsistency with the last few
seconds, where the latency drops for master and spikes with the patch.
If there isn't, then it'd be good to have a longer run to figure out if
there really is an issue with the checkpoint patch still having major
spikes.

Thanks,

Stephen




[HACKERS] Reporting WAL file containing checkpoint's REDO record in pg_controldata's result

2012-03-22 Thread Fujii Masao
Hi,

I'd like to propose to change pg_controldata so that it reports the name
of WAL file containing the latest checkpoint's REDO record, as follows:

$ pg_controldata $PGDATA
...
Latest checkpoint's REDO location:0/16D6ACC (file
000000010000000000000001)
Latest checkpoint's TimeLineID:   1
...

This greatly simplifies calculating the archive file cutoff point,
because the reported WAL file is the cutoff point itself. If the file name is
not reported, we have to calculate the cutoff point from the reported
location and timeline, which is a complicated calculation. We can use the
pg_xlogfile_name function to calculate that, but it cannot be executed on
the standby. Another problem is that pg_xlogfile_name always uses the
current timeline for the calculation, so if the reported timeline is not
the same as the current one, pg_xlogfile_name cannot return the correct WAL
file name. Making pg_controldata report that WAL file name gets rid of
such complexity.
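
For illustration, the by-hand calculation the patch makes unnecessary
looks roughly like this (a sketch assuming pre-9.3 WAL file naming and the
default 16MB segment size; values taken from the example output above):

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        uint32_t tli = 1;                 /* Latest checkpoint's TimeLineID */
        uint32_t xlogid = 0x0;            /* high half of 0/16D6ACC */
        uint32_t xrecoff = 0x16D6ACC;     /* low half of 0/16D6ACC */
        uint32_t seg_size = 16 * 1024 * 1024;

        /* prints 000000010000000000000001 */
        printf("%08X%08X%08X\n", tli, xlogid, xrecoff / seg_size);
        return 0;
    }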

You may think that archive_cleanup_command is usable for that purpose.
That's true. But it's not simply usable for the case where there is more
than one standby server. In that case, the archive file cutoff point needs
to be calculated from each standby's REDO location and timeline.

Attached patch changes pg_controldata as above. Thoughts?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
*** a/src/bin/pg_controldata/pg_controldata.c
--- b/src/bin/pg_controldata/pg_controldata.c
***
*** 24,29 
--- 24,30 
  #include <fcntl.h>
  
  #include "access/xlog.h"
+ #include "access/xlog_internal.h"
  #include "catalog/pg_control.h"
  
  
***
*** 101,106  main(int argc, char *argv[])
--- 102,110 
  	char		sysident_str[32];
  	const char *strftime_fmt = "%c";
  	const char *progname;
+ 	uint32	log;
+ 	uint32	seg;
+ 	char		xlogfilename[MAXFNAMELEN];
  
  	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_controldata"));
  
***
*** 177,182  main(int argc, char *argv[])
--- 181,193 
  			 localtime(&time_tmp));
  
  	/*
+ 	 * Calculate the WAL file name containing the latest checkpoint's REDO
+ 	 * record.
+ 	 */
+ 	XLByteToSeg(ControlFile.checkPointCopy.redo, log, seg);
+ 	XLogFileName(xlogfilename, ControlFile.checkPointCopy.ThisTimeLineID, log, seg);
+ 
+ 	/*
  	 * Format system_identifier separately to keep platform-dependent format
  	 * code out of the translatable message string.
  	 */
***
*** 204,212  main(int argc, char *argv[])
  	printf(_("Prior checkpoint location:%X/%X\n"),
  		   ControlFile.prevCheckPoint.xlogid,
  		   ControlFile.prevCheckPoint.xrecoff);
! 	printf(_("Latest checkpoint's REDO location:%X/%X\n"),
  		   ControlFile.checkPointCopy.redo.xlogid,
! 		   ControlFile.checkPointCopy.redo.xrecoff);
  	printf(_("Latest checkpoint's TimeLineID:   %u\n"),
  		   ControlFile.checkPointCopy.ThisTimeLineID);
  	printf(_("Latest checkpoint's full_page_writes: %s\n"),
--- 215,224 
  	printf(_("Prior checkpoint location:%X/%X\n"),
  		   ControlFile.prevCheckPoint.xlogid,
  		   ControlFile.prevCheckPoint.xrecoff);
! 	printf(_("Latest checkpoint's REDO location:%X/%X (file %s)\n"),
  		   ControlFile.checkPointCopy.redo.xlogid,
! 		   ControlFile.checkPointCopy.redo.xrecoff,
! 		   xlogfilename);
  	printf(_("Latest checkpoint's TimeLineID:   %u\n"),
  		   ControlFile.checkPointCopy.ThisTimeLineID);
  	printf(_("Latest checkpoint's full_page_writes: %s\n"),

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reporting WAL file containing checkpoint's REDO record in pg_controldata's result

2012-03-22 Thread Jaime Casanova
On Thu, Mar 22, 2012 at 11:06 PM, Fujii Masao masao.fu...@gmail.com wrote:

 We can use
 pg_xlogfile_name function to calculate that, but it cannot be executed in
 the standby. Another problem is that pg_xlogfile_name always uses
 current timeline for the calculation, so if the reported timeline is not
 the same as current one, pg_xlogfile_name cannot return the correct WAL
 file name. Making pg_controldata report that WAL file name gets rid of
 such a complexity.


I would think that pg_xlogfile_name() is not allowed on the standby
because ThisTimeLineID is not very well defined during recovery, but if you
extend pg_xlogfile_name() to also receive a timeline ID, as you
suggested in [1], then that version of the function could be allowed on
the standby.
Or is there something else I'm missing?

Is that enough for you to solve your problem?

[1] 
http://archives.postgresql.org/message-id/cahgqgwhwqjgeksmp2oteru8p0sj6x7ypqyh5qqeqcbqsxaa...@mail.gmail.com

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers