Re: [HACKERS] A couple of gripes about the gettext plurals patch

2009-05-28 Thread Peter Eisentraut
On Thursday 28 May 2009 00:54:32 Tom Lane wrote:
 To wit, the current
 coding fails to respect the gettext domain when working with pluralized
 messages.

The ngettext() calls use the default textdomain that main.c sets up.  The PLs 
use dngettext().  Is that not correct?
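
For reference, a minimal sketch of the two call styles in question (the
domain names here are illustrative, not taken from this thread):

#include <libintl.h>
#include <stdio.h>

static void
report_files(unsigned long n)
{
    /* backend style: ngettext() consults the default domain that was
     * installed once at startup with textdomain() in main.c */
    printf(ngettext("%lu file", "%lu files", n), n);

    /* PL style: a PL's messages live in their own catalog, so each
     * lookup must name that domain explicitly via dngettext() */
    printf(dngettext("plpgsql", "%lu row", "%lu rows", n), n);
}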

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] problem with plural-forms

2009-05-28 Thread Peter Eisentraut
On Wednesday 27 May 2009 23:02:19 Zdenek Kotala wrote:
 Peter Eisentraut wrote on Tue 26. 05. 2009 at 13:39 +0300:
  Of course the concrete example that you show doesn't actually take
  advantage of this, so if it is important to you, please send a patch to
  fix it.

 Fix attached. I found only two problems, both in psql. I did not fix the .po
 files. Is it necessary to fix them manually, or do you regenerate the files?

fixed

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] survey of WAL blocksize changes

2009-05-28 Thread Simon Riggs

On Wed, 2009-05-27 at 21:09 -0400, Tom Lane wrote:

 So, if we assume that these numbers are real and not artifacts, it seems
 we have to postulate at least four distinct block-size-dependent
 performance effects:

Two performance effects would be sufficient to explain the results.

* Optimal performance for small WAL changes is reached at around 4kB.
Anything smaller or larger lessens the benefit from this.

* Optimal performance for full page writes is reached at a WAL block
size 2-4 times larger than the db block size, corresponding to the sizes
of WAL records generated by the test.

The two effects have a tail-off on either side, giving the four effects
you spoke of.

 It's not too hard to believe any of those individually, and even to
 think of plausible mechanisms.  But it seems a bit unlikely that effects
 3 and 4 would exist but consistently cross over right at our traditional
 choice of block size.

I could believe two, but we would need some careful instrumentation to
reveal at what times we get the benefit. We will never achieve
improvements if we only look at figures averaged over long periods. 

We should be trying to improve specific parts of the checkpoint cycle,
which I would break down like this:
* ramp-up
* checkpoint spike
* post-checkpoint trough
* normal running
There is very clear modal behaviour showing in the tests and we should
look at the effects of patches in each case. I could well believe that
we make a gain at one stage and lose on another.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] survey of WAL blocksize changes

2009-05-28 Thread Simon Riggs

On Wed, 2009-05-27 at 17:51 -0700, Mark Wong wrote:
 On Wed, May 27, 2009 at 1:46 AM, Simon Riggs si...@2ndquadrant.com wrote:
 
  On Tue, 2009-05-26 at 19:51 -0700, Mark Wong wrote:
  It appears that for this workload using 16KB or 32KB blocks gets more
  than a 4% throughput improvement, but some of that could be noise.
 
  The baseline appears to have a significant jump in txn response time
  after 77 mins. I think you should rerun that. My guess would be that it
  will reduce any gains shown with higher settings.
 
 Oopsies.  I've rerun; now that there is no dip, the average
 throughput still didn't change much:
 
 BS notpm % Change from default
 -- - --
  1 14673 -5.1%
  2 15864 2.7%
  4 15774 2.1%
  8 15454 (default)
 16 16118 4.3%
 32 16051 3.9%
 64 14874 -3.8%
 
 Pointers to raw data:
 
 BS url
 -- ---
  1 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.1/
  2 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.2/
  4 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.4/
  8 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.8/report/
 16 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.16/
 32 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.32/
 64 
 http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.64/

Look at these graphs, in this order
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.16/report/rt_d.png
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.2/report/rt_d.png
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.wal.8/report/rt_d.png

BS 16 and 2 look very similar, though 16 clearly has a better curve.
BS=8 looks very strange in comparison. Something is still wrong, I suspect.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Markus Wanner

Hi,

Quoting Marc G. Fournier scra...@hub.org:

Please repost ...


Peter referred to this message here:

http://archives.postgresql.org/pgsql-hackers/2008-12/msg01879.php

However, please be cautious before applying such a patch.

Regards

Markus Wanner


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Markus Wanner

Hi,

Quoting Marc G. Fournier scra...@hub.org:
Actually, I have done that on at least one of the 8.x tags too, so  
if that is it, more than those two tags should be causing issues ...


Not *every* such issue causes problems. An example that's perfectly fine:

 cvs commit -m "first commit" fileA
 cvs tag TEST fileA
 cvs commit -m "second commit" fileB
 cvs tag TEST fileB

In such a situation, a converter can easily push down the tag "TEST"  
to the second commit, because fileA is the same (in that revision) as  
after the first commit. After all, the results in the RCS files are  
exactly the same as if you had done the following:


 cvs commit -m "first commit" fileA
 cvs commit -m "second commit" fileB
 cvs tag TEST fileA fileB

A converter can't possibly distinguish these two.

However, if both files get committed the second time, but only one  
gets tagged, it gets problematic (always assuming the commit actually  
changes the file):


 cvs commit -m "first commit" fileA
 cvs tag TEST fileA
 cvs commit -m "second commit" fileA fileB
 cvs tag TEST fileB

That's perfectly valid from CVS's point of view, unwanted for the  
Postgres repository and hard to handle for a converter to git (or  
mercurial, monotone, etc.), because the tag "TEST" is on the first  
commit for fileA but on the second for fileB, while both fileA and  
fileB differ between the commits.


Regards

Markus Wanner


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Markus Wanner

Hi,

Quoting Robert Haas robertmh...@gmail.com:

I think this is a semantic argument.  The problem isn't that we don't
understand how CVS behaves; it's that we find that behavior
undesirable,


I fully agree with that and find it undesirable as well.


aka broken.


Well, for some it's a feature, for others a bug ;-)

My point was that other converters have better support for such  
(undesirable, but still existent) tags that span multiple commits. If  
that's unwanted anyway, it seems cleaner to fix the CVS repository,  
yes. Has that been done now? Or is somebody going to do it? (See  
Peter's patch he just linked again upthread).



If we really care about having a tag that
contains the exact files that are tagged in CVS, we can create a
branch from one of the commits involved, and then apply a commit to
that branch that places it in the state that matches the contents of
the CVS tag.


Exactly (with the difference that with the branch you preserve the  
history of changes, while the variant with the tag does not).



AIUI, this is not very different from what you'd have to
do in Subversion, where a tag is a branch is a copy.


I think so, too. I'd even say that subversion doesn't really support  
tagging; instead it simulates tags with branches.


Regards

Markus Wanner

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Dimitri Fontaine
Hi all,

Seems the night has been providing lots of thoughts :)

Josh Berkus j...@agliodbs.com writes:
 Sure.  I think that having better search path management would be a
 wonderful thing; it would encourage people to use schema more in general.

 However, that doesn't mean that I think it should be part of the extensions
 design, or even a gating factor.

First, this thread allowed us to go from:
  "we don't know where to install extensions"
to:
  "we all agree that a specific pg_extension schema is a good idea, as
   soon as the user is free not to use it at extension install time."

So you see, search_path and extensions are related and thinking about
their relationship will help design the latter.

 search_path_suffix = 'pg_modules, information_schema'
 search_path = 'main,web,accounts'

 ... would mean that any object named would search in
 main,web,accounts,pg_modules,information_schema.  This would be one way to
 solve the issue of having extra schema for extensions or other utilities
 in applications.

That seems to be exactly what we're proposing with pre_ and post_
search_path components: don't change the current meaning of search_path,
just give DBAs better ways to manage it. And now that you're leaning
towards a search_path suffix, don't you want a prefix too?

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 The contents of a particular schema are more or less analogous to an
 application.  In most programming languages, an application informs
 the system of the libraries that it needs and the system goes off and
 loads the symbols in those libraries into the application's namespace.
  Using search path basically requires the user to tell the application
 where to find those symbols, which ISTM is exactly backwards.

Well, in fact, not so much, because the application is using SET to tell
the system where to search for needed objects. That's about the same as
your "loading libraries into the application's namespace" analogy.

Now, using PostgreSQL, you can pre-set the setting at the database and
role levels in order not to have to manage it explicitly in the
application code. That's only a DBA convenience though, for places where
the code and the database are not managed by the same teams (or at least
that's the way I think about it --- this, and database upgrades without
costly application rewrites).

 Also, it seems to me that we could create a system schema called
 something like pg_extension and make it empty.  Every extension could
 install in its own schema and then tell pg_extension to inherit
 that schema.  Then if you want to just get all the extensions, you can
 just set your search path to include pg_extension, and as new
 extensions are added or old ones are removed, you'll still have all
 the extensions without changing anything.

Then you can do the exact same thing with the public schema in the first
place, inheriting pg_extension if needed, and deprecate search_path
entirely. Don't forget that schemas are not there to solve
extension-management problems; they are a separate tool that has a great
overlap with extensions, because we tend to like having a schema (or
more) per extension.

Your proposal doesn't include any notion of search order within the tree
of available schemas, which means we're losing half of the search_path
features (the other half is restricting the searches, which you address).

I think I'm failing to understand where your proposal leads us the same
way you seem to be failing to follow mine...

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Dimitri Fontaine
Andrew Dunstan and...@dunslane.net writes:
 Dimitri Fontaine wrote:
   we all agree that a specific pg_extension schema is a good idea, as
soon as user is free not to use it at extension install time.

 I don't think we all agree on that at all. ;-)

Oops, my mistake: as a few people were taking that as implicit and as a
reasoning base point in their mails, I assumed we were past that question
already. Sorry to see that was too quick a conclusion... and thanks for
pointing out the absence of consensus!

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Dimitri Fontaine
Hi,

Tom Lane t...@sss.pgh.pa.us writes:
 Andrew Gierth and...@tao11.riddles.org.uk writes:
 Splitting up search_path is something I've been thinking about for a
 while (and threw out on IRC as a suggestion, which is where Dimitri
 got it); it was based on actual experience running an app that set the
 search path in the connection parameters in order to select which of
 several different schemas to use for part (not all) of the data.  When
 setting search_path this way, there is no way to set only part of it;
 the client-supplied value overrides everything.

 Obviously there are other possible solutions, but pretending there
 isn't a problem will get nowhere.

 I agree that some more flexibility in search_path seems reasonable,
 but what we've got at the moment is pretty handwavy.  Dimitri didn't
 suggest what the uses of the different parts of a three-part path
 would be, and also failed to say what the implications for the default
 creation namespace would be, as well as the existing special handling
 of pg_temp and pg_catalog.  That stuff all works together pretty
 closely; it'd be easy to end up making it less usable not more so.

What I have in mind is not to change the current semantics, but to give
users easier ways to manage things. Elsewhere in this thread we see
syntactic-sugar proposals and tools for adding schemas at the first or
last position of search_path.

It could be that some other ideas or better tools would be a much better
way to solve the problem at hand, but as you asked, here's a rough
sketch of how I'd use what I'm proposing:

The mydb database is used from several applications and roles, and hosts
10 application schemas and 3 extensions (ip4r, prefix and pgq,
say). Depending on the role, not all 10 schemas are in the search_path,
and we're using non-qualified object names whenever the application
developer thinks they're part of the database system (that includes
extensions).

What this currently means is that all role-specific search_path settings
must embed the extension schemas at the right place. When the prefix
extension is added, all of them have to be reviewed.

A better way to solve this is to have the database post_search_path (or
call it search_path_suffix) contain the extension schemas. Now the
roles are set up without a search_path_suffix, and it's easy to add an
extension living in its own schema. (We'll have to choose whether
defining a role-specific search_path_suffix overrides the
database-specific one, too.)

Having all extensions live in the pg_extension schema also solves the
problem in a much easier way, except for people who care about not
mixing it all within a single schema ("fourre-tout" is the French word
for a place where you put anything and everything).

As Josh is saying too, as soon as we have an SQL-level extension object
with dependencies, we'll be able to list all of a particular extension's
objects without needing to have them live in separate schemas:
 \df pgq.  -- list all functions in schema pgq
 \dt pgq.  -- list all tables in schema pgq
 \de pgq.  -- list all objects provided by extension pgq

Still, for extension upgrading, name collisions between extensions, or
some more cases I'm not thinking about now, pg_extension will not be all
you need. We already have schemas and search_path, and they're not
always pretty or fun to play with. Would prefix/suffix components help?

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Zdenek Kotala
I attached another cleanup patch which fixes the following warnings reported
by Sun Studio:

zic.c, line 1534: warning: const object should have initializer: tzh0
dynloader.c, line 7: warning: empty translation unit
pgstat.c, line 666: warning: const object should have initializer: all_zeroes
pgstat.c, line 799: warning: const object should have initializer: all_zeroes
pgstat.c, line 2552: warning: const object should have initializer: all_zeroes
preproc.c, line 39569: warning: pointer expression or its operand do not 
point to the same object yyerror_range, result is undefined and non-portable 
tab-complete.c, line 587: warning: assignment type mismatch:
pointer to function(pointer to const char, int, int) returning pointer 
to pointer to char = pointer to void


The following list is still unfixed; see my comments:

gram.c, line 28487: warning: pointer expression or its operand do not point 
to the same object yyerror_range, result is undefined and non-portable 

- This is a really strange warning. The code is strange
because it points to index -1 of the array, but I'm not a bison guru.
Maybe it is correct, but it would be good if somebody checked it.

../../../src/include/pg_config.h, line 782: warning: macro redefined: 
_FILE_OFFSET_BITS

   - This probably needs some extra love in configure.

regc_lex.c, line 401: warning: loop not entered at top
regc_lex.c, line 484: warning: loop not entered at top
regc_lex.c, line 578: warning: loop not entered at top
regc_lex.c, line 610: warning: loop not entered at top
regc_lex.c, line 870: warning: loop not entered at top
regc_lex.c, line 1073: warning: loop not entered at top
postgres.c, line 3845: warning: loop not entered at top

- An Assert in an unreachable place probably confuses the compiler.
I'm not sure it makes sense nowadays; most compilers should
optimize this anyway and the code is removed. I propose to remove
these asserts.


Zdenek
diff -Nrc pgsql.orig.8fc4f032818a/src/backend/port/dynloader/solaris.c pgsql.orig/src/backend/port/dynloader/solaris.c
*** pgsql.orig.8fc4f032818a/src/backend/port/dynloader/solaris.c	2009-05-28 11:09:24.874020865 +0200
--- pgsql.orig/src/backend/port/dynloader/solaris.c	2009-05-28 11:09:24.893688008 +0200
***
*** 5,7 
--- 5,11 
   *
   * see solaris.h
   */
+ 
+ /* compiler complains about an empty translation unit; keep it quiet */
+ extern int no_such_variable;
+ 
diff -Nrc pgsql.orig.8fc4f032818a/src/backend/postmaster/pgstat.c pgsql.orig/src/backend/postmaster/pgstat.c
*** pgsql.orig.8fc4f032818a/src/backend/postmaster/pgstat.c	2009-05-28 11:09:24.882883806 +0200
--- pgsql.orig/src/backend/postmaster/pgstat.c	2009-05-28 11:09:24.894106321 +0200
***
*** 662,669 
  void
  pgstat_report_stat(bool force)
  {
! 	/* we assume this inits to all zeroes: */
! 	static const PgStat_TableCounts all_zeroes;
  	static TimestampTz last_report = 0;
  
  	TimestampTz now;
--- 662,668 
  void
  pgstat_report_stat(bool force)
  {
! 	static const PgStat_TableCounts all_zeroes = {0,0,0,0,0,0,0,0,0,0,0};
  	static TimestampTz last_report = 0;
  
  	TimestampTz now;
***
*** 795,802 
  static void
  pgstat_send_funcstats(void)
  {
! 	/* we assume this inits to all zeroes: */
! 	static const PgStat_FunctionCounts all_zeroes;
  
  	PgStat_MsgFuncstat msg;
  	PgStat_BackendFunctionEntry *entry;
--- 794,800 
  static void
  pgstat_send_funcstats(void)
  {
! 	static const PgStat_FunctionCounts all_zeroes = {0,{0,0},{0,0}};
  
  	PgStat_MsgFuncstat msg;
  	PgStat_BackendFunctionEntry *entry;
***
*** 2548,2555 
  void
  pgstat_send_bgwriter(void)
  {
! 	/* We assume this initializes to zeroes */
! 	static const PgStat_MsgBgWriter all_zeroes;
  
  	/*
  	 * This function can be called even if nothing at all has happened. In
--- 2546,2552 
  void
  pgstat_send_bgwriter(void)
  {
! 	static const PgStat_MsgBgWriter all_zeroes = { {0,0},0,0,0,0,0,0,0};
  
  	/*
  	 * This function can be called even if nothing at all has happened. In
diff -Nrc pgsql.orig.8fc4f032818a/src/bin/psql/tab-complete.c pgsql.orig/src/bin/psql/tab-complete.c
*** pgsql.orig.8fc4f032818a/src/bin/psql/tab-complete.c	2009-05-28 11:09:24.890322342 +0200
--- pgsql.orig/src/bin/psql/tab-complete.c	2009-05-28 11:09:24.894581639 +0200
***
*** 557,563 
  
  
  /* Forward declaration of functions */
! static char **psql_completion(char *text, int start, int end);
  static char *create_command_generator(const char *text, int state);
  static char *drop_command_generator(const char *text, int state);
  static char *complete_from_query(const char *text, int state);
--- 557,563 
  
  
  /* Forward declaration of functions */
! static char **psql_completion(const char *text, int start, int end);
  static char *create_command_generator(const char *text, int state);
  static char *drop_command_generator(const char *text, int state);
  static char *complete_from_query(const char *text, int state);

Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Albe Laurenz
Kevin Grittner wrote:
 1. implementation of the paper's technique sans predicate locking,
 that would avoid more serialization anomalies but not all?
  
 I saw that as a step along the way to support for fully serializable
 transactions.  If covered by a migration path GUC which defaulted to
 current behavior, it would allow testing of all of the code except the
 predicate lock tracking (before the predicate locking code was
 created), in order to give proof of concept, check performance impact
 of that part of the code, etc.  I wasn't thinking that it would be a
 useful long-term option without the addition of the predicate locks.

I cannot prove it, but I have a feeling that the impact on
performance and concurrency will be considerably higher for an
implementation with predicate locks. Every WHERE-clause in a SELECT
will add one or more checks for each concurrent writer.

So while I think it is a good idea to approach full serializability
in a step-by-step fashion, it would be wise to consider the possibility
that we will not reach the goal (because implementing predicate locks
might be too difficult, or the result might perform too badly).

So any intermediate step should be useful in itself, unless we are
ready to rip out the whole thing again.

What would be the useful intermediate steps in this case?

From the user perspective, will an implementation of the paper's
approach as an intermediate step provide a useful and understandable
isolation level?

Yours,
Laurenz Albe

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] BUG #4822: xmlattributes encodes '&' twice

2009-05-28 Thread Itagaki Takahiro

Tom Lane t...@sss.pgh.pa.us wrote:

  =# SELECT xmlelement(name a, xmlattributes('./qa?a=1&b=2' as href), 'Q&A');
                   xmlelement
  ---------------------------------------------
   <a href="./qa?a=1&amp;amp;b=2">Q&amp;A</a>
 
  '&' in xmlattributes seems to be encoded twice.
 
 This was apparently broken by Peter's patch here:
 http://archives.postgresql.org/pgsql-committers/2009-04/msg00124.php
 
 We might have to add a bool flag
 to map_sql_value_to_xml_value() to enable or disable mapping of special
 characters.

Here is a patch to fix the bug. I added a parameter 'encode' to
map_sql_value_to_xml_value() and pass false for xml attributes.

char *
map_sql_value_to_xml_value(Datum value, Oid type, bool encode)
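
A sketch of the resulting call sites (only the function name comes from
the patch; the surrounding variable names are assumptions):

/* element content: let the function do the <, >, & escaping itself */
str = map_sql_value_to_xml_value(value, type, true);

/* attribute value: the attribute printer escapes once on its own, so
 * request the unescaped string here to avoid the double encoding */
str = map_sql_value_to_xml_value(value, type, false);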

Also, a special regression test is added for it:

SELECT xmlelement(name element,
  xmlattributes (1 as one, 'deuce' as two, '<>&"''' as three),
  'content', '<>&''');
                                  xmlelement
------------------------------------------------------------------------------
 <element one="1" two="deuce" three="&lt;&gt;&amp;&quot;'">content&lt;&gt;&amp;'</element>
(1 row)


Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



not-encode-xmlattributes.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Heikki Linnakangas

Guillaume Smet wrote:

On Tue, Apr 28, 2009 at 5:35 PM, Guillaume Smet
guillaume.s...@gmail.com wrote:

On Tue, Apr 28, 2009 at 5:22 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:

At a normal startup, the checkpoint record would be there as usual. And an
archive recovery starts at the location indicated by the backup label.

AFAICS calling RequestXLogSwitch() before CreateCheckPoint would be
equivalent to calling pg_switch_xlog() just before shutting down.

That's what I had in mind when writing the patch but I didn't know the
implications of this particular checkpoint.

So moving the call before CreateCheckPoint is what I really intended;
now that I have these implications in mind, I don't know why it would
be a problem to miss this checkpoint in the archived logs.


What do we decide about this problem?

Should we just call RequestXLogSwitch() before the creation of the
shutdown checkpoint or do we need a more complex patch? If so can
anybody explain the potential problem with this approach so we can
figure out how to fix it?


I've committed a patch to do the RequestXLogSwitch() before the shutdown 
checkpoint as discussed. It seems safe to me. (sorry for the delay, and 
thanks for the reminder)


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Michael Meskes
On Thu, May 28, 2009 at 11:11:20AM +0200, Zdenek Kotala wrote:
 I attached another cleanup patch which fixes following warnings reported
 by Sun Studio:
 ...
 preproc.c, line 39569: warning: pointer expression or its operand do not 
 point to the same object yyerror_range, result is undefined and non-portable 
 ...
 Following list is still unfixed plus see my comments:
 
 gram.c, line 28487: warning: pointer expression or its operand do not point 
 to the same object yyerror_range, result is undefined and non-portable 
 ...

These two should be the same, both coming from bison. Both files are
auto-generated, so it might be bison that has to be fixed to remove this
warning. Given that I didn't find any mention of preproc in your patch, I
suppose it just hit the wrong list though.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: mes...@jabber.org
Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Zdenek Kotala

Michael Meskes wrote on Thu 28. 05. 2009 at 13:33 +0200:
 On Thu, May 28, 2009 at 11:11:20AM +0200, Zdenek Kotala wrote:
  I attached another cleanup patch which fixes following warnings reported
  by Sun Studio:
  ...
  preproc.c, line 39569: warning: pointer expression or its operand do not 
  point to the same object yyerror_range, result is undefined and 
  non-portable 
  ...
  Following list is still unfixed plus see my comments:
  
  gram.c, line 28487: warning: pointer expression or its operand do not 
  point to the same object yyerror_range, result is undefined and 
  non-portable 
  ...
 
 These two should be the same, both coming from bison. Both files are
 auto-generated, thus it might be bison that has to be fixed to remove this
 warning. 

Yeah, it is generated, but the question is whether the generated code is
valid or it is a bug in bison. If it's a bison bug we need to care about
it. Here is the code:

  yyerror_range[1] = yylloc;
  /* Using YYLLOC is tempting, but would change the location of
 the look-ahead.  YYLOC is available though.  */
  YYLLOC_DEFAULT (yyloc, (yyerror_range - 1), 2);
  *++yylsp = yyloc;

The problem is with YYLLOC_DEFAULT. When I look at the macro definition 

#define YYLLOC_DEFAULT(Current, Rhs, N)  \
  Current.first_line   = Rhs[1].first_line;  \
  Current.first_column = Rhs[1].first_column;\
  Current.last_line= Rhs[N].last_line;   \
  Current.last_column  = Rhs[N].last_column;

It seems to me that it is OK, because 1 is used as an index which
finally points to yyerror_range[0].
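
To spell out the arithmetic in a standalone form (YYLTYPE simplified
here; this is an illustration, not the generated code):

#include <assert.h>

typedef struct { int first_line, first_column, last_line, last_column; } YYLTYPE;

int
main(void)
{
    YYLTYPE  yyerror_range[2];

    /* Forming yyerror_range - 1 is what the compiler objects to: the
     * pointer lies before the start of the array, which is formally
     * undefined even though it is never dereferenced at that offset. */
    YYLTYPE *Rhs = yyerror_range - 1;

    /* Rhs[1] is *(yyerror_range - 1 + 1), i.e. yyerror_range[0]. */
    assert(&Rhs[1] == &yyerror_range[0]);
    return 0;
}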


 Given that I didn't find any mentioning of preproc in your patch I
 suppose it just hit the wrong list though.

I'm sorry, copy-paste error. Yeah, I did not fix preproc either.

Zdenek



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Peter Eisentraut
On Thursday 28 May 2009 04:49:19 Tom Lane wrote:
 Yeah.  The fundamental problem with all the practical approaches I've
 heard of is that they only work for a subset of possible predicates
 (possible WHERE clauses).  The idea that you get true serializability
 only if your queries are phrased just so is ... icky.  So icky that
 it doesn't sound like an improvement over what we have.

Is it even possible to have a predicate locking implementation that can verify 
whether an arbitrary predicate implies another arbitrary predicate?  And this 
isn't constraint exclusion, where it is acceptable to have false negatives.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Stephen Frost
* Dimitri Fontaine (dfonta...@hi-media.com) wrote:
 Andrew Dunstan and...@dunslane.net writes:
  Dimitri Fontaine wrote:
we all agree that a specific pg_extension schema is a good idea, as
 soon as user is free not to use it at extension install time.
 
  I don't think we all agree on that at all. ;-)
 
 Oops, my mistake: as a few people were taking that as implicit and as a
 reasoning base point in their mails, I assumed we were past that question
 already. Sorry to see that was too quick a conclusion... and thanks for
 pointing out the absence of consensus!

I'm not real happy with it either.  Sure, we can track module
dependencies separately, but if we go down this route then we have to
come up with some concept of an extension namespace that different
extensions use to prefix their functions/tables/etc. with to avoid
overlap with each other.  Gee, doesn't that sound familiar.  Not to
mention that it's nice to be able to control access to an extension in
one place rather than having to figure out all the pieces of a
particular extension (sure, through the dependencies, but are we really
going to have a GRANT USAGE ON EXT x TO role1; ?  and what happens if
someone changes the permissions on an individual item afterwards?
etc.).

Almost unrelated, I fail to see the value in continuing to keep the
magic part of the search_path (eg: pg_catalog) to ourselves and not
giving our users some ability to manipulate it.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Heikki Linnakangas

Peter Eisentraut wrote:

On Thursday 28 May 2009 04:49:19 Tom Lane wrote:

Yeah.  The fundamental problem with all the practical approaches I've
heard of is that they only work for a subset of possible predicates
(possible WHERE clauses).  The idea that you get true serializability
only if your queries are phrased just so is ... icky.  So icky that
it doesn't sound like an improvement over what we have.


Is it even possible to have a predicate locking implementation that can verify 
whether an arbitrary predicate implies another arbitrary predicate?


I don't think you need that for predicate locking. To determine if e.g 
an INSERT and a SELECT conflict, you need to determine if the INSERTed 
tuple matches the predicate in the SELECT. No need to deduce anything 
between two predicates, but between a tuple and a predicate.
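
To illustrate with a deliberately tiny example (all names invented; a
real implementation must handle arbitrary expression trees, which is
where the difficulty lies):

#include <stdbool.h>

/* toy predicate: a single "column op constant" test on an int column */
typedef enum { PRED_LT, PRED_EQ, PRED_GT } PredOp;

typedef struct
{
    int    attno;       /* which column the WHERE clause tests */
    PredOp op;
    int    constval;
} SimplePredicate;

/* Does the freshly inserted tuple satisfy the reader's predicate?
 * If so, the INSERT conflicts with the earlier SELECT. */
static bool
tuple_matches(const int *tuple, const SimplePredicate *pred)
{
    int v = tuple[pred->attno];

    switch (pred->op)
    {
        case PRED_LT: return v < pred->constval;
        case PRED_EQ: return v == pred->constval;
        case PRED_GT: return v > pred->constval;
    }
    return false;
}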


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Stephen Frost
* Dimitri Fontaine (dfonta...@hi-media.com) wrote:
 A better way to solve this is to have the database post_search_path (or
 call it search_path_suffix) contain the extensions schemas. Now the
 roles are set up without search_path_suffix, and it's easy to add an
 extension living in its own schema. (we'll have to choose whether
 defining a role specific search_path_suffix overrides the database
 specific one, too).
 
 Having all extensions live in pg_extension schema also solves the
 problem in a much easier way, except for people who care about not
 messing it all within a single schema (fourre-tout is the french for a
 place where you put anything and everything).

I certainly agree with this approach, naming aside (I'd probably rather
have 'system_search_path' that's added on as a suffix, or something
similar).

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Peter Eisentraut
On Thursday 28 May 2009 03:38:49 Tom Lane wrote:
 * SET TRANSACTION ISOLATION LEVEL something-else should provide our
 current snapshot-driven behavior.  I don't have a strong feeling about
 whether something-else should be spelled REPEATABLE READ or SNAPSHOT,
 but lean slightly to the latter.

Could someone describe concisely what behavior snapshot isolation provides 
that repeatable read does not?

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Peter Eisentraut
On Thursday 28 May 2009 15:24:59 Heikki Linnakangas wrote:
 I don't think you need that for predicate locking. To determine if e.g
 an INSERT and a SELECT conflict, you need to determine if the INSERTed
 tuple matches the predicate in the SELECT. No need to deduce anything
 between two predicates, but between a tuple and a predicate.

That might be the easy part.  The hard part is determining whether a SELECT and 
an UPDATE conflict.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Michael Meskes
On Thu, May 28, 2009 at 01:51:07PM +0200, Zdenek Kotala wrote:
 The problem is with YYLLOC_DEFAULT. When I look at the macro definition 
 
 #define YYLLOC_DEFAULT(Current, Rhs, N)  \
   Current.first_line   = Rhs[1].first_line;  \
   Current.first_column = Rhs[1].first_column;\
   Current.last_line= Rhs[N].last_line;   \
   Current.last_column  = Rhs[N].last_column;
 
 It seems to me that it is OK, because 1 is used as an index which finally
 points to yyerror_range[0]. 

Wait, this is the bison definition. Well, to be more precise, the bison
definition in your bison version. Mine is different:

# define YYLLOC_DEFAULT(Current, Rhs, N)\
do  \
  if (YYID (N))\
{   \
  (Current).first_line   = YYRHSLOC (Rhs, 1).first_line;\
  (Current).first_column = YYRHSLOC (Rhs, 1).first_column;  \
  (Current).last_line= YYRHSLOC (Rhs, N).last_line; \
  (Current).last_column  = YYRHSLOC (Rhs, N).last_column;   \
}   \
  else  \
{   \
  (Current).first_line   = (Current).last_line   =  \
YYRHSLOC (Rhs, 0).last_line;\
  (Current).first_column = (Current).last_column =  \
YYRHSLOC (Rhs, 0).last_column;  \
   }   \
while (YYID (0))

Having said that, it doesn't really matter as we redefine the macro:

#define YYLLOC_DEFAULT(Current, Rhs, N) \
do { \
if (N) \
(Current) = (Rhs)[1]; \
else \
(Current) = (Rhs)[0]; \
} while (0)

I have to admit that those versions look strikingly dissimilar to me. There is no
reference to Rhs[N] in our macro at all. But then I have no idea whether this
is needed.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: mes...@jabber.org
Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Aidan Van Dyk
* Robert Haas robertmh...@gmail.com [090527 22:43]:
 On Wed, May 27, 2009 at 10:09 PM, Aidan Van Dyk ai...@highrise.ca wrote:
  * Robert Haas robertmh...@gmail.com [090527 21:30]:
 
   And actually looking at the history of the gpo repo, the branches are all
   messed up with merges and stuff that I'm not sure where they are coming
   from...  8.2, 8.3, and master(HEAD) are all the same as my gpo repo, but 
   the
  back branches are very bad...
 
  This is really quite horrible.  What is the best way forward here?
 
  That depends entirely on what the project wants.
 
 I can't speak for anyone else, but what I want is for the git tree on
 git.postgresql.org to match CVS.

Well, sure, but I think the "way forward" part implied recognition that
the current tree at git.postgresql.org *doesn't* match CVS very closely
(for back branches), and that people currently rely on it and use it.

So, again, the answer to the question really does depend on what the
canonical VCS of the project is.  As of now, it's *still* CVS, and
those using either git repo can still develop and submit patches to CVS
easily.

When the project switches, there will probably need to be a more
canonical conversion, with one of the tools that doesn't support
incremental imports, and then people will have to adjust their current
repo with any of rebase/graft/filter-branch to adjust their work
history onto the official tree...

All that is based on the assumption that when the project switches to git,
they actually want all the CVS history in their official tree.  It's
certainly not necessary, and possibly not even desirable...  PostgreSQL
could just as easily do a Linus-style switch when they switch to git,
and just import the latest release in each branch as the starting
point for each branch.  The git repository will have no history, and
people can choose which history they want to graft in...  CVSROOT can be
made available as a historical download.

a.

-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.


signature.asc
Description: Digital signature


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 14:04 +0300, Heikki Linnakangas wrote:

 I've committed a patch to do the RequestXLogSwitch() before the shutdown 
 checkpoint as discussed. It seems safe to me. (sorry for the delay, and 
 thanks for the reminder)

Not sure if that is a fix that will work in all cases. 

There is a potential timing problem with when the archiver is shut down;
that may now be fixed in 8.4, see what you think.

Also if archiving is currently stalled, then files will not be
transferred, even if you switch xlogs. So this is at best a partial fix
to the problem and the need for a manual check of file contents
remains. 

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Heikki Linnakangas

Simon Riggs wrote:

On Thu, 2009-05-28 at 14:04 +0300, Heikki Linnakangas wrote:

I've committed a patch to do the RequestXLogSwitch() before the shutdown 
checkpoint as discussed. It seems safe to me. (sorry for the delay, and 
thanks for the reminder)


Not sure if that is a fix that will work in all cases. 


There is a potential timing problem with when the archiver is shut down;
that may now be fixed in 8.4, see what you think.


Can you elaborate?


Also if archiving is currently stalled, then files will not be
transferred, even if you switch xlogs. So this is at best a partial fix
to the problem and the need for a manual check of file contents
remains. 


Yep. Maybe we should print the filename of the last WAL segment to the 
log at shutdown, so that you can easily check that you have everything 
in the archive.
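
A sketch of what that could look like (XLogFileName() and ereport() are
existing backend facilities; the placement in the shutdown sequence and
the variables holding the current timeline/segment are assumptions):

/* assumes backend headers, e.g. access/xlog_internal.h */
static void
log_last_wal_segment(void)
{
    char    lastseg[MAXFNAMELEN];

    /* format the current segment's file name and log it, so the admin
     * can compare it against the archive contents after shutdown */
    XLogFileName(lastseg, ThisTimeLineID, openLogId, openLogSeg);
    ereport(LOG,
            (errmsg("last WAL segment at shutdown: %s", lastseg)));
}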


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 16:19 +0300, Heikki Linnakangas wrote:
 Simon Riggs wrote:
  On Thu, 2009-05-28 at 14:04 +0300, Heikki Linnakangas wrote:
  
  I've committed a patch to do the RequestXLogSwitch() before the shutdown 
  checkpoint as discussed. It seems safe to me. (sorry for the delay, and 
  thanks for the reminder)
  
  Not sure if that is a fix that will work in all cases. 
  
  There is a potential timing problem with when the archiver is shutdown:
  that may now be fixed in 8.4, see what you think.
 
 Can you elaborate?

Is the archiver still alive and working after the log switch occurs?

If the archiver is working, but has fallen behind at the point of
shutdown, does the archiver operate for long enough to ensure we are
archived up to the point of the log switch prior to checkpoint?

  Also if archiving is currently stalled, then files will not be
  transferred, even if you switch xlogs. So this is at best a partial fix
  to the problem and the need for a manual check of file contents
  remains. 
 
 Yep. Maybe we should print the filename of the last WAL segment to the 
 log at shutdown, so that you can easily check that you have everything 
 in the archive.

You still need a script to read that and synchronize file contents.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New trigger option of pg_standby

2009-05-28 Thread Simon Riggs

On Wed, 2009-05-27 at 12:08 -0400, Bruce Momjian wrote:
 Ideally someone would have
 taken ownership of the issue, summarized the email conclusions, gotten
 a patch together, and submitted it for application.

Just a further comment on this, based upon the patch Heikki recently
committed.

I raised various issues with recovery *after* feature freeze in 8.3,
doing everything you mentioned above: patch (Jun '07), 4 months before
beta1. 3.5 months later the patch was still un-reviewed and you deferred
the patch until 8.4, without comment (Sep '07). Changes were eventually
committed more than a year after original discussion (Apr '08).
HACKERS Minor changes to recovery related code

My other comments relate to that experience, and others.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 8:59 AM, Aidan Van Dyk ai...@highrise.ca wrote:
 All that is based on the assumption that when the project switches to git,
 they actually want all the CVS history in their official tree.  It's
 certainly not necessary, and possibly not even desirable...  PostgreSQL
 could just as easily do a Linus-style switch when they switch to git,
 and just import the latest release in each branch as the starting
 point for each branch.  The git repository will have no history, and
 people can choose which history they want to graft in...  CVSROOT can be
 made available as a historical download.

That would suck for me.  I use git log a lot to see how things have
changed over time.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Heikki Linnakangas

Simon Riggs wrote:

On Thu, 2009-05-28 at 16:19 +0300, Heikki Linnakangas wrote:

Simon Riggs wrote:

On Thu, 2009-05-28 at 14:04 +0300, Heikki Linnakangas wrote:

I've committed a patch to do the RequestXLogSwitch() before the shutdown 
checkpoint as discussed. It seems safe to me. (sorry for the delay, and 
thanks for the reminder)
Not sure if that is a fix that will work in all cases. 


There is a potential timing problem with when the archiver is shut down;
that may now be fixed in 8.4, see what you think.

Can you elaborate?


Is the archiver still alive and working after the log switch occurs?


Yes.


If the archiver is working, but has fallen behind at the point of
shutdown, does the archiver operate for long enough to ensure we are
archived up to the point of the log switch prior to checkpoint?


Yes, it archives all pending WAL segments before exiting.

Ok, we're good then I guess.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sun blade 1000 donation

2009-05-28 Thread Andy Colson

Greg Smith wrote:

On Wed, 27 May 2009, andy wrote:

I have a Sun blade 1000 that's just collecting dust nowadays... It 
weighs a ton.


Bah, I know I picked one of those up myself once, which means it's far 
from being what I'd consider a heavy server as Sun hardware goes.  Specs 
say it's 70 pounds and pulls 670W.  It's a tower form factor though, 
right?  That would make it hard to install some places.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Yeah, when it shipped I think it was about 75 pounds.  It is a tower, 
yes, and an impressively large box (my experience with servers is 
limited, this is the first I've ever gotten to play with, so it may not 
be out of the ordinary).  I think my kill-a-watt said, at idle, it was 
near 300W.  (Though it's been a while, I may not be remembering that 
correctly, and I don't recall looking at it under load)


-Andy

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 16:52 +0300, Heikki Linnakangas wrote:

  If the archiver is working, but has fallen behind at the point of
  shutdown, does the archiver operate for long enough to ensure we are
  archived up to the point of the log switch prior to checkpoint?
 
 Yes, it archives all pending WAL segments before exiting.

I don't think it does, please look again. 

 Ok, we're good then I guess.

No, because as I said, if archive_command has been returning non-zero
then the archive will be incomplete.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A couple of gripes about the gettext plurals patch

2009-05-28 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On Thursday 28 May 2009 00:54:32 Tom Lane wrote:
 To wit, the current
 coding fails to respect the gettext domain when working with pluralized
 messages.

 The ngettext() calls use the default textdomain that main.c sets up.  The PLs
 use dngettext().  Is that not correct?

If that's okay, why didn't we adopt that approach for the mainline
errmsg processing?  Or more to the point: I think it's a seriously bad
idea that ereports in PLs need to be coded differently from those in
the core backend, especially with respect to a relatively-little-used
feature.  Want to make a side bet on how long till the first bug gets
committed?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Aidan Van Dyk
* Robert Haas robertmh...@gmail.com [090528 09:49]:
 On Thu, May 28, 2009 at 8:59 AM, Aidan Van Dyk ai...@highrise.ca wrote:
  All that based on the assumption that when the project switches to git,
  they actually want all the CVS history in their official tree.  Its
  certainly not necessary, and possibly not even desirable...  PostgreSQL
  could just as easily to a linus style switch when they switch to git,
  and just import the latest release in each branch as the starting
  point for each branch.  The git repository will have no history, and
  people can choose which history they want to graft in...  CVSROOT can be
  made available as a historical download.
 
 That would suck for me.  I use git log a lot to see how things have
 changed over time.

No, the whole point is that you graft whatever history *you* want in...
So if the PostgreSQL official git only starts when the official VCS was
in git, you graft on gpo, or git, or some personal one-time cvs2git or
parsecvs history you want in...

It would be the project's way of saying, basically, "None of the current
cvs imports are perfect and we recognize that.  So we're starting fresh;
use whatever historical cvs import *you* find best for your history and
graft it in."  Just like the linux kernel has a few historical repos
available for people to graft into Linus's tree, which only started at
2.6.12. 

If you have work that requires the history of the current gpo repo, you
keep using it.  If you have work requiring the current git repo, you keep
using it.  If you have no work, but you're a stickler for perfect
imports, you start working on parsecvs and cvs2git, and make a new
history every time you find another quirk...

a.


-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.


signature.asc
Description: Digital signature


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Andrew Dunstan



Robert Haas wrote:

On Thu, May 28, 2009 at 8:59 AM, Aidan Van Dyk ai...@highrise.ca wrote:
  

All that is based on the assumption that when the project switches to git,
they actually want all the CVS history in their official tree.  It's
certainly not necessary, and possibly not even desirable...  PostgreSQL
could just as easily do a Linus-style switch when they switch to git,
and just import the latest release in each branch as the starting
point for each branch.  The git repository will have no history, and
people can choose which history they want to graft in...  CVSROOT can be
made available as a historical download.



That would suck for me.  I use git log a lot to see how things have
changed over time.


  


Indeed. Losing the history is not an acceptable option.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Heikki Linnakangas

Simon Riggs wrote:

On Thu, 2009-05-28 at 16:52 +0300, Heikki Linnakangas wrote:


If the archiver is working, but has fallen behind at the point of
shutdown, does the archiver operate for long enough to ensure we are
archived up to the point of the log switch prior to checkpoint?

Yes, it archives all pending WAL segments before exiting.


I don't think it does, please look again. 


Still looks ok to me. pgarch_ArchiverCopyLoop() loops until all ready 
WAL segments have been archived (assuming no errors).
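
For reference, the rough shape of that loop, paraphrased (the helper
names exist in pgarch.c, but this is a from-memory sketch, not the
actual source; the retry counting is elided):

extern bool pgarch_readyXlog(char *xlog);         /* finds a .ready segment */
extern bool pgarch_archiveXlog(const char *xlog); /* runs archive_command */
extern void pgarch_archiveDone(char *xlog);       /* renames .ready to .done */

static void
archiver_copy_loop(void)
{
    char    xlog[64];

    /* archive every segment that has a .ready marker */
    while (pgarch_readyXlog(xlog))
    {
        if (!pgarch_archiveXlog(xlog))
            break;              /* archive_command failed; give up for now,
                                 * which is why a stalled command leaves
                                 * the archive incomplete */
        pgarch_archiveDone(xlog);
    }
}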



Ok, we're good then I guess.


No, because as I said, if archive_command has been returning non-zero
then the archive will be incomplete.


Yes. You think that's wrong? How would you like it to behave, then? I 
don't think you want the shutdown to wait indefinitely until all files 
have been archived if there's an error.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Zdenek Kotala

Michael Meskes wrote on Thu 28. 05. 2009 at 14:47 +0200:
 On Thu, May 28, 2009 at 01:51:07PM +0200, Zdenek Kotala wrote:
  The problem is with YYLLOC_DEFAULT. When I look at the macro definition 
  
  #define YYLLOC_DEFAULT(Current, Rhs, N)  \
Current.first_line   = Rhs[1].first_line;  \
Current.first_column = Rhs[1].first_column;\
Current.last_line= Rhs[N].last_line;   \
Current.last_column  = Rhs[N].last_column;
  
  It seems to me that it is OK, because 1 is used as an index which finally
  points to yyerror_range[0]. 
 
 Wait, this is the bison definition. Well to be more precise the bison
 definition in your bison version. Mine is different:

I took it from the documentation. I have the same as you in the generated code.

 # define YYLLOC_DEFAULT(Current, Rhs, N)\
 do  \
   if (YYID (N))\
 {   \
   (Current).first_line   = YYRHSLOC (Rhs, 1).first_line;\
   (Current).first_column = YYRHSLOC (Rhs, 1).first_column;  \
   (Current).last_line= YYRHSLOC (Rhs, N).last_line; \
   (Current).last_column  = YYRHSLOC (Rhs, N).last_column;   \
 }   \
   else  \
 {   \
   (Current).first_line   = (Current).last_line   =  \
 YYRHSLOC (Rhs, 0).last_line;\
   (Current).first_column = (Current).last_column =  \
 YYRHSLOC (Rhs, 0).last_column;  \
}   \
 while (YYID (0))
 
 Having said that, it doesn't really matter as we redefine the macro:
 
 #define YYLLOC_DEFAULT(Current, Rhs, N) \
 do { \
 if (N) \
 (Current) = (Rhs)[1]; \
 else \
 (Current) = (Rhs)[0]; \
 } while (0)
 
 I have to admit that those versions look strikingly dissimilar to me. There is
 no reference to Rhs[N] in our macro at all. But then I have no idea whether this
 is needed.

Current is only an int; see gramparse.h. I think we could rewrite it this
way:
#define YYLLOC_DEFAULT(Current, Rhs, N) \
do { \
if (N) \
(Current) = (Rhs)[1]; \
else \
(Current) = (Rhs)[N]; \
} while (0)

It gives the same result and the compiler is quiet.

Zdenek






-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 17:21 +0300, Heikki Linnakangas wrote:
 Simon Riggs wrote:
  
  I don't think it does, please look again. 
 
 Still looks ok to me. pgarch_ArchiverCopyLoop() loops until all ready 
 WAL segments have been archived (assuming no errors).

No, it doesn't now, though it used to. See line 440.

  Ok, we're good then I guess.
  
  No, because as I said, if archive_command has been returning non-zero
  then the archive will be incomplete.
 
 Yes. You think that's wrong? How would you like it to behave, then? I 
 don't think you want the shutdown to wait indefinitely until all files 
 have been archived if there's an error.

The complaint was that we needed to run a manual step to synchronise the
pg_xlog directory on the standby. We still need to do that, even after
the patch has been committed because 2 cases are not covered, so what is
the point of the recent change? It isn't enough. It *might* be enough,
most of the time, but you have no way of knowing that is the case and it
is dangerous not to check.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: 
 
 1. Needs to be fully spec-compliant serializable behavior. No
 anomalities.
 
That is what the paper describes, and where I want to end up.
 
 2. No locking that's not absolutely necessary, regardless of the
 WHERE-clause used. No table locks, no page locks. Block only on
 queries/updates that would truly conflict with concurrent updates
 
If you do a table scan, how do you not use a table lock?
 
Also, the proposal is to *not* block in *any* cases beyond where
snapshot isolation currently blocks.  None.  Period.  This is the big
difference from traditional techniques to achieve serializable
transactions.
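 
To illustrate with a minimal sketch, assuming a hypothetical doctors
table: this is the classic write-skew case, which snapshot isolation
permits and which the paper's technique resolves by aborting one
transaction rather than by blocking.
 
    CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean NOT NULL);
    INSERT INTO doctors VALUES ('alice', true), ('bob', true);
 
    -- T1:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors WHERE on_call;    -- sees 2
    UPDATE doctors SET on_call = false WHERE name = 'alice';
 
    -- T2, concurrently:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM doctors WHERE on_call;    -- also sees 2
    UPDATE doctors SET on_call = false WHERE name = 'bob';
 
    -- Under snapshot isolation both COMMITs succeed and nobody is
    -- left on call, a result no serial execution could produce.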
 
 3. No serialization errors that are not strictly necessary.
 
That would require either the blocking approach which has
traditionally been used, or a rigorous graphing of all read-write
dependencies (or anti-dependencies, depending on whose terminology you
prefer).  I expect either approach would perform much worse than
the techniques in the paper.  Published benchmarks, some confirmed by
an ACM Repeatability Committee, have so far validated that intuition.
 
 4. Reasonable performance. Performance in single-backend case should
 be indistinguishable from what we have now and what we have with the
 more lenient isolation levels.
 
This should have no impact on performance for those not choosing
serializable transactions.  Benchmarks of the proposed technique have
so far shown performance ranging from marginally better than snapshot
to 15% below snapshot, with traditional serializable techniques
benchmarking as much as 70% below snapshot.
 
 5. Reasonable scalability. Shouldn't slow down noticeably when 
 concurrent updaters are added as long as they don't conflict.
 
That should be no problem for this technique.
 
 6. No tuning knobs. It should just work.
 
Well, I think some tuning knobs might be useful, but we can certainly
offer working defaults.  Whether they should be exposed as knobs to
the users or kept away from their control depends, in my view, on how
much benefit there is to tweaking them for different environments and
how big a foot-gun they represent.  No tuning knobs seems an odd
requirement to put on this one feature versus all other new features.
 
 Now let's discuss implementation. It may well be that there is no 
 solution that totally satisfies all those requirements, so there's 
 plenty of room for various tradeoffs to discuss.
 
Then they seem more like desirable characteristics than
requirements, but OK.
 
 I think fully spec-compliant behavior is a hard requirement, or
 we'll find ourselves adding yet another isolation level in the next
 release to achieve it.  The others are negotiable.
 
There's an odd dichotomy to the direction given in this area.  On the one
hand, I often see the advice to submit small patches which advance
toward a goal without breaking anything, but then I see statements
like this, which seem at odds with that notion.
 
My personal inclination is to have a GUC (perhaps eliminated after the
implementation is complete, performant, and well-tested) to enable the
new techniques, initially defaulted to off.  There is a pretty clear
path to a mature implementation through a series of iterations.  That
seems at least one order of magnitude more likely to succeed than
trying to come up with a single, final patch.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Kevin Grittner
Albe Laurenz laurenz.a...@wien.gv.at wrote:
 
 Every WHERE-clause in a SELECT will add one or more checks for each
 concurrent writer.
 
That has not been the case in any implementation of predicate locks
I've used so far.  It seems that any technique with those performance
characteristics would be one to avoid.
 
 From the user perspective, will an implementation of the paper's
 approach as an intermediate step provide a useful and understandable
 isolation level?
 
Well, to be clear, the paper states that predicate locking is a
requirement, but we've had some ideas about how we might make progress
without a full implementation of that; so I guess your question should
be taken to mean "in the absence of full predicate locking support."
 
Possibly.  It would reduce the frequency of anomalies for those not
doing explicit locking, and Robert Haas has said that it might allow
him to drop some existing explicit locking.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 Robert Haas wrote:
 That would suck for me.  I use git log a lot to see how things have
 changed over time.

 Indeed. Losing the history is not an acceptable option.

I think the same.  If git is not able to maintain our project history
then it is not mature enough to be considered as our official VCS.
This is not a negotiable requirement.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 10:18 AM, Aidan Van Dyk ai...@highrise.ca wrote:
 * Robert Haas robertmh...@gmail.com [090528 09:49]:
 On Thu, May 28, 2009 at 8:59 AM, Aidan Van Dyk ai...@highrise.ca wrote:
  All that based on the assumption that when the project switches to git,
  they actually want all the CVS history in their official tree.  It's
  certainly not necessary, and possibly not even desirable...  PostgreSQL
  could just as easily do a linus-style switch when they switch to git,
  and just import the latest release in each branch as the starting
  point for each branch.  The git repository will have no history, and
  people can choose which history they want to graft in...  CVSROOT can be
  made available as a historical download.

 That would suck for me.  I use git log a lot to see how things have
 changed over time.

 No, the whole point is that you graft whatever history *you* want in...
 So if PostgreSQL official git only starts when the official VCS was in
 git, you graft on gpo, or git, or some personal one-time cvs2git or
 parsecvs history you want in...

I want the project infrastructure to do this for me so I don't have to
do anything except git clone.  It's not a big deal for me to port my
WIP over to a new git repo if this one is busted, which it sounds like
it is.  But I'm not interested in rolling my own history.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Kevin Grittner
Peter Eisentraut pete...@gmx.net wrote: 
 
 Could someone describe concisely what behavior snapshot isolation
 provides that repeatable read does not?
 
Phantom reads are not possible in snapshot isolation.  They are
allowed to occur (though not required to occur) in repeatable read.
 
Note that in early versions of the SQL standard, this difference was
sufficient to qualify as serializable; but recent versions raised
the bar for serializable transactions.
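 
A minimal sketch of a phantom, assuming a hypothetical orders table:
 
    -- T1:
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT * FROM orders WHERE amount > 100;    -- returns N rows
 
    -- T2, concurrently (autocommitted):
    INSERT INTO orders (id, amount) VALUES (42, 150);
 
    -- T1 again:
    SELECT * FROM orders WHERE amount > 100;
    -- Repeatable read is allowed to return N+1 rows here, the new
    -- row being the phantom; snapshot isolation never shows it.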
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Greg Stark
On Thu, May 28, 2009 at 3:40 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 2. No locking that's not absolutely necessary, regardless of the
 WHERE-clause used. No table locks, no page locks. Block only on
 queries/updates that would truly conflict with concurrent updates

 If you do a table scan, how do you not use a table lock?

Once again, the type of scan is not relevant. It's quite possible to
have a table scan and only read some of the records, or to have an
index scan and read all the records.

You need to store some representation of the qualifiers on the scan,
regardless of whether they're index conditions or filters applied
afterwards. Then check that condition on any inserted tuple to see if
it conflicts.

I think there's some room for some flexibility on the "not absolutely
necessary" part, but I would want any serialization failure to be
justifiable by simple inspection of the two transactions. That is, I
would want failures only for queries where a user could see why the
database could not prove the two transactions were serializable, even
if she knows they don't truly conflict. Any case where the conditions
are obviously mutually exclusive should not generate spurious
conflicts.
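
For instance, with a hypothetical events table:

    -- T1:
    SELECT * FROM events WHERE kind = 'login';

    -- T2, concurrently:
    INSERT INTO events (kind, ts) VALUES ('logout', now());

    -- The inserted tuple cannot satisfy T1's recorded qualifier
    -- kind = 'login', so no conflict should be reported here.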

Offhand the problem cases seem to be conditions like WHERE
func(column) where func() is not immutable (I don't think STABLE is
enough here). I would be ok with discarding conditions like this -- if
they're the only conditions on the query that would effectively make
it a table lock like you're describing. But it's one we could justify
to the user -- any potential insert might cause a serialization
failure depending on the unknown semantics of func().

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Heikki Linnakangas

Simon Riggs wrote:

On Thu, 2009-05-28 at 17:21 +0300, Heikki Linnakangas wrote:

Simon Riggs wrote:
I don't think it does, please look again. 
Still looks ok to me. pgarch_ArchiverCopyLoop() loops until all ready 
WAL segments have been archived (assuming no errors).


No, it doesn't now, though it used to. See line 440.


postmaster never sends SIGTERM to pgarch, and postmaster is still alive.


Ok, we're good then I guess.

No, because as I said, if archive_command has been returning non-zero
then the archive will be incomplete.
Yes. You think that's wrong? How would you like it to behave, then? I 
don't think you want the shutdown to wait indefinitely until all files 
have been archived if there's an error.


The complaint was that we needed to run a manual step to synchronise the
pg_xlog directory on the standby. We still need to do that, even after
the patch has been committed because 2 cases are not covered, so what is
the point of the recent change? It isn't enough. It *might* be enough,
most of the time, but you have no way of knowing that is the case and it
is dangerous not to check.


So you check. This solves Guillaume's immediate concern. If you have a 
suggestion for further improvements, I'm all ears.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Andreas Pflug
Simon Riggs wrote:

 No, because as I said, if archive_command has been returning non-zero
 then the archive will be incomplete.
   
 Yes. You think that's wrong? How would you like it to behave, then? I 
 don't think you want the shutdown to wait indefinitely until all files 
 have been archived if there's an error.
 

 The complaint was that we needed to run a manual step to synchronise the
 pg_xlog directory on the standby. We still need to do that, even after
 the patch has been committed because 2 cases are not covered, so what is
 the point of the recent change? It isn't enough. It *might* be enough,
 most of the time, but you have no way of knowing that is the case and it
 is dangerous not to check.
   
If archiving has stalled, it's not a clean shutdown anyway and I
wouldn't expect the WAL archive to be automatically complete. I'd still
appreciate a warning that while the shutdown appeared regular, WAL
wasn't archived completely. But in the corner case of shutting down a
smoothly running server, the WAL archive should be complete as well.

Regards,
Andreas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Greg Stark
On Thu, May 28, 2009 at 3:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Andrew Dunstan and...@dunslane.net writes:
 Robert Haas wrote:
 That would suck for me.  I use git log a lot to see how things have
 changed over time.

 Indeed. Losing the history is not an acceptable option.

 I think the same.  If git is not able to maintain our project history
 then it is not mature enough to be considered as our official VCS.
 This is not a negotiable requirement.

I think the idea is that you could choose, for example, the level of
granularity you want to keep. That could be interesting in the future
-- someone who submitted a patch (or anyone who was working in that
area) might want to keep all their intermediate commits and not just
the one big commit for the whole feature.

But it's not like we have a lot of choices for our history. Only a few
patches were maintained in a distributed vc system so far and I don't
think many people followed them. Also, given the massive changes
patches have tended to get when being committed, keeping the history of
the patch development seems kind of pointless.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Guillaume Smet
On Thu, May 28, 2009 at 5:02 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 So you check. This solves Guillaume's immediate concern. If you have a
 suggestion for further improvements, I'm all ears.

Thanks for applying the patch.

Yes, the problem is that before this change, even with a working
replication and a clean shutdown, you still had to replicate the last
WAL file by hand. Personally, I have an eye on each postgresql log
file when I switch from one server to another so I can see if anything
is going wrong (that said, it could be a problem with more than 2
servers...).

This patch just fixes this problem not the other concerns and corner
cases we might have. If we want to go further, we need to agree on
what we want exactly and which corner cases we want to cover but it's
probably 8.5 material at this point.

-- 
Guillaume

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sun blade 1000 donation

2009-05-28 Thread Jignesh K. Shah



On 05/27/09 22:00, Josh Berkus wrote:

Andy,


I have a Sun blade 1000 that's just collecting dust nowadays.  I was
wondering if there were any pg-hackers that could find use for it.

It's dual UltraSPARC III 750 (I think) and has two 36 gig fiber channel
scsi disks.

It weighs a ton.

I'd be happy to donate it to a good cause.


Feh, as much as we need more servers, we're really limited in our 
ability to accept stuff which is large  high power consumption.


Now, if we had a DSL line we could hook it to, I could see using it 
for the buildfarm; it would be interesting old HW / old Solaris for us.




Actually I think you can use cutting edge OpenSolaris 2009.06 release 
(which will happen in less than a week)  for SPARC on that hardware. I 
haven't tried it out on Sun Blade 1000/2000 yet but in theory you can. 
Refer to the following thread


http://mail.opensolaris.org/pipermail/indiana-discuss/2009-February/014134.html

Though you will need an Automated Installer setup to install OpenSolaris 
on SPARC

http://dlc.sun.com/osol/docs/content/dev/AIinstall/index.html


Regards,
Jignesh


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 17:16 +0200, Guillaume Smet wrote:
 On Thu, May 28, 2009 at 5:02 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
  So you check. This solves Guillaume's immediate concern. If you have a
  suggestion for further improvements, I'm all ears.
 
 Thanks for applying the patch.
 
 Yes, the problem is that before this change, even with a working
 replication and a clean shutdown, you still had to replicate the last
 WAL file by hand. Personally, I have an eye on each postgresql log
 file when I switch from one server to another so I can see if anything
 is going wrong (that said, it could be a problem with more than 2
 servers...).
 
 This patch just fixes this problem not the other concerns and corner
 cases we might have. If we want to go further, we need to agree on
 what we want exactly and which corner cases we want to cover but it's
 probably 8.5 material at this point.

Your original post wanted to know that we are sure we have all the
useful XLog files when we perform a clean shutdown of the master. The
patch does not solve the problem you stated. 

You may consider it useful, but a manual check or script execution must
still happen. 

If you feel we have moved forwards, that's good, but since no part of
the *safe* maintenance procedure has changed, I don't see that myself.
Only the unsafe way of doing it got faster.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Kevin Grittner
Greg Stark st...@enterprisedb.com wrote:
 
 Once again, the type of scan is not relevant. It's quite possible to
 have a table scan and only read some of the records, or to have an
 index scan and read all the records.
 
 You need to store some representation of the qualifiers on the scan,
 regardless of whether they're index conditions or filters applied
 afterwards. Then check that condition on any inserted tuple to see
 if it conflicts.
 
 I think there's some room for some flexibility on the "not absolutely
 necessary" part, but I would want any serialization failure to be
 justifiable by simple inspection of the two transactions. That is, I
 would want failures only for queries where a user could see why the
 database could not prove the two transactions were serializable, even
 if she knows they don't truly conflict. Any case where the conditions
 are obviously mutually exclusive should not generate spurious conflicts.
 
 Offhand the problem cases seem to be conditions like WHERE
 func(column) where func() is not immutable (I don't think STABLE is
 enough here). I would be ok with discarding conditions like this --
 if they're the only conditions on the query that would effectively
 make it a table lock like you're describing. But it's one we could
 justify to the user -- any potential insert might cause a
 serialization failure depending on the unknown semantics of func().
 
Can you cite anywhere that such techniques have been successfully used
in a production environment, or are you suggesting that we break new
ground here?  (The techniques I've been assuming are pretty well-worn
and widely used.)  I've got nothing against a novel implementation,
but I do think that it might be better to do that as an enhancement,
after we have the thing working using simpler techniques.
 
One other note -- I've never used Oracle, but years back I was told by
a fairly credible programmer who had, that when running a serializable
SELECT statement you could get a serialization failure even if it was
the only user query running on the system.  Apparently (at least at
that time) background maintenance operations could deadlock with a
SELECT.  Basically, I feel that the reason for using serializable
transactions is that you don't know what concurrent uses may happen in
advance or how they may conflict, and you should always be prepared to
handle serialization failures.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Markus Wanner

Hi,

Quoting Tom Lane t...@sss.pgh.pa.us:

I think the same.  If git is not able to maintain our project history
then it is not mature enough to be considered as our official VCS.


As Aidan pointed out, the question is not *if* git can represent it.  
It's rather *how*. Especially WRT changes of historical information in  
the CVS repository underneath.


Heikki is concerned about having to merge WIP branches in case the  
(CVS and git repository) history changes, so he'd like to maintain the  
old history as well as the changed one. OTOH Robert doesn't want to  
fiddle with multiple histories and expects to have just exactly one  
history. Obviously one can't have both. Either one has to rebase/merge  
his changes onto the new history, or continue with multiple histories.


Being a monotone fan, I have to admit that git definitely provides the  
most options on *how* to handle these cases, see Aidan's mail upthread.


Knowing most of the corruptions of CVS in use in the wild (by fiddling  
with cvs_import for monotone) I now consider git (and svn, hg, bzr,  
mtn..) to be more mature than CVS, certainly much more consistent. So  
if maturity (not age) is your major concern, I'd rather flee from CVS  
now than tomorrow.


Regards

Markus Wanner

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 8:43 AM, Peter Eisentraut pete...@gmx.net wrote:
 On Thursday 28 May 2009 15:24:59 Heikki Linnakangas wrote:
 I don't think you need that for predicate locking. To determine if e.g
 an INSERT and a SELECT conflict, you need to determine if the INSERTed
 tuple matches the predicate in the SELECT. No need to deduce anything
 between two predicates, but between a tuple and a predicate.

 That might be the easy part.  The hard part is determining whether a SELECT and
 an UPDATE conflict.

What's hard about that?  INSERTs are the hard case, because the rows
you care about don't exist yet.  SELECT, UPDATE, and DELETE are easy
by comparison; you can lock the actual rows at issue.  Unless I'm
confused?

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Tom Lane
Zdenek Kotala zdenek.kot...@sun.com writes:
 I attached another cleanup patch which fixes the following warnings reported
 by Sun Studio:

I'm not too impressed with any of these.  The proposed added
initializers just increase future maintenance effort without solving
any real problem (since the variables are required by C standard to
initialize to zero).  The proposed signature change on psql_completion
is going to replace a warning on your system with outright failures on
other people's.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 11:04 AM, Greg Stark st...@enterprisedb.com wrote:
 On Thu, May 28, 2009 at 3:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Andrew Dunstan and...@dunslane.net writes:
 Robert Haas wrote:
 That would suck for me.  I use git log a lot to see how things have
 changed over time.

 Indeed. Losing the history is not an acceptable option.

 I think the same.  If git is not able to maintain our project history
 then it is not mature enough to be considered as our official VCS.
 This is not a negotiable requirement.

 I think the idea is that you could choose, for example, the level of
 granularity you want to keep. That could be interesting in the future
 -- someone who submitted a patch (or anyone who was working in that
 area) might want to keep all their intermediate commits and not just
 the one big commit for the whole feature.

I don't think that was the idea - Aidan floated the idea of just
checking the current version of each branch into git, rather than
importing the full history from CVS (and letting individual cloners fix
their own history if they were so inclined).  I think that's a
non-starter.

I'm still not sure who is going to take responsibility for fixing the
git tree we have now.  I don't think it's going to work for us to
leave it broken until we're ready to do the cutover, and then do one
monolithic move.  If the tools we're using to do the import now have
broken our tree, then we need to fix it, and them.  Ideally I'd like
to get a bi-directional conversion working, so that committers could
commit via either CVS or GIT during the transition, but I'm not sure
whether that's feasible.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 11:40 AM, Markus Wanner mar...@bluegap.ch wrote:
 Quoting Tom Lane t...@sss.pgh.pa.us:
 I think the same.  If git is not able to maintain our project history
 then it is not mature enough to be considered as our official VCS.

 As Aidan pointed out, the question is not *if* git can represent it. It's
 rather *how*. Especially WRT changes of historical information in the CVS
 repository underneath.

 Heikki is concerned about having to merge WIP branches in case the (CVS and
 git repository) history changes, so he'd like to maintain the old history as
 well as the changed one. OTOH Robert doesn't want to fiddle with multiple
 histories and expects to have just exactly one history. Obviously one can't
 have both. Either one has to rebase/merge his changes onto the new history,
 or continue with multiple histories.

My understanding is that the histories of some of the branches we have
now are flat-out wrong.  I don't have a problem keeping those
alongside the corrected history for ease of rebasing and porting
commits, but I don't want to punt the problem of figuring out what the
one, true, and correct history is to the user.  The canonical
repository needs to provide that, and if it provides other alternative
timelines (a la Star Trek) for the convenience of people in Heikki's
situation, that's OK too, as long as they are clearly labeled as such.
 I think ideally we'd phase those out and garbage collect them
eventually, but we can certainly keep them for a while.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Guillaume Smet
On Thu, May 28, 2009 at 5:36 PM, Simon Riggs si...@2ndquadrant.com wrote:
 If you feel we have moved forwards, that's good, but since no part of
 the *safe* maintenance procedure has changed, I don't see that myself.
 Only the unsafe way of doing it got faster.

I disagree with you.

The situation was:
- you stop the master;
- everything seems to be OK in the log files (archiving and so on);
- it's broken anyway as you don't have the last log file;
- you have to copy the last log file manually.
- you can start the slave.

It is now:
- you stop the master;
- if everything is OK in the log files, the last log file has been
archived (and yes I check it manually too) and it's done. If not (and
it's the exception, not the rule) I have to copy the missing WAL files
manually;
- you can start the slave.

I think it's a step forward, maybe not sufficient for you, but I prefer
the situation now to before. It's safer because of the principle of
least surprise: I'm pretty sure a lot of people didn't even think that
the last WAL file was systematically missing.

As Heikki stated it, if you have concrete proposals of how we can fix
the other corner cases, we're all ears. Considering my current level
of knowledge, that's all I can do by myself.

IMHO, that's something that needs to be treated in the massive
replication work planned for 8.5.

-- 
Guillaume

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Markus Wanner

Hi,

Quoting Robert Haas robertmh...@gmail.com:

I don't think that was the idea - Aidan floated the idea of just
checking the current version of each branch into git, rather than
importing the full history from CVS (and letting individual cloners fix
their own history if they were so inclined).  I think that's a
non-starter.


I'd say it depends on how hard it is to fix one's history. If it's  
just a config option instructing git to fetch everything before  
revision X from repository Y...


OTOH, it would certainly be nicer to have a default history, where  
only people who require another history would need such a config  
option. I'm not quite sure what's possible there.



I don't think it's going to work for us to
leave it broken until we're ready to do the cutover, and then do one
monolithic move.


Agreed. However, I'm pretty certain this won't be the last time we  
have to fix the git repository. Conversion from a bunch of RCS files  
is just way too ambiguous.



If the tools we're using to do the import now have
broken our tree, then we need to fix it, and them.


...and the CVS repository.

Regards

Markus Wanner


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Greg Stark
On Thu, May 28, 2009 at 4:33 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:

 Can you cite anywhere that such techniques have been successfully used
 in a production environment

Well there's a reason our docs say: "Such a locking system is complex
to implement and extremely expensive in execution"

 or are you suggesting that we break new
 ground here?  (The techniques I've been assuming are pretty well-worn
 and widely used.)

Well they're well-worn in very different databases which have much
less flexibility in how they access data. In part that inflexibility
comes *from* their decision to implement transaction isolation using
locks and to tie those locks to the indexing infrastructure.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning cleanup - uninitialized const variables, pointer type mismatch

2009-05-28 Thread Tom Lane
Michael Meskes mes...@postgresql.org writes:
 I have to admit that those versions look strikingly dissimilar to me.
 There is no reference to Rhs[N] in our macro at all. But then I have
 no idea whether this is needed.

The default version of the macro is intended to track both the starting
and ending locations of every construct.  Our simplified version only
tracks the starting locations.  The inputs are RHS[k], the location
values for the constituent elements of the current production, and
the output is the location value for the construct being formed.
In the default version, you naturally want to copy the start of
RHS[1] and the end of RHS[N], where N is the number of production
elements.  In ours, we just copy the location of RHS[1].  However,
both macros need a special case for empty productions (with N = 0).

AFAICS, Sun's compiler is just too stupid and shouldn't be emitting
this warning.  Perhaps the right response is to file a bug report
against the compiler.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Kevin Grittner
Greg Stark st...@enterprisedb.com wrote:
 
 I would want any serialization failure to be
 justifiable by simple inspection of the two transactions.
 
BTW, there are often three (or more) transactions involved in creating
a serialization failure, where any two of them alone would not fail. 
You probably knew that, but just making sure.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 17:50 +0200, Guillaume Smet wrote:

 I think it's a step forward, maybe not sufficient for you but I prefer
 the situation now than before. It's safer because of the principle of
 least surprise: I'm pretty sure a lot of people didn't even think that
 the last WAL file was systematically missing.

If I hadn't spoken out, I think you would have assumed you were safe and
so would everybody else. Time is saved only if you perform the step
manually - if time saving was your objective you should have been using
a script in the first place. If you're using a script, carry on using
it: nothing has changed, you still need to check.

 As Heikki stated it, if you have concrete proposals of how we can fix
 the other corner cases, we're all ears. Considering my current level
 of knowledge, that's all I can do by myself.

I'm not sure there is a solution even. Fixing a broken archive_command
is not something PostgreSQL can achieve, by definition.

It's good you submitted a patch, I have no problem there, BTW, but a
patch applied during beta should either fix the problem or not be
applied at all.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Markus Wanner

Hi,

Quoting Robert Haas robertmh...@gmail.com:

My understanding is that the histories of some of the branches we have
now are flat-out wrong.


AFAIU only the latest revisions of the branches have been compared.  
Keeping history and future in mind, that's not telling much, IMO. In  
my experience, there's much more wrong with converted CVS repositories  
- the latest revisions are often just the tip of the iceberg.  
Depending on your definition of "wrong", of course.



I don't have a problem keeping those
alongside the corrected history for ease of rebasing and porting
commits, but I don't want to punt the problem of figuring out what the
one, true, and correct history is to the user.


Understood and agreed. (In a distributed VCS, you cannot delete  
history by definition, because every user is free to keep his version).


However, I'm pretty certain this is not the last flat-out wrong  
thing we find in the CVS or in the converted git repository. Going to  
fix and rebase every time might be pretty annoying and time consuming.  
Thus alternatives like those mentioned by Aidan sound interesting to me.


Regards

Markus Wanner

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fast ALTER TABLE ... ADD COLUMN ... DEFAULT xxx?

2009-05-28 Thread Dmitry Koterov

 Dmitry Koterov dmi...@koterov.ru writes:
  No, I meant that in case of the row (1, NULL, NULL, 2, 3, NULL):
  - the corresponding NULL bitmap is (100110...)
  - the corresponding tuple is (1, 2, 3)
  - t_natts=3 (if I am not wrong here)

 You are wrong --- t_natts would be six here.  In general the length of
 the null bitmap in a tuple (if it has one at all) is always exactly
 equal to its t_natts value.


And so, the real number of values stored in the tuple - (1, 2, 3) above - is
equal to the number of 1-bits in the NULL bitmap. And the size of the NULL
bitmap is held in t_natts. I meant that when I said "thanks to the NULL
bitmap, adding a new nullable column is cheap". :-) And, of course, thanks
to t_natts (the HeapTupleHeaderGetNatts macro) too.
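
A quick illustration of the practical upshot (8.x behavior; big_table
is hypothetical):

    ALTER TABLE big_table ADD COLUMN note text;
    -- cheap: existing tuples are untouched; attributes beyond a
    -- tuple's t_natts are simply read back as NULL

    ALTER TABLE big_table ADD COLUMN flag integer DEFAULT 0;
    -- expensive: every existing tuple must be rewritten to store
    -- the default, which is the cost this thread is asking about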


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Guillaume Smet
On Thu, May 28, 2009 at 6:06 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Thu, 2009-05-28 at 17:50 +0200, Guillaume Smet wrote:

 I think it's a step forward, maybe not sufficient for you but I prefer
 the situation now than before. It's safer because of the principle of
 least surprise: I'm pretty sure a lot of people didn't even think that
 the last WAL file was systematically missing.

 If I hadn't spoken out, I think you would have assumed you were safe and
 so would everybody else. Time is saved only if you perform the step
 manually - if time saving was your objective you should have been using
 a script in the first place. If you're using a script, carry on using
 it: nothing has changed, you still need to check.

You might think that, but I wouldn't have. I will still monitor my log
files carefully and check that the last WAL file is received and treated
on the slave, as I currently do.

I prefer checking it visually to using a script.

At least, now, I have a chance to have it working without manual
intervention.

 It's good you submitted a patch, I have no problem there, BTW, but
 applying a patch during beta, should either fix the problem or not be
 applied at all.

Well, I don't think we'll agree on that. Anyway, have a nice day :).

-- 
Guillaume

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 What's hard about that?  INSERTs are the hard case, because the rows
 you care about don't exist yet.  SELECT, UPDATE, and DELETE are easy
 by comparison; you can lock the actual rows at issue.  Unless I'm
 confused?

UPDATE isn't really any easier than INSERT: the update might cause
the row to satisfy someone else's search condition that it didn't
previously satisfy.
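
A minimal sketch, assuming a hypothetical accounts table:

    -- T1:
    SELECT * FROM accounts WHERE balance < 0;    -- reads no rows

    -- T2, concurrently:
    UPDATE accounts SET balance = -50 WHERE id = 7;

    -- The updated row now satisfies T1's predicate even though T1
    -- never read it, so locking only the rows T1 actually touched is
    -- not enough; the new tuple version must be checked against T1's
    -- predicate, just as an inserted tuple would be.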

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I'm still not sure who is going to take responsibility for fixing the
 git tree we have now.  I don't think it's going to work for us to
 leave it broken until we're ready to do the cutover, and then do one
 monolithic move.  If the tools we're using to do the import now have
 broken our tree, then we need to fix it, and them.  Ideally I'd like
 to get a bi-directional conversion working, so that committers could
 commit via either CVS or GIT during the transition, but I'm not sure
 whether that's feasible.

I fear the latter is probably pie in the sky, unfortunately --- to take
just one minor point, which commit timestamp is authoritative?  I think
we will have to make a clean cutover from "CVS is authoritative" to
"CVS is dead and git is authoritative", and do a fresh repository
conversion at that instant.  What we should be doing to get prepared for
that is testing various conversion tools to see which one gives us the
best conversion.  And fixing anything in the CVS repository that is
preventing getting a sane conversion.

The existing git mirror is an unofficial service and is not going to be
the basis of the future authoritative repository.  Folks who have cloned
it will have to re-clone.  Sorry about that, but maintaining continuity
with that repository is just too far down the list of priorities
... especially when we already know it's broken.

I am hoping that git's cvs server emulation is complete enough that you
can commit through it --- anybody know?  But that will be just a
stopgap.

BTW, can anyone comment on whether and how we can maintain the current
split between master repository (that's not even accessible to
non-committers) and a public mirror?  If only from a standpoint of
security paranoia, I'd rather like to preserve that split, but I don't
know how well git will play with it.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 12:10 PM, Markus Wanner mar...@bluegap.ch wrote:
 Hi,

 Quoting Robert Haas robertmh...@gmail.com:

 My understanding is that the histories of some of the branches we have
 now are flat-out wrong.

 AFAIU only the latest revisions of the branches have been compared. Keeping
 history and future in mind, that's not telling much, IMO. In my experience,
 there's much more wrong with converted CVS repositories - the latest
 revisions are often just the tip of the iceberg. Depending on your
 definition of "wrong", of course.

That's not the best news I've had today...

 I don't have a problem keeping those
 alongside the corrected history for ease of rebasing and porting
 commits, but I don't want to punt the problem of figuring out what the
 one, true, and correct history is to the user.

 Understood and agreed. (In a distributed VCS, you cannot delete history by
 definition, because every user is free to keep his version).

 However, I'm pretty certain this is not the last flat-out wrong thing we
 find in the CVS or in the converted git repository. Going to fix and rebase
 every time might be pretty annoying and time consuming. Thus alternatives
 like those mentioned by Aidan sound interesting to me.

To me they sound complex and inconvenient.  I guess I'm kind of
mystified by why we can't make this work reliably.  Other than the
broken tags issue we've discussed, it seems like the only real issue
should be how to group changes to different files into a single
commit.  Once you do that, you should be able to construct a
well-defined, total function f : (cvs-file, cvs-revision) -> (git
commit) which is surjective on the space of git commits.  In fact it
might be a good idea to explicitly construct this mapping and drop it
into a database table somewhere so that people can sanity check it as
much as they wish.  Why is this harder than I think it is?

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread David E. Wheeler

On May 28, 2009, at 1:34 AM, Dimitri Fontaine wrote:


Andrew Dunstan and...@dunslane.net writes:

Dimitri Fontaine wrote:
 we all agree that a specific pg_extension schema is a good idea, as
 soon as user is free not to use it at extension install time.


I don't think we all agree on that at all. ;-)


Ooops, my mistake; as few people were taking that as implicit and as a
reasoning basepoint in their mails, I assumed we were past the question
already. Sorry to see that's too quick a conclusion... and thanks for
pointing out the absence of consensus!


I somehow missed Andrew's mail, but I agree that we don't all agree on  
that point. I'm fine with having a standard schema for extensions,  
just as long as I can tell the installer to actually install it in a  
different schema if I want/need to do so.


Best,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 12:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I'm still not sure who is going to take responsibility for fixing the
 git tree we have now.  I don't think it's going to work for us to
 leave it broken until we're ready to do the cutover, and then do one
 monolithic move.  If the tools we're using to do the import now have
 broken our tree, then we need to fix it, and them.  Ideally I'd like
 to get a bi-directional conversion working, so that committers could
 commit via either CVS or GIT during the transition, but I'm not sure
 whether that's feasible.

 I fear the latter is probably pie in the sky, unfortunately --- to take
 just one minor point, which commit timestamp is authoritative?

That's just a question of deciding on a date when git becomes
authoritative and CVS ceases to be.

 I think
 we will have to make a clean cutover from CVS is authoritative to
 CVS is dead and git is authoritative, and do a fresh repository
 conversion at that instant.  What we should be doing to get prepared for
 that is testing various conversion tools to see which one gives us the
 best conversion.  And fixing anything in the CVS repository that is
 preventing getting a sane conversion.

That might work, but then we better be pretty darn confident that that
fresh conversion is actually correct.  I'd rather have them going
side-by-side so that we can verify everything before shutting the old
system off.

 The existing git mirror is an unofficial service and is not going to be
 the basis of the future authoritative repository.  Folks who have cloned
 it will have to re-clone.  Sorry about that, but maintaining continuity
 with that repository is just too far down the list of priorities
 ... especially when we already know it's broken.

 I am hoping that git's cvs server emulation is complete enough that you
 can commit through it --- anybody know?  But that will be just a
 stopgap.

 BTW, can anyone comment on whether and how we can maintain the current
 split between master repository (that's not even accessible to
 non-committers) and a public mirror?  If only from a standpoint of
 security paranoia, I'd rather like to preserve that split, but I don't
 know how well git will play with it.

You can set up one repository to mirror another.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Kevin Grittner
Greg Stark st...@enterprisedb.com wrote:
 On Thu, May 28, 2009 at 4:33 PM, Kevin Grittner wrote:

 Can you cite anywhere that such techniques have been successfully
 used in a production environment
 
 Well there's a reason our docs say: Such a locking system is
 complex to implement and extremely expensive in execution
 
I'm not clear on the reason for insisting that we use techniques that
*nobody* expects will work well.
 
 or are you suggesting that we break new
 ground here?  (The techniques I've been assuming are pretty
 well-worn and widely used.)
 
 Well they're well-worn in very different databases which have much
 less flexibility in how they access data. In part that inflexibility
 comes *from* their decision to implement transaction isolation using
 locks and to tie those locks to the indexing infrastructure.
 
I really don't see that.  The btree usage seems pretty clear.  The
other indexes seem solvable, with some work.  And there's an
incremental path this way, where we can get basic functionality
correct and tune one thing at a time until performance is acceptable. 
At the high end, we could even break this new ground and see if it
works better, although I personally doubt it will.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread David E. Wheeler

On May 28, 2009, at 1:13 AM, Dimitri Fontaine wrote:


Having all extensions live in pg_extension schema also solves the
problem in a much easier way, except for people who care about not
messing it all within a single schema (fourre-tout is the French for a
place where you put anything and everything).


Yes, just as long as your extensions schema doesn't turn into a  
bricolage of stuff. I mean, if I use a lot of extensions, it means  
that I end up with a giant collection of functions and types and  
whatnot in this one namespace. PHP programmers might be happy with it,  
but not I. ;-P



As Josh is saying too, as soon as we have SQL level extension objects
with dependencies, we'll be able to list all of a particular extension's
objects without needing to have them live in separate schemas.
\df pgq.  -- list all functions in schema pgq
\dt pgq.  -- list all tables in schema pgq
\de pgq.  -- list all objects provided by extension pgq

Still, for extension upgrading or name collisions between extensions,
or some more cases I'm not thinking about now, pg_extension will not be
all what you need. We already have schemas and search_path, and it's not
always pretty nor fun to play with. Would prefix/suffix components help?


Yes, but I'm not sure that's the best interface for that  
functionality. Think I'll do some thinking on it myself…


Best,

David


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sun blade 1000 donation

2009-05-28 Thread Andy Colson

Jignesh K. Shah wrote:



On 05/27/09 22:00, Josh Berkus wrote:

Andy,


I have a Sun blade 1000 that's just collecting dust nowadays.  I was
wondering if there were any pg-hackers that could find use for it.

It's dual UltraSPARC III 750 (I think) and has two 36 gig fiber channel
scsi disks.

It weighs a ton.

I'd be happy to donate it to a good cause.


Feh, as much as we need more servers, we're really limited in our 
ability to accept stuff which is large  high power consumption.


Now, if we had a DSL line we could hook it to, I could see using it 
for the buildfarm; it would be interesting old HW / old Solaris for us.




Actually I think you can use cutting edge OpenSolaris 2009.06 release 
(which will happen in less than a week)  for SPARC on that hardware. I 
haven't tried it out on Sun Blade 1000/2000 yet but in theory you can. 
Refer to the following thread


http://mail.opensolaris.org/pipermail/indiana-discuss/2009-February/014134.html 



Though you will need an Automated Installer setup to install OpenSolaris 
on SPARC

http://dlc.sun.com/osol/docs/content/dev/AIinstall/index.html


Regards,
Jignesh




Well that could be fun to play with.  I have snv_99 on there now, so I'm 
not too outdated.  The two drives are in a zfs mirror and as long as you 
use both processors it's a pretty snappy box.  (gmake vs gmake -j 4 is 
noticeably faster)


But still... I'm buying a new computer and need to clear out some of the 
old ones first. (I took a count, and I have about 11 computers, 
counting anything I can ssh to or run apache on as a computer (so my 
gf's iTouch counts as a computer))


-Andy

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Greg Smith

On Thu, 28 May 2009, Robert Haas wrote:


My understanding is that the histories of some of the branches we have
now are flat-out wrong.  I don't have a problem keeping those
alongside the corrected history for ease of rebasing and porting
commits, but I don't want to punt the problem of figuring out what the
one, true, and correct history is to the user.


Right.  There has to be one true repo for the history here, and if it 
takes another repo conversion to do it that's unfortunate for people 
already using the existing repo, but as pointed out there are tools 
available to help them out.  You can't prioritize users of this early test 
repo ahead of the long-term goals here, and making it easier for new 
people to quickly start hacking on the codebase is very much a motivating 
factor behind the conversion.


Because the mapping of CVS commits into git ones has a bit of fuzziness to 
it, it's possible to turn fine-tuning the repo history into an endless 
project.  Wandering down that road helps no one.


The best way to control the scope creep here is to avoid doing that, and 
instead focus on what you really need from the repo conversion.  In this 
case, it's a hard requirement that current and back branches that are 
still maintained must produce a checked-out result identical to what 
you would get checking that version out of CVS.  There's already been some 
spot checking of that; it may make sense to write up an official QA 
spec here.


Reconversion of the old history needs to happen as many times as necessary 
until that goal is reached, if git is to be adopted by the project one day. 
Because I think that's going to require an iterative process 
(convert/test/fix/repeat), I'm not sure what value there is in better 
conversion tools that can't be used incrementally here.


If the goalposts are moved to "every ancient tag/release ever must build 
perfectly and have sane history no matter how nasty its CVS history was", 
history conversion is doomed.  I don't think it's unrealistic to plan 
reaching a point where you can say "we've confirmed every release build 
from 7.4 forward builds identically from git"; older releases, betas, and 
similarly early builds should instead be built from the deprecated CVS 
repo.  If the scope of the conversion has higher standards than that, and 
I can't imagine why it should, there's going to be an enormous amount of 
time wasted playing around with tags that results in no benefit to users 
of the software.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, May 28, 2009 at 12:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I think
 we will have to make a clean cutover from CVS is authoritative to
 CVS is dead and git is authoritative, and do a fresh repository
 conversion at that instant.  What we should be doing to get prepared for
 that is testing various conversion tools to see which one gives us the
 best conversion.  And fixing anything in the CVS repository that is
 preventing getting a sane conversion.

 That might work, but then we better be pretty darn confident that that
 fresh conversion is actually correct.

Well, yeah, which is one of several reasons why this isn't happening
tomorrow ;-).  Whatever tool we use should have survived a good deal
of advance testing.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sun blade 1000 donation

2009-05-28 Thread Andy Colson

Greg Smith wrote:

On Thu, 28 May 2009, Andy Colson wrote:

Yeah, when it shipped I think it was about 75 pounds.  It is a tower, 
yes, and an impressively large box (my experience with servers is 
limited, this is the first I've ever gotten to play with, so it may 
not be out of the ordinary).


To give you a better idea of the scale people were thinking of with your 
original comment: the last Sun server I installed was 170 pounds and you 
had to provision a dedicated power outlet for it.  The Blade 1000 would 
be considered a medium-sized server.  A small server is one that fits in 
1 to 3 rack units.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD


Sweet.  That sounds fun to play on.  So yeah, as I was saying before, 
it's a 75lb box, nothing huge.. ya know... average... :-)


-Andy

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Simon Riggs

On Thu, 2009-05-28 at 18:02 +0300, Heikki Linnakangas wrote:

 postmaster never sends SIGTERM to pgarch, and postmaster is still alive.

Then we have a regression, since we changed the code to make sure the
archiver did shut down even if there was a backlog. The reason is that if
there is a long backlog at the time we restart, we may cause a long
outage while we wait for the archiver to shut down before the postmaster
restarts.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Alvaro Herrera
Robert Haas escribió:

 To me they sound complex and inconvenient.  I guess I'm kind of
 mystified by why we can't make this work reliably.  Other than the
 broken tags issue we've discussed, it seems like the only real issue
 should be how to group changes to different files into a single
 commit.

There's another issue which is that of the $Id$ and similar tags.  We
have to decide what we want to do with them.  If we're not going to have
them in the Git repository, then they are only causing trouble right now
and it would be better to get rid of them completely for the conversion,
to avoid the noise that they will invariably cause.

We could, for example, say that a conversion process is supposed to
un-expand them (say sed -e 's/\$Revision:[^$]*\$/$Revision$/' and so on;
obviously it's a lot more complex for $Log$) *before* attempting to
analyze any revision.  I think that would make further munging a lot
simpler.
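
For the simple tags, the un-expansion could be scripted over a copy of
the repository.  A minimal sketch, assuming GNU sed's -i and a
hypothetical copy of the ,v files under ./cvsroot ($Log$, as noted,
needs real parsing):

    find ./cvsroot -name '*,v' -print0 | xargs -0 sed -i \
        -e 's/\$PostgreSQL:[^$]*\$/$PostgreSQL$/g' \
        -e 's/\$Id:[^$]*\$/$Id$/g' \
        -e 's/\$Revision:[^$]*\$/$Revision$/g'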

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Tom Lane
Greg Smith gsm...@gregsmith.com writes:
 The best way to control the scope creep here is to avoid doing that, and 
 instead focus on what you really need from the repo conversion.  [...]
 If the goalposts are moved to "every ancient tag/release must build 
 perfectly and have a sane history, no matter how nasty its CVS history 
 was", the history conversion is doomed.

Right.  Shall we try to spec out exactly what our conversion
requirements are?  Here's a shot:

* Head of each active branch must check out the same as it does from CVS
(modulo $PostgreSQL$ and similar tags, which we've already agreed we can
abandon).

* Each released minor version tag must check out the same as from CVS,
at least back to some specified point (perhaps 7.4.0).  I'd really
prefer to insist on that all the way back.

* Each commit message in the CVS history must be retrievable from the
git history, and should correspond to the same file changes.  However,
we are okay with git sometimes treating one CVS commit as two or more
events with similar messages.  (I'm basing this on the behavior of
cvs2cl, which sometimes does that depending on how time-extended the
individual file updates were.)  Also, we won't be too picky about
whether the same commits on different branches are treated as one
event or multiple events.

Comments?  Other considerations?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 There's another issue which is that of the $Id$ and similar tags.  We
 have to decide what we want to do with them.  If we're not going to have
 them in the Git repository, then they are only causing trouble right now
 and it would be better to get rid of them completely for the conversion,
 to avoid the noise that they will invariably cause.

What was in the back of my mind was that we'd go around and mass-remove
$PostgreSQL$ (and any other lurking tags), but only from HEAD and only
after the repo conversion.  Although just before it would be okay too.
The stickier part of this is what to do about back branches;
particularly whether we are okay with checked-out versions of past
releases not matching the actual shipped tarballs on this point.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] User-facing aspects of serializable transactions

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 12:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 What's hard about that?  INSERTs are the hard case, because the rows
 you care about don't exist yet.  SELECT, UPDATE, and DELETE are easy
 by comparison; you can lock the actual rows at issue.  Unless I'm
 confused?

 UPDATE isn't really any easier than INSERT: the update might cause
 the row to satisfy someone else's search condition that it didn't
 previously satisfy.

Good point.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Right.  Shall we try to spec out exactly what our conversion
 requirements are?  Here's a shot:
[...]
 Comments?  Other considerations?

Certainly sounds reasonable to me.  I'd be really surprised if that's
really all that hard to accomplish.  I'd be happy to help with some
testing too, if we feel that the current git repo is in reasonable shape
to do that testing against (or someone has another).

+1

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Heikki Linnakangas

Simon Riggs wrote:

On Thu, 2009-05-28 at 18:02 +0300, Heikki Linnakangas wrote:


postmaster never sends SIGTERM to pgarch, and postmaster is still alive.


Then we have a regression, since we changed the code to make sure the
archiver did shutdown even if there was a backlog.


The commit message of the commit that introduced the check for SIGTERM says:


Also, modify the archiver process to notice SIGTERM and refuse to issue any
more archive commands if it gets it.  The postmaster doesn't ever send it
SIGTERM; we assume that any such signal came from init and is a notice of
impending whole-system shutdown.  In this situation it seems imprudent to
try to start new archive commands --- if they aren't extremely quick they're
likely to get SIGKILL'd by init.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 12:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Alvaro Herrera alvhe...@commandprompt.com writes:
 There's another issue which is that of the $Id$ and similar tags.  We
 have to decide what we want to do with them.  If we're not going to have
 them in the Git repository, then they are only causing trouble right now
 and it would be better to get rid of them completely for the conversion,
 to avoid the noise that they will invariably cause.

 What was in the back of my mind was that we'd go around and mass-remove
 $PostgreSQL$ (and any other lurking tags), but only from HEAD and only
 after the repo conversion.  Although just before it would be okay too.
 The stickier part of this is what to do about back branches;
 particularly whether we are okay with checked-out versions of past
 releases not matching the actual shipped tarballs on this point.

Mass-deleting these tags from HEAD and the current head of each
back-branch seems like a good place to start.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Alvaro Herrera
Tom Lane escribió:
 Alvaro Herrera alvhe...@commandprompt.com writes:
  There's another issue which is that of the $Id$ and similar tags.  We
  have to decide what we want to do with them.  If we're not going to have
  them in the Git repository, then they are only causing trouble right now
  and it would be better to get rid of them completely for the conversion,
  to avoid the noise that they will invariably cause.
 
 What was in the back of my mind was that we'd go around and mass-remove
 $PostgreSQL$ (and any other lurking tags), but only from HEAD and only
 after the repo conversion.  Although just before it would be okay too.
 The stickier part of this is what to do about back branches;
 particularly whether we are okay with checked-out versions of past
 releases not matching the actual shipped tarballs on this point.

You mean we would remove them from CVS?  I don't think that's
necessarily a good idea; it'd mean massive changes for no good reason.  My
idea was to remove them from the repository that would be used for the
conversion (I think that means editing the ,v files), and not put that
change back into the real CVS repo.  Then the conversion to Git gets a
lot simpler, and checking this modified repo against copies checked out
from Git would be simpler too.

Since this change is supposed to be scriptable, the script should be
available so potential testers of the conversion can get a converted
repository too.  (Or maybe we should just provide access to the modified
copy of the repo).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] proposal: early casting in plpgsql

2009-05-28 Thread Pavel Stehule
Hello

Current plpgsql cannot detect some errors early, because the casts
involved are not known in advance. Another problem is I/O casting.

The reason is late casting.

The current code is something like:

val = eval_expr(query, result_type);
if (result_type != expected_type)
{
   /* fall back to conversion through the type's text representation */
   str = convert_to_string(val, result_type);
   val = convert_from_string(str, expected_type);
}

I propose early casting for types with typmod -1, i.e. casting to the
target type at the planner level. We cannot use this method for a defined
typmod, because then we would raise an exception in the following
situation:

varchar(3) := 'ABCDE'; -- the cast quietly does the necessary truncation

This should be done everywhere we know the target type.

What does this need?

* a new SPI function SPI_prepare_function_with_target_types, which calls
the coerce_to_target_type function
* a new field in PLpgSQL_expr - Oid *target_type

Benefits:
* makes possible a strict mode that uses only predefined cast functions
(without general I/O conversion)
* some minor speedup
* fixes some strange issues:
http://archives.postgresql.org/pgsql-hackers/2008-12/msg01932.php
* behavior consistent with SQL

postgres=# create function fot(i numeric) returns date as $$begin
return i;end; $$ language plpgsql;
CREATE FUNCTION
Time: 2,346 ms
postgres=# select extract (year from fot(20081010));
CONTEXT:  PL/pgSQL function fot line 1 at RETURN
 date_part
-----------
      2008
(1 row)

which is nonsense, since in SQL:

postgres=# select extract(year from 20081010::numeric::date);
ERROR:  cannot cast type numeric to date
LINE 1: select extract(year from 20081010::numeric::date);
                                 ^
Issues:
* the current casting functions don't raise an exception when we lose
some detail :(

postgres=# select 'abc'::varchar(2), 10.22::numeric(10,1), 10.22::integer;
 varchar | numeric | int4
---------+---------+------
 ab      |    10.2 |   10
(1 row)


* the current integer input functions are too simple:

ERROR:  invalid input syntax for integer: "10.00"
LINE 1: select int '10.00';
                   ^
Possible enhancement:
when the target variable has an atttypmod, we could add I/O casting to
the plan via some new functions - this should simplify the plpgsql code,
since the casting there could be removed

Ideas, comments?

regards
Pavel Stehule

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 Tom Lane escribió:
 What was in the back of my mind was that we'd go around and mass-remove
 $PostgreSQL$ (and any other lurking tags), but only from HEAD and only
 after the repo conversion.  Although just before it would be okay too.

 You mean we would remove them from CVS?  I don't think that's
 necessarily a good idea; it'd be massive changes for no good reason.

Uh, how is it different from any other mass edit, such as our annual
copyright-year updates, or pgindent runs?

 My idea was to remove them from the repository that would be used for the
 conversion (I think that means editing the ,v files),

Ick ... I'm willing to tolerate a few small manual ,v edits if we have
to do it to make tags consistent or something like that.  I don't think
we should be doing massive edits of that kind.

But anyway, that's not the interesting point.  The interesting point is
what about the historical aspect of it, not whether we want to dispense
with the tags going forward.  Should our repo conversion try to
represent the historical states of the files including the tag strings?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: early casting in plpgsql

2009-05-28 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 I propose early casting for types with typmod -1, i.e. casting to the
 target type at the planner level. We cannot use this method for a defined
 typmod, because then we would raise an exception in the following situation:

What existing coding habits will this break?  People have long been
accustomed to using plpgsql for end-runs around SQL casting behavior,
so I'm not really convinced that "make it more like SQL" is
automatically a good thing.

Also, it seems bizarre and inconsistent that it would work one way
for variables with a typmod and an entirely different way for those
without.  How will you explain that to users who never heard of a
typmod?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Greg Stark
On Thu, May 28, 2009 at 5:30 PM, David E. Wheeler da...@kineticode.com wrote:
 Yes, just as long as your extensions schema doesn't turn into a bricolage of
 stuff. I mean, if I use a lot of extensions, it means that I end up with a
 giant collection of functions and types and whatnot in this one namespace.
 PHP programmers might be happy with it, but not I. ;-P

I don't understand what storing them in different namespaces and then
putting them all in your search_path accomplishes. You end up with the
same mishmash of things in your namespace.

The only way that mode of operation makes any sense to me is if you
explicitly prefix every invocation.  I.e., you want the stuff installed
but not available in your namespace at all unless you explicitly
request it.

Actually, there is another reason separate schemas do make some sense
to me: private objects that the extension uses internally but doesn't
intend to make part of its public interface. It might be nice if
extensions could mark objects with a token like _private and have them
created in a private schema, separate from other extensions and not in
the default search path.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Andrew Dunstan



Dimitri Fontaine wrote:

  we all agree that a specific pg_extension schema is a good idea, as
  long as the user is free not to use it at extension install time.

I don't think we all agree on that at all. ;-)

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Josh Berkus

On 5/28/09 12:36 AM, Dimitri Fontaine wrote:

That really seems exactly to be what we're proposing with pre_ and post_
search_path components: don't change current meaning of search_path,
just give DBAs better ways to manage it. And now that you're leaning
towards a search_path suffix, don't you want a prefix too?


Yeah, I thought about a prefix, but I couldn't come up with a way it 
would be useful, and I could come up with a lot of scenarios where it 
would be a big foot-gun.


--
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Tom Lane
Greg Stark st...@enterprisedb.com writes:
 I don't understand what storing them in different namespaces and then
 putting them all in your search_path accomplishes. You end up with the
 same mishmash of things in your namespace.

+1 ... naming conflicts between different extensions are going to be a
problem for people no matter what.  Sticking them in different schemas
doesn't really fix anything, it just means that you'll hit the problems
later instead of sooner.

I suppose there might be some use-case involving concurrent installation
of multiple versions of the *same* extension, but I'm not sure we should
be designing around that as a key case.

 Actually there is another reason separate schemas does make some sense
 to me. Private objects that the extension will use internally but
 doesn't intend to make part of its public interface. It might be nice
 if extensions could mark objects with a token like _private and have
 that be created in a private schema separate from other extensions and
 not in the default search path.

Well, an extension can certainly do that today, so why would it be a
factor in this discussion?  It's just an extra schema.  And I guess the
extension author has to explicitly qualify all those names, but if he
doesn't want those names in the search path I don't see much choice.
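
A minimal sketch of that, with hypothetical names:

    -- internals live in a separate schema, never meant for search_path
    CREATE SCHEMA myext_internal;
    CREATE FUNCTION myext_internal.helper(int) RETURNS int
        LANGUAGE sql AS $$ SELECT $1 * 2 $$;
    -- the public interface schema-qualifies every private name
    CREATE FUNCTION myext_double(int) RETURNS int
        LANGUAGE sql AS $$ SELECT myext_internal.helper($1) $$;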

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 5/28/09 12:36 AM, Dimitri Fontaine wrote:
 That really seems exactly to be what we're proposing with pre_ and post_
 search_path components: don't change current meaning of search_path,
 just give DBAs better ways to manage it. And now that you're leaning
 towards a search_path suffix, don't you want a prefix too?

 Yeah, I thought about a prefix, but I couldn't come up with a way it 
 would be useful, and I could come up with a lot of scenarios where it 
 would be a big foot-gun.

Also, a search path prefix is going to create curious interactions with
the default creation schema.  A suffix seems much less dangerous in that
respect.
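
To spell out that interaction (a minimal sketch, with hypothetical
schema names): unqualified CREATE targets the first schema in the path,
so a prefix silently redirects object creation.

    CREATE SCHEMA ext_prefix;
    SET search_path = ext_prefix, public;  -- a prefix ahead of the normal path
    CREATE TABLE t (x int);                -- quietly created in ext_prefix
    SELECT n.nspname FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relname = 't';                -- reports ext_prefix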

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sun blade 1000 donation

2009-05-28 Thread Josh Berkus

Andy,


Yeah, when it shipped I think it was about 75 pounds. It is a tower,
yes, and an impressively large box (my experience with servers is
limited, this is the first I've ever gotten to play with, so it may not
be out of the ordinary). I think my kill-a-watt said, at idle, it was
near 300W. (Though it's been a while, I may not be remembering that
correctly, and I don't recall looking at it under load)


Ok, that's not as bad as the spec sheet online looked.  The machine is 
still too slow/old for benchmarking though, and we couldn't rack it (our 
donated rack space is limited).  Does someone have a home for this 
machine?  And would we use it for buildfarm, or for something else?


--
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: early casting in plpgsql

2009-05-28 Thread Pavel Stehule
2009/5/28 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 I propose early casting for types with typmod -1, i.e. casting to the
 target type at the planner level. We cannot use this method for a defined
 typmod, because then we would raise an exception in the following situation:

 What existing coding habits will this break?

I don't know of any. We don't actually have a variant datatype, so
this should not impact existing applications.

 People have long been
 accustomed to use plpgsql for end-runs around SQL casting behavior,
 so I'm not really convinced by the idea that make it more like SQL
 is automatically a good thing.


For typmods other than -1 we should use the I/O cast - but we should
check whether it's one of the known casts.

Without strict mode this should be fully compatible (assuming our
casting functions are correct).

 Also, it seems bizarre and inconsistent that it would work one way
 for variables with a typmod and an entirely different way for those
 without.  How will you explain that to users who never heard of a
 typmod?


Now I think this can be solved well too. We need the two kinds of
casting functions we already have - CASTs with INOUT and CASTs with
functions. For variables with a typmod we have to call the CASTs with INOUT.

                        regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: early casting in plpgsql

2009-05-28 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 For typmods other than -1 we should use the I/O cast - but we should
 check whether it's one of the known casts.

I still think it's fundamentally wrong to be treating typmod -1 so
differently from other typmods.  If this behavior is sane at all then
it should work in both cases.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] plperl error format vs plpgsql error format vs pgTAP

2009-05-28 Thread Kevin Field
I use pgTAP to make sure my functions produce the correct errors using
throws_ok().  So when I get an error from a plpgsql function, it looks
like this:

ERROR:  upper bound of FOR loop cannot be null
CONTEXT:  PL/pgSQL function foo line 35 at FOR with integer loop
variable

...which I can then test using throws_ok by giving it the string
'upper bound of FOR loop cannot be null'.  However, in a plperl
function, errors come out in this format:

error from Perl function check_no_loop: Loops not allowed!  Node 1
cannot be part of node 3 at line 13.

Unfortunately, I can't test for this without including the line
number, which means that changing any plperl function covered by such a
test pretty much guarantees I'll have to update the test to match the
new line numbers the errors are thrown from.

Is it possible to unify the error reporting format, so pgTAP can test
them without needing line numbers from plperl functions?
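
For reference, the plpgsql case tests cleanly today with something like
this (a minimal sketch; the function name is hypothetical, the message
is the one shown above):

    SELECT throws_ok(
        $$ SELECT foo(NULL) $$,
        'upper bound of FOR loop cannot be null'
    );

The plperl equivalent would have to bake "at line 13" into the expected
string, which is exactly the problem.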

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] search_path vs extensions

2009-05-28 Thread Robert Haas
On Thu, May 28, 2009 at 2:27 PM, Greg Stark st...@enterprisedb.com wrote:
 On Thu, May 28, 2009 at 5:30 PM, David E. Wheeler da...@kineticode.com 
 wrote:
 Yes, just as long as your extensions schema doesn't turn into a bricolage of
 stuff. I mean, if I use a lot of extensions, it means that I end up with a
 giant collection of functions and types and whatnot in this one namespace.
 PHP programmers might be happy with it, but not I. ;-P

 I don't understand what storing them in different namespaces and then
 putting them all in your search_path accomplishes. You end up with the
 same mishmash of things in your namespace.

+1!

That's the thing that's really mystifying me about this whole
conversation.  It seems this compounds the work of managing extensions
by giving every extension an extra post-installation step where we
update everyone's search path (and that step can't be automated, because
there's no way for the extension installation process to update all of
the places search_paths might be stored, even if it could tell which
ones ought to be updated).  Having a global search_path_suffix will help
with this a little bit, but I think there are corner cases (such as the
ones I mentioned upthread) where that's not really going to be enough
either.  It feels like a Java CLASSPATH, or installing every application
into /usr/local/application-name so that your path has 50 bin
directories in it.

It also seems to me that we're getting seriously sidetracked from the
dependency-tracking part of this project which seems to me to be a
much deeper and more fundamental issue.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Developer meeting minutes up

2009-05-28 Thread Greg Smith

On Thu, 28 May 2009, Tom Lane wrote:

Each released minor version tag must check out the same as from CVS, at 
least back to some specified point (perhaps 7.4.0).  I'd really prefer 
to insist on that all the way back.


We'd all like to hope that a conversion process that works for everything 
back to 7.4.0 would also give useful results for all the older ones, too. 
And it's worth testing as far back as possible.  I just think it's 
unrealistic to set the bar too high on the off chance that one of these 
old releases has something that's harder to fix than producing that 
version is worth.  That might be the case for some of the 7.1 stuff 
mentioned upthread, for example.  If there are only a few stragglers that 
won't play nice, it might be easier to just publish a git errata list of 
those releases and move on.


In related news, I wanted to make it a bit easier to track follow-up on 
the whole Action Item list from the meeting.  I converted those items to 
the standard format we were already using on the ToDo list, which provides 
a way to check off items that are done.  It may be worth breaking them out 
from the rest of the minutes, so that it's easier to extend them with 
things like these fleshed-out git requirements.  Example: 
http://wiki.postgresql.org/wiki/PgCon_2009_Developer_Meeting#Source_Code_Management


Thoughts?

--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

