Re: [HACKERS] Assertion failure in walreceiver

2010-02-25 Thread Fujii Masao
On Thu, Feb 25, 2010 at 4:31 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 I have one question. Do we support starting an archive recovery and
 standby server from a cold backup (not a base backup taken by online
 backup)? Though I think they would work and be very useful, I'm not
 sure they are safe.

 I don't see why not. We support that in PITR, streaming replication is
 just another way of getting the logs to the server.

I thought that because, AFAIK, there is no documentation about starting
PITR from a cold backup. But I'd be happy if it's supported. After a
failover happens, the previous primary server might be able to become a
standby from its old data without taking a new backup from the new primary.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] testing cvs HEAD - HS/SR - xlog timeline 0 pg_xlogfile_name_offset

2010-02-25 Thread Heikki Linnakangas
Fujii Masao wrote:
 On Thu, Feb 25, 2010 at 9:31 AM, Erik Rijkers e...@xs4all.nl wrote:
 On Wed, February 24, 2010 20:40, Erik Rijkers wrote:

  pg_last_xlog_receive_location |  pg_xlogfile_name_offset
 ---+---
  E2/C012AD90   | (00E200C0,1224080)
  (1 row)

 These zero-timeline filenames look suspicious, no?
 Sorry, I've only now noticed that this issue is already on the v9.0 TODO for 
 Streaming Replication:

 pg_xlogfile_name(pg_last_xlog_receive/replay_location()) might report the
 wrong name, because a backend cannot know the actual timeline which is
 related to the location.
 

 http://archives.postgresql.org/message-id/3f0b79eb1001190135vd9f62f1sa7868abc1ea61...@mail.gmail.com

 so, nevermind..
 
 Yeah, since I thought that the current behavior that you reported
 would annoy many users, I added it to the TODO list.

Yeah, returning a filename with TLI 0 sure doesn't seem right.

A quick fix would be to just throw an error if you try to use
pg_xlogfile_name() during hot standby. But there seem to be good
reasons to call pg_xlogfile_name() during hot standby, given that both
of you ran into the same issue. What exactly were you trying to achieve
with it?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] Streaming rep - why log shipping is necessary?

2010-02-25 Thread marcin mank
Hello,
I was reading the SR docs, and have the following question:
Is there a fundamental reason why archive_command etc. is required in
streaming replication mode?

Can't setting up the standby be more like:
pg_start_streaming_backup() on the master (this will be queuing up
files in pg_xlog)
copy the data dir
set up the slave to connect to the master via streaming protocol
set up the master to allow connections from the slave
start slave (slave pulls the necessary WAL records from the master via
streaming, and signals the master that it's done backing up)
When the standby starts accepting connections, we know that the standby is OK.

archive_command, restore_command, etc. would be configured empty in this mode.

The failure mode for this is the pg_xlog directory filling up on the
master before the backup is done. But then, we can tell people to use
the more cumbersome, current setup.


Greetings
Marcin Mańk



Re: [HACKERS] Streaming rep - why log shipping is necessary?

2010-02-25 Thread Heikki Linnakangas
marcin mank wrote:
 I was reading the SR docs, and have the following question:
 Is there a fundamental reason why archive_command etc. is required in
 streaming replication mode?
 
 Can't setting up the standby be more like:
 pg_start_streaming_backup() on the master (this will be queuing up
 files in pg_xlog)
 copy the data dir
 set up the slave to connect to the master via streaming protocol
 set up the master to allow connections from the slave
 start slave (slave pulls the necessary WAL records from the master via
 streaming, and signals the master that it's done backing up)
 When the standby starts accepting connections, we know that the standby is OK.
 
 archive_command, restore_command, etc. would be configured empty in this mode.
 
 The failure mode for this is the pg_xlog directory filling up on the
 master before the backup is done. But then, we can tell people to use
 the more cumbersome, current setup.

The problem is first of all that there is no pg_start_streaming_backup()
command, but it's not only an issue during backup; the standby needs to
fall back to the archive if it falls so far behind that the WAL files it
needs have already been recycled in the master.

There was the idea of adding a 'replication_lag_segments' setting, so
that the master always keeps n megabytes of WAL available for the
standby servers. See
http://archives.postgresql.org/pgsql-hackers/2010-01/msg02073.php. Not
sure what happened to it, Fujii was working on it but I guess he got
busy with other things.

If you're adventurous enough, it's actually possible to set an
archive_command that checks the status of the standby and returns
failure as long as the standby still needs the given WAL segment. That
way the primary doesn't recycle segments that are still needed by the
standby, and you can get away without restore_command in the standby.
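As a rough sketch of that adventurous setup (the guard script and its name are hypothetical; nothing like it ships with PostgreSQL), the primary could be configured along these lines:

```
# postgresql.conf on the primary (sketch)
archive_mode = on
# guard_wal.sh is a hypothetical script: it exits non-zero as long as
# some standby still needs segment %f, so the primary keeps retrying
# and never recycles that segment; once no standby needs it, it exits 0.
archive_command = '/usr/local/bin/guard_wal.sh %f'
```

Because PostgreSQL retries a failing archive_command indefinitely, a non-zero exit is enough to pin the segment on the primary.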

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] testing cvs HEAD - HS/SR - xlog timeline 0 pg_xlogfile_name_offset

2010-02-25 Thread Fujii Masao
On Thu, Feb 25, 2010 at 5:10 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 A quick fix would be to just throw an error if you try to use
 pg_xlogfile_name() during hot standby. But there seem to be good
 reasons to call pg_xlogfile_name() during hot standby, given that both
 of you ran into the same issue. What exactly were you trying to achieve
 with it?

Nothing ;) Frankly, I just found that problem while testing the
combination of SR and the system administration functions. But on
second thought, calling pg_xlogfile_name() during HS seems useful
for removing old WAL files from an archive that is shared by
multiple standbys. In this case, '%r' in restore_command cannot
be used, so we would need to calculate the names of the WAL files
that are no longer required for the subsequent recovery by using
pg_xlogfile_name() and pg_controldata etc.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[HACKERS] Odd CVS revision number

2010-02-25 Thread Heikki Linnakangas
I just noticed that the revision numbering for the new
doc/src/sgml/recovery-config.sgml file I added started from 2 for some
reason. The first revision was 2.1, and when I just updated it the new
revision became 2.2.

It seems to work fine, but I've never seen CVS revision numbers like
that before. Anyone have a clue what might've caused that? Will that
cause confusion?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-25 Thread Magnus Hagander
On Wed, Feb 24, 2010 at 17:47, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 2010/2/24 Tom Lane t...@sss.pgh.pa.us:
 Also, the coding seems a bit confused about whether the
 ssl_renegotiation_limit GUC exists when USE_SSL isn't set.  I think we
 have a project policy about whether GUCs should still exist when the
 underlying support isn't compiled, but I forget what it is :-(.

 I personally find it highly annoying when a GUC goes away, so I'm all
 for always having them there. And I thought that was our policy for
 new ones, but I can't find a reference to it...

 I see that ssl_ciphers is made to go away when USE_SSL isn't set,
 so the most consistent thing in the near term would be to do the same.

The difference is that ssl_ciphers is only set in postgresql.conf, so
it doesn't have the same exposure. I can certainly see a use-case
where a naive application just unconditionally disables ssl renegotiation
because it knows it can't deal with it (or the driver can't),
but the use of SSL or not is controlled by the server at the other end
of the connection. Not failing then would be good.
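To illustrate the use-case (a sketch; it assumes the GUC is USERSET and visible even in sessions without SSL, which is exactly what is being discussed):

```
-- A driver or application that cannot handle renegotiation could
-- simply issue, per session:
SET ssl_renegotiation_limit = 0;  -- 0 disables renegotiation entirely
```

If the GUC disappears in non-SSL builds, that SET fails even though it would have been a harmless no-op.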

 Revisiting the whole issue seems like not material for back-patching.

Is this something we should consider looking over for 9.0, or is it too
late already? (For other parameters, that is - a check of all the ones
we have that are #ifdef'ed out today, to see if they can be made
available even when the support isn't compiled in.)

 SUSET seems less surprising to me.  I agree that it's hard to make
 a concrete case for a user doing anything terribly bad with it,
 but on the other hand is there much value in letting it be USERSET?

 The use case would be for example npgsql (or npgsql clients) being
 able to disable it from the client side, because they know they can't
 deal with it. Even in the case that the server doesn't know that.

 Fair enough, USERSET it is then.

Done. Will run some tests and then apply.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Streaming rep - why log shipping is necessary?

2010-02-25 Thread marcin mank
On Thu, Feb 25, 2010 at 10:08 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 the standby needs to
 fall back to the archive if it falls so far behind that the WAL files it
 needs have already been recycled in the master.

Oh, so the master does not have to keep track of the state of the
standbys. That's a nice design.

 If you're adventurous enough, it's actually possible to set an
 archive_command that checks the status of the standby and returns
 failure as long as the standby still needs the given WAL segment. That
 way the primary doesn't recycle segments that are still needed by the
 standby, and you can get away without restore_command in the standby.

That would be a nice addition to pg_standby, like
pg_standby --check-streaming-standby postgres:qwe...@10.0.0.1
--check-streaming-standby postgres:qwe...@10.0.0.2:5433

Greetings
Marcin Mańk



Re: [HACKERS] testing cvs HEAD - HS/SR - xlog timeline 0 pg_xlogfile_name_offset

2010-02-25 Thread Heikki Linnakangas
Fujii Masao wrote:
 On Thu, Feb 25, 2010 at 5:10 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 A quick fix would be to just throw an error if you try to use
 pg_xlogfile_name() during hot standby. But there seem to be good
 reasons to call pg_xlogfile_name() during hot standby, given that both
 of you ran into the same issue. What exactly were you trying to achieve
 with it?
 
 Nothing ;) Frankly, I just found that problem while testing the
 combination of SR and the system administration functions. But on
 second thought, calling pg_xlogfile_name() during HS seems useful
 for removing old WAL files from an archive that is shared by
 multiple standbys. In this case, '%r' in restore_command cannot
 be used, so we would need to calculate the names of the WAL files
 that are no longer required for the subsequent recovery by using
 pg_xlogfile_name() and pg_controldata etc.

Yeah. The current pg_*_last_location() functions don't cut it though,
you need to retain logs back to the redo location of the last
restartpoint. That's what %r returns. Maybe we should add another function?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] NaN/Inf fix for ECPG

2010-02-25 Thread Boszormenyi Zoltan
Rémi Zara wrote:
 On 24 Feb 2010, at 18:58, Boszormenyi Zoltan wrote:
   
 Here's the attached test code. Compile it with

 gcc -Wall -o nantest nantest.c -lm

 and run it. It tests NAN and INFINITY values with isinf() and isnan().
 The expected output is:

 ==
 $ ./nantest
 computed NAN
 1 0
 computed INFINITY
 0 1
 ==

 Instead of "computed", NetBSD/x86-64 prints "defined",
 but the test results are the same as under Linux/x86-64.

 

 Here it is :
   

First, thanks for running it.

 -bash-4.1$ gcc -Wall -o nantest nantest.c -lm
 -bash-4.1$ ./nantest 
 defined NAN
 0 1
   

So: isnan((double)NAN) == false, isinf((double)NAN) == true?
No wonder this causes a little problem.

 defined INFINITY
 0 1

 Ok. So, on NetBSD/mips (#if defined(__NetBSD__) && defined(__mips__)), isnan(NAN) is true,
 isnan((double)NAN) is false, and isnan((double)(0.0 / 0.0)) is true.

 Regards,
 Rémi Zara
   

NAN on NetBSD/x86-64 is defined as:

  extern const union __float_u __nanf;
  #define NAN  __nanf.__val

I would guess that it's similar on mips. Is it possible that
NetBSD/mips has a conversion bug?

What I don't get is that the code I used in ECPG and in this
test code is the same as in src/backend/utils/adt/float.c. E.g.:
float8in sees "NaN" -> value will be (double)NAN
float8out sees isnan(value) -> outputs the "NaN" string

Can someone shed some light on why the backend
doesn't get the problem as above? :-(

As Rémi says, isnan((double)(0.0 / 0.0)) == true for him.
Michael: IIRC, IEEE 754 is explicit that the (0.0/0.0) division
produces NaN. How about doing it explicitly in ECPG?

Rémi: please, run this code to confirm the above?

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/

#include <math.h>
#include <float.h>
#include <stdio.h>

static double
get_float8_nan(void)
{
	printf("computed NAN\n");
	return (double) (0.0 / 0.0);
}

static double
get_float8_infinity(void)
{
#ifdef INFINITY
	printf("defined INFINITY\n");
	return (double) INFINITY;
#else
	printf("computed INFINITY\n");
	return (double) (HUGE_VAL * HUGE_VAL);
#endif
}

int main(void) {
	double	d;
	d = get_float8_nan();
	printf("%d %d\n", isnan(d), isinf(d));
	d = get_float8_infinity();
	printf("%d %d\n", isnan(d), isinf(d));
	return 0;
}




Re: [HACKERS] NaN/Inf fix for ECPG

2010-02-25 Thread Rémi Zara

On 25 Feb 2010, at 11:26, Boszormenyi Zoltan wrote:
 
 NAN on NetBSD/x86-64 is defined as:
 
  extern const union __float_u __nanf;
  #define NAN  __nanf.__val
 

Same here:
math.h:extern const union __float_u __nanf;
math.h:#define  NAN __nanf.__val

 I would guess that it's similar on mips. Is is possible that
 NetBSD/mips has a conversion bug?
 
 What I don't get is that the code I used in ECPG and in this
 test code is the same as in src/backend/utils/adt/float.c. E.g.:
 float8in sees "NaN" -> value will be (double)NAN
 float8out sees isnan(value) -> outputs the "NaN" string
 
 Can someone shed some light on why the backend
 doesn't get the problem as above? :-(
 
 As Rémi says, isnan((double)(0.0 / 0.0)) == true for him.
 Michael: IIRC, IEEE 754 is explicit that the (0.0/0.0) division
 produces NaN. How about doing it explicitly in ECPG?
 
 Rémi: please, run this code to confirm the above?
 

bash-4.1$ gcc -Wall -o nantest1 nantest1.c -lm
bash-4.1$ ./nantest1 
computed NAN
1 0
defined INFINITY
0 1

Regards,

Rémi Zara


Re: [HACKERS] testing cvs HEAD - HS/SR - xlog timeline 0 pg_xlogfile_name_offset

2010-02-25 Thread Fujii Masao
On Thu, Feb 25, 2010 at 7:22 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Yeah. The current pg_*_last_location() functions don't cut it though,
 you need to retain logs back to the redo location of the last
 restartpoint. That's what %r returns. Maybe we should add another function?

+1

It would be useful if we could get that location via SQL rather
than from pg_controldata. Which should that function return, the filename
or the location? If we'll prevent pg_xlogfile_name() from being called
during recovery, per your suggestion, it should return the
filename.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
 Yes. When a bit is cleared, that's OK, because a cleared bit just means
 you need to check visibility in the heap tuple. When a bit is set,
 however, it's important that it doesn't hit the disk before the
 corresponding heap page update. That's why visibilitymap_set() sets the
 LSN on the page.

 OK. Say a session doing the update, which is the first update on the page,
resets PD_ALL_VISIBLE and, just before updating the visibility map,
crashes. The subsequent inserts/updates/deletes will see the PD_ALL_VISIBLE
flag cleared and never care to update the visibility map, but actually it
would have created tuples in the index and table. So won't this return wrong
results?

Again, it is not clear from your documentation how you have handled this
situation.

Thanks,
Gokul.


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
 The replay of the heap insert/update/delete record updates the
 visibility map.

 So you are planning to put that section, which writes the xlog and updates
the visibility map, inside a PANIC (critical) section, right?


Re: [HACKERS] Streaming replication and pg_xlogfile_name()

2010-02-25 Thread Simon Riggs
On Thu, 2010-02-25 at 12:02 +0900, Fujii Masao wrote:
 On Wed, Feb 24, 2010 at 7:56 AM, Simon Riggs si...@2ndquadrant.com wrote:
  On Thu, 2010-01-28 at 10:28 +0200, Heikki Linnakangas wrote:
  Fujii Masao wrote:
   In relation to the functions added recently, I found an annoying problem;
   pg_xlogfile_name(pg_last_xlog_receive/replay_location()) might report the
   wrong name because pg_xlogfile_name() always uses the current timeline,
   and a backend doesn't know the actual timeline related to the location
   which pg_last_xlog_receive/replay_location() reports. Even if a backend
   knows that, pg_xlogfile_name() would be unable to determine which 
   timeline
   should be used.
 
  Hmm, I'm not sure what the use case for this is
 
  Agreed. What is the use case for this?
 
 Since the current behavior would annoy many users (e.g., [*1]),
 I proposed to change it.
 
 [*1]
 http://archives.postgresql.org/pgsql-hackers/2010-02/msg02014.php

OK, go for it.

If we expose the timeline as part of an xlog location, then we should
do that everywhere as a change for 9.0. Clearly, xlog location has no
meaning without the timeline anyway, so this seems like a necessary
change not just a quick fix. It breaks compatibility, but since we're
changing replication in 9.0 that shouldn't be a problem.

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
 The replay of the heap insert/update/delete record updates the
 visibility map.


Say a checkpoint has occurred in between and flushed the dirty pages to
disk, while the updater waits to update the visibility map. Now there will
be no replay for the insert/update/delete, right?


Re: [HACKERS] NaN/Inf fix for ECPG

2010-02-25 Thread Boszormenyi Zoltan
I wrote:
 As Rémi says, isnan((double)(0.0 / 0.0)) == true for him.
 Michael: IIRC, IEEE754 explicit about that the (0.0/0.0) division
 produces NaN. How about doing it explicitely in ECPG?
   

I came up with three patches; they are attached.

Can you try whether the first patch (the missing float.h in data.c)
solves the problem? And together with the 2nd one? In that
patch I fixed the order of float.h and math.h in nan_test.pgc,
which was the opposite of the order found in e.g. backend/utils/adt/float.c.

The 3rd patch is explicit about NetBSD/mips, but it doesn't feel right.

They are working on Linux/x86-64 and NetBSD/x86-64. Can you try
the combinations below on pika, outside the buildfarm, to see whether they
still fail the ECPG make check?
- patch 1 by itself (12a)
- patch 1+2 (12a + 12-regr)
- patch 3 with/without 1+2

Sorry to give you work, but we don't have a mips machine.

Thanks in advance,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/

*** pgsql/src/interfaces/ecpg/ecpglib/data.c~	2010-02-25 13:11:56.0 +0100
--- pgsql/src/interfaces/ecpg/ecpglib/data.c	2010-02-25 13:11:56.0 +0100
***
*** 5,10 
--- 5,11 
  
  #include <stdlib.h>
  #include <string.h>
+ #include <float.h>
  #include <math.h>
  
  #include "ecpgtype.h"
diff -dcrpN pgsql.orig/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c pgsql/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c
*** pgsql.orig/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c	2010-02-16 19:56:08.0 +0100
--- pgsql/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c	2010-02-25 13:16:17.0 +0100
***
*** 9,16 
  #line 1 "nan_test.pgc"
  #include <stdio.h>
  #include <stdlib.h>
- #include <math.h>
  #include <float.h>
  #include "pgtypes_numeric.h"
  #include "decimal.h"
  
--- 9,16 
  #line 1 "nan_test.pgc"
  #include <stdio.h>
  #include <stdlib.h>
  #include <float.h>
+ #include <math.h>
  #include "pgtypes_numeric.h"
  #include "decimal.h"
  
diff -dcrpN pgsql.orig/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc pgsql/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc
*** pgsql.orig/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc	2010-02-16 19:56:09.0 +0100
--- pgsql/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc	2010-02-25 13:15:07.0 +0100
***
*** 1,7 
  #include stdio.h
  #include stdlib.h
- #include math.h
  #include float.h
  #include pgtypes_numeric.h
  #include decimal.h
  
--- 1,7 
  #include <stdio.h>
  #include <stdlib.h>
  #include <float.h>
+ #include <math.h>
  #include "pgtypes_numeric.h"
  #include "decimal.h"
  
*** pgsql.orig/src/interfaces/ecpg/ecpglib/data.c	2010-02-04 11:10:03.0 +0100
--- pgsql/src/interfaces/ecpg/ecpglib/data.c	2010-02-25 12:57:49.0 +0100
***
*** 85,94 
  static double
  get_float8_nan(void)
  {
! #ifdef NAN
! 	return (double) NAN;  
! #else
  	return (double) (0.0 / 0.0);
  #endif
  }
  
--- 85,94 
  static double
  get_float8_nan(void)
  {
! #if !defined(NAN) || (defined(__NetBSD__) && defined(__mips__))
  	return (double) (0.0 / 0.0);
+ #else
+ 	return (double) NAN;
  #endif
  }
  



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Heikki Linnakangas
Gokulakannan Somasundaram wrote:
 OK. Say a session doing the update, which is the first update on the page,
 resets PD_ALL_VISIBLE and, just before updating the visibility map,
 crashes. The subsequent inserts/updates/deletes will see the PD_ALL_VISIBLE
 flag cleared and never care to update the visibility map, but actually it
 would have created tuples in the index and table.

The replay of the heap insert/update/delete record updates the
visibility map.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Heikki Linnakangas
Gokulakannan Somasundaram wrote:
 The replay of the heap insert/update/delete record updates the
 visibility map.

 So you are planning to put that section, which writes the xlog and updates
 the visibility map, inside a PANIC (critical) section, right?

The xlog record is already written in a critical section. Yeah, perhaps
the critical section needs to be extended to cover the visibility map
updates. The indexes haven't been changed at that point yet, so an
index-only scan still produces the right result, but a subsequent update
would fail to update the visibility map because the flag on the heap
page was already cleared.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Heikki Linnakangas
Gokulakannan Somasundaram wrote:
 Say a checkpoint has occurred in between and flushed the dirty pages to
 disk, while the updater waits to update the visibility map. Now there will
 be no replay for the insert/update/delete, right?

Yeah, good catch, that could happen.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-25 Thread Magnus Hagander
On Thu, Feb 25, 2010 at 10:42, Magnus Hagander mag...@hagander.net wrote:
 On Wed, Feb 24, 2010 at 17:47, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 Fair enough, USERSET it is then.

 Done. Will run some tests and then apply.

And applied.

I backpatched it to 8.2, which is as far as it applied fairly cleanly.
Before that, we don't have GUC_UNIT_KB for example, so it'll be a
different format of the patch as well. I'm not sure it's important
enough to go back beyond that...


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Assertion failure in walreceiver

2010-02-25 Thread Greg Stark
On Thu, Feb 25, 2010 at 7:31 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 Committed removal of that and the assertion. You still can't use a copy
 of the data directory taken right after initdb, because the initial
 checkpoint record has the flag set indicating that archiving is not
 enabled. While we're at it, the error message doesn't seem right:

 FATAL:  recovery connections cannot start because the
 recovery_connections parameter is disabled on the WAL source server

 recovery_connections is on by default, the real problem is that
 archive_command and max_wal_senders are disabled.

Perhaps we need to put these flags in a record on startup. If they're
not set on the checkpoint you start at, you check if the next record is
a shutdown record that starts up with the flags set.

I'm not sure that's exactly right, as I've never looked at the WAL
sequence on shutdown and startup. But it seems like a problem if you
want to start replication, find that archive_mode needs to be set, so
you restart your database with it set, but then still can't start up
the slave until a checkpoint happens on the master.


-- 
greg



Re: [HACKERS] NaN/Inf fix for ECPG

2010-02-25 Thread Rémi Zara

On 17 Feb 2010, at 12:18, Boszormenyi Zoltan wrote:
 
 
  Is this buildfarm member for detecting bugs in the already
  obsolete NetBSD 5.0 BETA, or what? The final 5.0 and
  two bugfix releases have been out for a while. The owner
  of that particular machine should upgrade.


I upgraded pika to NetBSD 5.0.2, and the problem is still there.
There are some tests (in core) which test for NaN and Infinity, and they pass.
So either those tests are insufficient, or the code does something different
there.
Anything you want me to try?

Regards,

Rémi Zara


[HACKERS] psql with GSS can crash

2010-02-25 Thread Zdenek Kotala

Hi all,

I got the following stack:

 fd7ffed14b70 strlen () + 40
 fd7ffed71665 snprintf () + e5
 fd7fff36d088 pg_GSS_startup () + 88
 fd7fff36d43a pg_fe_sendauth () + 15a
 fd7fff36e557 PQconnectPoll () + 3b7
 fd7fff36e152 connectDBComplete () + a2
 fd7fff36dc32 PQsetdbLogin () + 1b2
 0041e96d main () + 30d
 0041302c  ()

It seems that the connection is not fully configured and krbsrvname or
pghost is not filled in. The following code in fe-auth.c pg_GSS_startup()
causes a crash:


440 maxlen = NI_MAXHOST + strlen(conn->krbsrvname) + 2;
441 temp_gbuf.value = (char *) malloc(maxlen);
442 snprintf(temp_gbuf.value, maxlen, "%s@%s",
443          conn->krbsrvname, conn->pghost);
444 temp_gbuf.length = strlen(temp_gbuf.value);

And the following code in fe-connect.c fillPGconn() can fill in a NULL value:

571 tmp = conninfo_getval(connOptions, "krbsrvname");
572 conn->krbsrvname = tmp ? strdup(tmp) : NULL;

I think that pg_GSS_startup() should sanity-check the input.

Zdenek



Re: [HACKERS] psql with GSS can crash

2010-02-25 Thread Magnus Hagander
On Thu, Feb 25, 2010 at 15:04, Zdenek Kotala zdenek.kot...@sun.com wrote:
 Hi all,

 I got following stack:

  fd7ffed14b70 strlen () + 40
  fd7ffed71665 snprintf () + e5
  fd7fff36d088 pg_GSS_startup () + 88
  fd7fff36d43a pg_fe_sendauth () + 15a
  fd7fff36e557 PQconnectPoll () + 3b7
  fd7fff36e152 connectDBComplete () + a2
  fd7fff36dc32 PQsetdbLogin () + 1b2
  0041e96d main () + 30d
  0041302c  ()

 It seems that connection is not fully configured and krbsrvname or pghost is
 not filled. Following code in fe-auth.c pg_GSS_startup() causes a crash:

    440         maxlen = NI_MAXHOST + strlen(conn-krbsrvname) + 2;
    441         temp_gbuf.value = (char *) malloc(maxlen);
    442         snprintf(temp_gbuf.value, maxlen, %...@%s,
    443                          conn-krbsrvname, conn-pghost);
    444         temp_gbuf.length = strlen(temp_gbuf.value);

 And following code in fe-connect.c fillPGconn() fill NULL value.

    571         tmp = conninfo_getval(connOptions, "krbsrvname");
    572         conn->krbsrvname = tmp ? strdup(tmp) : NULL;

 I think that pg_GSS_startup should sanity-check the input.

How did you get NULL in there? :-)
There's a default set for that one that's PG_KRB_SRVNAM, so it really
should never come out as NULL, I think...

As for pghost, that certainly seems to be a bug. We check that one in
krb5 and SSPI, but for some reason we seem to be missing it in GSSAPI.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-25 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Wed, Feb 24, 2010 at 17:47, Tom Lane t...@sss.pgh.pa.us wrote:
 I see that ssl_ciphers is made to go away when USE_SSL isn't set,
 so the most consistent thing in the near term would be to do the same.

 The difference is that ssl_ciphers is only set in postgresql.conf, so
 it doesn't have the same exposure. I can certainly see a use-case
 where a naive application will just disable ssl renegotiation because
 it knows it can't deal with it (or the driver can't) unconditionally -
 but the use of SSL or not is controlled by the server at the other end
 of the connection. Not failing then would be good..

Hm, well, surely the client ought to know if the connection is actually
SSL or not.  But it's not important enough to argue about.

 Revisiting the whole issue seems like not material for back-patching.

 Is this something we should consider looking over for 9.0, or is it too
 late already? (For other parameters, that is - a check of all the ones
 we have that are #ifdef:ed out today, to see if they can be made
 available even when the support isn't compiled in)

I don't think it's appropriate to worry about it right now.  We have
bigger issues to deal with.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Odd CVS revision number

2010-02-25 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 I just noticed that the revision numbering for the new
 src/doc/sgml/recovery-config.sgml file I added started from 2 for some
 reason. The first revision was 2.1, and when I just updated it the new
 revision became 2.2.

 It seems to work fine, but I've never seen CVS revision numbers like
 that before. Anyone have a clue what might've caused that? Will that
 cause confusion?

No, CVS does that sometimes.  If you root around in the manual you can
find an explanation of how it chooses the initial revision number, but
I don't recall the triggering condition offhand.  We have several other
files that have 2.x version numbers for no particular reason except CVS
felt like assigning one.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-25 Thread Magnus Hagander
On Thu, Feb 25, 2010 at 15:27, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Wed, Feb 24, 2010 at 17:47, Tom Lane t...@sss.pgh.pa.us wrote:
 I see that ssl_ciphers is made to go away when USE_SSL isn't set,
 so the most consistent thing in the near term would be to do the same.

 The difference is that ssl_ciphers is only set in postgresql.conf, so
 it doesn't have the same exposure. I can certainly see a use-case
 where a naive application will just disable ssl renegotiation because
 it knows it can't deal with it (or the driver can't) unconditionally -
 but the use of SSL or not is controlled by the server at the other end
 of the connection. Not failing then would be good..

 Hm, well, surely the client ought to know if the connection is actually
 SSL or not.  But it's not important enough to argue about.

You'd think so, but our entire setup of SSL on the client is designed
on the assumption that it doesn't ;)


 Revisiting the whole issue seems like not material for back-patching.

 Is this something we should consider looking over for 9.0,or is it too
 late already? (For other parameters, that is - a check of all the ones
 we have that are #ifdef:ed out today, to see if they can be made
 available even when the support isn't compiled in)

 I don't think it's appropriate to worry about it right now.  We have
 bigger issues to deal with.

Agreed.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_stop_backup does not complete

2010-02-25 Thread Greg Stark
On Wed, Feb 24, 2010 at 11:14 PM, Josh Berkus j...@agliodbs.com wrote:

 Right.  I'm pointing out that production and trying out 9.0 for the
 first time are actually different circumstances, and we need to be able
 to handle both gracefully.  Since, if people have a bad experience
 trying it out for the first time, we'll never *get* to production.

Fwiw if it's not clear what's going on when you're trying out
something carefully for the first time, it's 10x worse if you're stuck
in a situation like this when you have people breathing down your neck
yelling about how they're losing money for every second you're down.

In an ideal world it would be best if pg_stop_backup could actually
print the error status of the archiving command. Is there any way for
it to get ahold of the fact that the archiving is failing?

And do we have closure on whether a fast shutdown is hanging? Or was
that actually a smart shutdown?

Perhaps smart shutdown needs to print out what it's waiting on
periodically as well, and suggest a fast shutdown to abort those
transactions.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-25 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 I backpatched it to 8.2, which is as far as it applied fairly cleanly.
 Before that, we don't have GUC_UNIT_KB for example, so it'll be a
 different format of the patch as well. I'm not sure it's important
 enough to go back beyond that...

Hm, I'd kinda like to have it in 8.1 at least, since I'm on the hook for
support of 8.1 in RHEL5.  If you don't feel like messing with a further
back-patch I can take a look.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Odd CVS revision number

2010-02-25 Thread Andrew Dunstan



Heikki Linnakangas wrote:

I just noticed that the revision numbering for the new
src/doc/sgml/recovery-config.sgml file I added started from 2 for some
reason. The first revision was 2.1, and when I just updated it the new
revision became 2.2.

It seems to work fine, but I've never seen CVS revision numbers like
that before. Anyone have a clue what might've caused that? Will that
cause confusion?

  


It should be fine. 
http://www.eyrie.org/~eagle/notes/cvs/revisions.html says:


    CVS, when assigning an initial version to a new file, doesn't
   always assign 1.1. Instead, it finds the highest numbered revision
   of any file in the same directory, takes the first digit, and
   assigns a revision of digit.1 to new files. In other words, if you
   have a file in the same directory that has a revision of 2.30, a new
   file in that directory will get a revision number of 2.1, not 1.1.


For some unknown reason, we have some version 2.x files in doc/src/sgml: 
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/, which is why 
you saw this.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] NaN/Inf fix for ECPG

2010-02-25 Thread Tom Lane
Boszormenyi Zoltan z...@cybertec.at writes:
 Can you try whether the first patch (missing float.h from data.c)
 solves the problem? And together with the 2nd one? In that
 patch I fixed the order of float.h and math.h in nan_test.pgc,
 which is the opposite of the order found in e.g. backend/utils/adt/float.c.

 The 3rd patch is explicit about NetBSD/mips but it doesn't feel right.

The third patch is surely wrong.  We don't need to do that in the
backend's instance of get_float8_nan, so ecpglib shouldn't need it
either.

I suspect that the ultimate cause of this is either one of the header
inclusion inconsistencies you found, or something associated with
not pulling in all the stuff that postgres.h does.  port/netbsd.h
is empty though, so it's not immediately clear what might be missing.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication and pg_xlogfile_name()

2010-02-25 Thread Fujii Masao
On Thu, Feb 25, 2010 at 6:33 PM, Simon Riggs si...@2ndquadrant.com wrote:
 If we expose the timeline as part of an xlog location, then we should
 do that everywhere as a change for 9.0.

Everywhere? You mean changing the format of the return value of all
the following functions?

- pg_start_backup()
- pg_stop_backup()
- pg_switch_xlog()
- pg_current_xlog_location()
- pg_current_xlog_insert_location()

 Clearly, xlog location has no
 meaning without the timeline anyway, so this seems like a necessary
 change not just a quick fix. It breaks compatibility, but since we're
 changing replication in 9.0 that shouldn't be a problem.

Umm... ISTM a large number of users would complain about that
change because of compatibility.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Odd CVS revision number

2010-02-25 Thread Alvaro Herrera
Andrew Dunstan wrote:

 For some unknown reason, we have some version 2.x files in
 doc/src/sgml:
 http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/doc/src/sgml/,
 which is why
 you saw this.

Most likely, somebody incremented the rev number by hand in the dawn of
time.  Your doc also says:

It's easy to tell CVS to increment the first digit of the
revision number, provided that you're setting it to something
equal to or greater than the highest first digit of all
revisions of all files in the same directory. Just use the -r
flag on commit. For example:

cvs commit -r 2 file

where file currently has a revision of 1.something, will
check in that file with a new revision of 2.1.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming replication and pg_xlogfile_name()

2010-02-25 Thread Fujii Masao
On Thu, Feb 25, 2010 at 11:57 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Tue, Feb 23, 2010 at 4:08 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 So it's not the TLI of the first record in the file, isn't it?

 Hmm, or is it the TLI of the last record? Not sure. Anyway, if there's a
 TLI switch in the current WAL file, curFileTLI doesn't always represent
 the TLI of the current record.

 Hmm. How about using lastPageTLI instead of curFileTLI? lastPageTLI
 would always represent the TLI of the current record.

I attached the revised patch which uses lastPageTLI instead of curFileTLI
as the timeline of the last applied record.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 13199,13204  postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
--- 13199,13208 
  This is usually the desired behavior for managing transaction log archiving
  behavior, since the preceding file is the last one that currently
  needs to be archived.
+ These functions also accept as a parameter the string that consists of timeline and
+ location, separated by a slash. In this case a transaction log file name is computed
+ by using the given timeline. On the other hand, if timeline is not supplied, the
+ current timeline is used for the computation.
 /para
  
 para
***
*** 13245,13257  postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
  literalfunctionpg_last_xlog_receive_location/function()/literal
  /entry
 entrytypetext/type/entry
!entryGet last transaction log location received and synced to disk during
! streaming recovery. If streaming recovery is still in progress
  this will increase monotonically. If streaming recovery has completed
  then this value will remain static at the value of the last WAL record
  received and synced to disk during that recovery. When the server has
  been started without a streaming recovery then the return value will be
! InvalidXLogRecPtr (0/0).
 /entry
/row
row
--- 13249,13263 
  literalfunctionpg_last_xlog_receive_location/function()/literal
  /entry
 entrytypetext/type/entry
!entryGet timeline and location of last transaction log received and synced
! to disk during streaming recovery. The return string is separated by a slash,
! the first value indicates the timeline and the other the location.
! If streaming recovery is still in progress
  this will increase monotonically. If streaming recovery has completed
  then this value will remain static at the value of the last WAL record
  received and synced to disk during that recovery. When the server has
  been started without a streaming recovery then the return value will be
! literal0/0/0/.
 /entry
/row
row
***
*** 13259,13270  postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
  literalfunctionpg_last_xlog_replay_location/function()/literal
  /entry
 entrytypetext/type/entry
!entryGet last transaction log location replayed during recovery.
  If recovery is still in progress this will increase monotonically.
  If recovery has completed then this value will remain static at
  the value of the last WAL record applied during that recovery.
  When the server has been started normally without a recovery
! then the return value will be InvalidXLogRecPtr (0/0).
 /entry
/row
   /tbody
--- 13265,13278 
  literalfunctionpg_last_xlog_replay_location/function()/literal
  /entry
 entrytypetext/type/entry
!entryGet timeline and location of last transaction log replayed during
! recovery. The return string is separated by a slash, the first value
! indicates the timeline and the other the location.
  If recovery is still in progress this will increase monotonically.
  If recovery has completed then this value will remain static at
  the value of the last WAL record applied during that recovery.
  When the server has been started normally without a recovery
! then the return value will be literal0/0/0/.
 /entry
/row
   /tbody
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
***
*** 392,397  typedef struct XLogCtlData
--- 392,399 
  	TimestampTz	recoveryLastXTime;
  	/* end+1 of the last record replayed */
  	XLogRecPtr	recoveryLastRecPtr;
+ 	/* tli of last record replayed */
+ 	TimeLineID	recoveryLastTLI;
  
  	slock_t		info_lck;		/* locks shared variables shown above */
  } XLogCtlData;
***
*** 

Re: [HACKERS] pg_stop_backup does not complete

2010-02-25 Thread Bruce Momjian
Heikki Linnakangas wrote:
 Josh Berkus wrote:
  OK, can you go through the reasons why pg_stop_backup would not
  complete?  
 
 pg_stop_backup() doesn't complete until all the WAL segments needed to
 restore from the backup are archived. If archive_command is failing,
 that never happens.

Yes, very old behavior allowed people to think they had a full backup
when the WAL files needed were not all archived, which was a bad thing. 
Thankfully no one reported catastrophic failure from the old behavior.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] plperl.on_init - bug or just me?

2010-02-25 Thread Richard Huxton
From memory and the thread below, I thought one of the key uses was to 
let me use a module from trusted plperl.

  http://archives.postgresql.org/pgsql-hackers/2010-02/msg00167.php

The example below has a TestModule that just exports one sub - visible 
from plerlu but not plperl. Presumably Safe just clamps down and my 
sub isn't marked as acceptable. Is this intended, or am I doing 
something stupid?


postgresql.conf:
plperl.on_init =
'use lib /home/richardh/dev/; use TestModule qw(add_one);'

-- tries to call TestModule::add_one
richardh=# SELECT add_one(1);
ERROR:  Undefined subroutine TestModule::add_one called at line 1.
CONTEXT:  PL/Perl function add_one

-- tries to call the exported main::add_one
richardh=# SELECT add_one_e(1);
ERROR:  Undefined subroutine main::add_one called at line 1.
CONTEXT:  PL/Perl function add_one_e

-- plperlu - TestModule::add_one
richardh=# SELECT add_one_u(1);
 add_one_u
---
 2
(1 row)


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plperl.on_init - bug or just me?

2010-02-25 Thread Andrew Dunstan



Richard Huxton wrote:
From memory and the thread below, I thought one of the key uses was to 
let me use a module from trusted plperl.

  http://archives.postgresql.org/pgsql-hackers/2010-02/msg00167.php

The example below has a TestModule that just exports one sub - visible 
from plperlu but not plperl. Presumably Safe just clamps down and my 
sub isn't marked as acceptable. Is this intended, or am I doing 
something stupid?





It's intended (at least by me).

Also, please see the recent discussion about loading extra stuff into 
the Safe container. At the very least that has been shelved for now. 
We're going to proceed with deliberation in this area. I'm quite 
concerned to make sure that we don't provide an opportunity for people 
to undermine the behaviour of the trusted language.


cheers

andrew





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plperl.on_init - bug or just me?

2010-02-25 Thread Richard Huxton

On 25/02/10 17:10, Andrew Dunstan wrote:



Richard Huxton wrote:

Presumably Safe just clamps down and my
sub isn't marked as acceptable. Is this intended, or am I doing
something stupid?


It's intended (at least by me).

Also, please see the recent discussion about loading extra stuff into
the Safe container.


Ah - looks like I've missed a thread.

 At the very least that has been shelved for now.

We're going to proceed with deliberation in this area. I'm quite
concerned to make sure that we don't provide an opportunity for people
to undermine the behaviour of the trusted language.


Fair enough.

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_stop_backup does not complete

2010-02-25 Thread Greg Smith

Greg Stark wrote:

In an ideal world it would be best if pg_stop_backup could actually
print the error status of the archiving command. Is there any way for
it to get ahold of the fact that the archiving is failing?
  


This is in the area I mentioned I'd proposed a patch to improve not too 
long ago.  The archiver doesn't tell anyone anything about what it's 
doing right now, or even save its state information.  I made a proposal 
for making the bit it's currently working on (or just finished, or both) 
visible not too long ago:  
http://archives.postgresql.org/message-id/4b4fea18.5080...@2ndquadrant.com


The main content for that was tracking disk space, which wandered into a 
separate discussion, but it would be easy enough to use the information 
that it intends to export (what archive file is currently being 
processed?) and print that in the error message too.  Makes it easy 
enough for people to infer the command is failing if the same segment 
number shows up every time in that message.


I didn't finish that only because the CF kicked off and I switched out 
of new development to review.  Since this class of error keeps popping 
up, I could easily finish that patch off by next week and see if it 
helps here.  I thought it was a long overdue bit of monitoring to add to 
the database anyway, just never had the time to work on it before.



And do we have closure on whether a fast shutdown is hanging? Or was
that actually a smart shutdown?
  


When I tested this myself, a smart shutdown hung every time, while a 
fast one blew right through the problem--matching what's described in 
the manual.  Josh suggested at one point he might have seen a situation 
where fast shutdown wasn't sufficient to work around this and an 
immediate one was required.  Certainly possible that happened for an as 
yet unknown reason--I've seen plenty of situations where fast shutdown 
didn't work--but I haven't been able to replicate it.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Robert Haas
On Thu, Feb 18, 2010 at 11:51 PM, Andrew Dunstan and...@dunslane.net wrote:
 Bruce Momjian wrote:
 I have a TODO on fixing some of the typedef finding. But I can generate
 an up to date version of the list Bruce last used in a day or two, and then
 get this better whacked into shape for another run at the more traditional
 time.

 I am ready to run pgindent whenever requested.
 There is an updated typedefs list at
 http://www.pgbuildfarm.org/cgi-bin/typedefs.pl

Bruce, you want to go ahead and do this?  The sooner the better, AFAICS.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tie user processes to postmaster was:(Re: [HACKERS] scheduler in core)

2010-02-25 Thread Jaime Casanova
On Tue, Feb 23, 2010 at 11:08 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Steve Atkins wrote:

 Would having a higher level process manager be adequate - one
 that spawns the postmaster and a list of associated processes
 (queue manager, job scheduler, random user daemons that are
 used for database application maintenance). It sounds like
 something like that would be able to start up and shut down
 an entire family of daemons, of which the postmaster is the major
 one, gracefully.

 Sort of a super-pg_ctl, eh?  Hmm, that sounds like it could work ...


Summarizing:

so we want some kind of super postmaster that starts some processes
(including the postgres' postmaster itself), and tracks their
availability.
- processes that don't need to connect to shared memory should start
here (ie: pgagent, slony daemons, pgbouncer, LISTEN applications, etc)
- processes that need to connect to shared memory should be children of
postgres' postmaster

is this so different from what the postgres' postmaster itself does? i
mean, can we reuse that code?
this project of course grew beyond my known abilities, so while i
will try it (if anyone feels like they can tackle it please go for it)...
maybe we can add this to the TODO if it seems acceptable? especially, i'd
love to hear Tom's opinion on this one...

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Boolean partition constraint behaving strangely

2010-02-25 Thread Tom Lane
Dominik Sander depai...@gmail.com writes:
 I have an issue with a table partitioned by one boolean column. The
 query planner only seems to skip the non matching table if expired
 (the column I use for the partition) is true.

Hm, interesting case.  The reason it's behaving asymmetrically is the
fix for this bug:
http://archives.postgresql.org/pgsql-sql/2008-01/msg00084.php

The planner forces expressions like "bool_var = true" into the
simpler forms "bool_var" or "NOT bool_var" so as to recognize
that these forms are equivalent.  However, that means that your
"expired = false" case looks like the case that was removed as
incorrect, ie

+  * Unfortunately we *cannot* use
+  *		NOT A R=> B if:	B => A
+  * because this type of reasoning fails to prove that B doesn't yield NULL.

It strikes me though that we could make the more limited deduction
that NOT A refutes A itself.  That would fix this case, and I think
it would cover all the cases that we would have recognized if we'd
left the clauses in boolean-comparison form.

I'll see about fixing this for the next updates.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
 1) transaction information in index

This seems like a lot of bloat in indexes. It also means breaking
 a lot of other optimizations such as being able to read the tuples
 directly from the heap page without locking. I'm not sure how much
 those are worth though. But adding 24 bytes to every index entry seems
 pretty unlikely to be a win anyways.


Greg,
  I think somewhere things have been misunderstood. We only need 8
bytes more per index entry. I thought Postgres has an 8-byte transaction id,
but it is only 4 bytes, so we only need to save the insertion and deletion
xids. So 8 bytes more per tuple.

Gokul.


Re: [HACKERS] pg_stop_backup does not complete

2010-02-25 Thread Bruce Momjian
Joshua D. Drake wrote:
 On Wed, 2010-02-24 at 12:32 -0800, Josh Berkus wrote:
   pg_stop_backup() doesn't complete until all the WAL segments needed to
   restore from the backup are archived. If archive_command is failing,
   that never happens.
  
  OK, so we need a way out of that cycle if the user is issuing
  pg_stop_backup because they *already know* that archive_command is
  failing.  Right now, there's no way out other than a fast shutdown,
  which is a bit user-hostile.
 
 Hmmm well... changing the archive_command to /bin/true and issuing a HUP
 would cause the command to succeed, but I still think that is over the
 top. I prefer Kevin's solution or some variant thereof:
 
 http://archives.postgresql.org/pgsql-hackers/2010-02/msg01853.php
 http://archives.postgresql.org/pgsql-hackers/2010-02/msg01907.php

Postgres 9.0 will be the first release to mention /bin/true as a way of
turning off archiving in extraordinary circumstances:

http://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Bruce Momjian
Robert Haas wrote:
 On Thu, Feb 18, 2010 at 11:51 PM, Andrew Dunstan and...@dunslane.net wrote:
  Bruce Momjian wrote:
  I have a TODO on fixing some of the typedef finding. But I can generate
  an up to date version of the list Bruce last used in a day or two, and 
  then
  get this better whacked into shape for another run at the more traditional
  time.
 
  I am ready to run pgindent whenever requested.
  There is an updated typedefs list at
  http://www.pgbuildfarm.org/cgi-bin/typedefs.pl
 
 Bruce, you want to go ahead and do this?  The sooner the better, AFAICS.

I am ready.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming rep - why log shipping is necessary?

2010-02-25 Thread Josh Berkus

 If you're adventurous enough, it's actually possible to set an
 archive_command that checks the status of the standby and returns
 failure as long as the standby still needs the given WAL segment. That
 way the primary doesn't recycle segments that are still needed by the
 standby, and you can get away without restore_command in the standby.

I'd prefer something a little different ... is there any way to tell
which log segments a standby still needs, *from* the standby?

Given performance considerations, I'd prefer to set up HS/SR with log
shipping because I don't want any slaves asking the master for a really
old log and interfering with its write performance.  However, that
leaves the issue of "How do I decide when I can delete archived log
segments off the slave because the slave is past them?"

Currently, I'm recommending some interval of time, but that's very brute
force and error-prone.  I'd prefer some elegant way to determine that a log
segment contains no unapplied transactions.  Is there one?

--Josh Berkus

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 3:17 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Thu, Feb 18, 2010 at 11:51 PM, Andrew Dunstan and...@dunslane.net wrote:
  Bruce Momjian wrote:
  I have a TODO on fixing some of the typedef finding. But I can generate
  an up to date version of the list Bruce last used in a day or two, and 
  then
  get this better whacked into shape for another run at the more 
  traditional
  time.
 
  I am ready to run pgindent whenever requested.
  There is an updated typedefs list at
  http://www.pgbuildfarm.org/cgi-bin/typedefs.pl

 Bruce, you want to go ahead and do this?  The sooner the better, AFAICS.

 I am ready.

Does that mean you're going to do it, or are you waiting for some sort
of OK?  I believe everyone who expressed an opinion is in favor.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Tom Lane
Gokulakannan Somasundaram gokul...@gmail.com writes:
   I think, somewhere things have been misunderstood. we only need 8
 bytes more per index entry. I thought Postgres has a 8 byte transaction id,
 but it is only 4 bytes, so we only need to save the insertion and deletion
 xids. So 8 bytes more per tuple.

What makes you think you can get away without cmin/cmax?

regards, tom lane



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Greg Stark
On Thu, Feb 25, 2010 at 8:09 PM, Gokulakannan Somasundaram
gokul...@gmail.com wrote:
   I think, somewhere things have been misunderstood. we only need 8
 bytes more per index entry. I thought Postgres has a 8 byte transaction id,
 but it is only 4 bytes, so we only need to save the insertion and deletion
 xids. So 8 bytes more per tuple.


Well in the heap we need

4 bytes: xmin
4 bytes: xmax
4 bytes: cid
6 bytes: ctid
6 bytes: various info bits including natts

In indexes we currently get away with a reduced header which has few
of the 6 bytes of info bits. However, the only reason we can do so is
because we impose arbitrary limitations that work for indexes but
wouldn't be reasonable for tables. Such as a lower maximum number of
columns, inability to add new columns or drop columns later, etc.

-- 
greg
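Greg's field list sums up as follows (a back-of-the-envelope sketch; the real on-disk HeapTupleHeader also involves a null bitmap and alignment padding, so treat the total as illustrative):

```python
# Per-tuple heap overhead, using the field sizes from Greg's list above.
heap_header_fields = {
    "xmin": 4,      # inserting transaction id
    "xmax": 4,      # deleting transaction id
    "cid": 4,       # command id
    "ctid": 6,      # tuple self-pointer
    "infobits": 6,  # various info bits, including natts
}
total = sum(heap_header_fields.values())
print(total)  # → 24 bytes of header before any user data
```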



Re: [HACKERS] pg_stop_backup does not complete

2010-02-25 Thread Bruce Momjian

Looks like we arrived at the best solution here.  I don't think it was
clear to users that pg_stop_backup() was issuing an archive_command and
hence they wouldn't be likely to understand the delay or correct a
problem.  This gives them the information they need at the time they
need it.

---

Tom Lane wrote:
 Greg Smith g...@2ndquadrant.com writes:
  Tom Lane wrote:
  The value of the HINT I think would be to make them (a) not afraid to
  hit control-C and (b) aware of the fact that their archiver has got
  a problem.
  
  Agreed on both points.  Patch attached that implements something similar 
  to Josh's wording, tweaking the original warning too.
 
 OK, everyone likes the immediate NOTICE.  I did a bit of copy-editing
 and committed the attached version.
 
   regards, tom lane
 
 Index: xlog.c
 ===
 RCS file: /cvsroot/pgsql/src/backend/access/transam/xlog.c,v
 retrieving revision 1.377
 diff -c -r1.377 xlog.c
  *** xlog.c	19 Feb 2010 10:51:03 -0000	1.377
  --- xlog.c	25 Feb 2010 02:15:49 -0000
 ***
 *** 8132,8138 
*
* We wait forever, since archive_command is supposed to work and we
* assume the admin wanted his backup to work completely. If you don't
 !  * wish to wait, you can set statement_timeout.
*/
   XLByteToPrevSeg(stoppoint, _logId, _logSeg);
   XLogFileName(lastxlogfilename, ThisTimeLineID, _logId, _logSeg);
 --- 8132,8139 
*
* We wait forever, since archive_command is supposed to work and we
* assume the admin wanted his backup to work completely. If you don't
 !  * wish to wait, you can set statement_timeout.  Also, some notices
 !  * are issued to clue in anyone who might be doing this interactively.
*/
   XLByteToPrevSeg(stoppoint, _logId, _logSeg);
   XLogFileName(lastxlogfilename, ThisTimeLineID, _logId, _logSeg);
 ***
 *** 8141,8146 
 --- 8142,8150 
   BackupHistoryFileName(histfilename, ThisTimeLineID, _logId, _logSeg,
 startpoint.xrecoff % 
 XLogSegSize);
   
 + ereport(NOTICE,
  + 		(errmsg("pg_stop_backup cleanup done, waiting for required WAL segments to be archived")));
 + 
   seconds_before_warning = 60;
   waits = 0;
   
 ***
 *** 8155,8162 
   {
   seconds_before_warning *= 2;/* This wraps 
 in 10 years... */
   ereport(WARNING,
  ! 		(errmsg("pg_stop_backup still waiting for archive to complete (%d seconds elapsed)",
  ! 				waits)));
   }
   }
   
 --- 8159,8169 
   {
   seconds_before_warning *= 2;/* This wraps 
 in 10 years... */
   ereport(WARNING,
  ! 		(errmsg("pg_stop_backup still waiting for all required WAL segments to be archived (%d seconds elapsed)",
  ! 				waits),
  ! 		 errhint("Check that your archive_command is executing properly. "
  ! 				 "pg_stop_backup can be cancelled safely, "
  ! 				 "but the database backup will not be usable without all the WAL segments.")));
   }
   }
   
 

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 In indexes we currently get away with a reduced header which has few
 of the 6 bytes of info bits. However the only reason we can do is
 because we impose arbitrary limitations that work for indexes but
 wouldn't be reasonable for tables. Such as a lower maximum number of
 columns, inability to add new columns or drop columns later, etc.

Wait a second, which idea are we currently talking about?  No heap at
all, or just the ability to check visibility without visiting the heap?

If it's a genuine IOT (ie no separate heap), then you are not going to
be able to get away without a full heap tuple header.  We've sweated
blood to get that struct down to where it is; there's no way to make it
smaller without giving up some really fundamental things, for example
the ability to do UPDATE :-(

If you just want to avoid a heap visit for visibility checks, I think
you'd only need to add xmin/xmax/cmin plus the hint bits for same.
This is going to end up costing 16 bytes in practice --- you might
think you could squeeze into 12 but on 64-bit machines (MAXALIGN 8)
you'll save nothing.  So that's effectively a doubling of index size
for common cases such as a single int4 or int8 index column.  The other
problem is the extra write load created by needing to update the index's
copies of the hint bits; not to mention extra writes to freeze the xids
when they get old enough.

regards, tom lane
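Tom's 12-versus-16 arithmetic is just MAXALIGN rounding, and can be checked mechanically (an illustrative sketch; the 14-byte figure for a bare index entry is an assumption for a single int8 key plus the existing 6-byte header, not a measured value):

```python
def maxalign(n, align=8):
    """Round n up to the next MAXALIGN boundary (8 on typical 64-bit builds)."""
    return -(-n // align) * align

# xmin + xmax + cmin = 12 bytes; hint bits assumed to fit in spare flag bits.
visibility_bytes = 4 + 4 + 4
print(maxalign(visibility_bytes))  # → 16: the "12 bytes" occupy 16 after alignment

# Rough effect on a single-column int8 index entry (8-byte key + 6-byte header):
entry = 8 + 6
print(maxalign(entry), maxalign(entry + visibility_bytes))  # → 16 32, i.e. doubled
```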



Re: [HACKERS] Streaming replication and pg_xlogfile_name()

2010-02-25 Thread Erik Rijkers
On Thu, February 25, 2010 17:34, Fujii Masao wrote:

 I attached the revised patch which uses lastPageTLI instead of curFileTLI
 as the timeline of the last applied record.


With this patch the standby compiles, tests, installs OK.
I wanted to check with you if the following is expected.

With the standby (correctly) starting as follows:
LOG:  redo starts at 0/120
LOG:  consistent recovery state reached at 0/200
LOG:  database system is ready to accept read only connections

This is OK.

However, initially (even after the above 'ready' message)
the timeline value as reported by
  pg_xlogfile_name_offset(pg_last_xlog_replay_location())
is zero.

After 5 minutes or so (without any activity on primary
or standby), it proceeds to 1 (see below):

(standby)
2010.02.25 21:58:21 $ psql
psql (9.0devel)
Type "help" for help.

replicas=# \x
Expanded display is on.
replicas=# select
pg_last_xlog_replay_location()
,   pg_xlogfile_name_offset(pg_last_xlog_replay_location())
,   pg_last_xlog_receive_location()
,   pg_xlogfile_name_offset(pg_last_xlog_receive_location())
, now();
-[ RECORD 1 ]-+
pg_last_xlog_replay_location  | 0/0/200
pg_xlogfile_name_offset       | (000000000000000000000001,16777216)
pg_last_xlog_receive_location | 1/0/200
pg_xlogfile_name_offset       | (000000010000000000000001,16777216)
now   | 2010-02-25 22:03:41.585808+01

replicas=# select
pg_last_xlog_replay_location()
,   pg_xlogfile_name_offset(pg_last_xlog_replay_location())
,   pg_last_xlog_receive_location()
,   pg_xlogfile_name_offset(pg_last_xlog_receive_location())
,   now();
-[ RECORD 1 ]-+
pg_last_xlog_replay_location  | 0/0/200
pg_xlogfile_name_offset       | (000000000000000000000001,16777216)
pg_last_xlog_receive_location | 1/0/200
pg_xlogfile_name_offset       | (000000010000000000000001,16777216)
now   | 2010-02-25 22:06:56.008181+01

replicas=# select
pg_last_xlog_replay_location()
,   pg_xlogfile_name_offset(pg_last_xlog_replay_location())
,   pg_last_xlog_receive_location()
,   pg_xlogfile_name_offset(pg_last_xlog_receive_location())
,   now();
-[ RECORD 1 ]-+---
pg_last_xlog_replay_location  | 1/0/2B8
pg_xlogfile_name_offset       | (000000010000000000000002,184)
pg_last_xlog_receive_location | 1/0/2B8
pg_xlogfile_name_offset       | (000000010000000000000002,184)
now   | 2010-02-25 22:07:51.368363+01


I'm not sure this qualifies as a bug, but if not, it should probably be
mentioned somewhere in the documentation.

(Oh, and to answer Heikki's earlier question, "What are you trying to
achieve?": I am trying to keep track of how far behind the standby is when I
restore a large dump (500 GB or so) into the primary (eventually I want to
run pgbench on both at the same time).)


thanks,

Erik Rijkers
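The zero-timeline file names in Erik's first two transcripts fall straight out of the naming scheme: a WAL file name is three fixed-width hex fields (timeline, log id, segment), so a timeline of 0 simply prints as eight leading zeros. A mock of the formatting (the server-side pg_xlogfile_name() does this in C; this Python version is only illustrative):

```python
def xlogfile_name(tli, logid, logseg):
    """Mimic pg_xlogfile_name(): 8 uppercase hex digits each for
    timeline, log id, and segment number."""
    return "%08X%08X%08X" % (tli, logid, logseg)

# While the backend still reports timeline 0 for the replay location:
print(xlogfile_name(0, 0, 1))  # leading-zero name, as in the first transcripts
# Once the reported timeline catches up to 1:
print(xlogfile_name(1, 0, 2))  # name matching the third transcript
```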







Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Bruce Momjian
Robert Haas wrote:
 On Thu, Feb 25, 2010 at 3:17 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Thu, Feb 18, 2010 at 11:51 PM, Andrew Dunstan and...@dunslane.net 
  wrote:
   Bruce Momjian wrote:
   I have a TODO on fixing some of the typedef finding. But I can generate
   an up to date version of the list Bruce last used in a day or two, and 
   then
   get this better whacked into shape for another run at the more 
   traditional
   time.
  
   I am ready to run pgindent whenever requested.
   There is an updated typedefs list at
   http://www.pgbuildfarm.org/cgi-bin/typedefs.pl
 
  Bruce, you want to go ahead and do this? ?The sooner the better, AFAICS.
 
  I am ready.
 
 Does that mean you're going to do it, or are you waiting for some sort
 of OK?  I believe everyone who expressed an opinion is in favor.

I was waiting a few hours to get feedback.  I will do it at 0100 GMT
(2000 EST).

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



[HACKERS] feature request

2010-02-25 Thread Omar Bettin
Hello to everyone,
it is a bit late for an Italian, but after a long day of debugging I had an idea.
Why not introduce a special SQL command like
STORE WHERE [condition] FROM [table]
removing all data that meets the condition and storing it in another
database?
Then, if a query that needs the stored data is executed after such a command,
the database joins the stored data back into the query result.
This would keep the production database lightweight and fast.
Regards

Omar Bettin

Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
 Wait a second, which idea are we currently talking about?  No heap at
 all, or just the ability to check visibility without visiting the heap?


I was talking about the indexes with snapshots.



 If it's a genuine IOT (ie no separate heap), then you are not going to
 be able to get away without a full heap tuple header.  We've sweated
 blood to get that struct down to where it is; there's no way to make it
 smaller without giving up some really fundamental things, for example
 the ability to do UPDATE :-(


Of course, as I said, the leaf pages will have HeapTuples in an IOT. As a
Postgres user, I am definitely thankful for what has been done.


 If you just want to avoid a heap visit for visibility checks, I think
 you'd only need to add xmin/xmax/cmin plus the hint bits for same.
 This is going to end up costing 16 bytes in practice --- you might
 think you could squeeze into 12 but on 64-bit machines (MAXALIGN 8)
 you'll save nothing.  So that's effectively a doubling of index size
 for common cases such as a single int4 or int8 index column.


Yes, but currently we are storing the size of the index tuple in the
IndexTuple header, and it is also stored in the ItemId. If we can somehow
make it use that info, then we have 13 flag bits for free and can reduce it
to 8 bytes of extra info. But we need you to sweat some more blood for that
:). But again, unless we resolve the volatile functions issue, there is no
use in worrying about this.


 The other
 problem is the extra write load created by needing to update the index's
 copies of the hint bits; not to mention extra writes to freeze the xids
 when they get old enough.

But Tom, I remember that vacuum was faster when the index had visibility
info, since we need not touch the table. But maybe I am wrong. At least I
remember that was the case when the relation had only thick indexes.
Oh... yeah, the visibility map might have changed the equation.


Thanks,
Gokul


Re: [HACKERS] feature request

2010-02-25 Thread Omar Bettin

...could be
STORE WHERE [condition] FROM [table] INTO [database]

regards
Omar Bettin

- Original Message - 
From: Robert Haas robertmh...@gmail.com

To: Omar Bettin o.bet...@informaticaindustriale.it
Cc: pgsql-hackers@postgresql.org
Sent: Thursday, February 25, 2010 11:11 PM
Subject: Re: [HACKERS] feature request



On Thu, Feb 25, 2010 at 4:35 PM, Omar Bettin
o.bet...@informaticaindustriale.it wrote:

hello to everyone,
is a bit late for an italian, but after an long day debugging I had an 
idea.

Why not introduce a special SQL command like
STORE WHERE [condition] FROM [table]
removing all data that meet the condition and storing them into another
database?
Then, if a query that needs the stored data is executed after such 
command

the database joins the stored data into the result query.
This can keep the production database lightweight and fast.
Regards


DELETE ... RETURNING is useful for this kind of thing, sometimes.  And
you could use it inside a function to go and do something with each
row returned, though that might not be very graceful for large numbers
of rows.  The proposed syntax wouldn't actually work because it
doesn't specify where to put the data.

...Robert






Re: [HACKERS] feature request

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 5:17 PM, Omar Bettin
o.bet...@informaticaindustriale.it wrote:
 ...could be
 STORE WHERE [condition] FROM [table] INTO [database]

That still doesn't work, because a PostgreSQL backend doesn't have any
obvious way to access another database.  You'd need to use dblink or
something.  Eventually (but not any time soon) it will probably be
possible to do things like this, which would work for moving data
between tables in the same database:

WITH x AS (DELETE FROM ... RETURNING ...) INSERT INTO ... (...) SELECT
... FROM x

Doing anything with some OTHER database is going to require a lot more
infrastructure.

...Robert



Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Robert Haas wrote:
 Does that mean you're going to do it, or are you waiting for some sort
 of OK?  I believe everyone who expressed an opinion is in favor.

 I was waiting a few hours to get feedback.  I will do it at 0100 GMT
 (2000 EST).

You can do it now as far as I'm concerned --- my next bit of work is in
the back branches anyway.

regards, tom lane



Re: [HACKERS] feature request

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 4:35 PM, Omar Bettin
o.bet...@informaticaindustriale.it wrote:
 hello to everyone,
 is a bit late for an italian, but after an long day debugging I had an idea.
 Why not introduce a special SQL command like
 STORE WHERE [condition] FROM [table]
 removing all data that meet the condition and storing them into another
 database?
 Then, if a query that needs the stored data is executed after such command
 the database joins the stored data into the result query.
 This can keep the production database lightweight and fast.
 Regards

DELETE ... RETURNING is useful for this kind of thing, sometimes.  And
you could use it inside a function to go and do something with each
row returned, though that might not be very graceful for large numbers
of rows.  The proposed syntax wouldn't actually work because it
doesn't specify where to put the data.

...Robert



Re: [HACKERS] Allow vacuumdb to only analyze

2010-02-25 Thread Bruce Momjian
decibel wrote:
 One of the talks at PGCon (update in place?) recommended running  
 vacuumdb -z to analyze all tables to rebuild statistics. Problem with  
 that is it also vacuums everything. ISTM it'd be useful to be able to  
 just analyze all databases in a cluster, so I hacked it into vacuumdb.
 
 Of course, using a command called vacuumdb is rather silly, but I  
 don't see a reasonable way to deal with that. I did change the name  
 of the functions from vacuum_* to process_*, since they can vacuum  
 and/or analyze.
 
 The only thing I see missing is the checks for invalid combinations  
 of options, which I'm thinking should go in the function rather than  
 in the option parsing section. But I didn't want to put any more  
 effort into this if it's not something we actually want.

This is implemented in 9.0 from vacuumdb:

  -Z, --analyze-only  only update optimizer statistics

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 4:48 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Thu, Feb 25, 2010 at 3:17 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Thu, Feb 18, 2010 at 11:51 PM, Andrew Dunstan and...@dunslane.net 
  wrote:
   Bruce Momjian wrote:
   I have a TODO on fixing some of the typedef finding. But I can 
   generate
   an up to date version of the list Bruce last used in a day or two, 
   and then
   get this better whacked into shape for another run at the more 
   traditional
   time.
  
   I am ready to run pgindent whenever requested.
   There is an updated typedefs list at
   http://www.pgbuildfarm.org/cgi-bin/typedefs.pl
 
  Bruce, you want to go ahead and do this? ?The sooner the better, AFAICS.
 
  I am ready.

 Does that mean you're going to do it, or are you waiting for some sort
 of OK?  I believe everyone who expressed an opinion is in favor.

 I was waiting a few hours to get feedback.  I will do it at 0100 GMT
 (2000 EST).

OK, great!  Thanks for the clarification.

...Robert



Re: [HACKERS] feature request

2010-02-25 Thread Omar Bettin

I have read that 8.5 will have replication, so this is just a feature request.

regards
Omar Bettin

- Original Message - 
From: Robert Haas robertmh...@gmail.com

To: Omar Bettin o.bet...@informaticaindustriale.it
Cc: pgsql-hackers@postgresql.org
Sent: Thursday, February 25, 2010 11:22 PM
Subject: Re: [HACKERS] feature request



On Thu, Feb 25, 2010 at 5:17 PM, Omar Bettin
o.bet...@informaticaindustriale.it wrote:

...could be
STORE WHERE [condition] FROM [table] INTO [database]


That still doesn't work, because a PostgreSQL backend doesn't have any
obvious way to access another database.  You'd need to use dblink or
something.  Eventually (but not any time soon) it will probably be
possible to do things like this, which would work for moving data
between tables in the same database:

WITH x AS (DELETE FROM ... RETURNING ...) INSERT INTO ... (...) SELECT
... FROM x

Doing anything with some OTHER database is going to require a lot more
infrastructure.

...Robert




Re: [HACKERS] strict version of version_stamp.pl

2010-02-25 Thread Bruce Momjian

David, I am sorry this didn't get applied, and the code has drifted too
much to apply it now.  Would you be able to make a new patch to make our
Perl files strict?

---

David Fetter wrote:
 On Fri, May 08, 2009 at 09:04:18PM -0400, Andrew Dunstan wrote:
 
 
  Joshua D. Drake wrote:
  Hello,
 
  Here is a diff of version_stamp.pl. It is not quite done as I can't
  actually get it to run. No matter what I do it doesn't appear to be able
  to open configure.in.
 
  If someone could help me figure out where I am being stupid I would
  appreciate it.
 
 
  Maybe you aren't running it in the right directory (i.e. the directory  
  where configure.in exists)?
 
  Anyway, I think what you want to achieve (without all the git crap) is  
  the attached.
 
 Here's some git crap, but it makes all the .pl programs strict-clean.
 Many are still horrendous, though.
 
 Cheers,
 David.
 -- 
 David Fetter da...@fetter.org http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter  XMPP: david.fet...@gmail.com
 
 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate

[ Attachment, skipping... ]

 

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] strict version of version_stamp.pl

2010-02-25 Thread David Fetter
On Thu, Feb 25, 2010 at 05:39:10PM -0500, Bruce Momjian wrote:
 
 David, I am sorry this didn't get applied, and the code has drifted too
 much to apply it now.  Would you be able to make a new patch to make our
 Perl files strict?

Please find updated patch attached.  It passes strict, warnings, and
perlcritic -4

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/src/tools/version_stamp.pl b/src/tools/version_stamp.pl
index 3243e16..e06f5f2 100755
--- a/src/tools/version_stamp.pl
+++ b/src/tools/version_stamp.pl
@@ -1,5 +1,7 @@
-#! /usr/bin/perl -w
+#!/usr/bin/env perl
 
+use strict;
+use warnings;
 #
 # version_stamp.pl -- update version stamps throughout the source tree
 #
@@ -22,26 +24,26 @@
 
 # Major version is hard-wired into the script.  We update it when we branch
 # a new development version.
-$major1 = 9;
-$major2 = 0;
+my $major1 = 9;
+my $major2 = 0;
 
 # Validate argument and compute derived variables
-$minor = shift;
+my $minor = shift;
 defined($minor) || die "$0: missing required argument: minor-version\n";
 
+my ($dotneeded, $numericminor);
 if ($minor =~ m/^\d+$/) {
 $dotneeded = 1;
 $numericminor = $minor;
-} elsif ($minor eq "devel") {
-$dotneeded = 0;
-$numericminor = 0;
-} elsif ($minor =~ m/^alpha\d+$/) {
-$dotneeded = 0;
-$numericminor = 0;
-} elsif ($minor =~ m/^beta\d+$/) {
-$dotneeded = 0;
-$numericminor = 0;
-} elsif ($minor =~ m/^rc\d+$/) {
+} elsif ($minor =~ m/
+^
+(
+devel |
+alpha\d+ |
+beta\d+ |
+rc\d+
+)
+$/x) {
 $dotneeded = 0;
 $numericminor = 0;
 } else {
@@ -49,32 +51,33 @@ if ($minor =~ m/^\d+$/) {
 }
 
 # Create various required forms of the version number
-$majorversion = $major1 . "." . $major2;
+my $majorversion = $major1 . "." . $major2;
+my $fullversion;
 if ($dotneeded) {
 $fullversion = $majorversion . "." . $minor;
 } else {
 $fullversion = $majorversion . $minor;
 }
-$numericversion = $majorversion . "." . $numericminor;
-$padnumericversion = sprintf("%d%02d%02d", $major1, $major2, $numericminor);
+my $numericversion = $majorversion . "." . $numericminor;
+my $padnumericversion = sprintf("%d%02d%02d", $major1, $major2, $numericminor);
 
 # Get the autoconf version number for eventual nag message
 # (this also ensures we're in the right directory)
 
-$aconfver = "";
-open(FILE, "configure.in") || die "could not read configure.in: $!\n";
-while (<FILE>) {
+my $aconfver = "";
+open(my $file, '<', "configure.in") || die "could not read configure.in: $!\n";
+while (<$file>) {
 if (m/^m4_if\(m4_defn\(\[m4_PACKAGE_VERSION\]\), \[(.*)\], \[\], \[m4_fatal/) {
 $aconfver = $1;
last;
 }
 }
-close(FILE);
+close($file);
 $aconfver ne "" || die "could not find autoconf version number in configure.in\n";
 
 # Update configure.in and other files that contain version numbers
 
-$fixedfiles = "";
+my $fixedfiles = "";
 
 sed_file("configure.in",
 	"-e 's/AC_INIT(\\[PostgreSQL\\], \\[[0-9a-z.]*\\]/AC_INIT([PostgreSQL], [$fullversion]/'");
@@ -113,4 +116,5 @@ sub sed_file {
   or die "mv failed: $?";
 
 $fixedfiles .= \t$filename\n;
+return;
 }



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Karl Schnaitter
 The other
 problem is the extra write load created by needing to update the index's
 copies of the hint bits; not to mention extra writes to freeze the xids
 when they get old enough.

 But Tom, i remember that the vacuum was faster when index had visibility
 info, since we need not touch the table. But maybe i am wrong.


I disagree with that, Gokul -- if the ordering operators are volatile or
just incorrect, during DELETE, you could set xmax in the wrong IndexTuple.
Then there will be another IndexTuple that says it's visible, but it points
to a non-visible heap tuple. I think you should follow the pointers to the
heap before you decide to let an index tuple remain in the index during
vacuum. This would ensure that all references from an index to a heap tuple
are removed before vacuuming the heap tuple. I would be worried about what
might break if this invariant doesn't hold.

Tom is right about all the extra overhead involved with keeping visibility
info in the index. But it can be a good trade-off in some cases.

Karl


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Greg Stark
On Thu, Feb 25, 2010 at 9:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  We've sweated
 blood to get that struct down to where it is; there's no way to make it
 smaller without giving up some really fundamental things, for example
 the ability to do UPDATE :-(

Oh, this is a tangent, but I think there are some more gains there, at
least now that we've eliminated vacuum full. The more we save, the more
complex the code and data structures become, so there may be a point
where it's not worthwhile any more. And of course if we do try any of
these, it wouldn't be part of IOT; it would be a general improvement
which would help tables as well.

For future reference, here are some items that have come up in the past:

1) We've talked about having a common xmin in the page header and
then a bit indicating that the xmin is missing from the tuple header
because it matches the value in the page header. This would save a lot
of space in the common case where data was all loaded in a single
transaction and all the tuples have the same xmin.

2) Now that we don't have vacuum full the command-id is kind of a
waste. We could replace it with some kind of local memory data
structure which is capable of spilling to disk. When the transaction
commits it can be thrown away and no other session needs to be able to
see it. This could have an impact on future work on parallel query but
I think our phantom-command-id already has such issues anyways.

3) xmax and ctid are unavoidable since we never know when a tuple
might be deleted or updated in the future. But if we allowed the user
to mark a table insert-only then it could be left out and any
operation which tries to delete, update, or select for update a row in
the table would throw an error.



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 2) Now that we don't have vacuum full the command-id is kind of a
 waste.

Not really.

 We could replace it with some kind of local memory data
 structure which is capable of spilling to disk.

The performance costs of that would probably outweigh any space savings.

 I think our phantom-command-id already has such issues anyways.

It can, but it's relatively uncommon to update a large number of tuples
more than once in a transaction.  What you're suggesting would move that
bottleneck into mainstream cases.  And it would be a bigger bottleneck
since you would have no lookup key available within the tuple header.
You'd have to use ctid as the lookup key which means no ability to use
one table entry for multiple rows, not to mention what do you do before
the tuple has a ctid assigned?

 3) xmax and ctid are unavoidable since we never know when a tuple
 might be deleted or updated in the future. But if we allowed the user
 to mark a table insert-only then it could be left out and any
 operation which tries to delete, update, or select for update a row in
 the table would throw an error.

Anything with "this field is optional" is going to be a complete
disaster for mapping C structs over tuple headers...

regards, tom lane



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Karl Schnaitter
If it's of any interest, I can say something about the hint bits in the
index tuple header. In my implementation, my decision was to use only one
hint bit. It went into the unused 13th bit of the IndexTuple header. When
the hint bit is set, it means that

(xmin is committed OR xmin = InvalidTransactionId)
AND (xmax is committed OR xmax = InvalidTransactionId)

Then there are 12 bytes for xmin/xmax/cid. I did sweat something over this
decision... but maybe it was a wasted effort if the 12 bytes end up
occupying 16 bytes anyway.
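A minimal sketch of the combined test that single hint bit stands for (the names below are illustrative, not taken from the actual patch):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/*
 * Illustrative sketch only: the one hint bit may be set when both halves
 * of this condition hold, so a reader that sees the bit set can skip the
 * clog lookups for xmin and xmax at once.
 */
static bool
index_hint_bit_allowed(TransactionId xmin, bool xmin_committed,
                       TransactionId xmax, bool xmax_committed)
{
    return (xmin_committed || xmin == InvalidTransactionId)
        && (xmax_committed || xmax == InvalidTransactionId);
}
```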

Karl

On Thu, Feb 25, 2010 at 1:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Greg Stark gsst...@mit.edu writes:
  In indexes we currently get away with a reduced header which has few
  of the 6 bytes of info bits. However the only reason we can do is
  because we impose arbitrary limitations that work for indexes but
  wouldn't be reasonable for tables. Such as a lower maximum number of
  columns, inability to add new columns or drop columns later, etc.

 Wait a second, which idea are we currently talking about?  No heap at
 all, or just the ability to check visibility without visiting the heap?

 If it's a genuine IOT (ie no separate heap), then you are not going to
 be able to get away without a full heap tuple header.  We've sweated
 blood to get that struct down to where it is; there's no way to make it
 smaller without giving up some really fundamental things, for example
 the ability to do UPDATE :-(

 If you just want to avoid a heap visit for visibility checks, I think
 you'd only need to add xmin/xmax/cmin plus the hint bits for same.
 This is going to end up costing 16 bytes in practice --- you might
 think you could squeeze into 12 but on 64-bit machines (MAXALIGN 8)
 you'll save nothing.  So that's effectively a doubling of index size
 for common cases such as a single int4 or int8 index column.  The other
 problem is the extra write load created by needing to update the index's
 copies of the hint bits; not to mention extra writes to freeze the xids
 when they get old enough.

regards, tom lane
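The 12-versus-16-byte arithmetic in the quoted message can be checked directly with the MAXALIGN rounding rule (assuming MAXIMUM_ALIGNOF is 8, as on typical 64-bit builds):

```c
#include <stdint.h>

/*
 * Same rounding rule as PostgreSQL's MAXALIGN with MAXIMUM_ALIGNOF = 8:
 * lengths are rounded up to the next multiple of 8 bytes.
 */
#define MAXALIGN8(LEN) (((uintptr_t) (LEN) + 7) & ~((uintptr_t) 7))

/*
 * 12 bytes of xmin/xmax/cmin therefore occupy MAXALIGN8(12) == 16 bytes,
 * so squeezing the per-tuple header into 12 bytes saves nothing on
 * MAXALIGN-8 machines.
 */
```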



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram

 I disagree with that, Gokul -- if the ordering operators are volatile or
 just incorrect, during DELETE, you could set xmax in the wrong IndexTuple.
 Then there will be another IndexTuple that says it's visible, but it points
 to a non-visible heap tuple. I think you should follow the pointers to the
 heap before you decide to let an index tuple remain in the index during
 vacuum. This would ensure that all references from an index to a heap tuple
 are removed before vacuuming the heap tuple. I would be worried about what
 might break if this invariant doesn't hold.


Well, Karl, if we have to support function based indexes/IOT, one thing is
for sure. We can't support them for volatile functions / broken data types.
Everyone agrees with that. But the question is how we identify that something
is not a volatile function. The only way currently is to let the user make the
decision (or we should consult some mathematician). So we need not consult
the heap tuple.

Gokul.


Re: [HACKERS] strict version of version_stamp.pl

2010-02-25 Thread Tom Lane
David Fetter da...@fetter.org writes:
 -} elsif ($minor eq devel) {
 -$dotneeded = 0;
 -$numericminor = 0;
 -} elsif ($minor =~ m/^alpha\d+$/) {
 -$dotneeded = 0;
 -$numericminor = 0;
 -} elsif ($minor =~ m/^beta\d+$/) {
 -$dotneeded = 0;
 -$numericminor = 0;
 -} elsif ($minor =~ m/^rc\d+$/) {
 +} elsif ($minor =~ m/
 +^
 +(
 +devel |
 +alpha\d+ |
 +beta\d+ |
 +rc\d+
 +)
 +$/x) {

FWIW, I don't care for the above part of the patch.  It doesn't seem to
me to improve readability one iota, if anything the reverse; and it
makes the logic less amenable to modification.  If we wanted to make the
behavior at all different for alpha/beta/rc cases, we'd have to undo it
anyway.

regards, tom lane



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Karl Schnaitter
On Thu, Feb 25, 2010 at 3:59 PM, Gokulakannan Somasundaram 
gokul...@gmail.com wrote:

 I disagree with that, Gokul -- if the ordering operators are volatile or
 just incorrect, during DELETE, you could set xmax in the wrong IndexTuple.
 Then there will be another IndexTuple that says it's visible, but it points
 to a non-visible heap tuple. I think you should follow the pointers to the
 heap before you decide to let an index tuple remain in the index during
 vacuum. This would ensure that all references from an index to a heap tuple
 are removed before vacuuming the heap tuple. I would be worried about what
 might break if this invariant doesn't hold.


 Well, Karl, if we have to support function based indexes/IOT, one thing is
 for sure. We can't support them for volatile functions / broken data types.
 Everyone agrees with that. But the question is how we identify something is
 not a volatile function. Only way currently is to let the user make the
 decision( Or we should consult some mathematician ). So we need not consult
 the heaptuple.


First of all, volatility is not the only issue. The ordering ops could also
be incorrect, e.g., violate the transitivity property. There is no reliable
way to determine if a function is volatile and/or incorrectly specified.

Of course, PG can't support indexing with incorrect functions. However,
it's worthwhile to guard against too much damage being done if the user's
function has a bug. Maybe I'm wrong? Maybe an index tuple with a dangling
pointer is actually harmless?

Karl


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Tom Lane
Karl Schnaitter karl...@gmail.com writes:
 If it's of any interest, I can say something about the hint bits in the
 index tuple header. In my implementation, my decision was to use only one
 hint bit. It went into the unused 13th bit of the IndexTuple header. When
 the hint bit is set, it means that

 (xmin is committed OR xmin = InvalidTransactionId)
 AND (xmax is committed OR xmax = InvalidTransactionId)

 Then there are 12 bytes for xmin/xmax/cid. I did sweat something over this
 decision... but maybe it was a wasted effort if the 12 bytes end up
 occupying 16 bytes anyway.

Actually, if you need to squeeze a few more bits into that word, the
thing to do would be to get rid of storing the tuple length there.
This would involve adding the same type of indirection header that
we use for HeapTuples, so that the length would be available at need
without going back to the item pointer.  It'd be an invasive code change
but reasonably straightforward, and then you'd have room for normal hint
bits.  Squeezing cmin in there is just fantasy though.

regards, tom lane



Re: [HACKERS] psql with Function Type in \df

2010-02-25 Thread Bruce Momjian

Did we ever get tab completion support for these backslash commands?

---

David Fetter wrote:
 On Fri, Apr 17, 2009 at 04:42:31PM -0400, Alvaro Herrera wrote:
  David Fetter wrote:
  
   Is this any better?
  
  So what happens if I do \dfaQ?  It should throw an error, yes?
 
 Interesting question.
  
  This help line:
  
   + fprintf(output, _(  \\df[S+] [PATTERN]  list functions.  Add a, n, 
   t, w for aggregate, normal, trigger, window\n));
  
  needs shortening to below 80 chars (or maybe split it in two lines.
  Just make sure they are a single translation item).
  
  It also seems like we're missing tab completion support for this.
 
 This is another interesting question.  I notice that the tab
 completion doesn't support things like \dit.
 
 Should I add that as a separate patch?
 
 Cheers,
 David.
 -- 
 David Fetter da...@fetter.org http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter  XMPP: david.fet...@gmail.com
 
 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate
 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Tom Lane
Karl Schnaitter karl...@gmail.com writes:
 Of course, PG can't support indexing with incorrect functions. However,
 it's worthwhile to guard against too much damage being done if the user's
 function has a bug. Maybe I'm wrong? Maybe an index tuple with a dangling
 pointer is actually harmless?

No, it's far from harmless.  As soon as that heap TID gets filled with
an unrelated tuple, you run the risk of indexscans alighting on and
perhaps modifying the wrong tuple.

regards, tom lane



Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Bruce Momjian
Robert Haas wrote:
 On Thu, Feb 25, 2010 at 4:48 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Thu, Feb 25, 2010 at 3:17 PM, Bruce Momjian br...@momjian.us wrote:
   Robert Haas wrote:
   On Thu, Feb 18, 2010 at 11:51 PM, Andrew Dunstan and...@dunslane.net 
   wrote:
Bruce Momjian wrote:
I have a TODO on fixing some of the typedef finding. But I can 
generate
an up to date version of the list Bruce last used in a day or two, 
and then
get this better whacked into shape for another run at the more 
traditional
time.
   
I am ready to run pgindent whenever requested.
There is an updated typedefs list at
http://www.pgbuildfarm.org/cgi-bin/typedefs.pl
  
   Bruce, you want to go ahead and do this?  The sooner the better, AFAICS.
  
   I am ready.
 
  Does that mean you're going to do it, or are you waiting for some sort
  of OK?  I believe everyone who expressed an opinion is in favor.
 
  I was waiting a few hours to get feedback.  I will do it at 0100 GMT
  (2000 EST).
 
 OK, great!  Thanks for the clarification.

Done.  The diff is here:

http://momjian.us/tmp/pgindent.diff

and I checked into CVS a copy of the typedef list I used from Andrew
Dunstan.



Re: [HACKERS] Why isn't stats_temp_directory automatically created?

2010-02-25 Thread Bruce Momjian

I assume we decided we didn't want this.

---

Tom Lane wrote:
 Fujii Masao masao.fu...@gmail.com writes:
  Here is the revised patch; If stats_temp_directory indicates the symlink,
  we pursue the chain of symlinks and create the referenced directory.
 
 I looked at this patch a bit.  I'm still entirely unconvinced that we
 should be doing this at all --- if the directory is not there, there's
 a significant probability that there's something wrong that is beyond
 the backend's ability to understand or correct.  However, even ignoring
 that objection, the patch is not ready to commit for a number of
 reasons:
 
 * Repeating the operation every time the stats file is written doesn't
 seem like a particularly good idea; it eats cycles, and if the directory
 disappears during live operation then there is *definitely* something
 fishy going on.  Can't we fix it so that the work is only done when the
 path setting changes?  (In principle you could do it in
 assign_pgstat_temp_directory(), but I think something would be needed to
 ensure that only the stats collector process actually tries to create
 the directory.  Or maybe it would be simplest to try to run the code only
 when we get a failure from trying to create the stats temp file.)
 
 * I don't think the mkdir_p code belongs in fd.c.  It looks like
 you copied-and-pasted it from initdb.c, which isn't any good either;
 we don't want to maintain multiple copies of this.  Maybe a new
 src/port/ file is indicated.
 
 * elog(LOG) is not exactly an adequate response if the final chdir fails
 --- you have just broken the process beyond recovery.  That alone may be
 sufficient reason to reject the attempt to deal with symlinks.  As far
 as pgstat_temp_directory is concerned, I'm not sure of the point of
 making the GUC point to a symlink anyway --- if you have a GUC why not
 just point it where you want the directory to be?
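A rough sketch of what such a src/port/ mkdir_p() could look like (simplified and hypothetical; the real code would need the permission handling and error reporting discussed above):

```c
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

/*
 * Sketch of a recursive mkdir for a shared src/port/ file.  Creates each
 * missing ancestor in turn; an already-existing directory is not an error.
 * Error handling is deliberately minimal for illustration.
 */
static int
mkdir_p(const char *path, mode_t mode)
{
    char        buf[1024];
    char       *p;

    if (strlen(path) >= sizeof(buf))
    {
        errno = ENAMETOOLONG;
        return -1;
    }
    strcpy(buf, path);

    /* create each intermediate component, skipping any leading '/' */
    for (p = buf + 1; *p; p++)
    {
        if (*p == '/')
        {
            *p = '\0';
            if (mkdir(buf, mode) != 0 && errno != EEXIST)
                return -1;
            *p = '/';
        }
    }
    /* finally, the full path itself */
    if (mkdir(buf, mode) != 0 && errno != EEXIST)
        return -1;
    return 0;
}
```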
 
   regards, tom lane
 



Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Jeroen Vermeulen

Robert Haas wrote:

On Wed, Feb 17, 2010 at 5:52 PM, Jeroen Vermeulen j...@xs4all.nl wrote:

I may have cut this out of my original email for brevity... my impression is
that the planner's estimate is likely to err on the side of scalability, not
best-case response time; and that this is more likely to happen than an
optimistic plan going bad at runtime.


Interestingly, most of the mistakes that I have seen are in the
opposite direction.


I guess there's not much we can do about those, except decide after 
running that it's worth optimizing for specific values.




Yeb points out a devil in the details though: the cost estimate is unitless.
 We'd have to have some orders-of-magnitude notion of how the estimates fit
into the picture of real performance.


I'm not sure to what extent you can assume that the cost is
proportional to the execution time.  I seem to remember someone
(Peter?) arguing that they're not related by any fixed ratio, partly
because things like page costs vs. cpu costs didn't match physical
reality, and that in fact some attempts to gather better empirically
better values for things like random_page_cost and seq_page_cost
actually ended up making the plans worse rather than better.  It would
be nice to see some research in this area...


Getting representative workloads and machine configurations may make 
that hard.  :/


But all we really want is a check for really obscene costs, as an extra 
stopgap so we don't have to wait for the thing to execute before we 
decide it's too costly.  Surely there must be some line we can draw.



Jeroen



Re: [HACKERS] visibility maps and heap_prune

2010-02-25 Thread Bruce Momjian

Whatever happened to this?  It was in the first 9.0 commitfest but was
returned with feedback but never updated:

https://commitfest.postgresql.org/action/patch_view?id=75

---

Pavan Deolasee wrote:
 ISTM that the PD_ALL_VISIBLE flag and the visibility map bit can be set at
 the end of pruning operation if we know that there are only tuples visible
 to all transactions left in the page. The way pruning is done, I think it
 would be straightforward to get this information.
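Roughly, the idea could be sketched as follows (hypothetical names; the real logic would live in heap_page_prune):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

typedef struct PruneStateSketch
{
    bool        all_visible;    /* every surviving tuple visible to all? */
    TransactionId OldestXmin;   /* xids below this horizon are visible to all */
} PruneStateSketch;

/*
 * Hypothetical sketch of Pavan's suggestion: while pruning, track whether
 * every tuple left on the page is visible to all transactions.  If the flag
 * survives the scan, PD_ALL_VISIBLE and the visibility-map bit can be set
 * at the end of the pruning operation.  Illustrative only -- not actual
 * PostgreSQL code (real xid comparisons must also handle wraparound).
 */
static void
prune_observe_tuple(PruneStateSketch *st, TransactionId xmin, bool committed)
{
    if (!committed || xmin >= st->OldestXmin)
        st->all_visible = false;    /* some transaction may not see it yet */
}
```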
 
 Thanks,
 Pavan
 
 -- 
 Pavan Deolasee
 EnterpriseDB http://www.enterprisedb.com



Re: [HACKERS] Why isn't stats_temp_directory automatically created?

2010-02-25 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 I assume we decided we didn't want this.

I thought the risk/reward ratio was pretty bad.

regards, tom lane



Re: [HACKERS] Why isn't stats_temp_directory automatically created?

2010-02-25 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  I assume we decided we didn't want this.
 
 I thought the risk/reward ratio was pretty bad.

Yea, the symlink issue killed it for me.



Re: [HACKERS] A thought: should we run pgindent now?

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 9:03 PM, Bruce Momjian br...@momjian.us wrote:
 Done.  The diff is here:

        http://momjian.us/tmp/pgindent.diff

 and I checked into CVS a copy of the typedef list I used from Andrew
 Dunstan.

Cool, thanks.  Let the rebasing (if any) begin.

...Robert



Re: [HACKERS] Assertion failure in walreceiver

2010-02-25 Thread Greg Stark
On Thu, Feb 25, 2010 at 7:31 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 While we're at it, the error message doesn't seem right:

 FATAL:  recovery connections cannot start because the
 recovery_connections parameter is disabled on the WAL source server

 recovery_connections is on by default, the real problem is that
 archive_command and max_wal_senders are disabled.

So do I understand this right: if you have archive_mode disabled and
try to start a slave, you get this error. Then when you shut down your
master, set archive_mode on, bring it up, and try again, you'll
*still* get this message, because the last checkpoint will be the final
shutdown checkpoint where archive_mode was still disabled?


-- 
greg



Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 9:48 PM, Jeroen Vermeulen j...@xs4all.nl wrote:
 Robert Haas wrote:

 On Wed, Feb 17, 2010 at 5:52 PM, Jeroen Vermeulen j...@xs4all.nl wrote:

 I may have cut this out of my original email for brevity... my impression
 is
 that the planner's estimate is likely to err on the side of scalability,
 not
 best-case response time; and that this is more likely to happen than an
 optimistic plan going bad at runtime.

 Interestingly, most of the mistakes that I have seen are in the
 opposite direction.

 I guess there's not much we can do about those, except decide after running
 that it's worth optimizing for specific values.


 Yeb points out a devil in the details though: the cost estimate is
 unitless.
  We'd have to have some orders-of-magnitude notion of how the estimates
 fit
 into the picture of real performance.

 I'm not sure to what extent you can assume that the cost is
 proportional to the execution time.  I seem to remember someone
 (Peter?) arguing that they're not related by any fixed ratio, partly
 because things like page costs vs. cpu costs didn't match physical
 reality, and that in fact some attempts to gather better empirically
 better values for things like random_page_cost and seq_page_cost
 actually ended up making the plans worse rather than better.  It would
 be nice to see some research in this area...

 Getting representative workloads and machine configurations may make that
 hard.  :/

 But all we really want is a check for really obscene costs, as an extra
 stopgap so we don't have to wait for the thing to execute before we decide
 it's too costly.  Surely there must be some line we can draw.

I actually think there isn't any clean line.  Obscene is in the eye of
the beholder.  Frankly, I think this discussion is getting off into
the weeds.  It would be nice, perhaps, to have a feature that will
detect when the generic plan is the suxxor and attempt to find a
better one, but that's really, really hard for a whole bunch of
reasons.  Bruce's suggestion that we should provide some user control
over whether we plan at bind time or execute time seems likely to be
(1) considerably simpler to implement, (2) considerably easier to get
consensus on, and (3) capable of giving 90% of the benefit for only a
slightly higher inconvenience factor.

...Robert



Re: [HACKERS] visibility maps and heap_prune

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 9:49 PM, Bruce Momjian br...@momjian.us wrote:
 Whatever happened to this?  It was in the first 9.0 commitfest but was
 returned with feedback but never updated:

        https://commitfest.postgresql.org/action/patch_view?id=75

Well, the patch author chose not to pursue it.  It's clearly far too
late now, at least for 9.0.

I'm pleased to see that you're not finding many patches that just
completely slipped through the cracks - seems like most things were
withdrawn on purpose, had problems, and/or were not pursued by the
author.  I think the CommitFest process has done a pretty good job of
making sure everything gets looked at.  The only small chink I see is
that there may be some patches (especially small ones or from
first-time contributors) which escaped getting added to a CommitFest
in the first place; and we don't really have a way of policing that.
Usually someone replies to the patch author and suggests adding it to
the next CF, but I can't swear that that happens in every case.

...Robert



Re: [HACKERS] visibility maps and heap_prune

2010-02-25 Thread Bruce Momjian
Robert Haas wrote:
 On Thu, Feb 25, 2010 at 9:49 PM, Bruce Momjian br...@momjian.us wrote:
  Whatever happened to this?  It was in the first 9.0 commitfest but was
  returned with feedback but never updated:
 
         https://commitfest.postgresql.org/action/patch_view?id=75
 
 Well, the patch author chose not to pursue it.  It's clearly far too
 late now, at least for 9.0.
 
 I'm pleased to see that you're not finding many patches that just
 completely slipped through the cracks - seems like most things were
 withdrawn on purpose, had problems, and/or were not pursued by the
 author.  I think the CommitFest process has done a pretty good job of
 making sure everything gets looked at.  The only small chink I see is
 that there may be some patches (especially small ones or from
 first-time contributors) which escaped getting added to a CommitFest
 in the first place; and we don't really have a way of policing that.
 Usually someone replies to the patch author and suggests adding it to
 the next CF, but I can't swear that that happens in every case.

Yea, the complex issues are often lost, and I stopped tracking
commitfest items so I don't actually know if anything that got into the
commit fest was eventually just dropped by the author.  We can say we
don't need to pursue those but they might be valuable/important.



Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I actually think there isn't any clean line.  Obscene is in the eye of
 the beholder.  Frankly, I think this discussion is getting off into
 the weeds.  It would be nice, perhaps, to have a feature that will
 detect when the generic plan is the suxxor and attempt to find a
 better one, but that's really, really hard for a whole bunch of
 reasons.  Bruce's suggestion that we should provide some user control
 over whether we plan at bind time or execute time seems likely to be
 (1) considerably simpler to implement, (2) considerably easier to get
 consensus on, and (3) capable of giving 90% of the benefit for an only
 higher inconvenience factor.

It's not going to be easier to implement.  Yeah, it would be easy to
provide a global switch via a GUC setting, but that's not going to be
helpful, because this is the sort of thing that really needs to be
managed per-query.  Almost any nontrivial application is going to have
some queries that really need the custom plan and many that don't.
If people just turn the GUC on we might as well throw away the plan
caching mechanism altogether.  But putting support for a per-query level
of control into the protocol (and then every client library) as well as
every PL is going to be painful to implement, and even more painful to
use.

I still like the idea of automatically replanning with the known
parameter values, and noting whether the result plan was estimated to be
noticeably cheaper than the generic plan, and giving up on generating
custom plans if we didn't observe any such win over N tries.
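The give-up-after-N-tries heuristic could be sketched like this (hypothetical names and thresholds, not actual PostgreSQL code):

```c
#include <stdbool.h>

/*
 * Sketch of the replanning heuristic described above.  Each time a custom
 * plan is built with the known parameter values, compare its estimated cost
 * to the generic plan's; after MAX_CUSTOM_TRIES attempts with no noticeable
 * win, stop generating custom plans for this cached statement.
 */
#define MAX_CUSTOM_TRIES 5
#define MIN_WIN_RATIO    0.9    /* custom must beat 90% of the generic cost */

typedef struct CachedPlanState
{
    double      generic_cost;       /* estimated cost of the generic plan */
    int         custom_tries;       /* custom plans attempted so far */
    bool        use_generic_only;   /* have we given up on custom plans? */
} CachedPlanState;

static bool
should_use_custom_plan(CachedPlanState *st, double custom_cost)
{
    if (st->use_generic_only)
        return false;

    st->custom_tries++;
    if (custom_cost < st->generic_cost * MIN_WIN_RATIO)
        return true;                /* the custom plan was a real win */

    if (st->custom_tries >= MAX_CUSTOM_TRIES)
        st->use_generic_only = true;    /* no win in N tries: give up */
    return false;
}
```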

regards, tom lane



Re: [HACKERS] visibility maps and heap_prune

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 10:32 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Thu, Feb 25, 2010 at 9:49 PM, Bruce Momjian br...@momjian.us wrote:
  Whatever happened to this?  It was in the first 9.0 commitfest but was
  returned with feedback but never updated:
 
         https://commitfest.postgresql.org/action/patch_view?id=75

 Well, the patch author chose not to pursue it.  It's clearly far too
 late now, at least for 9.0.

 I'm pleased to see that you're not finding many patches that just
 completely slipped through the cracks - seems like most things were
 withdrawn on purpose, had problems, and/or were not pursued by the
 author.  I think the CommitFest process has done a pretty good job of
 making sure everything gets looked at.  The only small chink I see is
 that there may be some patches (especially small ones or from
 first-time contributors) which escaped getting added to a CommitFest
 in the first place; and we don't really have a way of policing that.
 Usually someone replies to the patch author and suggests adding it to
 the next CF, but I can't swear that that happens in every case.

 Yea, the complex issues are often lost, and I stopped tracking
 commitfest items so I don't actually know if anything that got into the
 commit fest was eventually just dropped by the author.  We can say we
  don't need to pursue those but they might be valuable/important.

Yes, they could be valuable/important - anything that falls into that
category is probably going to turn into a TODO list item at this
point.

...Robert



Re: [HACKERS] Allow vacuumdb to only analyze

2010-02-25 Thread Jaime Casanova
On Thu, Feb 25, 2010 at 5:09 PM, Bruce Momjian br...@momjian.us wrote:

 This is implemented in 9.0 from vacuumdb:

          -Z, --analyze-only              only update optimizer hints


maybe just noise, but it's not better to say optimizer statistics
instead of optimizer hints?

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Robert Haas
On Thu, Feb 25, 2010 at 10:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I actually think there isn't any clean line.  Obscene is in the eye of
 the beholder.  Frankly, I think this discussion is getting off into
 the weeds.  It would be nice, perhaps, to have a feature that will
 detect when the generic plan is the suxxor and attempt to find a
 better one, but that's really, really hard for a whole bunch of
 reasons.  Bruce's suggestion that we should provide some user control
 over whether we plan at bind time or execute time seems likely to be
 (1) considerably simpler to implement, (2) considerably easier to get
 consensus on, and (3) capable of giving 90% of the benefit for only a
 slightly higher inconvenience factor.

 It's not going to be easier to implement.  Yeah, it would be easy to
 provide a global switch via a GUC setting, but that's not going to be
 helpful, because this is the sort of thing that really needs to be
 managed per-query.  Almost any nontrivial application is going to have
 some queries that really need the custom plan and many that don't.
 If people just turn the GUC on we might as well throw away the plan
 caching mechanism altogether.

I agree.  A GUC is a really bad idea.

 But putting support for a per-query level
 of control into the protocol (and then every client library) as well as
 every PL is going to be painful to implement, and even more painful to
 use.

I suppose I should have learned by now not to argue with you over
technical points, but I don't see why this should be painful.  I mean,
it'll be a lot of work and it'll in the end touch a lot of different
parts of the code, but work != pain, and I don't see any reason why
the problem can't be attacked incrementally.  I'm also deeply
unconvinced that any other solution will be as satisfactory.

 I still like the idea of automatically replanning with the known
 parameter values, and noting whether the result plan was estimated to be
 noticeably cheaper than the generic plan, and giving up on generating
 custom plans if we didn't observe any such win over N tries.

Isn't part of the problem here precisely that the cost estimates for
the generic plan might not be too accurate?  The only instances of
this problem I've run across are the ones where MCVs need a different
treatment, and the problem isn't necessarily that the new estimate is
cheaper so much as that the old estimate isn't going to turn out as
predicted.  Also, there's no guarantee that the distribution of values
tried will be random - there's the case where non-MCVs are tried for
the first N times and then an MCV is tried on try N+1.

...Robert



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
Tom,

Actually, if you need to squeeze a few more bits into that word, the
 thing to do would be to get rid of storing the tuple length there.
 This would involve adding the same type of indirection header that
 we use for HeapTuples, so that the length would be available at need
 without going back to the item pointer.  I


I feel the other one is easy: store the hint bits inside the ItemId, in
place of the size. We have 15 bits there. Whenever the size is required, we
need to follow the offset, go to the corresponding tuple, and take the
size from there. The change seems minimal, but please bear with me if
I am ignorant about something.



   Squeezing cmin in there is just fantasy though.


I think we can get away with this by making anyone who inserts and selects in
the same transaction go to the heap to find visibility. In the index tuple
hint bits, we can note whether the command is a simple insert/update/delete.
By a simple insert, I mean one that doesn't contain a select. If that is the
case, the tuple can be made visible to statements within the same
transaction. We can even document that people can insert a savepoint between
their insert and select; this would increment the xid and make that tuple
visible within the same transaction. All of that seems possible.

Thanks,
Gokul.


Re: [HACKERS] Allow vacuumdb to only analyze

2010-02-25 Thread Bruce Momjian
Jaime Casanova wrote:
 On Thu, Feb 25, 2010 at 5:09 PM, Bruce Momjian br...@momjian.us wrote:
 
  This is implemented in 9.0 from vacuumdb:
 
         -Z, --analyze-only            only update optimizer hints
 
 
 maybe just noise, but it's not better to say optimizer statistics
 instead of optimizer hints?

Wow, I never noticed that, but --analyze used hints too, and in 8.4 as
well.  I have updated it to say statistics in the attached patch.
The manual page does not call them hints.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +
Index: src/bin/scripts/vacuumdb.c
===
RCS file: /cvsroot/pgsql/src/bin/scripts/vacuumdb.c,v
retrieving revision 1.36
diff -c -c -r1.36 vacuumdb.c
*** src/bin/scripts/vacuumdb.c	26 Feb 2010 02:01:20 -	1.36
--- src/bin/scripts/vacuumdb.c	26 Feb 2010 04:13:42 -
***
*** 336,343 
  	printf(_(  -q, --quiet don't write any messages\n));
  	printf(_(  -t, --table='TABLE[(COLUMNS)]'  vacuum specific table only\n));
  	printf(_(  -v, --verbose   write a lot of output\n));
! 	printf(_(  -z, --analyze   update optimizer hints\n));
! 	printf(_(  -Z, --analyze-only  only update optimizer hints\n));
  	printf(_(  --help  show this help, then exit\n));
  	printf(_(  --version   output version information, then exit\n));
  	printf(_(\nConnection options:\n));
--- 336,343 
  	printf(_(  -q, --quiet don't write any messages\n));
  	printf(_(  -t, --table='TABLE[(COLUMNS)]'  vacuum specific table only\n));
  	printf(_(  -v, --verbose   write a lot of output\n));
! 	printf(_(  -z, --analyze   update optimizer statistics\n));
! 	printf(_(  -Z, --analyze-only  only update optimizer statistics\n));
  	printf(_(  --help  show this help, then exit\n));
  	printf(_(  --version   output version information, then exit\n));
  	printf(_(\nConnection options:\n));



Re: [HACKERS] Path separator

2010-02-25 Thread Bruce Momjian

I assume we never came to any conclusion on this.

---

Magnus Hagander wrote:
 Tom Lane wrote:
  Magnus Hagander mag...@hagander.net writes:
  Answering myself here: the filesize for the frontend only part is
  about 2k on this system.
  
  Long meeting, time for coding.. :-) Here's a rough patch. Is this about
  what you had in mind?
  
  Hm, this seems to make the namespace pollution problem worse not better,
  because of de-staticizing so many functions.  I guess we could stick pg_
  prefixes on all of them.  That's a bit ugly; anybody have a better idea?
 
 Not really.
 
 
  It might be that it'd be better to push a couple more of the simple
  path-munging functions (like join_path_components) over into the new
  file, so as to have a more logical division of responsibilities.
  In my mind the two key areas here are path syntax knowledge and
  extracting absolute paths from environmental information.  The second
  part seems to be the one that doesn't belong on the frontend side.
 
 What would be the gain there? To be able to re-static-ify for example
 skip_drive? Or just a nicer division?
 
 We should probably also consider moving get_home_path() over to the
 frontend one, and get rid of the copy that's in fe-connect.c.
 
 
 //Magnus
 
 

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] visibility maps and heap_prune

2010-02-25 Thread Pavan Deolasee
On Fri, Feb 26, 2010 at 8:19 AM, Bruce Momjian br...@momjian.us wrote:


 Whatever happened to this?  It was in the first 9.0 commitfest but was
 returned with feedback but never updated:


Though Alex did some useful tests and review, and in fact confirmed that the
VACUUM time dropped from 16494 msec to 366 msec, I kept waiting for Heikki's
decision on the general direction of the patch and lost interest in the
meantime. If we are still interested in this, I can work out a patch and
submit it for the next release, if not this one.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram


 First of all, volatility is not the only issue. The ordering ops could also
 be incorrect, e.g., violate the transitivity property. there is no reliable
 way to determine if a function is volatile and/or incorrectly specified.


No, it is the only issue. If you create a datatype with a volatile function
for its ordering ops, then you have a broken data type (the one you are
referring to). So they are one and the same.

Thanks,
Gokul.


Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
 No, it's far from harmless.  As soon as that heap TID gets filled with
 an unrelated tuple, you run the risk of indexscans alighting on and
 perhaps modifying the wrong tuple.


Tom,
  In function-based indexes on functions we suspect to be volatile, or
on datatypes we suspect to be broken, can we have additional checks to
ensure that this does not happen? I mean, do you think that would solve the
issue?

Thanks,
Gokul.


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Feb 25, 2010 at 10:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I still like the idea of automatically replanning with the known
 parameter values, and noting whether the result plan was estimated to be
 noticeably cheaper than the generic plan, and giving up on generating
 custom plans if we didn't observe any such win over N tries.

 Isn't part of the problem here precisely that the cost estimates for
 the generic plan might not be too accurate?

No, the estimates for the generic plan are typically fine *in
themselves*; they only look bad when you compare them to what you can do
with knowledge of specific parameter values.  An example is that the
default selectivity estimate for a range query (WHERE x > something AND
x < somethingelse) is 0.005.  In a large number of real cases, the
actual selectivity is way smaller, and you can determine that if you
know the actual comparison constants.  But it's tough to argue for
decreasing the default guess --- it's already small enough that you
could get screwed badly in the other direction if you queried a wide
range.

There may be some cases where the generic plan is wrongly estimated to
be cheaper than a custom plan that's actually better, but I haven't seen
many.  If that were happening a lot then people would be reporting that
the advice to force a replan via EXECUTE or whatever doesn't help.
I don't think that there is any body of evidence at all that would
justify undertaking extremely expensive development of an extremely
painful-to-use feature to deal with that type of case.

 Also, there's no guarantee that the distribution of values
 tried will be random - there's the case where non-MCVs are tried for
 the first N times and then an MCV is tried on try N+1.

Sure, there are always going to be cases where you lose.  Pushing the
responsibility onto the user doesn't really fix that though.  It's not
apparent to me that users are in that much better position than we are
to determine when a custom plan is helpful.

BTW, if it wasn't clear, I would be in favor of allowing the cutoff N to
be adjustable, as well as the cost ratio that's considered to constitute
a win.  So there would be some wiggle room to deal with that type of
situation.

regards, tom lane



Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Alex Hunsaker
On Thu, Feb 25, 2010 at 20:40, Tom Lane t...@sss.pgh.pa.us wrote:
 It's not going to be easier to implement.  Yeah, it would be easy to
 provide a global switch via a GUC setting, but that's not going to be
 helpful, because this is the sort of thing that really needs to be
 managed per-query.  Almost any nontrivial application is going to have
 some queries that really need the custom plan and many that don't.
 If people just turn the GUC on we might as well throw away the plan
 caching mechanism altogether.  But putting support for a per-query level
 of control into the protocol (and then every client library) as well as
 every PL is going to be painful to implement, and even more painful to
 use.

Not to mention you can already do this more or less client side with a
nice driver.  For example with DBD::Pg i can say:

$sth = $dbh->prepare('select * from foo where x = ?', {'pg_server_prepare' => 1});

To get a prepared plan (it is also the default).

If for a particular query I know that I will get a better plan without
prepare, I can just change that 1 to a 0.  Or I can set it globally
via $dbh->{'pg_server_prepare'} = 0;

In other words, I'm not quite sure what this would buy us.



Re: [HACKERS] A thought on Index Organized Tables

2010-02-25 Thread Gokulakannan Somasundaram
On Fri, Feb 26, 2010 at 9:54 AM, Gokulakannan Somasundaram 
gokul...@gmail.com wrote:


 No, it's far from harmless.  As soon as that heap TID gets filled with
 an unrelated tuple, you run the risk of indexscans alighting on and
 perhaps modifying the wrong tuple.


 Tom,
  I think this will never happen. The only issue is when we need to go
back to the index from the heap, to update the timestamps of
update/delete.

Gokul.


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-25 Thread Alex Hunsaker
On Thu, Feb 25, 2010 at 21:28, Alex Hunsaker bada...@gmail.com wrote:
 Not to mention you can already do this more or less client side with a
 nice driver.
 [ uninformed noise ... ]

I did seem to miss the part where everyone thinks this is a crock...
But I don't remember seeing numbers on parse time or how much
bandwidth this would potentially save.  People seem to think it would
be a big savings for just those 2 reasons?  Or did I miss some other
benefit?



  1   2   >