Re: [BUGS] log_truncate_on_rotation=on is not truncating

2012-07-15 Thread Tom Lane
I wrote:
> The relevant bit of the code is

> /*
>  * The requirements here are to choose the next time > now that is a
>  * "multiple" of the log rotation interval.  "Multiple" can be interpreted
>  * fairly loosely.  In this version we align to log_timezone rather than
>  * GMT.
>  */

Oh, no, scratch that --- there's nothing wrong with that calculation.
Actually I think the issue is with the logic that decides whether to
truncate or append during rotation:

 * Decide whether to overwrite or append.  We can overwrite if (a)
 * Log_truncate_on_rotation is set, (b) the rotation was triggered by
 * elapsed time and not something else, and (c) the computed file name is
 * different from what we were previously logging into.
 *
 * Note: during the first rotation after forking off from the postmaster,
 * last_file_name will be NULL.  (We don't bother to set it in the
 * postmaster because it ain't gonna work in the EXEC_BACKEND case.) So we
 * will always append in that situation, even though truncating would
 * usually be safe.

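For reference, here is a minimal, self-contained illustration of the decision
that comment describes.  This is only a sketch, not the actual syslogger code:
the function and parameter names are hypothetical, and it just mirrors
conditions (a)-(c) plus the NULL last_file_name case.

#include <stdbool.h>
#include <string.h>

/*
 * Illustration only (not PostgreSQL source): pick the fopen() mode a
 * rotation would use.  "w" truncates the file, "a" appends.  last_file_name
 * can be NULL right after postmaster start, in which case we always append.
 */
const char *
choose_open_mode(bool truncate_on_rotation, bool time_based_rotation,
                 const char *filename, const char *last_file_name)
{
    if (truncate_on_rotation && time_based_rotation &&
        last_file_name != NULL &&
        strcmp(filename, last_file_name) != 0)
        return "w";             /* safe to truncate */
    return "a";                 /* append in every other case */
}
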
This dates from around 2004, and at the time we did not think it was
very important if the first rotation opportunity failed to truncate.
Of course, if you restart the postmaster so often that it never gets to
the second rotation opportunity, that theory falls down.  But I guess
it's worth asking why you think it's a good plan to reboot so often.
Most people running Postgres are appalled by the idea that they might
have to have any downtime at all.

regards, tom lane

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] log_truncate_on_rotation=on is not truncating

2012-07-15 Thread Tom Lane
[ just got around to looking at this bug report, sorry for the delay ]

Viswanatham Kiran Kumar  writes:
> Hi Chan,
>   I think this is a configuration issue.

>   This issue is happening because log files are always appended to when
> the server is restarted. From your configuration I can see that
> log_rotation_age=1d (1 day), which is greater than the interval between
> your daily scheduled shutdowns/restarts. That is why the log files keep
> growing.

Yeah, I think that's part of it, but it does still seem like a bug.
The relevant bit of the code is

/*
 * The requirements here are to choose the next time > now that is a
 * "multiple" of the log rotation interval.  "Multiple" can be interpreted
 * fairly loosely.  In this version we align to log_timezone rather than
 * GMT.
 */
rotinterval = Log_RotationAge * SECS_PER_MINUTE;  /* convert to seconds */
now = (pg_time_t) time(NULL);
tm = pg_localtime(&now, log_timezone);
now += tm->tm_gmtoff;
now -= now % rotinterval;
now += rotinterval;
now -= tm->tm_gmtoff;
next_rotation_time = now;

Looking at this, it seems like it will choose a next_rotation_time that
is more than rotinterval beyond "now" if you have a log_timezone for
which advancing by gmtoff moves past the next natural rotinterval
multiple.  So if the server gets shut down before that time elapses,
it never will think it should rotate the logfile.

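For concreteness, here is a small standalone program that repeats the quoted
arithmetic with hard-wired illustrative values (a 1-day log_rotation_age and a
fixed UTC+9 offset standing in for log_timezone's tm_gmtoff).  It is only a
sketch of the calculation, not server code.

#include <stdio.h>
#include <time.h>

int
main(void)
{
    /* Illustrative values only */
    const long  rotinterval = 1440 * 60;    /* log_rotation_age of 1d, in seconds */
    const long  gmtoff = 9 * 3600;          /* pretend log_timezone is UTC+9 */
    time_t      now = 1342382400;           /* 2012-07-15 20:00:00 UTC */
    time_t      next = now;

    next += gmtoff;                 /* shift into local time */
    next -= next % rotinterval;     /* round down to a rotinterval multiple */
    next += rotinterval;            /* step to the next multiple */
    next -= gmtoff;                 /* shift back to UTC-based epoch time */

    printf("now   = %ld\nnext  = %ld\ndelta = %ld seconds\n",
           (long) now, (long) next, (long) (next - now));
    return 0;
}
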
Chan, could you say exactly what log_timezone setting is in use on the
servers that are acting funny?

regards, tom lane



Re: [BUGS] BUG #6733: All Tables Empty After pg_upgrade (PG 9.2.0 beta 2)

2012-07-15 Thread Tom Lane
Mike Wilson  writes:
> I've had some time to examine this more closely over the weekend.  It
> appears that pg_upgrade for 9.2b2 segfaults, which more than likely has
> something to do with the resulting converted database appearing to
> have no rows.

Yeah, more than likely :-(.  Could we see a stack trace from the
segfault?

> Of possible note in this DB is that the previous DBA renamed the
> "postgres" user.

Hmm.  There is a known bug in beta2 that's triggered by old and new
clusters not having the same name for the bootstrap superuser; although
I don't recall that the symptoms included a segfault.  In any case,
I'd suggest making sure the new cluster is initdb'd under the same
account that currently owns the old cluster.
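
A quick way to double-check what the old cluster's bootstrap superuser is
actually named before initdb'ing the new one (a sketch; it relies on the
bootstrap role having OID 10, the same convention the pg_authid UPDATE
elsewhere in this thread uses):

-- run against the OLD cluster as a superuser
SELECT rolname FROM pg_authid WHERE oid = 10;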

regards, tom lane



[BUGS] Re: BUG #6712: PostgreSQL 9.2 beta2: alter table drop constraint does not work on inherited master table

2012-07-15 Thread Noah Misch
On Mon, Jul 02, 2012 at 04:16:31PM +0530, Amit Kapila wrote:
> > From: pgsql-bugs-ow...@postgresql.org 
> > [mailto:pgsql-bugs-ow...@postgresql.org] On Behalf Of 
> > miroslav.s...@fordfrog.com
> > Sent: Saturday, June 30, 2012 4:28 PM

> > test=# create table test_constraints (id int, val1 varchar, val2 int, unique
> > (val1, val2));
> > NOTICE:  CREATE TABLE / UNIQUE will create implicit index
> > "test_constraints_val1_val2_key" for table "test_constraints"
> > CREATE TABLE
> > test=# create table test_constraints_inh () inherits (test_constraints);
> > CREATE TABLE
> > test=# alter table only test_constraints drop constraint
> > test_constraints_val1_val2_key;
> > ERROR:  constraint "test_constraints_val1_val2_key" of relation
> > "test_constraints_inh" does not exist

> > postgresql tries to drop the constraint even from the descendant table
> > though "only" is specified.

ONLY shouldn't be necessary, either.

> In function ATExecDropConstraint(), for the constraint
> "test_constraints_val1_val2_key", con->connoinherit is false,
> due to which it tries to drop the constraint from the child table as well.
> I have checked that when index_constraint_create() calls
> CreateConstraintEntry(), the noinherit flag passed is false.
> I think this is the reason for the failure.

Agreed.  The other non-CHECK callers also get this wrong:

[local] regression=# select contype,connoinherit,count(*) from pg_constraint group by 1,2;
 contype | connoinherit | count 
---------+--------------+-------
 p       | f            |     7
 x       | f            |     2
 c       | f            |    17
 f       | f            |     5

One can construct similar bugs around dropping foreign key and exclusion
constraints.  Though it may be irrelevant for command semantics, additionally
using connoinherit = 't' for contype = 't' (CONSTRAINT_TRIGGER) would more
accurately represent the nature of those constraints.
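
For example, here is a hypothetical foreign-key analogue of the UNIQUE repro
quoted above (table and constraint names are made up, and the expected error
is inferred from the connoinherit analysis rather than verified here):

create table fk_target (id int primary key);
create table fk_parent (ref int,
    constraint fk_parent_ref_fkey foreign key (ref) references fk_target);
create table fk_child () inherits (fk_parent);
-- expected, per the bug: this complains that fk_child has no such
-- constraint, even though ONLY restricts the command to fk_parent
alter table only fk_parent drop constraint fk_parent_ref_fkey;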

Care to prepare a patch with a test case addition?

Thanks,
nm



Re: [BUGS] BUG #6733: All Tables Empty After pg_upgrade (PG 9.2.0 beta 2)

2012-07-15 Thread Mike Wilson
I've had some time to examine this more closely over the weekend.  It appears 
that pg_upgrade for 9.2b2 segfaults, which more than likely has something to do 
with the resulting converted database appearing to have no rows.  Earlier in 
this thread I reported that I was able to get the upgrade to work and asked for 
this thread to be closed, but I was in error.  At the time I was also testing 
with the 9.1.4 pg_upgrade, which does work, and I mistakenly thought I had a 
successful 9.2b2 pg_upgrade run.  Apologies for the confusion, and let me know 
if you would like me to start a new thread.


...
pg_toast.pg_toast_948075_index: 948081 to 948081
c0.page_metadata_values_pkey: 948082 to 948082
c0.i_page_metadata_values_short_name: 948084 to 948084


Segmentation Fault (core dumped)
root@db4 /

My upgrade procedure is scripted and I hadn't noticed the core dump when I 
first reported the bug.  Here are the parameters of the run:
su - postgres -c "pg_upgrade --verbose --link \
--old-datadir=/opt/postgres/db/root/old --new-datadir=/opt/postgres/db/root/new \
--old-bindir=${OLDPG}/bin/64/ \
--new-bindir=${NEWPG}/bin/ --old-port=5432 --new-port=5920 --user=postgres"

As a test I have also been using the pg_upgrade from 9.1.4 which does work:

…
relname: pg_toast.pg_toast_948075: reloid: 948079 reltblspace: 
relname: pg_toast.pg_toast_948075_index: reloid: 948081 reltblspace: 
relname: c0.page_metadata_values_pkey: reloid: 948082 reltblspace: 
relname: c0.i_page_metadata_values_short_name: reloid: 948084 reltblspace: 


Database: postgres
relname: pg_catalog.pg_largeobject: reloid: 2613 reltblspace: 
relname: pg_catalog.pg_largeobject_loid_pn_index: reloid: 2683 reltblspace: 


Database: template1
relname: pg_catalog.pg_largeobject: reloid: 2613 reltblspace: 
relname: pg_catalog.pg_largeobject_loid_pn_index: reloid: 2683 reltblspace: 


executing: SELECT spclocation FROM pg_catalog.pg_tablespace WHERE spcname != 'pg_default' AND spcname != 'pg_global'
…


I've also tried a step-wise migration by first converting to PG914 and then to 
PG92b2.  This also fails with a similar segfault after the 
c0.i_page_metadata_values_short_name index.  

Of possible note in this DB is that the previous DBA renamed the "postgres" 
user.  As part of this conversion process I am renaming it back to its 
default.  I'm doing this before running pg_upgrade:
# shift jibjab su (postgres) account back to postgres rolname
su - postgres -c "psql -U jibjab c0 -c \"update pg_authid set 
rolname='postgres' where oid=10;\""

This probably isn't an issue, since the 9.1.4 conversion works, but I thought I 
should at least mention it.  Actually, I don't think pg_upgrade will run 
correctly if there isn't a postgres user, so I imagine I need to correct this 
issue before running the upgrade procedure anyway.

For now I am stymied in my attempt to upgrade and may have to look at getting 
the non-link version of the upgrade working.  That would be relatively painful, 
though, as this upgrade is for a commercial internet site that can't easily 
tolerate a long outage, and the production DB is over a TB in size.  I am 
really looking forward to 9.2's index-only scans because of the size of the 
DB!  

Cheers and thanks for any information you have on the issue.

Mike Wilson
mfwil...@gmail.com



On Jul 12, 2012, at 6:52 PM, Bruce Momjian wrote:

> On Thu, Jul 12, 2012 at 05:21:31PM -0700, Mike Wilson wrote:
>> This can be closed.  I figured out what I was doing wrong, which was that
>> after the conversion I was cleaning up the old datadir by deleting it,
>> which destroyed the hard links to the data, since I am using pg_upgrade
>> --link.
> 
> Uh, actually, a hard link has two directory entries pointing to the same
> file, so you can delete the old datadir and the new datadir should not
> be affected.
> 
> -- 
>  Bruce Momjian  http://momjian.us
>  EnterpriseDB http://enterprisedb.com
> 
>  + It's impossible for everything to be true. +