Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-12 Thread Simon Riggs
On Sat, Nov 12, 2011 at 9:40 PM, Clark C. Evans  wrote:

> Our company has some headaches in our application development
> and deployment process.  The chief problem is "creating stages",
> which to this audience means cloning a database efficiently,
> making and testing a few changes, perhaps recording the
> differences between databases, and then dropping the database.
>
> I'm eternally grateful to the person who pointed out that we
> should be using "CREATE DATABASE ... WITH TEMPLATE".  However,
> this has two big disadvantages.  First, it only works if you
> can kick the users off the clone.  Second, it still takes
> time, uses disk space, etc.  We have some big databases.

An interesting proposal. Thanks for taking the time to raise this.

The existing situation is that you need to either

1) quiesce the database so it can be copied locally

2) take a hot backup to create a clone on another server


(1) currently involves disconnection. Would a command to quiesce
sessions without disconnection be useful? We could get sessions to
sleep until woken after the copy. With large databases we would still
need to copy while sessions sleep to ensure a consistent database
after the copy.

Is (2) a problem for you? In what way?
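
(For reference, a minimal sketch of (2) using pg_basebackup, new in 9.1;
the host, user, and target directory below are placeholders:

    pg_basebackup -h master.example.com -U repuser \
        -D /var/lib/pgsql/clone -x -P

The clone can then be started as a second instance pointing at that
data directory.)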

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Large values for duration of COMMITs and slow queries. Due to large WAL config values?

2011-11-12 Thread Simon Riggs
On Sat, Nov 12, 2011 at 7:04 AM, Cody Caughlan  wrote:
> Postgres 9.1.1, master with 2 slaves via streaming replication.
>
> I've enabled slow query logging of 150ms and am seeing a large number
> of slow COMMITs:
>
> 2011-11-12 06:55:02 UTC pid:30897 (28/0-0) LOG:  duration: 232.398 ms
> statement: COMMIT
> 2011-11-12 06:55:08 UTC pid:30896 (27/0-0) LOG:  duration: 1078.789 ms
>  statement: COMMIT

Could well be related to checkpoints. Please try log_checkpoints = on
and see if there is a correlation.
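
It is just a postgresql.conf change plus a reload, roughly:

    log_checkpoints = on        # in postgresql.conf, reloadable

    SELECT pg_reload_conf();    -- from any superuser session

Then check whether the slow COMMITs line up with the logged
checkpoint start/complete times.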


> Recently we have bumped up wal_keep_segments and checkpoint_segments
> because we wanted to run long-running queries on the slaves and were
> receiving cancellation errors on the slaves. I think the master was
> recycling WAL logs from underneath the slave and thus canceling the
> queries. Hence, I believed I needed to crank up those values. It seems
> to work; I can run long queries (for statistics / reports) on the
> slaves just fine.

That reasoning isn't sound, because it doesn't work like that:
recycling WAL files has nothing to do with query cancellation on hot
standby.
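
(For what it's worth, in 9.1 the settings that actually govern standby
query conflicts are on the standby side, for example:

    max_standby_streaming_delay = 300s   # how long replay waits before canceling conflicting queries
    hot_standby_feedback = on            # standby asks the master to hold back conflicting cleanup

Those are the knobs to reach for, rather than wal_keep_segments.)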

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-12 Thread Igor Polishchuk
I'm in a similar position, cloning a database multiple times to provide
development and QA databases for multiple groups. The feature Clark
describes would greatly help many people in the same situation.
On the other hand, I imagine a feature like this would not be easy to
develop, and it would be useful to many but not all users, so I doubt
we can have it any time soon.


On Sat, 2011-11-12 at 16:40 -0500, Clark C. Evans wrote:
> Hello all!  
> 
> Our company has some headaches in our application development
> and deployment process.  The chief problem is "creating stages",
> which to this audience means cloning a database efficiently,
> making and testing a few changes, perhaps recording the
> differences between databases, and then dropping the database.
> 
> I'm eternally grateful to the person who pointed out that we
> should be using "CREATE DATABASE ... WITH TEMPLATE".  However,
> this has two big disadvantages.  First, it only works if you
> can kick the users off the clone.  Second, it still takes
> time, uses disk space, etc.  We have some big databases.
> 
> I was also thinking about using ZFS with PostgreSQL to do
> some sort of copy-on-write.  However, this would require me
> to spawn a whole *new* PostgreSQL instance.  In both of these
> cases, you lose your cache...
> 
> So, I was wondering... could PostgreSQL grow the ability to
> "CLONE" a database by re-using existing file system blocks,
> sharing them across databases?  This would perhaps be fast,
> keep the shared memory cache relevant for both the old copy
> and the clone, and remove WAL overhead.  Then, if the block
> has to be edited, it'd be cloned in memory, and the clone 
> would be flushed.  
> 
> I'm just imagining a world where "deploying" a new version
> of our software that makes very small catalog changes and
> tweaks a few rows would be... quick.  Quick to try, check,
> test, and even deploy on live servers.
> 
> Best,
> 
> Clark
> 





Re: [GENERAL] Re: Large values for duration of COMMITs and slow queries. Due to large WAL config values?

2011-11-12 Thread Igor Polishchuk
I saw such behavior a few years ago on multiple very busy databases
connected to the same EMC SAN. The SAN's cache got overwhelmed by the
databases' I/O, and the storage latency went up significantly. I don't
remember the exact latency now, but it was above 40ms.
Is everything OK with your storage system? Is it possible your databases
produce more IOPS than your storage can handle?


On Sat, 2011-11-12 at 13:47 -0800, Cody Caughlan wrote:
> I've run VACUUM ANALYZE on all my tables to make sure the house has
> been cleaned. I still see a lot of slow queries / commits, even on
> primary key lookups and well indexed tables.
> 
> /Cody
> 
> On Fri, Nov 11, 2011 at 11:04 PM, Cody Caughlan  wrote:
> > Postgres 9.1.1, master with 2 slaves via streaming replication.
> >
> > I've enabled slow query logging of 150ms and am seeing a large number
> > of slow COMMITs:
> >
> > 2011-11-12 06:55:02 UTC pid:30897 (28/0-0) LOG:  duration: 232.398 ms
> > statement: COMMIT
> > 2011-11-12 06:55:08 UTC pid:30896 (27/0-0) LOG:  duration: 1078.789 ms
> >  statement: COMMIT
> > 2011-11-12 06:55:09 UTC pid:30842 (15/0-0) LOG:  duration: 2395.432 ms
> >  statement: COMMIT
> > 2011-11-12 06:55:09 UTC pid:30865 (23/0-0) LOG:  duration: 2395.153 ms
> >  statement: COMMIT
> > 2011-11-12 06:55:09 UTC pid:30873 (17/0-0) LOG:  duration: 2390.106 ms
> >  statement: COMMIT
> >
> > The machine has 16GB of RAM and plenty of disk space. What I think
> > might be relevant settings are:
> >
> > wal_buffers = 16MB
> > checkpoint_segments = 32
> > max_wal_senders = 10
> > checkpoint_completion_target = 0.9
> > wal_keep_segments = 1024
> > maintenance_work_mem = 256MB
> > work_mem = 88MB
> > shared_buffers = 3584MB
> > effective_cache_size = 10GB
> >
> > Recently we have bumped up wal_keep_segments and checkpoint_segments
> > because we wanted to run long-running queries on the slaves and were
> > receiving cancellation errors on the slaves. I think the master was
> > recycling WAL logs from underneath the slave and thus canceling the
> > queries. Hence, I believed I needed to crank up those values. It seems
> > to work; I can run long queries (for statistics / reports) on the
> > slaves just fine.
> >
> > But I now wonder if it's having an adverse effect on the master, à la
> > these slow commit times and other slow queries (e.g. primary key
> > lookups on tables with not that many records), which seem to have
> > increased since the configuration change.
> >
> > I am watching iostat and, sure enough, when %iowait gets > 15 or so,
> > a bunch more slow queries get logged. So I can see it's disk-related.
> >
> > I just don't know what the underlying cause is.
> >
> > Any pointers would be appreciated. Thank you.
> >
> 





[GENERAL] Re: [GENERAL] Re: [GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread Jerry Levan


On Nov 12, 2011, at 5:49 PM, Tom Lane wrote:

> Jerry Levan  writes:
>> On Nov 12, 2011, at 3:34 PM, Scott Mead wrote:
>>> The Makefile should be including the global Makefile; did you try a 'make install'?
> 
>> I went back and did the make install from the dblink directory; it installed
>> stuff in the share directory, but invoking dblink functions failed with
>> an unknown definition. I invoked the create extension command but it
>> did no good...
> 
> Vague handwaving like that is an excellent way to guarantee that nobody
> can help you, because we can't figure out what you did wrong (or, maybe,
> what the code did wrong) based on this.  We need to see exactly what
> commands you gave and what results you got.  Please read
> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
> 
> (In general, though, in PG 9.1 you do not source contrib SQL scripts
> directly.  CREATE EXTENSION is the way to install a contrib module into
> a database.)
> 
>   regards, tom lane

You are, as usual, correct in your criticism.

My only excuse is that I hit the panic button when things got out
of control ;(

My first bad step was moving the source folder after building the
main database server, clients, etc., and then trying to build the
dblink extension.

It appears that some *.h files had been linked via absolute paths,
causing the dblink package build to fail.

Moving the source folder back to its original location fixed that problem.

The second difficulty was that I did not understand extensions and
how to install and enable the rascals.

After I built the documentation and did some reading about extensions, I
was able to (I hope) properly install the dblink package. My SQL code
that uses dblink now behaves the same in 9.1.1 as it did in 9.0.x.

I will find out for sure tomorrow when I upgrade two more macs…

Jerry


Re: [GENERAL] Re: [GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread Tom Lane
Jerry Levan  writes:
> On Nov 12, 2011, at 3:34 PM, Scott Mead wrote:
>> The Makefile should be including the global Makefile; did you try a 'make install'?

> I went back and did the make install from the dblink directory; it installed
> stuff in the share directory, but invoking dblink functions failed with
> an unknown definition. I invoked the create extension command but it
> did no good...

Vague handwaving like that is an excellent way to guarantee that nobody
can help you, because we can't figure out what you did wrong (or, maybe,
what the code did wrong) based on this.  We need to see exactly what
commands you gave and what results you got.  Please read
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

(In general, though, in PG 9.1 you do not source contrib SQL scripts
directly.  CREATE EXTENSION is the way to install a contrib module into
a database.)
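
For a 9.1 source build, the whole dance is roughly this (the database
name is a placeholder):

    cd contrib/dblink
    make && make install
    psql -d mydb -c 'CREATE EXTENSION dblink;'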

regards, tom lane



[GENERAL] Re: Large values for duration of COMMITs and slow queries. Due to large WAL config values?

2011-11-12 Thread Cody Caughlan
I've run VACUUM ANALYZE on all my tables to make sure the house has
been cleaned. I still see a lot of slow queries / commits, even on
primary key lookups and well indexed tables.

/Cody

On Fri, Nov 11, 2011 at 11:04 PM, Cody Caughlan  wrote:
> Postgres 9.1.1, master with 2 slaves via streaming replication.
>
> I've enabled slow query logging of 150ms and am seeing a large number
> of slow COMMITs:
>
> 2011-11-12 06:55:02 UTC pid:30897 (28/0-0) LOG:  duration: 232.398 ms
> statement: COMMIT
> 2011-11-12 06:55:08 UTC pid:30896 (27/0-0) LOG:  duration: 1078.789 ms
>  statement: COMMIT
> 2011-11-12 06:55:09 UTC pid:30842 (15/0-0) LOG:  duration: 2395.432 ms
>  statement: COMMIT
> 2011-11-12 06:55:09 UTC pid:30865 (23/0-0) LOG:  duration: 2395.153 ms
>  statement: COMMIT
> 2011-11-12 06:55:09 UTC pid:30873 (17/0-0) LOG:  duration: 2390.106 ms
>  statement: COMMIT
>
> The machine has 16GB of RAM and plenty of disk space. What I think
> might be relevant settings are:
>
> wal_buffers = 16MB
> checkpoint_segments = 32
> max_wal_senders = 10
> checkpoint_completion_target = 0.9
> wal_keep_segments = 1024
> maintenance_work_mem = 256MB
> work_mem = 88MB
> shared_buffers = 3584MB
> effective_cache_size = 10GB
>
> Recently we have bumped up wal_keep_segments and checkpoint_segments
> because we wanted to run long-running queries on the slaves and were
> receiving cancellation errors on the slaves. I think the master was
> recycling WAL logs from underneath the slave and thus canceling the
> queries. Hence, I believed I needed to crank up those values. It seems
> to work; I can run long queries (for statistics / reports) on the
> slaves just fine.
>
> But I now wonder if it's having an adverse effect on the master, à la
> these slow commit times and other slow queries (e.g. primary key
> lookups on tables with not that many records), which seem to have
> increased since the configuration change.
>
> I am watching iostat and, sure enough, when %iowait gets > 15 or so,
> a bunch more slow queries get logged. So I can see it's disk-related.
>
> I just don't know what the underlying cause is.
>
> Any pointers would be appreciated. Thank you.
>



[GENERAL] CLONE DATABASE (with copy on write?)

2011-11-12 Thread Clark C. Evans
Hello all!  

Our company has some headaches in our application development
and deployment process.  The chief problem is "creating stages",
which to this audience means cloning a database efficiently,
making and testing a few changes, perhaps recording the
differences between databases, and then dropping the database.

I'm eternally grateful to the person who pointed out that we
should be using "CREATE DATABASE ... WITH TEMPLATE".  However,
this has two big disadvantages.  First, it only works if you
can kick the users off the clone.  Second, it still takes
time, uses disk space, etc.  We have some big databases.
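
(For readers following the thread, the template trick is just this,
with made-up names:

    -- fails unless no one else is connected to the template database
    CREATE DATABASE staging TEMPLATE production;

which is exactly why the users have to be kicked off first.)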

I was also thinking about using ZFS with PostgreSQL to do
some sort of copy-on-write.  However, this would require me
to spawn a whole *new* PostgreSQL instance.  In both of these
cases, you lose your cache...

So, I was wondering... could PostgreSQL grow the ability to
"CLONE" a database by re-using existing file system blocks,
sharing them across databases?  This would perhaps be fast,
keep the shared memory cache relevant for both the old copy
and the clone, and remove WAL overhead.  Then, if the block
has to be edited, it'd be cloned in memory, and the clone 
would be flushed.  

I'm just imagining a world where "deploying" a new version
of our software that makes very small catalog changes and
tweaks a few rows would be... quick.  Quick to try, check,
test, and even deploy on live servers.

Best,

Clark



Re: [GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread John R Pierce

On 11/12/11 10:02 AM, Jerry Levan wrote:

I have a mac running 9.1.1.

I then built dblink from the contrib directory and I did not
see an install option in the make file.


Did you build this 9.1.1 yourself, or is it a standard distribution?  If
you built it, did you install it with `make install`?  When you built it,
it should have built the contrib modules automatically, and those should
have been installed to the $SHAREDIR/contrib directory (often in
/usr/share/pgsql91/ or similar).


To install a contributed extension that was built with postgres, simply 
execute the SQL command...


create extension dblink;
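
After that, a quick smoke test might look like this (the connection
string and query here are made-up placeholders):

    SELECT * FROM dblink('dbname=otherdb', 'SELECT 42')
        AS t(answer int);

dblink() returns SETOF record, so the AS clause naming the result
columns and types is required.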



--
john r pierceN 37, W 122
santa cruz ca mid-left coast




[GENERAL] Re: [GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread Jerry Levan

On Nov 12, 2011, at 3:34 PM, Scott Mead wrote:

> 
> On Sat, Nov 12, 2011 at 1:02 PM, Jerry Levan  wrote:
> I think I have somehow botched my installation of 9.1.1 on Mac OSX 10.7.2
> 
> I have a mac running 9.1.1.
> 
> I then built dblink from the contrib directory and I did not
> see an install option in the make file.
> 
>  The Makefile should be including the global Makefile; did you try a 'make install'?
>  
> 

I went back and did the make install from the dblink directory; it installed
stuff in the share directory, but invoking dblink functions failed with
an unknown definition. I invoked the create extension command but it
did no good...

I went back to a previous version and loaded the older dblink.sql file,
and it now appears to be working…at least the dblink function calls
that my SQL uses are apparently working.
 
> I tried copying the dblink.so file to the postgresql library directory
> but I cannot figure out how to install the definitions...
> 
> google:
>  
> http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/dblink/Attic/README.dblink?rev=1.12.4.1
> 

This is pre-extension stuff…

>  
> Any clues would be helpful.
> 
> Thanks
> 
> Jerry
> 




[GENERAL] Re: [GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread Scott Mead
On Sat, Nov 12, 2011 at 1:02 PM, Jerry Levan  wrote:

> I think I have somehow botched my installation of 9.1.1 on Mac OSX 10.7.2
>
> I have a mac running 9.1.1.
>
> I then built dblink from the contrib directory and I did not
> see an install option in the make file.
>

The Makefile should be including the global Makefile; did you try a 'make install'?


>
> I tried copying the dblink.so file to the postgresql library directory
> but I cannot figure out how to install the definitions...
>
> google:

http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/dblink/Attic/README.dblink?rev=1.12.4.1



> Any clues would be helpful.
>
> Thanks
>
> Jerry
>


Re: [GENERAL] pg_dump -n switch lock schema from dml/ddl?

2011-11-12 Thread Jeff Davis
On Thu, 2011-11-10 at 11:46 -0500, Tony Capobianco wrote:
> I'm testing out various pg_dump scenarios using the -n switch and I have
> a few questions:
> 
> - When using the -n switch, is the whole schema locked from all non-read
> DML/DDL operations?

No.

> - Once the final table is dumped 
> (i.e. pg_dump: dumping contents of table zip_data), 
> are there additional background processes that are still performing
> maintenance tasks?  Or is the entire process complete and all objects
> are released?

No background work results from using pg_dump.

> I'm asking because I have a schema with a large table with many indexes
> that is consuming the majority of the dump.  This version of the dump
> takes about 4 hours.  
> As a solution, we run 2 separate dumps in parallel, one with the schema
> excluding the large table and one including only the large table.  

FYI: you need to be a little careful running two pg_dumps in parallel.
It may (though not necessarily) increase the speed, but it also means
that you get different snapshots for the big table and all the rest of
the data.

Ordinarily, you want only one snapshot, so that there is a single
point in time for all of the dumped data. Otherwise, you may end up with
inconsistent data.
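
For concreteness, the parallel split being described is something like
this (schema and table names are made up):

    pg_dump -n myschema -T myschema.big_table mydb > rest.dump &
    pg_dump -n myschema -t myschema.big_table mydb > big_table.dump &
    wait

Each pg_dump takes its own snapshot when it starts, which is exactly why
the two halves can see different data.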

> The option with just the large table takes 2.5 hours.  However, the
> option with the schema excluding the large table still takes 4 hours.
> If pg_dump locks each table individually, then releases when the dump is
> completed, I must be encountering lock contention.
> Also, I use the -v switch, but I'm not getting any information on
> how long the dump of each object takes; is there an option
> where I can collect this information in the log file?

If you'd like to know what's happening on your system, the best way is
to start out with (while the pg_dumps are running):

  SELECT * FROM pg_stat_activity;

in a separate client connection. If the "waiting" flag is true on one
query for a significant amount of time, it may be lock-related.
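
In 9.1 specifically, a filtered version would be something like this
(these column names were renamed in later releases):

    SELECT procpid, waiting, current_query
      FROM pg_stat_activity
     WHERE waiting;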

Regards,
Jeff Davis




Re: [GENERAL] select where not exists returning multiple rows?

2011-11-12 Thread Jeff Davis
On Tue, 2011-11-01 at 10:59 -0400, Chris Dumoulin wrote:
> Indexes:
>  "item_pkey" PRIMARY KEY, btree (sig)
> 
> And we're doing an insert like this:
> INSERT INTO Item (Sig, Type, Data) SELECT $1,$2,$3 WHERE NOT EXISTS ( 
> SELECT NULL FROM Item WHERE Sig=$4)
> 
> In this case $1 and $4 should always be the same.

Unrelated note: just use $1 twice.

>  The idea is to insert 
> if the row doesn't already exist.
> We're getting primary key constraint violations:

What's happening is that the NOT EXISTS is running before the INSERT,
and between those two another INSERT can happen. The PRIMARY KEY is
saving you from this problem in this case.

I recommend that you look into using SERIALIZABLE isolation mode as your
default:

http://www.postgresql.org/docs/9.1/static/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION
http://www.postgresql.org/docs/9.1/static/sql-set-transaction.html
http://www.postgresql.org/docs/9.1/static/transaction-iso.html#XACT-SERIALIZABLE

That will still throw an error, but it protects you from all kinds of
similar problems that might not be caught by a primary key.
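
A minimal sketch of the same insert under that isolation level
(placeholders as in the original statement):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO Item (Sig, Type, Data)
      SELECT $1, $2, $3
      WHERE NOT EXISTS (SELECT 1 FROM Item WHERE Sig = $1);
    COMMIT;

Be prepared to retry the transaction when you get a serialization
failure (SQLSTATE 40001).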

Regards,
Jeff Davis




[GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread Jerry Levan
I think I have somehow botched my installation of 9.1.1 on Mac OSX 10.7.2

I have a mac running 9.1.1.

I then built dblink from the contrib directory and I did not
see an install option in the make file.

I tried copying the dblink.so file to the postgresql library directory
but I cannot figure out how to install the definitions...

Any clues would be helpful.

Thanks

Jerry



Re: [GENERAL] = any((select )) needs cast, why?

2011-11-12 Thread Tom Lane
Frank van Vugt  writes:
> Could someone point me to the formal reason why in:
> the cast to int[ ] of the result row is needed here:
> postgres=# select 1 = any((select ARRAY[1,2])::int[]);

ANY(SELECT ...) normally means searching down the rows of the select
result, where the select is expected to return a single column matching
the type of the LHS.  Searching across the elements of an array requires
that the top-level argument of ANY() not be a SELECT.  You stuck a cast
in there, which satisfies the syntactic restriction, but realize that
you've broken the ability to search multiple rows of the select result.
The ANY argument is now expected to return a single scalar value of an
array type, where the array elements can be compared to the LHS.

In the particular example here, SELECT is just a waste of typing.
But I assume it's a dumbed-down example.  Depending on what you're
really doing, it might be sensible to use
ANY(SELECT UNNEST(arrayvalue) FROM ...)
if you're trying to search through elements of a column of array values.
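
Concretely, both of these are valid and return true, but they mean
different things:

    SELECT 1 = ANY (ARRAY[1,2]);                  -- array form: compares against the elements
    SELECT 1 = ANY (SELECT unnest(ARRAY[1,2]));   -- sub-select form: compares against each row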

regards, tom lane



Re: [GENERAL] weird pg_statistic problem

2011-11-12 Thread Enrico Sirola
Hello Tom,

Il giorno 11/nov/2011, alle ore 22.05, Tom Lane ha scritto:

> Enrico Sirola  writes:
>> this morning I experienced a weird problem with our pgsql database (9.0.3):
>> while performing a simple query, I receive the following error:
>> Nov 11 10:24:09  postgres[23395]: [7-1] ERROR:  missing chunk number 0 
>> for toast value 550556127 in pg_toast_2619
> 
> Was this a transient error, or repeatable?
> 
> If it was transient, it's probably a recently-fixed issue:
> http://archives.postgresql.org/pgsql-hackers/2011-10/msg01366.php
> http://archives.postgresql.org/pgsql-committers/2011-11/msg00014.php

apparently, it's transient

>> ERROR:  duplicate key value violates unique constraint 
>> "pg_statistic_relid_att_inh_index"
>> DETAIL:  Key (starelid, staattnum, stainherit)=(531526103, 7, f) already 
>> exists.
> 
> This seems unrelated.  Can you repeat this one?  If so, try REINDEX'ing
> that index and see if the problem goes away.
> 
> It'd be worth your while to update to 9.0.5 --- we fixed a fair number
> of potential data-corruption issues since January.

while examining the db state, I found many other inconsistencies here and
there (e.g. tables with multiple records with the same PK); in the end I
restored a backup.
I'll be installing 9.0.5 today and will let you know if it happens again.
Thanks a lot for your help,
Enrico

P.S.
by the way, I have a streaming replica server that has been corrupted as well.


[GENERAL] = any((select )) needs cast, why?

2011-11-12 Thread Frank van Vugt
L.S.

Could someone point me to the formal reason why in:

postgres=# select version();
                                         version
----------------------------------------------------------------------------------------
 PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.3.3, 64-bit
(1 row)


the cast to int[] of the result row is needed here:

postgres=# select 1 = any((select ARRAY[1,2])::int[]);
 ?column? 
----------
 t
(1 row)


in order to avoid an error:

postgres=# select 1 = any((select ARRAY[1,2]));
ERROR:  operator does not exist: integer = integer[]
LINE 1: select 1 = any((select ARRAY[1,2]));
                 ^
HINT:  No operator matches the given name and argument type(s). You might
need to add explicit type casts.


Is the right-hand side resulting in an array of arrays, or something else?

-- 
Best,

Frank.



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-12 Thread hubert depesz lubaczewski
On Sun, Nov 06, 2011 at 09:34:24AM -0500, Tom Lane wrote:
> hubert depesz lubaczewski  writes:
> > Any chance of getting the fix in patch format so we could test it on
> > this system?
> 
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=23998fe99c1220ba3a9eefee194e37ec1f14ae07

Hi,
just to close the loop: today I was finally able to restart the database
with the patched 8.4. Bug fixed, all works fine. Thanks a lot.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/
