Re: [GENERAL] function "XXX" already exists with same argument types

2011-10-06 Thread Sim Zacks


  
  
On 10/05/2011 05:27 PM, Alexander Farber wrote:

  Thank you Michael, but no -

On Wed, Oct 5, 2011 at 5:24 PM, Michael Glaesemann  wrote:

  

  psql:pref-2011-10-05-a.sql:339: ERROR:  function "pref_update_match"
already exists with same argument types
ALTER FUNCTION



Likely someone mistakenly added the functions to template1 of the machine you're restoring onto and they're getting added to the new database when it's created.

  
  
# psql -U postgres -W template1
Password for user postgres:
psql (8.4.7)
Type "help" for help.

template1=# \df
                               List of functions
 Schema | Name | Result data type | Argument data types | Type
--------+------+------------------+---------------------+------
(0 rows)
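
Note that plain \df hides system objects in 8.4's psql. As a cross-check,
a catalog query along these lines (just a sketch) will list any
user-defined functions lurking in template1:

    SELECT n.nspname, p.proname
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');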


What is your process for creating the database?
Did you run your script once and then run it again after failure? It
does not run in a transaction, unless you ask it to, so if it failed
on something and you want to run it again, you should drop the
database and create it again.

  



Re: [GENERAL] I/O error on data file, can't run backup

2011-10-06 Thread Craig Ringer

On 10/06/2011 02:15 PM, Leif Biberg Kristensen wrote:


Model Family: Seagate Barracuda 7200.11 family
Device Model: ST31000340AS
Serial Number:9QJ1ZMHY
Firmware Version: SD15


Oh, joy. I have some of those, and can confirm their data-eating powers. 
Thankfully mine were in a backup server as part of a regularly verified 
RAID array with ECC on the volumes, so I didn't lose any data, but it 
was certainly frustrating.


The firmware updater is a right pain, because it only supports certain 
SATA controllers and you have to boot into it. Grr.


smartctl is a *vital* tool; the more people know about it and how 
awesome it is, the better.


--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with pg_upgrade 9.0 -> 9.1

2011-10-06 Thread Thomas Kellerer

Bruce Momjian, 06.10.2011 02:15:

I now got the same error (alas with a different relation id) while
migrating a completely different data directory.

Anything I can do to help find the reason for this problem (or bug?)

Unfortunately the data contains some confidential information, so I
cannot make it available.


This bug was fixed just after 9.1.1 was released.  The bug is that
Windows doesn't properly pass the right flags for the oid set functions
to operate.  If you can compile the current 9.1.X branch from git, the
fix is in there; the fix will be in 9.1.2.



Thanks for the feedback.
As those were only development databases, it was no big deal to do the migration 
"the old way" ;)

Again I'm impressed by the speed with which things are fixed in Postgres!

Regards
Thomas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] user-interface to upload csv files

2011-10-06 Thread Albe Laurenz
Robert Buckley wrote:
> I am having problems getting CSV files into Postgres. Does anyone know if
> there is an open-source user-interface to tackle this?

Did you try the COPY command (or \copy in psql)?
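
For instance, a minimal sketch (table and file names are hypothetical):

    -- server-side COPY; the file must be readable by the server process
    COPY mytable FROM '/tmp/data.csv' WITH CSV HEADER;

    -- client-side from psql, reading a file local to the client:
    -- \copy mytable FROM 'data.csv' WITH CSV HEADER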

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem dbi_link with postgresql 9.04

2011-10-06 Thread Albe Laurenz
Emanuel Araújo wrote:
> The field where the problem occurs is of type float or double; when
> extracting data from Firebird, it creates the materialized tables as
> type text. There is a problem with null values or zero bytes. There
> are three records with value "1.5", and it is those records that
> generate the error.

>> What I would do is check how the data look in Perl.
>> Build a simple Perl script that selects the problem data and display them 
>> byte for byte.
>>
>> That should help understand the problem.

> I will try

Another possibility is to use the 8.4 installation where things work
and display the actual content with something like:

SELECT encode(col, 'escape') FROM tab WHERE ...

That should show any zero bytes.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres 8.4: archive_timeout vs. checkpoint_timeout

2011-10-06 Thread Frank Lanitz
Hi folks,

I want to refer to a question Rob did back in 2008 at
http://archives.postgresql.org/pgsql-general/2008-07/msg01167.php as we
are currently running into a similar question:
We are using warm standby via PITR using a shared drive between master
and slave node.

Our setup currently is set to archive_timeout = 60s and
checkpoint_timeout = 600s.

We expected that a WAL file would now be written to the share every
minute, but somehow we must have misunderstood some part of the
documentation, as in periods with low traffic on the database the
interval between WAL files is >1 min, up to ten minutes.

However, the goal was to have a WAL file every minute so that disaster
recovery can be done quickly with a minimum of lost data.
The question is: what did we miss? Do we need to set checkpoint_timeout
to 60s as well, and does this make sense at all?

Cheers,
Frank

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] problem with round() in php script

2011-10-06 Thread Robert Buckley
hi,

In pgadmin3 this query works ok.

SELECT name,round(ges_kw,0)
FROM energie.tennet_auswertung_2010;

but in php 5.3.2 the result is 0 for all rows

$result = pg_query('
SELECT name,round(ges_kw,0)
FROM energie.tennet_auswertung_2010
');


What should I do to round up to 0 decimal places?

yours with thanks,

Rob

Re: [GENERAL] function "XXX" already exists with same argument types

2011-10-06 Thread Albe Laurenz
Alexander Farber wrote:
>>> psql:pref-2011-10-05-a.sql:339: ERROR:  function "pref_update_match"
>>> already exists with same argument types
>>> ALTER FUNCTION

>> Likely someone mistakenly added the functions to template1 of the machine 
>> you're restoring onto and
>> they're getting added to the new database when it's created.

> # psql -U postgres -W template1
> Password for user postgres:
> psql (8.4.7)
> Type "help" for help.
> 
> template1=# \df
>                                List of functions
>  Schema | Name | Result data type | Argument data types | Type
> --------+------+------------------+---------------------+------
> (0 rows)

That should be \dfS

That the functions are in template1 is just a theory - the fact
is that they were already defined in the database where you
restored the dump.

What if you drop and recreate the database, then run
\dfS in a superuser psql session to make sure there is nothing there,
then restore?
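
Something along these lines (a sketch; the database name is only assumed
from the dump file name):

    DROP DATABASE IF EXISTS pref;
    CREATE DATABASE pref;
    -- connect to the new database, check that \dfS shows only
    -- pg_catalog functions, then restore:
    --   psql -U postgres -d pref -f pref-2011-10-05-a.sql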

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] gaps/overlaps in a time table : current and previous row question

2011-10-06 Thread depstein
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of thomas veymont
> Sent: Wednesday, October 05, 2011 5:35 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] gaps/overlaps in a time table : current and previous row
> question
> 
> hello,
> 
> let's say that each row in a table contains a start time and an end time
> ("timeinterval" type), but the indexes are not ordered or consecutive, e.g.:
> 
> $ select * from T order by starttime
> 
> index | starttime | endtime
> -------+-----------+---------
>  3     | t1        | t2
>  1     | t3        | t4
>  18    | t5        | t6
>  12    | t7        | t8
> 
> I want a result that shows time gaps and overlaps in this table, that is :
> 
> delta
> ---------
>  t3 - t2
>  t5 - t4
>  t7 - t6
> 
> how would I do that ? 

You can't. The order in which rows are retrieved  from a table is undefined, 
unless you specify it in your query. If the index cannot be used to specify the 
order, then there is no way for you to retrieve rows in the "correct" order.

If you could get the rows in the correct order, you could use the lag() window 
function to do what you want.
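
For the record, a sketch of the lag() approach, assuming the table from
your example and ordering by starttime:

    SELECT delta FROM (
        SELECT starttime - lag(endtime) OVER (ORDER BY starttime) AS delta
        FROM t
    ) s
    WHERE delta IS NOT NULL;  -- the first row has no previous endtime

A positive delta is a gap, a negative one an overlap.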

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with round() in php script

2011-10-06 Thread Raymond O'Donnell
On 06/10/2011 08:59, Robert Buckley wrote:
> hi,
> 
> In pgadmin3 this query works ok.
> 
> SELECT name,round(ges_kw,0)
> FROM energie.tennet_auswertung_2010;
> 
> but in php 5.3.2 the result is 0 for all rows
> 
> $result = pg_query('
> SELECT name,round(ges_kw,0)
> FROM energie.tennet_auswertung_2010
> ');

The first parameter in pg_query() is supposed to be the connection
returned by pg_connect() - the docs say that it can be omitted, but
that's not recommended. Have you tried it with the connection included?

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with round() in php script

2011-10-06 Thread Robert Buckley
I'm not quite sure what you are referring to... this is my PHP code. The
connection is included, but not in the SQL query!

From: Raymond O'Donnell
To: Robert Buckley
Cc: "pgsql-general@postgresql.org"
Sent: 10:42 Thursday, 6 October 2011
Subject: Re: [GENERAL] problem with round() in php script

On 06/10/2011 08:59, Robert Buckley wrote:
> hi,
> 
> In pgadmin3 this query works ok.
> 
> SELECT name,round(ges_kw,0)
> FROM energie.tennet_auswertung_2010;
> 
> but in php 5.3.2 the result is 0 for all rows
> 
> $result = pg_query('
> SELECT name,round(ges_kw,0)
> FROM energie.tennet_auswertung_2010
> ');

The first parameter in pg_query() is supposed to be the connection
returned by pg_connect() - the docs say that it can be omitted, but
that's not recommended. Have you tried it with the connection included?

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

Re: [GENERAL] Postgresql Data directory Issue

2011-10-06 Thread Adarsh Sharma

Dear all,

Any update on the issue?


Thanks



Adarsh Sharma wrote:

Dear all,

I have a database server (10 databases, about 110 GB) running
PostgreSQL 8.3.

Today I need to format that system, but I am facing the issues below:

1. I am trying to use the previous data directory
(/opt/PostgresPlus/8.3/data) in a new PostgreSQL 8.4 installation, but it
results in the error below while installing:


Data Directory 
[/media/a519e535-62a8-43a2-af93-2323690a3177/opt/PostgresPlus/8.3/data]:

Error: The existing data directory (catalog version: 200711281) is not
compatible with this server (catalog version: 200904091).
Press [Enter] to continue :

So I think I need to install PostgreSQL 8.3 to use that data.

2. I installed PostgreSQL 8.3 on the new server and tried to use the data
directory, but it fails with the error below:


Existing data directory is not empty and it cannot be used.


Please note that I am not able to take a complete backup of the databases
because the database server is down and restarts when we start backups.


Any help will be appreciated.



Thanks






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring 2 Tables From All Databases Backup

2011-10-06 Thread Adarsh Sharma

Thanks to all, the problem is solved now.

But I still do not know how to use the existing data directory (about
110 GB) in a new Postgres installation.

I asked this on the list yesterday but still have no clue.
Please advise if it is possible.


Best Regards
Adarsh





Raghavendra wrote:

Hi Adarsh,

Filip's workaround is the right approach; since it's a plain-text format,
you need to play with sed/awk to pull those two tables.


The following link will help you:

http://blog.endpoint.com/2010/04/restoring-individual-table-data-from.html

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



2011/10/5 pasman pasmański:


You should create a new database with the two empty tables, set access
rights for all schemas to read-only, and pipe the backup to this database.

2011/10/5, Dickson S. Guedes :
> 2011/10/5 Adarsh Sharma :
>> About 1 month ago, I take a complete databases backup of my
Database
>> server
>> through pg_dumpall command.
>> Today I need to extract or restore only 2 tables in a database.
>>
>> Is it possible or I have to restore complete Databases again.
Size of
>> backup
>> is 10 GB in .sql.gz format.
>
> If your dump was created using custom format [1] (pg_dump
> --format=custom or -Fc) you can do a pg_restore using --use-list and
> --list [2].
>
> [1] http://www.postgresql.org/docs/current/interactive/app-pgdump.html#PG-DUMP-OPTIONS
> [2] http://www.postgresql.org/docs/current/interactive/app-pgrestore.html#APP-PGRESTORE-OPTIONS
>
> --
> Dickson S. Guedes
> mail/xmpp: gue...@guedesoft.net - skype: guediz
> http://guedesoft.net - http://www.postgresql.org.br
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


--

pasman

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






Re: [GENERAL] gaps/overlaps in a time table : current and previous row question

2011-10-06 Thread thomas veymont
2011/10/6  :
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
>> ow...@postgresql.org] On Behalf Of thomas veymont
>> Sent: Wednesday, October 05, 2011 5:35 PM
>> To: pgsql-general@postgresql.org
>> Subject: [GENERAL] gaps/overlaps in a time table : current and previous row
>> question
>>
>> hello,
>>
>> let's say that each row in a table contains a start time and an end time
>> ("timeinterval" type), but the indexes are not ordered or consecutive, e.g.:
>>
>> $ select * from T order by starttime
>>
>> index | starttime | endtime
>> -------+-----------+---------
>>  3     | t1        | t2
>>  1     | t3        | t4
>>  18    | t5        | t6
>>  12    | t7        | t8
>>
>> I want a result that shows time gaps and overlaps in this table, that is :
>>
>> delta
>> ---------
>>  t3 - t2
>>  t5 - t4
>>  t7 - t6
>>
>> how would I do that ?
>
> You can't. The order in which rows are retrieved  from a table is undefined, 
> unless you specify it in your query. If the index cannot be used to specify 
> the order, then there is no way for you to retrieve rows in the "correct" 
> order.
>
> If you could get the rows in the correct order, you could use the lag() 
> window function to do what you want.
>

yes.
there was an answer yesterday about doing this with a window function:
http://archives.postgresql.org/pgsql-general/2011-10/msg00157.php

thanks
tom

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql Data directory Issue

2011-10-06 Thread Alban Hertroys
On 6 October 2011 11:04, Adarsh Sharma  wrote:
> Dear all,
>
> Any update on the issue?

Apparently, pg-migrator (later called pg_upgrade) is available for pg
8.3, see here for instructions:
http://www.depesz.com/index.php/2010/05/19/waiting-for-9-0-pg_upgrade/

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

2011-10-06 Thread Vincent de Phily
Hi list,

I've got a heavily-updated table, and about 30 customers on the same system 
each with his own version of the table. The 3 configured autovacuum workers 
take turns vacuuming the table in each customer db; autovacuum is never idle 
and takes a large part of the available IO.

Fearing that vacuuming might accumulate lateness and hoping to see the system 
idle every now and then, I increased autovacuum_vacuum_cost_limit to 500 and 
decreased autovacuum_vacuum_cost_delay to 10. First question : is it an 
intelligent thing to do or am I better off ignoring the constant vacuuming and 
trusting that things will get done in time ? With the new settings, autovacuum 
is still constant (even though each one takes less time), but I'm wary of 
making autovacuum even less "io-nice".

Second thing : the vacuumed tables+indexes taken together are bigger than the 
available OS disk cache. Does vacuuming them fill the cache, or is there some 
kind of O_DIRECT in use ? I have a feeling (very un-verified) that this is not 
the most usefull data I could have in my cache.

This is all on PG 8.3. I know upgrading would improve things (particularly 
since a large percentage of the table remains static between vacuums), but 
we're still too busy for that right now (unless you tell me I'm going to see a 
night-and-day difference regarding this particular issue).


Thanks.
-- 
Vincent de Phily

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with round() in php script

2011-10-06 Thread Raymond O'Donnell
On 06/10/2011 10:02, Robert Buckley wrote:
> I'm not quite sure what you are referring to... this is my PHP code. The
> connection is included, but not in the SQL query!
> 
>  
> $connection=pg_connect("host=*** port=*** dbname=zgb user=***
> password=***");
> 
> $result = pg_query('
> SELECT name,round(ges_kw,0)
> FROM energie.tennet_auswertung_2010
>  ');

What I was suggesting was that you try it this way:

   $connection = pg_connect(...);
   $result = pg_query($connection, 'SELECT ...');

IOW, you pass $connection as the first argument to pg_query(). See the
PHP docs here:

  http://ie.php.net/manual/en/function.pg-query.php

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql Data directory Issue

2011-10-06 Thread Adarsh Sharma

We can upgrade to a new version only if our previous version is running.
But in my case I have only the /opt/postgresql-8.3/data directory (110 GB).

My server crashed and Postgres is not running.
I copied the PostgreSQL data directory to a new server and installed
PostgreSQL 8.3, but it is still not using the previous directory.
I do not want to upgrade.


Thanks & regards
Adarsh Sharma


Alban Hertroys wrote:

On 6 October 2011 11:04, Adarsh Sharma wrote:

> Dear all,
>
> Any update on the issue?

Apparently, pg-migrator (later called pg_upgrade) is available for pg
8.3, see here for instructions:
http://www.depesz.com/index.php/2010/05/19/waiting-for-9-0-pg_upgrade/




Re: [GENERAL] Postgresql Data directory Issue

2011-10-06 Thread Alban Hertroys
On 6 October 2011 12:18, Adarsh Sharma  wrote:
> We can upgrade to a new version only if our previous version is running.

But isn't upgrading exactly what you're trying to do? And why do you
think you need to have the old database running to be able to upgrade?
According to the link I sent you, you don't.

> But in my case I have only the /opt/postgresql-8.3/data directory (110 GB).

And PG 8.4 software, right? So you are in the middle of upgrading.

> My server crashed and Postgres is not running.
> I copied the PostgreSQL data directory to a new server and installed
> PostgreSQL 8.3, but it is still not using the previous directory.

Check the logs to see why it isn't using that directory. Is it a
different directory than where your settings expect it? You can also
use the -D switch for pg_ctl and start PG manually like that.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Strange primary key error on insertion

2011-10-06 Thread Rory Campbell-Lange
I have a strange issue (postgres 8.4) trying to insert old rows back
into the s_tbls table. A check on the primary key (n_id) between
s_tbl_import and s_tbls yields no matches, yet the index on s_tbls.n_id
yields an error when attempting to insert:

=> select n_id from s_tbl_import where n_id IN (
select n_id from s_tbls);

 n_id 
--
(0 rows)

=> insert into s_tbls (select * from s_tbl_import);

ERROR:  duplicate key value violates unique constraint "s_tbls_pkey"

Table "s_tbls"
  Column  |Type |   
Modifiers
--+-+
 n_id | integer | not null default 
nextval('s_tbls_n_id_seq'::regclass)
 dt_created   | timestamp without time zone | default now()
 dt_modified  | timestamp without time zone | default now()
 t_node   | text| 
...
Indexes:
"s_tbls_pkey" PRIMARY KEY, btree (n_id)


Thanks for any assistance
Rory







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring 2 Tables From All Databases Backup

2011-10-06 Thread Filip Rembiałkowski
2011/10/6 Adarsh Sharma 

> **
> Thanks to all, the problem is solved now.
>
> But I still do not know how to use the existing data directory (about
> 110 GB) in a new Postgres installation.
> I asked this on the list yesterday but still have no clue.
> Please advise if it is possible.
>
>
You need to learn more about postgres server mechanics... especially initdb,
pg_ctl, starting and stopping server and so on. See
http://www.postgresql.org/docs/9.1/static/runtime.html,
http://www.postgresql.org/docs/9.1/static/runtime-config-file-locations.html.

To use existing data directory in new installation, you can just stop the
server, replace data_directory, and start the server. Remember about file
permissions - data_directory must be owned by server process owner
("postgres") and chmod 700.

Filip


Re: [GENERAL] Strange primary key error on insertion

2011-10-06 Thread Filip Rembiałkowski
2011/10/6 Rory Campbell-Lange 

> I have a strange issue (postgres 8.4) trying to insert old rows back
> into the s_tbls table. A check on the primary key (n_id) between
> s_tbl_import and s_tbls yields no matches, yet the index on s_tbls.n_id
> yields an error when attempting to insert:
>
> => select n_id from s_tbl_import where n_id IN (
>select n_id from s_tbls);
>
> n_id
>--
>(0 rows)
>
> => insert into s_tbls (select * from s_tbl_import);
>
>ERROR:  duplicate key value violates unique constraint "s_tbls_pkey"
>
>
Looks like you had duplicates in s_tbl_import. Try this:
SELECT * FROM s_tbl_import WHERE n_id IN (
SELECT n_id from s_tbl_import group by n_id HAVING count(*)>1
);
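
If that does report duplicates, one way to load the rows anyway (a sketch,
assuming it does not matter which of the duplicate rows survives):

    INSERT INTO s_tbls
    SELECT DISTINCT ON (n_id) *
    FROM s_tbl_import
    ORDER BY n_id;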

 Table "s_tbls"
>  Column  |Type |
> Modifiers
>
> --+-+
>  n_id | integer | not null default
> nextval('s_tbls_n_id_seq'::regclass)
>  dt_created   | timestamp without time zone | default now()
>  dt_modified  | timestamp without time zone | default now()
>  t_node   | text|
> ...
> Indexes:
>"s_tbls_pkey" PRIMARY KEY, btree (n_id)
>


Re: [GENERAL] Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

2011-10-06 Thread Andrew Sullivan
On Thu, Oct 06, 2011 at 11:58:15AM +0200, Vincent de Phily wrote:
> Fearing that vacuuming might accumulate lateness and hoping to see the system 
> idle every now and then, 

Why is your goal to see the system idle every now and then?  It's not
going to get tired if it keeps working, and if you have a lot of work
and can spin out that work so that the system always has a little bit
of work to do, then you use your resources more efficiently.

Normally, one likes to see some idle time because it is evidence of
"headroom" -- that you have more capacity than you actually need.  If
that's the reason you want to see the idle times, then surely you
don't want to tune the system with the goal of causing idleness.  You
want to tune the system so that the work gets done in as smooth and
fast a way possible.  So I would aim for maximum throughput (including
but not exclusively complete table maintenance) and then check whether
you're getting any idle time.  Off the cuff, though, it sounds to me
like you need more capacity than you have.

> This is all on PG 8.3. I know upgrading would improve things
> (particularly since a large percentage of the table remains static
> between vacuums), but we're still too busy for that right now
> (unless you tell me I'm going to see a night-and-day difference
> regarding this particular issue).

I think it might be more "dusk and day", but I have had very
impressive performance from 9.0.  Haven't tried 9.1.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring 2 Tables From All Databases Backup

2011-10-06 Thread Raymond O'Donnell
On 06/10/2011 11:34, Filip Rembiałkowski wrote:

> To use existing data directory in new installation, you can just stop
> the server, replace data_directory, and start the server. Remember about
> file permissions - data_directory must be owned by server process owner
> ("postgres") and chmod 700.
> 

BUT - and this is a big but - the data *must* be from the same Postgres
major version, same architecture, etc.

This might have been mentioned upthread, but it's worth repeating.

Ray.



-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] script not working in php

2011-10-06 Thread Robert Buckley
Hi,

this works ok in pgadmin3

select name[i], kw[i]
from ( select generate_series(1,6) as i,
              array['Biomasse', 'Deponiegas', 'Solar', 'Wasser',
                    'Wind', 'GESAMT'] as name,
              array[biomasse_kw, deponiegas_kw, solar_kw, wasser_kw,
                    wind_kw, kw] as kw
       from energie.tennet_auswertung_pivot_energie ) z;

but in the php script I get error 500?


$result = pg_query($connection,'
select name[i], kw[i]
from ( select generate_series(1,6) as i,
              array['Biomasse', 'Deponiegas', 'Solar', 'Wasser',
                    'Wind', 'GESAMT'] as name,
              array[biomasse_kw, deponiegas_kw, solar_kw, wasser_kw,
                    wind_kw, kw] as kw
       from energie.tennet_auswertung_pivot_energie ) z;
');

$rows = array();
while($r = pg_fetch_assoc($result)) {
    $rows[] = $r;
}

print json_encode(array_values(pg_fetch_all($result)));


Can anyone explain why this is happening?

thanks,

Rob

Re: [GENERAL] script not working in php

2011-10-06 Thread Raymond O'Donnell
On 06/10/2011 13:22, Robert Buckley wrote:
> Hi,
> 
> this works ok in pgadmin3
> 
> select name[i], kw[i]
> from ( select generate_series(1,6) as i, 
>   array['Biomasse', 'Deponiegas', 'Solar', 'Wasser',
> 'Wind','GESAMT'] as name, 
>   array[biomasse_kw, deponiegas_kw, solar_kw,wasser_kw,
> wind_kw, kw] as kw
>from energie.tennet_auswertung_pivot_energie ) z
> ;
> 
> but in the php script I get error 500?

What does it say in the PHP log?

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring 2 Tables From All Databases Backup

2011-10-06 Thread Adarsh Sharma

That's the bottleneck I need to solve:

Previous data & OS version: PostgreSQL 8.3 and SUSE Enterprise Linux
New data & OS version:      PostgreSQL 8.4 and Ubuntu 10.04

What to do now?

Thanks

Raymond O'Donnell wrote:

On 06/10/2011 11:34, Filip Rembiałkowski wrote:

  

To use existing data directory in new installation, you can just stop
the server, replace data_directory, and start the server. Remember about
file permissions - data_directory must be owned by server process owner
("postgres") and chmod 700.




BUT - and this is a big but - the data *must* be from the same Postgres
major version, same architecture, etc.

This might have been mentioned upthread, but it's worth repeating.

Ray.



  




[GENERAL] ts_rank error/result of '1e-020'

2011-10-06 Thread Henry Drexler
I have a workaround to the error/result, but am wondering what the result of
ts_rank of '1e-020' represents?

Here is the original:

select
ts_rank(to_tsvector('a_a_do_ug_read_retreqmon_ptam'),to_tsquery('a_a_do_ug_read_retrmso.com_ptam'))


Re: [GENERAL] ts_rank error/result of '1e-020'

2011-10-06 Thread Henry Drexler
It sent before I finished; here is the rest:

I have fixed this by doing the following:

select
ts_rank(to_tsvector(replace('a_a_do_ug_read_retreqmon_ptam', '_', ' ')),
        plainto_tsquery(replace('a_a_do_ug_read_retrmso.com_ptam', '_', ' ')))

so I have found a solution, just wondering what the earlier error means/what
causes it.  I could not find any references to that error in documentation
or in help searches.


Re: [GENERAL] script not working in php

2011-10-06 Thread Rodrigo Gonzalez

On 10/06/2011 09:22 AM, Robert Buckley wrote:

Hi,

this works ok in pgadmin3

select name[i], kw[i]
from ( select generate_series(1,6) as i,
  array['Biomasse', 'Deponiegas', 'Solar', 'Wasser', 
'Wind','GESAMT'] as name,
  array[biomasse_kw, deponiegas_kw, solar_kw,wasser_kw, 
wind_kw, kw] as kw

   from energie.tennet_auswertung_pivot_energie ) z
;

but in the php script I get error 500?


$result = pg_query($connection,'
select name[i], kw[i]
from ( select generate_series(1,6) as i,
  array['Biomasse', 'Deponiegas', 'Solar', 'Wasser', 
'Wind','GESAMT'] as name,
  array[biomasse_kw, deponiegas_kw, solar_kw,wasser_kw, 
wind_kw, kw] as kw

   from energie.tennet_auswertung_pivot_energie ) z
;
');
You have single quotes inside the query, and the complete string is
defined with single quotes too. Use double quotes to define the string.


Rodrigo


Re: [GENERAL] Restoring 2 Tables From All Databases Backup

2011-10-06 Thread Filip Rembiałkowski
Follow the other thread that you started; there are some suggestions for
you there.


2011/10/6 Adarsh Sharma 

> **
> That's the bottleneck I need to solve:
>
> Previous data & OS version: PostgreSQL 8.3 and SUSE Enterprise Linux
> New data & OS version:      PostgreSQL 8.4 and Ubuntu 10.04
>
> What to do now?
>
> Thanks
>
>
> Raymond O'Donnell wrote:
>
> On 06/10/2011 11:34, Filip Rembiałkowski wrote:
>
>
>
>  To use existing data directory in new installation, you can just stop
> the server, replace data_directory, and start the server. Remember about
> file permissions - data_directory must be owned by server process owner
> ("postgres") and chmod 700.
>
>
>
>  BUT - and this is a big but - the data *must* be from the same Postgres
> major version, same architecture, etc.
>
> This might have been mentioned upthread, but it's worth repeating.
>
> Ray.
>
>
>
>
>
>
>


Re: [GENERAL] script not working in php

2011-10-06 Thread Robert Buckley
Thanks... simple but important!



From: Rodrigo Gonzalez
To: Robert Buckley
Cc: "pgsql-general@postgresql.org"
Sent: 14:39 Thursday, 6 October 2011
Subject: Re: [GENERAL] script not working in php


On 10/06/2011 09:22 AM, Robert Buckley wrote: 
Hi,
>
>
>this works ok in pgadmin3
>
>
>select name[i], kw[i]
>from ( select generate_series(1,6) as i, 
>              array['Biomasse', 'Deponiegas', 'Solar', 'Wasser', 
>'Wind','GESAMT'] as name, 
>              array[biomasse_kw, deponiegas_kw, solar_kw,wasser_kw, wind_kw, 
>kw] as kw
>       from energie.tennet_auswertung_pivot_energie ) z
>;
>
>
>but in the php script I get error 500?
>
>
>
>
>$result = pg_query($connection,'
>select name[i], kw[i]
>from ( select generate_series(1,6) as i, 
>              array['Biomasse', 'Deponiegas', 'Solar', 'Wasser', 
>'Wind','GESAMT'] as name, 
>              array[biomasse_kw, deponiegas_kw, solar_kw,wasser_kw, wind_kw, 
>kw] as kw
>       from energie.tennet_auswertung_pivot_energie ) z
>;
>'); 
>
You have single quotes inside the query, and the complete string is defined with
single quotes too. Use double quotes to define the string.

Rodrigo

Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread Rich Shepard

On Wed, 5 Oct 2011, Chris Curvey wrote:


Based on your subject line, I'm guessing that you want something like this:

select quant, param, site_id, sample_date, str_name from chemistry where
param = 'TDS' and str_name = 'BurrowCrk' and quant = (select max(quant)
from chemistry where param = 'TDS' and str_name = 'BurrowCrk')


Chris,

  Thank you. I missed seeing the latter part.

  This returns 0 rows, but it puts me on the right track.

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread Rich Shepard

On Wed, 5 Oct 2011, David Johnston wrote:


"Max" is an aggregate function and thus requires one of:
1) GROUP BY
2) "Window" -  max(quant) OVER (PARTITION BY ...)
To be present in the query.


David,

  I was unaware of the windows functions. I see the document page for 9.0.5
so I'll carefully read that and upgrade from 9.0.4 (which may also have this
function; I'll check).

Thanks,

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread Rich Shepard

On Wed, 5 Oct 2011, David Johnston wrote:


2) "Window" -  max(quant) OVER (PARTITION BY ...)


  Hmm-m-m. I have a problem here emulating the example on the document page.
Regardless of which column is first after SELECT postgres tells me that
column does not exist.

select site_id, sample_date, param, max(quant) over (partition by param)
chemistry;
ERROR:  column "site_id" does not exist
LINE 1: select site_id, sample_date, param, max(quant) over (partiti...
   ^
select str_name, site_id, sample_date, param, max(quant) over (partition by
str_name) chemistry;
ERROR:  column "str_name" does not exist
LINE 1: select str_name, site_id, sample_date, param, max(quant) ove...
   ^
  What am I doing incorrectly here?

  Also, with the window function can I limit the output to a single str_name
and param?

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

2011-10-06 Thread Vincent de Phily
On Thursday 06 October 2011 07:00:20 Andrew Sullivan wrote:
> On Thu, Oct 06, 2011 at 11:58:15AM +0200, Vincent de Phily wrote:
> > Fearing that vacuuming might accumulate lateness and hoping to see the
> > system idle every now and then,
> 
> Why is your goal to see the system idle every now and then?  It's not
> going to get tired if it keeps working, and if you have a lot of work
> and can spin out that work so that the system always has a little bit
> of work to do, then you use your resources more efficiently.
> 
> Normally, one likes to see some idle time because it is evidence of
> "headroom" -- that you have more capacity than you actually need.  If
> that's the reason you want to see the idle times, then surely you
> don't want to tune the system with the goal of causing idleness.  You
> want to tune the system so that the work gets done in as smooth and
> fast a way possible.  So I would aim for maximum throughput (including
> but not exclusively complete table maintenance) and then check whether
> you're getting any idle time.  Off the cuff, though, it sounds to me
> like you need more capacity than you have.

I agree idleness per se is not the goal, and whatever work needs to be done
might as well be spread smoothly over time. Idleness *is* however a measure of
the resource headroom available, and that headroom was uncomfortably small
(for IO) in this case. I was just groping for more performance out of the
system.

In the case of vacuuming however, I think there's a point to be made about 
finishing fast when all vacuum workers are constantly busy : say the vacuum 
daemon notices that there are 10 tables that need vacuuming now. It allocates 
3 workers, but while they do their intentionally-slow work, the other 7 tables 
keep creating more vacuumable tuples, so it'll be more work overall because 
they're "late" in their "vacuum schedule". Does that make sense (I'm not sure 
id does) ?


Anyway, my particular issue is solved for now : I realized those tables were 
terribly bloated (often more than 99% slack), so I vacuum-fulled them and now 
the autovacuums run very fast and the disk is 90% idle again. That slack 
probably appeared at table initialization time because the fsm was not big 
enough. I since raised the fsm, but I think it's big enough during normal 
(non-init) usage anyway.

I'm still interested in more opinions about my two questions :
 * When does it make sense to make autovacuum more aggressive on IO, and by
   how much ?
 * Does vacuuming fill the OS's disk cache, and is it an issue if it does ?


Cheers.

-- 
Vincent de Phily

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange primary key error on insertion

2011-10-06 Thread Rory Campbell-Lange
On 06/10/11, Filip Rembiałkowski (plk.zu...@gmail.com) wrote:
> 2011/10/6 Rory Campbell-Lange 
> > => insert into s_tbls (select * from s_tbl_import);
> >
> >ERROR:  duplicate key value violates unique constraint "s_tbls_pkey"
> >
> >
> Looks like you had duplicates in s_tbl_import. Try this:
> SELECT * FROM s_tbl_import WHERE n_id IN (
> SELECT n_id from s_tbl_import group by n_id HAVING count(*)>1
> );

Hi Filip

You were right! Thanks for catching my silly error. 

Rory

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread David Johnston
On Oct 6, 2011, at 9:34, Rich Shepard  wrote:

> On Wed, 5 Oct 2011, David Johnston wrote:
> 
>> 2) "Window" -  max(quant) OVER (PARTITION BY ...)
> 
>  Hmm-m-m. I have a problem here emulating the example on the document page.
> Regardless of which column is first after SELECT postgres tells me that
> column does not exist.
> 
> select site_id, sample_date, param, max(quant) over (partition by param)
> chemistry;
> ERROR:  column "site_id" does not exist
> LINE 1: select site_id, sample_date, param, max(quant) over (partiti...

Missing the FROM before chemistry
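
That is, something like (a sketch using the columns from your query):

    SELECT site_id, sample_date, param,
           max(quant) OVER (PARTITION BY param) AS quant_max
    FROM chemistry;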

>   ^
> select str_name, site_id, sample_date, param, max(quant) over (partition by
> str_name) chemistry;
> ERROR:  column "str_name" does not exist
> LINE 1: select str_name, site_id, sample_date, param, max(quant) ove...
>   ^
>  What am I doing incorrectly here?
> 
>  Also, with the window function can I limit the output to a single str_name
> and param?

Not directly.  After you create the windowed result you can turn it into a
sub-query and filter that.  The other form suggested (where quant = select
max(quant) from chemistry where ...) is probably a better performer for
your need - I didn't read the subject line closely enough, and my two options
don't directly give you what you are looking for.  Though wrapping the window
query and then adding a (where quant_max = quant) clause would get you closer;
quant_max being the column alias for the window expression.
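
Roughly like this (a sketch; column and filter values as used earlier in
the thread):

    SELECT * FROM (
        SELECT site_id, sample_date, str_name, param, quant,
               max(quant) OVER (PARTITION BY str_name, param) AS quant_max
        FROM chemistry
        WHERE str_name = 'BurrowCrk' AND param = 'TDS'
    ) s
    WHERE quant = quant_max;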

> 
> Rich
> 

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread Rich Shepard

On Thu, 6 Oct 2011, David Johnston wrote:


Missing the FROM before chemistry


  D'oh! Obviously not yet sufficiently caffeinated this morning.


 Also, with the window function can I limit the output to a single str_name
and param?


Not directly.  After you create the windowed result you can turn it into a
sub-query and filter that.  The other form suggested (where quant = select
max(quant) from chemistry where ...) is probably a better performer for
your need - I didn't read the subject line closely enough, and my two options
don't directly give you what you are looking for.  Though wrapping the window
query and then adding a (where quant_max = quant) clause would get you closer;
quant_max being the column alias for the window expression.


  Ah.

Thanks, David,

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread Thomas Kellerer

Rich Shepard, 06.10.2011 15:13:

I was unaware of the windows functions. I see the document page for 9.0.5
so I'll carefully read that and upgrade from 9.0.4 (which may also have this
function; I'll check).


Windowing functions are available since 8.4



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Backup Database Question

2011-10-06 Thread Carlos Mennens
So I'm looking to start regularly backing up my production database at
work. I'm tired of doing it manually every day before I go home. I use
the built-in 'pg_dump' or 'pg_dumpall' utilities, but I don't know
which is more beneficial for a nightly backup. Perhaps I should be
using the 'pg_dumpall' as a weekly / full backup only and not perform
this nightly but honestly I have no idea so I'm asking the experts
here. When should I use 'pg_dump' & 'pg_dumpall'? Is there a downside
to just backing up the entire database cluster nightly besides I/O
load and sacrificing system performance?

My last question: does anyone know how I can easily automate my
backups for PostgreSQL on Linux using cron or some well-written script
someone has on the web? I'm looking for anything that can simplify
and automate my backups for me so I don't have to do them manually by
hand before I leave the office.

Thanks for any help in this area!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Tom Lane
Carlos Mennens  writes:
> So I'm looking to start regularly backing up my production database at
> work. I'm tired of doing it manually every day before I go home. I use
> the built-in 'pg_dump' or 'pg_dumpall' utilities, but I don't know
> which is more beneficial for a nightly backup. Perhaps I should be
> using the 'pg_dumpall' as a weekly / full backup only and not perform
> this nightly but honestly I have no idea so I'm asking the experts
> here. When should I use 'pg_dump' & 'pg_dumpall'? Is there a downside
> to just backing up the entire database cluster nightly besides I/O
> load and sacrificing system performance?

Use pg_dumpall.  The extra time to dump the user and database
definitions is unlikely to be noticeable, and if push comes to shove
you'll be glad you had them.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Carlos Mennens
On Thu, Oct 6, 2011 at 11:19 AM, Tom Lane  wrote:
> Use pg_dumpall.  The extra time to dump the user and database
> definitions is unlikely to be noticeable, and if push comes to shove
> you'll be glad you had them.

Yes I agree but I didn't know enough about PostgreSQL to make that
determination. Seems very logical however. Does anyone know of a
PostgreSQL backup script floating around the Internet for Linux
systems? I found a great one for MySQL but sadly that doesn't do me
any good.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?

2011-10-06 Thread Andrew Sullivan
On Thu, Oct 06, 2011 at 03:39:02PM +0200, Vincent de Phily wrote:
> In the case of vacuuming however, I think there's a point to be made about
> finishing fast when all vacuum workers are constantly busy : say the vacuum
> daemon notices that there are 10 tables that need vacuuming now. It allocates
> 3 workers, but while they do their intentionally-slow work, the other 7
> tables keep creating more vacuumable tuples, so it'll be more work overall
> because they're "late" in their "vacuum schedule". Does that make sense (I'm
> not sure it does)?

Yes, that's exactly the issue.  You need to balance the resource
you're depriving the "real" database transactions (i.e. the user ones)
against the cost of waiting, which waiting will probably cost those
user transactions in performance.  The reason there's no magic
solution is because much of this depends on your use patterns.
 
> Anyway, my particular issue is solved for now : I realized those tables were 
> terribly bloated (often more than 99% slack), so I vacuum-fulled them and now 
> the autovacuums run very fast and the disk is 90% idle again. That slack 
> probably appeared at table initialization time because the fsm was not big 
> enough. I since raised the fsm, but I think it's big enough during normal 
> (non-init) usage anyway.

This is what you want to keep an eye on, then.  Why do you think it
came from "initialization time", though?  VACUUM only has work to do
when dead tuples show up (e.g. from DELETE or UPDATE), and normally
when you first populate a table you do a COPY, which isn't going to
create dead tuples.

> I'm still interested in more opinions about my two questions :
>  * When does it make sense to make autovacuum more aggressive on IO, and by
>how much ?

At bottom, you don't want your tables to get so bloated that they
exhibit the problem you just saw, but you also don't want vacuum to be
taking so much I/O that your other tasks can't get done.  That's the
general principle; how it applies to your case depends rather on use
patters.  For instance, if you know that there will be at most 10%
churn on every table every day, but all transactions happen between
9:00 and 17:00 local time, then it's probably safe to allow that to
happen: as long as your FSM can keep track, it can all be recovered
every day after 17:00, so you might as well allow the work to build
up, & let the vacuums happen when they're not stealing any I/O from
user queries.  If, on the other hand, you get 100% churn on 50% of the
tables every day between 09:00 and 11:00, and the rest of the day is
mostly read-only traffic, with read-only traffic during all 24 hours
(don't scoff -- I had exactly this problem once) then you want to be
quite aggressive with the autovacuum settings, because keeping that
100% bloat down is going to pay off in a big way on the read-only
traffic.

>  * Does vacuuming fill the OS's disk cache, and is it an issue if it does ?

Well, it _affects_ the OS's disk cache.  Whether it fills it is
controlled by the cache algorithms and the amount of memory you have
devoted to cache.  Every time you touch the disk, you potentially
alter the cache in favour of what you just saw.

In the above artificial examples, the vacuums that run "after everyone
went home" will almost certainly end up taking over the cache, because
there's no other activity to keep other things in the disk cache.  In
the second example, though, with a lot of read-only activity all the
time, the things that are most popular are likely to remain in a
(modern) disk cache most of the time because they're called so often
that the vacuumed page doesn't end up being enough traffic to cause an
eviction (or, anyway, to evict for any significant time).

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql: type of array cells

2011-10-06 Thread Merlin Moncure
On Wed, Oct 5, 2011 at 7:07 PM, Amit Dor-Shifer
 wrote:
> I'm trying to use an array of objects in plpgsql (postgresql 8.4):
>
> drop type if exists test_t cascade;
>
> create type test_t AS
> (
>        i integer,
>        s text
> );
>
> create or replace function test2()
>        RETURNS SETOF test_t AS
> $$
> DECLARE
> arr test_t ARRAY[3];
> tmp test_t;
> BEGIN
>        FOR i in 1 .. 3
>        LOOP
>                -- ok. Can write to test_t.i
>                tmp.i:=i;
>                -- ok. Can assign a cell from arr to a test_t object
>                arr[i]:=tmp;
>                -- error:
>                arr[i].i=3;
>                RETURN NEXT tmp;
>        END LOOP;
> END;
> $$
> LANGUAGE plpgsql;
>
> I'm getting an error when attempting to interpret this function:
>
> NOTICE:  drop cascades to function test2()
> ERROR:  syntax error at or near "."
> LINE 21:   arr[i].i=3;
>                 ^
>
> ** Error **
>
> ERROR: syntax error at or near "."
> SQL state: 42601
> Character: 272
>
> Isn't arr[i] of type test_t??

It is, but it looks like you either have to break it up like this:

tmp := arr[i];
tmp.i = 3;
arr[i] := tmp;

or do it like this:
arr[i] := row(3, arr[i].s);

since the second form is allowed, I guess what you're trying should
probably work too.  generally though, the best way to do arrays is to
avoid iteration as much as possible.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Adam Cornett
I have a little bash script that is called by cron to make a backup of the
db, as well as being able to pull a copy of our production db to my local
machine for development.

It requires that you have a .pgpass file setup to connect to your database
without entering a password.

On Thu, Oct 6, 2011 at 11:26 AM, Carlos Mennens wrote:

> On Thu, Oct 6, 2011 at 11:19 AM, Tom Lane  wrote:
> > Use pg_dumpall.  The extra time to dump the user and database
> > definitions is unlikely to be noticeable, and if push comes to shove
> > you'll be glad you had them.
>
> Yes I agree but I didn't know enough about PostgreSQL to make that
> determination. Seems very logical however. Does anyone know of a
> PostgreSQL backup script floating around the Internet for Linux
> systems? I found a great one for MySQL but sadly that doesn't do me
> any good.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Adam Cornett
adam.corn...@gmail.com
(678) 296-1150


pull_production.sh
Description: Bourne shell script

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres 9.01, Amazon EC2/EBS, XFS, JDBC and lost connections

2011-10-06 Thread Sean Laurent
We've been running into a particularly strange problem that I'm trying to
better understand. The super short version is that our application servers
lose their connection to the database when I run a backup during periods of
higher load and fail to reconnect.

Here's an overview of the setup:

- PostgreSQL 9.0.1 hosted on a cc1.4xlarge Amazon EC2 instance running
CentOS 5.6
- 8 disk RAID-0 array of EBS volumes used for primary data storage
- 4 disk RAID-0 array of EBS volumes used for transaction logs
- Root partition is ext3
- RAID arrays are xfs

Backups are taken using a script that runs the following workflow:

- Tell Postgres to start a backup: SELECT pg_start_backup('RAID backup');
- Run "xfs_freeze" on the primary RAID array
- Tell Amazon to take snapshots of each of the EBS volumes
- Run "xfs_freeze -u" to thaw the primary RAID array
- Run "xfs_freeze" on the transaction log RAID array
- Tell Amazon to take snapshots of each of the EBS volumes
- Run "xfs_freeze -u" to thaw the transaction log RAID array
- Tell Postgres the backup is finished: SELECT pg_stop_backup();
- Remove old WAL files

The whole process takes roughly 7 seconds on average. The RAID arrays are
frozen for roughly 2 seconds on average.

Within a few seconds of the backup, our application servers start throwing
exceptions that indicate the database connection was closed. Meanwhile,
Postgres still shows the connections and we start seeing a really high
number (for us) of locks in the database. The application servers refuse to
recover and must be killed and restarted. Once they're killed off, the
connections actually go away and the locks disappear.

What's particularly weird is that this doesn't happen all the time. The
backups were running every hour, but we have only seen the app servers crash
5-10 times over the course of a month.

Has anyone encountered anything like this? Do any of these steps have
ramifications that I'm not considering? Especially something that might
explain the app server failure?

Thanks.

Sean Laurent
Director of Operations
StudyBlue, Inc.


[GENERAL] Tuning Variables For PostgreSQL

2011-10-06 Thread Carlos Mennens
I read all the time that most DBAs are required to, or should, tune their
DBMS, which obviously in my case would be PostgreSQL, but I'm curious
what exactly is involved when tuning a DBMS like PostgreSQL. What are
some of the basic functions involved when tuning? Are there specific
things I should tweak on a newly built server freshly installed with
PostgreSQL? Can someone please clarify what exactly most people do
when they "tune"? Lastly I'm sure this has been discussed but after a
Google search, I can't find any updated info since 2009 so I would
like to know specifically what file system you've found PostgreSQL to
work on the best? What file system will give me the best performance
and stability on disk? I'm currently running it on the Linux ext4 file
system and have not had any issues, but I was wondering if there was
anything out there better suited to PostgreSQL.

Thanks!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread John R Pierce

On 10/06/11 8:26 AM, Carlos Mennens wrote:

Yes I agree but I didn't know enough about PostgreSQL to make that
determination. Seems very logical however. Does anyone know of a
PostgreSQL backup script floating around the Internet for Linux
systems? I found a great one for MySQL but sadly that doesn't do me
any good.



   /path/to/pg_dumpall | gzip > /path/to/pgbackup-$(date -I).sql.gz

Put this in a script accessible by the postgres user, and in the
postgres user's crontab invoke it with something like:


30 1 * * * /path/to/backupscript

to run it every night at 1:30am.  If you only want to run it once a week...

30 1 * * 6 /path/to/backupscript

which will run it on Saturday morning at 1:30am (day 6)


--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Carlos Mennens
On Thu, Oct 6, 2011 at 3:12 PM, John R Pierce  wrote:
>   /path/to/pg_dumpall | gzip > /path/to/pgbackup-$(date -I).sql.gz

Thanks John. I've never written a script so do I just use 'Vim' to
open a new file and just paste the following line?

#!/bin/bash
/usr/bin/pg_dumpall | gzip > /var/db_backup/pg_backup-$(date -I).sql.gz

Is that all I need to do or is there more steps / data involved?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Scott Marlowe
On Thu, Oct 6, 2011 at 1:18 PM, Carlos Mennens  wrote:
> On Thu, Oct 6, 2011 at 3:12 PM, John R Pierce  wrote:
>>   /path/to/pg_dumpall | gzip > /path/to/pgbackup-$(date -I).sql.gz
>
> Thanks John. I've never written a script so do I just use 'Vim' to
> open a new file and just paste the following line?
>
> #!/bin/bash
> /usr/bin/pg_dumpall | gzip > /var/db_backup/pg_backup-$(date -I).sql.gz
>
> Is that all I need to do or is there more steps / data involved?

You could check whether it executed and send yourself an email when it
fails, along these lines (the address below is a placeholder):

set -o pipefail  # otherwise $? reflects only gzip's status, not pg_dumpall's
/usr/bin/pg_dumpall | gzip > /var/db_backup/pg_backup-$(date -I).sql.gz
if [[ $? -ne 0 ]]; then
    # send yourself an email here, e.g. via mail(1)
    echo "pg_dumpall backup failed" | mail -s "backup failed" dba@example.com
fi

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Raymond O'Donnell
On 06/10/2011 20:18, Carlos Mennens wrote:
> On Thu, Oct 6, 2011 at 3:12 PM, John R Pierce  wrote:
>>   /path/to/pg_dumpall | gzip > /path/to/pgbackup-$(date -I).sql.gz
> 
> Thanks John. I've never written a script so do I just use 'Vim' to
> open a new file and just paste the following line?
> 
> #!/bin/bash
> /usr/bin/pg_dumpall | gzip > /var/db_backup/pg_backup-$(date -I).sql.gz
> 
> Is that all I need to do or is there more steps / data involved?

I think you also need to make it executable:

  chmod u+x my_backup_script

...or something like that.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Connection Pooling

2011-10-06 Thread Brandon Phelps

Can anyone recommend a good solution for connection pooling?  Here is our setup:

2 PostgreSQL 9.1 servers (1 master, 1 hot standby).
1 Apache 2.2.17

We have a pretty extensive web application running on the apache server that 
talks to both of the database servers.  Updates and small tasks (simple 
selects, etc.) are directed to the master DB server while large reports that can 
take a while to run are directed to the hot standby, so as not to affect 
performance of the master.

Each page of the web app generally makes a single connection to the database 
with the exception being the complicated reports which first make a connection 
to the master, verify that the user is allowed to access the page in question, 
close that connection, then open another connection to the hot standby for the 
report itself.

One connection per page is not all that bad; however, the end users who make use 
of the web app are quite familiar with it and often fly through the pages very 
fast.  We would like to implement some type of connection pooling so that these 
database connections (from web server to the DB servers) do not have to get 
created and torn down constantly.

I have checked out the pg_pool website; however, I was not very impressed with the 
documentation provided.  Is pg_pool going to be our best solution, or is there 
something better?  Any advice would be appreciated.

Thanks,
Brandon

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection Pooling

2011-10-06 Thread Adam Cornett
On Thu, Oct 6, 2011 at 3:57 PM, Brandon Phelps  wrote:

> Can anyone recommend a good solution for connection pooling?  Here is our
> setup:
>
> 2 PostgreSQL 9.1 servers (1 master, 1 hot standby).
> 1 Apache 2.2.17
>
> We have a pretty extensive web application running on the apache server
> that talks to both of the database servers.  Updates and small tasks (simple
> selects, etc) are directed to the master DB server while large reports that
> can take a while to run are directed to the hot standby, so as not to affect
> performance of the master.
>
> Each page of the web app generally makes a single connection to the database
> with the exception being the complicated reports which first make a
> connection to the master, verify that the user is allowed to access the page
> in question, close that connection, then open another connection to the hot
> standby for the report itself.
>
> One connection per page is not all that bad; however, the end users who make
> use of the web app are quite familiar with it and often fly through the
> pages very fast.  We would like to implement some type of connection pooling
> so that these database connections (from web server to the DB servers) do
> not have to get created and torn down constantly.
>
> I have checked out the pg_pool website; however, I was not very impressed with
> the documentation provided.  Is pg_pool going to be our best solution or is
> there something better?  Any advice would be appreciated.
>
> Thanks,
> Brandon
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


If you want to do load balancing between your database servers (split reads
between the master and slave) or want auto failover, then Pgpool-II is going
to be your best bet.  I just did a similar setup and the documentation isn't
the best, but some googling turned up a few guides that I was able to jump
between to get it set up and going.

If you just want all of the DB traffic to go to the master (and deal with
failover on your own) you can use pgbouncer, which is much simpler, but also
less feature-rich.
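
If you do go the pgbouncer route, a minimal pgbouncer.ini sketch might look
something like this (host, database name and pool sizes are placeholders you
would adjust for your own setup):

    [databases]
    mydb = host=master.example.com port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 200
    default_pool_size = 20

Transaction pooling keeps the server-side connections warm while your pages
connect and disconnect freely.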

Adam


Re: [GENERAL] Problem with pg_upgrade 9.0 -> 9.1

2011-10-06 Thread Bruce Momjian
Thomas Kellerer wrote:
> Bruce Momjian, 06.10.2011 02:15:
> >> I now got the same error (alas with a different relation id) while
> >> migrating a completely different data directory.
> >>
> >> Anything I can do to help find the reason for this problem (or bug?)
> >>
> >> Unfortunately the data contains some confidential information so I
> >> cannot make it available.
> >
> > This bug was fixed just after 9.1.1 was released.  The bug is that
> > Windows doesn't properly pass the right flags for the oid set functions
> > to operate.  If you can compile the git 9.1.X current, the fix is in
> > there;  the fix will be in 9.1.2.
> >
> 
> Thanks for the feedback.
> As those were only development databases, it was no big deal to do the 
> migration "the old way" ;)
> 
> Again I'm impressed by the speed how things are fixed in Postgres!

Actually, this was one of our slow ones.  :-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tuning Variables For PostgreSQL

2011-10-06 Thread Dickson S. Guedes
2011/10/6 Carlos Mennens :
> I read all the time that most DBA's are required or should tune their
> DBMS which obviously in my case would be PostgreSQL but I'm curious
> what exactly is involved when tuning a DBMS like PostgreSQL. What are
> some of the basic functions involved when tuning? Are there specific
> things I should tweak on a newly built server freshly installed with
> PostgreSQL? Can someone please clarify what exactly most people do
> when they "tune"? Lastly I'm sure this has been discussed but after a
> Google search, I can't find any updated info since 2009 so I would
> like to know specifically what file system you've found PostgreSQL to
> work on the best? What file system will give me the best performance
> and stability on disk? I'm currently running it on the Linux ext4 file
> system and have not had any issues, but I was wondering if there is
> anything out there better suited to PostgreSQL.

I suggest you read the wiki [1]. It contains much useful information.

[1] http://wiki.postgresql.org/wiki/Performance_Optimization
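
To give a flavour of what "tuning" usually touches first, here are a few
postgresql.conf settings people commonly adjust (the values below are only
illustrative; the wiki explains how to size them for your hardware):

    shared_buffers = 2GB          # often ~25% of RAM as a starting point
    effective_cache_size = 6GB    # planner hint: shared_buffers + OS cache
    work_mem = 16MB               # per sort/hash operation, per backend
    maintenance_work_mem = 256MB  # VACUUM, CREATE INDEX, etc.
    checkpoint_segments = 16      # more WAL between checkpoints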

-- 
Dickson S. Guedes
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://guedesoft.net - http://www.postgresql.org.br

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ts_rank error/result of '1e-020'

2011-10-06 Thread Henry Drexler
On Thu, Oct 6, 2011 at 4:37 PM, Gavin Flower
wrote:

> On 07/10/11 01:40, Henry Drexler wrote:
>
>> I have a workaround to the error/result, but am wondering what the result
>> of ts_rank of '1e-020' represents?
>>
>> Here is the original:
>>
>> select
>> ts_rank(to_tsvector('a_a_do_ug_read_retreqmon_ptam'),to_tsquery('a_a_do_ug_read_retrmso.com_ptam'))
>>
>>
>>  I get essentially the same result with pg 9.1.1
>
> 
>  9.9968e-21
> (1 row)
>
> gavin=>
>

I am also on "PostgreSQL 9.1.1"

Thanks for posting the 9.9968e-21; I did not realize it was using scientific
notation to represent a number, I thought it was an error code.  That is
good news - thank you for taking the time and showing me your result - that
solves it.


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Samba
What about pg_rman?

-
On Fri, Oct 7, 2011 at 1:20 AM, Raymond O'Donnell  wrote:

> On 06/10/2011 20:18, Carlos Mennens wrote:
> > On Thu, Oct 6, 2011 at 3:12 PM, John R Pierce 
> wrote:
> >>   /path/to/pg_dumpall | gzip > /path/to/pgbackup-$(date -I).sql.gz
> >
> > Thanks John. I've never written a script so do I just use 'Vim' to
> > open a new file and just paste the following line?
> >
> > #!/bin/bash
> > /usr/bin/pg_dumpall | gzip > /var/db_backup/pg_backup-$(date -I).sql.gz
> >
> > Is that all I need to do or is there more steps / data involved?
>
> I think you also need to make it executable:
>
>  chmod u+x my_backup_script
>
> ...or something like that.
>
> Ray.
>
>
> --
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] PostgreSQL consulting companies in the Bay Area

2011-10-06 Thread Richard Price
Hi everyone,

I'm the founder and CEO of Academia.edu, which is a social networking site
for researchers. We have about 2.5 million monthly unique visitors and about
9 million monthly page views.

We use PostgreSQL on Amazon EC2, and currently our engineering team is 4
people. We're well aware that there may be some optimizations we could make
to our PostgreSQL set-up that we aren't aware of, as we aren't deeply
experienced with PostgreSQL. We are looking to bring in an expert PostgreSQL
consultant for a day or two to look through our PostgreSQL set-up, and give
us some tips on how to optimize it.

Does anyone know any companies/individuals in the Bay Area who offer
PostgreSQL consulting services? Any tips or suggestions would be greatly
appreciated!

Richard

CEO, Academia.edu

http://oxford.academia.edu/RichardPrice
http://linkedin.com/in/RichardPrice
tel: 415 829 2341


Re: [GENERAL] ts_rank error/result of '1e-020'

2011-10-06 Thread Gavin Flower

On 07/10/11 10:56, Henry Drexler wrote:



On Thu, Oct 6, 2011 at 4:37 PM, Gavin Flower
<gavinflo...@archidevsys.co.nz> wrote:


On 07/10/11 01:40, Henry Drexler wrote:

I have a workaround to the error/result, but am wondering what
the result of ts_rank of '1e-020' represents?

Here is the original:

select

ts_rank(to_tsvector('a_a_do_ug_read_retreqmon_ptam'),to_tsquery('a_a_do_ug_read_retrmso.com_ptam'))


I get essentially the same result with pg 9.1.1


 9.9968e-21
(1 row)

gavin=>


I am also on "PostgreSQL 9.1.1"

Thanks for posting the 9.9968e-21; I did not realize it was 
using scientific notation to represent a number, I thought it was an error 
code. That is good news - thank you for taking the time and showing me your 
result - that solves it.


You're welcome.

It is amazing how 'trivial' things can have such significant effects - I 
once took a few hours to spot an extra ';' in a short C program that 
wasn't doing what it was meant to do!



In postgresql.conf I have:
extra_float_digits = 3  # min -15, max 3
the default is zero.

This might explain the difference in output.
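
For example (a quick sketch; the exact digits shown will vary):

    -- ts_rank() returns a float4 (real); extra_float_digits controls how
    -- many significant digits float values are displayed with.
    SET extra_float_digits = 0;   -- the default
    SELECT 9.9968e-21::real;
    SET extra_float_digits = 3;   -- show maximum precision
    SELECT 9.9968e-21::real;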



Re: [GENERAL] Selecting All Columns Associated With Maximum Value of One Column

2011-10-06 Thread Gavin Flower

On 07/10/11 03:03, Rich Shepard wrote:

On Thu, 6 Oct 2011, David Johnston wrote:


Missing the FROM before chemistry


  D'oh! Obviously not yet sufficiently caffeinated this morning.
[...]


You just infringed my patent on making unprovoked stupid mistakes in 
posts to the Pg mailing lists!


I thought I had a protected monopoly on appearing stupid in these posts.

[MORE SERIOUSLY]
Such mistakes are so easy to make!  One knows what one has written, so 
one does not check adequately!


Recently I made 3 silly mistakes in about five lines, but fortunately, 
someone was able to decode what I had written and said it was useful.


(Patent trolling is becoming really big business, so it would not 
surprise me if someone actually patented a sophisticated version of the 
patent I pretended to have above!)



Cheers,
Gavin



Re: [GENERAL] Add quto increment to existing column

2011-10-06 Thread Greg Williamson
Robert --

>Hi,
>
>
>I have a column in a table called hist_id with the datatype "integer". When I 
>created the table I assigned this column the primary key constraint but didn't 
>make it an auto-increment column.
>
>
>How could I do this to an the already existing column?
>
>
>I have created the sequence with the following command but don't know how to 
>change the existing column to auto-increment.
>
>
>
>
>$ create sequence hist_id_seq;
>


Perhaps:

ALTER TABLE history_foo ALTER COLUMN hist_id SET DEFAULT nextval('hist_id_seq');

?? (untested)

You may need to update the sequence to reflect your current highest value 
for hist_id (or whatever the column is) -- set the sequence using select 
setval('hist_id_seq', maxid) so that the next value you get from the sequence 
doesn't collide with existing values.
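
Something along these lines (again untested, using the hypothetical
history_foo table from above):

    SELECT setval('hist_id_seq',
                  (SELECT COALESCE(max(hist_id), 0) + 1 FROM history_foo),
                  false);  -- false: the next nextval() returns this value itself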

HTH,

Greg Williamson


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup Database Question

2011-10-06 Thread Adrian Klaver
On Thursday, October 06, 2011 7:49:38 am Carlos Mennens wrote:
> So I'm looking to start regularly backing up my production database at
> work. I'm tired of doing it manually every day before I go home. I use
> the built in 'pg_dump' or 'pg_dumpall' utilities however I don't know
> which is more beneficial for a nightly backup. Perhaps I should be
> using the 'pg_dumpall' as a weekly / full backup only and not perform
> this nightly but honestly I have no idea so I'm asking the experts
> here. When should I use 'pg_dump' & 'pg_dumpall'? Is there a downside
> to just backing up the entire database cluster nightly besides I/O
> load and sacrificing system performance?

My 2 cents:

Use pg_dumpall -g to dump only the global objects

Use pg_dump -Fc to dump the individual databases in the cluster.
This gives you a compressed dump file. Furthermore, it allows you to restore 
some subset of the database fairly easily should the need arise (see another 
recent thread where the OP wanted to restore only two tables out of a 
pg_dumpall file).
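
A minimal cron-able sketch combining the two (paths and file naming are just
examples):

    #!/bin/bash
    BACKUP_DIR=/var/db_backup
    STAMP=$(date -I)

    # roles, tablespaces and other global objects:
    pg_dumpall -g > "$BACKUP_DIR/globals-$STAMP.sql"

    # every non-template database, as a compressed custom-format dump:
    for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
        pg_dump -Fc "$db" > "$BACKUP_DIR/$db-$STAMP.dump"
    done

Restoring a subset later is then e.g.
pg_restore -d mydb -t some_table mydb-2011-10-06.dump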

> 
> My last question is does anyone know how I can easily automate my
> backups for PostgreSQL in Linux using Cron or some well written script
> someone has on the web? I'm looking for anything  that can simplify
> and automate my backups for me so I don't have to do them manually by
> hand before I leave the office.
> 
> Thanks for any help in this area!

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection Pooling

2011-10-06 Thread Toby Corkindale

On 07/10/11 06:57, Brandon Phelps wrote:

Can anyone recommend a good solution for connection pooling? Here is our
setup:

2 PostgreSQL 9.1 servers (1 master, 1 hot standby).
1 Apache 2.2.17

We have a pretty extensive web application running on the apache server
that talks to both of the database servers. Updates and small tasks
(simple selects, etc) are directed to the master DB server while large
reports that can take a while to run are directed to the hot standby, so as
not to affect performance of the master.

Each page of the web app generally makes a single connection to the
database with the exception being the complicated reports which first
make a connection to the master, verify that the user is allowed to
access the page in question, close that connection, then open another
connection to the hot standby for the report itself.

One connection per page is not all that bad; however, the end users who
make use of the web app are quite familiar with it and often fly through
the pages very fast. We would like to implement some type of connection
pooling so that these database connections (from web server to the DB
servers) do not have to get created and torn down constantly.

I have checked out the pg_pool website; however, I was not very impressed
with the documentation provided. Is pg_pool going to be our best
solution, or is there something better? Any advice would be appreciated.


Depending on what software your application is written in, there will 
probably be support for persistent, pooled database connections.
I think you should look at using those for at least the master 
connections, if not both.


I know the DB connections in things like the Catalyst or Dancer 
frameworks will do that by default; if you've rolled your own web 
framework then you may have some more work to do though.


Cheers,
Toby


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 9.01, Amazon EC2/EBS, XFS, JDBC and lost connections

2011-10-06 Thread Tom Lane
Sean Laurent  writes:
> We've been running into a particularly strange problem that I'm trying to
> better understand. The super short version is that our application servers
> lose their connection to the database when I run a backup during periods of
> higher load and fail to reconnect.

> Here's an overview of the setup:

> - PostgreSQL 9.0.1 hosted on a cc1.4xlarge Amazon EC2 instance running
> CentOS 5.6
> - 8 disk RAID-0 array of EBS volumes used for primary data storage
> - 4 disk RAID-0 array of EBS volumes used for transaction logs
> - Root partition is ext3
> - RAID arrays are xfs

> Backups are taken using a script that runs the following workflow:

> - Tell Postgres to start a backup: SELECT pg_start_backup('RAID backup');
> - Run "xfs_freeze" on the primary RAID array
> - Tell Amazon to take snapshots of each of the EBS volumes
> - Run "xfs_freeze -u" to thaw the primary RAID array
> - Run "xfs_freeze" on the transaction log RAID array
> - Tell Amazon to take snapshots of each of the EBS volumes
> - Run "xfs_freeze -u" to thaw the transaction log RAID array
> - Tell Postgres the backup is finished: SELECT pg_stop_backup();
> - Remove old WAL files

> The whole process takes roughly 7 seconds on average. The RAID arrays are
> frozen for roughly 2 seconds on average.

> Within a few seconds of the backup, our application servers start throwing
> exceptions that indicate the database connection was closed. Meanwhile,
> Postgres still shows the connections and we start seeing a really high
> number (for us) of locks in the database. The application servers refuse to
> recover and must be killed and restarted. Once they're killed off, the
> connections actually go away and the locks disappear.

That's just weird.  It sounds like the "xfs_freeze" operation, or the
snapshotting operation, is somehow interrupting network traffic.  I'd
not expect such a thing on a normal server, but who knows what's
connected to what in an Amazon EC2 instance?

Anyway, I'd suggest trying to instrument something to prove or disprove
that there's a networking failure involved.  It might be as simple as
watching "ping" behavior ...
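
For instance, something as crude as this, run from an app server during the
backup window (the host name is a placeholder), would show whether replies
stall while the arrays are frozen:

    # timestamp each ping reply so gaps line up with the xfs_freeze window
    ping -i 0.2 db-master.internal | while read -r line; do
        printf '%s %s\n' "$(date '+%H:%M:%S.%N')" "$line"
    done >> /tmp/ping-during-backup.log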

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general