Re: [GENERAL] Need Help : PostgreSQL Installation on Windows 7 64 bit

2011-10-18 Thread Wendi Adrian
Hi Craig,

Thanks for your response.
I installed Windows 7 Professional on my workstation, which is connected to the
office network.
I tried to reinstall after disconnecting from the network, but it still failed, so I
am assuming that PostgreSQL cannot be installed on the workstation (it installed
successfully on my laptop).

Would you please let me know how the Windows language affects the PostgreSQL
installation, and do you have a solution for this?
Please advise.
Thanks and regards,

Wendi





From: Craig Ringer 
To: Wendi Adrian 
Cc: "pgsql-general@postgresql.org" 
Sent: Wednesday, October 19, 2011 12:28 PM
Subject: Re: [GENERAL] Need Help : PostgreSQL Installation on Windows 7 64 bit

On 10/19/2011 09:21 AM, Wendi Adrian wrote:

> Can anyone help me solve this problem? Or does PostgreSQL not
> support Windows 7 Professional 64-bit?

PostgreSQL does support Windows 7 Pro 64-bit; that's one of the OSes I use and 
it works fine.

It would be helpful to know which language your Windows install is in. Please 
also collect the information listed in this wiki page:

  http://wiki.postgresql.org/wiki/Troubleshooting_Installation

It's also important to specify whether your computer is on a domain or is 
standalone.

--
Craig Ringer

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Need Help : PostgreSQL Installation on Windows 7 64 bit

2011-10-18 Thread Craig Ringer

On 10/19/2011 09:21 AM, Wendi Adrian wrote:


Can anyone help me solve this problem? Or does PostgreSQL not
support Windows 7 Professional 64-bit?


PostgreSQL does support Windows 7 Pro 64-bit; that's one of the OSes I 
use and it works fine.


It would be helpful to know which language your Windows install is in. 
Please also collect the information listed in this wiki page:


  http://wiki.postgresql.org/wiki/Troubleshooting_Installation

It's also important to specify whether your computer is on a domain or 
is standalone.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Tom Lane
Mark Priest  writes:
> However, I am still curious as to why I am getting an out of memory
> error.  I can see how the performance might be terrible on such a
> query but I am surprised that postgres doesn't start using the disk at
> some point to reduce memory usage.   Could it be that postgres tries
> to keep temp tables in memory?

You're running out of memory in the planner, long before execution ever
happens.  (This is apparent from the memory map, but I also verified it
with a debugger yesterday.)  There really isn't any alternative but to
change the form of the query or upgrade to a newer PG.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Mark Priest
Thanks, Craig.

There are no triggers on the tables and the only constraints are the
primary keys.

I am thinking that the problem may be that I have too many full self
joins on the simple_group  table.  I am probably getting a
combinatorial explosion when postgres does cross joins on all the
derived tables.  I think I need to redesign the processing so that I
don't need to do so many joins.

However, I am still curious as to why I am getting an out of memory
error.  I can see how the performance might be terrible on such a
query but I am surprised that postgres doesn't start using the disk at
some point to reduce memory usage.   Could it be that postgres tries
to keep temp tables in memory?

On Tue, Oct 18, 2011 at 3:56 AM, Craig Ringer  wrote:
> On 10/18/2011 02:52 PM, Mark Priest wrote:
>>
>> I am getting an Out of Memory error in my server connection process
>> while running a large insert query.
>>
>> Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
>> GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
>> OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
>> The OS is 64 bit but the postgres app is a 32-bit app and I run out of
>> memory and the server process crashes as soon as I hit 2 GB of memory.
>>  I assume that is because that is the limit for 32-bit apps.
>> My client connection is via JDBC in case that is important.
>
>
> You're on an old 8.2 subrelease, 8.2.16 from 2010-03-15 that lacks important
> bug-fixes; the latest 8.2 is 8.2.22 from 2011-09-26. See:
>
>  http://www.postgresql.org/docs/8.2/static/release.html
>
> More to the point, you're on 8.2 on Windows! I strongly recommend moving to
> a newer release if you can, as the newer releases are significantly improved
> in performance and reliability on Windows.
>
>
> For this specific issue, the only thing that comes to mind is whether you
> have any AFTER INSERT triggers on this table, or whether you have any
> DEFERRABLE constraints (irrespective of whether they're INITIALLY
> DEFERRED or not). PostgreSQL must keep track of these to execute them at the
> end of the transaction, and currently doesn't support writing this list to
> disk when it gets too big, so it can eventually fill the backend's available
> RAM on huge inserts.
>
> If your issue is with a constraint, a workaround is to drop the constraint,
> do the insert, then re-establish the constraint and commit the transaction.
>
> If it's a trigger, that's trickier. Do the insert in smaller batches if you
> can, or see if you can disable the trigger, do the inserts, then do all its
> work in one go at the end.
>
> --
> Craig Ringer
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need Help : PostgreSQL Installation on Windows 7 64 bit

2011-10-18 Thread Wendi Adrian
Dear all,

I am new to PostgreSQL and need your assistance with a problem.
I tried to install PostgreSQL 9.0, downloaded from the PostgreSQL website for
Windows x64. My computer runs Windows 7 Professional 64-bit.
I followed the installation steps with no problems until the last step of the
installation (before the uninstaller is created).
The problem is that the installer cannot read the file "postgresql.conf", and
this was displayed in a warning alert.

I went ahead and finished the installation anyway, looked in C:\Program
Files\PostgreSQL\9.0\Data, and found no files there except an empty pg_log folder.
Can anyone help me solve this problem? Or does PostgreSQL not support
Windows 7 Professional 64-bit?
Please advise.
Thank you.

Regards,

Wendi A

[GENERAL] Masquerading a unique index as a primary key in 8.4?

2011-10-18 Thread David Pirotte
I have a large, frequently accessed table that needs a primary key
constraint added.  The table already has a unique index (but not a unique
constraint) on one of its columns.  The overly simplified schema looks like
this:

CREATE TABLE table_without_pk (not_a_pk integer not null, some_data text);
CREATE UNIQUE INDEX not_a_pk_idx ON table_without_pk (not_a_pk);

It looks like 9.1 added syntax that would allow us to create the primary key
constraint off of the existing unique index, but unfortunately we're on
8.4.  A post from several years ago (
http://archives.postgresql.org/pgsql-general/2004-12/msg01161.php) implies
that one can update the metadata tables underneath in order to accomplish
this:

UPDATE pg_index i SET indisprimary = 't' FROM pg_stat_user_indexes u WHERE
u.indexrelid = i.indexrelid AND u.indexrelname = 'not_a_pk_idx';
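
For reference, the 9.1 syntax alluded to above (not available on 8.4) would be
along these lines; the constraint name here is illustrative:

ALTER TABLE table_without_pk
    ADD CONSTRAINT table_without_pk_pkey PRIMARY KEY USING INDEX not_a_pk_idx;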

The underlying purpose is to get Londiste to acknowledge the table's key,
and this strategy seems to work without any problems.  Londiste doesn't seem
to care that the "primary key" is only reflected in pg_index and isn't
accompanied by the relevant pg_constraint entry.  Is modifying the
underlying pg_catalog tables like this "Very Bad"?  Will it have mysterious
and unintended consequences, or can I get away with it?  Thanks!

Cheers,
Dave


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Scott Marlowe
On Tue, Oct 18, 2011 at 12:43 PM, John R Pierce  wrote:
> On 10/18/11 9:51 AM, Bill Moran wrote:
>>>
>>> Basically we wanted to limit the number of processes so that client code
>>> doesn't have to retry for unavailability for connection or sub processes ,
>>> but postgre takes care of queuing?
>>
>> pgpool and pgbouncer handle some of that, but I don't know if they do
>> exactly everything that you want.  Probably a good place to start, though.
>
> pools work great when you have a lot of clients that only sporadically make
> queries, like web users. each client (like the webserver) grabs a connection
> from the pool, runs its transactions, then releases the connection back to
> the pool.    a pool won't help much if all 100 of your clients want to make
> a query at the same time.
>
> your 4 CPU 8GB machine will likely be optimal doing no more than about 8
> queries at once. (give or take a few, depending on how many disk drives in
> your raids and how much IO concurrency the server can support).    oh, you
> mentioned MS Windows in there, ok, 8 is optimistic, the optimal value may be
> more like 4.
>
> if you have 100 clients that simultaneously want to make queries each 5
> minutes, you should consider using some sort of message queueing system,
> where your clients send a message to an application service, and the app
> server runs as many queue workers as you find are optimal, each of which
> reads a message from the queue, processes database requests to satisfy the
> message request, and returns the results to the client, then grabs the next
> queue entry and repeat

Or he could spend $35k or so on an HP DL580 with 4x8 core Xeons in it.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] value too long - but for which column?

2011-10-18 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alexander Farber
Sent: Tuesday, October 18, 2011 3:44 PM
To: pgsql-general
Subject: [GENERAL] value too long - but for which column?

Hello,

I see the errors

ERROR:  value too long for type character varying(32)
CONTEXT:  SQL statement "update pref_users set first_name =  $1 , last_name
=  $2 , female =  $3 , avatar =  $4 , city =  $5 , last_ip =
 $6 , login = now() where id =  $7 "
PL/pgSQL function "pref_update_users" line 3 at SQL statement

So is there a way to tell which column
should be widened, and is there
a way to turn string truncation into
a warning instead of a fatal error?

Regards
Alex

/Original Message


The most direct way to determine which field is causing the error is to look
at the data and count characters.

You could rewrite the query to be in the following form:

UPDATE ... SET first_name = ?::varchar(32), last_name = ?::varchar(32),
etc...

When type-casting with an explicit constraint the cast truncates silently
(i.e., without any warnings whatsoever).
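
A minimal illustration of the difference (the literal and length are arbitrary):

SELECT 'abcdefghij'::varchar(5);   -- yields 'abcde', silently truncated
-- whereas assigning the same string to a varchar(5) column raises:
-- ERROR:  value too long for type character varying(5)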

The database acts as a last line of defense but you ideally want to push
your data validation logic higher in the stack so you can give meaningful
feedback to the user entering the data.  You can use the database metadata
to avoid hard-coding the arbitrary constraints into your software layer.
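
For instance, a sketch of reading those limits from the metadata (using the
table name from the original report):

SELECT column_name, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'pref_users'
  AND character_maximum_length IS NOT NULL;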

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] value too long - but for which column?

2011-10-18 Thread Alexander Farber
Hello,

I see the errors

ERROR:  value too long for type character varying(32)
CONTEXT:  SQL statement "update pref_users set first_name =  $1 ,
last_name =  $2 , female =  $3 , avatar =  $4 , city =  $5 , last_ip =
 $6 , login = now() where id =  $7 "
PL/pgSQL function "pref_update_users" line 3 at SQL statement

and (same error, but different line number)

ERROR:  value too long for type character varying(32)
CONTEXT:  SQL statement "insert into pref_users(id, first_name,
last_name, female, avatar, city, last_ip, login) values ( $1 ,  $2 ,
$3 ,  $4 ,  $5 ,  $6 ,  $7 , now())"
PL/pgSQL function "pref_update_users" line 14 at SQL statement

in the log files for PostgreSQL 8.4.7,
but my table has several varchar(32) columns -
which one is it?

pref=> \d pref_users;
              Table "public.pref_users"
   Column   |            Type             |   Modifiers
------------+-----------------------------+---------------
 id         | character varying(32)       | not null
 first_name | character varying(32)       |
 last_name  | character varying(32)       |
 female     | boolean                     |
 avatar     | character varying(128)      |
 city       | character varying(32)       |
 login      | timestamp without time zone | default now()
 last_ip    | inet                        |
 logout     | timestamp without time zone |
 vip        | timestamp without time zone |
 mail       | character varying(254)      |
Indexes:
"pref_users_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "pref_catch" CONSTRAINT "pref_catch_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_game" CONSTRAINT "pref_game_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_hand" CONSTRAINT "pref_hand_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_luck" CONSTRAINT "pref_luck_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_match" CONSTRAINT "pref_match_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_misere" CONSTRAINT "pref_misere_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_money" CONSTRAINT "pref_money_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_pass" CONSTRAINT "pref_pass_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_payment" CONSTRAINT "pref_payment_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)
TABLE "pref_rep" CONSTRAINT "pref_rep_author_fkey" FOREIGN KEY
(author) REFERENCES pref_users(id)
TABLE "pref_rep" CONSTRAINT "pref_rep_id_fkey" FOREIGN KEY (id)
REFERENCES pref_users(id)
TABLE "pref_status" CONSTRAINT "pref_status_id_fkey" FOREIGN KEY
(id) REFERENCES pref_users(id)

And here is my stored procedure:

pref=> \df+ pref_update_users
 Schema | Name              | Result data type | Argument data types
--------+-------------------+------------------+-------------------------------
 public | pref_update_users | void             | _id character varying,
        |                   |                  | _first_name character varying,
        |                   |                  | _last_name character varying,
        |                   |                  | _female boolean,
        |                   |                  | _avatar character varying,
        |                   |                  | _city character varying,
        |                   |                  | _last_ip inet
 Type: normal | Volatility: volatile | Owner: pref | Language: plpgsql

 Source code:

begin

    update pref_users set
        first_name = _first_name,
        last_name  = _last_name,
        female     = _female,
        avatar     = _avatar,
        city       = _city,
        last_ip    = _last_ip,
        login      = now()
    where id = _id;

    if not found then
        insert into pref_users(id, first_name,
            last_name, female, avatar, city, last_ip, login)
        values (_id, _first_name, _last_name,
            _female, _avatar, _city, _last_ip, now());
    end if;
end;
(1 row)


So is there a way to tell which column
should be widened, and is there
a way to turn string truncation into
a warning instead of a fatal error?

Regards
Alex

[GENERAL] Postgre Performance

2011-10-18 Thread Deshpande, Yogesh Sadashiv (STSD-Openview)
Hello ,

We have a setup where around 100 processes run in parallel every
5 minutes, and each of them opens a connection to the database. We are observing
that for each connection, Postgres also creates one sub-process. We have set
max_connections to 100, so the number of sub-processes in the system is close to
200 every 5 minutes, and because of this we are seeing very high CPU usage. We
need the following information:


1.   Is there any configuration that would pool connection
requests rather than failing with "connection limit exceeded"?

2.   Is there any configuration that would limit the sub-processes to
some value, say 50, with any further connection requests getting queued?

Basically, we want to limit the number of processes so that client code
doesn't have to retry on unavailable connections or sub-processes; can
Postgres take care of the queuing?

Thanks
Yogesh


[GENERAL] PostgreSQL - bindings were not allocated properly

2011-10-18 Thread leddy
Hi all,

Wondering if you can help. We have a PostgreSQL database that we are
using to store our data and we are using Access as the front-end. I
have linked all the tables and up to now have had no problem with
creating the forms/queries based on this data. Until now that is. One
of the tables which has around 27 million records (addresses), if I
query it using one Postcode (not unique but indexed), it works fine
and brings back the records. However if I join it to a local table
containing a list of addresses, and link it on the postcode, I get the
error "ODBC- call failed. Bindings were not allocated properly. (#15)"

Then if I try the same query that worked previously (or any other
query on any other table), I get this same error and the only way I
can get round it is by re-linking all the tables and closing and re-
opening the database.

I have no clue what this means and Google is not helping much either.
Has anyone got any advice?

Many thanks in advance

leddy

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump not including custom CAST based on table types

2011-10-18 Thread Frédéric Rejol

Hello,
I created a custom CAST to cast from one table type to another.
pg_dump does not include my custom CAST.

Here is an example:

CREATE TABLE foo_source(id integer);
CREATE TABLE foo_target(id integer);

CREATE OR REPLACE FUNCTION cast_ident(foo_source)
RETURNS foo_target
AS
$BODY$
  DECLARE
result foo_target;
  BEGIN
result.id=$1.id;
RETURN result;
  END
$BODY$
LANGUAGE PLPGSQL VOLATILE;

CREATE CAST (foo_source AS foo_target)
   WITH FUNCTION cast_ident(foo_source)
   AS assignment;

--Casting works fine
SELECT (row(1)::foo_source)::foo_target as result;

--I can find the cast description in the catalog system.
SELECT castfunc::regprocedure, castsource::regtype, casttarget::regtype
FROM pg_cast
WHERE castsource='foo_source'::regtype AND casttarget='foo_target'::regtype;



pg_dump -s -U postgres test > test.sql

When I look at the dumped "test.sql" file, I cannot find the CAST command.


I read the archives carefully regarding my problem:

http://archives.postgresql.org/pgsql-general/2007-11/msg00931.php

Michael Glaesemann  writes:
> On Nov 17, 2007, at 0:36 , Tom Lane wrote:
>> pg_dump thinks it's a built-in system object.
>
> What other objects might be susceptible to this? Operators? Operator
> classes?

It's just casts.  They're a bit of a problem since they have neither
owners nor schemas, so there's not anything very concrete to base a
dump-or-don't-dump decision on.  The rule pg_dump uses is to dump it
if at least one of the three underlying objects (source type, dest type,
or function) is dumpable.  Here you've got 2 builtin types and
no function, so you lose.

regards, tom lane


My underlying objects are two tables, foo_source and foo_target, whose rowtypes
serve as the source and target types, and those tables are dumpable.


Is there another rule or is it a bug?


Frédéric Rejol.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Use true and false when bulk-exporting boolean data

2011-10-18 Thread Viktor Rosenfeld
Hi Alban,

in the end I used a COPY statement with a query and a CASE statement as
suggested by Henry.
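
Something along these lines (a sketch; the table and boolean column names are
illustrative):

\copy (SELECT id, CASE WHEN flag THEN 'true' ELSE 'false' END AS flag FROM some_table) TO 'filename' WITH NULL 'NULL'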

Cheers,
Viktor

Alban Hertroys wrote:

> On 18 Oct 2011, at 17:54, Viktor Rosenfeld wrote:
> 
> > I was looking for an easier, more automatic way, but I wrote a few
> > scripts that wrapped the boolean fields in case statements as suggested.
> 
> 
> You are aware that COPY accepts a query as well?
> You could also have created a VIEW over that table that translates those 
> values.
> 
> Alban Hertroys
> 
> --
> If you can't see the forest for the trees,
> cut the trees and you'll see there is no forest.
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgre Performance

2011-10-18 Thread John R Pierce

On 10/18/11 9:51 AM, Bill Moran wrote:

Basically we wanted to limit the number of processes so that client code 
doesn't have to retry for unavailability for connection or sub processes , but 
postgre takes care of queuing?

pgpool and pgbouncer handle some of that, but I don't know if they do
exactly everything that you want.  Probably a good place to start, though.


pools work great when you have a lot of clients that only sporadically 
make queries, like web users. each client (like the webserver) grabs a 
connection from the pool, runs its transactions, then releases the 
connection back to the pool.a pool won't help much if all 100 of 
your clients want to make a query at the same time.


your 4 CPU 8GB machine will likely be optimal doing no more than about 8 
queries at once. (give or take a few, depending on how many disk drives 
in your raids and how much IO concurrency the server can support).
oh, you mentioned MS Windows in there, ok, 8 is optimistic, the optimal 
value may be more like 4.


if you have 100 clients that simultaneously want to make queries each 5 
minutes, you should consider using some sort of message queueing system, 
where your clients send a message to an application service, and the app 
server runs as many queue workers as you find are optimal, each of which 
reads a message from the queue, processes database requests to satisfy 
the message request, and returns the results to the client, then grabs 
the next queue entry and repeat



--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Raghavendra
Here you go..

http://winpg.jp/~saito/pgbouncer/pgbouncer-1.4-win32.zip

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Tue, Oct 18, 2011 at 11:08 PM, Deshpande, Yogesh Sadashiv (STSD-Openview)
 wrote:

>  I am not able to find a binary distribution of pgbouncer for Windows. Can
> you point me to the location?
>
> *From:* Raghavendra [mailto:raghavendra@enterprisedb.com]
> *Sent:* Tuesday, October 18, 2011 10:33 PM
> *To:* Bill Moran
> *Cc:* Deshpande, Yogesh Sadashiv (STSD-Openview);
> pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] Postgre Performance
>
> > We need following information
> >
> > 1.   Is there any configuration we do that would pool the connection
> > request rather than coming out with connection limit exceed.
>
> Use pgpool or pgbouncer.
>
> Use pgbouncer, which is a lightweight connection-pooling tool, if you
> are not opting for load balancing.
>
> > Basically we wanted to limit the number of processes so that client code
> > doesn't have to retry for unavailability for connection or sub processes,
> > but postgre takes care of queuing?
>
> Controlling the unavailability of connections may be possible at the
> application level, but it is not possible at the database level. However, if
> connections reach the max limit, the DB will alert you that it has reached
> max_connections.
>
> --Raghav
>


Re: [GENERAL] pg_dump not including custom CAST based on table types

2011-10-18 Thread Tom Lane
Frédéric Rejol  writes:
> I created a custom CAST to cast from one table type to another.
> pg_dump does not include my custom CAST.

Hmm.  The reason for that is that the table types aren't considered
dumpable objects.  I suppose we need to fix that, but in the meantime
you'd have better luck if you created the types as composite types
instead of implicit table rowtypes.
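
A sketch of that approach, reusing the example's names (the "_type" suffix is
illustrative):

CREATE TYPE foo_source_type AS (id integer);
CREATE TYPE foo_target_type AS (id integer);
-- define cast_ident() and the CAST against these composite types;
-- pg_dump then has a dumpable underlying object to base its decision on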

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Use true and false when bulk-exporting boolean data

2011-10-18 Thread Alban Hertroys
On 18 Oct 2011, at 17:54, Viktor Rosenfeld wrote:

> I was looking for an easier, more automatic way, but I wrote a few
> scripts that wrapped the boolean fields in case statements as suggested.


You are aware that COPY accepts a query as well?
You could also have created a VIEW over that table that translates those values.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Deshpande, Yogesh Sadashiv (STSD-Openview)
Hello Raghavendra,

Following are the details:

PostgreSQL 9.0; we are running our application on a 4-CPU, 8 GB RAM system.

Thanks
Yogesh

From: Raghavendra [mailto:raghavendra@enterprisedb.com]
Sent: Tuesday, October 18, 2011 9:46 PM
To: Deshpande, Yogesh Sadashiv (STSD-Openview)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgre Performance

Dear Yogesh,

To get the best answers from community members, you need to provide complete
information (PG version, server/hardware info, etc.), so that it is easier for
members to assist you in the right way.

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


On Tue, Oct 18, 2011 at 7:27 PM, Deshpande, Yogesh Sadashiv (STSD-Openview)
wrote:
Hello ,

We have a setup where in there are around 100 process running in parallel every 
5 minutes and each one of them opens a connection to database. We are observing 
that for each connection , postgre also created on sub processes. We have set 
max_connection to 100. So the number of sub process in the system is close to 
200 every 5 minutes. And because of this we are seeing very high CPU usage.  We 
need following information


1.   Is there any configuration we do that would pool the connection 
request rather than coming out with connection limit exceed.

2.   Is there any configuration we do that would limit the sub process to 
some value say 50 and any request for connection would get queued.

Basically we wanted to limit the number of processes so that client code 
doesn't have to retry for unavailability for connection or sub processes , but 
postgre takes care of queuing?

Thanks
Yogesh



Re: [GENERAL] Postgre Performance

2011-10-18 Thread Deshpande, Yogesh Sadashiv (STSD-Openview)
I am not able to find a binary distribution of pgbouncer for Windows. Can you
point me to the location?

From: Raghavendra [mailto:raghavendra@enterprisedb.com]
Sent: Tuesday, October 18, 2011 10:33 PM
To: Bill Moran
Cc: Deshpande, Yogesh Sadashiv (STSD-Openview); pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgre Performance

> We need following information
>
> 1.   Is there any configuration we do that would pool the connection 
> request rather than coming out with connection limit exceed.
Use pgpool or pgbouncer.


Use pgbouncer, which is a lightweight connection-pooling tool, if you are
not opting for load balancing.


> Basically we wanted to limit the number of processes so that client code 
> doesn't have to retry for unavailability for connection or sub processes , 
> but postgre takes care of queuing?

Controlling the unavailability of connections may be possible at the
application level, but it is not possible at the database level. However, if
connections reach the max limit, the DB will alert you that it has reached
max_connections.

--Raghav


[GENERAL] Can someone help explain what's going on from the attached logs?

2011-10-18 Thread Samuel Hwang
The log below is from PostgreSQL 9.0.4.
Basically, we set up a streaming-replication hot-standby slave while the master
was under heavy load.
The slave started but did not accept read-only queries;
every request triggered the "FATAL:  the database system is starting up"
error.

The slave does eventually accept read-only queries once the load
on the master is removed.
We don't see the same problem if we set up the hot-standby slave while the
master is not under heavy load.

the logs follow:

2011-10-18 10:34:06 MDT [17570]: [1-1] LOG:  database system was
interrupted; last known up at 2011-10-18 10:33:57 MDT
2011-10-18 10:34:06 MDT [17570]: [2-1] LOG:  creating missing WAL directory
"pg_xlog/archive_status"
2011-10-18 10:34:06 MDT [17570]: [3-1] LOG:  entering standby mode
2011-10-18 10:34:06 MDT [17573]: [1-1] LOG:  streaming replication
successfully connected to primary
2011-10-18 10:34:06 MDT [17570]: [4-1] DEBUG:  checkpoint record is at
3/6F1CBD68
2011-10-18 10:34:06 MDT [17570]: [5-1] DEBUG:  redo record is at 3/6FE8;
shutdown FALSE
2011-10-18 10:34:06 MDT [17570]: [6-1] DEBUG:  next transaction ID:
0/5245499; next OID: 1470356
2011-10-18 10:34:06 MDT [17570]: [7-1] DEBUG:  next MultiXactId: 703; next
MultiXactOffset: 1413
2011-10-18 10:34:06 MDT [17570]: [8-1] DEBUG:  oldest unfrozen transaction
ID: 654, in database 1
2011-10-18 10:34:06 MDT [17570]: [9-1] DEBUG:  transaction ID wrap limit is
2147484301, limited by database with OID 1
2011-10-18 10:34:06 MDT [17570]: [10-1] DEBUG:  initializing for hot standby
2011-10-18 10:34:06 MDT [17570]: [11-1] LOG:  consistent recovery state
reached at 3/6F1CBDC0
2011-10-18 10:34:06 MDT [17570]: [12-1] LOG:  redo starts at 3/6FE8
2011-10-18 10:34:06 MDT [17570]: [13-1] LOG:  consistent state delayed
because recovery snapshot incomplete
2011-10-18 10:34:06 MDT [17570]: [14-1] CONTEXT:  xlog redo  running xacts:
nextXid 5245499 latestCompletedXid 5245497 oldestRunningXid 5244950; 300
xacts: 5245124 5245125 5245126 5245127 5245128 5245129 5245130 5245131
5245132 5245133 5245134 5245135 5245136 5245137 5245138 5245139 5245140
5245141 5245142 5245143 5245144 5245145 5245146 5245147 5245148 5245149
5245150 5245151 5245152 5245153 5245154 5245155 5245156 5245157 5245426
5245427 5245428 5245429 5245430 5245431 5245432 5245433 5245434 5245435
5245436 5245437 5245438 5245439 5245440 5245441 5245442 5245443 5245444
5245445 5245446 5245447 5245448 5245449 5245450 5245451 5245452 5245453
5245454 5245455 5245456 5245498 5244950 5244951 5244952 5244953 5244954
5244955 5244956 5244957 5244958 5244959 5244960 5244961 5244962 5244963
5244964 5244965 5244966 5244967 5244968 5244969 5244970 5244971 5244972
5244973 5244974 5244975 5244976 5244977 5244978 5244979 5244980 5244981
5244982 5244983 5245159 5245160 5245161 5245162 5245163 5245164 5245165
5245166 5245167 5245168 5245169 5245170 5245171 5245172 5245173 5245174
5245175 5245176 5245177 5245178 5245179 5245180 5245181 5245182 5245183
5245184 5245185 5245186 5245187 5245188 5245189 5245190 5245191 5245192
5245193 5245194 5245195 5245196 5245197 5245198 5245199 5245200 5245201
5245202 5245203 5245204 5245205 5245206 5245207 5245208 5245209 5245210
5245211 5245212 5245213 5245214 5245215 5245216 5245217 5245218 5245219
5245222 5245223 5245224 5245225 5244987 5244988 5244989 5244990 5244991
5244992 5244993 5244994 5244995 5244996 5244997 5244998 5244999 5245000
5245001 5245002 5245003 5245004 5245005 5245006 5245007 5245008 5245009
5245010 5245011 5245012 5245013 5245014 5245015 5245016 5245017 5245018
5245019 5245020 5245307 5245308 5245309 5245310 5245311 5245312 5245313
5245314 5245315 5245316 5245317 5245318 5245319 5245320 5245321 5245322
5245323 5245324 5245325 5245326 5245327 5245328 5245329 5245330 5245331
5245332 5245333 5245334 5245335 5245336 5245337 5245242 5245243 5245244
5245245 5245246 5245247 5245248 5245249 5245250 5245251 5245252 5245253
5245254 5245255 5245256 5245257 5245258 5245259 5245260 5245261 5245262
5245263 5245264 5245265 5245266 5245267 5245268 5245269 5245270 5245271
5245272 5245273 5245274 5245275 5245276 5245277 5245062 5245063 5245064
5245065 5245066 5245067 5245068 5245069 5245070 5245071 5245072 5245073
5245074 5245075 5245076 5245077 5245078 5245079 5245080 5245081 5245082
5245083 5245084 5245085 5245086 5245087 5245088 5245089 5245090 5245091
5245092 5245093 5245094 5245095; subxid ovf
2011-10-18 10:34:07 MDT [17593]: [1-1] FATAL:  the database system is
starting up
2011-10-18 10:34:08 MDT [17633]: [1-1] FATAL:  the database system is
starting up
2011-10-18 10:34:09 MDT [17669]: [1-1] FATAL:  the database system is
starting up

Samuel


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Joshua D. Drake


On 10/18/2011 06:57 AM, Deshpande, Yogesh Sadashiv (STSD-Openview) wrote:

Hello ,

We have a setup where in there are around 100 process running in
parallel every 5 minutes and each one of them opens a connection to
database. We are observing that for each connection , postgre also
created on sub processes. We have set max_connection to 100. So the
number of sub process in the system is close to 200 every 5 minutes. And
because of this we are seeing very high CPU usage. We need following
information

1.Is there any configuration we do that would pool the connection
request rather than coming out with connection limit exceed.


Yes, you need a pooler; initiating connections like that is expensive. 
PgBouncer is your friend here.


http://wiki.postgresql.org/wiki/PgBouncer
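
As a minimal sketch of such a pooling setup (database name, paths and sizes
are placeholders, not recommendations), a pgbouncer.ini could look like:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200    ; clients beyond the pool wait here
default_pool_size = 20   ; at most this many real backend connections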

Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Raghavendra
>
> > We need following information
> >
> > 1.   Is there any configuration we do that would pool the connection
> request rather than coming out with connection limit exceed.
>
> Use pgpool or pgbouncer.
>
>
Use pgbouncer, which is a lightweight connection-pooling tool, if you are
not opting for load balancing.


> > Basically we wanted to limit the number of processes so that client code
> doesn't have to retry for unavailability for connection or sub processes ,
> but postgre takes care of queuing?
>
>
Controlling the unavailability of connections may be possible at the
application level, but it is not possible at the database level. However, if
connections reach the max limit, the DB will alert you that it has reached
max_connections.

--Raghav


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Bill Moran
In response to "Deshpande, Yogesh Sadashiv (STSD-Openview)" 
:

> Hello ,
> 
> We have a setup where in there are around 100 process running in parallel 
> every 5 minutes and each one of them opens a connection to database. We are 
> observing that for each connection , postgre also created on sub processes. 
> We have set max_connection to 100. So the number of sub process in the system 
> is close to 200 every 5 minutes. And because of this we are seeing very high 
> CPU usage.

This does not follow logically, in my experience.  We have many servers that
have over 300 simultaneous connections, and the connections themselves do
not automatically create high CPU usage.

Unless of course, there is an issue with the particular OS you're using,
which you didn't mention.

> We need following information
> 
> 1.   Is there any configuration we do that would pool the connection 
> request rather than coming out with connection limit exceed.

Use pgpool or pgbouncer.

> 2.   Is there any configuration we do that would limit the sub process to 
> some value say 50 and any request for connection would get queued.

Set the max connection and handle the connection retry in your application.

> Basically we wanted to limit the number of processes so that client code 
> doesn't have to retry for unavailability for connection or sub processes , 
> but postgre takes care of queuing?

pgpool and pgbouncer handle some of that, but I don't know if they do
exactly everything that you want.  Probably a good place to start, though.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's the impact of archive_command failing?

2011-10-18 Thread David Kerr

On 10/18/2011 09:44 AM, Simon Riggs wrote:

On Tue, Oct 18, 2011 at 4:58 PM, David Kerr  wrote:


I have postgres setup for streaming replication and my slave box went down.

My question is, how long can that box stay down before it causes a material 
impact on the master?

The archive_command that I use will not archive logs while the slave is down.

I know the obvious problems:
  * you're not archiving, so you lose all of the recovery related benefits if 
you should crash
  * could run out of disk space in pg_xlog
  * your slave could get out of sync

What i'm concerned with is something like, if the master is unable to archive 
eventually
it will stop writing WALs Or something impacting the performance of the 
database.
or anything along those lines.



When it comes back up it will have to catchup. At some point it will
be quicker to regenerate the standby than to catchup.

Also, at some point you will run out of space in pg_xlog, which would
make the master crash. So probably best to have an archive_command
that starts deleting or compressing files before disk fills, but that
means your slave can then never catch up at that point.



Ok, that's better than I thought. Thanks!

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's the impact of archive_command failing?

2011-10-18 Thread Simon Riggs
On Tue, Oct 18, 2011 at 4:58 PM, David Kerr  wrote:

> I have postgres setup for streaming replication and my slave box went down.
>
> My question is, how long can that box stay down before it causes a material 
> impact on the master?
>
> The archive_command that I use will not archive logs while the slave is down.
>
> I know the obvious problems:
>  * you're not archiving, so you lose all of the recovery related benefits if 
> you should crash
>  * could run out of disk space in pg_xlog
>  * your slave could get out of sync
>
> What i'm concerned with is something like, if the master is unable to archive 
> eventually
> it will stop writing WALs Or something impacting the performance of the 
> database.
> or anything along those lines.


When it comes back up it will have to catchup. At some point it will
be quicker to regenerate the standby than to catchup.

Also, at some point you will run out of space in pg_xlog, which would
make the master crash. So probably best to have an archive_command
that starts deleting or compressing files before disk fills, but that
means your slave can then never catch up at that point.
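
For example, a compressing archive_command might look like this (a sketch; the
archive path is a placeholder, and the matching restore_command must
decompress):

archive_command = 'gzip < %p > /mnt/archive/%f.gz'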

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgre Performance

2011-10-18 Thread Raghavendra
Dear Yogesh,

To get the best answers from community members, you need to provide complete
information (PG version, server/hardware info, etc.), so that it is easier for
members to assist you in the right way.

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Tue, Oct 18, 2011 at 7:27 PM, Deshpande, Yogesh Sadashiv (STSD-Openview)
 wrote:

>  Hello ,
>
> We have a setup where in there are around 100 process running in parallel
> every 5 minutes and each one of them opens a connection to database. We are
> observing that for each connection , postgre also created on sub processes.
> We have set max_connection to 100. So the number of sub process in the
> system is close to 200 every 5 minutes. And because of this we are seeing
> very high CPU usage.  We need following information
>
> 1.   Is there any configuration we do that would pool the
> connection request rather than coming out with connection limit exceed.
>
> 2.   Is there any configuration we do that would limit the sub
> process to some value say 50 and any request for connection would get
> queued.
>
> Basically we wanted to limit the number of processes so that client code
> doesn't have to retry for unavailability for connection or sub processes ,
> but postgre takes care of queuing?
>
> Thanks
>
> Yogesh


[GENERAL] POstgreSQL Tablespace deletion issue

2011-10-18 Thread Vishnu S.
HI,

 

I am using PostgreSQL 8.4 on Windows XP, with Slony-I for master/slave
replication support. I have a database with some tables, and a tablespace
with some other tables in another location. My issue is that after a 30-day
continuous run, some of the files in the PostgreSQL tablespace get deleted.
The modification I made recently is that I created indexes on some of the
tables in the tablespace, but the indexes were actually created in the
default tablespace, not in the tablespace where the tables reside. Is there
any problem in that? Note that there are no antivirus or clean-up
applications running on the system. The only thing that we have done is
that the machine has restarted 5-6 times without the PostgreSQL server
being stopped. (The PostgreSQL server running on port 5432 is a custom
database server, not the default server that comes with the PostgreSQL
installation.) Is there any chance of the crash recovery failing? Is there
any chance of the tablespace files being deleted if crash recovery fails?
How can we avoid this?

Thanks & Regards,

Vishnu S



[GENERAL] What's the impact of archive_command failing?

2011-10-18 Thread David Kerr
I have postgres setup for streaming replication and my slave box went down. 

My question is, how long can that box stay down before it causes a material 
impact on the master?

The archive_command that I use will not archive logs while the slave is down.

I know the obvious problems: 
 * you're not archiving, so you lose all of the recovery related benefits if 
you should crash
 * could run out of disk space in pg_xlog
 * your slave could get out of sync

What I'm concerned with is something like: if the master is unable to archive,
will it eventually stop writing WAL, or will something else impact the
performance of the database, or anything along those lines?

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Use true and false when bulk-exporting boolean data

2011-10-18 Thread Viktor Rosenfeld
I was looking for an easier, more automatic way, but I wrote a few
scripts that wrapped the boolean fields in case statements as suggested.

Thanks,
Viktor

Henry Drexler wrote:

> couldn't you just wrap it in a case statement to change the t to true
> etc...?
> 
> On Mon, Oct 17, 2011 at 2:29 PM, Viktor Rosenfeld wrote:
> 
> > Hi,
> >
> > I need to move data from PostgreSQL to MonetDB and also bulk-import data
> > into MonetDB that was bulk-exported from PostgreSQL by other people. My
> > problem is that boolean data is exported by PostgreSQL using the values
> > "t" and "f" (unquoted) and that MonetDB expects "true" and "false".  Is
> > there a way to change how boolean columns are exported?
> >
> > I'm using the following command to export:
> >
> >  \copy table to 'filename' WITH NULL 'NULL';
> >
> > And to import (in MonetDB):
> >
> >  copy into table from '/path/to/filename' USING DELIMITERS '\t','\n','';
> >
> > Cheers,
> > Viktor
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Tom Lane
Mark Priest  writes:
> I am getting an Out of Memory error in my server connection process
> while running a large insert query.

> Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
> GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
> OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
> The OS is 64 bit but the postgres app is a 32-bit app and I run out of
> memory and the server process crashes as soon as I hit 2 GB of memory.

FWIW, I see excessive memory consumption for this query in 8.2.x but
not in 8.3.x and later.  Some desultory investigation suggests that
the change is associated with rangetable representation improvements
that were made in 8.3.  Since 8.2.x is staring hard at its EOL date,
I suggest now might be a good time to update to a more recent release
series.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] many sql file and one transaction

2011-10-18 Thread salah jubeh


 
Thanks, guys. As you have pointed out, I think the best solution is to go with
cat and set the appropriate options for psql.


 
Regards



From: Merlin Moncure 
To: Cédric Villemain 
Cc: salah jubeh; "andr...@a-kretschmer.de"; pgsql
Sent: Tuesday, October 18, 2011 3:23 PM
Subject: Re: [GENERAL] many sql file and one transaction

On Tue, Oct 18, 2011 at 7:57 AM, Cédric Villemain
 wrote:
> 2011/10/18 salah jubeh :
>> Hello,
>> Thanks for the reply.
>> I considered  cat as an option but I did not go for it, because of the
>> number of sql files I have is large  which makes the code not readable
>> The second thing, which is more important is because I have some advantages
>> with  using -f such as the line number which causing the error.
>
> you can do :
>
> cat module1.sql \
>      module2.sql \
>      module_etc.sql \
>  | psql -f -
>
>
>
>> Regards
>>
>>
>>
>>
>>
>> 
>> From: "andr...@a-kretschmer.de" 
>> To: salah jubeh 
>> Sent: Tuesday, October 18, 2011 2:23 PM
>> Subject: Re: [GENERAL] many sql file and one transaction
>>
>>
>> Quoting salah jubeh:
>>
>>> Hello,
>>>
>>>
>>> I have many SQL script files to update schema, delete data, unit
>>> test etc.  I want to run all the files in one transaction using
>>> shell script to ease the installation procedure. I can do that from
>>> the psql client by using the \i option
>>>
>>>
>>> BEGIN;
>>>
>>> \i  / .../ module1.sql
>>>
>>> \i  / .../ module2.sql
>>>
>>> \i  / .../ module_etc.sql
>>> COMMIT;
>>>
>>>
>>> Is there a way to do that  using psql command  shell script.
>>>
>>>
>>> I.E.
>>>
>>> I want to run the following  in one transaction,
>>>
>>> psql  -f  module1.sql
>>>
>>> psql  -f  module2.sql
>>>
>>> psql  -f  module_etc.sql
>>
>>
>> cat module1.sql module2.sql module_etc.sql | psql

also don't forget the -1 option to psql, which will wrap all your
commands in a transaction, and:
\set ON_ERROR_STOP

which will abort if there's an error.  either put this in all your
scripts, your .psqlrc, or echo it into psql like this:

cat <(echo "\set ON_ERROR_STOP") \
     module1.sql \
     module2.sql \
     module_etc.sql \
| psql -1

you'll want to abort on error just to avoid the chance that a token in
your script will inadvertently close the transaction and cause a bunch
of garbage to be committed in the database -- this is likely to happen
say, if you have a quotation error in a function body.

merlin

Re: [GENERAL] How to correct: ERROR: permission denied: "RI_ConstraintTrigger_24966" is a system trigger

2011-10-18 Thread Adrian Klaver
On Tuesday, October 18, 2011 2:42:00 am Andre Lopes wrote:
> Hi,
> 
> I have created a database and all tables with a user, but I can't
> execute this alter table:
> 
> [code]
> xxx_database=> ALTER TABLE tdir_categories DISABLE TRIGGER ALL;
> ERROR:  permission denied: "RI_ConstraintTrigger_25366" is a system trigger
> [/code]
> 
> What can I do to solve this?

As pointed out below, you need to be a superuser. You can check the
superuser status of users by doing \du at the psql command line and looking in
the attributes column.

> 
> Best Regards,
> 

> > You're trying to disable triggers associated with foreign key constraints
> > or CHECK constraints, too. You should probably skip those system
> > triggers in your function.
> > 
> > If you want to disable them too, I think you have to be the table owner,
> > or maybe superuser; I don't remember and haven't tested. In any case, the
> > user you're running as doesn't have permission to disable those.
> > 
> > --
> > Craig Ringer

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] many sql file and one transaction

2011-10-18 Thread Merlin Moncure
On Tue, Oct 18, 2011 at 7:57 AM, Cédric Villemain
 wrote:
> 2011/10/18 salah jubeh :
>> Hello,
>> Thanks for the reply.
>> I considered  cat as an option but I did not go for it, because of the
>> number of sql files I have is large  which makes the code not readable
>> The second thing, which is more important is because I have some advantages
>> with  using -f such as the line number which causing the error.
>
> you can do :
>
> cat module1.sql \
>      module2.sql \
>      module_etc.sql \
>  | psql -f -
>
>
>
>> Regards
>>
>>
>>
>>
>>
>> 
>> From: "andr...@a-kretschmer.de" 
>> To: salah jubeh 
>> Sent: Tuesday, October 18, 2011 2:23 PM
>> Subject: Re: [GENERAL] many sql file and one transaction
>>
>>
>> Quoting salah jubeh:
>>
>>> Hello,
>>>
>>>
>>> I have many SQL script files to update schema, delete data, unit
>>> test etc.  I want to run all the files in one transaction using
>>> shell script to ease the installation procedure. I can do that from
>>> the psql client by using the \i option
>>>
>>>
>>> BEGIN;
>>>
>>> \i  / .../ module1.sql
>>>
>>> \i  / .../ module2.sql
>>>
>>> \i  / .../ module_etc.sql
>>> COMMIT;
>>>
>>>
>>> Is there a way to do that  using psql command  shell script.
>>>
>>>
>>> I.E.
>>>
>>> I want to run the following  in one transaction,
>>>
>>> psql  -f  module1.sql
>>>
>>> psql  -f  module2.sql
>>>
>>> psql  -f  module_etc.sql
>>
>>
>> cat module1.sql module2.sql module_etc.sql | psql

also don't forget the -1 option to psql, which will wrap all your
commands in a transaction, and:
\set ON_ERROR_STOP

which will abort if there's an error.  either put this in all your
scripts, your .psqlrc, or echo it into psql like this:

cat <(echo "\set ON_ERROR_STOP") \
 module1.sql \
 module2.sql \
 module_etc.sql \
 | psql -1

you'll want to abort on error just to avoid the chance that a token in
your script will inadvertently close the transaction and cause a bunch
of garbage to be committed in the database -- this is likely to happen
say, if you have a quotation error in a function body.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] many sql file and one transaction

2011-10-18 Thread Cédric Villemain
2011/10/18 salah jubeh :
> Hello,
> Thanks for the reply.
> I considered  cat as an option but I did not go for it, because of the
> number of sql files I have is large  which makes the code not readable
> The second thing, which is more important is because I have some advantages
> with  using -f such as the line number which causing the error.

you can do:

cat module1.sql \
  module2.sql \
  module_etc.sql \
 | psql -f -
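
or, if keeping the per-file error line numbers matters (the concern quoted
above), a sketch that feeds \i commands to a single psql run instead (file
names are illustrative):

{ echo '\set ON_ERROR_STOP on'
  echo '\i module1.sql'
  echo '\i module2.sql'
  echo '\i module_etc.sql'
} | psql -1 -f -

errors are then reported with the included file's name and line number.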



> Regards
>
>
>
>
>
> 
> From: "andr...@a-kretschmer.de" 
> To: salah jubeh 
> Sent: Tuesday, October 18, 2011 2:23 PM
> Subject: Re: [GENERAL] many sql file and one transaction
>
>
> Quoting salah jubeh:
>
>> Hello,
>>
>>
>> I have many SQL script files to update schema, delete data, unit
>> test etc.  I want to run all the files in one transaction using
>> a shell script to ease the installation procedure. I can do that from
>> the psql client by using the \i option
>>
>>
>> BEGIN;
>>
>> \i  / .../ module1.sql
>>
>> \i  / .../ module2.sql
>>
>> \i  / .../ module_etc.sql
>> COMMIT;
>>
>>
>> Is there a way to do that using psql from a shell script?
>>
>>
>> I.E.
>>
>> I want to run the following  in one transaction,
>>
>> psql  -f  module1.sql
>>
>> psql  -f  module2.sql
>>
>> psql  -f  module_etc.sql
>
>
> cat module1.sql module2.sql module_etc.sql | psql
>
>
> Regards, Andreas
>
>
>
>



-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] many sql file and one transaction

2011-10-18 Thread salah jubeh
Hello,

Thanks for the reply.

I considered cat as an option but I did not go for it, because the number
of sql files I have is large, which makes the code less readable.

The second thing, which is more important, is that -f gives me some
advantages, such as reporting the line number causing the error.

Regards




 

 



From: "andr...@a-kretschmer.de" 
To: salah jubeh 
Sent: Tuesday, October 18, 2011 2:23 PM
Subject: Re: [GENERAL] many  sql file and  one transaction


Quoting salah jubeh :

> Hello,
>
>
> I have many SQL script files to update schema, delete data, unit  
> test etc.  I want to run all the files in one transaction using  
> a shell script to ease the installation procedure. I can do that from
> the psql client by using the \i option
>
>
> BEGIN;
>
> \i  / .../ module1.sql
>
> \i  / .../ module2.sql
>
> \i  / .../ module_etc.sql
> COMMIT;
>
>
> Is there a way to do that using psql from a shell script?
>
>
> I.E.
>
> I want to run the following  in one transaction, 
>
> psql  -f  module1.sql
>
> psql  -f  module2.sql
>
> psql  -f  module_etc.sql


cat module1.sql module2.sql module_etc.sql | psql


Regards, Andreas

Re: [GENERAL] many sql file and one transaction

2011-10-18 Thread Alban Hertroys
On 18 October 2011 14:11, salah jubeh  wrote:
> Hello,
>
> I have many SQL script files to update schema, delete data, unit test
> etc.  I want to run all the files in one transaction using a shell script
> to ease the installation procedure. I can do that from the psql client by
> using the \i option
>
> BEGIN;
> \i  / .../ module1.sql
> \i  / .../ module2.sql
> \i  / .../ module_etc.sql
> COMMIT;
>
> Is there a way to do that using psql from a shell script?
>
> I.E.
> I want to run the following  in one transaction,
> psql  -f  module1.sql
> psql  -f  module2.sql
> psql  -f  module_etc.sql

You can't use the above, as each psql instance has its own session.

You could cat the files together and feed them to psql though.
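For example, with the -1 option added so the whole stream runs in a
single transaction:

cat module1.sql module2.sql module_etc.sql | psql -1
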
-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] many sql file and one transaction

2011-10-18 Thread salah jubeh
Hello, 


I have many SQL script files to update schema, delete data, unit test etc.  
I want to run all the files in one transaction using a shell script to ease the 
installation procedure. I can do that from the psql client by using the \i 
option 


BEGIN;

\i  / .../ module1.sql 

\i  / .../ module2.sql 

\i  / .../ module_etc.sql 
COMMIT;


Is there a way to do that using psql from a shell script?


I.E.

I want to run the following  in one transaction,  

psql  -f  module1.sql 

psql  -f  module2.sql 

psql  -f  module_etc.sql 
 



Thanks in advance 


Re: [GENERAL] How to correct: ERROR: permission denied: "RI_ConstraintTrigger_24966" is a system trigger

2011-10-18 Thread Andre Lopes
Hi,

I have created a database and all of its tables with one user, but I can't
execute this ALTER TABLE:

[code]
xxx_database=> ALTER TABLE tdir_categories DISABLE TRIGGER ALL;
ERROR:  permission denied: "RI_ConstraintTrigger_25366" is a system trigger
[/code]

What can I do to solve this?

Best Regards,



On Tue, Oct 18, 2011 at 9:08 AM, Craig Ringer  wrote:
> On 10/18/2011 03:52 PM, Andre Lopes wrote:
>>
>> Hi,
>>
>> I have created a database that has a function that disables triggers
>> on tables, but when I execute the function (I created the database
>> with the same user that I'm trying to execute the function as):
>>
>> [code]
>> select triggerall(false);
>> [/code]
>>
>> it returns:
>>
>> [code]
>> ERROR:  permission denied: "RI_ConstraintTrigger_24966" is a system
>> trigger
>> CONTEXT:  SQL statement "ALTER TABLE tdir_languages DISABLE TRIGGER ALL"
>> PL/pgSQL function "triggerall" line 14 at EXECUTE statement
>> [/code]
>
> You're trying to disable triggers associated with foreign key constraints or
> CHECK constraints, too. You should probably skip those system triggers in
> your function.
>
> If you want to disable them too, I think you have to be the table owner, or
> maybe superuser; I don't remember and haven't tested. In any case, the user
> you're running as doesn't have permission to disable those.
>
> --
> Craig Ringer
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Log or notice values or rows that cause a constraint violation

2011-10-18 Thread Craig Ringer

Hi all

I'm once again trying to figure out which row of a 5000-record insert is 
violating a constraint, and can't help thinking how nice it'd be if Pg 
would report the contents of the row violating the constraint, or at 
least the values that were tested by the constraint check.


It's really, really frustrating to track down constraint violations 
without this; I usually land up using a trigger to RAISE NOTICE each row 
so I can see where the import dies, or I wrap the key field of an INSERT 
... SELECT in a polymorphic identity function that does a RAISE NOTICE 
of the input as a side-effect. Needless to say, an improvement would be 
nice here, as both of these are ugly hacks.
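
(For the record, the trigger hack is just something like the following,
using the "test" table from the example below; the function and trigger
names are invented:

create function notice_row() returns trigger as $$
begin
    raise notice 'about to insert x = %', new.x;
    return new;
end;
$$ language plpgsql;

create trigger notice_each_row before insert on test
    for each row execute procedure notice_row();

The last NOTICE printed before the ERROR identifies the violating row.
The identity-function hack is the same idea, minus the trigger: a
one-argument plpgsql function that raises a NOTICE and returns its
input unchanged, wrapped around the key column in the SELECT list.)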


I keep on hoping that the violating data is sent in the error detail, 
but \set verbosity verbose in psql doesn't reveal the goods, and the 
problem row data isn't recorded in the log.


Given this command sequence:

create table test ( x integer not null, check(x > 0) );
insert into test(x) values (1),(2),(3),(-1),(-2),(3);

I'd like to see more than:

ERROR:  new row for relation "test" violates check constraint "test_x_check"

specifically something like:

ERROR:  new row for relation "test" violates check constraint "test_x_check"
DETAIL: violating row was (-2)

I know you can log statement params, but that's no use if your violators 
are part of a long COPY, a multi-insert, an INSERT ... SELECT, etc.


Have I missed something blindingly obvious in the docs and never noticed 
it in a couple of years of using Pg, or is there no way to get Pg to log 
and report this data at the moment?


--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql; execute query inside exists

2011-10-18 Thread Alban Hertroys
On 18 October 2011 09:57,   wrote:
> Hi,
>
> Thanks for the reply!
> But I don't want to check if the table exists, I want to see the
> result of the SELECT query -- whether a row is present or not.

So you want to check that the table contains data? In that case it
makes no sense to create the table if it doesn't contain data. It may
very well exist already.

> The tmp_tbl is a dynamically generated table name, but when I write the
> code without EXECUTE, I get a syntax error too.

They were explaining why you got the error; they were not telling you
to leave out EXECUTE for dynamic SQL.

> In this case how can I check if a SELECT has result or not?
>
> SELECT INTO rndmd5 md5(random()::text);
>
> tmp_tbl := 'tbl_tmp_' || rndmd5;
>
>
> IF NOT EXISTS(SELECT * FROM tmp_tbl)
>   THEN
>     END IF;

You really should at least limit the possible number of results from
that SELECT statement. You're not interested in the results.

Anyway, the way to do this in plpgsql is to EXECUTE the query as a
string. Note that EXECUTE does not set FOUND, so capture the result
with INTO instead (has_row here is a boolean variable declared in the
function):

EXECUTE 'SELECT EXISTS(SELECT 1 FROM ' || quote_ident(tmp_tbl) || ')'
    INTO has_row;
IF NOT has_row THEN
...
END IF;

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to correct: ERROR: permission denied: "RI_ConstraintTrigger_24966" is a system trigger

2011-10-18 Thread Craig Ringer

On 10/18/2011 03:52 PM, Andre Lopes wrote:

Hi,

I have created a database that has a function that disables triggers
on tables, but when I execute the function (I created the database
with the same user that I'm trying to execute the function as):

[code]
select triggerall(false);
[/code]

it returns:

[code]
ERROR:  permission denied: "RI_ConstraintTrigger_24966" is a system trigger
CONTEXT:  SQL statement "ALTER TABLE tdir_languages DISABLE TRIGGER ALL"
PL/pgSQL function "triggerall" line 14 at EXECUTE statement
[/code]


You're trying to disable triggers associated with foreign key 
constraints or CHECK constraints, too. You should probably skip those 
system triggers in your function.
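
The easiest way to do that, if I recall correctly, is to use DISABLE
TRIGGER USER rather than ALL; USER disables all of the table's triggers
except the internally generated constraint triggers, e.g.:

ALTER TABLE tdir_languages DISABLE TRIGGER USER;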


If you want to disable them too, I think you have to be the table owner, 
or maybe superuser; I don't remember and haven tested. In any case, the 
user you're running as doesn't have permission to disable those.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql; execute query inside exists

2011-10-18 Thread jozsef . kurucz
Hi,

Thanks for the reply!
But I don't want to check if the table exists, I want to see the
result of the SELECT query -- whether a row is present or not.
The tmp_tbl is a dynamically generated table name, but when I write the
code without EXECUTE, I get a syntax error too.
In this case, how can I check whether a SELECT has a result or not?


SELECT INTO rndmd5 md5(random()::text);

tmp_tbl := 'tbl_tmp_' || rndmd5;


IF NOT EXISTS(SELECT * FROM tmp_tbl)
   THEN
 END IF;


ERROR:  syntax error at or near "$1"
LINE 1: SELECT  NOT EXISTS(SELECT * FROM  $1 )


Thanks!



2011/10/17 Merlin Moncure :
> On Mon, Oct 17, 2011 at 8:20 AM, Merlin Moncure  wrote:
>> On Mon, Oct 17, 2011 at 2:32 AM,   wrote:
>>> Hi there,
>>>
>>> I would like to use EXISTS in a small plpgsql function but I always
>>> get a "syntax error". How can I execute a query inside the
>>> EXISTS function?
>>>
>>>
>>>
>>> IF NOT EXISTS(EXECUTE 'SELECT * FROM '|| tmp_tbl)
>>>   THEN
>>>      CREATE TABLE tt();
>>>
>>>
>>>
>>>
>>> ERROR:  syntax error at or near "EXECUTE"
>>> LINE 1: SELECT  NOT EXISTS(EXECUTE 'SELECT * FROM '||  $1 )
>>
>>
>> EXECUTE is a top level statement -- you can't run it inside a query
>> like that.  Also, EXISTS is not a way to check whether a table exists --
>> it is a clause that tests for the presence of a row and returns true if
>> it finds one -- and if the table does not exist you would get an SQL
>> error.
>>
>> A better way to do this is to query information_schema:
>>
>> PERFORM 1 FROM information_schema.tables where table_schema = x and
>> table_name = y;
>>
>> IF FOUND THEN
>>  CREATE TABLE ...
>> END IF;
>
> oops.. meant to say IF NOT FOUND... :-).
>
> merlin
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Craig Ringer

On 10/18/2011 02:52 PM, Mark Priest wrote:

I am getting an Out of Memory error in my server connection process
while running a large insert query.

Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
The OS is 64-bit but the postgres app is a 32-bit app, and I run out of
memory and the server process crashes as soon as I hit 2 GB of memory.
I assume that is because that is the limit for 32-bit apps.
My client connection is via JDBC in case that is important.



You're on an old 8.2 subrelease (8.2.16, from 2010-03-15) that lacks 
important bug-fixes; the latest 8.2 release is 8.2.22, from 2011-09-26. See:


  http://www.postgresql.org/docs/8.2/static/release.html

More to the point, you're on 8.2 on Windows! I strongly recommend moving 
to a newer release if you can, as the newer releases are significantly 
improved in performance and reliability on Windows.



For this specific issue, the only thing that comes to mind is whether 
you have any AFTER INSERT triggers on this table, or whether you have 
any DEFERRABLE constraints (irrespective of whether they're INITIALLY 
DEFERRED). PostgreSQL must keep track of these to execute them at the 
end of the transaction, and currently doesn't support writing this list 
to disk when it gets too big, so it can eventually fill the backend's 
available RAM on huge inserts.


If your issue is with a constraint, a workaround is to drop the 
constraint, do the insert, then re-establish the constraint and commit 
the transaction.
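
Something along these lines, with made-up table and constraint names:

BEGIN;
ALTER TABLE big_table DROP CONSTRAINT big_table_parent_fk;
-- the huge insert goes here
ALTER TABLE big_table ADD CONSTRAINT big_table_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id);
COMMIT;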


If it's a trigger, that's trickier. Do the insert in smaller batches if 
you can, or see if you can disable the trigger, do the inserts, then do 
all its work in one go at the end.
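
For the trigger case that'd look something like this (trigger name made
up; you must own the table, and the trigger's work is skipped for good,
so whatever it would have done has to be redone by hand afterwards):

ALTER TABLE big_table DISABLE TRIGGER my_after_insert_trigger;
-- do the inserts
ALTER TABLE big_table ENABLE TRIGGER my_after_insert_trigger;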


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to correct: ERROR: permission denied: "RI_ConstraintTrigger_24966" is a system trigger

2011-10-18 Thread Andre Lopes
Hi,

I have created a database that has a function that disables triggers
on tables, but when I execute the function (I created the database
with the same user that I'm trying to execute the function as):

[code]
select triggerall(false);
[/code]

it returns:

[code]
ERROR:  permission denied: "RI_ConstraintTrigger_24966" is a system trigger
CONTEXT:  SQL statement "ALTER TABLE tdir_languages DISABLE TRIGGER ALL"
PL/pgSQL function "triggerall" line 14 at EXECUTE statement
[/code]

Is this a known problem? How can I solve it?


Best Regards,

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general