Re: [GENERAL] Multiple versions on same box

2007-10-09 Thread Scott Marlowe
On 10/9/07, Reece Hart [EMAIL PROTECTED] wrote:

  On Mon, 2007-10-08 at 17:34 -0700, Ralph Smith wrote:

 What is the best way to upgrade?  Use pg_dumpall (using 8.2's program),
 afterwards redirect that file into v8.2 via psql v8.2?


 There are lots of ways to do this. My favorite is to bring up a new cluster
 on port 5433 and then pipe data between them, roughly like this:

 $ /path/to/7.4/pg_dumpall -p5432 ... | /path/to/8.2/psql -p5433 -qaf - > restore.log 2>&1


Note that when possible you should dump with the pg_dump/pg_dumpall of the
version you're going to, in this case 8.2's pg_dumpall.


[GENERAL] foreign keys and memory consumption

2007-10-09 Thread Jan Poslusny

Hi all,
pg 8.2.4 on Fedora Core 6 x86-64, mostly default postgresql.conf, just 
shared_buffers increased to 256MB. 1GB RAM.
I attempt to insert ~200k rows into a table in one transaction from the psql 
console, calling a stored function in the plperlu language, which inserts row 
by row via spi_exec_prepared. If the table contains a foreign key 
(id_parent int4 not null references this_table(id) on delete cascade), 
memory allocated to the backend process grows and grows (roughly, 
1GB is consumed for the first 100k rows) and is not released, neither after a 
successful end of transaction nor after Ctrl-C.
If the table does not contain that foreign key, consumed memory does not 
grow this way - all is O.K.
I do not know if I am missing something, if this is expected, or if it 
is a memory leak. If somebody is interested, I can provide my 
postgresql.conf and I can write a script isolating and demonstrating this 
phenomenon.


Regards,
pajout

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] foreign keys and memory consumption

2007-10-09 Thread Simon Riggs
On Tue, 2007-10-09 at 11:28 +0200, Jan Poslusny wrote:

 pg 8.2.4 on Fedora Core 6 x86-64, mostly default postgresql.conf, just 
 shared_buffers increased to 256MB. 1GB RAM.
 I attempt to insert ~200k rows into a table in one transaction from the psql 
 console, calling a stored function in the plperlu language, which inserts row 
 by row via spi_exec_prepared. If the table contains a foreign key 
 (id_parent int4 not null references this_table(id) on delete cascade), 
 memory allocated to the backend process grows and grows (roughly, 
 1GB is consumed for the first 100k rows) and is not released, neither after a 
 successful end of transaction nor after Ctrl-C.
 If the table does not contain that foreign key, consumed memory does not 
 grow this way - all is O.K.
 I do not know if I am missing something, if this is expected, or if it 
 is a memory leak. If somebody is interested, I can provide my 
 postgresql.conf and I can write a script isolating and demonstrating this 
 phenomenon.

It's a known side effect of the way FKs work currently.
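One common workaround for bulk loads is to drop the FK for the duration of the load and re-create it afterwards, so the check runs once over the whole table instead of queuing one after-trigger event per inserted row. A sketch only (the constraint name below assumes PostgreSQL's default naming; check \d this_table for the real one):

```sql
BEGIN;
ALTER TABLE this_table DROP CONSTRAINT this_table_id_parent_fkey;
-- ... bulk-insert the ~200k rows here ...
ALTER TABLE this_table
  ADD CONSTRAINT this_table_id_parent_fkey
  FOREIGN KEY (id_parent) REFERENCES this_table (id) ON DELETE CASCADE;
COMMIT;
```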

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] DB upgrade

2007-10-09 Thread Andrew Kelly
Hi folks,

please forgive what feels like a no-brainer even as I ask it, but...

I've read dozens of times in these lists that when one is upgrading from
an older to newer version of PG, the DB being dumped (older version)
should be done so using pg_dump from the newer version. I think I've
probably absorbed that bit of wisdom, but I've never actually had to put
it to use so it's always just been an academic understanding and a quick
scan every time I see the topic come up.

Well I'm actually going to need to dump an older DB and restore to a
newer version very soon, and I'd just like a point of clarification
please. Which bit exactly are we supposed to use from the newer version?

I mean, I've used pg_dump hundreds of times but I've never had need to
actually look at it, and I've always assumed that it was a binary file,
especially since the advice is always to use the newer version's pg_dump
to start the migration, intimating the working bits are inside there
somewhere, and that they're different in every version.

Unless my installation is unique in some way of which I'm yet unaware,
pg_dump seems to be just a handful of lines in a perl script. In fact,
pg_dump, pg_restore and pg_dumpall are all symlinks to the same simple
perl script, the contents of which seem to be identical in both my
'older' and 'newer' versions of PG. Does this mean I can trust any old
dump from my older server to seamlessly plug into my newer version? Or
does this mean there are other 'gizmos' than pg_dump which I need to
copy from new machine to old machine to perform the dump?

Andy

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] DB upgrade

2007-10-09 Thread Peter Eisentraut
Am Dienstag, 9. Oktober 2007 schrieb Andrew Kelly:
 Unless my installation is unique in some way of which I'm yet unaware,

Yes, it's a Debian package.

 pg_dump seems to be just a handful of lines in a perl script. In fact,
 pg_dump, pg_restore and pg_dumpall are all symlinks to the same simple
 perl script, the contents of which seem to be identical in both my
 'older' and 'newer' versions of PG. Does this mean I can trust any old
 dump from my older server to seamlessly plug into my newer version? Or
 does this mean there are other 'gizmos' than pg_dump which I need to
 copy from new machine to old machine to perform the dump?

The advice remains:  Install the newest client package and use that pg_dump.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Lifting WHERE conditions out of inner select

2007-10-09 Thread John D. Burger

Tom Lane wrote:


The planner does not look for this type of situation though, and after
some study I think it'd be more trouble than it was worth.  It'd be
less than trivial to determine whether the upper references occurred
only in places where it was safe to pull them up, and the actual
pulling would take some code that doesn't exist now, too.


Okay, good to know.  The situation is obviously easy to avoid, I just  
found the contrast surprising.


Thanks.

- John D. Burger
  MITRE



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL]about Command Prompt.

2007-10-09 Thread longlong
What I want is to use psql to execute a series of SQL commands (such as CREATE
TABLE) from a file.
I have tried runas /noprofile /env /user:postgres f:\postgre\bin\psql
-f filename in the Windows cmd shell, but it doesn't work.

Is there any way to do this, or
how can I make my command work just like in the Command Prompt?


Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-09 Thread John Wells
- Owen Hartnett [EMAIL PROTECTED] wrote:
 At 11:57 PM +0400 10/5/07, John Wells wrote:
 - Felipe de Jesús Molina Bravo [EMAIL PROTECTED]
 wrote:
   May be can help you \timing in psql
 
   El vie, 05-10-2007 a las 21:32 +0400, John Wells escribió:
Guys,
   
In the book PostgreSQL (2nd ed) the author mentions a timer
 script
   he wrote to analyze various performance bits about PostgreSQL.
 I've
   looked everywhere and can't find it. Does anyone know where I can
 find
   a copy, or find an equivalent tool?
 
 I'm afraid it's not even close in functionality. Here are sample 
 results from the timer tool he mentions. This is the kind of data 
 I'm after:
 
 
 This was available on line at www.developers-library.com, but now it 
 looks like that has gone away.  Sometimes an author keeps a web site 
 for code he uses to supplement a book, but I can't find one for him. 
 Here's the code:

Thanks to all. Having problems building it myself. I plan to reimplement in 
ruby if possible. Will post results.

Thanks,
John



Re: [GENERAL]about Command Prompt.

2007-10-09 Thread Hiroshi Saito

Hi.

runas is not required.

For example:
C:\Program Files\PostgreSQL\8.3-beta1\bin> psql -f filename.sql postgres postgres

Password for user postgres:

Please see:
psql --help

Regards,
Hiroshi Saito

- Original Message - 
From: longlong

To: pgsql-general@postgresql.org
Sent: Tuesday, October 09, 2007 9:25 PM
Subject: [GENERAL]about Command Prompt.


What I want is to use psql to execute a series of SQL commands (such as CREATE 
TABLE) from a file.
I have tried runas /noprofile /env /user:postgres f:\postgre\bin\psql -f filename 
in the Windows cmd shell, but it doesn't work.


Is there any way to do this, or
how can I make my command work just like in the Command Prompt?



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-09 Thread korry.douglas



In the book PostgreSQL (2nd ed) the author mentions a timer script he wrote to 
analyze various performance bits about PostgreSQL. I've looked everywhere and 
can't find it. Does anyone know where I can find a copy, or find an equivalent 
tool?
  

Hi John, sorry for the delay in getting back to you.

You can find the timer program at 
http://www.conjectrix.com/pgbook/index.html.  You want the sample 
data/code for chapter 4.


  -- Korry

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] DB upgrade

2007-10-09 Thread Albe Laurenz
Andrew Kelly wrote:
 I've read dozens of times in these lists that when one is
 upgrading from an older to newer version of PG, the DB
 being dumped (older version) should be done so using
 pg_dump from the newer version.

[...]

 Which bit exactly are we supposed to use from the
 newer version?

[...]

 I mean, I've used pg_dump hundreds of times but I've never 
 had need to actually look at it and I've always assumed
 that it was a binary file,

[...]

 Unless my installation is unique in some way of which
 I'm yet unaware, pg_dump seems to be just a handful of
 lines in a perl script. In fact, pg_dump, pg_restore and
 pg_dumpall are all symlinks to the same simple perl script,
 the contents of which seem to be identical in both my
 'older' and 'newer' versions of PG.

What is your PostgreSQL version and your operating system?

pg_dump, pg_restore, and pg_dumpall should all be executables:
On my Linux system:

file `which pg_dump`
/magwien/postgres-8.2.5/bin/pg_dump: ELF 32-bit LSB executable,
  Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5,
  dynamically linked (uses shared libs), stripped

The database dump that you generate with pg_dump is either
an SQL script (if you use the default 'plain' format) or
binary data (if you use one of the other formats).

With pg_dumpall, you can only generate an SQL script.

In any case, the dump will contain commands that can be
used to recreate the database.

The reason why you should always use pg_dump from the newer
database version is the following:

The newer pg_dump knows about what is different in the new
PostgreSQL version and can create a dump that will do what
you mean on the new PostgreSQL server.

An example:

In 8.2, there is a new database parameter
standard_conforming_strings. pg_dump from 8.2 knows
about this and will add the following into your dump:

SET standard_conforming_strings = off;

pg_dump from 8.1 will not do that, so when you import
the dump from 8.1 into an 8.2 database, the interpretation
of strings in the dump will depend on the current setting
of standard_conforming_strings, which may lead to data loss.
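A minimal illustration of the difference (the string literal here is made up for the example):

```sql
SET standard_conforming_strings = off;
SELECT 'a\nb';  -- \n is treated as an escape: a, newline, b

SET standard_conforming_strings = on;
SELECT 'a\nb';  -- backslash is literal: a, \, n, b
```

The same bytes in a dump thus produce different data depending on the setting, which is why the dumping tool must know which server version it is targeting.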

Yours,
Laurenz Albe



[GENERAL] move whole schema to a new tablespace

2007-10-09 Thread Rhys Stewart
Hi all,
short of issuing single commands for each table, is there a way to move
an entire schema to a new tablespace, indices and all?



Re: [GENERAL] PG_TRY(), PG_CATCH()....

2007-10-09 Thread Alvaro Herrera
Alex Vinogradovs wrote:

 Which works fine with successful queries, but for each
 unsuccessful query it complains about reference leaks
 and not properly closed relations.
  Later on I've solved that with use of subtransactions, which
 provide some proper cleanup mechanisms, but I was wondering
 if it is possible to bypass that layer, and make the code
 above work fine just by doing some cleanup within the catch
 block.

The only code that knows how to cleanup completely after transaction
failure is the subtransaction code.  If you need to do something that
may cause a transaction abort, then you must use subtransactions.

(You could of course write your own layer but it would duplicate
subtransaction start/abort so there wouldn't be any point.)

It's expensive, yes, but there are good reasons for that.  If you are
worried about that, I'm sure there are optimizations possible.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] move whole schema to a new tablespace

2007-10-09 Thread Scott Marlowe
On 10/9/07, Rhys Stewart [EMAIL PROTECTED] wrote:
 Hi all,
 short of issuing single commands for each table, is there a way to move
 an entire schema to a new tablespace, indices and all?

I'm pretty sure you're gonna have to write a short pl/pgsql script to do that.
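A sketch of that approach: generate the ALTER statements from the catalogs, save the output to a file, and feed it back to psql (the schema and tablespace names are placeholders):

```sql
-- Emit one ALTER statement per table and per index in the schema.
SELECT 'ALTER TABLE ' || schemaname || '.' || tablename
       || ' SET TABLESPACE new_ts;'
  FROM pg_tables
 WHERE schemaname = 'myschema'
UNION ALL
SELECT 'ALTER INDEX ' || schemaname || '.' || indexname
       || ' SET TABLESPACE new_ts;'
  FROM pg_indexes
 WHERE schemaname = 'myschema';
```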



Re: [GENERAL] DB upgrade

2007-10-09 Thread Andrew Kelly
On Tue, 2007-10-09 at 13:58 +0200, Peter Eisentraut wrote:
 Am Dienstag, 9. Oktober 2007 schrieb Andrew Kelly:
  Unless my installation is unique in some way of which I'm yet unaware,
 
 Yes, it's a Debian package.

Indeed, yes. 

Where can I read what that means in the great scheme of things? 
Are you saying that Deb is markedly different from other packages (.rpm)
or that any packaged version of PG is different from the recommended
source install?

  pg_dump seems to be just a handful of lines in a perl script. In fact,
  pg_dump, pg_restore and pg_dumpall are all symlinks to the same simple
  perl script, the contents of which seem to be identical in both my
  'older' and 'newer' versions of PG. Does this mean I can trust any old
  dump from my older server to seamlessly plug into my newer version? Or
  does this mean there are other 'gizmos' than pg_dump which I need to
  copy from new machine to old machine to perform the dump?
 
 The advice remains:  Install the newest client package and use that pg_dump.

[grumble]
I had a remark here about how confused I still was, since my client
package is installed and still only offering the perl wrapper script.
Then, after a 2 hour meeting and getting back to answering this mail, I
kicked my Debian boxes around a bit and found the actual binaries. So,
I think I'm settled now. 

Thanks for your input Peter, and you as well, Albe. I really appreciate
it.

Andy



Re: [GENERAL] DB upgrade

2007-10-09 Thread Dimitri Fontaine
Le mardi 09 octobre 2007, Andrew Kelly a écrit :
 On Tue, 2007-10-09 at 13:58 +0200, Peter Eisentraut wrote:
 Where can I read what that means in the great scheme of things?
 Are you saying that Deb is markedly different from other packages (.rpm)
 or that any packaged version of PG is different from the recommended
 source install?

man pg_wrapper

Debian provides this script so you can easily connect to whichever PostgreSQL 
installation (different versions, different clusters) with the same tool and 
without manually messing with PATHs etc.:
  psql --cluster 8.2/main ...
  pg_dump --cluster 8.2/main -V
  pg_lsclusters

 Then, after a 2 hour meeting and getting back to answering this mail, I
 kicked my Debian boxes around a bit and found the actual binaries. So,
 I think I'm settled now.

dpkg -L package will give you the exact list of installed files.

Hope this helps, regards,
-- 
dim




[GENERAL] disjoint union types

2007-10-09 Thread Sam Mason
Hi,

I was after opinions as to the best way to lay tables out to get the
effect of a disjoint union type (also known as a tagged union).
When I have to do this at the moment, I'm creating a structure like:

  CREATE TABLE circle ( id SERIAL PRIMARY KEY, radius  REAL NOT NULL );
  CREATE TABLE square ( id SERIAL PRIMARY KEY, sidelen REAL NOT NULL );
  
  CREATE TABLE shapes (
id SERIAL PRIMARY KEY,

tag INTEGER NOT NULL,

circleid INTEGER REFERENCES circle
  CHECK ((tag = 1) = (circleid IS NOT NULL)),
squareid INTEGER REFERENCES square
  CHECK ((tag = 2) = (squareid IS NOT NULL))
  );

I can then put data into this by doing:

  BEGIN;
  INSERT INTO circle (radius) VALUES (1);
  INSERT INTO shapes (tag,circleid) VALUES (1,currval('circle_id_seq'));
  COMMIT;

This works, but it's sometimes a bit of a headache turning things around
so they fit this structure.  Are there standard solutions to this that
work better?


Thanks,
  Sam



Re: [GENERAL] disjoint union types

2007-10-09 Thread Erik Jones

On Oct 9, 2007, at 9:38 AM, Sam Mason wrote:


Hi,

I was after opinions as to the best way to lay tables out to get the
effect of a disjoint union type (also known as a tagged union).
When I have to do this at the moment, I'm creating a structure like:

  CREATE TABLE circle ( id SERIAL PRIMARY KEY, radius  REAL NOT NULL );
  CREATE TABLE square ( id SERIAL PRIMARY KEY, sidelen REAL NOT NULL );

  CREATE TABLE shapes (
    id SERIAL PRIMARY KEY,

    tag INTEGER NOT NULL,

    circleid INTEGER REFERENCES circle
      CHECK ((tag = 1) = (circleid IS NOT NULL)),
    squareid INTEGER REFERENCES square
      CHECK ((tag = 2) = (squareid IS NOT NULL))
  );

I can then put data into this by doing:

  BEGIN;
  INSERT INTO circle (radius) VALUES (1);
  INSERT INTO shapes (tag,circleid) VALUES (1,currval('circle_id_seq'));
  COMMIT;

This works, but it's sometimes a bit of a headache turning things around
so they fit this structure.  Are there standard solutions to this that
work better?


You could use AFTER triggers on your circle and square tables to
automatically make the insert into shapes for you.
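Such a trigger might look roughly like this for the circle table (a sketch; the function and trigger names are made up, and the tag value 1 matches the CHECK constraint from the original mail):

```sql
CREATE FUNCTION circle_to_shapes() RETURNS trigger AS $$
BEGIN
  -- register the new circle in the shapes table
  INSERT INTO shapes (tag, circleid) VALUES (1, NEW.id);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER circle_shapes_ins
  AFTER INSERT ON circle
  FOR EACH ROW EXECUTE PROCEDURE circle_to_shapes();
```

The square table would get an analogous trigger with tag = 2 and squareid.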


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] starting a stored procedure+rule AFTER an insert

2007-10-09 Thread Bima Djaloeis
Thanks for the reply, is there any online reference / tutorial for this?

-BD

2007/10/8, Douglas McNaught [EMAIL PROTECTED]:

 Bima Djaloeis [EMAIL PROTECTED] writes:

  I have implemented a stored procedure that writes out the newest DB
  entry on insert, and combined it with a rule.
 
  1) create function newcache() returns void AS 'newCache', 'newCache'
 language
  c;
  2) create rule newcacherule AS on insert to caches do also select
 newcache();
 
  The problem is that newcacherule fires BEFORE the insert has taken
  place, so effectively, I always just get the 2nd newest entry to
  write into my text file while the newest entry is stuck in the
  queue until a new insert comes. How can I execute my rule AFTER the
  insert has taken place?

 Rules effectively happen at query parse time.  You probably want an
 AFTER trigger instead.

 -Doug
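For reference, Doug's suggestion might be sketched like this (the plpgsql wrapper and names below are assumptions; newcache() and the caches table are from the original mail):

```sql
CREATE FUNCTION newcache_trigger() RETURNS trigger AS $$
BEGIN
  PERFORM newcache();  -- runs after the row has been inserted
  RETURN NULL;         -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER newcache_after
  AFTER INSERT ON caches
  FOR EACH ROW EXECUTE PROCEDURE newcache_trigger();
```

The "Triggers" chapter of the PostgreSQL documentation covers the details.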



[GENERAL] slow request

2007-10-09 Thread michael . guiard
Hi !
I am using postgreSQL v8.1.5 with latest odbc driver v8.02.0500.
I have very slow request between my server and my client. They are
both on the same switch 100Mb/s. I have no particular network
problems.
I use the pgadmin tool to do my request.

My database is composed of one table. This table has some simple fields
(varchar & int, less than 500 bytes per row) and its primary key is a
varchar(32) (CLSID). This table has 140,000 records.
I know a CLSID as the primary key is not the best choice, but it is
mine :)

The request "select * from mytable" on the server takes
approximately 30 seconds.
The same request on the client takes approximately 400 seconds!
What I do not understand is that my network traffic during this
request on the client side is very low. It is less than 100 KB/s!

Why is it so slow? I suppose that my 140,000 records are retrieved one
by one... is it true? if yes, why?

I tried the same thing with another table with an 'int4' primary
key. The result is the same: 540 secs to retrieve 150,000 records
at 45 KB/s (average speed) (132 bytes/record * 150,000 = 19 MB; 19 MB / 45 KB/s
= 430 seconds).

How can I improve this performance?
thanks
Mike




Re: [GENERAL] slow request

2007-10-09 Thread Scott Marlowe
On 10/9/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Hi !
 I am using postgreSQL v8.1.5 with latest odbc driver v8.02.0500.
 I have very slow request between my server and my client. They are
 both on the same switch 100Mb/s. I have no particular network
 problems.
 I use the pgadmin tool to do my request.

 My database is composed of one table. This table has some simple fields
 (varchar & int, less than 500 bytes per row) and its primary key is a
 varchar(32) (CLSID). This table has 140,000 records.
 I know a CLSID as the primary key is not the best choice, but it is
 mine :)

 The request "select * from mytable" on the server takes
 approximately 30 seconds.
 The same request on the client takes approximately 400 seconds!
 What I do not understand is that my network traffic during this
 request on the client side is very low. It is less than 100 KB/s!

 Why is it so slow? I suppose that my 140,000 records are retrieved one
 by one... is it true? if yes, why?

Who cares one way or another...  They're fast locally and slow
remotely, which means you've likely got some kind of networking issue
going on here.

How fast can you network copy things from the server to the client
without pgsql involved?  (i.e. rsync, ftp, wget http, scp etc...)
What OS are the client and server running?



Re: [GENERAL] slow request

2007-10-09 Thread Frank Finner
On Tue, 09 Oct 2007 10:34:45 - [EMAIL PROTECTED] thought long, then sat 
down and wrote:

 Hi !
 I am using postgreSQL v8.1.5 with latest odbc driver v8.02.0500.
 I have very slow request between my server and my client. They are
 both on the same switch 100Mb/s. I have no particular network
 problems.
 I use the pgadmin tool to do my request.
 
 My database is composed of one table. This table has some simple fields
 (varchar & int, less than 500 bytes per row) and its primary key is a
 varchar(32) (CLSID). This table has 140,000 records.
 I know a CLSID as the primary key is not the best choice, but it is
 mine :)
 
 The request "select * from mytable" on the server takes
 approximately 30 seconds.
 The same request on the client takes approximately 400 seconds!
 What I do not understand is that my network traffic during this
 request on the client side is very low. It is less than 100 KB/s!
 
 Why is it so slow? I suppose that my 140,000 records are retrieved one
 by one... is it true? if yes, why?
 
 I tried the same thing with another table with an 'int4' primary
 key. The result is the same: 540 secs to retrieve 150,000 records
 at 45 KB/s (average speed) (132 bytes/record * 150,000 = 19 MB; 19 MB / 45 KB/s
 = 430 seconds).
 
 How can I improve this performance?
 thanks
 Mike
 
 


This could be a DNS-related problem, if it is not the request itself but the 
connect that is slow. How did you measure these times and speeds?

-- 
Frank Finner

Invenius - Lösungen mit Linux
Köpfchenstraße 36
57072 Siegen
Telefon: 0271 231 8606Mail: [EMAIL PROTECTED]
Telefax: 0271 231 8608Web:  http://www.invenius.de
Key fingerprint = 90DF FF40 582E 6D6B BADF  6E6A A74E 67E4 E788 2651





[GENERAL] Solutions for listening on multiple ports?

2007-10-09 Thread Jason L. Buberel
Is there a 'generally accepted' best practice for enabling a single 
postgres instance to listen for client connections on more than one 
ip/port combination?


As far as I can tell, the 'listen_address' and 'port' configuration 
variables can only accommodate single values:


listen_address = 127.0.0.1
port = 5432

What I would like to simulate is Apache's notation:

Listen: 127.0.0.1:5432
Listen: 192.168.0.1:54824
...

The driving force behind this is network security policies and such. I would 
prefer not to resort to kernel-level netfilter trickery to accomplish 
this, if possible.


Thanks,
Jason


Re: [GENERAL] Solutions for listening on multiple ports?

2007-10-09 Thread Erik Jones

On Oct 9, 2007, at 11:22 AM, Jason L. Buberel wrote:

Is there a 'generally accepted' best practice for enabling a single  
postgres instance to listen for client connections on more than one  
ip/port combination?


As far as I can tell, the 'listen_address' and 'port' configuration  
variables can only accommodate single values:


listen_address = 127.0.0.1
port = 5432

What I would like to simulate is Apache's notation:

Listen: 127.0.0.1:5432
Listen: 192.168.0.1:54824
...

The force behind this is network security policies and such. I  
would prefer to not resort to kernel-level netfilter trickery to  
accomplish this, if possible.


You can separate listen addresses with commas:

listen_addresses = '127.0.0.1,192.168.0.1'

AFAIK, you only get one port per cluster.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] Solutions for listening on multiple ports?

2007-10-09 Thread Scott Marlowe
On 10/9/07, Jason L. Buberel [EMAIL PROTECTED] wrote:

  Is there a 'generally accepted' best practice for enabling a single
 postgres instance to listen for client connections on more than one ip/port
 combination?

  As far as I can tell, the 'listen_address' and 'port' configuration
 variables can only accommodate single values:

  listen_address = 127.0.0.1
  port = 5432

As mentioned by someone else, you can have > 1 IP be listened to, but
only the one port.  You could likely use port forwarding to accomplish
having pgsql listen on > 1 port.  In Linux you'd do this with
iptables.



Re: [GENERAL] PG_TRY(), PG_CATCH()....

2007-10-09 Thread Alex Vinogradovs
No, I'm not worried about them failing. My code isn't transactional...
I'm just worried about getting a whole bunch of warnings about reference
leaks.


On Tue, 2007-10-09 at 09:59 -0400, Alvaro Herrera wrote:

 The only code that knows how to cleanup completely after transaction
 failure is the subtransaction code.  If you need to do something that
 may cause a transaction abort, then you must use subtransactions.
 
 (You could of course write your own layer but it would duplicate
 subtransaction start/abort so there wouldn't be any point.)
 
 It's expensive, yes, but there are good reasons for that.  If you are
 worried about that, I'm sure there are optimizations possible.
 



Re: [GENERAL] DB upgrade

2007-10-09 Thread Richard Huxton

Andrew Kelly wrote:

On Tue, 2007-10-09 at 13:58 +0200, Peter Eisentraut wrote:

Am Dienstag, 9. Oktober 2007 schrieb Andrew Kelly:

Unless my installation is unique in some way of which I'm yet unaware,

Yes, it's a Debian package.


Indeed, yes. 

Where can I read what that means in the great scheme of things? 
Are you saying that Deb is markedly different from other packages (.rpm)

or that any packaged version of PG is different from the recommended
source install?


All packaged versions will differ from the source tarball in some way. 
At one end of the scale, the ./configure options for where files go by 
default will be different. At the other end of the scale the Windows 
package bundles pgAdmin.


Debian have a (somewhat complex at first but very clever) system that 
lets you have multiple installations of PG at the same time. This is 
easy to do with source installs (because you'll make sure each has its 
own directories and port by hand) but harder with packaging systems 
(because the default settings for 8.1 and 8.2 will be the same).


To see all your installed packages, try:
  dpkg-query --list 'postgresql*'
If you're interested in the details try --listfiles:
  dpkg-query --listfiles postgresql-common
Basically version-specific binaries etc go in their own directories and 
what you run is just a wrapper that redirects to the appropriate version.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] slow request

2007-10-09 Thread Steve Atkins


On Oct 9, 2007, at 6:34 AM, [EMAIL PROTECTED] wrote:


Hi !
I am using postgreSQL v8.1.5 with latest odbc driver v8.02.0500.
I have very slow request between my server and my client. They are
both on the same switch 100Mb/s. I have no particular network
problems.
I use the pgadmin tool to do my request.

My database is compose of one table. This table has some simple fields
(varchar  int, less than 500 bytes) and its primary key is a
varchar(32) (CLSID). This table has 14 records.
I know the primary key as a clsid is not the best choice, but it is
mine :)

The request select * from mytable on the server takes
approximatively 30 seconds.
The same request on the client takes approximatively 400 seconds !
What I do not understand is that my network traffic during this
request on the client side is very low. It is less than 100KB/s !

Why is it so slow ? I suppose that my 14 records are retrieve one
by one... is it true ? if yes, why ?

I try to do the same thing with another table with a primary key as
'int4'. The result is the same : 540 secs to retrieve 15 records
at 45 KB/s (average speed) (132 octets/record * 15 = 19MB / 45 KB/
s = 430 seconds)

How can I improve these performances ?


This has been reported before, IIRC, and one issue was that pgadmin
spent a lot longer rendering the data than it did retrieving it.

So before you look any further, run the same query from psql and
see if that changes anything.

Cheers,
  Steve






[GENERAL] Generating subtotal reports direct from SQL

2007-10-09 Thread Owen Hartnett


I'm hoping there's a real easy way of doing this that I'm just missing:

Given a Select statement such as:

Select ID, code, amount from foo where code < 10;

that gives me a table like this:

ID   code   amount
__________________
1    4      20
2    3      10
3    4      15
4    2      10
5    3      9
6    3      8

I want to generate a report table like the following (group by code):

ID   code   amount
__________________
4    2      10
     2      10
2    3      10
5    3      9
6    3      8
     3      27
1    4      20
3    4      15
     4      35
            72

Such that the final table has additional subtotal rows with the 
aggregate sum of the amounts.  I'm thinking I can generate two tables 
and merge them, but is there an easier way using a fancy Select 
statement?


-Owen



Re: [GENERAL] Generating subtotal reports direct from SQL

2007-10-09 Thread Erik Jones

On Oct 9, 2007, at 4:53 PM, Owen Hartnett wrote:



I'm hoping there's a real easy way of doing this that I'm just  
missing:


Given a Select statement such as:

Select ID, code, amount from foo where code < 10;

that gives me a table like this:

ID   code   amount
__________________
1    4      20
2    3      10
3    4      15
4    2      10
5    3      9
6    3      8

I want to generate a report table like the following (group by code):

ID   code   amount
__________________
4    2      10
     2      10
2    3      10
5    3      9
6    3      8
     3      27
1    4      20
3    4      15
     4      35
            72

Such that the final table has additional subtotal rows with the  
aggregate sum of the amounts.  I'm thinking I can generate two  
tables and merge them, but is there an easier way using a fancy  
Select statement?


Try generating them and merging them in one query:

SELECT ID, code, amount
FROM (SELECT ID, code, amount
  FROM table_name
  UNION
  SELECT null, code, sum(amount)
  FROM table_name
  GROUP BY code) t
ORDER BY code, ID

Note that I didn't test that

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





[GENERAL] How to speedup intarray aggregate function?

2007-10-09 Thread Dmitry Koterov
Hello.

I created an aggregate:

CREATE AGGREGATE intarray_aggregate_push (_int4)
(
  STYPE = _int4,
  SFUNC = intarray_push_array,
  INITCOND = '{}'
);

(or - I may use _int_union instead of intarray_push_array, its speed is
practically the same in my case).
This aggregate merges together a list of integer[] arrays, resulting in
one big array with all the elements.

Then I want to use this aggregate:

SELECT intarray_aggregate_push(arrayfield)
FROM arraytable

The table arraytable contains a lot of rows (about 5000); each row has an
array of 5-10 elements, so the resulting array should contain about
5 elements.

The query is okay, but its speed is too bad: about 1 second.

The main problem is the speed of the intarray_aggregate_push function - it is
quite slow, because intarray_push_array reallocates the memory each time I
merge two arrays. I am pretty sure that the reallocation and copying is the
bottleneck, because if I use another dummy aggregate:

CREATE AGGREGATE intarray_aggregate_dummy (_int4)
(
  STYPE = _int4,
  SFUNC = dummy,
  INITCOND = '{}'
);

CREATE OR REPLACE FUNCTION public.dummy (a integer [], b integer [])
RETURNS integer [] AS
$body$ BEGIN RETURN a; END; $body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

where dummy() is the function which returns its first argument without any
modification, the speed grows dramatically - about 25 ms (instead of 1000
ms!).

The question is: how could I optimize this, and is it possible at all in
Postgres? I just want to get one large array glued from a lot of smaller
arrays...


P.S.

I have tested that

SELECT array_to_string(ARRAY(SELECT text FROM tbl), ' ')

query is many times faster than joining all the text fields inside one
pl/pgsql stored function (I assume that it is because Postgres does not
reallocate and copy memory each time it glues on a new text piece). But
unfortunately there is no way to convert integer[] to a string to use this
method: I could write

select '{1,2}'::integer[]

but I couldn't use

select ARRAY[1,2]::text


Re: [GENERAL] Generating subtotal reports direct from SQL

2007-10-09 Thread Gregory Stark
Owen Hartnett [EMAIL PROTECTED] writes:

 Such that the final table has additional subtotal rows with the aggregate sum
 of the amounts.  I'm thinking I can generate two tables and merge them, but is
 there an easier way using a fancy Select statement?

Unfortunately the fancy SQL feature you're looking for is ROLLUP which
Postgres doesn't support.

I think you would have to do

(
select id, code, amount from foo where code < 10
union all
select null, code, sum(amount) from foo where code < 10 group by code
) order by code, id
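The UNION ALL approach above can be sanity-checked outside Postgres. Below is a minimal sketch using Python's sqlite3 with the sample data from the thread; the NULL-id sort key is made explicit because SQLite sorts NULLs first in ascending order, while Postgres sorts them last by default:

```python
import sqlite3

def subtotal_report(conn):
    """Detail rows plus a per-code subtotal row (NULL id), as in the thread."""
    return conn.execute("""
        SELECT id, code, amount FROM (
            SELECT id, code, amount FROM foo WHERE code < 10
            UNION ALL
            SELECT NULL AS id, code, SUM(amount) AS amount
            FROM foo WHERE code < 10 GROUP BY code
        ) t
        ORDER BY code, id IS NULL, id  -- subtotal row sorts after its details
    """).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER, code INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)",
                 [(1, 4, 20), (2, 3, 10), (3, 4, 15),
                  (4, 2, 10), (5, 3, 9), (6, 3, 8)])
for row in subtotal_report(conn):
    print(row)
```

Note this reproduces the per-code subtotals only; the grand-total row would need one more UNION ALL branch without the GROUP BY.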


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] How to speedup intarray aggregate function?

2007-10-09 Thread Filip Rembiałkowski
2007/10/10, Dmitry Koterov [EMAIL PROTECTED]:
 Hello.

 I created an aggregate:

 CREATE AGGREGATE intarray_aggregate_push (_int4)
 (
   STYPE = _int4,
   SFUNC = intarray_push_array,
   INITCOND = '{}'
 );

 (or - I may use _int_union instead of intarray_push_array, its speed is
 practically the same in my case).
 This aggregate merges together a list of integer[] arrays resulting one big
 array with all elements.

 Then I want to use this aggregate:

 SELECT intarray_aggregate_push(arrayfield)
 FROM arraytable

  The table arraytable contains a lot of rows (about 5000), each row has
 array with length of 5-10 elements, so - the resulting array should contain
 about 5 elements.

 The query is okay, but its speed is too bad: about 1 second.

 The main problem is the speed of the intarray_aggregate_push function - it is
 quite slow, because intarray_push_array reallocates the memory each time I
 merge two arrays. I am pretty sure that the reallocation and copying is the
 bottleneck, because if I use another dummy aggregate:

 CREATE AGGREGATE intarray_aggregate_dummy (_int4)
 (
   STYPE = _int4,
   SFUNC = dummy,
   INITCOND = '{}'
 );

 CREATE OR REPLACE FUNCTION public.dummy (a integer [], b integer [])
 RETURNS integer [] AS
 $body$ BEGIN RETURN a; END; $body$
 LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

 where dummy() is the function which returns its first argument without any
 modification, the speed grows dramatically - about 25 ms (instead of 1000
 ms!).

 The question is: how could I optimize this, and is it possible at all in
 Postgres? I just want to get one large array glued from a lot of smaller
 arrays...


1. no wonder copying is the bottleneck - this is what the aggregate
does, mostly.

2. you can use plain array_cat for this; in my test it is a few percent faster

3. in this case I guess the intarray contrib is not an option; AFAIK it
was created only for speeding up searches, that is, int4[] lookups

4. to have the kind of optimization you talk about, we would need an
aggregate operating (in this case appending) directly on its
internal state. I'm not sure if this is possible in Postgres

5. my results:
 your method (using intarray_push_array): 940 ms
 using array_cat: 860 ms
 same in PL/PgSQL (LOOP, append): 800 ms
 same thing in Perl, no database (push arrays of arrays into one and
print it): 18 ms
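The reallocation cost measured above is easy to reproduce outside the database. A rough Python sketch (not Postgres internals, just the same access pattern): rebuilding the accumulator on every step copies O(n^2) elements in total, while an aggregate that could mutate its internal state would stay O(n):

```python
def merge_by_rebuild(chunks):
    """Like intarray_push_array as an aggregate SFUNC: every step builds a
    brand-new accumulator, copying everything accumulated so far."""
    acc = []
    copied = 0
    for chunk in chunks:
        acc = acc + chunk      # full copy of acc on every step
        copied += len(acc)
    return acc, copied

def merge_by_append(chunks):
    """What an aggregate appending to its internal state could do instead."""
    acc = []
    for chunk in chunks:
        acc.extend(chunk)      # amortized O(len(chunk))
    return acc

chunks = [[i] * 5 for i in range(1000)]   # 1000 rows of 5-element arrays
rebuilt, copied = merge_by_rebuild(chunks)
assert rebuilt == merge_by_append(chunks)
print(copied)   # total elements copied grows quadratically in the row count
```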


cheers, Filip


-- 
Filip Rembiałkowski



[GENERAL] SLEEP in posgresql

2007-10-09 Thread Jasbinder Singh Bali
Hi,

I have a while loop and I want to re-iterate every 't' seconds.
I was reading the postgresql documentation, which says pg_sleep(t)
should be handy.
However, it doesn't work.

Instead of that, I re-engineered my while loop in the stored procedure as
follows.

while  a=b loop
 --do something
select pg_sleep(5);
end loop

I doubt this would work, because when I try to run
SELECT pg_sleep(5) standalone, it throws an error.

I was wondering how to implement the SLEEP functionality here.

Thanks,
~Jas
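On servers where pg_sleep() is not available (it appeared in 8.2), one common workaround is to drive the wait from the client rather than inside the stored procedure. A hedged sketch of that pattern in Python; `do_work` is a hypothetical stand-in for whatever the loop body calls in the database:

```python
import time

def poll(do_work, interval_s, max_iters):
    """Call do_work() up to max_iters times, sleeping interval_s seconds
    between calls. Stops early when do_work() returns False.
    Returns the number of iterations actually performed."""
    iters = 0
    while iters < max_iters:
        iters += 1
        if not do_work():
            break
        time.sleep(interval_s)
    return iters

# Example: re-check a condition every 5 seconds, at most 3 times
# (interval shortened to 0 here so the example runs instantly).
print(poll(lambda: True, 0, 3))
```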


Re: [GENERAL] SLEEP in posgresql

2007-10-09 Thread Guy Rouillier

Jasbinder Singh Bali wrote:

Hi,

I have a while loop and I want to re-iterate after every 't' seconds.
I was reading up on the postgresql documentation that says pg_sleep(t)
should be handy.
However, it doesn't work.


Hmmm, I'm looking at section 9.9.5 Delaying Execution in the PostgreSQL 
8.2.0 Documentation.  Following the example presented there, I fired up 
psql and ran the following:


postgres=# select current_timestamp; select pg_sleep(3); select
current_timestamp;
             now
-----------------------------
 2007-10-09 23:50:32.649-04
(1 row)

 pg_sleep
----------

(1 row)

             now
-----------------------------
 2007-10-09 23:50:35.774-04
(1 row)

Seems to be working.  What version are you using and on what platform?

--
Guy Rouillier



[GENERAL] replacing single quotes

2007-10-09 Thread Ow Mun Heng
The input is of the form

'ppp','aaa','bbb'

I want it to be stripped of quotes to become

ppp,aaa,bbb

Escaping the quote would work, but it means I will have to do some magic
on the input as well to escape it prior to replacing it.

select replace('AB\'A','\'','C')   -- this works

Can I buy a clue here?


Oh... I've also tried to use quote_literal, but no go.
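For the record, the usual way to put a single quote inside an SQL literal is to double it, so one way to strip all quotes is replace(input, '''', ''), with no backslash magic needed. The same transformation, sketched in Python as a quick check:

```python
def strip_single_quotes(s):
    """Equivalent of SQL replace(s, '''', ''): drop every single quote."""
    return s.replace("'", "")

print(strip_single_quotes("'ppp','aaa','bbb'"))
```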




Re: [GENERAL] SLEEP in posgresql

2007-10-09 Thread Jasbinder Singh Bali
I'm using PostgreSQL version 8.1.4 on Fedora Core 6.
I'm pretty sure that pg_sleep is not implemented in 8.1.
I am not sure what the workaround is.

Jas

On 10/10/07, Harpreet Dhaliwal [EMAIL PROTECTED] wrote:

 I think pg_sleep is not implemented in 8.1 and earlier versions. Is there
 any alternative if someone is using a version before 8.2?

 On 10/9/07, Guy Rouillier [EMAIL PROTECTED]  wrote:
 
  Jasbinder Singh Bali wrote:
   Hi,
  
   I have a while loop and I want to re-iterate after every 't' seconds.
   I was reading up on the postgresql documentation that says pg_sleep(t)
   should be handy.
   However i doesn't work.
 
  Hmmm, I'm looking at section 9.9.5 Delaying Execution in the PostgreSQL
  8.2.0 Documentation.  Following the example presented there, I fired up
  psql and ran the following:
 
  postgres=# select current_timestamp; select pg_sleep(3); select
  current_timestamp;
               now
  -----------------------------
   2007-10-09 23:50:32.649-04
  (1 row)

    pg_sleep
  ----------

  (1 row)

               now
  -----------------------------
   2007-10-09 23:50:35.774-04
  (1 row)
 
  Seems to be working.  What version are you using and on what platform?
 
  --
  Guy Rouillier
 
 




Re: [GENERAL]about Command Prompt.

2007-10-09 Thread Hiroshi Saito

Hi.

- Original Message - 
From: longlong 


If it is not in Command Prompt, how can I work in a .bat file like that?
And I don't want the user to input a password; no interaction.


Ahh, Ok.

Please see.
http://developer.postgresql.org/pgdocs/postgres/libpq-pgpass.html
notepad %APPDATA%\postgresql\pgpass.conf
--edit-sample--
*:5433:postgres:postgres:postgres

The following then does not request a password:

C:\Program Files\PostgreSQL\8.3-beta1\bin>psql postgres postgres
Welcome to psql 8.3beta1, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
  \h for help with SQL commands
  \? for help with psql commands
  \g or terminate with semicolon to execute query
  \q to quit

postgres=# \q

I hope this helps you. :-)

Regards,
Hiroshi Saito



Re: [GENERAL]about Command Prompt

2007-10-09 Thread Stuart Cooper
about Command Prompt:

http://www.commandprompt.com/about/



Re: [GENERAL]about Command Prompt.

2007-10-09 Thread Hiroshi Saito
- Original Message - 
From: longlong 


thanks !

:-)

P.S.)
pgpass.conf can also be edited via the GUI in pgAdminIII.
Please use it!

Regards,
Hiroshi Saito

2007/10/10, Hiroshi Saito [EMAIL PROTECTED]:
Hi.

- Original Message -
From: longlong


if it is not in Command Prompt, how can I work in a .bat file like that?
And I don't want the user to input a password; no interaction.


Ahh, Ok. 


Please see.
http://developer.postgresql.org/pgdocs/postgres/libpq-pgpass.html
notepad %APPDATA%\postgresql\pgpass.conf 
--edit-sample--

*:5433:postgres:postgres:postgres

The following then does not request a password:

C:\Program Files\PostgreSQL\8.3-beta1\bin>psql postgres postgres
Welcome to psql 8.3beta1, the PostgreSQL interactive terminal. 


Type:  \copyright for distribution terms
  \h for help with SQL commands
  \? for help with psql commands
  \g or terminate with semicolon to execute query
  \q to quit

postgres=# \q 


I hope this helps you. :-)

Regards,
Hiroshi Saito
