[ADMIN] How this query!

2005-08-25 Thread Richard Susanto
Folks,
could you help me write a query for the following case?

I have tbl_a, the fields are :
item_record
item_product_name
item_qty


if the data are :

item_record = 1,2,3,4,5,6
item_product_name = a,b,a,c,d,c,...
item_qty = 10,5,15,10,20,5,...

What query sums item_qty (as item_qty_total) if I want this result:
item_product_name = a,b,c,d
item_qty_total = 25,5,15,20

Regards,
-- 
Richard Sitompul
Software Engineer
PT. Orbis Indonesia
http://www.orbisindonesia.com
http://richardsstp.3wsi.net


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] How this query!

2005-08-25 Thread Lars Haugseth

* [EMAIL PROTECTED] (Richard Susanto) wrote:
|
| Folks,
| could you help me write a query for the following case?
| 
| I have tbl_a, the fields are :
| item_record
| item_product_name
| item_qty
| 
| 
| if the data are :
| 
| item_record = 1,2,3,4,5,6
| item_product_name = a,b,a,c,d,c,...
| item_qty = 10,5,15,10,20,5,...
| 
| What query sums item_qty (as item_qty_total) if I want this result:
| item_product_name = a,b,c,d
| item_qty_total = 25,5,15,20

SELECT item_product_name, sum(item_qty) FROM tbl_a GROUP BY item_product_name;

-- 
Lars Haugseth

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[ADMIN] size of indexes and tables (more than 1GB)

2005-08-25 Thread jose fuenmayor
I have read that when a table grows beyond 1GB it is split into
several files named inode, inode.1, inode.2, inode.3, etc.

I have a table of 1.3GB (9,618,118 rows, 13 fields). It is split up in
that way, as I can see under $PGDATA/base, but each file has the same
size: inode (1.3GB), inode.1 (1.3GB), inode.2 (1.3GB). Isn't this a
waste of space? Is that file space reusable by PostgreSQL?

The table is three times bigger than, for instance, a Visual FoxPro
DBF, since it is physically there three times.

I am a little confused about this.

I would appreciate it if anybody could explain this to me.

Thanks in advance.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] size of indexes and tables (more than 1GB)

2005-08-25 Thread Bruno Wolff III
On Thu, Aug 25, 2005 at 08:52:35 -0400,
  jose fuenmayor [EMAIL PROTECTED] wrote:
 
 The table is three times bigger than, for instance, a Visual FoxPro DBF,
 since it is physically there three times.

Have you been vacuuming properly? It is possible you have a lot of dead
tuples in the database. If that seems to be the case and you need more
help dealing with this, please include the version of Postgres you are
running, since that will affect what suggestions people will give you.



[ADMIN] Question regarding blocking locks

2005-08-25 Thread Kevin Keith
I have a question regarding blocking locks in the pg database. I ran into a
process which terminated abnormally, and to fully clear the locks it left
behind I had to reboot the system (restarting the postmaster would probably
have had the same effect). This was a personal development system, so
rebooting it was no big deal. I would like to know what other options I
have, so that if this were to occur in a production environment I would
have a less drastic way to resolve the issue.


I saw the locks in the pg_locks view (the mode was ExclusiveLock).

Can someone point me in the right direction to addressing such a problem 
should it occur in the future?


Thanks,

Kevin




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [ADMIN] How this query!

2005-08-25 Thread Bruno Wolff III
On Thu, Aug 25, 2005 at 18:44:00 +0700,
  Richard Susanto [EMAIL PROTECTED] wrote:
 Folks,
 could you help me write a query for the following case?

This question is off topic for the pgsql-admin list, it should have been
asked on the pgsql-sql list because it was a question about SQL. (I probably
wouldn't have said anything if it had been posted to the pgsql-general
or pgsql-novice lists, but it is definitely not a question even remotely
related to postgres administration.)

 
 I have tbl_a, the fields are :
 item_record
 item_product_name
 item_qty
 
 
 if the data are :
 
 item_record = 1,2,3,4,5,6
 item_product_name = a,b,a,c,d,c,...
 item_qty = 10,5,15,10,20,5,...
 
 What query sums item_qty (as item_qty_total) if I want this result:
 item_product_name = a,b,c,d
 item_qty_total = 25,5,15,20

Use GROUP BY item_product_name and sum(item_qty). Something like:

SELECT item_product_name, sum(item_qty) AS item_qty_total
  FROM tbl_a
  GROUP BY item_product_name
  ORDER BY item_product_name
;
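For anyone who wants to sanity-check the arithmetic outside the database, here is a minimal Python sketch (illustrative only; it simply mirrors the sample data from the question) that performs the same grouped sum:

```python
from collections import defaultdict

# Sample rows from the question: (item_record, item_product_name, item_qty)
rows = [
    (1, "a", 10), (2, "b", 5), (3, "a", 15),
    (4, "c", 10), (5, "d", 20), (6, "c", 5),
]

# Equivalent of: SELECT item_product_name, sum(item_qty) ... GROUP BY item_product_name
totals = defaultdict(int)
for _record, name, qty in rows:
    totals[name] += qty

print(dict(sorted(totals.items())))  # {'a': 25, 'b': 5, 'c': 15, 'd': 20}
```

This matches the result asked for: a=25, b=5, c=15, d=20.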

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] size of indexes and tables (more than 1GB)

2005-08-25 Thread Aldor

Hi Chris,

 If you're running VACUUM often enough, then there's nothing wrong, and
 nothing to be done.  You're simply observing how PostgreSQL handles
 large tables.

Wrong. I have a big table: running VACUUM on it the first time takes as
long as running it again after that VACUUM has finished. There are other
problems with VACUUM that are fixed in 8.1. In 8.1 you have a
server-internal AUTOVACUUM; setting this up correctly might be the
solution.


My table has about 40GB of data with about 120 million tuples, correct
max_fsm settings, etc...


I created test databases with about 10-20 million tuples, and VACUUM runs
fast; but not when you make many changes and your tables are much bigger.


Chris Browne wrote:

[EMAIL PROTECTED] (jose fuenmayor) writes:


I have read that when a table grows beyond 1GB it is split into
several files named inode, inode.1, inode.2, inode.3, etc.

I have a table of 1.3GB (9,618,118 rows, 13 fields). It is split up in
that way, as I can see under $PGDATA/base, but each file has the same
size: inode (1.3GB), inode.1 (1.3GB), inode.2 (1.3GB). Isn't this a
waste of space? Is that file space reusable by PostgreSQL?

The table is three times bigger than, for instance, a Visual FoxPro
DBF, since it is physically there three times.



Having file, file.1, file.2, and such is routine; that is the
normal handling of tables that grow beyond 1GB in size.  If there is
actually 3GB of data to store in the table, then there is nothing to
be 'fixed' about this.  There is no duplication of data; each of those
files contains distinct sets of tuples.

First question...

Are you vacuuming the table frequently to reclaim dead space?

If that table is heavily updated (e.g. - via DELETE/UPDATE; mere
INSERTs do NOT represent updates in this context), then maybe
there's a lot of dead space, and running VACUUM would cut down on the
size.

If you're running VACUUM often enough, then there's nothing wrong, and
nothing to be done.  You're simply observing how PostgreSQL handles
large tables.




[ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Marc G. Fournier


'k, I've been racking my brains over this today, and I'm either 
misunderstanding what is being reported *or* it's reporting wrong ...


According to syslog:

LOG:  duration: 4107.987 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='8b8e7b7ff9b1b2ed5fc60218ced28d00';

But, if I do an EXPLAIN ANALYZE:

# explain analyze UPDATE session SET hit_time=now() WHERE 
md5='702c6cb20d5eb254c3feb2991e7e5e31';
 QUERY PLAN

 Index Scan using session_md5_key on session  (cost=0.00..6.01 rows=1 
width=93) (actual time=0.060..0.060 rows=0 loops=1)
   Index Cond: (md5 = '702c6cb20d5eb254c3feb2991e7e5e31'::bpchar)
 Total runtime: 0.171 ms
(3 rows)

And it doesn't matter what value I put for md5, I still get less than 1 ms ...

I could see some variations, but almost 4000x slower  to *really* run the 
query vs an explain analyze?


This is with 7.4.2 ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Aldor
As far as I know, EXPLAIN ANALYZE actually runs the query. I think you are
just running the query twice. The first time you run the query it will take
a long time to be processed; after the first run the query planner will
remember the best way to run the query, so your second run is much faster.


I can reproduce this behavior for some queries under 8.0.1, so I'm not
100% sure whether it is the same behavior under 7.4.2.


I'm still wondering why your first query takes about 4107.987 ms; this
kind of query usually runs much, much faster.


When did you last vacuum the table?

Marc G. Fournier wrote:


'k, I've been racking my brains over this today, and I'm either 
misunderstanding what is being reported *or* it's reporting wrong ...


According to syslog:

LOG:  duration: 4107.987 ms  statement: UPDATE session SET 
hit_time=now() WHERE md5='8b8e7b7ff9b1b2ed5fc60218ced28d00';


But, if I do an EXPLAIN ANALYZE:

# explain analyze UPDATE session SET hit_time=now() WHERE 
md5='702c6cb20d5eb254c3feb2991e7e5e31';

 QUERY PLAN
 

 Index Scan using session_md5_key on session  (cost=0.00..6.01 rows=1 
width=93) (actual time=0.060..0.060 rows=0 loops=1)

   Index Cond: (md5 = '702c6cb20d5eb254c3feb2991e7e5e31'::bpchar)
 Total runtime: 0.171 ms
(3 rows)

And it doesn't matter what value I put for md5, I still get less than 1 ms ...

I could see some variations, but almost 4000x slower  to *really* run 
the query vs an explain analyze?


This is with 7.4.2 ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Marc G. Fournier

On Thu, 25 Aug 2005, Aldor wrote:

As I know EXPLAIN ANALYZE runs the query. I think you are just running the 
query two times. The first time you run the query it will take a long time to 
be processed - after the first run the query planner will remember the best 
way to run the query so your second run runs much faster.


I can reproduce this behavior for some queries under 8.0.1 - so I'm not 100% 
sure if it is the same behavior under 7.4.2.


I'm still wondering why your first query takes about 4107.987 ms - this kind 
of query usually runs much, much faster.


That would work if I were to get only occasional high values in syslog, 
but this is an almost consistent thing over a very short period of time:


Aug 25 14:53:32 forgehouse-s1 postgres[23721]: [2-1] LOG:  duration: 567.559 ms 
 statement: UPDATE session SET hit_time=now() WHERE 
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:32 forgehouse-s1 postgres[23722]: [2-1] LOG:  duration: 565.966 ms 
 statement: UPDATE session SET hit_time=now() WHERE 
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:33 forgehouse-s1 postgres[23723]: [2-1] LOG:  duration: 1192.789 
ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:53 forgehouse-s1 postgres[23727]: [2-1] LOG:  duration: 12159.162 
ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:54 forgehouse-s1 postgres[23728]: [2-1] LOG:  duration: 3283.185 
ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:57 forgehouse-s1 postgres[23729]: [2-1] LOG:  duration: 2116.516 
ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='7537b74eab488de54d6e0167d1919207';

And you will notice that the last two are the exact same md5 value ...


When did you vacuumed the table the last time?


pg_autovacuum is running to keep things up to date, and I just ran a 
VACUUM FULL *and* a REINDEX, just to make sure things are clean ...


No matter when I try it, I can't seem to get a value above 10ms for that 
query above when I do it from psql ...






Marc G. Fournier wrote:


'k, I've been racking my brains over this today, and I'm either 
misunderstanding what is being reported *or* it's reporting wrong ...


According to syslog:

LOG:  duration: 4107.987 ms  statement: UPDATE session SET hit_time=now() 
WHERE md5='8b8e7b7ff9b1b2ed5fc60218ced28d00';


But, if I do an EXPLAIN ANALYZE:

# explain analyze UPDATE session SET hit_time=now() WHERE 
md5='702c6cb20d5eb254c3feb2991e7e5e31';

 QUERY PLAN


 Index Scan using session_md5_key on session  (cost=0.00..6.01 rows=1 
width=93) (actual time=0.060..0.060 rows=0 loops=1)

   Index Cond: (md5 = '702c6cb20d5eb254c3feb2991e7e5e31'::bpchar)
 Total runtime: 0.171 ms
(3 rows)

And it doesn't matter what value I put for md5, I still get less than 1 ms ...

I could see some variations, but almost 4000x slower  to *really* run the 
query vs an explain analyze?


This is with 7.4.2 ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664







Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Michael Fuhr
On Thu, Aug 25, 2005 at 03:56:36PM -0300, Marc G. Fournier wrote:
 On Thu, 25 Aug 2005, Aldor wrote:
 I'm still wondering why your first query takes about 4107.987 ms - this 
 kind of query usually runs much, much faster.
 
 That would work if I were to get really occasional high values in syslog, 
 but this is almost a consist thing over a very short period of time:

Are the updates happening inside a transaction?  Is it possible
that they're being blocked by other transactions that update the
same record around the same time and don't commit immediately?
I can duplicate the results you're seeing by doing that.
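The effect Michael describes (a trivial UPDATE whose logged duration is mostly time spent waiting on a lock held by another, uncommitted transaction) can be simulated with plain Python threads; the lock below is a hypothetical stand-in for the row lock, not PostgreSQL itself:

```python
import threading
import time

row_lock = threading.Lock()   # stands in for the row-level lock on the session row
wait_seen = []

def open_transaction():
    # First "transaction": updates the row, then delays its COMMIT
    with row_lock:
        time.sleep(0.2)       # the lock is held until the commit

def blocked_update():
    # Second "UPDATE": the work itself is trivial, but it must wait for the lock
    start = time.monotonic()
    with row_lock:
        pass                  # near-instant once the lock is granted
    wait_seen.append(time.monotonic() - start)

t1 = threading.Thread(target=open_transaction)
t2 = threading.Thread(target=blocked_update)
t1.start()
time.sleep(0.05)              # make sure the first "transaction" grabs the lock first
t2.start()
t1.join(); t2.join()
print(f"logged duration would be ~{wait_seen[0]:.2f}s, mostly lock wait")
```

The measured duration is dominated by the other transaction's commit delay, not by the update's own work, which is exactly what would make syslog and EXPLAIN ANALYZE disagree.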

-- 
Michael Fuhr



Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Michael Fuhr
On Thu, Aug 25, 2005 at 01:55:08PM -0600, Michael Fuhr wrote:
 Are the updates happening inside a transaction?  Is it possible
 that they're being blocked by other transactions that update the
 same record around the same time and don't commit immediately?
 I can duplicate the results you're seeing by doing that.

As for why you don't see long durations with EXPLAIN ANALYZE, here's
a possible explanation: ordinary updates of the same record might
happen close together because that's how the application works, but
by the time you run EXPLAIN ANALYZE no other transactions are updating
that record so the update executes immediately.

-- 
Michael Fuhr



[ADMIN] Problem with rules

2005-08-25 Thread Chris Hoover
I am having a problem with the rule system in 7.3.4.

I have a view with an ON INSERT DO INSTEAD rule that calls a
function and passes new.*.  When I try to insert a row into the view, I
get ERROR: ResolveNew: can't handle whole-tuple reference.

I was working with this in 8.0 on an R&D project.  However, management
liked the concept I was working on so much that they asked me to see
if I could get it to run in our current environment of 7.3.4.

So far, this does not look good.  Is there any way to get around this
issue? (And no, a db upgrade is not available at this time, but one is
scheduled in the next 6 months.)

Thanks,

Chris



[ADMIN] dumping query results to a csv

2005-08-25 Thread David Durham

This is kind of a pg-admin newbie question, so apologies in advance.

Anyway, I'd like to issue a command that dumps the results of a query to
a txt file in comma delimited format.  Does PostgreSQL ship with
something to do this?  I searched the web, but found what appeared to be
non-free solutions.

Thanks,

Dave




Re: [ADMIN] dumping query results to a csv

2005-08-25 Thread Jeff Frost

David,

You're probably looking for something like this from the psql man page:

   -F separator

   --field-separator separator
  Use  separator  as  the  field  separator. This is equivalent to
  \pset fieldsep or \f.

I would guess -F ',' would do the trick.

On Thu, 25 Aug 2005, David Durham wrote:


This is kind of a pg-admin newbie question, so apologies in advance.

Anyway, I'd like to issue a command that dumps the results of a query to
a txt file in comma delimited format.  Does PostgreSQL ship with
something to do this?  I searched the web, but found what appeared to be
non-free solutions.

Thanks,

Dave





--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954



Re: [ADMIN] dumping query results to a csv

2005-08-25 Thread Adi Alurkar

Greetings,

There is no direct way to achieve what you want; the easiest hack is
to create a temp table with your query, i.e.:

create table tmp_foo as select col1, col4, col7 from table1, table2
where ... ;

copy tmp_foo to [stdout | 'file_name'];

HTH

Adi Alurkar
[EMAIL PROTECTED]


On Aug 25, 2005, at 3:24 PM, David Durham wrote:


This is kind of a pg-admin newbie question, so apologies in advance.

Anyway, I'd like to issue a command that dumps the results of a query to
a txt file in comma delimited format.  Does PostgreSQL ship with
something to do this?  I searched the web, but found what appeared to be
non-free solutions.

Thanks,

Dave






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [ADMIN] dumping query results to a csv

2005-08-25 Thread Jeff Frost

Whoops, should have also mentioned that you want -P format=unaligned like so:

psql -P format=unaligned -F ',' snort <<EOF
select * from iphdr;
EOF

On Thu, 25 Aug 2005, Jeff Frost wrote:


David,

You're probably looking for something like this from the psql man page:

  -F separator

  --field-separator separator
 Use  separator  as  the  field  separator. This is equivalent 
to

 \pset fieldsep or \f.

I would guess -F , would do the trick.

On Thu, 25 Aug 2005, David Durham wrote:


This is kind of a pg-admin newbie question, so apologies in advance.

Anyway, I'd like to issue a command that dumps the results of a query to
a txt file in comma delimited format.  Does PostgreSQL ship with
something to do this?  I searched the web, but found what appeared to be
non-free solutions.

Thanks,

Dave








--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954



Re: [ADMIN] dumping query results to a csv

2005-08-25 Thread Jim C. Nasby
On Thu, Aug 25, 2005 at 03:48:54PM -0700, Adi Alurkar wrote:
 Greetings,
 
 There is no direct way to achieve what you want, the easiest hack is  
 to create a temp table with your query, i.e.
 
 create table tmp_foo as select col1, col4, col7 from table1, table2  
 where  ;
 copy table tmp_foo to [stdout|file_name]

Can we make this a TODO? It would certainly be handy to be able to COPY
directly from a query.

In the meantime, you can also do something like

SELECT field1 || ',' || field2 || ',' || field3

and capture the output of that to a file, but that's an uglier hack than
the temp-table trick.
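One caveat with the || concatenation: it does no CSV quoting, so a field that itself contains a comma or a double quote silently corrupts the output. If post-processing on the client is acceptable, a CSV library gets the quoting right; a short Python sketch (illustrative, unrelated to psql itself):

```python
import csv
import io

rows = [
    (1, "plain", "ok"),
    (2, "has,comma", 'has "quotes"'),  # would corrupt a naive ','-join
]

buf = io.StringIO()
writer = csv.writer(buf)   # default dialect quotes fields only when necessary
writer.writerows(rows)
print(buf.getvalue(), end="")
# 1,plain,ok
# 2,"has,comma","has ""quotes"""
```

The second row shows the quoting the || trick cannot provide.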
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461



[ADMIN] pgcrypto 'cryptsrc' in Makefile: what is that?

2005-08-25 Thread Colin E. Freas


In the Makefile for pgcrypto there's a cryptsrc variable you can set to 
'builtin' or 'system'.


Makefile only references the variable when it's set to 'builtin'...  I 
can't figure out what it's supposed to do when set to 'system'.


Anyone happen to know?

Thanks,
Colin



Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Aldor
Aug 25 14:53:32 forgehouse-s1 postgres[23721]: [2-1] LOG:  duration: 
567.559 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:32 forgehouse-s1 postgres[23722]: [2-1] LOG:  duration: 
565.966 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:33 forgehouse-s1 postgres[23723]: [2-1] LOG:  duration: 
1192.789 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:53 forgehouse-s1 postgres[23727]: [2-1] LOG:  duration: 
12159.162 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:54 forgehouse-s1 postgres[23728]: [2-1] LOG:  duration: 
3283.185 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:57 forgehouse-s1 postgres[23729]: [2-1] LOG:  duration: 
2116.516 ms  statement: UPDATE session SET hit_time=now() WHERE 
md5='7537b74eab488de54d6e0167d1919207';


Take a look at the timestamps... they are not really close to each other...

Michael Fuhr wrote:

On Thu, Aug 25, 2005 at 01:55:08PM -0600, Michael Fuhr wrote:


Are the updates happening inside a transaction?  Is it possible
that they're being blocked by other transactions that update the
same record around the same time and don't commit immediately?
I can duplicate the results you're seeing by doing that.



As for why you don't see long durations with EXPLAIN ANALYZE, here's
a possible explanation: ordinary updates of the same record might
happen close together because that's how the application works, but
by the time you run EXPLAIN ANALYZE no other transactions are updating
that record so the update executes immediately.





Re: [ADMIN] dumping query results to a csv

2005-08-25 Thread Steve Crawford
On Thursday 25 August 2005 3:24 pm, David Durham wrote:
 This is kind of a pg-admin newbie question, so apologies in
 advance.

 Anyway, I'd like to issue a command that dumps the results of a
 query to a txt file in comma delimited format.  Does PostgreSQL
 ship with something to do this?  I searched the web, but found what
 appeared to be non-free solutions.

Use heredoc notation to set the format to unaligned, set your field 
separator (and, if necessary, record separator) to whatever you want, 
turn off the footer, and send the output to a file:

psql [any needed connection parameters] --quiet databasename <<EOT
\pset format unaligned
\pset fieldsep ','
\pset footer
\o youroutputfile.csv
select ..
EOT

Or if you prefer everything on the command line:
psql [any needed connection parameters] --quiet --no-align \
  --field-separator ',' --pset footer --output youroutputfile.csv \
  --command 'select ...' databasename

Optionally add \pset tuples-only (first example) or --tuples-only 
(second example) if you do not want the header line with field names 
to be included. Note, if you use tuples only, you don't need to turn 
off the footer separately. You can also use the short versions of all 
the command line switches if you prefer. man psql

Cheers,
Steve
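If a small client-side script is acceptable, the same query-to-CSV pattern also works outside psql. Here is a Python sketch using the standard csv module, with an in-memory SQLite table standing in for the PostgreSQL connection (with a real server you would use a PostgreSQL driver; the table and file names are made up):

```python
import csv
import os
import sqlite3
import tempfile

conn = sqlite3.connect(":memory:")   # stand-in for a real PostgreSQL connection
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [("a", 10), ("b", 5)])

fd, out_path = tempfile.mkstemp(suffix=".csv")
os.close(fd)

cur = conn.execute("SELECT name, qty FROM items ORDER BY name")
with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row: name,qty
    writer.writerows(cur)                                 # one CSV line per row
```

Unlike a raw fieldsep dump, the csv module also quotes any field that happens to contain the separator.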




[ADMIN] pgcrypto regression test: how can I change the port?

2005-08-25 Thread Colin E. Freas


I can't figure out how to change the port of the pgcrypto regression 
tests...  I see how to change it in the regress.sh file, but, I can't 
figure out how to invoke it with that option...  make has never been a 
forte of mine.


I mean...  couldn't you just use ant?  :)

Colin



Re: [ADMIN] pgcrypto regression test: how can I change the port?

2005-08-25 Thread Colin E. Freas


Found the PGPORT variable.

But is there a way to invoke the tests with a different port from the 
command line?


Colin E. Freas wrote:



I can't figure out how to change the port of the pgcrypto regression 
tests...  I see how to change it in the regress.sh file, but, I can't 
figure out how to invoke it with that option...  make has never been a 
forte of mine.


I mean...  couldn't you just use ant?  :)

Colin







Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Michael Fuhr
On Fri, Aug 26, 2005 at 12:50:29AM +0100, Aldor wrote:
 Aug 25 14:53:32 forgehouse-s1 postgres[23721]: [2-1] LOG:  duration: 
 567.559 ms  statement: UPDATE session SET hit_time=now() WHERE 
 md5='7537b74eab488de54d6e0167d1919207';
 Aug 25 14:53:32 forgehouse-s1 postgres[23722]: [2-1] LOG:  duration: 
 565.966 ms  statement: UPDATE session SET hit_time=now() WHERE 
 md5='d84613009a95296fb511c2cb051ad618';
 Aug 25 14:53:33 forgehouse-s1 postgres[23723]: [2-1] LOG:  duration: 
 1192.789 ms  statement: UPDATE session SET hit_time=now() WHERE 
 md5='d84613009a95296fb511c2cb051ad618';
 Aug 25 14:53:53 forgehouse-s1 postgres[23727]: [2-1] LOG:  duration: 
 12159.162 ms  statement: UPDATE session SET hit_time=now() WHERE 
 md5='d84613009a95296fb511c2cb051ad618';
 Aug 25 14:53:54 forgehouse-s1 postgres[23728]: [2-1] LOG:  duration: 
 3283.185 ms  statement: UPDATE session SET hit_time=now() WHERE 
 md5='7537b74eab488de54d6e0167d1919207';
 Aug 25 14:53:57 forgehouse-s1 postgres[23729]: [2-1] LOG:  duration: 
 2116.516 ms  statement: UPDATE session SET hit_time=now() WHERE 
 md5='7537b74eab488de54d6e0167d1919207';
 
 Take a look at the timestamps... they are not really close to each other...

Eh?  The timestamps show that the updates *are* close to each other.
What we don't know is whether this log excerpt shows all statements
that were executed during its time frame.  It might have been grep'ed
from the full log file, or the log_min_duration_statement setting
might be such that only statements lasting more than a certain
amount of time are logged and we're not seeing similar updates that
happened quickly, nor when any of the updates were committed.

Marc, does my hypothesis of updates being blocked by other transactions
sound plausible in your environment?  How complete a log did you
post -- is it everything, or are there other statements that you
omitted or that weren't logged because of the log_min_duration_statement
setting?

-- 
Michael Fuhr



Re: [ADMIN] What is syslog:duration reporting ... ?

2005-08-25 Thread Marc G. Fournier

On Thu, 25 Aug 2005, Michael Fuhr wrote:


On Fri, Aug 26, 2005 at 12:50:29AM +0100, Aldor wrote:

Aug 25 14:53:32 forgehouse-s1 postgres[23721]: [2-1] LOG:  duration:
567.559 ms  statement: UPDATE session SET hit_time=now() WHERE
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:32 forgehouse-s1 postgres[23722]: [2-1] LOG:  duration:
565.966 ms  statement: UPDATE session SET hit_time=now() WHERE
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:33 forgehouse-s1 postgres[23723]: [2-1] LOG:  duration:
1192.789 ms  statement: UPDATE session SET hit_time=now() WHERE
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:53 forgehouse-s1 postgres[23727]: [2-1] LOG:  duration:
12159.162 ms  statement: UPDATE session SET hit_time=now() WHERE
md5='d84613009a95296fb511c2cb051ad618';
Aug 25 14:53:54 forgehouse-s1 postgres[23728]: [2-1] LOG:  duration:
3283.185 ms  statement: UPDATE session SET hit_time=now() WHERE
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:57 forgehouse-s1 postgres[23729]: [2-1] LOG:  duration:
2116.516 ms  statement: UPDATE session SET hit_time=now() WHERE
md5='7537b74eab488de54d6e0167d1919207';

Take a look at the timestamps... they are not really close to each other...


Eh?  The timestamps show that the updates *are* close to each other.
What we don't know is whether this log excerpt shows all statements
that were executed during its time frame.  It might have been grep'ed
from the full log file, or the log_min_duration_statement setting
might be such that only statements lasting more than a certain
amount of time are logged and we're not seeing similar updates that
happened quickly, nor when any of the updates were committed.

Marc, does my hypothesis of updates being blocked by other transactions
sound plausible in your environment?  How complete a log did you
post -- is it everything, or are there other statements that you
omitted or that weren't logged because of the log_min_duration_statement
setting?


I'm working on it from that perspective ... apparently there have been no 
changes to the database, only the application ... the weird thing is that 
the application/database on the development server (much less powerful) 
isn't exhibiting the same problems, so I'm thinking there has to be 
something slightly different between the two, some change they've made 
that they aren't thinking of ...


Going to have to do a code review next, to see if they've maybe thrown in 
a TRANSACTION without realizing it :(


Thanks ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [ADMIN] pgcrypto regression test: how can I change the port?

2005-08-25 Thread Michael Fuhr
On Thu, Aug 25, 2005 at 08:21:11PM -0400, Colin E. Freas wrote:
 Found the PGPORT variable.
 
 But is there a way to invoke the tests with a different port from the 
 command line?

You should be able to set variables like PGPORT, PGUSER, PGHOST,
etc., on make's command line:

make PGPORT=12345 installcheck

Some systems have an env command that runs a command with a
modified environment:

env PGPORT=12345 make installcheck
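(For illustration, the same env trick with a throwaway command in place of make; the variable is visible only to the child process:)

```shell
# env(1) runs a command with extra environment variables set,
# without touching the parent shell's environment.
env PGPORT=12345 sh -c 'echo "PGPORT is $PGPORT"'   # → PGPORT is 12345
```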

-- 
Michael Fuhr

---(end of broadcast)---


[ADMIN] dumping query results to a csv

2005-08-25 Thread David Durham

This is kind of a pg-admin newbie question, so apologies in advance.

Anyway, I'd like to issue a command that dumps the results of a query to 
a txt file in comma delimited format.  Does PostgreSQL ship with 
something to do this?  I searched the web, but found what appeared to be 
non-free solutions.
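(For reference, one zero-cost sketch using psql, which ships with PostgreSQL: unaligned output with a comma separator approximates CSV, though it does not quote embedded commas. The database, query, and file names below are made up.)

```shell
# Hypothetical names throughout; -A = unaligned output, -F',' = comma
# field separator, -t = tuples only (no header/footer rows).
psql -d mydb -A -F',' -t -c "SELECT id, name FROM customers" > customers.csv
```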


Thanks,

Dave

---(end of broadcast)---
TIP 6: explain analyze is your friend


[ADMIN] pg_dumpall problem - duplicated users

2005-08-25 Thread Przemysław Nogaj

Hi,

A few days ago we ran VACUUM on all databases on our server. After that 
we cannot dump databases:


# pg_dumpall -i -U postgres > test.sql
pg_dump: query to obtain list of schemas failed: ERROR:  More than one 
tuple returned by a subselect used as an expression.

pg_dumpall: pg_dump failed on database alibi, exiting


There are duplicated system users, pg_catalogs, pg_temp1, pg_toast and 
public...



template1=# select * from pg_catalog.pg_shadow where usesysid = 1;
 usename  | usesysid | usecreatedb | usesuper | usecatupd |               passwd                | valuntil | useconfig
----------+----------+-------------+----------+-----------+-------------------------------------+----------+-----------
 postgres |        1 | t           | t        | t         |                                     |          |
 postgres |        1 | t           | t        | t         | md5c084502ed11efa9d3d96d29717a5e555 |          |
(2 rows)


Any suggestions on how to solve the problem? There are many databases on 
the server that we can't lose. Applications can connect to the databases - 
all works fine, however we must be able to make dumps...



Thanks for any help,
Greetings

Przemek

---(end of broadcast)---


[ADMIN] Postgres using SSL connections

2005-08-25 Thread Simon de Hartog
Hi,

I want to have Postgres use an SSL certificate for secure access by
clients over the internet. I have a server that runs PostgreSQL and I
have created my own Certificate Authority. I now have a certificate and
corresponding private key in /etc/ssl. This pair is used without
problems by:
- Apache 2
- LDAP server
- Sendmail
- stunnel
- VPN software

I have added all the users these applications run as to a group called
ssl. The private key is owned by root, group ssl, with permissions
rw-r----- (640). When I tell PostgreSQL to use this key and
certificate (via symlinks from server.key and server.crt in the
PostgreSQL data dir), it tells me that the owner and permissions are wrong.

It seems to me that they are only wrong in PostgreSQL's opinion. How
can I use this certificate and key with PostgreSQL (without copying the
key and changing owner and permissions etc., because then the whole idea
of centrally coordinated certificates is gone)?

I checked the archives. There were a lot of comments about the unclear error
messages in previous versions; that has been solved IMHO. Also some
comments and patches to remove these checks, concluded by comments that
they must remain. All in all, it still doesn't work for my situation.
Wouldn't it be nice to have a configuration-file option to disable these
checks? Maybe even configurable locations for these files,
instead of the defaults in the PostgreSQL data dir?
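(For reference, a sketch of what the server's startup check wants from server.key: a file owned by the user the postmaster runs as, with no group or world access. The /tmp path and filename below are illustrative only.)

```shell
# Illustration: a key file that would pass the ownership/permission
# check, i.e. mode 0600 and owned by the current (postmaster) user.
touch /tmp/server.key.demo
chmod 600 /tmp/server.key.demo
stat -c '%a' /tmp/server.key.demo   # → 600
```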

Kind regards and thanks in advance,

Simon de Hartog
-- 
From every point in life, there's a road that leads to where you
  want to go.

E: simon at-sign dehartog point nl
W: http://simon.dehartog.nl/
P: +31-6-15094709
M: simon_net at-sign rootsr point com
I: 8714776
K: http://www.rootsr.com/simon.crt

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] connect to postgres from shell scripts

2005-08-25 Thread Chris Travers

Hemapriya wrote:


Hi,

I would like to know how to connect to postgres from
shell scripts using a password protected user.
 


psql is the simple way.


Is there a way to embed the password in psql login.
It keeps prompting for the passwords.

 


Use the .pgpass file to store the password.
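(A sketch of the .pgpass format with made-up credentials: one host:port:database:username:password line per entry, and the file must be private to the user. A /tmp path is used below only so the demo is self-contained; the real file lives at ~/.pgpass.)

```shell
# Hypothetical credentials; format is host:port:database:username:password.
echo 'localhost:5432:mydb:myuser:s3cret' > /tmp/pgpass.demo
chmod 600 /tmp/pgpass.demo   # keep the password file private to the user
cat /tmp/pgpass.demo
```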

Best Wishes,
Chris Travers
Metatron Technology Consulting


---(end of broadcast)---


[ADMIN] installation on postgresql on different port

2005-08-25 Thread Ramesh PVK

Hi,
 Can anyone please help me? I have a Postgres installation package for Mac  
OS X.
 After installation, can I use my own user for configuring Postgres,  
instead of creating a postgres user?
Also, how can I run it on a different port, instead of the default port  
(5432)?

And lastly, how can I uninstall Postgres? I am using Mac OS X Tiger.
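(On the port question, for reference: the listening port is the "port" setting in postgresql.conf in the data directory, or it can be passed at server start; 5433 below is only an example value.)

```
# postgresql.conf (in the data directory)
port = 5433

# or, equivalently, at server start:
#   pg_ctl -D $PGDATA -o "-p 5433" start
```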

Thank you

Regards
 Vijai

---(end of broadcast)---


[ADMIN] DB restore fails W2k.

2005-08-25 Thread Dan

Hello,



We have upgraded our PostgreSQL database from version
7.3.1 (Cygwin) under Windows 2000 Server to v8.0.3 for Windows.

Now we dumped our database (approx. 6 GB) with
pgAdmin III successfully. The problem is that we can't completely restore
our database from the dump file. The restore utility just stops when reaching a
certain point in the dump file.

Please help us, we can't afford losing our
database with several years of work.

Thanks in advance.



/Dan



Here is some info on the dump file:

; Archive created at Wed Aug 17 21:09:56 2005
; dbname: a_db
; TOC Entries: 43
; Compression: -1
; Dump Version: 1.10-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 4 bytes
; Dumped from database version: 7.3.1
; Dumped by pg_dump version: 8.0.0



Error displayed when restoring:

C:\Program Files\PostgreSQL\8.0\pgAdmin III\pg_restore.exe -i -h localhost
-p 5432 -U postgres -d a_db -a -t pictures -v D:\a_db.backup

pg_restore: connecting to database for restore
pg_restore: restoring data for table "pictures"
pg_restore: [custom archiver] error during file seek: Invalid argument
pg_restore: *** aborted because of error

Process returned exit code 1.


[ADMIN] RPM 8.0.3 for RH7.3, RH7.2 and AS2.1

2005-08-25 Thread Gaetano Mendola

Hi all,
I'm trying to build the RPM for these distributions,
starting from the one available for RH9.0, but I get errors.

On RH7.3, RH7.2 and AS2.1 I get:

# rpmbuild --rebuild postgresql-8.0.3-1PGDG.src.rpm
[...]
checking for perl... /usr/bin/perl
checking for Perl archlibexp... /usr/lib/perl5/5.6.1/i386-linux
checking for Perl privlibexp... /usr/lib/perl5/5.6.1
checking for Perl useshrplib... false
checking for flags to link embedded Perl...   -L/usr/local/lib 
/usr/lib/perl5/5.6.1/i386-linux/auto/DynaLoader/DynaLoader.a 
-L/usr/lib/perl5/5.6.1/i386-linux/CORE -lperl -lnsl -ldl -lm -lc -lcrypt -lutil
checking for python... /usr/bin/python
checking for Python distutils module... yes
checking Python configuration directory...   File string, line 1
from distutils.sysconfig import get_python_lib as f; import os; print 
os.path.join(f(plat_specific=1,standard_lib=1),'config')
^
SyntaxError: invalid syntax

checking how to link an embedded Python application... -L -lpython1.5 -lieee 
-ldl  -lpthread -lm
checking for main in -lbsd... yes
checking for setproctitle in -lutil... no
checking for main in -lm... yes
checking for main in -ldl... yes
checking for main in -lnsl... yes
checking for main in -lsocket... no
checking for main in -lipc... no
checking for main in -lIPC... no
checking for main in -llc... no
checking for main in -ldld... no
checking for main in -lld... no
checking for main in -lcompat... no
checking for main in -lBSD... no
checking for main in -lgen... no
checking for main in -lPW... no
checking for main in -lresolv... yes
checking for library containing getopt_long... none required
checking for main in -lunix... no
checking for library containing crypt... -lcrypt
checking for library containing fdatasync... none required
checking for shmget in -lcygipc... no
checking for readline... yes (-lreadline -ltermcap)
checking for inflate in -lz... yes
checking for library containing com_err... -lcom_err
checking for library containing krb5_encrypt... no
configure: error: could not find function 'krb5_encrypt' required for Kerberos 5
error: Bad exit status from /var/tmp/rpm-tmp.4601 (%build)


RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.4601 (%build)



Does anyone know how to fix these errors, or where I can find the RPMs for 
these three platforms?







---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match