Re: [GENERAL] Postgresql with max_connections=4096

2005-07-28 Thread denis
I picked 260 because 256 is the maximum number of clients in the apache 
configuration file.
My problem is that every apache request needs to make a connection to the 
database, so if all 256 apache processes are working, I need at 
least 256 pg_pool processes.


However, with pgpool installed on each machine with 260 preforked 
clients, the test ended quite well. The postgresql server wasn't loaded 
too much, and errors like "There is already a transaction in progress" 
disappeared (I think that was a problem with the apache persistent 
connections).


I also encountered other problems that I still have to check. I think they are 
caused by my application.


As soon as I have some results, I'll let you know.

Thank you,
Denis


Jeff Trout wrote:



On Jul 27, 2005, at 10:46 AM, [EMAIL PROTECTED] wrote:

I'm now testing with pg_pool installed on each apache frontend with  
260 pg_pool preforked clients in each machine.



Why did you pick 260?

You don't need a 1:1 ratio. That is the point of the pool. Those  
connections are "shared".  Chances are extremely high that all your  
apache clients are not issuing queries at the same exact time so your  
queries end up getting funnelled into those X  connections.


I ran with 32 kids on pg_pool and 350 apache processes and never had a  
problem.
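
For illustration, a minimal pgpool.conf sketch along those lines might look
like this (treat the values and the rest of the file as assumptions to be
adapted to your install):

# pgpool.conf (sketch)
num_init_children = 32    # preforked pool processes, shared by all apache clients
max_pool = 1              # cached backend connections per child
connection_cache = true   # reuse backend connections across client sessions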


--
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-28 Thread denis
With the last test I made, the database is swapping, but very slowly... 
about 100K every 10 minutes, and that doesn't seem to be a problem... in 
the sense that the server doesn't slow down...


Today I'll make other tests and let you know.

Thank you,
Denis


Scott Marlowe wrote:


On Wed, 2005-07-27 at 12:53, Jim C. Nasby wrote:
 


On Wed, Jul 27, 2005 at 04:46:56PM +0200, [EMAIL PROTECTED] wrote:
   

I'm now testing with pg_pool installed on each apache frontend with 260 
pg_pool preforked clients in each machine.


The database seems to work better. At least when it goes to swap it 
doesn't stop working...
 


Wait, are you saying your database server is swapping? You'll never get
any kind of performance if that's the case.
   



IF it swaps out unused code / data and leaves it, that's fine, but if
it's constantly swapping out then yeah, that's a VERY bad thing.

 




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-28 Thread David Fetter
On Wed, Jul 27, 2005 at 08:34:56PM -0700, Joshua D. Drake wrote:
> Josh Berkus wrote:
> >KL-
> >
> >
> >>What I think we need is a C program that dumps directly from MySQL into
> >>PostgreSQL sql.
> >
> >
> >Why C?   PerlDBI or JDBC should be able to do this readily enough.
> 
> Uhmmm isn't that what fetters DBI-Link thing does?

Yikes!  Another unintended use. :P

Anybody interested in contributing to this?

Cheers,
D (who is ashamed about how long it's been since his last commit)
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Welcome to the pgsql-general list!

2005-07-28 Thread Kanaga

Hi all
I get the error 'psql: FATAL:  Ident authentication failed for user'
when I run the command 'psql -d template1 -U wifidog -W -f 
sql/wifidog-postgres-schema.sql'.


If anyone can give me an idea, please do...

With Regards,
kanagavel
- Original Message - 
From: <[EMAIL PROTECTED]>

To: "kanaga" <[EMAIL PROTECTED]>
Sent: Thursday, July 28, 2005 4:48 PM
Subject: Welcome to the pgsql-general list!



Welcome to the pgsql-general mailing list!
Your password at  is

kWbz5F

To leave this mailing list, send the following command in the body
of a message to [EMAIL PROTECTED]:

approve kWbz5F unsubscribe pgsql-general [EMAIL PROTECTED]

This command will work even if your address changes.  For that reason,
among others, it is important that you keep a copy of this message.

To post a message to the mailing list, send it to
 pgsql-general@postgresql.org

If you need help or have questions about the mailing list, please
contact the people who manage the list by sending a message to
 [EMAIL PROTECTED]

You can manage your subscription by visiting the following WWW location:




















---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Welcome to the pgsql-general list!

2005-07-28 Thread Michelle Konzack
Next time please send a separate e-mail to the list.
I do not want to know your password...

Am 2005-07-28 17:14:27, schrieb Kanaga:
> Hi all



> - Original Message - 
> From: <[EMAIL PROTECTED]>
> To: "kanaga" <[EMAIL PROTECTED]>
> Sent: Thursday, July 28, 2005 4:48 PM
> Subject: Welcome to the pgsql-general list!
> 
> >Welcome to the pgsql-general mailing list!
> >Your password at  is
> >
> >kWbz5F


- END OF REPLYED MESSAGE -


Greetings
Michelle

-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)




[GENERAL] Select for update

2005-07-28 Thread Havasvölgyi Ottó

Hi,

Is it normal that when I SELECT a record FOR UPDATE but don't select all 
the fields, the contents of the fields not selected are erased:


create table pidtest(pid integer, szoveg text) without oids;

select pid from pistest where pid>5 for update;

After committing (autocommit), the contents of the szoveg field for the 
locked rows will be erased.


PostgreSQL 8.0.3 WinXP

Regards,
Otto 




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Select for update

2005-07-28 Thread Richard Huxton

Havasvölgyi Ottó wrote:

Hi,

Is it normal that when I select for update a record, but I don't select 
all the fields, that the contents of fields not selected will be deleted:


create table pidtest(pid integer, szoveg text) without oids;

select pid from pistest where pid>5 for update;

After committing (autocommit), the contents of the szoveg field for the 
locked rows will be erased.


Committing what? You're not updating anything here. The "SELECT...FOR 
UPDATE" just locks the rows so you can update them, it doesn't perform 
the update.
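
For example, the usual pattern is to lock and then modify inside one
transaction (a minimal sketch against the pidtest table from the original
post; the new value is just an example):

BEGIN;
SELECT pid FROM pidtest WHERE pid > 5 FOR UPDATE;    -- locks the matching rows
UPDATE pidtest SET szoveg = 'changed' WHERE pid > 5; -- the actual change
COMMIT;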


--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Select for update

2005-07-28 Thread Michael Fuhr
On Thu, Jul 28, 2005 at 01:22:31PM +0200, Havasvölgyi Ottó wrote:
> Is it normal that when I select for update a record, but I don't select all 
> the fields, that the contents of fields not selected will be deleted:
> 
> create table pidtest(pid integer, szoveg text) without oids;
> 
> select pid from pistest where pid>5 for update;
> 
> After committing (autocommit), the contents of the szoveg field for the 
> locked rows will be erased.

Could you provide a complete test case?  Works fine here:

CREATE TABLE pidtest (pid integer, szoveg text) WITHOUT OIDS;

INSERT INTO pidtest (pid, szoveg) VALUES (3, 'three');
INSERT INTO pidtest (pid, szoveg) VALUES (4, 'four');
INSERT INTO pidtest (pid, szoveg) VALUES (5, 'five');
INSERT INTO pidtest (pid, szoveg) VALUES (6, 'six');
INSERT INTO pidtest (pid, szoveg) VALUES (7, 'seven');

SELECT pid FROM pidtest WHERE pid > 5 FOR UPDATE;
 pid 
-----
   6
   7
(2 rows)

SELECT * FROM pidtest;
 pid | szoveg 
-----+--------
   3 | three
   4 | four
   5 | five
   6 | six
   7 | seven
(5 rows)

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] problem inserting with sequence

2005-07-28 Thread germ germ
I have been trying to figure out why I can't insert
into a table, and I think it has something to do with
the sequence.

I am able to insert into the table properly using the
shell, but I am not able to insert using a php script:

INSERT INTO requests (time_stamp, req_num,
recommended_by) VALUES (now(),
nextval('requests_req_num_seq'), 'foo');

Here is the schema I'm using:
DROP SEQUENCE requests_req_num_seq;
DROP TABLE requests;

CREATE SEQUENCE requests_req_num_seq INCREMENT BY 1
START WITH 1000;

CREATE TABLE requests (
time_stamp  timestamp   PRIMARY KEY DEFAULT 'now',  

req_num integer DEFAULT
nextval('acq_requests_req_num_seq') NOT NULL,
recommended_by  varchar(35) NOT NULL
);  
GRANT ALL ON requests TO wwwrun, postgres;

If I reconfigure the schema by removing the sequence
from the table, I am able to insert into the table
using both the shell and php script:
DROP SEQUENCE requests_req_num_seq;
DROP TABLE requests;

CREATE SEQUENCE requests_req_num_seq INCREMENT BY 1
START WITH 1000;

CREATE TABLE requests (
time_stamp  timestamp   PRIMARY KEY DEFAULT 'now',  

recommended_by  varchar(35) NOT NULL
);

INSERT INTO requests (time_stamp, recommended_by)
VALUES (now(), 'foo');

I'm a little stumped on this one. I'm running
PostgreSQL 7.4.8 on i686-pc-linux-gnu, compiled by GCC
gcc (GCC) 3.3.3 (SuSE Linux).

Thanks in advance for any help or advice.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Gnanavel S
On 7/28/05, germ germ <[EMAIL PROTECTED]> wrote:
> I have been trying to figure out why I can't insert into a table and I
> think it has something to do with the sequnce. I am able to use able to
> properly insert into the table using the the shell, but I am not able to
> insert using a php script:
>
> INSERT INTO requests (time_stamp, req_num, recommended_by) VALUES (now(),
> nextval('requests_req_num_seq'), 'foo');
>
> Here is the schema I'm using:
> DROP SEQUENCE requests_req_num_seq;
> DROP TABLE requests;
>
> CREATE SEQUENCE requests_req_num_seq INCREMENT BY 1 START WITH 1000;
>
> CREATE TABLE requests (
> time_stamp  timestamp   PRIMARY KEY DEFAULT 'now',
> req_num integer DEFAULT nextval('acq_requests_req_num_seq') NOT NULL,

Where is this sequence 'acq_requests_req_num_seq' created? You have
created only 'requests_req_num_seq'.

> recommended_by  varchar(35) NOT NULL
> );
> GRANT ALL ON requests TO wwwrun, postgres;
>
> [rest of the original message snipped]

-- 
with regards,
S.Gnanavel
Satyam Computer Services Ltd.


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Michael Fuhr
On Thu, Jul 28, 2005 at 05:32:03AM -0700, germ germ wrote:
> I have been trying to figure out why I can't insert
> into a table and I think it has something to do with
> the sequnce.
> 
> I am able to use able to properly insert into the
> table using the the shell, but I am not able to insert
> using a php script:
> 
> INSERT INTO requests (time_stamp, req_num,
> recommended_by) VALUES (now(),
> nextval('requests_req_num_seq'), 'foo');

What happens when you try the insert?  We need more details than
just "it doesn't work."  If there's an error then it should be in
the postmaster logs, and it should also be available to the PHP
script.

What's different between the situation that works and the one that
doesn't?  Are you connecting as different users?  In the code you
posted I don't see any permissions being granted on the sequence,
so nextval() might be failing with "permission denied for sequence";
another possibility is that the sequence name is wrong (see below).

> Here is the schema I'm using:
> DROP SEQUENCE requests_req_num_seq;
> DROP TABLE requests;
> 
> CREATE SEQUENCE requests_req_num_seq INCREMENT BY 1
> START WITH 1000;
> 
> CREATE TABLE requests (
>   time_stamp  timestamp   PRIMARY KEY DEFAULT 'now',  
> 

A timestamp is a poor choice for a primary key because it's not
unique; aside from that you've defined the default to be a constant --
run "\d requests" in psql and you'll see what I mean.  See the 
following section of the documentation for more info:

http://www.postgresql.org/docs/7.4/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT

>   req_num integer DEFAULT
> nextval('acq_requests_req_num_seq') NOT NULL,

This sequence name doesn't match the name of the sequence you created,
at least not the one you showed.  Is there an acq_requests_req_num_seq
sequence?
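
For what it's worth, a corrected sketch of the schema, following the points
above, might look like this (it assumes you really do want the sequence to
be called requests_req_num_seq and that the web user is wwwrun):

CREATE SEQUENCE requests_req_num_seq INCREMENT BY 1 START WITH 1000;

CREATE TABLE requests (
    req_num         integer PRIMARY KEY DEFAULT nextval('requests_req_num_seq'),
    time_stamp      timestamp NOT NULL DEFAULT now(),
    recommended_by  varchar(35) NOT NULL
);

GRANT ALL ON requests TO wwwrun, postgres;
GRANT ALL ON requests_req_num_seq TO wwwrun, postgres;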

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Real vs Float & 32bit vs 64bit CPU Performance Question

2005-07-28 Thread jbduffy
Hi

I am writing an application that involves a lot of floating point number 
crunching.
My data is stored in tables of the form:

TABLE data (
date_id  INT,
value FLOAT)

I have just noticed in the documentation that the FLOAT data type is stored
in 8 bytes (i.e. 64 bits), as opposed to the REAL data type which uses 4 bytes
(i.e. 32 bits).

All things being equal, does this mean that if I change my data type to REAL
(and lose some precision) I will see a significant performance increase on
my 32 bit Pentium 4?

Or, if I keep my data type as FLOAT will I see a significant performance
increase by changing to a 64 bit CPU?

Regards

John Duffy







---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cursor Issue??

2005-07-28 Thread DracKewl
I found that using the shell works, but the pgAdmin III GUI is the one
that gives me problems.  I've even tried running it on EMS PostgreSQL
Manager 3.  Same results.  Is this normal?

Got a couple more questions regarding cursors.
1. When I try to run this statement (Declare curs1 refcursor;)  I get
an error ERROR:  syntax error at or near "refcursor" at character 23
2. Using Fetch count; also barfs on me.
3.  After I use Fetch First how do I take the contents and insert them
into another table?
  This is my guess.
  FETCH FIRST IN cursor1 INTO partstemp(qty, length, width)

BEGIN;
DECLARE cursor1 CURSOR FOR SELECT * FROM PARTS;
FETCH FIRST IN cursor1 INTO partstemp(qty, length, width);   <-- my
guess, but it doesn't work.
CLOSE cursor1; 
COMMIT;


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Backup and restore from 7.4.1 to latest, crossing platforms... issues?

2005-07-28 Thread Mark Mikulec
Hi Martijn,

That's what I meant. I would do my pg_dumps using:

pg_dump -U postgres -Ft -b dbname > dbname.tar

And restore them via:

pg_restore -O -U postgres -d dbname dbname.tar

That should be cool, yes?

Mark

On 7/27/05, Martijn van Oosterhout wrote:
> Binary backups are not going to work across major releases so just
> installing 8.0 with your old cluster won't work.
>
> You'll need to do a pg_dump of your old setup and then restore on your
> new cluster.
>
> Hope this helps,
>
> On Wed, Jul 27, 2005 at 12:27:41AM -0400, Mark Mikulec wrote:
> > Hi there,
> >
> > This may be a stupid question but I feel I should ask it anyway just to be
> > sure, since I've had problems in the past.
> >
> > I currently have a defunct Debian 3.0 system running PostgreSQL 7.4.1, in
> > which I am going to do an entire blob backup dump of all the databases in
> > hopes to restore them on my new, more secure system running the latest
> > OpenBSD, and PgSQL 8.0.
> >
> > So my question is: Are they any issues that I should be aware up restoring
> > on a more recent version of PgSQL, and/or issues using a completely
> > different operating system altogether? I have some strange pg language
> > mishaps going from 7.2 to 7.4 last backup, although frightening, I overcame
> > them with some fiddling.
> >
> > Thanks in advance,
> > Mark
> >
> > --
> > ___
> > Roses are #FF
> > Violets are #FF,
> > All my base
> > Are belong to you.
>
> --
> Martijn van Oosterhout      http://svana.org/kleptog/
> > Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> > tool for doing 5% of the work and then sitting around waiting for someone
> > else to do the other 95% so you can sue them.

-- 
___
Roses are #FF
Violets are #FF,
All my base
Are belong to you.


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Richard Huxton

germ germ wrote:

I have been trying to figure out why I can't insert
into a table and I think it has something to do with
the sequnce.

I am able to use able to properly insert into the
table using the the shell, but I am not able to insert
using a php script:

INSERT INTO requests (time_stamp, req_num,
recommended_by) VALUES (now(),
nextval('requests_req_num_seq'), 'foo');

Here is the schema I'm using:
DROP SEQUENCE requests_req_num_seq;
DROP TABLE requests;

CREATE SEQUENCE requests_req_num_seq INCREMENT BY 1
START WITH 1000;

CREATE TABLE requests (
time_stamp  timestamp   PRIMARY KEY DEFAULT 'now',  

req_num integer DEFAULT
nextval('acq_requests_req_num_seq') NOT NULL,
recommended_by  varchar(35) NOT NULL
);  
GRANT ALL ON requests TO wwwrun, postgres;


Check your server logs and they should tell you why. Actually, check the 
error code in your PHP and it should tell you why.


My immediate guess - you haven't granted permission to the sequence for 
user "wwwrun".


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] pg_get_serial_sequence and inheritance

2005-07-28 Thread Ezequiel Tolnay

Hi all,

	I've come across a problem with pg_get_serial_sequence(table, 
serial_column) when dealing with inherited tables: if the serial was 
defined in the parent table, the function does not return the sequence 
name when the child table is specified.


	To solve it, and to improve usability for all you MSSQL converts using 
the typical and common-sense single serial identifier and primary key (I 
wonder why you would want two serials in one table?), I wrote a plpgsql 
function that retrieves the sequence for the first serial available in a 
given table (it actually works with any column whose default is 
nextval('whatever'::text)). Following is the code for it; I hope you find 
it useful. Get the last inserted value as 
"SELECT currval(f_get_serial_seq_name(reloid('public', 'mytable', 'r'))) 
AS last_id".


CREATE OR REPLACE FUNCTION "reloid" (p_schemaname name, p_relname name, 
p_relkind name) RETURNS oid AS

$body$
SELECT oid FROM pg_class
  WHERE relkind = $3 AND relname = $2 AND relnamespace = gbt.schemaoid($1);
$body$
LANGUAGE 'sql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;

create or replace function f_get_serial_seq_name(p_reloid oid) returns 
name as $$

DECLARE
  _atdef pg_attrdef;
BEGIN
  FOR _atdef IN SELECT * FROM pg_attrdef WHERE adrelid = p_reloid AND 
adsrc like 'nextval(%_seq''::text)' ORDER BY adnum LIMIT 1 LOOP
RETURN substring(_atdef.adsrc from 'nextval#(''#"%#"''::text#)' for 
'#');

  END LOOP;
  RETURN null;
END; $$ language plpgsql;
comment on function gbt.f_get_serial_seq_name(oid) is
'Similar to pg_get_serial_sequence, but assumes always the sequence for 
the first column
that has a nextval for any sequence named [...]_seq, and regardless of 
having the column been defined as serial or not.'


Please post comments, I think it can be further optimised and enhanced 
quite a bit.


Cheers!

Ezequiel Tolnay

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Upgrading from 7.1

2005-07-28 Thread Chris Travers


Hi All;

Actually, as I remember how I worked through this problem...

I ended up with a simple shell script, something like:

#!/bin/bash
dumpfile="dump.pgsql"
pg_dump -s dbname > $dumpfile
for a in table1 table2 table3 table4 table5 table6
do
  pg_dump -a -t $a dbname >> $dumpfile
done

Hope this helps,
Chris Travers
Metatron Technology Consulting




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Trigger ad mutli database

2005-07-28 Thread Dorine Periolat
hi,

is it possible to create a trigger on a table T1 in
a database B1 that performs an action on a table T2 in
a database B2?

I'm using PostgreSQL 7.2

Thanks
Dorine


Re: [GENERAL] duplicate messages?

2005-07-28 Thread Paul Tillotson

Alvaro Herrera wrote:


On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:

 


Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
Huxton have unsubscribed and resubscribed lately... Funny thing is it
isnt every messages, but maybe half of them. And its not to specific
users, sometimes one of Toms emails will duplicate but sometimes not.
Further more it only seems to be happening on pgsql-general emails and
not on any of the other lists... I also notice this is only happening on
one of my subscribed emails, not the other, which is a little weird...
according to the headers though, this problem is happening further
upstream. 
   



I'd think it would be a problem with your "last mile", because it seems
to be a localized problem.  Maybe an SMTP server dropping connections at
the last moment, or a deliver program crashing after delivering (leading
to re-delivery), or something like that.

You'd have to show us the headers though.

 

For the last few days, I have been getting about half of the messages on 
-general in duplicate as well. I've attached two messages that I received in 
duplicate, for header perusal.


It seems kind of unlikely that this should start happening to all of us 
at once.


Paul

--- Begin Message ---
On Wed, Jul 27, 2005 at 04:46:56PM +0200, [EMAIL PROTECTED] wrote:
> I'm now testing with pg_pool installed on each apache frontend with 260 
> pg_pool preforked clients in each machine.
> 
> The database seems to work better. At least when it goes to swap it 
> doesn't stop working...

Wait, are you saying your database server is swapping? You'll never get
any kind of performance if that's the case.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: explain analyze is your friend


--- End Message ---
--- Begin Message ---
On Wed, Jul 27, 2005 at 04:46:56PM +0200, [EMAIL PROTECTED] wrote:
> I'm now testing with pg_pool installed on each apache frontend with 260 
> pg_pool preforked clients in each machine.
> 
> The database seems to work better. At least when it goes to swap it 
> doesn't stop working...

Wait, are you saying your database server is swapping? You'll never get
any kind of performance if that's the case.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: explain analyze is your friend


--- End Message ---

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Perl DBI and postgres

2005-07-28 Thread david . best
This is probably just a general DBI question but I get the following
errors when I run my code:

DBD::Pg::st fetchrow_array failed: no statement executing at ./snap.pl
line 110.
DBD::Pg::st fetchrow_array failed: no statement executing at ./snap.pl
line 110.

Line 110 is a disconnect statement:

$target_dbh->disconnect();

I have multiple functions which access a target database, take the
information and put it back into a repository.   They all follow this
format: the database handles are passed in, and statement handles are
created and closed at the end of the function.

sub free_space {

 my $err=0;
 my ($repo_dbh, $target_dbh, $snap_id) = @_;
 my $target_sth = $target_dbh->prepare(
   q{ SELECT columns
  FROM the table }) or "Can't prepare statement:
$DBI::errstr";
 $target_sth->execute() or die $DBI::errstr;

  while (my ($data) = $target_sth->fetchrow_array()) {
   eval {
$repo_sth = $repo_dbh->prepare("INSERT into thetable
  (columns)
   VALUES (?, '$data')");
$repo_sth->execute($snap_id) or die $DBI::errstr;
   };
  }
   # check for errors.. If there are any rollback
   if ( $@ ) {
  $err = 1;
   }

 $repo_sth->finish();
 $target_sth->finish();

 return $err;
}

The main function just opens connections to the repository and target
database and closes them at the end with disconnects and that is where the
error is occuring.

Any ideas?


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread germ germ
This is the error: 2005-07-28 08:51:08 ERROR: 
permission denied for sequence requests_req_num_seq

I've tried these grants, but no luck:
GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
postgres;
GRANT ALL ON FUNCTION nextval('requests_req_num_seq')
TO wwwrun, postgres;
GRANT ALL ON FUNCTION nextval() TO wwwrun, postgres;

--- Michael Fuhr <[EMAIL PROTECTED]> wrote:

> On Thu, Jul 28, 2005 at 05:32:03AM -0700, germ germ
> wrote:
> > I have been trying to figure out why I can't
> insert
> > into a table and I think it has something to do
> with
> > the sequnce.
> > 
> > I am able to use able to properly insert into the
> > table using the the shell, but I am not able to
> insert
> > using a php script:
> > 
> > INSERT INTO requests (time_stamp, req_num,
> > recommended_by) VALUES (now(),
> > nextval('requests_req_num_seq'), 'foo');
> 
> What happens when you try the insert?  We need more
> details than
> just "it doesn't work."  If there's an error then it
> should be in
> the postmaster logs, and it should also be available
> to the PHP
> script.
> 
> What's different between the situation that works
> and the one that
> doesn't?  Are you connecting as different users?  In
> the code you
> posted I don't see any permissions being granted on
> the sequence,
> so nextval() might be failing with "permission
> denied for sequence";
> another possibility is that the sequence name is
> wrong (see below).
> 
> > Here is the schema I'm using:
> > DROP SEQUENCE requests_req_num_seq;
> > DROP TABLE requests;
> > 
> > CREATE SEQUENCE requests_req_num_seq INCREMENT BY
> 1
> > START WITH 1000;
> > 
> > CREATE TABLE requests (
> > time_stamp  timestamp   PRIMARY KEY DEFAULT 'now',
>   
> 
> A timestamp is a poor choice for a primary key
> because it's not
> unique; aside from that you've defined the default
> to be a constant --
> run "\d requests" in psql and you'll see what I
> mean.  See the 
> following section of the documentation for more
> info:
> 
>
http://www.postgresql.org/docs/7.4/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT
> 
> > req_num integer DEFAULT
> > nextval('acq_requests_req_num_seq') NOT NULL,
> 
> This sequence name doesn't match the name of the
> sequence you created,
> at least not the one you showed.  Is there an
> acq_requests_req_num_seq
> sequence?
> 
> -- 
> Michael Fuhr
> http://www.fuhr.org/~mfuhr/
> 






---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_get_serial_sequence and table inheritence

2005-07-28 Thread Ezequiel Tolnay

Timothy Perrigo wrote:

Is there anything similar to pg_get_serial_sequence that will work  with 
tables that have an inherited serial column?  (...)


Sorry, I should have searched before posting; instead I started a new thread. 
Please refer to my solution posted today as "pg_get_serial_sequence and 
inheritance" (remarkably similar to yours, although mine was less 
accurate ;-)


Cheers,

Ezequiel

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Backup and restore from 7.4.1 to latest, crossing

2005-07-28 Thread Kilian Hagemann
Mark,

> I currently have a defunct Debian 3.0 system running PostgreSQL 7.4.1, in
> which I am going to do an entire blob backup dump of all the databases in
> hopes to restore them on my new, more secure system running the latest
> OpenBSD, and PgSQL 8.0.
>
> So my question is: Are they any issues that I should be aware up restoring
> on a more recent version of PgSQL, and/or issues using a completely
> different operating system altogether? I have some strange pg language
> mishaps going from 7.2 to 7.4 last backup, although frightening, I overcame
> them with some fiddling.

I just upgraded from 7.4.8 (same format as 7.4.1) to 8.0.3 using the
pg_dump/pg_restore method described in the documentation, and I did not
encounter any issues. Although my system did not change, I don't think
you'll encounter any issues due to a different operating system either: the
dump is just a single regular file containing all the information necessary
to restore your database(s), and its interpretation is up to pg_restore, not
your OS.
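
Roughly, the command sequence is something like this (database names and
flags here are only an example; adjust them to your own setup):

pg_dump -Fc -U postgres olddb > olddb.dump      # ideally run with the newer (8.0) pg_dump
pg_restore -U postgres -d newdb olddb.dump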

Hope this helps,

Kilian

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Trigger ad mutli database

2005-07-28 Thread Jaime Casanova
On 7/28/05, Dorine Periolat <[EMAIL PROTECTED]> wrote:
> hi,
> 
> is it possible to make a trigger on Table T1 on
> a database B1 that makes an action on a Table T2 on
> a database B2 ???
> 
> I'm using postgre 7.2
> 
> Thanks
> Dorine 

No, not unless you are using contrib/dblink

what about upgrading and using schemas (it's available since 7.4, IIRC)?


-- 
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Michael Fuhr
On Thu, Jul 28, 2005 at 07:04:54AM -0700, germ germ wrote:
> This is the error: 2005-07-28 08:51:08 ERROR: 
> permission denied for sequence requests_req_num_seq
> 
> I've tried these grants, but no luck:
> GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> postgres;
> GRANT ALL ON FUNCTION nextval('requests_req_num_seq')
> TO wwwrun, postgres;
> GRANT ALL ON FUNCTION nextval() TO wwwrun, postgres;

It's not the nextval() function itself that needs permission, it's
the sequence.  Try this:

GRANT UPDATE, SELECT ON requests_req_num_seq TO wwwrun;

Granting update on a sequence allows the use of nextval() and
setval() on that sequence; granting select allows currval().

http://www.postgresql.org/docs/7.4/static/sql-grant.html

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Douglas McNaught
germ germ <[EMAIL PROTECTED]> writes:

> This is the error: 2005-07-28 08:51:08 ERROR: 
> permission denied for sequence requests_req_num_seq
>
> I've tried these grants, but no luck:
> GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> postgres;
> GRANT ALL ON FUNCTION nextval('requests_req_num_seq')
> TO wwwrun, postgres;
> GRANT ALL ON FUNCTION nextval() TO wwwrun, postgres;

You need to grant permission on the sequence itself.

GRANT ALL ON requests_req_num_seq TO wwwrun;

I think that's the right syntax--see the docs...

-Doug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Tom Lane
germ germ <[EMAIL PROTECTED]> writes:
> This is the error: 2005-07-28 08:51:08 ERROR: 
> permission denied for sequence requests_req_num_seq

> I've tried these grants, but no luck:
> GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> postgres;
> GRANT ALL ON FUNCTION nextval('requests_req_num_seq')
> TO wwwrun, postgres;
> GRANT ALL ON FUNCTION nextval() TO wwwrun, postgres;

The missing permission is for the sequence, not the function.

grant all on requests_req_num_seq to ...

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread germ germ
Thank you all for your help. I got it working, once.

Right after I made the change and tested it,
everything worked perfectly. Then I had a freak nose
bleed (this has to be only my 3rd nose bleed ever). I
frantically started closing windows and shells, and
while in my frantic state I deleted the php script I
had spent about 30+ hours working on.  I don't back
anything up on my test server, so it's gone forever
now. My stupidity for not backing up the test server,
so my loss- a lesson learned and a mistake I will not
make again.

but it did work one time.

thank you again for all the help.

--- Michael Fuhr <[EMAIL PROTECTED]> wrote:

> On Thu, Jul 28, 2005 at 07:04:54AM -0700, germ germ
> wrote:
> > This is the error: 2005-07-28 08:51:08 ERROR: 
> > permission denied for sequence
> requests_req_num_seq
> > 
> > I've tried these grants, but no luck:
> > GRANT ALL ON FUNCTION nextval(integer) TO wwwrun,
> > postgres;
> > GRANT ALL ON FUNCTION
> nextval('requests_req_num_seq')
> > TO wwwrun, postgres;
> > GRANT ALL ON FUNCTION nextval() TO wwwrun,
> postgres;
> 
> It's not the nextval() function itself that needs
> permission, it's
> the sequence.  Try this:
> 
> GRANT UPDATE, SELECT ON requests_req_num_seq TO
> wwwrun;
> 
> Granting update on a sequence allows the use of
> nextval() and
> setval() on that sequence; granting select allows
> currval().
> 
>
http://www.postgresql.org/docs/7.4/static/sql-grant.html
> 
> -- 
> Michael Fuhr
> http://www.fuhr.org/~mfuhr/
> 


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] problem inserting with sequence

2005-07-28 Thread Bricklen Anderson
germ germ wrote:
> Thank you all for your help. I got it working, once.
> 
> Right after I made the change and tested it,
> everything worked perfect. Then I had a freak nose
> bleed- (This has to be my 3rd nose bleed in my life
> ever).  I frantically starting closing windows and
> shells. While in my frantic state, I deleted my php
> script I had spent about 30+ hours working on.  I
> don't back anything up on my test server so it's gone
> forever now. My stupidity for not backing up the test
> server, so my loss- lesson learned and will not make
> again.

Well? Inquiring minds want to know... Did it work?
Did closing windows and shells stop your nosebleed?

:)

-- 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Phil Endecott

Dear Postgresql experts,

For some time I had been trying to work out why every connection to my 
database resulted in several megabytes of data being written to the 
disk, however trivial the query.  I think I've found the culprit: 
global/pgstat.stat.  This is with 7.4.7.


This is for a web application which uses a new connection for each CGI 
request.  The server doesn't have a particularly high disk bandwidth and 
this mysterious activity had been the bottleneck for some time.  The 
system is a little unusual as one of the databases has tens of thousands 
of tables (though I saw these writes whichever database I connected to).


Looking at the output of vmstat I could see about 2.7Mbytes being 
written up to about 5 seconds after the query was processed.  I was 
scratching my head about this for a long time, but today I noticed that 
this size was just a little larger than my global/pgstat.stat file.  So 
I turned off stats_start_collector and stats_row_level and the writes 
vanished.  Turning them back on, the pgstat.stat file is much smaller 
(10k) and the writes are invisible against the background noise.


So can I expect this file to grow again?  I think I need the stats, 
though I'm not entirely sure about that.


Was the entire file re-written, even when the only query I've run is 
"select 1"?  Is this necessary?


Any comments or suggestions gratefully received.

--Phil.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Raise Notice question

2005-07-28 Thread Tony Caduto
Does anyone know how RAISE NOTICE messages get sent back to the client 
when a function is executed?


Thanks,

Tony




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] Pg_autovaccum.

2005-07-28 Thread Fernando Lujan
Hi everyone,

What is the best configuration for pg_autovacuum?

Are there benchmarks showing the improvement, before and after the
service is started?

Thanks in advance.

-- 
Fernando Lujan

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL]

2005-07-28 Thread Bryan Field-Elliot




subscribe





Re: [GENERAL] Raise Notice question

2005-07-28 Thread Terry Lee Tucker
From the 7.4.6 Documentation:
[Begin Quote]
27.9. Notice Processing


Notice and warning messages generated by the server are not returned by the 
query execution functions, since they do not imply failure of the query. 
Instead they are passed to a notice handling function, and execution 
continues normally after the handler returns. The default notice handling 
function prints the message on stderr, but the application can override this 
behavior by supplying its own handling function.


For historical reasons, there are two levels of notice handling, called the 
notice receiver and notice processor. The default behavior is for the notice 
receiver to format the notice and pass a string to the notice processor for 
printing. However, an application that chooses to provide its own notice 
receiver will typically ignore the notice processor layer and just do all the 
work in the notice receiver.
[End Quote]

There is more in the documentation that tells you how to write your notice 
receiver function.
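
As a quick illustration of the server side (a minimal sketch):

CREATE OR REPLACE FUNCTION notice_demo() RETURNS integer AS '
BEGIN
    RAISE NOTICE ''about to return %'', 42;
    RETURN 42;
END;
' LANGUAGE plpgsql;

SELECT notice_demo();
-- psql, which uses libpq's default notice processor, prints on stderr:
-- NOTICE:  about to return 42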

On Thursday 28 July 2005 11:48 am, Tony Caduto saith:
> Does anyone know how  Raise Notice Messages get sent back to the client
> when a function is executed?
>
> Thanks,
>
> Tony
>
>
>
>
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
>
>http://archives.postgresql.org


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] How to optimize select count(*)..group by?

2005-07-28 Thread Bryan Field-Elliot




We have this simple query:

select status, count(*) from customer group by status;

There is already a btree index on status, but, the customer table is huge, and this query must be executed very frequently... an "explain" on this query shows that it is quite costly (and we notice it runs slowly)...

Can someone recommend the best technique to optimize this? We can create new indices, we can re-write this query.. But we'd rather not add new tables or columns if possible (not just to solve this problem).

Thank you,

Bryan





Re: [GENERAL] How to optimize select count(*)..group by?

2005-07-28 Thread David Fetter
On Thu, Jul 28, 2005 at 09:19:49AM -0700, Bryan Field-Elliot wrote:
> We have this simple query:
> 
> select status, count(*) from customer group by status;
> 
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an
> "explain" on this query shows that it is quite costly (and we notice
> it runs slowly)...
> 
> Can someone recommend the best technique to optimize this? We can
> create new indices, we can re-write this query.. But we'd rather not
> add new tables or columns if possible (not just to solve this
> problem).

You're pretty much stuck with either writing triggers that modify a
cache table or having your performance the way it is now.

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Cursor Issue??

2005-07-28 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-27 12:21:34 -0700:
> I found using the shell works but using the phAdminIII GUI is the one
> that gives me problems.  I've even tried running it on EMS PostgreSQL
> Manager 3.  Same results.  Is this normal?
> 
> Got a couple more questions regarding cursors.
> 1. When I try to run this statement (Declare curs1 refcursor;)  I get
> an error ERROR:  syntax error at or near "refcursor" at character 23

Have you seen the answer to this I sent you in my reply to your
offlist email? If not, reread the mail. If yes, and you still don't
see the problem: which part of

DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]
CURSOR [ { WITH | WITHOUT } HOLD ] FOR query
[ FOR { READ ONLY | UPDATE [ OF column [, ...] ] } ]

does allow for "DECLARE curs1 refcursor"? The answer is simple:
none. You're trying to use a plpgsql declaration outside plpgsql.
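
Inside plpgsql it would look roughly like this (only a sketch, and it assumes
the parts/partstemp columns from your earlier post):

CREATE OR REPLACE FUNCTION copy_first_part() RETURNS void AS '
DECLARE
    curs1 refcursor;
    r     record;
BEGIN
    OPEN curs1 FOR SELECT qty, length, width FROM parts;
    FETCH curs1 INTO r;        -- first row, since the cursor was just opened
    IF FOUND THEN
        INSERT INTO partstemp (qty, length, width)
            VALUES (r.qty, r.length, r.width);
    END IF;
    CLOSE curs1;
    RETURN;
END;
' LANGUAGE plpgsql;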

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] How to optimize select count(*)..group by?

2005-07-28 Thread Richard Huxton

Bryan Field-Elliot wrote:

We have this simple query:

select status, count(*) from customer group by status;

There is already a btree index on status, but, the customer table is 
huge, and this query must be executed very frequently... an "explain" on 
this query shows that it is quite costly (and we notice it runs slowly)...


Can someone recommend the best technique to optimize this? We can create 
new indices, we can re-write this query.. But we'd rather not add new 
tables or columns if possible (not just to solve this problem).


There's no other accurate solution. While PG's MVCC system means you 
need less locking, it makes it tricky to determine whether a row is 
visible without actually checking on disk.


Simplest solution - have a "customer_status_summary" table and add a 
trigger to the "customer" table to keep it up to date. That way, you'll 
have extremely cheap counts. Make sure you understand what locking 
you'll need with your particular solution.
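
A rough sketch of that idea (untested; it assumes the status column is text
and that the summary table is pre-populated with one row per possible
status value):

CREATE TABLE customer_status_summary (
    status  text PRIMARY KEY,
    n       bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION customer_status_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE customer_status_summary SET n = n + 1 WHERE status = NEW.status;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE customer_status_summary SET n = n - 1 WHERE status = OLD.status;
    ELSIF NEW.status <> OLD.status THEN  -- an UPDATE that moves a row between statuses
        UPDATE customer_status_summary SET n = n - 1 WHERE status = OLD.status;
        UPDATE customer_status_summary SET n = n + 1 WHERE status = NEW.status;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER customer_status_count
    AFTER INSERT OR UPDATE OR DELETE ON customer
    FOR EACH ROW EXECUTE PROCEDURE customer_status_count();

A "SELECT status, n FROM customer_status_summary" then replaces the expensive
count, at the price of one extra update per change to "customer".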


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Perl DBI and postgres

2005-07-28 Thread SCassidy

Hi,

Well, since we don't seem to have the actual code (you seem to have
"paraphrased" it), I can't tell for sure.

However,  here (assuming this is sort of the way you actually are using
it):
  my $target_sth = $target_dbh->prepare(
   q{ SELECT columns
  FROM the table }) or "Can't prepare statement:
$DBI::errstr";

You don't use "die" or "warn" or anything, so you don't know if the prepare
actually worked or not;  you just have a literal, instead.

It is complaining about the fetchrow_array, so it might be there that the
problem is occurring, since you can't fetch something that never executed
correctly.

In this statement:
$repo_sth = $repo_dbh->prepare("INSERT into thetable
  (columns)
   VALUES (?, '$data')");

that does not appear to be the real INSERT statement, so who knows if it is
right.  Also, you should probably have a "my" on the $repo_sth (declared at
the top of the subroutine somewhere, since you use it outside the while).

In this statement:
  while (my ($data) = $target_sth->fetchrow_array()) {

fetchrow_array returns an array, not a single value.  I tried doing it this
way on a multi-column SELECT, and you just get the last column in $data
doing it this way - probably not what you want.

I am a little confused why you do a prepare and execute inside a loop using
placeholders, instead of just embedding the data (like you do with $data,
but not $snap_id).  You usually either use placeholders, or build the whole
statement, not both.   If you did the prepare outside the loop, and used
placeholders for both VALUES, you would not need to quote it first.  Also,
if you do embed it in the statement, unless you are absolutely sure that
the 2nd value inserted will never have any quotes or anything in it, you
would probably be wise to call $dbh->quote to properly quote it before you
do the INSERT.

E.g.:
  my $data_q=$repo_dbh->quote($data);
 $repo_sth = $repo_dbh->prepare("INSERT into thetable
  (column1, column2)
   VALUES (?, $data_q)");

If you don't have RAISE_ERROR turned on, you wouldn't have to do the eval,
you could just check the status after the prepare and execute statements.
E.g., something like:
  if (!($sth=$dbh->prepare($statement))) {
warn("bad prepare for stmt $statement, error: $DBI::errstr");
$err++;
  }
  if (!($rc=$sth->execute())) {
warn("can't execute statement:\n$statement\nDB error: $DBI::errstr");
$err++;
  }

Also, "finish" is normally used only when you need to make sure no data is
left over after a SELECT (if you only fetched part of a result set), so you
don't really need it on the $repo_sth handle used for the INSERT.  Probably
not on the other sth, either.  From the perldoc on DBI:

   Adding calls to "finish" after each fetch loop is a common
mistake, don't
   do it, it can mask genuine problems like uncaught fetch errors.

I assume that any real "rollback" is done in the calling routine if the
returned value $err is true.  Although, shouldn't you be checking $@  for
errors after each eval?  The way you have it, it will continue through the
while loop, I believe.  I don't know if that is what you want to do.

I could be missing something, too - just took a quick look.

Hope this helps,

Susan





   

   

Re: [GENERAL] How to optimize select count(*)..group by?

2005-07-28 Thread Bricklen Anderson
David Fetter wrote:
> On Thu, Jul 28, 2005 at 09:19:49AM -0700, Bryan Field-Elliot wrote:
> 
>>We have this simple query:
>>
>>select status, count(*) from customer group by status;
>>
>>There is already a btree index on status, but, the customer table is
>>huge, and this query must be executed very frequently... an
>>"explain" on this query shows that it is quite costly (and we notice
>>it runs slowly)...
>>
>>Can someone recommend the best technique to optimize this? We can
>>create new indices, we can re-write this query.. But we'd rather not
>>add new tables or columns if possible (not just to solve this
>>problem).
> 
> 
> You're pretty much stuck with either writing triggers that modify a
> cache table or having your performance the way it is now.
> 
> Cheers,
> D
How about the new bitmap index? I wonder if that'll result in better performance
for that type of query?

-- 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Fwd: [GENERAL] Trigger ad mutli database

2005-07-28 Thread Jaime Casanova
On 7/28/05, Dorine Periolat <[EMAIL PROTECTED]> wrote:
>

Always send a copy to the list... not only because it's better for getting
more answers, but because I actually check my own mail very little... I just
check the lists... ;)

> What is contrib/dblink ?
>
It is a contrib module for making queries that use tables from more
than one database... if you install from source you can look in
contrib/dblink; there are instructions there for installing it...

I don't use it... so I can't help you with the installation or the
use... someone on the list can maybe answer you...

> I can't upgrade, the company wants the solution to stay on 7.2
>
that's bad... we are currently at 8.0.3 and 8.1 is in development... if, in
the future, you come here having hit one of the bugs already found in 7.2,
claiming something really bad happened to your data, it will be
entirely _your fault_ and the answers may be directed that way...


--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to optimize select count(*)..group by?

2005-07-28 Thread Scott Marlowe
On Thu, 2005-07-28 at 11:19, Bryan Field-Elliot wrote:
> We have this simple query:
> 
> select status, count(*) from customer group by status;
> 
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an "explain"
> on this query shows that it is quite costly (and we notice it runs
> slowly)...
> 
> Can someone recommend the best technique to optimize this? We can
> create new indices, we can re-write this query.. But we'd rather not
> add new tables or columns if possible (not just to solve this
> problem).

With a query of the form:

select field,count([field|*]) from table WHERE somefield =
somecondition;

the query planner is going to have to scan every single row returned by
that where clause.  There's no shortcut, because the visibility rules of
MVCC means you have to look at every tuple IN THE TABLE, not in the
index (it's the way postgresql is built, and it isn't likely to change
soon, because putting the visibility information in indexes is
expensive, and would result in VERY slow updates and very large
indexes).

So, the best optimization is to use a selective where clause.

If you run the query with a where clause of something like:

where processdate between '01 july 2005' and '07 july 2005'

then you should get better performance.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How to optimize select count(*)..group by?

2005-07-28 Thread Greg Stark
Bryan Field-Elliot <[EMAIL PROTECTED]> writes:

> We have this simple query:
> 
> select status, count(*) from customer group by status;
> 
> There is already a btree index on status, but, the customer table is
> huge, and this query must be executed very frequently... an "explain" on
> this query shows that it is quite costly (and we notice it runs
> slowly)...
> 
> Can someone recommend the best technique to optimize this? We can create
> new indices, we can re-write this query.. But we'd rather not add new
> tables or columns if possible (not just to solve this problem).

Without creating a new table you have three options, none of which will be
very fast:

1) If the entire table fits in ram then you could try setting random_page_cost
   close enough to 1 to cause the index to be used.

2) If it doesn't but there are a reasonably small number of distinct values of
   status you would hope to see a sequential scan and a hash aggregate being
   used. I would expect this to be what you would see with the default
   configuration. If not you might have to increase work_mem (or sort_mem
   depending on your version I think).

3) If you have a large table with a large set of status values then Postgres
   may be compelled to resort the entire table. In which case you should
   experiment with work_mem/sort_mem to get the largest value you can without
   inducing swapping. You could also experiment with putting pgsql_tmp on
   separate spindles.

None of these will be fast enough for OLTP access like a web page. You would
be best off performing this query periodically and stuffing the results in a
table for quick access.
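
For instance, something as simple as this, run from cron every few minutes,
would do (the table name and schedule are only an example):

BEGIN;
DELETE FROM customer_status_counts;
INSERT INTO customer_status_counts
    SELECT status, count(*) FROM customer GROUP BY status;
COMMIT;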



-- 
greg


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] UNSUBSCRIBE

2005-07-28 Thread wayne schlemitz
I want to unsubscribe and have tried from the website and
from a web address from this group, to no avail. Please
get me off the list.

Wayne




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Can't connect to posgresql server by using pgadmin

2005-07-28 Thread Chris Guo








Dear all,

 

I just installed PostgreSQL 8.0.3 on Fedora Core 3 Linux,
and I also installed pgadmin 1.0.2 on my computer. But I am having trouble
connecting to postgresql using pgadmin; it shows me the error message "An
error occurred: Error connecting to the server: could not connect to server:
Connection refused (0x274D/10061) Is the server running on host
"192.168.3.118" and accepting TCP/IP connections on port
5432?"

 

I am pretty sure that the postgresql server is running, but
I figure it's because the command I use to start the postgresql server is
not quite right and port 5432 is not open and listening for requests, since I
didn't find port 5432 after using nmap to check the open ports on the linux
server. 
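
For reference, I think the relevant settings are roughly these (the values
below are only my guess and would have to match my network):

# postgresql.conf (8.0)
listen_addresses = '*'        # or a specific address such as 192.168.3.118

# pg_hba.conf -- let the pgadmin machine in; the network and auth method are examples
host    all    all    192.168.3.0/24    md5

(with a postmaster restart after changing them)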

 

Does anybody have an idea how to solve this problem? 

 

Thanks in advance.

 

Chris 

 








Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Tom Lane
Phil Endecott <[EMAIL PROTECTED]> writes:
> For some time I had been trying to work out why every connection to my 
> database resulted in several megabytes of data being written to the 
> disk, however trivial the query.  I think I've found the culprit: 
> global/pgstat.stat.  This is with 7.4.7.

> This is for a web application which uses a new connection for each CGI 
> request.  The server doesn't have a particularly high disk bandwidth and 
> this mysterious activity had been the bottleneck for some time.  The 
> system is a little unusual as one of the databases has tens of thousands 
> of tables (though I saw these writes whichever database I connected to).

Well, there's the problem --- the stats subsystem is designed in a way
that makes it rewrite its entire stats collection on every update.
That's clearly not going to scale well to a large number of tables.
Offhand I don't see an easy solution ... Jan, any ideas?

> So can I expect this file to grow again?  I think I need the stats, 
> though I'm not entirely sure about that.

If you're not using autovacuum then you don't need stats_row_level.
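
For reference, a minimal sketch of how to see what the collector is being
asked to track (these are the 7.4/8.0-era parameter names); the settings
live in postgresql.conf, and turning the collector off entirely via
stats_start_collector needs a postmaster restart:

    SHOW stats_start_collector;   -- master switch for the stats collector
    SHOW stats_command_string;
    SHOW stats_block_level;
    SHOW stats_row_level;         -- the per-row stats that autovacuum relies on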

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Korry Douglas's PostgreSQL 2ed out yet?

2005-07-28 Thread Jerry LeVan

Has anyone had the chance to review this book yet?

Jerry

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Jan Wieck

On 7/28/2005 2:03 PM, Tom Lane wrote:

> Phil Endecott <[EMAIL PROTECTED]> writes:
>> For some time I had been trying to work out why every connection to my
>> database resulted in several megabytes of data being written to the
>> disk, however trivial the query.  I think I've found the culprit:
>> global/pgstat.stat.  This is with 7.4.7.
>
>> This is for a web application which uses a new connection for each CGI
>> request.  The server doesn't have a particularly high disk bandwidth and
>> this mysterious activity had been the bottleneck for some time.  The
>> system is a little unusual as one of the databases has tens of thousands
>> of tables (though I saw these writes whichever database I connected to).
>
> Well, there's the problem --- the stats subsystem is designed in a way
> that makes it rewrite its entire stats collection on every update.
> That's clearly not going to scale well to a large number of tables.
> Offhand I don't see an easy solution ... Jan, any ideas?

PostgreSQL itself doesn't work too well with tens of thousands of 
tables. I don't see much of an easy solution either. The best workaround 
I can offer is to move that horror-DB to a separate postmaster with 
stats disabled altogether.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] UNSUBSCRIBE

2005-07-28 Thread Richard Huxton

wayne schlemitz wrote:

> I want to unsubscribe and have tried from the website and
> from a web address from this group to no avail. Please
> get me off the list.


What didn't work on the website?

You can go here
  http://www.postgresql.org/community/lists/subscribe
and choose "unsubscribe" and the list name

Alternatively go to the list archive and click the subscribe/unsubscribe 
link

  http://archives.postgresql.org/pgsql-general/

That will display the management page. Click "Sign In" top-left, then 
get your password emailed to you. Once you have your password, you can 
sign in and control all your list settings.


HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Tom Lane
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 7/28/2005 2:03 PM, Tom Lane wrote:
>> Well, there's the problem --- the stats subsystem is designed in a way
>> that makes it rewrite its entire stats collection on every update.
>> That's clearly not going to scale well to a large number of tables.
>> Offhand I don't see an easy solution ... Jan, any ideas?

> PostgreSQL itself doesn't work too well with tens of thousands of 
> tables.

Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
that doesn't choke with tens of thousands of entries in a directory.
I think we should put down a TODO item to see if we can improve the
stats subsystem's performance in such cases.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Jan Wieck

On 7/28/2005 2:28 PM, Tom Lane wrote:


> Jan Wieck <[EMAIL PROTECTED]> writes:
>> On 7/28/2005 2:03 PM, Tom Lane wrote:
>>> Well, there's the problem --- the stats subsystem is designed in a way
>>> that makes it rewrite its entire stats collection on every update.
>>> That's clearly not going to scale well to a large number of tables.
>>> Offhand I don't see an easy solution ... Jan, any ideas?
>
>> PostgreSQL itself doesn't work too well with tens of thousands of
>> tables.
>
> Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
> that doesn't choke with tens of thousands of entries in a directory.
> I think we should put down a TODO item to see if we can improve the
> stats subsystem's performance in such cases.


Okay, I should be more specific. The problem with tens of thousands of 
tables does not exist just because of them being there. It will emerge 
if all those tables are actually used because it will mean that you'd 
need all the pg_class and pg_attribute rows cached and also your vfd 
cache will constantly rotate.


Then again, the stats file is only written. There is nothing that 
actually forces the blocks out. On a busy system, one individual stats 
file will be created, written to, renamed, live for 500ms and be thrown 
away by the next stat files rename operation. I would assume that with a 
decent filesystem and appropriate OS buffers, none of the data blocks of 
most stat files even hit the disk. I must be missing something.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Tom Lane
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
>> Jan Wieck <[EMAIL PROTECTED]> writes:
>>> PostgreSQL itself doesn't work too well with tens of thousands of 
>>> tables.
>> 
>> Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
>> that doesn't choke with tens of thousands of entries in a directory.
>> I think we should put down a TODO item to see if we can improve the
>> stats subsystem's performance in such cases.

> Okay, I should be more specific. The problem with tens of thousands of 
> tables does not exist just because of them being there. It will emerge 
> if all those tables are actually used because it will mean that you'd 
> need all the pg_class and pg_attribute rows cached and also your vfd 
> cache will constantly rotate.

Sure, if you have a single backend touching all tables you'll have some
issues in that backend.  But the stats problem is that it tracks every
table anyone has ever touched, which makes the issue much more pressing.

> Then again, the stats file is only written. There is nothing that 
> actually forces the blocks out. On a busy system, one individual stats 
> file will be created, written to, renamed, live for 500ms and be thrown 
> away by the next stat files rename operation. I would assume that with a 
> decent filesystem and appropriate OS buffers, none of the data blocks of 
> most stat files even hit the disk. I must be missing something.

This is possibly true --- Phil, do you see actual disk I/O happening
from the stats writes, or is it just kernel calls?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] GUID for postgreSQL

2005-07-28 Thread Chris Browne
[EMAIL PROTECTED] (Chris Travers) writes:
>>At least on Windows, the GUID is derived in part from the computers
>>primary MAC address. No, it's not a guarantee, but it's pretty unlikely
>>:-)

> The danger is not that the MAC address will be duplicated, but that
> other factors will lead to an MD5 collision.

What factors?  

The 'random' portion comes _in addition to_ location information and a
timestamp, so that for there to be a collision, you need to be
generating thousands of GUIDs *per millisecond* on a given host.

> Unless you can show me that there is a 1:1 correspondence of all
> possible unique factors going into the GUID generation and the output,
> then I will say it is still no guarantee.

Read the RFC (4122).  It shows how it works.

> So you could use the Mac address of the machine, I guess, if you
> wanted to

That is one option; section 4.3 of the RFC suggests an alternative
that is also likely to work.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "acm.org")
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  "I will not tell my Legions of Terror
"And he must  be taken alive!" The command will be:  ``And try to take
him alive if it is reasonably practical.''"


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Scott Marlowe
On Thu, 2005-07-28 at 13:40, Jan Wieck wrote:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
> 
> > Jan Wieck <[EMAIL PROTECTED]> writes:
> >> On 7/28/2005 2:03 PM, Tom Lane wrote:
> >>> Well, there's the problem --- the stats subsystem is designed in a way
> >>> that makes it rewrite its entire stats collection on every update.
> >>> That's clearly not going to scale well to a large number of tables.
> >>> Offhand I don't see an easy solution ... Jan, any ideas?
> > 
> >> PostgreSQL itself doesn't work too well with tens of thousands of 
> >> tables.
> > 
> > Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
> > that doesn't choke with tens of thousands of entries in a directory.
> > I think we should put down a TODO item to see if we can improve the
> > stats subsystem's performance in such cases.
> 
> Okay, I should be more specific. The problem with tens of thousands of 
> tables does not exist just because of them being there. It will emerge 
> if all those tables are actually used because it will mean that you'd 
> need all the pg_class and pg_attribute rows cached and also your vfd 
> cache will constantly rotate.
> 
> Then again, the stats file is only written. There is nothing that 
> actually forces the blocks out. On a busy system, one individual stats 
> file will be created, written to, renamed, live for 500ms and be thrown 
> away by the next stat files rename operation. I would assume that with a 
> decent filesystem and appropriate OS buffers, none of the data blocks of 
> most stat files even hit the disk. I must be missing something.

Yeah, I found these three facets of the OP's system a bit disconcerting:

QUOTE ---
This is for a web application which uses a new connection for each CGI 
request.  
The server doesn't have a particularly high disk bandwidth and this
mysterious activity had been the bottleneck for some time.  
The system is a little unusual as one of the databases has tens of
thousands of tables.
ENDQUOTE ---

Any two of those choices could cause some issues, but all three together
are pretty much a death knell for performance, whether or not the
global/pgstat file is being written.

Just an observation.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Greg Stark

Jan Wieck <[EMAIL PROTECTED]> writes:

> Then again, the stats file is only written. There is nothing that actually
> forces the blocks out. On a busy system, one individual stats file will be
> created, written to, renamed, live for 500ms and be thrown away by the next
> stat files rename operation. I would assume that with a decent filesystem and
> appropriate OS buffers, none of the data blocks of most stat files even hit
> the disk. I must be missing something.

Renaming is a metadata operation. Depending on the filesystem it has to be
done either synchronously or force a log write barrier. I'm not sure how those
things are implemented in various filesystems but I could easily imagine some
implementations treating them as implicit fsyncs for that file.

Perhaps this user could put the stats file in a ramdisk. It doesn't sound like
losing it in a crash would be anything to worry about.

-- 
greg


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-28 Thread Chris Browne
[EMAIL PROTECTED] (Scott Marlowe) writes:
> On Wed, 2005-07-27 at 15:32, John DeSoi wrote:
>> On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:
>> 
>> > I'd create a sequence:
>> >
>> > CREATE SEQUENCE global_unique_id_seq;
>> >
>> > and a function:
>> >
>> > CREATE OR REPLACE FUNCTION newid()
>> >   RETURNS text AS
>> > $BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
>> >   LANGUAGE 'sql' VOLATILE;
>> >
>> >
>> > now every call to newid() returns a garantied unique id for
>> > say the next 18446744073709551616 calls.
>> > Of course you can obfuscate the ID even more using
>> > md5, include servername and so on, but this will not improve
>> > security in any way (unless you mix data with 2nd database)
>> 
>> 
>> This is not really a viable replacement for a GUID == globally unique  
>> identifier. Here global means that if I use the application in  
>> multiple databases, I'm guaranteed that no two identifiers will be  
>> the same. Using a sequence will only support uniqueness for a single  
>> database.
>
> So, how can two databases, not currently talking to one another,
> guarantee that their GUIDs don't collide? using a large randomly
> generated name space only reduces the chances of collision, it doesn't
> actually guarantee it.

Consult RFC 4122...

  

"Abstract

   This specification defines a Uniform Resource Name namespace for
   UUIDs (Universally Unique IDentifier), also known as GUIDs (Globally
   Unique IDentifier).  A UUID is 128 bits long, and can guarantee
   uniqueness across space and time.  UUIDs were originally used in the
   Apollo Network Computing System and later in the Open Software
   Foundation's (OSF) Distributed Computing Environment (DCE), and then
   in Microsoft Windows platforms.

   This specification is derived from the DCE specification with the
   kind permission of the OSF (now known as The Open Group).
   Information from earlier versions of the DCE specification have been
   incorporated into this document."

See also the gBorg "pgUUID" project:
  
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "acm.org")
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  "I will not tell my Legions of Terror
"And he must  be taken alive!" The command will be:  ``And try to take
him alive if it is reasonably practical.''"


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] GUID for postgreSQL

2005-07-28 Thread Chris Browne
[EMAIL PROTECTED] ("Stefan 'Kaishakunin' Schumacher") writes:
> So as I understand it, GUIDs only apply to the Microsoft Universe[TM], or
> are there any other serious applications using it?

No, the RFC (4122) was written by a combination of people including
Rich Salz, and is an encoding of the DCE UUID specification previously
produced as part of DCE.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "acm.org")
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  "I will not tell my Legions of Terror
"And he must  be taken alive!" The command will be:  ``And try to take
him alive if it is reasonably practical.''"


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Greg Stark

Jan Wieck <[EMAIL PROTECTED]> writes:

> >> PostgreSQL itself doesn't work too well with tens of thousands of tables.
> > Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
> > that doesn't choke with tens of thousands of entries in a directory.
> > I think we should put down a TODO item to see if we can improve the
> > stats subsystem's performance in such cases.
> 
> Okay, I should be more specific. The problem with tens of thousands of tables
> does not exist just because of them being there. It will emerge if all those
> tables are actually used because it will mean that you'd need all the pg_class
> and pg_attribute rows cached and also your vfd cache will constantly rotate.

I think occasionally people get bitten by not having their pg_* tables being
vacuumed or analyzed regularly. If you have lots of tables and the stats are
never updated for pg_class or related tables you can find the planner taking a
long time to plan queries.

This happens if you schedule a cron job to do your vacuuming and analyzing but
connect as a user other than the database owner. For example, you leave the
database owned by "postgres" but create a user to own all the tables and use
that to run regularly scheduled "vacuum analyze"s.
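
A hedged sketch of the kind of scheduled maintenance that avoids this trap;
it must run as a superuser or as the owner of the system catalogs (normally
postgres), because VACUUM skips, with only a warning, any table the current
role does not own:

    VACUUM ANALYZE pg_class;
    VACUUM ANALYZE pg_attribute;
    VACUUM ANALYZE pg_statistic;
    -- or simply, for the whole database including the catalogs:
    VACUUM ANALYZE;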

I'm not sure how often these types of problems get properly diagnosed. The
symptoms are quite mysterious. In retrospect I think I observed something like
it and never figured out what was going on. The problem only went away when I
upgraded the database and went through an initdb cycle.

-- 
greg


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] GUID for postgreSQL

2005-07-28 Thread Bruno Wolff III
On Wed, Jul 27, 2005 at 16:57:21 -0400,
  John DeSoi <[EMAIL PROTECTED]> wrote:
> 
> On Jul 27, 2005, at 4:46 PM, Scott Marlowe wrote:
> 
> >So, how can two databases, not currently talking to one another,
> >guarantee that their GUIDs don't collide? using a large randomly
> >generated name space only reduces the chances of collision, it doesn't
> >actually guarantee it.
> 
> 
> Like MD5, there is no 100% guarantee, but the collision possibility is
> supposed to be very close to zero.

If you use a large enough space for the number, you can reduce the
probability of an accidental collision to much less than that of a
catastrophic hardware failure, at which point it isn't noticeably worse
than having no chance of collisions at all.
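
As a rough sketch of that idea in plain SQL: this produces a 128-bit hex
string, not an RFC 4122 UUID, and its real entropy is limited by random(),
so it only illustrates the "large enough space" point rather than being a
production generator. The function name is made up, and the dollar quoting
needs 8.0 or later:

    CREATE OR REPLACE FUNCTION wide_random_id() RETURNS text AS $$
        SELECT md5(random()::text || random()::text || now()::text);
    $$ LANGUAGE sql VOLATILE;

    SELECT wide_random_id();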

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Phil Endecott

Hello again,

Just to give a bit of background, in case it is useful: this is my 
family tree website, treefic.com.  I have a schema for each user, each 
with about a dozen tables.  In most cases the tables are small, i.e. 
tens of entries, but the users I care about are the ones with tens of 
thousands of people in their trees.  The schemas are independent of each 
other.  Example web page: http://treefic.com/treefic/royal92



Jan Wieck <[EMAIL PROTECTED]> writes:
> PostgreSQL itself doesn't work too well with tens of thousands of
> tables.

I've specifically asked about this here before.  This is obviously 
important for my application so I invite all readers to share any 
thoughts they might have about possible problems with large numbers of 
tables.  I also create and drop large numbers of temporary tables - can 
anyone think of any additional problems with that?

Issues I have discussed here before include tab-completion in psql 
(unimportant) and autovacuum's O(n^2) performance (important).

> Okay, I should be more specific. The problem with tens of thousands of
> tables does not exist just because of them being there. It will emerge
> if all those tables are actually used because it will mean that you'd
> need all the pg_class and pg_attribute rows cached and also your vfd
> cache will constantly rotate.

If many trees are being viewed simultaneously, another part of the 
system will be the bottleneck.  Within any, say, 5 minute period, only 
hundreds of tables will be in use.

> Then again, the stats file is only written. There is nothing that
> actually forces the blocks out. On a busy system, one individual stats
> file will be created, written to, renamed, live for 500ms and be thrown
> away by the next stat files rename operation. I would assume that with a
> decent filesystem and appropriate OS buffers, none of the data blocks of
> most stat files even hit the disk. I must be missing something.

Tom Lane writes:
> This is possibly true --- Phil, do you see actual disk I/O happening
> from the stats writes, or is it just kernel calls?


During my tests the system was idle; I would run "psql -c 'select 1;'" 
and see the blocks in vmstat's "bo" column a couple of seconds later. 
As I understand it that indicates actual I/O, and the delay suggests 
that it is being flushed by the kernel.  When the system is busy it is 
harder to see what is going on and it is possible that at least some of 
this activity was not being written to the disk.  Typically I would see 
a lot more write bandwidth than read bandwidth (by a factor of 5 or so) 
according to vmstat;  any advice about how to identify what files or 
processes are involved would be appreciated.  I had previously imagined 
that it could be temporary tables.  This is Linux 2.4.26 and an ext3 
filesystem.


Having disabled stats earlier my stats file is still quite small. 
Presumably it will gradually grow back.  In the meantime I cannot do any 
experiments.


Thanks as ever for your prompt responses.

Regards,

--Phil.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Peter Wiersig
On Thu, Jul 28, 2005 at 08:31:21PM +0100, Phil Endecott wrote:
> 
> This is Linux 2.4.26 and an ext3 filesystem.

With the dir_index feature or without?

Peter

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Phil Endecott

Scott Marlowe wrote:

> Yeah, I found these three facets of the OP's system a bit disconcerting:
>
> QUOTE ---
> This is for a web application which uses a new connection for each CGI
> request.
> The server doesn't have a particularly high disk bandwidth and this
> mysterious activity had been the bottleneck for some time.
> The system is a little unusual as one of the databases has tens of
> thousands of tables.
> ENDQUOTE ---
>
> Any two of those choices could cause some issues, but all three together
> are pretty much a death knell for performance, whether or not the
> global/pgstat file is being written.


See my previous message for some background about the application and an 
example URL.  When PostgreSQL is running smoothly, it is not the 
bottleneck in the system: all it has to do is read maybe 100k from the 
disk (or more likely the cache), do some in-memory sorts and joins, and 
pass it to the rest of the application.


As far as I can see it is only because some parts of PostgreSQL have 
poor O(num tables) performance that things are slowing down.


--Phil.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-28 Thread Christopher Weimann
On 07/28/2005-05:55AM, Stefan 'Kaishakunin' Schumacher wrote:
> 
> I was a little bit confused about the uniqueness of GUID (esp. using
> only 16 Bit [1]) and read the article about the UUID:
> http://en.wikipedia.org/wiki/Universally_Unique_Identifier
> 

16 BYTE not bit.  From the wiki...

   A UUID is essentially a 16-byte number and in its canonical form a UUID may 
look like this:

   550E8400-E29B-11D4-A716-446655440000


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-28 Thread Josh Berkus
D-

> Anybody interested in contributing to this?

Interested, yes.  Free time's a little harder to come by, as you know ...

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-28 Thread Joshua D. Drake

Josh Berkus wrote:

> D-
>
>> Anybody interested in contributing to this?
>
> Interested, yes.  Free time's a little harder to come by, as you know ...

What is this thing "free time"? I have yet to see or hear of it.






--
Your PostgreSQL solutions provider, Command Prompt, Inc.
24x7 support - 1.800.492.2240, programming, and consulting
Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit
http://www.commandprompt.com / http://www.postgresql.org

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Select for update

2005-07-28 Thread Havasvölgyi Ottó

Hi,

Oh, sorry.

CREATE TABLE pidtest (pid integer, szoveg text) WITHOUT OIDS;

INSERT INTO pidtest (pid, szoveg) VALUES (3, 'three');
INSERT INTO pidtest (pid, szoveg) VALUES (4, 'four');
INSERT INTO pidtest (pid, szoveg) VALUES (5, 'five');
INSERT INTO pidtest (pid, szoveg) VALUES (6, 'six');
INSERT INTO pidtest (pid, szoveg) VALUES (7, 'seven');

create function pidtest_del(_pid integer) returns void as $$
declare
row pidtest;
begin
perform pid from pidtest where pid>=_pid for update;
delete from pidtest where pid=_pid;
for row in select * from pidtest where pid>_pid order by pid loop
 update pidtest set pid=pid-1 where pid=row.pid;
end loop;
return;
end;
$$ language plpgslq;


This function deletes a row, and updates the pid field where pid is greater 
than the deleted pid value, so that the gap caused by the deletion is not 
present any more.
Sorry, now I cannot reproduce it, but yesterday I was surprised that the 
szoveg field's contents in the locked records went away.


Best Regards,
Otto




- Original Message - 
From: "Michael Fuhr" <[EMAIL PROTECTED]>

To: "Havasvölgyi Ottó" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, July 28, 2005 2:02 PM
Subject: Re: [GENERAL] Select for update



On Thu, Jul 28, 2005 at 01:22:31PM +0200, Havasvölgyi Ottó wrote:
> Is it normal that when I select for update a record, but I don't select all
> the fields, that the contents of fields not selected will be deleted:
>
> create table pidtest(pid integer, szoveg text) without oids;
>
> select pid from pistest where pid>5 for update;
>
> After committing (autocommit), the contents of the szoveg field for the
> locked rows will be erased.


Could you provide a complete test case?  Works fine here:

CREATE TABLE pidtest (pid integer, szoveg text) WITHOUT OIDS;

INSERT INTO pidtest (pid, szoveg) VALUES (3, 'three');
INSERT INTO pidtest (pid, szoveg) VALUES (4, 'four');
INSERT INTO pidtest (pid, szoveg) VALUES (5, 'five');
INSERT INTO pidtest (pid, szoveg) VALUES (6, 'six');
INSERT INTO pidtest (pid, szoveg) VALUES (7, 'seven');

SELECT pid FROM pidtest WHERE pid > 5 FOR UPDATE;
pid
-
  6
  7
(2 rows)

SELECT * FROM pidtest;
pid | szoveg
-+
  3 | three
  4 | four
  5 | five
  6 | six
  7 | seven
(5 rows)

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Alvaro Herrera
On Thu, Jul 28, 2005 at 09:43:44PM +0200, Peter Wiersig wrote:
> On Thu, Jul 28, 2005 at 08:31:21PM +0100, Phil Endecott wrote:
> > 
> > This is Linux 2.4.26 and an ext3 filesystem.
> 
> With the dir_index feature or without?

Also, with data=ordered, data=writeback or data=journal?
(First one is default value)

-- 
Alvaro Herrera ()
"No renuncies a nada. No te aferres a nada."

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] ecpg and C++

2005-07-28 Thread Peter L. Berghold
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Folks,

If I wanted to embed SQL code in a C++ file is there a way to use ecpg
to generate the proper C++ files?


- --

Peter L. Berghold [EMAIL PROTECTED]
"Those who fail to learn from history are condemned to repeat it."
AIM: redcowdawgYahoo IM: blue_cowdawg  ICQ: 11455958
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFC6Uj4UM9/01RIhaARAhQpAJ9he3KkW7MvbR3y/Og1fsiVA6tIBgCdFFUk
oUY57FLKV4SCrMT65NkawzY=
=Aohq
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Guy Rouillier
Jan Wieck wrote:

> Then again, the stats file is only written. There is nothing that
> actually forces the blocks out. On a busy system, one individual stats
> file will be created, written to, renamed, live for 500ms and be
> thrown away by the next stat files rename operation. I would assume
> that with a decent filesystem and appropriate OS buffers, none of the
> data blocks of most stat files even hit the disk. I must be missing
> something. 

(From someone who is at best semi-informed).. Unless battery-backed
cache is available, we are advised to run with fsync enabled.  Wouldn't
that affect the stats files as well?

-- 
Guy Rouillier


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Phil Endecott

>> This is Linux 2.4.26 and an ext3 filesystem.
> With the dir_index feature or without?

With, I believe.  It is enabled in the superblock (tune2fs -O dir_index) 
but this was not done when the filesystem was created so only new 
directories are indexed I think.  I don't think there's a way to index 
an existing directory on a mounted filesystem, or to tell if a 
particular directory is indexed.  I created new directories for my 
postgres data and moved the files into them in the hope that they would 
then have indexes, but am not sure how to check.


In any case, this does not seem to be a bottleneck.

--Phil.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Alvaro Herrera
On Thu, Jul 28, 2005 at 05:48:21PM -0500, Guy Rouillier wrote:
> Jan Wieck wrote:
> 
> > Then again, the stats file is only written. There is nothing that
> > actually forces the blocks out. On a busy system, one individual stats
> > file will be created, written to, renamed, live for 500ms and be
> > thrown away by the next stat files rename operation. I would assume
> > that with a decent filesystem and appropriate OS buffers, none of the
> > data blocks of most stat files even hit the disk. I must be missing
> > something. 
> 
> (From someone who is at best semi-informed).. Unless battery-backed
> cache is available, we are advised to run with fsync enabled.  Wouldn't
> that affect the stats files as well?

The stats file is dispensable.  In fact, it has been proposed that on
crash recovery the stat file should be deleted.

-- 
Alvaro Herrera ()
"Y eso te lo doy firmado con mis lágrimas" (Fiebre del Loco)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Select for update

2005-07-28 Thread Michael Fuhr
On Fri, Jul 29, 2005 at 12:05:46AM +0200, Havasvölgyi Ottó wrote:
> create function pidtest_del(_pid integer) returns void as $$
> declare
> row pidtest;
> begin
> perform pid from pidtest where pid>=_pid for update;
> delete from pidtest where pid=_pid;
> for row in select * from pidtest where pid>_pid order by pid loop
>  update pidtest set pid=pid-1 where pid=row.pid;
> end loop;
> return;
> end;
> $$ language plpgslq;

I suspect this isn't exactly the code you're running, because creating
this function fails with the following error:

ERROR:  language "plpgslq" does not exist

If I correct the spelling to "plpgsql" then I get the following results:

SELECT * FROM pidtest;
 pid | szoveg 
-+
   3 | three
   4 | four
   5 | five
   6 | six
   7 | seven
(5 rows)

SELECT pidtest_del(5);
 pidtest_del 
-
 
(1 row)

SELECT * FROM pidtest;
 pid | szoveg 
-+
   3 | three
   4 | four
   5 | six
   6 | seven
(4 rows)

> Sorry, now I cannot reproduce it, but yesterday I was surprised that the 
> szoveg field's contents in the locked records went away.

What you describe isn't supposed to happen, but we can't do much
to investigate the problem unless we can see how to reproduce it.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Steve Atkins
On Thu, Jul 28, 2005 at 03:12:33PM -0400, Greg Stark wrote:

> I think occasionally people get bitten by not having their pg_* tables being
> vacuumed or analyzed regularly. If you have lots of tables and the stats are
> never updated for pg_class or related tables you can find the planner taking a
> long time to plan queries.
> 
> This happens if you schedule a cron job to do your vacuuming and analyzing but
> connect as a user other than the database owner. For example, you leave the
> database owned by "postgres" but create a user to own all the tables and use
> that to run regularly scheduled "vacuum analyze"s.
> 
> I'm not sure how often these types of problems get properly diagnosed. The
> symptoms are quite mysterious. In retrospect I think I observed something like
> it and never figured out what was going on. The problem only went away when I
> upgraded the database and went through an initdb cycle.

I've had exactly this problem at least five times, twice on my own
systems and three times that I noticed on customer machines. It's an
easy mistake to make on a system that doesn't have much interactive
use, and if you're creating and dropping a lot of tables it can
devastate your performance after a while.

Cheers,
  Steve

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster