Re: [GENERAL] real multi-master replication?

2007-03-05 Thread hubert depesz lubaczewski

On 3/4/07, Bill Moran [EMAIL PROTECTED] wrote:

How would you define multi-master?


i am able to write to any machine in the cluster, and read from any.
hopefully - without any kind of single point of failure (like the
pgpool connection point).

depesz

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] real multi-master replication?

2007-03-05 Thread hubert depesz lubaczewski

On 3/4/07, Devrim GÜNDÜZ [EMAIL PROTECTED] wrote:

PGCluster may be the thing that you are looking for. However, if you are
looking at something that is similar to Oracle's RAC, it is the
PGCluster-II that you are looking for, which is under heavy development
right now.


i don't know oracle. but would pgcluster work with triggers modifying
data in other tables?
i mean: when i do an insert into table x, i have triggers that modify
(or add) records in other tables as well.
from what i know (which might be untrue) this will not work.

depesz

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] real multi-master replication?

2007-03-05 Thread hubert depesz lubaczewski

On 3/5/07, Steve Atkins [EMAIL PROTECTED] wrote:

I don't believe there is, or can be, any asynchronous multi-master
replication system for any database that will work with all possible
general purpose constructs.


but i don't insist on async. if there is multi-master replication for
postgresql, it would be great if it were sync.


Given that, you might need to be more specific about your
needs (and rethink your application architecture based on the
reality of the issue), unless the constraints of synchronous
replication work for your particular problem space.


i don't have a project (at the moment) that would require multi-master.
i'm just checking my options - as in the original mail: i've heard of a
lot of multi-master replication systems, but all of them seem to
replicate queries, not data. which is unacceptable for me.

depesz

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] psql client quits after 1st command

2007-03-05 Thread Vincenzo Romano
Good point Tom.

I failed to go a little deeper into the problem.
So the final test for me is that the whole bash script, along
with its echoes, has to be globally redirected to the pipes.

On Friday 02 March 2007 19:43 Tom Lane wrote:
 Martijn van Oosterhout kleptog@svana.org writes:
  I don't think you can achieve the effect you want with a FIFO.

 I think Doug had it right: the trick is to have some process holding the
 FIFO open for write throughout the procedure, so that the reader (psql)
 doesn't see an EOF.  This doesn't necessarily have to be the same
 process(es) that're actually putting data into the FIFO.

 Per the read(2) man page:

  When attempting to read from an empty pipe or FIFO:

   o  If no process has the pipe open for writing, read() will
  return 0 to indicate end-of-file.

   o  If some process has the pipe open for writing and O_NONBLOCK
   is set, read() will return -1 and set errno to EAGAIN.

   o  If some process has the pipe open for writing and O_NONBLOCK
  is clear, read() will block until some data is written or the
  pipe is closed by all processes that had the pipe open for
  writing.

   regards, tom lane

-- 
Vincenzo Romano

Maybe Computers will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1987]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Can I getting a unique ID from a select

2007-03-05 Thread Bruno Wolff III
On Sat, Mar 03, 2007 at 16:46:45 -0800,
  Timasmith [EMAIL PROTECTED] wrote:
 On Mar 3, 7:12 pm, [EMAIL PROTECTED] (Bruno Wolff III) wrote:
  On Thu, Mar 01, 2007 at 06:16:02 -0800,
   Timasmith[EMAIL PROTECTED] wrote:
 
   create view myview as
   select rownum, t1.field, t2.field
   from tableOne t1, tableTwo t2
   where t1.key = t2.fkey
 
 
 Never heard of a 'join key' but that sounds very promising.  How do I
 select it?
 

The join key would be t1.key or t2.fkey from your example. However, there
may be multiple rows returned with the same value, depending on what you
are joining. If that is the case, you should be able to use the primary
keys of the underlying tables to make a new candidate key for the joined
rows.
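
For instance, an untested sketch (tableTwo's own primary key is assumed
to be named pk, which is a hypothetical name):

CREATE VIEW myview AS
SELECT t1.key   AS t1_key,   -- tableOne's primary key
       t2.pk    AS t2_pk,    -- tableTwo's primary key (assumed name)
       t1.field AS field1,
       t2.field AS field2
  FROM tableOne t1
  JOIN tableTwo t2 ON t1.key = t2.fkey;

The pair (t1_key, t2_pk) then uniquely identifies each joined row.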

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] COPY form stdin and file

2007-03-05 Thread senges
Hi all,
when untarring a pg_dumped tar file, the data gets populated like this:

-
COPY mytable (id, attr1, attr2) FROM stdin;
\.
copy mytable (id, attr1, attr2)  from '$$PATH$$/2387.dat' ;
-

where $$PATH$$ gets replaced by the current path.

Does anyone know what the aim of this is?
The \. throws an error within the pgadmin sql query window:
ERROR:  syntax error at or near \ at character

Isn't it correct just to call:
# copy mytable (id, attr1, attr2)  from '$$PATH$$/2387.dat' ; ?


It's kind of awkward to untar a restored db (which i need to do
from time to time because of changing libraries) and always have to
change the above lines of code.


Thanks for any info about this,
Chris


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] COPY form stdin and file

2007-03-05 Thread Tom Lane
senges [EMAIL PROTECTED] writes:
 when untarring a pg_dumped tar file, the data gets populated like this:

 -
 COPY mytable (id, attr1, attr2) FROM stdin;
 \.
 copy mytable (id, attr1, attr2)  from '$$PATH$$/2387.dat' ;
 -

Not here ... I only see 'FROM stdin' cases.  Please provide a test case,
and mention which PG version you are dealing with.

 The \. throws an error within the pgadmin sql query window:
 ERROR:  syntax error at or near \ at character

You'd need to complain to the pgadmin people about that.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] COPY form stdin and file

2007-03-05 Thread Richard Huxton

senges wrote:

Hi all,
when untarring a pg_dumped tar file, the data gets populated like this:

-
COPY mytable (id, attr1, attr2) FROM stdin;
\.
copy mytable (id, attr1, attr2)  from '$$PATH$$/2387.dat' ;
-

where $$PATH$$ gets replaced by the current path.

Does anyone know what the aim of this is?
The \. throws an error within the pgadmin sql query window:
ERROR:  syntax error at or near \ at character


The \. on a separate line ends the COPY ... FROM stdin above it.

I'm not sure pgadmin has a stdin to process data from though.

pg_restore is probably the tool you want to use.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] date format

2007-03-05 Thread Peter Eisentraut
On Sunday, 4 March 2007 at 17:16, Raymond O'Donnell wrote:
 Probably a silly question, but did you remember to restart the server
 after changing the datestyle setting?

You don't need to do that.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] plpgsql and insert

2007-03-05 Thread Ben Trewern
Depending on what client-side library you are using, you could use the 
RETURNING clause; see the docs: 
http://www.postgresql.org/docs/8.2/static/sql-insert.html
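
For example, an untested sketch (the table and column names are made up):

-- plain SQL: the client gets the generated key back directly
INSERT INTO mytable (name) VALUES ('foo') RETURNING id;

-- inside a PL/pgSQL function on 8.2, the key can be captured into a
-- variable instead:
INSERT INTO mytable (name) VALUES ('foo') RETURNING id INTO xprimary;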

Regards,

Ben
Jamie Deppeler [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Hi,

 Have a quick question: is it possible to record a primary key from an
 insert statement?

 eg

 xprimary :=  insert into schema.table(.,.,.,.) VALUES ();


 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] date format

2007-03-05 Thread Joshua D. Drake

Peter Eisentraut wrote:

On Sunday, 4 March 2007 at 17:16, Raymond O'Donnell wrote:
  

Probably a silly question, but did you remember to restart the server
after changing the datestyle setting?



You don't need to do that.
  
I didn't really explain my point here. You can use datestyle from a 
session, or you can reload the server.
You do not need to *restart*. However, he didn't mention if he had done 
any of that. He just said he set it, which doesn't really mean anything 
if he didn't change it in the session or reload the server.


J





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] date format

2007-03-05 Thread Raymond O'Donnell

On 05/03/2007 12:16, Joshua D. Drake wrote:

I didn't really explain my point here. You can use datestyle from a 
session, or you can reload the server. You do not need to *restart*.


Yes, that's what I was getting at as well - I was in a hurry at the time
and restart was the wrong word to use.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] US Highschool database in postgres

2007-03-05 Thread Laurent ROCHE
You can get some lists from here:
http://www.iso.org/iso/en/prods-services/iso3166ma/index.html
and here
http://www.unece.org/cefact/locode/service/main.htm

No high schools though, but countries, currencies, cities, states, ...
Not sure whether this will help you ... but you can have a look.

If you don't have Access, you can use OpenOffice to open them, and you can 
use a PG connection to copy the data from one system to the other!
Quite impressive.

 
Have fun,
[EMAIL PROTECTED]
The Computing Froggy

- Original message 
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: Ron Johnson [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Friday, 2 March 2007, 02:04:36
Subject: Re: [GENERAL] US Highschool database in postgres

http://en.wikipedia.org/wiki/Category:Lists_of_schools_in_the_United_States
Has anyone parsed the list of high schools in the USA and stored it in postgres?

or is there an open repository of backup files where people can share
their backups of useful dbs,
like 
list of high schools
list of cities
list of zipcodes
etc
thanks
mArk

On 3/1/07, Ron Johnson [EMAIL PROTECTED] wrote:

Hmmm.

Have you Googled?  "list of high schools in california" brought up
some very useful sites.  I would not be surprised if there were such
sites for all the other states.

You'll need some skill with Perl or Python to efficiently grab all
the data, though.

On 03/01/07 13:39, [EMAIL PROTECTED]
 wrote:
 List of highschools by city state and county
 Mark

 On 3/1/07, Ron Johnson [EMAIL PROTECTED] wrote:

 On 02/27/07 01:04, mobil wrote:

 Is there a downloadable high school database in postgresql?

 What exactly do you mean by high school database?





Re: [GENERAL] usage for 'with recursive'?

2007-03-05 Thread Kenneth Downs

hubert depesz lubaczewski wrote:

On 3/2/07, Kenneth Downs [EMAIL PROTECTED] wrote:

This reminds me of another advantage of WITH RECURSIVE, which is
that it pushes the overhead to SELECT, with no associated write-time
overheads.


hmm .. why do you consider this an advantage? i would say it's rather a
drawback.



One school of thought aims for overall system performance gains by 
keeping transactions as small as possible.  WITH RECURSIVE allows an 
UPDATE to affect exactly one row, where other methods affect more rows.  
Therefore WITH RECURSIVE gives you the smallest possible transaction 
at write time.


Further, it seems the actual number of rows pulled in all approaches 
should be the same, so now I wonder if there really even is any overhead 
at SELECT time - making the argument for WITH RECURSIVE rather conclusive, 
I'd say.
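
In SQL-standard terms, the construct under discussion looks something
like this - an untested sketch against a hypothetical nodes(id, parent_id)
tree table; note that PostgreSQL itself does not yet implement WITH
RECURSIVE as of 8.2:

WITH RECURSIVE subtree AS (
    SELECT id, parent_id FROM nodes WHERE id = 1   -- root of interest
  UNION ALL
    SELECT n.id, n.parent_id
      FROM nodes n JOIN subtree s ON n.parent_id = s.id
)
SELECT * FROM subtree;

The recursion replaces the chains of self-joins (or the write-time
denormalization) that other approaches need.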







--
Kenneth Downs
Secure Data Software, Inc.
www.secdat.com / www.andromeda-project.org
Office: 631-689-7200   Cell: 631-379-0010

::Think you may have a problem with programming? Ask yourself this 
::question: do you worry about how to throw away a garbage can?



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: Re: [GENERAL] COPY form stdin and file

2007-03-05 Thread Laurent ROCHE
COPY FROM stdin works only from a console ... and pgAdmin is not a console 
but a windowed environment, so that does not work!
I have had the pgAdmin people explain the very same thing before.

 
Cheers,
[EMAIL PROTECTED]
The Computing Froggy

- Original message 
From: Tom Lane [EMAIL PROTECTED]
To: senges [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Monday, 5 March 2007, 10:22:22
Subject: Re: [GENERAL] COPY form stdin and file 

senges [EMAIL PROTECTED] writes:
 when untarring a pg_dumped tar file, the data gets populated like this:

 -
 COPY mytable (id, attr1, attr2) FROM stdin;
 \.
 copy mytable (id, attr1, attr2)  from '$$PATH$$/2387.dat' ;
 -

Not here ... I only see 'FROM stdin' cases.  Please provide a test case,
and mention which PG version you are dealing with.

 The \. throws an error within the pgadmin sql query window:
 ERROR:  syntax error at or near \ at character

You'd need to complain to the pgadmin people about that.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] real multi-master replication?

2007-03-05 Thread Richard Broersma Jr
 but i don't insist on async. if there is multi-master replication for
 postgresql, it would be great if it were sync.

I don't know if this is what you are looking for, but this new link on the 
postgresql home page suggests that a new version of master-to-master 
replication is now available.

http://www.postgresql.org/about/news.752

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread dlivesay
Here's something I've always wondered. When you dump a database, the dumped file
looks like ordinary SQL, but if I load it into a query editor window and try to
execute it, I always get syntax errors. The specific errors vary, but it always
makes it impossible to reload the data that way.

In the past I've just used psql -f [file] [schema] to reload them, but I'm
trying to do this on Windows, and I can't seem to get anything to work from the
command shell, so I'm really stuck.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread Raymond O'Donnell

On 05/03/2007 15:28, [EMAIL PROTECTED] wrote:


Here's something I've always wondered. When you dump a database, the dumped file
looks like ordinary SQL, but if I load it into a query editor window and try to
execute it, I always get syntax errors. The specific errors vary, but it always
makes it impossible to reload the data that way.


That's because these files contain psql-specific backslash commands 
as well as ordinary SQL - if you look at what the errors tell you, 
you'll probably see that they come from lines containing such commands. 
As I understand it, the plain-text output of pg_dump is intended to be 
restored via psql.


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread Magnus Hagander
On Mon, Mar 05, 2007 at 10:28:04AM -0500, [EMAIL PROTECTED] wrote:
 Here's something I've always wondered. When you dump a database, the
 dumped file looks like ordinary SQL, but if I load it into a query editor
 window and try to execute it, I always get syntax errors. The specific
 errors vary, but it always makes it impossible to reload the data that way.
 
 In the past I've just used psql -f [file] [schema] to reload them, but I'm
 trying to do this on Windows, and I can't seem to get anything to work
 from the command shell, so I'm really stuck.

It should work perfectly fine to restore it using psql -f on Windows as
well, I'd recommend that you look into why that's not working and try to
fix that instead.

//Magnus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread Joris Dobbelsteen
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of 
[EMAIL PROTECTED]
Sent: Monday, 5 March 2007 16:28
To: pgsql-general@postgresql.org
Subject: [GENERAL] Why don't dumped files parse in pgAdmin3 
query editor?

Here's something I've always wondered. When you dump a 
database, the dumped file looks like ordinary SQL, but if I 
load it into a query editor window and try to execute it, I 
always get syntax errors. The specific errors vary, but it 
always makes it impossible to reload the data that way.

In the past I've just used psql -f [file] [schema] to reload 
them, but I'm trying to do this on Windows, and I can't seem 
to get anything to work from the command shell, so I'm really stuck.

It seems it has some strange syntax that might be psql-specific; perhaps
it's even for Linux only.

However, it might make some difference if you dump with "Use insert
statements" instead of the normal copy from stdin. This seemed to work
for me last time. Your mileage may vary.

- Joris 

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread dlivesay

Quoting Magnus Hagander [EMAIL PROTECTED]:


It should work perfectly fine to restore it using psql -f on Windows as
well, I'd recommend that you look into why that's not working and try to
fix that instead.


The main problem I'm having is that any command I try to use with psql, even
psql /?, causes it to prompt me for a password, and it tells me my password
is wrong no matter what username/password combination I try.

Beyond that, I really don't know what the correct syntax to use in the Windows
command shell is. Would it be psql -f or psql /f?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread Raymond O'Donnell

On 05/03/2007 15:58, [EMAIL PROTECTED] wrote:


The main problem I'm having is that any command I try to use with
psql, even psql /?, causes it to prompt me for a password, and it
tells me my password is wrong no matter what username/passsword
combination I try.


Are you using the -U option to connect as a specific user? If you don't, 
psql tries to connect as the currently logged-in user.



Beyond that, I really don't know what the correct syntax to use in
the Windows command shell is. Would it be psql -f or psql /f?


Try psql --help:

C:\Documents and Settings\rod>psql --help
This is psql 8.2.3, the PostgreSQL interactive terminal.

Usage:
  psql [OPTIONS]... [DBNAME [USERNAME]]

General options:
  -d DBNAME   specify database name to connect to (default: rod)
  -c COMMAND  run only single command (SQL or internal) and exit
  -f FILENAME execute commands from file, then exit
  -1 (one)  execute command file as a single transaction
  -l  list available databases, then exit
  -v NAME=VALUE   set psql variable NAME to VALUE
  -X  do not read startup file (~/.psqlrc)
  --help  show this help, then exit
  --version   output version information, then exit

Input and output options:
  -a  echo all input from script
  -e  echo commands sent to server
  -E  display queries that internal commands generate
  -q  run quietly (no messages, only query output)
  -o FILENAME send query results to file (or |pipe)
  -n  disable enhanced command line editing (readline)
  -s  single-step mode (confirm each query)
  -S  single-line mode (end of line terminates SQL command)
  -L FILENAME send session log to file

Output format options:
  -A  unaligned table output mode (-P format=unaligned)
  -H  HTML table output mode (-P format=html)
  -t  print rows only (-P tuples_only)
  -T TEXT set HTML table tag attributes (width, border) (-P tableattr=)

  -x  turn on expanded table output (-P expanded)
  -P VAR[=ARG]set printing option VAR to ARG (see \pset command)
  -F STRING   set field separator (default: |) (-P fieldsep=)
  -R STRING   set record separator (default: newline) (-P recordsep=)

Connection options:
  -h HOSTNAME database server host or socket directory (default: local socket)
  -p PORT database server port (default: 5432)
  -U NAME database user name (default: rod)
  -W  prompt for password (should happen automatically)

For more information, type \? (for internal commands) or \help
(for SQL commands) from within psql, or consult the psql section in
the PostgreSQL documentation.

Report bugs to pgsql-bugs@postgresql.org.

HTH,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Why don't dumped files parse in pgAdmin3 query editor?

2007-03-05 Thread Magnus Hagander
On Mon, Mar 05, 2007 at 10:58:32AM -0500, [EMAIL PROTECTED] wrote:
 Quoting Magnus Hagander [EMAIL PROTECTED]:
 
 It should work perfectly fine to restore it using psql -f on Windows as
 well, I'd recommend that you look into why that's not working and try to
 fix that instead.
 
 The main problem I'm having is that any command I try to use with psql,
 even psql /?, causes it to prompt me for a password, and it tells me my
 password is wrong no matter what username/password combination I try.
 
 Beyond that, I really don't know what the correct syntax to use in the
 Windows command shell is. Would it be psql -f or psql /f?

it's psql -f. So you should be using psql -? to get the help.

//Magnus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] real multi-master replication?

2007-03-05 Thread Omar Eljumaily




I don't believe there is, or can be, any asynchronous multi-master
replication system for any database that will work with all possible
general purpose constructs.



I believe it's possible in theory if you have system-wide transaction 
locking, i.e. synchronous.  However, if you have to have system-wide 
transaction locking, what's the point?  If server X has to wait for a 
transaction to finish on server Y, why not make them the same server?


It would be nice to have some sort of paradigm for synchronizing 
databases that go on and offline with each other and even have diverse 
data structures.  I know there's a whole science of transaction 
processing which is simple in concept, but very often difficult to 
implement in practice.  It's a matter of matching transactions/records 
up at an atomic level and replicating them across different servers.


The sort of holy grail for me, and I believe a lot of other people, is to:

1. have a server that can easily and transparently replicate itself 
in different environments for speed, security, and fault-tolerance purposes.
2. allow people to go offline with their datasets, for instance 
on a laptop on an airplane, and then go back online with relative ease.
3. have a well-defined and simple system for identifying and dealing 
with conflicts that arise from multiple copies of the same dataset.


Just ideas that I have on this topic.  I wonder if anybody's doing any 
work on the subject.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] real multi-master replication?

2007-03-05 Thread hubert depesz lubaczewski

On 3/5/07, Richard Broersma Jr [EMAIL PROTECTED] wrote:

 but i don't insist on async. if there is multi-master replication for
 postgresql, it would be great if it were sync.
I don't know if this is what you are looking for, but this new link on the 
postgresql home page suggests that a new version of master-to-master 
replication is now available.
http://www.postgresql.org/about/news.752


i contacted the company some time ago, and the information i got was
that their product is based on query replication.

depesz

--
http://www.depesz.com/ - nowy, lepszy depesz

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] M:M table conditional delete for parents

2007-03-05 Thread MargaretGillon
Postgresql 8.1.4 on Redhat 9

I have a table which stores M:M relationships. I can't put foreign keys to 
the parents of this table because the relationships being stored go to 
several tables. This was done so that only two fields have to be searched 
in order for all relationships to be found for an item. For an oem number 
there might be 50 to 100 relationships and 40 different tables having to 
do with materials, locations, revisions, specifications, customer, etc. 
that might be referenced.

Is there some way I can make a mock foreign key constraint on the parents 
so the parent would search the M:M table for a matching value in key1 if 
the relate-key is 22, 23, 25, or 100 before it allows the row to be 
deleted?

relate-key  relate-type       key1  table1  key2  table2
22          product-material  23    oem     545   material
22          product-material  23    oem     546   material
23          product-engine    23    oem     15    engine
25          product-stage     23    oem     3     stage
100         product-revision  23    oem     2270  specifications


*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 
*** ***
Margaret Gillon, IS Dept., Chromalloy Los Angeles


Re: [GENERAL] M:M table conditional delete for parents

2007-03-05 Thread Omar Eljumaily
I think a foreign key constraint is basically a trigger that throws an 
exception (RAISE statement) when the constraint is violated.


Something like this trigger function:

  if the row references table1:
      if the key is not in table1:
          raise
  else if the row references table2:
      if the key is not in table2:
          raise
  end
I think that should work, but I've never tried it.
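
For the delete side of the question, an untested sketch of what such a
trigger might look like (the mm_relations table and the oem/oem_id names
are hypothetical stand-ins):

CREATE OR REPLACE FUNCTION guard_parent_delete() RETURNS trigger AS $$
BEGIN
    -- refuse the delete while any M:M row still references this parent
    IF EXISTS (SELECT 1 FROM mm_relations
                WHERE key1 = OLD.oem_id
                  AND relate_key IN (22, 23, 25, 100)) THEN
        RAISE EXCEPTION 'oem row % is still referenced', OLD.oem_id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER oem_delete_guard
    BEFORE DELETE ON oem
    FOR EACH ROW EXECUTE PROCEDURE guard_parent_delete();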


[EMAIL PROTECTED] wrote:


Postgresql 8.1.4 on Redhat 9

I have a table which stores M:M relationships. I can't put foreign 
keys to the parents of this table because the relationships being 
stored go to several tables. This was done so that only two fields 
have to be searched in order for all relationships to be found for an 
item. For an oem number there might be 50 to 100 relationships and 40 
different tables having to do with materials, locations, revisions, 
specifications, customer, etc. that might be referenced.


Is there some way I can make a mock foreign key constraint on the 
parents so the parent would search the M:M table for a matching value 
in key1 if the relate-key is 22, 23, 25, or 100 before it allows the 
row to be deleted?


relate-key  relate-type       key1  table1  key2  table2
22          product-material  23    oem     545   material
22          product-material  23    oem     546   material
23          product-engine    23    oem     15    engine
25          product-stage     23    oem     3     stage
100         product-revision  23    oem     2270  specifications



*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 
*** *** ***

Margaret Gillon, IS Dept., Chromalloy Los Angeles



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] RFC tool to support development / operations work with slony replicated databases

2007-03-05 Thread Andrew Hammond

Hello All,

I've been working on designing a tool to facilitate both developers
and operations staff working with slony replicated databases. I think
that the problem described below is a general problem for people
working with systems that are both in production and under on-going
development / maintenance. As a result I would like to both solicit
the input of the community and share the results. Documentation (which
is still somewhat drafty) follows.

Thank you for your time,
Andrew Hammond


Current Approach

A common problem in the database world is handling revisions to the
database that go with revisions in the software running against this
database. Currently our method is to include upgrade.sql and
downgrade.sql scripts with each software release.

Problem Statement

This will fail when we start using slony since we need to handle DML
differently from DDL and DCL. We also need a way to apply slonik
scripts. Ordering matters in the application of these scripts.

After talking about it for a while, we agreed that developers want a
way to apply their updates without stepping on each other's toes while
in the process of developing and testing their work.

Design

Interface

updatemydatabase -f target [-y] [--force-upgrade | --force-downgrade]
[-U pguser] [-h pghost] [-p pgport] [-d pgdatabase] [--cluster
clustername]

-f

   Optional Defaults to the current working directory. Specifies the
target intended to be upgraded to. This may be either the full or
relative path. This may be either a directory or a file.
-y

   Optional If set, assume yes to all questions. This is intended for
use when running the program in tool mode.
-U -h -p -d

   Optional As for psql and other PostgreSQL command line utilities.
--cluster

   Optional Defaults to the database name. Specifies the name of the
slony cluster to work with. This should have a one-letter short form
that conforms with other similar tools. Gotta figure out what those
are though...

   Since we will be using a python connector which is based on libpq,
we will auto-magically respect the standard postgres environment
variables including the .pgpass file for handling passwords.

Limitations

   * We are not trying to deal with databases with more than one
slony replication cluster in them.
   * We are not going to deal with the case where various sets have
different origins.
   * We assume that this is run off the same machine that is
currently running the slons. We can connect to every database in the
cluster.
   * Aside from generating the slonik preamble, we are not going to
try and auto-generate slonik scripts that do anything more complicated
than EXECUTE SCRIPT. At least not initially. Maybe we can get more
clever later?
   * We will not try to be clever about detecting changes to files.
Alfred floated the idea of using the SVN id tag to detect if a file
had been changed since it was last applied and then forcing a
downgrade/upgrade cycle. That seems like a lot of code for a corner
case. Alfred and Long agreed that it's probably a good idea to create
a convention instead. Do not edit files after they're committed unless
it will cause in-efficiencies in the application to the production
database. Instead, create a new file. If you are forced to edit a
committed file, then email the dev list.
   * Along the lines of not being clever, we assume there is only one
set, and that its number is 1.
   * We will not assume the existence of a node 1. The whole point of
increasing availability by replicating is that we don't have to rely
on the existence of a given database.

Data Structure

Each release will include a directory that has the same name as the
full release tag. This directory must contain all the scripts to be
applied. The release may include directories of scripts from prior
releases in the same parent directory. The scripts may have an
arbitrary name, but must end with a suffix of either dml.sql, ddl.sql,
dcl.sql or slonik. Script names should incorporate the bug number
they're addressing.

   * /my/base/directory
 o 3.10.0
   + create_foo_23451.ddl.sql
   + populate_foo_23451.dml.sql
   + alter_bar_add_column_reference_foo_23451.ddl.sql
   + update_bar_reference_foo_23451.dml.sql
   + alter_bar_column_not_null_23451.ddl.sql
   + subscribe_foo_23451.slonik
   + cleanup_some_data_migration_stuff_23451.ddl.sql
   + fix_bug_24341.ddl.sql -- these are poorly chosen
names, but hey, it's an example...
   + fix_bug_24341.dml.sql
   + fix_bug_24341.slonik
   + drop_broken_node_30031.slonik
 o 3.10.1
   + another_table_29341.ddl.sql

Inside the script, we add some semantics to what are usually comments.
An example is probably the best way to show this.

-- alter_bar_column_not_null_23451.ddl.sql
-- Witty comment about why this column needs to be not null.
--dep 

[GENERAL] pq_flush: send() failed: Broken pipe

2007-03-05 Thread Yumiko Izumi
Hello. 

When I executed an SQL script in a PostgreSQL 7.3.8 environment,
PostgreSQL output the following error messages.
==
Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: 
Broken pipe
Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF on 
client connection
==

Why do these messages appear?
Please advise me on a workaround for this problem. 

My work procedure was as follows:
(1) I installed PostgreSQL 7.3.8 on an HP-UX machine. 
(2) I ran initdb to create gyomuDB. 
(3) I ran an SQL script to create TABLEs and FUNCTIONs in gyomuDB: 
% psql -d gyomuDB -f 003.sql 

In 003.sql, I create various tables and indexes, but the following SQL 
statements terminated abnormally. 
(There are 1500 lines in the whole file; the relevant extract follows.)
###
CREATE FUNCTION mon_CreateTable_WbemMonitorLog( text ) RETURNS integer AS '
DECLARE
str_basename    ALIAS FOR $1;
str_sql text;
str_tablename   text;
nb  integer;
nb_end  integer;
BEGIN
nb := 0;
nb_end := 999;
while nb <= nb_end loop
-- table
str_tablename := str_basename || CAST( nb AS text );
str_sql := ''CREATE TABLE '' || str_tablename
|| ''(''
|| ''MonID int NOT NULL ,''
|| ''CategoryID int NOT NULL ,''
|| ''ExtensionID int NOT NULL ,''
|| ''SummaryID int NOT NULL ,''
|| ''KeyValue varchar (256) NULL ,''
|| ''Data varchar (512) NULL ,''
|| ''GetDate timestamp NOT NULL ,''
|| ''Status int NOT NULL ,''
|| ''Summarized int NOT NULL);'';
execute str_sql;
nb := nb + 1;
end loop;
RETURN ( 0 );
END;
'
LANGUAGE 'plpgsql';
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogMinute_' );
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogHour_' );
SELECT mon_CreateTable_WbemMonitorLog( 'WbemMonitorLogDay_' );
DROP FUNCTION mon_CreateTable_WbemMonitorLog( text );
###

This function worked with various servers normally.
But this function terminated abnormally only with a certain server.

The only difference is that the server where this function terminates 
abnormally has a high-speed multiprocessor.
Besides this, there is no remarkable difference.

When this function worked normally, this function outputs as follows.
###
CREATE FUNCTION
 mon_createtable_wbemmonitorlog

  0
(1 row)

 mon_createtable_wbemmonitorlog

  0
(1 row)

 mon_createtable_wbemmonitorlog

  0
(1 row)

DROP FUNCTION
###

When this function terminated abnormally, psql output only CREATE FUNCTION,
and then the psql command terminated.

In other words, I suppose that it fails at the execution (SELECT) of the first function.

I checked disk usage at the time, and no resource bottleneck was found.
So I understand this error isn't caused by a resource shortage. 

In addition, this error caused when I am creating DB
just after PostgreSQL installation.

I can't think that there is a lot access in PostgreSQL.
I can't think that DB size is big and lacked memory. 

About the signal:
I run the above step (3) via a script written in bash.
The script was terminated by signal 16 (SIGUSR1):

  # ./gyomuDB_setup.com
  Signal 16

I looked at the PostgreSQL source and various documents,
and I understood the SIGUSR1 signal as follows:

* A SIGUSR1 signal is used only to control when the transaction log 
  used to restore the DB after a failure is written.
* All transaction processing is carried out by the child processes
  (the backends).
  When a fixed quantity of transaction log has accumulated, 
  the postmaster carries out checkpoint processing, triggered 
  by a backend sending a SIGUSR1 signal to the postmaster.
  (Actually, the postmaster spawns another child process to carry it out.)

In this case, however, the signal seems to be delivered not to the postmaster 
but to psql, and psql stops.

postgresql.conf
--
#
#   Connection Parameters
#
#tcpip_socket = false
#ssl = false

#max_connections = 32
#superuser_reserved_connections = 2

#port = 5432 
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#   Shared Memory Size
#
#shared_buffers = 64# min max_connections*2 or 16, 8KB each
#max_fsm_relations = 1000   # min 10, fsm is free space map, ~40 bytes

Re: [GENERAL] pq_flush: send() failed: Broken pipe

2007-03-05 Thread Andrej Ricnik-Bay

On 3/6/07, Yumiko Izumi [EMAIL PROTECTED] wrote:

Hello.

Hi,


When I carried out SQL sentence in PostgreSQL7.3.8 environment,
PostgreSQL outputs the following error messages.

Can't say anything sensible regarding the error message,
but if you have to carry on using 7.x you should upgrade
to at least 7.4.16.  Your install is 2.5 years behind, and there
have been quite a few patches/security fixes since.


Cheers,
Andrej

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pq_flush: send() failed: Broken pipe

2007-03-05 Thread Tom Lane
Yumiko Izumi [EMAIL PROTECTED] writes:
 When I carried out SQL sentence in PostgreSQL7.3.8 environment,
 PostgreSQL outputs the following error messages.
 ==
 Dec 19 13:50:32 gyomu01 postgres[807]: [11] LOG:  pq_flush: send() failed: 
 Broken pipe
 Dec 19 13:50:33 gyomu01 postgres[807]: [12] LOG:  pq_recvbuf: unexpected EOF 
 on client connection

These messages indicate that psql crashed, not the backend.  Did you not
see any interesting messages on the client side?  Can you get a stack
trace from the psql crash?

Also, as someone already mentioned, the current release in the 7.3
branch is 7.3.18 not 7.3.8.  If you want us to expend time looking for
the problem, it would be polite to first make sure it's not a
long-since-solved problem.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] real multi-master replication?

2007-03-05 Thread Robert Treat
On Sunday 04 March 2007 21:28, Bill Moran wrote:
 Stefan Kaltenbrunner [EMAIL PROTECTED] wrote:
  Bill Moran wrote:
   Stefan Kaltenbrunner [EMAIL PROTECTED] wrote:
   Bill Moran wrote:
   hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
   hi,
   i read about some replication system for postgresql, but - as far as
   i know there is none real multi-master replication system for
   postgresql.
   all i have seen are based on query replication with various
   hacks for specific constructions (like now()).
   my question is - is there any (even fully commercial) multi-master
   replication system for postgresql that will work with all possible
   constructs, triggers, random data and so on?
   i mean - i dont want to bother with choosing to 'note' somehow that
   'this particular query' has to be replicated somehow.
   i'm thinking about working solution that will allow multi-master
   connections.
  
   anything? anywhere?
  
   Have you looked at pgpool?
  
   afaik pgpool is statement based and not really multimaster either ...
  
   Well, it's multi-master to the degree that all servers are read/write,
   and therefore any server can take over.
 
  not sure I follow - pgpool will simply replay the queries to each
  backend-server that are going through it.

 I guess I'm comparing it to Slony, which has a clear delineation between
 master and slave.  With pgpool, you don't have the failover procedure in
 the same way, in that each server can be read/write at all times.


This is typically referred to as dual master (you have two unsynchronized 
master servers), though the terminology is so mixed up these days that none 
of it is really clear. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Query timing

2007-03-05 Thread Robert Treat
On Monday 05 March 2007 02:38, Naz Gassiep wrote:
 That's not quite as fast as I would like to do it, that throws in a few
 more steps which slow down the development process. However if there is
 no way I will persevere with the method I have now.
 Thanks,
 - Naz.


There are several logging parameters in postgresql.conf, including 
log_duration.  Logging to a file also has a bonus: you can feed it into a 
query-analyzing program like pgfouine or pqa. 
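
For example, a superuser can turn duration logging on for a single session
without touching postgresql.conf (a sketch; the values are illustrative):

-- log every statement that runs longer than 100 ms, with its duration
SET log_min_duration_statement = 100;

-- or log the duration of every completed statement
SET log_duration = on;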

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] pg temp tables

2007-03-05 Thread Robert Treat
On Saturday 03 March 2007 10:33, Anton Melser wrote:
 Hi,
 I have been going around telling everyone that there is no point using
 physical tables in postgres for temporary storage within a procedure.
 Why bother bothering the system with something which is only used in
 one procedure I said to myself... I have just learnt that with MS Sql
 Server, this is not the case, and that there are locks on some system
 table and temp tables eat up memory and lots of other unfortunate
 things. Can someone give me a 101 on temp table considerations? Or
 rather give me the good link?

The main issue against using temp tables involves bloat of some of the system 
catalogs, but it's no worse than doing create/drop cycles with standard 
tables, and better because they don't suffer as much i/o load. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] giving a user permission to kill their processes only

2007-03-05 Thread Robert Treat
On Wednesday 28 February 2007 15:19, George Nychis wrote:
 Hey all,

 So the pg_cancel_backend() function by default is only available to super
 users, so I decided to write a wrapper function around, use a SECURITY
 DEFINER, and GRANT my user privilege to use the wrapper.

 BEGIN;
 CREATE FUNCTION kill_process(integer) RETURNS boolean AS 'select
 pg_cancel_backend($1);' LANGUAGE SQL SECURITY DEFINER;
 REVOKE EXECUTE ON FUNCTION kill_process(integer) FROM PUBLIC;
 COMMIT;
 GRANT EXECUTE ON FUNCTION kill_process(integer) TO gnychis;

 The problem with this is I can now kill other users postgresql processes. 
 I was wondering if anyone knows a way in which i can check that the
 postgres process being killed is running a query for that user?  Therefore,
 they can't kill queries in postgres processes started by other users.


you could try to match CURRENT_USER with the information in pg_stat_activity, 
but be aware there is a reason why this functionality was made for 
superusers...
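
An untested sketch of that check, using 8.2's pg_stat_activity column
names (note that inside a SECURITY DEFINER function current_user reports
the function's owner, so session_user is used instead):

CREATE FUNCTION kill_own_process(integer) RETURNS boolean AS '
    SELECT CASE
        WHEN EXISTS (SELECT 1 FROM pg_stat_activity
                      WHERE procpid = $1 AND usename = session_user)
        THEN pg_cancel_backend($1)
        ELSE false
    END;
' LANGUAGE SQL SECURITY DEFINER;

This is still racy (a pid can be reused between the check and the cancel),
which is part of why the raw function is reserved for superusers.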
-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Support for idempotent schema changes?

2007-03-05 Thread Florian G. Pflug

Joshua D. Drake wrote:

David Lowe wrote:

Within the context of a script, executing:

Begin
Statement1
Statement2
Statement3
Commit

Where I only wish to commit if the error is specific to the object
already existing, and rollback for all other errors, what's the best way
to accomplish that?
  
You would have to put each statement into a savepoint, and catch each 
error that occurred and commit or roll back to a savepoint
based on that result.


You could write a plpgsql function that executes the text given to it
as a parameter, and catches only "already exists" errors. Then your
schema script could look like:
select execute_ignoreexists('create table ...') ;
select execute_ignoreexists('create index ...') ;
...
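
An untested sketch of such a function (the set of error conditions caught
may need tuning for your DDL mix):

CREATE OR REPLACE FUNCTION execute_ignoreexists(stmt text) RETURNS void AS $$
BEGIN
    EXECUTE stmt;
EXCEPTION
    WHEN duplicate_table OR duplicate_object OR duplicate_column THEN
        RAISE NOTICE 'already exists, skipped: %', stmt;
END;
$$ LANGUAGE plpgsql;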

greetings, Florian Pflug

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Database slowness -- my design, hardware, or both?

2007-03-05 Thread Reuven M. Lerner
Hi, everyone.  I've been using PostgreSQL for a decade, and it hasn't 
failed me yet.  But I've been having some serious performance problems 
on a database that I've been using in my grad-school research group, and 
it's clear that I need help from some more experienced hands.


Basically, we've collected large amounts of data from students who used 
our chemistry simulation software.  That data, collected in XML logfiles 
and then imported into a normalized set of PostgreSQL tables, keeps 
track of every action the student took, as well as what the chemistry 
simulation was doing.  So if the number of molecules changes, we keep 
track of that.  If the user changes the color representation of the 
molecules, we keep track of that, too.  There is a timestamp 
(unfortunately, with one-second resolution) associated with each 
action.  The simulations take place in several different activities, 
each of which has a set of sections and nodes through which the user passes.


We're trying to look for patterns in this data, which means that we're 
heading into the worlds of data warehousing and data mining.  These are 
relatively new topics for me, although I'm trying to get up to speed on 
them.  (The patterns for which we're searching have already been 
determined.  So we're trying to describe particular patterns, rather 
than simply let data-mining software go wild and find correlations.)


My first plan was to create a single fact table (which I called 
transactions), with foreign keys to various other relevant tables, and 
pre-computed data that I would need in my calculations.  I then figured 
that I would write some queries, in a combination of Perl, Pl/PgSQL, and 
straight SQL, to tag particular rows has being in a pattern (or not).  
Once I've tagged the rows that are in the pattern, I can retrieve them 
with a SQL query, and hand them off to the researcher analyzing them.


In theory, this would be the way to go.  In practice, every step has 
become a performance nightmare, taking many more hours than I might have 
hoped or expected.  For example, I've got a Pl/PgSQL function that goes 
through each variable-assignment row, and UPDATEs is previous_value 
column with whatever the previous value might have been.  This function 
takes 4-6 hours to run, across 2 million rows, representing two of our 
six activities.  (I've removed the other four activities' worth, in the 
hopes that I'll see a performance improvement.) 

When I only had 750,000 rows in our fact table, things ran at a somewhat 
acceptable speed.  Now, the database is getting seriously bogged down 
with what I'm doing.


I've tried this on a few different pieces of hardware, including some 
with multiple CPUs and lots of memory.  And yet, nothing is going 
quickly, or even remotely quickly.  I'm stuck with the nagging feeling 
that (a) I have some seriously unoptimized parts of my query, (b) faster 
disks would be helpful, and/or (c) what I'm doing is inherently slow and 
painful, and there's no way around it.


Numerous invocations of EXPLAIN, and frequent uses of VACUUM tell me 
that I'm dealing with a fair amount of data here.  But let's be honest; 
the whole database is about 5 GB unvacuumed, and I know that people are 
doing all sorts of things with 100 GB and larger disks.  So perhaps I've 
hit a hardware bottleneck, and need some zippier disks?  I'm not sure.


So I'm looking for suggestions and advice from the PostgreSQL 
community.  I haven't included a lot of details here, because I'm not 
even sure what details would be relevant, given the complexity of our 
situation.  So if there are some particular metrics that I should share 
with this group, I'd be happy to do so, from hardware configurations to 
table definitions, to the queries that are getting bogged down.


Thanks in advance,

Reuven


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] US Highschool database in postgres

2007-03-05 Thread Robert Treat
On Monday 05 March 2007 08:47, Laurent ROCHE wrote:
 You can get some lists from here:
 http://www.iso.org/iso/en/prods-services/iso3166ma/index.html
 and there
 http://www.unece.org/cefact/locode/service/main.htm

 No high schools though, but countries, currencies, cities, states, ...
 Not sure whether this will help you ... but you can have a look.

 If you don't have access, you can use OpenOffice to open them and you can
 use a PG connection to copy the data from one system to the other ! Quite
 impressive.


Don't forget to look at the samples database project on pgfoundry: 
http://pgfoundry.org/projects/dbsamples/

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] (no subject)

2007-03-05 Thread Reuven M. Lerner



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Database slowness -- my design, hardware, or both?

2007-03-05 Thread Tom Lane
Reuven M. Lerner [EMAIL PROTECTED] writes:
 Hi, everyone.  I've been using PostgreSQL for a decade, and it hasn't 
 failed me yet.  But I've been having some serious performance problems 
 on a database that I've been using in my grad-school research group, and 
 it's clear that I need help from some more experienced hands.

What PG version are you using?

 In theory, this would be the way to go.  In practice, every step has 
 become a performance nightmare, taking many more hours than I might have 
 hoped or expected.  For example, I've got a Pl/PgSQL function that goes 
 through each variable-assignment row, and UPDATEs its previous_value 
 column with whatever the previous value might have been.

I'd try to think of a way to eliminate the function altogether in favor
of a single UPDATE command.  In general, row-at-a-time thinking isn't
the way to win in SQL.
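
An untested sketch of that set-based style, with hypothetical table and
column names (the one-second timestamps would need a tiebreaker, e.g. a
sequence column, when two assignments share a timestamp):

UPDATE transactions
   SET previous_value = (SELECT t2.value
                           FROM transactions t2
                          WHERE t2.variable_id = transactions.variable_id
                            AND t2.action_time < transactions.action_time
                          ORDER BY t2.action_time DESC
                          LIMIT 1);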

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Database slowness -- my design, hardware, or both?

2007-03-05 Thread Reuven M. Lerner

Hi, Tom.  You wrote:
Hi, everyone.  I've been using PostgreSQL for a decade, and it hasn't 
failed me yet.  But I've been having some serious performance problems 
on a database that I've been using in my grad-school research group, and 
it's clear that I need help from some more experienced hands.



What PG version are you using?
  
I've been using 8.0, 8.1, and 8.2 at various points, depending on which 
machine I've been using.  My main machine is currently using 8.2.0.  (I 
wish that I had control over which version was being used, but my 
sysadmin powers are inversely proportional to the computer power made 
available to me.)

I'd try to think of a way to eliminate the function altogether in favor
of a single UPDATE command.  In general, row-at-a-time thinking isn't
the way to win in SQL.
Well, I've tried to do massive UPDATEs as much as possible.  But the 
patterns that we're looking for are basically of the variety, If the 
user clicks on X and then clicks on Y, but without Z between the two of 
them, and if these are all part of the same simulation run, then we tag 
action X as being of interest to us.  So it's oodles of keeping track 
of back-and-forth for each of the rows in the table, and looking forward 
and backward in the table.


I agree that row-at-a-time thinking isn't the best way to work, but I 
didn't see a good alternative for our purposes.  I'm open to any and all 
suggestions.


Reuven



Re: [GENERAL] Database slowness -- my design, hardware, or both?

2007-03-05 Thread Webb Sprague

 Well, I've tried to do massive UPDATEs as much as possible.  But the
patterns that we're looking for are basically of the variety, If the user
clicks on X and then clicks on Y, but without Z between the two of them, and
if these are all part of the same simulation run, then we tag action X as
being of interest to us.  So it's oodles of keeping track of back-and-forth
for each of the rows in the table, and looking forward and backward in the
table.

 I agree that row-at-a-time thinking isn't the best way to work, but I
didn't see a good alternative for our purposes.  I'm open to any and all
suggestions.


Can you post at least some table schemas, indexes, queries, and
explain output?  I think of database optimization as a serious case of
devil in the details, and generalities (like -- make sure you index,
make sure your indexes help using explain, avoid row-at-a-time
thinking) won't get you far. So if we had something concrete to work
with, well, we would have something concrete to work with.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: Wikipedia on Postgres (was Re: [GENERAL] postgresql vs mysql)

2007-03-05 Thread Robert Treat
On Friday 23 February 2007 16:43, Chad Wagner wrote:
 On 2/23/07, Bill Moran [EMAIL PROTECTED] wrote:
   In any case if anyone is interested I was able to reproduce the changes
 
  that
 
   wikipgedia made and applied those changes (as well as others) all the
 
  way up
 
   to the 1.6.10 codebase.  The only reason I mention this is because
   1.6is the only choice for PHP4 users.  If anyone is interested I can
   provide
 
  the
 
   codebase, the schema still has to be created manually as was the case
 
  with
 
   wikipgedia.
 
  I would be interested.  I'm probably expected to maintain this thing ...

 You can download it from:

 http://www.postgresqlforums.com/downloads/pgmediawiki-1.6.10.tar.gz

 Again, like wikipgedia you have to create a schema (manually) named
 mediawiki and like wikipgedia (because the port more or less used some of
 the same mods they made) MySQL support is probably broken.

While no one in their right mind should be using wikipgedia, I'm sympathetic 
to those who might still be stuck on it for some reason, so if you guys can 
produce a patch against the wikipgedia cvs, I'd be happy to apply it.

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Database slowness -- my design, hardware, or both?

2007-03-05 Thread Webb Sprague


 Heh.  Sure thing.  I wasn't sure how much detail to give when initially
posting.


Looks like enough to get the real experts on the list started :)

I will try to look again tommorrow, but I bet other folks have better
intuition than me.

How much concurrency is there on your database?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg temp tables

2007-03-05 Thread Anton Melser

On 06/03/07, Robert Treat [EMAIL PROTECTED] wrote:

On Saturday 03 March 2007 10:33, Anton Melser wrote:
 Hi,
 I have been going around telling everyone that there is no point using
 physical tables in postgres for temporary storage within a procedure.
 Why bother bothering the system with something which is only used in
 one procedure I said to myself... I have just learnt that with MS Sql
 Server, this is not the case, and that there are locks on some system
 table and temp tables eat up memory and lots of other unfortunate
 things. Can someone give me a 101 on temp table considerations? Or
 rather give me the good link?

The main issue against using temp tables involves bloat of some of the system
catalogs, but it's no worse than doing create/drop cycles with standard
tables, and better because they don't suffer as much i/o load.


Thanks for your reply. I am managing a db that has some export scripts
that don't do a drop/create, but rather a "delete from" at the start of
the proc (6 or 7 tables are used for this, and only this). Now, given that
there is no vacuuming at all going on, this is clearly suboptimal - but
in the general case, is this better/worse than using temporary tables?
Thanks again,
Anton

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg temp tables

2007-03-05 Thread Tom Lane
Anton Melser [EMAIL PROTECTED] writes:
 Thanks for your reply. I am managing a db that has some export scripts
 that don't do a drop/create, but rather a delete from at the start of
 the proc (6 or 7 tables used for this, and only this). Now given that
 there is no vacuuming at all going on - this is clearly suboptimal but
 in the general case is this better/worse than using temporary tables?

Delete all rows, you mean?  Have you considered TRUNCATE?
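
For example (hypothetical table name):

-- unlike DELETE FROM, TRUNCATE reclaims the space immediately and leaves
-- no dead rows behind for VACUUM (it does take an exclusive lock, though)
TRUNCATE TABLE export_stage;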

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster