[GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread denis

We have a postgresql server configured with max_connections=4096.
We have such a high number of max_connections because there are 8 web 
servers connected to the database and all use persistent connections.
Each web server can have 256 max clients and 2 connection strings, so 
the max connections to the db is 256 * 8 * 2=4096.


To start PostgreSQL, I had to do some extra kernel configuration:

kernel.shmmax = 1165063808
kernel.sem=512 64000 100 512

and the parameters I changed, other than max_connections, in 
postgresql.conf are:


- shared_buffers = 131072
- checkpoint_segments = 32

We are doing pre-production tests and we encountered the following problems:
- The database server load is generally low, except when postgres 
recycles a transaction log file.
- This causes the apache frontends to slow down and to perform the heaviest 
operation (that is, five inserts in five different tables and a delete) 
all at the same time.
- After some time postgresql starts giving the message WARNING:  
there is already a transaction in progress. It seems like the apache 
frontend didn't close the previous connection correctly, leaving a 
transaction open, but I'm not sure this is the only problem.


Does anyone have suggestions or tips for the postgres configuration and 
the problem we're encountering?


The PostgreSQL version is 7.4.8, on a dual Xeon with 4 GB of RAM.
Apache frontends are Apache 1.3.33 with PHP 4.3.11.

Thank you in advance,

Denis Gasparin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] GUID for postgreSQL

2005-07-27 Thread Riaan van der Westhuizen
Hi All, 

We are a small development house in South Africa, which is in the process of
porting our apps from 
MS SQL to PostgreSQL.  We use the newid() [globally unique identifier
(GUID)] function in SQL a lot, and need the same for pg. 

Our development platform is .NET using C#.  We also plan to start using Mono
C# in the future.
We will be deploying PostgreSQL on Windows (2003 Server) and Linux (Ubuntu)
platforms. 

We have searched the web and found C code that can do this, but we do not have
C programmers.

Is there anybody who can help us compile this? We need it on
both OSes.

Thanks

Regards,

Riaan van der Westhuizen
CEO

Huizensoft (Pty) Ltd
Tel: +27 44 871 5534
Fax: +27 44 871 5098
 
This e-mail and any attachments thereto is confidential and 
is intended solely for the use of the addressee's. 
If you are not the intended recipient, be advised that any use, 
dissemination, forwarding, printing, or copying of this e-mail is strictly
prohibited.

Huizensoft (Pty) Ltd accepts no liability for any views or opinions
expressed in 
this e-mail or for any loss or damages that may be suffered by any person 
whomsoever, arising from, or in connection with, or caused by, the use of
this e-mail.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Trigger disactivation and SELECT WAITING

2005-07-27 Thread Philippe Lang
Thanks Tom, thanks Janning,

I have found triggers very convenient for doing different tasks in the database, 
and these tasks go far beyond what we can do in rules, Janning.

When a line is inserted in an order, the insert trigger automatically 
inserts data in a subtable of the order line, for example. In this subtable, 
there is information regarding the planning of the order. People can use 
the GUI to populate the order, but things won't break if the user opens the 
database table directly, which can happen sometimes. Without the trigger, an 
insert function would have to be invoked manually each time an order line is 
added, and this is not that effective from a user-experience point of view, I 
think. Or it would require a lot of client-side coding.
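
A minimal sketch of the kind of trigger described above, with all table, 
column, and function names invented for illustration (7.4-era plpgsql, hence 
the quoted function body):

```sql
-- insert a planning row in the subtable whenever an order line is inserted
CREATE FUNCTION order_line_planning_ins() RETURNS trigger AS '
BEGIN
    INSERT INTO order_line_planning (order_line_id) VALUES (NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trg_order_line_ins AFTER INSERT ON order_line
    FOR EACH ROW EXECUTE PROCEDURE order_line_planning_ins();
```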

Now the use of a trigger has a drawback: when you want to duplicate an order, 
for example. During the duplication function, I would like to disable the 
trigger in order to make a copy of the order, the order lines, and the order 
line subtable data. This is much easier than keeping the trigger and having 
to delete the default data it inserts in the new order.

I'm not sure how I can improve the trigger in this case and make it smarter, 
so that I don't have to disable it during duplication...

I hope I was clear...

Philippe



-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 26 July 2005 19:57
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trigger disactivation and SELECT WAITING 

Philippe Lang [EMAIL PROTECTED] writes:
 I meant: in 7.4.X databases, is there a way of disabling a trigger without 
 deleting it? I guess the answer is no.

Nothing officially supported, anyway.  There's a pg_trigger.tgenabled column 
but I'm not sure which operations pay attention to it.

 That's what my plpgsql insert function does, and because of this, if a view 
 is running at the same moment on the same tables (some views can take up to 2 
 hours to be calculated), the insert function gets stuck in a SELECT WAITING 
 state. So insertions are impossible in the database when views are being 
 calculated.

I guess I question a database design in which you routinely have to drop 
triggers in order to get your work done.  Why have the trigger at all if you do 
so many changes to the table with it deleted?  Why not improve the trigger to 
be smart enough to not interfere with what you need the insertion function to 
do?

regards, tom lane
 

-Original Message-
From: Janning Vygen [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 26 July 2005 17:39
To: pgsql-general@postgresql.org
Cc: Philippe Lang
Subject: Re: [GENERAL] Trigger disactivation and SELECT WAITING

On Tuesday, 26 July 2005 16:07, Philippe Lang wrote:
 Hi,

 I meant: in 7.4.X databases, is there a way of disabling a trigger 
 without deleting it? I guess the answer is no.

 That's what my plpgsql insert function does, and because of this, if a 
 view is running at the same moment on the same tables (some views can 
 take up to
 2 hours to be calculated), the insert function gets stuck in a SELECT 
 WAITING state. So insertions are impossible in the database when views 
 are being calculated.

I guess you should rethink your database design. Disabling triggers is 
convenient if you populate a database or do bulk inserts, but you 
shouldn't disable them in a production database. 

In my experience rules are much more powerful and faster than triggers, but on 
the other side much more difficult. Triggers are procedural: they fire on 
every inserted row. A rule is relational instead. If you use a rule you have 
only one more statement per insert, even if you insert lots of data. On the other 
hand, rules are not invoked by COPY statements. And some things can't be done 
with rules. 
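
By way of illustration, the per-statement behaviour described above could be 
sketched like this (table and column names are invented); the rule rewrites 
each INSERT statement once, rather than firing per row like a trigger:

```sql
CREATE TABLE orders_log (order_id integer, logged_at timestamptz);

-- every INSERT into orders also inserts into orders_log, as one extra statement
CREATE RULE orders_ins_log AS ON INSERT TO orders
    DO INSERT INTO orders_log VALUES (NEW.id, now());
```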

The waiting state is OK, because other transactions simply cannot know whether 
you will commit your changes to the trigger or not.

And I don't know what you mean by "view is running for 2 hours"; I guess you 
have some functionality to build so-called materialized views, right? 

If you give me some more information about what you are really doing, I can 
help you. As your mail is .ch you might prefer the German language and can 
contact me via personal mail.

kind regards,
janning


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Trigger disactivation and SELECT WAITING

2005-07-27 Thread Janning Vygen
On Wednesday, 27 July 2005 09:47, Philippe Lang wrote:
 Thanks Tom, thanks Janning,

 I found triggers very convenient to do different tasks in the database, and
 these tasks go far beyond what we can do in rules, Janning.

Right. There are some things that can't be done with rules. 

 When a line is being inserted in an order, the insert trigger automatically
 inserts data in a subtable of the order line, for example. In this
 subtable, there are informations regarding the planning of the order.
 People can use the GUI to populate the order, but things won't break if the
 user opens the database table directly, which can happen sometimes. Without
 the trigger, an insert function click would have to be used each time an
 order line is being added, and this is not that effective from a
 user-experience point of view, I think. Or would require a lot a
 client-coding.

As far as I understand your example, it can be done with rules, too.

 Now the use of a trigger has a drawback: when you want to duplicate an
 order, for example. During the duplication function, I would like to
 disable the trigger, in order to make a copy of the order, order lines, and
 order lines subtable data. This is much easier than keeping the trigger,
 and having to delete default data it inserts in the new order.

Just a thought: maybe you can add a column copyof_id to your table and 
mark it when you copy orders. The trigger can check NEW.copyof_id and skip its 
work if it is marked.
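
The suggestion above might look something like this (all names are 
hypothetical); the trigger simply returns early when the row is marked as a 
copy:

```sql
-- skip the automatic subtable insert for rows created by the copy function
CREATE OR REPLACE FUNCTION order_line_planning_ins() RETURNS trigger AS '
BEGIN
    IF NEW.copyof_id IS NOT NULL THEN
        RETURN NEW;  -- row is a copy: the subtable data gets copied separately
    END IF;
    INSERT INTO order_line_planning (order_line_id) VALUES (NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;
```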

With rules there is a nice advantage when copying: rules are not invoked by 
the COPY command (though sometimes that is a disadvantage, if you need the rule 
invocation).

kind regards
janning

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

We have a postgresql server configured with max_connections=4096.
We have such a high number of max_connections because there are 8 web 
servers connected to the database and all use persistent connections.
Each web server can have 256 max clients and 2 connection strings, so 
the max connections to the db is 256 * 8 * 2=4096.


The one-word answer is pgpool http://pgpool.projects.postgresql.org/
You don't really want them all using persistent connections directly to 
the DB.



To start PostgreSQL, I had to do some extra kernel configuration:

kernel.shmmax = 1165063808
kernel.sem=512 64000 100 512

and the parameters I changed, other than max_connections, in 
postgresql.conf are:


- shared_buffers = 131072


That's a *very* big shared_buffers value. Try reducing it by a factor of 
10 and reading the performance tuning article here:

  http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php


- checkpoint_segments = 32

We are doing pre-production tests and we encountered the following 
problems:
- The database server load is generally low, except when postgres 
recycles a transaction log file.


Your disk I/O is probably saturated. However, fix your configuration 
settings before worrying about hardware.


- This causes the apache frontends to slow down and to perform the heaviest 
operation (that is, five inserts in five different tables and a delete) 
all at the same time.
- After some time postgresql starts giving the message WARNING:  
there is already a transaction in progress. It seems like the apache 
frontend didn't close the previous connection correctly, leaving a 
transaction open, but I'm not sure this is the only problem.


Sounds like a BEGIN being re-issued, all right. Solution: fix your 
application(s) and don't use persistent connections (or if you do, make 
sure you roll back any pre-existing transactions and issue any relevant 
SET commands).
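
A sketch of the defensive reset a frontend could issue each time it reuses a 
persistent connection, per the suggestion above (not a complete recipe; 
ROLLBACK outside a transaction just emits a harmless warning):

```sql
ROLLBACK;   -- clear any transaction the previous request left open
RESET ALL;  -- undo session-level SETs made by the previous request
```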


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread denis

Richard Huxton wrote:


[EMAIL PROTECTED] wrote:


We have a postgresql server configured with max_connections=4096.
We have such a high number of max_connections because there are 8 web 
servers connected to the database and all use persistent connections.
Each web server can have 256 max clients and 2 connection strings, so 
the max connections to the db is 256 * 8 * 2=4096.



The one-word answer is pgpool http://pgpool.projects.postgresql.org/
You don't really want them all using persistent connections directly 
to the DB.



Already checked.
We set up a machine with only pg_pool installed as a frontend to the real db.
If I disable persistent connections and use pg_pool with 4096 
preforked clients, the load on the pg_pool server goes very high and it 
also takes down the real database server. Maybe I should try installing 
pg_pool directly on the database server to avoid network traffic?



To start PostgreSQL, I had to do some extra kernel configuration:

kernel.shmmax = 1165063808
kernel.sem=512 64000 100 512

and the parameters I changed, other than max_connections, in 
postgresql.conf are:


- shared_buffers = 131072



That's a *very* big shared_buffers value. Try reducing it by a factor 
of 10 and reading the performance tuning article here:

  http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php



Already read. I set it to 25% of total memory, as specified by Bruce 
Momjian's performance howto, but now I'm lowering it to 65536.




Sounds like a BEGIN being re-issued alright. Solution - fix your 
application(s) and don't use persistent connections (or if you do, 
make sure you rollback any pre-existing transactions and issue any 
relevant SET commands).


If that were the problem, I should always get that error, but I get it 
only after two or three hours of testing.


Thank you for your help,
Denis

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Tino Wildenhain
On Wednesday, 27.07.2005, at 09:48 +0200, Riaan van der
Westhuizen wrote:
 Hi All, 
 
 We are a small developing house in South Africa, which is in the process of
 porting our Apps from 
 MS SQL to PostgreSQL.  We use the newid() [globally unique identifier
 (GUID)] function in SQL a lot, and need the same for pg. 
 
 Our development platform is .NET using c#.  We also plan to start using Mono
 C# in the future.
 We will be deploying PostgreSQL on Windows (2003 Server) and Linux (Ubuntu)
 platforms. 
 
 We have searched the web and found C code that can do this, but we do not
 have C programmers.
 
 Is there anybody who can help us compile this? We need it on
 both OSes.

I'd create a sequence:

CREATE SEQUENCE global_unique_id_seq;

and a function:

CREATE OR REPLACE FUNCTION newid()
  RETURNS text AS
$BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
  LANGUAGE 'sql' VOLATILE;


Now every call to newid() returns a guaranteed unique id for,
say, the next 18446744073709551616 calls.
Of course you can obfuscate the ID even more using
md5, including the server name and so on, but this will not improve
security in any way (unless you mix data with a second database).
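
Usage is then a drop-in replacement for the MS SQL call. An md5-obfuscated 
variant along the lines Tino mentions might look like this (the function and 
server names here are made up, and as he says it adds no real security):

```sql
SELECT newid();  -- returns the next sequence value as text

CREATE OR REPLACE FUNCTION newid_md5()
  RETURNS text AS
$BODY$ SELECT md5('myserver' || nextval('global_unique_id_seq')::text); $BODY$
  LANGUAGE 'sql' VOLATILE;
```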


 This e-mail and any attachments thereto is confidential and 
 is intended solely for the use of the addressee's. 
 If you are not the intended recipient, be advised that any use, 
 dissemination, forwarding, printing, or copying of this e-mail is strictly
 prohibited.
 
 Huizensoft (Pty) Ltd accepts no liability for any views or opinions
 expressed in 
 this e-mail or for any loss or damages that may be suffered by any person 
 whomsoever, arising from, or in connection with, or caused by, the use of
 this e-mail.
 
I'd skip this disclaimer, as it is not legally relevant and makes the
company look a bit moronic (sorry), especially on mailing lists.

Tino


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Query planner refuses to use index

2005-07-27 Thread Kilian Hagemann
On Monday 25 July 2005 15:43, Michael Fuhr pondered:
 Whatever the results of your experiments, could you post the settings
 you tried and the corresponding EXPLAIN ANALYZE outputs?

I did lots of tests now that you pointed me to a useful guide, also taking 
what's in the documentation into account. In the attached file I have 
documented my results.

There are three sections to the file, each separated by '' markers. The 
first section deals in detail with the EXPLAIN ANALYZE info relating to the 
troublesome queries. The second is probably of least interest, just showing 
that I could implement my problem differently to improve performance.

But the last section is the most important, where I varied 
effective_cache_size, random_page_cost, shared_buffers and cpu_tuple_costs, 
each on its own with the other ones assuming default values(unless 
indicated).

To summarise: increasing effective_cache_size and decreasing random_page_cost 
both yield lower index scan cost estimates while not changing the seqscan 
ones. As expected, increasing shared_buffers makes no difference whatsoever 
in the query cost estimates or the actual query times. A higher 
cpu_tuple_cost penalises the seqscans significantly while only slightly 
increasing the index scan estimates.

Also note that these all relate to the query planner only; they do NOT 
change the actual query time, which explains why I did not include EXPLAIN 
ANALYZE outputs, only plain EXPLAIN ones.

In order to make PostgreSQL choose the index scans when I need them (other 
than by setting enable_seqscan to off), I ended up choosing
effective_cache_size 4
random_page_cost 2.5
cpu_tuple_cost 0.08
as only the combination yielded the desired results. Hardly optimal, but the 
real problem seems to lie with the correlation of the indexed columns (see 
other post in this thread). If I encounter trouble with these somewhere down 
the line, I'll post again.
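
For reference, planner knobs like these can be tried per-session before 
committing them to postgresql.conf; a sketch using the two values quoted 
above (effective_cache_size is left out, since its value is truncated in the 
list above):

```sql
SET random_page_cost = 2.5;
SET cpu_tuple_cost = 0.08;
EXPLAIN SELECT * FROM speed WHERE set_id = 25::smallint;
```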

Hope this helps someone out there.

-- 
Kilian Hagemann

Climate Systems Analysis Group
University of Cape Town
Republic of South Africa
Tel(w): ++27 21 650 2748
It all began with the most basic of queries on a clean database (first batch of 
5min data only, vacuum full analyze'd), namely

station_data=# EXPLAIN ANALYZE SELECT * FROM speed WHERE set_id=25::smallint;
                                  QUERY PLAN
---------------------------------------------------------------------------
 Seq Scan on speed  (cost=0.00..242637.38 rows=1073843 width=8) (actual 
time=98080.848..104617.800 rows=1094400 loops=1)
   Filter: (set_id = 25::smallint)
 Total runtime: 109957.981 ms
(3 rows)

which chooses a seqscan by default. Disabling seqscan manually causes an index 
scan strategy which takes only ~12 sec:

station_data=# EXPLAIN ANALYZE SELECT * FROM speed WHERE set_id=25::smallint;
                                  QUERY PLAN
---------------------------------------------------------------------------
 Index Scan using speed_pkey on speed  (cost=0.00..3194177.02 rows=1073843 
width=8) (actual time=90.544..6881.291 rows=1094400 loops=1)
   Index Cond: (set_id = 25::smallint)
 Total runtime: 12243.179 ms
(3 rows)

Making the query a bit more complex yields the same picture (the default 
configuration is 9x slower):

seqscans disabled:
station_data=# EXPLAIN ANALYZE SELECT * FROM speed WHERE 
set_id=25::smallint AND rec_time = '1999/01/01';
QUERY PLAN
---------------------------------------------------------------------------
Index Scan using speed_pkey on speed  (cost=0.00..1932444.35 
rows=649115 width=8) (actual time=0.169..6520.960 rows=652345 loops=1)
Index Cond: ((set_id = 25::smallint) AND (rec_time = '1999-01-01 
00:00:00+02'::abstime))
Total runtime: 11664.710 ms
(3 rows)
default (seqscans enabled):
station_data=# EXPLAIN ANALYZE SELECT * FROM speed WHERE 
set_id=25::smallint AND rec_time = '1999/01/01';
QUERY PLAN
---------------------------------------------------------------------------
Seq Scan on speed  (cost=0.00..277537.25 rows=649115 width=8) (actual 
time=94546.374..98789.401 rows=652345 loops=1)
Filter: ((set_id = 25::smallint) AND (rec_time = '1999-01-01 
00:00:00+02'::abstime))
Total runtime: 101833.815 ms
(3 rows)

Only when adding more conditions does the index scan seem attractive for the 
planner, and it chooses the index no matter what:

station_data=# EXPLAIN ANALYZE SELECT * FROM speed WHERE 
set_id=25::smallint AND rec_time BETWEEN '1999/01/01'AND '2000/01/01';

Re: [GENERAL] Query planner refuses to use index

2005-07-27 Thread Kilian Hagemann
On Friday 22 July 2005 16:17, Tom Lane pondered:
 Pre-8.0 tends to underestimate the correlation of a multicolumn index.
 (8.0 may too, but not as much.)

I actually upgraded to 8.0.3 now, and two things have changed. Firstly, I no 
longer need to do the annoying casts, as the query planner now recognises 
which literals are compatible with which indexes. Secondly, and you're right 
here, 8.0 has decreased the gap between the index and sequential scan cost 
estimates significantly, but not nearly enough to detect that the index 
scan is indeed superior.

  Also, note that set_id is strictly increasing (hence correlation of 1)
  and rec_time is strictly increasing within records with same set_id.

 So the reason the indexscan is so good is that the ordering correlation
 is perfect.  This isn't the planner's default assumption, and
 unfortunately we haven't got statistics available that would allow
 correlation of a multicolumn index to be estimated well.

Hmm, what's wrong with using the 'correlation' column of pg_stats? It told us 
straight away that the correlation on set_id was perfect. Even when leaving 
out the condition on the second index column (rec_time) the query planner 
thinks a sequential scan is more appropriate (please refer to the text file 
in my other most recent post for more details).

May I file a bug report for this? I really think that this points to a 
deficiency in the query planner.

-- 
Kilian Hagemann

Climate Systems Analysis Group
University of Cape Town
Republic of South Africa
Tel(w): ++27 21 650 2748

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-27 10:22:48 +0200:
 Richard Huxton wrote:
 Sounds like a BEGIN being re-issued alright. Solution - fix your 
 application(s) and don't use persistent connections (or if you do, 
 make sure you rollback any pre-existing transactions and issue any 
 relevant SET commands).

 If that was the problem, I should obtain always that error but I obtain 
 that error only after two/three hours of testing.

Not necessarily.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] back-end triggers front-end to update

2005-07-27 Thread Karsten Hilbert
On Tue, Jul 26, 2005 at 06:25:23PM -0300, Adam O'Toole wrote:

 I am searching for a way to have my postgresql 7.4.7 backend be triggered to
 let the front end know there has been a change to the database. If more than
 one person is connected to the database and person (x) makes a change, I want
 the other clients to be aware of that, and refresh their screens so they are
 not looking at out-of-date data.
 Any insight?  Thank you so much, have a great day.

GNUmed has a complete Python implementation of what you
describe. We are using triggers and listen/notify as others
suggested. Look at client/pycommon/gmBackendListener.py for
the middleware, then server/sql/gmNotifications.sql for the
schema support, and server/bootstrap/gmNotificationSchemaGenerator.py
for a way to generate the needed triggers at the time of
bootstrapping the database.

 http://salaam.homeunix.com/~ncq/gnumed/snapshot/gnumed-latest-snapshot.tgz
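
For a rough idea of the mechanism, a trigger-plus-NOTIFY setup could look 
like the sketch below (the table and channel names are invented here; GNUmed 
generates its triggers automatically as described above). Clients run 
LISTEN patients_changed; and then poll for notifications:

```sql
-- announce every change on the table to all listeners
CREATE FUNCTION announce_change() RETURNS trigger AS '
BEGIN
    NOTIFY patients_changed;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trg_patients_notify AFTER INSERT OR UPDATE OR DELETE
    ON patients FOR EACH ROW EXECUTE PROCEDURE announce_change();
```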

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] dropping non-existent tables

2005-07-27 Thread Walsh, Richard (Richard)
Thanks for this, Mark.
This seems like it will work well for TABLES and VIEWS. However, I also
have other categories such as USER, GROUP and TYPE. For these I can't
seem to find out where they are in the information_schema in postgres. 

There is no CREATE GROUP statement in the SQL standard, so this is
probably why it is not in the information_schema. 
The CREATE USER and CREATE TYPE statements are PostgreSQL extensions, so
these may not be in the information_schema for this reason.

Has anyone had to deal with this before?
Thanks,
Richie.


-Original Message-
From: mark reid [mailto:[EMAIL PROTECTED] 
Sent: 26 July 2005 18:23
To: Walsh, Richard (Richard)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] dropping non-existent tables

Hello,

You can use the information_schema.* tables, which are part of the SQL
standard (and thus not proprietary).

-Mark.

Walsh, Richard (Richard) wrote:

 Hi,
 I have a problem in that I need to drop non-existent tables in a DDL 
 script. This is in order that the script can re-build a database 
 schema if the tables already exist. However, in Postgres this is 
 proving to be a problem because if the table does not exist then the 
 DDL execution will stop once it gets an error. I know that I can 
 create a custom function that will check the relevant postGres table 
 to see if the table already exists, but I don't want to do this as I 
 would like to keep the DDL as non-proprietary as possible. Anyone any 
 ideas on a work-around for this problem that does not involve creating

 a custom function.
 thanks,
 Richie.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] dropping non-existent tables

2005-07-27 Thread Walsh, Richard (Richard)
Hi Michael,
I am executing these statements inside an SQL DDL script, so I think they
all take place inside a single transaction. I am not using psql. I will
try what you suggest and see if executing the drop statements in a
separate transaction will work.
Richie.

-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: 26 July 2005 23:09
To: Walsh, Richard (Richard)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] dropping non-existent tables

On Tue, Jul 26, 2005 at 06:04:08PM +0200, Walsh, Richard (Richard)
wrote:
 I have a problem in that I need to drop non-existent tables in a DDL 
 script. This is in order that the script can re-build a database 
 schema if the tables already exist. However, in Postgres this is 
 proving to be a problem because if the table does not exist then the 
 DDL execution will stop once it gets an error.

Are you executing the statements inside a transaction?  If so then you
could execute the DROP statements in separate transactions (or execute
them outside of any transaction in autocommit mode).  Also, if you're
using psql then make sure you don't have ON_ERROR_STOP set.

In PostgreSQL 8.1, psql will have an ON_ERROR_ROLLBACK setting that uses
savepoints to automatically roll back failed statements while allowing
the rest of the transaction to continue.
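
Sketched out, the separate-transactions approach looks like this (table and 
view names are hypothetical). On the 7.x/8.0 releases discussed here there is 
no DROP TABLE IF EXISTS (that arrived in a later release), so each DROP gets 
its own transaction and a failure aborts only that one:

```sql
BEGIN;
DROP TABLE my_table;   -- fails harmlessly if the table does not exist
COMMIT;

BEGIN;
DROP VIEW my_view;
COMMIT;

-- then one transaction for the rebuild proper
BEGIN;
CREATE TABLE my_table (id integer PRIMARY KEY);
COMMIT;
```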

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] duplicate messages?

2005-07-27 Thread Robert Treat
Anyone else getting duplicate messages?  I seem to be getting them 
sporadically on different messages on pgsql-general.  It seems to have 
started sometime Monday morning (eastern US time).

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Daily digest?

2005-07-27 Thread Andrew Stewart

How does one receive all mail to this list in a daily digest?

-Andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread Jeff Trout


On Jul 27, 2005, at 4:22 AM, [EMAIL PROTECTED] wrote:


Already checked.
We set a machine with only pg_pool installed as a fronted to the  
real db.
If I disable persistent connections and I use pg_pool with 4096  
preforked clients,


No no no.
You don't want 4096 preforked clients.

What you want to do is leave it at the default 32 preforks and run a  
copy of pgpool on each frontend.  Then point your app to connect to  
localhost (or the unix socket).  Be sure you change things to use a  
normal connect, not a persistent connect.  This should be fine for  
you.   I used to run a site doing hundreds of connects & queries /  
second across 4 frontends, each of which ran a pg_pool connected to a  
db box.

If you find 32 clients isn't enough, then bump it up.  The idea is to  
keep as few connections as possible.


--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] duplicate messages?

2005-07-27 Thread Audrey Bergeron-Morin
 Anyone else getting duplicate messages?

Yep, I have been getting them sporadically ever since I
joined (about 5 weeks ago). It's something that
happens once in a while on many mailing lists; I
wouldn't worry about it unless it becomes a regular occurrence.

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Cursor Issue??

2005-07-27 Thread DracKewl
BEGIN WORK;
DECLARE cursor1 CURSOR FOR SELECT * FROM PARTS;
FETCH FIRST FROM cursor1;
CLOSE cursor1;
COMMIT WORK;


Query result with 1 rows discarded.
Query returned successfully with no result in 31 ms.

In the data output view nothing is returned?


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL, Lazarus and zeos ?

2005-07-27 Thread Ben Trewern
You need the CVS version of zeoslib to work with Lazarus.  It's also the 
6.5.something version.  The old 5.x only worked with Delphi.  See the 
Lazarus forums for more information.

Ben


Zlatko Matić [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
Hi.

Someone mentioned Lazarus as a good IDE for working with PostgreSQL, so 
that's the reason I started to learn Lazarus...

Now, I was told that I need to install the ZEOS library in order to work with 
PostgreSQL.
I downloaded the following .zip files: zeosdbo-5.0.7-beta, 
zeosctrl-1.0.0-beta. I have installed Lazarus 0.9.8 (binaries) on Windows 
XP.

It seems that these zeos files are intended for Delphi, not for 
Lazarus?
What am I supposed to do now? How do I install it?

Is it really necessary to install Zeos in order to work with PostgreSQL?

Sorry for the stupid questions, but this is totally new stuff for me...

Thanks in advance,

Zlatko 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] postgres 7.2.4 - errors after vacuuming/reindexing in single user mode

2005-07-27 Thread Warren White

We have a 7.2.4 postgres database that was bloated. We went into postgres 
single-user mode to vacuum and reindex the database (postgres -0 P). We 
reindexed a number of indices explicitly.

We now cannot use the database, and are getting errors such as:

ERROR: Relation 9262944 does not exist

What can we do to recover? What happened, and what did we do wrong?



Thanks in advance,

Warren

Warren White
CTAC Engineer III
email: [EMAIL PROTECTED]
phone: 781.895.3611
cell: 781.738.3416
msn: [EMAIL PROTECTED]

eDial - a division of ALCATEL
266 2nd Avenue
Waltham, MA 02451


Re: [GENERAL] Wishlist?

2005-07-27 Thread Ezequiel Tolnay

Chris Browne wrote:

kleptog@svana.org (Martijn van Oosterhout) writes:


On Mon, Jul 25, 2005 at 11:35:14AM +1000, Ezequiel Tolnay wrote:

Functions are not the same as stored procedures, but since PG lacks 
stored procedures, there is a necessity to use functions instead.


Ok, maybe I'm missing something, but the only difference between a
procedure and a function is that a function returns a value and a
procedure doesn't. There's no difference in capabilities so I don't
really understand the problem here.


I'm in the same boat here.  I don't grasp what is the *vital*
difference that makes a stored function, which differs from a stored
procedure in that it returns a value, so greatly different.


The difference is quite simple, but perhaps not evident if you don't 
have much experience with other RDBMSs. A function is meant to return a 
result (or a set of results) of a predefined type during execution, 
whilst a stored procedure returns any number of results of arbitrary 
types (as well as status messages and affected-row counts per operation) 
during execution, just as you would expect when running a script 
asynchronously, fetching results as the script is processed. The 
convenience of a stored procedure is, in short, that you can store a 
script (procedure) and assign a name to it, thus hiding its internals 
conveniently from the interface layer. The interface, of course, in turn 
must be capable of handling the various results returned, if any at all.


For instance, when running a procedure that you know will take a few 
hours to process, you could return a status message every minute or so 
to know what's going on. Or return customer details plus transactions 
plus a summary in one go. Or launch it in the background without waiting 
for any results at all (which requires EXECUTE as opposed to SELECT). Or 
get feedback on the rows affected in the various steps of the procedure 
before finally receiving a rowset with the results.


Cheers,

Ezequiel Tolnay

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Backup and restore from 7.4.1 to latest, crossing platforms... issues?

2005-07-27 Thread Mark Mikulec
Hi there,

This may be a stupid question but I feel I should ask it anyway just to be sure, since I've had problems in the past.

I currently have a defunct Debian 3.0 system running PostgreSQL 7.4.1, on
which I am going to do an entire blob backup dump of all the databases
in hopes to restore them on my new, more secure system running the
latest OpenBSD, and PgSQL 8.0.

So my question is: Are there any issues that I should be aware of
restoring on a more recent version of PgSQL, and/or issues using a
completely different operating system altogether? I had some strange
pg language mishaps going from 7.2 to 7.4 on my last backup; although
frightening, I overcame them with some fiddling.

Thanks in advance,
 Mark

-- 
___
Roses are #FF
Violets are #FF,
All my base
Are belong to you.



Re: [GENERAL] Cursor Issue??

2005-07-27 Thread DracKewl
Here is a further test:

--
BEGIN WORK;
DECLARE cursor1 CURSOR FOR SELECT * FROM PARTS;
FETCH FIRST IN cursor1;
INSERT INTO partstemp VALUES (PARTS.QTY, PARTS.LENGTH, PARTS.WIDTH);
CLOSE cursor1;
COMMIT WORK;
--

What I get is the whole table put into partstemp.  I just wanted the
one row???
One more question: what is the difference between using the term IN vs.
FROM?
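
For what it's worth, a sketch of why this happens and one way to get just the single row (column names taken from the INSERT above): the INSERT statement never references the cursor at all, and with the implicit-FROM behaviour of 7.x/8.0 (the add_missing_from setting, on by default), naming PARTS.QTY drags the whole PARTS table into the statement.

```sql
-- The INSERT above is independent of cursor1; "PARTS.QTY" adds an implicit
-- "FROM parts", so every row gets copied. To copy a single row, fold the
-- fetch and the insert into one statement instead:
INSERT INTO partstemp (qty, length, width)
SELECT qty, length, width
FROM parts
LIMIT 1;   -- "first" row in table order; add ORDER BY for a defined order
```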


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Wishlist?

2005-07-27 Thread Ezequiel Tolnay

Ezequiel Tolnay wrote:

(...) A function is meant to return a
result (or a set of results) of a predefined type *during* execution, 
whilst a stored procedure (...)


I meant to say *after* instead of *during*. The capabilitie to return 
results during execution could only be suported by stored procedures.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Daily digest?

2005-07-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-27 08:56:24 -0400:
 How does one receive all mail to this list in a daily digest?

Have you read the mailing list usage notes on the web site?

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Select Stament Issue??

2005-07-27 Thread DracKewl
Trying out PostgreSQL for the first time and running into a minor
problem.
I created two tables one with the Add table wizard the other hard-core
script.

Script made table:
Select * from Example
--This works as expected
Select * from EXAMPLE
--This works as expected

Wizard made table:
Select * from Example
--ERROR:  relation "Example" does not exist
Select * from "Example"
--This works as expected

The wizard table has created a case sensitive restriction on me and
forced me to use quotes.  How do I turn this off?




Re: [GENERAL] Rules vs Triggers

2005-07-27 Thread Janning Vygen
Am Dienstag, 26. Juli 2005 23:53 schrieb Randall Perry:
 Read the Rules section of the manual and the section on Rules vs Triggers.

 From what I get triggers are necessary for column constraints. As far as
 speed, it seems there are some differences between how fast rules/triggers
 would do the same action, but that some complex analysis is involved to
 determine this. And I gathered rules are necessary to allow
 update/insert/delete actions on views.

 Can anyone give me some simple reasons why they choose rules over triggers
 in their real-world dbs?

Triggers are executed per row, so they are quite procedural. If you insert or 
update 500 rows they are fired 500 times.

Rules modify the SQL query tree. So rules are at some point nothing else than 
rewrites of your SQL statement. If you update 500 rows and you have an ON 
UPDATE rule, your query tree is modified once and gets executed for all 500 
rows.

Rules are much faster and much more relational than triggers, because they 
become pure SQL before they reach the database.

Imagine an ON DELETE trigger which records the deletion in an audit table like 
this:

create trigger tg_member before delete on member for each row EXECUTE 
PROCEDURE audit_member_deletion();

audit_member_deletion() does an INSERT into an audit table.

Now think of members organized in groups. If you delete a group, ALL its 
members are deleted because of cascading foreign key references.

Delete a group with 2 members, and the trigger is fired 2 times.

Now imagine a rule which does

create rule rl_member AS ON DELETE TO member 
DO
INSERT INTO member_deletion (membername) VALUES (OLD.membername)

This is executed once and is as fast as SQL can be.

Normally you don't see a difference between triggers and rules if you have 
update and insert statements which affect only a few rows, but when it comes 
to affecting many rows, you should use rules. Rules are, however, more 
difficult to understand.
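
For completeness, a minimal sketch of the trigger-side counterpart described above; the audit table definition and the function body are assumed, not from the original post:

```sql
CREATE TABLE member_deletion (membername text);

CREATE FUNCTION audit_member_deletion() RETURNS trigger AS '
BEGIN
    INSERT INTO member_deletion (membername) VALUES (OLD.membername);
    RETURN OLD;  -- returning OLD lets the DELETE proceed
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tg_member BEFORE DELETE ON member
    FOR EACH ROW EXECUTE PROCEDURE audit_member_deletion();
```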

kind regards,
janning










Re: [GENERAL] Daily digest?

2005-07-27 Thread Andrew Stewart

set pgsql-general digest


Roman Neuhauser wrote:

# [EMAIL PROTECTED] / 2005-07-27 08:56:24 -0400:
 How does one receive all mail to this list in a daily digest?

    Have you read the mailing list usage notes on the web site?


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] About Tools at the DB design phase.

2005-07-27 Thread Ying Lu

Greetings,

I am at the design phase of the DB design. That is, I'd like to design 
tables and relationships between them, but not the real implement of 
tables. Could somebody suggest some good and free tools to help/ease 
design the structures please?


Thanks a lot!
Emi





Re: [GENERAL] dropping non-existent tables

2005-07-27 Thread mark reid

Hello,

You can use the information_schema.* tables, which are part of the SQL 
standard (and thus not proprietary).


-Mark.
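
A sketch of the existence check via information_schema (the schema 'public' and table name 'mytable' are placeholders); the build script would issue DROP TABLE only when this returns a row:

```sql
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name = 'mytable';
```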

Walsh, Richard (Richard) wrote:


Hi,
I have a problem in that I need to drop non-existent tables in a DDL 
script. This is in order that the script can re-build a database 
schema if the tables already exist. However, in Postgres this is 
proving to be a problem because if the table does not exist then the 
DDL execution will stop once it gets an error. I know that I can 
create a custom function that will check the relevant postGres table 
to see if the table already exists, but I don't want to do this as I 
would like to keep the DDL as non-proprietary as possible. Has anyone any 
ideas on a work-around for this problem that does not involve creating 
a custom function?

thanks,
Richie.





Re: [GENERAL] Select Stament Issue??

2005-07-27 Thread DracKewl
I did a couple of tests and found that occasionally when using the
wizard it added quotes ("") to my names.  When this happens it forces the whole
table to be case sensitive.  Anyway, I think I'll just stick to manually
creating my tables.




Re: [GENERAL] duplicate messages?

2005-07-27 Thread Greg Sabino Mullane



 Anyone else getting duplicate messages?  I seem to be getting them
 sporadically on different messages on pgsql-general.  Seems to have started
 sometime Monday morning (estern us time)

One reason this happens is because people post to the list from an address
which is not subscribed. Their message gets added to the moderator's queue.
They then realize that they were not subscribed, and either subscribe
and send again, or send from another account, or send again so it gets
added to the moderator's queue multiple times. The moderator comes along and
approves the message(s) at some point. Thus, duplicate messages. In an ideal
world, the moderator or the original poster is constantly reading the list
and realizes the post has already made it. Of course, in an ideal world,
people would subscribe to the list before attempting to send to it. :)

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200507271007
https://www.biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8






Re: [GENERAL] Daily digest?

2005-07-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-27 10:03:15 -0400:
 Roman Neuhauser wrote:
 # [EMAIL PROTECTED] / 2005-07-27 08:56:24 -0400:
 How does one receive all mail to this list in a daily digest?
 
Have you read the mailing list usage notes on the web site?

 set pgsql-general digest

Yes, that's the command you should send (elsewhere).

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



Re: [GENERAL] Cursor Issue??

2005-07-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-26 14:06:34 -0700:
 BEGIN WORK;
 DECLARE cursor1 CURSOR FOR SELECT * FROM PARTS;
 FETCH FIRST FROM cursor1;
 CLOSE cursor1;
 COMMIT WORK;
 
 
 Query result with 1 rows discarded.
 Query returned successfully with no result in 31 ms.
 
 In the data output view nothing is returned?

Complain to your 'data output view' vendor.

test=# create table parts (id serial, t text);
CREATE TABLE
test=# insert into parts (t) values ('aaa');
INSERT 72423 1
test=# insert into parts (t) values ('bbb');
INSERT 72424 1
test=# insert into parts (t) values ('ccc');
INSERT 72425 1
test=# select * from parts;
 id |  t
----+-----
  1 | aaa
  2 | bbb
  3 | ccc
(3 rows)

test=# BEGIN WORK;
BEGIN
test=# DECLARE cursor1 CURSOR FOR SELECT * FROM PARTS;
DECLARE CURSOR
test=# FETCH FIRST FROM cursor1;
 id |  t
----+-----
  1 | aaa
(1 row)

test=# CLOSE cursor1;
CLOSE CURSOR
test=# COMMIT WORK;
COMMIT
test=#

As you can see, the fetched row is displayed just fine.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Edwin Hernán Barrios Núñez
Hi Riaan!


I think a better solution is to use the native PostgreSQL type
serial, which is an auto-incrementing number type. You can see it
in the postgres manual. It's very useful because you only need to
redefine the type of your id vars. For example, at the moment you
have

 CREATE TABLE test ( name varchar(50), id int );

 insert into test values ('prueba', newid());

On postgres you can replace those with

 CREATE TABLE test ( name varchar(50), id serial primary key);

 insert into test values ('prueba');

The serial type automatically creates a sequence associated with id, and
whenever you insert something using its default value, the serial id
auto-increments.

Thanks; I don't have good English, but I try to help!

Atte:
Edwin Barrios ! 
Gerente iBand Networks Ltda
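
Edwin's suggestion expands to roughly the following (a sketch; the sequence name follows PostgreSQL's default <table>_<column>_seq convention):

```sql
-- Approximately what "id serial primary key" is shorthand for:
CREATE SEQUENCE test_id_seq;
CREATE TABLE test (
    name varchar(50),
    id   integer PRIMARY KEY DEFAULT nextval('test_id_seq')
);

INSERT INTO test (name) VALUES ('prueba');
SELECT currval('test_id_seq');  -- the id just assigned in this session
```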




On 7/27/05, Riaan van der Westhuizen [EMAIL PROTECTED] wrote:
 Hi All,
 
 We are a small developing house in South Africa, which is in the process of
 porting our Apps from
 MS SQL to PostgreSQL.  We use the newid() [globally unique identifier
 (GUID)] function in SQL a lot, and need the same for pg.
 
 Our development platform is .NET using c#.  We also plan to start using Mono
 C# in the future.
 We will be deploying PostgreSQL on Windows (2003 Server) and Linux (Ubuntu)
 platforms.
 
 We have search the web and found c code that can do this, but we do not have
 c programmers.
 
 Are there anybody that can help us compiling these for us, we need it on
 both OS's?
 
 Thanks
 
 Regards,
 
 Riaan van der Westhuizen
 CEO
 
 Huizensoft (Pty) Ltd
 Tel: +27 44 871 5534
 Fax: +2744 871 5098
 
 This e-mail and any attachments thereto is confidential and
 is intended solely for the use of the addressee's.
 If you are not the intended recipient, be advised that any use,
 dissemination, forwarding, printing, or copying of this e-mail is strictly
 prohibited.
 
 Huizensoft (Pty) Ltd accepts no liability for any views or opinions
 expressed in
 this e-mail or for any loss or damages that may be suffered by any person
 whomsoever, arising from, or in connection with, or caused by, the use of
 this e-mail.
 
 
 
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings




Re: [GENERAL] About Tools at the DB design phase.

2005-07-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-27 10:05:43 -0400:
 Greetings,
 
 I am at the design phase of the DB design. That is, I'd like to design 
 tables and relationships between them, but not the real implement of 
 tables. Could somebody suggest some good and free tools to help/ease 
 design the structures please?

They're not exactly free, but a pen and a hardback notebook (A4)
have proven to be the best tools over the time.

YMMV.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



Re: [GENERAL] About Tools at the DB design phase.

2005-07-27 Thread Tony Caduto

Zeos Database Designer is OK.

http://www.zeoslib.net

Tony Caduto
http://www.amsoftwaredesign.com/lightning_admin.php
Home of PG Lightning Admin (PGLA) for Postgresql 8.x




Ying Lu wrote:


Greetings,

I am at the design phase of the DB design. That is, I'd like to design 
tables and relationships between them, but not the real implement of 
tables. Could somebody suggest some good and free tools to help/ease 
design the structures please?






Re: [GENERAL] Wishlist?

2005-07-27 Thread Martijn van Oosterhout
On Wed, Jul 27, 2005 at 12:45:12PM +1000, Ezequiel Tolnay wrote:
 Ezequiel Tolnay wrote:
 (...) A function is meant to return a
 result (or a set of results) of a predefined type *during* execution, 
 whilst a stored procedure (...)
 
 I meant to say *after* instead of *during*. The capabilitie to return 
 results during execution could only be suported by stored procedures.

I know this is being fiddly but in PostgreSQL a function doesn't have
to return all its values in one go. It can, in a loop, do calculations,
return a row, do more calculations, return a row and these rows can be
received by the client in real time (ie not wait for completion).
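
A sketch of the incremental-return mechanism Martijn describes, in a 7.3+-compatible form (function name and body are illustrative; whether rows reach the client before completion depends on how the function is called):

```sql
-- A set-returning function emits rows one at a time with RETURN NEXT
-- rather than building a single result value.
CREATE FUNCTION countdown(integer) RETURNS SETOF integer AS '
DECLARE
    i integer := $1;
BEGIN
    WHILE i > 0 LOOP
        RETURN NEXT i;  -- append one row to the result set
        i := i - 1;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;

SELECT * FROM countdown(3);  -- rows: 3, 2, 1
```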

But your description of stored procedures was useful. It's basically
more like a script you would feed to psql. You indicate that the
procedure can return multiple sets. How is this handled by the client?

If you're not returning data then it would be like a void function
using NOTICE to communicate with the client.

Still, thanks for the info...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread denis
I'm now testing with pg_pool installed on each apache frontend with 260 
pg_pool preforked clients in each machine.


The database seems to work better. At least when it goes to swap it 
doesn't stop working...


I also reduced the shared buffers and moved the pg_xlog folder to 
another disk on another raid container.


I'll let the test run all night, and tomorrow I'll let you know 
the results...


Thank you for your help,

Denis

Jeff Trout wrote:



On Jul 27, 2005, at 4:22 AM, [EMAIL PROTECTED] wrote:


Already checked.
We set a machine with only pg_pool installed as a fronted to the  
real db.
If I disable persistent connections and I use pg_pool with 4096  
preforked clients,



no no no.
you don't want 4096 preforked clients.

What you want to do is leave it at the default 32 preforks and run a  
copy of pgpool on each frontend.  Then point your app to connect to  
localhost (or the unix socket).  Be sure you change things to use a  
normal connect, not a persistent connect.  This should be fine for  
you.   I used to run a site doing hundreds of connects & queries /  
second across 4 frontends, each of which ran a pg_pool connected to a  
db box.


If you find 32 clients isn't enough, then bump it up.  The idea is to  
keep as few connections as possible.
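
For reference, a sketch of the relevant pgpool.conf knobs on each frontend (parameter names from pgpool's sample configuration; the values are illustrative, not a recommendation):

```
# pgpool.conf (sketch)
num_init_children = 32   # preforked pgpool processes per frontend
max_pool = 4             # cached backend connections per child
```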


--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/



---(end of broadcast)---
TIP 6: explain analyze is your friend






Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread Jeff Trout


On Jul 27, 2005, at 10:46 AM, [EMAIL PROTECTED] wrote:

I'm now testing with pg_pool installed on each apache frontend with  
260 pg_pool preforked clients in each machine.


Why did you pick 260?

You don't need a 1:1 ratio. That is the point of the pool. Those  
connections are shared.  Chances are extremely high that all your  
apache clients are not issuing queries at the exact same time, so your  
queries end up getting funnelled into those X connections.


I ran with 32 kids on pg_pool and 350 apache processes. never had a  
problem.


--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/





Re: [GENERAL] Backup and restore from 7.4.1 to latest, crossing platforms... issues?

2005-07-27 Thread Martijn van Oosterhout
Binary backups are not going to work across major releases so just
installing 8.0 with your old cluster won't work.

You'll need to do a pg_dump of your old setup and then restore on your
new cluster.

Hope this helps,

On Wed, Jul 27, 2005 at 12:27:41AM -0400, Mark Mikulec wrote:
 Hi there,
 
 This may be a stupid question but I feel I should ask it anyway just to be 
 sure, since I've had problems in the past.
 
 I currently have a defunct Debian 3.0 system running PostgreSQL 7.4.1, in 
 which I am going to do an entire blob backup dump of all the databases in 
 hopes to restore them on my new, more secure system running the latest 
 OpenBSD, and PgSQL 8.0.
 
 So my question is: Are there any issues that I should be aware of restoring 
 on a more recent version of PgSQL, and/or issues using a completely 
 different operating system altogether? I had some strange pg language 
 mishaps going from 7.2 to 7.4 on my last backup; although frightening, I overcame 
 them with some fiddling.
 
 Thanks in advance,
 Mark
 
 -- 
 ___
 Roses are #FF
 Violets are #FF,
 All my base
 Are belong to you.

-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Select Stament Issue??

2005-07-27 Thread Scott Marlowe
On Tue, 2005-07-26 at 13:54, DracKewl wrote:
 Trying out PostgreSQL for the first time and running into a minor
 problem.
 I created two tables one with the Add table wizard the other hard-core
 script.
 
 Script made table:
 Select * from Example
 --This works as expected
 Select * from EXAMPLE
 --This works as expected
 
 Wizard made table:
 Select * from Example
 --ERROR:  relation "Example" does not exist
 Select * from "Example"
 --This works as expected
 
 The wizard table has created a case sensitive restriction on me and
 forced me to use quotes.  How do I turn this off?

When you created the table you likely did this:

create table Example (...

While the wizard actually did this:

create table "Example" (...

So, postgresql folded case on your create table, and actually created a
table called example, while the wizard's quoting of the table name meant
the table was named "Example".  This should fix you up:

alter table "Example" rename to example;



Re: [GENERAL] transaction timeout

2005-07-27 Thread Dr NoName
 Sure.  Like this:
 
 Client A accesses table T, and hangs.
 Client B attempts to get an ACCESS EXCLUSIVE lock on
 table T in
 preparation for VACUUM FULL.
 Client C connects to the database and waits for
 client B to get and
 release his lock on table T.
 Client D connects to the database and waits for
 client B to get and
 release his lock on table T.
 Client E connects to the database and waits for
 client B to get and
 release his lock on table T.
 etc...

oh! my! gawd!
Finally a clear explanation that makes perfect sense.
Now why did it take so long?

So all I need to do is take out the FULL? Is regular
VACUUM sufficient? How often do we need FULL? (I know
it's a stupid question without providing some more
context, but how can I estimate it?)

I suppose the ultimate solution would be a wrapper
script that works as follows:

check if there are any waiting/idle in transaction
processes
if such processes exist, do a regular VACUUM and send
out a warning email
otherwise, do VACUUM FULL.
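
A sketch of the pre-flight check that wrapper could start with (requires stats_command_string = on so current_query is populated; the VACUUM / VACUUM FULL decision is made by the surrounding script):

```sql
-- List backends that are idle inside an open transaction:
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
-- rows returned -> run plain VACUUM and send the warning email
-- no rows       -> safe to attempt VACUUM FULL
```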

I like this solution a lot more than getting support
calls on weekends.

Out of curiosity, how is lock acquisition implemented
in postgresql? All the processes have to go through
some sort of queue, so that locks are granted in FIFO
order, as you described. Just trying to understand it
better.

thanks a lot,

Eugene

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



Re: [GENERAL] duplicate messages?

2005-07-27 Thread Robert Treat
On Wed, 2005-07-27 at 10:09, Greg Sabino Mullane wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 
  Anyone else getting duplicate messages?  I seem to be getting them
  sporadically on different messages on pgsql-general.  Seems to have started
  sometime Monday morning (estern us time)
 
 One reason this happens is because people post to the list from an address
 which is not subscribed. Their message gets addded the moderator's queue.
 They then realize that they were not subscribed, and either subscribe
 and send again, or send from another account, or send again so it gets
 added to the moderator's queue multiple times. The moderator comes along and
 approves the message(s) at some point. Thus, duplicate messages. In an ideal
 world, the moderator or the original poster is constantly reading the list
 and realized the post has already made it. Of course, in an ideal world,
 people would subscribe to the list before attempting to send to it. :)
 

Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
Huxton have unsubscribed and resubscribed lately... Funny thing is it
isn't every message, but maybe half of them. And it's not specific
users; sometimes one of Tom's emails will duplicate but sometimes not.
Furthermore, it only seems to be happening on pgsql-general emails and
not on any of the other lists... I also notice this is only happening on
one of my subscribed emails, not the other, which is a little weird...
According to the headers, though, this problem is happening further
upstream. 


Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL




Re: [GENERAL] PostgreSQL, Lazarus and zeos ?

2005-07-27 Thread Zlatko Matić

thanks.
- Original Message - 
From: Ben Trewern [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Tuesday, July 26, 2005 7:33 PM
Subject: Re: [GENERAL] PostgreSQL, Lazarus and zeos ?



You need the cvs version of zeoslib to work with Lazarus.  It's also the
6.5.something version.  The old 5.x only worked with Delphi.  See the
Lazarus forums for more information.

Ben



Zlatko Matić [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Hi.

Someone mentioned Lazarus as good IDE for working with PostgreSQL, so
that's the reason I started to learn Lazarus...

Now, I was told that I need to install ZEOS library in order to work with
PostgreSQL.
I downloaded the following .zip files: zeosdbo-5.0.7-beta,
zeosctrl-1.0.0-beta. I have installed Lazarus 0.9.8 (binaries) on Windows
XP.

It seems that these zeos files are intended for Delphi, not for
Lazarus?
What am I supposed to do now? How do I install it?

Is it really necessary to install Zeos in order to work with PostgreSQL?

Sorry for stupid questions, but this is totally new stuff for me...

Thanks in advance,

Zlatko









Re: [GENERAL] Budget battery-backed ramdisk (Gigabyte i-RAM)

2005-07-27 Thread Vivek Khera


On Jul 26, 2005, at 1:25 PM, Richard Huxton wrote:


Review
 http://www.anandtech.com/storage/showdoc.aspx?i=2480
Slashdot
 http://hardware.slashdot.org/article.pl?sid=05/07/26/1229211&tid=198

Might be useful for those of us working with budget systems. If
anyone does make a purchase, please post your investigations to
the list - I for one would be interested.


But don't put important data on it since it doesn't do ECC RAM

Vivek Khera, Ph.D.
+1-301-869-4449 x806






Re: [GENERAL] transaction timeout

2005-07-27 Thread Scott Marlowe
On Wed, 2005-07-27 at 10:31, Dr NoName wrote:
  Sure.  Like this:
  
  Client A accesses table T, and hangs.
  Client B attempts to get an ACCESS EXCLUSIVE lock on
  table T in
  preparation for VACUUM FULL.
  Client C connects to the database and waits for
  client B to get and
  release his lock on table T.
  Client D connects to the database and waits for
  client B to get and
  release his lock on table T.
  Client E connects to the database and waits for
  client B to get and
  release his lock on table T.
  etc...
 
 oh! my! gawd!
 Finally a clear explanation that makes perfect sense.
 Now why did it take so long?

Because your initial definition of the problem kinda led us all in the
wrong direction for 24 hours?  :)  Remember, it took like three times of
folks asking "what's happening that locks your database" before the
vacuum full issue came up.  From there, 24 more hours.  Actually not
bad.

And don't forget, the docs on vacuum pretty clearly state:

The second form is the VACUUM FULL command. This uses a more aggressive
algorithm for reclaiming the space consumed by expired row versions. Any
space that is freed by VACUUM FULL is immediately returned to the
operating system. Unfortunately, this variant of the VACUUM command
acquires an exclusive lock on each table while VACUUM FULL is processing
it. Therefore, frequently using VACUUM FULL can have an extremely
negative effect on the performance of concurrent database queries.

And then later on:

VACUUM FULL is recommended for cases where you know you have deleted
the majority of rows in a table, so that the steady-state size of the
table can be shrunk substantially with VACUUM FULL's more aggressive
approach. Use plain VACUUM, not VACUUM FULL, for routine vacuuming for
space recovery.

So, daily vacuum fulls are not recommended.  

 So all I need to do is take out the FULL? Is regular
 VACUUM sufficient? How often do we need FULL? (I know
 it's a stupid question without providing some more
 context, but how can I estimate it?)

Please read up on vacuuming in the docs, at:

http://www.postgresql.org/docs/8.0/static/maintenance.html#ROUTINE-VACUUMING

It's quite enlightening about this.  Basically, assuming your fsm
settings are high enough for your update/delete load, yes, plain vacuums
should be enough.

 
 I suppose the ultimate solution would be a wrapper
 script that works as follows:
 
 check if there are any waiting/idle in transaction
 processes
 if such processes exist, do a regular VACUUM and send
 out a warning email
 otherwise, do VACUUM FULL.

Nah, that's probably overkill.  I'd rather just run plain vacuum
verboses and check them by hand once a week or so to make sure I'm
reclaiming all the space.

 I like this solution a lot more than getting support
 calls on weekends.

Amen brother, amen...

 Out of curiousity, how is lock acquisition implemented
 in postgresql? All the processes have to go through
 some sort of queue, so that locks are granted in FIFO
 order, as you described. Just trying to understand it
 better.

See here:

http://www.postgresql.org/docs/8.0/static/mvcc.html

PostgreSQL's locking system is quite impressive.  I kinda giggle when
someone says "Well, now MySQL has feature Y, so why bother with
PostgreSQL?" It's pretty obvious they haven't really read up on pgsql
when they say things like that.



Re: [GENERAL] Budget battery-backed ramdisk (Gigabyte i-RAM)

2005-07-27 Thread Scott Marlowe
On Wed, 2005-07-27 at 10:56, Vivek Khera wrote:
 On Jul 26, 2005, at 1:25 PM, Richard Huxton wrote:
 
  Review
   http://www.anandtech.com/storage/showdoc.aspx?i=2480
  Slashdot
   http://hardware.slashdot.org/article.pl?sid=05/07/26/1229211&tid=198
 
  Might be useful for those of us working with budget systems. If  
  anyone  does make a purchase, please post your investigations to  
  the list - I for one would be interested.
 
 But don't put important data on it since it doesn't do ECC RAM

Considering the small incremental cost of ECC ram, it's hard to believe
someone would build one of those without it.

Heck, I'd think a RAID5 array of iPod shuffles might be more reliable. 
That's a joke, but only halfway...

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] duplicate messages?

2005-07-27 Thread Steve Atkins
On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:
   Anyone else getting duplicate messages?  I seem to be getting them
   sporadically on different messages on pgsql-general.  Seems to have 
   started
   sometime Monday morning (eastern US time)
  
  One reason this happens is because people post to the list from an address
  which is not subscribed. Their message gets added to the moderator's queue.
  They then realize that they were not subscribed, and either subscribe
  and send again, or send from another account, or send again so it gets
  added to the moderator's queue multiple times. The moderator comes along and
  approves the message(s) at some point. Thus, duplicate messages. In an ideal
  world, the moderator or the original poster is constantly reading the list
  and realizes the post has already made it. Of course, in an ideal world,
  people would subscribe to the list before attempting to send to it. :)
  
 
 Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
 Huxton have unsubscribed and resubscribed lately... Funny thing is it
 isn't every message, but maybe half of them. And it's not tied to specific
 users; sometimes one of Tom's emails will duplicate but sometimes not.
 Furthermore, it only seems to be happening on pgsql-general emails and
 not on any of the other lists... I also notice this is only happening on
 one of my subscribed emails, not the other, which is a little weird...
 according to the headers though, this problem is happening further
 upstream. 

It happens every so often (I've noticed it 4 or 5 times in the past
few years). If you check the headers you'll probably find the dupes
come from one particular server. It appears to be multiple delivery
rather than reinjection of articles to the list.

Cheers,
  Steve

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] duplicate messages?

2005-07-27 Thread Bruno Wolff III
On Wed, Jul 27, 2005 at 11:46:05 -0400,
  Robert Treat [EMAIL PROTECTED] wrote:
 
 Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
 Huxton have unsubscribed and resubscribed lately... Funny thing is it
 isn't every message, but maybe half of them. And it's not tied to specific
 users; sometimes one of Tom's emails will duplicate but sometimes not.
 Furthermore, it only seems to be happening on pgsql-general emails and
 not on any of the other lists... I also notice this is only happening on
 one of my subscribed emails, not the other, which is a little weird...
 according to the headers though, this problem is happening further
 upstream. 

Are some copies of these messages being sent directly to you without going
through the list? If so there is a way to tell the list server not to
send you copies if it appears you are getting a copy directly from the
sender.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] duplicate messages?

2005-07-27 Thread Alvaro Herrera
On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:

 Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
 Huxton have unsubscribed and resubscribed lately... Funny thing is it
 isn't every message, but maybe half of them. And it's not tied to specific
 users; sometimes one of Tom's emails will duplicate but sometimes not.
 Furthermore, it only seems to be happening on pgsql-general emails and
 not on any of the other lists... I also notice this is only happening on
 one of my subscribed emails, not the other, which is a little weird...
 according to the headers though, this problem is happening further
 upstream. 

I'd think it would be a problem with your last mile, because it seems
to be a localized problem.  Maybe an SMTP server dropping connections at
the last moment, or a deliver program crashing after delivering (leading
to re-delivery), or something like that.

You'd have to show us the headers though.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
Find a bug in a program, and fix it, and the program will work today.
Show the program how to find and fix a bug, and the program
will work forever (Oliver Silfridge)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Wishlist?

2005-07-27 Thread Ian Harding
The client has to read the incoming data stream for indications of the
type of data that is coming next, then further read a description of
each field name, type, and nullability in the event it is a dataset. 
It is not pretty.  I don't know how the higher level interfaces handle
it, but here is some interesting reading on TDS.  It has changed a bit
since 5.0, but the basics are the same.

http://www.sybase.com/content/1013412/tds34.pdf

Also, FreeTDS http://www.freetds.org is interesting.  They have high
level interfaces but I haven't had to use them.

- Ian


On 7/27/05, Martijn van Oosterhout kleptog@svana.org wrote:
 On Wed, Jul 27, 2005 at 12:45:12PM +1000, Ezequiel Tolnay wrote:
  Ezequiel Tolnay wrote:
  (...) A function is meant to return a
  result (or a set of results) of a predefined type *during* execution,
  whilst a stored procedure (...)
 
  I meant to say *after* instead of *during*. The capability to return
  results during execution could only be supported by stored procedures.
 
 I know this is being fiddly but in PostgreSQL a function doesn't have
 to return all its values in one go. It can, in a loop, do calculations,
 return a row, do more calculations, return a row and these rows can be
 received by the client in real time (ie not wait for completion).
 
 But your description of stored procedures was useful. It's basically
 more like a script you would feed to psql. You indicate that the
 procedure can return multiple sets. How is this handled by the client?
 
 If you're not returning data then it would be like a void function
 using NOTICE to communicate with the client.
 
 Still, thanks for the info...
 
 Have a nice day,
 --
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
  Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
  tool for doing 5% of the work and then sitting around waiting for someone
  else to do the other 95% so you can sue them.
 
 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Budget battery-backed ramdisk (Gigabyte i-RAM)

2005-07-27 Thread Vivek Khera


On Jul 27, 2005, at 12:09 PM, Scott Marlowe wrote:

 On Wed, 2005-07-27 at 10:56, Vivek Khera wrote:

  But don't put important data on it since it doesn't do ECC RAM

 Considering the small incremental cost of ECC ram, it's hard to believe
 someone would build one of those without it.

This device comes with no RAM -- you add your own.  However, it
doesn't even *support* ECC.  That's idiotic, I think.


Vivek Khera, Ph.D.
+1-301-869-4449 x806






Re: [GENERAL] error when using SELECT

2005-07-27 Thread Hilmar Lapp

Hi Tom,

I solved the problem meanwhile. I was using the SUBSTRING function with 
from/length integer arguments. DBD::Pg (this is using perl) binds all 
parameters as type VARCHAR by default, so what I had to do was supply 
an extra type parameter to the $sth->bind_param() calls so that they 
are bound as integers.


The tricky bit was that this used to work perfectly well (i.e., without 
specifying type explicitly) with the 7.3.x server I was running before, 
but not with 8.x. The reason is that DBD::Pg only uses server-side 
prepared statements by default if the server is 8.x or higher, and 
expands the statement itself if the server is 7.3.x or lower ...


Thanks for trying to help. I thought I'd share this here since other 
people might run into the same problem if they've been using DBD::Pg 
since the 7.3.x times.


-hilmar

On Jul 26, 2005, at 11:43 AM, Andrew Stewart wrote:




From: Tom Lane [EMAIL PROTECTED]
Date: July 26, 2005 11:25:14 AM PDT
To: 
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] error when using SELECT


Andrew Stewart [EMAIL PROTECTED] writes:
I myself am still very new to PostgreSQL, so I'm having trouble telling
if there is anything wrong with the postgres transaction that is being
attempted by the bioperl-db make test.  The verbose error output is as
follows...



preparing SELECT statement: SELECT SUBSTRING(seq FROM ? FOR ?) FROM
biosequence WHERE bioentry_id = ?
ok 30
ok 31
DBD::Pg::st execute failed: ERROR:  invalid escape string
HINT:  Escape string must be empty or one character.


According to the docs, that syntax is

   The substring function with three parameters, substring(string from
   pattern for escape-character), provides extraction of a substring that
   matches an SQL regular expression pattern.

It would appear that you're supplying an empty string for the second ?
which is a no-no for this particular function.

My guess is that you are trying to port code from another database that
has a different interpretation of this syntax.

regards, tom lane




--
-
Hilmar Lapp            email: lapp at gnf.org
GNF, San Diego, Ca. 92121  phone: +1-858-812-1757
-



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] About Tools at the DB design phase.

2005-07-27 Thread Ben
Heh, I have to concur with this comment. Though I always found the US
letter format to be more standards compliant, myself.

On Wed, 27 Jul 2005, Roman Neuhauser wrote:

 # [EMAIL PROTECTED] / 2005-07-27 10:05:43 -0400:
  Greetings,
  
  I am at the design phase of the DB design. That is, I'd like to design 
  tables and relationships between them, but not the real implement of 
  tables. Could somebody suggest some good and free tools to help/ease 
  design the structures please?
 
 They're not exactly free, but a pen and a hardback notebook (A4)
 have proven to be the best tools over the time.
 
 YMMV.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] duplicate messages?

2005-07-27 Thread Robert Treat
On Wednesday 27 July 2005 12:30, Alvaro Herrera wrote:
 On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:
  Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
  Huxton have unsubscribed and resubscribed lately... Funny thing is it
  isn't every message, but maybe half of them. And it's not tied to specific
  users; sometimes one of Tom's emails will duplicate but sometimes not.
  Furthermore, it only seems to be happening on pgsql-general emails and
  not on any of the other lists... I also notice this is only happening on
  one of my subscribed emails, not the other, which is a little weird...
  according to the headers though, this problem is happening further
  upstream.

 I'd think it would be a problem with your last mile, because it seems
 to be a localized problem.  Maybe an SMTP server dropping connections at
 the last moment, or a deliver program crashing after delivering (leading
 to re-delivery), or something like that.

 You'd have to show us the headers though.

I would think that too, however take a look at these message headers:
the critical piece seems to be the handoff between postgresql.org and 
commandprompt, where the ESMTP id changes into j6RGoF0u002162 and 
j6RGj9mu001662 for what would otherwise be the same message.

 Received: from 128.commandprompt.com ([207.173.200.128] 
helo=hosting.commandprompt.com)
by mail.sourceforge.net with esmtps (TLSv1:AES256-SHA:256)
(Exim 4.44)
id 1DxpLy-00015W-19
for [EMAIL PROTECTED]; Wed, 27 Jul 2005 10:06:01 -0700
 Received: from postgresql.org (svr1.postgresql.org [200.46.204.71])
by hosting.commandprompt.com (8.13.4/8.13.4) with ESMTP id 
j6RGj9mu001662;
Wed, 27 Jul 2005 09:45:27 -0700
 X-Original-To: [EMAIL PROTECTED]
 Received: from localhost (unknown [200.46.204.144])
by svr1.postgresql.org (Postfix) with ESMTP id 9F4A852F9D
for [EMAIL PROTECTED]; Wed, 27 
Jul 2005 13:30:42 -0300 (ADT)
 Received: from svr1.postgresql.org ([200.46.204.71])
 by localhost (av.hub.org [200.46.204.144]) (amavisd-new, port 10024)
 with ESMTP id 01459-09
 for [EMAIL PROTECTED];
 Wed, 27 Jul 2005 16:30:37 + (GMT)
 Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.200])
by svr1.postgresql.org (Postfix) with ESMTP id E8ECB52FA4
for pgsql-general@postgresql.org; Wed, 27 Jul 2005 13:30:36 -0300 
(ADT)
 Received: by wproxy.gmail.com with SMTP id i21so208751wra
for pgsql-general@postgresql.org; Wed, 27 Jul 2005 09:30:36 -0700 
(PDT)
 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;
s=beta; d=gmail.com;

h=received:message-id:date:from:reply-to:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;

b=nNBziaFbsTlwduR3PYyrUPubs6NRt0AU0/m8aPBQkXbpYYNqBRefNf0io5qZhG4oMSf3FnUwJoNlAfMdHG/R9CKm6XKb0eik8nEBDMcA6DYmBJsbLM3ebzFe6lzilfwxvtMYhSH8SohIDoHPX/CEyLxldB0sDmDLCu6YFXBv3Yg=
 Received: by 10.54.13.59 with SMTP id 59mr411070wrm;
Wed, 27 Jul 2005 09:30:36 -0700 (PDT)
 Received: by 10.54.100.12 with HTTP; Wed, 27 Jul 2005 09:30:36 -0700 (PDT)
 Message-ID: [EMAIL PROTECTED]
---
 Received: from 128.commandprompt.com ([207.173.200.128] 
helo=hosting.commandprompt.com)
by mail.sourceforge.net with esmtps (TLSv1:AES256-SHA:256)
(Exim 4.44)
id 1DxpQt-0003vz-Tr
for [EMAIL PROTECTED]; Wed, 27 Jul 2005 10:11:05 -0700
 Received: from postgresql.org (svr1.postgresql.org [200.46.204.71])
by hosting.commandprompt.com (8.13.4/8.13.4) with ESMTP id 
j6RGoF0u002162;
Wed, 27 Jul 2005 09:50:15 -0700
 X-Original-To: [EMAIL PROTECTED]
 Received: from localhost (unknown [200.46.204.144])
by svr1.postgresql.org (Postfix) with ESMTP id 9F4A852F9D
for [EMAIL PROTECTED]; Wed, 27 
Jul 2005 13:30:42 -0300 (ADT)
 Received: from svr1.postgresql.org ([200.46.204.71])
 by localhost (av.hub.org [200.46.204.144]) (amavisd-new, port 10024)
 with ESMTP id 01459-09
 for [EMAIL PROTECTED];
 Wed, 27 Jul 2005 16:30:37 + (GMT)
 Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.200])
by svr1.postgresql.org (Postfix) with ESMTP id E8ECB52FA4
for pgsql-general@postgresql.org; Wed, 27 Jul 2005 13:30:36 -0300 
(ADT)
 Received: by wproxy.gmail.com with SMTP id i21so208751wra
for pgsql-general@postgresql.org; Wed, 27 Jul 2005 09:30:36 -0700 
(PDT)
 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;
s=beta; d=gmail.com;

h=received:message-id:date:from:reply-to:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;

b=nNBziaFbsTlwduR3PYyrUPubs6NRt0AU0/m8aPBQkXbpYYNqBRefNf0io5qZhG4oMSf3FnUwJoNlAfMdHG/R9CKm6XKb0eik8nEBDMcA6DYmBJsbLM3ebzFe6lzilfwxvtMYhSH8SohIDoHPX/CEyLxldB0sDmDLCu6YFXBv3Yg=
 Received: by 10.54.13.59 with SMTP id 59mr411070wrm;
Wed, 27 Jul 2005 09:30:36 -0700 (PDT)
 Received: by 10.54.100.12 with HTTP; Wed, 27 Jul 2005 09:30:36 

Re: [GENERAL] About Tools at the DB design phase.

2005-07-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-07-27 10:13:10 -0700:
 On Wed, 27 Jul 2005, Roman Neuhauser wrote:
  # [EMAIL PROTECTED] / 2005-07-27 10:05:43 -0400:
   I am at the design phase of the DB design. That is, I'd like to design 
   tables and relationships between them, but not the real implement of 
   tables. Could somebody suggest some good and free tools to help/ease 
   design the structures please?
  
  They're not exactly free, but a pen and a hardback notebook (A4)
  have proven to be the best tools over the time.
  
  YMMV.

 Heh, I have to concur with this comment. Though I always found the US
 letter format to be more standards compliant, myself.

This is Central Europe, sir. You either play by the continental
standards or we send you back home. ;)

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread Jim C. Nasby
On Wed, Jul 27, 2005 at 04:46:56PM +0200, [EMAIL PROTECTED] wrote:
 I'm now testing with pg_pool installed on each apache frontend with 260 
 pg_pool preforked clients in each machine.
 
 The database seems to work better. At least when it goes to swap it 
 doesn't stop working...

Wait, are you saying your database server is swapping? You'll never get
any kind of performance if that's the case.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Upgrading from 7.1

2005-07-27 Thread Jonathan Villa

I've been googling a little bit and it appears that 7.1 is pretty old.  What
steps are advised to upgrade from 7.1 to 7.4?


-Jonathan




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql with max_connections=4096

2005-07-27 Thread Scott Marlowe
On Wed, 2005-07-27 at 12:53, Jim C. Nasby wrote:
 On Wed, Jul 27, 2005 at 04:46:56PM +0200, [EMAIL PROTECTED] wrote:
  I'm now testing with pg_pool installed on each apache frontend with 260 
  pg_pool preforked clients in each machine.
  
  The database seems to work better. At least when it goes to swap it 
  doesn't stop working...
 
 Wait, are you saying your database server is swapping? You'll never get
 any kind of performance if that's the case.

IF it swaps out unused code / data and leaves it, that's fine, but if
it's constantly swapping out then yeah, that's a VERY bad thing.

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] pgsql mention on zdnet

2005-07-27 Thread Scott Marlowe
http://news.zdnet.com/2100-9590_22-5806608.html

Interestingly, in mentioning how Unix is just Unix, they might have
misspelled PostgreSQL, but at least they didn't say MySQL is MySQL.

It's about halfway through the article.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Scott Marlowe
On Wed, 2005-07-27 at 13:41, Jonathan Villa wrote:
 I've been googling a little bit and it appears that 7.1 is pretty old.  What
 steps are advised to upgrade from 7.1 to 7.4?

The best way, in my humble opinion, is to build a new machine, and
install 7.4 or 8.0 on it.  Backup the old 7.1 machine using the pg_dump
or pg_dumpall program on the 7.4/8.0 box, and restore it to the 7.4/8.0
machine.  Then test the heck out of it with your applications.  If it
all works, then over a weekend, repeat the process, replacing the 7.1
machine with the 7.4/8.0 machine.  That way, you have a backup plan,
should things go wrong at some point, you can revert to 7.1.

and yes, 7.1 is pretty old.

Depending on how old your 7.1 version is (like 7.1.0 or something) you
might want to take a backup of it and upgrade pgsql on it to the latest
7.1.x version, where x was the largest minor release.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Steve Crawford
On Wednesday 27 July 2005 11:41 am, Jonathan Villa wrote:
 I've been googling a little bit and it appears that 7.1 is pretty old. 

Very.

 What steps are advised to upgrade from 7.1 to 7.4?

Unless there is some reason you really need 7.4, just go straight to 
8.0.3.

Instructions start on page 230 of the PG8 documentation. Basically you 
do a pg_dumpall to backup your old database (make sure it is not 
being updated), stop the old server, install the new server, restore 
your data: 
http://www.postgresql.org/docs/8.0/interactive/install-upgrading.html

Cheers,
Steve



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Richard Huxton

Jonathan Villa wrote:

I've been googling a little bit and it appears that 7.1 is pretty old.
What steps are advised to upgrade from 7.1 to 7.4?


1. Dump the old db using 7.4's pg_dump.
2. Read the release notes for the in-between versions to make sure 
nothing will impact your behaviour. Keep a close eye out for tightened-up 
error checking, changed typecasting rules, etc.


If I was you I'd go straight to 8.0 - it's not going to be noticeably 
more work and brings you bang up to date.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] DELETE with JOIN syntax

2005-07-27 Thread Brian Wong
I am currently migrating from MySQL to PostgreSQL and I have found
that some queries do not work. For instance,

DELETE t1 FROM t1 LEFT JOIN t2 USING (column_id) WHERE t2.column_id IS NULL;

works in MySQL. This works as expected even though the MySQL
documentation does not mention the option of having a table between
the keywords DELETE and FROM.

I am trying to achieve the same effect for PostgreSQL so I tried

DELETE FROM t1 LEFT JOIN t2 USING (column_id) WHERE t2.column_id IS NULL;

and it did not work. Can someone explain to me exactly what is wrong
with this syntax?
Is a table expression produced by the JOIN allowed for a DELETE?
I'm thinking that this would not work because the table expression is
not a real table and it would not make sense for DELETE to accept such
a parameter. How can I rewrite this query to achieve the same effect?
Thanks.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] DELETE with JOIN syntax

2005-07-27 Thread Bruno Wolff III
On Wed, Jul 27, 2005 at 15:28:36 -0400,
  Brian Wong [EMAIL PROTECTED] wrote:
 I am currently migrating from MySQL to PostgreSQL and I have found
 that some queries do not work. For instance,
 
 DELETE t1 FROM t1 LEFT JOIN t2 USING (column_id) WHERE t2.column_id IS NULL;
 
 works in MySQL. This works as expected even though the MySQL
 documentation does not mention the option of having a table between
 the keywords DELETE and FROM.
 
 I am trying to achieve the same effect for PostgreSQL so I tried
 
 DELETE FROM t1 LEFT JOIN t2 USING (column_id) WHERE t2.column_id IS NULL;
 
 and it did not work. Can someone explain to me exactly what is wrong
 with this syntax?
 Is a table expression produced by the JOIN allowed for a DELETE?
 I'm thinking that this would not work because the table expression is
 not a real table and it would not make sense for DELETE to accept such
 a parameter. How can I rewrite this query to achieve the same effect?
 Thanks.

In 8.1 you will be able to use 'USING' to do this or something like it.
For now, I don't think you can use explicit join syntax and need to do
something like:
DELETE FROM t1 WHERE t1.column_id NOT IN (SELECT column_id FROM t2);
This assumes there aren't any NULL values in t2.column_id. If there are,
you can rewrite the above to use NOT EXISTS.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] link errors building extensions for Postgres on Windows using MinGW

2005-07-27 Thread Eric Davies


I'm trying to port an extension that previously ran under Postgres on
linux so that it runs under PostgreSQL 8.0.3 on Windows. I'm using MinGW
for the compiling/linking.
I'm getting problems when I try to link though. Symbols that are defined
in the Postgres server and referenced in my extension cause link errors
that prevent a dll from being produced.

fe-giefToGrid.c:25: undefined reference to `pg_detoast_datum'

fe-udr.c:510: undefined reference to `Float8GetDatum'

libgridfrontendDEBUG.a(fe-udr.o)(.text+0xbe7):fe-udr.c:515: undefined
reference to `get_typlenbyvalalign'

I can get rid of these link errors by linking in the libpostgres.a file,
at the cost of a much larger shared library. In theory I shouldn't need
to (at least I don't under linux), and doing so may presumably result in
two inconsistent copies of some Postgres internal data structures. My
link line looks like:

gcc -u_etext -shared -fpic -Wl,--whole-archive -o destDir/Grid.dll \
    -Lmylibdir -L/lib -L/c/Program\ Files/PostgreSQL/8.0/lib -lmylibs \
    -Wl,--no-whole-archive -lpostgres -lwsock32 -lm 2> link.errs

Can anyone suggest a solution?
Thank you,

** 
Eric Davies, M.Sc. 
Barrodale Computing Services Ltd. 
Tel: (250) 472-4372 Fax: (250) 472-4373 
Web:
http://www.barrodale.com

Email: [EMAIL PROTECTED] 
** 
Mailing Address: 
P.O. Box 3075 STN CSC 
Victoria BC Canada V8W 3W2
Shipping Address: 
Hut R, McKenzie Avenue 
University of Victoria 
Victoria BC Canada V8W 3W2 
**





[GENERAL] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Chris Travers
So, it seems to me that there is a fair bit of work to be done on 
helping people migrate MySQL to PostgreSQL.


So far, the checklist I can see includes:
*  Maintaining conversion scripts
*  Reviewing pain points and looking at ways of mitigating them.
*  Building solid migration documentation
*  Providing porting frameworks

This last item could have some potential.  For example, a source-code 
compatible library to map MySQL client lib system calls to libpq 
operations for the C API, PHP wrapper scripts, etc.


My company will be providing a PHP wrapper script, and will be working 
on some documentation.  Most of the documentation will be BSD-style 
licensed with the exception of the references to my company's services 
(which must be stripped out of non-verbatim reproductions).


If anyone else sees any other interesting areas that need work, it might 
be worthwhile to discuss them as well.


Best Wishes,
Chris Travers
Metatron Technology Consulting

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] DELETE with JOIN syntax

2005-07-27 Thread Stephan Szabo
On Wed, 27 Jul 2005, Brian Wong wrote:

 I am currently migrating from MySQL to PostgreSQL and I have found
 that some queries do not work. For instance,

 DELETE t1 FROM t1 LEFT JOIN t2 USING (column_id) WHERE t2.column_id IS NULL;

 works in MySQL. This works as expected even though the MySQL
 documentation does not mention the option of having a table between
 the keywords DELETE and FROM.

 I am trying to achieve the same effect for PostgreSQL so I tried

 DELETE FROM t1 LEFT JOIN t2 USING (column_id) WHERE t2.column_id IS NULL;

 and it did not work. Can someone explain to me exactly what is wrong
 with this syntax?

It's mostly that AFAIK SQL has no equivalent syntax.

 Is a table expression produced by the JOIN allowed for a DELETE?
 I'm thinking that this would not work because the table expression is
 not a real table and it would not make sense for DELETE to accept such
 a parameter. How can I rewrite this query to achieve the same effect?

I think the "WHERE t2.column_id IS NULL" (where column_id is the joining
column) makes this a form of NOT EXISTS, so maybe:

DELETE FROM t1 WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.column_id =
t1.column_id);
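A quick sanity check of the NOT EXISTS rewrite, again using Python's bundled sqlite3 engine as a stand-in for standard SQL (table names invented for the demo). Unlike NOT IN, the correlated NOT EXISTS behaves sensibly even when t2.column_id contains NULLs:

```python
# NOT EXISTS deletes exactly the t1 rows with no match in t2,
# regardless of NULLs in t2 (standard SQL, shown via sqlite3).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (column_id INTEGER);
    CREATE TABLE t2 (column_id INTEGER);
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (1), (NULL);   -- NULL is harmless here
""")

con.execute("""
    DELETE FROM t1
    WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.column_id = t1.column_id)
""")
# Only the row that actually has a partner in t2 survives:
print([r[0] for r in con.execute("SELECT column_id FROM t1 ORDER BY 1")])  # -> [1]
```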


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Jonathan Villa


My approach will be/has been as follows:

I've used pg_dump of 7.4 to do

pgsql-7.4 $ pg_dump --schema-only dbName > schema.sql

Aside from some tweaking, the import seemed to work fine.

Now, I'm attempting the following

pgsql-7.4 $ pg_dump --data-only --inserts dbName > data.sql

and when I attempt an import, I get

ERROR:  insert or update on table "doc_data" violates foreign key constraint
"docdata_languageid_fk"
DETAIL:  Key (language)=(1) is not present in table "supported_languages".

Regarding the violations of the foreign key constraints: I've been able to
export/import from 7.1 to 7.1 OK.

When I was doing the schema.sql import, I did receive a lot of messages
regarding implicit indexes being created.  Is this something I should be
worried about?

The reason I'm not moving to 8.0 is that the application I'm trying to get
upgraded does not give it its seal of approval.


-Jonathan

quote who=Richard Huxton
 Jonathan Villa wrote:
 I've been googling a little bit and it appears that 7.1 is pretty old.
 What steps are advised to upgrade from 7.1 to 7.4?

 1. Dump the old db using 7.4's pg_dump.
 2. Read the release notes for the in-between versions to make sure
 nothing will impact your behaviour. Keep a close eye for tightening-up
 error checking, or changing typecasting rules etc.

 If I was you I'd go straight to 8.0 - it's not going to be noticeably
 more work and brings you bang up to date.

 --
Richard Huxton
Archonet Ltd




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Bad plan when null is in an in list

2005-07-27 Thread Jaime Casanova
On 7/26/05, Csaba Nagy [EMAIL PROTECTED] wrote:
 Hi all,
 
 Jumping in directly to the subject, this is what I get:
 
 explain SELECT bigint_col_1, bigint_col_2 FROM big_table WHERE
 bigint_col_2 in (12132131, null, null, null,
 null);
 
 QUERY PLAN
 ---
  Seq Scan on big_table  (cost=0.00..2447201.85 rows=448 width=16)
   Filter: ((bigint_col_2 = 12132131::bigint) OR NULL::boolean)
 (2 rows)
 
 

this is because null values can't be indexed... or, to put it another way,
the planner will never choose an index for a comparison to null

maybe a partial index can be used?


 Compared to:
 
 
 explain SELECT bigint_col_1, bigint_col_2 FROM big_table WHERE
 bigint_col_2 in (12132131, 123781, 1297839032, 123667123);
 
 QUERY PLAN
 
  Index Scan using dom_idx_member_bigint_col_2,
 dom_idx_member_bigint_col_2, dom_idx_member_bigint_col_2,
 dom_idx_member_bigint_col_2 on big_table  (cost=0.00..6427.28 rows=1789
 width=16)
   Index Cond: ((bigint_col_2 = 12132131) OR (bigint_col_2 = 123781) OR
 (bigint_col_2 = 1297839032) OR (bigint_col_2 = 123667123))
 (2 rows)
 
 
 big_table has ~ 100 million rows.
 

there is no nulls here so the index can be used

 
 Considering that NULL::boolean is always false, 
 

null::boolean is null, not false. That is because null means 'unknown
value', not false nor true:

template1=# select null::boolean;
 bool
--

(1 row)
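The same three-valued logic is easy to poke at from any SQL engine; here is a sketch using Python's bundled sqlite3 module as a stand-in (it returns None for SQL NULL and 1 for true), showing why an IN list containing NULL never evaluates to false for non-matching rows:

```python
# SQL three-valued logic: comparisons against NULL yield unknown,
# which WHERE then treats as "do not include the row".
import sqlite3

con = sqlite3.connect(":memory:")

# NULL = NULL is unknown, not true:
print(con.execute("SELECT NULL = NULL").fetchone()[0])        # -> None

# A non-matching IN list that contains NULL yields unknown, not false:
print(con.execute("SELECT 5 IN (1, 2, NULL)").fetchone()[0])  # -> None

# A matching IN list still yields true regardless of the NULLs:
print(con.execute("SELECT 1 IN (1, 2, NULL)").fetchone()[0])  # -> 1
```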


-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] DELETE with JOIN syntax

2005-07-27 Thread Brian Wong
On 7/27/05, Stephan Szabo [EMAIL PROTECTED] wrote:
 
 I think the where t2.column_id is null where column_id is the joining
 column makes this a form of not exists, so maybe:
 
 DELETE FROM t1 WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.column_id =
 t1.columnid);
 
 

This looks good. Thanks.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread John DeSoi


On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:


I'd create a sequence:

CREATE SEQUENCE global_unique_id_seq;

and a function:

CREATE OR REPLACE FUNCTION newid()
  RETURNS text AS
$BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
  LANGUAGE 'sql' VOLATILE;


now every call to newid() returns a guaranteed unique id for
say the next 18446744073709551616 calls.
Of course you can obfuscate the ID even more using
md5, include the server name and so on, but this will not improve
security in any way (unless you mix data with a 2nd database)



This is not really a viable replacement for a GUID == globally unique  
identifier. Here global means that if I use the application in  
multiple databases, I'm guaranteed that no two identifiers will be  
the same. Using a sequence will only support uniqueness for a single  
database.


Best,




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Scott Marlowe
On Wed, 2005-07-27 at 15:32, John DeSoi wrote:
 On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:
 
  I'd create a sequence:
 
  CREATE SEQUENCE global_unique_id_seq;
 
  and a function:
 
  CREATE OR REPLACE FUNCTION newid()
RETURNS text AS
  $BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
LANGUAGE 'sql' VOLATILE;
 
 
  now every call to newid() returns a garantied unique id for
  say the next 18446744073709551616 calls.
  Of course you can obfuscate the ID even more using
  md5, include servername and so on, but this will not improve
  security in any way (unless you mix data with 2nd database)
 
 
 This is not really a viable replacement for a GUID == globally unique  
 identifier. Here global means that if I use the application in  
 multiple databases, I'm guaranteed that no two identifiers will be  
 the same. Using a sequence will only support uniqueness for a single  
 database.

So, how can two databases, not currently talking to one another,
guarantee that their GUIDs don't collide? Using a large randomly
generated name space only reduces the chances of collision; it doesn't
actually guarantee it.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Ben
Yes, this is the problem with GUIDs... you can calculate them by mashing
together things like the time, a network address, and some random
numbers, which makes a collision very unlikely, but at the end
of the day that G stands for global, *not* guaranteed.

On Wed, 27 Jul 2005, Scott Marlowe wrote:

 On Wed, 2005-07-27 at 15:32, John DeSoi wrote:
  On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:
  
   I'd create a sequence:
  
   CREATE SEQUENCE global_unique_id_seq;
  
   and a function:
  
   CREATE OR REPLACE FUNCTION newid()
 RETURNS text AS
   $BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
 LANGUAGE 'sql' VOLATILE;
  
  
   now every call to newid() returns a garantied unique id for
   say the next 18446744073709551616 calls.
   Of course you can obfuscate the ID even more using
   md5, include servername and so on, but this will not improve
   security in any way (unless you mix data with 2nd database)
  
  
  This is not really a viable replacement for a GUID == globally unique  
  identifier. Here global means that if I use the application in  
  multiple databases, I'm guaranteed that no two identifiers will be  
  the same. Using a sequence will only support uniqueness for a single  
  database.
 
 So, how can two databases, not currently talking to one another,
 guarantee that their GUIDs don't collide? using a large randomly
 generated name space only reduces the chances of collision, it doesn't
 actually guarantee it.
 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq
 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Dann Corbit
Windows uses the MAC address in GUID generation.

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Scott Marlowe
 Sent: Wednesday, July 27, 2005 1:47 PM
 To: John DeSoi
 Cc: Tino Wildenhain; Riaan van der Westhuizen; Postgresql-General
 Subject: Re: [GENERAL] GUID for postgreSQL
 
 On Wed, 2005-07-27 at 15:32, John DeSoi wrote:
  On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:
 
   I'd create a sequence:
  
   CREATE SEQUENCE global_unique_id_seq;
  
   and a function:
  
   CREATE OR REPLACE FUNCTION newid()
 RETURNS text AS
   $BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
 LANGUAGE 'sql' VOLATILE;
  
  
   now every call to newid() returns a garantied unique id for
   say the next 18446744073709551616 calls.
   Of course you can obfuscate the ID even more using
   md5, include servername and so on, but this will not improve
   security in any way (unless you mix data with 2nd database)
 
 
  This is not really a viable replacement for a GUID == globally
unique
  identifier. Here global means that if I use the application in
  multiple databases, I'm guaranteed that no two identifiers will be
  the same. Using a sequence will only support uniqueness for a single
  database.
 
 So, how can two databases, not currently talking to one another,
 guarantee that their GUIDs don't collide? using a large randomly
 generated name space only reduces the chances of collision, it doesn't
 actually guarantee it.
 
 ---(end of
broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Magnus Hagander

  This is not really a viable replacement for a GUID == 
 globally unique 
  identifier. Here global means that if I use the application in 
  multiple databases, I'm guaranteed that no two identifiers 
 will be the 
  same. Using a sequence will only support uniqueness for a single 
  database.
 
 So, how can two databases, not currently talking to one 
 another, guarantee that their GUIDs don't collide? using a 
 large randomly generated name space only reduces the chances 
 of collision, it doesn't actually guarantee it.

At least on Windows, the GUID is derived in part from the computer's
primary MAC address. No, it's not a guarantee, but it's pretty unlikely
:-)

//Magnus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread John DeSoi


On Jul 27, 2005, at 4:46 PM, Scott Marlowe wrote:


So, how can two databases, not currently talking to one another,
guarantee that their GUIDs don't collide? using a large randomly
generated name space only reduces the chances of collision, it doesn't
actually guarantee it.



Like MD5, there is no 100% guarantee, but the collision probability is  
supposed to be very close to zero.



See http://en.wikipedia.org/wiki/GUID


John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Scott Marlowe
On Wed, 2005-07-27 at 15:57, John DeSoi wrote:
 On Jul 27, 2005, at 4:46 PM, Scott Marlowe wrote:
 
  So, how can two databases, not currently talking to one another,
  guarantee that their GUIDs don't collide? using a large randomly
  generated name space only reduces the chances of collision, it doesn't
  actually guarantee it.
 
 
 Like MD5, there is no 100% guarantee, but the collision possibility  
 supposed to be is very close to zero.

Then I would think a better thought-out solution would be one where your
unique ids ARE guaranteed to be unique, where you use something like 

select 'astringuniqtothismachine'||nextval('localsequence');

That really would be guaranteed unique as long as you set up each
machine to have a string unique to it.
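A minimal sketch of that idea (the prefix string and object names here are invented; each machine would get its own prefix):

```sql
-- Per-machine sequence; 'node01' stands in for a machine-unique string.
CREATE SEQUENCE local_id_seq;

CREATE OR REPLACE FUNCTION newid() RETURNS text AS
$BODY$ SELECT 'node01-' || nextval('local_id_seq')::text; $BODY$
LANGUAGE 'sql' VOLATILE;

-- SELECT newid();  -- yields 'node01-1', 'node01-2', ...
```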

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Chris Travers

You could guarantee it, for example...

Something like (pseudocode here):
create sequence local_id;
create domain guid AS text default ('54-' || nextval('local_id')::text);
where 54 is the database id.  In this way, every inserted GUID is 
guaranteed to consist of two parts: a database identifier and a 
locally unique local identifier.  These could then be parsed in a 
reasonable way.


The only way I think one can come up with *guaranteed* globally unique 
identifiers is to include the kind of information we use with other 
things that must be globally unique: a locally unique identifier 
along with a globally unique location identifier.  Sort of like we have 
with IP addresses, MAC addresses, telephone numbers, etc...


Best Wishes,
Chris Travers
Metatron Technology Consulting

Ben wrote:

 Yes, this is the problem with GUIDs... you can calculate them by mashing
 together things like the time, a network address, and some random
 numbers, which makes a collision very unlikely, but at the end
 of the day that G stands for global, *not* guaranteed.

 On Wed, 27 Jul 2005, Scott Marlowe wrote:

  On Wed, 2005-07-27 at 15:32, John DeSoi wrote:
   On Jul 27, 2005, at 4:49 AM, Tino Wildenhain wrote:
   
    I'd create a sequence:
   
    CREATE SEQUENCE global_unique_id_seq;
   
    and a function:
   
    CREATE OR REPLACE FUNCTION newid()
      RETURNS text AS
    $BODY$ SELECT nextval('global_unique_id_seq')::text; $BODY$
      LANGUAGE 'sql' VOLATILE;
   
    now every call to newid() returns a guaranteed unique id for
    say the next 18446744073709551616 calls.
    Of course you can obfuscate the ID even more using
    md5, include the server name and so on, but this will not improve
    security in any way (unless you mix data with a 2nd database)
   
   This is not really a viable replacement for a GUID == globally unique
   identifier. Here global means that if I use the application in
   multiple databases, I'm guaranteed that no two identifiers will be
   the same. Using a sequence will only support uniqueness for a single
   database.
  
  So, how can two databases, not currently talking to one another,
  guarantee that their GUIDs don't collide? Using a large randomly
  generated name space only reduces the chances of collision; it doesn't
  actually guarantee it.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Chris Travers

Magnus Hagander wrote:





At least on Windows, the GUID is derived in part from the computers
primary MAC address. No, it's not a guarantee, but it's pretty unlikely
:-)
 

The danger is not that the MAC address will be duplicated, but that 
other factors will lead to an MD5 collision.


Unless you can show me that there is a 1:1 correspondence of all 
possible unique factors going into the GUID generation and the output, 
then I will say it is still no guarantee.


Just because two documents or files have the same MD5 doesn't mean that 
they are the same files either.  I.e. you can't go searching all files 
by MD5 checksums and expect to find the right one.  OTOH, MD5 
provides reasonable assurance that any given file (once you know its 
intended MD5) has not been tampered with.  I.e. MD5 is not meant to 
preclude collisions, but rather to preclude *intentional* 
collisions.  Similarly, if we want guaranteed uniqueness for a GUID we 
have to prepend some sort of unique string to the GUID (not 
merely use it in a hash).


So you could use the MAC address of the machine, I guess, if you wanted 
to


Best Wishes,
Chris Travers
Metatron Technology Consulting

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Dann Corbit
There is a privacy hole in using the MAC address.  (Read the Wikipedia
article someone else posted.)

Probably, it would be better to use a one-way hash of the MAC address.
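A rough sketch of that in SQL (the MAC address below is invented, and this assumes Postgres's built-in md5() function, available since 7.4; a local sequence is still needed to keep ids unique per machine):

```sql
-- Hypothetical: embed a one-way hash of the MAC address, not the raw address.
CREATE SEQUENCE guid_seq;

CREATE OR REPLACE FUNCTION newid() RETURNS text AS
$BODY$
  SELECT md5('00:0c:29:ab:cd:ef') || '-' || nextval('guid_seq')::text;
$BODY$ LANGUAGE 'sql' VOLATILE;
```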


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Chris Travers
 Sent: Wednesday, July 27, 2005 2:27 PM
 To: Magnus Hagander; pgsql-general
 Subject: Re: [GENERAL] GUID for postgreSQL
 
 Magnus Hagander wrote:
 
 
 
 At least on Windows, the GUID is derived in part from the computers
 primary MAC address. No, it's not a guarantee, but it's pretty
unlikely
 :-)
 
 
 The danger is not that the MAC address will be duplicated, but that
 other factors will lead to an MD5 collision.
 
 Unless you can show me that there is a 1:1 correspondence of all
 possible unique factors going into the GUID generation and the output,
 then I will say it is still no guarantee.
 
 Just because two documents or files have the same MD5 doesn't mean
that
 they are the same files either.  I.e. you can't go searching all files
 by MD5 checksums and expecting to find the right one.  OTOH, MD5
 provides reasonable assurance that any given file (once you know its
 intended MD5) has not been tampered with.  I.e. MD5 is not meant to
 preclude collisions, but rather it is meant to preclude *intentional*
 collisions.  Similarly, if we want a guaranteed uniqueness to a GUID
we
 have to have some sort of unique string to the GUID prepended to it
(not
 merely used in a hash).
 
 So you could use the Mac address of the machine, I guess, if you
wanted
 to
 
 Best Wishes,
 Chris Travers
 Metatron Technology Consulting
 
 ---(end of
broadcast)---
 TIP 5: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Tino Wildenhain
Am Mittwoch, den 27.07.2005, 23:03 +0200 schrieb Magnus Hagander:
   This is not really a viable replacement for a GUID == 
  globally unique 
   identifier. Here global means that if I use the application in 
   multiple databases, I'm guaranteed that no two identifiers 
  will be the 
   same. Using a sequence will only support uniqueness for a single 
   database.

  So, how can two databases, not currently talking to one 
  another, guarantee that their GUIDs don't collide? using a 
  large randomly generated name space only reduces the chances 
  of collision, it doesn't actually guarantee it.

well, then give each database a number, name or whatever and
prefix the sequence with it. Nothing simpler than that. 
Global uniqueness is just a wishful dream. Nobody can
guarantee it. If you use a distinguished name or number for
each of your databases, it's guaranteed.


 At least on Windows, the GUID is derived in part from the computers
 primary MAC address. No, it's not a guarantee, but it's pretty unlikely
 :-)

Yes, that's one way.. But really you just need a domain (not 
related to the internet meaning of domains ;) Anything
that lets you identify (or at least distinguish) _your_
databases. (unlikely you are working with every or random
database in the world ;)

You don't even need a hash function (md5, sha1) if uniqueness is
all you need. OTOH, the often-raised clash argument against md5 does not
count here, because collisions don't happen just by coincidence
given the very limited rule (database-identifier + serial)



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Thomas F. O'Connell


On Jul 27, 2005, at 3:43 PM, Jonathan Villa wrote:


My approach will be/has been as follows:

I've used pg_dump of 7.4 to do

pgsql-7.4 $ pg_dump --schema-only dbName > schema.sql

Aside from some tweaking, the import seemed to work fine.

Now, I'm attempting the following

pgsql-7.4 $ pg_dump --data-only --inserts dbName > data.sql

and when I attempt an import, I get

ERROR:  insert or update on table doc_data violates foreign key  
constraint

docdata_languageid_fk
DETAIL:  Key (language)=(1) is not present in table  
supported_languages.


Regarding the violations of the foreign key contraints, I've been  
able to

export/import from 7.1 to 7.1 ok.

When I was doing the schema.sql import, I did receive a lot of  
messages regarding
implicit indexes being created.  Is this something I should be  
worried about?


The reason I'm not moving to 8.0 is that the application I'm trying  
to get upgraded

does not give it its seal of approval.


-Jonathan


Jonathan,

The implicit indexes are no big deal; they're just a sign of indexes  
getting created by PRIMARY KEYs on your tables.


I'm not sure why you're getting errors. Is there a reason you did the  
schema dump separately from the data dump rather than a monolithic  
dump/restore?


Once you get your data import working, you might want to check out  
contrib/adddepend, though, since you're coming from a pre-7.3 database.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Tony Caduto

Use dblink and do a select off of a sequence on just one of the boxes.
You could set up a view that uses dblink on all the boxes that points to 
the master seq box.


should work.
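A rough sketch of that setup (assumes contrib/dblink is installed; the connection string, view, and sequence names are invented):

```sql
-- On every box: a view that pulls the next id from the master's sequence.
CREATE OR REPLACE VIEW next_global_id AS
  SELECT id
    FROM dblink('host=master-db dbname=ids',
                'SELECT nextval(''global_id_seq'')') AS t(id bigint);

-- SELECT id FROM next_global_id;  -- one network round trip per id
```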

Scott Marlowe wrote:


So, how can two databases, not currently talking to one another,
guarantee that their GUIDs don't collide? using a large randomly
generated name space only reduces the chances of collision, it doesn't
actually guarantee it.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq

 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] duplicate messages?

2005-07-27 Thread Alvaro Herrera
On Wed, Jul 27, 2005 at 01:55:51PM -0400, Robert Treat wrote:
 On Wednesday 27 July 2005 12:30, Alvaro Herrera wrote:
  On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:
   Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
   Huxton have unsubscribed and resubscribed lately... Funny thing is it
   isnt every messages, but maybe half of them. And its not to specific
   users, sometimes one of Toms emails will duplicate but sometimes not.
   Further more it only seems to be happening on pgsql-general emails and
   not on any of the other lists... I also notice this is only happening on
   one of my subscribed emails, not the other, which is a little weird...
   according to the headers though, this problem is happening further
   upstream.
 
  I'd think it would be a problem with your last mile, because it seems
  to be a localized problem.  Maybe an SMTP server dropping connections at
  the last moment, or a deliver program crashing after delivering (leading
  to re-delivery), or something like that.
 
  You'd have to show us the headers though.
 
 I would think that too, however take a look at these message headers:
 the critical piece seems to be the handoff between postgresql.org and 
 commandprompt, where the ESMTP id changes into j6RGoF0u002162 and j6RGj9mu001662 
 for what would otherwise be the same message.

Strange.  I'd expect that this message had been crossposted to more than one
list, but it isn't.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
Learn to feel more shame before yourself than before others (Democritus)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Alvaro Herrera
On Wed, Jul 27, 2005 at 05:40:11PM -0500, Tony Caduto wrote:
 Use Dblink and do a select off of a sequence on just one of the boxes?
 You could set up a view that uses DBlink on all the boxes that points to 
 the master seq box.
 
 should work.

It'll make the whole thing painfully slow.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
inflex really, I see PHP as like a stange amalgamation of C, Perl, Shell
crab inflex: you know that amalgam means mixture with mercury,
   more or less, right?
crab i.e., deadly poison

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread John DeSoi


On Jul 27, 2005, at 5:00 PM, Scott Marlowe wrote:

Then I would think a better thought out solution would be one where  
your

unique ids ARE guaranteed to be unique, where you used something like

select 'astringuniqtothismachine'||nextval('localsequence');

That really would be guaranteed unique as long as you set up each
machine to have a string unique to it.



I have implemented this type of approach in distributed systems. The  
problem is users who make a copy of their database, continue to use  
both copies, and then call you when they try to merge things  
together. I would say the opportunity for users to mess this up is  
far greater than the chance of a GUID collision.


I'm not saying that GUIDs are the ultimate solution to this problem.  
The original poster brought up the need to store GUIDs in a database.  
There are protocols and standards that require GUIDs and I merely  
agree it would be nice to have a GUID data type.




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Chris Travers

Thomas F. O'Connell wrote:




Jonathan,

The implicit indexes are no big deal; they're just a sign of indexes  
getting created by PRIMARY KEYs on your tables.


I'm not sure why you're getting errors. Is there a reason you did the  
schema dump separately from the data dump rather than a monolithic  
dump/restore?


I seem to remember encountering an issue some time ago with pg_dump 
dumping tables in an order that prevented them from being reloaded.


The solution was to specify the order of the tables in the commandline.

Hope this helps.
Chris Travers
Metatron Technology Consulting

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] transaction timeout

2005-07-27 Thread Dr NoName
Thanks a lot, everyone! That solved my problem. But I
still want to be able to set transaction timeout. Any
chance of that in the next release?

Eugene


--- Scott Marlowe [EMAIL PROTECTED] wrote:

 On Wed, 2005-07-27 at 10:31, Dr NoName wrote:
   Sure.  Like this:
   
   Client A accesses table T, and hangs.
   Client B attempts to get an ACCESS EXCLUSIVE
 lock on
   table T in
   preparation for VACUUM FULL.
   Client C connects to the database and waits for
   client B to get and
   release his lock on table T.
   Client D connects to the database and waits for
   client B to get and
   release his lock on table T.
   Client E connects to the database and waits for
   client B to get and
   release his lock on table T.
   etc...
  
  oh! my! gawd!
  Finally a clear explanation that makes perfect
 sense.
  Now why did it take so long?
 
 Because your initial definition of the problem kinda
 led us all in the
 wrong direction for 24 hours?  :)  Remember, it took
 like three times of
 folks asking what's happening that locks your
 database before the
 vacuum full issue came up.  From there, 24 more
 hours.  Actually not
 bad.
 
 And don't forget, the docs on vacuum pretty clearly
 state:
 
 The second form is the VACUUM FULL command. This
 uses a more aggressive
 algorithm for reclaiming the space consumed by
 expired row versions. Any
 space that is freed by VACUUM FULL is immediately
 returned to the
 operating system. Unfortunately, this variant of the
 VACUUM command
 acquires an exclusive lock on each table while
 VACUUM FULL is processing
 it. Therefore, frequently using VACUUM FULL can have
 an extremely
 negative effect on the performance of concurrent
 database queries.
 
 And then later on:
 
 VACUUM FULL is recommended for cases where you know
 you have deleted
 the majority of rows in a table, so that the
 steady-state size of the
 table can be shrunk substantially with VACUUM FULL's
 more aggressive
 approach. Use plain VACUUM, not VACUUM FULL, for
 routine vacuuming for
 space recovery.
 
 So, daily vacuum fulls are not recommended.  
 
  So all I need to do is take out the FULL? Is
 regular
  VACUUM sufficient? How often do we need FULL? (I
 know
  it's a stupid question without providing some more
  context, but how can I estimate it?)
 
 Please read up on vacuuming in the docs, at:
 

http://www.postgresql.org/docs/8.0/static/maintenance.html#ROUTINE-VACUUMING
 
 It's quite enlightening about this.  Basically,
 assuming your fsm
 settings are high enough for your update/delete
 load, yes, plain vacuums
 should be enough.
 
  
  I suppose the ultimate solution would be a wrapper
  script that works as follows:
  
  check if there are any waiting/idle in transaction
  processes
  if such processes exist, do a regular VACUUM and
 send
  out a warning email
  otherwise, do VACUUM FULL.
 
 Nah, that's probably overkill.  I'd rather just run
 plain vacuum
 verboses and check them by hand once a week or so to
 make sure I'm
 reclaiming all the space.
 
  I like this solution a lot more than getting
 support
  calls on weekends.
 
 Amen brother, amen...
 
  Out of curiousity, how is lock acquisition
 implemented
  in postgresql? All the processes have to go
 through
  some sort of queue, so that locks are granted in
 FIFO
  order, as you described. Just trying to understand
 it
  better.
 
 See here:
 
 http://www.postgresql.org/docs/8.0/static/mvcc.html
 
 PostgreSQL's locking system is quite impressive.  I
 kinda giggle when
 someone says Well, now MySQL has feature Y, so why
 bother with
 PostgreSQL? It's pretty obvious they haven't really
 read up on pgsql
 when they say things like that.
 
 ---(end of
 broadcast)---
 TIP 9: In versions below 8.0, the planner will
 ignore your desire to
choose an index scan if your joining column's
 datatypes do not
match
 




__ 
Yahoo! Mail for Mobile 
Take Yahoo! Mail with you! Check email on your mobile phone. 
http://mobile.yahoo.com/learn/mail 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] transaction timeout

2005-07-27 Thread Paul Tillotson

Dr NoName wrote:


Sure.  Like this:

Client A accesses table T, and hangs.
Client B attempts to get an ACCESS EXCLUSIVE lock on
table T in
preparation for VACUUM FULL.
Client C connects to the database and waits for
client B to get and
release his lock on table T.
Client D connects to the database and waits for
client B to get and
release his lock on table T.
Client E connects to the database and waits for
client B to get and
release his lock on table T.
etc...
   



oh! my! gawd!
Finally a clear explanation that makes perfect sense.
Now why did it take so long?

 

I think you did not get the explanation sooner because you did not 
mention that you were doing VACUUM FULL from a cron job, and you got 
drawn into an argument about what postgres should do rather than WHY it 
did what it did. 

I had a lot of sympathy with your position as something similar happened 
to me, but you did not give the detail that allowed me to guess (i.e., 
the VACUUM FULL) until several exchanges had taken place.



So all I need to do is take out the FULL? Is regular
VACUUM sufficient? How often do we need FULL? (I know
it's a stupid question without providing some more
context, but how can I estimate it?)

 

You never have to run VACUUM FULL.  The only thing it does that 
plain ole VACUUM does not is actually shrink a table.  If 
your table doesn't need shrinking, you don't need VACUUM FULL.  It is 
really only for people in desperate straits who have let a table get 
way too large without running regular VACUUM on it.


As another poster already pointed out, you need to set the free space 
map configuration high enough.  The general process is to let the 
database go 1 day without VACUUMing, and then run VACUUM VERBOSE.  This 
will print a lot of information about each table that you don't really 
care about, and then at the end, it will tell you how many pages you 
need in the free space map like this:


INFO:  free space map: 248 relations, 242 pages stored; 4032 total pages 
needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB 
shared memory.


This example from my box shows that I have the free space map tracking 1000 
relations and 20000 pages, but I only need 248 relations and about 4000 
pages.  In your own case, unless you are short on RAM, multiplying the 
amount it says you need by a factor of 4 is probably a good rule of thumb.
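In postgresql.conf terms (illustrative numbers only, using the 7.4/8.0-era free space map settings; the server must be restarted for changes to take effect):

```
# Defaults shown; raise max_fsm_pages well above what VACUUM VERBOSE
# reports as "total pages needed" (e.g. ~4032 needed, x4 headroom = ~16000).
max_fsm_relations = 1000
max_fsm_pages = 20000
```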



Out of curiousity, how is lock acquisition implemented
in postgresql? All the processes have to go through
some sort of queue, so that locks are granted in FIFO
order, as you described. Just trying to understand it
better.

 

Not all locks--only locks that conflict with each other must wait on 
each other in this fashion.  If every lock did, then you would only need 
1 lock in the whole database, as it would protect against any sort of 
concurrent access. :)


There are two main kinds of locks--shared locks and exclusive locks.  
Multiple shared locks can be granted on the same table or row, but only 
one exclusive lock can be.


Select, insert, update, delete, and regular vacuum take no exclusive 
locks, hence the excellent general performance of postgres.* (see below)


The important thing to remember is that if 1 process is waiting trying 
to get an exclusive lock on some table, then every other process asking 
for shared lock on the same table will have to wait.


I know offhand that VACUUM FULL, ALTER TABLE, and REINDEX take exclusive 
locks.  These are probably the only commands that people would be 
tempted to run via a cron job.


You might find this informative:
http://www.postgresql.org/docs/8.0/interactive/explicit-locking.html

Regards,
Paul Tillotson

(*) Actually, you can get this kind of deadlock with just UPDATES.  
Suppose that your web application does:


BEGIN;
UPDATE hits SET count = count + 1 WHERE page = 'somepage.aspx';
[other stuff]
COMMIT;

If you have another transaction that tries to update the SAME ROW, then 
it will wait for the first transaction to finish.  Thus, if your client 
does the update and then hangs while doing [other stuff], every other 
client that tries to update that row will block until the transaction 
commits, even though the rest of the database will be unaffected.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] GUID for postgreSQL

2005-07-27 Thread Alvaro Herrera
On Wed, Jul 27, 2005 at 07:43:08PM -0400, John DeSoi wrote:

 I'm not saying that GUIDs are the ultimate solution to this problem.  
 The original poster brought up the need to store GUIDs in a database.  
 There are protocols and standards that require GUIDs and I merely  
 agree it would be nice to have a GUID data type.

AFAIR there is one on gborg.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
Just treat us the way you want to be treated + some extra allowance
 for ignorance.(Michael Brusser)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Alvaro Herrera
On Wed, Jul 27, 2005 at 05:13:01PM -0700, Chris Travers wrote:
 Thomas F. O'Connell wrote:
 
 The implicit indexes are no big deal; they're just a sign of indexes  
 getting created by PRIMARY KEYs on your tables.
 
 I'm not sure why you're getting errors. Is there a reason you did the  
 schema dump separately from the data dump rather than a monolithic  
 dump/restore?
 
 I seem to remember encountering an issue some time ago with pg_dump 
 dumping tables in an order that prevented them from being reloaded.

This problem is solved in 8.0's pg_dump.  Not sure if 7.1 has enough
information in catalogs to make the algorithm run correctly -- I wonder
if pg_depend is needed, because AFAIR there was no pg_depend in 7.1.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
The Gord often wonders why people threaten never to come back after they've
been told never to return (www.actsofgord.com)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] transaction timeout

2005-07-27 Thread Alvaro Herrera
On Wed, Jul 27, 2005 at 05:12:46PM -0700, Dr NoName wrote:
 Thanks a lot, everyone! That solved my problem. But I
 still want to be able to set transaction timeout. Any
 chance of that in the next release?

No, because feature freeze for the next release is one month past
already.

Anyway, I think it was proposed and shot down several times already for
past releases.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
La Primavera ha venido. Nadie sabe como ha sido (A. Machado)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Upgrading from 7.1

2005-07-27 Thread Jonathan Villa

Interesting. How would I specify the order of the tables on the command line?

To Thomas: I decided to separate the schema/data export/import to make sure the
schema was at least being created correctly...

How can I export one table by itself? Not just a table, but a view/trigger, 
etc... Is it even possible to export them separately?


-Jonathan

Chris Travers wrote:
 Thomas F. O'Connell wrote:



 Jonathan,

 The implicit indexes are no big deal; they're just a sign of indexes
 getting created by PRIMARY KEYs on your tables.

 I'm not sure why you're getting errors. Is there a reason you did the
 schema dump separately from the data dump rather than a monolithic
 dump/restore?

 I seem to remember encountering an issue some time ago with pg_dump
 dumping tables in an order that prevented them from being reloaded.

 The solution was to specify the order of the tables in the commandline.

 Hope this helps.
 Chris Travers
 Metatron Technology Consulting





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Bad plan when null is in an in list

2005-07-27 Thread Tom Lane
Jaime Casanova [EMAIL PROTECTED] writes:
 On 7/26/05, Csaba Nagy [EMAIL PROTECTED] wrote:
 Seq Scan on big_table  (cost=0.00..2447201.85 rows=448 width=16)
 Filter: ((bigint_col_2 = 12132131::bigint) OR NULL::boolean)
 
 Considering that NULL::boolean is always false, 

 null::boolean is NULL, not false. That is because NULL means 'unknown
 value', neither false nor true.

The reason the planner ends up with this and not just bigint_col_2 =
12132131::bigint is that it's using a general-purpose expression
simplifier, and in the general case we have to keep the NULL arm of
the OR because it can affect the result (the OR output will be NULL
not FALSE if the equality is false).

In the context of the top level of a WHERE clause, we could discard the
NULL, and then the OR, since we do not need to distinguish NULL and
FALSE results.  However, as far as I can see doing this would require an
extra pass over the WHERE clause (it can't readily be folded into any of
the existing traversals because those are done by routines that have
other uses where dropping NULLs would be wrong).  I'm unconvinced that
the use-case for this justifies that much overhead ...
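The reasoning behind keeping the NULL arm is just SQL's three-valued 
logic: FALSE OR NULL is NULL, not FALSE. A small Python model (with None 
standing in for SQL NULL) makes the two cases explicit:

```python
def sql_or(a, b):
    """SQL three-valued OR; None stands in for SQL NULL."""
    if a is True or b is True:
        return True   # TRUE OR anything = TRUE, even TRUE OR NULL
    if a is None or b is None:
        return None   # FALSE OR NULL = NULL, not FALSE
    return False

# The planner's simplified filter: (bigint_col_2 = 12132131) OR NULL
print(sql_or(True, None))   # True  -- equality true: row qualifies either way
print(sql_or(False, None))  # None  -- NULL, not False: the arms differ here...
# ...but at the top level of a WHERE clause, NULL and FALSE both reject
# the row, which is why the NULL arm could in principle be dropped there.
```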

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Christopher Kings-Lynne

So far, the checklist I can see includes:
*  Maintaining conversion scripts


What I think we need is a C program that dumps directly from MySQL into 
PostgreSQL sql.


ie. Take the mysqldump source code and just modify its output.

Will inherit the MySQL license though :(

Chris


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Josh Berkus
KL-

 What I think we need is a C program that dumps directly from MySQL into
 PostgreSQL sql.

Why C?   PerlDBI or JDBC should be able to do this readily enough.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Michael Glaesemann


On Jul 28, 2005, at 11:20 AM, Christopher Kings-Lynne wrote:

What I think we need is a C program that dumps directly from MySQL  
into PostgreSQL sql.


ie. Take the mysqldump source code and just modify its output.

Will inherit the MySQL license though :(


Just the conversion program would, correct? If so, of course it  
couldn't be bundled with the distribution, but that doesn't mean the  
program wouldn't be useful. Seems like a natural for a pgfoundry  
project.


Michael Glaesemann
grzm myrealbox com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] [pgsql-advocacy] MySQL to PostgreSQL, was ENUM type

2005-07-27 Thread Gregory Youngblood
If linking it in directly via C would bring in the MySQL license, and  
you want to avoid that, what about one of the scripting languages  
such as Perl or Python, or possibly even Ruby? Or what about using  
unixODBC to talk to MySQL?


I've written a few Perl scripts for when I need to convert MySQL into  
Postgres. Nothing formalized; I usually just recreate it each time I  
need to do something. My needs are typically pretty simple, though,  
and I know what I'm converting, so that makes it easier.
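For what it's worth, throwaway scripts like those described here mostly 
amount to line-by-line textual rewrites of mysqldump output. A minimal, 
hypothetical Python sketch (the regexes and mappings are illustrative and 
nowhere near complete):

```python
import re

def mysql_to_pg(line):
    """Rewrite one line of mysqldump output for PostgreSQL.
    Hypothetical and incomplete -- just the flavor of such scripts."""
    line = line.replace("`", '"')                      # identifier quoting
    line = re.sub(r"\s*ENGINE=\w+", "", line)          # drop storage-engine clause
    line = re.sub(r"\bint\(\d+\)", "integer", line)    # display widths don't exist in PG
    line = re.sub(r"\bauto_increment\b", "", line, flags=re.I)
    return line

print(mysql_to_pg('CREATE TABLE `users` ('))
print(mysql_to_pg('  `id` int(11) NOT NULL auto_increment,'))
print(mysql_to_pg(') ENGINE=InnoDB;'))
```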


I think one of the more difficult areas will be converting unsigned  
fields from MySQL into Postgres. For the smaller sizes it is possible  
to convert by moving one size up in Postgres and using constraints to  
restrict numbers to be positive, and possibly within the MySQL range  
too. But the problem is unsigned bigint in MySQL: there's no larger  
integer size in PostgreSQL that would allow the maximum value of  
18446744073709551615 (is that the max value?) available in MySQL.  
Or am I missing something?
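The ranges are easy to check: 18446744073709551615 is indeed 2^64 - 1, 
and it exceeds PostgreSQL's signed bigint, which is why numeric(20,0) 
plus a CHECK (col >= 0) constraint is the usual (if slower) fallback; the 
smaller unsigned types all fit one signed size up, as suggested above. A 
quick Python check of those claims:

```python
# MySQL unsigned maxima vs. the PostgreSQL signed type one size up.
mysql_unsigned_max = {
    "tinyint unsigned":  2**8 - 1,    # 255        -> smallint (max 32767)
    "smallint unsigned": 2**16 - 1,   # 65535      -> integer  (max ~2.1e9)
    "int unsigned":      2**32 - 1,   # 4294967295 -> bigint
    "bigint unsigned":   2**64 - 1,   # no bigger integer type exists in PG
}

pg_bigint_max = 2**63 - 1  # 9223372036854775807

# Confirm the value quoted in the thread really is 2^64 - 1 ...
print(mysql_unsigned_max["bigint unsigned"] == 18446744073709551615)  # True
# ... and that it overflows PostgreSQL's bigint, hence numeric(20,0).
print(mysql_unsigned_max["bigint unsigned"] > pg_bigint_max)          # True
```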


I think running into these would be rare, but it is something to be  
considered.


Greg

On Jul 27, 2005, at 7:20 PM, Christopher Kings-Lynne wrote:


So far, the checklist I can see includes:
*  Maintaining conversion scripts



What I think we need is a C program that dumps directly from MySQL  
into PostgreSQL sql.


ie. Take the mysqldump source code and just modify its output.

Will inherit the MySQL license though :(

Chris






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly

