Re: [GENERAL] Planner create a slow plan without an available index

2005-08-31 Thread Ben-Nes Yonatan

Alvaro Herrera wrote:

On Tue, Aug 30, 2005 at 10:39:57PM -0500, Bruno Wolff III wrote:


On Wed, Aug 31, 2005 at 01:27:30 +0200,
 Ben-Nes Yonatan [EMAIL PROTECTED] wrote:

Now again I'm probably just paranoid, but when I'm starting a transaction 
and in it I'm making more than 4 billion different queries 
(select, insert, update, truncate...) and then I'm closing it, it's counted 
as only one transaction, right? (should I duck to avoid the manual? ;))


I believe there is a limit on the number of queries in a transaction of
2 or 4 billion (though this may be just in functions).

Ignoring subtransactions, all these queries count as just one transaction.
I am not sure how subtransactions are counted.



If the subtransaction writes at least a tuple, it counts as another
transaction.  Else it doesn't count.



Oh crap, I fear that now I'm in serious trouble...
Where can I read about this limitation? And besides that, what if I count 
the number of queries and every 900,000 or so I create a subtransaction 
and continue my process with it - will that work, or am I just trying to be 
a smart ass with the db?
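
For reference, a subtransaction at the SQL level is just a SAVEPOINT; a minimal
sketch (the table name is made up) of a chunk that writes a tuple and therefore
consumes its own transaction ID:

BEGIN;
INSERT INTO items VALUES (1);            -- ordinary work in the main transaction
SAVEPOINT chunk_1;                       -- open a subtransaction
INSERT INTO items VALUES (2);            -- writing a tuple gives the subtransaction its own xid
RELEASE SAVEPOINT chunk_1;               -- keep its work
-- or: ROLLBACK TO SAVEPOINT chunk_1;    -- undo only the work done since the savepoint
COMMIT;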


As always thanks alot,
Ben-Nes Yonatan



Re: [GENERAL] Planner create a slow plan without an available index

2005-08-31 Thread Martijn van Oosterhout
On Wed, Aug 31, 2005 at 09:19:05AM +0200, Ben-Nes Yonatan wrote:
 If the subtransaction writes at least a tuple, it counts as another
 transaction.  Else it doesn't count.
 
 
 Oh crap I fear that now im in serious troubles
 Where can I read about this limitation? and beside that what if I count 
 the number of queries and every 900,000 or so I create a subtransaction 
 and continue my process with it, will that work or I'm just trying to be 
 a smart ass with the db?

Um, 1 billion transactions is 1 thousand million. So 900,000
inserts/updates are not even one tenth of one percent of the limit for
one transaction.

Are you really approaching a billion inserts/updates per transaction?
That's a lot of disk space being used...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Planner create a slow plan without an available index

2005-08-31 Thread Ben-Nes Yonatan

Martijn van Oosterhout wrote:

On Wed, Aug 31, 2005 at 09:19:05AM +0200, Ben-Nes Yonatan wrote:


If the subtransaction writes at least a tuple, it counts as another
transaction.  Else it doesn't count.



Oh crap I fear that now im in serious troubles
Where can I read about this limitation? and beside that what if I count 
the number of queries and every 900,000 or so I create a subtransaction 
and continue my process with it, will that work or I'm just trying to be 
a smart ass with the db?



Um, 1 billion transactions is 1 thousand million. So 900,000
inserts/updates are not even one tenth of one percent of the limit for
one transaction.

Are you really approaching a billion inserts/updates per transaction?
That's alot of diskspace being used...

Have a nice day,


No, apparently I just lack decent sleep... I think I'll stop asking 
you guys questions before you decide to get your clubs out... :P
In other words, I was mistaken and thought about a million and not a 
billion :)


With hopes that this is the end of my bugging :)
Thanks a lot,
Ben-Nes Yonatan



Re: [GENERAL] Php abstraction layers

2005-08-31 Thread Hannes Dorbath

On 31.08.2005 05:00, Greg Stark wrote:
  My understanding is that PDO is the way and the light. Use PDO.

Maybe you could provide some arguments on why it should be that?



Re: [GENERAL] optimum settings for dedicated box

2005-08-31 Thread Matthew Peter
Hmmm. I was thinking of a more comprehensive solution
or documentation resource. I would like to know what does
what, and why to tweak something or why not, you know?

Searching gets me such fragmented results that I chose to
ask the people who are more familiar with this fabulous
piece of software and have used it in real-world
situations.

Does anyone know of some good docs on the subject of
dedicated db optimization for postgresql 8.0.3?

Is 8.1 too early to use in a production environment?
With just the regular old 8.0.3 stuff? 

Thanks for the tips too. I always appreciate tips. :)

Thanks again,
Matthew A. Peter


--- Jim C. Nasby [EMAIL PROTECTED] wrote:

 On Tue, Aug 30, 2005 at 09:43:19PM -0700, Ian
 Harding wrote:
  Mine in similar, and the only thing I have changed
 from defaults is
  work_mem.  It made certain complex queries go from
 taking forever to
  taking seconds.  I have a database connection pool
 limited to 10
  connections, so I set it to 10MB.  That means (to
 me, anyway) that
  work_mem will never gobble more then 100MB.  Seems
 OK since I have
  1GB.
 
 That's not totally true. A single query can use
 work_mem for multiple
 steps, so if work_mem is 10MB a single query could
 end up using 20MB,
 30MB, or even more.
 -- 
 Jim C. Nasby, Sr. Engineering Consultant 
 [EMAIL PROTECTED]
 Pervasive Software    http://pervasive.com    512-569-9461
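
To put numbers on the quoted point, a small sketch (the table names and values
are made up):

SET work_mem = '10MB';              -- limit per sort/hash step, per backend
SELECT a.id, b.payload
FROM   big_a a
JOIN   big_b b USING (id)           -- a hash join may use up to 10MB here
ORDER  BY b.payload;                -- and the sort may use up to another 10MB
SHOW work_mem;                      -- check the current session setting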
 
 





Re: [GENERAL] Get postgresql workin in french...

2005-08-31 Thread Stephane Bortzmeyer
On Tue, Aug 30, 2005 at 09:39:40AM -0400,
 Guy Doune [EMAIL PROTECTED] wrote 
 a message of 28 lines which said:

 for getting postgresql accepting my entry with accent an all the
 what the french poeple put over there caracter while they write...

I use PostgreSQL with French data (names, addresses, etc).
 
 I gonna continu to search why it doesn't work...

You should give details such as commands attempted and error messages
obtained.

 I would know how to set the encoding (unicode, ASCII, etc.)

I use Unicode (actually, UTF-8 would have been a better name)
because the database stores names in other languages, that you cannot
express with Latin-1.

I use the type TEXT and I can store and retrieve UTF-8 names (mostly
via Python programs) and basic operators like LIKE work fine (for
instance, _ matches one Unicode character, not one byte, which is
the right thing).
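
For reference, a minimal sketch of the setup described above (the database,
table, and data are illustrative):

CREATE DATABASE frenchdb WITH ENCODING 'UNICODE';    -- server-side storage encoding
-- (connect to frenchdb, then:)
SET client_encoding TO 'UNICODE';                    -- what this client sends and expects
CREATE TABLE people (name TEXT);
INSERT INTO people VALUES ('Benoît');                -- accented characters round-trip
SELECT name FROM people WHERE name LIKE 'Beno_t';    -- _ matches one character, not one byte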




 



Re: [GENERAL] detection of VACUUM in progress

2005-08-31 Thread Bohdan Linda
On Tue, Aug 30, 2005 at 06:07:24PM +0200, Michael Fuhr wrote:
 tables, and a VACUUM might start or complete immediately after you
 issue the query but before you read the results).  This method is
 therefore unreliable.

I intend to do the VACUUM FULL during quiet hours, so the chance of hitting
exactly the window where VACUUM has started but is not yet reflected in
the tables is quite small. And even if it did happen, it would very likely
affect only one user, who can get around it by hitting the refresh button.
 
 
 What problem are you trying to solve?  If we knew what you're really
 trying to do then we might be able to make suggestions.

I have a database which gets around 240,000 new lines each day, and about the
same number is also deleted each day. The table has around 8M lines on
average, and a simple query takes about 70s to complete (V210, 1x
UltraSPARC-IIIi). As this time is quite high, I need to defragment the database
on a daily basis. These queries get visualized in a web application. My
problem is how to make the web application aware that maintenance (VACUUM
FULL) is in progress but the database is not down. I would really rather not
do it via an extra status table, since it may happen that
someone runs VACUUM FULL ad hoc in good faith and forgets to
update the status table.



[GENERAL] Postgresql version

2005-08-31 Thread philip johnson
What version would you recommend to use for a new installation: 8.0.x or
7.4.x

TIA




Re: [GENERAL] Postgresql version

2005-08-31 Thread Richard Huxton

philip johnson wrote:

What version would you recommend to use for a new installation: 8.0.x or
7.4.x


8.0.x - always go for the most recent stable series unless you need to 
maintain backwards compatibility.


If you are just entering development, it might be worth looking at 8.1 
which has just gone into beta-testing.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Postgresql version

2005-08-31 Thread Hannes Dorbath

8.0.3

On 31.08.2005 10:43, philip johnson wrote:

What version would you recommend to use for a new installation: 8.0.x or
7.4.x




[GENERAL] Searching for LTree dmoz-testdata

2005-08-31 Thread Peter Arwanitis

Hi there,

I've chosen pgsql-general because I'm new to the postgres community, and I'm 
looking for a website from Teodor Sigaev and Oleg Bartunov that vanished 
recently (4 days ago?) (ex: http://www.sai.msu.su/~megera/postgres)


Besides tsearch2 they have worked on LTree (Google cache: 
http://64.233.183.104/search?q=cache:AynIHdkXWGMJ:www.sai.msu.su/~megera/postgres/gist/ltree/+gist+site:sai.msu.suhl=en) 
and I'm very interested in the test-data package (dmozltree-eng.sql.gz). 
It could shorten my evaluation time, and I'm interested in dmoz anyway.


Did anyone out here download this data? I cannot find a second 
occurrence online.

A mail to spex66 _at_ gmail would be very much appreciated.

Thanks for any hints.
Greetings from Munich, Germany
Peter
(=PA=)



[GENERAL] How do I copy part of table from db1 to db2 (and rename the columns)?

2005-08-31 Thread Joost Kraaijeveld
Hi,

I want to copy several columns of a source table from db1 to db2, and
create the target table and rename the columns in the process. 

Is that possible in PostgreSQL? If so, an example or URL for such a
command/script would be appreciated...
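
For the single-database case, something like the following sketch illustrates
the idea (table and column names are made up):

CREATE TABLE target_table AS
SELECT col_a AS new_name_a,          -- pick a subset of the source columns
       col_b AS new_name_b           -- and rename them on the way
FROM   source_table;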

TIA

Joost 




Re: [GENERAL] detection of VACUUM in progress

2005-08-31 Thread Ben-Nes Yonatan

Bohdan Linda wrote:

On Tue, Aug 30, 2005 at 06:07:24PM +0200, Michael Fuhr wrote:


tables, and a VACUUM might start or complete immediately after you
issue the query but before you read the results).  This method is
therefore unreliable.



I intend to do the VACUUM FULL during quiet hours, thus the chance of
fitting exactly to the time that VACUUM started and it is not reflected in
the tables is quite small. And even if it would happend, very likely it will
affect only one user, who may get around hitting refresh button.



What problem are you trying to solve?  If we knew what you're really
trying to do then we might be able to make suggestions.



I have database, which gets around 240 000 new lines each day and about the
same is also deleted each day. The table has something around 8M lines in
average and simple query takes about 70s to complete(V210 1x
UltraSPARC-IIIi). As this time is quite high, I need defragment database
on daily basis. These queries get visualized in web application. My
problem is, how to make the web application aware that maintainace (VACUUM
FULL) is in place, but the database is not down. I really would not like
to do it via extra status table, while sometimes it may happend, that
someone will run VACUUM FULL ad-hoc-ly in good-faith and will forget to
update the status table.



From the postgresql manual 
http://www.postgresql.org/docs/8.0/interactive/maintenance.html :
 The standard form of VACUUM is best used with the goal of maintaining 
a fairly level steady-state usage of disk space. If you need to return 
disk space to the operating system you can use VACUUM FULL — but what's 
the point of releasing disk space that will only have to be allocated 
again soon? Moderately frequent standard VACUUM runs are a better 
approach than infrequent VACUUM FULL runs for maintaining 
heavily-updated tables.


From this I conclude that an ordinary VACUUM is sufficient for your 
purpose, since you insert/delete almost the same amount of data daily.


But then again I may be mistaken, so if anyone can back me up here or 
throw the manual at me, that would be nice ;P


Cheers



[GENERAL] Removing all users from a group

2005-08-31 Thread David Sankel
Hello List,

I'm trying to delete all the users from a group and it seems as though there isn't sufficient array functionality to do it.

The pg_group columns:
http://www.postgresql.org/docs/8.0/interactive/catalog-pg-group.html

The pg_user columns:
http://www.postgresql.org/docs/8.0/interactive/view-pg-user.html

After having a peek at the above, we know we can see all the users in a group with this:

SELECT * 
FROM pg_user, pg_group
WHERE usesysid = ANY (grolist)
AND groname = 'somegroupname';

ANY is a function that can tell if a value is in an array:
http://www.postgresql.org/docs/8.0/interactive/arrays.html#AEN5491

Okay, that select function worked fine, but if we want to delete we
cannot use a join (implicit by the ',') to help us out. So the
following should work:

DELETE FROM pg_user
WHERE usesysid = ANY ( SELECT grolist
                       FROM pg_group
                       WHERE groname = 'somegroupname' )

But, alas, it doesn't. Neither does any combination of IN and ANY. It
seems to me like this should work since the same syntax works if we
weren't talking about arrays.

So, how can we delete all users within a specified group? Is there a bug or is the above code incorrect?

When testing the above delete function, I found it useful to substitute SELECT * for DELETE to get non-destructive queries.

Thanks for any help,

David J. Sankel


Re: [GENERAL] How do I copy part of table from db1 to db2 (and rename the columns)?

2005-08-31 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-08-31 12:00:30 +0200:
 I want to copy several columns of a source table from db1 to db2, and
 create the target table and rename the columns in the process. 
 
 Is that possible in PostgresQL? If so, an example or url for such a
 command /script would be appreciated...

check these man pages: pg_dump(1), pg_restore(1), alter_table(7)

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



Re: [GENERAL] Searching for LTree dmoz-testdata

2005-08-31 Thread Oleg Bartunov

Peter,

we have a problem with the mb on our server (it's a Supermicro for dual ppro :),
but you can use the mirror at my desktop machine
http://mira.sai.msu.su/~megera/postgres

Oleg
On Wed, 31 Aug 2005, Peter Arwanitis wrote:


Hi there,

I've choosen pgsql-general, cause I'm new to postgres community and I'm 
looking for a actually (4 days ago?) vanished website from Teodor Sigaev and 
Oleg Bartunov (ex: http://www.sai.msu.su/~megera/postgres)


Bside tsearch2 they have worked on LTree (googlecache: 
http://64.233.183.104/search?q=cache:AynIHdkXWGMJ:www.sai.msu.su/~megera/postgres/gist/ltree/+gist+site:sai.msu.suhl=en) 
and I'm very interested in the test-data package (dmozltree-eng.sql.gz). It 
could shorten my evaluation time and I'm interested in dmoz anyway.


Did anyone out here downloaded this data??? I cannot find a second occurence 
online.

mail to spex66 _at_ gmail would be very appreciated

thanks for hints
greetings from Munich, Germany
Peter
(=PA=)




Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: [GENERAL] How do I copy part of table from db1 to db2 (and

2005-08-31 Thread Joost Kraaijeveld
On Wed, 2005-08-31 at 12:40 +0200, Roman Neuhauser wrote:
 check these man pages: pg_dump(1), pg_restore(1), alter_table(7)

I am afraid that the problem is more complex. The original database (which is 
created with SQL_ASCII) contains invalid byte sequences in some columns (target 
database created with UNICODE):

[EMAIL PROTECTED]:~/postgresql$  pg_dump -t artik munttest | psql muntfinal > output.txt
ERROR:  invalid byte sequence for encoding UNICODE: 0xeb207a
CONTEXT:  COPY artik, line 11705, column omschrijving: Anna v. Groot
Brittannië zi prf 38.61 mm

So I cannot dump/restore/alter the table. I was hoping that by piping the text
from stdout to psql a valid conversion to Unicode would take place,
but apparently that is not the case.

Any other ideas?

Joost




Re: [GENERAL] detection of VACUUM in progress

2005-08-31 Thread Bohdan Linda
 
 From the postgresql manual 
 http://www.postgresql.org/docs/8.0/interactive/maintenance.html :
  The standard form of VACUUM is best used with the goal of maintaining 
 a fairly level steady-state usage of disk space. If you need to return 
 disk space to the operating system you can use VACUUM FULL ? but what's 
 the point of releasing disk space that will only have to be allocated 
 again soon? Moderately frequent standard VACUUM runs are a better 
 approach than infrequent VACUUM FULL runs for maintaining 
 heavily-updated tables.
 
 From this I conclude that an ordinary VACUUM is sufficent to your 
 purpose cause you insert/delete almost the same amount of data daily.
 
 But then again I can be mistaken so if anyone can back me up here or 
 throw the manual on me will be nice ;P


If I vacuum only the table, the records may be reused by new lines; that is
fine. The problem is that when running a select on such a table, more
pages have to be read from the I/O (it will also read deleted rows), so the
select will last a bit longer. 

regards,
Bohdan 



Re: [SQL] [GENERAL] How do I copy part of table from db1 to db2 (and

2005-08-31 Thread Thomas Pundt
On Wednesday 31 August 2005 14:00, Joost Kraaijeveld wrote:
| So I cannot dump/restore/alter table. I was hoping that piping the text
| from stdout to psql that a valid conversion to unicode would take place
| but apparently that is not the case.
|
| Any other ideas?

maybe the recode utility can help then? Something like

  pg_dump -t artik munttest | recode latin1..utf | psql muntfinal

Ciao,
Thomas

-- 
Thomas Pundt [EMAIL PROTECTED]  http://rp-online.de/ 




Re: [SQL] [GENERAL] How do I copy part of table from db1 to db2 (and

2005-08-31 Thread Thomas Pundt
On Wednesday 31 August 2005 14:09, Thomas Pundt wrote:
| maybe the recode utility can help then? Something like
|
|   pg_dump -t artik munttest | recode latin1..utf | psql muntfinal

sorry to follow up on myself, but that command should read

  pg_dump -t artik munttest | recode latin1..utf8 | psql muntfinal
 
(utf is not a valid recode charset, utf8 is).

Ciao,
Thomas

-- 
Thomas Pundt [EMAIL PROTECTED]  http://rp-online.de/ 




Re: [GENERAL] Removing all users from a group

2005-08-31 Thread Patrick . FICHE



Hi David,

The correct syntax would probably be:

DELETE FROM pg_user
WHERE usesysid = ANY (pg_group.grolist)
AND pg_group.groname = 'somegroupname'

However, you won't be allowed to delete users this way.
The only recommended method is to use the DROP USER command to remove users...

One way to achieve this is to use a function, even if easier methods probably exist...

CREATE OR REPLACE FUNCTION RemoveUsersFromGroup( text ) RETURNS int4 AS $$
DECLARE
    _UserGroupName ALIAS FOR $1;
    _Username      RECORD;
    _deleted       int4;
BEGIN
    _deleted := 0;
    FOR _Username IN
        SELECT usename FROM pg_user, pg_group
        WHERE usesysid = ANY (grolist)
          AND groname = _UserGroupName
    LOOP
        _deleted := _deleted + 1;
        EXECUTE( 'DROP USER ' || _Username.usename );
    END LOOP;

    RETURN _deleted;
END
$$ LANGUAGE 'plpgsql';

-- SELECT RemoveUsersFromGroup( 'test' )

Regards,
Patrick

--- 
Patrick Fiche
email : [EMAIL PROTECTED]
tél : 01 69 29 36 18
--- 




Re: [GENERAL] detection of VACUUM in progress

2005-08-31 Thread Patrick . FICHE
Your table size should stabilize to the size you currently get before VACUUM
FULL
If you're afraid about having too many deleted rows, you just have to run
VACUUM more often...

I think that you have to test how often you have to run VACUUM so that your
performance is not impacted
Or perhaps you could try pg_autovacuum which will run VACUUM for you based
on statistics...

Regards,
Patrick


--- 
Patrick Fiche 
email : [EMAIL PROTECTED] 
tel : 01 69 29 36 18 

--- 




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Bohdan Linda
Sent: mercredi 31 aout 2005 14:19
To: Ben-Nes Yonatan
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] detection of VACUUM in progress


 
 From the postgresql manual 
 http://www.postgresql.org/docs/8.0/interactive/maintenance.html :
  The standard form of VACUUM is best used with the goal of maintaining 
 a fairly level steady-state usage of disk space. If you need to return 
 disk space to the operating system you can use VACUUM FULL ? but what's 
 the point of releasing disk space that will only have to be allocated 
 again soon? Moderately frequent standard VACUUM runs are a better 
 approach than infrequent VACUUM FULL runs for maintaining 
 heavily-updated tables.
 
 From this I conclude that an ordinary VACUUM is sufficent to your 
 purpose cause you insert/delete almost the same amount of data daily.
 
 But then again I can be mistaken so if anyone can back me up here or 
 throw the manual on me will be nice ;P


If I vacuum only the table, the records may be used by new lines, that is
fine. Problem is, that when creating select on such table, it takes more
pages to be read from the IO (it will read laso deleted rows) thus the
select will last a bit longer. 

regards,
Bohdan 



Re: [GENERAL] About ERROR: must be *superuser* to COPY to or from a file

2005-08-31 Thread Doug Bloebaum
Jim Nasby asked:

 What do people think about the Oracle method where bulk data operations
 can only occur in a specified directory? Making that restriction might
 address some of the security concerns. I don't think we should change
 COPY in such a way that you *have* to use a specified directory, but if
 it was an option that helped with the security concerns...

Oracle's new (well, since 9i) DIRECTORY object (see
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10759/statements_5007.htm#sthref4678)
has proven itself to be a flexible approach for me.

A privileged account creates the DIRECTORY object like so:

CREATE OR REPLACE DIRECTORY my_dir AS '/var/tmp/my_dir';

and then grants access to it:

GRANT READ ON my_dir to db_username;

I'd envision the COPY command using the DIRECTORY object something like:

COPY my_table FROM my_dir:'file.txt';





Re: [GENERAL] How do I copy part of table from db1 to db2 (and

2005-08-31 Thread Tino Wildenhain

Joost Kraaijeveld schrieb:

On Wed, 2005-08-31 at 12:40 +0200, Roman Neuhauser wrote:


   check these man pages: pg_dump(1), pg_restore(1), alter_table(7)



I am afraid that the problem is more complex. The original database (which is 
created with SQL_ASCII) contains invalid byte sequences in some columns (target 
database created with UNICODE):

[EMAIL PROTECTED]:~/postgresql$  pg_dump -t artik munttest | psql muntfinal > output.txt


ERROR:  invalid byte sequence for encoding UNICODE: 0xeb207a
CONTEXT:  COPY artik, line 11705, column omschrijving: Anna v. Groot
Brittannië zi prf 38.61 mm

So I cannot dump/restore/alter table. I was hoping that piping the text
from stdout to psql that a valid conversion to unicode would take place
but apparently that is not the case.

Any other ideas?


If you know the implicit charset you used in your SQL_ASCII
db, try: SET client_encoding TO 'the_charset';
in your import script, if you use one.
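
A sketch of that suggestion, assuming the legacy bytes really are Latin-1 (that
encoding is only a guess):

SET client_encoding TO 'LATIN1';   -- what the dump's bytes actually are (an assumption)
-- ...then run the dump's COPY/INSERT statements in this same session; the
-- server converts from LATIN1 to the database's UNICODE encoding on the way in.
RESET client_encoding;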



[GENERAL] user defined type, plpgsql function and NULL

2005-08-31 Thread Bjoern A. Zeeb
Hi,

let's say one has an user defined data type

CREATE TYPE foobar_t AS ( va varchar(25), vb varchar(4), vc varchar(20), ia 
integer, ib integer );

and a stored procedure in plgpsql (stripped and sample only):

CREATE OR REPLACE FUNCTION foobar(int, foobar_t, int, varchar) RETURNS INT AS'
DECLARE
ia  ALIAS FOR $1;
fbt ALIAS FOR $2;
ib  ALIAS FOR $3;
vc  ALIAS FOR $4;
BEGIN
...
IF fbt IS NULL THEN
RAISE NOTICE ''fbt IS NULL;'';
ELSE
RAISE NOTICE ''fbt IS NOT NULL... '';
IF fbt.va IS NULL THEN
RAISE NOTICE ''fbt.va IS NULL;'';
ELSE
RAISE NOTICE ''fbt.va = %'', fbt.va;
END IF;
...
END IF;
...

RETURN 0;
END'
LANGUAGE plpgsql;


If one does a

SELECT foobar(1, NULL, 2, 'end');
NOTICE:  ia = 1
NOTICE:  fbt IS NOT NULL... 
NOTICE:  fbt.va IS NULL;
NOTICE:  fbt.vb IS NULL;
NOTICE:  fbt.vc IS NULL;
NOTICE:  fbt.ia IS NULL;
NOTICE:  fbt.ib IS NULL;
NOTICE:  ib = 2
NOTICE:  vc = end
 foobar 

  0
(1 row)


Note that the second argument (foobar_t) is given as NULL,
but $2 IS NOT NULL.

I cannot find anything about this in the docs, but I assume
that the single NULL will implicitly set all attributes to NULL?
Is this correct, or is it just something that works like this now but may
change at any time in the future?


-- 
Greetings
Bjoern A. Zeeb



Re: [GENERAL] or kills performance

2005-08-31 Thread Tom Lane
Sim Zacks [EMAIL PROTECTED] writes:
 The parenthesis are correct:
 where a.batchid=382 and e.stock>0 and e.ownerid=1 and
 (
 (d.leadfree and leadstateid in (1,3) )
  or 
 (not d.leadfree and leadstateid in (2,3,4) )
 )

[ goes back and looks more closely ]  The row count estimates in your
EXPLAINs are so far off that it's a wonder you got an OK plan for either
query.  Have you ANALYZEd these tables recently?

The direct source of the problem seems to be that leadfree and
leadstateid come from different tables, so you're taking what had been
independent filter conditions for the two tables and converting them into
a join condition that can't be applied until after the join.  However it
doesn't look to me like that is really a huge deal, because apparently
these conditions are pretty unselective and you'd be reading most of
each table anyway.  What is really causing the problem is the choice to
join partsassembly last in the slow query; in the faster query, that's
joined before joining to assemblies and assembliesbatch, and apparently
that reduces the number of joins to assembliesbatch very considerably.

With the rowcount estimates so far off, though, it's really just luck
that the planner makes a good join order choice in either case.  And it
doesn't look like the conditions are too hard for the planner to figure
out ... I think it must be working with obsolete statistics.
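
For completeness, refreshing the statistics is just (table names taken from the
query under discussion):

ANALYZE partsassembly;
ANALYZE assemblies;
ANALYZE assembliesbatch;
-- or, for the whole database in one go:
VACUUM ANALYZE;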

regards, tom lane



Re: [SQL] [GENERAL] How do I copy part of table from db1 to db2 (and

2005-08-31 Thread Tom Lane
Joost Kraaijeveld [EMAIL PROTECTED] writes:
 I am afraid that the problem is more complex. The original database
 (which is created with SQL_ASCII) contains invalid byte sequences in
 some columns (target database created with UNICODE):

There is no magic bullet to make bad data better.  If the original data
is all in a specific encoding that happens not to be unicode, then you
can get Postgres to translate it for you --- just edit the dump file and
change CLIENT_ENCODING to the real original encoding before reloading.
If, as seems more likely, there's a mishmash of different encodings then
you are in for some pain.  At the minimum you'll have to separate out
the rows that are in each encoding so you can pass them through
different conversion processes.

regards, tom lane



Re: [GENERAL] Get postgresql workin in french...

2005-08-31 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Guy Doune [EMAIL PROTECTED] writes:

 Hi,
 I would know how to set the encoding (unicode, ASCII,
 etc.) for getting postgresql accepting my entry with
 accent an all the what the french poeple put over
 there caracter while they write...

French is covered both by Latin9 and Unicode.  Don't use Latin1 - it's
missing the French OE ligature.




Re: [SQL] [GENERAL] How do I copy part of table from db1 to db2

2005-08-31 Thread Joost Kraaijeveld
On Wed, 2005-08-31 at 14:14 +0200, Thomas Pundt wrote:

   pg_dump -t artik munttest | recode latin1..utf8 | psql muntfinal
Because the source encoding is unknown (the actual source database was
an ODBC source without a known encoding that was copied by a C++ program
into a SQL_ASCII PostgreSQL database), I used no source encoding:

pg_dump -t artik munttest | recode ..utf8 | psql muntfinal

and that worked: no errors. I just lost all diacritical chars as far as
I can see (which is minor, and someone else's problem ;-)). 

Thanks for the suggestion.

Joost





Re: [SQL] [GENERAL] How do I copy part of table from db1 to db2

2005-08-31 Thread Joost Kraaijeveld
On Wed, 2005-08-31 at 10:29 -0400, Tom Lane wrote:
 Joost Kraaijeveld [EMAIL PROTECTED] writes:
 If, as seems more likely, there's a mishmash of different encodings then
 you are in for some pain.  At the minimum you'll have to separate out
Yep. The original database (which is copied to an SQL-ASCII PostgreSQL
database) is a mishmash of encodings. Actually no official encoding is
given for the database. 

But I managed to get an acceptable (for me that is) import, only losing
all the diacritical chars for this moment (see other mail)

Thanks for responding,

Joost. 



Re: [GENERAL] detection of VACUUM in progress

2005-08-31 Thread Ben-Nes Yonatan

Bohdan Linda wrote:
From the postgresql manual 
http://www.postgresql.org/docs/8.0/interactive/maintenance.html :
 The standard form of VACUUM is best used with the goal of maintaining 
a fairly level steady-state usage of disk space. If you need to return 
disk space to the operating system you can use VACUUM FULL ? but what's 
the point of releasing disk space that will only have to be allocated 
again soon? Moderately frequent standard VACUUM runs are a better 
approach than infrequent VACUUM FULL runs for maintaining 
heavily-updated tables.


From this I conclude that an ordinary VACUUM is sufficent to your 
purpose cause you insert/delete almost the same amount of data daily.


But then again I can be mistaken so if anyone can back me up here or 
throw the manual on me will be nice ;P




If I vacuum only the table, the records may be used by new lines, that is
fine. Problem is, that when creating select on such table, it takes more
pages to be read from the IO (it will read laso deleted rows) thus the
select will last a bit longer. 


regards,
Bohdan 



As far as I understand the vacuum process, it does remove the deleted 
rows, so the queries that run after the vacuum will not read those rows.
Of course the table will stay the same size as it was with the deleted 
rows, but I don't think that with a proper index it will result in any 
meaningful overhead...


But then again it's just my opinion, and anyway lately I'm feeling 
increasingly amateur at this subject... :)


Cheers!
Ben-Nes Yonatan



Re: [GENERAL] detection of VACUUM in progress

2005-08-31 Thread Scott Marlowe
On Wed, 2005-08-31 at 07:18, Bohdan Linda wrote:
  
  From the postgresql manual 
  http://www.postgresql.org/docs/8.0/interactive/maintenance.html :
   The standard form of VACUUM is best used with the goal of maintaining 
  a fairly level steady-state usage of disk space. If you need to return 
  disk space to the operating system you can use VACUUM FULL ? but what's 
  the point of releasing disk space that will only have to be allocated 
  again soon? Moderately frequent standard VACUUM runs are a better 
  approach than infrequent VACUUM FULL runs for maintaining 
  heavily-updated tables.
  
  From this I conclude that an ordinary VACUUM is sufficent to your 
  purpose cause you insert/delete almost the same amount of data daily.
  
  But then again I can be mistaken so if anyone can back me up here or 
  throw the manual on me will be nice ;P
 
 
 If I vacuum only the table, the records may be used by new lines, that is
 fine. Problem is, that when creating select on such table, it takes more
 pages to be read from the IO (it will read laso deleted rows) thus the
 select will last a bit longer. 

It really depends on what percentage of rows you're updating.

If you are updating 240,000 rows a day, and have a database with 100M
rows, then that's not too bad.  Regular vacuums once a day would be
plenty.

If you're updating 240,000 rows a day, spread out over the day, and you
have a table that has 10,000 rows, then you will need to run vacuum far
more often to keep the table at a steady state of 10,000 to 20,000 rows.

If you're doing the 240,000 updates all at once on a small table, then
you might well be a candidate for a vacuum full.

So, again, it's about percentages really.  If 240k rows represents 1% of
your table, then daily, regular vacuums will do fine.

Personally, I just install pg_autovacuum and check on it once a week or
so to make sure it's doing its job.
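
In command form, the difference discussed above (the table name is
illustrative):

VACUUM ANALYZE big_table;   -- routine: marks dead rows reusable and refreshes statistics, no exclusive lock
VACUUM FULL big_table;      -- compacts the table and returns space to the OS, but takes an exclusive lock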



Re: [GENERAL] PQConnectdb SSL (sslmode): Is this a bug

2005-08-31 Thread vishal saberwal
Why do I have to be the one making such a silly mistake ...
thanks for your help and yes you were right ...
this fixed my problem ...
thanks,
vish

On 8/30/05, Michael Fuhr [EMAIL PROTECTED] wrote:
 On Tue, Aug 30, 2005 at 10:40:26AM -0700, vishal saberwal wrote:
  Root user: /root/.postgressql:
 Is this the actual directory name?  It's misspelled: it should be
 .postgresql, not .postgressql.
 --
 Michael Fuhr


[GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Frank

Hello,

I would like to try postgresql  and was wondering what benefits it 
has over MySql.

I am using navicat to manage my MySQL  databases.

Thanks for the help.

Frank





Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Michael Fuhr
On Wed, Aug 31, 2005 at 11:50:55AM -0400, Frank wrote:
 I would like to try postgresql  and was wondering what benefits it 
 has over MySql.

Search the list archives to see past discussion, or use a search
engine like Google with words like postgresql mysql comparison.
A couple of links you might want to read are:

http://sql-info.de/mysql/gotchas.html
http://sql-info.de/postgresql/postgres-gotchas.html

-- 
Michael Fuhr



[GENERAL] PQexecParams-Problem

2005-08-31 Thread Mavinakuli, Prasanna (STSD)
Hi,

Problem Description:

We need to insert binary data into tables and retrieve the
same (the data type is bytea).
We are using PQexecParams for inserting and retrieving the data.
The table contains non-binary data as well.

When we try to fetch the integer data (type int2)
we are not able to get anything from that result set (the value is always
zero), but PQexec returns the desired value.

Questions:

1) How can we get the integer value from the result set returned by
executing PQexecParams?
(the sample code is given below)

2) If that's not possible, then how can we get the BINARY values using
PQexec?

Here is the code snapshot:

string query = "select intval from dummy where intval=7;";

res1 = PQexecParams(conn, query.c_str(), 0, NULL, NULL, NULL, NULL, 1);
for (unsigned char i = 0; i < 4; i++)
{
    printf("%d \n", (PQgetvalue(res1,0,0)[i]));
    //prints  here
}

cout << "value from pqexec params " << atoi(PQgetvalue(res1,0,0)) << endl; //prints 0 here

cout << "using pqexec ***" << endl;
res1 = PQexec(conn, "select intval from dummy where intval=7;"); //prints 7 below
cout << "value from pqexec **" << atoi(PQgetvalue(res1,0,0)) << endl;


Prasanna. 



Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Richard Huxton

Frank wrote:

Hello,

I would like to try postgresql  and was wondering what benefits it has 
over MySql.

I am using navicat to manage my MySQL  databases.


Hi Frank. You don't say what you want to use your RDBMS for, so it's a 
bit difficult to point out what about PostgreSQL would be best for you.


Can you expand on what you are trying to do? What platforms you want to 
run on? What experience with other RDMBS you have?


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread aly . dharshi

Hello Frank,

	I believe that PostgreSQL is a proper RDBMS adhering to the ACID 
principles, as well as has similar functions to Oracle, in terms of having 
procedural language support with pg/plsql and the ability to also use 
other languages such as perl, java (pl/perl and pl/java) to do the same.


	I think that you can use the GUI tool PgAdmin III to do what you 
probably do with Navicat for MySQL. MySQL is not really free there are 
some caveats that you have to look out for, PostgreSQL is 100% free (as 
like in the brew).


	I am sure that there are others who could give you a better 
insight than I have, BUT hopefully this helps.


Cheers,

Aly

On Wed, 31 Aug 2005, Frank wrote:


Hello,

I would like to try postgresql  and was wondering what benefits it has over 
MySql.

I am using navicat to manage my MySQL  databases.

Thanks for the help.

Frank






--
Aly S.P Dharshi
[EMAIL PROTECTED]

 A good speech is like a good dress
  that's short enough to be interesting
  and long enough to cover the subject



Re: [GENERAL] PQexecParams-Problem

2005-08-31 Thread Tom Lane
Mavinakuli, Prasanna (STSD) [EMAIL PROTECTED] writes:
 Here is the code snapshot:
 string query = "select intval from dummy where intval=7;";

 res1 = PQexecParams(conn, query.c_str(), 0, NULL, NULL, NULL, NULL, 1);
 for (unsigned char i = 0; i < 4; i++)
 {
     printf("%d \n", (PQgetvalue(res1,0,0)[i]));
     //prints  here
 }

Um, what's the actual data type of intval?  The above would be expected
if it were, say, bigint.

regards, tom lane



[GENERAL] Messages don't appear

2005-08-31 Thread Carlos Henrique Reimer

Hi,

I'm trying to post messages to the performance list but they don't appear in the list.


What can be wrong?

Reimer

Re: [GENERAL] Php abstraction layers

2005-08-31 Thread Guy Fraser
On Tue, 2005-30-08 at 21:00 -0700, Chris Travers wrote:
 Antimon wrote:
 
 Hi,
 I'm working on a new web project based on php and i decided to use
 PostgreSQL 8.x as
 dbms. For triggers, views, stored procedures etc. I was going to write
 a simple wrapper class and use pg_* functions. But some friends adviced
 me to use an abstraction layer. I checked PEAR:DB and AdoDB. They look
 pretty but i don't understand why sould i need one?
 
 Do yourself a favor and write lightweight wrapper functions.  This means 
 that if something needs to be changed (say, a PHP API name change 
 happens) you don't have to rewrite a lot of your code.  Additionally, if 
 you do have to port someday to Interbase or even (gasp) MySQL, it 
 becomes possible thought not always straightforward.
   *Simple* and light database abstractions are very nice because they 
 isolate your framework from the API syntax and after a few years, 
 something could change and then you don't have to rewrite a whole lot.
 
I would have to agree with this.

As a network administrator I work with IP and MAC addresses a lot, 
so I can use the extensive support for them as leverage when 
choosing a database for a project. That being said, I have other 
people to please and sometimes they get their way and I have to 
use MySQL. I do not use PEAR or CPAN modules because I haven't 
had the time to learn their APIs and don't know how secure they 
are.

As Chris suggests, I write wrapper functions to create and 
maintain one or more connections to the database as well as 
functions for inserting, updating, deleting and making 
different kinds of selections from the database. I have found 
it handy to store status, error and data in an array of 
associative arrays for each transaction with the database. The 
trick to making the code compatible is how and where the error 
and status data is stored. The PostgreSQL pg_ functions return 
quite different error and status information than the MySQL 
functions.

Another trick is to write wrapper functions that take a 
driver option to determine which library to use. Since the 
same array format is returned from either library the rest of 
the code doesn't need to know anything about the database.

One thing to take care with is making sure you escape and 
filter anything put into a database command, to ensure that 
you don't create SQL injection vulnerabilities.





Re: [GENERAL] Php abstraction layers

2005-08-31 Thread Antimon
I wrote a wrapper class based on the pg_ functions, and added some logging and
exception-throwing capabilities etc.
I'm planning to use only prepared statements and the pg_query_params
function when using user-submitted data in queries, to avoid
SQL injection. I believe that is enough, but I'm going to do some tests.
This is the best way, I think. 
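
For reference, the server-side shape of a prepared statement - a sketch with
made-up names - shows why a bound parameter cannot inject SQL:

PREPARE find_user (text) AS
    SELECT * FROM users WHERE username = $1;
EXECUTE find_user('o''malley');    -- the value is bound as data, never spliced into the SQL text
DEALLOCATE find_user;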

Thanks to everyone.




Re: [GENERAL] optimum settings for dedicated box

2005-08-31 Thread Bruno Wolff III
On Wed, Aug 31, 2005 at 00:50:20 -0700,
  Matthew Peter [EMAIL PROTECTED] wrote:
 
 Is 8.1 to early to use in a production environment?
 With just the regular old 8.0.3 stuff? 

8.1 is still in early beta and you definitely don't want to use it in
production. It has some nice improvements, so you at least want to look at
the tentative release notes to see if you might want to upgrade to it
sooner rather than later.

8.0.4 will be released shortly (probably in a few days) and you will
want to use that in preference to 8.0.3.



Re: [GENERAL] Removing all users from a group

2005-08-31 Thread Bruno Wolff III
On Wed, Aug 31, 2005 at 12:30:14 +0200,
  David Sankel [EMAIL PROTECTED] wrote:
 
 DELETE FROM pg_user
 WHERE usesysid = ANY ( SELECT grolist
 FROM pg_group
 WHERE groname = 'somegroupname' )
 
 But, alas, it doesn't. Neither does any combination of IN and ANY. It seems 
 to me like this should work since the same syntax works if we weren't 
 talking about arrays.

In 8.1 you will be able to replace the second FROM with USING to do a join
in a DELETE. Currently you need to take advantage of the implicit
"missing FROM" feature.
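
A sketch of the 8.1-style form Bruno refers to (it only illustrates the join
syntax; as noted elsewhere in the thread, users should really be removed with
DROP USER):

DELETE FROM pg_user
USING  pg_group
WHERE  pg_user.usesysid = ANY (pg_group.grolist)
AND    pg_group.groname = 'somegroupname';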



Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Frank

Thanks for the feedback, sorry I was not more specific.
We are a non-profit hospital and have been using MySQL for about 4 years.

I wanted to convert some apps over to use postgresql and cannot find 
a good tool to import and auto create the tables.

MySQL syntax is not compatible with postgresql.
I get:
ERROR:  syntax error at or near ` at character 14
from the MySQL output below.

CREATE TABLE `category` (
  `category_id` int(11) NOT NULL auto_increment,
  `category` char(50) default NULL,
  `LastUser` int(11) NOT NULL default '0',
  `LastUpdated` timestamp NOT NULL default CURRENT_TIMESTAMP on 
update CURRENT_TIMESTAMP,

  PRIMARY KEY  (`category_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;


insert  into category values
(4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
(5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
(6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
(7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02'),
(9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00'),
(10, 'Hardware - PC Desktop', 2, '2004-10-12 10:50:03'),
(11, 'Software - PC', 2, '2004-10-12 10:50:03'),
(13, 'Software - Network', 2, '2004-10-12 10:50:04'),
(14, 'Software - Midrange, AS/400', 2, '2004-10-12 10:50:04'),
(15, 'Software - Server', 2, '2004-10-12 10:50:04'),
(16, 'Hardware - Wyse Terminal', 2, '2004-10-12 10:50:05');

Regards,

Frank 






[GENERAL] temp tables remain after server restart

2005-08-31 Thread Hari Bhaskaran
Hi,

one of our programs went haywire and created around 200,000 temp
tables. In the end, I restarted the db, but the temporary tables are
still around

the query 

SELECT n.nspname, c.relname, c.relkind, c.relpages, c.reltuples FROM
pg_class c LEFT JOIN pg_namespace n ON n.oid = c.relnamespace WHERE
(c.relkind = 'r'::char OR c.relkind = 'i'::char) order by relpages

still shows all 200,000 of them.
What should I be doing to clean it up?

They are all under pg_temp_x namespaces

Any help is appreciated.
--
Hari



Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Scott Marlowe
On Wed, 2005-08-31 at 13:50, Frank wrote:
 Thanks for the feedback, sorry I was not more specific.
 We are a non-profit hospital and have been using MySQL for about 4 years.
 
 I wanted to convert some apps over to use postgresql and cannot find 
 a good tool to import and auto create the tables.

Look in the contrib/mysql directory in the source file (or install the
contrib packages for your system, assuming they come with that contrib
package.)

 MySQL syntax is not compatible with postgresql.

Generally speaking, MySQL syntax is just not compatible.  With anything.

 I get:
 ERROR:  syntax error at or near ` at character 14
 from the MySQL output below.
 
 CREATE TABLE `category` (
`category_id` int(11) NOT NULL auto_increment,
`category` char(50) default NULL,
`LastUser` int(11) NOT NULL default '0',
`LastUpdated` timestamp NOT NULL default CURRENT_TIMESTAMP on 
 update CURRENT_TIMESTAMP,
PRIMARY KEY  (`category_id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Yeah, the SQL spec says to use " for that, not `.  There's a MySQL
switch that makes it use the right character, but it breaks many
applications so no one uses it.  A simple sed or perl script, or even a
search and replace should do the trick.  If you don't need upper / lower
case in your table names, just don't quote them (ever) and they'll case
fold internally to lower case in postgresql.

Note that instead of autoincrement, use the macro serial.

CREATE TABLE category (
   category_id serial,
   category char(50) default NULL,
   LastUser int NOT NULL default '0',
   LastUpdated timestamp NOT NULL default now(),
   PRIMARY KEY  (category_id)
);

Note that since you declared category_id as a primary key, there's no
real need for the not null, since it's implied with pk.

If you want an auto updating last update field you'll need a simple
trigger to do that.  
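
A sketch of such a trigger (the function and trigger names are made up):

CREATE OR REPLACE FUNCTION touch_lastupdated() RETURNS trigger AS $$
BEGIN
    NEW.lastupdated := now();    -- stamp the row on every insert or update
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER category_touch
    BEFORE INSERT OR UPDATE ON category
    FOR EACH ROW EXECUTE PROCEDURE touch_lastupdated();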

 insert  into category values
 (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
 (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
 (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
 (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02'),
 (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00'),
 (10, 'Hardware - PC Desktop', 2, '2004-10-12 10:50:03'),
 (11, 'Software - PC', 2, '2004-10-12 10:50:03'),
 (13, 'Software - Network', 2, '2004-10-12 10:50:04'),
 (14, 'Software - Midrange, AS/400', 2, '2004-10-12 10:50:04'),
 (15, 'Software - Server', 2, '2004-10-12 10:50:04'),
 (16, 'Hardware - Wyse Terminal', 2, '2004-10-12 10:50:05');

Hope that helps get ya started.



Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Frank

Thanks for clearing up some confusion.

Look in the contrib/mysql directory in the source file (or install the
contrib packages for your system, assuming they come with that contrib
package

I do not have that, where can I download it?

This fails to insert records


 insert  into category values
 (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
 (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
 (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
 (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02'),
 (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00'),
 (10, 'Hardware - PC Desktop', 2, '2004-10-12 10:50:03'),
 (11, 'Software - PC', 2, '2004-10-12 10:50:03'),
 (13, 'Software - Network', 2, '2004-10-12 10:50:04'),
 (14, 'Software - Midrange, AS/400', 2, '2004-10-12 10:50:04'),
 (15, 'Software - Server', 2, '2004-10-12 10:50:04'),
 (16, 'Hardware - Wyse Terminal', 2, '2004-10-12 10:50:05');


Regards,

Frank 






Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Jim C. Nasby
MySQL has a nasty habit of ignoring standards; in every other database
I've used, if you want to quote an identifier (such as a field name),
you use ", not `.

The fields are also incompatible. int() is non-standard, for starters.

There are MySQL to PostgreSQL conversion tools out there that should
help.

On Wed, Aug 31, 2005 at 02:50:16PM -0400, Frank wrote:
 Thanks for the feedback, sorry I was not more specific.
 We are a non-profit hospital and have been using MySQL for about 4 years.
 
 I wanted to convert some apps over to use postgresql and cannot find 
 a good tool to import and auto create the tables.
 MySQL syntax is not compatible with postgresql.
 I get:
 ERROR:  syntax error at or near ` at character 14
 from the MySQL output below.
 
 CREATE TABLE `category` (
   `category_id` int(11) NOT NULL auto_increment,
   `category` char(50) default NULL,
   `LastUser` int(11) NOT NULL default '0',
   `LastUpdated` timestamp NOT NULL default CURRENT_TIMESTAMP on 
 update CURRENT_TIMESTAMP,
   PRIMARY KEY  (`category_id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
 
 insert  into category values
 (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
 (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
 (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
 (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02'),
 (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00'),
 (10, 'Hardware - PC Desktop', 2, '2004-10-12 10:50:03'),
 (11, 'Software - PC', 2, '2004-10-12 10:50:03'),
 (13, 'Software - Network', 2, '2004-10-12 10:50:04'),
 (14, 'Software - Midrange, AS/400', 2, '2004-10-12 10:50:04'),
 (15, 'Software - Server', 2, '2004-10-12 10:50:04'),
 (16, 'Hardware - Wyse Terminal', 2, '2004-10-12 10:50:05');
 
 Regards,
 
 Frank 
 
 
 
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    512-569-9461



Re: [GENERAL] temp tables remain after server restart

2005-08-31 Thread Tom Lane
Hari Bhaskaran [EMAIL PROTECTED] writes:
 one of our programs went haywire and created around 200,000 temp
 tables. In the end, I restarted the db, but the temporary tables are
 still around

What did you do, the old "kill -9 some random process" approach to
database management?  The recommended ways of cancelling a session
wouldn't have caused this.

 What should I be doing to clean it up?

There is code to make them go away the first time a backend wants to use
the relevant pg_temp_x namespace.  So you could start a backend,
do "create temp table ...", start another backend while the first
remains running, do another "create temp table ...", and repeat until they
go away.

It would probably work to do "drop schema pg_temp_x cascade" too,
but you'd have to be really careful not to clobber the temp schema of
an active backend this way.
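
For example (only a sketch - pg_temp_2 is just a made-up name here, so check
what is actually present first, and be sure no live backend owns the schema
you drop):

    -- list the leftover temp schemas
    SELECT nspname FROM pg_namespace WHERE nspname LIKE 'pg_temp%';
    -- then, only for a schema no active backend is using:
    DROP SCHEMA pg_temp_2 CASCADE;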

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Jim C. Nasby
On Wed, Aug 31, 2005 at 03:17:29PM -0400, Frank wrote:
 Thanks for clearing up some confusion.
 
 Look in the contrib/mysql directory in the source file (or install the
 contrib packages for your system, assuming they come with that contrib
 package)
 
 I do not have that, where can I download it?

It's part of the PostgreSQL source code, which you can download from the
website.

 This fails to insert records

Yes, the values (), (), () syntax isn't supported yet (I think it's
slated for 8.2). In the meantime you'll need to either convert to
multiple insert statements (which you'll want to wrap in a BEGIN;
COMMIT;) or, better yet, use a COPY statement. But really what you want to
do is use the migration tools that are out there...
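
Roughly, the COPY form of your data would look something like this (an untested
sketch, fed through psql; column names taken from your CREATE TABLE, fields
separated by tabs):

    COPY category (category_id, category, "LastUser", "LastUpdated") FROM stdin;
    4	Hardware - Monitor	2	2004-10-12 10:50:01
    5	Hardware - Printer	2	2004-10-12 10:50:02
    \.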

  insert  into category values
  (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
  (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
  (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
  (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02'),
  (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00'),
  (10, 'Hardware - PC Desktop', 2, '2004-10-12 10:50:03'),
  (11, 'Software - PC', 2, '2004-10-12 10:50:03'),
  (13, 'Software - Network', 2, '2004-10-12 10:50:04'),
  (14, 'Software - Midrange, AS/400', 2, '2004-10-12 10:50:04'),
  (15, 'Software - Server', 2, '2004-10-12 10:50:04'),
  (16, 'Hardware - Wyse Terminal', 2, '2004-10-12 10:50:05');
 
 Regards,
 
 Frank 
 
 
 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
   http://www.postgresql.org/docs/faq
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] temp tables remain after server restart

2005-08-31 Thread Hari Bhaskaran
 What did you do, the old "kill -9 some random process" approach to
 database management?  The recommended ways of cancelling a session
 wouldn't have caused this.

I never said I "kill -9".  I do "pg_ctl stop".
BTW, drop cascade on the namespace seems to be working.

The "create temp table" queries failed with an error asking me to increase
the max_locks_per_transaction variable. Now that you mention the
cleanup code, it makes sense - it was probably trying to clean up and
couldn't.
I increased max_locks_per_transaction and now the drop cascade seems to work
(still running, so I can't say for sure).

 There is code to make them go away the first time a backend wants to use
 the relevant pg_temp_x namespace.  So you could start a backend,
 do "create temp table ...", start another backend while the first
 remains running, do another "create temp table ...", and repeat until they
 go away.

Didn't know that - thanks for the info.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Tom Lane
Frank [EMAIL PROTECTED] writes:
 Look in the contrib/mysql directory in the source file (or install the
 contrib packages for your system, assuming they come with that contrib
 package)

 I do not have that, where can I download it?

The best place to get it would be
http://gborg.postgresql.org/project/mysql2psql/projdisplay.php

I think we dropped the contrib copy for 8.1 anyway...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Michael Fuhr
On Wed, Aug 31, 2005 at 03:17:29PM -0400, Frank wrote:
 This fails to insert records
 
  insert  into category values
  (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
  (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
  (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),

PostgreSQL doesn't support this form of INSERT; you'll have to use
a separate INSERT per record or use COPY.  Another method, although
probably not useful in this case, is to insert records from a subquery;
see the INSERT documentation for more information.
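
For the record, the subquery form would look something like this for the quoted
data (an untested sketch; column names taken from the CREATE TABLE shown
earlier in the thread):

    INSERT INTO category (category_id, category, "LastUser", "LastUpdated")
    SELECT 4, 'Hardware - Monitor', 2, timestamp '2004-10-12 10:50:01'
    UNION ALL
    SELECT 5, 'Hardware - Printer', 2, timestamp '2004-10-12 10:50:02';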

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Brian Wong
On 8/31/05, Frank [EMAIL PROTECTED] wrote:
 I wanted to convert some apps over to use postgresql and cannot find
 a good tool to import and auto create the tables.
 MySQL syntax is not compatible with postgresql.
 I get:
 ERROR:  syntax error at or near "`" at character 14
 from the MySQL output below.
 

Try playing with the mysqldump command. Some of the options alleviate
the quoting issue. Try the --quote-names=FALSE option, for example.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Scott Marlowe
On Wed, 2005-08-31 at 14:17, Frank wrote:
 Thanks for clearing up some confusion.
 
  Look in the contrib/mysql directory in the source file (or install the
 contrib packages for your system, assuming they come with that contrib
 package)

It comes with the postgresql tarball.  Just download it from the
www.postgresql.org site and untar it somewhere.  Then cd into the
postgresql-x.y.z/contrib/mysql directory, where x.y.z is the version of
postgresql you downloaded.

 
 I do not have that, where can I download it?
 
 This fails to insert records
 
   insert  into category values
   (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
   (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
   (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
   (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02'),
   (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00'),
   (10, 'Hardware - PC Desktop', 2, '2004-10-12 10:50:03'),
   (11, 'Software - PC', 2, '2004-10-12 10:50:03'),
   (13, 'Software - Network', 2, '2004-10-12 10:50:04'),
   (14, 'Software - Midrange, AS/400', 2, '2004-10-12 10:50:04'),
   (15, 'Software - Server', 2, '2004-10-12 10:50:04'),
   (16, 'Hardware - Wyse Terminal', 2, '2004-10-12 10:50:05');

Yeah, you'll have to do it like:

insert into category values (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01');
insert into category values (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02');
insert into category values (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02');
insert into category values (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02');
insert into category values (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00');

If you've got a lot to do (more than a few dozen) wrap them in one big
transaction with begin end:

begin;
insert into category values (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01');
insert into category values (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02');
insert into category values (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02');
insert into category values (7, 'Hardware - PC Laptop', 2, '2004-10-12 10:50:02');
insert into category values (9, 'Hardware - Misc.', 1, '2004-10-12 10:51:00');
commit;

Note that any error in the import with begin/end wrapped around it
means the whole import fails, which is a good thing (tm) since you
don't have to figure out which rows made it in and which didn't.  Fix
the error and try the whole begin; insert x1000; commit again.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] newbie - postgresql or mysql

2005-08-31 Thread Dann Corbit
If the inserts are all a bunch of data statements like that, a SED
script could turn them into something for bulk load via COPY easily
enough.

Might be an even better solution, depending on what the OP is trying to
accomplish.

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Michael Fuhr
 Sent: Wednesday, August 31, 2005 12:39 PM
 To: Frank
 Cc: Scott Marlowe; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] newbie - postgresql or mysql
 
 On Wed, Aug 31, 2005 at 03:17:29PM -0400, Frank wrote:
  This fails to insert records
 
   insert  into category values
   (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
   (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
   (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
 
 PostgreSQL doesn't support this form of INSERT; you'll have to use
 a separate INSERT per record or use COPY.  Another method, although
 probably not useful in this case, is to insert records from a
subquery;
 see the INSERT documentation for more information.
 
 --
 Michael Fuhr
 
 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
    subscribe-nomail command to [EMAIL PROTECTED] so that your
    message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] GiST access is not concurrent

2005-08-31 Thread Qingqing Zhou
John Surnow [EMAIL PROTECTED] wrote
 Does this mean that read access is not concurrent, or write access, or 
 both?

For old GiST, you can have several read-*only* connections at the same time,
but write/write or read/write access is unsafe. GiST in 8.1 is already
concurrency-safe.

Regards,
Qingqing 



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Deferred triggers?

2005-08-31 Thread CSN
Perhaps another possible feature request! I've looked
through the docs and it doesn't appear to be
possible to create deferred triggers - i.e. triggers
that don't fire until the current transaction commits.
(My understanding is that triggers currently fire
immediately, whether or not there is a transaction in
progress.)

CSN





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Open Source Database Conference 2005

2005-08-31 Thread Peter Eisentraut
The Open Source Database Conference 2005 finally has a web site: 
http://www.opendbcon.net/

Although it's not yet visible, there's quite a lineup of PostgreSQL 
talks, and I imagine that even some of the other talks will be of 
interest to people here.  This could become the main event for 
exchanging PostgreSQL knowledge, at least in Europe, so consider 
attending if you're interested.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] PostgreSQL local replication

2005-08-31 Thread Leonard Soetedjo
Hi,

I'm currently using PostgreSQL 8.0.3 and would like to have a load-
balancing and failover solution over LAN.

From reading the mailing list, it seems that the pgpool and slony1
combination has been recommended for LAN replication.  But it seems that
pgcluster is still actively maintained?  Which one should I proceed
with?

It seems that pgcluster is simpler to set up, and pgpool/slony1 seems to
have a few caveats (well, pgcluster might have some too, they're just not
stated).  So I think it's best to get some advice from the PostgreSQL folks.


Thanks,

Leonard


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] or kills performance

2005-08-31 Thread Sim Zacks
I am using autovacuum, and I assumed it was working correctly. Is there a way 
to tell if it is not doing what it is supposed to?
This is the result of ps for the autovacuum PID.
The docs say that it should automatically analyze.
  PID TTY  STAT   TIME COMMAND
10184 ?   Ss   0:32 /usr/bin/pg_autovacuum -D -v 1000 -V 2 -s 300 -S 2 
-L /var/lib/postgresql/data/pg_autovacuum.log

My log files show nothing other than an error caused by my renaming a
(different) database - it logged a fatal error for a month, but that stopped
last week (before I sent this message) when I rebooted the server. There has
been nothing in the log file since then.

This is running on Gentoo, and pg_autovacuum uses the default settings and
starts automatically when the server boots.
Is there a way to check when the last analyze was done? I don't want to run
analyze now until I can check that and see where the problem is coming from.
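
(If I'm not mistaken, releases newer than the one I'm running track this
directly in the pg_stat_user_tables view - the last_analyze / last_autoanalyze
columns - so on 8.2 or later the check would presumably be something like the
following; those columns don't exist in the version I'm on:)

    -- assumes PostgreSQL 8.2 or later; not available in 8.0/8.1
    SELECT relname, last_analyze, last_autoanalyze
      FROM pg_stat_user_tables
     ORDER BY relname;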

Thank you
Sim



Sim Zacks [EMAIL PROTECTED] writes:

 [ goes back and looks more closely ]  The row count estimates in your
 EXPLAINs are so far off that it's a wonder you got an OK plan for either
 query.  Have you ANALYZEd these tables recently?
 
 
 
 regards, tom lane


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly