[GENERAL] Adjacency List total item counts

2005-08-09 Thread Ben
Hi

This question is not specific to PostgreSQL, but I would like to know the
best way to count the number of items in each node, from the leaves up to
the root. Something like this:

      Computers (100)
         /      \
        /        \
   CPU (15)   Memory (85)

I have the following SQL schema:

Tree (
    treeId   int,
    parentId int,
    name     varchar(250)
)

Item (
itemId int,
treeId int,
expiryDate date
)

Note that the total item count for each node depends on the item expiry
date, i.e. an item is ignored if its expiry date is older than now().

I have come up with the following solutions, but I'm not happy with either of them:

1) Do a batch count, i.e. count the number of items every 30 minutes.
Using this method defeats the purpose of having the count next to each
node since the number might not be the same as the actual count.

2) Use a trigger, but this can be slow since it has to recurse up the tree
and update the sums every time a new item is added.
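
For illustration, a minimal sketch of the direct (non-recursive) per-node
count against the schema above; rolling the counts up from leaf to root
still needs a recursive walk or a materialized-path scheme:

-- items per node, ignoring expired items (0 for empty nodes)
SELECT t.treeId, t.name, count(i.itemId) AS item_count
FROM Tree t
LEFT JOIN Item i ON i.treeId = t.treeId AND i.expiryDate >= now()
GROUP BY t.treeId, t.name;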

Thanks
Ben

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Adjacency List total item counts

2005-08-09 Thread Oleg Bartunov

use contrib/ltree
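
A minimal sketch of the ltree approach (assuming contrib/ltree is installed
and the tree is stored as a materialized path in an ltree column; the table
and column names are illustrative):

CREATE TABLE tree (
  treeId int PRIMARY KEY,
  path   ltree NOT NULL,        -- e.g. 'Computers', 'Computers.CPU'
  name   varchar(250)
);
CREATE INDEX tree_path_gist ON tree USING gist (path);

-- total non-expired items per node, descendants included
SELECT p.name, count(i.itemId) AS total_items
FROM tree p
JOIN tree c ON c.path <@ p.path    -- c is p itself or a descendant
JOIN Item i ON i.treeId = c.treeId
WHERE i.expiryDate >= now()
GROUP BY p.name;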

Oleg


Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Case sensitivity

2005-08-09 Thread Frank Millman
Frank Millman wrote:

 Hi all
 
 Is there an LC_COLLATE setting, or any other method, which allows all 
 data in a database to be treated in a case-insensitive manner?

I was hoping to stimulate some discussion on this topic, but it seems I will
have to kick-start it myself and see if anyone responds.

My area of interest is general accounting/business systems, using a typical
Western character set. I would imagine that this is a common scenario, but
it is not universal, so read my comments in this context.

In the good old days, data entry was always done in upper case, by dedicated
data entry personnel. These days, it is typically done by a wide variety of
individuals, who carry out a wide range of tasks, most of which require
lower case (word processing, email) with the occasional use of the shift key
to enter a capital letter.

In this context, here are two undesirable effects.

1. The user tries to call up account 'A001', but they enter 'a001'. First
problem, the system does not find the account. Second problem, the system
allows them to create a new account with the code 'a001'. Now you have
'A001' and 'a001'. This is a recipe for chaos.

2. The user tries to call up a product item using a search string on the
description. Assume they enter 'Wheel nut'. Assume further that the person
who created the product item used the description 'Wheel Nut'. Try
explaining to the user why the system cannot find the item they are looking
for.

I mentioned in my original post that there are workarounds for these
problems. However, it seems to me that in a typical system you would want to
apply the workaround on every table, and therefore there is a case for
saying that the database should handle it.
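
(The usual workaround, sketched minimally with illustrative names: a
functional index plus lower() in every comparison.

CREATE UNIQUE INDEX account_code_ci ON account (lower(code));
SELECT * FROM account WHERE lower(code) = lower('a001');

It works, but it has to be repeated for every table and every query, which
is exactly the point above.)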

I have some experience of two other database systems, and it is of interest
to see how they handle it.

1. D3 (the latest implementation of the old Pick Database System). In the
early days it was case sensitive. When they brought out a new version in the
early 90's they changed it to case insensitive. As you would expect, an
upgrade required a full backup and restore. I was involved in many of these,
some of them quite large. On two occasions I found that accounts were out of
balance after the restore, and on investigation found that situations
similar to 'A001' 'a001' had crept into the old database, and on restore the
second insert was rejected as the first one already existed. When this was
explained to the user, the reaction was always concern that this 'error' had
been allowed to happen, and relief that the new version ensured that it
could never happen again.

2. MSSQL Server 2000. Each time you create a new database you have to
specify which 'collation' to use. There is a wide range available, both case
sensitive and case insensitive. The default (on my system at least) is case
insensitive, and I believe that in practice this is what most people want.

There may well be counter-arguments to this being handled by the database,
and I would be interested to hear them. However, from my point of view, if
this capability is not currently available in PostgreSQL, I would like to
propose that it is considered for some future release.

Looking forward to any comments.

Frank Millman



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [Pgsqlrpms-hackers] Re: [GENERAL] AMD 64 RPM?

2005-08-09 Thread Devrim GUNDUZ



Hi Sander,

On Tue, 9 Aug 2005, Sander Steffann wrote:


If you can succeed building the RPMs as Joshua guided, please
let me know and we can upload the binaries to FTP site.


As promised: I put them on http://opensource.nederland.net/PostgreSQL/


Thanks a lot :)

I've uploaded the RPMs to the main FTP site (under
/pub/binary/v8.0.3/linux/rpms/redhat/rhel-4-x86_64). They will be
available in an hour. They will also be on the mirrors after the first sync.


Regards,

--
Devrim GUNDUZ
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
 http://www.gunduz.org

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] The cost of SET search_path TO

2005-08-09 Thread Oluwatope Akinniyi
Hi,

I would appreciate it if anyone could educate me on the cost of using SET
search_path TO for schemas at runtime. In particular, for .NET data
operations where there may be a high rate of connection/disconnection and
reconnection for initial data fetching and subsequent updating.

Best regards


Tope Akinniyi
ShepherdHill Software
Lagos


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Case sensitivity

2005-08-09 Thread Martijn van Oosterhout
On Tue, Aug 09, 2005 at 09:35:25AM +0200, Frank Millman wrote:
 Frank Millman wrote:
 
  Hi all
  
  Is there an LC_COLLATE setting, or any other method, which allows all 
  data in a database to be treated in a case-insensitive manner?
 
 I was hoping to stimulate some discussion on this topic, but it seems I will
 have to kick-start it myself and see if anyone responds.

I know there have been complaints from people that their database is
sorting case-insensitively when they wish it wouldn't. This generally
happens when LC_COLLATE is set to en_US or some such. However, I
think that even the en_US locale just fiddles the sort order, but
doesn't make upper and lowercase equal.

 I mentioned in my original post that there are workarounds for these
 problems. However, it seems to me that in a typical system you would want to
 apply the workaround on every table, and therefore there is a case for
 saying that the database should handle it.

These workarounds are the recommended way of dealing with this issue.
Another option would be to create a new datatype 'itext' which works
like text except it compares case-insensitively. PostgreSQL is flexible
like that. Here's something to get you started; see below for an example.

http://svana.org/kleptog/pgsql/type_itext.sql

At the moment it uses SQL functions for the comparisons, for production
you'd probably want to have them in C for performance. Also, it's not
pg_dump safe (no operator class support).

BTW, I can't believe I'm the first to do this, but hey. It's also my
first type with index support so it may be buggy. But it does work for
basic tests...

Have a nice day,
--- snip ---

test=# create table itest ( pkey serial primary key, val itext );
NOTICE:  CREATE TABLE will create implicit sequence itest_pkey_seq
for serial column itest.pkey
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
itest_pkey for table itest
CREATE TABLE
test=# insert into itest (val) values ('a');
INSERT 72279 1
test=# insert into itest (val) values ('A');
INSERT 72280 1
test=# insert into itest (val) values ('b');
INSERT 72281 1
test=# select * from itest where val = 'a';
 pkey | val
------+-----
    1 | a
    2 | A
(2 rows)

test=# create unique index itest_val on itest(val);
ERROR:  could not create unique index
DETAIL:  Table contains duplicated values.
test=# delete from itest where val = 'a';
DELETE 2
test=# create unique index itest_val on itest(val);
CREATE INDEX
test=# insert into itest (val) values ('a');
INSERT 72284 1
test=# insert into itest (val) values ('A');
ERROR:  duplicate key violates unique constraint itest_val
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] The cost of SET search_path TO

2005-08-09 Thread Magnus Hagander
 Hi,
 
 I will appreciate it if anyone can educate me on the cost of 
 using SET search_path TO for schemas at runtime. 

AFAIK, that's quite cheap. Shouldn't be a problem.

 In 
 particular, for .NET data operations where there may be high 
 rate of connection/disconnection and reconnection for initial 
 data fetching and subsequent updating.

You should be using connection pooling for this. I believe npgsql
supports it just fine. That'll make a *lot* of difference on performance
in this kind of scenario.
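
A minimal sketch of the usage being discussed (the schema name is illustrative):

SET search_path TO myschema, public;
SHOW search_path;   -- verify the current setting

With pooling you would typically issue the SET once per session, or after
checking a connection out of the pool, rather than opening a new connection
each time.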

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Case sensitivity

2005-08-09 Thread Martijn van Oosterhout
On Tue, Aug 09, 2005 at 11:57:48AM +0200, Martijn van Oosterhout wrote:
 Another option would be to create a new datatype 'itext' which works
 like text except it compares case-insensitively. PostgreSQL is flexible
 like that. Here's something to get you started; see below for an example.
 
 http://svana.org/kleptog/pgsql/type_itext.sql
 
 At the moment it uses SQL functions for the comparisons, for production
 you'd probably want to have them in C for performance. Also, it's not
 pg_dump safe (no operator class support).

Oops, turns out there *is* a CREATE OPERATOR CLASS but my version of
psql doesn't have it in command completion. And when you use that it
*is* saved by pg_dump. Problem solved.

I've tested various things, DISTINCT works, ORDER BY works, GROUP BY
works. Neat huh?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Case sensitivity

2005-08-09 Thread Richard Huxton

Frank Millman wrote:


In this context, here are two undesirable effects.


Martijn has pointed to a case-insensitive type, but I'll add a couple of points.



1. The user tries to call up account 'A001', but they enter 'a001'. First
problem, the system does not find the account. Second problem, the system
allows them to create a new account with the code 'a001'. Now you have
'A001' and 'a001'. This is a recipe for chaos.


The basic problem here is that the value isn't text. This is partly the 
fault of development-systems not having a way to deal with sophisticated 
types in databases.


What should happen is that you define some suitable type AccountCode 
which is defined as allowing character data in the form of (e.g.) 
[A-Z][0-9][0-9][0-9]. That type can cast to/from text but doesn't need 
access to the full range of text-handling functions (e.g. concatenating 
two account-codes is probably meaningless). Of course, you want to 
define this in one place and have both the database constraints and 
user-interface understand what you want.
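
A minimal sketch of that idea using a domain (the name, pattern and table
are illustrative):

CREATE DOMAIN AccountCode AS text
  CHECK (VALUE ~ '^[A-Z][0-9][0-9][0-9]$');

CREATE TABLE account (
  code  AccountCode PRIMARY KEY,
  descr text
);

-- inserting 'a001' is rejected by the CHECK; if mixed-case entry should be
-- accepted, normalize with upper() in the application or a trigger.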


Ironically, MS-Access does this quite well with its tight coupling of 
user-interface and data storage.




2. The user tries to call up a product item using a search string on the
description. Assume they enter 'Wheel nut'. Assume further that the person
who created the product item used the description 'Wheel Nut'. Try
explaining to the user why the system cannot find the item they are looking
for.


Here, I'm not convinced a case-insensitive type is any more useful than 
just using ILIKE.
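
For example (table and column names illustrative):

SELECT * FROM product WHERE description ILIKE '%wheel nut%';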


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Referencing less-unique foreign keys

2005-08-09 Thread Alban Hertroys

Hi all,

We migrated a database from version 7.3 something to 7.4.7 a while ago, 
and ever since that time we can't make new foreign keys to a particular 
table. The problem is that the primary key on that table is on two 
columns that are unique together, but that only one of them should be 
referenced from the other table.


Tables are as follows:

CREATE TABLE localization (
    localization_id text    NOT NULL,
    language_id     integer NOT NULL REFERENCES language(language_id) MATCH FULL,
    content         text    NOT NULL,
    PRIMARY KEY (localization_id, language_id)
);

CREATE TABLE description (
    description_id  serial PRIMARY KEY,
    content         text   NOT NULL REFERENCES localization(localization_id)
);

I'm not sure how we got the content column from description to 
reference localization back in version 7.3. Fact is, we can't seem to 
do this anymore since version 7.4:


psql ALTER TABLE description ADD CONSTRAINT fk_description_content 
FOREIGN KEY (content) REFERENCES localization(localization_id);
ERROR:  there is no unique constraint matching given keys for referenced 
table localization


Any way around this?

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

//Showing your Vision to the World//

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Referencing less-unique foreign keys

2005-08-09 Thread Richard Huxton

Alban Hertroys wrote:

Hi all,

We migrated a database from version 7.3 something to 7.4.7 a while ago, 
and ever since that time we can't make new foreign keys to a particular 
table.


You shouldn't have been able to before.

 The problem is that the primary key on that table is on two
columns that are unique together, but that only one of them should be 
referenced from the other table.


Well there you go - a foreign key must reference a set of columns with a
unique constraint (i.e. a candidate key).



Tables are as follows:

CREATE TABLE localization (
    localization_id text    NOT NULL,
    language_id     integer NOT NULL REFERENCES language(language_id) MATCH FULL,
    content         text    NOT NULL,
    PRIMARY KEY (localization_id, language_id)
);

CREATE TABLE description (
    description_id  serial PRIMARY KEY,
    content         text   NOT NULL REFERENCES localization(localization_id)
);

I'm not sure how we got the content column from description to 
reference localization back in version 7.3. Fact is, we can't seem to 
do this anymore since version 7.4:


I don't have 7.3.x to hand any more, but if you could create such a 
reference it was a bug. What you need to do is create a table to record 
which (unique) localization_id codes you have, so:


CREATE TABLE loc_ids (
  localization_id  text NOT NULL,
  PRIMARY KEY (localization_id)
);

CREATE TABLE localization (
  localization_id  text NOT NULL REFERENCES loc_ids,
  language_id  integer NOT NULL REFERENCES language,
  content  text NOT NULL,
  PRIMARY KEY (localization_id, language_id)
);

CREATE TABLE description (
  description_id  SERIAL,
  content text NOT NULL REFERENCES loc_ids,
  PRIMARY KEY (description_id)
);

Of course, this seems to show that the description table isn't telling 
you anything you couldn't work out by adding a serial column to loc_ids. 
 Perhaps you have more columns in it though.


You can set up triggers/views etc. to automatically insert into loc_ids if
you would like.


Does that help?
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Csaba Nagy
Hi all,

I have a postgres system where we just migrated a fairly big data set.
The application accessing it is a cluster of servers which do burst-like
processing, i.e. when they have some work to do, it will be distributed
in the cluster and the data base will be under fairly high load.
On our first test run everything went fine; the only strange thing is a
row in pg_stat_activity about a query which is long gone, and the process
pointed to by the procpid field no longer exists.
I've executed a "select pg_stat_reset();" as superuser, and everything went
away except the offending row... I wonder what can cause this behavior,
and how to get rid of it? I'm using the pg_stat_activity view to detect
long-running queries, and I do expect a few more to pop up in time...

Thanks,
Csaba.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Referencing less-unique foreign keys

2005-08-09 Thread Martijn van Oosterhout
On Tue, Aug 09, 2005 at 02:31:16PM +0200, Alban Hertroys wrote:
 Hi all,
 
 We migrated a database from version 7.3 something to 7.4.7 a while ago, 
 and ever since that time we can't make new foreign keys to a particular 
 table. The problem is that the primary key on that table is on two 
 columns that are unique together, but that only one of them should be 
 referenced from the other table.

Foreign keys have to reference a column that has only unique values.
This is what the SQL standard requires of FOREIGN KEYS. If your
localization_id in the localization table is unique, just add a UNIQUE
index, problem solved.
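
For example (only valid if localization_id really is unique on its own):

CREATE UNIQUE INDEX localization_id_uniq ON localization (localization_id);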

If localization_id is not unique but you really want foreign keys,
you'll have to create a table containing only localization_ids and have
both tables foreign key to that...

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Csaba Nagy
Oh, I forgot to mention that the DB is version 8.0.3 running on Linux.

Cheers,
Csaba.




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Michael Fuhr
On Tue, Aug 09, 2005 at 03:19:46PM +0200, Csaba Nagy wrote:
 I have a postgres system where we just migrated a fairly big data set.
 The application accessing it is a cluster of servers which do burst-like
 processing, i.e. when they have some work to do, it will be distributed
 in the cluster and the data base will be under fairly high load.
 On our first test run everything went fine, the only strange thing is a
 row in the pg_stat_activity, which has a row about a query which is long
 gone, the process pointed by the procpid field is not existing.

I ran across this situation a while ago, where high load caused
pg_stat_activity to have stale entries.  Tom Lane wondered if the
stats subsystem was under a high enough load that it was dropping
messages, as it's designed to do.

http://archives.postgresql.org/pgsql-bugs/2004-10/msg00163.php

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Referencing less-unique foreign keys

2005-08-09 Thread Alban Hertroys

Martijn van Oosterhout wrote:

On Tue, Aug 09, 2005 at 02:31:16PM +0200, Alban Hertroys wrote:


Hi all,

We migrated a database from version 7.3 something to 7.4.7 a while ago, 
and ever since that time we can't make new foreign keys to a particular 
table. The problem is that the primary key on that table is on two 
columns that are unique together, but that only one of them should be 
referenced from the other table.



Foreign keys have to reference a column that has only unique values.
This is what the SQL standard requires of FOREIGN KEYS. If your
localization_id in the localization table is unique, just add a UNIQUE
index, problem solved.

If localization_id is not unique but you really want foreign keys,
you'll have to create a table containing only localization_ids and have
both tables foreign key to that...


I was afraid that would be the only answer... It's the way I would have 
solved it too - would I have the time.


SELECTs and UPDATEs aren't influenced by the change, but INSERTs and 
DELETEs (w/ cascade) are. Am I right that this could be fixed 
transparently (to our queries) by creating a few RULEs on localization 
on INSERT and DELETE? That'd certainly save some time...


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

//Showing your Vision to the World//

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] pg_dump for table with bytea takes a long time

2005-08-09 Thread Sailer, Denis (YBUSA-CDR)

I posted the following to the performance mailing list on 8/2/2005, but have
not heard any replies. Maybe this should just be a general question. Would
someone be able to help me get pg_dump to run faster for bytea data?

++

Dumping a database which contains a table with a bytea
column takes approximately 25 hours and 45 minutes. The database has 26
tables in it. The other 25 tables take less than 5 minutes to dump so almost
all time is spent dumping the bytea table.



prd1=# \d ybnet.ebook_master;
          Table "ybnet.ebook_master"
    Column    |  Type   | Modifiers
--------------+---------+-----------
 region_key   | integer | not null
 book_key     | integer | not null
 pub_sequence | integer | not null
 section_code | integer | not null
 pagenbr      | integer | not null
 pdffile      | bytea   |
Indexes:
    "ebook_master_pkey" PRIMARY KEY, btree (book_key, pub_sequence, section_code, pagenbr, region_key)
Foreign-key constraints:
    "FK1_book_year" FOREIGN KEY (book_key, pub_sequence, region_key) REFERENCES ybnet.book_year(book_key, pub_sequence, region_key)
    "FK1_ebook_section" FOREIGN KEY (section_code) REFERENCES ybnet.ebook_section(sectioncode)
Tablespace: "ebook"



The tablespace ebook is 65504295 bytes in size and the ebook_master
table has 61 1-GB files associated with it.

The command to dump the database is:

pg_dump --file=$DUMP_FILE --format=c --data-only --verbose --host=ybcdrdbp01 $DATABASE



I also perform a hot backup of this database using pg_start_backup(),
tar, and pg_stop_backup(). It takes only 20 minutes to create a tar ball
of the entire 62GB. I like the speed of this method, but it does not
allow me to restore 1 table at a time. 



The version of postgres is PostgreSQL
8.0.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2



The machine has 4 Xeon 3.00 GHz processors with
hyper-threading on and 4GB of memory. Postgres is supported by two file
systems connected to an EMC SAN disk array. One 2 GB one for the log
files and a second 500 GB one for the data and indexes. All output files
for the backup files are placed onto the 500 GB volume group and then backed up
to an external storage manager.



Portions of the config file are:

shared_buffers = 16384
work_mem = 8192
maintenance_work_mem = 16384
max_fsm_pages = 512000
max_fsm_relations = 1000
fsync = true

# - Checkpoints -
checkpoint_segments = 20

# - Planner Cost Constants -
effective_cache_size = 262144
random_page_cost = 3

I am looking for ideas for making the backup of the above table much faster.

[GENERAL] PostgreSQL and ClearQuest?

2005-08-09 Thread Robert Creager

Anyone done it?  Is it possible?  Currently running the free SQLAnywhere version
included with CQ.

I suspect my hopes will be extinguished shortly :-(

Cheers,
Rob

-- 
 08:11:07 up 26 days, 11:46,  5 users,  load average: 2.52, 2.19, 2.20
Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004




[GENERAL] Cross database queries

2005-08-09 Thread postgresql
Hi

Suppose I have two databases on one server (let's call them DB_A and DB_B)
and I want to write a cross-database query. How do I do this in PostgreSQL?

On MS-SQL I would do something like:

SELECT ta.Field1, ta.Field2, tb.Field2
FROM DB_A.dbo.SomeTable ta
JOIN DB_B.dbo.SomeOtherTable tb
  ON ta.Field1 = tb.Field1

Thanks
Craig

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Database syncronization

2005-08-09 Thread Lowell . Hought

What is the most effective method for synchronizing a database from a main
to a backup machine? I am now running 8.0. Is it necessary to drop the
database from the secondary machine and restore it from a dump of the
primary?


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Csaba Nagy
Michael, 

I've read the message you referred to, and it's probably what happens.
In fact the original row I complained about is gone, and I now have
10 other dead processes listed in pg_stat_activity... one of the queries
is a BIND, still running after 25 minutes, and the associated
process is gone, so it's clearly an inconsistent state of the stats
collector. I wonder if there's a way to fix that without affecting
performance too much?
The logs don't show the "statistics buffer is full" message as suggested
by Tom, but OTOH log_min_messages = info, and that message might be a
debug-level one.

In any case it seems my system can readily reproduce the issue whenever
I place a bigger load on it...

Cheers,
Csaba.





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cross database queries

2005-08-09 Thread Sean Davis
On 8/9/05 10:21 AM, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:

 Hi
 
 Suppose I have two databases on one server (lets call them DB_A and DB_B)
 and I was to write a cross database query. How do I do this in PostgreSQL?
 
 On MS-SQL I would do something like:
 
 SELECT ta.Field1, ta.Field2, tb.Field2
 FROM DB_A.dbo.SomeTable ta
 JOIN DB_B.dbo.SomeOtherTable tb
 ON ta.Field1 = tb.Field1

See /contrib/dblink in the postgresql source distribution.
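
A minimal sketch using dblink (the connection string and column types are
assumptions; adjust them to the real definition of SomeOtherTable):

SELECT ta.Field1, ta.Field2, tb.Field2
FROM SomeTable ta
JOIN dblink('dbname=DB_B',
            'SELECT Field1, Field2 FROM SomeOtherTable')
     AS tb(Field1 integer, Field2 text)
  ON ta.Field1 = tb.Field1;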

Sean


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Michael Fuhr
On Tue, Aug 09, 2005 at 04:25:30PM +0200, Csaba Nagy wrote:
 The logs don't show the statistics buffer is full message as suggested
 by Tom, but ITOH log_min_messages = info, and that message might be a
 debug level one.

The message is in src/backend/postmaster/pgstat.c:

    if (!overflow)
    {
        ereport(LOG,
                (errmsg("statistics buffer is full")));
        overflow = true;
    }

For log_min_messages, LOG is just above FATAL and PANIC, so I'd
expect those messages to appear in the logs if they're happening.
But I don't recall seeing them either.

 In any case it seems my system can readily reproduce the issue whenever
 I place a bigger load on it...

I was also able to reproduce the behavior when running pgbench with
sufficiently high settings.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Reference new.* or old.* in dynamic statement?

2005-08-09 Thread Jeff Boes
I'm trying to write a general-purpose trigger that will disallow updates
on certain fields (I could probably do this in other ways, but I have a
stubborn streak ...).

Given a table, I want to define a trigger on that table that will
write-protect one column by name:

CREATE TRIGGER tbl_nomod_create
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE
no_modification_allowed('create_date');

I.e.,

UPDATE tbl SET fld_1 = 'foo';

would be OK but

UPDATE tbl SET create_date = now();

would result in an exception.

My trigger function below attempts to create a dynamic SQL statement
that tests old.column-name against new.column-name.

CREATE OR REPLACE FUNCTION no_modification_allowed()
 RETURNS TRIGGER
 LANGUAGE 'plpgsql'
 AS '
DECLARE
  tmp_stmt TEXT;
  result   RECORD;
BEGIN
  IF TG_ARGV[0] IS NULL
  THEN
RETURN new;
  ELSE
tmp_stmt := ''SELECT 1 AS is_null FROM (SELECT 1) AS dual WHERE '';
FOR result IN EXECUTE (tmp_stmt
|| ''old.'' || quote_ident(TG_ARGV[0])
|| '' IS NULL'') LOOP
  RETURN new;
END LOOP;

FOR result IN EXECUTE (tmp_stmt
|| ''old.'' || quote_ident(TG_ARGV[0])
|| '' = new.''
|| quote_ident(TG_ARGV[0])) LOOP
  RETURN new;
END LOOP;

RAISE EXCEPTION ''Cannot modify % in %'', TG_ARGV[0], TG_RELNAME;
  END IF;
END
';

I tried one or two other approaches in the dynamic statement, but
generally I get errors indicating that new and old can't be
referenced in this fashion:

ERROR:  OLD used in query that is not in a rule

Is there a way to do what I want?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] best way to reference tables

2005-08-09 Thread Sean Davis
On 8/9/05 10:31 AM, TJ O'Donnell [EMAIL PROTECTED] wrote:

 I have many different tables that I want to keep track of.
 So, I thought of a master table with those table names in it.
 But, to maintain this (suppose a table changes
 its name, gets dropped) I want to have some kind of referential
 integrity - the way foreign keys and constraints do.
 What could I use that would disallow inserting a name into
 the master table unless another table by that name already exists?
 And what could ensure that a table would not be renamed or dropped
 unless the master table is changed?

Good idea, but the table already exists.  Try:

Select * from pg_tables;

And see:

http://www.postgresql.org/docs/8.0/interactive/catalogs.html

Sean


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cross database queries

2005-08-09 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Hi

Suppose I have two databases on one server (lets call them DB_A and DB_B)
and I was to write a cross database query. How do I do this in PostgreSQL?


Either use the dblink module from contrib/ or merge them into one 
database but different schemas.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Case sensitivity

2005-08-09 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 Another option would be to create a new datatype 'itext' which works
 like text except it compares case insensetively. PostgreSQL is flexible
 like that. Here's something to get you started, see below for example.

 http://svana.org/kleptog/pgsql/type_itext.sql

 At the moment it uses SQL functions for the comparisons, for production
 you'd probably want to have them in C for performance.

I believe there is a C-coded type like this on gborg (citext is the
name I think).

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Weird lock or bug maybe?

2005-08-09 Thread Ben-Nes Yonatan
Well, what I mean by getting stuck is that some rows can be deleted fast
when you delete them with their specific id, while there are rows which,
when I tried to delete them, just didn't respond (it seems to work
endlessly, so I stopped it after an hour or so).
Interestingly, this morning I was able to delete all of the data in a
reasonable time... I don't know what the difference is... the only difference
I can think of is that I deleted the content of table2, and that for
some reason cleared the locks on this table, though I remember testing
it afterwards and it didn't help.
Maybe it's connected to the fact that I connected both of the tables with
a foreign key? I also have second thoughts about using foreign keys
between my tables in the DB that I'm currently building... I always use
foreign keys when I can, but I noticed in the DB I'm working on now
(which will contain millions of rows) that it makes the process of deleting
the content way too slow, and I need to do it each day. Am I correct
in what I'm doing?


Thanks again,
Yonatan


Richard Huxton wrote:


Ben-Nes Yonatan wrote:


If ill query: DELETE FROM table1;  it will just get stuck...
If ill try: DELETE FROM table1 WHERE table1_id=1523; it will work in 
most cases but for some rows it will just get stuck!

Anyone know anything about this weird problem?
By the way when I restarted the DB server I was able to delete the 
current row which stucked the process but then I got stuck at some 
other row at the table



What do you mean by get stuck?
Are you sure it's not scanning one of the tables to check references 
before deleting? If you don't have an index on the table in question 
then PG will have to scan the entire table.


To check for locks, try:
SELECT * FROM pg_stat_activity;
SELECT * FROM pg_locks;

Let's see what's actually happening.





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] postgres server encodings

2005-08-09 Thread Salem Berhanu
What exactly is the SQL_ASCII encoding in postgres? I have a pg installation
with an SQL_ASCII server encoding, and my database (also SQL_ASCII) seems to
be able to handle all types of encodings. How is this possible? Is this what
the server and db encoding need to be set to in order to handle various
encodings?
Also, is there a way of dumping data from an SQL_ASCII db to a UNICODE db?
Creating a dump and converting with a tool like iconv is probably not an
option since the data in the SQL_ASCII db has an unknown/inconsistent encoding.

Please let me know as soon as possible.
Thanks.
Salem
P.S. {I sent a similar e-mail earlier which wasn't posted on the site. ??}

---BeginMessage---
I wanted to find out how I can split one database into different disk
partitions. I will very soon be running out of data space. What is the best
(safest) way to do this? Any links to docs or ideas appreciated. Urgent!
Thanks
Sally
---End Message---

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] index being ignored for limit n queries

2005-08-09 Thread Piotr Sulecki
Ave!

Yesterday I noticed a problem with my PostgreSQL installation. I have
three database clusters, one using version 7.4.6 and the rest using
version 8.0.1. The problem manifests itself in all three installations.
(The three databases are copies of -- more or less -- the same data; now
I'm writing a program to really synchronize these databases, but that's
besides the point. I'll only describe one of them; the same can be said
of the others, too.)

I have two really big tables, the problem is with one of them. The table
looks as follows:

                                Table "public.pakiety"
    Column    |            Type             |               Modifiers
--------------+-----------------------------+----------------------------------------
 pktid        | integer                     | not null default nextval('pktid_seq')
 stid         | smallint                    | not null
 received     | timestamp(6) with time zone | not null
 measured     | timestamp(0) with time zone | not null
 station_time | timestamp(0) with time zone |
 bezwzgl      | smallint                    |
 full_cycle   | boolean                     |

Indexes:
    "pakiety_pkey" PRIMARY KEY, btree (pktid)
    "pakiety_stid_received_idx" UNIQUE, btree (stid, received)
    "pakiety_measured_idx" btree (measured)
    "pakiety_received_idx" btree (received)
    "pakiety_stid_measured_idx" btree (stid, measured)

This table holds info about data packets sent by our automated meteo
stations.

The table contains about 15 million rows. Any type of a sequential scan
will be extremely slow on this table. Even an index scan will take
minutes if the index is not fit for the search.

What's unusual is that:
1) there is only a very limited set of used stid (station ID) values -
   about 500 or so, and
2) there can be some very long periods of time (measured in months) when
   a specific station doesn't send any packets at all.

That's why I created indices pakiety_stid_received_idx and
pakiety_stid_measured_idx. Unfortunately, when I'm looking for the last
data packet from a specific station, I get an index scan using only the
pakiety_received_idx index and not the pakiety_stid_received_idx
one. While it takes a split second for a recently active station, it
really takes several minutes for a station which has been quiet for
several months.

trax=# explain select * from pakiety where stid = 234::smallint order by
received desc limit 1;
 QUERY PLAN
-
 Limit  (cost=0.00..6.26 rows=1 width=33)
   ->  Index Scan Backward using pakiety_received_idx on pakiety  (cost=0.00..193599.37 rows=30915 width=33)
 Filter: (stid = 234::smallint)
(3 rows)

What's funny, I get the same plan when using limit 1:

trax=# explain select * from pakiety where stid = 234::smallint order by
received desc limit 1;
 QUERY PLAN
-
 Limit  (cost=0.00..62621.32 rows=1 width=33)
   ->  Index Scan Backward using pakiety_received_idx on pakiety  (cost=0.00..193756.63 rows=30941 width=33)
 Filter: (stid = 234::smallint)
(3 rows)

I have just noticed another problem: when the limit goes even higher,
the planner decides to use another unfit index.

trax=# explain select * from pakiety where stid = 234::smallint order by
received desc limit 10;
  QUERY PLAN
---
 Limit  (cost=125101.46..125178.81 rows=30940 width=33)
   ->  Sort  (cost=125101.46..125178.81 rows=30940 width=33)
         Sort Key: received
         ->  Index Scan using pakiety_stid_measured_idx on pakiety  (cost=0.00..122288.52 rows=30940 width=33)
               Index Cond: (stid = 234::smallint)
(5 rows)

The same index is also used if I remove the limit n clause entirely:

trax=# explain select * from pakiety where stid = 234::smallint order by
received desc;
   QUERY PLAN
-
 Sort  (cost=125293.11..125370.58 rows=30986 width=33)
   Sort Key: received
   ->  Index Scan using pakiety_stid_measured_idx on pakiety  (cost=0.00..122474.14 rows=30986 width=33)
         Index Cond: (stid = 234::smallint)
(4 rows)

The pakiety_stid_received_idx index never gets used. Any hints about
possible reasons of such behavior?

Regards,

Piotr Sulecki.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Solicitud de informacion de psql con php

2005-08-09 Thread Ana Mandiola
Sirs:

   I would like to know whether you have a tutorial on PostgreSQL with PHP,
since I am studying at university and for one course I have to develop a
system based on PostgreSQL with PHP. I have found very little information
on this topic, and when programming I run into some connection problems,
which is why I am asking for this information, to see whether what I am
doing is correct or not...

   My intention is to keep learning more about PostgreSQL and its
applications, which is why I am writing to you.

   Awaiting your prompt reply, and thanking you for your time.

  Yours sincerely,

  Ana Mandiola

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Error Loading postgresql

2005-08-09 Thread reddy
Dear Sir,

We are planning to develop a CRM package with the PostgreSQL database software.
We are planning to install the PostgreSQL package, but we are getting the
following errors while building it on Red Hat 8.0 Linux. Please give a
solution for the error message below.
[EMAIL PROTECTED] postgrep]# ls
postgresql-8.0.3  postgresql-8.0.3.tar.bz2
[EMAIL PROTECTED] postgrep]# cd postgresql-8.0.3
[EMAIL PROTECTED] postgresql-8.0.3]# ./configure
checking build system type... i686-pc-linux-gnulibc1
checking host system type... i686-pc-linux-gnulibc1
checking which template to use... linux
checking whether to build with 64-bit integer date/time support... no
checking whether NLS is wanted... no
checking for default port number... 5432
checking for gcc... gcc
checking for C compiler default output... configure: error: C compiler cannot create executables
[EMAIL PROTECTED] postgresql-8.0.3]#
We are waiting for your valuable solution.

with regards
S.S.Reddy
Dhanush Global IT Solution





-- 
Dhanush Computers
#1552/66, 22nd Main, BSK 1st Stage
Hanumanthnagar
Bangalore
Ph : 26619401/26672359/26674787
Fax: 26619401




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] renaming a table, and its primary key constraint

2005-08-09 Thread Jim

Thank you both for the replies.  I obviously have some things wrong,
and I'll have to masticate on the answers.

AIUI, the foreign key references are not an issue for me since I always
specify the column name.

Thanks again,
Jim


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Poll on your LAPP Preferences

2005-08-09 Thread Google Mike
As a PostgreSQL admin or developer, you may be asked to deploy a Linux
Apache PHP PostgreSQL application. As you know, and simplifying things
a great deal here, the pg_hba.conf file can be edited in approximately
7 different ways:

* locked down -- no access at all (usually the default)
* trust local access, any user
* trust local access, specific users
* trust remote access, any user
* trust remote access, specific users

And all of the above with or without a password, and with various kinds
of password types, thus 7 different ways, roughly.
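
For instance, a minimal sketch of pg_hba.conf lines for a few of those modes
(the network range and user name are illustrative):

# TYPE  DATABASE  USER     CIDR-ADDRESS      METHOD
local   all       all                        trust
local   all       webuser                    md5
host    all       all      192.168.0.0/24    trust
host    all       webuser  192.168.0.0/24    md5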

As I think about building an installation program, can you help me
decide on how to make my LAPP installations easier in these various
kinds of arrangements? Are there more preferred practices that you can
share?

I was thinking of an install for my web app where someone downloads a
*.tar.gz file, expands it into a web directory, then connects to an
index.php in a subdirectory called install. From there, they follow
PHP pages to do what they need in setting this up.

If I can improve this process, then a developer can download my web
app, try it out rapidly, comparing it against others, and hopefully
decide on mine because I have made it easy to get started and easy to
customize to their tastes.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Suppressing Error messages.

2005-08-09 Thread Basith Salman


 Hi All,

I was wondering if there is a way to suppress the error messages on stdout
from a Perl DBI execute command. Basically, if I do a
$sth->execute() on a statement and the row cannot be updated (say there is a
foreign key violation), I get an error message on stdout; I want this error
message to be directed to a log file instead.

Is this possible?

TIA.
[EMAIL PROTECTED]




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Database syncronization

2005-08-09 Thread Richard Huxton

[EMAIL PROTECTED] wrote:
What is the most effective method for syncronizing a database from a main 
to a backup machine?  I am now running 8.0.  Is it necessary to drop the 
database from the secondary machine and restore it from a dump of the 
primary?


Depends on how often/up-to-date you want to be. In order of delay, they are:

1. pg_dump / pg_restore
2. PITR + WAL-file shipping
3. Replication (e.g. Slony)

Check the manuals for the first two. There are other replication 
projects available if Slony isn't quite right for you.
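
For option 1, a minimal sketch (database name illustrative):

pg_dump -Fc mydb > mydb.dump        # on the primary
pg_restore -d mydb mydb.dump        # on the backup, into a freshly created empty database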

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Referencing less-unique foreign keys

2005-08-09 Thread Alban Hertroys

Tom Lane wrote:

Having just tried it, I can say that the last version that would take
that without complaint is 7.0.  7.1 and later give variants of

ERROR:  UNIQUE constraint matching given keys for referenced table 
localization not found

So I'm not sure what Alban actually did.


Neither am I. This database was created before I was responsible for 
this project, a few years ago - by someone who has left already (which 
is a good thing, in his case).


There were 2 or 3 triggers on each referring/referencing table, which
I assume were generated by PostgreSQL. I do recall that creating
foreign keys suddenly stopped working (maybe due to an update by our
sysadmin), after which we created the triggers by hand... The "we" here
usually does not include me.


I would like to show some of those triggers, but due to the clutter
caused by all the foreign keys we created that way up until last year or
so, that's quite an ordeal... There are tables where the table
definition scrolls out of view rather rapidly...


I never liked the way this was solved by my predecessor, this seems a 
good opportunity to fix it.


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

//Showing your Vision to the World//

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Csaba Nagy
[snip]
  I've executed a select pg_stat_reset(); as superuser, and all went
  away except the offending row...
 
 That only resets the I/O counts (and only for one database), not the
 backend activity info.
 
   regards, tom lane

This reminds me, I forgot to ask: is there any other way of getting
rid of those ghost entries than via a big load? The next big load will
leave another set of ghosts behind it...

Cheers,
Csaba.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] best way to reference tables

2005-08-09 Thread Richard Huxton

TJ O'Donnell wrote:

I have many different tables that I want to keep track of.
So, I thought of a master table with those table names in it.
But, to maintain this (suppose a table changes
its name, gets dropped) I want to have some kind of referential
integrity - the way foreign keys and constraints do.
What could I use that would disallow inserting a name into
the master table unless another table by that name already exists?
And what could ensure that a table would not be renamed or dropped
unless the master table is changed?


You can write your own triggers that would stop you from adding a 
non-existent table to your master-table. You can't fire triggers on 
changes to system tables though, so you can't stop someone adding a table.


In any case, you presumably want to track ALTER TABLE ADD COLUMN too so 
you'll need something a little smarter.


Have you considered pg_dump --schema-only along with suitable 
version-control software (CVS/Subversion/Arch etc)? Or are you trying to 
track something specific?
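
For example (database name illustrative):

pg_dump --schema-only mydb > schema.sql

and commit schema.sql to the version-control system after each change.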


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 On our first test run everything went fine, the only strange thing is a
 row in the pg_stat_activity, which has a row about a query which is long
 gone, the process pointed by the procpid field is not existing.

This is not totally surprising, since the pgstat mechanism is by design
not 100% reliable (it will drop statistics messages under high load
rather than making backends wait for the stats collector).  Probably the
backend-is-exiting message for that process got dropped.

Eventually that backend slot will get re-used for another backend, and
then the entry will go away, but if it's a high-number slot then it'll
take a similar load level to get to it.

 I've executed a select pg_stat_reset(); as superuser, and all went
 away except the offending row...

That only resets the I/O counts (and only for one database), not the
backend activity info.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Referencing less-unique foreign keys

2005-08-09 Thread Tom Lane
Richard Huxton dev@archonet.com writes:
 Alban Hertroys wrote:
 I'm not sure how we got the content column from description to 
 reference localization back in version 7.3. Fact is, we can't seem to 
 do this anymore since version 7.4:

 I don't have 7.3.x to hand any more, but if you could create such a 
 reference it was a bug.

Having just tried it, I can say that the last version that would take
that without complaint is 7.0.  7.1 and later give variants of

ERROR:  UNIQUE constraint matching given keys for referenced table 
localization not found

So I'm not sure what Alban actually did.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] index being ignored for limit n queries

2005-08-09 Thread Tom Lane
Piotr Sulecki [EMAIL PROTECTED] writes:
 I have two really big tables, the problem is with one of them. The table
 looks as follows:
 ...
 Indexes:
 pakiety_pkey PRIMARY KEY, btree (pktid)
 pakiety_stid_received_idx UNIQUE, btree (stid, received)
 pakiety_measured_idx btree (measured)
 pakiety_received_idx btree (received)
 pakiety_stid_measured_idx btree (stid, measured)

 trax=# explain select * from pakiety where stid = 234::smallint order by
 received desc limit 1;

The reason this won't use the (stid, received) index is that the
requested sort order doesn't match that index.  Try

select * from pakiety where stid = 234::smallint
order by stid desc, received desc
limit 1;

There is code in CVS tip to recognize that the equality constraint on
stid allows the index to be considered as producing data ordered by
received alone --- but no released version will make that deduction
for you.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Reference new.* or old.* in dynamic statement?

2005-08-09 Thread Richard Huxton

Jeff Boes wrote:

I'm trying to write a general-purpose trigger that will disallow updates
on certain fields (I could probably do this in other ways, but I have a
stubborn streak ...).

Given a table, I want to define a trigger on that table that will
write-protect one column by name:

CREATE TRIGGER tbl_nomod_create
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE
no_modification_allowed('create_date');



CREATE OR REPLACE FUNCTION no_modification_allowed()
 RETURNS TRIGGER
 LANGUAGE 'plpgsql'
 AS '


It's a lot easier if you use TCL/Perl/one of the other interpreted 
languages. Bound to be an example in the mailing list archives, I might 
even have posted one.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres server encodings

2005-08-09 Thread Tom Lane
Salem Berhanu [EMAIL PROTECTED] writes:
 What exactly is the SQL_ASCII encoding in postgres?

SQL_ASCII isn't so much an encoding as the declaration that you don't
care about encodings.  That setting simply disables encoding validity
checks and encoding conversions.  The server will take any byte string
clients send it (barring only embedded zero bytes), and store and return
it unchanged.

Since it disables conversions, the notion of converting to another
encoding is pretty much meaningless :-(.
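
For contrast, a minimal sketch of what a declared encoding buys you (the
database name is made up; 'UNICODE' is the 8.0-era spelling of UTF-8):

CREATE DATABASE utf8_db WITH ENCODING 'UNICODE';
-- inside that database the server validates incoming bytes, and a client
-- can declare what it sends so the server converts on the way in and out:
SET client_encoding TO 'LATIN1';
-- with SQL_ASCII neither validation nor conversion happens.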

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgres server encodings

2005-08-09 Thread Martijn van Oosterhout
On Mon, Aug 08, 2005 at 04:10:50PM +, Salem Berhanu wrote:
 What exactly is the SQL_ASCII encoding in postgres? I have a pg
 installation with an SQL_ASCII server encoding and my database (also
 SQL_ASCII) seems to be able to handle all types of encodings? How is this
 possible? Is this what the server & db encoding needs to be set to in order
 to handle various encodings?

SQL_ASCII means that the database does no locale specific or language
specific encoding ever. It won't check what you send it either. If
you're content to let clients deal with any encoding issues, this may
be what you want.

But anything to do with lower(), upper(), case-insensitive in the
database itself will be totally stupid since it's assuming ASCII.

 Also is there a way of dumping data from an SQL_ASCII db to a UNICODE db. 
 Creating a dump and converting with a tool like iconv is probably not an 
 option since the data in the SQL_ASCII db has unknown/inconsistent encoding.
 Please let me know as soon as possible.

This is a messy situation. Since the system can't guess your encoding
you'd have to fix it all up yourself...

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgp7zKTudzgYK.pgp
Description: PGP signature


Re: [GENERAL] Case sensitivity

2005-08-09 Thread Martijn van Oosterhout
On Tue, Aug 09, 2005 at 11:02:47AM -0400, Tom Lane wrote:
 Martijn van Oosterhout kleptog@svana.org writes:
  Another option would be to create a new datatype 'itext' which works
  like text except it compares case insensitively. PostgreSQL is flexible
  like that. Here's something to get you started, see below for example.
 
  http://svana.org/kleptog/pgsql/type_itext.sql
 
  At the moment it uses SQL functions for the comparisons, for production
  you'd probably want to have them in C for performance.
 
 I believe there is a C-coded type like this on gborg (citext is the
 name I think).

And so it is, full points to Tom. Here's the link:

http://gborg.postgresql.org/project/citext/projdisplay.php

I couldn't work out any obvious way to make Google spit out this link
without the magic word "citext", so hopefully this reference will raise
the score enough that a plain Google search for "case insensitive
postgresql" will find it.
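
For anyone who can't install the contrib type, the usual built-in workaround
is an expression index on lower(); a sketch with invented table and column
names:

CREATE TABLE customers (id serial PRIMARY KEY, name text);
CREATE INDEX customers_name_lower_idx ON customers (lower(name));
-- case-insensitive lookup that can use the index:
SELECT * FROM customers WHERE lower(name) = lower('Acme Ltd');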

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpvElz37qptR.pgp
Description: PGP signature


Re: [GENERAL] Reference new.* or old.* in dynamic statement?

2005-08-09 Thread Tom Lane
Jeff Boes [EMAIL PROTECTED] writes:
 I'm trying to write a general-purpose trigger that will disallow updates
 on certain fields (I could probably do this in other ways, but I have a
 stubborn streak ...).

I think it's pretty much impossible to do this in plpgsql.  You could do
it in the other PLs that support triggers, or in C.  From a performance
standpoint I'd think you'd want to do it in C anyway.  There are some
closely related example trigger functions in the contrib tree.
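
If the protected column can be hard-coded rather than passed as a trigger
argument, plain plpgsql is enough; a minimal sketch (names invented, covering
UPDATE only, since OLD does not exist on INSERT):

CREATE OR REPLACE FUNCTION protect_create_date() RETURNS trigger AS '
BEGIN
    IF NEW.create_date IS DISTINCT FROM OLD.create_date THEN
        RAISE EXCEPTION ''create_date may not be modified'';
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tbl_protect_create_date
    BEFORE UPDATE ON tbl
    FOR EACH ROW EXECUTE PROCEDURE protect_create_date();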

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 I've executed a select pg_stat_reset(); as superuser, and all went
 away except the offending row...
 
 That only resets the I/O counts (and only for one database), not the
 backend activity info.

 This reminds me, I forgot to ask: is there any other way of getting
 rid of those ghost entries than via a big load?

Not at the moment.  It might be worth teaching the pgstats code to
cross-check the activity list every so often, but the only place
where it'd really fit naturally is vacuum_tabstats which is probably
not executed often enough to be helpful.

Or maybe we could just filter the data on the reading side: ignore
anything the stats collector reports that doesn't correspond to a
live backend according to the PGPROC array.

Jan, any thoughts?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Solicitud de informacion de psql con php

2005-08-09 Thread Alvaro Herrera
On Sat, Aug 06, 2005 at 04:02:05PM -0400, Ana Mandiola wrote:

Hello,

I would like to know whether you have a tutorial on PostgreSQL with PHP,
 since I am studying at university and for one course I have to develop
 a system based on PostgreSQL with PHP. I have found very little
 information on this topic, and when programming I run into some
 connection problems, which is why I am asking for that information, to
 check whether what I am doing is correct or not...

Let me point out that this is an English-language list, so it is very
unlikely that you will get useful answers here. I suggest subscribing to
the Spanish-language list: http://archives.postgresql.org/pgsql-es-ayuda

In fact, I would say your question has already been answered on that list
before, several times. I recommend having a look through the archives in
case you find something useful. If after that you decide to ask, I suggest
giving many more details, such as the error messages you get and the
commands or functions you are running.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
No hay cielo posible sin hundir nuestras raíces
 en la profundidad de la tierra(Malucha Pinto)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Jan Wieck

On 8/9/2005 12:21 PM, Tom Lane wrote:


Csaba Nagy [EMAIL PROTECTED] writes:

I've executed a select pg_stat_reset(); as superuser, and all went
away except the offending row...


That only resets the I/O counts (and only for one database), not the
backend activity info.



This reminds me, I forgot to ask: is there any other way of getting
rid of those ghost entries than via a big load?


Not at the moment.  It might be worth teaching the pgstats code to
cross-check the activity list every so often, but the only place
where it'd really fit naturally is vacuum_tabstats which is probably
not executed often enough to be helpful.

Or maybe we could just filter the data on the reading side: ignore
anything the stats collector reports that doesn't correspond to a
live backend according to the PGPROC array.

Jan, any thoughts?


The reset call is supposed to throw away everything. If it leaves crap 
behind, I'd call that a bug.


IIRC the pg_stat functions don't examine the shared memory, but rely 
entirely on information from the stats file. It sure would be possible 
to add something there that checks the PGPROC array.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] postgres server encodings

2005-08-09 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 SQL_ASCII means that the database does no locale specific or language
 specific encoding ever. It won't check what you send it either. If
 you're content to let clients deal with any encoding issues, this may
 be what you want.

 But anything to do with lower(), upper(), case-insensitive in the
 database itself will be totally stupid since it's assuming ASCII.

Actually it's worse than that: case conversion and comparisons inside
the database will still be done using whatever locale setting the
database was initdb'd with.  If that locale is C, then you have just
ASCII-aware case conversions and memcmp-like sorting, but everything
will behave reasonably sanely within those limitations.  If the locale
is not C then it is assuming some specific encoding that *the database
is not enforcing*.  This is bad news and can result in all sorts of
inconsistent behavior.

We really need to figure out some way of enforcing that the database
encoding is OK to use with the locale setting.  Peter put some trial
code for this into initdb in 8.0, but it's only issuing a warning rather
than enforcing the restriction, so I don't have a lot of confidence
that it is right.  (How many people even noticed the warning message?)
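
A quick way to check what combination a given database is actually running
with (a minimal sketch; nothing here changes any settings):

SHOW lc_collate;
SHOW lc_ctype;
SELECT current_database(), getdatabaseencoding();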

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Tom Lane
Jan Wieck [EMAIL PROTECTED] writes:
 Jan, any thoughts?

 The reset call is supposed to throw away everything. If it leaves crap 
 behind, I'd call that a bug.

resetcounters only thinks it is supposed to zero the counters for the
current database.  That seems considerably different from throw away
everything.  In any case, with autovacuum coming up fast on the outside
it doesn't seem that we want to encourage people to reset the stats on
a routine basis.

 IIRC the pg_stat functions don't examine the shared memory, but rely 
 entirely on information from the stats file. It sure would be possible 
 to add something there that checks the PGPROC array.

Yeah.  I'll take a look at doing this.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] postgres server encodings

2005-08-09 Thread Lincoln Yeoh

At 05:59 PM 8/9/2005 +0200, Martijn van Oosterhout wrote:


SQL_ASCII means that the database does no locale specific or language
specific encoding ever. It won't check what you send it either. If
you're content to let clients deal with any encoding issues, this may
be what you want.

But anything to do with lower(), upper(), case-insensitive in the
database itself will be totally stupid since it's assuming ASCII.


Is it possible, or even a good idea, to be able to pick a
particular locale for a query/function?


e.g. select * from messages where locale_code=$locale_code order by 
locale_code, multilocale_lower(message,locale);


Or even:

create index lower_keyword_idx on keywords (multilocale_lower(keyword,locale))
(there's a column called locale in both tables)

Does that actually make sense? ;)

I suppose we can do that in the client. But it'll be nicer if we can use 
order by, group by, and do it for more than one locale at a time.


Can Postgresql currently handle more than one locale within the same 
database AND have some useful locale sensitive DB functions?


Regards,
Link.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] [GENERAL] postgres server encodings

2005-08-09 Thread Joel Fradkin
Not that I am an expert or anything, but my initial database was SQL_ASCII
and I did have to convert it to Unicode.
My reasons were we store French characters in our database and the newer
ODBC driver was not displaying them correctly coming from SQL_ASCII, but was
from UNICODE.
I also think that it can affect functions like length and upper, but Tom
knows a ton more than me about this stuff.

I did my initial conversion on 7.4 and the ODBC driver at that time had no
issues with SQL_ASCII displaying the French, but I think in 8.0.1 I started
seeing an issue. The latest version of the driver 8.0.4 seems to be working
well (only up a little over 24 hours thus far).

I wish I had used a Unicode database from the start (the 7.4 driver was what I
used and it did not like moving from MSSQL to Unicode). I later switched to
.NET (Npgsql objects) for my conversion and used an encoding object to write
the data correctly.

Joel Fradkin
 
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305
 
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc
 This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information.  Any unauthorized review,
use, disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply email and delete and destroy
all copies of the original message, including attachments.
 

 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Tuesday, August 09, 2005 11:59 AM
To: Salem Berhanu
Cc: pgsql-admin@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [ADMIN] [GENERAL] postgres & server encodings 

Salem Berhanu [EMAIL PROTECTED] writes:
 What exactly is the SQL_ASCII encoding in postgres?

SQL_ASCII isn't so much an encoding as the declaration that you don't
care about encodings.  That setting simply disables encoding validity
checks and encoding conversions.  The server will take any byte string
clients send it (barring only embedded zero bytes), and store and return
it unchanged.

Since it disables conversions, the notion of converting to another
encoding is pretty much meaningless :-(.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Poll on your LAPP Preferences

2005-08-09 Thread brew

Mike.

 If I can improve this process, then a developer can download my web app,
 try it out rapidly, comparing it against others, and hopefully decide on
 mine because I have made it easy to get started and easy to customize to
 their tastes.

If it's a Linux-Apache-PHP-PostgreSQL web app you only need one user, the
one your PHP script logs in as.  Then maybe you could put the same
randomly generated password in both postgres and the PHP script.

Or even have the user name be randomly generated, with or without a
password.
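
A sketch of that single-application-user setup (the user name and the
password shown here are placeholders an installer script would generate):

CREATE USER webapp_user WITH PASSWORD 'generated-at-install-time';
GRANT SELECT, INSERT, UPDATE, DELETE ON mytable TO webapp_user;
-- the PHP script then connects with the same generated credentials.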

The default would be local access, just to try it out.  If remote access
is wanted it could be a question on the setup script.  Or even done
manually - I don't think anybody will be setting it up across two or more
machines without being pretty sure they want to install your app
permanently.

Just some thoughts off the top of my head.

brew

 ==
  Strange Brew   ([EMAIL PROTECTED])
  Check out my Stock Option Covered Call website  http://www.callpix.com
 and my Musician's Online Database Exchange http://www.TheMode.com
 ==


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Matthew T. O'Connor

Jan Wieck wrote:


On 8/9/2005 12:21 PM, Tom Lane wrote:


This reminds me, I forgot to ask: is there any other way of getting
rid of those ghost entries than via a big load?



Not at the moment.  It might be worth teaching the pgstats code to
cross-check the activity list every so often, but the only place
where it'd really fit naturally is vacuum_tabstats which is probably
not executed often enough to be helpful.

Or maybe we could just filter the data on the reading side: ignore
anything the stats collector reports that doesn't correspond to a
live backend according to the PGPROC array.

Jan, any thoughts?



The reset call is supposed to throw away everything. If it leaves crap 
behind, I'd call that a bug.


IIRC the pg_stat functions don't examine the shared memory, but rely 
entirely on information from the stats file. It sure would be possible 
to add something there that checks the PGPROC array.



Is that the same stats reset that effects autovacuum?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Poll on your LAPP Preferences

2005-08-09 Thread Alvaro Herrera
On Sat, Aug 06, 2005 at 07:59:06PM -0700, Google Mike wrote:
 As a PostgreSQL admin or developer, you may be asked to deploy a Linux
 Apache PHP PostgreSQL application. As you know, and simplifying things
 a great deal here, the pg_hba.conf file can be edited in approximately
 7 different ways:
 
 * locked down -- no access at all (usually the default)
 * trust local access, any user
 * trust local access, specific users
 * trust remote access, any user
 * trust remote access, specific users

I'd never trust remote access, not even for specific IPs, out of fear
that somebody might be able to inject malicious commands using IP
spoofing.  SSL is a must in that situation.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
Y una voz del caos me habló y me dijo
Sonríe y sé feliz, podría ser peor.
Y sonreí. Y fui feliz.
Y fue peor.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] [GENERAL] postgres server encodings

2005-08-09 Thread Alvaro Herrera
On Tue, Aug 09, 2005 at 12:56:37PM -0400, Joel Fradkin wrote:
 Not that I am an expert or anything, but my initial database was SQL_ASCII
 and I did have to convert it to Unicode.
 My reasons were we store French characters in our database and the newer
 ODBC driver was not displaying them correctly coming from SQL_ASCII, but was
 from UNICODE.
 I also think that it can affect functions like length and upper, but Tom
 knows a ton more than me about this stuff.
 
 I did my initial conversion on 7.4 and the ODBC driver at that time had no
 issues with SQL_ASCII displaying the French, but I think in 8.0.1 I started
 seeing an issue. The latest version of the driver 8.0.4 seems to be working
 well (only up a little over 24 hours thus far).

A conversion will work fine assuming the data is all encoded using the
same encoding.  So if it's all utf8 (Unicode) already, you can import
it verbatim into a UTF8 database and it will work fine.  If it's all
Latin-1, you can import into a UTF-8 db using a client_encoding=latin1
during import, or verbatim to a Latin-1 database, and it will also work
 fine.  (You of course are expected to be able to figure out what
 encoding the data really is in.)

The problem only shows up when you have mixed data -- say, you have two
applications, one website in PHP which inserts data in Latin-1, and a
Windows app which inserts in UTF-8.  In this case your data will be a
mess to fix, and there's no way a single conversion will get it right.
You will have to manually separate the parts that are UTF8 from the
Latin1, and import them separately.  Not a position I'd like to be in.
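
For the clean single-encoding case, the import step might look like this
from a psql session (the file and database names are made up):

-- connect to the newly created UTF-8 (UNICODE) database, then:
SET client_encoding TO 'LATIN1';
\i old_latin1_dump.sql
RESET client_encoding;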

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
Coge la flor que hoy nace alegre, ufana. ¿Quién sabe si nacera otra mañana?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] [GENERAL] postgres server encodings

2005-08-09 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 The problem only shows up when you have mixed data -- say, you have two
 applications, one website in PHP which inserts data in Latin-1, and a
 Windows app which inserts in UTF-8.  In this case your data will be a
 mess to fix, and there's no way a single conversion will get it right.
 You will have to manually separate the parts that are UTF8 from the
 Latin1, and import them separately.  Not a position I'd like to be in.

The only helpful tip I can think of is that you can try to import data
into a UTF8 database and see if it gets rejected as badly encoded; this
will at least give you a weak tool to separate what's what.

I'm afraid the reverse direction won't help much --- in single-byte
encodings such as Latin1 there are no encoding errors, and so you can't
do any simple filtering to check in that direction.  In the end you're
going to have to eyeball a lot of data for plausibility :-(

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Poll on your LAPP Preferences

2005-08-09 Thread Chris Travers

[EMAIL PROTECTED] wrote:


Mike.

 


If I can improve this process, then a developer can download my web app,
try it out rapidly, comparing it against others, and hopefully decide on
mine because I have made it easy to get started and easy to customize to
their tastes.
   



If it's a Linux-Apache-PHP-PostgreSQL web app you only need one user, the
one your PHP script logs in as.  Then maybe you could put the same
randomly generated password in both postgres and the PHP script.
 

Who says?  I sometimes require that the PHP app logs into the database 
 with the username/password supplied by the user.  This makes it easier to 
manage permissions.  Of course you cannot use connection pooling in this 
case without a partial rewrite of your app...


Best Wishes,
Chris Travers
Metatron Technology Consulting

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgres server encodings

2005-08-09 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Salem Berhanu [EMAIL PROTECTED] writes:
  What exactly is the SQL_ASCII encoding in postgres?
 
 SQL_ASCII isn't so much an encoding as the declaration that you don't
 care about encodings.  

It's too late to consider renaming this to SQL_RAW or something like that,
is it?  It is a huge source of confusion.

Perhaps have a separate ASCII encoding that checks and complains if any
non-ASCII characters are present.

-- 
greg


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] best way to reference tables

2005-08-09 Thread TJ O'Donnell
 TJ O'Donnell wrote:
 I have many different tables that I want to keep track of.
 So, I thought of a master table with those table names in it.
 But, to maintain this (suppose a table changes
 its name, gets dropped) I want to have some kind of referential
 integrity - the way foreign keys and constraints do.
 What could I use that would disallow inserting a name into
 the master table unless another table by that name already exists? And what 
 could ensure that
 a table would not be renamed or dropped unless the master table is changed?

 You can write your own triggers that would stop you from adding a
 non-existent table to your master-table. You can't fire triggers on
 changes to system tables though, so you can't stop someone adding a table.

 In any case, you presumably want to track ALTER TABLE ADD COLUMN too, so
 you'll need something a little smarter.

 Have you considered pg_dump --schema-only along with suitable
 version-control software (CVS/Subversion/Arch etc)? Or are you trying to
 track something specific?

 --
Richard Huxton
Archonet Ltd

My tables are a subset of all the tables - I'm not trying to keep track of 
everything!
So, I think I'll create a schema for the tables I need to keep track of, and 
create
some procedures to properly create/modify tables therein.  I don't need to
keep track of all the innards of each table - ADD COLUMN, etc.

I guess what I'm asking is: Is there a better way to keep track of a table once
it's registered in my master table than just to put its name in my master table?
Some system oid type thing that stays fixed in spite of renames or other
table mods?

TJ



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Testing of MVCC

2005-08-09 Thread Karsten Hilbert
 Matt Miller [EMAIL PROTECTED] writes:
  I want to write some regression tests that confirm the behavior of
  multiple connections simultaneously going at the same tables/rows.  Is
  there something like this already, e.g. in src/test/regress?
 
 No.  You should consult the pghackers archives --- there have been
 discussions in the past about creating a test harness that would support
 useful concurrent testing.  No one's gotten around to it yet, but surely
 we need one.
There's something *somewhat* related here:

 
http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/testing/concurrency-torture-test.py

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Poll on your LAPP Preferences

2005-08-09 Thread brew

Chris.

 If it's a Linux-Apache-PHP-PostgreSQL web app you only need one user, the
 one your PHP script logs in as.

 Who says?  I sometimes require that the PHP app logs into the database
 with the username/password suppled by the user.  This makes it easier to
 manage permissions.  Of course you cannot use connection pooling in this
 case without a partial rewrite of your app...

I said that.

Let me rephrase it.  As a minimum, the way website PHP scripts typically
connect to PostgreSQL, you only need one user.

Conversely, you could trust anybody on the machine.  If you are on a
dedicated machine and nobody else has access it's as secure as the
machine.  However, some potential users of the app won't have secure
dedicated machines, so I think that would be a bad idea.

OTOH, you could have many postgresql user/password logins, like some of
your (Chris') websites.

How common is it to have the website user names carry through to the
postgresql user login?  I don't see the advantage to it; I just have a web
username table in the database. But my websites are fairly simple: you
either have access to a private area or you don't.

brew

 ==
  Strange Brew   ([EMAIL PROTECTED])
  Check out my Stock Option Covered Call website  http://www.callpix.com
 and my Musician's Online Database Exchange http://www.TheMode.com
 ==


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Testing of MVCC

2005-08-09 Thread Richard_D_Levine
Firebird has MVCC also (they call it multi-generational record architecture
--- MGRA), and may have at least a good test plan, though it may not cover
effects of rules, triggers, functions, and constraints.  Those are the
killer test cases.  I don't have time to look.

http://firebird.sourceforge.net/

Rick

[EMAIL PROTECTED] wrote on 08/09/2005 02:19:56 PM:

  Matt Miller [EMAIL PROTECTED] writes:
   I want to write some regression tests that confirm the behavior of
   multiple connections simultaneously going at the same tables/rows.  Is
   there something like this already, e.g. in src/test/regress?
 
  No.  You should consult the pghackers archives --- there have been
  discussions in the past about creating a test harness that would support
  useful concurrent testing.  No one's gotten around to it yet, but surely
  we need one.

 There's something *somewhat* related here:

  http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/testing/concurrency-torture-test.py

 Karsten
 --
 GPG key ID E4071346 @ wwwkeys.pgp.net
 E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?

http://archives.postgresql.org


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] best way to reference tables

2005-08-09 Thread Tom Lane
TJ O'Donnell [EMAIL PROTECTED] writes:
 I guess what I'm asking is: Is there a better way to keep track of a
 table once it's registered in my master table than just to put its
 name in my master table?  Some system oid type thing that stays fixed
 in spite of renames or other tables mods?

Yeah, you can store the pg_class OID of the table, instead of (or in
addition to) its name.  This is exactly what the system itself considers
to be the identity of the table.
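
A minimal sketch of storing the OID (the master table and its columns are
invented here for illustration):

CREATE TABLE master (tab_oid oid PRIMARY KEY, note text);
INSERT INTO master (tab_oid)
    SELECT oid FROM pg_class WHERE relname = 'mytab' AND relkind = 'r';
-- the stored OID survives renames; the current name can always be looked up:
SELECT c.relname, m.note
  FROM master m JOIN pg_class c ON c.oid = m.tab_oid;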

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] best way to reference tables

2005-08-09 Thread Alvaro Herrera
On Tue, Aug 09, 2005 at 04:01:33PM -0400, Tom Lane wrote:
 TJ O'Donnell [EMAIL PROTECTED] writes:
  I guess what I'm asking is: Is there a better way to keep track of a
  table once it's registered in my master table than just to put its
  name in my master table?  Some system oid type thing that stays fixed
  in spite of renames or other tables mods?
 
 Yeah, you can store the pg_class OID of the table, instead of (or in
 addition to) its name.  This is exactly what the system itself considers
 to be the identity of the table.

Maybe it is possible to use a column of type regclass to store it.  Not
sure exactly what advantage that would give, but it's an idea.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
El miedo atento y previsor es la madre de la seguridad (E. Burke)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] How to write jobs in postgresql

2005-08-09 Thread Guy Rouillier
chiranjeevi.i wrote:
 Hi Team Members,
 
 Is it possible to write jobs in PostgreSQL? If so, how
 should I write them? Please help me.

See pgjob in pgfoundry: http://pgfoundry.org/projects/pgjob/.  It's in
the planning stages.

-- 
Guy Rouillier

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] best way to reference tables

2005-08-09 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 On Tue, Aug 09, 2005 at 04:01:33PM -0400, Tom Lane wrote:
 Yeah, you can store the pg_class OID of the table,

 Maybe it is possible to use a column of type regclass to store it.  Not
 sure exactly what advantage that would give, but it's an idea.

Hmm, that's a cute idea.

regression=# create table tablist (tabid regclass);
CREATE TABLE
regression=# create table mytab(f1 int);
CREATE TABLE
regression=# insert into tablist values ('mytab');
INSERT 0 1
regression=# select * from tablist;
 tabid
-------
 mytab
(1 row)

regression=# alter table mytab rename to histab;
ALTER TABLE
regression=# select * from tablist;
 tabid
--------
 histab
(1 row)

regression=# create schema s1;
CREATE SCHEMA
regression=# alter table histab set schema s1;
ALTER TABLE
regression=# select * from tablist;
   tabid
-----------
 s1.histab
(1 row)

regression=# drop table s1.histab;
DROP TABLE
regression=# select * from tablist;
 tabid
-------
 82301
(1 row)

regression=#

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Suppressing Error messages.

2005-08-09 Thread SCassidy

Hi,

To get rid of the automatic output, turn PrintError off.

For example:

$dbh = DBI->connect("dbi:Pg:dbname=$dbname;host=${dbserver}", $dbuser,
    "", {PrintError => 0}) or
  errexit("Unable to connect to dbname $dbname, err: $DBI::errstr");

See the perldoc DBI documentation for full information.  Here is part of
it:

   PrintError (boolean, inherited)
   The PrintError attribute can be used to force errors to generate
   warnings (using warn) in addition to returning error codes in the
   normal way.  When set on, any method which results in an error
   occurring will cause the DBI to effectively do a warn("$class
   $method failed: $DBI::errstr") where $class is the driver class
   and $method is the name of the method which failed. E.g.,

 DBD::Oracle::db prepare failed: ... error text here ...

   By default, DBI->connect sets PrintError on.

   If desired, the warnings can be caught and processed using a
   $SIG{__WARN__} handler or modules like CGI::Carp and CGI::ErrorWrap.

Normally, I catch errors myself in web applications, and output to a log
and/or the screen appropriately.  For this, you may want to turn off
RaiseError, also.  For example, you can look for a specific error message,
and do special things under certain circumstances:

  my $rc2 = $sth2->execute($val1, $val2);
  if (!$rc2) {
    # execute returns undef if a problem occurred
    if ($DBI::errstr =~ /$dupKeyString/) {
      # $dupKeyString is set to the actual string we expect
      # This is expected sometimes
      LogMsg("Duplicate key on val \"$val1\"");
    } else {
      my $msg2 = "Unexpected error during insert of val \"$val\": " .
                 "DB error: $DBI::errstr";
      db_error_exit($msg2);
    }
  }

This is just some quick example code, but you get the idea.

Hope this helps,
Susan





   
Basith Salman [EMAIL PROTECTED] wrote on 08/05/2005 07:42 AM
(please respond to: bsalman)
To: pgsql-general@postgresql.org
Subject: [GENERAL] Suppressing Error messages.

 Hi All,

 I was wondering if there is a way to suppress the error messages on
 stdout from a Perl DBI execute command.  Basically, if I do a
 $sth->execute() on a command and, say, the row cannot be updated, then
 I get an error message on stdout if there is a foreign key violation.
 I want this error message to be directed to a log file.

 Is this possible?

 TIA.

 [EMAIL PROTECTED]




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org







--
See our 

[GENERAL] NOTIFY/LISTEN, PHP, rule vs. trigger, blocking, missed NOTIFY's

2005-08-09 Thread CSN
Scott and I were discussing NOTIFY/LISTEN using a PHP
script here:

http://phpbuilder.com/board/showthread.php?t=10302693

Basically:


PHP Code:
 #!/usr/bin/php -q
<?php
$conn = pg_connect("dbname=test user=user");
pg_query($conn, "LISTEN record_deleted");
$interval = 10;
for (;;) {
    sleep($interval);
    $notify = pg_get_notify($conn);
    if ($notify) {
        print "Now we do something";
    }
}
?>


And the sql code:

CREATE TABLE ntest ( id serial primary key, path text
);
create table naudit ( id int primary key, path text );
create rule audit_test as on delete to ntest do (
insert into naudit(id,path) values (OLD.id, OLD.path);
notify record_deleted );
insert into ntest (path) values
('/usr/local/lib/php.ini2');
 delete from ntest;


I think he may be off on some wild and exotic vacation
;) or something - so I'll post my questions here too:

* Is there any reason to use a rule rather than a
trigger? I guess a rule is just simpler.

* Also, think there's any way to just have the PHP
script block until a notify event is actually
received, rather than checking every [sleep] seconds?

* Finally, PG's docs on notify say that if events
happen in rapid succession, notifications might get
dropped. For example: could many item rows get
deleted, but some of their corresponding files not get
deleted due to dropped notifications?
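
On the first question, a trigger that performs the same
audit insert plus NOTIFY (a sketch written against the
tables above) would be:

CREATE OR REPLACE FUNCTION ntest_audit_del() RETURNS trigger AS '
BEGIN
    INSERT INTO naudit (id, path) VALUES (OLD.id, OLD.path);
    NOTIFY record_deleted;
    RETURN OLD;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER ntest_audit_delete
    AFTER DELETE ON ntest
    FOR EACH ROW EXECUTE PROCEDURE ntest_audit_del();

On the last question, the NOTIFY documentation says that
multiple NOTIFYs with the same name issued in one
transaction are folded into a single event, so a listener
may see fewer notifications than deleted rows; the usual
pattern is to treat the notification only as a wake-up
and re-scan the audit table for pending work.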

Thanks,
CSN

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Poll on your LAPP Preferences

2005-08-09 Thread Randal L. Schwartz
 Google == Google Mike [EMAIL PROTECTED] writes:

Google As a PostgreSQL admin or developer, you may be asked to deploy a Linux
Google Apache PHP PostgreSQL application.

Not me.  I'll be deploying an OpenBSD, Apache, PostgreSQL, Perl server.

o/~ you down with O-A-P-P? (yeah you know me!)
get down with OAPP! (yeah you know me!) o/~

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] plpythonu and return void

2005-08-09 Thread Adrian Klaver
On Monday 08 August 2005 08:02 pm, Tom Lane wrote:
 Adrian Klaver [EMAIL PROTECTED] writes:
  I recently migrated a database from Postgres 7.4.1 to Postgres 8.03. The
  only problem I have run into is that a plpythonu function that returns
  void will not run under 8.03. The error message states that a plpython
  function cannot return void. I got around it by having it return text and
  not actually returning anything. I have searched for an explanation and
  can't seem to locate one.

 The explanation is doubtless here:

 2004-09-19 19:38  tgl

   * src/pl/plpython/plpython.c: Add defenses against plpython
   functions being declared to take or return pseudotypes.  Not sure
   why I neglected to add these checks at the same time I added them
   to the other PLs, but it seems I did.

 It's certainly possible to weaken this test to allow void again,
 but shouldn't there be corresponding changes elsewhere to ensure that
 the behavior is actually sensible?

   regards, tom lane
Thanks for the explanation. One of these days I will remember that the source 
is my friend.
-- 
Adrian Klaver   
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] escape string type for upcoming 8.1

2005-08-09 Thread Bruce Momjian

E'' is more a marker than a type.  I realize making E a type might work,
but it seems unusual.

What we could do is backpatch E'' to 8.0.X as a no-op like it will be in
8.1.

---

Jeff Davis wrote:
 From what I've read, it looks like 8.1 will introduce the E'' escape
 string, and eventually postgresql will change the normal '' strings to
 be more SQL-compliant.
 
 If I wanted to start being forwards-compatible right now, and I have
 existing databases in 7.4 and 8.0, my idea was to create a type E. Any
 string using that type would work in 7.4/8.0 as normal, and then when I
 upgrade to 8.1 I will drop the type and the applications will still work.
 
 To do that is relatively simple, I'd just use the textin/out functions
 that already exist to create the type (which in 7.4/8.0 will give the
 desired behavior of escaping). Like so:
 
 CREATE FUNCTION ein(CSTRING) RETURNS E AS 'textin' LANGUAGE internal;
 
 CREATE FUNCTION eout(E) RETURNS CSTRING AS 'textout' LANGUAGE internal;
 
 CREATE TYPE E (input=ein,output=eout);
 
 CREATE CAST (E AS TEXT) WITHOUT FUNCTION AS IMPLICIT;
 
 CREATE CAST (TEXT AS E) WITHOUT FUNCTION AS IMPLICIT;
 
 Then, when I upgrade a system to 8.1, I can just remove the type, and
 all the applications will still work. Eventually when the '' strings are
 changed, I can start using those again, but in a SQL-compliant way.
 
 Does this migration path make sense? Will creating the type possibly
 cause casting problems of some kind? I read something about possibly
 backpatching the E'' string to 8.0. If someone did that, what would be
 the difference between applying a backpatch and what I did above?
 
 One thing that has me concerned about the idea is that there are some
 string constants, like B'' and X'' that aren't really types. Does that
 mean that E'' won't be a type? Why are B'' and X'' not types, and are
 there any other notations like that that are not types?
 
 Regards,
   Jeff Davis
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org