Re: [GENERAL] on delete cascade slowing down delete

2008-08-25 Thread Ivan Sergio Borgonovo
On Sun, 24 Aug 2008 13:37:11 +0200
Alban Hertroys [EMAIL PROTECTED] wrote:

  Is it going to make things faster if I:

  delete from s;
  reindex table s;

  Why do you think this step would help you any? There's no index
  on p to begin with. You'd just be reindexing the auto-generated
  unique index on s (due to it being a PK).

  Sorry I forgot to add the index in the example.
  What if there was an index on s.pid too?
  But mostly... if I delete s, will the deletion of p be faster?

 Hard to tell without the results from explain analyse. It depends
 on what the planner decides to do, but it's often faster than the
 things we come up with to work around the planner. As a rule of
 thumb, if you're trying to work around the planner it is likely
 your problem is caused by something else.

 Without an explain plan everything is just speculation really,
 the planner is quite smart and it knows your data. It tends to
 outsmart the devs.

  delete from p;

  And no, this would most likely be slower.

  Why?

 Because of the extra reindex step. If you'd replace that with an  
 analyse of p, then it may be faster. Or it may not.

 You seem to misinterpret the use case for REINDEX. Read here:
 http://www.postgresql.org/docs/8.3/interactive/sql-reindex.html

 Especially note the usage scenarios ;)

http://www.postgresql.org/docs/8.1/interactive/routine-reindex.html

So on versions later than 7.4... what's going to happen if I delete a
whole table?
It looks like it is not an issue, and at least the reindexing can be
avoided.

 Maybe you shouldn't try to speculate on solutions before you  
 ascertained what the problem is? People asked for an EXPLAIN
 ANALYSE, we can't really help you without that.

To my understanding, EXPLAIN ANALYSE does actually run the query... but
the query was so damn slow it wouldn't produce a result in a useful
time. I re-engineered the tables and now the stuff works at a
reasonable speed.
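
For reference, plain EXPLAIN only plans the query and returns
immediately, and an EXPLAIN ANALYSE can be wrapped in a transaction so
that its effects are thrown away. A sketch, reusing the table name from
the example:

EXPLAIN DELETE FROM p;            -- plan only, the DELETE is not executed
BEGIN;
EXPLAIN ANALYSE DELETE FROM p;    -- actually runs the DELETE ...
ROLLBACK;                         -- ... then discards its effects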

Does the planner optimise multiple statements in a transaction or
just a statement at a time?

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] playing with catalog tables limits? dangers? was: seq bug 2073 and time machine

2008-08-25 Thread Ivan Sergio Borgonovo
On Sun, 24 Aug 2008 17:26:24 -0400
Alvaro Herrera [EMAIL PROTECTED] wrote:

 Ivan Sergio Borgonovo wrote:
  I was trying to drop a serial.
  Dropped the default for a column.
  Now it seems I can't drop the sequence since I incurred in:

  http://archives.postgresql.org/pgsql-bugs/2005-11/msg00304.php

  Is there a way I can still delete the sequence without using a
  backup?

 If you're feeling courageous, you can remove the pg_depend entries
 for that sequence.  Make sure to try it in a transaction and drop

I'd like to understand better the risks of being courageous.
I think my life would be easier if I knew when it is safe to put
hands in the system tables.

 the sequence in that same transaction, so that if you mess up the
 catalogs too badly you can get out of it by rolling back.

Fortunately that pk was referenced just once, so I copied the content
of the table elsewhere, dropped the constraint, dropped the table,
moved the content back into a table without the serial, and recreated
the constraint.
Your method is simpler, should be faster, and avoids dropping a
constraint, but without understanding what is going to happen behind
the scenes it is hard for me to tell whether it is safer as well.
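
For reference, Alvaro's suggestion would presumably look something like
this; an untested sketch, with a made-up sequence name:

BEGIN;
-- remove the dependency rows that tie the sequence to the dropped default
DELETE FROM pg_depend
 WHERE classid = 'pg_class'::regclass
   AND objid = 'public.my_seq'::regclass;
DROP SEQUENCE public.my_seq;
-- inspect the result; COMMIT only if everything looks sane
ROLLBACK;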

Of course modifying the catalog tables to modify the schema is not
going to be portable across DBs... but what about the risks and other
limits?

I'm thinking of accessing the catalog for e.g. disabling/dropping a set
of constraints and re-enabling/recreating them, without the need to do
bookkeeping in the code or write a script etc...

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] Installing Postgres 8.3.3

2008-08-25 Thread Brian Green

I am going to try to get 8.3.3 to work for me.

1.
I have read that to install 8.3 on Vista you should disable UAC. If I do
so, can I turn it back on again AFTER Postgres is installed?

2.
If I install to user 'E' and then reboot the computer and then without
logging on I try to access the database from a remote computer, will it
still have its service running and available?

3.
Does it matter (as far as usability from a remote computer) if I try to
install to an administrator account or a standard account?

4.
To run the installer, do I click on SETUP.bat?

Inside SETUP.bat are the lines:

vcredist_x86.exe /q:a /c:"msiexec /i vcredist.msi /qb!"
msiexec /i postgresql-8.3.msi

What do the msiexec commands cause to happen, and what does
vcredist.msi do?

Thank You!
 
 

[GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Phoenix Kiula
Hi.

I have googled and googled for good, simple instructions to upgrade
from 8.2.3 to 8.3.3 (latest stable at this time?)

I am on a Cpanel interface. Use Apache and PHP for most of my websites.

This seems to be the most often quoted resource on forums etc:

http://kb.linuxnetworkcare.com/node/21

But this suggests backing up, then UNinstalling (downtime and loss of
settings!), then REinstalling postgresql.

Is there no sensible way of simply upgrading the database engine
without affecting either the uptime or the data itself?

I really really do not wish to backup gigabytes worth of data and then
reupload it back into the DB. This represents a downtime of *at least*
an hour or so, which our busy website doesn't have.

Any thoughts or pointers?

Thanks!



Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Tino Wildenhain

Hi,

Phoenix Kiula wrote:

Hi.

I have googled and googled for good, simple instructions to upgrade
from 8.2.3 to 8.3.3 (latest stable at this time?)

I am on a Cpanel interface. Use Apache and PHP for most of my websites.

This seems to be the most often quoted resource on forums etc:

http://kb.linuxnetworkcare.com/node/21

But this suggests backing up, then UNinstalling (downtime and loss of
settings!), then REinstalling postgresql.

Is there no sensible way of simply upgrading the database engine
without affecting either the uptime or the data itself?

I really really do not wish to backup gigabytes worth of data and then
reupload it back into the DB. This represents a downtime of *at least*
an hour or so, which our busy website doesn't have.

Any thoughts or pointers?


The usual way to do it while avoiding downtime is to install the new
version alongside the old one, backup/restore as usual (e.g.
pg_dump | pg_restore, to avoid wasting the space twice), then run Slony
to keep replicating the last bits while you are checking settings and
performance of the new version. If everything is fine, use a calm
moment to switch your configuration to use the new database in
production.
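
In shell terms, that would be roughly the following; ports, paths and
the database name are illustrative, and ideally the newer version's
pg_dump does the dumping:

# initialise and start the new cluster next to the old one
initdb -D /var/lib/pgsql/8.3/data
pg_ctl -D /var/lib/pgsql/8.3/data -o "-p 5433" start

# pipe the dump straight into the new cluster, no intermediate file
createdb -p 5433 mydb
pg_dump -p 5432 mydb | psql -p 5433 mydb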

Regards
Tino




Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread RW

Normally we don't use the packages from the distributions. They're
mostly outdated, and in that case installing two or more versions of
Postgres side by side normally isn't that easy. We have a directory
structure which allows us to install and start up two or more versions
in parallel. I haven't tried this, but if you have two instances
running (one 8.2 and the other 8.3) you can try to install Slony and
migrate all the data through replication. When all data is
synchronized, a short downtime is needed to switch between the two
versions. Maybe this is a way to go...

Robert


Phoenix Kiula wrote:

Hi.

I have googled and googled for good, simple instructions to upgrade
from 8.2.3 to 8.3.3 (latest stable at this time?)

I am on a Cpanel interface. Use Apache and PHP for most of my websites.

This seems to be the most often quoted resource on forums etc:

http://kb.linuxnetworkcare.com/node/21

But this suggests backing up, then UNinstalling (downtime and loss of
settings!), then REinstalling postgresql.

Is there no sensible way of simply upgrading the database engine
without affecting either the uptime or the data itself?

I really really do not wish to backup gigabytes worth of data and then
reupload it back into the DB. This represents a downtime of *at least*
an hour or so, which our busy website doesn't have.

Any thoughts or pointers?

Thanks!

  





Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Andrew Sullivan
On Mon, Aug 25, 2008 at 11:34:00AM +0200, Tino Wildenhain wrote:
 (e.g. pg_dump | pg_restore to avoid wasting space 2 times)
 then run slony to permanently update the last bits

If you're going to run slony, then the pg_dump|pg_restore step is
completely wasted.  Slony will restore all the data again.  You do
need to run pg_dump -s, of course.
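
That is, only the schema is carried over by hand, along these lines
(ports and database name are illustrative):

pg_dump -s -p 5432 mydb | psql -p 5433 mydb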

A


-- 
Andrew Sullivan
[EMAIL PROTECTED]
+1 503 667 4564 x104
http://www.commandprompt.com/



Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Tino Wildenhain

Andrew Sullivan wrote:

On Mon, Aug 25, 2008 at 11:34:00AM +0200, Tino Wildenhain wrote:

(e.g. pg_dump | pg_restore to avoid wasting space 2 times)
then run slony to permanently update the last bits


If you're going to run slony, then the pg_dump|pg_restore step is
completely wasted.  Slony will restore all the data again.  You do
need to run pg_dump -s, of course.


good point :-)

Regards
Tino




Re: [GENERAL] Issue with creation of Partial_indexes (Immutable?)

2008-08-25 Thread Tom Lane
Ow Mun Heng [EMAIL PROTECTED] writes:
 On Mon, 2008-08-25 at 12:23 +0800, Ow Mun Heng wrote:
 CREATE INDEX idx_d_trh_code_id_partial
 ON xmms.d_trh_table
 USING btree
 (code_id) where code_id not in ('P000','000') and code_id is not null;
 ERROR:  functions in index predicate must be marked IMMUTABLE

 BTW, this is on 8.2.9 Seems to work OK on 8.3.3.

I suppose code_id is varchar or some such?

Try where code_id::text not in ...  There's an array type coercion
underlying the right-hand side of the NOT IN, and 8.2 had some problems
with correctly identifying the volatility of such coercions.
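
Applied to the index definition from the original post, that would be:

CREATE INDEX idx_d_trh_code_id_partial
  ON xmms.d_trh_table
  USING btree (code_id)
  WHERE code_id::text NOT IN ('P000','000') AND code_id IS NOT NULL;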

regards, tom lane



[GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
By bad data, I mean a character that's not UTF8, such as hex 98.

As far as I can tell, pg_dump is the tool to use. But it has
serious drawbacks.

If you dump in the custom format, the data is compressed (nice) and
includes large objects (very nice). But, from my tests and the postings
of others, if there is invalid data in a table, although PostgreSQL
won't complain and pg_dump won't complain, pg_restore will strenuously
object, rejecting all rows for that particular table (not nice at all).

If you dump in plain text format, you can at least inspect the dumped
data and fix it manually or with iconv. But the plain text format
doesn't support large objects (again, not nice). While byte arrays are
supported, they result in very large dump files.

Also, neither of these methods gets information such as the roles, so
that has to be captured some other way if the database has to be rebuilt
from scratch.

Is my understanding incomplete or wrong? Is there no good solution?

Why isn't there a dumpall that writes in compressed format and allows recovery 
from bad data?

John




Re: [GENERAL] playing with catalog tables limits? dangers? was: seq bug 2073 and time machine

2008-08-25 Thread Tom Lane
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
 Alvaro Herrera [EMAIL PROTECTED] wrote:
 If you're feeling courageous, you can remove the pg_depend entries
 for that sequence.  Make sure to try it in a transaction and drop

 I'd like to understand better the risks of being courageous.
 I think my life would be easier if I knew when it is safe to put
 hands in the system tables.

Well, it's safe if (a) you know what you're doing, (b) you don't
make any mistakes, and (c) you don't forget any changes needed to
keep all the catalogs consistent.

You can protect yourself against (b) by using a transaction, but
the other two tend to require hacker-grade knowledge of how the
backend works, so we try to discourage people from doing it.
pg_depend in particular tends to have rather obscure contents,
and what's worse is that messing it up usually doesn't have any
immediately-obvious consequences.

regards, tom lane



Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Tom Lane
John T. Dow [EMAIL PROTECTED] writes:
 If you dump in plain text format, you can at least inspect the dumped
 data and fix it manually or with iconv. But the plain text
 format doesn't support large objects (again, not nice).

It does in 8.1 and later ...

 Also, neither of these methods gets information such as the roles,

Use pg_dumpall.

regards, tom lane



Re: [GENERAL] SERIAL datatype

2008-08-25 Thread Mark Roberts

On Thu, 2008-08-21 at 13:53 -0600, Scott Marlowe wrote:
 Regular SERIAL type is limited to a 32 bit int.  BIGSERIAL uses a 64
 bit int.

I think one of the things that would be offsetting is the size
difference between the two types (32 vs 64 bits; 5 foreign keys and a
billion rows or so makes for a lot of pain).

-Mark




[GENERAL] just work installation configuration

2008-08-25 Thread Yuri Huitron Alvarado
Hi!

I'm a newbie and want to start up my just-installed Postgres:

* 8.3-community
* SPARC
* Solaris 10

It says it has to find postgresql.conf, but I can't find it anywhere.
Do I have to write it myself, specifying the most basic options?


Re: [GENERAL] playing with catalog tables limits? dangers? was: seq bug 2073 and time machine

2008-08-25 Thread Ivan Sergio Borgonovo
On Mon, 25 Aug 2008 12:07:23 -0400
Tom Lane [EMAIL PROTECTED] wrote:

 Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
  Alvaro Herrera [EMAIL PROTECTED] wrote:
  If you're feeling courageous, you can remove the pg_depend
  entries for that sequence.  Make sure to try it in a
  transaction and drop
 
  I'd like to understand better the risks of being courageous.
  I think my life would be easier if I knew when it is safe to
  put hands in the system tables.
 
 Well, it's safe if (a) you know what you're doing, (b) you don't
 make any mistakes, and (c) you don't forget any changes needed to
 keep all the catalogs consistent.
 
 You can protect yourself against (b) by using a transaction, but
 the other two tend to require hacker-grade knowledge of how the
 backend works, so we try to discourage people from doing it.

Why hacker-grade knowledge of the backend?
By hacker-grade do you mean: undocumented, or RTSL?
Isn't knowing how the catalog stuff maps onto SQL enough to work out
how to achieve certain results?

 pg_depend in particular tends to have rather obscure contents,
 and what's worse is that messing it up usually doesn't have any
 immediately-obvious consequences.

OK... what about concurrent work?
E.g. supposing I write the correct SQL, should I take care to be the
only one accessing the DB at that moment?

What could be the use case of directly accessing the catalog?

I'd like to have an idea if it is something to invest my time in.
My main interest would be refactoring.

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Tom

My mistake in not realizing that 8.1 and later can dump large objects in the 
plain text format. I guess when searching for answers to a problem, the posted 
information doesn't always specify the version. So, sorry about that.

But the plain text format still has serious problems in that the generated file 
is large for byte arrays and large objects, there is no ability to selectively 
restore a table, and bad data still isn't detected until you try to restore.

Or did I miss something else?

John

PS: Yes, I know you can pipe the output from pg_dumpall into an archiver, but 
it's my understanding that the binary data is output in an inefficient format 
so even if zipped, the resulting file would be significantly larger than the 
custom format.



On Mon, 25 Aug 2008 12:14:41 -0400, Tom Lane wrote:

John T. Dow [EMAIL PROTECTED] writes:
 If you dump in plain text format, you can at least inspect the dumped
 data and fix it manually or with iconv. But the plain text
 format doesn't support large objects (again, not nice).

It does in 8.1 and later ...

 Also, neither of these methods gets information such as the roles,

Use pg_dumpall.

   regards, tom lane





Re: [GENERAL] SERIAL datatype

2008-08-25 Thread Scott Marlowe
On Mon, Aug 25, 2008 at 10:23 AM, Mark Roberts
[EMAIL PROTECTED] wrote:

 On Thu, 2008-08-21 at 13:53 -0600, Scott Marlowe wrote:
 Regular SERIAL type is limited to a 32 bit int.  BIGSERIAL uses a 64
 bit int.

 I think one of the things that would be offsetting is the size
 difference between the two types (32 vs 64 bits, 5 foreign keys, and a
 billion rows or so makes for alot of pain).

Well, of course a 64 bit int is gonna be bigger than a 32 bit, but
with alignment issues and on 64 bit hardware, I'm guessing the
difference isn't exactly twice as slow / twice as much storage.  And
it's way faster than a GUID which was what I think started this
thread.



Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Joshua Drake
On Mon, 25 Aug 2008 10:21:54 -0400
John T. Dow [EMAIL PROTECTED] wrote:

 By bad data, I mean a character that's not UTF8, such as hex 98.
 
 As far as I can tell, pg_dump is the tool to use. But it has
 serious drawbacks.
 
 If you dump in the custom format, the data is compressed (nice) and
 includes large objects (very nice). But, from my tests and the
 postings of others, if there is invalid data in a table, although
 PostgreSQL won't complain and pg_dump won't complain, pg_restore will
 strenuously object, rejecting all rows for that particular table (not
 nice at all).

You can use the TOC feature of -Fc to remove restoring of that single
table. You can then convert that single table to a plain text dump and
clean the data. Then restore it separately.

If you have foreign keys and indexes on the bad data table, don't
restore the keys until *after* you have done the above.
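
Roughly, with invented file and table names:

pg_restore -l mydb.dump > mydb.toc     # write out the TOC
# edit mydb.toc: a leading ';' comments out the bad table's data entry
pg_restore -L mydb.toc -d mydb mydb.dump
# extract just the bad table as plain SQL, clean it up, then load it
pg_restore -t badtable -f badtable.sql mydb.dump
iconv -f LATIN1 -t UTF-8 badtable.sql | psql mydb   # assuming the bad bytes are LATIN1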

Sincerely,

Joshua D. Drake

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] [ADMIN] Regarding access to a user

2008-08-25 Thread Scott Marlowe
On Thu, Aug 21, 2008 at 3:05 AM, Shashwat_Nigam
[EMAIL PROTECTED] wrote:
 Dear Vishal

 Thanks for the help but by doing this an error is generated at the time when
 the user is trying to log in  with the following message:

 

 Access to database denied

 The server doesn't grant access to the database: the server reports

 FATAL: no pg_hba.conf entry for host 127.0.0.1, user hmri, database
 postgres, SSL off

Notice that it says database postgres???  User hmri doesn't have
permission to connect to the postgres database.  You need to
connect to the hmri database.
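
In other words, pg_hba.conf needs an entry that matches the database
actually being connected to; something along these lines (the auth
method is just an example):

# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
host    hmri      hmri  127.0.0.1/32  md5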



Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Joshua

The TOC feature sounds good, as does converting a single table to plain text.

But I can't find documentation for the TOC feature under pg_dump or pg_restore. 
I'm looking in postgresql-8.2.1-US.pdf.

Neither could I see anything about converting a single table to a plain text 
dump.

Also, I stumbled across the statement that you can't restore large objects for 
a single table. Is that true?

Another thing I couldn't find was how to dump roles using -Fc.

John



On Mon, 25 Aug 2008 10:04:13 -0700, Joshua Drake wrote:

On Mon, 25 Aug 2008 10:21:54 -0400
John T. Dow [EMAIL PROTECTED] wrote:

 By bad data, I mean a character that's not UTF8, such as hex 98.
 
 As far as I can tell, pg_dump is the tool to use. But it has
 serious drawbacks.
 
 If you dump in the custom format, the data is compressed (nice) and
 includes large objects (very nice). But, from my tests and the
 postings of others, if there is invalid data in a table, although
 PostgreSQL won't complain and pg_dump won't complain, pg_restore will
 strenuously object, rejecting all rows for that particular table (not
 nice at all).

You can use the TOC feature of -Fc to remove restoring of that single
table. You can then convert that single table to a plain text dump and
clean the data. Then restore it separately.

If you have foreign keys and indexes on the bad data table, don't
restore the keys until *after* you have done the above.

Sincerely,

Joshua D. Drake

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Joshua Drake
On Mon, 25 Aug 2008 13:37:13 -0400
John T. Dow [EMAIL PROTECTED] wrote:

 Joshua
 
 The TOC feature sounds good, as does converting a single table to
 plain text.
 
 But I can't find documentation for the TOC feature under pg_dump or
 pg_restore. I'm looking in postgresql-8.2.1-US.pdf.

The commands you are looking for are:

pg_restore -l to get the toc
pg_restore -L to use the toc

If you open the resulting file from something like pg_restore -l >
foo.toc, it is just a plain text list of objects to restore.

I don't know how well it is documented but I am sure we would accept a
patch.

 
 Neither could I see anything about converting a single table to a
 plain text dump.

pg_restore allows you to do so. Something like:

pg_restore foo.sqlc --file=foo.sql

 
 Also, I stumbled across the statement that you can't restore large
 objects for a single table. Is that true?

Large objects are stored in a central table called pg_largeobject, so
yes, that would be accurate.


 
 Another thing I couldn't find was how to dump roles using -Fc.
 

You can't; that is a known and irritating limitation.

Sincerely,

Joshua D. Drake



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] just work installation configuration

2008-08-25 Thread Tino Wildenhain

Hi,

Yuri Huitron Alvarado wrote:

Hi!

I'm a newbie and want to startup my just installed postgres :

* 8.3-community
* SPARC
* Solaris 10

it answers that it has to find postgresql.conf but I can't find it anywere
do I have to write it specifying the most basic options?


I can't say much about your package, but since it seems to have a start
script (I also remember Solaris even had its own Postgres for system
purposes?), what you might need to do is find a place where the
database should live and run initdb with the options you need
(man initdb). This will create the database cluster as well as
postgresql.conf and pg_hba.conf in the datadir.
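
For example (the data directory path is just an example):

initdb -D /export/pgdata    # creates postgresql.conf and pg_hba.conf there
pg_ctl -D /export/pgdata start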

Usually packages should come with a README or something telling
you about specific setup if you need to do anything special.

Regards
Tino Wildenhain




[GENERAL] Trigger function is not called

2008-08-25 Thread Bill
PostgreSQL 8.3 on Windows. I have the table below which has a before 
insert trigger. The CREATE TRIGGER statement and the trigger function 
are also shown below. When I insert a row into this table using pgAdmin 
III and the INSERT statement


insert into note.category (category_id, category)
values(689, 'Ztest');

the before insert trigger function is not called. The notice is not
displayed, no value is assigned to the version or uc_category columns,
and the insert fails with a violation of the not null constraint on the
version field. I have created a simple two-column test table with a
before insert trigger and it works perfectly. I am new to PostgreSQL so
I suspect I am missing something simple, but I cannot figure out what.
Why is the trigger function never called?


Thanks,

Bill

CREATE TABLE note.category
(
  category_id note.d_id NOT NULL,
  category note.d_category NOT NULL,
  uc_category note.d_category,
  parent_category_id note.d_id_fk,
  version note.d_id,
  category_checked boolean NOT NULL DEFAULT false,
  CONSTRAINT category_primary_key PRIMARY KEY (category_id)
)
WITH (OIDS=FALSE);
ALTER TABLE note.category OWNER TO postgres;

CREATE TRIGGER category_bi_trigger
  BEFORE INSERT
  ON note.category
  FOR EACH ROW
  EXECUTE PROCEDURE note.category_bi();

CREATE OR REPLACE FUNCTION note.category_bi()
  RETURNS trigger AS
$BODY$
begin
  RAISE NOTICE '*CATEGORY BEFORE INSERT*';
  NEW.VERSION := nextval('note.version_seq');
  NEW.UC_CATEGORY := UPPER(NEW.CATEGORY);
  RETURN NEW;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;



Re: [GENERAL] playing with catalog tables limits? dangers? was: seq bug 2073 and time machine

2008-08-25 Thread Joris Dobbelsteen

Ivan Sergio Borgonovo wrote, On 25-Aug-2008 18:48:

On Mon, 25 Aug 2008 12:07:23 -0400
Tom Lane [EMAIL PROTECTED] wrote:


Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:

Alvaro Herrera [EMAIL PROTECTED] wrote:

If you're feeling corageous, you can remove the pg_depend
entries for that sequence.  Make sure to try it in a
transaction and drop

I'd like to understand better the risks of being courageous?
I think my life would be easier if I'd know when it is safe to
put hands in the system tables.

Well, it's safe if (a) you know what you're doing, (b) you don't
make any mistakes, and (c) you don't forget any changes needed to
keep all the catalogs consistent.

You can protect yourself against (b) by using a transaction, but
the other two tend to require hacker-grade knowledge of how the
backend works, so we try to discourage people from doing it.


Why hacker-grade knowledge of the backend?
By hacker-grade do you mean: undocumented, or RTSL?


The issue is that 'directly editing the system tables' has NO safety net
to protect you. You can do anything, even cause assumptions that the
software makes to become invalid. In general this results in some
combination of data corruption, server crashes and/or other bad things.


The 'regular interface', which you will find in the documentation,
ensures that the assumptions remain valid. It will not allow changes
that would cause these assumptions to become invalid.
(As a side note: you might see conditions where the assumptions are
violated even when using the regular interface; those are called bugs.)


Most of the assumptions are undocumented, and even if they happen to be
documented, there is little reason to assume they will remain valid
across different versions (they could even change between e.g. 8.3 and
8.3.1, though that would probably be for other reasons). They are not
intended to be relied on by regular users, only by the system itself.


What Tom calls hacker-grade knowledge means that you know what the
assumptions are and how to ensure that you will not violate them.
In general, the hackers who work with the system catalogs probably know
quite a few of them. It's not something the DBA should need to know;
the commands in the documentation provide that kind of protection.



Isn't knowing how the catalog stuff maps onto SQL enough to work out
how to achieve certain results?


This mapping depends on the actual implementation you are running.
(I had trouble understanding the actual question here.)


pg_depend in particular tends to have rather obscure contents,
and what's worse is that messing it up usually doesn't have any
immediately-obvious consequences.


OK... what about concurrent work?
E.g. supposing I write the correct SQL, should I take care to be the
only one accessing the DB at that moment?


Depends on what you are doing. This is one of the things the regular
interface enforces for you. For safety, it's probably a good idea to be
the only one, but it's not a requirement.



What could be the use case of directly accessing the catalog?


Ideally, NONE!

However there are cases where strange and unexpected things happen and 
editing the catalog directly can get the database server back into a 
proper/consistent state. You seem to have encountered such a situation.


The main reason for having the option of 'direct catalog access' is to
allow developers to add or modify features and test them while they are
only partially implemented. For example, the feature exists in the
backend, but no one has implemented a command to actually turn it on or
modify its parameters. This way the feature can already be tested while
it is still being discussed which commands should be provided to
regular users.



I'd like to have an idea if it is something to invest my time in.
My main interest would be refactoring.


I don't get what you are asking here...

Hope this helps.

Regards,

- Joris



Re: [GENERAL] SERIAL datatype

2008-08-25 Thread Mark Roberts

On Mon, 2008-08-25 at 11:02 -0600, Scott Marlowe wrote:
 Well, of course a 64 bit int is gonna be bigger than a 32 bit, but
 with alignment issues and on 64 bit hardware, I'm guessing the
 difference isn't exactly twice as slow / twice as much storage.  And
 it's way faster than a GUID which was what I think started this
 thread.

I took a slice of data from our dev box and generated a table using
integers and bigints.  For reference, the schema is:
bigint table:
 Type   | Modifiers 
+---
bigint  | 
date| 
bigint  | 
bigint  | 
bigint  | 
bigint  | 
bigint  | 
date| 
date| 
bytea   | 
integer | 
integer | 
numeric | 
numeric | 
numeric | 
integer | 
integer | 
integer | 
integer | 
integer | 
integer | 
integer | 
bytea   | 

int table:
 Type   | Modifiers 
+---
bigint  | 
date| 
integer | 
integer | 
integer | 
integer | 
integer | 
date| 
date| 
bytea   | 
integer | 
integer | 
numeric | 
numeric | 
numeric | 
integer | 
integer | 
integer | 
integer | 
integer | 
integer | 
integer | 
bytea   | 

The integer version is 599752704 bytes and the bigint version is
673120256 bytes (a ~12% size increase).  When joining the table to
itself (keys = 1 date, 5 (big)ints, no indexes), the bigint version
performs the self-join in an average of 44.1 sec and the integer
version in 29.6 sec (a ~49% performance hit).

While granted that it's not twice as big and twice as slow, I think it's
a fairly valid reason to want to stay within (small)int ranges.
Sometimes the initial performance hit on insert would really be worth
the continuing space/performance savings down the road.

Of course, this wasn't very scientific and the benchmarks aren't very
thorough (for instance I assumed that bigserial is implemented as a
bigint), but it should remain a valid point.

Of course, it probably has no bearing on the OP's problem.  So my advice
to the OP: have you considered not keying such a volatile table on a
serial value?

-Mark




Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Joshua

Thank you very much for answering these various questions.

I guess the compressed format is the best overall solution, except for roles. I 
find myself having a table with other information about users (application 
specific user type, etc) so perhaps the thing to do is record enough 
information there to reconstruct the roles should that become necessary.

Can pg_dump dump roles to plain text? How does pg_dumpall do it, doesn't it do 
everything via pg_dump?

John


On Mon, 25 Aug 2008 10:47:11 -0700, Joshua Drake wrote:

On Mon, 25 Aug 2008 13:37:13 -0400
John T. Dow [EMAIL PROTECTED] wrote:

 Joshua
 
 The TOC feature sounds good, as does converting a single table to
 plain text.
 
 But I can't find documentation for the TOC feature under pg_dump or
 pg_restore. I'm looking in postgresql-8.2.1-US.pdf.

The commands you are looking for are:

pg_restore -l to get the toc
pg_restore -L to use the toc

If you open the resulting file from something like pg_restore -l 
foo.toc it is just a plain text list of objects to restore.

I don't know how well it is documented but I am sure we would accept a
patch.

 
 Neither could I see anything about converting a single table to a
 plain text dump.

pg_restore allows you to do so. Something like:

pg_restore foo.sqlc --file=foo.sql

 
 Also, I stumbled across the statement that you can't restore large
 objects for a single table. Is that true?

Large objects are stored in a central table called pg_largeobject, so
yes, that would be accurate.


 
 Another thing I couldn't find was how to dump roles using -Fc.
 

You can't; that is a known and irritating limitation.

Sincerely,

Joshua D. Drake



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread Joshua Drake
On Mon, 25 Aug 2008 17:05:53 -0400
John T. Dow [EMAIL PROTECTED] wrote:

 Joshua
 
 Thank you very much for answering these various questions.
 
 I guess the compressed format is the best overall solution, except
 for roles. I find myself having a table with other information about
 users (application specific user type, etc) so perhaps the thing to
 do is record enough information there to reconstruct the roles should
 that become necessary.
 
 Can pg_dump dump roles to plain text? How does pg_dumpall do it,

pg_dumpall -g will dump just roles via plain text.
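
For example:

pg_dumpall -g > globals.sql     # global objects only: roles (and tablespaces)
psql -f globals.sql postgres    # replay them on the target server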

Joshua D. Drake
-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] Dump/restore with bad data and large objects

2008-08-25 Thread John T. Dow
Sorry, I missed that. Thanks again.

Now to put this all into effect.

John

On Mon, 25 Aug 2008 14:25:12 -0700, Joshua Drake wrote:

On Mon, 25 Aug 2008 17:05:53 -0400
John T. Dow [EMAIL PROTECTED] wrote:

 Joshua
 
 Thank you very much for answering these various questions.
 
 I guess the compressed format is the best overall solution, except
 for roles. I find myself having a table with other information about
 users (application specific user type, etc) so perhaps the thing to
 do is record enough information there to reconstruct the roles should
 that become necessary.
 
 Can pg_dump dump roles to plain text? How does pg_dumpall do it,

pg_dumpall -g will dump just roles via plain text.

Joshua D. Drake
-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate







Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Tom Lane
Bill [EMAIL PROTECTED] writes:
 PostgreSQL 8.3 on Windows. I have the table below which has a before 
 insert trigger. The CREATE TRIGGER statement and the trigger function 
 are also shown below.

The script you show attempts to create the trigger before creating the
function, which of course isn't going to work.  Did you check whether
the trigger actually got created?

regards, tom lane



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Bill

Tom Lane wrote:

Bill [EMAIL PROTECTED] writes:
  
PostgreSQL 8.3 on Windows. I have the table below which has a before 
insert trigger. The CREATE TRIGGER statement and the trigger function 
are also shown below.



The script you show attempts to create the trigger before creating the
function, which of course isn't going to work.  Did you check whether
the trigger actually got created?

regards, tom lane


  
The trigger was definitely created. The code I posted was not a script
that I used to create the trigger and trigger function. I just copied
the SQL from pgAdmin and pasted the commands into my message, not
paying any attention to the order. Sorry for the confusion.


In a newsgroup posting someone suggested that constraint checks on 
domains occur before the before insert trigger. That seems difficult to 
believe based on my experience with other databases. Do constraint 
checks on domains occur before the before insert trigger?


Bill


Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Tom Lane
Bill [EMAIL PROTECTED] writes:
 In a newsgroup posting someone suggested that constraint checks on 
 domains occur before the before insert trigger.

Yeah, that is the case, but if a domain check was failing then the
row wouldn't get inserted, so I'm not clear on how this matches up
with your report.

regards, tom lane



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Bill

Tom Lane wrote:

Bill [EMAIL PROTECTED] writes:
  
In a newsgroup posting someone suggested that constraint checks on 
domains occur before the before insert trigger.



Yeah, that is the case, but if a domain check was failing then the
row wouldn't get inserted, so I'm not clear on how this matches up
with your report.

regards, tom lane


  
The row is not getting inserted. I just created a test table and trigger 
and confirmed that the trigger fires if the column is defined as bigint 
not null and fails after I change the type to the domain. I will alter 
all of the tables and get rid of the domain.


Is it possible to create a type and use that instead of the domain or 
will I have the same problem with a type?


Bill


Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Tom Lane
Bill [EMAIL PROTECTED] writes:
 Is it possible to create a type and use that instead of the domain or 
 will I have the same problem with a type?

You'd have the same problem.  By the time the trigger sees it, the row
has already been converted to the table's column datatype(s), so any
exception associated with a datatype or domain would be thrown already.

A lot of people seem to have trouble with this concept; I dunno what
data representation they think the trigger is working on...

If you want to enforce constraints for a table in the trigger, you can
do that, but it's not going to work to try to mix and match
trigger-based and datatype-based restrictions.

regards, tom lane



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Bill

You'd have the same problem. By the time the trigger sees it, the row

has already been converted to the table's column datatype(s), so any
exception associated with a datatype or domain would be thrown already.

A lot of people seem to have trouble with this concept; I dunno what
data representation they think the trigger is working on...

If you want to enforce constraints for a table in the trigger, you can
do that, but it's not going to work to try to mix and match
trigger-based and datatype-based restrictions.

regards, tom lane


  
I have no problem with the concept now that I understand it. It is just
different from InterBase and Firebird, which I have done a lot of work
with lately. Thanks very much for your help.


Bill



Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Phoenix Kiula
  If you're going to run slony, then the pg_dump|pg_restore step is
  completely wasted.  Slony will restore all the data again.  You do
  need to run pg_dump -s, of course.




Thanks to everyone who replied. We have no experience with this
Slony. Any simple instructions on installing it and getting it
working?

Why is PG so complex! It's such a lovely database. If only at least
the installation and upgrade could be made more humane..



Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Joshua Drake
On Tue, 26 Aug 2008 09:25:09 +0800
Phoenix Kiula [EMAIL PROTECTED] wrote:

 Slony. Any simple instructions on installing it and getting it
 working?
 
 Why is PG so complex! It's such a lovely database. If only at least
 the installation and upgrade could be made more humane..
 

Well, the installation is about as easy as it gets. I grant you we are
in the stone age when it comes to upgrading. We are however welcoming
patches, should you feel the need to create and/or sponsor one :)

Joshua D. Drake

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] Installing Postgres 8.3.3

2008-08-25 Thread Craig Ringer
Brian Green wrote:
 I am going to try to get 8.3.3 to work for me.
 
 1.
 I have read that to install 8.3 on Vista you should disable UAC. If I
 do so, can I turn it back on again AFTER Postgres is installed?

I never found it to be necessary to disable UAC. Running the installer
as the administrator user (shift-right-click, Run As) is quite sufficient.

 2.
 If I install to user 'E' and then reboot the computer and then without
 logging on I try to access the database from a remote computer, will it
 still have its service running and available?

The PostgreSQL database runs as a service on Windows. It does not
require GUI/logon session interaction, and runs fine with any or no
user(s) logged in.

 Does it matter (as far as usability from a remote computer) if I try to
 install to an administrator account or a standard account?

No. However, the PostgreSQL installer wants to create its own user
account unless you've already set one up for it. It can't do this if
it's run as a standard user, but it may still be able to install and run
under the rights of the user you're using to install it.

I'd recommend just installing it as admin and letting it create its own
user account to run under. That way the postgres account will have only
the privileges required to run PostgreSQL, and won't get unnecessary
rights like the ability to log in with a GUI session. It also won't show
up in the list of users if you use a welcome-screen-style login prompt.

 4.
 To run the installer, do I click on SETUP.bat?

 Inside SETUP.bat are the lines:

 vcredist_x86.exe /q:a /c:"msiexec /i vcredist.msi /qb!"
 msiexec /i postgresql-8.3.msi

 What do the msiexec commands cause to happen, and what does
 vcredist.msi do?

I think what's going on there boils down to:

Using the VC++ 2005 library redistributable installer, quietly install
the VC++ 2005 libraries onto the user's system, then invoke the MSI
install package for PostgreSQL.

The VC++ 2005 library redist installer, and the libraries it contains,
are provided by Microsoft.

PostgreSQL requires the VC++ 2005 libraries if it's built with VC++
2005, as the current 8.3 binaries for win32 are.

--
Craig Ringer



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Bill

Tom Lane wrote:

Bill [EMAIL PROTECTED] writes:
  
Is it possible to create a type and use that instead of the domain or 
will I have the same problem with a type?



You'd have the same problem.  By the time the trigger sees it, the row
has already been converted to the table's column datatype(s), so any
exception associated with a datatype or domain would be thrown already.

A lot of people seem to have trouble with this concept; I dunno what
data representation they think the trigger is working on...

If you want to enforce constraints for a table in the trigger, you can
do that, but it's not going to work to try to mix and match
trigger-based and datatype-based restrictions.

regards, tom lane


  
I removed the domain from the category_id and version columns leaving 
the following table, trigger function and trigger. The trigger function 
is still not called when I insert a new row. Any other ideas?


Bill

CREATE TABLE note.category
(
  category_id bigint NOT NULL,
  category character varying(40) NOT NULL,
  uc_category note.d_category,
  parent_category_id bigint,
  version bigint NOT NULL,
  category_checked boolean NOT NULL DEFAULT false,
  CONSTRAINT category_primary_key PRIMARY KEY (category_id)
);

CREATE OR REPLACE FUNCTION note.category_bi()
  RETURNS trigger AS
$BODY$
BEGIN
  RAISE NOTICE '**CATEGORY BI**';
  IF (NEW.CATEGORY IS NULL OR NEW.CATEGORY = '') THEN
    RAISE EXCEPTION 'Category cannot be blank.';
  END IF;

  IF (NEW.CATEGORY_ID IS NULL) THEN
    NEW.CATEGORY_ID := nextval('note.id_seq');
  END IF;

  NEW.VERSION := nextval('note.version_seq');
  NEW.UC_CATEGORY := UPPER(NEW.CATEGORY);
  RETURN NEW;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER category_bi_trigger
  BEFORE UPDATE
  ON note.category
  FOR EACH ROW
  EXECUTE PROCEDURE note.category_bi();



Re: [GENERAL] Easy upgrade on Cpanel *without* downtime

2008-08-25 Thread Scott Marlowe
On Mon, Aug 25, 2008 at 7:25 PM, Phoenix Kiula [EMAIL PROTECTED] wrote:
  If you're going to run slony, then the pg_dump|pg_restore step is
  completely wasted.  Slony will restore all the data again.  You do
  need to run pg_dump -s, of course.




 Thanks to everyone who replied. We have no experience with this
 Slony. Any simple instructions on installing it and getting it
 working?

 Why is PG so complex! It's a such a lovely database. If only at least
 the installation and upgrade could be made more humane..

Any database that has real ACID guarantees is going to be complex,
some more than others.  I think that upgrading PostgreSQL in place is
a LOT of work and not the highest priority, and slony lets you upgrade
LIVE.  Note that you cannot upgrade an Oracle installation from 9i to
10g live.  You take it offline from the app, migrate your data, and
start up the new database.

Slony replication lets postgresql accomplish this, which is really
quite impressive.  We just upgraded from an 8.1 server to an 8.3
server via slony, and it went smooth as silk.  db downtime was
measured in seconds.



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Tom Lane
Bill [EMAIL PROTECTED] writes:
 I removed the domain from the category_id and version columns leaving 
 the following table, trigger function and trigger. The trigger function 
 is still not called when I insert a new row. Any other ideas?

You're still expecting the trigger to get invoked before any constraints
are enforced (the NOT NULLs being the problem here, I think).  Again,
you can enforce things through a trigger or through a table constraint,
but mixing and matching won't work too well.

regards, tom lane



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Bill

Tom Lane wrote:

Bill [EMAIL PROTECTED] writes:
  
I removed the domain from the category_id and version columns leaving 
the following table, trigger function and trigger. The trigger function 
is still not called when I insert a new row. Any other ideas?



You're still expecting the trigger to get invoked before any constraints
are enforced (the NOT NULLs being the problem here, I think).  Again,
you can enforce things through a trigger or through a table constraint,
but mixing and matching won't work too well.

regards, tom lane


  
The thing that has me confused is that the following table, trigger and 
trigger function work perfectly and the primary key for this table is 
also bigint not null. I added a bigint not null domain to this schema 
and changed the data type of the key to the domain and then I get the 
constraint violation. I changed the type of the key column back to 
bigint not null and the trigger fires and no error occurs.


Bill

CREATE TABLE test.trigger_test
(
  key bigint NOT NULL,
  data character varying(16),
  CONSTRAINT trigger_test_key PRIMARY KEY (key)
);

CREATE OR REPLACE FUNCTION test.trigger_test_before_insert()
  RETURNS trigger AS
$BODY$
begin
  raise notice '*Test before insert*';
  new.key := nextval('test.id_seq');
  return new;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER trigger_test_insert
  BEFORE INSERT
  ON test.trigger_test
  FOR EACH ROW
  EXECUTE PROCEDURE test.trigger_test_before_insert();




Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Tom Lane
Bill [EMAIL PROTECTED] writes:
 The thing that has me confused is that the following table, trigger and 
 trigger function work perfectly and the primary key for this table is 
 also bigint not null.

Actually, after looking closer, I think the problem with your previous
example is that you created an ON UPDATE trigger not an ON INSERT
trigger.  Table constraints are indeed enforced after before-triggers
fire, as a quick look at the code proves.  Sorry for the misinformation.

regards, tom lane



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Tom Lane
Klint Gore [EMAIL PROTECTED] writes:
 ...  With the not null definition in the domain, this 
 blows up before anything else has a chance.

Right.  Forming the proposed row-to-insert involves coercing the data to
the correct data types, and for domain types enforcing the domain
constraints is seen as part of that.  So you can't use a trigger to
clean up problems that violate the column's datatype definition.

However, constraints associated with the *table* (such as a NOT NULL
column constraint in the table definition) are enforced only after the
before-trigger(s) fire.  So you could use a table constraint to backstop
something you're expecting a trigger to enforce.
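
Concretely, the difference looks like this (hypothetical names):

CREATE DOMAIN d_id AS bigint NOT NULL;
CREATE TABLE t_domain (id d_id);             -- NOT NULL lives in the domain
CREATE TABLE t_table  (id bigint NOT NULL);  -- NOT NULL lives in the table
-- Given the same before-insert trigger filling in NEW.id on both tables,
-- inserting a NULL id into t_domain fails while the row is being formed,
-- before the trigger fires; the same insert into t_table lets the
-- trigger run first and fill in the value.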

This difference is probably what's confusing Bill, and I didn't help any
by giving wrong information about it just now.  Sorry again.

regards, tom lane



Re: [GENERAL] [ADMIN] Regarding access to a user

2008-08-25 Thread Vishal Arora




 Date: Mon, 25 Aug 2008 11:08:14 -0600 From: [EMAIL PROTECTED] To: [EMAIL 
 PROTECTED] Subject: Re: [ADMIN] Regarding access to a user CC: [EMAIL 
 PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
 pgsql-general@postgresql.org; [EMAIL PROTECTED]  On Thu, Aug 21, 2008 at 
 3:05 AM, Shashwat_Nigam [EMAIL PROTECTED] wrote:  Dear Vishal   
 Thanks for the help but by doing this an error is generated at the time when 
  the user is trying to log in with the following message:  
 Access to database denied   The server doesn't grant access to the 
 database: the server reports   FATAL: no pg_hba.conf entry for host 
 127.0.0.1, user hmri, database  postgres, SSL off  Notice that it 
 says database postgres??? User hmri doesn't have permission to connect to 
 the the postgres database. you need to connect to the hmri database.
He is using pgAdmin III to connect to the database. pgAdmin III has a text
field called Maintenance DB, which defaults to postgres. When you add
pg_hba.conf entries only for a specific user and a specific database, you
get the above error unless you also set the Maintenance DB to the database
that user is allowed to connect to.
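
For reference, the pg_hba.conf entries involved would look something like
this sketch (address and auth method are assumptions that must match the
actual setup):

# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
host    hmri      hmri  127.0.0.1/32   md5
# and, if pgAdmin's Maintenance DB field is left at its default:
host    postgres  hmri  127.0.0.1/32   md5

followed by a reload (pg_ctl reload) for the changes to take effect.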
 
 
 

[GENERAL] Partial Indexes Not being Used [WAS]Re: Partial_indexes (Immutable?)

2008-08-25 Thread Ow Mun Heng
On Mon, 2008-08-25 at 10:18 -0400, Tom Lane wrote: 
 Ow Mun Heng [EMAIL PROTECTED] writes:
  On Mon, 2008-08-25 at 12:23 +0800, Ow Mun Heng wrote:
  CREATE INDEX idx_d_trh_code_id_partial
  ON xmms.d_trh_table
  USING btree
  (code_id) where code_id not in ('P000','000') and code_id is not null;
  ERROR:  functions in index predicate must be marked IMMUTABLE
 
  BTW, this is on 8.2.9. Seems to work OK on 8.3.3.
 
 I suppose code_id is varchar or some such?

Yep 
 
 Try where code_id::text not in ...  There's an array type coercion
 underlying the right-hand side of the NOT IN, and 8.2 had some problems
 with correctly identifying the volatility of such coercions.

This now works.

Prior to that, I was trying
WHERE code_id::text <> 'P000'::text OR code_id::text <> '000'::text
Which is basically a variant of the above (only that I didn't realise
it!)
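
For the archives, the definition that now works is essentially this sketch
of Tom's suggestion (same identifiers as the original attempt):

CREATE INDEX idx_d_trh_code_id_partial
  ON xmms.d_trh_table
  USING btree (code_id)
  WHERE code_id::text NOT IN ('P000', '000') AND code_id IS NOT NULL;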


After a bit more investigation into the usefulness of the partial indexes,
I found that they really aren't all that useful; perhaps some experts can
shed some light.

explain analyse select * from d_trh_pbert where code_id IN( 'HAMA2');
  QUERY PLAN
--
 Index Scan using idx_d_trh_pbert_eval on d_trh_pbert  (cost=0.00..26669.96 
rows=7125 width=216) (actual time=0.066..2.491 rows=1840 loops=1)
   Index Cond: ((code_id)::text = 'HAMA2'::text)
 Total runtime: 4.018 ms


explain analyse select * from d_trh_pbert where code_id IN( 'HAMA2','HAMA3');
  QUERY PLAN
--
 Bitmap Heap Scan on d_trh_pbert  (cost=262.02..53641.68 rows=14249 width=216) 
(actual time=0.926..4.858 rows=3556 loops=1)
   Recheck Cond: ((code_id)::text = ANY (('{HAMA2,HAMA3}'::character 
varying[])::text[]))
   ->  Bitmap Index Scan on idx_d_trh_pbert_eval  (cost=0.00..258.45 rows=14249 
width=0) (actual time=0.853..0.853 rows=3556 loops=1)
 Index Cond: ((code_id)::text = ANY (('{HAMA2,HAMA3}'::character 
varying[])::text[]))
 Total runtime: 7.809 ms

It doesn't even hit the partial index.


explain analyse select * from d_trh_pbert where code_id IN( 'HAMA2','HAMA3') 
and code_id not in ('P000','000') and code_id is not null;
  QUERY PLAN
--
 Bitmap Heap Scan on d_trh_pbert  (cost=259.90..53675.18 rows=5788 width=216) 
(actual time=0.916..7.477 rows=3556 loops=1)
   Recheck Cond: ((code_id)::text = ANY (('{HAMA2,HAMA3}'::character 
varying[])::text[]))
   Filter: ((code_id IS NOT NULL) AND ((code_id)::text <> ALL 
(('{P000,000}'::character varying[])::text[])))
   ->  Bitmap Index Scan on idx_d_trh_pbert_eval  (cost=0.00..258.45 rows=14249 
width=0) (actual time=0.835..0.835 rows=3556 loops=1)
 Index Cond: ((code_id)::text = ANY (('{HAMA2,HAMA3}'::character 
varying[])::text[]))
 Total runtime: 10.510 ms

hmxmms=# explain analyse select * from d_trh_pbert where code_id IN( 'HAMA3') 
and code_id not in ('P000','000') and code_id is not null;
  QUERY PLAN
--
 Index Scan using idx_d_trh_pbert_eval on d_trh_pbert  (cost=0.00..26687.77 
rows=2894 width=216) (actual time=0.077..3.506 rows=1716 loops=1)
   Index Cond: ((code_id)::text = 'HAMA3'::text)
   Filter: ((code_id)::text <> ALL (('{P000,000}'::character 
varying[])::text[]))
 Total runtime: 5.025 ms


The 2 indexes.

CREATE INDEX idx_d_trh_pbert_eval_partial2
  ON xmms.d_trh_pbert
  USING btree
  (code_id)
  WHERE (code_id::text <> ALL (ARRAY['P000'::text, '000'::text])) AND code_id 
IS NOT NULL; (size ~500MB)

CREATE INDEX idx_d_trh_pbert_eval
  ON xmms.d_trh_pbert
  USING btree
  (code_id); (size ~1.5G)

This table has approx 73 million rows and is 35 columns wide.
The statistics target on the code_id column is set to 200, and there are
~1k distinct values in it.
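
For reference, that is the sort of setting applied and checked roughly like
this (a sketch):

ALTER TABLE xmms.d_trh_pbert ALTER COLUMN code_id SET STATISTICS 200;
ANALYZE xmms.d_trh_pbert;
SELECT n_distinct
  FROM pg_stats
 WHERE schemaname = 'xmms'
   AND tablename  = 'd_trh_pbert'
   AND attname    = 'code_id';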


code_id is varchar(5)

I was hoping that the partial index would make things faster, since
~70-80% of the rows have code_id in ('P000','000').





Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Bill

Tom Lane wrote:

Bill [EMAIL PROTECTED] writes:
  
The thing that has me confused is that the following table, trigger and 
trigger function work perfectly and the primary key for this table is 
also bigint not null.



Actually, after looking closer, I think the problem with your previous
example is that you created an ON UPDATE trigger not an ON INSERT
trigger.  Table constraints are indeed enforced after before-triggers
fire, as a quick look at the code proves.  Sorry for the misinformation.

regards, tom lane


  
I knew I was missing something really simple. I changed the trigger to 
before insert and everything works perfectly. Thanks again for your 
help. I learned a lot.


Bill


Re: [GENERAL] Partial Indexes Not being Used [WAS]Re: Partial_indexes (Immutable?)

2008-08-25 Thread Tom Lane
Ow Mun Heng [EMAIL PROTECTED] writes:
 On Mon, 2008-08-25 at 10:18 -0400, Tom Lane wrote: 
 I suppose code_id is varchar or some such?

 Yep 
 After a bit more investigation into the usefulness of the partial indexes,
 I found that they really aren't all that useful; perhaps some experts can
 shed some light.

I poked at that example a bit more earlier today, and found that 8.3
has a problem that's interfering with optimizing x IN ('y','z') type
clauses when x is varchar.  If you don't mind building a local copy,
see if this patch helps you any:
http://archives.postgresql.org/pgsql-committers/2008-08/msg00254.php

regards, tom lane



Re: [GENERAL] Trigger function is not called

2008-08-25 Thread Klint Gore

Bill wrote:
The thing that has me confused is that the following table, trigger 
and trigger function work perfectly and the primary key for this table 
is also bigint not null. I added a bigint not null domain to this 
schema and changed the data type of the key to the domain and then I 
get the constraint violation. I changed the type of the key column 
back to bigint not null and the trigger fires and no error occurs.
Perhaps explain verbose on the insert will make things clearer.  When 
the domain is used, there's a COERCETODOMAIN step that gets the constant 
into the domain type.  With the not null definition in the domain, this 
blows up before anything else has a chance.


begin;

create schema test;
create sequence test.id_seq;
create domain mydom as bigint not null;

CREATE TABLE test.trigger_test
(
 key bigint NOT NULL,
 data character varying(16),
 CONSTRAINT trigger_test_key PRIMARY KEY (key)
);

CREATE TABLE test.trigger_test2
(
 key mydom,
 data character varying(16),
 CONSTRAINT trigger_test_key2 PRIMARY KEY (key)
);


CREATE OR REPLACE FUNCTION test.trigger_test_before_insert()
 RETURNS trigger AS
$BODY$
begin
 raise notice '*Test before insert*';
 new.key := nextval('test.id_seq');
 return new;
end;
$BODY$
 LANGUAGE 'plpgsql' VOLATILE;


CREATE TRIGGER trigger_test_insert
 BEFORE INSERT
 ON test.trigger_test
 FOR EACH ROW
 EXECUTE PROCEDURE test.trigger_test_before_insert();

CREATE TRIGGER trigger_test_insert2
 BEFORE INSERT
 ON test.trigger_test2
 FOR EACH ROW
 EXECUTE PROCEDURE test.trigger_test_before_insert();

explain verbose insert into test.trigger_test values (null,'hi');
--explain verbose insert into test.trigger_test2 values (null,'hi');
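
-- added for completeness: the script opened a transaction with begin;
-- roll it back to discard the test objects (or commit; to keep them)
rollback;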

klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]

