Re: [GENERAL] extend sql?

2006-01-12 Thread Qingqing Zhou

""xiapw"" <[EMAIL PROTECTED]> wrote
> Hello everyone,I want to konw how to extend the SQL.
> For example,Create table temp [sec_level],sec_level means the secure level 
> of the table that you have created,how to do this?

If you mean extend the SQL grammar, modify file backend/parser/gram.y.

Regards,
Qingqing 





[GENERAL] extend sql?

2006-01-12 Thread xiapw



Hello everyone, I want to know how to extend the SQL.
For example, Create table temp [sec_level], where sec_level means the
security level of the table that you have created. How to do this?
xiapw


Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Tom Lane
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
> Maybe related question: is the code below in XactLockTableWait() related to 
> SIGQUIT?

No.

>  /*
>   * Transaction was committed/aborted/crashed - we have to update pg_clog
>   * if transaction is still marked as running.
>   */
>  if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid))
>   TransactionIdAbort(xid);

The comment's "have to" is an overstatement.  The transaction would be
treated as crashed anyway, it's just that this is a convenient place to
make pg_clog a bit cleaner.  I'm not real sure why we bother, actually.

regards, tom lane



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Qingqing Zhou

"Tom Lane" <[EMAIL PROTECTED]> wrote
>
> The key word there is "safely".  We don't have a lot of trust in
> SIGTERM'ing individual backends (as opposed to shutting down the
> whole cluster at once, which is a well-tested code path).  See the
> archives.
>

Maybe related question: is the code below in XactLockTableWait() related to 
SIGQUIT?

 /*
  * Transaction was committed/aborted/crashed - we have to update pg_clog
  * if transaction is still marked as running.
  */
 if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid))
  TransactionIdAbort(xid);

My interpretation is that if a quickdie or crash happens, other backends may
still run for a while, so it is important to mark the related transaction as
aborted. Or is there some other, more obvious reason for it?

Regards,
Qingqing





Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Chris Browne
[EMAIL PROTECTED] ("Mikael Carneholm") writes:
>>"Built In" Failover/Clustering
>
>>This won't happen. The community stance, which is a good one, is that
>>no single replication solution fits everyone's needs and therefore
>>we rely on outside sources. Slony-I, Mammoth Replicator and
>>pgpool being the most popular.
>
> Too bad - I think that will keep a lot of potential users from
> evaluating Pg as a serious alternative. Good or bad, decide for
> yourself :)

Why on earth should that be?

What serious alternative to PostgreSQL actually includes built-in
failover or clustering?

For Oracle, it is a separate add-on product licensed separately.

Ditto for DB2.

The same is likely the case for Informix and others.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "acm.org")
http://cbbrowne.com/info/x.html
"Let's face it  -- ASCII text is  a far richer medium  than most of us
deserve."  -- Scott McNealy



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Harry Jackson
On 1/12/06, Jeff Trout <[EMAIL PROTECTED]> wrote:
>
> On Jan 12, 2006, at 1:36 PM, Joshua D. Drake wrote:
>
> >>> "Built In" Failover/Clustering
> >>> This won't happen. The community stance, which is a good one, is
> >>> that no single replication solution fits everyone's needs and
> >>> therefore we rely on outside
> >>> sources. Slony-I, Mammoth Replicator and pgpool being the most
> >>> popular.
> >> Too bad - I think that will keep a lot of potential users from
> >> evaluating Pg as a serious alternative. Good or bad, decide for
> >> yourself :)
> >
>
> Isn't the [expensive db name here]'s replication/failover just an
> expensive add-on?
> As in if you don't pay for it you don't get it.
>
> So we're basically in the same boat as them ... just an add-on. We just
> offer more variety.

Not really. The entire company of [expensive DB name here] is at the
end of the phone[0].

Take Oracle as an example.

I am not aware of Oracle etc. having a separate company that sells
replication on top of their database, although I could be wrong. The
other thing is that Oracle is supported on various platforms etc. and
that support will include their replication or clustering offering.
Sun has offered to support PostgreSQL just recently, but have they
offered to support any of the replication offerings?

I would hardly say we are in the same boat just because we have bolt-on
replication.

--
Harry
http://www.hjackson.org
http://www.uklug.co.uk

[0] I am not suggesting that this improves support although it does
improve the appearance of support.



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Doug McNaught
Jeff Trout <[EMAIL PROTECTED]> writes:

> Isn't the [expensive db name here]'s replication/failover just an
> expensive add-on?
> As in if you don't pay for it you don't get it.
>
> So we're basically in the same boat as them ... just an add-on. We just
> offer more variety.

Well, [cheap and crappy open-source db name here]'s replication is
built in, but we already know we don't want to take them as an
example. :)

-Doug



Re: [GENERAL] MAX without FROM

2006-01-12 Thread Angshu Kar
Thanks Scott. I've used another way of getting the max - I've used order by
desc in my inner query and selected top 1 to get the max value ...

On 1/12/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, 2006-01-12 at 13:14, Angshu Kar wrote:
> > What are 1, 2 and 3 in your query?
> >
> > Here is my inner query which returns a set of min values. I want to
> > get the max out of the returned mean values...
> >
> > select min(evalue) as M from distance where query_id
> > in (select entry_id from partition where partition = 849)
> >
> > On 1/12/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> > On Thu, 2006-01-12 at 12:58, Angshu Kar wrote:
> > > Hi,
> > >
> > > Can I use Select max(a field returned from another query)?
> > > say, another query returns say 10,1,2,3,6,7
> >
> > Does this form help you:
> >
> > select max(a) from (select 1 as a union select 2 union select
> > 3) as b;
>
> The select 1 as a union ... part was just showing where you could put a
> REAL query that had multiple rows.
>
> Where is your evalue coming from, is that from partition?  If so, you
> might be better off with a join of some kind.  Maybe if you explain a
> little more what your data sets are and what you're trying to get from
> them...

--
Ignore the impossible but honor it ...
The only enviable second position is success, since failure always comes first...


Re: [GENERAL] MAX without FROM

2006-01-12 Thread Scott Marlowe
On Thu, 2006-01-12 at 13:14, Angshu Kar wrote:
> What are 1 ,2 and 3 in your query?
> 
> Here is my inner query which returns a set of min values. I want to
> get the max out of the returned mean values...
> 
> select min(evalue) as M from distance where query_id 
> in (select entry_id from partition where partition = 849) 
> 
> On 1/12/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, 2006-01-12 at 12:58, Angshu Kar wrote:
> > Hi,
> >
> > Can I use Select max(a field returned from another query)?
> > say, another query returns say 10,1,2,3,6,7
> 
> Does this form help you:
> 
> select max(a) from (select 1 as a union select 2 union select
> 3) as b;

The select 1 as a union ... part was just showing where you could put a
REAL query that had multiple rows.

Where is your evalue coming from, is that from partition?  If so, you
might be better off with a join of some kind.  Maybe if you explain a
little more what your data sets are and what you're trying to get from
them...



Re: [GENERAL] MAX without FROM

2006-01-12 Thread Angshu Kar
What are 1, 2 and 3 in your query?

Here is my inner query which returns a set of min values. I want to
get the max out of the returned mean values...

select min(evalue) as M from distance where query_id
in (select entry_id from partition where partition = 849)

On 1/12/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, 2006-01-12 at 12:58, Angshu Kar wrote:
> > Hi,
> >
> > Can I use Select max(a field returned from another query)?
> > say, another query returns say 10,1,2,3,6,7
>
> Does this form help you:
>
> select max(a) from (select 1 as a union select 2 union select 3) as b;

--
Ignore the impossible but honor it ...
The only enviable second position is success, since failure always comes first...


Re: [GENERAL] MAX without FROM

2006-01-12 Thread Scott Marlowe
On Thu, 2006-01-12 at 12:58, Angshu Kar wrote:
> Hi,
> 
> Can I use Select max(a field returned from another query)?
> say, another query returns say 10,1,2,3,6,7 

Does this form help you:

select max(a) from (select 1 as a union select 2 union select 3) as b;



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Jeff Trout


On Jan 12, 2006, at 1:36 PM, Joshua D. Drake wrote:


"Built In" Failover/Clustering
This won't happen. The community stance, which is a good one is  
that no single replication solutions fits everyone's needs and  
therefore we rely out the outside
sources. Slony-I, Mammoth Replicator and pgpool being the most  
popular.
Too bad - I think that will keep a lot of potential users from  
evaluating Pg as a serious alternative. Good or bad, decide for  
yourself :)




Isn't the [expensive db name here]'s replication/failover just an
expensive add-on?

As in if you don't pay for it you don't get it.

So we're basically in the same boat as them ... just an add-on. We just
offer more variety.


--
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/





[GENERAL] MAX without FROM

2006-01-12 Thread Angshu Kar
Hi,

Can I use Select max(a field returned from another query)?
say, another query returns say 10,1,2,3,6,7

Thanks,
AK

--
Ignore the impossible but honor it ...
The only enviable second position is success, since failure always comes first...


Re: [GENERAL] Security implications of untrusted triggers

2006-01-12 Thread Tom Lane
Joshua Kramer <[EMAIL PROTECTED]> writes:
> I am writing a couple of Perl modules that talk to the outside world: one 
> talks to a database (via DBI), and one talks to a Jabber/XMPP server.  I 
> want to use these from within a Trigger.

This is most likely a bad idea for reasons that have nothing to do with
security.  Consider what happens when the trigger executes and later in
the transaction there's an error, causing the whole transaction to roll
back.  Now the outside world has changed state and your database has
not.

It's usually better to have an application process that is in charge of
monitoring database state and updating external stuff to match.  See the
archives for many previous discussions of this point.
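
A rough sketch of that pattern, with made-up table and column names: the
trigger (or the normal application code) only records what needs to be sent,
and a separate daemon pushes it to Jabber after the transaction has committed:

-- rows only become visible to the daemon once the writing transaction commits
CREATE TABLE outbound_messages (
    id        serial PRIMARY KEY,
    recipient text NOT NULL,
    body      text NOT NULL,
    queued_at timestamptz NOT NULL DEFAULT now(),
    sent_at   timestamptz                -- filled in by the daemon after delivery
);

-- the external process polls something like:
--   SELECT id, recipient, body FROM outbound_messages WHERE sent_at IS NULL;
-- and sets sent_at = now() on each row once the message has actually gone out.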

regards, tom lane



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Tom Lane
Tino Wildenhain <[EMAIL PROTECTED]> writes:
>> * Allow administrators to safely terminate individual sessions either via an 
>> SQL function or SIGTERM

> I thought this already works? At least I'm doing so when I need ...
> (SIGTERM)

The key word there is "safely".  We don't have a lot of trust in
SIGTERM'ing individual backends (as opposed to shutting down the
whole cluster at once, which is a well-tested code path).  See the
archives.

regards, tom lane



Re: [GENERAL] Large object restore problem w/triggers

2006-01-12 Thread Tom Lane
Claire McLister <[EMAIL PROTECTED]> writes:
> Thanks for your quick response. Sorry for the omission, the PG  
> version is 7.4.8

> How do I temporarily disable the trigger while doing the restore?  
> Should I remove the trigger, do the dump, and then work from there?

Yeah, it looks like that's the easiest answer in 7.4 :-(.  I had thought
that pg_restore's --disable-triggers might help you, but it looks like
it won't fire unless you use a data-only restore, which is generally not
a preferable way to do things.
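
For the record, a rough sketch of that workaround - the trigger and function
names here are only placeholders:

-- before dumping (or before restoring into the target database):
DROP TRIGGER lo_cleanup_trigger ON mytable;

-- ... run pg_dump / pg_restore ...

-- afterwards, put the trigger back:
CREATE TRIGGER lo_cleanup_trigger
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE lo_cleanup();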

regards, tom lane



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Tino Wildenhain
Chris Browne schrieb:
> From the ToDo list...
> 
> http://www.postgresql.org/docs/faqs.TODO.html
> 
> * Allow administrators to safely terminate individual sessions either via an 
> SQL function or SIGTERM

I thought this already works? At least I'm doing so when I need ...
(SIGTERM)



Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Joshua D. Drake

Mikael Carneholm wrote:
>> In terms of statistics we do have statistics and exhaustive logging that
>> can provide you with all of that information. Is there something specific
>> that the information already provided really doesn't give you?

> Can you give an example query for "list all queries executed since 12.00 AM,
> order by block_reads desc"? What I'm aiming for is the ability to turn
> "measuring" on, regression test my application, turn "measuring" off again,
> and list the most offensive queries executed during the regression test.
> I know of at least one other DBMS that is capable of this...won't mention
> which one :)


You can use timestamp and one of the duration logging options for this.
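
For example, a minimal postgresql.conf sketch (adjust the threshold to taste):

log_min_duration_statement = 0   # log every statement together with its run time
log_line_prefix = '%t [%p] '     # prefix each line with a timestamp and backend PID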


"Built In" Failover/Clustering


This won't happen. The community stance, which is a good one is that no 
single replication solutions fits everyone's needs and therefore we rely 
out the outside

sources. Slony-I, Mammoth Replicator and pgpool being the most popular.


Too bad - I think that will keep a lot of potential users from evaluating Pg as 
a serious alternative. Good or bad, decide for yourself :)


Although that is one thought, the project is fairly good at supporting 
the various solutions in terms of where and how they exist.


Most people that are going to seriously use postgresql are going to do 
the very easy research to get the right answer they need.



>> Get coding man! :) We would love to have them.

> Sure - as soon as I'm finished with my 4 other hobby projects. Too many
> ideas, too little time.. *sigh* :/


Perhaps you should drop the other 4 and concentrate on the important one ;)

Joshua D. Drake



/Mikael




--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



[GENERAL] Security implications of untrusted triggers

2006-01-12 Thread Joshua Kramer


Or more specifically, what are the security implications of a trigger 
written in an untrusted language - PL/PerlU?


With a standard stored procedure, you have the possibility of an 
SQL-injection attack.  Is this possible with a trigger function, if it is 
defined as a trigger?


I am writing a couple of Perl modules that talk to the outside world: one 
talks to a database (via DBI), and one talks to a Jabber/XMPP server.  I 
want to use these from within a Trigger.  Do I have to taint-check the 
input provided by the trigger mechanism - or does PG do this?


Thanks,
-Josh




Re: [GENERAL] Large object restore problem w/triggers

2006-01-12 Thread Claire McLister
Thanks for your quick response. Sorry for the omission, the PG  
version is 7.4.8


How do I temporarily disable the trigger while doing the restore?  
Should I remove the trigger, do the dump, and then work from there?


It's difficult to move to 8.1 right now, so I'd prefer a workaround  
if we can do that.


Claire

On Jan 12, 2006, at 10:23 AM, Tom Lane wrote:


> Claire McLister <[EMAIL PROTECTED]> writes:
> >   We have a database with a bunch of large objects, whose ids we
> > reference in a table. There is a trigger associated with inserts and
> > updates on the table to delete the old value when inserting a new
> > large object associated with a row in the table.
>
> >   This causes a problem when doing a pg_dump and pg_restore. The dump
> > works fine, but when doing a restore it tries to trigger a delete of
> > an old large object. It seems that the object id is associated with
> > the database that was dumped, and not the one that was restored. So,
> > lo_unlink fails and the whole restore aborts.
>
> >   Has anyone seen this behavior before? Am I doing something wrong?
> > Is there a workaround for this?
>
> You haven't said which PG version you're using.
>
> Pre-8.1, the deal is this: you can never have the same large object OIDs
> in the new database as you did in the old.  There is code in
> pg_dump/pg_restore to try to update large-object references after the
> data load step.  A trigger doing what you describe would probably break
> that update step, but you could work around it by disabling the trigger
> temporarily.  (I thought that pg_restore was designed to not install
> user triggers until after it'd done the OID updating, but maybe this
> recollection is wrong.)
>
> 8.1 has a much nicer approach, which is that there's a variant of
> lo_create that allows a large object to be reloaded with the same OID it
> had before.  This eliminates the need for the update step in pg_restore.
> If you're having problems in 8.1 then I'd speculate that maybe there's a
> logic bug in your trigger.
>
>   regards, tom lane





Re: [GENERAL] Large object restore problem w/triggers

2006-01-12 Thread Tom Lane
Claire McLister <[EMAIL PROTECTED]> writes:
>   We have a database with a bunch of large objects, whose ids we  
> reference in a table. There is a trigger associated with inserts and  
> updates on the table to delete the old value when inserting a new  
> large object associated with a row in the table.

>   This causes a problem when doing a pg_dump and pg_restore. The dump  
> works fine, but when doing a restore it tries to trigger a delete of  
> an old large object. It seems that the object id is associated with  
> the database that was dumped, and not the one that was restored. So,  
> lo_unlink fails and the whole restore aborts.

>   Has anyone seen this behavior before? Am I doing something wrong?  
> Is there a workaround for this?

You haven't said which PG version you're using.

Pre-8.1, the deal is this: you can never have the same large object OIDs
in the new database as you did in the old.  There is code in
pg_dump/pg_restore to try to update large-object references after the
data load step.  A trigger doing what you describe would probably break
that update step, but you could work around it by disabling the trigger
temporarily.  (I thought that pg_restore was designed to not install
user triggers until after it'd done the OID updating, but maybe this
recollection is wrong.)

8.1 has a much nicer approach, which is that there's a variant of
lo_create that allows a large object to be reloaded with the same OID it
had before.  This eliminates the need for the update step in pg_restore.
If you're having problems in 8.1 then I'd speculate that maybe there's a
logic bug in your trigger.

regards, tom lane



[GENERAL] Large object restore problem w/triggers

2006-01-12 Thread Claire McLister

Hi,

 We have a database with a bunch of large objects, whose ids we  
reference in a table. There is a trigger associated with inserts and  
updates on the table to delete the old value when inserting a new  
large object associated with a row in the table.


 This causes a problem when doing a pg_dump and pg_restore. The dump  
works fine, but when doing a restore it tries to trigger a delete of  
an old large object. It seems that the object id is associated with  
the database that was dumped, and not the one that was restored. So,  
lo_unlink fails and the whole restore aborts.


 Has anyone seen this behavior before? Am I doing something wrong?  
Is there a workaround for this?


 Thanks for your help.

Claire

PS. The dump command was pg_dump --b -Ft -o db > db.tar
Restore command: pg_restore -O -d db db.tar

 --
 Claire McLister        [EMAIL PROTECTED]
 1684 Nightingale Avenue Suite 201
 Sunnyvale, CA 94087     408-733-2737 (fax)

 http://www.zeemaps.com





Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Chris Browne
From the ToDo list...

http://www.postgresql.org/docs/faqs.TODO.html

* Allow administrators to safely terminate individual sessions either via an 
SQL function or SIGTERM
* Add SQL99 WITH clause to SELECT
* Add SQL99 WITH RECURSIVE to SELECT
* Create a bitmap of pages that need vacuuming
-- 
output = reverse("moc.enworbbc" "@" "enworbbc")
http://cbbrowne.com/info/languages.html
Rules of  the Evil Overlord  #86. "I will  make sure that  my doomsday
device is up to code and properly grounded."




Re: [GENERAL] Issue with fixseq.sql in 8.1 release notes

2006-01-12 Thread Tom Lane
"George Woodring" <[EMAIL PROTECTED]> writes:
> I was running fixseq.sql copied from the release notes to update the
> sequences and the output has an extra "\" character

Now how'd that happen?  I *know* I tested that code when I put it into
the release notes ... but it's sure wrong ... thanks for the report.

regards, tom lane



Re: [GENERAL] sql (Stored procedure) design question

2006-01-12 Thread Assad Jarrahian
Alban and Tino,
 More explanation lies below.
A document contains a To field, a From field, a subject, a message, etc.

> >>documentid,username,groupname  (as real fields)
> No, it would look like this:
>
> (1,'jarraa','postgres','keith') to match your original schema.

I don't understand it. What type would the second column be and
what constraint would it have?
And how can I get away with storing just one table? According to my
thoughts, I seem to need two tables for certain (could be wrong).
Schemas as follows:

(documentID, From, subject, message)
(documentID, OneOfTheManyToFieldRecipients)


Keep in mind that I don't know how many entries are in the To field
(it's just one string coming from the client!). It would be helpful to
think of this as just like an email system (but a closed one, as in
you have everybody's username in the db). Furthermore, think of social
networks (that's what I am building).


>
> But it really depends on what you want to solve here.
> What data exactly do you have and how do you want to look
> for data? I mean, what are the keys?
>
I rarely analyze data I receive (aside from constraints). We have
Lucene (a Java search engine) indexing the db contents, and almost all
*SELECT* queries go to that first; then I get an ID list (since
Lucene does not store the data) and I retrieve the information in the
same format it came in.
Lucene, being in Java, can split up tokens fairly easily and quickly,
and our Lucene engineer likes it in the format of the string 'name,
name, name'

> Do you have a specific "group" and want to see all "docs" for
> that group? And then you have a user and want to see all "docs"
> for this user? Or is it the other way round and you have
> a "doc" and want to see responsible users and/or groups?
see above explanation of lucene.

> And are groups independent from users? Or are users always
> in one or many groups? ...

A user can belong to many groups. A user can write a "document" To
users and To groups. The constraint for users is that the user is in the db;
for groups, he/she has to be a member of the group.

Users Schema
(username, userInfo)
Group Schema (2 tables)
(groupID, groupname, groupInfo)
(groupID, oneOfTheManyGroupMembers)

Hope this helps and that I am making sense now. Sorry for the confusion.

-assad

On 1/12/06, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> Assad Jarrahian schrieb:
> > Tino, thanks for your response
> >
> >>Your schema could rather look like this:
> >>
> >>documentid,username,groupname  (as real fields)
> >>
> >
> >
> > Okay, so a typical document can be addressed to any number of users/groups. 
> >  so
> > according to an example with the to field = 'jarraa, postgres, keith',
> > some rows could be like
> >
> > (1, jarraa, )
> > (1,  , postgres)
> > (1, keith, )
> >
>
> No, it would look like this:
>
> (1,'jarraa','postgres','keith') to match your original schema.
>
> But it really depends on what you want to solve here.
> What data exactly do you have and how do you want to look
> for data? I mean, what are the keys?
>
> Do you have a specific "group" and want to see all "docs" for
> that group? And then you have a user and want to see all "docs"
> for this user? Or is it the other way round and you have
> a "doc" and want to see responsible users and/or groups?
> > And are groups independent from users? Or are users always
> in one or many groups? ...
>
>



[GENERAL] Locales problems with debian sarge3.1

2006-01-12 Thread Amédée

Hello,

I have seen a lot of topics about this subject, but nowhere did I find
a real solution. Has the bug been fixed?

# set  >
LANG=fr_FR
LANGUAGE=fr_FR:fr
LC_ALL=C
LC_MESSAGES=C


Before initialising the DB I set these variables
(I don't use the -E option of initdb):

export LC_ALL=C
export LANG="fr_FR.UTF-8"

Before launching the application I set these variables:

export LC_ALL=C
export LANG="fr_FR"

Do I have to set these variables in postgresql.conf?

And eventually, when attempting to start the DB, I get this:

waiting for postmaster to start
FATAL:  invalid value for parameter "lc_messages": "fr_FR.UTF-8"


What asks for lc_messages to be associated with fr_FR.UTF-8?
Is there a way to make it a valid value?
Have I made a wrong variable association?
Is there a generic option to make postgres work?

Thanks


Regards,



Re: [GENERAL] cursors as table sources

2006-01-12 Thread Michael Fuhr
[Please copy the mailing list on replies.  I'm forwarding your
entire message to the list without comment so others can see it;
I'll look at it when I get a chance.]

On Thu, Jan 12, 2006 at 04:21:04PM +0200, Peter Filipov wrote:
> It is the second case.
> 
> I find cursors as good way to pass a result set from function to function.
> Because a function should not be burdened with the knowledge how its caller
> obtained the values that he is passing to her as arguments. Here is my  
> case:
> 
>   loop
> 
>   css:=ces + '1 second'::interval;
>   ces:=tperiod_end(cpp,css);
>   perform cursor_rewind(pp);
> 
>   select
>   css  as 
>   stime,
>   case
>   when allp.tpri>apr.tpri then 
>   tperiod_condend((tperiod.*)::tperiod,css)
>   else ces -- handles last 2 'or's
>   end  as 
>   etime,
>   (tperiod.*)::tperiod as 
>   newcp,
>   (allp.*)::tperiod_pentry as 
>   aper
>   from
>   curs2set(pp,wd) as allp(id int, tpri int, tp int),
>   aperiod,
>   tperiod
>   where
> 
>   allp.tp=aperiod.id and
>   aperiod.id=tperiod.tid and
>   tperiod.id<>cpp.id and
>   (
>   (
>   allp.tpri>apr.tpri and
>   tperiod_condend((tperiod.*)::tperiod,css) 
>   ) or
>   (
>   allp.tpri   
> tperiod_stampin((tperiod.*)::tperiod,tperiod_tstampexplode(ces))
>   ) or
>   (
>   tperiod_condend((tperiod.*)::tperiod,css)=ces
>   )
>   )
>   order by
>   case
>   when allp.tpri>apr.tpri then 
>   tperiod_condend((tperiod.*)::tperiod,css)
>   else ces -- handles last 2 'or's
>   end asc,
>   allp.tpri desc
>   limit 1 
>   into cmp;
> 
>   mp:=found;
>   if mp then
>   css:=cmp.stime;
>   ces:=cmp.etime;
>   apr:=cmp.aper;
>   r.st:=css;
>   r.et:=ces;
>   r.csid:=apr.id;
>   r.tpid:=cpp.id;
>   -- it is important here that we give the current 
>   period, not the next  !!!
>   cpp:=cmp.newcp;
>   else
>   r.st:=css;
>   r.et:=ces;
>   r.csid:=apr.id;
>   r.tpid:=cpp.id;
>   end if;
>   -- substract the total allowed length and handle current 
>   period if  necesarry
>   cl:=r.et-r.st+sl;
>   r.et:=r.st+least(cl,tl)-sl;
>   tl:=tl-least(cl,tl);
>   -- return the current row
>   return next r;
>   -- check wether no more total length exists or there are no 
>   more periods
>   if not mp then exit; end if;
>   if tl 
>   end loop;
> 
> Few notes.
> 1. Cursor rewind is plpgsql and rewinds the cursor to the begining by:  
> execute 'move backward all from '||cursor_name(c);
>I know it is bad idea but I commented few lines in 'spi.c' in order to  
> make that possible
> 2. I think that: select * from table1,(fetch all from cursor1); is good  
> idea but it is not possible to use it in a function.
>If I replace curs2set(pp) with (fetch all from pp) I get errors
> 3. Of course 'pp' is function parameter
> 4. I think there is at least one advantage in allowing cursors as table  
> sources: It gives you flexibility. It may bring performance
> penalties but those won't be as big as the penalty I get in my  
> implementation here. It will still stay 'full scan' but will avoid copying
> here and there result sets.
> 
> 
> Regards,
> Peter Filipov
> 
> On Wed, 11 Jan 2006 11:24:30 -0700, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> 
> >On Wed, Jan 11, 2006 at 04:11:18PM +0200, Peter Filipov wrote:
> >>Is the idea to use cursors as table sources good?
> >>Do you plan to implement it in the future and if you plan will it be  
> >>soon?
> >
> >Do you mean the ability t

Re: [GENERAL] Finding orphan records

2006-01-12 Thread Wes
On 1/12/06 10:37 AM, "Greg Stark" <[EMAIL PROTECTED]> wrote:

> If it's only a factor of 3-4 then the merge join should be faster. If it's
> really two orders of magnitude (100x?) then the nested loop below would be
> faster. I think in 8.1 (and I think in 8.0 too) the planner is capable of
> coming up with both plans for the NOT IN query though.

Yep, it's really two orders of magnitude.  A is about 4 million records.  B
is around 200 million, and C is about 800 million records.  That's why I
figured a sequential pass on A with index lookups on B and C would be
fastest.  It took about 3.25 hours to complete the SELECT.

I'm working on the 8.1 upgrade now.  My biggest concern is making sure no
bugs have crept in to ecpg that would affect us (bitten once before, so
being careful).  So far everything else is looking good with 8.1.

I'm pondering dumping the keys for A to a file, and B and C to another file,
sorting with uniqueness on the B/C file, then programmatically determining
which keys have been freed.  I'm pretty sure this will be much faster, but I
had hoped to avoid an external process like this.

Wes





Re: [GENERAL] Returning SQL statement

2006-01-12 Thread codeWarrior
You might want to consider an inheritance model for this. I use it for
Sarbanes-Oxley and a virtual rollback system...

http://www.postgresql.org/docs/8.0/interactive/tutorial-inheritance.html
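
A minimal sketch of the general idea (table and column names are just
illustrative, not how any particular system is set up):

-- the audited table
CREATE TABLE inventory (
    asset_serial    varchar(32) PRIMARY KEY,
    status          integer,
    location_detail text
);

-- the audit table inherits every column of inventory and adds bookkeeping;
-- a trigger on inventory can copy OLD rows into it on UPDATE or DELETE.
-- Note that plain "SELECT ... FROM inventory" also scans the child table;
-- use "FROM ONLY inventory" to exclude the audit rows.
CREATE TABLE inventory_audit (
    audit_user name        DEFAULT current_user,
    audit_time timestamptz DEFAULT now()
) INHERITS (inventory);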


"rlee0001" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
>I want to write a row-level trigger in PL/PGSQL that inserts rows into
> an audit log whenever records are UPDATEd for a specific table. In my
> audit log I want to record:
>
> * The primary key of the record being modified (easy)
> * The current date (easy)
> * The username of the user (easy)
> * The SQL statement used to modify the record
>
> The last one is the most important and it seems to be impossible. Lets
> assume the user executes the following statement:
>
> UPDATE inventory SET
>   status=5,
>   location_detail='somewhere over the rainbow',
>   policy=1,
>   security_comments='',
>   excludes_sms=false,
>   excludes_epo=false,
>   excludes_ws=false
> WHERE asset_serial='jg432lk';
>
> ...I want a field in the audit log to actually contain the above
> statement text. The only way I can imagine to do it would be to have a
> field in "inventory" called "AUDIT_SQL" and have my PHP application
> populate that field with the statement (sans the "AUDIT_SQL='...',"
> line itself). I can then have my trigger NULL-out this field in the NEW
> row. The problem is that several users (including me) use EMS
> PostgreSQL Manager to update data as well and I want these manual data
> entries to be audited as well. If I don't update the "AUDIT_SQL"
> command by hand it just won't get done and NULL will be seen in
> "AUDIT_SQL" by any subsequent invokations of the trigger from within
> EMS. Or worse user who see this field can falsify it. To keep users
> from falisifying the field I could MD5-encode its contents and store
> the hash in a seperate "AUDIT_HASH" field and have the trigger perform
> an integrity check, canceling the operation if the hash is incorrect.
> But the savy (and malicious) user can bypass this check and still
> falsify the audit data. Is there a better way to pass such dynamic data
> into a trigger procedure? Perhaps there is a way to store data in
> "variables"? Something like this:
>
> SET mysqlvariable = 'hello world';
> UPDATE ...;
>
> ...Then the UPDATE trigger can read my parameters from the variable.
> This might be ideal.
>
> So in the meantime, since I can't figure out how to do the SQL thing, I
> have my PHP record user activity into the audit log and activity done
> from within EMS is not being recorded at all. This is not desirable.
>
> So my question is this: Is there a PostgreSQL function that will return
> the statement that triggered the trigger without the end user/PHP
> having to provide it? Perhaps a function that will return the last
> statement that was executed? Can you even calculate a MD5 hash in
> PL/PGSQL (I assume so)?
>
> I'm using PG 8.1. I have access to both PL/PGSQL and (maybe) PL/Perl.
> 





Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Mikael Carneholm
>In terms of statistics we do have statistics and exhaustive logging that 
>can provide you with all of that information. Is there something 
>specific that
>the information already provided really doesn't give you?

Can you give an example query for "list all queries executed since 12.00 AM, 
order by block_reads desc"? What I'm aiming for
is the ability to turn "measuring" on, regression test my application, turn 
"measuring" off again, and list the most offensive queries executed during the 
regression test. I know of at least one other DBMS that is capable of 
this...won't mention which one :)

>Materialized views we have, in a sense but much like updateable views 
>you have to code for it.

I think what I'm looking for is MV support out-of-the-box, i.e "CREATE 
MATERIALIZED VIEW foo_mv AS .. "

>"Built In" Failover/Clustering

>This won't happen. The community stance, which is a good one, is that no
>single replication solution fits everyone's needs and therefore we rely
>on outside sources. Slony-I, Mammoth Replicator and pgpool being the most popular.

Too bad - I think that will keep a lot of potential users from evaluating Pg as 
a serious alternative. Good or bad, decide for yourself :)

>Analytic/window functions:

>Get coding man! :) We would love to have them.

Sure - as soon as I'm finished with my 4 other hobby projects. Too many ideas, 
too little time.. *sigh* :/

/Mikael



Re: [GENERAL] Finding orphan records

2006-01-12 Thread Greg Stark

Wes <[EMAIL PROTECTED]> writes:

> This appears to be very inefficient.  B is almost two orders of magnitude
> larger than A.  C is about 3-4 times as big as B (record count).  My
> statement (with the same single 'B' table as above) produces:

If it's only a factor of 3-4 then the merge join should be faster. If it's
really two orders of magnitude (100x?) then the nested loop below would be
faster. I think in 8.1 (and I think in 8.0 too) the planner is capable of
coming up with both plans for the NOT IN query though.

-- 
greg




Re: [GENERAL] Plans for 8.2?

2006-01-12 Thread Joshua D. Drake

Mikael Carneholm wrote:


> Hi,
>
> I wonder what features other users would like to see in the next version (8.2)?
> The features I'd (very much) like to see implemented are:
>
> - "Top offender statistics"
> In other DBMS:es there are functions that allow you to turn on a sort of
> monitoring that can tell you, for a period of time:
> 1) which queries generated the highest I/O
> 2) which queries took the longest time to execute
> 3) percentage of total time spent on I/O wait, CPU time, etc
>
> Statistics like these can be very helpful when it comes to identifying
> application bottlenecks / areas for improvements.
>
> - Materialized views
> What's the status on the matview project?
>
> - "Built in" failover/clustering
> There are lots of projects supplying replication/failover/clustering functionality
> (Slony-I, pgcluster, pgpool etc), but these are "non-official" in that they do
> not provide out-of-the-box functionality (ie not bundled with a default Pg install)
>
> - Analytic/window functions
> In DBMS:es such as Oracle, there's rank(), dense_rank(), lag(), lead() etc.
> These functions may(?) be possible to implement using existing/new aggregates.
>
> Comments?



You should probably review the archives on a lot of these topics. Many 
of them have been discussed to death :).


In terms of statistics we do have statistics and exhaustive logging that
can provide you with all of that information. Is there something specific
that the information already provided really doesn't give you?

Materialized views we have, in a sense, but much like updateable views
you have to code for it.
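
As a rough illustration of what "code for it" means in practice (names
invented, and this is only the simplest full-refresh flavour):

-- a plain table plays the role of the materialized view
CREATE TABLE sales_summary_mv AS
    SELECT product_id, sum(amount) AS total
      FROM sales
     GROUP BY product_id;

-- refresh it on demand, e.g. from cron
CREATE OR REPLACE FUNCTION refresh_sales_summary_mv() RETURNS void AS $$
BEGIN
    DELETE FROM sales_summary_mv;
    INSERT INTO sales_summary_mv
        SELECT product_id, sum(amount) FROM sales GROUP BY product_id;
    RETURN;
END;
$$ LANGUAGE plpgsql;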


"Built In" Failover/Clustering

This won't happen. The community stance, which is a good one, is that no
single replication solution fits everyone's needs and therefore we rely on
outside sources. Slony-I, Mammoth Replicator and pgpool being the most popular.

Analytic/window functions:

Get coding man! :) We would love to have them.

Sincerely,

Joshua D. Drake



/Mikael

 




--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/




[GENERAL] Plans for 8.2?

2006-01-12 Thread Mikael Carneholm
Hi,

I wonder what features other users would like to see in the next version (8.2)?
The features I'd (very much) like to see implemented are:

- "Top offender statistics"
In other DBMS:es there are functions that allow you to turn on a sort of 
monitoring that can tell you, for a period of time:
1) which queries generated the highest I/O
2) which queries took the longest time to execute
3) percentage of total time spent on I/O wait, CPU time, etc

Statistics like these can be very helpful when it comes to identifying 
application bottlenecks / areas for improvements.

- Materialized views
What's the status on the matview project?

- "Built in" failover/clustering
There are lots of projects supplying replication/failover/clustering 
functionality (Slony-I, pgcluster, pgpool etc), but these are "non-official" in 
that they do not provide out-of-the-box functionality (ie not bundled with a
default Pg install)

- Analytic/window functions
In DBMS:es such as Oracle, there's rank(), dense_rank(), lag(), lead() etc. 
These functions may(?) be possible to implement using existing/new aggregates.

Comments?

/Mikael



Re: [GENERAL] sql (Stored procedure) design question

2006-01-12 Thread Tino Wildenhain
Assad Jarrahian schrieb:
> Tino, thanks for your response
> 
>>Your schema could rather look like this:
>>
>>documentid,username,groupname  (as real fields)
>>
> 
> 
> Okay, so a typical document can be addressed to any number of users/groups.  
> so
> according to an example with the to field = 'jarraa, postgres, keith',
> some rows could be like
> 
> (1, jarraa, )
> (1,  , postgres)
> (1, keith, )
> 

No, it would look like this:

(1,'jarraa','postgres','keith') to match your original schema.

But it really depends on what you want to solve here.
What data exactly do you have and how do you want to look
for data? I mean, what are the keys?

Do you have a specific "group" and want to see all "docs" for
that group? And then you have a user and want to see all "docs"
for this user? Or is it the other way round and you have
a "doc" and want to see responsible users and/or groups?
And are groups independent from users? Or are users always
in one or many groups? ...




Re: [GENERAL] select into multiple variables

2006-01-12 Thread SunWuKung
In article <[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] says...
> I have a select statement in a plpgsql function that returns two columns 
> and a single row.
> I would like to place the two values into two variables in a single 
> statement, but so far I couldn't find out what is the syntax for that.
> 
> I tried a couple of combinations of this but had no luck:
> 
> SELECT INTO
>  instrumentheaderid_arg, subjectgroupheaderid_arg,
>  instrumentnormheader.instrumentheaderid,
>  instrumentnormheader.subjectgroupheaderid
> FROM
>  instrumentnormheader
> WHERE instrumentnormheaderid=3
> 
> Thanks for the help.
> B.
> 

ugh, sorry, found it

SELECT INTO
instrumentheaderid_arg, subjectgroupheaderid_arg
instrumentnormheader.instrumentheaderid, 
instrumentnormheader.subjectgroupheaderid
FROM
  public.instrumentnormdetail
  INNER JOIN public.instrumentnormheader ON 
(public.instrumentnormdetail.instrumentnormheaderid = 
public.instrumentnormheader.instrumentnormheaderid)
WHERE instrumentnormdetailid=instrumentnormdetailid_arg;
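
As a side note, plpgsql also accepts the INTO clause after the select list,
which some find easier to read; a minimal sketch against the same tables:

SELECT h.instrumentheaderid, h.subjectgroupheaderid
  INTO instrumentheaderid_arg, subjectgroupheaderid_arg
  FROM public.instrumentnormheader h
 WHERE h.instrumentnormheaderid = 3;

Either placement does the same thing.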



[GENERAL] select into multiple variables

2006-01-12 Thread SunWuKung
I have a select statement in a plpgsql function that returns two columns 
and a single row.
I would like to place the two values into two variables in a single 
statement, but so far I couldn't find out what is the syntax for that.

I tried a couple of combinations of this but had no luck:

SELECT INTO
 instrumentheaderid_arg, subjectgroupheaderid_arg,
 instrumentnormheader.instrumentheaderid,
 instrumentnormheader.subjectgroupheaderid
FROM
 instrumentnormheader
WHERE instrumentnormheaderid=3

Thanks for the help.
B.



Re: [GENERAL] regarding triggers

2006-01-12 Thread John McCawley


surabhi.ahuja wrote:



but if i have "on delete cascade" constraint,
in that case if i have a trigger which is fired in case delet happens 
on the table y.


i have a table x, and table y has a foreign key with  "on delete 
cascade" constraint,


now i delete a row from x, will the trigger still be called?

I just did a test, and it does.  See below (note my serial_id on the log 
table is incremented from earlier testing)



create table tbl_foo ( foo_id SERIAL PRIMARY KEY, stuff varchar(32) );

create table tbl_bar ( bar_id SERIAL PRIMARY KEY, foo_id integer, 
barstuff varchar(32) );


create table tbl_log ( log_id SERIAL PRIMARY KEY, stuff varchar(32) );

ALTER TABLE tbl_bar ADD CONSTRAINT fk_tbl_bar_tbl_foo_foo_id FOREIGN KEY 
(foo_id) REFERENCES tbl_foo(foo_id) MATCH FULL ON DELETE CASCADE;



CREATE FUNCTION sp_logdelete() RETURNS trigger AS '
DECLARE
BEGIN
INSERT INTO tbl_log (stuff) VALUES (\'Trigger was called!\');
return OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER bar_delete_trigger BEFORE DELETE ON tbl_bar FOR EACH ROW 
EXECUTE PROCEDURE sp_logdelete();



INSERT INTO tbl_foo (stuff) VALUES ('this is stuff');

select * FROM tbl_foo;
 foo_id |     stuff
--------+---------------
      1 | this is stuff
(1 row)


insert into tbl_bar (foo_id, barstuff) VALUES (1, 'bar stuff');

select * FROM tbl_log;

 log_id | stuff
--------+-------
(0 rows)


delete from tbl_foo;

DELETE 1

SELECT * FROM tbl_log;

 log_id |        stuff
--------+---------------------
      5 | Trigger was called!
(1 row)

SELECT * FROM tbl_bar;
 bar_id | foo_id | barstuff
--------+--------+----------
(0 rows)





[GENERAL] Issue with fixseq.sql in 8.1 release notes

2006-01-12 Thread George Woodring

I have installed 8.1.2 on my test box and loaded data from a 7.4.8
database. 

I was running fixseq.sql copied from the release notes to update the
sequences and the output has an extra "\" character

ALTER TABLE public.modems_old ALTER COLUMN modemsid SET DEFAULT
nextval\('modems_id');

To fix it I had to change the 3rd argument in the regexp_replace()

$$val\(\1$$ --> $$val(\1$$

I thought I would let someone know just in case something has changed
since 8.1 and the release notes needed updating.

Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net



Re: [GENERAL] Granting Privleges on All Tables in One Comand?

2006-01-12 Thread Larry Rosenman
Tom Lane wrote:
> Bruce Momjian  writes:
>> Adam wrote:
>>> Is it possible to 'GRANT SELECT ON ALL TABLES TO User' in 1 command
>>> ? 
>>> 
>>> I saw the question in 2003 and wonder if it's still true.
> 
>> Still true, but on the TODO list:
> 
> Also, you can do it today by making a plpgsql function that iterates
> through all the tables and does GRANTs.  There are examples in the
> mailing list archives.  (The TODO item would probably have been done
> by now if it weren't so easy to work around it.)
> 
One example:

http://www.archonet.com/pgdocs/grant-all.html
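
The core of such a function is small; a minimal, untested sketch (the
function name is made up):

CREATE OR REPLACE FUNCTION grant_select_all(text, text) RETURNS integer AS $$
DECLARE
    r record;
    n integer := 0;
BEGIN
    -- loop over every table in the given schema and grant SELECT on it
    FOR r IN SELECT tablename FROM pg_tables WHERE schemaname = $1 LOOP
        EXECUTE 'GRANT SELECT ON ' || quote_ident($1) || '.'
                || quote_ident(r.tablename) || ' TO ' || quote_ident($2);
        n := n + 1;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- usage:  SELECT grant_select_all('public', 'someuser');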

-- 
Larry Rosenman  
Database Support Engineer

PERVASIVE SOFTWARE. INC.
12365B RIATA TRACE PKWY
3015
AUSTIN TX  78727-6531 

Tel: 512.231.6173
Fax: 512.231.6597
Email: [EMAIL PROTECTED]
Web: www.pervasive.com 



Re: [GENERAL] Returning SQL statement

2006-01-12 Thread A. Kretschmer
On 11.01.2006 at 14:57:42 -0800, rlee0001 wrote the following:
> I want to write a row-level trigger in PL/PGSQL that inserts rows into
> an audit log whenever records are UPDATEd for a specific table. In my
> audit log I want to record:

Why do you want to reinvent the wheel?
http://pgfoundry.org/projects/tablelog/


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47212,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 



Re: [GENERAL] Returning SQL statement

2006-01-12 Thread Jim Buttafuoco

take a look at dblink in the contrib directory, it has a function called 
dblink_current_query() that returns the current
query.  I use it all the time.
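
A minimal sketch of using it from a trigger (assumes contrib/dblink is
installed in the database; the audit table and its columns are only examples):

CREATE OR REPLACE FUNCTION audit_inventory() RETURNS trigger AS $$
BEGIN
    -- inventory_audit_log(asset_serial, changed_at, changed_by, query_text) is hypothetical
    INSERT INTO inventory_audit_log (asset_serial, changed_at, changed_by, query_text)
    VALUES (NEW.asset_serial, now(), current_user, dblink_current_query());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER inventory_audit_trg
    AFTER UPDATE ON inventory
    FOR EACH ROW EXECUTE PROCEDURE audit_inventory();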

Jim


-- Original Message ---
From: "rlee0001" <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: 11 Jan 2006 14:57:42 -0800
Subject: [GENERAL] Returning SQL statement

> I want to write a row-level trigger in PL/PGSQL that inserts rows into
> an audit log whenever records are UPDATEd for a specific table. In my
> audit log I want to record:
> 
> * The primary key of the record being modified (easy)
> * The current date (easy)
> * The username of the user (easy)
> * The SQL statement used to modify the record
> 
> The last one is the most important and it seems to be impossible. Lets
> assume the user executes the following statement:
> 
> UPDATE inventory SET
>status=5,
>location_detail='somewhere over the rainbow',
>policy=1,
>security_comments='',
>excludes_sms=false,
>excludes_epo=false,
>excludes_ws=false
> WHERE asset_serial='jg432lk';
> 
> ...I want a field in the audit log to actually contain the above
> statement text. The only way I can imagine to do it would be to have a
> field in "inventory" called "AUDIT_SQL" and have my PHP application
> populate that field with the statement (sans the "AUDIT_SQL='...',"
> line itself). I can then have my trigger NULL-out this field in the NEW
> row. The problem is that several users (including me) use EMS
> PostgreSQL Manager to update data as well and I want these manual data
> entries to be audited as well. If I don't update the "AUDIT_SQL"
> command by hand it just won't get done and NULL will be seen in
> "AUDIT_SQL" by any subsequent invokations of the trigger from within
> EMS. Or worse user who see this field can falsify it. To keep users
> from falisifying the field I could MD5-encode its contents and store
> the hash in a seperate "AUDIT_HASH" field and have the trigger perform
> an integrity check, canceling the operation if the hash is incorrect.
> But the savy (and malicious) user can bypass this check and still
> falsify the audit data. Is there a better way to pass such dynamic data
> into a trigger procedure? Perhaps there is a way to store data in
> "variables"? Something like this:
> 
> SET mysqlvariable = 'hello world';
> UPDATE ...;
> 
> ...Then the UPDATE trigger can read my parameters from the variable.
> This might be ideal.
> 
> So in the meantime, since I can't figure out how to do the SQL thing, I
> have my PHP record user activity into the audit log and activity done
> from within EMS is not being recorded at all. This is not desirable.
> 
> So my question is this: Is there a PostgreSQL function that will return
> the statement that triggered the trigger without the end user/PHP
> having to provide it? Perhaps a function that will return the last
> statement that was executed? Can you even calculate a MD5 hash in
> PL/PGSQL (I assume so)?
> 
> I'm using PG 8.1. I have access to both PL/PGSQL and (maybe) PL/Perl.
> 
--- End of Original Message ---




Re: [GENERAL] Suse Linux 10.0

2006-01-12 Thread Jerome Lyles
On Sunday 08 January 2006 06:36, John Meyer wrote:
> Jerome Lyles wrote:
> > On Thursday 05 January 2006 10:57, Peter Eisentraut wrote:
> >> Am Donnerstag, 5. Januar 2006 21:15 schrieb Joseph M. Day:
> >>> Has anyone been able to get the latest version of Postgres working on
> >>> Suse 10.0 ? I just switched form Fedora and realized that this version
> >>> is not specifically supported.
> >>
> >> Get it here: ftp://ftp.suse.com/pub/people/max/postgresql
> >>
> >
> > I can't get this to add as an installation source:
> > Server Name: ftp.suse.com
> > Directory on Server:pub/people/max/postgresql/postgresql-8.1.1/10.0-i386
> >
> > I've tried with and without forward slashes at the beginning and end of
> > Server Name and Directory on Server.
> >
> > I tried adding as FTP and HTTP.
> >
> > I get this error: ERROR(InstSrc:E_noinstsrc_on_media)
> >
> > Any idea what the problem is?
> >
> > Thanks,
> > Jerome
>
> Have you tried to just ftp the rpms down and install them that way,
> either through rpm or YaST?
>
I will do this as a last resort.
Jerome




[GENERAL] Returning SQL statement

2006-01-12 Thread rlee0001
I want to write a row-level trigger in PL/PGSQL that inserts rows into
an audit log whenever records are UPDATEd for a specific table. In my
audit log I want to record:

* The primary key of the record being modified (easy)
* The current date (easy)
* The username of the user (easy)
* The SQL statement used to modify the record

The last one is the most important and it seems to be impossible. Let's
assume the user executes the following statement:

UPDATE inventory SET
   status=5,
   location_detail='somewhere over the rainbow',
   policy=1,
   security_comments='',
   excludes_sms=false,
   excludes_epo=false,
   excludes_ws=false
WHERE asset_serial='jg432lk';

...I want a field in the audit log to actually contain the above
statement text. The only way I can imagine to do it would be to have a
field in "inventory" called "AUDIT_SQL" and have my PHP application
populate that field with the statement (sans the "AUDIT_SQL='...',"
line itself). I can then have my trigger NULL-out this field in the NEW
row. The problem is that several users (including me) use EMS
PostgreSQL Manager to update data as well and I want these manual data
entries to be audited as well. If I don't update the "AUDIT_SQL"
command by hand it just won't get done and NULL will be seen in
"AUDIT_SQL" by any subsequent invokations of the trigger from within
EMS. Or worse user who see this field can falsify it. To keep users
from falisifying the field I could MD5-encode its contents and store
the hash in a seperate "AUDIT_HASH" field and have the trigger perform
an integrity check, canceling the operation if the hash is incorrect.
But the savy (and malicious) user can bypass this check and still
falsify the audit data. Is there a better way to pass such dynamic data
into a trigger procedure? Perhaps there is a way to store data in
"variables"? Something like this:

SET mysqlvariable = 'hello world';
UPDATE ...;

...Then the UPDATE trigger can read my parameters from the variable.
This might be ideal.
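
(The closest PostgreSQL equivalent I know of is a custom configuration
variable: with something like custom_variable_classes = 'audit' in
postgresql.conf, the application can do roughly

SELECT set_config('audit.client_query', 'UPDATE inventory SET ...', false);
UPDATE inventory SET status = 5 WHERE asset_serial = 'jg432lk';
-- the trigger can then read it back with current_setting('audit.client_query')

but that still relies on every client setting it, so it has the same weakness
as the AUDIT_SQL column.)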

So in the meantime, since I can't figure out how to do the SQL thing, I
have my PHP record user activity into the audit log and activity done
from within EMS is not being recorded at all. This is not desirable.

So my question is this: Is there a PostgreSQL function that will return
the statement that triggered the trigger without the end user/PHP
having to provide it? Perhaps a function that will return the last
statement that was executed? Can you even calculate a MD5 hash in
PL/PGSQL (I assume so)?

I'm using PG 8.1. I have access to both PL/PGSQL and (maybe) PL/Perl.




[GENERAL] Concept about stored procedures

2006-01-12 Thread Marcos
Hi,

I'm developing an application using this wonderful database, and I would
like to use the concepts correctly.

I have decided that all my SQL statements will live in the database, written
in a procedural language (plpgsql or plpython): I will create functions for
all interactions with the database, and the application will call them with
SELECT.
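
A minimal sketch of what I mean (the table, column and function names are
just made-up examples):

CREATE OR REPLACE FUNCTION get_customer_name(integer) RETURNS text AS $$
    -- look up one customer by primary key
    SELECT name FROM customer WHERE customer_id = $1;
$$ LANGUAGE sql STABLE;

-- the application would then simply run:
SELECT get_customer_name(42);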

Is this the correct way of working? Or is the correct way to create
functions only for big and complex statements?

The application will work with a large database and will have many
simultaneous accesses.

Many thanks




Re: [GENERAL] regarding triggers

2006-01-12 Thread surabhi.ahuja
but if i have "on delete cascade" constraint,
in that case if i have a trigger which is fired in case delet happens on the table y.

i have a table x, and table y has a foreign key with  "on delete cascade" constraint,

now i delete a row from x, will the trigger still be called?

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Tue 1/10/2006 9:58 PM
To: John McCawley
Cc: surabhi.ahuja; pgsql-general@postgresql.org
Subject: Re: [GENERAL] regarding triggers



John McCawley <[EMAIL PROTECTED]> writes:
> Foreign keys do not cascade deletions.

By default, no, but there is the CASCADE option ...

            regards, tom lane









Re: [GENERAL] sql (Stored procedure) design question

2006-01-12 Thread Alban Hertroys

Assad Jarrahian wrote:

> what drove me to store it that way was more of a performance issue.
>
> So if I store a documentID and then have a separate table names, to_field


Why a separate table? From what you showed us you don't seem to need 
that. Just use 3 columns for the separate entries instead of 1 that 
combines them. There's really no big deal to it. And you gain the 
benefit of being able to put foreign key constraints on those columns.


Now you need to split a string, risking splitting it the wrong way (if 
there happen to be commas in names) and using some of that 
computational power you seem to be so concerned about. It seems unlikely 
you would gain any measurable performance this way, if at all.


I can't say I fully understand what you're trying to do, your example 
and explanations were somewhat confusing.


Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

//Showing your Vision to the World//



Re: [GENERAL] sql (Stored procedure) design question

2006-01-12 Thread Assad Jarrahian
Tino, thanks for your response
>
> Your schema could rather look like this:
>
> documentid,username,groupname  (as real fields)
>

Okay, so a typical document can be addressed to any number of users/groups.  so
according to an example with the to field = 'jarraa, postgres, keith',
some rows could be like

(1, jarraa, )
(1,  , postgres)
(1, keith, )

so I am not sure if I can make foreign key constraints here (since
the username or group can be blank). Additionally the to field can be
to any user, but can only be to a group that the user has permissions
to write to (aka subscribed).

> >
> > based on that it seems that to read (Which occurs a lot more than
> > writing) a document I will have to go an find all the rows with
> > documentID in the to field and then cat them somehow and then return
>
> cat them? Why? (There is text concatenation btw)

cat them because originally I wanted really fast retrieval (with the
method you describe, I will have to go to two tables), so I would
store it in the order it was to be shipped out (much like an email to
field when you receive it)

Where can I find this text concatenation stuff?


>

> There is ;) But I doubt Tom likes to show you the dark side [tm] ;)

hehe

maybe I can do a hybrid, storing the string completely in the doc
table and storing all the to fields as a document_to table.
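
Something like this, perhaps (just a sketch, names invented on the spot):

-- the document keeps the To string exactly as it should be displayed
CREATE TABLE document (
    documentid serial PRIMARY KEY,
    from_user  text NOT NULL,
    to_display text NOT NULL,        -- e.g. 'jarraa, postgres, keith'
    subject    text,
    message    text
);

-- one row per individual recipient, for lookups and constraints
CREATE TABLE document_to (
    documentid integer NOT NULL REFERENCES document(documentid),
    username   text,                 -- either of these may be NULL,
    groupname  text                  -- but not both at once
);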

Awaiting your thoughts.
thanks!

-Assad

>
>
> ...
> >
> >
> > On 1/11/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> >
> >>Assad Jarrahian <[EMAIL PROTECTED]> writes:
> >>
> >>>  A column of type text contains username's and groupname's followed
> >>>by comma (eg. 'jarraa, mooreg3, keith') [it is stored this way
> >>>because it will be displayed in this format].
>
> ++Tino
>
