Re: [GENERAL] sort array optimisation in pl/perl

2005-03-25 Thread GIROIRE Nicolas (COFRAMI)
Title: RE: [GENERAL] sort array optimisation in pl/perl





Hi,


thanks for this help.


The method works great, but the time savings are only significant when I have a lot of elements to move.



    COFRAMI 
    Nicolas Giroire
    on behalf of AIRBUS France 
    for In Flight & Ground Information Services - Development 
    Phone : +33 (0)5 67 19 98 74 
      Mailto:[EMAIL PROTECTED] 




-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Ragnar Hafstað
Sent: Thursday, 24 March 2005 21:53
To: GIROIRE Nicolas (COFRAMI)
Cc: 'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] sort array optimisation in pl/perl



On Thu, 2005-03-24 at 15:49 +0100, GIROIRE Nicolas (COFRAMI) wrote:


> I create an array which is the result of a query on a PostgreSQL database
> and then I want to sort the rows in a particular way (impossible to do with
> a query on the database).


can you give us more details on this particular sort order?


> My solution consists of putting the row at index m+1 into a temporary
> variable, then moving all elements from indices n..m up to indices n+1..m+1,
> and finally putting the temporary variable back at index n.
> I want to know if somebody knows a better solution.
> 
> I have thought of 2 solutions but haven't managed to apply either: 
>   - the first is to use a list in which I could move references, as in a
> linked list 
>   - the second would be to move tab[n..m] to tab[n+1..m+1] in one
> instruction, as in the Ada language 


it all depends on the expected sizes of your arrays, but perl has
some nice array operations, such as slices and splice()


these come to mind:


$x=$arr[$m+1]; @arr[$n+1..$m+1]=@arr[$n..$m]; $arr[$n]=$x;


or:


@arr[$n..$m+1]=($arr[$m+1],@arr[$n..$m]);


gnari




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?


   http://www.postgresql.org/docs/faq







[GENERAL] pl/perl problem of memory

2005-03-25 Thread GIROIRE Nicolas (COFRAMI)
Title: pl/perl problem of memory





Hi,


I load data into a variable of type text, and I end up with an out-of-memory message.


I want to know whether there is a function that empties a variable and frees the memory it uses, without destroying it (because I use the variable again).

In fact, right now I set the variable to '' ( $myvar = '' ), but it seems that Perl doesn't fully release the memory.


Best regards,


Nicolas Giroire







[GENERAL] Stuck with references

2005-03-25 Thread Frodo Larik
Hello all,
this is my first post to the list, so please be gentle :-)
I created the following tables (full file can be found here: 
http://www.larik.nl/sql/payway.sql ):

CREATE TABLE location (
  location_id integer DEFAULT nextval('"location_id_seq"'::text) NOT NULL,
  name text NOT NULL,
  CONSTRAINT location_pkey PRIMARY KEY (location_id)  
) WITH OIDS;

CREATE TABLE payway_profile (
  payway_profile_id integer DEFAULT 
nextval('"payway_profile_id_seq"'::text) NOT NULL,
  location_a integer REFERENCES location (location_id) NOT NULL,
  location_b integer REFERENCES location (location_id) NOT NULL,
  distance integer NOT NULL,
  CONSTRAINT payway_profile_pkey PRIMARY KEY (payway_profile_id)  
) WITH OIDS;

Now I want to do the following:
A select from payway_profile so that I can also see the names of location_a 
and location_b, instead of the ids as defined in the location table.
If there were only one reference I could do something like this:

SELECT l.name, pp.distance FROM payway_profile AS pp
INNER JOIN location AS l ON ( pp.location_a = l.location_id );
But now there are two references, location_a and location_b, so I don't 
know how I can solve this in one query. Is it possible, or is it bad design?

Sincerely,
Frodo Larik

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] pl/perl problem of memory

2005-03-25 Thread Tom Lane
"GIROIRE Nicolas (COFRAMI)" <[EMAIL PROTECTED]> writes:
> In fact, now i put the variable to '' ( $myvar = '' ) but it seems that Perl
> don't empty totally memory.

See the nearby thread "plperl doesn't release memory".  There's some
evidence that this may be due to a Perl configuration issue.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Stuck with references

2005-03-25 Thread Jim Buttafuoco

try the following (untested) query:

SELECT la.name,lb.name pp.distance FROM payway_profile AS pp
JOIN location AS la ON ( pp.location_a = l.location_id )
join location AS lb ON ( pp.location_b = l.location_id );

-- Original Message ---
From: Frodo Larik <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: Fri, 25 Mar 2005 16:04:17 +0100
Subject: [GENERAL] Stuck with references

> Hello all,
> 
> this is my first post to the list, so please be gentle :-)
> 
> I created the following tables (full file can be found here: 
> http://www.larik.nl/sql/payway.sql ):
> 
> CREATE TABLE location (
>location_id integer DEFAULT nextval('"location_id_seq"'::text) NOT NULL,
>name text NOT NULL,
>CONSTRAINT location_pkey PRIMARY KEY (location_id)  
> ) WITH OIDS;
> 
> CREATE TABLE payway_profile (
>payway_profile_id integer DEFAULT 
> nextval('"payway_profile_id_seq"'::text) NOT NULL,
>location_a integer REFERENCES location (location_id) NOT NULL,
>location_b integer REFERENCES location (location_id) NOT NULL,
>distance integer NOT NULL,
>CONSTRAINT payway_profile_pkey PRIMARY KEY (payway_profile_id)  
> ) WITH OIDS;
> 
> Now I want to do the following:
> 
> A select from payway_profile so that I can also see the names of location_a 
> and location_b, instead of the ids as defined in the location table.
> If there were only one reference I could do something like this:
> 
> SELECT l.name, pp.distance FROM payway_profile AS pp
> INNER JOIN location AS l ON ( pp.location_a = l.location_id );
> 
> But now there are two references, location_a and location_b, so I don't 
> know how I can solve this in one query. Is it possible, or is it bad design?
> 
> Sincerely,
> 
> Frodo Larik
> 
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match
--- End of Original Message ---


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Stuck with references

2005-03-25 Thread Michael Fuhr
On Fri, Mar 25, 2005 at 10:37:31AM -0500, Jim Buttafuoco wrote:
> 
> try the following (untested) query:
> 
> SELECT la.name,lb.name pp.distance FROM payway_profile AS pp
> JOIN location AS la ON ( pp.location_a = l.location_id )
> join location AS lb ON ( pp.location_b = l.location_id );

This query produces a syntax error due to a missing comma after
lb.name, and "relation does not exist" errors due to the use of "l"
instead of "la" and "lb" in the join conditions.  Try this instead:

SELECT la.name, lb.name, pp.distance
FROM payway_profile AS pp
JOIN location AS la ON (pp.location_a = la.location_id)
JOIN location AS lb ON (pp.location_b = lb.location_id);

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Persistent data per connection

2005-03-25 Thread Jeff Amiel
We've been struggling for several days now to come up with a mechanism 
that allows us to store data that remains persistent for the life of 
the connection.

Essentially we have a web based application that utilizes a connection 
pool (using one single 'super' postgresql database user).  At the time a 
user logs in, we capture a variety of user demographics (user 
ID, the IP address they are coming from, etc.).  When any action is performed 
by the user (a service makes a database call), we have audit triggers 
that fire on update/delete/inserts.  We'd like the triggers themselves 
to have access to that demographic data so the audit records are 
correctly stamped (at least with the correct user ID).

We attempted some techniques whereby we create a table and at the time 
that each connection is requested from the pool, we extract the PID for 
that connection and store it in a table (associating the logged in user 
with it). Then our triggers could access this table (and the PID 
associated with the current connection) to marry them up and audit 
appropriately.  As you can imagine, this was fraught with issues of 
stale entries (if a PID was reused and a stale entry with the same PID 
was never removed from the table).  The last thing we wanted was audit 
activity incorrectly denoting that user A did something that they didn't do.

We also thought about hijacking one of the session runtime variables 
(via the SET command), but aside from not finding a suitable one that 
wouldn't cause issues, it sounded dangerous.

Our latest scheme involves giving each web system user a postgresql user 
account and when we grab a connection from the connection pool, we SET 
SESSION AUTHORIZATION for that user.  We can then access the user info 
from the trigger.

But is there a better/different way to persist data at the connection level?
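
Concretely, the SET SESSION AUTHORIZATION scheme we're experimenting with 
looks roughly like this (the role, table, and column names below are just 
made-up placeholders, not our real ones):

-- run by the pool when handing this connection to web user 1234
SET SESSION AUTHORIZATION web_user_1234;

-- the audit trigger function can then stamp rows with the effective user
-- (attached with CREATE TRIGGER ... AFTER INSERT OR UPDATE OR DELETE
--  ... FOR EACH ROW EXECUTE PROCEDURE audit_stamp())
CREATE OR REPLACE FUNCTION audit_stamp() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, action, performed_by, performed_at)
    VALUES (TG_RELNAME, TG_OP, current_user, now());
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- and when the connection goes back into the pool
RESET SESSION AUTHORIZATION;
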
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Referential integrity using constant in foreign key

2005-03-25 Thread Andrus Moor
I need to create referential integrity constraints:

CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );

-- code1 references to category 1,
-- code2 references to category 2 from classifier table.
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', category1) REFERENCES classifier,
FOREIGN KEY ('2', category2) REFERENCES classifier
);

Unfortunately, second CREATE TABLE causes error

ERROR:  syntax error at or near "'1'" at character 171

Any idea how to implement referential integrity for info table ?
It seems that this is not possible in Postgres.

Andrus. 



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Delay INSERT

2005-03-25 Thread Dawid Kuroczko
On Wed, 23 Mar 2005 12:33:00 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> > PostgreSQL doesn't have such issues with blocking, so only difference
> > between INSERT and INSERT DELAYED from PostgreSQL's standpoint
> > would be waiting and not for the result...

> With the right client-side code you can transmit multiple queries before
> receiving the result from the first one.  I don't think libpq in its
> current incarnation really supports this, but in principle it's doable.

...though I think it should be called asynchronous rather than
'delayed'.  I.e. issue a statement, then select(2) in spare
time to get the result, doing other work in the meantime.

> The interesting questions have to do with error handling: if the
> "delayed" insert fails, what happens and what is the impact on
> subsequent queries?  I have no idea how MySQL defines that.

Well, looking at the on-line MySQL docs -- it is not documented.
I did however try it (on a 3.x MySQL installation).
I created a table with a single int PK column, then inserted
a few values with INSERT DELAYED, some of which broke the PK constraint, and
these were silently discarded.  There was no point checking how it would
work with transactions, since DELAYED is MyISAM-only and MyISAM tables
don't do transactions.

And as for PostgreSQL and asynchronous statements, I think it
should allow pipelining -- similar to NNTP or SMTP servers -- where the client
would send N commands to the server and then read the results as they come in.
As a usage example, imagine a web page which would:
 $q1 = execute_async("SELECT foo FROM a");
 $q2 = execute_async("SELECT * FROM polls LIMIT 5"); # whatever
 $q3 = execute_async("SELECT baz FROM bazzz");
  print some html;  print $q1->fetchall, some html $q2->fetchall,
  $q3->fetchall...

There is a question of synchronization and handling errors of course,
but I think it could be done and for some usage scenarios it could be
quite beneficial.

Disclaimer: I never wrote a program using libpq, and I only looked
through libpq docs some time ago.  If I am wrong, please forgive me.

   Regards,
  Dawid

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] [ANNOUNCE] Manitou-Mail

2005-03-25 Thread Daniel Verite
   Hello,

I'd like to announce a PostgreSQL-powered mail app I've been working on
for some time and that I think is now candidate for public use.

It comes in three parts: 
1) a PostgreSQL database for the storage of decoded mail contents.
2) a pure-SQL desktop app for the mail user interface.
3) a separate perl daemon program for the heavy mail processing 
and streaming in and out of the database.

Those programs can run on the same machine or separate nodes.
Although more complex than traditional MUAs that do it all in one,
this approach may have some appeal for database-savvy users who
are willing to try something new for email processing:

- it's possible to access the mail contents with any sql
  client, psql or whatever. The db schema is meant to expose
  the contents as structured data as much as possible.
  Everything is stored in the database, including things like filter
  definitions and user preferences.

- the UI has built-in and user-definable sql queries to retrieve
  messages with no limit on the complexity of the criteria. Users can also
  add their own tables to enrich mail contents or hook it up to
  other databases. Also the UI can be used by concurrent users without
  mailbox locking issues.

- the perl program has user-made plugins to customize mail processing,
  accepting a DBI connection and a MIME object as arguments. Sample
  plugins include a spamassassin client and an attachments indexer.

- it's possible not to use the UI at all but just fork incoming mail into
  the database in order to make a searchable archive, or run statistics.
  Obviously, it makes much sense with large mailboxes or heavy mail activity.

- In general, SQL as a retrieval protocol is much easier and more
  powerful than IMAP, the major drawback being that you can't use
  a standard mail client. There are other projects like dbmail or decimail
  that do sql<->imap<->MUA; manitou-mail, by contrast, is really about
  having the MUA talk SQL.

The server part is Unix-oriented; it's meant to be hooked to a
sendmail-like MTA, but the UI runs under Windows too (it uses the
Qt graphical toolkit), with pre-compiled binaries available.

The project has a website with some documentation and download links
at http://www.manitou-mail.org and a sourceforge entry:
http://sf.net/projects/manitou-mail
Bug reports, ideas, help requests and the like are all welcome.

-- 
 Daniel
 PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Referential integrity using constant in foreign key

2005-03-25 Thread Thomas F . O'Connell
It's somewhat unclear what you're attempting to do, here, but I'll give 
a shot at interpreting. Referential integrity lets you guarantee that 
values in a column or columns exist in a column or columns in another 
table.

With classifier as you've defined it, if you want referential integrity 
in the info table, you could do this:

CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY code1 REFERENCES classifier (category),
FOREIGN KEY code2 REFERENCES classifier (category)
);
But I'm not sure what you mean by "references to category 1". There is 
only a single category column in classifier, and referential integrity 
is not for ensuring that a column in one table contains only values of 
a single row.

Regardless, your syntax doesn't seem to reflect reality. Read the 
CREATE TABLE reference thoroughly.

http://www.postgresql.org/docs/8.0/static/sql-createtable.html
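
If the intent is that code1 must always match a classifier row whose 
category is '1', and code2 one whose category is '2', one way to get real 
referential integrity (an untested sketch; the extra category columns are 
invented here) is to give info constant, CHECK-pinned category columns and 
point composite foreign keys at classifier's primary key:

CREATE TABLE info (
    category1 CHAR(1) NOT NULL DEFAULT '1' CHECK (category1 = '1'),
    code1 CHAR(10),
    category2 CHAR(1) NOT NULL DEFAULT '2' CHECK (category2 = '2'),
    code2 CHAR(10),
    FOREIGN KEY (category1, code1) REFERENCES classifier (category, code),
    FOREIGN KEY (category2, code2) REFERENCES classifier (category, code)
);
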
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source — Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Mar 25, 2005, at 10:22 AM, Andrus Moor wrote:
I need to create referential integrity constraints:
CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );
-- code1 references to category 1,
-- code2 references to category 2 from classifier table.
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', category1) REFERENCES classifier,
FOREIGN KEY ('2', category2) REFERENCES classifier
);
Unfortunately, second CREATE TABLE causes error
ERROR:  syntax error at or near "'1'" at character 171
Any idea how to implement referential integrity for info table ?
It seems that this is not possible in Postgres.
Andrus.

---(end of 
broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to 
[EMAIL PROTECTED])

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> We've been struggling for several days now to come up with a mechanism 
> that allows us to establish a mechanism to store data that remains 
> persistent for the life of the connection.

Why don't you just store it in a temporary table?  Goes away at
connection drop by definition ... and each connection can use the same
name for the temp table, so coding is easy.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Jeff Amiel
because the connection is never really dropped...
using a connection pool...so it's just reclaimed by the pool on a 
connection.close() or after a timeout period

Tom Lane wrote:
Jeff Amiel <[EMAIL PROTECTED]> writes:
 

We've been struggling for several days now to come up with a mechanism 
that allows us to establish a mechanism to store data that remains 
persistent for the life of the connection.
   

Why don't you just store it in a temporary table?  Goes away at
connection drop by definition ... and each connection can use the same
name for the temp table, so coding is easy.
			regards, tom lane
 


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Joshua D. Drake

> Our latest scheme involves giving each web system user a postgresql user 
> account and when we grab a connection from the connection pool, we SET 
> SESSION AUTHORIZATION for that user.  We can then access the user info 
> from the trigger.
> 
> But is there a better/different way to persist data at the connection level?

You could create a series of events that happen on the initial
connection object. For example:

New Connection
    Check if new connection
        select foo from table
    if foo
        continue

If foo returns you already have all your predefined information such as
cursors etc...

If foo does not return:

New Connection
    Check if new connection
        select foo from table
    if not foo
        create temp table foo
        insert into foo
        declare cursor
        set
        etc...
    continue


The table would be a temp table which will automatically drop if the
connection drops so you know if it is a new connection.

You could use pgPool to manage the persistent connections themselves.

This will put more overhead on the startup of new connections, but
as the site becomes busier you will have fewer and fewer new connections
and more re-used connections.

Sincerely,

Joshua D. Drake





> 
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
-- 
Command Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564
Custom programming, 24x7 support, managed services, and hosting
Open Source Authors: plPHP, pgManage, Co-Authors: plPerlNG
Reliable replication, Mammoth Replicator - http://www.commandprompt.com/


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Steve Atkins
On Fri, Mar 25, 2005 at 10:56:50AM -0600, Jeff Amiel wrote:
> because the connection is never really dropped...
> using a connection poolso it's just reclaimed by the pool on a 
> connection.close() or after a  timeout  period

Then you don't really want per-connection state, you want per-client-session
state. Presumably you're generating a unique identifier for each client
session somewhere in the client app? You could use that unique identifier
to store whatever information you need for that session in a table.

This sounds like a pretty normal "keep state based on a cookie" problem,
as solved by a bunch of web apps. If you're looking for higher performance
you might want to look at memcached and the postgresql interface to it
(pgmemcache?).

> Tom Lane wrote:
> 
> >Jeff Amiel <[EMAIL PROTECTED]> writes:
> > 
> >
> >>We've been struggling for several days now to come up with a mechanism 
> >>that allows us to establish a mechanism to store data that remains 
> >>persistent for the life of the connection.
> >>   
> >>
> >
> >Why don't you just store it in a temporary table?  Goes away at
> >connection drop by definition ... and each connection can use the same
> >name for the temp table, so coding is easy.
> >
> > regards, tom lane
> > 
> >
> 
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>  subscribe-nomail command to [EMAIL PROTECTED] so that your
>  message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Jeff Amiel
Steve Atkins wrote:
> On Fri, Mar 25, 2005 at 10:56:50AM -0600, Jeff Amiel wrote:
> > because the connection is never really dropped...
> > using a connection pool...so it's just reclaimed by the pool on a
> > connection.close() or after a timeout period
> 
> Then you don't really want per-connection state, you want per-client-session
> state. Presumably you're generating a unique identifier for each client
> session somewhere in the client app? You could use that unique identifier
> to store whatever information you need for that session in a table.

Yes...but inside a trigger function, how do I know which 'row' to look 
at in the table that matches up with the session/connection I am 
currently "in"?  That's the trick.  I guess temporary tables are the 
answer here (because you can have a unique one per connection)...the 
only issue we had was that inside the trigger, if the table didn't 
exist, the exception killed the entire function... have to hit the 
pg_class table I guess to see if the temp table exists first (temporary 
tables do show up in pg_class, don't they?)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Steve Atkins
On Fri, Mar 25, 2005 at 11:46:03AM -0600, Jeff Amiel wrote:
> Steve Atkins wrote:
> 
> >On Fri, Mar 25, 2005 at 10:56:50AM -0600, Jeff Amiel wrote:
>
> Yesbut inside a trigger function, how do I know which 'row' to look 
> at in the table that matches up with the session/connection I am 
> currently "in"?  That's the trick.  I guess temporary tables are the 
> answer here (because you can have a unique one per connection)the 
> only issue we had was that inside the trigger, if the table didn't 
> exist, the exception killed the entire function.have to hit the 
> pg_classes table I guess to see if temp table exists first (temporary 
> tables do show up in pg_classes, don't they?)

Using a temporary table to store that unique token is how I'd do it.

Rather than grovel through the system tables during the trigger I'd do
all the work at the client session setup. As the first thing, write
the token into the temporary table. If that fails, create the
temporary table and retry. Then create the transaction you want to use
for all the other operations in the session.

I'm using pretty much this approach in one of my applications and it
works pretty well. I have control over the connection pool, though, and
can tell it to do things like create new temporary tables every time
it creates a new connection to the DB avoiding some of the hackery
involved with conditionally creating it at the start of each session.

If you absolutely don't need the data to persist between client
sessions then you can just smash it all into a temporary table at
the beginning of each client session and read it directly from there
rather than indirecting through a second table.

Cheers,
  Steve


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Jeff Amiel
Steve Atkins wrote:
Using a temporary table to store that unique token is how I'd do it.
Rather than grovel through the system tables during the trigger I'd do
all the work at the client session setup. As the first thing, write
the token into the temporary table. If that fails, create the
temporary table and retry. Then create the transaction you want to use
for all the other operations in the session.
 

Our problem is that we have batch/standalone applications that will not 
be using the connection pool (as well as developers who may directly 
modify production tables in rare instances).  In those cases (heck, all 
cases), the audit triggers themselves (when an update/insert/delete 
takes place) have to look to see if a temp table exists to pull the 
user data from.  It won't exist in these cases, but the triggers don't know 
this.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Steve Atkins
On Fri, Mar 25, 2005 at 12:21:28PM -0600, Jeff Amiel wrote:
> Steve Atkins wrote:
> 
> >Rather than grovel through the system tables during the trigger I'd do
> >all the work at the client session setup. As the first thing, write
> >the token into the temporary table. If that fails, create the
> >temporary table and retry. Then create the transaction you want to use
> >for all the other operations in the session.
> 
> Our problem is that we have batch/standalone applications that will not 
> be using the connection pool (as well as developers who may directly 
> modify production tables in rare instances).  In those cases (heck, all 
> cases), the audit triggers themselves (when an update/insert/delete 
> takes place) have to look to see if a temp table exists to pull the 
> user data from.  It won't exist in these cases, but the triggers don't know 
> this.

A temporary table will mask the underlying table with the same name,
so you can have a generic user in a permanent table, then create a new
temporary table with the same name at the beginning of each session.

So in your schema you'd have a normal table called, say, "sessiondata"
with a single column "id" and a single row containing "not-in-a-session".

At the beginning of every client session you'd select id from sessiondata
and look at the value. If the value is "not-in-a-session" then you create
a temporary table called "sessiondata" with the same structure and
INSERT your sessionid.

Otherwise, UPDATE the id in sessiondata to your session id.

Then you'll have a valid session id in that table at all times. If you're
in a client session, it'll be the sessionid for that client session. If
not, it'll be "not-in-a-session".

(Obviously you can have more columns in the table and keep other session
data in it as well).
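
In SQL terms, the whole setup might look something like this (the column 
layout and the session id are only illustrative):

-- permanent fallback row, visible whenever no temp table masks the name
CREATE TABLE sessiondata (id text NOT NULL);
INSERT INTO sessiondata VALUES ('not-in-a-session');

-- at the start of each client session:
SELECT id FROM sessiondata;
-- if that returned 'not-in-a-session', this pooled connection has no
-- temporary copy yet, so create one and seed it:
CREATE TEMP TABLE sessiondata (id text NOT NULL);
INSERT INTO sessiondata VALUES ('session-8f3a');
-- otherwise just take over the existing temporary copy:
UPDATE sessiondata SET id = 'session-8f3a';

-- audit triggers can then always read a valid value:
SELECT id FROM sessiondata;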

Cheers,
  Steve



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] plperl doesn't release memory

2005-03-25 Thread Dan Sugalski
At 6:58 PM -0500 3/24/05, Greg Stark wrote:
> Dan Sugalski <[EMAIL PROTECTED]> writes:
> > Anyway, if perl's using its own memory allocator you'll want to rebuild it
> > to not do that.
> 
> You would need to do that if you wanted to use a debugging malloc. But
> there's no particular reason to think that you should need to do this just
> to work properly.
> 
> Two mallocs can work fine alongside each other. They each call mmap or sbrk
> to allocate new pages and they each manage the pages they've received. They
> won't have any idea why the allocator seems to be skipping pages, but they
> should be careful not to touch those pages.
Perl will only use a single allocator, so there's not a huge issue 
there. It's either the external allocator or the internal one, which 
is for the best since you certainly don't want to be handing back 
memory to the wrong allocator. That way lies madness and unpleasant 
core files.

The bigger issue is that perl's memory allocation system, the one you 
get if you build perl with usemymalloc set to yes, never releases 
memory back to the system -- once the internal allocator gets a chunk 
of memory from the system it's held for the duration of the process. 
This is the right answer in many circumstances, and the allocator's 
pretty nicely tuned to perl's normal allocation patterns, it's just 
not really the right thing in a persistent server situation where 
memory usage bounces up and down. It can happen with the system 
allocator too, though it's less likely.

One of those engineering tradeoff things, and not much to be done 
about it really.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Stuck with references

2005-03-25 Thread Jim Buttafuoco

I did say untested :)

-- Original Message ---
From: Michael Fuhr <[EMAIL PROTECTED]>
To: Jim Buttafuoco <[EMAIL PROTECTED]>
Cc: Frodo Larik <[EMAIL PROTECTED]>, pgsql-general@postgresql.org
Sent: Fri, 25 Mar 2005 09:05:50 -0700
Subject: Re: [GENERAL] Stuck with references

> On Fri, Mar 25, 2005 at 10:37:31AM -0500, Jim Buttafuoco wrote:
> > 
> > try the following (untested) query:
> > 
> > SELECT la.name,lb.name pp.distance FROM payway_profile AS pp
> > JOIN location AS la ON ( pp.location_a = l.location_id )
> > join location AS lb ON ( pp.location_b = l.location_id );
> 
> This query produces a syntax error due to a missing comma after
> lb.name, and "relation does not exist" errors due to the use of "l"
> instead of "la" and "lb" in the join conditions.  Try this instead:
> 
> SELECT la.name, lb.name, pp.distance
> FROM payway_profile AS pp
> JOIN location AS la ON (pp.location_a = la.location_id)
> JOIN location AS lb ON (pp.location_b = lb.location_id);
> 
> -- 
> Michael Fuhr
> http://www.fuhr.org/~mfuhr/
--- End of Original Message ---


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] syntax issue with custom aggregator

2005-03-25 Thread Lucas F.
When I run a query using a custom aggregator I wrote to find the average of
only non-negative values:

CREATE AGGREGATE "property"."pos_avg" ( BASETYPE = "int2",
SFUNC = "property"."ag_pos_avg_accum", STYPE = "_int4",
FINALFUNC = "property"."ag_pos_avg_final", INITCOND = "'{0,0}'");

CREATE OR REPLACE FUNCTION "property"."ag_pos_avg_accum" (p_state integer
[], p_input smallint) RETURNS integer [] AS
$body$
/* state transition function for the pos_avg custom aggregate*/

declare

state_copy integer[];

begin

state_copy := p_state;

if (p_state is null) then
   state_copy := '{0,0}';
   state_copy[0] := 0;
   state_copy[1] := 0;
end if;

if (p_input >= 0 and not p_input is null) then
   /* number of records */
   state_copy[0] := state_copy[0] + 1;
   /* running total */
   state_copy[1] := state_copy[1] + p_input;
end if;

return state_copy;

end;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

CREATE OR REPLACE FUNCTION "property"."ag_pos_avg_final" (p_state integer
[]) RETURNS smallint AS
$body$
/* final function for pos_avg custom aggregator */

declare

ret smallint;

begin

if (p_state[0] != 0) then
   ret := p_state[1] / p_state[0];
else
   ret := null;
end if;

return ret;

end;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

I get the error:

ERROR:  array value must start with "{" or dimension information

I'm pretty new to postgresql so it's probably something obvious, but I'm at
a loss as to what it is...

Thank you...


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] syntax issue with custom aggregator

2005-03-25 Thread Tom Lane
"Lucas F." <[EMAIL PROTECTED]> writes:
> CREATE AGGREGATE "property"."pos_avg" ( BASETYPE = "int2",
> SFUNC = "property"."ag_pos_avg_accum", STYPE = "_int4",
> FINALFUNC = "property"."ag_pos_avg_final", INITCOND = "'{0,0}'");

Too many quotes ... try INITCOND = '{0,0}'

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] syntax issue with custom aggregator

2005-03-25 Thread Michael Fuhr
On Fri, Mar 25, 2005 at 02:39:11PM -0500, Tom Lane wrote:
> "Lucas F." <[EMAIL PROTECTED]> writes:
> > CREATE AGGREGATE "property"."pos_avg" ( BASETYPE = "int2",
> > SFUNC = "property"."ag_pos_avg_accum", STYPE = "_int4",
> > FINALFUNC = "property"."ag_pos_avg_final", INITCOND = "'{0,0}'");
> 
> Too many quotes ... try INITCOND = '{0,0}'

Also, check the array subscripts in your functions.  By default,
PostgreSQL array subscripts start at 1, not 0.
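
For reference, the state transition function with 1-based subscripts might 
look roughly like this (same names as the original; treat it as an 
untested sketch):

CREATE OR REPLACE FUNCTION "property"."ag_pos_avg_accum" (p_state integer
[], p_input smallint) RETURNS integer [] AS
$body$
/* state transition function, using 1-based array subscripts */
declare
state_copy integer[];
begin
state_copy := coalesce(p_state, '{0,0}');

if (p_input >= 0 and p_input is not null) then
   /* number of accepted records */
   state_copy[1] := state_copy[1] + 1;
   /* running total */
   state_copy[2] := state_copy[2] + p_input;
end if;

return state_copy;
end;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

The final function would likewise read p_state[1] and p_state[2].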

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] sort array optimisation in pl/perl

2005-03-25 Thread Ragnar Hafstað
On Fri, 2005-03-25 at 15:29 +0100, GIROIRE Nicolas (COFRAMI) wrote:
[re-arranged]

> [mailto:[EMAIL PROTECTED]] On Behalf Of Ragnar Hafstað
> 
> > On Thu, 2005-03-24 at 15:49 +0100, GIROIRE Nicolas (COFRAMI) wrote:
> > 
> > > I create an array which is the result of a query on a PostgreSQL
> > > database and then I want to sort the rows in a particular way
> > > (impossible to do with a query on the database).
> > 
> > [snip suggestions]
> 
> thanks for this help.
> 
> The method works great, but the time savings are only significant when I
> have a lot of elements to move.

in that case, you most likely need to look at your algorithm. There is
little we can help you with, as you have not given us any info on
your sort requirements.

gnari



---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] pg_index question

2005-03-25 Thread Terry Lee Tucker
Hi,

If I were to set the value of pg_index.indisunique on a unique index to false 
inside a transaction so I could juggle sequence numbers around on a table 
with a unique two-element index, and then set it back again to its proper 
value, all in the same transaction, would that allow me to temporarily 
override the unique index behavior? Is it safe to temporarily change the 
value of that column?
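
In other words, something roughly like this (the index name is invented 
for the sake of the example):

BEGIN;
UPDATE pg_index SET indisunique = false
    WHERE indexrelid = 'my_two_col_unique_idx'::regclass;
-- ... juggle the sequence numbers around ...
UPDATE pg_index SET indisunique = true
    WHERE indexrelid = 'my_two_col_unique_idx'::regclass;
COMMIT;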

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] pg_index question

2005-03-25 Thread Terry Lee Tucker
To answer my own question, "No it won't work." I still get a unique constraint 
error.

On Friday 25 March 2005 04:10 pm, Terry Lee Tucker saith:
> Hi,
>
> If I were to set the value of pg_index.indisunique on a unique index to
> false inside a transaction so I could juggle sequence numbers around on a
> table with a unique two-element index, and then set it back again to its
> proper value, all in the same transaction, would that allow me to
> temporarily override the unique index behavior? Is it safe to temporarily
> change the value of that column?
>
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

-- 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pg_index question

2005-03-25 Thread Scott Marlowe
On Fri, 2005-03-25 at 15:10, Terry Lee Tucker wrote:
> Hi,
> 
> If I were to set the value of pg_index.indisunique on a unique index to false 
> inside a transaction so I could juggle sequence numbers around on a table 
> with a unique two-element index, and then set it back again to its proper 
> value, all in the same transaction, would that allow me to temporarily 
> override the unique index behavior? Is it safe to temporarily change the 
> value of that column?

You could, but in order to ensure there are no duplicates, you'd have
to check after turning it back on to see if there were any, and delete them
by some logic that made sense for your transactional methodologies.

I wouldn't recommend it really, because I'm willing to bet there are
race conditions I'm not thinking of that could bite you in the behind.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] pg_index question

2005-03-25 Thread Scott Marlowe
Umm, I tried it and it worked.  Sure you got the right relid?

On Fri, 2005-03-25 at 15:30, Terry Lee Tucker wrote:
> To answer my own question, "No it won't work." I still get a unique 
> constraint 
> error.
> 
> On Friday 25 March 2005 04:10 pm, Terry Lee Tucker saith:
> > Hi,
> >
> > If I were to set the value of pg_index.indisunique on a unique index to
> > false inside a transaction so I could juggle sequence numbers around on a
> > table with a unique two-element index, and then set it back again to its
> > proper value, all in the same transaction, would that allow me to
> > temporarily override the unique index behavior? Is it safe to temporarily
> > change the value of that column?
> >
> > ---(end of broadcast)---
> > TIP 2: you can get off all lists at once with the unregister command
> > (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pg_index question

2005-03-25 Thread Terry Lee Tucker
Well, I think I had the right oid, but I agree with you. I think I have a 
better solution than this. I think I'll try it once more though. I must 
have done something wrong.

Thanks for the reply :o)

On Friday 25 March 2005 04:53 pm, Scott Marlowe saith:
> Umm, I tried it and it worked.  Sure you got the right relid?
>
> On Fri, 2005-03-25 at 15:30, Terry Lee Tucker wrote:
> > To answer my own question, "No it won't work." I still get a unique
> > constraint error.
> >
> > On Friday 25 March 2005 04:10 pm, Terry Lee Tucker saith:
> > > Hi,
> > >
> > > If I were to set the value of pg_index.indisunique on a unique index to
> > > false inside a transaction so I could juggle sequence numbers around on
> > > a table with a unique two-element index, and then set it back again to
> > > its proper value, all in the same transaction, would that allow me to
> > > temporarily override the unique index behavior? Is it safe to
> > > temporarily change the value of that column?
> > >
> > > ---(end of
> > > broadcast)--- TIP 2: you can get off all lists
> > > at once with the unregister command (send "unregister
> > > YourEmailAddressHere" to [EMAIL PROTECTED])


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] help w/ a little naive brainstorming ...

2005-03-25 Thread OpenMacNews
hi all,
i've two random, not-completely-disjointed questions that have arisen in a 
recent, off-the-cuff discussion.

any/all initial pointers/comments would be much appreciated ... while i slog 
through Google, etc. trying to learn more ...

(1) does pgsql have/plan an equivalent of Oracle's External Tables?
   
searching on 'postgresql "external tables"' i'm unclear as to whether we're 
talking about the "same fruit" here (apples & apples, ya know?).

(2) most (no, not all) email servers store message data as flat-file text, 
organized in some admin-spec'd file-system directory hierarchy.

if i were to consider 'storing' those message trees, instead, in a pgsql db, i 
immediately/naively think of several possible options:

(a) simple DB-stored references to flat file:/// locations
(b) pointers to external, actionable (parseable? writeable?) files in the FS
(c) messages stored, in their entirety as CLOBS and/or BLOBS in the DB
(d) messages converted into, say, XML, and stored in the DB
(e) message components parsed into individual fields, and stored as addressable 
character and 'bytea' (a new one on me ...) data in pgsql tables

any thoughts/comments as to 'best approach' and why?
no, not at all well thought thru -- just brainstormin'  so, flame away!  ;-)
and, thanks =)
richard
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] help w/ a little naive brainstorming ...

2005-03-25 Thread Michael Fuhr
On Fri, Mar 25, 2005 at 03:16:12PM -0800, OpenMacNews wrote:

> (1) does pgsql have/plan an equivalent of Oracle's External Tables?
>

"The External Table feature allows for flat files, which reside
outside the database, to be accessed just like relational tables
within the database: the flat-file data can be queried and joined
to other tables using standard SQL."

You can implement this with a set-returning function that reads an
external source (flat file, spreadsheet, other kind of database,
etc.).  See for example DBI-link:

http://pgfoundry.org/projects/dbi-link

With views and rules you might be able to implement update/delete
operations as well, although you wouldn't get transaction semantics.
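
As a very rough sketch of the idea (every name below is invented, and the 
set-returning function itself would have to be written in PL/PerlU, C, or 
some other language that can actually read files):

-- assume a function like:
--   CREATE FUNCTION external_rows(path text) RETURNS SETOF record ...

-- the flat file can then be queried and joined like a table:
CREATE VIEW external_stuff AS
    SELECT * FROM external_rows('/var/data/stuff.txt')
        AS t(id integer, payload text);

SELECT e.payload, o.some_column
  FROM external_stuff e
  JOIN some_local_table o ON o.id = e.id;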

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_index question

2005-03-25 Thread Tom Lane
Scott Marlowe <[EMAIL PROTECTED]> writes:
> Umm, I tried it and it worked.  Sure you got the right relid?

I think the relation cache will probably not notice a manual update on
pg_index.  So whether it "works" or not would depend on a lot of
extraneous factors like whether the relation cache entry was already
built for the target table and whether you did something else that would
force a cache rebuild.

In any case, the whole idea is so fraught with pitfalls that I couldn't
recommend it ...

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] help w/ a little naive brainstorming ...

2005-03-25 Thread OpenMacNews
hi michael,
> > (1) does pgsql have/plan an equivalent of Oracle's External Tables?
> 
> "The External Table feature allows for flat files, which reside
> outside the database, to be accessed just like relational tables
> within the database: the flat-file data can be queried and joined
> to other tables using standard SQL."
> 
> You can implement this with a set-returning function that reads an
> external source (flat file, spreadsheet, other kind of database,
> etc.).  See for example DBI-link:
> 
> http://pgfoundry.org/projects/dbi-link

cool.  great place to start learning ...

> With views and rules you might be able to implement update/delete
> operations as well, although you wouldn't get transaction semantics.

baby steps for me.  i'd be happy for now with update/delete ... and worry about 
the rest as needed later ...

thx!
richard
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_index question

2005-03-25 Thread Terry Lee Tucker
Thanks Tom.

One never knows until he asks...

On Friday 25 March 2005 06:36 pm, Tom Lane saith:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> > Umm, I tried it and it worked.  Sure you got the right relid?
>
> I think the relation cache will probably not notice a manual update on
> pg_index.  So whether it "works" or not would depend on a lot of
> extraneous factors like whether the relation cache entry was already
> built for the target table and whether you did something else that would
> force a cache rebuild.
>
> In any case, the whole idea is so fraught with pitfalls that I couldn't
> recommend it ...
>
>   regards, tom lane
>
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

-- 
Quote: 42
"In selecting men for office, let principle be your guide. Regard not
 the particular sect or denomination of the candidate -- look to his
 character"

 --Noah Webster

 Work: 1-336-372-6812
 Cell: 1-336-363-4719
email: [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] TSearch2 FreeBSD

2005-03-25 Thread Matthew Terenzio
Anyone with any TSearch2 experience on FreeBSD? Any known Gotchas?
Postgres 7.3
Postgres is working fine and the Port for contrib seemed to install 
okay. tsearch2.so is indeed in the below directory and permissions look 
right.

$ psql database < tsearch2.sql
SET
BEGIN
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
'pg_ts_dict_pkey' for table 'pg_ts_dict'
CREATE TABLE
ERROR:  Load of file /usr/local/lib/postgresql/tsearch2.so failed: 
dlopen '/usr/local/lib/postgresql/tsearch2.so' failed. 
(/usr/local/lib/postgresql/tsearch2.so: Undefined symbol 
"errcode_for_file_access")
ERROR:  current transaction is aborted, queries ignored until end of 
transaction block
ERROR:  current transaction is aborted, queries ignored until end of 
transaction block

Thanks
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] TSearch2 FreeBSD

2005-03-25 Thread Tom Lane
Matthew Terenzio <[EMAIL PROTECTED]> writes:
> Anyone with any TSearch2 experience on FreeBSD? Any known Gotchas?
> Postgres 7.3

> ERROR:  Load of file /usr/local/lib/postgresql/tsearch2.so failed: 
> dlopen '/usr/local/lib/postgresql/tsearch2.so' failed. 
> (/usr/local/lib/postgresql/tsearch2.so: Undefined symbol 
> "errcode_for_file_access")

This tsearch2 code seems to be intended for PG 7.4 or later.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] TSearch2 FreeBSD

2005-03-25 Thread Matthew Terenzio
On Mar 25, 2005, at 9:28 PM, Tom Lane wrote:
> Matthew Terenzio <[EMAIL PROTECTED]> writes:
> > Anyone with any TSearch2 experience on FreeBSD? Any known Gotchas?
> > Postgres 7.3
> > 
> > ERROR:  Load of file /usr/local/lib/postgresql/tsearch2.so failed:
> > dlopen '/usr/local/lib/postgresql/tsearch2.so' failed.
> > (/usr/local/lib/postgresql/tsearch2.so: Undefined symbol
> > "errcode_for_file_access")
> 
> This tsearch2 code seems to be intended for PG 7.4 or later.
Oh, I see what you mean, Tom. The postgresql-contrib port is code for 7.4.



regards, tom lane
---(end of 
broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] TSearch2 FreeBSD

2005-03-25 Thread Michael Fuhr
On Fri, Mar 25, 2005 at 09:18:27PM -0500, Matthew Terenzio wrote:
> 
> Anyone with any TSearch2 experience on FreeBSD? Any known Gotchas?
> Postgres 7.3

I use tsearch2 with PostgreSQL 8.0.1 (built from CVS source) on
FreeBSD 4.11-STABLE.  No problems.

> ERROR:  Load of file /usr/local/lib/postgresql/tsearch2.so failed: 
> dlopen '/usr/local/lib/postgresql/tsearch2.so' failed. 
> (/usr/local/lib/postgresql/tsearch2.so: Undefined symbol 
> "errcode_for_file_access")

I don't think the stock tsearch2 works with 7.3.  The CVS tree for
REL7_3_STABLE doesn't include tsearch2, and the tsearch2 source
code in later branches includes a call to errcode_for_file_access(),
which appears to have been introduced in 7.4.

There appears to be a version of tsearch2 for 7.3 on the tsearch2
home page:

http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/

Is there a reason you're using PostgreSQL 7.3 instead of upgrading
to a newer release?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Persistent data per connection

2005-03-25 Thread Guy Rouillier
Jeff Amiel wrote:
> (heck, all cases), the audit triggers themselves (when an
> update/insert/delete takes place) have to look to see if a temp table
> exists to pull the user data from.  It won't exist in these cases, but
> the triggers don't know this.

Can't you catch the undefined_table exception in your trigger functions,
then create the tables if they don't exist?  We do that and it seems to
work fine.
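
A rough sketch of that pattern, with placeholder table, column, and 
function names (the EXCEPTION clause needs 8.0's plpgsql):

CREATE OR REPLACE FUNCTION audit_trig() RETURNS trigger AS $$
DECLARE
    v_user text;
BEGIN
    BEGIN
        -- per-connection temp table seeded by the connection pool
        SELECT app_user INTO v_user FROM session_info;
    EXCEPTION
        WHEN undefined_table THEN
            -- batch job, psql session, etc.: create it with a default row
            -- so later calls don't trip the exception again
            CREATE TEMP TABLE session_info (app_user text);
            INSERT INTO session_info VALUES ('unknown');
            v_user := 'unknown';
    END;
    INSERT INTO audit_log (table_name, action, performed_by, performed_at)
    VALUES (TG_RELNAME, TG_OP, v_user, now());
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;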

-- 
Guy Rouillier


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org