Re: [SQL] copy command and array of composite types?

2008-03-25 Thread Guy Fraser


Try exporting some data from the table using COPY ... TO, to see
what it looks like.


On 2008-Mar-25, at 12:19, Jon Norman wrote:


I'm using PostgreSQL 8.3.1.

I've created the following composite type:

CREATE TYPE expression_event_type AS (
event_id varchar(10),
expr_indices smallint[]
);

which is used with the following table definition:

CREATE TABLE boolean_expr_table(
expr_id smallint PRIMARY KEY,
expr_name varchar(100) NOT NULL,
expression varchar(256) NOT NULL,
event_indices expression_event_type[] NOT NULL,
true_cases smallint[] NOT NULL
);

Question: What is the correct way to load the event_indices column  
using the copy command and an external file?


I've tried: {(1,{1,2,3})} without success.
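A minimal sketch of the export approach suggested above, using the table and
type from the question (the sample values are assumptions): insert one row with
the ROW() and ARRAY[] constructors, then COPY the table out to see the exact
text that COPY expects back.

INSERT INTO boolean_expr_table VALUES (
  1, 'example', 'a AND b',
  ARRAY[ROW('1', ARRAY[1,2,3]::smallint[])::expression_event_type],
  ARRAY[1]::smallint[]
);
COPY boolean_expr_table TO STDOUT;
-- The event_indices field should come out with nested quoting and escaping
-- roughly like {"(1,\\"{1,2,3}\\")"}, which is the form COPY FROM expects.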




[SQL] Was: fetch first rows of grouped data

2007-08-28 Thread Guy Fraser
Hi there

I have a list of events that take place in a certain city at a
certain date. Now I would like to have the first two (ordered by
date) events for each city.
Is there a way to do this with one query?
I am using PostgreSQL 7.4.

Thanks for any tips.

Claudia


I think I may have come up with a possible solution.

Create a selection that produces a unique identifier for each city,
ordered by date, then use array_accum to collect the unique identifiers
for each city, and finally match the first two elements of each array
against the identifiers.

For instance if you had a table :

CREATE TABLE crazy_talk (
 ct_id bigserial primary key,
 ct_city text,
 ct_date date,
 ct_data text
) ;

Then you could use :

SELECT
 ct_id ,
 ct_city ,
 ct_date ,
 ct_data
FROM
 crazy_talk ,
 (SELECT
   ct_city AS city,
   array_accum(ct_id) as match
  FROM
   (SELECT
     ct_city ,
     ct_id
    FROM
     crazy_talk
    ORDER BY
     ct_city ,
     ct_date ) AS ordered_rows
  GROUP BY
   ct_city ) AS data_set
WHERE
 ct_city = city AND
 ct_id IN (match[1],match[2])
ORDER BY
 ct_city ,
 ct_date
;

I hope this helps, I did not try it, but I think it should work.

PS: if you don't have array_accum, here it is :

CREATE AGGREGATE array_accum (
BASETYPE = anyelement,
SFUNC = array_append,
STYPE = anyarray,
INITCOND = '{}'
);
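PostgreSQL array subscripts start at 1, which is why the query above picks
match[1] and match[2]; a quick way to confirm both the aggregate and the
subscripting is something like:

SELECT ct_city, (array_accum(ct_id))[1] AS first_id
FROM crazy_talk
GROUP BY ct_city;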






Re: [SQL] Regular Expressions

2007-03-21 Thread Guy Fraser
On Wed, 2007-03-21 at 11:04 -0300, Ezequias R. da Rocha wrote:
 Hi list,
 
 I would like to know if postgresql has a Regular Expressions (Regex) 
 implemented already.
 
 With it we could implement queries like
 
 Select * from myClientes where name = 'E[zs]equias'
 
Case Sensitive Regular Match ~
Case Insensitive Regular Match ~*
Negated Case Sensitive Regular Match !~
Negated Case Insensitive Regular Match !~*

Select * from myClientes where name ~ 'E[zs]equias'
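A case-insensitive match with the same pattern would be, for example:

Select * from myClientes where name ~* 'e[zs]equias'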

 where the result occurs even if the field has Ezequias or Esequias.
 
 Regards
 Ezequias
 
 
-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787





Re: [SQL] Regular Expressions

2007-03-21 Thread Guy Fraser
On Wed, 2007-03-21 at 14:37 -0300, Ezequias R. da Rocha wrote:
 Guy Fraser escreveu:
  On Wed, 2007-03-21 at 11:04 -0300, Ezequias R. da Rocha wrote:

  Hi list,
 
  I would like to know if postgresql has a Regular Expressions (Regex) 
  implemented already.
 
  With it we could implement queries like
 
  Select * from myClientes where name = 'E[zs]equias'
 
  
  Case Sensitive Regular Match ~
  Case Insensitive Regular Match ~*
  Negated Case Sensitive Regular Match !~
  Negated Case Insensitive Regular Match !~*
 
  Select * from myClientes where name ~ 'E[zs]equias'
 

  where the result occurs even if the field has Ezequias or Esequias.
 
  Regards
  Ezequias
 
 
  
 Great, I am thinking of putting my LIKE to rest. I feel it is faster than 
 the LIKE statement; do you have any information about that?
 

No, I don't know if regular expressions are faster than LIKE, but 
I think they are more flexible. When developing queries, I usually 
try different methods of matching to find out what works best for 
each circumstance. Sometimes upper(), lower() and substr() with an 
= are more effective than other methods.

One of the more powerful features of PostgreSQL is the ability to 
use sub-selects to reduce the time required to process a subset of 
data from a larger volume of data.

Example :

select
 * 
from (
 select
  ss_time,
  ss_date,
  ss_type,
  ss_data
 from
  full_set
 where
  ss_type in ('type_a','type_x')
 ) as sub_set 
where
 upper(ss_data) ~ '[A-Z][0-9][A-Z] ?[0-9][A-Z][0-9]'
order by
 ss_time,
 ss_date,
 ss_type 
; 


 Ezequias
 
 




Re: [SQL] LinkedList

2006-05-01 Thread Guy Fraser
On Sat, 2006-29-04 at 01:50 -0500, Ben K. wrote:
 On Fri, 28 Apr 2006, Guy Fraser wrote:
 
  -- HEAD
  insert into linkedlist values(null,1,0);
  insert into linkedlist values(1,2,10);
  insert into linkedlist values(2,3,20);
  insert into linkedlist values(3,4,30);
  insert into linkedlist values(4,5,40);
  -- TAIL
  insert into linkedlist values(5,null,50);
 
 
 
  Bad example of a double linked list, you also need an id for
  the current node and the values of prevnode and nextnode do not
  need to be ordered or contiguous as the example shows.
 
 
 
 Wow. Interesting... I am willing to be corrected, but to me the node 
 field seems redundant, since it does not add any information. (Since each 
 item in the list is already uniquely identifiable without the node.) 
 Certainly so, for traversing, which was the OP's intention.
 
 It may save some steps in case of other operations but at the expense of 
 one more field. Please see below.
 

The problem is that, your way, there is no indicated way to determine
which node is which. For instance, if you update any of your nodes 
then the node list would be out of order and your list would not 
work.

After I posted the message I realized there is another way to 
do this without adding an extra field, and it would be a closer 
example to how it is done in C. If you assigned the OID of the 
previous and next nodes rather than an arbitrary integer, you could 
access each node independently of the order they are listed in.

I have not messed around much with OIDs. I am not sure if 
OIDs change if an entry is updated.

In C you would use a pointer to the storage location of the previous 
and next node, which is similar to using the OID. In some 
cases it can be necessary to use pointers to pointers when 
accessing variable length relocatable data, but that goes way 
past what this thread is about.

The example I provided is still feasible and alleviates all 
unknowns at the expense of 4 bytes of storage for one integer
used as a fixed address for each node.
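For what it's worth, a minimal sketch of traversing the node-addressed layout
quoted below, head to tail: start at the row whose prevnode is NULL, then keep
fetching the row named by nextnode until nextnode is NULL.

SELECT * FROM linkedlist WHERE prevnode IS NULL;  -- head: node 1, nextnode 2
SELECT * FROM linkedlist WHERE node = 2;          -- follow the head's nextnode
SELECT * FROM linkedlist WHERE node = 3;          -- and so on, until nextnode IS NULL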

 
 
  create table linkedlist(node int,prevnode int, nextnode int, val int);
  insert into linkedlist values(1,null,2,0);
  insert into linkedlist values(2,1,3,10);
  insert into linkedlist values(3,2,4,30);
  insert into linkedlist values(4,3,5,20);
  insert into linkedlist values(5,4,6,40);
  insert into linkedlist values(6,5,null,50);
 
  If we now wanted to reorder an item in the set you need
  make some updates in a block, which I have not done before
  but should be something like this:
 
  Move node 4 between 2 and 3 so that the values from head
  to tail are ordered.
 
  update linkedlist set prevnode = '2',nextnode = '3' where node = '4';
  update linkedlist set nextnode = '4' where node = '2';
  update linkedlist set prevnode = '4' where node = '3';
 
 
 
 If the intention is to change it from 0-10-30-20-40-50 to 
 0-10-20-30-40-50, it would have been (in my design) exchanging node 3 and 
 node 4 below.
 
   null,1,0
   1,2,10  -- node 2
   2,3,30  -- node 3
   3,4,20  -- node 4
   4,5,40
   5,null,50
 
 Now, it can be done by:
 
 begin;
 update linkedlist set prevnode=2 where prevnode=3; -- node 4 = (2,4,20)
 update linkedlist set prevnode=3 where nextnode=3; -- node 3 = (3,3,30)
 update linkedlist set nextnode=3 where prevnode=2; -- node 4 = (2,3,20)
 update linkedlist set nextnode=4 where nextnode=3; -- node 3 = (3,4,30)
 commit;
 
 achieving the same.
   ...
   2,3,20  -- node 4, originally
   3,4,30  -- node 3, originally
   ...
 
 The node will be more cost efficient if we insert an item at the beginning 
 of a long list, for example insert
   (2,3,100)
 before node 3 (2,3,20), but at least the sql is simple;
 
   update linkedlist set prevnode = prevnode + 1 where prevnode > 1;
   update linkedlist set nextnode = nextnode + 1 where nextnode > 2;
   and then do insert (2,3,xxx)
 
 This method can also be used for reordering.
 
 The usefulness of the node will depend on the economics of these update 
 operations over keeping one more field.
 
 But I think this is more of an exercise, and functions would be the proper 
 way for complex operations.
 

As long as it works in real world use. Without some way of addressing 
each node, the idea of a linked list seems wrong, since a linked list is 
supposed to hold the address of the previous and/or next item in the 
list; assuming the data is always going to be correctly sorted so that
you can locate the next item by tuple number seems to assume too much.
If it works for you, great, your example may then be useful as a
shortcut, but I don't believe in leaving things to chance when programming.






Re: [SQL] pgcrypto-crypt

2006-04-06 Thread Guy Fraser
On Thu, 2006-06-04 at 13:53 +0530, AKHILESH GUPTA wrote:
 dear all,
 i want to encrypt and decrypt one of the fields in my table (i.e-
 password field)
 i have searched and with the help of pgcrypto package, using function
 crypt, i am able to encrypt my data,
 but there is nothing which i found to decrypt that same data,
 plz anybody give me the function to decrypt that encrypted value.
 plz reply asap

I found this with Google, maybe it will help you.

CREATE TABLE crypto (
id SERIAL PRIMARY KEY,
title VARCHAR(50),
crypted_content BYTEA
);


INSERT INTO crypto VALUES (1,'test1',encrypt('daniel', 'fooz', 'aes'));
INSERT INTO crypto VALUES (2,'test2',encrypt('struck', 'fooz', 'aes'));
INSERT INTO crypto VALUES (3,'test3',encrypt('konz', 'fooz', 'aes'));

SELECT * FROM crypto;

SELECT *,decrypt(crypted_content, 'fooz', 'aes') FROM crypto;

SELECT *,decrypt(crypted_content, 'fooz', 'aes') FROM crypto WHERE
decrypt(crypted_content, 'fooz', 'aes') = 'struck';

I could not test it, since I do not have pgcrypto installed.
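For completeness: crypt() itself is a one-way hash, so values produced with it
cannot be decrypted. The usual check is to re-run crypt() with the stored value
as the salt; a sketch (the users table and passwd column are assumptions):

SELECT * FROM users WHERE passwd = crypt('entered password', passwd);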





Re: [SQL] Simple Question

2005-01-11 Thread Guy Fraser
Convert to seconds first (3600 sec/hr) :

select (
 '3600'::int4 
 * '478'::int4 
 / '45.0'::float8
)::int4::reltime::interval ;
 interval
--
 10:37:20
(1 row)

I don't know if ::int4::reltime::interval is the best 
way to end up with an interval, but it's the only way I 
could figure out how to do it off the top of my head.
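Another option that avoids the reltime cast is to multiply an interval by the
fractional hours directly, which should give the same result:

select ('478'::float8 / '45.0'::float8) * '1 hour'::interval ;
 ?column?
----------
 10:37:20
(1 row)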

On Tue, 2005-11-01 at 16:42 -0500, Terry Lee Tucker wrote:
 Hello:
 
 I'm trying to figure out how to convert a floating point value into an 
 interval of time. I'm calculating the time required to drive from point A to 
 point B. For the sake of this question, we'll just say it is miles/speed. So:
 
 drv_time = 478 / 45.0;
 
 The value of this is: 10.6222
 
 Is there a way of converting this value to an interval. It seems that 
 INTERVAL 
 only works with a quoted literal value.
 
 If I type:
 rnd=# select interval '10.8444 hours';
   interval
 
  @ 10 hours 50 mins 40 secs
 (1 row)
 
 Anybody have an pointers?
 
 Thanks...
 
 
  Work: 1-336-372-6812
  Cell: 1-336-363-4719
 email: [EMAIL PROTECTED]
 
 
-- 
Guy Fraser
Network Administrator
The Internet Centre
1-888-450-6787
(780)450-6787




Re: [SQL] save sql result to file

2004-12-24 Thread Guy Fraser
Off the top of my head:

psql -c "select ... into dump_table from old_table ... ;" database
pg_dump -t dump_table -f dump_table.tmp database
sed -e 's/dump_table/table_name/g' dump_table.tmp > table_name.sql
rm dump_table.tmp

You should now have a file called table_name.sql with the stuff 
required to create a new table called table_name with the data 
from your query.
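The question also asked about restoring; the generated file can simply be fed
back to psql, for example (database name assumed):

psql -f table_name.sql database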

On Thu, 2004-23-12 at 20:44 -0800, william lim wrote: 
 how to save sql result to a text file in database-backup format?
 I want to backup a portion of data from my query to files and can do restore 
 from them.
 
 Thanks
 
 William
 
 ___
 Sent by ePrompter, the premier email notification software.
 Free download at http://www.ePrompter.com.




Re: [SQL] Comparing Dates

2004-11-19 Thread Guy Fraser
You should use single quotes for all literals.
Examples:
select '2004-06-08' ;
  ?column?
------------
 2004-06-08

select 'user' ;
 ?column?
----------
 user

Failing to quote literals will cause unexpected results.
Examples:

select 2004-06-08 ;
 ?column?
----------
     1990

select user ;
 current_user
--------------
 guy
Thomas F.O'Connell [EMAIL PROTECTED] writes:
select 2004-06-08;
  ?column?
--
  1990
I'm not exactly sure how the bare string is converted internally, but 
it's
clearly not a complete date like you're expecting.


--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.



Re: [SQL] [GENERAL] PHP or JSP? That is the question.

2004-03-23 Thread Guy Fraser
Yes, Java is compiled, and compilers do catch most syntax and scope
errors, as I said, but the Java object code is still interpreted.
Logical errors and other mistakes still get through compilation, and
good regression testing is still required for quality assurance.

I think JSP is an excellent solution and is probably the way I will go.
We already have Jakarta Tomcat running in-house, but I haven't touched
Java since 1996.

I started with PHP about a year ago and after 1 week I had successfully
written a radius-authenticated, session-based PostgreSQL reporting
application. A couple of months ago I wrote a data collection application
that stores data in a MySQL database {yik, not my choice} and a PHP front
end that generates tables, graphs and statistical analysis, and it took
one day to build and test thoroughly.

I used to use Perl or write my own CGIs in C, but PHP is so much easier
than either. I have been programming since 1981 and have learned many
languages on many platforms. I have maintained a few Sun servers but
usually in a mix with other Unix variants. Because of the mixed
environments and poor Java support on many platforms I never had a good
reason to use Java. Most of the applications I have written did not need
complicated GUI interfaces, so C and Perl have done fine for a long time,
and PHP is adequate for simple DB applications. I am entertaining JSP for
the more complicated GUI interface for a project I may need to build in
the near future. Management has asked about ASP, but since we have to
support many non-Windows platforms I have strongly advised against it;
besides, I feel JSP will be much better.

Do you have a suggestion for a good resource, in order for me to get up
to speed quickly with Java?

[EMAIL PROTECTED] wrote:

My point is not about the method of how, and how fast, the whole thing executes,
but whether the source code can be compiled, assuming that compilation checks for
many things, like a data type mismatch or something missing, and provides many
warnings for potential errors.  If PHP has such compilation then fine.
Many people like development with interpreted languages and they build
impressive applications, but I think that compilation always helps: the time
spent waiting for compilation to finish pays off many times. 
JSP is compiled: I like that. Take JavaScript as an example: either you
shake after every change to the source code over errors like 'Object not
found', or you build extensive automated regression tests, but then you are going
to spend time building those test cases, which may still fail to detect
errors that are otherwise easily detected by the compiler. 
 

-Original Message-
From: Guy Fraser [mailto:[EMAIL PROTECTED] 
Sent: 23. mars 2004 16:01
To: Laimutis Nedzinskas
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] PHP or JSP? That is the question.

[EMAIL PROTECTED] wrote:

It may be added to Mark's points below that PHP has more libraries
in the out of the box setup (like regular expressions), but PHP is
interpreted (right?) while JSP is compiled: when I was making the decision
I chose JSP because of the compiled criterion: I do like the idea of
catching as many bugs as possible at compile time, and rather hate
interpreted development.

Sorry, but you're not correct.

Java is a binary 'interpreted' language. Although it is binary, it is not
a binary executable native on almost all platforms. The java executable
interprets the java binary code and translates it into native machine
executable code. Since the binary data is tokenized it is more efficient,
and has already had its syntax and scopes verified, so less processing is
required, making it faster once it has been compiled.

One of the bigger advantages of JSP is the two way data flow capability.
With the java application running at the client end providing the
interface, and the java servlet running at the server end doing the I/O,
JSP can make a more fluid feeling interface. With PHP, once the page is
rendered it is static.

...snip...
--
Guy Fraser




Re: [SQL] comparing nulls

2004-01-20 Thread Guy Fraser
Kenneth Gonsalves wrote:

On Tuesday 20 January 2004 19:26, Chris Bowlby wrote:
 

Hi Ken,

Under 7.3.x this option was removed, you need to test via:

SELECT * from table where field IS NULL;
   

thanx - works in both 7.1 and 7.3 - why do these guys keep fooling around 
with these thangs?
 

Standards compliance :-)





Re: [SQL] testing for null value in integer field?

2003-12-19 Thread Guy Fraser
select .  where intnumber IS NULL

Geoffrey wrote:

How does one check for an unset value in an integer field?

I've tried such things as:

select . where intnumber = ''
select .. where intnumber =  ?
select .  where intnumber = NULL
Thanks.





Re: [SQL] picking max from list

2003-12-10 Thread Guy Fraser
select group,rd_pk
from (select ...) as your_query,
(select max(run) as max_run,rd_pk as rd
from (select ...) as your_query
group by rd_pk) as max_rd
where rd = rd_pk
 and max_run = run;
I don't know if you call that one query but it should work.

There may be more elegant solutions, but I haven't had a chance to read 
up on the new join types.

Good luck

Jodi Kanter wrote:

I have a query that produces results similar to this:

run#  rd_pk  group#
0     9209   5
1     9209   8
0     9520   2
1     9520   5
0     9520   etc
0     8652
1     8652
2     8652
0     8895
1     8894

Ultimately I want to know the group number for EACH rd_pk with the 
highest run number. Can this be done in one query? Or will I need to 
code with a loop?
Thanks
Jodi


--
Guy Fraser
Network Administrator




Re: [SQL] selecting duplicate records

2003-09-23 Thread Guy Fraser
This is a test I did on one of my tables where I put duplicated entries :

select *
from acct_other_2003sep
except select sep.*
from (
 select min(oid) as min_oid,
  "Acct-Status-Type",
  "User-Name",
  "Realm",
  "Acct-Session-Time",
  "Acct-Input-Octets",
  "Acct-Output-Octets",
  "Called-Station-Id",
  "Calling-Station-Id",
  "Acct-Terminate-Cause",
  "Framed-IP-Address",
  "Service-Type",
  "Framed-Protocol",
  "Client-IP-Address",
  "NAS-IP-Address",
  "NAS-Port-Type",
  "NAS-Port-Id",
  "Acct-Session-Id",
  "Acct-Link-Count",
  "Acct-Multi-Session-Id"
 from acct_other_2003sep
 group by "Acct-Status-Type",
  "User-Name",
  "Realm",
  "Acct-Session-Time",
  "Acct-Input-Octets",
  "Acct-Output-Octets",
  "Called-Station-Id",
  "Calling-Station-Id",
  "Acct-Terminate-Cause",
  "Framed-IP-Address",
  "Service-Type",
  "Framed-Protocol",
  "Client-IP-Address",
  "NAS-IP-Address",
  "NAS-Port-Type",
  "NAS-Port-Id",
  "Acct-Session-Id",
  "Acct-Link-Count",
  "Acct-Multi-Session-Id"
 ) as min_sep,
acct_other_2003sep as sep
where sep.oid = min_sep.min_oid
;
 From the above example you can see how to use a subselect to get a 
unique list; then, using EXCEPT, you can get the records that were not unique.

This may not be exactly what you want, but it does implement some of the 
methods required to get around using temporary tables.

For some tasks using temporary tables may be more suitable, if your query 
becomes too complex and/or you run out of memory/time.

Hope this helps.

Guy

Christopher Browne wrote:

The world rejoiced as [EMAIL PROTECTED] (Christoph Haller) wrote:
 

1. How to select duplicate records only from a single table using a
select query.

e.g.
select sid,count(sid) from location group by sid having count(sid) > 1;
Do you get the idea?
Your request is pretty unspecific, so if this is not what you're asking for,
try again.

The aggregate is likely to perform horrifically badly.  Here might
be an option:
Step 1.  Find all of the duplicates...

select a.* into temp table sid from some_table a, some_table b
 where a.oid <> b.oid and
   a.field1 = b.field1 and
   a.field2 = b.field2 and
   a.field3 = b.field3 and
...
   a.fieldn = b.fieldn;
Step 2.  Look for the matching entries in the source table...

select a.* from some_table a, sid b
 where
   a.field1 = b.field1 and
   a.field2 = b.field2 and
   a.field3 = b.field3 and
...
   a.fieldn = b.fieldn;
[There's a weakness here; if there are multiple dupes, they may get
picked multiple times in the second query :-(.]
 

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.






Re: [SQL] [GENERAL] Request for advice: Table design

2003-06-19 Thread Guy Fraser
Hi

I realized I made a mistake. :(

The example below should have :

create view all_data as
select ...
The as statement was missing.

Here is a real sample of a function allowing a similar effect :

--- sql script ---
--
-- delete old function [must be done before type is dropped]
--
DROP FUNCTION acct_info( TEXT , INTEGER );
--
-- delete old type
--
DROP TYPE acct_info_record;
--
-- return type for acct_info records
--
CREATE TYPE acct_info_record AS (
 "Time-Stamp" ABSTIME,
 "Acct-Status-Type" TEXT,
 "User-Name" TEXT,
 "Realm" TEXT,
 "Acct-Session-Time" INTEGER,
 "Acct-Input-Octets" INTEGER,
 "Acct-Output-Octets" INTEGER,
 "Called-Station-Id" TEXT,
 "Calling-Station-Id" TEXT,
 "Acct-Terminate-Cause" TEXT,
 "Framed-IP-Address" INET,
 "Service-Type" TEXT,
 "Framed-Protocol" TEXT,
 "Client-IP-Address" INET,
 "NAS-IP-Address" INET,
 "NAS-Port-Type" TEXT,
 "NAS-Port-Id" INTEGER,
 "Timestamp" INTEGER,
 "Acct-Session-Id" TEXT,
 "Acct-Link-Count" SMALLINT,
 "Acct-Multi-Session-Id" TEXT,
 "Acct-Delay-Time" INTEGER
);
--
-- function to select start and stop records as one data set by mon
-- and year.
--
CREATE FUNCTION acct_info( TEXT , INTEGER ) RETURNS SETOF 
acct_info_record AS '
DECLARE
 p_mon ALIAS FOR $1;
 p_year ALIAS FOR $2;
 v_exec TEXT;
 rec RECORD;
BEGIN
 v_exec := ''SELECT
   "Time-Stamp",
   "Acct-Status-Type",
   "User-Name",
   "Realm",
   "Acct-Session-Time",
   "Acct-Input-Octets",
   "Acct-Output-Octets",
   "Called-Station-Id",
   "Calling-Station-Id",
   "Acct-Terminate-Cause",
   "Framed-IP-Address",
   "Service-Type",
   "Framed-Protocol",
   "Client-IP-Address",
   "NAS-IP-Address",
   "NAS-Port-Type",
   "NAS-Port-Id",
   "Timestamp",
   "Acct-Session-Id",
   "Acct-Link-Count",
   "Acct-Multi-Session-Id",
   "Acct-Delay-Time"
 FROM acct_start_'' || p_year || p_mon ||
'' UNION SELECT 
 "Time-Stamp",
 "Acct-Status-Type",
 "User-Name",
 "Realm",
 "Acct-Session-Time",
 "Acct-Input-Octets",
 "Acct-Output-Octets",
 "Called-Station-Id",
 "Calling-Station-Id",
 "Acct-Terminate-Cause",
 "Framed-IP-Address",
 "Service-Type",
 "Framed-Protocol",
 "Client-IP-Address",
 "NAS-IP-Address",
 "NAS-Port-Type",
 "NAS-Port-Id",
 "Timestamp",
 "Acct-Session-Id",
 "Acct-Link-Count",
 "Acct-Multi-Session-Id",
 "Acct-Delay-Time"
FROM acct_stop_'' ||  p_year || p_mon ;
 FOR rec IN EXECUTE v_exec
   LOOP
 RETURN NEXT rec;
   END LOOP;
 RETURN;
END;
' LANGUAGE 'plpgsql';

--
-- check to make sure it works
--
SELECT * FROM acct_info('jun','2003') LIMIT 10;
--
--- end of sql script --
That may not be of as much help for that project, but it was something I 
realized I could use in one of my applications.

Guy

Dennis Gearon wrote:

wow! Thanks for that info. I'm definitely filing this for use in a 
future,near term project.

Guy Fraser wrote:

Hi

As an additional note;

Older data is moved into a separate table to reduce the number of 
records that require regular vacuuming. Since the tables would 
contain similar data it is simple to use union selections in a view 
with an additional column to indicate which table the data comes 
from. Using a view that combines the data from the two tables using a 
union, the data will appear to be coming from a single table. This 
method makes archival access transparent.

I have a realtime data collection system that I built. The data is 
put into tables on a yearly and monthly basis on the fly and new 
tables are created as needed. I use a union  to join tables to access 
the data over several months. I just thought of a new idea, I am 
going to write a function to join the tables required over a timespan 
- but that's another story.

Two tables are easy to join with a union :
{if the column types are exactly matched}
create view all_data
select *,'current_data'::text as data_table from current_data ...
union
select *,'archive_data'::text from archive_data ...
;
The last column will indicate the data's origin.

Now to see all the data :

select * from all_data ;

That's about it; using this method allows the dynamic table to stay small 
for quick maintenance and operation, while the static table needs 
less maintenance, so it can be large without the penalties incurred by 
frequent maintenance.

Guy





Re: [SQL] Creating Views with Column Names based on Distinct

2003-06-11 Thread Guy Fraser
Hi

CREATE VIEW user_stuff AS ...select command that displays what you want... ;

This might be what you want ?:-)

CREATE VIEW user_stuff AS
SELECT username AS "Username", userpassword AS
"Pass/Attribute", startdate::TEXT AS "Date/Value"
 FROM user
 UNION SELECT
user.username, userdetail.attributename, userdetail.attributevalue::TEXT
  FROM user, userdetail
  WHERE user.userid = userdetail.userid
;

Here is some pseudo output :

-- select Usernames that start with 'j' from the view.
-- NOTE: The columns I set up have capitals and 'unsafe' characters so
-- they must be in double quotes.

SELECT * from user_stuff where "Username" ~ '^j';

Username | Pass/Attribute | Date/Value
--++
joeuser  | 5n00py | 01-01-01  
joeuser  | ju-attribute1  | ju-value1
joeuser  | ju-attribute2  | ju-value2
...
janedow  | 5eaShe11   | 02-02-02  
janedow  | jd-attribute1  | jd-value1
janedow  | jd-attribute2  | jd-value2
...

NOTE: The columns all have to be the same type {and probably size}. I 
would suggest using TEXT instead of VARCHAR(). Since the data in the 
third column is either a date or character data, I cast the date and 
value to TEXT so that they would both match.

This looks surprisingly like a radius authentication database; I recently 
patched cistron to do PostgreSQL accounting, and will likely make an SQL 
authentication patch as well, or switch to freeRadius and help them fix 
up their software. I have looked at freeRadius a couple of times, but it 
has really bad docs for the SQL support.

Hope this helps.

Guy

Frank Bax wrote:

At 10:59 AM 6/6/03, Damien Dougan wrote:

I was wondering if it is possible to create a table view based on a table
which is effectively an attribute list.

For example, suppose I have two tables:

CREATE TABLE user
(
  userid integer,
  username character varying,
  userpassword character varying,
  startdate date
);
CREATE TABLE userdetail
(
  userid integer,
  attributename character varying,
  attributevalue character varying
);
Now I want to make a public view of the user, which would have all of the
defined fields in user, and all of the defined attributes across userdetail.


I think you'll find what you're looking for if you search the 
archives of this mailing list for 'crosstab'. 






Re: [SQL] Calculating with the time

2003-05-31 Thread Guy Fraser
Is this what you are looking for?

test=# select 'now'::time as test,'2003-05-30 14:51:38-06'::timestamptz 
as stamp into temp cruft;
SELECT
test=# select test,stamp,test - stamp::time as diff from cruft;
      test       |         stamp          |      diff
-----------------+------------------------+-----------------
 15:09:28.862728 | 2003-05-30 14:51:38-06 | 00:17:50.862728
(1 row)

test=# select test,stamp,reltime(test - stamp::time) as diff from cruft;
      test       |         stamp          |   diff
-----------------+------------------------+----------
 15:09:28.862728 | 2003-05-30 14:51:38-06 | 00:17:50
(1 row)
test=# drop table cruft;
DROP TABLE
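Subtracting two time values directly also gives an interval, so the example
from the question can be answered with something like:

select '07:59:59'::time - '07:53:56'::time as diff ;
   diff
----------
 00:06:03
(1 row)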
Guy

Katka a Daniel Dunajsky wrote:

Hello All,

I am looking for advice on how to do calculations with the time. I do 
have a column with datatype 'timestamp with time zone'. The value is 
'2003-03-22 07:53:56-07' for instance. I would like to select it from 
the table with the result of '07:59:59' - '07:53:56', so the query should 
return '00:06:03'.

Thank you for your time.

DanielD





Convert a text list to text array? Was: [SQL] Denormalizing duringselect

2003-03-04 Thread Guy Fraser
The email at the bottom gave me an idea, but it doesn't quite work:

CREATE AGGREGATE accumulate(
  BASETYPE = text,
  SFUNC = textcat,
  STYPE = text,
  INITCOND = '' );
--
SELECT ('{' || ltrim(accumulate(',' || tablename),',') || '}') as cruft
 FROM pg_tables
  WHERE hasindexes = 'f';
   cruft
---
 {pg_xactlock,pg_listener}
(1 row)
This produces something that looks like it could be converted into 
an array, but I can't figure out how to make it work.
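A text-specific variant of the array_accum aggregate shown earlier in this
archive, built on array_append, produces a real text[] directly and avoids the
string assembly; a sketch, assuming a release with array_append available (7.4
or later):

CREATE AGGREGATE array_accum (
  BASETYPE = text,
  SFUNC = array_append,
  STYPE = text[],
  INITCOND = '{}'
);

SELECT array_accum(tablename) FROM pg_tables WHERE hasindexes = 'f';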



Guy

Edmund Lian wrote:
Jeff and Josh,

I found this example in Practical PostgreSQL... will it do the job?


The following example defines an aggregate function named sum(), for
use with the text data type. This aggregate calls the
textcat(text,text) function built into PostgreSQL to return a
concatenated sum of all the text found in its input values:
booktown=# CREATE AGGREGATE sum ( BASETYPE = text,
booktown(#SFUNC = textcat,
booktown(#STYPE = text,
booktown(#INITCOND = '' );
CREATE
booktown=# SELECT sum(title || ' ') FROM books WHERE title ~ '^L';
...snip...



Re: [SQL] Passing arrays

2003-02-19 Thread Guy Fraser
Would it not be more reasonable to have array_dims return an int or int[]?

Has anyone ever seen an array that does not start at 1?

The other problem I find with array_dims returning text is when you have a 
multi-dimensional array like this IIRC;

array_dims('{{asd,fgh,jkl},{zxc,vbn,mlk}}') = '[1:2][1:3]'

Which appears to mean that the data is a 2 element array of 3 element 
arrays.

If the data was in an int array format like '{{1,2},{1,3}}' it would be dead 
easy to get the dimensions of the array without messy text parsing. It would 
be even better as '{2,3}', since a null element at the start of an array is still 
counted as an element, so all arrays start from 1. A fairly simple function 
could be made to factor all dimensions together to get a full sub_element 
count, ie. 2x3 = 6 ... .

I think I will update my array_size function to handle this, but that means my 
function has to deal with more messy text parsing to generate the int array for 
multi-dimensional arrays. I have up until now only been working with single 
element arrays.

Here is an example of my array_size function for text arrays, I just tossed 
this together from what I could remember, so it may not be exactly the same as 
what I am using.

For V7.3 it should look something like this.

---%...snip...
CREATE FUNCTION array_size(text[]) RETURNS int AS '
DECLARE
  array ALIAS FOR $1;
  dim int;
BEGIN
  SELECT INTO dim
    replace(split_part(array_dims(array),'':'',2),'']'','''')::int ;
-- that was the messy stuff
IF dim IS NULL
THEN
  dim := 0 ;
END IF;
  RETURN dim;
END;
' LANGUAGE 'plpgsql';
---%...snip...

For V7.2 it looked something like this, but it is more messy.

---%...snip...
CREATE FUNCTION array_size(text[]) RETURNS int AS '
DECLARE
  array ALIAS FOR $1;
  dim int;
BEGIN
  SELECT INTO dim
    rtrim(ltrim(ltrim(array_dims($1),''[0123456789''),'':''),'']'')::int ;
-- that was the messy stuff
IF dim IS NULL
THEN
  dim := 0 ;
END IF;
  RETURN dim;
END;
' LANGUAGE 'plpgsql';
---%...snip...

I dropped these into a test DB, created a test table, and they do work, so here 
are the results:

select *,array_size(destination) from size_test;

   alias   |destination| array_size
---+---+
 alias1| {dest1}   |  1
 alias2| {dest2,dest1} |  2
 alias3| {dest3,dest4} |  2
 alias4| {dest3,dest4,dest5}   |  3
 alias5| {dest6,dest7} |  2
 alias6| {dest3,dest7,dest4,dest5} |  4
 alias7|   |  0


I hope that this helps. You can overload the function by creating more of the 
same function but using different array types for the input.

IE. array_size(int[]) instead of array_size(text[]).
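For reference, releases from 7.4 on also provide array_upper(), which returns a
dimension's upper bound directly and avoids the text parsing, for example:

select array_upper('{a,b,c}'::text[], 1);
 array_upper
-------------
           3
(1 row)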


Guy

Michael Weaver wrote:
There is a function array_dims(array) that returns the size of array.
It's not THAT useful as it returns a string like '[1:2]' -( 1 lower, 2 
upper bound.)
With a little bit of string processing you could get the size of the array.








Re: [SQL] CSV import

2003-01-31 Thread Guy Fraser
FYI

In text files on a Mac, the EOL character is a CR only.

What a messy thing this whole EOL cruft is.

To convert between these text formats on linux is easy if you have dos2unix.

The dos2unix on linux can perform many format conversions to and from unix,dos 
and mac formats.

On BSD you need dos2unix to convert from dos to unix and unix2dos to convert 
from unix to dos. You probably need to get the GNU version of dos2unix or 
mac2unix to convert to or from mac formatted text.


Guy

Jean-Luc Lachance wrote:
In DOS and Windows, text lines end with CRLF.
In Unix, text lines end with LF only.

              hex     dec   oct
CR=CTRL-M or 0x0D or 13 or 015
LF=CTRL-J or 0x0A or 10 or 012



Chad Thompson wrote:


Unix EOL is LF not CR.




Is this the only difference between a dos and unix text file?

Thanks
Chad

---%...snip...





Re: [SQL] CSV import

2003-01-28 Thread Guy Fraser
Hi

You will need two text utilities {dos2unix and sed} to do this in the simplest 
way. They are fairly standard text utilities and are probably already on your 
machine.

This is how I would do it :

sed 's/\"//g' file_name.txt \
	| dos2unix \
	| psql -c "COPY table_name FROM STDIN USING DELIMITERS ',';" db

Where file_name.txt is the csv file you want to import and table_name is 
the previously created table you want to insert the data into and db is the 
database name.

How this works is sed {stream editor} removes all the double quote 
characters '"' then pipes the output through dos2unix, which converts all the 
CRLF {DOS EOL} sequences into LF {UNIX EOL} characters, then pipes the data to 
psql with a command that does a bulk insert into the table of the database 
you have selected.
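As an aside, releases from 8.0 on can read quoted, CRLF-terminated CSV
directly, which removes the need for the sed and dos2unix steps; for example
(server-side file path assumed):

COPY table_name FROM '/path/to/file_name.txt' WITH CSV;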


Guy

Oliver Vecernik wrote:
Hi again!

After investigating a little bit further my CSV import couldn't work 
because of following reasons:

1. CSV files are delimited with CR/LF
2. text fields are surrounded by double quotes

Is there a direct way to import such files into PostgreSQL?

I would like to have something like MySQL provides:

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name.txt'
   [REPLACE | IGNORE]
   INTO TABLE tbl_name
   [FIELDS
   [TERMINATED BY '\t']
   [[OPTIONALLY] ENCLOSED BY '']
   [ESCAPED BY '\\' ]
   ]
   [LINES TERMINATED BY '\n']
   [IGNORE number LINES]
   [(col_name,...)]

Has anybody written such a function already?

Regards,
Oliver







Re: [SQL] converting microsoft sql server 2000 sql-code for postgresql

2003-01-28 Thread Guy Fraser
Hi

You should use date instead of datetime since the data is only a date and 
a date type uses less storage than a timestamp.

For time only data use type time or timetz if you want the timezone as 
well. For data with a date and time use timestamp or timestamptz if you 
want to include the timezone as well.

I hope this is helpful. There are other time based data types as well but you 
should read the documentation to determine when it is best to use them.

One thing I really like about PostgreSQL is the variety of data types and 
functions for special operations on them.

## I believe the information below is correct.

If you go through the documentation you can also find out how to CREATE your 
own DOMAIN to allow more direct use of non-PostgreSQL type names. Here is 
an example that will allow columns declared as datetime to be used directly 
in PostgreSQL :

CREATE DOMAIN datetime AS timestamptz;

Then any date, or date and time, data can be input into columns of type 
datetime, without having to edit your tables to rename the datetime type.


Guy

Rajesh Kumar Mallah. wrote:
Only 1 small change makes it acceptable to pgsql.

change datetime to timestamp .


regds
mallah.

On Tuesday 28 January 2003 03:38 pm, william windels wrote:


Hello all,

I am a new member of the list and at the moment I am studying
informatics: SQL.

At the workplace, we use Microsoft SQL Server 2000.
At home, I use PostgreSQL 7.2.1 and now I would like to import the data of the
database at the workplace into the PostgreSQL environment at home.

I have pasted a little part of the SQL code to create a table in a database
called tennisclub.

To execute the code below with psql, I do the following steps:

psql tennisclub
\e file_with_sql_code.sql

The contents of the file file_with_sql_code.sql is as follows:

CREATE TABLE SPELERS
(SPELERSNR SMALLINT NOT NULL,
 NAAM CHAR(15) NOT NULL,
 VOORLETTERS CHAR(3) NOT NULL,
 VOORVOEGSELS CHAR(7) ,
 GEB_DATUM datetime ,
 GESLACHT CHAR(1) NOT NULL,
 JAARTOE SMALLINT NOT NULL,
 STRAAT CHAR(15) NOT NULL,
 HUISNR CHAR(4) ,
 POSTCODE CHAR(6) ,
 PLAATS CHAR(10) NOT NULL,
 TELEFOON CHAR(10) ,
 BONDSNR CHAR(4) ,
 PRIMARY KEY (SPELERSNR) );

INSERT INTO SPELERS VALUES (
 6, 'Permentier', 'R', NULL, '1964-06-25', 'M', 1977, 'Hazensteinln',
 '80', '1234KK', 'Den Haag', '070-476537', '8467'
);

INSERT INTO SPELERS VALUES (
 44, 'Bakker', 'E', 'de', '1963-01-09', 'M', 1980, 'Lawaaistraat',
 '23', 'LJ', 'Rijswijk', '070-368753', '1124'
);

INSERT INTO SPELERS VALUES (
 83, 'Hofland', 'PK', NULL, '1956-11-11', 'M', 1982, 'Mariakade',
 '16a', '1812UP', 'Den Haag', '070-353548', '1608'
);

INSERT INTO SPELERS VALUES (
 2, 'Elfring', 'R', NULL, '1948-09-01', 'M', 1975, 'Steden',
 '43', '3575NH', 'Den Haag', '070-237893', '2411'
);

INSERT INTO SPELERS VALUES (
 27, 'Cools', 'DD', NULL, '1964-12-28', 'V', 1983, 'Liespad',
 '804', '8457DK', 'Zoetermeer', '079-234857', '2513'
);

INSERT INTO SPELERS VALUES (
 104, 'Moerman', 'D', NULL, '1970-05-10', 'V', 1984, 'Stoutlaan',
 '65', '9437AO', 'Zoetermeer', '079-987571', '7060'
);

INSERT INTO SPELERS VALUES (
 7, 'Wijers', 'GWS', NULL, '1963-05-11', 'M', 1981, 'Erasmusweg',
 '39', '9758VB', 'Den Haag', '070-347689', NULL
);

INSERT INTO SPELERS VALUES (
 57, 'Bohemen', 'M', 'van', '1971-08-17', 'M', 1985, 'Erasmusweg',
 '16', '4377CB', 'Den Haag', '070-473458', '6409'
);

INSERT INTO SPELERS VALUES (
 39, 'Bischoff', 'D', NULL, '1956-10-29', 'M', 1980, 'Ericaplein',
 '78', '9629CD', 'Den Haag', '070-393435', NULL
);

INSERT INTO SPELERS VALUES (
 112, 'Baalen', 'IP', 'van', '1963-10-01', 'V', 1984, 'Vosseweg',
 '8', '6392LK', 'Rotterdam', '010-548745', '1319'
);

INSERT INTO SPELERS VALUES (
 8, 'Niewenburg', 'B', NULL, '1962-07-08', 'V', 1980, 'Spoorlaan',
 '4', '6584WO', 'Rijswijk', '070-458458', '2983'
);

INSERT INTO SPELERS VALUES (
 100, 'Permentier', 'P', NULL, '1963-02-28', 'M', 1979, 'Hazensteinln',
 '80', '6494SG', 'Den Haag', '070-494593', '6524'
);

INSERT INTO SPELERS VALUES (
 28, 'Cools', 'C', NULL, '1963-06-22', 'V', 1983, 'Oudegracht',
 '10', '1294QK', 'Leiden', '010-659599', NULL
);

INSERT INTO SPELERS VALUES (
 95, 'Meuleman', 'P', NULL, '1963-05-14', 'M', 1972, 'Hoofdweg',
 '33a', '5746OP', 'Voorburg', '070-867564', NULL
);



This code doesn't work.



Can someone tell me how I can adjust the syntax of the code and, in general:
how can I convert SQL code for Microsoft SQL Server 2000 to SQL code for
PostgreSQL?



Thanks in advance



best regards

William Windels













Re: [SQL] SQL to list databases?

2003-01-24 Thread Guy Fraser
Hi

To make it easier to do this in SQL you can create a view like this :

CREATE VIEW db_list AS
  SELECT d.datname as Name,
u.usename as Owner,
pg_catalog.pg_encoding_to_char(d.encoding) as Encoding
  FROM pg_catalog.pg_database d
LEFT JOIN pg_catalog.pg_user u ON d.datdba = u.usesysid
  ORDER BY 1;

Note: the select statement comes from the post I am replying to.

Then all you have to do is :

select * from db_list;

For example this is my output :

foobar=# select * from db_list;
   Name| Owner | Encoding
---+---+---
 foobar| turk  | SQL_ASCII
 template0 | pgsql | SQL_ASCII
 template1 | pgsql | SQL_ASCII
(3 rows)

Or :

foobar=# select Name,Owner from db_list where Owner != 'pgsql';
  Name  | Owner
+---
 foobar | turk
(1 row)

Using psql -E {database} interactively

Or

psql -E -c "\{command}" {database}

Example:

user@host:~$ psql -E -c "\dt" template1
* QUERY **
SELECT n.nspname as Schema,
  c.relname as Name,
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 
'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as Type,
  u.usename as Owner
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_user u ON u.usesysid = c.relowner
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','')
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**

You can collect the SQL for other helpful commands and build views like above, 
then you can query the view for more specific information.

I hope this is helpful.

Guy
PS: If you create these views in template1 before you create your other 
databases, these views will be included in new databases automatically.


Larry Rosenman wrote:


--On Thursday, January 23, 2003 12:56:50 -0600 Ben Siders 
[EMAIL PROTECTED] wrote:


Is there a query that will return all the databases available, similar to
what psql -l does?


$ psql -E -l
* QUERY **
SELECT d.datname as Name,
  u.usename as Owner,
  pg_catalog.pg_encoding_to_char(d.encoding) as Encoding
FROM pg_catalog.pg_database d
 LEFT JOIN pg_catalog.pg_user u ON d.datdba = u.usesysid
ORDER BY 1;
**
















Re: [SQL] Scheduling Events?

2003-01-24 Thread Guy Fraser
Hi

I would agree that cron is probably the best solution.

You could have cron perform a query that has a trigger and performs all the 
tasks you need done. You could also create a trigger on other queries that 
would perform the other things as well, but make sure it isn't a heavily used 
query, but instead a query that is run hourly or daily. As a backup for cron, 
you could manually, or using anacron or something similar, run the query cron 
should run on a regular basis; but you should make sure your trigger keeps an 
entry in your database letting the other queries know when the update is 
started and when it has finished. Using this check ensures you don't get 
overlapping updates, and can also give you a clue to how much time the 
updates are taking and possibly alert you to a hung update.
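A minimal sketch of the cron side, assuming a maintenance function and
database of your own (run_maintenance and mydb are placeholders):

# run the maintenance query at five past every hour
5 * * * *  psql -c "SELECT run_maintenance();" mydb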

Guy




Re: [SQL] Getting multiple rows in plpgsql function

2003-01-24 Thread Guy Fraser
NOTE: This is a feature in 7.3 (it was either added or fixed), so you will not 
be able to do this unless you are using version 7.3. Remember to back up with 
pg_dumpall before you upgrade.

This is a sample sent to me earlier this week that iterates over an integer array:

Cut Here
CREATE TYPE group_view AS (grosysid int4, groname name, usesysid int4, usename 
name);

CREATE OR REPLACE FUNCTION expand_groups() RETURNS SETOF group_view AS '
DECLARE
  rec record;
  groview record;
  low int;
  high int;
BEGIN
  FOR rec IN SELECT grosysid FROM pg_group LOOP
SELECT INTO low
      replace(split_part(array_dims(grolist),'':'',1),''['','''')::int
  FROM pg_group WHERE grosysid = rec.grosysid;
IF low IS NULL THEN
  low := 1;
  high := 1;
ELSE
  SELECT INTO high
        replace(split_part(array_dims(grolist),'':'',2),'']'','''')::int
FROM pg_group WHERE grosysid = rec.grosysid;
  IF high IS NULL THEN
high := 1;
  END IF;
END IF;

FOR i IN low..high LOOP
  SELECT INTO groview g.grosysid, g.groname, s.usesysid, s.usename
FROM pg_shadow s join pg_group g on s.usesysid = g.grolist[i]
WHERE grosysid = rec.grosysid;
  RETURN NEXT groview;
END LOOP;
  END LOOP;
  RETURN;
END;
' LANGUAGE 'plpgsql' WITH ( iscachable, isstrict );

CREATE VIEW groupview AS SELECT * FROM expand_groups();
Cut Here
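As with any set-returning function, it can also be queried directly in FROM,
without the view:

SELECT * FROM expand_groups();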

One of the tricks is that you apparently need to use the CREATE TYPE commands 
to define the returned result. The view at the end just makes queries look 
like a table is being queried rather than a function.

I hope this helps.

Roberto Mello wrote:
On Fri, Jan 24, 2003 at 11:39:07AM -0800, David Durst wrote:


I am wondering how you would handle a select that returns multiple rows
in a plpgsql function?

In other words lets say I wanted to iterate through the results in
the function.



There are examples in the PL/pgSQL documentation that show you how to do it.

-Roberto








[SQL] Could someone help me fix my array_list function?

2003-01-20 Thread Guy Fraser
Hi

I am trying to write a function to step through an array and output each value 
as a set {list}, I think.

This is what I want to do:

select attribute,array_list(values,1,sizeof(values)) as value from av_list;

Turn :
 attr6 | {val3,val7,val4,val5}

Into :
 attr6 | val3
 attr6 | val7
 attr6 | val4
 attr6 | val5

Below I have included my functions, a test query, a table definition
and some sample data.

If anyone already has a function to do this I would be elated.

Note: text array_dims(array[]); function existed on the machine I started this 
on, but does not exist on my machine at home. It outputs a text value like 
'[1:1]' when there is only one item in the array and '[1:6]' when there are six 
items. My functions expect that function to exist.


Any help would be appreciated.

Guy

The entire selection below can be pasted into a shell; it will create a test 
database testdb, add plpgsql to the database, then create the functions and a 
populated table, before running a test query.

---%...Cut Here...
createdb testdb
createlang plpgsql testdb
echo 
--###Start of Functions###
-- Array dimension functions.
--
-- Throw away old version of function
DROP FUNCTION array_diml(text[]);
--
-- Return the start 'left' dimension for the text array.
CREATE FUNCTION array_diml(text[])
	RETURNS int2
	AS
'select int2(ltrim(rtrim(rtrim(array_dims($1),\']0123456789\'),\':\'),\'[\')) 
AS RESULT;'
	LANGUAGE sql
	WITH (iscachable,isstrict)
;
--
-- Throw away old version of function
DROP FUNCTION array_dimr(text[]);
--
-- Return the end 'right' dimension for the text array.
CREATE FUNCTION array_dimr(text[])
	RETURNS int2
	AS 'select 
int2(rtrim(ltrim(ltrim(array_dims($1),\'[0123456789\'),\':\'),\']\')) AS RESULT;'
	LANGUAGE sql
	WITH (iscachable,isstrict)
;
--
-- Throw away old version of function
DROP FUNCTION array_list(text[],smallint);
--
-- Iterate array and post results
CREATE FUNCTION array_list(text[],smallint)
RETURNS SETOF text AS '
DECLARE
  inarray ALIAS FOR $1;
  dim ALIAS FOR $2;
BEGIN
  FOR counter IN 1..dim LOOP
RAISE NOTICE ''Getting element % of %'',counter,inarray;
  RETURN inarray[counter];
 END LOOP;
END;
'
LANGUAGE 'plpgsql';
--###End of Functions###

--###Start of test query###
--
-- Get a list with each destination for each mailbox
SELECT a_mailbox,
  array_list(a_destination,
array_dimr(a_destination))
FROM mail_aliases;
--###End of test query###

--###Start of table and sample data###
DROP TABLE mail_aliases;
CREATE TABLE mail_aliases(
  a_mailbox text,
  a_destination text[]
);
COPY mail_aliases FROM stdin USING DELIMITERS ':';
alias1:{dest1}
alias2:{dest2,dest1}
alias3:{dest3,dest4}
alias4:{dest3,dest4,dest5}
alias5:{dest6,dest7}
alias6:{dest3,dest7,dest4,dest5}
\.
--###End of table and sample data###

--###Start of test query###
--
-- Get a list with each destination for each mailbox
SELECT a_mailbox,
  array_list(a_destination,
array_dimr(a_destination))
FROM mail_aliases;
--###End of test query###
 | psql testdb
---%...Cut Here...
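For the record, the loop in array_list() as written exits on its first RETURN;
in a SETOF plpgsql function each element has to be emitted with RETURN NEXT and
the function finished with a bare RETURN. A sketch of the likely fix:

CREATE OR REPLACE FUNCTION array_list(text[],smallint)
RETURNS SETOF text AS '
DECLARE
  inarray ALIAS FOR $1;
  dim ALIAS FOR $2;
BEGIN
  FOR counter IN 1..dim LOOP
    -- emit one element per iteration instead of ending the function
    RETURN NEXT inarray[counter];
  END LOOP;
  RETURN;
END;
' LANGUAGE 'plpgsql';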


