Re: [SQL] What this parser mean?

2003-03-20 Thread David Witham
Hi Abdul,

You may have lost the '' around the date specification in the to_char function when 
forming the string in Java. Before submitting the query, check the contents of the 
query string to make sure you still have the '' marks.
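A quick way to see the failure mode (a Python sketch purely for illustration; the mask value and variable names here are assumed, not taken from the original post):

```python
# Sketch of how losing the single quotes around the to_char format mask
# changes the SQL text the server receives.
fmt = "DD-MM-YYYY"  # assumed format mask

bad = "select to_char(listdate," + fmt + ") from bizfavorite"
good = "select to_char(listdate,'" + fmt + "') from bizfavorite"

# Without the quotes the parser sees DD-MM-YYYY as an expression and
# trips over the "-" characters; with them it is a string literal.
assert "'DD-MM-YYYY'" not in bad
assert "'DD-MM-YYYY'" in good
```

Printing the final query string before executing it makes this kind of problem obvious immediately.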

Regards,
David Witham

-Original Message-
From: Abdul Wahab Dahalan [mailto:[EMAIL PROTECTED]
Sent: Friday, 21 March 2003 13:47
To: [EMAIL PROTECTED]
Subject: [SQL] What this parser mean?


I wrote a Java program to query data from 4 tables and the query
string is like:

querydata ="select bizfavorite.bizid,bizprofile.bizname,bizprofile.bizphone,bizprofile.bizfax,bizfavorite.searchtype,"
 + "bizfavorite.keyword,to_char(bizfavorite.listdate,'DD-MM-YYYY') as listdate,bizfavorite.offerid,offer.otype,offer.bizid as obizid,"
 + "to_char(offer.oposted_date,'DD-MM-YYYY') as oposted_date,to_char(offer.oexpiry_date,'DD-MM-YYYY') as oexpiry_date,"
 + "userprofile.username,userprofile.ufullname,userprofile.uemail"
 + " from bizfavorite join bizprofile using(bizid) join userprofile using(bizid) left join offer using(offerid)"
 + " where bizfavorite.username= ? and urole='1' order by listdate desc"
 + " limit " + recordPerpage + "," + beginRecord;

When I run this query, I get this message:

ERROR:  parser: parse error at or near "-"

but when I ran the same query from the psql command prompt it was fine and I got what I wanted:
b2bscm=> select
bizprofile.bizname,bizprofile.bizphone,bizprofile.bizfax,bizfavorite.searchtype,bizfavorite.keyword,to_char(bizfavorite.listdate,'DD-MM-YYYY')
as listdate,bizfavorite.offerid,offer.otype,offer.bizid as
obizid,to_char(offer.oposted_date,'DD-MM-YYYY') as
date,to_char(offer.oexpiry_date,'DD-MM-YYYY') as
oexpiry_date,userprofile.username,userprofile.ufullname,userprofile.uemail
from bizfavorite join bizprofile using(bizid) join userprofile
using(bizid) left join offer using(offerid) where bizfavorite.username=
'faiz' and urole='1' order by listdate desc limit 8,0;

Any help very much appreciated. Thanks.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] FUNCTIONS PROBLEM

2003-03-23 Thread David Witham
Hi Mario,

I have used a record type to do this:

create function myfunc() returns record as '
declare
    return_val  record;
    col1        int;
    col2        int;
    col3        real;
    col4        char;
begin
    col1 := 5;
    col2 := 10;
    col3 := 2.7;
    col4 := ''z'';

    select col1,col2,col3,col4 into return_val;
    return return_val;
end;
' language 'plpgsql';

When you call the function you need to specify the expected output:

select * from myfunc() as (val1 int, val2 int, val3 real, val4 char);

See the SELECT reference page in the documentation.

There are other ways (which may be better) to do this that don't require the output 
types to be specified with the query, but this is the one I got going first, so I stuck 
with it. Hope this helps.

Regards,
David Witham
Telephony Platforms Architect
Unidial

-Original Message-
From: Mario Alberto Soto Cordones [mailto:[EMAIL PROTECTED]
Sent: Friday, 21 March 2003 09:26
To: [EMAIL PROTECTED]
Subject: [SQL] FUNCTIONS PROBLEM
Importance: High


Hi.

I have a function and I need to return 4 fields, but it does not work.

Any ideas, please?

Thanks,

mario





Re: [SQL] Inquiry From Form [pgsql]

2003-07-02 Thread David Witham
Hi Chris,

You need to start a transaction explicitly:

Session 1:

buns=# begin;
BEGIN
buns=# create table test_table (col1 int);
CREATE TABLE

Session 2:

buns=# \d test_table
Did not find any relation named "test_table".

Session 1:

buns=# commit;
COMMIT

Session 2:

buns=# \d test_table
  Table "public.test_table"
 Column |  Type   | Modifiers 
+-+---
 col1   | integer | 

HTH.
David

-Original Message-
From: Chris Schneider [mailto:[EMAIL PROTECTED]
Sent: Thursday, 3 July 2003 05:26
To: [EMAIL PROTECTED]
Subject: [SQL] Inquiry From Form [pgsql]


I know this is basic, but I couldn't find it and am in a hurry to know the answer. When 
interfacing with PostgreSQL through psql, it appears that DML statements are 
auto-committed, that is, a change I make in one session is seen from another without 
the original session issuing a COMMIT. Is this a result of the psql interface, and if so, 
can it be turned off? Is PostgreSQL transactional in the sense that I can issue 
several DMLs and then ROLLBACK? If so, how? Thanks and sorry for the newbie question.



[SQL] Using a compound primary key

2003-07-27 Thread David Witham
Hi all,

I have a table containing tariff information. It has a primary key (and therefore 
unique index) of (tariff_type varchar(5),prefix varchar(12)) where tariff_type is a 
set of rates assigned to dialling prefixes.
e.g.
tariff_type prefix  rate
abc 44  $x
abc 441 $y
abc 61  $z
def 44  $a
def 441 $b
def 61  $c

and so on.

For a known tariff_type, I need to find the rate that has the longest matching 
dialling prefix. In the data above, if I made a phone call to +4412345678 using 
tariff_type abc, then I would want to retrieve the record abc,441,$y and not the 
record abc,44,$x. I do this currently by dividing up the phone number and using this 
query:

select * from tariff
where tariff_type = 'UIA'
and prefix in ('44','441','4412','44123','441234','4412345','44123456')
order by prefix desc limit 1;

The query doesn't use the primary key index as I might expect:

 Limit  (cost=98.88..98.88 rows=1 width=31)
   ->  Sort  (cost=98.88..98.89 rows=7 width=31)
 Sort Key: prefix
 ->  Seq Scan on tariff  (cost=0.00..98.78 rows=7 width=31)
   Filter: ((tariff_type = 'UIA'::character varying) AND ((prefix = 
'44'::character varying) OR (prefix = '441'::character varying) OR (prefix = 
'4412'::character varying) OR (prefix = '44123'::character varying) OR (prefix = 
'441234'::character varying) OR (prefix = '4412345'::character varying) OR (prefix = 
'44123456'::character varying)))

If I specify both parts of the key then it will, of course, use the index and cost 
very little:

select * from tariff
where tariff_type = 'UIA' and prefix = '441'
order by prefix desc limit 1;   
  QUERY PLAN   
   
--
 Limit  (cost=5.23..5.23 rows=1 width=31)
   ->  Sort  (cost=5.23..5.23 rows=1 width=31)
 Sort Key: prefix
 ->  Index Scan using tariff_ix2 on tariff  (cost=0.00..5.22 rows=1 width=31)
   Index Cond: ((tariff_type = 'UIA'::character varying) AND (prefix = 
'441'::character varying))

I have run this exact scenario on an Informix database, and its query planner is 
able to use the index:

QUERY:
--
select *
from tariff where tariff_type = 'SIL18'
and (prefix = '44' or prefix = '441' or prefix = '4412'
or prefix = '44123' or prefix = '441234' or prefix = '4412345'
or prefix = '44123456' ) order by prefix desc

Estimated Cost: 1
Estimated # of Rows Returned: 1

1) informix.tariff: INDEX PATH

(1) Index Keys: tariff_type prefix   (Key-First)
Lower Index Filter: informix.old_tariff.tariff_type = 'SIL18' 
Key-First Filters:  (((informix.old_tariff.prefix = '44' OR informix.old_tariff.prefix = '441' ) OR informix.old_tariff.prefix = '4412' ) OR informix.old_tariff.prefix = '44123' ) OR informix.old_tariff.prefix = '441234' ) OR informix.old_tariff.prefix = '4412345' ) OR informix.old_tariff.prefix = '44123456' ) )

Is there a way I can achieve the same result using PostgreSQL 7.3.2? I can add another 
index just on prefix and get a performance increase but it's still not as 
cost-efficient as using the primary index. Would it be more cost effective to do a 
bunch of individual queries for each length of prefix until I find one that matches? 
The average length of a prefix would probably be around 3 digits and I would need to 
start at 8 digits and work back to cover all possibilities.
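That per-length lookup idea can be sketched like this (a Python sketch with a hypothetical helper and toy data, not the production code):

```python
def prefixes_longest_first(number, max_len=8, min_len=2):
    """Generate candidate dialling prefixes of a phone number, longest first."""
    return [number[:n] for n in range(min(max_len, len(number)), min_len - 1, -1)]

# Querying one prefix length at a time, longest first, and stopping at the
# first hit returns the longest-match rate without the big IN list.
cands = prefixes_longest_first("44123456789")
assert cands[0] == "44123456"
assert cands[-1] == "44"

rates = {"44": "x", "441": "y"}  # toy tariff rows for one tariff_type
match = next((p for p in cands if p in rates), None)
assert match == "441"  # the longest matching prefix wins, as in the example data
```

Each candidate would become a single indexed `tariff_type = ? and prefix = ?` probe, stopping at the first row found.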

Thanks for any advice,
David Witham



Re: [SQL] Migrating Stored Procedures from MS SQL Server

2003-08-29 Thread David Witham



Hi Kumar,

You'll need to use PL/pgSQL for your stored procedures.

When I migrated from Informix I found it was pretty easy - just a syntactic conversion 
of the procedures and a few data types to rename. The tricky bit is when you 
have used an MS SQL Server concept that PostgreSQL doesn't support. You'll need 
to write some new code or at worst redesign your logic to work around the 
differences. I don't expect there would be automated tools that could do that 
for you.

Regards,
David

  -Original Message-
  From: Kumar [mailto:[EMAIL PROTECTED]
  Sent: Friday, 29 August 2003 15:24
  To: psql
  Subject: [SQL] Migrating Stored Procedures from MS SQL Server

  Dear Friends,

  I am about to do a migration of stored procedures from MS SQL Server to
  Postgres. Are there any free tools available for it?

  Shall I write a SQL function or a PL/pgSQL function for the stored
  procedures? Please advise me on this. Thanks in advance.

  Kumar


[SQL] Left outer join on multiple tables

2004-01-11 Thread David Witham
Hi all,

Is there a way to do left outer joins on more than 2 tables at once (3 in my case)? Or 
do I have to do the first join into a temp table and then another join from the temp 
table to the third table? I can't seem to work out the syntax from the User Guide.

Thanks,

David Witham
Telephony Platforms Architect
Unidial
Ph: 03 8628 3383
Fax: 03 8628 3399




[SQL] Transpose rows to columns

2004-01-12 Thread David Witham
Hi,

I have a query that returns data like this:

cust_id cust_name   month   costrevenue margin
991234  ABC 2003-07-01  10  15  5
991234  ABC 2003-08-01  11  17  6
991234  ABC 2003-09-01  12  19  7
991235  XYZ 2003-07-01  13  21  8
991235  XYZ 2003-08-01  12  19  7
991235  XYZ 2003-09-01  11  17  6

I want to turn it around so it displays like this:

991234,ABC,2003-07-01,10,15,5,2003-08-01,11,17,6,2003-09-01,12,19,7
991235,XYZ,2003-07-01,13,21,8,2003-08-01,12,19,7,2003-09-01,11,17,6

(I've used commas to shorten the layout for the example)

Does anyone have some ideas on how to do this?
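One approach, sketched client-side in Python rather than in SQL (toy data copied from the rows above):

```python
from itertools import groupby

# (cust_id, cust_name, month, cost, revenue, margin), ordered by cust_id, month
rows = [
    (991234, "ABC", "2003-07-01", 10, 15, 5),
    (991234, "ABC", "2003-08-01", 11, 17, 6),
    (991234, "ABC", "2003-09-01", 12, 19, 7),
    (991235, "XYZ", "2003-07-01", 13, 21, 8),
    (991235, "XYZ", "2003-08-01", 12, 19, 7),
    (991235, "XYZ", "2003-09-01", 11, 17, 6),
]

# Group consecutive rows by (cust_id, cust_name) and flatten each group's
# month/cost/revenue/margin values onto one comma-separated line.
lines = []
for (cust_id, cust_name), grp in groupby(rows, key=lambda r: r[:2]):
    cells = [str(cust_id), cust_name]
    for r in grp:
        cells.extend(str(v) for v in r[2:])
    lines.append(",".join(cells))

assert lines[0] == "991234,ABC,2003-07-01,10,15,5,2003-08-01,11,17,6,2003-09-01,12,19,7"
assert lines[1] == "991235,XYZ,2003-07-01,13,21,8,2003-08-01,12,19,7,2003-09-01,11,17,6"
```

This relies on the query already returning rows ordered by cust_id, month, so each customer's months come out in order.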

Thanks,

David Witham
Telephony Platforms Architect
Unidial
Ph: 03 8628 3383
Fax: 03 8628 3399




[SQL] Index question

2004-02-12 Thread David Witham
Hi all,

I have a table with around 3M records in it and a few indexes on it. One of them is on 
the day column. I get 10-20K new records a day. After running ANALYSE in psql I tried 
the following queries:

buns=# explain select count(*) from cdr where day >= '20040127';
QUERY PLAN
--
 Aggregate  (cost=85596.50..85596.50 rows=1 width=0)
   ->  Seq Scan on cdr  (cost=0.00..85053.86 rows=217055 width=0)
 Filter: ("day" >= '2004-01-27'::date)
(3 rows)

buns=# explain select count(*) from cdr where day = '20040127'; 
   QUERY PLAN   

 Aggregate  (cost=12950.10..12950.10 rows=1 width=0)
   ->  Index Scan using cdr_ix1 on cdr  (cost=0.00..12928.00 rows=8839 width=0)
 Index Cond: ("day" = '2004-01-27'::date)
(3 rows)

buns=# explain select count(*) from cdr where day between '20040127' and current_date;
   QUERY PLAN  
 

 Aggregate  (cost=20129.91..20129.91 rows=1 width=0)
   ->  Index Scan using cdr_ix1 on cdr  (cost=0.00..20095.66 rows=13699 width=0)
 Index Cond: (("day" >= '2004-01-27'::date) AND ("day" <= ('now'::text)::date))
(3 rows)

I understand that selecting count(*) will involve a scan at some stage, but I was 
surprised that the index wasn't used in the >= case, but was used in the between case.

Why is this so? Do I need to ANALYSE some more or is this just the way the query 
planner works?

Thanks,
David

David Witham
Telephony Platforms Architect
Unidial
Ph: 03 8628 3383
Fax: 03 8628 3399




Re: [SQL] Index question

2004-02-12 Thread David Witham
There are 18321 records for 20040127 and so the estimate of 8839 for the = case is low 
but it still does the right thing.

There are 227197 records between '20040127' and current_date so the estimate in the >= 
case is accurate but the estimate for the between case is an order of magnitude too 
low. However, it used the index I wanted and the >= case didn't.

Regards,
David

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, 13 February 2004 16:38
To: David Witham
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Index question 


"David Witham" <[EMAIL PROTECTED]> writes:
> I understand that selecting count(*) will involve a scan at some
> stage, but I was surprised that the index wasn't used in the >= case,
> but was used in the between case.

Given the estimated row counts in your examples, the planner's choices
are not surprising.  You have not given us any information on whether
those estimates are accurate.

regards, tom lane



[SQL] Indexes and statistics

2004-02-17 Thread David Witham
Hi all,

This is a further post from last week. I've got a table of phone call detail records.

buns=# select count(*) from cdr; 
  count  
-
 2800653
(1 row)

One of the customers is quite large (8.3% of the records):

buns=# select count(*) from cdr where cust_id = 99201110;
 count  

 231889
(1 row)

I have indexes on cust_id (integer) and bill_id (integer). If I try to do a query on 
that customer it doesn't use the index:

buns=# explain analyse select count(*) from cdr where cust_id = 99201110 and bill_id 
is null;
   QUERY PLAN  
  
-
 Aggregate  (cost=87082.81..87082.81 rows=1 width=0) (actual time=82279.63..82279.63 
rows=1 loops=1)
   ->  Seq Scan on cdr  (cost=0.00..87037.71 rows=18041 width=0) (actual 
time=82279.61..82279.61 rows=0 loops=1)
 Filter: ((cust_id = 99201110) AND (bill_id IS NULL))
 Total runtime: 82280.19 msec
(4 rows)

I tried this:

alter table cdr alter column cust_id set statistics 1000;
alter table cdr alter column bill_id set statistics 1000;
analyze verbose;

Then I ran the query again but I still got the same result.

Then I tried disabling sequential scans in postgresql.conf, restarted the postmaster 
and did the query again:

buns=# explain analyse select count(*) from cdr where cust_id = 99201110 and bill_id 
is null;
QUERY PLAN 
   
--
 Aggregate  (cost=913498.60..913498.60 rows=1 width=0) (actual time=48387.91..48387.91 
rows=1 loops=1)
   ->  Index Scan using cdr_ix3 on cdr  (cost=0.00..913453.49 rows=18041 width=0) 
(actual time=48387.89..48387.89 rows=0 loops=1)
 Index Cond: (cust_id = 99201110)
 Filter: (bill_id IS NULL)
 Total runtime: 48388.47 msec
(5 rows)

The computed cost of using the index was a factor of 10 higher, which I presume is why 
the query planner wasn't using it, but the query ran in half the time. So I guess I 
need to know how to alter the statistics collection so that the index will get used. I 
gather that an index column value occurring in more than a few percent of the table causes 
the query planner not to use the index. Does that mean I will never be able to get the 
query planner to use the cust_id index for that customer, or can I tune some 
parameters to alter that?
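The planner's reluctance can be seen with some back-of-envelope arithmetic (a sketch only; rows_per_page is an assumed figure, and this is not the planner's actual cost formula):

```python
total_rows = 2800653      # count(*) from cdr
matching = 231889         # rows for the large customer, ~8.3%
rows_per_page = 50        # assumed; depends on the width of a cdr row
pages = total_rows / rows_per_page

# If the matching rows are scattered uniformly, the expected fraction of
# heap pages an index scan must touch is 1 - (1 - 1/pages)**matching,
# which for 8.3% selectivity is essentially all of them.
frac = 1 - (1 - 1 / pages) ** matching
assert frac > 0.98
```

Under that uniform-scatter assumption a full sequential scan looks cheaper than an index scan, which is why clustering of the customer's rows (or the table being cached in RAM) changes the real-world outcome.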

Any suggestions appreciated.

Thanks,
David

David Witham
Telephony Platforms Architect
Unidial, Australia





Re: [SQL] Indexes and statistics

2004-02-17 Thread David Witham
Tom,

I'm running PostgreSQL 7.3.2 on Red Hat Linux 7.3 with 512Mb RAM.

The table definition is:

Table "public.cdr"
Column | Type  | Modifiers 
---+---+---
 carrier_id| integer   | not null
 file_id   | integer   | not null
 service_num   | character varying(10) | not null
 day   | date  | not null
 time  | integer   | not null
 destination   | character varying(20) | not null
 duration  | integer   | not null
 charge_wholesale  | numeric(8,2)  | not null
 charge_band_id| integer   | 
 charge_retail | numeric(8,2)  | not null
 rate_plan_id  | integer   | not null
 item_code | integer   | not null
 cust_id   | integer   | not null
 bill_id   | integer   | 
 prefix| character varying(12) | 
 charge_wholesale_calc | numeric(8,2)  | 
Indexes: cdr_ix1 btree ("day"),
 cdr_ix2 btree (service_num),
 cdr_ix3 btree (cust_id),
 cdr_ix4 btree (bill_id),
 cdr_ix5 btree (carrier_id),
 cdr_ix6 btree (file_id)

Does this make it a "wide" table?

The data arrives ordered by service_num, day, time. This customer has one primary 
service_num that most of the calls are made from, so each day a clump of CDRs is 
loaded for that customer, interspersed with CDRs from all the other customers. 
The distribution of records for a service_num is therefore clumpy but spread evenly 
throughout the table. For a customer with a single primary number, this applies to 
the customer as a whole. For a customer with many service_nums the result is less 
certain, depending on whether their service_nums arrive sequentially or not.

I hope this makes sense. Does it help any?

Thanks,
David

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 18 February 2004 16:10
To: David Witham
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Indexes and statistics 


"David Witham" <[EMAIL PROTECTED]> writes:
> One of the customers is quite large (8.3% of the records):

Hmm.  Unless your rows are quite wide, a random sampling of 8.3% of the
table would be expected to visit every page of the table, probably
several times.  So the planner's cost estimates do not seem out of line
to me; an indexscan *should* be slow.  The first question to ask is why
the deviation from reality.  Are the rows for that customer ID likely to
be physically concentrated into a limited number of physical pages?
Do you have so much RAM that the whole table got swapped in, eliminating
the extra I/O that the planner is expecting?

regards, tom lane



[SQL] ANALYZE error

2004-03-08 Thread David Witham
Hi all,

I run a cron job at 2am every night to do an ANALYZE on my whole database:

su - postgres -c 'echo analyze | /usr/bin/psql buns | grep -v ANALYZE'

It's a batch-oriented database so there is no user activity on it at this time. The 
only other job that could be running is a vacuum I run at 1am, but I don't think it 
would take an hour to run. Occasionally I get this error message from the analyze job:

ERROR:  simple_heap_update: tuple concurrently updated

What does this mean and should I do anything about it?

Thanks,

David Witham
Telephony Platforms Architect
Unidial
Australia




[SQL] Sorting an aggregated column

2004-03-22 Thread David Witham
Hi all,

I ran these commands:

create temp table dw_survey(survey int,question int,answer_bool boolean,answer_int int,answer_char varchar);
insert into dw_survey values(1,1,'t',null,null);
insert into dw_survey values(1,2,'f',null,null);
insert into dw_survey values(1,3,'t',null,null);
insert into dw_survey values(1,4,null,123,null);
insert into dw_survey values(1,5,null,21,null);
insert into dw_survey values(1,6,null,1,null);
insert into dw_survey values(2,1,'t',null,null);
insert into dw_survey values(2,2,'t',null,null);
insert into dw_survey values(2,3,'t',null,null);
insert into dw_survey values(2,4,null,3,null);
insert into dw_survey values(2,5,null,2,null);
insert into dw_survey values(2,6,null,1,null);

and I now have a table with data like this:

 Table "pg_temp_5.dw_survey"
   Column|   Type| Modifiers 
-+---+---
 survey  | integer   | 
 question| integer   | 
 answer_bool | boolean   | 
 answer_int  | integer   | 
 answer_char | character varying | 

 survey | question | answer_bool | answer_int | answer_char 
+--+-++-
  1 |1 | t   || 
  1 |2 | f   || 
  1 |3 | t   || 
  1 |4 | |123 | 
  1 |5 | | 21 | 
  1 |6 | |  1 | 
  2 |1 | t   || 
  2 |2 | t   || 
  2 |3 | t   || 
  2 |4 | |  3 | 
  2 |5 | |  2 | 
  2 |6 | |  1 | 

Answers to a survey can be one of three types - boolean, integer or varchar.
There can be any number of questions in a survey.

I want to summarise the results of the survey like this:

survey | answer1 | answer2 | answer3 | answer4 | answer5 | answer6
---+-+-+-+-+-+
1  |t|f|t|   123   |21   |1
2  |t|t|t| 3   | 2   |1

Or even like this:

survey | answers
---+---
1  | t,f,t,123,21,1
2  | t,t,t,3,2,1

In both cases the answers must be ordered by the "question" column.

I can do the second case with a user-defined string-concatenating aggregate:

select survey, list (
case when answer_bool = 't' then 'y'::varchar
when answer_bool = 'f' then 'n'::varchar
when answer_int is not null then answer_int::varchar
when answer_char is not null then answer_char::varchar
end
)
from dw_survey
group by survey
order by survey;

 survey |list 
+-
  1 | y, n, y, 123, 21, 1
  2 | y, y, y, 3, 2, 1

This output is correct in this case but there is no guarantee that the answers will 
come out in "question" order. I can't see how to incorporate sorting by the "question" 
column using this approach.
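The guarantee I'm after, sketched client-side in Python (toy rows, deliberately out of question order):

```python
from itertools import groupby

# (survey, question, answer) rows
rows = [(1, 2, "n"), (1, 1, "y"), (1, 3, "y"), (2, 1, "y"), (2, 3, "y"), (2, 2, "y")]

# Sort by (survey, question) BEFORE concatenating per survey: this is the
# ordering that a plain GROUP BY feeding a concatenating aggregate
# does not promise.
rows.sort()
result = {s: ", ".join(a for _, _, a in grp)
          for s, grp in groupby(rows, key=lambda r: r[0])}

assert result == {1: "y, n, y", 2: "y, y, y"}
```

The point is that the aggregate's inputs must already arrive in question order; the aggregate itself cannot re-sort them.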

Can anyone suggest either how to improve my current approach or a different approach 
to get the desired result?

Thanks,

David Witham
Telephony Platforms Architect
Unidial Pty Ltd
Level 1, 174 Peel St North Melbourne,VIC 3051
Australia
Ph: 03 8628 3383
Fax: 03 8628 3399



Re: [SQL] Sorting an aggregated column

2004-03-22 Thread David Witham
Hi Tom,

Thanks for the advice. I was planning to upgrade from 7.3.2 to 7.4 soon but this adds 
a bit more impetus.

Under 7.3.2 I rewrote the query as your example suggested:

explain select survey, list (
case when answer_bool = 't' then 'y'::varchar
when answer_bool = 'f' then 'n'::varchar
when answer_int is not null then answer_int::varchar
when answer_char is not null then answer_char::varchar
end
) as answers
from (select survey, answer_bool, answer_int, answer_char from dw_survey order by 
survey,question)
as dws
group by survey
order by survey;


 Aggregate  (cost=122.16..129.66 rows=100 width=45)
   ->  Group  (cost=122.16..127.16 rows=1000 width=45)
 ->  Sort  (cost=122.16..124.66 rows=1000 width=45)
   Sort Key: survey
   ->  Subquery Scan dws  (cost=69.83..72.33 rows=1000 width=45)
 ->  Sort  (cost=69.83..72.33 rows=1000 width=45)
   Sort Key: survey, question
   ->  Seq Scan on dw_survey  (cost=0.00..20.00 rows=1000 
width=45)

So I see that there is the extra sort above the sub-query that wouldn't be there using 
7.4. Are you saying that the sort by survey after the sort by survey,question would 
potentially reorder the records initially sorted by survey,question? If the sub-query 
had already sorted by survey (along with question), would the sort by survey bother to 
reorder any of the rows? E.g. if the subselect returned (assuming 1 answer from the 3 
answer columns):
1,t
1,f
1,t
1,123
1,21
1,1
2,t
2,t
2,t
2,3
2,2
2,1
would the sort by survey potentially reorder these rows even though they don't need to 
be reordered?

Regards,
David

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 23 March 2004 16:17
To: David Witham
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Sorting an aggregated column 


"David Witham" <[EMAIL PROTECTED]> writes:
> This output is correct in this case but there is no guarantee that the
> answers will come out in "question" order. I can't see how to
> incorporate sorting by the "question" column using this approach.

As of PG 7.4 you can reliably use a sorted sub-select to determine the
order of inputs to a user-defined aggregate function.  See for instance
http://archives.postgresql.org/pgsql-general/2003-02/msg00917.php

regards, tom lane
