[SQL] fmgr_info: function 15238119: cache lookup failed

2003-11-19 Thread Iain

Hi All,

I have a problem with a before update trigger that I was wondering if
someone might be able to give me some help with.

DB version is 7.2. Here is a cut-down version of an already simple trigger
and function that is returning the error:

    fmgr_info: function 15238119: cache lookup failed

CREATE FUNCTION f_hatyuu_detail() RETURNS opaque AS '
BEGIN
    raise notice ''f_hatyuu_detail started  TG_WHEN= %  TG_OP= %'', TG_WHEN, TG_OP;
    if TG_WHEN <> ''BEFORE'' or TG_OP <> ''UPDATE'' then
        return null;
    end if;

    raise notice ''debug 3'';
    return new;
    raise notice ''debug 4'';
END;
' LANGUAGE 'plpgsql';

drop trigger TG_D_HATYUU_DETAIL_B on D_HATYUU_DETAIL;
create trigger TG_D_HATYUU_DETAIL_B BEFORE update on D_HATYUU_DETAIL
    for each row execute procedure f_hatyuu_detail();

When I do an UPDATE against the table, I get the abovementioned error on the
RETURN NEW; the debug 4 message is not displayed. If I return NULL it
operates as expected (0 rows updated). Did I miss something here?

regards
Iain

Re: [SQL] fmgr_info: function 15238119: cache lookup failed

2003-11-19 Thread Iain

Ooops! I fixed it. Somewhere along the line the name of the trigger had
changed, and there was an old, probably invalidated trigger left on the
table. Once I dropped that it was OK.

Sorry for the inconvenience!

Regards
Iain
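For anyone hitting the same error: one quick way to look for stale triggers like this is to query the system catalogs directly. This is a sketch against the 7.x catalogs; the table name is illustrative.

```sql
SELECT t.tgname, t.tgfoid, p.proname
  FROM pg_trigger t
  JOIN pg_class c ON c.oid = t.tgrelid
  LEFT JOIN pg_proc p ON p.oid = t.tgfoid
 WHERE c.relname = 'd_hatyuu_detail';
-- a row where proname is NULL points at a trigger whose function
-- no longer exists (the "cache lookup failed" culprit)
```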
  - Original Message -
  From: Iain
  To: [EMAIL PROTECTED]
  Sent: Thursday, November 20, 2003 11:37 AM
  Subject: fmgr_info: function 15238119: cache lookup failed

Re: [SQL] Bug: Sequence generator insert

2003-11-30 Thread Iain
Hi Colin,

You have done a nice job of demonstrating the documented behaviour of
sequences.

Sequences are designed for speed and high concurrency. They do not guarantee
contiguous numbers and, as I understand it, sequence increments are not
rolled back with the enclosing transaction.
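That non-transactional behaviour is easy to demonstrate; a minimal sketch (the sequence name is illustrative):

```sql
CREATE SEQUENCE id_seq;
BEGIN;
SELECT nextval('id_seq');   -- returns 1
ROLLBACK;
SELECT nextval('id_seq');   -- returns 2: the increment survived the rollback
```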

regards
Iain
- Original Message - 
From: "Burr, Colin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, November 26, 2003 8:37 AM
Subject: [SQL] Bug: Sequence generator insert


> Dear Sir,
> I found a sequence generator software bug associated with duplicate key
> inserts that may be of interest to you.
>
> I first created a table with a primary key based on a sequence generator.
> The following script provides an example.
>
> CREATE SEQUENCE "id_seq" start 1 increment 1 maxvalue 2147483647
> minvalue 1 cache 1;
> REVOKE ALL on "id_seq" from PUBLIC;
> GRANT ALL on "id_seq" to "administrator";
>
> CREATE TABLE "example" (
> "id" integer DEFAULT nextval('id_seq'::text) NOT NULL,
> "colum_name" character varying(15) NOT NULL,
> Constraint "ex_pkey" Primary Key ("id")
> );
> REVOKE ALL on "example" from PUBLIC;
> GRANT ALL on "example" to "administrator";
>
> When I tried to insert a record into the table where a duplicate record
> already exists, predictably, the insert fails and the following error is
> generated:
>
>   'ERROR:  Cannot insert a duplicate key into unique index ex_pkey'
>
> However, even though the new record failed to be inserted into the table,
> the sequence generator was still updated.
>
> The sequence generator should only be updated if the record is
> successfully inserted into the table, and should not be updated if the
> insert fails.
>
>
> Sincerely yours,
>
> Colin Burr
>
>
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] Bug: Sequence generator insert

2003-11-30 Thread Iain
I just re-read your message, and it occurred to me that for the second
insert you may have provided an explicit value for the 'id', is that right?
You may need to give more detail about the inserts. I'd be mildly surprised,
but not concerned, if the insert that failed used an explicit value for the
id (i.e. didn't use the default) but still used (and therefore incremented)
the sequence.

Regards
Iain
- Original Message -
From: "Burr, Colin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, November 26, 2003 8:37 AM
Subject: [SQL] Bug: Sequence generator insert


Re: [SQL] Validity check in to_date?

2003-12-02 Thread Iain
I've been following this thread with interest because I have a related
problem. Basically we are storing dates in CHAR fields, with all the
associated problems. I'd like to change that, but converting everything to
date fields isn't practical for now, so as a stopgap solution I want to
provide some validation at the database level.

I tried:

create domain ymdtest2 as char(10)   constraint valid_date check
(VALUE::DATE);

But it gives this error:

ERROR:  cannot cast type character to date

I also tried:

create domain test char(10) check (CAST(VALUE AS DATE));

and it gives the same error. I don't need to actually modify VALUE, but I'd
like to have it so that any attempt to insert an invalid date will cause the
transaction to fail. This is OK:

SELECT CAST('2003-3-31' AS DATE);

So it is possible to cast character data to date (as we all know); the
problem seems to apply only to check constraints.

Now, I am wondering if there is a clever workaround for this? It escapes me
for now anyway.
Regards
Iain

- Original Message - 
From: "Karel Zak" <[EMAIL PROTECTED]>
To: "Christoph Haller" <[EMAIL PROTECTED]>
Cc: "Alexander M. Pravking" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Tuesday, December 02, 2003 6:57 PM
Subject: Re: [SQL] Validity check in to_date?


> On Tue, Dec 02, 2003 at 10:27:04AM +0100, Christoph Haller wrote:
> > >
> > > I just discovered that to_date() function does not check if supplied
> > > date is correct, giving surprising (at least for me) results:
> > >
> > > fduch=# SELECT to_date('31.11.2003', 'DD.MM.YYYY');
> > >   to_date
> > > ------------
> > >  2003-12-01
> > >
> > > or even
> > >
> > > fduch=# SELECT to_date('123.45.2003', 'DD.MM.YYYY');
> > >   to_date
> > > ------------
> > >  2007-01-03
> > >
> > > to_timestamp() seems to work the same way. It's probably useful
> > > sometimes, but not in my case... Is this how it's supposed to work?
> > > If so, how can I do such a validity check?
> > > If not, has something changed in 7.4?
>
>  No change in 7.4. Maybe in 7.5 or in some 7.4.x.
>
> > As far as I know these results are correct in terms of the underlying
> > C-library function mktime(). This function is intended to be used when
> > adding/subtracting intervals from a given timestamp.
> > I don't know of any postgres function doing the check you're looking for.
> > But I can't believe this is the first time this topic is brought up.
> > You may search the archives on "date plausibility" and related terms.
>
>  The other PostgreSQL code paths that fully parse (i.e. range-check)
>  dates/times are less forgiving:
>
>  # select '31.11.2003'::date;
>  ERROR:  date/time field value out of range: "31.11.2003"
>
> Karel
>
> -- 
>  Karel Zak  <[EMAIL PROTECTED]>
>  http://home.zf.jcu.cz/~zakkr/
>


Re: [SQL] Validity check in to_date?

2003-12-02 Thread Iain
Sweeet!

This is what I ended up with:

create domain testdate char(10) check (VALUE::text::date = VALUE);

(With check (VALUE::text::date IS NOT NULL) it wasn't possible to insert a
NULL date.)

I'm pretty happy with this as I didn't even have to use
to_char(VALUE::text::date, 'YYYY-MM-DD') for the comparison, since the
date format I am using matches the "datestyle" setting.
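If the column format ever needs to be pinned down rather than relying on the datestyle setting, the comparison can be made explicit with to_char. A sketch (the domain name is illustrative):

```sql
create domain testdate2 as char(10)
    check (to_char(VALUE::text::date, 'YYYY-MM-DD') = VALUE);
-- NULL still passes, since a NULL check result does not fail the constraint
```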

It seems to accept any valid date, as well as NULL, while rejecting inputs
such as:

insert into test1 values ('2002-03-32');
insert into test1 values ('200-03-22');
insert into test1 values ('2002- 3-22');
insert into test1 values ('2002-03-2');
insert into test1 values ('2002-03- 2');
insert into test1 values ('2002-3-2');
insert into test1 values ('2002-14-02');
insert into test1 values ('');

insert into test1 values ('2002/03/22');
insert into test1 values ('2002/03/32');
insert into test1 values ('200/03/22');
insert into test1 values ('2002/ 3/22');
insert into test1 values ('2002/03/2');
insert into test1 values ('2002/03/ 2');
insert into test1 values ('2002/3/2');
insert into test1 values ('2002/14/02');

It's no silk purse, but it's short and sweet and I'm satisfied.
Thanks guys.
Regards
Iain

- Original Message - 
From: "Stephan Szabo" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: "Karel Zak" <[EMAIL PROTECTED]>; "Christoph Haller" <[EMAIL PROTECTED]>;
"Alexander M. Pravking" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, December 03, 2003 1:15 AM
Subject: Re: [SQL] Validity check in to_date?


> On Tue, 2 Dec 2003, Iain wrote:
>
> > I've been following this thread with interest because I have a related
> > problem. Basically we are storing dates in CHAR fields, with all the
> > associated problems. I'd like to change that, but converting everything
> > to date fields isn't practical for now, so as a stopgap solution I want
> > to provide some validation at the database level.
> >
> > I tried:
> >
> > create domain ymdtest2 as char(10)   constraint valid_date check
> > (VALUE::DATE);
> >
> > But it gives this error:
> >
> > ERROR:  cannot cast type character to date
> >
> > I also tried:
> >
> > create domain test char(10) check (CAST(VALUE AS DATE));
>
> I'd try CAST(CAST(VALUE AS TEXT) AS DATE)
>
> There's a text->date conversion, but not one from character(n).
>
>


Re: [SQL] migration between databases and versions

2003-12-17 Thread Iain
Hi Mohan,

I'm running 7.4 on a reasonably old version of Turbo Linux, on an even
older, creakier machine, and it seems fine so far. I suspect I may be having
some problems due to the fact that the development environment is 3 Linux
servers running under Virtual PC on one Windows 2000 box. :)

I'm facing a similar data migration issue to you, but possibly more complex
from what I read. So far my data migration has been from the old production
system to the new development system in order to test the procedures and
generate test data. My approach has been to dump tables individually, then
edit the dump file to change the table name and delete things like
constraints and index definitions. I then load the old table as person_old
(to use your example) and create a bulk insert statement to copy data
from the old table format to the new one. My system is a major
redevelopment, so I wouldn't expect pg_dump to handle it all for me
automatically.

My understanding is that you will have problems loading data into a table
that does not have exactly the same definition as the one used to create the
dump. For example, if the 2 tables have the same columns but they were added
in a different order. It may be that using different dump options could get
around this though - such as generating the dump as insert statements.
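One way to sidestep the column-order and extra-column problems entirely is to load the old table under a different name and copy it with an explicit column list. A sketch using Mohan's example table (this assumes column C is nullable or has a default):

```sql
-- person_old holds the data restored from the old dump
INSERT INTO person (a, b)
SELECT a, b FROM person_old;
-- column c is left to its default (or NULL)
```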

If you come up with any interesting information as you progress with this,
I'd be happy to hear about it.

Regards
Iain
- Original Message - 
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, December 18, 2003 3:58 AM
Subject: [SQL] migration between databases and versions


> I have been running Red Hat Linux 7.3 on my prod server, with
> postgres 7.2 on it.
> I would like to upgrade it to the newest version of postgres.
> 1) Is there a version for Red Hat 7.3?
> 2) How stable is postgres 7.4?
> I have another critical issue.
> My database has now changed. In our new release of the product we have
> added a few more columns to some of the tables in the database. But the
> production database does not have these fields. All the fields in the
> tables in production are also in the development version, but there are
> some extra fields in the development database that are not in production.
> There is a lot of data already in the production database.
> Now I need to move the data from the old production database to the new
> one, which has certain extra fields. Please let me know if it is possible
> to do this migration, and if so, how? Using pg_dump?
>
> Eg: Old database with data
>has a table named 'person' with fields A,B
>  New database to be put under production
>also has a table named 'person' with fields A,B,C
>
> If I do a pg_dump from old to new, does it fill A,B and leave C alone,
> or will it give an error given that C is not a nullable field?
>
> Please help me
>
> --Mohan
>
>
>
>
>
>


Re: [SQL] how to preserve \n in select statement

2003-12-23 Thread Iain
Isn't the simple answer to use bind variables?

Using bind variables instead of building a new SQL string each time will
prevent malicious users from invoking functions and injecting other SQL, as
well as handle the original problem regarding storage of newlines vs \n.

I don't know much about Postgres' SQL cache, but it is well known in Oracle
circles that using bind variables is a critical part of system design,
not just for security but for performance and scalability. I suspect that
the same issues apply more or less to postgres.
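At the SQL level the idea can be sketched with PREPARE/EXECUTE (available since 7.3). The table and column names here are illustrative:

```sql
PREPARE find_note(text) AS
    SELECT * FROM notes WHERE body = $1;
-- the parameter is passed as data, never re-parsed as SQL,
-- so embedded newlines and quotes survive intact
EXECUTE find_note('line one
line two');
```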

Correct me if I'm wrong, please...

regards
Iain
- Original Message - 
From: "Richard Huxton" <[EMAIL PROTECTED]>
To: "Denis" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, December 22, 2003 7:48 PM
Subject: Re: [SQL] how to preserve \n in select statement


> On Monday 22 December 2003 09:37, Denis wrote:
> > Hi Richard..
> >
> > If your users are required to fire only SELECT and no DML, you can do
the
> > following:
> >
> > BEGIN;
> > execute the statements given by user
> > ROLLBACK;
> >
> > This will not affect your SELECT and also if any malicious user gives
> > DELETE statement, that will not have any impact too..
>
> An interesting idea, though you'd need to be careful with side-effects
> (triggers/functions etc). I seem to recall a "read-only" setting being
> discussed for transactions too (though not as a security measure, I should
> emphasise).
>
> The other thing is to use the database user/group mechanism - something
which
> tends to be neglected with web-based apps (partly because different DBs
have
> different setups here).
> If only an application super-user can add/delete users make sure the
> permissions reflect this and connect as a more restricted user for other
> logins.
>
> -- 
>   Richard Huxton
>   Archonet Ltd
>


[SQL] not in vs not exists - vastly different performance

2003-12-24 Thread Iain

Hi All,

I found this interesting and thought I'd offer it up for comment.

I have the following delete:

delete from tableB where id not in (select id from tableA);

There are about 100,000 rows in tableA, and about 500,000 in tableB. id is
indexed on both tables. This is just a development DB, and I wanted to clean
it up so I could put in RI constraints. Somewhere along the line, records
have been deleted from tableA, leaving orphans in tableB.

I launched the query yesterday afternoon, and it hadn't returned as of this
morning, 15 hours later. Running top showed that CPU utilization was running
close to 100%, and the disk was not busy at all. Anyway, I killed it and did
some testing.

EXPLAIN revealed this plan, and varying random_page_cost between 1 and 4
didn't affect it:

 Seq Scan on tableB  (cost=0.00..1003619849.56 rows=251513 width=6)
   Filter: (NOT (subplan))
   SubPlan
     ->  Seq Scan on tableA  (cost=0.00..3738.64 rows=100664 width=4)

Recoding the delete to use not exists as follows:

delete from tableB where not exists
    (select id from tableA where tableA.id = tableB.id);

gave this plan:

 Seq Scan on tableB  (cost=0.00..719522.41 rows=236131 width=6)
   Filter: (NOT (subplan))
   SubPlan
     ->  Index Scan using tableB_pk on tableA  (cost=0.00..3.01 rows=2 width=4)
           Index Cond: ((id)::integer = ($0)::integer)

This deleted 1200 rows in about 2 seconds. Much better.

Anyway, I was a little surprised that "not in" chose to use a seq scan on
tableA in this case. I had imagined, given that statistics were up to date
and the size of the table, that the plan would have been similar to that
generated by not exists, or perhaps would have used a hash table based on
the tableA ids.

Something to think about anyway.
Regards
Iain
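The same anti-join can also be spelled with an outer join, which some planners handle better than a NOT IN subplan. A sketch, untested here, using the same table names:

```sql
delete from tableB
 where id in (select b.id
                from tableB b
                left join tableA a on a.id = b.id
               where a.id is null);
```

Note that NOT IN and the outer-join form differ if the subquery column can be NULL.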

Re: [SQL] not in vs not exists - vastly different performance

2004-01-04 Thread Iain
Hi Tom,
(B
(B> You didn't say what PG version you are using, but I'd venture to bet
(B> it is pre-7.4.
(B
(BSorry I didn't give the version number, here is the output from select
(Bversion():
(B
(BPostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC 2.95.3
(B
(BThis is the first time that we have compiled the system from source,
(Bpreviously we used rpm, but it seems very straight forward. I don't know if
(Bthere may have been something we did wrong there.
(B
(BI understand that every DB environment is different, but it sounds like for
(Bthe example I gave, you expected that the NOT IN optimization would have
(Bchosen a better plan. As to why it didn't I have no idea at this stage. If
(Bthere is anything you want me to do to test it, for example on the
(Bregression test DB, then let me know.
(B
(Bregards
(BIain
(B
(B
(B

[SQL] name of a column returned from a table function

2004-01-19 Thread Iain

DB version 7.4.

I had a look through the docs, but the information on table functions (those
returning SETOF something) seems to have gone missing from the 7.4 docs. I
found something under 7.3, but it didn't answer my question, which is...

I've written a little table function that simply returns all the dates in a
month. No problems so far. Now I want to outer join this list to another
table, but I don't know what to call the column returned by the function.
All the examples I saw just had "select * from <function>".

I want to do something like "select <columns> from <table> join <function>
on (<table column> = <function column>);"

Searching the lists, I found something to the effect that this might have
problems. Is that still the case?

Thanks
Iain
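For reference, the column of a table function can be named with a column alias in the FROM clause. A sketch with hypothetical function and table names:

```sql
-- dates_in_month(date) is a hypothetical SETOF date function
SELECT d.day, s.amount
  FROM dates_in_month('2004-01-01') AS d(day)
  LEFT JOIN sales s ON s.sale_date = d.day;
```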

Re: [SQL] Slow sub-selects, max and count(*)

2004-02-04 Thread Iain



Hi,

I don't think that you gave enough information to get any direct help; for
example, what are these "sub-selects"?

I often see performance problems arising from procedural code that puts
selects that don't use indexes inside loops. That's a plain programming
issue, and a matter of understanding your system.

Another situation I encountered recently was using "in" sub-selects.
Recoding them as "exists" or "not exists" as the case may be turned a query
that ran for 18 hours with no sign of ending (before I killed it) into a
query that ran in a couple of minutes.

I always try to avoid procedural code in such batch updates. If it can be
encoded into one insert then it is usually better. It just depends on what
you are doing.
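As an illustration of that IN-to-EXISTS rewrite, applied to one of the deletes in Richard's procedure below (a sketch; note the two forms differ if sec_cde can be NULL):

```sql
-- slow form:
--   delete from temp_shareprices
--    where ticker not in (select sec_cde from fsecurity);
-- correlated form that can use an index on fsecurity.sec_cde:
delete from temp_shareprices
 where not exists (select 1 from fsecurity
                    where fsecurity.sec_cde = temp_shareprices.ticker);
```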
 
 

  - Original Message -
  From: Richard Sydney-Smith
  To: [EMAIL PROTECTED]
  Sent: Thursday, February 05, 2004 10:48 AM
  Subject: [SQL] Slow sub-selects, max and count(*)
  I have a procedure (included below with table definition) to import end of
  day quotes into a table fsechist.

  The issue is with the speed (or lack of it) of the procedure. Apparently
  it is the subselects that are the worst issue and I have tried to replace
  these. Also max() and count() refuse to use indexes. I have replaced max()
  with (select ... as mx from ... order by ... desc limit 1) and this is
  much faster, but I appear to have to avoid count() as I can not find a way
  to point it at an index.

  Please, I am sure that there is a better way to do this. And 5 to 7
  minutes to insert a day is really too slow. Each day's import table
  contains about 3200 records. Total table size for fsechist is about 2.5
  million records.

  If you could point me in the right direction here it would be much
  appreciated.

  thanks

  Richard Sydney-Smith
   
  
   
  CREATE OR REPLACE FUNCTION public.import_sharedata(varchar) RETURNS bool AS '

  declare
    filnam alias for $1;
    alldone bool := true;
    /* cnt integer := 0; */  /* count of number inserted not used */

  begin
  /* Imports a text file containing end of day price quotes.
     filnam should be in double backslash format e.g. c:\\data\\sharequotes\\quotes.dat

     Ensures that it only brings in quotes for companies we have listed in
     our database, and ensures only the latest copy of the quote is kept for
     each day.

     Import table has 7 columns in CSV tab delimited ascii format.
     EzyChart format has prices in cents, METASTOCK in dollars.
     MetaStock +100 has the volume divided by 100.
     Depending on your import data source you may need to process
     temp_shareprices so that the values are in dollars, not cents, and the
     volume is per unit, not per 100 units.

     Present calculation is for EzyChart format.

     Both tables have to have a 4 digit year yyyymmdd.
     Warning: If you have the wrong format this procedure will import
     garbage into your database !!!
  */

    perform drop_table(\'temp_shareprices\');
    create table temp_shareprices(
      ticker char(10) null,
      dte char(8),
      v1 numeric(19,4) not null default 0,
      v2 numeric(19,4) not null default 0,
      v3 numeric(19,4) not null default 0,
      v4 numeric(19,4) not null default 0,
      v5 numeric(19,4) not null default 0
    );
    execute \'copy temp_shareprices from \'||quote_literal(filnam)||\' delimiter \'||quote_literal(\',\');

  -- extend yr to 4 digits, alter quotes to dollars
    update temp_shareprices
       set dte = \'20\'||dte, v1=v1/100, v2=v2/100, v3=v3/100, v4=v4/100;

  -- add in any new company tickers
    insert into fsecurity (sec_cde)
      select ticker from temp_shareprices
       where length(trim(ticker)) = 3
         and ticker not in (select sec_cde from fsecurity);

  -- remove rows from temp_shareprices where we do not have a corresponding security
  -- using sub-select is slow:
  -- delete from temp_shareprices where ticker not in (select sec_cde from fsecurity);
  -- can just delete ones where length<>3 as insert above ensures all length
  -- 3 are inserted. All ASX codes are 3 char long.
    delete from temp_shareprices where length(trim(ticker)) <> 3;

  -- Attempt to remove without limiting to length and not using sub select does not work:
  -- update temp_shareprices set ticker = s.sec_cde from temp_shareprices t
  --   left outer join fsecurity s on t.ticker = s.sec_cde;
  -- delete from temp_shareprices where ticker is null;

  -- put indexes on temp table
    CREATE INDEX temphist_tick ON temp_shareprices USING btree (ticker);
    CREATE INDEX temphist_dte ON temp_shareprices USING btree (dte);

  -- Scan the input table and delete as appropriate from fsechist

  -- Using sub select to remov

Re: [SQL] Indexes and statistics

2004-02-18 Thread Iain

> The computed cost of using the index was a factor of 10 higher, which I
> presume is why the query planner wasn't using the index, but it ran in
> half the time

Have you tried playing with the random_page_cost parameter? The default is
4. Try:

set random_page_cost = 1;

in psql to alter it for the current session (you can change this in
postgresql.conf too). This will make index usage more attractive by reducing
the computed cost. This is the simple way of looking at it anyway.

On my system I tested a 'typical' query exercising some joins on large
tables which didn't use an index, but I thought maybe it would perform
better if it did. I determined that a random_page_cost of 1.8 would cause
indexes to be used, but in this case the *actual* performance didn't improve
very much. I took this to mean that a random_page_cost of around 1.8/1.9
represents a rough balance point on my development server (one slow IDE
disk, and a big database).

As well as the other things mentioned by Tom, perhaps you should be looking
for the "correct" setting of random_page_cost for your system. It may be
appropriate to alter it globally using postgresql.conf, and for specific
situations such as you mentioned.
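The effect is easy to see per session. A sketch; the table and column names are illustrative:

```sql
EXPLAIN SELECT * FROM big_table WHERE indexed_col = 42;  -- plan at the default cost
SET random_page_cost = 1.8;                              -- session-local change
EXPLAIN SELECT * FROM big_table WHERE indexed_col = 42;  -- may now pick the index
RESET random_page_cost;                                  -- back to the configured value
```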

HTH
Iain




Re: [SQL] How to get Rows Count

2004-02-29 Thread Iain
How about:

select sum(case when c = '*' then 0 else 1 end) as count_not_star
  from tablename;

If you want to process all records but only count those without a '*' in c,
then this will do the trick.

regards
iain
- Original Message - 
From: "Abdul Wahab Dahalan" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, March 01, 2004 11:06 AM
Subject: [SQL] How to get Rows Count


> >
> >
> >a   b   d   c
> >01  02 b*
> >01  02 a*
> >02  03
> >02  04 b*
> >02  04 a*
> >03  05
> >04  06
> >
> If I've a table like above, how do I make a query to get the count of rows
> that don't have '*' in them?
>
> Can we have other queries than this : Select count(c) from tablename
> where c !='*';
> or Select count(c) from tablename where c <> '*';
>
>
>


Re: [SQL] ANALYZE error

2004-03-08 Thread Iain
> The only other job that could be running is a vacuum I run at 1am, but I
> don't think it would take an hour to run.

Famous last words... :-) It should be easy to check this, but I don't know
if that is a likely source of problems anyway.

If I were you I'd just do VACUUM ANALYSE in one command. It would certainly
eliminate that as a factor.

At this stage you have no idea which table it occurs on, or if it always
happens on the same table, or at the same time, am I right? Failing any more
useful response from those with more internal knowledge you may have to
resort to collecting information and trying to isolate the problem.

It would be a good idea to give all the relevant version numbers, too.

regards
iain




Re: [SQL] Simple SQL question

2004-03-08 Thread Iain
Just a note based on my experience: if you are going to use IN processing,
then there is a good chance that the index isn't going to be used. In some
recent tests I did, the index stopped being used after I put 3 or more items
in the IN list. You should build some representative examples of the select
and test them using "ANALYSE SELECT ..." . I'm using 7.4.1.

The result from the select is the same even if you don't make an index on
the function result. It's just a performance consideration. If the index
isn't used, then you don't need to create it.

regards
Iain
- Original Message - 
From: "Daniel Henrique Alves Lima" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 09, 2004 1:21 PM
Subject: Re: [SQL] Simple SQL question


> Thank you, Jeremy.
>
> I've built a function that returns a string from (cd_teacher, cd_course)
> and I've created a functional index over this function, like:
>
> create index teacher_course_idx on teacher_course
> (build_unique_2p(cd_teacher,cd_course));
> select * from teacher_course where build_unique_2p(cd_teacher,cd_course)
> in ('1:2','23:11','563','341');
>
> Is it possible to use an "array cast" over cd_teacher and cd_course (just
> an idea, I don't know the syntax), like:
>
> select * from teacher_course where cast((cd_teacher,cd_course) as array)
> in ('{1,2}','{23,11}','{563,341}');
>
> ?
>
> I'm using PostgreSQL 7.3.4
>
> Thanks !!!
>
> Jeremy Semeiks wrote:
>
> >You could use a subselect of unions:
> >
> >select * from teacher_course where (cd_course, cd_teacher) in
> >(select 1, 2 union select 23, 11 union select 563, 341)
> >
> >Maybe there's a more concise way, though.
> >
> >- Jeremy
> >
> >
> >
>
>
>


Re: [SQL] Simple SQL question

2004-03-08 Thread Iain
Sorry, did I write "ANALYSE SELECT  "?

It's supposed to be "EXPLAIN [ANALYSE] [VERBOSE] SELECT ..."

- Original Message - 
From: "Daniel Henrique Alves Lima" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 09, 2004 1:21 PM
Subject: Re: [SQL] Simple SQL question


> Thank you, Jeremy.
>
> I've built a function that returns a string from (cd_teacher, cd_course)
> and i've create a functional index over this function, like :
>
> create index teacher_course_idx on teacher_course
> (build_unique_2p(cd_teacher,cd_course));
> select * from teacher_course where build_unique_2p(cd_teacher,cd_course)
> in ('1:2','23:11','563','341');
>
> Is it possible to use "array cast" over cd_teacher and cd_course (just
> an idea, i don't know the syntax), like:
>
> select * from teachar_course where cast((cd_teacher,cd_course) as array)
> in ('{1,2}','{23,11}','{563,341}');
>
> ?
>
> I'm using postgreSQL 7.34
>
> Thanks !!!
>
> Jeremy Semeiks wrote:
>
> >You could use a subselect of unions:
> >
> >select * from teacher_course where (cd_course, cd_teacher) in
> >(select 1, 2 union select 23, 11 union select 563, 341)
> >
> >Maybe there's a more concise way, though.
> >
> >- Jeremy
> >
> >
> >
>
>
>
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] randomized order in select?

2004-03-10 Thread Iain
If you have a lot of tips, you could create a unique indexed tip number
column. Select the highest tip number using:

select tip_number from tips order by tip_number desc limit 1;

Then generate a random number and select using that tip_number.

Of course, you would have to allow for the possibility of missing tip
numbers, by repeating the random number generation/read sequence until you
find something. Since the tip_number isn't the PK of the table, you can
regenerate the tip numbers to eliminate holes from deletions any time you
like. Just reset the sequence to 1 and update all rows with the
nextval(tipnumber_seq).
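
Roughly, the scheme described above might look like this (the tips table, tip_number column and tipnumber_seq sequence are assumed names):

```sql
-- 1. Find the highest tip number:
SELECT tip_number FROM tips ORDER BY tip_number DESC LIMIT 1;

-- 2. Pick a random number up to that maximum and fetch the nearest
--    existing tip at or above it, which also skips over holes:
SELECT * FROM tips
WHERE tip_number >= (random() * 100)::int  -- substitute the max from step 1
ORDER BY tip_number
LIMIT 1;

-- 3. Renumber to remove holes left by deletions:
SELECT setval('tipnumber_seq', 1, false);
UPDATE tips SET tip_number = nextval('tipnumber_seq');
```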

Sounds like a lot of work to me though...


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] feature request ?

2004-06-24 Thread Iain
> > IF foo IS NULL
> > THEN ...
> > ELSIF foo
> > THEN ...
> > ELSE ...
> > END IF;
>
> here the foo expression will be executed twice

If you can use an immutable or stable function then the overhead would be
minimal, as the system knows that it doesn't need to re-evaluate it.
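
As a sketch (foo here stands in for whatever expression the original poster had in mind), the declaration would look something like:

```sql
-- Marking the function STABLE tells the planner its result will not
-- change within a single statement, so repeated calls in the IF/ELSIF
-- chain need not all be re-evaluated.
CREATE FUNCTION foo() RETURNS boolean AS '
BEGIN
    RETURN true;  -- placeholder for the real test
END;
' LANGUAGE 'plpgsql' STABLE;
```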

regards
Iain



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] surrogate key or not?

2004-07-20 Thread Iain
Hi,
(B
(Bfor my 2c worth, performance is the least important of the things you need
(Bto consider regarding use of surrogate keys.
(B
(BI use surrogate keys for all situations except the simplest code/description
(Btables, and this is only when the code has no meaning to the application. If
(Bthere is any possibility that you will want to update or re-use codes
(B(attaching a different meaning to them) then surrogate keys are the way to
(Bgo..
(B
(BThus I see it more as an issue of business logic than performance. There are
(Bof course many other considerations with relational theory and stuff like
(Bthat which you could debate endlessly. I expect that googling on "surrogate
(Bkeys" would yeild interesting results.
(B
(BRegards
(BIain
(B
(B- Original Message - 
(BFrom: "Markus Bertheau" <[EMAIL PROTECTED]>
(BTo: <[EMAIL PROTECTED]>
(BSent: Tuesday, July 20, 2004 9:16 PM
(BSubject: [SQL] surrogate key or not?
(B
(B
(B> Hi,
(B>
(B> I have a database that has types in them with unique names. These types
(B> are referenced from other tables through a surrogate integer key. I'm
(B> now wondering if I should eliminate that surrogate key and just use the
(B> name as the primary key. Afaiu, surrogate keys are primarily there to
(B> make joining tables or otherwise searching for a record faster, because
(B> it's faster to compare two integers than it is to compare two strings.
(B>
(B> Now when I want to search for a type in types or another table that
(B> references types(type_id), under what circumstances is it advisable to
(B> have a surrogate integer key and not use the unique type name? Is
(B> searching for an integer as fast as is searching for a string when both
(B> have an index? How many records in the type table do I need to make a
(B> surrogate key a not unsignificantly faster way to retrieve a row? What
(B> about joins? Are these the right questions?
(B>
(B> Thanks.
(B>
(B> -- 
(B> Markus Bertheau <[EMAIL PROTECTED]>
(B>
(B>
(B> ---(end of broadcast)---
(B> TIP 6: Have you searched our list archives?
(B>
(B>http://archives.postgresql.org
(B
(B
(B---(end of broadcast)---
(BTIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

Re: [SQL] surrogate key or not?

2004-07-21 Thread Iain
Hi Josh,
(B
(BConsidering that I generally agree with your comments (in this and your
(Blater posts)
(BI'd say I didn't make myself clear in my brief comment.
(B
(BOf the considerations: performance, convenience and business logic, I
(Bpersonally rate performance as the lowest priority. The convenience part has
(Bbeen well covered in other posts and is second most in my opinion..
(B
(BBy business logic I was thinking of a situation we had on an old db where
(Bemployees were idenitfied by codes. These codes were transmitted all over
(Bthe database (as you can imagine) which meant that the codes couldn't be
(Brecycled as employees came and left. The answer was to introduce an integer
(Bbased auto-generated key. That way the data associated with an ex-employee
(Bis associated with that employee, not with the code. It may be that this
(Breally a convenience issue as opposed to business logic, but in my thinking
(Bthe code was data, not a
(Bprimary key - it is not even an alternate key (if ex-employees have their
(Bcode
(Bnulled it can be made "unique" though). Perhaps the term surrogate key
(Bdoesn't apply in this case, I have to admit that I'm not 100% on terminalogy
(Bhere.
(B
(BIn the end, accurately representing the business logic is most importent,
(Band use of a contrived sequential key shouldnt preclude the use of unique
(Bconstraints where needed so the problem you described of duplicate events
(Bshould never have happened anyway.
(B
(BThis is a great topic though, I find it really interesting.
(B
(BRegards
(BIain
(B
(B
(B
(B
(B- Original Message - 
(BFrom: "Josh Berkus" <[EMAIL PROTECTED]>
(BTo: "Iain" <[EMAIL PROTECTED]>; "Markus Bertheau" <[EMAIL PROTECTED]>;
(B<[EMAIL PROTECTED]>
(BSent: Wednesday, July 21, 2004 4:00 PM
(BSubject: Re: [SQL] surrogate key or not?
(B
(B
(B> Markus, Iain,
(B>
(B> > Thus I see it more as an issue of business logic than performance. There
(Bare
(B> > of course many other considerations with relational theory and stuff
(Blike
(B> > that which you could debate endlessly. I expect that googling on
(B"surrogate
(B> > keys" would yeild interesting results.
(B>
(B> Frankly, I couldn't disagree more.   This is why it was so problematic for
(Bthe
(B> SQL committee to enshrine "primary keys" and sequences in the standard; it
(B> mis-educates database designers into believing that surrogate keys are
(B> somehow part of the data model.  They are most decidely NOT.
(B>
(B> Given:  Surrogate keys, by definition, represent no real data;
(B> Given:  Only items which represent real data have any place in
(B> a data model
(B> Conclusion:  Surrogate keys have no place in the data model
(B>
(B> There are, in fact, three very good reasons to use surrogate keys, all of
(B> which are strictly due to limitations of technology; that is,
(Bimplementation
(B> and performance issues, NOT business logic.  They are:
(B>
(B> 1) Convenience:  It's very annoying to have to refer to a 4-column foriegn
(Bkey
(B> whenever you do a join in queries or want to delete a record, as well as
(B> tracking a 4-element composite in your client software.
(B>
(B> 2) Performance:  INT and BIGINT data types are among the most compact and
(B> efficient stored in most RDBMSs.   So using anything else as a key would
(B> likely result in a loss of performance on large-table joins.
(B>
(B> 3) Mutability:  Most RDBMSs are very inefficient about CASCADE deletes and
(B> updates.   Some RDBMSs do not support CASCADE, forcing the client software
(Bto
(B> fix all the dependant rows.  This means that DBAs are very reluctant to
(Buse
(B> columns which change frequently as join keys.
(B>
(B> All three of these implementation issues are, at least in theory,
(B> surmountable.   For example, Sybase overcame problems (1) and (3) by
(Bcreating
(B> an automated, system-controlled hash key based on the table's real key.
(BThis
(B> was a solution endorsed by E.F. Codd in the mid-90's when he came to
(Bregret
(B> his promotion of the "Primary Key" idea in the SQL standard.
(B>
(B> Now, you're probably wondering "why does this guy regard surrogate keys as
(Ba
(B> problem?"   I'll tell you:  I absolutely cannot count the number of "bad
(B> databases" I've encountered which contained tables with a surrogate key,
(Band
(B> NO REAL KEY of any kind.   This makes data normalization impossible, and
(B> cleanup of the database becomes a labor-intensive process requiring
(B> hand-examination of each row.
(B>
(B> -- 
(B> -Josh Berkus
(B>  Aglio Database Solutions
(B>  San Francisco
(B
(B
(B---(end of broadcast)---
(BTIP 7: don't forget to increase your free space map settings

Re: [SQL] surrogate key or not?

2004-07-26 Thread Iain
Hi,

> But, once a surrogate key is assigned to a row, doesn't it become a
> "real" data? For example, I have a bunch of invoices/receipts and I
> write down a unique number on each of them. Doesn't the unique number
> become part of the information contained by the invoice/receipt itself
> (at least as long as I'm concerned)?

I don't think that I'd call an invoice number a surrogate key in the first
place. Invoice numbers and their like come from business requirements, they
just happen to be highly suitable as PKs so they could be considered a
natural key.

> > 3) Mutability:  Most RDBMSs are very inefficient about CASCADE deletes
and
>
> Change management IMO is perhaps the main reason of surrogate/artificial
> key. We often need a PK that _never_ needs to change (because it can be
> a royal PITA or downright impossibility to make this change; the PK
> might already be printed on a form/card/document, recorded on some
> permanent database, tattoed/embedded in someone's forehead, etc).
> Meanwhile, every other aspect of the data can change (e.g. a person can
> change his name, sex, age, email, address, even date & place of birth).
> Not to mention data entry mistakes. So it's impossible to use any
> "real"/natural key in this case.

I've never felt that it's a good idea to be dependent on the DBMS providing
cascade functionality - particularly cascading updates to PKs. I don't think
I've ever worked on a DB that used such constraints.

> Okay, so surrogate key makes it easy for stupid people to design a
> database that is prone to data duplication (because he doesn't install
> enough unique constraints to prevent this). But I don't see how a
> relation with a surrogate key is harder to "normalize" (that is, for the
> duplicates to be removed) than a relation with no key at all. Compare:

I think that most of us would agree that whatever they are, surrogate keys
aren't a substitute for clean data (or quality data).

On the whole, I think that there are more important indicators of quality
(or lack of it) in your database design than the prevalence (or lack) of
numeric ID style keys. Personally, I've grown to appreciate the id approach
over the years, but my mind is always open to other ideas.

regards
iain



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] surrogate key or not?

2004-07-26 Thread Iain
> > Because there is no information to be had on this fact. The
> > patient IS afflicted by such or she is not. There is no why.
>
> I begin to see why I spent $2000 this spring to have a doctor tell me what
I
> *didn't* have ...

This reminds me of a project I worked on many years ago. I was pretty much
fresh out of university, writing a system for a large medical practice -
it was principally for accounting purposes. I made lots of suggestions like
Josh's, only to get replies like Karsten's. I, the programmer, wanted to
codify everything so as to enable data analysis (ie linking complaints and
diagnosis, etc) but the doctors weren't interested. They just wanted to
write free text comments. And the reason for it (as far as I can tell) is
the distinction between general practice and research (such as
epidemiology). GPs are not so much searching for new knowledge in their
patients' records, as applying the knowledge gained from research (done by
researchers) to treat individual patients.

Karsten might beg to differ, I don't know, but the "long flaky text" comment
triggered some old (and fond) memories ;-)

Cheers
iain


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] surrogate key or not?

2004-07-27 Thread Iain
> > One thing we ARE looking for in our records is the ability to
> > find groups of patients by arbitrary criteria since one day
> > I'll have to find all my patients whose father took a statine,
> > whose second-born child suffered a bout of neutropenia 2 weeks
> > after birth and who started being on the pill at age 14.
> > Because they'll have a 3fold increased risk of lung embolus.
> > Unless monitored for clotting factors every 6 months. Which I
> > will have to do from now on. Get my point ?  :-)
>
> couldnt you do something like, let them write the 'long flaky text', and
at
> the same time mark a certain number of key words or key phrases which
could
> be stored and retrieved?

I was thinking along similar lines. On one hand, you need the "long flaky
text" (love that expression), on the other, you want to ensure that you can
locate appropriate data, and that the required details are available. By
available, I mean that it was entered in the first place, and that it is
retrievable. I imagine a system whereby you define keywords and attributes
for them (attributes would be an episode date, or dosage, etc). The memo, is
checked for keywords and the doctor prompted to supply the attributes for
them. If your parsing was smart, and the memo formated a little, you could
conceivably pull a lot of this out of the memo as defaults. The processing
could also be done retrospectively by an intern or researcher, but I imagine
it would be best to have the doctor do it at the time.

Just some vague ideas anyway. This may of course be much more work than
anyone wants to get into... I don't have much experience with text searching
systems, but something reasonably sophisticated would probably get you
there.

Regards
Iain


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] pg_dump/pg_restore question

2004-08-22 Thread Iain
Hi Worik,

I can't address the specifics of your problem, but I'll give you an example
of how I do it on a 7.4 db. I dunno if all options are available on your
version.

To get the backup:

pg_dump --format=c -v DBNAME > FILENAME.dump


To restore you have to create a new DB (drop the old one first if
necessary):

createdb -T template0  [-E ENCODING] DBNAME

Note that this is template0 and not template1 (also, don't forget to set
your database encoding if the default isn't what you want).


Then create a contents list:

pg_restore --list DBNAME.dump > DBNAME.list

You can edit the list file to rearrange the restore order or omit items.


Finally, do the restore:

pg_restore --use-list=DBNAME.list  --dbname=DBNAME -v DBNAME.dump


This is all documented fairly well in the online documentation, you just
need to spend a bit of time going through it. As I recall, some information
is in the admin section and some is in the documentation of the client
programs (pg_dump and pg_restore are client programs iirc).

Also, consider upgrading to 7.4 to ensure you get the best support.

Good luck,
Iain




- Original Message - 
From: "Worik" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, August 23, 2004 11:59 AM
Subject: [SQL] pg_dump/pg_restore question


> Freinds
>
> I am new to this list, and this is my first message.
>
> I hope this is the correct forum, and the question not too stupid/simple.
>
> I have a database on a debian stable system...
>
> dpkg -l postgresql
> [snip]
> ii  postgresql 7.2.1-2woody5  Object-relational SQL database,
> descended fr
>
> When I dump a database (as from the man page for pg_restore)
> pg_dump mydb > mydb.out
>
> OK.  Get an SQL dump in mydb.out
>
> Create a new database
>
> creatdb myotherdb
>
> Load the dump
>
> psql -d myotherdb -f mydb.out
> psql:mydb.out:4: \connect: FATAL 1:  IDENT authentication failed for
> user "postgres"
>
> I have tried adding the line
> host   all 127.0.0.1 255.255.255.255trust
>
> to pg_hba.conf but it makes no difference.
>
> The only thing I can do is edit the dump file into the part that wants
> to connect as postgres and the part that wants to connect as worik and
> run them under their respective logins.
>
> How can I set it up so I do not have to do that?
>
> What documentation should I be reading?
>
> Worik
>
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] Possible rounding error of large float values?

2004-08-24 Thread Iain

# select 9223372036854775807 = 9223372036854775807::float;
 ?column?
--
 t
(1 row)

# select 9223372036854775807 = 9223372036854775807.0::float;
 ?column?
--
 t
(1 row)

# select 9223372036854775807 = 9223372036854775807.::numeric;
 ?column?
--
 t
(1 row)

This and the fact that it's still possible to find the row using the
original value would seem to indicate that the rounding is just a display
artifact.

- Original Message - 
From: "Oliver Elphick" <[EMAIL PROTECTED]>
To: "Josh Berkus" <[EMAIL PROTECTED]>
Cc: "Adam Lancaster" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 5:21 AM
Subject: Re: [SQL] Possible rounding error of large float values?


> On Tue, 2004-08-24 at 20:52, Josh Berkus wrote:
> > Adam,
> >
> > > 9223372036854775807
> > >
> > > It gets selected out as:
> > >
> > > 9.22337203685478E18
> >
> > This is a property of FLOAT data types.  They round.
> >
> > > Which appears to be rounded. When we cast it to numeric type we get:
> > >
> > > 922337203685478
> > >
> > > Which also is rounded. It is still possible to find the row using the
> > > original value
> >
> > Hmmm ... is 15 digits the limit of NUMERIC?   It may be.
>
> It must be the limit of float.  Numeric can't produce any more than it
> was given and it was cast from float.
>
> bray=# select 653596708775675750507850507570708696432 ::numeric;
>  numeric
> -
>  653596708775675750507850507570708696432
> (1 row)
>
> -- 
> Oliver Elphick  [EMAIL PROTECTED]
> Isle of Wight  http://www.lfix.co.uk/oliver
> GPG: 1024D/A54310EA  92C8 39E7 280E 3631 3F0E  1EC0 5664 7A2F A543 10EA
>  
>  "I saw in the night visions, and, behold, one like the
>   Son of man came with the clouds of heaven, and came to
>   the Ancient of days, and they brought him near before
>   him. And there was given him dominion, and glory, and
>   a kingdom, that all people, nations, and languages,
>   should serve him; his dominion is an everlasting
>   dominion, which shall not pass away, and his kingdom
>   that which shall not be destroyed."
> Daniel 7:13,14
>
>
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

Re: [SQL] Possible rounding error of large float values?

2004-08-24 Thread Iain
> I don't think that's true since the number ending in 807 is equal to a
> version ending in 808.
>
> sszabo=# select 9223372036854775807 = 9223372036854775808::float;
>  ?column?
> --
>  t
> (1 row)

Good point. I think that the problems with using floats are well documented
(I particularly like the docs that come with IBM's java.math.BigDecimal
package) so there shouldn't be any excuse for being surprised by how they
behave. Basically floats are fine when used for what they were designed for,
but a liability otherwise.

Just thinking about it now, I can't remember last time I designed a database
that used float data...

regards
Iain


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[SQL] implementing an out-of-transaction trigger

2004-09-14 Thread Iain

Hi All,

I've come across a situation where I'd like to use some kind of
"out-of-transaction trigger" to do some processing after changes to some
tables, but without extending the duration of the main transaction. Of
course, it's important that the processing be completed, so it has to be, as
far as possible, reliable and "safe". The extra processing should be
completed within a reasonable time after the original transaction, but it
needn't happen immediately.

In the past, we have used triggers written in C that call perl scripts. It
seems untidy to me as we introduce 2 more programming languages, and I'm not
so comfortable with the idea of calling an OS program from a trigger which
is part of the original transaction anyway. It doesn't seem to be helping
the situation much.

I haven't been able to come up with a standard way to do this in postgres,
so I'm contemplating writing a kind of background process that checks for
changes and processes them periodically, nothing very sophisticated. I'm
wondering if there is a better way. Maybe other people here have dealt with
this kind of situation.

If I were using Oracle, I might use Streams or Advanced Queuing; this would
probably be a good case for using them - though I have no experience myself.
In Oracle's case the subscribed changes are captured from the redo log.
After that it's fairly standard publisher/subscriber type stuff. I expect
that this should, in theory, be possible with postgres too, though AFAIK
this isn't on the todo list.

Anyway, if anyone has any thoughts on this, I'd be interested to hear them.
regards
Iain

Re: [SQL] implementing an out-of-transaction trigger

2004-09-15 Thread Iain
Hi Mike,

Thanks, that's exactly what I was looking for.

A quick quote from the docs:

...if a NOTIFY is executed inside a transaction, the notify events are not
delivered until and unless the transaction is committed. ...if a listening
session receives a notification signal while it is within a transaction, the
notification event will not be delivered to its connected client until just
after the transaction is completed...
This is pretty good so far. Though, as far as I can tell, there is no way to
have the notify activate a pl/pgsql function directly. I'll still need to
write a client program to create a session and actually do the listening,
that is if I haven't missed anything else...
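
For example, the server-side half might be wired up like this (the table and notification names are made up for illustration):

```sql
-- Queue table holding the pending work for the external listener:
CREATE TABLE work_queue (id serial PRIMARY KEY, payload text);

-- Rule that raises a notification whenever a row is queued; the event
-- is delivered only after the inserting transaction commits:
CREATE RULE work_queue_notify AS
    ON INSERT TO work_queue
    DO NOTIFY work_queue_event;
```

The client program then issues LISTEN work_queue_event; on its session and processes the queue whenever it is signalled.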

Thanks again,

Iain




- Original Message - 
From: "Mike Rylander" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, September 16, 2004 1:03 AM
Subject: Re: [SQL] implementing an out-of-transaction trigger


> >I've come across a situation where I'd like to use some kind of
> "out-of-transaction
> >trigger" to do some processing after changes to some tables, but
> without extending
> >the duration of the main transaction. Of course, it's important that
> the processing be
> >completed so it has to be, as far as possible, reliable and "safe". The
extra
> >processing should be completed within a reasonable time after the
original
> >transaction, but it needn't happen immediately.
>
> Check out
>   http://www.postgresql.org/docs/7.4/static/sql-listen.html
> and
>   http://www.postgresql.org/docs/7.4/static/sql-notify.html
>
> Then look at the Rules system for generating a NOTIFY:
>http://www.postgresql.org/docs/7.4/static/sql-createrule.html
>
> --miker
>
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] implementing an out-of-transaction trigger

2004-09-15 Thread Iain
Hi Greg,

I would have like to have avoided writing a daemon if possible, but
presumably it should be possible to write one flexible enough to be used in
a variety of situations.

If I didn't use notify, I'd would just be checking for the presence of data
in "transfer areas" or queue tables as you called them on a periodic basis.
This wouldn't change so much using notify, except that instead of writing
the process so as to sleep and periodically check, I can just have it wake
up when the data actually gets there. A minor saving perhaps, but at least I
could use the name of the notification to determine what processing to
activate.

Thanks,
Iain
- Original Message - 
From: "Greg Stark" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: "Mike Rylander" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, September 16, 2004 12:57 PM
Subject: Re: [SQL] implementing an out-of-transaction trigger


>
> "Iain" <[EMAIL PROTECTED]> writes:
>
> > Though, as far as I can tell, there is no way to have the notify
activate a
> > pl/pgsql function directly. I'll still need to write a client program to
> > create a session and actually do the listening, that is if I havn't
missed
> > anything else...
>
> Right, presumably some sort of daemon that sits and waits for events. Much
> like you would have with Oracle advanced queuing and such.
>
> The big difference is that NOTIFY doesn't pass along any parameters. You
will
> need some way for your daemon to find any pending data it needs to
process.
> You might need some kind of queue table, or you might be able to get by
> without one.
>
> -- 
> greg


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] Reuse previously calculated column in sql query?

2004-10-20 Thread Iain
Using the sub-select is one way, but there is another way.

If the function can be declared as strict or immutable then you can call it
as many times as you like in a single transaction and it will only be
evaluated once. As far as I know this does work as advertised.

Check the SQL commands reference section of the manual for CREATE FUNCTION.

Regards
Iain
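
As a sketch (using the function name from Rod's reply below; the body is a placeholder):

```sql
-- IMMUTABLE promises the result depends only on the argument, which
-- lets the planner collapse repeated calls with the same input.
CREATE FUNCTION long_calculation(integer) RETURNS numeric AS '
BEGIN
    RETURN $1 * 0.75;  -- stand-in for the expensive computation
END;
' LANGUAGE 'plpgsql' IMMUTABLE;
```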
- Original Message - 
From: "Rod Taylor" <[EMAIL PROTECTED]>
To: "Philippe Lang" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, October 20, 2004 10:05 PM
Subject: Re: [SQL] Reuse previously calculated column in sql query?


> On Wed, 2004-10-20 at 08:50, Philippe Lang wrote:
>> Hello,
>>
>> Is it possible to reuse a previously calculated column in a following
>> column, like:
>
> SELECT col1
> , col2
> , col2 * 0.75 AS col3
>  FROM (SELECT foo.val1 AS col1
> , long_calculation(foo.val1) AS col2
>  FROM foo) AS tab;
>
>
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
>   http://www.postgresql.org/docs/faqs/FAQ.html 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org

Re: [SQL] paertially restoring a DB

2004-10-31 Thread Iain
Hi,
What you are trying to do should be possible as far as I know.
Did you use the custom format for your dump (pg_dump -Fc)? Try extracting a 
list of the contents of the dump with the pg_restore --list option, and 
check that for anything that might be causing you to lose that table. 
Comment out any offending lines and restore using the --use-list option.

If you used the custom format you have a lot of options for extracting 
individual table data and stuff like that.

Good luck,
Iain
- Original Message - 
From: "Niall Linden" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 29, 2004 11:04 PM
Subject: [SQL] paertially restoring a DB


i am having a problem with dump and restore. basically i have 2
databases
maindb and tempdb
the databases are almost the same but the main db has some extra
tables which dont change. what i need to do is get all the information
form the temp db and put it in the main db. the tempdb should
overwrite all the data in the main db, but the tables that exist in
the maindb only should remain
ie
maindb has tables:  dbsettings, data1, data2
tempdb has tables:  data1, data2
then the tables (and data) data1,2 from tempdb should overwrite their
corresponding tables but dbsettings should remain unchanged in maindb.
any things ive tried overwrite all the tables and i loose dbsettings
table completely (not good)
any help appreciated
Niall
---(end of broadcast)---
TIP 8: explain analyze is your friend 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] get sequence value of insert command

2004-11-19 Thread Iain
I seem to recall it was mine. I made the mistake of assuming it wasn't 
concurrency safe and was gently corrected by one of the community. I 
think it might have been a Tim/Tam Lane. Wonder what happened to him? :-)
M tim tams
rgds
Homer
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[SQL] invalid 'having' clause

2004-12-01 Thread Iain
Hi,

Just a quick question out of curiosity, I was just wondering if this is 
supposed to be valid sql:

select count(*) as cnt
from sometable
group by somecolumn
having cnt > 1

This isn't valid in pg (7.4.6), but this is:

select count(*)
from sometable
group by somecolumn
having count(*) > 1

Does anyone have any idea?

I can't remember what other databases do, and I'm not so familiar with the 
standards, but I just thought I'd mention it anyway.

regards
Iain 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org

Re: [SQL] invalid 'having' clause

2004-12-02 Thread Iain
OK, thanks. That seems to make sense.

regards
Iain
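
For what it's worth, a sub-select gives the aliased form anyway (same table and column names as the original question):

```sql
-- The alias cnt exists by the time the outer WHERE is evaluated:
SELECT cnt
FROM (SELECT count(*) AS cnt
      FROM sometable
      GROUP BY somecolumn) AS t
WHERE cnt > 1;
```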
- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, December 02, 2004 11:54 AM
Subject: Re: [SQL] invalid 'having' clause 


> "Iain" <[EMAIL PROTECTED]> writes:
>> Just a quick question out of curiosity, I was just wondering if this is 
>> supposed to be valid sql:
> 
>> select count(*) as cnt
>> from sometable
>> group by somecolumn
>> having cnt > 1
> 
> No.  The HAVING clause logically executes before the SELECT output list
> does, so it makes no sense for it to refer to the output list entries.
> 
> regards, tom lane
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

Re: [SQL] Date datatype

2004-12-21 Thread Iain
or,
# update table set birthdate = to_date('2000-01-01','YYYY-MM-DD');
I'm not sure which is better though I tend to do it this way.
regards
Iain
- Original Message - 
From: "Achilleus Mantzios" <[EMAIL PROTECTED]>
To: "Pablo Digonzelli" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, December 21, 2004 5:38 PM
Subject: Re: [SQL] Date datatype


Pablo Digonzelli wrote on Dec 20, 2004:
hi all,
Ca anyone send me an example how to update a date datatype?
for example
update table set birthdate = "200-01-01" doesnt work.
# update table set birthdate = '2000-01-01';
TIA
Pablo
--
-Achilleus



Re: [SQL] I'm stumped

2004-12-21 Thread Iain
Hi,
I didn't bother analysing this too deeply, so keep that in mind when you 
read my reply ;-)

However, the point that set off alarm bells for me was this statement: "we now 
have an over-ride table".

It seems to me that you shouldn't be over-riding anything, and the emp_dept 
table should just be history - i.e. "emp_dept_his" - and it wouldn't need the 
"active" column.

If you need history then you could use a trigger or rule to insert the 
history record whenever the employee changes department.

The key question here is "Why wouldn't you change the key in the emp table 
when that key represents the employee's department?". It seems like a 
handbook case to me but maybe I missed something.

regards
Iain
I can't figure out an efficient way to do this. Basically I had a typical 
3-tier relationship:
(Employee -> Department -> Division)
However, at some point the need to move employees arose, but instead of 
changing the key in the emp table, we now have an over-ride table, so a 
history can be tracked.

If I want to get the info for a particular employee, it's a pretty simple 
deal; however, getting all the employees for a dept or division has 
become troublesome.

A very simplified schema:
divisions ( div_id, div_name );
departments ( dept_id, dept_name, div_id );
employees ( emp_id, emp_name, dept_id );
emp_dept ( emp_id, dept_id, active, changed_by, changed_when );
The original way that worked well:
SELECT v.div_name, d.dept_id, e.emp_id, e.emp_name
  FROM divisions v
   INNER JOIN departments d
   ON d.div_id = v.div_id
   INNER JOIN employees e
   ON e.dept_id = d.dept_id
 WHERE v.div_id = 123;
What was initially tried:
SELECT v.div_name, COALESCE(ed.dept_id, d.dept_id), e.emp_id, e.emp_name
  FROM divisions v
   INNER JOIN departments d
   ON d.div_id = v.div_id
   INNER JOIN employees e
   ON e.dept_id = d.dept_id
   LEFT JOIN emp_dept ed
  ON ed.emp_id = e.emp_id AND ed.active = true
 WHERE v.div_id = 123;
This query is flawed, as it still always puts the employees in their 
original div, but reports the new dept - which we didn't catch as a 
problem until emps were moved to depts in a different division.

I tried creating a function:
CREATE OR REPLACE FUNCTION get_empdept(int4) RETURNS int4 AS '
SELECT CASE WHEN ed.dept_id IS NOT NULL
THEN ed.dept_id
ELSE e.dept_id END
  FROM employees AS e
   LEFT JOIN emp_dept AS ed
  ON ed.emp_id = e.emp_id AND ed.active = true
 WHERE e.emp_id = $1
' LANGUAGE SQL STABLE;
And then tried:
SELECT v.div_name, d.dept_id, e.emp_id, e.emp_name
  FROM divisions v
   INNER JOIN departments d
   ON d.div_id = v.div_id
   INNER JOIN employees e
   ON get_empdept(e.emp_id) = d.dept_id
 WHERE v.div_id = 123;
However since the function is not immutable (since it does a query), I 
can't create an index, and the join always does a seq scan.

I also thought to create a view, but I don't believe Postgres supports 
indexed views. It was always using a seq scan too.

The above examples are actually quite simplified, as several other tables 
get joined along the way. I'm not sure whether a UNION would work or not - how 
would it exclude the ones that match the dept_id in the emp table for 
those emps that match on the over-ride table?

Any suggestions?
Hello,
do you have an index on emp_dept (emp_id, dept_id)?
what about this?
SELECT v.div_name, d.dept_id, e.emp_id, e.emp_name
FROM divisions v
   INNER JOIN departments d ON d.div_id = v.div_id
   INNER JOIN employees e ON e.dept_id = d.dept_id
WHERE NOT EXISTS (SELECT 1 FROM emp_dept ed WHERE ed.emp_id = e.emp_id)
AND v.div_id = 2
UNION ALL
SELECT v.div_name, d.dept_id, e.emp_id, e.emp_name
FROM divisions v
   INNER JOIN departments d ON d.div_id = v.div_id
   INNER JOIN emp_dept ed ON d.dept_id = ed.dept_id
   INNER JOIN employees e ON e.emp_id = ed.emp_id
WHERE ed.active = true
AND v.div_id = 2
Regards,
Thomas
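Another shape worth considering (my sketch, not from the thread): resolve the effective department first with COALESCE, and join departments on that, so the division always comes from the over-ride when one exists. Whether the planner can use an index on the COALESCE join condition is a separate question on 7.x.

```sql
SELECT v.div_name,
       COALESCE(ed.dept_id, e.dept_id) AS dept_id,
       e.emp_id, e.emp_name
  FROM employees e
   LEFT JOIN emp_dept ed
     ON ed.emp_id = e.emp_id AND ed.active = true
   -- join on the effective department, not the original one:
   INNER JOIN departments d
     ON d.dept_id = COALESCE(ed.dept_id, e.dept_id)
   INNER JOIN divisions v
     ON v.div_id = d.div_id
 WHERE v.div_id = 123;
```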



Re: [SQL]

2005-01-31 Thread Iain



hi,
 
I'm not familiar with iso2709, but there is a program called Octopus that may do what 
you want. It's open source software and can be found at octopus.enhydra.org - 
worth a try anyway.
 
Regards
Iain

  - Original Message - 
  From: Matteo Braidotti
  To: pgsql-sql@postgresql.org 
  Sent: Wednesday, January 26, 2005 11:28 PM
  Subject: [SQL] 
  
  Hi, I need a script or a program that converts and 
  exports my data from an SQL database to a file in the iso2709 format
  thanks
  bye


Re: [SQL] pg primary key bug?

2005-02-17 Thread Iain



Hi Ivan,
 
Sorry, I can't remember all you said in 
earlier posts, but I was wondering, your log file says:
 
> HINT:  Rebuild the index with REINDEX.
Did you do that, and did it solve the 
problem?
 
regards
Iain
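For reference, the rebuild the HINT asks for looks like this (the table name is my guess from the index name, so treat it as hypothetical):

```sql
REINDEX INDEX a_constants_str_pkey;   -- rebuild just the damaged index
-- or, assuming the table is named a_constants_str:
REINDEX TABLE a_constants_str;        -- rebuild all indexes on the table
```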

  - Original Message - 
  From: pginfo 
  To: Tom Lane 
  Cc: [EMAIL PROTECTED]; Michael Glaesemann; pgsql-sql@postgresql.org; [EMAIL PROTECTED]
  Sent: Friday, February 18, 2005 1:53 PM
  Subject: Re: [SQL] pg primary key bug?
  Hi,
  Tom Lane wrote:
  pginfo <[EMAIL PROTECTED]> writes:
  
Will upgrade to 8.0 solve this type of problems ?

The problem is probably not Postgres' fault at all.  I'm wondering about
disks with write cacheing enabled.  And you didn't turn off fsync,
I trust?
  About fsync (part from postgresql.conf):

#---
# WRITE AHEAD LOG
#---
# - Settings -
#fsync = true                   # turns forced synchronization on or off
#wal_sync_method = fsync        # the default varies across platforms:
                                # fsync, fdatasync, open_sync, or open_datasync
#wal_buffers = 8                # min 4, 8KB each
# - Checkpoints -
#checkpoint_segments = 3        # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300       # range 30-3600, in seconds
#checkpoint_warning = 30        # 0 is off, in seconds

  Also part from pg logfile:

LOG:  statistics collector process (PID 2716) exited with exit code 1
LOG:  shutting down
LOG:  database system is shut down
LOG:  could not create IPv6 socket: Address family not supported by protocol
LOG:  database system was shut down at 2005-02-11 19:58:26 EET
LOG:  checkpoint record is at 2/BAC39188
LOG:  redo record is at 2/BAC39188; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 2221145; next OID: 826607
LOG:  database system is ready
LOG:  recycled transaction log file "000200BA"
LOG:  recycled transaction log file "000200BB"
LOG:  recycled transaction log file "000200BC"
LOG:  recycled transaction log file "000200BD"
LOG:  recycled transaction log file "000200BE"
WARNING:  index "a_constants_str_pkey" contains 1449 row versions, but table contains 1422 row versions
HINT:  Rebuild the index with REINDEX.
ERROR:  duplicate key violates unique constraint "a_constants_str_pkey"
[the ERROR line above repeats 11 times]
LOG:  received smart shutdown request
FATAL:  terminating connection due to administrator command
[the FATAL line above repeats many times; the excerpt is truncated]

[SQL] smart(er) column aliases

2003-10-14 Thread Iain Sinclair

Hi all,

To take a simple example here is what I wanted to do:

   select 1 as one, 2 as two, one + two as three;

but it doesn't work. I've checked it out though, and I found that I can do this:

   select one, two, one + two from (select 1 as one, 2 as two) as a;

which is acceptable, I guess.

I'm using 7.1 (I think) and will update to 7.3 soon, but I guess that the first 
query still doesn't work.

I'd just like to suggest to the gurus developing this DB that it would be really 
cool if the first query was possible. For me it's just about two things: 
simplicity and efficiency. The queries I'm working on at the moment are a little 
too complex for my liking, and the "1" in this example is typically a call to a 
very involved function whose result is re-used in a number of calculations. 
Obviously, I only want to do it once, and I want to keep the main query as 
simple as possible as things are already out of hand...
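The subquery workaround generalizes to the expensive-function case described above (expensive_function and its argument are hypothetical names, not from the message):

```sql
-- compute the involved function once in the inner query,
-- then reuse its alias in as many expressions as needed:
SELECT v,
       v * 2     AS doubled,
       v + other AS combined
FROM (SELECT expensive_function(x) AS v, other
      FROM sometable) AS s;
```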

cheers all.

Iain