Re: [SQL] Converting epoch to timestamp?

2001-08-01 Thread Richard Huxton

From: "Roberto Mello" <[EMAIL PROTECTED]>

Hi Roberto - long time no see.

> I searched the docs for function to convert epoch to timestamps but
> couldn't find any. Are there any?

richardh=> select now();
          now
------------------------
 2001-08-01 09:25:58+01
(1 row)

richardh=> select extract('epoch' from now());
 date_part
-----------
 996654359
(1 row)

richardh=> select '1970-01-01'::date + '996654342 seconds'::interval;
        ?column?
------------------------
 2001-08-01 08:25:42+01
(1 row)

That's the only way I've ever known to do it. Note the one-hour offset
because I'm currently on BST rather than GMT (ignore the few seconds'
discrepancy - that's just me querying and then cutting and pasting).

I'd be interested to find out if there's a neater way. I can't believe there
isn't some EPOCH_BASE constant that could be used.
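
If you needed this a lot, I suppose you could wrap it up in a little SQL
function - just a sketch, untested, and the function name is made up:

create function from_epoch(int4) returns timestamp as '
    -- Unix epoch plus the given number of seconds
    select ''1970-01-01''::date + ($1 * ''1 second''::interval)
' language 'sql';

select from_epoch(996654342);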

- Richard Huxton





Re: [SQL] Converting epoch to timestamp?

2001-08-01 Thread Roberto Mello

On Wed, Aug 01, 2001 at 09:28:39AM +0100, Richard Huxton wrote:

> Hi Roberto - long time no see.

Hey Richard. Yeah. Summer classes and summer jobs :-) I have to finish
my expanded "Porting From Oracle" thingy.

>
> richardh=> select '1970-01-01'::date + '996654342 seconds'::interval;
>         ?column?
> ------------------------
>  2001-08-01 08:25:42+01
> (1 row)

Duh! Guess I could have thought of that. 
Thanks a lot.

-Roberto
-- 
+| Roberto Mello - http://www.brasileiro.net |+
 Computer Science, Utah State University - http://www.usu.edu
   USU Free Software & GNU/Linux Club - http://fslc.usu.edu
 Space Dynamics Lab, Developer  http://www.sdl.usu.edu
Veni, Vidi, VCR - I came, I saw, I videotaped it




Re: [SQL] Converting epoch to timestamp?

2001-08-01 Thread Tom Lane

"Richard Huxton" <[EMAIL PROTECTED]> writes:
>> I searched the docs for function to convert epoch to timestamps but
>> couldn't find any. Are there any?

> richardh=> select '1970-01-01'::date + '996654342 seconds'::interval;
>         ?column?
> ------------------------
>  2001-08-01 08:25:42+01
> (1 row)

> That's the only way I've ever known to do it. Note the one-hour offset
> because I'm currently in BST rather than GMT timezone (ignore the few
> seconds discrepancy - that's me querying then cutting and pasting).

Yes.  In timezones further away from Greenwich, it'd be a lot worse :-(
-- the problem is that '1970-01-01'::date will be interpreted as
midnight local time.  A more reliable way is to specify the epoch as
a full timestamp, rather than letting the system promote date to
timestamp:

regression=# select now(), date_part('epoch', now());
          now           | date_part
------------------------+-----------
 2001-08-01 09:52:34-04 | 996673954
(1 row)

regression=# select '1970-01-01 00:00 GMT'::timestamp  + '996673954 sec'::interval;
        ?column?
------------------------
 2001-08-01 09:52:34-04
(1 row)

> Be interested to find out if there's a neater way. Can't believe there's not
> some EPOCH_BASE constant that could be used.

Now that you mention it, there is:

regression=# select 'epoch'::timestamp  + '996673954 sec'::interval;
        ?column?
------------------------
 2001-08-01 09:52:34-04
(1 row)
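
If you want to double-check the round trip, feeding the result back through
date_part should hand you the original number:

select date_part('epoch', 'epoch'::timestamp + '996673954 sec'::interval);
-- expected: 996673954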



Personally, though, I tend to rely on the binary equivalence between
abstime and Unix-timestamp integers:

regression=# select 996673954::int4::abstime::timestamp;
        ?column?
------------------------
 2001-08-01 09:52:34-04
(1 row)

which can be written more presentably as

regression=# select timestamp(996673954);
       timestamp
------------------------
 2001-08-01 09:52:34-04
(1 row)

(This last didn't use to work, but it seems fine in 7.0 and 7.1.  It
will fail in 2038 when timestamps stop looking like int4, but by then
hopefully we'll have changed things...)

regards, tom lane




[SQL] Exclusion List

2001-08-01 Thread Michael Richards

I've got two tables: one with a list of users (only about 5000 
entries) and another with a list of userids that have already been 
sent letters. I'm trying to join these two efficiently so that I get every 
user who hasn't been sent a letter. The problem is that, coupled with the 
five other joins on the users table, the query can take about five minutes 
to run. It's performing the joins and then reducing the list. 

The reduction of the list doesn't seem to be terribly efficient. Here 
are some strategies I've been looking at:

select id from users WHERE 
  id not in (select userid from sentletters where lettertype=1) AND
  aclgroup IN (1,2);   

Seq Scan on users  (cost=0.00..217751.39 rows=5369 width=4)
  SubPlan
->  Seq Scan on sentletters  (cost=0.00..81.06 rows=4405 width=4)

select id from users WHERE not exists 
  (select userid from sentletters where lettertype=1 AND userid=id) 
  AND aclgroup IN (1,2);

Seq Scan on users  (cost=0.00..10980.07 rows=1 width=4)
  SubPlan
->  Index Scan using sentletters_userid_key on sentletters  
(cost=0.00..2.02 rows=1 width=4)

select id from users AS u 
  LEFT JOIN sentletters AS sl ON (lettertype=1 AND sl.userid=u.id) 
where sl.userid IS NULL AND u.aclgroup IN (1,2);

Hash Join  (cost=81.06..550.18 rows=5322 width=12)
  ->  Seq Scan on users u  (cost=0.00..152.53 rows=5322 width=4)
  ->  Hash  (cost=70.05..70.05 rows=4405 width=8)
->  Seq Scan on sentletters sl  (cost=0.00..70.05 rows=4405 
width=8)

All but the last (which is an odd way to do it) have nasty query plans, 
and that's not even counting the joins.

I then need to join these results with a table that has about 200,000 
rows and is joined on the userid and some other key elements.

Any suggestions on this? The full query causing the problem is:
select u.id,u.firstname,u.surname,f2.strval,f3.strval,f4.strval,
   f5.strval,u2.firstname,u2.surname,m.strval 
from 
  users as u 
  JOIN dft_formdata as f1 ON 
(u.id=f1.userid AND u.aclgroup IN (1,2) AND f1.formid=1 AND 
 f1.fieldid=1) 
  LEFT JOIN dft_formdata as f2 ON 
(u.id=f2.userid AND f2.formid=1 AND f2.fieldid=2) 
  JOIN dft_formdata as f3 on 
(u.id=f3.userid AND f3.formid=1 AND f3.fieldid=3) 
  JOIN dft_formdata as f4 on 
(u.id=f4.userid AND f4.formid=1 AND f4.fieldid=5) 
  JOIN relations as r on 
(u.id=r.relatedto AND r.type=2) 
  JOIN users as u2 on 
(u2.id=r.userid AND u2.aclgroup=200) 
  JOIN dft_formdata as f5 on 
(u.id=f5.userid AND f5.formid=1 AND f5.fieldid=4) 
  JOIN dft_formmvl as m on 
(m.id=f5.intval and m.mvlid=1) 
  JOIN dft_formdata as f6 on 
(u.id=f6.userid AND f6.formid=1 AND f6.fieldid=155 AND 
 f6.intval=3) 
  WHERE not exists 
(select userid from sentletters WHERE userid=u.id);

Nested Loop  (cost=0.00..11280.10 rows=1 width=164)
 ->  Nested Loop  (cost=0.00..11276.36 rows=1 width=160)
  ->  Nested Loop  (cost=0.00..11274.33 rows=1 width=144)
   ->  Nested Loop  (cost=0.00..11270.59 rows=1 width=124)
->  Nested Loop  (cost=0.00..11268.56 rows=1 width=96)
 ->  Nested Loop  (cost=0.00..10981.55 rows=1 width=88)
  ->  Nested Loop  (cost=0.00..10977.82 rows=1 width=72)
   ->  Nested Loop  (cost=0.00..10974.10 rows=1 width=56)
->  Nested Loop  (cost=0.00..10970.37 rows=1 width=32)
 ->  Seq Scan on users u  (cost=0.00..10966.65 rows=1 
width=28)
  SubPlan
   ->  Index Scan using sentletters_userid_key on 
sentletters  (cost=0.00..2.01 rows=1 width=4)
 ->  Index Scan using dft_formdata_userid_field on 
dft_formdata f1  (cost=0.00..3.71 rows=1 width=4)
->  Index Scan using dft_formdata_userid_field on 
dft_formdata f2  (cost=0.00..3.71 rows=1 width=24)
   ->  Index Scan using dft_formdata_userid_field on dft_formdata 
f3  (cost=0.00..3.71 rows=1 width=16)
  ->  Index Scan using dft_formdata_userid_field on dft_formdata 
f4  (cost=0.00..3.71 rows=1 width=16)
 ->  Seq Scan on relations r  (cost=0.00..185.43 rows=5079 
width=8)
->  Index Scan using users_pkey on users u2  (cost=0.00..2.02 
rows=1 width=28)
   ->  Index Scan using dft_formdata_userid_field on dft_formdata f5  
(cost=0.00..3.71 rows=1 width=20)
  ->  Index Scan using dft_formmvl_pkey on dft_formmvl m  
(cost=0.00..2.02 rows=1 width=16)
 ->  Index Scan using dft_formdata_userid_field on dft_formdata f6  
(cost=0.00..3.71 rows=1 width=4)


Yes, I know the query itself is really nasty, but I think 5 minutes is 
a little excessive.

-Michael
_
 http://fastmail.ca/ - Fast Free Web Email for Canadians



Re: [SQL] Converting epoch to timestamp?

2001-08-01 Thread Josh Berkus

Tom,

> regression=# select timestamp(996673954);
>        timestamp
> ------------------------
>  2001-08-01 09:52:34-04
> (1 row)
> 
> (This last didn't use to work, but it seems fine in 7.0 and 7.1.  It
> will fail in 2038 when timestamps stop looking like int4, but by then
> hopefully we'll have changed things...)

Yeah, sure.  That's what my father said in 1964 when they talked about
the potential problems with 2-digit dates on the UNIVAC II ...

;-)

-Josh


__AGLIO DATABASE SOLUTIONS___
   Josh Berkus
  Complete information technology  [EMAIL PROTECTED]
   and data management solutions   (415) 565-7293
  for law firms, small businesses   fax 621-2533
and non-profit organizations.  San Francisco




Re: [SQL] Exclusion List

2001-08-01 Thread Tom Lane

"Michael Richards" <[EMAIL PROTECTED]> writes:
> The reduction of the list doesn't seem to be terribly efficient. Here 
> are some strategies I've been looking at:

> select id from users WHERE 
>   id not in (select userid from sentletters where lettertype=1) AND
>   aclgroup IN (1,2);   

Try an EXCEPT, along the lines of

(select id from users where conditions) except
(select userid from sentletters where other-conditions);
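
With the tables from your message, that would be roughly (untested, using the
column names from your query):

(select id from users where aclgroup in (1,2))
except
(select userid from sentletters where lettertype = 1);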

regards, tom lane




Re: [SQL] Exclusion List

2001-08-01 Thread Michael Richards

> "Michael Richards" <[EMAIL PROTECTED]> writes:
>> The reduction of the list doesn't seem to be terribly efficient.
>> Here are some strategies I've been looking at:
>
>> select id from users WHERE
>> id not in (select userid from sentletters where lettertype=1) AND
>> aclgroup IN (1,2);
>
> Try an EXCEPT, along the lines of
>
> (select id from users where conditions) except
> (select userid from sentletters where other-conditions);

I tried EXCEPT, but in this case I'd have to use it to extract the 
list of ids and then re-join that with users again to get the rest of 
the data from the users table :(
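
Something along these lines, where the EXCEPT result gets folded back into 
users just to pick up the other columns (a sketch, untested):

select u.id, u.firstname, u.surname
from users as u
where u.aclgroup in (1,2)
  and u.id in (select id from users where aclgroup in (1,2)
               except
               select userid from sentletters where lettertype = 1);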

-Michael

_
 http://fastmail.ca/ - Fast Free Web Email for Canadians



[SQL] Outer Join Syntax

2001-08-01 Thread Richard Rowell

I'm doing a feasibility study on porting our flagship product to Postgres
(from MS_SQL).  I have run across a few snags, the largest of which is the
outer join syntax.  MS has some nice syntactical sugar with the *=/=*
operators that Postgres doesn't seem to support.  I am confused about how
to replicate the behavior, however.  We often link together many tables via
outer joins to form a view such as:
SELECT Assessment_medical_id, a.Readonly_agency, a.Date_added, ag.name as
'AgencyName',
   y1.Yesno_descrip as 'healthprob', y2.Yesno_descrip as
'MentalIllness', y3.Yesno_descrip as 'MentalTreatment',
   y4.Yesno_descrip as 'drugabuse', d1.Drug_abuse_type_descrip
as 'drug1', d2.Drug_abuse_type_descrip as 'drug2',
   d3.Drug_abuse_type_descrip as 'drug3',
d4.Drug_abuse_type_descrip as 'drug4', d5.Drug_abuse_type_descrip as
'drug5'
   FROM ASSESSMENT_MEDICAL a, AGENCIES ag, YESNO_TYPES02 y1,
YESNO_TYPES02 y2, YESNO_TYPES02 y3, YESNO_TYPES02 y4,
   DRUG_ABUSE_TYPES d1, DRUG_ABUSE_TYPES d2, DRUG_ABUSE_TYPES
d3, DRUG_ABUSE_TYPES d4, DRUG_ABUSE_TYPES d5
   WHERE a.inactive != 'Y'
   AND a.Client_id = $Edit_Client_id
   AND a.Agency_creating *= ag.Agency_id
   AND a.Health_prob *= y1.Yesno_code
   AND a.EmoMental_illness *= y2.Yesno_code
   AND a.Treatment_for_emomental *= y3.Yesno_code
   AND a.AlchoholDrug_abuse *= y4.Yesno_code
   AND a.AlchoholDrug_abuse_type1 *= d1.Drug_abuse_type_id
   AND a.AlchoholDrug_abuse_type2 *= d2.Drug_abuse_type_id
   AND a.AlchoholDrug_abuse_type3 *= d3.Drug_abuse_type_id
   AND a.AlchoholDrug_abuse_type4 *= d4.Drug_abuse_type_id
   AND a.AlchoholDrug_abuse_type5 *= d5.Drug_abuse_type_id


I'm just not grasping how one would accomplish the same using the SQL-92
syntax.

TIA








Re: [SQL] Outer Join Syntax

2001-08-01 Thread Joe Conway

Subject: [SQL] Outer Join Syntax


> I'm doing a feasibility study on porting our flagship product to Postgres
> (from MS_SQL).  I have run across a few snags, the largest of which is the
> outer join syntax.  MS has some nice syntactical sugar with the *=/=*
> operators that Postgres doesn't seem to support.  I am confused about how
> to replicate the behavior, however.  We often link together many tables via

See http://www.postgresql.org/idocs/index.php?queries.html

You also might want to take a look at
http://www.postgresql.org/idocs/index.php?explicit-joins.html

HTH,

--Joe





Re: [SQL] Outer Join Syntax

2001-08-01 Thread Tom Lane

"Richard Rowell" <[EMAIL PROTECTED]> writes:
> outer join syntax.  MS has some nice syntactical sugar with the *=/=*
> operators that Postgres doesn't seem to support.

Some of us view it as "nonstandard and broken", not as "nice syntactical
sugar" ;-).

> I'm just not grasping how one would accomplish the same using the SQL-92
> syntax.

SELECT ...
FROM ASSESSMENT_MEDICAL a
LEFT JOIN AGENCIES ag ON a.Agency_creating = ag.Agency_id
LEFT JOIN YESNO_TYPES02 y1 ON a.Health_prob = y1.Yesno_code
LEFT JOIN ...
WHERE a.inactive != 'Y' AND a.Client_id = $Edit_Client_id;
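
The remaining lookups follow the same pattern where the "LEFT JOIN ..." is
above, for example (continuing the sketch with your column names):

LEFT JOIN YESNO_TYPES02 y2 ON a.EmoMental_illness = y2.Yesno_code
LEFT JOIN DRUG_ABUSE_TYPES d1 ON a.AlchoholDrug_abuse_type1 = d1.Drug_abuse_type_id
LEFT JOIN DRUG_ABUSE_TYPES d2 ON a.AlchoholDrug_abuse_type2 = d2.Drug_abuse_type_id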

While this is more typing, it's clear which conditions determine
joinability and which are filters on the overall result, which is a
critical semantic issue that the Oracle/MS syntax fails miserably on.
For example, suppose I want to add a constraint like y1.col3 = 'foo'.
Now, if there are no rows meeting that constraint for a given value of
a.Health_prob = y1.Yesno_code, does that mean I want to have
ASSESSMENT_MEDICAL rows with that Health_prob show up with nulls
substituted for the y1 fields?  Or does it mean that I don't want to see
those rows at all?  AFAICS there's no way to make that distinction with
the Oracle/MS approach.  With the standard syntax, you put the
additional constraint in ON in one case, and in WHERE in the other case.
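
Concretely, with the tables above (col3 being a hypothetical column, as in the
example):

-- keep ASSESSMENT_MEDICAL rows with no matching y1 row having col3 = 'foo',
-- NULL-extended:
SELECT a.*, y1.Yesno_descrip
FROM ASSESSMENT_MEDICAL a
LEFT JOIN YESNO_TYPES02 y1
  ON a.Health_prob = y1.Yesno_code AND y1.col3 = 'foo';

-- drop those rows entirely instead:
SELECT a.*, y1.Yesno_descrip
FROM ASSESSMENT_MEDICAL a
LEFT JOIN YESNO_TYPES02 y1
  ON a.Health_prob = y1.Yesno_code
WHERE y1.col3 = 'foo';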

The standard syntax also allows you to control the join order by
parenthesization, which is not so important for your star-query
example, but is critical if you want to join two outer-join results
together --- otherwise, you can't control which combinations result
in partially-NULL rows out, and which result in no rows out.

regards, tom lane
