Re: [SQL] SQL Challenge: Skip Weekends

2002-06-21 Thread Jean-Luc Lachance

Here is the algorithm (pseudocode; day_of_the_week is 0 for Sunday):

date     := now - day_of_the_week
interval := interval + day_of_the_week
date     := date + int(interval / 5) * 7 + (interval mod 5)
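
Below is a rough SQL-function sketch of that idea.  The function name
add_workdays is made up; it assumes the start date itself falls on a
weekday, shifts the remainder arithmetic by one so that an interval that is
an exact multiple of 5 lands on a Friday instead of spilling into the
weekend, and does not try to handle holidays:

CREATE FUNCTION add_workdays(date, integer) RETURNS date AS '
  SELECT ($1 - EXTRACT(dow FROM $1)::int)                  -- back up to Sunday
       + (($2 + EXTRACT(dow FROM $1)::int - 1) / 5) * 7    -- 7 calendar days per 5 workdays
       + (($2 + EXTRACT(dow FROM $1)::int - 1) % 5) + 1;
' LANGUAGE SQL;

SELECT add_workdays('2002-06-21', 3);  -- a Friday plus 3 workdays -> 2002-06-26 (Wednesday)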


Josh Berkus wrote:
> 
> Folks,
> 
> Hey, I need to write a date calculation function that calculates the date
> after a number of *workdays* from a specific date.   I pretty much have the
> "skip holidays" part nailed down, but I don't have a really good way to skip
> all weekends in the calculation.  Here are the ideas I've come up with:
> 
> Idea #1: Use a reference table
> 1. Using a script, generate a table of all weekends from 2000 to 2050.
> 2. Increase the interval by the number of weekends that fall in the relevant
> period.
> 
> Idea #2:  Some sort of calculation using 5/7 of the interval, adjusted
> according to the day of the week of our starting date.  My head hurts trying
> to figure this one out.
> 
> --
> -Josh Berkus
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Bad SUM result

2002-07-08 Thread Jean-Luc Lachance

That is because your query is generating a cartesian product.

Try:

SELECT (
  SELECT SUM(totalprice) 
  FROM invoices 
  WHERE custnumber = '1'
) - (
  SELECT SUM(paymentamount) 
  FROM payments
  WHERE custnumber = '1'
)



Roy Souther wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> I have an invoice database that has two tables one for invoices and one for
> payments. I want to get the account balance for a client by subtracting the
> sum of all payments from the sum off all invoices for that client.
> 
> Here is the SQL that I thought should work.
> SELECT SUM(t0.totalprice)-SUM(t1.paymentamount) FROM invoices t0, payments t1
> WHERE t0.custnumber='1' AND t1.custnumber='1'
> 
> It works fine if there is only one invoice and one payment but as soon as
> there is more than one of either it screws up. For each match found in
> payments the invoice sum is added to the total. So if client 1 purchased a
> $100 item then made a $10 payment the SQL would return the balance of $90
> just fine. When the client makes a second payment of $15 the balance is $75
> but this SQL returns ($100+$100)-($10+$15) = $175. A third payment of $1
> would return ($100+$100+$100)-($10+$15+$1) = $274.
> 
> Could some one explain this to me and recommend an SQL command that would work
> please? I could do this using a temp table but that would be very messy as I
> would really like it to be a single SQL command.
> - --
> Roy Souther <[EMAIL PROTECTED]>
> http://www.SiliconTao.com
> 
> Linux: May the source be with you.
> 
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.0.6 (GNU/Linux)
> Comment: For info see http://www.gnupg.org
> 
> iEYEARECAAYFAj0oo9MACgkQCbnxcmEBt43qFQCgtjCs7khKGH+2LYd78O9mA3h4
> vDQAn0GkKkuYl1Kybgm/ITO4LbO1WWLX
> =1G4R
> -END PGP SIGNATURE-
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org





Re: [SQL] Error with DISTINCT and AS keywords

2002-07-10 Thread Jean-Luc Lachance

PostgreSQL does not know how to sort 'TEST'.
You must help it by telling it what type it is.
Add ::text after 'TEST' as in 'TEST'::text.
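
With the failing query from the report below, that would be:

select DISTINCT tnr, titel, 'TEST'::text AS testcol from tTitel;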


Maybe PostgreSQL should default to text for unknown types...

JLL


Andreas Schlegel wrote:
> 
> Hi,
> 
> I need some help to let this sql statement run with Postgres 7.2.1
> 
> Doesn't work:
> select DISTINCT tnr, titel, 'TEST' AS testcol from tTitel;
> ERROR:  Unable to identify an ordering operator '<' for type 'unknown'
>  Use an explicit ordering operator or modify the query
> 
> If I remove the DISTINCT keyword it works:
> select tnr, titel, 'TEST' AS testcol from tTitel;
> 
> Greetings,
> Andreas
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://archives.postgresql.org

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] SQL problem with aggregate functions.

2002-07-12 Thread Jean-Luc Lachance

What is wrong with:

select field_group, sum( case when f1 = 'D' then cnt else 0 end) as
D_COUNT,
  sum( case when f1 = 'R' then cnt else 0 end) as R_COUNT,
  sum( case when f1 = 'X' then cnt else 0 end) as X_COUNT
from (select field_group, f1, count (*) as cnt from tab group by
field_group, f1) as ss
group by field_group;

It should be faster because there is less CASE evaluation.


Loyd Goodbar wrote:
> 
> I would suggest something like
> 
> select sum(case when f1 = 'D' then 1 else 0 end) as D_COUNT,
> sum(case when f1 = 'R' then 1 else 0 end) as R_COUNT,
> sum(case when f1 = 'X' then 1 else 0 end) as X_COUNT
> from tab
> where f1 in ('D','R','X')
> 
> Not sure what the "field group" represents.
> 
> HTH,
> Loyd
> 
> On Thu, 11 Jul 2002 10:37:40 METDST, Christoph Haller <[EMAIL PROTECTED]> wrote:
> 
> >>
> >> I've got a table in which there is a field that can have one amongst 3
> >> possible values : D, R, X. Is it possible to get in one query the count of
> >> this different values.Please, note that I don't want to have a querry like
> >> this :
> >> "select count (*) from tab group by f1;", cause i want to get all the possible
> >> count values in one row (these data are already grouped on another field).
> >> To give a more accurate example, here is what I want to retrieve :
> >>
> >> Field group | count of D | count of R | count of X.
> >>
> >> Any clues ?
> >> --
> >What about something like
> >
> >
> >SELECT SUM(f1_d) AS count_d,
> >   SUM(f1_r) AS count_r,
> >   SUM(f1_x) AS count_x
> >FROM (
> > SELECT CASE WHEN f1 = 'D' THEN 1 ELSE 0 END AS f1_d,
> >CASE WHEN f1 = 'R' THEN 1 ELSE 0 END AS f1_r,
> >CASE WHEN f1 = 'X' THEN 1 ELSE 0 END AS f1_x
> > FROM tab ) AS foo ;
> >
> >Regards, Christoph
> >
> >---(end of broadcast)---
> >TIP 6: Have you searched our list archives?
> >
> >http://archives.postgresql.org
> 
> --
> "Why, you can even hear yourself think." --Hobbes
> "This is making me nervous. Let's go in." --Calvin
> [EMAIL PROTECTED]  ICQ#504581  http://www.blackrobes.net/
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] How to update record in a specified order

2002-08-09 Thread Jean-Luc Lachance

Hi all,

I want to update a field with a 'NEXTVAL', but I want the records updated
in a specific order.
Any simple way of doing this other than having to create a temp table?

JLL

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] SQL syntax

2002-08-09 Thread Jean-Luc Lachance

Well, it's Friday and I am still getting vacation messages from
Bob...




Tom Lane wrote:
> 
> Jean-Luc Lachance <[EMAIL PROTECTED]> writes:
> > Can someone *please* temporarely remove
> >"Bob Powell" <[EMAIL PROTECTED]>
> > from the list so we do not get a vacation message for every message one
> > posts.
> 
> I complained to Marc about that a week or more ago, but I guess he
> doesn't want to bounce Bob from the lists just for being incompetent
> about configuring "vacation".  I've set my own mailserver to deny
> connections from hotchkiss.org ...
> 
> regards, tom lane
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] How to update record in a specified order

2002-08-09 Thread Jean-Luc Lachance

As in an ORDER BY clause on the UPDATE... if such a thing existed.

Josh Berkus wrote:
> 
> JLL,
> 
> > I want to update a field with a 'NEXTVAL', but I want the record updated
> > in a specific order.
> > Any simple way of doing this other than having to create a temp table?
> 
> Please be more specific.  What do you mean, "specified order"?
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] query problem

2002-08-30 Thread Jean-Luc Lachance

How about:

select city, min(date) from thetable where date > '2002-07-19
15:39:15+00' group by city;

JLL


Marco Muratori wrote:
> 
> Hi
> suppose I have the following situation:
> 
>   citydate
> -+---
> London   | 2002-08-08 07:05:16+00
> London   | 2002-07-30 13:08:22+00
> London   | 2002-07-30 07:39:15+00
> London   | 2002-07-29 17:51:47+00
> London   | 2002-07-29 17:45:49+00
> London   | 2002-07-29 17:45:47+00
> Paris| 2002-04-08 15:04:28+00
> Paris| 2002-03-29 17:22:18+00
> Paris| 2002-02-15 12:50:32+00
> Paris| 2002-01-22 11:40:22+00
> Paris| 2002-01-07 17:41:23+00
> Paris| 2001-11-12 16:37:37+00
> Paris| 2001-11-05 15:28:23+00
> Paris| 2001-11-05 08:21:19+00
> Oslo | 2002-07-19 15:42:20+00
> Oslo | 2002-07-19 15:42:18+00
> Oslo | 2002-07-18 10:03:58+00
> Oslo | 2002-07-18 08:56:30+00
> Oslo | 2002-07-17 17:17:27+00
> Oslo | 2002-07-17 16:11:38+00
> 
> For each city I have a couple of dates in DESC order.
> For each city i need to get the first record which date
> comes after a given date. If the given date was for example
> "2002-07-19 15:39:15+00", I would get the following
> records:
> 
> London   | 2002-07-29 17:45:47+00
> Oslo | 2002-07-19 15:42:18+00
> 
> Is there a way to obtain this records by performing one
> single query and not by making for each city something like
> "SELECT city,date FROM table WHERE city='London' AND date>'2002-07-19
> 15:39:15+00' ORDER BY date ASC LIMIT 1;"?
> Thanks.
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://archives.postgresql.org

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] query problem

2002-08-30 Thread Jean-Luc Lachance

I think you meant min(date)...
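
That is, the same query with MIN (using the table name from my earlier
reply):

SELECT city, MIN("date") AS first_date_after
FROM thetable
WHERE "date" > '2002-07-19 15:39:15+00'
GROUP BY city
ORDER BY city;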


Josh Berkus wrote:
> 
> Marco,
> 
> > Is there a way to obtain this records by performing one
> > single query and not by making for each city something like
> > "SELECT city,date FROM table WHERE city='London' AND date>'2002-07-19
> > 15:39:15+00' ORDER BY date ASC LIMIT 1;"?
> 
> Close.  Try:
> 
> SELECT city, MAX("date") as last_date
> FROM table
> WHERE "date" > $date
> GROUP BY city
> ORDER BY city
> 
> Though as an aggregate query, this will be slow on large tables.
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] Getting acces to MVCC version number

2002-09-20 Thread Jean-Luc Lachance

Hi all developers,

This is just an idea.

How about making the last MVCC version number available, just like the oid
is?  This would simplify a lot of table design.  You know, having
to add an "updated" timestamp field to detect when a record was updated
while viewing it (a la pgaccess).

That way, if the version numbers do not match, one would know that the
record was updated since it was last retrieved.

What do you think?

JLL

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] [GENERAL] Getting acces to MVCC version number

2002-09-23 Thread Jean-Luc Lachance

That is great!  Thanks for the info.

Tom Lane wrote:
> 
> Jean-Luc Lachance <[EMAIL PROTECTED]> writes:
> > How about making available the MVCC last version number just like oid is
> > available.  This would simplify a lot of table design.  You know, having
> > to add a field "updated::timestamp" to detect when a record was updated
> > while viewing it (a la pgaccess).
> > That way, if the version number do not match, one would know that the
> > reccord was updated since last retrieved.
> 
> > What do think?
> 
> I think it's already there: see xmin and cmin.  Depending on your needs,
> testing xmin might be enough (you'd only need to pay attention to cmin
> if you wanted to notice changes within your own transaction).
> 
> regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [SQL] start and end of the week

2002-09-26 Thread Jean-Luc Lachance

How about:

select current_date - date_part('dow', current_date)::int as starts_on,
       current_date - date_part('dow', current_date)::int + 6 as ends_on;


"John Sebastian N. Mayordomo" wrote:
> 
> How do I get the start and end date of the present week?
> Is this possible?
> 
> For example this week
> Start = Sept. 22
> End   = Sept. 28
> 
> Thank you very much.
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [SQL] Dublicates pairs in a table.

2002-09-27 Thread Jean-Luc Lachance

What's wrong with 
CREATE UNIQUE INDEX foo_both_uniq ON foo(a,b);
???


Richard Huxton wrote:
> 
> On Wednesday 25 Sep 2002 2:10 am, Kevin Houle wrote:
> > I have the same issue with a table that currently holds well
> > over 600,000 rows. The case you left out is this:
> >
> >   INSERT INTO test (c1,c2) VALUES('a','c');
> >   INSERT INTO test (c1,c2) VALUES('c','a');
> >
> > I want that to fail, but I haven't been able to get it to fail
> > using unique indexes. I presume ordering is significant. Instead,
> > I am doing a SELECT prior to insert to insure the pair doesn't
> > already exist. If you've been able to get order-independent
> > pairs restricted to being unique using indexes, I'd like to know
> > about it. :-)
> 
> Functional indexes sir - define a function that puts the columns into a sorted
> order.
> 
> richardh=> CREATE TABLE foo (a text, b text);
> CREATE
> richardh=> CREATE UNIQUE INDEX foo_both_uniq ON foo ( ord_fn(a,b) );
> ERROR:  DefineIndex: index function must be marked iscachable
> richardh=> \i ordfn.txt
> DROP
> CREATE
> richardh=> CREATE UNIQUE INDEX foo_both_uniq ON foo ( ord_fn(a,b) );
> CREATE
> richardh=> insert into foo values ('aa','bb');
> INSERT 332596 1
> richardh=> insert into foo values ('aa','cc');
> INSERT 332597 1
> richardh=> insert into foo values ('bb','aa');
> ERROR:  Cannot insert a duplicate key into unique index foo_both_uniq
> richardh=> insert into foo values ('aa','bb');
> ERROR:  Cannot insert a duplicate key into unique index foo_both_uniq
> 
> Function defined as:
> CREATE FUNCTION ord_fn (text,text) RETURNS text AS '
> SELECT (CASE
> WHEN $1 < $2
> THEN $1 || $2
> ELSE $2 || $1
> END) as t;
> ' LANGUAGE SQL WITH (iscachable);
> 
> --
>   Richard Huxton
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] Dublicates pairs in a table.

2002-09-27 Thread Jean-Luc Lachance

Oh, sorry I missed that. 

Still, if C1 and C2 are interchangeable, a rule could force C1 <= C2 and
swap them if necessary.
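
In practice a BEFORE trigger is the easier place to do that swap, since a
rule whose action inserts into its own table tends to trip the rewriter's
recursion check.  A rough sketch against the test(c1, c2) table from the
example below (function, trigger and index names are made up; 7.3-style
plpgsql; the columns are assumed to be text):

CREATE UNIQUE INDEX test_pair_uniq ON test (c1, c2);

CREATE FUNCTION order_pair() RETURNS trigger AS '
DECLARE
  tmp text;
BEGIN
  IF NEW.c1 > NEW.c2 THEN        -- keep the pair stored as (smaller, larger)
    tmp := NEW.c1;
    NEW.c1 := NEW.c2;
    NEW.c2 := tmp;
  END IF;
  RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER test_order_pair
  BEFORE INSERT OR UPDATE ON test
  FOR EACH ROW EXECUTE PROCEDURE order_pair();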


Richard Huxton wrote:
> 
> On Friday 27 Sep 2002 5:17 pm, Jean-Luc Lachance wrote:
> > What's wrong with
> > CREATE UNIQUE INDEX foo_both_uniq ON foo(a,b);
> > ???
> 
> Because he specifically wanted values of ('a','b') and ('b','a') to be treated
> as equivalent (see quote).
> 
> > > >   INSERT INTO test (c1,c2) VALUES('a','c');
> > > >   INSERT INTO test (c1,c2) VALUES('c','a');
> 
> Note Stephen Szabo's observation that I'd missed the obvious need for some
> separator so ('a','ab') is different from ('aa','b') - Doh!
> 
> - Richard Huxton

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] [GENERAL] CURRENT_TIMESTAMP

2002-09-30 Thread Jean-Luc Lachance

How can you tell the difference between now('statement') and
now('immediate')?
To me they are the same thing. Why not simply now() for the transaction, and
now('CLOCK'), or better yet system_clock() or clock(), for the current time?

JLL

Josh Berkus wrote:
> 
> Tom,
> 
> > I'd be happier with the whole thing if anyone had exhibited a convincing
> > use-case for statement timestamp.  So far I've not seen any actual
> > examples of situations that are not better served by either transaction
> > timestamp or true current time.  And the spec is perfectly clear that
> > CURRENT_TIMESTAMP does not mean true current time...
> 
> Are we still planning on putting the three different versions of now() on the
> TODO?  I.e.,
> now('transaction'),
> now('statement'), and
> now('immediate')
> With now() = now('transaction')?
> 
> I still think it's a good idea, provided that we have some easy means to
> determine now('statement').
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://archives.postgresql.org

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [SQL] [GENERAL] CURRENT_TIMESTAMP

2002-09-30 Thread Jean-Luc Lachance

OK, forget system_clock() or clock(); timeofday() will do.
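
For what it's worth, the difference shows up easily inside a transaction
(timeofday() returns text rather than a timestamp):

BEGIN;
SELECT now(), timeofday();
-- now() stays frozen at the transaction start time,
-- while timeofday() follows the wall clock on each call.
SELECT now(), timeofday();
COMMIT;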


Jean-Luc Lachance wrote:
> 
> How can you make a difference between now('statement'), and
> now('immediate').
> To me they are the same thing. Why not simply now() for transaction, and
> now('CLOCK') or better yet system_clock() or clock() for curent time.
> 
> JLL

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] enforcing with unique indexes..

2002-10-07 Thread Jean-Luc Lachance

Try a rule or a trigger that checks for
NOT EXISTS ( select 1 from eyp_listing where group_id = NEW.group_id and
userid != NEW.userid )
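
A rough trigger version of that check (function and trigger names are made
up, PostgreSQL 7.3+ plpgsql syntax; note that without extra locking two
concurrent inserts could still slip past it):

CREATE FUNCTION check_group_userid() RETURNS trigger AS '
BEGIN
  IF EXISTS (SELECT 1 FROM eyp_listing
             WHERE group_id = NEW.group_id
               AND userid <> NEW.userid) THEN
    RAISE EXCEPTION ''group_id % is already mapped to another userid'', NEW.group_id;
  END IF;
  RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER eyp_listing_one_userid
  BEFORE INSERT OR UPDATE ON eyp_listing
  FOR EACH ROW EXECUTE PROCEDURE check_group_userid();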



"Rajesh Kumar Mallah." wrote:
> 
> Hi ,
> 
> can anyone tell me how can i enforce below in a table.
> I want that no more that one distinct userid exists for a given group_id
> in the table.
> 
> ie i want 1 to 1 mapping between group_id and userid so that , there shud not be a
> single group_id having more that one kind of userid.
> 
> SELECT  group_id  from eyp_listing group by group_id  having  count(distinct userid) 
>> 1  ;
> 
> always returns empty.
> 
> can it be done with some sort of UNIQUE INDEX?
> 
> Regds
> MAllah.
> 
> --
> Rajesh Kumar Mallah,
> Project Manager (Development)
> Infocom Network Limited, New Delhi
> phone: +91(11)6152172 (221) (L) ,9811255597 (M)
> 
> Visit http://www.trade-india.com ,
> India's Leading B2B eMarketplace.
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] Sum of Every Column

2002-10-23 Thread Jean-Luc Lachance

Tom,

You can add
sum( case when <boolean_field> then 1 else 0 end )
for each field that you need.
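
For instance, with two hypothetical boolean columns col_a and col_b, the
report query below would become something like:

SELECT sum(CASE WHEN b.col_a THEN 1 ELSE 0 END) AS col_a_true,
       sum(CASE WHEN b.col_b THEN 1 ELSE 0 END) AS col_b_true
FROM breast_cancer_resources b, agency_contact_info a
WHERE b.id = a.id
  AND a.guideregion = 1
  AND a.list_online = TRUE;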

JLL

Tom Haddon wrote:
> 
> Hi Folks,
> 
> I'm hoping to put together a query that generates a report on a table with
> a large number of boolean fields. This report has to be able to adapt to
> the number of fields in the table. Essentially, I want it to provide the
> sum of TRUE values for each field for a given subset of the table. I've
> got the query that returns the subset of the table (this is based on a
> relationship with another table):
> 
> SELECT breast_cancer_resources.*
> FROM breast_cancer_resources, agency_contact_info
> WHERE breast_cancer_resources.id=agency_contact_info.id
> AND agency_contact_info.guideregion=1
> AND agency_contact_info.list_online=TRUE
> 
> But I'm not sure how to generate the sum for each column. Should I be
> looking elsewhere than SQL to do this for me, such as php (this is for a
> web-based report)?
> 
> Thanks, Tom
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] How do you write this query?

2002-10-31 Thread Jean-Luc Lachance
Thank goodness for nested select!

select data1 from test where data2 = ( select distinct data2 from test
where data1 = 'pooh') and data = 3;

JLL

Richard Huxton wrote:
> 
> On Thursday 31 Oct 2002 6:21 pm, Wei Weng wrote:
> >  data | data1 | data2
> > --+---+---
> > 1 | foo   | bar
> > 2 | greg  | bar
> > 3 | pooh  | bar
> > 4 | dah   | peng
> >
> > I need a query that returns me the "data1" that satisfies the logic of
> > the following pseudo code:
> >
> > 1: select data2 into @out from test where data1 = 'pooh'
> > 2: select data1 from test where data2 = @out and data = 3
> 
> The most literal would be something like:
> 
> SELECT t1.data1 FROM test t1
> WHERE t1.data=3 AND t1.data2 IN
> (SELECT t2.data2
> FROM test t2
> WHERE t2.data1='pooh')
> 
> You can probably get away without the t1/t2 stuff but that should make things
> clear.
> 
> Since Postgresql isn't very good at optimising IN, you might want to rewrite
> it as an EXISTS query instead - see the manuals and mailing list archives for
> details.
> 
> HTH
> --
>   Richard Huxton
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] isAutoIncrement and Postgres

2002-10-18 Thread Jean-Luc Lachance
Are you looking for SERIAL data type?
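
That is, something like (hypothetical table):

CREATE TABLE items (
    id   SERIAL PRIMARY KEY,   -- auto-incrementing, backed by a sequence
    name text
);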



Josh Berkus wrote:
> 
> Jim,
> 
> > Do any existing drivers / database version combinations support the
> > isAutoIncrement method?
> 
> What programming language are you referring to?  VB?  Delphi?
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] date

2002-10-18 Thread Jean-Luc Lachance
Try 

select to_char( '1969-10-22'::date, '-MM-DD');


wishy wishy wrote:
> 
> hi folks,
> we have a PostgreSQL 7.2.2 on i686-pc-linux-gnu, compiled by GCC 2.96
> installation on pogo linux 7.2
> we are facing a data problem when we do the following
> select to_char(to_date('1969-10-22','-MM-DD'),'-MM-DD');
> 
> ERROR:  Unable to convert date to tm
> 
> we have been trying to find a solution for this have you found such
> instances before it there a method to over come this.
> Any help will be greatly appreciated.
> thanks
> kris
> 
> _
> Get faster connections -- switch to MSN Internet Access!
> http://resourcecenter.msn.com/access/plans/default.asp
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] join question

2002-10-18 Thread Jean-Luc Lachance
I think you meant:

select profile.name
from profile, attribute
where ( profile.id = attribute.containerId )
  and ( profile.state = '1020811'
        or ( attribute.name = 'marketsegment'
             and attribute.value = '1020704' ) );


> select profile.name from profile,attribute where
> ((profile.state='1020811') or ((attribute.name='marketsegment') and
> (attribute.value='1020704') and (profile.id=attribute.containerId)));
> 
> Why doesn't this last query return just one row?
> 
> TIA
>

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] BOOLEAN question

2002-10-30 Thread Jean-Luc Lachance
Why not simply:

SELECT COALESCE( (SELECT true FROM ... WHERE boolcol LIMIT 1), FALSE);

JLL

Josh Berkus wrote:
> 
> Tom,
> 
> > Perhaps
> >   SELECT true = ANY (SELECT boolcol FROM ...);
> > or
> >   SELECT true IN (SELECT boolcol FROM ...);
> >
> > Which is not to say that MAX(bool) might not be a nicer solution;
> > but you can definitely do it with SQL-spec constructs.
> 
> Based on some rough testing,
> 
> SELECT true = ANY ( SELECT boolcol FROM complex query )
> 
> Is marginlly faster than
> 
> SELECT max(boolcol) FROM complex query
> 
> With a custom MAX(boolean) function.
> 
> So I'll stick to ANY().
> 
> -Josh
> 
> __AGLIO DATABASE SOLUTIONS___
>Josh Berkus
>   Complete information technology  [EMAIL PROTECTED]
>and data management solutions   (415) 565-7293
>   for law firms, small businessesfax 621-2533
> and non-profit organizations.  San Francisco
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] BOOLEAN question

2002-10-30 Thread Jean-Luc Lachance
Of course, I meant

SELECT COALESCE( (SELECT true FROM ... WHERE ...  AND boolcol LIMIT 1),
FALSE);

Jean-Luc Lachance wrote:
> 
> Why not simply:
> 
> SELECT COALESCE( (SELECT true FROM ... WHERE boolcol LIMIT 1), FALSE);
> 
> JLL
> 
> Josh Berkus wrote:
> >
> > Tom,
> >
> > > Perhaps
> > >   SELECT true = ANY (SELECT boolcol FROM ...);
> > > or
> > >   SELECT true IN (SELECT boolcol FROM ...);
> > >

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] Copying a rowtype variable.

2002-11-07 Thread Jean-Luc Lachance
I would personally like this feature (assigning a composite from another
similar composite) to be added to PL/pgSQL. Another nice feature would be
to be able to insert a composite into a table without having to name all
the attributes.

Just my $.02


"Rison, Stuart" wrote:
> 
> >> 2) I am looking for an elegant way of copying a rowtype variable:
> >>
> >> eg.
> >>
> >> DECLARE
> >>  current_row orf%ROWTYPE;
> >>  previous_row orf%ROWTYPE;
> >> BEGIN
> >>
> >>  LOOP
> >>  -- use cursors or FOR SELECT to get values into current_row
> >>  -- now try this:
> >>
> >>  previous_row = current_row;
> >>  END LOOP;
> >> END;
> >>
> >> Now, as I anticipated, this fails because a rowtype variable is a
> >> composite
> >> variable.  One working alternative is to do:
> >>
> >
> > I haven't tried this.  One thing I notice above is that you're using
> > the equality operator "=" instead of the assignment operator ":="  .
> > Usually Postgres lets you slack on this, but it would be worth trying
> > to see whether that has an effect on the problem.
> >
> 
> Fair point.  But "previous_row := current_row" doesn't work either.
> 
> > Another thing to try is, instead of a simple variable assignment
> >
> > SELECT current_row INTO previous_row;
> >
> > ... and see if that works.
> 
> Well, I had high hopes for that one... but it didn't work either!
> 
> > I'll tinker later today; there has to be a way to do it.
> 
> I'd definitely appreciate further suggestions, but thanks all the same for
> you help.  I have a feeling that you might have to write a PL function to
> perform the operation... but I haven't really thought about it!
> 
> Stuart.
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [SQL] Quartile (etc) ranking in a SQL statement?

2002-11-07 Thread Jean-Luc Lachance
In a PL/pgSQL script, once you know count(*), try

execute ''select * from table limit '' || int4( theCount / 4);

If you want numbering, create a sequence and add nextval() to the query.
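
Put together, a rough sketch of the whole thing outside PL/pgSQL
(hypothetical table scores; the 35 is the precomputed count(*)/4 for a
142-row example; this leans on the subselect's sort order carrying through
to the outer select, which is what the sequence trick relies on anyway):

CREATE SEQUENCE rank_seq;

SELECT nextval('rank_seq') AS rank, s.*
FROM (SELECT * FROM scores ORDER BY score DESC LIMIT 35) AS s;

DROP SEQUENCE rank_seq;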

JLL



Jeff Boes wrote:
> 
> Here's a puzzler:
> 
> Given a query that returns rows ranked by some criteria, how can I write
> another query around it that will give me the (say) first quartile (top
> 25%)?  Another way of putting it is: if I have rows that look like this:
> 
> aaa | 1251
> aba | 1197
> cax | 1042
> ... | ...
> axq |  23
> (142 rows)
> 
> How can I write a query that will return these as
> 
>   1 | aaa | 1251
>   2 | aba | 1197
>   3 | cax | 1042
> ... | ... | ...
> 142 | axq |  23
> 
> --
> Jeff Boes  vox 616.226.9550 ext 24
> Database Engineer fax 616.349.9076
> Nexcerpt, Inc. http://www.nexcerpt.com
>...Nexcerpt... Extend your Expertise
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] System´s database table

2002-11-13 Thread Jean-Luc Lachance
While we are on the subject, 
is there an ERD of the system tables somewhere?

JLL


Josh Berkus wrote:
> 
> Pedro,
> 
> > I´m looking for the name of the table that contains all databases in my
> system. I already see this in the postgre manual, but i´m forgot where 
> 
> pg_database
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] RE: [SQL] System´s database table

2002-11-13 Thread Jean-Luc Lachance
Thanks,  I know about that.  

I was just hoping for a nice diagram.
I guess I will have to wait for 7.3 anyhow, if I do not want to waste my
time doing one for 7.2.

JLL

Paul Ogden wrote:
> 
> It's not ERD but I've found the information in the Developer's Guide
> regarding system catalogs to be useful in the past.
> 
> This http://www.postgresql.org/idocs/index.php?catalogs.html will
> get you started.
> 
> Thanks,
> 
> Paul Ogden
> Claresco Corporation
> 
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:pgsql-sql-owner@;postgresql.org]On Behalf Of Jean-Luc Lachance
> > Sent: Wednesday, November 13, 2002 12:37
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: [SQL] System´s database table
> >
> >
> > While we are on the subject,
> > is there any ERD of the system's table somewhere?
> >
> > JLL
> >
> >
> > Josh Berkus wrote:
> > >
> > > Pedro,
> > >
> > > > I´m looking for the name of the table that contains all
> > databases in my
> > > system. I already see this in the postgre manual, but i´m
> > forgot where 
> > >
> > > pg_database
> > >
> > > --
> > > -Josh Berkus
> > >  Aglio Database Solutions
> > >  San Francisco
> > >
> > > ---(end of broadcast)---
> > > TIP 4: Don't 'kill -9' the postmaster
> >
> > ---(end of broadcast)---
> > TIP 4: Don't 'kill -9' the postmaster
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] calculating interval

2002-11-22 Thread Jean-Luc Lachance
Watch out!  36.85 weeks could have 37 Sundays...

Take into account the day of the week of the first and last day.

Also, process the first and last day separately and work with whole days,
if you want to exclude part of a day.
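
For the whole-day part, a quick way to count the Sundays in a date range
(generate_series exists in newer PostgreSQL releases) is:

SELECT count(*) AS sundays
FROM generate_series('2002-10-01'::date,
                     '2002-10-15'::date,
                     '1 day') AS d(dt)
WHERE EXTRACT(dow FROM d.dt) = 0;   -- 2 for this range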


Dan Langille wrote:
> 
> On 22 Nov 2002, praveen  vejandla wrote:
> 
> > Dear All,
> >
> > Is there any way in postgresql to calculate the interval between
> > two times excluding specific days,specific duration.
> >
> > Ex:
> > timestamp1 : 2002-10-01 10:30AM
> > timestamp2 : 2002-15-01 04:50PM
> >
> > suppose if i need the difference between timestamp1,timestamp2 but
> > i don't want to count how many sun days are coming, i want to
> > ignore all sundays in between,i want to ignore certain timings(say
> > 10.00 AM to 5:00PM)s also,then how can I get the duration in this
> > way.
> 
> My guess: write a function.  Calculating the number of days between the
> two dates is easy.  To avoid certain days, of the week, in your case,
> Sunday, I would count the number of whole weeks between the two dates.
> 
> test=# select '2002-10-01 10:30AM'::timestamp - '2002-15-01
> 04:50PM'::timestamp;
> ?column?
> 
>  258 days 16:40
> 
> In this case 258/7 = 36.85... So you know you have 36 Sundays in there.
> This will need adjusting for non-full weeks.
> 
> Hope that gets you started.
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://archives.postgresql.org

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] Min and Max

2002-12-02 Thread Jean-Luc Lachance
If you do not mind non standard, how about:

SELECT DISTINCT ON(id_father) * FROM children ORDER BY id_father,
child_age;


Dennis Björklund wrote:
> 
> On 29 Nov 2002, Sergio Oshiro wrote:
> 
> > How can I get the rows of the children name and its "father" such that
> > they have the min child_ages?
> >
> > -- the following does not return the child_name...
> > select id_father, min(child_age) from children group by id_father;
> > select id_father, max(child_age) from children group by id_father;
> 
> You could join one of the above with the table itself and get the result.
> Something like
> 
> select *
>   from (  select id_father, min(child_age)
> from children
> group by id_father) as r,
> children
>  where children.id_father = r.id_father
>and children.min = r.min;
> 
> --
> /Dennis
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] Query for filtering records

2002-12-03 Thread Jean-Luc Lachance
Eric, try:

select num, p1, p2 ... from contacts
  inner join groups on (contacts.num = groups.contactNum)
  where groups.groupNum = a
  and contacts.p3 = b
  and not exists (
    select 1 from groups g2
    where g2.contactNum = groups.contactNum and
      g2.groupNum != a);

or

select num, p1, p2 ... from contacts
  inner join groups on (contacts.num = groups.contactNum)
  where groups.groupNum = a
  and contacts.p3 = b
  and groups.contactNum in (
    select contactNum from groups group by contactNum having count(*) =
1);

The IN version may still be faster, as the sub-select should be evaluated
only once.


JLL



eric soroos wrote:
> 
> I'm having trouble subtracting groups from other groups.
> 
> I've got a data model that has the following essential features:
> 
> create table contacts (num int, properties);
> create table groups (groupNum int, contactNum int);
> 
> Where not all contacts will be in a group, some groups will contain most contacts, 
>and there will be something like hundreds of groups and tens of thousands of 
>contacts.  I allow people to build groups using criteria, which I need to 
>programatically translate to sql.
> 
> One somewhat common pattern is:
> 
> Select all contacts in group a, who have property b, and who aren't in groups 
>c,d,e,f...
> 
> My first shot was subqueries:
> 
> select num, p1,p2 ... from contacts
> inner join groups using (contacts.num=groups.contactNum)
> where groups.groupNum=a
> and contact.p3=b
> and not num in (select contactNum from groups where groupNum=c)
> and not num in (select contactNum from groups where groupNum=d)
> and not num in (select contactNum from groups where groupNum=e)
> and not num in (select contactNum from groups where groupNum=f)
> 
> This is  slow.  agonizingly so.
> 
> With an inner join, I'm not convinced that the subtraction is actually correct., but 
>it is much faster. Unfortunatley, faster incorrect answers are rarely helpful.
> 
> Outer joins seem even worse than subselects for speed, but it does appear to give 
>the correct answer. (example with a single join.)
> 
> select num from contacts
>left outer join groups
> on (contacts.num=groups.contactNum
> and  groups.groupNum=b)
>where
>dl_groupDonor._groupNum is null
>and p3=c
> 
> I've got to be missing something here, because this is much slower from the (slow) 
>procedural system that I'm porting from.
> 
> I've been avoiding using union / intersect since I don't really ever know what 
>columns are going to be in the query. perhaps I should revisit that decision and try 
>to work around it.
> 
> eric
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[SQL] Rules/Triggers Trade-offs

2002-12-06 Thread Jean-Luc Lachance
Hi all!

Is there a guideline on the use of rules compared to triggers when both
can be use to achieve the same result?

JLL

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] Rules/Triggers Trade-offs

2002-12-06 Thread Jean-Luc Lachance
Thanks for the info.

Do you mean that if an update affects more than one row I should use
triggers because the rules will be executed only once?

JLL


Richard Huxton wrote:
> 
> On Friday 06 Dec 2002 4:03 pm, Jean-Luc Lachance wrote:
> > Hi all!
> >
> > Is there a guideline on the use of rules compared to triggers when both
> > can be use to achieve the same result?
> 
> If I can use rules I do. Rules rewrite the query so are processed once,
> whereas triggers get processed for every row.
> --
>   Richard Huxton

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [SQL] Rules/Triggers Trade-offs

2002-12-09 Thread Jean-Luc Lachance
Josh,

Thanks for the info.  

I need to change an insert into an update when the key already exists.
I have been using a rule to test it on a small set (table), and it
works.
"Rules can't use indexes" just scared me. I will have to test on a
larger set.
Also, I had the impression that if a trigger returned NULL, one would
get the equivalent of "DO NOTHING".
Am I wrong with that assumption?
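
For what it is worth, the same insert-into-update behaviour can be sketched
as a BEFORE trigger, which is exactly where that return-NULL-means-do-nothing
question comes in (made-up table kv(k integer primary key, v text), 7.3+
syntax, usual concurrency caveats apply):

CREATE FUNCTION kv_insert_or_update() RETURNS trigger AS '
BEGIN
  IF EXISTS (SELECT 1 FROM kv WHERE k = NEW.k) THEN
    UPDATE kv SET v = NEW.v WHERE k = NEW.k;
    RETURN NULL;        -- suppress the original INSERT
  END IF;
  RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER kv_upsert
  BEFORE INSERT ON kv
  FOR EACH ROW EXECUTE PROCEDURE kv_insert_or_update();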

JLL

 

Josh Berkus wrote:
> 
> Bruce, Richard,
> 
> > Triggers are mostly for testing/modifying the row being
> > inserted/updated, while rules are better for affecting other rows or
> > other tables.
> 
> Hmmm.  Thought that there were also some other criteria:
> 
> 1) Rules can't use indexes to do their processing, so Rules which query large
> secondary tables can be a bad idea (maybe this has changed?)
> 
> 2) Only Rules can "DO INSTEAD"; thus, only Rules are good for defining
> Read/Write views.
> 
> 3) There are no AFTER Rules, making, for example, a rule with a table check on
> the new data impractical, so you'd want to use Triggers or Constraints
> 
> etc.
> 
> There are, IMHO, some things Rules are better for, and some things Triggers
> are better for.   I tend to use all Triggers except for updatable views,
> simply because using a mix of Rules and Triggers can be very hard to keep
> track of, but YMMV.
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Problem with a lookup table! Please help.

2002-12-09 Thread Jean-Luc Lachance
Use the AS keyword to introduce a column alias.

SELECT thisverylongtablename.thefirstfield AS title, ... FROM ...

JLL

Josh Berkus wrote:
> 
> Chris,
> 
> > In my capacity as a vet student, I'm trying to create a database of
> antibiotics.  The way that I have set it up so far is to have one main table
> listing the antibiotics versus their respective efficacies against the four
> major groups of bacteria.  Due to the way that my PHP frontend works, I have
> assigned a number to the efficacy - 1 being excellent and 5 being poor
> efficacy against the particular bacterium.  However, I now want to have a new
> table which converts numbers into words.  The problem is this, if I join the
> main table with the "translation" lookup table, the column names for each of
> the four categories in the main default to the column name in the lookup
> table and hence are all the same.  What SQL expression should I use to
> translate the cryptic numbers into plain english whilst preserving the column
> headings in the main table?
> 
> Please post your table definitions as SQL statements.
> 
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] union query doubt:

2002-12-11 Thread Jean-Luc Lachance
I think you meant:

SELECT date,
sum( case when point = 1 then flow else 0 end) as flow1,
sum( case when point = 2 then flow else 0 end) as flow2,
sum( case when point = 3 then flow else 0 end) as flow3,
sum( case when point = 4 then flow else 0 end) as flow4,
sum( case when point = 5 then flow else 0 end) as flow5,
sum( case when point = 6 then flow else 0 end) as flow6
from samples group by date;



Frank Bax wrote:
> 
> At 11:21 AM 12/11/02, javier garcia wrote:
> >I've got a table with three fields: DATE, POINT, FLOW. The POINT field can
> >have values among 1 and 6. So, for a same date I have six different points
> >with the correspondings flows.
> >I would like to make a query to obtain something like:
> >DATE POINT1 POINT2 POINT3 POINT4 POINT5 POINT6
> >
> >where for a date I have the flows data of the different points.
> 
> SELECT date,
> case when point = 1 then flow else 0 end as flow1,
> case when point = 2 then flow else 0 end as flow2,
> case when point = 3 then flow else 0 end as flow3,
> case when point = 4 then flow else 0 end as flow4,
> case when point = 5 then flow else 0 end as flow5,
> case when point = 6 then flow else 0 end as flow6
> from samples
> 
> There have been several messages recently about this - search on crosstab
> or pivot - a couple of other options were presented.
> 
> Frank
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] function replace doesnt exist

2002-12-12 Thread Jean-Luc Lachance
If you want character-for-character translation, like the tr command under Unix,
use TRANSLATE.
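
For the single-character A to B substitution in the example below, that is
simply:

UPDATE commandlist SET command = translate(command, 'A', 'B');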




Andy Morrow wrote:
> 
> Hi
> 
> im trying to execute an update command on a postgresql DB table using
> pgAdmin II
> 
> im using the following statement
> 
> UPDATE commandlist SET command = REPLACE (command,'A','B')
> 
> commandlist is the table name
> command is the column
> and i want to change the value A to B
> 
> but it's giving me the following error message
> 
> an error has occured in pgAdmin II:frmSQLInput.cmdExecute_Click:
> 
> Number: -2147467259
> Description: Error while executing the query;
> ERROR: Function'replace(varchar, unknown, unknown)' does not exist
> Unable to identify a function that satisfies the given argument types
> You may need to add explicit typecasts
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] CSV import

2003-01-28 Thread Jean-Luc Lachance
You can achieve the same result with:

tr -d '"\015' < file_name.txt | psql {etc...}

The Unix EOL is LF, not CR.


Guy Fraser wrote:
> 
> Hi
> 
> You will need two text utilities {dos2unix and sed} to do this in the simplest
> way. They are fairly standard text utilities and are probably already on your
> machine.
> 
> This is how I would do it :
> 
> sed "s/\"//g" file_name.txt \
> | dos2unix \
> | pgsql -c "COPY table_name FROM STDIN USING DELIMITERS ',';" db
> 
> Where "file_name.txt" is the csv file you want to import and "table_name" is
> the previously created table you want to insert the data into and db is the
> database name.
> 
> How this works is "sed" {stream editor} removes all the double quote
> characters '"' then pipes the output through "dos2unix" which converts all the
> CRLF {DOS EOL} sequences into CR {UNIX EOL} characters, then pipes the data to
> "pgsql"  with a command that does a bulk insert into the table of the database
> you have selected.
> 
> Guy
> 
> Oliver Vecernik wrote:
> > Hi again!
> >
> > After investigating a little bit further my CSV import couldn't work
> > because of following reasons:
> >
> > 1. CSV files are delimited with CR/LF
> > 2. text fields are surrounded by double quotes
> >
> > Is there a direct way to import such files into PostgreSQL?
> >
> > I would like to have something like MySQL provides:
> >
> > LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name.txt'
> >[REPLACE | IGNORE]
> >INTO TABLE tbl_name
> >[FIELDS
> >[TERMINATED BY '\t']
> >[[OPTIONALLY] ENCLOSED BY '']
> >[ESCAPED BY '\\' ]
> >]
> >[LINES TERMINATED BY '\n']
> >[IGNORE number LINES]
> >[(col_name,...)]
> >
> > Has anybody written such a function already?
> >
> > Regards,
> > Oliver
> >
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] CSV import

2003-01-31 Thread Jean-Luc Lachance
In DOS and Windows, text lines end with <CR><LF>.
In Unix, text lines end with <LF> only.

              hex     dec  oct
<CR> = CTRL-M or 0x0D or 13 or 015
<LF> = CTRL-J or 0x0A or 10 or 012



Chad Thompson wrote:
> 
> >
> > Unix EOL is LF not CR.
> >
> >
> 
> Is this the only difference between a dos and unix text file?
> 
> Thanks
> Chad
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] Forcing query to use an index

2003-03-04 Thread Jean-Luc Lachance
I beg to differ.

A NULL field means not set.

Having to use a workaround because the database does not index NULLs is
one thing, but making it a general rule is another.

Having NULLs indexed would also speed things up when "IS NULL" is part of
the query.

Until then...

JLL



Greg Stark wrote:
> 
> One suggestion I'll make about your data model -- I'm not sure it would
> actually help this query, but might help elsewhere:
> 
>  WHERE ( C.Disabled > '2003-02-28'
>   OR C.Disabled IS NULL
>)
> 
> Don't use NULL values like this. Most databases don't index NULLs (Oracle) or
> even if they do, don't make "IS NULL" an indexable operation (postgres).
> There's been some talk of changing this in postgres but even then, it wouldn't
> be able to use an index for an OR clause like this.
> 
> If you used a very large date, like -01-01 as your "not deactivated" value
> then the constraint would be C.disabled > '2003-02-28' and postgres could use
> an index on "disabled".
> 
> Alternatively if you have a disabled_flag and disabled_date then you could
> have an index on disabled_flag,disabled_date and uhm, there should be a way to
> use that index though I'm not seeing it right now.
> 
> This won't matter at first when 99% of your customers are active. And ideally
> in this query you find some way to use an index to find "kate" rather than
> doing a fully table scan. But later when 90% of the clients are disabled, then
> in a bigger batch job where you actually want to process every active record
> it could prevent postgres from having to dig through a table full of old
> inactive records.
>

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] order by date desc but NULLs last

2003-02-10 Thread Jean-Luc Lachance
Why not try the obvious first?

order by gradedtime is null, gradedtime desc;


"Ross J. Reedstrom" wrote:
> 
> On Sun, Feb 09, 2003 at 05:29:29PM -0500, A.M. wrote:
> > I have a simple query that sorts by descending date but the NULL dates
> > show up first. Is there a way I can sort so they come last without
> > sorting ascending?
> >
> > SELECT submittime,score,gradedtime FROM student_gradedmaterial WHERE
> > gradedmaterialid=3 and studentid=102 order by gradedtime desc;
> >
> >  submittime  | score | gradedtime
> > -+---+
> >  2003-01-30 22:56:38 |   |
> >  2003-01-31 03:42:29 |99 | 2003-02-06 14:21:43.043587
> >
> > but what I want is all the graded items first in gradedtime desc and
> > NULL afterwards. I do need to keep the NULL score rows. (So I get the
> > latest submitted grade for the assignment but also any ungraded
> > submission information.)
> 
> You need to ORDER BY a _function_ of the gradedtime column, substituting
> an extreme value for NULL. Try this:
> 
>  SELECT submittime,score,gradedtime FROM student_gradedmaterial
>  WHERE gradedmaterialid=3 and studentid=102 order by
>  coalesce(gradedtime,'-infinity') desc;
> 
> Ross
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] [PHP] faster output from php and postgres

2003-05-27 Thread Jean-Luc Lachance
KISS.

Why not use PHP to concatenate the authors while pub_id stays the same?
If you insist on having each author in its own column,
put them at the end and concatenate them with a separator.

jll


Chadwick Rolfs wrote:
> 
> So, I have the same problem, but I need all authors for each publication
> to show up in it's own column.  I tried the full join query from a
> suggestion off pgsql-sql, but it only returns ONE author id TWICE instead
> of ALL authors at once.
> 
> I'll do some RTFMing of the joins.. and post any results I get
> 
> BUT, right now, looping over each publication with php isn't taking that
> long.  I would like to know how to make this query, though!
> 
> Please let me know how to get a result like:
> 
> |All Authors|Title|Source|Year|Type|Length|Keywords|
> 
> 
> If there is a way on the SQL side to do this, that is ;>
> 
> Here we go:
> 
> CREATE TABLE "author" (
> "auth_id" integer DEFAULT
> nextval('"author_temp_auth_id_seq"'::text) NOT NULL,
> "first" text,
> "last" text,
> "auth_last_updated" timestamp with time zone,
> Constraint "author_temp_pkey" Primary Key ("auth_id")
> );
> 
> CREATE UNIQUE INDEX auth_id_author_key ON author USING btree (auth_id);
> 
> CREATE UNIQUE INDEX auth_last_updated_author_key ON author USING btree
> (auth_last_updated);
> 
> CREATE TABLE "publication" (
> "copyis" text,
> "pub_id" integer DEFAULT nextval('publication_pub_id_seq'::text)
> NOT NULL,
> "title" text,
> "source" text,
> "year" text,
> "month" text,
> "length" text,
> "type" text,
> "keywords" text,
> "copyright" text,
> "abstract" text,
> "pdformat" text,
> "pub_last_updated" timestamp with time zone
> );
> 
> CREATE UNIQUE INDEX publication_pub_id_key ON publication USING btree
> (pub_id);
> 
> CREATE INDEX keywords_publication_key ON publication USING btree
> (keywords);
> 
> CREATE UNIQUE INDEX pub_last_updated_publication_ke ON publication USING
> btree (pub_last_updated);
> 
> CREATE UNIQUE INDEX pub_id_publication_key ON publication USING btree
> (pub_id);
> 
> CREATE TABLE "pub_auth" (
> "pub_auth_id" integer DEFAULT
> nextval('"pub_auth_temp_pub_auth_id_seq"'::text) NOT NULL,
> "pub_id" integer,
> "auth_id" integer,
> Constraint "pub_auth_temp_pkey" Primary Key ("pub_auth_id")
> );
> 
> CREATE INDEX pub_id_pub_auth_key ON pub_auth USING btree (pub_id);
> 
> CREATE INDEX auth_id_pub_auth_key ON pub_auth USING btree (auth_id);
> 
> On Sat, 24 May 2003, Frank Bax wrote:
> 
> > Finding previous examples of complex joins in archives is not likely an
> > easy thing to find.
> >
> > pg_dump -s -t author -t publication -t pub_auth [database] | grep -v ^--
> >
> > Change [database] to the name of your database - this command will dump out
> > schema relative to your request.  Post the results to this list.  Then ask
> > us the question "how do I write a SELECT that produces...[ you finish this
> > sentence]".  Question probably more appropriate to the list you mentioned,
> > but I expect there are people here who are just as capable of answering the
> > question.  I've even seen examples where the process goes through several
> > emails before SQL produces desired results exactly.
> >
> >  >How would a join make this easier?
> >
> > I have always found that one properly constructed complex query is always
> > "cheaper" in runtime than numerous queries inside a foreach loop.  Your
> > final query will likely include joining a table to itself (this can
> > sometimes be a difficult concept to grasp).
> >
> > Frank
> >
> >
> > At 11:50 AM 5/24/03, Chadwick Rolfs wrote:
> >
> >
> > >I'm glad this came up, because I have the same type of problem.  Except,
> > >I don't see how a join can work... of course, I'm not really schooled in
> > >this stuff.
> > >
> > >I also have three tables: author, publication, and pub_auth.
> > >
> > >There are multiple authors for some publications, so it is necessary to
> > >check each publication selected for ALL authors.  I'm doing this with a
> > >foreach loop on the result of each publication key returned.
> > >
> > >How would a join make this easier?
> > >
> > >I'm browsing the pgsql-sql archives now, but that may take a week.  I'm
> > >not sure what to search for...
> > >
> 
> -Chadwick
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] Getting rid of accents..

2003-05-29 Thread Jean-Luc Lachance
Have a look at translate().  It behaves like the unix command 'tr'.


Randall Lucas wrote:
> 
> Hi Mallah,
> 
> I had this problem once, and put together this bunch of regexes.  It's
> by no means optimal, but should solve 90% and would easily be adapted
> into a plperl function.
> 
> Begin perl:
>$value =~ s/[\xc0-\xc6]/A/g;
>$value =~ s/[\xc7]/C/g;
>$value =~ s/[\xc8-\xcb]/E/g;
>$value =~ s/[\xcc-\xcf]/I/g;
>$value =~ s/[\xd1]/N/g;
>$value =~ s/[\xd2-\xd6\xd8]/O/g;
>$value =~ s/[\xd9-\xdc]/U/g;
>$value =~ s/[\xdd]/Y/g;
> 
>$value =~ s/[\xe0-\xe6]/a/g;
>$value =~ s/[\xe7]/c/g;
>$value =~ s/[\xe8-\xeb]/e/g;
>$value =~ s/[\xec-\xef]/i/g;
>$value =~ s/[\xf1]/n/g;
>$value =~ s/[\xf2-\xf6\xd8]/o/g;
>$value =~ s/[\xf9-\xfc]/u/g;
>$value =~ s/[\xfd\xff]/y/g;
> 
> On Tuesday, May 27, 2003, at 04:55 PM, <[EMAIL PROTECTED]> wrote:
> 
> >
> >
> > Is there any easy way for converting accented text to
> > closest text  without accents in postgresql ?
> >
> > eg:
> >
> > BÂLÂ MORGHÂB  to  BALA MORGHAB
> >
> >
> >
> >
> > Regds
> > Mallah.
> >
> >
> > -
> > Get your free web based email at trade-india.com.
> >"India's Leading B2B eMarketplace.!"
> > http://www.trade-india.com/
> >
> >
> >
> > ---(end of
> > broadcast)---
> > TIP 3: if posting/reading through Usenet, please send an appropriate
> > subscribe-nomail command to [EMAIL PROTECTED] so that your
> > message can get through to the mailing list cleanly
> >
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] sort for ranking

2003-07-07 Thread Jean-Luc Lachance
Andreas,

try:

select user_sum, nextval('tipp_eval_seq') - 1 as ranking from (
  select user_sum from tbl_sums order by user_sum desc) as ss;


JLL


Andreas Schmitz wrote:
> 
> Hello *,
> 
> I have a little problem that confuses me. We are gathering values from a table
> as a sum to insert them into another table. I also need to get a ranking at
> insert (i.e. highest points will get first place and so on). I tried to
> involve a sequence to qualify the ranking by select at insert.
> 
> So I tried the following (smaller example)
> 
> select setval('tipp_eval_seq',1);
> select sum_user,nextval('tipp_eval_seq')-1 as ranking from tbl_sums order by
> ranking desc, user_sum asc;
> 
>   user_sum | ranking
> --+-
>46 |  30
>45 |  26
>44 |  28
>43 |  25
>42 |   1
>41 |   2
>39 |   3
>38 |  27
>36 |  19
>35 |  18
>34 |  20
>31 |  24
>30 |  17
>29 |  15
>28 |  16
>27 |  12
>26 |  11
>25 |  23
>24 |  21
>23 |  10
>19 |  13
>16 |   9
>12 |   7
>11 |   8
>10 |  29
> 8 |   6
> 7 |   5
> 6 |  14
> 2 |   4
> 1 |  22
> (30 rows)
> 
> As you can see, the sums are sorted correctly but the ranking is a mess. I
> recognized that the select seems to follow primarily the internal table
> order. Is there any way to solve this nicely. Hints and solutions are
> appreciated.
> 
> Thanks in advance
> 
> -Andreas
> 
> --
> Andreas Schmitz - Phone +49 201 8501 318
> Cityweb-Technik-Service-Gesellschaft mbH
> Friedrichstr. 12 - Fax +49 201 8501 104
> 45128 Essen - email [EMAIL PROTECTED]
> 
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] summing tables

2003-07-15 Thread Jean-Luc Lachance

Erik,

If your intent is to get a running total of a and b ordered by seq, you
should try this (assuming the table name is t):

update t set c = ( select sum(a) + sum(b) from t t1 where t1.seq <=
t.seq);

You should have an index on seq.
If the table is very large, it is going to be painfully slow.
In that case you may want to think about using a function to step thru
each row.
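
For what it's worth, a rough sketch of such a function (untested; it assumes
the table is named t, that seq is unique, and that NULLs in a or b should
count as 0):

CREATE OR REPLACE FUNCTION fill_running_total() RETURNS integer AS '
DECLARE
    rec      RECORD;
    running  int4 := 0;
    n        int4 := 0;
BEGIN
    -- walk the table once in seq order, keeping the running total in a variable
    FOR rec IN SELECT seq, a, b FROM t ORDER BY seq LOOP
        running := running + coalesce(rec.a, 0) + coalesce(rec.b, 0);
        UPDATE t SET c = running WHERE seq = rec.seq;
        n := n + 1;
    END LOOP;
    RETURN n;  -- number of rows updated
END;
' LANGUAGE plpgsql;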

JLL

Erik Thiele wrote:
> 
> hi,
> 
> i have a table consisting of 4 integers.
> 
> seq is for making the table ordered. (ORDER BY SEQ ASC)
> a,b,c maybe null
> 
>  seq | a  | b  | c
> -+++---
>0 |  1 |  2 | 3
>1 |  1 |  2 |
>2 |  5 |  7 |
>3 | -2 | -4 |
> 
> i am needing a sql statement to do
> 
> c=a+b+"the c of the row with seq one less than myself"
> 
> this statement has to run over the whole table, in seq order.
> 
> how can this be acomplished???
> 
> cu&thanks
> erik
> 
> --
> Erik Thiele
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] min() and NaN

2003-07-21 Thread Jean-Luc Lachance
If a compare with NaN is always false, how about rewriting it as:
result = ((arg1 < arg2) ? arg2 : arg1).

Or better yet, swap arg1 and arg2 when calling float8smaller.
Use float8smaller( current_min, value).

JLL

Tom Lane wrote:
> 
> "Michael S. Tibbetts" <[EMAIL PROTECTED]> writes:
> > I'd expect the aggregate function min() to return the minimum, valid
> > numeric value.  Instead, it seems to return the minimum value from the
> > subset of rows following the 'NaN'.
> 
> Not real surprising given than min() is implemented with float8smaller,
> which does this:
> 
> result = ((arg1 > arg2) ? arg1 : arg2);
> 
> In most C implementations, any comparison involving a NaN will return
> "false".  So when we hit the NaN, we have arg1 = min so far, arg2 = NaN,
> comparison yields false, result is NaN.  On the next row, we have
> arg1 = NaN, arg2 = next value, comparison yields false, result is next
> value; and away it goes.
> 
> We could probably make it work the way you want with explicit tests for
> NaN in float8smaller, arranged to make sure that the result is not NaN
> unless both inputs are NaN.  But I'm not entirely convinced that we
> should make it work like that.  The other float8 comparison operators
> are designed to treat NaN as larger than every other float8 value (so
> that it has a well-defined position when sorting), and I'm inclined to
> think that float8smaller and float8larger probably should behave
> likewise.  (That actually is the same as what you want for MIN(), but
> not for MAX() ...)
> 
> Comments anyone?
> 
> regards, tom lane
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] min() and NaN

2003-07-22 Thread Jean-Luc Lachance
Hey! Here is a (maybe stupid) idea.  Why not disallow 'NaN' for a float?

JLL




Stephan Szabo wrote:
> 
> On Tue, 22 Jul 2003, Bruce Momjian wrote:
> 
> > Well, my 2 cents is that though we consider NULL when ordering via ORDER
> > BY, we ignore it in MAX because it really isn't a value, and NaN seems
> > to be similar to NULL.
> >
> > When doing ORDER BY, we have to put the NULL value somewhere, so we put
> > it at the end, but with aggregates, we aren't required to put the NULL
> > somewhere, so we ignore it.  Should that be the same for NaN?  I just
> > don't see how we can arbitrarly say it is greater/less than other
> > values.
> 
> But we already do. When doing a less than/greater than comparison, 'NaN'
> is considered greater than normal values which is different from NULL
> which returns unknown for both.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL]

2003-09-29 Thread Jean-Luc Lachance
Wouldn't:

select date( lastlogin) as day, count(*)
from users
where date( lastlogin) > current_date - MaxDays * interval ''1 day''
group by date( lastlogin);

be more efficient?
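
If the function should still return one row per day, the grouped query can
drive the loop directly. A rough sketch, assuming r is declared RECORD and the
function is changed to return a matching SETOF type (inside the single-quoted
function body the interval literal keeps its doubled quotes):

FOR r IN SELECT date( lastlogin) AS day, count(*) AS cnt
           FROM users
          WHERE lastlogin > current_date - MaxDays * interval ''1 day''
          GROUP BY date( lastlogin)
          ORDER BY day
LOOP
    RETURN NEXT r;
END LOOP;

One caveat: days on which nobody logged in will simply not appear, whereas the
original per-day loop returns a zero count for them.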



Tom Lane wrote:
> 
> Dan Langille <[EMAIL PROTECTED]> writes:
> >  WHERE lastlogin between current_date - interval \''' ||
> > quote_literal(i - 1) || '' days\'
> >  AND current_date - interval \''' ||
> > quote_literal(i) || '' days\''';
> 
> IIRC, quote_literal() puts single quotes around its result.  So you have
> too many quotes there.  Given that you know i is an integer, you don't
> really need quote_literal for it.  Actually, you don't need EXECUTE
> here at all.  Why not just
> 
> FOR i IN 1..MaxDays LOOP
> SELECT count(*)
>   INTO r
>   FROM users
>  WHERE lastlogin between current_date - (i-1) * interval ''1 day''
>  AND current_date - i * interval ''1 day'';
> RETURN NEXT r;
> END LOOP;
> 
> regards, tom lane
> 
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] How can I produce the following desired result?

2003-10-15 Thread Jean-Luc Lachance
select distinct on( goodid) * from table order by goodid, storehistoryid
desc, totalnum, operationdate;


> aicean wrote:
> 
> How can I produce the following desired result?
> 
> goodidtotalnum   operationdate storehistoryid
>132  35.000  09-28-2003 66
>135  11.500  09-28-2003 61
>132  35.000  09-27-2003 60
>135  11.000  09-28-2003 59
>135  12.000  09-28-2003 58
>134 100.000  09-28-2003 57
>134 112.000  09-27-2003 56
>131   0.000  09-26-2003 54
>131  33.000  09-26-2003 51
>131  -2.000  09-26-2003 50
>  3 550.000  09-26-2003 49
>  3  52.000  09-26-2003 48
>132  35.000  09-27-2003 42
>  3 124.000  09-25-2003 41
>131  59.000  09-25-2003 40
>132  57.000  09-26-2003 39
>131   2.000  09-24-2003 38
>   3   2.000  09-20-2003 23
> 
> result:
> goodidtotalnum   operationdate storehistoryid
>132  35.000  09-28-2003 66
>135  11.500  09-28-2003 61
>134 100.000  09-28-2003 57
>131   0.000  09-26-2003 54
>  3 550.000  09-26-2003 49
> 
> I need to select the rows whose storehistoryid is the max for the same
> goodid.
> 
> 
> Thanks  in advance
> 
> 
>  aicean
>  Mailto:[EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] Sorting problem

2003-10-15 Thread Jean-Luc Lachance
You are obviously not using C locale.
If you can't change it for some reason, you can use:

select * from accounts order by int4( trim( acno, '#'));

JLL

"George A.J" wrote:
> 
> hi all,
> i am using postgres 7.3.2. i am converting a mssql database to
> postgres.
> now i am facing a strange problem. sorting based on a varchar field is
> not working
> as expected. the non alphanumeric characters are not sorting based on
> the ascii
> value of them.
> 
> i have the following table structure..
> 
> create table accounts
> (
>   AcNo varchar (10),
>   Name varchar(100),
>   balance numeric(19,4)
> )
> 
> when i used the query select  * from accounts order by acno. the
> result is not correct
> 
> suppose that the acno field contains values '###1' ,'###2' ,'##10' ,
> '#100'
> the sort order in postgres is
> '###1'
> '##10'
> '#100'
> '###2'
>  But i want the result as follows
> '###1'
> '###2'
> '##10'
> '#100'
> 
> that means the ascii value of # should be considered for sorting..
> what is the problem. is it the behaviour of postgres.
> do i need to change any configuration. i am using all default
> configurations
> or is it a bug...?
> the problem actually is of < & > operators for varchar.
> 
> in a simple comparison
> 
> select '###2' < '##10'
> 
> returns false but i need true.
> 
> is there any solution exist. even if i replaced # with any non
> alphanumeric
> character the result is same..
> 
> pls help
> 
> jinu jose
> 
> --
> Do you Yahoo!?
> The New Yahoo! Shopping - with improved product search

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] SUM() & GROUP BY

2004-05-07 Thread Jean-Luc Lachance
Try:

SELECT d.divisions_name, s.pd_geo, COUNT(s.pd_geo)
FROM ser s, ser_divisions d
WHERE s.ser_divisions = '3131'
  AND s.ser_divisions = d.divisions_id
GROUP BY d.divisions_name, s.pd_geo;
Martin Kuria wrote:

Thanks Huxton,

Sorry for not explaining fully here is what I would like to achieve:

When I do:

SELECT s.pd_geo, COUNT(s.pd_geo) FROM ser s GROUP BY s.pd_geo;
OUTPUT:
 pd_geo | count
--------+-------
      1 |    49
      2 |    39
      3 |     6
      4 |    54
It outputs the number of entries the Divisions have made

Here is what Division table contains:

SELECT * FROM ser_divisions;

 divisions_name | divisions_id
----------------+--------------
 DEE            |         3131
 DEPI           |         3133
 DED            |         3134
 GBH            |         3136
Now I would like to get to know how each Division answered i.e.

SELECT s.pd_geo, COUNT(s.pd_geo)
FROM ser s
WHERE s.ser_divisions = '3131'
GROUP BY s.pd_geo;
output:

 pd_geo | count
--------+-------
      1 |     9
      2 |     2
      3 |     6
      4 |     5
But this is the output I intend to get:

 divisions_name | pd_geo | count
----------------+--------+-------
 DEE            |      1 |     9
 DEE            |      2 |     2
 DEE            |      3 |     6
 DEE            |      4 |     5
How do I achieve the above results please do advice thanks again.

Kind Regards
+-+
| Martin W. Kuria (Mr.) [EMAIL PROTECTED]
++


>From: Richard Huxton <[EMAIL PROTECTED]>
>To: Martin Kuria <[EMAIL PROTECTED]>
>CC: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]
>Subject: Re: [SQL] SUM() & GROUP BY
>Date: Fri, 07 May 2004 09:00:43 +0100
>
>Martin Kuria wrote:
>>Hi again I have two tables I would like to query i.e. service table
>>and division table
>>
>>SELECT s.pd_geo, COUNT(s.pd_geo) FROM ser s GROUP BY s.pd_geo;
>>OUTPUT:
>>pd_geo | count
>>--
>>  1|   49
>>  2|   39
>>  3|6
>>  4|54
>>
>>SELECT d.divisions_name, d.divisions_id)
>>FROM ser s, ser_divisions d
>>WHERE d.divisions_id = s.ser_divisions;
>>
>>division_name | divisions_id
>>--
>>  DEC|   6
>>  DEPI   |   7
>>  DRC|8
>>
>>How can I create a query that displays  How the divisions answered
>>the question please do assist.
>
>Martin - you'll need to explain exactly what you want. Can you show
>what  outputs you would like given the above data?
>
>--
>   Richard Huxton
>   Archonet Ltd
_
Tired of spam? Get advanced junk mail protection with MSN 8. 
http://join.msn.com/?page=features/junkmail

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] where is this problem (trigger)

2004-05-15 Thread Jean-Luc Lachance
Have you noticed it is not the same function?...
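A quick way to double-check which body the server actually has for either
function:

SELECT proname, prosrc FROM pg_proc WHERE proname LIKE 'notify_%';

pg_dump only writes out what is stored in pg_proc, so if the double quotes are
in prosrc they will be in the dump too.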
Theodore Petrosky wrote:
Great I got the double quotes in the trigger... like
this:
CREATE FUNCTION notify_jobinfo() RETURNS "trigger"
AS '
BEGIN
EXECUTE ''NOTIFY
"''||TG_RELNAME||''_''||NEW.jobnumber||''"'';
RETURN NEW;
END
' LANGUAGE plpgsql;
and it works great... however, i did a pg_dump of the
db  to back it up. On a lark I started looking through
the file and I decided to look at the dumped trigger
and this is what I see.
--
-- TOC entry 29 (OID 17180)
-- Name: notify_on_update(); Type: FUNCTION; Schema:
public; Owner: postgres
--
CREATE FUNCTION notify_on_update() RETURNS "trigger"
AS '  
BEGIN
EXECUTE ''NOTIFY ''||TG_RELNAME||''_''||NEW.jobnumseq;
RETURN NEW;
END
'
LANGUAGE plpgsql;

Ouch... it looks like pg_dump forgot my double quotes.
Do you have any ideas? If I reimport this dump file
the triggers won't work when it meets a row with a dot
in the column. Of course I can just update my data to
use the underscore instead of the dot.
Ted
--- Stephan Szabo <[EMAIL PROTECTED]>
wrote:
On Wed, 12 May 2004, Theodore Petrosky wrote:

I can not seem to update these rows because of the dot in the jobnumber
field. I have found that I can change the dot to an underscore but I
thought I would ask if there is a better solution.

here is the error:

UPDATE jobinfo SET isbilled = false WHERE jobnumber = '1162.01';
ERROR:  syntax error at or near ".01" at character 20
CONTEXT:  PL/pgSQL function "notify_jobinfo" line 2 at execute statement

From the docs, it looks like NOTIFY takes an identifier as a name.
Foo_1023.01 is not a valid identifier so you might want to double quote
the string since "Foo_1023.01" is one.
---(end of
broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



__
Do you Yahoo!?
SBC Yahoo! - Internet access at a great low price.
http://promo.yahoo.com/sbc/
---(end of broadcast)---
TIP 6: Have you searched our list archives?
   http://archives.postgresql.org

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] Multitable uniqueness ?

2004-05-26 Thread Jean-Luc Lachance
Do you really need MANY-TO-MANY between customers and projects?
I can see customers owning many projects, but do you really have 
projects belonging to many customers?

If not, fold cust_proj into projects.
Otherwise, UNIQUE (cp_id, stall_no) on stalls should be enough.
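
For instance, a minimal sketch of the stalls table with that constraint
(column names follow your description, the types are guesses):

CREATE TABLE stalls (
    stall_id  serial PRIMARY KEY,
    cp_id     int4 NOT NULL REFERENCES cust_proj( cp_id),
    stall_no  varchar NOT NULL,
    UNIQUE (cp_id, stall_no)
);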

Andreas wrote:
Hi folks,
Is there a way to have something like this :  UNIQUE (table_1.id, 
table_2.xxx)

I got some tables that have a couple of foreign keys. Now I try to 
minimize those relationships to clean up the mess.   :-}
We do business fairs. (???)  Like c-bit only a few magnitudes smaller.
So we have projects and rent stalls to customers.

customers (c_id, ...)
projects (p_id,...)
there is a relation   cust_proj (cp_id,  c_fk, p_fk, status_fk)
with a UNIQUE constraint  (c_fk, p_fk)
A customer can have several orders, contacts, ... tied to a project.
Those look like this   stalls (stall_id, cp_id, stall_no, ...)
o_id   PRIMARY KEY
cp_fk  FOREIGN KEY that ties to customer and project
stall_no   is a varchar
It should be unique within a project.
Will I have to integrate the project.id into the stalls-table ?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings



Re: [SQL] Multitable uniqueness ?

2004-05-27 Thread Jean-Luc Lachance
I can't see your problem.
UNIQUE (cp_id, stall_no) will enforce what you want.
Unless you want to use the same customers table to show who visited
which stall. In that case you need another relation.
You can't have a "stall rented by" and a "stall visited by" in the same
table.

Andreas wrote:
Jean-Luc Lachance schrieb:
Do you really need MANY-TO-MANY between customers and projects?
I can see customers owning many projects, but do you really have 
projects belonging to many customers?

In this case yes.
projects (
  1, 'x-fair 2003';
  2, 'y-fair 2003';
  3, 'x-fair 2004')
customer (
  1, 'X ltd';
  2, 'Y';
  3, 'Z')
maybe all 3 have a stall on project 1.
c1 and c2 attend project 2 where c2 has 2 stalls one inside and one 
outside the hall.

I have the cust_project relation to tie other objects like proposals and 
letters to something small and common for all the communication to a 
customer.

If not, fold cust_proj into projects.
Otherwise, UNIQUE (cp_id, stall_no) on stalls should be enough.

cp_id implies uniquness of (customer, project) couples.
So with UNIQUE (cp_id, stall_no) we get
--> UNIQUE (c_id, p_id, stall_no)
This'd be too much. Stall_no is the number of the cubicle the customers
rent.
It needs to be unique within one project so that we can relate it to a
specific spot on the area, send visitors there when they ask us, and
print those numbers in the flyer.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] Is there a faster way to do this?

2004-06-15 Thread Jean-Luc Lachance
One way to do it would be to:
Not put the percentile in the sales table;
Create a percentile table with a foreign key to the sales table primary
key and an int4 percentile column:

CREATE TABLE percentiles(
    fkey  INT8 PRIMARY KEY REFERENCES sales( pk_sales),
    percentile INT4 );

Create a sequence for that ancillary table:

CREATE SEQUENCE percent_seq;

When ready to create the percentiles, truncate the percentile table and
reset the sequence so nextval() starts at 1 again:

TRUNCATE percentiles;
SELECT setval( 'percent_seq', 1, false);

Now fill it from your sales with the proper order by (the subselect makes
sure nextval() is applied to the rows in sorted order):

INSERT INTO percentiles
    SELECT pk_sales, nextval( 'percent_seq') / tile_size
    FROM ( SELECT pk_sales FROM sales ORDER BY sales_value) AS ss;
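
If you later need the percentile next to the sales row itself, a simple join
brings it back (same placeholder names as above):

SELECT s.*, p.percentile
FROM sales s
JOIN percentiles p ON p.fkey = s.pk_sales;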

HTH

Patrick Hatcher wrote:

pg: 7.4.2
RedHat 7.2
Can I get some advice on a possible faster way of doing this:
Scenario:  Each day I update a column in a table with an internal
percentile value.  To arrive at  this value, I'll get a count of records
with sales > 0 and then divide this count by the total number of tile
groups I want.  So for example:
Total records w/sales > 0 = 730,000
 tile# = 100
total percentile groups (730,000 / 100):7300
Now that I have the total number of groups I need, I cycle through my
recordset, grab the key field and the current percentile number and stuff
the values into a temp table. I mod the current row counter against the
total percentile group number.  If it is 0 then I add 1 to the  current
percentile number .  After inserting records into the temp file I then
update the main table.
Using the example above, the first 7300 records would get a  percentile
number of 1, the next 7300  records would get  a percentile number of 2,
then 3,4,5,etc.
Unfortunately, I am going record by record in a loop and the process takes
upwards of 20mins.  Is there a faster way to do this?  I thought about
using limit and offset, but I'm not sure how I would accomplish it.
Below is the function I currently use.  Thanks for any help provided
CREATE OR REPLACE FUNCTION cdm.percentile_calc()
  RETURNS text AS
'DECLARE
 v_interval int4;
 v_tile int4;
 v_percentile int4;
 v_check int4;
 v_count int4;
 v_rowcount int4;
 myRec  RECORD;
BEGIN
 v_count:=0;
 v_tile:= 100;
 v_percentile:=1;
 v_rowcount :=1;
 v_check:=0;
 /* Get count of records with val_purch > 0 */
 select into v_count count(*)  from cdm.cdm_indiv_mast where
val_purch_com >0;
 /* this number will be used as part of our MOD to tell when to add one
to our percentile */
 v_interval := v_count / v_tile;
 CREATE TEMP TABLE cdmperct (f1 int8, f2 int2);
 FOR myRec IN  select indiv_key from cdm.cdm_indiv_mast where
val_purch_com  >0 order by val_purch_com desc  LOOP
  INSERT INTO cdmperct values (myRec.indiv_key,v_percentile);
  v_check = mod(v_rowcount,v_interval);
  IF v_check = 0 THEN
   v_percentile:=v_percentile+1;
  END IF;
  v_rowcount:= v_rowcount+1;
 END LOOP;
   UPDATE cdm.cdm_indiv_mast SET percentiler = f2 from  cdmperct where
indiv_key = f1;
 DROP TABLE cdmperct;
 RETURN  \'DONE\';
END; '
  LANGUAGE 'plpgsql' IMMUTABLE;
Patrick Hatcher
Macys.Com
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] How to delete the not DISTINCT ON entries

2004-06-16 Thread Jean-Luc Lachance
If you do not have foreign key restrictions, create a temp table from 
the select as:

CREATE TEMP TABLE tmp AS SELECT DISTINCT ON (location) location, time, 
report FROM weatherReports ORDER BY location, time DESC;

TRUNCATE weatherReports; INSERT INTO weatherReports SELECT * FROM tmp;
HTH
Achilleus Mantzios wrote:
Mr Christoph Haller wrote on Jun 16, 2004:

Referring to the DISTINCT ON example
SELECT DISTINCT ON (location) location, time, report
FROM weatherReports
ORDER BY location, time DESC;

maybe smth like 

delete from weatherReports where (location,time,report) not in 
(SELECT DISTINCT ON (location) location, time, report FROM weatherReports 
ORDER BY location, time DESC)

Note:
Order by is very important, since it affects which rows are deleted.

How would I delete those entries skipped by the DISTINCT ON expression?
TIA
Regards, Christoph

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] SELECT from a list

2004-07-25 Thread Jean-Luc Lachance

Markus Bertheau wrote:
On Sun, 25.07.2004, at 15:18, Keith Gallant wrote:
Hello
I am wondering if it is possible to use a SINGLE LIKE statement for a
selection from a list.
For example: If I want to return all results that a phrase starts with a
number, can I make a call similar to the following:
SELECT * FROM table WHERE phrase LIKE
{'0%','1%','2%','3%','4%','5%','6%','7%','8%','9%'};
If not is there an easier way than having to call this:
SELECT * FROM table WHERE phrase LIKE '0%' OR phrase LIKE '1%' OR phrase
LIKE '2%' OR phrase LIKE '3%' OR phrase LIKE '4%' OR phrase LIKE '5%' OR
phrase LIKE '6%' OR phrase LIKE '7%' OR phrase LIKE '8%' OR phrase LIKE
'9%';

WHERE SUBSTRING(phrase FROM 1 FOR 1) IN ('0', '1', )

Better yet:
SELECT * FROM table WHERE phrase ~ '^[0-9]';


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [SQL] SELECT MAX(c) FROM (SELECT ... FOR UPDATE) AS foo

2004-08-17 Thread Jean-Luc Lachance
This query does not make sense to me.
Why would you create an updatable subquery just to get the highest value?
Maybe you are trying to achieve something other than what the query
suggests. Would you care to put in words what you want to do?

JLL
Markus Bertheau wrote:
Hi,
why is the following query not allowed:
SELECT MAX(position) FROM (SELECT position FROM classes WHERE name =
'foo' FOR UPDATE OF classes) AS foo
It's clear which rows should be locked here, I think.
Thanks

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] SELECT MAX(c) FROM (SELECT ... FOR UPDATE) AS foo

2004-08-17 Thread Jean-Luc Lachance
If your intent is to insert a new record with position incremented by 1, 
you should use a trigger. Look at the autoincrement thread from a few days 
ago.
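
Something along these lines, for example (an untested sketch with made-up
function and trigger names; to be really safe against concurrent inserts it
still needs a lock or a unique constraint on (class, position)):

CREATE OR REPLACE FUNCTION set_field_position() RETURNS trigger AS '
BEGIN
    -- fill in position only when the caller did not supply one
    IF NEW.position IS NULL THEN
        SELECT coalesce(max(position), 0) + 1 INTO NEW.position
          FROM class_fields
         WHERE class = NEW.class;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER class_fields_position
    BEFORE INSERT ON class_fields
    FOR EACH ROW EXECUTE PROCEDURE set_field_position();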


Markus Bertheau wrote:
On Tue, 17.08.2004, at 16:12, Bruno Wolff III wrote:
SELECT MAX(position) FROM (SELECT position FROM classes WHERE name =
'foo' FOR UPDATE OF classes) AS foo
It's clear which rows should be locked here, I think.
Even if it was allowed, it probably wouldn't be good enough because it won't
protect against newly inserted records.

Can you detail an example where this wouldn't be good enough?
In a PL/pgSQL function I'm doing
PERFORM position FROM class_fields WHERE class = arg_class_name;
INSERT INTO class_fields (class, field, position) VALUES
(arg_class_name, arg_field_name, (SELECT MAX(position) FROM class_fields
WHERE class = arg_class_name));
Is this unsafe?
The question initially arose because I wanted to do something similar to
SELECT INTO var_new_position MAX(position) FROM class_fields WHERE class
= arg_class_name FOR UPDATE OF class_fields;
which didn't work.
Thanks

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [SQL] Complicated "group by" question

2004-08-25 Thread Jean-Luc Lachance
Andrew,
If assign is not a many-to-many relation,
why did you not fold accept_id into assign?
Any way, here is the query you need:
select assign.reviewer_id, ss.max_assign_date,
  accept.assign_id, accept.accept_id
from (
  select reviewer_id, max( assign_date) as max_assign_date
  from assign group by reviewer_id) as ss, assign, accept
where ss.reviewer_id = assign.reviewer_id
  and ss.max_assign_date = assign.assign_date
  and assign.assign_id = accept.assign_id;

Andrew Perrin wrote:
I have a table of people ("reviewers"), a table of review assignments
("assign"), and a table of review acceptances ("accept"). I would like to
be able to write a query to return the latest (e.g., max(assign_date))
assignment for each reviewer, plus the acc_id field from "accept".  I
think I should be able to do this with a GROUP BY clause, but am having no
luck.
Table structure:
reviewers       assign          accept
-----------     -----------     -----------
reviewer_id     assign_id       accept_id
...             reviewer_id     assign_id
                assign_date     ...
                ...
Thanks for any guidance.
Andy
--
Andrew J Perrin - http://www.unc.edu/~aperrin
Assistant Professor of Sociology, U of North Carolina, Chapel Hill
[EMAIL PROTECTED] * andrew_perrin (at) unc.edu
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] ORDER BY and NULLs

2004-09-19 Thread Jean-Luc Lachance
select ... order by "TO" is not null, "TO", "FROM";

If you have a large number of rows (with or without nulls) it is faster if
you use a partial index.

create index ... on ...("TO");
create index ... on ...("TO") where "TO" is null;
JLL
[EMAIL PROTECTED] wrote:
Use the coalesce() function.  (coalesce returns the first non-null value in its list)
Specifically
ORDER BY coalesce("TO", 0), "FROM"
If you have records in the "TO" column whose value is LESS than 0, then you need to
replace 0 with
something that sorts BEFORE the smallest value that your TO result can return.
Terry Fielder
Manager Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
[EMAIL PROTECTED]
Fax: (416) 441-9085

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of T E Schmitz
Sent: Sunday, September 19, 2004 10:58 AM
To: [EMAIL PROTECTED]
Subject: [SQL] ORDER BY and NULLs
Hello,
I am using PostgreSQL 7.4.2 and as I understand NULL values
always sort
last.
However, I have a table from which select using two numerical
sort keys
"FROM" and "TO". "TO" might be NULL and I would like to display those
rows first (without sorting the column in descending order).
Is there any way this can be achieved without inserting bogus values
into that column?
--
Regards/Gruß,
Tarlika Elisabeth Schmitz
---(end of
broadcast)---
TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
   http://www.postgresql.org/docs/faqs/FAQ.html
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster