[SQL] sub-select trouble: wrong SQL or PostgreSQL issue?

2002-10-23 Thread patrick

Greetings,

I have 3 tables.  Two of them (ta and tb) are different
"attributes" that the third table (tc) can have.

tc is allowed up to three of each kind of "attributes".

e.g., 3 ta values and 2 tb values.

By assigning ta and tb attributes to each entry in tc you
are also "matching" tb attributes with corresponding ta
ones.

e.g.,
c_id  = 8
  ta1   = 1  <-- match -->   tb1   = 3
  ta2   = 3  <-- match -->   tb2   = 2
  ta3   = 13 <-- match -->   tb3   = 20
c_id  = 9
  ta1   = 2  <-- match -->   tb1   = 3
  ta2   = 5  <-- match -->   tb2   = 3
  ta3   = 9  <-- match -->   tb3   = 2
c_id  = 10
  ta1   = 3  <-- match -->   tb1   = 2
  ta2   = 1  <-- match -->   tb2   = 3
  ta3   = 9  <-- match -->   tb3   = 2

The task at hand is to figure out the number of ta
attributes each tb attribute corresponds with.

For the above example I want a result set showing me:

   tb.b_id | ta.a_id count
  ---------+---------------
         2 | 2   -- matched w/ ta.a_id = 3, 9
         3 | 3   -- matched w/ ta.a_id = 1, 2, 5
        20 | 1   -- only matched w/ ta.a_id = 13


-- tables
create table ta ( a_id int unique );
create table tb ( b_id int unique );
create table tc
   (
   c_id int unique,
   ta1 int references ta( a_id ),
   ta2 int references ta( a_id ),
   ta3 int references ta( a_id ),
   tb1 int references tb( b_id ),
   tb2 int references tb( b_id ),
   tb3 int references tb( b_id )
   );

-- sequences
create sequence ta_seq;
create sequence tb_seq;
create sequence tc_seq;

-- populate tables with some data
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );
insert into ta values ( NEXTVAL( 'ta_seq' ) );

insert into tb values ( NEXTVAL( 'tb_seq' ) );
insert into tb values ( NEXTVAL( 'tb_seq' ) );
insert into tb values ( NEXTVAL( 'tb_seq' ) );
insert into tb values ( NEXTVAL( 'tb_seq' ) );
-- ...
insert into tb values ( NEXTVAL( 'tb_seq' ) );
insert into tb values ( NEXTVAL( 'tb_seq' ) );



Don't read too much into the data, as I don't think the
values chosen have anything to do with the problem.  I
am just using a set of data sequences from my live
database.

Though, I'm willing to be surprised if someone is willing
to show a correlation between the problem and the values
used.

-- ta   tb
--   1  2  3  1  2  3
insert into tc values ( NEXTVAL( 'tc_seq' ), 1, 1, 1, 1, 3, 2 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 3, 3, 3, 2, 1, 3 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 4, 4, 4, 2, 1, 3 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 5, 5, 5, 1, 2, 3 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 5, 5, 5, 1, 2, 3 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 1, 1, 1, 1, 2, 3 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 4, 4, 4, 2, 1, 3 );
insert into tc values ( NEXTVAL( 'tc_seq' ), 3, 3, 3, 2, 1, 3 );
/*
insert into tc values ( NEXTVAL( 'tc_seq' ), 1, null, null, 1, null, null );
*/

This last insert is to show it is possible to have null
values for either of the ta or tb attributes in table tc.
But it is omitted to simplify the problem for now.


I am no SQL expert, as in I don't do it for a living.
Therefore, I would not doubt that there exist more
efficient ways to do this but this is one way I came up
with and it is quite possible that a SQL mistake is
causing the strange outputs I am observing.

But let me describe in English what I am attempting to
do in the SQL.

As I said earlier the task is to determine the unique
count of ta entries each tb.b_id is associated with in
the tc table.

I'm selecting from the tb table and doing sub-selects on
the tc table, where I attempt to count the distinct ta
values matched with each tb.b_id.

First sub-select (aa) does a distinct count of tc.ta1
where tc.tb1 equals tb.b_id and the corresponding tc.ta1
field is not null.

Second sub-select (bb) I'm again doing a count of
distinct tc.ta2 where tc.tb2 is equal to tb.b_id, tc.ta2
is not null and tc.ta2 was not counted in the aa
sub-select.

Next (cc), I once again do a count of distinct tc.ta3
where tc.tb3 is equal to tb.b_id, tc.ta3 is not null and
not in either of the two previous sub-selects (aa, bb).

The sum of sub-selects aa, bb and cc should be the value
I'm interested in (not shown in SQL statement).
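
The query being described never appears in this excerpt, so here is a sketch
of the kind of statement the description above implies (the aa/bb/cc names
follow the text; this is a reconstruction, not the original):

select tb.b_id,
       ( select count( distinct tc.ta1 ) from tc
          where tc.tb1 = tb.b_id and tc.ta1 is not null ) as aa,
       ( select count( distinct tc.ta2 ) from tc
          where tc.tb2 = tb.b_id and tc.ta2 is not null
            and tc.ta2 not in ( select t1.ta1 from tc t1
                                 where t1.tb1 = tb.b_id
                                   and t1.ta1 is not null ) ) as bb,
       ( select count( distinct tc.ta3 ) from tc
          where tc.tb3 = tb.b_id and tc.ta3 is not null
            and tc.ta3 not in ( select t1.ta1 from tc t1
                                 where t1.tb1 = tb.b_id
                                   and t1.ta1 is not null )
            and tc.ta3 not in ( select t2.ta2 from tc t2
                                 where t2.tb2 = tb.b_id
                                   and t2.ta2 is not null ) ) as cc
  from tb;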

The problem:
The result of the select is not the expected counts of ta
to tb.

The twist:
If the select statement is provided a where-clause where
each tb.b_id is restricted to individual tb.b_id values
then the c

Re: [SQL] sub-select trouble: wrong SQL or PostgreSQL issue?

2002-10-24 Thread patrick

Thank you both for your prompt responses.  Bug #526 (as Tom
pointed out) does look very similar to my problem.

I'll attempt to upgrade my development PostgreSQL and
eventually my production servers if all goes well.

Thanks again,


On Wed, Oct 23, 2002 at 11:28:01PM -0400, Tom Lane wrote:
> >> Is it possible that the sub-selects are somehow
> >> affecting the result sets?
> 
> > I think you want to upgrade, 7.3beta at least gives your
> > expected output on all the queries when I tried it.  I don't
> > currently have a 7.2.* system to test on here to try it
> > with that series.
> 
> I think this may be the same problem as bug #526,
> http://archives.postgresql.org/pgsql-bugs/2001-11/msg00168.php
> If so, the fix is in 7.2.*.
> 
>   regards, tom lane

sidster
--
They who would sacrifice freedom for security will have neither.
   -Ben Franklin 

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] Are sub-select errors suppressed?

2002-11-26 Thread patrick
Greetings,

I'm not sure what the correct behavior is here but the observed
behavior seems "wrong" (or at least undesirable).

I have a few tables and a view on one of the tables selecting
entries that may be purged.

My delete statement uses the view to delete data from one of the
tables.  Like so:

  delete from tab1 where id1 in ( select id from view1 );

Assume that the view doesn't have a field named "id".  The select
statement alone would cause an error.  However, in this context it
doesn't and the delete statement deletes everything from tab1.

Is this a bug in PostgreSQL or an "As Designed" feature?


Best Regards,

sidster
--
They who would sacrifice freedom for security will have neither.
   -Ben Franklin 


Working example (with comments) follows:

(I don't yet have access to a newer PostgreSQL build.)


begin;

create table
ttab_title
  (
  title_id  int4  primary key,
  can_deletebool
  );

create sequence tseq_title_id;

insert into ttab_title values ( nextval( 'tseq_title_id' ), false );
insert into ttab_title values ( nextval( 'tseq_title_id' ), false );
insert into ttab_title values ( nextval( 'tseq_title_id' ), true  ); -- rm able
insert into ttab_title values ( nextval( 'tseq_title_id' ), true  ); -- rm able
insert into ttab_title values ( nextval( 'tseq_title_id' ), true  ); -- rm able
insert into ttab_title values ( nextval( 'tseq_title_id' ), false );
insert into ttab_title values ( nextval( 'tseq_title_id' ), false );
insert into ttab_title values ( nextval( 'tseq_title_id' ), false );
insert into ttab_title values ( nextval( 'tseq_title_id' ), false );

create view
tview_title
as
  select  ttab_title.title_id as title_number
from  ttab_title
   where  ttab_title.can_delete = true
  ;
  --
  -- Notice the column/field rename from title_id to title_number


create table
ttab_title_selection
  (
  title_id  int4  references ttab_title( title_id ),
  ranking   int4
  -- some other fields ...
  );

create sequence tseq_title_rank;

insert into ttab_title_selection
  select ttab_title.title_id, nextval( 'tseq_title_rank' ) from ttab_title;

end;

-- Now lets look at this delete statement.

delete
  from  ttab_title_selection
 where  title_id in ( select  title_id
from  tview_title );
  --
  -- Notice how instead of title_number we made the mistake and used
  -- title_id.
  --
  -- We intended to only delete titles with ids: 3, 4 and 5 but this
  -- delete statement deletes all 9 titles!
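  --
  -- A likely explanation: tview_title has no column named title_id, so the
  -- name resolves to the outer ttab_title_selection.title_id and the
  -- condition degenerates into the always-true "title_id in ( title_id )".
  -- Qualifying the column (tview_title.title_id) would have raised an error
  -- instead of silently matching the outer column.
  --
  -- A sketch of the intended statement, using the renamed column:

delete
  from  ttab_title_selection
 where  title_id in ( select  tview_title.title_number
                        from  tview_title );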


-- Drop statements for clean up
/*

drop table    ttab_title_selection;
drop sequence tseq_title_rank;
drop view     tview_title;
drop sequence tseq_title_id;
drop table    ttab_title;

*/

--
-- Local variables:
-- c-basic-offset: 2
-- indent-tabs-mode: nil
-- End:
--
-- ex: ai et sw=2 ts=2



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[SQL] vacuum analyze slows sql query

2004-11-02 Thread patrick ~
  :restypmod -1 
  :resname offer_id 
  :ressortgroupref 0 
  :resorigtbl 34965071 
  :resorigcol 1 
  :resjunk false
  }

   :expr 
  {VAR 
  :varno 1 
  :varattno 1 
  :vartype 23 
  :vartypmod -1 
  :varlevelsup 0 
  :varnoold 1 
  :varoattno 1
  }
   }
 
   {TARGETENTRY 
   :resdom 
  {RESDOM 
  :resno 2 
  :restype 16 
  :restypmod -1 
  :resname pkk_offer_has_pending_purch 
  :ressortgroupref 0 
  :resorigtbl 0 
  :resorigcol 0 
  :resjunk false
  }

   :expr 
  {FUNCEXPR 
  :funcid 34965096 
  :funcresulttype 16 
  :funcretset false 
  :funcformat 0 
  :args (
 {VAR 
 :varno 1 
 :varattno 1 
 :vartype 23 
 :vartypmod -1 
 :varlevelsup 0 
 :varnoold 1 
 :varoattno 1
 }
  )
  }
   }
)
 
:qual <> 
:lefttree <> 
:righttree <> 
:initPlan <> 
:extParam ()
 
:allParam ()
 
:nParamExec 0 
:scanrelid 1
}
 
 Seq Scan on pkk_offer  (cost=0.00..13.72 rows=618 width=4)
(78 rows)

Time: 2.207 ms
orig=# select offer_id, pkk_offer_has_pending_purch( offer_id ) from pkk_offer
;

(618 rows)

Time: 1080688.921 ms
-- end


Has anyone seen something like this before?  Is this standard,
expected behavior?


The .sql file that will create the tables and stored function
follows.  It has been processed by uuencode to avoid Yahoo! mail
wrapping lines and messing things up.

The data that I'm testing this db with is somewhat large; even in
its "stripped down," "sanatized" and gzip'ed form:

% ls -lG --si pkk.20041028_00.sql.gz 
-rw-r--r--1 patrick   17M Oct 28 18:15 pkk.20041028_00.sql.gz


I can email the data file, split(1) into multiple chunks, to anyone
interested in looking into this problem.  I may be able to provide an
http (or possibly an ftp) link for downloading it for a short period
of time.  I can't have the http link up permanently as I am limited
bandwidth-wise.  By far, emailing would be my preferred method.


Thanks for reading,
--patrick


[uuencoded attachment pkk_db.sql omitted: the encoded body was mangled in the archive]

Re: [SQL] vacuum analyze slows sql query

2004-11-03 Thread patrick ~
On Wed, Nov 03, 2004 at 07:01:00AM -0500, Andrew Sullivan wrote:
> On Tue, Nov 02, 2004 at 06:50:31PM -0800, patrick ~ wrote:
> > We have a nightly "garbage collection" process that runs and
> > purges any old data.  After this process a 'vacuum analyze'
> > is kicked off (regardless of whether or not any data was
> > actually purged).
> > 
> > At this point I should mention that our customer sites are
> > running PostgreSQL 7.1.3; however, I am able to reproduce
> > the issue on 7.4.2.
> 
> A 7.1 system takes an exclusive lock on any VACUUM.  It's the
> same as VACUUM FULL in 7.4.  Nothing you can do to make that
> not be sluggish.

Just to clarify, the sluggishness isn't only during the vacuum
period.  There are more serious issues during the vacuum itself,
but I am not touching on those.  The sluggishness is persistent
throughout daily operations.


> You want to get those sites off 7.1 anyway.  At the very least,
> you should be aware of xid exhaustion which can be prevented in
> 7.1 only with an initdb and complete restore.  Failure to
> accommodate that will mean that one day your databases will just
> disappear.

Yes, the plan is to upgrade them with a new release of our product.
I didn't know about the xid exhaustion problem.  I'll need to
search the mailing list archives.


> Current VACUUM certainly does impose a serious I/O load; this is
> the reason for the vacuum setting tweaks in 8.0.  See the -hackers
> archives (from more than a year ago now) for (for instance) Jan
> Wieck's discussion of his feature and the subsequent debates.

I'll look into this.  I don't think we are ready to touch 8.x
yet.


> > I noticed that a freshly created db with freshly inserted data
> > (from a previous pg_dump) would result in quite fast results.
> > However, after running 'vacuum analyze' the very same query
> > slowed down about 1250x (Time: 1080688.921 ms vs Time: 864.522
> > ms).
> 
> My best guess is that there's something going on inside your
> function.  I'd be looking for locks here, though.  That makes no
> sense, given that you've only 78 rows being returned.  BTW, this
> topic should probably be better pursued on -performance.

Again to clarify, the output I pasted was from my standalone
PostgreSQL box.  That is, it wasn't being used for anything other
than the queries being executed.

I don't know if you looked at my stored function, but there are
no locks in it (no explicit ones anyway).


Thanks for your reply,
--patrick



__ 
Do you Yahoo!? 
Check out the new Yahoo! Front Page. 
www.yahoo.com 
 


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] how to select custom value when exists otherwise select default

2005-02-16 Thread patrick

I use a select like this:
SELECT * FROM sometables
WHERE (field0='a' or field0='b' or field0='c')
and (field4='m' or field4='n')
ORDER BY field0;

field4 = 'm' is the default value
field4 = 'n' is the custom value

I want only n (custom) if it exists, otherwise I want m (default),
but I do not want both!

I mean I have this result:
table{
field0 {a,a,b,c}
field1 {d,d,e,f}
field2 {g,g,h,i}
field3 {j,j,k,l}
field4 {m,n,m,m}
}
field0 = 'a' has 2 results, one for m (default) and one for n (optional);
b and c have no custom value, so field4 is set to m for them.

and I want this result:
table{
field0 {a,b,c}
field1 {d,e,f}
field2 {g,h,i}
field3 {j,k,l}
field4 {n,m,m}
}

of course I'm looking for a single query to do that, and preferably a
light one ;-)
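
One way this is often written in PostgreSQL (a sketch, assuming the table is
called sometables as above and that the custom value 'n' should win over the
default 'm'; DISTINCT ON is a PostgreSQL extension):

SELECT DISTINCT ON (field0) field0, field1, field2, field3, field4
  FROM sometables
 WHERE field0 IN ('a', 'b', 'c')
   AND field4 IN ('m', 'n')
 ORDER BY field0, field4 DESC;   -- 'n' sorts after 'm', so the custom row is kept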

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[SQL] subqueries as values in updates

2000-12-08 Thread patrick . jacquot

hello, list.
are subqueries allowed as values in an update command ?
e.g.

update a set a.attribute1 = a.attribute1 +
(select sum(b.attribute1) from b where b.attribute2=a.attribute2);

If yes, what is the correct syntax ?
If not, is there a hope for it in a future version ?
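
For what it's worth, subqueries are accepted as value expressions in UPDATE;
a sketch of the PostgreSQL spelling (the target column is written without the
table qualifier, and COALESCE guards rows of "a" that have no match in "b"):

update a
   set attribute1 = attribute1 +
       coalesce( ( select sum(b.attribute1)
                     from b
                    where b.attribute2 = a.attribute2 ), 0 );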




Re: [SQL] Query Help

2000-12-28 Thread patrick . jacquot

"Brian C. Doyle" wrote:

> What do I have to do a query where information in table1 is not in table2
>
> I am looking for something like
>
> Select table1.firstname where table1.firstname is not in table2.firstname
> and table2.date='yesterday'
>
> I tried
> Select table1.firstname where table1.firstname != table2.firstname and
> table2.date='yesterday'
>
> and that did not work just listed everyone

imho the most natural way for what you need seems to be :

select whatyouwant from table1 where not exists
( select * from table2 where table2.firstname = table1.firstname and
  table2.date='yesterday' );

hoping that helps
P. Jacquot




[SQL] Re: [INTERFACES] outer join in PostgreSql

2001-01-16 Thread Patrick Welche

On Mon, Jan 15, 2001 at 06:16:00PM -0400, Mauricio Hipp Werner wrote:
> I need help, which is the symbol used in postgreSql to carry out the outer
> join.
> 
>  in oracle the is used (+)
> in sybase the is used * and
> in postgreSql?

I don't really understand your question, but from src/test/regress/sql/join.sql
some examples of use are:

--
-- Outer joins
-- Note that OUTER is a noise word
--

SELECT '' AS "xxx", *
  FROM J1_TBL LEFT OUTER JOIN J2_TBL USING (i);

SELECT '' AS "xxx", *
  FROM J1_TBL LEFT JOIN J2_TBL USING (i);

SELECT '' AS "xxx", *
  FROM J1_TBL RIGHT OUTER JOIN J2_TBL USING (i);

SELECT '' AS "xxx", *
  FROM J1_TBL RIGHT JOIN J2_TBL USING (i);

SELECT '' AS "xxx", *
  FROM J1_TBL FULL OUTER JOIN J2_TBL USING (i);

SELECT '' AS "xxx", *
  FROM J1_TBL FULL JOIN J2_TBL USING (i);

SELECT '' AS "xxx", *
  FROM J1_TBL LEFT JOIN J2_TBL USING (i) WHERE (k = 1);

SELECT '' AS "xxx", *
  FROM J1_TBL LEFT JOIN J2_TBL USING (i) WHERE (i = 1);


Cheers,

Patrick



[SQL] Re: [GENERAL] Help with query. (*)

2001-01-18 Thread Patrick Welche

On Tue, Jan 16, 2001 at 01:42:45PM -0700, Diehl, Jeffrey wrote:
> I'm having difficulty writing a query which I really can't live without... 
> 
> I need to get a list of records from table A for which there are
> corresponding records in table B.  I've tried to use the intersect clause,
> but it doesn't seem to work, or it runs far too long.  For example:
> 
> select * from A
>   where 1=1
>   intersect select * from A where
>   B.x=A.x
>   and A.y=B.y
>   and A.z=B.z
>   limit 100
> 
> I need the most efficient method possible; my A tables have upward of 5
> Million records.  The B table, btw, only has about 100 records.

Guessing - my system isn't in a state to test just at the minute - is it

select A.* from A,B
 where A.x=B.x
   and A.y=B.y
   and A.z=B.z
 limit 100

that you want?



Re: [SQL] select returns no line

2001-01-23 Thread patrick . jacquot

Attila Kevei wrote:

> Hi,
>
> Our database has a SELECT problem using varchar columns in WHERE clause
> (but not in all rows!!!).
> We can fix the whole table (or just the row) as shown below but later it
> seems wrong again (and after the fix the row isn't UPDATEd).
>
> Any idea?
>
> Thanks
> Attila
>
> 
> Environment:
>
> [PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc 2.95.2]
>
> goodwill=>\d users
> Table= users
> +--+--+---+
> |  Field   |  Type| Length|
> +--+--+---+
> | user_id  | int4 not null default nextval (  | 4 |
> | user_login   | varchar() not null   |15 |
> | user_passwd  | varchar() not null   |15 |
> | user_exp | timestamp| 4 |
> +--+--+---+
> Indices:  users_pkey
>
> users_user_login_key
>
> 
> The problem:
>
> goodwill=>select * from users where user_login='test';
> user_id|user_login|user_passwd|user_exp
> ---+--+---+
> (0 rows)
>
> goodwill=> select * from users where user_id=4;
> user_id|user_login|user_passwd  |user_exp
> ---+--+-+
>   4|test  |0PDv7a2EESjZo|
> (1 row)
>
> goodwill=> update users set user_login=user_login where user_id=4;
> UPDATE
> 1
>
> goodwill=>select * from users where user_login='test';
> user_id|user_login|user_passwd  |user_exp
> ---+--+-+
>   4|test  |0PDv7a2EESjZo|
> (1 row)
>
> --
> x- [EMAIL PROTECTED] -x- [EMAIL PROTECTED] -x- [EMAIL PROTECTED] -x

hello
are you sure the value of user_login in that line is "test" and not "test ",
i.e. that it has no spurious spaces at the end of it?
HTH
Patrick




Re: [SQL] Comparing dates

2001-03-06 Thread patrick . jacquot

Markus Fischer wrote:

> Hello,
>
> I've a SELECT statement on many joined Tabled and one of them has
> a date column called 'date_date'. When I fetch a date e.g.
> '02-03-2001', I get, say, 60 results back. When I now perform the
> same query with another date, lets take '03-03-2001', I get back
> about 70 results.
>
> When I now modify my query to get both results in one I write
>
> SELECT
> 
> FROM
> ..
> AND
> date_date >= '2001-03-02'
> AND
> date_date <= '2001-03-03'
> AND
> 
>
> I think I should get back the rows for both days, 60 + 70 makes
> 130 to me. But what I get back is even smaller then 60. I
> allready tried TO_DATE conversion, an OR construct but always
> the same result.
>
> Is there something special to know when comparing/working with
> date-datetypes ?
>
> kind regards,
> Markus
>
> --
> Markus Fischer,  http://josefine.ben.tuwien.ac.at/~mfischer/
> EMail: [EMAIL PROTECTED]
> PGP Public  Key: http://josefine.ben.tuwien.ac.at/~mfischer/C2272BD0.asc
> PGP Fingerprint: D3B0 DD4F E12B F911 3CE1  C2B5 D674 B445 C227 2BD0
>
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

ANDing restrictions makes them narrower.
If you want to broaden your selection, try  ORing the conditions
HTH
Patrick


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] Re: [GENERAL] MySQLs Describe emulator!

2001-03-12 Thread Patrick Welche

On Tue, Mar 06, 2001 at 10:38:43AM -0500, Michael Fork wrote:
> try starting psql with the -E option -- this displays all queries used
> internally to the screen, i.e.:

Sorry, hadn't read this one before posting... Thanks to the "moderating" it'll
all be out of synch anyway but..

Patrick

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[SQL] 3 options

2001-03-28 Thread Patrick Coulombe

Hi,

Using : PostgreSQL 6.5.3 on i686-pc-linux-gnu
Platform : Linux (Red Hat 6.0)


3 errors :

1)
pg_dump medias > medias.pgdump280301

pqWait() -- connection not open
PQendcopy: resetting connection
SQL query to dump the contents of Table 'dossier' did not execute correctly.
After we read all the table contents from the backend, PQendcopy() failed.
Explanation from backend: 'pqWait() -- connection not open
'.
The query was: 'COPY "dossier" TO stdout;
'.


2)
medias=> vacuum;
NOTICE:  Rel dossier: Uninitialized page 28 - fixing
NOTICE:  Rel dossier: Uninitialized page 29 - fixing
NOTICE:  BlowawayRelationBuffers(dossier, 28): block 28 is dirty (private 0,
last 0, global 0)
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
We have lost the connection to the backend, so further processing is
impossible.  Terminating.


3)
on my website (using php) :
Warning: PostgreSQL query failed: ERROR: Tuple is too big: size 9968 in
/xxx/enregistrer3.php on line 45

---

1 & 2 seem to be ok, because right now i can do a pg_dump without the error,
but  I've searched in the mailing-list for my question (3) and this is what
i came with:

---

I have 3 options - which one would you recommend?

- change the default block size in include/config.h (recompile - hum...)
- use the large object interface (what is that?)
- upgrade to 7.1 (afraid to lose my data - I'm not a linux guru)


---
thank you
patrick, montreal, canada


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[SQL] referencing oid impossible ?

2001-09-11 Thread patrick . jacquot

hello all
I tried recently (pgsql 7.1.2) to establish the oid of one table as a foreign key
in another.
To no avail: there was no uniqueness constraint on that column.
Naturally, it seems impossible to add a uniqueness constraint to such a system
column.
As far as I know, the oid is by nature unique, except for the case of its
counter wrapping around.
Is there any workaround ?

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[SQL] Extremely slow query

2002-07-29 Thread Patrick Hatcher

System:
OS: RedHat 7.2
Dual PIII XEON
Mem 512 mg
PG: 7.2


I have what I think is a fairly simple summary query, but it takes 1:55 to
run on just 155k records.  The query hits against a single table that I use
for reporting purposes.  This table is truncated, refreshed, reindexed, and
vacuum analysed each night.  Other than the initial table update, no other
data is added during the day.

Any help would be appreciated.  My little Win2k box with a PIII 500 and 256 MB
is outperforming this monster machine.

query:
SELECT  gmmid, gmmname, divid, divname, feddept, deptname, fedvend,
vendor_name, masterid, master_desc, pageid, oz_description, (
 CASE
WHEN (masterid IS NULL) THEN pageid
ELSE masterid END)::character varying(15) AS pagemaster,
 CASE
WHEN (masterid IS NULL) THEN oz_description
ELSE master_desc
 END  AS pagemaster_desc,
 CASE
WHEN (masterid IS NULL) THEN price_original
ELSE NULL::float8
 END  AS org_price_display,
 CASE
WHEN (masterid IS NULL) THEN cur_price
ELSE NULL::float8
 END  AS cur_price_display, price_original, price_owned_retail,
cur_price, oz_color, oz_size, pageflag, itemnumber,
 sum(cur_demandu + cur_returnu) AS cur_net_units,
 sum(cur_demanddol + wtd_returndol) AS cur_net_dollar,
 sum(wtd_demandu + wtd_returnu) AS wtd_net_units,
 sum(wtd_demanddol + wtd_returndol) AS wtd_net_dollar,
 sum(lw_demand + lw_returnu) AS lw_net_units,
 sum(lw_demanddollar + lw_returndollar) AS lw_net_dollar,
 sum(ptd_demanddollar + ptd_returndollar) AS ptd_net_dollar,
 sum(ptd_demand + ptd_returnu) AS ptd_net_units,
 sum(std_demanddollar + std_returndollar) AS std_net_dollar,
 sum(std_demand + std_returnu) AS std_net_units,
 sum(total_curoh) AS total_curoh,
 sum(total_curoo) AS total_curoo,
 sum((float8(total_curoh) * price_owned_retail)) AS curoh_dollar,
 sum((float8(total_curoo) * price_owned_retail)) AS curoo_dollar,
 sum(total_oh) AS total_oh,
 sum(total_oo) AS total_oo,
 sum((float8(total_oh) * price_owned_retail)) AS oh_dollar,
 sum((float8(total_oh) * price_owned_retail)) AS oo_dollar, mkd_status,
option4_flag
FROM tbldetaillevel_report detaillevel_report_v
GROUP   BY gmmid, gmmname, divid, divname, feddept, deptname, fedvend,
vendor_name, masterid, master_desc, pageid, oz_description,
 CASE
WHEN (masterid IS NULL) THEN pageid
ELSE masterid
 END,
 CASE
WHEN (masterid IS NULL) THEN oz_description
ELSE master_desc
 END,
 CASE
WHEN (masterid IS NULL) THEN price_original
ELSE NULL::float8
 END,
 CASE
WHEN (masterid IS NULL) THEN cur_price
ELSE NULL::float8
 END, price_original, price_owned_retail, cur_price, oz_color, oz_size,
pageflag, itemnumber, mkd_status, option4_flag

EXPLAIN ANALYSE results:
Aggregate  (cost=56487.32..72899.02 rows=15267 width=356)
  ->  Group  (cost=56487.32..66029.01 rows=152667 width=356)
        ->  Sort  (cost=56487.32..56487.32 rows=152667 width=356)
              ->  Seq Scan on tbldetaillevel_report detaillevel_report_v  (cost=0.00..9932.67 rows=152667 width=356)


Table Def:
CREATE TABLE tbldetaillevel_report (
  pageid int4,
  feddept int4,
  fedvend int4,
  oz_description varchar(254),
  price_owned_retail float8,
  oz_color varchar(50),
  oz_size varchar(50),
  lw_demanddollar float8,
  ptd_demanddollar float8,
  std_demanddollar float8,
  lw_returndollar float8,
  ptd_returndollar float8,
  std_returndollar float8,
  lw_demand int4,
  ptd_demand int4,
  std_demand int4,
  lw_returnu int4,
  ptd_returnu int4,
  std_returnu int4,
  divid int4,
  divname varchar(35),
  gmmid int4,
  gmmname varchar(35),
  deptname varchar(35),
  total_oh int4,
  total_oo int4,
  vendorname varchar(40),
  dunsnumber varchar(9),
  current_week int4,
  current_period int4,
  week_end date,
  varweek int4,
  varperiod int4,
  upc int8,
  pageflag int2,
  upcflag int2,
  pid varchar(30),
  cur_price float8,
  vendor_name varchar(40),
  ly_lw_demanddollar float8,
  ly_ptd_demanddollar float8,
  ly_std_demanddollar float8,
  itemnumber varchar(15),
  mkd_status int2,
  lw_1_demanddollar float8,
  lw_2_demanddollar float8,
  lw_3_demanddollar float8,
  lw_4_demanddollar float8,
  masterid int4,
  master_desc varchar(254),
  cur_demandu int4,
  cur_demanddol float8,
  cur_returnu int4,
  cur_returndol float8,
  wtd_demandu int4,
  wtd_demanddol float8,
  wtd_returnu int4,
  wtd_returndol float8,
  total_curoh int4,
  total_curoo int4,
  curr_date date,
  lw_1_demand int4,
  lw_2_demand int4,
  lw_3_demand int4,
  lw_4_demand int4,
  option4_flag int2,
  option3_flag int2,
  price_original float8,
  price_ticket float8
)





Patrick Hatcher





---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] [SQL] Database Design tool

2002-11-05 Thread Patrick Bakker

I had gASQL working once but that was awhile ago. gASQL has now been renamed as Mergeant and is being developed in conjunction with the gnome-db libraries as far as I know. I don't think there has been a stable release of it yet, although it is under active development.

Patrick


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Dan Hrabarchuk
> Sent: Wednesday, October 30, 2002 7:31 AM
> To: Johannes Lochmann
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: [GENERAL] [SQL] Database Design tool
>
> gASQL is a gnome-db client that looks like it has a lot of
> promise. The only problem is I've never been able to get the
> application to run properly. I'm using RedHat 8.0 on my desktop.
> The last official version does not install properly. If I grab a
> CVS copy, I go through dependency hell. Has anyone ever actually
> gotten gASQL to work?
>
> Dan
>
> On Wed, 2002-10-30 at 06:35, Johannes Lochmann wrote:
> > On Wed, 2002-10-30 at 07:45, Viacheslav N Tararin wrote:
> >
> > Hi,
> >
> > (which list should this go to? I guess it is OT on both...)
> >
> > > Can anybody take me reference on Database design tool
> > > with PostgreSQL support.
> >
> > Dia and dia2sql (or something similar...) Google knows more :-)
> >
> > HTH
> >
> > Johannes Lochmann
> >
> > ---(end of broadcast)---
> > TIP 2: you can get off all lists at once with the unregister command
> > (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org


[SQL] change a field

2003-09-14 Thread Patrick Meylemans
Dear,

I want to change a field of a record after the modification of another 
field of the same record or during an insert of a new record.

Suppose the following table

field       | type
------------+---------
id          | serial
sign        | integer
value       | integer
real_value  | integer

When inserting a new record, giving values to sign and value, the field 
real_value should be the product of the fields sign and value.
When updating an existing record, the value of the field real_value should 
be recalculated.

I tried with plpgsql functions and triggers but with no success. It works 
only when inserting new records and not when updating a field.

What is the best way to solve this problem ?
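
For what it's worth, a minimal sketch of the usual approach (one BEFORE
trigger declared for both INSERT and UPDATE); the table and trigger names
here are made up:

create or replace function set_real_value() returns trigger as '
begin
  new.real_value := new.sign * new.value;
  return new;
end;
' language plpgsql;

create trigger trg_set_real_value
  before insert or update on ttest
  for each row execute procedure set_real_value();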

Kind regards

Patrick Meylemans

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[SQL] Is there a faster way to do this?

2004-06-15 Thread Patrick Hatcher




pg: 7.4.2
RedHat 7.2

Can I get some advice on a possible faster way of doing this:

Scenario:  Each day I update a column in a table with an internal
percentile value.  To arrive at  this value, I'll get a count of records
with sales > 0 and then divide this count by the total number of tile
groups I want.  So for example:
Total records w/sales > 0 = 730,000
 tile# = 100
total percentile groups (730,000 / 100):7300

Now that I have the total number of groups I need, I cycle through my
recordset, grab the key field and the current percentile number and stuff
the values into a temp table. I mod the current row counter against the
total percentile group number.  If it is 0 then I add 1 to the  current
percentile number .  After inserting records into the temp file I then
update the main table.

Using the example above, the first 7300 records would get a  percentile
number of 1, the next 7300  records would get  a percentile number of 2,
then 3,4,5,etc.

Unfortunately, I am going record by record in a loop and the process takes
upwards of 20mins.  Is there a faster way to do this?  I thought about
using limit and offset, but I'm not sure how I would accomplish it.

Below is the function I currently use.  Thanks for any help provided

CREATE OR REPLACE FUNCTION cdm.percentile_calc()
  RETURNS text AS
'DECLARE
 v_interval int4;
 v_tile int4;
 v_percentile int4;
 v_check int4;
 v_count int4;
 v_rowcount int4;
 myRec  RECORD;

BEGIN
 v_count:=0;
 v_tile:= 100;
 v_percentile:=1;
 v_rowcount :=1;
 v_check:=0;


 /* Get count of records with val_purch > 0 */
 select into v_count count(*)  from cdm.cdm_indiv_mast where
val_purch_com >0;

 /* this number will be used as part of our MOD to tell when to add one
to our percentile */
 v_interval := v_count / v_tile;

 CREATE TEMP TABLE cdmperct (f1 int8, f2 int2);


 FOR myRec IN  select indiv_key from cdm.cdm_indiv_mast where
val_purch_com  >0 order by val_purch_com desc  LOOP
  INSERT INTO cdmperct values (myRec.indiv_key,v_percentile);
  v_check = mod(v_rowcount,v_interval);
  IF v_check = 0 THEN
   v_percentile:=v_percentile+1;
  END IF;
  v_rowcount:= v_rowcount+1;
 END LOOP;


   UPDATE cdm.cdm_indiv_mast SET percentiler = f2 from  cdmperct where
indiv_key = f1;


 DROP TABLE cdmperct;
 RETURN  \'DONE\';
END; '
  LANGUAGE 'plpgsql' IMMUTABLE;
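
For reference, a sketch of a loop-free alternative using the ntile() window
function; window functions only exist from PostgreSQL 8.4 on, well after the
7.4.2 used above, so this is only an option after an upgrade:

UPDATE cdm.cdm_indiv_mast AS m
   SET percentiler = sub.tile
  FROM ( SELECT indiv_key,
                ntile(100) OVER (ORDER BY val_purch_com DESC) AS tile
           FROM cdm.cdm_indiv_mast
          WHERE val_purch_com > 0 ) AS sub
 WHERE m.indiv_key = sub.indiv_key;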

Patrick Hatcher
Macys.Com


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] automatic update or insert

2005-10-26 Thread Patrick JACQUOT

tobbe wrote:


Hi.

I have a little problem.

In a system of mine i need to insert records into table [tbStat], and
if the records exist i need to update them instead and increase a
column [cQuantity] for every update.

I.e. the first insert sets cQuantity to 1, and for every other run
cQuantity is increased.

Currently i have implemented this as a stored procedure in the plpgsql
language. This means that in my stored procedure i first do a select to
find out if the row exists or not, then i do a insert or update
depending if the row existed.

Unfortunately, stored procedures seems awfully slow. And i need the
application to go faster.

One solution could be to implement the stored procedure in my program
instead. I think that this will be atleast 50% faster than my stored
procedure, so that would be ok.

However, this has made me thinking. Couldn't this be done directly in
SQL?


Brgds Robert


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq

 

Maybe it would be better to always insert, and to use grouping and
summation when reading the table. That would also let you preserve the
history of events.
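
A minimal sketch of that pattern (tbStat and cQuantity are the names quoted
above; the key column is made up):

insert into tbStat ( stat_key, cQuantity ) values ( 'some key', 1 );

select stat_key, sum(cQuantity) as cQuantity
  from tbStat
 group by stat_key;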

That's how I almost always work
hth
P.Jacquot

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] Just 1 in a series...

2005-12-05 Thread Patrick JACQUOT

Mark Fenbers wrote:


What would have to be done if I needed a standard SQL solution?
Mark


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Maybe you could try something like:
Select whatever  from  yourtable a where not exists
(select * from yourtable b where b.id=a.id and b.timestamp > a.timestamp)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] RETURN SET OF DATA WITH CURSOR

2005-12-16 Thread Patrick JACQUOT

grupos wrote:


Hi Richard,

Thanks for your prompt answer. I don't have experience returning a
refcursor, but that would be my choice.
I read the documentation but I didn't find any example matching my
need...


Could you give me a small example on the same "basis" that I want?

Thanks,

Rodrigo Carvalhaes

Richard Huxton wrote:


grupos wrote:


Hi !

I am making one plpgsql function and I need to return a set of data
using a cursor.


The problem is that my function is returning only the first row and
column of the query. This query has more than 80 columns and 1,000
rows. Anyone have any tip to give me?



Yes - decide whether you are returning a set of rows or a refcursor.

Check the plpgsql docs again for how to return SETOF  using 
the RETURN NEXT statement. Then you call your function like: SELECT * 
FROM my_func();




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


There is a standard way :

Select * from memberstatus A where not exists
   (select * from memberstatus B where B.member_id=A.member_id and
B.status_date > A.status_date)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Need SQL Help Finding Current Status of members

2005-12-16 Thread Patrick JACQUOT

Richard Huxton wrote:


Michael Avila wrote:

I have a table which keeps track of the status of members. In the 
table is


member_id int(8)
status_code char(1)
status_date date
KEY member_id (member_id,status_code,status_date)


Each member can have multiple records because a record is added each 
time

the status changes but the old record is kept for history.

What I want to do is find the latest status for each member.



Michael Fuhr has already described one solution, but if you can alter 
the table definition then there might be a neater solution.


Replace "status_date" with "status_expires" and make it a "timestamp 
with time zone". Set the expiry to 'infinity' for the current record 
and  you then have a simple select to find the most recent.


If you regularly want to find which record was active on a particular 
time you'll want two columns: valid_from and valid_to. This makes it 
much easier to find a row for a specific date.


There is a standard way :

Select * from memberstatus A where not exists
  (select * from memberstatus B where B.member_id=A.member_id and
B.status_date > A.status_date)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Matching several rows

2006-01-18 Thread Patrick JACQUOT

Michael Glaesemann wrote:



On Jan 18, 2006, at 21:48 , Volkan YAZICI wrote:


AFAICS, the bottleneck in above query is ANY(ARRAY[]) clause usage.
Instead of that, if you replace «rights = ANY(ARRAY[2,5,10])» with
«rights IN (2,5,10)» it's overhead decreases to 0.200-0.300ms domain.



explain analyze
SELECT id
FROM (
SELECT id, sum(1) AS s
FROM urights
WHERE uright in (2,5,10)
GROUP BY id) AS t
WHERE s = 3;
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Subquery Scan t  (cost=1.14..1.19 rows=2 width=4) (actual time=0.106..0.108 rows=1 loops=1)
   ->  HashAggregate  (cost=1.14..1.17 rows=2 width=4) (actual time=0.103..0.105 rows=1 loops=1)
         Filter: (sum(1) = 3)
         ->  Seq Scan on urights  (cost=0.00..1.10 rows=4 width=4) (actual time=0.029..0.038 rows=5 loops=1)
               Filter: ((uright = 2) OR (uright = 5) OR (uright = 10))
 Total runtime: 0.386 ms
(6 rows)


Michael Glaesemann
grzm myrealbox com




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


why not use a HAVING clause with the GROUP BY?
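
A sketch of that rewrite of the query quoted above:

SELECT id
  FROM urights
 WHERE uright IN (2, 5, 10)
 GROUP BY id
HAVING count(*) = 3;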
HTH

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] problem referencing an attrib which is not unique

2006-02-07 Thread Patrick JACQUOT

Vic Rowan wrote:




-- Forwarded message --
From: *Vic Rowan* <[EMAIL PROTECTED] >
Date: Feb 7, 2006 2:31 PM
Subject: problem referencing an attrib which is not unique
To: pgsql-sql@postgresql.org 


hello everybody,

I need some thing like this below for an application which stores log 
messages in multiple languages. The table 'event_msg' stores 
predefined messages in multiple languages which can be populated with 
place holder values from the application. (These of course are 
language independent). So, the event_id associates these predefined 
messages from both the tables so that displaying a log message is as 
simple as looking up the event_id from the 'logs' table and similarly 
looking up the event_id and language from the 'event_msg' table to 
retreive the predefined_msg with the correct language - the 
application determines the lang from a settings file - and combining 
them to display the log message.


CREATE TABLE event_msg (
  event_id varchar(30) NOT NULL,
  language char(2) NOT NULL,
  predefined_msg varchar(250) NOT NULL,
  PRIMARY KEY (event_id, language)
);

CREATE TABLE logs (
  id int NOT NULL,
  event_id varchar(30) REFERENCES event_msg (event_id) NOT NULL, 
  placeholder_values varchar(250),

  priority varchar(20) NOT NULL,
  timestamp Date NOT NULL,
  primary key (id)
);


The problem I am facing is the event_id from logs is not able to 
reference event_id from event_msg as its not unique.
There are as many entries for each event_id as there are languages 
supported in the 'event_msg' table.
I would be glad if somebody could suggest some work around here to the 
above structure. Or alternately do I need to alter the table structure 
altogether and if so what is the better way of doing this?


Thanks in advance for any help offered.

Cheers,
Vic Rowan.


I think you need three tables.

One to list the allowable events, which will be used as the reference:
CREATE TABLE eventlist (
  event_id varchar(30) PRIMARY KEY
);

One to hold the message translations:
CREATE TABLE messagetranslations (
  event_id varchar(30) REFERENCES eventlist (event_id) NOT NULL,
  language char(2) NOT NULL,
  event_translation varchar(250),
  PRIMARY KEY (event_id, language)
);

and your log table
CREATE TABLE logs (
 id int NOT NULL,
 event_id varchar(30) REFERENCES eventlist (event_id) NOT NULL, 
 placeholder_values varchar(250),

 priority varchar(20) NOT NULL,
 timestamp Date NOT NULL,
 primary key (id)
);

btw, event_id could be just an integer.  If, as I understand it, event_id
is such a large string, it's probably because it contains the English name
of the event.  Just put that name in a row of messagetranslations, with
language = 'EN'.
Another thing: with only 2 chars as the language id, how do you distinguish
EN-US and EN-UK (or whatever ids those get assigned)?

hth
P. Jacquot


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Non Matching Records in Two Tables

2006-02-09 Thread Patrick JACQUOT

Ken Hill wrote:

I need some help with a bit of SQL. I have two tables. I want to find 
records in one table that don't match records in another table based 
on a common column in the two tables. Both tables have a column named 
'key100'. I was trying something like:


SELECT count(*)
FROM table1, table2
WHERE (table1.key100 != table2.key100);

But the query is very slow and I finally just cancel it. Any help is 
very much appreciated.


-Ken 


Maybe you could use a NOT EXISTS subquery, as in
SELECT count(*) from table1
WHERE NOT EXISTS (SELECT * from table2 WHERE table1.key100 = table2.key100)
which gives you the number of records in table1 without corresponding
records in table2.

That kind of query is quite fast, if there exists an index on table2.key100
hth
P.Jacquot

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] SELECT on a to-be-determined table

2006-02-20 Thread Patrick JACQUOT

garhone wrote:


Hi,

I'm a new at this. So please forgive if I mess up. Also, if there is
already a reference/tutorial somewhere, feel free to point me to it.

Here's my situation:
db=# select * from projects;
projid | projname
+--
 1 | cars
 2 | houses
 3 | pets
(3 rows)

db=# select * from cars;
carid | carname
---+-
1 | ford
2 | mazda
(2 rows)

db=# select * from houses;
houseid | housename
-+---
  1 | semi
  2 | trailer
  3 | mansion
(3 rows)

db=# select * from pets;
petid | petname
---+-
1 | dog
2 | cat
3 | bird
(3 rows)

Is it possible to do this:
Give me all the rows of the table whose project id is 2 (or whatever
projid).

Thanks


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org



Your way of thinking leads to needing to compare a field against a table name.
Such a request requires two steps:
1 - retrieve the name of the table to search in, store it in a variable
2 - use execute to issue a request to that table.
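
A rough sketch of that two-step route in PL/pgSQL (the function name and the
calling convention are made up; the caller supplies a column list because the
table is only known at run time):

CREATE OR REPLACE FUNCTION rows_of_project(p_projid int)
RETURNS SETOF record AS $$
DECLARE
  tbl text;
  r   record;
BEGIN
  -- step 1: find the table name for this project
  SELECT projname INTO tbl FROM projects WHERE projid = p_projid;
  -- step 2: query that table dynamically
  FOR r IN EXECUTE 'SELECT * FROM ' || quote_ident(tbl) LOOP
    RETURN NEXT r;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;

-- e.g.  SELECT * FROM rows_of_project(2) AS t(houseid int, housename text);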

Instead, I think it would be better to use only two tables:
1 - projects (projid, projname)
2 - items (itemid, projid, itemproperty1, itemproperty2, ...)
You would have in the second table, to take your example:
projid | itemid | itemname   |
1 |  1 | ford   |
1 |  2 | mazda  |
2 |  1 | semi   |
2 |  2 | trailer|
2 |  3 | mansion|
3 |  1 | dog|
3 |  2 | cat|
3 |  3 | bird   |
Your request would become :
SELECT itemid, itemname FROM items where projid=2

The problem of having a different set of properties
for the items of different projects could be solved with three tables:
project(projid, projname)
itempropertymeaning(projid, propid, propmeaning)
itemproperty(projid, itemid, propid, propvalue)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Need help: Find dirty rows, Update, Delete SQL

2006-02-20 Thread Patrick JACQUOT

Janning Vygen wrote:


Am Samstag, 18. Februar 2006 18:41 schrieb [EMAIL PROTECTED]:
 


Hello,

I need a bit of help with some SQL.
I have two tables, call them Page and Bookmark.
Each row in Page can have many Bookmarks pointing to it, and
they are joined via a FK (Page.id = Bookmark.page_id).

Page has a 'url' column: Page.url, which has a unique index on it.

My Page.url column got a little dirty, and I need to clean it up,
and that's what I need help with.

Here is an example of dirtiness:

Page:

id=1 url = 'http://example.com/'
id=2 url = 'http://example.com/#' -- dirty
id=3 url = 'http://example.com/#foo'  -- dirty

The last two rows are dirty.  Normally I normalize URLs before
inserting them, but these got in, and now I need to clean them.

The problem is that rows in Bookmark table may point to dirty
rows in Page, so I can't just remove the dirty rows, and I can't
just update 'url' column in Page to 'http://example.com/',
because that column is unique.

Is there some fancy SQL that I can use them to find the dirty
rows in page (... where url like '%#%') and then find rows in
Bookmark table that point to them, then point those rows to
good rows in Page (e.g. id=1 row above), and finally remove the
dirty rows from Page?
   



try this. But please check if it really does its job. I just wrote it down in
a minute or two. There will be an easier way or nicer written SQL, but as it
is just a one-time operation you shouldn't care too much. One more hint: you
should add a CHECK clause to your page_url, like "page_url text NOT NULL
UNIQUE CHECK (page_url !~ '#')"


here is my test code

CREATE TABLE pages (
 page_id SERIAL PRIMARY KEY,
 page_url text NOT NULL UNIQUE
);

CREATE TABLE bookmarks (
 bm_id SERIAL PRIMARY KEY,
 bm_text text not null,
 page_id int4 NOT NULL REFERENCES pages (page_id)
);

INSERT INTO pages (page_url) VALUES ('http://example.com/');
INSERT INTO pages (page_url) VALUES ('http://example.com/#');
INSERT INTO pages (page_url) VALUES ('http://example.com/#foo');
INSERT INTO pages (page_url) VALUES ('http://example2.com/#foo');
INSERT INTO pages (page_url) VALUES ('http://example3.com/#foobar');

insert into bookmarks (bm_text, page_id) values ('test1', 1);
insert into bookmarks (bm_text, page_id) values ('test2', 1);
insert into bookmarks (bm_text, page_id) values ('test3', 2);
insert into bookmarks (bm_text, page_id) values ('test4', 2);
insert into bookmarks (bm_text, page_id) values ('test5', 3);
insert into bookmarks (bm_text, page_id) values ('test6', 3);
insert into bookmarks (bm_text, page_id) values ('test7', 4);

BEGIN;
UPDATE bookmarks set page_id = pages2.page_id
FROM 
 pages AS pages1, 
 pages AS pages2
WHERE 
 pages1.page_id = bookmarks.page_id

 AND pages2.page_url = split_part(pages1.page_url, '#', 1)
;

DELETE FROM pages WHERE page_id IN (
 SELECT 
   pages1.page_id

 FROM
   pages AS pages1 
   JOIN pages AS pages2 ON ( 
 pages1.page_id != pages2.page_id 
 AND pages2.page_url = split_part(pages1.page_url, '#', 1)

   )
 WHERE position('#' in pages1.page_url) > 0
 AND pages1.page_id NOT IN (SELECT page_id FROM bookmarks)
);
;

UPDATE pages SET page_url = split_part(page_url, '#', 1) 
WHERE position('#' in pages.page_url) > 0

;
select * from bookmarks;
select * from pages;
COMMIT;


kind regards,
janning

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq

 

why do you consider perfectly honest URLs such as
http://example.com#foo to be dirty?
Such a construct points to a specific part (foo) of a specific document
(http://example.com).



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Referential integrity broken (8.0.3), sub-select help

2006-03-22 Thread Patrick JACQUOT

[EMAIL PROTECTED] wrote:


Hello,

I've got 2 tables, "url" (U), and "bookmark" (B), with "bookmark" pointing to 
"url" via FK.
Somehow I ended up with some rows in B referencing non-existent rows in U.
This sounds super strange and dangerous to me, and it's not clear to me how/why 
PG let this happen.
I'm using 8.0.3.

Here are the table references I just mentioned:

Table "bookmark":
id  SERIAL
CONSTRAINT pk_bookmark_id PRIMARY KEY

Table "url":
   url_id  INTEGER
CONSTRAINT fk_bookmark_id REFERENCES bookmark(id)

 

Your DDL doesn't say "B references U", but the contrary: "U references B".
So it's perfectly normal that some tuples in B are not referenced by
tuples in U.

Please correct your constraints.


Problem #1: Strange that PG allowed this to happen.  Maybe my DDL above allows 
this to happen and needs to be tightened?  I thought the above would ensure 
referential integrity, but maybe I need to specify something else?

Problem #2: I'd like to find all rows in B that point to non-existent rows in 
U.  I can do it with the following sub-select, I believe, but it's rather 
inefficient (EXPLAIN shows both tables would be sequentially scanned):

 SELECT * FROM bookmark WHERE id NOT IN (SELECT b.id FROM bookmark b, url u 
WHERE b.url_id=u.id);

Is there a more efficient way to get the rows from "bookmark"?

Thanks,
Otis


---(end of broadcast)---
TIP 6: explain analyze is your friend

 


I think for that one Scott's answer is OK.
You could also try SELECT * FROM url U WHERE NOT EXISTS (SELECT * FROM
bookmark B WHERE B.url_id = U.id)
and see which one is faster.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] insert related data into two tables

2006-05-22 Thread Patrick JACQUOT

[EMAIL PROTECTED] wrote:


Hello,

I have two tables like these:

TABLE_1:  people registry
fields: ID_T1, SURNAME, NAME
ID_T1 is primary key

TABLE_2: work groups
fields: ID_T2, TASK
ID_T2 is foreign key related to ID_T1

the first table is the list of employees, the second the task.

Sometime I need to insert a new employee and at the same time the task
for him:
I want to INSERT TO table 1 JOHN DOE (the ID is assigned automatically
since it's a primary key) and INSERT TO table 2 the ID and the task
name for JOHN DOE. Is it possible to create a single query? What is the
best way to do this?

Thanks,

Filippo


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq

 


maybe you could create a view on the join of the two tables,
then make that view writable by creating the
appropriate triggers.
Then a single insert into the view would in fact create everything.
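
A rough sketch of that idea (the view and trigger names are made up, and
INSTEAD OF triggers on views only exist from PostgreSQL 9.1 on; older
releases would need CREATE RULE instead):

CREATE VIEW person_task AS
  SELECT p.id_t1, p.surname, p.name, t.task
    FROM table_1 p LEFT JOIN table_2 t ON t.id_t2 = p.id_t1;

CREATE FUNCTION person_task_ins() RETURNS trigger AS $$
DECLARE
  new_id integer;
BEGIN
  -- assumes id_t1 is assigned automatically (e.g. a serial column)
  INSERT INTO table_1 (surname, name)
       VALUES (NEW.surname, NEW.name)
    RETURNING id_t1 INTO new_id;            -- RETURNING needs 8.2 or later
  INSERT INTO table_2 (id_t2, task) VALUES (new_id, NEW.task);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_task_ins_trg
  INSTEAD OF INSERT ON person_task
  FOR EACH ROW EXECUTE PROCEDURE person_task_ins();

-- a single statement then fills both tables:
-- INSERT INTO person_task (surname, name, task) VALUES ('DOE', 'JOHN', 'some task');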

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] some error when executing query in pgAdmin tool

2006-05-23 Thread Patrick JACQUOT

Penchalaiah P. wrote:

I created one table in pgAdmin tool but when I am executing query it 
is giving error….


CREATE TABLE "ADV"

(

"T-Section_Id" varchar(10) NOT NULL,

"CDA_No" varchar(7) NOT NULL,

"Imp_Schedule_Id" int4 NOT NULL,

"Sanction_No" varchar(20) NOT NULL,

"Sanction_Date" date NOT NULL,

"Station_From" varchar(20) NOT NULL,

"Station_To" varchar(20) NOT NULL,

"Amt_Claimed" int4,

"Amt_Admitted" int4,

"Dak_Id" varchar(20) NOT NULL,

"Refund_Dak_Id" int4 NOT NULL,

"T-Wing_Allowance_Id" varchar(10) NOT NULL,

CONSTRAINT "ADV_pkey" PRIMARY KEY ("T-Section_Id")

)

WITHOUT OIDS;

ALTER TABLE "ADV" OWNER TO postgres;

Above table I created…..

Select * from ADV; when I am executing this query I am getting error is

Relation ADV does not exist… this is the error it is giving … may I know why
I am getting this error


*Thanks & Regards*

*Penchal reddy **|** Software Engineer *

*Infinite Computer Solutions **|** Exciting Times…Infinite 
Possibilities... *


*SEI-CMMI level 5 **| **ISO 9001:2000*

*IT SERVICES **|** BPO *

*Telecom **|** **Finance **|** **Healthcare **| **Manufacturing **|** 
**Energy & Utilities **|** **Retail & Distribution **|** **Government *


*Tel +91-80-5193-(Ext:503)**|** Fax +91-80-51930009 **|** Cell No 
+91-9980012376**|**www.infics.com** *


*Information transmitted by this e-mail is proprietary to Infinite 
Computer Solutions and/ or its Customers and is intended for use only 
by the individual or entity to which it is addressed, and may contain 
information that is privileged, confidential or exempt from disclosure 
under applicable law. If you are not the intended recipient or it 
appears that this mail has been forwarded to you without proper 
authority, you are notified that any use or dissemination of this 
information in any manner is strictly prohibited. In such cases, 
please notify us immediately at** [EMAIL PROTECTED] _**and delete 
this mail from your records.*



by quoting "ADV" in the CREATE TABLE you made the name case-sensitive;
then, by not quoting it in the SELECT, you used the case-insensitive form,
which PostgreSQL folds to lower case:
adv != ADV
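
In other words, a short illustration:

SELECT * FROM "ADV";   -- quoted, so it matches the table created as "ADV"

-- or create it without quotes, so the name folds to lower case and any of
-- these work:
--   CREATE TABLE adv ( ... );
--   SELECT * FROM adv;
--   SELECT * FROM ADV;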

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Am I crazy or is this SQL not possible

2006-06-02 Thread Patrick Jacquot

Collin Peters wrote:


I am having some serious mental block here.  Here is the abstract
version of my problem.  I have a table like this:

unique_id (PK)   broadcast_id   date_sent  status
1  1 2005-04-0430
2  1 2005-04-01 30
3  1 2005-05-20 10
4  2 2005-05-29 30

So it is a table that stores broadcasts including the broadcast_id,
the date sent, and the status of the broadcast.

What I would like to do is simply get the last date_sent and it's
status for every broadcast.  I can't do a GROUP BY because I can't put
an aggregate on the status column.

SELECT MAX(date_sent), status
FROM broadcast_history
GROUP BY broadcast_id

How do I get the status for the most recent date_sent using GROUP BY?

DISTINCT also doesn't work

SELECT DISTINCT ON (email_broadcast_id) *
FROM email_broadcast_history
ORDER BY date_sent

As you have to have the DISTINCT fields matching the ORDER BY fields.
I have to ORDER BY date_sent, but I can't do a DISTINCT on date_sent

I keep thinking am I missing something.  Does anybody have any ideas?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


You can also try
SELECT * from broadcast_history A WHERE NOT EXISTS
(SELECT * from broadcast_history B
  WHERE B.broadcast_id = A.broadcast_id AND B.date_sent > A.date_sent)
There isn't any PostgreSQL-ism there, just a correlated subquery which is
perfectly standard, afaik.
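
For completeness, the DISTINCT ON attempt quoted above only needs its ORDER
BY to start with the DISTINCT ON column; a sketch:

SELECT DISTINCT ON (broadcast_id) *
  FROM broadcast_history
 ORDER BY broadcast_id, date_sent DESC;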



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] "CASE" is not a variable

2006-06-29 Thread Patrick Jacquot

Keith Worthington wrote:


Hi All,

The following is a section of code inside an SQL function.   When I attempt to
run it I get the error message '"CASE" is not a variable'.  If I split this into
two queries (one for each variable) it works fine.  Obviously I have a work
around but I would like to understand what I am doing wrong.  TIA

SELECT tbl_item_bom.so_subline INTO v_so_subline,
       CASE WHEN tbl_mesh.mesh_type = 'square' THEN
              ( CASE WHEN tbl_mesh.unit_of_measure = 'in' THEN           tbl_mesh.mesh_size
                     WHEN tbl_mesh.unit_of_measure = 'ft' THEN 12.0    * tbl_mesh.mesh_size
                     WHEN tbl_mesh.unit_of_measure = 'mm' THEN 25.4    * tbl_mesh.mesh_size
                     WHEN tbl_mesh.unit_of_measure = 'cm' THEN  2.54   * tbl_mesh.mesh_size
                     WHEN tbl_mesh.unit_of_measure = 'm'  THEN  0.0254 * tbl_mesh.mesh_size
                     ELSE 0
                END )
            WHEN tbl_mesh.mesh_type = 'diamond' THEN
              ( CASE WHEN tbl_mesh.unit_of_measure = 'in' THEN           tbl_mesh.mesh_size / 2.0
                     WHEN tbl_mesh.unit_of_measure = 'ft' THEN 12.0    * tbl_mesh.mesh_size / 2.0
                     WHEN tbl_mesh.unit_of_measure = 'mm' THEN 25.4    * tbl_mesh.mesh_size / 2.0
                     WHEN tbl_mesh.unit_of_measure = 'cm' THEN  2.54   * tbl_mesh.mesh_size / 2.0
                     WHEN tbl_mesh.unit_of_measure = 'm'  THEN  0.0254 * tbl_mesh.mesh_size / 2.0
                     ELSE 0
                END )
            ELSE 0
       END INTO v_mesh_size
  FROM sales_order.tbl_item_bom
  LEFT JOIN peachtree.tbl_mesh
    ON tbl_item_bom.item_id = tbl_mesh.item_id
 WHERE tbl_item_bom.so_number = rcrd_line.so_number
   AND tbl_item_bom.so_line = rcrd_line.so_line
   AND tbl_item_bom.component_type = 'net';

Kind Regards,
Keith


 


Shouldn't your CASE expression be a plain select-list item with a single INTO
clause listing both targets, i.e.
SELECT tbl_item_bom.so_subline, CASE ... END INTO ..., ... FROM ... WHERE ...
?
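
Spelled out against the query above, the shape would be roughly this (a sketch only; the unit-of-measure arithmetic is trimmed, and rcrd_line comes from the surrounding function):

SELECT tbl_item_bom.so_subline,
       CASE WHEN tbl_mesh.mesh_type = 'square'  THEN tbl_mesh.mesh_size
            WHEN tbl_mesh.mesh_type = 'diamond' THEN tbl_mesh.mesh_size / 2.0
            ELSE 0
       END
  INTO v_so_subline, v_mesh_size
  FROM sales_order.tbl_item_bom
  LEFT JOIN peachtree.tbl_mesh ON tbl_item_bom.item_id = tbl_mesh.item_id
 WHERE tbl_item_bom.so_number = rcrd_line.so_number
   AND tbl_item_bom.so_line = rcrd_line.so_line
   AND tbl_item_bom.component_type = 'net';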
HTH
--
Patrick



Re: [SQL] graph plotting engines compatible with postgres

2006-07-03 Thread Patrick Jacquot

Parang Saraf wrote:


hey,

I wanted to know about a graph plotting engine that is compatible with
postgres 8.1.4. I am performing some queries, for example comparing the
speed of three runners, namely A, B and C, versus time. I want to publish
the result in a graphical manner on the net instead of just providing the
data.


So, if you could provide me with some information on how i can do it 
then that would be great.


thanks and regards
parang saraf
[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>


I have been using gnuplot for that kind of thing since 1997.
Example for two curves on the same graph :

LANG=fr
LINGUAS=fr
psql editor <<finpsql >plotdata
select distinct d, totblo, utiblo from area where tp='$1' and area = 'EDDON01' and d > 'today'::date-365 order by d;
\o
\q
finpsql
gnuplot <<fingnuplot
plot "plotdata" u 1:3 title "Occupation de l'area", "" u 1:2 title "Taille de l'area"
fingnuplot

This example uses a temporary "plotdata" file.
HTH

--
Patrick



Re: [SQL] simple problem???

2006-07-21 Thread Patrick Jacquot

ivan marchesini wrote:


Dear users
It's a simple problem I think (and I hope  :-)

I have a table like this:
A|B|C
1|2|20
1|3|10
1|4|21
2|3|12
2|4|22
3|4|23

where the first two columns are a combination without repetition (with
k=2) of the numbers 1,2,3,4;
for each pair I have a value in column C.

I would want a select that can extract these records:

1|3|10
2|3|12
3|4|23


i.e. the select must look into the A field first and: 
-select all records where A=1, 
-find, into this selection, the record where there's the minimum value
of the field C 
-print all the fields for this record.


then

-select all records where A=2
-find, into this selection, the record where there's the minimum value
of the field C 
-print all the fields of this record.


and so on...

using 
SELECT a,MIN(c) FROM table GROUP BY a


is a partial solution because I can't see the value of B
and I obtain:

1|10
2|12
3|23

How can I also get the value of B?

Thank you very much

Ivan
 


Maybe you could try
SELECT DISTINCT ON (a) a, b, c FROM your_table ORDER BY a, c;
The DISTINCT ON construct is a PostgreSQL-ism.

Regards
--
Patrick
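
For completeness, a portable (non-PostgreSQL-specific) way to get the same rows might be (a sketch; the table name is assumed):

SELECT t.a, t.b, t.c
  FROM your_table t
  JOIN (SELECT a, min(c) AS min_c FROM your_table GROUP BY a) m
    ON m.a = t.a AND m.min_c = t.c;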




[SQL] Select For Update and Left Outer Join

2011-04-27 Thread Patrick Earl
This is a follow-up to an old message by Tom Lane:

http://archives.postgresql.org/pgsql-sql/2006-10/msg00080.php

In ORMs like NHibernate, there are a few strategies for mapping
inheritance to SQL.  One of these is "Joined Subclass," which allows
for the elimination of duplicate data and clean separation of class
contents.

With a class hierarchy such as this:

Pet
Dog : Pet
Cat : Pet

The query to get all the pets is as follows:

select * from Pet
left join Dog on Dog.Id = Pet.Id
left join Cat on Cat.Id = Pet.Id

Now suppose you want to lock to ensure that your Cat is not updated
concurrently.  You add FOR UPDATE, but then PostgreSQL gets upset and
complains that locking on the nullable side of an outer join is not
allowed.

From our data model, we know that for every single Pet, there can
never be a Dog or Cat that spontaneously appears, so locking in this
case is totally safe.  Unfortunately, PostgreSQL doesn't seem to
provide any mechanism to lock just the rows involved in this query.

Any advice?  I'd be happy if such a thing was implemented in the
engine, as it's supported by other databases without trouble.
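
One possible workaround (a sketch, not from the original message, and only if locking the parent row alone is acceptable) is to restrict the lock to the non-nullable side with FOR UPDATE OF; the WHERE clause below is illustrative:

SELECT * FROM Pet
LEFT JOIN Dog ON Dog.Id = Pet.Id
LEFT JOIN Cat ON Cat.Id = Pet.Id
WHERE Pet.Id = 42
FOR UPDATE OF Pet;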

As another note, I'm one of the NHibernate developers and I'm working
to get all the NHibernate tests working with PostgreSQL.  The two
significant cases I've had to avoid testing are the "FOR UPDATE"
mentioned above and null characters in UTF strings.  Storing a UTF
"char" which defaults to zero doesn't work on PostgreSQL because it's
apparently still using zero-terminated string functions. :(

Aside from those two things, it looks like PostgreSQL is going to be
passing all the tests soon, so that's good news. :)

Patrick Earl



[SQL] Index working, but not inside function

2007-07-10 Thread Patrick Clery
## THE PROBLEM

I'm trying to write a function that will select the ID of a row from a very 
large table (2M rows) using an index (places_autocomplete_idx).
When I execute the function the query stalls and is apparently not taking 
advantage of the index. However, executing the same query outside
of the function uses the index.

When using a prepared statement, the planner uses the index if I use the "~=~" 
operator instead of the "LIKE" operator. Using a different
operator within the function makes no difference whatsoever.

## THE FUNCTION

CREATE OR REPLACE FUNCTION pop_country_place_id(varchar) RETURNS INTEGER AS $$
SELECT id FROM places WHERE LOWER(shortname) LIKE LOWER($1);
$$ LANGUAGE sql;

## PLANNER USES INDEX WHEN "~=~" OPERATOR IS USED INSTEAD OF "LIKE" WITH 
PREPARED STATEMENT

pop=> PREPARE pop_plan(varchar) AS SELECT id FROM places WHERE 
LOWER(shortname) LIKE LOWER($1::varchar);
PREPARE
Time: 0.295 ms
pop=> 
pop=> EXPLAIN EXECUTE pop_plan('Canada'); 
  QUERY PLAN   
---
 Seq Scan on places  (cost=0.00..214301.44 rows=12194 width=4)
   Filter: (lower((shortname)::text) ~~ lower(($1)::text))
(2 rows)

Time: 0.310 ms
pop=> DEALLOCATE pop_plan;
DEALLOCATE
Time: 0.131 ms
pop=> PREPARE pop_plan(varchar) AS SELECT id FROM places WHERE 
LOWER(shortname) ~=~ LOWER($1::varchar);
PREPARE
Time: 0.330 ms
pop=> EXPLAIN EXECUTE pop_plan('Canada');
 QUERY PLAN 


 Bitmap Heap Scan on places  (cost=375.31..38438.72 rows=12194 width=4)
   Recheck Cond: (lower((shortname)::text) ~=~ lower(($1)::text))
   ->  Bitmap Index Scan on places_autocomplete_idx  (cost=0.00..372.26 
rows=12194 width=0)
 Index Cond: (lower((shortname)::text) ~=~ lower(($1)::text))
(4 rows)

Time: 0.318 ms

## PLANNER NOT USING INDEX (places_autocomplete_idx) INSIDE OF FUNCTION
(TESTED WITH BOTH "~=~" AND "LIKE" OPERATORS)

pop=> CREATE OR REPLACE FUNCTION pop_country_place_id(varchar) RETURNS INTEGER 
AS $$
pop$> SELECT id FROM places WHERE LOWER(shortname) ~=~ LOWER($1::varchar);
pop$> $$ LANGUAGE sql;
CREATE FUNCTION
Time: 29.310 ms
pop=> SELECT pop_country_place_id('United States');
* * * STALLS * * *
Cancel request sent
ERROR:  canceling statement due to user request
CONTEXT:  SQL function "pop_country_place_id" statement 1
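
A workaround that is sometimes suggested (a sketch, not from the original post) is to build the statement dynamically in PL/pgSQL, so the pattern is a literal at plan time and the varchar_pattern_ops index can be considered:

CREATE OR REPLACE FUNCTION pop_country_place_id(p_name varchar) RETURNS integer AS $$
DECLARE
    v_id integer;
BEGIN
    -- the pattern is embedded as a quoted literal, so each call is planned with it
    EXECUTE 'SELECT id FROM places WHERE lower(shortname) LIKE '
            || quote_literal(lower(p_name))
    INTO v_id;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;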

## THE TABLE

pop=> \d places
                                 Table "public.places"
       Column        |          Type          |                     Modifiers
---------------------+------------------------+----------------------------------------------------
 id                  | integer                | not null default nextval('places_id_seq'::regclass)
 name                | character varying(255) |
 permalink           | character varying(255) |
 parent_id           | integer                |
 abreviation         | character varying(3)   |
 type                | character varying(255) |
 is_approved         | boolean                | not null default false
 permalinks          | character varying(255) |
 pictures_permalinks | character varying(255) |
 pictures_count      | integer                | not null default 0
 region_code         | character varying(2)   |
 country_code        | character varying(2)   |
 is_active           | boolean                | not null default true
 shortname           | character varying(255) |
Indexes:
    "places_pkey" PRIMARY KEY, btree (id)
    "places_autocomplete_idx" btree (lower(shortname::text) varchar_pattern_ops)
    "places_idx_abreviation" btree (abreviation) WHERE "type"::text = 'Region'::text
    "places_parent_idx" btree (parent_id)
    "places_permalinks_idx" btree (lower(permalinks::text))
    "places_pictures_permalinks_idx" btree (lower(pictures_permalinks::text))
    "places_region_idx" btree (country_code, region_code) WHERE "type"::text = 'Region'::text
    "regions_idx" btree (country_code, region_code) WHERE "type"::text = 'Region'::text
Foreign-key constraints:
    "places_regions_fkey" FOREIGN KEY (country_code, region_code) REFERENCES
        regions(country_code, region_code) ON UPDATE CASCADE



[SQL] merge timestamps to intervals

2008-05-11 Thread Patrick Scharrenberg
Hi!

I have a table where I repeatingly log the status of some service, which
looks something like this:

< timestamp, status >

Now, every time my service is up I save the timestamp and a status of
"up"; if it's down I save the timestamp with "down", e.g.:
10:13   up
10:14   up
10:15   up
10:16   down
10:17   up
10:18   up

I'd like to merge this information into intervals where the service was up
or down.

< interval, status >
10:13-10:15 up
10:16-10:16 down
10:17-10:18 up

I've no clue how to approach this problem.

Any ideas/hints?
Also suggestions on a feasible better schema are welcome. :-)

Thanks
Patrick
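
One possible approach, not from the original thread: a sketch that assumes PostgreSQL 8.4+ (window functions) and a table service_log(ts timestamp, status text):

SELECT min(ts) AS from_ts, max(ts) AS to_ts, status
FROM (
    SELECT ts, status,
           row_number() OVER (ORDER BY ts)
         - row_number() OVER (PARTITION BY status ORDER BY ts) AS grp
    FROM service_log
) s
GROUP BY status, grp
ORDER BY from_ts;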



[SQL] returning results from an update for joining other tables

2008-06-10 Thread Patrick Scharrenberg
Hi!

I have a table containing data and a column which holds information on
which compute node processes the data. At a given interval I'd like to
request some data from this table and mark the returned rows by setting
the "process_node" column to the name of the node that asked for the data.
There may also be rows which already have process_node set to the
name of the node currently asking.

What I tried was something like this, which gave me a syntax error:

SELECT * FROM
( UPDATE ta
SET process_node='nodename'
WHERE a>10 AND process_node is null
RETURNING *
) AS ta
JOIN someothertable ON ...

Can I somehow select some rows and do multiple operations on exactly
this resultset?
In my case update columns, then join columns from other tables and then
return the resultset with the joined columns?
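
Not part of the original message: in PostgreSQL 9.1 and later a writable CTE gives exactly this; the join condition below is hypothetical:

WITH updated AS (
    UPDATE ta
       SET process_node = 'nodename'
     WHERE a > 10 AND process_node IS NULL
    RETURNING *
)
SELECT *
  FROM updated
  JOIN someothertable ON someothertable.ta_id = updated.id;   -- hypothetical join columns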

Regards
Patrick



Re: [SQL] returning results from an update for joining other tables

2008-06-10 Thread Patrick Scharrenberg
Hi!
>> What I tried was something like this, which gave me a syntax error:
>>
>> SELECT * FROM
>> ( UPDATE ta
>>  SET process_node='nodename'
>>  WHERE a>10 AND process_node is null
>>  RETURNING *
>> ) AS ta
>> JOIN someothertable ON ...
>
> It's a known limitation, see <[EMAIL PROTECTED]>

Oh, I see.

Are there ways to work around this limitation?

Patrick




[SQL] using calculated column in where-clause

2008-06-17 Thread Patrick Scharrenberg
Hi!

I'd like to do some calculation with values from the table, show the result
as a new column, and use the values in a where-clause.

Something like this
select a, b , a*b as c from ta where c=2;

But postgresql complains, that column "c" does not exist.

Do I have to repeat the calculation (which might be even more complex
:-) ) in the "where"-clause, or is there a better way?
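
Two common workarounds, using the query from above (a sketch):

-- 1. Repeat the expression in the WHERE clause:
SELECT a, b, a*b AS c FROM ta WHERE a*b = 2;

-- 2. Wrap the calculation in a subquery so the alias is visible outside it:
SELECT * FROM (SELECT a, b, a*b AS c FROM ta) AS s WHERE c = 2;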


Thanks in advance.

Best regards
Patrick



Re: [SQL] using calculated column in where-clause

2008-06-17 Thread Patrick Scharrenberg
Andreas Kretschmer wrote:

>> Do I have to repeat the calculation (which might be even more complex
> yes.

Short and to the point! :-)

Thanks!
Patrick



[SQL] order by when using cursors

2008-06-17 Thread Patrick Scharrenberg
Hi!

I  did some experiments with cursors and found that my data doesn't get
sorted by the "order by"-statement.

Here is what I did:



CREATE TABLE ta (
  a integer NOT NULL,
  b integer NOT NULL
);

insert into ta values(3,1);
insert into ta values(1,2);
insert into ta values(4,3);
insert into ta values(2,4);

CREATE OR REPLACE FUNCTION testcur( OUT a integer, OUT b integer )
RETURNS SETOF RECORD AS $$
DECLARE
cur refcursor;
BEGIN
OPEN cur FOR SELECT * FROM ta ORDER BY a DESC;
LOOP
FETCH cur INTO a,b;
IF not found THEN
exit;
ELSE
RETURN NEXT;
END IF;
END LOOP;
CLOSE cur;
END;
$$ LANGUAGE 'PLPGSQL' ;

SELECT * FROM testcur();



As the result I get:

3   1
1   2
4   3
2   4


Which is not ordered by column a!?

Is this intended?
Am I doing something wrong?

I'm using Postgresql 8.3.1

Patrick




Re: [SQL] order by when using cursors

2008-06-18 Thread Patrick Scharrenberg
Pavel Stehule wrote:
>> it's a known problem - a collision between column and variable names - so when you
>> use any SQL statement inside a procedure you have to be careful about
>> the variable names you use.

Oh, I didn't take notice of that.

Now that I know it is not a bug and how it works, it makes things much easier!
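
For the archive, a sketch of the usual fix, based on the function in the original post: qualify the columns with the table name (or rename the OUT parameters) so they are not captured by the variables:

CREATE OR REPLACE FUNCTION testcur( OUT a integer, OUT b integer )
RETURNS SETOF RECORD AS $$
DECLARE
    cur refcursor;
BEGIN
    -- ta.a / ta.b are qualified, so they are not replaced by the OUT parameters
    OPEN cur FOR SELECT ta.a, ta.b FROM ta ORDER BY ta.a DESC;
    LOOP
        FETCH cur INTO a, b;
        EXIT WHEN NOT FOUND;
        RETURN NEXT;
    END LOOP;
    CLOSE cur;
END;
$$ LANGUAGE plpgsql;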

Thank you!

Patrick



[SQL] index for group by

2008-07-22 Thread Patrick Scharrenberg
Hi,

is there a way to speed up "group by" queries with an index?

In particular if I have a table like this:

CREATE TABLE data
(
   id1 integer,
   id2 integer,
   somedata character varying,
   ts timestamp with time zone
);

where data about "id1" and "id2" is continuously logged into "somedata",
together with the timestamp when it was logged.

So I have multiple rows with the same id1 and id2 but different
timestamp (and data maybe).

At the moment I have ~40,000,000 rows in that table, so doing a

SELECT id1, id2 FROM data GROUP BY id1, id2;

takes some time (~10 minutes) and returns about 1,000,000 rows.

I created an index on both columns id1 and id2 (together), which takes
about 800 MB, but it doesn't speed things up.
In fact it doesn't even seem to be used.

Is there any way to speed up this "group by", or is it more likely
that I have a conceptual flaw?

regards
Patrick



[SQL] fast insert-if-key-not-already-there

2008-08-04 Thread Patrick Scharrenberg
Hi!

I have to do many inserts into a database where the key is most often
already there.
My current approach is to query for the key (ip-address), and if the
result is null I do the insert.
For every IP-Address I need the ip_addr_id from the same table.


Something like this:

CREATE TABLE ip_addresses (
    "ip_addr_id" serial NOT NULL PRIMARY KEY,
    "ip_addr"    inet   UNIQUE NOT NULL
);

CREATE OR REPLACE FUNCTION update_Addresses(
    v_ip_addresses  inet[]
) RETURNS void AS $$
DECLARE
    v_ip_addr       INET;
    v_ip_addr_id    INTEGER;
    v_ip_addr_ids   INTEGER[];
BEGIN
    FOR i IN 1 .. array_upper( v_ip_addresses, 1 ) LOOP
        v_ip_addr := v_ip_addresses[i];

        -- check if ip_addr exists and insert it if not
        SELECT ip_addr_id INTO v_ip_addr_id
          FROM ip_addresses WHERE ip_addr = v_ip_addr;
        IF v_ip_addr_id IS NULL THEN
            INSERT INTO ip_addresses ( ip_addr ) VALUES ( v_ip_addr )
            RETURNING ip_addr_id INTO v_ip_addr_id;
        END IF;

        v_ip_addr_ids := array_append( v_ip_addr_ids, v_ip_addr_id );
    END LOOP;
END;
$$ LANGUAGE 'plpgsql' STRICT;

Now I'm wondering if there is a better solution, since I'm doing ~20
inserts at once and every time I'm doing single lookups for the IDs.
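
One way to avoid the per-address round trips (a sketch, not from the original thread; it assumes PostgreSQL 8.4+ for unnest() and is not safe against concurrent inserts of the same address):

INSERT INTO ip_addresses ( ip_addr )
SELECT a
  FROM unnest( ARRAY['10.0.0.1','10.0.0.2']::inet[] ) AS a
 WHERE NOT EXISTS ( SELECT 1 FROM ip_addresses i WHERE i.ip_addr = a );

SELECT ip_addr, ip_addr_id
  FROM ip_addresses
 WHERE ip_addr = ANY ( ARRAY['10.0.0.1','10.0.0.2']::inet[] );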

regards
patrick



[SQL] generating date sequences

2008-10-20 Thread Patrick Scharrenberg
Hi!
Is there a simple way to generate sequences of dates like the following?
"2008-07-03 00:00:00"
"2008-07-04 00:00:00"
"2008-07-05 00:00:00"
"2008-07-06 00:00:00"


I'd like to join a table to aggregate the number of items for each day
(each item has a timestamp).
For some days however there are no items, resulting in no row instead of
a row with zero items.
I'd like to fill in these missing rows.

I hope I have made my problem clear.
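
Just for illustration, not from the original message: a zero-filled daily count of the kind described above might look like this (the items table, its ts column, and the date range are assumptions):

SELECT ('2008-07-03'::date + s) AS day,
       count(items.ts)          AS item_count
  FROM generate_series(0, 3) AS s
  LEFT JOIN items ON items.ts::date = '2008-07-03'::date + s
 GROUP BY 1
 ORDER BY 1;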

Best regards
Patrick



Re: [SQL] generating date sequences

2008-10-20 Thread Patrick Scharrenberg
Hi!

>> Is there a simple way to generate sequences of dates like the following?
> Sure:
> test=# select '2008-07-03'::date + s * '1day'::interval from
> generate_Series(1,10) s;


Thanks! That's what I was searching for.

You saved me from manually adding missing dates in a huge Excel sheet!

So thank you again!

P. Scharrenberg



RE: [SQL] 7.0 weirdness

2000-05-30 Thread Patrick FICHE

It seems to me that this was a lack of checking in version 6.5...
For one "gid" you may have several "created" values, so Postgres is not
able to decide which value must be taken for the ordering.

Simple example:

gid  created
  1        1
  1        3
  2        2

In which order is Postgres supposed to give the data?
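
One way around the error shown in the quoted message below is to keep one row per gid via GROUP BY and sort on the aggregate (a sketch reusing the quoted columns):

SELECT gid, max(created) AS last_created
  FROM members
 WHERE active = 't'
   AND gender = 0
   AND ( wantrstypemale LIKE '%Short Term%'
      OR wantrstypemale LIKE '%Marriage%'
      OR wantrstypemale LIKE '%Long Term%'
      OR wantrstypemale LIKE '%Penpal%'
      OR wantrstypemale LIKE '%Activity Partner%' )
 GROUP BY gid
 ORDER BY last_created DESC;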


Patrick Fiche
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of Jeff MacDonald
Sent: Tuesday, May 30, 2000 14:28
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: [SQL] 7.0 weirdness


hi folks,

this query works fine in 6.5 but screwie in 7.0

7.0

gm=> SELECT DISTINCT gid FROM members
gm-> WHERE active = 't'
gm-> AND  (gender = 0
gm-> AND  (wantrstypemale LIKE '%Short Term%'
gm-> OR wantrstypemale like '%Marriage%'
gm-> OR wantrstypemale like '%Long Term%'
gm-> OR wantrstypemale like '%Penpal%'
gm-> OR wantrstypemale like '%Activity Partner%')
gm-> )  order by created desc;
ERROR:  For SELECT DISTINCT, ORDER BY expressions must appear in target list
gm=>


any ideas?

jeff







Re: [SQL] SPEED UP.

2000-06-02 Thread Patrick Giagnocavo

> 
> 
> I would like to know if there is a specific way to speed up my query to
> postgres.

Please post the exact command line arguments you are giving when you start
the postmaster daemon process.

Cordially

Patrick Giagnocavo
[EMAIL PROTECTED]





[SQL] Outer join in postgresql

2000-06-12 Thread Patrick Kay

I am looking for a way to run an outer join in psql.  Can anyone help?

Informix has an "OUTER" keyword.  I don't see anything like this in the docs
for psql.
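
For reference: outer joins were not yet supported when this was asked; PostgreSQL 7.1 and later accept the standard syntax directly. A sketch with made-up tables t1 and t2:

SELECT t1.id, t1.val, t2.val
  FROM t1
  LEFT OUTER JOIN t2 ON t2.id = t1.id;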

Thanks much.
-Pat Kay




[SQL] find the number of rows for each table

2000-06-15 Thread Patrick Coulombe

Hi,

I want to find the number of rows for each table (2) in 1 query,
the query is:

select distinct * from medias, contacts WHERE medias.media_id = 
contacts.media_id AND medias_categories.media_id = medias.media_id AND 
medias_categories.categorie_id = 1

700 rows


If I want to find the number of rows for medias (for the same query) I
have to do another query:

select distinct * from medias WHERE medias.media_id = 
contacts.media_id AND medias_categories.media_id = medias.media_id AND 
medias_categories.categorie_id = 1

60 rows


I searched for a solution and tried this:

select distinct *, count(distinct (medias.media_id)), count(distinct 
(contacts.contact_id)) from medias, contacts WHERE medias.media_id = 
contacts.media_id AND medias_categories.media_id = medias.media_id AND 
medias_categories.categorie_id = 1
-
ERROR:  parser: parse error at or near "distinct"

That doesn't work, as you can see.



I need help, please.
Patrick
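
Not part of the original message: one way to get both counts in a single query; note that medias_categories has to appear in the FROM list for the query to be valid:

SELECT count(DISTINCT medias.media_id)     AS nb_medias,
       count(DISTINCT contacts.contact_id) AS nb_contacts
  FROM medias, contacts, medias_categories
 WHERE medias.media_id = contacts.media_id
   AND medias_categories.media_id = medias.media_id
   AND medias_categories.categorie_id = 1;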






[SQL] temporay table : error is repeated.

2000-06-28 Thread Patrick Coulombe

Let me try to explain why I want to use temporary tables; maybe you have
other solutions for me (it's called experience) :)

I use PHP to interact with postgres.

1) The "user" do is selection (form - checkbox).
2) Results.
3) Options : he can print it, save it, and make MODIFICATIONS.

That's the problem... modifications: I don't want to let the "user"
change my db, but just add to, modify, and erase his selection result. So I'm
thinking about this: put the result of his selection in a temporary
table to let him add, modify and erase his result without affecting the
original db.

What I don't understand is when the temporary table will be dropped. When you
do it in psql, no problem, because when you quit that utility the temporary
table is dropped automatically - right? But how does it work when you query
from a web page in PHP?
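
An illustrative sketch, not from the original message (table names reuse the query below): the selection can be materialized like this, and a temporary table lives until the database connection that created it is closed:

CREATE TEMPORARY TABLE user_selection AS
  SELECT DISTINCT *
    FROM medias, contacts
   WHERE medias.media_id = contacts.media_id;
-- older spelling of the same idea:
-- SELECT DISTINCT * INTO TEMP user_selection FROM medias, contacts WHERE ...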

After that... the query can be as big as this (depending on the user's
selection), i.e.:

select distinct * from medias, contacts where medias.media_id =
contacts.media_id AND medias_categories.media_id = medias.media_id AND
medias_categories.categorie_id = 1 AND contacts_titres.contact_id =
contacts.contact_id AND (contacts_titres.titre_id = 12) AND (type_id = 1
OR type_id = 2 OR type_id = 0) AND (langue_id = 1 OR langue_id = 2) AND
nation_id = 1 UNION select distinct * from medias, contacts where
medias.media_id = contacts.media_id AND medias_categories.media_id =
medias.media_id AND medias_categories.categorie_id = 3 AND
medias_regions.media_id = medias.media_id AND (medias_regions.region_id
= 11 OR medias_regions.region_id = 13 OR medias_regions.region_id = 6)
AND (type_id = 1 OR type_id = 2 OR type_id = 0) AND (langue_id = 1 OR
langue_id = 2) AND nation_id = 1 AND niveau_id = 1 UNION select distinct
* from medias, contacts where medias.media_id = contacts.media_id AND
medias_categories.media_id = medias.media_id AND
medias_categories.categorie_id = 5 AND medias_regions.media_id =
medias.media_id AND (medias_regions.region_id = 6 OR
medias_regions.region_id = 7 OR medias_regions.region_id = 3) AND
(type_id = 1 OR type_id = 2 OR type_id = 0) AND (langue_id = 1 OR
langue_id = 2) AND nation_id = 1 AND (bulletin != 1 OR bulletin is null)
UNION select distinct * from medias, contacts where medias.media_id =
contacts.media_id AND medias_categories.media_id = medias.media_id AND
medias_categories.categorie_id = 6 AND contacts_secteurs.contact_id =
contacts.contact_id AND (contacts_secteurs.soussecteur_id = 9 OR
contacts_secteurs.soussecteur_id = 67 OR
contacts_secteurs.soussecteur_id = 74 OR
contacts_secteurs.soussecteur_id = 6) AND (type_id = 1 OR type_id = 2 OR
type_id = 0) AND (langue_id = 1 OR langue_id = 2) AND nation_id = 1



TEMPORARY table with UNION: problem or not?

Sorry for this long post... but it's the last option for my project!
Thank you for letting me know what you think about this.
Sorry for my English, but I'm getting better every day.

Patrick



Re: [SQL] GROUP by finish&&last day of month

2000-07-05 Thread Patrick Jacquot

Antti Linno wrote:

> Ok, if I want to get non-aggregate data in groups, I use order by. This
> group by seemed so logical though, but the fruit that u can't have, is
> usually the most sweet.
>
> New question, how to get the last day of month(order data by last day of
> month). And to prevent chain letter from misunderstanding, no I don't
> intend to look it up from calendar and then input it manually.
>
> A.

To get the last day of the month:
1) use date_trunc to truncate to the first day of the month
2) add one month
3) subtract 1 day
(see the example below)
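
A minimal sketch of those three steps in SQL:

SELECT ( date_trunc('month', current_date)
         + interval '1 month'
         - interval '1 day' )::date AS last_day_of_month;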




Re: [SQL] confused by select.

2000-07-07 Thread Patrick Jacquot

John wrote:

> Hello.  I'm trying to do a select here that i have looked at from many
> angles and cannot find a solution too.  My main problem, (i believe) is
> that it is trying to create a many to many relationship.  I would be
> grateful if anyone knew a way around this.
>
> Here's my predicament.
> I have a database for sales orders.
> An inventory table.
> And
> A history table.
>
> Inventory:
> Create t1 (sku char(4), type char(1));
> History:
> Create t2 (id char(6), items text);
>
> [There are more fields, but this is all that matters for this query]
>
> I would like to get the id's where the customer has purchased an item of a
> specific type.
>
> Problem A: most people order more than one item at a time.
>  So the 'items' field is a colon delimited text field containing the
>skus of the purchased items.
>   
> Problem B: there are many skus of each type.
>as are there many purchases.
>
> What would the proper select be?
>
> create view v1 (select sku from t1 where type ='K');
>will get me all the skus of one type but i don't know where to go
> from there. And it feels as if i've exhausted all options.
>
> i've been working around:
> select id from t2 where items like sku;
> and no matter what i use in the where clause (regex, like, or wildcards).
> i get back an error or a zero.
> and there are no other related fields in the mentioned tables.
>
> is there a way to step through the sku field item by item without leaving
> postgres (i.e. resorting to scripting)?
>
> I have also tried different fieldtypes for the 'items' field.
> But they all give me problems too.
> The array works much the same way as the : delimited field I have does.
> Except you have less operators that work with it.
> And to break it up into separate items fields. (item1, item2, item3,
> etc.) is a waste, seeing as the average order is 2.? but there are many
> orders with hundreds of items.
>
> Sorry for the long winded explanation.
> But I figured, that the more imformation i gave, the more someone may be
> able to help.
>
> Thanks in advance.
> .jtp

usually many-to-many relationships are handled by a third table, like this:

create table items (item_id,...)
create table customers (customer_id, ...)
create table orders (customer_id, item_id, quantity_ordered)
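
With that layout, the original question ("which customers bought an item of a given type?") becomes a plain join; a sketch, assuming the type column lives in items:

SELECT DISTINCT o.customer_id
  FROM orders o
  JOIN items  i ON i.item_id = o.item_id
 WHERE i.type = 'K';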

Hoping it may help

Patrick JACQUOT




[SQL] order by accents?

2000-07-07 Thread Patrick Coulombe

hi,
if I do a query like this one :

SELECT name from medias ORDER BY name

name
------
(five rows whose values were lost in the archive)
ÉCCC

6 rows


Why does the record ÉCCC sort at the end?
How can I fix this?

Thank you
Patrick



Re: [SQL] join if there, blank if not

2000-07-13 Thread Patrick Jacquot

Henry Lafleur wrote:

> Jacques,
>
> The problem with using the union in this way is that you get NULLs for a
> number whether or not it has an associated record in calls.
>
> To do a pure outer join, it would be something like this:
>
> select c.cdate, c.ctime, c.cextn, c.cnumber, n.ndesc
> from calls c, numbers n
> where c.cnumber=n.nnumber
> union all
> select null as cdate, null as cextn, null as cnumber, nnumber, ndesc
> from numbers
> WHERE nnumber NOT IN (SELECT nnumber FROM calls);
>
> ---
>
> What I have always had trouble with, though, is if you have multiple fields
> for a primary key. For example, if a customer master table also had ship-to
> locations as the key and you wanted to get all customers and any orders for
> that customer, in rough ANSI SQL it would be:
>
> SELECT c.cust_number, c.ship_to, o.item
> FROM cust c LEFT OUTER JOIN orders o ON c.cust_number = o.cust_number AND
> c.ship_to = o.ship_to
>
> then, in the union, it is not clear how to do it:
>
> SELECT c.cust_number, c.ship_to, o.item
> FROM cust c, orders o
> WHERE c.cust_number = o.cust_number AND c.ship_to = o.ship_to
> UNION
> SELECT cust_number, ship_to, NULL AS item
> FROM cust
> WHERE ???
>
> which I never know what to do at ??? -
> WHERE c.cust_number NOT IN (SELECT cust_number FROM orders)
> is one choice, but this doesn't help if the ship to doesn't match. We can
> get wild and try -
> WHERE c.cust_number NOT IN (SELECT cust_number FROM orders WHERE
> ship_to = cust.ship_to)
> but if you go to two and three keys, what happens then? It seems like it
> should work if we continue. But how efficiently does this work?
>
> Has anyone examined this problem?
>
> Thanks,
>
> Henry
>
> -Original Message-
> From: Jacques Williams [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 12, 2000 9:41 AM
> To: Gary Stainburn
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SQL] join if there, blank if not
>
> Gary,
>
> What you want here is an outer join. The syntax would look something like
> this:
>
> select c.cdate, c.ctime, c.cextn, c.cnumber, n.ndesc
> from calls c, numbers n
> where c.cnumber=n.nnumber
> union all
> select null as cdate, null as cextn, null as cnumber, n.nnumber, n.ndesc
> from numbers;
>
> (I haven't tried this, but it should work.) For more information on outer
> joins, see Bruce Momjian's book at
> http://www.postgresql.org/docs/aw_pgsql_book/ .
>
> Jacques Williams
>
>
> On Wed, Jul 12, 2000 at 04:34:04PM +0100, Gary Stainburn wrote:
> > Hi all,
> >
> > I just can't get my head round this one so I hope one of you can.
> >
> > I've got two tables, one holding phone calls, and another holding phone
> numbers.
> >
> > I want to do a select where if the number in the calls table exists
> > in the numbers table, the description is included otherwise the
> > description field is blank.
> >
> > Unfortunately, using the select I've tried, if the number is not on
> > the one of the tables, it's ignored.
> >
> > Calls table
> > cdate date
> > ctime time
> > cextn char(3)
> > cnumber x(12)
> >
> > Numbers table
> >
> > nnumber x(12)
> > ndesc x(30)
> >
> > Select I tried.
> >
> > select c.cdate, c.ctime, c.cextn, c.cnumber n.ndesc
> > from calls c, numbers n where c.cnumber = n.nnumber;
> >
> > -
> > Gary Stainburn.
> > Work: http://www.ringways.co.uk mailto:[EMAIL PROTECTED]
> > REVCOM: http://www.revcom.org.uk mailto:[EMAIL PROTECTED]
> > -
> > Murphy's Laws: (327) The minute before the engineer arrives, the printer
> starts working.
> > -
> >

Hi,
for multiple fields in the join condition, I have always succeeded with a
WHERE NOT EXISTS (SELECT ...)
subselect in the second part of the UNION.
Hoping that may help (although perhaps suboptimal).
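
Applied to the cust/orders example quoted above, that second half of the UNION would be (a sketch):

SELECT cust_number, ship_to, NULL AS item
  FROM cust c
 WHERE NOT EXISTS (
         SELECT 1
           FROM orders o
          WHERE o.cust_number = c.cust_number
            AND o.ship_to     = c.ship_to
       );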




[SQL] out-subject : thanks

2000-07-18 Thread Patrick Coulombe

hi,

i just "finish" a project using postgresql. i just want to thank's everyone
from the mailing list who help me. in particular, bruce for your book (i
wasn't a thing about database and sql 2 months ago). tom for your quick
answer and eric l.

http://www.mediacces.com

in french...
go to - medias - visiteurs

thank you again
patrick
---
the power of internet...




[SQL] Inconsistent sql result

2007-10-16 Thread Patrick De Zlio
Hi listers,

As a PG administrator, I'm trying to read technical data from pg_class table
to monitor tables and indexes space.

We are running a quite big Postgres platform, with multiple databases,
multiple schemas in each database, and hundreds of tables.

When I run the attached Python script, I get 1809 rows as the result. When I
run the same SELECT query (taken from the script) from phpPgAdmin or pgAdmin
III, I get 2010 rows.

When I try to focus on a specific table by including WHERE relname = 'tablename'
in both parts of the query, I also get different numbers of rows, so I can't
get the full size of all indexes attached to a table.

Does anyone have a clue why the same query, on the same database, gives
different results depending on whether it is included in a Python script or
run from a console?

Many Thanks
Patrick

#!/usr/bin/python2.4
#

import sys
import pgdb

DEBUG = True
db = None

#
# Database access
#
def opendb():
    global db
    # connect to database
    port = '5432'
    username = 'xxx'
    dbname = 'xxx'
    host = 'xx.xx.xx.xx'
    password = 'xxx'

    try:
        db = pgdb.connect(database=dbname, host=host, user=username,
                          password=password)
    except Exception, detail:
        if DEBUG:
            print 'Error occurred while connecting to database: %s' % detail
        sys.exit(0)

#
# Close Database
#
def closedb():
    global db
    # Commit all changes before closing
    db.commit()
    db.close()


if __name__ == '__main__':

    #
    # Main
    #
    opendb()
    query = ("SELECT relname, relnamespace, relkind, relfilenode, relpages, "
             "reltoastrelid, relname AS idx_table_name FROM pg_class "
             "UNION "
             "SELECT pg_c_i.relname, pg_c_i.relnamespace, pg_c_i.relkind, "
             "pg_c_i.relfilenode, pg_c_i.relpages, pg_c_i.reltoastrelid, "
             "pg_c_i_o.relname AS idx_table_name "
             "FROM pg_class pg_c_i, pg_index pg_i, pg_class pg_c_i_o "
             "WHERE pg_c_i.relfilenode = pg_i.indexrelid "
             "AND pg_i.indrelid = pg_c_i_o.relfilenode")
    cur = db.cursor()
    cur.execute(query)
    tables_details = cur.fetchall()
    nb_tables = len(tables_details)
    for table in tables_details:
        print table
    print "Tables count=", nb_tables

    closedb()


Re: [SQL] Please help me to solve these SQL statements

2003-11-05 Thread Patrick JACQUOT (DSI NOISIEL)


-Original Message-
From: Freshman [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 30, 2003 13:38
To: [EMAIL PROTECTED]
Subject: [SQL] Please help me to solve these SQL statements


There are three tables in the database: suppliers, projects, and
shipments.
suppliers contains supplier ID, name, etc.
projects contains project name, project ID (J1 to J7), etc.
shipments contains supplier ID and project ID.

How can I query to find the suppliers that supply all the project IDs?

I would suggest:
SELECT * FROM suppliers
WHERE NOT EXISTS (
    SELECT * FROM projects
    WHERE NOT EXISTS (
        SELECT * FROM shipments
        WHERE shipments.suppliers_id = suppliers.suppliers_id
        AND   shipments.project_id   = projects.project_id
    )
);
This way you select every supplier who didn't miss any project.
Is that what you want to do?
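
An alternative formulation of the same relational division, in case the nested NOT EXISTS is hard to read (a sketch; column names follow the query above):

SELECT s.suppliers_id
  FROM shipments s
 GROUP BY s.suppliers_id
HAVING count(DISTINCT s.project_id) = ( SELECT count(*) FROM projects );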
