Re: [SQL] SQL query help!

2002-11-27 Thread Luis Sousa
Tell me what you tried with LIMIT and GROUP BY.
Also, where you use IN, why not use EXISTS instead? It runs much faster!
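
For example, the IN query quoted below could be rewritten with EXISTS along
these lines (just a sketch against the table definitions further down; the
literal 42 stands in for catID):

SELECT e.*
FROM entry e
WHERE e.isapproved = 'y'
  AND EXISTS (SELECT 1
                FROM subcategory s
               WHERE s.id = e.subcategoryid
                 AND s.categoryid = 42)  -- 42 stands in for catID
ORDER BY e.subcategoryid DESC;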

Regards,
Luis Sousa

Arcadius A. wrote:


Hello!

I hope that someone here could help.

I'm using PostgreSQL 7.1.3

I have 3 tables in my DB: the tables are defined in the following way:


CREATE TABLE category(
id SERIAL NOT NULL PRIMARY KEY
-- etc.
)
;

CREATE TABLE subcategory(
id SERIAL NOT NULL PRIMARY KEY,
categoryid int CONSTRAINT subcategory__ref_category
REFERENCES category (id)
-- etc.
)
;

CREATE TABLE entry(
entryid SERIAL NOT NULL PRIMARY KEY,
isapproved CHAR(1) NOT NULL DEFAULT 'n',
subcategoryid int CONSTRAINT entry__ref_subcategory
REFERENCES subcategory (id)
-- etc.
)
;


I have the following SQL query :

"SELECT * FROM entry where isapproved='y'  AND  subcategoryid IN (SELECT id
FROM subcategory WHERE
categoryid='"+catID+"') ORDER BY subcategoryid DESC";


For a given categoryid (catID), the query returns all entries in the
"entry" table whose subcategoryid is among those returned by the inner
subquery.

But I want to return only a limited number of entries for each
subcategory, say at most 5 entries per subcategory (for instance, if the
inner subquery returns 3 subcategories, I would get at most 15 entries in
total).

How can this be achieved?

I'm aware of PostgreSQL's "LIMIT" and "GROUP BY" clauses, but so far I'm
not able to put all this together...
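
For illustration, the kind of thing I have in mind is roughly this
correlated-count trick (just a sketch based on the tables above, with 42
standing in for catID; I don't know whether it is the right or fastest way):

SELECT e.*
FROM entry e
WHERE e.isapproved = 'y'
  AND e.subcategoryid IN (SELECT id FROM subcategory WHERE categoryid = 42)
  -- keep a row only if fewer than 5 approved rows of the same subcategory
  -- have a smaller entryid, i.e. at most 5 rows per subcategory survive
  AND (SELECT count(*)
         FROM entry e2
        WHERE e2.subcategoryid = e.subcategoryid
          AND e2.isapproved = 'y'
          AND e2.entryid < e.entryid) < 5
ORDER BY e.subcategoryid DESC;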

Thanks in advance.

Arcadius.







Re: [SQL] Question on SQL and pg_-tables

2002-11-27 Thread Tilo Schwarz
> Tilo Schwarz <[EMAIL PROTECTED]> writes:
> > - Is it possible to get not only the two tables, but also their
> > corresponding two columns involved in a RI-Constraint out of the pg_*
> > tables just with a SQL query?
>
> Not easily --- the column info is buried in the pg_trigger.tgargs entries
> for the RI triggers, which there is no good way to take apart in plain SQL.
>
> You might care to start experimenting with 7.3 instead; the new
> pg_constraint table makes this much easier.

Thank you, I'll check that out.
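
For the archives, in 7.3 something along these lines against pg_constraint
should list the foreign-key constraints together with their column numbers
(an untested sketch; to get column names you would still join pg_attribute
via conkey/confkey):

SELECT con.conname,
       rel.relname  AS local_table,
       frel.relname AS foreign_table,
       con.conkey   AS local_column_numbers,
       con.confkey  AS foreign_column_numbers
FROM pg_constraint con
JOIN pg_class rel  ON rel.oid  = con.conrelid
JOIN pg_class frel ON frel.oid = con.confrelid
WHERE con.contype = 'f';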

Regards,

Tilo




[SQL] Function and insert

2002-11-27 Thread Laurent Patureau

Hi,

I want to write a function that inserts a row into a table, like:

CREATE FUNCTION ajout_secu(INT4) RETURNS OID
AS 'INSERT INTO test_2 VALUES($1);'
LANGUAGE 'SQL';

PG refuses to accept the OID return type, since the function's final statement is not a SELECT.
What can I do?

thanks for your help,

LP






[SQL] Big query problem

2002-11-27 Thread Tomas Berndtsson
I'm using 7.2.1, trying to run a query like this:

DELETE FROM table WHERE col1='something' AND col2 IN
('aasdoijhfoisdfsdoif','sdfsdfsdfsadfsdf', ... );

In the parentheses I have 6400 names, each about 20 characters long. I'm
using libpq from C. This did not work, and the result was very
unexpected.

My application has several threads, each opening its own connection to
the database. The above query was run in a transaction followed by a
COMMIT. There was no error from running the above query, but instead,
it seems that the query was never run at all. As a side effect, every
other connection to the database always got:

NOTICE:  current transaction is aborted, queries ignored until end of
transaction block

when trying to run a query. I thought that the transactions in
different connections didn't have anything to do with each other.


If I limited the number of names in the query to 3200, it worked as
expected.


Is there a limit in libpq of the length of a query? And if this is
exceeded, shouldn't PQexec() give an error?
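
As a workaround I am considering loading the names into a temporary table
first and deleting with a sub-select instead of the huge IN list, roughly
like this (untested sketch; table_name stands in for my real table):

CREATE TEMP TABLE del_names (name text);
-- fill it with the 6400 names, via COPY or a batch of INSERTs, e.g.
-- INSERT INTO del_names VALUES ('aasdoijhfoisdfsdoif');

DELETE FROM table_name
WHERE col1 = 'something'
  AND col2 IN (SELECT name FROM del_names);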


Greetings,

Tomas




Re: [SQL] Casting Money To Numeric

2002-11-27 Thread D'Arcy J.M. Cain
On November 26, 2002 02:19 pm, Thomas Good wrote:
> Having perused all the online docs I can find it appears there is no
> SQL solution for casting the dread money type to numeric.
> Is this true?
>
> select rent::numeric(9,2) from x;
> ERROR: Cannot cast type 'money' to 'numeric'

'Fraid so.  That's one of the reasons I finally had to abandon that type in
favour of numeric.  Be prepared, however, for SUM() to take longer on groups
of any significant size.  That's the one thing that was really nice about
money -- everything was integer arithmetic.

-- 
D'Arcy J.M. Cain|  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.




Re: [SQL] Function and insert

2002-11-27 Thread Richard Huxton
On Wednesday 27 Nov 2002 11:05 am, Laurent Patureau wrote:
> Hi,
>
> I want to write a function that inserts a row into a table, like:
>
> CREATE FUNCTION ajout_secu(INT4) RETURNS OID
>  AS 'INSERT INTO test_2 VALUES($1);'
>  LANGUAGE 'SQL';
>
> PG refuses to accept the OID return type, since the function's final statement is not a SELECT.
> What can I do?

You're not returning anything - try something like:

CREATE FUNCTION foo_ins(int4) RETURNS int4 AS 
'INSERT INTO foo VALUES($1); SELECT $1;' 
LANGUAGE 'SQL';
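
Which you would then call like this (assuming a table foo with a single int4 column):

SELECT foo_ins(42);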

-- 
  Richard Huxton




Re: [SQL] [GENERAL] FreeBSD, Linux: select, select count(*) performance

2002-11-27 Thread Achilleus Mantzios
On Wed, 27 Nov 2002, Tom Lane wrote:

> Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> > Linux q1
> > 
> > dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;
> > NOTICE:  QUERY PLAN:
>
> > Aggregate  (cost=20508.19..20508.19 rows=1 width=0) (actual
> > time=338.17..338.17
> > rows=1 loops=1)
> >   ->  Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=0) (actual
> > time=0.01..225.73 rows=108095 loops=1)
> > Total runtime: 338.25 msec
>
> > Linux q2
> > 
> > dynacom=# EXPLAIN ANALYZE SELECT * from noon;
> > NOTICE:  QUERY PLAN:
>
> > Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=1960) (actual
> > time=1.22..67909.31 rows=108095 loops=1)
> > Total runtime: 68005.96 msec
>
> You didn't say what was *in* the table, exactly ... but I'm betting
> there are a lot of toasted columns, and that the extra runtime
> represents the time to fetch (and perhaps decompress) the TOAST entries.

278 columns of various types, namely:

Table "noon"
 Column |  Type  | Modifiers
------------------------+------------------------+-----------
 v_code | character varying(4)   |
 log_no | bigint |
 report_date| date   |
 report_time| time without time zone |
 voyage_no  | integer|
 charterer  | character varying(12)  |
 port   | character varying(24)  |
 duration   | character varying(4)   |
 rotation   | character varying(9)   |
 me_do_cons | double precision   |
 reason | character varying(12)  |
 ancorage_date  | date   |
 ancorage_time  | time without time zone |
 exp_berth_date | date   |
 exp_berth_time | time without time zone |
 berth_date | date   |
 berth_time | time without time zone |
 exp_sail_date  | date   |
 exp_sail_time  | time without time zone |
 draft_fw   | double precision   |
 draft_aft  | double precision   |
 etc_date   | date   |
 etc_time   | time without time zone |
 completion_date| date   |
 completion_time| time without time zone |
 load_quantity  | double precision   |
 discharging_quantity   | double precision   |
 delivery_date  | date   |
 delivery_place | character varying(12)  |
 redelivery_date| date   |
 redelivery_time| time without time zone |
 redelivery_place   | character varying(12)  |
 rob_ifo| double precision   |
 rob_mdo| double precision   |
 log_ifo| double precision   |
 log_mdo| double precision   |
 rcv_ifo| double precision   |
 rcv_mdo| double precision   |
 rcv_me | double precision   |
 rcv_cyl| double precision   |
 rcv_gen| double precision   |
 rob_me | double precision   |
 rob_cyl| double precision   |
 rob_gen| double precision   |
 voyage_sub_no  | integer|
 voyage_activity| character varying(3)   |
 remarks| character varying(60)  |
 latitude   | character varying(6)   |
 longitude  | character varying(6)   |
 speed  | double precision   |
 wind_direction | character varying(1)   |
 rpm| double precision   |
 fuelconsumption| double precision   |
 me_bearing_oil_presure | double precision   |
 me_bearing_amber   | double precision   |
 ambere | character varying(8)   |
 remarks2   | character varying(12)  |
 steam_hours| double precision   |
 ifoconsboilerheat  | double precision   |
 ae_mdo_consumption | double precision   |
 cyl_me_exh_temp01  | double precision   |
 cyl_me_exh_temp02  | double precision   |
 cyl_me_exh_temp03  | double precision   |
 cyl_me_exh_temp04  | double precision   |
 cyl_me_exh_temp05  | double precision   |
 cyl_me_exh_temp06  | double precision   |
 cyl_me_exh_temp07  | double precision   |
 cyl_me_exh_temp08  | double precision   |
 cyl_me_exh_temp09  | double precision   |
 cyl_me_exh_temp10  | double precision   |
 cyl_me_exh_temp11  | double precision   |
 cyl_me_exh_temp12  | double precision   |
 cyl_me_exh_temp13  | double precision   |
 cyl_me_exh_temp14  | double precision   |
 gen1_ae_exh_temp01 | double precision   |
 gen1_ae_exh_temp02 | double precision 

Re: [SQL] [GENERAL] FreeBSD, Linux: select, select count(*) performance

2002-11-27 Thread Tom Lane
Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> On Wed, 27 Nov 2002, Tom Lane wrote:
>> You didn't say what was *in* the table, exactly ... but I'm betting
>> there are a lot of toasted columns, and that the extra runtime
>> represents the time to fetch (and perhaps decompress) the TOAST entries.

> 278 columns of various types.
> namely,
> [snip]

Hmm, no particularly wide columns there --- but 278 columns is a lot.
I think the extra time might just be the time involved in fetching all
those column values out of the table row?

If you're interested in pursuing it, I'd suggest rebuilding the backend
with profiling enabled so you can see exactly where the time goes.

regards, tom lane




Re: [SQL] join question

2002-11-27 Thread Nathan Young
OK, that works great, but I was told to avoid sub-selects where possible
for performance reasons.  Also, I spent so much mental energy trying to find
a join-only solution to either task that I would be very curious whether
anyone has one.

The setup:
>> I have a table with members and a table with payments.  Each payment is
>> related to a member by memberID and each payment has (among other things)
>> a year paid.

The problem:

>> I would like to be able to get a list of members who have not paid for a
>> given year.

Two possible solutions, both using sub-selects:

> select member.memberId, member.name from member where not exists (select
>  * from payment where payment.memberId=member.memberID and
>  payment.yearPaid=2002);
>
> select member.memberId, member.name from member left outer join
>  (select * from payment where yearPaid=2002) as a using (memberId) where
>  yearPaid is null;

In addition to my interest in finding a join that could do that, I'm curious
about a couple of other things.

My understanding is that EXISTS is optimized so that the first version would
be faster than the second.

"using (memberID)" would be the same as "on member.memberID = 
payment.memberID", right?



Thanks!

->Nathan




11/26/2002 8:11:53 AM, Stephan Szabo <[EMAIL PROTECTED]> wrote:

>
>On Fri, 22 Nov 2002, Nathan Young wrote:
>
>> Hi all.
>>
>> I have a table with members and a table with payments.  Each payment is
>> related to a member by memberID and each payment has (among other things)
>> a year paid.
>>
>> I can create a join to find a list of members who have paid for a given
>> year (2002 in this case):
>>
>> select member.memberID,member.name from member, payment where
>> payment.memberID = member.memberID and payment.yearPaid = 2002
>>
>> I would like to be able to get a list of members who have not paid for a
>> given year.
>
>Well, I believe either of these two will do that:
>
> select member.memberId, member.name from member where not exists (select
>  * from payment where payment.memberId=member.memberID and
>  payment.yearPaid=2002);
>
> select member.memberId, member.name from member left outer join
>  (select * from payment where yearPaid=2002) as a using (memberId) where
>  yearPaid is null;
>
>> I would also like to combine the two criteria, for example to generate a
>> list of members who have paid for 2002 but not 2003.
>
>I think these would do that:
>
>select member.memberID,member.name from member, payment where
> payment.memberID = member.memberID and payment.yearPaid = 2002
> and not exists (select * from payment where
> payment.memberId=member.memberId and yearPaid=2003);
>
>select member.memberId, member.name from member inner join (select
> * from payment where yearPaid=2002) as a using (memberId) left outer join
> (select * from payment where yearPaid=2003) as b using (memberId) where
> b.yearPaid is null;
>
>
>
>








Nathan Young
N. C. Young Design
(530)629-4176
http://ncyoung.com






Re: [SQL] Question on SQL and pg_-tables

2002-11-27 Thread Peter Childs
On Wednesday 27 November 2002 10:13, Tilo Schwarz wrote:
> > Tilo Schwarz <[EMAIL PROTECTED]> writes:
> > > - Is it possible to get not only the two tables, but also their
> > > corresponding two columns involved in a RI-Constraint out of the pg_*
> > > tables just with a SQL query?
> >
> > Not easily --- the column info is buried in the pg_trigger.tgargs entries
> > for the RI triggers, which there is no good way to take apart in plain
> > SQL.
> >
> > You might care to start experimenting with 7.3 instead; the new
> > pg_constraint table makes this much easier.
>
> Thank you, I'll check that out.
>
> Regards,
>
>   Tilo
>
I posted a similar question earlier in the week on General.  Since I had no
reply, I've come up with this pretty stupid SQL query (it's a view, so you can
store it in the database):

create view constraints as
select seven as triggername, eight as constraintname, nine as enabled,
ten as deferrable, eleven as initallydeferred, twelve as relname,
two as localtable, three as foreigntable, four as type, five as localfield,
substring(rest5,1,position('\\000'::bytea in rest5)-1) as foreignfield from (
select seven, eight, nine, ten, eleven, twelve,
one, two, three, four, substring(rest4,1,position('\\000'::bytea in rest4)-1)
as five, substring(rest4,position('\\000'::bytea in rest4)+1) as rest5 from (
select seven, eight, nine, ten, eleven, twelve,
one, two, three, substring(rest3,1,
position('\\000'::bytea in rest3)-1) as four,
substring(rest3,position('\\000'::bytea in rest3)+1) as rest4 from (
select seven, eight, nine, ten, eleven, twelve,
one, two, substring(rest2,1,position('\\000'::bytea in rest2)-1) as three,
substring(rest2,position('\\000'::bytea in rest2)+1) as rest3 from (
select seven, eight, nine, ten, eleven, twelve,
 one, substring(rest1,1,position('\\000'::bytea in rest1)-1) as two,
substring(rest1,position('\\000'::bytea in rest1)+1) as rest2 from (
select tgname as seven, tgconstrname as eight, tgenabled as nine,
tgdeferrable as ten, tginitdeferred as eleven, pg_class.relname as twelve ,
 substring(tgargs,1,position('\\000'::bytea in tgargs)-1) as one,
substring(tgargs,position('\\000'::bytea in tgargs)+1) as rest1
from pg_trigger, pg_class where tgisconstraint=true and  
pg_trigger.tgrelid=pg_class.oid) as a) as b) as c) as e) as f;

It's bad because I could not find a built-in split function.
Each constraint has two records, I think.  The table also has a count of the
number of arguments in tgargs, so if you were to define a split function the
query would be a lot simpler!
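
With the view in place, usage is then just something like:

SELECT constraintname, localtable, localfield, foreigntable, foreignfield
FROM constraints;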

I've also been looking at upgrading to 7.3, but as I am trying to get it
installed on a separate computer from our main database, I need the 7.2 client
tools to keep working against the old database (running on another computer).
They seem to run fine until you try the scripting languages, which go
off and find the wrong libraries (.so files). I managed to get round this by
changing ldconfig to point to the right place, but that broke 7.2.
It would seem that 7.3 and 7.2 are incompatible, and even doing a dump and
reload (as the documentation says you should) does not seem to mean that
SQL that worked in 7.2 will work in 7.3.
I am beginning to hate SQL. It has nothing in it for getting metadata (data
about data), so everybody uses non-standard methods, which means that using
standard libraries is a waste of time.
Perhaps we need a standard set of "views" to report the metadata; then to
get at the metadata in a different database all you would need to do is
reimplement the views. But that is rather a dirty solution to something
which is missing from the standard.

Peter Childs




[SQL] FreeBSD, Linux: select, select count(*) performance

2002-11-27 Thread Achilleus Mantzios

Hi,

I ran 2 queries on 2 similar boxes (one running Linux 2.4.7, Red Hat 7.1,
and the other running FreeBSD 4.7-RELEASE-p2).

Both boxes run PostgreSQL 7.2.3.

I get some performance results that are not obvious (at least to me).

I have one table named "noon" with 108095 rows.

The 2 queries are:
q1: SELECT count(*) from noon;
q2: SELECT * from noon;

Linux q1

dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;
NOTICE:  QUERY PLAN:

Aggregate  (cost=20508.19..20508.19 rows=1 width=0) (actual time=338.17..338.17 rows=1 loops=1)
  ->  Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=0) (actual time=0.01..225.73 rows=108095 loops=1)
Total runtime: 338.25 msec

Linux q2

dynacom=# EXPLAIN ANALYZE SELECT * from noon;
NOTICE:  QUERY PLAN:

Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=1960) (actual time=1.22..67909.31 rows=108095 loops=1)
Total runtime: 68005.96 msec

FreeBSD q1
==
dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;
NOTICE:  QUERY PLAN:

Aggregate  (cost=20508.19..20508.19 rows=1 width=0) (actual time=888.93..888.94 rows=1 loops=1)
  ->  Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=0) (actual time=0.02..501.09 rows=108095 loops=1)
Total runtime: 889.06 msec

FreeBSD q2
==
dynacom=# EXPLAIN ANALYZE SELECT * from noon;
NOTICE:  QUERY PLAN:

Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=1975) (actual time=1.08..53470.93 rows=108095 loops=1)
Total runtime: 53827.37 msec

The pgsql configuration on both systems is identical
(the FreeBSD system has less memory, but vmstat doesn't show
any paging activity, so I assume that is not an issue here).

The interesting part is that FreeBSD does better on select *,
whereas Linux seems to do much better on select count(*).

Paging and disk I/O activity on both systems is near 0.

When I run the select count(*) on Linux I notice a small
increase (15%) in context switches per second, whereas on FreeBSD
I notice a big increase in context switches (300%) and
a huge increase in system calls per second (from the normal
9-10 to 110,000).
(Linux vmstat gives no syscall info.)

The same results come out for every count(*) I try.
Is it just the reporting from EXPLAIN ANALYZE?

Does any hacker have some light to shed?

Thanx.

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-10-8981112
fax:+30-10-8981877
email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]






Re: [SQL] [GENERAL] FreeBSD, Linux: select, select count(*) performance

2002-11-27 Thread Tom Lane
Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> Linux q1
> 
> dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;
> NOTICE:  QUERY PLAN:

> Aggregate  (cost=20508.19..20508.19 rows=1 width=0) (actual
> time=338.17..338.17
> rows=1 loops=1)
>   ->  Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=0) (actual
> time=0.01..225.73 rows=108095 loops=1)
> Total runtime: 338.25 msec

> Linux q2
> 
> dynacom=# EXPLAIN ANALYZE SELECT * from noon;
> NOTICE:  QUERY PLAN:

> Seq Scan on noon  (cost=0.00..20237.95 rows=108095 width=1960) (actual
> time=1.22..67909.31 rows=108095 loops=1)
> Total runtime: 68005.96 msec

You didn't say what was *in* the table, exactly ... but I'm betting
there are a lot of toasted columns, and that the extra runtime
represents the time to fetch (and perhaps decompress) the TOAST entries.
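
If you want to check, something like this shows each column's storage mode
(a sketch; attstorage = 'p' means plain, anything else is TOAST-able):

SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'noon')
  AND attnum > 0
ORDER BY attnum;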

regards, tom lane
