Re: [SQL] how to inherits the references...

2002-07-12 Thread frederik nietzsche

OK, thanks for the (double: two mails ;) ) help, but
this way, when I insert a record into a child table,
the key must already be present in the "sigles" table,
otherwise it breaks the reference and nothing gets
inserted.
To use this solution I must create a set of functions
so that when I want to insert something into a child
table, it automatically inserts the sigle into the
"sigles" table BEFORE, and THEN inserts the values into
the child table.
If this is the only way I'm going to use it, but I'm
not really satisfied with it...
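
For reference, a minimal sketch of the layout described below in the quoted
advice. Table and column names ("sigles", "sigle", "parent", "child1") are
illustrative, not taken from the original schema:

CREATE TABLE sigles (
    sigle  text PRIMARY KEY    -- one row per key used anywhere in the hierarchy
);

CREATE TABLE parent (
    sigle  text REFERENCES sigles (sigle),
    info   text
);

CREATE TABLE child1 (
    extra  text
) INHERITS (parent);

-- the REFERENCES clause is not inherited, so each child has to repeat it:
ALTER TABLE child1 ADD CONSTRAINT child1_sigle_fkey
    FOREIGN KEY (sigle) REFERENCES sigles (sigle);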


ciao 
danilo

> Foreign keys don't inherit to child tables on either
> the fk or pk side.  Note also that the primary keys
> in the above will not guarantee that sigle is unique
> across the whole set, only across each table
> individually.
> 
> Pretty much the only workaround I know of is to make
> a table with the key columns and have each of the
> tables in the inheritance tree have its key columns
> reference that, and anything that wants to reference
> the inheritance tree references the other table
> instead.





Re: [SQL] [HACKERS] please help on query

2002-07-12 Thread Luis Alberto Amigo Navarro




> The cost is now only 1141741215.35 compared to 2777810917708.17
> before;  this is an improvement factor of more than 2000.  So what's
> your problem? ;-)
>
> Servus
>  Manfred
>
In fact the planner is estimating incredibly badly; the query took only 833
msecs and now runs perfectly.

I'm going to keep on asking about another query:

SELECT
 customer.name,
 customer.custkey,
 orders.orderkey,
 orders.orderdate,
 orders.totalprice,
 sum(lineitem.quantity)
FROM
 customer,
 orders,
 lineitem
WHERE
 exists(
  SELECT
   lineitem.orderkey
  FROM
   lineitem
  WHERE
   lineitem.orderkey=orders.orderkey
  GROUP BY
   lineitem.orderkey HAVING
   sum(lineitem.quantity)>300
  )
 AND customer.custkey=orders.custkey
 AND orders.orderkey=lineitem.orderkey
GROUP BY
 customer.name,
 customer.custkey,
 orders.orderkey,
 orders.orderdate,
 orders.totalprice

ORDER BY
 orders.totalprice DESC,
 orders.orderdate;

NOTICE:  QUERY PLAN:

Sort  (cost=26923941.97..26923941.97 rows=300061 width=66)
  ->  Aggregate  (cost=26851634.86..26896644.05 rows=300061 width=66)
        ->  Group  (cost=26851634.86..26889142.52 rows=3000612 width=66)
              ->  Sort  (cost=26851634.86..26851634.86 rows=3000612 width=66)
                    ->  Hash Join  (cost=26107574.81..26457309.10 rows=3000612 width=66)
                          ->  Seq Scan on lineitem  (cost=0.00..08.25 rows=6001225 width=8)
                          ->  Hash  (cost=26105699.81..26105699.81 rows=75 width=58)
                                ->  Hash Join  (cost=7431.00..26105699.81 rows=75 width=58)
                                      ->  Seq Scan on orders  (cost=0.00..26083268.81 rows=75 width=25)
                                            SubPlan
                                              ->  Aggregate  (cost=0.00..17.35 rows=1 width=8)
                                                    ->  Group  (cost=0.00..17.34 rows=5 width=8)
                                                          ->  Index Scan using lineitem_pkey on lineitem  (cost=0.00..17.33 rows=5 width=8)
                                      ->  Hash  (cost=7056.00..7056.00 rows=15 width=33)
                                            ->  Seq Scan on customer  (cost=0.00..7056.00 rows=15 width=33)

again:
orders 150 tuples
lineitem 600 tuples there are 1 to 7 lineitems per orderkey
Customer 15 tuples

select attname,n_distinct,correlation from pg_stats where
tablename='lineitem';
    attname    | n_distinct | correlation
---------------+------------+-------------
 orderkey  |  -0.199847 |   1
 partkey   | 196448 |   0.0223377
 suppkey   |   9658 | -0.00822751
 linenumber|  7 | 0.17274
 quantity  | 50 |   0.0150153
 extendedprice |  25651 | -0.00790245
 discount  | 11 |0.103761
 tax   |  9 |   0.0993771
 returnflag|  3 |0.391434
 linestatus|  2 |0.509791
 shipdate  |   2440 |   0.0072777
 commitdate|   2497 |  0.00698162
 receiptdate   |   2416 |  0.00726686
 shipinstruct  |  4 |0.241511
 shipmode  |  7 |0.138432
 comment   | 275488 |   0.0188006
(16 rows)

select attname,n_distinct,correlation from pg_stats where
tablename='orders';
    attname    | n_distinct | correlation
---------------+------------+-------------
 orderkey  | -1 |   -0.25
 custkey   |  76309 |  0.00590596
 orderstatus   |  3 |0.451991
 totalprice| -1 | -0.00768806
 orderdate |   2431 |  -0.0211354
 orderpriority |  5 |0.182489
 clerk |   1009 |  0.00546939
 shippriority  |  1 |   1
 comment   |  -0.750125 |  -0.0123887

Customer
  attname   | n_distinct | correlation
------------+------------+-------------
 custkey| -1 |   1
 name   | -1 |   1
 address| -1 | -0.00510274
 nationkey  | 25 |   0.0170533
 phone  | -1 |  -0.0227816
 acctbal|   -0.83444 | -0.00220958
 mktsegment |  5 |0.205013
 comment| -1 |   0.0327827

This query takes 12 minutes to run and returns about 50 customers.
lineitem.quantity takes values from 1 to 50, so requiring a sum over 300 per
orderkey is very restrictive.

Can someone help improve performance?
Again, thanks in advance.
Regards






Re: [SQL] [HACKERS] please help on query

2002-07-12 Thread Jakub Ouhrabka

hi,

avoid subselect: create a temp table and use join...

-- orderkeys whose line items total more than 300
CREATE TEMP TABLE tmp AS
   SELECT
lineitem.orderkey
   FROM
lineitem
   GROUP BY
lineitem.orderkey HAVING
sum(lineitem.quantity)>300;

CREATE INDEX tmp_idx ON tmp (orderkey);

 SELECT
  customer.name,
  customer.custkey,
  orders.orderkey,
  orders.orderdate,
  orders.totalprice,
  sum(lineitem.quantity)
 FROM
  customer,
  orders,
  lineitem,
  tmp
 WHERE
  orders.orderkey=tmp.orderkey
  AND customer.custkey=orders.custkey
  AND orders.orderkey=lineitem.orderkey
 GROUP BY
  customer.name,
  customer.custkey,
  orders.orderkey,
  orders.orderdate,
  orders.totalprice
 ORDER BY
  orders.totalprice DESC,
  orders.orderdate;


maybe the index is not necessary...
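
and if the planner's estimates for the temp table are still off, updating its
statistics first may help (a small, optional step; plain ANALYZE exists as of 7.2):

ANALYZE tmp;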

kuba


> I'm going to keep on asking about another query:
>
> SELECT
>  customer.name,
>  customer.custkey,
>  orders.orderkey,
>  orders.orderdate,
>  orders.totalprice,
>  sum(lineitem.quantity)
> FROM
>  customer,
>  orders,
>  lineitem
> WHERE
>  exists(
>   SELECT
>lineitem.orderkey
>   FROM
>lineitem
>   WHERE
>lineitem.orderkey=orders.orderkey
>   GROUP BY
>lineitem.orderkey HAVING
>sum(lineitem.quantity)>300
>   )
>  AND customer.custkey=orders.custkey
>  AND orders.orderkey=lineitem.orderkey
> GROUP BY
>  customer.name,
>  customer.custkey,
>  orders.orderkey,
>  orders.orderdate,
>  orders.totalprice
>
> ORDER BY
>  orders.totalprice DESC,
>  orders.orderdate;
>
> NOTICE:  QUERY PLAN:
>
> Sort  (cost=26923941.97..26923941.97 rows=300061 width=66)
>   ->  Aggregate  (cost=26851634.86..26896644.05 rows=300061 width=66)
> ->  Group  (cost=26851634.86..26889142.52 rows=3000612 width=66)
>   ->  Sort  (cost=26851634.86..26851634.86 rows=3000612
> width=66)
> ->  Hash Join  (cost=26107574.81..26457309.10
> rows=3000612 width=66)
>   ->  Seq Scan on lineitem  (cost=0.00..08.25
> rows=6001225 width=8)
>   ->  Hash  (cost=26105699.81..26105699.81
> rows=75 width=58)
> ->  Hash Join  (cost=7431.00..26105699.81
> rows=75 width=58)
>   ->  Seq Scan on orders
> (cost=0.00..26083268.81 rows=75 width=25)
> SubPlan
>   ->  Aggregate
> (cost=0.00..17.35 rows=1 width=8)
> ->  Group
> (cost=0.00..17.34 rows=5 width=8)
>   ->  Index Scan
> using lineitem_pkey on lineitem  (cost=0.00..17.33 rows=5 width=8)
>   ->  Hash  (cost=7056.00..7056.00
> rows=15 width=33)
> ->  Seq Scan on customer
> (cost=0.00..7056.00 rows=15 width=33)
>
> again:
> orders 150 tuples
> lineitem 600 tuples there are 1 to 7 lineitems per orderkey
> Customer 15 tuples
>
> select attname,n_distinct,correlation from pg_stats where
> tablename='lineitem';
> attname| n_distinct | correlation
> ---++-
>  orderkey  |  -0.199847 |   1
>  partkey   | 196448 |   0.0223377
>  suppkey   |   9658 | -0.00822751
>  linenumber|  7 | 0.17274
>  quantity  | 50 |   0.0150153
>  extendedprice |  25651 | -0.00790245
>  discount  | 11 |0.103761
>  tax   |  9 |   0.0993771
>  returnflag|  3 |0.391434
>  linestatus|  2 |0.509791
>  shipdate  |   2440 |   0.0072777
>  commitdate|   2497 |  0.00698162
>  receiptdate   |   2416 |  0.00726686
>  shipinstruct  |  4 |0.241511
>  shipmode  |  7 |0.138432
>  comment   | 275488 |   0.0188006
> (16 rows)
>
> select attname,n_distinct,correlation from pg_stats where
> tablename='orders';
> attname| n_distinct | correlation
> ---++-
>  orderkey  | -1 |   -0.25
>  custkey   |  76309 |  0.00590596
>  orderstatus   |  3 |0.451991
>  totalprice| -1 | -0.00768806
>  orderdate |   2431 |  -0.0211354
>  orderpriority |  5 |0.182489
>  clerk |   1009 |  0.00546939
>  shippriority  |  1 |   1
>  comment   |  -0.750125 |  -0.0123887
>
> Customer
>   attname   | n_distinct | correlation
> ++-
>  custkey| -1 |   1
>  name   | -1 |   1
>  address| -1 | -0.00510274
>  nationkey  | 25 |   0.0170533
>  phone  | -1 |  -0.0227816
>  acctbal|   -0.83444 | -0.00220958
>  mktsegment |  5 |0.205013
>  comment| -1 |   0.0327827
>
> This query takes 12 minutes to run and returns about 50 customers.
> l

[SQL] rules / triggers on insert. why after?

2002-07-12 Thread Ahti Legonkov

Hi,

I have the following things in my database:

CREATE SEQUENCE "REO_ID_seq" INCREMENT 1 MINVALUE 1 MAXVALUE 2147483647 
CACHE 1;

CREATE TABLE reo (
   "REO_ID" int4 DEFAULT nextval('"REO_ID_seq"'::text) NOT NULL,
   "TYPE" varchar(64) NOT NULL,
   CONSTRAINT "REO_ID_reo_ukey" UNIQUE ("REO_ID"),
   CONSTRAINT reo_pkey PRIMARY KEY ("REO_ID")
);

CREATE TABLE lreo (
   "CITY" varchar(64),
   "STREET" varchar(64),
   "PRICE" int4,
   "REO_ID" int4 DEFAULT currval('"REO_ID_seq"'::text),
   CONSTRAINT "REO_ID_land_reo_ukey" UNIQUE ("REO_ID"),
   CONSTRAINT "fk_LREO_REO_ID" FOREIGN KEY ("REO_ID") REFERENCES reo 
("REO_ID") ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE 
INITIALLY IMMEDIATE
);

-- either this:
-- the rule won't work in 7.2.x, because it is done AFTER the Insert has
-- executed :(
CREATE RULE "rule_lreo_INSERT" AS ON INSERT
 TO lreo
 DO INSTEAD INSERT INTO reo("TYPE") values ('lreo');
-- or this:
-- but why does not this work ???
CREATE FUNCTION insert_lreo() Returns opaque As '
Begin
INSERT INTO reo ("TYPE") VALUES (''lreo'');
Return Null;
End;
' language 'plpgsql';


CREATE TRIGGER on_insert_lreo
BEFORE INSERT ON lreo FOR EACH ROW
EXECUTE PROCEDURE insert_lreo();
-- end

On postgres 7.1.3 it was OK to do this:

INSERT INTO "lreo" ("STREET", "PRICE", "CITY")
VALUES ('street', 1234, 'the city');

but on postgres 7.2 and 7.2.1 I get an error that "REO_ID_seq.currval is 
not yet defined for this session." The trigger should execute *before* 
the insert, right?

Does anyone know why since postgres 7.2 the rules are executed *after* 
the insert?

Hope you can help me:)

-- 
Ahti Legonkov





Re: [SQL] [HACKERS] please help on query

2002-07-12 Thread Luis Alberto Amigo Navarro

I've tried
SELECT
 supplier.name,
 supplier.address
FROM
 supplier,
 nation,
 lineitem
WHERE
 EXISTS(
  SELECT
   partsupp.suppkey
  FROM
   partsupp,lineitem
  WHERE
   lineitem.partkey=partsupp.partkey
   AND lineitem.suppkey=partsupp.partkey
   AND lineitem.shipdate>=('1994-01-01')::DATE
   AND lineitem.shipdate<(('1994-01-01')::DATE+('1 year')::INTERVAL)::DATE
   AND EXISTS(
SELECT
 part.partkey
FROM
 part
WHERE
 part.name like 'forest%'
 )
  GROUP BY partsupp.partkey,partsupp.suppkey
  HAVING min(availqty)>(0.5*(sum(lineitem.quantity)::FLOAT))
  )
 AND supplier.nationkey=nation.nationkey
 AND nation.name='CANADA'
ORDER BY
 supplier.name;

as you said, and something is wrong:
Sort  (cost=1141741215.35..1141741215.35 rows=240049 width=81)
  InitPlan
    ->  Aggregate  (cost=0.00..921773.85 rows=48 width=24)
          InitPlan
            ->  Seq Scan on part  (cost=0.00..8561.00 rows=1 width=4)
          ->  Group  (cost=0.00..921771.44 rows=481 width=24)
                ->  Result  (cost=0.00..921769.04 rows=481 width=24)
                      ->  Merge Join  (cost=0.00..921769.04 rows=481 width=24)
                            ->  Index Scan using partsupp_pkey on partsupp  (cost=0.00..98522.75 rows=80 width=12)
                            ->  Index Scan using lsupp_index on lineitem  (cost=0.00..821239.91 rows=145 width=12)
  ->  Result  (cost=1.31..112888690.31 rows=240049 width=81)
        ->  Nested Loop  (cost=1.31..112888690.31 rows=240049 width=81)
              ->  Hash Join  (cost=1.31..490.31 rows=400 width=81)
                    ->  Seq Scan on supplier  (cost=0.00..434.00 rows=1 width=77)
                    ->  Hash  (cost=1.31..1.31 rows=1 width=4)
                          ->  Seq Scan on nation  (cost=0.00..1.31 rows=1 width=4)
              ->  Seq Scan on lineitem  (cost=0.00..08.25 rows=6001225 width=0)

Where might my mistake be?
Thanks and regards
- Original Message -
From: "Manfred Koizar" <[EMAIL PROTECTED]>
To: "Luis Alberto Amigo Navarro" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, July 11, 2002 6:47 PM
Subject: Re: [HACKERS] please help on query


> [moving to pgsql-sql]
> On Thu, 11 Jul 2002 17:22:14 +0200, "Luis Alberto Amigo Navarro"
> <[EMAIL PROTECTED]> wrote:
> >I can't improve performance on this query:
> >
> >SELECT
> > supplier.name,
> > supplier.address
> >FROM
> > supplier,
> > nation
> >WHERE
> > supplier.suppkey IN(
> >  SELECT
> >   partsupp.suppkey
> >  FROM
> >   partsupp
> >  WHERE
> >   partsupp.partkey IN(
> >SELECT
> > part.partkey
> >FROM
> > part
> >WHERE
> > part.name like 'forest%'
> > )
> >   AND partsupp.availqty>(
> >SELECT
> > 0.5*(sum(lineitem.quantity)::FLOAT)
> >FROM
> > lineitem
> >WHERE
> > lineitem.partkey=partsupp.partkey
> > AND lineitem.suppkey=partsupp.partkey
> ^^^
> suppkey ???
> > AND lineitem.shipdate>=('1994-01-01')::DATE
> > AND lineitem.shipdate<(('1994-01-01')::DATE+('1
year')::INTERVAL)::DATE
> > )
> >  )
> > AND supplier.nationkey=nation.nationkey
> > AND nation.name='CANADA'
> >ORDER BY
> > supplier.name;
>







Re: [SQL] list of tables ?

2002-07-12 Thread Christoph Haller

> 
> can anyone point me in the right direction ?
> 
> i need to list all the tables in a database.
> 

Steve, 

Your request reminds me of a similar problem I had. 
Try the following: 

CREATE VIEW sesql_usertables AS
SELECT
UPPER(u.usename) AS tbl_owner, UPPER(c.relname) AS tbl_name,
UPPER(a.attname) AS col_name, a.atttypid AS col_type,
INT4LARGER(a.attlen, a.atttypmod - 4) AS col_length,
CASE WHEN a.attnotnull=TRUE THEN 0 ELSE 1 END AS col_null, a.attnum AS col_seq,
 CASE WHEN EXISTS(SELECT adsrc FROM pg_attrdef d
 WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum) THEN
 1
 ELSE
 0
 END AS COL_DEFAULT
FROM pg_attribute a,
 pg_class c LEFT JOIN pg_user u ON (u.usesysid = c.relowner)
WHERE c.oid = a.attrelid AND NOT (c.relname ~* 'pg_') AND
  c.relkind = 'r' AND a.attnum > 0 ;

SELECT * FROM sesql_usertables ORDER BY tbl_owner, tbl_name, col_seq ; 

It should give you at least some ideas on how to retrieve information
about all the tables in a database.
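
If all you need is the list of table names, a shorter query along these lines
should also do (user tables only; the regex skips the system catalogs):

SELECT c.relname AS table_name
FROM pg_class c
WHERE c.relkind = 'r'          -- ordinary tables only
  AND c.relname !~ '^pg_'
ORDER BY c.relname;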

Regards, Christoph 




Re: [SQL] rules / triggers on insert. why after?

2002-07-12 Thread Jan Wieck

Ahti Legonkov wrote:

> Does anyone know why since postgres 7.2 the rules are executed *after*
> the insert?

Because people were still complaining that they were changed to execute
*before* in v6.4.


Jan

-- 

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #




Re: [SQL] how to inherits the references...

2002-07-12 Thread Stephan Szabo


On Fri, 12 Jul 2002, frederik nietzsche wrote:

> OK, thanks for the (double: two mails ;) ) help, but
> this way, when I insert a record into a child table,
> the key must already be present in the "sigles" table,
> otherwise it breaks the reference and nothing gets
> inserted.
> To use this solution I must create a set of functions
> so that when I want to insert something into a child
> table, it automatically inserts the sigle into the
> "sigles" table BEFORE, and THEN inserts the values
> into the child table.

A before trigger to insert into the key table would
probably work.  Another advantage to this sort of thing
is that you could probably actually guarantee uniqueness
of your key across the entire inheritance hierarchy
which you can't otherwise.
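
A minimal sketch of such a before trigger, assuming 7.2-era PL/pgSQL and the
same illustrative names used earlier ("sigles"/"sigle"/"child1", not the
original schema); one trigger per child table:

CREATE FUNCTION add_sigle_if_missing() RETURNS opaque AS '
BEGIN
    -- make sure the key exists in the shared key table before the child row
    PERFORM 1 FROM sigles WHERE sigle = NEW.sigle;
    IF NOT FOUND THEN
        INSERT INTO sigles (sigle) VALUES (NEW.sigle);
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER child1_add_sigle
    BEFORE INSERT ON child1 FOR EACH ROW
    EXECUTE PROCEDURE add_sigle_if_missing();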

> If this is the only way, I'm going to use it, but I'm
> not really satisfied by it...

Inheritance is only marginally useful right now.





Re: [SQL] SQL problem with aggregate functions.

2002-07-12 Thread Jean-Luc Lachance

What is wrong with:

select field_group,
  sum( case when f1 = 'D' then cnt else 0 end) as D_COUNT,
  sum( case when f1 = 'R' then cnt else 0 end) as R_COUNT,
  sum( case when f1 = 'X' then cnt else 0 end) as X_COUNT
from (select field_group, f1, count (*) as cnt
  from tab group by field_group, f1) as ss
group by field_group;

It should be faster because there is less CASE evaluation.


Loyd Goodbar wrote:
> 
> I would suggest something like
> 
> select sum(case when f1 = 'D' then 1 else 0 end) as D_COUNT,
> sum(case when f1 = 'R' then 1 else 0 end) as R_COUNT,
> sum(case when f1 = 'X' then 1 else 0 end) as X_COUNT
> from tab
> where f1 in ('D','R','X')
> 
> Not sure what the "field group" represents.
> 
> HTH,
> Loyd
> 
> On Thu, 11 Jul 2002 10:37:40 METDST, Christoph Haller <[EMAIL PROTECTED]> wrote:
> 
> >>
> >> I've got a table in which there is a field that can have one amongst 3
> >> possible values : D, R, X. Is it possible to get in one query the count of
> >> this different values.Please, note that I don't want to have a querry like
> >> this :
> >> "select count (*) from tab group by f1;", cause i want to get all the possible
> >> count values in one row (these data are already grouped on another field).
> >> To give a more accurate example, here is what I want to retrieve :
> >>
> >> Field group | count of D | count of R | count of X.
> >>
> >> Any clues ?
> >> --
> >What about something like
> >
> >
> >SELECT SUM(f1_d) AS count_d,
> >   SUM(f1_r) AS count_r,
> >   SUM(f1_x) AS count_x
> >FROM (
> > SELECT CASE WHEN f1 = 'D' THEN 1 ELSE 0 END AS f1_d,
> >CASE WHEN f1 = 'R' THEN 1 ELSE 0 END AS f1_r,
> >CASE WHEN f1 = 'X' THEN 1 ELSE 0 END AS f1_x
> > FROM tab ) AS foo ;
> >
> >Regards, Christoph
> >
> 
> --
> "Why, you can even hear yourself think." --Hobbes
> "This is making me nervous. Let's go in." --Calvin
> [EMAIL PROTECTED]  ICQ#504581  http://www.blackrobes.net/




Re: [SQL] Please, HELP! Why is the query plan so wrong???

2002-07-12 Thread Dmitry Tkach

Jie Liang wrote:

>I believe that SQL will use the index on the join 'key' when you join the
>tables, if there is one; in your query (a,c) is the join key but d is not.
>
>
>Jie Liang
>

Not really... I tried this:

explain select * from fb join fbr on (fb.a=fbr.a and
fb.c=fbr.c and fbr.d is null) where fb.b=0

It results in the same query plan (seq scan on fbr).

Dima




>
>
>
>-Original Message-
>From: Dmitry Tkach [mailto:[EMAIL PROTECTED]]
>Sent: Thursday, July 11, 2002 3:51 PM
>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: [SQL] Please, HELP! Why is the query plan so wrong???
>
>
>Hi, everybody!
>
>Here is the problem:
>
>test=#   create table fb (a int, b int, c datetime);
>CREATE
>test=#  create table fbr (a int, c datetime, d int);
>CREATE
>test=# create unique index fb_idx on fb(b);
>CREATE
>test=#  create index fbr_idx on fbr(a,c) where d is null;
>CREATE
>test=# set enable_seqscan=off;
>
>SET VARIABLE
>rapidb=# explain select * from fb, fbr where fb.b=0 and fb.a=fbr.a and
>fb.c=fbr.c and fbr.d is null;
>NOTICE:  QUERY PLAN:
>
>Hash Join  (cost=10005.82..11015.87 rows=1 width=32)
>   ->  Seq Scan on fbr  (cost=1.00..11010.00 rows=5 width=16)
>   ->  Hash  (cost=5.81..5.81 rows=1 width=16)
> ->  Index Scan using fb_idx on fb  (cost=0.00..5.81 rows=1
>width=16)
>
>Could someone PLEASE explain to me, why doesn't it want to use the index on
>fbr?
>
>If I get rid of the join, then it works:
>
>test=#  explain select * from fbr where a=1 and c=now() and d is null;
>NOTICE:  QUERY PLAN:
>
>Index Scan using fbr_idx on fbr  (cost=0.00..5.82 rows=1 width=16)
>
>What's the catch???
>
>Any help would be greatly appreciated!
>
>Thanks!
>
>Dima
>
>
>
>
>







Re: [SQL] rules / triggers on insert. why after?

2002-07-12 Thread Tom Lane

Ahti Legonkov <[EMAIL PROTECTED]> writes:
> CREATE TABLE reo (
>"REO_ID" int4 DEFAULT nextval('"REO_ID_seq"'::text) NOT NULL,
>"TYPE" varchar(64) NOT NULL,
>CONSTRAINT "REO_ID_reo_ukey" UNIQUE ("REO_ID"),
>CONSTRAINT reo_pkey PRIMARY KEY ("REO_ID")
> );

> CREATE TABLE lreo (
>"CITY" varchar(64),
>"STREET" varchar(64),
>"PRICE" int4,
>"REO_ID" int4 DEFAULT currval('"REO_ID_seq"'::text),
>CONSTRAINT "REO_ID_land_reo_ukey" UNIQUE ("REO_ID"),
>CONSTRAINT "fk_LREO_REO_ID" FOREIGN KEY ("REO_ID") REFERENCES reo 
> ("REO_ID") ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE 
> INITIALLY IMMEDIATE
> );

That default for reo_id is too fragile to consider using in any case.
You are making way too many assumptions about when defaults will be
evaluated relative to other actions (such as rule/trigger firings).

I'd suggest that you have no default for column reo_id, and instead
have a BEFORE INSERT trigger for lreo that (a) inserts a row into reo
and (b) sets new.reo_id to the inserted key (which you could get from
currval at that point).
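
A rough sketch of that suggestion in 7.2-era PL/pgSQL (untested; drop the
DEFAULT currval(...) on lreo."REO_ID" as described above):

CREATE FUNCTION insert_lreo() RETURNS opaque AS '
BEGIN
    -- create the parent row first ...
    INSERT INTO reo ("TYPE") VALUES (''lreo'');
    -- ... then copy its generated key into the child row being inserted
    NEW."REO_ID" := currval(''"REO_ID_seq"'');
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER on_insert_lreo
    BEFORE INSERT ON lreo FOR EACH ROW
    EXECUTE PROCEDURE insert_lreo();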

regards, tom lane




Re: [SQL] [HACKERS] please help on query

2002-07-12 Thread Luis Alberto Amigo Navarro

Lineitem is being modified at run time, so creating a temp table doesn't
solve my problem.
The time to create this table is about the same as performing the subselect (or
so I think). It could be done by creating a new table and a new trigger, but
there are already triggers to calculate
lineitem.extendedprice=part.retailprice*lineitem.quantity*(1+taxes)*(1-discount)
and to calculate orderstatus in accordance with linestatus and to calculate
orders.totalprice as sum(extendedprice) where
lineitem.orderkey=new.orderkey. A new trigger to record the orderkey when
sum(quantity) for orderkey=new.orderkey passes the threshold might be excessive.
Any other idea?
Thanks and regards

- Original Message -
From: "Jakub Ouhrabka" <[EMAIL PROTECTED]>
To: "Luis Alberto Amigo Navarro" <[EMAIL PROTECTED]>
Cc: "Manfred Koizar" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, July 12, 2002 1:50 PM
Subject: Re: [SQL] [HACKERS] please help on query


> hi,
>
> avoid subselect: create a temp table and use join...
>
> CREATE TEMP TABLE tmp AS
>SELECT
> lineitem.orderkey
>FROM
> lineitem
>WHERE
> lineitem.orderkey=orders.orderkey
>GROUP BY
> lineitem.orderkey HAVING
> sum(lineitem.quantity)>300;
>
> CREATE INDEX tmp_idx ON tmp (orderkey);
>
>  SELECT
>   customer.name,
>   customer.custkey,
>   orders.orderkey,
>   orders.orderdate,
>   orders.totalprice,
>   sum(lineitem.quantity)
>  FROM
>   customer,
>   orders,
>   lineitem,
>   tmp
>  WHERE
>   orders.orderkey=tmp.orderkey
>   AND customer.custkey=orders.custkey
>   AND orders.orderkey=lineitem.orderkey
>  GROUP BY
>   customer.name,
>   customer.custkey,
>   orders.orderkey,
>   orders.orderdate,
>   orders.totalprice
>  ORDER BY
>   orders.totalprice DESC,
>   orders.orderdate;
>
>
> may be the index is not necessary...
>
> kuba
>
>
> > I'm going to keep on asking about another query:
> >
> > SELECT
> >  customer.name,
> >  customer.custkey,
> >  orders.orderkey,
> >  orders.orderdate,
> >  orders.totalprice,
> >  sum(lineitem.quantity)
> > FROM
> >  customer,
> >  orders,
> >  lineitem
> > WHERE
> >  exists(
> >   SELECT
> >lineitem.orderkey
> >   FROM
> >lineitem
> >   WHERE
> >lineitem.orderkey=orders.orderkey
> >   GROUP BY
> >lineitem.orderkey HAVING
> >sum(lineitem.quantity)>300
> >   )
> >  AND customer.custkey=orders.custkey
> >  AND orders.orderkey=lineitem.orderkey
> > GROUP BY
> >  customer.name,
> >  customer.custkey,
> >  orders.orderkey,
> >  orders.orderdate,
> >  orders.totalprice
> >
> > ORDER BY
> >  orders.totalprice DESC,
> >  orders.orderdate;
> >
> > NOTICE:  QUERY PLAN:
> >
> > Sort  (cost=26923941.97..26923941.97 rows=300061 width=66)
> >   ->  Aggregate  (cost=26851634.86..26896644.05 rows=300061 width=66)
> > ->  Group  (cost=26851634.86..26889142.52 rows=3000612 width=66)
> >   ->  Sort  (cost=26851634.86..26851634.86 rows=3000612
> > width=66)
> > ->  Hash Join  (cost=26107574.81..26457309.10
> > rows=3000612 width=66)
> >   ->  Seq Scan on lineitem
(cost=0.00..08.25
> > rows=6001225 width=8)
> >   ->  Hash  (cost=26105699.81..26105699.81
> > rows=75 width=58)
> > ->  Hash Join
(cost=7431.00..26105699.81
> > rows=75 width=58)
> >   ->  Seq Scan on orders
> > (cost=0.00..26083268.81 rows=75 width=25)
> > SubPlan
> >   ->  Aggregate
> > (cost=0.00..17.35 rows=1 width=8)
> > ->  Group
> > (cost=0.00..17.34 rows=5 width=8)
> >   ->  Index Scan
> > using lineitem_pkey on lineitem  (cost=0.00..17.33 rows=5 width=8)
> >   ->  Hash  (cost=7056.00..7056.00
> > rows=15 width=33)
> > ->  Seq Scan on customer
> > (cost=0.00..7056.00 rows=15 width=33)
> >
> > again:
> > orders 150 tuples
> > lineitem 600 tuples there are 1 to 7 lineitems per orderkey
> > Customer 15 tuples
> >
> > select attname,n_distinct,correlation from pg_stats where
> > tablename='lineitem';
> > attname| n_distinct | correlation
> > ---++-
> >  orderkey  |  -0.199847 |   1
> >  partkey   | 196448 |   0.0223377
> >  suppkey   |   9658 | -0.00822751
> >  linenumber|  7 | 0.17274
> >  quantity  | 50 |   0.0150153
> >  extendedprice |  25651 | -0.00790245
> >  discount  | 11 |0.103761
> >  tax   |  9 |   0.0993771
> >  returnflag|  3 |0.391434
> >  linestatus|  2 |0.509791
> >  shipdate  |   2440 |   0.0072777
> >  commitdate|   2497 |  0.00698162
> >  receiptdate   |   2416 |  0.00726686
> >  shipinstruct

Re: [SQL] Please, HELP! Why is the query plan so wrong???

2002-07-12 Thread Jie Liang

Please copy and paste the whole msg and your query!
Note: what I mean by 'join key' is the fields that link two tables.
I don't think fb.b=0 is a join key!



Jie Liang

-Original Message-
From: Dmitry Tkach [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 12, 2002 7:34 AM
To: Jie Liang
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: [SQL] Please, HELP! Why is the query plan so wrong???


Jie Liang wrote:

>I believe that SQL will use the index of join 'key' when you join the
tables
>if
>have any, in your query the (a,c) is the join key but d is not.
>
>
>Jie Liang
>

Not really... I tried this:

explain select * from fb join fbr on (fb.a=fbr.a and
fb.c=fbr.c and fbr.d is null) where fb.b=0

It results in the same query plan (seq scan on fbr).

Dima




>
>
>
>-Original Message-
>From: Dmitry Tkach [mailto:[EMAIL PROTECTED]]
>Sent: Thursday, July 11, 2002 3:51 PM
>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: [SQL] Please, HELP! Why is the query plan so wrong???
>
>
>Hi, everybody!
>
>Here is the problem:
>
>test=#   create table fb (a int, b int, c datetime);
>CREATE
>test=#  create table fbr (a int, c datetime, d int);
>CREATE
>test=# create unique index fb_idx on fb(b);
>CREATE
>test=#  create index fbr_idx on fbr(a,c) where d is null;
>CREATE
>test=# set enable_seqscan=off;
>
>SET VARIABLE
>rapidb=# explain select * from fb, fbr where fb.b=0 and fb.a=fbr.a and
>fb.c=fbr.c and fbr.d is null;
>NOTICE:  QUERY PLAN:
>
>Hash Join  (cost=10005.82..11015.87 rows=1 width=32)
>   ->  Seq Scan on fbr  (cost=1.00..11010.00 rows=5 width=16)
>   ->  Hash  (cost=5.81..5.81 rows=1 width=16)
> ->  Index Scan using fb_idx on fb  (cost=0.00..5.81 rows=1
>width=16)
>
>Could someone PLEASE explain to me, why doesn't it want to use the index on
>fbr?
>
>If I get rid of the join, then it works:
>
>test=#  explain select * from fbr where a=1 and c=now() and d is null;
>NOTICE:  QUERY PLAN:
>
>Index Scan using fbr_idx on fbr  (cost=0.00..5.82 rows=1 width=16)
>
>What's the catch???
>
>Any help would be greatly appreciated!
>
>Thanks!
>
>Dima
>
>
>
>
>






[SQL] config postgresql.conf??

2002-07-12 Thread Jie Liang

Hi,

What parameter should I change in order to make the postmaster use as much
CPU as possible?
Maybe I should ask: how can I make equijoins on big tables faster?
I have several tables that contain more than 2.5 million records each, and I
need to equijoin those tables often.

Thanks!




Jie Liang




Re: [SQL] config postgresql.conf??

2002-07-12 Thread Josh Berkus

Jie,

> What parameter should I change in order to make the postmaster use as
> much CPU as possible?
> Maybe I should ask: how can I make equijoins on big tables faster?
> I have several tables that contain more than 2.5 million records each,
> and I need to equijoin those tables often.

There are several good articles on PostgreSQL performance tuning up at
http://techdocs.postgresql.org/

check 'em out.
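
As a starting point, these are the postgresql.conf knobs most often raised for
large joins on a 7.2-era server; the values below are illustrative placeholders
to be sized against your RAM, not recommendations (shared_buffers takes effect
only after a postmaster restart):

# postgresql.conf excerpt -- illustrative values only
shared_buffers = 8192          # 8192 x 8KB pages = 64MB of shared buffer cache
sort_mem = 16384               # KB per sort/hash operation
effective_cache_size = 65536   # planner hint, in 8KB pages (~512MB of OS cache)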

-Josh




It is a bug in pred_test()! (Was: [SQL] Please, HELP! Why is the query plan so wrong???)

2002-07-12 Thread Dmitry Tkach

Jie Liang wrote:

>please copy and paste the whole msg and your query!
>Note:what I mean ' join key' is the fields that link two tables.
>
The message (query plan) is exactly the same (you can see it at the
bottom of this message).

>
>I don't think fb.b=0 is a join key!
>
Of course not. But it IS using the index on fb. It is fbr that is
the problem (and the join key is explicitly specified in the join ... on
(...) clause).

Actually, it looks like the problem is caused by the predicate on the index:
if instead of

create index fbr_idx on fbr(a,c) where d is null;


I do just:

create index fbr_idx on fbr(a,c,d);


Then this index is used by the query...

It now looks like a bug in the query planner to me - it seems that it 
just doesn't consider indices with predicates for join plans...
I was looking at the source code, and it looks like pred_test() is 
responsible for that.
After debugging it a little, I found out that if I list the tables in
the query in the opposite order:

explain select * from fbr, fb where fb.b=0 and fb.a=fbr.a and
fb.c=fbr.c and fbr.d is null;

Then it works (both indexes are used!).

Here is what is causing it:
 create_index_paths() calls pred_test () to check whether it is OK to 
use a partial index or not.
pred_test () evaluates the index's predicate list against the 
restriction clauses of the query.
In my case, the predicate list "d is NULL", so the goal is to find the 
equivalent clause in the query.
pred_test () does that by iterating through the query's clauses and 
comparing them to the predicate with the equal() function.
equal() calls _equalNullTest(), which in turn calls _equalVar(), which
looks at the varno parameter in the NullTest's argument. Now the value
of varno in the predicate is (of course) 1 (I believe it always is,
because we don't have multi-table indexes), but the varno in the
clause is 2 (the index of the table in the join list) if fbr is listed
second, and 1 if it is listed first - so in the former case it does
not work, and in the latter it does.

Knowing all this doesn't help much, unfortunately, because if you need
to join 2 (or more) tables that have indexes with predicates, then
whatever order you put them in would not help (pred_test() will
only succeed for the first table in the join list) :-(

Perhaps, somebody, familiar with this code could come up with a patch 
for this problem?

This would be really great!

Dima



>
>
>Jie Liang
>
>-Original Message-
>From: Dmitry Tkach [mailto:[EMAIL PROTECTED]]
>Sent: Friday, July 12, 2002 7:34 AM
>To: Jie Liang
>Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: Re: [SQL] Please, HELP! Why is the query plan so wrong???
>
>
>Jie Liang wrote:
>
>>I believe that SQL will use the index of join 'key' when you join the
>>
>tables
>
>>if
>>have any, in your query the (a,c) is the join key but d is not.
>>
>>
>>Jie Liang
>>
>
>Not really... I tried this:
>
>explain select * from fb join fbr on (fb.a=fbr.a and
>fb.c=fbr.c and fbr.d is null) where fb.b=0
>
>It results in the same query plan (seq scan on fbr).
>
>Dima
>
>
>
>
>>
>>
>>-Original Message-
>>From: Dmitry Tkach [mailto:[EMAIL PROTECTED]]
>>Sent: Thursday, July 11, 2002 3:51 PM
>>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>>Subject: [SQL] Please, HELP! Why is the query plan so wrong???
>>
>>
>>Hi, everybody!
>>
>>Here is the problem:
>>
>>test=#   create table fb (a int, b int, c datetime);
>>CREATE
>>test=#  create table fbr (a int, c datetime, d int);
>>CREATE
>>test=# create unique index fb_idx on fb(b);
>>CREATE
>>test=#  create index fbr_idx on fbr(a,c) where d is null;
>>CREATE
>>test=# set enable_seqscan=off;
>>
>>SET VARIABLE
>>rapidb=# explain select * from fb, fbr where fb.b=0 and fb.a=fbr.a and
>>fb.c=fbr.c and fbr.d is null;
>>NOTICE:  QUERY PLAN:
>>
>>Hash Join  (cost=10005.82..11015.87 rows=1 width=32)
>>  ->  Seq Scan on fbr  (cost=1.00..11010.00 rows=5 width=16)
>>  ->  Hash  (cost=5.81..5.81 rows=1 width=16)
>>->  Index Scan using fb_idx on fb  (cost=0.00..5.81 rows=1
>>width=16)
>>
>>Could someone PLEASE explain to me, why doesn't it want to use the index on
>>fbr?
>>
>>If I get rid of the join, then it works:
>>
>>test=#  explain select * from fbr where a=1 and c=now() and d is null;
>>NOTICE:  QUERY PLAN:
>>
>>Index Scan using fbr_idx on fbr  (cost=0.00..5.82 rows=1 width=16)
>>
>>What's the catch???
>>
>>Any help would be greatly appreciated!
>>
>>Thanks!
>>
>>Dima
>>
>>
>>
>>
>>
>





Re: [SQL] Query kill

2002-07-12 Thread eric soroos

On Fri, 12 Jul 2002 01:01:31 -0400 (EDT) in message 
<[EMAIL PROTECTED]>, Bruce Momjian <[EMAIL PROTECTED]> 
wrote:
> Rudi Starcevic wrote:
> > Hello,
> > 
> > If I write a query that is inefficient or in an eternal loop how
> > do I stop it without restarting the postmaster ?
> > 
> > I can see many postmaster processed appearing in the output of the 'ps' 
> > command.
> > Do I need to stop/kill them all or can I stop just the query I want ?
> 
> Just send a SIGINT to the process. That simulates a ^C, which works too
> from the client like psql.

Is there a way to deny permission for certain users to execute a query that exceeds 
some expected cost?

For example, I have a query builder from user input that could produce a query that 
ends up doing something that the query planner thinks will take 8M units of work. 
Generally, this is an unconstrained join between my biggest tables, a result that is 
neither fast nor useful.

If I could set a threshold of 1M units for the webapp user, I could trap this sort of 
thing before they cause quality of service issues. 

eric






Re: [SQL] Query kill

2002-07-12 Thread Jan Wieck

Bruce Momjian wrote:
> 
> Rudi Starcevic wrote:
> > Hello,
> >
> > If I write a query that is inefficient or in an eternal loop how
> > do I stop it without restarting the postmaster ?
> >
> > I can see many postmaster processed appearing in the output of the 'ps'
> > command.
> > Do I need to stop/kill them all or can I stop just the query I want ?
> 
> Just send a SIGINT to the process. That simulates a ^C, which works too
> from the client like psql.

Doesn't the client need a signal handler for that, and call
PQcancelRequest() in it?


Jan

-- 

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #




Re: [SQL] XML to Postgres conversion

2002-07-12 Thread Marc Spitzer

On Thu, Jul 11, 2002 at 09:23:39AM -0500, [EMAIL PROTECTED] wrote:
> Look at contrib/xml at
> http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/xml/.  I never
> used this, but it might be useful.
> 
> George Essig
> 
> > Hello.
> >
> > I am trying to figure out how to import xml documents into a postgres
> > database.  I am running PostgreSql 7.1.3 on Red Hat 7.2 at the moment.
> > I have several dynamic xml documents that I want imported into the
> > database on a regular basis.  From my research so far, I know that
> > there is middleware available to perform this, but am having some
> > difficulty in finding the actual applications.  I am trying to stay
> > with open source applications, if possible.  Can anyone give me any
> > suggestions or resources to pull from?
> >
> > Thanks,
> >
> > N. Hill

tDOM and nstcl might be what you need, plus a little coding.

marc

> 
> 




Re: [SQL] Query kill

2002-07-12 Thread Bruce Momjian

Jan Wieck wrote:
> Bruce Momjian wrote:
> > 
> > Rudi Starcevic wrote:
> > > Hello,
> > >
> > > If I write a query that is inefficient or in an eternal loop how
> > > do I stop it without restarting the postmaster ?
> > >
> > > I can see many postmaster processed appearing in the output of the 'ps'
> > > command.
> > > Do I need to stop/kill them all or can I stop just the query I want ?
> > 
> > Just send a SIGINT to the process. That simulates a ^C, which works too
> > from the client like psql.
> 
> Doesn't the client need a signal handler for that and call
> PQcancelRequest() in that?

Every backend has that signal handler defined, I think.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




Re: [SQL] Query kill

2002-07-12 Thread Jan Wieck

Bruce Momjian wrote:
> 
> Jan Wieck wrote:
> > Bruce Momjian wrote:
> > >
> > > Rudi Starcevic wrote:
> > > > Hello,
> > > >
> > > > If I write a query that is inefficient or in an eternal loop how
> > > > do I stop it without restarting the postmaster ?
> > > >
> > > > I can see many postmaster processed appearing in the output of the 'ps'
> > > > command.
> > > > Do I need to stop/kill them all or can I stop just the query I want ?
> > >
> > > Just send a SIGINT to the process. That simulates a ^C, which works too
> > > from the client like psql.
> >
> > Doesn't the client need a signal handler for that and call
> > PQcancelRequest() in that?
> 
> Every backend has that signal handler defined, I think.

What?

So you suggest that the client application, which actually wants
to cancel its query, sends a SIGINT signal to the backend it is
connected to?

Bruce! To do so, it would have to be running as the PostgreSQL UNIX user
on the database server!

What I was talking about is the 

PQcancelRequest(PGconn *conn)

function in libpq. This "client side" function opens another
connection to the postmaster, using the same host and port as
"conn" did. Then it sends a cancel request startup packet
containing this connection's backend pid and cancel key (only
known if the client talks version 2 or above of the protocol).

The spawned off backend process receiving the cancel request
startup packet on the "server side" checks the postmasters
process list for validity of the pid and cancel key combination
and sends the backend a signal. It is allowed to do so because it
is a child of the postmaster, as the backend it is signalling is,
both running under the same userid.

So yes, every backend has that signal handler. But not everyone
has a single user environment, where every process is allowed to
kill everybody else. The client by default does not have any
signal handler. So it could catch SIGALRM, do an alarm(10), do
PQexec() and alarm(0). If the signal handler gets called, it'll
call PQcancelRequest() causing PQexec() to receive an ERROR (the
message says something like "query canceled" or so).


Jan

-- 

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #




Re: [SQL] Query kill

2002-07-12 Thread Bruce Momjian


I assumed from the user's question that the admin just wanted to stop a
specific query of a specific backend.  Sending a SIGINT to the backend
will do that.  I wasn't talking about a client request or anything like that.

Look at the description of the question below.

---

Jan Wieck wrote:
> Bruce Momjian wrote:
> > 
> > Jan Wieck wrote:
> > > Bruce Momjian wrote:
> > > >
> > > > Rudi Starcevic wrote:
> > > > > Hello,
> > > > >
> > > > > If I write a query that is inefficient or in an eternal loop how
> > > > > do I stop it without restarting the postmaster ?
> > > > >
> > > > > I can see many postmaster processed appearing in the output of the 'ps'
> > > > > command.
> > > > > Do I need to stop/kill them all or can I stop just the query I want ?
> > > >
> > > > Just send a SIGINT to the process. That simulates a ^C, which works too
> > > > from the client like psql.
> > >
> > > Doesn't the client need a signal handler for that and call
> > > PQcancelRequest() in that?
> > 
> > Every backend has that signal handler defined, I think.
> 
> What?
> 
> So you suggest that the client application, that actually want's
> to cancel it's query, send's a SIGINT signal to the backend it is
> connected to?
> 
> Bruce! To do so would require to be the PostgreSQL UNIX user on
> the database server!
> 
> What I was talking about is the 
> 
> PQcancelRequest(PGconn *conn)
> 
> function in libpq. This "client side" function opens another
> connection to the postmaster, using the same host and port as
> "conn" did. Then it sends a cancel request startup packet
> containing this connections backend pid and cancel key (only
> known if the client talks version 2 or above of the protocol).
> 
> The spawned off backend process receiving the cancel request
> startup packet on the "server side" checks the postmasters
> process list for validity of the pid and cancel key combination
> and sends the backend a signal. It is allowed to do so because it
> is a child of the postmaster, as the backend it is signalling is,
> both running under the same userid.
> 
> So yes, every backend has that signal handler. But not everyone
> has a single user environment, where every process is allowed to
> kill everybody else. The client by default does not have any
> signal handler. So it could catch SIGALRM, do an alarm(10), do
> PQexec() and alarm(0). If the signal handler get's called, it'll
> call PQcancelRequest() causing PQexec() to recieve an ERROR (the
> message says something like "query canceled" or so).
> 
> 
> Jan
> 
> -- 
> 
> #==#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.  #
> #== [EMAIL PROTECTED] #
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
