FW: [SQL] query optimization question

2002-11-06 Thread terry
No offence taken; however, it is incorrect: my SQL is pretty good.  I
received no other responses...  and I later realized the solution to my
question:

(EXPERTS READ ON: If anyone can show me how to use a group by or otherwise
optimize this, I would be grateful.)

This subquery:
SELECT  project_id, marketing_name,
    (SELECT count(lots.lot_id) AS lot_count
     FROM deficiency_table AS dt, lots, deficiency_status AS ds
     WHERE dt.lot_id = lots.lot_id
       AND lots.division_id = proj.division_id
       AND lots.project_id = proj.project_id
       AND dt.deficiency_status_id = ds.deficiency_status_id
       AND ds.is_outstanding
       #PreserveSingleQuotes(variables.base_query)#
    ) AS def_count,
actually does return a deficiency count, since there can be more than one
deficiency per lot.  In order to get my lot_count (the number of lots with one
or more deficiencies) I just needed to add DISTINCT inside my count()
aggregate, i.e. SELECT count(DISTINCT lots.lot_id)...  I forgot one could do
that:
    (SELECT count(DISTINCT lots.lot_id) AS lot_count
     FROM deficiency_table AS dt, lots, deficiency_status AS ds
     WHERE dt.lot_id = lots.lot_id
       AND lots.division_id = proj.division_id
       AND lots.project_id = proj.project_id
       AND dt.days_old_start_date >= #CreateODBCDate(DateAdd("d", -int(ListLast(variables.aging_breakdown_list, ",")), now() ))#
       AND dt.deficiency_status_id = ds.deficiency_status_id
       AND ds.is_outstanding
       #PreserveSingleQuotes(variables.base_query)#
    ) AS lot_count_greater_#ListLast(variables.aging_breakdown_list, ",")#,
Note that the #PreserveSingleQuotes(variables.base_query)# is dynamic code that
further filters deficiencies by various criteria, e.g. just for a particular
supplier.

This query is actually dynamic; if all I had to do was the above 2 clauses,
then I most certainly COULD do a group by.

However, for the total deficiencies I am then splitting the total into
aging groups, e.g. <30, 30-60, 60-90, and >90 days old.  The query for that
looks like the one below.  But before I paste it in, I would like to optimize
it; if I could do so with a group by clause I most certainly would, but I
don't see how I can BECAUSE OF THE AGING BREAKDOWN:

SELECT  project_id, marketing_name,
    (SELECT count(lots.lot_id) AS lot_count
     FROM deficiency_table AS dt, lots, deficiency_status AS ds
     WHERE dt.lot_id = lots.lot_id
       AND lots.division_id = proj.division_id
       AND lots.project_id = proj.project_id
       AND dt.deficiency_status_id = ds.deficiency_status_id
       AND ds.is_outstanding
       AND dt.assigned_supplier_id = '101690'
    ) AS def_count,

    (SELECT count(lots.lot_id) AS lot_count
     FROM deficiency_table AS dt, lots, deficiency_status AS ds
     WHERE dt.lot_id = lots.lot_id
       AND lots.division_id = proj.division_id
       AND lots.project_id = proj.project_id
       AND dt.days_old_start_date < {d '2002-10-07'}
       AND dt.deficiency_status_id = ds.deficiency_status_id
       AND ds.is_outstanding
       AND dt.assigned_supplier_id = '101690'
    ) AS def_count_less_30,

    (SELECT count(lots.lot_id) AS lot_count
     FROM deficiency_table AS dt, lots, deficiency_status AS ds
     WHERE dt.lot_id = lots.lot_id
       AND lots.division_id = proj.division_id
       AND lots.project_id = proj.project_id
       AND dt.days_old_start_date >= {d '2002-10-07'}
       AND dt.days_old_start_date < {d '2002-09-07'}
       AND dt.deficiency_status_id = ds.deficiency_status_id
       AND ds.is_outstanding
       AND dt.assigned_supplier_id = '101690'
    ) AS def_count_30_60,

    (SELECT count

Re: FW: [SQL] query optimization question

2002-11-06 Thread Richard Huxton
On Wednesday 06 Nov 2002 2:01 pm, [EMAIL PROTECTED] wrote:

> However, for the total deficiencies I am then splitting up the total into
> aging groups, eg <30, 30-60, 60-90, and >90 days old.  The query for that
> looks like the below.  But before I paste it in, I would like to optimize
> it, if I could do so with a group by clause I most certainly would, but I
> don't see how I can BECAUSE OF THE AGING BREAKDOWN:

[one sub-query per age-range]
>   AND dt.days_old_start_date < {d '2002-10-07'}

>   ) AS def_count_less_30,

>   AND dt.days_old_start_date >= {d '2002-10-07'}
>   AND dt.days_old_start_date < {d '2002-09-07'}

>   ) AS def_count_30_60,

Could you not define a function age_range(date) to return the relevant range 
text, then group on that text? I've used that before.
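
Untested sketch of what I mean; the cutoffs and range labels below are just
examples, using your days_old_start_date column:

CREATE FUNCTION age_range(date) RETURNS text AS '
DECLARE
    d ALIAS FOR $1;
BEGIN
    -- bucket a start date into aging ranges; cutoffs are examples only
    IF d >= (now() - interval ''30 days'')::date THEN
        RETURN ''less_30'';
    ELSIF d >= (now() - interval ''60 days'')::date THEN
        RETURN ''30_60'';
    ELSIF d >= (now() - interval ''90 days'')::date THEN
        RETURN ''60_90'';
    ELSE
        RETURN ''greater_90'';
    END IF;
END;
' LANGUAGE 'plpgsql';

You could then GROUP BY age_range(dt.days_old_start_date).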

-- 
  Richard Huxton




Re: [SQL] query optimization question

2002-11-06 Thread Christoph Haller
Now that I've given your problem more thought (and searched for similar
stuff), I think what you need is to generate a cross table, i.e. a pivot
table.  Related to this, I am thinking of a query using conditional
expressions like COUNT(CASE WHEN ... THEN 1 ELSE NULL END) in order to use
GROUP BY.  Together with Richard's idea of using a function age_range(date)
it seems realizable.  I'm not yet ready to make a more detailed proposal,
but you might want to think about it in the meantime, too.
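
A rough sketch of the shape I have in mind (untested; the age_range() labels
are the hypothetical ones from Richard's suggestion, and I have left out some
of the joins for brevity):

SELECT lots.project_id,
       COUNT(CASE WHEN age_range(dt.days_old_start_date) = 'less_30'
                  THEN 1 ELSE NULL END) AS def_count_less_30,
       COUNT(CASE WHEN age_range(dt.days_old_start_date) = '30_60'
                  THEN 1 ELSE NULL END) AS def_count_30_60,
       COUNT(lots.lot_id)          AS def_count,
       COUNT(DISTINCT lots.lot_id) AS lot_count
FROM deficiency_table AS dt, lots
WHERE dt.lot_id = lots.lot_id
GROUP BY lots.project_id;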

Regards, Christoph





Re: FW: [SQL] query optimization question

2002-11-06 Thread Stephan Szabo
On Wed, 6 Nov 2002 [EMAIL PROTECTED] wrote:

> However, for the total deficiencies I am then splitting up the total into
> aging groups, eg <30, 30-60, 60-90, and >90 days old.  The query for that
> looks like the below.  But before I paste it in, I would like to optimize
> it, if I could do so with a group by clause I most certainly would, but I
> don't see how I can BECAUSE OF THE AGING BREAKDOWN:

Well, as a first step, I'd suggest using an age function as already
suggested, plus a temporary table to hold the grouped-by values, and then
doing the subselects against that.

Maybe something like (untested):
create temp table defs as
 select agefunc(dt.days_old_start_date) as ageval,
  count(lots.lot_id) as lots from
  deficiency_table as dt, lots, deficiency_status as ds
  where dt.lot_id = lots.lot_id
  and lots.division_id=proj.division_id
  and lots.project_id=proj.project_id
  and dt.deficiency_status_id=ds.deficiency_status_id
  and ds.is_outstanding
  and dt.assigned_supplier_id='101690'
 group by ageval;

-- same general thing for other repeated queries

select project_id, marketing_name,
 (select sum(lots) from defs) as def_count,
 (select lots from defs where ageval=0) as def_count_less_30,
 (select lots from defs where ageval=1) as def_count_30_60,
 ...

Since you want 0's instead of NULLs, you'd probably need to do a
COALESCE around the subselects, and these will go through a temp table
of probably 5 or so rows rather than the presumably large
other table.
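
For instance, each select-list item could become something like this
(sketch only, reusing the hypothetical defs table and ageval encoding above):

 coalesce((select lots from defs where ageval=0), 0) as def_count_less_30,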

I haven't spent much thought trying to force it down into a
single query, but that seems a bit harder.





Re: FW: [SQL] query optimization question

2002-11-06 Thread terry
Actually, come to think of it, just re-querying a temporary table could by
itself significantly improve performance, because the temp table would:
a) have fewer records to scan on the subselects
b) not require any joins

Thanks!

Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
[EMAIL PROTECTED]



> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:pgsql-sql-owner@;postgresql.org]On Behalf Of Stephan Szabo
> Sent: Wednesday, November 06, 2002 11:22 AM
> To: [EMAIL PROTECTED]
> Cc: Postgresql Sql Group (E-mail)
> Subject: Re: FW: [SQL] query optimization question
>
>
> On Wed, 6 Nov 2002 [EMAIL PROTECTED] wrote:
>
> > However, for the total deficiencies I am then splitting up
> the total into
> > aging groups, eg <30, 30-60, 60-90, and >90 days old.  The
> query for that
> > looks like the below.  But before I paste it in, I would
> like to optimize
> > it, if I could do so with a group by clause I most
> certainly would, but I
> > don't see how I can BECAUSE OF THE AGING BREAKDOWN:
>
> Well, as a first step, I'd suggest using an age function as already
> suggested and a temporary table to hold the grouped by values
> temporarily
> and then doing the subselects against that.
>
> Maybe something like (untested):
> create temp table defs as
>  select agefunc(dt.days_old_start_date) as ageval,
>   count(lots.lot_id) as lots from
>   deficiency_table as dt, lots, deficiency_status as ds
>   where dt.lot_id = lots.lot_id
>   and lots.dividion_id=proj.division_id
>   and lots.project_id=proj.project_id
>   and dt.deficiency_status_id=ds.deficiency_status_id
>   and ts.is_outstanding
>   and dt.assigned_supplier_id='101690'
>  group by ageval;
>
> -- same general thing for other repeated queries
>
> select project_id, marketing_name,
>  (select sum(lots) from defs) as def_count,
>  (select lots from defs where ageval=0) as def_count_less_30,
>  (select lots from defs where ageval=1) as def_count_30_60,
>  ...
>
> Since you want 0's instead of nulls, you'd probably need to do
> a coalesce for the subselects, and this will go through the
> probably 5 or so line temp table rather than the presumably large
> other table.
>
> I haven't spent much thought trying to force it down into a
> single query, but that seems a bit harder.
>
>
>





Re: [SQL] query optimization question

2002-11-06 Thread Masaru Sugawara
On Wed, 6 Nov 2002 09:01:49 -0500
<[EMAIL PROTECTED]> wrote:

> If anyone can see a way to do a group by to do this, then I will be happy to
> hear about it, because currently the resultset has to do a separate
> (sequential or index) scan of the deficiencies table.  The only way I can
> see to do a group by would be to break out the aging categories into
> separate queries, but that wins me nothing because each query then does its
> own scan...
> 
> The expected simplified output of this query looks like this:
> Project   <30   30-60   >=60   lot total   <30   30-60   >=60   def total
> X           1       2      1           4     5      10      5          20    (if X had 4 lots, each of 5 deficiencies)
> Y           1       1      0           2     3       3      0           6    (each has e.g. 3 deficiencies in project Y)
> 


The following query may be one way to do it, but I cannot confirm whether
it works.


SELECT
    project_id,
    marketing_name,
    COUNT(lots.lot_id) AS def_count,
    COUNT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'}
          THEN lots.lot_id ELSE NULL END) AS def_count_less_30,
    COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'}
           AND dt.days_old_start_date < {d '2002-09-07'}
          THEN lots.lot_id ELSE NULL END) AS def_count_30_60,
    COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'}
           AND dt.days_old_start_date < {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END) AS def_count_60_90,
    COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END) AS def_count_greater_90,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_less_30,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'}
           AND dt.days_old_start_date < {d '2002-09-07'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_30_60,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'}
           AND dt.days_old_start_date < {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_60_90,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_greater_90,
    COUNT(DISTINCT lots.lot_id) AS lot_count
FROM
    (SELECT * FROM deficiency_table
     WHERE assigned_supplier_id = '101690') AS dt,
    (SELECT * FROM deficiency_status,
     WHERE ds.is_outstanding) AS ds,
    (SELECT * FROM projects
     WHERE proj.division_id = 'GGH') AS proj
    lots
WHERE
    dt.lot_id = lots.lot_id
    AND lots.division_id = proj.division_id
    AND lots.project_id = proj.project_id
    AND dt.deficiency_status_id = ds.deficiency_status_id
    AND NOT EXISTS
        (SELECT 1 FROM menu_group_projects
         WHERE menu_code = 'WA'
         AND division_id = proj.division_id
         AND project_id = proj.project_id
         AND status = 'I')
ORDER BY proj.project_id




Regards,
Masaru Sugawara






Re: FW: [SQL] query optimization question

2002-11-06 Thread Stephan Szabo
On Wed, 6 Nov 2002 [EMAIL PROTECTED] wrote:

> Actually, come to think of it, just the implementation of re-querying a
> temporary table could alone significantly improve performance, because the
> temp table would:
> a) have fewer records to scan on the subselects
> b) not require any joins

Yeah, that's what I was thinking.  However, the example I gave was
bogus: I realized that I needed to do more, then forgot before sending.

> > Maybe something like (untested):
> > create temp table defs as
> >  select agefunc(dt.days_old_start_date) as ageval,
> >   count(lots.lot_id) as lots from
> >   deficiency_table as dt, lots, deficiency_status as ds
> >   where dt.lot_id = lots.lot_id
> >   and lots.dividion_id=proj.division_id
> >   and lots.project_id=proj.project_id
> >   and dt.deficiency_status_id=ds.deficiency_status_id
> >   and ts.is_outstanding
> >   and dt.assigned_supplier_id='101690'
> >  group by ageval;

You'll almost certainly need to add projects as proj in the from clause,
and proj.project_id in the select clause and the group by (and possibly
division_id too - I can't quite tell if that's part of a composite key).

> > -- same general thing for other repeated queries
> >
> > select project_id, marketing_name,
> >  (select sum(lots) from defs) as def_count,
> >  (select lots from defs where ageval=0) as def_count_less_30,

In these you'd want to limit it to the appropriate rows from defs
by project_id (and possibly division_id).
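
So, roughly (still untested; agefunc and the ageval encoding are the same
assumptions as before, and I'm guessing project_id, division_id and
marketing_name all live on the projects table):

create temp table defs as
 select proj.project_id, proj.division_id,
  agefunc(dt.days_old_start_date) as ageval,
  count(lots.lot_id) as lots   -- one row per project / division / age bucket
 from deficiency_table as dt, lots, deficiency_status as ds, projects as proj
 where dt.lot_id = lots.lot_id
  and lots.division_id = proj.division_id
  and lots.project_id = proj.project_id
  and dt.deficiency_status_id = ds.deficiency_status_id
  and ds.is_outstanding
  and dt.assigned_supplier_id = '101690'
 group by proj.project_id, proj.division_id, ageval;

select proj.project_id, proj.marketing_name,
 coalesce((select sum(d.lots) from defs d
   where d.project_id = proj.project_id
   and d.division_id = proj.division_id), 0) as def_count,
 coalesce((select d.lots from defs d
   where d.project_id = proj.project_id
   and d.division_id = proj.division_id
   and d.ageval = 0), 0) as def_count_less_30,
 coalesce((select d.lots from defs d
   where d.project_id = proj.project_id
   and d.division_id = proj.division_id
   and d.ageval = 1), 0) as def_count_30_60
from projects as proj;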





Re: [SQL] query optimization question

2002-11-06 Thread Masaru Sugawara
On Thu, 07 Nov 2002 01:44:25 +0900
I <[EMAIL PROTECTED]> wrote:

> On Wed, 6 Nov 2002 09:01:49 -0500
> <[EMAIL PROTECTED]> wrote:
> 
> > If anyone can see a way to do a group by to do this, then I will be happy to
> > hear about it, because currently the resultset has to do a separate
> > (sequential or index) scan of the deficiencies table.  The only way I can
> > see to do a group by would be to break out the aging categories into
> > separate queries, but that wins me nothing because each query then does its
> > own scan...
> > 
> > The expected simplified output of this query looks like this:
> > Project   <30   30-60   >=60   lot total   <30   30-60   >=60   def total
> > X           1       2      1           4     5      10      5          20    (if X had 4 lots, each of 5 deficiencies)
> > Y           1       1      0           2     3       3      0           6    (each has e.g. 3 deficiencies in project Y)
> > 
> 
> 
> The following query may be one of the ways, but I cannot confirm whether
> it goes well or not. 
> 
> 

There were some misspellings in the FROM clause. They are now fixed.


SELECT
    project_id,
    marketing_name,
    COUNT(lots.lot_id) AS def_count,
    COUNT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'}
          THEN lots.lot_id ELSE NULL END) AS def_count_less_30,
    COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'}
           AND dt.days_old_start_date < {d '2002-09-07'}
          THEN lots.lot_id ELSE NULL END) AS def_count_30_60,
    COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'}
           AND dt.days_old_start_date < {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END) AS def_count_60_90,
    COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END) AS def_count_greater_90,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_less_30,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'}
           AND dt.days_old_start_date < {d '2002-09-07'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_30_60,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'}
           AND dt.days_old_start_date < {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_60_90,
    COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'}
          THEN lots.lot_id ELSE NULL END)) AS lot_count_greater_90,
    COUNT(DISTINCT lots.lot_id) AS lot_count
FROM
    (SELECT * FROM deficiency_table
     WHERE assigned_supplier_id = '101690') AS dt,
    (SELECT * FROM deficiency_status
     WHERE is_outstanding) AS ds,
    (SELECT * FROM projects
     WHERE division_id = 'GGH') AS proj,
    lots
WHERE
    dt.lot_id = lots.lot_id
    AND lots.division_id = proj.division_id
    AND lots.project_id = proj.project_id
    AND dt.deficiency_status_id = ds.deficiency_status_id
    AND NOT EXISTS
        (SELECT 1 FROM menu_group_projects
         WHERE menu_code = 'WA'
         AND division_id = proj.division_id
         AND project_id = proj.project_id
         AND status = 'I')
ORDER BY proj.project_id
 

 

Regards,
Masaru Sugawara






Re: [SQL] query optimization question

2002-11-06 Thread terry
That looks really promising as a possibility; however, I think you intended
to add a GROUP BY clause.
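
Something like this at the end, I assume, to match the non-aggregated
columns in the SELECT list:

 GROUP BY project_id, marketing_name
 ORDER BY proj.project_id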


Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
[EMAIL PROTECTED]



> -Original Message-
> From: Masaru Sugawara [mailto:rk73@;sea.plala.or.jp]
> Sent: Wednesday, November 06, 2002 11:44 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SQL] query optimization question
>
>
> On Wed, 6 Nov 2002 09:01:49 -0500
> <[EMAIL PROTECTED]> wrote:
>
> > If anyone can see a way to do a group by to do this, then I
> will be happy to
> > hear about it, because currently the resultset has to do a separate
> > (sequential or index) scan of the deficiencies table.  The
> only way I can
> > see to do a group by would be to break out the aging categories into
> > separate queries, but that wins me nothing because each
> query then does its
> > own scan...
> >
> > The expected simplified output of this query looks like this:
> > Project   <30   30-60   >=60   lot total   <30   30-60   >=60   def total
> > X           1       2      1           4     5      10      5          20    (if X had 4 lots, each of 5 deficiencies)
> > Y           1       1      0           2     3       3      0           6    (each has e.g. 3 deficiencies in project Y)
> >
>
>
> The following query may be one of the ways, but I cannot
> confirm whether
> it goes well or not.
>
>
> SELECT
> project_id,
> marketing_name,
> COUNT(lots.lot_id) AS def_count,
> COUNT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'}
>   THEN lots.lot_id ELSE NULL END) AS
> def_count_less_30,
> COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'}
> AND dt.days_old_start_date < {d
> '2002-09-07'}
>   THEN lots.lot_id ELSE NULL END) AS
> def_count_30_60,
> COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'}
> AND dt.days_old_start_date < {d
> '2002-08-08'}
>   THEN lots.lot_id ELSE NULL END) AS
> def_count_60_90,
> COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'}
>   THEN lots.lot_id ELSE NULL END) AS
> def_count_greater_90,
> COUNT(DISTINCT(CASE WHEN
>dt.days_old_start_date <
> {d '2002-10-07'}
>   THEN lots.lot_id ELSE NULL END ))
> AS lot_count_less_30,
>COUNT(DISTINCT(CASE WHEN
>dt.days_old_start_date >=
> {d '2002-10-07'}
>AND dt.days_old_start_date < {d
> '2002-09-07'}
>   THEN lots.lot_id ELSE NULL END ))
> AS lot_count_30_60,
>COUNT(DISTINCT(CASE WHEN
>dt.days_old_start_date >=
> {d '2002-09-07'}
>AND dt.days_old_start_date < {d
> '2002-08-08'}
>   THEN lots.lot_id ELSE NULL END ))
> AS lot_count_60_90,
>COUNT(DISTINCT(CASE WHEN
>   dt.days_old_start_date >=
> {d '2002-08-08'}
>   THEN lots.lot_id ELSE NULL END ))
> AS lot_count_greater_90,
>COUNT(DISTINCT lots.lot_id) AS lot_count
> FROM
>(SELECT * FROM deficiency_table
>WHERE assigned_supplier_id = '101690') AS dt,
>(SELECT * FROM deficiency_status,
>WHERE ds.is_outstanding) AS ds,
>(SELECT * FROM projects
>WHERE proj.division_id = 'GGH') AS proj
>lots
> WHERE
>dt.lot_id = lots.lot_id
>AND lots.division_id = proj.division_id
>AND lots.project_id = proj.project_id
>AND dt.deficiency_status_id = ds.deficiency_status_id
>AND NOT EXISTS
>   (SELECT 1 FROM menu_group_projects
>  WHERE menu_code = 'WA'
>  AND division_id = proj.division_id
>  AND project_id = proj.project_id
>  AND status = 'I')
> ORDER BY proj.project_id
>
>
>
>
> Regards,
> Masaru Sugawara
>





Re: [SQL] Copying a rowtype variable.

2002-11-06 Thread Rison, Stuart
>> 2) I am looking for an elegant way of copying a rowtype variable:
>> 
>> eg.
>> 
>> DECLARE
>>  current_row orf%ROWTYPE;
>>  previous_row orf%ROWTYPE;
>> BEGIN
>> 
>>  LOOP
>>  -- use cursors or FOR SELECT to get values into current_row
>>  -- now try this:
>> 
>>  previous_row = current_row;
>>  END LOOP;
>> END;
>> 
>> Now, as I anticipated, this fails because a rowtype variable is a
>> composite
>> variable.  One working alternative is to do:
>>
>
> I haven't tried this.  One thing I notice above is that you're using
> the equality operator "=" instead of the assignment operator ":="  .
> Usually Postgres lets you slack on this, but it would be worth trying
> to see whether that has an effect on the problem.
>

Fair point.  But "previous_row := current_row" doesn't work either.

> Another thing to try is, instead of a simple variable assignment
> 
> SELECT current_row INTO previous_row;
>
> ... and see if that works.   

Well, I had high hopes for that one... but it didn't work either!

> I'll tinker later today; there has to be a way to do it.

I'd definitely appreciate further suggestions, but thanks all the same for
your help.  I have a feeling that you might have to write a PL function to
perform the operation... but I haven't really thought about it!
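
For now the only workaround I can see is to fall back on scalar variables
for the fields I actually need, something like (column names made up):

DECLARE
    current_row  orf%ROWTYPE;
    prev_id      orf.orf_id%TYPE;    -- example columns; substitute the real ones
    prev_name    orf.orf_name%TYPE;
BEGIN
    LOOP
        -- fetch values into current_row as before, then save the fields:
        prev_id   := current_row.orf_id;
        prev_name := current_row.orf_name;
    END LOOP;
END;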

Stuart.




[SQL] Problem: Referential Integrity Constraints lost

2002-11-06 Thread Achilleus Mantzios

Hi, I think I hit a major problem on 7.2.1.
I run 3 systems with PostgreSQL 7.2.1:
a Red Hat 7.1 for development, a Red Hat 7.3 for production,
and a FreeBSD 4.6.1RC2 for testing.

After long runs (with periodic (daily) vacuum analyzes)
I noticed that some of the triggers that implement referential integrity
constraints just disappeared.
Some of these triggers were still present on the FreeBSD system
(which has been idle for a month or so), whereas on the Linux
systems they were absent.

Does anyone have a clue??
Any comment would be really valuable at this moment of darkness.

Thanx.

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-10-8981112
fax:+30-10-8981877
email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]





Re: [SQL] Problem: Referential Integrity Constraints lost

2002-11-06 Thread Achilleus Mantzios
Also, I must add that these lost triggers implement the parent-table
side of the constraint, e.g.

CREATE TABLE VslSections(
id serial NOT NULL UNIQUE PRIMARY KEY,
name varchar(20) NOT NULL UNIQUE);

CREATE TABLE MachClasses(
id serial NOT NULL UNIQUE PRIMARY KEY,
name varchar(20) NOT NULL UNIQUE,
vslsecid int4 NOT NULL,
FOREIGN KEY (vslsecid) REFERENCES VslSections (id));

Then the triggers created are:

1)
CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER INSERT OR UPDATE ON
"machclasses" FROM "vslsections" NOT DEFERRABLE INITIALLY IMMEDIATE FOR
EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins" ('<unnamed>',
'machclasses', 'vslsections', 'UNSPECIFIED', 'vslsecid', 'id');
2)
CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER DELETE ON "vslsections" FROM
"machclasses" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE
PROCEDURE "RI_FKey_noaction_del" ('<unnamed>', 'machclasses',
'vslsections', 'UNSPECIFIED', 'vslsecid', 'id');
3)
CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER UPDATE ON "vslsections" FROM
"machclasses" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE
PROCEDURE "RI_FKey_noaction_upd" ('<unnamed>', 'machclasses',
'vslsections', 'UNSPECIFIED', 'vslsecid', 'id');

The *LOST* triggers are 2 and 3.

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-10-8981112
fax:+30-10-8981877
email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]






Re: [SQL] Problem: Referential Integrity Constraints lost: Correction

2002-11-06 Thread Achilleus Mantzios

I was wrong about only the parent-side triggers having
disappeared.

Triggers on both sides are missing.

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-10-8981112
fax:+30-10-8981877
email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]





Re: [SQL] Problem: Referential Integrity Constraints lost

2002-11-06 Thread Stephan Szabo
On Wed, 6 Nov 2002, Achilleus Mantzios wrote:

>
> Hi i think a hit a major problem on 7.2.1.
> I run 3 systems with postgresql 7.2.1.
> Its a redhat 7.1 for development, a redhat 7.3 for production
> and a FreeBSD 4.6.1RC2 for testing.
>
> After long runs (with periodic (daily) vacuum analyze's)
> i noticed that some of the triggers that implement referential integrity
> constraints just disapeared.
> Some of these triggers were still present on the FreeBSD system
> (which has been idle for a month or so), whereas on the linux
> systems they were absent.
>
> Has any one have a clue??

Hmm, you haven't done anything like cluster or an incomplete dump and
reload have you?






Re: [SQL] Problem: Referential Integrity Constraints lost

2002-11-06 Thread Achilleus Mantzios
On Wed, 6 Nov 2002, Stephan Szabo wrote:

> On Wed, 6 Nov 2002, Achilleus Mantzios wrote:
>
> >
> > Hi i think a hit a major problem on 7.2.1.
> > I run 3 systems with postgresql 7.2.1.
> > Its a redhat 7.1 for development, a redhat 7.3 for production
> > and a FreeBSD 4.6.1RC2 for testing.
> >
> > After long runs (with periodic (daily) vacuum analyze's)
> > i noticed that some of the triggers that implement referential integrity
> > constraints just disapeared.
> > Some of these triggers were still present on the FreeBSD system
> > (which has been idle for a month or so), whereas on the linux
> > systems they were absent.
> >
> > Has any one have a clue??
>
> Hmm, you haven't done anything like cluster or an incomplete dump and
> reload have you?

No.
Also, the FreeBSD system's database was populated with data from
production on 2002-10-22, so the problem on the FreeBSD system
was partially inherited from the production database.

>
>
>

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-10-8981112
fax:+30-10-8981877
email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]





Re: [SQL] Problem: Referential Integrity Constraints lost

2002-11-06 Thread Achilleus Mantzios
On Thu, 7 Nov 2002, Achilleus Mantzios wrote:

> On Wed, 6 Nov 2002, Stephan Szabo wrote:
>
> > On Wed, 6 Nov 2002, Achilleus Mantzios wrote:
> >
> > >
> > > Hi i think a hit a major problem on 7.2.1.
> > > I run 3 systems with postgresql 7.2.1.
> > > Its a redhat 7.1 for development, a redhat 7.3 for production
> > > and a FreeBSD 4.6.1RC2 for testing.
> > >
> > > After long runs (with periodic (daily) vacuum analyze's)
> > > i noticed that some of the triggers that implement referential integrity
> > > constraints just disapeared.
> > > Some of these triggers were still present on the FreeBSD system
> > > (which has been idle for a month or so), whereas on the linux
> > > systems they were absent.
> > >
> > > Has any one have a clue??
> >
> > Hmm, you haven't done anything like cluster or an incomplete dump and
> > reload have you?
>
> No,
> Also the FreeBSD system's database was populated with data from
> the production on 2002-10-22, so the problem on the FreeBSD
> was partially inherited from the production databse.

Also, I must add that the database on the production system
was never dumped/reloaded since the creation of the system.

The production 7.2.1 pgsql db was created and loaded on 2002-04-20,
from a 7.1.3 pgsql on our previous Solaris box (which we replaced
with a new Linux one).
The production pgsql is started/stopped only during
system shutdowns/boots.

We had some unexpected system failures due to some
Linux/motherboard/BIOS problems.
(I was so enthusiastic about pgsql and its stability
that I was overconfident about our database's state
after these failures.)
BTW, could that be the cause of the problem??

The problem is that I didn't realize it until yesterday.
The only thing I am sure of is that some of the triggers lost
on both Linuxes are present on the FreeBSD system,
which was populated on Oct 22 and had NO deletion activity
at all.

I plan to make a huge map of all my tables, figure out
all the pairs of tables with inter-referential integrity constraints,
pg_dump --schema-only, see which triggers are missing,
and then create them by hand.
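
For that inventory, something like the following catalog query might help;
it just lists the RI triggers that are still present, per table, using the
standard 7.2 system catalogs:

SELECT c.relname  AS table_name,
       p.proname  AS ri_procedure,
       t.tgconstrname
FROM pg_trigger t, pg_class c, pg_proc p
WHERE t.tgrelid = c.oid
  AND t.tgfoid  = p.oid
  AND p.proname LIKE 'RI_FKey_%'   -- the FK enforcement procedures
ORDER BY c.relname, p.proname;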

Has anyone got a better idea??
After recreating the missing triggers, should I upgrade
to 7.2.3??

Thanx.

>
> >
> >
> >
>
> ==
> Achilleus Mantzios
> S/W Engineer
> IT dept
> Dynacom Tankers Mngmt
> Nikis 4, Glyfada
> Athens 16610
> Greece
> tel:+30-10-8981112
> fax:+30-10-8981877
> email:  [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>
>
>

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-10-8981112
fax:+30-10-8981877
email:  [EMAIL PROTECTED]
[EMAIL PROTECTED]

