Re: [SQL] Query performance problem

2005-03-17 Thread Ragnar Hafstað
On Fri, 2005-03-18 at 10:49 +0530, Kenneth Gonsalves wrote:
> On Thursday 17 Mar 2005 7:35 pm, Richard Huxton wrote:
> 
> > Not necessarily. NOT NULL here helps to ensure you can add values
> > together without the risk of a null result. There are plenty of
> > "amount" columns that should be not-null (total spent, total
> > ordered etc).
> 
> that makes sense - but is it necessary to have a not null constraint 
> when there is a default value?

DEFAULT applies to INSERTs, NOT NULL applies to UPDATEs too.
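For instance (a minimal sketch, using a hypothetical table t):

  CREATE TABLE t (amount integer DEFAULT 0 NOT NULL);
  INSERT INTO t DEFAULT VALUES;          -- fine: the DEFAULT fills in 0
  INSERT INTO t (amount) VALUES (NULL);  -- rejected: an explicit NULL bypasses the DEFAULT
  UPDATE t SET amount = NULL;            -- likewise rejected, and only because of NOT NULL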

gnari





Re: [SQL] Query performance problem

2005-03-17 Thread Kenneth Gonsalves
On Thursday 17 Mar 2005 7:35 pm, Richard Huxton wrote:

> Not necessarily. NOT NULL here helps to ensure you can add values
> together without the risk of a null result. There are plenty of
> "amount" columns that should be not-null (total spent, total
> ordered etc).

that makes sense - but is it necessary to have a not null constraint 
when there is a default value?

-- 
regards
kg

http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net



Re: [SQL] Process priority.

2005-03-17 Thread Theo Galanakis
It depends what the intended purpose is for. In our scenario, there are a number of ad-hoc export tasks running during the day that are quite CPU intensive.

There is a noticeably slow response time when exports are being run. By lowering the priority of the export postmaster, the system appeared to run far smoother for standard users.

I guess you could set up a replication database, which I probably will do in the end; however I just wanted to have a play and see if I could extend postgres with C. I don't profess to be a C or OS guru. Thank you for your feedback.

Theo


-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]] 
Sent: Friday, 18 March 2005 11:06 AM
To: Theo Galanakis
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] Process priority. 



Theo Galanakis <[EMAIL PROTECTED]> writes:
>   I wrote a C program that can be called through postgres via stored 
> procedures. It allows you to change the current postmaster process 
> priority.


You are aware that that's widely considered either useless or counterproductive?  Renice-ing one backend seldom gives helpful results, because (a) what usually matters more than CPU usage is I/O, and renice doesn't change I/O priorities; (b) priority inversion results in higher-priority backends blocking behind the low-priority one whenever it's managed to acquire a lock.

            regards, tom lane






Re: [SQL] Process priority.

2005-03-17 Thread Tom Lane
Theo Galanakis <[EMAIL PROTECTED]> writes:
>   I wrote a C program that can be called through postgres via stored
> procedures. It allows you to change the current postmaster process priority.

You are aware that that's widely considered either useless or
counterproductive?  Renice-ing one backend seldom gives helpful results,
because (a) what usually matters more than CPU usage is I/O, and renice
doesn't change I/O priorities; (b) priority inversion results in
higher-priority backends blocking behind the low-priority one whenever
it's managed to acquire a lock.

regards, tom lane



[SQL] Process priority.

2005-03-17 Thread Theo Galanakis
Hi,
    I wrote a C program that can be called through postgres via stored procedures. It allows you to change the current postmaster process priority.

    Essentially its intended purpose would be to allow a highly resource-intensive postmaster process to run at a lower priority, allowing other postmaster processes more CPU time - for example, admin/reporting backend SQL that impacts the system. I guess this is the same as renice; however, it can be accessed programmatically from the postgres box via any client.

    The three stored procs are as follows:


    get_pid()
    get_priority()
    set_priority(int) 
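
A hypothetical usage sketch, assuming the three functions above are installed under these names and that set_priority() takes a nice value:

    SELECT get_pid();          -- process id of the current backend
    SELECT get_priority();     -- its current nice value
    SELECT set_priority(10);   -- lower its priority before running a heavy report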

Re: [SQL] Consecutive row count query

2005-03-17 Thread Andrew Hammond
You could hack it using a custom aggregate. NB: you'll want to reset the 
 categorizer_seq every now and then. And this isn't safe for concurrent 
queries. You could make it safe for concurrent queries by using a 
complex type for STYPE, but I didn't bother. I also haven't debugged 
this, but I think it expresses the concept.

CREATE SEQUENCE categorizer_seq;

CREATE OR REPLACE FUNCTION categorizer_func (text, text) RETURNS text
VOLATILE CALLED ON NULL INPUT AS '
SELECT CASE WHEN $1 = $2
THEN (SELECT last_value FROM categorizer_seq)::text
ELSE nextval(''categorizer_seq'')::text
END AS category
' LANGUAGE SQL;

CREATE AGGREGATE categorizer (
BASETYPE = text,
SFUNC = categorizer_func,
STYPE = text,
INITCOND = ''
);

SELECT col1, count(*)
FROM (
SELECT col1, categorizer(col1) AS category
FROM mytable
ORDER BY col_order
) tmp
GROUP BY col1, category;
Leon Stringer wrote:
> Hi,
>
> I wondered if anyone could answer the following question:
>
> If I have a table such as the one below:
>
> col1   col_order
> ---
> Apple  1
> Apple  2
> Orange 3
> Banana 4
> Apple  5
>
> Is there a way I can get the following results:
>
> Apple  2
> Orange 1
> Banana 1
> Apple  1
>
> i.e. Each row is printed ordered by col_order but consecutive
> appearances of the same col1 result in only a single line in the result
> with the number of consecutive appearances.
>
> Obviously I could store the table as:
>
> col1   col_order col_count
> --
> Apple  1 2
> Orange 2 1
> Banana 3 1
> Apple  4 1
>
> But since (in my intended table) most rows will have col_count = 1, this
> seems like unnecessary normalization (and semantically "wrong").
>
> Thanks in advance for any help,
>
> Leon Stringer



Re: [SQL] Consecutive row count query

2005-03-17 Thread Greg Stark
Leon Stringer <[EMAIL PROTECTED]> writes:

> Hi,
> 
> I wondered if anyone could answer the following question:
> 
> If I have a table such as the one below:
> 
> col1   col_order
> ---
> Apple  1
> Apple  2
> Orange 3
> Banana 4
> Apple  5
> 
> Is there a way I can get the following results:
> 
> Apple  2
> Orange 1
> Banana 1
> Apple  1

Maybe. But not easily or efficiently.

How about this:

SELECT a.col1, a.col_order
  FROM tab as a
  LEFT OUTER JOIN tab as b 
ON (b.col_order = a.col_order+1 AND b.col1=a.col1)
 WHERE b.col1 IS NULL


> But since (in my intended table) most rows will have col_count = 1, this
> seems like unnecessary normalization (and semantically "wrong").

I think this looks like a better option. "unnecessary normalization" is an odd
phrase. Unless you can point at some reason that the denormalized seems *more*
convenient --and much *more* convenient at that-- not less convenient then you
should go for it.

Besides, that col_count column's only going to be four bytes. Unless the
"Apple" data is really short it'll only take a few col_count>1 to make it
worthwhile.

The only reason you might have a problem is if it's really "semantically
wrong" which would be if there's data attached to Apple or Orange that might
be different from one streak of results to the other.

-- 
greg




Re: [SQL] Consecutive row count query

2005-03-17 Thread Jaime Casanova
On Thu, 17 Mar 2005 20:21:24 +, Leon Stringer
<[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I wondered if anyone could answer the following question:
> 
> If I have a table such as the one below:
> 
> col1   col_order
> ---
> Apple  1
> Apple  2
> Orange 3
> Banana 4
> Apple  5
> 
> Is there a way I can get the following results:
> 
> Apple  2
> Orange 1
> Banana 1
> Apple  1
> 
A function?

regards,
Jaime Casanova



Re: [SQL] Consecutive row count query

2005-03-17 Thread Bruno Wolff III
On Thu, Mar 17, 2005 at 20:21:24 +,
  Leon Stringer <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I wondered if anyone could answer the following question:
> 
> If I have a table such as the one below:
> 
> col1   col_order
> ---
> Apple  1
> Apple  2
> Orange 3
> Banana 4
> Apple  5
> 
> Is there a way I can get the following results:
> 
> Apple  2
> Orange 1
> Banana 1
> Apple  1
> 
> i.e. Each row is printed ordered by col_order but consecutive
> appearances of the same col1 result in only a single line in the result
> with the number of consecutive appearances.

Which col_order value do you expect to use in ordering?

You can probably join a grouped version of the table to get the counts
with a distinct on version of the table to do what you want.



[SQL] Consecutive row count query

2005-03-17 Thread Leon Stringer
Hi,

I wondered if anyone could answer the following question:

If I have a table such as the one below:

col1   col_order
---
Apple  1
Apple  2
Orange 3
Banana 4
Apple  5

Is there a way I can get the following results:

Apple  2
Orange 1
Banana 1
Apple  1

i.e. Each row is printed ordered by col_order but consecutive
appearances of the same col1 result in only a single line in the result
with the number of consecutive appearances.

Obviously I could store the table as:

col1   col_order col_count
--
Apple  1 2
Orange 2 1
Banana 3 1
Apple  4 1

But since (in my intended table) most rows will have col_count = 1, this
seems like unnecessary normalization (and semantically "wrong").

Thanks in advance for any help,

Leon Stringer





Re: [SQL] lower and unicode

2005-03-17 Thread Peter Eisentraut
pginfo wrote:
> I am using pg 8.0.1 on FreeBSD 5.3 but I am ready to use the version
> that supports correct unicode.

FreeBSD doesn't support Unicode, so you need to use something else.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [SQL] update with subselect (long)

2005-03-17 Thread Leif B. Kristensen
On Thursday 17 March 2005 15:01, Stephan Szabo wrote:

> The above needs some work.  The below should be acceptable to the
> system.
>
>  update name_parts set name_part_type=5 from (select name_id from
> name_parts where name_part_type=6) as gpt_type where
> name_parts.name_id=gpt_type.name_id and name_part_type=3;

Thank you very much!

> I'm a bit worried about blindly changing the type for anything that
> has a name_part_type=6 record, but given your usage that might be
> okay.

Yes, this is a kind of one-shot job. I shall be going through all the 
records manually later and correct the ones that don't fit the bill 
exactly.
-- 
Leif Biberg Kristensen
http://solumslekt.org/



Re: [SQL] Query performance problem

2005-03-17 Thread Richard Huxton
Kenneth Gonsalves wrote:
> On Thursday 17 Mar 2005 1:50 pm, Phil Daintree wrote:
>
> > CREATE TABLE chartdetails (
> >     accountcode integer DEFAULT 0 NOT NULL,
> >     period integer DEFAULT 0 NOT NULL,
> >     budget double precision DEFAULT (0)::double precision NOT NULL,
> >     actual double precision DEFAULT (0)::double precision NOT NULL,
> >     bfwd double precision DEFAULT (0)::double precision NOT NULL,
> >     bfwdbudget double precision DEFAULT (0)::double precision NOT NULL
> > );
>
> although maybe not relevant to your question, as i have noticed this
> before with mysql 'sql', what is the point of having a NOT NULL field
> that defaults to 0? the whole idea of a NOT NULL field is to have the
> value filled in compulsorily and having a default of 0 or '' defeats
> the purpose

Not necessarily. NOT NULL here helps to ensure you can add values
together without the risk of a null result. There are plenty of "amount"
columns that should be not-null (total spent, total ordered etc).
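
A minimal illustration of the risk being described, using the chartdetails columns quoted above:

  SELECT budget + bfwdbudget AS cfwdbudget FROM chartdetails;
  -- if either column allowed NULLs, one NULL would make cfwdbudget NULL for that row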

--
  Richard Huxton
  Archonet Ltd


Re: [SQL] update with subselect (long)

2005-03-17 Thread Stephan Szabo

On Thu, 17 Mar 2005, Leif B. Kristensen wrote:

> CREATE TABLE name_part_types ( -- a key/label pair
> name_part_type_id INTEGER PRIMARY KEY,
> name_part_type VARCHAR(50)
> );
>
>
> CREATE TABLE names ( -- one person can have multiple names
> name_id INTEGER PRIMARY KEY,
> person_id INTEGER REFERENCES persons,
> is_primary BOOLEAN NOT NULL -- but one must be primary
> );
>
>
> CREATE TABLE name_parts ( -- a name has multiple name-parts
> name_part_id INTEGER PRIMARY KEY,
> name_id INTEGER REFERENCES names,
> name_part_type INTEGER REFERENCES name_part_types,
> name_sequence INTEGER NOT NULL, -- internal sort order of name part
> name_part VARCHAR(100)
> );
>
> My name_part_types table presently looks like this:
>
> slekta=> select * from name_part_types;
>  name_part_type_id | name_part_type
> ---+
>  1 | prefix
>  2 | given
>  3 | surname
>  4 | suffix
>  5 | patronym
>  6 | toponym
> (6 rows)
>
> My current genealogy program, The Master Genealogist (TMG), stores names
> in the conventional pigeon-hole way, within the fields Prefix / Given /
> Surname / Suffix. This form is quite awkward regarding old Norwegian
> naming practice, and I have been using the Surname field mainly for
> recording patronyms, and the Suffix field for toponyms (ie. "farm
> names"). I've written a FoxPro to SQL conversion script (using Perl and
> the XBase module) to dump the data from the TMG database. A typical
> name_parts set may look like this:
>
> slekta=> select * from name_parts where name_id = 1652;
>  name_part_id | name_id | name_part_type | name_sequence |name_part
> --+-++---+--
>  3643 |1652 |  2 | 0 | Christen
>  3644 |1652 |  5 | 1 | Jonsen
>  3645 |1652 |  6 | 2 | Stavdal
> (3 rows)
>
> Now I'm starting to approach my point. The values (2,3,4) in the
> name_part_type column should be changed to (2,5,6). As the Suffix field
> in the overwhelming majority of instances is used only if the name is
> on the Given / Patronym / Toponym form, I figure that it should be easy
> to change the name_part_type here. Initially, I ran this update:
>
> slekta=> update name_parts set name_part_type=6 where name_part_type=3;
>
> So far, so good. But how do I change the name_part_type from 3 to 5 for
> the names with the same name_id that were altered by the previous
> command? This is my latest try:
>
> slekta=> begin work;
> BEGIN
> slekta=> update name_parts set name_part_type=5
> slekta-> from (select name_id where name_part_type=6) as gpt_type
> slekta-> where name_id=gpt_type and name_part_type=3;
> ERROR:  subquery in FROM may not refer to other relations of same query
> level

The above needs some work.  The below should be acceptable to the system.

 update name_parts set name_part_type=5 from (select name_id from
name_parts where name_part_type=6) as gpt_type where
name_parts.name_id=gpt_type.name_id and name_part_type=3;

I'm a bit worried about blindly changing the type for anything that has a
name_part_type=6 record, but given your usage that might be okay.




Re: [SQL] Query performance problem

2005-03-17 Thread PFC

> although maybe not relevant to your question, as i have noticed this
> before with mysql 'sql', what is the point of having a NOT NULL field
> that defaults to 0? the whole idea of a NOT NULL field is to have the
> value filled in compulsorily and having a default of 0 or '' defeats
> the purpose

Well if you define your field as NOT NULL mysql will automatically set it
to 0 if you store a NULL anyway, so you might as well specify it in your
table definitions so it looks like you wanted it...



Re: [SQL] update with subselect (long)

2005-03-17 Thread Leif B. Kristensen
On Thursday 17 March 2005 14:07, Leif B. Kristensen wrote:

> slekta=> update name_parts set name_part_type=6 where
> name_part_type=3;

This message was sent a little prematurely while I was editing a similar 
posting to comp.databases. The cited line is erroneous and should read:

> slekta=> update name_parts set name_part_type=6 where
> name_part_type=4;

If somebody recognizes my data structure and can recommend some reading 
on working with this kind of data, I'd be much obliged.

regards,
-- 
Leif Biberg Kristensen
http://solumslekt.org/



[SQL] update with subselect (long)

2005-03-17 Thread Leif B. Kristensen
Hello all,
I'm working with a genealogy database where I try to implement a 
somewhat unconventional model for names. The purpose is to allow 
different naming styles, especially the old Norwegian naming style with 
Given name/Patronym/Toponym instead of the Given/Patronym style that 
appears as God-given by current genealogy software. To that end, I have 
designed some tables in this way:

CREATE TABLE name_part_types ( -- a key/label pair
name_part_type_id INTEGER PRIMARY KEY,
name_part_type VARCHAR(50)
);


CREATE TABLE names ( -- one person can have multiple names
name_id INTEGER PRIMARY KEY,
person_id INTEGER REFERENCES persons,
is_primary BOOLEAN NOT NULL -- but one must be primary
);


CREATE TABLE name_parts ( -- a name has multiple name-parts
name_part_id INTEGER PRIMARY KEY,
name_id INTEGER REFERENCES names,
name_part_type INTEGER REFERENCES name_part_types,
name_sequence INTEGER NOT NULL, -- internal sort order of name part
name_part VARCHAR(100)
);

My name_part_types table presently looks like this:

slekta=> select * from name_part_types;
 name_part_type_id | name_part_type
---+
 1 | prefix
 2 | given
 3 | surname
 4 | suffix
 5 | patronym
 6 | toponym
(6 rows)

My current genealogy program, The Master Genealogist (TMG), stores names 
in the conventional pigeon-hole way, within the fields Prefix / Given / 
Surname / Suffix. This form is quite awkward regarding old Norwegian 
naming practice, and I have been using the Surname field mainly for 
recording patronyms, and the Suffix field for toponyms (ie. "farm 
names"). I've written a FoxPro to SQL conversion script (using Perl and 
the XBase module) to dump the data from the TMG database. A typical 
name_parts set may look like this:

slekta=> select * from name_parts where name_id = 1652;
 name_part_id | name_id | name_part_type | name_sequence |name_part
--+-++---+--
 3643 |1652 |  2 | 0 | Christen
 3644 |1652 |  5 | 1 | Jonsen
 3645 |1652 |  6 | 2 | Stavdal
(3 rows)

Now I'm starting to approach my point. The values (2,3,4) in the 
name_part_type column should be changed to (2,5,6). As the Suffix field 
in the overwhelming majority of instances is used only if the name is 
on the Given / Patronym / Toponym form, I figure that it should be easy 
to change the name_part_type here. Initially, I ran this update:

slekta=> update name_parts set name_part_type=6 where name_part_type=3;

So far, so good. But how do I change the name_part_type from 3 to 5 for 
the names with the same name_id that were altered by the previous 
command? This is my latest try:

slekta=> begin work;
BEGIN
slekta=> update name_parts set name_part_type=5
slekta-> from (select name_id where name_part_type=6) as gpt_type
slekta-> where name_id=gpt_type and name_part_type=3;
ERROR:  subquery in FROM may not refer to other relations of same query 
level

Ideas, anyone?
-- 
Leif Biberg Kristensen
http://solumslekt.org/



Re: [SQL] Query performance problem

2005-03-17 Thread Kenneth Gonsalves
On Thursday 17 Mar 2005 1:50 pm, Phil Daintree wrote:

> CREATE TABLE chartdetails (
> accountcode integer DEFAULT 0 NOT NULL,
> period integer DEFAULT 0 NOT NULL,
> budget double precision DEFAULT (0)::double precision NOT NULL,
> actual double precision DEFAULT (0)::double precision NOT NULL,
> bfwd double precision DEFAULT (0)::double precision NOT NULL,
> bfwdbudget double precision DEFAULT (0)::double precision NOT
> NULL );

although maybe not relevant to your question, as i have noticed this 
before with mysql 'sql', what is the point of having a NOT NULL field 
that defaults to 0? the whole idea of a NOT NULL field is to have the 
value filled in compulsorily and having a default of 0 or '' defeats 
the purpose
-- 
regards
kg

http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net



Re: [SQL] query

2005-03-17 Thread Richard Huxton
Terry Fielder wrote:
> now() returns a timestamp.
> Cast it to a date and then you can subtract days. e.g.
> select now()::date -1

Or CURRENT_DATE - 1

> > In oracle we write sysdate-1
> > For example, we write a query (select * from table1 where
> > created_date>=sysdate-1). What's its equivalent in postgres?
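
So a minimal sketch of the equivalent query, reusing the table and column names from the original post:

  SELECT * FROM table1 WHERE created_date >= CURRENT_DATE - 1;

Note that CURRENT_DATE - 1 means "since midnight yesterday"; if created_date is a timestamp and the Oracle query meant "the last 24 hours", now() - interval '1 day' is the closer match.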
--
  Richard Huxton
  Archonet Ltd


Re: [SQL] query

2005-03-17 Thread Terry Fielder
now() returns a timestamp.

Cast it to a date and then you can subtract days. e.g.

select now()::date -1

Terry

Chandan_Kumaraiah wrote:
> Hi,
>
> In oracle we write sysdate-1
>
> For example, we write a query (select * from table1 where
> created_date>=sysdate-1). What's its equivalent in postgres?
>
> Chandan


-- 
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085




Re: [SQL] Query performance problem

2005-03-17 Thread PFC

Can anyone tell me why the following code chokes ... literally - this
works almost invisibly under mysql - pg takes more than an hour even on a
very small 30 record database.
- You should really use 8.0
- How much time does it take without the INSERTs/UPDATEs?
- Please post EXPLAIN ANALYZE of all the queries
- You could do all that with only two queries
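
A rough, untested sketch of that idea, reusing the table names quoted below and a hypothetical period range 1..12 in place of $CreateFrom..$CreateTo (the carry-forward part still needs one pass per period, but the per-account round trips disappear):

  -- create all missing chartdetails rows in one statement
  INSERT INTO chartdetails (accountcode, period)
  SELECT cm.accountcode, p.periodno
    FROM chartmaster cm CROSS JOIN periods p
   WHERE p.periodno BETWEEN 1 AND 12
     AND NOT EXISTS (SELECT 1 FROM chartdetails cd
                      WHERE cd.accountcode = cm.accountcode
                        AND cd.period = p.periodno);

  -- for each $PeriodNo, carry balances forward with one set-based UPDATE
  UPDATE chartdetails
     SET bfwd       = prev.actual + prev.bfwd,
         bfwdbudget = prev.budget + prev.bfwdbudget
    FROM chartdetails prev
   WHERE prev.accountcode    = chartdetails.accountcode
     AND prev.period         = 2 - 1     -- ($PeriodNo - 1), shown for $PeriodNo = 2
     AND chartdetails.period >= 2;       -- $PeriodNo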

(The table chartmaster is just a list of general ledger accounts
accountcode and accountdescription. PK = accountcode)

$ChartAccounts = DB_query('SELECT accountcode FROM chartmaster',$db);

While ($AccountRow = DB_fetch_array($ChartAccounts)){

	for ($PeriodNo=$CreateFrom;$PeriodNo <= $CreateTo;$PeriodNo++) {

		echo '' . _('Period Number') . ' ' . $PeriodNo . '';

		// Check if there is an chart details record set up
		$sql = 'SELECT count(*) FROM chartdetails
			WHERE accountcode=' . $AccountRow['accountcode'] . '
			AND period=' . $PeriodNo;
		$InsChartDetails = DB_query($sql,$db,'','','',false);
		$CountRows = DB_fetch_row($InsChartDetails);
		$AccountExistsAlready = $CountRows[0];
		DB_free_result($InsChartDetails);

		if (! $AccountExistsAlready) {
			$sql = 'INSERT INTO chartdetails (accountcode, period)
				VALUES (' . $AccountRow['accountcode'] . ', ' . $PeriodNo . ')';
			$InsChartDetails = DB_query($sql,$db);
			DB_free_result($InsChartDetails);
		}
	}
	/*Now run through each of the new chartdetail records created for each
	account and update them with the B/Fwd and B/Fwd budget no updates would be
	required where there were previously no chart details set up ie
	FirstPeriodPostedTo > 0 */

	for ($PeriodNo=$CreateFrom;$PeriodNo<=$CreateTo; $PeriodNo++) {

		$sql = 'SELECT accountcode,
				period,
				actual + bfwd AS cfwd,
				budget + bfwdbudget AS cfwdbudget
			FROM chartdetails WHERE period =' . ($PeriodNo - 1);
		$ChartDetailsCFwd = DB_query($sql,$db);

		while ($myrow = DB_fetch_array($ChartDetailsCFwd)){
			$sql = 'UPDATE chartdetails SET bfwd =' . $myrow['cfwd'] . ',
					bfwdbudget =' . $myrow['cfwdbudget'] . '
				WHERE accountcode = ' . $myrow['accountcode'] . '
				AND period >=' . $PeriodNo;
			$UpdChartDetails = DB_query($sql,$db, '', '', '', false);
			DB_free_result($UpdChartDetails);
		}
		DB_free_result($ChartDetailsCFwd);
	}
}
function DB_query ($SQL,
		&$Conn,
		$ErrorMessage='',
		$DebugMessage= '',
		$Transaction=false,
		$TrapErrors=true){

	global $debug;

	$result = pg_query($Conn, $SQL);

	if ($DebugMessage == '') {
		$DebugMessage = _('The SQL that failed was:');
	}

	//if (DB_error_no($Conn) != 0){
	if ( !$result AND $TrapErrors){
		prnMsg($ErrorMessage.'' . DB_error_msg($Conn),'error', _('DB ERROR:'));
		if ($debug==1){
			echo '' . $DebugMessage. "$SQL";
		}
		if ($Transaction){
			$SQL = 'rollback';
			$Result = DB_query($SQL,$Conn);
			if (DB_error_no($Conn) !=0){
				prnMsg(''. _('Error Rolling Back Transaction!!'), '', _('DB DEBUG:') );
			}
		}
		if ($TrapErrors){
			include('includes/footer.inc');
			exit;
		}
	}
	return $result;
}
I am hoping that someone will be able to see an alternative simpler
method or suggest a method of indexing the pg tables to optimise the
required queries. I would appreciate any help here, men.

Many thanks in advance 
--
Phil Daintree
webERP Project Admin
---



[SQL] query

2005-03-17 Thread Chandan_Kumaraiah
Hi,

In oracle we write sysdate-1

For example, we write a query (select * from table1 where
created_date>=sysdate-1). What's its equivalent in postgres?

Chandan


Re: [SQL] How to force subquery scan?

2005-03-17 Thread Tambet Matiisen


> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, March 16, 2005 5:42 PM
> To: Tambet Matiisen
> Cc: [EMAIL PROTECTED]; pgsql-sql@postgresql.org
> Subject: Re: [SQL] How to force subquery scan? 
> 
> 
> "Tambet Matiisen" <[EMAIL PROTECTED]> writes:
> > It seems that subquery scan is only used, when the query can not be 
> > translated into single flat query. Which is mostly good, I assume.
> 
> The planner thinks so anyway ;-)
> 
> If you're desperate you can put in an optimization fence, for 
> instance LIMIT or OFFSET.
> 
>   SELECT * FROM (SELECT ... OFFSET 0) ss;
> 
> In principle the planner could figure out that this offset is 
> a no-op, throw it away, and then flatten the query.  But it 
> doesn't at the moment, and I doubt we'll teach it to do so in 
> the future.
> 
>   regards, tom lane
> 

Thanks, that did the trick. It was a bit more strict than expected,
because the WHERE of the outer query is not pushed down into the subquery.
But considering the semantics of OFFSET that seems reasonable, and I can
work around it for now.
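
For example, a minimal sketch of the effect being described (hypothetical table t with an index on id):

  SELECT * FROM (SELECT * FROM t OFFSET 0) ss WHERE ss.id = 42;

The OFFSET 0 fence means the subquery is evaluated as-is and ss.id = 42 is applied on top of it, rather than being pushed down to use the index on t.id.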

  Tambet



[SQL] Query performance problem

2005-03-17 Thread Phil Daintree


Dear psqlers,

I need your help!

I administer/develop an open source PHP accounting software project (webERP)
that was originally mysql only. Since Christmas I and another member of the
team lower cased all the sql and changed some elements of the SQL to allow it
to use postgres as well. All appears to work beautifully with just a single
but important exception.

I am using PostgreSQL 7.4.6 running on :5432 on Mandrake 10.1

We wish to avoid lock-in to a single database and have a single code base. We
elected to commonise the sql to the ANSI standard so far as possible in
preference to using stored procs. Whilst this will mean some compromises to
the performance, the sql is quite simple throughout as this is a design goal
and I like having the sql inside the code for readability purposes, i.e. it's all
in one place. (I know I am wrong from a computer scientist's perspective!)

There are 2 tables used in the sql we need to optimise.

CREATE TABLE chartdetails (
accountcode integer DEFAULT 0 NOT NULL,
period integer DEFAULT 0 NOT NULL,
budget double precision DEFAULT (0)::double precision NOT NULL,
actual double precision DEFAULT (0)::double precision NOT NULL,
bfwd double precision DEFAULT (0)::double precision NOT NULL,
bfwdbudget double precision DEFAULT (0)::double precision NOT NULL
);


CREATE INDEX idxperiod ON chartdetails USING btree (period);
ALTER TABLE ONLY chartdetails
ADD CONSTRAINT chartdetails_pkey PRIMARY KEY (accountcode, period);
ALTER TABLE ONLY chartdetails
ADD CONSTRAINT cnt001251 FOREIGN KEY (accountcode) REFERENCES
chartmaster(accountcode);
ALTER TABLE ONLY chartdetails
ADD CONSTRAINT cnt001252 FOREIGN KEY (period) REFERENCES
periods(periodno);



AND the second table:


CREATE TABLE gltrans (
counterindex serial NOT NULL,
"type" integer DEFAULT 0 NOT NULL,
typeno bigint DEFAULT (1)::bigint NOT NULL,
chequeno integer DEFAULT 0 NOT NULL,
trandate date,
periodno integer DEFAULT 0 NOT NULL,
account integer DEFAULT 0 NOT NULL,
narrative text DEFAULT ''::text NOT NULL,
amount double precision DEFAULT (0)::double precision NOT NULL,
posted integer DEFAULT 0 NOT NULL,
jobref text DEFAULT ''::text NOT NULL
);


CREATE INDEX idxaccount ON gltrans USING btree (account);
CREATE INDEX idxchequeno ON gltrans USING btree (chequeno);
CREATE INDEX idxgtperiodno ON gltrans USING btree (periodno);
CREATE INDEX idxposted ON gltrans USING btree (posted);
CREATE INDEX idxgttrandate ON gltrans USING btree (trandate);
CREATE INDEX idxgttypeno ON gltrans USING btree (typeno);
CREATE INDEX idxtype_and_number ON gltrans USING btree ("type", typeno);
CREATE INDEX idxgtjobref ON gltrans USING btree (jobref);
ALTER TABLE ONLY gltrans
ADD CONSTRAINT gltrans_pkey PRIMARY KEY (counterindex);
ALTER TABLE ONLY gltrans
ADD CONSTRAINT cnt001296 FOREIGN KEY (account) REFERENCES
chartmaster(accountcode);
ALTER TABLE ONLY gltrans
ADD CONSTRAINT cnt001297 FOREIGN KEY ("type") REFERENCES
 systypes(typeid); ALTER TABLE ONLY gltrans
ADD CONSTRAINT cnt001298 FOREIGN KEY (periodno) REFERENCES
periods(periodno);


So there is a chartdetail record for every period for every general ledger
account.  So if there are 5 years x 12 periods (months) and 200 general
ledger accounts this table will be 12,000 records.

There is a gltrans record for every side of a journal entry. This can get to
be quite a significant table - easily more than 200,000 per annum - depending
on the size of the business obviously.


Can anyone tell me why the following code chokes ... literally - this
works almost invisibly under mysql - pg takes more than an hour even on a very
small 30 record database.

(The table chartmaster is just a list of general ledger accounts accountcode
and accountdescription. PK = accountcode)


$ChartAccounts = DB_query('SELECT accountcode FROM chartmaster',$db);

While ($AccountRow = DB_fetch_array($ChartAccounts)){

for ($PeriodNo=$CreateFrom;$PeriodNo <= $CreateTo;$PeriodNo++) {

echo '' . _('Period Number') . ' ' . $PeriodNo . 
'';

// Check if there is an chart details record set up
$sql = 'SELECT count(*) FROM chartdetails
WHERE 
accountcode='.$AccountRow['accountcode'].'
AND period=' . $PeriodNo;
$InsChartDetails = DB_query($sql,$db,'','','',false);
$CountRows = DB_fetch_row($InsChartDetails);
$AccountExistsAlready = $CountRows[0];
DB_free_result($InsChartDetails);
if(! $AccountExistsAlready) {
$sql = 'INSERT INTO chartdetails (accountcode,
period)
VALUES (' . $AccountRow['accountcode'] 
. ',