[PERFORM] Trigger vs Rule

2006-04-02 Thread Ключников А.С.
Hi all.

There are two tables:

create table device_types (
id int,
name varchar
);
about 1000 rows

create table devices (
id int,
type int REFERENCES device_types(id),
name varchar,
data float
);
about 20 rows

And about 1000 functions:
create function device_type1(int) returns ..
create function device_type2(int) returns ..
...
create function device_type1000(int) returns ..


What is faster?

One trigger with 1000 ELSE IF branches (in PL/pgSQL proper that is
ELSIF, with PERFORM for a bare function call):

if old.type = 1 then
    perform device_type1(old.id);
elsif old.type = 2 then
    perform device_type2(old.id);
...
elsif old.type = 1000 then
    perform device_type1000(old.id);
end if;

Or 1000 rules
create rule device_type1 AS ON update to devices 
where old.type=1  
DO select device_type1(old.id);
create rule device_type2 AS ON update to devices 
where old.type=2
DO select device_type2(old.id);
...
create rule device_type1000 AS ON update to devices 
where old.type=1000
DO select device_type1000(old.id);

thx.

-- 
Best regards,
Ключников А.С.



Re: [PERFORM] statistics buffer is full

2006-04-02 Thread Qingqing Zhou

""Gábriel Ákos"" <[EMAIL PROTECTED]> wrote
>
> I've got this message while heavily inserting into a database. What should 
> I tune and how? It is postgresql 8.1.3.
>
> 2006-03-29 14:16:57.513 CEST:LOG:  statistics buffer is full
>

Since your server is under heavy load, the common trick is to increase 
PGSTAT_RECVBUFFERSZ in include/pgstat.h and recompile the server.

Regards,
Qingqing 





Re: [PERFORM] Large Binary Objects Middleware

2006-04-02 Thread Qingqing Zhou

""Rodrigo Madera"" <[EMAIL PROTECTED]> wrote
>
> The database is holding large amounts of digital video, and I want
> to put these directly into the database. What performance
> guidelines would you all give, seeing my position today?
>

IMHO, if you don't need transaction semantics, don't put these big things 
into the database. Instead, add a field to your table that stores a link 
to the file.
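
A minimal sketch of that layout (the table and column names here are
invented for illustration):

create table videos (
    id serial primary key,
    title varchar not null,
    path text not null  -- e.g. a filesystem path or URL; the video itself lives outside the DB
);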

Regards,
Qingqing 





Re: [PERFORM] Logging SQL queries to optimize them ?

2006-04-02 Thread Qingqing Zhou

"Bruno Baguette" <[EMAIL PROTECTED]> wrote
>
>
> Is there a way to log all SQL queries, with the date/time when they were
> launched, and the cost of that query (if this is possible), in order to see
> which queries need to be optimized?
>

See if the log_statement and log_statement_stats parameters can help you. 
Also, EXPLAIN ANALYZE can tell you more about a specific target query.
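
A minimal postgresql.conf sketch along those lines (8.1-era parameter
names; the duration threshold is an arbitrary example):

log_statement = 'all'                  # log every statement
log_min_duration_statement = 100       # also log durations for statements over 100 ms
log_line_prefix = '%t '                # prefix each log line with a timestamp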

Regards,
Qingqing 





Re: [PERFORM] index not used again

2006-04-02 Thread Jan Kesten
Stephan Szabo wrote:

> Did you reset the table contents between these two (remember that
> explain analyze actually runs the query)?  The second appears to be
> changing no rows from the output.

I myself did not, but since automatic jobs run periodically I can't
tell whether one ran while I was testing (though I guess not). When I
started my tests, all rows contained a zero for all tokens, and there
should have been no ones at all.

In my case rows with the token set to one are really rare, about one in
a thousand rows. I was looking for a fast way to find these rows.
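
A partial index is one standard way to make such a rare-value lookup
fast; a minimal sketch, assuming the fak6 table and token column named
elsewhere in this thread:

create index fak6_token_one_idx on fak6 (token) where token = 1;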

I'll try again after a successful run, without resetting the token (and
without EXPLAIN ANALYZE this time).

Cheers,
Jan







Re: [PERFORM] index not used again

2006-04-02 Thread Stephan Szabo
On Sun, 2 Apr 2006, Jan Kesten wrote:

> Stephan Szabo wrote:
>
> > Did you reset the table contents between these two (remember that
> > explain analyze actually runs the query)?  The second appears to be
> > changing no rows from the output.
>
> I myself did not, but since automatic jobs run periodically I can't
> tell whether one ran while I was testing (though I guess not). When I
> started my tests, all rows contained a zero for all tokens, and there
> should have been no ones at all.

The reason I asked is that the explain analyze output for the first query
on fak6 (using a seqscan) seemed to imply 24k rows actually matched the
condition and were updated, so comparisons to the later times may be
skewed.




Re: [PERFORM] Trigger vs Rule

2006-04-02 Thread Niklas Johansson


On 2 Apr 2006, at 10:31, Ключников А.С. wrote:

> What is faster?
> One trigger with 1000 ELSE IF
> Or 1000 rules


Faster to write and easier to maintain would be a single trigger
function in PL/pgSQL that executes the right function dynamically:


CREATE OR REPLACE FUNCTION exec_device_type() RETURNS trigger AS $$
BEGIN
    -- single quotes, not double: this builds 'SELECT device_typeN(id)'
    EXECUTE 'SELECT device_type' || OLD.type || '(' || OLD.id || ')';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
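
The function still has to be attached to the table; a sketch of the
wiring (the trigger name is an assumption, adjust to taste):

CREATE TRIGGER devices_exec_device_type
    AFTER UPDATE ON devices
    FOR EACH ROW EXECUTE PROCEDURE exec_device_type();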

Best would probably be to refactor your device_typeN() functions into
one that takes N as an argument, as sketched below.
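
A hypothetical shape for that refactoring (the signature and body here
are invented for illustration):

CREATE OR REPLACE FUNCTION device_type(n int, dev_id int) RETURNS void AS $$
BEGIN
    -- shared logic goes here; branch on n only where the types really differ
    RAISE NOTICE 'processing device % of type %', dev_id, n;
END;
$$ LANGUAGE plpgsql;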



Sincerely,

Niklas Johansson







Re: [PERFORM] Query using SeqScan instead of IndexScan

2006-04-02 Thread Josh Berkus
Brendan,

> But just as a follow up question to your #1 suggestion, I have 8 GB
> of ram in my production server. You're saying to set the
> effective_cache_size then to 5 GB roughly? Somewhere around 655360?
> Currently it is set to 65535. Is that something that's OS dependent?
> I'm not sure how much memory my server sets aside for disk caching.

Yes, about.  It's really a judgement call; you're looking for the approximate 
combined RAM available for disk caching and shared mem.  However, this is 
just used as a way of estimating the probability that the data you want is 
cached in memory, so you're just trying to be order-of-magnitude accurate, 
not to-the-MB accurate.
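
As a worked example of the arithmetic: effective_cache_size is expressed
in 8 kB pages (releases before 8.2 accept no unit suffixes), so 5 GB
comes out to 5 * 1024 * 1024 / 8 = 655360, e.g. in postgresql.conf:

effective_cache_size = 655360   # roughly 5 GB, in 8 kB pages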

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] Query using SeqScan instead of IndexScan

2006-04-02 Thread Brendan Duddridge

Hi Josh,

Thanks. I've adjusted my effective_cache_size to 5 GB, so we'll see how
that goes.

I'm also doing some query and de-normalization optimizations, so we'll
see how those go too.



Brendan Duddridge | CTO | 403-277-5591 x24 |  [EMAIL PROTECTED]

ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB  T2G 0V9

http://www.clickspace.com

