Fred Moyer wrote:
One thing I learned after spending about a week comparing the Athlon (2
GHz, 333 MHz front-side bus) and Xeon (2.4 GHz, 266 MHz front-side bus)
platforms was that on average the select queries I was benchmarking ran
30% faster on the Athlon (this was with data cached in memory, so
[I originally posted this using the wrong e-mail account, so a double
posting may occur if the first message gets released by the moderator later -
sorry!]
Hi-
I have a query that I'm trying to speed up. I haven't been able to come up
with any workable ideas for speeding it up, so I'm seeking
On Tue, 11 Nov 2003 21:13:19 -0500
Chris Field [EMAIL PROTECTED] wrote:
We are looking at Xeon. We are currently running it on a quad Sun V880
compiled to be 64-bit and have been getting dreadful performance. I
don't think we really have much to gain from going 64-bit.
By chance, are you
On Wed, 12 Nov 2003 08:34:50 -0500, Nick Fankhauser
[EMAIL PROTECTED] wrote:
- Index Scan using
actor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)
^^
(actual time=37.62..677.44
Hi,
I am trying PG 7.4 RC1 and RC2 and I see a superb performance
improvement compared with 7.3.
Explaining the queries, I see a change of planner behavior: in my case it
prefers Nested Loops in 7.4, as opposed to Hash or Merge Joins in 7.3.
To test, I disable Hash and Merge Joins in 7.3 and
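The switches being toggled here are presumably the standard planner method settings; a minimal sketch of that comparison (the query itself is not shown in the thread):

```sql
-- Force 7.3 to consider the plan shape 7.4 picks by disabling the
-- join methods it currently prefers, then compare EXPLAIN ANALYZE timings.
SET enable_hashjoin = false;
SET enable_mergejoin = false;
EXPLAIN ANALYZE SELECT ...;  -- the query under test
RESET enable_hashjoin;       -- restore the defaults afterwards
RESET enable_mergejoin;
```

These settings only discourage the planner from those join methods for the current session; they are a diagnostic tool, not a tuning recommendation.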
(actual time=37.62..677.44 rows=3501 loops=1)
^
Nick, can you find out why this row count estimation is so far off?
It's actually correct:
prod1=# select count(actor_id) from actor where actor_full_name_uppercase
like 'SANDERS%';
 count
-------
  3501
(1 row)
actor_full_name_uppercase on actor (cost=0.00..6.01 rows=1 width=42)
^^
(actual time=37.62..677.44 rows=3501 loops=1)
^
Nick, can you find out why this row count estimation is so far off?
As another post pointed out, you need to set CFLAGS to get optimization
under Solaris on that flavor of PostgreSQL.
Also, PostgreSQL tends to get its best performance from the free Unixes,
Linux and BSD. Those are available for Sun SPARCs, but PostgreSQL in 64-bit
mode on those boxes is still
It looks like you are running with the default statistics target (10).
Try boosting it to 100 or even more for this column (see ALTER TABLE
SET STATISTICS, then re-ANALYZE) and see if the estimate gets better.
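The command sequence being suggested here, as a sketch (the table and column names are taken from the thread; the target value 100 is the suggestion above):

```sql
-- Raise the statistics target for the column the LIKE filter uses,
-- then re-collect statistics so the planner sees a finer histogram.
ALTER TABLE actor ALTER COLUMN actor_full_name_uppercase SET STATISTICS 100;
ANALYZE actor;

-- Re-check the row estimate afterwards:
EXPLAIN ANALYZE
SELECT actor_id FROM actor
WHERE actor_full_name_uppercase LIKE 'SANDERS%';
```

A higher statistics target makes ANALYZE sample more rows and store more histogram buckets for that column, which generally improves range-based selectivity estimates like this one at a small cost in planning time.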
Here are the results and a few more clues:
prod1=# alter table actor alter column
Nick Fankhauser [EMAIL PROTECTED] writes:
This indicates to me that 1 isn't too shabby as an estimate if the whole
name is specified, but I'm not sure how this gets altered in the case of a
LIKE
For a pattern like SANDERS%, the estimate is basically a range estimate
for this condition:
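The condition referred to is the planner's rewrite of the prefix pattern into an index-friendly range; reconstructed roughly as follows (the upper bound shown is illustrative, and this rewrite applies when the column's locale allows it, e.g. the C locale):

```sql
-- LIKE 'SANDERS%' is estimated as a range over the btree-ordered column:
-- actor_full_name_uppercase >= 'SANDERS'
-- AND actor_full_name_uppercase < 'SANDERT'
-- The selectivity estimate then comes from the column's histogram for
-- that range, which is why the statistics target matters here.
SELECT actor_id FROM actor
WHERE actor_full_name_uppercase >= 'SANDERS'
  AND actor_full_name_uppercase < 'SANDERT';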
I'm moving a webmail service over to use a PostgreSQL database for
storage and wanted to get any tips for optimizing performance. The
machine will be a multiprocessor (either 2 or 4 CPU) system with a RAID
array. What layout should be used? I was thinking about using a
RAID 1+0 array to
On Wed, 2003-11-12 at 12:34, Suchandra Thapa wrote:
I'm moving a webmail service over to use a PostgreSQL database for
storage and wanted to get any tips for optimizing performance. The
machine will be a multiprocessor (either 2 or 4 CPU) system with a RAID
array. What layout should be
You might have to resort to brute force, like set enable_nestloop=false.
Just out of curiosity, what do you get if you do that?
I get a different plan, but similar execution time:
Limit (cost=323437.13..323437.13 rows=1 width=115) (actual
time=170921.89..170922.95 rows=1000 loops=1)
-
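For reference, the brute-force override suggested above is just a session-level planner setting; a sketch:

```sql
SET enable_nestloop = false;  -- planner strongly avoids nested loop joins
EXPLAIN ANALYZE SELECT ...;   -- re-run the problem query
RESET enable_nestloop;        -- restore the default afterwards
```

As the result above shows, forcing a different join method is not guaranteed to help: the alternative plan here had a similar execution time, which suggests the cost model, not the plan shape, is the thing to investigate.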
On Wed, 2003-11-12 at 12:23, Rod Taylor wrote:
On Wed, 2003-11-12 at 12:34, Suchandra Thapa wrote:
I'm moving a webmail service over to use a PostgreSQL database for
storage and wanted to get any tips for optimizing performance. The
machine will be a multiprocessor (either 2 or 4 CPU)
Hi-
I have a query that takes too long. I haven't been able to come up with any
ideas for speeding it up, so I'm seeking some input from the list.
I'm using version 7.3.2
I have three tables:
case_data (1,947,386 rows)
actor (3,385,669 rows)
actor_case_assignment (8,668,650 rows)
As the names
Suchandra Thapa [EMAIL PROTECTED] writes:
I was thinking about using a RAID 1+0 array to hold the
database, but since I can use different array types, would it be
better to use 1+0 for the WAL logs and a RAID 5 for the database?
It has been recommended on this list that getting a RAID
On Wed, 12 Nov 2003 13:27:53 -0500, Nick Fankhauser
[EMAIL PROTECTED] wrote:
You might have to resort to brute force, like set enable_nestloop=false.
- Seq Scan on
actor_case_assignment (cost=0.00..209980.49 rows=8669349 width=34) (actual
On Wed, 2003-11-12 at 16:29, Neil Conway wrote:
Suchandra Thapa [EMAIL PROTECTED] writes:
I was thinking about using a RAID 1+0 array to hold the
database, but since I can use different array types, would it be
better to use 1+0 for the WAL logs and a RAID 5 for the database?
It has
On Wed, 2003-11-12 at 22:35, Rod Taylor wrote:
You may want to consider keeping the compressed email in a separate
table from the information describing it. It would mean the descriptive
information is more likely to be in RAM, where the body probably doesn't
matter as much (you view them 1 at a
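A sketch of that split (the table and column names here are invented for illustration; they are not from the thread):

```sql
-- Narrow table: headers and flags that list views scan constantly.
CREATE TABLE message (
    message_id  serial PRIMARY KEY,
    mailbox     text NOT NULL,
    sender      text NOT NULL,
    subject     text,
    sent_at     timestamp
);

-- Wide table: the compressed body, fetched one row at a time on view.
CREATE TABLE message_body (
    message_id  integer PRIMARY KEY REFERENCES message,
    body        bytea NOT NULL   -- compressed email text
);
```

The idea is that sequential scans and index lookups over the narrow table touch far fewer pages, so the working set of "mailbox listing" queries stays cached even when bodies are large.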