I ran ANALYZE and tried the command:
explain analyze
SELECT * FROM dok
WHERE doktyyp = E'O'
  AND ('0'::float8 = 0 OR dok.tasumata0)
  AND ('0'::float8 = 0 OR NOT dok.taidetud)
  AND dok.sihtyksus LIKE 'RIISIPERE%' ESCAPE '!'
  AND kuupaev BETWEEN '2008-05-01' AND '2999-08-31'
ORDER BY dokumnr
Hi,
I'm on 8.0.10 and there is a query I cannot quite get adequately fast.
Should it take 2.5s to sort these 442 rows? Are my settings bad? Is
my query stupid?
Would appreciate any tips.
Best regards,
Marcus
apa=> explain analyze
apa-> select
apa->    ai.objectid as ai_objectid
apa-> from
Marcus Engene [EMAIL PROTECTED] writes:
Should it take 2.5s to sort these 442 rows?
Limit  (cost=54.40..54.43 rows=12 width=8) (actual time=2650.254..2651.093 rows=442 loops=1)
  ->  Sort  (cost=54.40..54.43 rows=12 width=8) (actual time=2650.251..2650.515 rows=442 loops=1)
        Sort
Hi again,
I was thinking, in my slow query it seems the sorting is the villain.
Doing a simple qsort test I notice that:
[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe 430
real    0m0.051s
user    0m0.030s
sys     0m0.000s
[EMAIL PROTECTED] /cygdrive/c/pond/dev/tt
$ time ./a.exe
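The standalone timing test above can be mirrored in a few lines. The 442-row count comes from the EXPLAIN ANALYZE output; the string data here is invented for illustration:

```python
import random
import string
import time

# Mirror the standalone qsort test: sort 442 strings and time it.
# 442 is the row count from the EXPLAIN ANALYZE output; the string
# contents are made up.
rows = ["".join(random.choices(string.ascii_lowercase, k=20))
        for _ in range(442)]

start = time.perf_counter()
ordered = sorted(rows)
elapsed = time.perf_counter() - start

print(f"sorted {len(ordered)} rows in {elapsed:.6f}s")
```

If an in-memory sort of the same row count finishes in well under a millisecond, a 2.5 s database sort suggests the cost is in per-row comparison work (for example, locale-aware string collation) rather than in the sort algorithm itself.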
Tom Lane wrote:
Marcus Engene [EMAIL PROTECTED] writes:
Should it take 2.5s to sort these 442 rows?
Limit  (cost=54.40..54.43 rows=12 width=8) (actual time=2650.254..2651.093 rows=442 loops=1)
  ->  Sort  (cost=54.40..54.43 rows=12 width=8) (actual time=2650.251..2650.515
I just finished reading 'The Art of SQL' by Stephane Faroult who has a
chapter (ch 6) discussing this very topic. I strongly recommend any
developer dealing with databases take a few days to read this
narrative.
A solution would seem to depend on whether you have many objects which
change in
Just for fun, another approach since I believe pg supports it:
select whateverFields
from object_val as ov
where (ov.object_id, ov.object_val_type_id, ov.observation_date) IN
      (select iv.object_id, iv.object_val_type_id, max(iv.observation_date)
       from object_val as iv
       where
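As a sketch of how the row-value IN approach behaves end to end, here it is against Python's bundled SQLite (which supports row-value comparisons since SQLite 3.15). The table layout and sample data are invented, and the aliases ov/iv stand in for the reserved words outer/inner:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE object_val (
        object_id          INTEGER,
        object_val_type_id INTEGER,
        observation_date   TEXT,
        val                REAL
    )
""")
conn.executemany(
    "INSERT INTO object_val VALUES (?, ?, ?, ?)",
    [
        (1, 10, "2006-08-01", 1.0),
        (1, 10, "2006-08-20", 2.0),   # latest for (1, 10)
        (2, 10, "2006-08-05", 3.0),   # latest for (2, 10)
    ],
)

# Row-value IN: keep only rows whose (id, type, date) triple matches
# the per-group maximum observation date.
rows = conn.execute("""
    SELECT ov.object_id, ov.object_val_type_id, ov.observation_date, ov.val
    FROM object_val AS ov
    WHERE (ov.object_id, ov.object_val_type_id, ov.observation_date) IN
          (SELECT iv.object_id, iv.object_val_type_id,
                  MAX(iv.observation_date)
           FROM object_val AS iv
           GROUP BY iv.object_id, iv.object_val_type_id)
    ORDER BY ov.object_id
""").fetchall()
print(rows)
# -> [(1, 10, '2006-08-20', 2.0), (2, 10, '2006-08-05', 3.0)]
```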
macgillivary wrote:
I just finished reading 'The Art of SQL' by Stephane Faroult who has a
chapter (ch 6) discussing this very topic.
I'd be curious to know of any other references or books folks would
recommend when it comes to writing efficient SQL, as well as references
on database design.
I'll definitely check out that book; I've been looking for something like
that.
Thanks
Jaime
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of macgillivary
Sent: Monday, August 28, 2006 10:14 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] speeding up
Silvela, Jaime (Exchange) wrote:
The obvious way to get the latest measurement of type A would be to
join the table against
SELECT object_id, object_val_type_id, max(observation_date)
FROM object_val
GROUP BY object_id, object_val_type_id
I'm not sure this is actually the result you want;
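The GROUP BY above returns only the grouping key and the max date; to recover the full measurement row, it is joined back against the table. A runnable sketch with invented sample data (SQLite via Python here, but the SQL is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE object_val (
        object_id          INTEGER,
        object_val_type_id INTEGER,
        observation_date   TEXT,
        val                REAL
    )
""")
conn.executemany(
    "INSERT INTO object_val VALUES (?, ?, ?, ?)",
    [
        (1, 10, "2006-08-01", 1.0),
        (1, 10, "2006-08-20", 2.0),   # latest for (1, 10)
        (1, 11, "2006-08-10", 5.0),   # latest for (1, 11)
    ],
)

# Join the per-group max dates back to the table to recover full rows.
rows = conn.execute("""
    SELECT ov.object_id, ov.object_val_type_id, ov.observation_date, ov.val
    FROM object_val AS ov
    JOIN (SELECT object_id, object_val_type_id,
                 MAX(observation_date) AS max_date
          FROM object_val
          GROUP BY object_id, object_val_type_id) AS latest
      ON  ov.object_id          = latest.object_id
      AND ov.object_val_type_id = latest.object_val_type_id
      AND ov.observation_date   = latest.max_date
    ORDER BY ov.object_val_type_id
""").fetchall()
print(rows)
# -> [(1, 10, '2006-08-20', 2.0), (1, 11, '2006-08-10', 5.0)]
```

In PostgreSQL specifically, SELECT DISTINCT ON (object_id, object_val_type_id) ... ORDER BY object_id, object_val_type_id, observation_date DESC gets the same result in a single pass.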
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alban Hertroys
Sent: Monday, August 28, 2006 4:57 AM
To: Silvela, Jaime (Exchange)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] speeding up big query lookup
Silvela, Jaime (Exchange) wrote:
The obvious way to get the latest measurement
Silvela, Jaime (Exchange) [EMAIL PROTECTED] writes:
I have a very big table that catalogs measurements of some objects over
time. Measurements can be of several (~10) types. It keeps the
observation date in a field, and indicates the type of measurement in
another field.
I often need to
On Fri, 2006-08-25 at 18:34 -0400, Silvela, Jaime (Exchange) wrote:
This is a question on speeding up some type of queries.
I have a very big table that catalogs measurements of some objects over
time. Measurements can be of several (~10) types. It keeps the
observation date in a field, and
This is a question on speeding up some type of queries.
I have a very big table that catalogs measurements of some objects over
time. Measurements can be of several (~10) types. It keeps the
observation date in a field, and indicates the type of measurement in
another field.
I often need to get
On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
[EMAIL PROTECTED] wrote:
and because the number of possible search terms is so large, it
would be nice if the entire index could somehow be preloaded into memory
and encouraged to stay there.
Postgres does not have such a feature and I wouldn't
On 8/17/05, Manfred Koizar [EMAIL PROTECTED] wrote:
On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
[EMAIL PROTECTED] wrote:
and because the number of possible search terms is so large, it
would be nice if the entire index could somehow be preloaded into memory
and encouraged to stay there.
Mike Rylander wrote:
On 8/17/05, Manfred Koizar [EMAIL PROTECTED] wrote:
On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
[EMAIL PROTECTED] wrote:
and because the number of possible search terms is so large, it
would be nice if the entire index could somehow be preloaded into memory
I'm trying to speed up a query on a text column of a 14M-row table.
Uncached query times vary between 1-20 seconds (maybe more), depending
on the search
term. In between time trials I've been trying to flush the disk buffer
cache by selecting count(*) from a separate 4GB table, and times are