I have a question about the following. The table has an index on (clicked at
time zone 'PST'). I am using postgres 8.1.3
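For context, an index on an expression like that is declared with an extra pair of parentheses, and the planner will only consider it when a query repeats the same expression. A minimal sketch, assuming a hypothetical table named page_hits; only the "clicked" column comes from the message:

-- Sketch only: page_hits is an assumed table name.
create index page_hits_clicked_pst_idx
    on page_hits ((clicked at time zone 'PST'));

-- The index is usable only when the query uses the identical expression:
select count(*)
from page_hits
where clicked at time zone 'PST' >= '2006-12-01'
  and clicked at time zone 'PST' <  '2006-12-02';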
Actually, I think I answered my own question already. But I want to confirm -
Is the GROUP BY faster because it doesn't have to sort results, whereas
DISTINCT must produce
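As a small illustration of the comparison being made (table and column names below are placeholders): on 8.1, SELECT DISTINCT is always implemented as a sort followed by duplicate removal, while GROUP BY with no aggregates can also be planned as a HashAggregate, which avoids the sort entirely.

-- Placeholder names; both statements return the same set of values.
select distinct customer_id from orders;             -- Sort + Unique on 8.1
select customer_id from orders group by customer_id; -- may plan as HashAggregate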
Hi,
I've been looking around my server's log files, and lately they indicate that I
should consider increasing checkpoint_segments because they're being recycled
too often; they also recommend increasing max_fsm_pages to over 169728...
those are the config values present in postgresql.conf
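Both settings mentioned live in postgresql.conf. The numbers below are only illustrative, not recommendations; the right values depend on the workload and on the figure the log reports for max_fsm_pages (169728 here):

# postgresql.conf -- illustrative values only
checkpoint_segments = 16     # default is 3; raise it if checkpoints come too often
max_fsm_pages = 200000       # must be at least the number the log asks for (169728)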
I'm still confused as to why the inner join version ran so much faster
than the where-clause version.
Here's the inner join query and explain output:
select min(nlogid) as start_nlogid,
max(nlogid) as end_nlogid,
min(dtCreateDate) as start_transaction_timestamp,
max(dtCreateD
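The two shapes being compared look roughly like the sketch below. Only the select list of the join form and the full where-clause form appear in the thread; the join's FROM clause, the summary table name (activity_log_summary), and its end_nlogid column are guesses filled in from the quoted fragments, so treat this as an approximation rather than the poster's exact SQL:

-- Where-clause form (quoted further down in the thread):
select min(nlogid) as start_nlogid,
       max(nlogid) as end_nlogid,
       min(dtCreateDate) as start_transaction_timestamp,
       max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts
where nlogid > (select max(a.end_nlogid) from activity_log_summary a);

-- Join form, sketched with the same assumed names:
select min(f.nlogid), max(f.nlogid),
       min(f.dtCreateDate), max(f.dtCreateDate)
from activity_log_facts f
inner join (select max(end_nlogid) as end_nlogid
            from activity_log_summary) s
        on f.nlogid > s.end_nlogid;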
Makes sense. It is NOT executing the subquery more than once, is it?
On Tue, 19 Dec 2006 20:02:35, "Richard Huxton" said:
> Jeremy Haile wrote:
> > Here's the query and explain analyze using the result of the sub-query
> > substituted:
> >
> > QUERY
> > explain analyze select min(nlogid)
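One way to confirm that the subquery runs only once is to read it off the EXPLAIN ANALYZE output itself: a subquery the planner hoists into an InitPlan is evaluated a single time, and its node will report loops=1. A sketch, reusing the assumed names from the earlier approximation:

-- If the subquery appears under "InitPlan" with loops=1,
-- its result was computed once and then reused as a constant.
explain analyze
select min(nlogid), max(nlogid)
from activity_log_facts
where nlogid > (select max(a.end_nlogid) from activity_log_summary a);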
Jeremy Haile wrote:
Here's the query and explain analyze using the result of the sub-query
substituted:
QUERY
explain analyze select min(nlogid) as start_nlogid,
max(nlogid) as end_nlogid,
min(dtCreateDate) as start_transaction_timestamp,
max(dtCreateDate) as end_transacti
Here's the query and explain analyze using the result of the sub-query
substituted:
QUERY
explain analyze select min(nlogid) as start_nlogid,
max(nlogid) as end_nlogid,
min(dtCreateDate) as start_transaction_timestamp,
max(dtCreateDate) as end_transaction_timestamp
from activ
Jeremy Haile wrote:
Here is the explain analyze output:
Well, the row estimates are about as far out as you can get:
-> Index Scan using activity_log_facts_pkey on
activity_log_facts (cost=0.00..1831613.82 rows=1539298
width=12) (actual time=0.050..0.050 rows=0
Here is the explain analyze output:
Result (cost=9.45..9.46 rows=1 width=0) (actual
time=156589.390..156589.391 rows=1 loops=1)
InitPlan
-> Result (cost=0.04..0.05 rows=1 width=0) (actual
time=0.034..0.034 rows=1 loops=1)
InitPlan
-> Limit (cost=0.00..0.04 rows
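If the estimates are that far off because the statistics are stale, the usual first step (an assumption here, not something stated in the thread) is to refresh them and, if needed, raise the statistics target for the column in the WHERE clause:

-- Refresh planner statistics for the table with the bad estimates:
analyze activity_log_facts;

-- Optionally collect finer-grained statistics on nlogid first
-- (the 8.1 default statistics target is 10):
alter table activity_log_facts alter column nlogid set statistics 100;
analyze activity_log_facts (nlogid);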
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> I can't reproduce the problem that way either (or when using a server-side
> PLpgSQL function to do similar). It looks like you have to go through an
> ODBC connection, with the looping done on the client side. Each individual
> insert to the temp tab
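For reference, "a server-side PLpgSQL function to do similar" presumably means wrapping the insert loop in a function so all the work happens inside a single backend call; a rough sketch under that assumption, with placeholder table and function names:

-- Sketch only: loops the inserts server-side instead of issuing them over ODBC.
create temp table scratch (f1 varchar);

create or replace function fill_scratch(n integer) returns void as $$
begin
    for i in 1..n loop
        insert into scratch values ('row ' || i);
    end loop;
end;
$$ language plpgsql;

select fill_scratch(100);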
Jeremy Haile wrote:
I have the following query which performs extremely slow:
select min(nlogid) as start_nlogid,
max(nlogid) as end_nlogid,
min(dtCreateDate) as start_transaction_timestamp,
max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts
where nlogid
On 12/19/06, Tom Lane <[EMAIL PROTECTED]> wrote:
I still can't reproduce this. Using 7.4 branch tip, I did
create temp table foo(f1 varchar);
create table nottemp(f1 varchar);
\timing
insert into foo select stringu1 from tenk1 limit 100;
insert into nottemp select * from foo;
truncate foo;
ins
I have the following query which performs extremely slow:
select min(nlogid) as start_nlogid,
max(nlogid) as end_nlogid,
min(dtCreateDate) as start_transaction_timestamp,
max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts
where nlogid > ( select max(a.end_n
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Issue #1:
> (I'm assuming there's a reasonable explanation for this.) If I create a
> temp table with a single varchar column (or text column), do 100 inserts to
> that table, copy to a permanent table, truncate the temp table and repeat,
> the time re
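Written out as plain SQL, the pattern being timed looks like the sketch below (the temp and permanent table names are placeholders; in the report the ~100 inserts are issued individually from the client over ODBC):

-- Setup once (placeholder names):
create temp table scratch (f1 varchar);
create table archive (f1 varchar);

-- Repeated block: ~100 individual inserts, then copy and truncate.
insert into scratch values ('row 1');
insert into scratch values ('row 2');        -- ... up to 100 rows
insert into archive select * from scratch;   -- copy into the permanent table
truncate scratch;                            -- truncate and repeat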