On Mon, 10 Nov 2003, Marc G. Fournier wrote:
>
> explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic
>   FROM company c, traffic_logs ts
>  WHERE c.company_id = ts.company_id
>    AND month_trunc(ts.runtime) = '2003-10-01'
>  GROUP BY company_name, ts.company_id
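For reference, the month_trunc() function and the tl_month index it depends on
are never shown in the thread; a minimal sketch of what they presumably look
like (the exact definitions are an assumption) is:

    -- assumed definition: an IMMUTABLE wrapper around date_trunc()
    CREATE FUNCTION month_trunc(timestamp without time zone)
        RETURNS timestamp without time zone
        AS 'SELECT date_trunc(''month'', $1)'
        LANGUAGE sql IMMUTABLE;

    -- functional index referenced by the query plans quoted below
    CREATE INDEX tl_month ON traffic_logs (month_trunc(runtime));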
On Mon, 10 Nov 2003, Josh Berkus wrote:
> Marc,
>
> I'd say your machine is very low on available RAM, particularly sort_mem.
> The steps which are taking a long time are:
Here's the server:
last pid: 42651;  load averages: 1.52, 0.96, 0.88    up 28+07:43:33  20:35:44
307 processes: 2 running,
On Mon, 10 Nov 2003, Tom Lane wrote:
> Neil Conway <[EMAIL PROTECTED]> writes:
> > Interesting that we get the row count estimate for this index scan so
> > wrong -- I believe this is the root of the problem. Hmmm... I would
> guess that the optimizer stats we have for estimating the selectivity
> of a functional index is pretty primitive,
On Mon, 10 Nov 2003, Neil Conway wrote:
> "Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> > -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213
> > width=16) (actual time=0.29..5562.25 rows=462198 loops=1)
> > Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without
> > time zone)
Neil Conway <[EMAIL PROTECTED]> writes:
> Interesting that we get the row count estimate for this index scan so
> wrong -- I believe this is the root of the problem. Hmmm... I would
> guess that the optimizer stats we have for estimating the selectivity
> of a functional index is pretty primitive,
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213
> width=16) (actual time=0.29..5562.25 rows=462198 loops=1)
> Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without
> time zone)
Interesting that we get the row count estimate for this index scan so
wrong -- I believe this is the root of the problem.
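One way to sidestep the poor selectivity estimate for the functional index --
offered here only as an illustrative sketch, not something proposed in the
thread -- is to filter on a plain range over runtime, for which the planner
has ordinary per-column statistics:

    -- hypothetical alternative: ordinary index plus a range predicate
    CREATE INDEX tl_runtime ON traffic_logs (runtime);

    EXPLAIN ANALYZE
    SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic
      FROM company c, traffic_logs ts
     WHERE c.company_id = ts.company_id
       AND ts.runtime >= '2003-10-01'
       AND ts.runtime <  '2003-11-01'
     GROUP BY company_name, ts.company_id;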
"Patrick Hatcher" <[EMAIL PROTECTED]> writes:
> Do you have an index on ts.bytes? Josh had suggested this and after I put
> it on my summed fields, I saw a speed increase.
What's the reasoning behind this? ISTM that sum() should never use an
index, nor would it benefit from using one.
-Neil
--
Marc,
I'd say your machine is very low on available RAM, particularly sort_mem.
The steps which are taking a long time are:
> Aggregate (cost=32000.94..32083.07 rows=821 width=41) (actual
>   time=32983.36..47586.17 rows=144 loops=1)
>   -> Group (cost=32000.94..32062.54 rows=8213 width=41)
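If sort_mem really is the bottleneck, it can be raised for a single session
before re-running the query. A minimal sketch, assuming a 7.x server where the
setting is still called sort_mem (renamed work_mem in 8.0) and is measured in
kilobytes:

    SET sort_mem = 16384;  -- 16 MB for this session only; then re-run the EXPLAIN ANALYZE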
Do you have an index on ts.bytes? Josh had suggested this and after I put
it on my summed fields, I saw a speed increase. I can't remember where the
article was that Josh had written about index usage, but maybe he'll chime
in and supply the URL for his article.
hth
Patrick Hatcher
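For concreteness, the index Patrick is describing would be something like the
following (the index name is made up here); note that Neil questions elsewhere
in the thread whether SUM() can actually benefit from it, since the aggregate
still has to read every matching row:

    -- illustration only; the name tl_bytes is hypothetical
    CREATE INDEX tl_bytes ON traffic_logs (bytes);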
here's the URL:
http://techdocs.postgresql.org/techdocs/pgsqladventuresep2.php
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Patrick
Table structure is simple:
CREATE TABLE traffic_logs (
company_id bigint,
ip_id bigint,
port integer,
bytes bigint,
runtime timestamp without time zone
);
runtime is 'day of month' ...
I need to summarize the month, per company, with a query as:
explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic
  FROM company c, traffic_logs ts
 WHERE c.company_id = ts.company_id
   AND month_trunc(ts.runtime) = '2003-10-01'
 GROUP BY company_name, ts.company_id
Thanks a lot. IT WORKED! with your suggestions.
Regards,
Radha
> On Sun, 2003-11-09 at 15:06, [EMAIL PROTECTED] wrote:
>> I have a c program called test1.pgc with some sql statements embedded
>> in it. The program was preprocessed, compiled and linked. Now, I have
>> the executable test1.
>>
>> W