There are other ways to influence the selection of a seqscan, notably
effective_cache_size and random_page_cost.
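As a sketch of what that tuning looks like (the values here are purely illustrative, and unit suffixes like '1GB' depend on your PostgreSQL version; older releases take these in pages):

```sql
-- Illustrative values only; benchmark against your own hardware and data.
SET effective_cache_size = '1GB';  -- how much of the database the planner assumes the OS caches
SET random_page_cost = 2.0;        -- lower values make index scans look cheaper than the default 4.0
```

Both can be set per session for experimentation before committing anything to postgresql.conf.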
First, you need to find out at what point a seqscan is actually
faster than an index scan. That's going to be a trial and error
search, but eventually if you're going back far en
On May 8, 2007, at 3:29 AM, PFC wrote:
Create a table which contains your list of loggers (since it's good
normalization anyway, you probably have it already) and have your
data table's logger_id REFERENCE it
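A minimal sketch of that normalization, with table and column names taken from the queries elsewhere in this thread (the `name` column is an assumption):

```sql
CREATE TABLE data_loggers (
    logger_id integer PRIMARY KEY,
    name      text
);

-- Assumed shape of the existing data table; the composite index is what
-- lets a per-logger max(date_time) be answered by an index scan.
CREATE TABLE data_values (
    data_logger_id integer REFERENCES data_loggers (logger_id),
    date_time      timestamptz
);
CREATE INDEX data_values_logger_time_idx
    ON data_values (data_logger_id, date_time);
```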
BTW, you could do that dynamically with a subselect: (SELECT DISTINCT
logger_id F
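The quoted suggestion is cut off, but the shape it describes appears to be a correlated subquery per logger, roughly like this (a sketch, not tested against the poster's schema):

```sql
-- Each correlated subquery can use an index on (data_logger_id, date_time)
-- instead of one big seq scan with GROUP BY.
SELECT l.logger_id,
       (SELECT max(v.date_time)
          FROM data_values v
         WHERE v.data_logger_id = l.logger_id) AS latest
  FROM (SELECT DISTINCT data_logger_id AS logger_id
          FROM data_values) AS l;
```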
Thanks for a good answer, I'll try to find a workaround. The number of
data_loggers will change, but not too frequently. I was actually hoping
to make a view showing the latest data for each logger, maybe I can
manage that with a stored procedure thingy...
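For the "latest data for each logger" view, one possibility (a sketch; `DISTINCT ON` is PostgreSQL-specific, and the view name is invented here) is:

```sql
-- Picks the single newest row per logger; with an index on
-- (data_logger_id, date_time DESC) this can avoid a full sort.
CREATE VIEW latest_data AS
SELECT DISTINCT ON (data_logger_id) *
  FROM data_values
 ORDER BY data_logger_id, date_time DESC;
```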
Regards, Jonas:))
On 5/7/07, Andrew Kroeger <[EMAIL PROTECTED]> wrote:
Jonas Henriksen wrote:
>>> explain analyze SELECT max(date_time) FROM data_values;
> Goes fast and returns:
In prior postgres versions, the planner could not take advantage of
indexes with max() (nor min()) calculations. A workaround to this was
(given an appropriate index) a query like:
select
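The quoted query is truncated, but the workaround commonly cited for this (rewriting max() so the planner can walk the index from the top) is usually of this form:

```sql
-- Equivalent to SELECT max(date_time) FROM data_values on older planners:
-- reads one tuple from the top of the date_time index.
SELECT date_time
  FROM data_values
 ORDER BY date_time DESC
 LIMIT 1;
```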
Well thanks, but that doesn't help me much.
I've tried setting an extra condition using date_time > (now() - '14
weeks'::interval):
explain analyze
SELECT max(date_time) FROM data_values
where date_time > (now() - '14 weeks'::interval)
GROUP BY data_logger_id;
HashAggregate (cost=23264.52..23264.55
On May 7, 2007, at 8:53 AM, Jonas Henriksen wrote:
while if I add a GROUP BY data_logger the query uses a seq scan and a
lot of time:
explain analyze SELECT max(date_time) FROM data_values GROUP BY
data_logger_id;
What do you get if you run that with SET enable_seqscan = off; ?
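For reference, that test is session-local and easy to undo (a sketch):

```sql
SET enable_seqscan = off;  -- planner strongly penalizes seq scans; for diagnosis only
EXPLAIN ANALYZE
SELECT max(date_time) FROM data_values GROUP BY data_logger_id;
SET enable_seqscan = on;   -- restore the default
```

If the index plan that appears is actually faster, that points at cost settings (random_page_cost, effective_cache_size) rather than a missing index.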
--
Jim Nasby
On Monday, May 7, 2007 at 15:53, Jonas Henriksen wrote:
> while if I add a GROUP BY data_logger the query uses a seq scan and a
> lot of time:
>
> >> explain analyze SELECT max(date_time) FROM data_values GROUP BY
> >> data_logger_id;
I don't think there is anything you can do about this.
--
Peter E
Hi,
I'm trying to figure out how to make postgres utilize my indexes on a table.
this query:
explain analyze SELECT max(date_time) FROM data_values;
Goes fast and returns:
QUERY PLAN
---