On Tue, Nov 27, 2007 at 07:25:54PM -0500, Tom Lane wrote:
>> You could make an index on (question_id,status) (or a partial index on
>> question id, with status=1 as the filter), but I'm not sure how much it would
>> help you unless the questions table is extremely big. It doesn't appear to
>> be; in fact, it
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> You could make an index on (question_id,status) (or a partial index on
> question id, with status=1 as the filter), but I'm not sure how much it would
> help you unless the questions table is extremely big. It doesn't appear to
> be; in fact, it
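The two index shapes suggested above could look like this (table and column names are guesses from the thread; adjust to the real schema):

```sql
-- Composite index: useful when queries filter on question_id and status together.
CREATE INDEX questions_qid_status_idx ON questions (question_id, status);

-- Partial index: smaller, covers only the status = 1 rows the query cares about.
CREATE INDEX questions_qid_status1_idx ON questions (question_id)
    WHERE status = 1;
```

The partial index is only considered by the planner when the query's WHERE clause implies status = 1.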
> -Original Message-
> From: cluster
>
> If I disable the nested loops, the query becomes *much* slower.
>
> A thing that strikes me is the following. As you can see I have the
> constraint: q.status = 1. Only a small subset of the data set
> has this status. I have an index on q.status
On Tue, Nov 27, 2007 at 11:51:40PM +0100, cluster wrote:
> A thing that strikes me is the following. As you can see I have the
> constraint: q.status = 1. Only a small subset of the data set has this
> status. I have an index on q.status but for some reason this is not used.
> Instead the constr
> As for optimizing the query, I noticed that all three joins are done by
> nested loops. I wonder if another join method would be faster. Have you
> analyzed all the tables?
Yes. I did a VACUUM FULL ANALYZE before running the test queries. Also I
have just performed an ANALYZE just to be sure ever
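As noted upthread, disabling nested loops actually made the query slower; for reference, that experiment looks like the following (the query itself is a hypothetical stand-in for the real one):

```sql
ANALYZE questions;            -- make sure planner statistics are current
SET enable_nestloop = off;    -- session-local: discourage nested-loop joins
EXPLAIN ANALYZE
    SELECT q.question_id
    FROM questions q
    JOIN answers a ON a.question_id = q.question_id   -- hypothetical join
    WHERE q.status = 1;
RESET enable_nestloop;        -- restore the default planner behaviour
```

Comparing the EXPLAIN ANALYZE output with and without the setting shows whether the planner's preference for nested loops is justified.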
Simon Riggs wrote:
All of those responses have cooked up quite a few topics into one. Large
databases might mean text warehouses, XML message stores, relational
archives and fact-based business data warehouses.
The main thing is that TB-sized databases are performance critical. So
it all depends
On Tue, 2007-11-27 at 14:18 -0600, Peter Koczan wrote:
> Thanks all. This is just what I needed.
All of those responses have cooked up quite a few topics into one. Large
databases might mean text warehouses, XML message stores, relational
archives and fact-based business data warehouses.
The main thing is that TB-sized databases are performance critical. So
it all depends
Thanks all. This is just what I needed.
On Nov 26, 2007 1:16 PM, Stephen Cook <[EMAIL PROTECTED]> wrote:
> I think either would work; both PostgreSQL and MS SQL Server have
> success stories out there running VLDBs. It really depends on what you
> know and what you have. If you have a lot of exp
> -Original Message-
> From: cluster
>
> >> Probably by buying much faster disk hardware.
> > Or buy more RAM, so that the data can stay cached.
>
> So the only problem here is lack of RAM and/or disk speed?
I don't think you can reach that conclusion yet. Like everybody said, the
reason
In response to cluster <[EMAIL PROTECTED]>:
> >> Probably by buying much faster disk hardware.
> > Or buy more RAM, so that the data can stay cached.
>
> So the only problem here is lack of RAM and/or disk speed?
Not automatically, but the chances that more RAM and/or faster disks will
improve t
Probably by buying much faster disk hardware.
Or buy more RAM, so that the data can stay cached.
So the only problem here is lack of RAM and/or disk speed?
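Before concluding that hardware is the bottleneck, it is worth checking how much memory PostgreSQL is actually allowed to use. A quick sketch of the relevant settings:

```sql
SHOW shared_buffers;         -- memory PostgreSQL uses for its own page cache
SHOW effective_cache_size;   -- planner's estimate of total cache (incl. OS cache)
SHOW work_mem;               -- per-sort/hash memory; influences join strategy choice
```

Undersized values here can make a machine with plenty of RAM behave as if it were short of it.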
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http:/
On Tue, Nov 27, 2007 at 06:28:23PM +, Matthew wrote:
> SELECT * FROM table WHERE a > 1 AND b < 4;
This sounds like something an R-tree can do.
/* Steinar */
--
Homepage: http://www.sesse.net/
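A two-sided restriction on two columns is effectively a rectangle query in (a, b) space, which is exactly what R-tree-style GiST indexes handle. A hypothetical sketch using the built-in geometric types (table name `t` is assumed; note the overlap operator is inclusive at the boundaries, unlike the strict > and <):

```sql
-- Index each row as a degenerate box at the point (a, b).
CREATE INDEX t_ab_gist ON t USING gist (box(point(a, b), point(a, b)));

-- "a > 1 AND b < 4" becomes an overlap test against a large query rectangle
-- (finite sentinels stand in for +/- infinity).
SELECT * FROM t
WHERE box(point(a, b), point(a, b)) && box(point(1, -1e30), point(1e30, 4));
```

The expression in the WHERE clause must match the indexed expression for the GiST index to be used.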
Hi all.
I'm wanting to write a new GiST index system, to improve performance on
some queries I am running. I have had quite a look through the docs and
code, and I'm not convinced that it is possible to do what I want. This is
what I am wanting to index:
CREATE INDEX range_index ON table(a, b) U
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Tue, Nov 27, 2007 at 05:33:36PM +0100, cluster wrote:
>> I have a query that takes about 7000 ms on average to complete the first
>> time it runs. Subsequent runs complete in only 50 ms. That is more than
>> a factor of 100 faster! How can I make the query perform well in the first
>> run too?
On Tuesday 27 November 2007 09:33:36 cluster wrote:
> I have a query that takes about 7000 ms on average to complete the first
> time it runs. Subsequent runs complete in only 50 ms. That is more than
> a factor of 100 faster! How can I make the query perform well in the first
> run too?
>
> Query and
On Tue, Nov 27, 2007 at 05:33:36PM +0100, cluster wrote:
> I have a query that takes about 7000 ms on average to complete the first
> time it runs. Subsequent runs complete in only 50 ms. That is more than
> a factor of 100 faster! How can I make the query perform well in the first
> run too?
Probably by buying much faster disk hardware.
I have a query that takes about 7000 ms on average to complete the first
time it runs. Subsequent runs complete in only 50 ms. That is more than
a factor of 100 faster! How can I make the query perform well in the first
run too?
Query and output from both first and second run of Explain Analyze i
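The factor-of-100 difference is almost certainly cold versus warm cache: the first run reads pages from disk, and later runs find them in shared_buffers or the OS cache. On PostgreSQL 9.0 and later this can be confirmed directly (a sketch; the query is a placeholder):

```sql
-- BUFFERS reports how many pages were read from disk vs. found in cache.
EXPLAIN (ANALYZE, BUFFERS)
    SELECT q.question_id FROM questions q WHERE q.status = 1;
-- First run: many "read" pages.  Repeat the statement: mostly "shared hit".
```

If the "read" counts dominate the first run and vanish on the second, the fix is more cache (RAM) or faster disks, not a different plan.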
Gianluca Alberici wrote:
I have run into an interesting problem with 8.1, and I would like someone
to explain to me whether there's a problem with the planner or a problem
with my own query. In either case, a solution is welcome. The following query:
SELECT sum(qty) FROM
_abi_main_pof_r ampr
inne