Scott Marlowe wrote:
> Douglas Alan wrote:
>> Okay -- no problem:
>>
>>set enable_seqscan = on;
>>explain analyze select * from maindb_astobject
>>where survey_id = 2
>>limit 1000;
>>
>>"Limit (cost=0.00..48.03 rows=1000 width=78) (actual
>> time=84837.835..265938.258 rows=10
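For comparison, the same query can be re-run with sequential scans discouraged; a sketch (whether the planner then picks an index scan depends on an index on survey_id existing):

set enable_seqscan = off;
explain analyze select * from maindb_astobject
where survey_id = 2
limit 1000;

If there is an index on survey_id, the index-scan plan should return the first 1000 matches without reading the bulk of the table.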
On Thu, May 28, 2009 at 6:45 PM, Douglas Alan wrote:
> Scott Marlowe wrote:
>
>> Douglas Alan wrote:
>
>>> I'm worried that turning off seqscan would distort other queries.
>>> (Remember, I don't have control of the individual queries. The
>>> user of the application can specify all sorts of queries, and
>>> there's an ORM in the middle.)
Scott Marlowe wrote:
> Douglas Alan wrote:
>> I'm worried that turning off seqscan would distort other queries.
>> (Remember, I don't have control of the individual queries. The
>> user of the application can specify all sorts of queries, and
>> there's an ORM in the middle.)
> You are aware y
On Thu, 2009-05-28 at 15:12 -0400, Douglas Alan wrote:
> The most obvious solution would be an option to tell Postgres not to
> assume that the value is evenly distributed throughout the table, and
> to take account of the fact that the data in question might very well
> be clustered at the very end of the table.
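Short of such a planner option, one way to encode that knowledge yourself is a partial index pinned to the problem value. A sketch, assuming survey_id = 2 is the clustered value; the index name is made up:

create index maindb_astobject_survey2_idx
on maindb_astobject (survey_id)
where survey_id = 2;

The planner can then satisfy the LIMIT from this small index no matter where the matching rows sit physically in the heap.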
On Thu, May 28, 2009 at 5:52 PM, Simon Riggs wrote:
>
> On Thu, 2009-05-28 at 15:03 -0400, Douglas Alan wrote:
>
> > The application in question is a kind of data warehousing thing (of
> > astronomical stars), and there's an ORM in the middle, so it's not
> > easy for us to hand-tune how individual queries are specified.
On Thu, 2009-05-28 at 15:03 -0400, Douglas Alan wrote:
> The application in question is a kind of data warehousing thing (of
> astronomical stars), and there's an ORM in the middle, so it's not
> easy for us to hand-tune how individual queries are specified.
> Instead, we have to structure the da
> Date: Thu, 28 May 2009 15:03:32 -0400
> Subject: Re: [GENERAL] What is the right way to deal with a table with rows
> that are not in a random order?
> To: pgsql-general@postgresql.org
> CC: si...@2ndquadrant.com
>
> On Thu, May 28, 2009 at 4:14 AM, Simon Riggs wrote:
>
> > Partition the table, then scan the correct partition.
On Thu, May 28, 2009 at 1:12 PM, Douglas Alan wrote:
> On Thu, May 28, 2009 at 10:24 AM, Scott Marlowe
> wrote:
>
>> OTOH, if you've got it all sussed out, then ignore the request for more
>> information.
>
> I don't *know* if I have it "all sussed out", but I *do* know why
> Postgres is doing what it is doing in this particular case.
On Thu, May 28, 2009 at 1:45 PM, Douglas Alan wrote:
> On Thu, May 28, 2009 at 10:41 AM, Scott Marlowe
> wrote:
>
>> Note that in the OPs case I'd probably try testing things like turning
>> off seqscan, or lowering random_page_cost. I'd also look at
>> clustering on the index for the field you're selecting on.
On Thu, May 28, 2009 at 10:41 AM, Scott Marlowe
wrote:
> Note that in the OPs case I'd probably try testing things like turning
> off seqscan, or lowering random_page_cost. I'd also look at
> clustering on the index for the field you're selecting on.
I'm worried that turning off seqscan would distort other queries.
On Thu, May 28, 2009 at 10:24 AM, Scott Marlowe wrote:
> OTOH, if you've got it all sussed out, then ignore the request for more
> information.
I don't *know* if I have it "all sussed out", but I *do* know why
Postgres is doing what it is doing in this particular case. It's
assuming that the rows it is looking for are sprinkled randomly
throughout the table.
On Thu, May 28, 2009 at 4:14 AM, Simon Riggs wrote:
> Partition the table, then scan the correct partition.
If I do that, will Postgres figure out the "right thing" to do if the
parent table is queried instead? Also, what are the performance
> implications then for doing queries that span all the partitions?
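In the Postgres of that era, partitioning meant table inheritance plus CHECK constraints and constraint exclusion; a minimal sketch with hypothetical names:

create table maindb_astobject_s2 (
    check (survey_id = 2)
) inherits (maindb_astobject);

set constraint_exclusion = on;

With constraint_exclusion enabled, a query against the parent that filters on survey_id = 2 should skip the other children, which speaks to the parent-table question; a query spanning all surveys still scans every child.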
Note that in the OPs case I'd probably try testing things like turning
off seqscan, or lowering random_page_cost. I'd also look at
clustering on the index for the field you're selecting on.
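Concretely, the clustering suggestion would look something like this (index name hypothetical); note that CLUSTER takes an exclusive lock and is a one-time physical reorder, so it has to be repeated as the table churns:

cluster maindb_astobject using maindb_astobject_survey_idx;
analyze maindb_astobject;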
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription
On Wed, May 27, 2009 at 8:43 PM, Douglas Alan wrote:
> On Wed, May 27, 2009 at 8:54 PM, Jeff Davis wrote:
>
>>
>> If you're putting a LIMIT on it, why does it return millions of results?
>
> It doesn't return millions of results with the LIMIT on it. It just does a
> sequential scan of the table and doesn't find any results until it gets
> to the last quarter of the table.
Douglas Alan writes:
> As I said, in my original post, Postgres's approach would be completely
> reasonable in this case,* if* the rows that it was looking for were
> sprinkled randomly throughout the table. But they're *not* in this case --
> they're all at the end.
There's been some talk of pe
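The arithmetic behind that assumption can be sketched in SQL (150 million rows is from the thread; 3 million matching rows is an invented figure):

select
    150000000::bigint * 1000 / 3000000   as rows_scanned_if_uniform,  -- 50000
    (150000000::bigint - 3000000) + 1000 as rows_scanned_if_at_end;   -- 147001000

Under the uniform assumption a seqscan looks like it stops after about 50,000 rows, which is why the planner prefers it; with the matches clustered at the end, it actually reads about 147 million rows first.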
On Wed, 2009-05-27 at 19:53 -0400, Douglas Alan wrote:
> We have a very large table (150 million rows) where the rows are not
> in a random order. Some common queries will have millions of results,
> and this of course is slow. For an interactive interface to the
> database, we want to put a limit on all queries so that queries will
> return quickly even if they match millions of rows.
On Wed, May 27, 2009 at 8:54 PM, Jeff Davis wrote:
> If you're putting a LIMIT on it, why does it return millions of results?
>
It *doesn't* return millions of results with the LIMIT on it. It just does
a sequential scan of the table and doesn't find any results until it gets to
the last quarter of the table.
On Wed, 2009-05-27 at 19:53 -0400, Douglas Alan wrote:
> We have a very large table (150 million rows) where the rows are not
> in a random order. Some common queries will have millions of results,
> and this of course is slow. For an interactive interface to the
> database, we want to put a limit on all queries so that queries will
> return quickly even if they match millions of rows.
We have a very large table (150 million rows) where the rows are not in a
random order. Some common queries will have millions of results, and this
of course is slow. For an interactive interface to the database, we want to
put a limit on all queries so that queries will return quickly even if they
match millions of rows.