On Fri, Oct 26, 2012 at 7:04 PM, Claudio Freire wrote:
> On Fri, Oct 26, 2012 at 7:01 PM, Tom Lane wrote:
>> Claudio Freire writes:
>>> Because once you've accessed that last index page, it would be rather
>>> trivial finding out how many duplicate tids are in that page and, with
>>> a small CPU cost (no disk access if you don't query other index pages)
>>> you could verify the assumption of near-uniqueness.
On Fri, Oct 26, 2012 at 7:01 PM, Tom Lane wrote:
> Claudio Freire writes:
>> Because once you've accessed that last index page, it would be rather
>> trivial finding out how many duplicate tids are in that page and, with
>> a small CPU cost (no disk access if you don't query other index pages)
>> you could verify the assumption of near-uniqueness.
Claudio Freire writes:
> Because once you've accessed that last index page, it would be rather
> trivial finding out how many duplicate tids are in that page and, with
> a small CPU cost (no disk access if you don't query other index pages)
> you could verify the assumption of near-uniqueness.
On Fri, Oct 26, 2012 at 6:08 PM, Tom Lane wrote:
>
> Interestingly, this is a case where the get_actual_variable_range patch
> (commit 40608e7f, which appeared in 9.0) makes the results worse.
> Before that, there was a (very arbitrary) lower bound on what we'd
> believe as the selectivity of a >=
Shaun Thomas writes:
> But I just noticed the lag in your response. :) It turns out, even
> though I was substituting 2012-10-24 or 2012-10-25, what I really meant
> was current_date. That does make all the difference, actually.
Ah. [ pokes at that for awhile... ] OK, this has nothing to do with
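A sketch of the effect under discussion, against a made-up table; every name and value below is illustrative, not from the thread:
-- Build a small date column and collect statistics for it.
CREATE TABLE activity (created_at date NOT NULL);
INSERT INTO activity
    SELECT current_date - (n % 365)
    FROM generate_series(1, 100000) AS n;
ANALYZE activity;
-- Compare estimated vs. actual rows: a fixed date is answered from the
-- histogram, while current_date sits right at (or past) its upper end.
EXPLAIN ANALYZE SELECT count(*) FROM activity WHERE created_at >= DATE '2012-10-24';
EXPLAIN ANALYZE SELECT count(*) FROM activity WHERE created_at >= current_date;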
Okay, so I took EXPLAIN ANALYZE off and made sure that timing is on; the
psql command \timing shows
Timing is on.
I ran the query several times:
9.1.6 runs this query in an average of 354 ms
8.2.11 runs this query in an average of 437 ms
So 9.1 IS FASTER.
Why is EXPLAIN ANALYZE showing the reverse of this?
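That pattern is usually EXPLAIN ANALYZE's own instrumentation: it reads the system clock around every plan node, and the cost of that clock read varies a lot between machines and kernels, so a server that is faster in wall-clock time can still look slower under EXPLAIN ANALYZE. A minimal comparison in psql, with a placeholder query standing in for yours:
\timing on
-- Wall-clock time as psql reports it, no instrumentation:
SELECT count(*) FROM my_table;
-- Same query with per-node clock calls added by the instrumentation:
EXPLAIN ANALYZE SELECT count(*) FROM my_table;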
Sorry,
Again, I'm really new and so don't know how I would go about getting results
from "contrib/pg_test_timing"
Is this something that can be done from psql prompt, or will I need my
developers to get involved and write me something...?
Sorry for being such a newbie:)
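For what it's worth, pg_test_timing is a command-line tool run from a shell, not from the psql prompt, and it ships with PostgreSQL 9.2 and later (for a 9.1 server you would build it from a newer source tree's contrib/ directory). A sketch:
$ pg_test_timing        # default 3-second run
$ pg_test_timing -d 10  # longer run
It prints the per-loop overhead of reading the system clock plus a histogram of observed deltas; a high per-call cost there is exactly what inflates EXPLAIN ANALYZE numbers.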
On Fri, Oct 26, 2012 at 8:30 AM, Kevin Grittner wrote:
> k...@rice.edu wrote:
>
>> You have seq_page_cost = 1, which is better than or equal
>> to the random_page_cost in all of your examples. It sounds like you
>> need a seq_page_cost of 5, 10, 20 or more.
>
> The goal should be to set the cost factors so that they model actual
> costs for your workload
robcron writes:
> Our IT group took over an app that we have running using postgres and it has
> been on version 8.2.11 since we acquired it. It is time to get current, so
> I have created instances of our production database that mirror the exact
> hardware for our existing implementation on version
On 10/26/2012 02:35 PM, Tom Lane wrote:
> So I'm wondering exactly what "9.1" version you're using, and also
> whether you've got any nondefault planner cost parameters.
Just a plain old 9.1.6 from Ubuntu 12.04. Only thing I personally
changed was the default_statistics_target. Later, I bumped up
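For the nondefault-parameters question, this can be answered straight from psql; the query below just reads the pg_settings view:
-- Every setting changed away from its built-in default,
-- planner cost parameters included.
SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;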
Shaun Thomas writes:
> On 10/24/2012 02:31 PM, Shaun Thomas wrote:
>> The main flaw with my example is that it's random. But I swear I'm not
>> making it up! :)
> And then I find a way to make it non-random. Hooray:
I can't reproduce this. In 9.1 for instance, I get
Sort (cost=9.83..9.83 row
All... first let me say thank you for this forum. I am new to it and
relatively new to postgres, more of a sysadmin than a DBA, but let me
explain my issue. I'll also try to post relevant information.
Our IT group took over an app that we have running using postgres and it has
been on
On 26.10.2012 at 20:00, Jeff Janes wrote:
> You probably only need to load one partition to figure out if it does a
> better job there.
>
> Once you know if it solves the problem, then you can make an informed
> decision on whether migration might be worthwhile.
>
> Cheers,
>
> Jeff
ok .. I'll g
On Fri, Oct 26, 2012 at 8:30 AM, Böckler Andreas wrote:
>
> On 25.10.2012 at 18:20, Jeff Janes wrote:
>
>> Can you load the data into 9.2 and see if it does better? (I'm not
>> optimistic that it will be.)
>
> This takes months, and the customer has to pay us for that ;)
You probably only need to load one partition to figure out if it does a
better job there.
Hey guys,
I have a pretty nasty heads-up. If you have hardware using an Intel XEON
and a newer Linux kernel, you may be experiencing very high CPU latency.
You can check yourself:
cat /sys/devices/system/cpu/cpuidle/current_driver
If it says intel_idle, the Linux kernel will *aggressively* put your CPUs
into deep sleep states.
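If it does say intel_idle, the usual mitigation is a boot-time kernel parameter; a sketch, assuming a Debian/Ubuntu-style GRUB setup:
# Limit the C-states the kernel may enter (test on one box first).
# Append to the existing flags in /etc/default/grub, then run
# update-grub and reboot:
#   GRUB_CMDLINE_LINUX_DEFAULT="... intel_idle.max_cstate=0 processor.max_cstate=1"
# Afterwards the current_driver file should report acpi_idle instead.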
On Fri, Oct 26, 2012 at 8:15 AM, Böckler Andreas wrote:
> Hi Ken,
>
> On 26.10.2012 at 16:55, k...@rice.edu wrote:
>
>> Hi Andy,
>>
>> You have seq_page_cost = 1, which is better than or equal to
>> the random_page_cost in all of your examples.
>> It sounds like you need
>> a seq_page_cost of 5, 10, 20 or more.
k...@rice.edu wrote:
> You have seq_page_cost = 1, which is better than or equal
> to the random_page_cost in all of your examples. It sounds like you
> need a seq_page_cost of 5, 10, 20 or more.
The goal should be to set the cost factors so that they model actual
costs for your workload
Böckler Andreas wrote:
> b) the high seq costs might be true for that table (partition at
> 40gb), but not for the rest of the database. Seqscan costs per
> table would be great.
You can set those per tablespace. Again, with about 40 spindles in
our RAID, we got about ten times the speed wi
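For reference, a per-tablespace override looks roughly like this; the tablespace name is made up, and the 40gb partition would have to be stored in it:
-- Raise the cost of sequential page reads only for objects
-- living in this tablespace.
ALTER TABLESPACE big_partition_ts
    SET (seq_page_cost = 5, random_page_cost = 2);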
Böckler Andreas wrote:
> On 25.10.2012 at 20:22, Kevin Grittner wrote:
>> The idea is to model actual costs on your system. You don't show
>> your configuration or describe your hardware, but you show an
>> estimate of retrieving over 4000 rows through an index and
>> describe a response time of 4 seconds
On 25.10.2012 at 18:20, Jeff Janes wrote:
> Can you load the data into 9.2 and see if it does better? (I'm not
> optimistic that it will be.)
This takes months, and the customer has to pay us for that ;)
There are already talks about moving it to a new server, but this is for next
year.
And it w
On Fri, Oct 26, 2012 at 05:15:05PM +0200, Böckler Andreas wrote:
> Hi Ken,
>
> On 26.10.2012 at 16:55, k...@rice.edu wrote:
>
> > Hi Andy,
> >
> > You have seq_page_cost = 1, which is better than or equal to
> > the random_page_cost in all of your examples.
> > It sounds like you need a seq_page_cost of 5, 10, 20 or more.
Hi,
I have a tree structure managed with ltree and a GiST index.
The simplified schema is:
CREATE TABLE crt (
    idcrt INT NOT NULL,
    -- ... other columns ...
    pathname LTREE
);
idcrt is the primary key, and there is another index, ix_crt_pathname,
on pathname using GiST.
CREATE TABLE doc (
    iddoc INT NOT NULL,
    -- ... other columns ...
);
iddoc
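Filling in the indexes the message describes, plus one typical lookup the GiST index can serve; the label path in the query is made up:
CREATE EXTENSION ltree;  -- 9.1+; older releases load the contrib module instead
ALTER TABLE crt ADD PRIMARY KEY (idcrt);
CREATE INDEX ix_crt_pathname ON crt USING gist (pathname);
-- Descendants of a node: <@ is ltree's "is contained by" operator.
SELECT idcrt, pathname FROM crt WHERE pathname <@ 'top.science';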
Hi Ken,
On 26.10.2012 at 16:55, k...@rice.edu wrote:
> Hi Andy,
>
> You have seq_page_cost = 1, which is better than or equal to
> the random_page_cost in all of your examples.
> It sounds like you need
> a seq_page_cost of 5, 10, 20 or more.
You're right, it was seq_page_cost
On Fri, Oct 26, 2012 at 04:37:33PM +0200, Böckler Andreas wrote:
> Hi,
>
>
> On 25.10.2012 at 20:22, Kevin Grittner wrote:
>
> >
> > The idea is to model actual costs on your system. You don't show
> > your configuration or describe your hardware, but you show an
> > estimate of retrieving over 4000 rows through an index and
> > describe a response time of 4 seconds
Hi,
On 25.10.2012 at 20:22, Kevin Grittner wrote:
>
> The idea is to model actual costs on your system. You don't show
> your configuration or describe your hardware, but you show an
> estimate of retrieving over 4000 rows through an index and describe a
> response time of 4 seconds, so you m
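A low-risk way to act on that advice is to try candidate cost factors inside one transaction and watch whether the plan flips; the values come from the numbers suggested upthread, and the query is a placeholder:
BEGIN;
SET LOCAL seq_page_cost = 5;      -- one of the suggested trial values
SET LOCAL random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM my_big_partition WHERE ts >= '2012-01-01';
ROLLBACK;  -- SET LOCAL ends with the transaction either way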