Tom Lane wrote:
> One idea I thought about was to sort by index scan cost, using
> selectivity only as a tiebreaker for cost, rather than the other way
> around as is currently done. This seems fairly plausible because
> indexscans that are cheaper than other indexscans likely return fewer
> rows
Steve <[EMAIL PROTECTED]> writes:
> [ strange planner misbehavior in 8.2.3 ]
After some off-list investigation (thanks, Steve, for letting me poke
at your machine), the short answer is that the heuristics used by
choose_bitmap_and() suck. The problem query is like
select ... from ds where
ds.re
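For context: when several indexes each satisfy part of the WHERE clause,
choose_bitmap_and() decides which of them to AND together. A minimal
sketch of the plan shape under discussion, using a hypothetical table and
index names since the real query above is truncated:

-- Hypothetical schema, not Steve's actual one.
CREATE TABLE ds (a integer, b integer, c integer);
CREATE INDEX ds_a_idx ON ds (a);
CREATE INDEX ds_b_idx ON ds (b);

EXPLAIN SELECT * FROM ds WHERE a = 42 AND b = 7;
-- With suitable data this can produce a plan like:
--   Bitmap Heap Scan on ds
--     Recheck Cond: ((a = 42) AND (b = 7))
--     ->  BitmapAnd
--           ->  Bitmap Index Scan on ds_a_idx
--           ->  Bitmap Index Scan on ds_b_idx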
On Apr 13, 2007, at 4:01 PM, Dan Harris wrote:
Is there a pg_stat_* table or the like that will show how bloated
an index is? I am trying to squeeze some disk space and want to
track down where the worst offenders are before performing a global
REINDEX on all tables, as the database is rou
On Mon, 2007-04-09 at 16:05 -0400, Carlos Moreno wrote:
> 2) What would be the real implications of doing that?
Many people ask, which is why a whole chapter of the manual is devoted to
this important topic.
http://developer.postgresql.org/pgdocs/postgres/wal.html
--
Simon Riggs
On Friday 13 April 2007 14:53:53 Carlos Moreno wrote:
> How does PG take advantage of the available memory? I mean, if I have a
> machine with, say, 4 or 8GB of memory, how will those GBs end
> up being used? They just do?? (I mean, I would find that a valid
> answer;
On Linux the files
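As a quick first check, the server's own view of its memory-related
settings can be read from the standard pg_settings view; a minimal
sketch:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'maintenance_work_mem');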
Is there a pg_stat_* table or the like that will show how bloated an index is?
I am trying to squeeze some disk space and want to track down where the worst
offenders are before performing a global REINDEX on all tables, as the database
is roughly 400GB on disk and this takes a very long time to
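There is no catalog column that reports bloat directly, but ranking
indexes by on-disk size is a reasonable first pass; a sketch using the
standard pg_class catalog and size functions (the worst offenders tend to
float to the top, and the contrib pgstattuple module can then give real
dead-space figures for the suspects):

SELECT c.relname AS index_name,
       c.relpages,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
WHERE c.relkind = 'i'
ORDER BY c.relpages DESC
LIMIT 20;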
Steve wrote:
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance.
The annotated config file talks about setting shared_buffers to a third
of the available memory --- well, it says "it should be no more than 1/3
of the total amount of memory
At 12:38 PM 4/13/2007, Steve wrote:
Really?
Wow!
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance. Have you done any benchmarks on 8.2.x that
show that you get an improvement from this, or did you just take the
"too much of a good thing is wo
Really?
Wow!
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance. Have you done any benchmarks on 8.2.x that
show that you get an improvement from this, or did you just take the
"too much of a good thing is wonderful" approach?
Not to be rude
"Avdhoot Kishore Saple" <[EMAIL PROTECTED]> writes:
> How to compute the frequency of a predicate (e.g. Salary > $7) in an
> SQL query from a DB's pre-defined indexes? I'm specifically looking at
> how to retrieve information about indices (like number of pages at each
> level of index, range o
Dear All,
How to compute the frequency of a predicate (e.g. Salary > $7) in an
SQL query from a DB's pre-defined indexes? I'm specifically looking at
how to retrieve information about indices (like number of pages at each
level of index, range of attribute values etc.)
Any suggestions
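The usual source for this is the optimizer's statistics rather than the
indexes themselves. A sketch using the standard pg_stats view and
EXPLAIN, with a hypothetical emp(salary) table standing in for the real
schema:

-- What the planner knows about the column's distribution:
SELECT null_frac, n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'emp' AND attname = 'salary';

-- The planner's selectivity estimate for the predicate shows up as the
-- estimated row count:
EXPLAIN SELECT * FROM emp WHERE salary > 7;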
On Tue, 2007-04-10 at 15:28 -0400, Steve wrote:
>
> I'm trying to tune the memory usage of a new machine that has a -lot- of
> memory in it (32 gigs).
...
>
> shared_buffers = 16GB
Really?
Wow!
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performan
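For measurement rather than folklore here, the contrib pg_buffercache
module (an optional install, so treat this as a sketch) shows how a large
shared_buffers is actually being used:

-- Top consumers of shared_buffers in the current database.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;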