Hi all,
I've got a fairly small DB (~850MB when pg_dump'ed) running on PgSQL
v8.1.11 on a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is
ext3 on LVM with 32MB extents. It's about the only real resource-hungry
VM on the server.
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the database...
On 04/01/2010 7:10 PM, Madison Kelly wrote:
Hi all,
I've got a fairly small DB (~850MB when pg_dump'ed) running on PgSQL
v8.1.11 on a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is
ext3 on LVM with 32MB extents. It's about the only real resource-hungry
VM on the server.
It slows down over time...
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the database...
Some questions:
Is autovacuum running? This is the most likely suspect. If not, things
will bloat and slow down over time.
The query shown below [select count(distinct...] seems to be looping
(99-101% CPU as shown by top for 11+ hours). This is using postgres 8.3.5
on a dual quad core machine (Intel(R) Xeon(R) CPU X5460 @ 3.16GHz) with
32G RAM. Can I provide any other info to help investigate this issue? Or
any thoughts?
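A rough sketch of the kind of information that helps here, assuming an 8.3
server and hypothetical session; pg_stat_activity shows what each backend is
executing, and pg_locks shows whether it is stuck waiting rather than looping:

  -- What is each non-idle backend doing, and since when? (8.3 column names)
  SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
   WHERE current_query <> '<IDLE>';

  -- Is anything blocked on a lock instead of burning CPU?
  SELECT pid, locktype, relation::regclass AS rel, mode, granted
    FROM pg_locks
   WHERE NOT granted;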
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the database...
Some questions:
Is autovacuum running? This is the most likely suspect.
Yup, I even tried manually running 'VACUUM FULL' and it didn't help. As
for upgrading...
VACUUM FULL is usually considered a bad idea. What you probably want to
do instead is CLUSTER, followed by ANALYZE.
Basically, VACUUM FULL compacts the table but tends to bloat the indexes,
whereas CLUSTER rewrites the table and rebuilds its indexes in one pass.
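A minimal sketch of that sequence, using a hypothetical table and index; note
that the CLUSTER syntax changed between 8.1 and 8.3:

  -- 8.1 spelling: CLUSTER indexname ON tablename
  CLUSTER my_table_pkey ON my_table;
  -- 8.3+ spelling of the same thing:
  -- CLUSTER my_table USING my_table_pkey;
  ANALYZE my_table;   -- refresh planner statistics after the rewrite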
On 04/01/2010 8:30 PM, Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Some questions:
Is autovacuum running? This is the most likely suspect...
Gary Doades wrote:
From your queries it definitely looks like it's your stats that are the
problem. When the stats get well out of date the planner is choosing a
hash join because it thinks thousands of rows are involved whereas only
a few are actually involved. That's why, with better stats, the planner
chooses a much better plan.
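For anyone following along, a hedged sketch of how to confirm that theory on a
hypothetical table: refresh the statistics and compare the row estimates in
the plan before and after.

  EXPLAIN SELECT * FROM some_table WHERE some_col = 42;  -- note the estimated rows
  ANALYZE some_table;                                    -- refresh the statistics
  EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col = 42;
  -- estimated vs actual rows should now be much closer, and the join
  -- method chosen for the full query should change accordingly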
On Mon, 2010-01-04 at 15:53 -0500, Madison Kelly wrote:
> Gary Doades wrote:
> > From your queries it definitely looks like it's your stats that are the
> > problem. When the stats get well out of date the planner is choosing a
> > hash join because it thinks thousands of rows are involved where
Madison Kelly wrote:
> I've added CLUSTER -> ANALYZE -> VACUUM to my nightly
> routine and dropped the VACUUM FULL call.
The CLUSTER is probably not going to make much difference once
you've eliminated bloat, unless your queries do a lot of searches in
the sequence of the index used. Be sure
Kevin Grittner wrote:
Madison Kelly wrote:
I've added CLUSTER -> ANALYZE -> VACUUM to my nightly
routine and dropped the VACUUM FULL call.
The CLUSTER is probably not going to make much difference once
you've eliminated bloat, unless your queries do a lot of searches in
the sequence of the
Brad Nicholson wrote:
I think you are going down the wrong route here - you should be looking
at preventative maintenance instead of fixing it after it's broken.
Ensure that autovacuum is running for the database (assuming that you
are on a relatively modern version of PG), and possibly tune it to be
more aggressive if needed.
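As an illustration only (the values are examples, not recommendations), the
knobs involved live in postgresql.conf and exist on 8.1 and later:

  autovacuum = on
  autovacuum_naptime = 60                 # seconds between checks
  autovacuum_vacuum_scale_factor = 0.2    # vacuum once ~20% of a table has changed
  autovacuum_vacuum_threshold = 500
  autovacuum_analyze_scale_factor = 0.1
  autovacuum_analyze_threshold = 250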
Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Yup, I even tried manually running 'VACUUM FULL' and it didn't help...
hi,
just a small question: is it normal that PostgreSQL 8.4.1 always uses
sequential scanning on any table when there is a condition having the
constant "current_user"? Of course there is a btree index set on that table,
but the DBMS just doesn't want to utilize it. When I replace current_user
with a literal user name, the index is used.
On Mon, Jan 4, 2010 at 2:41 PM, Brian Cox wrote:
> The query shown below [select count(distinct...] seems to be looping
> (99-101% CPU as shown by top for 11+ hours). This is using postgres 8.3.5 on a
> dual quad core machine (Intel(R) Xeon(R) CPU X5460 @ 3.16GHz) with 32G RAM.
> Can I provide any other info to help investigate this issue?
Madison Kelly wrote:
I wanted to get ahead of the problem, hence my question here. :)
I've set this to run at night ('iwt' being the DB in question):
su postgres -c "psql iwt -c \"VACUUM ANALYZE VERBOSE\""
And why not the vacuumdb command?:
su postgres -c "vacuumdb --analyze --verbose iw
Steve Crawford wrote:
Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Yup, I even tried manually running 'VACUUM FULL' and it didn't help...
Steve Crawford wrote:
Madison Kelly wrote:
I wanted to get ahead of the problem, hence my question here. :)
I've set this to run at night ('iwt' being the DB in question):
su postgres -c "psql iwt -c \"VACUUM ANALYZE VERBOSE\""
And why not the vacuumdb command?:
su postgres -c "vacuumdb
2010/1/4 Keresztury Balázs :
> just a small question: is it normal that PostgreSQL 8.4.1 always uses
> sequential scanning on any table when there is a condition having the
> constant "current_user"? Of course there is a btree index set on that table,
> but the DBMS just doesn't want to utilize it.
On Mon, Jan 4, 2010 at 2:57 PM, Madison Kelly wrote:
> As for autovacuum, I assumed (yes, I know) that all v8.x releases enabled it
> by default. How would I confirm that it's running or not?
I believe it's not enabled by default in 8.1-land (nor in 8.2); it is only
on by default as of 8.3. Whether it's running or not is easy to check from
the server settings.
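One quick way to check, assuming an 8.1 server; all three of these settings
have to be on for autovacuum to actually do anything there:

  SELECT name, setting FROM pg_settings
   WHERE name IN ('autovacuum', 'stats_start_collector', 'stats_row_level');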
Madison Kelly wrote:
I think for now, I will stick with 8.1, but I will certainly try out
your repo edit above on a test machine and see how that works out. I
am always reticent to change something as fundamental as postgres
without "good reason". I guess I am a fan of "if it ain't broke...". :
On 01/04/2010 04:53 PM, Robert Haas [robertmh...@gmail.com] wrote:
PREPARE foo AS
EXPLAIN EXECUTE foo;
Thanks for the response. Results below. Brian
cemdb=> prepare foo as select count(distinct b.ts_id) from
ts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c,
ts_transet
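The general shape of the technique, on a hypothetical stand-in table rather
than the actual cemdb schema:

  PREPARE foo AS
    SELECT count(DISTINCT id) FROM some_table WHERE some_col > 0;
  EXPLAIN EXECUTE foo;   -- shows the plan the prepared statement will use
  DEALLOCATE foo;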
Actually the table itself has ~8000 records. I don't know why it reports 42
rows, since there is not even a matching row in the table for this specific
condition. But as we all know, the universal answer to every question is
42 ;) Autovacuum is on, and I also did some vacuuming before I started testing.
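To see where an estimate like that comes from, one can look at what the
statistics actually say about the table and column (hypothetical names):

  SELECT reltuples, relpages FROM pg_class WHERE relname = 'some_table';
  SELECT n_distinct, null_frac, most_common_vals
    FROM pg_stats
   WHERE tablename = 'some_table' AND attname = 'some_col';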
On Mon, Jan 4, 2010 at 3:13 PM, Greg Smith wrote:
> Madison Kelly wrote:
>>
>> I think for now, I will stick with 8.1, but I will certainly try out your
>> repo edit above on a test machine and see how that works out. I am always
>> reticent to change something as fundamental as postgres without "good reason".
On Jan 4, 2010, at 1:59 PM, Robert Haas wrote:
> The thing is, PostgreSQL doesn't know at planning time what the value of
> current_user() will be, so the plan can't depend on that; the planner
> just takes its best shot.
current_user() is a stable function and the manual is explicit that the r
Rosser Schwarz wrote:
On Mon, Jan 4, 2010 at 2:57 PM, Madison Kelly wrote:
As for autovacuum, I assumed (yes, I know) that all v8.x releases enabled it
by default. How would I confirm that it's running or not?
I believe it's not enabled by default in 8.1-land (nor in 8.2); it is only
on by default as of 8.3. Whether it's running or not is easy to check from
the server settings.
Scott Marlowe wrote:
On Mon, Jan 4, 2010 at 3:13 PM, Greg Smith wrote:
Madison Kelly wrote:
I think for now, I will stick with 8.1, but I will certainly try out your
repo edit above on a test machine and see how that works out. I am always
reticent to change something as fundamental as postgres without "good reason"...
Madison Kelly wrote:
You are right, autovacuum is not running after all. From your comment,
I am wondering if you'd recommend I turn it on or not?...
I see you are considering an upgrade but FWIW on your 8.1 instance, my
remaining 8.1 server has been running for years with it on. Read up on the
8.1 autovacuum settings first, though.
2010/1/4 Erik Jones :
> On Jan 4, 2010, at 1:59 PM, Robert Haas wrote:
>> The thing is, PostgreSQL doesn't know at planning time what the value of
>> current_user() will be, so the plan can't depend on that; the planner
>> just takes its best shot.
>
> current_user() is a stable function and the manual is explicit that the
> result of a stable function can be used in an index scan condition.
Hi everybody,
I am running a PostgreSQL server 8.3.5 with a pretty much standard config.
The web application server which runs Apache 1.3/PHP2.9 has an intermittent
problem:
pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045
seconds. The long request happens at an approximate rate of 1:
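One crude way to see whether the delay is in connection setup itself, using
hypothetical host/user/database names and repeating enough times to catch one
of the slow runs:

  for i in $(seq 1 50); do
    /usr/bin/time -f "%e s" psql -h db_host -U app_user -d app_db -c "SELECT 1" > /dev/null
  done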
Erik Jones wrote:
> On Jan 4, 2010, at 1:59 PM, Robert Haas wrote:
>
>> The thing is, PostgreSQL doesn't know at planning time what the value of
>> current_user() will be, so the plan can't depend on that; the planner
>> just takes its best shot.
>
> current_user() is a stable function and the manual is explicit that the
> result of a stable function can be used in an index scan condition.
Dmitri Girski wrote:
I am running a PostgreSQL server 8.3.5 with a pretty much standard config.
The web application server which runs Apache 1.3/PHP2.9 has an
intermittent problem:
pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.
The long request happens at approxima
Madison Kelly wrote:
You are right, autovacuum is not running after all. From your comment, I
am wondering if you'd recommend I turn it on or not? If so, given that I
doubt I will upgrade any time soon, how would I enable it? I suppose I
could google that, but google rarely shares the gotchas. :)
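For the record, on 8.1 it is a postgresql.conf change, and autovacuum also
needs the row-level statistics collector; a restart is the safe way to apply
it (example settings below):

  stats_start_collector = on   # on by default, but must not be disabled
  stats_row_level = on         # off by default on 8.1
  autovacuum = on              # off by default on 8.1
  # then restart PostgreSQL, e.g. via the distribution's init script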
Dmitri Girski writes:
> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.
> The web application server which runs Apache 1.3/PHP2.9 has an intermittent
> problem:
> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.
> The long request happens at a
Craig Ringer writes:
> Erik Jones wrote:
>> current_user() is a stable function and the manual is explicit that the
>> result of a stable function can be used in an index scan:
> Yes ... but the planner doesn't know the value current_user will return,
I think it's got nothing to do with that and