Hello,
> Once we ramped up production traffic on the machines, PostgreSQL
> pretty much died under the load and could never get to a steady state.
> I think this had something to do with the PG backends not having
> enough I/O bandwidth (due to CFQ) to put data into cache fast enough.
> This wen
Hello,
>>> Anybody on the list have any experience with these drives? They get
>>> good numbers but I can't find diddly on them on the internet for the
>>> last year or so.
>>>
>>> http://www.stec-inc.com/product/zeusiops.php
Most of the storage vendors (I have confirmation from EMC and HP)
> > When a row is orphaned it's added to a list of possibly available rows.
> > When a new row is needed the list of possible rows is examined and the
> > first one with a transaction id less than the lowest running transaction
> > id is chosen to be the new row? These rows can be in a heap so it
> Looked like pg_autovacuum is operating as expected. One of the annoying
> limitations of pg_autovacuum in current releases is that you can't set
> thresholds on a per table basis. It looks like this table might require
> an even more aggressive vacuum threshold. Couple of thoughts, are you
>
> >>AFAICT the vacuum is doing what it is supposed to, and the problem has
> >>to be just that it's not being done often enough. Which suggests either
> >>an autovacuum bug or your autovacuum settings aren't aggressive enough.
> >
> > -D -d 1 -v 1000 -V 0.5 -a 1000 -A 0.1 -s 10
> >
> > That is a
> >> First thing I'd suggest is to get a more detailed idea of exactly
> >> what is bloating --- which tables/indexes are the problem?
>
> > I think the most problematic table is this one. After vacuum
> > full/reindex it was 20MB in size; now (after 6 hours) it is already
> > 70MB and counting.
>
>
> > Our database increases in size 2.5 times during the day.
> > What to do to avoid this? Autovacuum running with quite
> > aggressive settings, FSM settings are high enough.
>
> First thing I'd suggest is to get a more detailed idea of exactly
> what is bloating --- which tables/indexes are the problem?
Hello,
Our database increases in size 2.5 times during the day.
What can we do to avoid this? Autovacuum is running with quite
aggressive settings, and the FSM settings are high enough.
The database size should be more or less constant, but it
has a high turnover rate (100+ inserts/updates/deletes per second).
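To get the "more detailed idea of exactly what is bloating" that is suggested later in the thread, one hedged starting point is the pg_class system catalog. Note that relpages and reltuples are only refreshed by VACUUM or ANALYZE, so they are estimates, not live counts:

```sql
-- Largest tables and indexes by on-disk pages.
-- Run ANALYZE first so relpages/reltuples are reasonably fresh.
SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC
LIMIT 10;
```

Re-running this over a few hours shows which relations are actually growing.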
> > Hm. Yes. The number of locks varies quite a lot (10-600). Now what to
> > investigate further? We do not use explicit locks in our functions. We
> > use quite simple update/delete where key=something;
> > Some sample (select * from pg_locks order by pid) is below.
>
> The sample doesn't sho
> >>The "vacuum cost" parameters can be adjusted to make vacuums fired
> >>by pg_autovacuum less of a burden. I haven't got any specific numbers
> >>to suggest, but perhaps someone else does.
> >
> > It looks like vacuum is not the only cause of our problems. vacuum_cost
> > seems to lower vacuum impact, but we are still noticing slow-query
> > "storms". We are logging queries that take >2000ms to process.
> > There are quiet periods, and then suddenly 30+ slow queries appear
> > in the log
> > ... So contents of database changes very fast. Problem is that when
> > pg_autovacuum does vacuum those changes slows down too much.
>
> The "vacuum cost" parameters can be adjusted to make vacuums fired
> by pg_autovacuum less of a burden. I haven't got any specific numbers
> to suggest, but perhaps someone else does.
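For anyone tuning this, these are the cost-based vacuum delay parameters in postgresql.conf (available from 8.0 on). The values below are only illustrative starting points, not numbers recommended anywhere in this thread:

```
# Illustrative values only -- the thread gives no specific numbers.
vacuum_cost_delay = 10        # ms to nap once the cost limit is reached (0 = off)
vacuum_cost_page_hit = 1      # cost of a page found in shared buffers
vacuum_cost_page_miss = 10    # cost of a page that must be read from disk
vacuum_cost_page_dirty = 20   # cost of dirtying a previously clean page
vacuum_cost_limit = 200       # accumulated cost that triggers the nap
```

Raising vacuum_cost_delay (or lowering vacuum_cost_limit) makes vacuum gentler but slower, so bloat control and query latency trade off against each other.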
Hello,
We have problems with one PostgreSQL database with a high
data change rate. Actually, we are already under pressure
to switch from PostgreSQL to Oracle.
I cannot post the schema and queries to the list but can do this
privately.
Tables are not big (2-15 rows each) but have a very high
turnover rate.
Hello,
What would be reasonable settings for a quite heavily used
but not large database?
The database is under 1G in size and fits into the server cache (the
server has 2GB of memory). The two most used tables are ~100k rows each,
but they get up to 50 inserts/updates/deletes per second.
How to tweak
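A hedged sketch of postgresql.conf starting points for a ~1GB database on a 2GB machine. Exact syntax and sensible values depend on the PostgreSQL version (memory-unit syntax like "256MB" requires 8.2+), so treat these as assumptions to benchmark, not answers:

```
# Illustrative only; benchmark before adopting.
shared_buffers = 256MB        # a modest slice of the 2GB for PostgreSQL itself
effective_cache_size = 1GB    # planner hint: the OS cache can hold the whole DB
checkpoint_segments = 16      # spread checkpoint I/O for a high write rate
```

With a write rate of 50+ changes/second on small hot tables, aggressive vacuuming and FSM settings matter at least as much as the memory knobs.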
> > While writing web application I found that it would
> > be very nice for me to have "null" WHERE clause. Like
> > WHERE 1=1. Then it is easy to concat additional
> > conditions just using $query . " AND col=false" syntax.
> >
> > But which of the possible "null" clauses is the fastest
> > o
Hello,
While writing a web application I found that it would
be very nice to have a "null" WHERE clause, like
WHERE 1=1. Then it is easy to concatenate additional
conditions just using $query . " AND col=false" syntax.
But which of the possible "null" clauses is the fastest
one?
Thanks,
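In PostgreSQL the simplest "null" clause is the constant TRUE: the planner folds a constant-true qual away during planning, so WHERE true, WHERE 1=1, and the like all cost effectively nothing at execution time. A sketch (the table and column names are made up):

```sql
-- Base query with a constant-true "null" clause.
SELECT * FROM items WHERE true;

-- Further conditions are appended with AND, as in the $query concatenation above.
SELECT * FROM items WHERE true AND col = false;

-- EXPLAIN shows no Filter step for the constant-true qual; it is
-- removed by the planner before execution.
EXPLAIN SELECT * FROM items WHERE true;
```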
> >> Well, try it without the trigger. If performance improves markedly, it
> >> might be worth rewriting in C.
>
> > Nope. Execution time is practically the same without trigger.
>
> >> If not, you're probably saturating the disk I/O -
>
> > Bottleneck in this case is CPU. postmaster process
> > How can I improve performance and will version 7.4 bring something
> > valuable for my task? Rewrite to some other scripting language is not
> > a problem. Trigger is simple enough.
>
> Well, try it without the trigger. If performance improves markedly, it
> might be worth rewriting in C.
Hello,
I have a small table (up to 1 rows) and every row will be updated
once per minute. The table also has a "before update on each row" trigger
written in plpgsql. But 99.99% of the time the trigger will do nothing
to the database. It will just compare the old and new values in the row
and those value
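The trigger shape being described, compare OLD and NEW and bail out when nothing relevant changed, might look like the following in pre-8.0 plpgsql (quoted function body; the table, column, and audit-log names are invented for illustration, not taken from the poster's schema):

```sql
-- Hypothetical example: skip all real work when the watched value is unchanged.
CREATE FUNCTION log_if_changed() RETURNS trigger AS '
BEGIN
    IF NEW.val = OLD.val THEN
        RETURN NEW;    -- the 99.99% case: nothing changed, do nothing extra
    END IF;
    INSERT INTO audit_log (row_id, old_val, new_val)
        VALUES (OLD.id, OLD.val, NEW.val);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER watch_val BEFORE UPDATE ON small_table
    FOR EACH ROW EXECUTE PROCEDURE log_if_changed();
```

Even in the no-op case the row still passes through the plpgsql interpreter once per update, which is consistent with the CPU-bound behaviour reported above.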
> > I missed your orig. post, but AFAIK multiprocessing kernels will handle
> > HT CPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4
> > Xeon 2.4 CPUs.
> >
> > This way, I don't think HT would improve any single query (afaik no
> > postgres process uses more than one cpu), but overall