Ron Mayer <[EMAIL PROTECTED]> writes:
> Greg Smith wrote:
>> Count me on the side that agrees adjusting the vacuuming parameters is
>> the more straightforward way to cope with this problem.
> Agreed for vacuum; but it still seems interesting to me that
> across databases and workloads high priority transactions
> tended to get through fast
Greg Smith wrote:
>
> Count me on the side that agrees adjusting the vacuuming parameters is
> the more straightforward way to cope with this problem.
Agreed for vacuum; but it still seems interesting to me that
across databases and workloads high priority transactions
tended to get through fast
Ah, glad this came up again 'cause a problem here caused my original reply
to bounce.
On Thu, 10 May 2007, Ron Mayer wrote:
Actually, CPU priorities _are_ an effective way of indirectly scheduling
I/O priorities. This paper studied both CPU and lock priorities on a
variety of databases including
Andrew Sullivan wrote:
> On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:
>> One way is to write a stored procedure that sets its own priority.
>> An example is here:
>> http://weblog.bignerdranch.com/?p=11
>
> Do you have evidence to show this will actually work consistently?
The paper
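For anyone who wants to try the idea above, here is a minimal sketch of the
client-side variant: instead of a C function inside the backend (which is
what the linked blog post does), the client looks up its own backend PID and
lowers that process's CPU priority. This assumes things the thread does not
state: the client runs on the same host as the server, has permission to
renice the backend, and uses Python 3.3+ (for os.setpriority) with psycopg2.

    import os
    import psycopg2

    # Connect and find the PID of the server backend serving this session.
    conn = psycopg2.connect("dbname=test")   # hypothetical DSN
    cur = conn.cursor()
    cur.execute("SELECT pg_backend_pid()")
    backend_pid = cur.fetchone()[0]

    # Lower the backend's CPU priority (a higher nice value means lower
    # priority). Requires running on the database host with sufficient rights.
    os.setpriority(os.PRIO_PROCESS, backend_pid, 10)

    # Work done through this connection now competes for CPU at low priority.
    cur.execute("SELECT count(*) FROM some_big_table")  # hypothetical table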
Hi Josh - thanks for the thoughts.
>
> This is Postgres 8.1.4 64bit.
> 1. Upgrade to 8.1.9. There is a pretty important bug with autovac that
> is fixed there.
We don't use pg_autovac - we have our own process that runs very often,
vacuuming tables that are dirty. It works well and vacuums when ac
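A driver like the one described might look something like the sketch below.
This is a guess at the shape of it, since the poster's actual process isn't
shown; the DSN and threshold are hypothetical, and note that n_tup_upd and
n_tup_del in pg_stat_user_tables are cumulative counters, so a real version
would track deltas between runs rather than absolute values.

    import psycopg2

    DIRTY_ROWS = 10000   # hypothetical churn threshold; tune per workload

    conn = psycopg2.connect("dbname=test")   # hypothetical DSN
    conn.autocommit = True   # VACUUM cannot run inside a transaction block
    cur = conn.cursor()

    # Find user tables with enough update/delete churn to be worth vacuuming.
    # These counters are cumulative since the stats were last reset, so a
    # production version would remember the previous values and diff them.
    cur.execute("""
        SELECT schemaname, relname
        FROM pg_stat_user_tables
        WHERE n_tup_upd + n_tup_del > %s
    """, (DIRTY_ROWS,))

    for schema, table in cur.fetchall():
        # Identifiers come straight from the catalog, so simple quoting is
        # enough here.
        cur.execute('VACUUM ANALYZE "%s"."%s"' % (schema, table))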
Ralph Mason wrote:
We have a database running on a 4 processor machine. As time goes by
the IO gets worse and worse, peaking at about 200% as the machine loads up.
The weird thing is that if we restart postgres it’s fine for hours but
over time it goes bad again.
(CPU usage graph here
We have a database running on a 4 processor machine. As time goes by the IO
gets worse and worse, peaking at about 200% as the machine loads up.
The weird thing is that if we restart postgres it’s fine for hours but over
time it goes bad again.
(CPU usage graph here: http://www.fl
OK, I understand.
So one clarifying question on WAL contents:
On an insert of a 100 byte row that is logged, what goes into the WAL
log? Is it 100 bytes, 132 bytes (row + overhead), or something else? Does
just the row contents get logged, or the contents plus all of the
associated overhead? I understand
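One way to answer this empirically, rather than from the docs, is to diff
pg_current_xlog_location() around a batch of inserts. A sketch under
assumptions not in the thread: a server new enough to have that function
(8.2+; it is spelled pg_current_wal_lsn() on 10 and later), a hypothetical
table wal_test, and no other write activity on the cluster while it runs.

    import psycopg2

    def lsn_to_bytes(lsn):
        """Convert an 'X/Y' WAL location string to an absolute byte offset.
        Ignores segment-boundary rounding on pre-9.3 servers, so treat the
        result as approximate."""
        hi, lo = lsn.split('/')
        return (int(hi, 16) << 32) + int(lo, 16)

    conn = psycopg2.connect("dbname=test")   # hypothetical DSN
    conn.autocommit = True
    cur = conn.cursor()

    cur.execute("SELECT pg_current_xlog_location()")
    start = lsn_to_bytes(cur.fetchone()[0])

    N = 10000
    for _ in range(N):
        cur.execute("INSERT INTO wal_test (payload) VALUES (%s)", ('x' * 100,))

    cur.execute("SELECT pg_current_xlog_location()")
    end = lsn_to_bytes(cur.fetchone()[0])

    # WAL generated per ~100-byte row, record headers included.
    print("WAL bytes per row: %.1f" % ((end - start) / float(N)))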
Keaton Adams wrote:
So for every data page there is a 20 byte header, for every row there is
a 4 byte identifier (offset into the page), AND there is also a 28 byte
fixed-size header (27 bytes + optional null bitmap)? (I did find the
section in the 8.1 manual that gives the physical page layout.) The
So for every data page there is a 20 byte header, for every row there is
a 4 byte identifier (offset into the page), AND there is also a 28 byte
fixed-size header (27 bytes + optional null bitmap)? (I did find the
section in the 8.1 manual that gives the physical page layout.) The other
RDBMS platforms
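Rather than adding the figures up from the manual, the per-row footprint can
be measured directly with pg_relation_size() (in core since 8.1). A sketch
only; the table is hypothetical, and the measured number will differ a bit
from hand arithmetic because of alignment padding and the varlena length
word on the char column.

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # hypothetical DSN
    conn.autocommit = True
    cur = conn.cursor()

    # Throwaway table: each row carries ~100 bytes of user data.
    cur.execute("CREATE TABLE overhead_test (payload char(100))")
    cur.execute(
        "INSERT INTO overhead_test SELECT 'x' FROM generate_series(1, 250000)")

    cur.execute(
        "SELECT pg_relation_size('overhead_test'), count(*) FROM overhead_test")
    size_bytes, nrows = cur.fetchone()
    print("total: %.1f MB, bytes/row incl. overhead: %.1f"
          % (size_bytes / 1048576.0, float(size_bytes) / nrows))

    cur.execute("DROP TABLE overhead_test")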
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Keaton Adams wrote:
>> We are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when
>> it becomes available. Are there space utilization/performance
>> improvements in WAL logging in the upcoming release?
> One big change in 8.3 is that
Keaton Adams wrote:
Using an 8K data page:
8K data page (8192 bytes)
Less page header and row overhead leaves ~8000 bytes
At 100 bytes per row = ~80 rows/page
Rows loaded: 250,000 / 80 = 3,125 data pages
3,125 pages * 8192 bytes = 25,600,000 bytes / 1,048,576 = ~24.4 MB of data
page space.
That's not accurate. Th
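The likely gap in the estimate above is the per-row overhead from the
earlier messages: with the 8.1 figures quoted in this thread (20-byte page
header, 4-byte line pointer, ~28-byte tuple header), each 100-byte row
actually occupies about 132 bytes. A back-of-the-envelope redo, ignoring
alignment padding and fill factor:

    PAGE = 8192
    PAGE_HEADER = 20       # per-page header, per the 8.1 manual
    LINE_POINTER = 4       # per-row item identifier in the page
    TUPLE_HEADER = 28      # fixed-size row header quoted above
    ROW_DATA = 100
    ROWS = 250000

    row_footprint = TUPLE_HEADER + ROW_DATA + LINE_POINTER   # 132 bytes
    rows_per_page = (PAGE - PAGE_HEADER) // row_footprint    # 61, not ~80
    pages = -(-ROWS // rows_per_page)                        # ceil -> 4099
    print("%.1f MB" % (pages * PAGE / 1048576.0))            # ~32.0 MB, not 24.4

So the on-disk footprint comes out roughly a third larger than the raw-data
estimate, which is consistent with the correction that follows.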
I sent this to pgsql-admin but didn't receive a response. Would this be
a WAL log performance/efficiency issue?
Thanks,
Keaton
Given these postgresql.conf settings:
#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------
On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:
> One way is to write a stored procedure that sets its own priority.
> An example is here:
> http://weblog.bignerdranch.com/?p=11
Do you have evidence to show this will actually work consistently?
The problem with doing this is that if you