Re: [PERFORM] a heavy duty operation on an unused table kills my server

2010-01-16 Thread Greg Smith

Tom Lane wrote:

This is in fact exactly what the vacuum_cost_delay logic does.
It might be interesting to investigate generalizing that logic
so that it could throttle all of a backend's I/O not just vacuum.
In principle I think it ought to work all right for any I/O-bound
query.
  


So much for inventing a new idea; I'd never considered that parallel 
before.  The logic is perfectly reusable; I'm just not sure how much of 
the implementation would be.
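
For anyone who hasn't played with it, these are the knobs that drive that 
logic today.  vacuum_cost_delay defaults to 0 (throttling off) for manual 
VACUUM; the other values shown here are the stock defaults:

SET vacuum_cost_delay = 20;       -- ms to nap once the limit is hit; 0 disables throttling
SET vacuum_cost_limit = 200;      -- accumulated cost that triggers a nap
SET vacuum_cost_page_hit = 1;     -- charge for a buffer found in shared_buffers
SET vacuum_cost_page_miss = 10;   -- charge for a buffer that has to be read in
SET vacuum_cost_page_dirty = 20;  -- charge for dirtying a previously clean page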


I think the main difference is that there's one shared VacuumCostBalance 
to worry about, whereas each backend that might be limited would need 
its own scratchpad to accumulate costs into.  That part seems 
similar to how the new EXPLAIN BUFFERS capability instruments things 
though, which was the angle I was thinking of approaching this from.  
Make that instrumentation more global, periodically compute a total cost 
from a snapshot of it, and nap whenever the delta between the cost at the 
last nap and the current cost exceeds your threshold.


Bet I could find some more consumers in user land who'd love to watch 
that instrumented data too, if it were expanded to be available for 
operations beyond just plan execution.  I know it would make a lot of 
jobs easier if you could measure that "statement X cost you Y" for 
more than just queries--for example, tracking whether any given UPDATE 
goes outside of the buffer cache or not would be fascinating tuning 
fodder.  Ditto if you could get a roll-up of everything a particular 
connection did.
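
As a taste of what that instrumentation already exposes for plan 
execution, the new BUFFERS option to EXPLAIN reports buffer usage per 
statement.  A rough example (table and values are just illustrative, and 
note that ANALYZE really executes the UPDATE):

EXPLAIN (ANALYZE, BUFFERS)
  UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 1;
-- The "Buffers: shared hit=... read=..." lines in the output show whether
-- the statement stayed inside shared_buffers (hit) or had to go outside it
-- (read); a "read" may still have been served by the OS cache, not the disk.

Getting those same counters for things that aren't EXPLAINable is the 
part that's missing.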


The part specific to the rate limiting that I don't have any good idea 
about yet is where to put the napping logic, such that it would work 
across everything an I/O-limited backend might do.  The only common 
point here seems to be the calls into the buffer manager code, but since 
those happen while holding locks and pins you can't sleep in there.  I'm 
not enthusiastic about sprinkling every type of backend operation with a 
call to some nap-check routine.


--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [PERFORM] a heavy duty operation on an unused table kills my server

2010-01-16 Thread Greg Smith

Robert Haas wrote:

Seems like you'd also need to think about priority inversion, if the
low-priority backend is holding any locks.
  


Right, that's what I was alluding to in the last part:  the non-obvious 
piece here is not how to decide when the backend should nap because it's 
done too much I/O, it's how to figure out when it's safe for it to do so 
without causing trouble for others.


--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [PERFORM] Inserting 8MB bytea: just 25% of disk perf used?

2010-01-16 Thread Pierre Frédéric Caillaud



I've changed the setup a bit:

(1) Replaced the 7,200 rpm disk with a 10,000 rpm one, still SATA though.

(2) Inserted only 10 rows (instead of 100), but 80 MB each, so the
same 800 MB in total.

(3) Moved the WAL path to the system disk (via the
great 'junction' trick mentioned in the other posting), so the
write load is actually split between the system disk and
the fast data disk.



And here is the frustrating result:

1. None of the 4 CPUs was ever more than 30% busy (never
less than 70% idle),

2. while both disks stayed far below their expected write
performance: the data disk showed peaks of approx. 40 MB/s,
but the average throughput was only 16-18 MB/s.


BTW:

* Disabling atime updates (the 'noatime' equivalent) for NTFS did not
change things much (thanks for the tip though!).

* A quick cross-check copying an 800 MB random data file from the
system disk to the data disk showed a constant 75 MB/s.


So, I have no idea what remains as the bottleneck.

 Felix


Try this:

CREATE TABLE test AS SELECT * FROM yourtable;

This will test write speed, and TOAST compression speed.
Then try this:

CREATE TABLE test (LIKE yourtable);
COMMIT;
INSERT INTO test SELECT * FROM yourtable;

This does the same thing, but since the INSERT runs in a separate
transaction from the CREATE it also writes WAL (the CREATE TABLE AS form
can skip WAL, because the table is brand new and nothing else can see it).
I wonder what results you'll get.
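
If you want to turn the elapsed times into MB/s figures you can compare
against your raw 75 MB/s copy, psql's \timing plus the size functions are
enough.  A sketch, reusing the test table from above:

\timing on
CREATE TABLE test AS SELECT * FROM yourtable;
SELECT pg_size_pretty(pg_total_relation_size('test'));  -- heap + TOAST actually written
DROP TABLE test;

Divide that size by the time \timing reports for the CREATE to get the
effective write rate.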
