On 5/24/13 10:36 AM, Jim Nasby wrote:
> Instead of KB/s, could we look at how much time one process is spending waiting on IO vs the rest of the cluster? Is it reasonable for us to measure IO wait time for every request, at least on the most popular OSes?
It's not just an OS-specific issue. The overhead of collecting timing data varies massively based on your hardware, which is why there's the pg_test_timing tool now to help quantify that.
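For anyone who hasn't tried it, quantifying that overhead on a given box is just a matter of running the tool (the duration flag is the standard one; exact numbers vary a lot by hardware and clock source):

    $ pg_test_timing -d 3

It reports the per-loop timing overhead in nanoseconds plus a histogram of observed call durations. Systems stuck on a slow clock source can show per-call overhead orders of magnitude higher than ones with a fast TSC, which is exactly why per-request IO wait timing can't be assumed cheap everywhere.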
I have a design I'm working on that usefully exposes the system load to the database. That's what I think people really want if the goal is to be adaptive to what else is going on. My idea is to use what "uptime" collects as a useful starting set of numbers for quantifying load. If you have both a short-term load measurement and a longer-term one like uptime provides, you can quantify both the overall load and whether it's rising or falling. I want to swipe some ideas on how moving averages are used to determine trend in stock trading systems: http://www.onlinetradingconcepts.com/TechnicalAnalysis/MASimple2.html
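To make the moving-average idea concrete, here's a minimal sketch, assuming getloadavg() is available (it is on Linux and the BSDs; uptime reports the same three numbers). Comparing the short window against the long one is the same crossover trick those trading systems use: short above long means load is rising, short below long means it's falling. This is just the shape of the calculation, not the actual design:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        double  load[3];    /* 1-, 5-, and 15-minute averages */

        if (getloadavg(load, 3) != 3)
        {
            fprintf(stderr, "getloadavg failed\n");
            return 1;
        }

        printf("1min=%.2f  5min=%.2f  15min=%.2f\n",
               load[0], load[1], load[2]);

        /* simple moving-average crossover: short term vs. long term */
        if (load[0] > load[2])
            printf("load is rising\n");
        else if (load[0] < load[2])
            printf("load is falling\n");
        else
            printf("load is steady\n");

        return 0;
    }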
Dynamic load-sensitive statement limits and autovacuum are completely feasible on UNIX-like systems. The work to insert a cost delay point needs to get done before building more complicated logic on top of it, though, so I'm starting with that part.
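For anyone unfamiliar with the mechanism, the shape of a cost delay point follows what autovacuum already does in vacuum_delay_point(): work accumulates a cost balance, and once it crosses a limit the backend naps to cap its I/O rate. A rough illustration, with all names and numbers hypothetical rather than from the actual patch:

    #include <unistd.h>

    static int  cost_balance = 0;
    static int  cost_limit = 200;       /* hypothetical threshold */
    static int  cost_delay_ms = 20;     /* hypothetical nap length */

    /* Call after each unit of work, passing its estimated cost. */
    static void
    statement_delay_point(int op_cost)
    {
        cost_balance += op_cost;
        if (cost_balance >= cost_limit)
        {
            /* nap to spread the I/O out over time */
            usleep(cost_delay_ms * 1000L);
            cost_balance = 0;
        }
    }

    int
    main(void)
    {
        for (int i = 0; i < 1000; i++)
            statement_delay_point(10);  /* pretend each op costs 10 */
        return 0;
    }

The load-sensitive part would then presumably come down to scaling the limit or the delay from the load trend computed above.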
--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.com