Re: [Lxc-users] Downgrade disk IO PRIORITY automatically

2011-12-06 Thread Serge Hallyn
Quoting Arie Skliarouk (sklia...@gmail.com):
 Hi,
 
 I understand this may not be quite the appropriate mailing list for this
 question, but it relates to the LXC technology we use on the server, so
 here it goes:
 
 Most of the time the LXC containers on our servers work properly, but
 occasionally someone, somewhere starts an IO-heavy operation that kills
 performance for everybody. For some time I have tried asking people nicely
 to use ionice -c 3 or to run such tasks off-hours, but this is not enough.
 The problem happens often enough for people to complain, but not (IMHO)
 often enough to warrant purchasing new hardware.
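
 [For readers unfamiliar with it, a minimal sketch of what is being asked
 of users; the commands wrapped by ionice are placeholders:]

 ```shell
 # Class 3 (idle): the command only gets disk time when nothing else
 # is doing IO. No root is needed to lower your own priority.
 ionice -c 3 sh -c 'echo a heavy IO job would run here'

 # To demote an already-running process instead (PID 1234 is
 # hypothetical; class 2 is best-effort and -n 7 its lowest level):
 #   ionice -c 2 -n 7 -p 1234
 ```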
 
 I envision that an ideal solution would be a daemon that monitors disk IO
 activity and automatically reduces (or raises, depending on how you view
 it) the ionice priority of the offending process or container. The daemon
 would restore the IO niceness after some period of good behavior.
 
 Is there any solution along these lines?

Have you tried the blkio cgroup?  (I haven't, so I am curious how effective
it is.)
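
[A minimal sketch of what that might look like on kernels of that era,
where the cgroup hierarchy is mounted manually; the mount point, the
"noisy" group name, and $PID are all illustrative, and root is required:]

```shell
# Mount the blkio controller (one-time setup).
mkdir -p /cgroup/blkio
mount -t cgroup -o blkio none /cgroup/blkio

# Create a group with a low proportional weight and move the offending
# task into it. Weights range from 100 to 1000; tasks outside the group
# keep the default weight, so this group gets a smaller share of disk
# time only when there is contention.
mkdir /cgroup/blkio/noisy
echo 100 > /cgroup/blkio/noisy/blkio.weight
echo $PID > /cgroup/blkio/noisy/tasks
```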

-serge

--
Cloud Services Checklist: Pricing and Packaging Optimization
This white paper is intended to serve as a reference, checklist and point of 
discussion for anyone considering optimizing the pricing and packaging model 
of a cloud services business. Read Now!
http://www.accelacomm.com/jaw/sfnl/114/51491232/
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Downgrade disk IO PRIORITY automatically

2011-12-06 Thread Zhu Yanhai
2011/12/6 Arie Skliarouk sklia...@gmail.com:
 Hi,

 I understand this may not be quite the appropriate mailing list for this
 question, but it relates to the LXC technology we use on the server, so
 here it goes:

 Most of the time the LXC containers on our servers work properly, but
 occasionally someone, somewhere starts an IO-heavy operation that kills
 performance for everybody. For some time I have tried asking people nicely
 to use ionice -c 3 or to run such tasks off-hours, but this is not enough.
 The problem happens often enough for people to complain, but not (IMHO)
 often enough to warrant purchasing new hardware.

 I envision that an ideal solution would be a daemon that monitors disk IO
 activity and automatically reduces (or raises, depending on how you view
 it) the ionice priority of the offending process or container. The daemon
 would restore the IO niceness after some period of good behavior.

 Is there any solution along these lines?

 --
 Arie

Hi,
Basically, cgroup's blkio controller works for such a scenario. Please
see http://www.mjmwired.net/kernel/Documentation/cgroups/blkio-controller.txt
for details.
In my tests I found that the blkio controller shipped with older kernels
(e.g. RHEL 6's 2.6.32) doesn't work very well when all the IO workers
in each group are very seeky ones, unless you echo 0 > slice_idle to
switch to IOPS mode manually. However, the latest upstream kernel
works very well with the default settings, without any adjustment.
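
[Concretely, that tunable lives under the device's queue directory when
CFQ is the active scheduler; sda below is a placeholder for the actual
device, and root is required:]

```shell
# Confirm the device is using CFQ (the active scheduler is shown
# in brackets):
cat /sys/block/sda/queue/scheduler

# Disable idling so CFQ stops waiting for a seeky task to issue its
# next request, effectively switching group scheduling to IOPS mode:
echo 0 > /sys/block/sda/queue/iosched/slice_idle
```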

Thanks,
Zhu Yanhai
