On Tue, Apr 28, 2009 at 09:46:04PM +0900, Ryo Tsuruta wrote:
> The body of bio-cgroup.
>
> Based on 2.6.30-rc3-git3
> Signed-off-by: Hirokazu Takahashi
> Signed-off-by: Ryo Tsuruta
Hi Ryo,
a few minor coding style issues reported by checkpatch.pl:
WARNING: line over 80 characters
#138: FILE: in
Andrea Righi wrote:
> Andrea Righi wrote:
>> Vivek Goyal wrote:
>> [snip]
>>> Ok, I will give more details of the thought process.
>>>
>>> I was thinking of maintaining an rb-tree per request queue and not an
>>> rb-tree per cgroup. This tree can
Andrea Righi wrote:
> Vivek Goyal wrote:
> [snip]
>> Ok, I will give more details of the thought process.
>>
>> I was thinking of maintaining an rb-tree per request queue and not an
>> rb-tree per cgroup. This tree can contain all the bios submitted to that
>>
Vivek Goyal wrote:
[snip]
> Ok, I will give more details of the thought process.
>
> I was thinking of maintaining an rb-tree per request queue and not an
> rb-tree per cgroup. This tree can contain all the bios submitted to that
> request queue through __make_request(). Every node in the tree will
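A minimal sketch of the per-request-queue rb-tree idea described above (not code from any of the posted patches): each node in the tree groups the bios of one cgroup, keyed by a cgroup id. The names io_queue_node and ioq_find_or_insert are hypothetical.

#include <linux/rbtree.h>
#include <linux/bio.h>

struct io_queue_node {
	struct rb_node	rb;		/* linkage into the per-queue rb-tree */
	unsigned int	cgroup_id;	/* key: cgroup that submitted these bios */
	struct bio_list	bios;		/* bios queued by tasks of that cgroup */
};

/* Insert @new into @root, or return the existing node for its cgroup_id. */
static struct io_queue_node *ioq_find_or_insert(struct rb_root *root,
						struct io_queue_node *new)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;
	struct io_queue_node *node;

	while (*p) {
		parent = *p;
		node = rb_entry(parent, struct io_queue_node, rb);
		if (new->cgroup_id < node->cgroup_id)
			p = &(*p)->rb_left;
		else if (new->cgroup_id > node->cgroup_id)
			p = &(*p)->rb_right;
		else
			return node;	/* this cgroup already has a node */
	}
	rb_link_node(&new->rb, parent, p);
	rb_insert_color(&new->rb, root);
	return new;
}

A caller in __make_request() would look up (or allocate) the node for the submitting task's cgroup and bio_list_add() the bio to it; a dispatch loop would then service the nodes according to their weights.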
Hirokazu Takahashi wrote:
> Hi,
>
> It's possible the algorithm of dm-ioband can be placed in the block layer
> if it is really a big problem.
> But I doubt it can control every block I/O as we wish, since
> the interface the cgroup supports is quite poor.
Had a question
Vivek Goyal wrote:
> On Fri, Sep 19, 2008 at 08:20:31PM +0900, Hirokazu Takahashi wrote:
>>> To avoid stacking another device (dm-ioband) on top of every
>>> device we want to subject to rules, I was thinking of maintaining an
>>> rb-tree per request queue. Requests will first go into t
Vivek Goyal wrote:
> On Thu, Sep 18, 2008 at 05:18:50PM +0200, Andrea Righi wrote:
>> Vivek Goyal wrote:
>>> On Thu, Sep 18, 2008 at 04:37:41PM +0200, Andrea Righi wrote:
>>>> Vivek Goyal wrote:
>>>>> On Thu, Sep 18, 2008 at 09:04:18PM +0900, Ryo Tsuruta wrote:
Vivek Goyal wrote:
> On Thu, Sep 18, 2008 at 04:37:41PM +0200, Andrea Righi wrote:
>> Vivek Goyal wrote:
>>> On Thu, Sep 18, 2008 at 09:04:18PM +0900, Ryo Tsuruta wrote:
>>>> Hi All,
>>>>
>>>> I have got excellent results of dm-ioband, that con
>
> Secondly, why do we have to create an additional dm-ioband device for
> every device we want to control using rules? This looks a little odd,
> at least to me. Can't we keep it in line with the rest of the controllers,
> where task grouping takes place using cgroups and rules are spec
Dong-Jae Kang wrote:
> Hi,
>
> 2008/8/13 Andrea Righi <[EMAIL PROTECTED]>:
>> Fernando Luis Vázquez Cao wrote:
>>> On Tue, 2008-08-12 at 22:29 +0900, Andrea Righi wrote:
>>>> Andrea Righi wrote:
>>>>> Hirokazu Takahashi wrote:
>>
[EMAIL PROTECTED] wrote:
> Fernando Luis Vázquez Cao wrote:
>>> BTW as I said in a previous email, an interesting path to be explored
>>> IMHO could be to think in terms of IO time. So, look at the time an IO
>>> request is issued to the drive, look at the time the request is served,
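A rough sketch of the "account IO time" idea quoted above, assuming made-up names (io_time_stat, io_time_issue, io_time_complete): stamp a request when it is issued to the drive and charge the elapsed service time to the issuing cgroup when it completes.

#include <linux/ktime.h>

struct io_time_stat {
	ktime_t	issued;		/* when the request was sent to the drive */
	u64	charged_ns;	/* service time accumulated for this cgroup */
};

static inline void io_time_issue(struct io_time_stat *st)
{
	st->issued = ktime_get();
}

static inline void io_time_complete(struct io_time_stat *st)
{
	/* charge the time the drive spent serving this request */
	st->charged_ns += ktime_to_ns(ktime_sub(ktime_get(), st->issued));
}

The appeal of accounting in time rather than bytes is that seeky workloads, which consume far more disk time per byte transferred, get charged accordingly.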
Fernando Luis Vázquez Cao wrote:
> On Tue, 2008-08-12 at 22:29 +0900, Andrea Righi wrote:
>> Andrea Righi wrote:
>>> Hirokazu Takahashi wrote:
>>>>>>>>> 3. & 4. & 5. - I/O bandwidth shaping & General design aspects
>>>>>
Andrea Righi wrote:
> Hirokazu Takahashi wrote:
>>>>>>> 3. & 4. & 5. - I/O bandwidth shaping & General design aspects
>>>>>>>
>>>>>>> The implementation of an I/O scheduling algorithm is to a certain extent
Hirokazu Takahashi wrote:
>> 3. & 4. & 5. - I/O bandwidth shaping & General design aspects
>>
>> The implementation of an I/O scheduling algorithm is to a certain extent
>> influenced by what we are trying to achieve in terms of I/O bandwidth
>> shaping, but, as discussed below,
Fernando Luis Vázquez Cao wrote:
>>> This seems to be the easiest part, but the current cgroups
>>> infrastructure has some limitations when it comes to dealing with block
>>> devices: impossibility of creating/removing certain control structures
>>> dynamically and hardcoding of subsystems (i.e. r
Fernando Luis Vázquez Cao wrote:
> This RFC ended up being a bit longer than I had originally intended, but
> hopefully it will serve as the start of a fruitful discussion.
Thanks for posting this detailed RFC! A few comments below.
> As you pointed out, it seems that there is not much consensus
Ryo Tsuruta wrote:
> +static int mem_cgroup_charge_common(struct page *page, struct mm_struct *mm,
> +					gfp_t gfp_mask, enum charge_type ctype,
> +					struct mem_cgroup *memcg)
> +{
> +	struct page_cgroup *pc;
> +#ifdef CONFIG_CGROUP_MEM_RES_C
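This is not the posted bio-cgroup code, only the gist of what a charge path like the one quoted above has to record for bio tracking, under hypothetical names (page_io_owner, page_io_owner_set/get): remember which cgroup charged the page so that a bio built from it later can be attributed correctly.

/* Hypothetical record kept alongside struct page_cgroup. */
struct page_io_owner {
	unsigned short	io_cgroup_id;	/* cgroup that charged (dirtied) the page */
};

/* Charge path: remember who owns the page. */
static inline void page_io_owner_set(struct page_io_owner *owner,
				     unsigned short id)
{
	owner->io_cgroup_id = id;
}

/* Bio submission path: attribute the I/O to the recorded owner. */
static inline unsigned short page_io_owner_get(struct page_io_owner *owner)
{
	return owner->io_cgroup_id;
}

This is what allows writeback I/O, issued by kernel threads rather than by the task that dirtied the page, to still be charged to the right cgroup.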
Hirokazu Takahashi wrote:
> Hi, Andrea,
>
> I'm working with Ryo on dm-ioband and other stuff.
>
>>> On Mon, 2008-08-04 at 20:22 +0200, Andrea Righi wrote:
>>>> But I'm not yet convinced that limiting the IO writes at the device
>>>> mapper
Satoshi UCHIDA wrote:
> Andrea's requirements are:
> * to be able to set and control by absolute (direct) performance.
> * to improve the IO performance predictability of each cgroup
>   (try to guarantee more precise IO performance values).
> And he gave the advice: "Can't a framework which organize
Paul Menage wrote:
> On Mon, Aug 4, 2008 at 1:44 PM, Andrea Righi <[EMAIL PROTECTED]> wrote:
>> A safer approach IMHO is to force the tasks to wait synchronously on
>> each operation that directly or indirectly generates i/o.
>>
>> In particular the solution used
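A sketch of the "make the task wait synchronously" approach mentioned above, with all names (io_budget, io_budget_charge) invented for illustration: each cgroup gets a byte budget per period, and a task that exceeds it sleeps in the submission path until a new period begins.

#include <linux/delay.h>
#include <linux/jiffies.h>

struct io_budget {
	unsigned long	period;		/* accounting period, in jiffies */
	unsigned long	period_start;	/* jiffies at the start of the period */
	long		bytes_left;	/* budget remaining in this period */
	long		bytes_per_period;
};

/*
 * Charge @bytes in the submission path; if the budget is exhausted, make the
 * task sleep until a new period begins.  Locking is omitted for brevity.
 */
static void io_budget_charge(struct io_budget *b, long bytes)
{
	if (time_after(jiffies, b->period_start + b->period)) {
		b->period_start = jiffies;
		b->bytes_left = b->bytes_per_period;
	}
	b->bytes_left -= bytes;
	while (b->bytes_left < 0) {
		msleep(jiffies_to_msecs(b->period));
		b->period_start = jiffies;
		b->bytes_left += b->bytes_per_period;
	}
}

Because the wait happens in the context of the task that generates the I/O, buffered writes and direct I/O are both slowed at the source rather than at an intermediate device.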
Dave Hansen wrote:
> On Mon, 2008-08-04 at 20:22 +0200, Andrea Righi wrote:
>> But I'm not yet convinced that limiting the IO writes at the device
>> mapper layer is the best solution. IMHO it would be better to throttle
>> applications' writes when they're d
Balbir Singh wrote:
> Dave Hansen wrote:
>> On Mon, 2008-08-04 at 17:51 +0900, Ryo Tsuruta wrote:
>>> This series of dm-ioband patches now includes "The bio tracking
>>> mechanism," which has been posted individually to this mailing list.
>>> This makes it easy for anybody to control the I/
Dave Hansen wrote:
> On Mon, 2008-08-04 at 17:51 +0900, Ryo Tsuruta wrote:
>> This series of dm-ioband patches now includes "The bio tracking
>> mechanism," which has been posted individually to this mailing list.
>> This makes it easy for anybody to control the I/O bandwidth even when
>> th