-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22064/#review45440
-----------------------------------------------------------



src/slave/containerizer/isolators/cgroups/cpushare.cpp
<https://reviews.apache.org/r/22064/#comment80305>

    This should be 'onAny'?
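
    For example (a sketch only; the names here are illustrative and may
    not match the patch exactly):

        cgroups::destroy(hierarchy, info->cgroup)
          .onAny(defer(PID<CgroupsCpushareIsolatorProcess>(this),
                       &CgroupsCpushareIsolatorProcess::_cleanup,
                       containerId,
                       lambda::_1));

    'onAny' fires whether the destroy succeeds, fails, or is discarded by
    the timeout, so the continuation still runs in the failure cases.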



src/slave/containerizer/linux_launcher.cpp
<https://reviews.apache.org/r/22064/#comment80304>

    In fact, we just discovered an issue that occurs when an isolator
    cleans up an orphaned container before the processes inside it are
    killed. (In our case, it causes TCP connections to be leaked.)
    
    So I think we should do a blocking wait here with a timeout. If the
    destroy does not finish before the timeout, we probably want to
    return an error?
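
    A rough sketch of what I have in mind (assuming a blocking wait is
    acceptable in this context and that the enclosing function can return
    an Error; the timeout value is illustrative):

        Future<Nothing> destroyed = cgroups::destroy(hierarchy, cgroup);

        // Block until the cgroup destroy completes, but bound the wait.
        if (!destroyed.await(Seconds(60))) {
          return Error("Timed out waiting for the cgroup to be destroyed");
        }

        if (!destroyed.isReady()) {
          return Error(
              "Failed to destroy cgroup: " +
              (destroyed.isFailed() ? destroyed.failure() : "discarded"));
        }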


- Jie Yu


On June 11, 2014, 10:21 p.m., Ian Downes wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/22064/
> -----------------------------------------------------------
> 
> (Updated June 11, 2014, 10:21 p.m.)
> 
> 
> Review request for mesos, Jie Yu and Vinod Kone.
> 
> 
> Repository: mesos-git
> 
> 
> Description
> -------
> 
> Use a timeout to discard cgroups::destroy.
> 
> 
> Diffs
> -----
> 
>   src/linux/cgroups.hpp 21d87a0783c2edd653d28fa89c59773200ae647e 
>   src/linux/cgroups.cpp 142ac437d6d53b678ef284bda46444e1615ff0d1 
>   src/slave/containerizer/isolators/cgroups/cpushare.hpp 909ea8802b3746b73aae8d62c8e49f259c471fd5 
>   src/slave/containerizer/isolators/cgroups/cpushare.cpp 3d253af51677dcb4dc48dc9e01bdc2ba80847da9 
>   src/slave/containerizer/isolators/cgroups/mem.hpp 362ebcfa2e16701b225deea0fbeb92e4a56d51aa 
>   src/slave/containerizer/isolators/cgroups/mem.cpp 60013d4e840f6b1f131b796b95916d1978b37c70 
>   src/slave/containerizer/linux_launcher.cpp c17724b138de9d64856fb85db019e52043fbc7af 
> 
> Diff: https://reviews.apache.org/r/22064/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Ian Downes
> 
>
