-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30546/#review76245
-----------------------------------------------------------



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123762>

    Can you chain these together rather than using await()? See below for 
suggested changes to _usage() to facilitate this.
    
    You would pass the hash key to usage() along with the counter value you 
get; this decouples the chaining from the presence of listeners. This also 
means you can avoid having the Option<> for the hash value: you'd just remove 
the entries if they failed.
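
    A rough sketch of how usage() could kick off the chain (names like 
'pressureCounters' are assumptions about your patch, not prescriptive):

        Future<ResourceStatistics> CgroupsMemIsolatorProcess::usage(
            const ContainerID& containerId)
        {
          ResourceStatistics result;
          // ... populate 'result' from memory.stat, usage_in_bytes, etc. ...

          // Chain through the pressure counter levels; no await() needed.
          // If there are no listeners the chain returns 'result' at once,
          // so the chaining is decoupled from their presence.
          return _usage(containerId, result, pressureCounters.keys());
        }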



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123761>

    What about calling this once for each counter level fetched and chaining 
the calls together? The next call in the chain will get the updated result and 
will include the next counter value:
    
        _usage(containerId, result, level, counter)
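
    A sketch of one step in that chain (the lambda form and names here are 
assumptions, not prescriptive):

        Future<ResourceStatistics> CgroupsMemIsolatorProcess::_usage(
            const ContainerID& containerId,
            ResourceStatistics result,
            std::list<Level> levels)
        {
          if (levels.empty()) {
            return result; // Every counter value has been folded in.
          }

          Level level = levels.front();
          levels.pop_front();

          // Fetch this level's counter, fold the value into the result,
          // then recurse on the remaining levels. A counter whose read
          // fails can simply be erased from the hashmap here.
          return pressureCounters[level]->value()
            .then(defer(self(), [=](uint64_t value) {
              ResourceStatistics next = result;
              // ... record 'value' for 'level' in 'next' ...
              return _usage(containerId, next, levels);
            }));
        }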



include/mesos/mesos.proto
<https://reviews.apache.org/r/30546/#comment123756>

    Can you point readers to the relevant kernel doc for memory pressure levels?



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123763>

    We should verify that memory pressure is working by constructing and 
destroying memory pressure listeners for each level. This should be done 
early, in create(), rather than potentially failing silently (only logging) 
later.
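
    For example (a sketch; the listener type and factory are assumptions 
about your patch):

        // In create(): fail fast if pressure listeners can't be established.
        const std::vector<Level> levels = {
            Level::LOW, Level::MEDIUM, Level::CRITICAL};

        for (const Level& level : levels) {
          Try<Owned<MemoryPressureListener>> listener =
            MemoryPressureListener::create(hierarchy, cgroup, level);

          if (listener.isError()) {
            return Error(
                "Failed to establish memory pressure listener at level '" +
                stringify(level) + "': " + listener.error());
          }

          // Letting 'listener' go out of scope destroys it; constructing
          // and destroying it is the whole capability check.
        }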



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123764>

    If any listener fails, then all of usage() always fails? But if all three 
listeners eventually fail, you switch to returning early and reporting the 
partially completed statistics?
    
    This is odd and inconsistent behavior; I suggest rethinking it.



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123765>

    This should be attempted in create() so we can verify early that it will 
work.



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123769>

    Do you really want to ASSERT this? Perhaps just skip the test if it's not 
present?
    
    You're also relying on the implementation of dd supporting the "--help" 
flag. This is _probably_ true on all Linuxes but, for example, it's not true 
on OS X (I know this test is Linux specific). I suggest you test using the 
flags you'll actually use: "dd count=1 bs=1 if=/dev/zero of=/dev/null"
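
    For example (plain ::system() here just for illustration; match whatever 
skip convention this test file uses):

        // Probe 'dd' with the exact invocation the test relies on, and
        // skip rather than ASSERT-fail if it's unavailable.
        if (::system("dd count=1 bs=1 if=/dev/zero of=/dev/null") != 0) {
          return; // No usable 'dd' on this machine; skip the test.
        }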



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123768>

    This seems racy: how can you be sure you don't write all 5 GB while the 
first slave is running, such that when the next slave starts there aren't any 
pressure events?
    
    You should also verify there's enough disk in the offer... and set the 
disk resource?
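
    Something along these lines before launching the task (the 5 GB figure 
mirrors the write size and is an assumption about the final test):

        // Sum the scalar 'disk' resources in the first offer to make sure
        // there's room to write 5 GB (disk is expressed in MB).
        double diskMB = 0.0;
        foreach (const Resource& resource, offers.get()[0].resources()) {
          if (resource.name() == "disk" && resource.type() == Value::SCALAR) {
            diskMB += resource.scalar().value();
          }
        }

        ASSERT_GE(diskMB, 5 * 1024.0);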


- Ian Downes


On March 11, 2015, 3:32 p.m., Chi Zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30546/
> -----------------------------------------------------------
> 
> (Updated March 11, 2015, 3:32 p.m.)
> 
> 
> Review request for mesos, Dominic Hamon, Ian Downes, and Jie Yu.
> 
> 
> Bugs: MESOS-2136
>     https://issues.apache.org/jira/browse/MESOS-2136
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> MemIsolator: expose memory pressures for containers.
> 
> 
> Diffs
> -----
> 
>   include/mesos/mesos.proto 9df972d750ce1e4a81d2e96cc508d6f83cad2fc8 
>   src/slave/containerizer/isolators/cgroups/mem.hpp 
> a00f723de61b9bccd76a2948b6d14fde7a993a8d 
>   src/slave/containerizer/isolators/cgroups/mem.cpp 
> 6299ca4ba01b65daa3d75c64150e2738e32b841e 
>   src/tests/slave_recovery_tests.cpp 53adae0118a26e6d25a9ff20c6374cc8e73275b1 
> 
> Diff: https://reviews.apache.org/r/30546/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Chi Zhang
> 
>
