-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31505/#review74457
-----------------------------------------------------------



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121040>

    +1
    You can pull out all of the constraints and insert the available ids at
    initialization. If that set is empty then you'll return an Error().
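
    Something along these lines, purely as a sketch -- the function and
    variable names and the id range handling here are made up, and plain
    std:: containers stand in for whatever type we actually use (in the
    isolator this would surface as an Error() from create() rather than a
    throw):

        // Build the pool of available egress flow ids once, up front.
        #include <cstdint>
        #include <set>
        #include <stdexcept>

        std::set<uint16_t> initializeFreeFlowIds(uint16_t min, uint16_t max)
        {
          std::set<uint16_t> freeFlowIds;

          // Wider loop variable so max == UINT16_MAX cannot wrap around.
          for (uint32_t id = min; id <= max; ++id) {
            freeFlowIds.insert(static_cast<uint16_t>(id));
          }

          if (freeFlowIds.empty()) {
            // Real code: return Error("No egress flow ids available");
            throw std::runtime_error("No egress flow ids available");
          }

          return freeFlowIds;
        }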



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121041>

    You can remove the hash because you'll have a set (or something else) of
    available ids.
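
    Allocation for a new container is then just pulling an element out of
    that set. A sketch with the same made-up names as above, using
    std::optional as a stand-in for Option:

        // Take the smallest free flow id, or report that the pool is empty.
        #include <cstdint>
        #include <optional>
        #include <set>

        std::optional<uint16_t> allocateFlowId(std::set<uint16_t>& freeFlowIds)
        {
          if (freeFlowIds.empty()) {
            return std::nullopt;  // The isolator would fail the launch here.
          }

          uint16_t id = *freeFlowIds.begin();
          freeFlowIds.erase(freeFlowIds.begin());
          return id;
        }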



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121043>

    I'm confused - do these need to be configured for each container? I thought
    all (host and container) ICMP traffic was on the same flow?



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121044>

    Why do you need a new flow id on update? Each container has a single flow
    id, doesn't it?



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121045>

    This would be adding the released flow id back into the available set.
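
    I.e. something like the following, again with made-up names:

        // Return a container's flow id to the free pool during cleanup.
        #include <cassert>
        #include <cstdint>
        #include <set>

        void releaseFlowId(std::set<uint16_t>& freeFlowIds, uint16_t id)
        {
          // Inserting an id that is already free would be a bookkeeping bug.
          bool inserted = freeFlowIds.insert(id).second;
          assert(inserted);
          (void) inserted;  // Silence unused warning when NDEBUG is set.
        }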



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121047>

    Why this change? The prior code used the eth0 instance variable.



src/slave/containerizer/isolators/network/port_mapping.cpp
<https://reviews.apache.org/r/31505/#comment121046>

    Why this change? The prior code used the eth0 instance variable.


- Ian Downes


On Feb. 26, 2015, 3:15 p.m., Cong Wang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31505/
> -----------------------------------------------------------
> 
> (Updated Feb. 26, 2015, 3:15 p.m.)
> 
> 
> Review request for mesos, Chi Zhang, Ian Downes, and Jie Yu.
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Currently we do nothing on the host egress side. By default, the kernel uses
> its own hash function to classify packets to different TX queues (if the
> hardware interface supports multiqueue), so packets coming out of different
> containers can end up queueing in the same TX queue; in that case we saw
> buffer bloat on some TX queues causing packet drops.
> 
> We need to isolate the egress traffic so that containers do not interfere
> with each other. The number of hardware TX queues is limited by the hardware
> interface and is usually not enough to map our containers 1:1, therefore we
> need a software solution. We choose fq_codel and use tc filters to classify
> packets coming out of different containers into different fq_codel flows,
> and the codel algorithm on each flow also helps to reduce the buffer bloat.
> Note that when the packets leave fq_codel they still share the physical TX
> queue(s); this is (almost) beyond our control, so we have to rely on the
> kernel behavior.
> 
> TODO: get some performance numbers
> 
> 
> Diffs
> -----
> 
>   src/slave/containerizer/isolators/network/port_mapping.hpp 
> 8443097b2c79fef5ae0e23a3fb815ffec0318a93 
>   src/slave/containerizer/isolators/network/port_mapping.cpp 
> 5227987cdb7b904c2f4bb2bdf5c5d705a435010d 
> 
> Diff: https://reviews.apache.org/r/31505/diff/
> 
> 
> Testing
> -------
> 
> Manually started two Mesos containers running netperf side by side.
> 
> 
> Thanks,
> 
> Cong Wang
> 
>
