Hello Thanh

Thank you very much for looking into this. Some comments.

> - enable IP forwarding in B
>         echo 1 > /proc/sys/net/ipv4/ip_forward
> 
> This is not necessary in our cloud as ip_forward is already enabled
> by default.

We recently had a problem where this was not enabled in the docker
image by default [1]. But the situation might have changed since then.
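
To double check on the docker host:

        sysctl net.ipv4.ip_forward

should print "net.ipv4.ip_forward = 1".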

> - clean ip tables in B
>         sudo iptables -F
>         sudo iptables -t nat -F
> 
> In my testing cleaning the iptables in this way breaks the docker
> container's ability to route to the outside world, so I would not
> recommend cleaning the iptables as this will make the container
> unable to reach anything at all.

A requirement of our test setup is that the docker container can reach
the builder with no nat'ing in between. By default the docker network
masquerades traffic to the outside world, and cleaning the iptables
gets rid of that. It is a rather rudimentary way of achieving it, but
it worked for us. The proper way to do it is to create a separate
docker network with the option
com.docker.network.bridge.enable_ip_masquerade disabled.
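
For reference, something along these lines should do it (the network
name and subnet here are just examples):

        docker network create \
                -o com.docker.network.bridge.enable_ip_masquerade=false \
                --subnet 172.28.0.0/16 csit-net
        docker run --rm -it --network csit-net centos:7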

Disabling masquerading also requires adding a route on the builder back
to the docker network. Otherwise, as you say, there won't be any
connectivity:

route add -net <docker network> netmask <docker netmask> gw <docker host ip>
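
For example, assuming the default docker bridge subnet 172.17.0.0/16:

        route add -net 172.17.0.0 netmask 255.255.0.0 gw <docker host ip>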

Worth mentioning is that this whole setup worked for us before the
cloud provider change.

> We should be setting iptables -P FORWARD ACCEPT to allow another
> host to reach inside the docker container on the docker host;
> however, even after setting this, things are still unroutable.

This is one more thing that docker used to do by default but no longer
does. I will cross-check whether it affects us, but I don't think so.
Our specific problem is packets going out of the container towards the
builder: they are forwarded on the docker host but get lost somewhere
between the docker host and the builder.
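
In case it helps reproducing it, capturing on both ends while pinging
the builder from the container shows where the packets disappear
(interface name and subnet below are just examples, adjust to the
actual setup):

        # run on the docker host's outbound interface and on the builder
        sudo tcpdump -ni eth0 icmp and net 172.17.0.0/16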

Thanks again
Jaime.

[1] https://git.opendaylight.org/gerrit/#/c/65807/

-----Original Message-----
From: Thanh Ha <thanh...@linuxfoundation.org>
To: jcaam...@suse.de
Cc: integration-...@lists.opendaylight.org <integration-dev@lists.opendaylight.org>, sfc-dev@lists.opendaylight.org <sfc-dev@lists.opendaylight.org>
Subject: Re: [integration-dev] [sfc-dev] sfc csit failure
Date: Sun, 21 Jan 2018 21:59:56 -0500

Hi Jaime,

Sorry for taking so long to get back to you on this one. I spent some
time tonight trying to troubleshoot this but haven't been able to get
the host -> docker connection working yet; perhaps some experts can
help troubleshoot with the information below.

The test scenario is as follows. I manually spun up 2 hosts in a test
environment: one "builder" (Host-A) and one "docker". I did not modify
anything in the image other than starting it up and launching a
centos:7 container on the docker host.

container: centos:7 image running on "docker"
docker: The basic docker VM provided by ODL infra
builder: The generic build VM provided by ODL infra

Scenario 1: container ping docker host: SUCCESS
Scenario 2: container ping builder vm: SUCCESS
Scenario 3: docker host ping container: SUCCESS
Scenario 4: builder vm ping docker host: SUCCESS
Scenario 5: builder vm ping container: FAIL

So it seems the only direction that doesn't work is another vm (the
builder) pinging a container on the docker host. According to the
docker documentation here:

https://docs.docker.com/engine/userguide/networking/default_network/container-communication/

We should be setting iptables -P FORWARD ACCEPT to allow another host
to reach inside the docker container on the docker host; however, even
after setting this, things are still unroutable.

Below is a list of the iptables rules that are active on the system.


Jaime sent me some configuration for the docker host as follows.

- enable IP forwarding in B
        echo 1 > /proc/sys/net/ipv4/ip_forward

This is not necessary in our cloud as ip_forward is already enabled by
default.


- clean ip tables in B
        sudo iptables -F
        sudo iptables -t nat -F

In my testing cleaning the iptables in this way breaks the docker
container's ability to route to the outside world, so I would not
recommend cleaning the iptables as this will make the container unable
to reach anything at all.

Any ideas on what else I should check or try to configure?

Regards,
Thanh


# iptables -L -n 
Chain INPUT (policy ACCEPT) 
target     prot opt source               destination          

Chain FORWARD (policy ACCEPT) 
target     prot opt source               destination          
DOCKER-USER  all  --  0.0.0.0/0            0.0.0.0/0            
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0            
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            

Chain OUTPUT (policy ACCEPT) 
target     prot opt source               destination          

Chain DOCKER (1 references) 
target     prot opt source               destination          

Chain DOCKER-ISOLATION (1 references) 
target     prot opt source               destination          
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            

Chain DOCKER-USER (1 references) 
target     prot opt source               destination          
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           



On Fri, Dec 29, 2017 at 7:48 AM, Jaime Caamaño Ruiz <jcaam...@suse.de>
wrote:
> I took another look into this in oxygen.
> 
> The test that fails uses docker to set up multiple OVS instances.
> All these instances run on a docker bridge network and must have
> connectivity to ODL, each with a distinct IP from that network. To
> achieve this, masquerading is disabled for the docker network, ip
> forwarding is enabled on the docker host and a route for this
> network via the docker host is added to the odl host.
> 
> This setup worked fine before the cloud provider transition. But now
> packets are dropped between the docker host and the odl host when
> the source IP is that of the docker network. I have verified
> forwarding is done correctly on the docker host. I have also
> verified that the docker network is reachable from the odl host, so
> the problem is in the other direction.
> 
> I am out of ideas now but this might be out of our control and the
> test invalid as is.
> 
> BR
> Jaime.
> 
> 
> 
> On Fri, 2017-12-22 at 12:17 -0800, Jamo Luhrsen wrote:
> > On 12/22/2017 07:45 AM, Sam Hague wrote:
> > >
> > >
> > > > On Fri, Dec 22, 2017 at 7:37 AM, Jaime Caamaño Ruiz
> > > > <jcaamano@suse.de <mailto:jcaam...@suse.de>> wrote:
> > >
> > >     Hello Jamo
> > >
> > >     Took a quick look.
> > >
> > >     The problem seems to be that the odl-sfc-openflow-renderer
> > >     feature is installed but the dependent bundles are not
> > >     activated. This feature is in charge of writing the flows,
> > >     and the test fails because there are no flows on the
> > >     switches.
> >
> > yeah, I saw a bunch of ugly karaf.log messages as well. There is
> > an NPE showing up from sfc code that may or may not be important.
> > Check this log:
> >
> > https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/sfc-csit-3node-docker-full-deploy-all-carbon/390/odl1_karaf.log.gz
> >
> > >     This is not the only feature with this problem, it seems to
> > >     me that more than one bundle is just not properly started.
> > >     So something is wrong in the feature or blueprint handling.
> > >     Unfortunately, older logs are not available to compare but
> > >     there were no changes in Carbon since the last success job.
> >
> > I tried a job with Carbon SR2 since that should probably have been
> > from before when it was passing. It failed as well. The logs
> > didn't save, but you can see in the console log that it's the same
> > failure:
> >
> > https://jenkins.opendaylight.org/releng/view/sfc/job/sfc-csit-3node-docker-full-deploy-all-carbon/392/console
> >
> > Thanks,
> > JamO
> >
> >
> > >     Oxygen job is also failing from Dec 9. In this case, it
> > >     seems there is no connectivity between the docker network
> > >     running in the tools system and ODL. Something that we fixed
> > >     a few days ago, caused by a new tools docker image that did
> > >     not have ip forward enabled. But now I don't know what the
> > >     problem is. When did we effectively switch to the new cloud
> > >     provider?
> > >
> > > the switch was last week so around the 15th.
> > >
> > >
> > >     BR
> > >     Jaime.
> > >
> > >
> > >     On Thu, 2017-12-21 at 16:57 -0800, Jamo Luhrsen wrote:
> > >     > Hi SFC,
> > >     >
> > >     > I can't remember if there was ever any discussion on this
> > >     > one yet, but there is a consistent failure it would be
> > >     > nice to see fixed. Or if there is a low sev bug we can
> > >     > point to, I'll edit the test to list that bug in the
> > >     > report.
> > >     >
> > >     > looks like there is something not showing up in an ovs
> > >     > docker containers flow table. Specifically, it's not
> > >     > finding "*actions=pop_nsh*"
> > >     >
> > >     > I'm vaguely remembering something regarding this, but
> > >     > can't recall it exactly.
> > >     >
> > >     > here's an example:
> > >     > https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/sfc-csit-3node-docker-full-deploy-all-carbon/390/log.html.gz#s1-s1-s1
> > >     >
> > >     >
> > >     > Thanks,
> > >     > JamO
> > >     >
> > >
> > >
> >
> >
> 

_______________________________________________
sfc-dev mailing list
sfc-dev@lists.opendaylight.org
https://lists.opendaylight.org/mailman/listinfo/sfc-dev
