Quoting Serge Hallyn (serge.hal...@ubuntu.com):
> Quoting Jäkel, Guido (g.jae...@dnb.de):
> > >Quoting Jäkel, Guido (g.jae...@dnb.de):
> > >> Hi,
> > >>
> > >> I want to contribute an observation while playing around with my "empty
> > >> plain vanilla" container template: the test cycle is to start it,
> > >> open an ssh terminal session to it, leave it idle, and regularly shut
> > >> down the container.
> > >>
> > >> Now, if the container's eth0 is brought down by the shutdown, the
> > >> corresponding veth is still there after the lxc process ends, and the
> > >> idle ssh terminal client doesn't notice anything. This seems to hold
> > >> for a long time (maybe forever if there is no traffic?). I'm not able
> > >> to rename the veth at this point; it's reported as busy.
> > >
> > >Yes, AIUI this is proper tcp behavior, and there's really nothing we can
> > >do about it.  The tcp sock has to be kept open so we can tell the other
> > >end to shut down.
> > 
> > I completely agree ... 
> > 
> > 
> > >> But if I do a keystroke in the terminal, the client quits with a
> > >> broken-connection message after a short timeout. Also, a short time
> > >> later the veth disappears.
> > >>
> > >>
> > >> On the other hand, if I configure the container so that eth0 is not
> > >> brought down, then right at the termination of the lxc process the
> > >> ssh terminal quits and the veth disappears as well. Apart from this
> > >> test, I noticed a similar effect on other real-world containers with
> > >> connections to listeners inside: the veth stays around for a while
> > >> until these inbound connections have died.
> > 
> > ... , but what causes this "helpful" effect? I guess that the open
> > connections are reset, maybe by the stack as a result of closing the
> > network namespace. But why does this happen only if the interface was
> > left up (which is the abnormal case)?
> 
> You bring up a good point - we should be able to inject a tcp rst to
> force it to close.  So we may in fact be able to watch for this and
> fix it from userspace in lxc.  (In fact that may be the only place
> where it really makes sense to do it - since we *know* the container
> should be dying.)

If someone wants to experiment with this and send a patch with
a good example of how to test the patch - that would rock.
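For anyone picking this up, here is a minimal sketch of the RST semantics involved. This is a loopback demo, not lxc code - the sockets and variable names are made up for illustration. Closing a socket with SO_LINGER set to {on, 0 s} makes the kernel tear the connection down with a TCP RST instead of the normal FIN handshake, which is the kind of hard abort you would want for connections into a dying container:

```python
import socket
import struct

# Set up a throwaway loopback connection (stand-in for the
# container-side end of an ssh session).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# l_onoff=1, l_linger=0: close() now aborts the connection with a
# RST instead of queueing a FIN and lingering in FIN_WAIT/TIME_WAIT.
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                struct.pack("ii", 1, 0))
conn.close()

# The peer sees a hard reset rather than a clean EOF.
try:
    cli.recv(1)
    result = "eof"
except ConnectionResetError:
    result = "reset"
print(result)

cli.close()
srv.close()
```

Of course this only works on sockets the closing process owns; killing connections that belong to processes inside the container from outside would need something more, e.g. netlink, which is presumably where the real patch work lies.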

-serge

_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
