Re: [Lxc-users] Making LXC accept an already open network interface—or other options

2011-05-09 Thread Serge Hallyn
Quoting David Serrano (dserra...@gmail.com):
 Hi,
 
 At $work we're currently using KVM and setting it up so that it uses a
 previously opened TAP interface: 'kvm -net tap,fd=3'. This way, we are
 able to create the interface and set up a couple of ebtables filters on
 it before going on. Now, we would like to do the same with LXC.
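 
 To illustrate, a rough sketch of that setup using a persistent tap
 device (the device name and the filter rule are made-up examples, not
 our real ones):
 
 # create the tap device up front and filter it before starting kvm
 ip tuntap add dev tap0 mode tap
 ebtables -A FORWARD -i tap0 -p IPv4 --ip-src ! 10.0.0.5 -j DROP
 kvm -net nic -net tap,ifname=tap0,script=no ...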
 
 After taking a look at the documentation, I don't think LXC is able to
 get the interface from a given FD, so I guess I should look for a
 workaround. I see there's a message in the LXC log that says
 «instanciated veth 'vethC1zCUS/vethtCn0zY'», but the relevant container
 doesn't appear on the same line. Yes, it's on the previous line, but
 relying on that is prone to race conditions. Moreover, reading from a
 debug log isn't elegant at all...
 
 Do I have other options I haven't considered?

Best would be to patch the LXC code to do this, and send the patch
upstream.  But first, for testing and $firebrigade purposes,
the way to do this by hand would be to write your own our_lxc_start.sh
script which does something like:

#!/bin/sh
# Snapshot the existing veth devices, create a new pair, then diff
# the snapshots to find the pair we just created.
devs=$(ls /sys/class/net | grep '^veth')
ip link add type veth
newdevs=$(ls /sys/class/net | grep '^veth')
# The new pair is whatever is in $newdevs but not in $devs (a set
# difference, not an intersection); keep the names seen only once:
pair=$(printf '%s\n%s\n%s\n' "$devs" "$devs" "$newdevs" | sort | uniq -u)
dev1=$(echo "$pair" | sed -n 1p)
dev2=$(echo "$pair" | sed -n 2p)
# Attach the host end to your bridge and bring it up:
brctl addif br0 "$dev1"
ip link set "$dev1" up
# Start the container detached; it has no network yet.
lxc-start -n mycontainer -d
# Get $PID, the init pid of mycontainer (e.g. via lxc-info or ps),
# then move the peer device into the container:
ip link set "$dev2" netns $PID
# Now, from your mycontainer console, configure $dev2, which is now in
# the container.  You can rename it to eth0 inside the container with:
ip link set "$dev2" name eth0

Something like that.  Patching lxc-start to take an extra command line
argument saying 'use this fd' shouldn't be a big deal.
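
For illustration only, the eventual interface might look something like
this (the --netdev-fd flag is hypothetical; it does not exist today):

# parent opens and configures the device, then passes the open fd
lxc-start -n mycontainer --netdev-fd 3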

-serge



[Lxc-users] local routing

2011-05-09 Thread Ulli Horlacher

I have an LXC host (zoo, 129.69.1.68) with a container (vmtest8, 129.69.8.6).

I want all host/container communication to stay internal, without
network traffic going via the external router.

I know I can set up host routes like:

root@vms2:# route add -host 129.69.8.6 gw 129.69.1.68

root@vms2:# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
129.69.8.6      129.69.1.68     255.255.255.255 UGH   0      0        0 br0
129.69.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br0
0.0.0.0         129.69.1.254    0.0.0.0         UG    100    0        0 br0

root@vms2:# lxc-console -n vmtest8

Type <Ctrl+a q> to exit the console

root@vmtest8:~# route add -host 129.69.1.68 gw 129.69.8.6

root@vmtest8:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
129.69.1.68     129.69.8.6      255.255.255.255 UGH   0      0        0 eth0
129.69.8.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         129.69.8.254    0.0.0.0         UG    0      0        0 eth0


root@vms2:# ping 129.69.8.6
PING 129.69.8.6 (129.69.8.6) 56(84) bytes of data.
64 bytes from 129.69.8.6: icmp_seq=1 ttl=64 time=9.54 ms
64 bytes from 129.69.8.6: icmp_seq=2 ttl=64 time=0.015 ms
64 bytes from 129.69.8.6: icmp_seq=3 ttl=64 time=0.014 ms
64 bytes from 129.69.8.6: icmp_seq=4 ttl=64 time=0.013 ms
64 bytes from 129.69.8.6: icmp_seq=5 ttl=64 time=0.015 ms
64 bytes from 129.69.8.6: icmp_seq=6 ttl=64 time=0.013 ms
^C
--- 129.69.8.6 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 4998ms
rtt min/avg/max/mdev = 0.013/1.602/9.547/3.553 ms

But I do not want to set up such host routes manually; they should be
created automatically somehow.

With only one host/container pair this is not much trouble, but later I
want to have a dozen containers, and they should all use internal
routing.

Modifying the routing table of the host and of each container by hand
is nasty.
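
A minimal sketch of automating the host side, assuming the container
addresses are kept in a file, one per line (the file path and the
hardcoded gateway address are made-up examples):

#!/bin/sh
# Add a host route for every container address listed in the file,
# mirroring the manual 'route add -host ... gw ...' command above.
GW=129.69.1.68
while read ip; do
    route add -host "$ip" gw "$GW"
done < /etc/lxc/container-ips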


-- 
Ullrich Horlacher          Server- und Arbeitsplatzsysteme
Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart     Tel:    ++49-711-685-65868
Allmandring 30             Fax:    ++49-711-682357
70550 Stuttgart (Germany)  WWW:    http://www.rus.uni-stuttgart.de/



Re: [Lxc-users] local routing

2011-05-09 Thread Daniel Lezcano
On 05/09/2011 03:10 PM, Ulli Horlacher wrote:

 I have an LXC host (zoo, 129.69.1.68) with a container (vmtest8, 129.69.8.6).

 I want all host/container communication to stay internal, without
 network traffic going via the external router.

Maybe I misunderstood, but why don't you set up a bridge for the
container only, without attaching the physical interface, and make sure
/proc/sys/net/ipv4/ip_forward is not set?
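
A minimal sketch of that idea, assuming a bridge named br1 and a
private subnet (both made-up examples): on the host,

brctl addbr br1
ifconfig br1 10.0.3.1 netmask 255.255.255.0 up

and in the container's configuration:

lxc.network.type = veth
lxc.network.link = br1
lxc.network.flags = up
lxc.network.ipv4 = 10.0.3.6/24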




Re: [Lxc-users] local routing

2011-05-09 Thread Ulli Horlacher
On Mon 2011-05-09 (22:52), Daniel Lezcano wrote:
 On 05/09/2011 03:10 PM, Ulli Horlacher wrote:
 
 
  I have an LXC host (zoo, 129.69.1.68) with a container (vmtest8, 129.69.8.6).
 
  I want all host/container communication to stay internal, without
  network traffic going via the external router.
 
 Maybe I misunderstood, but why don't you set up a bridge for the
 container only, without attaching the physical interface, and make sure
 /proc/sys/net/ipv4/ip_forward is not set?

Of course the containers shall be able to communicate with the
internet, too.

But I want host/container communication to stay internal and not go via
the external router.

-- 
Ullrich Horlacher          Server- und Arbeitsplatzsysteme
Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart     Tel:    ++49-711-685-65868
Allmandring 30             Fax:    ++49-711-682357
70550 Stuttgart (Germany)  WWW:    http://www.rus.uni-stuttgart.de/
