[Lxc-users] Running LXC on a pxelinux machine
Hi,

I've got a pxelinux boot configuration with a remote NFS root filesystem
and was wondering if anyone out there has tried running lxc on such a
configuration. I'm having difficulty getting the host machine to talk to
the running lxc containers. I managed to get a local bridge interface up
and running, without hanging the host machine, using the following:

  # Copy to /tmp tmpfs to avoid NFS hang
  cp /sbin/brctl /tmp
  cp /sbin/ifconfig /tmp

  /tmp/brctl addbr br0
  /tmp/ifconfig br0 up
  /tmp/brctl setfd br0 0
  /tmp/brctl stp br0 off
  /tmp/brctl addif br0 eth0
  /tmp/ifconfig eth0 192.168.1.68 netmask 255.255.255.0
  /tmp/brctl show

From there I can create lxc container instances; other machines on the
network can talk to them, but the host machine is unable to do so. I
suspect I need to update the bridge tables (using ebtables) in some way.
Any help greatly appreciated!

Thanks,
Gus.

------------------------------------------------------------------------------
Xperia(TM) PLAY
It's a major breakthrough. An authentic gaming smartphone on the
nation's most reliable network. And it wants your games.
http://p.sf.net/sfu/verizon-sfdev
_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
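A frequent cause of exactly this symptom (outside machines can reach the containers while the host cannot) is that the host's address stays on the enslaved eth0 instead of on the bridge itself. Here is a sketch of the adjusted sequence, reusing the /tmp paths and the 192.168.1.68/24 address from the post above; it only collects and prints the commands so they can be reviewed before being run for real as root:

```shell
#!/bin/sh
# Dry-run sketch: collect the adjusted bridge setup and print it.
# Drop the plan() indirection (and run as root) to actually apply it.
PLANNED=""
plan() { PLANNED="$PLANNED$*
"; }

plan /tmp/brctl addbr br0
plan /tmp/brctl setfd br0 0
plan /tmp/brctl stp br0 off
plan /tmp/brctl addif br0 eth0
# Key change from the script above: the enslaved port carries no address...
plan /tmp/ifconfig eth0 0.0.0.0 up
# ...and the host's address moves onto the bridge itself.
plan /tmp/ifconfig br0 192.168.1.68 netmask 255.255.255.0 up

printf '%s' "$PLANNED"
```

With the address on br0, host-to-container traffic goes through the bridge like everyone else's, and no ebtables rules should be needed for plain connectivity.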
Re: [Lxc-users] native (non-NAT) routing?
On Mon 2011-04-04 (19:35), Ulli Horlacher wrote:
> My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04
> host, but the container can only connect to the host (and vice versa),
> but not to the world outside.

I found a workaround: I have added an extra ethernet card dedicated to
the container.

-- 
Ullrich Horlacher          Server- und Arbeitsplatzsysteme
Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart     Tel:    ++49-711-685-65868
Allmandring 30             Fax:    ++49-711-682357
70550 Stuttgart (Germany)  WWW:    http://www.rus.uni-stuttgart.de/
Re: [Lxc-users] lxc-clone
On 03/30/2011 06:29 PM, Serge E. Hallyn wrote:
> I've replaced most of my previous use of kvm and cloud instances for
> bug investigations with lxc instances. To emulate my older workflows,
> I've created lxc-clone. My diff against the current natty lxc package
> is attached. I've written up how I use this at s3hh.wordpress.com.
>
> Briefly, I have a single pristine container, with LVM rootfs, for each
> of lucid, maverick, and natty. When I want a container, I
>
>   lxc-clone -o natty -n n1 -s
>   lxc-start -n n1
>
> which takes about 5 seconds altogether. Ruin n1 however I like, and
> lxc-destroy -l -n n1 when done.
>
> It needs fleshing out, but it's at the point where it does exactly
> what I need. The next thing I'm likely to add will be btrfs
> snapshotting, not sure when.
>
> Daniel, is this something you'd consider adding? I assume that if so,
> then there are changes you'd like to make to the interface :)

Hi Serge,

yes, it is an interesting feature, thanks for the patch. I think more
configuration tweaking will be needed, but this patch looks good to me.

> === modified file 'configure.ac'
> --- configure.ac	2011-03-10 07:25:34 +0000
> +++ configure.ac	2011-03-30 15:36:58 +0000
> @@ -156,6 +156,7 @@
>  	src/lxc/lxc-setuid
>  	src/lxc/lxc-version
>  	src/lxc/lxc-create
> +	src/lxc/lxc-clone
>  	src/lxc/lxc-destroy
>  	])
>
> === modified file 'lxc.spec'

It should be lxc.spec.in

> --- lxc.spec	2011-03-10 07:25:34 +0000
> +++ lxc.spec	2011-03-30 15:36:58 +0000
> @@ -78,6 +78,7 @@
>  %{_bindir}/*
>  %attr(4111,root,root) %{_bindir}/lxc-attach
>  %attr(4111,root,root) %{_bindir}/lxc-create
> +%attr(4111,root,root) %{_bindir}/lxc-clone
>  %attr(4111,root,root) %{_bindir}/lxc-start
>  %attr(4111,root,root) %{_bindir}/lxc-netstat
>  %attr(4111,root,root) %{_bindir}/lxc-unshare
>
> === modified file 'src/lxc/Makefile.am'
> --- src/lxc/Makefile.am	2011-03-10 07:25:34 +0000
> +++ src/lxc/Makefile.am	2011-03-30 15:36:58 +0000
> @@ -72,6 +72,7 @@
>  	lxc-setuid \
>  	lxc-version \
>  	lxc-create \
> +	lxc-clone \
>  	lxc-destroy
>
>  bin_PROGRAMS = \
>
> === modified file 'src/lxc/Makefile.in'

Makefile.in is generated. I suppose it is the diff command which
included the generated configure and Makefile.in in the diff result.

> === added file 'src/lxc/lxc-clone.in'
> --- src/lxc/lxc-clone.in	1970-01-01 00:00:00 +0000
> +++ src/lxc/lxc-clone.in	2011-03-30 15:36:58 +0000
> @@ -0,0 +1,206 @@
> +#!/bin/bash
> +
> +#
> +# lxc: linux Container library
> +
> +# Authors:
> +# Serge Hallyn <serge.hal...@ubuntu.com>
> +# Daniel Lezcano <daniel.lezc...@free.fr>
> +
> +# This library is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +
> +# This library is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> +# Lesser General Public License for more details.
> +
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with this library; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +usage() {
> +    echo "usage: lxc-clone -o orig -n new [-s] [-h] [-L fssize] [-v vgname]"
> +}
> +
> +help() {
> +    usage
> +    echo
> +    echo "creates a lxc system object."
> +    echo
> +    echo "Options:"
> +    echo "orig   : name of the original container"
> +    echo "new    : name of the new container"
> +    echo "-s     : make the new rootfs a snapshot of the original"
> +    echo "fssize : size if creating a new fs. By default, 2G"
> +    echo "vgname : lvm volume group name, lxc by default"
> +}
> +
> +shortoptions='ho:n:sL:v:'
> +longoptions='help,orig:,name:,snapshot,fssize,vgname'
> +lxc_path=/var/lib/lxc
> +bindir=/usr/bin
> +snapshot=no
> +lxc_size=2G
> +lxc_vg=lxc
> +
> +getopt=$(getopt -o $shortoptions --longoptions $longoptions -- "$@")
> +if [ $? != 0 ]; then
> +    usage
> +    exit 1;
> +fi
> +
> +eval set -- "$getopt"
> +
> +while true; do
> +    case "$1" in
> +	-h|--help)
> +	    help
> +	    exit 1
> +	    ;;
> +	-s|--snapshot)
> +	    shift
> +	    snapshot=yes
> +	    ;;
> +	-o|--orig)
> +	    shift
> +	    lxc_orig=$1
> +	    shift
> +	    ;;
> +	-L|--fssize)
> +	    shift
> +	    lxc_size=$1
> +	    shift
> +	    ;;
> +	-v|--vgname)
> +	    shift
> +	    lxc_vg=$1
> +	    shift
> +	    ;;
> +	-n|--new)
> +	    shift
> +	    lxc_new=$1
> +	    shift
> +	    ;;
> +	--)
> +	    shift
> +	    break;;
> +	*)
> +	    echo $1
> +	    usage
Re: [Lxc-users] mounted filesystems inconsistency
On 03/31/2011 11:41 AM, Milos Negovanovic wrote:
> Hi all,
>
> I have 3 identical LXC setups, 2 of which are sandboxes on different
> workstations and one is a live environment. All 3 run Arch Linux on the
> host and inside the container. On one of the sandboxes, the output of
> mount after the container is started, from inside the container, looks
> like this:
>
>   [root@node1 spiked]# mount
>   /dev/sda2[/home/stuff/lxc/node1.spiked.uk.com] on / type reiserfs (rw,relatime)
>   devpts[/13] on /dev/console type devpts (ro)
>   udev on /dev type devtmpfs (rw,nosuid,relatime,size=10240k,nr_inodes=1017469,mode=755)
>
> On the other two LXC setups mount doesn't show anything! All 3 share
> the same configuration files and run the last released LXC: 0.7.4.1.
> Any idea what might be happening?

Lxc sets up the mount points directly via syscalls, without invoking the
mount(8) command. The mount command relies on /etc/mtab to write and
read its information. Depending on what your distro does, i.e. whether
it invokes mount or not, /etc/mtab may differ. If you want to check the
consistency, you should look at /proc/mounts in each container.

> I only noticed it when, after I restarted the container with the above
> mount output, devpts was remounted read-only on the host.

Make sure the distro you are running mounts devpts with the newinstance
option.
This is the config:

  root@box ~ # cat /etc/lxc/node1.spiked.uk.com
  lxc.utsname = node1.spiked.uk.com
  lxc.pts = 1024
  lxc.mount = /etc/lxc/node1.spiked.uk.com_fstab
  lxc.rootfs = /home/stuff/lxc/node1.spiked.uk.com
  lxc.network.type = veth
  lxc.network.flags = up
  lxc.network.link = br0
  lxc.network.hwaddr = 52:54:00:00:16:79
  lxc.network.ipv4 = 10.63.74.97
  lxc.network.name = eth0

  root@box ~ # cat /etc/lxc/node1.spiked.uk.com_fstab
  none /home/stuff/lxc/node1.spiked.uk.com/dev/pts devpts defaults 0 0
  none /home/stuff/lxc/node1.spiked.uk.com/proc    proc   defaults 0 0
  none /home/stuff/lxc/node1.spiked.uk.com/sys     sysfs  defaults 0 0
  none /home/stuff/lxc/node1.spiked.uk.com/dev/shm tmpfs  defaults 0 0

Regards
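The suggestion to compare mount(8)'s view with the kernel's can be scripted. A minimal read-only sketch (standard Linux paths assumed; run it inside each container to compare their views):

```shell
#!/bin/sh
# Compare what mount(8) reports (backed by /etc/mtab) with the kernel's
# authoritative list in /proc/mounts. Read-only, safe to run anywhere.
kernel_view=$( [ -r /proc/mounts ] && cat /proc/mounts )
mtab_view=$( [ -r /etc/mtab ] && cat /etc/mtab )

# Print mount points the kernel knows about that mtab does not mention.
echo "$kernel_view" | while read -r dev mnt fstype rest; do
    [ -n "$mnt" ] || continue
    echo "$mtab_view" | grep -Fq " $mnt " || echo "not in mtab: $mnt"
done
```

A container whose distro never invoked mount(8) will show many "not in mtab" lines, which matches the empty `mount` output described above.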
Re: [Lxc-users] native (non-NAT) routing?
On Mon, 4 Apr 2011, Ulli Horlacher wrote:
> My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04
> host, but the container can only connect to the host (and vice versa),
> but not to the world outside.
>
> I saw a lot of configurations for NAT, but I want native routing for
> my containers.

I know nothing about Ubuntu, but I got a similar setup working with
bridging. The host's IP is assigned to bridge br0, which has the host's
physical network interface eth0 and the guest's VETH interface gw1-eth0
as ports:

  host# ip addr show br0
  4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
      link/ether 00:19:99:5f:f2:87 brd ff:ff:ff:ff:ff:ff
      inet 130.xxx.xxx.xxx/24 brd 130.xxx.xxx.255 scope global br0
         valid_lft forever preferred_lft forever

  host# brctl show
  bridge name   bridge id           STP enabled   interfaces
  br0           8000.0019995ff287   no            eth0
                                                  gw1-eth0

No manual manipulation of routing tables is needed; only IP forwarding
has to be allowed (net.ipv4.ip_forward = 1).

BR,
Antti Tanhuanpää
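The layout described above can also be expressed with the iproute2 tools. A hedged sketch with placeholder interface names and a TEST-NET address (the real values in the post are elided); the commands are collected and printed rather than executed, so nothing changes until you run them yourself as root:

```shell
#!/bin/sh
# Sketch of the bridging layout described above, using iproute2.
# BR/PHYS/VETH/ADDR are placeholders; substitute your own values.
BR=br0
PHYS=eth0
VETH=gw1-eth0        # normally enslaved automatically via lxc.network.link
ADDR=192.0.2.1/24    # TEST-NET placeholder, not the poster's real address

CMDS=""
add() { CMDS="$CMDS$*
"; }

add ip link add name "$BR" type bridge
add ip link set "$PHYS" master "$BR"
add ip link set "$VETH" master "$BR"
add ip addr add "$ADDR" dev "$BR"       # host IP lives on the bridge
add ip link set "$BR" up
add sysctl -w net.ipv4.ip_forward=1     # forwarding allowed, as noted above

printf '%s' "$CMDS"
```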
Re: [Lxc-users] native (non-NAT) routing?
On 04/04/2011 07:35 PM, Ulli Horlacher wrote:
> My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04
> host, but the container can only connect to the host (and vice versa),
> but not to the world outside. I saw a lot of configurations for NAT,
> but I want native routing for my containers. My setup is:
>
>   host      zoo   129.69.1.39
>   container LXC   129.69.1.219
>   router          129.69.1.254
>
> In LXC.conf is:
>
>   lxc.utsname = LXC
>   lxc.network.type = veth
>   lxc.network.link = br0
>   lxc.network.flags = up
>   lxc.network.name = eth0
>   lxc.network.mtu = 1500
>   lxc.network.ipv4 = 129.69.1.219/24
>
>   root@LXC:~# route -n
>   Kernel IP routing table
>   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>   129.69.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
>   0.0.0.0         129.69.1.254    0.0.0.0         UG    0      0        0 eth0
>
>   root@LXC:~# ping -c 1 129.69.1.39
>   PING 129.69.1.39 (129.69.1.39) 56(84) bytes of data.
>   64 bytes from 129.69.1.39: icmp_seq=1 ttl=64 time=11.5 ms
>   --- 129.69.1.39 ping statistics ---
>   1 packets transmitted, 1 received, 0% packet loss, time 0ms
>   rtt min/avg/max/mdev = 11.547/11.547/11.547/0.000 ms
>
>   root@LXC:~# ping -c 1 129.69.1.254
>   PING 129.69.1.254 (129.69.1.254) 56(84) bytes of data.
>   From 129.69.1.219 icmp_seq=1 Destination Host Unreachable
>   --- 129.69.1.254 ping statistics ---
>   1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
>
>   root@zoo:~# route -n
>   Kernel IP routing table
>   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>   129.69.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br0
>   0.0.0.0         129.69.1.254    0.0.0.0         UG    100    0        0 br0
>
>   root@zoo:~# ping -c 1 129.69.1.219
>   PING 129.69.1.219 (129.69.1.219) 56(84) bytes of data.
>   64 bytes from 129.69.1.219: icmp_seq=1 ttl=64 time=0.058 ms
>   --- 129.69.1.219 ping statistics ---
>   1 packets transmitted, 1 received, 0% packet loss, time 0ms
>   rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
>
>   root@zoo:~# ping -c 1 129.69.1.254
>   PING 129.69.1.254 (129.69.1.254) 56(84) bytes of data.
>   64 bytes from 129.69.1.254: icmp_seq=1 ttl=255 time=0.509 ms
>   --- 129.69.1.254 ping statistics ---
>   1 packets transmitted, 1 received, 0% packet loss, time 0ms
>   rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms
>
>   root@zoo:~# iptables -n -L
>   Chain INPUT (policy ACCEPT)
>   target     prot opt source               destination
>   Chain FORWARD (policy ACCEPT)
>   target     prot opt source               destination
>   Chain OUTPUT (policy ACCEPT)
>   target     prot opt source               destination
>
>   root@zoo:~# sysctl -a | grep forward
>   net.ipv4.conf.all.forwarding = 1
>   net.ipv4.conf.all.mc_forwarding = 0
>   net.ipv4.conf.default.forwarding = 1
>   net.ipv4.conf.default.mc_forwarding = 0
>   net.ipv4.conf.lo.forwarding = 1
>   net.ipv4.conf.lo.mc_forwarding = 0
>   net.ipv4.conf.eth0.forwarding = 1
>   net.ipv4.conf.eth0.mc_forwarding = 0
>   net.ipv4.conf.br0.forwarding = 1
>   net.ipv4.conf.br0.mc_forwarding = 0
>   net.ipv4.conf.virbr0.forwarding = 1
>   net.ipv4.conf.virbr0.mc_forwarding = 0
>   net.ipv4.conf.vethMx2A0v.forwarding = 1
>   net.ipv4.conf.vethMx2A0v.mc_forwarding = 0
>   net.ipv4.ip_forward = 1
>
> Any debugging hints?

Can you give the bridge setup? (brctl show)
Re: [Lxc-users] native (non-NAT) routing?
Quoting Ulli Horlacher (frams...@rus.uni-stuttgart.de):
> On Mon 2011-04-04 (19:35), Ulli Horlacher wrote:
> > My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04
> > host, but the container can only connect to the host (and vice
> > versa), but not to the world outside.
>
> I found a workaround: I have added an extra ethernet card dedicated to
> the container.

If you're happy with what you've got, great. If you'd like to figure out
what went wrong originally, I suspect the answer might lie in the
results of 'brctl show'.

-serge
Re: [Lxc-users] native (non-NAT) routing?
On Tue 2011-04-05 (14:53), Daniel Lezcano wrote:
> Can you give the bridge setup ? (brctl show)

  root@zoo:/lxc# brctl show
  bridge name   bridge id           STP enabled   interfaces
  br0           8000.0050568e0003   no            eth0

-- 
Ullrich Horlacher          Server- und Arbeitsplatzsysteme
Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart     Tel:    ++49-711-685-65868
Allmandring 30             Fax:    ++49-711-682357
70550 Stuttgart (Germany)  WWW:    http://www.rus.uni-stuttgart.de/
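This output shows the likely problem: br0 has only eth0 as a port and no veth interface for the running container, so bridged traffic never reaches the container's side. A small read-only check for that condition (a sketch using the sysfs layout Linux bridges expose; the bridge name is taken from the thread):

```shell
#!/bin/sh
# Check whether a bridge has at least one veth port attached.
# Read-only: bridge ports appear as entries under /sys/class/net/<br>/brif.
BR=br0

ports=""
if [ -d "/sys/class/net/$BR/brif" ]; then
    ports=$(ls "/sys/class/net/$BR/brif" 2>/dev/null)
fi

found=no
for p in $ports; do
    case "$p" in
        veth*) found=yes ;;   # lxc veth names look like vethMx2A0v
    esac
done
echo "veth port on $BR: $found"
```

If this reports "no" while a container is running, the container's veth end was never enslaved to the bridge, which is consistent with the brctl output above.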
Re: [Lxc-users] Running LXC on a pxelinux machine
On 04/05/2011 09:49 AM, Gus Power wrote:
> Hi, I've got a pxelinux boot configuration with a remote NFS root
> filesystem and was wondering if anyone out there has tried running lxc
> on such a configuration. I'm having difficulty getting the host
> machine to talk to the running lxc containers.
> [...]
> From there I can create lxc container instances; other machines on the
> network can talk to them but the host machine is unable to do so. I
> suspect I need to update the bridge tables (using ebtables) in some
> way. Any help greatly appreciated!

Hi Gus,

I am not sure I understand the use case. Can you elaborate? Do you want
to run containers on your diskless host, or do you want your diskless
host to run inside a container?

I recently did a configuration with a tftp server running inside a
container and a pxe host booting from it. It worked like a charm. Maybe
that is what you are looking for?
Re: [Lxc-users] native (non-NAT) routing?
Hi Ulli,

I have managed to set up routed networking with lxc; it isn't very
different from xen or qemu. I've created a webpage explaining how I did
it: http://j.9souldier.org/trunk/lxc/

Comments are welcome.

John

ps. I think your setup is wrong in that you need to route through the
host and not your router; the host will take care of routing through
the routes that are relevant (i.e. communication between guests doesn't
need to go through the router).

-- 
Current excuse: network down, IP packets delivered via UPS

On Mon, 4 Apr 2011 19:35:09 +0200 Ulli Horlacher
<frams...@rus.uni-stuttgart.de> wrote:
> My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04
> host, but the container can only connect to the host (and vice versa),
> but not to the world outside.
>
> I saw a lot of configurations for NAT, but I want native routing for
> my containers.
> [...]
> Any debugging hints?
Re: [Lxc-users] lxc-fstab vs /etc/fstab vs /lib/init/fstab
Serge Hallyn <serge.hal...@canonical.com> writes:
> Next, upstart's mountall consults /lib/init/fstab. That's the one
> which will usually prevent container startup from proceeding. The
> lxcguest package for ubuntu will force upstart to mount an empty
> version of that file before mountall runs. So if you install lxcguest
> then mountall can safely run, which makes your container safer against
> package updates.

Interesting approach. IIRC I just

  dpkg-divert --rename /lib/init/fstab

although I was still having problems (I think mountall was trying to
REmount everything it found in mtab and/or /proc/mounts), so now I drop
the mount capability and replace mount/umount with symlinks to
/bin/true.
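Both workarounds mentioned here can be scripted. A dry-run sketch that only prints the commands (paths as in the post; the "drop the mount capability" part itself happens in the container config, e.g. via lxc.cap.drop, which is an assumption about the poster's setup):

```shell
#!/bin/sh
# Dry-run sketch of the /lib/init/fstab workarounds described above.
# Prints the commands instead of running them; they would be applied
# inside the container's rootfs, not on the host.
CMDS=""
add() { CMDS="$CMDS$*
"; }

# Workaround 1: divert the upstart fstab out of mountall's way.
add dpkg-divert --rename /lib/init/fstab

# Workaround 2: neuter mount/umount so mountall cannot remount anything.
add dpkg-divert --rename /bin/mount
add ln -s /bin/true /bin/mount
add dpkg-divert --rename /bin/umount
add ln -s /bin/true /bin/umount

printf '%s' "$CMDS"
```

Diverting the binaries before symlinking keeps dpkg from restoring them on the next package update, which is the failure mode the lxcguest approach is guarding against.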