Re: [lxc-users] cannot create a centos container on lvm with centos template

2015-01-21 Thread Serge Hallyn
Quoting 黄奕 (ruffian...@126.com):
> lxc version: 1.0.7
> I tried to create an LXC container on LVM with the command:
> lxc-create -n mylxc2 -t centos -B lvm
> and I got these errors:
> 
> Copy /usr/local/var/cache/lxc/centos/x86_64/6/rootfs to /dev/lxc/mylxc2 ... 
> Copying rootfs to /dev/lxc/mylxc2 ...mkdir: cannot create directory 
> ‘/dev/lxc/mylxc2’: File exists
> rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken 
> pipe (32)
> rsync: ERROR: cannot stat destination "/dev/lxc/mylxc2/": Not a directory (20)
> rsync error: errors selecting input/output files, dirs (code 3) at 
> main.c(565) [Receiver=3.0.9]
> rsync: connection unexpectedly closed (9 bytes received so far) [sender]
> rsync error: error in rsync protocol data stream (code 12) at io.c(605) 
> [sender=3.0.9]

Ah yes, I think this is a bug in the centos template.  It appears to take
the $rootfs_path it rsyncs to from the lxc.rootfs entry in the
container config file, rather than from the --rootfs argument that
lxc-create passed to it.  The config file specifies the block device
/dev/lxc/mylxc2, whereas --rootfs is the path at which that filesystem
is mounted.

Can someone volunteer to fix the centos template in this regard?
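(For whoever picks this up: a rough sketch of the kind of change needed, in
shell like the templates themselves. Apart from $rootfs_path, the variable
names here are illustrative, not the template's actual ones.)

# after option parsing: $rootfs holds the value of --rootfs, if lxc-create passed one
if [ -n "$rootfs" ]; then
    # lvm and other block-device backends: rsync into the mounted path lxc-create gave us
    rootfs_path=$rootfs
else
    # no --rootfs given: fall back to the lxc.rootfs entry in the container config
    rootfs_path=$(sed -n 's/^lxc.rootfs[[:space:]]*=[[:space:]]*//p' "$path/config")
fi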
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] cannot create a centos container on lvm with centos template

2015-01-21 Thread 黄奕
lxc version: 1.0.7
I tried to create an LXC container on LVM with the command:
lxc-create -n mylxc2 -t centos -B lvm
and I got these errors:

Copy /usr/local/var/cache/lxc/centos/x86_64/6/rootfs to /dev/lxc/mylxc2 ... 
Copying rootfs to /dev/lxc/mylxc2 ...mkdir: cannot create directory 
‘/dev/lxc/mylxc2’: File exists
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken 
pipe (32)
rsync: ERROR: cannot stat destination "/dev/lxc/mylxc2/": Not a directory (20)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(565) 
[Receiver=3.0.9]
rsync: connection unexpectedly closed (9 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) 
[sender=3.0.9]
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] cgmanager: cgm_list_children for controller=systemd, cgroup_path=user failed: invalid request

2015-01-21 Thread Serge Hallyn
Quoting Smart Goldman (ytlec2...@gmail.com):
> 2015-01-17 5:31 GMT+09:00 Serge Hallyn :
> 
> > Operation not permitted?  That's unexpected.  Are you running a custom
> > kernel or custom selinux policy?
> 
> Yes, mine is an Ubuntu system provided by a VPS service:
> https://crissic.net/los-angeles_vps_pre-launch
> I think that's a possibility. I may need to ask the provider about it.
>
> Although I do not think this information will be helpful, this is my kernel
> version.
> root@okapi:~# uname -a
> Linux okapi 2.6.32-042stab093.5 #1 SMP Wed Sep 10 17:39:49 MSK 2014 x86_64
> x86_64 x86_64 GNU/Linux
> I don't think I've ever changed the kernel myself.
> 
> > I do think removing cgroup-bin
> >
> > sudo apt-get purge cgroup-bin
> >
> > will fix the mounting of the name=beancounter etc hierarchies.
> 
> I had removed cgroup-bin,
> but unfortunately it does not look like that fixed it.
> 
> After removing cgroup-bin, rebooting and logging back in, the files now look like this:
> 
> root@okapi:~# cat /proc/self/cgroup
> 4:name=systemd:/
> 3:freezer,devices,name=container:/12042
> 2:cpuacct,cpu,cpuset,name=fairsched:/12042
> 1:blkio,name=beancounter:/12042
> 
> root@okapi:~# tail -n 13 /var/log/upstart/cgmanager.log
> Mounted systemd onto /run/cgmanager/fs/none,name=systemd
> Mounted container onto /run/cgmanager/fs/none,name=container
> Mounted fairsched onto /run/cgmanager/fs/none,name=fairsched
> Mounted beancounter onto /run/cgmanager/fs/none,name=beancounter
> found 4 controllers
> buf is /run/cgmanager/agents/cgm-release-agent.systemd
> buf is /run/cgmanager/agents/cgm-release-agent.container
> buf is /run/cgmanager/agents/cgm-release-agent.fairsched
> buf is /run/cgmanager/agents/cgm-release-agent.beancounter
> Mounted systemd onto /run/cgmanager/fs/none,name=systemd
> cgmanager: Failed mounting /run/cgmanager/fs/none,name=container: Operation
> not permitted

Yeah that's weird.

Look around /var/log and see what is mounting those cgroups
at boot.  What files still exist under /etc/init and /etc/init.d?
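(One way to look, sketched here as generic commands rather than taken from the
original exchange; /etc/cgconfig.conf may not exist on every system:)

# anything under upstart/sysvinit that touches cgroups
grep -rl cgroup /etc/init /etc/init.d 2>/dev/null
# leftover libcgroup/cgroup-bin configuration and fstab entries
grep cgroup /etc/fstab /etc/cgconfig.conf 2>/dev/null
ls -l /etc/init/cg* /etc/init.d/cg* 2>/dev/null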

> cgmanager: Failed mounting cgroups
> cgmanager: Failed to set up cgroup mounts
> 
> root@okapi:~# tail -n 10 /var/log/auth.log
> Jan 17 00:31:48 okapi sudo: root : TTY=pts/0 ; PWD=/root ; USER=root ;
> COMMAND=/usr/bin/apt-get -y purge cgroup-bin
> Jan 17 00:31:48 okapi sudo: pam_unix(sudo:session): session opened for user
> root by root(uid=0)
> Jan 17 00:31:55 okapi sudo: pam_unix(sudo:session): session closed for user
> root
> Jan 17 00:32:53 okapi systemd-logind[326]: New seat seat0.
> Jan 17 00:32:54 okapi sshd[492]: Server listening on 0.0.0.0 port 22.
> Jan 17 00:32:54 okapi sshd[492]: Server listening on :: port 22.
> Jan 17 00:34:09 okapi sshd[897]: Accepted password for root from
> 119.105.136.26 port 56815 ssh2
> Jan 17 00:34:09 okapi sshd[897]: pam_unix(sshd:session): session opened for
> user root by (uid=0)
> Jan 17 00:34:09 okapi systemd-logind[326]: Failed to create cgroup
> name=systemd:/user/0.user: No such file or directory
> Jan 17 00:34:09 okapi sshd[897]: pam_systemd(sshd:session): Failed to
> create session: No such file or directory


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fun with lxc.network.type=phys

2015-01-21 Thread U.Mutlu

scrumpyjack wrote, On 01/21/2015 01:09 PM:
> On Wed, 21 Jan 2015, Fajar A. Nugraha wrote:
>
>> It is, to be frank. lxc already supports macvlan, so there's no need to
>> create it manually and use phys.
>
> I have been reading more on macvlan support and it is now clearer.
>
>> If it's "I want to have /32 in the container", then there are other ways
>> to do that. I deploy just that with veth and bridge.
>
> Yes, I want to give a /32 to a container.
>
> If I stick to
>
> lxc.network.type = macvlan
> lxc.network.flags = up
> lxc.network.link = eth0
> lxc.network.name = eth1
> lxc.network.ipv4 = 21.45.463.23/32 (fake IP, obvs)
> lxc.network.ipv4.gateway = 21.45.463.23
>
> would you expect that to work?
>
> I'm trying not to have to do any NATing or any other configuration on my
> host for my container to get traffic destined for 21.45.463.23/32

This IP, even though it's a fake, is not a valid IP, because 463 > 255 ...


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fun with lxc.network.type=phys

2015-01-21 Thread ScrumpyJack
On Wed, 21 Jan 2015, Fajar A. Nugraha wrote:

> On Wed, Jan 21, 2015 at 7:09 PM, scrumpyjack  wrote:
> 
> > Yes, i want to give a /32 to a container.

> This is on ubuntu server. The host has 100.0.0.10/24, router is on
> 100.0.0.1, the container is on 100.0.0.11 (fake IPs, of course).
> The host communicates with the container thru a PRIVATE bridge with IP
> 192.168.124.1 (note that this IP doesn't even have to be in the same
> network as host and container's IP)
> 
> Relevant part of host's /etc/network/interfaces
> ###
> auto eth0
> iface eth0 inet static
> address 100.0.0.10
> netmask 255.255.255.0
> gateway 100.0.0.1
> # this part functions similarly to proxy arp: force eth0 to accept packets
> # destined for the container's IP using static arp
> up arp -i eth0 -Ds 100.0.0.11 eth0 pub || true
> 
> # this is an internal bridge used to connect the host to the container
> auto br0
> iface br0 inet manual
> bridge_ports none
> bridge_maxwait 0
> bridge_stp off
> bridge_fd 0
> # add specific route for the container IP
> up ip route add 100.0.0.11/32 dev br0 || true
> ###
> 
> 
> Relevant part of container config. Note that this only sets the bridge and
> persistent vif mac & name.
> ###
> lxc.network.type=veth
> lxc.network.link=br0
> lxc.network.veth.pair=veth-c1-0
> lxc.network.flags=up
> lxc.network.hwaddr = 00:16:3E:FD:46:25
> ###
> 
> 
> Relevant part of container's /etc/network/interfaces
> ###
> auto eth0
> iface eth0 inet static
> address 100.0.0.11
> netmask 255.255.255.255
> # force route for host's br0
> up ip route add 192.168.124.1 dev eth0
> # ... and use it for default route
> up ip route add default via 192.168.124.1

Yup, thanks, this worked for me.
I was trying to use macvlan and phys to avoid having to add my eth0 to the
bridge and put it into promiscuous mode, which this solves.

Thanks again!




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fun with lxc.network.type=phys

2015-01-21 Thread Fajar A. Nugraha
On Wed, Jan 21, 2015 at 7:09 PM, scrumpyjack  wrote:

> Yes, i want to give a /32 to a container.
>
> If i stick to
>
> lxc.network.type = macvlan
> lxc.network.flags = up
> lxc.network.link = eth0
> lxc.network.name = eth1
> lxc.network.ipv4 = 21.45.463.23/32 (fake IP, obvs)
> lxc.network.ipv4.gateway = 21.45.463.23
>
> would you expect that to work?
>
>

Nope.

Your main mistake is that you assumed that since the /32 IP works on the host
(e.g. when used as "eth0:1"), it would automagically work inside the
container, and that the host would simply "know" where to route the packet. It
doesn't work that way. Network-wise, the host and the container are two
separate entities, which might have a private link (e.g. through a private
bridge or something).

The generic explanation of a working setup can be "stolen" from the xen wiki:
http://wiki.xen.org/wiki/Vif-route
Basically they use a combination of a /32 address, a specific route, and proxy
arp. I use a similar but slightly different method.
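(A rough sketch of that vif-route-style variant, using proxy ARP on the host
instead of the published static ARP entry shown below; same fake addresses,
interface names assumed:)

# answer ARP on eth0 for addresses the host can route (e.g. the container's /32)
sysctl -w net.ipv4.conf.eth0.proxy_arp=1
# send the container's /32 towards the internal bridge
ip route add 100.0.0.11/32 dev br0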

This is on ubuntu server. The host has 100.0.0.10/24, router is on
100.0.0.1, the container is on 100.0.0.11 (fake IPs, of course).
The host communicates with the container thru a PRIVATE bridge with IP
192.168.124.1 (note that this IP doesn't even have to be in the same
network as host and container's IP)

Relevant part of host's /etc/network/interfaces
###
auto eth0
iface eth0 inet static
address 100.0.0.10
netmask 255.255.255.0
gateway 100.0.0.1
# this part functions similarly to proxy arp: force eth0 to accept packets
# destined for the container's IP using static arp
up arp -i eth0 -Ds 100.0.0.11 eth0 pub || true

# this is an internal bridge used to connect the host to the container
auto br0
iface br0 inet manual
bridge_ports none
bridge_maxwait 0
bridge_stp off
bridge_fd 0
# add specific route for the container IP
up ip route add 100.0.0.11/32 dev br0 || true
###


Relevant part of container config. Note that this only sets the bridge and
persistent vif mac & name.
###
lxc.network.type=veth
lxc.network.link=br0
lxc.network.veth.pair=veth-c1-0
lxc.network.flags=up
lxc.network.hwaddr = 00:16:3E:FD:46:25
###


Relevant part of container's /etc/network/interfaces
###
auto eth0
iface eth0 inet static
address 100.0.0.11
netmask 255.255.255.255
# force route for host's br0
up ip route add 192.168.124.1 dev eth0
# ... and use it for default route
up ip route add default via 192.168.124.1
###


Relevant output of several commands in the host
###
# ip route
...
default via 100.0.0.1 dev eth0
100.0.0.0/24 dev eth0  proto kernel  scope link  src 100.0.0.10
100.0.0.11 dev br0  scope link
...

# arp -n
Address       HWtype  HWaddress          Flags Mask   Iface
...
100.0.0.11    ether   00:16:3e:fd:46:25  C            br0
100.0.0.11            *                  MP           eth0
...

# brctl show
bridge name   bridge id           STP enabled   interfaces
...
br0           8000.feb01cb4ee91   no            veth-c1-0
...
###
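(A quick, generic way to verify the result from inside the container; not
output captured from this setup:)

# inside the container
ip route show
# expect: 192.168.124.1 dev eth0  and  default via 192.168.124.1 dev eth0
ping -c 3 192.168.124.1   # the host's private bridge
ping -c 3 100.0.0.1       # the upstream router, reached via the host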

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fun with lxc.network.type=phys

2015-01-21 Thread scrumpyjack
On Wed, 21 Jan 2015, Fajar A. Nugraha wrote:
> 
> It is, to be frank. lxc already supports macvlan, so there's no need to
> create it manually and use phys.

I have been reading more on macvlan support and it is now clearer.

> If it's "I want to have /32 in the container", then there are other ways
> to do that. I deploy just that with veth and bridge.
 
Yes, I want to give a /32 to a container.

If I stick to

lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = eth0 
lxc.network.name = eth1
lxc.network.ipv4 = 21.45.463.23/32 (fake IP, obvs)
lxc.network.ipv4.gateway = 21.45.463.23

would you expect that to work? 

I'm trying not to have to do any NATing or any other configuration on my
host for my container to get traffic destined for 21.45.463.23/32.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fun with lxc.network.type=phys

2015-01-21 Thread Fajar A. Nugraha
On Wed, Jan 21, 2015 at 3:31 PM, ScrumpyJack  wrote:

> On Mon, 19 Jan 2015, ScrumpyJack wrote:
>
> > I'd like to connect a physical interface from a host to a LXC container
> > guest like so:
> >
> > lxc.network.type=phys
> >
> > And then assign a routable IP/32 address to the LXC container for it to
> > "just work".
> >
> > The problem is that I don't have a spare "real" physical interface, so on
> > the host i create a "virtual" interface
> >
> >  ip link add link eth0 mac0 type macvlan
>




> Hi again. I'm wondering if my setup is so silly that everyone is ignoring it :)
>


It is, to be frank. lxc already supports macvlan, so there's no need to
create it manually and use phys.
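(For reference, a minimal sketch of the built-in macvlan support in the
container config; the link and mode here are assumptions to adjust for your
setup:)

lxc.network.type = macvlan
# bridge mode lets containers sharing the same parent interface reach each other
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0
lxc.network.flags = up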

What is it that you're trying to achieve? If it's "just because I want to",
then good luck.

If it's "I want to have /32 in the container", then there are other ways
to do that. I deploy just that with veth and bridge.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fun with lxc.network.type=phys

2015-01-21 Thread ScrumpyJack
On Mon, 19 Jan 2015, ScrumpyJack wrote:
 
> I'd like to connect a physical interface from a host to a LXC container 
> guest like so:
> 
> lxc.network.type=phys
> 
> And then assign a routable IP/32 address to the LXC container for it to 
> "just work".
> 
> The problem is that I don't have a spare "real" physical interface, so on 
> the host I create a "virtual" interface
> 
>  ip link add link eth0 mac0 type macvlan
> 
> I now have a new virtual interface called mac0 with a separate mac address 
> in my host. I assign it a test IP and it can be pinged from outside the 
> host.
> 
> I add the following details to the container's config file
> 
> lxc.network.type=phys
> lxc.network.flags = up
> lxc.network.link = mac0
> lxc.network.name = eth1
> 
> 
> I boot my LXC guest, and as expected the mac0 virtual interface gets 
> passed on to the guest, as the guest has a new interface called eth1 with 
> exactly the same mac address as the randomly generated mac0 mac address 
> from the host, and the mac0 interface is no longer available in the host.
> 
> But that's as far as it goes. Assigning the same test IP address to the
> guest doesn't have the desired effect and the container is unreachable. I
> see the traffic coming into eth0 on the host, but that's it. The guest
> doesn't seem to get the traffic for its IP.
> 
> I don't want to use bridging, veths or taps, or any method other than 
> physical.
> 

Hi again. I'm wondering if my setup is so silly that everyone is ignoring it :)
Meanwhile, I'm trawling this mailing list and searching online, and there
is nothing I see that might be of any help.
If anyone with knowledge of lxc networking would be kind enough to tell me
whether I'm mad or not, then I could keep looking or give up on passing a
macvlan host interface to the LXC container via phys. That would be most kind.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users