Re: [lxc-users] Thrilled to announce the launch of Flockport.com to this list

2014-09-11 Thread Jon Brinkmann
A few questions...

- How robust are the containers, e.g., how have they been tested?
- How secure are the containers, e.g., how can one be sure they don't
  contain backdoors, trojans, etc.?
- What standards are required for the containers?
- What's the support for problems with the containers?

Poorly implemented containers will do more harm than good for LXC.

- Jon

On Tue, Sep 9, 2014 at 12:49 PM, Tobby Banerjee to...@flockport.com wrote:

 Hi LXC users,

 I am extremely excited to announce the launch of Flockport.com to this
 list - its home, so to speak.

 Flockport.com was created to let users discover, download and share
 portable and easy-to-use Linux containers (LXC) that can be deployed in
 seconds.

 Flockport hosts an LXC repo for Debian Wheezy that provides all the
 features of LXC out of the box - yah! like in Ubuntu. Flockport also
 provides a utility that lets users view and download Flockport
 containers directly to their systems.

 At launch Flockport.com provides over 50 containers of some of the most
 popular web applications.

 There is a significant information gap on LXC in the marketplace that
 Flockport hopes to address.

 Flockport wants to make containers accessible to a much wider audience and
 to articulate and promote a broader use of containers as a lightweight,
 portable and extremely fast alternative to virtualization.

 We have comprehensive documentation on setting up and using LXC.
 We would absolutely love your feedback and thoughts. Thanks Stephane and
 Serge for the fantastic blog posts, and Dr Rami Rosen for the wonderful
 LXC presentations!

 We have also put up an in-depth piece on the key differences between LXC
 and Docker and would love for you to weigh in.

 Introduction to Flockport

 Flockport containers

 Flockport LXC getting started guide

 Understanding the key differences between LXC and Docker

 I am happy to follow up with any information. I would like to thank
 Stephane and Serge for making this possible and giving us a wonderful
 project.

 Thank you for reading.

 Cheers!

 Tobby
 Flockport
 www.flockport.com

 About Flockport
 Flockport.com is a hub to download and share LXC containers. Flockport
 provides web stacks and applications in containers that can be deployed in
 any LXC environment in a simple, predictable and easy way. Flockport is a
 start-up based in Mumbai founded by Indrajit Banerjee. Indrajit has over
 14 years' experience in enterprise software marketing and has previously
 headed the India marketing operations of Savvion, Progress Software and
 Red Hat.



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread Tamas Papp


On 09/11/2014 11:05 AM, othiman wrote:

Hi everyone,

I already posted this to askubuntu.com 
(http://askubuntu.com/questions/522457/lxc-container-no-outgoing-traffic-with-bridged-network-and-public-ip-address), 
but I think this might be a better place to find help.


I am trying to set up an LXC container with a bridged network on Ubuntu 
14.04.1, but outgoing traffic seems to be blocked. Pinging any IP other 
than the container's own does not work. I tried this with a working 
container from an Ubuntu 12.04 host moved to new hardware and a recent 
Ubuntu 14.04, but the problem also applies to a newly created Ubuntu 
14.04 container.


I should mention that if I bind the IP address directly to an alias 
interface on the host, pinging from inside and outside the host works 
correctly.

The relevant network settings in the container config:

lxc.network.ipv4=91.143.88.119/24
lxc.network.ipv4.gateway=91.143.88.1

My '/etc/network/interfaces' on the host:

auto br0
iface br0 inet static
address 81.7.15.233
netmask 255.255.255.0
broadcast 81.7.15.255
gateway 81.7.15.1
bridge_ports eth0
bridge_fd 0
bridge_stp off
bridge_waitport 0
bridge_maxwait 0


and on the client:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 91.143.88.119
netmask 255.255.255.0


First of all, use either the guest's network file or the 
lxc.network.ipv4* settings. There is no reason to use both.
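
For example, if you keep the lxc.network.ipv4* settings, the container's
/etc/network/interfaces would be reduced to something like this (a sketch
of that variant; the alternative is the mirror image - full interfaces
file, no lxc.network.ipv4* lines in the config):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual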


Can you ping 91.143.88.1?
What do you see with tcpdump -i eth0 on the host machine?

tamas
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread othiman

Hi Andreas, hi tamas,

thanks for your fast answers. I removed the IP settings from the config, 
but that was obviously not the problem.


I cannot ping the gateway from inside the container:
ubuntu@ubuntu-test:~$ ping 91.143.88.1
PING 91.143.88.1 (91.143.88.1) 56(84) bytes of data.
^C
--- 91.143.88.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3023ms

Meanwhile I used tcpdump -i eth0 -v icmp -n on the host to look at the 
ICMP packets (because there is a lot of other traffic on that device).


tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 
65535 bytes
11:46:27.181917 IP (tos 0x0, ttl 244, id 28226, offset 0, flags [none], proto ICMP (1), length 84)
    176.227.209.42 > 81.7.14.108: ICMP echo request, id 263, seq 31682, length 64
11:46:27.401801 IP (tos 0x0, ttl 64, id 48628, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 1, length 64
11:46:28.409373 IP (tos 0x0, ttl 64, id 48629, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 2, length 64
11:46:29.417370 IP (tos 0x0, ttl 64, id 48630, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 3, length 64
11:46:30.425366 IP (tos 0x0, ttl 64, id 48631, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 4, length 64
11:46:31.383279 IP (tos 0x0, ttl 244, id 29380, offset 0, flags [none], proto ICMP (1), length 84)
    176.227.209.42 > 81.7.14.108: ICMP echo request, id 263, seq 32673, length 64

^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel


So it looks like the packets are going out, but no answer is coming 
back. So I tried Andreas' hint. I called (in the container):


ubuntu@ubuntu-test:~$ sudo ./arping -I eth0 -u 91.143.88.119 -c4
ARPING 91.143.88.119

--- 91.143.88.119 statistics ---
4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

But still no answers from the gateway :-(

Best regards,
Thomas

On 11.09.2014 11:20, Andreas Laut wrote:

We face this problem very often.

You can try to ping the Host IP and after that the gateway IP from
inside the container.
If that doesn't work you'll have to send arpings from inside the container:
arping -I [lxc-interfacename]  -U 91.143.88.119 -c4 (arping from iputils
package)

After that you'll get the following output:
Sent 4 probes (4 broadcast(s))
Received 0 response(s)

0 responses are ok. Try to ping again.
Hope that helps.

Best Regards



Am 11.09.2014 um 11:05 schrieb othiman:

Hi everyone,

I already posted this to askubuntu.com
(http://askubuntu.com/questions/522457/lxc-container-no-outgoing-traffic-with-bridged-network-and-public-ip-address),
but I think this might be a better place to find help.

I am trying to set up an LXC container with a bridged network on Ubuntu
14.04.1, but outgoing traffic seems to be blocked. Pinging any IP other
than the container's own does not work. I tried this with a working
container from an Ubuntu 12.04 host moved to new hardware and a recent
Ubuntu 14.04, but the problem also applies to a newly created Ubuntu
14.04 container.

I should mention that if I bind the IP address directly to an alias
interface on the host, pinging from inside and outside the host works
correctly.

I hope someone has an idea what I am doing wrong.


I created the new container with:
 lxc-create -t ubuntu -n ubuntu-test


This is my config file:
 # Template used to create this container:
/usr/share/lxc/templates/lxc-ubuntu
 # Parameters passed to the template: -r trusty
 # For additional config options, please look at lxc.container.conf(5)

 # Common configuration
 lxc.include = /usr/share/lxc/config/ubuntu.common.conf

 # Container specific configuration
 lxc.rootfs = /var/lib/lxc/ubuntu-test/rootfs
 lxc.mount = /var/lib/lxc/ubuntu-test/fstab
 lxc.utsname = ubuntu-test
 lxc.arch = amd64

 # Network configuration
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.hwaddr = 00:16:3e:6c:7c:79
 lxc.network.ipv4=91.143.88.119/24
 lxc.network.ipv4.gateway=91.143.88.1
 lxc.network.name=eth0


My '/etc/network/interfaces' on the host:
 auto lo
 iface lo inet loopback

 auto br0
 iface br0 inet static
 address 81.7.15.233
 netmask 255.255.255.0
 broadcast 81.7.15.255
 gateway 81.7.15.1
 bridge_ports eth0
 bridge_fd 0
 bridge_stp off
 bridge_waitport 0
 bridge_maxwait 0


and on the client:
 auto lo
 iface lo inet loopback

 auto eth0
 iface eth0 inet static
 address 91.143.88.119
 netmask 255.255.255.0
 broadcast 91.143.88.255
 gateway 

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread Tamas Papp

hi,

Is it allowed by your provider?

tamas

On 09/11/2014 12:46 PM, othiman wrote:

Hi Andreas, hi tamas,

thanks for your fast answers. I removed the IP settings from the 
config, but that was obviously not the problem.


I cannot ping the gateway from inside the container:
ubuntu@ubuntu-test:~$ ping 91.143.88.1
PING 91.143.88.1 (91.143.88.1) 56(84) bytes of data.
^C
--- 91.143.88.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3023ms

Meanwhile I used tcpdump -i eth0 -v icmp -n on the host to look at 
the ICMP packets (because there is a lot of other traffic on that 
device).


tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 
65535 bytes
11:46:27.181917 IP (tos 0x0, ttl 244, id 28226, offset 0, flags [none], proto ICMP (1), length 84)
    176.227.209.42 > 81.7.14.108: ICMP echo request, id 263, seq 31682, length 64
11:46:27.401801 IP (tos 0x0, ttl 64, id 48628, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 1, length 64
11:46:28.409373 IP (tos 0x0, ttl 64, id 48629, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 2, length 64
11:46:29.417370 IP (tos 0x0, ttl 64, id 48630, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 3, length 64
11:46:30.425366 IP (tos 0x0, ttl 64, id 48631, offset 0, flags [DF], proto ICMP (1), length 84)
    91.143.88.119 > 91.143.88.1: ICMP echo request, id 457, seq 4, length 64
11:46:31.383279 IP (tos 0x0, ttl 244, id 29380, offset 0, flags [none], proto ICMP (1), length 84)
    176.227.209.42 > 81.7.14.108: ICMP echo request, id 263, seq 32673, length 64

^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel


So it looks like the packets are going out, but no answer is coming 
back. So I tried Andreas' hint. I called (in the container):


ubuntu@ubuntu-test:~$ sudo ./arping -I eth0 -u 91.143.88.119 -c4
ARPING 91.143.88.119

--- 91.143.88.119 statistics ---
4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

But still no answers from the gateway :-(

Best regards,
Thomas

On 11.09.2014 11:20, Andreas Laut wrote:

We face this problem very often.

You can try to ping the Host IP and after that the gateway IP from
inside the container.
If that doesn't work you'll have to send arpings from inside the 
container:

arping -I [lxc-interfacename]  -U 91.143.88.119 -c4 (arping from iputils
package)

After that you'll get the following output:
Sent 4 probes (4 broadcast(s))
Received 0 response(s)

0 responses are ok. Try to ping again.
Hope that helps.

Best Regards



Am 11.09.2014 um 11:05 schrieb othiman:

Hi everyone,

I already posted this to askubuntu.com
(http://askubuntu.com/questions/522457/lxc-container-no-outgoing-traffic-with-bridged-network-and-public-ip-address),
but I think this might be a better place to find help.

I am trying to set up an LXC container with a bridged network on Ubuntu
14.04.1, but outgoing traffic seems to be blocked. Pinging any IP other
than the container's own does not work. I tried this with a working
container from an Ubuntu 12.04 host moved to new hardware and a recent
Ubuntu 14.04, but the problem also applies to a newly created Ubuntu
14.04 container.

I should mention that if I bind the IP address directly to an alias
interface on the host, pinging from inside and outside the host works
correctly.

I hope someone has an idea what I am doing wrong.


I created the new container with:
 lxc-create -t ubuntu -n ubuntu-test


This is my config file:
 # Template used to create this container:
/usr/share/lxc/templates/lxc-ubuntu
 # Parameters passed to the template: -r trusty
 # For additional config options, please look at 
lxc.container.conf(5)


 # Common configuration
 lxc.include = /usr/share/lxc/config/ubuntu.common.conf

 # Container specific configuration
 lxc.rootfs = /var/lib/lxc/ubuntu-test/rootfs
 lxc.mount = /var/lib/lxc/ubuntu-test/fstab
 lxc.utsname = ubuntu-test
 lxc.arch = amd64

 # Network configuration
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.hwaddr = 00:16:3e:6c:7c:79
 lxc.network.ipv4=91.143.88.119/24
 lxc.network.ipv4.gateway=91.143.88.1
 lxc.network.name=eth0


My '/etc/network/interfaces' on the host:
 auto lo
 iface lo inet loopback

 auto br0
 iface br0 inet static
 address 81.7.15.233
 netmask 255.255.255.0
 broadcast 81.7.15.255
 gateway 81.7.15.1
 bridge_ports eth0
 bridge_fd 0
 bridge_stp off
 bridge_waitport 0
 bridge_maxwait 0


and on the client:
 auto lo
 iface lo inet loopback

 auto eth0
 iface eth0 inet static
 address 

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread Fajar A. Nugraha
On Thu, Sep 11, 2014 at 4:05 PM, othiman othi...@gmx.de wrote:

 Hi everyone,

 I already posted this to askubuntu.com (http://askubuntu.com/
 questions/522457/lxc-container-no-outgoing-traffic-
 with-bridged-network-and-public-ip-address), but I think this might be a
 better place to find help.

 I am trying to set up an LXC container with a bridged network on Ubuntu
 14.04.1, but outgoing traffic seems to be blocked. Pinging any IP other
 than the container's own does not work. I tried this with a working
 container from an Ubuntu 12.04 host moved to new hardware and a recent
 Ubuntu 14.04, but the problem also applies to a newly created Ubuntu 14.04 container.

 I should mention that if I bind the IP address directly to an alias
 interface on the host, pinging from inside and outside the host works
 correctly.

 I hope someone has an idea what I am doing wrong.


Sounds suspiciously similar to a dedicated server/colo setup where your
provider only allows one MAC on each port. Is that the case for you? If
yes, then the short answer is you can't use a bridge.

Since your container IP (91.143.88.119) and host IP (81.7.15.233) are on
different subnets, I suspect that your provider routes the additional IP
to your main IP. In which case you should use a routed setup.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread othiman
I'm still confused that it works on the old server but not on the new 
one. I wrote an email to my provider asking if they use some kind of MAC 
filter. I will let you know if this is the solution.
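
In the meantime, one way to watch from the host whether ARP replies ever
come back for the container's address is something like this (a sketch,
using the interface name from this thread):

# -e prints link-level headers so the MAC addresses involved are visible
tcpdump -e -n -i eth0 arp

If the container's ARP requests go out but no replies arrive, that points
at filtering on the provider's side.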


Thanks for all your help,
Thomas

On 11.09.2014 13:08, Tamas Papp wrote:


On 09/11/2014 01:06 PM, othiman wrote:

Hi,

when binding the 91.143.88.119 address directly to an alias interface 
on the host (br0:0), pinging from inside and outside the host works 
correctly.


Yes, but as was mentioned, you're trying with a different MAC address.

What you need, I think, is an alias plus iptables DNAT/SNAT.
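
For reference, a minimal sketch of that approach, assuming the public IP
91.143.88.119 stays on the host (e.g. on br0:0) and the container gets a
private address such as 192.168.1.2 on an internal bridge (the private
address here is illustrative):

# forward traffic arriving for the public IP to the container
iptables -t nat -A PREROUTING -d 91.143.88.119 -j DNAT --to-destination 192.168.1.2
# rewrite the container's outbound traffic so it leaves with the public IP
iptables -t nat -A POSTROUTING -s 192.168.1.2 -j SNAT --to-source 91.143.88.119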


Cheers,
tamas
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread othiman

91.143.88.1 actually is the provider's gateway for the subnet.

The whole IP configuration for the container looks like this:
IP address: 91.143.88.119
netmask: 255.255.255.0
broadcast: 91.143.88.255
gateway: 91.143.88.1

So the container is on a totally different subnet, but shouldn't that 
work anyway with a bridged device?


Best regards,
Thomas

On 11.09.2014 15:11, brian mullan wrote:

In your container config you set the IP gateway as:

 lxc.network.ipv4.gateway=91.143.88.1

But I didn't see that IP addr anywhere else in your email. Where is 88.1 ?

Brian



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] cannot enable dev loop access from LXC

2014-09-11 Thread Bin Zhou




On Wednesday, September 10, 2014 2:30 PM, Serge Hallyn 
serge.hal...@ubuntu.com wrote:



Quoting Bin Zhou (lakerz...@yahoo.com):
 Hi,
 
 I am trying to enable dev loop access in LXC and set up a GlusterFS
 server volume on the loop device.
 I added the following line to /var/lib/lxc/local-server-7/config
  lxc.cgroup.devices.allow = b 7:* rwm

 Certainly looks fine.


 The container failed to start with the new config.
 
 ubuntu@bpcluster1:~$ sudo lxc-start -n local-server-7 
 lxc-start: write /sys/fs/cgroup/devices//lxc/local-server-7/devices.allow : 
 Invalid argument
 lxc-start: failed to setup the cgroups for 'local-server-7'
 lxc-start: failed to setup the container
 lxc-start: invalid sequence number 1. expected 2
 lxc-start: failed to spawn 'local-server-7

 Hm.  Can you cat /sys/fs/cgroup/devices/lxc/devices.list ?

Host:~$ sudo cat /sys/fs/cgroup/devices/lxc/devices.list 
a *:* rwm

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] No outgoing traffic with bridged network and public IP address from container

2014-09-11 Thread Fajar A. Nugraha
Depends on how your provider set it up.

If it WERE intended to work that way, they would've given you full
instructions (e.g. use this IP, this netmask, and this gateway) instead
of just giving you the IP (and probably saying add this as a secondary
IP on your server).

The fact that you say it works when you use it as br0:0 (and br0 has
81.7.15.233) means that at that point you're NOT using a bridge, but
instead using your host's primary IP as the gateway. And your provider
has routed that IP thru your primary IP.

Again, it is important to know how your provider works. Asking them
BEFORE asking questions here would've led to a more productive discussion.

FWIW, on server4you I use something like this:

On the host's /etc/network/interfaces:
#==
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address 209.126.X.Y
  netmask 255.255.255.192
  gateway 209.126.X.1

auto br0
iface br0 inet static
address 192.168.124.1
netmask 255.255.255.0
bridge_ports none
up ip route add A.B.C.D/32 dev br0 || true



... where A.B.C.D is the one additional IP that they gave me.


On the container lxc config:
#===
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:16:3E:04:A8:65
lxc.network.veth.pair=veth-C1-0


On the container's /etc/network/interfaces:
#==
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address A.B.C.D
netmask 255.255.255.255
up ip route add 192.168.124.1 dev eth0
up ip route add default via 192.168.124.1



Basically it uses static routes to force communication between the
host's br0 and the container's eth0, even though they're on different
logical subnets (br0 is 192.168.124.1/24, the container is A.B.C.D/32).
The container's traffic is routed thru the host's br0, which in turn
goes to the provider's routing thru the host's main IP. Pure routing
setup, no NAT involved.

-- 
Fajar



On Thu, Sep 11, 2014 at 8:19 PM, othiman othi...@gmx.de wrote:

 91.143.88.1 actually is the provider's gateway for the subnet.

 The whole IP configuration for the container looks like this:
 IP address: 91.143.88.119
 netmask: 255.255.255.0
 broadcast: 91.143.88.255
 gateway: 91.143.88.1

 So the container is on a totally different subnet, but shouldn't that work
 anyway with a bridged device?

 Best regards,
 Thomas


 On 11.09.2014 15:11, brian mullan wrote:

 In your container config you set the IP gateway as:

  lxc.network.ipv4.gateway=91.143.88.1

 But I didn't see that IP addr anywhere else in your email. Where is 88.1 ?

 Brian



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] cannot enable dev loop access from LXC

2014-09-11 Thread Serge Hallyn
Quoting Bin Zhou (lakerz...@yahoo.com):
 
 
 
 
 On Wednesday, September 10, 2014 2:30 PM, Serge Hallyn 
 serge.hal...@ubuntu.com wrote:
 
 
 
 Quoting Bin Zhou (lakerz...@yahoo.com):
  Hi,
  
  I am trying to enable dev loop access in LXC and set up a GlusterFS
  server volume on the loop device.
  I added the following line to /var/lib/lxc/local-server-7/config
   lxc.cgroup.devices.allow = b 7:* rwm
 
  Certainly looks fine.
 
 
  The container failed to start with the new config.
  
  ubuntu@bpcluster1:~$ sudo lxc-start -n local-server-7 
  lxc-start: write /sys/fs/cgroup/devices//lxc/local-server-7/devices.allow 
  : Invalid argument
  lxc-start: failed to setup the cgroups for 'local-server-7'
  lxc-start: failed to setup the container
  lxc-start: invalid sequence number 1. expected 2
  lxc-start: failed to spawn 'local-server-7
 
  Hm.  Can you cat /sys/fs/cgroup/devices/lxc/devices.list ?
 
 Host:~$ sudo cat /sys/fs/cgroup/devices/lxc/devices.list 
 a *:* rwm

Adding lxc.cgroup.devices.allow = b 7:* rwm to a brand-new container works
for me here.  Can you start the container without that line and show

/sys/fs/cgroup/devices/lxc/local-server-7/devices.list

while it's running?
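
For reference, one way to reproduce the failing write by hand while the
container is running - a sketch assuming the default cgroup v1 layout shown
above:

# this is essentially the write lxc-start attempts; an "Invalid argument"
# error here confirms the kernel itself is rejecting the rule
echo 'b 7:* rwm' | sudo tee /sys/fs/cgroup/devices/lxc/local-server-7/devices.allow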
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] cannot enable dev loop access from LXC

2014-09-11 Thread Bin Zhou
@Serge
Thanks for the response.

Host:~$ sudo cat /sys/fs/cgroup/devices/lxc/local-server-7/devices.list
c *:* m
b *:* m
c 1:3 rwm
c 1:5 rwm
c 5:1 rwm
c 5:0 rwm
c 1:9 rwm
c 1:8 rwm
c 136:* rwm
c 5:2 rwm
c 254:0 rwm
c 10:229 rwm
c 10:200 rwm
c 1:7 rwm
c 10:228 rwm
c 10:232 rwm




 On Thursday, September 11, 2014 12:36 PM, Serge Hallyn 
 serge.hal...@ubuntu.com wrote:
 Quoting Bin Zhou (lakerz...@yahoo.com):
  
  
  
  
  On Wednesday, September 10, 2014 2:30 PM, Serge Hallyn 
  serge.hal...@ubuntu.com wrote:
  
  
  
  Quoting Bin Zhou (lakerz...@yahoo.com):
   Hi,
   
  I am trying to enable dev loop access in LXC and set up a GlusterFS
  server volume on the loop device.
  I added the following line to /var/lib/lxc/local-server-7/config
   lxc.cgroup.devices.allow = b 7:* rwm
  
   Certainly looks fine.
  
  
   The container failed to start with the new config.
   
   ubuntu@bpcluster1:~$ sudo lxc-start -n local-server-7 
   lxc-start: write 
   /sys/fs/cgroup/devices//lxc/local-server-7/devices.allow : Invalid 
   argument
   lxc-start: failed to setup the cgroups for 'local-server-7'
   lxc-start: failed to setup the container
   lxc-start: invalid sequence number 1. expected 2
   lxc-start: failed to spawn 'local-server-7
  
   Hm.  Can you cat /sys/fs/cgroup/devices/lxc/devices.list ?
  
  Host:~$ sudo cat /sys/fs/cgroup/devices/lxc/devices.list 
  a *:* rwm

 Adding lxc.cgroup.devices.allow = b 7:* rwm to a brand-new container works
 for me here.  Can you start the container without that line and show

 /sys/fs/cgroup/devices/lxc/local-server-7/devices.list

 while it's running?



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] cannot enable dev loop access from LXC

2014-09-11 Thread Serge Hallyn
If you're on Ubuntu, could you go to pad.lv/u/lxc and file a bug so
we can better track the configuration info?

Otherwise, can you show:

uname -r
cat /etc/*-release
cat /var/lib/lxc/local-server-7/config (*with* the line added)

After trying to start the container again, the last 200 lines of

/var/log/audit/auditd.log (if it exists)
/var/log/syslog

For the last two, look over them first to make sure no sensitive
info is there.

Quoting Bin Zhou (lakerz...@yahoo.com):
 @Serge
 Thanks for the response.
 
 Host:~$ sudo cat /sys/fs/cgroup/devices/lxc/local-server-7/devices.list
 c *:* m
 b *:* m
 c 1:3 rwm
 c 1:5 rwm
 c 5:1 rwm
 c 5:0 rwm
 c 1:9 rwm
 c 1:8 rwm
 c 136:* rwm
 c 5:2 rwm
 c 254:0 rwm
 c 10:229 rwm
 c 10:200 rwm
 c 1:7 rwm
 c 10:228 rwm
 c 10:232 rwm
 
 
 
 
  On Thursday, September 11, 2014 12:36 PM, Serge Hallyn 
  serge.hal...@ubuntu.com wrote:
  Quoting Bin Zhou (lakerz...@yahoo.com):
   
   
   
   
   On Wednesday, September 10, 2014 2:30 PM, Serge Hallyn 
   serge.hal...@ubuntu.com wrote:
   
   
   
   Quoting Bin Zhou (lakerz...@yahoo.com):
Hi,

   I am trying to enable dev loop access in LXC and set up a GlusterFS
   server volume on the loop device.
   I added the following line to /var/lib/lxc/local-server-7/config
    lxc.cgroup.devices.allow = b 7:* rwm
   
Certainly looks fine.
   
   
The container failed to start with the new config.

ubuntu@bpcluster1:~$ sudo lxc-start -n local-server-7 
lxc-start: write 
/sys/fs/cgroup/devices//lxc/local-server-7/devices.allow : Invalid 
argument
lxc-start: failed to setup the cgroups for 'local-server-7'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'local-server-7
   
Hm.  Can you cat /sys/fs/cgroup/devices/lxc/devices.list ?
   
   Host:~$ sudo cat /sys/fs/cgroup/devices/lxc/devices.list 
   a *:* rwm
 
  Adding lxc.cgroup.devices.allow = b 7:* rwm to a brand-new container works
  for me here.  Can you start the container without that line and show
 
  /sys/fs/cgroup/devices/lxc/local-server-7/devices.list
 
  while it's running?
 
 
 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Getting kdump to work on an LXC server

2014-09-11 Thread Rod Bruce
Greetings,
I have been working on a problem for the last couple of days and I believe
I have come up with a solution, so I thought I would share it with the list
in case anybody else runs into this or someone has a better solution.


Problem:
I have had a server running Ubuntu 14.04 hang a couple of times. I try
to run everything using standard Ubuntu packages. The server is an LXC
host with two containers running on it (but several more are planned). I
wanted to get a kernel core dump if it hung again, so I started
investigating kdump/kexec. I installed, configured, and tested
kdump/kexec on another server and it worked as advertised. However, when
I tried it on the LXC server it would save the core dump OK, but the
server would then fail to reboot or hang at some other point in the process.

I noticed that when kexec was booting the secondary kernel it was
starting up all of the services that start on a normal boot, including
LXC, and that seemed to be causing a problem. When I set the containers
not to auto-boot, kdump worked as expected. However, we want the
containers to auto-boot, so I had to come up with a different solution.


Things I tried that did not work:

- I added the parameter KDUMP_RUNLEVEL=1 to the
/etc/default/kdump-tools file. KDUMP_RUNLEVEL=1 is something I found
mentioned on a couple of pages, but it is not in any of the man pages
or the Ubuntu documentation.

- I uncommented the KDUMP_CMDLINE_APPEND parameter in the
/etc/default/kdump-tools file and changed the line to
KDUMP_CMDLINE_APPEND="irqpoll maxcpus=1 nousb 1", which tells kexec
to boot into single-user mode. This did boot to single-user mode;
however, single-user mode is not adequate because it asks for a root
password (for which there is a work-around) and it also does not mount
extra file systems (like /var/crash).


The solution I came up with:

I changed the default run-level from 2 to 3, set LXC to not start on
run-level 2, and configured kdump to boot to run-level 2. Historically,
run-level 2 was multi-user mode without networking and run-level 3 was
the same as 2 but with network support enabled. As far as I can tell, at
least with a standard Ubuntu 14.04 server install, there is no difference
between run-levels 2 and 3.


Here are the details:

1. Change the default run-level from 2 to 3:

sudo sed -i 's/^env DEFAULT_RUNLEVEL=2/env DEFAULT_RUNLEVEL=3/' \
  /etc/init/rc-sysinit.conf

2. Set LXC to not start on run-level 2:

sudo sed -i 's/^start on runlevel \[2345\]/start on runlevel \[345\]/' \
  /etc/init/lxc.conf

sudo sed -i 's/^stop on starting rc RUNLEVEL=\[016\]/stop on starting rc RUNLEVEL=\[0126\]/' \
  /etc/init/lxc.conf

3. Configure kdump to boot to run-level 2:

sudo sed -i 's/^#KDUMP_CMDLINE_APPEND="irqpoll maxcpus=1 nousb"/KDUMP_CMDLINE_APPEND="irqpoll maxcpus=1 nousb 2"/' \
  /etc/default/kdump-tools


After I made these changes I rebooted the server, ran some tests and
everything seems to be working.
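
For anyone repeating this, a quick way to verify the result (a sketch;
kdump-config ships with Ubuntu's kdump-tools package, and the sysrq test
at the end WILL crash and reboot the machine):

# confirm a crash kernel is loaded and check the current runlevel
kdump-config show
runlevel
# deliberately trigger a panic to exercise the whole dump-and-reboot path
echo c | sudo tee /proc/sysrq-trigger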


-- 

Rod Bruce
UNIX System and Network Administrator
PALS, A Program of the
Minnesota State Colleges and Universities
rod.br...@mnsu.edu
507.389.2000

Quis custodiet ipsos custodes?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Getting kdump to work on an LXC server

2014-09-11 Thread Serge Hallyn
Quoting Rod Bruce (rod.br...@mnsu.edu):
 Greetings,
 I have been working on a problem the last couple of days and I believe I
 have come up with a solution so I thought I would share it with the list
 in case anybody else runs into this or someone has a better solution.
 
 
 Problem:
 I have had a server running Ubuntu 14.04 hang a couple of times. I try
 to run everything using standard Ubuntu packages. The server is an LXC
 host with two containers running on it (but several more planned). I
 wanted to get a kernel core dump if it hung again so I started
 investigating kdump/kexec. I installed, configured, and tested
 kdump/kexec on another server and it worked as advertised. However, when
 I tried it on the LXC server it would save the core dump OK but the
 server would fail to reboot or hang at some other point in the process.
 
 I noticed that when kexec was booting the secondary kernel it was
 starting up all of the services that start on a normal boot, including
 LXC, and that seemed to be causing a problem. When I set the containers

Do you have any idea why it was causing a problem?

Now that you are kexecing into runlevel 2, after you do that, are you
able to start the lxc service and lxc container by hand?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users