Re: [lxc-users] make a nginx webserver in a lxc container available to the local wireless lan

2015-12-09 Thread Eldon Kuzhyelil
Hi,
I have followed
https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-network and changed my
LXC config file like below:

lxc.network.type = veth
lxc.network.link = br0

and in my /etc/network/interfaces I have followed this link:
https://help.ubuntu.com/lts/serverguide/network-configuration.html#bridging

and my interfaces file now looks like this:

auto lo
iface lo inet loopback

auto br0
iface br0 inet static
address 192.168.0.10  (my IP address when connected through ethernet)
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
gateway 192.168.0.1
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

Now bring up the bridge:

sudo ifup br0



But after rebooting I am still not able to connect to the internet using
ethernet or wifi.
What should I do?

I am using Kubuntu 14.04 and I have installed nginx in one container. I
want the webpage from my LXC container to be visible on another laptop
connected to my private LAN.

Thank you




On Tue, Dec 8, 2015 at 6:09 PM, Luis Michael Ibarra <
michael.iba...@gmail.com> wrote:

> Hi all,
>
> 2015-12-04 5:47 GMT-05:00 Eldon Kuzhyelil :
>
>> Hi,
>> I am using Kubuntu 14.04 with LXC installed on it. I have created one LXC
>> container, installed the nginx webserver in that container, and created a
>> simple webpage displaying a welcome sentence. I am able to get this page
>> when I enter http://10.0.1.3 (the IP address of the container). Now I
>> want to display this webpage from another machine connected to the same
>> wireless LAN. I have searched the bridging techniques but have not been
>> able to do it. Can anybody help me with this?
>>
> There you go, just update/delete your iptables rules if the container's IP
> changes or you stop using the service.
>
> #echo 1 > /proc/sys/net/ipv4/ip_forward
> #iptables -A FORWARD -j ACCEPT
> #iptables -t nat -A PREROUTING -d ${HOST} -p tcp --dport 80 -j DNAT --to
> ${CONTAINER}:80 -m comment --comment "http to ${CONTAINER}"
>
>
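For example, with hypothetical concrete values (host LAN address 192.168.0.10, container address 10.0.1.3), the rule spec can be kept in one variable so the exact same text works with `-A` to add the rule and later with `-D` to delete it when the container's IP changes; a small sketch:

```shell
# Hypothetical addresses; substitute your own host/container IPs.
HOST=192.168.0.10
CONTAINER=10.0.1.3

# One rule spec, reused verbatim:
#   add:    iptables -t nat -A $RULE   (as root)
#   delete: iptables -t nat -D $RULE   (as root)
RULE="PREROUTING -d $HOST -p tcp --dport 80 -j DNAT --to-destination $CONTAINER:80"
echo "$RULE"
# prints: PREROUTING -d 192.168.0.10 -p tcp --dport 80 -j DNAT --to-destination 10.0.1.3:80
```

(`--to-destination` is the long form of the `--to` option used above.)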
>
>> Thank you
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
>
>
>
> --
> Luis Michael Ibarra
>

Re: [lxc-users] make a nginx webserver in a lxc container available to the local wireless lan

2015-12-09 Thread Mark Constable

On 09/12/15 21:41, Eldon Kuzhyelil wrote:

auto lo
iface lo inet loopback


You may be missing...

auto eth0
iface eth0 inet manual


auto br0
iface br0 inet static
[...]
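Putting Mark's stanza together with the file posted earlier, a complete /etc/network/interfaces for this setup might look like the following sketch (assuming eth0 is the wired NIC, 192.168.0.10 is free on the LAN, and the resolvconf package is installed so dns-nameservers takes effect):

```
# /etc/network/interfaces -- sketch; adjust addresses for your LAN
auto lo
iface lo inet loopback

# The physical NIC is enslaved to the bridge and carries no address itself
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
    dns-nameservers 192.168.0.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 9
```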



Re: [lxc-users] make a nginx webserver in a lxc container available to the local wireless lan

2015-12-09 Thread Fajar A. Nugraha
On Wed, Dec 9, 2015 at 6:41 PM, Eldon Kuzhyelil  wrote:

> and my interface file looks like this now
>
>
> auto lo
> iface lo inet loopback
>
> auto br0
> iface br0 inet static
> address 192.168.0.10  (my IP address when connected through ethernet)
> network 192.168.0.0
> netmask 255.255.255.0
> broadcast 192.168.0.255
> gateway 192.168.0.1
> bridge_ports eth0
> bridge_fd 9
> bridge_hello 2
> bridge_maxage 12
> bridge_stp off
>
> Now bring up the bridge:
>
> sudo ifup br0
>
>
>
> But after rebooting I am still not able to connect to the internet using
> ethernet or wifi.
>
>
First things first: do NOT even bother with bridging if you still use wifi
to access the internet. If you HAVE to use wifi, use NAT instead (there's
an example in the first link).



> What to do?
>
>
Do some basic troubleshooting:
- is the HOST able to connect to the internet?
- what do these commands show on the host:

brctl show
ip ad li
ip route
ethtool eth0
ping -n -c 1 192.168.0.1
ping -n -c 1 8.8.8.8
ping -n -c 1 google.com
cat /etc/resolv.conf
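The `ip route` check in that list can also be scripted; a small sketch (the sample routing lines below are made up) that extracts the default gateway so it can be pinged directly:

```shell
# Extract the default gateway from `ip route` output: field 3 of the
# "default via <gw> dev <if>" line.
default_gw() {
    awk '/^default/ { print $3; exit }'
}

# Hypothetical sample output; in real use: ip route | default_gw
sample="default via 192.168.0.1 dev eth0
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.10"

printf '%s\n' "$sample" | default_gw
# prints: 192.168.0.1
```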

My GUESS is you're missing something simple, like:
- the cable is unplugged
- you're trying to set up bridging with eth0 while you're connected using
wifi
- your interface name is not eth0, but something else
- you previously used DHCP, and now that you use a static address your
resolv.conf is empty since you haven't set it up

-- 
Fajar

Re: [lxc-users] Serge Hallyn's article "Publishing LXD images"

2015-12-09 Thread david . andel
Ok, after installing the daily builds, relaunching wily from scratch, 
installing upstart in it, creating a config profile and applying it to the 
container:

david@kimera:~$ lxc profile show debug_init
name: debug_init
config:
  raw.lxc: |-
    lxc.console.logfile = /tmp/out
    lxc.init_cmd = /sbin/init --debug --verbose
devices: {}

david@kimera:~$ lxc profile apply wily-u-1 default,debug_init
Profile default,debug_init applied to wily-u-1

No log is being written after running a bash shell and just waiting until it 
quits by itself:

david@kimera:~$ lxc exec wily-u-1 /bin/bash
root@wily-u-1:~# david@kimera:~$ 
david@kimera:~$ lxc exec wily-u-1 -- ls -alh /tmp/
total 0
drwxrwxrwt 1 root root   0 Dec  9 15:37 .
drwxr-xr-x 1 root root 132 Dec  9 03:54 ..

Also, the time until the shell quits varies from run to run, in the tens of seconds:

david@kimera:~$ time lxc exec wily-u-1 /bin/bash
root@wily-u-1:~# 
real    0m14.124s
user    0m0.004s
sys 0m0.004s
david@kimera:~$ time lxc exec wily-u-1 /bin/bash
root@wily-u-1:~# ls -l /tmp/
total 0
root@wily-u-1:~# 
real    0m28.515s
user    0m0.016s
sys 0m0.000s
david@kimera:~$ time lxc exec wily-u-1 /bin/bash
root@wily-u-1:~# 
real    0m12.001s
user    0m0.000s
sys 0m0.012s
david@kimera:~$ time lxc exec wily-u-1 /bin/bash
root@wily-u-1:~# ls -l
total 0
root@wily-u-1:~# 
real    0m8.603s
user    0m0.008s
sys 0m0.000s

David

-"lxc-users"  wrote: -
To: LXC users mailing-list 
From: Serge Hallyn 
Sent by: "lxc-users" 
Date: 12/08/2015 18:27
Subject: Re: [lxc-users] Serge Hallyn's article "Publishing LXD images"

Quoting david.an...@bli.uzh.ch (david.an...@bli.uzh.ch):
> But now happens something strange:
> 
> When running 'lxc exec wily-with-upstart-1 /bin/bash' the prompt changes to 
> root@wily-with-upstart-1 as expected, but then closes the shell in a few 
> seconds, falling back on the host prompt.
> This happens both in the original wily (with systemd) and in the 
> wily-with-upstart containers, but not in a trusty container.
> I haven't found any entries in /var/log in those containers during the time 
> that this happens.
> 
> What could this be and how can I find the cause?

Jinkeys, no idea.  Is the container still running after your
shell dies?

I'd try getting debug output from upstart itself.  Depending on which
lxc you're running, you may need to use lxc from the daily ppa to get
the patch "support arguments in lxc.init_cmd".  Add the following to
your container config:

  raw.lxc: |-
    lxc.console.logfile = /tmp/out
    lxc.init_cmd = /sbin/init --debug --verbose


Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Peter Steele

On 12/08/2015 08:36 PM, Serge Hallyn wrote:

What do you mean by "when the server comes up"?  If you bring up the
server, let it sit for 5 mins, then start them, they still fail?
What I meant here was that when my server boots, it launches our 
management software, which in turn launches the containers that are 
defined on that server. The systemd errors occur as the containers are 
started. Delaying when the containers are started doesn't have any 
effect. I've found though that if I put a five second delay between 
starting each container, the systemd errors don't occur (at least not in 
the tests I've run so far). I haven't had this issue with libvirt-lxc, 
and I hope there is a better solution than this arbitrary five second delay.
What lxc version are you using again? 

1.1.5.

Ok, so this shows that in the container 'sys/fs' existed,
but fuse did not.  This suggests that the fuse kernel module
was not done loading yet.

Could you add to /lib/systemd/system/lxc.service the line

ExecStartPre=modprobe fuse

and see if that helps?  (I'm not sure if you'd also need to sleep
a short time to give sysfs time to catch up, or if the modprobe
would wait...  you could just use a script that waits until
/sys/fs/fuse exists on the host)
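Such a wait-script could be sketched like this (the path and the retry budget are assumptions; the modprobe call is shown commented out because it needs root):

```shell
#!/bin/sh
# Poll until a path exists, trying up to TRIES times at 0.1s intervals.
wait_for_path() {
    path=$1 tries=$2 i=0
    while [ "$i" -lt "$tries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# Intended use from an ExecStartPre= helper (root required):
#   modprobe fuse
#   wait_for_path /sys/fs/fuse 100   # give sysfs up to ~10s to catch up
```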

I added this line to lxc.service and that cleared up the fuse issues. 
This did not have any effect on the systemd errors though.


Peter


Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Serge Hallyn
Quoting Peter Steele (pwste...@gmail.com):
> On 12/08/2015 08:36 PM, Serge Hallyn wrote:
> >What do you mean by "when the server comes up"?  If you bring up the
> >server, let it sit for 5 mins, then start them, they still fail?
> What I meant here was that when my server boots, it launches our
> management software, which in turn launches the containers that are
> defined on that server. The systemd errors occur as the containers
> are started. Delaying when the containers are started doesn't have
> any effect. I've found though that if I put a five second delay
> between starting each container, the systemd errors don't occur (at
> least not in the tests I've run so far). I haven't had this issue
> with libvirt-lxc, and I hope there is a better solution than this
> arbitrary five second delay.
> >What lxc version are you using again?
> 1.1.5.
> >>Ok, so this shows that in the container 'sys/fs' existed,
> >>but fuse did not.  This suggests that the fuse kernel module
> >>was not done loading yet.
> >>
> >>Could you add to /lib/systemd/system/lxc.service the line
> >>
> >>ExecStartPre=modprobe fuse
> >>
> >>and see if that helps?  (I'm not sure if you'd also need to sleep
> >>a short time to give sysfs time to catch up, or if the modprobe
> >>would wait...  you could just use a script that waits until
> >>/sys/fs/fuse exists on the host)
> >>
> I added this line to lxc.service and that cleared up the fuse
> issues. This did not have any effect on the systemd errors though.

And "the systemd errors" are the ssh-keygen ones only?  Or is there
more?

And you do, or do not, also get these with containers created
through the download template?

Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Peter Steele

On 12/09/2015 09:43 AM, Serge Hallyn wrote:

And "the systemd errors" are the ssh-keygen ones only?  Or is there
more?
Various services are being impacted, for example, I saw these errors in 
a run yesterday:


Dec  7 13:52:00 pws-vm-00 systemd: Failed at step CGROUP spawning 
/usr/bin/kmod: No such file or directory

Dec  7 13:52:00 pws-vm-00 systemd: Mounted Huge Pages File System.
Dec  7 13:52:00 pws-vm-00 systemd: kmod-static-nodes.service: main 
process exited, code=exited, status=219/CGROUP
Dec  7 13:52:00 pws-vm-00 systemd: Failed to start Create list of 
required static device nodes for the current kernel.
Dec  7 13:52:00 pws-vm-00 systemd: Unit kmod-static-nodes.service 
entered failed state.


Dec  7 13:52:01 pws-vm-00 systemd: Failed at step CGROUP spawning 
/etc/rc.d/init.d/jexec: No such file or directory
Dec  7 13:52:01 pws-vm-00 systemd: jexec.service: control process 
exited, code=exited status=219
Dec  7 13:52:01 pws-vm-00 systemd: Failed to start LSB: Supports the 
direct execution of binary formats..

Dec  7 13:52:01 pws-vm-00 systemd: Unit jexec.service entered failed state.

At least a half dozen different services have failed in the various 
tests I've done, and the set is always different from run to run.

And you do, or do not, also get these with containers created
through the download template?

Most of my tests have been with my custom containers of course since we 
need the additional tools and files that make up our management 
software. I did a test though where I blew away the containers that were 
created by my install framework and replaced them all with the generic 
CentOS download template. I was unable to reproduce the systemd errors 
with this simple container. I then installed the additional OS modules 
and other third party packages that we use in our software on top of 
this basic container and the systemd errors returned. I'm going to break 
this process down a bit more to see if I can identify what additions to 
the base container cause systemd to fail.


Peter


Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Serge Hallyn
Quoting Peter Steele (pwste...@gmail.com):
> On 12/09/2015 09:43 AM, Serge Hallyn wrote:
> >And "the systemd errors" are the ssh-keygen ones only?  Or is there
> >more?
> Various services are being impacted, for example, I saw these errors
> in a run yesterday:
> 
> Dec  7 13:52:00 pws-vm-00 systemd: Failed at step CGROUP spawning
> /usr/bin/kmod: No such file or directory
> Dec  7 13:52:00 pws-vm-00 systemd: Mounted Huge Pages File System.
> Dec  7 13:52:00 pws-vm-00 systemd: kmod-static-nodes.service: main
> process exited, code=exited, status=219/CGROUP
> Dec  7 13:52:00 pws-vm-00 systemd: Failed to start Create list of
> required static device nodes for the current kernel.
> Dec  7 13:52:00 pws-vm-00 systemd: Unit kmod-static-nodes.service
> entered failed state.

This is the kind of thing I'd expect when using cgmanager or lxcfs,
but not with straight lxc+cgfs.

Can you show what /sys/fs/cgroup tree and /proc/1/cgroup looks like in a
working container?

> Dec  7 13:52:01 pws-vm-00 systemd: Failed at step CGROUP spawning
> /etc/rc.d/init.d/jexec: No such file or directory
> Dec  7 13:52:01 pws-vm-00 systemd: jexec.service: control process
> exited, code=exited status=219
> Dec  7 13:52:01 pws-vm-00 systemd: Failed to start LSB: Supports the
> direct execution of binary formats..
> Dec  7 13:52:01 pws-vm-00 systemd: Unit jexec.service entered failed state.
> 
> At least a half dozen different services have failed in the various
> tests I've done, and the set is always different from run to run.
> >And you do, or do not, also get these with containers created
> >through the download template?
> >
> Most of my tests have been with my custom containers of course since
> we need the additional tools and files that make up our management
> software. I did a test though where I blew away the containers that
> were created by my install framework and replaced them all with the
> generic CentOS download template. I was unable to reproduce the
> systemd errors with this simple container. I then installed the
> additional OS modules and other third party packages that we use in
> our software on top of this basic container and the systemd errors
> returned. I'm going to break this process down a bit more to see if
> I can identify what additions to the base container cause systemd to
> fail.

Interesting.

I suppose just looking at the 'capsh --print' output difference for the
bounding set between the custom containers spawned by lxc and libvirt-lxc could
be enlightening.

Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Peter Steele

On 12/09/2015 10:18 AM, Serge Hallyn wrote:
This is the kind of thing I'd expect when using cgmanager or lxcfs, 
but not with straight lxc+cgfs. Can you show what /sys/fs/cgroup tree 
and /proc/1/cgroup looks like in a working container? 

As requested:

# ll /sys/fs/cgroup   (top level only)
total 0
drwxr-xr-x 3 root root 60 Dec  9 10:12 blkio
lrwxrwxrwx 1 root root 11 Dec  9 10:12 cpu -> cpu,cpuacct
drwxr-xr-x 3 root root 60 Dec  9 10:12 cpu,cpuacct
lrwxrwxrwx 1 root root 11 Dec  9 10:12 cpuacct -> cpu,cpuacct
drwxr-xr-x 3 root root 60 Dec  9 10:12 cpuset
drwxr-xr-x 3 root root 60 Dec  9 10:12 devices
drwxr-xr-x 3 root root 60 Dec  9 10:12 freezer
drwxr-xr-x 3 root root 60 Dec  9 10:12 hugetlb
drwxr-xr-x 3 root root 60 Dec  9 10:12 memory
lrwxrwxrwx 1 root root 16 Dec  9 10:12 net_cls -> net_cls,net_prio
drwxr-xr-x 3 root root 60 Dec  9 10:12 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Dec  9 10:12 net_prio -> net_cls,net_prio
drwxr-xr-x 3 root root 60 Dec  9 10:12 perf_event
dr-xr-xr-x 4 root root  0 Dec  9 10:28 systemd

# cat /proc/1/cgroup
10:hugetlb:/lxc/vm-00
9:perf_event:/lxc/vm-00
8:net_cls,net_prio:/lxc/vm-00
7:freezer:/lxc/vm-00
6:devices:/lxc/vm-00
5:memory:/lxc/vm-00
4:blkio:/lxc/vm-00
3:cpu,cpuacct:/lxc/vm-00
2:cpuset:/lxc/vm-00
1:name=systemd:/system.slice/supervisord.service

And for a bonus:

# mount
/dev/md1 on / type ext4 (rw,relatime,stripe=256,data=ordered)
none on /dev type tmpfs (rw,relatime,size=100k,mode=755)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,relatime)
sysfs on /sys/devices/virtual/net type sysfs 
(rw,nosuid,nodev,noexec,relatime)
sysfs on /sys/fs/fuse/connections type sysfs 
(rw,nosuid,nodev,noexec,relatime)
cgroup_root on /sys/fs/cgroup type tmpfs 
(rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755)
cgroup_root on /sys/fs/cgroup/hugetlb type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/hugetlb/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup_root on /sys/fs/cgroup/perf_event type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/perf_event/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup_root on /sys/fs/cgroup/net_cls,net_prio type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/net_cls,net_prio/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup_root on /sys/fs/cgroup/freezer type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/freezer/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,freezer)
cgroup_root on /sys/fs/cgroup/devices type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/devices/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,devices)
cgroup_root on /sys/fs/cgroup/memory type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/memory/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,memory)
cgroup_root on /sys/fs/cgroup/blkio type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/blkio/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,blkio)
cgroup_root on /sys/fs/cgroup/cpu,cpuacct type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/cpu,cpuacct/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup_root on /sys/fs/cgroup/cpuset type tmpfs 
(ro,relatime,size=10240k,mode=755)
cgroup on /sys/fs/cgroup/cpuset/lxc/vm-00 type cgroup 
(rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
devpts on /dev/lxc/console type devpts 
(rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)

devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/lxc/tty1 type devpts 
(rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/lxc/tty2 type devpts 
(rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/lxc/tty3 type devpts 
(rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/lxc/tty4 type devpts 
(rw,relatime,gid=5,mode=620,ptmxmode=666)

tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup 
(rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)

debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)


Interesting.

I suppose just looking at the 'capsh --print' output difference for the
bounding set between the custom containers spawned by lxc and libvirt-lxc could
be enlightening.

Here's the diff:

# sdiff lxc libvirt

Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Peter Steele

On 12/09/2015 11:46 AM, Peter Steele wrote:

On 12/09/2015 10:18 AM, Serge Hallyn wrote:


I suppose just looking at the 'capsh --print' output difference for the
bounding set between the custom containers spawned by lxc and 
libvirt-lxc could

be enlightening.

Here's the diff:

# sdiff lxc libvirt
My apologies here. The output I had pasted in was nicely column aligned, 
with spaces. Something got lost along the way...


Peter


Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Peter Steele

On 12/09/2015 01:56 PM, Peter Steele wrote:

On 12/09/2015 11:46 AM, Peter Steele wrote:

On 12/09/2015 10:18 AM, Serge Hallyn wrote:


I suppose just looking at the 'capsh --print' output difference for the
bounding set between the custom containers spawned by lxc and 
libvirt-lxc could

be enlightening.

Here's the diff:

# sdiff lxc libvirt
My apologies here. The output I had pasted in was nicely column 
aligned, with spaces. Something got lost along the way...


Peter


Actually, some tabs got mixed in. Hopefully this will look better:

cap_chown cap_chown
cap_dac_override cap_dac_override
cap_dac_read_search cap_dac_read_search
cap_fowner cap_fowner
cap_fsetid cap_fsetid
cap_kill cap_kill
cap_setgid cap_setgid
cap_setuid cap_setuid
cap_setpcap cap_setpcap
cap_linux_immutable cap_linux_immutable
cap_net_bind_service cap_net_bind_service
cap_net_broadcast cap_net_broadcast
cap_net_admin cap_net_admin
cap_net_raw cap_net_raw
cap_ipc_lock cap_ipc_lock
cap_ipc_owner cap_ipc_owner
> cap_sys_rawio
cap_sys_chroot cap_sys_chroot
cap_sys_ptrace cap_sys_ptrace
> cap_sys_pacct
cap_sys_admin cap_sys_admin
cap_sys_boot cap_sys_boot
> cap_sys_nice
cap_sys_resource cap_sys_resource
cap_sys_tty_config cap_sys_tty_config
cap_mknod <
cap_lease cap_lease
cap_audit_write cap_audit_write
cap_audit_control | cap_setfcap
cap_setfcap,cap_syslog | cap_mac_override
> cap_syslog


Re: [lxc-users] sshd-keygen fails during container boot

2015-12-09 Thread Serge Hallyn
Quoting Peter Steele (pwste...@gmail.com):
> On 12/09/2015 01:56 PM, Peter Steele wrote:
> >On 12/09/2015 11:46 AM, Peter Steele wrote:
> >>On 12/09/2015 10:18 AM, Serge Hallyn wrote:
> >>>
> >>>I suppose just looking at the 'capsh --print' output difference for the
> >>>bounding set between the custom containers spawned by lxc and
> >>>libvirt-lxc could
> >>>be enlightening.
> >>Here's the diff:
> >>
> >># sdiff lxc libvirt
> >My apologies here. The output I had pasted in was nicely column
> >aligned, with spaces. Something got lost along the way...
> >
> >Peter
> >
> Actually, some tabs got mixed in. Hopefully this will look better:
> 
> cap_chown cap_chown
> cap_dac_override cap_dac_override
> cap_dac_read_search cap_dac_read_search
> cap_fowner cap_fowner
> cap_fsetid cap_fsetid
> cap_kill cap_kill
> cap_setgid cap_setgid
> cap_setuid cap_setuid
> cap_setpcap cap_setpcap
> cap_linux_immutable cap_linux_immutable
> cap_net_bind_service cap_net_bind_service
> cap_net_broadcast cap_net_broadcast
> cap_net_admin cap_net_admin
> cap_net_raw cap_net_raw
> cap_ipc_lock cap_ipc_lock
> cap_ipc_owner cap_ipc_owner
> > cap_sys_rawio

Looking through the systemd source, the only obvious thing is that
systemd won't mount configfs or debugfs without rawio.  That
doesn't sound relevant here though.

> cap_sys_chroot cap_sys_chroot
> cap_sys_ptrace cap_sys_ptrace
> > cap_sys_pacct
> cap_sys_admin cap_sys_admin
> cap_sys_boot cap_sys_boot
> > cap_sys_nice
> cap_sys_resource cap_sys_resource
> cap_sys_tty_config cap_sys_tty_config
> cap_mknod <

Ok, systemd does behave differently if it shouldn't be able
to create devices.  If you add
lxc.cap.drop = mknod sys_rawio
to your configs, does that help?

> cap_lease cap_lease
> cap_audit_write cap_audit_write
> cap_audit_control | cap_setfcap
> cap_setfcap,cap_syslog | cap_mac_override
> > cap_syslog
> 

Re: [lxc-users] Serge Hallyn's article "Publishing LXD images"

2015-12-09 Thread Serge Hallyn
Quoting david.an...@bli.uzh.ch (david.an...@bli.uzh.ch):
> Ok, after installing the daily builds, relaunching wily from scratch, 
> installing upstart in it, creating a config profile and applying it to the 
> container:
> 
> david@kimera:~$ lxc profile show debug_init
> name: debug_init
> config:
>   raw.lxc: |-
>     lxc.console.logfile = /tmp/out
>     lxc.init_cmd = /sbin/init --debug --verbose
> devices: {}
> 
> david@kimera:~$ lxc profile apply wily-u-1 default,debug_init
> Profile default,debug_init applied to wily-u-1
> 
> No log is being written after running a bash shell and just waiting until it 
> quits by itself:

Wait, did you restart the container after applying the new
profile?  Since we're setting debug arguments for init, we
need init to be restarted anyway.


Re: [lxc-users] Serge Hallyn's article "Publishing LXD images"

2015-12-09 Thread david . andel
Yes, I did:

david@kimera:~$ lxc stop wily-u-1
david@kimera:~$ lxc profile apply wily-u-1 default,debug_init
Profile default,debug_init applied to wily-u-1
david@kimera:~$ lxc start wily-u-1
david@kimera:~$ time lxc exec wily-u-1 /bin/bash
root@wily-u-1:~# 
real    0m5.007s
user    0m0.004s
sys 0m0.008s
david@kimera:~$ lxc exec wily-u-1 -- ls -alh /tmp/
total 0
drwxrwxrwt 1 root root   0 Dec  9 15:37 .
drwxr-xr-x 1 root root 132 Dec  9 03:54 ..

david@kimera:~$ lxc profile show debug_init
name: debug_init
config:
  raw.lxc: |-
    lxc.console.logfile = /tmp/out
    lxc.init_cmd = /sbin/init --debug --verbose
devices: {}

