[lxc-users] Auto Reply (was: Using LXC as an application container.)

2015-06-09 Thread Ashish Bunkar

  
  
Thanks for your email
I am on vacation currently, will get back to you once I am back on
the job.
Kindly expect delays in my responses...
-- 
Thanks & Regards
Ashish Bunkar
Contact No.- +91-7259183696
  

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Auto Reply (was: unprivileged container won't start; is encrypted home directory the problem?)

2015-06-09 Thread Ashish Bunkar

  
  
Thanks for your email
I am on vacation currently, will get back to you once I am back on
the job.
Kindly expect delays in my responses...
-- 
Thanks & Regards
Ashish Bunkar
Contact No.- +91-7259183696
  

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Systemd inside lxc

2015-06-09 Thread Fajar A. Nugraha
On Tue, Jun 9, 2015 at 6:20 AM, SIVA SUBRAMANIAN.P psiv...@gmail.com
wrote:

 Installed lxcfs on the host, but I am getting the issue below. Where can I
 find documentation about this?
 # lxcfs -s -f -o allow_other /var/lib/lxcfs/



 *fuse: device not found, try 'modprobe fuse' first*


Since you decided to bold that log line, did you also check whether your
system has fuse support enabled (both the kernel module and the userland
tool)?
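
(Not part of the original reply, just a quick sketch of how one might
verify that on the host; exact package names vary by distro:)

# kernel side: is fuse available/loaded?
modprobe fuse
grep fuse /proc/filesystems
ls -l /dev/fuse
# userland side: is the fuse userspace tool installed?
which fusermount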

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Systemd inside lxc

2015-06-09 Thread SIVA SUBRAMANIAN.P
Resolved this by adding CONFIG_FUSE_FS=y to the kernel config, but I am
still trying to figure out the correct sequence for running cgproxy,
cgmanager and lxcfs to get systemd working in the container.
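
(For reference, one possible ordering; this is only a sketch, not confirmed
by the thread. The service names assume Ubuntu-style packaging and
"mycontainer" is a placeholder:)

modprobe fuse                                  # no-op if fuse is built in
service cgmanager start                        # cgroup manager on the host
service cgproxy start                          # only needed in nested setups
mkdir -p /var/lib/lxcfs
lxcfs -s -f -o allow_other /var/lib/lxcfs/ &   # mount the lxcfs view
lxc-start -n mycontainer                       # then start the container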

On Mon, Jun 8, 2015 at 11:19 PM, Fajar A. Nugraha l...@fajar.net wrote:

 On Tue, Jun 9, 2015 at 6:20 AM, SIVA SUBRAMANIAN.P psiv...@gmail.com
 wrote:

 Installed lxcfs on the host, but I am getting the issue below. Where can I
 find documentation about this?
 # lxcfs -s -f -o allow_other /var/lib/lxcfs/



 *fuse: device not found, try 'modprobe fuse' first*


 Since you decided to bold that log line, did you also check whether your
 system has fuse support enabled (both the kernel module and the userland
 tool)?

 --
 Fajar


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Unity8inLXC

2015-06-09 Thread brian mullan
I just became aware of this today and thought I'd pass the info along in
case others hadn't seen it yet...

Ubuntu Wiki - The Unity 8 Desktop (Preview) in an LXC
https://wiki.ubuntu.com/Unity8inLXC

Brian
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Auto Reply (was: unprivileged container won't start; is encrypted home directory the problem?)

2015-06-09 Thread Stéphane Graber
I have now unsubscribed this person from the list.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: Digital signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] kernel crash when starting an unprivileged container

2015-06-09 Thread Christoph Lehmann
As a side note, you can use rsyslog's remote logging to capture the oops.
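
(A minimal sketch of such a setup; the collector address below is a
placeholder, and there is no guarantee the oops reaches rsyslog before the
machine dies:)

# on a separate log-collecting host, in /etc/rsyslog.conf:
$ModLoad imudp
$UDPServerRun 514

# on the crashing host, forward kernel messages (@ = UDP, @@ = TCP):
kern.*    @192.0.2.10:514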


On 3 June 2015 08:01:22 CEST, Tomasz Chmielewski man...@wpkg.org wrote:
I'm trying to start an unprivileged container on Ubuntu 14.04; 
unfortunately, the kernel crashes.


# lxc-create -t download -n test-container
(...)
Distribution: ubuntu
Release: trusty
Architecture: amd64
(...)

# lxc-start -n test-container -F

Kernel crashes at this point.

It does not crash if I start the container as privileged.


- kernel used is 4.0.4-040004-generic from 
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.4-wily/

- lxc userspace: http://ppa.launchpad.net/ubuntu-lxc/stable/ubuntu

# dpkg -l|grep lxc
ii  liblxc1        1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (library)
ii  lxc            1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools
ii  lxc-templates  1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (templates)
ii  lxcfs          0.7-0ubuntu4~ubuntu14.04.1~ppa1    amd64  FUSE based filesystem for LXC
ii  python3-lxc    1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (Python 3.x bindings)



It's a bit hard to get the printout of the OOPS, as I'm only able to 
access the server remotely and it doesn't manage to write the OOPS to 
the log.

Anyway, after a few crashes and a "while true; do dmesg -c; done" loop I
was able to capture this:

[  237.706914] device vethPI4H7F entered promiscuous mode
[  237.707006] IPv6: ADDRCONF(NETDEV_UP): vethPI4H7F: link is not ready
[  237.797284] eth0: renamed from veth1OSOTS
[  237.824526] IPv6: ADDRCONF(NETDEV_CHANGE): vethPI4H7F: link becomes ready
[  237.824556] lxcbr0: port 1(vethPI4H7F) entered forwarding state
[  237.824562] lxcbr0: port 1(vethPI4H7F) entered forwarding state
[  237.928179] BUG: unable to handle kernel NULL pointer dereference at (null)
[  237.928262] IP: [8122f888] pin_remove+0x58/0xf0
[  237.928318] PGD 0
[  237.928364] Oops: 0002 [#1] SMP
[  237.928432] Modules linked in: xt_conntrack veth xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables bridge stp llc intel_rapl iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul eeepc_wmi ghash_clmulni_intel aesni_intel asus_wmi sparse_keymap ie31200_edac aes_x86_64 edac_core lrw gf128mul glue_helper shpchp lpc_ich ablk_helper cryptd mac_hid 8250_fintek serio_raw tpm_infineon video wmi btrfs lp parport raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq e1000e raid1 ahci raid0 ptp libahci pps_core multipath linear
[  237.930151] CPU: 2 PID: 6568 Comm: lxc-start Not tainted 4.0.4-040004-generic #201505171336
[  237.930188] Hardware name: System manufacturer System Product Name/P8B WS, BIOS 0904 10/24/2011
[  237.930225] task: 880806970a00 ti: 8808090c8000 task.ti: 8808090c8000
[  237.930259] RIP: 0010:[8122f888]  [8122f888] pin_remove+0x58/0xf0
[  237.930341] RSP: 0018:8808090cbe18  EFLAGS: 00010246
[  237.930383] RAX:  RBX: 880808808a20 RCX: dead00100100
[  237.930429] RDX:  RSI: dead00200200 RDI: 81f9a548
[  237.930474] RBP: 8808090cbe28 R08: 81d11b60 R09: 0100
[  237.930572] R13: 880806970a00 R14: 81ecd070 R15: 7ffe57fd5540
[  237.930618] FS:  7fd448c0() GS:88082fa8() knlGS:
[  237.930685] CS:  0010 DS:  ES:  CR0: 80050033
[  237.930728] CR2:  CR3: 0008099c1000 CR4: 000407e0
[  237.930773] Stack:
[  237.930809]  880806970a00 880808808a20 8808090cbe48 8121d0f2
[  237.930957]  8808090cbe68 880808808a20 8808090cbea8 8122fa55
[  237.931123]   880806970a00 810bb2b0 8808090cbe70
[  237.931286] Call Trace:
[  237.931336]  [8121d0f2] drop_mountpoint+0x22/0x40
[  237.931380]  [8122fa55] pin_kill+0x75/0x130
[  237.931425]  [810bb2b0] ? prepare_to_wait_event+0x100/0x100
[  237.931471]  [8122fb39] mnt_pin_kill+0x29/0x40
[  237.931530]  [8121baf0] cleanup_mnt+0x80/0x90
[  237.931573]  [8121bb52] __cleanup_mnt+0x12/0x20
[  237.931617]  [81096ad7] task_work_run+0xb7/0xf0
[  237.931662]  [8101607c] do_notify_resume+0xbc/0xd0
[  237.931709]  [817f0beb] int_signal+0x12/0x17



-- 
Tomasz Chmielewski
http://wpkg.org



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] kernel crash when starting an unprivileged container

2015-06-09 Thread Tomasz Chmielewski
It may be worth trying, but it won't work reliably for most kernel
crashes (networking, disk I/O, etc. may be taken down as well).


--
Tomasz Chmielewski
http://wpkg.org

On 2015-06-10 14:11, Christoph Lehmann wrote:

As a side note, you can use rsyslog's remote logging to capture the oops.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



[lxc-users] LXD v0.11 and vivid containers

2015-06-09 Thread Mark Constable
I was excited to see 0.11 arrive on a wily host and thought it might
solve a few of the issues with systemd-journal not starting up (and
therefore the network not coming up either), but it doesn't seem so. In
fact, after launching today's vivid image, which starts, there is no IP
(as before) and I can't exec bash into it or stop it without killing the
lxc exec PID. There is nothing interesting in
/var/lib/lxd/lxc/lxc-monitord.log.

Update: well, well. On a hunch I checked whether there is a wily image,
and there is, so I launched one and it lists with an IP (cool), but I
still can't exec into it or stop it.
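
(Roughly the commands being described, as a sketch only; the image
aliases and the container names are assumptions, and remote/image naming
differed across early LXD releases:)

lxc launch ubuntu:wily w1     # wily image: container gets an IP
lxc launch ubuntu:vivid v1    # vivid image: starts, but no IP
lxc list                      # shows the above
lxc exec w1 -- bash           # hangs on both containers
lxc stop w1                   # also hangs until the 'lxc exec' PID is killed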

liblxc1     1.1.2-0ubuntu3
lxc         1.1.2-0ubuntu3
lxcfs       0.9-0ubuntu1
lxd         0.11-0ubuntu1
lxd-client  0.11-0ubuntu1

Not whinging, just FWIW.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users