Brian
Thank you for this link!
I have read through it and it indeed seems related, or maybe the same thing as
some of those.
My problem clearly isn't DNS-related, since I cannot ping out by IP either.
And since I see this exact same behavior on two hosts (one physical, one
virtual) on two di
Ok, this happens again and again!
Used like this, LXD is not usable in production. I cannot restart LXD every few days.
I'll answer Fajar's questions from below here:
By "inbound" I mean connections from the host/internet to the container. Those
work and keep working; I have port forwarding enabled.
By
Hi
How can I best put snapshots of LXD on a separate (slower) btrfs volume?
Probably I cannot simply symlink /var/lib/lxd/snapshots/ to the other btrfs
volume, can I?
I guess that could cause problems, since /var/lib/lxd is on the fast btrfs
volume, so the snapshots will probably be generated as s
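Since btrfs snapshots cannot cross devices, a symlink or bind mount into the slow volume would likely break snapshot creation outright. One sketch that should work (the path /mnt/slowbtrfs is my assumption, not anything LXD defines): let the snapshot be created on the fast volume as usual, then ship it to the slow volume with btrfs send/receive, which is designed for exactly this. Note that btrfs send requires the source subvolume to be read-only.

```
root@host:~# btrfs send /var/lib/lxd/snapshots/c1/snap0 | btrfs receive /mnt/slowbtrfs/lxd-snapshots/
```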
Hi
My LXD has the following network configuration:
root@qumind:~# egrep -v '(^#|^$)' /etc/default/lxd-bridge
USE_LXD_BRIDGE="true"
LXD_BRIDGE="lxdbr0"
UPDATE_PROFILE="true"
LXD_CONFILE=""
LXD_DOMAIN="lxd"
LXD_IPV4_ADDR="10.0.8.1"
LXD_IPV4_NETMASK="255.255.255.0"
LXD_IPV4_NETWORK="10.0.8.0/24"
LX
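As a side note, the NETMASK and NETWORK lines above encode the same prefix; a pure-POSIX-shell sanity check that converts the netmask to a prefix length (values copied from the config; the variable names are mine, not lxd-bridge's):

```shell
# Convert LXD_IPV4_NETMASK to a prefix length and compare with the /24
# in LXD_IPV4_NETWORK.
mask=255.255.255.0
prefix=0
IFS=.
for octet in $mask; do
  case $octet in
    255) prefix=$((prefix + 8)) ;;
    254) prefix=$((prefix + 7)) ;;
    252) prefix=$((prefix + 6)) ;;
    248) prefix=$((prefix + 5)) ;;
    240) prefix=$((prefix + 4)) ;;
    224) prefix=$((prefix + 3)) ;;
    192) prefix=$((prefix + 2)) ;;
    128) prefix=$((prefix + 1)) ;;
    0)   ;;
  esac
done
unset IFS
echo "10.0.8.0/$prefix"
```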
Thank you, that is very helpful!
-"lxc-users" wrote: -
To: LXC users mailing-list
From: "Fajar A. Nugraha"
Sent by: "lxc-users"
Date: 05/14/2016 12:35
Subject: Re: [lxc-users] lxd preinstalled in images
On Sat, May 14, 2016 at 12:32 PM, wrote:
Hi
I have just noticed that lxd is pre
Hi
I have just noticed that lxd is preinstalled in the official xenial image from
cloud-images.ubuntu.com/releases.
What is the reason? Is it really needed?
And is there any drawback if it is uninstalled (besides nesting)?
Thanks, David
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
Hi
After adding a remote I can see it and also run command there:
david@kimera:~$ lxc remote list
+------+-----+----------+--------+--------+
| NAME | URL | PROTOCOL | PUBLIC | STATIC |
+---
What encryption is LXD using for the move behind the scenes?
-"lxc-users" wrote: -
Use the lxc "move" command:
lxc move [remote:] [remote:]
On 2016-03-12 14:23, david.an...@bli.uzh.ch wrote:
> Hi
>
> For simple "offline" migration, i.e. migration of a stopped container,
> can I jus
Hi
For simple "offline" migration, i.e. migration of a stopped container, can I
just copy /var/lib/lxd/containers/xyz to another host or do I need to use
specific lxc commands?
Can I move images in a similar way, by simply copying /var/lib/lxd/images/xyz
to another host?
Thanks
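For comparison, the command-based route is roughly this (a sketch: the remote name "other" is an assumption and must already have been added with lxc remote add, and <fingerprint> stands for an image fingerprint or alias):

```
david@kimera:~$ lxc copy xyz other:xyz
david@kimera:~$ lxc image copy <fingerprint> other:
```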
Hi
I would like to change the subuids used by LXD. In order to do that I have
changed the entries in /etc/subuid and /etc/subgid, but new containers kept the
former uid space.
Even after deleting all containers and all images, new containers keep getting
the old uid range.
I have found that the uid
Addendum: I have now seen that the setting in /etc/subuid is used as expected
on all hosts except the one where root is mapped to 1'000'000. There was a wrong
entry in /etc/sub[ug]id where lxd and root had the same setting. Is 1'000'000 a
hard-coded fallback when the config is wrong?
Bu
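For reference, a sketch of what distinct /etc/sub[ug]id entries would look like (the ranges are hypothetical; the point is only that root and lxd must not share one entry):

```
# /etc/subuid (and analogously /etc/subgid)
root:100000:65536
lxd:1000000:65536
```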
Hi
I have noticed that LXD uses some UIDs/GIDs by default I haven't set up and
which aren't represented in /etc/sub[ug]id files.
Interestingly, they are different from instance to instance: on one, root is
mapped to 1'000'000 (not 100'000); on another, it's 265'536.
Now when I copy the rootfs of
Hi
Even though everything seems to work, there are always the same errors in the
log. I thought I should let you guys know:
david@qumind:~$ lxc info --show-log psql | egrep -v '(DEBUG|WARN|INFO|NOTICE)' | tail -5
lxc 1453906613.863 ERROR lxc_monitor - monitor.c:lxc_monitor_open:
Since LXD is starting the unprivileged containers as root, does that mean that
from a security point of view there is no difference between running the 'lxc'
commands as a user who is a member of the 'sudo' group and as a user who is
not?
For plain LXC I've understood that it is more secure to
Has the certificate of images.linuxcontainers.org changed?
Or was I attacked?
I can access again after removing .config/lxc/servercerts/images.crt
But how do I now add the correct certificate again?
Thanks!
So if I understood correctly, this means that lxd could potentially suffer
from a weakness in 'lxc monitor', meaning it would be more secure to run
unprivileged containers using the low-level lxc-... tools?
-"lxc-users" wrote: -
To: LXC users mailing-list
From: Serge Hallyn
Sent by
Hmm, this is interesting.
I am running my container as the unprivileged user 'lxduser', and yet:
root@qumind:~# ps -ef | grep '[l]xc monitor'
root 7609 1 0 11:54 ? 00:00:00 [lxc monitor] /var/lib/lxd/containers pgroonga
What is wrong here?
-"lxc-users" wrote: -
To:
Ok, made some progress, but running into permission errors now:
lxduser@qumind:~$ lxc profile show share
name: share
config: {}
devices:
  share:
    path: /share
    readonly: "true"
    source: /share
    type: disk
With the profiles default,share applied, the container does not start.
With only def
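To isolate whether the profile or the device definition itself is at fault, the same share can also be attached to a single container directly (a sketch; the container name c1 is an assumption):

```
david@kimera:~$ lxc config device add c1 share disk source=/share path=/share readonly=true
```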
I am trying to implement bind mounting for LXD containers using a profile
'share' containing:
name: share
config:
  lxc.mount.entry = /share share none bind 0 0
devices: {}
However, when I try to save (in vi) I get the error:
Config parsing error: yaml: unmarshal errors:
line 21: cannot unmar
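The parse error is expected: everything under config: must be a YAML key: value pair, so a bare lxc.mount.entry line cannot live there directly; it has to ride inside the raw.lxc key. A sketch of the intended profile (same mount entry as above):

```
name: share
config:
  raw.lxc: lxc.mount.entry = /share share none bind 0 0
devices: {}
```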
In my case it happens consistently, not only once, and also if I wait several
minutes before logging in.
It makes the container unusable.
But it seems that containers with systemd now work? So no need for upstart
anymore?
-"lxc-users" wrote: -
To: LXC users mailing-list
From: Serge H
Thanks, now we have more info.
First I deleted all containers and images and relaunched wily from today's
builds, and noticed that I can run an interactive shell in it just fine.
It does not exit. Even though the out file shows lots of file system permission
errors, from a user perspective th
And the same is true for the original wily container from the official image:
david@kimera:~$ lxc stop wily
david@kimera:~$ lxc profile apply wily default,debug_init
Profile default,debug_init applied to wily
david@kimera:~$ lxc start wily
david@kimera:~$ time lxc exec wily /bin/bash
root@wily:~#
Yes, I did:
david@kimera:~$ lxc stop wily-u-1
david@kimera:~$ lxc profile apply wily-u-1 default,debug_init
Profile default,debug_init applied to wily-u-1
david@kimera:~$ lxc start wily-u-1
david@kimera:~$ time lxc exec wily-u-1 /bin/bash
root@wily-u-1:~#
real 0m5.007s
user 0m0.004s
sys
Ok, after installing the daily builds, relaunching wily from scratch,
installing upstart in it, creating a config profile and applying it to the
container:
david@kimera:~$ lxc profile show debug_init
name: debug_init
config:
  raw.lxc: |-
    lxc.console.logfile = /tmp/out
    lxc.init_cmd = /sb
But now something strange happens:
When running 'lxc exec wily-with-upstart-1 /bin/bash' the prompt changes to
root@wily-with-upstart-1 as expected, but then the shell closes after a few
seconds, falling back to the host prompt.
This happens both in the original wily (with systemd) and in the
wily
Thanks, that worked.
-"lxc-users" wrote: -
To: LXC users mailing-list
From: Serge Hallyn
Sent by: "lxc-users"
Date: 12/07/2015 16:21
Subject: Re: [lxc-users] Serge Hallyn's article "Publishing LXD images"
Quoting david.an...@bli.uzh.ch (david.an...@bli.uzh.ch):
> Hi all
>
> In his ar
Hi all
In his article "Publishing LXD images", Serge Hallyn writes:
# lxc launch lxc:ubuntu/wily/amd64 w1
# lxc exec w1 -- apt-get -y install upstart-bin upstart-sysv
However, this does not work, since the wily container does not have an IP
address (I have used "wily" as a name instead of "w1").
[attachment "outfiles.tgz" removed by David Andel/at/UZH]
Now attached the output of
strace -f -ostrace.out -- lxc-ls -f
strace -f -ostrace-start.out -- lxc-start -n s0_RStSh
lxc-start -n s0_RStSh -l trace -o debug.out
I ran these not as root this time, but if that is required I will post those
as well.
Interestingly, this happens only on a viv
Hi
I have the exact same problem after yesterday's update, and I suspect it is bug
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1413927, or at least one
closely related.
root@andel2:~# cat /proc/self/cgroup
10:devices:/system.slice/ssh.service
9:perf_event:/system.slice/ssh.service
8:cpuse
Hi
As the subject says, I would like to understand all the security aspects of
root- vs. user-based unprivileged containers.
As far as I understand, containers with the same namespace mapping can interact
with each other because the UIDs on the host are identical.
Also, if I understand it correct
Thanks Fajar for the suggestions.
I am going to stick to vivid + upstart unprivileged for now and upgrade to wily
as soon as it is out.
Even though this is for production, I don't need feature/syntax stability right
now since we are just starting out. Also, Trusty does not seem to have
eve
Thanks Fajar for the explanations!
I am going to stick to utopic for the time being :-)
David
-"lxc-users" wrote: -
To: LXC users mailing-list
From: "Fajar A. Nugraha"
Sent by: "lxc-users"
Date: 05/05/2015 9:16
Subject: Re: [lxc-users] no network in vivid image? - addendum
Longer ver
It definitely is a problem with vivid inside the container, although not only
with the image.
After installing utopic fresh in an unprivileged container I got an ipv4
address and connectivity, as I wrote below.
However, after running a do-release-upgrade to vivid the ipv4 address
disappeared.
David
-
Hi
After downloading and running an unprivileged vivid container on a vivid base
system, I don't get any ipv4 address inside the container.
Up to a utopic container on a utopic base, I got an ipv4 address in the
container out of the box.
Also starting such a pre-existing container under the upg
Hi
After upgrade to vivid I cannot start my (privileged) containers based on
overlayfs snapshots.
The interesting lines I get in the log are:
lxc-start 1430327863.828 INFO bdev - bdev.c:overlayfs_mount:2247 - overlayfs: error mounting /var/lib/lxc/ro_nginx/rootfs onto /usr/lib/x86_6
Hi Tycho
>> 2. LXD will work with btrfs subvolumes per environment?
> Eventually, although it currently does not. You can set it up by hand
> if you like, though.
Well, if I want to use the lxc-clone / lxc-snapshot functionality on btrfs
snapshots I guess I have to lxc launch an image first, m
Hi Mark
I see that nobody replied to your question.
Have you made any progress in the meantime?
Also, how did you manage to have your containers v1 and v2 installed into a
btrfs subvolume? With lxc-create it's clear, but with lxc launch?
Thanks,
David
-"lxc-users" wrote: -
To: lxc-user
The colors also work when logging in with ssh; only with lxc-console do they
not work.
-"lxc-users" wrote: -
To: LXC users mailing-list
From: david.an...@bli.uzh.ch
Sent by: "lxc-users"
Date: 03/20/2015 20:30
Subject: Re: [lxc-users] color in console not working in unprivileged
con
Thanks Xavier,
unfortunately, this is not the issue.
The colors work out of the box when I access the container using lxc-attach,
but they do not work when I access the container using lxc-console - not even
using --color=auto.
And I see the same behavior on two independent systems (both running
Hi,
in unprivileged containers color in the console does not work out of the box.
How can it be enabled?
I am running ubuntu utopic with lxc 1.1.0~alpha2-0ubuntu3.2.
Thanks,
David
Hi,
I am looking into individual user namespaces for each container.
The first container could have uids and gids from 100000 to 165536.
The second container could have 200000 to 265536, couldn't it?
How far can I go? Is there a limit?
Thanks,
David
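On the limit question: Linux uids are 32-bit, so the space tops out near 4294967295, which leaves room for tens of thousands of disjoint 65536-id ranges. A small sketch that lays out non-overlapping ranges in /etc/subuid syntax (the base of 100000 and the container names are assumptions for illustration):

```shell
# Emit one disjoint 65536-id range per container, /etc/subuid style.
BASE=100000
SIZE=65536
i=0
for name in first second third; do
  start=$((BASE + i * SIZE))
  printf 'root:%d:%d\n' "$start" "$SIZE"   # range for container "$name"
  i=$((i + 1))
done
```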