On 2020-11-19 00:07, Tomasz Chmielewski wrote:
On 2020-11-18 23:50, Tomasz Chmielewski wrote:
That's a weird one!
In AWS, there is a concept of "instance metadata" - a webserver which
lets you fetch some instance metadata using http:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
For example, this times out:
curl -v http://169.254.169.254/latest/api/token
Does anyone know why? tcpdump doesn't give me many clues (TTL?).
Tomasz Chmielewski
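The timeout is consistent with IMDSv2's defenses (a sketch from my reading of the AWS docs, not verified on this setup): the /latest/api/token endpoint expects an HTTP PUT carrying a TTL header, and the instance's metadata hop limit defaults to 1, so the reply's IP TTL can be exhausted before it crosses the bridge into a container. The instance ID below is a placeholder:

```shell
# On the instance itself: request an IMDSv2 token with a PUT
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/

# From a container one extra hop is involved; raising the hop limit is the
# documented knob (instance ID is a placeholder):
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-put-response-hop-limit 2 --http-endpoint enabled
```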
This works as expected (outputs to stdout):
# lxc exec lxd05:tomasztest -- hostname
tomasztest
This doesn't work as expected (output file is empty, zero bytes):
# lxc exec lxd05:tomasztest -- hostname > output.txt
# cat output.txt
# ls -l output.txt
-rw-r--r-- 1 root root 0 Jun 9 11:56 output.txt
Are /etc/sysctl.conf and /etc/security/limits.conf changes documented on
https://github.com/lxc/lxd/blob/master/doc/production-setup.md still
relevant for LXD installed from snap (on Ubuntu 20.04)?
Tomasz
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
Does the container have a dedicated, public IP?
Tomasz Chmielewski
https://lxadm.com
On 2020-03-17 09:12, Tomasz Chmielewski wrote:
Not sure what happened, but suddenly, my system no longer has lxc
command:
(...)
I didn't do any snap manipulations (like lxd removal) recently.
The containers are still running on this system (as seen by ps).
Sorry for noise - that
Not sure what happened, but suddenly, my system no longer has lxc
command:
# lxc
-bash: lxc: command not found
# lxd
-bash: lxd: command not found
There are no lxc/lxd commands in /snap/bin/.
The directory was modified nearly 2 hours ago:
# date
Tue Mar 17 00:11:01 UTC 2020
# ls -ld /snap/b
foreign architecture VMs at some point (i.e.
ARM VM on amd64 host)?
I understand it would be quite slow, but in general, it works if you
fiddle with qemu-system-arm.
Tomasz Chmielewski
l on
server startup - but then again, there is no reload/change mechanism
Tomasz Chmielewski
https://lxadm.com
t out to the world - this is what I want.
Unfortunately, containers from one network (staging) can also connect to
containers from the other network (testing) - which is not what I want.
Is there any mechanism in LXD to prevent it? Or do I have to add my own,
cust
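Absent a built-in isolation switch in LXD of that era, the custom-rule approach can be sketched as an iptables-restore fragment (the bridge names lxdbr-staging and lxdbr-testing are placeholders, not from the thread):

```
*filter
# drop forwarded traffic between the two container bridges, both directions
-A FORWARD -i lxdbr-staging -o lxdbr-testing -j DROP
-A FORWARD -i lxdbr-testing -o lxdbr-staging -j DROP
COMMIT
```

Traffic to the outside world still matches the usual FORWARD/masquerade rules, so internet access is unaffected.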
On 2020-01-24 23:40, Tomasz Chmielewski wrote:
Now, it works great. However, mail sent from container 10.2.2.2 will
use LXD server's 1.1.1.1 as the outgoing IP. I'd like it to use
2.2.2.2, and still have the private IP assigned (I don't want to
assign the public IP to this conta
Let's say I have a LXD server with two public IPs, 1.1.1.1 and 2.2.2.2.
The default IP for outgoing routing is 1.1.1.1.
There, I setup two containers with private IP addresses: 10.1.1.1 and
10.2.2.2.
They receive the following proxy nat config:
- LXD server passes TCP traffic 1.1.1.1:25 to c
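For the outgoing-IP half of this, a source-NAT rule is the usual mechanism (a sketch, not from the thread; assumes iptables and that 10.2.2.2 is the container's private address):

```
*nat
# traffic from 10.2.2.2 leaves via 2.2.2.2 instead of the default 1.1.1.1;
# destinations inside 10.0.0.0/8 are excluded so container-to-container
# traffic keeps its private source
-A POSTROUTING -s 10.2.2.2/32 ! -d 10.0.0.0/8 -j SNAT --to-source 2.2.2.2
COMMIT
```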
e how to do this...
You just need to set this:
security.nesting: "true"
(in "lxc config edit container-name").
Tomasz Chmielewski
https://lxadm.com
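For reference, the same key as it appears in the `lxc config edit` YAML (a minimal fragment; the container's other config lines are omitted):

```yaml
config:
  security.nesting: "true"
```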
ng else, i.e. proper/full vim?
Tomasz Chmielewski
changes
to lxd restart model maybe? Or how lxc exec sessions are handled?
Tomasz Chmielewski
https://lxadm.com
command" also
get interrupted when this happens
Is there a way to prevent this?
Tomasz Chmielewski
ls: cannot access '/proc/diskstats': Transport endpoint is not connected
(...)
Is it a known issue? I'm observing it on around 10 servers.
Tomasz Chmielewski
https://lxadm.com
the data
over to the snap.
On Tue, Jul 2, 2019 at 9:52 PM Tomasz Chmielewski
wrote:
Just installed lxd from snap on a Ubuntu 18.04 server and launched the
first container:
# snap list
Name              Version    Rev   Tracking  Publisher  Notes
amazon-ssm-agent  2.3.612.0  1335  stable
Error: Invalid config: Unknown configuration key:
security.protection.delete
Also doesn't work when I try to set it via "lxc config edit".
This works perfectly on other LXD servers, so I'm a bit puzzled why it
won't work here?
Tomasz
n't any "lxc.payload" systemd service - then yes, like
you said - I'd need to dive into /sys/fs/cgroup/memory/ for now.
Great to hear project quotas are in the plans!
Tomasz Chmielewski
(to achieve the desired result - memory
limit per group of containers; nothing wrong with nesting itself), and
may be even hard to implement on existing setups.
Tomasz Chmielewski
https://lxadm.com
t; to
/etc/systemd/system/snap.lxd.daemon.service - would I achieve a desired
effect?
Tomasz Chmielewski
https://lxadm.com
ystem like KVM or Xen.
Tomasz Chmielewski
https://lxadm.com
On 2019-03-14 19:05, Sergiusz Pawlowicz wrote:
On Thu, 14 Mar 2019 at 16:55, Tomasz Chmielewski
wrote:
Is the following guide also relevant for running docker in LXD
installed
from snap?
https://stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
yes, I am using docker - but container
/mounts
from the container?
Tomasz Chmielewski
https://lxadm.com
Is the following guide also relevant for running docker in LXD installed
from snap?
https://stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
Tomasz Chmielewski
https://lxadm.com
$ lxc config set testcontainer description "some description"
Error: Invalid config: Unknown configuration key: description
What would be the best way to set the description for a container?
Tomasz Chmielewski
https://lxadm.com
built-in shell (ash)
Enter 'help' for a list of built-in commands.
/ # mount
/dev/loop0 on / type squashfs (ro,relatime)
none on /dev type tmpfs
This.
You have tmpfs mounted over /dev in your container.
Why is it an issue for you? I'd say it's perfectly normal behaviour.
Please note these are two separate commands:
mount
cat /proc/mounts
Tomasz Chmielewski
https://lxadm.com
On 2019-02-25 17:37, Yasoda Padala wrote:
yasoda@yasoda-HP-Z600-Workstation:~/.local/share/lxc/busybox$
lxc-attach -n busybox
lxc-attach: busybox: utils.c: get_ns_uid: 548 No such file
Yes, these parameters passed to the lxc command don't really help.
"ssh -t" makes "lxc exec" return after executing the command.
Though I'd be interested to understand why it suddenly started to
happen; I have a number of scripts which broke because of this change.
cution via ssh on the host does not hang:
laptop$ ssh root@host date
Sun Feb 24 12:31:33 UTC 2019
laptop$
Why do commands executed via ssh and lxc hang? It used to work some 1-2
months ago, not sure with which lxd version it regressed like this.
Tomasz Chmielewski
I'd say it's a "normal, bad default".
You can fix it by adding this to your sysctl config file:
kernel.dmesg_restrict = 1
Tomasz Chmielewski
https://lxadm.com
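To make the setting persistent across reboots, the same line can live in a sysctl drop-in (the filename below is an arbitrary choice) and be reloaded as root with `sysctl --system`:

```
# /etc/sysctl.d/60-dmesg-restrict.conf
kernel.dmesg_restrict = 1
```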
Not sure what's causing it, but it's yet another time I'm seeing it.
Tomasz
On 2018-09-24 22:43, Christian Brauner wrote:
On Mon, Sep 24, 2018 at 03:40:57PM +0200, Tomasz Chmielewski wrote:
Turns out something changed the
esn't show the data too reliably.
It seems to rely on reading data from /sys/fs/cgroup/ - maybe there are
better ways to process it and come up with some meaningful data.
Tomasz Chmielewski
https://lxadm.com
works, but isn't the best if you want to keep
systems up.
* Stop the original container
* create the new container with the snapshot
* modify the IP of the new container
* start the original container
If it isn't possible, I'll continue on as I've been doing.
lxc file pull
what I can do?
It's similar with snapshots with a space in name - which used to work in
the past.
Unfortunately I also don't have a solution for that.
Tomasz Chmielewski
https://lxadm.com
On 2018-11-09 11:19, Stéphane Graber wrote:
On Fri, Nov 09, 2018 at 04:29:46AM +0900, Tomasz Chmielewski wrote:
LXD 3.6 from a snap on an up-to-date Ubuntu 18.04 server:
lxd   3.6   9510  stable  canonical✓  -
Suddenly, some (but not all) containers lost their /proc filesystem:
# ps
procdefaults
In the meantime, run "mount proc /proc -t proc"
#
I think I've seen something similar to this in the past.
Can it be attributed to some not-so-well automatic snap upgrades?
Tomasz Chmielewski
https://lxadm.com
80924132439.380 WARN commands -
commands.c:lxc_cmd_rsp_recv:130 - Connection reset by peer - Failed to
receive response for command "get_state"
# snap list
Name  Version  Rev   Tracking  Publisher   Notes
core  16-2.35  5328  stable    canonical✓  core
lxd
root root 0 Sep 21 06:05 images
drwx------ 1 root root 0 Sep 24 05:48 snapshots
This fixed it:
chmod 711 /data/lxd/containers/
I'm 99% sure we did not change the permissions on that directory...
Tomasz
On 2018-09-24 15:32, Tomasz Chmielewski wrote:
I'm not able to start any
On 2018-09-21 09:28, Stéphane Graber wrote:
On Fri, Sep 21, 2018 at 09:22:46AM +0200, Tomasz Chmielewski wrote:
On 2018-09-21 09:11, lxc-us...@licomonch.net wrote:
> maybe not what you are looking for, but could work as workaround for the
> moment:
> mv /snap/core/4917/bin/gzip /snap/core/4917/bin/anything
touch: cannot touch '/snap/core/4917/bin/anything': Read-only file system
Tomasz Chmielewski
https://lxadm.com
lxc publish $container --alias $container --compression "xz -T 0"
Are there any possible workarounds to use parallel compression for "lxc
publish"?
Tomasz Chmielewski
https://lxadm.com
FYI I've seen a similar phenomenon when launching new containers.
Sometimes, connectivity freezes for several seconds after that.
What usually "helps" is sending an arping to the gateway IP from an
affected container.
Tomasz
On 2018-09-17 06:02, toshinao wrote:
Hi.
I experienced occasion
default storage also for temporary files when
publishing the images (/var/snap/lxd/common/lxd/images/)?
Tomasz Chmielewski
https://lxadm.com
Name  Version    Rev   Tracking  Publisher  Notes
core  16-2.34.3  5145  stable    canonical  core
lxd   3.4        8393  stable    canonical  -
Full system restart helps, but it would be great to know if there is a
"better" fix.
Tomasz Chmielewski
On 2018-08-15 12:06, Christian Brauner wrote:
On Wed, Aug 15, 2018 at 11:49:40AM +, Tomasz Chmielewski wrote:
# lxc list
cannot perform readlinkat() on the mount namespace file descriptor of
the
init process: Permission denied
Where is this error coming from? It's not from LX{C,D}
64 GNU/Linux
# snap list
Name  Version    Rev   Tracking  Publisher  Notes
core  16-2.34.3  5145  stable    canonical  core
lxd   3.3        8011  stable    canonical  -
# cat /etc/issue
Ubuntu 18.04.1 LTS \n \l
Expected?
Tomasz Chmielewski
https://lxadm.com
to the source LXD: Get http://unix.socket/1.0:
dial unix /var/lib/lxd/unix.socket: connect: no such file or directory
# /snap/bin/lxc list
+--+---+--+--+--+---+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--+---+--+--+--+---+
So I did the following:
- mv /var/snap/lxd/common/lxd /var/snap/lxd/common/lxd.orig
- modified /etc/fstab to mount the previous /var/lib/lxd to
/var/snap/lxd/common/lxd
- rebooted
- removed lxd deb packages
And it now started properly.
Thanks for helping!
Tomasz Chmielewski
https://lxadm.com
On 2018-08-08 21:06, Tomasz Chmielewski wrote:
I've tried to migrate from deb to snap on Ubuntu 18.04.
Unfortunately, lxd.migrate failed with "error: LXD still not running
after 5 minutes":
(...)
Not sure how to recover now? The containers seem intact in
/var/lib/lxd/
server
error: Unable to connect to the source LXD: Get http://unix.socket/1.0:
dial unix /var/lib/lxd/unix.socket: connect: no such file or directory
root@b1 ~ # lxc list
Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket:
connect: no such file or directory
Not sure how to recover now?
u1~16.04.1
amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  3.0.1-0ubuntu1~16.04.1
amd64  Container hypervisor based on LXC - client
# cat /etc/issue
Ubuntu 16.04.4 LTS \n \l
Tomasz Chmielewski
https://lxadm.com
urs and
get back to you.
Cheers
Tomasz Chmielewski writes:
On 2018-05-06 18:02, Tomasz Chmielewski wrote:
Please would send us tar with the content
/var/snap/lxd/common/lxd/database?
(or /var/snap/lxd/common/lxd/raft/, depending on which version of the
snap you use).
I believe this particular crash has been solved in our master
branches,
but we'll look at the data
you send to confirm that, and possibly post a workaround.
I've sent it to your free.*@canonical* address, let me know if you need
more info.
Tomasz Chmielewski
https://lxadm.com
On 2018-05-06 17:07, Tomasz Chmielewski wrote:
I have a Ubuntu 16.04 server with LXD 3.0 installed from snap.
I've filled the disk to 100% to get "No space left on device".
(...)
The same error shows up after freeing space and restarting the server.
How to debug this?
=2018-05-06T08:00:36+
lvl=info msg="Initializing global database" t=2018-05-06T08:00:36+
# ps aux|grep lxd
root 3336 0.0 0.0 4504 1656 ?Ss 10:00 0:00
/bin/sh /snap/lxd/6954/commands/daemon.start
root 3586 0.0
as
managing its own containers and own settings
Though I didn't investigate later how to move the containers from the
deb setup to snap setup (I could probably import/export, but it would
take a while on production servers).
Tomasz Chmielewski
https://lxadm.com
On 2018-05-05 04:55, Stev
This one was producing broken /etc/netplan/10-lxc.yaml:
| | 87b5c0fec8ff | no | Ubuntu bionic amd64 (20180502_09:49) | x86_64 | 118.15MB | May 3, 2018 at 2:39am (UTC) |
Tomasz Chmielewski
https://lxadm.com
network:
  ethernets:
    eth0: {dhcp4: true}
  version: 2
Please note that the broken one has no indentation (two spaces) before
"version: 2"; this is the only thing that differs, and it breaks DHCPv4.
What's responsible for this?
Tomasz Chmielewski
On 2018-05-03 11:58, Mark Constable wrote:
On 5/3/18 12:42 PM, Tomasz Chmielewski wrote:
> Today or yesterday, bionic image launched in LXD is not getting an IPv4
> address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it
doesn
On 2018-05-03 11:42, Tomasz Chmielewski wrote:
I was able to reproduce it on two different LXD servers.
This used to work a few days ago.
Also - xenial containers are getting IPv4 address just fine.
Here is the output of "systemctl status systemd-networkd" on a bionic
containe
I was able to reproduce it on two different LXD servers.
This used to work a few days ago.
Did anything change in bionic images recently?
Tomasz Chmielewski
https://lxadm.com
with snapshotting
functionality, you can snapshot, and copy the snapshot.
Tomasz Chmielewski
https://lxadm.com
On 2018-04-12 06:25, Stéphane Graber wrote:
The LXC, LXD and LXCFS teams are proud to announce the release of the
3.0 version of all three projects.
Great news, great features!
What's the best way to upgrade from a 2.xx deb to a 3.xx snap?
Tomasz Chmielewski
https://lxadm.com
(assuming the container needs a public IP)
- one to a NIC with internal network only
If the container doesn't need a public IP, then one NIC attached to the
internal network should be enough.
Tomasz Chmielewski
https://lxadm.com
deb
package, but is not when installed from a snap package.
Is it a bug or a feature?
Tomasz Chmielewski
https://lxadm.com
I'm also seeing it complaining with no raw.lxc keys.
Tomasz Chmielewski
https://lxadm.com
On 2018-03-24 23:28, MonkZ wrote:
Nope - so what are the other 0,01%? ;)
architecture: x86_64
config:
image.architecture: amd64
image.description: ubuntu 17.10 amd64 (release) (201
incorrectly as 0.0?
Tomasz Chmielewski
https://lxadm.com
On 2018-02-17 02:20, Stepan Santalov wrote:
Hello!
Is there a way to log to host's machine syslog by applications,
running in containers?
I've googled but with no luck.
rsyslog over TCP/UDP?
Tomasz Chmielewski
https://lxadm.com
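As a sketch of the forwarding side (the host IP 10.0.0.1 and port 514 are placeholders), a container's rsyslog drop-in could look like this, with a matching imudp/imtcp listener enabled in the host's rsyslog config:

```
# /etc/rsyslog.d/50-forward.conf (inside the container)
*.* @10.0.0.1:514      # single @ = forward over UDP
#*.* @@10.0.0.1:514    # double @@ = forward over TCP
```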
On 2017-12-27 01:17, Tomasz Chmielewski wrote:
Trying to launch a new container, but it fails:
# lxc launch images:ubuntu/xenial/amd64 testcontainer
Creating testcontainer
error: Failed to fetch
https://images.linuxcontainers.org:8443/1.0/images/ubuntu%2Fxenial%2Famd64:
404 Not Found
The
What's interesting, the above command works on some other systems.
ii  lxd-client  2.21-0ubuntu2~16.04.1~ppa1
amd64  Container hypervisor based on LXC - client
Tomasz Chmielewski
https://lxadm.com
lxc_monitor -
monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send
message: No such file or directory.
Not sure how to debug this.
It kills our server deployment / automation.
Tomasz Chmielewski
https://lxadm.com
FILES
$LSFILES
How do I use the variables / wildcards with lxc exec? Say, I want to
remove all /tmp/somefile* in the container.
Tomasz Chmielewski
https://lxadm.com
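lxc exec runs its argument directly, without a shell, so variables and globs in the arguments are never expanded inside the container; wrapping the command in `sh -c` makes the container's shell do the expansion (the container name tomasztest is a placeholder). The same principle can be demonstrated locally:

```shell
# With lxc, the idea would be (placeholder container name):
#   lxc exec tomasztest -- sh -c 'rm -f /tmp/somefile*'
# Locally: the glob is expanded by the sh -c shell, not by exec()
mkdir -p /tmp/globdemo
touch /tmp/globdemo/somefile1 /tmp/globdemo/somefile2 /tmp/globdemo/keep
sh -c 'rm -f /tmp/globdemo/somefile*'
ls /tmp/globdemo   # → keep
```

Single quotes matter: they stop the *host* shell from expanding the glob before it reaches the container.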
/lxc/config/common.conf.d/
total 4
-rw-r--r-- 1 root root 103 Jul 5 22:24 00-lxcfs.conf
# cat /usr/share/lxc/config/common.conf.d/00-lxcfs.conf
lxc.hook.mount = /usr/share/lxcfs/lxc.mount.hook
lxc.hook.post-stop = /usr/share/lxcfs/lxc.reboot.hook
# dpkg -l|grep lxd
ii lxd 2.18-0ubuntu3
ostid":10,"Nsid":0,"Maprange":65536}]'
volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
host# dpkg -l|grep lxd
ii lxd 2.18-0ubuntu3
On 2017-10-02 03:25, Mike Wright wrote:
On 10/01/2017 10:59 AM, Tomasz Chmielewski wrote:
I would like to have several networks on the same host - so I've
created them with:
# lxc network create br-testing
# lxc network create br-staging
Then edited to match:
# lxc network show br-st
nect to br-testing
containers, and the other way around. Both networks should be able to
connect to hosts in the internet.
Is there any easy switch for that? So far, one thing which works is
writing my own iptables rules, but that gets messy with more networks.
Tomasz
On 2017-09-27 22:03, Stéphane Graber wrote:
On Wed, Sep 27, 2017 at 09:48:39PM +0900, Tomasz Chmielewski wrote:
# lxc exec some-container /bin/bash
The configuration file contains legacy configuration keys.
Please update your configuration file!
Is there a way to find out which ones are
# lxc exec some-container /bin/bash
The configuration file contains legacy configuration keys.
Please update your configuration file!
Is there a way to find out which ones are legacy without pasting
the whole config on the mailing list?
Tomasz Chmielewski
https://lxadm.com
I think fan is single server only and / or won't cross different networks.
You may also take a look at https://www.tinc-vpn.org/
Tomasz
https://lxadm.com
On Thursday, August 03, 2017 20:51 JST, Félix Archambault
wrote:
> Hi Amblard,
>
> I have never used it, but this may be worth taking a
# ifconfig eth0:dev 1.2.3.4
# ifconfig eth0:prod 2.3.4.5
This will also work:
# ip addr add 10.1.2.3 dev eth0 label eth0:dev
# ip addr add 10.2.3.4 dev eth0 label eth0:prod
This one also works:
# brctl addbr prod
# brctl addbr dev
# brctl show
bridge name bridge id STP enabled interfaces
dev 8000.00
On Tuesday, August 01, 2017 18:04 JST, Sjoerd wrote:
>
>
> On 30-07-17 17:15, Tomasz Chmielewski wrote:
> > Bug or a feature?
> >
> > # lxc network create dev
> > error: Failed to run: ip link add dev type bridge: Error: either "dev"
Bug or a feature?
# lxc network create dev
error: Failed to run: ip link add dev type bridge: Error: either "dev" is
duplicate, or "bridge" is a garbage.
# lxc network create devel
Network devel created
--
Tomasz Chmielewski
e by a few GBs from
each OOM to OOM, until it stopped happening.
Tomasz Chmielewski
https://lxadm.com
On Saturday, July 15, 2017 18:36 JST, Saint Michael wrote:
> I have a lot of memory management issues using pure LXC. In my case, my box
> has only one container. I use LXC to be abl
4:21.01927786Z\",\n\t\t\"status\":
\"Running\",\n\t\t\"status_code\": 103,\n\t\t\"resources\":
{\n\t\t\t\"containers\":
[\n\t\t\t\t\"/1.0/containers/vpn-hz1\"\n\t\t\t]\n\t\t},\n\t\t\"metadata\":
{\n\t\t\t\"fds\":
On Thursday, July 13, 2017 00:35 JST, "Tomasz Chmielewski"
wrote:
> On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski"
> wrote:
>
> > It only fails with "error: not found" on the first, second or third "lxc
> config" line.
On Wednesday, July 12, 2017 20:52 JST, "Tomasz Chmielewski"
wrote:
> It only fails with "error: not found" on the first, second or third "lxc
> config" line.
>
> It started to fail in the last 2 weeks I think (lxd updates?) - before, it
> was r
On Wednesday, July 12, 2017 20:33 JST, "Tomasz Chmielewski" wrote:
> In the last days, lxc commands are failing randomly.
>
> Example (used in a script):
>
> # lxc config set TDv2-z-testing-a19ea62218-2017-07-12-11-23-03 raw.lxc
> lxc.aa_allow_incomplete=
untu6~ubuntu16.04.1~ppa1  amd64
Container hypervisor based on LXC - client
Not sure how I can debug this.
--
Tomasz Chmielewski
https://lxadm.com
s can be modified.
But perhaps there is some "recommended" way?
--
Tomasz Chmielewski
https://lxadm.com
orrect.
OK, I can see we can restore "2.14 behaviour" with an asterisk, i.e.:
lxc file push -r /tmp/testdir/* container/tmp
Which makes "lxc file push -r" in 2.15 behave similarly to cp.
Before, 2.14 behaved similarly to rsync.
So can we assume, going forward, the behaviour w
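The cp-versus-rsync distinction can be reproduced locally with cp itself (a local sketch; no lxc involved, paths are throwaway placeholders):

```shell
mkdir -p /tmp/pushdemo/src /tmp/pushdemo/cp-style /tmp/pushdemo/glob-style
touch /tmp/pushdemo/src/file1 /tmp/pushdemo/src/file2
# cp semantics (what 2.15 "lxc file push -r" does): the source directory
# itself is created inside the target
cp -r /tmp/pushdemo/src /tmp/pushdemo/cp-style/
# the glob workaround (rsync-like result): only the contents land in the target
cp -r /tmp/pushdemo/src/* /tmp/pushdemo/glob-style/
ls /tmp/pushdemo/cp-style    # shows: src
ls /tmp/pushdemo/glob-style  # shows: file1 file2
```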
dir/file1 /tmp/testdir/file2
# lxc file push -r /tmp/testdir/ testvm2/tmp # < note the trailing
slash after /tmp/testdir/
# lxc exec testvm2 ls /tmp
testdir
# lxc exec testvm2 ls /tmp/testdir
file1 file2
This breaks many scripts!
--
Tomasz Chmielewski
https://lxa
0 33
What's the correct syntax to set it?
Tomasz Chmielewski
https://lxadm.com