Re: [lxc-users] Snap 2.20 - Default Text Editor
sudo update-alternatives --config editor

http://vim.wikia.com/wiki/Set_Vim_as_your_default_editor_for_Unix

> On Nov 21, 2017, at 7:49 PM, Lai Wei-Hwa wrote:
>
> Thanks, but that's the problem, it's still opening in VI
>
> Thanks!
> Lai
>
> - Original Message -
> From: "Björn Fischer"
> To: "lxc-users"
> Sent: Tuesday, November 21, 2017 7:46:58 PM
> Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor
>
> Hi,
>
>> $ lxc profile edit default
>> Opens in VI even though my editor is nano (save the flaming)
>>
>> How can we edit the default editor?
>
> $ EDITOR=nano
> $ export EDITOR
>
> Cheers,
>
> Björn
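In case it helps anyone searching the archives later: a minimal sketch combining both suggestions from this thread. It assumes a Debian/Ubuntu host and a bash login shell; setting EDITOR only in the current shell is lost at logout, which is the usual reason it "doesn't take", while update-alternatives changes the system-wide default instead.

    $ echo 'export EDITOR=nano' >> ~/.profile    # per-user, persists across logins
    $ echo 'export VISUAL=nano' >> ~/.profile
    $ . ~/.profile                               # pick it up in the current shell
    $ sudo update-alternatives --config editor   # or switch the system-wide default
    $ echo "$EDITOR"
    $ lxc profile edit default                   # should now open in nano

If the lxc/lxd commands are being run through sudo, keep in mind that sudo may reset the environment, in which case an EDITOR set in your own shell would not be seen.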
Re: [lxc-users] Snap 2.20 - Default Text Editor
Thanks, but that's the problem, it's still opening in VI

Thanks!
Lai

- Original Message -
From: "Björn Fischer"
To: "lxc-users"
Sent: Tuesday, November 21, 2017 7:46:58 PM
Subject: Re: [lxc-users] Snap 2.20 - Default Text Editor

Hi,

> $ lxc profile edit default
> Opens in VI even though my editor is nano (save the flaming)
>
> How can we edit the default editor?

$ EDITOR=nano
$ export EDITOR

Cheers,

Björn
Re: [lxc-users] Snap 2.20 - Default Text Editor
Hi,

> $ lxc profile edit default
> Opens in VI even though my editor is nano (save the flaming)
>
> How can we edit the default editor?

$ EDITOR=nano
$ export EDITOR

Cheers,

Björn
[lxc-users] Snap 2.20 - Default Text Editor
$ lxc profile edit default

Opens in VI even though my editor is nano (save the flaming)

How can we edit the default editor?

Best Regards,
Lai Wei-Hwa
Re: [lxc-users] Bonding inside container? Or any other ideas?
I'm not sure I follow. I have multiple servers running bond mode 4 (for LACP/802.3ad). I then created a bridge, br0, which becomes the main (only) interface. I'm using flat networking with no NAT between containers, and I edited the profiles to use br0. Everything works for me. I can't speak to the other bond modes, though.

Thanks!
Lai

- Original Message -
From: "Andrey Repin"
To: "lxc-users"
Sent: Tuesday, November 21, 2017 6:38:55 PM
Subject: [lxc-users] Bonding inside container? Or any other ideas?

Greetings, All!

Some time ago I managed to install a second network card into one of my servers, and I have been experimenting with bonding on the host. The setup is: a host with two cards in one bond0 interface, and a number of containers sitting as macvlans on top of bond0.

Some success was achieved with bond mode 5 (balance-tlb) - approx. 2:1 TX counts with five clients, but all upload is weighted onto one network card. An attempt to change the mode to balance-alb (mode 6) immediately broke the loading of roaming Windows profiles; the issue disappears as soon as I switch back to mode 5. I suppose this happens because the bonding balancer creates havoc between the macvlan MAC addresses and the bond's own, which the network can't easily resolve, or the Windows clients get picky and refuse to load their profiles from a randomly changing source.

While I could turn back to the internal LXC bridge and route requests between it and bond0 on the host to dissolve the MAC issue, I'd like to see whether a more direct solution can be found, such as creating a bond inside the container. If not, is there any other way to use bonding and maintain broadcast visibility between the containers and the rest of the network?

--
With best regards,
Andrey Repin
Wednesday, November 22, 2017 02:23:22

Sorry for my terrible english...
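For anyone wanting to reproduce this, below is a rough sketch of the setup described above: an 802.3ad bond with a bridge on top, and containers attached to the bridge instead of macvlan. The interface names (enp3s0/enp4s0), the ifupdown syntax (Ubuntu 16.04 style), and the eth0 device name in the default profile are assumptions; adapt them to your system, and note that mode 4 needs LACP configured on the switch.

    # /etc/network/interfaces fragment
    auto bond0
    iface bond0 inet manual
        bond-slaves enp3s0 enp4s0
        bond-mode 802.3ad        # bond mode 4
        bond-miimon 100

    auto br0
    iface br0 inet dhcp
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

    # then point the LXD default profile at the bridge:
    $ lxc profile device set default eth0 nictype bridged
    $ lxc profile device set default eth0 parent br0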
[lxc-users] Bonding inside container? Or any other ideas?
Greetings, All!

Some time ago I managed to install a second network card into one of my servers, and I have been experimenting with bonding on the host. The setup is: a host with two cards in one bond0 interface, and a number of containers sitting as macvlans on top of bond0.

Some success was achieved with bond mode 5 (balance-tlb) - approx. 2:1 TX counts with five clients, but all upload is weighted onto one network card. An attempt to change the mode to balance-alb (mode 6) immediately broke the loading of roaming Windows profiles; the issue disappears as soon as I switch back to mode 5. I suppose this happens because the bonding balancer creates havoc between the macvlan MAC addresses and the bond's own, which the network can't easily resolve, or the Windows clients get picky and refuse to load their profiles from a randomly changing source.

While I could turn back to the internal LXC bridge and route requests between it and bond0 on the host to dissolve the MAC issue, I'd like to see whether a more direct solution can be found, such as creating a bond inside the container. If not, is there any other way to use bonding and maintain broadcast visibility between the containers and the rest of the network?

--
With best regards,
Andrey Repin
Wednesday, November 22, 2017 02:23:22

Sorry for my terrible english...
Re: [lxc-users] Using a mounted drive to handle storage pool
That seems to work! I still get the message:

error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

But if I run it again, it inits. Thanks, Ron.

Thanks!
Lai

- Original Message -
From: "Ron Kelley"
To: "lxc-users"
Sent: Tuesday, November 21, 2017 6:26:30 PM
Subject: Re: [lxc-users] Using a mounted drive to handle storage pool

Perhaps you should use a "bind" mount instead of a symbolic link here?

mount -o bind /storage/lxd /var/snap/lxd

You probably also need to make sure it survives a reboot.

-Ron

> On Nov 21, 2017, at 5:47 PM, Lai Wei-Hwa wrote:
>
> In the following scenario, I:
>
> $ sudo mount /dev/sdb /storage
>
> Then, when I do:
>
> $ sudo ln -s /storage/lxd lxd
> $ snap install lxd
> $ sudo lxd init
> error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix
> /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory
>
> Thanks!
> Lai
>
> From: "Lai Wei-Hwa"
> To: "lxc-users"
> Sent: Tuesday, November 21, 2017 1:37:18 PM
> Subject: [lxc-users] Using a mounted drive to handle storage pool
>
> I've currently migrated LXD from the Canonical PPA to Snap.
>
> I have 2 RAIDs:
> • /dev/sda - ext4 (this is the root device)
> • /dev/sdb - btrfs (where I want my pool to be, with the containers and snapshots)
>
> How/where should I mount my btrfs device? What's the best practice for having the pool on a non-root device?
>
> There are a few approaches I can see:
> • mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using the PPA) ... then: lxd init
> • mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... then: lxd init
> • lxd init and choose the existing block device /dev/sdb
>
> What's the best practice and why?
>
> Also, I'd love it if LXD could make this a little easier and let users more easily define where the storage pool will be located.
>
> Best Regards,
>
> Lai
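To make Ron's bind mount survive a reboot, an /etc/fstab entry along these lines should work - a sketch, assuming the /storage mount point and snap paths used in this thread:

    # /etc/fstab
    /dev/sdb        /storage        btrfs   defaults   0  2
    /storage/lxd    /var/snap/lxd   none    bind       0  0

followed by "sudo mount -a" to apply it without rebooting. The /storage line has to come before the bind line so the btrfs filesystem is mounted first.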
Re: [lxc-users] Using a mounted drive to handle storage pool
Perhaps you should use a "bind" mount instead of a symbolic link here?

mount -o bind /storage/lxd /var/snap/lxd

You probably also need to make sure it survives a reboot.

-Ron

> On Nov 21, 2017, at 5:47 PM, Lai Wei-Hwa wrote:
>
> In the following scenario, I:
>
> $ sudo mount /dev/sdb /storage
>
> Then, when I do:
>
> $ sudo ln -s /storage/lxd lxd
> $ snap install lxd
> $ sudo lxd init
> error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix
> /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory
>
> Thanks!
> Lai
>
> From: "Lai Wei-Hwa"
> To: "lxc-users"
> Sent: Tuesday, November 21, 2017 1:37:18 PM
> Subject: [lxc-users] Using a mounted drive to handle storage pool
>
> I've currently migrated LXD from the Canonical PPA to Snap.
>
> I have 2 RAIDs:
> • /dev/sda - ext4 (this is the root device)
> • /dev/sdb - btrfs (where I want my pool to be, with the containers and snapshots)
>
> How/where should I mount my btrfs device? What's the best practice for having the pool on a non-root device?
>
> There are a few approaches I can see:
> • mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using the PPA) ... then: lxd init
> • mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... then: lxd init
> • lxd init and choose the existing block device /dev/sdb
>
> What's the best practice and why?
>
> Also, I'd love it if LXD could make this a little easier and let users more easily define where the storage pool will be located.
>
> Best Regards,
>
> Lai
Re: [lxc-users] Using a mounted drive to handle storage pool
On Wed, Nov 22, 2017 at 1:37 AM, Lai Wei-Hwa wrote:

> I've currently migrated LXD from the Canonical PPA to Snap.
>
> I have 2 RAIDs:
>
> - /dev/sda - ext4 (this is the root device)
> - /dev/sdb - btrfs (where I want my pool to be, with the containers and snapshots)
>
> How/where should I mount my btrfs device? What's the best practice for having the pool on a non-root device?

You can simply use 'lxc storage create' to create a new storage pool (needs a newer LXD version, not 2.0.x):
https://github.com/lxc/lxd/blob/master/doc/storage.md#btrfs

> There are a few approaches I can see
>
> 1. mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using the PPA) ... then: lxd init

snap complicates that. I'm not sure which directories are available for snap. It MIGHT work if you specify the block device directly and let lxd choose the best mount point.

> Also, I'd love it if LXD could make this a little easier and let users more easily define where the storage pool will be located.

That's what 'lxc storage create' does.

--
Fajar
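Concretely, with a recent LXD the whole thing can be done from the client - a sketch, assuming /dev/sdb can be (re)formatted by LXD and that new containers should land on the new pool by default; the pool name pool1 is just a placeholder:

    $ lxc storage create pool1 btrfs source=/dev/sdb
    $ lxc storage list
    $ lxc profile device add default root disk path=/ pool=pool1
    $ lxc launch ubuntu:16.04 c1     # c1's root filesystem now lives on pool1

If the default profile already has a root disk device, use 'lxc profile device set default root pool pool1' instead of adding a new one.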
Re: [lxc-users] Using a mounted drive to handle storage pool
In the following scenario, I:

$ sudo mount /dev/sdb /storage

Then, when I do:

$ sudo ln -s /storage/lxd lxd
$ snap install lxd
$ sudo lxd init
error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

Thanks!
Lai

From: "Lai Wei-Hwa"
To: "lxc-users"
Sent: Tuesday, November 21, 2017 1:37:18 PM
Subject: [lxc-users] Using a mounted drive to handle storage pool

I've currently migrated LXD from the Canonical PPA to Snap.

I have 2 RAIDs:
* /dev/sda - ext4 (this is the root device)
* /dev/sdb - btrfs (where I want my pool to be, with the containers and snapshots)

How/where should I mount my btrfs device? What's the best practice for having the pool on a non-root device?

There are a few approaches I can see:
1. mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using the PPA) ... then: lxd init
2. mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... then: lxd init
3. lxd init and choose the existing block device /dev/sdb

What's the best practice and why?

Also, I'd love it if LXD could make this a little easier and let users more easily define where the storage pool will be located.

Best Regards,

Lai
[lxc-users] Using a mounted drive to handle storage pool
I've currently migrated LXD from the Canonical PPA to Snap.

I have 2 RAIDs:
* /dev/sda - ext4 (this is the root device)
* /dev/sdb - btrfs (where I want my pool to be, with the containers and snapshots)

How/where should I mount my btrfs device? What's the best practice for having the pool on a non-root device?

There are a few approaches I can see:
1. mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using the PPA) ... then: lxd init
2. mount /dev/sdb to /storage and: ln -s /storage/lxd /var/snap/lxd ... then: lxd init
3. lxd init and choose the existing block device /dev/sdb

What's the best practice and why?

Also, I'd love it if LXD could make this a little easier and let users more easily define where the storage pool will be located.

Best Regards,

Lai
[lxc-users] Working configuration for live migration
Dear all,

could you please suggest the exact versions of LXD, CRIU and the Linux kernel on top of which you managed to successfully migrate a container statefully? Any other specific configuration?

We are working with Linux kernel 4.13.0-17, LXD 2.20 and CRIU 3.4, and it always gives the same error:

error: migration failed 1

We also tried with Linux kernel 4.4.0-97, LXD 2.0.2 and CRIU 2.0, and migration seems to work, but it is not stateful: a process running in the moving container is killed when landing on the target machine (it seems like the container is actually stopped and started again without state restore).

Thank you,
Francesco

--
-------------------------------------------------
Dr. Francesco Longo, PhD
Assistant Professor @ Department of Engineering, University of Messina
address: Contrada di Dio, S. Agata - 98166, Messina, Italy
email: flo...@unime.it
web: mdslab.unime.it/flongo
phone: +39 090 3977335
fax: +39 090 3977471

Co-founder @ SmartMe.io s.r.l.
address: Via Osservatorio, 1 - 98121, Messina, Italy
email: france...@smartme.io
web: smartme.io
VAT number: 03457040834
-------------------------------------------------
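Not an answer on exact versions, but for completeness, the steps usually needed before stateful operation works with the snap package are sketched below; the criu.enable snap option and the commands are given to the best of my knowledge, so please double-check them against the LXD documentation for your version:

    # on both hosts (the snap ships CRIU but it is disabled by default)
    $ sudo snap set lxd criu.enable=true
    $ sudo systemctl reload snap.lxd.daemon

    # sanity check that state save/restore works locally at all
    $ lxc stop c1 --stateful
    $ lxc start c1

    # then try the live migration between the two remotes
    $ lxc move c1 remote2:c1

If "lxc stop --stateful" already fails, the problem is a local CRIU dump/restore issue rather than the migration itself, which narrows things down.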
[lxc-users] lxd.migrate doesn't work with Ubuntu based distro
$ snap install lxd
2017-11-21T10:36:27-03:00 INFO Waiting for restart...
lxd 2.20 from 'canonical' installed

$ lxd.migrate
error: This tool must be run as root.

$ sudo lxd.migrate
error: Data migration is only supported on Ubuntu at this time.

$ lsb_release -a
No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.11
Release:        16.04
Codename:       xenial

KDE neon *is* Ubuntu LTS. Is there any way to force migration (at my own risk)?

Thanks
Norberto
Re: [lxc-users] TTY issue
On 21/11/17 15:07, Saint Michael wrote:
> Thanks for the solution. It works indeed. Just out of curiosity, how did you find this out? I googled it far and wide and there was nothing available.

The autodev part I needed in order to make qemu networking work in a container via the /dev/net/tun device, which is missing by default. I did not invent it; I found it somewhere on Google, probably here:
https://serverfault.com/questions/429461/no-tun-device-in-lxc-guest-for-openvpn

It just executes the specified command(s) while /dev is being populated, nothing magical; you can execute the same command inside the container later if you can afford to defer your mounts or whatever uses the created device.

/dev/fuse is more interesting: I was trying to make snapd work in an LXC container, still without much success. You can read more about this effort in the discussion at the bottom of
https://bugs.launchpad.net/snappy/+bug/1628289

--
With Best Regards,
Marat Khalili
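For readers finding this thread later, the pattern being discussed looks roughly like the following fragment in the container's config file - a sketch assembled from the commands quoted in the thread; the cgroup allow lines are an assumption that may or may not be required depending on your default device policy:

    lxc.autodev = 1

    # create the device nodes while /dev is being populated
    lxc.hook.autodev = sh -c 'mkdir -p ${LXC_ROOTFS_MOUNT}/dev/net; mknod ${LXC_ROOTFS_MOUNT}/dev/net/tun c 10 200'
    lxc.hook.autodev = sh -c 'mknod ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229'

    # allow the container to use the devices (possibly needed)
    lxc.cgroup.devices.allow = c 10:200 rwm
    lxc.cgroup.devices.allow = c 10:229 rwm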
Re: [lxc-users] TTY issue
Thanks for the solution. It works indeed. Just out of curiosity, how did you find this out? I googled it far and wide and there was nothing available.

On Sat, Nov 18, 2017 at 9:13 AM, Marat Khalili wrote:

> On 18/11/17 17:10, Saint Michael wrote:
>> Yes, of course. It works but only if autodev=0
>> That is the issue.
>
> Even as:
>
> lxc.hook.autodev = sh -c 'mknod ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229'
>
> ?
>
> --
>
> With Best Regards,
> Marat Khalili