Re: [lxc-users] lxc-users Digest, Vol 233, Issue 11

2018-06-09 Thread Michel Jansens
Hi Oliver,

I think I don’t completely understand what you mean, but I can say that if you use 
ZFS storage, lxc init or lxc copy creates ZFS clones (meaning the data blocks are 
shared, a form of deduplication by design).
If you want new blocks to be deduplicated as well, you must activate deduplication, 
but this comes at a cost: the system has to keep a table of hashes and, for each 
write, check in that table whether a block with the same hash already exists. If 
you don’t have enough memory, this lookup happens on disk, meaning one write can 
generate many reads…
You could also use extra data volumes and clone them with the “zfs clone” command 
(for dev/test/prod environments, for example).
 
From all I’ve read, if you want to activate deduplication, you’d better have lots 
of RAM.
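
To make that concrete, here is a minimal sketch of both approaches; the pool and 
dataset names (tank, tank/data) are only illustrative:

# clone an extra data volume: blocks are shared, no dedup table needed
zfs snapshot tank/data@prod
zfs clone tank/data@prod tank/data-test

# block-level deduplication, enabled per dataset (needs plenty of RAM for the hash table)
zfs set dedup=on tank/data
zpool status -D tank   # prints dedup table statistics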

Cheers,

Michel 


> On 9 Jun 2018, at 14:56, Thouraya TH  wrote:
> 
> Thank you for answer.
> Example
> lxc-create -n c1 -o ubuntu
> I don't mean the ubuntu image
> 
> I mean these files:
> 
> Example:
> 
> root@graphene-14:/var/lib/lxc/graphene-14-worker1# ls
> 
> config  fstab  rootfs
> 
> 
> The contents of these folders (config, fstab, rootfs) are the same for the three 
> containers on the same host?
> So it's up to me to use a deduplication solution? Is that it?
> Best regards.
> 
> 
> 2018-06-09 13:00 GMT+01:00:
> Send lxc-users mailing list submissions to
> lxc-users@lists.linuxcontainers.org 
> 
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.linuxcontainers.org/listinfo/lxc-users 
> 
> or, via email, send a message with subject or body 'help' to
> lxc-users-requ...@lists.linuxcontainers.org 
> 
> 
> You can reach the person managing the list at
> lxc-users-ow...@lists.linuxcontainers.org 
> 
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of lxc-users digest..."
> 
> Today's Topics:
> 
>1. Re: Question : duplication (Stéphane Graber)
>2. Howto see actually used IPv6 (*not* link local) with lxc list
>   (v3.0.0)? (Oliver Rath)
> 
> 
> -- Forwarded message --
> From: "Stéphane Graber" <stgra...@ubuntu.com>
> To: LXC users mailing-list
> Cc: 
> Bcc: 
> Date: Fri, 8 Jun 2018 11:30:07 -0400
> Subject: Re: [lxc-users] Question : duplication
> On Fri, Jun 08, 2018 at 12:13:34PM +0100, Thouraya TH wrote:
> > Hi,
> > 
> > In my cluster, I have 3 containers per host and I have 10 hosts.
> > All containers are ubuntu containers.
> > My question: are the rootfs of these containers the same?
> > Do I have duplicated files in the different containers' directories?
> > 
> > 
> > 
> > Thank you so much for answers.
> > Best regards.
> 
> It depends on your storage backend: all backends except the directory one
> use copy-on-write for the deltas between the image used for the container
> and its current state.
> 
> If you're using ZFS and have quite a bit of spare RAM, you can also turn on
> deduplication on your zpool, which will then deduplicate writes as they
> happen, possibly saving you a lot of disk space.
> 
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com 
> 
> 
> -- Forwarded message --
> From: Oliver Rath <r...@mglug.de>
> To: LXC users mailing-list
> Cc: 
> Bcc: 
> Date: Sat, 9 Jun 2018 13:47:16 +0200
> Subject: [lxc-users] Howto see actually used IPv6 (*not* link local) with lxc 
> list (v3.0.0)?
> Hi list,
> 
> I'm using IPv6 in several lxc environments. Is there a possibility to see
> the actually assigned IPv6 addresses (coming from radvd) in the list command?
> I've tried
> 
> lxc list  --columns="n",user.net.0.ipv6.address
> 
> with no luck (the field stays empty), but
> 
> # lxc list  --columns="n",net.0.ipv6.address
> Error: Invalid config key 'net.0.ipv6.address' in 'net.0.ipv6.address'
> 
> gives an error. Unfortunately I didn't find any hint about what the key name
> is for getting this information.
> 
> What can I do?
> 
> Tfh!
> 
> Oliver
> 
> 
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Network instability with bridged nat and macvlan interfaces

2018-06-08 Thread Michel Jansens


Hi Guido,

Thanks for your reply

I’ve installed apache2 on port 8082, and it goes down at the same time as the 
haproxy ports 80 and 443. Only ssh keeps responding. Weird!

Michel




> On 8 Jun 2018, at 08:15, Jäkel, Guido  wrote:
> 
> Dear Michel,
> 
> did you already take a look at the other parts of the involved network 
> environment? Maybe you have an issue on layer two vs. three concerning the 
> MAC <-> IP correlation on the next upstream switch involved. You may check 
> the ARP tables.
> 
> And -- because you "lose" ports 80 and 443, but not 22 -- as a test I would 
> set up some other simple services (using a different product than the one you 
> use for the httpd).
> 
> Greetings
> 
> Guido
> 
>> -Original Message-
>> From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On 
>> Behalf Of Michel Jansens
>> Sent: Thursday, June 07, 2018 7:36 PM
>> To: LXC users mailing-list 
>> Subject: Re: [lxc-users] Network instability with bridged nat and macvlan 
>> interfaces
>> 
>> Hi Andrey,
>> Thank you for your answer.
>> I’ll try to avoid mixing macvlan with bridging/nat to test.
>> I’m currently building the equivalent on a second server, but with a bridge 
>> built on top of the vlan.
>> Somebody at Canonical also suggested it could be the physical switch playing 
>> bad with macvlan. We’re investigating.
>> I’ll keep you informed of the evolution.
>> 
>> Cheers,
>> 
>> Michel
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Network instability with bridged nat and macvlan interfaces

2018-06-07 Thread Michel Jansens
Hi Andrey,
Thank you for your answer.
I’ll try to avoid mixing macvlan with bridging/nat to test.
I’m currently building the equivalent on a second server, but with a bridge 
built on top of the vlan.
Somebody at Canonical also suggested it could be the physical switch not playing 
well with macvlan. We’re investigating.
I’ll keep you informed of the evolution.

Cheers,

Michel

> On 7 Jun 2018, at 17:34, Andrey Repin  wrote:
> 
> Greetings, Michel Jansens!
> 
>> I’m running on Ubuntu18.04 LXC 3.0.0.
> 
>> I’ve created 5 debian9 containers with default eth0 networking on NAT:
> 
>> # lxc network show lxdbr0
>> config:
>>  ipv4.address: 10.1.1.1/24
>>  ipv4.dhcp.ranges: 10.1.1.2-10.1.1.99
>>  ipv4.nat: "true"
>>  ipv6.address: fd42:6f79:c120:7701::1/64
>>  ipv6.nat: "true"
>> description: Natted network 0
>> name: lxdbr0
>> type: bridge
> 
>> One of the containers (frontal) has an additional interface configured with:
> 
>> # lxc network attach vlan7 frontal
>> # lxc config show kspreprodfrontal
>> …
>> devices:
>>  vlan7:
>>nictype: macvlan
>>parent: vlan7
>>type: nic
> 
>> vlan7 is a vlan with id 7, configured in /etc/netplan/01-netcfg.yaml 
>> ... 
>> vlans:
>>vlan7:
>>  id: 7
>>  link: enp1s0f0
> 
> I'm no expert, frankly, but it itching me to mix brctl and macvlan like that.
> 
>> I’ve changed the frontal host’s internal networking so that eth1 comes first
>> and the default route goes through eth1.
>> Everything works internally and externally… except that from time to time the
>> frontal starts refusing connections from the outside for a few seconds (up to 
>> 50).
>> It looks like a general networking problem because all ports suddenly stop
>> working (connection refused), while internally the frontal remains reachable.
>> I’m running haproxy on ports 80 and 443, but also tried running apache2 on
>> port 8082. All ports go down at the same time.
> 
>> I’ve now installed an Ubuntu (16.04) container and added the vlan7 network
>> the same way.
>> It worked fine…for about an hour and stopped working again, but for good.
>> What is weird is that ports 80 and 443 are refused but port 22 is working
>> (maybe that’s the host ssh?).
> 
>> Any idea?
> 
> Your explanation is not very clear in parts where you describe the failure.
> 
>> Thanks for any suggestion.
> 
> My first suggestion would be to rebuild your networking a little bit
> different.
> 
> 1. Create a dummy internal interface and bind your containers' macvlan bridges
>  to it. Bind an additional bridged macvlan on host to be able to reach into
>  the containers' network.
> 2. If your vlan7 is a dedicated network interface for your containers, pass it
>  as physical to the ingress container.
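
Purely as an illustration of suggestion 1 (the interface names and the address 
below are made up, nothing from the original setup):

# dummy interface acting as the common parent for the containers' macvlans
ip link add cbr0 type dummy
ip link set cbr0 up

# additional macvlan on the host, in bridge mode, so the host can reach the containers
ip link add host0 link cbr0 type macvlan mode bridge
ip addr add 10.2.2.1/24 dev host0
ip link set host0 up

# containers then attach with nictype=macvlan and parent=cbr0
lxc config device add frontal eth2 nic nictype=macvlan parent=cbr0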
> 
> 
> -- 
> With best regards,
> Andrey Repin
> Thursday, June 7, 2018 18:26:48
> 
> Sorry for my terrible english...
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Network instability with bridged nat and macvlan interfaces

2018-06-06 Thread Michel Jansens
Hi,

I’m running Ubuntu 18.04 with LXC 3.0.0.

I’ve created 5 debian9 containers with default eth0 networking on NAT:

# lxc network show lxdbr0
config:
  ipv4.address: 10.1.1.1/24
  ipv4.dhcp.ranges: 10.1.1.2-10.1.1.99
  ipv4.nat: "true"
  ipv6.address: fd42:6f79:c120:7701::1/64
  ipv6.nat: "true"
description: Natted network 0
name: lxdbr0
type: bridge

One of the containers (frontal) has an additional interface configured with:

# lxc network attach vlan7 frontal
# lxc config show kspreprodfrontal
…
devices:
  vlan7:
nictype: macvlan
parent: vlan7
type: nic

vlan7 is a vlan with id 7, configured in /etc/netplan/01-netcfg.yaml 
... 
vlans:
vlan7:
  id: 7
  link: enp1s0f0

I’ve changed the frontal host’s internal networking so that eth1 comes first and 
the default route goes through eth1.
Everything works internally and externally… except that from time to time the 
frontal starts refusing connections from the outside for a few seconds (up to 50).
It looks like a general networking problem because all ports suddenly stop working 
(connection refused), while internally the frontal remains reachable.
I’m running haproxy on ports 80 and 443, but also tried running apache2 on port 
8082. All ports go down at the same time.

I’ve now installed an Ubuntu (16.04) container and added the vlan7 network the 
same way.
It worked fine…for about an hour and stopped working again, but for good.
What is weird is that ports 80 and 443 are refused but port 22 is working (maybe 
that’s the host’s ssh?).


Any idea?

Thanks for any suggestion.

Cheers,

Michel

PS: Sorry for my previous post, where I replied to another message and apparently 
messed up another thread... 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Network instability with bridged nat and macvlan interfaces

2018-06-06 Thread Michel Jansens
Hi Andrey,

I don’t understand what you mean by hijacking unrelated threads. I just created a 
new mail with the subject “Network instability with bridged nat and macvlan 
interfaces”.
Or did I miss something?
Sorry if I did.

Cheers,

Michel


> On 6 Jun 2018, at 19:54, Andrey Repin  wrote:
> 
> Greetings, Michel Jansens!
> 
> Please don't hijack unrelated threads. If you want to post a new issue, post a
> new message.
> 
> 
> -- 
> With best regards,
> Andrey Repin
> Wednesday, June 6, 2018 20:54:31
> 
> Sorry for my terrible english...
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Network instability with bridged nat and macvlan interfaces

2018-06-06 Thread Michel Jansens
Hi again,

In the meantime, I’ve installed an Ubuntu (16.04) container and added the vlan7 
network the same way.
It worked fine… for about an hour, then it stopped working again, this time for good.
What is weird is that ports 80 and 443 are refused but port 22 is working 
(maybe that’s the host’s ssh?).

Michel

> On 6 Jun 2018, at 16:51, Michel Jansens  wrote:
> 
> Hi,
> 
> I’m running on Ubuntu18.04 LXC 3.0.0.
> 
> I’ve created 5 debian9 containers with default eth0 networking on NAT:
> 
> # lxc network show lxdbr0
> config:
>   ipv4.address: 10.1.1.1/24
>   ipv4.dhcp.ranges: 10.1.1.2-10.1.1.99
>   ipv4.nat: "true"
>   ipv6.address: fd42:6f79:c120:7701::1/64
>   ipv6.nat: "true"
> description: Natted network 0
> name: lxdbr0
> type: bridge
> 
> One of the containers (frontal) has an additional interface configured with:
> 
> # lxc network attach vlan7 frontal
> # lxc config show kspreprodfrontal
> …
> devices:
>   vlan7:
> nictype: macvlan
> parent: vlan7
> type: nic
> 
> vlan7 is a vlan with id 7, configured in /etc/netplan/01-netcfg.yaml 
> ... 
> vlans:
> vlan7:
>   id: 7
>   link: enp1s0f0
> 
> I’ve changed the frontal host’s internal networking so that eth1 comes first 
> and the default route goes through eth1.
> Everything works internally and externally… except that from time to time the 
> frontal starts refusing connections from the outside for a few seconds (up to 
> 50).
> It looks like a general networking problem because all ports suddenly stop 
> working (connection refused), while internally the frontal remains reachable.
> I’m running haproxy on ports 80 and 443, but also tried running apache2 on 
> port 8082. All ports go down at the same time.
> 
> 
> Any idea?
> 
> Thanks for any suggestion.
> 
> Cheers,
> 
> Michel
> 
> 
> 
> 
> 
> 
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Network instability with bridged nat and macvlan interfaces

2018-06-06 Thread Michel Jansens
Hi,

I’m running Ubuntu 18.04 with LXC 3.0.0.

I’ve created 5 debian9 containers with default eth0 networking on NAT:

# lxc network show lxdbr0
config:
  ipv4.address: 10.1.1.1/24
  ipv4.dhcp.ranges: 10.1.1.2-10.1.1.99
  ipv4.nat: "true"
  ipv6.address: fd42:6f79:c120:7701::1/64
  ipv6.nat: "true"
description: Natted network 0
name: lxdbr0
type: bridge

One of the containers (frontal) has an additional interface configured with:

# lxc network attach vlan7 frontal
# lxc config show kspreprodfrontal
…
devices:
  vlan7:
nictype: macvlan
parent: vlan7
type: nic

vlan7 is a vlan with id 7, configured in /etc/netplan/01-netcfg.yaml 
... 
vlans:
vlan7:
  id: 7
  link: enp1s0f0

I’ve changed the frontal host’s internal networking so that eth1 comes first and 
the default route goes through eth1.
Everything works internally and externally… except that from time to time the 
frontal starts refusing connections from the outside for a few seconds (up to 50).
It looks like a general networking problem because all ports suddenly stop working 
(connection refused), while internally the frontal remains reachable.
I’m running haproxy on ports 80 and 443, but also tried running apache2 on port 
8082. All ports go down at the same time.
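
For what it’s worth, a hedged sketch of how that ordering could look inside the 
frontal container (assuming it uses /etc/network/interfaces; the addresses are 
placeholders, not the real ones):

# eth1 (macvlan on vlan7) carries the default route
auto eth1
iface eth1 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1

# eth0 stays on the NATed lxdbr0 network
auto eth0
iface eth0 inet dhcp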


Any idea?

Thanks for any suggestion.

Cheers,

Michel








___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nat networking with fixed (dhcp ) IP addresses

2018-05-25 Thread Michel Jansens

> On 25 May 2018, at 12:25, Fajar A. Nugraha <l...@fajar.net> wrote:
> 
> On Fri, May 25, 2018 at 3:25 PM, Michel Jansens
> <michel.jans...@ulb.ac.be> wrote:
>> Thanks Fajar it works!
>> 
>> What I did:
>> 
>> #lets create a new profile
>> lxc profile  copy default nonet
>> 
>> #remove network from the profile
>> lxc profile  device remove nonet eth0
>> 
>> #create the ’testip' container
>> lxc init ubuntu:18.04 testip --profile nonet
>> 
>> #attach the network device to it, with IP address
>> lxc config  device add testip eth0 nic nictype=bridged parent=lxdbr0
>> host_name=testip ipv4.address=10.0.3.203
> 
> 
> One note from me, you could actually use the default profile and
> override eth0 directly in the config file.
> At least it works with "lxc config edit", didn't try with "lxc config device”.
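
A sketch of what that override could look like in the container’s config (the 
values below simply reuse the ones from this thread; not verified with lxc config 
edit on this exact version):

devices:
  eth0:
    type: nic
    nictype: bridged
    parent: lxdbr0
    host_name: testip
    ipv4.address: 10.0.3.203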

With lxc config device add, it looks like it does strange things: in “lxc 
network list-leases lxdbr0” I get 2 entries for the container, with the same 
MAC address and different IPs (one STATIC and one DYNAMIC), and the wrong one is 
listed in lxc list…

Michel


> 
> -- 
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD 3.0 macvlan networking

2018-05-05 Thread Michel Jansens
Well, on my system with the latest install of Ubuntu 18.04 and LXD 3.0, the host 
can’t reach a container in a macvlan setup; the container can’t connect to the 
host either.
On a bridged network, it works.
on a bridged network, it works.

Michel


> On 5 May 2018, at 12:30, Mark Constable  wrote:
> 
> On 5/5/18 5:43 PM, Janjaap Bos wrote:
>> To be able to ping a container macvlan interface, you need to have a
>> macvlan interface configured on the host.
> 
> Thank you for the host macvlan snippet but I CAN actually ping the
> container from the host (but not the host from inside the container)
> and that was actually my question... how come I can ping the
> container from my host when I just set up that container using
> macvlan?
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Macvlan explained

2018-05-04 Thread Michel Jansens
Hi, 

I just stumbled on this site and thought it would be nice to share:

https://hicu.be/bridge-vs-macvlan

It nicely explains how macvlan works and how it compares with normal bridges.
In the case of LXD, I suppose the macvlan bridge mode is used?
It also mentions that although VMs cannot directly communicate with the host, 
you can add another macvlan sub-interface and assign it to the host to enable 
communication…
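
As an illustration of that last point, a rough sketch of such a host-side 
sub-interface (the interface name and the address are placeholders):

# macvlan sub-interface on the host, on the same parent the containers use
ip link add macvlan0 link enp1s0f0 type macvlan mode bridge
ip addr add 192.168.1.250/24 dev macvlan0
ip link set macvlan0 up
# host <-> container traffic then goes through macvlan0 instead of the physical NIC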

Cheers,

Michel







___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD project status

2018-03-27 Thread Michel Jansens
Great! I’m looking forward to that :-)

Michel

> On 27 Mar 2018, at 21:02, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> Yes
> 
> On Tue, Mar 27, 2018 at 08:45:03PM +0200, Michel Jansens wrote:
>> Hi Stéphane,
>> 
>> Does this mean LXD 3.0 will be part of Ubuntu 18.04 next month?
>> 
>> Cheers,
>> Michel
>>> On 27 Mar 2018, at 19:44, Stéphane Graber <stgra...@ubuntu.com> wrote:
>>> 
>>> We normally release a new feature release every month and have been
>>> doing so until the end of December where we've then turned our focus on
>>> our next Long Term Support release, LXD 3.0 which is due out later this
>>> week.
>> 
> 
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD project status

2018-03-27 Thread Michel Jansens
Hi Stéphane,

Does this mean LXD 3.0 will be part of Ubuntu 18.04 next month?

Cheers,
Michel
> On 27 Mar 2018, at 19:44, Stéphane Graber  wrote:
> 
> We normally release a new feature release every month and have been
> doing so until the end of December where we've then turned our focus on
> our next Long Term Support release, LXD 3.0 which is due out later this
> week.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD connectors for any web VM management platforms?

2018-02-02 Thread Michel Jansens
Hi Jan,

Thanks for the suggestion.

I’ve been testing Proxmox for a few weeks.
I must say it’s not bad at all:
- it supports ZFS (or Ceph) storage
- allows high-availability clusters
- LXC and KVM
- ACLs on different VMs or containers
- offers basic monitoring
- support prices are relatively decent
But:
- it is Debian based
- uses LXC instead of LXD
- does not support live migration of containers, or KVM live migration when using 
local storage (ZFS)
- does not seem to offer a live kernel upgrade facility like Ubuntu does
- does not allow clones of LXC containers
- I didn’t find the fine-grained resource control of LXD (storage IOPS or 
bandwidth, …)
- containers don’t have access to snapshots

The web interface is also not a complete customer portal. That being said, I’ve 
seen there is a beta provider for ‘Foreman’ (https://theforeman.org).

I’m still investigating it…

Michel




> On 2 Feb 2018, at 17:05, Jan Münnich  wrote:
> 
> Proxmox (https://www.proxmox.com/en/proxmox-ve) is an open-source 
> virtualisation platform with web interface that supports LXC.
> 
> Best,
> Jan 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD connectors for any web VM management platforms?

2018-02-02 Thread Michel Jansens
Hi Kees,

Thanks for the info. I’ve already tried SaltStack, but had the problem that the 
official module is for plain LXC and not LXD. It worked, but the containers 
didn’t appear in ‘lxc list’. I’ll try the GitHub LXD formula.
I’ve also seen that there is a module in Ansible that I will try soon.

But I’ve also seen there are systems with graphical web interfaces that offer an 
abstraction layer above clouds and can present a merged view of private 
cloud/virtualization/container servers and public clouds:
CloudForms (ManageIQ) and Foreman are two of them.

From what I understand they offer a sort of customer portal and integrate 
orchestration for provisioning automation. I was wondering if there is 
something like that compatible with LXD. I didn’t find any myself.

Thanks,

Michel

> On 2 Feb 2018, at 17:53, Kees Bos <cornelis@gmail.com> wrote:
> 
> On do, 2018-02-01 at 23:15 +0100, Michel Jansens wrote:
>> Hi,
>> 
>> I’ve been looking around to get a web interface for customer portal/
>> container management for lxc.
>> I looked a bit at ManageIQ and Foreman, but found no provider for
>> lxd.
>> Do you know of any project that have lxd connectors/providers?
>> 
>> I know that lxd integrates in OpenStack at Canonical and OpenStack
>> has providers  for both applications, but I would prefer to avoid it
>> (too complex and heavy hardware requirements).
>> 
>> Alternatively, would there be a gateway that would offer a known API
>> and translate/emulate it to lxd? ( Ovirt, VMware, Amazon, Azure,
>> Google are a few well supported APIs)  
>> 
> 
> I'm not sure if you're after this kind of integration, but saltstack can
> provision containers.
> 
> https://github.com/saltstack-formulas/lxd-formula
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD connectors for any web VM management platforms?

2018-02-01 Thread Michel Jansens
Hi,

I’ve been looking around for a customer-portal / container-management web 
interface for LXC.
I looked a bit at ManageIQ and Foreman, but found no provider for LXD.
Do you know of any project that has LXD connectors/providers?

I know that LXD integrates with OpenStack at Canonical, and OpenStack has 
providers for both applications, but I would prefer to avoid it (too complex, 
and heavy hardware requirements).

Alternatively, would there be a gateway that offers a known API and 
translates/emulates it to LXD? (oVirt, VMware, Amazon, Azure, Google are a few 
well-supported APIs.)

Thanks,


Michel

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc 2.0 adding a nic to a container on another vlan (was: access to snapshots from within the containers)

2017-06-16 Thread Michel Jansens
Thanks a lot Stéphane for this information,

I succeeded in attaching a bridge device from a specific vlan, following your 
advice from https://github.com/lxc/lxd/issues/2551
The command I used is: lxc config device add welcome-lemur eth1 nic nictype=macvlan 
parent=brvlan3904 name=eth1

In /etc/network/interfaces I added:

#vlan 3904 interface on enp1s0f0
auto vlan3904
iface vlan3904 inet manual
vlan_raw_device enp1s0f0
#add a bridge for vlan3904
auto brvlan3904
iface brvlan3904 inet manual
    bridge_ports vlan3904


I managed to add the brvlan3904 to multiple containers, but this doesn’t create 
an interface for each container in the brvlan3904 bridge, and I don’t know what 
the security consequences are… 
Is this OK?


Alternatively, to mimic how the lxc br0 bridge looks (one interface per container, 
with vethXX-like names), I tried to add more ports to the bridge, with dummy 
interfaces: 

ip link add welcomelemur type dummy
brctl addif brvlan3904 welcomelemur
ifconfig welcomelemur up
lxc config device add welcome-lemur eth1 nic nictype=macvlan parent=brvlan3904 
name=eth1

But this gave me: error: Failed to create the new macvlan interface: exit 
status 2
I tried using nictype=veth instead of macvlan but got 'error: Bad nic type: 
veth' 

How should I do this properly?
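
One possibility, sketched here without having been tested on LXD 2.0.9: the 
bridged nic type creates a veth pair per container (this is what lxdbr0 itself 
uses), so something like the following might give one vethXX port per container 
in the bridge:

lxc config device add welcome-lemur eth1 nic nictype=bridged parent=brvlan3904 name=eth1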



I must say what I’d really like is a way to do networking like I used to in 
Solaris 10 with “shared IP interfaces”: 
- the network interface is created in the host (one for each container), like 
eth0:1, eth0:2, ...
- the container sees the interface in ifconfig but cannot change the IP address, 
netmask or anything.
Some apps don’t work (e.g. tcpdump needs promiscuous mode), but nobody can just 
change the IP from within the container (maybe this can be prevented in LXC, but 
I’m not experienced enough yet to know how). 




Thanks for any additional information


—

Michel

> On 15 Jun 2017, at 19:13, Stéphane Graber <stgra...@ubuntu.com> wrote:
> 
> On Thu, Jun 15, 2017 at 07:58:33AM +0200, mjansens wrote:
>> Hi,
>> 
>> Thank you Stéphane for this clarification.
>> I'll indeed try to stick with the LTS version if I can. The snapshot glitch 
>> has an easy work around:  just need to do a ‘ls’ of the new snapshot 
>> contents in the host (can even happen in a cron). And anyway, nobody said 
>> this issue was fixed in later updates...
> 
> Yeah, I don't expect this to be any different on the LXD feature branch.
> This behavior is an internal ZFS behavior and short of having LXD clone
> every snapshot and mount the resulting clone, I can't think of another
> way to easily expose that data.
> 
>> 
>> Where I might get stuck is in the network part: I will need at some point to 
>> lock some containers into specific VLANs. I have more or less gathered from 
>> various info on the web that LXD 2.0.x networking is limited to a simple 
>> bridge (my current config) or the standard NAT.
> 
> LXD 2.0.x doesn't have an API to let you define additional bridges.
> 
> There's however nothing preventing you from defining additional bridges
> at the system level and then telling LXD to use them.
> 
>> 
>> 
>> Thanks,
>> 
>> Michel
>> 
>> 
>>> On 14 Jun 2017, at 19:10, Stéphane Graber <stgra...@ubuntu.com> wrote:
>>> 
>>> On Wed, Jun 14, 2017 at 03:41:27PM +0800, gunnar.wagner wrote:
>>>> not directly related to your snapshot issue but still maybe good to know
>>>> fact
>>>> 
>>>> On 6/13/2017 8:37 PM, Michel Jansens wrote:
>>>>> I’m busy discovering LXD v2.0.9 on Ubuntu 16.04
>>>> if you want the most recent (yet regarded stable for production) version of
>>>> LXD on an ubuntu 16.04 host you'd install it from the xenial-backports
>>>> sources
>>>> 
>>>>   sudo apt install -t xenial-backports lxd lxd-client
>>>> 
>>>> this gives you 2.13 at this point in time. I am not really sure what the
>>>> lxd-client package exactly does (or which feature you are missing if you
>>>> don't have that) but it was recommended somewhere to get that as well
>>> 
>>> Please don't tell people to do that unless they understand the implications!
>>> 
>>> Doing the above will get your system from the LXD LTS branch (2.0.x) to
>>> the LXD feature branch. Downgrading isn't possible, so once someone does
>>> that, there's no going back.
>>> 
>>> The LXD LTS branch (2.0.x) is supported for 5 years and only gets
>>> bugfixes and security updates. This is typically recommended for
>>> production environments where new features are consider

Re: [lxc-users] access to snapshots from within the containers

2017-06-14 Thread Michel Jansens
Hi Gunnar,

Thanks for your comment, it brings up some issues that are not clear to me:
I’m looking to build a production environment based on Ubuntu servers with ZFS 
storage and LXD (a similar architecture to what I have now on SmartOS).
I intend to buy Ubuntu server licences with support.
I understand that version 2.0.9 is not the latest version available upstream, 
but what I don’t get is: will I get support from Canonical if I use a more 
recent version?
If Canonical ships LXD 2.0.x in 16.04 LTS, maybe it is for stability reasons?

Thank you for any information on this.

Cheers,

Michel



> On 14 Jun 2017, at 09:41, gunnar.wagner <gunnar.wag...@netcologne.de> wrote:
> 
> not directly related to your snapshot issue, but still maybe a good-to-know fact
> On 6/13/2017 8:37 PM, Michel Jansens wrote:
>> I’m busy discovering LXD v2.0.9 on Ubuntu 16.04
> if you want the most recent (yet regarded stable for production) version of 
> LXD on an ubuntu 16.04 host you'd install it from the xenial-backports sources
> 
> sudo apt install -t xenial-backports lxd lxd-client
> 
> this gives you 2.13 at this point in time. I am not really sure what the 
> lxd-client package exactly does (or which feature you are missing if you 
> don't have that) but it was recommended somewhere to get that as well
> 
> 
> 
> - 
> 
> Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang District, 
> 201112 Shanghai, P.R. CHINA 
> mob +86.159.0094.1702 | skype: professorgunrad | wechat: 15900941702
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] access to snapshots from within the containers

2017-06-13 Thread Michel Jansens
Hi all,

I’m busy discovering LXD v2.0.9 on Ubuntu 16.04
I’m trying to access the (ZFS) snapshots data from within containers.

I’ve shared the “.zfs/snapshot” directory with the associated container like 
this: 
lxc config device add obliging-panda snapshot disk path=/snapshot 
source=/var/lib/lxd/containers/obliging-panda.zfs/.zfs/snapshot

From inside the container, I see the list of snapshots:

root@obliging-panda:/snapshot# ls -l /snapshot/
total 2
drwxr-xr-x 4 root   root5 May 23 09:12 snapshot-2017_06_12_08h56
drwxr-xr-x 4 root   root5 May 23 09:12 snapshot-2017_06_13_09h06
drwxr-xr-x 4 root   root5 May 23 09:12 snapshot-abcd
dr-xr-xr-x 1 nobody nogroup 0 Jun 13 07:11 snapshot-newsnap
drwxr-xr-x 4 root   root5 May 23 09:12 snapshot-test

But they all look inaccessible:
root@obliging-panda:/snapshot# ls -l /snapshot/snapshot-newsnap/rootfs
ls: cannot access '/snapshot/snapshot-newsnap/rootfs': Object is remote


…until you list them on the host server: 

ls  
/var/lib/lxd/containers/obliging-panda.zfs/.zfs/snapshot/snapshot-newsnap/rootfs/

Then they appear in the container:

root@obliging-panda:/snapshot# ls -l snapshot-newsnap/rootfs
total 99
drwxr-xr-x  2 root root 173 Jun 12 12:01 bin
drwxr-xr-x  3 root root   3 May 16 14:19 boot
drwxr-xr-x  5 root root  91 May 16 14:18 dev
...

The funny thing is that this same weird behaviour happened a long time ago in 
Solaris zones… so I imagine this has to do with ZFS (the .zfs/snapshot 
directories only seem to get mounted on first access)…



Is there another, more “standard” way to access snapshots?
I saw there is a /snap (empty) directory in the containers. Is it meant for 
accessing snapshots? If yes, how do you get them mounted?

Sorry if there is something obvious I’m missing. I’m new to Ubuntu/LXD (coming 
from Solaris & SmartOS zones).

Thanks.

Michel
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users