Re: [lxc-users] problem after migrating zfs pool to new disk

2020-11-21 Thread Norberto Bensa
I cloned the GitHub repository and found docs/database.md

There's a patch.global.sql mechanism that did the trick :-)
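
For anyone hitting the same renamed-pool problem, here is roughly what that fix can look like. This is a sketch only: the key names ('source', 'zfs.pool_name') and the WHERE clause are my assumptions, not something taken from this thread; check docs/database.md and the actual contents of your storage_pools_config table first.

```shell
# Sketch: build the one-shot SQL that LXD applies (and then deletes)
# at its next startup via patch.global.sql. Key names are assumptions.
SQL="UPDATE storage_pools_config SET value='rz1-venkman-nvme'
     WHERE key IN ('source','zfs.pool_name') AND value='zroot';"

# Target path inside the snap's data dir. Writing it requires root,
# with the lxd snap stopped, so this sketch only prints:
PATCH=/var/snap/lxd/common/lxd/database/patch.global.sql
echo "$SQL"
echo "would write to: $PATCH"
```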



On Sat, Nov 21, 2020 at 8:30 PM Norberto Bensa  wrote:

> Hello,
>
> $ snap list
lxd  4.8  18324  latest/stable  canonical✓  -
>
> I moved my zfs pool to a new disk and then renamed the pool.
> Old name was zroot, new name: rz1-venkman-nvme.
> The pool was migrated using zfs send|zfs receive.
> While doing the migration, both pools were on the same machine.
> The procedure was done with all the services down.
>
> lxc shows:
>
> $ lxc list
> Error: Get "http://unix.socket/1.0": EOF
>
> $ lxd
> Error: This must be run as root
>
> $ sudo lxd
> [sudo] password for zoolook:
> WARN[11-21|19:23:17] - Couldn't find the CGroup blkio.weight, disk
> priority will be ignored
> EROR[11-21|19:23:26] Failed to start the daemon: Failed initializing
> storage pool "default": Failed to run: zpool import zroot: cannot import
> 'zroot': no such pool available
> Error: Failed initializing storage pool "default": Failed to run: zpool
> import zroot: cannot import 'zroot': no such pool available
>
> I figured I could run snap stop lxd and update the database at
> /var/snap/lxd/common/lxd/database/global/db.bin
>
> I updated the value in the storage_pools_config table to the correct
> pool name, but the problem persists, though it is now different:
>
> $ lxc list
> Error: Get "http://unix.socket/1.0": dial unix
> /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
>
> $ sudo lxd
> [sudo] password for zoolook:
> WARN[11-21|20:18:14]  - Couldn't find the CGroup blkio.weight, disk
> priority will be ignored
> EROR[11-21|20:18:15] Failed to start the daemon: Failed to start dqlite
> server: raft_start(): io: load closed segment
> 00078237-00078237: entries batch 2 starting at byte 48:
> entries count in preamble is zero
> Error: Failed to start dqlite server: raft_start(): io: load closed
> segment 00078237-00078237: entries batch 2 starting at byte
> 48: entries count in preamble is zero
>
> Also, I don't know if this is related, but there are at least two files I
> cannot read:
>
> # cd /var/snap/lxd/common/ns
> # ls -l
> total 0
> -r--r--r-- 1 root root 0 nov 21 19:48 mntns
> -r--r--r-- 1 root root 0 nov 21 19:48 shmounts
>
> # cat mntns
> cat: mntns: Invalid argument
>
> # cat shmounts
> cat: shmounts: Invalid argument
>
>
> Thanks for any help
> Norberto
>
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] problem after migrating zfs pool to new disk

2020-11-21 Thread Norberto Bensa
Hello,

$ snap list
lxd  4.8  18324  latest/stable  canonical✓  -

I moved my zfs pool to a new disk and then renamed the pool.
Old name was zroot, new name: rz1-venkman-nvme.
The pool was migrated using zfs send|zfs receive.
While doing the migration, both pools were on the same machine.
The procedure was done with all the services down.
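
For reference, the send/receive rename above corresponds roughly to the following. A sketch with an assumed snapshot name ("migrate"); printed rather than executed here, since the real thing needs root and an actual ZFS pool:

```shell
# Sketch of the pool rename via zfs send | zfs receive.
# The snapshot name "migrate" is an assumption; adjust pool names to taste.
SRC=zroot
DST=rz1-venkman-nvme
PLAN="zfs snapshot -r ${SRC}@migrate
zfs send -R ${SRC}@migrate | zfs receive -F ${DST}
zpool export ${SRC}"
echo "$PLAN"   # print the plan instead of running it
```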

lxc shows:

$ lxc list
Error: Get "http://unix.socket/1.0": EOF

$ lxd
Error: This must be run as root

$ sudo lxd
[sudo] password for zoolook:
WARN[11-21|19:23:17] - Couldn't find the CGroup blkio.weight, disk priority
will be ignored
EROR[11-21|19:23:26] Failed to start the daemon: Failed initializing
storage pool "default": Failed to run: zpool import zroot: cannot import
'zroot': no such pool available
Error: Failed initializing storage pool "default": Failed to run: zpool
import zroot: cannot import 'zroot': no such pool available

I figured I could run snap stop lxd and update the database at
/var/snap/lxd/common/lxd/database/global/db.bin

I updated the value in the storage_pools_config table to the correct
pool name, but the problem persists, though it is now different:

$ lxc list
Error: Get "http://unix.socket/1.0": dial unix
/var/snap/lxd/common/lxd/unix.socket: connect: connection refused

$ sudo lxd
[sudo] password for zoolook:
WARN[11-21|20:18:14]  - Couldn't find the CGroup blkio.weight, disk
priority will be ignored
EROR[11-21|20:18:15] Failed to start the daemon: Failed to start dqlite
server: raft_start(): io: load closed segment
00078237-00078237: entries batch 2 starting at byte 48:
entries count in preamble is zero
Error: Failed to start dqlite server: raft_start(): io: load closed segment
00078237-00078237: entries batch 2 starting at byte 48:
entries count in preamble is zero

Also, I don't know if this is related, but there are at least two files I
cannot read:

# cd /var/snap/lxd/common/ns
# ls -l
total 0
-r--r--r-- 1 root root 0 nov 21 19:48 mntns
-r--r--r-- 1 root root 0 nov 21 19:48 shmounts

# cat mntns
cat: mntns: Invalid argument

# cat shmounts
cat: shmounts: Invalid argument


Thanks for any help
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] lxd.migrate doesn't work with Ubuntu based distro

2017-11-21 Thread Norberto Bensa
$ snap install lxd
2017-11-21T10:36:27-03:00 INFO Waiting for restart...
lxd 2.20 from 'canonical' installed

$ lxd.migrate
error: This tool must be run as root.

$ sudo lxd.migrate
error: Data migration is only supported on Ubuntu at this time.

$ lsb_release -a
No LSB modules are available.
Distributor ID: neon
Description:KDE neon User Edition 5.11
Release:16.04
Codename:   xenial


KDE neon *is* Ubuntu LTS.

Is there any way to force migration (at my own risk)?

Thanks

Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-destroy: "container has snapshots" but it doesn't

2017-08-29 Thread Norberto Bensa
Hello,

I haven't read the code, but it seems lxc-destroy checks whether a
'snaps' directory exists and, if it does, assumes there's at least
one snapshot.


nbensa@tecno04:~$ sudo lxc-destroy -n eleva
[sudo] password for nbensa:
Destroying eleva failed: eleva has snapshots.
nbensa@tecno04:~$ sudo rmdir /var/lib/lxc/eleva/snaps
nbensa@tecno04:~$ sudo lxc-destroy -n eleva
Destroyed container eleva
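
A slightly safer form of that workaround is to remove the directory only
when it is actually empty, so lxc-destroy is never fooled about a container
that really has snapshots. A sketch (demonstrated against a throwaway
directory rather than /var/lib/lxc):

```shell
remove_empty_snaps() {
  # Remove a container's "snaps" directory only if it is empty.
  # rmdir refuses to delete non-empty directories, so real snapshots
  # are never silently hidden from lxc-destroy.
  d="$1/snaps"
  [ -d "$d" ] && rmdir "$d" 2>/dev/null && echo "removed $d"
}

# Demo against a temporary fake container directory:
tmp=$(mktemp -d)
mkdir "$tmp/snaps"
remove_empty_snaps "$tmp"
```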


nbensa@tecno04:~$ dpkg -l | grep lxc
ii  liblxc1
2.0.8-0ubuntu6~ubuntu16.04.1~ppa1   amd64Linux
Containers userspace tools (library)
ii  lxc
2.0.8-0ubuntu6~ubuntu16.04.1~ppa1   all
Transitional package for lxc1
ii  lxc-common
2.0.8-0ubuntu6~ubuntu16.04.1~ppa1   amd64Linux
Containers userspace tools (common tools)
ii  lxc-templates
2.0.8-0ubuntu6~ubuntu16.04.1~ppa1   amd64Linux
Containers userspace tools (templates)
ii  lxc1
2.0.8-0ubuntu6~ubuntu16.04.1~ppa1   amd64Linux
Containers userspace tools
ii  lxcfs
2.0.7-0ubuntu5~ubuntu16.04.1~ppa1   amd64FUSE
based filesystem for LXC
ii  python3-lxc
2.0.8-0ubuntu6~ubuntu16.04.1~ppa1   amd64Linux
Containers userspace tools (Python 3.x bindings)
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-04-17 Thread Norberto Bensa
Hi Gregory,

thanks! The 100% cpu usage is gone!

I'm using LXC, so I had to "hack" your instructions.

/etc/default/lxc-net:
LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

/etc/lxc/dnsmasq.conf:
dns-loop-detect
auth-zone=lxc


But now I cannot resolve external domains from inside the containers,
and I know why: the upstream DNS for 10.0.1.1 is my host, and my host's
first DNS server is 10.0.1.1 (a loop).

I'll go back to /etc/hosts for now.
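
In case someone wants to try breaking that loop instead, a sketch I have
not tested: no-resolv stops the bridge's dnsmasq from reading the host's
/etc/resolv.conf (the loop source), and an explicit server= line sends
everything outside the lxc zone straight to an upstream resolver.

/etc/lxc/dnsmasq.conf (hypothetical additions):
dns-loop-detect
auth-zone=lxc
no-resolv
server=8.8.8.8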

BTW, can you post your /etc/resolv.conf,
/run/NetworkManager/resolv.conf, /run/systemd/resolv/resolv.conf,
/run/resolvconf/resolv.conf? What does /etc/resolv.conf look like in
your containers?

Thanks!

Regards,
Norberto



2017-04-17 21:12 GMT-03:00 Gregory Lutostanski
<gregory.lutostan...@canonical.com>:
> Norberto, indeed you are not crazy! I have seen the same thing here.
> On my laptop I did the nm-applet setup to set up DNS on lxdbr0, and then saw
> CPU usage spike to 100% due to a loop of dnsmasq asking the
> network-manager DNS server and back around forever...
>
> the way I fixed this was by adding these two config options to lxd's
> dnsmasq:
> auth-zone=lxd
> dns-loop-detect
>
> http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html for what those
> do.
>
> $ lxc network edit lxdbr0
>
> looks like...
> config:
>   ipv4.address: 10.216.134.1/24
>   ipv4.nat: "true"
>   ipv6.address: none
>   ipv6.nat: "true"
>   raw.dnsmasq: |
> auth-zone=lxd
> dns-loop-detect
> name: lxdbr0
> type: bridge
>
> No more 100% cpu usage any more!
>
> The workaround I was using until I figured it out was...
> https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1571967/comments/13 --
> but that only works for ssh, not for http and other stuff
>
> Hope you can confirm that this works for you too.
>
> On Mon, Apr 17, 2017 at 6:23 PM, Norberto Bensa <nbensa+lxcus...@gmail.com>
> wrote:
>>
>> That used to work, but from 17.04 (on the desktop editions, both
>> ubuntu and kubuntu) adding the ip of the bridge to /etc/resolv.conf
>> makes systemd-resolved and dnsmasq eat my cpu.
>>
>> 2017-04-17 12:16 GMT-03:00 Matlink <matl...@matlink.fr>:
>> > For me, simply adding the lxc bridge IP address to DNS resolvers made me
>> > able to resolve *.lxd domains from the host machine.
>> > --
>> > Matlink
>> >
>> > On 17 April 2017 at 13:42:36 GMT+02:00, Simos Xenitellis
>> > <simos.li...@googlemail.com> wrote:
>> >>
>> >> On Thu, Apr 13, 2017 at 10:49 PM, Norberto Bensa
>> >> <nbensa+lxcus...@gmail.com> wrote:
>> >>>
>> >>>  Hello Simos,
>> >>>
>> >>>  2017-04-13 10:44 GMT-03:00 Simos Xenitellis
>> >>> <simos.li...@googlemail.com>:
>> >>>>
>> >>>>  I got stuck with this issue (Ubuntu Desktop with NetworkManager) and
>> >>>>  wrote about it at
>> >>>>
>> >>>>
>> >>>> https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg07060.html
>> >>>
>> >>>
>> >>>  For me, that doesn't work anymore with 17.04
>> >>>
>> >>>  I tried a lot of configuration options with dnsmasq, network-manager,
>> >>>  and systemd-resolved with Ubuntu and Kubuntu (real hardware and
>> >>>  virtualized with kvm).
>> >>
>> >>
>> >>
>> >> If you installed additional packages or changed configuration options,
>> >> you might have changed something that alters the default behaviour.
>> >>
>> >> 1. On Ubuntu Desktop, NetworkManager handles the networking
>> >> configuration.
>> >> You should be able to do "ps aux | grep dnsmasq" and see at least one
>> >> "dnsmasq" process,
>> >> the one from NetworkManager.
>> >> For me, it is:
>> >> " 3653 ?S  0:00 /usr/sbin/dnsmasq --no-resolv
>> >> --keep-in-foreground --no-hosts --bind-interfaces
>> >> --pid-file=/var/run/NetworkManager/dnsmasq.pid
>> >> --listen-address=127.0.1.1 --cache-size=0 --conf-file=/dev/null
>> >> --proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq
>> >> --conf-dir=/etc/NetworkManager/dnsmasq.d"
>> >>
>> >> What is yours?
>> >>
>> >> 2. NetworkManager uses dnsmasq as a caching nameserver, and it does so
>> >> by configuring /etc/resolv.conf with:
>> >> # Dynamic resolv.conf(5) file for glibc resolver(3) generated by
>> >

Re: [lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-04-17 Thread Norberto Bensa
That used to work, but from 17.04 (on the desktop editions, both
ubuntu and kubuntu) adding the ip of the bridge to /etc/resolv.conf
makes systemd-resolved and dnsmasq eat my cpu.

2017-04-17 12:16 GMT-03:00 Matlink <matl...@matlink.fr>:
> For me, simply adding the lxc bridge IP address to DNS resolvers made me
> able to resolve *.lxd domains from the host machine.
> --
> Matlink
>
> On 17 April 2017 at 13:42:36 GMT+02:00, Simos Xenitellis
> <simos.li...@googlemail.com> wrote:
>>
>> On Thu, Apr 13, 2017 at 10:49 PM, Norberto Bensa
>> <nbensa+lxcus...@gmail.com> wrote:
>>>
>>>  Hello Simos,
>>>
>>>  2017-04-13 10:44 GMT-03:00 Simos Xenitellis
>>> <simos.li...@googlemail.com>:
>>>>
>>>>  I got stuck with this issue (Ubuntu Desktop with NetworkManager) and
>>>>  wrote about it at
>>>>
>>>> https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg07060.html
>>>
>>>
>>>  For me, that doesn't work anymore with 17.04
>>>
>>>  I tried a lot of configuration options with dnsmasq, network-manager,
>>>  and systemd-resolved with Ubuntu and Kubuntu (real hardware and
>>>  virtualized with kvm).
>>
>>
>>
>> If you installed additional packages or changed configuration options,
>> you might have changed something that alters the default behaviour.
>>
>> 1. On Ubuntu Desktop, NetworkManager handles the networking configuration.
>> You should be able to do "ps aux | grep dnsmasq" and see at least one
>> "dnsmasq" process,
>> the one from NetworkManager.
>> For me, it is:
>> " 3653 ?S  0:00 /usr/sbin/dnsmasq --no-resolv
>> --keep-in-foreground --no-hosts --bind-interfaces
>> --pid-file=/var/run/NetworkManager/dnsmasq.pid
>> --listen-address=127.0.1.1 --cache-size=0 --conf-file=/dev/null
>> --proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq
>> --conf-dir=/etc/NetworkManager/dnsmasq.d"
>>
>> What is yours?
>>
>> 2. NetworkManager uses dnsmasq as a caching nameserver, and it does so
>> by configuring /etc/resolv.conf with:
>> # Dynamic resolv.conf(5) file for glibc resolver(3) generated by
>> resolvconf(8)
>> # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
>> nameserver 127.0.1.1
>>
>> Can you verify that you have exactly the same?
>>
>> 3. Then, LXD should have its own "dnsmasq" process (as a DHCP server
>> and caching nameserver).
>> This dnsmasq process binds on a specific private IP address, which you
>> can find with, for example,
>>
>> ifconfig lxdbr0
>>
>> In my case, it is 10.0.125.1. I have an LXD container called
>> "mycontainer", therefore I can run
>>
>> $ host mycontainer.lxd 10.0.125.1
>> Using domain server:
>> Name: 10.0.185.1
>> Address: 10.0.185.1#53
>> Aliases:
>>
>> mycontainer.lxd has address 10.0.125.18
>> mycontainer.lxd has IPv6 address fd42:aacb:3658:4ca6:216:3e4f:fcd9:35e1
>> $ _
>>
>> Do you get such a result? If not, perhaps you have the wrong IP address.
>> Also, if you ran "lxd init" several times, you might have lingering
>> "dnsmasq" processes that bind on port 53 on lxdbr0. You would need to
>> reboot here.
>>
>> If you can get up to this point, then the rest is really easy.
>>
>> Simos
>> 
>>
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-04-17 Thread Norberto Bensa
Hello Simos,


I did all my experiments on clean ubuntu (desktop) and kubuntu
installs because I don't trust the kubuntu installed in my notebook
:-)

Your instructions work up to 16.10. Can you verify that they still
work on a fresh (K)ubuntu desktop 17.04?


Thanks!

Regards,
Norberto




2017-04-17 8:42 GMT-03:00 Simos Xenitellis <simos.li...@googlemail.com>:
> On Thu, Apr 13, 2017 at 10:49 PM, Norberto Bensa
> <nbensa+lxcus...@gmail.com> wrote:
>> Hello Simos,
>>
>> 2017-04-13 10:44 GMT-03:00 Simos Xenitellis <simos.li...@googlemail.com>:
>>> I got stuck with this issue (Ubuntu Desktop with NetworkManager) and
>>> wrote about it at
>>> https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg07060.html
>>
>> For me, that doesn't work anymore with 17.04
>>
>> I tried a lot of configuration options with dnsmasq, network-manager,
>> and systemd-resolved with Ubuntu and Kubuntu (real hardware and
>> virtualized with kvm).
>>
>
> If you installed additional packages or changed configuration options,
> you might have changed something that alters the default behaviour.
>
> 1. On Ubuntu Desktop, NetworkManager handles the networking configuration.
> You should be able to do "ps aux | grep dnsmasq" and see at least one
> "dnsmasq" process,
> the one from NetworkManager.
> For me, it is:
> " 3653 ?S  0:00 /usr/sbin/dnsmasq --no-resolv
> --keep-in-foreground --no-hosts --bind-interfaces
> --pid-file=/var/run/NetworkManager/dnsmasq.pid
> --listen-address=127.0.1.1 --cache-size=0 --conf-file=/dev/null
> --proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq
> --conf-dir=/etc/NetworkManager/dnsmasq.d"
>
> What is yours?
>
> 2. NetworkManager uses dnsmasq as a caching nameserver, and it does so
> by configuring /etc/resolv.conf with:
> # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
> # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
> nameserver 127.0.1.1
>
> Can you verify that you have exactly the same?
>
> 3. Then, LXD should have its own "dnsmasq" process (as a DHCP server
> and caching nameserver).
> This dnsmasq process binds on a specific private IP address, which you
> can find with, for example,
>
> ifconfig lxdbr0
>
> In my case, it is 10.0.125.1. I have an LXD container called
> "mycontainer", therefore I can run
>
> $ host mycontainer.lxd 10.0.125.1
> Using domain server:
> Name: 10.0.185.1
> Address: 10.0.185.1#53
> Aliases:
>
> mycontainer.lxd has address 10.0.125.18
> mycontainer.lxd has IPv6 address fd42:aacb:3658:4ca6:216:3e4f:fcd9:35e1
> $ _
>
> Do you get such a result? If not, perhaps you have the wrong IP address.
> Also, if you ran "lxd init" several times, you might have lingering
> "dnsmasq" processes that bind on port 53 on lxdbr0. You would need to
> reboot here.
>
> If you can get up to this point, then the rest is really easy.
>
> Simos
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-04-13 Thread Norberto Bensa
Hello Simos,

2017-04-13 10:44 GMT-03:00 Simos Xenitellis :
> I got stuck with this issue (Ubuntu Desktop with NetworkManager) and
> wrote about it at
> https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg07060.html

For me, that doesn't work anymore with 17.04

I tried a lot of configuration options with dnsmasq, network-manager,
and systemd-resolved with Ubuntu and Kubuntu (real hardware and
virtualized with kvm).

Thanks,
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-04-10 Thread Norberto Bensa
I found that in the 17.04 server edition it's enough to:

$ sudo apt-get install dnsmasq
$ sudoedit /etc/dnsmasq.d-available/lxc

add the following lines:

server=/lxc/10.0.3.1
server=/3.0.10.in-addr.arpa/10.0.3.1

$ sudoedit /etc/default/lxc-net

uncomment LXC_DOMAIN="lxc"


But, in the desktop version, systemd-resolved and dnsmasq eat my cpu.
Maybe a network-manager bug?



2017-03-29 1:25 GMT-03:00 Serge E. Hallyn <se...@hallyn.com>:
> On Wed, Mar 29, 2017 at 12:59:15AM -0300, Norberto Bensa wrote:
>> 2017-03-29 0:29 GMT-03:00 Serge E. Hallyn <se...@hallyn.com>:
>> > On Wed, Mar 29, 2017 at 12:16:29AM -0300, Norberto Bensa wrote:
>> >> Hello list!
>> >>
>> >> In previous versions of Ubuntu I had lxc domain name resolution
>> >> working, but it's broken for me in 17.04.
>> >
>> > https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-network
>>
>> As I said, worked flawlessly up to 16.10, broken with 17.04.
>
> Ok, wasn't quite sure if you were following that recipe or not.
> I wouldn't be surprised if the switch to systemd-networkd requires
> a change.  Personaly I'm on 16.04+upstart, but if you can come up with
> the revised recipe an update to the serverguide would be awesome.
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-03-28 Thread Norberto Bensa
2017-03-29 0:29 GMT-03:00 Serge E. Hallyn <se...@hallyn.com>:
> On Wed, Mar 29, 2017 at 12:16:29AM -0300, Norberto Bensa wrote:
>> Hello list!
>>
>> In previous versions of Ubuntu I had lxc domain name resolution
>> working, but it's broken for me in 17.04.
>
> https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-network

As I said, worked flawlessly up to 16.10, broken with 17.04.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Resolve .lxc domain with Ubuntu 17.04

2017-03-28 Thread Norberto Bensa
Hello list!

In previous versions of Ubuntu I had lxc domain name resolution
working, but it's broken for me in 17.04.

Name resolution inside the containers work:

ubuntu@consulta:~⟫ ping fox
PING fox.lxc (10.0.1.114) 56(84) bytes of data.
64 bytes from fox.lxc (10.0.1.114): icmp_seq=1 ttl=64 time=0.058 ms
^C

But it doesn't from the host:

zoolook@melnitz:~$ ping fox.lxc
ping: fox.lxc: Name or service not known


I've added 10.0.1.1 as a nameserver in
/etc/resolvconf/resolv.conf.d/head, but it makes systemd-resolved eat
my cpu at ~100%.

 1268 systemd+  20   0   57672  13292   5136 R  98,9  0,1   0:18.73
/lib/systemd/systemd-resolved

Logs from systemd are useless:

Mar 28 23:41:54 melnitz systemd-resolved[1268]: Processing query...


I need/want to have name resolution working from the host. Does anyone
have a solution?


Thanks!
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Ownership changes after container move

2015-11-10 Thread Norberto Bensa
Hi,

what prodecure did you use to move the container?

I usually use tar: cd /var/lib/lxc/ ; tar --numeric-owner -zcpf
container.tar container/, then tar -zxpf container.tar on the target host.

HTH,
Norberto


2015-11-10 6:10 GMT-03:00 Jamie Brown :

> Hi,
>
> I’ve discovered that some file ownership changes have occurred after
> moving stopped containers between hosts.
>
> Prior to the move there were various user directories (e.g. “/home/jamie”)
> with ownership set to jamie:jamie. After moving, the ownership was changed
> to ubuntu:ubuntu.
>
> I discovered the issue when attempting to SSH to the moved host and was
> prompted to enter my password as I no longer owned my authorized_keys file.
>
> I will try to repeat this, but I can confirm it has happened on multiple
> containers.
>
> — Jamie
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] re lxcbr0 doesn't exist after upgrade to 15.10

2015-10-29 Thread Norberto Bensa
Hello.

This problem fixed by itself. Today I re-enabled systemd and after a
reboot lxcbr0 is up and running again.

Maybe this update has something to do with it:

Start-Date: 2015-10-26  20:37:19
Commandline: apt-get dist-upgrade
Upgrade: lxc:amd64 (1.1.4-0ubuntu1, 1.1.4-0ubuntu1.1),
python3-lxc:amd64 (1.1.4-0ubuntu1, 1.1.4-0ubuntu1.1), liblxc1:amd64
(1.1.4-0ubuntu1, 1.1.4-0ubuntu1.1), lxc-templates:amd64
(1.1.4-0ubuntu1, 1.1.4-0ubuntu1.1)
End-Date: 2015-10-26  20:37:21




2015-10-27 11:59 GMT-03:00 Serge Hallyn :
> Quoting brian mullan (bmullan.m...@gmail.com):
>> Norberto
>>
>> Great coincidence as I read your msg to the lxc-users list about the lxcbr0
>> bridge
>> disappearing after upgrade to Ubuntu 15.10.
>
> Can you open a launchpad bug and describe there the system you upgraded
> from?  (i.e. was it stock 14.04 with systemd?  desktop, with network-manager?
> /etc/network/interfaces contents;  and what do you get when you run
>
> journalctl -u lxc-net
> /usr/lib/x86_64-linux-gnu/lxc/lxc-net stop
> /usr/lib/x86_64-linux-gnu/lxc/lxc-net start
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxcbr0 doesn't exist after upgrade to 15.10

2015-10-27 Thread Norberto Bensa
2015-10-27 1:31 GMT-03:00 Fajar A. Nugraha <l...@fajar.net>:
> On Tue, Oct 27, 2015 at 6:20 AM, Norberto Bensa <nbensa+lxcus...@gmail.com>
> wrote:
>>
>> This problem is related to network-manager (NM) or systemd.
>>
>> I tried to disable NM but I couldn't. NM started with every boot (does
>> systemd depend on it?). I switched to upstart. Now NM is down, lxcbr0
>> starts up.
>>
>> Everything works as it used to be including my routes and dns servers.
>>
>
> Workaround:
>
> - edit /etc/network/interfaces, add "iface lxcbr0 inet manual"
> - reboot
>

I tried it, and it fixed lxcbr0, but my /etc/resolv.conf is still b0rken.

For me, the real workaround is: disable network-manager and use upstart.

Thanks anyway Fajar. This looks like a bug in systemd/network-manager,
I'll report it.

Regards,
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxcbr0 doesn't exist after upgrade to 15.10

2015-10-26 Thread Norberto Bensa
This problem is related to network-manager (NM) or systemd.

I tried to disable NM but I couldn't. NM started with every boot (does
systemd depend on it?). I switched to upstart. Now NM is down, lxcbr0
starts up.

Everything works as it used to be including my routes and dns servers.

2015-10-26 19:28 GMT-03:00 Norberto Bensa <nbensa+lxcus...@gmail.com>:
> zoolook@venkman:~$ LC_ALL=C ifconfig lxcbr0
> lxcbr0: error fetching interface information: Device not found
>
> zoolook@venkman:~$ LC_ALL=C sudo service lxc-net start
>
> zoolook@venkman:~$ LC_ALL=C ifconfig lxcbr0
> lxcbr0: error fetching interface information: Device not found
>
> zoolook@venkman:~$ LC_ALL=C sudo service lxc-net stop
>
> zoolook@venkman:~$ LC_ALL=C sudo service lxc-net start
>
> zoolook@venkman:~$ LC_ALL=C ifconfig lxcbr0
> lxcbr0Link encap:Ethernet  HWaddr 5e:6c:12:20:f1:a1
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:0 (0.0 B)  TX bytes:432 (432.0 B)
>
>
> And even then, note that there's no IP assigned. Of course my
> containers do not start anymore (unless I set the IP address manually).
>
> How do I debug this?
>
> Thanks!
> Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxcbr0 doesn't exist after upgrade to 15.10

2015-10-26 Thread Norberto Bensa
zoolook@venkman:~$ LC_ALL=C ifconfig lxcbr0
lxcbr0: error fetching interface information: Device not found

zoolook@venkman:~$ LC_ALL=C sudo service lxc-net start

zoolook@venkman:~$ LC_ALL=C ifconfig lxcbr0
lxcbr0: error fetching interface information: Device not found

zoolook@venkman:~$ LC_ALL=C sudo service lxc-net stop

zoolook@venkman:~$ LC_ALL=C sudo service lxc-net start

zoolook@venkman:~$ LC_ALL=C ifconfig lxcbr0
lxcbr0Link encap:Ethernet  HWaddr 5e:6c:12:20:f1:a1
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:432 (432.0 B)


And even then, note that there's no IP assigned. Of course my
containers do not start anymore (unless I set the IP address manually).

How do I debug this?

Thanks!
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] #1390223 Apparmor related regression on access to unix sockets (was: ubuntu utopic (14.10) permission problems?)

2015-04-21 Thread Norberto Bensa
2015-03-11 23:01 GMT-03:00 Norberto Bensa nbensa+lxcus...@gmail.com:

 And of course it's a bug and is reported.

 https://bugs.launchpad.net/ubuntu/utopic/+source/linux/+bug/1390223



Hello.

Is anyone working on this? It says Fix Released for an old Vivid kernel
(3.18), but I still have this problem. I can easily reproduce it on any
up-to-date machine with:

$ sudo lxc-create -n test -t ubuntu
$ sudo lxc-start -n test

Inside the container:

$ sudo apt-get install postfix

$ mailq
postqueue: warning: close: Permission denied
$ sudo mailq
postqueue: warning: close: Permission denied


Apart from not using postfix, is there any workaround? Should I report a
new bug?


Thanks,
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] #1390223 Apparmor related regression on access to unix sockets (was: ubuntu utopic (14.10) permission problems?)

2015-04-21 Thread Norberto Bensa
2015-04-21 11:01 GMT-03:00 Fajar A. Nugraha l...@fajar.net:

 On Tue, Apr 21, 2015 at 7:06 PM, Norberto Bensa
 nbensa+lxcus...@gmail.com wrote:
  2015-03-11 23:01 GMT-03:00 Norberto Bensa nbensa+lxcus...@gmail.com:
 
  And of course it's a bug and is reported.
 
  https://bugs.launchpad.net/ubuntu/utopic/+source/linux/+bug/1390223
 
 
 
  Hello.
 
  Is anyone working on this? It says Fix Released for and old Vivid
 kernel
  (3.18) but I still have this problem.

 ... and that is the source of your problem :)


I don't think I understand your reply. I'm running current up-to-date
Vivid, not Utopic with an old kernel. Yeah, I should have stated my
versions. I don't believe you have a crystal ball anywhere around you, do
you? :-)

Anyway, I filed a new bug here:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1446906

Thanks!

Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] container stuck until lxcfs restart

2015-03-27 Thread Norberto Bensa
2015-03-25 2:53 GMT-03:00 Fajar A. Nugraha l...@fajar.net:

 On Tue, Mar 24, 2015 at 2:10 PM, Fajar A. Nugraha l...@fajar.net wrote:
  On Tue, Mar 24, 2015 at 5:52 AM, Norberto Bensa
  nbensa+lxcus...@gmail.com wrote:
  Hello,
 
  from time to time, I think once per day, my containers get stuck. I
 found
  that `service lxcfs restart` restores functionality.
 
 
 
  I had some problems with simple test of several cycles start-stop a
  vivid container. Similar to your experience, restarting lxcfs solves
  the problem. Serge suggested running lxcfs under gdb and try to find
  out what's wrong when the problem occur. I haven't had time to do so
  though, since debugging with gdb can be somewhat complicated.


 I believe I have a reproducer script:


I'm not really sure, but I think byobu inside the container also triggers
this bug.

Regards,
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] container stuck until lxcfs restart

2015-03-23 Thread Norberto Bensa
Hello,

from time to time, I think once per day, my containers get stuck. I found
that `service lxcfs restart` restores functionality.

Is this a known bug?

ii  liblxc1   1.1.0-0ubuntu1
  amd64Linux Containers userspace tools
(library)
ii  lxc   1.1.0-0ubuntu1
  amd64Linux Containers userspace tools
ii  lxc-templates 1.1.0-0ubuntu1
  amd64Linux Containers userspace tools
(templates)
ii  lxcfs 0.6-0ubuntu2
  amd64FUSE based filesystem for LXC
ii  python3-lxc   1.1.0-0ubuntu1
  amd64Linux Containers userspace tools
(Python 3.x bindings)


Linux venkman 3.19.0-9-generic #9-Ubuntu SMP Wed Mar 11 17:50:03 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux


No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu Vivid Vervet (development branch)
Release: 15.04
Codename: vivid


Thanks.

Regards,
Norberto
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ubuntu utopic (14.10) permission problems?

2015-03-11 Thread Norberto Bensa
Update.

# mailq
Mail queue is empty
# mailq
postqueue: warning: close: Permission denied


Same session. Less than a second between two consecutive mailq commands. So
I made this test:

# for i in {1..1000}; do mailq 2>&1 | grep Mail; done
Mail queue is empty
# for i in {1..1000}; do mailq 2>&1 | grep Mail; done
# for i in {1..1000}; do mailq 2>&1 | grep Mail; done
Mail queue is empty

Does this ring any bells? I'm using postfix as an example, but I think this
is related to my kerberos/ldap/pam problem. Postfix is just easier to set
up :-)

Thanks!




2015-03-11 0:42 GMT-03:00 Norberto Bensa nbensa+lxcus...@gmail.com:

 Hello,

 I upgraded my main box to ubuntu 14.10 and now my containers are failing
 with weird permission problems. A simple test is this:

 $ sudo lxc-create -t ubuntu -n testing -- -r trusty

 In the container, install postfix (sudo apt-get install postfix). After a
 basic postfix configuration, run mailq:

 $ mailq
 postqueue: warning: close: Permission denied

 $ sudo mailq
 postqueue: warning: close: Permission denied


 Other containers are also failing with PAM(?)-related issues. For
 example:

 $ ssh dana
 Connection closed by 10.11.101.3

 Now this one is more interesting for me because dana uses kerberos and
 ldap. When I attach to the container, auth.log says:

 Mar 11 00:20:15 dana sshd[1503]: Authorized to zoolook, krb5 principal
 zool...@bensa.ar (krb5_kuserok)
 Mar 11 00:20:15 dana sshd[1503]: fatal: Access denied for user zoolook by
 PAM account configuration [preauth]

 This container was working with ubuntu trusty on the host BUT it also
 failed when I tried utopic kernels on the host
 (linux-image-generic-lts-utopic).

 Does anyone have any idea what's going on?

 Thanks in advance,
 Norberto


Re: [lxc-users] more ubuntu utopic problems (network)

2015-03-11 Thread Norberto Bensa
Thanks Serge.

Since this one was easy to fix, and my other problem was caused by the
utopic kernel, I went ahead and upgraded to vivid. :-/

I never upgrade to betas on my main machine; I have strong reasons not to.

My problem now, completely unrelated to lxc, is Unity's network indicator.
It lists every f**g bridge!

I run 10 containers and 2 or 3 kvm guests all the time so the list of
bridges is large. Do you, or anyone else, know how to hide those bridges
from the network widget?

Thanks in advance!

Best regards,
Norberto
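[Editor's note: not part of the thread's answer, but a common way to hide such bridges from a NetworkManager-backed indicator is to declare them unmanaged in NetworkManager's configuration. The bridge names below are illustrative; substitute the actual bridge interfaces on the host.]

```ini
# /etc/NetworkManager/NetworkManager.conf
# Interfaces listed here are ignored by NetworkManager entirely,
# so they no longer appear in the indicator menu.
[keyfile]
unmanaged-devices=interface-name:lxcbr0;interface-name:virbr0
```

Restart NetworkManager afterwards (e.g. sudo service network-manager restart) for the change to take effect.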




2015-03-12 0:27 GMT-03:00 Serge Hallyn serge.hal...@ubuntu.com:

 Quoting Norberto Bensa (nbensa+lxcus...@gmail.com):
  Hello Tycho,
 
  2015-03-11 20:10 GMT-03:00 Tycho Andersen tycho.ander...@canonical.com
 :
 
   On Wed, Mar 11, 2015 at 07:57:50PM -0300, Norberto Bensa wrote:
Hello,
   
    new containers (created with lxc-create) have either no network or the IP
    address is not preserved between container reboots.
   
    Containers without network lack all lxc.network.* entries in the config
    file.
   
Containers with random ip address have lxc.network.hwaddr =
00:16:3e:xx:xx:xx
   
Is this a known bug or am I supposed to configure these entries
 myself?
  
   You can just switch the xx's to something, e.g.:
  
   00:16:38:01:10:01
  
  
  By hand?
 
  This was done automagically by lxc and/or the template in previous Ubuntu
  versions. So the question remains: is this a bug or a feature in
  Utopic/newer lxc versions?

 It was a regression.  It's fixed in vivid, as well as in
 ppa:ubuntu-lxc/daily
 for utopic.
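[Editor's note: for anyone on an affected release fixing a container by hand, the workaround amounts to replacing the xx wildcards with a fixed MAC in the container's config. The octets after the 00:16:3e prefix below are made-up example values.]

```
# /var/lib/lxc/<name>/config  (LXC 1.x key name)
lxc.network.hwaddr = 00:16:3e:01:10:01
```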

Re: [lxc-users] ubuntu utopic (14.10) permission problems?

2015-03-11 Thread Norberto Bensa
And of course it's a bug and is reported.

https://bugs.launchpad.net/ubuntu/utopic/+source/linux/+bug/1390223

2015-03-11 22:58 GMT-03:00 Norberto Bensa nbensa+lxcus...@gmail.com:

 This one is a kernel issue. Going back to trusty's kernel solves these
 issues with mailq and pam/kerberos/ldap.

 Good kernel:

 ii  linux-image-3.13.0-46-generic 3.13.0-46.77
 amd64Linux kernel image for version 3.13.0
 on 64 bit x86 SMP

 Bad:

 ii  linux-image-3.16.0-31-generic 3.16.0-31.43
 amd64Linux kernel image for version 3.16.0
 on 64 bit x86 SMP





 2015-03-11 22:14 GMT-03:00 Norberto Bensa nbensa+lxcus...@gmail.com:

 Update.

 # mailq
 Mail queue is empty
 # mailq
 postqueue: warning: close: Permission denied


 Same session. Less than a second between two consecutive mailq commands.
 So I made this test:

 # for i in {1..1000}; do mailq 2>&1 | grep Mail; done
 Mail queue is empty
 # for i in {1..1000}; do mailq 2>&1 | grep Mail; done
 # for i in {1..1000}; do mailq 2>&1 | grep Mail; done
 Mail queue is empty

 Does this ring any bells? I'm using postfix as an example, but I think this
 is related to my kerberos/ldap/pam problem. Postfix is just easier to set
 up :-)

 Thanks!




 2015-03-11 0:42 GMT-03:00 Norberto Bensa nbensa+lxcus...@gmail.com:

 Hello,

 I upgraded my main box to ubuntu 14.10 and now my containers are failing
 with weird permission problems. A simple test is this:

 $ sudo lxc-create -t ubuntu -n testing -- -r trusty

 In the container, install postfix (sudo apt-get install postfix). After
 a basic postfix configuration, run mailq:

 $ mailq
 postqueue: warning: close: Permission denied

 $ sudo mailq
 postqueue: warning: close: Permission denied


 Other containers are also failing with PAM(?)-related issues. For
 example:

 $ ssh dana
 Connection closed by 10.11.101.3

 Now this one is more interesting for me because dana uses kerberos and
 ldap. When I attach to the container, auth.log says:

 Mar 11 00:20:15 dana sshd[1503]: Authorized to zoolook, krb5 principal
 zool...@bensa.ar (krb5_kuserok)
 Mar 11 00:20:15 dana sshd[1503]: fatal: Access denied for user zoolook
 by PAM account configuration [preauth]

 This container was working with ubuntu trusty on the host BUT it also
 failed when I tried utopic kernels on the host
 (linux-image-generic-lts-utopic).

 Does anyone have any idea what's going on?

 Thanks in advance,
 Norberto





[lxc-users] ubuntu utopic (14.10) permission problems?

2015-03-10 Thread Norberto Bensa
Hello,

I upgraded my main box to ubuntu 14.10 and now my containers are failing
with weird permission problems. A simple test is this:

$ sudo lxc-create -t ubuntu -n testing -- -r trusty

In the container, install postfix (sudo apt-get install postfix). After a
basic postfix configuration, run mailq:

$ mailq
postqueue: warning: close: Permission denied

$ sudo mailq
postqueue: warning: close: Permission denied


Other containers are also failing with PAM(?)-related issues. For
example:

$ ssh dana
Connection closed by 10.11.101.3

Now this one is more interesting for me because dana uses kerberos and
ldap. When I attach to the container, auth.log says:

Mar 11 00:20:15 dana sshd[1503]: Authorized to zoolook, krb5 principal
zool...@bensa.ar (krb5_kuserok)
Mar 11 00:20:15 dana sshd[1503]: fatal: Access denied for user zoolook by
PAM account configuration [preauth]

This container was working with ubuntu trusty on the host BUT it also
failed when I tried utopic kernels on the host
(linux-image-generic-lts-utopic).

Does anyone have any idea what's going on?

Thanks in advance,
Norberto