Re: [lxc-users] LXC start command fails when run with valgrind

2016-10-28 Thread Serge E. Hallyn
Quoting Adithya K (linux.challen...@gmail.com):
> Hi,
> 
> I am trying to run LXC 1.0.8 on Ubuntu 14.04. When I run
> 
> valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes lxc-start -d -n test
> 
> I get the following error:
> 
> Warning: invalid file descriptor 1024 in syscall close()
> ==7897==    at 0x5195F60: __close_nocancel (syscall-template.S:81)
> ==7897==    by 0x4E526BC: lxc_check_inherited (in /usr/lib/x86_64-linux-gnu/liblxc.so.1.0.8)
> ==7897==    by 0x4E55840: lxc_monitord_spawn (in /usr/lib/x86_64-linux-gnu/liblxc.so.1.0.8)
> ==7897==    by 0x4E82659: ??? (in /usr/lib/x86_64-linux-gnu/liblxc.so.1.0.8)
> 
> I created the container with the busybox template.
> 
> Any solution to this?

You are asking for lxc to run in daemonized mode (-d).  When it does so,
it always enables '-C' (close-all-fds) to close inherited fds.  So
lxc-start sees an open fd of valgrind's and closes it.  valgrind doesn't
like that.

You could probably get around it by doing

valgrind @valgrind-args@ lxc-start -F -n test

which will run lxc-start in the foreground and without closing inherited
fds.
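
For example, reusing the valgrind options from your original command:

valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes lxc-start -F -n test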
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD init in Puppet issue

2016-10-28 Thread Benoit GEORGELIN - Association Web4all
Hi, 

It looks like lxd needs to be able to run "zpool list" before the zfs backend 
will be considered as an option: 

https://github.com/lxc/lxd/blob/6cbf82757e96213e73be9a7305803910b09ea5ed/lxd/main.go#L628

Maybe the "zpool list" command that LXD runs fails when invoked through Puppet. 

I would try to get the "zpool list" command running from Puppet, to be sure it 
works on its own through Puppet, or have Puppet run a shell script that 
captures the "zpool list" output. 

Maybe it's just related to the PATH environment variable used with Puppet, and 
your zfs binaries are not found. 
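
A quick way to test that through Puppet (just a sketch; adjust the path 
entries to wherever your zfs/zpool binaries actually live): 

puppet apply -e 'exec { "check zpool":
  command   => "zpool list",
  path      => ["/sbin", "/usr/sbin", "/bin", "/usr/bin"],
  logoutput => true,
}'

If that fails in the same way, setting an explicit path (or environment) on 
the exec resource that runs "lxd init" should help. 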


Regards, 



From: "Tardif, Christian"  
To: "lxc-users"  
Sent: Friday, October 28, 2016 15:35:54 
Subject: [lxc-users] LXD init in Puppet issue 



Hi, 

Maybe this will ring a bell for someone. 

I'm in the process of deploying some LXD servers. In our company, we try to 
puppetize everything we can. Same story for LXD servers. 

My issue is that when I try to run lxd init .. --storage-backend zfs, it 
fails, returning 

Notice: /Stage[main]/Nhs_lxd/Exec[lxd init]/returns: error: The requested 
backend 'zfs' isn't available on your system (missing tools). 

But when I run my command manually, it runs perfectly. And in order to prove 
that I have the zfs tools installed, I have a zfs pool up and running on that 
box: 

zpool list 
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT 
lxd 79.5G 272K 79.5G - 0% 0% 1.00x ONLINE - 

Any clue? 
-- 


Christian Tardif 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD init in Puppet issue

2016-10-28 Thread Tardif, Christian
Hi, 

Maybe this will ring a bell for someone. 

I'm in the process of deploying some LXD servers. In our company, we try
to puppetize everything we can. Same story for LXD servers. 

My issue is that when I try to run lxd init ..  --storage-backend
zfs, it fails, returning 

Notice: /Stage[main]/Nhs_lxd/Exec[lxd init]/returns: error: The
requested backend 'zfs' isn't available on your system (missing tools). 

But when I run my command manually, it runs perfectly. And in order to
prove that I have the zfs tools installed, I have a zfs pool up and
running on that box: 

zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
lxd   79.5G   272K  79.5G         -    0%    0%  1.00x  ONLINE        -

Any clue?

-- 
CHRISTIAN TARDIF
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD: network connectivity dies when doing lxc stop / lxc start

2016-10-28 Thread Tomasz Chmielewski
Here is a weird one, and most likely not LXD's fault, but some issue with 
bridged networking.


I'm using bridge networking for all containers. The problem is that if I 
stop and start one container, some other containers lose connectivity. 
They lose connectivity for 10-20 seconds, sometimes up to a minute.



For example:

# ping 8.8.8.8
(...)
64 bytes from 8.8.8.8: icmp_seq=16 ttl=48 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=17 ttl=48 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=18 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=19 ttl=48 time=15.0 ms
64 bytes from 8.8.8.8: icmp_seq=20 ttl=48 time=15.0 ms

...another container stopped/started...
...40 seconds of broken connectivity...

64 bytes from 8.8.8.8: icmp_seq=60 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=61 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=62 ttl=48 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=63 ttl=48 time=15.0 ms


Pinging the gateway dies in a similar way.

The networking is as follows:

containers - eth0, private addressing (192.168.0.x)
host - "ctbr0" - private address (192.168.0.1), plus NAT into the world


auto ctbr0
iface ctbr0 inet static
address 192.168.0.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0


The only workaround seems to be arpinging the gateway from the container 
all the time, for example:


# arping 192.168.0.1

This way, the container doesn't lose connectivity when other containers 
are stopped/started.
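
For example, to keep that running unattended (just a sketch, assuming 
iputils-arping, where -I names the container's interface):

# nohup arping -q -I eth0 192.168.0.1 >/dev/null 2>&1 &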


But of course I don't like this kind of fix.

Is anyone else seeing this too? Any better workaround than constant 
arping from all affected containers?
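
For what it's worth, the bridge forwarding table can also be inspected on the 
host (assuming bridge-utils is installed), to see whether the entries for the 
affected containers disappear when another container is started:

# brctl showmacs ctbr0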



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-destroy - unable to destroy a container with snapshots

2016-10-28 Thread Tomasz Chmielewski

The container has plenty of snapshots:

# lxc-snapshot -L -n td-backupslave | wc -l
53

It is stopped:

# lxc-ls -f|grep td-backupslave
td-backupslaveSTOPPED 0 -  - -


But I'm not able to remove it:

# lxc-destroy -n td-backupslave -s
lxc-destroy: lxccontainer.c: do_lxcapi_destroy: 2417 Container 
td-backupslave has snapshots;  not removing

Destroying td-backupslave failed

# lxc-destroy -s -n td-backupslave
lxc-destroy: lxccontainer.c: do_lxcapi_destroy: 2417 Container 
td-backupslave has snapshots;  not removing

Destroying td-backupslave failed


According to the help, the "-s" option should destroy a container even if it 
has snapshots:


# lxc-destroy -h
Usage: lxc-destroy --name=NAME [-f] [-P lxcpath]

lxc-destroy destroys a container with the identifier NAME

Options :
  -n, --name=NAME   NAME of the container
  -s, --snapshots   destroy including all snapshots
  -f, --force   wait for the container to shut down
  --rcfile=FILE Load configuration file FILE



Am I misreading the help, or is it a bug?

It's Ubuntu 16.04 with these lxc packages:

# dpkg -l|grep lxc
ii  liblxc1        2.0.5-0ubuntu1~ubuntu16.04.2  amd64  Linux Containers userspace tools (library)
ii  lxc            2.0.5-0ubuntu1~ubuntu16.04.2  all    Transitional package for lxc1
ii  lxc-common     2.0.5-0ubuntu1~ubuntu16.04.2  amd64  Linux Containers userspace tools (common tools)
ii  lxc-templates  2.0.5-0ubuntu1~ubuntu16.04.2  amd64  Linux Containers userspace tools (templates)
ii  lxc1           2.0.5-0ubuntu1~ubuntu16.04.2  amd64  Linux Containers userspace tools
ii  lxcfs          2.0.4-0ubuntu1~ubuntu16.04.1  amd64  FUSE based filesystem for LXC
ii  python3-lxc    2.0.5-0ubuntu1~ubuntu16.04.2  amd64  Linux Containers userspace tools (Python 3.x bindings)
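

In the meantime, a possible workaround (just a sketch, assuming the snapshot 
name is the first column of "lxc-snapshot -L" output and that "lxc-snapshot -d" 
accepts a snapshot name) would be to delete the snapshots one by one and then 
destroy the container:

for s in $(lxc-snapshot -n td-backupslave -L | awk '{print $1}'); do
    lxc-snapshot -n td-backupslave -d "$s"
done
lxc-destroy -n td-backupslave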



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users