[lxc-users] SIGRTMIN+3

2018-04-07 Thread Eric Wolf
One of my containers is shutting down seemingly at random. I'm trying
to figure out why, but so far all I can find in syslog is systemd[1]:
Received SIGRTMIN+3., which seems to be related to the LXC/LXD stop
command. I can't find anything on my host that might be sending that
signal, so I'm here looking for help finding the source. I'm not sure
what to look for in my logs, either in the container or on the host.
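For anyone else searching: SIGRTMIN+3 is the signal systemd's PID 1 interprets as a halt request, which is why a clean LXC/LXD stop shows up this way inside the container. A quick sanity check of the signal number from a bash shell (on Linux, SIGRTMIN is typically 34, so SIGRTMIN+3 is 37):

```shell
# Print the numeric value of SIGRTMIN+3 as bash resolves it.
# On a typical Linux host this prints 37 (SIGRTMIN = 34).
kill -l RTMIN+3
```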
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Lxc list - permission denied

2017-09-07 Thread Eric Wolf
19wolf@Nephele:~$ lxc list
>Permission denied, are you in the lxd group?
19wolf@Nephele:~$ sudo adduser 19wolf lxd
>The user `19wolf' is already a member of `lxd'.
19wolf@Nephele:~$ lxc list
>Permission denied, are you in the lxd group?

What do I do? 'sudo lxc list' works fine.
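One guess worth checking: group membership is only evaluated at login, so a shell session that predates the adduser won't see the lxd group even though adduser reports you as a member. You can verify what the current session actually has:

```shell
# Show the groups the *current* session carries; a group added with
# adduser will not appear here until you log out and back in
# (or start a subshell with `newgrp lxd`).
id -nG
```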

Re: [lxc-users] Kubernetes Storage Provisioning using LXD

2017-02-16 Thread Eric
That is what I've also been trying to do.

Kubernetes has a list of supported persistent volume types; the only
non-cloud-based ones I've tried are NFS, CephFS, GlusterFS, and
HostPath:

https://kubernetes.io/docs/user-guide/persistent-volumes/#types-of-persistent-volumes

With LXC + ZFS you can't:

- provide a raw block device (/dev/sda) or a loopback device
(/dev/loop0) to heketi, so GlusterFS is out of the picture
(https://github.com/heketi/heketi/issues/665)
- CephFS has the same problem as GlusterFS, so CephFS is out
- NFS requires kernel modules, so NFS is out
- HostPath doesn't work across multiple nodes

So your only option is to use KVM.

I use Proxmox. I knew LXC/LXD wasn't going to be able to fulfill what I
needed to do on a single server, so I looked for a hypervisor that had a
polished UI for creating both LXC containers and KVM VMs.

I'm still going to go with GlusterFS, which will also need heketi, and will
be running it in a Fedora KVM guest, then using it as a persistent volume
for Kubernetes.
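For reference, this is the shape of a non-cloud persistent volume discussed above: a minimal NFS PersistentVolume manifest (the server address and export path are made up). It is exactly the kind of volume that fails inside LXC because the nfs kernel modules can't be loaded there:

```shell
# Write a minimal NFS PersistentVolume manifest (hypothetical
# server/path). Apply it with `kubectl apply -f nfs-pv.yaml` on a
# cluster whose nodes can actually mount NFS.
cat <<'EOF' > nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /export/k8s
EOF
```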




On Wed, Feb 15, 2017 at 4:16 AM, Butch Landingin 
wrote:

> Hi all,
>
> I've been trying out Canonical Kubernetes via conjure-up and juju charms on
> a local LXD cluster. Following the tutorials, I've set up the cluster
> running on lxd with a zfs file system.
>
> Everything's been great and I've pretty much exhausted the tutorials
> (running microbot, etc).
>
> I'm now at the point where I want to try provisioning some persistent
> volumes or even trying
> dynamic storage allocation.
>
> By this time, I'm 99 percent sure the answer is no (and I've searched
> extensively),
> but is there any way to create Kubernetes persistent volumes on a
> multi-node setup (not using hostpath)
> on a local LXD cluster?
>
> If there isn't,  does Canonical have this in their roadmap?
>
> Best regards,
> Butch
>
>

Re: [lxc-users] Containers on linux-4.8-rc1 sometimes(?) requiring "cgmanager -m name=systemd" (bisected, but is it a bug?)

2016-09-14 Thread Eric W. Biederman
Adam Richter <adamricht...@gmail.com> writes:

> Hello, Eric.
>
> Thank you for your prompt response to my posting.
>
> If you think that the new lxc behavior is acceptable, I am OK with it
> too. I just wanted to let you know because I thought that there was
> perhaps a ~30% chance that you might see it as indicating a more
> consequential problem.

Thank you for that.  In practice this comes closer than I would like to
the kernel's no regressions rule.

> I emailed lxc-users instead of lxc-devel to see if if I could
> determine that this was not a bug without needing to escalate to
> lxc-devel.  Also in an effort to err on the side of trying to minimize
> annoyance, I bcc'ed you and Tejun instead of cc'ing you because I
> didn't want effectively to involuntarily subscribe you to what could
> become an ongoing thread.  However, based on your expressed
> preference, I will cc you if I respond further to this thread on
> lxc-users, unless you request otherwise.

Thank you.

> Perhaps, in the future, if I look into control groups and containers
> more, I might investigate your alternative idea, that might not
> require non-systemd host to do anything specific to systemd to run
> systemd guests, reducing the specialness of the real host environment,
> thereby perhaps slightly increasing the set of configurations that can
> be tested with nested containers instead of VM's and reducing barriers
> to other init systems doing whatever systemd does that currently needs
> that bit of host configuration.

How to handle these hierarchy mismatches is part of a very slow moving
and important conversation going on with how cgroup hierarchies will be
treated in the future.  The cgroup2 filesystem and much of the work is
based on the assumption that people will want exactly one hierarchy.
Given containers that run CentOS 7 and similar distributions today,
that will be an interesting discussion.

Eric

Re: [lxc-users] Containers on linux-4.8-rc1 sometimes(?) requiring "cgmanager -m name=systemd" (bisected, but is it a bug?)

2016-09-13 Thread Eric W. Biederman
"Serge E. Hallyn" <se...@hallyn.com> writes:

> Quoting Eric W. Biederman (ebied...@xmission.com):
>> Adam Richter <adamricht...@gmail.com> writes:
>> 
>> > On Linux 4.8-rc1 through 4.8-rc6 (latest rc), lxc fails to start
>> > Ubuntu 16.04 and CentOS 7 containers [1], unless I first run
>> > "cgmanager -m name=systemd &" on the host, which, unlike the
>> > containers, was not running systemd or cgmanager.
>> 
>> Yes, that appears correct.  Given the current flat namespace of
>> hierarchies you fundamentally must coordinate with the host if you want
>> to use a new hierarchy.  So running cgmanager on the host seems like
>> the minimum way to do that.
>> 
>> If we truly need something more (which does not appear to be the case
>> here) the names of hierarchies need to be moved into a namespace.
>> 
>> > Git bisect revealed that this behavior began with a commit entitled
>> > "cgroupns: Only allow creation of hierarchies in the initial cgroup
>> > namespace" [2], which appears to be an attempt to protect against a
>> > possible denial of service attack.  Reverting the commit also removes
>> > the need to run that cgmanager process.  [Eric and
>> > Tejun, I have bcc'ed you so you can be aware of this discussion
>> > thread, as you apparently respectively wrote and approved the commit.]
>> 
>> As far as I can tell you were getting lucky and not having problems
>> before.
>> 
>> > Running that cgmanager invocation is pretty simple, and seems to me to
>> > be well worth closing a denial of service vulnerability, much as I
>> > dislike adding something systemd-specific to a non-systemd environment
>> > and adding a new dependency (lxc requires cgmanager on the host to
>> > run, I guess, any container that runs systemd).  However, I am posting
>> > this message because I don't fully understand the problem, and, most
>> > importantly, I am wondering if I have stumbled on an unintended
>> > consequence of this commit that might indicate other
>> > potential breakage.
>> 
>> I am surprised that your case worked but I don't think it amounts to an
>> unintended consequence.
>> 
>> > If this new lxc behavior is completely acceptable, then I apologize
>> > for consuming people's time with it and hope that this message will
>> > allow others experiencing the same problem find an answer for it when
>> > they search the web.
>> 
>> I will let the lxc-developers judge.
>> 
>> I don't think you hit a case that was expected to work.  Furthermore
>
> fwiw indeed this was never expected to work.
>

As just creating the hierarchy before starting the container fixes this,
I agree this does appear to be just a documentation issue.

>> either your containers were overprivileged or they would not have been
>> able to create subdirectories in the cgroup hierarchy.  So I expect this
>> change transformed a subtle breakage (aka one you had not noticed yet)
>> into an explicit breakage.
>>
>> I am not subscribed to lxc-users so I don't know if anyone else has
>> replied to your post.  Cc's would have been better than Bcc's for
>> getting feedback in a situation like this.
>> 
>> Eric
>> 
>> 
>> > Adam Richter
>> >
>> >
>> > [1] Here is an example of failing to start one of these containers.
>> > $ sudo lxc-start --name ubuntu16.04_amd64 --foreground
>> > Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
>> > [!!] Failed to mount API filesystems, freezing.
>> > Freezing execution.
>> >
>> >
>> > [2] Here is the commit diff that triggers the new misbehavior.
>> > commit 726a4994b05ff5b6f83d64b5b43c3251217366ce
>> > Author: Eric W. Biederman <ebied...@xmission.com>
>> > Date:   Fri Jul 15 06:36:44 2016 -0500
>> >
>> > cgroupns: Only allow creation of hierarchies in the initial cgroup 
>> > namespace
>> >
>> > Unprivileged users can't use hierarchies if they create them, as they
>> > do not have privileges to the root directory.
>> >
>> > Which means the only thing a hierarchy created by an unprivileged user
>> > is good for is expanding the number of cgroup links in every css_set,
>> > which is a DOS attack.
>> >
>> > We could allow hierarchies to be created in namespaces in the initial
>> > user name

Re: [lxc-users] Wifi in container

2016-09-08 Thread Eric
On September 8, 2016 2:53:38 PM EDT, Claudio Corsi  wrote:
>I tried a number of scenario, including restarting the container after
>adding the device to it.
>

Sometimes the container needs to be fully stopped and started again for
the changes to take effect.
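Sketched as a small function (the container name "wap" and device path come from the original post; note that USB device nodes typically live under /dev/bus/usb/, so the path is worth double-checking too):

```shell
# Add the device around a full stop/start cycle so the change is
# picked up on a cold boot of the container rather than a reboot.
cold_restart_with_device() {
  name="$1"
  lxc stop "$name"
  # path taken from the original post; USB nodes are usually at
  # /dev/bus/usb/<bus>/<dev>, so adjust if needed
  lxc config device add "$name" usb-001-004 unix-char path=/dev/usb/001/004
  lxc start "$name"
}
# usage: cold_restart_with_device wap
```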

Re: [lxc-users] Wifi in container

2016-09-08 Thread Eric
On Thu, Sep 8, 2016, 01:05 Claudio Corsi  wrote:

> Hello,
>
> I am attempting to get a Linksys AE3000 USB wireless dongle to work within
> my container. I am running LXC 2.0 on Ubuntu 16.04 and have an Ubuntu 16.04
> guest up and running.
>
> In both the host OS and the container the device is detected and is listed
> when running lsusb
>
> Bus 001 Device 004: ID 13b1:003b Linksys AE3000 802.11abgn (3x3) Wireless
> Adapter [Ralink RT3573]
>
> On the host OS I am able to display the device details using both iw and
> iwconfig; however, in the container I get no output from either command.
>
> I have added the device to the container with the following command
>
> lxc config device add wap usb-001-004 unix-char path=/dev/usb/001/004
>
> without any change in the container.
>
> Any advice would be greatly appreciated.
>
> C


Was your container stopped when the device was added?

Re: [lxc-users] Recommended techniques for dynamically provisioning containers using lxd

2016-09-08 Thread Eric
On Thu, Sep 8, 2016, 01:05 Zach Lanich  wrote:
Umberto, I'm not 100% sure what SaltStack uses under the hood
library-wise, but it's written in Python and already does everything that
lib does. We're talking more about the creation of the LXD containers
themselves, including setting mounts, static IPs, etc. SaltStack & Chef
handle everything else from there once the provisioned container is
connected to the master. cloud-init would certainly be an option as the
second part of the equation if we weren't already using a configuration
management tool.

@Zach, what configuration management tool are you guys using?

Re: [lxc-users] Crucial LXD, Bind Mounts & Gluster Question

2016-08-14 Thread Eric
On August 14, 2016 9:55:36 AM EDT, Personal  wrote:
>I would have to at the very least chown the subdirectory to the same user
>the container is running as in order to have write access to it from
>within the container, but my thought was that the volume itself
>provides enough protection. My friend, who is an experienced systems
>administrator, seems very uncomfortable with the idea of bind
>mounting into the container: he thinks it kind of breaks the
>isolation that containers provide when adding write access to the
>mount. Thoughts?

Another way is setting extended attributes/POSIX ACLs (setfacl) on the
parent dataset that is being shared (xattr=sa, acltype=posixacl).

It's also tricky, because new files created by the container get assigned
the UID of the user from inside the container (setting the defaults for the
xattrs probably would resolve that, but I'll have to test it out).
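A sketch of that xattr/ACL approach, as untested as noted above (the dataset name, mountpoint, and the 100000 uid-map offset are all assumptions; adjust to your idmap):

```shell
# Enable POSIX ACLs on the shared ZFS dataset, then grant the
# container's mapped root uid write access, including a default ACL
# so newly created files inherit it.
share_with_container() {
  dataset="$1"; mountpoint="$2"; uid="$3"
  sudo zfs set xattr=sa acltype=posixacl "$dataset"
  sudo setfacl -R -m "u:${uid}:rwx" "$mountpoint"
  sudo setfacl -R -d -m "u:${uid}:rwx" "$mountpoint"   # default ACL
}
# usage: share_with_container tank/shared /tank/shared 100000
```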

Re: [lxc-users] How to set up multiple secure (SSL/TLS, Qualys SSL Labs A+) websites using LXD containers

2016-07-31 Thread Eric


On July 31, 2016 4:22:28 PM EDT, Simos Xenitellis  
wrote:
>Hi All,
>
>I have written a few articles on LXD containers and here is the latest,
>https://simos.info/blog/how-to-set-up-multiple-secure-ssltls-qualys-ssl-labs-a-websites-using-lxd-containers/
>
>It's about putting websites in different containers, and getting them
>accessed through HAProxy (also in a container), as a TLS Termination
>proxy.
>
>LetsEncrypt is used to provide certificates, however it runs outside
>of the containers. If you have an idea on how to run it inside a
>container, please tell me.
>
>In this specific article I omit the nginx configuration that
>delivers the IP address of the client to the web servers (as
>is, the logs show the HAProxy IP address).
>
>I will probably write a few more articles.
>
>I use the term "LXD containers"; I am quite happy with it, if you have
>another suggestion, please tell me.
>
>Simos

"LXD containers" helps distinguish between "old LXC" and "new LXC + LXD".

[lxc-users] Mounting Ecryptfs Within LXD/LXC Container

2016-07-31 Thread Eric
Is there a way to mount an ecryptfs directory within a LXD/LXC container?

All I've tried so far is:

Editing, and reloading, /etc/apparmor.d/lxc/lxc-default [1][2] to add:

    mount options=(rw, bind),
    mount fstype=(ecryptfs),

After reloading apparmor I still get this error message:

    Exiting. Unable to obtain passwd info

I'm wondering if anyone has already solved this.


[1]: https://gist.github.com/gionn/7585324
[2]:
https://www.reddit.com/r/LXC/comments/4d3tz2/how_to_use_ecryptfs_in_an_lxc/
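For completeness, the reload step, sketched (the profile path is the stock Ubuntu one; whether the two mount rules above are sufficient is exactly the open question):

```shell
# Re-parse the LXC apparmor profiles after editing
# /etc/apparmor.d/lxc/lxc-default (which is included by
# /etc/apparmor.d/lxc-containers on Ubuntu).
reload_lxc_profile() {
  sudo apparmor_parser -r /etc/apparmor.d/lxc-containers
}
# usage: reload_lxc_profile   (then restart the container)
```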





Re: [lxc-users] permissions question: netstat -anp does not show process for non owned processes

2016-05-27 Thread Eric W. Biederman
Serge Hallyn <serge.hal...@ubuntu.com> writes:

> So running netstat as the ubuntu user in the container and stracing it, the
> only EACCES I got was:
>
> 492   open("/proc/90/fd", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 EACCES (Permission denied)
> 492   open("/proc/95/fd", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 EACCES (Permission denied)
> 492   open("/proc/97/fd", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 EACCES (Permission denied)
> 492   open("/proc/462/fd", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 EACCES (Permission denied)
> 492   open("/proc/464/fd", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = -1 EACCES (Permission denied)
>
>
> those tasks are:
> daemon  90 1  0 17:12 ?00:00:00 /usr/sbin/atd -f
> syslog  95 1  0 17:12 ?00:00:00 /usr/sbin/rsyslogd -n
> message+97 1  0 17:12 ?00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
> root   462   452  0 17:13 ?00:00:00 su - ubuntu
> ubuntu 464   463  0 17:13 ?00:00:00 (sd-pam)
>
> interesting.
>
> It doesn't appear to be yama - setting ptrace_scope -t 0 doesn't help.
>
> /proc/90/fd is owned by nobody:nogroup in the container, root:root on
> the host.
>
> Looking at the code in fs/proc/base.c, it seems the code intends to
> use the cred of the task to which the procpid entry belongs.  So it
> really should be owned by daemon.
>
> (proc_tgid_lookup should be called, iiuc, to fill in the details about fd
> under /proc/pid, it gets the task to which /proc/pid belongs, passes that
> to proc_pident_instantiate, which passes it to proc_pid_make_inode, which
> gets the task cred uid/gid and assigns them to the inode)
>
> I'm sure there's a good reason for this, but I'm failing to remember what
> it is.

This is the dumpable restriction.  When a process changes its creds
in the right way it stops being dumpable.  Currently dumpable is a very
simple global thing, not a user-namespace-isolated thing.

We have talked about sorting this out but it has never been on the top
of anyone's list to do.

To make this work I think we need dumpable to change to an indication of
which user namespace root we can allow to dump a file.

Eric


[lxc-users] LXC Kernel options requirements

2015-07-28 Thread Keller, Eric
Hi everyone,

I was wondering if someone could help me define all the required
kernel configuration options needed to use LXC seamlessly.

I used lxc-checkconfig to test the basic configuration, but I am still
not able to get any IP address assigned to the container.

---
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
---
I am sure the container is properly set up, since with an official Ubuntu
kernel everything works fine.

Do you have any idea which options should be enabled in order to get a
DHCP IP from the lxcbr0 bridge?
It would make sense to extend the lxc-checkconfig misc section with these
kernel configuration options.
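As a starting point for such an extension, here is a best-guess check of network-related options that bridged networking with DHCP tends to need but lxc-checkconfig does not report (the option list is an assumption; compare against your working Ubuntu config):

```shell
# Check a handful of bridge/NAT-related kernel options against the
# running kernel's config (the option list is a best guess, not
# exhaustive).
cfg="/boot/config-$(uname -r)"
for opt in CONFIG_VETH CONFIG_BRIDGE CONFIG_NF_NAT CONFIG_IP_NF_TARGET_MASQUERADE; do
  if [ -r "$cfg" ]; then
    grep -E "^${opt}=[ym]" "$cfg" || echo "${opt}: not enabled"
  else
    echo "${opt}: no ${cfg} to check"
  fi
done
```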

I attach the kernel config

Best Regards

-- 
Eric Keller

mailto: eric.keller@roche.com
www.roche.ch/rotkreuz



Re: [lxc-users] Ubuntu Trusty Tahr 14.04 LTS

2014-12-14 Thread Eric Keller
It could simply be that you are behind a firewall... that's why the Ask
Ubuntu answer includes setting up a proxy environment variable.

The error message means you cannot get the appropriate key from the
server; a workaround is to download the key using wget or curl and then
add it with apt-key add the-downloaded-key.
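The workaround sketched as a function (the keyserver URL is one possibility; use whatever key ID the failing template reports):

```shell
# Fetch a GPG key over HTTP and hand it to apt-key, bypassing the
# failing keyserver lookup from lxc-create's download template.
fetch_and_add_key() {
  keyid="$1"
  wget -qO /tmp/lxc-key.asc \
    "https://keyserver.ubuntu.com/pks/lookup?op=get&search=${keyid}"
  sudo apt-key add /tmp/lxc-key.asc
}
# usage: fetch_and_add_key 0x<keyid>
```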

Hope this helps

Regards
Eric
On Dec 13, 2014 7:21 PM, Thouraya TH thouray...@gmail.com wrote:

 Hello, please. I have already posted this question but I haven't had any
 answer. I found this solution on the web:
 http://askubuntu.com/questions/544597/lxc-create-hangs-and-finally-fails
 but I didn't understand the solution.
 Can you explain the solution at that URL?


 *Problem*
 root@localhost:/home# sudo lxc-create -t ubuntu -n u1 -- -r trusty -a
 amd64
 Checking cache download in /var/cache/lxc/trusty/rootfs-amd64 ...
 Installing packages in template: ssh,vim,language-pack-en
 Downloading ubuntu trusty minimal ...
 I: Retrieving Release

 *E: Failed getting release file
 http://archive.ubuntu.com/ubuntu/dists/trusty/Release
 http://archive.ubuntu.com/ubuntu/dists/trusty/Release*
 lxc_container: container creation template for u1 failed
 lxc_container: Error creating container u1

 root@localhost:~# sudo lxc-create -t download -n ubuntu -- -d ubuntu -r
 trusty -a amd64
 lxc-create: Error: ubuntu creation was not completed
 Setting up the GPG keyring
 ERROR: Unable to fetch GPG key from keyserver.

 Thanks a lot.
 Bests.





Re: [lxc-users] [LXC] locale: Cannot set LC_CTYPE to default locale: No such file or directory

2014-11-08 Thread Eric Keller
Hi Neil,

thanks for the answer; there is no such package (language-pack-en-base) in
the Debian repositories :(

Regards,

-- 
Eric Keller


On Sat, Nov 8, 2014 at 9:01 AM, Neil Greenwood neil.greenw...@gmail.com
wrote:

 On 8 Nov 2014 07:54, Eric Keller keller.e...@gmail.com wrote:
 
  Hi everyone,
 
  I am currently using LXC (debian wheezy) container on my Ubuntu 12.04 64
 bit distribution.
 
  setting up the container, goes fine but the locale are not working
 accordingly.
 
  $ locale
  locale: Cannot set LC_CTYPE to default locale: No such file or directory
  locale: Cannot set LC_MESSAGES to default locale: No such file or
 directory
  locale: Cannot set LC_ALL to default locale: No such file or directory
  LANG=en_US.UTF-8
  LANGUAGE=en_US:
  LC_CTYPE=en_US.UTF-8
  LC_NUMERIC=en_US.UTF-8
  LC_TIME=en_US.UTF-8
  LC_COLLATE=en_US.UTF-8
  LC_MONETARY=en_US.UTF-8
  LC_MESSAGES=en_US.UTF-8
  LC_PAPER=en_US.UTF-8
  LC_NAME=en_US.UTF-8
  LC_ADDRESS=en_US.UTF-8
  LC_TELEPHONE=en_US.UTF-8
  LC_MEASUREMENT=en_US.UTF-8
  LC_IDENTIFICATION=en_US.UTF-8
  LC_ALL=
 
  I did apply the usual tricks (https://wiki.debian.org/Locale) to setup
 the locale in vain. The locale command continue to throw me the same errors!
 
  the update-locale also behave in a strange way:
  sudo /usr/sbin/update-locale
  perl: warning: Setting locale failed.
  perl: warning: Please check that your locale settings:
  LANGUAGE = en_US:,
  LC_ALL = (unset),
  LC_CTYPE = en_US.UTF-8,
  LANG = en_US.UTF-8
  are supported and installed on your system.
  perl: warning: Falling back to the standard locale (C).
  *** update-locale: Error: invalid locale settings:  LANG=en_US.UTF-8
 
  as does the dpkg-reconfigure locales:
 
  sudo dpkg-reconfigure -f noninteractive locales
  perl: warning: Setting locale failed.
  perl: warning: Please check that your locale settings:
  LANGUAGE = en_US:,
  LC_ALL = (unset),
  LC_CTYPE = en_US.UTF-8,
  LANG = en_US.UTF-8
  are supported and installed on your system.
  perl: warning: Falling back to the standard locale (C).
  locale: Cannot set LC_CTYPE to default locale: No such file or directory
  locale: Cannot set LC_MESSAGES to default locale: No such file or
 directory
  locale: Cannot set LC_ALL to default locale: No such file or directory
  Generating locales (this might take a while)...
en_US.UTF-8...cannot change mode of new locale archive: No such file
 or directory
   done
  Generation complete.
  perl: warning: Setting locale failed.
  perl: warning: Please check that your locale settings:
  LANGUAGE = en_US:,
  LC_ALL = (unset),
  LC_CTYPE = en_US.UTF-8,
  LANG = C
  are supported and installed on your system.
  perl: warning: Falling back to the standard locale (C).
 
  has someone a hint where I could investigate
 
  N.B.: /etc/default/locale /etc/locale.gen and /etc/profile are set
 according to the debian wiki page
 
  here are the deailed setup steps executed in the container as root:
 
  apt-get purge locales-all
  dpkg-reconfigure -f noninteractive locales
  echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
  echo "en_US.UTF-8 UTF-8" >> /etc/default/locale
  /usr/sbin/locale-gen
  echo ': ${LANG:=en_US.UTF-8}; export LANG' > /etc/profile.d/language

 Hi Eric,

 I'm not a Debian user, nor an expert on locales. But to me it looks like
 you haven't installed an English locale, specifically the US one.

 Regards,

 Neil



Re: [lxc-users] [LXC] locale: Cannot set LC_CTYPE to default locale: No such file or directory

2014-11-08 Thread Eric Keller
Hi all,

digging a bit more into locale-gen, update-locale, ... I found that
apparmor causes some problems. Here is another post, from Launchpad:

https://bugs.launchpad.net/ubuntu/+source/langpack-locales/+bug/931717

The last link does give a workaround, which is already implemented in
the apparmor lxc-default profile. For now I am disabling apparmor when
configuring locales in the container, and re-enabling apparmor afterwards.

-- 
Eric Keller

On Sat, Nov 8, 2014 at 8:53 AM, Eric Keller keller.e...@gmail.com wrote:

 Hi everyone,

 I am currently using LXC (debian wheezy) container on my Ubuntu 12.04 64
 bit distribution.

 setting up the container, goes fine but the locale are not working
 accordingly.

 $ locale
 locale: Cannot set LC_CTYPE to default locale: No such file or directory
 locale: Cannot set LC_MESSAGES to default locale: No such file or directory
 locale: Cannot set LC_ALL to default locale: No such file or directory
 LANG=en_US.UTF-8
 LANGUAGE=en_US:
 LC_CTYPE=en_US.UTF-8
 LC_NUMERIC=en_US.UTF-8
 LC_TIME=en_US.UTF-8
 LC_COLLATE=en_US.UTF-8
 LC_MONETARY=en_US.UTF-8
 LC_MESSAGES=en_US.UTF-8
 LC_PAPER=en_US.UTF-8
 LC_NAME=en_US.UTF-8
 LC_ADDRESS=en_US.UTF-8
 LC_TELEPHONE=en_US.UTF-8
 LC_MEASUREMENT=en_US.UTF-8
 LC_IDENTIFICATION=en_US.UTF-8
 LC_ALL=

 I did apply the usual tricks (https://wiki.debian.org/Locale) to setup
 the locale in vain. The locale command continue to throw me the same errors!

 the update-locale also behave in a strange way:
 sudo /usr/sbin/update-locale
 perl: warning: Setting locale failed.
 perl: warning: Please check that your locale settings:
 LANGUAGE = en_US:,
 LC_ALL = (unset),
 LC_CTYPE = en_US.UTF-8,
 LANG = en_US.UTF-8
 are supported and installed on your system.
 perl: warning: Falling back to the standard locale (C).
 *** update-locale: Error: invalid locale settings:  LANG=en_US.UTF-8

 as does the dpkg-reconfigure locales:

 sudo dpkg-reconfigure -f noninteractive locales
 perl: warning: Setting locale failed.
 perl: warning: Please check that your locale settings:
 LANGUAGE = en_US:,
 LC_ALL = (unset),
 LC_CTYPE = en_US.UTF-8,
 LANG = en_US.UTF-8
 are supported and installed on your system.
 perl: warning: Falling back to the standard locale (C).
 locale: Cannot set LC_CTYPE to default locale: No such file or directory
 locale: Cannot set LC_MESSAGES to default locale: No such file or directory
 locale: Cannot set LC_ALL to default locale: No such file or directory
 Generating locales (this might take a while)...
   en_US.UTF-8...cannot change mode of new locale archive: No such file or
 directory
  done
 Generation complete.
 perl: warning: Setting locale failed.
 perl: warning: Please check that your locale settings:
 LANGUAGE = en_US:,
 LC_ALL = (unset),
 LC_CTYPE = en_US.UTF-8,
 LANG = C
 are supported and installed on your system.
 perl: warning: Falling back to the standard locale (C).

 has someone a hint where I could investigate

 N.B.: /etc/default/locale /etc/locale.gen and /etc/profile are set
 according to the debian wiki page

 here are the deailed setup steps executed in the container as root:

 apt-get purge locales-all
 dpkg-reconfigure -f noninteractive locales
 echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
 echo "en_US.UTF-8 UTF-8" >> /etc/default/locale
 /usr/sbin/locale-gen
 echo ': ${LANG:=en_US.UTF-8}; export LANG' > /etc/profile.d/language

 Regards
 --
 Eric Keller



Re: [lxc-users] How to share a dual nvidia cards between two LXC

2014-09-04 Thread Eric Espino
Guillaume Thouvenin guillaume.thouvenin@... writes:

 
 Hello,
 
   I have a card with two nvidia GPUs. Currently I'm using it in one 
 LXC. I compiled the nvidia drivers from their official web site in 
 the container. I created /dev/nvidia0, /dev/nvidia1 and /dev/nvidiactl 
 devices into the container. From the container I can start an X server 
 on :0. Then I'm using TurboVNC and virtualGL to use the 3D graphics 
 capabilities of the card.
 
   As I have a two GPUs I'd like to dedicate one GPU to a container and 
 the other one to other container. My approach is to compile the nvidia 
 drivers in both containers, create /dev/nvidia0, /dev/nvidiactl into 
 one container and /dev/nvidia1, /dev/nvidiactl into the other 
 container. Then I should be able to start an X server in both 
 containers. The main problem I have is that both containers try to use 
 display :0 even if I start one with xinit -display :2
 
 So I'd like to know if this approach seems doable and if people that 
 already achieve this can share the configuration about cgroups, tty and 
 nvidia device.
 
 ...

Hello Guillaume:

Quick questions:
1. The original post was from 2013; did you solve your problem of 
assigning one GPU to each container?
2. Are the two GPUs interconnected using an SLI cable?
3. Without the solution in place, what happens when you boot up a 
second container that requires a GPU (assuming that the first container 
also requires access to a GPU)?

Regards,





