Re: [lxc-users] LXC copy snapshots only to remote?

2018-03-07 Thread Marat Khalili
> If I'm right, how do I copy SC by itself (and not the whole container) to H2 
> on Wednesday?

You may be right, but the developers just didn't create an lxc- command for this. 
Did you try simply copying the SC files over C2 and running the result? The LXC 
1.0 command layer is very thin; you can observe and emulate the results of most 
commands by hand.
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container isolation with iptables?

2018-03-05 Thread Marat Khalili
Thank you for the explanation, I'll give it a try. proxyarp seems to be 
the magic ingredient needed.
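For the archive, the kind of setup I'm going to try looks roughly like this on 
the host (untested; the address and interface names are placeholders):

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.eth0.proxy_arp=1
# publish the container address on the LAN side and route it to its veth
ip route add 192.0.2.10/32 dev vethC1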


--

With Best Regards,
Marat Khalili


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container isolation with iptables?

2018-03-04 Thread Marat Khalili

On 04/03/18 02:26, Steven Spencer wrote:
Honestly, unless I'm spinning up a container on my local desktop, I 
always use the routed method. Because our organization always thinks 
of a container as a separate machine, it makes the build pretty 
similar whether the machine is on the LAN or WAN side of the network. 
It does, of course, require that each container run its own firewall, 
but that's what we would do with any machine on our network.


Can you please elaborate on your setup? It always seemed like an 
administrative hassle to me. Outside routers need to know how to find 
your container. I can see three ways, each with its drawbacks:


1. Broadcast container MACs outside, but L3-route packets inside the 
server instead of L2-bridging. Seems clean but I don't know how to do it 
in [bare] Linux.


2. Create a completely virtual LAN (not in the 802.1q sense) with a separate IP 
address space inside the server and teach outside routers to route the 
corresponding addresses via your server. OKish as long as you have 
access to the outside router configuration, but some things like 
broadcasts won't work. Also, I'm not sure it solves the OP's inter-container 
isolation problem.


3. Create a separate routing table rule for each container or group of 
containers. Hard to administer and dangerous IMO.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Race condition in IPv6 network configuration

2018-01-08 Thread Marat Khalili

Hello,

On 06/01/18 04:25, MegaBrutal wrote:


It's very sad that I had to do this, but I wrote a script to check the
IPv6 default route of a container. In case it finds a problem (there
is no default route or it is configured through RA when the interface
configuration is supposed to be static), it resets and reconfigures
the network interface.


Did you by any chance find the right way to run this script just before 
any network-dependent services?
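(What I have in mind, but have not tried, is a oneshot unit inside the 
container ordered before network-online.target, roughly:

[Unit]
Before=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/check-default-route.sh

[Install]
WantedBy=network-online.target

where check-default-route.sh is a made-up name for your script; services 
ordered After=network-online.target would then wait for it.)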


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-21 Thread Marat Khalili

On 21/11/17 15:07, Saint Michael wrote:

Thanks for the solution. It works indeed.
Just out of curiosity, how did you find this out? I googled it far and 
wide and there was nothing available.


The autodev part I needed in order to make qemu networking work in a 
container via the /dev/net/tun device, which is missing by default. I did not 
invent it; I found it somewhere through Google, probably here: 
https://serverfault.com/questions/429461/no-tun-device-in-lxc-guest-for-openvpn 
. It just executes the specified command(s) during /dev population, nothing 
magical; you can execute the same command in the container later if you can 
afford to defer your mounts or whatever uses the created device.
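For reference, the relevant lines in my config look roughly like this (quoted 
from memory, so double-check the numbers; 10:200 is the tun device):

lxc.hook.autodev = sh -c 'mkdir -p ${LXC_ROOTFS_MOUNT}/dev/net && mknod ${LXC_ROOTFS_MOUNT}/dev/net/tun c 10 200'
lxc.cgroup.devices.allow = c 10:200 rwm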


/dev/fuse is more interesting: I was trying to make snapd work in an LXC 
container, still not very successfully. You can read more about 
this effort in the discussion at the bottom of 
https://bugs.launchpad.net/snappy/+bug/1628289


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-19 Thread Marat Khalili

On 19/11/17 22:45, Ron Kelley wrote:

In all seriousness, I just ran some tests on my servers to see if SSH is still 
the bottleneck on rsync.  These servers have dual 10G NICs (linux bond), 3.6GHz 
CPU, and 32G RAM.  I found some interesting data points:

* Running the command "pv /dev/zero | ssh $REMOTE_SERVER 'cat > /dev/null’” I 
was able to get about 235MB/sec between two servers with ssh pegged at 100% CPU usage. 
[...]
In the end, rsync over NFS (using 10G networking) is much faster than rsync 
using SSH keys in my environment.  Maybe your environment is different or you 
use different ciphers?


Very good data points. I agree that you can saturate ssh if you have a 10G 
network connection, either SSDs or some expensive HDD arrays on both 
sides, and some sequential data to transfer. If any of those items is 
missing, ssh does not slow things down.




As for not trusting the LAN with unencrypted traffic, I would argue either the 
security policies are not well enforced or the server uses insecure NFS mount 
options.  I have no reason not to trust my LAN.
I was just afraid that someone reading your post would copy-paste your 
configuration for use over a less secure LAN or even a WAN. (I admit this is 
not a big problem for the original poster, since FTP is not much better in 
this regard.)



--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-19 Thread Marat Khalili
> My experience has shown rsync over ssh is by far the slowest because of the 
> ssh cipher. 

What century is this experience from? Any modern hardware can encrypt at I/O 
speed several times over. Even a LAN, on the other hand, cannot be trusted with 
unencrypted data.
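If in doubt, it's easy to check on your own hardware, e.g. (rough sketch):

openssl speed -evp aes-128-ctr                    # raw cipher throughput per core
pv /dev/zero | ssh localhost 'cat > /dev/null'    # end-to-end ssh throughput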
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-18 Thread Marat Khalili

On 18/11/17 17:10, Saint Michael wrote:

Yes, of course. It works but only if autodev=0
That is the issue.


Even as:


lxc.hook.autodev = sh -c 'mknod ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229'

?


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-18 Thread Marat Khalili

On 16/11/17 18:50, Saint Michael wrote:

The issue is with fuse, that is why I keep
lxc.autodev=0
if I do not, if I set it to 1, then fuse does not mount inside a 
container. I need fuse, for I mount an FTP server inside the container.

So I am caught between a rock and a hard place.
I already asked about this contradiction on the LXC developers list.

BTW, did you try:

mknod /dev/fuse c 10 229

?

--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-18 Thread Marat Khalili

On 18/11/17 14:46, Saint Michael wrote:
I need to do an rsync of hundreds of files every morning. The least 
complex way to achieve that is to do an rsync with some parameters 
that narrow down what files I need.

Is there a better way?

On Fri, Nov 17, 2017 at 11:43 PM, Andrey Repin <anrdae...@yandex.ru> wrote:


Was there the need for it? Really?
I feel like you've dug the grave for yourself with this config.



I understand the need, but not the solution. I also have some remote 
mounts (SMB, WebDAV...) that I'd like to rsync but don't want to mount 
on the host. However, in my case I had to create a "true" KVM-based VM for it. 
I'd like to learn a clean way to do it in an LXC container without problems 
like the ones the OP encountered.



--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-16 Thread Marat Khalili

On 16/11/17 14:58, Saint Michael wrote:

lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0
lxc.mount.entry = /cdr cdr none bind 0 0
lxc.mount.auto = cgroup:mixed
lxc.tty = 10
lxc.pts = 1024
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
lxc.cgroup.devices.allow = b 7:* rwm    # loop*
lxc.cgroup.devices.allow = c 10:229 rwm #fuse
lxc.autodev = 0
lxc.aa_profile = unconfined
lxc.cap.drop=
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth6
lxc.network.name = eth0
lxc.network.ipv4 = 0.0.0.0/27
lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = eth3
lxc.network.name = eth1
lxc.network.macvlan.mode = bridge
lxc.network.ipv4 = 0.0.0.0/24

lxc.start.auto = 1
lxc.start.delay = 5
lxc.start.order = 0
lxc.rootfs = /data/iplinkcdr/rootfs
lxc.rootfs.backend = dir
lxc.utsname = iplinkcdr


This does not look like a config created by lxc-create. Does the same thing 
happen if you use `lxc-create -t download`?


Looking at your config, I most notably don't see `lxc.devttydir = lxc`. 
According to the man page it should not directly cause the effect you 
described, but I'd still try adding it and see. `lxc.console` is also 
worth trying, although it is not set on my system either. It may be the 
easiest fix.


I don't run with `lxc.aa_profile = unconfined` and `lxc.cap.drop=`, so 
on your system the container can do more things than it can here.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-16 Thread Marat Khalili
I'm using LXC on 16.04 and observe nothing like what you describe. How 
are you creating the containers? Please post the container config file.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Race condition in IPv6 network configuration

2017-11-12 Thread Marat Khalili

On 07/11/17 14:55, Marat Khalili wrote:

On 07/11/17 13:45, MegaBrutal wrote:


First of all, I also suggest you to comment out the line
"lxc.net.0.flags = up" in your LXC container configuration
(/var/lib/lxc/containername/config).
Will definitely try it, although since it happens so rarely in my case 
(approximately once per month in different containers) it will take 
some time to confirm the solution. 

Unfortunately, it didn't help: two containers failed on the next boot :(


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Race condition in IPv6 network configuration

2017-11-07 Thread Marat Khalili

On 07/11/17 13:45, MegaBrutal wrote:


First of all, I also suggest you to comment out the line
"lxc.net.0.flags = up" in your LXC container configuration
(/var/lib/lxc/containername/config).
Will definitely try it, although since it happens so rarely in my case 
(approximately once per month in different containers) it will take some 
time to confirm the solution.



I also tried ifdown, but that usually doesn't work, because as far as
the system is concerned, the interface is not configured (as it failed
configuration when ifupdown tried to bring it up, so it is in a
half-configured state).

Looks tough.


3) Write systemd service for this if you have enough time at hand, then
share it :)

Yes, but it feels like a workaround. I'd prefer to know the cause and
find a better solution.
Me too. Just a wild idea: you could try playing with ip6tables to find 
out what process sends the RA requests, and also (another workaround) 
filter out the corresponding packets.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Race condition in IPv6 network configuration

2017-11-06 Thread Marat Khalili

If I reboot the
container multiple times, once I'll get lucky and the interface
configuration happens before an RA is received, and I get a permanent
default route and everything's fine. But it's annoying because I have
to reboot the container multiple times before it happens.
I don't know the cause of your problem, but I also encountered a race 
in container network configuration: 
https://lists.linuxcontainers.org/pipermail/lxc-users/2017-June/013456.html 
. Since no one knows why it happens, I ended up checking the container 
network configuration from a cron job, restarting resolvconf if it's 
wrong and then emailing the administrator; emailing is necessary since in 
my case some services may have already failed to start by the time the 
configuration is auto-corrected. An even better solution would be to put 
the check in a systemd service with correct dependencies, but I haven't 
implemented that yet.
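A rough sketch of the kind of cron job I mean (not my exact script; adjust the 
check and the notification address to your setup):

#!/bin/sh
for c in $(lxc-ls --running); do
    if ! lxc-attach -n "$c" -- grep -q '^nameserver' /etc/resolv.conf; then
        lxc-attach -n "$c" -- /etc/init.d/resolvconf reload
        echo "resolv.conf was empty in $c, reloaded resolvconf" \
            | mail -s "container DNS fixed on $(hostname)" root
    fi
done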


Until someone finds a proper solution to your problem, I suggest you:

1) Restart not the whole container but something smaller, like 
networking or the specific interface.


2) Automate the check and restart as I did.

3) Write a systemd service for this if you have enough time on your 
hands, then share it :)


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] How to lxc-copy running container?

2017-10-11 Thread Marat Khalili

Dear all,
good time of the day,

I'm testing package upgrades of an LXC container by making a copy of it 
and running the upgrade in the copy. However, for some reason lxc-copy (at 
least in 2.0.8) refuses to copy a running container. Is there some 
workaround for this other than replacing lxc-copy with manual operations? 
Does it make sense to ask the developers for some --force switch?


(I understand that in some cases copying a running container can damage 
the copy (but never the original), but when the only mutable data in the 
root filesystem is logs it should be perfectly OK; stopping services and 
disconnecting clients is unjustified.)
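The manual operations I have in mind look roughly like this for a 
directory-backed container (a sketch; names are placeholders):

rsync -a /var/lib/lxc/orig/ /var/lib/lxc/orig-test/
# then adjust lxc.rootfs, lxc.utsname and lxc.network.hwaddr
# in /var/lib/lxc/orig-test/config before starting the copy
lxc-start -n orig-test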


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unprivileged LXC container can't fully access NFS share mounted on host

2017-10-09 Thread Marat Khalili
You are right that there are too many moving parts. Either you are 
confusing the meaning of UIDs inside and outside the container, or NFS 
mangles them. Make sure NFS on both sides does not mess with UIDs, then 
set the correct ownership *as seen from inside the container*. Works for me.
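For example, with the common default mapping of container UIDs 0-65535 to host 
UIDs 100000-165535 (an assumption -- check /etc/subuid for your actual range), 
container UID 1000 is host UID 101000, so on the host:

chown -R 101000:101000 /path/to/exported/dir   # appears as 1000:1000 inside the container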


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Limits kind of not working

2017-09-12 Thread Marat Khalili
On September 12, 2017 7:57:26 PM GMT+03:00, Pavol Cupka <pavol.cu...@gmail.com> 
wrote:
>ok it works better with if=/dev/urandom probably because of the
>compress=lzo being enabled

It explains a lot:
> limit [options] <size>|none <qgroupid> <path>
[...]
> Options
>
> -c
>
>limit amount of data after compression. This is the default, it is 
> currently not possible to turn off this option.
( https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-qgroup#SUBCOMMAND )

Obviously /dev/zero compresses to nearly zero (not quite, due to btrfs 
on-the-fly compression limitations, but still by a lot). Now I don't even 
understand why it limits you in the 2GB case.
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Limits kind of not working

2017-09-12 Thread Marat Khalili

so you would recommend ZFS instead?


I personally don't use ZFS (it's alien to the Linux kernel and I don't want 
to deal with the support problems arising from this situation). Many people 
claim it's magical and revolutionary, so it's up to you to test and decide.


I use LXC on BTRFS with manually-enabled quotas, but I do have problems 
with it. If you want to keep it simple, I'd recommend either forgetting 
about quotas or using partitions.



well it looks like it gets set correctly also for btrfs

btrfs qgroup show -reF /var/lib/lxd/containers/c1/
qgroupid         rfer         excl     max_rfer     max_excl
--------         ----         ----     --------     --------
0/528         1.02GiB    156.54MiB


Not sure why it doesn't work then; try calling `btrfs quota enable` for 
the filesystem. Are you sure you can really exceed the quota? You have 
it set on the amount of exclusive data, so extents shared with other 
subvolumes do not count towards it.
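Something like this, with the path taken from your example (use -e to limit 
exclusive rather than referenced data):

btrfs quota enable /var/lib/lxd
btrfs qgroup limit -e 20G /var/lib/lxd/containers/c1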


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Marking running containers as emphermial

2017-09-12 Thread Marat Khalili
I have tried to use the lxc.hook.post-stop.  The script checked to see 
if a file was on the root file system and then called lxc-destroy, but 
the process just hung.  Is there a way for a container config to be 
updated while it is running, or is there another way to go about this?
lxc-ls with various arguments can show you container states or list only 
containers in certain states. Alternatively, lxc-wait can wait for a 
container to stop, but I'd add a couple of seconds' delay after it returns, 
because otherwise I have had problems with it in the past. I don't know 
why lxc-destroy hangs for you, but I'd avoid calling it from other hooks 
to avoid deadlocks, and use cron/at jobs instead.
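For example, something along these lines from an at job or cron script (the 
container name is made up):

lxc-wait -n ephemeral1 -s STOPPED
sleep 2   # lxc-wait sometimes returned a bit early for me
lxc-destroy -n ephemeral1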


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Limits kind of not working

2017-09-12 Thread Marat Khalili
What doesn't work for me is the disk "quota". I am using btrfs and I have 
set 20GB for the container root, but was still able to allocate more 
than 90GB writing with dd to a file inside the container. What am I 
doing wrong?
AFAIU, at least LXC 2.0.8 with the BTRFS back-end does not propagate quota 
settings from the LXC config to the actual filesystem. There's a reason for 
this: BTRFS quotas are of very beta quality. You can enable them manually 
(using btrfs commands) and they will work to some extent, but you are in 
for some bad surprises.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc1 and ubuntu 17.10 daily: artful guest doesn't get ip address?

2017-09-02 Thread Marat Khalili
> Anyone know what's up there?  I haven't looked under the hood yet.

Compare /var/lib/lxc/name/config and 
/var/lib/lxc/name/rootfs/etc/network/interfaces for your container names; there 
ought to be some differences between the working and non-working cases in these 
files.

There are in fact many ways to configure the network in lxc1, some better than 
others. Some other people here and I ended up manually writing the guest's 
/etc/network/interfaces with the necessary values.
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD 2.14 - Ubuntu 16.04 - kernel 4.4.0-57-generic - SWAP continuing to grow

2017-07-15 Thread Marat Khalili

Marat/Fajar:  How many servers do you guys have running in production, and what 
are their characteristics (RAM, CPU, workloads, etc)?
I have to admit I'm not running a farm; I administer a few servers, but they are 
all different depending on the task. Still, even the smallest has 64GB RAM. In 
2017, 8GB is small even for a user's notebook IMO.



After digging into this a bit, it seems “top”, “htop”, and “free” report similar 
swap usage, however, other tools report much less swap usage.
Yes, this is known; they get confused in containers. Run them on the host to 
produce more meaningful results.



All that said, the real issue is to find out if one of our containers/processes 
has a memory leak (per Marat’s suggestion below).  Unfortunately, LXD does not 
provide an easy way to track per-container stats, thus we must “roll our own” 
tools.


Here's a typical top output (on the host system with 19 LXC containers 
currently running):


top - 16:00:01 up 12 days, 10:35,  5 users,  load average: 0.67, 0.58, 0.61

Tasks: 501 total,   2 running, 499 sleeping,   0 stopped,   0 zombie
%Cpu(s):  5.8 us,  1.4 sy,  0.0 ni, 91.5 id,  1.1 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem : 65853268 total,   379712 free,  8100284 used, 57373272 buff/cache
KiB Swap: 24986931+total, 24782081+free,  2048496 used. 56852384 avail Mem


  PID USER  PR  NI    VIRT     RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6671 root  20   0 5450952  3.728g   1564 S   0.3  5.9  93:14.29 qemu-system-x86
 6670 root  20   0 5411084  2.073g   1456 S   0.0  3.3  33:32.07 qemu-system-x86
 6979 999   20   0 5251132  244532  19436 S   0.0  0.4 101:47.88 drwcsd.real

 4338 lxd   20   0 1968400  229004   8052 S   5.3  0.3 639:52.03 mysqld
 8135 root  20   0 6553852  198224   4280 S   0.0  0.3  41:52.66 java
 4231 root  20   0  150072   99596  99472 S   0.0  0.2   0:19.43 systemd-journal
It shows all processes, including those running in containers (the first 
five are). I sorted by RES/%MEM; in your case I'd also try sorting by VIRT. I 
don't know how to directly find the process that occupies the most swap, but 
most likely it will have high RES and VIRT values too. As soon as you find 
the problem processes, it is trivial to find the container they run in with 
ps -AFH or pstree -p on the host system. (Note that user names and PIDs 
are different inside and outside of containers; don't rely on them.)
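Another possible shortcut (assuming a cgroup v1 host like stock 16.04): the 
cgroup path of a process includes the container name, e.g.:

cat /proc/6671/cgroup | head -n 1   # a path like .../lxc/container-name gives it away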


I don't have much experience with LXD, but I suppose it's the same in this 
respect.


--

With Best Regards,
Marat Khalili

On 15/07/17 18:48, Ron Kelley wrote:

Thanks for the great replies.

Marat/Fajar:  How many servers do you guys have running in production, and what 
are their characteristics (RAM, CPU, workloads, etc)?  I am trying to see if 
our systems generally align to what you are running.  Running without swap 
seems rather drastic and removes the “safety net” in the case of a bad program. 
 In the end, we must have all containers/processes running 24/7.

tldr;

After digging into this a bit, it seems “top”, “htop”, and “free” report similar 
swap usage, however, other tools report much less swap usage.  I found the 
following threads on the ‘net which include simple scripts to look in /proc and 
examine swap usage per process:

https://stackoverflow.com/questions/479953/how-to-find-out-which-processes-are-swapping-in-linux
https://www.cyberciti.biz/faq/linux-which-process-is-using-swap

As some people pointed out, top/htop don’t accurately report the swap usage as 
they combine a number of memory fields together.  And, indeed, running the 
script in each container (looking at /proc) show markedly different results 
when all the numbers are added up.  For example, the “free” command on one of 
our servers reports 3G of swap in use, but the script that scans the /proc 
directory only shows 1.3G of real swap in use.  Very odd.

All that said, the real issue is to find out if one of our containers/processes 
has a memory leak (per Marat’s suggestion below).  Unfortunately, LXD does not 
provide an easy way to track per-container stats, thus we must “roll our own” 
tools.



-Ron





On Jul 15, 2017, at 5:11 AM, Marat Khalili <m...@rqc.ru> wrote:

I'm using LXC, and I frequently observe some unused containers get swapped out, 
even though system has plenty of RAM and no RAM limits are set. The only bad 
effect I observe is couple of seconds delay when you first log into them after 
some time. I guess it is absolutely normal since kernel tries to maximize 
amount of memory available for disk caches.

If you don't like this behavior, instead of trying to fine tune kernel 
parameters why not disable swap altogether? Many people run it this way, it's 
mostly a matter of taste these days. (But first check your software for leaks.)


For example, our “server-4” machine shows 8G total RAM, 500MB free, 2.5G 
available, and 5G of buff/cache. Yet, swap is at 5.5GB and has been slowly 
growing over the past few days. It seems something is preventing the apps from using the RAM.

Re: [lxc-users] LXD 2.14 - Ubuntu 16.04 - kernel 4.4.0-57-generic - SWAP continuing to grow

2017-07-15 Thread Marat Khalili
I'm using LXC, and I frequently observe some unused containers get swapped out, 
even though the system has plenty of RAM and no RAM limits are set. The only bad 
effect I observe is a couple of seconds' delay when you first log into them after 
some time. I guess it is absolutely normal, since the kernel tries to maximize 
the amount of memory available for disk caches.

If you don't like this behavior, instead of trying to fine-tune kernel 
parameters, why not disable swap altogether? Many people run it this way; it's 
mostly a matter of taste these days. (But first check your software for leaks.)

> For example, our “server-4” machine shows 8G total RAM, 500MB free, 2.5G 
> available, and 5G of buff/cache. Yet, swap is at 5.5GB and has been slowly 
> growing over the past few days. It seems something is preventing the apps 
> from using the RAM.

Did you identify what processes all this virtual memory belongs to?

> To be honest, we have been battling lots of memory/swap issues using LXD. We 
> started with no tuning, but the app stack quickly ran out of memory. 

LXC/LXD is hardly responsible for your app stack memory usage. Either you 
underestimated it or there's a memory leak somewhere.

> Given all the issues we have had with memory and swap using LXD, we are 
> seriously considering moving back to the traditional VM approach until 
> LXC/LXD is better “baked”.

Did your VMs use less memory? I don't think so. Limits could be better 
enforced, but VMs don't magically give you infinite RAM. 
-- 

With Best Regards,
Marat Khalili

On July 14, 2017 9:58:57 PM GMT+03:00, Ron Kelley <rkelley...@gmail.com> wrote:
>Wondering if anyone else has similar issues.
>
>We have 5x LXD 2.12 servers running (U16.04 - kernel 4.4.0-57-generic -
>8G RAM, 19G SWAP).  Each server is running about 50 LXD containers -
>Wordpress w/Nginx and PHP7.  The servers have been running for about 15
>days now, and swap space continues to grow.  In addition, the kswapd0
>process starts consuming CPU until we flush the system cache via
>"/bin/echo 3 > /proc/sys/vm/drop_caches” command.
>
>Our LXD profile looks like this:
>-
>config:
>  limits.cpu: "2"
>  limits.memory: 512MB
>  limits.memory.swap: "true"
>  limits.memory.swap.priority: "1"
>-
>
>
>We also have added these to /etc/sysctl.conf
>-
>vm.swappiness=10
>vm.vfs_cache_pressure=50
>-
>
>A quick “top” output shows plenty of available Memory and buff/cache. 
>But, for some reason, the system continues to swap out the app.  For
>example, our “server-4” machine shows 8G total RAM, 500MB free, 2.5G
>available, and 5G of buff/cache.  Yet, swap is at 5.5GB and has been
>slowly growing over the past few days.  It seems something is
>preventing the apps from using the RAM.
>
>
>To be honest, we have been battling lots of memory/swap issues using
>LXD.  We started with no tuning, but the app stack quickly ran out of
>memory.  After editing the profile to allow 512MB RAM per container
>(and restarting the container), the kswapd0 issue happens.  Given all
>the issues we have had with memory and swap using LXD, we are seriously
>considering moving back to the traditional VM approach until LXC/LXD is
>better “baked”.
>
>
>-Ron
>___
>lxc-users mailing list
>lxc-users@lists.linuxcontainers.org
>http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] /etc/resolv.conf occasionally does not get written in LXC container with static conf

2017-06-24 Thread Marat Khalili
Thank you for sharing; I was suspecting systemd too. It is very sad that such 
drastic measures as incrontab are needed :(
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] /etc/resolv.conf occasionally does not get written in LXC container with static conf

2017-06-09 Thread Marat Khalili
Occasionally, after a reboot of the host, the /etc/resolv.conf file in some 
container comes up containing only two comment lines (taken from 
/etc/resolvconf/resolv.conf.d/head). It should be filled in accordance 
with the dns-nameservers line in /etc/network/interfaces (the network 
configuration is all static), but it isn't, though the IP address is 
assigned correctly. Any of the following fixes the problem:

* /etc/init.d/resolvconf reload
* ifdown/ifup
* lxc-stop/lxc-start
* lxc-attach shutdown -r now

What's bizarre is that there are many containers with a similar configuration on 
this host (actually, most were created with the same script), but they 
usually come up OK, and there's nothing particularly different between 
successful and unsuccessful syslogs until some service inside (e.g. 
Apache) realizes it's got no DNS.


Since it is hard to reproduce the problem without rebooting a production 
server, I don't even know where to dig. Perhaps someone has seen this 
kind of behaviour before?


It's Ubuntu 16.04 on both host and container:
Linux host 4.4.0-79-generic #100-Ubuntu SMP Wed May 17 19:58:14 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Strange handling of background processes by lxc-attach

2017-06-01 Thread Marat Khalili

Dear all,

I tried starting background processes with lxc-attach and found strange 
behaviour in Ubuntu Xenial (both container and host).


When I start lxc-attach without a command and type my command in the shell, it 
waits for the background process to end before completing the exit:



root@host:~# lxc-attach -n test
root@test:~# sleep 10 &
[1] 14607
root@test:~# exit

# hangs here for 10 seconds

root@host:~#


When I do it with nohup, it works as intended: no wait, and the process continues 
in the background:



root@host:~# lxc-attach -n test
root@test:~# nohup sleep 10 &
[1] 14621
nohup: ignoring input and appending output to 'nohup.out'
root@test:~# exit
root@host:~# lxc-attach -n test -- pgrep sleep
14621
root@host:~#


When I specify the command in the lxc-attach arguments or pass it 
non-interactively, the background process gets killed, even with nohup! (I 
wonder what signal it gets.) There are no hangs below:



root@host:~# lxc-attach -n test -- bash -c 'nohup sleep 10 &'
root@host:~# echo 'nohup sleep 10 &' | lxc-attach -n test
root@host:~# lxc-attach -n test -- pgrep sleep
root@host:~# 


Finally, nohup bash works:


root@host:~# lxc-attach -n test -- nohup bash -c 'sleep 10 &'
nohup: appending output to 'nohup.out'
root@host:~# echo 'sleep 10 &' | lxc-attach -n test -- nohup bash
nohup: ignoring input and appending output to 'nohup.out'
root@host:~# lxc-attach -n test -- pgrep sleep
14732
14734
root@host:~#


I wonder if this is intended or should be reported as a bug? I guess 
this is somehow related to the missing parent, right?




root@host:~# uname -a
Linux host 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 
2017 x86_64 x86_64 x86_64 GNU/Linux


root@host:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:       Ubuntu 16.04.2 LTS
Release:           16.04
Codename:          xenial



--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] mounting the same path on multiple containers

2017-04-21 Thread Marat Khalili
You'll get the same results as from editing the same file from two different 
processes without any LXC involved: anything can happen to the file, but 
the filesystem will be totally OK with it.


--

With Best Regards,
Marat Khalili

On 21/04/17 13:09, Werner Hack wrote:

Maybe I have not expressed myself clearly.
I want to add the same data directory to multiple containers (bind mount).
The data is stored in a zfs pool if this is relevant to my question.

Now, we presume there are two independant users (or services) working 
on two different containers.
They are editing the same file on the common data storage at the same 
time.
Can this corrupt my filesystem or is this managed by the host (or 
kernel or whatever) correctly?

Could this be a problem and I damage my filesystem this way?

Thanks in advance
Werner Hack


On 04/20/2017 03:37 PM, T.C 吳天健 wrote:
I think lxc just do bind mount for you.  It will or not damage file 
depends on the device file nature.  For example, block device allows 
multiple process read-write. Some other drive file might not allow 
multiple entry.


On 2017-04-20 at 21:21, "Werner Hack" <werner.h...@uni-ulm.de> wrote:


Hi all,

I want to mount (lxc device add) the same path on multiple
containers.
Is this possible or will I damage my filesystem this way?
How is concurrent (write) access to the same directory from
multiple containers handled?
Is this managed with lxd daemon or the kernel and I have not to
worry about it?
Or have I to use a cluster filesystem in this case?

Thanks in advance
Werner Hack


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc exec comand, redirecting

2017-04-10 Thread Marat Khalili

Also the following might work:

root@host ~ $ lxc exec container -- rsync -azR 
vps270841.ovh.net:/var/www/website.com/htdocs/ /


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] can't get kvm to work inside lxc

2017-04-05 Thread Marat Khalili

Just making it run is as simple as I wrote before, just 3 commands:


# apt install wget qemu-kvm

# wget 
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img


# kvm -curses ubuntu-16.04-server-cloudimg-amd64-disk1.img

You should see it booting. I used a script to create a fresh privileged 
LXC container, but there's nothing kvm-specific in that script, just 
some network configuration and user preferences. You said you use LXD, 
but I don't think there's a big difference.


You'll have to solve more problems to actually make it useful:
* give it more space with qemu-img resize;
* access virtual system: define password or ssh key with cloud-localds;
* share network with VM: -netdev bridge,id=br0 -device 
virtio-net,netdev=br0,mac="$MAC_ADDRESS";
* configure a static IP address: using cloud-localds, rewrite the file in 
/etc/network/interfaces.d and reboot the system;

* share local storage with VM: -virtfs and mount with 9p.
* control VM with scripts: -monitor unix:... and socat;
* monitor boot with scripts: -serial unix... and socat;
* start and stop VM with container: systemd;
* ...
(br0 above is a bridge inside the container; you'll need to create it 
and to forward /dev/net/tun for this to work.)


There's a lot of info about all this scattered throughout the internet, 
but no single page; I'll think about writing the details up somewhere.
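Putting the pieces above together, the invocation ends up looking roughly like 
this (a sketch from memory; the memory size, socket paths and MAC address are 
placeholders):

kvm -m 2048 -curses \
  -drive file=ubuntu-16.04-server-cloudimg-amd64-disk1.img,if=virtio \
  -netdev bridge,id=br0,br=br0 -device virtio-net,netdev=br0,mac=00:16:3e:xx:xx:xx \
  -monitor unix:/run/vm-monitor.sock,server,nowait \
  -serial unix:/run/vm-serial.sock,server,nowait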


--

With Best Regards,
Marat Khalili

On 04/04/17 20:22, Spike wrote:

Marat,

any chance you could share a little more about the steps you took to 
start kvm manually? that'd be most useful to get things started. If 
you wrote up that experience somewhere a link would be most welcomed too.


thank you,

Spike

On Mon, Apr 3, 2017 at 11:36 PM Marat Khalili <m...@rqc.ru> wrote:


Hello,

I was able to run kvm in a privileged lxc container without any
modifications of lxc stock config (well, with some related to network
bridge). I gave up on libvirt and start containers with
qemu-system-x86_64 and systemd. You may want to try downloading ubuntu
cloud image from
https://cloud-images.ubuntu.com/releases/16.04/release/
and starting it with kvm -curses to see if it works.

--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] can't get kvm to work inside lxc

2017-04-04 Thread Marat Khalili

Hello,

I was able to run kvm in a privileged lxc container without any 
modifications to the stock lxc config (well, with some related to the network 
bridge). I gave up on libvirt and start the VMs with 
qemu-system-x86_64 and systemd. You may want to try downloading the ubuntu 
cloud image from https://cloud-images.ubuntu.com/releases/16.04/release/ 
and starting it with kvm -curses to see if it works.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nfs server in [unprivileged] container?

2017-03-30 Thread Marat Khalili
> To clarify, in your setup, is the container using zfs? are you creating a 
> dataset for /nfs and exporting that to the container?

In my setup it is a btrfs subvolume that's bind-mounted into the nfs container 
and then shared via nfs. It contains users' home directories, so the load is 
not particularly high.
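Roughly like this, with placeholder paths and subnet (the lxc.mount.entry line 
goes in the nfs container's config, the export line in its /etc/exports):

lxc.mount.entry = /srv/homes srv/homes none bind,create=dir 0 0
/srv/homes 10.0.3.0/24(rw,no_subtree_check,no_root_squash)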
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nfs server in [unprivileged] container?

2017-03-30 Thread Marat Khalili
https://launchpad.net/~gluster/+archive/ubuntu/nfs-ganesha


Disclamer: I haven't tested it.
Yes, I found it too, but its production readiness is unclear to me. 
Also, it is not present in the stock Ubuntu repositories. I would be glad to 
hear any success stories before trying it myself.



--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nfs server in [unprivileged] container?

2017-03-29 Thread Marat Khalili
I do run nfs in a privileged container, mostly because it is easier to 
manage it this way (a separate IP address and such -- reasons similar to 
yours, actually).


Since I use nfs-kernel-server, most (if not all) of the code is actually 
executed in the kernel, not in the container's userspace. Also, I had to disable 
apparmor for this container (lxc.aa_profile = unconfined). Because of 
this, I'm not sure trying an unprivileged nfs container makes any sense.


The story would be quite different for a userspace nfs server, but 
apparently there isn't one.


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD - Small Production Deployment - Storage

2017-03-29 Thread Marat Khalili
Just don't use NFS clients using NFS server on the same machine (same 
kernel), as this will break (hangs). 
Huh? It works for me between LXC containers. I only had to tune the 
startup/shutdown sequence in systemd. In exactly what situation does it 
hang? /worried/



--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] DHCP or static ip address?

2017-03-04 Thread Marat Khalili
> The other way is to leave in /var/lib/lxc/NAME/config only [...] and have: 
> [...] in /var/lib/lxc/NAME/rootfs/etc/network/interfaces. Then everything 
> works.

> Which way is better? 

I also stumbled on the nameservers issue when using the config and switched to 
/etc/network/interfaces more than a year ago. No problems so far. I didn't find 
any other way yet.

I'd also recommend installing a local DNS server and automatically putting the 
names of new containers into its zone with the same script you use to assign IP 
addresses. It helps with accessing network services running in containers, if 
you have any.

I don't see any benefits in DHCP _unless_ you use LXD and plan to move your 
containers around.
-- 

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Graceful Shutdown/Reboot

2017-02-09 Thread Marat Khalili
You are right, I was jumping to conclusions about "all sane 
distributions". Did you verify that lxc-stop shuts your container down 
correctly? According to my quick Google search, people actually 
recommend specifying a different signal in lxc.haltsignal for containers 
that don't understand the SIGPWR sent by LXC.
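For instance, one recommendation I came across for systemd-based guests (not 
verified by me) is:

lxc.haltsignal = SIGRTMIN+3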



--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Graceful Shutdown/Reboot

2017-02-06 Thread Marat Khalili
I suggest looking for shutdown messages in the service log file, 
e.g. /var/log/mysql/error.log . I expect all sane distributions to at 
least try to stop all containers (as well as other processes) gracefully 
and give them some time to finish their work. However, I suppose it is 
possible to install and run LXC in a way such that containers won't close 
correctly, so it makes sense to check first.



--

With Best Regards,
Marat Khalili

On 06/02/17 16:36, Brett 11 wrote:


Hi guys. Forgive me if this question has been answered before. I 
couldn't find it..



When I shutdown/reboot the HOST, does LXC gracefully shutdown the 
containers and their services automatically? Or should they always be 
shutdown manually or via script before HOST shutdown/reboot?



I have a container running MySQL for example and have been using 
'lxc-stop' before I shutdown the host. I really don't want to assume 
the host can gracefully shutdown the container & MySQL so it would be 
nice to know 'for sure'.



Thanks to all who can help!

Brett



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] sendmail / use IP of host ? / network.type=none

2017-02-02 Thread Marat Khalili

I am afraid, that spam filters receiving emails from it
will rate emails down, because of the NAT and the private IP. 
It's NAT that already gives your container access to the public IP and also 
protects it from spammers outside. There are other ways, but you really 
don't need them unless you want to receive mail too.


If you care about your reputation with spam filters, make sure your 
public IP has a DNS record (most likely your ISP already took care of 
this), and configure SPF records for your outgoing mail domains.



--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Pre and post flight checks for container start/stop

2016-12-23 Thread Marat Khalili
Just recently I had a similar requirement of making sure NFS is up before 
starting an LXC container. After some unsuccessful experiments with 
lxc.hook.pre-start etc. I settled on a systemd solution. If your system 
happens to use systemd, then e.g. for a container user1 create the file 
/etc/systemd/units/lxc@user1.service:



.include /lib/systemd/system/lxc@.service

[Unit]
After=mnt-nfs-home.mount
Requires=mnt-nfs-home.mount

[Install]
WantedBy=multi-user.target
(replace mnt-nfs-home.mount with whatever checks you want), then run: 
systemctl enable /etc/systemd/units/lxc@user1.service . Deactivation is 
handled similarly. Of course, you will have to do everything through 
systemd after that, which is why some people believe it is evil and 
contagious.


--

With Best Regards,
Marat Khalili

On 23/12/16 08:01, Kees Bos wrote:

Hi,

Is it possible to do some pre-flight checks before starting a
container. E.g. to verify network connectivity before starting the
container, or to check in a central 'database' that the container isn't
running on a different host and register it? Note, that the preflight
check should be able to cancel a startup.

And similar on stopping a container execute some commands e.g. yo
deactivate registration of the container in the central.


Cheers,

Kees
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not on others

2016-10-20 Thread Marat Khalili

On 20/10/16 21:42, Michael Peek wrote:
On the host, if I assign the host ip configuration to br1, don't I 
need to change something about the eno1 configuration?

Yes. You should delete it. Everything must be under br1.

--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC containers w/ static IPs work on some hosts, not on others

2016-10-20 Thread Marat Khalili

Hello,

I use lxc (not lxd!) with static IP addresses. Here's my config (Ubuntu 
16.04):


/etc/network/interfaces:


auto br1
iface br1 inet static
bridge_ports eno1
bridge_fd 0
address 10... # host ip configuration follows

/etc/lxc/default.conf:

lxc.network.type = veth
lxc.network.link = br1
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

/var/lib/lxc/test/rootfs/etc/network/interfaces:

auto eth0
iface eth0 inet static
address 10... #container ip configuration follows


You seem to use macvlan. It is explicitly designed to prevent containers 
from talking to each other (they can only talk via an external router), and 
it complicates things, e.g. it requires router support (which might be a 
problem in your case). Unless you specifically need this feature, you may 
have better results (and performance) with a bridge like the one above.


Unfortunately, many places on the web teach people to configure macvlan 
with containers without really explaining why.


--

With Best Regards,
Marat Khalili
 


On 20/10/16 20:33, Michael Peek wrote:

Hi gurus,

I'm scratching my head again.  I'm using the following commands to 
create an LXC container with a static IP address:


# lxc-create -n my-container-1 -t download -- -d ubuntu -r xenial
-a amd64

# vi /var/lib/lxc/my-container-1/config

Change:
# Network configuration
# lxc.network.type = veth
# lxc.network.link = lxcbr0
# lxc.network.flags = up
# lxc.network.hwaddr = 00:16:3e:0d:ec:13
lxc.network.type = macvlan
lxc.network.link = eno1

# vi /var/lib/lxc/my-container-1/rootfs/etc/network/interfaces

Change:
#iface eth0 inet dhcp
iface eth0 inet static
  address xxx.xxx.xxx.4
  netmask 255.255.255.0
  network xxx.xxx.xxx.0
  broadcast xxx.xxx.xxx.255
  gateway xxx.xxx.xxx.1
  dns-nameservers xxx.xxx.0.66 xxx.xxx.128.66 8.8.8.8
  dns-search my.domain

# lxc-start -n my-container-1 -d


It failed to work.  I reviewed my notes from past posts to the list 
but found no discrepancies.  So I deleted the container and tried it 
on another host -- and it worked.  Next I deleted that container and 
went back to the first host, and it failed.  Lastly, I tried the above 
steps on multiple hosts and found that it works fine on some hosts, 
but not on others, and I have no idea why.  On hosts where this fails 
there are no error messages, but the container can't access the 
network, and nothing on the network can access the container.


Is there some step that I'm missing?

Thanks for any help,

Michael Peek


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Can not stop lxc with lxc-stop

2016-09-27 Thread Marat Khalili
I'd guess that some container process is stuck in the kernel due to a 
hardware/driver/filesystem problem. Since all containers actually share 
the same kernel, there's no easy way out in this case. Check whether you see 
any of the container's processes still running and whether you can kill them.
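A generic check on the host (nothing LXC-specific): processes stuck in the 
kernel usually show up in state D, e.g.:

ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /D/'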


--

With Best Regards,
Marat Khalili

On 27/09/16 11:55, John Y. wrote:

The same as lxc-stop -n testlxc


Thank you for your help.
John

2016-09-27 14:21 GMT+08:00 Marat Khalili <m...@rqc.ru>:

What's with:

# lxc-stop -n testlxc -k

?

--

With Best Regards,
Marat Khalili

On 27/09/16 05:46, John Y. wrote:

I create a container with lxc 2.0.4.
lxc-stop hangs up when I want to stop it.

#lxc-stop -n testlxc

But it may already stoped, because I exited from lxc auto
automatically and lxc-attach failed.:

#lxc-attach -n testlxc
lxc-attach: attach.c: lxc_attach_to_ns: 257 No such file or
directory - failed to open '/proc/23193/ns/mnt'
lxc-attach: attach.c: lxc_attach: 948 failed to enter the namespace

And lxc-stop still hanged without any output.

Anyone know why?

Thanks,
John





___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Can not stop lxc with lxc-stop

2016-09-27 Thread Marat Khalili

What's with:

# lxc-stop -n testlxc -k

?

--

With Best Regards,
Marat Khalili

On 27/09/16 05:46, John Y. wrote:

I create a container with lxc 2.0.4.
lxc-stop hangs up when I want to stop it.

#lxc-stop -n testlxc

But it may already stoped, because I exited from lxc auto 
automatically and lxc-attach failed.:


#lxc-attach -n testlxc
lxc-attach: attach.c: lxc_attach_to_ns: 257 No such file or directory 
- failed to open '/proc/23193/ns/mnt'

lxc-attach: attach.c: lxc_attach: 948 failed to enter the namespace

And lxc-stop still hanged without any output.

Anyone know why?

Thanks,
John





___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] can't run “glxgears” in root on lxc 2.0 container

2016-09-22 Thread Marat Khalili

On 19/09/16 21:40, manik sheeri wrote:


However, after that I enter root mode via `sudo su` command.
And try to run glxgears, but I get the following error:

No protocol specified
Error: couldn't open display :0.0

Not sure why this error is coming. If user `ubuntu` runs x apps fine , 
I expected root to do the same.


I saw this kind of error on a non-containerized machine. Try running 
glxgears the following way:

HOME=/home/ubuntu glxgears

Replace /home/ubuntu with the home directory of a user for which it is 
working. I don't know the proper fix, but it works as a workaround for 
me. It is probably something related to X server authorization.


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Experimental cache-updater between containers (LXC)

2016-09-21 Thread Marat Khalili
I'm currently using apt-cacher-ng here, and it works great. What are the 
possible benefits of your solution?


--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Crucial LXD, Bind Mounts & Gluster Question

2016-08-14 Thread Marat Khalili
Hello Zach,

> Gluster Volume subdirectories Bind Mounted into their respective containers 
> (i.e. /data/gluster/user1 -> container:/data/gluster)

Considering this line, do you even depend on ACLs? I'd think bind mounts 
provide sufficient protection by themselves, as long as the server daemons run 
outside the containers.

(I'm currently facing a similar problem, but don't have first-hand experience 
solving it yet.)
-- 

With Best Regards,
Marat Khalili

On August 14, 2016 4:50:52 AM GMT+03:00, Zach Lanich <z...@zachlanich.com> 
wrote:
>Hey guys, I have a crucial decision I have to make about a platform I’m
>building, and I really need your help to make this decision in regards
>to security. Here’s what I’m trying to accomplish:
>
>Platform: Highly Available Wordpress hosting using Galera, GlusterFS &
>LXD (don’t worry about the SQL part)
>- One container per customer on a VM (or ded server)
>- (preferably) One 3 node GlusterFS Cluster for the Wordpress files of
>all customers’ containers
>- GlusterFS volume divided into subdirectories (one per customer), with
>ACLs to control permissions (see *)
>- Gluster Volume subdirectories Bind Mounted into their respective
>containers (i.e. /data/gluster/user1 -> container:/data/gluster)
>- LXC User/Group mappings to make the ACLs work
>
>My concerns:
>- (*) Although the containers are isolated (all but the shared kernel),
>and that in itself is probably secure enough to feel ok about it,
>introducing a shared Gluster volume into the mix and depending on ACLs
>makes me a bit nervous. I’d like your opinions on what the norm is in
>the world (the PaaSs, etc) and if you guys think this is a terrible
>idea. If you think this is not a good way of handling my needs, PLEASE
>help me find a better solution.
>
>My hangups:
>- I know PaaSs have found incredibly efficient ways to provide
>containerized apps with high availability, and I tend to highly doubt
>they’re throwing up 3+ GlusterFS VMs for every single app they deploy.
>This to me seems like an impossibly cost-ineffective approach. Correct
>me if I’m wrong. That being said, I’m not 100% sure how they’re doing
>it.
>
>Odd thoughts & alternative solutions that have crossed my mind:
>- To avoid using a shared single Gluster Volume and ACLs altogether,
>while also avoiding too much infrastructure cost, I’ve thought of
>possible putting up a 3 VM Gluster cluster, each with matching LXD
>Containers on them with Gluster server daemons running in those
>containers. I could use those containers & networking to simulate
>having multiple 3 node Gluster Clusters, each being dedicated to a
>respective containerized app on the App Server. This to me seems like
>it would be an unnecessarily complex and annoying to maintain solution,
>so please help me here.
>
>I hugely appreciate anyones help and this is a huge passion project of
>mine and I’ve dedicated an absurd number of hours reading to try and
>figure this out.
>
>Best Regards,
>
>Zach Lanich
>Business Owner, Entrepreneur, Creative
>Owner/CTO
>weCreate LLC
>www.WeCreate.com
>
>
>
>
>
>___
>lxc-users mailing list
>lxc-users@lists.linuxcontainers.org
>http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Can a container modify the host rtc?

2016-07-27 Thread Marat Khalili

On 26/07/16 19:58, Stewart Brodie wrote:


You won't be able to call those functions from a container not in the
initial user namespace, even if you possess CAP_SYS_TIME, because of the way
the kernel does its permission checks.
I wonder if there's really no workaround for ntpd. A special version 
talking to the host through a pipe, perhaps? It is very convenient from an 
administration point of view to keep every network service in a separate 
container.


--

With Best Regards,
Marat Khalili
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users