[lxc-users] Error in 'lxc.hook.start-host' makes infrastructure unusable

2019-07-27 Thread webman
Hello!

I am using Debian (buster, kernel 4.19)
and LXC 3.1 (3.1.0+really3.0.3-8).

In a container's 'start-host' hook, I try to
find out the PID of the container using:
>pid=$(lxc-info -n temp -p -H)<
This hangs.

It is easy to assume this is a locking situation ;-)
OK, but to make any further use of the
LXC tools (like 'lxc-ls' etc.; they are all
blocked since the start attempt), you have to
hard-kill the corresponding monitor.

Using:
>lxc-monitor -Q -n <

shows no error, but it just doesn't work,
and the container start attempt continues ...

Besides that, how do I get the PID of the
started container? What I want to achieve is
to create a named network namespace for each
container (>ln -sf /proc/${pid}/ns/net /var/run/netns/${1}<).
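As a side note on the deadlock: calling lxc-info from inside a start hook can block against the container's own monitor, so a hook that derives the PID without invoking any LXC tool may be safer. The following is only a sketch under one assumption: in LXC 3.x the start-host hook reportedly receives the init PID and container name in the LXC_PID and LXC_NAME environment variables (verify against lxc.container.conf(5)). The target directory is parametrized here so the helper can be exercised without root.

```shell
# Sketch of a start-host hook helper. Creates a named network namespace
# entry pointing at the container's net namespace, as in the >ln -sf< line
# above, without calling lxc-info.
make_named_netns() {
    pid="$1"      # container init PID (e.g. "$LXC_PID")
    name="$2"     # container name (e.g. "$LXC_NAME")
    dir="$3"      # normally /var/run/netns; parametrized for testing
    mkdir -p "$dir"
    ln -sf "/proc/${pid}/ns/net" "${dir}/${name}"
}

# In the real hook one would call:
#   make_named_netns "$LXC_PID" "$LXC_NAME" /var/run/netns
```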

BTW, passing environment variables into the
container still does NOT work ...

Thanks,
Manfred




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] LXC-3.1 - console dysfunctional

2019-05-29 Thread webman
Hello!

I started the container with trace logging and then attached the console.
--- start-log ---
lxc-start tc 20190529154624.524 TRACE    start - start.c:lxc_recv_ttys_from_child:1477 - Received pty with master fd 27 and slave fd 28 from parent
lxc-start tc 20190529154624.524 TRACE    start - start.c:lxc_recv_ttys_from_child:1483 - Received 4 ttys from child
lxc-start tc 20190529154624.524 NOTICE   start - start.c:post_start:2048 - Started "/sbin/init" with pid "1499"
lxc-start tc 20190529154624.524 TRACE    start - start.c:lxc_serve_state_socket_pair:536 - Sent container state "RUNNING" to 5
lxc-start tc 20190529154624.524 TRACE    start - start.c:lxc_serve_state_clients:466 - Set container state to RUNNING
lxc-start tc 20190529154624.524 TRACE    start - start.c:lxc_serve_state_clients:469 - No state clients registered
lxc-start tc 20190529154624.524 DEBUG    lxccontainer - lxccontainer.c:wait_on_daemonized_start:830 - First child 1495 exited
lxc-start tc 20190529154624.524 TRACE    lxccontainer - lxccontainer.c:wait_on_daemonized_start:846 - Container is in "RUNNING" state
lxc-start tc 20190529154624.524 TRACE    start - start.c:lxc_poll:622 - Mainloop is ready
lxc-start tc 20190529154624.524 NOTICE   start - start.c:signal_handler:430 - Received 17 from pid 1497 instead of container init 1499
---
--- console log ---
lxc-console tc 20190529154800.937 DEBUG    commands - commands.c:lxc_cmd_rsp_recv:165 - Response data length for command "get_state" is 0
lxc-console tc 20190529154800.937 DEBUG    commands - commands.c:lxc_cmd_get_state:585 - Container "tc" is in "RUNNING" state
lxc-console tc 20190529154800.937 TRACE    commands - commands.c:lxc_cmd_rsp_recv:139 - Command "console" received response
lxc-console tc 20190529154800.937 DEBUG    commands - commands.c:lxc_cmd_rsp_recv:165 - Response data length for command "console" is 0
lxc-console tc 20190529154800.937 INFO     commands - commands.c:lxc_cmd_console:744 - Alloced fd 5 for tty 1 via socket 4
lxc-console tc 20190529154800.937 TRACE    terminal - terminal.c:lxc_console:1060 - Process is already group leader
lxc-console tc 20190529154800.937 DEBUG    terminal - terminal.c:lxc_terminal_signal_init:192 - Created signal fd 6
lxc-console tc 20190529154800.937 DEBUG    terminal - terminal.c:lxc_terminal_winsz:90 - Set window size to 132 columns and 50 rows
lxc-console tc 20190529154800.937 TRACE    commands - commands.c:lxc_cmd_rsp_recv:139 - Command "terminal_winch" received response
lxc-console tc 20190529154800.937 DEBUG    commands - commands.c:lxc_cmd_rsp_recv:165 - Response data length for command "terminal_winch" is 0
---
This is on a Buster host with a Stretch container.
This tells me the communication is failing.
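One thing worth ruling out before blaming the command channel: a silent lxc-console is often just a getty question, as Oliver suggested elsewhere in this thread. A rough check from the host, under the assumption of a standard Debian systemd layout inside the rootfs (the path, unit name, and helper name are assumptions, not taken from this thread; some images use container-getty@ instead):

```shell
# Returns success if getty@tty1 is enabled in the given container rootfs.
# systemd enables a VT getty via a symlink under getty.target.wants; if no
# getty is enabled, lxc-console shows a banner but never a login prompt.
has_tty1_getty() {
    rootfs="$1"
    [ -L "${rootfs}/etc/systemd/system/getty.target.wants/getty@tty1.service" ]
}

# Hypothetical usage:
#   has_tty1_getty /var/lib/lxc/tc/rootfs || echo "no getty enabled on tty1"
```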

Regards,
Manfred





Re: [lxc-users] LXC-3.1 - console dysfunctional

2019-05-29 Thread webman
Hi Oliver!

Thanks, but I am using Debian Buster!

Just to see whether this depends on Buster, I created
a new container using Stretch, but it behaves the same.

But there are errors at the end of the creation process:

update-rc.d: error: cannot find a LSB script for checkroot.sh
update-rc.d: error: cannot find a LSB script for umountfs
update-rc.d: error: cannot find a LSB script for hwclockfirst.sh
Creating SSH2 RSA key; this may take some time ...
2048 SHA256:SYJi57SfMAjyplQUSjvkHGooFEooE7yQmQBM/Vzgwcw root@medio-rep (RSA)
Creating SSH2 ECDSA key; this may take some time ...
256 SHA256:MHD1GNwlk1EpKOLXKdE6nuIXDm7FgdwVIDXD/Ejcm90 root@medio-rep (ECDSA)
Creating SSH2 ED25519 key; this may take some time ...
256 SHA256:cNmkkLLAFbCvJR0FBL9VL020dMbcvLd0aI4mdeQSFRI root@medio-rep (ED25519)
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.

They don't tell me whether this is part of the reason.
The mentioned scripts are not present in my install.

Best regards, Manfred

> -Original Message-
> From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org]
> On Behalf Of Oliver Dzombic
> Sent: Wednesday, May 29, 2019 4:40 PM
> To: lxc-users@lists.linuxcontainers.org
> Subject: Re: [lxc-users] LXC-3.1 - console dysfunctional
> 
> Hi,
> 
> what container image do you use?
> 
> Some do not have an activated getty by default (like the centos 6
> image), so you won't see anything.
> 
> If you try it with centos 7 or a recent ubuntu / debian, it will have
> one by default.
> 
> --
> Mit freundlichen Gruessen / Best regards
> 
> Oliver Dzombic
> Layer7 Networks
> 
> mailto:i...@layer7.net
> 
> Anschrift:
> 
> Layer7 Networks GmbH
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
> 
> HRB 96293 beim Amtsgericht Hanau
> Geschäftsführung: Oliver Dzombic
> UST ID: DE259845632
> 



[lxc-users] LXC-3.1 - console dysfunctional

2019-05-29 Thread webman
Hello!

I am just trying Debian Buster,
found LXC 3.1 and gave it a try.
After setting up the networking configuration,
I started the container and tried to log in
via the console:

>lxc-console -n rxdptest1<

A hello message appears, but that's all!
No login prompt at all, even after pressing enter several times.

I can use lxc-attach, but that is different.
Astoundingly, after connecting via PuTTY from
Windows, the usual cursor keys to edit the command line
don't work. I think that is related.

Any help would be great!

Thanks so far,
Manfred

[added console-data, reconfigured the keyboard layout; no change]
[Debian Buster, kernel 4.19, LXC 3.1]





[lxc-users] Understanding IPVLAN and LXC across namespaces

2018-09-27 Thread webman
Hello!

Perhaps someone can give me a hint.
In the following, I try to describe my configuration.

I have a VPS with two switch ports and two network
interfaces in Linux (which is Debian Stretch, so no LXD).

I want to move one of the interfaces, which itself
has multiple IP addresses, into an LXC container; that's easy.

But the different IP addresses should only appear
in separate LXC containers, which would make iptables
management and separation a lot easier. I want to end
up with one slave container per IP, and the security services
running in the container which uses the eth port.

So I started experimenting, but I cannot make this work.

First, I start two containers: the one which
uses eth1 from the host is named "portsplit"
and its network type is "phys"; the other one has
network type "empty" and is named "slave1".

Once both containers are running, I run this script
on the host:

---
ip link set eth1 down

ip link add ipvlan1 link eth1 type ipvlan mode l3
ip link set ipvlan1 up
ip link set ipvlan1 netns $(lxc-info -pHn slave1)   ## Move the interface into the container.

ip link set eth1 up
ip link set eth1 netns $(lxc-info -pHn portsplit)
---

This is just the beginning, to understand the stuff.

After this, I log in to the containers and run:

>systemctl restart networking<

The "portsplit" itself does NOT(!!) have an ip-address.
it uses the following route:

default dev eth1 scope link

The slave has the two following routes:

default via 192.168.26.254 dev ipvlan1 onlink ##To LAN-GW

192.168.26.0/24 dev ipvlan1 proto kernel scope link src 192.168.26.239 ##OWN

 

I am then mapping the containers' network namespaces into
/var/run/netns, so that I can access them easily.

I am running tcpdump on the LAN GW (192.168.26.254),
on "portsplit" (with NO IP) and on slave1 with
IP 192.168.26.239.

The packets leave slave1, cross portsplit
and reach the LAN GW, which answers with an ARP who-has;
that crosses portsplit and comes back to slave1, which never
answers.

And that is expected, because ipvlan blocks all broadcasts
for the sub-interfaces (the default, which "ip -d l"
shows, is "NOARP").

 

On the gateway, there is never a MAC address for slave1
in the ARP table; the entry is flagged "(incomplete)".

Naturally, the ISP will only allow me the MAC address
that the physical (eth1) interface has.

 

From my understanding, eth1 must answer with an ARP reply
carrying its own MAC address (which is the same for all
interfaces on the ipvlan!) and internally pass the
packet on to the linked ipvlan1.

 

If someone could explain this, it would make
me happy.

 

I also experimented with:

#net.ipv4.conf.all.accept_source_route = 1
#net.ipv4.conf.enp4s0f1.proxy_arp = 1
#net.ipv4.conf.enp4s0f1.proxy_arp_pvlan = 1

 

BTW, if I switch the ipvlan creation to "mode l2",
everything starts working. But my ISP would see
different MAC addresses, and the whole broadcast traffic
would flood all the slaves ... This was the reason
for "mode l3".

 

Any help would be really welcome!

 

uname -a: 4.17.0-0.bpo.1-amd64
Just to be sure, I updated iproute2 from backports (now 4.18).

 

Just a last note.

What I noticed accidentally: the ipvlan kernel module was not
loaded. I saw this with lsmod, but I do not know
what that means (loaded later, or on demand?). On the host,
I just did a modprobe ipvlan and now it is there, but with no references.
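On that last point, the kernel can auto-load ipvlan on demand when "ip link add ... type ipvlan" runs, which would explain why the module only appears after first use. A small sketch for checking and loading explicitly; the second parameter of the helper is only there so it can be exercised against a canned modules list instead of the live /proc/modules:

```shell
# Check whether a kernel module is listed as loaded. /proc/modules holds
# one "name size refcount ..." line per loaded module.
module_loaded() {
    grep -q "^$1 " "${2:-/proc/modules}"
}

# Typical host usage (modprobe needs root):
#   module_loaded ipvlan || modprobe ipvlan
```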

 

Thanks,
Manfred


Re: [lxc-users] proc-sys-fs-binfmt_misc.automount failed

2016-09-03 Thread webman
Hi!

Thanks! I am in the middle of migrating my whole IT environment away
from Windows, so I am new to too many things at the same time.

The reason I asked about binfmt was that searching the internet showed
nothing about whether binfmt is used inside LXC at some point
or not, and the core message was "Failed to set up automount".

I am trying to create a container for a DMZ inside a firewall machine, and
even though I am using Mono (which MAY use binfmt to make exe programs
easier to run), that does not force ME to use binfmt. From your
answer I take it that LXC itself does not need it. Installing
"autofs" does not make the error go away. I then just disabled and
masked the service inside the container; this helped. I will see
whether I need automount later at some point. BTW, my containers are
on ZFS anyway.
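For reference, "masking" a unit as described above simply points its name at /dev/null under /etc/systemd/system, which is what `systemctl mask proc-sys-fs-binfmt_misc.automount` creates. A minimal sketch of that mechanism; the helper name is made up, and the root directory is parametrized only so the sketch can be tried outside a container:

```shell
# Mask a systemd unit by symlinking it to /dev/null, mimicking what
# "systemctl mask <unit>" does. $2 is the filesystem root (normally /).
mask_unit() {
    unit="$1"
    root="${2:-}"
    mkdir -p "${root}/etc/systemd/system"
    ln -sf /dev/null "${root}/etc/systemd/system/${unit}"
}

# Inside the container the direct route is simply:
#   systemctl mask proc-sys-fs-binfmt_misc.automount
```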

I am not competent to decide about security, but
experts told me not to use Ubuntu, so I am keeping plain Debian.

I try to avoid backports, because they have caused me at least two
nightmares in the last few weeks ...

Regards,
Manfred


> -Original Message-
> From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On
> Behalf Of Fajar A. Nugraha
> Sent: Saturday, September 03, 2016 12:03 PM
> To: LXC users mailing-list
> Subject: Re: [lxc-users] proc-sys-fs-binfmt_misc.automount failed
> 
> On Sat, Sep 3, 2016 at 1:21 PM,   wrote:
> > Hello !
> >
> > I have a problem with LXC (1.0.6-6+deb8u2, on debian jessie, 8.5, uname
> 3.16.xx).
> 
> If you REALLY have (or want) to use debian jessie, I recommend at
> least use jessie-backports:
> https://packages.debian.org/search?keywords=lxc
> It has lxc-2.0.x which has lots of improvements over 1.0.x.
> 
> > [FAILED] Failed to set up automount Arbitrary Executable File Formats
> File System Automount Point.
> > See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
> > Unit proc-sys-fs-binfmt_misc.automount entered failed state.
> 
> Some things, like autofs or loading kernel modules, simply don't work
> inside a container. Most Linux programs can run just fine without them,
> though.
> 
> > I do not understand too much of Linux to know, what this means. I just
> created
> > another machine, whith same results. All work done as root.
> 
> A simple google search for "linux binfmt" (or reading the link in
> systemd unit) would've told you a lot. Short version, if you're not
> using wine or qemu user emulation, it should be safe to ignore it.
> 
> If you're "just a user who wants to use linux container", I highly
> recommend you use ubuntu + lxd + zfs instead. Ubuntu has gone a long
> way to integrate lxd/lxc into their distro, including tweaking their
> packages to be more container-friendly.
> 
> --
> Fajar



[lxc-users] proc-sys-fs-binfmt_misc.automount failed

2016-09-03 Thread webman
Hello!

I have a problem with LXC (1.0.6-6+deb8u2, on Debian Jessie 8.5, uname 3.16.xx).
I am just doing the basics to get started: lxc-create.
Creation of the container works; I gave it a network (which works) and started it.

Then I got this message:

Set hostname to .
[  OK  ] Reached target Remote File Systems (Pre).
[  OK  ] Reached target Paths.
[  OK  ] Reached target Encrypted Volumes.
Failed to open /dev/autofs: No such file or directory
Failed to initialize automounter: No such file or directory
[FAILED] Failed to set up automount Arbitrary Executable File Formats File System Automount Point.
See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
Unit proc-sys-fs-binfmt_misc.automount entered failed state.

Anyway, the machine is running, so I log in and follow the message (above):

$ systemctl --failed
  UNIT                              LOAD   ACTIVE SUB    DESCRIPTION
● proc-sys-fs-binfmt_misc.automount loaded failed failed Arbitrary Executable File Formats File System Automount Point

and

$ systemctl status -l proc-sys-fs-binfmt_misc.automount
● proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point
   Loaded: loaded (/lib/systemd/system/proc-sys-fs-binfmt_misc.automount; static)
   Active: failed (Result: resources)
    Where: /proc/sys/fs/binfmt_misc
     Docs: https://www.kernel.org/doc/Documentation/binfmt_misc.txt
           http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems

There are no files below:

/proc/sys/fs/binfmt_misc/

I do not understand enough of Linux to know what this means. I just created
another machine, with the same results. All work was done as root.

Here are all the relevant lines of the config file (the network is ok):

lxc.rootfs = /var/lib/lxc/vmtest/rootfs
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.mount = /var/lib/lxc/vmtest/fstab
lxc.utsname = vmtest
lxc.arch = amd64
lxc.autodev = 1
lxc.kmsg = 0
lxc.tty = 8

An additional note: there is nothing in the '/var/lib/lxc/vmtest/fstab' file.

Looking at systemd's unit file, I see a precondition which is NOT true:
ConditionPathIsReadWrite=/proc/sys/

So, if someone could help, that would be great. It seems a bad idea to me
to continue with this fault (without knowing the consequences).
Beyond what systemd reports, "Failed to open /dev/autofs: No such file or directory"
seems to be the core of the problem.

BTW, I did exactly the same on another Jessie (same versions) with the same result,
and there seems to be no AppArmor or SELinux on my computers.

Thanks anyway and best regards,
Manfred



