[lxc-users] Moving storage volumes to new pool

2021-01-20 Thread Joshua Schaeffer
I'm running LXD 3.0.3 on Ubuntu 18.04 and the backend for my main storage pool 
is starting to fail. I have a drive that is provided via SAN to my LXD server, 
and the SAN server itself needs to be shut down and refreshed. I wanted to move 
my containers' storage volumes from my original storage pool to a new Ceph 
storage pool, but I'm clearly not specifying the move command properly:

lxcuser@lxcserver:~$ lxc storage list
+------------+-------------+--------+---------+---------+
|    NAME    | DESCRIPTION | DRIVER |  STATE  | USED BY |
+------------+-------------+--------+---------+---------+
| btrfspool1 |             | btrfs  | CREATED | 21      |
+------------+-------------+--------+---------+---------+
| cephpool1  |             | ceph   | CREATED | 0       |
+------------+-------------+--------+---------+---------+

lxcuser@lxcserver:~$ lxc storage volume list btrfspool1
+-----------+----------+-------------+---------+-----------+
|   TYPE    |   NAME   | DESCRIPTION | USED BY | LOCATION  |
+-----------+----------+-------------+---------+-----------+
| container | bllweb05 |             | 1       | lxcserver |
+-----------+----------+-------------+---------+-----------+

lxcuser@lxcserver:~$ lxc storage volume move btrfspool1/bllweb05 
cephpool1/bllweb05
Error: not found

How are you supposed to specify the move command to move a volume from one 
storage pool to another?
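
For archive readers, an assumption rather than a confirmed answer: "lxc storage 
volume move" only operates on custom volumes, while a container's root disk is 
moved together with the container. On LXD releases newer than 3.0.3 that looks 
roughly like the sketch below ("myvolume" is a hypothetical custom volume name; 
check your version's help output before relying on the flags):

    # custom volumes can be copied or moved between pools directly
    lxc storage volume copy btrfspool1/myvolume cephpool1/myvolume

    # a container's root volume follows the container; newer clients accept a
    # target pool via -s/--storage (the temporary rename avoids a name clash)
    lxc move bllweb05 bllweb05-tmp -s cephpool1
    lxc move bllweb05-tmp bllweb05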

-- 
Thanks,
Joshua Schaeffer



Re: [lxc-users] AppArmor denies connect operation inside container

2020-07-07 Thread Joshua Schaeffer
On 7/6/20 9:35 PM, Fajar A. Nugraha wrote:
> Try editing /etc/apparmor.d/usr.sbin.slapd inside the container
I added /run/saslauthd/mux rw, to the usr.sbin.slapd profile inside the 
container and it fixed the problem.
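
For anyone hitting the same denial, the change and the reload are roughly the 
following (a sketch; exact profile contents vary by distribution):

    # inside the container, add this rule to the /usr/sbin/slapd profile block
    # in /etc/apparmor.d/usr.sbin.slapd:
    #   /run/saslauthd/mux rw,
    # then reload that profile:
    apparmor_parser -r /etc/apparmor.d/usr.sbin.slapd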

-- 
Thanks,
Joshua Schaeffer



[lxc-users] AppArmor denies connect operation inside container

2020-07-06 Thread Joshua Schaeffer
Looking for some help with getting slapd to be able to connect to saslauthd 
inside an LXD container. Whenever slapd needs to connect to the socket I see 
the following error message in the host's kernel log:

    Jul  6 13:27:17 host kernel: [923413.078592] audit: type=1400 
audit(1594063637.667:51106): *apparmor="DENIED" operation="connect"* 
namespace="root//lxd-container1_" *profile="/usr/sbin/slapd" 
name="/run/saslauthd/mux"* pid=58517 comm="slapd" *requested_mask="wr"* 
denied_mask="wr" fsuid=1111 ouid=1000

I've added the following to the container config and restarted the container, 
but I'm still seeing the same problem:

    lxcuser@host:~$ lxc config get container1 raw.apparmor
    /run/saslauthd/mux wr,

I'm not super familiar with AppArmor and going through the docs now, but 
thought I'd ask to see if anybody can point me in the right direction.

    lxcuser@host:~$ lxd --version
    3.0.3
    lxcuser@host:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID:    Ubuntu
    Description:    Ubuntu 18.04.4 LTS
    Release:    18.04
    Codename:    bionic

-- 
Thanks,
Joshua Schaeffer



Re: [lxc-users] Intermittent network issue with containers

2020-07-01 Thread Joshua Schaeffer
Thanks Fajar, I'll look into the workaround.

On 7/1/20 12:40 AM, Fajar A. Nugraha wrote:
> On Wed, Jul 1, 2020 at 1:05 PM Joshua Schaeffer
>  wrote:
>> And the really odd part is that if I try to actually ping *from* the 
>> container *to* my local box it works AND afterwards my original ping *from* 
>> my local box *to* the container starts to work.
> I had a similar problem on a VMware system some time ago. I gave up
> trying to fix it (I don't manage the VMware system) and implemented a
> workaround instead.
>
> It's either:
> - a duplicate IP somewhere on your network
> - your router or switch somehow can't manage the ARP cache for the container hosts
>
> My workaround is to install iputils-arping (on Ubuntu) and (for your
> case) do something like this (preferably as a systemd service):
>
> arping -I veth-mgmt -i 10 -b 10.2.28.1
>
> Or you could replace it with ping, whatever works for you.
>
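
For reference, a minimal sketch of such a systemd unit, using the interface and 
gateway from this thread (the unit name is hypothetical, and the arping path 
from iputils-arping may differ per release):

    # /etc/systemd/system/arp-keepalive.service
    [Unit]
    Description=Keep the gateway's ARP entry for this container fresh
    After=network-online.target

    [Service]
    # -I interface, -i seconds between probes, -b keep using broadcasts
    ExecStart=/usr/bin/arping -I veth-mgmt -i 10 -b 10.2.28.1
    Restart=always

    [Install]
    WantedBy=multi-user.target

Enable it with "systemctl enable --now arp-keepalive".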

-- 
Thanks,
Joshua Schaeffer



[lxc-users] Intermittent network issue with containers

2020-07-01 Thread Joshua Schaeffer
    64 bytes from 10.2.80.129: icmp_seq=20 ttl=62 time=4.80 ms
    64 bytes from 10.2.80.129: icmp_seq=21 ttl=62 time=4.28 ms
    64 bytes from 10.2.80.129: icmp_seq=22 ttl=62 time=4.32 ms
    64 bytes from 10.2.80.129: icmp_seq=23 ttl=62 time=4.28 ms
    64 bytes from 10.2.80.129: icmp_seq=24 ttl=62 time=4.22 ms
    64 bytes from 10.2.80.129: icmp_seq=25 ttl=62 time=4.25 ms
    64 bytes from 10.2.80.129: icmp_seq=26 ttl=62 time=4.21 ms
    64 bytes from 10.2.80.129: icmp_seq=27 ttl=62 time=4.34 ms
    64 bytes from 10.2.80.129: icmp_seq=28 ttl=62 time=4.31 ms
    64 bytes from 10.2.80.129: icmp_seq=29 ttl=62 time=4.15 ms
    64 bytes from 10.2.80.129: icmp_seq=30 ttl=62 time=4.60 ms

    --- 10.2.80.129 ping statistics ---
    30 packets transmitted, 27 received, 10% packet loss, time 29137ms
    rtt min/avg/max/mdev = 4.145/43.328/1042.959/196.063 ms, pipe 2
    Tue 30 Jun 2020 22:31:01 PM UTC

    root@container1:~# date -u; ping -c 4 172.16.44.18; date -u
    Tue Jun 30 22:30:33 UTC 2020
    PING 172.16.44.18 (172.16.44.18) 56(84) bytes of data.
    64 bytes from 172.16.44.18: icmp_seq=1 ttl=63 time=444 ms
    64 bytes from 172.16.44.18: icmp_seq=2 ttl=63 time=4.30 ms
    64 bytes from 172.16.44.18: icmp_seq=3 ttl=63 time=4.27 ms
    64 bytes from 172.16.44.18: icmp_seq=4 ttl=63 time=4.23 ms

    --- 172.16.44.18 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3003ms
    rtt min/avg/max/mdev = 4.238/114.371/444.666/190.695 ms
    Tue Jun 30 22:30:36 UTC 2020

From this point on I can successfully communicate with the veth-int-core 
interface. If no traffic is pushed to that interface for anywhere between 5 and 
60 minutes, the problem comes back. I've tried:

- Seeing if any information shows up in the kernel logs on the host (nothing 
that I could see).
- Restarting the containers.
- Restarting the LXD host.
- Moving the containers to another host (the problem persisted).
- Changing the rp_filter setting on one or both interfaces.
- Looking at the lxd logs to see if anything related shows up.

Any pointers on where I could look to get more info would be appreciated.
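
One way to confirm the duplicate-IP/ARP-cache theory from the reply above is to 
watch ARP traffic for the affected address while a ping is failing; a sketch 
using the address and interface name mentioned in this thread:

    # inside the container (or on the host's bridge), while the ping from the
    # remote box is failing; unanswered who-has requests point upstream
    tcpdump -eni veth-int-core arp and host 10.2.80.129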

-- 
Thanks,
Joshua Schaeffer



Re: [lxc-users] Unable to join cluster

2020-04-08 Thread Joshua Schaeffer
On 4/8/20 00:51, Free Ekanayaka wrote:
> The 3.0.3 version in Ubuntu 18.04 has now become quite old, especially
> since that was the first version introducing clustering, and tons of
> improvements have landed since then.
>
> If you want to use clustering, please install the new stable LXD 4.0.0,
> which has just been released. You can install it on 18.04 using "snap
> install lxd", but make sure to run "apt-get remove lxd" first.
Thanks, I'll give that a try
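
For archive readers: the deb-to-snap switch usually also involves migrating the 
existing containers and images; a rough sketch (the lxd.migrate helper ships 
with the snap, so double-check against the current docs for your release):

    sudo snap install lxd
    sudo lxd.migrate        # moves data from the deb daemon into the snap
    sudo apt-get remove --purge lxd lxd-client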

-- 
Thanks,
Joshua Schaeffer



Re: [lxc-users] Unable to join cluster

2020-04-07 Thread Joshua Schaeffer
L)   = -1 EAGAIN (Resource temporarily 
unavailable)
futex(0xc4200d2d48, FUTEX_WAKE, 1)  = 1
sched_yield()   = 0
futex(0x17e4af0, FUTEX_WAIT, 2, NULL)   = -1 EAGAIN (Resource temporarily 
unavailable)
futex(0x17e4af0, FUTEX_WAKE, 1) = 0
futex(0x17e4be8, FUTEX_WAKE, 1) = 1
futex(0x17e4af0, FUTEX_WAKE, 1) = 1
futex(0x17e4bc0, FUTEX_WAKE, 1) = 1
futex(0x17e4af0, FUTEX_WAKE, 1) = 1
sched_yield()   = 0
futex(0x17e4af0, FUTEX_WAIT, 2, NULL)   = 0
futex(0x17e4af0, FUTEX_WAKE, 1) = 1
futex(0x17e5628, FUTEX_WAIT, 0, NULL

^C)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
strace: Process 13785 detached


On 3/19/20 11:41, Joshua Schaeffer wrote:
> Hey all, I'm trying to build a cluster on Ubuntu 18.04 with lxd 3.0.3. I was 
> able to bootstrap the first node without any issues, but when I try to add a 
> second node it just hangs and never returns the  terminal prompt. Here is my 
> bootstrapped node:
>
> lxcuser@blllxc02:~$ lxc cluster list
> +----------+----------------------------------------------+----------+--------+-------------------+
> |   NAME   |                     URL                      | DATABASE | STATE  |      MESSAGE      |
> +----------+----------------------------------------------+----------+--------+-------------------+
> | blllxc02 | https://blllxc02-mgmt.harmonywave.cloud:8443 | YES      | ONLINE | fully operational |
> +----------+----------------------------------------------+----------+--------+-------------------+
>
> And here is the second node I am trying to add:
>
> lxcuser@blllxc01:~$ sudo lxd init
> Would you like to use LXD clustering? (yes/no) [default=no]: yes
> What name should be used to identify this node in the cluster? 
> [default=blllxc01]:
> What IP address or DNS name should be used to reach this node? 
> [default=fe80::6a1c:a2ff:fe13:1ec6]: blllxc01-mgmt.harmonywave.cloud
> Are you joining an existing cluster? (yes/no) [default=no]: yes
> IP address or FQDN of an existing cluster node: 
> blllxc02-mgmt.harmonywave.cloud
> Cluster fingerprint: 
> 20b51145761f3444278317331feeded8492c263920889f5dccd83772da0c42cf
> You can validate this fingerpring by running "lxc info" locally on an 
> existing node.
> Is this the correct fingerprint? (yes/no) [default=no]: yes
> Cluster trust password:
> All existing data is lost when joining a cluster, continue? (yes/no) 
> [default=no] yes
> Choose the local disk or dataset for storage pool "btrfspool1" (empty for 
> loop disk): /dev/sdj
> Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
>
> ^C
> lxcuser@blllxc01:~$ lxc cluster list
> Error: LXD server isn't part of a cluster
>
> After the last question from lxd init my terminal never returns. I've left it 
> like this overnight with no change. This is all I'm seeing in the logs as 
> well from the time I run lxd init to when I abort the process:
>
> Logs from the node trying to be added:
> t=2020-03-18T20:17:07-0600 lvl=info msg="Creating BTRFS storage pool 
> \"btrfspool1\""
> t=2020-03-18T20:17:08-0600 lvl=warn msg="Failed to detect UUID by looking at 
> /dev/disk/by-uuid"
> t=2020-03-18T20:17:08-0600 lvl=info msg="Created BTRFS storage pool 
> \"btrfspool1\""
> t=2020-03-19T02:12:27-0600 lvl=info msg="Updating images"
> t=2020-03-19T02:12:27-0600 lvl=info msg="Done updating images"
> t=2020-03-19T08:12:27-0600 lvl=info msg="Updating images"
> t=2020-03-19T08:12:27-0600 lvl=info msg="Done updating images"
>
> Logs from the bootstrapped node:
> t=2020-03-18T17:05:58-0600 lvl=info msg="Initializing global database"
> t=2020-03-18T17:06:02-0600 lvl=warn msg="Raft: Heartbeat timeout from \"\" 
> reached, starting election"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Initializing storage pools"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Initializing networks"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Pruning leftover image files"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Done pruning leftover image files"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Loading daemon configuration"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Pruning expired images"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Done pruning expired images"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Expiring log files"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Done expiring log files"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Updating images"
> t=2020-03-18T17:06:03-0600 lvl=info msg="Done updating images"

Re: [lxc-users] Mapping multiple ids

2020-04-03 Thread Joshua Schaeffer


On 4/3/20 11:05, Michael Eager wrote:
> /var/log/lxd/wiki/lxc.log contains this:
> lxc wiki 20200403165802.697 ERROR    start - start.c:proc_pidfd_open:1644 - 
> Function not implemented - Failed to send signal through pidfd
> lxc wiki 20200403165802.700 ERROR    conf - conf.c:lxc_map_ids:3009 - 
> newuidmap failed to write mapping "newuidmap: uid range [48-49) -> [48-49) 
> not allowed": newuidmap 27611 0 10 48 48 48 1 49 100049 951 1000 1000 1 
> 1001 101001 64535
> lxc wiki 20200403165802.700 ERROR    start - start.c:lxc_spawn:1798 - Failed 
> to set up id mapping.
>
I ran into the same error recently, though in my case I was increasing the 
default map size. I had to:

1. Stop the container
2. Make the container privileged
3. Start and then stop the container
4. Make the container unprivileged again

After that it worked with the new IDs in the unprivileged container.
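
In LXD commands that toggle looks roughly like this (a sketch, using the 
container name from the quoted log):

    lxc stop wiki
    lxc config set wiki security.privileged true
    lxc start wiki
    lxc stop wiki
    lxc config set wiki security.privileged false
    lxc start wiki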
>
> I'm guessing that remapping UID/GID 48 is not permitted in a non-privileged 
> container.
I would guess the same thing
>
> Is there a better way to do this?
I would also be interested if there is a better way to do this, as the method I 
listed above may not be possible in all situations.

-- 
Thanks,
Joshua Schaeffer



[lxc-users] Unable to join cluster

2020-03-19 Thread Joshua Schaeffer
Hey all, I'm trying to build a cluster on Ubuntu 18.04 with LXD 3.0.3. I was 
able to bootstrap the first node without any issues, but when I try to add a 
second node it just hangs and never returns the terminal prompt. Here is my 
bootstrapped node:

lxcuser@blllxc02:~$ lxc cluster list
+----------+----------------------------------------------+----------+--------+-------------------+
|   NAME   |                     URL                      | DATABASE | STATE  |      MESSAGE      |
+----------+----------------------------------------------+----------+--------+-------------------+
| blllxc02 | https://blllxc02-mgmt.harmonywave.cloud:8443 | YES      | ONLINE | fully operational |
+----------+----------------------------------------------+----------+--------+-------------------+

And here is the second node I am trying to add:

lxcuser@blllxc01:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? 
[default=blllxc01]:
What IP address or DNS name should be used to reach this node? 
[default=fe80::6a1c:a2ff:fe13:1ec6]: blllxc01-mgmt.harmonywave.cloud
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: blllxc02-mgmt.harmonywave.cloud
Cluster fingerprint: 
20b51145761f3444278317331feeded8492c263920889f5dccd83772da0c42cf
You can validate this fingerpring by running "lxc info" locally on an existing 
node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) 
[default=no] yes
Choose the local disk or dataset for storage pool "btrfspool1" (empty for loop 
disk): /dev/sdj
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

^C
lxcuser@blllxc01:~$ lxc cluster list
Error: LXD server isn't part of a cluster

After the last question from lxd init my terminal never returns. I've left it 
like this overnight with no change. This is all I'm seeing in the logs as well 
from the time I run lxd init to when I abort the process:

Logs from the node trying to be added:
t=2020-03-18T20:17:07-0600 lvl=info msg="Creating BTRFS storage pool 
\"btrfspool1\""
t=2020-03-18T20:17:08-0600 lvl=warn msg="Failed to detect UUID by looking at 
/dev/disk/by-uuid"
t=2020-03-18T20:17:08-0600 lvl=info msg="Created BTRFS storage pool 
\"btrfspool1\""
t=2020-03-19T02:12:27-0600 lvl=info msg="Updating images"
t=2020-03-19T02:12:27-0600 lvl=info msg="Done updating images"
t=2020-03-19T08:12:27-0600 lvl=info msg="Updating images"
t=2020-03-19T08:12:27-0600 lvl=info msg="Done updating images"

Logs from the bootstrapped node:
t=2020-03-18T17:05:58-0600 lvl=info msg="Initializing global database"
t=2020-03-18T17:06:02-0600 lvl=warn msg="Raft: Heartbeat timeout from \"\" 
reached, starting election"
t=2020-03-18T17:06:03-0600 lvl=info msg="Initializing storage pools"
t=2020-03-18T17:06:03-0600 lvl=info msg="Initializing networks"
t=2020-03-18T17:06:03-0600 lvl=info msg="Pruning leftover image files"
t=2020-03-18T17:06:03-0600 lvl=info msg="Done pruning leftover image files"
t=2020-03-18T17:06:03-0600 lvl=info msg="Loading daemon configuration"
t=2020-03-18T17:06:03-0600 lvl=info msg="Pruning expired images"
t=2020-03-18T17:06:03-0600 lvl=info msg="Done pruning expired images"
t=2020-03-18T17:06:03-0600 lvl=info msg="Expiring log files"
t=2020-03-18T17:06:03-0600 lvl=info msg="Done expiring log files"
t=2020-03-18T17:06:03-0600 lvl=info msg="Updating images"
t=2020-03-18T17:06:03-0600 lvl=info msg="Done updating images"
t=2020-03-18T17:06:03-0600 lvl=info msg="Updating instance types"
t=2020-03-18T17:06:03-0600 lvl=info msg="Done updating instance types"
t=2020-03-18T23:06:03-0600 lvl=info msg="Updating images"
t=2020-03-18T23:06:03-0600 lvl=info msg="Done updating images"
t=2020-03-19T05:06:03-0600 lvl=info msg="Updating images"
t=2020-03-19T05:06:03-0600 lvl=info msg="Done updating images"
t=2020-03-19T11:06:03-0600 lvl=info msg="Updating images"
t=2020-03-19T11:06:03-0600 lvl=info msg="Done updating images"

Any idea where I can get more information about what is going on to 
successfully add the node to the cluster?
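
Two ways to get more detail out of a stuck join, offered as a sketch (verify 
the flags against your LXD version):

    # from another terminal, stream the daemon's log while "lxd init" runs
    lxc monitor --type=logging

    # or stop the service and run the daemon in the foreground with debug output
    sudo systemctl stop lxd.service lxd.socket
    sudo lxd --debug --group lxd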

-- 
Thanks,
Joshua Schaeffer



Re: [lxc-users] LXD static IP in container

2020-02-11 Thread Joshua Schaeffer
lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip link show veth-mgmt
316: veth-mgmt@if317:  mtu 1500 qdisc noqueue 
state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:f6:e5:ec brd ff:ff:ff:ff:ff:ff link-netnsid 0
lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip -4 addr show veth-mgmt
316: veth-mgmt@if317:  mtu 1500 qdisc noqueue 
state UP group default qlen 1000 link-netnsid 0
    inet 10.2.28.129/22 brd 10.2.31.255 scope global veth-mgmt
   valid_lft forever preferred_lft forever

lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip link show veth-ext-svc
314: veth-ext-svc@if315:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:21:ac:dc brd ff:ff:ff:ff:ff:ff link-netnsid 0
lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip -4 addr show veth-ext-svc
314: veth-ext-svc@if315:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000 link-netnsid 0
    inet 192.41.41.85/26 brd 192.41.41.127 scope global veth-ext-svc
   valid_lft forever preferred_lft forever

-- 
Thanks,
Joshua Schaeffer



[lxc-users] Managing network devices on different VLAN's in LXD

2018-11-13 Thread Joshua Schaeffer
Can you manage networks on different VLANs in LXD 3.x that connect to a
dedicated physical interface for all container traffic?

For example, let's say I have 10 containers on 10 different VLANs and I
want all of their traffic to go across a dedicated physical interface on the
host. The dedicated interface is not the management interface and is
connected to a trunk port on the switch.

I know how to do this outside of LXD (meaning LXD doesn't manage the
interfaces): I set up several raw VLAN devices and then create bridges for
each VLAN device. I can then use such a bridge on a container/profile, but I
haven't been able to figure out how to do this directly through LXD (i.e.
using the `lxc network ...` command). Is this possible?
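
For reference, the manual (non-LXD-managed) setup described above looks roughly 
like this; names such as eth1, VLAN 10 and br-vlan10 are placeholders:

    # /etc/network/interfaces on the host: a tagged sub-interface plus a bridge per VLAN
    auto eth1.10
    iface eth1.10 inet manual
        vlan-raw-device eth1

    auto br-vlan10
    iface br-vlan10 inet manual
        bridge_ports eth1.10
        bridge_stp off
        bridge_fd 0

    # then point a profile's NIC at the bridge
    lxc profile device add vlan10-profile eth0 nic nictype=bridged parent=br-vlan10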

Thanks,
Joshua Schaeffer


[lxc-users] No certificate when adding remote

2017-10-01 Thread Joshua Schaeffer
I've setup my own PKI infrastructure for my LXD hosts and I'm trying to add a 
remote, but I'm getting an error about no certificate being provided:

    lxc remote add blllxd03 https://blllxd03.appendata.net:8443
    Admin password for blllxd03:
    error: No client certificate provided

If I run it with debug I see this after entering the trust password:

    [...]
    Admin password for blllxd03:
    INFO[10-01|11:50:41] Sending request to LXD   etag= 
method=POST url=https://blllxd03.appendata.net:8443/1.0/certificates
    DBUG[10-01|11:50:41]
        {
            "name": "",
            "type": "client",
            "certificate": "",
            "password": "XXX"
        }
    DBUG[10-01|11:50:41] Trying to remove 
/home/lxduser/.config/lxc/servercerts/blllxd03.crt
    error: No client certificate provided

Why would the remote not send its certificate? I have the files server.ca, 
server.crt, and server.key in /var/lib/lxd/ for both the server and the remote. 
I replaced the default files with my own. I can verify with OpenSSL that 
all the certs are valid and signed by the CA.
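
One thing worth checking, offered as an observation on the error text rather 
than a confirmed fix: the message complains about the client certificate, and 
the lxc client keeps its own key pair (and, in PKI mode, a copy of the CA) 
under ~/.config/lxc rather than /var/lib/lxd. A quick sanity check might be:

    ls -l ~/.config/lxc/
    # expected, roughly: client.crt  client.key  client.ca  servercerts/
    openssl verify -CAfile ~/.config/lxc/client.ca ~/.config/lxc/client.crt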

Thanks,
Joshua Schaeffer

[lxc-users] BIND isn't recognizing container CPU limits

2017-08-03 Thread Joshua Schaeffer
I saw this in my log file when I started BIND9 and was a little concerned, since 
I limit the container to 2 CPUs and this is an unprivileged container:

Aug  2 16:04:39 blldns01 named[320]: found 32 CPUs, using 32 worker threads
Aug  2 16:04:39 blldns01 named[320]: using 16 UDP listeners per interface
Aug  2 16:04:39 blldns01 named[320]: using up to 4096 sockets

From the container:

root@blldns01:~# cat /proc/cpuinfo | grep -c processor
2

From the host:

lxduser@blllxd01:~$ lxc config get blldns01 limits.cpu
2

Why would BIND be able to see all the cores of the host? I can certainly limit 
BIND to using fewer threads, but it shouldn't be able to see that many cores in 
the first place. I'm using LXD 2.15.
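
A likely explanation, offered as an assumption: limits.cpu is enforced through 
cgroups and lxcfs masks /proc/cpuinfo, but BIND sizes its thread pool via 
sysconf()/sysfs, which isn't virtualized, so it still sees the host's cores. 
Capping the worker threads explicitly works around it, e.g. on Debian/Ubuntu:

    # /etc/default/bind9 (file name/path varies by release)
    OPTIONS="-u bind -n 2"

named's -n flag sets the number of worker threads it creates.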

Thanks,
Joshua Schaeffer

Re: [lxc-users] setgid error inside container

2017-07-24 Thread Joshua Schaeffer
I figured this out. LXD could use the range I listed below in subuid and 
subgid, but the container itself was still limited to roughly 65,000 IDs. I set 
security.idmap.isolated and security.idmap.size in my profile, restarted my 
containers, and was able to log in with my network credentials.
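
Roughly, the profile change was (a sketch; the profile name and the size value 
are assumptions, the size just has to cover your LDAP range):

    lxc profile set default security.idmap.isolated true
    lxc profile set default security.idmap.size 200000
    lxc restart <container>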

On 07/20/2017 11:09 AM, Joshua Schaeffer wrote:
> Hey guys,
>
> I'm trying to set up my subuid and subgid parameters correctly and I'm clearly 
> doing something wrong, as I keep getting "setgid: Invalid argument" when I try 
> to su to my user. I have all my accounts in LDAP and I've connected my 
> container to my infrastructure. It can see users and authenticate with LDAP, 
> Kerberos, etc., I just can't log in due to the uid/gid mapping. I'm on LXD 
> 2.15, and all my end users have UIDs/GIDs between 100,000 and 199,999. The LXD 
> container is running under a local user called "lxduser" on the host.
>
> root@bllldap01:~# getent passwd jschaeffer
> jschaeffer:*:10:10:Joshua Schaeffer:/home/jschaeffer:/bin/bash
>
> root@bllldap01:~# ldapwhoami -Q
> dn:uid=jschaeffer,ou=end users,ou=people,dc=appendata,dc=net
>
> root@bllldap01:~# ldapsearch -LLLQ -b "uid=jschaeffer,ou=End 
> Users,ou=People,dc=appendata,dc=net" -s base
> dn: uid=jschaeffer,ou=End Users,ou=People,dc=appendata,dc=net
> objectClass: top
>     objectClass: account
> objectClass: posixAccount
> uid: jschaeffer
>     cn: Joshua Schaeffer
> homeDirectory: /home/jschaeffer
> loginShell: /bin/bash
> gecos: Joshua Schaeffer
> gidNumber: 10
> uidNumber: 10
>
> When I try to actually log into the users I get the setgid error:
>
> root@bllldap01:~# su - jschaeffer
> setgid: Invalid argument
>
> Here is my /etc/subuid and /etc/subgid files on the LXD host:
>
> lxduser@blllxd01:~$ cat /etc/sub{uid,gid}
> lxd:10:100
> root:10:100
> lxduser:1065536:100
> lxd:10:100
> root:10:100
> lxduser:1065536:100
>
> I've restarted lxd.service and restarted all my containers after I made this 
> change. My understanding is, from my uid/gid files, that user 100,000 inside 
> the container should be mapped to 200,000 outside the container. Any help 
> would be appreciated.
>
> Thanks,
> Joshua Schaeffer


[lxc-users] MAC address prefix

2017-07-21 Thread Joshua Schaeffer
Is it possible to set a MAC address prefix on a profile or container? I see in 
/etc/lxc/defaults.conf that 'lxc.network.hwaddr' has unspecified 
placeholders (i.e. 00:16:3e:xx:xx:xx). However, if I try to do this on a profile 
it says that only interface-specific network keys are allowed:

lxduser@blllxd01:~$ lxc profile set 30_vlan_mgmt_server raw.lxc 
lxc.network.hwaddr=00:16:3e:1e:xx:xx
error: Only interface-specific ipv4/ipv6 lxc.network. keys are allowed

And the "x" placeholder doesn't fly on an actual network device either. It 
would be nice to be able to set a prefix based on a profile, not just globally.

Using LXD 2.15.

Thanks,
Joshua Schaeffer

[lxc-users] setgid error inside container

2017-07-20 Thread Joshua Schaeffer
Hey guys,

I'm trying to set up my subuid and subgid parameters correctly and I'm clearly 
doing something wrong, as I keep getting "setgid: Invalid argument" when I try 
to su to my user. I have all my accounts in LDAP and I've connected my 
container to my infrastructure. It can see users and authenticate with LDAP, 
Kerberos, etc., I just can't log in due to the uid/gid mapping. I'm on LXD 2.15, 
and all my end users have UIDs/GIDs between 100,000 and 199,999. The LXD 
container is running under a local user called "lxduser" on the host.

root@bllldap01:~# getent passwd jschaeffer
jschaeffer:*:10:100000:Joshua Schaeffer:/home/jschaeffer:/bin/bash

root@bllldap01:~# ldapwhoami -Q
dn:uid=jschaeffer,ou=end users,ou=people,dc=appendata,dc=net

root@bllldap01:~# ldapsearch -LLLQ -b "uid=jschaeffer,ou=End 
Users,ou=People,dc=appendata,dc=net" -s base
dn: uid=jschaeffer,ou=End Users,ou=People,dc=appendata,dc=net
objectClass: top
objectClass: account
objectClass: posixAccount
uid: jschaeffer
cn: Joshua Schaeffer
homeDirectory: /home/jschaeffer
loginShell: /bin/bash
gecos: Joshua Schaeffer
gidNumber: 10
uidNumber: 10

When I try to actually log into the users I get the setgid error:

root@bllldap01:~# su - jschaeffer
setgid: Invalid argument

Here is my /etc/subuid and /etc/subgid files on the LXD host:

lxduser@blllxd01:~$ cat /etc/sub{uid,gid}
lxd:10:100
root:10:100
lxduser:1065536:100
lxd:10:100
root:10:100
lxduser:1065536:100

I've restarted lxd.service and restarted all my containers after I made this 
change. My understanding is, from my uid/gid files, that user 100,000 inside 
the container should be mapped to 200,000 outside the container. Any help would 
be appreciated.
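
For anyone searching later: the per-container way to expose a specific UID/GID 
range is usually raw.idmap, provided root's entries in /etc/subuid and 
/etc/subgid on the host cover that range (the LXD daemon runs as root). A 
sketch, with the range taken from this message:

    lxc config set bllldap01 raw.idmap "both 100000-199999 100000-199999"
    lxc restart bllldap01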

Thanks,
Joshua Schaeffer

Re: [lxc-users] Adding a second disk to a container

2017-07-12 Thread Joshua Schaeffer

  
  
So is adding a storage volume the proper way to add a 2nd, 3rd, 4th... disk to 
a container, then? What is the relationship between "lxc storage" and "lxc 
storage volume"? It sounds like when a container is created it gets a container 
volume. What is the difference between a container volume and a custom volume? 
When I do an "lxc storage volume list int_lxd" I see my container volumes, but 
all of them have "used by" set to 0. Just trying to wrap my head around all of 
this. I've read the man page for lxc.storage and the GitHub configuration docs, 
but if there is more documentation I'd be happy to read it.

On 07/12/2017 02:25 PM, Stéphane Graber wrote:
> Sorry about that, looks like we're missing a test for "lxc storage
> volume set". I fixed the issue and added a test:
>
> https://github.com/lxc/lxd/pull/3541

No problem, glad I could help.


[lxc-users] Adding a second disk to a container

2017-07-12 Thread Joshua Schaeffer
I'm wondering what the best/recommended approach is to adding a second disk to 
a container. I'm using LXD 2.15 on Ubuntu 16.04. As an example I have a 
container where I've limited the disk size to 20GB:

lxduser@blllxd01:~$ lxc config show bllcloudctl01 | grep -A 4 root
  root:
path: /
pool: int_lxd
size: 20GB
type: disk

This works great, but say I want the container to have another disk device that 
is mounted elsewhere, like /mnt/disk1. I've tried adding that in my config but I 
always get an error about needing the source. Since I'm using ZFS I don't have 
an actual block device to pass to the source. I then played around with storage 
volumes, which appear to work, but I always get an error when editing the 
volume:

lxduser@blllxd01:~$ lxc storage volume create int_lxd vol1
Storage volume cinder created

lxduser@blllxd01:~$ lxc storage volume set int_lxd vol1 size=240GB
error: ETag doesn't match: 
834842d9406bd41f2f23c097e496434c3c263a022ef3fb1aaf214b13e4395771 vs 
b37ae1157f2a46dc1b24b2b561aefccc168ba822d5057330e94ca10ba47ccfb6

If I create the storage volume with the size parameter it works, but doesn't 
set the right size:

lxduser@blllxd01:~$ lxc storage volume delete int_lxd vol1
Storage volume cinder deleted

lxduser@blllxd01:~$ lxc storage volume create int_lxd vol1 size=240GB
Storage volume cinder created

lxduser@blllxd01:~$ lxc storage volume show int_lxd vol1
config:
  *size: 10GB*
description: ""
name: vol1
type: custom
used_by: []

I really just learned about storage volumes so I'm not even sure if I'm using 
them correctly or in the correct context here. Also, I get that ETag error 
whenever I edit the volume, even if I didn't make changes:

lxduser@blllxd01:~$ lxc storage volume delete int_lxd vol1
Storage volume cinder deleted

lxduser@blllxd01:~$ lxc storage volume create int_lxd vol1
Storage volume vol1 created

lxduser@blllxd01:~$ lxc storage volume edit int_lxd vol1
[I immediately exit the editor without making any changes]
Config parsing error: ETag doesn't match: 
446844b3e6b6bc70d7a101b5b281e4cc8fb568cfcd66424f32b2027c8215032a vs 
9e27a7a4de3badc96a0046c111094164619f7cfc6351534c7b7236769c85ace1
Press enter to open the editor again

I just want the container to look like it has two disks, one mounted on / and 
the other mounted under /mnt.
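
For completeness, the way a custom volume normally ends up mounted inside a 
container is via attach; a sketch using the names from this message:

    lxc storage volume create int_lxd vol1
    lxc storage volume attach int_lxd vol1 bllcloudctl01 /mnt/disk1
    # and later, if needed:
    lxc storage volume detach int_lxd vol1 bllcloudctl01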

Thanks,
Joshua Schaeffer

Re: [lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Joshua Schaeffer
Thanks for the explanation Stéphane, I will add the device locally. I
figured it was a change in versions that caused my discrepancy.

Joshua Schaeffer

On Wed, May 31, 2017 at 11:25 AM, Stéphane Graber <stgra...@ubuntu.com>
wrote:

> On Wed, May 31, 2017 at 11:01:39AM -0600, Joshua Schaeffer wrote:
> > I guess I should have mentioned an important change. When I switched from
> > BTRFS to ZFS I also went from LXD 2.0 to 2.13.
>
> Right and with LXD 2.8 we moved from always adding a container-local
> disk device to having it be inherited from the profile, which is what's
> causing the confusion here.
>
> >
> > On Wed, May 31, 2017 at 10:27 AM, Joshua Schaeffer <
> jschaeffer0...@gmail.com
> > > wrote:
> >
> > > I've recently switch from using BTRFS to ZFS backend, and my
> containers on
> > > the ZFS backend aren't inheriting the root device from my default
> profile:
> > >
> > > lxduser@raynor:~$ lxc config show fenix
> > > architecture: x86_64
> > > [snip]
> > > devices: {}
> > > ephemeral: false
> > > profiles:
> > > - default
> > > - 30_vlan_mgmt_server
>
> They are inheriting it, but you won't see the inherited stuff unless you
> pass --expanded to "lxc config show".
>
> > > The default profile, which the container is using has the root device
> with
> > > a pool specified:
> > >
> > > lxduser@raynor:~$ lxc profile show default
> > > config: {}
> > > description: Default LXD profile
> > > devices:
> > >   root:
> > > path: /
> > > pool: lxdpool
> > > type: disk
> > > name: default
> > >
> > > But the container isn't showing a root device (or any device for that
> > > matter), and I get an error when I try to set a size limit on the root
> > > device for that container:
> > >
> > > lxduser@raynor:~$ lxc config device set fenix root size 50G
> > > error: The device doesn't exist
>
> That's because it's an inherited device rather than one set at the
> container level. To override the inherited device you must add a new
> local device with the same name.
>
> lxc config device add fenix root disk pool=lxdpool path=/ size=50GB
>
> That should do the trick and will then show up in "lxc config show"
> (without --expanded) since it will be a container-local device.
>
> > > Is there a ZFS property that has to be set to get it to inherit the
> > > device? I was able to successfully create the root device on another
> > > container, but I don't want to create the device on each container, I
> just
> > > want to set it on the profile. I'm on LXD 2.13. Here is my storage
> device:
> > >
> > > lxduser@raynor:~$ lxc storage list
> > > +---------+--------+---------+---------+
> > > |  NAME   | DRIVER | SOURCE  | USED BY |
> > > +---------+--------+---------+---------+
> > > | lxdpool | zfs    | lxdpool | 15      |
> > > +---------+--------+---------+---------+
> > >
> > > Thanks,
> > > Joshua Schaeffer
>
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>

Re: [lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Joshua Schaeffer
I guess I should have mentioned an important change. When I switched from
BTRFS to ZFS I also went from LXD 2.0 to 2.13.

On Wed, May 31, 2017 at 10:27 AM, Joshua Schaeffer <jschaeffer0...@gmail.com
> wrote:

> I've recently switched from using a BTRFS to a ZFS backend, and my containers on
> the ZFS backend aren't inheriting the root device from my default profile:
>
> lxduser@raynor:~$ lxc config show fenix
> architecture: x86_64
> [snip]
> devices: {}
> ephemeral: false
> profiles:
> - default
> - 30_vlan_mgmt_server
>
> The default profile, which the container is using has the root device with
> a pool specified:
>
> lxduser@raynor:~$ lxc profile show default
> config: {}
> description: Default LXD profile
> devices:
>   root:
> path: /
> pool: lxdpool
> type: disk
> name: default
>
> But the container isn't showing a root device (or any device for that
> matter), and I get an error when I try to set a size limit on the root
> device for that container:
>
> lxduser@raynor:~$ lxc config device set fenix root size 50G
> error: The device doesn't exist
>
> Is there a ZFS property that has to be set to get it to inherit the
> device? I was able to successfully create the root device on another
> container, but I don't want to create the device on each container, I just
> want to set it on the profile. I'm on LXD 2.13. Here is my storage device:
>
> lxduser@raynor:~$ lxc storage list
> +---------+--------+---------+---------+
> |  NAME   | DRIVER | SOURCE  | USED BY |
> +---------+--------+---------+---------+
> | lxdpool | zfs    | lxdpool | 15      |
> +---------+--------+---------+---------+
>
> Thanks,
> Joshua Schaeffer
>

[lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Joshua Schaeffer
I've recently switched from using a BTRFS to a ZFS backend, and my containers on
the ZFS backend aren't inheriting the root device from my default profile:

lxduser@raynor:~$ lxc config show fenix
architecture: x86_64
[snip]
devices: {}
ephemeral: false
profiles:
- default
- 30_vlan_mgmt_server

The default profile, which the container is using has the root device with
a pool specified:

lxduser@raynor:~$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  root:
path: /
pool: lxdpool
type: disk
name: default

But the container isn't showing a root device (or any device for that
matter), and I get an error when I try to set a size limit on the root
device for that container:

lxduser@raynor:~$ lxc config device set fenix root size 50G
error: The device doesn't exist

Is there a ZFS property that has to be set to get it to inherit the device?
I was able to successfully create the root device on another container, but
I don't want to create the device on each container, I just want to set it
on the profile. I'm on LXD 2.13. Here is my storage device:

lxduser@raynor:~$ lxc storage list
+---------+--------+---------+---------+
|  NAME   | DRIVER | SOURCE  | USED BY |
+---------+--------+---------+---------+
| lxdpool | zfs    | lxdpool | 15      |
+---------+--------+---------+---------+
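
(Answered in the replies above, but for anyone reading this message first: the 
root device is inherited from the profile and only shows up in the expanded 
view, e.g.:)

    lxc config show fenix --expanded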

Thanks,
Joshua Schaeffer

Re: [lxc-users] subuids and subgid range with multiple LXC containers

2017-03-30 Thread Joshua Schaeffer
On Tue, Mar 28, 2017 at 7:07 PM, Serge E. Hallyn  wrote:

> One thing I've always thought would be useful, but not had the time to
> pursue, would be to have a concept of 'clients' or somesuch, where each
> client can get one or more unique ranges.  They can then use those
> ranges however they want, but no other clients will ever get their
> ranges.


+1 for this idea. I would find that extremely useful.

[lxc-users] Mounting ISO files inside LXD container

2016-09-28 Thread Joshua Schaeffer
I'm getting an error when I try to mount an ISO file into a container and
I'm not sure how to fix the problem. I've searched a bit and found somewhat
similar issues[1][2] but they were mostly related to LXC containers not LXD.

When I try to mount an ISO I get an error:

root@broodwar:~# mount -t iso9660 -o loop /root/Win10_English_x32.iso
/mnt/windows/x32
mount: /mnt/windows/x32: mount failed: Unknown error -1

I thought it was because I didn't have a loop device in my container, so I
added one, but I still get an error:

root@kerrigan:~# lxc config device add broodwar loop unix-block
path=/dev/loop0

root@broodwar:~# ls -l /dev/loop*
brw-rw 1 root root 7, 0 Sep 28 16:00 /dev/loop0

Running mount -v gives no extra messages. The ISO file itself is good, I
can mount it successfully on the host, just not the container. Does anyone
know how to fix this issue?
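
A common workaround, offered as a sketch rather than a fix for the loop mount 
itself: mount the ISO on the host and pass the mounted directory into the 
container as a disk device (paths here are placeholders):

    # on the host
    mkdir -p /srv/iso/win10_x32
    mount -t iso9660 -o loop,ro /root/Win10_English_x32.iso /srv/iso/win10_x32
    lxc config device add broodwar win10iso disk source=/srv/iso/win10_x32 path=/mnt/windows/x32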

Thanks,
Joshua Schaeffer

[1]
https://lists.linuxcontainers.org/pipermail/lxc-users/2015-November/010560.html
[2]
http://askubuntu.com/questions/376345/allow-loop-mounting-files-inside-lxc-containers

Re: [lxc-users] OpenVPN in Debian Jessie container

2016-05-30 Thread Joshua Schaeffer

For starters, from "man lxc.container.conf"

lxc.hook.autodev
   A hook to be run in the container's namespace after mounting
   has been done and after any mount hooks have run, but before
   the pivot_root, if lxc.autodev == 1.

You can never modprobe in an unprivileged container's namespace.

Another thing: AFAIK the hooks only accept one parameter, a script name. So 
you need to have a script (e.g. /usr/local/bin/my_script) inside the container.


I actually tried that already as well and it resulted in the exact same error:

lxc.autodev = 1
lxc.hook.autodev = /home/lxcuser/.local/share/lxc/autodev/vpn_barracks
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 10:200 rwm

lxcuser@corsair:~/.local/share/lxc$ cat autodev/vpn_barracks
#!/bin/bash
cd ${LXC_ROOTFS_MOUNT}/dev
mkdir net
mknod net/tun c 10 200
chmod 0666 net/tun

lxc-start -n vpn_barracks --logpriority=DEBUG

...
lxc-start 1464620477.814 INFO lxc_conf - conf.c:run_script_argv:362 - 
Executing script '/usr/share/lxcfs/lxc.mount.hook' for container 
'vpn_barracks', config section 'lxc'
  lxc-start 1464620477.893 INFO lxc_conf - conf.c:run_script_argv:362 - 
Executing script '/home/lxcuser/.local/share/lxc/autodev/vpn_barracks' for 
container 'vpn_barracks', config section 'lxc'
  lxc-start 1464620477.900 ERROR    lxc_conf - conf.c:run_buffer:342 - 
Script exited with status 1
  lxc-start 1464620477.900 ERROR    lxc_conf - conf.c:lxc_setup:3947 - 
failed to run autodev hooks for container 'vpn_barracks'.
  lxc-start 1464620477.900 ERROR    lxc_start - start.c:do_start:717 - 
failed to setup the container
  lxc-start 1464620477.900 ERROR    lxc_sync - sync.c:__sync_wait:51 - 
invalid sequence number 1. expected 2
  lxc-start 1464620477.942 ERROR    lxc_start - start.c:__lxc_start:1192 - 
failed to spawn 'vpn_barracks'
  lxc-start 1464620477.998 WARN     lxc_commands - 
commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive 
response
  lxc-start 1464620477.999 WARN     lxc_cgmanager - cgmanager.c:cgm_get:994 
- do_cgm_get exited with error
  lxc-start 1464620483.004 ERROR    lxc_start_ui - lxc_start.c:main:344 - 
The container failed to start.
  lxc-start 1464620483.004 ERROR    lxc_start_ui - lxc_start.c:main:346 - 
To get more details, run the container in foreground mode.
  lxc-start 1464620483.004 ERROR    lxc_start_ui - lxc_start.c:main:348 - 
Additional information can be obtained by setting the --logfile and 
--logpriority options.

Since the error was exactly the same I figured LXC was simply executing 
whatever parameter lxc.hook.autodev was provided, regardless of whether it was 
a file or not.


My best advice is to bind-mount /dev/net/tun from the host (lxc.mount.entry) instead 
of using lxc.hook.autodev, and try again. I'm not even sure that /dev/net/tun 
works for unprivileged containers (fuse doesn't), so if that still doesn't work, you 
probably want to try a privileged container.
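
For readers landing here, the bind-mount approach typically looks like the 
following container config lines; a sketch, assuming the tun module is already 
loaded on the host:

    # host side, once: make sure the device exists (modprobe tun)

    # container config
    lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
    lxc.cgroup.devices.allow = c 10:200 rwm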



Okay, thanks, I'll try this, especially after Wojtek's comments saying this 
should work.

Thanks,
Joshua


[lxc-users] OpenVPN in Debian Jessie container

2016-05-29 Thread Joshua Schaeffer

I'm trying to setup OpenVPN in an unprivileged container. The host and 
container are both Debian Jessie on LXC version 1.1.5. When I try to start 
OpenVPN I get:

Sat May 28 20:55:57 2016 us=360137 ERROR: Cannot open TUN/TAP dev /dev/net/tun: 
No such file or directory (errno=2)

So it makes sense that the container can't create the tun device. I looked 
around and found suggestions to add an autodev hook:

lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod 
net/tun c 10 200; chmod 0666 net/tun"

However when I try to start the container I get an error:

lxc-start -n vpn_barracks --logpriority=DEBUG

...
  lxc-start 1464541270.246 INFO lxc_conf - 
conf.c:mount_file_entries:2150 - mount points have been setup
  lxc-start 1464541270.247 INFO lxc_conf - conf.c:run_script_argv:362 - 
Executing script '/usr/share/lxcfs/lxc.mount.hook' for container 
'vpn_barracks', config section 'lxc'
  lxc-start 1464541270.332 INFO lxc_conf - conf.c:run_script_argv:362 - Executing 
script 'sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 
10 200; chmod 0666 net/tun"' for container 'vpn_barracks', config section 'lxc'
  lxc-start 1464541270.338 ERROR    lxc_conf - conf.c:run_buffer:342 - 
Script exited with status 1
  lxc-start 1464541270.338 ERROR    lxc_conf - conf.c:lxc_setup:3947 - 
failed to run autodev hooks for container 'vpn_barracks'.
  lxc-start 1464541270.338 ERROR    lxc_start - start.c:do_start:717 - 
failed to setup the container
  lxc-start 1464541270.338 ERROR    lxc_sync - sync.c:__sync_wait:51 - 
invalid sequence number 1. expected 2
  lxc-start 1464541270.374 ERROR    lxc_start - start.c:__lxc_start:1192 - 
failed to spawn 'vpn_barracks'
  lxc-start 1464541270.430 WARN     lxc_commands - 
commands.c:lxc_cmd_rsp_recv:172 - command get_init_pid failed to receive 
response
  lxc-start 1464541270.431 WARN     lxc_cgmanager - cgmanager.c:cgm_get:994 
- do_cgm_get exited with error
  lxc-start 1464541275.436 ERROR    lxc_start_ui - lxc_start.c:main:344 - 
The container failed to start.
  lxc-start 1464541275.436 ERROR    lxc_start_ui - lxc_start.c:main:346 - 
To get more details, run the container in foreground mode.
  lxc-start 1464541275.436 ERROR    lxc_start_ui - lxc_start.c:main:348 - 
Additional information can be obtained by setting the --logfile and 
--logpriority options.

I'd appreciate any pointers.

Thanks,
Joshua

[lxc-users] Setting static arp and DNS options from config file

2016-03-02 Thread Joshua Schaeffer
Newbie question here. I have an unprivileged container using the following
network configuration:

lxc.network.type= veth
lxc.network.link= br0-500
lxc.network.ipv4= 10.240.78.3/24
lxc.network.ipv4.gateway= 10.240.78.1
lxc.network.flags   = up
lxc.network.hwaddr  = 00:16:3e:0d:e9:e5

However I also have some settings in /etc/network/interfaces on the
container itself. I need to set a static ARP and I have some DNS options:

auto eth0
iface eth0 inet manual
post-up arp -f /etc/ethers
dns-nameserver 10.240.78.4
dns-search mpls.digitalriver.com thinksubscription.com appendata.net

How would I set the ARP entry and the DNS options from the config side?

If I'm reading the man page correctly, lxc.network.script.up is run on the
host side, so I don't think that would add the ARP entry inside the container.
Should I use a hook instead?

How would I set the DNS options as well? Should I be invoking resolvconf
directly?

Thanks,
Joshua

Re: [lxc-users] Connecting container to tagged VLAN

2016-01-28 Thread Joshua Schaeffer
On Wed, Jan 27, 2016 at 6:09 PM, Fajar A. Nugraha  wrote:

>
>
>> eth2 already works. I set it up for testing outside of all containers
>> (i.e. on the host only). From the host:
>>
>>
> That doesn't match what you said earlier.
>

It actually does. Remember that this LXC host is a virtual machine running
on VMware, which makes this whole situation more complex. I'll try to
clarify.

VLAN10, the native vlan, is 192.168.54.0/25. It's my management vlan.
VLAN500 is 10.240.78.0/24.

eth1 and eth2 are set up to connect to vlan500 because they were configured
that way through VMware. Normally you would be correct: on a physical server
eth2 would only be able to contact the native vlan, because no tagging
information is provided. However, VMware allows you to tag a NIC (it's
actually called a port group, but it is essentially VMware's way of saying
a NIC) from outside the VM guest. If you do this (as I have) then you don't
(and shouldn't) need to tag anything on the VM guest itself. So by just
looking at the guest it can look incorrect/confusing.

My original problem was that I was tagging the port group (a.k.a. VMware's
NIC) and also tagging eth1 inside the VM guest (a.k.a. the LXC host). Clearly
this causes problems. Because I was tagging eth1 but not eth2, that is where
the problem resided. I was trying to mimic a setup I have in my home lab
where I tag an Ethernet device, add it to a bridge, then use that bridge in
a container, but my home lab uses a physical LXC host. Hopefully I've
explained it in a way that clears this up.

Either way, I have that problem resolved. Now I'm just wondering why the
container is not adding the gateway's MAC address when it ARPs for it (as
I explained in my last email).


>
> What I meant, check that ETH1 works on the host. If eth2 is on the same
> network, it might interfere with settings. So disable eth2 first, then test
> eth1 on the host. Without bridging.
>

Okay that makes sense.

Thanks,
Joshua

Re: [lxc-users] Connecting container to tagged VLAN

2016-01-27 Thread Joshua Schaeffer
On Wed, Jan 27, 2016 at 4:38 PM, Guido Jäkel  wrote:

> Dear Joshua,
>
> you wrote that there's a trunk on eth1 and eth2. But for eth2, I can't
> see any VLAN (501?) detrunking as with eth1 & eth1.500. On the other hand
> you wrote that eth2 is working. Are you sure that you really receive this
> trunk of 3 VLANs on both of your eths?
>

I started to think about this as well and I've found the reason. VMware
allows you to tag NICs at the hypervisor level. eth1 and eth2 were both
set up under VLAN 500, so no tagging on the LXC host was required, which is
why eth2 worked. So the lesson there is: don't mix dot1q; either set it on
the hypervisor and leave it completely out of the LXC host and container, or
vice versa.

I've completely removed VLAN tagging from my LXC host and I'm making progress,
but I'm still running into odd situations:

lxcuser@prvlxc01:~$ sudo ip -d link show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
2: eth0:  mtu 1500 qdisc pfifo_fast state
UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:be:13:94 brd ff:ff:ff:ff:ff:ff promiscuity 0
3: eth1:  mtu 1500 qdisc pfifo_fast master
br0-500 state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:be:46:c5 brd ff:ff:ff:ff:ff:ff promiscuity 1
bridge_slave
4: eth2:  mtu 1500 qdisc noop state DOWN mode DEFAULT
group default qlen 1000
link/ether 00:50:56:be:26:4f brd ff:ff:ff:ff:ff:ff promiscuity 0
5: eth3:  mtu 1500 qdisc noop state DOWN mode DEFAULT
group default qlen 1000
link/ether 00:50:56:be:01:d8 brd ff:ff:ff:ff:ff:ff promiscuity 0
6: br0-500:  mtu 1500 qdisc noqueue state
UP mode DEFAULT group default
link/ether 00:50:56:be:46:c5 brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge
7: lxcbr0:  mtu 1500 qdisc noqueue state
UNKNOWN mode DEFAULT group default
link/ether de:ef:8c:53:01:0b brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge
9: vethKAG02C:  mtu 1500 qdisc pfifo_fast
master br0-500 state UP mode DEFAULT group default qlen 1000
link/ether fe:bf:b5:cf:f0:83 brd ff:ff:ff:ff:ff:ff promiscuity 1
veth
bridge_slave

*Scenario 1*: When assigning an IP directly to eth1 on the host, no
bridging involved, no containers involved (Success):

/etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.240.78.3/24

route -n
10.240.78.0 0.0.0.0 255.255.255.0   U 0  00 eth1

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=8.25 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=2.59 ms
^C
--- 10.240.78.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.597/5.425/8.254/2.829 ms

*Scenario 2*: When assigning an IP to a bridge and making eth1 a slave to
the bridge, no containers involved (Success):

/etc/network/interfaces
auto eth1
iface eth1 inet manual

auto br0-500
iface br0-500 inet static
address 10.240.78.3/24
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0


route -n
10.240.78.0 0.0.0.0 255.255.255.0   U 0  00
br0-500

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=3.26 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=1.51 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=2.30 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.514/2.360/3.262/0.715 ms

*Scenario 3*: Same scenario as above, except the bridge is not assigned an
IP, and a container is created that connects to the same bridge (Failure):

/etc/network/interfaces
auto eth1
iface eth1 inet manual

auto br0-500
iface br0-500 inet static
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0

~/.local/share/lxc/c4/config
# Network configuration
lxc.network.type = veth
lxc.network.link = br0-500
lxc.network.ipv4 = 10.240.78.3/24
lxc.network.ipv4.gateway = 10.240.78.1
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:f7:0a:83

route -n (on host)
10.240.78.0 0.0.0.0 255.255.255.0   U 0  00
br0-500

route -n (inside container)
10.240.78.0 0.0.0.0 255.255.255.0   U 0  00 eth0

ping (on host)
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=1.12 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=1.17 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=6.54 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt 

[lxc-users] Connecting container to tagged VLAN

2016-01-27 Thread Joshua Schaeffer
I'm trying to set up a container on a new VLAN that only allows tagged
traffic and I'm having varied success. Maybe somebody can point me in the
right direction. I can ping the gateway from the host but not from the
container, and I can't see what I'm missing. I'm using LXC 1.1.5 on Debian
Jessie. The container is unprivileged. The host itself is a VM running on
VMware. The VM has 3 NICs. eth0 is for my management network and the
other two NICs (eth1 and eth2) are set up to connect to this VLAN (vlan id
500).

/etc/network/interfaces
# The second network interface
auto eth1
iface eth1 inet manual

# The third network interface
auto eth2
iface eth2 inet static
address 10.240.78.4/24
gateway 10.240.78.1

iface eth1.500 inet manual
vlan-raw-device eth1

auto br0-500
iface br0-500 inet manual
bridge_ports eth1.500
bridge_stp off
bridge_fd 0
bridge_maxwait 0

I've setup br0-500 to use with my container:

# Network configuration
lxc.network.type = veth
lxc.network.link = br0-500
lxc.network.ipv4 = 10.240.78.3/24
lxc.network.ipv4.gateway = 10.240.78.1
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:3d:51:af

When I start the container everything seems to be in order:

eth0  Link encap:Ethernet  HWaddr 00:16:3e:3d:51:af
  inet addr:10.240.78.3  Bcast:10.240.78.255  Mask:255.255.255.0
  inet6 addr: fe80::216:3eff:fe3d:51af/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
  TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:648 (648.0 B)  TX bytes:774 (774.0 B)

Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 10.240.78.1 0.0.0.0 UG0  00 eth0
10.240.78.0 0.0.0.0 255.255.255.0   U 0  00 eth0

But when I try to ping the gateway I get no response:

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
From 10.240.78.3 icmp_seq=1 Destination Host Unreachable
From 10.240.78.3 icmp_seq=2 Destination Host Unreachable
From 10.240.78.3 icmp_seq=3 Destination Host Unreachable
From 10.240.78.3 icmp_seq=4 Destination Host Unreachable
From 10.240.78.3 icmp_seq=5 Destination Host Unreachable
From 10.240.78.3 icmp_seq=6 Destination Host Unreachable
^C
--- 10.240.78.1 ping statistics ---
7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6030ms

Address  HWtype  HWaddress   Flags Mask
 Iface
10.240.78.1  (incomplete)
 eth0

Running tcpdump on eth1 on the host, I can see the arp requests coming
through the host but there is no reply from the gateway.

lxcuser@prvlxc01:~$ su root -c "tcpdump -i eth1 -Uw - | tcpdump -en -r -
vlan 500"
Password:
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size
262144 bytes
reading from file -, link-type EN10MB (Ethernet)
11:35:34.589795 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:35.587647 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:36.587413 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:37.604816 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:38.603408 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:39.603387 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:40.620677 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:41.619399 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
^C
Session terminated, terminating shell...tcpdump: pcap_loop: error reading
dump file: Interrupted system call
16 packets captured
17 packets received by filter
0 packets dropped by kernel

I feel that this is a setup problem with the router, but I'm not getting
much help from my networking team so I'm kind of asking all around to see
if anybody has any good ideas. The only other source of the problem I can
think of is with VMware. Maybe somebody more familiar with the hypervisor
has seen this issue before? I have every port group on 
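
One way to take both LXC and the hypervisor guesswork out of the picture is to give the host itself a temporary address on VLAN 500 and see whether the gateway answers the host directly (a debugging sketch only; eth1 and VLAN 500 are taken from the capture above, and 10.240.78.4 is a hypothetical spare address on that subnet):

    sudo ip link add link eth1 name eth1.500 type vlan id 500
    sudo ip link set eth1.500 up
    sudo ip addr add 10.240.78.4/24 dev eth1.500
    ping -c 3 10.240.78.1
    sudo ip link del eth1.500    # clean up when done

If the host gets no ARP reply either, the problem is upstream of LXC (port group tagging, the physical switch, or the router itself).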

Re: [lxc-users] CGManager error on debian

2016-01-14 Thread Joshua Schaeffer
On Wed, Jan 13, 2016 at 8:45 PM, Fajar A. Nugraha <l...@fajar.net> wrote:

> On Thu, Jan 14, 2016 at 6:13 AM, Joshua Schaeffer <
> jschaeffer0...@gmail.com> wrote:
>
>> I'm getting an error when trying to start an unprivileged container on
>> Debian Jessie. LXC version 1.1.2.
>>
>
> Why 1.1.2? Where did you get it from?
>
Oh, I had installed 1.1.2 from source a while ago and just haven't gotten
around to upgrading to 1.1.5. It's a machine I haven't touched in a bit.

>
> My guess is your user session does not have its own cgroups. See
> http://debian-lxc.github.io/Create%20Unprivileged%20Jessie%20Container.html
>
> In particular, the smallest modification you need to make is
> http://debian-lxc.github.io/Create%20User-Owned%20Cgroup.html
>
> If you're willing to use third-party packages, you can simply follow the
> entire howto: http://debian-lxc.github.io/installation.html
>

Thanks I'll be able to test this out a little later today.
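
For the archive, the smallest change from that howto amounts to giving the login session its own cgroups before calling lxc-start. A rough sketch, assuming cgmanager's cgm client is installed (loop over the controllers listed in /proc/cgroups if "all" is not accepted on your version):

    sudo cgm create all $USER
    sudo cgm chown all $USER $(id -u) $(id -g)
    cgm movepid all $USER $$
    lxc-start -n append01

The movepid step has to be repeated in every new login shell unless something like the third-party packages in the howto does it for you.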

[lxc-users] CGManager error on debian

2016-01-13 Thread Joshua Schaeffer
I'm getting an error when trying to start an unprivileged container on
Debian Jessie. LXC version 1.1.2. I installed CGManager using the
instructions here: https://linuxcontainers.org/cgmanager/getting-started/.
There weren't any problems installing. Creating the container succeeded
without issue.

jschaeffer@prvlxc01:~$ lxc-start -n append01
Connection from private client
Disconnected from private client
Connection from private client
Disconnected from private client
Connection from private client
Disconnected from private client
Connection from private client
Disconnected from private client
Connection from private client
Disconnected from private client
Connection from private client
Connection from private client
Create: Client fd is: 7 (pid=3994, uid=1000, gid=1000)
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/blkio
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/cpu
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/cpuset
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/devices
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/freezer
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/memory
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/net_cls
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/perf_event
cgmanager:do_create_main: pid 3994 (uid 1000 gid 1000) may not create under
/run/cgmanager/fs/none,name=systemd/system.slice/ssh.service
cgmanager_create: returning 0; existed is -1
Disconnected from private client
Connection from private client
MovePid: Client fd is: 7 (pid=3994, uid=1000, gid=1000)
cgmanager: Invalid path /run/cgmanager/fs/perf_event/lxc/append01
cgmanager:per_ctrl_move_pid_main: Invalid path
/run/cgmanager/fs/perf_event/lxc/append01
Disconnected from private client
Connection from private client
Disconnected from private client
Remove: Client fd is: 7 (pid=3994, uid=1000, gid=1000)
Disconnected from private client
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in
foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by
setting the --logfile and --logpriority options.


jschaeffer@prvlxc01:~$ cat ~/.local/share/lxc/
append01/ c1/   c3/   lxc-monitord.log
jschaeffer@prvlxc01:~$ cat ~/.local/share/lxc/append01/append01.log
  lxc-start 1452726692.572 ERROR    lxc_cgmanager - cgmanager.c:lxc_cgmanager_enter:694 - call to cgmanager_move_pid_sync failed: invalid request
  lxc-start 1452726692.657 ERROR    lxc_start - start.c:__lxc_start:1164 - failed to spawn 'append01'
  lxc-start 1452726697.662 ERROR    lxc_start_ui - lxc_start.c:main:344 - The container failed to start.
  lxc-start 1452726697.662 ERROR    lxc_start_ui - lxc_start.c:main:346 - To get more details, run the container in foreground mode.
  lxc-start 1452726697.663 ERROR    lxc_start_ui - lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.


I've done a little searching on "call to cgmanager_move_pid_sync failed"
but couldn't find anything useful.
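
One quick check that narrows this down (a diagnostic only, not a fix): look at which cgroups the shell is in before running lxc-start:

    cat /proc/self/cgroup

If the entries end in something like /system.slice/ssh.service rather than a cgroup owned by your user, cgmanager will refuse to create the container's cgroups there, which is exactly what the "may not create under ..." lines above are saying.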

Re: [lxc-users] Container doesn't connect to bridge

2015-10-26 Thread Joshua Schaeffer
I already have networking setup in the container:

root@thinkweb:/# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
 address 192.168.54.110
 netmask 255.255.255.128
 gateway 192.168.54.1

When I add lxc.network.ipv4.gateway to the config it now works. Why would
adding the gateway to the config work, but not in /etc/network/interfaces? I've
never needed to add the gateway to the config before.
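
For reference, the line that made it work is of this form (gateway value from the interfaces file above; option name per lxc.container.conf, as Fajar notes below):

    lxc.network.ipv4.gateway = 192.168.54.1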


On Sat, Oct 24, 2015 at 12:50 AM, Fajar A. Nugraha <l...@fajar.net> wrote:

> On Sat, Oct 24, 2015 at 5:34 AM, Joshua Schaeffer <
> jschaeffer0...@gmail.com> wrote:
>
>> I set the virtual switch that the host uses to promiscuous mode and I can
>> ping the gateway and other machines on my subnet from the container,
>> however I still cannot get to the outside world:
>>
>>
>
>> Is this because of my routing table on the container?
>>
>> Container:
>>>>> root@thinkweb:~# route -n
>>>>> Kernel IP routing table
>>>>> Destination Gateway Genmask Flags Metric Ref
>>>>>  Use Iface
>>>>> 192.168.54.00.0.0.0 255.255.255.128 U 0  0
>>>>>  0 eth0
>>>>>
>>>>>
>
>
>> lxc.network.ipv4   = 192.168.54.110/25
>>>>>>
>>>>>
>
> Obviously.
>
> You either need "lxc.network.ipv4.gateway" (see "man lxc.container.conf"),
> or setup networking inside the container (e.g on /etc/network/interfaces)
>
> --
> Fajar
>

[lxc-users] Container doesn't connect to bridge

2015-10-23 Thread Joshua Schaeffer
I have an LXC container on version 1.1.2 on Debian that cannot connect to
the network. My host has br0 set up and I can access any machine on the
network and the internet from the host:

This is the host:
jschaeffer@prvlxc01:~$ sudo ifconfig
[sudo] password for jschaeffer:
br0   Link encap:Ethernet  HWaddr 00:50:56:be:13:94
  inet addr:192.168.54.65  Bcast:192.168.54.127
Mask:255.255.255.128
  inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:9891 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4537 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:4078480 (3.8 MiB)  TX bytes:521427 (509.2 KiB)

eth0  Link encap:Ethernet  HWaddr 00:50:56:be:13:94
  inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:10872 errors:0 dropped:0 overruns:0 frame:0
  TX packets:5085 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:4159749 (3.9 MiB)  TX bytes:575863 (562.3 KiB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethAGP5QO Link encap:Ethernet  HWaddr fe:fa:9c:21:8d:0b
  inet6 addr: fe80::fcfa:9cff:fe21:8d0b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:536 errors:0 dropped:0 overruns:0 frame:0
  TX packets:3013 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:49648 (48.4 KiB)  TX bytes:332247 (324.4 KiB)

From the container I cannot even reach the gateway:

This is the container:
root@thinkweb:/# ifconfig
eth0  Link encap:Ethernet  HWaddr aa:0a:f7:64:12:db
  inet addr:192.168.54.110  Bcast:192.168.54.127
Mask:255.255.255.128
  inet6 addr: fe80::a80a:f7ff:fe64:12db/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:3194 errors:0 dropped:0 overruns:0 frame:0
  TX packets:536 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:352314 (344.0 KiB)  TX bytes:49648 (48.4 KiB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:336 (336.0 B)  TX bytes:336 (336.0 B)

root@thinkweb:/# ping 192.168.54.1
PING 192.168.54.1 (192.168.54.1) 56(84) bytes of data.
^C
--- 192.168.54.1 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6049ms

jschaeffer@prvlxc01:~$ cat /var/lib/lxc/thinkweb/config
cat: /var/lib/lxc/thinkweb/config: Permission denied
jschaeffer@prvlxc01:~$ sudo cat /var/lib/lxc/thinkweb/config
# Template used to create this
container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: -d debian -r jessie -a amd64
# For additional config options, please look at lxc.container.conf(5)

# Distribution configuration
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.arch = x86_64

# Container specific configuration
lxc.rootfs = /var/lib/lxc/thinkweb/rootfs
lxc.utsname = thinkweb
lxc.tty = 4
lxc.pts = 1024
lxc.cap.drop= sys_module mac_admin
mac_override sys_time
# When using LXC with apparmor, uncomment the next line to run
unconfined:
#lxc.aa_profile = unconfined

# Network configuration
lxc.network.type= veth
lxc.network.flags   = up
lxc.network.link= br0
lxc.network.ipv4   = 192.168.54.110/25
lxc.network.name= eth0

## Limits
lxc.cgroup.cpu.shares   = 1024
lxc.cgroup.cpuset.cpus  = 0,1,2,3
lxc.cgroup.memory.limit_in_bytes= 2G
#lxc.cgroup.memory.memsw.limit_in_bytes = 3G


Thanks,
Joshua

Re: [lxc-users] Container doesn't connect to bridge

2015-10-23 Thread Joshua Schaeffer
Here ya go. It looks like the routing table is off for the container, or am
I just misreading that? Also, I assigned the veth a MAC address from the
config file. Everything still appears to be the same, no change.

Host:
jschaeffer@prvlxc01:~$ sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.1    0.0.0.0         UG    0      0        0 br0
192.168.54.0    0.0.0.0         255.255.255.128 U     0      0        0 br0

jschaeffer@prvlxc01:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

allow-ovs br0
iface br0 inet static
address 192.168.54.65
netmask 255.255.255.128
gateway 192.168.54.1
ovs_type OVSBridge
ovs_ports eth0

# The primary network interface
allow-br0 eth0
iface eth0 inet manual
ovs_bridge br0
ovs_type OVSPort



Container:
root@thinkweb:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.54.0    0.0.0.0         255.255.255.128 U     0      0        0 eth0

root@thinkweb:~# arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.54.65            ether   00:50:56:be:13:94   C                     eth0
192.168.54.1             ether   00:13:c4:f2:64:41   C                     eth0


On Fri, Oct 23, 2015 at 12:23 PM, Benoit GEORGELIN - Association Web4all <
benoit.george...@web4all.fr> wrote:

> Hi,
>
> can you provide from  the host and from the container :
>
> route -n
>
> can you provide from the container  :
>
> arp -n
>
> can you also give the bridge configuration from /etc/network/interfaces
>
> LXC configuration looks good to me .
> I would try to set the mac address manually in the configuration file like
> :
>
> lxc.network.hwaddr = fe:fa:9c:21:8d:0b
>
> Regards,
>
> Benoît Georgelin -
> To help protect the environment, please only print this email if
> necessary
>
> --
> From: "Joshua Schaeffer" <jschaeffer0...@gmail.com>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Friday, 23 October 2015 13:40:35
> Subject: [lxc-users] Container doesn't connect to bridge
>
> I have a lxc container on version 1.1.2 on Debian that cannot connect to
> the network. My host has br0 setup and I can access any machine on the
> network and internet from the host:
>
> This is the host:
> jschaeffer@prvlxc01:~$ sudo ifconfig
> [sudo] password for jschaeffer:
> br0   Link encap:Ethernet  HWaddr 00:50:56:be:13:94
>   inet addr:192.168.54.65  Bcast:192.168.54.127
> Mask:255.255.255.128
>   inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:9891 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:4537 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:4078480 (3.8 MiB)  TX bytes:521427 (509.2 KiB)
>
> eth0  Link encap:Ethernet  HWaddr 00:50:56:be:13:94
>   inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:10872 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:5085 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:4159749 (3.9 MiB)  TX bytes:575863 (562.3 KiB)
>
> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet6 addr: ::1/128 Scope:Host
>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
> vethAGP5QO Link encap:Ethernet  HWaddr fe:fa:9c:21:8d:0b
>   inet6 addr: fe80::fcfa:9cff:fe21:8d0b/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:536 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:3013 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:49648 (48.4 KiB)  TX bytes:332247 (324.4 KiB)
>
> From the container I cannot even reach the gateway:
>
> This is the container:
> root@thinkweb:/# ifconfig
> eth0  Link encap:Ethernet  HWaddr aa:0a:f7:64:12:db
>   inet addr:192.168.54.110  Bcast:192.168.54.127
> Mask:

Re: [lxc-users] Container doesn't connect to bridge

2015-10-23 Thread Joshua Schaeffer
Oh, also forgot to mention that I'm using OVS to create the bridge. I
didn't think this would be a problem if I got the bridge working on the
host, but let me know if I've missed something.

Thanks,
Joshua

On Fri, Oct 23, 2015 at 1:36 PM, Joshua Schaeffer <jschaeffer0...@gmail.com>
wrote:

> Here ya go. It looks like the routing table is off for the container or am
> I just misreading that. Also I assigned the veth an mac address from the
> config file. Everything still appears to be the same, no change.
>
> Host:
> jschaeffer@prvlxc01:~$ sudo route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric RefUse
> Iface
> 0.0.0.0 192.168.54.10.0.0.0 UG0  00 br0
> 192.168.54.00.0.0.0 255.255.255.128 U 0  00 br0
>
> jschaeffer@prvlxc01:~$ cat /etc/network/interfaces
> # This file describes the network interfaces available on your system
> # and how to activate them. For more information, see interfaces(5).
>
> source /etc/network/interfaces.d/*
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> allow-ovs br0
> iface br0 inet static
> address 192.168.54.65
> netmask 255.255.255.128
> gateway 192.168.54.1
> ovs_type OVSBridge
> ovs_ports eth0
>
> # The primary network interface
> allow-br0 eth0
> iface eth0 inet manual
> ovs_bridge br0
> ovs_type OVSPort
>
>
>
> Container:
> root@thinkweb:~# route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric RefUse
> Iface
> 192.168.54.00.0.0.0 255.255.255.128 U 0  00
> eth0
>
> root@thinkweb:~# arp -n
> Address  HWtype  HWaddress   Flags Mask
>  Iface
> 192.168.54.65ether   00:50:56:be:13:94   C
> eth0
> 192.168.54.1 ether   00:13:c4:f2:64:41   C
> eth0
>
>
> On Fri, Oct 23, 2015 at 12:23 PM, Benoit GEORGELIN - Association Web4all <
> benoit.george...@web4all.fr> wrote:
>
>> Hi,
>>
>> can you provide from  the host and from the container :
>>
>> route -n
>>
>> can you provide from the container  :
>>
>> arp -n
>>
>> can you also give the bridge configuration from /etc/network/interfaces
>>
>> LXC configuration looks good to me .
>> I would try to set the mac address manually in the configuration file
>> like :
>>
>> lxc.network.hwaddr = fe:fa:9c:21:8d:0b
>>
>> Regards,
>>
>> Benoît Georgelin -
>> To help protect the environment, please only print this email if
>> necessary
>>
>> --
>> From: "Joshua Schaeffer" <jschaeffer0...@gmail.com>
>> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
>> Sent: Friday, 23 October 2015 13:40:35
>> Subject: [lxc-users] Container doesn't connect to bridge
>>
>> I have a lxc container on version 1.1.2 on Debian that cannot connect to
>> the network. My host has br0 setup and I can access any machine on the
>> network and internet from the host:
>>
>> This is the host:
>> jschaeffer@prvlxc01:~$ sudo ifconfig
>> [sudo] password for jschaeffer:
>> br0   Link encap:Ethernet  HWaddr 00:50:56:be:13:94
>>   inet addr:192.168.54.65  Bcast:192.168.54.127
>> Mask:255.255.255.128
>>   inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
>>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>   RX packets:9891 errors:0 dropped:0 overruns:0 frame:0
>>   TX packets:4537 errors:0 dropped:0 overruns:0 carrier:0
>>   collisions:0 txqueuelen:0
>>   RX bytes:4078480 (3.8 MiB)  TX bytes:521427 (509.2 KiB)
>>
>> eth0  Link encap:Ethernet  HWaddr 00:50:56:be:13:94
>>   inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
>>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>   RX packets:10872 errors:0 dropped:0 overruns:0 frame:0
>>   TX packets:5085 errors:0 dropped:0 overruns:0 carrier:0
>>   collisions:0 txqueuelen:1000
>>   RX bytes:4159749 (3.9 MiB)  TX bytes:575863 (562.3 KiB)
>>
>> loLink encap:Local Loopback
>>   inet addr:127.0.0.1  Mask:255.0.0.0
>>   inet6 addr: ::1/128 Scope:Host
>>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

Re: [lxc-users] Container doesn't connect to bridge

2015-10-23 Thread Joshua Schaeffer
Alright, making progress on this. I forgot to mention that the host is a VM
running off of VMWare... slipped my mind :)

I set the virtual switch that the host uses to promiscuous mode and I can
ping the gateway and other machines on my subnet from the container,
however I still cannot get to the outside world:

From the container:
root@thinkweb:/# ping 192.168.54.1
PING 192.168.54.1 (192.168.54.1) 56(84) bytes of data.
64 bytes from 192.168.54.1: icmp_seq=1 ttl=255 time=2.98 ms
64 bytes from 192.168.54.1: icmp_seq=2 ttl=255 time=5.01 ms
64 bytes from 192.168.54.1: icmp_seq=3 ttl=255 time=1.10 ms
^C
--- 192.168.54.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 1.105/3.035/5.014/1.597 ms
root@thinkweb:/# ping 192.168.54.65
PING 192.168.54.65 (192.168.54.65) 56(84) bytes of data.
64 bytes from 192.168.54.65: icmp_seq=1 ttl=64 time=0.245 ms
64 bytes from 192.168.54.65: icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from 192.168.54.65: icmp_seq=3 ttl=64 time=0.047 ms
^C
--- 192.168.54.65 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.041/0.111/0.245/0.094 ms
root@thinkweb:/# ping 8.8.8.8
connect: Network is unreachable

Is this because of my routing table on the container?
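
Almost certainly, yes: the container's routing table shown earlier in the thread has only the connected 192.168.54.0/25 route and no default route, which is exactly what "Network is unreachable" for 8.8.8.8 means. A quick test from inside the container, using the gateway that already answers ping:

    ip route add default via 192.168.54.1
    ping -c 3 8.8.8.8

The durable fix is a gateway that actually gets applied at start, e.g. the lxc.network.ipv4.gateway line discussed elsewhere in this thread.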

Thanks,
Joshua

On Fri, Oct 23, 2015 at 3:50 PM, Joshua Schaeffer <jschaeffer0...@gmail.com>
wrote:

> Okay, ip_forward was set to 0 on the host. I changed it to 1, but I still
> wasn't able to ping the gateway from the container. iptables rules is set
> to accept for INPUT, FORWARD, and OUTPUT on the host:
>
> jschaeffer@prvlxc01:~$ sudo iptables -L
> Chain INPUT (policy ACCEPT)
> target prot opt source   destination
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source   destination
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source   destination
>
> Here is the OVS db output:
>
> jschaeffer@prvlxc01:~$ sudo ovs-vsctl show
> [sudo] password for jschaeffer:
> 4e502746-9746-4972-8cb4-cf27f7b7332f
> Bridge "br0"
> Port "veth52B8DS"
> Interface "veth52B8DS"
> Port vethYERYXP
> Interface vethYERYXP
> Port "vethAGP5QO"
> Interface "vethAGP5QO"
> Port "eth0"
> Interface "eth0"
> Port "veth6WFED2"
> Interface "veth6WFED2"
> Port "br0"
> Interface "br0"
> type: internal
> ovs_version: "2.3.0"
>
> Not sure if this is a problem or not, but I ran ifconfig on the host again
> and it looks like the last 6 digits of the veth changed (maybe because I
> changed the lxc's config to include the hardward address?). This particular
> veth is not included in the ovs output:
>
> jschaeffer@prvlxc01:~$ sudo ifconfig
> [...]
> vethJMVQHJ Link encap:Ethernet  HWaddr fe:1f:8a:a9:25:52
>   inet6 addr: fe80::fc1f:8aff:fea9:2552/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:15 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:216 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:1054 (1.0 KiB)  TX bytes:21554 (21.0 KiB)
>
> Thanks,
> Joshua
>
> On Fri, Oct 23, 2015 at 2:25 PM, Benoit GEORGELIN - Association Web4all <
> benoit.george...@web4all.fr> wrote:
>
>> Yes, thanks, I saw it in your configuration file.
>>
>> Everything looks good.
>> Your container does not have a gateway address , but you should be able
>> to ping local network .
>>
>> This looks good too:
>>
>> Address  HWtype  HWaddress   Flags Mask
>>  Iface
>> 192.168.54.65 ether
>> 00:50:56:be:13:94   C eth0
>> 192.168.54.1  ether   00:13:c4:f2:64:41
>>   C eth0
>>
>>
>> Your container knows the MAC address of the host. Communication is working
>> on that level.
>>
>> Do you have any iptables rules on the host ?
>>
>> Can you look at this file , it should be 1
>> cat /proc/sys/net/ipv4/ip_forward
>>
>> Also can you send the OVS db content:
>>
>> ovs-vsctl show
>>
>>
>> Regards,
>>
>> Benoît Georgelin -
>> To help protect the environment, please only print this email if
>> necessary
>>
>> --
>> From: "Joshua Schaeffer" <jschaeffer0...@gmail.com>
>> To: "lxc-users" <lxc-user

Re: [lxc-users] Container doesn't connect to bridge

2015-10-23 Thread Joshua Schaeffer
Okay, ip_forward was set to 0 on the host. I changed it to 1, but I still
wasn't able to ping the gateway from the container. The iptables rules are set
to ACCEPT for INPUT, FORWARD, and OUTPUT on the host:

jschaeffer@prvlxc01:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination

Chain FORWARD (policy ACCEPT)
target prot opt source   destination

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
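
If that flag is meant to stay on across reboots, the persistent form is something like:

    echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-forwarding.conf
    sudo sysctl -p /etc/sysctl.d/99-forwarding.conf

though for a pure layer-2 setup like this one, where the container hangs off br0 directly, the host does not route for the container, so ip_forward by itself is unlikely to be the blocker.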

Here is the OVS db output:

jschaeffer@prvlxc01:~$ sudo ovs-vsctl show
[sudo] password for jschaeffer:
4e502746-9746-4972-8cb4-cf27f7b7332f
Bridge "br0"
Port "veth52B8DS"
Interface "veth52B8DS"
Port vethYERYXP
Interface vethYERYXP
Port "vethAGP5QO"
Interface "vethAGP5QO"
Port "eth0"
Interface "eth0"
Port "veth6WFED2"
Interface "veth6WFED2"
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.3.0"

Not sure if this is a problem or not, but I ran ifconfig on the host again
and it looks like the last 6 characters of the veth name changed (maybe because
I changed the container's config to include the hardware address?). This
particular veth is not included in the OVS output:

jschaeffer@prvlxc01:~$ sudo ifconfig
[...]
vethJMVQHJ Link encap:Ethernet  HWaddr fe:1f:8a:a9:25:52
  inet6 addr: fe80::fc1f:8aff:fea9:2552/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:15 errors:0 dropped:0 overruns:0 frame:0
  TX packets:216 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:1054 (1.0 KiB)  TX bytes:21554 (21.0 KiB)
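
That missing port looks worth chasing: if the current veth name is not a port on br0, the container is effectively unplugged no matter what else is configured. A way to check and, if needed, re-attach by hand, using the interface name shown above:

    sudo ovs-vsctl list-ports br0
    sudo ovs-vsctl add-port br0 vethJMVQHJ

Stale ports left over from earlier container starts can be removed with 'ovs-vsctl del-port br0 <name>'.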

Thanks,
Joshua

On Fri, Oct 23, 2015 at 2:25 PM, Benoit GEORGELIN - Association Web4all <
benoit.george...@web4all.fr> wrote:

> Yes, thanks, I saw it in your configuration file.
>
> Everything looks good.
> Your container does not have a gateway address , but you should be able to
> ping local network .
>
> This looks good too:
>
> Address  HWtype  HWaddress   Flags Mask
>  Iface
> 192.168.54.65 ether   00:50:56:be:13:94
>   C eth0
> 192.168.54.1  ether   00:13:c4:f2:64:41
>   C eth0
>
>
> Your container knows the MAC address of the host. Communication is working
> on that level.
>
> Do you have any iptables rules on the host ?
>
> Can you look at this file , it should be 1
> cat /proc/sys/net/ipv4/ip_forward
>
> Also can you send the OVS db content:
>
> ovs-vsctl show
>
>
> Regards,
>
> Benoît Georgelin -
> To help protect the environment, please only print this email if
> necessary
>
> --
> From: "Joshua Schaeffer" <jschaeffer0...@gmail.com>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Friday, 23 October 2015 15:41:49
> Subject: Re: [lxc-users] Container doesn't connect to bridge
>
> Oh, also forgot to mention that I'm using OVS to create the bridge. I
> didn't think this would be a problem if I got the bridge working on the
> host, but let me know if I've missed something.
> Thanks,
> Joshua
>
> On Fri, Oct 23, 2015 at 1:36 PM, Joshua Schaeffer <
> jschaeffer0...@gmail.com> wrote:
>
>> Here ya go. It looks like the routing table is off for the container or
>> am I just misreading that. Also I assigned the veth an mac address from the
>> config file. Everything still appears to be the same, no change.
>>
>> Host:
>> jschaeffer@prvlxc01:~$ sudo route -n
>> Kernel IP routing table
>> Destination Gateway Genmask Flags Metric RefUse
>> Iface
>> 0.0.0.0 192.168.54.10.0.0.0 UG0  00
>> br0
>> 192.168.54.00.0.0.0 255.255.255.128 U 0  00
>> br0
>>
>> jschaeffer@prvlxc01:~$ cat /etc/network/interfaces
>> # This file describes the network interfaces available on your system
>> # and how to activate them. For more information, see interfaces(5).
>>
>> source /etc/network/interfaces.d/*
>>
>> # The loopback network interface
>> auto lo
>> iface lo inet loopback
>>
>> allow-ovs br0
>> iface br0 inet static
>> address 192.168.54.65
>> netmask 255.255.255.128
>> gateway 192.168.54.1
>> ovs_type OVSBridge
>> ovs_ports eth0
>>
>> # The primary network interface
>> allow-br0 eth0
>> iface e

Re: [lxc-users] Building LXC 1.1 on Debian 8

2015-04-02 Thread Joshua Schaeffer
Thanks Xavier, I'll check this out.
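
For anyone else hitting the same "unshare: Operation not permitted", the check Serge describes below boils down to something like this (the sysctl only exists on kernels carrying the Debian/Ubuntu patch):

    cat /proc/sys/kernel/unprivileged_userns_clone
    echo 1 | sudo tee /proc/sys/kernel/unprivileged_userns_clone
    # persistently:
    echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/80-userns.conf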

On Thu, Apr 2, 2015 at 3:10 PM, Xavier Gendre gendre.rei...@gmail.com
wrote:

 If it can help you, I have summarized all of Serge's advice (the
 CLONE_NEWUSER trick, in particular) about containers in Debian in a little
 script to handle user-owned unprivileged containers and make them
 autostart. It is called mithlond,

 https://github.com/Meseira/mithlond

 It is built for Debian Jessie, so you should find some useful things
 inside, I hope ;-)

 Xavier


 On 02/04/2015 at 22:49, Serge Hallyn wrote:

 Quoting Joshua Schaeffer (jschaeffer0...@gmail.com):

 I've been using LXC's on Debian 7 for over a year now and everything has
 been working great, but I've just been using the version that is packaged
 with the distro and I figured it's probably time to get up to date and
 start taking advantage of the newer features and unprivileged containers.
 So I've created a VM with Debian 8 on it and downloaded the source for
 LXC
 1.1.1.

 I configured, compiled, and installed the software without any issues,
 but
 when I try to run lxc-create as a regular user I get the following error:

 
 --
 lxcuser@thinkhost:~$ lxc-create -t download -n c1
 unshare: Operation not permitted


 Since unshare failed, your kernel seems to not be allowing unprivileged
 CLONE_NEWUSER.  Check whether there is a sysctl called
 /proc/sys/kernel/unprivileged_userns_clone, and if so set it to 1.

  read pipe: Success
 lxc_container: lxccontainer.c: do_create_container_dir: 772 Failed to
 chown
 container dir
 lxc_container: lxc_create.c: main: 274 Error creating container c2
 
 --

 I've set execute rights on the home directory for that user. Seems like
 I'm
 missing something obvious. Below is the configure parameters I used.
 make,
 make check, and make install reported no problems or errors:

 ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
 --enable-doc --enable-capabilities --with-distro=debian

 I can run the above command as root and it successfully downloads the
 template and creates the container which I can then attach to.

 Thanks,
 Joshua



[lxc-users] Building LXC 1.1 on Debian 8

2015-03-26 Thread Joshua Schaeffer
I've been using LXC on Debian 7 for over a year now and everything has
been working great, but I've just been using the version that is packaged
with the distro and I figured it's probably time to get up to date and
start taking advantage of the newer features and unprivileged containers.
So I've created a VM with Debian 8 on it and downloaded the source for LXC
1.1.1.

I configured, compiled, and installed the software without any issues, but
when I try to run lxc-create as a regular user I get the following error:

--
lxcuser@thinkhost:~$ lxc-create -t download -n c1
unshare: Operation not permitted
read pipe: Success
lxc_container: lxccontainer.c: do_create_container_dir: 772 Failed to chown
container dir
lxc_container: lxc_create.c: main: 274 Error creating container c2
--

I've set execute rights on the home directory for that user. Seems like I'm
missing something obvious. Below are the configure parameters I used; make,
make check, and make install reported no problems or errors:

./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
--enable-doc --enable-capabilities --with-distro=debian

I can run the above command as root and it successfully downloads the
template and creates the container which I can then attach to.

Thanks,
Joshua

Re: [lxc-users] Setting up containers with multiple logical volumes

2014-02-18 Thread Joshua Schaeffer
Okay thanks for the pointers.  I'll be able to try this out tonight and
report back.


On Mon, Feb 17, 2014 at 10:30 PM, Serge Hallyn serge.hal...@ubuntu.com wrote:

 Quoting Serge Hallyn (serge.hal...@ubuntu.com):
  Quoting Joshua Schaeffer (jschaeffer0...@gmail.com):
   Yes it failed to start:
  
   1. lxc-create -n testme1 -t debian
   2. root@reaver:~# cat /var/lib/lxc/testme1/config | grep
 lxc.mount.entry
   lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
   lxc.mount.entry = sysfs sys sysfs defaults  0 0
   lxc.mount.entry = /dev/vg_lxc1/lv_ldap_tmp1 lv_ldap
   defaults,create=dir 0 0
 
  Wait a sec.  I mistyped.  There's no fstype in there :)
 
  That should be
 
  lxc.mount.entry = /dev/vg_lxc1/lv_ldap_tmp1 lv_ldap ext4
 defaults,create=dir 0 0
 
  or whatever fstype it is in place of ext4.  Sorry.

 Actually I see that will fail too, because 'create=dir' has a
 slight problem.  We need to remove create=dir from the mount
 options which we pass along to the mount syscall.

 so for the sake of testing just do

 lxc.mount.entry = /dev/vg_lxc1/lv_ldap_tmp1 mnt ext4 defaults 0 0

 -serge


Re: [lxc-users] Setting up containers with multiple logical volumes

2014-02-17 Thread Joshua Schaeffer
I still can't get this to work; I'm sure I'm missing something simple or
obvious. To recap, I'm trying to use an LVM logical volume for my
container's /var and /tmp partitions. Since I've been unable to get this
to work, I've just been focusing on getting /tmp to work. I changed my
mount points and my container's config:


Here is my logical volume on the host:

root@reaver:~# lvdisplay /dev/vg_lxc1/lv_ldap_tmp1
  --- Logical volume ---
  LV Path                /dev/vg_lxc1/lv_ldap_tmp1
  LV Name                lv_ldap_tmp1
  VG Name                vg_lxc1
  LV UUID                GDru3y-oLJB-Iv06-tjv3-wHuq-p8Fi-xBUscf
  LV Write Access        read/write
  LV Creation host, time reaver, 2013-11-30 13:46:16 -0700
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:4

I've mounted this LV to: /mnt/lxc/ldap/tmp on the host.  I then changed 
my container's config file:


root@reaver:~# cat /var/lib/lxc/ldap_baneling/config | grep lxc.mount.entry
lxc.mount.entry = proc   /var/lib/lxc/ldap_baneling/rootfs/proc    proc   nodev,noexec,nosuid 0 0
lxc.mount.entry = devpts /var/lib/lxc/ldap_baneling/rootfs/dev/pts devpts defaults            0 0
lxc.mount.entry = sysfs  /var/lib/lxc/ldap_baneling/rootfs/sys     sysfs  defaults            0 0
#lxc.mount.entry = /mnt/lxc/ldap/var /var/lib/lxc/ldap_baneling/rootfs/var none defaults 0 0
lxc.mount.entry = /mnt/lxc/ldap/tmp  /var/lib/lxc/ldap_baneling/rootfs/tmp none defaults 0 0


When I start my container I don't see the new mount:

root@baneling:~# df -h
Filesystem  Size  Used Avail Use% Mounted on
tmpfs   3.1G 0  3.1G   0% /run/shm
rootfs   10G  537M  9.5G   6% /
tmpfs   801M   16K  801M   1% /run
tmpfs   5.0M 0  5.0M   0% /run/lock

Do I need to put something in the container's fstab?
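
Comparing with the entry Alvaro shows below, the piece that looks missing here is the 'bind' option: 'none defaults' does not bind-mount a host directory. A hedged guess at the corrected lines for this config:

    lxc.mount.entry = /mnt/lxc/ldap/tmp /var/lib/lxc/ldap_baneling/rootfs/tmp none bind 0 0
    lxc.mount.entry = /mnt/lxc/ldap/var /var/lib/lxc/ldap_baneling/rootfs/var none bind 0 0

No entry in the container's own fstab should be needed; lxc.mount.entry lines are applied by lxc-start itself.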

Thanks,
Josh

On 02/12/2014 05:02 PM, Alvaro Miranda Aguilera wrote:

Sorry, I should have been more clear.

It seems you are trying to mount /a onto /a; that won't work.

At that level, where are the logical volumes mounted?

Say you want to share

/media/lv1

into container as /media/lv1

then, the line should be:

lxc.mount.entry = /media/lv1 /var/lib/lxc/ldap_baneling/rootfs/media/lv1 none bind 0 0


do you see the difference with your line?

if you have already mounted your lv inside the container, unmount it, 
mount it somewhere else, and try as I tell you, for me it works.


if you have time, I wrote this:

http://kikitux.net/lxc/lxc.html





On Wed, Feb 12, 2014 at 9:34 AM, Joshua Schaeffer
jschaeffer0...@gmail.com wrote:


Based on the documentation I read, this can be the same, however
all I really care about is that the LV gets mounted to that
location on the host.


On Tue, Feb 11, 2014 at 1:13 PM, Alvaro Miranda Aguilera
kiki...@gmail.com wrote:


On Wed, Feb 12, 2014 at 4:11 AM, Joshua Schaeffer
jschaeffer0...@gmail.com wrote:

lxc.mount.entry = /var/lib/lxc/ldap_baneling/rootfs/var /var/lib/lxc/ldap_baneling/rootfs/var none bind 0 0
lxc.mount.entry = /var/lib/lxc/ldap_baneling/rootfs/tmp /var/lib/lxc/ldap_baneling/rootfs/tmp none bind 0 0



you are mounting the same path in the same path?








Re: [lxc-users] Setting up containers with multiple logical volumes

2014-02-17 Thread Joshua Schaeffer

Yes it failed to start:

1. lxc-create -n testme1 -t debian
2. root@reaver:~# cat /var/lib/lxc/testme1/config | grep lxc.mount.entry
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0
lxc.mount.entry = /dev/vg_lxc1/lv_ldap_tmp1 lv_ldap 
defaults,create=dir 0 0

3. root@reaver:~# lxc-start -n testme1 -l trace -o testme1.out
lxc-start: No such file or directory - failed to mount 
'/dev/vg_lxc1/lv_ldap_tmp1' on '/usr/lib/x86_64-linux-gnu/lxc/lv_ldap'

lxc-start: failed to setup the mount entries for 'testme1'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'testme1'
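
For what it is worth, the "No such file or directory" most likely refers to the mount target: a relative target like 'lv_ldap' is resolved under the rootfs as mounted for startup, and no such directory exists there, while 'defaults,create=dir' is sitting in the fstype field of the entry. The line Serge arrives at elsewhere in the thread avoids both problems by using a directory that already exists and naming the filesystem explicitly:

    lxc.mount.entry = /dev/vg_lxc1/lv_ldap_tmp1 mnt ext4 defaults 0 0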

Thanks,
Josh

On 02/17/2014 09:55 AM, Serge Hallyn wrote:

Quoting Joshua Schaeffer (jschaeffer0...@gmail.com):

I still can't get this to work, I'm sure I'm missing something
simple or obvious. To recap, I'm trying to use an LVM logical volume
for my container's /var and /tmp partitions. Since I've been unable
to get this to work, I've just been focusing on getting /tmp to work.
I changed my mount points and my container's config:

Here is my logical volume on the host:

root@reaver:~# lvdisplay /dev/vg_lxc1/lv_ldap_tmp1

Could you please create a new container, 'testme1', and just add to its
/var/lib/lxc/testme1/config the entry

lxc.mount.entry = /dev/vg_lxc1/lv_ldap_tmp1 lv_ldap defaults,create=dir 0 0

Then do

lxc-start -n testme1 -l trace -o testme1.out

and send testme1.out here (assuming that fails to start)?

thanks,
-serge


  lxc-start 1392656494.694 DEBUG    lxc_conf - allocated pty '/dev/pts/12' (4/5)
  lxc-start 1392656494.695 DEBUG    lxc_conf - allocated pty '/dev/pts/13' (6/7)
  lxc-start 1392656494.695 DEBUG    lxc_conf - allocated pty '/dev/pts/14' (8/9)
  lxc-start 1392656494.695 DEBUG    lxc_conf - allocated pty '/dev/pts/15' (10/11)
  lxc-start 1392656494.695 INFO     lxc_conf - tty's configured
  lxc-start 1392656494.695 DEBUG    lxc_console - using '/dev/tty' as console
  lxc-start 1392656494.695 DEBUG    lxc_start - sigchild handler set
  lxc-start 1392656494.695 INFO     lxc_start - 'testme1' is initialized
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/' (rootfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/sys' (sysfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/proc' (proc)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/dev' (devtmpfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/dev/pts' (devpts)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/run' (tmpfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/' (xfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/sys/fs/selinux' (selinuxfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/run/lock' (tmpfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/boot' (ext4)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/tmp' (xfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/var' (xfs)
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - checking '/sys/fs/cgroup' (cgroup)
  lxc-start 1392656494.696 INFO     lxc_cgroup - [1] found cgroup mounted at '/sys/fs/cgroup',opts='rw,relatime,perf_event,blkio,net_cls,freezer,devices,memory,cpuacct,cpu,cpuset,clone_children'
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - get_init_cgroup: found init cgroup for subsys (null) at /
  lxc-start 1392656494.696 DEBUG    lxc_cgroup - cgroup /sys/fs/cgroup has flags 0x2
  lxc-start 1392656494.725 INFO     lxc_cgroup - created cgroup '/sys/fs/cgroup//lxc/testme1'
  lxc-start 1392656494.725 DEBUG    lxc_cgroup - checking '/var/lib/nfs/rpc_pipefs' (rpc_pipefs)
  lxc-start 1392656494.773 DEBUG    lxc_start - Dropped cap_sys_boot
  lxc-start 1392656494.773 INFO     lxc_conf - 'testme1' hostname has been setup
  lxc-start 1392656494.773 DEBUG    lxc_conf - mounted '/var/lib/lxc/testme1/rootfs' on '/usr/lib/x86_64-linux-gnu/lxc'
  lxc-start 1392656494.774 DEBUG    lxc_conf - mounted 'proc' on '/usr/lib/x86_64-linux-gnu/lxc/proc', type 'proc'
  lxc-start 1392656494.774 DEBUG    lxc_conf - mounted 'sysfs' on '/usr/lib/x86_64-linux-gnu/lxc/sys', type 'sysfs'
  lxc-start 1392656494.774 ERROR    lxc_conf - No such file or directory - failed to mount '/dev/vg_lxc1/lv_ldap_tmp1' on '/usr/lib/x86_64-linux-gnu/lxc/lv_ldap'
  lxc-start 1392656494.774 ERROR    lxc_conf - failed to setup the mount entries for 'testme1'
  lxc-start 1392656494.774 ERROR    lxc_start - failed to setup the container
  lxc-start 1392656494.774 ERROR    lxc_sync - invalid sequence number 1. expected 2
  lxc