Re: [lxc-users] LXD Cluster & Ceph storage: "Config key ... may not be used as node-specific key"

2019-05-15 Thread Robert Johnson

On 5/15/19 4:20 PM, Stéphane Graber wrote:

On Wed, May 15, 2019 at 03:00:34PM -0700, Robert Johnson wrote:

I seem to be stuck in a catch-22 with adding a ceph storage pool to an
existing LXD cluster.

When attempting to add a ceph storage pool, I am prompted to specify the
target node, but when I do, the config keys are not allowed. Once a
ceph pool is created, it's not possible to add config keys. Is there
something I'm missing in the process of adding a ceph pool to an LXD
cluster?

The documentation and examples that I have found all assume a stand-alone
LXD instance.


Example commands showing what I am trying to accomplish:

rob@stack1b:~$ lxd --version
3.13

rob@stack1b:~$ lxc cluster list
+---------+-----------------+----------+--------+-------------------+
|  NAME   |       URL       | DATABASE | STATE  |      MESSAGE      |
+---------+-----------------+----------+--------+-------------------+
| stack1a | https://[]:8443 | YES      | ONLINE | fully operational |
+---------+-----------------+----------+--------+-------------------+
| stack1b | https://[]:8443 | YES      | ONLINE | fully operational |
+---------+-----------------+----------+--------+-------------------+
| stack1c | https://[]:8443 | YES      | ONLINE | fully operational |
+---------+-----------------+----------+--------+-------------------+

rob@stack1b:~$ lxc storage list
+-------+-------------+--------+---------+---------+
| NAME  | DESCRIPTION | DRIVER |  STATE  | USED BY |
+-------+-------------+--------+---------+---------+
| local |             | zfs    | CREATED | 10      |
+-------+-------------+--------+---------+---------+

rob@stack1b:~$ lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user
Error: Pool not pending on any node (use --target <node> first)

rob@stack1b:~$ lxc storage create --target stack1b lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user
Error: Config key 'ceph.osd.pool_name' may not be used as node-specific key

rob@stack1b:~$ lxc storage create --target stack1b lxd-slow ceph ceph.user.name=user
Error: Config key 'ceph.user.name' may not be used as node-specific key

rob@stack1b:~$ lxc storage create --target stack1b lxd-slow ceph
Storage pool lxd-slow pending on member stack1b

rob@stack1b:~$ lxc storage list
+----------+-------------+--------+---------+---------+
|   NAME   | DESCRIPTION | DRIVER |  STATE  | USED BY |
+----------+-------------+--------+---------+---------+
| local    |             | zfs    | CREATED | 10      |
+----------+-------------+--------+---------+---------+
| lxd-slow |             | ceph   | PENDING | 0       |
+----------+-------------+--------+---------+---------+

rob@stack1b:~$ lxc storage set lxd-slow ceph.osd.pool_name lxd-slow
Error: failed to notify peer []:8443: The [ceph.osd.pool_name] properties cannot be changed for "ceph" storage pools

rob@stack1b:~$ lxc storage set lxd-slow ceph.user.name user
Error: failed to notify peer []:8443: The [ceph.user.name] properties cannot be changed for "ceph" storage pools


lxc storage create lxd-slow ceph --target stack1a
lxc storage create lxd-slow ceph --target stack1b
lxc storage create lxd-slow ceph --target stack1c
lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Thank you!




[lxc-users] Failed to mount rootfs

2018-03-26 Thread Robert Johnson

This is a new LXD host running from the snap system.

Initially, this was a working system, but as of this morning I am
unable to start existing containers or launch new ones.



Please advise on any additional information that is needed.




robertj@vh3:~$ lxd --version
2.21




robertj@vh3:~$ lxc launch ubuntu:
Creating the container
Container name is: fluent-bluegill
Starting fluent-bluegill
error: Failed to run: /snap/lxd/current/bin/lxd forkstart fluent-bluegill /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/fluent-bluegill/lxc.conf:

Try `lxc info --show-log local:fluent-bluegill` for more info




robertj@vh3:~$ lxc info --show-log local:fluent-bluegill
Name: fluent-bluegill
Remote: unix://
Architecture: x86_64
Created: 2018/03/26 20:19 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

    lxc 20180326201947.382 WARN lxc_monitor - monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send message: No such file or directory.
    lxc 20180326201947.382 WARN lxc_monitor - monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send message: No such file or directory.
    lxc 20180326201947.209 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.209 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.225 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.225 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.253 ERROR dir - storage/dir.c:dir_mount:179 - Permission denied - Failed to mount "/var/snap/lxd/common/lxd/containers/fluent-bluegill/rootfs" on "/var/snap/lxd/common/lxc/"
    lxc 20180326201947.253 ERROR lxc_conf - conf.c:lxc_setup_rootfs:1313 - Failed to mount rootfs "dir:/var/snap/lxd/common/lxd/containers/fluent-bluegill/rootfs" onto "/var/snap/lxd/common/lxc/" with options "(null)".
    lxc 20180326201947.253 ERROR lxc_conf - conf.c:do_rootfs_setup:3105 - failed to setup rootfs for 'fluent-bluegill'
    lxc 20180326201947.253 ERROR lxc_conf - conf.c:lxc_setup:3146 - Error setting up rootfs mount after spawn
    lxc 20180326201947.253 ERROR lxc_start - start.c:do_start:944 - Failed to setup container "fluent-bluegill".
    lxc 20180326201947.253 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
    lxc 20180326201947.308 WARN lxc_monitor - monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send message: No such file or directory.
    lxc 20180326201947.308 WARN lxc_monitor - monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send message: No such file or directory.
    lxc 20180326201947.308 ERROR lxc_container - lxccontainer.c:wait_on_daemonized_start:760 - Received container state "ABORTING" instead of "RUNNING"
    lxc 20180326201947.308 ERROR lxc_start - start.c:__lxc_start:1459 - Failed to spawn container "fluent-bluegill".
    lxc 20180326201947.308 WARN lxc_monitor - monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send message: No such file or directory.
    lxc 20180326201947.308 WARN lxc_monitor - monitor.c:lxc_monitor_fifo_send:111 - Failed to open fifo to send message: No such file or directory.
    lxc 20180326201947.357 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.357 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.358 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.358 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.359 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.359 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.361 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.361 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.362 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.362 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.363 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.363 WARN lxc_conf - conf.c:lxc_map_ids:2678 - newgidmap binary is missing
    lxc 20180326201947.364 WARN lxc_conf - conf.c:lxc_map_ids:2672 - newuidmap binary is missing
    lxc 20180326201947.364 WARN

Re: [lxc-users] lxc exec - support for wildcards and/or variables?

2017-11-20 Thread Robert Johnson

lxc exec $CONTAINER -- bash -c "rm -rf /tmp/somefile*"
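The reason this works is that the quoted glob passes through to the bash started by `lxc exec`, so expansion happens inside the container rather than on the host. The same behaviour can be reproduced locally (a minimal sketch using a temp directory in place of the container's /tmp):

```shell
# Throwaway files standing in for the container's /tmp/somefile*
TMP=$(mktemp -d)
touch "$TMP/somefile1" "$TMP/somefile2" "$TMP/keepme"

# The glob sits inside the quoted string, so the *inner* bash expands it,
# just as the bash spawned by `lxc exec` would inside the container.
bash -c "rm -rf $TMP/somefile*"

ls "$TMP"    # only "keepme" remains
```

Left unquoted, the host shell would expand the glob against host paths first, which is why a bare `lxc exec $CONTAINER -- rm -rf /tmp/somefile*` does not behave as intended.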


On 11/17/2017 08:03 AM, Tomasz Chmielewski wrote:


How do I use the variables / wildcards with lxc exec? Say, I want to 
remove all /tmp/somefile* in the container.



[lxc-users] Manually building an image

2017-04-06 Thread Robert Johnson
I have a particular distribution based on CentOS 7 that I would like to
turn into an LXD container. The post from Stéphane Graber below has
a section titled Manually building an image, but it gives some fairly
generic steps that I'm not entirely familiar with. I'm hoping someone
could point me in the right direction.


1. Generate a container filesystem. This entirely depends on the 
distribution you’re using. For Ubuntu and Debian, it would be by using 
debootstrap.


If anyone has any documentation on "generating a container filesystem",
I'm all ears. Something specific to CentOS 7 would be great.


Could I just create a tarball of / ?

2. Configure anything that’s needed for the distribution to work 
properly in a container (if anything is needed).


How would I know if a distribution needs any special attention?

From there on, the process seems pretty straightforward.
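For reference, `lxc image import` accepts a unified tarball containing a metadata.yaml next to the root filesystem. A minimal sketch of building one, with placeholder paths and metadata values (adjust the rootfs source and the properties for an actual CentOS 7 tree):

```shell
# Stage a build directory; in practice rootfs/ would hold your CentOS 7 tree
BUILD=$(mktemp -d)
mkdir -p "$BUILD/rootfs/etc"
echo 'CentOS Linux release 7' > "$BUILD/rootfs/etc/redhat-release"  # stand-in content

# metadata.yaml is required by LXD alongside the rootfs
cat > "$BUILD/metadata.yaml" <<EOF
architecture: x86_64
creation_date: $(date +%s)
properties:
  description: CentOS 7 custom image
  os: centos
  release: "7"
EOF

# Unified tarball: metadata.yaml at the top level, filesystem under rootfs/
tar -C "$BUILD" -czf "$BUILD/centos7-custom.tar.gz" metadata.yaml rootfs

# Then, on the LXD host:
#   lxc image import "$BUILD/centos7-custom.tar.gz" --alias centos7-custom
#   lxc launch centos7-custom c1
```

This sidesteps debootstrap entirely: any tarball of a working root filesystem (your "tarball of /", minus /proc, /sys, /dev and the like) can be wrapped this way.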

Re: [lxc-users] lxd api return of /1.0/containers//exec

2016-12-23 Thread Robert Johnson
I've been working on obtaining two-way communication with the /exec
URI, but your needs may be simpler than mine.


When you POST to /exec, you probably want to include the 'record-output'
key with a (boolean) value of true, i.e.:


'{"command": ["df -f /"], "record-output": true}'

The docs state that stdout and stderr will be redirected to a log file.

They don't state where the log file is, but I'm willing to bet that the
reply will include a path that you can subsequently GET.


If this works for you, let me know; I'm interested in at least obtaining
the output from some commands myself.
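One caveat worth flagging: the `command` field is an argv array, so each argument should be its own element rather than a single "df -f /" string. A sketch of the request body and call sequence follows; the log-retrieval endpoint in the comment is a guess from the docs, not something verified:

```shell
# Build the request body; command is an argv array, one element per argument.
BODY='{"command": ["df", "-h", "/"], "record-output": true, "wait-for-websocket": false}'

# Start the exec operation (same dummy-host trick as in the original curl):
#   curl -s --unix-socket /var/lib/lxd/unix.socket \
#        a/1.0/containers/$CONTAINER/exec -X POST -d "$BODY"
# The reply describes a background operation; wait on it, then the recorded
# stdout should be retrievable with a GET under the container's logs, e.g.:
#   curl -s --unix-socket /var/lib/lxd/unix.socket \
#        a/1.0/containers/$CONTAINER/logs/
echo "$BODY"
```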


On 12/23/2016 05:02 AM, laurent ducos wrote:

Hello.

Is there a solution to get the return of a command launched by the lxd exec API?

curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/containers/$CONTAINER/exec -X POST -d '{"command": ["df -f /"]}' | jq .

I would like to see something like this returned by the command:

output: /dev/sda5  15G  4,6G  9,8G  32% /



