We have noticed this problem recently in OpenNebula 4.6 and 4.8. We also had to
make a similar patch in OpenNebula 3.2.

Our use case: we have a SHARED image datastore:


DATASTORE 102 INFORMATION
ID             : 102
NAME           : cloud_images
USER           : oneadmin
GROUP          : oneadmin
CLUSTER        : cloudworker
TYPE           : IMAGE
DS_MAD         : fs
TM_MAD         : shared
BASE PATH      : /var/lib/one/datastores/102
DISK_TYPE      : FILE


---------


We have a non-shared SYSTEM datastore (local to each of 250-some nodes):


[root@fclheadgpvm01 shared]# onedatastore show 100
DATASTORE 100 INFORMATION
ID             : 100
NAME           : localnode
USER           : oneadmin
GROUP          : oneadmin
CLUSTER        : cloudworker
TYPE           : SYSTEM
DS_MAD         : -
TM_MAD         : ssh
BASE PATH      : /var/lib/one/datastores/100/100
DISK_TYPE      : FILE



When we launch a VM, the tm/shared/clone procedure is invoked:


[root@cloudworker1359 1]# ls -lrt
total 2390660
-rw-r--r-- 1 oneadmin oneadmin 2447638528 Dec 29 20:52 disk.0
-rw-r--r-- 1 oneadmin oneadmin     389120 Dec 29 20:52 disk.1
lrwxrwxrwx 1 oneadmin oneadmin         36 Dec 29 20:52 disk.1.iso -> /var/lib/one/datastores/100/1/disk.1
-rw-r--r-- 1 oneadmin oneadmin        922 Dec 29 20:52 deployment.0


It clones disk.0 to the appropriate directory in the local system datastore,
but we get a permission-denied error when we try to launch the VM.
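
A quick way to see what the qemu user can actually do with the cloned image
(the path is taken from the listing above; this is just a diagnostic sketch,
not part of any driver, and it assumes the guest is run as the qemu user):

    # Run as root on the node that received the clone.  With the default 644
    # permissions, qemu (a member of the oneadmin group) should be able to
    # read the image but not write to it, which would be consistent with the
    # permission-denied error at launch.
    DISK=/var/lib/one/datastores/100/1/disk.0
    sudo -u qemu test -r "$DISK"; echo "read  test: $?"   # 0 = readable
    sudo -u qemu test -w "$DISK"; echo "write test: $?"   # 0 = writable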


The fix is to hack the "clone" script to chmod the file to 664, and then the VM
will launch.
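
For reference, the patch amounts to roughly the following at the end of the
copy step in tm/shared/clone (this is only a sketch; DST_HOST and DST_PATH
stand in for whatever the stock script calls the destination host and the
cloned image path in the system datastore):

    # Sketch of our workaround, not the upstream script: after the image has
    # been copied into the system datastore, loosen the permissions so the
    # group (oneadmin, which qemu belongs to) can write to the disk.
    ssh "$DST_HOST" "chmod 664 $DST_PATH"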


------

The relevant configurations, we think:


Non-default settings in qemu.conf:


[root@cloudworker1359 libvirt]# grep -v ^# qemu.conf | grep .
dynamic_ownership = 0


Non-default settings in libvirtd.conf:


[root@cloudworker1359 libvirt]# grep -v ^# libvirtd.conf | grep .
unix_sock_group = "libvirtd"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
log_level = 2
log_outputs = "2:syslog:libvirtd"
host_uuid = "a68ca77f-dab0-5873-be6f-2216635204d1"


[root@cloudworker1359 libvirt]# grep qemu /etc/passwd
qemu:x:107:107:qemu user:/:/sbin/nologin
[root@cloudworker1359 libvirt]# grep libvirt /etc/passwd
[root@cloudworker1359 libvirt]# grep oneadmin /etc/passwd
oneadmin:x:44897:10040::/var/lib/one:/bin/bash
[root@cloudworker1359 libvirt]# grep qemu /etc/group
disk:x:6:qemu
kvm:x:36:qemu
qemu:x:107:
oneadmin:x:10040:qemu


------------


Three questions:


1) Why can the VM not be launched with the default permissions?

2) Are there any system configurations I can fix to make it launch?

3) If I have to continue patching the tm/shared/clone script, is there any way
   to push it out to the other nodes? "onehost sync" doesn't appear to change
   the remotes.
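
What I would like to avoid is pushing the patched script around by hand with
something like the loop below (hostlist.txt and the /var/tmp/one destination
on the nodes are assumptions on my part):

    # Hypothetical manual push of the patched driver from the front-end to
    # every node; this is the kind of thing I'd rather not have to maintain.
    for h in $(cat hostlist.txt); do
        scp /var/lib/one/remotes/tm/shared/clone \
            oneadmin@"$h":/var/tmp/one/tm/shared/clone
    done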


Steve Timm




