Hi,
Thanks for the update. We have changed the TM drivers so they no longer
set overly permissive permissions on the disk/checkpoint files. That
could be why the previous installation worked.
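For illustration, the kind of tightening involved might look like this (the path and mode below are hypothetical examples, not the actual TM driver code):

```shell
# Hypothetical sketch: restrict a checkpoint/disk file after it is created.
# The path and mode are illustrative only.
DST=/tmp/example-disk.0
touch "$DST"
chmod 0660 "$DST"        # owner/group read-write, no access for others
stat -c '%a' "$DST"      # prints 660
```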
Cheers,
Ruben
On Wed, Apr 25, 2012 at 10:17 AM, Carlos A. wrote:
Hello,
I have finally managed to solve the problem.
It was a permissions problem with libvirt. I had to set oneadmin as the
running user for KVM and disable dynamic ownership. With dynamic
ownership enabled, libvirt changed the ownership of disk.0 to root when
saving a VM. The perm
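For anyone hitting the same issue, the changes described above correspond to these settings in libvirt's /etc/libvirt/qemu.conf (the group value is an assumption, adjust it to your deployment; restart libvirtd after editing):

```
# /etc/libvirt/qemu.conf
user = "oneadmin"        # run QEMU guests as oneadmin instead of the default
group = "oneadmin"       # assumed group; adjust to your setup
dynamic_ownership = 0    # stop libvirt from chowning image files (e.g. disk.0) to root
```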
Hi,
$ ls -Rl /srv/cloud/one/var/datastores/0/2994
/srv/cloud/one/var/datastores/0/2994:
total 1055880
-rw-r--r-- 1 oneadmin oneadmin 653 2012-04-24 18:47 deployment.0
-rw-r- 1 libvirt-qemu kvm 1081212928 2012-04-24 18:47 disk.0
$ bash -xv /srv/cloud/one/var/remotes/tm/ssh/mv
Hi Carlos,
can you send us some extra debugging info?
Create a new VM exactly as you did in your previous email and launch it.
Assuming the VM has been deployed on dellblade01, send us the output of
the following commands:
# in dellblade01
$ ls -Rl /srv/cloud/one/var/datastores/0/
# in
Hi,
I have also checked this option, but I found another problem.
If I change the system datastore (0) to set TM_MAD to ssh, and then
create a new VM and try to migrate it, the vm.log fragment is as follows:
Tue Apr 24 17:17:07 2012 [LCM][I]: New V
Hi,
Yes, this may be the problem. Could you check the output of
onedatastore show 0 (and 1)? The TM_MAD associated with the datastore
should be ssh. If not, could you try to update it (onedatastore
update)? There should not be any "shared" keyword, as you suggest.
Note that the changes on the data
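The check might look like the following session (illustrative output; the actual attributes depend on your deployment):

```
$ onedatastore show 0 | grep TM_MAD
TM_MAD="ssh"
$ onedatastore update 0   # opens an editor; set TM_MAD="ssh" and remove any SHARED line
```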
Hello,
I am upgrading my ONE 3.2 deployment to ONE 3.4, but I have a problem
with the migration of VMs between nodes (not live migration).
Migration worked fine with ONE 3.2, but now it fails and I cannot
find how to solve this problem.
I have the default datastore, which is a "file