Hi Vadim

1-
Virtual routers can run on RBD.
If you have VRs running on other storage (e.g. NFS), you can stop them and then change their system offering to one backed by Ceph (Ceph version: Dumpling).

My secondary storage VM and my console proxy are still on NFS. It works for me (the virtual router was potentially the more critical thing to have on NFS), but if you find out how to move them to Ceph, please tell me ;)
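If it helps, the stop-and-change-offering step can be scripted. A rough sketch with CloudMonkey follows; the router UUID and the UUID of a system offering whose storage tag matches your Ceph pool are placeholders, and command names may differ slightly between CloudMonkey versions:

```shell
# Stop the virtual router first (it must be stopped to change its offering)
cloudmonkey stop router id=<router-uuid>

# Switch it to a system offering whose storage tag points at the RBD pool
cloudmonkey change serviceforrouter id=<router-uuid> serviceofferingid=<ceph-offering-uuid>

# Start it again; its volume should now be allocated on the RBD pool
cloudmonkey start router id=<router-uuid>
```

This assumes, as above, that the offering's storage tag is what steers the router's volume onto the Ceph primary storage.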

2-
Sorry. I'm only a user. Maybe some dev can explain this.

Gerolamo

On 28/07/2014 14:59, Vadim Kimlaychuk wrote:
1. I have read that system VMs can't run on RBD, but the information was 
rather old (for Ceph 0.58). Is it still valid for ACS 4.3 and Ceph 0.80.4?
2. When I run the cloud configuration wizard just after the CloudStack install, it does 
not offer an "RBD" type for primary storage in the drop-down list. But when I launch 
"Add primary storage" after the zone is configured and the pod/cluster is defined, 
I can see the RBD type. A bit confused why that is.

Vadim.


-----Original Message-----
From: Gerolamo Valcamonica [mailto:gerol...@pyder.com]
Sent: Monday, July 28, 2014 3:15 PM
To: users@cloudstack.apache.org
Subject: Re: Ceph as primary storage at CS 4.3

On 28/07/2014 08:43, Vadim Kimlaychuk wrote:
I have a fresh install of CS 4.3 and am going to configure a Ceph RBD device as 
primary storage. I can't see this type in the drop-down list (only NFS, CLVM and 
shared mountpoint are available).
I use CS 4.3 on KVM and Ceph.
No problems with Ceph, but I upgraded from 4.1 to 4.3, so, sorry, I have no answer 
for this.
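For what it's worth, on KVM the CloudStack agent consumes Ceph through a libvirt RBD storage pool, so it can be useful to test the libvirt side by hand before fighting the UI. A minimal pool definition looks roughly like this (monitor hostname, Ceph pool name, user and secret UUID below are example values, not from this thread):

```xml
<!-- sketch of a libvirt RBD storage pool; all values are placeholders -->
<pool type="rbd">
  <name>ceph-primary</name>
  <source>
    <!-- name of the Ceph pool -->
    <name>cloudstack</name>
    <!-- one of your Ceph monitors -->
    <host name="mon.example.com" port="6789"/>
    <auth username="cloudstack" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>
```

You can then try `virsh pool-define pool.xml` and `virsh pool-start ceph-primary` on a host; if that fails, your libvirt was probably built without RBD support, and CloudStack won't be able to use the pool either.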

            Do I need to install libvirt on management servers as well?
            Will it be a problem to work with different versions of libvirt 
(Ubuntu 12.04 and 14.04 ship different versions)?

Yes, there will be.

I updated hosts from Ubuntu 13.04 (libvirt 1.0.2) to 14.04 (libvirt 1.2.2) and, 
while in the mixed environment, I had these issues:

1- User VMs on 13.04 hosts migrated to 14.04 hosts with a failure rate of about 
5%. In case of failure, the result was a crash of the VM (kernel panic) and the 
need to stop and start the VM from CloudStack.

1a- In some cases (another 5%, more or less) the issue was a loss of 
connectivity from and to the VMs. Restarting networking inside the VM had no 
effect; the solution was a VM reboot from CloudStack.

2- User VMs on 14.04 hosts didn't migrate to 13.04 hosts and failed with an 
error like this:
"""
Unable to migrate due to internal error process exited while connecting to 
monitor:
W: kvm binary is deprecated, please use qemu-system-x86_64 instead Supported 
machines are:
none empty machine pc Standard PC (i440FX + PIIX, 1996) (alias of
pc-i440fx-1.4) pc-i440fx-1.4 Standard PC (i440FX + PIIX, 1996) default)
pc-1.3 Standard PC pc-1.2 Standard PC pc-1.1 Standard PC pc-1.0 Standard PC 
pc-0.15 Standard PC pc-0.14 Standard PC pc-0.13 Standard PC pc-0.12 Standard PC 
pc-0.11 Standard PC, qemu 0.11 pc-0.10 Standard PC, qemu
0.10 isapc ISA-only PC xenfv Xen Fully-virtualized PC q35 Standard PC
(Q35 + ICH9, 2009) (alias of pc-q35-1.4) pc-q35-1.4 Standard PC (Q35 + ICH9, 2009) xenpv Xen 
Para-virtualized PC """
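The error above means the destination host's qemu does not know the machine type the VM was started with on the source host, so the migration is refused. Before migrating between mixed hosts, you can compare the machine types each side supports; a sketch (the older Ubuntu releases use the deprecated `kvm` wrapper, as the warning in the error says):

```shell
# On the 14.04 host
qemu-system-x86_64 -machine help

# On the 13.04 host (older invocation, same information)
kvm -M '?'
```

If a machine type listed for a running VM on one host is missing (or only an alias) on the other, live migration in that direction will fail like above.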

3- Most system VMs hung or went into a panic during migration from 13.04 to 
14.04 hosts.
In one case, migrating a virtual router resulted in a loss of connectivity from 
and to the VMs connected to that router.
I stopped and started all migrated system VMs / virtual routers to avoid further 
issues.

4- Once all hosts were migrated to 14.04, I had no additional issues.

Hope this helps

--
Gerolamo Valcamonica
