Hi all, I would like to build the same configuration but using iSCSI instead
of NFS.
I could set up two physical servers (for high availability) running Ubuntu
and the LIO iSCSI target.
On the Ceph side these servers would use RBD, and on the VMware side they
would export the RBD disks as iSCSI LUNs.
I would like to verify whether there is any performance degradation compared
to the native KVM RBD integration.
Has anyone tested this configuration yet?
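
For reference, here is roughly what I have in mind on each target node (a
minimal sketch only: the pool/image name and the IQNs are placeholders, and
it assumes the kernel rbd client and targetcli are installed):

  # map the Ceph RBD image with the kernel client
  rbd map vmware/lun0          # appears as /dev/rbd/vmware/lun0

  # export the mapped device through the LIO target
  targetcli /backstores/block create name=lun0 dev=/dev/rbd/vmware/lun0
  targetcli /iscsi create iqn.2014-06.local.storage:vmware
  targetcli /iscsi/iqn.2014-06.local.storage:vmware/tpg1/luns create /backstores/block/lun0
  targetcli /iscsi/iqn.2014-06.local.storage:vmware/tpg1/acls create iqn.1998-01.com.vmware:esx1
  targetcli saveconfig

The ESX hosts would then discover the target and format the LUN as VMFS.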

Regards


2014-06-11 10:38 GMT+02:00 Andrei Mikhailovsky <and...@arhont.com>:

> Hi guys,
>
> This is what I've done to get a failover NFS storage solution with
> XenServer on top of Ceph.
>
> 1. Set up two rbd volumes (one for VM images and another for the shared NFS
> state folder). Make sure you disable RBD caching on the NFS servers.
> 2. Install the ucarp package on the two servers that will act as the NFS
> servers. These servers will need access to the rbd volumes you set up in
> step 1.
> 3. Configure ucarp as master/slave on the servers. Choose a virtual IP
> address that will be shared between the NFS servers.
> 4. Configure the ucarp vif-up/vif-down scripts on both servers to a) map/unmap
> the rbd volumes, b) mount/unmount the filesystems, and c) start/stop the NFS
> service (see the sketch after this list).
> 5. Add the NFS server to XenServer/ACS as storage, using the virtual IP
> you've chosen.
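>
> For illustration, the moving parts look roughly like this (a minimal
> sketch: the interface, addresses, volume names and mount points are
> placeholders to adjust for your environment):
>
>   # /etc/ceph/ceph.conf on the NFS servers -- disable the RBD cache (step 1)
>   [client]
>   rbd cache = false
>
>   # ucarp invocation, identical on both servers apart from -s (the node's
>   # real IP); 192.168.0.100 is the shared virtual IP (steps 2-3)
>   ucarp -i eth0 -s 192.168.0.10 -v 42 -p secret -a 192.168.0.100 \
>         --upscript=/etc/ucarp/vif-up.sh --downscript=/etc/ucarp/vif-down.sh
>
>   # /etc/ucarp/vif-up.sh -- runs on the node that becomes master (step 4)
>   #!/bin/sh
>   ip addr add 192.168.0.100/24 dev eth0
>   rbd map rbd/vm-images && mount /dev/rbd/rbd/vm-images /export/vm-images
>   rbd map rbd/nfs-state && mount /dev/rbd/rbd/nfs-state /var/lib/nfs
>   service nfs-kernel-server start
>
>   # /etc/ucarp/vif-down.sh -- the reverse, when the node drops to slave
>   #!/bin/sh
>   service nfs-kernel-server stop
>   umount /export/vm-images && rbd unmap /dev/rbd/rbd/vm-images
>   umount /var/lib/nfs && rbd unmap /dev/rbd/rbd/nfs-state
>   ip addr del 192.168.0.100/24 dev eth0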
>
> This way I am able to perform maintenance tasks on the NFS servers without
> needing to shut down the running VMs. Failover between master and slave
> takes about 5 seconds, and this doesn't seem to impact VM performance. I've
> done some basic failover tests and the setup is working okay.
>
> I would really appreciate your feedback and thoughts on my setup. Does it
> look like a viable solution for a production environment?
>
> Cheers
>
> Andrei
>
>
> ----- Original Message -----
>
> From: "Gerolamo Valcamonica" <gerolamo.valcamon...@overweb.it>
> To: users@cloudstack.apache.org
> Sent: Tuesday, 10 June, 2014 11:24:13 AM
> Subject: Re: KVM + VMware (and ceph)
>
> Thanks to all
>
> (OT:
> I'm checking with Inktank about VMware support for Ceph.
> If I find anything new I will let you know)
>
> Gerolamo Valcamonica
>
>
> On 09/06/2014 22:30, ilya musayev wrote:
> > Gerolamo,
> >
> > As previously noted, you can mix and match with some degree of
> > segregation (i.e. templates must be different).
> >
> > I've not tried mixing KVM + VMware recently, but I see no reason why
> > you cannot do that; if I recall correctly, I did so a year or so ago,
> > when I first installed CloudStack.
> >
> > As for Ceph, I have a somewhat similar setup: beefy vSphere
> > hypervisors, each with about 1.5 TB of SSD drives and 10 GbE NICs,
> > which until recently have been idle. I'm setting up a Ceph cluster on
> > them and will front it with iSCSI, presented to the ESX hosts as VMFS.
> > You can also look into presenting Ceph as NFS to VMware.
> >
> > Regards,
> > ilya
> >
> >
> >
> > On 6/9/14, 8:17 AM, Gerolamo Valcamonica wrote:
> >> Hi Everybody,
> >>
> >> I have a production environment running CloudStack 4.3 on KVM hosts
> >> and Ceph storage.
> >>
> >> It's a good solution for me, and I get good performance on both the
> >> compute and the storage side.
> >>
> >> But now I have an explicit customer request for a VMware environment,
> >> so I'm investigating it.
> >>
> >> Here are my questions:
> >> - Can I have a mixed KVM + VMware vSphere Essentials Plus Kit
> >> environment under CloudStack?
> >> - Can I have a mixed networking environment, so that I can, for
> >> example, have frontend VMs on KVM and backend VMs on VMware for the
> >> same customer?
> >> (- Third, but off-topic, question: can I have VMware hosts with Ceph
> >> storage?)
> >>
> >> Is there someone with a similar environment who can give me
> >> suggestions about this?
> >>
> >
>
>
> --
> Gerolamo Valcamonica
> Overweb Srl
>
