Hi,
For VDI (Windows 10) use case... is there any document about the
recommended configuration with rbd?
Thanks a lot!
2017-08-18 15:40 GMT+02:00 Oscar Segarra :
Hi,
Yes, you are right, the idea is cloning a snapshot taken from the base
image...
And yes, I'm working with the current RC of luminous.
In this scenario: base image (raw format) + snapshot + snapshot clones
(for end-user Windows 10 VDI), would SSD+HDD tiering help?
Thanks a lot
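The base image + snapshot + clone layout described above can be sketched with the rbd CLI. This is only a sketch; the pool and image names (`vdi-pool`, `win10-base`, `desktop-user01`) and the source file are hypothetical:

```shell
# Import the raw golden image into the pool (hypothetical names)
rbd import win10.raw vdi-pool/win10-base

# Snapshot the base image and protect the snapshot so clones can be
# layered on top of it
rbd snap create vdi-pool/win10-base@gold
rbd snap protect vdi-pool/win10-base@gold

# Create one copy-on-write clone per desktop; each clone shares the
# base image's data and only stores its own writes
rbd clone vdi-pool/win10-base@gold vdi-pool/desktop-user01
```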
Do you mean a lot of snapshots or creating a lot of clones from a snapshot?
I can agree about the pain of creating a lot of snapshots of RBDs in Ceph. I'm
assuming you mean that you will have a template RBD with a version
snapshot that you clone each time you need to let someone log in.
Is
Hello,
On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote:
> Hi Christian,
>
> Thanks a lot for helping...
>
> Have you read:
> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> So just from the perspective of qcow2, you seem to be doomed.
> --> Sorry, I've been talking about RAW +
Hello,
On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote:
> Hi David,
>
> Thanks a lot again for your quick answer...
>
> *The rules in the CRUSH map will always be followed. It is not possible
> for Ceph to go against that and put data into a root that shouldn't have
> it.*
> --> I
Thanks a lot David,
it is a little bit difficult for me to run tests because I have to buy the
hardware... and the price is different with or without an SSD cache tier.
If anybody has experience with VDI/login storms... it will be really welcome!
Note: I have removed the ceph-user list because I
The rules in the CRUSH map will always be followed. It is not possible for
Ceph to go against that and put data into a root that shouldn't have it.
The problem with a cache tier is that Ceph is going to need to promote and
evict stuff all the time (not free). A lot of people that want to use
If I'm understanding you correctly, you want to have two different roots that
pools can be created from. The first being entirely SSD storage. The second
being HDD Storage with an SSD cache tier on top of it.
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
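Since the Luminous RC is in use here, device classes can achieve the split from that blog post without hand-editing the CRUSH map, and a cache tier can be layered on the HDD pool. A sketch only; the rule names, pool names, and PG counts are hypothetical and would need sizing for the actual cluster:

```shell
# Luminous device classes: one replicated rule per class instead of
# maintaining separate hand-edited roots
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd crush rule create-replicated hdd-rule default host hdd

# Pin each pool to its rule (names and PG counts are hypothetical)
ceph osd pool create mgmt-ssd 128 128 replicated ssd-rule
ceph osd pool create vdi-hdd 512 512 replicated hdd-rule

# Optional SSD cache tier in front of the HDD pool; note the
# promote/evict cost mentioned above applies to this setup
ceph osd pool create vdi-cache 128 128 replicated ssd-rule
ceph osd tier add vdi-hdd vdi-cache
ceph osd tier cache-mode vdi-cache writeback
ceph osd tier set-overlay vdi-hdd vdi-cache
ceph osd pool set vdi-cache hit_set_type bloom
```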
Hi,
Sorry guys, these days I'm asking a lot about how to distribute my
data.
I have two kinds of VM:
1.- Management VMs (Linux) --> fully dedicated SSD disks
2.- Windows VMs --> SSD + HDD (with tiering).
I'm working on installing two clusters on the same host but I'm
encountering lots of
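Rather than two clusters on the same host, the two VM classes above can usually map to two pools in a single cluster. A sketch with hypothetical pool and image names, assuming CRUSH rules along the lines discussed earlier in the thread already route each pool to the right devices:

```shell
# One cluster, two pools: management VMs on the all-SSD pool,
# Windows VDI images on the (optionally tiered) HDD pool
rbd create mgmt-ssd/mgmt-vm01 --size 40G
rbd create vdi-hdd/win10-base --size 60G

# qemu/libvirt can then point each guest disk at the right pool
qemu-img info rbd:mgmt-ssd/mgmt-vm01
```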