Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Marc Roos
 
But what happens if the guest OS has TRIM enabled and QEMU did not have 
the discard option set? Should some fsck be run to correct this? 
(Sorry, this is getting a bit off topic.)


-Original Message-
From: Jason Dillaman [mailto:jdill...@redhat.com] 
Sent: Wednesday, 1 May 2019 23:34
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] rbd ssd pool for (windows) vms

On Wed, May 1, 2019 at 5:00 PM Marc Roos wrote:
>
>
> Do you need to tell the VMs that they are on an SSD RBD pool? Or do 
> Ceph and the libvirt drivers do this automatically for you?

Like discard, any advanced QEMU options would need to be manually 
specified.

> When testing a Nutanix Acropolis virtual install, I had to 'cheat' it 
> by adding this  to make the installer think there was an SSD drive.
>
> I only have 'Thin provisioned drive' mentioned regardless of whether 
> the VM is on an HDD RBD pool or an SSD RBD pool.
>
>



--
Jason




Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Janne Johansson
On Mon, 6 May 2019 at 10:03, Marc Roos wrote:

>
> Yes, but those 'changes' can be relayed via the kernel RBD driver, no?
> Besides, I don't think you can move an RBD block device that is in use
> to a different pool anyway.
>
>
No, but you can move the whole pool, which takes all RBD images with it.
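(As a sketch of what that can look like with device classes on Luminous or
later; the rule name "replicated_ssd" and the pool name "rbd" below are just
placeholders, not something from this thread:

    # restrict a replicated rule to OSDs of the ssd device class
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    # point the existing pool at that rule; the data migrates to the new OSDs
    ceph osd pool set rbd crush_rule replicated_ssd

The RBD images keep their pool name the whole time, only the data underneath
moves.)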


> On the manual page [0] nothing is mentioned about configuration
> settings needed for RBD use, nor for SSD. The example there also uses
> virtio/vda, while I learned here that you should use virtio-scsi/sda.
>
>
There are differences here. One is "tell the guest to use virtio-scsi",
which allows TRIM from the guest to become a way to reclaim space on
thin-provisioned RBD images, and that is probably a good thing.

That doesn't mean the guest's TRIM commands are passed on to the OSD
storage sectors underneath the pool.

So you don't gain anything directly on the end devices by letting a guest
know whether it currently lies on SSDs or HDDs, because the guest will not
be sending SSD commands to the real device. Conversely, the TRIMs sent from
a guest allow re-thinning on an HDD pool as well, since this is not a
property of the underlying devices but of the Ceph code and the pool/RBD
settings, which are the same regardless.
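(You can watch this end to end. Assuming a Linux guest, discard enabled on the
QEMU disk, and a placeholder pool/image name:

    # inside the guest
    fstrim -v /
    # from a ceph client
    rbd du rbd_hdd/vm-disk

The image's USED figure reported by rbd du should shrink after the trim, on an
HDD pool just as on an SSD pool.)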

Also, if the guest makes other decisions based on whether there is HDD or SSD
underneath, those decisions can be wrong both ways: "I was told it's HDD,
therefore I assume only X IOPS are possible", while the KVM librbd layer can
cache tons of things for you, and filestore OSD RAM caches can give you
RAM-like write performance never seen on normal HDDs (at the risk of data
loss in the worst case).

On the other hand, a guest being told that there is SSD or NVMe underneath
and deciding that 100k IOPS should be the norm from now on would be equally
wrong in the other direction if the Ceph network between the guest and the
OSDs prevents you from doing more than 1k IOPS.

If you find that there is no certain way to tell a guest where it really is
stored, that may actually be a conscious decision, and for the best. Let the
guest try to do as much IO as it thinks it needs and get the results when
they are ready.
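(If you want to know what a guest can actually do rather than what it assumes,
just measure it from inside the guest, e.g. with fio; the parameters here are
only an illustration:

    fio --name=randwrite --filename=/tmp/fio-test --size=1G --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
        --runtime=60 --time_based

That reflects the whole path, librbd caching, network and OSDs included,
instead of a guess based on the advertised device type.)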

(A nod to Xen, which likes to tell its guests they are on IDE drives, so
guests never send out more than one IO request at a time because IDE simply
doesn't have that concept, regardless of how fancy a host you have with
super-deep request queues and all... 8-/ )

-- 
May the most significant bit of your life be positive.


Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Marc Roos
 
Yes, but those 'changes' can be relayed via the kernel RBD driver, no? 
Besides, I don't think you can move an RBD block device that is in use to a 
different pool anyway. 

On the manual page [0] nothing is mentioned about configuration settings 
needed for RBD use, nor for SSD. The example there also uses virtio/vda, 
while I learned here that you should use virtio-scsi/sda.



[0] http://docs.ceph.com/docs/master/rbd/libvirt/
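(For concreteness, and only as a sketch since [0] may change: the difference
shows up in the domain XML. The docs' example attaches the disk as

    <target dev='vda' bus='virtio'/>

whereas the virtio-scsi variant needs a matching controller plus a scsi target:

    <controller type='scsi' model='virtio-scsi'/>
    ...
    <target dev='sda' bus='scsi'/>

The device names here are just examples.)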





-Original Message-
From: Janne Johansson  
Subject: Re: [ceph-users] rbd ssd pool for (windows) vms

On Wed, 1 May 2019 at 23:00, Marc Roos wrote:


Do you need to tell the VMs that they are on an SSD RBD pool? Or do 
Ceph and the libvirt drivers do this automatically for you?
When testing a Nutanix Acropolis virtual install, I had to 'cheat' it 
by adding this  to make the installer think there was an SSD drive.



Being or not being on an SSD pool is a (possibly) temporary condition, 
so if the guest OS makes certain assumptions based on it, those might be 
invalid an hour later.






Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-03 Thread Janne Johansson
On Wed, 1 May 2019 at 23:00, Marc Roos wrote:

> Do you need to tell the VMs that they are on an SSD RBD pool? Or do
> Ceph and the libvirt drivers do this automatically for you?
> When testing a Nutanix Acropolis virtual install, I had to 'cheat' it by
> adding this  to make the installer think there was an SSD drive.
>
>

Being or not being on an SSD pool is a (possibly) temporary condition, so
if the guest OS makes certain assumptions based on it, those might be
invalid an hour later.

-- 
May the most significant bit of your life be positive.


Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-01 Thread Jason Dillaman
On Wed, May 1, 2019 at 5:00 PM Marc Roos  wrote:
>
>
> Do you need to tell the VMs that they are on an SSD RBD pool? Or do
> Ceph and the libvirt drivers do this automatically for you?

Like discard, any advanced QEMU options would need to be manually specified.
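For example, a minimal sketch of an RBD disk with discard switched on in the
libvirt domain XML (the pool/image name, cache mode, and the omitted
auth/monitor details are placeholders, not taken from this thread):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <source protocol='rbd' name='rbd_ssd/vm-disk'/>
      <target dev='sda' bus='scsi'/>
    </disk>

Presenting the disk to the guest as non-rotational (SSD) is likewise a manual
step; depending on the QEMU/libvirt versions this is a rotation_rate property
on the emulated disk, not anything Ceph or librbd sets for you.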

> When testing a Nutanix Acropolis virtual install, I had to 'cheat' it by
> adding this  to make the installer think there was an SSD drive.
>
> I only have 'Thin provisioned drive' mentioned regardless of whether the VM
> is on an HDD RBD pool or an SSD RBD pool.
>
>



-- 
Jason


[ceph-users] rbd ssd pool for (windows) vms

2019-05-01 Thread Marc Roos


Do you need to tell the VMs that they are on an SSD RBD pool? Or do 
Ceph and the libvirt drivers do this automatically for you?

When testing a Nutanix Acropolis virtual install, I had to 'cheat' it by 
adding this  to make the installer think there was an SSD drive.

I only have 'Thin provisioned drive' mentioned regardless of whether the VM 
is on an HDD RBD pool or an SSD RBD pool.

