Re: [ceph-users] How to distribute data

2017-09-04 Thread Oscar Segarra
Hi,

For a VDI (Windows 10) use case... is there any documentation about the
recommended configuration with RBD?

Thanks a lot!

2017-08-18 15:40 GMT+02:00 Oscar Segarra :

> Hi,
>
> Yes, you are right, the idea is cloning a snapshot taken from the base
> image...
>
> And yes, I'm working with the current RC of luminous.
>
> In this scenario (base image in raw format + snapshot + snapshot clones
> for end-user Windows 10 VDI), would SSD+HDD tiering help?
>
> Thanks a lot
>
>
> On 18 Aug 2017 at 4:05, "David Turner"  wrote:
>
> Do you mean a lot of snapshots or creating a lot of clones from a
> snapshot? I can agree to the pain of creating a lot of snapshots of RBDs in
> Ceph. I'm assuming that you mean to say that you will have a template RBD
> with a version snapshot that you clone each time you need to let someone
> log in. Is that what you're planning?
>
> On Thu, Aug 17, 2017, 9:51 PM Christian Balzer  wrote:
>
>>
>> Hello,
>>
>> On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote:
>>
>> > Hi Christian,
>> >
>> > Thanks a lot for helping...
>> >
>> > Have you read:
>> > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>> >
>> > So just from the perspective of qcow2, you seem to be doomed.
>> > --> Sorry, I was talking about RAW + QCOW2 when I meant RBD images and
>> > RBD snapshots...
>> >
>> I tested Snapshots with Hammer and the release before it, found them
>> immensely painful (resource intensive) and avoided them since.
>> That said, there are supposedly quite some improvements in recent versions
>> (I suppose you'll deploy with Luminous), as well as more (and
>> working) control knobs to reduce the impact of snapshot operations.
>>
>> > A sufficiently large cache tier should help there immensely and the
>> base image
>> > should be in cache (RAM, pagecache on the OSD servers really) all the
>> time
>> > anyway.
>> > --> If we are talking about RBD images and RBD snapshots... does it help
>> > immensely as well?
>> >
>> No experience, so nothing conclusive and authoritative from my end.
>> If the VMs write/read a lot of the same data (as in 4MB RBD objects),
>> cache-tiering should help again.
>> But promoting and demoting things through it when dealing with snapshots
>> and deletions of them might be a pain.
>>
>> Christian
>>
>> > Sizing this and specifying the correct type of SSDs/NVMes for the
>> cache-tier
>> > is something that only you can answer based on existing data or
>> sufficiently
>> > detailed and realistic tests.
>> > --> Yes, the problem is that I have to buy the hardware for Windows 10
>> > VDI up front...
>> > and I cannot run realistic tests beforehand :( but I will work along these
>> > lines...
>> >
>> > Thanks a lot again!
>> >
>> >
>> >
>> > 2017-08-18 3:14 GMT+02:00 Christian Balzer :
>> >
>> > >
>> > > Hello,
>> > >
>> > > On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote:
>> > >
>> > > > Hi David,
>> > > >
>> > > > Thanks a lot again for your quick answer...
>> > > >
>> > > > *The rules in the CRUSH map will always be followed.  It is not
>> possible
>> > > > for Ceph to go against that and put data into a root that shouldn't
>> have
>> > > > it.*
>> > > > --> I will work on your proposal of creating two roots in the CRUSH
>> > > map...
>> > > > just one question more:
>> > > > --> Regarding space consumption, with this proposal, is it
>> possible to
>> > > > know how much disk space is free in each pool?
>> > > >
>> > > > *The problem with a cache tier is that Ceph is going to need to
>> promote
>> > > and
>> > > > evict stuff all the time (not free).  A lot of people that want to
>> use
>> > > SSD
>> > > > cache tiering for RBDs end up with slower performance because of
>> this.
>> > > > Christian Balzer is the expert on Cache Tiers for RBD usage.  His
>> primary
>> > > > stance is that it's most likely a bad idea, but there are definite
>> cases
>> > > > where it's perfect.*
>> > > > --> Christian, is there any advice for VDI --> BASE IMAGE (raw) +
>> 1000
>> > > > linked clones (qcow2)
>> > > >
>> > > Have you read:
>> > > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>> > >
>> > > So just from the perspective of qcow2, you seem to be doomed.
>> > >
>> > > Windows always appears to be very chatty when it comes to I/O and also
>> > > paging/swapping seemingly w/o need, rhyme or reason.
>> > > A sufficiently large cache tier should help there immensely and the
>> base
>> > > image should be in cache (RAM, pagecache on the OSD servers really)
>> all the
>> > > time anyway.
>> > > Sizing this and specifying the correct type of SSDs/NVMes for the
>> > > cache-tier is something that only you can answer based on existing
>> data or
>> > > sufficiently detailed and realistic tests.
>> > >
>> > > Christian
>> > >
>> > > > Thanks a lot.
>> > > >
>> > > >
>> > > > 2017-08-17 22:42 GMT+02:00 David Turner :
>> > > >
>> > > > > The rules in the CRUSH map will always be followed.  It is not
>> possible
>> > > > > for Ceph to go against that and put data into a root that
>> shouldn't
>> >

Re: [ceph-users] How to distribute data

2017-08-18 Thread Oscar Segarra
Hi,

Yes, you are right, the idea is cloning a snapshot taken from the base
image...

And yes, I'm working with the current RC of luminous.

In this scenario (base image in raw format + snapshot + snapshot clones
for end-user Windows 10 VDI), would SSD+HDD tiering help?

Thanks a lot
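
A rough sketch of that scenario with plain RBD layering, so no qcow2 is
involved at all (pool and image names below are just placeholders):

  # golden image, kept in raw format inside RBD
  rbd create --size 40G vdi/win10-base
  # ... install and prepare Windows 10 on vdi/win10-base ...
  rbd snap create vdi/win10-base@gold
  rbd snap protect vdi/win10-base@gold
  # one thin copy-on-write clone per desktop
  rbd clone vdi/win10-base@gold vdi/win10-desktop-0001

Each clone only stores the blocks that diverge from the protected snapshot,
which is what qcow2 linked clones provide, just natively in RBD.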


On 18 Aug 2017 at 4:05, "David Turner"  wrote:

Do you mean a lot of snapshots or creating a lot of clones from a snapshot?
I can agree to the pain of creating a lot of snapshots of RBDs in Ceph. I'm
assuming that you mean to say that you will have a template rbd with a
version snapshot that you clone each time you need to let someone log in.
Is that what you're planning?

On Thu, Aug 17, 2017, 9:51 PM Christian Balzer  wrote:

>
> Hello,
>
> On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote:
>
> > Hi Christian,
> >
> > Thanks a lot for helping...
> >
> > Have you read:
> > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> >
> > So just from the perspective of qcow2, you seem to be doomed.
> > --> Sorry, I was talking about RAW + QCOW2 when I meant RBD images and RBD
> > snapshots...
> >
> I tested Snapshots with Hammer and the release before it, found them
> immensely painful (resource intensive) and avoided them since.
> That said, there are supposedly quite some improvements in recent versions
> (I suppose you'll deploy with Luminous), as well as more (and
> working) control knobs to reduce the impact of snapshot operations.
>
> > A sufficiently large cache tier should help there immensely and the base
> image
> > should be in cache (RAM, pagecache on the OSD servers really) all the
> time
> > anyway.
> > --> If we are talking about RBD images and RBD snapshots... does it help
> > immensely as well?
> >
> No experience, so nothing conclusive and authoritative from my end.
> If the VMs write/read a lot of the same data (as in 4MB RBD objects),
> cache-tiering should help again.
> But promoting and demoting things through it when dealing with snapshots
> and deletions of them might be a pain.
>
> Christian
>
> > Sizing this and specifying the correct type of SSDs/NVMes for the
> cache-tier
> > is something that only you can answer based on existing data or
> sufficiently
> > detailed and realistic tests.
> > --> Yes, the problem is that I have to buy the hardware for Windows 10 VDI up front...
> > and I cannot run realistic tests beforehand :( but I will work along these
> > lines...
> >
> > Thanks a lot again!
> >
> >
> >
> > 2017-08-18 3:14 GMT+02:00 Christian Balzer :
> >
> > >
> > > Hello,
> > >
> > > On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote:
> > >
> > > > Hi David,
> > > >
> > > > Thanks a lot again for your quick answer...
> > > >
> > > > *The rules in the CRUSH map will always be followed.  It is not
> possible
> > > > for Ceph to go against that and put data into a root that shouldn't
> have
> > > > it.*
> > > > --> I will work on your proposal of creating two roots in the CRUSH
> > > map...
> > > > just one question more:
> > > > --> Regarding space consumption, with this proposal, is it
> possible to
> > > > know how much disk space is free in each pool?
> > > >
> > > > *The problem with a cache tier is that Ceph is going to need to
> promote
> > > and
> > > > evict stuff all the time (not free).  A lot of people that want to
> use
> > > SSD
> > > > cache tiering for RBDs end up with slower performance because of
> this.
> > > > Christian Balzer is the expert on Cache Tiers for RBD usage.  His
> primary
> > > > stance is that it's most likely a bad idea, but there are definite
> cases
> > > > where it's perfect.*
> > > > --> Christian, is there any advice for VDI --> BASE IMAGE (raw) +
> 1000
> > > > linked clones (qcow2)
> > > >
> > > Have you read:
> > > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> > >
> > > So just from the perspective of qcow2, you seem to be doomed.
> > >
> > > Windows always appears to be very chatty when it comes to I/O and also
> > > paging/swapping seemingly w/o need, rhyme or reason.
> > > A sufficiently large cache tier should help there immensely and the
> base
> > > image should be in cache (RAM, pagecache on the OSD servers really)
> all the
> > > time anyway.
> > > Sizing this and specifying the correct type of SSDs/NVMes for the
> > > cache-tier is something that only you can answer based on existing
> data or
> > > sufficiently detailed and realistic tests.
> > >
> > > Christian
> > >
> > > > Thanks a lot.
> > > >
> > > >
> > > > 2017-08-17 22:42 GMT+02:00 David Turner :
> > > >
> > > > > The rules in the CRUSH map will always be followed.  It is not
> possible
> > > > > for Ceph to go against that and put data into a root that shouldn't
> > > have it.
> > > > >
> > > > > The problem with a cache tier is that Ceph is going to need to
> promote
> > > and
> > > > > evict stuff all the time (not free).  A lot of people that want to
> use
> > > SSD
> > > > > cache tiering for RBDs end up with slower performance because of
> this.
> > > > > Christian Balzer is the expert 

Re: [ceph-users] How to distribute data

2017-08-17 Thread David Turner
Do you mean a lot of snapshots or creating a lot of clones from a snapshot?
I can agree to the pain of creating a lot of snapshots of RBDs in Ceph. I'm
assuming that you mean to say that you will have a template rbd with a
version snapshot that you clone each time you need to let someone log in.
Is that what you're planning?

On Thu, Aug 17, 2017, 9:51 PM Christian Balzer  wrote:

>
> Hello,
>
> On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote:
>
> > Hi Christian,
> >
> > Thanks a lot for helping...
> >
> > Have you read:
> > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> >
> > So just from the perspective of qcow2, you seem to be doomed.
> > --> Sorry, I was talking about RAW + QCOW2 when I meant RBD images and RBD
> > snapshots...
> >
> I tested Snapshots with Hammer and the release before it, found them
> immensely painful (resource intensive) and avoided them since.
> That said, there are supposedly quite some improvements in recent versions
> (I suppose you'll deploy with Luminous), as well as more (and
> working) control knobs to reduce the impact of snapshot operations.
>
> > A sufficiently large cache tier should help there immensely and the base
> image
> > should be in cache (RAM, pagecache on the OSD servers really) all the
> time
> > anyway.
> > --> If we are talking about RBD images and RBD snapshots... does it help
> > immensely as well?
> >
> No experience, so nothing conclusive and authoritative from my end.
> If the VMs write/read a lot of the same data (as in 4MB RBD objects),
> cache-tiering should help again.
> But promoting and demoting things through it when dealing with snapshots
> and deletions of them might be a pain.
>
> Christian
>
> > Sizing this and specifying the correct type of SSDs/NVMes for the
> cache-tier
> > is something that only you can answer based on existing data or
> sufficiently
> > detailed and realistic tests.
> > --> Yes, the problem is that I have to buy the hardware for Windows 10 VDI up front...
> > and I cannot run realistic tests beforehand :( but I will work along these
> > lines...
> >
> > Thanks a lot again!
> >
> >
> >
> > 2017-08-18 3:14 GMT+02:00 Christian Balzer :
> >
> > >
> > > Hello,
> > >
> > > On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote:
> > >
> > > > Hi David,
> > > >
> > > > Thanks a lot again for your quick answer...
> > > >
> > > > *The rules in the CRUSH map will always be followed.  It is not
> possible
> > > > for Ceph to go against that and put data into a root that shouldn't
> have
> > > > it.*
> > > > --> I will work on your proposal of creating two roots in the CRUSH
> > > map...
> > > > just one question more:
> > > > --> Regarding space consumption, with this proposal, is it
> possible to
> > > > know how much disk space is free in each pool?
> > > >
> > > > *The problem with a cache tier is that Ceph is going to need to
> promote
> > > and
> > > > evict stuff all the time (not free).  A lot of people that want to
> use
> > > SSD
> > > > cache tiering for RBDs end up with slower performance because of
> this.
> > > > Christian Balzer is the expert on Cache Tiers for RBD usage.  His
> primary
> > > > stance is that it's most likely a bad idea, but there are definite
> cases
> > > > where it's perfect.*
> > > > --> Christian, is there any advice for VDI --> BASE IMAGE (raw) +
> 1000
> > > > linked clones (qcow2)
> > > >
> > > Have you read:
> > > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> > >
> > > So just from the perspective of qcow2, you seem to be doomed.
> > >
> > > Windows always appears to be very chatty when it comes to I/O and also
> > > paging/swapping seemingly w/o need, rhyme or reason.
> > > A sufficiently large cache tier should help there immensely and the
> base
> > > image should be in cache (RAM, pagecache on the OSD servers really)
> all the
> > > time anyway.
> > > Sizing this and specifying the correct type of SSDs/NVMes for the
> > > cache-tier is something that only you can answer based on existing
> data or
> > > sufficiently detailed and realistic tests.
> > >
> > > Christian
> > >
> > > > Thanks a lot.
> > > >
> > > >
> > > > 2017-08-17 22:42 GMT+02:00 David Turner :
> > > >
> > > > > The rules in the CRUSH map will always be followed.  It is not
> possible
> > > > > for Ceph to go against that and put data into a root that shouldn't
> > > have it.
> > > > >
> > > > > The problem with a cache tier is that Ceph is going to need to
> promote
> > > and
> > > > > evict stuff all the time (not free).  A lot of people that want to
> use
> > > SSD
> > > > > cache tiering for RBDs end up with slower performance because of
> this.
> > > > > Christian Balzer is the expert on Cache Tiers for RBD usage.  His
> > > primary
> > > > > stance is that it's most likely a bad idea, but there are definite
> > > cases
> > > > > where it's perfect.
> > > > >
> > > > >
> > > > > On Thu, Aug 17, 2017 at 4:20 PM Oscar Segarra <
> oscar.sega...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > >> Hi David,
> > > > >>
>

Re: [ceph-users] How to distribute data

2017-08-17 Thread Christian Balzer

Hello,

On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote:

> Hi Christian,
> 
> Thanks a lot for helping...
> 
> Have you read:
> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> 
> So just from the perspective of qcow2, you seem to be doomed.
> --> Sorry, I was talking about RAW + QCOW2 when I meant RBD images and RBD
> snapshots...
>
I tested Snapshots with Hammer and the release before it, found them
immensely painful (resource intensive) and avoided them since.
That said, there are supposedly quite some improvements in recent versions
(I suppose you'll deploy with Luminous), as well as more (and
working) control knobs to reduce the impact of snapshot operations. 
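
The knobs usually meant here are the OSD-side snapshot trim throttles; exact
option names and sane values vary by release, so treat the lines below purely
as an illustration:

  # slow down snapshot trimming so deletions do not starve client I/O
  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
  # and/or lower the priority of trim work relative to client ops
  ceph tell osd.* injectargs '--osd_snap_trim_priority 1'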
 
> A sufficiently large cache tier should help there immensely and the base image
> should be in cache (RAM, pagecache on the OSD servers really) all the time
> anyway.
> --> If we are talking about RBD images and RBD snapshots... does it help
> immensely as well?
> 
No experience, so nothing conclusive and authoritative from my end.
If the VMs write/read a lot of the same data (as in 4MB RBD objects),
cache-tiering should help again.
But promoting and demoting things through it when dealing with snapshots
and deletions of them might be a pain.
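
The promotion churn itself can at least be damped with the pool-level hit-set
settings (the cache pool name and the values below are placeholders, not
recommendations):

  ceph osd pool set ssd-cache hit_set_type bloom
  ceph osd pool set ssd-cache hit_set_count 4
  ceph osd pool set ssd-cache hit_set_period 1200
  # only promote objects that were hit in several recent hit sets
  ceph osd pool set ssd-cache min_read_recency_for_promote 2
  ceph osd pool set ssd-cache min_write_recency_for_promote 2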

Christian

> Sizing this and specifying the correct type of SSDs/NVMes for the cache-tier
> is something that only you can answer based on existing data or sufficiently
> detailed and realistic tests.
> --> Yes, the problem is that I have to buy the hardware for Windows 10 VDI up front...
> and I cannot run realistic tests beforehand :( but I will work along these
> lines...
> 
> Thanks a lot again!
> 
> 
> 
> 2017-08-18 3:14 GMT+02:00 Christian Balzer :
> 
> >
> > Hello,
> >
> > On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote:
> >  
> > > Hi David,
> > >
> > > Thanks a lot again for your quick answer...
> > >
> > > *The rules in the CRUSH map will always be followed.  It is not possible
> > > for Ceph to go against that and put data into a root that shouldn't have
> > > it.*  
> > > --> I will work on your proposal of creating two roots in the CRUSH  
> > map...  
> > > just one question more:  
> > > --> Regarding space consumption, with this proposal, is it possible to
> > > know how much disk space is free in each pool?
> > >
> > > *The problem with a cache tier is that Ceph is going to need to promote  
> > and  
> > > evict stuff all the time (not free).  A lot of people that want to use  
> > SSD  
> > > cache tiering for RBDs end up with slower performance because of this.
> > > Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
> > > stance is that it's most likely a bad idea, but there are definite cases
> > > where it's perfect.*  
> > > --> Christian, is there any advice for VDI --> BASE IMAGE (raw) + 1000  
> > > linked clones (qcow2)
> > >  
> > Have you read:
> > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> >
> > So just from the perspective of qcow2, you seem to be doomed.
> >
> > Windows always appears to be very chatty when it comes to I/O and also
> > paging/swapping seemingly w/o need, rhyme or reason.
> > A sufficiently large cache tier should help there immensely and the base
> > image should be in cache (RAM, pagecache on the OSD servers really) all the
> > time anyway.
> > Sizing this and specifying the correct type of SSDs/NVMes for the
> > cache-tier is something that only you can answer based on existing data or
> > sufficiently detailed and realistic tests.
> >
> > Christian
> >  
> > > Thanks a lot.
> > >
> > >
> > > 2017-08-17 22:42 GMT+02:00 David Turner :
> > >  
> > > > The rules in the CRUSH map will always be followed.  It is not possible
> > > > for Ceph to go against that and put data into a root that shouldn't  
> > have it.  
> > > >
> > > > The problem with a cache tier is that Ceph is going to need to promote  
> > and  
> > > > evict stuff all the time (not free).  A lot of people that want to use  
> > SSD  
> > > > cache tiering for RBDs end up with slower performance because of this.
> > > > Christian Balzer is the expert on Cache Tiers for RBD usage.  His  
> > primary  
> > > > stance is that it's most likely a bad idea, but there are definite  
> > cases  
> > > > where it's perfect.
> > > >
> > > >
> > > > On Thu, Aug 17, 2017 at 4:20 PM Oscar Segarra  > >  
> > > > wrote:
> > > >  
> > > >> Hi David,
> > > >>
> > > >> Thanks a lot for your quick answer!
> > > >>
> > > >> *If I'm understanding you correctly, you want to have 2 different  
> > roots  
> > > >> that pools can be made using.  The first being entirely SSD storage.  
> > The  
> > > >> second being HDD Storage with an SSD cache tier on top of it.  *  
> > > >> --> Yes, this is what I mean.  
> > > >>
> > > >> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-
> > > >> and-ssd-within-the-same-box/  
> > > >> --> I'm not an expert in CRUSH rules... With this configuration, it is
> > > >> guaranteed that objects stored in ssd pool d

Re: [ceph-users] How to distribute data

2017-08-17 Thread Christian Balzer

Hello,

On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote:

> Hi David,
> 
> Thanks a lot again for your quick answer...
> 
> *The rules in the CRUSH map will always be followed.  It is not possible
> for Ceph to go against that and put data into a root that shouldn't have
> it.*
> --> I will work on your proposal of creating two roots in the CRUSH map...  
> just one question more:
> --> Regarding space consumption, with this proposal, is it possible to
> know how much disk space is free in each pool?
> 
> *The problem with a cache tier is that Ceph is going to need to promote and
> evict stuff all the time (not free).  A lot of people that want to use SSD
> cache tiering for RBDs end up with slower performance because of this.
> Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
> stance is that it's most likely a bad idea, but there are definite cases
> where it's perfect.*
> --> Christian, is there any advice for VDI --> BASE IMAGE (raw) + 1000  
> linked clones (qcow2)
> 
Have you read:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/

So just from the perspective of qcow2, you seem to be doomed.

Windows always appears to be very chatty when it comes to I/O and also
paging/swapping seemingly w/o need, rhyme or reason.
A sufficiently large cache tier should help there immensely and the base
image should be in cache (RAM, pagecache on the OSD servers really) all the
time anyway.
Sizing this and specifying the correct type of SSDs/NVMes for the
cache-tier is something that only you can answer based on existing data or
sufficiently detailed and realistic tests.
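
Whatever size is eventually chosen, the tier will not size itself; the limits
have to be set explicitly on the cache pool, along the lines of the following
(pool name and numbers are placeholders, not recommendations):

  # cap the tier and start flushing/evicting well before it fills up
  ceph osd pool set ssd-cache target_max_bytes 1099511627776   # ~1 TiB
  ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
  ceph osd pool set ssd-cache cache_target_full_ratio 0.8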

Christian

> Thanks a lot.
> 
> 
> 2017-08-17 22:42 GMT+02:00 David Turner :
> 
> > The rules in the CRUSH map will always be followed.  It is not possible
> > for Ceph to go against that and put data into a root that shouldn't have it.
> >
> > The problem with a cache tier is that Ceph is going to need to promote and
> > evict stuff all the time (not free).  A lot of people that want to use SSD
> > cache tiering for RBDs end up with slower performance because of this.
> > Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
> > stance is that it's most likely a bad idea, but there are definite cases
> > where it's perfect.
> >
> >
> > On Thu, Aug 17, 2017 at 4:20 PM Oscar Segarra 
> > wrote:
> >  
> >> Hi David,
> >>
> >> Thanks a lot for your quick answer!
> >>
> >> *If I'm understanding you correctly, you want to have 2 different roots
> >> that pools can be made using.  The first being entirely SSD storage.  The
> >> second being HDD Storage with an SSD cache tier on top of it.  *  
> >> --> Yes, this is what I mean.  
> >>
> >> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-
> >> and-ssd-within-the-same-box/  
> >> --> I'm not an expert in CRUSH rules... With this configuration, is it
> >> guaranteed that objects stored in the ssd pool do not "go" to the hdd disks?
> >>
> >> *The above guide explains how to set up the HDD root and the SSD root.
> >> After that all you do is create a pool on the HDD root for RBDs, a pool on
> >> the SSD root for a cache tier to use with the HDD pool, and then a pool
> >> on the SSD root for RBDs.  There aren't actually a lot of use cases out
> >> there where using an SSD cache tier on top of an HDD RBD pool is what you
> >> really want.  I would recommend testing this thoroughly and comparing your
> >> performance to just a standard HDD pool for RBDs without a cache tier.*  
> >> --> I'm working on a VDI solution where there are BASE IMAGES (raw) and
> >> qcow2 linked clones... where I do not expect all VDIs to be powered on at the
> >> same time, and I want a configuration that avoids problems related to login
> >> storms. (1000 hosts)
> >> --> Do you think it is not a good idea? Do you know what people usually
> >> configure for this kind of scenario?
> >>
> >> Thanks a lot.
> >>
> >>
> >> 2017-08-17 21:31 GMT+02:00 David Turner :
> >>  
> >>> If I'm understanding you correctly, you want to have 2 different roots
> >>> that pools can be made using.  The first being entirely SSD storage.  The
> >>> second being HDD Storage with an SSD cache tier on top of it.
> >>>
> >>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-
> >>> and-ssd-within-the-same-box/
> >>>
> >>> The above guide explains how to set up the HDD root and the SSD root.
> >>> After that all you do is create a pool on the HDD root for RBDs, a pool on
> >>> the SSD root for a cache tier to use with the HDD pool, and then a pool
> >>> on the SSD root for RBDs.  There aren't actually a lot of use cases out
> >>> there where using an SSD cache tier on top of an HDD RBD pool is what you
> >>> really want.  I would recommend testing this thoroughly and comparing your
> >>> performance to just a standard HDD pool for RBDs without a cache tier.
> >>>
> >>> On Thu, Aug 17, 2017 at 3:18 PM Oscar Segarra 
> >>> wrote:
> >>>  
>  Hi,
> 
>  Sorry guys, duri

Re: [ceph-users] How to distribute data

2017-08-17 Thread Oscar Segarra
Thanks a lot David,

it is a little bit difficult for me to run tests because I have to buy the
hardware first... and the price is different with an SSD cache tier or without it.

If anybody has experience with VDI/login storms... it will be really welcome!

Note: I have removed the ceph-users list because I get errors when I copy it.

2017-08-18 2:20 GMT+02:00 David Turner :

> Get it set up and start running tests. You can always enable or disable
> the cache tier later. I don't know if Christian will chime in. And please
> stop removing the ceph-users list from your responses.
>
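
Backing a writeback tier out again later roughly follows the documented
removal procedure, sketched here with placeholder pool names:

  # stop new writes landing in the tier (newer releases require --yes-i-really-mean-it)
  ceph osd tier cache-mode ssd-cache forward --yes-i-really-mean-it
  # flush and evict everything back to the backing HDD pool
  rados -p ssd-cache cache-flush-evict-all
  # detach the tier
  ceph osd tier remove-overlay hdd-rbd
  ceph osd tier remove hdd-rbd ssd-cache
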
> On Thu, Aug 17, 2017, 7:41 PM Oscar Segarra 
> wrote:
>
>> Thanks a lot David!!!
>>
>> Let's wait the opinion of Christian about the suggested configuration for
>> VDI...
>>
>> Óscar Segarra
>>
>> 2017-08-18 1:03 GMT+02:00 David Turner :
>>
>>> `ceph df` and `ceph osd df` should give you enough information to know
>>> how full each pool, root, osd, etc are.
>>>
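
In practice that means something along the lines of:

  # per-pool usage and available space
  ceph df detail
  # per-OSD fill level, grouped by CRUSH (sub)tree
  ceph osd df tree
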
>>> On Thu, Aug 17, 2017, 5:56 PM Oscar Segarra 
>>> wrote:
>>>
 Hi David,

 Thanks a lot again for your quick answer...


 *The rules in the CRUSH map will always be followed.  It is not
 possible for Ceph to go against that and put data into a root that
 shouldn't have it.*
 --> I will work on your proposal of creating two roots in the CRUSH
 map... just one question more:
 --> Regarding space consumption, with this proposal, is it possible
 to know how much disk space is free in each pool?


 *The problem with a cache tier is that Ceph is going to need to promote
 and evict stuff all the time (not free).  A lot of people that want to use
 SSD cache tiering for RBDs end up with slower performance because of this.
 Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
 stance is that it's most likely a bad idea, but there are definite cases
 where it's perfect.*
 --> Christian, is there any advice for VDI --> BASE IMAGE (raw) + 1000
 linked clones (qcow2)

 Thanks a lot.


 2017-08-17 22:42 GMT+02:00 David Turner :

> The rules in the CRUSH map will always be followed.  It is not
> possible for Ceph to go against that and put data into a root that
> shouldn't have it.
>
> The problem with a cache tier is that Ceph is going to need to promote
> and evict stuff all the time (not free).  A lot of people that want to use
> SSD cache tiering for RBDs end up with slower performance because of this.
> Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
> stance is that it's most likely a bad idea, but there are definite cases
> where it's perfect.
>
>
> On Thu, Aug 17, 2017 at 4:20 PM Oscar Segarra 
> wrote:
>
>> Hi David,
>>
>> Thanks a lot for your quick answer!
>>
>> *If I'm understanding you correctly, you want to have 2 different
>> roots that pools can be made using.  The first being entirely SSD 
>> storage.
>> The second being HDD Storage with an SSD cache tier on top of it.  *
>> --> Yes, this is what I mean.
>>
>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-
>> and-ssd-within-the-same-box/
>> --> I'm not an expert in CRUSH rules... With this configuration, is it
>> guaranteed that objects stored in the ssd pool do not "go" to the hdd
>> disks?
>>
>> *The above guide explains how to set up the HDD root and the SSD
>> root.  After that all you do is create a pool on the HDD root for RBDs, a
>> pool on the SSD root for a cache tier to use with the HDD pool, and then a
>> pool on the SSD root for RBDs.  There aren't actually a lot of use
>> cases
>> out there where using an SSD cache tier on top of an HDD RBD pool is what
>> you really want.  I would recommend testing this thoroughly and comparing
>> your performance to just a standard HDD pool for RBDs without a cache 
>> tier.*
>> --> I'm working on a VDI solution where there are BASE IMAGES (raw)
>> and qcow2 linked clones... where I do not expect all VDIs to be powered on
>> at the same time, and I want a configuration that avoids problems related to
>> login storms. (1000 hosts)
>> --> Do you think it is not a good idea? Do you know what people usually
>> configure for this kind of scenario?
>>
>> Thanks a lot.
>>
>>
>> 2017-08-17 21:31 GMT+02:00 David Turner :
>>
>>> If I'm understanding you correctly, you want to have 2 different
>>> roots that pools can be made using.  The first being entirely SSD 
>>> storage.
>>> The second being HDD Storage with an SSD cache tier on top of it.
>>>
>>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-
>>> and-ssd-within-the-same-box/
>>>
>>> The above guide explains how to set up the HDD root and the SSD
>>> root.  After that all you do is create a 

Re: [ceph-users] How to distribute data

2017-08-17 Thread David Turner
The rules in the CRUSH map will always be followed.  It is not possible for
Ceph to go against that and put data into a root that shouldn't have it.

The problem with a cache tier is that Ceph is going to need to promote and
evict stuff all the time (not free).  A lot of people that want to use SSD
cache tiering for RBDs end up with slower performance because of this.
Christian Balzer is the expert on Cache Tiers for RBD usage.  His primary
stance is that it's most likely a bad idea, but there are definite cases
where it's perfect.

On Thu, Aug 17, 2017 at 4:20 PM Oscar Segarra 
wrote:

> Hi David,
>
> Thanks a lot for your quick answer!
>
> *If I'm understanding you correctly, you want to have 2 different roots
> that pools can be made using.  The first being entirely SSD storage.  The
> second being HDD Storage with an SSD cache tier on top of it.  *
> --> Yes, this is what I mean.
>
>
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
> --> I'm not an expert in CRUSH rules... With this configuration, is it
> guaranteed that objects stored in the ssd pool do not "go" to the hdd disks?
>
> *The above guide explains how to set up the HDD root and the SSD root.
> After that all you do is create a pool on the HDD root for RBDs, a pool on
> the SSD root for a cache tier to use with the HDD pool, and then a pool
> on the SSD root for RBDs.  There aren't actually a lot of use cases out
> there where using an SSD cache tier on top of an HDD RBD pool is what you
> really want.  I would recommend testing this thoroughly and comparing your
> performance to just a standard HDD pool for RBDs without a cache tier.*
> --> I'm working on a VDI solution where there are BASE IMAGES (raw) and
> qcow2 linked clones... where I do not expect all VDIs to be powered on at the
> same time, and I want a configuration that avoids problems related to login
> storms. (1000 hosts)
> --> Do you think it is not a good idea? Do you know what people usually
> configure for this kind of scenario?
>
> Thanks a lot.
>
>
> 2017-08-17 21:31 GMT+02:00 David Turner :
>
>> If I'm understanding you correctly, you want to have 2 different roots
>> that pools can be made using.  The first being entirely SSD storage.  The
>> second being HDD Storage with an SSD cache tier on top of it.
>>
>>
>> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>>
>> The above guide explains how to set up the HDD root and the SSD root.
>> After that all you do is create a pool on the HDD root for RBDs, a pool on
>> the SSD root for a cache tier to use with the HDD pool, and then a pool
>> on the SSD root for RBDs.  There aren't actually a lot of use cases out
>> there where using an SSD cache tier on top of an HDD RBD pool is what you
>> really want.  I would recommend testing this thoroughly and comparing your
>> performance to just a standard HDD pool for RBDs without a cache tier.
>>
>> On Thu, Aug 17, 2017 at 3:18 PM Oscar Segarra 
>> wrote:
>>
>>> Hi,
>>>
>>> Sorry guys, these days I'm asking a lot about how to distribute
>>> my data.
>>>
>>> I have two kinds of VM:
>>>
>>> 1.- Management VMs (linux) --> Full SSD dedicated disks
>>> 2.- Windows VMs --> SSD + HDD (with tiering).
>>>
>>> I'm working on installing two clusters on the same host but I'm
>>> encountering lots of problems, as named clusters do not appear to be fully supported.
>>>
>>> In the same cluster, is there any way to distribute my VMs as I like?
>>>
>>> Thanks a lot!
>>>
>>> Ó.
>>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to distribute data

2017-08-17 Thread David Turner
If I'm understanding you correctly, you want to have 2 different roots that
pools can be made using.  The first being entirely SSD storage.  The second
being HDD Storage with an SSD cache tier on top of it.

https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
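
A condensed sketch of what that guide builds, done via the CLI rather than by
hand-editing the decompiled map (bucket, rule and pool names are placeholders;
on Luminous, per-OSD device classes can achieve the same separation without
duplicate roots):

  ceph osd crush add-bucket ssd-root root
  ceph osd crush add-bucket hdd-root root
  # ... move the per-host ssd/hdd buckets under the matching root ...
  ceph osd crush rule create-simple ssd-rule ssd-root host
  ceph osd crush rule create-simple hdd-rule hdd-root host
  # a pool created with the ssd rule can only place data on OSDs under ssd-root
  ceph osd pool create ssd-rbd 512 512 replicated ssd-rule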

The above guide explains how to set up the HDD root and the SSD root.
After that all you do is create a pool on the HDD root for RBDs, a pool on
the SSD root for a cache tier to use with the HDD pool, and then a pool
on the SSD root for RBDs.  There aren't actually a lot of use cases out
there where using an SSD cache tier on top of an HDD RBD pool is what you
really want.  I would recommend testing this thoroughly and comparing your
performance to just a standard HDD pool for RBDs without a cache tier.
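
The cache-tier part of that is just a few commands once both pools exist
(pool names are placeholders; the hit-set and sizing parameters from the
cache-tiering docs still need to be configured on top of this):

  ceph osd tier add hdd-rbd ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay hdd-rbd ssd-cache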

On Thu, Aug 17, 2017 at 3:18 PM Oscar Segarra 
wrote:

> Hi,
>
> Sorry guys, these days I'm asking a lot about how to distribute my
> data.
>
> I have two kinds of VM:
>
> 1.- Management VMs (linux) --> Full SSD dedicated disks
> 2.- Windows VMs --> SSD + HDD (with tiering).
>
> I'm working on installing two clusters on the same host but I'm
> encountering lots of problems, as named clusters do not appear to be fully supported.
>
> In the same cluster, is there any way to distribute my VMs as I like?
>
> Thanks a lot!
>
> Ó.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com