I have edited the disk_offering table and simply entered "writeback" in the
cache_mode column. Stop and start the VM, and it will pick up/inherit the
cache_mode from its parent offering.
This also applies to the Compute/Service offering, again inside the
disk_offering table - just tested both.

i.e.

UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE `id`=102; # Compute (Service) offering
UPDATE `cloud`.`disk_offering` SET `cache_mode`='writeback' WHERE `id`=114; # Data disk offering
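
To double-check that the values stuck before bouncing the VM, something like
this should work (102/114 are just the IDs from my setup, and the column
names assume the stock schema):

SELECT `id`, `name`, `cache_mode` FROM `cloud`.`disk_offering` WHERE `id` IN (102, 114);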

Before the SQL update:

root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-ae10-41cf-8987-f1cfb47fe453'/>
      <target dev='vda' bus='virtio'/>
--
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-ec6e-4dda-b37b-17b1a749257f'/>
      <target dev='vdb' bus='virtio'/>
--

After the SQL update (VM stopped and started):

root@ix1-c7-4:~# virsh dumpxml i-2-10-VM | grep cache -A2
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/1b655159-ae10-41cf-8987-f1cfb47fe453'/>
      <target dev='vda' bus='virtio'/>
--
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/63a3ae7b-9ea9-3884-a772-1ea939ef6ec3/09bdadcb-ec6e-4dda-b37b-17b1a749257f'/>
      <target dev='vdb' bus='virtio'/>
--
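
(Side note: if you are unsure which disk_offering row a given VM's volumes
actually point at, a join along these lines should show it - this assumes the
usual `volumes`/`vm_instance` column names, so adjust to your schema and
instance name:)

SELECT v.`id`, v.`name`, v.`disk_offering_id`, o.`cache_mode`
FROM `cloud`.`volumes` v
JOIN `cloud`.`disk_offering` o ON o.`id` = v.`disk_offering_id`
WHERE v.`instance_id` = (SELECT `id` FROM `cloud`.`vm_instance`
                         WHERE `instance_name` = 'i-2-10-VM');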



On 20 February 2018 at 14:03, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> I have no idea how it can change the performance. If you look at the
> content of the commit you provided, it is only the commit that enabled the
> use of getCacheMode from disk offerings. However, it does not expose any
> way for users to change that value/configuration in the database. I might
> have missed it; do you see any API methods that receive the parameter
> "cacheMode" and then pass this parameter to a "diskOffering" object, and
> then persist/update this object in the database?
>
> May I ask how you guys are changing the cacheMode configuration?
>
> On Tue, Feb 20, 2018 at 9:56 AM, Paul Angus <paul.an...@shapeblue.com>
> wrote:
>
> > I'm working with some guys who are experimenting with the setting, as it
> > definitely seems to change the performance of data disks.  It also
> > changes the XML of the VM which is created.
> >
> > p.s.
> > I've found this commit;
> >
> > https://github.com/apache/cloudstack/commit/1edaa36cc68e845a42339d5f267d49c82343aefb
> >
> > so I've got something to investigate now, but the API documentation must
> > definitely be askew.
> >
> >
> >
> >
> > -----Original Message-----
> > From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
> > Sent: 20 February 2018 12:31
> > To: dev <dev@cloudstack.apache.org>
> > Subject: Re: Caching modes
> >
> > This cache mode parameter does not exist in the "CreateDiskOfferingCmd"
> > command. I also checked some commits from 2, 3, 4 and 5 years ago, and
> > this parameter was never there. If you check the API in [1], you can see
> > that it is not an expected parameter. Moreover, I do not see any use of
> > "setCacheMode" in the code (in case it is updated by some other method).
> > Interestingly enough, the code uses "getCacheMode".
> >
> > In summary, it is not a feature, and it does not work. It looks like some
> > leftover from the dark ages when people could commit anything and then
> > just leave a half-finished implementation in our code base.
> >
> > [1]
> > https://cloudstack.apache.org/api/apidocs-4.11/apis/createDiskOffering.html
> >
> >
> > On Tue, Feb 20, 2018 at 9:19 AM, Andrija Panic <andrija.pa...@gmail.com>
> > wrote:
> >
> > > I can also assume that "cachemode" as an API parameter is not supported,
> > > since creating a data disk offering via the GUI also doesn't set it in
> > > the DB/table.
> > >
> > > CM:    create diskoffering name=xxx displaytext=xxx storagetype=shared
> > > disksize=1024 cachemode=writeback
> > >
> > > This also does not set cachemode in the table... my guess is it's not
> > > implemented in the API.
> > >
> > > Let me know if I can help with any testing here.
> > >
> > > Cheers
> > >
> > > On 20 February 2018 at 13:09, Andrija Panic <andrija.pa...@gmail.com>
> > > wrote:
> > >
> > > > Hi Paul,
> > > >
> > > > not directly answering your question, but here are some
> > > > observations and a "warning" if clients are using write-back cache
> > > > at the KVM level.
> > > >
> > > >
> > > > I have (a long time ago) tested performance in 3 combinations (this
> > > > was not really thorough testing, but a brief test with FIO and
> > > > random WRITE IO):
> > > >
> > > > - just CEPH rbd cache (on the KVM side)
> > > >            i.e. [client]
> > > >                  rbd cache = true
> > > >                  rbd cache writethrough until flush = true
> > > >                  # (this is the default 32MB per volume, afaik)
> > > >
> > > > - just KVM write-back cache (had to manually edit the disk_offering
> > > > table to activate the cache mode, since when creating a new disk
> > > > offering via the GUI, the disk_offering table was NOT populated with
> > > > the "write-back" setting/value!)
> > > >
> > > > - both CEPH and KVM write-back cache active
> > > >
> > > > My observations were as follows, but it would be good to have someone
> > > > else actually confirm them:
> > > >
> > > > - same performance with only CEPH caching or with only KVM caching
> > > > - a bit worse performance with both CEPH and KVM caching active
> > > > (nonsense combination, I know...)
> > > >
> > > >
> > > > Please keep in mind that some ACS functionality, namely KVM
> > > > live migrations on shared storage (NFS/CEPH), is NOT supported when
> > > > you use KVM write-back cache, since this is considered an "unsafe"
> > > > migration; more info here:
> > > > https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration
> > > >
> > > > or in short:
> > > > "
> > > > The libvirt management layer includes checks for migration
> > > > compatibility based on several factors. If the guest storage is
> > > > hosted on a clustered file system, is read-only or is marked
> > > > shareable, then the cache mode is ignored when determining if
> > > > migration can be allowed. Otherwise libvirt will not allow migration
> > > > unless the cache mode is set to none. However, this restriction can
> > > > be overridden with the “unsafe” option to the migration APIs, which
> > > > is also supported by virsh, as for example in
> > > >
> > > > virsh migrate --live --unsafe
> > > > "
> > > >
> > > > Cheers
> > > > Andrija
> > > >
> > > >
> > > > On 20 February 2018 at 11:24, Paul Angus <paul.an...@shapeblue.com>
> > > wrote:
> > > >
> > > >> Hi Wido,
> > > >>
> > > >> This is for KVM (with a Ceph backend, as it happens). The API
> > > >> documentation is out of sync with the UI capabilities, so I'm trying
> > > >> to figure out if we *should* be able to set cacheMode for root disks.
> > > >> It seems to make quite a difference to performance.
> > > >>
> > > >>
> > > >>
> > > >>
> > > >> -----Original Message-----
> > > >> From: Wido den Hollander [mailto:w...@widodh.nl]
> > > >> Sent: 20 February 2018 09:03
> > > >> To: dev@cloudstack.apache.org
> > > >> Subject: Re: Caching modes
> > > >>
> > > >>
> > > >>
> > > >> On 02/20/2018 09:46 AM, Paul Angus wrote:
> > > >> > Hey guys,
> > > >> >
> > > >> > Can anyone shed any light on write caching in CloudStack?
> > > >> > cacheMode is available through the UI for data disks (but not root
> > > >> > disks), but is not documented as an API option for data or root
> > > >> > disks (although it is documented as a response for data disks).
> > > >> >
> > > >>
> > > >> What hypervisor?
> > > >>
> > > >> In the case of KVM it's passed down to the XML, which then passes it
> > > >> to Qemu/KVM, which then handles the caching.
> > > >>
> > > >> The implementation varies per hypervisor, so that should be the
> > > >> question.
> > > >>
> > > >> Wido
> > > >>
> > > >>
> > > >> > #huh?
> > > >> >
> > > >> > thanks
> > > >> >
> > > >> >
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Andrija Panić
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 

Andrija Panić
