There are very few configuration settings passed between Cinder and
Nova when attaching a volume. I think the only real possibility
(untested) would be to configure two Cinder backends against the same
Ceph cluster using two different auth user ids -- one for cache
enabled and another for cache disabled. Then you could update the
ceph.conf on the Nova compute hosts to have a client section for each
user id, configured however you want. You would most likely need to
unset "disk_cachemodes" in your nova.conf, since that would override
any client-specific settings in ceph.conf (again, untested).
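As a rough illustration only (untested, per the caveats above; the backend
and user names "ceph-cached"/"ceph-uncached" and "cinder-cached"/
"cinder-uncached" are hypothetical), the two-backend setup might look
something like this:

```ini
# /etc/cinder/cinder.conf -- two RBD backends against the same cluster,
# distinguished only by the Ceph auth user they connect as
[DEFAULT]
enabled_backends = ceph-cached, ceph-uncached

[ceph-cached]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-cached
rbd_pool = volumes
rbd_user = cinder-cached
rbd_ceph_conf = /etc/ceph/ceph.conf

[ceph-uncached]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-uncached
rbd_pool = volumes
rbd_user = cinder-uncached
rbd_ceph_conf = /etc/ceph/ceph.conf
```

```ini
# /etc/ceph/ceph.conf on the Nova compute hosts -- per-client cache policy
[client.cinder-cached]
rbd cache = true
rbd cache writethrough until flush = true

[client.cinder-uncached]
rbd cache = false
```

You would then steer volumes to one backend or the other via volume types
(volume_backend_name), and leave disk_cachemodes unset in nova.conf so the
per-client ceph.conf settings are not overridden.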

On Sat, Jan 7, 2017 at 10:59 PM, Lazuardi Nasution
<mrxlazuar...@gmail.com> wrote:
> Hi,
>
> I'm still waiting for clues or any comments on this case. It comes up
> because only some of my volumes are multi-attached. I'm trying not to
> degrade the performance of most volumes, images, and instance ephemeral
> data by disabling the RBD cache globally. Are there any best practices for
> combining multi-attached volumes with single-attached volumes?
>
> Best regards,
>
>
>
> Date: Tue, 3 Jan 2017 16:12:29 +0700
> From: Lazuardi Nasution <mrxlazuar...@gmail.com>
> To: Ceph Users <ceph-users@lists.ceph.com>
> Subject: [ceph-users] RBD Cache & Multi Attached Volumes
>
>
> Hi,
>
> For use with OpenStack Cinder multi-attached volumes, is it possible to
> disable the RBD cache for specific multi-attached volumes only? Single-attached
> volumes still need the RBD cache enabled for better performance.
>
> If I disable the RBD cache in /etc/ceph/ceph.conf, is
> disk_cachemodes="network=writeback"
> in /etc/nova/nova.conf still effective? What if I use a different ceph.conf
> for specific OpenStack services?
>
> Best regards,
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Jason
