> -----Original Message-----
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: 11 August 2015 19:04
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
> glance
> 
> On 15:06 Aug 11, Kuvaja, Erno wrote:
> > > -----Original Message-----
> > > From: Jay Pipes [mailto:jaypi...@gmail.com]
> 
> <snip>
> 
> > > Having the image cache local to the compute nodes themselves gives
> > > the best performance overall, and with glance_store, means that
> > > glance-api isn't needed at all, and Glance can become just a
> > > metadata repository, which would be awesome, IMHO.
> >
> > Do you have any figures to back this up at scale? We've heard similar
> > claims for quite a while, and as soon as people actually start looking
> > into how the environments behave, they quite quickly turn back. As
> > you're not the first one, I'd like to make the same request as to
> > everyone before: show your data to back this claim up! Until then it is
> > just what you say it is, an opinion. ;)
> 
> The claims I make with Cinder doing caching on its own versus just using
> Glance, measured with Rally and an 8G image:
> 
> Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
> Creating/deleting 50 volumes w/o Cinder image cache: 3952 seconds
> 
> http://thing.ee/x/cache_results/
> 
> Thanks to Patrick East for pulling these results together.
> 
> Keep in mind, this is using a block storage backend that is completely
> separate from the OpenStack nodes. It's *not* using a local LVM all-in-one
> OpenStack contraption. This is important because even if you have Glance
> caching enabled and there was no cache miss, you still have to dd the bits
> to the block device, which still goes over the network. Unless Glance is
> going to cache on the storage array itself, forget about it.
> 
> Glance should be focusing on other issues, rather than trying to make
> copying image bits over the network and dd'ing to a block device faster.
> 
> --
> Mike Perez
> 
Thanks Mike,

So without the Cinder cache your times averaged roughly the 150+ second mark,
and the first couple of volumes with the cache took roughly 170+ seconds. What
the data does not tell us: was Cinder pulling the images directly from the
Glance backend rather than through Glance in either of these cases?
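
For context, the knobs I have in mind there, assuming I remember the option
names right, look something like this; a sketch, not a recommended config:

    # glance-api.conf
    [DEFAULT]
    # expose the image's backend location in its direct_url field
    show_image_direct_url = True

    # cinder.conf
    [DEFAULT]
    # let cinder fetch images straight from an exposed direct_url
    allowed_direct_url_schemes = file

With those unset, as far as I know every byte goes through glance-api; with
them set, Cinder, or a driver that can clone such as the RBD one, can bypass
glance-api and hit the backend directly, which is exactly the difference I am
asking about.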

Somehow you need to seed those caches, and that seeding time/mechanism is where
the debate seems to be. Can you afford to keep every image in cache so that
they are all local? And if you need to pull an image to seed your cache, how
much do you benefit from your 100 Cinder nodes pulling it directly from backend
X versus having Glance caching/sitting in between? How does the block storage
backend handle 100 concurrent reads from different clients when you are seeding
between different arrays? Scale starts to matter here, because it makes a lot
of difference to the backend whether it's a couple of Cinder or Nova nodes
requesting the image or hundreds of them. Lots of backends tend not to like
such loads, or we outperform them because we don't have to fight other
consumers of that backend for bandwidth.
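
To put a rough number on that seeding cost, here is a back-of-envelope sketch;
the 8G image matches Mike's test, while the node count and link speed are
assumptions of mine, not measurements:

    # back-of-envelope seeding cost; only the image size comes from the
    # rally runs above, everything else is an assumed figure
    image_gb = 8       # image being seeded
    nodes = 100        # cinder/nova nodes pulling it at the same time
    link_gbps = 10     # assumed network link out of the backend, Gb/s

    total_gb = image_gb * nodes             # 800 GB leaves the backend either way
    transfer_s = total_gb * 8 / link_gbps   # ~640 s of raw transfer, before any
                                            # penalty for 100 concurrent readers
    print(transfer_s)

Those 800 GB have to come from somewhere, whether that is Glance, a Glance
cache or the array itself; the question is which of those is best placed to
absorb the concurrency.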

That dd part we gladly leave to you; the network transfer takes what it takes,
and we will still happily be handing the bits over at the other end so that you
have something to dd. That is our business, and we do it pretty well.

- Erno
