Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-20 Thread Duncan Thomas
It is quite possible that the requirement for glance to own images can be
achieved by having a glance tenant in cinder, and using the clone and
volume-transfer functionality in cinder to get copies to the right place.

I know there are some attempts to move away from the single-glance-tenant
model for swift usage, but doing anything else in cinder will require
significantly more thought.
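A toy sketch of that sequence (clone the user's volume, then transfer the clone to a dedicated glance tenant so glance owns the copy) might look like the following. This is a plain-Python simulation of the flow, not the real cinder API; the class and tenant names are invented for illustration.

```python
# Simulates: clone, then transfer ownership to a "glance" tenant, so the
# image copy survives deletion of the original volume.  Not real cinder.
import itertools

_ids = itertools.count(1)

class Volume:
    def __init__(self, tenant, data):
        self.id = f"vol-{next(_ids)}"
        self.tenant = tenant
        self.data = data

class FakeCinder:
    def __init__(self):
        self.volumes = {}

    def create(self, tenant, data):
        vol = Volume(tenant, data)
        self.volumes[vol.id] = vol
        return vol

    def clone(self, src_id):
        # stands in for "create volume from source volume": an
        # independent copy that does not share the source's lifetime
        src = self.volumes[src_id]
        return self.create(src.tenant, src.data)

    def transfer(self, vol_id, new_tenant):
        # stands in for cinder's transfer-create / transfer-accept pair
        self.volumes[vol_id].tenant = new_tenant

    def delete(self, vol_id):
        del self.volumes[vol_id]

cinder = FakeCinder()
user_vol = cinder.create("user-tenant", b"image bits")
image_vol = cinder.clone(user_vol.id)
cinder.transfer(image_vol.id, "glance-tenant")
cinder.delete(user_vol.id)               # the user deletes their volume...
survivor = cinder.volumes[image_vol.id]  # ...glance's copy is unaffected
```

The point of the sketch is only the ordering: clone first, transfer second, so the glance-owned copy never depends on the user's volume.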

On 19 November 2014 23:04, Alex Meade mr.alex.me...@gmail.com wrote:

 Hey Henry/Folks,

 I think it could make sense for Glance to store the volume UUID; the idea
 is that no matter where an image is stored, it should be *owned* by Glance
 and not deleted out from under it. But that is more of a single-tenant vs
 multi-tenant cinder store question.

 It makes sense for Cinder to at least abstract all of the block storage
 needs. Glance and any other service should reuse Cinder's ability to talk
 to certain backends. It would be wasted effort to reimplement Cinder
 drivers as Glance stores. I do agree with Duncan that a great way to solve
 these issues is a third-party transfer service, which others and I in the
 Glance community have discussed at numerous summits (since San Diego).

 -Alex



 On Wed, Nov 19, 2014 at 3:40 AM, henry hly henry4...@gmail.com wrote:

 Hi Flavio,

 Thanks for your information about the Cinder store. Yet I have a little
 concern about the Cinder backend: suppose cinder and glance both use Ceph
 as the store. Then, if cinder can do an instant copy to glance by ceph
 clone (maybe not now, but some time later), what information would be
 stored in glance? Obviously the volume UUID is not a good choice, because
 after the volume is deleted the image can't be referenced. The best choice
 is that the cloned ceph object URI also be stored in the glance location,
 letting both glance and cinder see the backend store details.

 However, although it really makes sense for a Ceph-like all-in-one store,
 I'm not sure if the iscsi backend can be used the same way.

 On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com
 wrote:
  On 19/11/14 15:21 +0800, henry hly wrote:
 
  In the previous BP [1], support for an iscsi backend was introduced into
  glance. However, it was abandoned because of the Cinder backend
  replacement.

  The reason is that all storage backend details should be hidden by
  cinder, not exposed to other projects. However, with more and more
  interest in converged storage like Ceph, it's necessary to expose
  the storage backend to glance as well as cinder.

  An example is that when transferring bits between a volume and an image,
  we can utilize advanced storage offload capabilities like linked clone
  to do a very fast instant copy. Maybe we need more general glance
  backend location support, not only for iscsi.
 
 
 
  [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
 
 
  Hey Henry,
 
  This blueprint has been superseded by one proposing a Cinder store
  for Glance. The Cinder store is, unfortunately, in a sorry state.
  Short story: it's not fully implemented.

  I truly think Glance is not the place where you'd have an iscsi store;
  that's Cinder's field, and the best way to achieve what you want is by
  having a fully implemented Cinder store that doesn't rely on Cinder's
  API but has access to the volumes.
 
  Unfortunately, this is not possible now and I don't think it'll be
  possible until L (or even M?).
 
  FWIW, I think the use case you've mentioned is useful and it's
  something we have in our TODO list.
 
  Cheers,
  Flavio
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 








-- 
Duncan Thomas


Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Flavio Percoco

On 19/11/14 15:21 +0800, henry hly wrote:

In the previous BP [1], support for an iscsi backend was introduced into
glance. However, it was abandoned because of the Cinder backend
replacement.

The reason is that all storage backend details should be hidden by
cinder, not exposed to other projects. However, with more and more
interest in converged storage like Ceph, it's necessary to expose
the storage backend to glance as well as cinder.

An example is that when transferring bits between a volume and an image,
we can utilize advanced storage offload capabilities like linked clone
to do a very fast instant copy. Maybe we need more general glance
backend location support, not only for iscsi.



[1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


Hey Henry,

This blueprint has been superseded by one proposing a Cinder store
for Glance. The Cinder store is, unfortunately, in a sorry state.
Short story: it's not fully implemented.

I truly think Glance is not the place where you'd have an iscsi store;
that's Cinder's field, and the best way to achieve what you want is by
having a fully implemented Cinder store that doesn't rely on Cinder's
API but has access to the volumes.

Unfortunately, this is not possible now and I don't think it'll be
possible until L (or even M?).

FWIW, I think the use case you've mentioned is useful and it's
something we have in our TODO list.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Duncan Thomas
I think that a stand-alone (client of cinder) rich data streaming
service (http put/get with offset support, which can be used for
conventional glance plus volume upload/download directly), with rich
data-source semantics exposed so that it can be used in an optimal way
by/for nova, need not wait on the cinder roadmap to be realised, and is
ultimately the right way to progress this.

Certain features may need to wait for cinder features (e.g. read-only
multi-attach is not available yet), but the basic framework could be
written right now, I think.
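As a minimal illustration of the offset primitive such a streaming service would build on, here is a self-contained sketch of an HTTP GET honouring the standard Range header. The server, handler, and test blob are invented for the example; a real service would of course stream from cinder volumes rather than an in-memory buffer.

```python
# Offset-capable HTTP GET: the server honours "Range: bytes=start-end"
# and replies 206 Partial Content with a Content-Range header.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOB = bytes(range(256)) * 4  # 1024 bytes of test data

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        start, end = 0, len(BLOB) - 1
        rng = self.headers.get("Range")
        if rng and rng.startswith("bytes="):
            lo, _, hi = rng[6:].partition("-")
            start = int(lo)
            if hi:
                end = int(hi)
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range",
                             f"bytes {start}-{end}/{len(BLOB)}")
        else:
            self.send_response(200)
        body = BLOB[start:end + 1]
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch(url, start, end):
    # client side: ask for an explicit byte range
    req = urllib.request.Request(url,
                                 headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
chunk = fetch(f"http://127.0.0.1:{port}/blob", 10, 19)
server.shutdown()
```

With offsets in place, resumable uploads/downloads and parallel transfer of a volume image fall out naturally.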

On 19 November 2014 10:00, Flavio Percoco fla...@redhat.com wrote:

 On 19/11/14 15:21 +0800, henry hly wrote:

 In the previous BP [1], support for an iscsi backend was introduced into
 glance. However, it was abandoned because of the Cinder backend
 replacement.

 The reason is that all storage backend details should be hidden by
 cinder, not exposed to other projects. However, with more and more
 interest in converged storage like Ceph, it's necessary to expose
 the storage backend to glance as well as cinder.

 An example is that when transferring bits between a volume and an image,
 we can utilize advanced storage offload capabilities like linked clone
 to do a very fast instant copy. Maybe we need more general glance
 backend location support, not only for iscsi.



 [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


 Hey Henry,

 This blueprint has been superseded by one proposing a Cinder store
 for Glance. The Cinder store is, unfortunately, in a sorry state.
 Short story: it's not fully implemented.

 I truly think Glance is not the place where you'd have an iscsi store;
 that's Cinder's field, and the best way to achieve what you want is by
 having a fully implemented Cinder store that doesn't rely on Cinder's
 API but has access to the volumes.

 Unfortunately, this is not possible now and I don't think it'll be
 possible until L (or even M?).

 FWIW, I think the use case you've mentioned is useful and it's
 something we have in our TODO list.

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco





-- 
Duncan Thomas


Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread henry hly
Hi Flavio,

Thanks for your information about the Cinder store. Yet I have a little
concern about the Cinder backend: suppose cinder and glance both use Ceph
as the store. Then, if cinder can do an instant copy to glance by ceph clone
(maybe not now, but some time later), what information would be stored
in glance? Obviously the volume UUID is not a good choice, because after
the volume is deleted the image can't be referenced. The best choice is
that the cloned ceph object URI also be stored in the glance location,
letting both glance and cinder see the backend store details.

However, although it really makes sense for a Ceph-like all-in-one store,
I'm not sure if the iscsi backend can be used the same way.
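To make the two location shapes concrete: glance records a location URI per image, and the rbd and cinder stores conventionally use rbd://<fsid>/<pool>/<image>/<snapshot> and cinder://<volume-uuid> respectively. The parsing helper below is illustrative only (not glance code), and the fsid/uuid values are made up.

```python
# Sketch of the two glance location shapes under discussion.  A cinder://
# location dangles if the volume is deleted; an rbd:// location names the
# ceph object directly, so it stays valid after the source volume goes away.
from urllib.parse import urlparse

def parse_location(uri):
    """Return (scheme, details) for an image location URI."""
    parsed = urlparse(uri)
    if parsed.scheme == "cinder":
        # cinder://<volume-uuid>
        return ("cinder", {"volume_id": parsed.netloc})
    if parsed.scheme == "rbd":
        # rbd://<fsid>/<pool>/<image>/<snapshot>
        pool, image, snap = parsed.path.lstrip("/").split("/")
        return ("rbd", {"fsid": parsed.netloc, "pool": pool,
                        "image": image, "snap": snap})
    raise ValueError(f"unsupported location scheme: {parsed.scheme}")

rbd_loc = parse_location("rbd://1234abcd/images/img-0001/snap")
cinder_loc = parse_location("cinder://3f2b8c1e-volume-uuid")
```

Henry's point, in these terms: only the rbd-style location exposes enough backend detail for the clone to outlive the volume, which is exactly the abstraction cinder normally tries to hide.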

On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com wrote:
 On 19/11/14 15:21 +0800, henry hly wrote:

 In the previous BP [1], support for an iscsi backend was introduced into
 glance. However, it was abandoned because of the Cinder backend
 replacement.

 The reason is that all storage backend details should be hidden by
 cinder, not exposed to other projects. However, with more and more
 interest in converged storage like Ceph, it's necessary to expose
 the storage backend to glance as well as cinder.

 An example is that when transferring bits between a volume and an image,
 we can utilize advanced storage offload capabilities like linked clone
 to do a very fast instant copy. Maybe we need more general glance
 backend location support, not only for iscsi.



 [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


 Hey Henry,

 This blueprint has been superseded by one proposing a Cinder store
 for Glance. The Cinder store is, unfortunately, in a sorry state.
 Short story: it's not fully implemented.

 I truly think Glance is not the place where you'd have an iscsi store;
 that's Cinder's field, and the best way to achieve what you want is by
 having a fully implemented Cinder store that doesn't rely on Cinder's
 API but has access to the volumes.

 Unfortunately, this is not possible now and I don't think it'll be
 possible until L (or even M?).

 FWIW, I think the use case you've mentioned is useful and it's
 something we have in our TODO list.

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco





Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Alex Meade
Hey Henry/Folks,

I think it could make sense for Glance to store the volume UUID; the idea
is that no matter where an image is stored, it should be *owned* by Glance
and not deleted out from under it. But that is more of a single-tenant vs
multi-tenant cinder store question.

It makes sense for Cinder to at least abstract all of the block storage
needs. Glance and any other service should reuse Cinder's ability to talk to
certain backends. It would be wasted effort to reimplement Cinder drivers
as Glance stores. I do agree with Duncan that a great way to solve these
issues is a third-party transfer service, which others and I in the Glance
community have discussed at numerous summits (since San Diego).

-Alex



On Wed, Nov 19, 2014 at 3:40 AM, henry hly henry4...@gmail.com wrote:

 Hi Flavio,

 Thanks for your information about the Cinder store. Yet I have a little
 concern about the Cinder backend: suppose cinder and glance both use Ceph
 as the store. Then, if cinder can do an instant copy to glance by ceph
 clone (maybe not now, but some time later), what information would be
 stored in glance? Obviously the volume UUID is not a good choice, because
 after the volume is deleted the image can't be referenced. The best choice
 is that the cloned ceph object URI also be stored in the glance location,
 letting both glance and cinder see the backend store details.

 However, although it really makes sense for a Ceph-like all-in-one store,
 I'm not sure if the iscsi backend can be used the same way.

 On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com wrote:
  On 19/11/14 15:21 +0800, henry hly wrote:
 
  In the previous BP [1], support for an iscsi backend was introduced into
  glance. However, it was abandoned because of the Cinder backend
  replacement.

  The reason is that all storage backend details should be hidden by
  cinder, not exposed to other projects. However, with more and more
  interest in converged storage like Ceph, it's necessary to expose
  the storage backend to glance as well as cinder.

  An example is that when transferring bits between a volume and an image,
  we can utilize advanced storage offload capabilities like linked clone
  to do a very fast instant copy. Maybe we need more general glance
  backend location support, not only for iscsi.
 
 
 
  [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
 
 
  Hey Henry,
 
  This blueprint has been superseded by one proposing a Cinder store
  for Glance. The Cinder store is, unfortunately, in a sorry state.
  Short story: it's not fully implemented.

  I truly think Glance is not the place where you'd have an iscsi store;
  that's Cinder's field, and the best way to achieve what you want is by
  having a fully implemented Cinder store that doesn't rely on Cinder's
  API but has access to the volumes.
 
  Unfortunately, this is not possible now and I don't think it'll be
  possible until L (or even M?).
 
  FWIW, I think the use case you've mentioned is useful and it's
  something we have in our TODO list.
 
  Cheers,
  Flavio
 
  --
  @flaper87
  Flavio Percoco
 
 

