Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-19 Thread Eduard Matei
Hi Adam,

Disclaimer: I work for a company interested in providing solutions based on
OpenStack, but this email should not be considered marketing/promotional.

Regarding your second question, "Using Swift as a back-end for Cinder": we
already have a solution for this. One part of it is a Cinder driver
(already merged); the other part is our custom middle layer between Swift
and FUSE (partially available as open source, for free).
This solution lets you use your Swift cluster as shared storage for your
Nova compute nodes (via the Cinder volume driver), so all nodes can see all
the volumes.
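
Purely to illustrate how a backend like this plugs in (the driver path and
the mount-point option below are placeholders, not our actual names), the
wiring in cinder.conf follows the standard multi-backend pattern:

    [DEFAULT]
    enabled_backends = swiftfuse

    [swiftfuse]
    volume_backend_name = swiftfuse
    volume_driver = cinder.volume.drivers.example.SwiftFuseDriver
    # Mount point of the Swift-backed FUSE filesystem; since every
    # compute node mounts the same filesystem, every node sees every
    # volume file.
    swiftfuse_mount_point = /var/lib/cinder/swiftfuse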

If you would like more details you can contact me at this email address.

*Eduard*


Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-19 Thread Philipp Marek
> So others have/will chime in here... one thing I think is kinda missing
> in the statement above is the "single host" part; avoiding that is
> actually the whole point of Ceph and the other vendor-driven clustered
> storage technologies out there. There's a ton to choose from at this
> point, open source as well as proprietary, and a lot of them are really,
> really good. This is also very much what DRBD aims to solve for you.
> You're not tying data access to a single host/node; that's kinda the
> whole point.
The current status of the DRBD driver: you can have redundant (replicated)
storage in Cinder, but the connection to Nova is still done via iSCSI.
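
(For anyone who wants to try it: selecting the driver is the usual
cinder.conf backend stanza. The driver module path below is from memory
and may differ between releases, so treat it as an assumption:)

    [DEFAULT]
    enabled_backends = drbd

    [drbd]
    volume_backend_name = drbd
    # Path as merged upstream, to the best of my recollection:
    volume_driver = cinder.volume.drivers.drbdmanagedrv.DrbdManageDriver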

> Granted, in the case of DRBD we've still got a ways to go, and something
> we haven't even scratched the surface on much is virtual/shared IPs for
> targets. But we're getting there, albeit slowly (there are folks who are
> doing this already but haven't contributed their work back upstream). So
> in that case, yes, we still have a shortcoming: if the node that's acting
> as your target server goes down, you're kinda hosed.
The work in progress is to have the Nova nodes use DRBD as the transport
protocol to the storage nodes, too; that would implicitly be a
multi-connection setup.

The Nova side
https://review.openstack.org/#/c/149244/
got delayed to the L (Liberty) release, sadly, and so the Cinder side
https://review.openstack.org/#/c/156212/
is on hold, too.

(We've got GitHub repositories where I try to keep these branches
up to date for people who want to test, BTW.)


Of course, if the hypervisor crashes, you'll have to restart the VMs (or 
create new ones).


If you've got any questions, please don't hesitate to ask me (or drbd-user, 
if you prefer that).



Regards,

Phil


-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Billy Olsen
Specifically to the point of Swift backend for Cinder...

From my understanding, Swift was never intended to provide block-device
abstractions the way that Ceph does. That's not to say that it couldn't,
but it doesn't today.

I wonder if you might be targeting the wrong audience by going to the
Cinder community for Swift-backed volume support in Cinder. Since Cinder
is not in the data path, it cannot provide the block-level abstractions
necessary for Swift objects to be treated as block devices.

If you're really interested in this, you might want to reach out to the
Swift community to see if there is interest in adding block support. Once
some block-device abstraction is available for Swift, a driver can be
written for Cinder that exposes it.
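
To make that last step concrete: such a driver would subclass Cinder's
standard volume driver interface. The sketch below is hypothetical (there
is no Swift block layer today, so the "swiftblock" client and its calls
are placeholders), but the overridden methods are the real Cinder driver
entry points:

    # Hypothetical sketch: assumes a future Swift block layer exposed
    # through a placeholder client library called "swiftblock".
    from cinder.volume import driver

    class SwiftBlockDriver(driver.VolumeDriver):
        """Cinder driver skeleton over an assumed Swift block layer."""

        def create_volume(self, volume):
            # Ask the (assumed) Swift block layer to allocate space.
            self._client().create_device(volume['name'], volume['size'])

        def delete_volume(self, volume):
            self._client().delete_device(volume['name'])

        def initialize_connection(self, volume, connector):
            # Hand Nova the attach information; note that the driver,
            # not Cinder itself, defines the data path.
            export = self._client().export(volume['name'], connector)
            return {'driver_volume_type': 'iscsi', 'data': export}

        def _client(self):
            import swiftblock  # placeholder for the assumed block layer
            return swiftblock.Client(self.configuration)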

- Billy


On Wed, Mar 18, 2015 at 4:43 PM, John Griffith wrote:

> On Wed, Mar 18, 2015 at 12:25 PM, Adam Lawson  wrote:
>
>> The aim is cloud storage that isn't affected by a host failure; major
>> players who deploy hyper-scaling clouds architect them to prevent that
>> from happening. To me that's cloud 101. A physical machine goes down,
>> data disappears, the VMs using it fail, and folks scratch their heads
>> and ask: this was in the cloud, right? That's the indication of a
>> service failure, not a feature.
>>
>
> Yeah, the idea of an auto-evacuate is def nice, and I know there's
> progress there, just maybe not as far along as some would like. I'm far
> from a domain expert there though, so I can't say much, other than I keep
> beating the drum that that doesn't require shared storage.
>
> Also, I would argue that, depending on who you ask, cloud 101 actually
> says: "The instance puked; auto-spin up another one and get on with it."
> I'm certainly not arguing your points, just noting there are multiple
> views on this.
>
>
>
>>
>> I'm just a very big proponent of cloud architecture that provides a
>> seamless abstraction between the service and the hardware. Ceph and DRBD
>> are decent enough, but tying data access to a single host by design is a
>> mistake IMHO, so I'm asking why we do things the way we do and whether
>> that's the way it's always going to be.
>>
>
> So others have/will chime in here... one thing I think is kinda missing
> in the statement above is the "single host" part; avoiding that is
> actually the whole point of Ceph and the other vendor-driven clustered
> storage technologies out there. There's a ton to choose from at this
> point, open source as well as proprietary, and a lot of them are really,
> really good. This is also very much what DRBD aims to solve for you.
> You're not tying data access to a single host/node; that's kinda the
> whole point.
>
> Granted, in the case of DRBD we've still got a ways to go, and something
> we haven't even scratched the surface on much is virtual/shared IPs for
> targets. But we're getting there, albeit slowly (there are folks who are
> doing this already but haven't contributed their work back upstream). So
> in that case, yes, we still have a shortcoming: if the node that's acting
> as your target server goes down, you're kinda hosed.
>
>
>>
>> Of course this bumps into the question of whether all apps hosted in the
>> cloud should be cloud-aware or whether the cloud should have some
>> tolerance for legacy apps that are not written that way.
>>
>
> I've always felt "it depends". I think you should be able to do both,
> honestly (and IMHO you currently can), but if you want to take full
> advantage of everything that's offered, in an OpenStack context at least,
> the best way to do that is to design and build with failure and dynamic
> provisioning in mind.
>
>
>>
>>
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
> Just my 2 cents, hope it's helpful.
>
> John
>
>
>>
>> On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas wrote:
>>
>>> I'm not sure of any particular benefit to trying to run Cinder volumes
>>> over Swift, and I'm a little confused by the aim. You'd do better to
>>> use something closer to purpose-designed for the job if you want
>>> software fault-tolerant block storage; Ceph and DRBD are the two
>>> open-source options I know of.
>>>
>>> On 18 March 2015 at 19:40, Adam Lawson  wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> Got some questions about whether certain use cases have been addressed
>>>> and, if so, where things stand. A few things I find particularly
>>>> interesting:
>>>>
>>>>    - Automatic Nova evacuation for VMs using shared storage
>>>>    - Using Swift as a back-end for Cinder
>>>>
>>>> I know we discussed Nova evacuate last year, with some dialog leading
>>>> into the Paris Operator Summit, and there were valid unknowns around
>>>> what would be required to constitute a host being "down", by what
>>>> logic that would be calculated, and what would be required to initiate
>>>> the move and
>

Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread John Griffith
On Wed, Mar 18, 2015 at 12:25 PM, Adam Lawson  wrote:

> The aim is cloud storage that isn't affected by a host failure; major
> players who deploy hyper-scaling clouds architect them to prevent that
> from happening. To me that's cloud 101. A physical machine goes down,
> data disappears, the VMs using it fail, and folks scratch their heads and
> ask: this was in the cloud, right? That's the indication of a service
> failure, not a feature.
>

Yeah, the idea of an auto-evacuate is def nice, and I know there's progress
there, just maybe not as far along as some would like. I'm far from a
domain expert there though, so I can't say much, other than I keep beating
the drum that that doesn't require shared storage.
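
(For reference, the manual building block already exists: Nova's evacuate
API rebuilds an instance on another host once its original host is down.
A minimal python-novaclient sketch; the credentials, instance name, and
target host are placeholders, and the classic auth signature is assumed:)

    # Manually evacuating an instance from a failed host; an
    # auto-evacuate service would drive this same API. All names and
    # credentials here are placeholders.
    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    server = nova.servers.find(name='my-instance')

    # on_shared_storage=True preserves the instance disk; with local
    # storage the instance is rebuilt from its image instead.
    nova.servers.evacuate(server, host='compute-2', on_shared_storage=True)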

Also, I would argue that, depending on who you ask, cloud 101 actually
says: "The instance puked; auto-spin up another one and get on with it."
I'm certainly not arguing your points, just noting there are multiple views
on this.


>
> I'm just a very big proponent of cloud architecture that provides a
> seamless abstraction between the service and the hardware. Ceph and DRBD
> are decent enough, but tying data access to a single host by design is a
> mistake IMHO, so I'm asking why we do things the way we do and whether
> that's the way it's always going to be.
>

So others have/will chime in here... one thing I think is kinda missing in
the statement above is the "single host" part; avoiding that is actually
the whole point of Ceph and the other vendor-driven clustered storage
technologies out there. There's a ton to choose from at this point, open
source as well as proprietary, and a lot of them are really, really good.
This is also very much what DRBD aims to solve for you. You're not tying
data access to a single host/node; that's kinda the whole point.

Granted, in the case of DRBD we've still got a ways to go, and something we
haven't even scratched the surface on much is virtual/shared IPs for
targets. But we're getting there, albeit slowly (there are folks who are
doing this already but haven't contributed their work back upstream). So in
that case, yes, we still have a shortcoming: if the node that's acting as
your target server goes down, you're kinda hosed.


>
> Of course this bumps into the question of whether all apps hosted in the
> cloud should be cloud-aware or whether the cloud should have some
> tolerance for legacy apps that are not written that way.
>

I've always felt "it depends". I think you should be able to do both,
honestly (and IMHO you currently can), but if you want to take full
advantage of everything that's offered, in an OpenStack context at least,
the best way to do that is to design and build with failure and dynamic
provisioning in mind.


>
>
>
> *Adam Lawson*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
>
Just my 2 cents, hope it's helpful.

John


>
> On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas wrote:
>
>> I'm not sure of any particular benefit to trying to run Cinder volumes
>> over Swift, and I'm a little confused by the aim. You'd do better to use
>> something closer to purpose-designed for the job if you want software
>> fault-tolerant block storage; Ceph and DRBD are the two open-source
>> options I know of.
>>
>> On 18 March 2015 at 19:40, Adam Lawson  wrote:
>>
>>> Hi everyone,
>>>
>>> Got some questions about whether certain use cases have been addressed
>>> and, if so, where things stand. A few things I find particularly
>>> interesting:
>>>
>>>    - Automatic Nova evacuation for VMs using shared storage
>>>    - Using Swift as a back-end for Cinder
>>>
>>> I know we discussed Nova evacuate last year, with some dialog leading
>>> into the Paris Operator Summit, and there were valid unknowns around
>>> what would be required to constitute a host being "down", by what logic
>>> that would be calculated, what would be required to initiate the move,
>>> and which project should own the code to make it happen. Just wondering
>>> where we are with that.
>>>
>>> On a separate note, Ceph has the ability to act as a back-end for
>>> Cinder; Swift does not. Perhaps there are performance trade-offs to
>>> consider, but I'm a big fan of service-plane abstraction, and what I'm
>>> not a fan of is tying data to physical hardware. The fact that this
>>> continues to be the case with Cinder troubles me.
>>>
>>> So, a question: are these being addressed somewhere, in some context?
>>> I admittedly don't want to distract momentum on the Nova/Cinder teams,
>>> but I am curious whether these exist in (or conflict with) our current
>>> infrastructure blueprints?
>>>
>>> Mahalo,
>>> Adam
>>>
>>> *Adam Lawson*
>>>
>>> AQORN, Inc.
>>> 427 North Tatnall Street
>>> Ste. 58461
>>> Wilmington, Delaware 19801-2230
>>> Toll-free: (844) 4-AQORN-NOW ext. 101
>>> International: +1 302-387-4660
>>> Direct: +1 916-246-2072
>>>
>>>
>>>

Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Clint Byrum
Excerpts from Adam Lawson's message of 2015-03-18 11:25:37 -0700:
> The aim is cloud storage that isn't affected by a host failure; major
> players who deploy hyper-scaling clouds architect them to prevent that
> from happening. To me that's cloud 101. A physical machine goes down,
> data disappears, the VMs using it fail, and folks scratch their heads and
> ask: this was in the cloud, right? That's the indication of a service
> failure, not a feature.
>

Ceph provides this for Cinder installations that use it.

> I'm just a very big proponent of cloud architecture that provides a
> seamless abstraction between the service and the hardware. Ceph and DRBD
> are decent enough, but tying data access to a single host by design is a
> mistake IMHO, so I'm asking why we do things the way we do and whether
> that's the way it's always going to be.
> 

Why do you say Ceph is merely "decent"? It solves all the issues you're
talking about, and does so on commodity hardware.

> Of course this bumps into the question of whether all apps hosted in the
> cloud should be cloud-aware or whether the cloud should have some
> tolerance for legacy apps that are not written that way.
> 

Using volumes is more expensive than using specialized scale-out storage,
aka "cloud aware" storage. But finding and migrating to that scale-out
storage takes time and has a cost too, so volumes have their place and
always will.

So, can you be more clear, what is it that you're suggesting isn't
available now?



Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Adam Lawson
The aim is cloud storage that isn't affected by a host failure; major
players who deploy hyper-scaling clouds architect them to prevent that from
happening. To me that's cloud 101. A physical machine goes down, data
disappears, the VMs using it fail, and folks scratch their heads and ask:
this was in the cloud, right? That's the indication of a service failure,
not a feature.

I'm just a very big proponent of cloud architecture that provides a
seamless abstraction between the service and the hardware. Ceph and DRBD
are decent enough, but tying data access to a single host by design is a
mistake IMHO, so I'm asking why we do things the way we do and whether
that's the way it's always going to be.

Of course this bumps into the question of whether all apps hosted in the
cloud should be cloud-aware or whether the cloud should have some tolerance
for legacy apps that are not written that way.



*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas wrote:

> I'm not sure of any particular benefit to trying to run Cinder volumes
> over Swift, and I'm a little confused by the aim. You'd do better to use
> something closer to purpose-designed for the job if you want software
> fault-tolerant block storage; Ceph and DRBD are the two open-source
> options I know of.
>
> On 18 March 2015 at 19:40, Adam Lawson  wrote:
>
>> Hi everyone,
>>
>> Got some questions about whether certain use cases have been addressed
>> and, if so, where things stand. A few things I find particularly
>> interesting:
>>
>>    - Automatic Nova evacuation for VMs using shared storage
>>    - Using Swift as a back-end for Cinder
>>
>> I know we discussed Nova evacuate last year, with some dialog leading
>> into the Paris Operator Summit, and there were valid unknowns around
>> what would be required to constitute a host being "down", by what logic
>> that would be calculated, what would be required to initiate the move,
>> and which project should own the code to make it happen. Just wondering
>> where we are with that.
>>
>> On a separate note, Ceph has the ability to act as a back-end for
>> Cinder; Swift does not. Perhaps there are performance trade-offs to
>> consider, but I'm a big fan of service-plane abstraction, and what I'm
>> not a fan of is tying data to physical hardware. The fact that this
>> continues to be the case with Cinder troubles me.
>>
>> So, a question: are these being addressed somewhere, in some context? I
>> admittedly don't want to distract momentum on the Nova/Cinder teams, but
>> I am curious whether these exist in (or conflict with) our current
>> infrastructure blueprints?
>>
>> Mahalo,
>> Adam
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
>>
>>
>
>
> --
> Duncan Thomas
>
>
>


Re: [openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Duncan Thomas
I'm not sure of any particular benefit to trying to run Cinder volumes over
Swift, and I'm a little confused by the aim. You'd do better to use
something closer to purpose-designed for the job if you want software
fault-tolerant block storage; Ceph and DRBD are the two open-source options
I know of.

On 18 March 2015 at 19:40, Adam Lawson  wrote:

> Hi everyone,
>
> Got some questions about whether certain use cases have been addressed
> and, if so, where things stand. A few things I find particularly
> interesting:
>
>    - Automatic Nova evacuation for VMs using shared storage
>    - Using Swift as a back-end for Cinder
>
> I know we discussed Nova evacuate last year, with some dialog leading
> into the Paris Operator Summit, and there were valid unknowns around what
> would be required to constitute a host being "down", by what logic that
> would be calculated, what would be required to initiate the move, and
> which project should own the code to make it happen. Just wondering where
> we are with that.
>
> On a separate note, Ceph has the ability to act as a back-end for Cinder;
> Swift does not. Perhaps there are performance trade-offs to consider, but
> I'm a big fan of service-plane abstraction, and what I'm not a fan of is
> tying data to physical hardware. The fact that this continues to be the
> case with Cinder troubles me.
>
> So, a question: are these being addressed somewhere, in some context? I
> admittedly don't want to distract momentum on the Nova/Cinder teams, but
> I am curious whether these exist in (or conflict with) our current
> infrastructure blueprints?
>
> Mahalo,
> Adam
>
> *Adam Lawson*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
>
>
>


-- 
Duncan Thomas


[openstack-dev] [Nova][Cinder] Questions re progress

2015-03-18 Thread Adam Lawson
Hi everyone,

Got some questions about whether certain use cases have been addressed and,
if so, where things stand. A few things I find particularly interesting:

   - Automatic Nova evacuation for VMs using shared storage
   - Using Swift as a back-end for Cinder

I know we discussed Nova evacuate last year, with some dialog leading into
the Paris Operator Summit, and there were valid unknowns around what would
be required to constitute a host being "down", by what logic that would be
calculated, what would be required to initiate the move, and which project
should own the code to make it happen. Just wondering where we are with
that.

On a separate note, Ceph has the ability to act as a back-end for Cinder;
Swift does not. Perhaps there are performance trade-offs to consider, but
I'm a big fan of service-plane abstraction, and what I'm not a fan of is
tying data to physical hardware. The fact that this continues to be the
case with Cinder troubles me.

So, a question: are these being addressed somewhere, in some context? I
admittedly don't want to distract momentum on the Nova/Cinder teams, but I
am curious whether these exist in (or conflict with) our current
infrastructure blueprints?

Mahalo,
Adam

*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072