Re: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources

2018-07-06 Thread Duncan Thomas
You can run many c-bak processes on one node, and requests will be fed to
them round-robin, so you should see fairly linear speedup in the many-backups
case until you run out of CPUs.

Parallelising a single backup was something I attempted, but python makes
it extremely difficult, so there's no useful implementation I'm aware of.
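
For the many-backups case it is also worth knowing about the backup_workers
option, which fans requests out across several processes from a single
service. A minimal sketch, assuming your release has the option (check your
version's configuration reference before relying on it):

    # cinder.conf on the node running cinder-backup
    [DEFAULT]
    # Number of backup processes to launch; concurrent backups get
    # spread across them, roughly one CPU core each.
    backup_workers = 8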

On Fri, 6 Jul 2018, 3:18 pm Amy Marrich,  wrote:

> Hey,
>
> Forwarding to the Dev list as you may get a better response from there.
>
> Thanks,
>
> Amy (spotz)
>
> On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron <
> keynes_...@wistron.com> wrote:
>
>> Hi
>>
>>
>>
>> When running “cinder backup-create”,
>>
>> we found that the “cinder-backup” process uses 100% of one CPU core on an
>> OpenStack Controller node.
>>
>> This not only causes poor backup performance, it also makes
>> openstack-cinder-backup unstable,
>>
>> especially when we run several backups at the same time.
>>
>>
>>
>> The Controller Node has 40 CPU cores.
>>
>> Can we assign more CPU resources to cinder-backup?
>>


Re: [openstack-dev] Create a Volume type using OpenStack

2018-05-08 Thread Duncan Thomas
If you're using the cinder CLI (aka python-cinderclient), then running
with --debug will show you the REST calls used.

I would assume the unified openstack CLI client has a similar mode.
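
For illustration, the CLI trick and the underlying Block Storage v3 calls
look roughly like this (endpoint, project ID and token are assumed; see the
api-ref for the exact request bodies):

    # Show the REST traffic behind a CLI call
    cinder --debug type-list
    openstack --debug volume type list

    # List volume types
    curl -H "X-Auth-Token: $TOKEN" $CINDER_URL/v3/$PROJECT_ID/types

    # Create a volume type, then set its extra specs
    curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"volume_type": {"name": "mypool"}}' \
      $CINDER_URL/v3/$PROJECT_ID/types
    curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"extra_specs": {"volume_backend_name": "rbd-mypool"}}' \
      $CINDER_URL/v3/$PROJECT_ID/types/$TYPE_ID/extra_specs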

On 8 May 2018 at 12:13, Hari Prasanth Loganathan
<hariprasant...@msystechnologies.com> wrote:
> Hi Team,
>
> 1) I am able to list all the projects using the OpenStack REST API,
>
>   http://{IP_ADDRESS}:5000/v3/auth/projects/
>
> But as per the documentation of the /v3/ APIs in OpenStack
> (https://developer.openstack.org/api-ref/block-storage/v3/index.html#volumes-volumes),
>
> I need APIs to:
> i) list all the volume types in OpenStack
> ii) create volume types in OpenStack
>
> I am able to create these via the CLI; I need to perform the same using the API:
> Create Volume Type
> openstack volume type create ${poolName}
> cinder type-key "${poolName}" set storagetype:pool=${poolName}
> volume_backend_name=rbd-${poolName}
>
>
> Please help me with this.
>
>
> Thanks,
> Hari
>



-- 
Duncan Thomas



Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Duncan Thomas
Hopefully this flow means we can rebuild the root filesystem from a
snapshot/backup too? It seems rather artificially limiting to only do
restore-from-image. I'd expect restore-from-snap to be a more common
use case, personally.

On 9 April 2018 at 09:51, Gorka Eguileor <gegui...@redhat.com> wrote:
> On 06/04, Matt Riedemann wrote:
>> On 4/6/2018 5:09 AM, Matthew Booth wrote:
>> > I think you're talking at cross purposes here: this won't require a
>> > swap volume. Apart from anything else, swap volume only works on an
>> > attached volume, and as previously discussed Nova will detach and
>> > re-attach.
>> >
>> > Gorka, the Nova api Matt is referring to is called volume update
>> > externally. It's the operation required for live migrating an attached
>> > volume between backends. It's called swap volume internally in Nova.
>>
>> Yeah I was hoping we were just having a misunderstanding of what 'swap
>> volume' in nova is, which is the blockRebase for an already attached volume
>> to the guest, called from cinder during a volume retype or migration.
>>
>> As for the re-image thing, nova would be detaching the volume from the guest
>> prior to calling the new cinder re-image API, and then re-attach to the
>> guest afterward - similar to how shelve and unshelve work, and for that
>> matter how rebuild works today with non-root volumes.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>
> Hi,
>
> Thanks for the clarification.  When I was talking about "swapping" I was
> referring to the fact that Nova will have to not only detach the volume
> locally using OS-Brick, but it will also need to use new connection
> information to do the attach after the volume has been re-imaged.
>
> As I see it, the process would look something like this:
>
> - Nova detaches volume using OS-Brick
> - Nova calls Cinder re-image passing the node's info (like we do when
>   attaching a new volume)
> - Cinder would:
>   - Ensure only that node is connected to the volume
>   - Terminate the connection to the original volume
>   - If we can do optimized volume creation:
>     - If it is an encrypted volume, create a copy of the encryption key
>       in Barbican, or copy the ID field from the DB and ensure we don't
>       delete the Barbican key when the original volume is deleted
>     - Create the new volume from the image
>     - Swap DB fields to preserve the UUID
>     - Delete the original volume
>   - If we cannot do optimized volume creation:
>     - Initialize+Attach the volume to the Cinder node
>     - dd the new image into the volume
>     - Detach+Terminate the volume
>   - Initialize the connection for the new volume to the Nova node
>   - Return the connection information for the volume
> - Nova attaches the volume with OS-Brick using the returned connection
>   information.
>
> So I agree, it's not a blockRebase operation, just a change in the
> volume that is used.
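
Compressed into pseudocode, the optimized path above would look roughly
like this (every name here is invented for the sketch; none of these are
the real Nova/Cinder interfaces):

    # Illustrative pseudocode only.
    def reimage_root_volume(instance, volume, image):
        connector = connector_properties_of(instance.host)
        brick_detach(instance, volume)          # Nova detaches locally
        # Cinder re-images behind the scenes, preserving the volume UUID,
        # and terminates the old connection.
        cinder_reimage(volume.id, image.id, connector)
        info = cinder_initialize_connection(volume.id, connector)
        brick_attach(instance, volume, info)    # Nova re-attaches using the
                                                # new connection information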
>
> Regards,
> Gorka.
>



-- 
Duncan Thomas



Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Duncan Thomas
On 16 March 2018 at 16:39, Dan Smith <d...@danplanet.com> wrote:
>> Can you be more specific about what is limiting you when you use
>> volume-backed instances?
>
> Presumably it's because you're taking a trip over iscsi instead of using
> the native attachment mechanism for the technology that you're using? If
> so, that's a valid argument, but it's hard to see the tradeoff working
> in favor of adding all these drivers to nova as well.
>
> If cinder doesn't support backend-specific connectors, maybe that's
> something we could work on?

Cinder supports a range of connectors, and there has never been any
opposition in principle to supporting more.

I suggest looking at the RBD support in cinder as an example of a
strongly supported native attachment method.
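
For a feel of what a native connector looks like from the consumer side,
a minimal sketch using os-brick (the connection properties dict is only a
placeholder shape; in practice it comes back from Cinder's
initialize_connection):

    from os_brick.initiator import connector

    # Placeholder connection properties -- real values come from Cinder.
    connection_properties = {'name': 'volumes/volume-1234',
                             'hosts': ['192.0.2.10'],
                             'ports': ['6789']}

    conn = connector.InitiatorConnector.factory('RBD', root_helper=None)
    device = conn.connect_volume(connection_properties)
    print(device['path'])   # local handle to hand to the hypervisor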



-- 
Duncan Thomas



Re: [openstack-dev] Pros and Cons of face-to-face meetings

2018-03-08 Thread Duncan Thomas
On 8 March 2018 at 18:16, Jay S Bryant <jungleb...@gmail.com> wrote:
> Cinder has been doing this for many years and it has worked relatively well.
> It requires a good remote speaker and it also requires the people in the
> room to be sensitive to the needs of those who are remote, i.e. planning
> topics at a time appropriate for the remote attendees, ensuring everyone
> speaks up, etc. If everyone works to be inclusive with remote
> participants, however, it works well.

Having been both in the room and on the phone, I'd have to say it was
better than nothing but a long way from 'working well'.
There's definitely a huge imbalance between being in the room and able
to follow everything, and being on the phone, where you have to ask
for people to repeat things (if you even know something was said to
ask for the repeat), speak up, stop talking over other people, etc. It
always feels like a very second-class position to me.

-- 
Duncan Thomas



Re: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ...

2018-03-02 Thread Duncan Thomas
Met with Jay, looking at alternatives

On 2 Mar 2018 6:26 pm, "Duncan Thomas" <duncan.tho...@gmail.com> wrote:

> I'm in the pub now, and they are closing down
>
> On 2 Mar 2018 5:48 pm, "Jay S. Bryant" <jungleb...@gmail.com> wrote:
>
>> Ivan,
>>
>> I sent another note but will also respond in this thread.
>>
>> Yes, they will serve us tonight.  It is a somewhat limited menu but I
>> stopped to look at it and it still looked good.
>>
>> Sidewalks on the way to the restaurant were not in too bad of shape.
>>
>> Jay
>>
>> On 3/2/2018 10:48 AM, Ivan Kolodyazhny wrote:
>>
>> Hi Jay,
>>
>> Will Fagans serve us tonight?
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>> On Thu, Mar 1, 2018 at 3:00 PM, Jay S Bryant <jungleb...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On 3/1/2018 6:50 AM, Sean McGinnis wrote:
>>>
>>>> On Feb 28, 2018, at 16:58, Jay S Bryant <jungleb...@gmail.com> wrote:
>>>>>
>>>>> Team,
>>>>>
>>>>> Just a reminder that we will be having our team photo at 9 am tomorrow
>>>>> before the Cinder/Nova cross project session.  Please be at the
>>>>> registration desk before 9 to be in the photo.
>>>>>
>>>>> We will then have the Cross Project session in the Nova room as it
>>>>> sounds like it is somewhat larger.  I will have sound clips in hand to 
>>>>> make
>>>>> sure things don't get too serious.
>>>>>
>>>>> Finally, an update on dinner for tomorrow night.  I have moved dinner
>>>>> to a closer venue:
>>>>>
>>>>> Fagan's Bar and Restaurant:  146 Drumcondra Rd Lower, Drumcondra,
>>>>> Dublin 9
>>>>>
>>>>> I have reservations for 7:30 pm.  It isn't too difficult a walk from
>>>>> Croke Park (even in a blizzard) and it is a great pub.
>>>>>
>>>>> Thanks for a great day today!
>>>>>
>>>>> See you all tomorrow!  Let's make it a great one!  ;-)
>>>>> Jay
>>>>>
>>>>> Any plan now that there is a 4pm curfew?
>>>>
>>>> Dinner has been rescheduled for Friday night 3/2 or 2/3 depending on
>>> your country of origin.  6:30 at Fagans.
>>>
>>> I will update the etherpad.
>>>
>>> Jay
>>>
>>>
>>
>>
>>


Re: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ...

2018-03-02 Thread Duncan Thomas
I'm in the pub now, and they are closing down

On 2 Mar 2018 5:48 pm, "Jay S. Bryant"  wrote:

> Ivan,
>
> I sent another note but will also respond in this thread.
>
> Yes, they will serve us tonight.  It is a somewhat limited menu but I
> stopped to look at it and it still looked good.
>
> Sidewalks on the way to the restaurant were not in too bad of shape.
>
> Jay
>
> On 3/2/2018 10:48 AM, Ivan Kolodyazhny wrote:
>
> Hi Jay,
>
> Will Fagans serve us tonight?
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Thu, Mar 1, 2018 at 3:00 PM, Jay S Bryant  wrote:
>
>>
>>
>> On 3/1/2018 6:50 AM, Sean McGinnis wrote:
>>
>>> On Feb 28, 2018, at 16:58, Jay S Bryant  wrote:

 Team,

 Just a reminder that we will be having our team photo at 9 am tomorrow
 before the Cinder/Nova cross project session.  Please be at the
 registration desk before 9 to be in the photo.

 We will then have the Cross Project session in the Nova room as it
 sounds like it is somewhat larger.  I will have sound clips in hand to make
 sure things don't get too serious.

 Finally, an update on dinner for tomorrow night.  I have moved dinner
 to a closer venue:

 Fagan's Bar and Restaurant:  146 Drumcondra Rd Lower, Drumcondra,
 Dublin 9

 I have reservations for 7:30 pm.  It isn't too difficult a walk from
 Croke Park (even in a blizzard) and it is a great pub.

 Thanks for a great day today!

 See you all tomorrow!  Let's make it a great one!  ;-)
 Jay

 Any plan now that there is a 4pm curfew?
>>>
>>> Dinner has been rescheduled for Friday night 3/2 or 2/3 depending on
>> your country of origin.  6:30 at Fagans.
>>
>> I will update the etherpad.
>>
>> Jay
>>
>>


Re: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url

2018-02-02 Thread Duncan Thomas
So I guess my question here is: why is being RESTful good? Sure, it's (very,
very loosely) a standard, but what are the actual advantages? Standards
come and go; what we want most of all is a good-quality, easy-to-use API.

I'm not saying that going RESTful is wrong, but I don't see much discussion
about what the advantages are, only about how close we are to implementing
it.

On 1 Feb 2018 10:55 pm, "Ed Leafe"  wrote:

> On Jan 18, 2018, at 4:07 AM, TommyLike Hu  wrote:
>
> > Recently we found an issue related to our OpenStack action APIs. We
> usually expose our OpenStack APIs by registering them to our API Gateway
> (for instance Kong [1]), but it becomes very difficult when it comes to
> action APIs. We cannot register and control them separately because they
> all share the same request URL, which is used as the identity in the
> gateway service, to say nothing of rate limiting and other advanced
> gateway features. Take a look at the basic resources in OpenStack.
>
> We discussed your email at today’s API-SIG meeting [0]. This is an area
> that is always contentious in the RESTful world. Actions, tasks, and state
> changes are not actual resources, and in a pure REST design they should
> never be part of the URL. Instead, you should POST to the actual resource,
> with the desired action in the body. So in your example:
>
> URL: /volumes/{volume_id}/action
> BODY: {'extend': {}}
>
> the preferred way of achieving this is:
>
> URL: POST /volumes/{volume_id}
> BODY: {'action': 'extend', 'params': {}}
>
> The handler for the POST action should inspect the body, and call the
> appropriate method.
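
A minimal sketch of such a handler (names are illustrative, not Cinder's
actual code):

    def extend_volume(volume_id, new_size=None):
        print('extending %s to %s' % (volume_id, new_size))

    ACTIONS = {'extend': extend_volume}

    def post_volume(volume_id, body):
        # Dispatch on the body rather than encoding the verb in the URL.
        try:
            handler = ACTIONS[body['action']]
        except KeyError:
            raise ValueError('unknown or missing action')
        return handler(volume_id, **body.get('params', {}))

    post_volume('vol-1', {'action': 'extend', 'params': {'new_size': 2}})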
>
> Having said that, we realize that a lot of OpenStack services have adopted
> the more RPC-like approach that you’ve outlined. So while we strongly
> recommend a standard RESTful approach, if you have already released an
> RPC-like API, our advice is:
>
> a) avoid having every possible verb in the URL. In other words, don’t use:
>   /volumes/{volume_id}/mount
>   /volumes/{volume_id}/umount
>   /volumes/{volume_id}/extend
> This moves you further into RPC-land, and will make updating your API to a
> more RESTful design more difficult.
>
> b) choose a standard term for the item in the URL. In other words, always
> use ‘action’ or ‘task’ or whatever else you have adopted. Don’t mix
> terminology. Then pass the action to perform, along with any parameters in
> the body. This will make it easier to transition to a RESTful design by
> later updating the handlers to first inspect the BODY instead of relying
> upon the URL to determine what action to perform.
>
> You might also want to contact the Kong developers to see if there is a
> way to work with a RESTful API design.
>
> -- Ed Leafe
>
> [0] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-01-16.02.log.html#l-28
>
>
>
>


Re: [openstack-dev] [ceilometer][cinder]about cinder volume usage

2017-12-04 Thread Duncan Thomas
Either way step one is to propose the code for a better replacement. I
doubt you'll get much opposition to that.

No need to propose removing the old code immediately; it can live in
parallel for as long as needed.

On 4 December 2017 at 15:14, Jaze Lee <jaze...@gmail.com> wrote:
> 2017-12-04 19:36 GMT+08:00 Duncan Thomas <duncan.tho...@gmail.com>:
>> Why remove something that works and people are using?
>
> Yes, removing it will make noise.
>
>>
>> If the pollster can be set up to do the job, then great, but there's no
>> rush to remove the existing infrastructure; this is one case where
>> duplication is very, very cheap.
>
> Yes, it is too cheap to refuse. But I think it should not exist there.
>
>>
>> On 4 December 2017 at 10:30, Jaze Lee <jaze...@gmail.com> wrote:
>>> Hello,
>>>
>>> Right now we can get volume usage from the central pollster, so I think
>>> maybe we can drop cinder volume usage.
>>> Cinder volume usage is not like nova usage, which lives in nova-compute:
>>> it is a command that has to be put in cron, which causes more work in
>>> devops. If the ceilometer pollster can handle it, I think maybe we
>>> should remove it?
>>>
>>>
>>>
>>>
>>>
>>> --
>>> 谦谦君子
>>>
>>
>>
>>
>> --
>> Duncan Thomas
>>
>
>
>
> --
> 谦谦君子
>



-- 
Duncan Thomas



Re: [openstack-dev] [ceilometer][cinder]about cinder volume usage

2017-12-04 Thread Duncan Thomas
Why remove something that works and people are using?

If the pollster can be set up to do the job, then great, but there's no
rush to remove the existing infrastructure; this is one case where
duplication is very, very cheap.
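
For reference, the existing mechanism discussed below is the
cinder-volume-usage-audit command driven from cron; a sketch (flags vary by
release, so check --help on your version):

    # crontab entry: emit volume usage notifications once an hour
    0 * * * * /usr/bin/cinder-volume-usage-audit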

On 4 December 2017 at 10:30, Jaze Lee <jaze...@gmail.com> wrote:
> Hello,
>
> Right now we can get volume usage from the central pollster, so I think
> maybe we can drop cinder volume usage.
> Cinder volume usage is not like nova usage, which lives in nova-compute:
> it is a command that has to be put in cron, which causes more work in
> devops. If the ceilometer pollster can handle it, I think maybe we
> should remove it?
>
>
>
>
>
> --
> 谦谦君子
>



-- 
Duncan Thomas



Re: [openstack-dev] [security] [api] Script injection issue

2017-11-19 Thread Duncan Thomas
But the filtering requirements are going to be different for different
front ends, and we can't hope to catch them all. Trying to do any filtering
at the cinder/nova API level just provides a false sense of security -
horizon and other UIs *have* to get their escaping right. If you put
incomplete filtering at the cinder level, it becomes more likely that
consumers will assume they don't need to bother.

On 20 Nov 2017 2:18 am, "TommyLike Hu"  wrote:

> Our API service is open to any client or any API consumer. We cannot
> guarantee every frontend has the ability to protect itself from script
> injection. Although the specific cases would differ, the security issue is
> the same. If we have to keep asking them to add this support repeatedly,
> can we just define a BASIC limitation for the input?
>
> Tristan Cacqueray wrote on Fri, 17 November 2017 at 11:55 PM:
>
>> On November 17, 2017 1:56 pm, Jeremy Stanley wrote:
>> > On 2017-11-17 12:47:34 + (+), Luke Hinds wrote:
>> >> This will need the VMT's attention, so please raise as an issue on
>> >> launchpad and we can tag it as for the vmt members as a possible OSSA.
>> > [...]
>> >
>> > Ugh, looks like someone split this thread, and I already replied to
>> > the original thread. In short, I don't think it's safe to assume we
>> > know what's going to be safe for different frontends and consuming
>> > applications, so trying to play whack-a-mole with various unsafe
>> > sequences at the API side puts the responsibility for safe filtering
>> > in the wrong place and can lead to lax measures in the software
>> > which should actually be taking on that responsibility.
>> >
>> > Of course, I'm just one voice. Others on the VMT certainly might
>> > disagree with my opinion on this.
>>
>> We had similar issues [0][1] in the past, where we already drew the line
>> that it is the client's responsibility to filter the API response.
>>
>> Thus I agree with Jeremy: perhaps it is not ideal, but at least it
>> doesn't give a false sense of security if^Wwhen the server-side
>> filtering lets unpredicted malicious content through.
>>
>> -Tristan
>>
>> [0] https://launchpad.net/bugs/1486565
>> [1] https://launchpad.net/bugs/1649248
>


Re: [openstack-dev] [glance] Does glance_store swift driver support range requests ?

2017-11-15 Thread Duncan Thomas
On 15 November 2017 at 11:15, Matt Keenan  wrote:
> On 13/11/17 22:51, Nikhil Komawar wrote:
>
> I think it will be a rather hard problem to solve, as the swift store can
> be configured to store objects in different configurations. I guess the
> next question would be: what is your underlying problem -- multiple build
> requests, or is this a retry for a single download?
>
> If the image is in the image cache and you are hitting the glance node with
> the cached image (which is quite possible for tiny deployments), this
> feature will be relatively easier.
>
>
> So the specific image stored in glance is a Unified Archive
> (https://docs.oracle.com/cd/E36784_01/html/E38524/gmrlo.html).
>
> During a UAR deployment the archive UUID is required and it is contained in
> the first 33 characters of the UAR image, thus a range request for this
> portion is required when initiating the deployment. Then the rest of the
> archive is extracted and deployed.

Given the range you want is always at the beginning, is a range
request any different from doing a full GET request and dropping the
connection when you've got the bytes you want?
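
A sketch of both approaches with python-requests (the image URL is an
assumed glance v2 download endpoint; whether Range is honoured depends on
the server):

    import requests

    url = 'http://glance.example.com/v2/images/IMAGE_ID/file'

    # Full GET: read the first 33 bytes, then drop the connection
    r = requests.get(url, stream=True)
    uar_uuid = next(r.iter_content(33))
    r.close()

    # An actual range request for the same bytes
    r = requests.get(url, headers={'Range': 'bytes=0-32'})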



Re: [openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-11 Thread Duncan Thomas
I'm not sure there's general agreement about removing the fixed-key
manager code in future. It serves several purposes, testing being the most
significant one, though it also covers some people's use case from a
security PoV, where something better might not be worth the complexity
trade-off. If this work is a backdoor effort to remove the functionality,
rather than purely a code cleanup effort, then it should definitely be
clearly presented as such.

On 11 Oct 2017 9:50 pm, "Alan Bishop"  wrote:

> On Wed, Oct 11, 2017 at 1:17 PM, Dave McCowan (dmccowan)
>  wrote:
> > Hi Alan--
> > Since a fixed-key implementation is not secure, I would prefer not
> > adding it to Castellan.  Our desire is that Castellan can be a
> best-practice
> > project to encourage operators to use key management securely.
> > I'm all for consolidating code and providing good migration paths
> from
> > ConfKeyManager to Castellan.
> > Can we create a new oslo project to facilitate this?  Something like
> > oslo.fixed_key_manager.
> > I would rather keep a fixed_key implementation out of Castellan if
> > possible.
>
> Hi Dave,
>
> While I totally take your point about keeping the "deficient" (I'm being
> charitable) ConfKeyManager code out of Castellan, I view it as a short
> term tactical move. Everyone is looking forward to deprecating the code.
> The key (no pun intended) to getting there is providing a migration path
> for users (there are significant ones) that have existing deployments
> that use the fixed-key.
>
> Because of the circumstances, I feel there would be resistance to the
> idea of creating an entirely new oslo project that: a) consists of code
> that everyone knows to be deficient, and b) will be deprecated soon.
>
> I have another motive for temporarily moving the code into Castellan,
> and it pertains to providing a migration path to Barbican. With everything
> consolidated in Castellan, a wrapper class could provide a seamless way
> of handling KeyManager.get() requests for the all-zeros fixed-key ID,
> even when Barbican is the key manager. This would allow users to switch
> to Barbican, and still have any get() requests for the legacy fixed-key
> be resolved by the ConfKeyManager.
>
> All of this could be implemented wholly within Castellan, and be totally
> transparent to the user, Nova, Cinder, and the Barbican implementation
> in barbican_key_manager.py.
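
A minimal sketch of that wrapper idea (illustrative only, not Castellan's
actual classes):

    FIXED_KEY_ID = '00000000-0000-0000-0000-000000000000'

    class MigrationKeyManager(object):
        # Route legacy fixed-key IDs to the ConfKeyManager,
        # everything else to Barbican.
        def __init__(self, barbican_mgr, conf_mgr):
            self.barbican = barbican_mgr
            self.conf = conf_mgr

        def get(self, context, managed_object_id):
            if managed_object_id == FIXED_KEY_ID:
                return self.conf.get(context, managed_object_id)
            return self.barbican.get(context, managed_object_id)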
>
> As a final note, we could add all sorts of warnings to any code added
> to Castellan, perhaps even name the file insecure_key_manager.py ;-)
>
> Alan
>
>
> > --Dave
> >
> > There is an ongoing effort to deprecate the ConfKeyManager, but care
> > must be taken when migrating existing ConfKeyManager deployments to
> > Barbican. The ConfKeyManager's fixed_key secret can be added to Barbican,
> > but the process of switching from one key manager to another will need
> > to be done smoothly to ensure encrypted volumes continue to function
> > during the migration period.
> >
> > One thing that will help the migration process is consolidating the
> > two ConfKeyManager implementations (one in Cinder and one in Nova).
> > The two are functionally identical, as dictated by the need to derive
> > the exact same secret from the fixed_key. While it may seem counter-
> > intuitive, adding a ConfKeyManager implementation to Castellan will
> > facilitate the process of deprecating them in Cinder and Nova.
> >
> > To that end, I identified a series of small steps to get us there:
> >
> > 1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
> > so they are identical (right now their help texts are slightly
> > different). This step avoids triggering a DuplicateOptError exception
> > in the next step.
> >
> > 2) Add a ConfKeyManager implementation to Castellan. This essentially
> > involves copying in one of the existing implementations (either Cinder's
> > or Nova's).
> >
> > 3) Replace Cinder's and Nova's implementations with references to the
> > one in Castellan. This can be done in a way that retains compatibility
> > with the key_manager "backend" (was "api_class") config options
> > currently used by Cinder and Nova. The code in
> > cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
> > will collapse down to this:
> >
> >   from castellan.key_manager import conf_key_manager
> >
> >   class ConfKeyManager(conf_key_manager.ConfKeyManager):
> >   pass
> >
> > Having a common ConfKeyManager implementation will make it much
> > easier to support migrating things to Barbican, and that's an important
> > step toward the goal of deprecating the ConfKeyManager entirely.
> >
> > Please let me know your thoughts, as I plan to begin proposing patches.
> >
> > Regards,
> >
> > Alan Bishop
> >
> >

Re: [openstack-dev] [Cinder] Requirements for re-adding Gluster support

2017-07-26 Thread Duncan Thomas
I believe the previous CI ran on the OpenStack CI infrastructure, but
was constantly broken and unmaintained (I'm working from memory here).
The Cinder team have never cared /where/ the CI is run, and if infra
is happy to host the Gluster CI jobs then great. What is needed is
somebody to maintain them long term, which I believe is what was
missing, causing the removal.

On 26 July 2017 at 13:30, Jeremy Stanley <fu...@yuggoth.org> wrote:
> On 2017-07-26 12:56:55 +0200 (+0200), Niels de Vos wrote:
> [...]
>> My current guess is that adding a 3rd party CI [3] for Gluster is
>> the only missing piece?
> [...]
>
> I thought GlusterFS was free/libre software. If so, won't the Cinder
> team allow upstream testing in OpenStack's CI system for free
> backends/drivers? Maintaining a third-party CI system for that seems
> like overkill, but I'm unfamiliar with Cinder's particular driver
> testing policies.
> --
> Jeremy Stanley
>



-- 
Duncan Thomas



Re: [openstack-dev] [Cinder] Requirements for re-adding Gluster support

2017-07-26 Thread Duncan Thomas
I think the substantial part was running (and maintaining) the CI.
Given the fragility of devstack and tempest, and their dependencies,
this is not, unfortunately, a fire-and-forget operation but rather
something that requires a fair time investment. Certainly I don't know
of any reason other than CI why support can't be fixed up and
re-added.

On 26 July 2017 at 11:56, Niels de Vos <nde...@redhat.com> wrote:
> Hello,
>
> In one of the last Cinder releases support for Gluster has been dropped.
> The commit message [1] mentions that the support has been marked
> deprecated during Newton.
>
> It seems that there are quite a few users in the Gluster community who
> run OpenStack with Gluster storage. These users did not take action when
> Newton came out, but voiced their disappointment with more recent
> releases.
>
> As one of the Gluster Maintainers that is watching over the integration
> of Gluster in other projects, I would like to know more about the tasks
> that it takes to get Gluster support back into Cinder. With those
> details, the Gluster Community can work with interested OpenStack users
> to add required CI jobs, and possibly other things.
>
> At the moment, the only knowledge I have on why Gluster support was
> removed from Cinder is in a messy email conversation [2]. Pointers to
> further clarifications and requirements that Gluster did not meet are
> welcome.
>
> My current guess is that adding a 3rd party CI [3] for Gluster is the
> only missing piece? If that is the case, I expect that we could add one
> or more Jenkins jobs to one of our Gluster Community CI's. We run tests
> in our own Jenkins instance [4], but also use the CentOS CI [5] for some
> heavier testing.
>
> Any guidance, suggestions and opinions are most welcome!
>
> Many thanks,
> Niels
>
>
> 1. 
> https://github.com/openstack/cinder/commit/16e93ccd4f3a6d62ed9d277f03b64bccc63ae060
> 2. http://lists.gluster.org/pipermail/integration/2017-May/24.html
> 3. https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
> 4. https://build.gluster.org/
> 5. https://ci.centos.org/view/Gluster/
>



-- 
Duncan Thomas



Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-26 Thread Duncan Thomas
On 25 May 2017 12:33 pm, "Lee Yarwood" <lyarw...@redhat.com> wrote:

On 25-05-17 11:38:44, Duncan Thomas wrote:
> On 25 May 2017 at 11:00, Lee Yarwood <lyarw...@redhat.com> wrote:
> > This has also reminded me that the plain (dm-crypt) format really needs
> > to be deprecated this cycle. I posted to the dev and ops ML [2] last
> > year about this but received no feedback. Assuming there are no last
> > minute objections I'm going to move forward with deprecating this format
> > in os-brick this cycle.
>
> What is the reasoning for this? There are plenty of people using it, and
> you're going to break them going forward if you remove it.

I didn't receive any feedback indicating that we had any users of plain
when I initially posted to the ML. That said there obviously can be
users out there and my intention isn't to pull support for this format
immediately without any migration path to LUKS etc.
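
For anyone planning ahead, one possible offline migration looks roughly
like this (a sketch only: device names, cipher parameters and key handling
are illustrative and must match your deployment; test before trusting it):

    # Map the old plain volume and a freshly LUKS-formatted one, then copy
    cryptsetup open --type plain --cipher aes-xts-plain64 \
        --key-file old.key /dev/vg/oldvol old_clear
    cryptsetup luksFormat --key-file new.key /dev/vg/newvol
    cryptsetup open --key-file new.key /dev/vg/newvol new_clear
    dd if=/dev/mapper/old_clear of=/dev/mapper/new_clear bs=1M
    cryptsetup close old_clear
    cryptsetup close new_clear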


OK, after a few emails: of the users I knew about, one is happy with LUKS
and the others are no longer running OpenStack. Apologies for the mis-steer.


Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-25 Thread Duncan Thomas
On 25 May 2017 at 11:00, Lee Yarwood <lyarw...@redhat.com> wrote:
> This has also reminded me that the plain (dm-crypt) format really needs
> to be deprecated this cycle. I posted to the dev and ops ML [2] last
> year about this but received no feedback. Assuming there are no last
> minute objections I'm going to move forward with deprecating this format
> in os-brick this cycle.

What is the reasoning for this? There are plenty of people using it, and
you're going to break them going forward if you remove it.

-- 
Duncan Thomas



Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Duncan Thomas
On 23 May 2017 4:51 am, "Matt Riedemann"  wrote:



Is this really something we are going to have to deny at least once per
release? My God how is it that this is the #1 thing everyone for all time
has always wanted Nova to do for them?


Is it entirely unreasonable to turn the question around and ask why, given
it is such a commonly requested feature, the Nova team are so resistant to
it?
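
For context, the workaround users have to script today is the two-step
dance (a sketch; IDs, sizes and flavors are placeholders):

    cinder create --volume-type fast --image-id $IMAGE_ID --name root-vol 10
    nova boot --boot-volume $VOLUME_ID --flavor m1.small myserver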


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-19 Thread Duncan Thomas
On 19 May 2017 at 12:24, Sean Dague <s...@dague.net> wrote:

> I do get the concerns of extra logic in Nova, but the decision to break
> up the working compute with network and storage problem space across 3
> services and APIs doesn't mean we shouldn't still make it easy to
> express some pretty basic and common intents.

Given that we've similar needs for retries and race avoidance in and
between glance, nova, cinder and neutron, and a need to orchestrate
between at least these four (arguably other infrastructure projects
too, I'm not trying to get into specifics), maybe the answer is to put
that logic in a new service that talks to those four and provides a
nice, simple API, while allowing the cinder, nova, etc. APIs to remove
things like internal retries?




-- 
Duncan Thomas



Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-18 Thread Duncan Thomas
On 18 May 2017 at 22:26, Rochelle Grober <rochelle.gro...@huawei.com> wrote:
> If you're going to use --distance, then you should have specific values
> (standard definitions) rather than operator-defined ones.
> And for that matter, is there something better than distance? Colocated,
> maybe?
>
> colocated={local, rack, row, module, dc}
> Keep the standard definitions that are already in use in/across data centers

There's at least 'chassis' that some people would want to add (blade-
based stuff), and I'm not sure what standard 'module' is... The trouble
with standard definitions is that your standards rarely match the next
guy's standards, and since some of these are entirely irrelevant to
many storage topologies, you're likely going to need an API to
discover what is relevant to a specific system anyway.
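
Something like the following, purely hypothetical, discovery response is
what that implies (not an existing API; the level names are illustrative):

    {"locality-levels": ["local", "chassis", "rack", "row", "dc"]}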

-- 
Duncan Thomas



Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-09 Thread Duncan Thomas
On 5 May 2017 at 23:45, Chris Friesen  wrote:
> On 05/05/2017 02:04 PM, John Griffith wrote:
>> I'd love some detail on this.  What falls over?

> It's been a while since I looked at it, but the main issue was that with LIO
> as the iSCSI server there is no automatic traffic shaping/QoS between
> guests, or between guests and the host.  (There's no iSCSI server process to
> assign to a cgroup, for example.)
>
> The throttling in IOPS/Bps is better than nothing, but doesn't really help
> when you don't necessarily know what your total IOPS/bandwidth actually is
> or how many volumes could get created.
>
> So you have one or more guests that are hammering on the disk as fast as
> they can, combined with disks on the cinder server that maybe aren't as fast
> as they should be, and it ended up slowing down all the other guests.  And
> if the host is using the same physical disks for things like glance
> downloads or image conversion, then a badly-behaved guest can cause
> performance issues on the host as well due to IO congestion.  And if they
> fill up the host caches they can even affect writes to other unrelated
> devices.
>
> So yes, it wasn't the ideal hardware for the purpose, and there are some
> tuning knobs, but in an ideal world we'd be able to reserve some
> amount/percentage of bandwidth/IOPs for the host and have the rest shared
> equally between all active iSCSI sessions (or unequally via a share
> allocation if desired).

So that's a complaint that it can't do magic with underspecced,
overloaded hardware, plus a request for fair-share I/O or network
scheduling? The latter is maybe something cinder could look at, though
we're limited by the available technologies - array vendors tend to
keep such things proprietary. Note that it is trivial to overload many
SANs too, on both the data path and the control path.
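
For what exists today, front-end throttling can at least be tied to volume
types via QoS specs; a sketch (which keys are honoured depends on the
consumer and driver):

    cinder qos-create limited consumer="front-end" \
        read_iops_sec=500 write_iops_sec=500
    cinder qos-associate $QOS_SPEC_ID $VOLUME_TYPE_ID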



Re: [openstack-dev] [cinder] Is Cinder still maintained?

2017-04-27 Thread Duncan Thomas
Ok, that new commit message made me laugh.

On 27 April 2017 at 08:42, Julien Danjou <jul...@danjou.info> wrote:
> Hi,
>
> I've posted a refactoring patch that simplifies tooz (read: remove
> technical debt) usage more than a month ago, and I got 0 review since
> then:
>
>   https://review.openstack.org/#/c/447079
>
> I'm a bit worried to see this zero review on such patches. It seems the
> most recently merged things are all vendor specific. Is the core of
> Cinder still maintained? Is there any other reason for such patches to
> be ignored for so long?
>
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>
>



-- 
Duncan Thomas



Re: [openstack-dev] [Cinder][Manila]share or volume's size unit

2017-04-11 Thread Duncan Thomas
Changing the size to a float creates rounding issues, and means changes
to the DB, the API definition, and every single client and client
library out there, for very little gain.

On 10 April 2017 at 04:41, jun zhong <jun.zhongj...@gmail.com> wrote:
> I agree with you that extend might be one way to solve the problem.
>
> By the way, how about another way: could we import the volume
> size as a float value, such as 2.5G or 3.4G?
>
> Did the community consider this at the beginning?
>
>
> 2017-04-07 20:16 GMT+08:00 Duncan Thomas <duncan.tho...@gmail.com>:
>>
>> Cinder will store the volume as 1G in the database (and quota) even if
>> the volume is only 500M. It will stay as 500M when it is attached
>> though. It's a side effect of importing volumes, but that's usually a
>> pretty uncommon thing to do, so shouldn't affect many people or cause
>> a huge amount of trouble.
>>
>> There are also backends that allocate in units greater than 1G, and so
>> sometimes give you slightly bigger volumes than you asked for. Cinder
>> doesn't go out of its way to support this; again the database and
>> quota will reflect what you asked for, the attached volume will be a
>> slightly different size.
>>
>> In your case, extend might be one way to solve the problem, if you
>> backend supports it. I'm not certain what will happen if you ask
>> cinder to extend to 1G a volume it already thinks is 1G... if it
>> doesn't work, please file a bug.
>>
>> On 7 April 2017 at 09:01, jun zhong <jun.zhongj...@gmail.com> wrote:
>> > Hi guys,
>> >
>> > We know the share's size unit is gigabytes in manila, and the volume's
>> > size unit is also gigabytes in cinder. But there is a problem: the size
>> > is not exact after we migrate a traditional environment to OpenStack.
>> > For example:
>> > 1. There is an original volume (vol_1) with a size of 500MB in the
>> >    traditional environment.
>> > 2. We want to use OpenStack to manage this volume (vol_1).
>> > 3. We can only use a 1GB volume to manage the original volume (vol_1),
>> >    because the cinder volume size cannot be 500MB.
>> > How should we deal with this? Could we set the volume or share's unit
>> > to a float or something else, or add a new MB unit, or just extend the
>> > original volume size?
>> >
>> >
>> > Thanks
>> > jun
>> >
>> >
>> >
>>
>>
>>
>> --
>> Duncan Thomas
>>
>
>
>
>



-- 
Duncan Thomas



Re: [openstack-dev] [Cinder] Tags for volumes

2017-04-10 Thread Duncan Thomas
You can deploy Searchlight
(https://wiki.openstack.org/wiki/Searchlight) and it can handle this
and more complex metadata querying (it is backed by Elasticsearch and
supports all of the expressive power of Elasticsearch queries).
There's a limit to how much query functionality can go into a basic
API service before DoS becomes a serious issue, or you start to
require serious database restructuring.

As an extreme example, I've repeatedly resisted attempts to add regexp
matching to the cinder API, since nobody has come up with any way to
deal with the DoS issues - I suspect the regexp processing in nova is
vulnerable, though I've never spent the time to prove it.
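
For example, case 8 from the list quoted below ('not key1=value1 and
key2=value2') maps naturally onto an Elasticsearch bool query (index and
field names are illustrative):

    {"query": {"bool": {
        "must":     [{"term": {"metadata.key2": "value2"}}],
        "must_not": [{"term": {"metadata.key1": "value1"}}]}}}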

On 10 April 2017 at 10:58, 王玺源 <wangxiyuan1...@gmail.com> wrote:
> We have a user demand that volumes be filterable by various operations
> on metadata (or tags):
> 1. key1=value1
> 2. key1=value1 and key2=value2
> 3. key1=value1 or key1=value2
> 4. key1=value1 or key2=value2
> 5. not key1=value1
> 6. not key1=value1 and not key1=value2
> 7. not key1=value1 and not key2=value2
> 8. not key1=value1 and key2=value2
>
> But AFAIK Cinder now uses metadata as a filter when listing volumes, so it
> only supports 1 and 2.
>
> Are there any suggestions on how Cinder can support these?
>
>
> Thanks
> Wangxiyuan
>
> 2017-04-08 8:49 GMT+08:00 Matt Riedemann <mriede...@gmail.com>:
>>
>> On 3/27/2017 9:59 AM, Duncan Thomas wrote:
>>>
>>> On 27 March 2017 at 14:20, 王玺源 <wangxiyuan1...@gmail.com> wrote:
>>>
>>> I think the reason is quite simple:
>>> 1. Some users don't want to use key/value pairs to tag volumes. They
>>> just need some simple strings.
>>>
>>>
>>> ...and some do. We can hide this in the client and just save tags under
>>> a metadata item called 'tags', with no API changes needed on the cinder
>>> side and backwards compatibility on the client.
>>>
>>>
>>> 2. Metadata values must be shorter than 255 characters. If users don't
>>> need keys, using tags here can save some space.
>>>
>>>
>>> How many tags, and how long, are you considering supporting?
>>>
>>>
>>> 3. Easy for quick searching or filtering. Users don't need to know
>>> which key a value relates to.
>>>
>>>
>>> The client can hide all this, so it is not really a justification
>>>
>>>
>>> 4. For a web app, it should be a basic function [1]
>>>
>>>
>>> Web standards are not really standards. You can find a million things
>>> that apps 'should' do. They're usually contradictory.
>>>
>>>
>>>
>>>
>>> [1] https://en.m.wikipedia.org/wiki/Tag_(metadata)
>>>
>>>
>>> 2017-03-27 19:49 GMT+08:00 Sean McGinnis <sean.mcgin...@gmx.com>:
>>>
>>> On Mon, Mar 27, 2017 at 03:13:59PM +0800, 王玺源 wrote:
>>> > Hi cinder team:
>>> >
>>> > I want to know your thoughts on adding tags for volumes.
>>> >
>>> > Now many resources, like Nova instances, Glance images, Neutron
>>> > networks and so on, all support tagging. And some of our cloud
>>> > customers want this feature in Cinder as well. It's useful for
>>> > auditing and billing for the cloud admin; it can let admins and users
>>> > filter resources by tag, and it can let users categorize resources
>>> > for different usages or just to remark on something.
>>> >
>>> > Actually there was a related spec in Cinder 2 years ago, but
>>> > unfortunately it was not accepted and was abandoned:
>>> > https://review.openstack.org/#/c/99305/
>>> >
>>> > Can we bring it up and revisit it a second time now? What's the
>>> > cinder team's view? Can you give me some advice on whether we can
>>> > do it or not?
>>>
>>> Can you give any reason why the existing metadata mechanism does not or
>>> will not work for them? There was some discussion in that spec explaining
>>> why it was rejected at the time. I don't think anything has changed since
>>> then that would change what was said there.

Re: [openstack-dev] [Cinder][Manila]share or volume's size unit

2017-04-07 Thread Duncan Thomas
Cinder will store the volume as 1G in the database (and quota) even if
the volume is only 500M. It will stay as 500M when it is attached
though. It's a side effect of importing volumes, but that's usually a
pretty uncommon thing to do, so shouldn't affect many people or cause
a huge amount of trouble.

There are also backends that allocate in units greater than 1G, and so
sometimes give you slightly bigger volumes than you asked for. Cinder
doesn't go out of its way to support this; again the database and
quota will reflect what you asked for, the attached volume will be a
slightly different size.

In your case, extend might be one way to solve the problem, if you
backend supports it. I'm not certain what will happen if you ask
cinder to extend to 1G a volume it already thinks is 1G... if it
doesn't work, please file a bug.
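
Putting the two together, importing and then rounding up might look like
this (a sketch; the host and reference format for manage vary by driver):

    # Take over the existing 500M volume, then grow it to the 1G that
    # Cinder has recorded for it
    cinder manage --name vol_1 myhost@backend#pool existing-vol-name
    cinder extend vol_1 1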

On 7 April 2017 at 09:01, jun zhong <jun.zhongj...@gmail.com> wrote:
> Hi guys,
>
> We know the share's size unit is gigabytes in manila, and the volume's size
> unit is also gigabytes in cinder. But there is a problem: the size is not
> exact after we migrate a traditional environment to OpenStack.
> For example:
> 1. There is an original volume (vol_1) with a size of 500MB in the
>    traditional environment.
> 2. We want to use OpenStack to manage this volume (vol_1).
> 3. We can only use a 1GB volume to manage the original volume (vol_1),
>    because the cinder volume size cannot be 500MB.
> How should we deal with this? Could we set the volume or share's unit to a
> float or something else, or add a new MB unit, or just extend the original
> volume size?
>
>
> Thanks
> jun
>
>



-- 
Duncan Thomas



Re: [openstack-dev] [OpenStack-Dev][Cinder][Driver][Nimble] - Need Point of contact

2017-04-04 Thread Duncan Thomas
Their CI contact from the wiki (
https://wiki.openstack.org/wiki/ThirdPartySystems/Nimble_Storage_CI )
is openstack...@nimblestorage.com - that's probably a good start.

On 4 April 2017 at 11:58, nidhi.h...@wipro.com wrote:
> Hi all,
>
>
> I was working on a few bugs related to the Nimble cinder driver,
> and I needed some clarification.
>
> May I know the right point of contact for this?
>
> Who from Nimble should be contacted for their cinder driver bugs?
>
>
> Thanks
>
> Nidhi
>
>
>
>
>



-- 
Duncan Thomas



Re: [openstack-dev] [Cinder] Tags for volumes

2017-03-27 Thread Duncan Thomas
On 27 March 2017 at 14:20, 王玺源 <wangxiyuan1...@gmail.com> wrote:

> I think the reason is quite simple:
> 1. Some users don't want to use key/value pairs to tag volumes. They just
> need some simple strings.
>

...and some do. We can hide this in the client and just save tags under a
metadata item called 'tags', with no API changes needed on the cinder side
and backwards compatibility on the client.
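
A sketch of what that client-side convention could look like with today's
commands (the existing --metadata filter is exact-match only):

    cinder metadata VOLUME set tags=web,prod
    cinder list --metadata tags=web,prod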


> 2. Metadata values must be shorter than 255 characters. If users don't
> need keys, using tags here can save some space.
>

How many tags, and how long, are you considering supporting?


> 3. Easy for quick searching or filtering. Users don't need to know which
> key a value relates to.
>

The client can hide all this, so it is not really a justification


> 4. For a web app, it should be a basic function [1]
>

Web standards are not really standards. You can find a million things that
apps 'should' do. They're usually contradictory.



>
> [1] https://en.m.wikipedia.org/wiki/Tag_(metadata)
>
>
> 2017-03-27 19:49 GMT+08:00 Sean McGinnis <sean.mcgin...@gmx.com>:
>
>> On Mon, Mar 27, 2017 at 03:13:59PM +0800, 王玺源 wrote:
>> > Hi cinder team:
>> >
>> > I want to know your thoughts on adding tags for volumes.
>> >
>> > Now many resources, like Nova instances, Glance images, Neutron
>> > networks and so on, all support tagging. And some of our cloud customers
>> > want this feature in Cinder as well. It's useful for auditing and
>> > billing for the cloud admin; it can let admins and users filter
>> > resources by tag, and it can let users categorize resources for
>> > different usages or just to remark on something.
>> >
>> > Actually there was a related spec in Cinder 2 years ago, but
>> > unfortunately it was not accepted and was abandoned:
>> > https://review.openstack.org/#/c/99305/
>> >
>> > Can we bring it up and revisit it a second time now? What's cinder
>> > team's idea?  Can you give me some advice on whether we can do it or not?
>>
>> Can you give any reason why the existing metadata mechanism does not or
>> will
>> not work for them? There was some discussion in that spec explaining why
>> it
>> was rejected at the time. I don't think anything has changed since then
>> that
>> would change what was said there.
>>
>> >
>> >
>> > Thanks!
>> >
>> > Wangxiyuan
>>
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-09 Thread Duncan Thomas
On 9 Mar 2017 18:22, "Jay Pipes"  wrote:
>
> On 03/09/2017 01:06 PM, Dmitry Tantsur wrote:
>>
>> On 03/09/2017 06:57 PM, Clint Byrum wrote:
>>>
>>> Excerpts from Ben Swartzlander's message of 2017-03-09 11:23:31 -0500:
>>
>> 
>>>
>>>
>>> Combine that with the lower cost and subsidized travel support program
>>> for the PTG, and you should end up with a more complete gathering of
>>> developers at PTG, and a better interface with users and operators. That
>>> is the theory, and I think it's worth trying to fit one's approach into
>>> that, rather than try to keep doing things the old way and expecting
>>> this new system to work.
>>
>>
>> Small correction: it does not seem that the PTG was cheaper for people
>> attending it.
>
>
> It was substantially cheaper for many folks, actually. The venue chosen
and the city chosen is significantly cheaper than Boston.
>
> Hotel rooms in Boston are *minimum* $350 per night compared with $180 per
night in Atlanta's Sheraton.
>
> Those of us in certain companies were allowed to use Airbnb which cut the
lodging costs down even further (around $80 per night per person compared
with $150 average Airbnb prices per person in Boston).
>
> Add to that the lack of expensive vendor parties and nighttime events
(*somebody* ends up having to pay for those things, after all) and the
costs for attending (and putting on the PTG) were indeed cheaper in my
experience.

Compared to the summit it was cheaper, but compared to the midcycles cinder
used to have it was definitely more expensive.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] confusion on where nova_catalog_admin_info is used in cinder

2017-02-14 Thread Duncan Thomas
This is only used by the file-based drivers, which need to call Nova for
assistance with snapshots of attached volumes. Most drivers won't use it,
including the default, LVM.

I can't comment on the test coverage, sorry.

On 14 February 2017 at 13:01, Sean Dague <s...@dague.net> wrote:

> After some of the confusion around endpoints in devstack, we decided to
> simplify the endpoints registered in devstack to only the ones needed
> for development. Basically only register "public" interfaces unless
> there is something special about the service.
>
> https://review.openstack.org/#/c/433272/ is the change.
>
> Matt Riedemann pointed out that this would break Cinder because there is
> a hardcoded concept of nova_catalog_admin_info -
> https://github.com/openstack/cinder/blob/cfc617b0cea99ed6994f08e5337fd5b65ea9fd1c/cinder/compute/nova.py#L39-L41
>
> Except... it didn't (see results on
> https://review.openstack.org/#/c/433272/).
>
> What is more confusing, is the oslo.config dump at the beginning of
> cinder service starts there don't have any reference to any of these
> nova_ config variables.
>
> How is this code loaded and used? Is there no testing of cinder -> nova
> happening in the gate? Is this missing testing, or are there reasons
> these configurations would never load that code?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?

2017-01-20 Thread Duncan Thomas
There's also cinder functionality called the 'generic image cache' that
does this for you; see the (per-backend) config options:
image_volume_cache_enabled, image_volume_cache_max_size_gb and
image_volume_cache_max_count
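As a purely illustrative cinder.conf stanza (the backend section name and
the sizes are made up; the option names are the real ones above):

    [lvm-1]
    image_volume_cache_enabled = True
    # both caps are optional; 0 means unlimited
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50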

On 20 January 2017 at 16:54, Chris Friesen <chris.frie...@windriver.com>
wrote:

> On 01/20/2017 04:07 AM, Marco Marino wrote:
>
>> Hi, I'm trying to use cinder with lvm thin provisioning. It works well
>> and I'd
>> like to know if there is some reason lvm thin should be avoided in mitaka
>> release. I'm trying to use with
>> max_over_subscription_ratio = 1.0
>> so I don't have problems with over subscription.
>> I'm using thin provisioning because it is fast (I think). More precisely,
>> my use
>> case is:
>>
>> - create one bootable volume. This is a long operation because cinder
>> downloads the image from glance, qemu-img converts it to raw format and
>> then "dd" copies the image into the volume.
>> - Create a snapshot of the bootable volume. Really fast and reliable
>> because the
>> original volume is not used by any vm.
>> - Create a new volume from the snapshot. This is faster than creating a new
>> bootable volume.
>>
>> Is this usage correct? Can I deploy it in a production environment (mitaka -
>> centos 7)?
>> Thank you
>>
>
> For what it's worth we're using cinder with LVM thin provisioning in
> production with no overprovisioning.
>
> What you're proposing should work, you're basically caching the vanilla
> image as a cinder snapshot.
>
> If you wish to speed up volume deletion, you can set "volume_clear=none"
> in the cinder.conf file.
>
> Be aware that LVM thin provisioning will see a performance penalty the
> first time you write to a given disk block in a volume, because it needs to
> allocate a new block, zero it out, then write the new data to it.
>
> Chris
>
>
> ______
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Duncan Thomas
On 17 January 2017 at 13:41, Dave McCowan (dmccowan) <dmcco...@cisco.com>
wrote:

>
> I don't know everything that was proposed in the Juno timeframe, or
> before, but the Nova and Cinder integration has been done now.  The
> documentation is at [1].  A cinder user can create an encryption key
> through Barbican when creating a volume, then the same user (or a user with
> permissions granted by that user), as a nova user, can retrieve that key
> when mounting the encrypted volume.
>

Sure, cinder can add a secret and nova can retrieve it. But glance can
*also* retrieve it. So can trove. And any other service that gets a normal
keystone token from the user (i.e. just about all of them). This is, for
some threat models, far worse than the secret being nice and safe in the
cinder DB and only ever given out to nova via a trusted API path. The
original design vision I saw for barbican was intended to have much better
controls than this, but they never showed up AFAIK. And that's just the
problem - people think 'Oh, barbican is storing the cinder volume secrets,
great, we're secure' when actually barbican has made the security situation
worse not better. It's a pretty terrible secrets-as-a-service product at
the moment. Fixing it is not trivial.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Duncan Thomas
To give a totally different perspective on why somebody might dislike
Barbican (I'm one of those people). Note that I'm working from pretty hazy
memories so I don't guarantee I've got everything spot on, and I am without
a doubt giving a very one sided view. But hey, that's the side I happen to
sit on. I certainly don't mean to cause great offence to the people
concerned, but rather to give a history from a PoV that hasn't appeared yet.

Cinder needed somewhere to store volume encryption keys. Long, long ago,
Barbican gave a great presentation about secrets as a service, ACLs on
secrets, setups where one service could ask for key material to be created
and only accessible to some other service. Having one service give another
service permission to get at a secret (but never be able to access that
secret itself). All the clever things that cinder could possibly leverage.
It would also handle hardware security modules and all of the other
craziness that no sane person wants to understand the fine details of. Key
revocation, rekeying and some other stuff was mentioned as being possible
future work.

So I waited, and I waited, and I asked some security people about what
Barbican was doing, and I got told it had gone off and done some
certificate cleverness for some other service, unrelated to anything we
wanted, but secrets-as-a-service would be along at some point. Eventually, a
long time after all my enthusiasm had waned, the basic feature set arrived.

It doesn't do what it says on the tin. It isn't very good at keeping
secrets. If I've got a token then I can get the keys for all my volumes.
That kind of sucks. For several threat models, I'd have done better to just
stick the keys in the cinder db.
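To make that concrete, here is a sketch - not an exploit recipe - of why
token-scoped access is the problem; any service handed the user's session
can do this with python-barbicanclient (user_session and secret_ref are
placeholders; secret_ref is assumed to be the volume's key reference):

    # sketch of the concern: possession of the user's token/session is
    # enough to read the raw key material
    from barbicanclient import client as barbican

    bc = barbican.Client(session=user_session)  # the user's own session
    secret = bc.secrets.get(secret_ref)         # secret_ref: assumed known
    key_bytes = secret.payload                  # raw key material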

I really wish I'd got a video of that first presentation, because it would
be an interesting project to implement. Barbican, from a really
narrowly focused single-use-case viewpoint, really isn't very good though.

(If I've missed something and Barbican can do the clever ACL type stuff
that was talked about, please let me know - I'd be very interested in
trying to fit it to cinder, and I'm not even working on cinder
professionally currently.)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
On 12 December 2016 at 16:35, Ash  wrote:

> I tend to agree with you, Sean. Also, if there's a concern that some
> project has changed its license, then just create a fork. In the case of
> this previously GPL code, it will at least be re-distributable. In the end,
> I just don't think this is a huge issue that cannot be easily managed.
>

Creating a fork is easy. Maintaining a fork against bitrot, and managing
the drift between the 'official' version and the fork, is a task that
requires resources that are hard to find.

We've put up patches to remove (at least) two drivers for exactly this
sort of switch before, and I think it was the right thing to do then and
now.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
On 12 December 2016 at 16:14, Sean McGinnis <sean.mcgin...@gmx.com> wrote:

>
> Honestly, my opinion is it's just fine as it is, and the fact that this
> license has changed doesn't make any difference.
>
> For most external storage there is _something_ that the deployer needs
> to do outside of install and configure OpenStack to get things set up
> and working. Whether that is setting up an physical array or downloading
> and installing a client library on their own - that's just part of the
> requirements for whatever solution they chose to deploy.
>
> It would be great if things were all open and an all in one
> download->install->run solution, but that's not reality and not what
> everyone is looking for out of OpenStack. So be it.
>
>

I'm going to respectfully but forcefully disagree here, and even go so far
as to suggest that the failing of Openstack is that people /do/ want that,
and Openstack is, in many areas (not just cinder) simply unable to provide
such a solution.

I'm willing to bet you can't find a customer who says "yes, we want to mess
around with downloading things from different sources, worrying about
versions, keeping copies of things in case companies decide to take their
portal down... oh and figuring out how to get those onto my nodes is great
fun, we'll have a double helping of that please." That is, frankly,
nonsense. Sure some people might put up with it, but I don't think anybody
wants it.

Having read the Openstack rules linked to earlier in the thread (
https://governance.openstack.org/tc/reference/licensing.html) we're clearly
violating them.

Having worked to try to build a turnkey Openstack distro, I can say with
authority that the cinder soft dependencies are absolutely an obstacle, and
in some cases (like customers who want a fully offline/airgapped install)
an insurmountable one.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
On 12 December 2016 at 14:55, Andreas Jaeger <a...@suse.com> wrote:

>
> So, what are the steps forward here? Requiring a non-free library like
> drbdmanage is not acceptable AFAIU,
>

This is pretty much where things went dead at the summit - there were
various degrees of unacceptability (I was personally bothered by the
parts that can't be freely redistributed, rather than free software per
se), but that still leaves a large number of problem cases. Few people were
willing to seriously consider pulling 1/3 of the cinder drivers out, and
there was not AFAICT a firm conclusion.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Duncan Thomas
It's a soft dependency, like most of the vendor specific dependencies - you
only need them if you're using a specific backend. We've loads of them in
cinder, under a whole bunch of licenses. There was a summit session
discussing it that didn't come to any firm conclusions.

On 12 December 2016 at 10:52, Thierry Carrez <thie...@openstack.org> wrote:

> Mehdi Abaakouk wrote:
> > I have recently seen that drbdmanage python library is no more GPL2 but
> > need a end user license agreement [1].
> > Is this compatible with the driver policy of Cinder ?
>
> It's not acceptable as a dependency of an OpenStack project (be it GPLv2
> or using a custom EULA), see:
>
> https://governance.openstack.org/tc/reference/licensing.html
>
> That said, it doesn't seem to be listed as a Cinder requirement right
> now ? Is it a new dependency being considered, or is it currently flying
> under the radar ?
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-10-16 Thread Duncan Thomas
On 14 October 2016 at 23:55, Jay Pipes <jaypi...@gmail.com> wrote:

> The primary thing that, to me at least, differentiates rolling upgrades of
> distributed software is that different nodes can contain multiple versions
> of the software and continue to communicate with other nodes in the system
> without issue.
>
> In the case of Glance, you cannot have different versions of the Glance
> service running simultaneously within an environment, because those Glance
> services each directly interface with the Glance database and therefore
> expect the Glance DB schema to look a particular way for a specific version
> of the Glance service software.
>

Cinder services can run N+-1 versions in a mixed manner, all talking to the
same database, no conductor required.
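The trick is roughly the one oslo.versionedobjects encourages: the sender
backlevels its payload to the oldest version still deployed. A minimal
sketch of the idea (simplified pseudo-schema, not actual cinder code):

    # sketch: before sending, downgrade the payload to the lowest version
    # running anywhere in the cluster
    class VolumePayload(object):
        VERSION = '1.2'  # current schema version

        def __init__(self, name, size, multiattach=False):
            self.name, self.size, self.multiattach = name, size, multiattach

        def to_primitive(self, target_version):
            prim = {'name': self.name, 'size': self.size}
            # string compare is fine for this sketch's version numbers
            if target_version >= '1.2':
                # 'multiattach' was added in 1.2; older receivers must
                # never see it
                prim['multiattach'] = self.multiattach
            return prim

    # the minimum deployed version would be looked up from the services
    # table; '1.1' here is just an example
    wire_data = VolumePayload('vol1', 10).to_primitive('1.1')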



-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Cinder]Removing Cinder v1 in Tempest

2016-10-10 Thread Duncan Thomas
If we can get them running on cinder patches via a different job, then
removing them from the common job afterwards seems reasonable.

There's no strong will to remove them, several libraries still use them,
and given we're now supporting /all/ other API versions indefinitely,
keeping them around isn't that much of a burden.

On 10 October 2016 at 15:32, Jordan Pittier <jordan.pitt...@scality.com>
wrote:

> Hi,
> I'd like to reduce the duration of a full Tempest run and I noticed that
> Cinder tests take a good amount of time (cumulative time 2149sec vs 2256sec
> for Nova, source code [0])
>
> So I'd like to not run the Cinder v1 tests anymore, at least on the master
> branches.
>
> I remember that Cinder v1 is deprecated (it has been for what, 2 years?)
> Is the removal scheduled? I don't see/feel a lot of effort toward that
> removal but I may be missing something. Anyway, that's not really my
> business, but it's not really fair to all the projects that run the "common
> jobs" that Cinder "slows" everyone down.
>
> What do you think ?
>
> [0]: https://github.com/JordanP/openstack-snippets/blob/master/tempest-timing/tempest_timing.py
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI, nova plans to have a room at the PTG in February

2016-10-10 Thread Duncan Thomas
On 7 October 2016 at 19:53, Clint Byrum <cl...@fewbar.com> wrote:

>
> My hope was that it would be "the summit without the noise". Sounds like
> it will be "the summit without the noise, or the organization".
>
> I'd really like to see time boxes for most if not all of it, even if many
> of the boxes are just half a day of "work time" which means "we want to
> work on stuff together without the overhead of less involved participants."
>
> The two days of cross project is awesome. But there are also big
> single-project initiatives that have cross-project interest anyway.
>
> For instance, the movement of the scheduler out of Nova is most definitely
> a Nova session, but it has ramifications for oslo, performance, neutron,
> cinder, architecture, API-WG, etc.  etc. If we don't know when Nova is
> going to discuss it, how can we be there to influence that discussion?


I've got to agree entirely here. I am mostly interested in cinder stuff,
but I have an interest and a stake in specific nova and glance topics... getting
involved in those is going to be impossible without some sort of schedule.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-21 Thread Duncan Thomas
On 22 September 2016 at 00:23, John Griffith 
wrote:

>
> Yes, that is a sizeable chunk of the solution.  The remaining components
> are how to coordinate with Nova (compute nodes) and figuring out if we just
> use c-vol as is, or if we come up with some form of a paired down agent.
> Just using c-vol as a start might be the best way to go.
>
>
It seems logical that somebody try a large-ish deployment using the
existing cinder-volume service, and benchmark and load test, to see if the
least effort approach works.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-20 Thread Duncan Thomas
On 20 September 2016 at 16:24, Nikita Konovalov 
wrote:

> Hi,
>
> From Sahara (and Hadoop workload in general) use-case the reason we used
> BDD was a complete absence of any overhead on compute resources
> utilization.
>
> The results show that the LVM+Local target performs pretty close to BDD in
> synthetic tests. It's a good sign for LVM. It actually shows that most of
> the storage virtualization overhead is not caused by LVM partitions and
> drivers themselves but rather by the iSCSI daemons.
>
> So I would still like to have the ability to attach partitions locally
> bypassing the iSCSI to guarantee 2 things:
> * Make sure that lio processes do not compete for CPU and RAM with VMs
> running on the same host.
> * Make sure that CPU intensive VMs (or whatever else is running nearby)
> are not blocking the storage.
>

So these are, unless we see the effects via benchmarks, completely
meaningless requirements. Ivan's initial benchmarks suggest that LVM+LIO is
pretty much close enough to BDD even with iSCSI involved. If you're aware
of a case where it isn't, the first thing to do is to provide proof via a
reproducible benchmark. Otherwise we are likely to proceed, as John
suggests, with the assumption that local target does not provide much
benefit.

I've a few benchmarks myself that I suspect will find areas where getting
rid of iSCSI is a benefit; however, if you have any then you really need to
step up and provide the evidence. Relying on vague claims of overhead is
now proven to not be a good idea.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-19 Thread Duncan Thomas
I think there's some mileage in some further work on adding local LVM,
since things like striping/mirroring for performance can be done. We can
prototype it and get the numbers before even thinking about merging though
- as additions to an already fully featured driver. These seem a more
worthwhile way forward than limping on with the BDD driver.
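For anybody wanting to play with the idea by hand, stock LVM already does
both; the VG and LV names below are examples:

    # two-way stripe with a 64KB stripe size
    lvcreate --type striped -i 2 -I 64 -L 10G -n testvol cinder-volumes
    # simple two-leg mirror
    lvcreate --type raid1 -m 1 -L 10G -n testvol-mirror cinder-volumes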

Moving to change our default target to LIO seems worthwhile - I'd suggest
being cautious with deprecation rather than aggressive though - aiming to
change the default in 'O' then planning the rest based on how that goes.

On 19 September 2016 at 21:54, John Griffith <john.griffi...@gmail.com>
wrote:

>
>
> On Mon, Sep 19, 2016 at 12:01 PM, Ivan Kolodyazhny <e...@e0ne.info> wrote:
>
>> + [sahara] because they are primary consumer of the BDD.
>>
>> John,
>> Thanks for the answer. My comments are inline.
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>> On Mon, Sep 19, 2016 at 4:41 PM, John Griffith <john.griffi...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Mon, Sep 19, 2016 at 4:43 AM, Ivan Kolodyazhny <e...@e0ne.info>
>>> wrote:
>>>
>>>> Hi team,
>>>>
>>>> We did some performance tests [1] for LVM and BDD drivers. All tests
>>>> were executed on real hardware with OpenStack Mitaka release.
>>>> Unfortunately, we didn't have enough time to execute all tests and compare
>>>> results. We used Sahara/Hadoop cluster with TestDFSIO and others
>>>> tests.
>>>>
>>>> All tests were executed on the same hardware and OpenStack release.
>>>> The only differences were in cinder.conf, to enable the needed backend and/or target
>>>> driver.
>>>>
>>>> Tests were executed on following configurations:
>>>>
>>>>- LVM +TGT target
>>>>- LVM+LocalTarget: PoC based on [2] spec
>>>>- LVM+LIO
>>>>- Block Device Driver.
>>>>
>>>>
>>>> Feel free to ask question if any about our testing infrastructure,
>>>> environment, etc.
>>>>
>>>>
>>>> [1] https://docs.google.com/spreadsheets/d/1qS_ClylqdbtbrVSvwbbDpdWNf2lZPR_ndtW6n54GJX0/edit?usp=sharing
>>>> [2] https://review.openstack.org/#/c/247880/
>>>>
>>>> Regards,
>>>> Ivan Kolodyazhny,
>>>> http://blog.e0ne.info/
>>>>
>>>> 
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>> Thanks Ivan, so I'd like to propose we (the Cinder team) discuss a few
>>> things (again):
>>>
>>> 1. Deprecate the BDD driver
>>>  Based on the data here, the LVM+LIO performance delta (with the
>>> exception of the TeraValidate run against 3TB) doesn't seem significant
>>> enough to warrant maintaining an additional driver that has only a subset
>>> of features implemented.  It would be good to understand why that
>>> particular test has such a significant performance gap.
>>>
>> What about Local Target? Does it make sense to implement it instead of BDD?
>>
> Maybe I'm missing something, what would the advantage be?  LVM+LIO and
> LVM+LOCAL-TARGET seem pretty close.  In the interest of simplicity and
> maintenance, just thinking maybe it would be worth considering just using
> LVM+LIO across the board.
>
>
>
>>
>>> 2. Consider getting buy off to move the default implementation to use
>>> the LIO driver and consider deprecating the TGT driver
>>>
>> +1. Let's bring this topic for the next weekly meeting.
>>
>>
>>
>>>
>>> I realize this probably isn't a sufficient data set to make those
>>> two decisions but I think it's at least enough to have a more informed
>>> discussion this time.
>>>
>>> Thanks,
>>> John
>>>
>>>
>>> ________
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]tempest test case for force detach volume

2016-09-19 Thread Duncan Thomas
Writing a sensible test for this API is rather tricky, since it is intended
to clean up one very specific error condition, and has few guarantees about
the state it leaves the system in. It is provided as a tool to allow the
system administrator to clean up certain faults and situations without
needing to manually edit the database; however, the conditions under which
it is safe to use, and the cleanup actions that are required after calling
it, vary between backends.

The only test I can think of that is probably safe across all backends is
to call reserve, create_export, reset then delete. (All directly against
the cinder endpoint with no Nova involvement).
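Since os-force_detach is just a volume action, the call itself is trivial;
a bare-bones sketch against the v2 endpoint, with the endpoint, IDs and
token all placeholders:

    # sketch only: POST the os-force_detach action straight at cinder
    import requests

    url = '%s/volumes/%s/action' % (cinder_endpoint, volume_id)
    body = {'os-force_detach': {'attachment_id': None, 'connector': None}}
    resp = requests.post(url, json=body, headers={'X-Auth-Token': token})
    assert resp.status_code == 202  # accepted; the resulting state still
                                    # needs checking per-backend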

There is a substantial danger in the thinking that this call is any sort of
generic fixup - it will happily leave volumes attached behind the scenes,
open to data corruption.

On 18 Sep 2016 05:48, "joehuang"  wrote:

> Hello, Ken,
>
> Thank you for the information. For APIs without tempest test cases,
> is it because the test environment is hard to build, or just because the API
> is not mature enough? I want to know why the tempest test cases
> were not added at the same time the features were implemented.
>
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Ken'ichi Ohmichi [ken1ohmi...@gmail.com]
> Sent: 15 September 2016 2:02
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder]tempest test case for force detach
> volume
>
> Hi Chaoyi,
>
> That is a nice point.
> Now Tempest has tests for some volume v2 action APIs, but they don't
> include os-force_detach.
> Only two action APIs are available in tempest: os-set_image_metadata and
> os-unset_image_metadata, see
> https://github.com/openstack/tempest/blob/master/tempest/services/volume/v2/json/volumes_client.py#L27
> That is fewer than I expected after comparing with the API reference.
>
> Patches for the corresponding API tests are welcome if anyone is interested :-)
>
> Thanks
> Ken Ohmichi
>
> ---
>
>
> 2016-09-13 17:58 GMT-07:00 joehuang :
> > Hello,
> >
> > Is there any tempest test case for the "os-force_detach" action to force
> detach
> > a volume? I didn't find such a test case both in the repository
> > https://github.com/openstack/cinder/tree/master/cinder/tests/tempest
> > and https://github.com/openstack/tempest
> >
> > The API link is:
> > http://developer.openstack.org/api-ref-blockstorage-v2.html#forcedetachVolume
> >
> > Best Regards
> > Chaoyi Huang(joehuang)
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-13 Thread Duncan Thomas
On 13 September 2016 at 06:44, Ben Swartzlander <b...@swartzlander.org>
wrote:

> On 09/09/2016 11:12 AM, Duncan Thomas wrote:
>
>> I don't care so much whether your CLI or API proxy is open or closed
>> source, but I really do care if I can create a distribution, even a
>> novel one, with that software in it, without hitting licensing issues.
>> That is, as I see it, a bare minimum - anything less than that and it
>> does not belong in the cinder source tree.
>>
>
> I don't understand how you can have this stance while tolerating the
> existence of such things as the VMware driver. That software (ESXi)
> absolutely requires a license to use or distribute.


In all honesty, I hadn't considered the situation in detail until the
recent IBM discussions - I've raised concerns before when specific
troublesome libraries appeared (the Netapp one, and rts-lib, both solved by
relicensing to apache) but never tried to audit the whole codebase. There's
an etherpad Walt linked to in the meeting that is collecting the dependency
info for various drivers, so hopefully we'll have an accurate assessment of
the current situation so that we can figure out what we're doing going
forward.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-09 Thread Duncan Thomas
On 9 September 2016 at 17:22, Ben Swartzlander <b...@swartzlander.org> wrote:

On 09/08/2016 04:41 PM, Duncan Thomas wrote:
>


> Despite the fact I've appeared to be slightly disagreeing with John in
>> the IRC discussion on this subject, you've summarised my concern very
>> well. I'm not convinced that these support tools need to be open source,
>> but they absolutely need to be licensed in such a way that distributions
>> can repackage them and freely distribute them. I'm not aware of any
>> tools currently required by cinder where this is not the case, but a few
>> of us are in the process of auditing this to make sure we understand the
>> situation before we clarify our rules.
>>
>
> I don't agree with this stance. I think the Cinder (and OpenStack)
> communities should be able to dictate what form drivers take, including the
> code and the license, but when we start to try to control what drivers are
> allowed to talk to (over an API or CLI) then we are starting to
> artificially limit what kinds of storage systems can integrate with
> OpenStack.
>
> Storage systems take a wide variety of forms, including specialized
> hardware systems, clusters of systems, pure software-based systems, open
> source, closed source, and even other SDS abstraction layers. I don't see
> the point is creating rules that specify what form a storage system has to
> take if we are going to allow a driver for it. As long as the driver itself
> and all of its python dependencies are Apache licensed, we can do our job
> of reviewing the code and fixing cinder-level bugs. Any other kind of
> restrictions just limit customer choice and stifle competition.
>
> Even if you don't agree with my stance, I see serious practical problems
> with trying to define what it and is not permitted in terms of "support
> tools". Is a proprietary binary that communicates with a physical
> controller using a proprietary API a "support tool"? What if someone
> creates a software-defined-storage system which is purely a proprietary
> binary and nothing else?
>
> API proxies are also very hard to nail down. Is an API proxy with a
> proprietary license not allowed? What if that proxy runs on the box itself?
> What if it's a separate software package you have to install? I don't think
> we can write a set of rules that won't accidentally exclude things we don't
> want to exclude.


So my issue is not with any of those things, it is that I believe anybody
should be able to put together a distribution of openstack, that just
works, with any supported backend, without needing to negotiate licensing
deals with vendors, and without having to have nasty hacks in their
installers that pull things down off the web on to cinder nodes to get
around licensing rules. That is one of the main 'opens' to me in openstack.

I don't care so much whether your CLI or API proxy is open or closed
source, but I really do care if I can create a distribution, even a novel
one, with that software in it, without hitting licensing issues. That is,
as I see it, a bare minimum - anything less than that and it does not
belong in the cinder source tree.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-08 Thread Duncan Thomas
On 8 September 2016 at 20:17, John Griffith <john.griffi...@gmail.com>
wrote:

> On Thu, Sep 8, 2016 at 11:04 AM, Jeremy Stanley <fu...@yuggoth.org> wrote:
>



> 
>
> they should be able to simply install it and its free dependencies
>> and get a working system that can communicate with "supported"
>> hardware without needing to also download and install separate
>> proprietary tools from the hardware vendor. It's not what we say
>> today, but it's what I personally feel like we *should* be saying.
>
>
> Your view on what you feel we *should* say, is exactly how I've
> interpreted our position in previous discussions within the Cinder
> project.  Perhaps I'm over reaching in my interpretation and that's why
> this is so hotly debated when I do see it or voice my concerns about it.
>
>
Despite the fact I've appeared to be slightly disagreeing with John in the
IRC discussion on this subject, you've summarised my concern very well. I'm
not convinced that these support tools need to be open source, but they
absolutely need to be licensed in such a way that distributions can
repackage them and freely distribute them. I'm not aware of any tools
currently required by cinder where this is not the case, but a few of us
are in the process of auditing this to make sure we understand the
situation before we clarify our rules.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] API interaction changes overview

2016-09-03 Thread Duncan Thomas
There's also another API limitation to be fixed - whether it goes in the
initial API fixup or gets done later, which is around one cinder serving
multiple nova or other consumers: https://review.openstack.org/#/c/362637/

On 2 September 2016 at 22:51, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 9/1/2016 4:09 PM, Ildiko Vancsa wrote:
>
>> Hi All,
>>
>>
>> As we skipped a few meetings and we also reached the N-3 milestone this
>> week I would like to summarise where we are currently with our plans.
>>
>> At the beginning of the Newton cycle we decided to refactor the Cinder
>> API around attach and detach to make Cinder more a standalone module and
>> also to simplify the interaction between Nova and Cinder. The changes in
>> high level are the following:
>>
>> * Have one ‘create_attachment' call that will contain the functionality
>> of the current ‘initialize_connection’ and ‘attach’ calls
>> * Have one ‘remove_attachment’ call for detach
>> * There are further plans to extend the Cinder API to check whether it is
>> safe to remove a connection to prepare for the multi-attach use case.
>>
>> The code is already on its way [1][2][3], we plan to merge the Cinder
>> side of changes as an experimental API as soon as possible. We can then
>> continue to test the changes on the Nova side. When the Cinder API changes
>> are stable we will add proper API microversioning to enable Nova to pick it
>> up the right way and use the new API with support for backward
>> compatibility.
>>
>> This work should provide the benefit of removing workarounds from the
>> Nova code and make those parts more readable and easier to maintain. We
>> will also get very close to support multi-attach in both modules when we
>> progress with the above listed changes.
>>
>> We started some work to reduce race conditions and remove a few
>> workarounds from mostly the Nova code base as well during this cycle.
>>
>> We had an attempt to remove ‘check_attach’ from the Nova code, as this
>> check should be purely Cinder’s responsibility. The call is partially
>> removed [4] and there’s one patch up for review to finish that work item
>> [5]. The difficulty with this one is that we have one flow that missed the
>> ‘reserve_volume’ call and therefore also the required checks on Cinder
>> side. This is corrected in the patch up for review [5], it needs some more
>> eyes on it to ensure we have the proper fix.
>>
>
> I haven't gone through the new experimental APIs proposed in cinder but
> while Walter was working on [4] we realized a gap in the os-reserve call in
> that we should pass the availability zone since nova still has to check
> that separately:
>
> https://github.com/openstack/nova/blob/96926a8ee182550391c30b16b410da7f598c0f39/nova/volume/cinder.py#L290
>
> That seems like something we could do separately early in ocata with a
> microversion on the os-reserve API to take an optional availability_zone
> argument and fail if it doesn't match the AZ for the volume being reserved.
>
>
>> We also started to look into how to remove the unnecessary
>> ‘initialize_connection’ calls [6] from the Nova code. We need more review
>> attention on this one also, as when we have the new calls in the Cinder
>> API we need to rethink some of the current workarounds to make things like
>> live migration work with the new setup.
>>
>> We still have the etherpad for this work up [7], I will add the links in
>> this mail to the top for better readability.
>>
>> We will have the next meeting at the regular slot we use next Monday to
>> check on the status and decide on what we can make happen before the Summit
>> and also start to plan a little bit for the event itself as well. The
>> meeting is on #openstack-meeting-cp, on Monday (Sept. 5) 1700UTC.
>>
>> If you have any questions or comments please respond to this thread or
>> join the meeting next Monday.
>>
>>
>> Thanks and Best Regards,
>> Ildikó
>>
>> [1] https://review.openstack.org/#/c/327408/
>> [2] https://review.openstack.org/#/c/327409/
>> [3] https://review.openstack.org/#/c/330285/
>> [4] https://review.openstack.org/#/c/315789/
>> [5] https://review.openstack.org/#/c/335358/
>> [6] https://review.openstack.org/#/c/312773/
>> [7] https://etherpad.openstack.org/p/cinder-nova-api-changes
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Duncan Thomas
On 31 August 2016 at 22:30, Ian Wells <ijw.ubu...@cack.org.uk> wrote:

> On 31 August 2016 at 10:12, Clint Byrum <cl...@fewbar.com> wrote:
>
>> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>> > Is there a writeup anywhere on what these issues are? I've heard this
>> > sentiment expressed multiple times now, but without a writeup of the
>> issues
>> > and the design goals of the replacement, we're unlikely to make
>> progress on
>> > a replacement - even if somebody takes the heroic approach and writes a
>> > full replacement themselves, the odds of getting community by-in are
>> very
>> > low.
>>
>> Right, this is exactly the sort of thing I'd like to gather a group of
>> design-minded folks around in an Architecture WG. Oslo is busy with the
>> implementations we have now, but I'm sure many oslo contributors would
>> like to come up for air and talk about the design issues, and come up
>> with a current design, and some revisions to it, or a whole new one,
>> that can be used to put these summit hallway rumors to rest.
>>
>
> I'd say the issue is comparatively easy to describe.  In a call sequence:
>
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
>
> ... you can have a fault at any point in that message flow (consider
> crashes or program restarts).  If you ask for something to happen, you wait
> for a reply, and you don't get one, what does it mean?  The operation may
> have happened, with or without success, or it may not have gotten to the
> far end.  If you send the message, does that mean you'd like it to cause an
> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
> happen?  Do you understand what Oslo promises you here, and do you think
> every person who ever wrote an RPC call in the whole OpenStack solution
> also understood it?
>
>

Thank you for the explanation. Sometimes it is best to state the
apparently obvious just so that everybody is on the same page.

There are some pieces in cinder that attempt to work around some of these
limitations already, added with the recent H/A cinder-volume work.
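In oslo.messaging terms the ambiguity Ian describes boils down to something
like this (the method name and argument are made up; transport, target and
ctxt are assumed to be set up elsewhere):

    # sketch: a timed-out call tells A nothing about whether B acted
    import oslo_messaging as messaging

    client = messaging.RPCClient(transport, target).prepare(timeout=30)
    try:
        result = client.call(ctxt, 'do_thing', thing_id=42)
    except messaging.MessagingTimeout:
        # the fault could be at any of steps 1-5 above: the request may
        # never have arrived, or B may have acted and the reply was lost;
        # without idempotent handlers or persisted request ids, A cannot
        # tell which
        pass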

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 18:54, Joshua Harlow <harlo...@fastmail.com> wrote:

> Duncan Thomas wrote:
>
>> On 31 August 2016 at 11:57, Bogdan Dobrelya <bdobre...@mirantis.com
>> <mailto:bdobre...@mirantis.com>> wrote:
>>
>> I agree that RPC design pattern, as it is implemented now, is a major
>> blocker for OpenStack in general. It requires a major redesign,
>> including handling of corner cases, on both sides, *especially* RPC
>> call
>> clients. Or maybe it just has to be abandoned to be replaced by a
>> more
>> cloud friendly pattern.
>>
>>
>>
>> Is there a writeup anywhere on what these issues are? I've heard this
>> sentiment expressed multiple times now, but without a writeup of the
>> issues and the design goals of the replacement, we're unlikely to make
>> progress on a replacement - even if somebody takes the heroic approach
>> and writes a full replacement themselves, the odds of getting community
>> buy-in are very low.
>>
>
> +2 to that, there are a bunch of technologies that could replace the
> rabbit+rpc, aka, gRPC, then there is http2 and thrift and ... so a writeup
> IMHO would help at least clear the waters a little bit, and explain the
> blocker of the current RPC design pattern (which is multidimensional
> because most people are probably thinking RPC == rabbit when it's actually
> more than that now, ie zeromq and amqp1.0 and ...) and try to centralize on
> a better replacement.
>
>
Is anybody who dislikes the current pattern(s) and implementation(s)
volunteering to start this documentation? I really am not aware of the
issues, and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 11:57, Bogdan Dobrelya  wrote:


> I agree that RPC design pattern, as it is implemented now, is a major
> blocker for OpenStack in general. It requires a major redesign,
> including handling of corner cases, on both sides, *especially* RPC call
> clients. Or maybe it just has to be abandoned to be replaced by a more
> cloud friendly pattern.
>


Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the issues
and the design goals of the replacement, we're unlikely to make progress on
a replacement - even if somebody takes the heroic approach and writes a
full replacement themselves, the odds of getting community buy-in are very
low.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]How to address TCs concerns in Tricircle big-tent application

2016-08-31 Thread Duncan Thomas
If you're going with the approach of having a master region (which seems
sensible), you're going to want an admin API that checks that the setup of
all the regions matches - i.e. that these global objects exist in every
region - for validation.
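As a sketch of the sort of check I mean, for volume types (per-region
clients assumed to already exist; purely illustrative):

    # hypothetical validator: volume types present in the master region
    # but missing from a replica region
    def missing_volume_types(master_cinder, replica_cinder):
        master = {vt.name for vt in master_cinder.volume_types.list()}
        replica = {vt.name for vt in replica_cinder.volume_types.list()}
        return sorted(master - replica)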

On 31 August 2016 at 10:16, joehuang <joehu...@huawei.com> wrote:

> Hello, team,
>
> During the last weekly meeting, we discussed how to address the TCs' concerns in
> Tricircle big-tent application. After the weekly meeting, the proposal was
> co-prepared by our contributors:
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E
>
> The more doable way is to divide Tricircle into two independent and
> decoupled projects; only the project which deals with networking
> automation will try to become a big-tent project, and Nova/Cinder API-GW
> will be removed from the scope of the big-tent application and put
> into another project:
>
> *TricircleNetworking:* Dedicated for cross Neutron networking automation
> in multi-region OpenStack deployment, run without or with TricircleGateway.
> Try to become big-tent project in the current application of
> https://review.openstack.org/#/c/338796/.
>
>
> *TricircleGateway:* Dedicated to provide API gateway for those who need
> single Nova/Cinder API endpoint in multi-region OpenStack deployment, run
> without or with TricircleNetworking. It will live as a non-big-tent,
> non-official-openstack project, just like Tricircle's status today, and not
> pursue big-tent unless consensus can be achieved in the OpenStack
> community, including the Arch WG and TCs, on how to get it on board in
> OpenStack. A new repository will need to be requested for this project.
>
> To remove some overlapping implementation in the Nova/Cinder
> API-GW for global objects like flavors and volume types, we can configure one
> region as the master region; all global objects like flavors, volume types,
> server groups, etc. will be managed in the master Nova/Cinder service. In
> the Nova/Cinder API-GW, all requests for these global objects will be
> forwarded to the master Nova/Cinder, getting rid of any overlapping API
> implementation.
>
> More information, you can refer to the proposal draft
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E,
> your thoughts are welcome, and let's have more discussion in this weekly
> meeting.
>
> Best Regards
> Chaoyi Huang(joehuang)
>
> *From:* joehuang
> *Sent:* 24 August 2016 16:35
> *To:* openstack-dev
> *Subject:* [openstack-dev][tricircle]agenda of weekly meeting Aug.24
>
> Hello, team,
>
>
>
> Agenda of Aug.24 weekly meeting:
>
>
> # progress review and concerns on the features like micro versions, policy
> control, dynamic pod binding, cross pod L2 networking
>
> # How to address TCs concerns in Tricircle big-tent application
>
> # open discussion
>
>
>
> How to join:
>
>
>
> #  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
> on every Wednesday starting from UTC 13:00.
>
>
>
> If you  have other topics to be discussed in the weekly meeting, please
> reply the mail.
>
>
>
> Best Regards
>
> Chaoyi Huang ( joehuang )
>
> __________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-22 Thread Duncan Thomas
What is the logic for that? It's a massive duplication of effort, and it
leads to de facto forks and inconsistencies between clouds - exactly what
the OpenStack mission is against.

Many/most of the clouds actually in production are already out of upstream
stable policy. The more convergence we can get on what happens after that
the better. There are zero advantages I can see to each vendor going it
alone.

On 22 Aug 2016 19:31, <arkady_kanev...@dell.com> wrote:

> Sorry if this touches the third rail.
>
> But should backporting bug fixes to older releases be done in distros and not
> upstream?
>
> -Original Message-
> From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
> Sent: Tuesday, August 09, 2016 12:34 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Cinder] [stable] [all] Changing stable
> policy for drivers
>
> On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:
> > Duncan Thomas wrote:
> >
> >> On 8 August 2016 at 21:12, Matthew Treinish
> >> wrote:
> >> Ignoring all that, this is also contrary to how we perform testing in
> >> OpenStack.
> >> We don't turn off entire classes of testing we have so we can land
> >> patches, that's just a recipe for disaster.
> >>
> >> But is it more of a disaster (for the consumers) than zero testing,
> >> zero review, scattered around the internet
> >> if-you're-lucky-with-a-good-wind you'll maybe get the right patch
> >> set? Because that's where we are right now, and vendors, distributors
> >> and the cinder core team are all saying it's a disaster.
> >
> > If consumers rely on upstream releases, then they are expected to
> > migrate to newer releases after EOL, not switch to a random branch on
> > the internet. If they rely on some commercial product, then they
> > usually have an extended period of support and certification for their
> > drivers, so it’s not a problem for them.
> >
> > Ihar
> This is entirely unrealistic. Force customers to upgrade. Good luck
> explaining to a bank that in order to get their cinder driver fix in, they
> have to upgrade their entire OpenStack deployment. Real world customers
> simply will balk at this all day long.
>
> Walt
> >
> > __
> > 
> >
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-13 Thread Duncan Thomas
There's so much misinformation in that email I barely know where to start.

There is nothing stopping out of tree drivers for cinder, and a few have
existed, though they don't seem to stick around. The driver is just a
python class referenced in the config file.
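Concretely, it is just the existing per-backend volume_driver option
pointing at any importable class; the module path here is made up:

    [vendor-backend]
    volume_backend_name = vendor-backend
    # any class importable on the python path will be loaded
    volume_driver = vendor_pkg.cinder.driver.VendorISCSIDriver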

Turning a removed driver into an out of tree driver (or patching it back
into the tree) is trivial for anybody with basic python skills. They can
even just apply a reverse patch of the removal patch directly and cleanly
most of the time, since the drivers are clearly separated.

As has been said in the thread multiple times, by multiple people, the idea
of out of tree drivers has been discussed, passionately and at vast length,
with people on both sides of the debate. We've got storage vendors,
operators and distribution packagers at every single one of these
discussions, and have had each time it is discussed, which has been at
least the last three summits and the last three mid cycles.

It is getting tiring and distracting to keep rehashing that decision in
thread with nothing new being said, with somebody who neither has a driver
nor otherwise contributes to cinder. Please have the courtesy to follow some
of the provided historical references before repeatedly derailing the
thread by expounding the virtues of out of tree drivers. They have been
discussed, and soundly (though not unanimously, as Mike points out)
rejected. We have clearly decided there is a consensus that they aren't
what we want now. That is not what we're trying to discuss here.

To spell it out one more time: we don't stop out of tree drivers. They
work, they're easy. We don't advertise them as supported because they're
not part of the review or testing process. We like in tree drivers. Vendors
like in tree drivers, and the advertised support we give them for doing so.
They will handle the burden of keeping a third party CI going to maintain
that status, though a little begrudgingly - it has repeatedly and
continuously been necessary to have the option of the (apparently
substantial, given the effect it can have) threat of removal from tree in
order to persuade them to put enough resources into keeping their CI going.

On 13 Aug 2016 16:40, "Ihar Hrachyshka"  wrote:

> Clay Gerrard  wrote:
>
> The 
> use_untested_probably_broken_deprecated_manger_so_maybe_i_can_migrate_cross_fingers
>> option sounds good!  The experiment would be then if it's still enough of a
>> stick to keep 3rd party drivers pony'd up on their commitment to the Cinder
>> team to consistently ship quality releases?
>>
>>
> This commitment, is it the only model that you allow to extend Cinder for
> vendor technology? Because if not, then, in a way, you put vendors in an
> unfortunate situation where they are forced into a very specific model of
> commitment, that may not be in the best interest of that vendor. While
> there may be a value of keeping multiple drivers closer to the core code
> (code reusability, spotting common patterns, …), I feel that the benefit
> from such collaboration is worthwhile only when it's mutual, and not forced
> onto.
>
> I assume that if there would be alternatives available (including walking
> autonomously of Cinder release cycles and practices), then some of those
> vendors that you currently try hard to police into doing things the right
> and only way would actually choose that alternative path, that could be
> more in line with their production cycle. And maybe those vendors that
> break current centralized rules would voluntarily vote for leaving the tree
> to pursue happiness as they see it, essentially freeing you from the need
> to police code that you cannot actually maintain.
>
> What about maybe the operator just not upgrading till post migration?
>> It's the migration that sucks right?  You either get to punt a release and
>> hope it gets "back in good faith" or do it now and that 3rd party driver
>> has lost your business/trust.
>>
>
> The deprecation tag indicates the goodwill of the community to do whatever
> it takes to fulfill the guarantee that a solution that worked in a previous
> cycle won’t be dropped with no prior notice (read: deprecation warnings in
> logs). Explicitly removing a driver just because you *think* it may no
> longer work is not in line with this thinking. Yes, there may be bugs in
> the code, but there is at least a path forward: for one, operators may try
> to fix bugs they hit in upgrade, or they can work with the vendor to fix
> the code and backport the needed fixes to stable branches. When you don’t
> have the code in tree at all, it’s impossible to backport, because stable
> branches don’t allow new features. And it’s not possible to play with
> (potentially broken) driver to understand which bugs block you from going
> forward.
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> 

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-12 Thread Duncan Thomas
On 12 August 2016 at 16:09, Thierry Carrez <thie...@openstack.org> wrote:

>
> How about: 4. Take 3rd-party drivers to a separate cinder-extra-drivers
> repository/deliverable under the Cinder team, one that would /not/ have
> follows-stable-policy or follows-standard-deprecation tags ? That
> repository would still get core-reviewed by the Cinder team, so you
> would keep the centralized code review value. It would be in a single
> repository, so you would keep most of the "all drivers checked out in
> one place" benefits. But you could have a special stable branch policy
> there and that would also solve that other issue in the thread about
> removing unmaintained drivers without deprecation notices.
>
> Or is there another benefit in shipping everything inside a single
> repository that you didn't mention ?
>

The development process is definitely smoother with everything in one repo.
Cross repo changes (even repos under the same team, like brick is for
cinder) are painful, because you have to get the change into the 'child'
repo, wait for it to merge, then wait for it to be released in some form
that is usable to the parent project (e.g. a pip release), then finally you
can merge the cinder change.

To turn the question around, what is the downside of losing the tag? Are
people going to suddenly stop deploying cinder? That seems rather unlikely.

Nobody has yet given a single benefit to shipping a broken driver.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Duncan Thomas
Strictly speaking, we only guarantee lvm... If any other driver starts
failing CI and nobody steps up to fix it then it will be removed. I listed
ceph and NFS because I think there's enough knowledge and interest in the
core team to keep them working without needing any particular company to
help out.

We could make the windows larger as you suggest, but experience has shown
that this just causes vendors to make CI less of a priority, and
realistically we are already struggling to get meaningful results from CI.

If we remove a driver, it is highly likely that a forward port from the
previous release is trivial. Anybody building a deployment without some
sort of contracted driver commitment from their storage vendor is probably
doing themselves and the community a disservice though.

120 days of broken CI probably means we're shipping broken code, and I'm
not sure tags help most deployers - we have a lot of them, and the fine
details of their meaning are not obvious. There's only one tag appropriate
for a driver that has no passing CI - BROKEN. Shipping broken
code does not help anybody who's trying to rely on it, even during upgrade.
We might as well be honest and force them to do the forward port. If we
leave the broken driver in and they upgrade and everything breaks, it just
makes cinder look broken, without putting the blame squarely where it
belongs - with the storage vendor who hasn't kept up support for their
product. Giving a fake façade of 'support' just allows vendors to sell more
unsupported stuff, it doesn't help users, OpenStack developers or operators.

We could split the drivers out into a new tree and give it different
tags, but it would slow down development, and frankly we've enough problems
on that front already. As far as I can tell, we (the cinder team) are
better off shrugging about the tags and carrying on as we are.

On 12 Aug 2016 15:54, "Sean Dague" <s...@dague.net> wrote:

> On 08/12/2016 08:40 AM, Duncan Thomas wrote:
> > On 12 Aug 2016 15:28, "Thierry Carrez" <thie...@openstack.org
> > <mailto:thie...@openstack.org>> wrote:
> >>
> >> Duncan Thomas wrote:
> >
> >> I agree that leaving broken drivers in tree is not significantly better
> >> from an operational perspective. But I think the best operational
> >> experience would be to have an idea of how much risk you expose yourself
> >> when you pick a driver, and have a number of them that are actually
> >> /covered/ by the standard deprecation policy.
> >>
> >> So ideally there would be a number of in-tree drivers (on which the
> >> Cinder team would apply the standard deprecation policy), and a separate
> >> repository for 3rd-party drivers that can be removed at any time (and
> >> which would /not/ have the follows-standard-deprecation-policy tag).
> >
> > So we'd certainly have to move out all of the backends requiring
> > proprietary hardware, since we couldn't commit to keeping them working
> > if their vendors turn off their CI. That leaves ceph, lvm, NFS, drbd, and
> > sheepdog, I think. There is not enough broad knowledge in the core team
> > currently to support sheepdog or drbd without 'vendor' help. That would
> > leave us with three drivers in the tree, and not actually provide much
> > useful risk information to deployers at all.
>
> I 100% understand the cinder policy of kicking drivers out without CI.
> And I think there is a lot of value in ensuring what's in tree is tested.
>
> However, from a user perspective basically it means that if you deploy
> Newton cinder and build a storage infrastructure around anything other
> than ceph, lvm, or NFS, you have a very real chance of never being able
> to upgrade to Ocata, because your driver was fully deleted, unless you
> are willing to completely change up your storage architecture during the
> upgrade.
>
> That is the kind of reality that should be front and center to the
> users. Because it's not just a drop of standard deprecation, it's also a
> removal of 'supports upgrade', as Netwon cinder config won't work with
> Ocata.
>
> Could there be more of an off ramp / on ramp here to the drivers? If a
> driver CI fails to meet the reporting window, mark it deprecated for the
> next delete window. If a driver is in a deprecated state they need some
> long window of continuous reporting to get out of that state (like 120
> days or something). Bring in all new drivers in a
> deprecated/experimental/untested state, which they only get to shrug off
> after the onramp window?
>
> It's definitely important that the project has the ability to clean out
> the cruft, but it would be nice to not be overly brutal to our operators
> at the same time.
>
>

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-12 Thread Duncan Thomas
Are there docs for it somewhere? Or some quick way of telling that
we've done it and gotten it right?
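
My best guess at what "done right" looks like is the project's tox.ini
installing with the constraints file - something like this, from memory,
so treat it as illustrative rather than canonical:

    [testenv]
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}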

On 12 Aug 2016 08:17, "Andreas Jaeger"  wrote:

> On 08/12/2016 04:25 AM, Robert Collins wrote:
> > On 11 Aug 2016 3:13 PM, "Ben Swartzlander" wrote:
> >>
> >> ...
> >>
> >> I still don't agree with this stance. Code doesn't just magically stop
> > working. Code breaks when things change which aren't version controlled
> > properly or when you have undeclared dependencies.
> >
> > Well this is why the constraints work was and is being done. It's not
> > 100% rolled out as far as I know though, and stable branch support feels
> > all the gaps.
>
> As announced yesterday:
>
> Constraints work is *now* 100 % rolled out from the infra side, it's up
> to projects to use it fully now,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Duncan Thomas
On 12 Aug 2016 15:28, "Thierry Carrez" <thie...@openstack.org> wrote:
>
> Duncan Thomas wrote:

> I agree that leaving broken drivers in tree is not significantly better
> from an operational perspective. But I think the best operational
> experience would be to have an idea of how much risk you expose yourself
> when you pick a driver, and have a number of them that are actually
> /covered/ by the standard deprecation policy.
>
> So ideally there would be a number of in-tree drivers (on which the
> Cinder team would apply the standard deprecation policy), and a separate
> repository for 3rd-party drivers that can be removed at any time (and
> which would /not/ have the follows-standard-deprecation-policy tag).

So we'd certainly have to move out all of the backends requiring
proprietary hardware, since we couldn't commit to keeping them working if
their vendors turn of their CI. That leaves ceph, lvm, NFS, drdb, and
sheepdog, I think. There is not enough broad knowledge in the core team
currently to support sheepdog or drdb without 'vendor' help. That would
leave us with three drivers in the tree, and not actually provide much
useful risk information to deployers at all.

> I understand that this kind of reorganization is a bit painful for
> little (developer-side) gain, but I think it would provide the most
> useful information to our users and therefore the best operational
> experience...

In theory this might be true, but see above - in practice it doesn't work
that way.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-11 Thread Duncan Thomas
>>
>> Sean,
>>
>> As said on my initial opening, I do understand and agree with the
>> reasoning/treatment of the 3rd party drivers. My request for that tag
>> removal is out of the remains of my ops hat.
>>
>> Let's say I was an op evaluating different options as storage vendor for
>> my cloud and I get told that "Here is the list of supported drivers
>> for different OpenStack Cinder back ends delivered by the Cinder team", I
>> start looking at what the support level of those drivers is and see that
>> Cinder follows standard deprecation which is fairly user/ops friendly
>> with decent warning etc. I'm happy with that, not knowing OpenStack I
>> would not even look if different subcomponents of Cinder happen to
>> follow different policy. Now I buy storage vendor X HW and in October I
>> realize that the vendor's driver is not shipped, nor any remains of it
>> is visible anymore, I'd be reasonably pissed off. If I knew that the
>> risk is there I would select my HW based on the negotiations that my
>> HW is contractually tied to maintain that driver and its CI, and that
>> would be fine as well or if not possible I'd select some other
>> solution where I could get a reasonable guarantee that it will be
>> supported/valid for its expected lifetime. As said I don't think
>> there is anything wrong with the 3rd party driver policy, but
>> maintaining that and the tag about standard-deprecation project wide
>> is sending wrong message to those who do not know better to safeguard
>> their rear ends.
>>
>> The other option would be to leave the drivers in tree, tag them with
>> a deprecation message, something like "This driver has not been tested
>> by vendor CI since 15.3.2016 and cannot be guaranteed working. Unless
>> testing will be resumed the driver will be removed on Unicorn
>> release". Which would give as clear indication that the driver seems
>> abandoned, but still provide the consumer easier way to test in their
>> staging if the driver is something they still dare to use in
>> production or not.
>>
>> IMHO this is purely to set the expectations right for our consumers so
>> that they know what to expect from us and what to demand from their
>> vendors. Personally I don't think such tags should be the reason why
>> we do development in certain way, but rather just indication for the
>> consumer where they should pay attention while making the decisions.
>>
>> - Erno
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> >> Yes, that's how I see it. Cinder's own policy is that the drivers can
> >> be removed without any warning to the consumers while the standard
> >> deprecation policy defines quite strict lines about informing the
> >> consumer of the functionality deprecation before it gets removed.
>
> That is a great point, it was mentioned at one point but we sort of
> conveniently swept it under the rug a bit.  I certainly understand the
> problem, I do think there's two sides to it.  Frankly I also think it
> points out that drivers really are *different* like it or not.  So it
> totally sucks that yes, a release could come out and that driver no longer
> exists, and that's no good.
>
>
> BUT on the other hand, it's not much worse to find that the code has been
> all but abandoned and no longer works anyway.  I don't think either
> scenario is a good one.  It highlights in my opinion that frankly maybe
> distros and customers should be more selective in what they choose to use.
> Community involvement matters.
>
> That being said, I'm not sure which I'd prefer to see happen in this
> situation.  I lean slightly towards remove the "follows deprecation" tag
> from Cinder and continue to remove drivers.
>
> The alternative isn't much better, but if we go that route I do think we
> should come up with some widely broadcasted advertisement of drivers that
> are just using Cinder as a dumping ground and don't offer any care and
> feeding to the project in any way shape or form (you know who you are...
> but you're not reading this ML anyway).
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Duncan Thomas
So I tried to get into helping with the cinder stable tree for a while, and
while I wasn't very successful (lack of time and an inability to convince
my employer it should be a priority), one thing I did notice is that much
of the breakage seemed to come from outside cinder - many of the libraries
we depend on make backwards incompatible changes by accident, for example.
Would it be possible to have a long-term-support branch where we pinned the
max version of everything for the gate, pip and devstack? I'd have
thought (and I'm very willing to be corrected) that would make the stable
gate, well, stable, such that it required far less work to keep it able to
run a basic devstack test plus unit tests.

Does that sound at all sane?

(I'm aware there are community standards for stable currently, but a lot of
this thread is the tail of standards wagging the dog of our goals. Let's
figure out what we want to achieve, and figure out how we can do that
without causing either too much extra work or an unnecessary fall off in
quality, rather than saying we can't do anything because of how we do
things now.)




On 10 August 2016 at 08:54, Tony Breeds <t...@bakeyournoodle.com> wrote:

> On Tue, Aug 09, 2016 at 09:16:02PM -0700, John Griffith wrote:
> > Sorry, I wasn't a part of the sessions in Austin on the topic of long
> > terms support of Cinder drivers.  There's a lot going on during the
> summits
> > these days.
>
> For the record the session in Austin, that I think Matt was referencing,
> was
> about stable life-cycles, not cinder specific.
>
> > Yeah, ok... I do see your point here, and as I mentioned I have had this
> > conversation with you and others over the years and I don't disagree.  I
> > also don't have the ability to "force"
> > said parties to do things differently.  So when I try and help customers
> > that are having issues my only recourse is an out of tree patch, which
> then
> > when said distro notices or finds out they don't want to support the
> > customer any longer based on the code no longer being "their blessed
> > code".  The fact is that the distros hold the power in these situations,
> if
> > they happen to own the OS release and the storage then it works out great
> > for them, not so much for anybody else.​
>
> Right we can't 'force' the distros to participate (if we could we wouldn't
> be
> having this discussion).  The community has a process and all we can do is
> encourage distros and the like to participate in that process as it really
> is
> best for them, and us.
>
> > So is the consensus here that the only viable solution is for people to
> > invest in keeping the stable branches in general supported longer?  How
> > does that work for projects that are interested and have people willing
> to
> > do the work vs projects that don't have the people willing to do the
> work?
> > In other words, Cinder has a somewhat unique problem that Nova, Glance
> and
> > Keystone don't have.  So for Cinder to try and follow the policies,
> > processes and philosophies you outlined does that mean that as a project
> > Cinder has to try and bend the will of "ALL" of the projects to make this
> > happen?  Doesn't seem very realistic to me.​
>
> So the 'Cinder' team wont need to do all the will bending, that's for the
> Stable team to do with the support of *everyone* that cares about the
> outcome.
> That probably doesn't fill you with hope, but that is the reality.
>
> > Just one last point and I'll move on from the topic.  I'm not sure where
> > this illusion that we're testing all the drivers so well is coming from.
> > Sure, we require the steps and facade of 3'rd party CI, but dig a bit
> > deeper and you soon find that we're not really testing as much as some
> > might think here.
>
> That's probably true but if we created a 'mitaka-drivers' branch of cinder
> the gate CI would rapidly degenerate to a noop; any unit/functional tests
> would be *entirely* 3rd party.
>
> Yours Tony.
>
> ______
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-08 Thread Duncan Thomas
On 8 August 2016 at 21:12, Matthew Treinish <mtrein...@kortar.org> wrote:

> Ignoring all that, this is also contrary to how we perform testing in
> OpenStack.
> We don't turn off entire classes of testing we have so we can land patches,
> that's just a recipe for disaster.
>

But is it more of a disaster (for the consumers) than zero testing, zero
review, scattered around the internet if-you're-lucky-with-a-good-wind
you'll maybe get the right patch set? Because that's where we are right
now, and vendors, distributors and the cinder core team are all saying it's
a disaster.



-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-08 Thread Duncan Thomas
On 8 August 2016 at 18:31, Matthew Treinish <mtrein...@kortar.org> wrote:

>
> This argument comes up at least once a cycle and there is a reason we
> don't do
> this. When we EOL a branch all of the infrastructure for running any ci
> against
> it goes away. This means devstack support, job definitions, tempest skip
> checks,
> etc. Leaving the branch around advertises that you can still submit
> patches to
> it which you can't anymore. As a community we've very clearly said that we
> don't
> land any code without ensuring it passes tests first, and we do not
> maintain any
> of the infrastructure for doing that after an EOL.
>
>
Ok, to turn the question around, we (the cinder team) have recognised a
definite and strong need to have somewhere for vendors to share patches on
versions of Cinder older than the stable branch policy allows.

Given this need, what are our options?

1. We could do all this outside Openstack infrastructure. There are
significant downsides to doing so from organisational, maintenance, cost
etc points of view. Also means that the place vendors go for these patches
is not obvious, and the process for getting patches in is not standard.

2. We could have something not named 'stable' that has looser rules than
stable branches, maybe just pep8 / unit / cinder in-tree tests. No
devstack.

3. We go with the Neutron model and take drivers out of tree. This is not
something the cinder core team are in favour of - we see significant value
in the code review that drivers currently get - the code quality
improvements between when a driver is submitted and when it is merged are
sometimes very significant. Also, taking the code out of tree makes it
difficult to get all the drivers checked out in one place to analyse e.g.
how a certain driver call is implemented across all the drivers, when
reasoning about or making changes to core code.

Given we've identified a clear need, and have repeatedly rejected one
solution (take drivers out of tree - it has been discussed at every summit
and midcycle for 3+ cycles), what positive suggestions can people make?

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Duncan Thomas
+1 from me

Sounds like the best solution to at least part of the problem that was
causing people to want to pull the drivers out of tree

On 6 Aug 2016 18:49, "Philipp Marek"  wrote:

> > I want to propose
> > we officially make a change to our stable policy to call out that
> > drivers bugfixes (NOT new driver features) be allowed at any time.
> Emphatically +1 from me.
>
>
> With the small addendum that "bugfixes" should include compatibility
> changes for libraries used.
>
>
> Thanks for bringing that up!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Duncan Thomas
On 1 August 2016 at 18:14, Adrian Otto  wrote:

> I am struggling to understand why we would want to remove projects from
> our big tent at all, as long as they are being actively developed under the
> principles of "four opens". It seems to me that working to disqualify such
> projects sends an alarming signal to our ecosystem. The reason we made the
> big tent to begin with was to set a tone of inclusion. This whole
> discussion seems like a step backward. What problem are we trying to solve,
> exactly?
>

Any project existing in the big tent sets a significant barrier (policy,
technical, mindshare) to entry for any competing project that might spring
up. The cost of entry as an individual into a single-vendor project is much
higher in general than a diverse one (back-channel communications,
differences in vision, monoculture, commercial pressures, etc), and so
having a non-diverse project in the big tent reduces the possibilities of a
better replacement appearing.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Listing of volume fails while using openstack client

2016-07-28 Thread Duncan Thomas
Looks like either you've got an intermittent network problem or the cinder
api service is restarting. Anything enlightening in the cinder-api log?

On 28 Jul 2016 16:41, "varun bhatnagar"  wrote:

Hello Steve,

Thanks a lot for such a quick response.

Yes the IP is reachable.

ping 10.33.237.104
PING 10.33.237.104 (10.33.237.104) 56(84) bytes of data.
64 bytes from 10.33.237.104: icmp_seq=1 ttl=64 time=0.587 ms
64 bytes from 10.33.237.104: icmp_seq=2 ttl=64 time=0.101 ms
64 bytes from 10.33.237.104: icmp_seq=3 ttl=64 time=0.092 ms
64 bytes from 10.33.237.104: icmp_seq=4 ttl=64 time=0.144 ms
^C
--- 10.33.237.104 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.092/0.231/0.587/0.206 ms
root@cic-1:~#


Endpoints for keystone and cinder are defined as below:

| keystone   | identity   | RegionOne                                                     |
|            |            |   publicURL: http://10.33.237.104:5000/v2.0                   |
|            |            |   internalURL: http://192.168.2.28:5000/v2.0                  |
|            |            |   adminURL: http://192.168.2.28:35357/v2.0                    |

| cinderv2   | volumev2   | RegionOne                                                     |
|            |            |   publicURL: http://10.33.237.104:8776/v2/3cbbffce04d9463e8cb8d3ca6480ed92   |
|            |            |   internalURL: http://192.168.2.28:8776/v2/3cbbffce04d9463e8cb8d3ca6480ed92  |
|            |            |   adminURL: http://192.168.2.28:8776/v2/3cbbffce04d9463e8cb8d3ca6480ed92     |


And adding debug gives the below details:

openstack volume list --debug
START with options: ['volume', 'list', '--debug']
options: Namespace(access_token_endpoint='', auth_type='', auth_url='
http://192.168.2.28:5000/v2.0', cacert='', client_id='',
client_secret='***', cloud='', debug=True, default_domain='Default',
deferred_help=False, domain_id='', domain_name='', endpoint='',
identity_provider='', identity_provider_url='', insecure=None,
interface='', log_file=None, os_clustering_api_version='1',
os_compute_api_version='', os_data_processing_api_version='1.1',
os_data_processing_url='', os_dns_api_version='2',
os_identity_api_version='', os_image_api_version='',
os_key_manager_api_version='1', os_network_api_version='',
os_object_api_version='', os_orchestration_api_version='1',
os_project_id=None, os_project_name=None, os_queues_api_version='1.1',
os_volume_api_version='', os_workflow_api_version='2', password='***',
profile=None, project_domain_id='', project_domain_name='', project_id='',
project_name='admin', protocol='', region_name='RegionOne', scope='',
service_provider_endpoint='', timing=False, token='***', trust_id='',
url='', user_domain_id='', user_domain_name='', user_id='',
username='admin', verbose_level=3, verify=None)
defaults: {u'auth_type': 'password', u'compute_api_version': u'2', 'key':
None, u'database_api_version': u'1.0', 'api_timeout': None,
u'baremetal_api_version': u'1', u'image_api_version': u'2', 'cacert': None,
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron',
u'orchestration_api_version': u'1', u'interface': None,
u'network_api_version': u'2', u'image_format': u'qcow2',
u'key_manager_api_version': u'v1', u'metering_api_version': u'2', 'verify':
True, u'identity_api_version': u'2.0', u'volume_api_version': u'2', 'cert':
None, u'secgroup_source': u'neutron', u'container_api_version': u'1',
u'dns_api_version': u'2', u'object_store_api_version': u'1',
u'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'password', u'compute_api_version': u'2',
u'orchestration_api_version': '1', u'database_api_version': u'1.0',
'data_processing_api_version': '1.1', u'network_api_version': u'2',
u'image_format': u'qcow2', u'image_api_version': u'2',
'clustering_api_version': '1', 'verify': True, u'dns_api_version': '2',
u'object_store_api_version': u'1', 'verbose_level': 3, 'region_name':
'RegionOne', 'api_timeout': None, u'baremetal_api_version': u'1',
'queues_api_version': '1.1', 'auth': {'username': 'admin', 'project_name':
'admin', 'password': '***', 'auth_url': 'http://192.168.2.28:5000/v2.0'},
'default_domain': 'Default', u'container_api_version': u'1',
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 'key':
None, 'timing': False, 'cacert': None, u'key_manager_api_version': '1',
u'metering_api_version': u'2', 'deferred_help': False,
u'identity_api_version': u'2.0', 'workflow_api_version': '2',
u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron',
'debug': True, u'interface': None, u'disable_vendor_agent': {}}
compute API version 2, cmd group openstack.compute.v2
network API version 2, cmd group openstack.network.v2
image API version 2, cmd group 

Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-20 Thread Duncan Thomas
On 20 July 2016 at 19:57, James Bottomley <
james.bottom...@hansenpartnership.com> wrote:

>
> OK, I accept your analogy, even though I would view currency as the
> will to create and push patches.
>
> The problem you describe: getting the recipients to listen and accept
> your patches, is also a common one.  The first essential is simple
> minimal patches because they're hard to reject.
>
> Once you've overcome the reject barrier, there's the indifference one
> (no-one says no, but no-one says yes).
>
> [snip]

The trouble with drive-by architecture patches (or large feature patches of
any kind) is that it is often better *not* to merge them if you don't think
the contributor is  going to stick around for a while. This changes are
usually intrusive, and have repercussions that take time to discover. It's
often difficult to keep a change clean when the original author isn't
around to review the follow-on work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]The backend-group concept in Cinder

2016-06-14 Thread Duncan Thomas
As long as each backend has a unique name, you can key the type to a list
of backend names if there's no useful capabilities to key off. No restart
required.
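
Illustrative only (made-up type and backend names, and check the exact
<or> operator syntax against the scheduler docs before relying on it):

    cinder type-create fast-tier
    # match either of two named backends; no capability reporting and no
    # service restart needed, since types are managed through the API
    cinder type-key fast-tier set volume_backend_name='<or> backend1 <or> backend2'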

On 14 June 2016 at 10:16, chen ying <chenying...@outlook.com> wrote:

> Hi John,
>
>
>
> Use case 1:
>   The backends in backend-group-1 have SSD disks and more memory.
> backend-group-1 can provide higher performance to users.
>   The other backends in backend-group-2 have HDD disks and more
> capacity. backend-group-2 can provide more storage space to users.
>
> Not sure, but we sort of do some of this already via the filter
> scheduler.  An Admin can define various types (they may be set up based on
> performance, ssd, spinning-rust etc).  Those types are then given arbitrary
> definitions via a type (again details hidden from end user) and he/she can
> create volumes of a specific type.
>
>
>
> Yes, an Admin can arbitrarily define various types and he/she can create
> volumes of a specific type. But we need to restart our cinder driver after
> defining various types in each driver (if the driver cannot report
> capabilities by itself). It will not be easy to manage for the Admin.
>
> Could the Admin use the concept of dynamically adding/removing backends
> in a backend-group? In this way, we would not need to modify the backend
> configuration file (such as: reporting capabilities of ssd, spinning-rust
> etc). We can arbitrarily define various types for a backend-group, and
> he/she can create volumes of a specific type (from the backend-group type).
>
> So for example I could say "I want these backends with capability XYZ",
> with many backends from different vendors. How does the Administrator
> manage these backends?
>
> Currently:
>
> 1. Admin needs to modify the backend configuration file to let the
> backend report capability XYZ to the filter scheduler.
>
> 2. Restart the volume service to make the capability valid.
>
> 3. Create a volume type (test_type) with capability XYZ.
>
> 4. He/she can create volumes of the specific type (test_type).
>
>    Now we expect:
>
> 1. Admin adds the backend to a backend-group.
>
> 2. Create a volume type (test_type) with capability XYZ, using a
> prefix (group or something else (group: capability = XYZ)) to distinguish
> between the backend capability and the backend-group capability.
>
> 3. He/she can create volumes of the specific type (test_type).
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-13 Thread Duncan Thomas
Hi

I would, once again, love to attend.

If you find that other cores apply and you'd rather have a new face, I
would be very understanding of the situation.

Regards

-- 
Duncan Thomas




On 13 June 2016 at 11:06, Wang, Shane <shane.w...@intel.com> wrote:

> Hi, OpenStackers,
>
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack
> Bug Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi’an, and the 3rd
> was at Chengdu.
>
> We are constructing the etherpad page for registration, and the date will
> be around July 11 (probably July 6 – 8, but to be determined very soon).
>
> The China teams will still focus on Neutron, Nova, Cinder, Heat, Magnum,
> Rally, Ironic, Dragonflow and Watcher, etc. projects, so need developers to
> join and fix as many bugs as possible, and cores to be on site to moderate
> the code changes and merges. Welcome to the bug smash at Hangzhou -
> *http://www.chinahighlights.com/hangzhou/attraction/*
> <http://www.chinahighlights.com/hangzhou/attraction/>.
>
> Good news is still that for the first two cores who are from those above
> projects and respond to this invitation in my email inbox and copy the CC
> list, the sponsors are pleased to sponsor your international travel,
> including flight and hotel. Please simply reply to me.
>
> Best regards,
> --
> China OpenStack Bug Smash Team
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Duncan Thomas
On 24 May 2016 at 02:28, Gregory Haynes <g...@greghaynes.net> wrote:

> On Mon, May 23, 2016, at 05:24 PM, Morgan Fainberg wrote:
>
> I really do not want to "special case" swift. It really doesn't go with
> the spirit of inclusion.
>
>
> I am not sure how inclusion is related to special casing. Inclusion here
> implies that some group is not being accepted in to our community. The
> excluded group here would be one writing software not in python, which is
> the same groups being excluded currently. This is an argument against the
> status quo, not against special casing.
>
>

It excludes any other project that might be possible under the more relaxed
rules. It says 'swift is a special snowflake that the rules don't apply to,
but you are not special enough, go away', which is the very definition of
exclusionary.


> I expect we will see a continued drive for whichever additional languages
> are supported once it's in place/allowed.
>
>
> That is the problem. The social and debugging costs of adding another
> language are a function of how much that other language is used. If one
> component of one project is written in another language then these costs
> should be fairly low. I agree that once we allow another language we should
> expect many projects to begin using it, and IMO most if not all of these
> cases except swift will not be warranted.
>

So designate have come up with a use-case detailed in this thread. Gnocchi
have suggested they might have one. Others quite possibly exist that
haven't even been explored yet because the rules were against it.

One alternative is for swift to split into two layers, a control plane and
a data plane, and for alternative dataplane implementations (e.g. the go
one, maybe something ceph based, etc.) to sit outside of the openstack
umbrella. This is the model nearly every other openstack service has.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-24 Thread Duncan Thomas
On 24 May 2016 at 05:46, John Griffith <john.griffi...@gmail.com> wrote:

>
> ​Just curious about a couple things:  Is this attempting to solve a
> problem in the actual Cinder Volume Service or is this trying to solve
> problems with backends that can't keep up and deliver resources under heavy
> load?
>

I would posit that no backend can cope with infinite load, and with things
like A/A c-vol on the way, cinder is likely to get more efficient to the
point it will start stressing more backends. It is certainly worth thinking
about.

We've more than enough backend technologies that have different but
entirely reasonable metadata performance limitations, and several pieces of
code outside of backend's control (examples: FC zoning, iSCSI multipath)
seem to have clear scalability issues.

I think I share a worry that putting limits everywhere becomes a bandaid
that avoids fixing deeper problems, whether in cinder or on the backends
themselves.


> I get the copy-image to volume, that's a special case that certainly does
> impact Cinder services and the Cinder node itself, but there's already
> throttling going on there, at least in terms of IO allowed.
>

Which is probably not the behaviour we want - queuing generally gives a
better user experience than fair sharing beyond a certain point, since you
get to the point that *nothing* gets completed in a reasonable amount of
time with only moderate loads.

It also seems to be a very common thing for customers to try to boot 300
instances from volume as an early smoke test of a new cloud deployment.
I've no idea why, but I've seen it many times, and others have reported the
same thing. While I'm not entirely convinced it is a reasonable test, we
should probably make sure that the usual behaviour for this is not horrible
breakage. The image cache, if turned on, certainly helps massively with
this, but I think some form of queuing is a good thing for both image cache
work and probably backups too eventually.
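
To make the queuing idea concrete, a toy sketch (not actual Cinder code)
of a per-operation throttle in an eventlet-based service might look like:

    from eventlet import semaphore

    class OperationThrottle(object):
        """Queue heavy operations (e.g. image copies) past a concurrency cap."""

        def __init__(self, max_concurrent=3):
            self._sem = semaphore.Semaphore(max_concurrent)

        def __call__(self, func):
            def wrapper(*args, **kwargs):
                # Callers beyond the cap block here, roughly FIFO, rather
                # than all running at once and thrashing the backend.
                with self._sem:
                    return func(*args, **kwargs)
            return wrapper

    throttle = OperationThrottle(max_concurrent=3)

    @throttle
    def copy_image_to_volume(image_id, volume_id):
        ...  # the expensive data path goes here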


> Also, I'm curious... would the exiting API Rate Limit configuration
> achieve the same sort of thing you want to do here?  Granted it's not
> selective but maybe it's worth mentioning.
>

Certainly worth mentioning, since I'm not sure how many people are aware it
exists. My experiences of it were that it was too limited to be actually
useful (it only rate limits a single process, and we've usually got more
> than enough API workers across multiple nodes that very significant
loads are possible before tripping any reasonable per-process rate limit).



> If we did do something like this I would like to see it implemented as a
> driver config; but that wouldn't help if the problem lies in the Rabbit or
> RPC space.  That brings me back to wondering about exactly where we want to
> solve problems and exactly which.  If delete is causing problems like you
> describe I'd suspect we have an issue in our DB code (too many calls to
> start with) and that we've got some overhead elsewhere that should be
> eradicated.  Delete is a super simple operation on the Cinder side of
> things (and most back ends) so I'm a bit freaked out thinking that it's
> taxing resources heavily.
>

I agree we should definitely do more analysis of where the breakage occurs
before adding many limits or queues. Image copy stuff is an easy to analyse
first case - iostat can tell you exactly where the problem is.

Using the fake backend and a large number of API workers / nodes with a
pathological load trivially finds breakages currently, though it depends
exactly which code version you're running as to where the issues are. The
compare & update changes (aka race avoidance patches) have removed a bunch
of these, but seem to have led to a significant increase in DB load that
means it is easier to get DB timeouts and other issues.

As for delete being resource heavy, our reference driver provides a
pathological example with the secure delete code. Now that we've got a high
degree of confidence in the LVM thin code (specifically, I'm not aware of
any instances where it is worse than the LVM-thick code and I don't see any
open bugs that disagree), is it time to dump the LVM-thick support
completely?


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-24 Thread Duncan Thomas
Cinder bugs list was far more manageable once this had been done.

Is it worth sharing the tool for this? I realise it's fairly trivial to
write one, but some standardisation on the comment format etc seems
valuable, particularly for Q/A folks who work between different projects.
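
To make the ask concrete, this is roughly what I'd expect such a tool to
look like with launchpadlib - a sketch only, presumably not Markus's
actual script, with the comment text abbreviated:

    from datetime import datetime, timedelta, timezone

    from launchpadlib.launchpad import Launchpad

    EXPIRY_COMMENT = ("This is an automated cleanup. This bug report got "
                      "closed because it is older than 18 months and there "
                      "is no open code change to fix it.")

    lp = Launchpad.login_with('bug-expirer', 'production')
    cutoff = datetime.now(timezone.utc) - timedelta(days=18 * 30)

    tasks = lp.projects['nova'].searchTasks(
        status=['New', 'Confirmed', 'Triaged'])
    for task in tasks:
        # skip anything assigned or newer than the cutoff
        if task.assignee is not None or task.date_created > cutoff:
            continue
        bug = task.bug
        # honour the "CONFIRMED FOR: <release>" escape hatch
        if any('CONFIRMED FOR:' in (m.content or '') for m in bug.messages):
            continue
        task.status = "Won't Fix"
        task.importance = 'Undecided'
        task.lp_save()
        bug.newMessage(content=EXPIRY_COMMENT)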

On 23 May 2016 at 14:02, Markus Zoeller <mzoel...@linux.vnet.ibm.com> wrote:

> TL;DR: Automatic closing of 185 bug reports which are older than 18
> months in the week R-13. Skipping specific bug reports is possible. A
> bug report comment explains the reasons.
>
>
> I'd like to get rid of more clutter in our bug list to make it more
> comprehensible by a human being. For this, I'm targeting our ~185 bug
> reports which were reported 18 months ago and still aren't in progress.
> That's around 37% of open bug reports which aren't in progress. This
> post is about *how* and *when* I do it. If you have very strong reasons
> to *not* do it, let me hear them.
>
> When
> 
> I plan to do it in the week after the non-priority feature freeze.
> That's week R-13, at the beginning of July. Until this date you can
> comment on bug reports so they get spared from this cleanup (see below).
> Beginning from R-13 until R-5 (Newton-3 milestone), we should have
> enough time to gain some overview of the rest.
>
> I also think it makes sense to make this a repeated effort, maybe after
> each milestone/release or monthly or daily.
>
> How
> ---
> The bug reports which will be affected are:
> * in status: [new, confirmed, triaged]
> * AND without assignee
> * AND created at: > 18 months
> A preview of them can be found at [1].
>
> You can spare bug reports if you leave a comment there which says
> one of these (case-sensitive flags):
> * CONFIRMED FOR: NEWTON
> * CONFIRMED FOR: MITAKA
> * CONFIRMED FOR: LIBERTY
>
> The expired bug report will have:
> * status: won't fix
> * assignee: none
> * importance: undecided
> * a new comment which explains *why* this was done
>
> The comment the expired bug reports will get:
> This is an automated cleanup. This bug report got closed because
> it is older than 18 months and there is no open code change to
> fix this. After this time it is unlikely that the circumstances
> which lead to the observed issue can be reproduced.
> If you can reproduce it, please:
> * reopen the bug report
> * AND leave a comment "CONFIRMED FOR: "
>   Only still supported release names are valid.
>   valid example: CONFIRMED FOR: LIBERTY
>   invalid example: CONFIRMED FOR: KILO
> * AND add the steps to reproduce the issue (if applicable)
>
>
> Let me know if you think this comment gives enough information how to
> handle this situation.
>
>
> References:
> [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
>
> --
> Regards, Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-25 Thread Duncan Thomas
On 23 Apr 2016 14:26, "Jay Pipes"  wrote:

> On 04/23/2016 03:18 PM, Mike Perez wrote:

>> How about extending a volume? A volume is a resource and can be extended
in
>> Cinder today.
>
>
> Yep, understood :) I recognize some resource amounts can be modified for
some resource classes. How about *shrinking* a volume. Is that supported?

Currently, no, but saying we can never implement it because the quota
library will never support it is the tail wagging the dog in a big way.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] API features discoverability

2016-04-20 Thread Duncan Thomas
On 19 April 2016 at 23:42, Michał Dulko  wrote:

> On 04/18/2016 09:17 AM, Ramakrishna, Deepti wrote:
> > Hi Michal,
> >
> > This seemed like a good idea when I first read it. What's more, the server
> code for extension listing [1] does not do any authorization, so it can be
> used for any logged in user.
> >
> > However, I don't know if requiring the admin to manually disable an
> extension is practical. First, admins can always forget to do that. Second,
> even if they wanted to, it is not clear how they could disable specific
> extensions. I assume they would need to edit the cinder.conf file. This
> file currently lists the set of extensions to load as
> cinder.api.contrib.standard_extensions. The server code [2] implements this
> by walking the cinder/api/contrib directory and loading all discovered
> extensions. How is it possible to subtract just one extension from the
> "standard extensions"? Also, system capabilities and extensions may not
> have a 1:1 relationship in general.
>
> Good point, to make that a standard for Cinder API feature discovery we
> would still need to make that more admin-friendly. This also implies
> that probably no admin is actually caring about setting the set of
> extensions correctly.
>

Certainly not *no* admins - the HP public cloud disabled a bunch of extensions
on the public endpoint for example - but it isn't something we can rely on.
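
For reference, checking what extensions a given endpoint actually exposes
is easy enough from the client side (assuming a reasonably recent
python-cinderclient):

    # list the API extensions the endpoint advertises
    cinder list-extensions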


> > Having a new extension API (as proposed by me in [3]) for returning the
> available services/functionality does not have the above problems. It will
> dynamically check the existence of the cinder-backup service, so it does
> not need manual action from admin. I have published a BP [4] related to
> this. Can you please comment on that?
>
> Yes, but I don't think you can run away from setting things manually.
> For example CGs are supported only for certain backends. This set of
> features should also be discoverable. Anyway I think the spec makes sense.
>

Volume type feature discovery is different from (but related to) API feature
discovery.

This is unfortunately going against the recent efforts of standardizing
> how OpenStack works between deployments. In Cinder we have API features
> that may or may not be available in different installations. This
> certainly isn't addressed by microversions efforts, which may seem
> related. My feeling is that this goes beyond Cinder and hits a more
> general topic of API discoverability. I think that we should seek the
> API WG advice in that matter. Do we have other OpenStack project
> suffering from similar issue?
>
>
It's a nice aim to have clouds be entirely consistent, but then you're left
with the lowest common denominator. Replication and CG support in cinder
are both valuable to a subset of users, and extremely difficult to make
universal (I'm still hoping somebody can tell me why CGs at the hypervisor
are impossible to get right FWIW). Neutron is likely to be the largest
example of differentiated features, and manila has some too.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-20 Thread Duncan Thomas
On 20 April 2016 at 08:08, koshiya maho <koshiya.m...@po.ntts.co.jp> wrote:


> This design was discussed, reviewed and approved in cross-projects [1] and
> already implemented in nova, cinder and neutron.
> At this point if we change the implementation then it will not be
> consistent across core OpenStack projects.
> For maintenance of the whole of OpenStack, I think that the present method
> is best.
> Please suggest.
>

The fact that a cross-project spec is approved doesn't mean that it will
end up being practical. If the cinder-client implementation had been found
to break any non-trivial users then I wouldn't have hesitated.

Cross project specs are not getting massive amounts of detailed attention
from project teams, and even if they were it is not possible to foresee all
subtle problems at review time - they should be taken as guidance, not
gospel, and expected to be reworked if it proves necessary.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Duncan Thomas
HP facility just outside Dublin (Ireland) is available again, depending on
dates

On 12 April 2016 at 17:05, Sean McGinnis <sean.mcgin...@gmx.com> wrote:

> Hey Cinder team (and those interested),
>
> We've had a few informal conversations on the channel and in meetings,
> but wanted to capture some things here and spread awareness.
>
> I think it would be good to start planning for our Newton midcycle.
> These have been incredibly productive in the past (at least in my
> opinion) so I'd like to get it on the schedule so folks can start
> planning for it.
>
> For Mitaka we held our midcycle in the R-10 week. That seemed to work
> out pretty well, but I also think it might be useful to hold it a little
> earlier in the cycle to keep some momentum going and make sure things
> stay pretty focused for the rest of the cycle.
>
> For reference, here is the current release schedule for Newton:
>
> http://releases.openstack.org/newton/schedule.html
>
> R-10 puts us in the last week of July.
>
> I would have a conflict R-16, R-15. We probably want to avoid US
> Independence Day in R-13, and milestone weeks R-18 and R-12.
>
> So potential weeks look like:
>
> * R-17
> * R-14
> * R-11
> * R-10
> * R-9
>
> Nova is in the process of figuring out their date. If we have that, it
> would be good to try to avoid an overlap there. Our linked midcycle
> session worked out well, but probably better if they don't conflict.
>
> We also need to work out locations. Anyone able and willing to host,
> just let me know. We need a facility with wifi, able to hold ~30-40
> people, wifi, close to an airport. And wifi.
>
> At some point I still think it would be nice for our international folks
> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
> Collins or somewhere similar.
>
> Thanks!
>
> Sean (smcginnis)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas


Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-11 Thread Duncan Thomas
Ok, you're right about device naming by UUID.

So we have two advantages compared to the existing system:

- Keeping the same volume id (and therefore disk UUID) makes reverting a VM
much easier since device names inside the instance stay the same
- Can significantly reduce the amount of copying required on some backends

These do seem like solid reasons to consider the feature.

If you can solve the backwards compatibility problem mentioned further up
this thread, then I think there's a strong case for considering adding this
API.

The next step is a spec and a PoC implementation.
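For concreteness, the sort of shape I'd expect such an API to take - to be
clear, this is a sketch for discussion only; the 'revert' action name and
body below are invented, not an agreed design:

import json
import requests

def revert_to_snapshot(endpoint, token, volume_id, snapshot_id):
    # POST /volumes/{id}/action is the usual cinder action URL; the
    # 'revert' action itself is the hypothetical part of this sketch.
    body = {"revert": {"snapshot_id": snapshot_id}}
    resp = requests.post(
        "%s/volumes/%s/action" % (endpoint, volume_id),
        headers={"X-Auth-Token": token,
                 "Content-Type": "application/json"},
        data=json.dumps(body))
    resp.raise_for_status()

As discussed above, the volume would need to be detached (or at least
unmounted) before such a call, and it keeps its id and disk UUID afterwards.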



On 11 April 2016 at 20:57, Erlon Cruz <sombra...@gmail.com> wrote:

> You are right, the instance should be shut down or the device unmounted
> before 'revert' or removing the old device. That should be enough to avoid
> corruption. I think the device naming is not a problem if you use the same
> volume (at least the disk UUID will be the same).
>
> On Mon, Apr 11, 2016 at 2:39 PM, Duncan Thomas <duncan.tho...@gmail.com>
> wrote:
>
>> You can't just change the contents of a volume under the instance though
>> - at the very least you need to do an unmount in the instance, and a detach
>> is preferable, otherwise you've got data corruption issues.
>>
>> At that point, the device naming problems are identical.
>>
>> On 11 April 2016 at 20:22, Erlon Cruz <sombra...@gmail.com> wrote:
>>
>>> The actual user workflow is:
>>>
>>>  1 - User creates a volume(s)
>>>  2 - User attach volume to instance
>>>  3 - User creates a snapshot
>>>  4 - Something happens causing the need of a revert
>>>  5 - User creates a volume(s) from the snapshot(s)
>>>  6 - User detach old volumes
>>>  7 - User attach new volumes (and pray they get the same id) - Nova
>>> should have the ability to honor supplied device names (vdc, vdd, etc),
>>> which does not always happen [1]. But does the volume keep the same UUID
>>> in the system? Several applications use that to boot.
>>>
>>> The suggested workflow would be simpler for a user POV:
>>>
>>>  1 - User creates a volume(s)
>>>  2 - User attach volume to instance
>>>  3 - User creates a snapshot
>>>  4 - Something happens causing the need of a revert
>>>  5 - User revert snapshot(s)
>>>
>>>
>>>  [1] https://goo.gl/Kusfne
>>>
>>> On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny <e...@e0ne.info> wrote:
>>>
>>>> Hi Chenzongliang,
>>>>
>>>> I still don't understand what the difference is between the proposed
>>>> feature and 'restore volume from snapshot'? Could you please explain it?
>>>>
>>>> Regards,
>>>> Ivan Kolodyazhny,
>>>> http://blog.e0ne.info/
>>>>
>>>> On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang <
>>>> chenzongli...@huawei.com> wrote:
>>>>
>>>>> Dear Cruz:
>>>>>
>>>>>
>>>>>
>>>>>  Thanks for your kind support, I will review the previous spec
>>>>> according to the following links. Maybe there are more user scenarios we
>>>>> should consider, such as backup, create volume from snapshot, consistency
>>>>> group, etc. We will spend some time to gather the user's scenarios and
>>>>> determine what to do as a next step.
>>>>>
>>>>>
>>>>>
>>>>> Sincerely,
>>>>>
>>>>> zongliang chen
>>>>>
>>>>>
>>>>>
>>>>> *From:* Erlon Cruz [mailto:sombra...@gmail.com]
>>>>> *Sent:* 5 April 2016 2:50
>>>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>>>> *Cc:* Zhangli (ISSP); Shenhong (C)
>>>>> *Subject:* Re: [openstack-dev] [Cinder] About snapshot Rollback?
>>>>>
>>>>>
>>>>>
>>>>> Hi Chen,
>>>>>
>>>>>
>>>>>
>>>>> Not sure if I got you right but I brought this topic in
>>>>> #openstack-cinder some days ago. The idea is to be able to rollback a
>>>>> snapshot in Cinder. Today what is possible to do is to create a volume 
>>>>> from
>>>>> a snapshot. From the user point of view, this is not ideal, as there are
>>>>> several cases, if not the majority of, that the purpose of the snapshot is
>>>>> to revert to a desired state, and not keep the original volume. For 

Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-11 Thread Duncan Thomas
...whether it can be rolled back?
>>>
>>>
>>>
>>>I want to know whether the topic have been discussed or have other
>>> recommendations to us?
>>>
>>>
>>>
>>>Thanks
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas


Re: [openstack-dev] [neutron] [API]Make API errors conform to the common error message without microversion

2016-04-11 Thread Duncan Thomas
So by adding the handling of a header to change the behaviour of the API,
you're basically implementing a subset of microversions, with a
non-standard header (See the API WG spec on non-proliferation of headers).
You'll find it takes much of the work that implementing microversions does,
and explodes your API test matrix some more.

Sounds like something that should go on hold until microversions are done,
assuming that microversions are desired anyway. Standard error messages are
not such a big win that they're worth non-standard headers and yet more API
weirdness that needs to sit around, potentially for a very long time (see
the API WG rules on removing APIs, which is basically never).

On 8 April 2016 at 11:23, Xie, Xianshan <xi...@cn.fujitsu.com> wrote:

> Hi, all,
>
> We are attempting to make the neutron API conform to the common error
> message format recommended by API-WG [1]. As this change will introduce a
> new error format into neutron which is different from the existing format [2],
> we should think of some solutions to preserve backward compat.
>
> The easiest way to do that is a microversion, just like cinder does [3],
> although that is still in progress. But unfortunately, there are many
> projects in which microversions haven't landed yet, e.g. neutron,
> glance, keystone etc. Thus during the interim period we have to find other
> approaches to keep the backward compat.
>
> According to the discussion, a new header would be a good idea to resolve
> this issue [4], we think.
> For instance:
> curl -X DELETE "http://xxx:9696/v2.0/networks/xxx; -H
> "Neutron-Common-Error-Format: True"
>
> But we haven't decided which header name will be used yet.
> So how do you think which is the best appropriate one?
> A: Neutron-Common-Error-Format
> B: OpenStack-Neutron-Common-Error-Format
> C: other (Could you please specify it? Thanks in advance)
>
> Any comments would be appreciated.
>
> [1] http://specs.openstack.org/openstack/api-wg/guidelines/errors.html
> [2] https://review.openstack.org/#/c/113570/
> [3] https://review.openstack.org/#/c/293306/
> [4] https://bugs.launchpad.net/neutron/+bug/1554869
>
> Best regards,
> xiexs
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-04-05 Thread Duncan Thomas
What about commands that become ambiguous in the future? I doubt there are
many operations or objects that are unique to Cinder - backup, snapshot,
transfer, group, type - these are all very generic, and even if they
aren't ambiguous now, they might well become so in future...

On 5 April 2016 at 17:15, Jay Bryant <jsbry...@electronicjungle.net> wrote:

> All,
>
> Just to document the discussion we had during the OSC IRC meeting last
> week: I believe the consensus we reached was that it wasn't appropriate to
> pretend "volume" before all Cinder commands but that it would be
> appropriate to move in that direction to for any commands that may be
> ambiguous like "snapshot". The cinder core development team will start
> working with the OSC development teams to address such commands and move
> them to more user friendly commands and as we move forward we will work to
> avoid such confusion in the future.
>
> Jay
>
> On Mon, Mar 28, 2016 at 1:15 PM Dean Troyer <dtro...@gmail.com> wrote:
>
>> On Sun, Mar 27, 2016 at 6:11 PM, Mike Perez <thin...@gmail.com> wrote:
>>
>>> On 00:40 Mar 28, Jordan Pittier wrote:
>>> > I am going to play the devil's advocate here but why can't
>>> > python-openstackclient have its own opinion on the matter? This CLI seems
>>> > to be for humans and humans love names/labels/tags and find UUIDs hard to
>>> > remember. Advanced users who want anonymous volumes can always hit the API
>>> > directly with curl or whatever SDK.
>>>
>>> I suppose it could, however, names are not unique.
>>>
>>
>> Names are not unique in much of OpenStack.  When ambiguity exists, we
>> exit with an error.
>>
>> Also, this works to produce a volume with no name should you absolutely
>> require it:
>>
>> openstack volume create --size 10 ""
>>
>>
>> dt
>> --
>>
>> Dean Troyer
>> dtro...@gmail.com
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas


Re: [openstack-dev] [nova][cinder] Fix nova swap volume (updating an attached volume) function

2016-03-31 Thread Duncan Thomas
> >>>> Ideally we would have implemented this like the nova/neutron server
> >>>> events callback API in Nova during vif plugging (nova does the vif
> plug
> >>>> on the host then waits for neutron to update it's database for the
> port
> >>>> status and sends an event (API call) to nova to continue booting the
> >>>> server). That server events API in nova is admin-only by default and
> >>>> neutron is configured with admin credentials for nova to use it.
> >>>>
> >>>> Another option would be for Nova to handle a 403 response when calling
> >>>> Cinder's migrate_volume_completion API and ignore it if we don't have
> an
> >>>> admin context. This is pretty hacky though. It assumes that it's a
> >>>> non-admin user initiating the swap-volume operation. It wouldn't be a
> >>>> problem for the volume migration operation initiated in Cinder since
> by
> >>>> default that's admin-only, so nova shouldn't get a 403 when calling
> >>>> migrate_volume_completion. The trap would be if the cinder policy for
> >>>> volume migration was changed to allow non-admins, but if someone did
> >>>> that, they should also change the policy for migrate_volume_completion
> >>>> to allow non-admin too.
> >>>>
> >>>>>
> >>>>> If you have a good idea, please let me know it.
> >>>>>
> >>>>> [1] Cinder volumes are stuck when non admin user executes nova swap
> >>>>> volume API
> >>>>>  https://bugs.launchpad.net/cinder/+bug/1522705
> >>>>>
> >>>>> [2] Cinder volume stuck in swap_volume
> >>>>>  https://bugs.launchpad.net/nova/+bug/1471098
> >>>>>
> >>>>> [3] Fix cinder volume stuck in swap_volume
> >>>>>  https://review.openstack.org/#/c/207385/
> >>>>>
> >>>>> [4] Fix swap_volume for case without migration
> >>>>>  https://review.openstack.org/#/c/247767/
> >>>>>
> >>>>> [5] Enable volume owners to execute migrate_volume_completion
> >>>>>  https://review.openstack.org/#/c/253363/
> >>>>>
> >>>>> Regards,
> >>>>> Takashi Natsume
> >>>>> NTT Software Innovation Center
> >>>>> E-mail: natsume.taka...@lab.ntt.co.jp
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> __
> >>>>>
> >>>>>
> >>>>>
> >>>>> OpenStack Development Mailing List (not for usage questions)
> >>>>> Unsubscribe:
> >>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>>
> >>>>
> >>>
> >>> I also just checked Tempest and apparently we have no coverage for the
> >>> swap-volume API in Nova, we should fix that as part of this.
> >>>
> >>
> >> I've done some more digging. The swap-volume functionality was added to
> >> nova here [1].  The cinder use of it for volume migration was added here
> >> [2].
> >>
> >> Looking at the cinder volume API for migrate_volume_completion, it
> >> expects the source (old) volume to have migration_status set [3].
> >>
> >> So, I think we can easily fix this in Nova by simply not calling
> >> volume_migration_completion if old_volume['migration_status'] is None.
> >>
> >> [1]
> >>
> >>
> https://github.com/openstack/nova/commit/8f51b120b430c7c21399256f37e1d8f75d030484
> >>
> >> [2]
> >>
> >>
> https://github.com/openstack/nova/commit/0e4bd7f93b9bfadcc2bb6dfaeae7bb5ee00c194b
> >>
> >> [3]
> >>
> >>
> http://git.openstack.org/cgit/openstack/cinder/tree/cinder/volume/api.py#n1358
> >>
> >>
> >
> > Of course that had to be too easy to be true. The volume
> 'migration_status'
> > is only returned for the volume details if you're calling with an admin
> > context [1].
> >
> > I think we can still use this, we just can't expect
> > volume['migration_status'] to be in the response from the volume GET. If
> > it's not there, we can assume we're not doing a migration and we're not
> an
> > admin anyway, so we can't call migrate_volume_completion.
>
> So in that case, we need to attach/detach the volume on the nova side, right?
> I mean, if migrate_volume_completion is not being called, then the
> new volume attachment and old volume detachment should be initiated
> explicitly.
>
> Can we make the Nova default policy the same as cinder's, i.e. swap volume
> allowed only for admin? Because if there is a simple swap initiated from
> nova (not a cinder migration), Nova allows that operation for a non-admin
> user and it gets stuck in attaching/detaching status.
>
> >
> > [1]
> >
> http://git.openstack.org/cgit/openstack/cinder/tree/cinder/api/v2/views/volumes.py#n82-L84
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas


Re: [openstack-dev] [Cinder] Mitaka RC2 available

2016-03-29 Thread Duncan Thomas
That patch is now approved and in the process of merging; once it is
merged, you can propose a backport - if it doesn't make the release, it
will at least be one the stable tree.

On 29 March 2016 at 16:18, Gyorgy Szombathelyi <
gyorgy.szombathe...@doclerholding.com> wrote:

> Hi Thierry,
>
> If a new tarball will necessary, is it possible to get this included, too?
> https://review.openstack.org/#/c/272437/
>
> Seems this is not critical enough, but it's a straightforward fix.
>
> Br,
> Gyorgy
>
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: 29 March 2016 3:03 PM
> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>;
> openst...@lists.openstack.org
> Subject: [openstack-dev] [Cinder] Mitaka RC2 available
>
> Due to release-critical issues spotted in Cinder during RC1 testing, a new
> release candidate was created for Mitaka. You can find the RC2 source code
> tarballs at:
>
> https://tarballs.openstack.org/cinder/cinder-8.0.0.0rc2.tar.gz
>
> Unless new release-critical issues are found that warrant a last-minute
> release candidate respin, this tarball will be formally released as the
> final "Mitaka" version on April 7th. You are therefore strongly encouraged
> to test and validate this tarball !
>
> Alternatively, you can directly test the mitaka release branches at:
>
> http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/mitaka
>
> If you find an issue that could be considered release-critical, please
> file it at:
>
> https://bugs.launchpad.net/cinder/+filebug
>
> and tag it *mitaka-rc-potential* to bring it to the Cinder release crew's
> attention.
>
> --
> Thierry Carrez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-03-28 Thread Duncan Thomas
Because it leads to false assumptions, and code that breaks when something
breaks those assumptions (e.g. somebody creates a volume with no name in
Horizon and breaks all the users of openstackclient that expect one,
because their client suggested it was mandatory).

On 28 March 2016 at 01:40, Jordan Pittier <jordan.pitt...@scality.com>
wrote:

> I am going to play the devil's advocate here but why can't
> python-openstackclient have its own opinion on the matter? This CLI seems
> to be for humans and humans love names/labels/tags and find UUIDs hard to
> remember. Advanced users who want anonymous volumes can always hit the API
> directly with curl or whatever SDK.
>
> On Sun, Mar 27, 2016 at 4:44 PM, Duncan Thomas <duncan.tho...@gmail.com>
> wrote:
>
>> I think it is worth fixing the client to actually match the API, yes. The
>> client seems to be determined not to actually match the API in lots of
>> ways, e.g. https://bugs.launchpad.net/python-openstackclient/+bug/1561666
>>
>> On 24 March 2016 at 19:08, Ivan Kolodyazhny <e...@e0ne.info> wrote:
>>
>>> Hi team,
>>>
>>> From the Cinder point of view, both volumes, snapshots and backups APIs
>>> do not require name param. But python-openstackclient requires name param
>>> for these entities.
>>>
>>> I'm going to fix this inconsistency with patch [1]. Unfortunately, it's
>>> a bit more than changing required params to not required. We have to change
>>> CLI signatures. E.g. for creating a volume: from [2] to [3].
>>>
>>> Is it acceptable? What is the right way to do such changes for OpenStack
>>> Client?
>>>
>>>
>>> [1] https://review.openstack.org/#/c/294146/
>>> [2] http://paste.openstack.org/show/491771/
>>> [3] http://paste.openstack.org/show/491772/
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>
>>>
>>> ______
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> --
>> Duncan Thomas
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas


Re: [openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-03-27 Thread Duncan Thomas
I think it is worth fixing the client to actually match the API, yes. The
client seems to be determined not to actually match the API in lots of
ways, e.g. https://bugs.launchpad.net/python-openstackclient/+bug/1561666

On 24 March 2016 at 19:08, Ivan Kolodyazhny <e...@e0ne.info> wrote:

> Hi team,
>
> From the Cinder point of view, both volumes, snapshots and backups APIs do
> not require name param. But python-openstackclient requires name param for
> these entities.
>
> I'm going to fix this inconsistency with patch [1]. Unfortunately, it's a
> bit more than changing required params to not required. We have to change
> CLI signatures. E.g. for creating a volume: from [2] to [3].
>
> Is it acceptable? What is the right way to do such changes for OpenStack
> Client?
>
>
> [1] https://review.openstack.org/#/c/294146/
> [2] http://paste.openstack.org/show/491771/
> [3] http://paste.openstack.org/show/491772/
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Duncan Thomas
On 16 March 2016 at 09:15, Tim Bell <tim.b...@cern.ch> wrote:

Then, there were major reservations from the PTLs at the impacts in terms of
> latency, ability to reconcile and loss of control (transactions are
> difficult, transactions
> across services more so).
>
>
Not just PTLs :-)


> 
> I would favor a library, at least initially. If we cannot agree on a
> library, it
> is unlikely that we can get a service adopted (even if it is desirable).
>
> A library (along the lines of 1 or 2 above) would allow consistent
> implementation
> of nested quotas and user quotas. Nested quotas is currently only
> implemented
> in Cinder and user quota implementations vary between projects which is
> confusing.


It is worth noting that the cinder implementation has been found rather
lacking in correctness, atomicity requirements and testing - I wouldn't
suggest taking it as anything other than a PoC to be honest. Certainly it
should not be cargo-culted into another project in its present state.
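To make the atomicity point concrete: the reserve step needs to be a single
guarded UPDATE, not a read-modify-write in Python. A minimal sketch of the
idea, assuming a quota_usage table with in_use and hard_limit columns (the
table and column names are invented for illustration):

import sqlalchemy as sa

def reserve(engine, project_id, resource, amount):
    # The WHERE clause makes check-and-increment atomic, so two
    # concurrent reservations can't both squeeze under the limit.
    stmt = sa.text(
        "UPDATE quota_usage SET in_use = in_use + :amt "
        "WHERE project_id = :proj AND resource = :res "
        "AND in_use + :amt <= hard_limit")
    with engine.begin() as conn:
        result = conn.execute(stmt, {"amt": amount,
                                     "proj": project_id,
                                     "res": resource})
    if result.rowcount != 1:
        raise Exception("quota exceeded for %s" % resource)

Getting that right within one service is the easy part; it's the nested and
cross-service reconciliation where the current code falls down.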

-- 
Duncan Thomas


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-07 Thread Duncan Thomas
On 7 March 2016 at 23:45, Eric Harney <ehar...@redhat.com> wrote:

>
>
> I'm not really sure that writing a "hacking" check for this is a
> worthwhile investment.  (It's not a hacking check really, but something
> more like what you're describing, but that's beside the point.)
>
> We should just be looking for large, complex unit tests in review, and
> the ones that we already have should be moving towards the functional
> test area anyway.
>
> So what would the objective here be exactly?
>
>
Complexity can be tricky to spot by hand, and expecting reviewers to get it
right all of the time is not reasonable.

My ideal would be something that processes the commit and the Jenkins logs,
extracts the timing info of any new tests, and, if they are outside some
(fairly tight) window, posts a comment to the review indicating that
these tests should get closer scrutiny. This does not remove reviewer
judgement from the equation, just provides a helpful prod that there's
something to be considered.
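As a rough sketch of the sort of helper I mean - the log line format and
the two-second threshold here are assumptions for illustration, not an
existing gate job:

import re
import sys

THRESHOLD = 2.0  # seconds; anything slower gets flagged for review

# Assumes testr/subunit-style console output such as:
#   cinder.tests.unit.test_foo.FooTest.test_bar ... ok 4.312s
LINE = re.compile(r'^(\S+) \.\.\. ok (\d+\.\d+)s$')

def slow_tests(log_path, new_test_names):
    with open(log_path) as log:
        for raw in log:
            m = LINE.match(raw.strip())
            if m and m.group(1) in new_test_names:
                if float(m.group(2)) > THRESHOLD:
                    yield m.group(1), float(m.group(2))

if __name__ == '__main__':
    # argv[1]: console log; argv[2]: file listing tests added by the patch
    new = set(line.strip() for line in open(sys.argv[2]))
    for name, secs in slow_tests(sys.argv[1], new):
        print('%s: %.1fs - worth a closer look' % (name, secs))

The output would then get posted back to the review as a comment, the same
way other CI feedback is.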

-- 
Duncan Thomas


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-24 Thread Duncan Thomas
On 22 February 2016 at 23:11, Devananda van der Veen <
devananda@gmail.com> wrote:

> On 02/22/2016 09:45 AM, Thierry Carrez wrote:
>

> The split should ideally reduce the needs to organize separate in-person
> mid-cycle events. If some are still needed, the main conference venue and
> time could easily be used to provide space for such midcycle events (given
> that it would end up happening in the middle of the cycle).
>
> If this "extra midcycle" is sanctioned by the foundation, even if
> optional, I'm concerned that it would grow until developer attendance is
> expected again. That said, I can appreciate the need to keep the option
> open for now. Perhaps if the Conference organizers include a hack-space and
> allow developers to self-organize if present, it will avoid the draw that a
> formal midcycle has?
>

The cinder mid-cycle is, by far, our most productive part of the cycle, and
this has been true for two cycles now. The midcycle is already much, much
cheaper to attend than the foundation events, and I think the bulk of the
productivity comes from the fact that all of the people are totally focused
on one thing - there's no running out to do something else, little
scheduling around other commitments (except remote attendees) and plenty of
opportunity to circle back around to things. The massive progress we made
at the Cinder midcycle only happened because we came back to it I think
four times - the kind of thinking and communication needed often doesn't
fit into slots and sessions, and really needs the bandwidth of face to face
time.

I now think the cinder mid-cycle is more valuable for many devs, and much
cheaper, than the foundation events. I don't see that this proposal will
actually increase that value at all.

-- 
Duncan Thomas


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-22 Thread Duncan Thomas
On 22 February 2016 at 06:40, Thomas Goirand <z...@debian.org> wrote:

>
> I'd vote for the extra round trip and implementation of caching whenever
> possible. Using another endpoint is really annoying, I already have
> specific stuff for cinder to set up both v1 and v2 endpoints, as v2
> doesn't fully implement what's in v1. BTW, where are we with this? Can
> I fully get rid of the v1 endpoint, or will I still experience some
> Tempest failures?
>


Can you detail what isn't in V2 that is in V1 please? I'm not aware of
anything, and I'd consider anything missing to be a serious bug

-- 
Duncan Thomas


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-21 Thread Duncan Thomas
On 21 February 2016 at 19:34, Jay S. Bryant <jsbry...@electronicjungle.net>
wrote:

> Spent some time talking to Sean about this on Friday afternoon and bounced
> back and forth between the two options.  At first, /v3 made the most sense
> to me ... at least it did at the meetup.  With people like Sean Dague and
> Morgan Fainberg weighing in with concerns, it seems like we should
> reconsider.  Duncan, your comment here about customers moving when they are
> ready is somewhat correct.  That, however, I am concerned is a small
> subset of the users.  I think many users want to move but don't know any
> better.  That was what we encountered with our consumers.  They didn't
> understand that they needed to update the endpoint and couldn't figure out
> why their new functions weren't working.
>
> So, I am leaning towards going with the /v2 endpoint and making sure that
> the clients we can control are set up properly and we put safety checks in
> the server end.  I think that may be the safest way to go.
>

So we can't get users to change endpoints, or write our libraries to have
sensible defaults, but we're somehow going to magically get consumers to do
the much harder job of doing version probes in their code/libraries so that
they don't get surprised by unexpected results? This seems to be entirely
nuts. If 'they' can't change endpoints (and we can't make the libraries we
write just do the right thing without needing to change endpoints) then how
are 'they' expected to do the probing magic that will be required at some
unpredictable point in the future, but which you'll get away without until
then?

This would also make us inconsistent with the other projects that have
implemented microversions - so we're changing a known working pattern, to
try to avoid the problem of a user having to get their settings right if
they want new functionality, and hoping this doesn't introduce entirely
predictable and foreseeable bugs in the future that can't actually be fixed
except by checking/changing every client library out there? There's no way
that's a sensible API design.
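For anyone unclear what that probing looks like in practice, it is roughly
this in every consumer (the field names follow the nova microversion
pattern; cinder's exact version document was still being settled):

import requests

def _vt(version):
    return tuple(int(p) for p in version.split('.'))

def negotiate(endpoint, wanted):
    # GET on the versioned root returns the supported microversion
    # range; the client must check it before using a newer feature.
    doc = requests.get(endpoint).json()['version']
    low, high = doc.get('min_version'), doc.get('version')
    if low and high and _vt(low) <= _vt(wanted) <= _vt(high):
        return wanted
    return None  # caller falls back to base behaviour

With a separate /v3 endpoint, a client that wants none of this just stays
on /v2 and never has to run that code.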


--
Duncan Thomas


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-20 Thread Duncan Thomas
On 20 Feb 2016 00:21, "Walter A. Boring IV"  wrote:

> Not that I'm adding much to this conversation that hasn't been said
already, but I am pro v2 API, purely because of how painful and long it's
been to get the official OpenStack projects to adopt the v2 API from v1.

I think there's a slightly different argument here. We aren't taking away
the v2 API, probably ever. Clients that are satisfied with it can continue
to use it, as it is, forever. Clients that aren't trying to do anything
beyond the current basics will quite possibly be happy with that. Consumers
have no reason to change over without compelling value from the change -
that will come from what we implement on top of microversions, or not.
Unlike the v1 transition, we aren't trying to get rid of v2, just to stop
changing the existing semantics of it.


Re: [openstack-dev] [cinder] v2 image upload from url

2016-02-06 Thread Duncan Thomas
Sorry, can you give an example of the exact command you are using, please?
On 5 Feb 2016 22:45, "Fox, Kevin M"  wrote:

> We've been using the upload image from http url for a long time and when
> we upgraded to liberty we noticed it broke because the client's defaulting
> to v2 now. How do you do image upload via http with v2? Is there a
> different command/method?
>
> Thanks,
> Kevin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-06 Thread Duncan Thomas
Actually, keeping track of changed blocks on cinder volumes would make the
cinder incremental backup substantially more efficient... Something could
push them into cinder at detach time, with an API call for cinder to pull
them at live backup time, and cinder backup can do the rest... Not sure of
the non-cinder bits of the architecture, but certainly an interesting
idea. In the event something goes wrong, cinder can assume the whole device
has changed or fall back to the current mechanism, so it is backwards
compatible from a tenant point of view...
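To sketch the win: with a changed-extent list, the backup only has to read
and upload the dirty ranges, and everything else becomes a reference to the
chunks of the previous backup (the chunk size and store interface below are
assumptions for illustration):

CHUNK = 4 * 1024 * 1024  # back the volume up in 4 MiB pieces

def backup_changed(device_path, changed_extents, store):
    # changed_extents: [(offset, length), ...] as pushed to cinder at
    # detach time; ranges outside these are known to be unchanged.
    with open(device_path, 'rb') as dev:
        for offset, length in changed_extents:
            end = offset + length
            pos = (offset // CHUNK) * CHUNK  # align to chunk boundary
            while pos < end:
                dev.seek(pos)
                store.put(pos, dev.read(CHUNK))  # assumed store API
                pos += CHUNK

If the extent list is missing or suspect, reading the whole device gives
exactly today's behaviour.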
On 6 Feb 2016 17:56, "Sam Yaple"  wrote:

> On Sat, Feb 6, 2016 at 3:00 PM, Jeremy Stanley  wrote:
>
>> On 2016-02-05 16:38:19 + (+), Sam Yaple wrote:
>> > I always forget to qualify that statement don't I? Nova does not
>> > have a mechanism for _incremental_ backups. Nor does Nova have
>> > compression or encryption because AFAIK that api only creates a
>> > snapshot. I would also point out again that snapshots != backups,
>> > at least not for those who care about backups.
>>
>> And just to be clear, you assert that the Nova team would reject
>> extending their existing backup implementation to support this, so
>> the only real solution is to make another project.
>>
>
> I don't know if Nova would reject it or not, but as discussed it could be
> extended to Cinder. Should Nova ever back up Cinder volumes? Additionally,
> why don't we combine networking into Nova? Or images? Or volumes? What I do
> assert is that we have done a lot of work to strip out components from Nova;
> backups don't seem like a good candidate to shove into Nova.
>
> Luckily, since Ekko and Nova (just like Ekko and Freezer) don't have any
> conflicting operations, should Ekko be built separately and merged into Nova
> it would be a fairly painless process, since there are no overlapping services.
>
> Integration with Nova where Nova controls the hypervisor and Ekko requests
> operations through the Nova api before doing the backup is another
> question, and that is reasonable in my opinion. This is likely an issue
> that can be addressed down the road rather than at this moment, though.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 2 February 2016 at 02:28, Sam Yaple <sam...@yaple.net> wrote:

>
> I disagree with this statement strongly as I have stated before. Nova has
> snapshots. Cinder has snapshots (though they do say cinder-backup). Freezer
> wraps Nova and Cinder. Snapshots are not backups. They are certainly not
> _incremental_ backups. They can have neither compression, nor encryption.
> With this in mind, Freezer does not have this "feature" at all. Its not
> that it needs improvement, it simply does not exist in Freezer. So a
> separate project dedicated to that one goal is not unreasonable. The real
> question is whether it is practical to merge Freezer and Ekko, and this is
> the question Ekko and the Freezer team are attempting to answer.
>

You're misinformed about the cinder feature set there - cinder has both
snapshots (usually a fast COW thing on the same storage backend) and backups
(a copy to a different storage backend, usually swift but might be
NFS/ceph/TSM) - the backups support incremental and compression. Encryption
separate from the volume encryption is not yet supported or implemented,
merely because nobody has written it yet. There's also live backup
(internally via a snapshot), merged last cycle.

I can see a place for other backup solutions, I just want to make the
existing ones clear.

-- 
Duncan Thomas


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 3 February 2016 at 16:32, Sam Yaple <sam...@yaple.net> wrote:

>
> Looking into it, however, shows Cinder has no mechanism to delete backups
> in the middle of a chain since you use dependent backups (please correct me
> if I am wrong here). This means after a number of incremental backups you
> _must_ take another full to ensure the chain doesn't get too long. That is a
> problem Ekko is proposing to solve as well. Full backups are costly in
> terms of IO, storage, bandwidth and time. A full backup being required in a
> backup plan is a big problem for backups when we talk about volumes that
> are terabytes large.
>

You're right that this is an issue currently. Cinder actually has enough
info, in theory, to trivially squash backups and so break the chain - it's
only a bit of metadata ref-counting and juggling - however nobody has yet
written the code.


> Luckily, digging into it, it appears cinder already has all the
> infrastructure in place to handle what we had talked about in a separate
> email thread Duncan. It is very possible Ekko can leverage the existing
> features to do it's backup with no change from Cinder. This isn't the
> initial priority for Ekko though, but it is good information to have. Thank
> you for your comments!
>


Always interested in better ways to solve backup.


-- 
Duncan Thomas


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 3 February 2016 at 17:27, Sam Yaple <sam...@yaple.net> wrote:


>
> And here we get to the meat of the matter. Squashing backups is awful in
> object storage. It requires you to pull both backups, merge them, then
> reupload. This also has the downside of casting doubt on a backup since you
> are now modifying data after it has been backed up (though that doubt is
> lessened with proper checksuming/hashing which cinder does it looks like).
> This is the issue Ekko can solve (and has solved over the past 2 years).
> Ekko can do this "squashing" in a non-traditional way, without ever
> modifying content or merging anything. With deletions only. This means we
> do not have to pull two backups, merge, and reupload to delete a backup
> from the chain.
>

I'm sure we've lost most of the audience by this point, but I might as well
reply here as anywhere else...

In the cinder backup case, since the backup is chunked in the object store,
all that is required is to reference-count the chunks needed by the backups
you want to keep, get rid of the rest, and re-upload the (very small) JSON
mapping file. You can either upload over the old JSON or create a new one.
Either way, the bulk data does not need to be touched.



-- 
-- 
Duncan Thomas


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 3 February 2016 at 17:52, Sam Yaple <sam...@yaple.net> wrote:


> This is a very similiar method to what Ekko is doing. The json mapping in
> Ekko is a manifest file which is a sqlite database. The major difference I
> see is Ekko is doing backup trees. If you launch 1000 instances from the
> same glance image, you don't need 1000 fulls, you need 1 full and 1000
> incrementals. Doing that means you save a ton of space, time, bandwidth,
> IO, but it also means n number of backups can reference the same chunk of
> data and it makes deletion of that data much harder than you describe in
> Cinder. When restoring a backup, you don't _need_ a new full, you need to
> start your backups based on the last restore point and the same point about
> saving applies. It also means that Ekko can provide "backups can scale with
> OpenStack" in that sense. Your backups will only ever be your changed data.
>
> I recognize that isn't probably a huge concern for Cinder, with volumes
> typically being just unique data and not duplicate data, but with nova I
> would argue _most_ instances in an OpenStack deployment will be based on
> the same small subset of images and that's a lot of duplicate data to
> consider backing up especially at scale.
>
>

So this sounds great. If your backup formats are similar enough, it is
worth considering adding a backup export function that spits out a
cinder-backup compatible JSON file (it's a dead simple format), and perhaps
an import for the same. That would allow cinder backup and Ekko to exchange
data where desired. I'm not sure if this is possible, but I'd certainly
suggest looking at it.

Thanks for keeping the dialog open, it has definitely been useful.


-- 
Duncan Thomas


Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-01-30 Thread Duncan Thomas
+1

He's been doing great work, and is a pleasure to work with.
On 29 Jan 2016 19:05, "Sean McGinnis"  wrote:

> Patrick has been a strong contributor to Cinder over the last few
> releases, both with great code submissions and useful reviews. He also
> participates regularly on IRC helping answer questions and providing
> valuable feedback.
>
> I would like to add Patrick to the core reviewers for Cinder. Per our
> governance process [1], existing core reviewers please respond with any
> feedback within the next five days. Unless there are no objections, I will
> add Patrick to the group by February 3rd.
>
> Thanks!
>
> Sean (smcginnis)
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [cinder] Testing Cinder upgrades - c-bak upgrade

2016-01-30 Thread Duncan Thomas
On 29 Jan 2016 19:37, "Michał Dulko"  wrote:
>

> Resolution on this matter from the Cinder mid-cycle is that we're fine
> as long as we safely fail in case of upgrade conducted in an improper
> order. And it seems we can implement that in a simple way by raising an
> exception from volume.rpcapi when c-vol is pinned to a version too old.
> This means that scalable backup patches aren't blocked by this issue.

Agreed. As long as:

a) there is a correct order to upgrade, with no loss of service

and

b) incorrect ordering results in graceful failure (zero data loss, new
volumes / backups go to error, old backups are in a state where they can be
restored once the upgrade is complete, sensible user error messages where
possible)

If those two conditions are met (and it sounds like they are) then I'm happy.
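For (b), something as simple as this check in volume.rpcapi would do (the
version number and names here are illustrative, not the actual patch):

BACKUP_RPC_MIN = '2.0'  # first c-vol RPC version with the new backup path

def _vt(version):
    return tuple(int(p) for p in version.split('.'))

def check_backup_supported(pinned_version):
    # Fail fast with a clear error rather than sending a message the
    # old c-vol can't handle and leaving the backup stuck mid-flight.
    if _vt(pinned_version) < _vt(BACKUP_RPC_MIN):
        raise Exception(
            'c-vol is pinned to RPC %s; upgrade c-vol before c-bak '
            'to use the new backup path' % pinned_version)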


Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Duncan Thomas
On 27 January 2016 at 06:40, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> On 1/27/2016 11:22 AM, Avishay Traeger wrote:
>
>>
>> I agree with you.  Actually, I think it would be more correct to have
>> Cinder store it, and not pass it at all to terminate_connection().
>>
>>
> That would be ideal but I don't know if cinder is storing this information
> in the database like nova is in the nova
> block_device_mappings.connection_info column.
>


This is being discussed for cinder, since it is useful for implementing
force detach / cleanup in cinder.

-- 
Duncan Thomas


Re: [openstack-dev] [glance][ironic][cinder][nova] 'tar' as an image disk_format

2016-01-23 Thread Duncan Thomas
I guess my wisdom would be 'why'? What does this enable you to do that you
couldn't do with similar ease with the formats we have, and are people
trying to do that frequently?

We've seen in cinder that image formats have a definite security surface to
them, and with glance adding arbitrary conversion pipelines, that surface
is going to increase with every format we add. This should mean we tend
towards being increasingly conservative, I think.

We've heard a possible feature, but zero use cases that I can see. Why is
this better than converting your data to a supported format?
On 23 Jan 2016 16:57, "Brian Rosmaita"  wrote:

> Please provide feedback about a proposal to add 'tar' as a new Glance
> disk_format.[0]
>
> The Ironic team is adding support for "OS tarball images" in Mitaka.  This
> is a compressed tar archive of a / (root filesystem). These tarballs are
> created by first installing the OS packages in a chroot and then
> compressing the chroot as tar.*.  The proposal is to store such images as
> disk_format == tar and container_format == bare.
>
> Intuitively, 'tar' seems more like a container_format.  The Glance
> developer documentation, however, says that "The container format refers to
> whether the virtual machine image is in a file format that also contains
> metadata about the actual virtual machine."[1]  Under this proposal, there
> is no such metadata included.
>
> The Glance docs say this about disk_format: "The disk format of a virtual
> machine image is the format of the underlying disk image. Virtual appliance
> vendors have different formats for laying out the information contained in
> a virtual machine disk image."[1]  Under this definition, 'tar' as used in
> this proposal [0] does in fact seem to be a disk_format.
>
> There is not currently a 'tar' container format defined for Glance.  The
> closest we have now is 'ova' (an OVA tar archive file) and 'docker' (a
> Docker tar archive of the container filesystem).  And, in fact, 'tar' as a
> container format wouldn't be very helpful, as it doesn't indicate where in
> the tarball the metadata should be found.
>
> The goal here is to come up with an identifier for an "OS tarball image"
> that's acceptable across projects and isn't confusing for people who are
> creating images.
>
> Thanks in advance for your feedback,
> brian
>
> [0] https://bugs.launchpad.net/glance/+bug/1535900
> [1] https://github.com/openstack/glance/blob/master/doc/source/formats.rst
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-18 Thread Duncan Thomas
On 5 January 2016 at 18:55, Ryan Rossiter 
wrote:

> This is definitely good to know. Are you planning on setting up something
> off to the side of o.vo that holds a dictionary of all values for a
> release? Something like:
>
> {‘liberty’: {‘volume’: ‘1.3’, …},
>  ‘mitaka’: {‘volume’: ‘1.8’, …}, }
>
> With the possibility of replacing the release name with the RPC version or
> some other version placeholder. Playing devil’s advocate, how does this
> work out if I want to be continuously deploying Cinder from HEAD?


As far as I know (the design has iterated a bit, but I think I'm still
right), there is no need for such a table - before you start a rolling
upgrade, you call the 'pin now' API, and all of the services write their
max supported version to the DB. Once the DB has been written to by all
services, the running services can then read that table and cache the max
value. Any new services brought up will also build a max-version cache on
startup. Once everything is upgraded, you can call 'pin now' again and the
services can figure out a new (hopefully higher) version limit.


Re: [openstack-dev] [cinder][python-client-api] Retrieve host-name attribute of Cinder Volume

2016-01-06 Thread Duncan Thomas
>>>> |
>>>>
>>>> How this info can be accessed from the cinder python client?
>>>>
>>>> I can access other information (id, size, name etc.) as follows:
>>>>
>>>> >>> volumes = cinder.volumes.list()
>>>>
>>>> >>> volumes
>>>>
>>>> [<Volume: e8be1df5-64fb-43fa-aacd-9bebba17fba5>]
>>>>
>>>> >>> volumes[0].id
>>>>
>>>> u'e8be1df5-64fb-43fa-aacd-9bebba17fba5'
>>>>
>>>> >>> volumes[0].volume_type
>>>>
>>>> u'iscsi_1'
>>>>
>>>>
>>>>
>>>> Thanks,
>>>> Pradip
>>>>
>>>>
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas


  1   2   3   4   5   >