+1
Since this is very isolated to the rbd driver and it's passing already
Walt
On 09/09/2016 12:32 PM, Gorka Eguileor wrote:
Hi,
As some of you may know, Jon Bernard (jbernard on IRC) has been working
on the RBD v2.1 replication implementation [1] for a while, and we would
like to request a
I was leaning towards a separate repo until I started thinking about all
the overhead and complications this would cause. It's another repo for
cores to watch. It would cause everyone extra complication in setting up
their CI, which is already one of the biggest roadblocks. It would make
it a
On 08/09/2016 11:52 AM, Ihar Hrachyshka wrote:
Walter A. Boring IV <walter.bor...@hpe.com> wrote:
On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:
Duncan Thomas <duncan.tho...@gmail.com> wrote:
On 8 August 2016 at 21:12, Matthew Treinish <mtrein...@kortar.org> wrote:
Ignoring all that, this is also contrary to how we perform testing in
OpenStack.
We don't turn off entire classes of testing
I think "currently active stable branches" is key there. These branches
would no longer be "currently active". They would get an EOL tag when it
reaches the end of the support phases. We just wouldn't delete the
branch.
This argument comes up at least once a cycle and there is a reason we
This is great! I know I'm a bit late replying to this on the ML,
due to my vacation,
but I wholeheartedly agree!
+1
Walt
On 06/27/2016 10:27 AM, Sean McGinnis wrote:
I would like to nominate Scott D'Angelo to core. Scott has been very
involved in the project for a long time now and is
Does QEMU support hardware initiators? iSER?
No, this is only for case where you're doing pure software based
iSCSI client connections. If we're relying on local hardware that's
a different story.
We regularly fix issues with iSCSI attaches in the release cycles of
OpenStack,
because it's
volumes connected to QEMU instances eventually become directly connected?
Our long term goal is that 100% of all network storage will be connected
to directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
One major disadvantage is lack of multipath support.
Multipath is still done outside of qemu and there is no native multipath
support inside of qemu from what I can tell. Another
disadvantage is that qemu iSCSI support is all software based. There are
hardware iSCSI initiators that are supported
I just put up a WIP patch in os-brick that tests to see if os-privsep is
configured with
the helper_command. If it's not, then os-brick falls back to using
processutils
with the root_helper and run_as_root kwargs passed in.
https://review.openstack.org/#/c/329586
If you can check this out
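The fallback described above could be sketched roughly like this. This is a minimal illustration only; `PrivsepConfig` and the return shapes are invented for the sketch and are not os-brick's real API:

```python
# Hedged sketch (not the actual os-brick patch): pick an execution
# path based on whether a privsep helper command is configured.
# PrivsepConfig and the returned tuples are illustrative stand-ins.

class PrivsepConfig:
    def __init__(self, helper_command=None):
        self.helper_command = helper_command

def execute(cmd, config, root_helper="sudo cinder-rootwrap"):
    if config.helper_command:
        # privsep is configured; dispatch through the privileged daemon
        return ("privsep", cmd)
    # otherwise fall back to processutils-style execution
    return ("processutils", cmd, {"root_helper": root_helper,
                                  "run_as_root": True})
```

The point of the fallback is that deployments that haven't configured a privsep helper keep working exactly as before.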
+1
Walt
Hey everyone,
I would like to nominate Michał Dulko to the Cinder core team. Michał's
contributions with both code reviews [0] and code contributions [1] have
been significant for some time now.
His persistence with versioned objects has been instrumental in getting
support in the
Adam,
As the bug shows, it was fixed in the Juno release. The Icehouse
release is no longer supported. I would recommend upgrading your
deployment if possible, or looking at the patch to see if it can work
against your Icehouse codebase.
https://review.openstack.org/#/c/96548/
Walt
On
On 02/23/2016 06:14 AM, Qiming Teng wrote:
I don't think the proposal removes that opportunity. Contributors
/can/ still go to OpenStack Summits. They just don't /have to/. I
just don't think every contributor needs to be present at every
OpenStack Summit, while I'd like to see most of them
On 02/22/2016 11:24 AM, John Garbutt wrote:
Hi,
Just came up on IRC, when nova-compute gets killed half way through a
volume attach (i.e. no graceful shutdown), things get stuck in a bad
state, like volumes stuck in the attaching state.
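One way to recover from this could be a periodic task that resets attachments stuck past a timeout. The `Volume` class and the reset-to-available policy below are illustrative assumptions, not Nova's or Cinder's actual code:

```python
# Illustrative sketch: roll back volumes left in the transient
# 'attaching' state after nova-compute died mid-attach.
from datetime import datetime, timedelta

class Volume:
    def __init__(self, status, updated_at):
        self.status = status
        self.updated_at = updated_at

def reset_stuck_volumes(volumes, now, timeout=timedelta(minutes=30)):
    """Reset volumes stuck in 'attaching' longer than `timeout`."""
    reset = []
    for vol in volumes:
        if vol.status == "attaching" and now - vol.updated_at > timeout:
            vol.status = "available"   # roll back to a safe state
            reset.append(vol)
    return reset
```

A real fix would also have to clean up any half-created export on the backend, which is the harder part of the problem.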
This looks like a new addition to this conversation:
On 02/22/2016 09:45 AM, Thierry Carrez wrote:
Amrith Kumar wrote:
[...]
As a result of this proposal, there will still be four events each
year, two "OpenStack Summit" events and two "MidCycle" events.
Actually, the OpenStack summit becomes the midcycle event. The new
separated
On 02/22/2016 07:14 AM, Thierry Carrez wrote:
Hi everyone,
TL;DR: Let's split the events, starting after Barcelona.
Time is ripe for a change. After Tokyo, we at the Foundation have been
considering options on how to evolve our events to solve those issues.
This proposal is the result of
On 02/20/2016 02:42 PM, Duncan Thomas wrote:
On 20 Feb 2016 00:21, "Walter A. Boring IV" <walter.bor...@hpe.com> wrote:
> Not that I'm adding much to this conversation that hasn't been said
already, but I am pro v2 API, purely because
But, there are no such clients today. And there is no library that does
this yet. It will be 4 - 6 months (or even more likely 12+) until that's
in the ecosystem. Which is why adding the header validation to existing
v2 API, and backporting to liberty / kilo, will provide really
substantial
On 02/12/2016 04:35 PM, John Griffith wrote:
On Thu, Feb 11, 2016 at 10:31 AM, Walter A. Boring IV
<walter.bor...@hpe.com> wrote:
There seem to be a few discussions going on here with regard to
detaches. One is what to do on the Nova side
Hey folks,
One of the challenges we have faced with the ability to attach a
single volume to multiple instances, is how to correctly detach that
volume. The issue is a bit complex, but I'll try and explain the
problem, and then describe one approach to solving one part of the
detach
attachments on the same host. By having the information stored in Cinder as
well we can also avoid removing a target when there are still active
attachments connected to it.
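The check described above might look roughly like this. The data shapes are invented for illustration and are not Cinder's schema:

```python
# Hedged sketch: only tear down an export/target when no other
# active attachment on the same host still uses the volume.

def safe_to_remove_target(attachments, host, volume_id):
    """True if no active attachment on `host` still uses this volume."""
    remaining = [a for a in attachments
                 if a["host"] == host
                 and a["volume_id"] == volume_id
                 and a["status"] == "attached"]
    return len(remaining) == 0
```

The detach flow would delete (or mark detached) the current attachment record first, then run this check before removing the target.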
What do you think?
Thanks,
Ildikó
-Original Message-
From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
Sent
My plan was to store the connector object at attach_volume time. I was
going to add an additional column to the cinder volume attachment table
that stores the connector that came from nova. The problem is live
migration. After live migration the connector is out of date. Cinder
doesn't
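The proposal above could be sketched as follows. The `Attachment` class is a stand-in for a row in the real attachment table, not Cinder's actual model:

```python
# Illustrative sketch: persist the connector dict on the attachment
# record, and refresh it after live migration so it doesn't go stale.
import json

class Attachment:
    def __init__(self):
        self.connector_json = None

def save_connector(attachment, connector):
    attachment.connector_json = json.dumps(connector)

def refresh_connector_after_migration(attachment, new_connector):
    # live migration moves the instance to a new host, so the stored
    # initiator/host details must be replaced, not reused
    save_connector(attachment, new_connector)

def load_connector(attachment):
    return json.loads(attachment.connector_json)
```

The live-migration problem is exactly that nothing today calls the refresh step, so the stored connector would describe the old host.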
+1 from me. Patrick has done a great job the last several releases and
his dedication to making Cinder better has been very visible.
Patrick has been a strong contributor to Cinder over the last few releases,
both with great code submissions and useful reviews. He also participates
On 12/21/2015 06:40 AM, Philipp Marek wrote:
Hi everybody,
in the current patch https://review.openstack.org/#/c/259973/1 the test
script needs to use a lot of the constant definitions of the backend driver
it's using (DRBDmanage).
As the DRBDmanage libraries need not be installed on the CI
As a side note to the DR discussion here, there was a session in Tokyo
that talked about a new
DR project called Smaug. You can see their mission statement here:
https://launchpad.net/smaug
https://github.com/openstack/smaug
There is another service in the making called DRagon:
On 11/20/2015 10:19 AM, Daniel P. Berrange wrote:
On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:
Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale
Hello folks,
I just wanted to post up the YouTube link for the video hangout that
the Cinder team just had.
We had a good discussion about the local file locks in the volume
manager and how it affects the interaction
of Nova with Cinder in certain cases. We are trying to iron out how to
On 09/28/2015 10:29 AM, Ben Swartzlander wrote:
I've always thought it was a bit strange to require new drivers to
merge by milestone 1. I think I understand the motivations of the
policy. The main motivation was to free up reviewers to review "other
things" and this policy guarantees that for
>> To be honest this is probably my fault, AZ's were pulled in as part of
>> the nova-volume migration to Cinder and just sort of died. Quite
>> frankly I wasn't sure "what" to do with them but brought over the
>> concept and the zones that existed in Nova-Volume. It's been an issue
>> since
Hi Thomas,
I can't speak to the other packages, but as far as os-brick goes, the
/usr/local/etc stuff is simply for the embedded
rootwrap filter that os-brick is currently exporting. You can see it here:
>
> 1. Do you actually have the time to spend to be PTL
>
> I don't think many people realize the time commitment. Between being
> on top of reviews and having a pretty consistent view of what's going
> on and in process; to meetings, questions on IRC, program management
> type stuff etc. Do you
Cinder for many releases ahead.
Thank you for considering me.
Walter A. Boring IV (hemna)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
Thanks for your leadership and service Mike. You've done a great job!
Walt
Hello all,
I will not be running for Cinder PTL this next cycle. Each cycle I ran
was for a reason [1][2], and the Cinder team should feel proud of our
accomplishments:
* Spearheading the Oslo work to allow *all*
Hey Tony,
This has been a long running pain point/problem for some of the
drivers in Cinder.
As a reviewer, I try and -1 drivers that talk directly to the database
as I don't think
drivers *should* be doing that. But, for some drivers, unfortunately,
in order to
implement the features,
This isn't as simple as making calls to virsh after an attached volume
is extended on the cinder backend, especially when multipath is involved.
You need the host system to understand that the volume has changed size
first, or virsh will really never see it.
For iSCSI/FC volumes you need to
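A rough sketch of that host-side ordering is below. The command strings are typical admin commands assembled purely for illustration, and the domain/device names are placeholders, not values from any real deployment:

```python
# Hedged sketch: the usual sequence of host-side steps before
# virsh/QEMU can see a grown iSCSI volume. Assembled for
# illustration only; exact commands vary by distro and tooling.

def host_resize_steps(multipath_device=None):
    # 1. make the SCSI layer rescan so the kernel learns the new size
    steps = ["iscsiadm -m session --rescan"]
    if multipath_device:
        # 2. multipath maps must be resized after the paths grow
        steps.append("multipathd resize map %s" % multipath_device)
    # 3. only then can libvirt/QEMU be told about the new size
    steps.append("virsh blockresize <domain> <device> <new-size>")
    return steps
```

The ordering is the whole point: running `virsh blockresize` before the host has rescanned accomplishes nothing, because the kernel still reports the old size.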
+1
It gives me great pleasure to nominate Gorka Eguileor for Cinder core.
Gorka's contributions to Cinder core have been much appreciated:
https://review.openstack.org/#/q/owner:%22Gorka+Eguileor%22+project:openstack/cinder,p,0035b6410002dd11
60/90 day review stats:
Currently, the FCZM doesn't support this. Also, from my experience,
Brocade and Cisco switches don't play well together when managing the
same fabrics.
Walt
Hi guys,
I am using a Brocade FC switch in my OpenStack environment. I have
a question about the OpenStack cinder zone manager driver.
I missed this whole thread due to my mail filtering. Sorry about that.
Anyway, Ivan and I have an open Blueprint here:
https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova
That starts the discussion of adding the end to end ability of attaching
a Cinder
volume to a host
on, and os-brick.
Walt
On 7/9/15, 14:44 , Walter A. Boring IV walter.bor...@hp.com wrote:
I missed this whole thread due to my mail filtering. Sorry about that.
Anyway, Ivan and I have an open Blueprint here:
https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova
That starts
On 06/10/2015 08:40 AM, Matt Riedemann wrote:
This is a follow-on to the thread [1] asking about modeling the
connection_info dict returned from the os-initialize_connection API.
The more I think about modeling that in Nova, the more I think it
should really be modeled in Cinder with an
+1 for Sean. He's done a great job doing reviews and getting involved
in core Cinder features.
Walt
On 05/22/2015 04:34 PM, Mike Perez wrote:
This is long overdue, but it gives me great pleasure to nominate Sean
McGinnis for
Cinder core.
Reviews:
Can you please file a defect for this against cinder and os-brick?
I'll fix it ASAP.
Walt
Hi,
I am wondering why screen-c-vol.log is displaying the CHAP secret.
Logs:
2015-04-16 16:04:23.288 7306 DEBUG oslo_concurrency.processutils
[req-23c699df-7b21-48d2-ba14-d8ed06642050
I went ahead and filed a bug, and I have 2 fixes posted up already that
mirror how nova fixed this issue in the libvirt volume driver for iSCSI.
https://bugs.launchpad.net/os-brick/+bug/1445137
Walt
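The general masking approach might look like this. It is an illustrative regex scrubber, not the actual os-brick/nova fix:

```python
# Hedged sketch: scrub CHAP secrets from iscsiadm command lines
# before they reach debug logs. The regex targets the
# node.session.auth.password option passed via "-v <secret>".
import re

SECRET_RE = re.compile(r"(node\.session\.auth\.password\S*\s+-v\s+)(\S+)")

def mask_chap_secret(logline):
    """Replace the CHAP password value with asterisks."""
    return SECRET_RE.sub(r"\1***", logline)
```

The real fixes take the same shape: the command is logged, but the secret argument is replaced before the log call.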
On 04/16/2015 05:54 AM, Yogesh Prasad wrote:
Hi,
I am wondering why screen-c-vol.log is
This is a real defect related to the multiattach patch that I worked
on. I have posted a fix for your driver.
https://review.openstack.org/#/c/167683/
Walt
Hi,
Just reported an issue: https://bugs.launchpad.net/cinder/+bug/1436367
Seems to be related to
On 03/23/2015 01:50 PM, Mike Perez wrote:
On 12:59 Mon 23 Mar , Stefano Maffulli wrote:
On Mon, 2015-03-23 at 11:43 -0700, Mike Perez wrote:
We've been talking about CIs for a year. We started talking about CI deadlines
in August. If you post a driver for Kilo, it was communicated that
Maybe we can leverage Cinder's use of the abc in the drivers.py now.
We could create an OptionsVD that drivers would add and implement. The
config generator could inspect objects looking for OptionsVD and then
call list_opts() on it. That way, driver maintainers don't also have
to patch
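The idea could be sketched like this. The class names follow the mail's `OptionsVD` suggestion but are otherwise invented, and this is not Cinder's real driver hierarchy:

```python
# Hedged sketch: an abstract mixin that drivers implement so a
# config generator can discover their options via list_opts(),
# without each driver maintainer patching the generator.
import abc

class OptionsVD(abc.ABC):
    @abc.abstractmethod
    def list_opts(self):
        """Return this driver's config option names."""

class FakeDriver(OptionsVD):
    # illustrative driver; option names are made up
    def list_opts(self):
        return ["fake_san_ip", "fake_san_login"]

def collect_driver_opts(drivers):
    """What a config generator might do: gather opts from every
    object that implements the OptionsVD interface."""
    opts = []
    for drv in drivers:
        if isinstance(drv, OptionsVD):
            opts.extend(drv.list_opts())
    return opts
```

The benefit is that discovery is structural: any driver inheriting the mixin is picked up automatically.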
On 03/19/2015 07:13 PM, liuxinguo wrote:
Hi Mike,
I have seen the patch at https://review.openstack.org/#/c/165990/
saying that huawei driver will be removed because “the maintainer does
not have a CI reporting to ensure their driver integration is successful”.
Looking at this patch,
We have this patch in review currently. I think this one should 'fix'
it, no?
Please review.
https://review.openstack.org/#/c/163551/
Walt
On 03/11/2015 10:47 AM, Mike Bayer wrote:
Hello Cinder -
I’d like to note that for issue
https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution
Since the Nova side isn't in, and won't land for Kilo, there is no
reason for Cinder to have it for Kilo, as it will simply not work.
We can revisit this for the L release if you like.
Also, make sure you have 3rd Party CI setup for this driver, or it won't
be accepted in the L release
Hey folks,
Today we found out that a patch[1] that was made against our
lefthand driver caused the driver to fail. The 3rd party CI that we
have set up
to test our driver (hp_lefthand_rest_proxy.py) caught the CI failure and
reported it correctly[2]. The patch that broke the driver was
Yes, assume NEW drivers have to land before the L-1 milestone. This
also includes getting a CI system up and running.
Walt
Hi,
In Kilo, the cinder driver was requested to be merged before K-1. I want
to ask: in L, will the driver be requested to be merged before L-1?
Thanks and
Hey folks,
I wanted to get some feedback from the Nova folks on using Cinder's
Brick library. As some of you
may or may not know, Cinder has an internal module called Brick. It's
used for discovering and removing
volumes attached to a host. Most of the code in the Brick module in
cinder
Sorry I didn't see this earlier.
I'd welcome Ivan to the team!
+1
Walt
On Wed, Jan 21, 2015 at 10:14 AM, Mike Perez thin...@gmail.com wrote:
It gives me great pleasure to nominate Ivan Kolodyazhny (e0ne) for
Cinder core. Ivan's reviews have been valuable in decisions, and his
contributions
John,
Thanks for the term as PTL since Cinder got its start. Without your
encouragement when Kurt and I started working on Cinder back during
Grizzly, we wouldn't have been successful, and might not even be
working on the project to this day. I can't say enough about how
helpful you
Thanks for the effort Ivan. Your interest in brick is also helping us
push forward with the idea of the agent that we've had in mind for quite
some time.
For those interested, I have created an etherpad that discusses some of
the requirements and design decisions/discussion on the
Originally I wrote the connector side of brick to be the LUN discovery
shared code between Cinder and Nova.
I tried to make a patch in Havana that would do this but it
didn't make it in.
The upside to brick not making it in Nova is that it has given us some
time to rethink things a
On 04/23/2014 05:09 PM, Jay S. Bryant wrote:
All,
I have gotten questions from our driver developers asking for details
regarding the move to using cinder-specs for proposing Blueprints. I
brought this topic up in today's Cinder Weekly Meeting, but the meeting
was lightly attended so we
On 02/13/2014 02:51 AM, Thierry Carrez wrote:
John Griffith wrote:
So we've talked about this a bit and had a number of ideas regarding
how to test and show compatibility for third-party drivers in Cinder.
This has been an eye opening experience (the number of folks that have
NEVER run tempest
On 02/13/2014 09:51 AM, Avishay Traeger wrote:
Walter A. Boring IV walter.bor...@hp.com wrote on 02/13/2014 06:59:38
PM:
What I would do different for the Icehouse release is this:
If a driver doesn't pass the certification test by Icehouse RC1, then we
have a bug filed
against the driver. I
4 or 5 UTC works better for me. I can't attend the current meeting
time, due to taking my kids to school in the morning at 1620 UTC.
Walt
Hi All,
Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
some interest in either changing the weekly Cinder meeting time, or
proposing
Awesome guys,
Thanks for picking this up. I'm looking forward to the reviews :)
Walt
On 19.11.2013 10:38, Kekane, Abhishek wrote:
Hi All,
Greetings!!!
Hi there!
And thanks for your interest in cinder and taskflow!
We are in the process of implementing TaskFlow 0.1 in Cinder for copy
___
Mailing list: https://launchpad.net/~openstack
Post to : openst...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack