Re: [openstack-dev] [Ironic] Nominating David Shrewsbury to ironic-core

2014-07-14 Thread Dmitry Tantsur
+1 

On Fri, 2014-07-11 at 15:50 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> While David (Shrews) only began working on Ironic in earnest four
> months ago, he has been working on some of the tougher problems with
> our Tempest coverage and the Nova<->Ironic interactions. He's also
> become quite active in reviews and discussions on IRC, and
> demonstrated a good understanding of the challenges facing Ironic
> today. I believe he'll also make a great addition to the core team.
> 
> 
> Below are his stats for the last 90 days.
> 
> 
> Cheers,
> Devananda
> 
> 
> +----------+------------------------------------------+----------------+
> | Reviewer | Reviews   -2   -1   +1   +2   +A   +/- % | Disagreements* |
> +----------+------------------------------------------+----------------+
> 
> 30 days
> | dshrews  |      47    0   11   36    0    0   76.6% |   7 ( 14.9%)   |
> 
> 60 days
> | dshrews  |      91    0   14   77    0    0   84.6% |  15 ( 16.5%)   |
> 
> 90 days
> | dshrews  |     121    0   21  100    0    0   82.6% |  16 ( 13.2%)   |
> +----------+------------------------------------------+----------------+
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How to get testr to failfast

2014-07-31 Thread Dmitry Tantsur
Hi!

On Thu, 2014-07-31 at 10:45 +0100, Chris Dent wrote:
> One of the things I like to be able to do when in the middle of making
> changes is sometimes run all the tests to make sure I haven't accidentally
> caused some unexpected damage in the neighborhood. If I have I don't
> want the tests to all run, I'd like to exit on first failure.

This makes even more sense if you _know_ that you've broken a lot of
things and want to deal with them case by case. At least for me it's more
convenient, though I believe many will prefer getting all the errors at once.

>  This
> is a common feature in lots of testrunners but I can't seem to find
> a way to make it happen when testr is integrated with setuptools.
> 
> Any one know a way?
> 
> There's this:
>https://bugs.launchpad.net/testrepository/+bug/1211926
> 
> But it is not clear how or where to effectively pass the right argument,
> either from the command line or in tox.ini.
> 
> Even if you don't know a way, I'd like to hear from other people who
> would like it to be possible. It's one of several testing habits I
> have from previous worlds that I'm missing and doing a bit of
> commiseration would be a nice load off.

It would be my second most-wanted feature in our test system (after
getting a reasonable error message (at least not a binary one) in case of
import errors :)
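
As an illustration (not testr itself, just a stdlib workaround I sometimes
use while testr lacks this): unittest's own runner already supports failing
fast, so you can point it at the affected tests directly. A minimal sketch,
assuming the tests are plain unittest/testtools cases:

    # fail_fast.py - run tests from a directory and stop at the first failure
    import sys
    import unittest

    if __name__ == '__main__':
        loader = unittest.TestLoader()
        suite = loader.discover(sys.argv[1] if len(sys.argv) > 1 else '.')
        runner = unittest.TextTestRunner(failfast=True)  # stop on first failure
        result = runner.run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)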

> 
> Thanks.
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] help

2014-07-31 Thread Dmitry Tantsur
Hi!

This list is not for usage questions, it's for OpenStack developers. The
best way to get quick help is to use
https://ask.openstack.org/en/questions/ or to join #openstack on
Freenode and ask there.

Good luck!

On Thu, 2014-07-31 at 15:59 +0530, shailendra acharya wrote:
> hello folks,
> this is shailendra acharya. I am trying to install OpenStack Icehouse
> on CentOS 6.5, but I got stuck and have tried almost every link that
> Google suggested to me. You are my last hope.
> When I create the user with the keystone command as written in the
> OpenStack installation manual,
>    keystone user-create --name=admin --pass=ADMIN_PASS
> --email=ADMIN_EMAIL
> 
> I replaced the email and password, but when I press enter it shows an
> invalid credentials error. Please do something ASAP.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Dmitry Tantsur
Hi!

I didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (and not too detailed). My
vote is to split it into a chain of specs (at least 3: power driver,
discovery, other configurations) and seek exceptions separately.
Actually, I'm +1 on making an exception for the power driver, but -0 on the
others until I see a separate spec for them.

Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
> Hi,
> 
> 
> I've submitted the Ironic Cisco driver blueprint after the proposal freeze
> date. This driver is critical for Cisco and a few customers to test as
> part of their private cloud expansion. The driver implementation is
> ready along with unit tests. I will submit the code for review once the
> blueprint is accepted.
> 
> 
> The Blueprint review link: https://review.openstack.org/#/c/110217/
> 
> 
> Please let me know If its possible to include this in Juno release.
> 
> 
> 
> Regards
> GopiKrishna S
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-07 Thread Dmitry Tantsur
Hi!

On Tue, 2014-08-05 at 12:33 -0700, Devananda van der Veen wrote:
> Hi all!
> 
> 
> The following idea came out of last week's midcycle for how to improve
> our spec process and tracking on launchpad. I think most of us liked
> it, but of course, not everyone was there, so I'll attempt to write
> out what I recall.
> 
> 
> This would apply to new specs proposed for Kilo (since the new spec
> proposal deadline has already passed for Juno).
> 
> 
> 
> 
> First, create a blueprint in launchpad and populate it with your
> spec's heading. Then, propose a spec with just the heading (containing
> a link to the BP), Problem Description, and first paragraph outlining
> your Proposed change. 
> 
> 
> This will be given an initial, high-level review to determine whether
> it is in scope and in alignment with project direction, which will be
> reflected on the review comments, and, if affirmed, by setting the
> blueprint's "Direction" field to "Approved".

How will we formally track it in Gerrit? By having several +1's from spec
cores? Or will it be done by you (I guess only you can update
"Direction" in LP)?

> 
> 
> At this point, if affirmed, you should proceed with filling out the
> entire spec, and the remainder of the process will continue as it was
> during Juno. Once the spec is approved, update launchpad to set the
> specification URL to the spec's location on
> https://specs.openstack.org/openstack/ironic-specs/ and a member of
> the team (probably me) will update the release target, priority, and
> status.
> 
> 
> 
> 
> I believe this provides two benefits. First, it should give quicker
> initial feedback to proposer if their change is going to be in/out of
> scope, which can save considerable time if the proposal is out of
> scope. Second, it allows us to track well-aligned specs on Launchpad
> before they are completely approved. We observed that several specs
> were approved at nearly the same time as the code was approved. Due to
> the way we were using LP this cycle, it meant that LP did not reflect
> the project's direction in advance of landing code, which is not what
> we intended. This may have been confusing, and I think this will help
> next cycle. FWIW, several other projects have observed a similar
> problem with spec<->launchpad interaction, and are adopting similar
> practices for Kilo.
> 
> 
> 
> 
> Comments/discussion welcome!

I'm +1 on the idea, just some concerns about the implementation:
1. We don't have any "pre-approved" state in Gerrit - we need agreement on
when to continue (see above).
2. We'll need to speed up spec reviews, because we're adding one more
blocker on the way to the code being merged :) Maybe it's no longer a
problem actually; we're doing it faster now.

> 
> 
> 
> -Deva
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Dmitry Tantsur
Hi. This seems to be related:
https://bugs.launchpad.net/openstack-ci/+bug/1274135
We also encountered this.

On Tue, 2014-02-11 at 14:56 +0530, Swapnil Kulkarni wrote:
> Hello,
> 
> 
> I created a new devstack environment today and installed tox 1.7.0,
> and I am getting the error "tox.ConfigError: ConfigError: substitution key
> 'posargs' not found".
> 
> 
> Details in [1].
> 
> 
> Anybody encountered similar error before? Any workarounds/updates
> needed?
> 
> 
> [1] http://paste.openstack.org/show/64178/
> 
> 
> 
> 
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed) I ran into editing of flavors,
and I have some doubts about it.

Editing of Nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID (a minimal sketch after the
list below illustrates this).
For us it essentially means that all links to the flavor/profile (e.g. from
an overcloud role) will become broken. We had the following proposals:
- Update links automatically after editing by e.g. fetching all
overcloud roles and fixing the flavor ID. This poses a risk of race
conditions with concurrent editing of either node profiles or overcloud roles.
  Even worse, are we sure that the user really wants the overcloud roles to
be updated?
- The same as the previous, but with confirmation from the user. Also a risk
of race conditions.
- Do not update links. The user may be confused: an operation called "edit"
should not delete anything, nor is it supposed to invalidate links. One
of the ideas was to also show deleted flavors/profiles in a separate
table.
- Implement a clone operation instead of editing. This shows the user a
creation form with data prefilled from the original profile. The original
profile will stay and should be deleted manually. All links also have to be
updated manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).
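
A minimal sketch of the delete-then-create behaviour mentioned above (my
own illustration with python-novaclient, not the Horizon code; credentials
and the flavor name are hypothetical):

    from novaclient import client

    # assumed credentials, just for illustration
    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://localhost:5000/v2.0')

    old = nova.flavors.find(name='node-profile-1')
    nova.flavors.delete(old.id)
    new = nova.flavors.create(name=old.name, ram=old.ram,
                              vcpus=old.vcpus, disk=old.disk)
    # the "edited" flavor has a brand new ID, so anything that stored
    # old.id (e.g. an overcloud role) now points at a deleted flavor
    assert new.id != old.id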

Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
I think we are still going to have multiple flavors for Icehouse, e.g.:
https://review.openstack.org/#/c/74762/
On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> 
> On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > Hi.
> >
> > While implementing CRUD operations for node profiles in Tuskar (which
> > are essentially Nova flavors renamed) I encountered editing of flavors
> > and I have some doubts about it.
> >
> > Editing of nova flavors in Horizon is implemented as
> > deleting-then-creating with a _new_ flavor ID.
> > For us it essentially means that all links to flavor/profile (e.g. from
> > overcloud role) will become broken. We had the following proposals:
> > - Update links automatically after editing by e.g. fetching all
> > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > concurrent editing of either node profiles or overcloud roles.
> >Even worse, are we sure that user really wants overcloud roles to be
> > updated?
> 
> This is a big question. Editing has always been a complicated concept in 
> Tuskar. How soon do you want the effects of the edit to be made live? 
> Should it only apply to future creations or should it be applied to 
> anything running off the old configuration? What's the policy on how to 
> apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
> something else)?
> 
> > - The same as previous but with confirmation from user. Also risk of
> > race conditions.
> > - Do not update links. User may be confused: operation called "edit"
> > should not delete anything, nor is it supposed to invalidate links. One
> > of the ideas was to show also deleted flavors/profiles in a separate
> > table.
> > - Implement clone operation instead of editing. Shows user a creation
> > form with data prefilled from original profile. Original profile will
> > stay and should be deleted manually. All links also have to be updated
> > manually.
> > - Do not implement editing, only creating and deleting (that's what I
> > did for now in https://review.openstack.org/#/c/73576/ ).
> 
> I'm +1 on not implementing editing. It's why we wanted to standardize on 
> a single flavor for Icehouse in the first place, the use cases around 
> editing or multiple flavors are very complicated.
> 
> > Any ideas on what to do?
> >
> > Thanks in advance,
> > Dmitry Tantsur
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-17 Thread Dmitry Tantsur

Hi Jim,

On 10/16/2014 07:23 PM, Jim Mankovich wrote:

All,

I would like to get some feedback on a proposal to change the
current sensor naming implemented in ironic and ceilometer.

I would like to provide vendor-specific sensors within the current
structure for IPMI sensors in ironic and ceilometer, but I have found
that the current implementation of sensor meters in ironic and
ceilometer is IPMI-specific (from a meter naming perspective). This is
not suitable as it currently stands to support sensor information from a
provider other than IPMI. Also, the current Resource ID naming makes
it difficult for a consumer of sensors to quickly find all the sensors
for a given Ironic Node ID, so I would like to propose changing the
Resource ID naming as well.

Currently, sensors sent by ironic to ceilometer get named by ceilometer
as "hardware.ipmi.SensorType", and the Resource ID is the Ironic
Node ID with a postfix containing the Sensor ID. For details
pertaining to the issue with the Resource ID naming, see
https://bugs.launchpad.net/ironic/+bug/1377157, "ipmi sensor naming in
ceilometer is not consumer friendly".

Here is an example of what meters look like for sensors in ceilometer
with the current implementation:
| Name                      | Type  | Unit | Resource ID
| hardware.ipmi.current     | gauge | W    | edafe6f4-5996-4df8-bc84-7d92439e15c0-power_meter_(0x16)
| hardware.ipmi.temperature | gauge | C    | edafe6f4-5996-4df8-bc84-7d92439e15c0-16-system_board_(0x15)

What I would like to propose is dropping the ipmi string from the name
altogether and appending the Sensor ID to the name instead of to the
Resource ID. So, transforming the above to the new naming would result
in the following:
| Name                                     | Type  | Unit | Resource ID
| hardware.current.power_meter_(0x16)      | gauge | W    | edafe6f4-5996-4df8-bc84-7d92439e15c0
| hardware.temperature.system_board_(0x15) | gauge | C    | edafe6f4-5996-4df8-bc84-7d92439e15c0

+1

A very, very minor nit, feel free to ignore if inappropriate: maybe
hardware.temperature.system_board.0x15? I.e. separate with dots rather
than using brackets.
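
A rough illustration of the mapping I have in mind (my assumption, not
actual Ceilometer code): the sensor ID moves from the resource ID into the
meter name, dot-separated, while the resource ID becomes the bare node UUID.

    def proposed_meter(sensor_type, sensor_id, node_uuid):
        # old: name='hardware.ipmi.temperature',
        #      resource_id=node_uuid + '-' + sensor_id
        # new: name='hardware.temperature.' + sensor_id,
        #      resource_id=node_uuid
        name = 'hardware.%s.%s' % (sensor_type,
                                   sensor_id.lower().replace(' ', '_'))
        return name, node_uuid

    print(proposed_meter('temperature', 'system_board.0x15',
                         'edafe6f4-5996-4df8-bc84-7d92439e15c0'))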


This structure would provide the ability for a consumer to do a
ceilometer resource list using the Ironic Node ID as the Resource ID to
get all the sensors in a given platform. The consumer would then
iterate over each of the sensors to get the samples it wanted. In
order to retain the information as to who provided the sensors, I would
like to propose that a standard "sensor_provider" field be added to the
resource_metadata for every sensor, where the "sensor_provider" field
would have a string value indicating the driver that provided the sensor
information. This is where the string "ipmi", or a vendor-specific
string, would be specified.

+1


I understand that this proposed change is not backward compatible with
the existing naming, but I don't really see a good solution that would
retain backward compatibility.
For backward compatibility you could _also_ keep the old names (with ipmi
in them) for IPMI sensors.




Any/All Feedback will be appreciated,
In this version it makes a lot of sense to me; +1 if the Ceilometer folks
are not against it.



Jim




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-10-21 Thread Dmitry Tantsur

On 10/21/2014 02:11 AM, Devananda van der Veen wrote:

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building consensus
in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words, and
some other words that I use to mean something else which is what some
people mean when they use those words. I'm not saying my words are the
right words -- they're just the words that make sense to my brain
right now. If someone else has better words, and those words also make
sense (or make more sense) then I'm happy to use those instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which is
addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.
I generally agree with this separation, though it causes some trouble
for me, as I'm used to calling "discovery" what you call
"introspection" (it was not the case this summer, but I have since changed
my mind). And the term "discovery" is baked into the... hmm... introspection
service that I've written [1].


So I would personally prefer to keep "discovery" as in "discovery of
hardware properties", though I realize that "introspection" may be a
better name.


[1] https://github.com/Divius/ironic-discoverd



Why is this disambiguation important? At the last midcycle, we agreed
that "hardware discovery" is out of scope for Ironic -- finding new,
unmanaged nodes and enrolling them with Ironic is best left to other
services or processes, at least for the forseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for many
of our current users, and multiple proof of concept implementations of
this have been done by different parties over the last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that is
find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields several
results related to identifying unknown network-connected devices and
adding them to inventory systems, which is the way that I'm using the
term right now, so I don't feel completely off in continuing to say
"discovery" when I mean "find unknown network devices and add them to
Ironic".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing DISCOVERING
to INTROSPECTING in the new state machine spec is a good idea?
As before, I'm uncertain. Discovery is a troublesome term, but too many
people use and recognize it, while IMO "introspecting" is much less
common. So count me as -0 on this.




On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
mailto:sandhya.ganapa...@hp.com>> wrote:

Hi all,

Following the mail thread on disambiguating the term 'discovery' -

Along the lines of what Devananda stated, Hardware Introspection
also means retrieving and storing hardware details of a node whose
credentials and IP address are known to the system. (Correct me if I
am wrong.)

I am currently in the process of extracting hardware details (CPU,
memory, etc.) of a number of nodes belonging to a chassis whose
credentials are already known to ironic. Does this process fall into
the category of hardware introspection?

Thanks,
Sandhya.

-Original Message-
From: Devananda van der Veen [mailto:devananda@gmail.com
]
Sent: Tuesday, October 21, 2014 5:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] disambiguating the term "discovery"

Hi all,

I was reminded in the Ironic meeting today that the words "hardware
discovery" are overloaded and used in different ways by different
people. Since this is something we are going to talk about at the
summit (again), I'd like to start the discussion by building
consensus in the language that we're going to use.

So, I'm starting this thread to explain how I use those two words,
and some other words that I use to mean something else which is what
some people mean when they use those words. I'm not saying my words
are the right words -- they're just the words that make sense to my
brain right now. If someone else has better words, and those words
also make sense (or make more sense) then I'm happy to use those
instead.

So, here are rough definitions for the terms I've been using for the
last six months to disambiguate this:

"hardware discovery"
The process or act of identifying hitherto unknown hardware, which
is addressable by the management system, in order to later make it
available for provisioning and management.

"hardware introspection"
The process or act of gathering information about the properties or
capabilities of hardware already known by the management system.


Why is this disambiguation important? At the last midcycle, we
agreed that "hardware discovery" is out of scope for Ironic --
finding new, unmanaged nodes and enrolling them with Ironic is best
left to other services or processes, at least for the forseeable future.

However, "introspection" is definitely within scope for Ironic. Even
though we couldn't agree on the details during Juno, we are going to
revisit this at the Kilo summit. This is an important feature for
many of our current users, and multiple proof of concept
implementations of this have been done by different parties over the
last year.

It may be entirely possible that no one else in our developer
community is using the term "introspection" in the way that I've
defined it above -- if so, that's fine, I can stop calling that
"introspection", but I don't know a better word for the thing that
is find-unknown-hardware.

Suggestions welcome,
Devananda


P.S.

For what it's worth, googling for "hardware discovery" yields
several results related to identifying unknown network-connected
devices and adding them to inventory systems, which is the way that
I'm using the term right now, so I don't feel completely off in
continuing to say "discovery" when I mean "find unknown network
devices and add them to Ironic".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of 
modules remaining in the incubator. Our notes are in the etherpad [1], but as 
part of the "Write it Down” theme for Oslo this cycle I am also posting a 
summary of the outcome here on the mailing list for wider distribution. Let me know 
if you remembered the outcome for any of these modules differently than what I have 
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
By the way, right now I'm looking into updating this code to be able to
run tasks on a thread pool, not only in one thread (quite a problem for
Ironic). Does it somehow interfere with the graduation? Are there any
deadlines or something?
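
To make the idea concrete, here is a rough sketch (my assumption of the
approach, not existing oslo code) of running periodic tasks on a thread
pool instead of sequentially in a single loop:

    import time
    # 'futures' backport is needed on Python 2
    from concurrent.futures import ThreadPoolExecutor

    def run_periodic_tasks(tasks, interval=60, workers=4):
        executor = ThreadPoolExecutor(max_workers=workers)
        while True:
            # tasks run concurrently, so one slow task no longer delays
            # all the others (the problem we hit in Ironic)
            futures = [executor.submit(task) for task in tasks]
            for f in futures:
                f.result()  # surface exceptions here (or just log them)
            time.sleep(interval)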



request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 12:27 PM, Ganapathy, Sandhya wrote:

Hi All,

Based on the discussions, I have filed a blueprint that initiates discovery of 
node hardware details, given their credentials, at the chassis level. I am in the 
process of creating a spec for it. Do share your thoughts regarding this -

https://blueprints.launchpad.net/ironic/+spec/chassis-level-node-discovery
Hi, and thank you for the suggestion. As already said, this thread is not 
the best place to discuss it, so please file a (short version of a) spec, 
so that we can comment on it.


Thanks,
Sandhya.

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, November 13, 2014 2:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] disambiguating the term "discovery"

On 11/12/2014 10:47 PM, Victor Lowther wrote:

Hmmm... with this thread in mind, anyone think that changing
DISCOVERING to INTROSPECTING in the new state machine spec is a good idea?

As before I'm uncertain. Discovery is a troublesome term, but too many people 
use and recognize it, while IMO introspecting is much less common. So count me 
as -0 on this.



On Mon, Nov 3, 2014 at 4:29 AM, Ganapathy, Sandhya
mailto:sandhya.ganapa...@hp.com>> wrote:

 Hi all,

 Following the mail thread on disambiguating the term 'discovery' -

 In the lines of what Devananda had stated, Hardware Introspection
 also means retrieving and storing hardware details of the node whose
 credentials and IP Address are known to the system. (Correct me if I
 am wrong).

 I am currently in the process of extracting hardware details (cpu,
 memory etc..) of n no. of nodes belonging to a Chassis whose
 credentials are already known to ironic. Does this process fall in
 the category of hardware introspection?

 Thanks,
 Sandhya.

 -Original Message-
 From: Devananda van der Veen [mailto:devananda@gmail.com
 <mailto:devananda@gmail.com>]
 Sent: Tuesday, October 21, 2014 5:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Ironic] disambiguating the term "discovery"

 Hi all,

 I was reminded in the Ironic meeting today that the words "hardware
 discovery" are overloaded and used in different ways by different
 people. Since this is something we are going to talk about at the
 summit (again), I'd like to start the discussion by building
 consensus in the language that we're going to use.

 So, I'm starting this thread to explain how I use those two words,
 and some other words that I use to mean something else which is what
 some people mean when they use those words. I'm not saying my words
 are the right words -- they're just the words that make sense to my
 brain right now. If someone else has better words, and those words
 also make sense (or make more sense) then I'm happy to use those
 instead.

 So, here are rough definitions for the terms I've been using for the
 last six months to disambiguate this:

 "hardware discovery"
 The process or act of identifying hitherto unknown hardware, which
 is addressable by the management system, in order to later make it
 available for provisioning and management.

 "hardware introspection"
 The process or act of gathering information about the properties or
 capabilities of hardware already known by the management system.


 Why is this disambiguation important? At the last midcycle, we
 agreed that "hardware discovery" is out of scope for Ironic --
 finding new, unmanaged nodes and enrolling them with Ironic is best
 left to other services or processes, at least for the forseeable future.

 However, "introspection" is definitely within scope for Ironic. Even
 though we couldn't agree on the details during Juno, we are going to
 revisit this at the Kilo summit. This is an important feature for
 many of our current users, and multiple proof of concept
 implementations of this have been done by different parties over the
 last year.

 It may be entirely possible that no one else in our developer
 community is using the term "introspection" in the way that I've
 defined it above -- if so, that's fine, I can stop calling that
 "introspection", but I don't know a better word for the thing that
 is find-unknown-hardware.

 Suggestions welcome,
 Devananda


 P.S.

 For what it's worth, googling for "hardware discovery" yields
 several results related to identifying unknown network-connected
 devices and adding them to inventory systems, which is the way that
 I'm using the term right now, so I don't feel completely off in
 continuin

Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:54 PM, Doug Hellmann wrote:


On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur  wrote:


On 11/12/2014 08:06 PM, Doug Hellmann wrote:

During our “Graduation Schedule” summit session we worked through the list of 
modules remaining the in the incubator. Our notes are in the etherpad [1], but as 
part of the "Write it Down” theme for Oslo this cycle I am also posting a 
summary of the outcome here on the mailing list for wider distribution. Let me know 
if you remembered the outcome for any of these modules differently than what I have 
written below.

Doug



Deleted or deprecated modules:

funcutils.py - This was present only for python 2.6 support, but it is no 
longer used in the applications. We are keeping it in the stable/juno branch of 
the incubator, and removing it from master (https://review.openstack.org/130092)

hooks.py - This is not being used anywhere, so we are removing it. 
(https://review.openstack.org/#/c/125781/)

quota.py - A new quota management system is being created 
(https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
replace this, so we will keep it in the incubator for now but deprecate it.

crypto/utils.py - We agreed to mark this as deprecated and encourage the use of 
Barbican or cryptography.py (https://review.openstack.org/134020)

cache/ - Morgan is going to be working on a new oslo.cache library as a 
front-end for dogpile, so this is also deprecated 
(https://review.openstack.org/134021)

apiclient/ - With the SDK project picking up steam, we felt it was safe to 
deprecate this code as well (https://review.openstack.org/134024).

xmlutils.py - This module was used to provide a security fix for some XML 
modules that have since been updated directly. It was removed. 
(https://review.openstack.org/#/c/125021/)



Graduating:

oslo.context:
- Dims is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
- includes:
context.py

oslo.service:
- Sachi is driving this
- https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
- includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py

By the way, right now I'm looking into updating this code to be able to run 
tasks on a thread pool, not only in one thread (quite a problem for Ironic). 
Does it somehow interfere with the graduation? Any deadlines or something?


Feature development on code declared ready for graduation is basically frozen 
until the new library is created. You should plan on doing that work in the new 
oslo.service repository, which should be showing up soon. And the feature you 
describe sounds like something for which we would want a spec written, so please 
consider filing one when you have some of the details worked out.
Sure, right now I'm experimenting in the Ironic tree to figure out how it 
really works. There's a single oslo-specs repo for the whole of Oslo, right?







request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

oslo.utils:
- We need to look into how to preserve the git history as we import these 
modules.
- includes:
fileutils.py
versionutils.py



Remaining untouched:

scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
whether Gantt has enough traction yet so we will hold onto these in the 
incubator for at least another cycle.

report/ - There’s interest in creating an oslo.reports library containing this 
code, but we haven’t had time to coordinate with Solly about doing that.



Other work:

We will continue the work on oslo.concurrency and oslo.log that we started 
during Juno.

[1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Dmitry Tantsur

On 11/13/2014 01:15 PM, Lucas Alvares Gomes wrote:

This was discussed in the Contributor Meetup on Friday at the Summit
but I think it's important to share on the mail list too so we can get
more opnions/suggestions/comments about it.

In the Ironic weekly meeting we dedicate a good time of the meeting to
do some announcements, reporting bug status, CI status, oslo status,
specific drivers status, etc... It's all good information, but I
believe that the mail list would be a better place to report it and
then we can free some time from our meeting to actually discuss
things.

Are you guys in favor of it?

If so I'd like to propose a new format based on the discussions we had
in Paris. For the people doing the status report on the meeting, they
would start adding the status to an etherpad and then we would have a
responsible person to get this information and send it to the mail
list once a week.

For the meeting itself we have a wiki page with an agenda[1] which
everyone can edit to put the topic they want to discuss in the meeting
there; I think that's fine and works. The only change would
be that we may want to freeze the agenda 2 days before the meeting, so
people can take a look at the topics that will be discussed and
prepare for them; with that we can move forward quicker with the
discussions because people will be familiar with the topics already.

Let me know what you guys think.
I'm not really fond of it (as with every process complication), but it looks 
inevitable, so +1.




[1] https://wiki.openstack.org/wiki/Meetings/Ironic

Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Proposing new meeting times

2014-11-18 Thread Dmitry Tantsur

On 11/18/2014 02:00 AM, Devananda van der Veen wrote:

Hi all,

As discussed in Paris and at today's IRC meeting [1] we are going to be
alternating the time of the weekly IRC meetings to accommodate our
contributors in EMEA better. No time will be perfect for everyone, but
as it stands, we rarely (if ever) see our Indian, Chinese, and Japanese
contributors -- and it's quite hard for any of the AU / NZ folks to attend.

I'm proposing two sets of times below. Please respond with a "-1" vote
to an option if that option would cause you to miss ALL meetings, or a
"+1" vote if you can magically attend ALL the meetings. If you can
attend, without significant disruption, at least one of the time slots
in a proposal, please do not vote either for or against it. This way we
can identify a proposal which allows everyone to attend at a minimum 50%
of the meetings, and preferentially weight towards one that allows more
contributors to attend two meetings.

This link shows the local times in some major coutries / timezones
around the world (and you can customize it to add your own).
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

For reference, the current meeting time is 1900 UTC.

Option #1: alternate between Monday 1900 UTC && Tuesday 0900 UTC.  I
like this because 1900 UTC spans all of US and western EU, while 0900
combines EU and EMEA. Folks in western EU are "in the middle" and can
attend all meetings.

+1



http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


Option #2: alternate between Monday 1700 UTC && Tuesday 0500 UTC. I like
this because it shifts the current slot two hours earlier, making it
easier for eastern EU to attend without excluding the western US, and
while 0500 UTC is not so late that US west coast contributors can't
attend (it's 9PM for us), it is harder for western EU folks to attend.
There's really no one in the middle here, but there is at least a chance
for US west coast and EMEA to overlap, which we don't have at any other
time.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


I'll collate all the responses to this thread during the week, ahead of
next week's regularly-scheduled meeting.

-Devananda

[1]
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Cleaning up spec review queue.

2014-11-19 Thread Dmitry Tantsur

On 11/18/2014 06:13 PM, Chris K wrote:

Hi all,

In an effort to keep the Ironic specs review queue as up to date as
possible, I have identified several specs that were proposed in the Juno
cycle and have not been updated to reflect the changes to the current
Kilo cycle.

I would like to set a deadline to either update them to reflect the Kilo
cycle or abandon them if they are no longer relevant.
If there are no objections I will abandon any specs on the list below
that have not been updated to reflect the Kilo cycle after the end of
the next Ironic meeting (Nov. 24th 2014).

Below is the list of specs I have identified that would be affected:
https://review.openstack.org/#/c/107344 - *Generic Hardware Discovery Bits*

Killed it with fire :D


https://review.openstack.org/#/c/102557 - *Driver for NetApp storage arrays*
https://review.openstack.org/#/c/108324 - *DRAC hardware discovery*

Imre, are you going to work on it?


https://review.openstack.org/#/c/103065 - *Design spec for iLO driver
for firmware settings*
https://review.openstack.org/#/c/108646 - *Add HTTP GET support for
vendor_passthru API*

This one is replaced by Lucas' work.


https://review.openstack.org/#/c/94923 - *Make the REST API fully
asynchronous*
https://review.openstack.org/#/c/103760 - *iLO Management Driver for
firmware update*
https://review.openstack.org/#/c/110217 - *Cisco UCS Driver*
https://review.openstack.org/#/c/96538 - *Add console log support*
https://review.openstack.org/#/c/100729 - *Add metric reporting spec.*
https://review.openstack.org/#/c/101122 - *Firmware setting design spec.*
https://review.openstack.org/#/c/96545 - *Reset service processor*
*
*
*This list may also be found on this ether pad:
https://etherpad.openstack.org/p/ironic-juno-specs-to-be-removed*
*
*
If you believe one of the above specs should not be abandoned please
update the spec to reflect the current Kilo cycle, or let us know that a
update is forth coming.

Please feel free to reply to this email, I will also bring this topic up
at the next meeting to ensure we have as much visibility as possible
before abandoning the old specs.

Thank you,
Chris Krelle
IRC: NobodyCam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] maintaining backwards compatibility within a cycle

2014-11-20 Thread Dmitry Tantsur

On 11/20/2014 04:38 PM, Ruby Loo wrote:

Hi, we had an interesting discussion on IRC about whether or not we
should be maintaining backwards compatibility within a release cycle. In
this particular case, we introduced a new decorator in this kilo cycle,
and were discussing the renaming of it, and whether it needed to be
backwards compatible to not break any out-of-tree driver using master.

Some of us (ok, me or I) think it doesn't make sense to make sure that
everything we do is backwards compatible. Others disagree and think we
should, or at least strive for 'must be' backwards compatible with the
caveat that there will be cases where this isn't
feasible/possible/whatever. (I hope I captured that correctly.)

Although I can see the merit (well, sort of) of trying our best, trying
doesn't mean 'must', and if it is 'must', who decides what can be
exempted from this, and how will we communicate what is exempted, etc?
It makes sense to try to preserve compatibility, especially for things 
that landed some time ago. For newly invented things, like the decorator, 
however, it makes no sense to me.


People consuming master have to be prepared. That does not mean that we 
should break them every week, obviously, but still. That's why we have 
releases: to promise stability to people. By consuming master you agree 
that things might occasionally break.
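
For what it's worth, the kind of compatibility shim being discussed is cheap
to provide when we do want it; a minimal sketch with hypothetical names (not
the actual Ironic decorator):

    import functools
    import warnings

    def passthru(func):  # the new name (hypothetical)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper._is_passthru = True
        return wrapper

    def old_passthru(func):  # old name kept one cycle for out-of-tree drivers
        warnings.warn('old_passthru is deprecated, use passthru',
                      DeprecationWarning)
        return passthru(func)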


Thoughts?

--ruby


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Do we need an IntrospectionInterface?

2014-11-26 Thread Dmitry Tantsur
Hi all!

As our state machine and discovery discussion proceeds, I'd like to ask
your opinion on whether we need an IntrospectionInterface
(DiscoveryInterface?). The current proposal [1] suggests adding a method for
initiating discovery to the ManagementInterface. IMO it's not 100%
correct, because:
1. It's not management. We're not changing anything.
2. I'm aware that some folks want to use discoverd-based discovery [2] even
for DRAC and iLO (e.g. for vendor-specific additions that can't be
implemented OOB).
Any ideas?

Dmitry.

[1] https://review.openstack.org/#/c/100951/
[2] https://review.openstack.org/#/c/135605/
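
For the sake of discussion, a hypothetical sketch of what a separate
interface could look like, modeled on our existing abstract driver
interfaces (the names and method here are my own, not from the spec):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class IntrospectionInterface(object):
        """Interface for gathering node properties, separate from management."""

        @abc.abstractmethod
        def inspect_hardware(self, task):
            """Collect properties and ports for task.node and update them
            in the database.
            """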
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] ironic driver retries on ironic driver Conflict response

2014-11-28 Thread Dmitry Tantsur

Hi!

On 11/28/2014 11:41 AM, Murray, Paul (HP Cloud) wrote:

Hi All,

Looking at the ironic virt driver code in nova it seems that a Conflict
(409) response from the ironic client results in the driver re-trying
the request. Given the comment below in the ironic code I would imagine
that is not the right behavior – it reads as though this is something
that would fail on the retry as well.

class Conflict(HTTPClientError):
    """HTTP 409 - Conflict.

    Indicates that the request could not be processed because of conflict
    in the request, such as an edit conflict.
    """
    http_status = 409
    message = _("Conflict")

An example of this: if the virt driver attempts to assign an instance
to a node that is in the power-on state, it will issue this Conflict
response.
It's possible that a periodic background process is going on; retrying
makes perfect sense for this case. (We're trying to get away from
background processes causing Conflict, btw.)
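
For illustration, the kind of bounded retry I would consider reasonable
here (a hypothetical helper, not the actual nova driver code): retry a
handful of times for transient conflicts, then re-raise instead of looping
forever on a conflict that will never resolve.

    import time

    def call_with_retries(func, conflict_exc, attempts=3, delay=2):
        for attempt in range(1, attempts + 1):
            try:
                return func()
            except conflict_exc:
                if attempt == attempts:
                    raise           # still conflicting: give up
                time.sleep(delay)   # let the background operation finish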


Have I understood this or is there something about this I am not getting
right?

Paul

Paul Murray

Nova Technical Lead, HP Cloud

+44 117 316 2527

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks
RG12 1HN Registered No: 690597 England. The contents of this message and
any attachments to it are confidential and may be legally privileged. If
you have received this message in error, you should delete it from your
system immediately and advise the sender. To any recipient of this
message within HP, unless otherwise stated you should consider this
message and attachments as "HP CONFIDENTIAL".



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Dmitry Tantsur

Hi folks,

Thank you for the additional explanation, it does clarify things a bit. I'd 
like to note, however, that you talk a lot about how _different_ Fuel 
Agent is from what Ironic does now. I'd actually like to know how well 
it's going to fit into what Ironic does (in addition to your specific 
use cases). Hence my comments inline:


On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

The Fuel use case is not a cloud. Fuel is a deployment tool. We install an OS
on bare metal servers and on VMs
and then configure this OS using Puppet. We have been using Cobbler as
our OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For several reasons we decided to
switch to an image-based approach for installing the OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach).
Moreover, we'd like to implement even more
flexible user experience. We'd like to allow user to choose which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are bootable (if
any), which options to use for mounting file systems.
Many many various cases are possible. If you ask why we'd like to
support all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases can not be implemented as image
internals, some cases can not be also implemented on
configuration stage (placing root fs on lvm device).

Since those use cases were rejected for implementation in terms of
IPA, we implemented the so-called Fuel Agent.
Important Fuel Agent features are:

* It does not have REST API

I would not call it a feature :-P

Speaking seriously, if your agent is a long-running thing and it gets 
its configuration from e.g. a JSON file, how can Ironic notify it of any 
changes?



* It has executable entry point[s]
* It uses a local json file as its input
* It is planned to implement the ability to download input data via HTTP
(a kind of metadata service)
* It is designed to be agnostic to input data format, not only the Fuel
format (data drivers)
* It is designed to be agnostic to image format (tar images, file system
images, disk images; currently fs images)
* It is designed to be agnostic to image compression algorithm
(currently gzip)
* It is designed to be agnostic to image downloading protocol (currently
local file and HTTP link)
Does it support Glance? I understand it's HTTP, but it requires 
authentication.




So, it is clear that, while motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open to
new use cases.
My favorite use case is hardware introspection (aka getting the data 
required for scheduling from a node automatically). Any ideas on this? 
(It's not a priority for this discussion, just curious.)




As for Fuel itself, our nearest plan is to get rid of Cobbler, because
with the image-based approach it is huge overhead. The question is
which tool we can use instead of Cobbler. We need power management,
we need TFTP management, we need DHCP management. That is
exactly what Ironic is able to do. Frankly, we could implement a power/TFTP/DHCP
management tool independently, but as Devananda said, we're all working
on the same problems,
so let's do it together. Power/TFTP/DHCP management is where we are
working on the same problems,
but IPA and Fuel Agent are about different use cases. This case is not
just Fuel; any mature
deployment case requires advanced partition/fs management.
Taking into consideration that you're doing a generic OS installation 
tool... yeah, it starts to make some sense. For a cloud, advanced 
partitioning is definitely a "pet" case.


However, for me it is OK if it is easily possible
to use Ironic with external drivers (not merged into Ironic and not tested
on Ironic CI).

AFAIU, this spec https://review.openstack.org/#/c/138115/ does not
assume changing the Ironic API and core.
Jim asked how Fuel Agent will know about the advanced disk
partitioning scheme if the API is not
changed. The answer is simple: Ironic is supposed to send a link to a
metadata service (HTTP or local file)
from which Fuel Agent can download the input json data.
That's not about not changing Ironic. Changing Ironic is OK for 
reasonable use cases - we're doing a huge change right now to accommodate 
zapping, hardware introspection and RAID configuration.


I actually have problems with this particular statement. It does not 
sound like Fuel Agent will integrate enough with Ironic. This JSON file: 
who is going to generate it? In the most popular use case we're driven 
by Nova. Will Nova generate this file?


If the answer is "generate it manually for ever

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Dmitry Tantsur

On 12/09/2014 03:40 PM, Vladimir Kozhukalov wrote:



Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur mailto:dtant...@redhat.com>> wrote:

Hi folks,

Thank you for additional explanation, it does clarify things a bit.
I'd like to note, however, that you talk a lot about how _different_
Fuel Agent is from what Ironic does now. I'd like actually to know
how well it's going to fit into what Ironic does (in additional to
your specific use cases). Hence my comments inline:



On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

Fuel use case is not a cloud. Fuel is a deployment tool. We
install OS
on bare metal servers and on VMs
and then configure this OS using Puppet. We have been using
Cobbler as
our OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For some reasons we decided to
switch to image based approach for installing OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach).
Moreover, we'd like to implement even more
flexible user experience. We'd like to allow user to choose
which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV
or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are
bootable (if
any), which options to use for mounting file systems.
Many many various cases are possible. If you ask why we'd like to
support all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases can not be implemented as image
internals, some cases can not be also implemented on
configuration stage (placing root fs on lvm device).

As far as those use cases were rejected to be implemented in term of
IPA, we implemented so called Fuel Agent.
Important Fuel Agent features are:

* It does not have REST API

I would not call it a feature :-P

Speaking seriously, if your agent is a long-running thing and it gets
its configuration from e.g. a JSON file, how can Ironic notify it of
any changes?

Fuel Agent is not a long-running service. Currently there is no need to
have a REST API. If we deal with some kind of keep-alive stuff for
inventory/discovery then we will probably add an API. Frankly, the IPA REST
API is not REST at all. However, that is not a reason to not call it a
feature and throw it away. It is a reason to work on it and improve it.
That is how I try to look at things (pragmatically).

Fuel Agent has executable entry point[s] like /usr/bin/provision. You
can run this entry point with options (oslo.config) and point out where
to find the input json data. It is supposed that Ironic will use an ssh
connection (currently in Fuel we use mcollective), run this and wait for the
exit code. If the exit code is equal to 0, provisioning is done. Extremely simple.
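
For illustration only, the driver-side call could look roughly like the
sketch below (using paramiko; the input file path and the
--input_data_file option name are made up for the example, they are not
the real Fuel Agent options):

import json

import paramiko


def provision(host, username, key_file, provision_data):
    """Upload input data and run the Fuel Agent provision entry point."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, key_filename=key_file)
    try:
        # Put the input json where the agent expects to find it
        # (path and option name are illustrative only).
        sftp = client.open_sftp()
        f = sftp.open('/tmp/provision.json', 'w')
        f.write(json.dumps(provision_data))
        f.close()
        # Run the entry point and block until it exits.
        _, stdout, stderr = client.exec_command(
            '/usr/bin/provision --input_data_file /tmp/provision.json')
        exit_code = stdout.channel.recv_exit_status()
        if exit_code != 0:
            raise RuntimeError('Provisioning failed: %s' % stderr.read())
    finally:
        client.close()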

* it has executable entry point[s]
* It uses local json file as it's input
* It is planned to implement ability to download input data via HTTP
(kind of metadata service)
* It is designed to be agnostic to input data format, not only Fuel
format (data drivers)
* It is designed to be agnostic to image format (tar images,
file system
images, disk images, currently fs images)
* It is designed to be agnostic to image compression algorithm
(currently gzip)
* It is designed to be agnostic to image downloading protocol
(currently
local file and HTTP link)

Does it support Glance? I understand it's HTTP, but it requires
authentication.


So, it is clear that being motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open for
new use cases.

My favorite use case is hardware introspection (aka getting data
required for scheduling from a node automatically). Any ideas on
this? (It's not a priority for this discussion, just curious).


That is exactly what we do in Fuel. Currently we use so called 'Default'
pxelinux config and all nodes being powered on are supposed to boot with
so called 'Bootstrap' ramdisk where Ohai based agent (not Fuel Agent)
runs periodically and sends hardware report to Fuel master node.
User then is able to look at CPU, hard drive and network info and choose
which nodes to use for controllers, wh

[openstack-dev] [Ironic] ironic-discoverd status update

2014-12-11 Thread Dmitry Tantsur

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of 
the means to do hardware inspection for Ironic (see e.g. spec [2]), so I 
decided it's worth to give some updates to the community from time to 
time. This email is purely informative, you may safely skip it, if 
you're not interested.


Background
==

The discoverd project (I usually skip the "ironic-" part when talking 
about it) solves the problem of populating information about a node in 
Ironic database without help of any vendor-specific tool. This 
information usually includes Nova scheduling properties (CPU, RAM, disk 
size) and MAC's for ports.


Introspection is done by booting a ramdisk on a node, collecting data 
there and posting it back to discoverd HTTP API. Thus actually discoverd 
consists of 2 components: the service [1] and the ramdisk [3]. The 
service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for 
introspection does not interfere with Neutron
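
To give an idea of the data flow, here is a rough sketch of the
ramdisk-side call (the endpoint path, port and field names are
assumptions for illustration; see the real ramdisk sources in [3]):

import json

import requests

# Data collected on the booted node by the ramdisk; values are illustrative.
node_data = {
    'cpus': 8,
    'cpu_arch': 'x86_64',
    'memory_mb': 16384,
    'local_gb': 100,
    'interfaces': {'eth0': {'mac': '52:54:00:12:34:56', 'ip': '10.0.0.5'}},
}

# Post the JSON object back to the discoverd service.
resp = requests.post('http://127.0.0.1:5050/v1/continue',
                     data=json.dumps(node_data),
                     headers={'Content-Type': 'application/json'})
resp.raise_for_status()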


The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and it's RPM is a part of Juno 
RDO. After the Paris summit, we agreed on bringing it closer to the 
Ironic upstream, and now discoverd is hosted on StackForge and tracks 
bugs on Launchpad.


Future
==

The basic feature of discoverd: supply Ironic with properties required 
for scheduling, is pretty finished as of the latest stable series 0.2.


However, more features are planned for release 1.0.0 this January [5]. 
They go beyond the bare minimum of finding out CPU, RAM, disk size and 
NIC MAC's.


Plugability
~~~

An interesting feature of discoverd is support for plugins, which I 
prefer to call hooks. It's possible to hook into the introspection data 
processing chain in 2 places:
* Before any data processing. This opens an opportunity to adapt discoverd 
to ramdisks that have a different data format. The only requirement is 
that the ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for 
MAC's, but before any actual data update. This gives an opportunity to 
alter, which properties discoverd is going to update.


Actually, even the default logic of updating Node.properties is contained 
in a plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py 
[6]. This plugability opens wide opportunities for integrating with 3rd 
party ramdisks and CMDB's (which, as we know, Ironic is not ;).
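
To illustrate the idea, a hypothetical hook for the second extension
point could look roughly like this. The method names and signatures
below are assumptions, not the exact discoverd interface - see
ironic_discoverd/plugins/base.py and [6] for the real base class and how
hooks are enabled:

class ExampleDiskSizeHook(object):
    """Example hook: leave 1 GiB of the reported disk unaccounted for."""

    def before_processing(self, node_info):
        # First extension point: raw JSON posted by the ramdisk, before
        # any processing. A good place to translate a foreign data format.
        pass

    def before_update(self, node, ports, node_info):
        # Second extension point: the node has been found in Ironic and
        # ports are created, but properties are not updated yet. Here we
        # shrink the disk size that will be written to the node properties.
        local_gb = int(node_info.get('local_gb', 0))
        if local_gb > 1:
            node_info['local_gb'] = local_gb - 1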


Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent 
set of patches [7] introduces a possibility to request manual power on 
of the machine and update IPMI credentials via the ramdisk to the 
expected values. Note that support of this feature in the reference 
ramdisk [3] is not ready yet. Also note that this scenario is only 
possible when using discoverd directly via its API, not via the Ironic API 
like in [2].


Get Involved


Discoverd terribly lacks reviews. Our team is very small and 
self-approving is not a rare case. I'm not even against fast-tracking 
any existing Ironic core to a discoverd core after a couple of 
meaningful reviews :)


And of course patches are welcome, especially plugins for integration 
with existing systems doing similar things and CMDB's. Patches are 
accepted via usual Gerrit workflow. Ideas are accepted as Launchpad 
blueprints (we do not follow the Gerrit spec process right now).


Finally, please comment on the Ironic spec [2], I'd like to know what 
you think.


References
==

[1] https://pypi.python.org/pypi/ironic-discoverd
[2] https://review.openstack.org/#/c/135605/
[3] 
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk

[4] https://github.com/agroup/instack-undercloud/
[5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
[6] 
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py
[7] 
https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Dmitry Tantsur
On Thu, 2014-05-22 at 09:48 +0100, Lucas Alvares Gomes wrote:
> On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
>  wrote:
> > I'd like to bring up the topic of drivers which, for one reason or another,
> > are probably never going to have third party CI testing.
> >
> > Take for example the iBoot driver proposed here:
> >   https://review.openstack.org/50977
> >
> > I would like to encourage this type of driver as it enables individual
> > contributors, who may be using off-the-shelf or home-built systems, to
> > benefit from Ironic's ability to provision hardware, even if that hardware
> > does not have IPMI or another enterprise-grade out-of-band management
> > interface. However, I also don't expect the author to provide a full
> > third-party CI environment, and as such, we should not claim the same level
> > of test coverage and consistency as we would like to have with drivers in
> > the gate.
> 
> +1
But we'll still expect unit tests that work via mocking their 3rd party
library (for example), right?

> 
> >
> > As it is, Ironic already supports out-of-tree drivers. A python module that
> > registers itself with the appropriate entrypoint will be made available if
> > the ironic-conductor service is configured to load that driver. For what
> > it's worth, I recall Nova going through a very similar discussion over the
> > last few cycles...
> >
> > So, why not just put the driver in a separate library on github or
> > stackforge?
> 
> I would like to have this drivers within the Ironic tree under a
> separated directory (e.g /drivers/staging/, not exactly same but kinda
> like what linux has in their tree[1]). The advatanges of having it in
> the main ironic tree is because it makes it easier to other people
> access the drivers, easy to detect and fix changes in the Ironic code
> that would affect the driver, share code with the other drivers, add
> unittests and provide a common place for development.
I do agree, that having these drivers in-tree would make major changes
much easier for us (see also above about unit tests).

> 
> We can create some rules for people who are thinking about submitting
> their driver under the staging directory, it should _not_ be a place
> where you just throw the code and forget it, we would need to agree
> that the person submitting the code will also babysit it, we also
> could use the same process for all the other drivers wich wants to be
> in the Ironic tree to be accepted which is going through ironic-specs.
+1

> 
> Thoughts?
> 
> [1] http://lwn.net/Articles/285599/
> 
> Cheers,
> Lucas
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TripleO] virtual-ironic job now voting!

2014-05-25 Thread Dmitry Tantsur
Great news! Even being non-voting, it already helped me 2-3 times to
spot a subtle error in a patch.

On Fri, 2014-05-23 at 18:56 -0700, Devananda van der Veen wrote:
> Just a quick heads up to everyone -- the tempest-dsvm-virtual-ironic
> job is now fully voting in both check and gate queues for Ironic. It's
> also now symmetrically voting on diskimage-builder, since that tool is
> responsible for building the deploy ramdisk used by this test.
> 
> 
> Background: We discussed this prior to the summit, and agreed to
> continue watching the stability of the job through the summit week.
> It's been reliable for over a month now, and I've seen it catch
> several real issues, both in Ironic and in other projects, and all the
> core reviewers I spoke lately have been eager to enable voting on this
> test. So, it's done!
> 
> 
> Cheers,
> Devananda
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
Hi Ironic folks, hi Devananda!

I'd like to share with you my thoughts on asynchronous API, which is
spec https://review.openstack.org/#/c/94923
First I planned this as comments on the review, but it proved to be
much larger, so I'm posting it for discussion on the ML.

Here is a list of different considerations I'd like to take into account
when prototyping async support; some are reflected in the spec already, some
are from my and others' comments:

1. "Executability"
We need to make sure that the request can theoretically be executed,
which includes:
a) Validating the request body
b) For each of the entities (e.g. nodes) touched, checking that they are
   available at the moment (at least exist).
   This is arguable, as checking for entity existence requires going to
the DB.

2. Appropriate state
For each entity in question, ensure that it's either in a proper state
or moving to a proper state.
It would help avoid users e.g. setting deploy twice on the same node.
It will still require some kind of NodeInAWrongStateError, but we won't
necessarily need a client retry on this one.

Allowing the entity to be _moving_ to an appropriate state gives us a
problem:
Imagine OP1 was running and OP2 got scheduled, hoping that OP1 will come
to the desired state. What if OP1 fails? What if the conductor doing OP1
crashes?
That's why we may want to approve only operations on entities that do not
undergo state changes. What do you think?

A similar problem exists with checking node state.
Imagine we schedule OP2 while we had OP1 - a regular node state check.
OP1 discovers that the node is actually absent and puts it into maintenance
state.
What to do with OP2?
a) The obvious answer is to fail it
b) Can we make the client wait for the results of the periodic check?
   That is, wait for OP1 _before scheduling_ OP2?

Anyway, this point requires some state framework that knows about states,
transitions, actions and their compatibility with each other.

3. Status feedback
People would like to know how things are going with their task.
What they know is that their request was scheduled. Options:
a) Poll: return some REQUEST_ID and expect users to poll some endpoint.
   Pros:
   - Should be easy to implement
   Cons:
   - Requires persistent storage for tasks. Does AMQP allow doing these
     kinds of queries? If not, we'll need to duplicate tasks in the DB.
   - Increased load on API instances and the DB
b) Callback: take an endpoint, call it once the task is done/fails.
   Pros:
   - Less load on both client and server
   - Answer exactly when it's ready
   Cons:
   - Will not work for the CLI and similar
   - If the conductor crashes, there will be no callback.

Seems like we'd want both (a) and (b) to comply with current needs.

If we have a state framework from (2), we can also add notifications to
it.
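
To make option (a) a bit more concrete, a client-side polling loop could
look roughly like this. The request id, the /v1/requests endpoint and the
task states are purely illustrative - nothing like this exists in the API
today, it's only a sketch of the proposed interaction:

import json
import time

import requests

BASE = 'http://ironic-api.example.com:6385'   # illustrative address
NODE = BASE + '/v1/nodes/3f1c...'             # illustrative node

# Schedule an asynchronous operation; assume the API answers with 202
# and a request id that can be polled for.
resp = requests.put(NODE + '/states/provision',
                    data=json.dumps({'target': 'active'}),
                    headers={'Content-Type': 'application/json'})
request_id = resp.json()['request_id']        # hypothetical field

# Poll until the task reaches a final state.
while True:
    task = requests.get(BASE + '/v1/requests/' + request_id).json()
    if task['state'] in ('done', 'error'):
        break
    time.sleep(5)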

4. Debugging consideration
a) This is an open question: how to debug if we have a lot of requests
   and something went wrong?
b) One more thing to consider: how to make a command like `node-show` aware of
   scheduled transitioning, so that people don't try operations that are
   doomed to failure.

5. Performance considerations
a) With the async approach, users will be able to schedule a nearly unlimited
   number of tasks, thus essentially blocking the work of Ironic, without any
   signs of the problem (at least for some time).
   I think there are 2 common answers to this problem:
   - Request throttling: disallow a user to make too many requests in some
     amount of time. Send them 503 with the Retry-After header set
     (see the client-side sketch after this section).
   - Queue management: watch the queue length, deny new requests if it's too
     large.
   This means actually getting back error 503 and will require retrying again!
   At least it will be an exceptional case, and won't affect Tempest runs...
b) The state framework from (2), if invented, can become a bottleneck as well.
   Especially with the polling approach.
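
The client-side handling of throttling mentioned in (5a) could be as
simple as the following generic retry helper (purely illustrative, not
something any of our clients does today):

import time

import requests


def request_with_retries(method, url, max_retries=5, **kwargs):
    """Retry a request while the server throttles us with 503."""
    for attempt in range(max_retries):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code != 503:
            return resp
        # Honour Retry-After if the server sent it, otherwise back off
        # exponentially.
        delay = int(resp.headers.get('Retry-After', 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError('Server is still overloaded after %d retries'
                       % max_retries)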

6. Usability considerations
a) People will be unaware of when and whether their request is going to be
   finished. As they will be tempted to retry, we may get flooded by
   duplicates. I would suggest at least making it possible to request canceling
   any task (which will be possible only if it is not started yet,
   obviously).
b) We should try to avoid scheduling contradictory requests.
c) Can we somehow detect duplicated requests and ignore them?
   E.g. we don't want a user to make 2-3-4 reboots in a row just because the
   user was not patient enough.

--

Possible takeaways from this letter:
- We'll need at least throttling to avoid DoS
- We'll still need handling of 503 error, though it should not happen
under
  normal conditions
- Think about state framework that unifies all this complex logic with
features:
  * Track entities, their states and actions on entities
  * Check whether new action is compatible with states of entities it
touches
and with other ongoing and scheduled actions on these entities.
  * Handle notifications for finished and failed actions by providing
both
pull and push approaches.
  * Track whether started action is still executed,

Re: [openstack-dev] [Ironic] Random thoughts on asynchronous API spec

2014-05-28 Thread Dmitry Tantsur
A task scheduler responsibility: this is basically a
> state check before task is scheduled, and it should be
> done one more time once the task is started, as
> mentioned above.
>  
> c) Can we somehow detect duplicated requests
> and ignore them?
>E.g. we won't want user to make 2-3-4
> reboots in a row just because
> the user
>was not patient enough.
> 
> 
> Queue similar tasks. All the users will be pointed to
> the similar task resource, or maybe to a different
> resources which tied to the same conductor action. 
>  
> Best regards,
> Max Lobur,
> Python Developer, Mirantis, Inc.
> Mobile: +38 (093) 665 14 28
> Skype: max_lobur
> 38, Lenina ave. Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
> 
> 
> On Wed, May 28, 2014 at 5:10 PM, Lucas Alvares
> Gomes  wrote:
> On Wed, May 28, 2014 at 2:02 PM, Dmitry
> Tantsur  wrote:
> > Hi Ironic folks, hi Devananda!
> >
> > I'd like to share with you my thoughts on
> asynchronous API, which is
> > spec https://review.openstack.org/#/c/94923
> > First I was planned this as comments to the
> review, but it proved to be
> > much larger, so I post it for discussion on
> ML.
> >
> > Here is list of different consideration, I'd
> like to take into account
> > when prototyping async support, some are
> reflected in spec already, some
> > are from my and other's comments:
> >
> > 1. "Executability"
> > We need to make sure that request can be
> theoretically executed,
> > which includes:
> > a) Validating request body
> > b) For each of entities (e.g. nodes)
> touched, check that they are
> > available
> >at the moment (at least exist).
> >This is arguable, as checking for entity
> existence requires going to
> > DB.
> 
> >
> 
> > 2. Appropriate state
> > For each entity in question, ensure that
> it's either in a proper state
> > or
> > moving to a proper state.
> > It would help avoid users e.g. setting
> deploy twice on the same node
> > It will still require some kind of
> NodeInAWrongStateError, but we won't
> > necessary need a client retry on this one.
> >
> > Allowing the entity to be _moving_ to
> appropriate state gives us a
> > problem:
> > Imagine OP1 was running and OP2 got
> scheduled, hoping that OP1 will come
> > to desired state. What if OP1 fails? What if
> conductor, doing OP1
> > crashes?
> > That's why we may want to approve only
> operations on entities that do
> > not
> > undergo state changes. What do you think?
> >
> > Similar problem with checking node state.
> > Imagine we schedule OP2 while we had OP1 -
>  

[openstack-dev] [Ironic] Proposal for shared review dashboard

2014-06-02 Thread Dmitry Tantsur
Hi folks,

Inspired by great work by Sean Dague [1], I have created a review
dashboard for Ironic projects. Main ideas:

Ordering:
0. Viewer's own patches, that have any kind of negative feedback
1. Specs
2. Changes w/o negative feedback, with +2 already
3. Changes that did not have any feedback for 5 days
4. Changes without negative feedback (no more than 50)
5. Other changes (no more than 20)

Shows only verified patches, except for 0 and 5.
Never shows WIP patches.

I'll be thankful for any tips on how to include prioritization from
Launchpad bugs.

Short link: http://goo.gl/hqRrRw
Long link: [2]

Source code (will create PR after discussion on today's meeting): 
https://github.com/Divius/gerrit-dash-creator
To generate a link, use:
$ ./gerrit-dash-creator dashboards/ironic.dash

Dmitry.

[1] https://github.com/Divius/gerrit-dash-creator
[2] https://review.openstack.org/#/dashboard/?foreach=%28project%
3Aopenstack%2Fironic+OR+project%3Aopenstack%2Fpython-ironicclient+OR
+project%3Aopenstack%2Fironic-python-agent+OR+project%3Aopenstack%
2Fironic-specs%29+status%3Aopen+NOT+label%3AWorkflow%3C%3D-1+NOT+label%
3ACode-Review%3C%3D-2+NOT+label%3AWorkflow%3E%3D1&title=Ironic+Inbox&My
+Patches+Requiring+Attention=owner%3Aself+%28label%3AVerified-1%
252cjenkins+OR+label%3ACode-Review-1%29&Ironic+Specs=NOT+owner%3Aself
+project%3Aopenstack%2Fironic-specs&Needs+Approval=label%3AVerified%3E%
3D1%252cjenkins+NOT+owner%3Aself+label%3ACode-Review%3E%3D2+NOT+label%
3ACode-Review-1&5+Days+Without+Feedback=label%3AVerified%3E%3D1%
252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%2Fironic-specs+NOT
+label%3ACode-Review%3C%3D2+age%3A5d&No+Negative+Feedback=label%
3AVerified%3E%3D1%252cjenkins+NOT+owner%3Aself+NOT+project%3Aopenstack%
2Fironic-specs+NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%
3E%3D2+limit%3A50&Other=label%3AVerified%3E%3D1%252cjenkins+NOT+owner%
3Aself+NOT+project%3Aopenstack%2Fironic-specs+label%3ACode-Review-1
+limit%3A20



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Review dashboard update

2014-06-03 Thread Dmitry Tantsur
Hi everyone!

It's hard to stop polishing things, and today I got an updated review
dashboard. Its sources are merged into Sean Dague's repository [1], so I
expect this to be the final version. Thank you everyone for numerous
comments and suggestions, especially Ruby Loo.

Here is nice link to it: http://perm.ly/ironic-review-dashboard

Major changes since previous edition:
- "My Patches Requiring Attention" section - all your patches that are
either WIP or have any -1.
- "Needs Reverify" - approved changes that failed Jenkins verification
- Added last section with changes that either WIP or got -1 from Jenkins
(all other sections do not include these).
- Specs section also shows WIP specs

I know someone requested a dashboard with the IPA subproject highlighted - I
can do such things on a case-by-case basis - ping me on IRC.

Hope this will be helpful :)

Dmitry.

[1] https://github.com/sdague/gerrit-dash-creator


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
Hi!

The workflow is not entirely documented yet AFAIK. After PXE boots the deploy
kernel and ramdisk, the ramdisk exposes the hard drive via iSCSI and notifies
Ironic. After that Ironic partitions the disk, copies an image and reboots the
node with the final kernel and ramdisk.

On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> Hi, All:
> 
> I searched a lot about how ironic automatically install image
> on bare metal. But there seems to be no clear workflow out there.
> 
> What I know is, in traditional PXE, a bare metal pull image
> from PXE server using tftp. In tftp root, there is a ks.conf which
> tells tftp which image to kick start.
> 
> But in ironic there is no ks.conf pointed in tftp. How do bare
> metal know which image to install ? Is there any clear workflow where
> I can read ?
> 
> 
> 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> Hi,
> 
> Thank you very much for your reply !
> 
> But there are still some questions for me. Now I've come to the step
> where ironic partitions the disk as you replied.
> 
> Then, how does ironic copies an image ? I know the image comes from
> glance. But how to know image is really available when reboot? 
I don't quite understand your question - what do you mean by "available"?
Anyway, before deploying, Ironic downloads the image from Glance, caches it
and just copies it to a mounted iSCSI partition (using dd or so); see the
rough sketch below.
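
Very roughly, and heavily simplified (the real Ironic deploy code also
handles partitioning, swap, error handling and iSCSI logout; all argument
values below are illustrative), that step boils down to:

import subprocess


def copy_image_to_iscsi(portal, iqn, image_path, root_device):
    """Simplified sketch: attach the node's iSCSI target and dd the image."""
    # Attach the iSCSI target exposed by the deploy ramdisk.
    subprocess.check_call(['iscsiadm', '-m', 'node', '-p', portal,
                           '-T', iqn, '--login'])
    # Write the cached image onto the root partition.
    subprocess.check_call(['dd', 'if=%s' % image_path,
                           'of=%s' % root_device, 'bs=1M', 'oflag=direct'])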

> 
> And, what are the differences between final kernel (ramdisk) and
> original kernel (ramdisk) ? 
We have 2 sets of kernel+ramdisk:
1. Deploy k+r: these are used only for the deploy process itself to provide
the iSCSI volume and call back to Ironic. There's an ongoing effort to create
a smarter ramdisk, called Ironic Python Agent, but it's WIP.
2. Your k+r as stated in the Glance metadata for an image - they will be
used for booting after deployment.

> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> 
> 
> 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur :
> Hi!
> 
> Workflow is not entirely documented by now AFAIK. After PXE
> boots deploy
> kernel and ramdisk, it exposes hard drive via iSCSI and
> notifies Ironic.
> After that Ironic partitions the disk, copies an image and
> reboots node
> with final kernel and ramdisk.
> 
> On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > Hi, All:
> >
> > I searched a lot about how ironic automatically
> install image
> > on bare metal. But there seems to be no clear workflow out
> there.
> >
> > What I know is, in traditional PXE, a bare metal
> pull image
> > from PXE server using tftp. In tftp root, there is a ks.conf
> which
> > tells tftp which image to kick start.
> >
> > But in ironic there is no ks.conf pointed in tftp.
> How do bare
> > metal know which image to install ? Is there any clear
> workflow where
> > I can read ?
> >
> >
> >
> >
> > Best Regards!
> > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> Thank you !
> 
> I noticed the two sets of k+r in tftp configuration of ironic.
> 
> Should the two sets be the same k+r ?
Deploy images are created for you by DevStack/whatever. If you do it by
hand, you may use diskimage-builder. Currently they are stored in flavor
metadata, will be stored in node metadata later.

And than you have "production" images that are whatever you want to
deploy and they are stored in Glance metadata for the instance image.

TFTP configuration should be created automatically, I doubt you should
change it anyway.

> 
> The first set is defined in the ironic node definition. 
> 
> How do we define the second set correctly ? 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> ------
> 
> 
> 
> 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur :
> On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > Hi,
> >
> > Thank you very much for your reply !
> >
> > But there are still some questions for me. Now I've come to
> the step
> > where ironic partitions the disk as you replied.
> >
> > Then, how does ironic copies an image ? I know the image
> comes from
> > glance. But how to know image is really available when
> reboot?
> 
> I don't quite understand your question, what do you mean by
> "available"?
> Anyway, before deploying Ironic downloads image from Glance,
> caches it
> and just copies to a mounted iSCSI partition (using dd or so).
> 
> >
> > And, what are the differences between final kernel (ramdisk)
> and
> > original kernel (ramdisk) ?
> 
> We have 2 sets of kernel+ramdisk:
> 1. Deploy k+r: these are used only for deploy process itself
> to provide
> iSCSI volume and call back to Ironic. There's ongoing effort
> to create
> smarted ramdisk, called Ironic Python Agent, but it's WIP.
> 2. Your k+r as stated in Glance metadata for an image - they
> will be
> used for booting after deployment.
> 
> >
> > Best Regards!
>     > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> >
> >
> > 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur
> :
> > Hi!
> >
> > Workflow is not entirely documented by now AFAIK.
> After PXE
> > boots deploy
> > kernel and ramdisk, it exposes hard drive via iSCSI
> and
> > notifies Ironic.
> > After that Ironic partitions the disk, copies an
> image and
> > reboots node
> > with final kernel and ramdisk.
> >
> > On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > > Hi, All:
> > >
> > > I searched a lot about how ironic
> automatically
> > install image
> > > on bare metal. But there seems to be no clear
> workflow out
> > there.
> > >
> > > What I know is, in traditional PXE, a bare
> metal
> > pull image
> > > from PXE server using tftp. In tftp root, there is
> a ks.conf
> > which
> > > tells tftp which image to kick start.
> > >
> > > But in ironic there is no ks.conf pointed
> in tftp.
> > How do bare
> > > metal know which image to install ? Is there any
> clear
> > workflow where
> > > I can read ?
> > >
> > >
> > >
> > >
> > > Best Regards!
> > > Chao Yan
> > > --
> > > My twitter:Andy Yan @yanchao727
> > > My Weibo:http://weibo.com/herewearenow
> >   

Re: [openstack-dev] [ironic workflow question]

2014-06-04 Thread Dmitry Tantsur
On Wed, 2014-06-04 at 21:51 +0800, 严超 wrote:
> Yes, but when you assign a "production" image to an ironic bare metal
> node. You should provide ramdisk_id and kernel_id. 
What do you mean by "assign" here? Could you quote some documentation?
Instance image is "assigned" using --image argument to `nova boot`, k&r
are fetched from it's metadata.

Deploy k&r are currently taken from flavor provided by --flavor argument
(this will change eventually).
If you're using e.g. DevStack, you don't even touch deploy k&r, they're
bound to flavor "baremetal".

Please see quick start guide for hints on this:
http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html

> 
> Should the ramdisk_id and kernel_id be the same as deploy images (aka
> the first set of k+r) ?
> 
> You didn't answer me if the two sets of r + k should be the same ? 
> 
> 
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
> 
> 
> 
> 2014-06-04 21:27 GMT+08:00 Dmitry Tantsur :
> On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> > Thank you !
> >
> > I noticed the two sets of k+r in tftp configuration of
> ironic.
> >
> > Should the two sets be the same k+r ?
> 
> Deploy images are created for you by DevStack/whatever. If you
> do it by
> hand, you may use diskimage-builder. Currently they are stored
> in flavor
> metadata, will be stored in node metadata later.
> 
> And than you have "production" images that are whatever you
> want to
> deploy and they are stored in Glance metadata for the instance
> image.
> 
> TFTP configuration should be created automatically, I doubt
> you should
> change it anyway.
> 
> >
> > The first set is defined in the ironic node definition.
> >
> > How do we define the second set correctly ?
> >
> > Best Regards!
> > Chao Yan
> > --
> > My twitter:Andy Yan @yanchao727
> > My Weibo:http://weibo.com/herewearenow
> > --
> >
> >
> >
> > 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur
> :
> > On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > > Hi,
> > >
> > > Thank you very much for your reply !
> > >
> > > But there are still some questions for me. Now
> I've come to
> > the step
> > > where ironic partitions the disk as you replied.
> > >
> > > Then, how does ironic copies an image ? I know the
> image
> > comes from
> > > glance. But how to know image is really available
> when
> > reboot?
> >
> > I don't quite understand your question, what do you
> mean by
> > "available"?
> > Anyway, before deploying Ironic downloads image from
> Glance,
> > caches it
> > and just copies to a mounted iSCSI partition (using
> dd or so).
> >
> > >
> > > And, what are the differences between final kernel
> (ramdisk)
> > and
> > > original kernel (ramdisk) ?
> >
> > We have 2 sets of kernel+ramdisk:
> > 1. Deploy k+r: these are used only for deploy
> process itself
> > to provide
> > iSCSI volume and call back to Ironic. There's
> ongoing effort
> > to create
> > smarted ramdisk, called Ironic Python Agent, but
>     it's WIP.
> > 2. Your k+r as stated in Glance metadata for an
> image - they
> > will be
> > used for booting after deployment.
> >
> > >
> > > Best Regards!
> > > Chao Yan
> > > --
> > > My twitter:Andy Yan @yanchao727
> > > My Weibo:http:

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-05 Thread Dmitry Tantsur

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for 
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of the means 
to do hardware inspection for Ironic (see e.g. spec [2]), so I decided it's 
worth to give some updates to the community from time to time. This email is 
purely informative, you may safely skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking about it) 
solves the problem of populating information about a node in Ironic database without help 
of any vendor-specific tool. This information usually includes Nova scheduling properties 
(CPU, RAM, disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and 
posting it back to discoverd HTTP API. Thus actually discoverd consists of 2 
components: the service [1] and the ramdisk [3]. The service handles 2 major 
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for introspection does 
not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and it's RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties required for 
scheduling, is pretty finished as of the latest stable series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to 
call hooks. It's possible to hook into the introspection data processing chain 
in 2 places:
* Before any data processing. This opens opportunity to adopt discoverd to 
ramdisks that have different data format. The only requirement is that the 
ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for MAC's, but 
before any actual data update. This gives an opportunity to alter, which 
properties discoverd is going to update.

Actually, even the default logic of update Node.properties is contained in a 
plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd party 
ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent set of 
patches [7] introduces a possibility to request manual power on of the machine 
and update IPMI credentials via the ramdisk to the expected values. Note that 
support of this feature in the reference ramdisk [3] is not ready yet. Also 
note that this scenario is only possible when using discoverd directly via it's 
API, not via Ironic API like in [2].

Get Involved


Discoverd terribly lacks reviews. Out team is very small and self-approving is 
not a rare case. I'm even not against fast-tracking any existing Ironic core to 
a discoverd core after a couple of meaningful reviews :)

And of course patches are welcome, especially plugins for integration with 
existing systems doing similar things and CMDB's. Patches are accepted via 
usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not 
follow the Gerrit spec process right now).

Finally, please comment on the Ironic spec [2], I'd like to know what you think.

References
==

[1] https://pypi.python.org/pypi/ironic-discoverd
[2] https://review.openstack.org/#/c/135605/
[3]
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk
[4] https://github.com/agroup/instack-undercloud/
[5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
[6]
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py
[7]
https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials

__

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Dmitry Tantsur

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean when you 
create an ironic node, it will start discover in the background. So we don't 
need two services?
Well, the decision at the summit was that it's better to keep it 
separate. Please see https://review.openstack.org/#/c/135605/ for 
details on the future interaction between discoverd and Ironic.



Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for 
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one of the means 
to do hardware inspection for Ironic (see e.g. spec [2]), so I decided it's 
worth to give some updates to the community from time to time. This email is 
purely informative, you may safely skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking
about it) solves the problem of populating information about a node in
Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting data there and 
posting it back to discoverd HTTP API. Thus actually discoverd consists of 2 
components: the service [1] and the ramdisk [3]. The service handles 2 major 
tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after we 
discovered that this change is going to be too intrusive. Discoverd was 
actively tested as part of Instack [4] and it's RPM is a part of Juno RDO. 
After the Paris summit, we agreed on bringing it closer to the Ironic upstream, 
and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties required for 
scheduling, is pretty finished as of the latest stable series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC 
MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I prefer to 
call hooks. It's possible to hook into the introspection data processing chain 
in 2 places:
* Before any data processing. This opens opportunity to adopt discoverd to 
ramdisks that have different data format. The only requirement is that the 
ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for MAC's, but 
before any actual data update. This gives an opportunity to alter, which 
properties discoverd is going to update.

Actually, even the default logic of update Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd party 
ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent set of 
patches [7] introduces a possibility to request manual power on of the machine 
and update IPMI credentials via the ramdisk to the expected values. Note that 
support of this feature in the reference ramdisk [3] is not ready yet. Also 
note that this scenario is only possible when using discoverd directly via it's 
API, not via Ironic API like in [2].

Get Involved


Discoverd terribly lacks reviews. Out team is very small and
self-approving is not a rare case. I'm even not against fast-tracking
any existing Ironic core to a discoverd core after a couple of
meaningful reviews :)

And of course patches are welcome, especially plugins for integration with 
existing systems doing similar things and CMDB's. Patches are accepted via 
usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not 
follow

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-07 Thread Dmitry Tantsur

On 01/07/2015 03:44 PM, Matt Keenan wrote:

On 01/07/15 14:24, Kumar, Om (Cloud OS R&D) wrote:

If it's a separate project, can it be extended to perform out of band
discovery too..? That way there will be a single service to perform
in-band as well as out of band discoveries.. May be it could follow
driver framework for discovering nodes, where one driver could be
native (in-band) and other could be iLO specific etc...



I believe the following spec outlines plans for out-of-band discovery:
   https://review.openstack.org/#/c/100951/
Right, so Ironic will have drivers, one of which (I hope) will be a 
driver for discoverd.




No idea what the progress is with regard to implementation within the
Kilo cycle though.

For now we hope to get it merged in K.



cheers

Matt


Just a thought.

-Om

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean
when you create an ironic node, it will start discover in the
background. So we don't need two services?

Well, the decision on the summit was that it's better to keep it
separate. Please see https://review.openstack.org/#/c/135605/ for
details on future interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-----
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one
of the means to do hardware inspection for Ironic (see e.g. spec
[2]), so I decided it's worth to give some updates to the community
from time to time. This email is purely informative, you may safely
skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when talking
about it) solves the problem of populating information about a node
in Ironic database without help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM,
disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting
data there and posting it back to discoverd HTTP API. Thus actually
discoverd consists of 2 components: the service [1] and the ramdisk
[3]. The service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in
Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself after
we discovered that this change is going to be too intrusive.
Discoverd was actively tested as part of Instack [4] and it's RPM is
a part of Juno RDO. After the Paris summit, we agreed on bringing it
closer to the Ironic upstream, and now discoverd is hosted on
StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties
required for scheduling, is pretty finished as of the latest stable
series 0.2.

However, more features are planned for release 1.0.0 this January [5].
They go beyond the bare minimum of finding out CPU, RAM, disk size
and NIC MAC's.

Plugability
~~~

An interesting feature of discoverd is support for plugins, which I
prefer to call hooks. It's possible to hook into the introspection
data processing chain in 2 places:
* Before any data processing. This opens opportunity to adopt
discoverd to ramdisks that have different data format. The only
requirement is that the ramdisk posts a JSON object.
* After a node is found in Ironic database and ports are created for
MAC's, but before any actual data update. This gives an opportunity
to alter, which properties discoverd is going to update.

Actually, even the default logic of update Node.properties is
contained in a plugin - see SchedulerHook in
ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with
3rd party ramdisks and CMDB's (which as we know Ironic is not ;).

Enrolling
~

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-08 Thread Dmitry Tantsur

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS R&D) wrote:

My understanding of discovery was to get all details for a node and then 
register that node to ironic. i.e. Enrollment of the node to ironic. Pardon me 
if it was out of line with your understanding of discovery.
That's why we agreed to use the terms inspection/introspection :) Sorry for 
not being consistent here (the name 'discoverd' is pretty old and hard to 
change).


Discoverd does not enroll nodes. While possible, I'm somewhat resistant 
to making it do enrolling, mostly because I want it to be a user-controlled 
process.




What I understand from the below mentioned spec is that the Node is registered, 
but the spec will help ironic discover other properties of the node.

That's what discoverd does currently.



-Om

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 20:20
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 03:44 PM, Matt Keenan wrote:

On 01/07/15 14:24, Kumar, Om (Cloud OS R&D) wrote:

If it's a separate project, can it be extended to perform out of band
discovery too..? That way there will be a single service to perform
in-band as well as out of band discoveries.. May be it could follow
driver framework for discovering nodes, where one driver could be
native (in-band) and other could be iLO specific etc...



I believe the following spec outlines plans for out-of-band discovery:
https://review.openstack.org/#/c/100951/

Right, so Ironic will have drivers, one of which (I hope) will be a driver for 
discoverd.



No idea what the progress is with regard to implementation within the
Kilo cycle though.

For now we hope to get it merged in K.



cheers

Matt


Just a thought.

-Om

-----Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: 07 January 2015 14:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:

So is it possible to just integrate this project into ironic? I mean
when you create an ironic node, it will start discover in the
background. So we don't need two services?

Well, the decision on the summit was that it's better to keep it
separate. Please see https://review.openstack.org/#/c/135605/ for
details on future interaction between discoverd and Ironic.


Just a thought, thanks.

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Monday, January 5, 2015 4:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:

Hi, Dmitry

I think this is a good project.
I got one question: what is the relationship with ironic-python-agent?
Thanks.

Hi!

No relationship right now, but I'm hoping to use IPA as a base for
introspection ramdisk in the (near?) future.


BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, December 11, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] ironic-discoverd status update

Hi all!

As you know I actively promote ironic-discoverd project [1] as one
of the means to do hardware inspection for Ironic (see e.g. spec
[2]), so I decided it's worth to give some updates to the community
from time to time. This email is purely informative, you may safely
skip it, if you're not interested.

Background
==

The discoverd project (I usually skip the "ironic-" part when
talking about it) solves the problem of populating information
about a node in Ironic database without help of any vendor-specific
tool. This information usually includes Nova scheduling properties
(CPU, RAM, disk
size) and MAC's for ports.

Introspection is done by booting a ramdisk on a node, collecting
data there and posting it back to discoverd HTTP API. Thus actually
discoverd consists of 2 components: the service [1] and the ramdisk
[3]. The service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in
Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for
introspection does not interfere with Neutron

The project was born from a series of patches to Ironic itself
after we discovered that this change is going to be too intrusive.
Discoverd was actively tested as part of Instack [4] and it's RPM
is a part of Juno RDO. After the Paris summit, we agreed on
bringing it closer to the Ironic upstream, and now discoverd is
hosted on StackForge and tracks bugs on Launchpad.

Future
==

The basic feature of discoverd: supply Ironic with properties
required for scheduling, is pretty finished as of the latest stable
series 0.2.

However, more featu

Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-09 Thread Dmitry Tantsur

On 01/09/2015 08:43 AM, Jerry Xinyu Zhao wrote:

tuskar-ui is supposed to enroll nodes into ironic.

Right. And it has support for discoverd IIRC.



On Thu, Jan 8, 2015 at 4:36 AM, Zhou, Zhenzan mailto:zhenzan.z...@intel.com>> wrote:

Sounds like we could add something new to automate the enrollment of
new nodes:-)
Collecting IPMI info into a csv file is still a trivial job...

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
Sent: Thursday, January 8, 2015 5:19 PM
To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS R&D) wrote:
 > My understanding of discovery was to get all details for a node
and then register that node to ironic. i.e. Enrollment of the node
to ironic. Pardon me if it was out of line with your understanding
of discovery.
That's why we agreed to use terms inspection/introspection :) sorry
for not being consistent here (name 'discoverd' is pretty old and
hard to change).

discoverd does not enroll nodes. while possible, I'm somewhat
resistant to make it do enrolling, mostly because I want it to be
user-controlled process.

 >
 > What I understand from the below mentioned spec is that the Node
is registered, but the spec will help ironic discover other
properties of the node.
that's what discoverd does currently.

 >
 > -Om
 >
 > -Original Message-
 > From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
 > Sent: 07 January 2015 20:20
 > To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
 > Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update
 >
 > On 01/07/2015 03:44 PM, Matt Keenan wrote:
 >> On 01/07/15 14:24, Kumar, Om (Cloud OS R&D) wrote:
 >>> If it's a separate project, can it be extended to perform out
of band
 >>> discovery too..? That way there will be a single service to perform
 >>> in-band as well as out of band discoveries.. May be it could follow
 >>> driver framework for discovering nodes, where one driver could be
 >>> native (in-band) and other could be iLO specific etc...
 >>>
 >>
 >> I believe the following spec outlines plans for out-of-band
discovery:
 >> https://review.openstack.org/#/c/100951/
 > Right, so Ironic will have drivers, one of which (I hope) will be
a driver for discoverd.
 >
 >>
 >> No idea what the progress is with regard to implementation
within the
 >> Kilo cycle though.
     > For now we hope to get it merged in K.
 >
 >>
 >> cheers
 >>
 >> Matt
 >>
 >>> Just a thought.
 >>>
 >>> -Om
 >>>
 >>> -Original Message-
 >>> From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
 >>> Sent: 07 January 2015 14:34
 >>> To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
 >>> Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status
update
 >>>
 >>> On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
 >>>> So is it possible to just integrate this project into ironic?
I mean
 >>>> when you create an ironic node, it will start discover in the
 >>>> background. So we don't need two services?
 >>> Well, the decision on the summit was that it's better to keep it
 >>> separate. Please see https://review.openstack.org/#/c/135605/ for
 >>> details on future interaction between discoverd and Ironic.
 >>>
 >>>> Just a thought, thanks.
 >>>>
 >>>> BR
 >>>> Zhou Zhenzan
 >>>>
 >>>> -Original Message-
 >>>> From: Dmitry Tantsur [mailto:dtant...@redhat.com
<mailto:dtant...@redhat.com>]
 >>>> Sent: Monday, January 5, 2015 4:49 PM
 >>>> To: openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>
 >>>> Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status
update
 >>>>
 >>>> On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
 >>>>> Hi, Dmitry
 >>>>>
 >>>>

Re: [openstack-dev] [all] python 2.6 for clients

2015-01-09 Thread Dmitry Tantsur

On 01/09/2015 02:37 PM, Ihar Hrachyshka wrote:

On 01/09/2015 02:33 PM, Andreas Jaeger wrote:

On 01/09/2015 02:25 PM, Ihar Hrachyshka wrote:

Hi all,

I assumed that we still support py26 for clients, but then I saw [1]
that removed corresponding tox environment from ironic client.

What's our take on that? Shouldn't clients still support Python 2.6?

[1]:
https://github.com/openstack/ironic-python-agent/commit/d95a99d5d1a62ef5c085ce20ec07d960a3f23ac1


Indeed, clients are supposed to continue supporting 2.6 as mentioned
here:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049111.html


Andreas


OK, thanks. Reverting: https://review.openstack.org/#/c/146083/
Thank you for your time folks, but this is not a client :) It's an 
alternative ramdisk for Ironic.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack-dev topics now work correctly

2015-01-12 Thread Dmitry Tantsur

On 01/09/2015 08:12 PM, Stefano Maffulli wrote:

Dear all,

if you've tried the topics on this mailing list and haven't received
emails, well... we had a problem on our side: the topics were not set up
correctly.

Luigi Toscano helped isolate the problem and pointed at the solution [1].
He noticed that only the "QA topic" was working, and that's the only one
defined with a single regular expression, while all the others use
multi-line regexps.

I corrected the regexp as described in the mailman FAQ and tested that
the delivery works correctly. If you want to subscribe only to some
topics now you can. Thanks again to Luigi for the help.

Cheers,
stef

[1] http://wiki.list.org/pages/viewpage.action?pageId=8683547



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi!

Is it possible to make the topic lists more up-to-date with what the real 
in-use topics are? I would appreciate at least the topics "oslo" and "all".


Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd status update: 1.0.0 feature freeze and testing

2015-01-19 Thread Dmitry Tantsur

Hi all!

This is a purely informational email about discoverd; feel free to skip it 
if you're not interested.


For those interested, I'm glad to announce that ironic-discoverd 1.0.0 is 
feature complete and is scheduled for release on Feb 5 with the Kilo-2 
milestone. The master branch is under feature freeze now and will only 
receive bug fixes and documentation updates until the release. This is 
the version intended to work with my in-band inspection spec:
http://specs.openstack.org/openstack/ironic-specs/specs/kilo/inband-properties-discovery.html


Preliminary release notes: 
https://github.com/stackforge/ironic-discoverd#10-series
Release tracking page: 
https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
Installation notes: 
https://github.com/stackforge/ironic-discoverd#installation (might be 
slightly outdated, but should be correct)


I'm not providing a release candidate tarball, but you can treat git 
master at https://github.com/stackforge/ironic-discoverd as such. Users 
of RPM-based distros can use my repo: 
https://copr.fedoraproject.org/coprs/divius/ironic-discoverd/ but beware 
that it's somewhat experimental, and it will keep receiving updates from 
git master after the release is pushed to PyPI.


Lastly, I do not expect this release to be a long-term supported one. 
The next feature release, 1.1.0, is expected to arrive around Kilo RC and 
will be supported for a longer time.


Looking forward to your comments/suggestions/bug reports.
Dmitry.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd 1.0.0 released

2015-02-03 Thread Dmitry Tantsur

Hi!

A bit earlier than expected due to personal circumstances, I'm announcing 
the first stable release of ironic-discoverd: 1.0.0 [1]. It contains 
implementations of 6 blueprints and fixes for 17 bugs. Full release 
notes can be found at [2]; here is the summary:

* Redesigned API, including endpoint to get introspection status
* Better error handling, including proper time out
* Support for plugins hooking into the data processing chain
* Support for the new Kilo state machine

In addition to PyPI, a new RPM is built in Fedora Rawhide, and it is expected 
to be provided via Juno RDO in the near future.
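
As a quick illustration of the redesigned API, starting introspection and
polling its status now boil down to two plain HTTP calls. A rough sketch,
assuming the default host/port and that Keystone authentication is enabled;
the node UUID is a hypothetical example:

    import requests

    DISCOVERD_URL = 'http://127.0.0.1:5050'  # assumption: default host/port
    NODE_UUID = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'  # hypothetical node
    headers = {'X-Auth-Token': '<admin keystone token>'}

    # kick off introspection of a node already enrolled in Ironic
    requests.post('%s/v1/introspection/%s' % (DISCOVERD_URL, NODE_UUID),
                  headers=headers).raise_for_status()

    # poll the new status endpoint until it reports completion
    status = requests.get('%s/v1/introspection/%s' % (DISCOVERD_URL, NODE_UUID),
                          headers=headers).json()
    print(status.get('finished'), status.get('error'))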


Please report bugs on launchpad [3].
The next feature release is planned before Kilo RC; feel free to submit 
ideas: [4].


Cheers,
Dmitry

[1] https://pypi.python.org/pypi/ironic-discoverd/1.0.0
[2] https://github.com/stackforge/ironic-discoverd#10-series
[3] https://bugs.launchpad.net/ironic-discoverd
[4] https://launchpad.net/ironic-discoverd/+milestone/1.1.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] *ED states strike back

2015-02-19 Thread Dmitry Tantsur

Hi everyone!

At one of our meetings [1] we agreed on keeping the *ED states (DELETED, 
INSPECTED, CLEANED, etc.) as no-ops for now. Since then, however, the 
inspection patch [2] got several comments from cores requesting removal 
of the INSPECTED state. That removal was done by Nisha.


Today we decided to approve [2] and bring the discussion here. We can 
always add the state later, and blocking this patch again would be a bit 
unfair IMO. We'll create a follow-up to [2] if we decide that we need 
the INSPECTED state.


So now, are we keeping/adding these *ED states to our state machine? I 
personally agree with what was discussed at the meeting, namely:

1. Keep the *ED states.
2. Make them no-ops, so that code that does INSPECTING -> MANAGEABLE 
right now will do INSPECTING -> INSPECTED -> MANAGEABLE instead.


Having INSPECTED is also useful for distinguishing between the OOB case 
(when inspect_hardware returns after having everything done) and the 
in-band case (when inspect_hardware returns after merely initiating 
inspection). We could use the return value of inspect_hardware being 
INSPECTING or INSPECTED for that.
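
To make the "no-op" part concrete, here is a tiny illustrative sketch (not
the actual ironic state machine code) of what such transitions could look
like:

    # Purely illustrative: *ED states exist in the machine and are recorded
    # in the node's history, but leaving them requires no driver work.
    TRANSITIONS = {
        ('INSPECTING', 'done'): 'INSPECTED',    # today: straight to MANAGEABLE
        ('INSPECTED', 'manage'): 'MANAGEABLE',  # no-op hop
        ('CLEANING', 'done'): 'CLEANED',
        ('CLEANED', 'manage'): 'MANAGEABLE',    # no-op hop
    }

    def advance(state, event):
        # Look up the target state; no extra driver call for *ED -> next.
        return TRANSITIONS[(state, event)]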


WDYT?

Dmitry.

[1] 
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-02-09-17.00.log.html 
starting 17:47

[2] https://review.openstack.org/#/c/147857/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]

2015-02-20 Thread Dmitry Tantsur

On 02/20/2015 06:14 AM, Ganapathy, Sandhya wrote:

Hi All,

I would like to discuss the Chassis Discovery Tool Blueprint -
https://review.openstack.org/#/c/134866/

The blueprint suggests hardware enrollment and introspection of
properties at the chassis layer. It is suitable for micro-servers that have an
active chassis to query for details.

Initially, the idea was proposed as an API change at the Ironic layer.
We found many complexities, such as interaction with the conductor and the
issue of nodes in a chassis being mapped to different conductors.

So, the decision was taken to keep it as a separate tool above the Ironic
API layer. It is a generic tool that can be plugged in for specific
hardware.


I'll reiterate my points:

0. The /tool directory is a no-go: we have development tooling there. We 
won't run tests from there, distributions won't package it, etc.


So valid options are:

1. Create a driver vendor passthru; I'm not sure why you want to care about 
node mapping here.


2. Create a new, proper CLI. This does not feel right, as it creates too 
specific a tool (which will actually be vendor-specific for a long time or 
forever).


3. Create a new repo (ironic-extras?) for cool tools for Ironic. That's 
the way we went with ironic-discoverd, and that's my vote if you can't 
create a vendor passthru.


I see Ironic as a bare metal API, not just a set of tools, so that e.g. 
every feature added to Ironic can be consumed from a UI. If it should be a 
tool, I see no reason for the Ironic core team to start handling it (we have 
enough reviews already, honestly :).


Dmitry.



There are different opinions from the community on this and it will be
good to come to a consensus.

I have also added the topic as an agenda item for the Ironic IRC meeting.

Thanks,

Sandhya



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Dmitry Tantsur

On 02/25/2015 05:26 PM, Ruby Loo wrote:

Hi,

I was wondering what people thought about patches that only fix
grammatical issues or misspellings in comments in our code.

I can't believe I'm sending out this email, but as a group, I'd like it
if we had a similar understanding so that we treat all patches in a
similar (dare I say it, consistent) manner. I've seen negative votes and
positive (approved) votes for similar patches. Right now, I look at such
submitted patches and ignore them, because I don't know what the fairest
thing is. I don't feel right that a patch that was previously submitted
gets a -2, whereas another patch gets a +A.

/me too



To be clear, I think that anything that is user-facing like (log,
exception) messages or our documentation should be cleaned up. (And yes,
I am fine using British or American English or a mix here.)

What I'm wondering about are the fixes to docstrings and inline comments
that aren't externally visible.

On one hand, it is great that someone submits a patch, so maybe we should
approve it so as not to discourage the submitter. On the other hand,
how useful are such submissions? It has already been suggested (and
maybe discussed to death) that we should approve patches if there are
only nits. These grammatical and misspellings fall under nits. If we are
explicitly saying that it is OK to merge these nits, then why fix them
later, unless they are part of a patch that does more than only address
those nits?
The biggest problem is that these patches 1. take our time, 2. take gate 
resources, 3. may introduce merge conflicts.


So I would suggest agreeing to -2 patches that fix _only_ user-invisible 
strings.




I realize that it would take me less time to approve the patches than to
write this email, but I wanted to know what the community thought. Some
rule-of-thumb would be helpful to me.

Thoughts?

--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] How to check admin authentication?

2015-02-27 Thread Dmitry Tantsur

Hi all!

This (presumably) pretty basic question has been torturing me for several 
months already, so I kindly seek help here.


I'm working on a Flask-based service [1] and I'd like to use Keystone 
tokens for authentication. This is an admin-only API, so we need to 
check for an admin role. We ended up with code [2] that first accesses 
Keystone with a given token and a (configurable) admin tenant name, and 
then checks for the 'admin' role. Things went well for a while.


Now I'm writing an Ironic driver accessing the API of [1]. Pretty naively, 
I was trying to use the Ironic service user credentials that we use for 
accessing all other services. For TripleO-based installations it's a 
user named 'ironic' in a special tenant 'service'. Here is where the 
problems are: our code perfectly authenticates a mere user (that has the 
tenant 'admin'), but asks Ironic to go away.


We've spent some time researching the documentation and the keystone 
middleware source code, but didn't find any more clues. Neither did we 
find a way to use keystone middleware without rewriting half of the 
project. What we need is two simple things in a simple Flask application:

1. validate a token
2. make sure it belongs to an admin
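
For reference, here is roughly what [2] boils down to right now (a simplified
sketch assuming the keystoneclient v2.0 API; error handling trimmed):

    from keystoneclient.v2_0 import client as keystone_client

    def check_is_admin(token, auth_url, admin_tenant_name):
        # 1. validate the token by authenticating with it against Keystone
        keystone = keystone_client.Client(token=token,
                                          tenant_name=admin_tenant_name,
                                          auth_url=auth_url)
        # 2. make sure the token carries the 'admin' role
        return 'admin' in keystone.auth_ref.role_names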

I'll gratefully appreciate any ideas on how to fix our situation.
Thanks in advance!

Dmitry.

[1] https://github.com/stackforge/ironic-discoverd
[2] 
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/utils.py#L50-L65


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Dmitry Tantsur

On 08/27/2015 11:40 AM, Lucas Alvares Gomes wrote:

On Wed, Aug 26, 2015 at 11:09 PM, Julia Kreger
 wrote:

My apologies for not expressing my thoughts on this matter
sooner, however I've had to spend some time collecting my
thoughts.

To me, it seems like we do not trust our users.  Granted,
when I say users, I mean administrators who likely know more
about the disposition and capabilities of their fleet than
could ever be discovered or inferred via software.

Sure, we have other users, mainly in the form of consumers,
asking Ironic for hardware to be deployed, but the driver for
adoption is who feels the least amount of pain.

API versioning aside, I have to ask the community, what is
more important?

- An inflexible workflow that forces an administrator to
always have a green field, and to step through a workflow
that we've dictated, which may not apply to their operational
scenario, ultimately driving them to write custom code to
inject "new" nodes into the database directly, which will
surely break from time to time, causing them to hate Ironic
and look for a different solution.

- A happy administrator that has the capabilities to do their
job (and thus manage the baremetal node wherever it is in the
operator's lifecycle) in an efficient fashion, thus causing
them to fall in love with Ironic.



I'm sorry, I find the language used in this reply very offensive.
That's not even a real question; given the alternatives, you're basically
asking the community "What's more important, to be happy or to be sad? To be
efficient or not efficient?"

It's not about an "inflexible workflow" which "dictates" what people
do, making them "hate" the project. It's about finding a common pattern
for a workflow that will work for all types of machines; it's about
consistency; it's about keeping the history of what happened to that
node. When a node is in a specific state you know what it's been
through, so you can easily debug it (i.e. an ACTIVE node means that it
passed through MANAGEABLE -> CLEAN* -> AVAILABLE -> DEPLOY* -> ACTIVE.
Even if some of the states are no-ops for a given driver, it's a clear
path).

Think about our API, it's not that we don't allow vendors to add every
new features they have to the core part of the API because we don't
trust them or we think that their shiny features are not worthy. We
don't do that to make it consistent, to have an abstraction layer that
will work the same for all types of hardware.

I meant it when I said I want to have a fresh mind to read the proposal
for this new workflow. But I'd rather read a technical explanation than an
emotional one. What I want to know, for example, is what it will look
like when one registers a node in the ACTIVE state directly. What about the
internal driver fields? What about the TFTP/HTTP environment that is
built as part of the DEPLOY process? What about the ports in Neutron?
And so on...


I agree with everything Lucas said.

I also want to point out that it's completely unrealistic to expect even a 
majority of Ironic users to have any real idea of how Ironic actually 
works internally. And definitely not all our users are Ironic developers.


I routinely help people who have never used Ironic before, and they don't 
have problems with running 1, 2, or 10 commands if those are written in the 
documentation and clearly explained. What they do have problems with is 
several ways of doing the same thing, with different ways being broken 
under different conditions.




Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Re: New API for node create, specifying initial provision state

2015-08-27 Thread Dmitry Tantsur
> > this new work flow. But I rather read a technical explanation than an
> > emotional one. What I want to know for example is what it will look
> > like when one register a node in ACTIVE state directly? What about the
> > internal driver fields? What about the TFTP/HTTP environment that is
> > built as part of the DEPLOY process ? What about the ports in Neutron
> > ? and so on...
>
> Emotions matter to users. You're right that a technical argument helps
> us get our work done efficiently. But don't forget _why Ironic exists_.
> It's not for you to develop on, and it's not just for Nova to talk to.
> It's for your users to handle their datacenter in the wee hours without
> you to hold their hand. Make that hard, get somebody fired or burned
> out, and no technical argument will ever convince them to use Ironic
> again.
>

You care only about users at your technical level in OpenStack. For other
(and the majority of) users the situation is the opposite: they want to be
told that they've screwed up their SSH credentials (and they *constantly* do)
as soon as possible. If they are not, their nodes, for example, will
silently go into maintenance, and then nova will return a cryptic "no valid
host found" error.


>
> I think I see the problem though. Ironic needs a new mission statement:
>
> To produce an OpenStack service and associated libraries capable of
> managing and provisioning physical machines, and to do this in a
> security-aware and fault-tolerant manner.
>
> Mission accomplished. It's been capable of doing that for a long time.
> Perhaps the project should rethink whether _users_ should be considered
> in a new mission statement.
>

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][manila] "latest" microversion considered dangerous

2015-08-28 Thread Dmitry Tantsur

On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the implementation
from Nova. I really like the feature! However I noticed that it's legal
for clients to transmit "latest" instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for "latest" and forcing clients to request
a specific version (or accept the default).


I think "latest" is needed for integration testing. Otherwise you have 
to update your tests each time a new version is introduced.
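
To illustrate the difference from the client side (a sketch using raw HTTP
and Manila's microversion header; the endpoint, token and version value are
placeholders):

    import requests

    ENDPOINT = 'http://controller:8786/v2/<tenant_id>'  # placeholder endpoint
    headers = {'X-Auth-Token': '<token>'}

    # pinning a specific microversion: the semantics are fixed forever
    headers['X-OpenStack-Manila-API-Version'] = '2.6'
    requests.get(ENDPOINT + '/shares', headers=headers)

    # asking for "latest": the server may reply with semantics this client
    # was never written to understand
    headers['X-OpenStack-Manila-API-Version'] = 'latest'
    requests.get(ENDPOINT + '/shares', headers=headers)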




Allowing clients to request the "latest" microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API implementation,
not future implementations. Transmitting "latest" implies an assumption
that the future is not so different from the present. This assumption
about future behavior is precisely what we don't want clients to make,
because it prevents forward progress. One of the main reasons
microversions is a valuable feature is because it allows forward
progress by letting us make major changes without breaking old clients.

If clients are allowed to assume that nothing will change too much in
the future (which is what asking for "latest" implies) then the server
will be right back in the situation it was trying to get out of -- it
can never change any API in a way that might break old clients.

I can think of no situation where transmitting "latest" is better than
transmitting the highest version that existed at the time the client was
written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself to never
making any backward-compatibility-breaking change of any kind.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][manila] "latest" microversion considered dangerous

2015-08-28 Thread Dmitry Tantsur

On 08/28/2015 09:34 AM, Valeriy Ponomaryov wrote:

Dmitriy,

New tests that cover new functionality already know which API version
they require. So, even in testing, it is not needed. All other existing
tests do not require an API update.


Yeah, but you can't be sure that your change does not break the world 
until you merge it and start updating tests. Probably it's not that 
important for projects that have their integration tests in-tree, though.




So, I raise hand for restricting "latest".

On Fri, Aug 28, 2015 at 10:20 AM, Dmitry Tantsur mailto:dtant...@redhat.com>> wrote:

On 08/27/2015 09:38 PM, Ben Swartzlander wrote:

Manila recently implemented microversions, copying the
implementation
from Nova. I really like the feature! However I noticed that
it's legal
for clients to transmit "latest" instead of a real version number.

THIS IS A TERRIBLE IDEA!

I recommend removing support for "latest" and forcing clients to
request
a specific version (or accept the default).


I think "latest" is needed for integration testing. Otherwise you
have to update your tests each time new version is introduced.



Allowing clients to request the "latest" microversion guarantees
undefined (and likely broken) behavior* in every situation where a
client talks to a server that is newer than it.

Every client can only understand past and present API
implementation,
not future implementations. Transmitting "latest" implies an
assumption
that the future is not so different from the present. This
assumption
about future behavior is precisely what we don't want clients to
make,
because it prevents forward progress. One of the main reasons
microversions is a valuable feature is because it allows forward
progress by letting us make major changes without breaking old
clients.

If clients are allowed to assume that nothing will change too
much in
the future (which is what asking for "latest" implies) then the
server
will be right back in the situation it was trying to get out of
-- it
can never change any API in a way that might break old clients.

I can think of no situation where transmitting "latest" is
better than
transmitting the highest version that existed at the time the
client was
written.

-Ben Swartzlander

* Undefined/broken behavior unless the server restricts itself
to never
making any backward-compatiblity-breaking change of any kind.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com <http://www.mirantis.com>
vponomar...@mirantis.com <mailto:vponomar...@mirantis.com>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with _____ ?

2015-08-28 Thread Dmitry Tantsur

On 08/28/2015 04:36 PM, Lucas Alvares Gomes wrote:

Hi,


If you just want to shoot the breeze please respond here. If you
have specific comments on the spec please response there.



I have been thinking about doing it for Ironic as well, so I'm looking
for options. IMHO, after using WSME I would think that one of the most
important criteria we should start looking at is whether the project has a
healthy, sizable and active community around it. It's crucial to use
libraries that are being maintained.

So at the present moment the [micro]framework that comes to my mind -
without any testing or prototype of any sort - is Flask.


We're using Flask in inspector. We've had a nice experience with it, but note 
that inspector does not have a very complex API :)
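
For anyone evaluating it, a Flask endpoint is quite compact; something in the
spirit of what we do (a toy sketch, not the real inspector code) looks like:

    import flask

    app = flask.Flask(__name__)

    @app.route('/v1/things/<uuid>', methods=['GET'])
    def get_thing(uuid):
        # authentication, validation and real logic omitted
        return flask.jsonify(uuid=uuid, finished=True, error=None)

    if __name__ == '__main__':
        app.run(port=5050)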




Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Mitaka Design Summit - Proposed slot allocation

2015-09-04 Thread Dmitry Tantsur

On 09/04/2015 12:14 PM, Thierry Carrez wrote:

Hi PTLs,

Here is the proposed slot allocation for every "big tent" project team
at the Mitaka Design Summit in Tokyo. This is based on the requests the
liberty PTLs have made, space availability and project activity &
collaboration needs.

We have a lot less space (and time slots) in Tokyo compared to
Vancouver, so we were unable to give every team what they wanted. In
particular, there were far more workroom requests than we have
available, so we had to cut down on those quite heavily. Please note
that we'll have a large lunch room with roundtables inside the Design
Summit space that can easily be abused (outside of lunch) as space for
extra discussions.

Here is the allocation:

| fb: fishbowl 40-min slots
| wr: workroom 40-min slots
| cm: Friday contributors meetup
| | day: full day, morn: only morning, aft: only afternoon

Neutron: 12fb, cm:day
Nova: 14fb, cm:day
Cinder: 5fb, 4wr, cm:day
Horizon: 2fb, 7wr, cm:day   
Heat: 4fb, 8wr, cm:morn
Keystone: 7fb, 3wr, cm:day
Ironic: 4fb, 4wr, cm:morn
Oslo: 3fb, 5wr
Rally: 1fb, 2wr
Kolla: 3fb, 5wr, cm:aft
Ceilometer: 2fb, 7wr, cm:morn
TripleO: 2fb, 1wr, cm:full
Sahara: 2fb, 5wr, cm:aft
Murano: 2wr, cm:full
Glance: 3fb, 5wr, cm:full   
Manila: 2fb, 4wr, cm:morn
Magnum: 5fb, 5wr, cm:full   
Swift: 2fb, 12wr, cm:full   
Trove: 2fb, 4wr, cm:aft
Barbican: 2fb, 6wr, cm:aft
Designate: 1fb, 4wr, cm:aft
OpenStackClient: 1fb, 1wr, cm:morn
Mistral: 1fb, 3wr   
Zaqar: 1fb, 3wr
Congress: 3wr
Cue: 1fb, 1wr
Solum: 1fb
Searchlight: 1fb, 1wr
MagnetoDB: won't be present

Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA) 
PuppetOpenStack: 2fb, 3wr
Documentation: 2fb, 4wr, cm:morn
Quality Assurance: 4fb, 4wr, cm:full
OpenStackAnsible: 2fb, 1wr, cm:aft
Release management: 1fb, 1wr (shared meetup with QA)
Security: 2fb, 2wr
ChefOpenstack: will camp in the lunch room all week
App catalog: 1fb, 1wr
I18n: cm:morn
OpenStack UX: 2wr
Packaging-deb: 2wr
Refstack: 2wr
RpmPackaging: 1fb, 1wr

We'll start working on laying out those sessions over the available
rooms and time slots. If you have constraints (I already know
searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
Manila with Cinder, Solum with Magnum...) please let me know, we'll do
our best to limit them.



Would be cool to avoid conflicts between Ironic and TripleO.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI

2015-09-09 Thread Dmitry Tantsur

On 09/09/2015 12:15 PM, Dougal Matthews wrote:

Hi,

The tripleo-common library appears to be registered on PyPI but hasn't yet had
a release [1]. I am not familiar with the release process - what do we need to
do to make sure it is regularly released with other TripleO packages?


I think this is a good start: 
https://github.com/openstack/releases/blob/master/README.rst




We will also want to do something similar with the new python-tripleoclient
which doesn't seem to be registered on PyPI yet at all.


And instack-undercloud.



Thanks,
Dougal

[1]: https://pypi.python.org/pypi/tripleo-common

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Command structure for OSC plugin

2015-09-10 Thread Dmitry Tantsur

On 09/09/2015 06:48 PM, Jim Rollenhagen wrote:

On Tue, Sep 01, 2015 at 03:47:03PM -0500, Dean Troyer wrote:

[late catch-up]

On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann 
wrote:


Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:

On 24/08/15 18:19 +, Tim Bell wrote:



From a user perspective, where bare metal and VMs are just different
flavors (with varying capabilities), can we not use the same commands
(server create/rebuild/...) ? Containers will create the same conceptual
problems.


OSC can provide a converged interface but if we just replace '$ ironic
<command>' by '$ openstack baremetal <command>', this seems to be a missed
opportunity to hide the complexity from the end user.


Can we re-use the existing server structures ?




I've wondered about how users would see doing this, we've done it already
with the quota and limits commands (blurring the distinction between
project APIs).  At some level I am sure users really do not care about some
of our project distinctions.



To my knowledge, overriding or enhancing existing commands like that

is not possible.

You would have to do it in tree, by making the existing commands
smart enough to talk to both nova and ironic, first to find the
server (which service knows about something with UUID XYZ?) and
then to take the appropriate action on that server using the right
client. So it could be done, but it might lose some of the nuance
between the server types by munging them into the same command. I
don't know what sorts of operations are different, but it would be
worth doing the analysis to see.



I do have an experimental plugin that hooks the server create command to
add some options and change its behaviour so it is possible, but right now
I wouldn't call it supported at all.  That might be something that we could
consider doing though for things like this.

The current model for commands calling multiple project APIs is to put them
in openstackclient.common, so yes, in-tree.

Overall, though, to stay consistent with OSC you would map operations into
the current verbs as much as possible.  It is best to think in terms of how
the CLI user is thinking and what she wants to do, and not how the REST or
Python API is written.  In this case, 'baremetal' is a type of server, a
set of attributes of a server, etc.  As mentioned earlier, containers will
also have a similar paradigm to consider.


Disclaimer: I don't know much about OSC or its syntax, command
structure, etc. These may not be well-formed thoughts. :)


With the same disclaimer applied...



While it would be *really* cool to support the same command to do things
to nova servers or do things to ironic servers, I don't know that it's
reasonable to do so.

Ironic is an admin-only API, that supports running standalone or behind
a Nova installation with the Nova virt driver. The API is primarily used
by Nova, or by admins for management. In the case of a standalone
configuration, an admin can use the Ironic API to deploy a server,
though the recommended approach is to use Bifrost[0] to simplify that.
In the case of Ironic behind Nova, users are expected to boot baremetal
servers through Nova, as indicated by a flavor.

So, many of the nova commands (openstack server foo) don't make sense in
an Ironic context, and vice versa. It would also be difficult to
determine if the commands should go through Nova or through Ironic.
The path could be something like: check that Ironic exists, see if user
has access, hence standalone mode (oh wait, operators probably have
access to manage Ironic *and* deploy baremetal through Nova, what do?).


I second this. I'd also like to add that in the case of Ironic, "server 
create" may actually involve several complex actions that do not map to 
'nova boot'. First we create a node record in the database, second we 
check its power credentials, third we do properties inspection, and 
finally we do cleaning. None of these make any sense in a virtual 
environment.
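
To make that concrete, a typical standalone enrollment sequence looks roughly
like this with python-ironicclient (a sketch: authentication details are
elided, the driver_info values are made up, and verbs may differ between API
versions):

    from ironicclient import client

    ironic = client.get_client(1, **auth_kwargs)  # auth details elided

    # 1. create the node record in the database
    node = ironic.node.create(driver='pxe_ipmitool',
                              driver_info={'ipmi_address': '192.0.2.1',
                                           'ipmi_username': 'admin',
                                           'ipmi_password': 'secret'})

    # 2. verify power credentials (ENROLL -> MANAGEABLE in the new state machine)
    ironic.node.set_provision_state(node.uuid, 'manage')

    # 3. run properties inspection
    ironic.node.set_provision_state(node.uuid, 'inspect')

    # 4. clean the node and make it available for deployment
    ironic.node.set_provision_state(node.uuid, 'provide')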




I think we should think of "openstack baremetal foo" as commands to
manage the baremetal service (Ironic), as that is what the API is
primarily intended for. Then "openstack server foo" just does what it
does today, and if the flavor happens to be a baremetal flavor, the user
gets a baremetal server.

// jim

[0] https://github.com/openstack/bifrost

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Suggestion to split install guide

2015-09-11 Thread Dmitry Tantsur

Hi all!

Our install guide is huge, and I've just approved even more text for it. 
WDYT about splitting it into a "Basic Install Guide", which will contain 
the bare minimum for running ironic and deploying instances, and an "Advanced 
Install Guide", which will cover the following things:

1. Using Bare Metal service as a standalone service
2. Enabling the configuration drive (configdrive)
3. Inspection
4. Trusted boot
5. UEFI

Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Suggestion to split install guide

2015-09-14 Thread Dmitry Tantsur

On 09/14/2015 03:54 PM, Ruby Loo wrote:



On 11 September 2015 at 04:56, Dmitry Tantsur mailto:dtant...@redhat.com>> wrote:

Hi all!

Our install guide is huge, and I've just approved even more text for
it. WDYT about splitting it into "Basic Install Guide", which will
contain bare minimum for running ironic and deploying instances, and
"Advanced Install Guide", which will the following things:
1. Using Bare Metal service as a standalone service
2. Enabling the configuration drive (configdrive)
3. Inspection
4. Trusted boot
5. UEFI

Opinions?


Thanks for bringing this up Dmitry. Any idea whether there is some sort
of standard format/organization of install guides for the other
OpenStack projects?


Not sure

> And/or maybe we should ask Ops folks (non developers

:-))


Fair enough. I've proposed basic vs advanced split based on what we did 
for TripleO downstream, which was somewhat user-driven.




--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Dmitry Tantsur

On 09/14/2015 04:18 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:

PTLs and release liaisons,

In order to keep the rest of our schedule for the end-of-cycle release
tasks, we need to have final releases for all client libraries in the
next day or two.

If you have not already submitted your final release request for this
cycle, please do that as soon as possible.

If you *have* already submitted your final release request for this
cycle, please reply to this email and let me know that you have so I can
create your stable/liberty branch.

Thanks!
Doug


I forgot to mention that we also need the constraints file in
global-requirements updated for all of the releases, so we're actually
testing with them in the gate. Please take a minute to check the version
specified in openstack/requirements/upper-constraints.txt for your
libraries and submit a patch to update it to the latest release if
necessary. I'll do a review later in the week, too, but it's easier to
identify the causes of test failures if we have one patch at a time.


Hi Doug!

When is the last and final deadline for doing all this for 
not-so-important and non-release:managed projects like ironic-inspector? 
Some Liberty features are still not covered in 
python-ironic-inspector-client. Do we have time until the end of the week to 
finish them?


Sorry if you hear this question too often :)

Thanks!



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Dmitry Tantsur

Hi folks!

As you can see below, we have to make the final release of 
python-ironic-inspector-client really soon. We have 2 big missing parts:


1. Introspection rules support (a rough sketch of what a rule looks like is below).
   I'm working on it: https://review.openstack.org/#/c/223096/
   This required a substantial rework, so that our client does not 
become a complete mess: https://review.openstack.org/#/c/223490/


2. Support for getting introspection data. John (trown) volunteered to 
do this work.
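
For those who have not looked at the patches yet, an introspection rule is
essentially a list of conditions plus a list of actions. Roughly (a sketch of
the proposed format; the exact field names may still change before release):

    # "if the node reports at least 8 GiB of RAM, set a capability on it"
    rule = {
        'description': 'Tag nodes with enough RAM',
        'conditions': [
            {'op': 'ge', 'field': 'memory_mb', 'value': 8192},
        ],
        'actions': [
            {'action': 'set-capability', 'name': 'big_ram', 'value': 'true'},
        ],
    }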


I'd like to ask the inspector team to pay close attention to these 
patches, as the deadline for them is Friday (preferably European time).


Next, please have a look at the milestone page for ironic-inspector 
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an 
assignee. If you'd like to volunteer for something there, please assign 
it to yourself. Our deadline is next Thursday, but it would be really 
good to finish it earlier next week to dedicate some time to testing.


Thanks all, I'm looking forward to this release :)


 Forwarded Message 
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle 
client library releases needed

Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann 
Reply-To: OpenStack Development Mailing List (not for usage questions) 


To: openstack-dev 

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the last and final deadline for doing all this for
not-so-important and non-release:managed projects like ironic-inspector?
We still lack some Liberty features covered in
python-ironic-inspector-client. Do we have time until end of week to
finish them?


We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug



Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty

2015-09-15 Thread Dmitry Tantsur

On 09/15/2015 05:02 PM, Dmitry Tantsur wrote:

Hi folks!

As you can see below, we have to make the final release of
python-ironic-inspector-client really soon. We have 2 big missing parts:

1. Introspection rules support.
I'm working on it: https://review.openstack.org/#/c/223096/
This required a substantial requirement, so that our client does not
become a complete mess: https://review.openstack.org/#/c/223490/

2. Support for getting introspection data. John (trown) volunteered to
do this work.

I'd like to ask the inspector team to pay close attention to these
patches, as the deadline for them is Friday (preferably European time).

Next, please have a look at the milestone page for ironic-inspector
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an
assignee. If you'd like to volunteer for something there, please assign
it to yourself. Our deadline is next Thursday, but it would be really
good to finish it earlier next week to dedicate some time to testing.


Forgot an important thing: we have 2 outstanding IPA patches as well:
https://review.openstack.org/#/c/222605/
https://review.openstack.org/#/c/223054



Thanks all, I'm looking forward to this release :)


 Forwarded Message 
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle
client library releases needed
Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann 
Reply-To: OpenStack Development Mailing List (not for usage questions)

To: openstack-dev 

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so
I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the
version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the last and final deadline for doing all this for
not-so-important and non-release:managed projects like ironic-inspector?
We still lack some Liberty features covered in
python-ironic-inspector-client. Do we have time until end of week to
finish them?


We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug



Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
>
__

> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Dmitry Tantsur
t's easier
> to
> > > > identify the causes of test failures if we have one patch at a time.
> > >
> > > Hi Doug!
> > >
> > > When is the last and final deadline for doing all this for
> > > not-so-important and non-release:managed projects like
> ironic-inspector?
> > > We still lack some Liberty features covered in
> > > python-ironic-inspector-client. Do we have time until end of week to
> > > finish them?
> >
> > We would like for the schedule to be the same for everyone. We need the
> > final versions for all libraries this week, so we can update
> > requirements constraints by early next week before the RC1.
> >
> > https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> >
> > Doug
> >
> > >
> > > Sorry if you hear this question too often :)
> > >
> > > Thanks!
> > >
> > > >
> > > > Doug
> > > >
> > > >
> __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > >
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Liberty soft freeze

2015-09-18 Thread Dmitry Tantsur
Note for inspector folks: this applies to us as well. Let's land whatever 
we have planned for 2.2.0 and fix any issues that arise.


Please see the milestone page for the list of things that we still need to 
review/fix:

https://launchpad.net/ironic-inspector/+milestone/2.2.0

On 09/18/2015 03:50 AM, Jim Rollenhagen wrote:

Hi folks,

It's time for our soft freeze for Liberty, as planned. Core reviewers
should do their best to refrain from landing risky code. We'd like to
ship 4.2.0 as the candidate for stable/liberty next Thursday, September
24.

Here's the things we still want to complete in 4.2.0:
https://launchpad.net/ironic/+milestone/4.2.0

Note that zapping is no longer there; sadly, after lots of writing and
reviewing code, we want to rethink how we implement this. We've talked
about being able to go from MANAGEABLE->CLEANING->MANAGEABLE with a list
of clean steps. Same idea, but without the word zapping, the new DB
fields, etc. At any rate, it's been bumped to Mitaka to give us time to
figure it out.

This may also mean in-band RAID configuration may not land; the
interface in general did land, and drivers may do out-of-band
configuration. We assumed that in-band RAID would be done through
zapping. However, if folks can agree on how to do it during automated
cleaning, I'd be happy to get that in Liberty if the code is not too
risky. If it is risky, we'll need to punt it to Mitaka as well.

I'd like to see the rest of the work on the milestone completed during
Liberty, and I hope everyone can jump in and help us to do that.

Thanks in advance!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Stepping down from IPA core

2015-09-21 Thread Dmitry Tantsur

On 09/21/2015 05:49 PM, Josh Gachnang wrote:

Hey y'all, it's with a heavy heart I have to announce I'll be stepping
down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
healthcare startup (Triggr Health) and won't have the time to dedicate
to being an effective OpenStack reviewer.

Ever since the OnMetal team proposed IPA all the way back in the
Icehouse midcycle, this community has been welcoming, helpful, and all
around great. You've all helped me grow as a developer with your in
depth and patient reviews, for which I am eternally grateful. I'm really
sad I won't get to see everyone in Tokyo.


I'm a bit sad to hear it :) It was a big pleasure to work with you. Best 
of luck in your new challenges!




I'll still be on IRC after leaving, so feel free to ping me for any
reason :)

- JoshNang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Dmitry Tantsur
2015-09-24 17:38 GMT+02:00 Ionut Balutoiu 
:

> Hello, guys!
>
> I'm starting a new implementation for a dhcp provider,
> mainly to be used for Ironic standalone. I'm planning to
> push it upstream. I'm using isc-dhcp-server service from
> Linux. So, when an Ironic node is started, the ironic-conductor
> writes in the config file the MAC-IP reservation for that node and
> reloads dhcp service. I'm using a SQL database as a backend to store
> the dhcp reservations (I think is cleaner and it should allow us
> to have more than one DHCP server). What do you think about my
> implementation ?
>

What you describe slightly resembles how ironic-inspector works. It needs
to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
rules giving (or not giving) access to the dnsmasq instance. I wonder if we
may find some common code between these two, but I definitely don't want to
reinvent Neutron :) I'll think about it after seeing your spec and/or code;
I'm already looking forward to them!
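
For what it's worth, the conductor-side mechanism you describe could look
roughly like this (a purely hypothetical sketch: the config path, service
name and reload method are all assumptions):

    import subprocess

    RESERVATION = """
    host %(name)s {
      hardware ethernet %(mac)s;
      fixed-address %(ip)s;
    }
    """

    def add_reservation(name, mac, ip,
                        conf_path='/etc/dhcp/ironic-reservations.conf'):
        # append the MAC-IP reservation for the node to the dhcpd config
        with open(conf_path, 'a') as f:
            f.write(RESERVATION % {'name': name, 'mac': mac, 'ip': ip})
        # reload isc-dhcp-server so the new reservation takes effect
        subprocess.check_call(['systemctl', 'restart', 'isc-dhcp-server'])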


> Also, I'm not sure how can I scale this out to provide HA/failover.
> Do you guys have any idea ?
>
> Regards,
> Ionut Balutoiu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Dmitry Tantsur

On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:

Hi all,

releases are approaching, so it’s the right time to start some bike shedding on 
the mailing list.

Recently it was pointed out to me several times [1][2] that I violate our commit message 
requirement [3] for the message lines, which says: "Subsequent lines should be 
wrapped at 72 characters."

I agree that very long commit message lines can be bad, f.e. if they are 200+ 
chars. But <= 79 chars?.. Don’t think so. Especially since we have 79 chars 
limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was 
killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not 
get -1 treatment. I propose to raise the limit for the guideline on wiki 
accordingly.


+1, I never understood it actually. I know some folks even question 80 
chars for the code, so having 72 chars for commit messages looks a bit 
weird to me.




Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: 
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Dmitry Tantsur

On 09/30/2015 03:15 PM, Ryan Brown wrote:

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common, I have
to grep through the projects I know that use it to make sure I don't break
anything.


The API working group exists, but they focus on REST APIs so they don't
have any guidelines on library APIs.


Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public
might be too broad. For example, that would make all of libutils
ostensibly a "stable" interface. I don't think that's what we want,
especially this early in the lifecycle.

In heatclient, we present "heatclient.client" and "heatclient.exc"
modules as the main public API, and put versioned implementations in
modules.


I'd recommend avoiding things like 'heatclient.client', as in a big 
application it would lead to imports like


 from heatclient import client as heatclient

:)

What I did for ironic-inspector-client was to make a couple of most 
important things available directly on ironic_inspector_client top-level 
module, everything else - under ironic_inspector_client.v1 (modulo some 
legacy).
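
A tiny sketch of that pattern (the package name and attribute names below 
are simplified and hypothetical, not the exact ironic_inspector_client API):

    # mylib/__init__.py -- re-export the few stable entry points and keep
    # everything else under a versioned submodule ('mylib' is hypothetical).
    from mylib.v1 import Client
    from mylib.v1 import DEFAULT_API_VERSION

    __all__ = ['Client', 'DEFAULT_API_VERSION']

Consumers then only import the documented top-level names (import mylib; 
mylib.Client(...)), and internal refactoring under v1 stays invisible to them.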




heatclient
|- client
|- exc
\- v1
   |- client
   |- resources
   |- events
   |- services

I think versioning the public API is the way to go, since it will make
it easier to maintain backwards compatibility while new needs/uses evolve.


++






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Dmitry Tantsur

On 10/08/2015 11:47 PM, Jim Rollenhagen wrote:

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.


+2



I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.


+2



Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.



This is not guarunteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:06 PM, Tang Chen wrote:


On 10/09/2015 05:48 PM, Jordan Pittier wrote:

Hi,
On Fri, Oct 9, 2015 at 11:00 AM, Tang Chen <tangc...@cn.fujitsu.com> wrote:

Hi,

CI systems will run tests for each patch once it is submitted or
modified.
But most CI systems occupy a lot of resources, and take a long time to
run tests (1 or 2 hours for one patch).

I think, not all the patches submitted need to be tested. Even
those patches
with an approved BP and spec may be reworked for 20+ versions. So
I think
CI should support an RFC (Request For Comments) mechanism for
developers
to submit and review the code detail and rework. When the patches are
fully ready, I mean all reviewers have agreed on the
implementation detail,
then CI will test the patches.

So have the humans do the hard work to eventually find out that the
patch breaks the world ?


No. Developers of course will run some tests themselves before they
submit patches.


Tests, but not all possible CI's. E.g. in ironic we have 6 devstack-based 
jobs, and I don't really expect a submitter to go through them manually. 
Actually, it's an awesome feature of our CI system that I would not give 
away :)


Also, as a reviewer, I'm not sure I would like to argue about function 
names while I'm not even sure that this change does not break the world.



It is just a waste of resources if reviewers are discussing where
this function should be,
or what the function should be named. After all these details are agreed
on, run the CI.


For a 20+ version patch-set, maybe 3 or 4 rounds
of tests are enough. Just test the last 3 or 4 versions.

 How do you know, when a new patchset arrives, that it's part of the last
3 or 4 versions?


I think it could work like this:
1. At first, developer submits v1 patch-set with RFC tag. CIs don't run.
2. After several reworked versions, like v5, v6, most reviewers have
agreed that the implementation is OK. Then submit v7 without the RFC
tag. Then CIs run.
3. After 3, 4 rounds of tests, v10 patch-set could be merged.

Thanks.



This can significantly reduce CI overload.

This workflow appears in many other OSS communities, such as Linux
kernel,
qemu and libvirt. Testers won't test patches with a [RFC] tag in
the commit message.
So I want to enable CI to support a similar mechanism.

I'm not sure if it is a good idea. Please help to review the
following BP.

https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am running a 3rd party CI for Cinder. The amount of time to set up,
operate and watch after the CI results costs way more than the 1 or 2
servers it takes to run the jobs. So, I don't want to be a party pooper
here, but in my opinion I am not sure it's worth the effort.

Note: I don't know about nova or neutron.

Jordan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named 
33). I also was not able to reproduce it on my regular devstack environment.


I've posted a temporary patch https://review.openstack.org/#/c/233017/ 
so that we're able to track where and when these files appear. Right now 
I only understood that they really appear during the devstack run, not 
earlier.






This is not guarunteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-12 Thread Dmitry Tantsur

On 10/09/2015 05:41 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named
33). I also was not able to reproduce it on my regular devstack
environment.

I've posted a temporary patch https://review.openstack.org/#/c/233017/
so that we're able to track where and when these files appear. Right now
I only understood that they really appear during the devstack run, not
earlier.


So, no file seems to be created; it looks like a problem in devstack: 
https://review.openstack.org/#/c/233584/








This is not guarunteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [inspector] Ideas for summit discussions

2015-10-12 Thread Dmitry Tantsur

Hi inspectors! :)

We don't have a proper design session in Tokyo, but I hope it won't 
prevent us from having an informal one, probably on Friday morning 
during the contributor meetup. I'm collecting the ideas of what we could 
discuss, so please feel free to jump in:

https://etherpad.openstack.org/p/mitaka-ironic-inspector

Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Introspection rules aka advances profiles replacement: next steps

2015-10-14 Thread Dmitry Tantsur

Hi OoO'ers :)

It's going to be a long letter, so fasten your seat-belts (and excuse my, 
as usual, bad English)!


In RDO Manager we used to have a feature called advanced profiles 
matching. It's still there in the documentation at 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html 
but the related code needed reworking and didn't quite make it upstream 
yet. This mail is an attempt to restart the discussion on this topic.


Short explanation for those unaware of this feature: we used detailed 
data from introspection (acquired using the hardware-detect utility [1]) to 
provide scheduling hints, which we called profiles. A profile is 
essentially a flavor, but calculated using much more data. E.g. you 
could say that a profile "foo" will be assigned to nodes with 1024 <= 
RAM <= 4096 and with GPU devices present (an artificial example). 
The profile was put on the Ironic node as a capability as a result of 
introspection. Please read the documentation linked above for more details.


This feature had a bunch of problems with it, to name a few:
1. It didn't have an API
2. It required a user to modify files by hand to use it
3. It was tied to a pretty specific syntax of the hardware [1] library

So we decided to split this thing into 3 parts, which are of value on 
their own:


1. Pluggable introspection ramdisk - so that we don't force dependency 
on hardware-detect on everyone.
2. User-defined introspection rules - some DSL that will allow a user to 
define something like a specs file (see link above) via an API. The 
outcome would be something, probably capabilit(y|ies) set on a node (a 
rough sketch follows right after this list).
3. Scheduler helper - a utility that will take capabilities set by the 
previous step, and turn them into exactly one profile to use.
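
To make item 2 a bit more concrete, a rule for the artificial GPU example 
above could be expressed roughly as the following payload for the inspector 
rules API (a hedged sketch: the field paths and action names are from 
memory, check the ironic-inspector documentation for the exact syntax):

    # Hypothetical introspection rule, written as the Python dict that
    # would be sent to the ironic-inspector rules endpoint.
    rule = {
        'description': 'Assign the "foo" profile to mid-sized nodes with a GPU',
        'conditions': [
            {'op': 'ge', 'field': 'data://memory_mb', 'value': 1024},
            {'op': 'le', 'field': 'data://memory_mb', 'value': 4096},
            # a condition on GPU presence would go here as well, depending
            # on what the ramdisk actually reports
        ],
        'actions': [
            {'action': 'set-capability', 'name': 'profile', 'value': 'foo'},
        ],
    }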


Long story short, we got 1 and 2 implemented in appropriate projects 
(ironic-python-agent and ironic-inspector) during the Liberty time 
frame. Now it's time to figure out what we do in TripleO about this, namely:


1. Do we need some standard way to define introspection rules for 
TripleO? E.g. a JSON file like we have for ironic nodes?


2. Do we need a scheduler helper at all? We could use only capabilities 
for scheduling, but then we can end up with the following situation: 
node1 has capabilities C1 and C2, node2 has capability C1. First we 
deploy a flavor requiring capability C1, and it lands on node1. Then we deploy a 
flavor requiring capability C2 and it fails, despite us initially having 2 
suitable nodes. This is what state files were solving in [1] (again, 
please refer to the documentation).


3. If we need it, where does it go? tripleo-common? Do we need an HTTP API 
for it, or do we just do it in the place where we need it? After all, it's a 
pretty trivial manipulation of ironic nodes...


4. Finally, we need an option to tell introspection to use 
python-hardware. I don't think it should be on by default, but it will 
require rebuilding of IPA (due to a new dependency).


Looking forward to your opinions.
Dmitry.

[1] https://github.com/redhat-cip/hardware

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-15 Thread Dmitry Tantsur

On 10/15/2015 12:18 AM, Robert Collins wrote:

On 15 October 2015 at 11:11, Thomas Goirand  wrote:


One major pain point is unfortunately something ridiculously easy to
fix, but which nobody seems to care about: the long & short descriptions
format. These are usually buried into the setup.py black magic, which by
the way I feel is very unsafe (does PyPi actually execute "python
setup.py" to find out about description texts? I hope they are running
this in a sandbox...).

Since everyone uses the fact that PyPi accepts RST format for the long
description, there's nothing that can really easily fit the
debian/control. Probably a rst2txt tool would help, but still, the long
description would still be polluted with things like changelog, examples
and such (damned, why people think it's the correct place to put that...).

The only way I'd see to fix this situation, would be a PEP. This will
probably take a decade to have everyone switching to a new correct way
to write a long & short description...


Perhaps Debian (1 thing) should change, rather than trying to change
all the upstreams packaged in it (>20K) :)


+1. Both README and PyPI are for users, and I personally find detailed 
descriptions (especially a couple of simple examples) on the PyPI page 
very valuable.
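
(For context, with plain setuptools the long description usually comes 
from something like the snippet below; pbr-based OpenStack projects get the 
same effect from a description-file entry in setup.cfg. The project name 
here is hypothetical.)

    # setup.py -- minimal sketch of where the PyPI long description comes from
    import setuptools

    with open('README.rst') as f:
        long_description = f.read()

    setuptools.setup(
        name='example-lib',                    # hypothetical project
        version='0.1.0',
        description='One-line summary shown in package listings',
        long_description=long_description,     # rendered as RST on PyPI
    )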




-Rob





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Design Summit Schedule

2015-10-16 Thread Dmitry Tantsur

On 10/15/2015 06:42 PM, Matthew Treinish wrote:


Hi Everyone,

I just pushed up the QA schedule for design summit:

https://mitakadesignsummit.sched.org/overview/type/qa

Let me know if there are any big schedule conflicts or other issues, so we can
work through the problem.


Hi!

I wonder if it's possible to move "QA: Tempest Microversion Support and 
Testing" one slot down (to 3:40), so that Ironic people can attend.




Thanks,

Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][ironic] ironic-inspector release 2.2.2 (liberty)

2015-10-21 Thread Dmitry Tantsur

We are gleeful to announce the release of:

ironic-inspector 2.2.2: Hardware introspection for OpenStack Bare Metal

With source available at:

http://git.openstack.org/cgit/openstack/ironic-inspector

The most important change is a fix for CVE-2015-5306; all users 
(including users of ironic-discoverd) are strongly advised to update.


Another user-visible change is defaulting MySQL to InnoDB, as MyISAM is 
known not to work.


For more details, please see the git log history below and:

http://launchpad.net/ironic-inspector/+milestone/2.2.2

Please report issues through launchpad:

http://bugs.launchpad.net/ironic-inspector

Changes in ironic-inspector 2.2.1..2.2.2


95db43c Always default to InnoDB for MySQL
2d42cdf Updated from global requirements
2c64da2 Never run Flask application with debug mode
bbf31de Fix gate broken by the devstack trueorfalse change
12eaf81 Use auth_strategy=noauth in functional tests

Diffstat (except docs and test files)
-

devstack/plugin.sh |  2 +-
ironic_inspector/db.py |  7 ++-
ironic_inspector/main.py   |  5 +--
.../versions/578f84f38d_inital_db_schema.py| 12 +++--
.../migrations/versions/d588418040d_add_rules.py   | 10 -
ironic_inspector/test/functional.py| 51 +++---

requirements.txt   |  2 +-
7 files changed, 52 insertions(+), 37 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e53d673..39b8423 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -21 +21 @@ oslo.rootwrap>=2.0.0 # Apache-2.0
-oslo.utils>=2.0.0 # Apache-2.0
+oslo.utils!=2.6.0,>=2.0.0 # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Next meeting is November 9

2015-10-22 Thread Dmitry Tantsur

On 10/22/2015 12:33 PM, Miles Gould wrote:

I've just joined - what is the usual place and time?


Hi and welcome!

All the information you need you can find here: 
https://wiki.openstack.org/wiki/Meetings/Ironic




Thanks,
Miles

- Original Message -
From: "Beth Elwell" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, 22 October, 2015 8:33:03 AM
Subject: Re: [openstack-dev] [ironic] Next meeting is November 9

Hi Jim,

I will be on holiday the week of the 9th November and so will be unable to make 
that meeting. Work on the ironic UI will be posted in the sub team report 
section and if anyone has any questions regarding it please shoot me an email 
or ping me.

Thanks!
Beth


On 22 Oct 2015, at 01:58, Jim Rollenhagen  wrote:

Hi folks,

Since we'll all be at the summit next week, and presumably recovering
the following week, the next Ironic meeting will be on November 9, in
the usual place and time. See you there! :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] re-introducing twisted to global-requirements

2016-01-07 Thread Dmitry Tantsur
2016-01-07 20:09 GMT+01:00 Jim Rollenhagen :

> Hi all,
>
> A change to global-requirements[1] introduces mimic, which is an http
> server that can mock various APIs, including nova and ironic, including
> control of error codes and timeouts. The ironic team plans to use this
> for testing python-ironicclient without standing up a full ironic
> environment.
>
> Here's the catch - mimic is built on twisted. I know twisted was
> previously removed from OpenStack (or at least people said "pls no", I
> don't know the full history). We didn't intend to stealth-introduce
> twisted back into g-r, but it was pointed out to me that it may appear
> this way, so here I am letting everyone know. lifeless pointed out that
> when tests are failing, people may end up digging into mimic or twisted
> code, which most people in this community aren't familiar with AFAIK,
> which is a valid point though I hope it isn't required often.
>

Btw, I've spent some amount of time (5 years?) with twisted at my previous
jobs. While my memory is no longer fresh on it, I can definitely be pinged
to help with it, if problems appear.


>
> So, the primary question here is: do folks have a problem with adding
> twisted here? We're holding off on Ironic changes that depend on this
> until this discussion has happened, but aren't reverting the g-r change
> until we decide one way or another.
>
> // jim
>
> [1] https://review.openstack.org/#/c/220268/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][openstackclient] check/gate job to check for duplicate openstackclient commands

2016-01-10 Thread Dmitry Tantsur
2016-01-10 8:36 GMT+01:00 Steve Martinelli :

> During the Liberty release the OpenStackClient (OSC) team ran into an
> issue that is documented here: [0] and here: [1]. In short, commands were
> clobbering each other because they had the same command name.
>
> A longer example is this, OSC has a command for listing compute flavors
> (os flavor list). zaqarclient, an OSC plugin, also implemented an `os
> flavor list` command. This caused OSC to break (it became unusable because
> it couldn't load entry points), and the user had to upgrade their
> zaqarclient, which included a renamed command (os messaging flavor list).
>
> In an effort to make sure this doesn't happen again, we did two things:
> 1) fixed the exception handling upon load at the cliff level, now OSC won't
> become unusable, it'll just take the last entrypoint it sees, and 2) we
> created a new gate/check job that checks for duplicate commands [2].
> (Thanks to ajaeger and dhellmann for their help in this work!)
>
> I've added this job to the OpenStackClient gate (in a non-voting capacity
> for now), and would like to get it added to the following projects, again
> in a non-voting capacity (since they are all OSC plugins):
>
> - python-barbicanclient
> - python-congressclient
> - python-cueclient
> - python-designateclient
> - python-heatclient
> - python-ironicclient
> - python-ironic-inspector-client
> - python-mistralclient
> - python-saharaclient
> - python-tuskarclient
> - python-zaqarclient
>

Note that python-tripleoclient is also an OSC plugin.
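
On the clobbering itself: it boils down to two distributions registering 
the same command name in the OSC entry point namespaces. A rough local 
check (not the actual gate job from [2]; the 'openstack.' group prefix 
below is an assumption) could look like this:

    # List command names that appear in more than one 'openstack.*' entry
    # point group across the installed distributions.
    import collections
    import pkg_resources

    commands = collections.defaultdict(list)
    for dist in pkg_resources.working_set:
        for group, entries in dist.get_entry_map().items():
            if not group.startswith('openstack.'):
                continue
            for name in entries:
                commands[name].append('%s from %s' % (group, dist.project_name))

    for name, owners in sorted(commands.items()):
        if len(owners) > 1:
            print('duplicate command %s: %s' % (name, '; '.join(owners)))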


>
> If the core team for any of those projects objects to me adding a new
> check job then reply to this thread or this patch [3]
>
> Regarding the eventual question about the value of a non-voting job:
> AFAICT, the new check job is working fine, it's catching valid errors and
> succeeded where it should. I'd like to make this voting eventually, but
> it's only been running in the OSC gate for about a week, and I'd like a few
> non-voting runs in the plugin projects to ensure we don't have any hiccups
> before making this a voting job.
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076272.html
> [1] https://bugs.launchpad.net/python-openstackclient/+bug/1503512
> [2] https://review.openstack.org/#/c/261828/
> [3] https://review.openstack.org/#/c/265608/
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Project Team Lead
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][tests] approach to functional/integration tests

2016-01-12 Thread Dmitry Tantsur

On 01/11/2016 03:49 PM, Serge Kovaleff wrote:

Hi All,

Last week I had a noble goal to write "one-more" functional test in Ironic.
I did find a folder "func" but it was empty.

Friends helped me to find a WIP patch
https://review.openstack.org/#/c/235612/

and here comes the question of this email: what approach we would like
to implement:
Option 1 - write infrastructure code that starts/configure/stops the
services
Option 2 - rely on installed DevStack and run the tests over it

Both options have their Cons and Pros. Both options are implemented
across the OpenStack umbrella.
Option 1 - Glance, Nova, the patch above
Option 2 - HEAT and my favorite at the moment.

Any ideas?


I think we should eventually end up with supporting both standalone 
functional tests (#1) and tempest-based (#3).


A bit of context on #1. We've been using it in ironic-inspector since 
nearly its inception, when devstack plugins didn't exist and our project 
was on stackforge, so we had no way of implementing our dsvm gate. The 
basic idea is to start a full-featured service with mocked access to 
other services and simplified environment. We've written a decorator [1] 
that starts the service in __enter__ and stops in __exit__. It uses a 
temporary configuration file [2] with authentication disabled and 
database in temporary SQLite file. The service is started in a new green 
thread and exits when the test exits. We mock ironicclient with the 
usual 'mock' library to avoid requirements on a running ironic instance.
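
To illustrate the pattern (this is NOT the actual ironic-inspector code -- 
see [1] and [2] below for that -- just a condensed, self-contained sketch 
using a plain thread and a toy Flask app instead of the real service and 
green threads):

    import threading
    import unittest

    import mock                  # external clients get replaced with mocks
    import requests
    from flask import Flask
    from werkzeug.serving import make_server

    app = Flask(__name__)

    @app.route('/v1/introspection/<uuid>', methods=['POST'])
    def introspect(uuid):
        # The real service would talk to ironic here via ironicclient;
        # in the functional test that client is mocked out.
        return '', 202

    class ServiceThread(threading.Thread):
        """Run the WSGI app on a local port for the duration of a test."""

        def __init__(self):
            super(ServiceThread, self).__init__()
            self.server = make_server('127.0.0.1', 5050, app)

        def run(self):
            self.server.serve_forever()

        def stop(self):
            self.server.shutdown()

    class TestIntrospectionAPI(unittest.TestCase):
        def setUp(self):
            super(TestIntrospectionAPI, self).setUp()
            self.service = ServiceThread()
            self.service.start()
            self.addCleanup(self.service.stop)

        # Patching is shown only to illustrate how the external client
        # would be mocked; the toy app above does not actually call it.
        @mock.patch('ironicclient.client.get_client', autospec=True)
        def test_introspect_returns_202(self, mock_get_client):
            resp = requests.post('http://127.0.0.1:5050/v1/introspection/1234')
            self.assertEqual(202, resp.status_code)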


We do 2 kinds of tests: just an API test like [3] or full-flow 
introspection tests like [4]. In the latter case we first start 
introspection via API, verify that status is "in progress", then call 
the ramdisk callback endpoint with fake data, and verify that 
introspection ends successfully.


Applying the same thing to ironic might be somewhat trickier. We can run 
conductor and API in the same process and use the oslo.messaging fake driver 
[5] to avoid the AMQP dependency. We'll have to use a fake network 
implementation and either mock glance or make sure we use local file 
system URLs for all images (IIRC we do support it).
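
(For reference, selecting the fake driver [5] is just a matter of the 
transport URL; a minimal sketch, assuming the standard oslo.messaging API 
of the time:)

    from oslo_config import cfg
    import oslo_messaging as messaging

    # In-process transport with the fake driver: RPC between the API and
    # the conductor then needs no AMQP broker at all.
    transport = messaging.get_transport(cfg.CONF, url='fake://')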


Going further, if we create a simple fake IPA, we can even do a 
full-flow test with the fake_agent driver. We will start a deployment, make 
sure the fake IPA was called with the right image, make it report success and 
see the deployment finish. We might want to modify the fake interfaces to record 
all calls, so that we can verify that the boot interface was called properly.


[1] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L358-L393
[2] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L36-L51
[3] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L249-L291
[4] 
https://github.com/openstack/ironic-inspector/blob/master/ironic_inspector/test/functional.py#L177-L196

[5] http://docs.openstack.org/developer/oslo.messaging/drivers.html#fake



Cheers,
Serge Kovaleff
http://www.mirantis.com 
cell: +38 (063) 83-155-70


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Dmitry Tantsur

On 01/11/2016 11:09 PM, Tzu-Mainn Chen wrote:

- Original Message -

Background info:

We've got a problem in TripleO at the moment where many of our
workflows can be driven by the command line only. This causes some
problems for those trying to build a UI around the workflows in that
they have to duplicate deployment logic in potentially multiple places.
There are specs up for review which outline how we might solve this
problem by building what is called TripleO API [1].

Late last year I began experimenting with an OpenStack service called
Mistral which contains a generic workflow API. Mistral supports
defining workflows in YAML and then creating, managing, and executing
them via an OpenStack API. Initially the effort was focused around the
idea of creating a workflow in Mistral which could supplant our
"baremetal introspection" workflow which currently lives in python-
tripleoclient. I create a video presentation which outlines this effort
[2]. This particular workflow seemed to fit nicely within the Mistral
tooling.



More recently I've turned my attention to what it might look like if we
were to use Mistral as a replacement for the TripleO API entirely. This
brings forth the question of would TripleO be better off building out
its own API... or would relying on existing OpenStack APIs be a better
solution?

Some things I like about the Mistral solution:

- The API already exists and is generic.

- Mistral already supports interacting with many of the OpenStack API's
we require [3]. Integration with keystone is baked in. Adding support
for new clients seems straightforward (I've had no issues in adding
support for ironic, inspector, and swift actions).

- Mistral actions are pluggable. We could fairly easily wrap some of
our more complex workflows (perhaps those that aren't easy to replicate
with pure YAML workflows) by creating our own TripleO Mistral actions.
This approach would be similar to creating a custom Heat resource...
something we have avoided with Heat in TripleO but I think it is
perhaps more reasonable with Mistral and would allow us to again build
out our YAML workflows to drive things. This might allow us to build
off some of the tripleo-common consolidation that is already underway
...

- We could achieve a "stable API" by simply maintaining input
parameters for workflows in a stable manner. Or perhaps workflows get
versioned like a normal API would be as well.

- The purist part of me likes Mistral quite a bit. It fits nicely with
the deploy OpenStack with OpenStack. I sort of feel like if we have to
build our own API in TripleO part of this vision has failed and could
even be seen as a massive technical debt which would likely be hard to
build a community around outside of TripleO.

- Some of the proposed validations could perhaps be implemented as new
Mistral actions as well. I'm not convinced we require TripleO API just
to support a validations mechanism yet. Perhaps validations seem hard
because we are simply trying to do them in the wrong places anyway?
(like for example perhaps we should validate network connectivity at
inspection time rather than during provisioning).

- Power users might find a workflow built around a Mistral API more
easy to interact with and expand upon. Perhaps this ends up being
something that gets submitted as a patchset back to the TripleO that we
accept into our upstream "stock" workflow sets.



Hiya!  Thanks for putting down your thoughts.

I think I fundamentally disagree with the idea of using Mistral, simply
because many of the actions we'd like to expose through a REST API
(described in the tripleo-common deployment library spec [1]) aren't
workflows; they're straightforward get/set methods.


Right, because this spec describes almost none of what is present 
in tripleoclient now. And what we realistically have now is workflows, 
which we'll have to reimplement in an API somehow. So maybe we need both: 
the get/set TripleO API for deployment plans and Mistral for workflows.
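
(As an aside, wrapping a chunk of TripleO logic as a custom Mistral action, 
as suggested above, would look roughly like the sketch below; the base 
class location and the 'mistral.actions' entry point group are from memory 
and may not be exact.)

    # Hypothetical TripleO action; registered via a 'mistral.actions'
    # entry point so that workflows can call it by name.
    from mistral.actions import base

    class RegisterNodesAction(base.Action):
        """Register a list of nodes with Ironic (illustrative only)."""

        def __init__(self, nodes_json):
            self.nodes_json = nodes_json

        def run(self):
            # Real code would call the ironic/inspector clients here and
            # return a result the workflow can publish or branch on.
            return {'registered': len(self.nodes_json)}

        def test(self):
            # Used for workflow dry-runs.
            return {'registered': 0}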


> Putting a workflow

engine in front of that feels like overkill and an added complication
that simply isn't needed.  And added complications can lead to unneeded
complications: for instance, one open Mistral bug details how it may
not scale well [2].


Let's not talk about scaling in the context of what we have in 
tripleoclient now ;)




The Mistral solution feels like we're trying to force a circular peg
into a round-ish hole.  In a vacuum, if we were to consider the
engineering problem of exposing a code base to outside consumers in a
non-language specific fashion - I'm pretty sure we'd just suggest the
creation of a REST API and be done with it; the thought of using a
workflow engine as the frontend would not cross our minds.

I don't really agree with the 'purist' argument.  We already have custom
business logic written in the TripleO CLI; accepting that within TripleO,
but not a very thin API layer, feels like an arbitrary line to me.  And
if that li

Re: [openstack-dev] [release][ironic] ironic-python-agent release 1.1.0 (mitaka)

2016-01-12 Thread Dmitry Tantsur
The gate is not working right now, as we still use preversioning in 
setup.cfg, and we have a version mismatch, e.g. 
http://logs.openstack.org/74/264274/1/check/gate-ironic-python-agent-pep8/8d6ef18/console.html.


Patch to remove the version from setup.cfg:
https://review.openstack.org/#/c/266267/
Will backport to liberty as soon as it merges.

On 01/11/2016 10:01 PM, d...@doughellmann.com wrote:

We are glad to announce the release of:

ironic-python-agent 1.1.0: Ironic Python Agent Ramdisk

This release is part of the mitaka release series.

With package available at:

 https://pypi.python.org/pypi/ironic-python-agent

For more details, please see below.


1.1.0
^


New Features


* The CoreOS image builder now uses the latest CoreOS stable version
   when building images.

* IPA now supports Linux-IO as an alternative to tgtd. The iSCSI
   extension will try to use Linux-IO first, and fall back to tgtd if
   Linux-IO is not found or cannot be used.

* Adds support for setting proxy info for downloading images. This
   is controlled by the *proxies* and *no_proxy* keys in the
   *image_info* dict of the *prepare_image* command.

* Adds support for streaming raw images directly onto the disk. This
   avoids writing the image to a tmpfs partition before writing it to
   disk, which also enables using images larger than the usable amount
   of RAM on the machine IPA runs on. Pass *stream_raw_images=True* to
   the *prepare_image* command to enable this; it is disabled by
   default.

* CoreOS image builder now runs IPA in a chroot, instead of a
   container. systemd-nspawn has been adding more security features
   that break several things IPA needs to do (after all, IPA
   manipulates hardware), such as using sysrq triggers or writing to
   /sys.

* Root device hints now also inspect ID_WWN_WITH_EXTENSION and
   ID_WWN_VENDOR_EXTENSION from udev.


Upgrade Notes
*

* Now that IPA runs in a chroot, any operator tooling built around
   the container may need to change (for example, methods of getting a
   shell inside the container).


Bug Fixes
*

* Raw images larger than available of RAM may now be used by passing
   *stream_raw_images=True* to the *prepare_image* command; these will
   be streamed directly to disk.

* Fixes an issue using the "logs" inspection collector when logs
   contain non-ascii characters.

* Makes tgtd ready status detection more robust.

* Fixes configdrive creation for MBR disks greater than 2TB.


Other Notes
***

* System random is now used where applicable, rather than the
   default python random library.


Changes in ironic-python-agent 1.0.0..1.1.0
---

43a149d Updated from global requirements
dcdb06d Replace deprecated LOG.warn with LOG.warning
4b561f1 Updated from global requirements
943d2c0 Revert "Use latest CoreOS stable when building"
a39dfbd Updated from global requirements
ffcdcd4 Add mitaka reno page
cfcef97 Replace assertEqual(None, *) with assertIsNone in tests
b9df861 Catch up release notes for Mitaka
e8488c2 Add reno for release notes management
d185927 Fix trivial typo in docs
5bac998 Updated from global requirements
4cd64e2 Delete the Linux-IO target before setting up local boot
056bb42 CoreOS: Ensure /run is mounted before starting
6dc7f34 Deprecated tox -downloadcache option removed
a253e50 Use latest CoreOS stable when building
84fc428 Updated from global requirements
b5b0b63 Run IPA in chroot instead of container in CoreOS
5fa258b Fix "logs" inspection collector when logs contain non-ascii symbols
2fc6ce2 pyudev exception has changed for from_device_file
c474a5a Support Linux-IO in addition to tgtd
f4ad4d7 Updated from global requirements
863b47b Updated from global requirements
e320bb8 Add support for streaming raw images directly onto the disk
65053b7 Refactor the image download and checksum computation bits
c21409e Follow up patch for da9c3b0adc67efa916fc534d975823c0a45948a1
a01c4c9 Create partition at max msdos limit for disks > 2TB
54c901e Support proxies for image download
d97dbf2 Updated from global requirements
da9c3b0 Extend root device hints for different types of WWN
505b345 Fix to preserve double dashes of command line option in HTML.
59630d4 Updated from global requirements
9e75ba5 Use oslo.log instead of original logging
037e391 Updated from global requirements
18d5d6a Replace deprecated LOG.warn with LOG.warning
e51ccbe avoid duplicate text in ISCSIError message
fb920f4 determine tgtd ready status through tgtadm
f042be5 Updated from global requirements
1aeef4d Updated from global requirements
f01 Add param docstring into the normalize func
06d34ae Make calling arguments easier to understand
6131b2e Ensure all methods in utils.py have docstrings
7823240 Updated from global requirements
af20875 Update gitignore
5f7bc48 Reduce size of CoreOS ramdisk
deb50ac Add LOG.debug() if requested device type not found
d538f5e Babel is not a direct dependency
27048ef

Re: [openstack-dev] [release][ironic] ironic-python-agent release 1.1.0 (mitaka)

2016-01-12 Thread Dmitry Tantsur

On 01/12/2016 10:56 AM, Dmitry Tantsur wrote:

Gate is not working right now, as we still use preversioning in
setup.cfg, and we have a version mismatch, e.g.
http://logs.openstack.org/74/264274/1/check/gate-ironic-python-agent-pep8/8d6ef18/console.html.


Patch to remove the version from setup.cfg:
https://review.openstack.org/#/c/266267/
Will backport to liberty as soon as it merges.


The master change has merged; liberty was already fine, so the gate should 
be working now.




On 01/11/2016 10:01 PM, d...@doughellmann.com wrote:

We are glad to announce the release of:

ironic-python-agent 1.1.0: Ironic Python Agent Ramdisk

This release is part of the mitaka release series.

With package available at:

 https://pypi.python.org/pypi/ironic-python-agent

For more details, please see below.


1.1.0
^


New Features


* The CoreOS image builder now uses the latest CoreOS stable version
   when building images.

* IPA now supports Linux-IO as an alternative to tgtd. The iSCSI
   extension will try to use Linux-IO first, and fall back to tgtd if
   Linux-IO is not found or cannot be used.

* Adds support for setting proxy info for downloading images. This
   is controlled by the *proxies* and *no_proxy* keys in the
   *image_info* dict of the *prepare_image* command.

* Adds support for streaming raw images directly onto the disk. This
   avoids writing the image to a tmpfs partition before writing it to
   disk, which also enables using images larger than the usable amount
   of RAM on the machine IPA runs on. Pass *stream_raw_images=True* to
   the *prepare_image* command to enable this; it is disabled by
   default.

* CoreOS image builder now runs IPA in a chroot, instead of a
   container. systemd-nspawn has been adding more security features
   that break several things IPA needs to do (after all, IPA
   manipulates hardware), such as using sysrq triggers or writing to
   /sys.

* Root device hints now also inspect ID_WWN_WITH_EXTENSION and
   ID_WWN_VENDOR_EXTENSION from udev.


Upgrade Notes
*

* Now that IPA runs in a chroot, any operator tooling built around
   the container may need to change (for example, methods of getting a
   shell inside the container).


Bug Fixes
*

* Raw images larger than available of RAM may now be used by passing
   *stream_raw_images=True* to the *prepare_image* command; these will
   be streamed directly to disk.

* Fixes an issue using the "logs" inspection collector when logs
   contain non-ascii characters.

* Makes tgtd ready status detection more robust.

* Fixes configdrive creation for MBR disks greater than 2TB.


Other Notes
***

* System random is now used where applicable, rather than the
   default python random library.


Changes in ironic-python-agent 1.0.0..1.1.0
---

43a149d Updated from global requirements
dcdb06d Replace deprecated LOG.warn with LOG.warning
4b561f1 Updated from global requirements
943d2c0 Revert "Use latest CoreOS stable when building"
a39dfbd Updated from global requirements
ffcdcd4 Add mitaka reno page
cfcef97 Replace assertEqual(None, *) with assertIsNone in tests
b9df861 Catch up release notes for Mitaka
e8488c2 Add reno for release notes management
d185927 Fix trivial typo in docs
5bac998 Updated from global requirements
4cd64e2 Delete the Linux-IO target before setting up local boot
056bb42 CoreOS: Ensure /run is mounted before starting
6dc7f34 Deprecated tox -downloadcache option removed
a253e50 Use latest CoreOS stable when building
84fc428 Updated from global requirements
b5b0b63 Run IPA in chroot instead of container in CoreOS
5fa258b Fix "logs" inspection collector when logs contain non-ascii
symbols
2fc6ce2 pyudev exception has changed for from_device_file
c474a5a Support Linux-IO in addition to tgtd
f4ad4d7 Updated from global requirements
863b47b Updated from global requirements
e320bb8 Add support for streaming raw images directly onto the disk
65053b7 Refactor the image download and checksum computation bits
c21409e Follow up patch for da9c3b0adc67efa916fc534d975823c0a45948a1
a01c4c9 Create partition at max msdos limit for disks > 2TB
54c901e Support proxies for image download
d97dbf2 Updated from global requirements
da9c3b0 Extend root device hints for different types of WWN
505b345 Fix to preserve double dashes of command line option in HTML.
59630d4 Updated from global requirements
9e75ba5 Use oslo.log instead of original logging
037e391 Updated from global requirements
18d5d6a Replace deprecated LOG.warn with LOG.warning
e51ccbe avoid duplicate text in ISCSIError message
fb920f4 determine tgtd ready status through tgtadm
f042be5 Updated from global requirements
1aeef4d Updated from global requirements
f01 Add param docstring into the normalize func
06d34ae Make calling arguments easier to understand
6131b2e Ensure all methods in utils.py have docstrings
7823240 Updated from global requirements

Re: [openstack-dev] Exception request : [stable] Ironic doesn't use cacert while talking to Swift ( https://review.openstack.org/#/c/253460/)

2016-01-18 Thread Dmitry Tantsur

On 01/17/2016 09:25 AM, Nisha Agarwal wrote:

Hello Team,

This patch got approval long back (Jan 6), but due to a Jenkins failure in
the merge pipeline of the Kilo branch, this patch was not merged.

Hence I request an exception for this patch, as it was not merged
due to a Jenkins issue.


Hi.

Our kilo gate is still not feeling well, so I'm not sure there's any 
point in giving an exception for anything not deadly critical. Sorry.




Regards
Nisha

--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Dmitry Tantsur

On 02/07/2016 09:07 PM, Jay Pipes wrote:

Hello all,

tl;dr
=

I have long thought that the OpenStack Summits have become too
commercial and provide little value to the software engineers
contributing to OpenStack.

I propose the following:

1) Separate the design summits from the conferences
2) Hold only a single OpenStack conference per year
3) Return the design summit to being a low-key, low-cost working event


It sounds like a great idea, but I have a couple of concerns - see below.



details
===

The design summits originally started out as working events. Developers
got together in smallish rooms, arranged chairs in a fishbowl, and got
to work planning and designing.

With the OpenStack Summit growing more and more marketing- and
sales-focused, the contributors attending the design summit are often
unfocused. The precious little time that developers have to actually
work on the next release planning is often interrupted or cut short by
the large numbers of "suits" and salespeople at the conference event,
many of which are peddling a product or pushing a corporate agenda.

Many contributors submit talks to speak at the conference part of an
OpenStack Summit because their company says it's the only way they will
pay for them to attend the design summit. This is, IMHO, a terrible
thing. The design summit is a *working* event. Companies that contribute
to OpenStack projects should send their engineers to working events
because that is where work is done, not so that their engineer can go
give a talk about some vendor's agenda-item or newfangled product.


I'm afraid that if a company does not value employees' participation in 
the design summit alone, they will continue to send them to the 
conference event, ignoring the design part completely. I.e. we'll get 
even fewer people from these companies. (Of course, this is only me guessing.)


Also, it means that people who actually have to be present in both places 
will travel even more, so it has a good chance of increasing the budget, not 
decreasing it.




Part of the reason that companies only send engineers who are giving a
talk at the conference side is that the cost of attending the OpenStack
Summit has become ludicrously expensive. Why have the events become so
expensive? I can think of a few reasons:

a) They are held every six months. I know of no other community or open
source project that holds *conference-type* events every six months.

b) They are held in extremely expensive hotels and conference centers
because the number of attendees is so big.


On one hand, a big +1 for the "extremely expensive" part.

On the other hand, for participants arriving from another continent the 
airfare is roughly half of the whole expense. This probably can't be 
improved (and may actually become worse for some of us, if new events 
become more US-centric).




c) Because the conferences have become sales and marketing-focused
events, companies shell out hundreds of thousands of dollars for schwag,
for rented event people, for food and beverage sponsorships, for keynote
slots, for lavish and often ridiculous parties, and more. This cost
means less money to send engineers to the design summit to do actual work.

I would love to see the OpenStack contributor community take back the
design summit to its original format and purpose and decouple it from
the OpenStack Summit's conference portion.

I believe the design summits should be organized by the OpenStack
contributor community, not the OpenStack Foundation and its marketing
and event planning staff. This will allow lower-cost venues to be chosen
that meet the needs only of the small group of active contributors, not
of huge masses of conference attendees. This will allow contributor
companies to send *more* engineers to *more* design summits, which is
something that really needs to happen if we are to grow our active
contributor pool.

Once this decoupling occurs, I think that the OpenStack Summit should be
renamed to the OpenStack Conference and Expo to better fit its purpose
and focus. This Conference and Expo event really should be held once a
year, in my opinion, and continue to be run by the OpenStack Foundation.

I, for one, would welcome events that have no conference check-in area,
no evening parties with 2000 people, no keynote and
powerpoint-as-a-service sessions, and no getting pulled into sales
meetings.

OK, there, I said it.

Thoughts? Criticism? Support? Suggestions welcome.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin

Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-08 Thread Dmitry Tantsur

On 02/08/2016 06:37 PM, Kevin L. Mitchell wrote:

On Mon, 2016-02-08 at 10:49 -0500, Jay Pipes wrote:

5) Dealing with schwag, giveaways, parties, and other superfluous
stuff


As a confirmed introvert, I have to say that I rarely attend parties,
for a variety of reasons.  However, I don't think our hypothetical
design-only meeting should completely eliminate parties, though we can
back off from some of the more extravagant ones.  If we maintain at
least one party, I think that would satisfy the social needs of the
community without distracting too much from the main purpose of the
event.  Of course, I agree with eliminating the other distracting
elements, such as schwag and giveaways…



+1, I think we can just make a party somewhat less fancy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Dmitry Tantsur

Hi everyone!

Yesterday on the Ironic midcycle we agreed that we would like to remove 
support for the old bash ramdisk from our code and gate. This, however, 
poses a problem, since we still support Kilo and Liberty. Meaning:


1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
2. Then we should continue to run our job on DIB, as DIB does not have 
stable branches.
3. Then we can't remove support from Ironic master either, as it would 
break the DIB job :(


I see the following options:

1. Wait for Kilo end-of-life (April?) before removing jobs and code. 
This means that the old ramdisk will essentially be supported in Mitaka, 
but we'll remove gating on stable/liberty and stable/mitaka very soon. 
Pros: it will happen soon. Cons: in theory we do support the old ramdisk 
on Liberty, so removing gates will end this support prematurely.


2. Wait for Liberty end-of-life. This means that the old ramdisk will 
essentially be supported in Mitaka and Newton. We should somehow 
communicate that it's not official and can be dropped at any moment 
during stable branches life time. Pros: we don't drop support of the 
bash ramdisk on any branch where we promised to support it. Cons: people 
might assume we still support the old ramdisk on Mitaka/Newton; it will 
also take a lot of time.


3. Do it now, recommend Kilo users to switch to IPA too. Pros: it 
happens now, no confusing around old ramdisk support in Mitaka and 
later. Cons: probably most Kilo users (us included) are using the bash 
ramdisk, meaning we can potentially break them when landing changes on 
stable/kilo.


4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then 
remove gates from Ironic master and DIB, leaving them on Kilo and 
Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB 
bug fixes won't affect kilo and liberty any more.


5. The same as #4, but only on Kilo.

As gate on stable/kilo is not working right now, and end-of-life is 
quickly approaching, I see number 3 as a pretty viable option anyway. We 
probably won't land any more changes on Kilo, so no use in keeping gates 
on it. Liberty is still a concern though, as the old ramdisk was only 
deprecated in Liberty.


What do you all think? Did I miss any options?

Cheers,
Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Dmitry Tantsur

On 02/17/2016 02:22 PM, John Trowbridge wrote:



On 02/17/2016 06:27 AM, Dmitry Tantsur wrote:

Hi everyone!

Yesterday on the Ironic midcycle we agreed that we would like to remove
support for the old bash ramdisk from our code and gate. This, however,
poses a problem, since we still support Kilo and Liberty. Meaning:

1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
2. Then we should continue to run our job on DIB, as DIB does not have
stable branches.
3. Then we can't remove support from Ironic master as well, as it would
break DIB job :(

I see the following options:

1. Wait for Kilo end-of-life (April?) before removing jobs and code.
This means that the old ramdisk will essentially be supported in Mitaka,
but we'll remove gating on stable/liberty and stable/mitaka very soon.
Pros: it will happen soon. Cons: in theory we do support the old ramdisk
on Liberty, so removing gates will end this support prematurely.

2. Wait for Liberty end-of-life. This means that the old ramdisk will
essentially be supported in Mitaka and Newton. We should somehow
communicate that it's not official and can be dropped at any moment
during stable branches life time. Pros: we don't drop support of the
bash ramdisk on any branch where we promised to support it. Cons: people
might assume we still support the old ramdisk on Mitaka/Newton; it will
also take a lot of time.

3. Do it now, recommend Kilo users to switch to IPA too. Pros: it
happens now, no confusing around old ramdisk support in Mitaka and
later. Cons: probably most Kilo users (us included) are using the bash
ramdisk, meaning we can potentially break them when landing changes on
stable/kilo.



I think if we were to do this, then we need to backport LIO support in
IPA to liberty and kilo. While the bash ramdisk is not awesome to
troubleshoot, tgtd is not great either, and the bash ramdisk has
supported LIO since Kilo. However, there is no stable/kilo branch in
IPA, so that backport is impossible. I have not looked at how hard the
stable/liberty backport would be, but I imagine not very.


4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then
remove gates from Ironic master and DIB, leaving them on Kilo and
Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB
bug fixes won't affect kilo and liberty any more.

5. The same as #4, but only on Kilo.

As gate on stable/kilo is not working right now, and end-of-life is
quickly approaching, I see number 3 as a pretty viable option anyway. We
probably won't land any more changes on Kilo, so no use in keeping gates
on it. Liberty is still a concern though, as the old ramdisk was only
deprecated in Liberty.

What do you all think? Did I miss any options?


My favorite option would be 5 with backport of LIO support to liberty
(since backport to kilo is not possible). That is the only benefit of
the current bash ramdisk over the liberty/kilo IPA ramdisk. This is not
just for RHEL, but RHEL derivatives like CentOS which the RDO distro is
based on. (technically tgt can still be installed from EPEL, but there
is a reason it is not included in the base repos)


Oh, that's a good catch, IPA is usable on RHEL starting with Mitaka... I 
wonder if having stable branches for IPA was a good idea at all, 
especially given that our gate is using git master on all branches.




Other than that, I think 4 is the next best option.


Cheers,
Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][i18n] What is the backport policy on i18n changes?

2016-02-18 Thread Dmitry Tantsur

On 02/18/2016 01:16 AM, Matt Riedemann wrote:

I don't think we have an official policy for stable backports with
respect to translatable string changes.

I'm looking at a release request for ironic-inspector on stable/liberty
[1] and one of the changes in that has translatable string changes to
user-facing error messages [2].

mrunge brought up this issue in the stable team meeting this week also
since Horizon has to be extra careful about backporting changes with
translatable string changes.

I think on the server side, if they are changes that just go in the
logs, it's not a huge issue. But for user facing changes, should we
treat those like StringFreeze [3]? Or only if the stable branches for
the given project aren't getting translation updates? I know the server
projects (at least nova) are still getting translation updates on
stable/liberty, so if we do backport changes with translatable string
updates, they aren't getting updated in stable. I don't see anything
like that happening for ironic-inspector on stable/liberty though.


Hi!

I had this concern, but ironic-inspector has never had any actual 
translations, so I don't think it's worth blocking this (pretty 
annoying) bug fix based on that.




Thoughts?

[1] https://review.openstack.org/#/c/279515/
[2] https://review.openstack.org/#/c/279071/1/ironic_inspector/process.py
[3] https://wiki.openstack.org/wiki/StringFreeze




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Dmitry Tantsur

Hi all!

Initially we didn't plan on having stable branches for IPA at all. Our 
gate is using the prebuilt image generated from the master branch even 
on Ironic/Inspector stable branches. The branch in question was added by 
request of RDO folks, and today I got a request from trown to remove it:


 dtantsur: btw, what do you think the chances are that IPA gets 
rid of stable branch?
 I'm +1 on that, because currently only tripleo is using this 
stable branch, our own gates are using tarball from master

 s/tarball/prebuilt image/
 cool, from RDO perspective, I would prefer to have master 
package in our liberty delorean server, but I cant do that (without 
major hacks) if there is a stable/liberty branch

 LIO support being the main reason
 fwiw, I have tested master IPA on liberty and it works great

So I suggest we drop stable branches from IPA. This won't affect the 
Ironic gate in any regard, as we don't use stable IPA there anyway, as I 
mentioned before. As we do know already, we'll keep IPA compatible with 
all supported Ironic and Inspector versions.


Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] Suggestion to remove stable/liberty and stable branches support from ironic-python-agent

2016-02-19 Thread Dmitry Tantsur

On 02/19/2016 01:29 PM, Lucas Alvares Gomes wrote:

Hi,

By removing stable branches do you mean stable branches for mitaka and
newer releases, or does that include stable/liberty, which already
exists as well?

I think the latter is more complicated, I don't think we should drop
stable/liberty like that because other people (apart from TripleO) may
also depend on that. I mean, it wouldn't be very "stable" if stable
branches were deleted before their supported phases.


Yeah, this is a valid concern. Maybe we should recommend RDO somehow 
ignore stable/liberty, and then no longer have stable branches..




But that said, I'm +1 to not have stable branches for newer releases.

Cheers,
Lucas

On Fri, Feb 19, 2016 at 12:17 PM, Dmitry Tantsur  wrote:

Hi all!

Initially we didn't plan on having stable branches for IPA at all. Our gate
is using the prebuilt image generated from the master branch even on
Ironic/Inspector stable branches. The branch in question was added by
request of RDO folks, and today I got a request from trown to remove it:

 dtantsur: btw, what do you think the chances are that IPA gets rid
of stable branch?
 I'm +1 on that, because currently only tripleo is using this
stable branch, our own gates are using tarball from master
 s/tarball/prebuilt image/
 cool, from RDO perspective, I would prefer to have master package in
our liberty delorean server, but I cant do that (without major hacks) if
there is a stable/liberty branch
 LIO support being the main reason
 fwiw, I have tested master IPA on liberty and it works great

So I suggest we drop stable branches from IPA. This won't affect the Ironic
gate in any regard, as we don't use stable IPA there anyway, as I mentioned
before. As we do know already, we'll keep IPA compatible with all supported
Ironic and Inspector versions.

Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Dmitry Tantsur

I agree with Daniel + a couple more comments inline.

On 02/22/2016 04:49 PM, Daniel P. Berrange wrote:

On Mon, Feb 22, 2016 at 04:14:06PM +0100, Thierry Carrez wrote:

Hi everyone,

TL;DR: Let's split the events, starting after Barcelona.


Yes, please. Your proposal addresses the big issue I have with current
summits which is the really poor timing wrt start of each dev cycle.


The idea would be to split the events. The first event would be for upstream
technical contributors to OpenStack. It would be held in a simpler,
scaled-back setting that would let all OpenStack project teams meet in
separate rooms, but in a co-located event that would make it easy to have
ad-hoc cross-project discussions. It would happen closer to the centers of
mass of contributors, in less-expensive locations.


The idea that we can choose less expensive locations is great, but I'm a
little wary of focusing too much on "centers of mass of contributors", as
it can easily become an excuse to have it in roughly the same places each
time. As a non-USA based contributor, I really value the fact that the
summits rotate around different regions instead of spending all the time
in the USA as was the case earlier in OpenStack days. Minimizing travel
costs is no doubt a welcome aim for companies' budgets, but it should not
be allowed to dominate to such a large extent that we miss representation
of different regions. I.e., if we never went back to Asia because it is
cheaper for the /current/ majority of contributors to go to the US, we'll
make it harder to attract new contributors from those regions we avoid on
cost grounds. The "center of mass of contributors" could become a self-
fulfilling prophecy.

IOW, I'm onboard with choosing less expensive locations, but would like
to see us still make the effort to reach out across different regions
for the events, and not become too US focused once again.


+1 here. I got the impression that midcycles now usually happen in the
US. Indeed, it's probably much cheaper for the majority of contributors,
but it would make things worse for non-US folks.





The split should ideally reduce the needs to organize separate in-person
mid-cycle events. If some are still needed, the main conference venue and
time could easily be used to provide space for such midcycle events (given
that it would end up happening in the middle of the cycle).


The obvious risk with suggesting that current mid-cycle events could take
place alongside the business conference, is that the "business conference"
ends up being just as large as our combined conference is today. IOW we
risk actually creating 4 big official developer events a year, instead of
the current 2 events + small unofficial mid-cycles. You'd need to find some
way to limit the scope of any "mid cycle" events that co-located with the
business conference to prevent it growing out of hand.  We really want to
make sure we keep the mid-cycles portrayed as optional small scale
"hackathons", and not something that contributors feel obligated to
attend. IMHO they're already risking getting out of hand - it is hard to
feel well connected to development plans if you miss the mid-cycle events.


This time we (Ironic) tried a virtual midcycle using the Asterisk
infrastructure provided by the infra team, and it worked surprisingly
well. I'd recommend more teams try this option instead of trying to
find a better way of having one more expensive f2f event (even though I
really like to meet other folks).




Regards,
Daniel




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Remember to follow RFE process

2016-03-03 Thread Dmitry Tantsur
2016-03-03 11:01 GMT+01:00 Lucas Alvares Gomes :

> Hi,
>
> > Ironic'ers, please remember to follow the RFE process; especially the
> cores.
> >
> > I noticed that a patch [1] got merged yesterday. The patch was associated
> > with an RFE [2] that hadn't been approved yet :-( What caught my eye was
> > that the commit message didn't describe the actual API change so I took a
> > quick look at the (RFE) bug and it wasn't documented there either.
> >
> > As a reminder, the RFE process is documented [3].
> >
> > Spec cores need to try to be more timely wrt specs (I admit, I am
> guilty).
> > And folks, especially cores, ought to take more care when reviewing.
> > Although I do feel like there are too many things that a reviewer needs
> to
> > keep in mind.
> >
> > Should we revert the patch [1] for now? (Disclaimer. I haven't looked at
> the
> > patch itself. But I don't think I should have to, to know what the API
> > change is.)
> >
>
> Thanks for calling it out Ruby, that's unfortunate that the patch was
> merged without the RFE being approved. About reverting the patch I
> think we shouldn't do that now because the patch is touching the API
> and introducing a new microversion to it.
>

Exactly. I've -2'ed the revert, as removing an API version is even worse than
landing a change without an approved RFE. Let's make sure to approve the RFE
asap, and then adjust the code according to it.


>
> And yes, as reviewers let's try to improve our process. We probably
> should talk about how we can do it in the next upstream meeting.
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI jobs failures

2016-03-07 Thread Dmitry Tantsur

On 03/06/2016 05:58 PM, James Slagle wrote:

On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi  wrote:

I'm kind of hijacking Dan's e-mail but I would like to propose some
technical improvements to stop having so many CI failures.


1/ Stop creating swap files. We don't have SSDs, and it is IMHO a terrible
mistake to swap on files because we don't have enough RAM. In my
experience, swapping on non-SSD disks is even worse than not having
enough RAM. We should stop doing that, I think.


We have been relying on swap in tripleo-ci for a little while. While
not ideal, it has been an effective way to at least be able to test
what we've been testing given the amount of physical RAM that is
available.

The recent change to add swap to the overcloud nodes has proved to be
unstable. But that has more to do with it being racey with the
validation deployment afaict. There are some patches currently up to
address those issues.




2/ Split CI jobs in scenarios.

Currently we have CI jobs for ceph, HA, non-ha, containers and the
current situation is that jobs fail randomly, due to performance issues.

Puppet OpenStack CI had the same issue where we had one integration job
and we never stopped adding more services until it all became *very*
unstable. We solved that issue by splitting the jobs and creating scenarios:

https://github.com/openstack/puppet-openstack-integration#description

What I propose is to split TripleO jobs in more jobs, but with less
services.

The benefit of that:

* more service coverage
* jobs will run faster
* fewer random issues due to bad performance

The cost is of course it will consume more resources.
That's why I suggest 3/.

We could have:

* HA job with ceph and a full compute scenario (glance, nova, cinder,
ceilometer, aodh & gnocchi).
* Same with IPv6 & SSL.
* HA job without ceph and full compute scenario too
* HA job without ceph and basic compute (glance and nova), with extra
services like Trove, Sahara, etc.
* ...
(note: all jobs would have network isolation, which is to me a
requirement when testing an installer like TripleO).


Each of those jobs would at least require as much memory as our
current HA job. I don't see how this gets us to using less memory. The
HA job we have now already deploys the minimal amount of services that
is possible given our current architecture. Without the composable
service roles work, we can't deploy less services than we already are.





3/ Drop non-ha job.
I'm not sure why we have it, and the benefit of testing that compared
to HA.


In my opinion, I actually think that we could drop the ceph and non-ha
job from the check-tripleo queue.

non-ha doesn't test anything realistic, and it doesn't really provide
any faster feedback on patches. It seems at most it might run 15-20
minutes faster than the HA job on average. Sometimes it even runs
slower than the HA job.


The non-HA job is the only job with introspection. So you'll have to 
enable introspection on the HA job, bumping its run time.




The ceph job we could move to the experimental queue to run on demand
on patches that might affect ceph, and it could also be a daily
periodic job.

The same could be done for the containers job, an IPv6 job, and an
upgrades job. Ideally with a way to run an individual job as needed.
Would we need different experimental queues to do that?

That would leave only the HA job in the check queue, which we should
run with SSL and network isolation. We could deploy less testenv's
since we'd have less jobs running, but give the ones we do deploy more
RAM. I think this would really alleviate a lot of the transient
intermittent failures we get in CI currently. It would also likely run
faster.

It's probably worth seeking out some exact evidence from the RDO
centos-ci, because I think they are testing with virtual environments
that have a lot more RAM than tripleo-ci does. It'd be good to
understand if they have some of the transient failures that tripleo-ci
does as well.

We really are deploying on the absolute minimum cpu/ram requirements
that is even possible. I think it's unrealistic to expect a lot of
stability in that scenario. And I think that's a big reason why we get
so many transient failures.

In summary: give the testenv's more ram, have one job in the
check-tripleo queue, as many jobs as needed in the experimental queue,
and as many periodic jobs as necessary.





Any comment / feedback is welcome,
--
Emilien Macchi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openst

Re: [openstack-dev] [release][all][ptl] preparing to create stable/mitaka branches for libraries

2016-03-09 Thread Dmitry Tantsur
2016-03-09 18:26 GMT+01:00 Doug Hellmann :

> It's time to start opening the stable branches for libraries. I've
> prepared a list of repositories and the proposed versions from which
> we will create stable/mitaka branches, and need each team to sign off on
> the versions. If you know you intend to release a bug fix version in
> the next couple of days, we can wait to avoid having to backport
> patches, but otherwise we should go ahead and create the branches.
>
> I will process each repository as I hear from the owning team.
>
> openstack/ceilometermiddleware 0.4.0
> openstack/django_openstack_auth 2.2.0
> openstack/glance_store 0.13.0
> openstack/ironic-lib 1.1.0
> openstack/keystoneauth 2.3.0
> openstack/keystonemiddleware 4.3.0
> openstack/os-brick 1.1.0
> openstack/os-client-config 1.16.0
> openstack/pycadf 2.1.0
> openstack/python-barbicanclient 4.0.0
> openstack/python-brick-cinderclient-ext 0.1.0
> openstack/python-ceilometerclient 2.3.0
> openstack/python-cinderclient 1.6.0
> openstack/cliff 2.0.0
> openstack/python-designateclient 2.0.0
> openstack/python-glanceclient 2.0.0
> openstack/python-heatclient 1.0.0
> openstack/python-ironic-inspector-client 1.5.0
>

This one is fine.

Thanks,
Dmitry.


> openstack/python-ironicclient 1.2.0
> openstack/python-keystoneclient 2.3.1
> openstack/python-manilaclient 1.8.0
> openstack/python-neutronclient 4.1.1
> openstack/python-novaclient 3.3.0
> openstack/python-saharaclient 0.13.0
> openstack/python-swiftclient 3.0.0
> openstack/python-troveclient 2.1.0
> openstack/python-zaqarclient 1.0.0
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Logo for TripleO

2016-03-11 Thread Dmitry Tantsur

On 03/11/2016 06:32 AM, Jason Rist wrote:

Hey everyone -
We've been working on a UI for TripleO for a few months now and we're
just about to beg to be a part of upstream... and we're in need of a
logo for the login page and header.

In my evenings, I've come up with a logo.

It's a take on the work that Dan has already done on the Owl idea:
http://wixagrid.com/tripleo/tripleo_svg.html


This is looking fantastic!! Big +1 to using it everywhere.

We also need to put this Owl and ironic's Bear together somewhere :)



I think it'd be cool if it were used on the CI page and maybe even
tripleo.org - I ran it by the guys on #tripleo and they seem to be
pretty warm on the idea, so I thought I'd run it by here if you missed
the conversation.

It's SVG so we can change the colors pretty easily as I have in the two
attached screenshots.  It also doesn't need to be loaded as a separate
asset.  Additionally, it scales well since it's basically vector instead
of rasterized.

What do you guys think?

Can we use it?

I can do a patch for tripleo.org and the ci and wherever else it's in use.

-J



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-16 Thread Dmitry Tantsur

On 03/15/2016 01:53 PM, Serge Kovaleff wrote:

Dear All,

Let's compare functional abilities of both solutions.

Till the recent Mitaka release Ironic-inspector had only Introspection
ability.

Discovery part is proposed and implemented by Anton Arefiev. We should
align expectations and current and future functionality.

Adding Tags to attract the Inspector community.


Hi!

It would be great to see what we can do to fit the nailgun use case. 
Unfortunately, I don't know much about it right now. What are you missing?




Cheers,
Serge Kovaleff
http://www.mirantis.com 
cell: +38 (063) 83-155-70

On Tue, Mar 15, 2016 at 2:07 PM, Alexander Saprykin
mailto:asapry...@mirantis.com>> wrote:

Dear all,

Thank you for the opinions about this problem.

I would agree with Roman, that it is always better to reuse
solutions than re-inventing the wheel. We should investigate
possibility of using ironic-inspector and integrating it into fuel.

Best regards,
Alexander Saprykin

2016-03-15 13:03 GMT+01:00 Sergii Golovatiuk
mailto:sgolovat...@mirantis.com>>:

My strong +1 to drop off nailgun-agent completely in favour of
ironic-inspector. Even taking into consideration that we'll need to
extend ironic-inspector for fuel needs.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Mar 15, 2016 at 11:06 AM, Roman Prykhodchenko
mailto:m...@romcheg.me>> wrote:

My opinion on this is that we have too many re-invented
wheels in Fuel and it’s better to think about replacing them
with something we can re-use than re-inventing them one more
time.

Let’s take a look at Ironic and try to figure out how we can
use its features for the same purpose.


- romcheg
 > On 15 Mar 2016, at 10:38, Neil Jerram <neil.jer...@metaswitch.com> wrote:
 >
 > On 15/03/16 07:11, Vladimir Kozhukalov wrote:
 >> Alexander,
 >>
 >> We have many other places where use Ruby (astute, puppet
custom types,
 >> etc.). I don't think it is a good reason to re-write
something just
 >> because it is written in Ruby. You are right about
tests, about plugins,
 >> but let's look around. Ironic community has already
invented discovery
 >> component (btw written in python) and I can't see any
reason why we
 >> should continue putting efforts in nailgun agent and not
try to switch
 >> to ironic-inspector.
 >
 > +1 in general terms.  It's strange to me that there are
so many
 > OpenStack deployment systems that each do each piece of
the puzzle in
 > their own way (Fuel, Foreman, MAAS/Juju etc.) - and which
also means
 > that I need substantial separate learning in order to use
all these
 > systems.  It would be great to see some consolidation.
 >
 > Regards,
 >   Neil
 >
 >
 >

__
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

 >
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




_

Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-19 Thread Dmitry Tantsur
gent/blob/master/agent
[1] https://docs.chef.io/ohai.html
[2]
https://github.com/openstack/fuel-nailgun-agent/blob/master/agent#L46-L51
[3] https://wiki.openstack.org/wiki/Fuel/Plugins


On Wed, Mar 16, 2016 at 1:39 PM, Dmitry Tantsur mailto:dtant...@redhat.com>> wrote:

On 03/15/2016 01:53 PM, Serge Kovaleff wrote:

Dear All,

Let's compare functional abilities of both solutions.

Till the recent Mitaka release Ironic-inspector had only
Introspection
ability.

Discovery part is proposed and implemented by Anton Arefiev. We
should
align expectations and current and future functionality.

Adding Tags to attract the Inspector community.


Hi!

It would be great to see what we can do to fit the nailgun use case.
Unfortunately, I don't know much about it right now. What are you
missing?


Cheers,
Serge Kovaleff
http://www.mirantis.com <http://www.mirantis.com/>
cell: +38 (063) 83-155-70

On Tue, Mar 15, 2016 at 2:07 PM, Alexander Saprykin
mailto:asapry...@mirantis.com>
<mailto:asapry...@mirantis.com <mailto:asapry...@mirantis.com>>>
wrote:

 Dear all,

 Thank you for the opinions about this problem.

 I would agree with Roman, that it is always better to reuse
 solutions than re-inventing the wheel. We should investigate
 possibility of using ironic-inspector and integrating it
into fuel.

 Best regards,
 Alexander Saprykin

 2016-03-15 13:03 GMT+01:00 Sergii Golovatiuk
 mailto:sgolovat...@mirantis.com>
<mailto:sgolovat...@mirantis.com
<mailto:sgolovat...@mirantis.com>>>:

 My strong +1 to drop off nailgun-agent completely in
favour of
 ironic-inspector. Even taking into consideration we'lll
need to
 extend  ironic-inspector for fuel needs.

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser

 On Tue, Mar 15, 2016 at 11:06 AM, Roman Prykhodchenko
 mailto:m...@romcheg.me>
<mailto:m...@romcheg.me <mailto:m...@romcheg.me>>> wrote:

 My opition on this is that we have too many re-invented
 wheels in Fuel and it’s better think about
replacing them
 with something we can re-use than re-inventing them
one more
 time.

 Let’s take a look at Ironic and try to figure out
how we can
 use its features for the same purpose.


 - romcheg
  > On 15 Mar 2016, at 10:38, Neil Jerram <neil.jer...@metaswitch.com> wrote:

  >
  > On 15/03/16 07:11, Vladimir Kozhukalov wrote:
  >> Alexander,
  >>
  >> We have many other places where use Ruby
(astute, puppet
 custom types,
  >> etc.). I don't think it is a good reason to
re-write
 something just
  >> because it is written in Ruby. You are right about
 tests, about plugins,
  >> but let's look around. Ironic community has already
 invented discovery
  >> component (btw written in python) and I can't
see any
 reason why we
  >> should continue putting efforts in nailgun
agent and not
 try to switch
  >> to ironic-inspector.
  >
  > +1 in general terms.  It's strange to me that
there are
 so many
  > OpenStack deployment systems that each do each
piece of
 the puzzle in
  > their own way (Fuel, Foreman, MAAS/Juju etc.) -
and which
 also means
  > that I need substantial separate learning in
order to use
 all these
  > systems.  It would be great to see some
consolidation.
  >
  > Regards,
  >   Neil
  >
  >
  >


___

[openstack-dev] [ironic] Backward incompatibility when moving from the old Ironic ramdisk to ironic-python-agent

2016-03-19 Thread Dmitry Tantsur

Hi all!

This is a heads up for you that we've found an issue [1] in IPA that 
changes the behavior for those of you with several hard drives. The 
difference is in the way our ramdisks pick the root device for 
deployment, when no root device hints [2] are provided. Namely:
- The old ramdisk picked the first available device from the list of 
device names in the "disk_devices" configuration option [3]. In practice 
it usually meant the first disk was chosen. Note that this approach was 
error-prone, as disk ordering is, generally speaking, not guaranteed by 
Linux.
- IPA ignores the "disk_devices" option completely and picks the 
smallest device larger than 4 GiB.


It is probably too late to change the IPA behavior to be more 
compatible, as a lot of folks are already relying on it. So we decided 
to raise this issue and get feedback on the preferred path forward.
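
To make the difference concrete, here is a minimal sketch (plain Python,
not the actual ironic-python-agent code) of the new selection rule
described above; the device names and sizes below are made up purely for
illustration:

    # Illustration only: the new (IPA) rule described above -- pick the
    # smallest block device larger than 4 GiB, ignoring "disk_devices".
    GIB = 1024 ** 3

    def pick_root_device(block_devices):
        """block_devices: list of (name, size_in_bytes) tuples."""
        candidates = [d for d in block_devices if d[1] > 4 * GIB]
        if not candidates:
            raise RuntimeError('no suitable root device found')
        # The smallest suitable device wins, regardless of name ordering.
        return min(candidates, key=lambda d: d[1])

    # With sda (1 TiB) and sdb (100 GiB), the old ramdisk would typically
    # pick sda (first matching name from disk_devices), IPA picks sdb.
    print(pick_root_device([('sda', 1024 * GIB), ('sdb', 100 * GIB)]))

If you need deployment to land on one particular disk regardless of which
ramdisk is used, setting root device hints [2] avoids relying on either
default.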


[1] https://bugs.launchpad.net/ironic-python-agent/+bug/1554492
[2] 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment
[3] 
https://github.com/openstack/ironic/blob/master/etc/ironic/ironic.conf.sample#L2017


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-25 Thread Dmitry Tantsur
On March 24, 2016 at 8:12 PM, "Jim Rollenhagen" <
j...@jimrollenhagen.com> wrote:
>
> Hey all,
>
> I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She runs
> the Bifrost project, gives super valuable reviews, is beginning to lead
> the boot from volume efforts, and is clearly an expert in this space.
>
> All in favor say +1 :)

+1

>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-05 Thread Dmitry Tantsur

Hi!

I'd like to propose Anton to the ironic-inspector core reviewers team. 
His stats are pretty nice [1], he's making meaningful reviews and he's 
pushing important things (discovery, now tempest).


Members of the current ironic-inspector-core team and everyone interested,
please respond with your +1/-1. A lazy consensus will be applied: if 
nobody objects by the next Tuesday, the change will be in effect.


Thanks

[1] http://stackalytics.com/report/contribution/ironic-inspector/60

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

