Re: [OpenStack-Infra] Ask.o.o down

2017-02-20 Thread Tom Fifield



On 21/02/17 15:11, Tom Fifield wrote:



On 14/02/17 16:19, Joshua Hesketh wrote:



On Tue, Feb 14, 2017 at 7:15 PM, Tom Fifield wrote:

On 14/02/17 16:11, Joshua Hesketh wrote:

Hey Tom,

Where is that script being fired from (a quick grep doesn't find
it), or
is it a tool people are using?

If it's a tool we'd need to make sure whoever is using it gets
a new
version to rule it out.


Indeed.


It's fired from a PHP service on www.openstack.org itself, which writes to
the Member database:


https://github.com/OpenStackweb/openstack-org/blob/master/auc-metrics/code/services/ActiveModeratorService.php







Right. I wonder if somebody could check the logs to see if the process
times out. Sadly looking at that code it looks like any output messages
from the script will be discarded.



... and my patch was deployed, but the site is down today. So, looks
like it wasn't that.


Though, is it staying down for less time? It came back up just now - 
normally it'd be down for another 45mins.


Interesting traffic spikes at:
http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=2549&rra_id=all

seem to correlate with the outage. Perhaps we can set up some tcpdumps?
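A capture along those lines could be driven from cron so it brackets the daily
outage window. This is only a sketch: the interface name, file paths, and exact
window below are assumptions, not taken from the actual server.

```shell
# /etc/cron.d/ask-outage-capture (hypothetical file)
# Start shortly before the 06:30 UTC window; timeout stops the capture
# after 90 minutes (5400 s) so the pcap files stay bounded.
# Note: % must be escaped as \% inside a crontab command field.
25 6 * * * root timeout 5400 tcpdump -i eth0 -s 0 -w /var/log/ask-$(date +\%F).pcap 'tcp port 80 or tcp port 443'
```

Comparing a pcap from an outage day against a quiet day would show whether the
spikes on the cacti graphs are inbound requests or something local to the host.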




The next step is to update the copy of the script it references:


https://github.com/OpenStackweb/openstack-org/blob/master/auc-metrics/lib/uc-recognition/tools/get_active_moderator.py





I am not sure if this is in place using git submodules or manually,
but will figure it out and get that updated.




 - Josh

On Tue, Feb 14, 2017 at 7:07 PM, Tom Fifield wrote:

On 14/02/17 16:06, Joshua Hesketh wrote:

Hey,

I've brought the service back up, but have no new clues
as to why.


Cheers.

Going to try: https://review.openstack.org/#/c/433478/
to see if this script is the culprit.


- Josh

On Tue, Feb 14, 2017 at 6:50 PM, Tom Fifield wrote:

Skipping back through previous days I find some similar gaps starting
anywhere from 06:30 to 07:00 and ending between 07:00 and 08:00, but they
don't seem to occur every day and I'm not having much luck finding a
pattern. It _is_ conspicuously close to when /etc/cron.daily scripts get
fired from the crontab so might coincide with log rotation/service
restarts? The graphs don't show these gaps correlating with any spikes in
CPU, memory or disk activity so it doesn't seem to be resource starvation
(at least not for any common resources we're tracking).

Re: [OpenStack-Infra] Ask.o.o down

2017-02-20 Thread Tom Fifield




On Tue, Feb 14, 2017 at 6:50 PM, Tom Fifield wrote:

Skipping back through previous days I find some similar gaps starting
anywhere from 06:30 to 07:00 and ending between 07:00 and 08:00, but they
don't seem to occur every day and I'm not having much luck finding a
pattern. It _is_ conspicuously close to when /etc/cron.daily scripts get
fired from the crontab so might coincide with log rotation/service
restarts? The graphs don't show these gaps correlating with any spikes in
CPU, memory or disk activity so it doesn't seem to be resource starvation
(at least not for any common resources we're tracking).


Indeed. It's down again today during the same timeslot.

Another idea for the cron-based theory:




https://github.com/openstack/uc-recognition/blob/master/tools/get_active_moderator.py


Re: [openstack-dev] [Zun]Use 'uuid' instead of 'id' as object ident in data model

2017-02-20 Thread Qiming Teng
On Mon, Feb 20, 2017 at 02:14:20PM +0800, Wenzhi Yu wrote:
> Hi team,
> 
> I need your advice on this patch[1], which aims to implement the etcd DB
> data model and API for the 'ResourceClass' object.
> 
> As you may know, in the mysql implementation, mysql generates an 'id'
> field, which is a unique, auto-incrementing integer. The 'id' is also used
> as a 'primary key' or 'foreign key' in mysql[2].

Can someone remind me of the benefits we get from an integer over a UUID
as the primary key? A UUID, as its name implies, is meant to be an
identifier for a resource. Why are we generating integer key values?

- Qiming
 
> However, in the etcd implementation, etcd will NOT generate this 'id'
> itself, so I intend to use the 'uuid' attribute of the object instead of
> 'id', and modify the DB API methods to use 'uuid' as the object ident
> instead of 'id', like[3]. Personally I feel using 'uuid' is more
> reasonable, because 'id' is a storage-specific field in a DB like mysql;
> it seems to have no actual meaning in the data model, right?
> 
> An alternative, which Hongbin suggested, is to generate a unique 'id'
> ourselves, like mysql does, and insert that 'id' into the etcd data model.
> But he said he's OK with the idea of replacing 'id' with 'uuid' if it does
> not break anything.
> 
> What's your opinion on this issue? Thanks in advance!
> 
> [1]https://review.openstack.org/#/c/434909/
> [2]https://github.com/openstack/zun/blob/c0cebba170b8e3ea5e62e335536cf974bbbf08ec/zun/db/sqlalchemy/models.py#L200
> [3]https://github.com/openstack/zun/blob/c0cebba170b8e3ea5e62e335536cf974bbbf08ec/zun/db/etcd/api.py#L209
>  
> 
> 
> Best Regards,
> Wenzhi Yu (yuywz)
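A minimal sketch of the uuid-keyed approach Wenzhi describes. An in-memory
dict stands in for the etcd key/value store here, and all names (key prefix,
function names, resource fields) are illustrative, not taken from Zun's
actual DB API:

```python
import uuid

# In-memory stand-in for the etcd key/value store (illustrative only;
# a real deployment would go through an etcd client library).
kv_store = {}

def create_resource_class(name):
    # etcd has no auto-increment counter, so a client-side uuid becomes
    # the primary identifier and part of the key itself.
    resource_uuid = str(uuid.uuid4())
    kv_store['/resource_classes/' + resource_uuid] = {
        'uuid': resource_uuid,
        'name': name,
    }
    return resource_uuid

def get_resource_class(resource_uuid):
    # Lookups use the uuid directly -- no integer 'id' anywhere.
    return kv_store['/resource_classes/' + resource_uuid]

rc_uuid = create_resource_class('baremetal.gold')
print(get_resource_class(rc_uuid)['name'])
```

Because the uuid is generated client-side, it can serve as both the etcd key
suffix and the object's primary identifier, with no counter to maintain.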


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [watcher] Nominating Hidekazu Nakamura as Watcher Core

2017-02-20 Thread Shedimbi, Prudhvi Rao
+1




On 2/20/17, 4:05 AM, "Antoine Cabot"  wrote:

>+1
>
>On Mon, Feb 20, 2017 at 9:25 AM, Vincent FRANÇOISE
> wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Hi Team,
>>
>> Hidekazu Nakamura has made many contributions[1] to Watcher since the
>> Barcelona summit when we first met. I would like to nominate him to be
>> part of the Watcher Core Team. I really think he will provide many
>> valuable contributions to the project and his inputs (such as [2])
>> will most certainly help us substantially improve Watcher as a whole.
>>
>> It's now time to vote :)
>>
>> [1] http://stackalytics.com/report/contribution/watcher/120
>> [2] https://wiki.openstack.org/wiki/Zone_migration
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG v2.0.22 (GNU/Linux)
>>
>> iQEcBAEBAgAGBQJYqqgNAAoJEEb+XENQ/jVSf+kH/1+BLL907SrocIM87AlOdMAn
>> IuB0Xk+y7fAijrs4X7FqknEb2f8Ns4EN3f97SQGFF6WUqSxTMoyMkCZNBEaTWi0P
>> 0D+g2KLTgOhOG8UGdV26CQD0qj455Q+GsQztatcip3zBRalO3QYcF8WUNkCs3GY3
>> yMDoCnK9L3JE+aihGf93UklAeYij856LlY4zj1Nxnm5MCdUNLnYpz+VpyjOtpE2w
>> QX3AH0jPs6b2coYC7O0CggpbMF0xFJzpaLiiRaabSzvuLT8vh1ICaCyUpH9IgXIv
>> M8i1b7aIvLRyxo1ZylFdBNu3J74Ayv5BZKudEtA5yqGn0bExL+yPiSJgv20WcrY=
>> =4frW
>> -END PGP SIGNATURE-
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Watcher] End-of-Ocata core team updates

2017-02-20 Thread Shedimbi, Prudhvi Rao
Thank You for giving me this opportunity. I will try to fulfill my role in the 
Core team to the best of my ability. :)

Thank You
Prudhvi Rao Shedimbi

From: Чадин Александр
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, February 20, 2017 at 8:29 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Watcher] End-of-Ocata core team updates

Hi Watcher Team!

There are some changes to the Core group in Watcher:

1. Li Canwei (licanwei) and Prudhvi Rao Shedimbi (pshedimb) have
been nominated as Core Developers for Watcher.
They have received enough votes to be included in the Watcher Core group.

2. Jean-Emile DARTOIS has stepped down from Watcher Core since
he has little time to keep up with core reviewer duties.

3. Hidekazu Nakamura is being nominated as Core Developer for Watcher.
Good luck!

I want to congratulate our new Core Developers and to thank Jean-Emile
for his work and project support. He has done a lot of architecture design
reviews and implementations and helped make Watcher what it is.

Thank you, Jean-Emile; good luck, and remember that
the Watcher team is always open for you.

Welcome aboard, Prudhvi Rao Shedimbi and Li Canwei!

Best Regards,
_
Alexander Chadin
OpenStack Developer
Servionica LTD
a.cha...@servionica.ru
+7 (916) 693-58-81

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] Please add new IRC channel and meeting

2017-02-20 Thread Hiroyuki Eguchi
Hi infra team.

I've created a new project named Meteos, which provides machine learning as
a service. It is not an official OpenStack project yet.

Is it possible to register an IRC channel and IRC meetings, even if it is
not an official project?

We already have an IRC channel named #openstack-meteos, and some developers
attend every day.

I would like to register this channel in the OpenStack IRC channels list.

Thanks.


Hiroyuki Eguchi

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [QA][ptg] QA Team social evening

2017-02-20 Thread Andrea Frittoli
Hi folks,

thank you for your votes!
It looks like the best option is tomorrow (Tuesday) night - there's going
to be a reception at the Sheraton between 5 and 7 pm, but we can have
dinner after that.
I'll book a place tomorrow morning; if you have any candidates for a good
restaurant nearby, please let me know.

Thanks!

andrea

On Sun, Feb 19, 2017 at 12:35 PM Andrea Frittoli 
wrote:

> Hello,
>
> I'd like to propose a social evening for the folks in the QA sessions.
> I prepared a doodle [0] so if you're interested please vote :)
>
> I added the link to the main QA etherpad [1] as well.
> If you know a good place to go for food or drinks please add it to the
> etherpad as well.
>
> Andrea
>
> [0] http://doodle.com/poll/yxy8ts8s8te8rwwh
> [1] https://etherpad.openstack.org/p/qa-ptg-pike
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Documentation logo - revised

2017-02-20 Thread Lana Brindley
And here's our logo with our whole name written on it :)

L


 Forwarded Message 
Subject:Documentation logo - revised
Date:   Fri, 17 Feb 2017 13:30:26 -0800
From:   Heidi Joy Tretheway 
To: Alexandra Settle , Lana Brindley 




Here are the updated files for Documentation! Have a great weekend.
https://www.dropbox.com/sh/htu234yuf963i9b/AAAsraXwT3a5O9HNmms4E9yFa?dl=0 


*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The end of OpenStack packages in Debian?

2017-02-20 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2017-02-21 00:50:35 +0100:
> On 02/19/2017 08:43 PM, Clint Byrum wrote:
> > Excerpts from Thomas Goirand's message of 2017-02-19 00:58:01 +0100:
> >> On 02/18/2017 07:59 AM, Clint Byrum wrote:
> >>> Indeed, DPMT uses all the worst choices for maintaining most of the
> >>> python module packages in Debian. However, something will need to be
> >>> done to spread the load of maintaining the essential libraries, and the
> >>> usual answer to that for Python libraries is DPMT.
> >>
> >> I wish the Python team was more like the Perl one, which really is well
> >> functioning, with a strong team spirit, commitment, and a sense of
> >> collective responsibility. It's far from being the case in the DPMT.
> >>
> >> Moving packages to the DPMT will not magically get you new maintainers.
> >> Even within the team, there's unfortunately *a lot* of strong package
> >> ownership.
> >>
> > 
> > Whatever the issues are with that team, there's a _mountain_ of packages
> > to maintain, and only one team whose charter is to maintain python
> > modules. So we're going to have to deal with the shortcomings of that
> > relationship, or find more OpenStack specific maintainers.
> 
> I think there's a misunderstanding here. What I wrote is that the DPMT
> will *not* maintain packages just because they are pushed to the team,
> you will need to find maintainers for them. So that's the last option of
> your last sentence above that would work. The only issue is, nobody
> cared so far...
> 
> > It's also important that the generic libraries
> > we maintain, like stevedore, remain up to date in Debian so they don't
> > fall out of favor with users. Nothing kills a library like old versions
> > breaking apps.
> 
> Stevedore is a very good example. It build-depends on oslotest (to run
> unit tests), which itself needs os-client-config, oslo.config, which
> itself ... no need to continue, once you need oslo.config, you need

It sounds like we've broken our dependency cycle rule for some of
the libraries. I'll take a look at what can be done about that over
Pike.

Doug

> everything else. So to continue to package something like Stevedore, we
> need nearly the full stack. That's equivalent to maintaining all of
> OpenStack (as I wrote: the cherry on top of the cake is the services,
> the bulk work is the Python modules).
> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 
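The build-dependency fan-out Thomas describes can be checked mechanically: a
transitive closure over the requirements graph shows how one library drags in
the whole stack. A toy sketch follows; the graph below is illustrative, not
the real Debian metadata, though oslo.config really does use stevedore.

```python
# Toy dependency graph (illustrative, not real Debian package metadata).
deps = {
    'stevedore': ['oslotest'],            # build-dep for running unit tests
    'oslotest': ['os-client-config', 'oslo.config'],
    'os-client-config': ['oslo.config'],
    'oslo.config': ['stevedore'],         # closes the cycle Thomas describes
}

def transitive_deps(pkg):
    """Walk the graph and return everything pkg pulls in, cycles included."""
    seen, stack = set(), [pkg]
    while stack:
        for dep in deps.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

closure = transitive_deps('stevedore')
print(sorted(closure))
```

Note that stevedore ends up in its own closure: packaging it means packaging
the whole cycle, which is the maintenance burden being discussed.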

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] Skip IRC meeting

2017-02-20 Thread Masahito MUROI
Some of us are at the PTG, so we decided to skip the weekly meeting
this week.


best regards,
Masahito



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Where is project:type documented?

2017-02-20 Thread Ian Cordasco
In openstack/releases in the deliverables directory, e.g.,
https://github.com/openstack/releases/blob/master/deliverables/ocata/glance.yaml

Please excuse my top-posting and brevity as I am sending this from my phone

On Feb 20, 2017 5:01 PM, "Kenny Johnston"  wrote:

> I saw in the cold upgrades tag page[1] that in order to receive the
> designation projects must also be "already tagged as type:service." Where
> do we maintain that distinction? I couldn't find it in projects.yaml[2].
>
> --
> Kenny Johnston | irc:kencjohnston | @kencjohnston
> [1]https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html
> [2]https://github.com/openstack/governance/blob/master/reference/projects.yaml
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle][all] guide for reading source code

2017-02-20 Thread joehuang
Hello,

For those who are interested in digging into the Tricircle source code, it'll
be good to have a step-by-step guide. I prepared a wiki page to navigate the
source code of Tricircle:

https://wiki.openstack.org/wiki/TricircleHowToReadCode

It was linked to the wiki page of Tricircle: 
https://wiki.openstack.org/wiki/Tricircle

Please also feel free to update the wiki to make it more readable and
consistent with the code.

Should we include it in the source code repo, and maintain it whenever we
update the code logic?

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] parsing libraries

2017-02-20 Thread Eric K
Hi all,

In the congress project, we're looking to replace the antlr3-based parser
with a better-maintained alternative. I see that pyparsing and Parsley are
used in some projects; could anyone share their experience with them and
potentially other parsing libraries?

The intended use is to parse input into abstract syntax trees according to
a grammar like this:
https://github.com/openstack/congress/blob/master/congress/datalog/Congress.g

More specific questions:
- how would they do on a grammar that's slightly more complex than the
typical usage? e.g.,
https://github.com/openstack/congress/blob/master/congress/datalog/Congress.g

- does anyone know if Parsley is well-maintained? Their repo seems to be
very quiet over the past 2 yrs:
https://github.com/python-parsley/parsley/graphs/contributors?from=2015-02-01&to=2017-02-20&type=c

- Any thoughts or comments on the other libraries I'm considering?
(these are more geared toward larger grammars)
-- Grako https://pypi.python.org/pypi/grako/3.19.1
-- PLY https://pypi.python.org/pypi/ply/3.10
-- I have "soft-rejected" Antlr4 because the AST feature has been removed.
If I have to put in non-trivial work anyway (to create the AST), I figure
I might as well invest the work into moving to a pure Python framework.
But would love to hear more if someone thinks it's the way to go!
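For a feel of what building an AST in pure Python involves, here is a small
hand-rolled recursive parser for a tiny Datalog-like rule form. This is far
simpler than the real Congress.g grammar, and the token set, AST tuple shape,
and example rule are all made up for illustration:

```python
import re

# Tokens: identifiers, punctuation, and the rule separator ':-'.
TOKEN = re.compile(r"[A-Za-z_][A-Za-z0-9_]*|[(),]|:-")

def tokenize(text):
    return TOKEN.findall(text)

def parse_atom(tokens, pos):
    # atom := IDENT '(' IDENT (',' IDENT)* ')'
    name = tokens[pos]; pos += 1
    assert tokens[pos] == '('; pos += 1
    args = []
    while tokens[pos] != ')':
        if tokens[pos] == ',':
            pos += 1
            continue
        args.append(tokens[pos]); pos += 1
    return ('atom', name, args), pos + 1   # skip the ')'

def parse_rule(text):
    # rule := atom ':-' atom (',' atom)*
    tokens = tokenize(text)
    head, pos = parse_atom(tokens, 0)
    assert tokens[pos] == ':-'; pos += 1
    body = []
    while pos < len(tokens):
        if tokens[pos] == ',':
            pos += 1
            continue
        atom, pos = parse_atom(tokens, pos)
        body.append(atom)
    return ('rule', head, body)

ast = parse_rule('error(x) :- vm(x), unsafe(x)')
print(ast)
```

The appeal of this route (or of a generator like PLY/Grako producing similar
code) is that the AST shape is entirely under the project's control, which is
exactly what was lost when Antlr4 dropped its AST feature.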

Thanks so much!

Eric



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [publiccloud-wg]Atlanta Virtual PTG agenda

2017-02-20 Thread Zhipeng Huang
Hi team,

Please find an initial draft of our virtual ptg on Thursday at
https://etherpad.openstack.org/p/publiccloud-atlanta-ptg , feel free to add
anything that you want to discuss

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] The end of OpenStack packages in Debian?

2017-02-20 Thread Thomas Goirand
On 02/19/2017 08:43 PM, Clint Byrum wrote:
> Excerpts from Thomas Goirand's message of 2017-02-19 00:58:01 +0100:
>> On 02/18/2017 07:59 AM, Clint Byrum wrote:
>>> Indeed, DPMT uses all the worst choices for maintaining most of the
>>> python module packages in Debian. However, something will need to be
>>> done to spread the load of maintaining the essential libraries, and the
>>> usual answer to that for Python libraries is DPMT.
>>
>> I wish the Python team was more like the Perl one, which really is well
>> functioning, with a strong team spirit, commitment, and a sense of
>> collective responsibility. It's far from being the case in the DPMT.
>>
>> Moving packages to the DPMT will not magically get you new maintainers.
>> Even within the team, there's unfortunately *a lot* of strong package
>> ownership.
>>
> 
> Whatever the issues are with that team, there's a _mountain_ of packages
> to maintain, and only one team whose charter is to maintain python
> modules. So we're going to have to deal with the shortcomings of that
> relationship, or find more OpenStack specific maintainers.

I think there's a misunderstanding here. What I wrote is that the DPMT
will *not* maintain packages just because they are pushed to the team,
you will need to find maintainers for them. So that's the last option of
your last sentence above that would work. The only issue is, nobody
cared so far...

> It's also important that the generic libraries
> we maintain, like stevedore, remain up to date in Debian so they don't
> fall out of favor with users. Nothing kills a library like old versions
> breaking apps.

Stevedore is a very good example. It build-depends on oslotest (to run
unit tests), which itself needs os-client-config, oslo.config, which
itself ... no need to continue, once you need oslo.config, you need
everything else. So to continue to package something like Stevedore, we
need nearly the full stack. That's equivalent to maintaining all of
OpenStack (as I wrote: the cherry on top of the cake is the services,
the bulk work is the Python modules).

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [governance] Where is project:type documented?

2017-02-20 Thread Kenny Johnston
I saw in the cold upgrades tag page[1] that in order to receive the
designation projects must also be "already tagged as type:service." Where
do we maintain that distinction? I couldn't find it in projects.yaml[2].

-- 
Kenny Johnston | irc:kencjohnston | @kencjohnston
[1]
https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html
[2]
https://github.com/openstack/governance/blob/master/reference/projects.yaml
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Matt Greene
+2

Thanks for coordinating!

From: Kevin Benton 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, February 17, 2017 at 12:18 PM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] please review the Ocata final release tags

2017-02-20 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2017-02-20 14:52:36 -0500:
> On Sun, Feb 19, 2017 at 8:54 PM, Doug Hellmann 
> wrote:
> 
> > Release liaisons and PTLs,
> >
> > The patch to tag the Ocata final releases for milestone-based
> > projects is up at [1]. Please check the patch to verify that it
> > matches your expectations and then sign off with a +1.
> >
> > Thanks,
> > Doug
> >
> > [1] https://review.openstack.org/#/c/435816/1
> 
> 
> I see this is only for milestone-based projects. Should
> cycle-with-intermediary projects like ironic add a diff-start to the most
> recent release, so the release announcement has the right details?
> 
> // jim

The diff-start value is used to produce the release announcement
email at the point of the release for an individual deliverable,
so it's not necessary to update the value for existing releases.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Ocata retrospective brainstorm

2017-02-20 Thread Eric K
Hi Congress folks,

At the PTG, we'll be starting with an Ocata retrospective to look at what we
may want to do more/less/same of to make our work easier and better going
forward.

Feel free to get a head-start by thinking about what you'd like to see us do
more/less/same of and putting it down in this ethercalc. All aspects
welcome =)

https://ethercalc.openstack.org/0mhh0iv0oz4b

See you all soon!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] VTG schedule and details

2017-02-20 Thread Antoni Segura Puimedon
Hi Kuryrs!

The VTG sessions[0] will be held in bluejeans:

https://bluejeans.com/5508709975

You can also join by phone finding a local number in [1] and entering the
meeting id "5508709975" followed by a '#'.

The sessions will be


┌───────────────┬─────────────────────────┬──────────────────────────────────┐
│               │ 12:30-13:30 UTC         │ 13:45-14:45 UTC                  │
├───────────────┼─────────────────────────┼──────────────────────────────────┤
│ Tue Feb 28th  │ Kuryr-K8s HA            │ Kuryr-K8s tenancy and net policy │
│ Wed March 1st │ Kuryr-K8s resource Mgmt │ Fuxi: K8s and Docker             │
│ Thu March 2nd │ Kuryr-K8s multi device  │ Kuryr-K8s client and testing     │
└───────────────┴─────────────────────────┴──────────────────────────────────┘




[0] https://etherpad.openstack.org/p/kuryr_virtual_gathering_2017h1
[1] https://www.intercallonline.com/listNumbersByCode.action?confCode=5508709975
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-20 Thread Thierry Carrez
Steven Hardy wrote:
> I agree - those nominated by Zane are all highly experienced reviewers and
> as ex-PTLs are well aware of the constraints around stable backports and
> stable release management.
> 
> I do agree the requirements around reviews for stable branches are very
> different, but I think we need to assume good faith here and accept we have
> a bottleneck which can be best fixed by adding some folks we *know* are
> capable of exercising sound judgement to the stable-maint team for heat.
> 
> I respect the arguments made by the stable-maint core folks, and I think we
> all understand the reason for these concerns, but ultimately unless folks
> outside the heat core team are offering to help with reviews directly, I
> think it's a little unreasonable to block the addition of these reviewers,
given they've been proposed by the current stable liaison, who I think is in
> the best position to judge the suitability of candidates.

That sounds reasonable to me. I think you should be able to get it
settled with tonyb over beer sometimes this week :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] no IRC meeting this week

2017-02-20 Thread Eric K
Congress IRC meeting cancelled this week for the PTG. Meeting will resume
next week (Mar 2 UTC). Thanks!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] PTG team dinner?

2017-02-20 Thread Clark Boylan
On Wed, Feb 15, 2017, at 02:58 PM, Clark Boylan wrote:
> On Tue, Feb 14, 2017, at 03:01 PM, Clark Boylan wrote:
> > With the help of the small restaurant guide on the wiki I have
> > discovered Poor Calvin's. This place looks neat, supposedly fusion of
> > Thai and Southern food and is well reviewed. It is 0.8 miles or ~15
> > minute walk from the Sheraton according to Google so should be walkable
> > for most. Rather than draw out restaurant selection I was thinking I
> > would go ahead and try to make a reservation for however many people are
> > signed up on the etherpad for as close to 7pm as I can manage Monday
> > night. I will wait until tomorrow before doing this in order to provide
> > veto time if this location does not work for someone, in which case
> > please do help suggest an alternative.
> > 
> > If I can't get a reservation for us at Poor Calvin's I was looking at
> > Sway as a backup since it also has southern food and seems to be well
> > reviewed despite being a hotel restaurant. Again please veto if necessary
> > and suggest alternatives.
> > 
> > Lastly I did update the etherpad with links to these restaurants (and a
> > few others) if you need more info.
> 
> Alright, with Monty's help we now have a reservation at Poor Calvin's
> for Monday the 20th at 7pm. The reservation is for 14 and under my name,
> Clark Boylan. See you there! (and at the PTG).

For everyone that signed up please meet up in the Sheraton Lobby at
6:30pm and we will coordinate travel from there. They won't seat us
until we all arrive so we'll need to show up on time.

Thank you,
Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[openstack-dev] [zaqar] No meeting this week

2017-02-20 Thread Fei Long Wang
Hi all,

There will be no meetings this week due to the PTG
(https://www.openstack.org/ptg/)

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [architecture][nova][neutron][cinder][ceilometer][ironic] PTG stuff -- Arch-WG nova-compute-api fact-gathering session Tuesday 10:30 Macon

2017-02-20 Thread Clint Byrum
Excerpts from Dmitry Tantsur's message of 2017-02-18 18:54:59 +0100:
> 2017-02-17 19:16 GMT+01:00 Clint Byrum :
> 
> > Hello, I'm looking forward to seeing many of you next week in Atlanta.
> > We're going to be working on Arch-WG topics all day Tuesday, and if
> > you'd like to join us for that in general, please add your topic here:
> >
> > https://etherpad.openstack.org/p/ptg-architecture-workgroup
> >
> > I specifically want to call out an important discussion session for one
> > of our active work streams, nova-compute-api:
> >
> > https://review.openstack.org/411527
> > https://review.openstack.org/43
> >
> > At this point, we've gotten a ton of information from various
> > contributors, and I want to thank everyone who commented on 411527 with
> > helpful data. I'll be compiling the data we have into some bullet points
> > which I intend to share on the projector in an etherpad[1], and then invite
> > the room to ensure the accuracy and completeness of what we have there.
> > I grabbed two 30-minute slots in Macon for Tuesday to do this, and I'd
> > like to invite anyone who has thoughts on how nova-compute interacts to
> > join us and participate. If you will not be able to attend, please read
> > the documents and comments in the reviews above and fill in any information
> > you think is missing on the etherpad[1] so we can address it there.
> >
> > [1] https://etherpad.openstack.org/p/arch-wg-nova-compute-api-ptg-pike
> 
> 
> https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms says you're in
> South Capital, not Macon. Who is right? :)
> 

Oops, you're right. Yes, please come to South Capital, not Macon. :)

> I can attend, but I don't really understand what this topic is about. Mind
> providing a TL;DR? Why exactly should we change something around
> nova-compute <-> ironic?
> 

It's OK if you don't have time to read the referenced documents; I'll
try to summarize:

nova-compute is communicated with through a variety of poorly documented
or poorly encapsulated APIs. os-brick and os-vif do odd things with lock
files (I think!), and ceilometer inspects files on disk and system-level
monitors.

The idea is to define the entry points into nova-compute and more
clearly draw a line where nova-compute's responsibility begins and other
services' authority ends. This is already well defined between
nova-compute and nova-conductor, but not so much with others.

My reason for wanting this is simplification of the interactions with the
compute node so OpenStack can be enhanced and debugged without reading
the code of nova-compute.

For Ironic, I believe there may be ways in which nova-compute imposes
a bit of an odd structure on the baremetal service, and I'm interested
in hearing Ironic developers' opinions on that. I may be totally off
base.

> >
> >
> > Once we have this data, I'll likely spend a small amount of time grabbing
> > people from
> > each relevant project team on Wednesday/Thursday to get a deeper
> > understanding of some
> > of the pieces that we talk about on Tuesday.
> >
> > From that, as a group we'll produce a detailed analysis of all the ways
> > nova-compute is interacted with today, and ongoing efforts to change
> > them. If you are interested in this please do raise your hand and come
> > to our meetings[2] as my time to work on this is limited, and the idea
> > for the Arch-WG isn't "Arch-WG solves OpenStack" but "Arch-WG provides
> > a structure by which teams can raise understanding of architecture."
> >
> > [2] https://wiki.openstack.org/wiki/Meetings/Arch-WG
> >
> > Once we've produced that analysis, which we intend to land as a document
> > in our arch-wg repository, we'll produce a set of specs in the appropriate
> > places (likely openstack-specs) for how to get it to where we want to
> > go.
> >
> > Also, speaking of the meeting -- Since we'll all be meeting on Tuesday
> > at the PTG, the meeting for next week is cancelled.
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] User survey feedback

2017-02-20 Thread Rodrigo Duarte
That's nice feedback! Now we have a great way of knowing where to push
harder.

On Mon, Feb 20, 2017 at 5:34 PM, Lance Bragstad  wrote:

> As you may have noticed from other threads, we have some early feedback
> available from the User Survey. It hasn't closed yet - and I'm sure we'll
> get updated results once that happens, but the early feedback will be nice
> to have going into project discussions at the PTG.
>
> The question and responses are as follows:
>
> *In your opinion, where should the Keystone development team focus their
> effort(s)?*
>
> *Possible responses:*
>
> Enhancing policy
> Per domain configuration
> Federated identity enhancements
> Scaling out to multiple regions
> Performance improvements
> Other (with the option to give specific feedback)
>
> The following is a breakdown of the responses:
>
> Federated identity enhancements: *62* responses
> Scaling out to multiple regions: *62* responses
> Performance improvements: *51* responses
> Enhancing policy: *46* responses
> Per domain configuration: *41* responses
> Other: *5 *responses
>
> The following are the 5 Other responses directly from users taking the
> survey:
>
> "1: Better delegation of project admin rights (project admin should be
> able to easily add sub projects and users). Should work with federation. 2:
> AWS IAM role like functionality. Delegation of rights to instances. "
>
> "delegation is still very corse. Needs a way to do fine grained delegation
> and resource level delegation."
>
> "Easier role customization."
>
> "Role based access control- like we have in the corporates. restriction
> based on per API / Functionality bassis. And a user should be able to
> create sub-users for his / her account with RBAC."
>
> "User based policy"
>
>
> I'll post updates here if I get any more information/data regarding the
> feedback.
>
> Thanks,
>
> Lance
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] artifacts code removed from the glance codebase in Pike

2017-02-20 Thread Nikhil Komawar
Great move. Let's get this merged asap.

Thanks!

best,
-Nikhil

On Thu, Feb 16, 2017 at 11:15 PM, Brian Rosmaita  wrote:

> If you've never deployed, packaged, or used the Artifacts API supplied
> by Glance or Glare, you can safely disregard this message.
>
> There's a patch up [0] to remove the legacy EXPERIMENTAL Artifacts API
> code from the Glance code repository.  (This is an entirely separate
> issue from the question of whether the Glare *project* should be
> independent or part of Glance, which we'll be discussing at the PTG next
> week [1].)  I would like the patch to merge as soon as possible, but I
> also wanted to give people a heads-up in case anyone would be impacted
> by this change.
>
> The situation is explained in detail in the releasenotes [2] included on
> the patch.  Please let me know immediately if there is a reason why we
> should hold off merging this patch.  Otherwise, I'd like to merge it
> before the post-PTG burst of code changes begins.
>
> thanks,
> brian
>
> [0] https://review.openstack.org/427535
> [1] "Macon" room, 9:30-10:30 on Thursday
> [2]
> https://review.openstack.org/#/c/427535/14/releasenotes/
> notes/glare-ectomy-72a1f80f306f2e3b.yaml
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PWG] mid-cycle venue option

2017-02-20 Thread Arkady.Kanevsky
Team,
I have updated the venue info at https://etherpad.openstack.org/p/MIL-pwg-meetup.
That includes nearby hotel info.
Rates from Hotels.com, booking.com, expedia.com and so on give me a better
rate than the corporate one.


Need to cover some logistic issues.

1.   Do we need breakfast? Or does everybody have it at their hotel so we can skip it?

2.   Shamail, who is organizing the group dinner? I assume it is Monday night.

3.   Do we want a catered lunch, or will we take a break to go out for it?

Thanks,
Arkady
From: Kanevsky, Arkady
Sent: Tuesday, February 07, 2017 10:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [PWG] mid-cycle venue option

Team,
I finally checked on my side for the venue.

The address of my available venue is
Company: Dell
Street: Viale Piero e Alberto Pirelli 6
City: Milano

That is about a 15 min drive or 20 min on public transport from the coworking place.

I reserved 2 conf rooms for mon-tue.
While on Tue you'll benefit from a proper room for a roundtable, on Mon the only
room available that could accommodate 15 people is a room with a table for 10
people and chairs all around.

Let me know if we want to follow on it.


Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] user survey question results

2017-02-20 Thread Nikhil Komawar
This info is excellent. Exactly the kind of context we need to get the
discussion on import refactor rolling.

best,
-Nikhil

On Mon, Feb 20, 2017 at 3:56 PM, Brian Rosmaita 
wrote:

> The responses to the Glance user survey question are in the following
> etherpad to make it easy to record reactions and/or suggestions for
> follow-up:
>
> https://etherpad.openstack.org/p/glance-user-survey-q-feb-2017
>
> This was the question we asked:
>
> "As you're aware, the Images API v1 supplied by OpenStack Glance is
> currently deprecated. If you haven't yet moved to the Images API v2,
> what is preventing you? Please be specific."
>
> cheers,
> brian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova] Metadata service over virtio-vsock

2017-02-20 Thread Clint Byrum
Excerpts from Jeremy Stanley's message of 2017-02-20 20:08:00 +:
> On 2017-02-20 14:36:15 -0500 (-0500), Clint Byrum wrote:
> > What exactly is the security concern of the metadata service? Perhaps
> > those concerns can be addressed directly?
> [...]
> 
> A few I'm aware of:
> 

Thanks!

> 1. It's something that runs in the control plane but needs to be
> reachable from untrusted server instances (which may themselves even
> want to be on completely non-routed networks).
> 

As is DHCP.

> 2. If you put a Web proxy between your server instances and the
> metadata service and also make it reachable without going through
> that proxy then instances may be able to spoof one another
> (OSSN-0074).
> 

That's assuming the link-local approach used by the EC2 style service.

If you have DHCP hand out a metadata URL with a nonce in it, that's no
longer an issue.
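That scheme could be as simple as deriving a per-instance nonce from a host-side secret and checking it on every request. A minimal sketch follows; this is a hypothetical design for illustration, not existing Nova code, and all names are made up:

```python
import hashlib
import hmac
import secrets


def make_metadata_url(base_url: str, instance_id: str, secret_key: bytes) -> str:
    """Derive a per-instance nonce and embed it in the URL handed out via DHCP."""
    nonce = hmac.new(secret_key, instance_id.encode(), hashlib.sha256).hexdigest()
    return f"{base_url}/{instance_id}/{nonce}/meta_data.json"


def validate_request(path: str, secret_key: bytes) -> bool:
    """Accept a request only if its nonce matches the one derived for that instance."""
    parts = path.strip("/").split("/")
    if len(parts) != 3:
        return False
    instance_id, nonce, _resource = parts
    expected = hmac.new(secret_key, instance_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(nonce, expected)


# A neighbouring instance that guesses the path without the nonce is rejected.
key = secrets.token_bytes(32)
url = make_metadata_url("http://metadata.example", "inst-1234", key)
path = url.split("http://metadata.example", 1)[1]
print(validate_request(path, key))                                  # True
print(validate_request("/inst-1234/deadbeef/meta_data.json", key))  # False
```

Since the nonce is bound to the instance ID, spoofing another instance's identity (the OSSN-0074 concern) requires knowing that instance's secret-derived URL, not just its address.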

> 3. Lots of things, for example facter, like to beat on it heavily
> which makes for a fun DDoS and so is a bit of a scaling challenge in
> large deployments.
> 

These are fully mitigated by caching.
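As a rough illustration of that mitigation (the class, TTL, and lookup function are illustrative, not Nova code), a per-instance TTL cache in front of the backend absorbs aggressive pollers:

```python
import time


class TTLCache:
    """Per-instance response cache so aggressive pollers mostly hit memory."""

    def __init__(self, ttl_seconds: float = 15.0):
        self.ttl = ttl_seconds
        self._store = {}  # instance_id -> (expires_at, payload)

    def get_or_build(self, instance_id, build):
        now = time.monotonic()
        hit = self._store.get(instance_id)
        if hit is not None and hit[0] > now:
            return hit[1]  # fresh entry: no backend call
        payload = build(instance_id)
        self._store[instance_id] = (now + self.ttl, payload)
        return payload


calls = []


def expensive_lookup(instance_id):
    calls.append(instance_id)  # stands in for hitting the API/database
    return {"instance-id": instance_id}


cache = TTLCache(ttl_seconds=60)
cache.get_or_build("i-1", expensive_lookup)
cache.get_or_build("i-1", expensive_lookup)  # served from cache
print(len(calls))  # 1
```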

> There are probably plenty more I don't know since I'm not steeped in
> operating OpenStack deployments.

Thanks. I don't mean to combat the suggestions, but rather just see
what it is exactly that makes us dislike the metadata service.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] call today

2017-02-20 Thread Arkady.Kanevsky
Do we have a PTG call this week?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] user survey question results

2017-02-20 Thread Brian Rosmaita
The responses to the Glance user survey question are in the following
etherpad to make it easy to record reactions and/or suggestions for
follow-up:

https://etherpad.openstack.org/p/glance-user-survey-q-feb-2017

This was the question we asked:

"As you're aware, the Images API v1 supplied by OpenStack Glance is
currently deprecated. If you haven't yet moved to the Images API v2,
what is preventing you? Please be specific."

cheers,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] User survey feedback

2017-02-20 Thread Lance Bragstad
As you may have noticed from other threads, we have some early feedback
available from the User Survey. It hasn't closed yet - and I'm sure we'll
get updated results once that happens, but the early feedback will be nice
to have going into project discussions at the PTG.

The question and responses are as follows:

*In your opinion, where should the Keystone development team focus their
effort(s)?*

*Possible responses:*

Enhancing policy
Per domain configuration
Federated identity enhancements
Scaling out to multiple regions
Performance improvements
Other (with the option to give specific feedback)

The following is a breakdown of the responses:

Federated identity enhancements: *62* responses
Scaling out to multiple regions: *62* responses
Performance improvements: *51* responses
Enhancing policy: *46* responses
Per domain configuration: *41* responses
Other: *5 *responses

The following are the 5 Other responses directly from users taking the
survey:

"1: Better delegation of project admin rights (project admin should be able
to easily add sub projects and users). Should work with federation. 2: AWS
IAM role like functionality. Delegation of rights to instances. "

"delegation is still very corse. Needs a way to do fine grained delegation
and resource level delegation."

"Easier role customization."

"Role based access control- like we have in the corporates. restriction
based on per API / Functionality bassis. And a user should be able to
create sub-users for his / her account with RBAC."

"User based policy"


I'll post updates here if I get any more information/data regarding the
feedback.

Thanks,

Lance
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Terry Wilson
+1

On Mon, Feb 20, 2017 at 12:57 PM, Lihi Wish  wrote:
> +1
>
> On Feb 20, 2017 1:13 PM, "Omer Anson"  wrote:
>>
>> +1
>>
>> On 20 February 2017 at 19:34, Bhatia, Manjeet S
>>  wrote:
>>>
>>> +1
>>>
>>>
>>>
>>> From: Kevin Benton [mailto:ke...@benton.pub]
>>> Sent: Friday, February 17, 2017 11:19 AM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on
>>> Thursday
>>>
>>>
>>>
>>> Hi all,
>>>
>>>
>>>
>>> I'm organizing a Neutron social event for Thursday evening in Atlanta
>>> somewhere near the venue for dinner/drinks. If you're interested, please
>>> reply to this email with a "+1" so I can get a general count for a
>>> reservation.
>>>
>>>
>>>
>>> Cheers,
>>>
>>> Kevin Benton
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-docs] HA Guide session

2017-02-20 Thread Alexandra Settle
Hi everyone,

As per the HA discussion in the session today, we’ll be meeting in the room 
‘Macon’ at the Sheraton from 3pm, on Wednesday the 22nd of February.

If you would like to join remotely, please let me know :)

Thanks,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] IPSec VPN Connection to Cisco Router

2017-02-20 Thread Adam Tauno Williams
I need to establish an IPSec VPN connection between an openstack cloud  
router (hosted by Catalyst) and a site with a Cisco 7200 series router.


I have established VPN site-to-site connections using IPSec protected  
GRE tunnels - but the terminology of the openstack setup does not  
correspond nicely to the Cisco terminology.


Does anyone have an example of this configuration?

I have defined a configuration/profile on the openstack side -
  IKE - auth:SHA1 - encr:3DES  - pfs:group 14
  IPSEC - auth:SHA1 - encr:aes-256 - pfs:group 14

I am working on trying to get a corresponding configuration on the Cisco side.
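For what it's worth, a hedged sketch of an IOS-side configuration that should match those parameters is below. The policy numbers, peer address, pre-shared key, and ACL number are placeholders, and the exact syntax should be verified against the 7200's IOS release:

```
! Sketch only: numbers, peer address, PSK and ACL are placeholders.
crypto isakmp policy 10
 encryption 3des              ! IKE: encr 3DES
 hash sha                     ! IKE: auth SHA1
 authentication pre-share
 group 14                     ! IKE: DH group 14
crypto isakmp key EXAMPLE-PSK address 203.0.113.10
!
crypto ipsec transform-set OSTACK-TS esp-aes 256 esp-sha-hmac   ! IPsec: aes-256 / SHA1
!
crypto map OSTACK-MAP 10 ipsec-isakmp
 set peer 203.0.113.10
 set transform-set OSTACK-TS
 set pfs group14              ! IPsec PFS group 14
 match address 101            ! ACL selecting the protected subnets
```

Note that the OpenStack VPNaaS side terminates a plain IPsec tunnel, not an IPsec-protected GRE tunnel, so the crypto-map/transform-set style above is likely the closer match than a tunnel interface configuration.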



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [neutron] - Team photo

2017-02-20 Thread Daniel Alvarez Sanchez
+1

On Mon, Feb 20, 2017 at 7:20 PM, Bhatia, Manjeet S <
manjeet.s.bha...@intel.com> wrote:

> +1
>
>
>
> *From:* Kevin Benton [mailto:ke...@benton.pub]
> *Sent:* Friday, February 17, 2017 3:08 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [neutron] - Team photo
>
>
>
> Hello!
>
>
>
> Is everyone free Thursday at 11:20AM (right before lunch break) for 10
> minutes for a group photo?
>
>
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova] Metadata service over virtio-vsock

2017-02-20 Thread Jeremy Stanley
On 2017-02-20 14:36:15 -0500 (-0500), Clint Byrum wrote:
> What exactly is the security concern of the metadata service? Perhaps
> those concerns can be addressed directly?
[...]

A few I'm aware of:

1. It's something that runs in the control plane but needs to be
reachable from untrusted server instances (which may themselves even
want to be on completely non-routed networks).

2. If you put a Web proxy between your server instances and the
metadata service and also make it reachable without going through
that proxy then instances may be able to spoof one another
(OSSN-0074).

3. Lots of things, for example facter, like to beat on it heavily
which makes for a fun DDoS and so is a bit of a scaling challenge in
large deployments.

There are probably plenty more I don't know since I'm not steeped in
operating OpenStack deployments.
-- 
Jeremy Stanley

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack] [neutron]

2017-02-20 Thread Roua Touihri
Hello everybody,

Does anyone know how we can *directly* link two VMs via veth links with
Neutron?

thanks

-- 

Cordialement,



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [neutron]

2017-02-20 Thread Roua Touihri
Hello everybody,

Does anyone know how we can *directly* link two VMs via veth links with
Neutron?

thanks

-- 

Cordialement,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] please review the Ocata final release tags

2017-02-20 Thread Jim Rollenhagen
On Sun, Feb 19, 2017 at 8:54 PM, Doug Hellmann 
wrote:

> Release liaisons and PTLs,
>
> The patch to tag the Ocata final releases for milestone-based
> projects is up at [1]. Please check the patch to verify that it
> matches your expectations and then sign off with a +1.
>
> Thanks,
> Doug
>
> [1] https://review.openstack.org/#/c/435816/1


I see this is only for milestone-based projects. Should
cycle-with-intermediary projects like ironic add a diff-start to the most
recent release, so the release announcement has the right details?

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Some early feedback from the User Survey

2017-02-20 Thread Matt Riedemann
The results are not all in yet, but this was some early feedback on the 
question asked about Nova in the recent User Survey:


--

Question: How important is it to be able to customize Nova in your 
deployment, e.g. classload your own managers/drivers, use hooks, plug in 
API extensions, etc.


* Not important; I use pretty much stock Nova with maybe some small 
patches or bug fixes that aren't upstream. = 83 (51%)


* Somewhat important; I have some custom scheduler filters and other 
small patches but nothing major. = 65 (40%)


* Very important; my Nova deployment is heavily customized and 
hooks/plugins/custom APIs are a major part of my operation. = 16 (9%)



--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova] Metadata service over virtio-vsock

2017-02-20 Thread Clint Byrum
What exactly is the security concern of the metadata service? Perhaps
those concerns can be addressed directly?

I ask because anything that requires special software on the guest is
a non-starter IMO. virtio is a Linux thing, so what does this do for
users of Windows?  FreeBSD? etc.

Excerpts from Artom Lifshitz's message of 2017-02-20 13:22:36 -0500:
> We've been having a discussion [1] in openstack-dev about how to best
> expose dynamic metadata that changes over a server's lifetime to the
> server. The specific use case is device role tagging with hotplugged
> devices, where a network interface or volume is attached with a role
> tag, and the guest would like to know what that role tag is right
> away.
> 
> The metadata API currently fulfills this function, but my
> understanding is that it's not hugely popular amongst operators and is
> therefore not universally deployed.
> 
> Dan Berrange came up with an idea [2] to add virtio-vsock support to
> Nova. To quote his explanation, " think of this as UNIX domain sockets
> between the host and guest. [...] It'd likely address at least some
> people's security concerns wrt metadata service. It would also fix the
> ability to use the metadata service in IPv6-only environments, as we
> would not be using IP at all."
> 
> So to those operators who are not deploying the metadata service -
> what are your reasons for doing so, and would those concerns be
> addressed by Dan's idea?
> 
> Cheers!
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-February/112490.html
> [2] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-February/112602.html
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-20 Thread Matt Riedemann

On 2/20/2017 10:31 AM, Prashant Shetty wrote:

Thanks, Jay, for the response. Sorry, I missed copying the right error.

Here is the log:
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No
valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping
found for cell0 while trying to record scheduling failure. Setup is
incomplete.

I tried the command you mentioned, but I still see the same error on the conductor.

As part of stack.sh on the controller I see the commands below were executed
related to "cell". Shouldn't devstack take care of this during the initial
bringup, or am I missing some parameter in localrc?

NOTE: I have not explicitly enabled n-cell in localrc

2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
+lib/nova:init_nova:683recreate_database nova
+lib/database:recreate_database:112local db=nova
+lib/database:recreate_database:113recreate_database_mysql nova
+lib/databases/mysql:recreate_database_mysql:56  local db=nova
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware
-h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
+lib/nova:init_nova:684recreate_database nova_cell0
+lib/database:recreate_database:112local db=nova_cell0
+lib/database:recreate_database:113recreate_database_mysql
nova_cell0
+lib/databases/mysql:recreate_database_mysql:56  local db=nova_cell0
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova_cell0;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware
-h127.0.0.1 -e 'CREATE DATABASE nova_cell0 CHARACTER SET utf8;'
+lib/nova:init_nova:689/usr/local/bin/nova-manage
--config-file /etc/nova/nova.conf db sync
WARNING: cell0 mapping not found - not syncing cell0.
2017-02-20 14:11:50.846 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 215 -> 216...
2017-02-20 14:11:54.279 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
2017-02-20 14:11:54.280 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 216 -> 217...
2017-02-20 14:11:54.288 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
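The "WARNING: cell0 mapping not found" line above suggests cell0 was never mapped after the database was created. A hedged sketch of the nova-manage commands that typically fix this follows; the connection and transport URLs are placeholders for this deployment, and the flags should be checked against the installed nova-manage:

```shell
# Sketch only -- URLs/credentials below are placeholders.
nova-manage cell_v2 map_cell0 \
  --database_connection 'mysql+pymysql://root:PASS@127.0.0.1/nova_cell0?charset=utf8'

# Or create cell0, cell1, and the host mappings in one step:
nova-manage cell_v2 simple_cell_setup \
  --transport-url 'rabbit://stackrabbit:PASS@127.0.0.1:5672/'

# Map compute hosts that registered after the cells were created:
nova-manage cell_v2 discover_hosts --verbose
```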



Thanks,
Prashant

On Mon, Feb 20, 2017 at 8:21 PM, Jay Pipes > wrote:

On 02/20/2017 09:33 AM, Prashant Shetty wrote:

Team,

I have multi node devstack setup with single controller and multiple
computes running stable/ocata.

On compute:
ENABLED_SERVICES=n-cpu,neutron,placement-api

Both KVM and ESxi compute came up fine:
vmware@cntr11:~$ nova hypervisor-list

  warnings.warn(msg)

+++---+-+
| ID | Hypervisor hostname| State |
Status  |

+++---+-+
| 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up|
enabled |
| 7  | kvm-1  | up|
enabled |

+++---+-+
vmware@cntr11:~$

All services seems to run fine. When tried to launch instance I see
below errors in nova-conductor logs and instance stuck in
"scheduling"
state forever.
I dont have any config related to n-cell in controller. Could
someone
help me to identify why nova-conductor is complaining about cells.

2017-02-20 14:24:06.128 WARNING oslo_config.cfg
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option
"scheduler_default_filters" from group "DEFAULT" is deprecated. Use
option "enabled_filters" from group "filter_scheduler".
2017-02-20 14:24:06.211 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to
schedule instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most
recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/conductor/manager.py", line 866, in
schedule_and_build_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
request_specs[0].to_legacy_filter_properties_dict())
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File

[openstack-dev] [heat]No meeting this week

2017-02-20 Thread Rico Lin
Dear guys
Since the PTG is this week, we will not have a meeting.
Let me know if you have anything you would like to target, and I will share
it with the team. Or add a new session here [1] if you would like the team
to target it.

[1] https://etherpad.openstack.org/p/pike-heat-ptg-open-sessions-proposal

-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Lihi Wish
+1

On Feb 20, 2017 1:13 PM, "Omer Anson"  wrote:

> +1
>
> On 20 February 2017 at 19:34, Bhatia, Manjeet S <
> manjeet.s.bha...@intel.com> wrote:
>
>> +1
>>
>>
>>
>> *From:* Kevin Benton [mailto:ke...@benton.pub]
>> *Sent:* Friday, February 17, 2017 11:19 AM
>> *To:* openstack-dev@lists.openstack.org
>> *Subject:* [openstack-dev] [neutron] - Neutron team social in Atlanta on
>> Thursday
>>
>>
>>
>> Hi all,
>>
>>
>>
>> I'm organizing a Neutron social event for Thursday evening in Atlanta
>> somewhere near the venue for dinner/drinks. If you're interested, please
>> reply to this email with a "+1" so I can get a general count for a
>> reservation.
>>
>>
>>
>> Cheers,
>>
>> Kevin Benton
>>


[Openstack-operators] [nova] Metadata service over virtio-vsock

2017-02-20 Thread Artom Lifshitz
We've been having a discussion [1] in openstack-dev about how to best
expose dynamic metadata that changes over a server's lifetime to the
server. The specific use case is device role tagging with hotplugged
devices, where a network interface or volume is attached with a role
tag, and the guest would like to know what that role tag is right
away.

The metadata API currently fulfills this function, but my
understanding is that it's not hugely popular amongst operators and is
therefore not universally deployed.

Dan Berrange came up with an idea [2] to add virtio-vsock support to
Nova. To quote his explanation, " think of this as UNIX domain sockets
between the host and guest. [...] It'd likely address at least some
people's security concerns wrt metadata service. It would also fix the
ability to use the metadata service in IPv6-only environments, as we
would not be using IP at all."

So, to those operators who are not deploying the metadata service:
what are your reasons for not deploying it, and would those concerns be
addressed by Dan's idea?

Cheers!

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-February/112490.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-February/112602.html

--
Artom Lifshitz
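To make the idea concrete, here is a minimal guest-side sketch. The host CID constant is the standard vsock address of the hypervisor, but the port, the request format, and the service itself are assumptions — no metadata-over-vsock service exists in Nova today. The device-tag structure parsed below mirrors the "devices" list the existing metadata API exposes for device role tagging.

```python
import json
import socket

HOST_CID = 2          # VMADDR_CID_HOST: the host side of a guest/host vsock pair
METADATA_PORT = 8775  # assumption: reuse the metadata service's usual port


def parse_device_tags(metadata):
    """Map each role tag to the device addresses carrying it.

    Mirrors the 'devices' list Nova's metadata API exposes for device
    role tagging; each device entry may carry a 'tags' list.
    """
    tags = {}
    for dev in metadata.get("devices", []):
        for tag in dev.get("tags", []):
            tags.setdefault(tag, []).append(dev.get("address"))
    return tags


def fetch_metadata():
    """Fetch metadata over vsock instead of IP (Linux guests, Python 3.7+)."""
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as sock:
        sock.connect((HOST_CID, METADATA_PORT))
        sock.sendall(b"GET /openstack/latest/meta_data.json\n")
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return json.loads(b"".join(chunks))


sample = {"devices": [{"type": "nic", "address": "fa:16:3e:00:00:01",
                       "tags": ["database"]}]}
print(parse_device_tags(sample))  # {'database': ['fa:16:3e:00:00:01']}
```

Because no IP stack is involved, this would sidestep both the IPv6-only problem and the usual spoofing concerns around 169.254.169.254.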

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Artom Lifshitz
> But before doing that though, I think it'd be worth understanding whether
> metadata-over-vsock support would be acceptable to people who refuse
> to deploy metadata-over-TCPIP today.

I wrote a thing [1], let's see what happens.

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-February/012724.html



Re: [openstack-dev] [neutron] - Team photo

2017-02-20 Thread Bhatia, Manjeet S
+1

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, February 17, 2017 3:08 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Team photo

Hello!

Is everyone free Thursday at 11:20AM (right before lunch break) for 10 minutes 
for a group photo?

Cheers,
Kevin Benton


Re: [openstack-dev] [kolla] Structuring the documentation on all repositories

2017-02-20 Thread Christian Berendt

> On 20 Feb 2017, at 11:52, Paul Bourke  wrote:
> 
> I'm a little confused about the final outcome; it sounds like most of what
> you've written is already the case.
> 
> Besides a more user friendly deploy guide appearing under 
> https://docs.openstack.org/project-deploy-guide/ocata/, what is changing?

* all deploy instructions will be moved from the doc directory into the 
deploy-guide directory
* all generic information will be removed from the kolla-ansible/kolla-k8s 
repositories and will instead be provided in the kolla repository
* the generic information in the kolla repository will be extended

Christian.

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139




Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Omer Anson
+1

On 20 February 2017 at 19:34, Bhatia, Manjeet S 
wrote:

> +1
>
>
>
> *From:* Kevin Benton [mailto:ke...@benton.pub]
> *Sent:* Friday, February 17, 2017 11:19 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [neutron] - Neutron team social in Atlanta on
> Thursday
>
>
>
> Hi all,
>
>
>
> I'm organizing a Neutron social event for Thursday evening in Atlanta
> somewhere near the venue for dinner/drinks. If you're interested, please
> reply to this email with a "+1" so I can get a general count for a
> reservation.
>
>
>
> Cheers,
>
> Kevin Benton
>


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 12:46:09PM -0500, Artom Lifshitz wrote:
> > But before doing that though, I think it'd be worth understanding whether
> > metadata-over-vsock support would be acceptable to people who refuse
> > to deploy metadata-over-TCPIP today.
> 
> Sure, although I'm still concerned that it'll effectively make tagged
> hotplug libvirt-only.

Well, there's still the option of accessing the metadata server the
traditional way over IP, which is fully portable. If some deployments
choose to opt out of this facility, I don't necessarily think we need
to continue to invent further mechanisms. At some point you have to
say what's there is good enough, and if people choose to trade off
features against some other criteria, so be it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Artom Lifshitz
>> But before doing that though, I think it'd be worth understanding whether
>> metadata-over-vsock support would be acceptable to people who refuse
>> to deploy metadata-over-TCPIP today.
>
> Sure, although I'm still concerned that it'll effectively make tagged
> hotplug libvirt-only.

On reflection, that's not strictly true: there's still the existing
metadata service, which works across all hypervisor drivers. I know
we're far from feature parity across all virt drivers, but would
metadata-over-vsock be acceptable? That's not even a lack of feature
parity; that's a specific feature being exposed in a different (and
arguably worse) way depending on the virt driver.



Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Artom Lifshitz
> But before doing that though, I think it'd be worth understanding whether
> metadata-over-vsock support would be acceptable to people who refuse
> to deploy metadata-over-TCPIP today.

Sure, although I'm still concerned that it'll effectively make tagged
hotplug libvirt-only.



Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Bhatia, Manjeet S
+1

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, February 17, 2017 11:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

Cheers,
Kevin Benton


Re: [openstack-dev] [kolla] Structuring the documentation on all repositories

2017-02-20 Thread Andreas Jaeger

On 02/20/2017 05:34 PM, Christian Berendt  wrote:

This is a summary of structuring the documentation on all repositories as 
discussed at the PTG (https://etherpad.openstack.org/p/kolla-pike-ptg-docs).

The doc directory:

kolla/doc — kolla developer documentation (about our docker images) and generic 
documentation
kolla-ansible/doc — kolla-ansible developer documentation
kolla-k8s/doc — kolla-k8s developer documentation

Contents will be published to https://docs.openstack.org/developer/kolla…/

The doc directory in the kolla repositories will be split into two parts and 
will keep the generic kolla documentation (landing page, mission, philosophy, 
explanation of deliverables, overview of deployment guides (previous QSG), bug 
triage, how to contribute, ...) and the development documentation related to 
kolla images.

The central entry point (landing page) for the kolla project will be 
https://docs.openstack.org/developer/kolla.

The deploy-guide directory:

kolla-ansible/deploy-guide — Kolla deployment guide for Ansible, how to deploy 
with kolla-ansible (previous name: quick start guide)
kolla-k8s/deploy-guide — Kolla deployment guide for K8S, how to deploy with 
kolla-k8s (previous name: quick start guide)

We will add 2 links to https://docs.openstack.org/project-deploy-guide/ocata/:

* Kolla deployment guide for Ansible
* Kolla deployment guide for K8S

Sample split for kolla-ansible prepared at 
https://review.openstack.org/#/c/427965/.

The guides themselves will be published at 
https://docs.openstack.org/project-deploy-guide/ocata/kolla-ansible and 
https://docs.openstack.org/project-deploy-guide/ocata/kolla-k8s.


Note that we publish to /ocata/ from stable/ocata branch - so remember 
to backport any such changes...


The master branch publishes to 
https://docs.openstack.org/project-deploy-guide/draft/kolla-ansible etc


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 05:07:53PM +0000, Tim Bell wrote:
> Is there cloud-init support for this mode or do we still need to mount
> as a config drive?

I don't think it particularly makes sense to expose the config drive
via NVDIMM - it wouldn't solve any of the problems that config drive
has today and it'd be less portable wrt guest OS.

Rather I was suggesting we should consider NVDIMM as a transport for
the role device tagging metadata standalone, as that could provide us
a way to live-update the metadata on the fly, which is impractical /
impossible when the metadata is hidden inside the config drive.

But before doing that though, I think it'd be worth understanding whether
metadata-over-vsock support would be acceptable to people who refuse
to deploy metadata-over-TCPIP today.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [openstack-dev] [networking-sfc] Stable/Ocata Version

2017-02-20 Thread Henry Fourie
Gary,
   The plan is to have a stable/ocata branch by the end of the month.

-Louis

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Sunday, February 19, 2017 4:29 AM
To: OpenStack List
Subject: [openstack-dev] [networking-sfc] Stable/Ocata Version

Hi,
When will this repo have a stable/ocata branch?
Thanks
Gary


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Tim Bell
Is there cloud-init support for this mode or do we still need to mount as a 
config drive?

Tim

On 20.02.17, 17:50, "Jeremy Stanley"  wrote:

On 2017-02-20 15:46:43 +0000 (+0000), Daniel P. Berrange wrote:
> The data is exposed either as a block device or as a character device
> in Linux - which one depends on how the NVDIMM is configured. Once
> opening the right device you can simply mmap() the FD and read the
> data. So exposing it as a file under sysfs doesn't really buy you
> anything better.

Oh! Fair enough, if you can already access it as a character device
then I agree that solves the use cases I was considering.
-- 
Jeremy Stanley



[Openstack-operators] [osops][osops-tools-monitoring] Updates for monitoring plugins

2017-02-20 Thread Major Hayden
Hey there,

During the PTG, one of the discussions in the OpenStack-Ansible room was around 
adding a monitoring component to OSA.  I found the 'osops-tools-monitoring' 
repository today.

The idea we discussed was around writing plugins using the OpenStack SDK and 
then adding a simple library that outputs data based on the monitoring tools in 
use (like Nagios, Telegraf, etc).  That would allow us to break apart the 
gathering of the metric (like checking Keystone's API response time) and the 
output of the metric (in the proper format for the tool).

Would that work make sense within this repo or in a different one?  Thanks!

--
Major Hayden
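The split described above — gather a metric once, format it per tool — could look something like this sketch. The check and the output formats are illustrative assumptions, not an existing osops interface; a real plugin would time an actual OpenStack SDK call.

```python
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    value: float
    unit: str
    ok: bool


# Gathering: one function per check, returning a plain Metric.
# The elapsed time is passed in here; a real plugin would measure an SDK call.
def check_keystone_response_time(elapsed_seconds, warn_at=1.0):
    return Metric("keystone_api_response_time", elapsed_seconds, "s",
                  ok=elapsed_seconds < warn_at)


# Output: one formatter per monitoring tool, consuming the same Metric.
def to_nagios(m):
    status = "OK" if m.ok else "WARNING"
    return f"{status} - {m.name}={m.value}{m.unit}"


def to_telegraf(m):
    # InfluxDB line protocol, as Telegraf's exec input consumes it
    return f"{m.name} value={m.value},ok={int(m.ok)}"


metric = check_keystone_response_time(0.42)
print(to_nagios(metric))    # OK - keystone_api_response_time=0.42s
print(to_telegraf(metric))  # keystone_api_response_time value=0.42,ok=1
```

The point of the separation is that adding a new tool means adding one formatter, not rewriting every check.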



Re: [openstack-dev] [kolla] Structuring the documentation on all repositories

2017-02-20 Thread Paul Bourke

Hi Christian,

Thanks for the summary, useful for those of us not at the PTG.

I'm a little confused about the final outcome; it sounds like most of what 
you've written is already the case.


Besides a more user friendly deploy guide appearing under 
https://docs.openstack.org/project-deploy-guide/ocata/, what is changing?


Thanks,
-Paul

On 20/02/17 16:34, Christian Berendt wrote:

This is a summary of structuring the documentation on all repositories as 
discussed at the PTG (https://etherpad.openstack.org/p/kolla-pike-ptg-docs).

The doc directory:

kolla/doc — kolla developer documentation (about our docker images) and generic 
documentation
kolla-ansible/doc — kolla-ansible developer documentation
kolla-k8s/doc — kolla-k8s developer documentation

Contents will be published to https://docs.openstack.org/developer/kolla…/

The doc directory in the kolla repositories will be split into two parts and 
will keep the generic kolla documentation (landing page, mission, philosophy, 
explanation of deliverables, overview of deployment guides (previous QSG), bug 
triage, how to contribute, ...) and the development documentation related to 
kolla images.

The central entry point (landing page) for the kolla project will be 
https://docs.openstack.org/developer/kolla.

The deploy-guide directory:

kolla-ansible/deploy-guide — Kolla deployment guide for Ansible, how to deploy 
with kolla-ansible (previous name: quick start guide)
kolla-k8s/deploy-guide — Kolla deployment guide for K8S, how to deploy with 
kolla-k8s (previous name: quick start guide)

We will add 2 links to https://docs.openstack.org/project-deploy-guide/ocata/:

* Kolla deployment guide for Ansible
* Kolla deployment guide for K8S

Sample split for kolla-ansible prepared at 
https://review.openstack.org/#/c/427965/.

The guides themselves will be published at 
https://docs.openstack.org/project-deploy-guide/ocata/kolla-ansible and 
https://docs.openstack.org/project-deploy-guide/ocata/kolla-k8s.

Christian.





Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Jeremy Stanley
On 2017-02-20 15:46:43 +0000 (+0000), Daniel P. Berrange wrote:
> The data is exposed either as a block device or as a character device
> in Linux - which one depends on how the NVDIMM is configured. Once
> opening the right device you can simply mmap() the FD and read the
> data. So exposing it as a file under sysfs doesn't really buy you
> anything better.

Oh! Fair enough, if you can already access it as a character device
then I agree that solves the use cases I was considering.
-- 
Jeremy Stanley
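For reference, the mmap() pattern Daniel describes is only a few lines from a guest. A temporary file stands in for the NVDIMM device so the sketch is runnable; on a real guest the path would be whatever character or block device the NVDIMM appears as (e.g. /dev/pmem0 or /dev/dax0.0 — these device names are assumptions).

```python
import mmap
import os
import tempfile


def read_mapped(path, length):
    """Open a device (or file) and mmap() it read-only, then copy out bytes."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # prot= is Unix-only; PROT_READ gives a read-only mapping
        with mmap.mmap(fd, length, prot=mmap.PROT_READ) as mm:
            return bytes(mm[:length])
    finally:
        os.close(fd)


# Stand-in for the NVDIMM device: a file holding a small metadata blob.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'{"tags": ["database"]}')
    path = f.name
print(read_mapped(path, 22))  # b'{"tags": ["database"]}'
os.remove(path)
```

On a real pmem device you would typically map the whole region once and re-read it whenever the host updates the metadata, which is exactly the live-update property being discussed.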



Re: [OpenStack-Infra] OpenDaylight Internship 2017 : Introduction

2017-02-20 Thread Dong Ma
Hello Yolande, I have made some plugin changes related to converting XML [1];
if you are interested, you can work on this part.

[1] https://etherpad.openstack.org/p/JJB_plugins_support

Thanks,
Dong
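For anyone new to the codebase: a JJB plugin module is essentially a translator from a YAML mapping to the XML fragment Jenkins stores in config.xml. A toy illustration of that shape, using only the standard library — the element and option names here are invented, not a real Jenkins plugin schema.

```python
import xml.etree.ElementTree as ET


def builder_to_xml(config):
    """Translate a flat YAML-style mapping into an XML fragment."""
    root = ET.Element("examplePlugin")  # invented plugin element name
    for key, value in config.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")


print(builder_to_xml({"timeout": 30, "failOnError": "true"}))
# <examplePlugin><timeout>30</timeout><failOnError>true</failOnError></examplePlugin>
```

Real JJB plugin support also involves registering the module under an entry point and handling nested options, but the YAML-to-XML mapping is the core of the work.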

2017-02-14 0:58 GMT+08:00 Yolande Amate :

> Hello,
>
> My name is Yolande Amate, and I am a third-year Computer Science student
> at the University of Buea in Cameroon. I have been selected for
> the 2017 OpenDaylight internship [0] to work on the project "Jenkins
> Job Builder - Improve Jenkins Plugins Support". I will be implementing
> support for Jenkins plugins, so I would like to know if there are any
> plugins of interest to the community which are currently
> missing that I could add support for.
>
> I am looking forward to working with the community and to making a
> successful contribution to the project.
>
> Cheers,
> Yolande
>
> [0] https://wiki.opendaylight.org/view/Interns
>
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>

[openstack-dev] [kolla] Structuring the documentation on all repositories

2017-02-20 Thread Christian Berendt
This is a summary of structuring the documentation on all repositories as 
discussed at the PTG (https://etherpad.openstack.org/p/kolla-pike-ptg-docs).

The doc directory:

kolla/doc — kolla developer documentation (about our docker images) and generic 
documentation
kolla-ansible/doc — kolla-ansible developer documentation
kolla-k8s/doc — kolla-k8s developer documentation

Contents will be published to https://docs.openstack.org/developer/kolla…/

The doc directory in the kolla repositories will be split into two parts and 
will keep the generic kolla documentation (landing page, mission, philosophy, 
explanation of deliverables, overview of deployment guides (previous QSG), bug 
triage, how to contribute, ...) and the development documentation related to 
kolla images.

The central entry point (landing page) for the kolla project will be 
https://docs.openstack.org/developer/kolla.

The deploy-guide directory:

kolla-ansible/deploy-guide — Kolla deployment guide for Ansible, how to deploy 
with kolla-ansible (previous name: quick start guide)
kolla-k8s/deploy-guide — Kolla deployment guide for K8S, how to deploy with 
kolla-k8s (previous name: quick start guide)

We will add 2 links to https://docs.openstack.org/project-deploy-guide/ocata/:

* Kolla deployment guide for Ansible
* Kolla deployment guide for K8S

Sample split for kolla-ansible prepared at 
https://review.openstack.org/#/c/427965/.

The guides themselves will be published at 
https://docs.openstack.org/project-deploy-guide/ocata/kolla-ansible and 
https://docs.openstack.org/project-deploy-guide/ocata/kolla-k8s.

Christian.

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139




[Openstack] Snapshot: Cannot determine the parent storage pool

2017-02-20 Thread John Petrini
Hi List,

We're running Mitaka with Ceph. Recently I enabled RBD snapshots by adding
write permissions to the images pool in Ceph. This works perfectly for some
instances but falls back to standard snapshots for others, with the
following error:

Performing standard snapshot because direct snapshot failed: Cannot
determine the parent storage pool for 7a7b5119-85da-429b-89b5-ad345cfb649e;
cannot determine where to store images

Looking at the code here:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py
it appears that it looks for the pool of the base image to determine where
to save the snapshot. I believe the problem I'm encountering is that for
some of our instances the base image no longer exists.

Am I understanding this correctly and is there anyway to explicitly set the
pool to be used for snapshots and bypass this logic?

Thank You,

John Petrini
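The fallback John is asking about would look roughly like this — a hypothetical sketch only. Nova's actual imagebackend raises when the parent pool cannot be determined, and the explicit override shown here does not exist as a config option today.

```python
def choose_snapshot_pool(base_image_pool, override_pool=None):
    """Pick the RBD pool for a direct snapshot.

    Prefer the parent image's pool when it can be determined; fall back
    to a hypothetical operator-configured pool; only fail when neither
    is available (mirroring the error message John quotes).
    """
    if base_image_pool:
        return base_image_pool
    if override_pool:
        return override_pool
    raise ValueError("Cannot determine the parent storage pool")


print(choose_snapshot_pool(None, override_pool="images"))  # images
```

With such an override, instances whose base image has been deleted would still take direct RBD snapshots instead of dropping back to standard ones.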
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [puppet] Reminder next meeting on Feb 28 @ 1500 UTC

2017-02-20 Thread Alex Schultz
Due to the PTG this week, we are skipping the meeting tomorrow. The
next meeting will be on Feb 28th. The agenda [0] is currently empty. If
you have something you wish to talk about, please add it to the
list.

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170228



[openstack-dev] [javascript]Call for contributors for js-openstack-lib project

2017-02-20 Thread Dong Ma
Hello,

We currently have a project, js-openstack-lib, that needs more contributors.
The project was incubated as a single, gate-tested JavaScript client library
for the OpenStack APIs. It aims to provide a consistent and complete set of
interactions with OpenStack's many services, along with documentation,
examples, and tools. The library is compatible with both browser and
server-side JavaScript.

If you're interested in contributing, you can ping me by email or IRC:
larainema, also if you are in PTG you can talk to me in person, and the
following will help you get started:
Bug Tracker: https://storyboard.openstack.org/#!/project/844
Code Hosting: https://git.openstack.org/cgit/openstack/js-openstack-lib

Thanks,
Dong Ma


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 10:46:09AM -0500, Artom Lifshitz wrote:
> I don't think we're trying to re-invent configuration management in
> Nova. We have this problem where we want to communicate to the guest,
> from the host, a bunch of dynamic metadata that can change throughout
> the guest's lifetime. We currently have two possible avenues for this
> already in place, and both have problems:
> 
> 1. The metadata service isn't universally deployed by operators for
> security and other reasons.
> 2. The config drive was never designed for dynamic metadata.
> 
> So far in this thread we've mostly been discussing ways to shoehorn a
> solution into the config drive avenue, but that's going to be ugly no
> matter what because it was never designed for what we're trying to do
> in the first place.
> 
> Some folks are saying that we admit that the config drive is only for
> static information and metadata that is known at boot time, and work
> on a third way to communicate dynamic metadata to the guest. I can get
> behind that 100%. I like the virtio-vsock option, but that's only
> supported by libvirt IIUC. We've got device tagging support in hyper-v
> as well, and xenapi hopefully on the way soon [1], so we need
> something a bit more universal. How about fixing up the metadata
> service to be more deployable, both in terms of security, and IPv6
> support?

FYI, virtio-vsock is not actually libvirt-specific. The VSOCK sockets
transport was in fact invented by VMWare and first merged into Linux
in 2013 as a vmware guest driver.

A mapping of the VSOCK protocol over virtio was later defined to enable
VSOCK to be used with QEMU, KVM and Xen all of which support virtio.
The intention was explicitly that applications consuming VSOCK in the
guest would be portable between KVM & VMWare.

That said, I don't think it is available via XenAPI, and I doubt Hyper-V
will support it any time soon, but it is nonetheless a portable
standard if HVs decide they want such a feature.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



[openstack-dev] [neutron] vpnaas driver maintainers

2017-02-20 Thread Takashi Yamamoto
Hi,

I want to document who maintains which drivers:
https://review.openstack.org/#/c/436081/
Please comment on the review if you know an appropriate contact person
for each driver.

By the way, is the following a driver for neutron-vpnaas? I couldn't find
any documentation.
vmware_nsx/plugins/nsx_v/vshield/edge_ipsecvpn_driver.py



[openstack-dev] [charms] no irc meeting today

2017-02-20 Thread James Page
Hi Team

As most people are at the PTG today, we'll skip todays team IRC meeting.

Cheers

James


Re: [openstack-dev] [nova] Will unshelving an offloaded instance respect the original AZ?

2017-02-20 Thread Sylvain Bauza


Le 20/02/2017 09:41, Jay Pipes a écrit :
> On 02/18/2017 01:46 PM, Matt Riedemann wrote:
>> I haven't fully dug into testing this, but I got wondering about this
>> question from reviewing a change [1] which would make the unshelve
>> operation start to check the volume AZ compared to the instance AZ when
>> the compute manager calls _prep_block_device.
>>
>> That change is attempting to remove the check_attach() method in
>> nova.volume.cinder.API since it's mostly redundant with state checks
>> that Cinder does when reserving the volume. The only other thing that
>> Nova does in there right now is compare the AZs.
>>
>> What I'm wondering is, with that change, will things break because of a
>> scenario like this:
>>
>> 1. Create volume in AZ 1.
>> 2. Create server in AZ 1.
>> 3. Attach volume to server (or boot server from volume in step 2).
>> 4. Shelve (offload) server.
>> 5. Unshelve server - nova-scheduler puts it into AZ 2.
>> 6. _prep_block_device compares instance AZ 2 to volume AZ 1 and unshelve
>> fails with InvalidVolume.
>>
>> If unshelving a server in AZ 1 can't move it outside of AZ 1, then we're
>> fine and the AZ check when unshelving is redundant but harmless.
>>
>> [1]
>> https://review.openstack.org/#/c/335358/38/nova/virt/block_device.py@249
> 
> When an instance is unshelved, the unshelve_instance() RPC API method is
> passed a RequestSpec object as the request_spec parameter:
> 
> https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L600
> 
> 
> This request spec object is passed to schedule_instances():
> 
> https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L660
> 
> 
> (you will note that the code directly above there "resets force_hosts"
> parameters, ostensibly to prevent any forced destination host from being
> passed to the scheduler)
> 
> The question is: does the above request spec contain availability zone
> information for the original instance? If it does, we're good. If it
> doesn't, we can get into the problem described above.
> 
> From what I can tell (and Sylvain might be the best person to answer
> this, thus his cc'ing), the availability zone is *always* stored in the
> request spec for an instance:
> 
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L966
> 
> Which means that upon unshelving after a shelve_offload, we will always
> pass the scheduler the original AZ.
> 
> Sylvain, do you concur?
> 

tl;dr: Exactly this; since Mitaka it's not possible to unshelve into a
different AZ if you have the AZFilter enabled.

Longer version:

Exactly this. If the instance was booted using a specific AZ flag, then:

 #1 the instance.az field is set to something different from the conf opt
default,
and #2 the attached RequestSpec gets its AZ field set.

Both are persisted later in the conductor.


Now, say this instance is shelved and later unshelved; we get the original
RequestSpec back at the API level:
https://github.com/openstack/nova/blob/466769e588dc44d11987430b54ca1bd7188abffb/nova/compute/api.py#L3275-L3276

That's how the conductor method you mentioned above gets the Spec
passed as an argument.

Later, when the call is made to the scheduler, if the AZFilter is
enabled, it verifies the spec_obj's AZ field against the compute's AZ
and refuses to accept the host if the AZs differ.

One side note though: if the instance was not booted with an explicit AZ,
then of course it can be unshelved on a compute not in the same AZ, since
the user didn't explicitly ask to stick with an AZ.
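As a hedged illustration of the behaviour described above (the real check lives in nova's AvailabilityZoneFilter and reads the host's AZ from aggregate metadata; the names below are simplified stand-ins, not nova's actual API), the filter logic boils down to:

```python
# Simplified stand-in for nova's AvailabilityZoneFilter: a host passes
# only if the AZ persisted in the RequestSpec matches the host's AZ.
def host_passes(host_az, requested_az):
    # No explicit AZ in the request means any host is acceptable,
    # which is why an instance booted without an AZ can be unshelved
    # onto a compute in any zone.
    if requested_az is None:
        return True
    return requested_az == host_az

# Instance booted with an explicit AZ: the AZ is persisted in its
# RequestSpec, so unshelve can only land in the same zone.
print(host_passes("az1", "az1"))  # True
print(host_passes("az2", "az1"))  # False
# Instance booted with no AZ:
print(host_passes("az2", None))   # True
```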

HTH,
-Sylvain


> Best,
> -jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 02:24:12PM +, Jeremy Stanley wrote:
> On 2017-02-20 13:38:31 + (+), Daniel P. Berrange wrote:
> [...]
> >Rather than mounting as a filesystem, you can also use NVDIMM directly
> >as a raw memory block, in which case it can contain whatever data format
> >you want - not merely a filesystem. With the right design, you could come
> >up with a format that lets you store the device role metadata in an NVDIMM
> >and be able to update its contents on the fly for the guest during
> >hotplug.
> [...]
> 
> Maybe it's just me, but this begs for a (likely fairly trivial?)
> kernel module exposing that data under /sys or /proc (at least for
> *nix guests).

The data is exposed either as a block device or as a character device
in Linux; which one depends on how the NVDIMM is configured. Once you
open the right device you can simply mmap() the FD and read the
data. So exposing it as a file under sysfs doesn't really buy you
anything better.
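A minimal sketch of the "open the device and mmap() the FD" pattern described above. /dev/dax0.0 is a hypothetical NVDIMM device path; a temporary file stands in for it here so the sketch runs without real hardware.

```python
import mmap
import os
import tempfile

payload = b'role-device-metadata-goes-here'

# Stand-in for an NVDIMM device node such as /dev/dax0.0 (assumption).
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(payload)
tmp.close()

fd = os.open(tmp.name, os.O_RDONLY)
try:
    size = os.fstat(fd).st_size
    # Map the device read-only and copy the data out.
    with mmap.mmap(fd, size, prot=mmap.PROT_READ) as m:
        data = bytes(m[:size])
finally:
    os.close(fd)
    os.unlink(tmp.name)

print(data == payload)  # True
```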

Regards,
Daniel
-- 
|: http://berrange.com          -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org           -o- http://virt-manager.org :|
|: http://entangle-photo.org    -o- http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] PTG schedule

2017-02-20 Thread Lance Bragstad
Also - I just got word that keystone's project room for Wednesday through
Friday will be Georgia 13, located on the first floor. I've updated the
schedule with the location for all sessions we plan to have in that room.

On Mon, Feb 20, 2017 at 8:50 AM, Lance Bragstad  wrote:

> Late last week we had our schedule tentatively set and I haven't received
> any more feedback on the current proposal [0]. I think it would be safe to
> consider it set unless something urgent comes up that we have to work
> around (urgent meaning I completely forgot to schedule a planned session
> with another project).
>
> Don't hesitate to ping me if you have any questions about the schedule and
> safe travels to Atlanta!
>
>
> [0] https://etherpad.openstack.org/p/keystone-pike-ptg
>
> On Thu, Feb 16, 2017 at 1:40 PM, Lance Bragstad 
> wrote:
>
>> Based on early feedback I've broken up our first session, which is
>> dedicated to reviewing things not addressed in Ocata [0], into three
>> different sessions. I'm hoping this will help us commit enough time to
>> finding resolutions for each topic, versus cramming it all into 40 minutes.
>>
>> Keep the feedback coming. Thanks!
>>
>>
>> [0] https://etherpad.openstack.org/p/pike-ptg-keystone-ocata-carry-over
>>
>> On Wed, Feb 15, 2017 at 10:24 PM, Lance Bragstad 
>> wrote:
>>
>>> Hi all,
>>>
>>> I tried to get most of our things shuffled around into somewhat of a
>>> schedule [0]. Everything that was on the list was eventually refactored
>>> into the agenda.
>>>
>>> I've broken the various topics out into their own etherpads and linked
>>> them back to the main schedule. We should have the freedom to move things
>>> around as we see fit. The only exceptions are the sessions that we've
>>> already scheduled with other projects (cross-project policy, cross-project
>>> federation, or our joint session with horizon). Moving them might be tough
>>> since we've worked it around schedules from other projects and we've made
>>> the room reservation [1].
>>>
>>> I assume we'll need to make some adjustments before the end of the week.
>>> If you see anything that conflicts with another session, please let me know.
>>>
>>> Thanks!
>>>
>>> [0] https://etherpad.openstack.org/p/keystone-pike-ptg
>>> [1] https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms
>>>
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Artom Lifshitz
I don't think we're trying to re-invent configuration management in
Nova. We have this problem where we want to communicate to the guest,
from the host, a bunch of dynamic metadata that can change throughout
the guest's lifetime. We currently have two possible avenues for this
already in place, and both have problems:

1. The metadata service isn't universally deployed by operators for
security and other reasons.
2. The config drive was never designed for dynamic metadata.

So far in this thread we've mostly been discussing ways to shoehorn a
solution into the config drive avenue, but that's going to be ugly no
matter what because it was never designed for what we're trying to do
in the first place.

Some folks are saying that we admit that the config drive is only for
static information and metadata that is known at boot time, and work
on a third way to communicate dynamic metadata to the guest. I can get
behind that 100%. I like the virtio-vsock option, but that's only
supported by libvirt IIUC. We've got device tagging support in hyper-v
as well, and xenapi hopefully on the way soon [1], so we need
something a bit more universal. How about fixing up the metadata
service to be more deployable, both in terms of security and IPv6
support?

[1] https://review.openstack.org/#/c/333781/
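For context, a guest-side consumer of device tags today reads the static meta_data.json from a mounted config drive. A hedged sketch follows; the payload shape mirrors the device-role-tagging metadata format, but the helper and sample values are illustrative, not part of nova:

```python
import json

# Illustrative meta_data.json fragment with one tagged NIC.
sample = json.dumps({
    "devices": [
        {"type": "nic", "bus": "pci", "address": "0000:00:02.0",
         "tags": ["management"]},
    ]
})

def tags_by_address(meta_json):
    """Map each device's bus address to its list of tags."""
    meta = json.loads(meta_json)
    return {d["address"]: d.get("tags", []) for d in meta.get("devices", [])}

print(tags_by_address(sample))  # {'0000:00:02.0': ['management']}
```

The staleness problem discussed in this thread is exactly that this file is only rewritten when the config drive is regenerated.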

On Mon, Feb 20, 2017 at 10:35 AM, Clint Byrum  wrote:
> Excerpts from Jay Pipes's message of 2017-02-20 10:00:06 -0500:
>> On 02/17/2017 02:28 PM, Artom Lifshitz wrote:
>> > Early on in the inception of device role tagging, it was decided that
>> > it's acceptable that the device metadata on the config drive lags
>> > behind the metadata API, as long as it eventually catches up, for
>> > example when the instance is rebooted and we get a chance to
>> > regenerate the config drive.
>> >
>> > So far this hasn't really been a problem because devices could only be
>> > tagged at instance boot time, and the tags never changed. So the
>> > config drive was pretty much always up to date.
>> >
>> > In Pike the tagged device attachment series of patches [1] will
>> > hopefully merge, and we'll be in a situation where device tags can
>> > change during instance uptime, which makes it that much more important
>> > to regenerate the config drive whenever we get a chance.
>> >
>> > However, when the config drive is first generated, some of the
>> > information stored in there is only available at instance boot time
>> > and is not persisted anywhere, as far as I can tell. Specifically, the
>> > injected_files and admin_pass parameters [2] are passed from the API
>> > and are not stored anywhere.
>> >
>> > This creates a problem when we want to regenerate the config drive,
>> > because the information that we're supposed to put in it is no longer
>> > available to us.
>> >
>> > We could start persisting this information in instance_extra, for
>> > example, and pulling it up when the config drive is regenerated. We
>> > could even conceivably hack something to read the metadata files from
>> > the "old" config drive before refreshing them with new information.
>> > However, is that really worth it? I feel like saying "the config drive
>> > is static, deal with it - if you want up-to-date metadata, use the
>> > API" is an equally, if not more, valid option.
>>
>> Yeah, config drive should, IMHO, be static, readonly. If you want to
>> change device tags or other configuration data after boot, use a
>> configuration management system or something like etcd watches. I don't
>> think Nova should be responsible for this.
>
> I tend to agree with you, and I personally wouldn't write apps that need
> this. However, in the interest of understanding the desire to change this,
> I think the scenario is this:
>
> 1) Servers are booted with {n_tagged_devices} and come up, actions happen
> using an automated thing that reads device tags and reacts accordingly.
>
> 2) A new device is added to the general configuration.
>
> 3) New servers configure themselves with the new devices automatically. But
> existing servers do not have those device tags in their config drive. In
> order to configure these, one would now have to write a fair amount of
> orchestration to duplicate what already exists for new servers.
>
> While I'm a big fan of the cattle approach (just delete those old
> servers!) I don't think OpenStack is constrained enough to say that
> this is always going to be efficient. And writing two paths for server
> configuration feels like repeating yourself.
>
> I don't have a perfect answer to this, but I don't think "just don't
> do that" is sufficient as a response. We allowed the tags in config
> drive. We have to deal with the unintended consequences of that decision.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2017-02-20 10:00:06 -0500:
> On 02/17/2017 02:28 PM, Artom Lifshitz wrote:
> > Early on in the inception of device role tagging, it was decided that
> > it's acceptable that the device metadata on the config drive lags
> > behind the metadata API, as long as it eventually catches up, for
> > example when the instance is rebooted and we get a chance to
> > regenerate the config drive.
> >
> > So far this hasn't really been a problem because devices could only be
> > tagged at instance boot time, and the tags never changed. So the
> > config drive was pretty much always up to date.
> >
> > In Pike the tagged device attachment series of patches [1] will
> > hopefully merge, and we'll be in a situation where device tags can
> > change during instance uptime, which makes it that much more important
> > to regenerate the config drive whenever we get a chance.
> >
> > However, when the config drive is first generated, some of the
> > information stored in there is only available at instance boot time
> > and is not persisted anywhere, as far as I can tell. Specifically, the
> > injected_files and admin_pass parameters [2] are passed from the API
> > and are not stored anywhere.
> >
> > This creates a problem when we want to regenerate the config drive,
> > because the information that we're supposed to put in it is no longer
> > available to us.
> >
> > We could start persisting this information in instance_extra, for
> > example, and pulling it up when the config drive is regenerated. We
> > could even conceivably hack something to read the metadata files from
> > the "old" config drive before refreshing them with new information.
> > However, is that really worth it? I feel like saying "the config drive
> > is static, deal with it - if you want up-to-date metadata, use the
> > API" is an equally, if not more, valid option.
> 
> Yeah, config drive should, IMHO, be static, readonly. If you want to 
> change device tags or other configuration data after boot, use a 
> configuration management system or something like etcd watches. I don't 
> think Nova should be responsible for this.

I tend to agree with you, and I personally wouldn't write apps that need
this. However, in the interest of understanding the desire to change this,
I think the scenario is this:

1) Servers are booted with {n_tagged_devices} and come up, actions happen
using an automated thing that reads device tags and reacts accordingly.

2) A new device is added to the general configuration.

3) New servers configure themselves with the new devices automatically. But
existing servers do not have those device tags in their config drive. In
order to configure these, one would now have to write a fair amount of
orchestration to duplicate what already exists for new servers.

While I'm a big fan of the cattle approach (just delete those old
servers!) I don't think OpenStack is constrained enough to say that
this is always going to be efficient. And writing two paths for server
configuration feels like repeating yourself.

I don't have a perfect answer to this, but I don't think "just don't
do that" is sufficient as a response. We allowed the tags in config
drive. We have to deal with the unintended consequences of that decision.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-20 Thread Prashant Shetty
Thanks Jay for the response. Sorry, I missed copying the right error.

Here is the log:
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid
host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping
found for cell0 while trying to record scheduling failure. Setup is
incomplete.

I tried the command you mentioned, and I still see the same error on the conductor.

As part of stack.sh on the controller I see the below commands were executed
related to "cell". Shouldn't devstack take care of this part during initial
bring-up, or am I missing a parameter in localrc for it?

NOTE: I have not explicitly enabled n-cell in localrc

2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
+lib/nova:init_nova:683                  recreate_database nova
+lib/database:recreate_database:112      local db=nova
+lib/database:recreate_database:113      recreate_database_mysql nova
+lib/databases/mysql:recreate_database_mysql:56  local db=nova
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
+lib/nova:init_nova:684                  recreate_database nova_cell0
+lib/database:recreate_database:112      local db=nova_cell0
+lib/database:recreate_database:113      recreate_database_mysql nova_cell0
+lib/databases/mysql:recreate_database_mysql:56  local db=nova_cell0
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova_cell0;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova_cell0 CHARACTER SET utf8;'
+lib/nova:init_nova:689                  /usr/local/bin/nova-manage --config-file /etc/nova/nova.conf db sync
WARNING: cell0 mapping not found - not syncing cell0.
2017-02-20 14:11:50.846 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 215 -> 216...
2017-02-20 14:11:54.279 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
2017-02-20 14:11:54.280 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 216 -> 217...
2017-02-20 14:11:54.288 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
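[Editorial note: the "cell0 mapping not found" warning above suggests the cell0 mapping was never created in the API database. A hedged sketch of the usual manual fix with nova-manage follows; the connection and transport URLs are placeholders, not values from this setup:]

```shell
# Placeholder URLs -- substitute the real DB and transport URLs.
nova-manage cell_v2 map_cell0 \
    --database_connection 'mysql+pymysql://root:secret@127.0.0.1/nova_cell0?charset=utf8'

# Create (or verify) the main cell and map existing hosts/instances into it.
nova-manage cell_v2 simple_cell_setup \
    --transport-url 'rabbit://stackrabbit:secret@127.0.0.1:5672/'

nova-manage cell_v2 discover_hosts --verbose
```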



Thanks,
Prashant

On Mon, Feb 20, 2017 at 8:21 PM, Jay Pipes  wrote:

> On 02/20/2017 09:33 AM, Prashant Shetty wrote:
>
>> Team,
>>
>> I have multi node devstack setup with single controller and multiple
>> computes running stable/ocata.
>>
>> On compute:
>> ENABLED_SERVICES=n-cpu,neutron,placement-api
>>
>> Both KVM and ESxi compute came up fine:
>> vmware@cntr11:~$ nova hypervisor-list
>>
>>   warnings.warn(msg)
>> +----+----------------------------------------------------+-------+---------+
>> | ID | Hypervisor hostname                                | State | Status  |
>> +----+----------------------------------------------------+-------+---------+
>> | 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
>> | 7  | kvm-1                                              | up    | enabled |
>> +----+----------------------------------------------------+-------+---------+
>> vmware@cntr11:~$
>>
>> All services seem to run fine. When I tried to launch an instance I saw
>> the below errors in the nova-conductor logs, and the instance is stuck in
>> "scheduling" state forever.
>> I don't have any config related to n-cell on the controller. Could someone
>> help me identify why nova-conductor is complaining about cells.
>>
>> 2017-02-20 14:24:06.128 WARNING oslo_config.cfg
>> [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option
>> "scheduler_default_filters" from group "DEFAULT" is deprecated. Use
>> option "enabled_filters" from group "filter_scheduler".
>> 2017-02-20 14:24:06.211 ERROR nova.conductor.manager
>> [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to
>> schedule instances
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most
>> recent call last):
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
>> "/opt/stack/nova/nova/conductor/manager.py", line 866, in
>> schedule_and_build_instances
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>> request_specs[0].to_legacy_filter_properties_dict())
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
>> "/opt/stack/nova/nova/conductor/manager.py", line 597, in
>> _schedule_instances
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager hosts =
>> self.scheduler_client.select_destinations(context, spec_obj)
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
>> 

Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/tempest failed

2017-02-20 Thread Matthew Treinish
On Sun, Feb 19, 2017 at 07:25:39AM +, A mailing list for the OpenStack 
Stable Branch test reports. wrote:
> Build failed.
> 
> - periodic-tempest-dsvm-full-ubuntu-trusty-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-full-ubuntu-trusty-mitaka/27fe759/
>  : SUCCESS in 32m 57s
> - periodic-tempest-dsvm-neutron-full-ubuntu-trusty-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-neutron-full-ubuntu-trusty-mitaka/87b6097/
>  : SUCCESS in 52m 02s
> - periodic-tempest-dsvm-full-ubuntu-xenial-newton 
> http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-full-ubuntu-xenial-newton/8eb0b4c/
>  : SUCCESS in 42m 26s
> - periodic-tempest-dsvm-neutron-full-ubuntu-xenial-newton 
> http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-neutron-full-ubuntu-xenial-newton/d18419a/
>  : SUCCESS in 1h 19m 48s
> - periodic-tempest-dsvm-full-ubuntu-xenial-ocata 
> http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-full-ubuntu-xenial-ocata/07a2329/
>  : SUCCESS in 1h 01m 27s
> - periodic-tempest-dsvm-neutron-full-ubuntu-xenial-ocata 
> http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-neutron-full-ubuntu-xenial-ocata/f5a5874/
>  : FAILURE in 1h 08m 32s


So I took a quick look at this failure and it was the OOM failure we're
tracking with https://bugs.launchpad.net/tempest/+bug/1664953

http://logs.openstack.org/periodic-stable/periodic-tempest-dsvm-neutron-full-ubuntu-xenial-ocata/f5a5874/logs/syslog.txt.gz#_Feb_19_07_13_47

I guess I don't mind playing manual elastic-recheck today.

-Matt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Jay Pipes

On 02/17/2017 07:05 PM, Clint Byrum wrote:

Excerpts from Michael Still's message of 2017-02-18 09:41:03 +1100:

We have had this discussion several times in the past for other reasons.
The reality is that some people will never deploy the metadata API, so I
feel like we need a better solution than what we have now.

However, I would consider it probably unsafe for the hypervisor to read the
current config drive to get values, and persisting things like the instance
root password in the Nova DB sounds like a bad idea too.


Agreed. What if we simply have a second config drive that is for "things
that change" and only rebuild that one on reboot?


Or not.

Why are we trying to reinvent configuration management systems in Nova?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Jay Pipes

On 02/17/2017 02:28 PM, Artom Lifshitz wrote:

Early on in the inception of device role tagging, it was decided that
it's acceptable that the device metadata on the config drive lags
behind the metadata API, as long as it eventually catches up, for
example when the instance is rebooted and we get a chance to
regenerate the config drive.

So far this hasn't really been a problem because devices could only be
tagged at instance boot time, and the tags never changed. So the
config drive was pretty much always up to date.

In Pike the tagged device attachment series of patches [1] will
hopefully merge, and we'll be in a situation where device tags can
change during instance uptime, which makes it that much more important
to regenerate the config drive whenever we get a chance.

However, when the config drive is first generated, some of the
information stored in there is only available at instance boot time
and is not persisted anywhere, as far as I can tell. Specifically, the
injected_files and admin_pass parameters [2] are passed from the API
and are not stored anywhere.

This creates a problem when we want to regenerate the config drive,
because the information that we're supposed to put in it is no longer
available to us.

We could start persisting this information in instance_extra, for
example, and pulling it up when the config drive is regenerated. We
could even conceivably hack something to read the metadata files from
the "old" config drive before refreshing them with new information.
However, is that really worth it? I feel like saying "the config drive
is static, deal with it - if you want up-to-date metadata, use the
API" is an equally, if not more, valid option.


Yeah, config drive should, IMHO, be static, readonly. If you want to 
change device tags or other configuration data after boot, use a 
configuration management system or something like etcd watches. I don't 
think Nova should be responsible for this.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-20 Thread Mario Villaplana
Hi all,

I know this hasn't been formally approved yet, but I'd like to thank
everybody for this great opportunity. I'm humbled by the votes of
confidence and look forward to serving the ironic community in this
new way. Thanks especially to Dmitry for proposing this to the mailing
list.

+1 to Vasyl and congratulations.

Thanks Devananda for everything, and I hope to see you back soon.

Mario

On Fri, Feb 17, 2017 at 6:43 PM, John Villalovos
 wrote:
> +1 to both Vasyl and Mario.
>
> Hopefully Deva will be able to come back again to Ironic in the future.
>
> On Fri, Feb 17, 2017 at 10:42 AM, Julia Kreger 
> wrote:
>>
>> Thank you Dmitry!
>>
>> I’m +1 to all of these actions. Vasyl and Mario will be great additions.
>> As for Devananda, it saddens me but I agree and I hope to work with him
>> again in the future.
>>
>> -Julia
>>
>> > On Feb 17, 2017, at 4:40 AM, Dmitry Tantsur  wrote:
>> >
>> > Hi all!
>> >
>> > I'd like to propose a few changes based on the recent contributor
>> > activity.
>> >
>> > I have two candidates that look very good and pass the formal barrier of
>> > 3 reviews a day on average [1].
>> >
>> > First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats
>> > [2] are high, he's doing a lot of extremely useful work around networking
>> > and CI.
>> >
>> > Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he
>> > has been doing some quality reviews for critical patches in the Ocata 
>> > cycle.
>> >
>> > Active cores and interested contributors, please respond with your +-1
>> > to these suggestions.
>> >
>> > Unfortunately, there is one removal as well. Devananda, our team leader
>> > for several cycles since the very beginning of the project, has not been
>> > active on the project for some time [4]. I propose to (hopefully temporary)
>> > remove him from the core team. Of course, when (look, I'm not even saying
>> > "if"!) he comes back to active reviewing, I suggest we fast-forward him
>> > back. Thanks for everything Deva, good luck with your current challenges!
>> >
>> > Thanks,
>> > Dmitry
>> >
>> > [1] http://stackalytics.com/report/contribution/ironic-group/90
>> > [2] http://stackalytics.com/?user_id=vsaienko&metric=marks
>> > [3] http://stackalytics.com/?user_id=mario-villaplana-j&metric=marks
>> > [4] http://stackalytics.com/?user_id=devananda&metric=marks
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-20 Thread Jay Pipes

On 02/20/2017 09:33 AM, Prashant Shetty wrote:

Team,

I have multi node devstack setup with single controller and multiple
computes running stable/ocata.

On compute:
ENABLED_SERVICES=n-cpu,neutron,placement-api

Both KVM and ESxi compute came up fine:
vmware@cntr11:~$ nova hypervisor-list

  warnings.warn(msg)
+----+----------------------------------------------------+-------+---------+
| ID | Hypervisor hostname                                | State | Status  |
+----+----------------------------------------------------+-------+---------+
| 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
| 7  | kvm-1                                              | up    | enabled |
+----+----------------------------------------------------+-------+---------+
vmware@cntr11:~$

All services seem to run fine. When I tried to launch an instance I saw
the below errors in the nova-conductor logs, and the instance is stuck in
"scheduling" state forever.
I don't have any config related to n-cell on the controller. Could someone
help me identify why nova-conductor is complaining about cells.

2017-02-20 14:24:06.128 WARNING oslo_config.cfg
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option
"scheduler_default_filters" from group "DEFAULT" is deprecated. Use
option "enabled_filters" from group "filter_scheduler".
2017-02-20 14:24:06.211 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to
schedule instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most
recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/conductor/manager.py", line 866, in
schedule_and_build_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
request_specs[0].to_legacy_filter_properties_dict())
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/conductor/manager.py", line 597, in
_schedule_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager hosts =
self.scheduler_client.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in
select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
self.queryclient.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in
__run_method
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
getattr(self.instance, __name)(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/client/query.py", line 32, in
select_destinations
Re: [openstack-dev] [keystone] PTG schedule

2017-02-20 Thread Lance Bragstad
Late last week we had our schedule tentatively set and I haven't received
any more feedback on the current proposal [0]. I think it would be safe to
consider it set unless something urgent comes up that we have to work
around (urgent meaning I completely forgot to schedule a planned session
with another project).

Don't hesitate to ping me if you have any questions about the schedule and
safe travels to Atlanta!


[0] https://etherpad.openstack.org/p/keystone-pike-ptg

On Thu, Feb 16, 2017 at 1:40 PM, Lance Bragstad  wrote:

> Based on early feedback I've broken up our first session, which is
> dedicated to reviewing things not addressed in Ocata [0], into three
> different sessions. I'm hoping this will help us commit enough time to
> finding resolutions for each topic, versus cramming it all into 40 minutes.
>
> Keep the feedback coming. Thanks!
>
>
> [0] https://etherpad.openstack.org/p/pike-ptg-keystone-ocata-carry-over
>
> On Wed, Feb 15, 2017 at 10:24 PM, Lance Bragstad 
> wrote:
>
>> Hi all,
>>
>> I tried to get most of our things shuffled around into somewhat of a
>> schedule [0]. Everything that was on the list was eventually refactored
>> into the agenda.
>>
>> I've broken the various topics out into their own etherpads and linked
>> them back to the main schedule. We should have the freedom to move things
>> around as we see fit. The only exceptions are the sessions that we've
>> already scheduled with other projects (cross-project policy, cross-project
>> federation, or our joint session with horizon). Moving them might be tough
>> since we've worked it around schedules from other projects and we've made
>> the room reservation [1].
>>
>> I assume we'll need to make some adjustments before the end of the week.
>> If you see anything that conflicts with another session, please let me know.
>>
>> Thanks!
>>
>> [0] https://etherpad.openstack.org/p/keystone-pike-ptg
>> [1] https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Miguel Angel Ajo Pelayo
+1 :-)

On Mon, Feb 20, 2017 at 9:16 AM, John Davidge 
wrote:

> +1
>
> On 2/20/17, 4:48 AM, "Carlos Gonçalves"  wrote:
>
> >+1
> >
> >On Mon, Feb 20, 2017 at 9:17 AM, Kevin Benton
> > wrote:
> >
> >No problem. Keep sending in RSVPs if you haven't already.
> >
> >On Mon, Feb 20, 2017 at 2:59 AM, Furukawa, Yushiro
> > wrote:
> >
> >
> >
> >+1
> >
> >Sorry for late, Kevin!!
> >
> >
> >  Yushiro Furukawa
> >
> >From: Kevin Benton [mailto:ke...@benton.pub]
> >
> >
> >
> >
> >Hi all,
> >
> >
> >I'm organizing a Neutron social event for Thursday evening in Atlanta
> >somewhere near the venue for dinner/drinks. If you're interested, please
> >reply to this email with a "+1" so I can get a general count for a
> >reservation.
> >
> >
> >
> >Cheers,
> >
> >Kevin Benton
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >___
> ___
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> >
> >
> >___
> ___
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> >
>
>
> 
> Rackspace Limited is a company registered in England & Wales (company
> registered number 03897010) whose registered office is at 5 Millington
> Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy
> can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail
> message may contain confidential or privileged information intended for the
> recipient. Any dissemination, distribution or copying of the enclosed
> material is prohibited. If you receive this transmission in error, please
> notify us immediately by e-mail at ab...@rackspace.com and delete the
> original message. Your cooperation is appreciated.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Team photo

2017-02-20 Thread Miguel Angel Ajo Pelayo
Lol, ack :)

On Mon, Feb 20, 2017 at 2:37 AM, Kevin Benton  wrote:

> Clothes are strongly recommended as far as I understand it.
>
> On Mon, Feb 20, 2017 at 1:47 AM, Gary Kotton  wrote:
>
>> What is the dress code? :)
>>
>>
>>
>> *From: *"Das, Anindita" 
>> *Reply-To: *OpenStack List 
>> *Date: *Monday, February 20, 2017 at 5:16 AM
>> *To: *OpenStack List 
>> *Subject: *Re: [openstack-dev] [neutron] - Team photo
>>
>>
>>
>> +1
>>
>>
>>
>> *From: *Kevin Benton 
>> *Reply-To: *"OpenStack Development Mailing List (not for usage
>> questions)" 
>> *Date: *Friday, February 17, 2017 at 5:08 PM
>> *To: *"openstack-dev@lists.openstack.org" 
>> *Subject: *[openstack-dev] [neutron] - Team photo
>>
>>
>>
>> Hello!
>>
>>
>>
>> Is everyone free Thursday at 11:20AM (right before lunch break) for 10
>> minutes for a group photo?
>>
>>
>>
>> Cheers,
>> Kevin Benton
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Will unshelving an offloaded instance respect the original AZ?

2017-02-20 Thread Jay Pipes

On 02/18/2017 01:46 PM, Matt Riedemann wrote:

I haven't fully dug into testing this, but I got wondering about this
question from reviewing a change [1] which would make the unshelve
operation start to check the volume AZ compared to the instance AZ when
the compute manager calls _prep_block_device.

That change is attempting to remove the check_attach() method in
nova.volume.cinder.API since it's mostly redundant with state checks
that Cinder does when reserving the volume. The only other thing that
Nova does in there right now is compare the AZs.

What I'm wondering is, with that change, will things break because of a
scenario like this:

1. Create volume in AZ 1.
2. Create server in AZ 1.
3. Attach volume to server (or boot server from volume in step 2).
4. Shelve (offload) server.
5. Unshelve server - nova-scheduler puts it into AZ 2.
6. _prep_block_device compares instance AZ 2 to volume AZ 1 and unshelve
fails with InvalidVolume.

If unshelving a server in AZ 1 can't move it outside of AZ 1, then we're
fine and the AZ check when unshelving is redundant but harmless.

[1]
https://review.openstack.org/#/c/335358/38/nova/virt/block_device.py@249
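
Reduced to a minimal sketch, the AZ comparison that step 6 performs looks
roughly like the following (hypothetical names, not Nova's actual code; the
cross_az_attach flag stands in for Nova's [cinder] cross_az_attach option):

```python
# Minimal sketch, not Nova's actual code: the cross-AZ volume check
# that unshelve would hit if the scheduler picks a different AZ.


class InvalidVolume(Exception):
    pass


def check_availability_zone(instance_az, volume_az, cross_az_attach=False):
    # cross_az_attach stands in for Nova's [cinder] cross_az_attach
    # option; when enabled, cross-AZ attaches are allowed outright.
    if cross_az_attach:
        return
    if instance_az != volume_az:
        raise InvalidVolume("instance is in AZ %r but volume is in AZ %r"
                            % (instance_az, volume_az))
```

In the scenario above both AZs are "AZ 1" at attach time; the question is
whether unshelve can make the instance AZ become "AZ 2" while the volume stays
in "AZ 1", which is exactly when this check would raise.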


When an instance is unshelved, the unshelve_instance() RPC API method is 
passed a RequestSpec object as the request_spec parameter:


https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L600

This request spec object is passed to schedule_instances():

https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L660

(you will note that the code directly above there "resets force_hosts" 
parameters, ostensibly to prevent any forced destination host from being 
passed to the scheduler)


The question is: does the above request spec contain availability zone 
information for the original instance? If it does, we're good. If it 
doesn't, we can get into the problem described above.


From what I can tell (and Sylvain might be the best person to answer 
this, hence cc'ing him), the availability zone is *always* stored in the 
request spec for an instance:


https://github.com/openstack/nova/blob/master/nova/compute/api.py#L966

Which means that upon unshelving after a shelve_offload, we will always 
pass the scheduler the original AZ.


Sylvain, do you concur?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Clint Byrum
Excerpts from Artom Lifshitz's message of 2017-02-20 08:23:09 -0500:
> Config drive over read-only NFS anyone?
> 
> 
> A shared filesystem so that both Nova and the guest can do IO on it at the
> same time is indeed the proper way to solve this. But I'm afraid of the
> ramifications in terms of live migrations and all other operations we can
> do on VMs...
> 

What makes anyone think this will perform better than the metadata
service?

If we can hand people an address that is NFS-capable, we can hand them
an HTTP(S) URL that doesn't have performance problems.
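
From the guest side, such a URL needs nothing more than a plain HTTP fetch; a
rough sketch (169.254.169.254 is the well-known metadata address, though the
exact JSON path may vary by release, so treat it as illustrative):

```python
import json
import urllib.request

# Assumed path for illustration; check the deployed Nova version.
METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"


def fetch_metadata(url=METADATA_URL, timeout=5):
    # Fetch and decode the instance metadata JSON over plain HTTP; a
    # guest agent could poll this instead of re-reading the config drive.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

A guest agent could re-run this after a hotplug event rather than waiting for
the config drive to be regenerated.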

> 
> Michael
> 
> On Sun, Feb 19, 2017 at 6:12 AM, Steve Gordon  wrote:
> 
> > - Original Message -
> > > From: "Artom Lifshitz" 
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org>
> > > Sent: Saturday, February 18, 2017 8:11:10 AM
> > > Subject: Re: [openstack-dev] [nova] Device tagging: rebuild config drive
> > upon instance reboot to refresh metadata on
> > > it
> > >
> > > In reply to Michael:
> > >
> > > > We have had this discussion several times in the past for other
> > reasons.
> > > > The
> > > > reality is that some people will never deploy the metadata API, so I
> > feel
> > > > like we need a better solution than what we have now.
> > >
> > > Aha, that's definitely a good reason to continue making the config
> > > drive a first-class citizen.
> >
> > The other reason is that the metadata API as it stands isn't an option for
> > folks trying to do IPV6-only IIRC.
> >
> > -Steve
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-20 Thread Steven Hardy
On Fri, Feb 17, 2017 at 10:07:38PM +0530, Rabi Mishra wrote:
>On Fri, Feb 17, 2017 at 8:44 PM, Matt Riedemann 
>wrote:
> 
>  On 2/15/2017 12:40 PM, Zane Bitter wrote:
> 
>Traditionally Heat has given current and former PTLs of the project +2
>rights on stable branches for as long as they remain core reviewers.
>Usually I've done that by adding them to the heat-release group.
> 
>At some point the system changed so that the review rights for these
>branches are no longer under the team's control (instead, the
>stable-maint core team is in charge), and as a result at least the
>current PTL (Rico Lin) and the previous PTL (Rabi Mishra), and
>possibly
>others (Thomas Herve, Sergey Kraynev), haven't been added to the
>group.
>That's slowing down getting backports merged, amongst other things.
> 
>I'd like to request that we update the membership to be the same as
>https://review.openstack.org/#/admin/groups/152,members
> 
>Rabi Mishra
>Rico Lin
>Sergey Kraynev
>Steve Baker
>Steven Hardy
>Thomas Herve
>Zane Bitter
> 
>I also wonder if the stable-maint team would consider allowing the
>Heat
>team to manage the group membership again if we commit to the criteria
>above (all current/former PTLs who are also core reviewers) by just
>adding that group as a member of heat-stable-maint?
> 
>thanks,
>Zane.
> 
>
> __
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>  Reviewing patches on stable branches have different guidelines,
>  expressed here [1]. In the past when this comes up I've asked if the
>  people being asked to be added to the stable team for a project have
>  actually been doing reviews on the stable branches to show they are
>  following the guidelines, and at times when this has come up the people
>  proposed (usually PTLs) haven't, so I've declined at that time until
>  they start actually doing reviews and can show they are following the
>  guidelines.
> 
>  There are reviewstats tools for seeing the stable review numbers for
>  Heat, I haven't run that though to check against those proposed above,
>  but it's probably something I'd do first before just adding a bunch of
>  people.
> 
>Would it not be appropriate to trust the stable cross-project liaison for
>heat when he nominates stable cores? Having been the PTL for Ocata and one
>who struggled to get the backports on time for a stable release as
>planned, I don't recall seeing many reviews from the stable maintenance core
>team for them to be able to judge the quality of reviews. So I don't think
>it's fair to decide eligibility only based on the review numbers and
>stats.

I agree - those nominated by Zane are all highly experienced reviewers and
as ex-PTLs are well aware of the constraints around stable backports and
stable release management.

I do agree the requirements around reviews for stable branches are very
different, but I think we need to assume good faith here and accept we have
a bottleneck which can be best fixed by adding some folks we *know* are
capable of exercising sound judgement to the stable-maint team for heat.

I respect the arguments made by the stable-maint core folks, and I think we
all understand the reason for these concerns, but ultimately unless folks
outside the heat core team are offering to help with reviews directly, I
think it's a little unreasonable to block the addition of these reviewers,
given they've been proposed by the current stable liaison who I think is in
the best position to judge the suitability of candidates.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - approximate schedule for PTG

2017-02-20 Thread Ihar Hrachyshka
Thanks a lot for the ical, it's really helpful, adding some structure
to the PTG anarchy.

Ihar

On Sat, Feb 18, 2017 at 12:42 PM, Kevin Benton  wrote:
> Hi All,
>
>
> Here is a rough outline of the order in which I want to cover the items:
> https://etherpad.openstack.org/p/neutron-ptg-pike-final
>
>
> I've also put together a calendar that has the meeting times as well as a
> few relevant cross project sessions and our team dinner/photo.
>
> I suggest watching the calendar for changes since we will need to make
> adjustments as we go.
>
> ICAL:
> https://calendar.google.com/calendar/ical/khbmhi1mhtthrmgv2gnejtul3o%40group.calendar.google.com/public/basic.ics
>
> HTML page:
> https://calendar.google.com/calendar/embed?src=khbmhi1mhtthrmgv2gnejtul3o%40group.calendar.google.com=America/New_York
>
>
> If you notice any conflicts or missing items, reply right away so I can get
> it fixed.
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-20 Thread Prashant Shetty
Team,

I have a multi-node devstack setup with a single controller and multiple
computes running stable/ocata.

On compute:
ENABLED_SERVICES=n-cpu,neutron,placement-api

Both the KVM and ESXi computes came up fine:
vmware@cntr11:~$ nova hypervisor-list

  warnings.warn(msg)
+----+----------------------------------------------------+-------+---------+
| ID | Hypervisor hostname                                | State | Status  |
+----+----------------------------------------------------+-------+---------+
| 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
| 7  | kvm-1                                              | up    | enabled |
+----+----------------------------------------------------+-------+---------+
vmware@cntr11:~$

All services seem to run fine. When I try to launch an instance I see the
below errors in the nova-conductor logs and the instance is stuck in the
"scheduling" state forever.
I don't have any config related to n-cell on the controller. Could someone
help me identify why nova-conductor is complaining about cells.

2017-02-20 14:24:06.128 WARNING oslo_config.cfg
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option
"scheduler_default_filters" from group "DEFAULT" is deprecated. Use option
"enabled_filters" from group "filter_scheduler".
2017-02-20 14:24:06.211 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to schedule
instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 866, in schedule_and_build_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 597, in _schedule_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 129, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=self.retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     timeout=timeout, retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise result
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
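
For the "No cell mapping found for cell0" warning in the subject, the usual
fix on a stable/ocata controller is to create the cell mappings with
nova-manage before scheduling can succeed; a hedged sketch of the steps
(adapt to the database settings actually in nova.conf):

```shell
# Run on the controller; assumes nova.conf already points at the API
# and main databases.

# Create the cell0 mapping (cell0 holds instances that failed scheduling)
nova-manage cell_v2 map_cell0

# For single-cell deployments, create the main cell mapping and map
# existing hosts into it in one step
nova-manage cell_v2 simple_cell_setup

# After adding new computes, (re)discover them
nova-manage cell_v2 discover_hosts

# Verify the resulting mappings
nova-manage cell_v2 list_cells
```

Restart nova-conductor and nova-scheduler afterwards and retry the boot.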

[openstack-dev] [Watcher] End-of-Ocata core team updates

2017-02-20 Thread Чадин Александр (Alexander Chadin)
Hi Watcher Team!

There are some changes to the Core group in Watcher:

1. Li Canwei (licanwei) and Prudhvi Rao Shedimbi (pshedimb) have
been nominated as Core Developers for Watcher.
They have received enough votes to be included in the Watcher Core group.

2. Jean-Emile DARTOIS has stepped down from Watcher Core since
he has little time to keep up with core reviewer duties.

3. Hidekazu Nakamura is being nominated as a Core Developer for Watcher.
Good luck!

I want to congratulate our new Core Developers and to thank Jean-Emile
for his work and support of the project. He has done a lot of architecture
design reviews and implementations and helped make Watcher what it is.

Thank you, Jean-Emile, good luck, and remember that the
Watcher team is always open to you.

Welcome aboard, Prudhvi Rao Shedimbi and Li Canwei!

Best Regards,
_
Alexander Chadin
OpenStack Developer
Servionica LTD
a.cha...@servionica.ru
+7 (916) 693-58-81

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] PTG Team Photos!

2017-02-20 Thread Kendall Nelson
Hello!

Just wanted to check in with you all and remind you that if you want team
photos, there are still spots available on both Tuesday and Thursday!

-Kendall Nelson (diablo_rojo)

On Wed, Feb 15, 2017 at 12:24 PM Kendall Nelson 
wrote:

> Hello All!
>
> We are excited to see you next week at the PTG and wanted to share
> that we will be taking team photos! Provided is a google sheet signup for
> the available time slots [1]. We will be providing time on Tuesday Morning/
> Afternoon and Thursday Morning/Afternoon to come as a team to get your
> photo taken. Slots are only ten minutes so it's *important that everyone
> be on time*! If you are unable to view/edit the spreadsheet let me know
> and I will try to get you access or can fill in a slot for you.
>
> The location where we are taking the photos is on the 3rd floor, in the
> prefunction space in front of the Grand Ballroom (across the hall from Fandangles).
>
> See you next week!
>
> Thanks,
>
> -Kendall Nelson (diablo_rojo)
>
> [1]
> https://docs.google.com/spreadsheets/d/1bgwMDsUm37JgpksUJszoDWcoBMHciufXcGV3OYe5A-4/edit?usp=sharing
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-20 Thread Zane Bitter

On 18/02/17 03:24, Thierry Carrez wrote:

Thomas Herve wrote:

[...]
At any rate, it's a matter of trust, a subject that comes up from time to
time, and it's fairly divisive. In this case though, I find it ironic
that I can approve whatever garbage I want on master, it can make its
way into a release, but if I want a bugfix backported into another
branch, someone else has to supervise me.


Originally the lock was there to make sure that people with stable/*
rights were aware of the stable policy (in particular which changes are
backportable depending on the support phase). Rules to apply in stable
reviews are *completely* different from the rules to apply on master
reviews - so being trusted for master doesn't magically make you aware
of review rules for stable.


Yes, this is reasonable. FWIW I'm 100% confident that all of the ex-PTLs 
are familiar with the stable branch policy (I am the stable-branch 
liaison for Heat).



That said, I thought that we now defaulted to trusting the local stable
liaison to ensure that the policy was well-known and directly add people
to the group... (with stable-maint being able to remove people if
needed, ask for forgiveness rather than permission, etc.)


This seems like an appropriate policy to me. I would be happy to see it 
adopted if it hasn't been already.



I guess that's something we could discuss in the Stable team room
(Monday morning) at the PTG for those who will be around then.


I might stop by :)

cheers,
Zane.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Jeremy Stanley
On 2017-02-20 13:38:31 + (+), Daniel P. Berrange wrote:
[...]
>Rather than mounting as a filesystem, you can also use NVDIMM directly
>as a raw memory block, in which case it can contain whatever data format
>you want - not merely a filesystem. With the right design, you could come
>up with a format that lets you store the role device metadata in an NVDIMM
>and be able to update its contents on the fly for the guest during hotplug.
[...]

Maybe it's just me, but this begs for a (likely fairly trivial?)
kernel module exposing that data under /sys or /proc (at least for
*nix guests).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread John Davidge
+1

On 2/20/17, 4:48 AM, "Carlos Gonçalves"  wrote:

>+1
>
>On Mon, Feb 20, 2017 at 9:17 AM, Kevin Benton
> wrote:
>
>No problem. Keep sending in RSVPs if you haven't already.
>
>On Mon, Feb 20, 2017 at 2:59 AM, Furukawa, Yushiro
> wrote:
>
>
>
>+1
>
>Sorry for late, Kevin!!
>
>
>  Yushiro Furukawa
>
>From: Kevin Benton [mailto:ke...@benton.pub]
>
>
>
>
>Hi all,
>
>
>I'm organizing a Neutron social event for Thursday evening in Atlanta
>somewhere near the venue for dinner/drinks. If you're interested, please
>reply to this email with a "+1" so I can get a general count for a
>reservation.
>
>
>
>Cheers,
>
>Kevin Benton
>
>
>
>
>
>
>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptg] [goals] Pike WSGI Goal Planning

2017-02-20 Thread Emilien Macchi
On Sun, Feb 19, 2017 at 10:17 PM, Emilien Macchi  wrote:
> On Mon, Feb 13, 2017 at 11:56 AM, Thierry Carrez  
> wrote:
>> Emilien Macchi wrote:
>>> I created https://etherpad.openstack.org/p/ptg-pike-wsgi so we can
>>> start discussing on this goal.
>>> Thierry confirmed to me that we would have a room on either Monday or
>>> Tuesday. Please let us know in the etherpad if you have schedule
>>> constraints.
>>
>> Sorry if I was unclear... you actually have the room available on both
>> days !
>
> That's awesome, so we have a room for 2 days: Georgia 11; Level 1.
>
> Because folks are busy attending different cross-project sessions, I
> think it would be great to kick off the session on Monday morning with
> a group of people representing different projects.
>
> We might want to see some folks from Keystone and / or Telemetry who
> have been involved in WSGI work; so they can share their experience,
> feedback and the path they took to get there.
> These 2 days could be used to:
>
> 1) Share the feedback of how some projects moved to WSGI only
> deployments (Keystone, Telemetry, etc).
> 2) Draw up a list of technical challenges.
> 3) Evaluate the amount of work for each project (we'll need a
> representative of each project involved in API work).
> 4) Target some work for Pike (blueprints, specs, etc).
>
> There is no specific agenda for now; we hope to have dynamic discussions
> with each other, where teams can share their experiences.
>
> I'll be in the room at 9am and stay around during the morning (note:
> I might also say Hi to SWG to see how things are being organized).
>
> Also, because there are 2 goals for Pike (this one and Python 3.5), we
> might need to sync with Doug, to avoid overlaps of sessions, so we
> allow folks to attend both topics. Our rooms will be neighbors, so I
> guess we can easily communicate.
>
> Please let us know any question or propose any session during these 2
> days about $topic.
>
> See you tomorrow,
> Thanks,
>
>> --
>> Thierry Carrez (ttx)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi

We're currently using Georgia 10 room with Python 3.5 Goal, so folks
can collaborate on both goals together.
I'll update the etherpad if it changes.
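
For context on what the goal asks of each API project: expose the service as a
standard WSGI callable so any WSGI server (uwsgi, mod_wsgi) can host it,
instead of a project-specific eventlet server. A minimal, purely illustrative
sketch of such an entry point:

```python
import json


def application(environ, start_response):
    # Toy WSGI entry point: real projects expose something equivalent
    # (e.g. a module-level 'application' object built via paste.deploy)
    # that uwsgi or mod_wsgi can import directly.
    body = json.dumps({"versions": []}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Keystone's and Telemetry's moves to this model are exactly the experience the
sessions aim to capture.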

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] What's new in StoryBoard?

2017-02-20 Thread Jeremy Stanley
On 2017-02-20 01:06:22 -0500 (-0500), Kevin Benton wrote:
> Thanks for the pointers.
> 
> So the plan for the projects with a lot of cross bug reports then is to
> wait until a unified switch then?

Right. While our spec[*] predates the cross-project goals model, the
idea is that (assuming everything remains on track) we'll be
proposing the coordinated switch to the community and TC as a Queens
cycle goal.

[*] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Sat, Feb 18, 2017 at 01:54:11PM -0500, Artom Lifshitz wrote:
> A few good points were made:
> 
> * the config drive could be VFAT, in which case we can't trust what's
> on it because the guest has write access
> * if the config drive is ISO9660, we can't selectively write to it, we
> need to regenerate the whole thing - but in this case it's actually
> safe to read from (right?)
> * the point about the precedent being set that the config drive
> doesn't change... I'm not sure I 100% agree. There's definitely a
> precedent that information on the config drive will remain present for
> the entire instance lifetime (so the admin_pass won't disappear after
> a reboot, even if using that "feature" in a workflow seems ludicrous),
> but we've made no promises that the information itself will remain
> constant. For example, nothing says the device metadata must remain
> unchanged after a reboot.
> 
> Based on that here's what I propose:
> 
> If the config drive is vfat, we can just update the information on it
> that we need to update. In the device metadata case, we write a new
> JSON file, overwriting the old one.
> 
> If the config drive is ISO9660, we can safely read from it to fill in
> what information isn't persisted anywhere else, then update it with
> the new stuff we want to change. Then write out the new image.

Neither of these really copes with dynamically updating the role device
metadata for a *running* guest during a disk/NIC hotplug, for example.
You can't have the host re-write the FS data that's in use by a running
guest.

For the CDROM based config drive, you would have to eject the virtual
media and insert new media.

IMHO, I'd just declare config drive readonly no matter what and anything
which requires dynamic data must use a different mechanism. Trying to
make config drive at all dynamic just opens a can of worms.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|
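For concreteness, the VFAT update path Artom proposes above might look roughly
like the sketch below. The `openstack/latest/meta_data.json` path and the
`devices` key match the usual config drive layout, but the function name, the
write-then-rename step, and the temp directory standing in for a mounted drive
are all assumptions of this sketch, not Nova's actual implementation:

```python
import json
import os
import tempfile

def update_device_metadata(mount_point, devices):
    """Rewrite the device metadata on a (VFAT) config drive in place.

    Sketch only: assumes the openstack/latest/meta_data.json layout
    and a top-level "devices" key.
    """
    meta_path = os.path.join(mount_point, "openstack", "latest",
                             "meta_data.json")
    with open(meta_path) as f:
        metadata = json.load(f)
    metadata["devices"] = devices  # replace the stale device list
    # Write to a temp file and rename, so an interrupted write never
    # leaves a truncated meta_data.json behind.
    tmp_path = meta_path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(metadata, f)
    os.replace(tmp_path, meta_path)

# A temp directory stands in for the mounted config drive.
drive = tempfile.mkdtemp()
os.makedirs(os.path.join(drive, "openstack", "latest"))
with open(os.path.join(drive, "openstack", "latest",
                       "meta_data.json"), "w") as f:
    json.dump({"uuid": "1840ac2e", "devices": []}, f)

update_device_metadata(drive, [{"type": "nic", "tags": ["db-net"]}])
```

Note that the rename trick only limits corruption from an interrupted host
write; it does nothing about the guest concurrently reading the filesystem,
which is the problem raised below.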

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] No IRC Meeting This Week (Feb 22)

2017-02-20 Thread HU, BIN
Hello folks,

Many thanks to all of our contributors for your hard work.

As we agreed in our meeting last week (Feb 15), the IRC meeting this week (Feb
22) is canceled for PTG.

Thank you and enjoy the break.

Bin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Sat, Feb 18, 2017 at 08:11:10AM -0500, Artom Lifshitz wrote:
> In reply to Michael:
> 
> > We have had this discussion several times in the past for other reasons. The
> > reality is that some people will never deploy the metadata API, so I feel
> > like we need a better solution than what we have now.
> 
> Aha, that's definitely a good reason to continue making the config
> drive a first-class citizen.

FYI, there are a variety of other options available in QEMU for exposing
metadata from the host to the guest that may be a better option than either
config drive or network metadata service, that we should consider.

 - NVDIMM - this is an arbitrary block of data mapped into the guest OS
   memory space. As the name suggests, from a physical hardware POV this
   is non-volatile RAM, but in the virt space we have much more flexibility.
   It is possible to back an NVDIMM in the guest with a plain file in the
   host, or with volatile ram in the host.

   In the guest, the NVDIMM can be mapped as a block device, and from there
   mounted as a filesystem. Now this isn't actually more useful than config
   drive really, since guest filesystem drivers get upset if the host changes
   the filesystem config behind its back. So this wouldn't magically make it
   possible to dynamically update role device metadata at hotplug time.

   Rather than mounting as a filesystem, you can also use NVDIMM directly
   as a raw memory block, in which case it can contain whatever data format
   you want - not merely a filesystem. With the right design, you could come
   up with a format that let you store the role device metadata in a NVDIMM
   and be able to update its contents on the fly for the guest during hotplug.

 - virtio-vsock - think of this as UNIX domain sockets between the host and
   guest.  This is to deal with the valid use case of people wanting to use
   a network protocol, but not wanting a real NIC exposed to the guest/host
   for security concerns. As such I think it'd be useful to run the metadata
   service over virtio-vsock as an option. It'd likely address at least some
   people's security concerns wrt the metadata service. It would also fix the
   ability to use the metadata service in IPv6-only environments, as we would
   not be using IP at all :-)
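To make the virtio-vsock idea concrete: vsock endpoints use the ordinary
stream-socket API (AF_VSOCK on Linux), so a metadata service over it looks
just like one over TCP, minus the IP addressing. In the runnable sketch below
a UNIX socketpair stands in for the vsock connection, and the one-line
request/JSON-reply protocol is invented purely for illustration:

```python
import json
import socket
import threading

# Toy metadata store; a real service would build this per instance.
METADATA = {"meta_data.json": {"uuid": "1840ac2e", "devices": []}}

def serve(conn):
    # Read one newline-terminated path, answer with the matching JSON doc.
    path = conn.recv(1024).decode().strip()
    doc = METADATA.get(path, {"error": "not found"})
    conn.sendall(json.dumps(doc).encode())
    conn.close()

# socket.socketpair() (AF_UNIX) stands in for an AF_VSOCK connection;
# the send/recv calls would be identical over vsock.
server, client = socket.socketpair()
t = threading.Thread(target=serve, args=(server,))
t.start()

client.sendall(b"meta_data.json\n")
reply = json.loads(client.recv(65536).decode())
t.join()
```

The design point is that nothing here depends on IP at all, which is what
makes the approach attractive for IPv6-only or NIC-less deployments.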


Both of these are pretty new features only recently added to qemu/libvirt,
so they're not going to immediately obsolete the config drive / IPv4 metadata
service, but they're things to consider IMHO. It would be valid to say
the config drive role device tagging metadata will always be readonly,
and if you want dynamic data you must use the metadata service over IPv4
or virtio-vsock.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|
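As an illustration of the "raw memory block with your own format" idea from
the NVDIMM bullet above, here is a sketch: a generation counter plus a JSON
payload in an mmap'd file, with the file standing in for the NVDIMM backing
store. The record format (counter, length, payload) is invented for this
sketch, not an existing standard:

```python
import json
import mmap
import os
import struct
import tempfile

# Record layout: 4-byte LE generation counter, 4-byte LE payload length,
# then a JSON payload. A guest agent would poll the counter and re-read
# the payload whenever it changes.
HEADER = struct.Struct("<II")
SIZE = 4096  # size of the simulated NVDIMM region

def write_metadata(mm, generation, payload):
    data = json.dumps(payload).encode()
    # Write payload first, bump the counter last, so a polling reader
    # never observes a half-written record.
    mm[HEADER.size:HEADER.size + len(data)] = data
    mm[0:HEADER.size] = HEADER.pack(generation, len(data))

def read_metadata(mm):
    generation, length = HEADER.unpack(mm[0:HEADER.size])
    return generation, json.loads(mm[HEADER.size:HEADER.size + length])

# A plain file stands in for the NVDIMM backing file on the host.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)
mm = mmap.mmap(fd, SIZE)

write_metadata(mm, 1, {"devices": [{"bus": "pci", "tags": ["boot"]}]})
# ...a NIC is hotplugged; the host updates the region in place...
write_metadata(mm, 2, {"devices": [{"bus": "pci", "tags": ["boot"]},
                                   {"bus": "pci", "tags": ["data"]}]})
gen, meta = read_metadata(mm)
```

This is exactly the property the filesystem-on-NVDIMM variant lacks: the host
can rewrite the region while the guest is running, because no guest
filesystem driver holds state about it.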

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Instances are not creating after adding 3 additional nova nodes

2017-02-20 Thread Kevin Benton
Expanding on that: you usually get that binding error when Neutron thinks
it can't wire up the ports on the compute nodes. So ensure that you started
the appropriate Neutron agents on the new compute nodes and that they are
alive by running 'neutron agent-list'.
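If you want to script that liveness check, something like the sketch below
works against the JSON output of the agent list command (e.g.
`openstack network agent list -f json`). The exact field names, "Alive",
"Host", and "Agent Type", can vary between client versions, so treat them as
assumptions of this sketch:

```python
import json

def dead_agents(agents, host=None):
    """Return the agents that are not alive, optionally for one host only.

    Expects the list-of-dicts shape produced by
    `openstack network agent list -f json` (field names assumed).
    """
    return [a for a in agents
            if not a.get("Alive")
            and (host is None or a.get("Host") == host)]

# Sample output, as if parsed from the CLI on the deployment above.
sample = json.loads("""[
  {"Agent Type": "Open vSwitch agent", "Host": "compute01", "Alive": true},
  {"Agent Type": "Open vSwitch agent", "Host": "compute04", "Alive": false},
  {"Agent Type": "DHCP agent", "Host": "network01", "Alive": true}
]""")

for agent in dead_agents(sample):
    print("DEAD:", agent["Agent Type"], "on", agent["Host"])
```

Any agent this flags on a new compute node is a likely cause of the
binding_failed ports.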

On Mon, Feb 20, 2017 at 8:14 AM, Kostyantyn Volenbovskyi <
volenbov...@yandex.ru> wrote:

> Hi,
> this 'Unexpected vif_type=binding_failed' error is fairly generic as well,
> but you can shift focus from Nova to Neutron and the virtual switch.
>
> So check:
> -Neutron server logs
> -Logs of Neutron agent on target Compute Host(s)
> -OVS logs and possibly things like /var/log/messages for things related to
> virtual networking.
>
> The root cause is typically:
> -misconfiguration of mechanism driver/type driver.
> -misconfiguration of virtual switching (typically OVS)
> Go through the installation guides at docs.openstack.org, which provide
> guidance on the parameter values related to that.
>
>
> BR,
> Konstantin
>
> On Feb 20, 2017, at 8:16 AM, Anwar Durrani 
> wrote:
>
> Further, when I tried to launch a new instance, I could see the
> following:
>
> tail -f /var/log/nova/nova-compute.log
>
>
> 2017-02-20 12:45:26.596 5365 TRACE nova.compute.manager [instance:
> 1840ac2e-5a54-4941-a96f-a431b2a2c236] flavor, virt_type)
>
> 2017-02-20 12:45:26.596 5365 TRACE nova.compute.manager [instance:
> 1840ac2e-5a54-4941-a96f-a431b2a2c236]   File "/usr/lib/python2.7/site-
> packages/nova/virt/libvirt/vif.py", line 374, in get_config
>
> 2017-02-20 12:45:26.596 5365 TRACE nova.compute.manager [instance:
> 1840ac2e-5a54-4941-a96f-a431b2a2c236] _("Unexpected vif_type=%s") %
> vif_type)
>
> 2017-02-20 12:45:26.596 5365 TRACE nova.compute.manager [instance:
> 1840ac2e-5a54-4941-a96f-a431b2a2c236] NovaException: Unexpected
> vif_type=binding_failed
>
> 2017-02-20 12:45:26.596 5365 TRACE nova.compute.manager [instance:
> 1840ac2e-5a54-4941-a96f-a431b2a2c236]
>
>
>
> On Mon, Feb 20, 2017 at 12:31 PM, Melvin Hillsman 
> wrote:
>
>> Since the error was with scheduling you will want to modify the config
>> for nova to show verbose output, try to create another instance, and check
>> for the uuid and/or requestid of the creation attempt in the log -
>> nova-scheduler.log
>>
>> I would turn verbose logging off right after you get a failed attempt to
>> schedule as well, since the logs can grow quickly.
>>
>> On Mon, Feb 20, 2017 at 12:56 AM, Saverio Proto  wrote:
>>
>>> Well,
>>> I have no idea from this log file. Try making nova-compute more
>>> verbose if you don't find anything in the logs.
>>>
>>> Saverio
>>>
>>> 2017-02-20 7:50 GMT+01:00 Anwar Durrani :
>>> >
>>> > On Thu, Feb 16, 2017 at 1:44 PM, Saverio Proto 
>>> wrote:
>>> >>
>>> >> openstack server show uuid
>>> >
>>> >
>>> > Hi Saverio,
>>> >
>>> > I have investigated and progressed the case as you suggested. I got to
>>> > know that the instance was supposed to be launched on one of the nova
>>> > nodes, where I dug in and tried to find the log you mentioned; I have
>>> > seen the following output:
>>> >
>>> > tail -f /var/log/nova/nova-compute.log
>>> >
>>> > 2017-02-20 10:40:19.318 5365 WARNING nova.compute.manager
>>> > [req-34fa4448-061d-44ad-b6e9-6ff0d1fd072f - - - - -] While
>>> synchronizing
>>> > instance power states, found 4 instances in the database and 0
>>> instances on
>>> > the hypervisor.
>>> >
>>> > and
>>> >
>>> > another log, where I have found the following:
>>> >
>>> > tail -f /var/log/nova/nova-manage.log
>>> >
>>> > 2017-02-15 16:42:42.896 115003 TRACE nova   File
>>> > "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 34,
>>> in
>>> > sleep
>>> >
>>> > 2017-02-15 16:42:42.896 115003 TRACE nova hub.switch()
>>> >
>>> > 2017-02-15 16:42:42.896 115003 TRACE nova   File
>>> > "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in
>>> switch
>>> >
>>> > 2017-02-15 16:42:42.896 115003 TRACE nova return
>>> self.greenlet.switch()
>>> >
>>> > 2017-02-15 16:42:42.896 115003 TRACE nova   File
>>> > "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 346, in
>>> run
>>> >
>>> > 2017-02-15 16:42:42.896 115003 TRACE nova self.wait(sleep_time)
>>> >
>>> >
>>> > Thanks
>>> >
>>> >
>>> >
>>> > --
>>> > Thanks & regards,
>>> > Anwar M. Durrani
>>> > +91-9923205011
>>> >
>>> >
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
>>
>>
>> --
>> Kind regards,
>>
>> Melvin Hillsman
>> Ops Technical Lead
>> OpenStack Innovation Center
>>
>> mrhills...@gmail.com
>> phone: (210) 312-1267
>> mobile: (210) 413-1659
>> http://osic.org
>>
>> Learner | Ideation | Belief | Responsibility | Command
>>
>
>
>
> --
> Thanks & regards,
> Anwar M. Durrani
> +91-9923205011 
