Re: [Openstack-operators] Ceilometer/oslo.messaging connect to multiple RMQ endpoints

2016-11-03 Thread Mike Dorman
Just what I needed, thanks Sam.  And I can also confirm this works like a champ.

I was digging through oslo.messaging stuff looking for this and completely
overlooked the notification settings in ceilometer itself.

Appreciate you pointing me in the right direction!
Mike


From: Sam Morrison 
Date: Thursday, November 3, 2016 at 7:04 PM
To: Mike Dorman 
Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] Ceilometer/oslo.messaging connect to 
multiple RMQ endpoints

That was me! And yes, you can do it when consuming notifications with
ceilometer-agent-notification.

E.g. in our ceilometer.conf we have:

[notification]
workers=12
disable_non_metric_meters=true
store_events = true
batch_size = 50
batch_timeout = 5
messaging_urls = rabbit://XX:XX@rabbithost1:5671/vhost1
messaging_urls = rabbit://XX:XX@rabbithost2:5671/vhost2
messaging_urls = rabbit://XX:XX@rabbithost3:5671/vhost3


If no messaging_urls are set, then it will fall back to the settings in the
[oslo_messaging_rabbit] config section.
Also, if you set messaging_urls, then it won’t consume from the rabbit specified
in [oslo_messaging_rabbit], so you have to add it to messaging_urls too.
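Putting Sam’s two caveats together, here is a minimal sketch of such a config (Liberty-era option names; the hosts, vhosts and credentials below are placeholders, not values from this thread):

```ini
# ceilometer.conf -- one notification agent consuming from several
# RabbitMQ clusters at once
[notification]
workers = 12
messaging_urls = rabbit://user:pass@api-cell-rabbit:5671/vhost1
messaging_urls = rabbit://user:pass@cell1-rabbit:5671/vhost2
messaging_urls = rabbit://user:pass@cell2-rabbit:5671/vhost3

[oslo_messaging_rabbit]
# Once messaging_urls is set, this broker is no longer consumed from
# automatically, so it is repeated in messaging_urls above.
rabbit_host = api-cell-rabbit
```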

Cheers,
Sam




On 4 Nov. 2016, at 10:28 am, Mike Dorman wrote:

I heard third hand from the summit that it’s possible to configure 
Ceilometer/oslo.messaging with multiple rabbitmq_hosts config entries, which 
will let you connect to multiple RMQ endpoints at the same time.

The scenario here is we use the Ceilometer notification agent to pipe events 
from OpenStack services into a Kafka queue for consumption by other team(s) in 
the company.  We also run Nova cells v1, so we have to run one Ceilometer agent 
for the API cell, as well as an agent for every compute cell (because they have 
independent RMQ clusters.)

Anyway, I tried configuring it this way and it still only connects to a single 
RMQ server.  We’re running Liberty Ceilometer and oslo.messaging, so I’m 
wondering if this behavior is only in a later version?  Can anybody shed any 
light?  I would love to get away from running so many Ceilometer agents.

Thanks!
Mike

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-03 Thread Sam Morrison

> On 4 Nov. 2016, at 1:33 pm, Emilien Macchi  wrote:
> 
> On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison wrote:
>> Wow I didn’t realise puppet3 was being deprecated, is anyone actually using 
>> puppet4?
>> 
>> I would hope that the openstack puppet modules would support puppet3 for a 
>> while still, at least until the next ubuntu LTS is out else we would get to 
>> the stage where the openstack  release supports Xenial but the corresponding 
>> puppet module would not? (Xenial has puppet3)
> 
> I'm afraid we communicated a lot about this, but you might have
> missed it; no problem.
> I have 3 questions for you:
> - for what reasons would you not upgrade puppet?

Because I’m a time-poor operator with more important stuff to upgrade :-)
Upgrading puppet *could* be a big task and something we haven’t had time to 
look into. I don’t follow along with puppetlabs, so I didn’t realise puppet3 was 
being deprecated. Now that this has come to my attention we’ll look into it for 
sure.

> - would it be possible for you to use puppetlabs packaging if you need
> puppet4 on Xenial? (that's what upstream CI is using, and it works
> quite well).

OK, that’s promising; good to know that the CI is using puppet4. It’s all my 
other dodgy puppet code I’m worried about.

> - what version of the modules do you deploy? (and therefore what
> version of OpenStack)

We’re using a mixture of newton/mitaka/liberty/kilo; sometimes the puppet 
module version is newer than the openstack version too, depending on where 
we’re at in the upgrade process of the particular openstack project.

I understand progress must go on; I am interested, though, in how many operators 
use puppet4. We may be in the minority, and then I’ll be quiet :-)

Maybe it should be deprecated in one release and then dropped in the next?


Cheers,
Sam





> 
>> My guess is that this would also be the case for RedHat and other distros 
>> too.
> 
> Fedora is shipping Puppet 4 and we're going to do the same for Red Hat
> and CentOS7.
> 
>> Thoughts?
>> 
>> 
>> 
>>> On 4 Nov. 2016, at 2:58 am, Alex Schultz  wrote:
>>> 
>>> Hey everyone,
>>> 
>>> Puppet 3 is reaching its end of life at the end of this year[0].
>>> Because of this we are planning on dropping official puppet 3 support
>>> as part of the Ocata cycle.  While we currently are not planning on
>>> doing any large scale conversion of code over to puppet 4 only syntax,
>>> we may allow some minor things in that could break backwards
>>> compatibility.  Based on feedback we've received, it seems that most
>>> people who may still be using puppet 3 are using older (< Newton)
>>> versions of the modules.  These modules will continue to be puppet 3.x
>>> compatible but we're using Ocata as the version where Puppet 4 should
>>> be the target version.
>>> 
>>> If anyone has any concerns or issues around this, please let us know.
>>> 
>>> Thanks,
>>> -Alex
>>> 
>>> [0] https://puppet.com/misc/puppet-enterprise-lifecycle
>>> 
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
>> 
> 
> 
> 
> -- 
> Emilien Macchi

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-03 Thread Emilien Macchi
On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison  wrote:
> Wow I didn’t realise puppet3 was being deprecated, is anyone actually using 
> puppet4?
>
> I would hope that the openstack puppet modules would support puppet3 for a 
> while still, at least until the next ubuntu LTS is out else we would get to 
> the stage where the openstack  release supports Xenial but the corresponding 
> puppet module would not? (Xenial has puppet3)

I'm afraid we communicated a lot about this, but you might
have missed it; no problem.
I have 3 questions for you:
- for what reasons would you not upgrade puppet?
- would it be possible for you to use puppetlabs packaging if you need
puppet4 on Xenial? (that's what upstream CI is using, and it works
quite well).
- what version of the modules do you deploy? (and therefore what
version of OpenStack)

> My guess is that this would also be the case for RedHat and other distros too.

Fedora is shipping Puppet 4 and we're going to do the same for Red Hat
and CentOS7.

> Thoughts?
>
>
>
>> On 4 Nov. 2016, at 2:58 am, Alex Schultz  wrote:
>>
>> Hey everyone,
>>
>> Puppet 3 is reaching its end of life at the end of this year[0].
>> Because of this we are planning on dropping official puppet 3 support
>> as part of the Ocata cycle.  While we currently are not planning on
>> doing any large scale conversion of code over to puppet 4 only syntax,
>> we may allow some minor things in that could break backwards
>> compatibility.  Based on feedback we've received, it seems that most
>> people who may still be using puppet 3 are using older (< Newton)
>> versions of the modules.  These modules will continue to be puppet 3.x
>> compatible but we're using Ocata as the version where Puppet 4 should
>> be the target version.
>>
>> If anyone has any concerns or issues around this, please let us know.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://puppet.com/misc/puppet-enterprise-lifecycle
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Emilien Macchi

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-03 Thread Sam Morrison
Wow I didn’t realise puppet3 was being deprecated, is anyone actually using 
puppet4?

I would hope that the openstack puppet modules would support puppet3 for a 
while still, at least until the next ubuntu LTS is out else we would get to the 
stage where the openstack  release supports Xenial but the corresponding puppet 
module would not? (Xenial has puppet3)

My guess is that this would also be the case for RedHat and other distros too.

Thoughts?



> On 4 Nov. 2016, at 2:58 am, Alex Schultz  wrote:
> 
> Hey everyone,
> 
> Puppet 3 is reaching its end of life at the end of this year[0].
> Because of this we are planning on dropping official puppet 3 support
> as part of the Ocata cycle.  While we currently are not planning on
> doing any large scale conversion of code over to puppet 4 only syntax,
> we may allow some minor things in that could break backwards
> compatibility.  Based on feedback we've received, it seems that most
> people who may still be using puppet 3 are using older (< Newton)
> versions of the modules.  These modules will continue to be puppet 3.x
> compatible but we're using Ocata as the version where Puppet 4 should
> be the target version.
> 
> If anyone has any concerns or issues around this, please let us know.
> 
> Thanks,
> -Alex
> 
> [0] https://puppet.com/misc/puppet-enterprise-lifecycle
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceilometer/oslo.messaging connect to multiple RMQ endpoints

2016-11-03 Thread Sam Morrison
That was me! And yes, you can do it when consuming notifications with
ceilometer-agent-notification.

E.g. in our ceilometer.conf we have:

[notification]
workers=12
disable_non_metric_meters=true
store_events = true
batch_size = 50
batch_timeout = 5
messaging_urls = rabbit://XX:XX@rabbithost1:5671/vhost1
messaging_urls = rabbit://XX:XX@rabbithost2:5671/vhost2
messaging_urls = rabbit://XX:XX@rabbithost3:5671/vhost3


If no messaging_urls are set, then it will fall back to the settings in the
[oslo_messaging_rabbit] config section.
Also, if you set messaging_urls, then it won’t consume from the rabbit specified
in [oslo_messaging_rabbit], so you have to add it to messaging_urls too.

Cheers,
Sam




> On 4 Nov. 2016, at 10:28 am, Mike Dorman  wrote:
> 
> I heard third hand from the summit that it’s possible to configure 
> Ceilometer/oslo.messaging with multiple rabbitmq_hosts config entries, which 
> will let you connect to multiple RMQ endpoints at the same time.
>  
> The scenario here is we use the Ceilometer notification agent to pipe events 
> from OpenStack services into a Kafka queue for consumption by other team(s) 
> in the company.  We also run Nova cells v1, so we have to run one Ceilometer 
> agent for the API cell, as well as an agent for every compute cell (because 
> they have independent RMQ clusters.)
>  
> Anyway, I tried configuring it this way and it still only connects to a 
> single RMQ server.  We’re running Liberty Ceilometer and oslo.messaging, so 
> I’m wondering if this behavior is only in a later version?  Can anybody shed 
> any light?  I would love to get away from running so many Ceilometer agents.
>  
> Thanks!
> Mike
>  
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceilometer/oslo.messaging connect to multiple RMQ endpoints

2016-11-03 Thread Matt Fischer
Unless this has drastically changed I thought the multiple entries was sort
of like a "pick one" scenario rather than a "connect to all of them". You
specify all the nodes in case one or more is down. I don't think it can be
used to talk to multiple rabbit clusters.
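The pick-one behavior Matt describes corresponds to the rabbit_hosts option rather than Ceilometer's messaging_urls; a sketch for illustration (Liberty-era option names, placeholder hosts):

```ini
# Any OpenStack service's config: oslo.messaging connects to ONE
# reachable broker from this list and fails over if it dies -- it
# does not consume from all of them simultaneously.
[oslo_messaging_rabbit]
rabbit_hosts = rabbit1:5672,rabbit2:5672,rabbit3:5672
rabbit_ha_queues = true
```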

On Thu, Nov 3, 2016 at 5:28 PM, Mike Dorman  wrote:

> I heard third hand from the summit that it’s possible to configure
> Ceilometer/oslo.messaging with multiple rabbitmq_hosts config entries,
> which will let you connect to multiple RMQ endpoints at the same time.
>
>
>
> The scenario here is we use the Ceilometer notification agent to pipe
> events from OpenStack services into a Kafka queue for consumption by other
> team(s) in the company.  We also run Nova cells v1, so we have to run one
> Ceilometer agent for the API cell, as well as an agent for every compute
> cell (because they have independent RMQ clusters.)
>
>
>
> Anyway, I tried configuring it this way and it still only connects to a
> single RMQ server.  We’re running Liberty Ceilometer and oslo.messaging, so
> I’m wondering if this behavior is only in a later version?  Can anybody
> shed any light?  I would love to get away from running so many Ceilometer
> agents.
>
>
>
> Thanks!
>
> Mike
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Updating oschecks

2016-11-03 Thread Mike Dorman
Absolutely agree.  The osops repos started as (and frankly, still are) mostly 
dumping grounds for tools folks had built and were running locally.  This was 
meant to be a first step at sharing and collaboration.  The kind of 
improvements you’re talking about is exactly the direction we want to take this 
stuff.

Thanks!!
Mike


From: Melvin Hillsman 
Organization: OpenStack Innovation Center
Reply-To: "mrhills...@gmail.com" 
Date: Thursday, November 3, 2016 at 12:48 PM
To: Lars Kellogg-Stedman , OpenStack Operators 

Subject: Re: [Openstack-operators] Updating oschecks

Hey Lars,

I think the needs you have are relevant to anyone who would use this tooling 
and think you should definitely move forward with implementing what you have 
prototyped. I personally believe any improvements to the tools in osops repos 
are welcome. Bringing modularity to this as well is great from my perspective.

On 11/03/2016 01:03 PM, Lars Kellogg-Stedman wrote:

I've recently started working with the oscheck scripts in the
osops-tools-monitoring project [1], and I found that in their current
form they didn't quite meet my needs.  In particular:

- They don't share a common set of authentication options
- They can't read credentials from files
- Many of them require a priori configuration of the openstack
  environment, which means they can't be used to health check a new
  deployment

I've spent a little time recently prototyping a new set of health
check scripts, available here:

  https://github.com/larsks/oschecks

I'd like to emphasize that these *are not* currently meant as a usable
replacement for the existing checks; they were written to prototype (a) the
way I'd like the user interface to work and (b) the way I'd like
things like credentials to work.

This project offers the following features:

- They use os_client_config for managing credentials, so they can be
  configured from a clouds.yaml file, or the environment, or the
  command line, and it all Just Works.

- Authentication is handled in just one place in the code for all the
  checks.

- The checks are extensible (using the cliff framework), which means
  that checks with different sets of requirements can be
  packaged/installed separately.  See, for example:

  https://github.com/larsks/oschecks_systemd

- For every supported service there is a simple "can I make an
  authenticated request to the API successfully" check that does not
  require any pre-existing resources to be created.

- They are (hopefully) structured such that it is relatively easy to
  write new checks that follow the same syntax and behavior of the
  other checks.

If people think this is a useful way of implementing these health
checks, I would be happy to do the work necessary to make them a mostly
drop-in replacement for the existing checks (adding checks that are
currently missing, and adding appropriate console-script entrypoints to
match the existing names, etc).

I would appreciate any feedback.  Sorry for the long message, and thanks
for taking the time to read this far!

[1]: https://github.com/openstack/osops-tools-monitoring/tree/master/monitoring-for-openstack/oschecks

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Kind regards,

--
Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
mobile: (210) 413-1659
office: (210) 312-1267
Learner | Ideation | Belief | Responsibility | Command
http://osic.org
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron fails to notify nova on events network-vif-plugged

2016-11-03 Thread Ahmed Mostafa
Actually it is not authenticating itself against keystone; it is
communicating directly with nova by using a keystone client.

In neutron.conf you have two options which are set to True by default; they
are:

notify_nova_on_port_status_changes

notify_nova_on_port_data_changes

If you set both of them to False, you won't have any errors any more, but if
for some reason you require the neutron-to-nova notifications, then you must
configure nova authentication and the API URL in neutron.conf.

You will find in neutron.conf a section named [nova]; under it you will see
all the configuration options you can use to configure the notification driver
that notifies nova of status or data changes on ports.
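For illustration, here is a hedged sketch of those settings in a Liberty-era neutron.conf (option names varied across releases; the URL and credentials are placeholders, not values from this thread):

```ini
# neutron.conf -- credentials neutron uses when notifying nova
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS
region_name = RegionOne
```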


On Tuesday, 1 November 2016, Davíð Örn Jóhannsson  wrote:

> I’m working on setting up a OpenStack Liberty development env on Ubuntu
> 14.04. At the present I have 3 nodes, Controller, Network and Compute. I am
> up to the place where I’m trying to spin up an instance where
> neutron-server seems to fail to notify nova because of an authentication
> error against keystone , I’ve been struggling for some time to figure out
> the cause of this and was hoping that some could lend me more experienced
> eyes
>
> Controller node /etc/neutron/neutron.conf http://paste.openstack.org/show/
> 587547/
> Openstack endpoint list http://paste.openstack.org/show/587548/
>
> 2016-11-01 12:42:04.067 15888 DEBUG keystoneclient.session [-] RESP: [300]
> Content-Length: 635 Vary: X-Auth-Token Connection: keep-alive Date: Tue, 01
> Nov 2016 12:42:04 GMT Content-Type: application/json X-Distribution: Ubuntu
> RESP BODY: {"versions": {"values": [{"status": "stable", "updated":
> "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json",
> "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4",
> "links": [{"href": "http://controller-01:35357/v3/", "rel": "self"}]},
> {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types":
> [{"base": "application/json", "type": 
> "application/vnd.openstack.identity-v2.0+json"}],
> "id": "v2.0", "links": [{"href": "http://controller-01:35357/v2.0/",
> "rel": "self"}, {"href": "http://docs.openstack.org/", "type":
> "text/html", "rel": "describedby"}]}]}}
>  _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/
> session.py:215
> 2016-11-01 12:42:04.067 15888 DEBUG keystoneclient.auth.identity.v3.base
> [-] Making authentication request to http://controller-01:35357/v3/
> auth/tokens get_auth_ref /usr/lib/python2.7/dist-
> packages/keystoneclient/auth/identity/v3/base.py:188
> 2016-11-01 12:42:04.091 15888 DEBUG keystoneclient.session [-] Request
> returned failure status: 401 request /usr/lib/python2.7/dist-
> packages/keystoneclient/session.py:400
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova [-] Failed to
> notify nova on events: [{'status': 'completed', 'tag':
> u'bf092fd0-51ba-4fbf-8d3d-9c3004b3811f', 'name': 'network-vif-plugged',
> 'server_uuid': u'24616ae2-a6e4-4843-ade6-357a9ce80bc0'}]
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova Traceback (most
> recent call last):
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py", line 248,
> in send_events
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova
> batched_events)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/novaclient/v2/contrib/server_external_events.py",
> line 39, in create
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova
> return_raw=True)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 169, in
> _create
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova _resp, body
> = self.api.client.post(url, body=body)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 176,
> in post
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova return
> self.request(url, 'POST', **kwargs)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 91, in
> request
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova **kwargs)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 206,
> in request
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova resp =
> super(LegacyJsonAdapter, self).request(*args, **kwargs)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 95, in
> request
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova return
> self.session.request(url, method, **kwargs)
> 2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File
> 

Re: [Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Tuesday 1st November 2100 UTC

2016-11-03 Thread Stig Telfer

> On 3 Nov 2016, at 16:21, Álvaro López García  wrote:
> 
> On 01 Nov 2016 (12:59), Stig Telfer wrote:
>> Hi All - 
> 
> Hi All,
> 
>> We have a Scientific WG IRC meeting today at 2100 UTC on channel 
>> #openstack-meeting
>> 
>> The agenda is available here[1] and full IRC meeting details are here[2].
>> 
>> We’d like to follow up on the events at Barcelona, and plan activity areas 
>> for the Ocata design cycle.
>> 
>> If anyone would like to add an item for discussion on the agenda, it is also 
>> available in an etherpad[3].
> 
> I could not attend the meeting (2100 UTC meetings are hard for me) and I
> am afraid I will not be able to join next week either.
> 
> Nevertheless, I have read the minutes and the irc logs and you can count
> me on the (identity) federation part.

Hi Alvaro - thank you for volunteering, that’s great news!

Best wishes,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [cinder][stable] Deprecation notice: Driver for NetApp Data ONTAP operating in 7-mode

2016-11-03 Thread Ravi, Goutham
Developers and Operators,

The NetApp unified driver in Cinder currently provides integration for two 
major generations of the ONTAP operating system: the current “clustered” ONTAP 
and the legacy 7-mode. NetApp’s “full support” for 7-mode ended in August of 
2015 and the current “limited support” period will end in February of 2017 [1].

In accordance with community policy [2], we are initiating the deprecation 
process for the 7-mode components of the Cinder NetApp unified driver set to 
conclude with their removal in the Queens release. This will apply to all three 
protocols currently supported in this driver: iSCSI, FC and NFS.

What is being deprecated: Cinder drivers for NetApp Data ONTAP 7-mode NFS, 
iSCSI, FC
Period of deprecation: 7-mode drivers will be around in stable/ocata and 
stable/pike and will be removed in the Queens release (All milestones of this 
release)
What should users/operators do: Follow the recommended migration path to 
upgrade to Clustered Data ONTAP [3] or get in touch with your NetApp support 
representative.

The cinder change for deprecation: [4]

[1] 
https://mysupport.netapp.com/info/web/ECMP1147223.html#_Data%20ONTAP%20Operating%20System%20Version%20Support%20Policy
[2] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
[3] https://mysupport.netapp.com/info/web/ECMP1658253.html
[4] https://review.openstack.org/#/c/393450/


Thanks,
Goutham
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Updating oschecks

2016-11-03 Thread Lars Kellogg-Stedman
I've recently started working with the oscheck scripts in the
osops-tools-monitoring project [1], and I found that in their current
form they didn't quite meet my needs.  In particular:

- They don't share a common set of authentication options
- They can't read credentials from files
- Many of them require a priori configuration of the openstack
  environment, which means they can't be used to health check a new
  deployment

I've spent a little time recently prototyping a new set of health
check scripts, available here:

  https://github.com/larsks/oschecks

I'd like to emphasize that these *are not* currently meant as a usable
replacement for the existing checks; they were written to prototype (a) the
way I'd like the user interface to work and (b) the way I'd like
things like credentials to work.

This project offers the following features:

- They use os_client_config for managing credentials, so they can be
  configured from a clouds.yaml file, or the environment, or the
  command line, and it all Just Works.

- Authentication is handled in just one place in the code for all the
  checks.

- The checks are extensible (using the cliff framework), which means
  that checks with different sets of requirements can be
  packaged/installed separately.  See, for example:

https://github.com/larsks/oschecks_systemd

- For every supported service there is a simple "can I make an
  authenticated request to the API successfully" check that does not
  require any pre-existing resources to be created.

- They are (hopefully) structured such that it is relatively easy to
  write new checks that follow the same syntax and behavior of the
  other checks.
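
Checks like these conventionally follow the Nagios plugin contract (exit 0 for OK, 1 for warning, 2 for critical, plus a one-line status message). A minimal, library-free sketch of that pattern (the names and the check body here are illustrative, not the actual oschecks code):

```python
import sys

# Nagios plugin exit codes
OK, WARNING, CRITICAL = 0, 1, 2

def run_check(name, check):
    """Run a callable and map its outcome onto Nagios exit semantics."""
    try:
        detail = check()
    except Exception as exc:  # any failure marks the service unhealthy
        print("%s CRITICAL: %s" % (name, exc))
        return CRITICAL
    print("%s OK: %s" % (name, detail))
    return OK

def check_keystone_api():
    # Illustrative stand-in for "make one authenticated API request";
    # a real check would load credentials via os_client_config and
    # issue a single token request.
    return "authenticated request succeeded"

if __name__ == "__main__":
    sys.exit(run_check("keystone", check_keystone_api))
```

Keeping the exit-code mapping in one helper is what lets authentication live in a single place while each check stays a small callable.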

If people think this is a useful way of implementing these health
checks, I would be happy to do the work necessary to make them a mostly
drop-in replacement for the existing checks (adding checks that are
currently missing, and adding appropriate console-script entrypoints to
match the existing names, etc).

I would appreciate any feedback.  Sorry for the long message, and thanks
for taking the time to read this far!

[1]: 
https://github.com/openstack/osops-tools-monitoring/tree/master/monitoring-for-openstack/oschecks

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Tuesday 1st November 2100 UTC

2016-11-03 Thread Álvaro López García
On 01 Nov 2016 (12:59), Stig Telfer wrote:
> Hi All - 

Hi All,

> We have a Scientific WG IRC meeting today at 2100 UTC on channel 
> #openstack-meeting
> 
> The agenda is available here[1] and full IRC meeting details are here[2].
> 
> We’d like to follow up on the events at Barcelona, and plan activity areas 
> for the Ocata design cycle.
> 
> If anyone would like to add an item for discussion on the agenda, it is also 
> available in an etherpad[3].

I could not attend the meeting (2100 UTC meetings are hard for me) and I
am afraid I will not be able to join next week either.

Nevertheless, I have read the minutes and the irc logs and you can count
me on the (identity) federation part.

Cheers,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/nskype: aloga.csic
39005 Santander (SPAIN)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing default token format

2016-11-03 Thread Rochelle Grober
A blog post on the OpenStack site might be good. Superuser? There are folks 
reading this who can help.

Sent from HUAWEI AnyOffice
From:Lance Bragstad
To:OpenStack Development Mailing List (not for usage 
questions),openstack-operators@lists.openstack.org,
Date:2016-11-03 08:11:20
Subject:Re: [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing 
default token format

I totally agree with communicating this the best we can. I'm adding the 
operator list to this thread to increase visibility.

If there are any other methods folks think of for getting the word out, outside 
of what we've already done (release notes, email threads, etc.), please let me 
know. I'd be happy to drive those communications.

On Thu, Nov 3, 2016 at 9:45 AM, Alex Schultz wrote:
Hey Steve,

On Thu, Nov 3, 2016 at 8:29 AM, Steve Martinelli wrote:
> Thanks Alex and Emilien for the quick answer. This was brought up at the
> summit by Adam, but I don't think we have to prevent keystone from changing
> the default. TripleO and Puppet can still specify UUID as their desired
> token format; it is not deprecated or slated for removal. Agreed?
>

My email was not to tell you to stop. I was just letting you know that
your change does not affect the puppet modules because we define our
default as UUID.  It was just as a heads up to others on this email
that this change should not affect anyone consuming the puppet modules
because our default is still UUID and will be even after keystone's
default changes.
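
For operators who want deployments pinned regardless of which default ships, the relevant operator-facing setting is keystone.conf's token provider; a minimal sketch (fernet additionally requires a populated key repository, typically created with keystone-manage fernet_setup):

```ini
# keystone.conf -- select the token format explicitly instead of
# relying on the project default (which is moving from uuid to fernet)
[token]
provider = fernet

[fernet_tokens]
# must exist and be populated, e.g. by: keystone-manage fernet_setup
key_repository = /etc/keystone/fernet-keys/
```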

Thanks,
-Alex

> On Thu, Nov 3, 2016 at 10:23 AM, Alex Schultz wrote:
>>
>> Hey Steve,
>>
>> On Thu, Nov 3, 2016 at 8:11 AM, Steve Martinelli wrote:
>> > As a heads up to some of keystone's consuming projects, we will be
>> > changing
>> > the default token format from UUID to Fernet. Many patches have merged
>> > to
>> > make this possible [1]. The last 2 that you probably want to look at are
>> > [2]
>> > and [3]. The first flips a switch in devstack to make fernet the
>> > selected
>> > token format, the second makes it default in Keystone itself.
>> >
>> > [1] https://review.openstack.org/#/q/topic:make-fernet-default
>> > [2] DevStack patch: https://review.openstack.org/#/c/367052/
>> > [3] Keystone patch: https://review.openstack.org/#/c/345688/
>> >
>>
>> Thanks for the heads up. In puppet openstack we had already
>> anticipated this and attempted to do the same for the
>> puppet-keystone[0] module as well.  Unfortunately after merging it, we
>> found that tripleo wasn't yet prepared to handle the HA implementation
>> of fernet tokens so we had to revert it[1].  This shouldn't impact
>> anyone currently consuming puppet-keystone as we define uuid as the
>> default for now. Our goal is to do something similar this cycle but
>> there needs to be some further work in the downstream consumers to
>> either define their expected default (of uuid) or support fernet key
>> generation correctly.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/389322/
>> [1] https://review.openstack.org/#/c/392332/
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
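For deployments that want to pin the token format rather than inherit the new default, the relevant knob is Keystone's [token] provider option. A minimal keystone.conf fragment (a sketch based on Newton-era option names; check the docs for your release):

```ini
[token]
# Keep the old default explicitly ...
provider = uuid

# ... or opt in to the new default. Note that fernet additionally
# requires a populated key repository on every node
# (keystone-manage fernet_setup), which is the HA wrinkle discussed above.
# provider = fernet
```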


[Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-03 Thread Alex Schultz
Hey everyone,

Puppet 3 is reaching its end of life at the end of this year[0].
Because of this, we are planning to drop official Puppet 3 support
as part of the Ocata cycle.  While we are not currently planning any
large-scale conversion of the code to Puppet 4-only syntax, we may
allow in some minor changes that break backwards compatibility.  Based
on feedback we've received, it seems that most people who may still be
using Puppet 3 are on older (< Newton) versions of the modules.  Those
modules will remain Puppet 3.x compatible, but starting with Ocata,
Puppet 4 is the target version.

If anyone has any concerns or issues around this, please let us know.

Thanks,
-Alex

[0] https://puppet.com/misc/puppet-enterprise-lifecycle

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing default token format

2016-11-03 Thread Lance Bragstad
I totally agree with communicating this the best we can. I'm adding the
operator list to this thread to increase visibility.

If there are any other methods folks think of for getting the word out,
outside of what we've already done (release notes, email threads, etc.),
please let me know. I'd be happy to drive those communications.

On Thu, Nov 3, 2016 at 9:45 AM, Alex Schultz  wrote:

> Hey Steve,
>
> On Thu, Nov 3, 2016 at 8:29 AM, Steve Martinelli wrote:
> > Thanks Alex and Emilien for the quick answer. This was brought up at the
> > summit by Adam, but I don't think we have to prevent keystone from
> changing
> > the default. TripleO and Puppet can still specify UUID as their desired
> > token format; it is not deprecated or slated for removal. Agreed?
> >
>
> My email was not to tell you to stop. I was just letting you know that
> your change does not affect the puppet modules because we define our
> default as UUID.  It was just as a heads up to others on this email
> that this change should not affect anyone consuming the puppet modules
> because our default is still UUID and will be even after keystone's
> default changes.
>
> Thanks,
> -Alex
>
> > On Thu, Nov 3, 2016 at 10:23 AM, Alex Schultz wrote:
> >>
> >> Hey Steve,
> >>
> >> On Thu, Nov 3, 2016 at 8:11 AM, Steve Martinelli <s.martine...@gmail.com> wrote:
> >> > As a heads up to some of keystone's consuming projects, we will be
> >> > changing
> >> > the default token format from UUID to Fernet. Many patches have merged
> >> > to
> >> > make this possible [1]. The last 2 that you probably want to look at
> are
> >> > [2]
> >> > and [3]. The first flips a switch in devstack to make fernet the
> >> > selected
> >> > token format, the second makes it default in Keystone itself.
> >> >
> >> > [1] https://review.openstack.org/#/q/topic:make-fernet-default
> >> > [2] DevStack patch: https://review.openstack.org/#/c/367052/
> >> > [3] Keystone patch: https://review.openstack.org/#/c/345688/
> >> >
> >>
> >> Thanks for the heads up. In puppet openstack we had already
> >> anticipated this and attempted to do the same for the
> >> puppet-keystone[0] module as well.  Unfortunately after merging it, we
> >> found that tripleo wasn't yet prepared to handle the HA implementation
> >> of fernet tokens so we had to revert it[1].  This shouldn't impact
> >> anyone currently consuming puppet-keystone as we define uuid as the
> >> default for now. Our goal is to do something similar this cycle but
> >> there needs to be some further work in the downstream consumers to
> >> either define their expected default (of uuid) or support fernet key
> >> generation correctly.
> >>
> >> Thanks,
> >> -Alex
> >>
> >> [0] https://review.openstack.org/#/c/389322/
> >> [1] https://review.openstack.org/#/c/392332/
> >>
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
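The HA concern behind the puppet-keystone revert comes down to fernet tokens being validatable only by nodes that hold the shared symmetric keys; there is no database record to fall back on, as there was with UUID tokens. A toy sketch with the `cryptography` library (an illustration of the key-distribution requirement, not Keystone's actual token format):

```python
from cryptography.fernet import Fernet, InvalidToken

# One key repository, shared by all keystone nodes in the HA cluster.
shared_key = Fernet.generate_key()

# Node A issues a token: an encrypted, self-describing payload.
token = Fernet(shared_key).encrypt(b"user=demo project=admin")

# Node B, holding the same key, can validate it without touching a DB.
assert Fernet(shared_key).decrypt(token) == b"user=demo project=admin"

# A node whose key repository was never synced cannot.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("unsynced node rejects the token")
```

Flipping the provider default without also distributing keys to every node (keystone-manage fernet_setup plus some rotation/sync tooling) therefore breaks multi-node validation, which is what the revert avoided.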


[Openstack-operators] [telecom-nfv] Meeting #12 results

2016-11-03 Thread Curtis
Hi All,

I just thought I'd type up some of the things we discussed at the last
meeting [1].

First, the meeting was well attended, with six people and maybe a couple
of lurkers in the background. :) The good attendance was thanks to
our session at the summit, which is great.

The entire meeting was spent on discussing what mid to long term
project we could work on. My impression was that we had pretty good
consensus on doing some testing and benchmarking on a "vanilla" or
"minimal" NFVi reference architecture (which we would have to define),
perhaps using neutron-sfc as a component. But we ran out of time
before we could ensure we had complete agreement, so that will be the
first item on the next meeting agenda. Certainly it was a good
discussion that needed to happen. We also noted other teams and
projects, such as OPNFV and the performance working group, that we
could potentially work with.

Thanks to all who attended and who were willing to make their ideas
and opinions known so that we can start working on a project.

Thanks,
Curtis.

[1]: 
http://eavesdrop.openstack.org/meetings/operators_telco_nfv/2016/operators_telco_nfv.2016-11-02-15.03.html

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Single day OpenStack conference in Canberra, Australia

2016-11-03 Thread Michael Still
Heya,

I've been asked to let you all know about a single day OpenStack conference
in Canberra that's coming up in a few weeks. The event is being run by the
OpenStack Foundation along with the various meetup organizers.

The conference is on Monday 14 November and has two tracks -- a management
track and a technical one. This is a follow-on event to one that ran in
Sydney a few months ago and was very well received, so I have every belief
this event will be excellent too.

You can find out more about the event at:

http://australiaday.openstack.org.au/

Cheers,
Michael

-- 
Rackspace Australia
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators