Re: [Openstack-operators] need feedback about Glance image 'visibility' migration in Ocata

2016-11-16 Thread Sam Morrison

> On 17 Nov. 2016, at 3:49 pm, Brian Rosmaita  
> wrote:
> 
> Ocata workflow:  (1) create an image with default visibility, (2) change
> its visibility to 'shared', (3) add image members

I’m unsure why this can’t be done in two steps: when someone adds an image member to 
a ‘private’ image, the visibility could change to ‘shared’ automatically.
It just seems like an extra step for no reason?

Sam

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-operators][neutron] neutron-plugin-openvswitch-agent and neutron-openvswitch-agent

2016-11-16 Thread Akshay Kumar Sanghai
Hi,
I installed a Kilo version before and now I am installing Mitaka. The
installation document for Mitaka uses the Linux bridge agent and not OVS. For
Kilo, it says to install neutron-plugin-openvswitch-agent. For Mitaka, with
Linux bridge, it says to install neutron-linuxbridge-agent.

Is there any difference between neutron-plugin-openvswitch-agent and
neutron-openvswitch-agent?

Thanks
Akshay


Re: [Openstack-operators] sync power states and externally shut off VMs

2016-11-16 Thread Kris G. Lindgren
As a follow-up on this: you can configure the host so that, on shutdown and 
start-up, all of the VMs are shut down and started up automatically.

To do this you need to do a few things:

1.) Ensure that nova-compute is configured to stop before 
libvirt-guests.  Make sure libvirt-guests is enabled.

2.) Allow libvirt-guests to shut down the VMs that are running (I recommend 
avoiding suspending the VMs, as this will lead to in-VM clock sync issues):  
ON_SHUTDOWN=shutdown

3.) Ensure that libvirt-guests is configured with: ON_BOOT=ignore

4.) Set [DEFAULT] resume_guests_state_on_host_boot=true in nova.conf
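
Pulling the four steps together, the configuration might look roughly like this 
on a systemd-based host (file paths and unit names vary by distribution, so 
treat this as a sketch rather than exact syntax):

```
# /etc/sysconfig/libvirt-guests (or /etc/default/libvirt-guests on Debian/Ubuntu)
ON_BOOT=ignore
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300

# systemd drop-in so nova-compute orders after libvirt-guests:
# /etc/systemd/system/openstack-nova-compute.service.d/ordering.conf
[Unit]
After=libvirt-guests.service

# /etc/nova/nova.conf
[DEFAULT]
resume_guests_state_on_host_boot = true
```

Because systemd stops units in the reverse of start order, `After=libvirt-guests.service` 
on nova-compute means nova-compute stops first at shutdown. Remember to 
`systemctl enable libvirt-guests` and `systemctl daemon-reload` afterwards.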

This config will gracefully shut down the running VMs via a normal host 
shutdown, preserving the running state in nova.  Nova will then bring the 
VMs online when nova-compute starts on host start-up.  This also works 
with ungraceful power-downs.  The key is that nova needs to be the one 
that starts the VMs, because libvirt-guests will not be able to start 
them successfully: neutron needs to plug the VIFs for the VMs.  As long 
as the state of the VM is “running” in the DB, this config will work.

NB: if you do chassis swaps and for some reason the OS comes up in a config 
that no longer works, all of the VMs will go to ERROR.  You will need to fix 
whatever issue prevented the VMs from starting, then manually reset the state 
and start the VMs.
Some examples we have seen: VT-x extensions disabled on the new server; the 
replacement server has different CPUs and no longer matches the NUMA config; 
or the new processors do not have the same CPU extensions as the old 
processors.

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Adam Thurlow 
Date: Wednesday, November 16, 2016 at 9:58 PM
To: "openstack-operators@lists.openstack.org" 

Subject: Re: [Openstack-operators] sync power states and externally shut off VMs


If you are interested in manually mucking around with local virsh domain 
states, and you don't want nova to interfere, you can just stop the local 
nova-compute service and it won't be doing any syncing. Once you get those 
instances back into their desired state, you can restart nova-compute and it 
won't be any wiser.

You can obviously shoot yourself in the foot using this method, but I can 
understand in some cases that large hammers and manual virsh commands are 
necessary.
Cheers!
On 2016-11-16 17:32, Mohammed Naser wrote:
Typically, you should not be managing your VMs by virsh. After a power outage, 
I would recommend sending a start API call to instances that are housed on that 
specific hypervisor

Sent from my iPhone

On Nov 16, 2016, at 4:26 PM, Gustavo Randich
<gustavo.rand...@gmail.com> wrote:
When a VM is shutdown without using nova API (kvm process down, libvirt failed 
to start instance on host boot, etc.), Openstack "freezes" the shutdown power 
state in the DB, and then re-applies it if the VM is not started via API, e.g.:

# virsh shutdown 

[ sync power states -> stop instance via API ], because hypervisor rules 
("power_state is always updated from hypervisor to db")

# virsh startup 

[ sync power states -> stop instance via API ], because database rules


I understand this behaviour is "by design", but I'm confused about the 
asymmetry: if VM is shutdown without using nova API, should I not be able to 
start it up again without nova API?

This is a common scenario in power outages or failures external to Openstack, 
when VMs fail to start and we need to start them up again using virsh.

Thanks!



Re: [Openstack-operators] sync power states and externally shut off VMs

2016-11-16 Thread Adam Thurlow
If you are interested in manually mucking around with local virsh domain 
states, and you don't want nova to interfere, you can just stop the 
local nova-compute service and it won't be doing any syncing. Once you 
get those instances back into their desired state, you can restart 
nova-compute and it won't be any wiser.


You can obviously shoot yourself in the foot using this method, but I 
can understand in some cases that large hammers and manual virsh 
commands are necessary.
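
A minimal sketch of that sequence (service and unit names vary by distro; this 
assumes systemd and the libvirt driver):

```
systemctl stop openstack-nova-compute      # stop the power-state syncing
virsh start <domain>                       # fix up domains by hand
systemctl start openstack-nova-compute     # resume normal operation
```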


Cheers!

On 2016-11-16 17:32, Mohammed Naser wrote:
Typically, you should not be managing your VMs by virsh. After a power 
outage, I would recommend sending a start API call to instances that 
are housed on that specific hypervisor


Sent from my iPhone

On Nov 16, 2016, at 4:26 PM, Gustavo Randich
<gustavo.rand...@gmail.com> wrote:


When a VM is shutdown without using nova API (kvm process down, 
libvirt failed to start instance on host boot, etc.), Openstack 
"freezes" the shutdown power state in the DB, and then re-applies it 
if the VM is not started via API, e.g.:


# virsh shutdown 

[ sync power states -> stop instance via API ], because
hypervisor rules ("power_state is always updated from hypervisor
to db")

# virsh startup 

[ sync power states -> stop instance via API ], because database
rules



I understand this behaviour is "by design", but I'm confused about 
the asymmetry: if VM is shutdown without using nova API, should I not 
be able to start it up again without nova API?


This is a common scenario in power outages or failures external to 
Openstack, when VMs fail to start and we need to start them up again 
using virsh.


Thanks!



[Openstack-operators] need feedback about Glance image 'visibility' migration in Ocata

2016-11-16 Thread Brian Rosmaita
Hello Operators,

The long-awaited implementation of "community images" in Glance [0] is
just around the corner, but before we can merge it, we need to make a
decision about how the database migration of the image 'visibility' field
will work.  We could use your help.

Here's what's at issue:

Up through the Newton release of the Images API v2, there are two values
for 'visibility':
* public
* private

As you're aware, an end user can "share" an image by adding members to it.
 Currently, this does not change the visibility of the image: it remains
'private' even though other users have access to it, which is kind of
counterintuitive.  As part of the community images implementation in
Ocata, the range of values for 'visibility' is changed to:
* public
* private
* shared
* community

In Newton, an image had to have 'private' visibility in order for member
operations to be performed on it.  In Ocata, this is being changed so that
an image must have 'shared' visibility in order to be subject to member
operations.  Now, there's a bit of weirdness here in that an image with
'shared' visibility that doesn't have any members on it is not actually
accessible to anyone, and hence maybe not "shared"; but I think a better
way to look at it is that any image with 'shared' visibility is in fact
shared with all the image members it has (which could be zero).
(Editorial comment: I think this is a definite improvement over images
with 'private' visibility that aren't actually private.)

So what's the problem?  Well, our original proposal was that an image
would have a default visibility of 'private'.  Some concerned users
brought to our attention that this would introduce a backward
incompatibility into the image sharing workflow:

Diablo to Newton workflow: (1) create an image with default visibility,
(2) share it by adding members

Ocata workflow:  (1) create an image with default visibility, (2) change
its visibility to 'shared', (3) add image members
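
Concretely, the two workflows look something like this (the CLI flags here are 
illustrative and may not match the final client syntax):

```
# Diablo-to-Newton workflow (two steps):
glance image-create --name my-image --file my.img      # default visibility
glance member-create <image-id> <member-project-id>    # image stays 'private'

# Proposed Ocata workflow (three steps):
glance image-create --name my-image --file my.img      # default visibility
glance image-update <image-id> --visibility shared     # new required step
glance member-create <image-id> <member-project-id>
```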

During development, we realized that if the default visibility were
'shared', then everything would behave as it does now.  If an end user
didn't add any members, the image would effectively be "private" since no
one other than the owner would have access to it; and if a user wanted to
add members, they could be immediately added just as they are in the
Diablo-to-Newton workflow.

Further, given that there are OpenStack clouds using both the Image v1 and
v2 APIs, we need to look at the situation with respect to v1 (which was
deprecated in Newton, but is still supported in Ocata).  The v1 API
doesn't have any concept of 'visibility'; all it has is the boolean
'is_public'.  If an image is "born" with 'shared' visibility in the
database, is_public will be False in the v1 API, as it is now, and
further, the image can be shared immediately via the v1 API, as it can
now.  So this change is backward compatible for v1.  At the same time, the
visibility rules for the v2 API can be respected if a user manipulates the
same image using the v2 API.  End users who solely use the v1 API can be
blissfully unaware that an image's visibility is "really" 'shared', but
this will be appropriately visible to users of the v2 API.

What if a v2 user sets the visibility of an image to 'private'?  If some
other user in that project tries to share that image via the v1 API, the
v1 user will get a 409 (Conflict) response.  This seems appropriate,
because by changing the visibility to 'private', the v2 user explicitly
put the image in a state where it cannot be shared, and we don't want
someone to do an end-run around that simply by using the v1 API.  Is this
a problem for v1 users?  I'd argue it's not.  If the users in a project
*only* use v1, this issue will never arise.  It can only occur if some
user in the project is using the v2 API, and in that case, that user will
have the ability to change the image's visibility back to 'shared' if
sharing really is desired.
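
Played out with CLI calls, the interaction would look something like this 
(illustrative syntax only):

```
# a v2 user makes the image private:
glance --os-image-api-version 2 image-update <image-id> --visibility private

# a v1 user in the same project then tries to share it:
glance --os-image-api-version 1 member-create <image-id> <member-project-id>
# => 409 Conflict: the image cannot be shared until its visibility
#    is changed back to 'shared' via the v2 API
```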

Thus, a default visibility of 'shared' will work well for mixed v1/v2
deployments.

Note that we're just talking about the default value for 'visibility'; if
an end user specifies a particular visibility as part of the image-create
request, it will be honored (assuming the user has appropriate permissions
to create an image with that visibility).  The key point here is that even
though the default visibility *value* will be different than it was up
through Newton, the default visibility *behavior* in Ocata will be
identical.  We think this will make for an easy transition for users.

Sounds great, so what's the problem?  Well, as part of the upgrade to
Glance Ocata, the database will have to be migrated so that all images
have a specific visibility.  (What's really in the database up through
Newton, believe it or not, is the boolean 'is_public'.)  All images with
is_public == True will be migrated to visibility == 'public'.  Prior to
Ocata, community images don't exist, so no images will be migrated to
visibility == 'community'.  But wh

Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread Sam Morrison
Is anybody using http://docs.openstack.org/developer/keystonemiddleware/audit.html ?
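
For context, it is enabled by adding the audit filter to the service's paste 
pipeline, roughly like this (following the keystonemiddleware docs; the exact 
pipeline contents differ per service):

```
# /etc/nova/api-paste.ini (sketch)
[filter:audit]
paste.filter_factory = keystonemiddleware.audit:filter_factory
audit_map_file = /etc/nova/api_audit_map.conf

# then insert 'audit' into the pipeline after 'keystonecontext', e.g.:
# pipeline = ... authtoken keystonecontext audit osapi_compute_app_v21
```

It emits CADF-format notifications for each API call, which is close to what 
proper audit logging needs.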




> On 17 Nov. 2016, at 11:51 am, Kris G. Lindgren  wrote:
> 
> I need to do a deeper dive on audit logging. 
> 
> However, we have a requirement that when someone changes a security group, 
> we log what the previous security group was, what the new security group 
> is, and who changed it.  I don’t know if this is specific to our crazy 
> security people or if other security teams want this as well.  I am sure I 
> can think of others.
> 
> 
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
> 
> On 11/16/16, 3:29 PM, "Tom Fifield"  wrote:
> 
>Hi Ops,
> 
>Was chatting with Department of Defense in Australia the other day, and 
>one of their pain points is Audit Logging. Some bits of OpenStack just 
>don't leave enough information for proper audit. So, thought it might be 
>a good idea to gather people who are interested to brainstorm how to get 
>it to a good level for all :)
> 
>Does your cloud need good audit logging? What do you wish was there at 
>the moment, but isn't?
> 
> 
>Regards,
> 
> 
>Tom
> 



Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread Kris G. Lindgren
I need to do a deeper dive on audit logging. 

However, we have a requirement that when someone changes a security group, 
we log what the previous security group was, what the new security group is, 
and who changed it.  I don’t know if this is specific to our crazy security 
people or if other security teams want this as well.  I am sure I can think 
of others.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 11/16/16, 3:29 PM, "Tom Fifield"  wrote:

Hi Ops,

Was chatting with Department of Defense in Australia the other day, and 
one of their pain points is Audit Logging. Some bits of OpenStack just 
don't leave enough information for proper audit. So, thought it might be 
a good idea to gather people who are interested to brainstorm how to get 
it to a good level for all :)

Does your cloud need good audit logging? What do you wish was there at 
the moment, but isn't?


Regards,


Tom



Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread Nematollah Bidokhti
Hi Tom,

It would be great if the logs were formatted such that they could convey the 
following:
- Fault classification/types
- Potential root causes
- Latest state before the failure/crash

The above can help with automation and self-healing. This is part of our Fault 
Genes WG mission to get to a consistent log structure for our fault management 
policies.

Thanks,
Nemat


-Original Message-
From: Tom Fifield [mailto:t...@openstack.org] 
Sent: Wednesday, November 16, 2016 2:29 PM
To: OpenStack Operators
Subject: [Openstack-operators] Audit Logging - Interested? What's missing?

Hi Ops,

Was chatting with Department of Defense in Australia the other day, and one of 
their pain points is Audit Logging. Some bits of OpenStack just don't leave 
enough information for proper audit. So, thought it might be a good idea to 
gather people who are interested to brainstorm how to get it to a good level 
for all :)

Does your cloud need good audit logging? What do you wish was there at the 
moment, but isn't?


Regards,


Tom



Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread David Medberry
We've added ELK to our cloud (but of course it largely relies on the
existing logging.) There will be a talk about this next month at OpenStack
Days Mountain West in SLC. I can provide a link to the slides after that
occurs.

Our use of ELK is around added security, so ties in nicely with this use
case.

On Wed, Nov 16, 2016 at 3:29 PM, Tom Fifield  wrote:

> Hi Ops,
>
> Was chatting with Department of Defense in Australia the other day, and
> one of their pain points is Audit Logging. Some bits of OpenStack just
> don't leave enough information for proper audit. So, thought it might be a
> good idea to gather people who are interested to brainstorm how to get it
> to a good level for all :)
>
> Does your cloud need good audit logging? What do you wish was there at the
> moment, but isn't?
>
>
> Regards,
>
>
> Tom
>


Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread David Medberry
rather, here:
https://openstackmountainwest2016.sched.org/event/8AkE/osdef-devops-driven-approach-to-securing-a-cloud-infrastructure-using-bigdata?iframe=no&w=&sidebar=yes&bg=no

On Wed, Nov 16, 2016 at 5:07 PM, David Medberry 
wrote:

> more info here:
> http://www.openstackdaysmw.com/schedule/
>
> On Wed, Nov 16, 2016 at 5:06 PM, David Medberry 
> wrote:
>
>> We've added ELK to our cloud (but of course it largely relies on the
>> existing logging.) There will be a talk about this next month at OpenStack
>> Days Mountain West in SLC. I can provide a link to the slides after that
>> occurs.
>>
>> Our use of ELK is around added security, so ties in nicely with this use
>> case.
>>
>> On Wed, Nov 16, 2016 at 3:29 PM, Tom Fifield  wrote:
>>
>>> Hi Ops,
>>>
>>> Was chatting with Department of Defense in Australia the other day, and
>>> one of their pain points is Audit Logging. Some bits of OpenStack just
>>> don't leave enough information for proper audit. So, thought it might be a
>>> good idea to gather people who are interested to brainstorm how to get it
>>> to a good level for all :)
>>>
>>> Does your cloud need good audit logging? What do you wish was there at
>>> the moment, but isn't?
>>>
>>>
>>> Regards,
>>>
>>>
>>> Tom
>>>


Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread David Medberry
more info here:
http://www.openstackdaysmw.com/schedule/

On Wed, Nov 16, 2016 at 5:06 PM, David Medberry 
wrote:

> We've added ELK to our cloud (but of course it largely relies on the
> existing logging.) There will be a talk about this next month at OpenStack
> Days Mountain West in SLC. I can provide a link to the slides after that
> occurs.
>
> Our use of ELK is around added security, so ties in nicely with this use
> case.
>
> On Wed, Nov 16, 2016 at 3:29 PM, Tom Fifield  wrote:
>
>> Hi Ops,
>>
>> Was chatting with Department of Defense in Australia the other day, and
>> one of their pain points is Audit Logging. Some bits of OpenStack just
>> don't leave enough information for proper audit. So, thought it might be a
>> good idea to gather people who are interested to brainstorm how to get it
>> to a good level for all :)
>>
>> Does your cloud need good audit logging? What do you wish was there at
>> the moment, but isn't?
>>
>>
>> Regards,
>>
>>
>> Tom
>>


[Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread Tom Fifield

Hi Ops,

Was chatting with Department of Defense in Australia the other day, and 
one of their pain points is Audit Logging. Some bits of OpenStack just 
don't leave enough information for proper audit. So, thought it might be 
a good idea to gather people who are interested to brainstorm how to get 
it to a good level for all :)


Does your cloud need good audit logging? What do you wish was there at 
the moment, but isn't?



Regards,


Tom



Re: [Openstack-operators] sync power states and externally shut off VMs

2016-11-16 Thread Mohammed Naser
Typically, you should not be managing your VMs with virsh. After a power outage, 
I would recommend sending a start API call to the instances that are housed on 
that specific hypervisor.
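
A sketch of that recovery step (standard client commands; the hypervisor 
hostname is a made-up example):

```
# list instances housed on the affected hypervisor (admin credentials):
openstack server list --all-projects --host compute-01.example.com

# then start each one through the API rather than via virsh:
openstack server start <server-id>
```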

Sent from my iPhone

> On Nov 16, 2016, at 4:26 PM, Gustavo Randich  
> wrote:
> 
> When a VM is shutdown without using nova API (kvm process down, libvirt 
> failed to start instance on host boot, etc.), Openstack "freezes" the 
> shutdown power state in the DB, and then re-applies it if the VM is not 
> started via API, e.g.:
> 
> # virsh shutdown 
> 
> [ sync power states -> stop instance via API ], because hypervisor rules 
> ("power_state is always updated from hypervisor to db")
> 
> # virsh startup 
> 
> [ sync power states -> stop instance via API ], because database rules
> 
> 
> I understand this behaviour is "by design", but I'm confused about the 
> asymmetry: if VM is shutdown without using nova API, should I not be able to 
> start it up again without nova API?
> 
> This is a common scenario in power outages or failures external to Openstack, 
> when VMs fail to start and we need to start them up again using virsh.
> 
> Thanks!
> 


[Openstack-operators] sync power states and externally shut off VMs

2016-11-16 Thread Gustavo Randich
When a VM is shut down without using the nova API (kvm process down, libvirt
failed to start the instance on host boot, etc.), OpenStack "freezes" the
shutdown power state in the DB, and then re-applies it if the VM is not
started via the API, e.g.:

# virsh shutdown <domain>

[ sync power states -> stop instance via API ], because the hypervisor rules
("power_state is always updated from hypervisor to db")

# virsh start <domain>

[ sync power states -> stop instance via API ], because the database rules



I understand this behaviour is "by design", but I'm confused about the
asymmetry: if a VM is shut down without using the nova API, should I not be
able to start it up again without the nova API?

This is a common scenario in power outages or failures external to
OpenStack, when VMs fail to start and we need to start them up again using
virsh.

Thanks!


Re: [Openstack-operators] [Fuel] Add new controllers

2016-11-16 Thread Andrew Woodward
Absolutely; versions higher than 4.1 support this by default. Keep in mind
that you should maintain an odd number of controllers for quorum, as an even
number can lead to split brain. Also, if you are using multiple node groups,
the controllers must all be in the same one for failover of the VIP addresses.

On Wed, Nov 16, 2016, 1:38 PM Fran Barrera  wrote:

> Hi,
>
> Is it possible to add new controller nodes to an existing OpenStack
> cluster deployed with Fuel?
>
> Thanks,
> Fran.
-- 
Andrew Woodward


[Openstack-operators] [app-catalog] App Catalog IRC meeting Thursday November 17th

2016-11-16 Thread Christopher Aedo
Join us tomorrow (Thursday) for our weekly meeting, scheduled for
November 17th at 17:00UTC in #openstack-meeting-3

The agenda can be found here; please add to it if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Now that the dust has settled from the summit, we'll pick up where we
left off - that is, discussing the transition to using Glare for the
backend.  We have a test server launched by infra now and are at the
point where we are down to fine-tuning.  If you can join the meeting to
discuss further, please do!



[Openstack-operators] [Fuel] Add new controllers

2016-11-16 Thread Fran Barrera
Hi,

Is it possible to add new controller nodes to an existing OpenStack
cluster deployed with Fuel?

Thanks,
Fran.


[Openstack-operators] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2016-11-16 Thread Armando M.
Hi,

As of today, the project neutron-vpnaas is no longer part of the neutron
governance. This was a decision reached after the project saw a dramatic
drop in active development over a prolonged period of time.

What does this mean in practice?

   - From a visibility point of view, release notes and documentation will
   no longer appear on openstack.org as of Ocata going forward.
   - No more releases will be published by the neutron release team.
   - The neutron team will stop proposing fixes for the upstream CI, if not
   solely on a voluntary basis (e.g. I still felt like proposing [2]).

How does it affect you, the user or the deployer?

   - You can continue to use vpnaas and its CLI via the
   python-neutronclient and expect it to work with neutron up until the newton
   release/python-neutronclient 6.0.0. After this point, if you want a release
   that works for Ocata or newer, you need to proactively request a release
   [5], and reach out to a member of the neutron release team [3] for
   approval. Assuming that the vpnaas CI is green, you can expect to have a
   working vpnaas system upon release of its package in the foreseeable future.
   - Outstanding bugs and new bug reports will be rejected on the basis of
   lack of engineering resources interested in helping out in the typical
   OpenStack review workflow.
   - Since we are freezing the development of the neutron CLI in favor of
   the openstack unified client (OSC), the lack of a plan to make the VPN
   commands available in the OSC CLI means that at some point in the future
   the neutron client CLI support for vpnaas may be dropped (though I don't
   expect this to happen any time soon).

Can this be reversed?

   - If you are interested in reversing this decision, now it is time to
   step up. That said, we won't be reversing the decision for Ocata. There is
   quite a curve to ramp up to make neutron-vpnaas worthy of being classified
   as a neutron stadium project, and that means addressing all the gaps
   identified in [6]. If you are interested, please reach out, and I will work
   with you to add your account to [4], so that you can drive the
   neutron-vpnaas agenda going forward.

Please do not hesitate to reach out to ask questions and/or clarifications.

Cheers,
Armando

[1] https://review.openstack.org/#/c/392010/
[2] https://review.openstack.org/#/c/397924/
[3] https://review.openstack.org/#/admin/groups/150,members
[4] https://review.openstack.org/#/admin/groups/502,members
[5] https://github.com/openstack/releases
[6] http://specs.openstack.org/openstack/neutron-specs/specs/stadium/ocata/neutron-vpnaas.html


[Openstack-operators] FW: [openstack-dev] [openstack-ansible] How do you haproxy?

2016-11-16 Thread Jean-Philippe Evrard
Hello David (and dear Operators!),

You are 100% correct, this topic could be interesting for operators.

Here is food for thoughts on “How do you haproxy for openstack-ansible?”:
https://etherpad.openstack.org/p/openstack-ansible-haproxy-improvements

Feel free to comment there!

Best regards,
Jean-Philippe Evrard (evrardjp)


From: David Moreau Simard 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 11 November 2016 at 12:40
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [openstack-ansible] How do you haproxy?


I feel like you might get valuable feedback by cross-posting this to 
openstack-operators.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Nov 10, 2016 7:40 AM, "Jean-Philippe Evrard"
<jean-philippe.evr...@rackspace.co.uk> wrote:
Hello,

In openstack-ansible, we are using haproxy (and keepalived) as load balancer 
mechanism.
We’ve been recommending the use of hardware load balancers for a while, but I 
think it’s time to improve haproxy configuration flexibility and testing.
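
As a starting point for discussion, a typical per-service frontend/backend 
pair today looks something like this (addresses, names, and health checks are 
illustrative, not what the role actually templates out):

```
# sketch of an haproxy stanza for one OpenStack API endpoint
frontend glance-api-front
    bind 192.0.2.10:9292
    default_backend glance-api-back

backend glance-api-back
    balance leastconn
    option httpchk GET /healthcheck
    server infra1 172.29.236.11:9292 check inter 12s rise 3 fall 3
    server infra2 172.29.236.12:9292 check inter 12s rise 3 fall 3
```

The flexibility questions are largely about exactly these knobs: per-service 
health checks, bind addresses, and backend options.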

I’m gathering requirements of what you’d like to see in haproxy.

I have an etherpad set up, and I’d be happy if you could fill your comments 
there:
https://etherpad.openstack.org/p/openstack-ansible-haproxy-improvements

Thank you in advance.

Best regards,
Jean-Philippe Evrard (evrardjp)


Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at 
www.rackspace.co.uk/legal/privacy-policy
 - This e-mail message may contain confidential or privileged information 
intended for the recipient. Any dissemination, distribution or copying of the 
enclosed material is prohibited. If you receive this transmission in error, 
please notify us immediately by e-mail at 
ab...@rackspace.com and delete the original 
message. Your cooperation is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [Openstack-operators] [publicClouds-wg] Public Cloud Working Group

2016-11-16 Thread Adam Kijak
I'm interested as well :)


From: matt Jarvis 
Sent: Tuesday, November 15, 2016 11:43 AM
To: OpenStack Operators; user-committee
Subject: [Openstack-operators] [publicClouds-wg] Public Cloud Working Group

So after input from a variety of sources over the last few weeks, I'd very much 
like to try and put together a public cloud working group, with the very high 
level goal of representing the interests of the public cloud provider community 
around the globe.

I'd like to propose that, in line with the new process for creation of working 
groups, we set up some initial IRC meetings for all interested parties. The 
goals for these initial meetings would be :

1. Define the overall scope and mission statement for the working group
2. Set out the constituency for the group - communication methods, definitions 
of public clouds, meeting schedules, chairs etc.
3. Identify areas of interest - eg. technical issues, collaboration 
opportunities

Before I go ahead and schedule first meetings, I'd very much like to gather 
input from the community on interested parties, and if this seems like a 
reasonable first step forward. My thought initially was to schedule a first 
meeting on #openstack-operators and then decide on best timings, locations and 
communication methods from there, but again I'd welcome input.

At this point it would seem to me that one of the key metrics for this working 
group to be successful is participation as widely as possible within the public 
cloud provider community, currently approximately 21 companies globally 
according to https://www.openstack.org/marketplace/public-clouds/. If we could 
get representation from all of those companies in any potential working group, 
then that would clearly be the best outcome, although that may be optimistic! 
As has become clear at recent Ops events, it may be that not all of those 
companies are represented on these lists, so I'd welcome any input on the best 
way to reach out to those folks.

Matt
