Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core in openstack-ansible-security

2016-05-04 Thread Kevin Carter
+1 for me too.


--

Kevin Carter
IRC: cloudnull



From: Matthew Thode <prometheanf...@gentoo.org>
Sent: Tuesday, May 3, 2016 1:50 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [openstack-ansible] Nominate Major Hayden for core 
in openstack-ansible-security

On 05/03/2016 01:47 PM, Truman, Travis wrote:
> Major has made an incredible number of contributions of code and reviews
> to the OpenStack-Ansible community. Given his role as the primary author
> of the openstack-ansible-security project, I can think of no better
> addition to the core reviewer team.
>
> Travis Truman
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
+1 because it still means something to me

--
-- Matthew Thode (prometheanfire)




Re: [openstack-dev] [Openstack-operators] [OpenStack-Ansible] Mitaka Upgrade

2016-05-02 Thread Kevin Carter
Hi Wade, sorry for the late reply; most of us have been traveling / AFK for a 
bit for the summit. Regarding Liberty -> Mitaka upgrades, there are a few issues 
that we need to work out before we have a supported upgrade process.


Most notably we need to address:

* https://bugs.launchpad.net/openstack-ansible/+bug/1577245

* https://bugs.launchpad.net/openstack-ansible/+bug/1568029

* https://bugs.launchpad.net/openstack-ansible/+bug/1574019

* https://bugs.launchpad.net/openstack-ansible/+bug/1574303


It's likely that if you hand-fix these items you'll be all taken care of. That 
said, work will be in full swing in the next week or so to get upgrades 100% 
squared away, and there may be some other things that need to be addressed 
before we'd say they're fully supported.

As for the RTFM part, it's not RTFM at all. We generally give our releases a 
bit of time to stabilize before announcing to the world how "wonderful our 
upgrade process is". Sadly, the issues you've found are a product of us not 
having worked everything out yet. We do have some documentation regarding the 
minor upgrade process, outlined here: [0], as well as other operational 
documentation that can be found here: [1]. Also, if you're interested in working 
on the upgrades, we'd love to have a chat about all of the things you've run 
into so that we can automate them away. I'd recommend joining the 
#openstack-ansible IRC channel and potentially raising issues on Launchpad [2] 
for things we've not yet thought of to work on.

As for what you've done thus far, it seems like a sensible approach. The 
playbook execution should drop new code into place and start/restart all of the 
services. However, the straggler process may have been related to this issue [ 
https://bugs.launchpad.net/openstack-ansible/+bug/1577245 ]. If you can look 
through that and the other issues mentioned earlier, we'd appreciate the 
feedback.

Sorry for the TL;DR. I hope you're well. Ping us if you have questions.

[0] - 
http://docs.openstack.org/developer/openstack-ansible/install-guide/app-minorupgrade.html
[1] - 
http://docs.openstack.org/developer/openstack-ansible/install-guide/#operations
[2] - https://bugs.launchpad.net/openstack-ansible/

--

Kevin Carter
IRC: cloudnull


From: Wade Holler <wade.hol...@gmail.com>
Sent: Thursday, April 28, 2016 11:33 AM
To: OpenStack Operators; OpenStack Development Mailing List (not for usage 
questions)
Subject: [openstack-dev] [Openstack-operators] [OpenStack-Ansible] Mitaka 
Upgrade

Hi All,

If this is RTFM please point me there and I apologize.

If not:

We are testing the Liberty to Mitaka transition with OSAD. Could someone 
please advise whether these are the correct general steps.

osad multinode (VMs/instances inside a osad cloud) built with latest 12.X 
liberty osad
1. save off config files: openstack_user_config.yml, user_variables.yml, 
...hostname..., inventory, etc.
2. rm -rf /etc/openstack_deploy; rm -rf /opt/openstack-ansible
3. git clone -b stable/mitaka 
4. copy config files back in place
5. ./scripts/bootstrap-ansible.sh
6. openstack-ansible setup-everything.yml

That is the process we ran, and it appears to have gone well after adding the 
"-e rabbit_upgrade=true" flag.
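Condensed into a run-book, the steps above look roughly like this (the repo URL and backup paths are illustrative, not taken from the original message; adapt them to your deployment):

```shell
# Sketch of the Liberty -> Mitaka OSA upgrade steps described above.
# Paths and the repo URL are illustrative; adjust for your environment.

# 1. Save off configuration and inventory
mkdir -p /root/osa-backup
cp -a /etc/openstack_deploy/openstack_user_config.yml \
      /etc/openstack_deploy/user_variables.yml \
      /etc/openstack_deploy/openstack_inventory.json /root/osa-backup/

# 2. Remove the old deployment config and checkout
rm -rf /etc/openstack_deploy /opt/openstack-ansible

# 3. Clone the Mitaka branch
git clone -b stable/mitaka \
    https://github.com/openstack/openstack-ansible /opt/openstack-ansible

# 4. Copy config files back into place
mkdir -p /etc/openstack_deploy
cp -a /root/osa-backup/* /etc/openstack_deploy/

# 5. Bootstrap Ansible, then 6. run the playbooks, including the
#    RabbitMQ upgrade flag the author needed
cd /opt/openstack-ansible
./scripts/bootstrap-ansible.sh
cd playbooks
openstack-ansible setup-everything.yml -e rabbit_upgrade=true
```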

The only straggler process I've found so far was a neutron-ns-metadata-proxy 
that was still running 12.X Liberty code. I restarted the container and it 
didn't start again, but the 13.X Mitaka version is running (and was before the 
shutdown).

Is this the correct upgrade process, or are there other steps / approaches that 
should be taken?

Best Regards,
Wade




Re: [openstack-dev] [all] Newton Summit: cross-project session for deployment tools projects

2016-03-31 Thread Kevin Carter
+1 This is great and I look forward to meeting and talking with folks at 
the summit.

On 03/31/2016 04:42 PM, Emilien Macchi wrote:
> Hi,
>
> OpenStack big tent has different projects for deployments: Puppet,
> Chef, Ansible, Kolla, TripleO, (...) but we still have some topics  in
> common.
> I propose we use the Cross-project day to meet and talk about our
> common topics: CI, doc, release, backward compatibility management,
> etc.
>
> Feel free to look at the proposed session and comment:
> https://etherpad.openstack.org/p/newton-cross-project-sessions
>
> Thanks,
>

--

Kevin Carter
IRC: cloudnull



Re: [openstack-dev] [ptl][kolla][release] Deploying the big tent

2016-03-29 Thread Kevin Carter
to try?
>
> Puppet OpenStack, Chef OpenStack and Ansible OpenStack took another
> approach, by having a separated module per project.
>
> This is how we started 4 years ago in Puppet modules: having one
> module that deploys one component.
> Example: openstack/puppet-nova - openstack/puppet-keystone - etc
> Note that we currently cover 27 OpenStack components, documented here:
> https://wiki.openstack.org/wiki/Puppet
>
> We have split the governance a little bit over the last 2 cycles,
> where some modules like puppet-neutron and puppet-keystone (eventually
> more in the future) have a dedicated core member group (among other
> Puppet OpenStack core members) that have a special expertise on a
> project.
>
> Our model allows anyone expert on a specific project (ex: Keystone) to
> contribute on puppet-keystone and eventually become core on the
> project (It's happening every cycle).
> For now, I don't see an interest to have this code living in core projects.

Agreed.

> Yes, there are devstack plugins, but historically, devstack is the
> only tool right now that is used to gate core projects (nova, neutron,
> etc).
> Also, yes there are tempest plugins but it's because Tempest is the
> official tool to run functional testing in OpenStack.
> I would not be against having Kolla plugins, but I'm not sure this is
> the right approach for deployment tools, because multiple such tools
> exist.

Agreed. I also believe service projects have enough to worry about 
without adding yet another workflow and set of bugs to their already 
full plate.

> I would rather investigate some CI jobs (non-voting for now) that
> would run in core projects and run Kolla / Puppet / whatever CI jobs,
> beside devstack.
> What do you think?

I personally think this is a great idea and it could potentially help 
bring developers and deployers together which is a "win/win" in my opinion.

>> The incentive to contribute to Kolla is to have your project deployed from
>> source or from binary in containers with full organized upgrade capability
>> and full customization.  If that isn't enough incentive, you're right, I
>> don't see people jumping on this.
>>
>> Regards,
>> -steve
>>
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>>
>>
>>
>
>
>

--

Kevin Carter
IRC: cloudnull



Re: [openstack-dev] [ptl][kolla][release] Deploying the big tent

2016-03-28 Thread Kevin Carter
me requests, the answer
> is they could.  Why haven't they?  I can never speak to the motives or
> actions of others, but perhaps they didn't think to try?

While I also cannot speak to the intentions of others, I will say I've 
never felt the need to burden other communities with deployment code for 
a given service because it's generally not part of their development 
workflow. When I've been working on a service and needed things from the 
community developing that service, I've reached out to them directly. As 
someone working on deployment code, it's been great to get additional 
developer feedback from folks outside of our community; this is 
something everyone in the OpenStack-Ansible community is encouraged to 
do for all of the services we support. External interactions with other 
communities have been invaluable and have greatly improved our project. All 
that said, if PTLs are looking to get their project contributors 
involved in the various deployment projects, I'm sure the additional input 
and access to resources would be greatly appreciated.

> The incentive to contribute to Kolla is to have your project deployed from
> source or from binary in containers with full organized upgrade capability
> and full customization.  If that isn't enough incentive, you're right, I
> don't see people jumping on this.
>
> Regards,
> -steve
>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>
>
>

--

Kevin Carter
IRC: cloudnull



Re: [openstack-dev] [openstack-ansible] OpenVSwitch support

2016-03-24 Thread Kevin Carter
I believe the OVS bits are being worked on; however, I don't remember by whom 
and I don't know the current state of the work. Personally, I'd welcome the 
addition of other neutron plugin options, and if you have time to work on any of 
those bits I'd be happy to help out where I can and review the PRs.

--

Kevin Carter
IRC: cloudnull



From: Curtis <serverasc...@gmail.com>
Sent: Thursday, March 24, 2016 10:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-ansible] OpenVSwitch support

Hi,

I'm in the process of building OPNFV style labs [1]. I'd prefer to
manage these labs with openstack-ansible, but I will need things like
OpenVSwitch.

I know there was talk of supporting OVS in some fashion [2] but I'm
wondering what the current status or thinking is. If it's desirable by
the community to add OpenVSwitch support, and potentially other OPNFV
related features, I have time to contribute to work on them (as best I
can, at any rate).

Let me know what you think,
Curtis.

[1]: https://www.opnfv.org/
[2]: https://etherpad.openstack.org/p/osa-neutron-dvr



Re: [openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-22 Thread Kevin Carter
describe the details. You are
>>> welcome to provide your inputs. Thanks.
>>> Best regards,
>>> Hongbin
>>> *From:*Tim Bell [mailto:tim.b...@cern.ch]
>>> *Sent:*March-19-16 5:55 AM
>>> *To:*OpenStack Development Mailing List (not for usage questions)
>>> *Subject:*Re: [openstack-dev] [magnum] High Availability
>>> *From:* Hongbin Lu <hongbin...@huawei.com>
>>> *Reply-To:* "OpenStack Development Mailing List (not for usage
>>> questions)" <openstack-dev@lists.openstack.org>
>>> *Date:* Saturday 19 March 2016 at 04:52
>>> *To:* "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev@lists.openstack.org>
>>> *Subject:*Re: [openstack-dev] [magnum] High Availability
>>>> ...
>>>> If you disagree, I would request you to justify why this approach
>>>> works for Heat but not for Magnum. Also, I also wonder if Heat has a
>>>> plan to set a hard dependency on Barbican for just protecting the
>>>> hidden parameters.
>>> There is a risk that we use decisions made by other projects to
>>> justify how Magnum is implemented. Heat was created 3 years ago
>>> according to
>>> https://www.openstack.org/software/project-navigator/ and Barbican
>>> only 2 years ago, thus Barbican may not have been an option (or a
>>> high risk one).
>>> Barbican has demonstrated that the project has corporate diversity
>>> and good stability
>>> (https://www.openstack.org/software/releases/liberty/components/barbican).
>>> There are some areas that could be improved (packaging and puppet
>>> modules are often needing some more investment).
>>> I think it is worth a go to try it out and have concrete areas to
>>> improve if there are problems.
>>> Tim
>>>> If you don’t like code duplication between Magnum and Heat, I would
>>>> suggest to move the implementation to a oslo library to make it DRY.
>>>> Thoughts?
>>>> [1]https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html
>>>> Best regards,
>>>> Hongbin
>>>> *From:*David Stanek [mailto:dsta...@dstanek.com]
>>>> *Sent:*March-18-16 4:12 PM
>>>> *To:*OpenStack Development Mailing List (not for usage questions)
>>>> *Subject:*Re: [openstack-dev] [magnum] High Availability
>>>>
>>>> On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal
>>>> <douglas.mendiza...@rackspace.com> wrote:
>>>>>
>>>>> [snip]
>>>>> >
>>>>> > Regarding the Keystone solution, I'd like to hear the Keystone team's 
>>>>> > feadback on that.  It definitely sounds to me like you're trying to put 
>>>>> > a square peg in a round hole.
>>>>> >
>>>>>
>>>> I believe that using Keystone for this is a mistake. As mentioned in
>>>> the blueprint, Keystone is not encrypting the data so magnum would
>>>> be on the hook to do it. So that means that if security is a
>>>> requirement you'd have to duplicate more than just code. magnum
>>>> would start having a larger security burden. Since we have a system
>>>> designed to securely store data I think that's the best place for
>>>> data that needs to be secure.
>

--

Kevin Carter
IRC: cloudnull



Re: [openstack-dev] [openstack-ansible] Installing networking-* pythonclient extensions to multiple locations

2016-02-26 Thread Kevin Carter
++ I think it'd be great to get that nailed down because, like you've 
mentioned, it's likely we'll see a bit more of this as we widen our scope.


--

Kevin Carter
IRC: cloudnull


From: Javeria Khan <javer...@plumgrid.com>
Sent: Friday, February 26, 2016 4:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-ansible] Installing networking-* 
pythonclient extensions to multiple locations

That makes sense Kevin. I don't have a use case in mind that would require 
access to the additional clients, so we can address that if/when it ever comes 
up. For now the utility container getting the extensions installed should 
suffice. As OSA is integrating with more types of plugins moving forward, it 
would help to get some kind of structure in place that references neutron 
parameters for such use cases.


--
Javeria

On Thu, Feb 25, 2016 at 4:08 AM, Kevin Carter
<kevin.car...@rackspace.com> wrote:

Hi Javeria,


We could call out our supported sub-projects in the `openstack_services.yml` 
file and install them as part of the plugin backend similar to what we've done 
with the "neutron_lbaas" package[0][1]. However, this would not specifically 
fix the neutron clients in all places as you've mentioned. While I can make a 
case for the utility container to get the extra neutron-client-extensions, I'm 
not sure we need them everywhere. Do we suspect a user or service may need 
access to the additional client extensions from the Heat or Nova venvs or 
anywhere else for that matter?



[0] - 
https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_neutron/defaults/main.yml#L349

[1] - 
https://github.com/openstack/openstack-ansible/blob/master/playbooks/defaults/repo_packages/openstack_services.yml#L85-L87
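As a sketch of what that could look like (the sub-project, variable names, and condition below are hypothetical, modeled on the existing `neutron_lbaas` pattern rather than taken from OSA's actual defaults):

```yaml
---
# Hypothetical sketch: pin the sub-project alongside the other service
# sources in openstack_services.yml, mirroring the neutron_lbaas pattern.
networking_l2gw_git_repo: https://github.com/openstack/networking-l2gw
networking_l2gw_git_install_branch: master
---
# Then, in a role's tasks file, install the client extension only where
# it is wanted, gated on the matching plugin being enabled.
- name: Install networking-l2gw client extension
  pip:
    name: networking-l2gw
  when: "'l2gateway' in neutron_plugin_base"
```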


--

Kevin Carter
IRC: cloudnull


From: Javeria Khan <javer...@plumgrid.com>
Sent: Sunday, February 21, 2016 5:09 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Installing networking-* 
pythonclient extensions to multiple locations

Hey everyone,

At the moment OSA installs the python-neutronclient in a few locations 
including the containers neutron-server, utility, heat, tempest.

Now neutron has a bunch of sub-projects like networking-l2gw [1], 
networking-bgpvpn [2] networking-plumgrid [5] etc, which have their own 
python-neutronclient CLI extensions [3][4][5] in their respective repositories 
and packages.

These CLI extensions are not part of the neutron packages and must be 
enabled by installing the additional networking-* packages. We don't install 
most of these sub-projects in OSA at the moment; however, moving forward, do you 
think it's reasonable to install said packages in every location that installs 
the neutron client inside the OSA plays? If so, how would you recommend we 
go about it, since the installation will be conditional on enabling the 
relevant neutron sub-project features?

[1] https://github.com/openstack/networking-l2gw
[2] https://github.com/openstack/networking-bgpvpn
[3] 
https://github.com/openstack/networking-l2gw/tree/master/networking_l2gw/l2gatewayclient
[4] 
https://github.com/openstack/networking-bgpvpn/tree/master/networking_bgpvpn/neutronclient
[5] https://github.com/openstack/networking-plumgrid


Thanks,
Javeria





Re: [openstack-dev] [openstack-ansible] Installing networking-* pythonclient extensions to multiple locations

2016-02-24 Thread Kevin Carter
Hi Javeria,


We could call out our supported sub-projects in the `openstack_services.yml` 
file and install them as part of the plugin backend similar to what we've done 
with the "neutron_lbaas" package[0][1]. However, this would not specifically 
fix the neutron clients in all places as you've mentioned. While I can make a 
case for the utility container to get the extra neutron-client-extensions, I'm 
not sure we need them everywhere. Do we suspect a user or service may need 
access to the additional client extensions from the Heat or Nova venvs or 
anywhere else for that matter?



[0] - 
https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_neutron/defaults/main.yml#L349

[1] - 
https://github.com/openstack/openstack-ansible/blob/master/playbooks/defaults/repo_packages/openstack_services.yml#L85-L87


--

Kevin Carter
IRC: cloudnull


From: Javeria Khan <javer...@plumgrid.com>
Sent: Sunday, February 21, 2016 5:09 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Installing networking-* 
pythonclient extensions to multiple locations

Hey everyone,

At the moment OSA installs the python-neutronclient in a few locations 
including the containers neutron-server, utility, heat, tempest.

Now neutron has a bunch of sub-projects like networking-l2gw [1], 
networking-bgpvpn [2] networking-plumgrid [5] etc, which have their own 
python-neutronclient CLI extensions [3][4][5] in their respective repositories 
and packages.

These CLI extensions are not part of the neutron packages and must be 
enabled by installing the additional networking-* packages. We don't install 
most of these sub-projects in OSA at the moment; however, moving forward, do you 
think it's reasonable to install said packages in every location that installs 
the neutron client inside the OSA plays? If so, how would you recommend we 
go about it, since the installation will be conditional on enabling the 
relevant neutron sub-project features?

[1] https://github.com/openstack/networking-l2gw
[2] https://github.com/openstack/networking-bgpvpn
[3] 
https://github.com/openstack/networking-l2gw/tree/master/networking_l2gw/l2gatewayclient
[4] 
https://github.com/openstack/networking-bgpvpn/tree/master/networking_bgpvpn/neutronclient
[5] https://github.com/openstack/networking-plumgrid


Thanks,
Javeria


Re: [openstack-dev] [openstack-ansible] Helion HLM on GitHub and what next?

2016-01-29 Thread Kevin Carter
Thanks, Bailey, for the update. I intend to look over more of what you 
folks have published really soon. Thanks again for putting all of this 
out there, and I hope to work with you and the team soon on more of the 
convergence pieces.

I don't know if you or any of your team are headed to the Ops and/or 
OpenStack-Ansible midcycles in February, but I'll be there and would love 
the opportunity to work with folks while we're all in person.

Thanks again for publishing and have a good weekend.

-- 

Kevin Carter
IRC: Cloudnull

On 01/29/2016 01:30 PM, Bailey, Darragh wrote:
> Hi,
>
>
> Those present at some of the Ansible collaboration sessions at Tokyo may
> recall a mention about Helion Lifecycle Manager (HLM), which constituted
> a collection of ansible playbooks and particular patterns used by the
> HPE Helion OS 2.0 release to deploy clouds.
>
> We promised at the time that we'd get the code out there to start some
> discussions on where we could collaborate better with openstack-ansible.
>
> I've already mentioned it to a few folks in IRC, however I think it's
> worth sharing out a bit further.
>
> The initial code for the difference ansible components has been uploaded
> to GitHub under https://github.com/hpe-helion-os earlier this month.
>
>
> https://github.com/hpe-helion-os/helion-input-model
> Defines a cloud though a combination of service input definitions
> (relatively static) and a topology that controls how network and the
> services are laid out control the shape and size of the cloud desired.
>
>
> https://github.com/hpe-helion-os/helion-configuration-processor
> The processing engine that consumes the desired cloud topology
> configuration from the input model, and generates all the hosts and
> variables that are consumed by the ansible playbooks/roles to deploy the
> requested cloud.
>
>
> https://github.com/hpe-helion-os/helion-ansible
> Contains all the ansible playbooks/roles used to deploy HPE's HOS 2.0.
> This is run against the output of the helion-configuration-processor to
> execute the build/upgrade of the cloud specification.
>
>
> Obviously lots of discussions to be had and work to be done, and
> hopefully with some help we should have a good idea by Austin as to what
> will be needed to integrate some of the concepts into the existing
> openstack-ansible project.
>
>
> Enjoy the weekend :-)
>




Re: [openstack-dev] OpenStack installer

2016-01-27 Thread Kevin Carter
running OpenStack clouds at various sizes for long periods of time 
becomes very difficult as packages, package dependencies, patches the 
third party is carrying, and other things change, causing instability and 
general breakage. That said, I agree package management in Linux has 
always been a strong point, but I've found out the hard way that package 
deployments of OpenStack don't scale or upgrade well. It may be better 
today than it was before; however, color me skeptical.

> And some minor points:
> - Need root rights to start. I don't really understand why it is needed.
You do need root to run the OSA playbooks; however, you could use the 
Ansible "become" process to achieve it. Even in package deployments of 
OpenStack, as provided by the distro, you still need root privileges to 
create users, init scripts, etc...
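As a generic illustration of the "become" approach (a sketch, not OSA's actual playbook wiring; the login user name is illustrative):

```yaml
# Sketch: log in as an unprivileged user and escalate per-play via sudo.
- name: Install service prerequisites
  hosts: keystone_all
  remote_user: deploy   # unprivileged login user (illustrative)
  become: true          # escalate to root only for this play's tasks
  tasks:
    - name: Create the service system user
      user:
        name: keystone
        system: yes
```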

> - I think the role plays are unnecessarily fragmented into files. Ansible 
> was designed with simplicity in mind;
> now keystone, for example, has 29 files, lots of them with 1 task.
This is true; some of our roles are rather large, but they do just about 
everything that the service provides. We've found it better to structure 
the roles with includes instead of simply overloading the main.yml: it 
makes the roles easier to debug and develop, and lets us focus parts of a 
role on the specific tasks a given service may require. While the roles 
could be greatly simplified, we're looking to support as many things as 
possible within a given service, such as Keystone with various token 
provider backends, federation, using apache+mod-wsgi for the API service, 
etc. I'd like to point out that "simplicity in mind" is the driving 
thought and something that we try to adhere to; however, holding fast to 
simplicity is not always possible when the services being deployed are 
complex. For a deployer, simplicity should be a driver in how something 
works, which doesn't always translate to implementation.
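The include-based layout described above reads roughly like this (the file names are hypothetical, not the actual os_keystone task files):

```yaml
# tasks/main.yml -- a thin top level that only includes focused task files.
- include: keystone_pre_install.yml
- include: keystone_install.yml
- include: keystone_token_backend.yml   # e.g. alternate token providers
- include: keystone_federation.yml
- include: keystone_apache.yml          # API service via apache+mod_wsgi
- include: keystone_post_install.yml
```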

> - The 'must have tags' are also against Ansible's philosophy. No one should 
> need to start a play with a tag
> (tagging should be an exception, not the rule).
I'm not sure what this means. The only thing I could think of is 
re-bootstrapping the galera cluster after every node within the cluster 
is down. Not that the tag is required in this case; it's only used to 
speed up the bootstrap process and recover the cluster. We do have a few 
sanity checks in place that will cause a role to hard fail and may 
require passing an extra variable on the command line to continue; however, 
the failure output provides a fairly robust message about why the task is 
being hard stopped. This was done so that you don't inadvertently cause 
yourself downtime or data loss. In either case, these are the exceptions, 
not the rules. So, like I said, I think I'm missing the point here.

> Running a role doesn't take more than 10-20 secs, if it is already
> completed, tagging is just unnecessary bloat. If you need to start something 
> at the middle of a play, then that play
> is not right.
This is plain wrong... Tags are not bloat, and you'll wish you had them 
when you need to rapidly run a given task to recover or reconfigure 
something, especially as your playbooks and roles grow in sophistication 
and capability. I will say, though, that we had a similar philosophy in 
our early Ansible adventures; we've since reversed that position entirely.
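As an example of the kind of targeted run that tags make possible (the playbook and tag names here are illustrative, not necessarily OSA's actual ones):

```shell
# Inspect which tags a playbook exposes, then re-run only the tasks
# needed to recover the cluster rather than the entire play.
openstack-ansible galera-install.yml --list-tags
openstack-ansible galera-install.yml --tags galera-bootstrap
```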

>
> So those were the reasons why we started our project; hope you can understand 
> it. We don't want to compete;
> it just serves us better.
>
>> All that said, thanks for sharing the release and if I can help in any way 
>> please
>> reach out.
>>
> Thanks, maybe we can work together in the future.
>
I too hope that we can work together. It'd be great to get different 
perspectives on the roles and plays we're creating and that you may 
need to serve your deployments. I'll also note that we've embarked on a 
massive decoupling of the roles from the main OSA repository, which may 
be beneficial to you and your project, or other projects like it. A full 
list of the roles we've done thus far can be seen here [0]. In the Mitaka 
release timeframe we hope to have the roles fully stand alone and brought 
into OSA via the ansible-galaxy resolver, which will make it possible for 
developers and deployers alike to benefit from the roles on an `a la 
carte` basis.
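Consuming the standalone roles through the galaxy resolver could look something like this (the role repositories are real, but the file layout and versions shown are illustrative):

```yaml
# ansible-role-requirements.yml -- sketch of pulling standalone OSA roles
# in via the ansible-galaxy resolver; install with:
#   ansible-galaxy install -r ansible-role-requirements.yml
- name: os_keystone
  src: https://github.com/openstack/openstack-ansible-os_keystone
  version: master
- name: galera_server
  src: https://github.com/openstack/openstack-ansible-galera_server
  version: master
```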


If you ever have other questions as you build out your own project, or if 
there's something that we can help with, please let us know. We're almost 
always in the #openstack-ansible channel, and generally I'd say that most 
of the folks in there are happy to help. Take care and happy Ansible'ing!


[0] - https://github.com/openstack?utf8=%E2%9C%93=openstack-ansible

>> --
>>
>> Kevin Carter
>> IRC: cloudnull
>>
> Br,
> György
>
>>

Re: [openstack-dev] OpenStack installer

2016-01-26 Thread Kevin Carter
Hi Gyorgy,

I'll definitely give this a look, and thanks for sharing. I would like to ask, 
however, why you found OpenStack-Ansible so complex that you've taken on the 
complexity of developing a new installer altogether? I'd love to understand the 
issues you ran into and see what we can do in upstream OpenStack-Ansible to 
overcome them for the greater community. Given that OpenStack-Ansible is no 
longer a Rackspace project but a community effort governed by the OpenStack 
Foundation, I'd be keen to see how we can simplify the deployment offerings 
we're working on today, in an effort to foster greater developer interaction so 
that we can work together on building the best deployer and operator experience.

All that said, thanks for sharing the release and if I can help in any way 
please reach out.

--

Kevin Carter
IRC: cloudnull



From: Gyorgy Szombathelyi <gyorgy.szombathe...@doclerholding.com>
Sent: Tuesday, January 26, 2016 4:32 AM
To: 'openstack-dev@lists.openstack.org'
Subject: [openstack-dev] OpenStack installer

Hello!

I just want to announce a new installer for OpenStack:
https://github.com/DoclerLabs/openstack
It is GPLv3, uses Ansible (currently 1.9.x; 2.0.0.2 has some bugs which have to
be resolved), and has lots of components integrated (of course there are missing
ones).
The goal was simplicity, and also operating the cloud, not just installing it.
We started with Rackspace's openstack-ansible, but found it a bit complex with
the containers. It also didn't include all the components we required, so we
started this project.
Feel free to give it a try! The documentation is sparse, but it'll improve with
time.
(Hope you don't consider this an advertisement; we don't want to sell this,
just wanted to share our development.)

Br,
György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

2016-01-26 Thread Kevin Carter
Seems like a sensible change; however, I'd love to see it written up as a spec.
Also, do we know if there are any scenario tests in tempest for Octavia, or
would we need to develop them?

As for adding Octavia as a new service within OpenStack-Ansible, this makes
sense. Another approach may be to add Octavia to the existing neutron-agent
container, which would make coordinating some of the services easier while
keeping the service deployment simpler, but that has isolation and
segmentation drawbacks, so I have no strong opinions on what's best.

I personally think it'd be great to see this feature in OSA and I look forward
to reviewing the spec.

--

Kevin Carter
IRC: cloudnull



From: Major Hayden <ma...@mhtx.net>
Sent: Tuesday, January 26, 2016 1:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hey there,

After poking around a bit at LBaaS in OpenStack-Ansible, I discovered that 
LBaaS v2[1] was available in Liberty and Mitaka.  At first, I thought it 
involved switching agents from neutron-lbaas-agent to neutron-lbaasv2-agent, 
but things are a little bit more involved.

LBaaS v1 works by configuring HAProxy within agent containers.  However, LBaaS 
v2 creates virtual machines to hold load balancers and attaches those virtual 
machines to the appropriate subnet.  It offers some active/passive failover 
capabilities, but a single load balancer is the default.  One of the biggest 
benefits of v2 is that you can put multiple listeners on the same load 
balancer.  For example, you could host a website on ports 80 and 443 on the 
same VIP and floating IP address.

The provisioning would look like this for v2:

  * Create a load balancer
  * Create a listener
  * Create a pool
  * Create members in the pool
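
The devstack guide linked below [1] shows CLI equivalents for those four
steps; roughly like this (the names, subnet, and member address are
placeholders):

```shell
neutron lbaas-loadbalancer-create --name lb1 private-subnet
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
    --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --listener listener1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.4 \
    --protocol-port 80 pool1
```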

Many thanks to Brandon Logan (blogan) for sitting down with me this morning to 
go over it.  It looks like we'd need to do the following to get LBaaS v2 into 
OpenStack-Ansible:

  1) Build a new container to hold an Octavia venv

  2) Run four new daemons in that container:

* octavia-api
* octavia-worker
* octavia-housekeeping
* octavia-health-manager

  3) Ensure that neutron-lbaas-agent isn't running at the same time as the 
octavia stack

  4) Create a new RabbitMQ queue for octavia along with credentials

  5) Create a new MariaDB database for octavia along with credentials
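
Concretely, steps 4 and 5 would presumably look something like the following
(the vhost, user, and password are placeholders; OSA would normally template
these):

```shell
# Step 4: RabbitMQ vhost and credentials for octavia
rabbitmqctl add_vhost /octavia
rabbitmqctl add_user octavia SECRET
rabbitmqctl set_permissions -p /octavia octavia ".*" ".*" ".*"

# Step 5: MariaDB database and credentials for octavia
mysql -e "CREATE DATABASE octavia;"
mysql -e "GRANT ALL ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'SECRET';"
```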

At this moment, LBaaS v2 panels are planned for Horizon in Mitaka, but they're 
not available as of right now.  It seems like a spec would be necessary for 
this effort.

Are there users/deployers who would like to have this feature available?

[1] 
http://docs.openstack.org/developer/devstack/guides/devstack-with-lbaas-v2.html

- --
Major Hayden
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJWp8usAAoJEHNwUeDBAR+xXk8P/37tkHZujAbbX3SY5X4dR2wX
cmR1DN+upBHJgVfrEKdFEBkGaS5ByZXnSvB0nGdJGYluL22DmNQRW2VxYDkqF+/W
h/0dprxEzscdYCt8cO/8LVftZ0krln7Wp7Yn8YUCLSm9yHPrrgUIUIJNm6r552Ts
BEJrdDaC+9R+vMstYFzdHKPegV53L25muXFCU7FM50WeGEXOgd72rMNf81VSQXUU
DBJzYyYvN8MZownOcvoh9aAH6a+ASwZmEMZpc7HGj2ltpc99LSfmuTT+t8Jzysr5
prCK6XBzzsedgYFWG2v1JZUOvTgjhbkeLIjPhYdnzfYp3b1sOz1qL9EXOcw/p4z7
xyHgns2HlpAMixTmqg+ZfaveGfqKAo6Pu+6z+BIT3+uqec7t1cQy3CQ7bBOX8GBe
PQyzU06jdT9x+/sarQGGfqMOfnX9XPEfUlfC7xa1KGUDdK7wf+yZdVf+D2Uh+vr/
K8Tohnswr6wDgVxB60Z+tptXkmSkV4jhPvXo9cPN2Gjed7/R1wb71XSb+OJ/3jxg
OdCVAz6mbCBxjWhrGkz7RR90NDZNy5CD3tqv22rVOuYZIFKw+IccCZ6KIfN8Fgne
XscCZPsZ2n/535PjAXDYqfHi+Qb7bAjjvj7Ast9bGNGrUiwNuoKa+L4HjFfopUqs
hXlq6F7n3pPmMCIgR76o
=KsIk
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

2016-01-26 Thread Kevin Carter
I know that Neutron LBaaS V1 is still available in Liberty and functional, and
at this point I assume it's in Mitaka (simply judging by the code, not the
actual functionality). From a production standpoint I think it's safe to say we
can keep supporting the V1 implementation for a while; however, we'll be stuck
once V1 is deprecated should there not be a proper migration path for old and
new LBs at that time.

I'd also echo Kevin's request to share some of the migration scripts that have
been made, such that we can all benefit from the prior art that has already
been created. @Eichberger: if it's not possible to share the "proprietary"
scripts outright, maybe we could get an outline of the process / a whitepaper
on what's been done so we can work on getting the needful migrations baked into
Octavia proper? (/me speaking as someone with no experience in Octavia nor the
breadth of work that I may be asking for; however, I am interested in making
things better for deployers, operators, and developers.)

--

Kevin Carter
IRC: cloudnull



From: Fox, Kevin M <kevin@pnnl.gov>
Sent: Tuesday, January 26, 2016 5:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

That's very, very unfortunate. :/ LBaaS team (or any other team), please never
do this again. :/

so does liberty/mitaka at least support using the old v1? it would be nice to
have a different flag day to upgrade the load balancers than the upgrade day to
get from kilo to release next...

Any chance you can share your migration scripts? I'm guessing we're not the 
only two clouds that need to migrate things.

hmm Would it be possible to rename the tables to something else and tweak a 
few lines of code so they could run in parallel? Or is there deeper 
incompatibility then just the same table schema being interpreted differently?

Thanks,
Kevin

From: Eichberger, German [german.eichber...@hpe.com]
Sent: Tuesday, January 26, 2016 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

Hi,

As Brandon pointed out you can’t run V1 and V2 at the same time because they 
share the same database tables and interpret columns differently. Hence, at HPE 
we have some proprietary script which takes the V1 database tables and migrates 
them to the V2 format. After that the v2 agent based driver will pick it up and 
create those load balancers.

To migrate the agent-based driver to Octavia we are thinking self-migration,
since people can use the same (ansible) scripts and point them at Octavia.

Thanks,
German



On 1/26/16, 12:40 PM, "Fox, Kevin M" <kevin@pnnl.gov> wrote:

>I assumed they couldn't run on the same host, but would work on different 
>hosts. maybe I was wrong?
>
>I've got a production cloud that's heavily using v1. Having a flag day where 
>we upgrade all from v1 to v2 might be possible, but will be quite painful. If 
>they can be made to coexist, that would be substantially better.
>
>Thanks,
>Kevin
>
>From: Brandon Logan [brandon.lo...@rackspace.com]
>Sent: Tuesday, January 26, 2016 12:19 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support
>
>Oh lbaas versioning was a big deal in the beginning.  Versioning an
>advanced service is a whole other topic and exposed many "interesting"
>issues with the neutron extension and service plugin framework.
>
>The reason v1 and v2 cannot be run together is mainly to get over an
>issue we had with the 2 different agents, which would have caused a much
>larger refactor.  The v1 OR v2 requirement was basically a hack to get
>around that.  Now that Octavia is the reference implementation and the
>default, relaxing this restriction shouldn't cause any problems really.
>Although, I don't want to 100% guarantee that because it's been a while
>since I was in that world.
>
>If that were relaxed, the v2 agent and v1 agent could still be run at
>the same time, which is something to think about.  Come to think of
>it, we might want to revisit whether the v2 and v1 agent running
>together is something that can be easily fixed because many things have
>improved since then AND my knowledge has obviously improved a lot since
>that time.
>
>Glad yall brought this up.
>
>Thanks,
>Brandon
>
>
>On Tue, 2016-01-26 at 14:07 -0600, Major Hayden wrote:
>> On 01/26/2016 02:01 PM, Fox, Kevin M wrote:
>> > I believe lbaas v1 and v2 are different then every other openstack api 
>> > version in that while you can run v1 and v2 at the same time but they are 
>> >

Re: [openstack-dev] [openstack-ansible]

2016-01-02 Thread Kevin Carter
@Duck,


Is this an offline install, or have you retried since posting the message? I ask
because I'm not able to reproduce the problem. It's possible that the Ansible
Galaxy API was down, but it's hard to say.

--

Kevin Carter
IRC: cloudnull


From: Duck Euler <duck2...@gmail.com>
Sent: Friday, December 18, 2015 7:06 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible]


when running openstack ansible quick start ( curl 
https://raw.githubusercontent.com/openstack/openstack-ansible/master/scripts/run-aio-build.sh
 | sudo bash )

it will halt with the following error.

++ ansible-galaxy install --role-file=ansible-role-requirements.yml 
--ignore-errors --force
- the API server (galaxy.ansible.com<http://galaxy.ansible.com>) is not 
responding, please try again later.

It just started happening in the last day.

I found if I comment out the following roles in the requirements file, that 
command runs to completion.

- src: evrardjp.keepalived
  name: keepalived
  version: '1.3'
- src: mattwillsher.sshd
  name: sshd

seeking advice or help on this issue.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-14 Thread Kevin Carter
The port binding issues are usually related to a neutron physical interface
mapping issue; however, based on your previous config I don't think that was the
problem. If you're deploying Liberty/Master (Mitaka), there was a fix that went
in that resolved an issue within neutron and the use of L2/multicast groups [0].
If you're on the stable tag, the fix has not been released yet and will be there
for the 12.0.3 tag, coming soon. To resolve the issue, simply add the following
to your `user_variables.yml` file:

== If you don't want to use l2 population, add the following ==
neutron_l2_population: "False"
neutron_vxlan_group: "239.1.1.1"

== If you want to use l2 population, add the following ==
neutron_l2_population: "True"

As for the neutron services on your compute nodes, they should be running
within the host namespace. In Liberty/Master the python bits will be within a
venv, using an upstart init script to control the service. If you're not seeing
the neutron service running, it's likely due to this bug [2], which is resolved
by dropping the previously mentioned user variable options.

I hope this helps; let me know how it goes.

[0] https://review.openstack.org/#/c/255624
[1] https://github.com/openstack/openstack-ansible/commits/liberty
[2] https://bugs.launchpad.net/neutron/+bug/1470584

--

Kevin Carter
IRC: cloudnull



From: Mark Korondi <korondi.m...@gmail.com>
Sent: Sunday, December 13, 2015 9:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ansible] One or more undefined variables: 'dict 
object' has no attribute 'bridge'

Thanks cloudnull,

This solved the installation issue. I commented out all non-flat
related networks before, to investigate my main problem, which is

> PortBindingFailed: Binding failed for port 
> fe67a2d5-6d6a-4440-80d0-acbe2ff5c27f, please check neutron logs for more 
> information.

I still have this problem; I created the flat external network with no
errors, still I get this when trying to launch an instance. What's
really interesting to me, is that no neutron microservices are
deployed and running on the compute node.

Mark (kmARC)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-14 Thread Kevin Carter
The neutron services are running on the physical hosts of the compute
nodes, as well as in the l3-agent containers. You should be able to do
`service neutron-linuxbridge-agent restart` on the compute node to start
the service. If it's failing to run, there should be something within the
logs to indicate what the failure is. Check your neutron logs in
/var/log/neutron to see what's happening within the service, as there
should be more data on what is causing it to fail. You might also be
able to activate the venv (`source
/openstack/venvs/${SERVICENAME_VERSION}/bin/activate`) and run the
service in the foreground, which may help shed some more light on the
issues.
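
For example, something along these lines (the venv version and config-file
paths here are illustrative; check what is actually deployed on the node):

```shell
source /openstack/venvs/neutron-12.0.2/bin/activate
neutron-linuxbridge-agent \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
```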

If you can jump into the IRC (#openstack-ansible) channel we might be 
able to work faster on getting this resolved for the deployment feel 
free to ping me anytime.

--

Kevin Carter
IRC: cloudnull

On 12/14/2015 11:22 AM, Mark Korondi wrote:
> I am using the liberty branch. Unfortunately this did not help, I get
> the same error.
>
> I also don't understand where the neutron service should run. This is
> the output on my compute node:
>
> root@os-compute-1:~# ps aux | grep neutron
> root 18782  0.0  0.0  11748  2232 pts/0S+   17:56   0:00 grep
> --color=auto neutron
> root@os-compute-1:~# ip netns list
> root@os-compute-1:~#
>
> Is there a step-by step guide that shows how to set up a simple flat
> networking with OSA? I guess this whole thing is optimized around vlan
> provider networking which I don't have on my playground environment.
>
>
>
> On Mon, Dec 14, 2015 at 4:46 PM, Kevin Carter
> <kevin.car...@rackspace.com> wrote:
>> The port binding issues are usually related to a neutron physical interface
>> mapping issue; however, based on your previous config I don't think that was
>> the problem. If you're deploying Liberty/Master (Mitaka), there was a fix
>> that went in that resolved an issue within neutron and the use of
>> L2/multicast groups [0]. If you're on the stable tag, the fix has not been
>> released yet and will be there for the 12.0.3 tag, coming soon. To resolve
>> the issue, simply add the following to your
>> `user_variables.yml` file:
>>
>> == If you don't want to use l2 population add the following ==
>> neutron_l2_population: "False"
>> neutron_vxlan_group: "239.1.1.1"
>>
>> == If you want to use l2 population add the following ==
>> neutron_l2_population: "True"
>>
>> As for the neutron services on your compute nodes, they should be running
>> within the host namespace. In Liberty/Master the python bits will be within
>> a venv, using an upstart init script to control the service. If you're not
>> seeing the neutron service running, it's likely due to this bug [2], which
>> is resolved by dropping the previously mentioned user variable options.
>>
>> I hope this helps and let me know how it goes.
>>
>> [0] https://review.openstack.org/#/c/255624
>> [1] https://github.com/openstack/openstack-ansible/commits/liberty
>> [2] https://bugs.launchpad.net/neutron/+bug/1470584
>>
>> --
>>
>> Kevin Carter
>> IRC: cloudnull
>>
>>
>> 
>> From: Mark Korondi <korondi.m...@gmail.com>
>> Sent: Sunday, December 13, 2015 9:10 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [ansible] One or more undefined variables: 
>> 'dict object' has no attribute 'bridge'
>>
>> Thanks cloudnull,
>>
>> This solved the installation issue. I commented out all non-flat
>> related networks before, to investigate my main problem, which is
>>
>>> PortBindingFailed: Binding failed for port 
>>> fe67a2d5-6d6a-4440-80d0-acbe2ff5c27f, please check neutron logs for more 
>>> information.
>>
>> I still have this problem; I created the flat external network with no
>> errors, still I get this when trying to launch an instance. What's
>> really interesting to me, is that no neutron microservices are
>> deployed and running on the compute node.
>>
>> Mark (kmARC)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-13 Thread Kevin Carter
Hi Mark,

What Wade said is spot on; there's a missing entry in your inventory.
From looking at your configuration, it seems you need to add this stanza
[0], or something like it, to your `openstack_user_config.yml` file
under the "provider_networks" section. This will add the required tunnel
network configuration type to your compute hosts and allow the neutron
installation to continue.

On a related note, the issue is due to an assumption we've made in the
playbook. We've assumed that the "tunnel" network type is always present
on compute nodes. In your current configuration file only a single flat
network was referenced, which failed to load the required network entry
into the host variables of your inventory on your compute node. While I
assume you'll want or need tunnel networks on your compute nodes (the
interface file you shared has a "br-vxlan" device), the single-network
use case seems like something we should be able to address. If you
wouldn't mind raising an issue in launchpad [1], I'd appreciate it.

[0] http://paste.openstack.org/show/481751/
[1] https://bugs.launchpad.net/openstack-ansible
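
For reference, a tunnel-network stanza along the lines of [0] looks roughly
like this (the bridge, interface, and VNI range values below are examples and
must match your environment):

```yaml
provider_networks:
  - network:
      container_bridge: "br-vxlan"
      container_type: "veth"
      container_interface: "eth10"
      ip_from_q: "tunnel"
      type: "vxlan"
      range: "1:1000"
      net_name: "vxlan"
      group_binds:
        - neutron_linuxbridge_agent
```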

--

Kevin Carter
IRC: cloudnull


On 12/12/2015 11:11 AM, Wade Holler wrote:
> Hi Mark,
>
> I haven't reviewed your configs yet but if "bridge" is a valid ansible
> inventory attribute , then this error is usually caused by trying to
> reference a host that ansible didn't check in on yet / gather facts on.
> At least this is what is has been in my brief experience.
>
> For example, if I wanted to reference all hosts in a "webservers" ansible
> group to build a haproxy config, but that playbook didn't apply to the
> "webservers" group, then their facts have not been collected.
>
> Just a thought.
>
> Cheers
> Wade
>
>
> On Sat, Dec 12, 2015 at 9:34 AM Mark Korondi <korondi.m...@gmail.com
> <mailto:korondi.m...@gmail.com>> wrote:
>
> Hi all,
>
> Trying to set up openstack-ansible, but stuck on this point:
>
>  > TASK: [set local_ip fact (is_metal)]
> *
>  > ...
>  > fatal: [os-compute-1] => One or more undefined variables: 'dict
> object' has no
>  > attribute 'bridge'
>  > ...
>  > One or more undefined variables: 'dict object' has no attribute
> 'bridge'
>
> These are my configs:
> - http://paste.openstack.org/show/481739/
>(openstack_user_config.yml)
> - http://paste.openstack.org/show/481740/
>(/etc/network/interfaces on compute host called `os-compute-1`)
>
> I set up the eth12 veth pair interface also on the compute host as
> you can see.
> `ifup-ifdown` works without any problems reported.
>
>
> Why is it reporting an undefined bridge variable? Any ideas on my
> config is well
> appreciated.
>
> Mark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-ansible] Mid Cycle Sprint

2015-12-10 Thread Kevin Carter
Count me in as wanting to be part of the mid-cycle. I live in San
Antonio, but I think we should strongly consider having the meetup in the
UK. It seems most of our deployers live in the UK, and it'd be nice to
get people involved who may not have been able to attend the summit.
While I'll need to get travel approval if we decide to hold the event in
the UK, during the mid-cycle I'd like to focus on working on the
"Upgrade Framework" and "multi-OS" efforts. Additionally, if we have
time, I'd like to see if people are interested in bringing new services
online and work with folks on the implementation details and how to
compose new roles.

Cheers!

--

Kevin Carter
IRC: Cloudnull

On 12/09/2015 08:44 AM, Curtis wrote:
> On Wed, Dec 9, 2015 at 5:45 AM, Jesse Pretorius
> <jesse.pretor...@gmail.com> wrote:
>> Hi everyone,
>>
>> At the Mitaka design summit in Tokyo we had some corridor discussions about
>> doing a mid-cycle meetup for the purpose of continuing some design
>> discussions and doing some specific sprint work.
>>
>> ***
>> I'd like indications of who would like to attend and what
>> locations/dates/topics/sprints would be of interest to you.
>> ***
>>
>
> I'd like to get more involved in openstack-ansible. I'll be going to
> the operators mid-cycle in Feb, so could stay later and attend in West
> London. However, I could likely make it to San Antonio as well. Not
> sure if that helps but I will definitely try to attend where ever it
> occurs.
>
> Thanks.
>
>> For guidance/background I've put some notes together below:
>>
>> Location
>> 
>> We have contributors, deployers and downstream consumers across the globe so
>> picking a venue is difficult. Rackspace have facilities in the UK (Hayes,
>> West London) and in the US (San Antonio) and are happy for us to make use of
>> them.
>>
>> Dates
>> -
>> Most of the mid-cycles for upstream OpenStack projects are being held in
>> January. The Operators mid-cycle is on February 15-16.
>>
>> As I feel that it's important that we're all as involved as possible in
>> these events, I would suggest that we schedule ours after the Operators
>> mid-cycle.
>>
>> It strikes me that it may be useful to do our mid-cycle immediately after
>> the Ops mid-cycle, and do it in the UK. This may help to optimise travel for
>> many of us.
>>
>> Format
>> --
>> The format of the summit is really for us to choose, but typically they're
>> formatted along the lines of something like this:
>>
>> Day 1: Big group discussions similar in format to sessions at the design
>> summit.
>>
>> Day 2: Collaborative code reviews, usually performed on a projector, where
>> the goal is to merge things that day (if a review needs more than a single
>> iteration, we skip it. If a review needs small revisions, we do them on the
>> spot).
>>
>> Day 3: Small group / pair programming.
>>
>> Topics
>> --
>> Some topics/sprints that come to mind that we could explore/do are:
>>   - Install Guide Documentation Improvement [1]
>>   - Development Documentation Improvement (best practises, testing, how to
>> develop a new role, etc)
>>   - Upgrade Framework [2]
>>   - Multi-OS Support [3]
>>
>> [1] https://etherpad.openstack.org/p/oa-install-docs
>> [2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
>> [3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support
>>
>> --
>> Jesse Pretorius
>> IRC: odyssey4me
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>

--

Kevin Carter
IRC: cloudnull

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-ansible] Fedora/CentOS/other Support

2015-11-19 Thread Kevin Carter
I don't believe we should have a class system in the OSs we choose
to support. If we're going to bring in a new OS, I think we should first
bring it in using non-voting jobs; however, once all of the bits have been
finalized, the supported OS should be able to pass the same sets of tests
as everything else. Now, we may run into various applications that may
not be compatible or available on different distros, which is fine, but
in that case we should call out the deficiencies, marking a specific job
as non-voting until we can see about resolving the issues. I would like
to avoid creating second-class operating systems within OSA if at all
possible.

--

Kevin Carter
IRC: cloudnull

On 11/19/2015 09:21 AM, Major Hayden wrote:
> On 11/18/2015 04:19 AM, Jesse Pretorius wrote:
>> The current community has done some research into appropriate patterns to 
>> use and has a general idea of how to do it - but in order to actually 
>> execute there need to be enough people who commit to actually maintaining 
>> the work once it's done. We don't want to carry the extra code if we don't 
>> also pick up extra contributors to maintain the code.
>
> Should there be a concept of primary and secondary operating systems 
> supported by openstack-ansible?  I'm thinking something similar to the tiers 
> of hypervisors in OpenStack where some are tested heavily with gating while 
> others have a lighter amount of testing.
>
> We might be able to have something along the lines of:
>
>* Primary OS: Used in gate checks, heavily tested
>* Secondary OS: Not used in gate checks, lightly tested
>* Tertiary OS: Support in WIP state, not tested
>
> --
> Major Hayden
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-10 Thread Kevin Carter
>I believe Clint already linked to
>https://aphyr.com/posts/309-knossos-redis-and-linearizability or
>similar - but 'known for general ease of use and reliability' is uhm,
>a bold claim. Its worth comparing that (and the other redis writeups)
>to this one: https://aphyr.com/posts/291-call-me-maybe-zookeeper. "Use
>zookeeper, its mature".

Those write-ups are from 2013, and with the general improvements in Redis over
the last two years I'd find it hard to believe that they're still relevant;
however, it's worth testing to confirm whether Redis is a viable option.

>The openjdk is present on the same linux distributions, and has been
>used in both open source and proprietary programs for decades. *what*
>license implications are you speaking of?

The license issues would be related to deployers using Oracle Java, which may or
may not be needed by certain deployers for scale and performance requirements.
While I do not have specific performance numbers at my fingertips to illustrate
general performance issues using ZooKeeper at scale with OpenJDK, I have, in
the past, compared OpenJDK to Oracle Java and found that Oracle Java was quite
a bit more stable and packed far more performance capability. I did find [
http://blog.cloud-benchmarks.org/2015/07/17/cassandra-write-performance-on-gce-and-aws.html
 ], which claims a 32% performance improvement with Cassandra using Oracle Java
8 over OpenJDK, on top of the fact that it was less prone to crashes, though
that may not be entirely relevant to this case. There's also no denying that
Oracle has a questionable history of dealing with open source projects
regarding Java, and because performance / stability concerns may require the
use of Oracle Java, that requirement will undoubtedly come with questionable
license obligations.

>I believe you can do this with zookeeper - both single process, or
>three processes on one machine to emulate a cluster - very easily.
>Quoting http://qnalist.com/questions/29943/java-heap-size-for-zookeeper
>- "It's more dependent on your workload than anything. If you're
>storing on order of hundreds of small znodes then 1gb is going to [be]
>more then fine." Obviously we should test this and confirm it, but
>developer efficiency is a key part of any decision, and AFAIK there is
>absolutely nothing in the way as far as zookeeper goes. 

This would be worthwhile to test, to ensure that a developer really can use a
typical work machine without major performance impacts or changes to their
workflow.

>Just like rabbitmq and openvswitch, its a mature thing, written in a language
>other than Python, which needs its own care and feeding (and that
>feeding is something like 90% zk specific, not 'java headaches').

Agreed that with different solutions care is needed to facilitate optimal
language environments; however, I'm not sure OpenVSwitch/RabbitMQ are the best
comparisons on the point of maturity. As I'm sure you're well aware, both of
these pieces of software have been "mature" for some time, yet up until quite
recently stable OvS has been known to wreak havoc in large-scale production
environments, and RabbitMQ clustering still leaves something to be desired.
The point being: just because something is "mature" doesn't mean that it's the
most stable or the right solution.

> The default should be suitable for use in the majority of clouds

IDK if we can quantify that ZooKeeper fits the "majority" of clouds, as I
believe there'll be scale issues while using OpenJDK, forcing the use of Oracle
Java and the potential for inadvertent license issues. That said, I have no
real means to say Redis will be any better; however, prior operational
experience tells me that managing ZooKeeper at real scale is a PITA.

I too wouldn't mind seeing a solution using Consul put forth as the default. 
It's a really interesting approach: it provides multiple interfaces 
(HTTP/DNS/etc.), should scale to multiple DCs without much if any hacking, is 
written in Go (using 1.5.1+ in master and 1.4.1+ in stable) which comes with 
some impressive performance capabilities, is under active development, is 
licensed under the Mozilla Public License, version 2.0, should be fairly 
minimal in terms of resource requirements on development machines, and hits 
many of the other shininess factors developers / deployers may be interested in.
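
For anyone curious about the primitive underneath, Consul's locking is a 
session-based check-and-set on its KV store. Below is a rough pure-Python model 
of those semantics (an illustration only, not the Consul client API -- a real 
deployment drives this through the /v1/session and /v1/kv HTTP endpoints of a 
running agent):

```python
# Rough model of Consul-style session-based KV locking semantics.
# This sketches the behavior only; names and structure are illustrative.

class FakeConsulKV:
    def __init__(self):
        self._store = {}  # key -> (value, session_id or None)

    def acquire(self, key, session, value=b""):
        """Like PUT /v1/kv/<key>?acquire=<session>: succeeds only if the
        key is unlocked or already held by the same session."""
        _, holder = self._store.get(key, (None, None))
        if holder in (None, session):
            self._store[key] = (value, session)
            return True
        return False

    def release(self, key, session):
        """Like PUT /v1/kv/<key>?release=<session>: only the holder may
        release the lock."""
        value, holder = self._store.get(key, (None, None))
        if holder != session:
            return False
        self._store[key] = (value, None)
        return True


kv = FakeConsulKV()
assert kv.acquire("service/leader", "session-a")      # first caller wins
assert not kv.acquire("service/leader", "session-b")  # contender is refused
assert kv.release("service/leader", "session-a")
assert kv.acquire("service/leader", "session-b")      # now free to acquire
```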


All said, I'm very interested in a DLM solution, and if there's anything I can 
do to help make it happen please let me know.

--

Kevin Carter
IRC: cloudnull



From: Robert Collins <robe...@robertcollins.net>
Sent: Tuesday, November 10, 2015 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager 
discussion @ the summit

On 10 November 2015 at 19:24, Kevin Ca

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-10 Thread Kevin Carter
Clint,

> While I'm sure it works a lot of the time, when it breaks it will break
> in very mysterious, and possibly undetectable way.

> For some things, this would be no big deal. But in some it may result in
> total disaster, like two conductors trying to both own a single Ironic
> node and one accidentally erasing what the other just wrote there.

This is fair; it may break in unpredictable ways due to the master-slave 
replication, the asynchronous nature of the cluster implementation, and how 
Redis promotes a slave when a master goes down (all of which could result in 
catastrophic failures due to the noted race conditions). While a test case 
would be interesting, I acknowledge that it may be impossible to confirm such a 
situation in a controlled environment.
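
One mitigation often discussed for exactly this failure mode is a fencing 
token: the lock service hands out a monotonically increasing number with every 
grant, and the protected resource rejects writes that carry a number older than 
the newest it has seen, so a stale holder (whose lock was re-granted during a 
failover) cannot clobber the current one. A minimal, backend-agnostic sketch -- 
illustrative only, not the Tooz/Redis/Zookeeper API:

```python
import itertools


class LockService:
    """Hands out a monotonically increasing fencing token with each grant."""
    def __init__(self):
        self._counter = itertools.count(1)

    def grant(self):
        return next(self._counter)


class FencedResource:
    """Rejects writes carrying a token older than one already seen."""
    def __init__(self):
        self.highest_token = 0
        self.data = None

    def write(self, token, value):
        if token < self.highest_token:
            return False  # stale holder: its lock was re-granted elsewhere
        self.highest_token = token
        self.data = value
        return True


locks = LockService()
node = FencedResource()

old = locks.grant()   # conductor 1 takes the lock...
new = locks.grant()   # ...it expires and conductor 2 is granted the lock

assert node.write(new, "image-b")      # current holder writes fine
assert not node.write(old, "image-a")  # stale holder is fenced off
assert node.data == "image-b"
```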

>For some things, this would be no big deal. But in some it may result in
>total disaster, like two conductors trying to both own a single Ironic
>node and one accidentally erasing what the other just wrote there.

So that may be a mark against Redis being the preferred back-end for DLM; 
however, a quick look into the issue tracker for Zookeeper reveals a similar 
set of race conditions that are currently open and could result in the same 
kinds of situations [0]. While not ideal, it may really be a case of weighing 
the technology choices (like you've said) and picking the best fit for now.

[0] - http://bit.ly/1NGQrAd  # The search string for the Zookeeper Jira was too 
long, so I shortened it.

--

Kevin Carter
IRC: cloudnull



From: Clint Byrum <cl...@fewbar.com>
Sent: Tuesday, November 10, 2015 2:21 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
discussion @ the summit

Excerpts from Kevin Carter's message of 2015-11-09 22:24:16 -0800:
> Hello all,
>
> The rational behind using a solution like zookeeper makes sense however in 
> reviewing the thread I found myself asking if there was a better way to 
> address the problem without the addition of a Java based solution as the 
> default. While it has been covered that the current implementation would be a 
> reference and that "other" driver support in Tooz would allow for any backend 
> a deployer may want, the work being proposed within devstack [0] would become 
> the default development case thus making it the de-facto standard and I think 
> we could do better in terms of supporting developers and delivering 
> capability.
>
> My thoughts on using Redis+Redislock instead of Java+Zookeeper as the default 
> option:
> * Tooz already support redislock
> * Redis has an established cluster system known for general ease of use and 
> reliability on distributed systems.
> * Several OpenStack projects already support Redis as a backend option or 
> have extended capabilities using a Redis.
> * Redis can be implemented in RHEL, SUSE, and DEB based systems with ease.
> * Redis is Opensource software licensed under the "three clause BSD license" 
> and would not have any of the same questionable license implications as found 
> when dealing with anything Java.
> * The inclusion of Redis would work on a single node allowing developers to 
> continue work using VMs running on Laptops with 4GB or ram but would also 
> scale to support the multi-controller use case with ease. This would also 
> give developers the ability to work on a systems that will actually resemble 
> production.
> * Redislock will bring with it no additional developer facing language 
> dependencies (Redis is written in ANSI C and works ... without external 
> dependencies [1]) while also providing a plethora of language bindings [2].
>
>
> I apologize for questioning the proposed solution so late into the 
> development of this thread and for not making the summit conversations to 
> talk more with everyone whom worked on the proposal. While the ship may have 
> sailed on this point for now I figured I'd ask why we might go down the path 
> of Zookeeper+Java when a solution with likely little to no development effort 
> already exists, can support just about any production/development 
> environment, has lots of bindings, and (IMHO) would integrate with the larger 
> community easier; many OpenStack developers and deployers already know Redis. 
> With the inclusion of ZK+Java in DevStack and the act of making it the 
> default it essentially creates new hard dependencies one of which is Java and 
> I'd like to avoid that if at all possible; basically I think we can do better.
>

Kevin, thanks so much for your thoughts on this. I really do appreciate
that we've had a high diversity of opinions and facts brought to bear on
this subject.

The Aphyr/Jepsen tests that were linked before [1] show, IMO, that Redis
satisfies availability and partition tolerance in the 

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-09 Thread Kevin Carter
Hello all, 

The rationale behind using a solution like Zookeeper makes sense; however, in 
reviewing the thread I found myself asking if there was a better way to address 
the problem without the addition of a Java-based solution as the default. While 
it has been covered that the current implementation would be a reference and 
that "other" driver support in Tooz would allow for any backend a deployer may 
want, the work being proposed within devstack [0] would become the default 
development case, thus making it the de-facto standard, and I think we could do 
better in terms of supporting developers and delivering capability.

My thoughts on using Redis+Redislock instead of Java+Zookeeper as the default 
option:
* Tooz already supports redislock.
* Redis has an established cluster system known for general ease of use and 
reliability on distributed systems. 
* Several OpenStack projects already support Redis as a backend option or have 
extended capabilities using Redis.
* Redis can be implemented on RHEL, SUSE, and DEB based systems with ease. 
* Redis is open-source software licensed under the three-clause BSD license and 
would not have any of the same questionable license implications as found when 
dealing with anything Java.
* The inclusion of Redis would work on a single node, allowing developers to 
continue working using VMs running on laptops with 4GB of RAM, but would also 
scale to support the multi-controller use case with ease. This would also give 
developers the ability to work on systems that will actually resemble 
production.
* Redislock will bring with it no additional developer-facing language 
dependencies (Redis is written in ANSI C and works ... without external 
dependencies [1]) while also providing a plethora of language bindings [2].
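
For reference, the single-instance pattern behind Redislock [2] is small: take 
the lock with SET key token NX PX <ttl>, and release only after comparing the 
stored token (atomically, via a Lua script, in a real deployment). Here is a 
hedged sketch of that logic against a plain dict standing in for one Redis 
instance -- the semantics, not the redis client API:

```python
import uuid

store = {}  # stand-in for a single Redis instance


def lock(key):
    """SET key token NX -- returns a token on success, None if held."""
    token = str(uuid.uuid4())
    if key in store:   # NX: only set if the key is not already present
        return None
    store[key] = token  # a real call adds PX <ttl> so crashed holders self-heal
    return token


def unlock(key, token):
    """Delete only if we still hold the lock (compare the token first)."""
    if store.get(key) == token:
        del store[key]
        return True
    return False  # someone else holds it (or the TTL already expired)


t1 = lock("resource")
assert t1 is not None
assert lock("resource") is None          # second locker is refused
assert not unlock("resource", "bogus")   # wrong token cannot release
assert unlock("resource", t1)
assert lock("resource") is not None      # free again after release
```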


I apologize for questioning the proposed solution so late into the development 
of this thread and for not making the summit conversations to talk more with 
everyone who worked on the proposal. While the ship may have sailed on this 
point for now, I figured I'd ask why we might go down the path of Zookeeper+Java 
when a solution with likely little to no development effort already exists, can 
support just about any production/development environment, has lots of 
bindings, and (IMHO) would integrate with the larger community more easily; 
many OpenStack developers and deployers already know Redis. With the inclusion 
of ZK+Java in DevStack, the act of making it the default essentially creates 
new hard dependencies, one of which is Java, and I'd like to avoid that if at 
all possible; basically, I think we can do better.


[0] - https://review.openstack.org/#/c/241040/
[1] - http://redis.io/topics/introduction
[2] - http://redis.io/topics/distlock

--

Kevin Carter
IRC: cloudnull



From: Fox, Kevin M <kevin@pnnl.gov>
Sent: Monday, November 9, 2015 1:54 PM
To: maishsk+openst...@maishsk.com; OpenStack Development Mailing List (not for 
usage questions)
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager 
discussion @ the summit

Dedicating 3 controller nodes in a small cloud is not always the best 
allocation of resources. You're thinking of medium to large clouds; small 
production clouds are a thing too, and at that scale a little downtime, if you 
actually hit the rare case of a node failure on the controller, may be 
acceptable. It's up to an op to decide.

We've also experienced that sometimes HA software causes more, or longer, 
downtime than it solves, due to its complexity, the knowledge required, proper 
testing, etc. Again, the risk gets higher the smaller the cloud is in some ways.

Being able to keep it simple and small for that case, then scale by switching 
out pieces as needed, does have some tangible benefits.

Thanks,
Kevin

From: Maish Saidel-Keesing [mais...@maishsk.com]
Sent: Monday, November 09, 2015 11:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager 
discussion @ the summit

On 11/05/15 23:18, Fox, Kevin M wrote:
> Your assuming there are only 2 choices,
>   zk or db+rabbit. I'm claiming both hare suboptimal at present. a 3rd might 
> be needed. Though even with its flaws, the db+rabbit choice has a few 
> benefits too.
>
> You also seem to assert that to support large clouds, the default must be 
> something that can scale that large. While that would be nice, I don't think 
> its a requirement if its overly burdensome on deployers of non huge clouds.
>
> I don't have metrics, but I would be surprised if most deployments today 
> (production + other) used 3 controllers with a full ha setup. I would guess 
> that the majority are single controller setups. With those, the
I think it would be safe to assume - that any kind of production cloud -
or any operator that

Re: [openstack-dev] [openstack-ansible] Security spec status update

2015-10-06 Thread Kevin Carter
Great work on this Major! I look forward to seeing the role built out and 
adding it into the stack.

If anyone out there is interested, the greater OpenStack-Ansible community 
would love feedback on the initial role import [0].

[0] - https://review.openstack.org/#/c/231165 

--

Kevin Carter
IRC: cloudnull



From: Major Hayden <ma...@mhtx.net>
Sent: Friday, October 2, 2015 2:19 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Security spec status update

Hello there,

A couple of people were asking me about the status of the security spec[1] for 
openstack-ansible.  Here are a few quick updates as of today:

  * We've moved away from considering CIS temporarily due to licensing and 
terms of use issues
  * We're currently adapting the RHEL 6 STIG[2] for Ubuntu 14.04
  * There are lots of tasks coming together in a temporary repository[3]
  * Documentation is up on ReadTheDocs[4] (temporarily)

At this point, we have 181 controls left to evaluate (out of 264[5]).  Feel 
free to hop into #openstack-ansible and ask any questions you have about the 
work.

[1] 
http://specs.openstack.org/openstack/openstack-ansible-specs/specs/mitaka/security-hardening.html
[2] http://iase.disa.mil/stigs/Pages/index.aspx
[3] https://github.com/rackerlabs/openstack-ansible-security
[4] http://openstack-ansible-security.readthedocs.org/en/latest/
[5] https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Kevin Carter
+1 from me


--

Kevin Carter
IRC: cloudnull


From: Jesse Pretorius <jesse.pretor...@gmail.com>
Sent: Wednesday, September 30, 2015 3:51 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) 
for core reviewer

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the last 
cycle and always makes an effort to ensure that his responses are made after 
thorough testing where possible. I have found his input to be valuable.

--
Jesse Pretorius
IRC: odyssey4me


[openstack-dev] [openstack-ansible] PTL Non-Candidacy

2015-09-14 Thread Kevin Carter
Hello Everyone,

TL;DR - I'm sending this out to announce that I won't be running for PTL of the 
OpenStack-Ansible project in the upcoming cycle. Although I won't be running 
for PTL, with community support I intend to remain an active contributor, just 
with more time spent cross-project and in other upstream communities.

Being a PTL has been difficult, fun, and rewarding, and is something I think 
everyone should strive to do at least once. In the upcoming cycle I believe our 
project has reached the point of maturity where it's time for the leadership to 
change. OpenStack-Ansible was recently moved into the "big-tent" and I consider 
this to be the perfect juncture for me to step aside and allow the community to 
evolve under the guidance of a new team lead. I share the opinion of current 
and former PTLs that having a revolving door of leadership is key to the 
success of any project [0]. While OpenStack-Ansible has only recently been 
moved out of Stackforge and into the OpenStack namespace as a governed project 
(I'm really excited about that), I've had the privilege of working as the 
project technical lead ever since its inception at Rackspace with the initial 
proof of concept known as "Ansible-LXC-RPC". It's been an amazing journey so 
far and I'd like to thank everyone that's helped make OpenStack-Ansible 
(formerly OSAD) possible; none of this would have happened without the 
contributions made by our devoted and ever-growing community of deployers and 
developers.

Thank you again and I look forward to seeing you all online and in Tokyo.

[0] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html

--

Kevin Carter
IRC: cloudnull



Re: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

2015-09-08 Thread Kevin Carter
Hi Paul,

We'd love to collaborate on improving openstack-ansible and getting our 
OpenStack roles into the general big tent and out of our monolithic repository. 
We have a proposal in review for moving our roles out of our main repository 
and into separate repositories [0] making openstack-ansible consume the roles 
through the use of an Ansible Galaxy interface. We've been holding off on this 
effort until os-ansible-deployment is moved into the OpenStack namespace which 
should be happening sometime on September 11 [1][2]. With that, I'd say join us 
in the #openstack-ansible channel if you have any questions on the 
os-ansible-deployment project in general and check out our twice weekly 
meetings [3]. Lastly, many of the core members / deployers of the project will 
be at the summit and if you're interested / will be in Tokyo we can schedule 
some time to work out a path to convergence. 

Look forward to talking to you and others about this more soon. 

--

[0] - https://review.openstack.org/#/c/213779
[1] - https://review.openstack.org/#/c/200730
[2] - 
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames
[3] - https://wiki.openstack.org/wiki/Meetings/openstack-ansible

Kevin Carter
IRC: cloudnull



From: Paul Belanger <pabelan...@redhat.com>
Sent: Tuesday, September 8, 2015 9:57 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

Greetings,

I wanted to start a discussion about the future of ansible / ansible roles in
OpenStack. Over the last week or so I've started down the ansible path, starting
my first ansible role; I've started with ansible-role-nodepool[1].

My initial question is simple: now that big tent is upon us, I would like
some way to include ansible roles into the openstack git workflow.  I first
thought the role might live under openstack-infra, however I am not sure that
is the right place.  My reason is, -infra tends to include modules they
currently run under the -infra namespace, and I don't want to start the effort
to convince people to migrate.

Another thought might be to reach out to the os-ansible-deployment team and ask
how they see roles in OpenStack moving forward (mostly the reason for this
email).

Either way, I would be interested in feedback on moving forward on this. Using
travis-ci and github works, but the OpenStack workflow is much better.

[1] https://github.com/pabelanger/ansible-role-nodepool



Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated with Ansible! Ready for NFV L2 Bridges via Heat!

2015-08-31 Thread Kevin Carter
Thiago, Sorry for not replying inline, OWA is all I have access to today.


We too have the ability to run everything on an AIO (all-in-one), which can be 
accomplished with a single command: 

---
# Command to run an AIO
bash <(curl -s 
https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/scripts/run-aio-build.sh)
---

Documentation on building a development stack: [0]
Official documentation that we've been maintaining: [1]


This AIO process is what we use to gate every commit made to the upstream 
project; it creates several dummy interfaces to make it mimic a real-world 
deployment, even though it's on a single host, and it has the ability to test 
clustered Galera, Rabbit, repos, etc.


If you don't want to use our containerized architecture, you don't have to. 
Simply set the flag `is_metal: true` within the conf.d items that you don't 
want to run within a container. Information on how this can be done can be read 
/ seen here: [2] -- essentially, set that flag and all container creation / 
management will be ignored. This option can be set on various group types and 
can be mixed/matched as you see fit.
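
To illustrate the shape of it, here is a trimmed, hypothetical fragment modeled 
on the linked env.d/cinder.yml (the exact container/group names in your tree 
may differ):

```yaml
# Hypothetical env.d fragment -- container/group names are illustrative.
# With is_metal: true the service is deployed on the host itself and all
# container create / management tasks for it are skipped.
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true
```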


We are not using Ubuntu packages or any OpenStack packages from any distro; in 
truth, we have no intention of doing so unless there's someone within the 
community who wants to contribute and maintain that type of deployment. We've 
found that we're able to ensure better consistency and stability when 
installing from source. Additionally, we can test / implement upstream bug and 
security fixes faster without having to wait on package fixes from the distros 
or having to deal with some of the brokenness they've provided.


As for backporting fixes or custom code, OSAD can help you do that already by 
simply maintaining your own git sources. The OSAD project is able to build from 
and use just about anything. This is enabled and covered here: [3]. I'd be 
happy to cover more of that if it would be of interest to you. IMHO building 
from git in this manner will help with many of the packaging problems that can 
arise from using the PPA system.
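
For example, overriding a single project's source might look something like 
this (the variable names follow the repo_packages pattern, but the repo URL and 
branch below are invented for illustration):

```yaml
# Hypothetical override, e.g. dropped into user_variables.yml.
# Point a service at your own fork/branch carrying backported fixes.
nova_git_repo: https://git.example.com/myorg/nova
nova_git_install_branch: myorg/kilo-backports
```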


We do not support OvS yet. However, we'd love someone to come along and 
contribute some bits to make it supportable. The framework itself is robust 
enough to support a variety of network providers but, as I've mentioned before, 
we simply need contributors who want to see these types of features within the 
project; PRs are always welcome.


Thiago, stop by the #openstack-ansible IRC channel; there's almost always 
someone in channel and we'd be happy to help answer any questions that you or 
your team may have, and if there's sufficient interest I'm sure we can align on 
some of your goals to progress our collective projects.

--

Kevin Carter
IRC: cloudnull

[0] - 
https://github.com/stackforge/os-ansible-deployment/blob/master/development-stack.rst
[1] - http://osad.readthedocs.org/en/latest/
[2] - 
https://github.com/stackforge/os-ansible-deployment/blob/master/etc/openstack_deploy/env.d/cinder.yml#L53-L57
[3] - 
https://github.com/stackforge/os-ansible-deployment/tree/master/playbooks/defaults/repo_packages


From: Martinx - ジェームズ <thiagocmarti...@gmail.com>
Sent: Monday, August 31, 2015 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated 
with Ansible! Ready for NFV L2 Bridges via Heat!

Hey guys,

 I know, the Ansible deployment
(https://github.com/stackforge/os-ansible-deployment) is impressive...

 But, for our need, we required something much simpler, that could be
deployed in an all-in-one fashion, without LXC containers for each
service, Ansible against localhost to simplify even more and using
only one NIC to begin with (br-ex and vxlan data net are being used
against a dummy0 and dummy1 interfaces).

 So, the Stackforge deployment is far too complex for what we need.

 We found some limitations and a serious bug on Linux (I'll report
later) when using it with Linux Bridges (OVS is a requirement here,
especially because we want to try DPDK later).


 Concerns about "stackforge/os-ansible-deployment":


 1- It does not make use of Ubuntu OpenStack packages; it installs
everything from Git and Python PIP (we don't like it this way -- why
not use Ubuntu packages? No reason...);

 2- It uses Linux Bridges but, unfortunately, it is not ready yet for
NFV deployments (where an instance acts as an L2 bridge); it simply
doesn't work / can not be used for "L2-NFV" (we found a very serious
BUG on Linux);

 3- Very hard deployment procedure, and it needs more than 1 physical
box (or more than one VM: 1 for the Ansible host, 1 for the controller
and its containers, 1 for the net node, and the compute nodes).


So, we need Open vSwitch (not supported by "os-ansible-deployment",
right?), OpenStack Ubuntu packages from Cloud Archive and a very
simple setup / topology (

Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated with Ansible! Ready for NFV L2 Bridges via Heat!

2015-08-26 Thread Kevin Carter
+1 I'd love to know this too. 

Additionally, if vagrant is something that is important to folks in the greater 
community it would be great to get some of those bits upstreamed. 

Per the NFV options, I don't see much in the way of OSAD not being able to 
support that presently; it's really a matter of introducing the new 
configuration options / package additions. However, I may be missing something 
based on a cursory look at your published repository. If you're interested, I 
would be happy to help you get the capabilities into OSAD so that the greater 
community can benefit from some of the work you've done.

--

Kevin Carter
IRC: cloudnull



From: Thierry Carrez thie...@openstack.org
Sent: Wednesday, August 26, 2015 5:15 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ANN] OpenStack Kilo on Ubuntu fully automated 
with Ansible! Ready for NFV L2 Bridges via Heat!

Martinx - ジェームズ wrote:
  I'm proud to announce an Ansible Playbook to deploy OpenStack on Ubuntu!
  Check it out!
  * https://github.com/sandvine/os-ansible-deployment-lite

How does it compare with the playbooks developed as an OpenStack project
by the OpenStackAnsible team[1] ?

Any benefit, difference ? Anything you could contribute back there ? Any
value in merging the two efforts ?

[1] http://governance.openstack.org/reference/projects/openstackansible.html

--
Thierry Carrez (ttx)



[openstack-dev] [os-ansible-deployment] Core nomination for Matt Kassawara (sam-i-am)

2015-07-17 Thread Kevin Carter
Hello all,

I would like to nominate Matt Kassawara (IRC: sam-i-am) to the OpenStack 
Ansible deployment core team. Matt has been instrumental in building out our 
current install and use documentation[0], he is just about always in the 
community channel(s) helping folks, is a great technical resource for all 
things networking / OpenStack, and is one of the main people we already rely on 
to ensure we're documenting the right things in the right places. IMHO, it's 
about time he be given Core powers within the OSAD project.

Please respond with +1/-1s and or any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

--

Kevin Carter
IRC: cloudnull

[0] 
https://review.openstack.org/#/q/owner:%22Matthew+Kassawara%22+status:merged+project:stackforge/os-ansible-deployment,n,z



Re: [openstack-dev] [os-ansible-deployment] Core nomination for Matt Kassawara (sam-i-am)

2015-07-17 Thread Kevin Carter
To be clear, Matt has been an active reviewer. The previous reference link 
simply showed commits owned by him which also pertained to our project. Here is 
a complete list of the changes he's reviewed [0][1] in the OSAD project.


--

Kevin Carter
IRC: cloudnull

[0] 
https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22Matthew+Kassawara%22,n,z
[1] 
https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment-specs+reviewer:%22Matthew+Kassawara%22,n,z



From: Andy McCrae andy.mcc...@gmail.com
Sent: Friday, July 17, 2015 1:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [os-ansible-deployment] Core nomination for Matt 
Kassawara (sam-i-am)


Matt does add value, and I definitely don't want to imply that what he does 
isn't great work or that it isn't helpful to the project; however, it's hard to 
+1 a Core reviewer who has 1 review in the Liberty timeframe. Since one of the 
primary purposes of core reviewers is to actively review code, it doesn't make 
much sense to me.

If Matt becomes more active performing quality reviews then I have no qualms.

On Fri, Jul 17, 2015 at 4:14 PM, Kevin Carter 
kevin.car...@rackspace.commailto:kevin.car...@rackspace.com wrote:
Hello all,

I would like to nominate Matt Kassawara (IRC: sam-i-am) to the OpenStack 
Ansible deployment core team. Matt has been instrumental in building out our 
current install and use documentation[0], he is just about always in the 
community channel(s) helping folks, is a great technical resource for all 
things networking / OpenStack, and is one of the main people we already rely on 
to ensure we're documenting the right things in the right places. IMHO, its 
about time he be given Core powers within the OSAD project.

Please respond with +1/-1s and or any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

--

Kevin Carter
IRC: cloudnull

[0] 
https://review.openstack.org/#/q/owner:%22Matthew+Kassawara%22+status:merged+project:stackforge/os-ansible-deployment,n,z



Re: [openstack-dev] [os-ansible-deployment] [openstack-ansible] Using commit flags appropriately

2015-07-16 Thread Kevin Carter
+1 - I think we should start doing this immediately. 

--

Kevin Carter
Racker, Developer, Hacker @ The Rackspace Private Cloud.


From: Ian Cordasco ian.corda...@rackspace.com
Sent: Thursday, July 16, 2015 10:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [os-ansible-deployment] [openstack-ansible] Using 
commit flags appropriately

Hey everyone,

Now that the project is starting to grow and has some amount of documentation, 
we should really start using flags in our commits more appropriately, e.g., 
UpgradeImpact, DocImpact, etc.

For example, my own recent change to upgrade our keystone module to use v3
should have also had an UpgradeImpact flag but only had a DocImpact flag.

This will help consumers of openstack-ansible in the future and it will
help us if/when we start writing release notes for openstack-ansible.
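
For anyone new to the convention, these flags are just keywords placed in the 
commit message body where the CI tooling picks them up. A made-up example (the 
summary line and body below are invented for illustration):

```
Upgrade keystone library calls to the v3 API

Switching to v3 changes the auth payload that deployers see, so
existing overrides may need updating after an upgrade.

DocImpact
UpgradeImpact
```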

Cheers,
Ian
sigmavirus24 (irc.freenode.net)



Re: [openstack-dev] [kolla][release] Announcing Liberty-1 release of Kolla

2015-07-01 Thread Kevin Carter
Steve,

The initial review you guys did helped a bunch, and it was great to work
with you and everyone else in the channel. As you're aware, the code base you
tested was our Juno (stable at that time) release, which has more than its
fair share of Rackspace-isms. One of those is the requirement to have access to
the upstream repository for the installation of its python bits. So within that
release it is true that if the upstream repository were to go away, a
redeployment or an expansion of the stack would be impossible until service
was restored. While you could always self-host the upstream repos (there is an
open rsync relay), that wasn't functionality baked into OSAD at that time.
However, since your eval we've released Kilo, which now provides the ability
to self-host all of the python bits, container images, and anything else you
may need or want from within the infrastructure (that's the default and what we
gate on). While this functionality existed in master when you guys did the
test, it had not been officially released, so it's likely you hadn't looked
into it at that point. Additionally, we've done a huge amount of work to
separate Kilo / master from what was done in Icehouse / Juno, while also
providing an upgrade path for our existing deployments, which will ensure that
deployers are able to take advantage of the general improvements throughout the
stack in Kilo and beyond. We, like you, still have some reliance on upstream
resources; however, the inclusion of the repo-server containers should thwart
those issues. Our python bits are built once within that repo-server
infrastructure, and everything within OSAD points to the internal repository
as its source of truth. As I said, we still have some reliance on upstream and
likely always will, but once an OSAD deployment is online, in Kilo or master,
it should be able to redeploy itself indefinitely. Obviously there's still more
that we can do to make this better, and we're getting there, but I don't
believe the same theoretical issues you saw before are present now.

All that said, great work on the Liberty-1 release, and I look forward to
playing with Kolla with these new bits sometime in the near future.

--

Kevin Carter
IRC: cloudnull


From: Steven Dake (stdake) std...@cisco.com
Sent: Wednesday, July 1, 2015 2:21 PM
To: OpenStack Development Mailing List (not for usage questions); s...@yaple.net
Subject: Re: [openstack-dev] [kolla][release] Announcing Liberty-1 release of 
Kolla

On 7/1/15, 8:11 AM, Ian Cordasco ian.corda...@rackspace.com wrote:



On 6/30/15, 23:36, Sam Yaple sam...@yaple.net wrote:

Ian,


The most significant difference would be that Kolla uses image based
deployment rather than building from source on each node at runtime
allowing for a more consistent and repeatable deployment.


Do you mean specific docker images? Can you expand on how
os-ansible-deployment is not repeatable? They use an lxc-container cached
image so all containers are uniform (consistent, repeatable, etc.) and
build wheels (once) and use an internal repo mirror so that all
installations use the exact same set of wheels (e.g., consistent and
repeatable).

Are there places where you've found osad to be not consistent or
repeatable?

Ian,

We did a 10 day eval of OSAD and liked the tech.  We did find the way the
deployment pipeline works to be lacking.  A purely theoretical problem
with the deployment pipeline is that key repositories used to build the
software could be offline.  Since the building of the software occurs
during deployment, this could result in an inability to alter the
configuration of the deployment after OpenStack is deployed.  Kolla
suffers from this same problem during the installation (build pipeline)
step.  But as long as you have already built images somewhere in your
system, you are still able to deploy, avoiding complete downtime on
deployment that OSAD could theoretically suffer.

This theoretical issue makes the deployment non-repeatable.  Hope our 10
day eval analysis helps improve OSAD.

Regards
-steve



On Tue, Jun 30, 2015 at 2:28 PM, Ian Cordasco
ian.corda...@rackspace.com wrote:



On 6/29/15, 23:59, Steven Dake (stdake) std...@cisco.com wrote:

The Kolla community is pleased to announce the release of the Kolla
Liberty 1 milestone.  This release fixes 56 bugs and implements 14
blueprints!


Our community developed the following notable features:



* A start at source-based containers

So how does this now compare to the stackforge/os-ansible-deployment
(soon
to be openstack/openstack-ansible) project?


[openstack-dev] [os-ansible-deployment] new features needing reviewers

2015-06-30 Thread Kevin Carter
Hello all, 

The os-ansible-deployment/OpenStack-Ansible project is gearing up for a couple 
of feature drops and is looking for reviews from interested people within the 
greater deployer/operator/dev community to ensure that we're developing the 
features/support people are looking for.


Ceilometer:
  This review[0] completes the ceilometer blueprint[1], which adds support for 
ceilometer throughout the stack. While the change doesn't bring with it a 
scalable mongodb deployment, the solution does add ceilometer to OSAD as a 
first-class capability and is tested using mongodb via our gate scripts. That 
said, if anyone knows of or has an Ansible solution for deploying, managing, 
and scaling mongodb, we'd love to review, contribute, and pull it in as an 
external dependency when deploying Ceilometer with OSAD.


Ceph:
  This review[2] adds ceph support for the various OpenStack services that can 
consume ceph. This is not a way to deploy a ceph infrastructure using Ansible; 
for that we're still recommending the upstream ceph playbooks/roles[3]. 
Additionally, this is not implementing ceph as a replacement for swift.


These reviews are presently targeted at master, which gates on upstream 
liberty, but we're intending to bring these changes into our kilo branch, which 
tracks upstream kilo, to be released with the 11.1.0 tag, scheduled to drop in 
the next few weeks[4]. We have lots of new goodness coming for 11.1.0, but 
these two features are largish, so we'd appreciate some additional 
feedback/reviews. If you have some time and are interested in the OSAD project, 
we'd love to hear from you.


--

Kevin Carter

[0] https://review.openstack.org/#/c/173067/
[1] 
https://github.com/stackforge/os-ansible-deployment-specs/blob/master/specs/kilo/implement-ceilometer.rst
[2] https://review.openstack.org/#/c/181957/
[3] https://github.com/ceph/ceph-ansible
[4] https://launchpad.net/openstack-ansible/+milestone/11.1.0


Re: [openstack-dev] [os-ansible-deployment] Core team nomination

2015-06-19 Thread Kevin Carter
Please join me in welcoming Ian Cordasco to the `os-ansible-deployment` core 
team!


--

Kevin Carter


From: Hugh Saunders h...@wherenow.org
Sent: Thursday, June 18, 2015 10:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [os-ansible-deployment] Core team nomination

+1

--
Hugh Saunders

On 13 June 2015 at 18:18, Kevin Carter 
kevin.car...@rackspace.commailto:kevin.car...@rackspace.com wrote:
Hello,

I would like to nominate Ian Cordasco (sigmavirus24 on IRC) for the 
os-ansible-deployment-core team. Ian has been contributing to the OSAD project 
for some time now and has always had quality reviews[0], he's landing great 
patches[1], he's almost always in the meetings, and is simply an amazing person 
to work with. His open-source-first attitude, security mindset, and willingness 
to work cross-project are invaluable and will only stand to better the project 
and the deployers who consume it.

Please respond with +1/-1s and/or any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

Thank you.

--

Kevin Carter

[0] 
https://review.openstack.org/#/q/status:closed+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z
[1] 
https://review.openstack.org/#/q/status:merged+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z




[openstack-dev] [os-ansible-deployment] Core team nomination

2015-06-13 Thread Kevin Carter
Hello,

I would like to nominate Ian Cordasco (sigmavirus24 on IRC) for the 
os-ansible-deployment-core team. Ian has been contributing to the OSAD project 
for some time now and has always had quality reviews[0], he's landing great 
patches[1], he's almost always in the meetings, and is simply an amazing person 
to work with. His open-source-first attitude, security mindset, and willingness 
to work cross-project are invaluable and will only stand to better the project 
and the deployers who consume it.

Please respond with +1/-1s and/or any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

Thank you.

-- 

Kevin Carter

[0] 
https://review.openstack.org/#/q/status:closed+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z
[1] 
https://review.openstack.org/#/q/status:merged+owner:%22Ian+Cordasco%22+project:stackforge/os-ansible-deployment,n,z



[openstack-dev] [tc][openstack-ansible] Proposal to move project openstack-ansible

2015-06-12 Thread Kevin Carter
Hello TC members and fellow stackers,

The `os-ansible-deployment`[0] project has submitted a review to change the 
governance of the project[1]. Our community of developers and deployers has 
discussed the move at length, in IRC/meetings and on an etherpad[2], and 
believes that we're ready to be considered for a move to the big tent. At this 
time, we would like to formally ask the TC to consider our candidacy to be an 
official OpenStack project.

Thank you

--

Kevin Carter

[0] https://github.com/stackforge/os-ansible-deployment
[1] https://review.openstack.org/#/c/191105/
[2] https://etherpad.openstack.org/p/osad-openstack-naming



[openstack-dev] Nominating Serge van Ginderachter to the os-ansible-deployment core team

2015-05-28 Thread Kevin Carter
Hello,

I would like to nominate Serge (svg on IRC) for the os-ansible-deployment-core 
team. Serge has been involved with the greater Ansible community for some time 
and has been working with the OSAD project for the last couple of months. He 
has been an active contributor in the #openstack-ansible channel and has been 
participating in the general deployment/evolution of the project since joining 
the channel. I believe that his expertise will be invaluable to moving OSAD 
forward and is an ideal candidate for the Core team.

Please respond with +1/-1s and or any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

Thank you.

--

Kevin Carter


Re: [openstack-dev] os-ansible-deployment has released Kilo

2015-05-05 Thread Kevin Carter
Hi Dani,

To touch on the provisioining we presently have no plans to introduce baremetal 
provisioning via a PXE, Razor, djeep, etc… Our present thinking is to allow 
people to use whatever they want to provision the hosts and then come in with 
OSAD, post the OS deployment, to install OpenStack.

As for the LXC containers they are used for infrastructure components within 
the cloud deployment. By default your instances are run under KVM which would 
allow you to sick to standard VM instances.If you didn’t want the separation 
and scalability that the LXC containers provide the infrastructure components, 
you could set the flag, “is_metal” to true within the 
`openstack_environment.yml` file which would install some or all of the various 
services, that we presently support, on the hosts specified within your Anisble 
inventory.
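
As a hypothetical sketch (the exact layout of `openstack_environment.yml` varies between OSAD releases, so the surrounding keys below are assumptions rather than a verbatim excerpt), the toggle looks roughly like:

```yaml
# Illustrative fragment only -- check your OSAD release for the real layout.
# Setting is_metal to true deploys the service directly on the host
# instead of inside an LXC container.
container_skel:
  nova_compute_container:
    properties:
      is_metal: true
```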

I hope that helps answer your questions. We are working on documentation to 
better spell out all of the things you can do with the system, so watch for 
that soon. Also feel free to ping me (@cloudnull) or others within the 
#openstack-ansible channel, or pop by one of our meetings [ 
https://wiki.openstack.org/wiki/Meetings/openstack-ansible ]; there's a bunch 
of us around who are more than happy to help and answer any more questions you 
might have.

—

Kevin

 On May 4, 2015, at 15:49, Daniel Comnea comnea.d...@gmail.com wrote:
 
 Hey Kevin,
 
 Let me add more info:
 
 1) trying to understand if there is any support for baremetal provisioning 
 (e.g setup the UCS manager if using UCS blades etc, dump the OS on it). I 
 don't care if is Ironic or PXE/ Kickstart or Foreman etc
 2) deployments on baremetal without the use of LXC containers (stick with 
 default VM instances)
 
 Dani
 
 On Mon, May 4, 2015 at 3:02 PM, Kevin Carter kevin.car...@rackspace.com 
 wrote:
 Hey Dani,
 
 Are you looking for support for Ironic for baremetal provisioning or for 
 deployments on baremetal without the use of LXC containers?
 
 —
 
 Kevin
 
  On May 3, 2015, at 06:45, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Great job Kevin  co !!
 
  Are there any plans in supporting configure the baremetal as well ?
 
  Dani
 
  On Thu, Apr 30, 2015 at 11:46 PM, Liu, Guang Jun (Gene) 
  gene@alcatel-lucent.com wrote:
  cool!
  
  From: Kevin Carter [kevin.car...@rackspace.com]
  Sent: Thursday, April 30, 2015 4:36 PM
  To: OpenStack Development Mailing List (not for usage questions)
   Subject: [openstack-dev] os-ansible-deployment has released Kilo
 
  Hello Stackers,
 
  The OpenStack Ansible Deployment (OSAD) project is happy to announce our 
  stable Kilo release, version 11.0.0. The project has come a very long way 
  from initial inception and taken a lot of work to excise our original 
  vendor logic from the stack and transform it into a community-driven 
  architecture and deployment process. If you haven’t yet looked at the 
  `os-ansible-deployment` project on StackForge, we'd love for you to take a 
  look now [ https://github.com/stackforge/os-ansible-deployment ]. We offer 
  an OpenStack solution orchestrated by Ansible and powered by upstream 
  OpenStack source. OSAD is a batteries included OpenStack deployment 
  solution that delivers OpenStack as the developers intended it: no 
  modifications to nor secret sauce in the services it deploys. This release 
  includes 436 commits that brought the project from Rackspace Private Cloud 
  technical debt to an OpenStack community deployment solution. I'd like to 
  recognize the following people (from Git logs) for all of their hard work 
  in making the OSAD project successful:
 
  Andy McCrae
  Matt Thompson
  Jesse Pretorius
  Hugh Saunders
  Darren Birkett
  Nolan Brubaker
  Christopher H. Laco
  Ian Cordasco
  Miguel Grinberg
  Matthew Kassawara
  Steve Lewis
  Matthew Oliver
  git-harry
  Justin Shepherd
  Dave Wilde
  Tom Cameron
  Charles Farquhar
  BjoernT
  Dolph Mathews
  Evan Callicoat
  Jacob Wagner
  James W Thorne
  Sudarshan Acharya
  Jesse P
  Julian Montez
  Sam Yaple
  paul
  Jeremy Stanley
  Jimmy McCrory
  Miguel Alex Cantu
  elextro
 
 
  While Rackspace remains the main proprietor of the project in terms of 
  community members and contributions, we're looking forward to more 
  community participation especially after our stable Kilo release with a 
  community focus. Thank you to everyone that contributed on the project so 
  far and we look forward to working with more of you as we march on.
 
  —
 
  Kevin Carter
 
 

Re: [openstack-dev] os-ansible-deployment has released Kilo

2015-05-04 Thread Kevin Carter
Hey Dani,

Are you looking for support for Ironic for baremetal provisioning or for 
deployments on baremetal without the use of LXC containers?

—

Kevin

 On May 3, 2015, at 06:45, Daniel Comnea comnea.d...@gmail.com wrote:
 
 Great job Kevin  co !!
 
 Are there any plans in supporting configure the baremetal as well ?
 
 Dani
 
 On Thu, Apr 30, 2015 at 11:46 PM, Liu, Guang Jun (Gene) 
 gene@alcatel-lucent.com wrote:
 cool!
 
 From: Kevin Carter [kevin.car...@rackspace.com]
 Sent: Thursday, April 30, 2015 4:36 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] os-ansible-deployment has released Kilo
 
 Hello Stackers,
 
 The OpenStack Ansible Deployment (OSAD) project is happy to announce our 
 stable Kilo release, version 11.0.0. The project has come a very long way 
 from initial inception and taken a lot of work to excise our original vendor 
 logic from the stack and transform it into a community-driven architecture 
 and deployment process. If you haven’t yet looked at the 
 `os-ansible-deployment` project on StackForge, we'd love for you to take a 
 look now [ https://github.com/stackforge/os-ansible-deployment ]. We offer an 
 OpenStack solution orchestrated by Ansible and powered by upstream OpenStack 
 source. OSAD is a batteries included OpenStack deployment solution that 
 delivers OpenStack as the developers intended it: no modifications to nor 
 secret sauce in the services it deploys. This release includes 436 commits 
 that brought the project from Rackspace Private Cloud technical debt to an 
 OpenStack community deployment solution. I'd like to recognize the following 
 people (from Git logs) for all of their hard work in making the OSAD project 
 successful:
 
 Andy McCrae
 Matt Thompson
 Jesse Pretorius
 Hugh Saunders
 Darren Birkett
 Nolan Brubaker
 Christopher H. Laco
 Ian Cordasco
 Miguel Grinberg
 Matthew Kassawara
 Steve Lewis
 Matthew Oliver
 git-harry
 Justin Shepherd
 Dave Wilde
 Tom Cameron
 Charles Farquhar
 BjoernT
 Dolph Mathews
 Evan Callicoat
 Jacob Wagner
 James W Thorne
 Sudarshan Acharya
 Jesse P
 Julian Montez
 Sam Yaple
 paul
 Jeremy Stanley
 Jimmy McCrory
 Miguel Alex Cantu
 elextro
 
 
 While Rackspace remains the main proprietor of the project in terms of 
 community members and contributions, we're looking forward to more community 
 participation especially after our stable Kilo release with a community 
 focus. Thank you to everyone that contributed on the project so far and we 
 look forward to working with more of you as we march on.
 
 —
 
 Kevin Carter
 
 





[openstack-dev] os-ansible-deployment has released Kilo

2015-04-30 Thread Kevin Carter
Hello Stackers,

The OpenStack Ansible Deployment (OSAD) project is happy to announce our stable 
Kilo release, version 11.0.0. The project has come a very long way from initial 
inception and taken a lot of work to excise our original vendor logic from the 
stack and transform it into a community-driven architecture and deployment 
process. If you haven’t yet looked at the `os-ansible-deployment` project on 
StackForge, we'd love for you to take a look now [ 
https://github.com/stackforge/os-ansible-deployment ]. We offer an OpenStack 
solution orchestrated by Ansible and powered by upstream OpenStack source. OSAD 
is a batteries included OpenStack deployment solution that delivers OpenStack 
as the developers intended it: no modifications to nor secret sauce in the 
services it deploys. This release includes 436 commits that brought the project 
from Rackspace Private Cloud technical debt to an OpenStack community 
deployment solution. I'd like to recognize the following people (from Git logs) 
for all of their hard work in making the OSAD project successful:

Andy McCrae
Matt Thompson
Jesse Pretorius
Hugh Saunders
Darren Birkett
Nolan Brubaker
Christopher H. Laco
Ian Cordasco
Miguel Grinberg
Matthew Kassawara
Steve Lewis
Matthew Oliver
git-harry
Justin Shepherd
Dave Wilde
Tom Cameron
Charles Farquhar
BjoernT
Dolph Mathews
Evan Callicoat
Jacob Wagner
James W Thorne
Sudarshan Acharya
Jesse P
Julian Montez
Sam Yaple
paul
Jeremy Stanley
Jimmy McCrory
Miguel Alex Cantu
elextro


While Rackspace remains the main proprietor of the project in terms of 
community members and contributions, we're looking forward to more community 
participation especially after our stable Kilo release with a community focus. 
Thank you to everyone that contributed on the project so far and we look 
forward to working with more of you as we march on.

—

Kevin Carter





[openstack-dev] [os-ansible-deployment] Kilo branch now available

2015-04-16 Thread Kevin Carter
The stackforge/os-ansible-deployment project is happy to announce initial 
support for deployments using upstream Kilo based on the current head of the 
stable/kilo branches (as found within the various OpenStack projects that we 
support). The OSAD project will be creating RC tags in short order and while 
the branch is not production ready, due to Kilo not being officially released 
yet, we've been tracking Kilo (Master) for a while and OSAD is now stable 
enough to create the branch.

At present we support cinder, glance, heat, horizon, keystone, neutron, nova, 
and swift, as well as a full complement of roles to accompany the OpenStack 
services, such as Galera, RabbitMQ, etc. If you have some time and are 
interested in Ansible deploying OpenStack, please have a look at the project; 
and if you've looked at it in the past, we'd love for you to give it another 
look, as there've been a lot of changes since its initial release. If you're 
looking to kick the tires, we have some minimal documentation on getting 
started, which can be found here: [ 
https://github.com/stackforge/os-ansible-deployment/blob/kilo/development-stack.rst
 ].

Thanks again to everyone that's worked on our project so far!

--

Kevin Carter





Re: [openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker for core team

2015-03-30 Thread Kevin Carter
Please join me in welcoming Nolan Brubaker (palendae) to the 
os-ansible-deployment core team.

—

Kevin Carter


 On Mar 30, 2015, at 06:54, Jesse Pretorius jesse.pretor...@gmail.com wrote:
 
 On 25 March 2015 at 15:24, Kevin Carter kevin.car...@rackspace.com wrote:
 I would like to nominate Nolan Brubaker (palendae on IRC) for the 
 os-ansible-deployment-core team. Nolan has been involved with the project for 
 the last few months and has been an active reviewer with solid reviews. IMHO, 
 I think he is ready to receive core powers on the repository.
 
 References:
   [ 
 https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22nolan+brubaker%253Cnolan.brubaker%2540rackspace.com%253E%22,n,z
  ]
 
 Please respond with +1/-1s or any other concerns.
 
 +1 Nolan's been an active reviewer, provided good feedback and contributions.






[openstack-dev] [os-ansible-deployment] Nominating Nolan Brubaker for core team

2015-03-25 Thread Kevin Carter
Greetings,

I would like to nominate Nolan Brubaker (palendae on IRC) for the 
os-ansible-deployment-core team. Nolan has been involved with the project for 
the last few months and has been an active reviewer with solid reviews. IMHO, I 
think he is ready to receive core powers on the repository.

References:
  [ 
https://review.openstack.org/#/q/project:stackforge/os-ansible-deployment+reviewer:%22nolan+brubaker%253Cnolan.brubaker%2540rackspace.com%253E%22,n,z
 ]

Please respond with +1/-1s or any other concerns.

As a reminder, we are using the voting process outlined at [ 
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess ] to add 
members to our core team.

—

Kevin Carter



[openstack-dev] [os-ansible-deployment] Changes in the os-ansible-deployment project

2015-02-19 Thread Kevin Carter
Hello all,

Over the past few weeks the os-ansible-deployment team has been working on 
getting the repository (https://github.com/stackforge/os-ansible-deployment) 
into a more community-friendly state. For anyone who's looked at the project in 
the past and thought it was too Rackspace-specific, I'm here to announce that 
we've fixed that. The project has been De-Rackspace'd and Genericized while 
maintaining the reference architecture and intent. As of today, we've fully 
excised all of the Rackspace Private Cloud-specific roles and playbooks and have 
converted the remaining roles and playbooks into ones using Ansible best 
practices. We still have a lot of work to do but the project is growing in 
capability and scale and we’re eager to begin working more with the greater 
OpenStack community on what we think will be an asset to Operators and 
Developers alike. If you have an interest in Ansible, LXC containers, and/or 
deployments from source we’d love to have you join our community.

Weekly meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible
IRC channel: #openstack-ansible

Thank you again to everyone who’s contributed so far and we look forward to 
seeing you online.

—

Kevin





[openstack-dev] Announcing the openstack ansible deployment repo

2014-12-10 Thread Kevin Carter
Hello all,


The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
playbooks, scripts, and libraries to deploy OpenStack inside containers for 
production use. We’ve been running this deployment for a while now,
and at the last OpenStack summit we discussed moving the repo into Stackforge 
as a community project. Today, I’m happy to announce that the 
os-ansible-deployment repo is online within Stackforge. This project is a 
work in progress and we welcome anyone who’s interested in contributing.

This project includes:
  * Ansible playbooks for deployment and orchestration of infrastructure 
resources.
  * Isolation of services using LXC containers.
  * Software deployed from source using python wheels.

Where to find us:
  * IRC: #openstack-ansible
  * Launchpad: https://launchpad.net/openstack-ansible
  * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
meeting schedule is not fully formalized and may be subject to change.)
  * Code: https://github.com/stackforge/os-ansible-deployment

Thanks and we hope to see you in the channel.

—

Kevin



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing the openstack ansible deployment repo

2014-12-10 Thread Kevin Carter
Hey John,

We too ran into the same issue with iSCSI, and after a lot of digging and 
chasing red herrings we found that the cinder-volume service wasn't the cause 
of the issues; it was the "iscsiadm login" that caused the problem, and it was 
happening from within the nova-compute container. If we weren't running Cinder 
there were no issues with nova-compute running VMs from within a container; 
however, once we attempted to attach a volume to a running VM, iscsiadm would 
simply refuse to initiate. We followed up on an existing upstream bug regarding 
the issue, but it's gotten little traction at present: 
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855. In testing we found 
that if we give the compute container the raw device instead of using a bridge 
on a veth-type interface we didn't see the same issues; however, doing that was 
less than ideal, so we opted to simply leave compute nodes as physical hosts. 
From within the playbooks we can set any service to run on bare metal as the 
"container" type, so that's what we've done with nova-compute. Hopefully 
sometime soon-ish we'll be able to move nova-compute back into a container, 
assuming the upstream bugs are fixed.

I'd love to chat some more about this or anything else; hit me up anytime. I'm 
@cloudnull in the channel.

—

Kevin


 On Dec 10, 2014, at 19:01, John Griffith john.griffi...@gmail.com wrote:
 
 On Wed, Dec 10, 2014 at 3:16 PM, Kevin Carter
 kevin.car...@rackspace.com wrote:
 Hello all,
 
 
 The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
 playbooks, scripts, and libraries to deploy Openstack inside containers for 
 production use. We’ve been running this deployment for a while now,
 and at the last OpenStack summit we discussed moving the repo into 
 Stackforge as a community project. Today, I’m happy to announce that the 
 os-ansible-deployment repo is online within Stackforge. This project is a 
 work in progress and we welcome anyone who’s interested in contributing.
 
 This project includes:
  * Ansible playbooks for deployment and orchestration of infrastructure 
 resources.
  * Isolation of services using LXC containers.
  * Software deployed from source using python wheels.
 
 Where to find us:
  * IRC: #openstack-ansible
  * Launchpad: https://launchpad.net/openstack-ansible
  * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
 meeting schedule is not fully formalized and may be subject to change.)
  * Code: https://github.com/stackforge/os-ansible-deployment
 
 Thanks and we hope to see you in the channel.
 
 —
 
 Kevin
 
 
 
 
 Hey Kevin,
 
 Really cool!  I have some questions though, I've been trying to do
 this exact sort of thing on my own with Cinder but can't get iscsi
 daemon running in a container.  In fact I run into a few weird
 networking problems that I haven't sorted, but the storage piece seems
 to be a big stumbling point for me even when I cut some of the extra
 stuff I was trying to do with devstack out of it.
 
 Anyway, are you saying that this enables running the reference LVM
 impl c-vol service in a container as well?  I'd love to hear/see more
 and play around with this.
 
 Thanks,
 John
 


