Re: [openstack-dev] [requirements] adding uwsgi to global-requirements

2017-12-19 Thread Sam Yaple
>  Original Message 
> Subject: Re: [openstack-dev] [requirements] adding uwsgi to 
> global-requirements
> Local Time: December 19, 2017 9:57 PM
> UTC Time: December 20, 2017 2:57 AM
> From: mtrein...@kortar.org
> To: Sam Yaple , OpenStack Development Mailing List (not for 
> usage questions) 
>
> On Tue, Dec 19, 2017 at 07:50:59PM -0500, Sam Yaple wrote:
>
>>  Original Message 
>> Subject: Re: [openstack-dev] [requirements] adding uwsgi to 
>> global-requirements
>> Local Time: December 19, 2017 6:34 PM
>> UTC Time: December 19, 2017 11:34 PM
>> From: mtrein...@kortar.org
>> To: Sam Yaple s...@yaple.net, OpenStack Development Mailing List (not for 
>> usage questions) openstack-dev@lists.openstack.org
>> On Tue, Dec 19, 2017 at 05:46:34PM -0500, Sam Yaple wrote:
>>>
>>>> Hello,
>>>> I wanted to bring up the idea of getting uwsgi into the requirements repo.
>>>> I seem to recall this being discussed a couple of years back, but for the
>>>> life of me I cannot find the thread, so forgive me if I am digging up
>>>> ancient history.
>>>> I would like to see uwsgi in global-requirements.txt and
>>>> upper-constraints.txt.
>>>> Since the recent goal of running all APIs behind WSGI has been mostly
>>>> accomplished, I have seen a migration toward WSGI-based deploys, some of
>>>> which use uwsgi+nginx/apache.
>>>> Glance recommends uwsgi [0] as "the current recommended way to deploy" if
>>>> the docs are to be believed.
>>> Umm, finish the sentence there: it says "with a real web server". The
>>> context there is to use uwsgi if you want to run glance with Apache HTTPD,
>>> nginx, etc. It is not a general recommendation to use uwsgi.
>>
>> I did say uwsgi+nginx/apache on the line directly before that. You cannot
>> run wsgi+apache with glance at all (directly) due to the lack of chunked
>> transfer support, making a WSGI deploy of glance require uwsgi. Though
>> this further supports your point about not defining how people deploy.
>>
>>> In fact, if you read more of the doc it outlines issues involved with
>>> using uwsgi and glance and lots of tradeoffs with doing that. The wording
>>> in the more recent doc might make the situation a bit clearer. [1] If you
>>> want all the API functionality to work in glance you should still be
>>> using the eventlet server; using WSGI means things like the tasks API
>>> don't work (although, TBH, I don't think most people care about that).
>>>> The LOCI project has been including uwsgi (and recommending people use
>>>> it) since its inception.
>>>> These facts, in my opinion, make a pretty strong case for uwsgi being an
>>>> indirect dependency and worthy of inclusion and tracking in the
>>>> requirements repo.
>>>> My question for the community: are there strong feelings against
>>>> including uwsgi? If so, why?
>>> For the majority of projects out there we test using the WSGI interface
>>> using uWSGI, but uWSGI isn't actually a requirement. The cross-project
>>> goal[2] where we moved all the devstack jobs to use uWSGI was not about
>>> using uWSGI, but about using the standard interfaces for deploying web
>>> services under a web server; the goal is about exposing a WSGI interface,
>>> not using uWSGI. The uWSGI part in the goal is an implementation detail
>>> for setting up the gate jobs.
>>
>> Agreed. I should clarify, I am in no way trying to force anyone to use 
>> uwsgi. Quite the opposite. I am talking specifically about those who choose 
>> to use uwsgi. Which, as you point out, the gate jobs already do as part of 
>> the implementation.
>>
>>> We don't want to dictate how people are deploying the webapps; instead
>>> we say we use the normal interfaces for deploying python webapps. So if
>>> you're used to using mod_wsgi with apache, gunicorn + nginx, or uwsgi
>>> standalone, etc., you can do that. uwsgi in this context is the same as
>>> apache. It's not actually a requirement for any project; you can install
>>> and run everything without it, and in fact many people do.

Re: [openstack-dev] [requirements] adding uwsgi to global-requirements

2017-12-19 Thread Sam Yaple
>  Original Message 
> Subject: Re: [openstack-dev] [requirements] adding uwsgi to 
> global-requirements
> Local Time: December 19, 2017 6:34 PM
> UTC Time: December 19, 2017 11:34 PM
> From: mtrein...@kortar.org
> To: Sam Yaple , OpenStack Development Mailing List (not for 
> usage questions) 
>
> On Tue, Dec 19, 2017 at 05:46:34PM -0500, Sam Yaple wrote:
>
>> Hello,
>> I wanted to bring up the idea of getting uwsgi into the requirements repo.
>> I seem to recall this being discussed a couple of years back, but for the
>> life of me I cannot find the thread, so forgive me if I am digging up
>> ancient history.
>> I would like to see uwsgi in global-requirements.txt and
>> upper-constraints.txt.
>> Since the recent goal of running all APIs behind WSGI has been mostly
>> accomplished, I have seen a migration toward WSGI-based deploys, some of
>> which use uwsgi+nginx/apache.
>> Glance recommends uwsgi [0] as "the current recommended way to deploy" if
>> the docs are to be believed.
>
> Umm, finish the sentence there: it says "with a real web server". The
> context there is to use uwsgi if you want to run glance with Apache HTTPD,
> nginx, etc. It is not a general recommendation to use uwsgi.

I did say uwsgi+nginx/apache on the line directly before that. You cannot run
wsgi+apache with glance at all (directly) due to the lack of chunked transfer
support, making a WSGI deploy of glance *require* uwsgi. Though this further
supports your point about not defining how people deploy.
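For concreteness, the uwsgi side of such a deploy looks roughly like this
(the glance entry-point script and paths are assumptions, not canon):

    [uwsgi]
    ; uwsgi-protocol socket that nginx/apache proxies to
    socket = 127.0.0.1:60999
    ; WSGI entry point for the glance API; this script path is assumed
    wsgi-file = /usr/local/bin/glance-wsgi-api
    processes = 4
    threads = 2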

> In fact, if you read more of the doc it outlines issues involved with
> using uwsgi and glance and lots of tradeoffs with doing that. The wording
> in the more recent doc might make the situation a bit clearer. [1] If you
> want all the API functionality to work in glance you should still be using
> the eventlet server; using WSGI means things like the tasks API don't work
> (although, TBH, I don't think most people care about that).
>
>> The LOCI project has been including uwsgi (and recommending people use it)
>> since its inception.
>> These facts, in my opinion, make a pretty strong case for uwsgi being an
>> indirect dependency and worthy of inclusion and tracking in the
>> requirements repo.
>> My question for the community: are there strong feelings against including
>> uwsgi? If so, why?
>
> For the majority of projects out there we test using the WSGI interface
> using uWSGI, but uWSGI isn't actually a requirement. The cross-project
> goal[2] where we moved all the devstack jobs to use uWSGI was not about
> using uWSGI, but about using the standard interfaces for deploying web
> services under a web server; the goal is about exposing a WSGI interface,
> not using uWSGI. The uWSGI part in the goal is an implementation detail
> for setting up the gate jobs.

Agreed. I should clarify, I am in no way trying to force anyone to use uwsgi. 
Quite the opposite. I am talking specifically about those who _choose_ to use 
uwsgi. Which, as you point out, the gate jobs already do as part of the 
implementation.

> We don't want to dictate how people are deploying the webapps; instead we
> say we use the normal interfaces for deploying python webapps. So if you're
> used to using mod_wsgi with apache, gunicorn + nginx, or uwsgi standalone,
> etc., you can do that. uwsgi in this context is the same as apache. It's
> not actually a requirement for any project; you can install and run
> everything without it, and in fact many people do.
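(Illustrating that interchangeability: the same WSGI application object runs
under any of these; the module path below is a placeholder, not a real entry
point.)

    # standalone uWSGI
    uwsgi --http :8080 --module myservice.wsgi:application
    # gunicorn
    gunicorn --bind 0.0.0.0:8080 myservice.wsgi:application
    # with apache mod_wsgi, the equivalent is a WSGIScriptAlias to a .wsgi file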
>
> The other half of this is that just pip installing uwsgi is not always
> enough to actually leverage using it with a webserver. You also need the
> web server's support for talking to uwsgi, if that's how you choose to
> deploy it, which is not always straightforward. For example, take a look
> at how it is installed in devstack to make uwsgi work properly with
> apache. [3] There are also other concerns using pip to install uwsgi.
> uWSGI is a C application and not actually a python project. It also
> supports running applications in several languages[4], not just python.
> The pypi-published install is kind of a hack that downloads the tarball
> and compiles the application with only the python bindings. The setup.py
> literally calls out to gcc to build it; it's essentially a makefile
> written in python. [5][6]
>
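(To illustrate the build-at-install behavior just described; package names
are assumed, Debian-style:)

    # uWSGI publishes no binary wheel here; setup.py drives gcc at install
    # time, so a toolchain and Python headers must already be present.
    apt-get install -y build-essential python-dev
    pip install uwsgi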
> So what advantage do we get by adding it to global requirements when it's
> not actually a 

[openstack-dev] [requirements] adding uwsgi to global-requirements

2017-12-19 Thread Sam Yaple
Hello,

I wanted to bring up the idea of getting uwsgi into the requirements repo. I 
seem to recall this being discussed a couple of years back, but for the life of 
me I cannot find the thread, so forgive me if I am digging up ancient history.

I would like to see uwsgi in global-requirements.txt and upper-constraints.txt.
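For context, the entries being proposed would look roughly like this (the
version pin is illustrative only):

    # global-requirements.txt
    uwsgi

    # upper-constraints.txt
    uwsgi===2.0.15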

Since the recent goal of running all APIs behind WSGI has been mostly
accomplished, I have seen a migration toward WSGI-based deploys, some of which
use uwsgi+nginx/apache.

Glance recommends uwsgi [0] as "the current recommended way to deploy" if the 
docs are to be believed.

The LOCI project has been including uwsgi (and recommending people use it) 
since its inception.

These facts, in my opinion, make a pretty strong case for uwsgi being an
indirect dependency and worthy of inclusion and tracking in the requirements
repo.

My question for the community: are there strong feelings against including
uwsgi? If so, why?

Thanks,
SamYaple

[0] https://docs.openstack.org/glance/pike/admin/apache-httpd.html#uwsgi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Ansiblize init-runonce script

2017-11-28 Thread Sam Yaple
For what it's worth, this init-runonce script was never meant for production
usage. Ops *shouldn't* be running it like you suggest. It was historically for
use in the gate and as a quick-n-dirty environment setup for testing.

If you want to get into writing operations scripts, that's your prerogative,
but it was discussed before and mostly considered a bad idea.

Thanks,
SamYaple

>  Original Message 
> Subject: Re: [openstack-dev] [kolla] Ansiblize init-runonce script
> Local Time: November 28, 2017 8:10 AM
> UTC Time: November 28, 2017 1:10 PM
> From: zhang.lei@gmail.com
> To: OpenStack Development Mailing List (not for usage questions) 
> 
>
> In my opinion, an idempotent script is very necessary, for several
> reasons:
>
> - there is already some idempotent logic in the script
> - it is common that this script fails because of wrong configuration;
>   after fixing the config, ops will want to run this script again.
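For illustration, an idempotent version of one init-runonce step might look
like this in Ansible (module from the os_* family; names are placeholders):

    # Re-running converges to the desired state instead of failing.
    - name: Ensure the demo external network exists
      os_network:
        cloud: kolla-admin     # clouds.yaml entry name, hypothetical
        name: public1
        external: true
        state: present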
>
> On Tue, Nov 28, 2017 at 7:14 PM, Paul Bourke  wrote:
>
>> I think this came up before at one stage. My position is I don't see the 
>> need to ansible-ise small shell scripts. init-runonce is currently just an 
>> easy to understand sequence of openstack commands provided to help people 
>> test/demo their setups. Unless we want to make it more than that, i.e. make 
>> it idempotent, customizable, etc. I don't see the need to wheel in Ansible.
>>
>> On 28/11/17 03:23, Jeffrey Zhang wrote:
>>
>>> hi
>>>
>>> check this [0]. I tried to convert it to ansible playbooks.
>>>
>>> [0] https://review.openstack.org/523072
>>>
>>> On Tue, Nov 28, 2017 at 2:57 AM, Ravi Shekhar Jethani wrote:
>>>
>>> Hi,
>>>
>>> While exploring kolla-ansible I ran into a few issues with the
>>> init-runonce script. This led to the following bugs and patches:
>>>
>>> https://launchpad.net/bugs/1732963
>>> https://review.openstack.org/51
>>> https://review.openstack.org/521190
>>>
>>> But it was highlighted that instead of fixing/changing the
>>> script, 'ansiblizing' it would be the ideal solution.
>>> Hence I hereby formally propose the same.
>>>
>>> My thoughts:
>>> * Playbook Name: init-stack.yml
>>>
>>> * Playbook path can be:
>>> kolla-ansible/ansible/init-stack.yml
>>>
>>> * The play can be executed like:
>>> $ kolla-ansible init-stack -i 
>>>
>>> * The cirros test image should be downloaded in /tmp
>>>
>>> * What should be the behavior if the play is run multiple times
>>> against the same stack?
>>>- some error message OR
>>>- simply ignore the action
>>>
>>> Kindly provide suggestions.
>>>
>>> --
>>> Best Regards,
>>> Ravi J.
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Distutils][pbr][devstack][qa] Announcement: Pip 10 is coming, and will move all internal APIs

2017-10-20 Thread Sam Yaple
On Fri, Oct 20, 2017 at 4:32 PM, Doug Hellmann 
wrote:

> Excerpts from Clark Boylan's message of 2017-10-20 13:14:13 -0700:
> > On Fri, Oct 20, 2017, at 11:17 AM, Clark Boylan wrote:
> > > On Fri, Oct 20, 2017, at 07:23 AM, Doug Hellmann wrote:
> > > > It sounds like the PyPI/PyPA folks are planning some major changes to
> > > > pip internals, soon.
> > > >
> > > > I know pbr uses setuptools, and I don't think it uses pip, but if
> > > > someone has time to verify that it would be helpful.
> > > >
> > > > We'll also want to watch out for breakage in normal use of pip 10. If
> > > > they're making changes this big, they may miss something in their own
> > > > test coverage that affects our jobs.
> > > >
> > >
> > > After a quick skim of PBR I don't think we use pip internals anywhere.
> > > It's all executed via the command itself. That said we should test this
> > > so I've put up https://review.openstack.org/513825 (others should feel
> > > free to iterate on it if it doesn't work) to install latest pip master
> > > in a devstack run.
> >
> > The current issue this change is facing can be seen at
> > http://logs.openstack.org/25/513825/4/check/legacy-tempest-dsvm-py35/c31deb2/logs/devstacklog.txt.gz#_2017-10-20_20_07_54_838
> > The tl;dr is that for distutils installed packages (basically all the
> > distro installed python packages) pip refuses to uninstall them in order
> > to perform upgrades because it can't reliably determine where all the
> > files are. I think this is a new pip 10 behavior.
> >
> > In the general case I think this means we can not rely on global pip
> > installs anymore. This may be a good thing to bring up with upstream
> > PyPA as I expect it will break a lot of people in a lot of places (it
> > will break infra for example too).
> >
> > Clark
> >
>
> Yes, it would be good for someone who understands the cases where we're
> doing these sorts of global package installations using pip to post on
> distutils-sig explaining the breakage and asking for some time to let us
> work around the issue.
>
> We have a couple of ways to mitigate it, depending on the situation.
>
> 1. Pin the affected packages to the system package versions in our
>constraints list.
>
> 2. Install into a virtualenv that ignores system packages, avoiding the
>need to upgrade at all.
>
> 3. Remove the system package before installing from pip (why is it there
>in the first place?).
>
> It's not clear to me which is the best approach.  I've added
> [devstack][qa] to the subject line to draw some more attention to
> this thread from folks familiar with these jobs.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I have used option 2 with great success to avoid the issue of system
package conflicts for a couple of years now, back when pip would still
uninstall system packages for upgrades.

Option 3 is difficult to make work since so many packages have dependencies
on python-* packages; I abandoned that approach fairly quickly.
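A sketch of option 2 as I have used it (paths and package purely
illustrative):

    # A virtualenv isolated from system site-packages never needs to
    # uninstall distutils-installed distro packages, sidestepping the issue.
    virtualenv /opt/os-venv
    /opt/os-venv/bin/pip install -U pip
    /opt/os-venv/bin/pip install nova    # example package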

Thanks,
SamYaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Sam Yaple
On Thu, Oct 19, 2017 at 11:23 PM, Gabriele Cerami 
wrote:

> On 19 Oct, Sam Yaple wrote:
> > So it seems tripleo is building *all* images and then pushing them.
> > Reworking your numbers leads me to believe you will be consuming 10-15GB
> > in total on Dockerhub. Kolla images are only the size that you posted
> > when built as separate services. Just keep building all the images at
> > the same time and you won't get anywhere near the numbers you posted.
>
> Makes sense, so considering the shared layers
> - a size of 10-15GB per build.
> - 4-6 builds rotated per release
> - 3-4 releases
>

- a size of 1-2GB per build
- 4-6 builds rotated per release
- 3-4 releases

At worst you are looking at 48GB, not 360GB. Don't worry so much there!
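(Worked out: roughly 2GB per build x 6 builds per release x 4 releases = 48GB
in the worst case.)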

>
> total size will be approximately 360GB in the worst case, and 120GB in
> the best case, which seems a bit more reasonable.
>
> Thanks for the clarifications
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Sam Yaple
On Thu, Oct 19, 2017 at 9:38 PM, Gabriele Cerami  wrote:

> On 19 Oct, Sam Yaple wrote:
> > docker_image wouldn't be the best place for that. But if you are looking
> > for a quicker solution, kolla_docker was written specifically to be
> > license-compatible for OpenStack. Its structure should make it easily
> > adapted to delete an image. And you can copy it and cut it up thanks to
> > the license.
>
> Thanks, I'll look into it.
>
> > Are you pushing images with no shared base layers at all (300MB
> compressed
> > image is no shared base layers)? With shared base layers a full image set
> > of Kolla images should be much smaller than the numbers you posted.
>
> 300MB is the rounded size reported by the dockerhub UI,
> e.g. https://hub.docker.com/r/tripleopike/centos-binary-heat-api/
> shows 265MB for the newest tag. I'm not sure what size dockerhub is
> reporting.
>

This is misleading. For example, you will download 265MB if you download
only tripleopike/centos-binary-heat-api:current-tripleo . But if you
download both tripleopike/centos-binary-heat-api:current-tripleo and
tripleopike/centos-binary-heat-engine:current-tripleo you will have only
downloaded 266MB in total since the majority of those layers are shared.

So it seems tripleo is building *all* images and then pushing them.
Reworking your numbers leads me to believe you will be consuming 10-15GB in
total on Dockerhub. Kolla images are only the size that you posted when
built as separate services. Just keep building all the images at the same
time and you won't get anywhere near the numbers you posted.
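You can see the shared-layer effect directly (image names from this thread):

    # first pull downloads all layers (~265MB compressed)
    docker pull tripleopike/centos-binary-heat-api:current-tripleo
    # second pull reuses the shared base layers, so only ~1MB more comes down
    docker pull tripleopike/centos-binary-heat-engine:current-tripleo
    # list the individual layers that make up an image
    docker history tripleopike/centos-binary-heat-api:current-tripleo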


> When pulling the image, docker downloads 30 layers. The final size
> reported locally is 815MB.
>

This is the uncompressed size, but even here layers are shared.

>
> Thanks
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Sam Yaple
docker_image wouldn't be the best place for that. But if you are looking
for a quicker solution, kolla_docker was written specifically to be
license-compatible for OpenStack. Its structure should make it easily adapted
to delete an image. And you can copy it and cut it up thanks to the license.
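For illustration, the deletion call from the registry v2 API spec (linked in
the quoted mail below) looks roughly like this; Docker Hub's token/auth flow
is omitted and assumed:

    import requests

    # Placeholders: registry URL and token; repo from this thread; digest elided
    registry = "https://registry.example.com"
    repo = "tripleopike/centos-binary-heat-api"
    digest = "sha256:..."

    # DELETE /v2/<name>/manifests/<reference>, per the spec
    resp = requests.delete(
        "%s/v2/%s/manifests/%s" % (registry, repo, digest),
        headers={"Authorization": "Bearer TOKEN"})  # placeholder token
    resp.raise_for_status()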

Are you pushing images with no shared base layers at all (a 300MB compressed
image suggests no shared base layers)? With shared base layers a full image
set of Kolla images should be much smaller than the numbers you posted.

Thanks,
SamYaple

On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami 
wrote:

> Hi,
>
> our CI scripts are now automatically building, testing and pushing
> approved openstack/RDO services images to public repositories in
> dockerhub using ansible docker_image module.
>
> Promotions have had some hiccups, but we're starting to regularly upload
> new images every 4 hours.
>
> When we'll get at full speed, we'll potentially have
> - 3-4 different sets of images, one per release of openstack (counting a
>   EOL release grace period)
> - 90-100 different services images per release
> - 4-6 different versions of the same image ( keeping older promoted
>   images for a while )
>
> At around 300MB per image a possible grand total is around 650GB of
> space used.
>
> We don't know if this is acceptable usage of dockerhub space, and for
> this we already sent a similar email to docker support to ask
> specifically if the user would be penalized in any way (e.g. enforced
> quotas, rate limiting, blocking). We're still waiting for a reply.
>
> In any case it's critical to keep the usage around the estimate, and to
> achieve this we need a way to automatically delete the older images.
> docker_image module does not provide this functionality, and we think
> the only way is issuing direct calls to dockerhub API
>
> https://docs.docker.com/registry/spec/api/#deleting-an-image
>
> docker_image module structure doesn't seem to encourage the addition of
> such functionality directly in it, so we may be forced to use the uri
> module.
> With new images uploaded potentially every 4 hours, this will become a
> problem to be solved within the next two weeks.
>
> We'd appreciate any input for an existing, in progress and/or better
> solution for bulk deletion, and issues that may arise with our space
> usage in dockerhub
>
> Thanks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] TC Candidates: what does an OpenStack user look like?

2017-10-13 Thread Sam Yaple
I used to be able to say "my OpenStack background is in Operations" but
that isn't strictly true anymore. I've now spent the majority of my time
doing what is considered 'developer' work. One thing is for certain though,
I have never stopped building OpenStack for what I see as the "hypothetical
OpenStack user".

The majority of these users in my experience are actually small-to-medium
businesses. While the large companies of the world certainly drive a lot of
OpenStack, the users that I see are much smaller than that: small 5-10 node
clusters. That is who I am building OpenStack for. That happens to also be
the group that I am in personally, as that's what I run at my house.

Most of my focus is on lowering the bar to using services, to give these
small-to-medium businesses the least amount of work to do to gain access to
almost everything OpenStack has to offer with a very small operational
overhead for the team. My personal experience with this is that currently it
can take a small team months to get a working OpenStack deployment, and then
simply maintaining that deployment will consume the rest of their time.
This leads to OpenStack deployments that are years out of date, on an island
that has no real upgrade path anymore, and a general internal sense that
"OpenStack is bad". I am building OpenStack for users so that this stops
happening.

Thanks,
SamYaple

On Thu, Oct 12, 2017 at 12:51 PM, Zane Bitter  wrote:

> (Reminder: we are in the TC election campaigning period, and we all have
> the opportunity to question the candidates. The campaigning period ends on
> Saturday, so make with the questions.)
>
>
> In my head, I have a mental picture of who I'm building OpenStack for.
> When I'm making design decisions I try to think about how it will affect
> these hypothetical near-future users. By 'users' here I mean end-users, the
> actual consumers of OpenStack APIs. What will it enable them to do? What
> will they have to work around? I think we probably all do this, at least
> subconsciously. (Free tip: try doing it consciously.)
>
> So my question to the TC candidates (and incumbent TC members, or anyone
> else, if they want to answer) is: what does the hypothetical OpenStack user
> that is top-of-mind in your head look like? Who are _you_ building
> OpenStack for?
>
> There's a description of mine in this email, as an example:
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123312.html
>
> To be clear, for me at least there's only one wrong answer ("person who
> needs somewhere to run their IRC bouncer"). What's important in my opinion
> is that we have a bunch of people with *different* answers on the TC,
> because I think that will lead to better discussion and hopefully better
> decisions.
>
> Discuss.
>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [kolla][puppet][openstack-ansible][tripleo] Better way to run wsgi service.

2017-08-24 Thread Sam Yaple
I have been running API services behind uwsgi in http mode from Newton
forward. I recently switched to the uwsgi+nginx model with 2 containers
since I was having weird issues with things that I couldn't track down,
mainly after I started using keystone with LDAP. There would be timeouts
and message-too-long type errors that all went away with nginx.

Additionally, running with HTTPS was measurably faster with nginx proxying
to a uwsgi socket.

This was just my experience with it. If you do want to switch to uwsgi+http,
make sure you do thorough testing of all the components, or you may be left
with a component that just won't work with your model.
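A sketch of that arrangement (certificate paths and socket location are
placeholders, not from a real deploy):

    # nginx terminates HTTPS and proxies to a local uwsgi socket
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/api.crt;
        ssl_certificate_key /etc/nginx/ssl/api.key;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:/var/run/uwsgi/keystone-api.sock;
        }
    }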


On Thu, Aug 24, 2017 at 12:29 PM, Mohammed Naser 
wrote:

> On Thu, Aug 24, 2017 at 11:15 AM, Jeffrey Zhang 
> wrote:
> > We are moving to deploy services via WSGI[0].
> >
> > There are two recommended ways:
> >
> > - apache + mod_wsgi
> > - nginx + uwsgi
> >
> > The latter one is better.
> >
> > For a traditional deployment, it is easy to implement: use one
> > uwsgi process to launch all wsgi services (nova-api, cinder-api, etc.),
> > then one nginx process to forward the http requests to the uwsgi server.
> >
> > But kolla runs one process in one container. If we use
> > the recommended solution, we have to use two containers to run
> > the nova-api service, and it will generate lots of containers,
> > like nginx-nova-api and uwsgi-nova-api.
> >
> > I propose using uwsgi's native http mode[1]. Then one uwsgi
> > container is enough to run the nova-api service. Based on the official
> > doc, if there is no static resource, uWSGI is recommended for use
> > as a real http server.
> >
> > So how about this?
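For reference, the two modes being contrasted differ by one line of uwsgi
config (the port and entry point are placeholders):

    ; (a) native HTTP mode: uwsgi is the front-facing server, one container
    [uwsgi]
    http = 0.0.0.0:8774
    wsgi-file = /usr/bin/nova-api-wsgi    ; entry point assumed

    ; (b) uwsgi protocol behind a separate nginx/apache container instead:
    ; socket = 127.0.0.1:8774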
>
> I think this is an interesting approach.  At the moment, the Puppet
> modules currently allow deploying for services using mod_wsgi.
> Personally, I don't think that relying on the HTTP support of uWSGI is
> very favorable.   It is quite difficult to configure and 'get right'
> and it leaves a lot to be desired (for example, federated auth relies
> on Apache modules which would make this nearly impossible).
>
> Given that the current OpenStack testing infrastructure proxies to
> UWSGI, I think it would be best if that approach is taken.  This way,
> a container (or whatever service) would expose a UWSGI API, to which you
> can connect whichever web server you need.
>
> In the case of Kolla, the `httpd` container would be similar to what
> the `haproxy` is.  In the case of Puppet, I can imagine this being
> something to be managed by the user (with some helpers in there).  I
> think it would be interesting to add the tripleo folks on their
> opinion here as consumers of the Puppet modules.
>
> >
> >
> > [0] https://governance.openstack.org/tc/goals/pike/deploy-api-
> in-wsgi.html
> > [1]
> > http://uwsgi-docs.readthedocs.io/en/latest/HTTP.html#can-i-
> use-uwsgi-s-http-capabilities-in-production
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
> > ___
> > OpenStack-operators mailing list
> > openstack-operat...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sam Yaple
I would like to bring up a subject that hasn't really been discussed in
this thread yet, forgive me if I missed an email mentioning this.

What I personally would like to see is a publishing infrastructure to allow
pushing built images to an internal infra mirror/repo/registry for
consumption of internal infra jobs (deployment tools like kolla-ansible and
openstack-ansible). The images built from infra mirrors with security
turned off are perfect for testing internally to infra.

If you build images properly in infra, then you will have an image that is
not security checked (no gpg verification of packages) and completely
unverifiable. These are absolutely not images we want to push to
DockerHub/quay for obvious reasons. Security and verification being chief
among them. They are absolutely not images that should ever be run in
production and are only suited for testing. These are the only types of
images that can come out of infra.

Thanks,
SamYaple

On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
wrote:

> On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> >> Flavio Percoco wrote:
> >> > From a release perspective, as Doug mentioned, we've avoided releasing
> >> > projects in any kind of built form. This was also one of the concerns
> >> > I raised when working on the proposal to support other programming
> >> > languages. The problem of releasing built images goes beyond the
> >> > infrastructure requirements. It's the message and the guarantees
> >> > implied with the built product itself that are the concern here. And I
> >> > tend to agree with Doug that this might be a problem for us as a
> >> > community. Unfortunately, putting your name, Michal, as contact point
> >> > is not enough. Kolla is not the only project producing container
> >> > images and we need to be consistent in the way we release these
> >> > images.
> >> >
> >> > Nothing prevents people from building their own images and uploading
> >> > them to dockerhub. Having this as part of the OpenStack's pipeline is
> >> > a problem.
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >>
> >
> > At the forum we talked about putting test images on a "private"
> > repository hosted on openstack.org somewhere. I think that's option
> > 3 from your list?
> >
> > Paul may be able to shed more light on the details of the technology
> > (maybe it's just an Apache-served repo, rather than a full blown
> > instance of Docker's service, for example).
>
> Issue with that is
>
> 1. Apache-served is harder to use because we want to follow the docker API
> and we'd have to reimplement it
> 2. Running a registry is a single command
> 3. If we host it in infra, in case someone actually uses it (there
> will be people like that), it will potentially eat up a lot of network
> traffic
> 4. With local caching of images (working already) in nodepools we
> lose the complexity of mirroring registries across nodepools
>
> So bottom line, having dockerhub/quay.io is simply easier.
>
> > Doug
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [kolla] rabbitmq cluster_partition_handling config in kolla-ansible

2017-03-20 Thread Sam Yaple
Hello Nikita,

There is no technical reason this cannot be made variable. I don't think
anyone could come up with a valid reason to block such a patch.

However, I would ask what you plan to gain from _not_ having it 'autoheal'?
The other options for partition handling are basically "let it partition
and do nothing" and "quarantine the partitioned node". Each of those
require an operator to take action. I have not personally known a single
OpenStack operator to ever go and recover a message from a partitioned
rabbitmq node and reinject it into the cluster. In fact, I do not know if
that would even be an advisable action given the retries that exist within
OpenStack. Not to mention the times when the resource was, say, a new port
in Neutron and you reinject the message after the VM consuming that port
was deleted.

With the reasons above, it is hard to justify anything but 'autoheal' for
OpenStack specifically. I certainly don't see any advantages.

Now that the ask has been made though, a variable would be 2 lines of code
in total, so I say go for it.
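The change would look roughly like this in the template that renders
rabbitmq.config (the variable name is hypothetical):

    %% hard-coded today as autoheal; a deploy-time variable instead
    {rabbit, [
      {cluster_partition_handling, {{ rabbitmq_cluster_partition_handling }}}
    ]}.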

Thanks,
SamYaple


On Mon, Mar 20, 2017 at 2:43 PM, Nikita Gerasimov <
nikita.gerasi...@oracle.com> wrote:

> Hi,
>
> Since [1] kolla-ansible has had the rabbitmq cluster_partition_handling
> option hard-coded to 'autoheal'. According to [2] it's not the best mode
> for 3+ node clusters with a reliable network.
> Is it reasonable to make this option changeable by the user, or even to
> place some logic to pick the mode based on cluster structure?
> Or do we have a reason to keep it hard-coded?
>
>
> [1] https://github.com/openstack/kolla-ansible/commit/0c6594c258
> 64d0c90cd0009726cee84967fe65dc
> [2] https://www.rabbitmq.com/partitions.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Multi-Regions Support

2017-01-06 Thread Sam Yaple
On Fri, Jan 6, 2017 at 8:01 PM, Jay Pipes  wrote:

> On 01/05/2017 09:12 AM, Ronan-Alexandre Cherrueau wrote:
>
>> Hello,
>>
>> TL;DR: We make a multi-regions deployment with Kolla. It requires to
>> patch the code a little bit, and you can find the diff on our
>> GitHub[1]. This patch is just a first attempt to support multi-regions
>> in Kolla and it raises questions. Some modifications are not done in
>> an idiomatic way and we do not expect this to be merged in Kolla. The
>> remainder of this mail explains our patch and states our questions.
>>
>> At Inria/Discovery[2], we evaluate OpenStack at scale for the
>> Performance Working Group. So far, we focus on one single OpenStack
>> region deployment with hundreds of computes and we always go with
>> Kolla for our deployment. Over the last few days, we tried to achieve
>> a multi-regions OpenStack deployment with Kolla. We want to share with
>> you our current deployment workflow, patches we had to apply on Kolla
>> to support multi-regions, and also ask you if we do things correctly.
>>
>> First of all, our multi-regions deployment follows the one described
>> by the OpenStack documentation[3].
>>
>
> I don't see an "Admin Region" as part of the OpenStack documentation for
> multi-region deployment. I also see LDAP mentioned as the recommended
> authentication/IdM store.
>
>
>> Concretely, the deployment
>> considers /one/ Administrative Region (AR) that contains Keystone and
>> Horizon.
>>
>
> That's not a region. Those should be shared resources *across* regions.
>
>
>> This is a Kolla-based deployment, so Keystone is hidden
>> behind an HAProxy, and has MariaDB and memcached as backend.
>>
>
> I thought at Inria, the Nova "MySQL DB has been replaced by the noSQL
> system REDIS"? But here, you're using MariaDB -- a non-distributed database
> -- for the Keystone component which is the very thing that is the most
> highly distributed of all state storage in OpenStack.
>

This should be read as MariaDB+Galera for replication. It is a
highly-available database.


>
> So, you are replacing the Nova DB (which doesn't need to be distributed at
> all, since it's a centralized control plane piece) within the regions with
> a "distributed" NoSQL store (and throwing away transactional safety I might
> add) but you're going with a non-distributed traditional RDBMS for the very
> piece that needs to be shared, distributed, and highly-available across
> OpenStack. I don't understand that.
>
> At the
>
>> same time, /n/ OpenStack Regions (OSR1, ..., OSRn) contain a full
>> OpenStack, except Keystone. We got something as follows at the end of
>> the deployment:
>>
>> Admin Region (AR):
>> - control:
>>   * Horizon
>>   * HAProxy
>>   * Keystone
>>   * MariaDB
>>   * memcached
>>
>
> Again, that's not a region. Those are merely shared services between
> regions.
>
>
> OpenStack Region x (OSRx):
>> - control:
>>   * HAProxy
>>   * nova-api/conductor/scheduler
>>   * neutron-server/l3/dhcp/...
>>   * glance-api/registry
>>   * MariaDB
>>   * RabbitMQ
>>
>> - compute1:
>>   * nova-compute
>>   * neutron-agent
>>
>> - compute2: ...
>>
>> We do the deployment by running Kolla n+1 times. The first run deploys
>> the Administrative Region (AR) and the other runs deploy OpenStack
>> Regions (OSR). For each run, we fix the value of `openstack_region_name'
>> variable to the name of the current region.
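For reference, that per-run setting is a one-line change in each run's
globals.yml (region names taken from this mail):

    # globals.yml for the OSR1 run; repeat with OSR2, ..., OSRn
    openstack_region_name: "OSR1"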
>>
>> In the context of multi-regions, Keystone (in the AR) should be
>> available to all OSRs. This means, there are as many Keystone
>> endpoints as regions. For instance, if we consider two OSRs, the
>> result of listing endpoints at the end of the AR deployment looks like
>> this:
>>
>>
>>  $ openstack endpoint list
>>
>>  | Region | Service Name | Service Type | Interface | URL                          |
>>  |--------+--------------+--------------+-----------+------------------------------|
>>  | AR     | keystone     | identity     | public    | http://10.24.63.248:5000/v3  |
>>  | AR     | keystone     | identity     | internal  | http://10.24.63.248:5000/v3  |
>>  | AR     | keystone     | identity     | admin     | http://10.24.63.248:35357/v3 |
>>  | OSR1   | keystone     | identity     | public    | http://10.24.63.248:5000/v3  |
>>  | OSR1   | keystone     | identity     | internal  | http://10.24.63.248:5000/v3  |
>>  | OSR1   | keystone     | identity     | admin     | http://10.24.63.248:35357/v3 |
>>  | OSR2   | keystone     | identity     | public    | http://10.24.63.248:5000/v3  |
>>  | OSR2   | keystone     | identity     | internal  | http://10.24.63.248:5000/v3  |
>>  | OSR2   | keystone     | identity     | admin     | http://10.24.63.248:35357/v3 |
>>
>
> There shouldn't be an AR region. If the Keystone authentication domain is
> indeed shared between OpenStack regions, then an administrative user should
> be able to hit any Keystone endpoint in any OpenStack region and add
> users/projects/roles, etc. to the shared Keystone data store (or if using
> LDAP, the admin should be able to add a user to ActiveDirectory

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 7:42 PM, Doug Hellmann  wrote:

> Excerpts from Sam Yaple's message of 2017-01-05 18:22:54 +:
> > On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann 
> wrote:
> >
> > > Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +:
> > > > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> > > wrote:
> > > >
> > > > > On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> > > > > [...]
> > > > > > I do feel this is slightly different than whats described. Since
> it
> > > is
> > > > > not
> > > > > > unrelated services, but rather, for lack of a better word,
> competing
> > > > > > services. To my knowledge infra doesn't have several service
> doing
> > > the
> > > > > same
> > > > > > job with different core teams (though I could be wrong).
> > > > >
> > > > > True, though I do find it an interesting point of view that helping
> > > > > Kolla support multiple and diverse configuration management and
> > > > > automation ecosystems is a "competition" rather than merely
> > > > > extending the breadth of the project as a whole.
> > > > >
> > > >
> > > > Yea I computer good, but I am no wordsmith. Perhaps 'friendly
> rivalry'? I
> > > > expect these different deploy tools to bring new techniques that can
> then
> > > > be reapplied to kolla-ansible and kolla-kubernetes to help out
> everyone.
> > >
> > > I'm still curious to understand why, if the teams building those
> > > different things have little or no overlap in membership, they need to
> > > be part of "kolla" and not just part of the larger OpenStack? Why build
> > > a separate project hierarchy instead of keeping things flat?
> > >
> > > Do I misunderstand the situation?
> > >
> > You absolutely do not misunderstand the situation. It is a very valid
> > question, one to which I do not have a satisfying answer. I can say that
> it
> > has been the intention since I started work on the ansible bits of kolla
> to
> > have separate repos for the deployment parts. That grew to having several
> > different deployment tools in the future and I don't think anyone really
> > stopped to think that building this hierarchy isn't necessarily the right
> > thing to do. It certainly isn't a required thing to do.
> >
> > With the separation of ansible from the main kolla repo, the kolla repo
> now
> > becomes a consumable much like the relationship keystone and glance.
> >
> > The only advantage I can really think of at the moment is to reuse the
> > Kolla name and community when starting a new project, but that may not be
> > as advantageous as I initially thought. By my own admission, why do these
> > other projects care about a different orchestration tool.
> >
> > So in your estimation Doug, do you feel kolla-salt would be better served
> > as a new project in it's own right? As well as future orchestration tools
> > using Kolla container images?
>
> I don't know enough about the situation to say for sure, and I'll
> leave it up to the people involved, but I thought I should raise
> the option as a way to ease up some of the friction.
>
> Our team structure is really supposed to be organized around groups
> of people rather than groups of things.  The fact that there's some
> negotiation going on to decide who needs to (or gets to) have a say
> in when new deliverables are added, with some people saying they
> don't want to have to vote or that others shouldn't have a vote,
> just makes it seem to me that we're trying to force a fit where it
> would be simpler to establish separate teams.
>
> There may be some common space for shared tools, and it sounds like
> that's how things started out. But not maybe it's time to rethink
> that?
>
This is definitely the case. Shared tooling. In the case of kolla-k8s and
kolla-ansible sharing the configs, this has broken kolla-k8s many times.
Perhaps the right decision long term is identifying all the needed pieces
that Kolla would be sharing and centralizing them rather than building this
Kolla hierarchy of projects forcing Kolla more into what resembles a
benevolent dictator model for all underlying deployment projects.

Thanks,
SamYaple

> Doug
>
> >
> > Thanks,
> > SamYaple
> >
> >

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 7:45 PM, Michał Jastrzębski  wrote:

> I think total separation of projects would require much larger
> discussion in community. Currently we agreed on having kolla-ansible
> and kolla-k8s to be deliverables under the kolla umbrella for historical
> reasons. Also I don't agree that there is "little or no overlap" in
> teams, in fact there is ton of overlap, just not 100%. Many
> contributors (myself included) jump between deliverables today.
>
> Having single Kolla umbrella has practical benefits which I would hate
> to lose quite frankly. One of which would be that Kolla is being
> evaluated by a lot of different companies, and having full separation
> between projects would make navigation of a landscape harder. Another
> reason is single community which we value - there is no full
> separation even between kolla-ansible and kolla-k8s (ansible still
> generates config files for k8s for example), and further separation of
> projects would hurt cooperation, and I don't think we've hit situation
> when it's necessary. I'm not ready to have this discussion yet, and
> I'm personally quite opposed to this.
>
> If kolla-salt would like to be first completely separate project,
> there is nothing we can (or want) to do to stop it, but I wouldn't
> like to see this being pushed. Having special beast isn't great, and
> moving kolla-ansible and kolla-k8s out of kolla umbrella is revolution
> I don't want to rush. I'd rather figure out a process to accept
> kolla-salt (and following projects) into the kolla umbrella and have this
> discussion later, when we actually hit community scale issues.
>

I don't think moving kolla-ansible or kolla-k8s out of the kolla namespace
was being suggested. If I implied that, it was not intended. That said,
with Doug's comments, I am not sure it makes sense to continue building a
Kolla deployment hierarchy. I would ask what the benefit of having
kolla-salt or kolla-puppet would be?

It is just a point that hasn't been discussed or considered up until now.
We had all just assumed kolla-salt and kolla-puppet and kolla-chef would be
a thing, but would there be a benefit to sitting under the kolla namespace?
I am not sure what those benefits are.

Thanks,
SamYaple


> Cheers,
> Michal
>
>
> On 5 January 2017 at 10:22, Sam Yaple  wrote:
> > On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann 
> wrote:
> >>
> >> Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +:
> >> > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> >> > wrote:
> >> >
> >> > > On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> >> > > [...]
> >> > > > I do feel this is slightly different than whats described. Since
> it
> >> > > > is
> >> > > not
> >> > > > unrelated services, but rather, for lack of a better word,
> competing
> >> > > > services. To my knowledge infra doesn't have several service doing
> >> > > > the
> >> > > same
> >> > > > job with different core teams (though I could be wrong).
> >> > >
> >> > > True, though I do find it an interesting point of view that helping
> >> > > Kolla support multiple and diverse configuration management and
> >> > > automation ecosystems is a "competition" rather than merely
> >> > > extending the breadth of the project as a whole.
> >> > >
> >> >
> >> > Yea I computer good, but I am no wordsmith. Perhaps 'friendly
> rivalry'?
> >> > I
> >> > expect these different deploy tools to bring new techniques that can
> >> > then
> >> > be reapplied to kolla-ansible and kolla-kubernetes to help out
> everyone.
> >>
> >> I'm still curious to understand why, if the teams building those
> >> different things have little or no overlap in membership, they need to
> >> be part of "kolla" and not just part of the larger OpenStack? Why build
> >> a separate project hierarchy instead of keeping things flat?
> >>
> >> Do I misunderstand the situation?
> >>
> > You absolutely do not misunderstand the situation. It is a very valid
> > question, one to which I do not have a satisfying answer. I can say that
> it
> > has been the intention since I started work on the ansible bits of kolla
> to
> > have separate repos for the deployment parts. That grew to having several
> > different deployment tools in the future and I don't think anyone really
> >

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann  wrote:

> Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +:
> > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> wrote:
> >
> > > On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> > > [...]
> > > > I do feel this is slightly different than whats described. Since it
> is
> > > not
> > > > unrelated services, but rather, for lack of a better word, competing
> > > > services. To my knowledge infra doesn't have several service doing
> the
> > > same
> > > > job with different core teams (though I could be wrong).
> > >
> > > True, though I do find it an interesting point of view that helping
> > > Kolla support multiple and diverse configuration management and
> > > automation ecosystems is a "competition" rather than merely
> > > extending the breadth of the project as a whole.
> > >
> >
> > Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
> > expect these different deploy tools to bring new techniques that can then
> > be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
>
> I'm still curious to understand why, if the teams building those
> different things have little or no overlap in membership, they need to
> be part of "kolla" and not just part of the larger OpenStack? Why build
> a separate project hierarchy instead of keeping things flat?
>
> Do I misunderstand the situation?
>
You absolutely do not misunderstand the situation. It is a very valid
question, one to which I do not have a satisfying answer. I can say that it
has been the intention since I started work on the ansible bits of kolla to
have separate repos for the deployment parts. That grew to having several
different deployment tools in the future and I don't think anyone really
stopped to think that building this hierarchy isn't necessarily the right
thing to do. It certainly isn't a required thing to do.

With the separation of ansible from the main kolla repo, the kolla repo now
becomes a consumable much like the relationship keystone and glance.

The only advantage I can really think of at the moment is to reuse the
Kolla name and community when starting a new project, but that may not be
as advantageous as I initially thought. By my own admission, why do these
other projects care about a different orchestration tool?

So in your estimation Doug, do you feel kolla-salt would be better served
as a new project in its own right? As well as future orchestration tools
using Kolla container images?

Thanks,
SamYaple

> Doug
>
> >
> > Thanks,
> > SamYaple
> >
> > > --
> > > Jeremy Stanley
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 5:56 PM, Alex Schultz  wrote:

> On Thu, Jan 5, 2017 at 10:25 AM, Michał Jastrzębski 
> wrote:
> > Ad kolla-ansible core team +2ing new deliverable,I agree with Sam,
> > there is no reason in my mind for kolla-ansible/k8s core team to vote
> > on accepting new deliverable. I think this should be lightweight and
> > easy, we should allow experimentation (after all, kolla itself went
> > through few fail iterations before ansible).
> >
> > Ad disconnect, I think we all are interested in other orch tools more
> > or less, but it's more about "who should allow new one to be added",
> > and that requires more than interest as might potentially block adding
> > new deliverable. Avoiding this disconnect is exactly why I'd like to
> > keep all deliverable team under one Kolla umbrealla, keep everyone in
> > same community so they can make use of each others experience (that
> > would also mean, that kolla-puppet is what I'd like to see rather than
> > puppet-kolla:)).
> >
>
> I mean it depends on what a proposed 'kolla-puppet' does.  If it's
> like puppet-tripleo, which falls under the TripleO umbrella and not
> Puppet OpenStack because it configures more than just a single
> 'openstack' related service, then that would make sense.  Since
> puppet-tripleo is a way of deploying all things OpenStack, it lives in the
> TripleO world.  But I don't necessarily agree that kolla-puppet should
> exist over puppet-kolla if it just configures 'kolla'.  I would like
> to see more cross group collaboration with the deployment tool groups
> and not keeping things to themselves.  As for the intricacies of the
> specific deployment tooling, because we already have patterns and
> plenty of tooling for deploying OpenStack related services in our
> other 40 or so modules I think the Puppet OpenStack community might be
> better suited to provide review feedback than say the Kolla group when
> it comes to puppet specific questions and best practices.  And
> speaking as the Puppet PTL there would not be anything stopping us
> from having Kolla cores also be cores on puppet-kolla.
>

These are good points. I don't know which way I lean on this subject at the
moment. But I would mention that kolla-ansible doesn't exist under
openstack-ansible-kolla, and kolla-salt (should that become a thing) isn't
openstack-salt-kolla.

Just because there is an openstack project that uses an orchestration tool
doesn't mean only one such project can exist, nor that all approaches would
be the same, even if the end goal is the same (deploy openstack).

So I am going to remain neutral on this point and say what you are saying
is reasonable, though on the other hand it may not be compatible in some
situations.

Thanks,
SamYaple

> I think it's important to focus on the sense of OpenStack community
> building (not just Kolla community) and spreading knowledge. I think it
> would be better not to try and keep everything to yourself if there's
> already a group of people in the community who specialize in a
> specific thing.  As an aside, I'd honestly like to see more
> contribution by the upstream projects into the puppet-* world because
> I think it's important for people to understand how the software they
> write actually gets consumed.
>
> Thanks,
> -Alex
>
> > Ad multi-deployment-tool friendly rivalry, it is meant to extend the
> > breadth of the project indeed, but let's face it, religious wars are
> > real (and vim is better than emacs ;)). I don't think the problem would be
> > ill intent though; I could easily predict the problem being rather of the
> > "I don't have time to look at this review queue" sort. Stalling reviews
> > can kill lots of potentially great changes.
> >
> >
> > On 5 January 2017 at 09:02, Sam Yaple  wrote:
> >> On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> wrote:
> >>>
> >>> On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> >>> [...]
> >>> > I do feel this is slightly different than what's described. Since it
> >>> > is not unrelated services, but rather, for lack of a better word,
> >>> > competing services. To my knowledge infra doesn't have several
> >>> > services doing the same job with different core teams (though I
> >>> > could be wrong).
> >>>
> >>> True, though I do find it an interesting point of view that helping
> >>> Kolla support multiple and diverse configuration management and
> >>> automation ecosystems is a "competition" rather than 

Re: [openstack-dev] [kolla] Multi-Regions Support

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 2:12 PM, Ronan-Alexandre Cherrueau <
ronan-alexandre.cherru...@inria.fr> wrote:

> Hello,
>
>
> TL;DR: We made a multi-region deployment with Kolla. It requires
> patching the code a little bit, and you can find the diff on our
> GitHub[1]. This patch is just a first attempt to support multi-regions
> in Kolla and it raises questions. Some modifications are not done in
> an idiomatic way and we do not expect this to be merged in Kolla. The
> remainder of this mail explains our patch and states our questions.
>
>
> At Inria/Discovery[2], we evaluate OpenStack at scale for the
> Performance Working Group. So far, we have focused on single-region
> OpenStack deployments with hundreds of computes, and we always go with
> Kolla for our deployment. Over the last few days, we tried to achieve
> a multi-region OpenStack deployment with Kolla. We want to share with
> you our current deployment workflow and the patches we had to apply to
> Kolla to support multi-regions, and also ask whether we are doing things
> correctly.
>
> First of all, our multi-regions deployment follows the one described
> by the OpenStack documentation[3]. Concretely, the deployment
> considers /one/ Administrative Region (AR) that contains Keystone and
> Horizon. This is a Kolla-based deployment, so Keystone is hidden
> behind an HAProxy, and has MariaDB and memcached as backend. At the
> same time, /n/ OpenStack Regions (OSR1, ..., OSRn) contain a full
> OpenStack, except Keystone. We got something as follows at the end of
> the deployment:
>
>
> Admin Region (AR):
> - control:
>   * Horizon
>   * HAProxy
>   * Keystone
>   * MariaDB
>   * memcached
>
> OpenStack Region x (OSRx):
> - control:
>   * HAProxy
>   * nova-api/conductor/scheduler
>   * neutron-server/l3/dhcp/...
>   * glance-api/registry
>   * MariaDB
>   * RabbitMQ
>
> - compute1:
>   * nova-compute
>   * neutron-agent
>
> - compute2: ...
>
>
> We do the deployment by running Kolla n+1 times. The first run deploys
> the Administrative Region (AR) and the other runs deploy the OpenStack
> Regions (OSR). For each run, we set the value of the `openstack_region_name'
> variable to the name of the current region.
>
> In the context of multi-regions, Keystone (in the AR) should be
> available to all OSRs. This means there are as many Keystone
> endpoints as regions. For instance, if we consider two OSRs, the
> result of listing endpoints at the end of the AR deployment looks like
> this:
>
>
>  $ openstack endpoint list
>
>  | Region | Serv Name | Serv Type | Interface | URL                          |
>  |--------+-----------+-----------+-----------+------------------------------|
>  | AR     | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | AR     | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | AR     | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>  | OSR1   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | OSR1   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | OSR1   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>  | OSR2   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | OSR2   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | OSR2   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>
>
> This requires patching the `keystone/tasks/register.yml' play[4] to
> re-execute the `Creating admin project, user, role, service, and
> endpoint' task for all regions we consider. An example of such a patch
> is given on our GitHub[5]. In this example, the `openstack_regions'
> variable is a list that contains the names of all regions (see [6]). As
> a drawback, the patch requires knowing all OSRs in advance. A better
> implementation would execute the `Creating admin project, user, role,
> service, and endpoint' task every time a new OSR is going to be
> deployed. But this requires moving the task somewhere else in the
> Kolla workflow and we have no idea where this should be.
>
> In the AR, we also have to change the Horizon configuration file to
> handle multi-regions[7]. The modification can be done easily and
> idiomatically by setting the `node_custom_config' variable to the
> `multi-regions' directory[8], benefiting from Kolla's config merging
> system.
>
> Also, deploying OSRs requires patching the kolla-toolbox, as it does not
> seem to be region-aware. In particular, we patch the
> `kolla_keystone_service.py' module[9], which is responsible for contacting
> Keystone and creating a new endpoint when we register a new OpenStack
> service.
>
>
>  for _endpoint in cloud.keystone_client.endpoints.list():
>      if _endpoint.service_id == service.id and \
>         _endpoint.interface == interface:
>          endpoint = _endpoint
>          if endpoint.url != url:
>              changed = True
>              cloud.keystone_client.endpoints.update(
>                  endpoint, url=url)
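
To make the intent of the patch concrete, the per-region endpoint
registration described above boils down to something like the following
sketch (using the CLI rather than the actual Ansible task; region names and
URLs are taken from the listing above):

  for region in AR OSR1 OSR2; do
      # one identity endpoint triple per region, all pointing at the AR keystone
      openstack endpoint create --region "$region" identity public   http://10.24.63.248:5000/v3
      openstack endpoint create --region "$region" identity internal http://10.24.63.248:5000/v3
      openstack endpoint create --region "$region" identity admin    http://10.24.63.248:35357/v3
  done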

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley  wrote:

> On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> [...]
> > I do feel this is slightly different than what's described, since it is
> > not unrelated services, but rather, for lack of a better word, competing
> > services. To my knowledge infra doesn't have several services doing the
> > same job with different core teams (though I could be wrong).
>
> True, though I do find it an interesting point of view that helping
> Kolla support multiple and diverse configuration management and
> automation ecosystems is a "competition" rather than merely
> extending the breadth of the project as a whole.
>

Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
expect these different deploy tools to bring new techniques that can then
be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.

Thanks,
SamYaple

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 4:34 PM, Jeremy Stanley  wrote:

> On 2017-01-05 15:58:45 + (+), Sam Yaple wrote:
> > Involving kolla-ansible and kolla-kubernetes in a decision about
> > kolla-salt (or kolla-puppet, or kolla-chef) is silly since the projects
> > are unrelated.
> > That would be like involving glance when cinder has a new service because
> > they both use keystone. The kolla-core team is reasonable since those are
> > the images being consumed.
>
> In contrast, the Infra team also has a vast number of fairly
> unrelated deliverables with their own dedicated core review teams.
> In our case, we refer to them as our "Infra Council" and ask them to
> weigh in with Roll-Call votes on proposals to the infra-specs repo.
> In short, just because people work primarily on one particular
> service doesn't mean they're incapable of providing useful feedback
> on and help prioritize proposals to add other (possibly unrelated)
> services.
>

I do feel this is slightly different than what's described, since it is not
unrelated services, but rather, for lack of a better word, competing
services. To my knowledge infra doesn't have several services doing the same
job with different core teams (though I could be wrong).

Thanks,
SamYaple

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 4:06 PM, Doug Hellmann  wrote:

> Excerpts from Sam Yaple's message of 2017-01-05 15:58:45 +:
> > Involving kolla-ansible and kolla-kubernetes in a decision about
> > kolla-salt (or kolla-puppet, or kolla-chef) is silly since the projects
> > are unrelated.
> > That would be like involving glance when cinder has a new service because
> > they both use keystone. The kolla-core team is reasonable since those are
> > the images being consumed.
>
> If those teams are so disconnected as to not have an interest in the
> work the other is doing, why are they part of the same umbrella project
> team?
>
The disconnection is rather new, though it has been a long time coming. In
Newton, all of the kolla-ansible code existed in the kolla repo. It has
since been split out to kolla-ansible to separate interests and allow new
projects like kolla-kubernetes (and even kolla-salt) to have the same
advantages as kolla-ansible and be first-class citizens. Frankly, the kolla
namespace is _not_ needed for these projects anymore. Kolla-salt does not
need to be called kolla-salt at all; it could be called some other name
without much ado. However, the primary goal of the split was to encourage
projects like this to pop up under the kolla namespace, and to keep different
deployment tools from interfering with each other.

As someone who works with Ansible and Salt, I don't personally think I
should be voting on the acceptance of a new deployment tool I have no
interest in that won't affect anything I am working on. Of course, this is
just my opinion.

Thanks,
SamYaple

> Doug
>
> >
> > Thanks,
> > SamYaple
> >
> >
> >
> > Sam Yaple
> >
> > On Thu, Jan 5, 2017 at 12:31 AM, Steven Dake (stdake) 
> > wrote:
> >
> > > Michal,
> > >
> > > Another option is 2 individuals from each core review team + PTL.
> That is
> > > lighter weight than 3 and 4, yet more constrained than 1 and 2, and
> > > would be
> > > my preferred choice (or alternatively 3 or 4).  Adding a deliverable is
> > > serious business ☺
> > >
> > > FWIW I don’t think we are at an impasse, it just requires a policy
> vote
> > > as we do today.
> > >
> > > Regards
> > > -steve
> > >
> > > -Original Message-
> > > From: Michał Jastrzębski 
> > > Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" <
> > > openstack-dev@lists.openstack.org>
> > > Date: Wednesday, January 4, 2017 at 3:38 PM
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > > openstack-dev@lists.openstack.org>
> > > Subject: [openstack-dev] [tc][kolla] Adding new deliverables
> > >
> > > Hello,
> > >
> > > New deliverable to Kolla was proposed, and we found ourselves in a
> bit
> > > of an impasse regarding process of accepting new deliverables.
> Kolla
> > > community grew a lot since we were singular project, and now we
> have 3
> > > deliverables already (kolla, kolla-ansible and kolla-kubernetes).
> 4th
> > > one was proposed, kolla-salt, all of them having separate core
> teams
> > > today. How do we proceed with this and following deliverables? How do
> > > we accept them into the kolla namespace? I can think of several ways.
> > >
> > > 1) Open door policy - whoever wants to create a new deliverable is
> > > just free to do so.
> > > 2) Lightweight agreement - just 2*+2 from Kolla core team to some
> > > list of deliverables that will sit in the kolla repo, potentially
> > > 2*+2 + PTL vote; it would be good for the PTL to know what he/she is
> > > PTL of ;)
> > > 3) Majority vote from Kolla core team - much like we do with policy
> > > changes today
> > > 4) Majority vote from all Kolla deliverables core teams
> > >
> > > My personal favorite is option 2+PTL vote. We want to encourage
> > > experiments and new contributors to use our namespace, for both a
> > > larger community and ease of navigation for users.
> > >
> > > One caveat to this would be to note that pre-1.0 projects are
> > > considered dev/experimental.
> > >
> > > Thoughts?
> > >
> > > Cheers,
> > > Michal
> > >
> > > 
> > > __
> > > OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
Involving kolla-ansible and kolla-kubernetes in a decision about kolla-salt
(or kolla-puppet, or kolla-chef) is silly since the projects are unrelated.
That would be like involving glance when cinder has a new service because
they both use keystone. The kolla-core team is reasonable since those are
the images being consumed.

Thanks,
SamYaple




On Thu, Jan 5, 2017 at 12:31 AM, Steven Dake (stdake) 
wrote:

> Michal,
>
> Another option is 2 individuals from each core review team + PTL.  That is
> lighter weight than 3 and 4, yet more constrained than 1 and 2, and would be
> my preferred choice (or alternatively 3 or 4).  Adding a deliverable is
> serious business ☺
>
> FWIW I don’t think we are at an impasse, it just requires a policy vote
> as we do today.
>
> Regards
> -steve
>
> -Original Message-
> From: Michał Jastrzębski 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, January 4, 2017 at 3:38 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [tc][kolla] Adding new deliverables
>
> Hello,
>
> New deliverable to Kolla was proposed, and we found ourselves in a bit
> of an impasse regarding process of accepting new deliverables. Kolla
> community grew a lot since we were singular project, and now we have 3
> deliverables already (kolla, kolla-ansible and kolla-kubernetes). 4th
> one was proposed, kolla-salt, all of them having separate core teams
> today. How do we proceed with this and following deliverables? How do
> we accept them into the kolla namespace? I can think of several ways.
>
> 1) Open door policy - whoever wants to create a new deliverable is just
> free to do so.
> 2) Lightweight agreement - just 2*+2 from Kolla core team to some list
> of deliverables that will sit in the kolla repo, potentially 2*+2 + PTL
> vote; it would be good for the PTL to know what he/she is PTL of ;)
> 3) Majority vote from Kolla core team - much like we do with policy
> changes today
> 4) Majority vote from all Kolla deliverables core teams
>
> My personal favorite is option 2+PTL vote. We want to encourage
> experiments and new contributors to use our namespace, for both a larger
> community and ease of navigation for users.
>
> One caveat to this would be to note that pre-1.0 projects are
> considered dev/experimental.
>
> Thoughts?
>
> Cheers,
> Michal
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-04 Thread Sam Yaple
As the person proposing kolla-salt, I would like to weigh in. I like the
idea of option 2, I certainly feel the PTL should always be involved in
these things.

I would also agree with treating pre-1.0 as dev/experimental, so as not to be
tightly bound by the rules for more stable and long-term projects (like
backward compatibility) until after 1.0 is tagged. This would give the
project the flexibility to reinvent itself as it grows.

What I do notice is a lack of wording describing who would be committing to
work on the new deliverable, and I think that is important. It could be a
team of people wanting to work on it, or one person putting forth work for
a new deployment tool using Kolla containers, with others joining in as they
see the potential of the project themselves. I would prefer to keep it that
way.

Thanks,
SamYaple


On Wed, Jan 4, 2017 at 11:38 PM, Michał Jastrzębski 
wrote:

> Hello,
>
> New deliverable to Kolla was proposed, and we found ourselves in a bit
> of an impasse regarding process of accepting new deliverables. Kolla
> community grew a lot since we were singular project, and now we have 3
> deliverables already (kolla, kolla-ansible and kolla-kubernetes). 4th
> one was proposed, kolla-salt, all of them having separate core teams
> today. How do we proceed with this and following deliverables? How do
> we accept them into the kolla namespace? I can think of several ways.
>
> 1) Open door policy - whoever wants to create a new deliverable is just
> free to do so.
> 2) Lightweight agreement - just 2*+2 from Kolla core team to some list
> of deliverables that will sit in the kolla repo, potentially 2*+2 + PTL
> vote; it would be good for the PTL to know what he/she is PTL of ;)
> 3) Majority vote from Kolla core team - much like we do with policy
> changes today
> 4) Majority vote from all Kolla deliverables core teams
>
> My personal favorite is option 2+PTL vote. We want to encourage
> experiments and new contributors to use our namespace, for both a larger
> community and ease of navigation for users.
>
> One caveat to this would be to note that pre-1.0 projects are
> considered dev/experimental.
>
> Thoughts?
>
> Cheers,
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Sam Yaple
On Mon, Sep 26, 2016 at 4:32 PM, Jeffrey Zhang 
wrote:

> Hey Sam,
>
> Yes, world-readable is bad. But writable by the currently running service is
> also bad.
>
> But in nova.conf, the rootwrap_config is configurable. It can be changed
> to a custom file to gain root permission.
>
> # nova.conf
> rootwrap_config = /tmp/rootwrap.conf
>
> # /tmp/rootwrap.conf
> [DEFAULT]
> filters_path = /tmp/rootwrap.conf.d/
>

Sorry Jeffrey, you are mistaken about this. Just changing the filters_path
means nothing. The filters_path is hardcoded in the /etc/sudoers.d/nova
file; sudo will not work with any arbitrary path you specify. If you want to
make the service config files nova:nova 0400 you can, though there is no
added benefit in doing this in my opinion. It is not a bad precaution I
suppose, though it may affect some people's development cycle with Kolla. I
remember I personally changed the config from inside the running docker
container once or twice while testing.
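
For reference, the sudoers entry that pins the rootwrap path looks roughly
like this (illustrative; the exact binary path varies by distro):

  # /etc/sudoers.d/nova
  nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *

Because the rootwrap.conf path is fixed in that rule, pointing
rootwrap_config at /tmp/rootwrap.conf just means sudo refuses the command.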

SamYaple


> so the file should be
>
> 0640 root:nova nova.conf
>
>
> On Mon, Sep 26, 2016 at 10:43 PM, Sam Yaple  wrote:
>
>> On Mon, Sep 26, 2016 at 1:18 PM, Jeffrey Zhang 
>> wrote:
>>
>>> Using the same user for running the service and owning the configuration
>>> files is a danger, i.e. the service user shouldn't be able to change the
>>> configuration files.
>>>
>>> a simple attack like:
>>> * a hacker breaks into the nova-api container as the nova user
>>> * he can change the /etc/nova/rootwrap.conf file and
>>> /etc/nova/rootwrap.d file, with which he can gain much greater authority
>>> via sudo
>>> * he also can change the /etc/nova/nova.conf file to use another
>>> privsep_command.helper_command to get greater authority
>>> [privsep_entrypoint]
>>> helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
>>> privsep-helper --config-file /etc/nova/nova.conf
>>>
>> This is not true. The helper command requires /etc/sudoers.d/*
>> configuration files to work. So just because it is changed to something
>> else doesn't mean an attacker could actually do anything with that,
>> considering /etc/nova/rootwrap* is already owned by root. This was fixed
>> early on in the Kolla lifecycle, pre-liberty.
>>
>> Feel free to adjust /etc/nova/nova.conf to root:root, but you won't be
>> gaining any security advantage in doing so; you will be making it worse
>> (see below). I don't know of a need for it to be owned by the service user,
>> other than that is how all openstack things are packaged and those are the
>> permissions in the repo and other deploy tools.
>>
>>
>>> So the right rule should be: do not let the service user have
>>> write permission to the configuration files.
>>>
>>> As for the nova.conf file, I think root:root with 644 permissions
>>> is enough; for the directory, root:root with 755 is enough.
>>>
>>
>> So this actually makes it _less_ secure. The 0600 permissions were chosen
>> for a reason.  The nova.conf file has passwords to the DB and rabbitmq. If
>> the configuration files are world readable then those passwords could leak
>> to an unprivileged user on the host.
>>
>>
>>> A related BP[0] and PS[1] is created
>>>
>>> [0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
>>> [1] https://review.openstack.org/376465
>>>
>>> On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 <1392607...@qq.com> wrote:
>>>
>>>> configuration file owner and permission in container
>>>>
>>>> --
>>>> Regrad,
>>>> zhubingbing
>>>>
>>>> 
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.op
>>>> enstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Sam Yaple
On Mon, Sep 26, 2016 at 3:03 PM, Christian Berendt <
bere...@betacloud-solutions.de> wrote:

> > On 26 Sep 2016, at 16:43, Sam Yaple  wrote:
> >
> > So this actually makes it _less_ secure. The 0600 permissions were
> chosen for a reason.  The nova.conf file has passwords to the DB and
> rabbitmq. If the configuration files are world readable then those
> passwords could leak to an unprivileged user on the host.
>
> Confirmed. Please do not make configuration files world readable.
>
> We use volumes for the configuration file directories. Why do we not
> simply use read only volumes? This way we do not have to touch the current
> implementation (files are owned by the service user with 0600 permissions)
> and can make the configuration files read only.
>

This is already done. When I first set up the config bind mounting we made
sure it was read-only. See [1]. The way configs work in Kolla is that the
files from that read-only bind mount are copied into the appropriate
directory in the container on container startup.

[1]
https://github.com/openstack/kolla/blob/b1f986c3492faa2d5386fc7baabbd6d8e370554a/ansible/roles/nova/tasks/start_compute.yml#L11
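
A simplified sketch of that pattern (image name and host path are
illustrative, not the exact Kolla invocation):

  docker run -d --name nova_compute \
    -v /etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro \
    kolla/centos-binary-nova-compute
  # on startup the container copies the config out of the read-only mount
  # into /etc/nova/ with the expected owner and permissions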

>
> Christian.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] the user in container should NOT have write permission for configuration file

2016-09-26 Thread Sam Yaple
On Mon, Sep 26, 2016 at 1:18 PM, Jeffrey Zhang 
wrote:

> Using the same user for running the service and owning the configuration
> files is a danger, i.e. the service user shouldn't be able to change the
> configuration files.
>
> a simple attack like:
> * a hacker breaks into the nova-api container as the nova user
> * he can change the /etc/nova/rootwrap.conf file and
> /etc/nova/rootwrap.d file, with which he can gain much greater authority
> via sudo
> * he also can change the /etc/nova/nova.conf file to use another
> privsep_command.helper_command to get greater authority
> [privsep_entrypoint]
> helper_command=sudo nova-rootwrap /etc/nova/rootwrap.conf
> privsep-helper --config-file /etc/nova/nova.conf
>
This is not true. The helper command requires /etc/sudoers.d/*
configuration files to work. So just because it is changed to something
else doesn't mean an attacker could actually do anything with that,
considering /etc/nova/rootwrap* is already owned by root. This was fixed
early on in the Kolla lifecycle, pre-liberty.

Feel free to adjust /etc/nova/nova.conf to root:root, but you won't be
gaining any security advantage in doing so; you will be making it worse
(see below). I don't know of a need for it to be owned by the service user,
other than that is how all openstack things are packaged and those are the
permissions in the repo and other deploy tools.


> So the right rule should be: do not let the service user have
> write permission to the configuration files.
>
> As for the nova.conf file, I think root:root with 644 permissions
> is enough; for the directory, root:root with 755 is enough.
>

So this actually makes it _less_ secure. The 0600 permissions were chosen
for a reason.  The nova.conf file has passwords to the DB and rabbitmq. If
the configuration files are world readable then those passwords could leak
to an unprivileged user on the host.
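
Concretely, the permissions being defended here amount to the following
(standard nova paths, shown for illustration):

  chown nova:nova /etc/nova/nova.conf
  chmod 0600 /etc/nova/nova.conf
  ls -l /etc/nova/nova.conf    # expect: -rw------- 1 nova nova ...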


> A related BP[0] and PS[1] is created
>
> [0] https://blueprints.launchpad.net/kolla/+spec/config-readonly
> [1] https://review.openstack.org/376465
>
> On Sat, Sep 24, 2016 at 11:08 PM, 1392607554 <1392607...@qq.com> wrote:
>
>> configuration file owner and permission in container
>>
>> --
>> Regrad,
>> zhubingbing
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron and MTU advertisements -- post newton

2016-07-11 Thread Sam Yaple
After lots of fun on IRC I have given up this battle. I am giving up
quickly because frickler has proposed a workaround (or better solution,
depending on who you ask). So for all of you keeping track at home, if you
want your vxlan and your vlan networks to have the same MTU, here are the
relevant options to set as of Mitaka.

[DEFAULT]
global_physnet_mtu = 1550
[ml2]
path_mtu = 1550
physical_network_mtus = physnet1:1500

This should go without saying, but I'll say it anyway: your underlying
network interface must be at least 1550 MTU for the above config to result
in all instances receiving a 1500 MTU regardless of network type. If you want
some extra IRC reading, there was a more extensive conversation about this
[1]. Good luck, you'll need it.

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-07-11.log.html#t2016-07-11T13:39:45
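
If you want to verify the underlay before trusting the config above,
something like this on each node will do (interface name is illustrative):

  ip link show eth0               # the reported mtu must be at least 1550
  ip link set dev eth0 mtu 1550   # raise it if the NIC and switch allow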

Sam Yaple

On Mon, Jul 11, 2016 at 7:22 PM, Fox, Kevin M  wrote:

> I fought for two weeks to figure out why one of my clouds didn't seem to
> want to work properly. It was in fact one of those helpful souls you
> mention below filtering out PMTU's. I had to play with some rather gnarly
> iptables rules to workaround the issue. -j TCPMSS --clamp-mss-to-pmtu
>
> So it does happen.
>
> Thanks,
> Kevin
>
> --
> *From:* Ian Wells [ijw.ubu...@cack.org.uk]
> *Sent:* Monday, July 11, 2016 12:04 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Neutron and MTU advertisements -- post
> newton
>
> On 11 July 2016 at 11:49, Sean M. Collins  wrote:
>
>> Sam Yaple wrote:
>> > In this situation, since you are mapping real-ips and the real world
>> runs
>> > on 1500 mtu
>>
>> Don't be so certain about that assumption. The Internet is a very big
>> and diverse place
>
>
> OK, I'll contradict myself now - the original question wasn't L2 transit.
> Never mind.
>
> That 'inter' bit is actually rather important.  MTU applies to a layer 2
> domain, and routing is designed such that the MTUs on the two ports of a
> router are irrelevant to each other.  What the world does has no bearing on
> the MTU I want on my L2 domain, and so Sam's point - 'I must choose the MTU
> other people use' - is simply invalid.  You might reasonably want your
> Neutron router to have an external MTU of 1500, though, to do what he asks
> in the face of some thoughtful soul filtering out PMTU exceeded messages.
> I still think it comes back to the same thing as I suggested in my other
> mail.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron and MTU advertisements -- post newton

2016-07-11 Thread Sam Yaple
On Mon, Jul 11, 2016 at 4:39 PM, Jay Pipes  wrote:

> On 07/11/2016 07:45 AM, Sam  Yaple wrote:
>
>> Hello,
>>
>> There was a lot of work to get MTU advertisement working well in Mitaka.
>> With the work that was done we can finally have 1500 mtu networks for
>> tunneled networks if the underlying infrastructure supports jumbo frames.
>>
>> It's fantastic for people who have 1500 mtu networks and want to use
>> vxlan, no more hacks to get the instance to use 1450 mtu. It's fantastic
>> for people who want to use 1500+ networks and get the instances set up
>> with 9000 mtu interfaces. It is not good for people who want a consistent
>> mtu no matter the network type. But that's fine, since mtu advertisement
>> _could_ be disabled. It's a fantastic default to have it turned on.
>>
>> With a recent patchset [1] the ability to turn off MTU advertisements
>> was deprecated in Newton. In the review it was stated there is no valid
>> use case for it. I disagree.
>>
>> The scenario is the infrastructure has jumbo frames enabled, but I do
>> not want the instances to be using jumbo frames, but I want them to be
>> using the default 1500 mtu that the rest of the world operates on. This
>> would still setup all of the virtual switching infrastructure to the
>> correct MTUs, but not try to adjust the instances MTUs. In this way the
>> instances are only communicating at 1500 mtu, but never having to
>> fragment/drop inside of the SDN when communicating with other networks
>> even if it is a VXLAN or other tunneled network.
>>
>> Without the option to disable mtu advertisement, inside the same
>> environment flat/vlan and gre/vxlan network will always have different
>> mtu, even if the backend supports jumbo frames.
>>
>> My ask is that we keep the advertise_mtu option, and keep it enabled by
>> default. This would allow for the default, common 1500 mtu across
>> networks of different types.
>>
>> This scenario would be very similar to having a computer with 1500 mtu
>> attached to a switch which supports jumbo frames. Just because the
>> switch will accept and process a 9000 mtu frame doesn't mean the
>> computer has to send a 9000 mtu frame. A very common scenario in the
>> real world.
>>
>> [1] https://review.openstack.org/#/c/310448/
>>
>
> Hi Sam,
>
> Out of curiosity, in what scenarios is it better to limit the instance's
> MTU to a value lower than that of the maximum path MTU of the
> infrastructure? In other words, if the infrastructure supports jumbo
> frames, why artificially limit an instance's MTU to 1500 if that instance
> could theoretically communicate with other instances in the same
> infrastructure via a higher MTU?
>
Hey Jay,

A not-so-uncommon way to set up networks in neutron involves the use of 1:1
NATs. You have a firewall device that holds real, valid public addresses
that map to private addresses (RFC 1918). So to OpenStack the network
appears as a private network, but some of those addresses map to public
addresses outside of OpenStack's sphere of knowledge. This works really,
really well when you have multiple separate ranges of public ip addresses
and separate gateways for each and you want to use them without creating
multiple subnets on an external network with OpenStack. This has been
written about in blog posts [1] and used in enterprise environments (it is
what Rackspace does for their private cloud [2]).

In this situation, since you are mapping real IPs and the real world runs
on a 1500 MTU, you want to make sure your MTUs match in ways that cannot be
auto-discovered. A good way to do this is to just use the default 1500 MTU
for every instance and ensure that it never fragments (which means at
least 1550 MTU networks for vxlan). So in this case you would have set up
your network in such a way that a 1500 MTU frame from the internet can
arrive at your instance without ever being fragmented, and outgoing traffic
isn't trying to send >1500 MTU packets into the real internet.

Additionally, there may be other services using the interface (it is not
dedicated to just neutron traffic), such as Ceph, which loves high MTUs. I
mention this as a secondary point because neutron doesn't affect this at
all, but it is related to your question.

[1] http://dachary.org/?p=2466
[2] https://developer.rackspace.com/blog/neutron-networking-l3-agent/
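
For concreteness, a minimal sketch of the 1:1 NAT described above, with
illustrative addresses (a real deployment would do this on the firewall
device, not inside OpenStack):

  # inbound: public 203.0.113.10 maps to instance 10.0.0.10
  iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 10.0.0.10
  # outbound: instance traffic leaves with the matching public address
  iptables -t nat -A POSTROUTING -s 10.0.0.10 -j SNAT --to-source 203.0.113.10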

> Sorry if my question is poorly worded. I'm no networking expert and am
> genuinely curious here. :)
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron and MTU advertisements -- post newton

2016-07-11 Thread Sam Yaple
Hello,

There was a lot of work to get MTU advertisement working well in Mitaka.
With the work that was done we can finally have 1500 mtu networks for
tunneled networks if the underlying infrastructure supports jumbo frames.

It's fantastic for people who have 1500 mtu networks and want to use vxlan,
no more hacks to get the instance to use 1450 mtu. It's fantastic for people
who want to use 1500+ networks and get the instances set up with 9000 mtu
interfaces. It is not good for people who want a consistent mtu no matter
the network type. But that's fine, since mtu advertisement _could_ be
disabled. It's a fantastic default to have it turned on.

With a recent patchset [1] the ability to turn off MTU advertisements was
deprecated in Newton. In the review it was stated there is no valid use
case for it. I disagree.

The scenario is the infrastructure has jumbo frames enabled, but I do not
want the instances to be using jumbo frames, but I want them to be using
the default 1500 mtu that the rest of the world operates on. This would
still setup all of the virtual switching infrastructure to the correct
MTUs, but not try to adjust the instances MTUs. In this way the instances
are only communicating at 1500 mtu, but never having to fragment/drop
inside of the SDN when communicating with other networks even if it is a
VXLAN or other tunneled network.

Without the option to disable mtu advertisement, inside the same
environment flat/vlan and gre/vxlan network will always have different mtu,
even if the backend supports jumbo frames.

My ask is that we keep the advertise_mtu option, and keep it enabled by default.
This would allow for the default, common 1500 mtu across networks of
different types.

This scenario would be very similar to having a computer with 1500 mtu
attached to a switch which supports jumbo frames. Just because the switch
will accept and process a 9000 mtu frame doesn't mean the computer has to
send a 9000 mtu frame. A very common scenario in the real world.

[1] https://review.openstack.org/#/c/310448/

Sam Yaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] deprecation of icehouse/juno/kilo branches

2016-03-25 Thread Sam Yaple
+1

Sam Yaple

On Fri, Mar 25, 2016 at 4:52 PM, Michał Jastrzębski 
wrote:

> +1
>
> On 25 March 2016 at 11:36, Jeffrey Zhang  wrote:
> > +1
> >
> > On Sat, Mar 26, 2016 at 12:07 AM, Ryan Hallisey 
> wrote:
> >>
> >> +1 for sure
> >>
> >> -Ryan
> >>
> >> - Original Message -
> >> From: "Paul Bourke" 
> >> To: openstack-dev@lists.openstack.org
> >> Sent: Friday, March 25, 2016 11:43:06 AM
> >> Subject: Re: [openstack-dev] [kolla][vote] deprecation of
> >> icehouse/juno/kilo branches
> >>
> >> +1
> >>
> >> On 25/03/16 15:36, Steven Dake (stdake) wrote:
> >> > Hey folks,
> >> >
> >> > These branches are essentially unmaintained and we have completely
> given
> >> > up on them.  That’s ok – they were paths of our development.  I didn't
> >> > really want to branch them in the first place, but the community
> >> > demanded it, so I delivered that :)
> >> >
> >> > Now that our architecture is fairly stable in liberty and mitaka, I
> >> > think it makes sense to do the following
> >> > 1. Tag each of the above branches with icehouse-eol, juno-eol,
> kilo-eol
> >> > 2. Ask infrastructure to remove the branches from git
> >> >
> >> > This is possible; I have just verified in openstack-infra.  I'd like a
> >> > vote from the core review team.  If you would like to see the kilo
> >> > branch stay, please note that, and if there is a majority on
> >> > icehouse/juno but not kilo I'll make the appropriate arrangements with
> >> > openstack-infra.
> >> >
> >> > I will leave voting open for 1 week until Saturday April 2nd.  I will
> >> > close voting early if there is a majority vote.
> >> >
> >> > Regards
> >> > -steve
> >> >
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > --
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][ptl] Kolla PTL Candidacy for Sam Yaple

2016-03-19 Thread Sam Yaple
I would like to announce my candidacy for PTL for the Kolla team for the Newton

release cycle.

I have been heavily involved with Kolla since the Liberty cycle. A few of my
major technical contributions include:

 * Bringing Ansible to Kolla
 * Containerized host dependencies (kolla-toolbox)
 * Named volumes for data in Docker
 * The kolla_docker.py Ansible module
 * The initial build.py (kolla-build) script
 * Making Neutron "thin" containers work

Some of the things I would like to see us focus on for the Newton cycle:

 * Separation of the Ansible code from the kolla repo into kolla-ansible
 * kolla-mesos and kolla-ansible testing, along with gating procedures leading
   the way for other deployment utilities to use Kolla containers (such as
   saltstack and kubernetes)
 * Stabilization of our stable branches. As a deployment project, I would like
   to see us increase our focus on the usability of our project. This means a
   much stronger focus on keeping our stable branches stable
 * Better use of the bug tracker for master and stable branches
 * Stronger documentation across the board
 * Better gates for testing deployment and upgrades from stable branches

My desire for this cycle is to focus on making Kolla usable for the people who
matter most to our project, the Operators. Several of us were operators
ourselves at one time and know the struggle of being forced to use unstable and
unintuitive software. The feedback we have received from real-world usage has
been the most useful, and I don't expect that trend to stop.

I am also very active within the Kolla community. I provided the highest number
of reviews and commits in both the Liberty and Mitaka cycles [0]. This has
given me a deep understanding of the codebase and inner workings of Kolla. For
these reasons, I have a strong vision of the direction we can take Kolla over
the next cycle.

Thanks for your contributions and consideration. I look forward to continuing
to work closely with our community!
- Sam Yaple

[0] http://stackalytics.com/?project_type=all&module=kolla&metric=commits
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [infra] Size of images built in the gate

2016-03-15 Thread Sam Yaple
Steve,

overlayfs does not reduce the disk usage at all.

Paul, we can bump the size of the docker mountpoint up to ~20GB if you
check all the gates for the appropriate space.
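
(For anyone checking their own environment, a rough way to see what a node
actually gives the docker mountpoint:)

  df -h /var/lib/docker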

Sam Yaple

On Mon, Mar 14, 2016 at 7:20 PM, Steven Dake (stdake) 
wrote:

> Vikram,
>
> /var/lib/docker is a btrfs filesystem.  It would be nice to just use
> overlayfs as it doesn't take nearly as much disk space and is much faster
> than btrfs.  I use overlay daily and it works like a champ unless I want to
> rebuild, in which case I think a bug in the ovl yum plugin prevents
> rebuilding if an image already exists.
>
> https://github.com/openstack/kolla/blob/master/tools/setup_RedHat.sh#L13
>
> Regards,
> -steve
>
>
> From: "Vikram Hosakote (vhosakot)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, March 14, 2016 at 11:51 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [kolla] [infra] Size of images built in the
> gate
>
> /var/lib/docker/aufs  takes a lot of space if many containers are running
> and
> writing data to their volumes.
>
> If the gate VM’s  /var/lib/docker  partition is small, you could move
> /var/lib/docker  to a different partition that is big enough (like /home
> or /tmp)
> and create a symbolic link to  /var/lib/docker.
>
> mv  /var/lib/docker  /home/new_docker_path_with_more_space
> ln -s  /home/new_docker_path_with_more_space  /var/lib/docker
>
> Alternatively, you can use the -g option for the docker daemon to use a
> different
> path instead of  /var/lib/docker  if it runs out of disk space.
>
> DOCKER_OPTS="-g /home/new_docker_path_with_more_space"
>
> Restart the docker daemon after making the above change.
>
> https://github.com/docker/docker/issues/3127
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: Paul Bourke 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, March 14, 2016 at 11:25 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [kolla] [infra] Size of images built in the gate
>
> Hi all,
>
> I'm currently working to add new gates for the oraclelinux image base
> type (https://blueprints.launchpad.net/kolla/+spec/oraclelinux-gate) and
> had a question on size available in the gate VMs.
>
> Right now the binary build is running out of space in the gate, as it
> exceeds the 10GB we're allocating for /var/lib/docker. For me, a current
> local build using binary+oraclelinux is clocking in at 10.01GB, whereas
> the centos+binary is a little smaller at 8.89GB.
>
> Is it written anywhere exactly what space is available in the gate VMs?
> Sam mentioned briefly that different providers used in the gate give a
> variety of disk sizes, so do we have an idea what we can reasonably bump
> the docker partition to?
>
> The above number indicates centos will likely soon run into the same
> problem as we dockerise more of the big tent, so I think it's a good
> idea to check this now before more gates start falling over.
>
> Thanks,
> -Paul
>
> p.s. if people are interested here's lists sorted by name of the
> oraclelinux and centos builds:
>
> oraclelinux: http://paste.fedoraproject.org/339672/96915914
> centos: http://paste.fedoraproject.org/339673/57969163
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] exception for backporting upgrades to liberty/stable

2016-03-07 Thread Sam Yaple
On Mon, Mar 7, 2016 at 3:03 PM, Steven Dake (stdake) 
wrote:

> Hi folks,
>
> It was never really discussed if we would back-port upgrades to liberty.
> This came up during an irc conversation Friday [1], and a vote was
> requested.  The facts of the discussion, distilled, are:
>
>- We never agreed as a group to do back-port of upgrade during our
>back-port discussion
>- An operator that can't upgrade her Z version of Kolla (1.1.1 from
>1.1.0) is stuck without CVE or OSSA corrections.
>- Because of a lack of security upgrades, the individual responsible
>for executing the back-port would abandon the work (but not use the
>abandon feature of gerrit for changes already in the queue)
>
> Since we never agreed, in that IRC discussion a vote was requested, and I
> am administering the vote.  The vote request was specifically whether we
> should back-port upgrade support in 1.1.0.  Both parties agreed they would live
> with the results.
>
> I would like to point out that not porting upgrades means that the liberty
> branch would essentially become abandoned unless some other brave soul
> takes up a backport.  I would also like to point out that that is yet
> another exception, much like the thin-containers back-port which was accepted.
> See how exceptions become the way to the dark side.  We really need to stay
> exception-free going forward (in Mitaka and later) as much as possible to
> prevent expectations that we will make exceptions when none should be made.
>
>
This is not an exception. This was always a requirement. If you disagree
with that, then we have never actually had a stable branch. The fact is we
_always_ needed z version upgrades for Kolla. It was _always_ the plan to
have them. Feel free to reference the IRC logs and our prior mid-cycle and
our Tokyo upgrade sessions. What changed was time and people's memories, and
that's why this is even a conversation.


> Please vote +1 (backport) or –1 (don’t backport).  An abstain in this case
> is the same as voting –1, so please vote either way.  I will leave the
> voting open for 1 week until April 14th.  If there is a majority in favor, I
> will close voting early.  We currently require 6 votes for majority as our
> core team consists of 11 people.
>
> Regards,
> -steve
>
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-03-04.log.html#t2016-03-04T18:23:26
>
> Warning [1] was a pretty heated argument and there may have been some
> swearing :)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Obviously I am +1 on committing to a truly _stable_ stable branch. And that
has always required Z upgrades. Luckily the work we did in master is also
usable for z upgrades. Without the ability to perform an update after a
vulnerability, we have no stable branch.

I would remind everyone we _did_ have this conversation in Tokyo when we
discussed pinning to stable versions of other projects rather than using the
head of their stable branch. We currently do that, and there is a review for
a tool to make that even easier to maintain [1]. There was even talk of a
bot to propose these version bumps. That means z upgrades were expected
for Liberty. Anyone thinking otherwise has a short memory.

[1] https://review.openstack.org/#/c/248481/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-07 Thread Sam Yaple
+1 Keep up the great reviews and patches!

Sam Yaple

On Mon, Mar 7, 2016 at 3:41 PM, Jeff Peeler  wrote:

> +1
>
> On Mon, Mar 7, 2016 at 3:57 AM, Michal Rostecki 
> wrote:
> > +1
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] unblocking the gate

2016-02-29 Thread Sam Yaple
On Mon, Feb 29, 2016 at 6:42 PM, Clark Boylan  wrote:

> On Mon, Feb 29, 2016, at 09:09 AM, Steven Dake (stdake) wrote:
> >
> >
> > On 2/29/16, 12:26 AM, "Andreas Jaeger"  wrote:
> > >This is not needed, the CI system always rebases if you run tests. To
> > >get current tests, a simple "recheck" is enough.
> > >
> > >Also, we test in the gate before merging - again after rebasing to head.
> > >That should take care of not merging anything broken. Running recheck
> > >after a larger change will ensure that you have recent results.
> >
> > Andreas,
> >
> > Thanks for the recheck information.  I thought the gate ran against what
> > it was submitted with as head.  We don't have any gate jobs at present (or
> > many); they are mostly check jobs, so it's pre-merge checking that we need
> > folks to do.
> >
> To clarify the check pipeline changes are merged into their target
> branches before testing and the gate pipeline changes are merged into
> the forward looking state of the world that Zuul generates as part of
> gate testing. This means you do not need to rebase and create new
> patchsets to get updated test results, just recheck.
>
>
Unfortunately we do not have voting gates in Kolla that do building and
deploying. This will change soon, but that would be where the confusion in
the thread is coming from, I believe. Our only indication is a "green" check
job at this time. This is why Steven is asking for a rebase to make the
check gates green before we merge new patches.

You should only need to rebase and create a new patchset if a merge
> conflict exists.
>
> Clark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-20 Thread Sam Yaple
Oracle, Red Hat, Mirantis, Servosity, 99cloud. Those are the biggest users,
at least according to the reviews and commits.

I am not in favor of limiting the number of cores from a single company.
However, there is an unwritten rule, which I have heard and abide by, that
a company should not push its own patch through. This means two people from
the same company should not approve a patch from a third person at that
same company. I feel that is a decent rule to follow.
On Feb 20, 2016 12:41 PM, "Joshua Harlow"  wrote:

> Out of curiosity, who are kollas big users?
>
> If it's mirantis and redhat (and nobody much else?) then meh, does this
> really matter. Sure work on getting more usage and adoption and other
> companies interested but why stagnate a project (by doing this) while that
> is underway?
>
> Other question: is kolla so influenced by mirantis or redhat management
> that there isn't trust that things will be handled appropriately by smart
> engineers/reviewers (who should not blindly listen to their management for
> all the things, but think of the bigger picture)?
>
> Just my 2 cents (I prefer trust rather than not and just curious what the
> real concern here is, and what evidence from past examples shows that this
> really is a concern in the first place).
>
> I will show myself out now, ha.
>
> -Josh
>
> On 02/20/2016 09:09 AM, Steven Dake (stdake) wrote:
>
>> Hey folks,
>>
>> Mirantis has been developing a big footprint in the core review team,
>> and Red Hat already has a big footprint in the core review team.  These
>> are all good things, but I want to avoid in the future a situation in
>> which one company has a majority of core reviewers.  Since core
>> reviewers set policy for the project, the project could be harmed if one
>> company has such a majority.  This is one reason why project diversity
>> is so important and has its own special snowflake tag in the governance
>> repository.
>>
>> I'd like your thoughts on how to best handle this situation, before I
>> trigger  a vote we can all agree on.
>>
>> I was thinking of something simple like:
>> "1 company may not have more then 33% of core reviewers.  At the
>> conclusion of PTL elections, the current cycle's 6 months of reviews
>> completed will be used as a metric to select the core reviewers from
>> that particular company if the core review team has shrunk as a result
>> of removal of core reviewers during the cycle."
>>
>> Thoughts, comments, questions, concerns, etc?
>>
>> Regards,
>> -steve
>>
>>
>>


Re: [openstack-dev] [kolla][vote] port neutron thin containers to stable/liberty

2016-02-20 Thread Sam Yaple
I was under the impression we did have a majority of cores in favor of the
idea at the midcycle. But if this is a vote-vote, then I am a very strong
+1 as well. This is something operators will absolutely want and need.

Sam Yaple

On Sat, Feb 20, 2016 at 4:27 PM, Michał Jastrzębski 
wrote:

> Strong +1 from me. This has multiple benefits:
> Easier (aka possible) debugging of networking in running envs (not
> having tools like tcpdump at your disposal is a pain) - granted, there
> are ways to get this working without thin containers, but they require a
> fair amount of docker knowledge.
> Docker daemon restart will not break routers - currently with a docker
> restart, the container holding the namespace dies and we lose our routers
> (they will migrate using HA, but well, still a networking downtime). This
> will no longer be the case so...
> Upgrades with no vm downtime whatsoever depend on this one.
> If we could deploy liberty code with all this nice stuff, I'd be a
> happier person ;)
>
> Cheers,
> Michal
>
> On 20 February 2016 at 07:40, Steven Dake (stdake) 
> wrote:
> > Just clarifying, this is not a "revote" - there were not enough core
> > reviewers in favor of this idea at the Kolla midcycle, so we need to
> have a
> > vote on the mailing list to sort out this policy decision of managing
> > stable/liberty.
> >
> > Regards,
> > -steve
> >
> >
> > From: Steven Dake 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Saturday, February 20, 2016 at 6:28 AM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [kolla][vote] port neutron thin containers to
> > stable/liberty
> >
> > Folks,
> >
> > There were not enough core reviewers to pass a majority approval of the
> > neutron thin container backport idea, so we separated it out from fixing
> > stable/liberty itself.
> >
> > I am going to keep voting open for *2* weeks this time.  The reason for
> the
> > two weeks is I would like a week of discussion before people just blindly
> > vote ;)
> >
> > Voting begins now and concludes March 4th.  Since this is a policy
> > decision, no veto votes are permitted, just a +1 and a -1.  Abstaining
> > is the same as voting -1.


Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-19 Thread Sam Yaple
+1 of course. I mean, it's Angus. Who can say no to Angus?

Sam Yaple

On Fri, Feb 19, 2016 at 10:57 PM, Michal Rostecki 
wrote:

> On 02/19/2016 07:04 PM, Steven Dake (stdake) wrote:
>
>> Angus is already in kolla-mesos-core but doesn't have broad ability to
>> approve changes for all of kolla-core.  We agreed by majority vote in
>> Tokyo that folks in kolla-mesos-core that integrated well with the
>> project would be moved from kolla-mesos-core to kolla-core.  Once
>> kolla-mesos-core is empty, we will deprecate that group.
>>
>> Angus has clearly shown his commitment to Kolla:
>> He is #9 in reviews for Mitaka and #3 in commits(!), and shows a
>> solid PDE of 64 (meaning 64 days of interaction with either reviews,
>> commits, or mailing list participation).
>>
>> Count my vote as a +1.  If you're on the fence, feel free to abstain.  A
>> vote of -1 is a VETO vote, which terminates the voting process.  If
>> there is unanimous approval prior to February 26, or a veto vote, the
>> voting will be closed and appropriate changes made.
>>
>> Remember now we agreed it takes a majority vote to approve a core
>> reviewer, which means Angus needs a +1 support from at least 6 core
>> reviewers with no veto votes.
>>
>> Regards,
>> -steve
>>
>>
> +1
> Good job, Angus!
>


Re: [openstack-dev] [kolla] Decision of how to manage stable/liberty from Kolla Midcycle

2016-02-16 Thread Sam Yaple
On Tue, Feb 16, 2016 at 6:15 PM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
> We held a midcycle Feb 9th and 10th.  The full notes of the midcycle are
> here:
> https://etherpad.openstack.org/p/kolla-mitaka-midcycle
>
> We had 3 separate ~40 minute sessions on making stable stable.  The reason
> for so many sessions on this topic was that it took a long time to come to
> an agreement about the problem and solution.
>
> There are two major problems with stable:
> Stable is hard-pinned to 1.8.2 of docker.  Ansible 1.9.4 is the last
> version of Ansible in the 1.x series coming from Ansible.  The Ansible
> 1.9.4 docker module is totally busted with Docker 1.8.3 and later.
>
> Stable uses data containers.  Data containers used with Ansible can
> result, in some very limited instances (such as an upgrade of the data
> container image), in *data loss*.  We didn't really recognize this until
> recently.  We can't really fix Ansible to behave correctly with the data
> containers.
>

This point is not correct. This is not an issue with Ansible, but rather
with Docker and persistent data. The solution to this problem is Docker
named volumes, which Docker has been moving toward and which were released
in Docker 1.9.
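As a rough sketch of the difference (hypothetical volume and container
names, driving the Docker CLI from Python; assumes Docker 1.9 or later):

    import subprocess

    def docker(*args):
        # Thin wrapper around the Docker CLI.
        subprocess.check_call(("docker",) + args)

    # A named volume exists independently of any container, so upgrading or
    # recreating the service container cannot take the data with it, unlike
    # a data container, where upgrading the image that "owns" the data is
    # what risked the loss described above.
    docker("volume", "create", "--name", "mariadb_data")

    # Any container, including a post-upgrade replacement, mounts the
    # volume by name.
    docker("run", "-d", "--name", "mariadb",
           "-v", "mariadb_data:/var/lib/mysql", "mariadb:latest")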


>
> The solution:
> Use the kolla-docker.py module to replace ansible's built-in docker
> module.  This is not a fork of that module from Ansible's upstream, so it
> has no GPLv3 licensing concerns.  Instead it's freshly written code in
> master.  This allows the Kolla upstream to implement support for any
> version of docker we prefer.
>
> We will be making 1.9 (and possibly 1.10, depending on the outcome of a
> thin containers vote) the minimum version of docker required to run
> stable/liberty.
>
> We will be replacing the data containers with named volumes.  Named
> volumes offer similar functionality (persistent data containment) via a
> different implementation.  They were introduced in Docker 1.9, because
> data containers have many shortcomings.
>
> This will require some rework on the playbooks.  Rather than backport the
> 900+ patches that have entered master since liberty, we are going to
> surgically correct the problems with named volumes.  We suspect this work
> will take 4-6 weeks to complete and will be fewer than 15 patches on top
> of stable/liberty.  The numbers here are just estimates, it could be more
> or less, but on that order of magnitude.
>
> The above solution is what we decided we would go with, after nearly 3
> hours of debate ;)  If I got any of that wrong, please feel free to chime
> in for folks that were there.  Note there was a majority of core reviewers
> present, and nobody raised objection to this plan of activity, so I'd
> consider it voted and approved :)  There was not a majority approval for
> another proposal to backport thin containers for neutron which I will
> handle in a separate email.
>
> Going forward, my personal preference is that we make stable branches a
> low-rate-of-change branch, rather than how it is misnamed to imply a
> high rate of backports to fix problems.  We will have further design
> sessions about stable branch maintenance at the Austin ODS.
>
> Regards
> -steve
>
>
To add to this, this would be a y-version change to Kolla, so this release
would be a 1.1.0 release rather than a 1.0.1 release. y releases are not
desired, but in this case one would be needed to make the changes we
propose.


Re: [openstack-dev] Proposed Agenda for Kolla Midcycle

2016-02-09 Thread Sam Yaple
On Tue, Feb 9, 2016 at 6:58 AM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
> The agenda generation happened approximately a month ago over a 3 week
> period, and then was voted on. I left the actual creation of the agenda
> until today in case any last minute pressing issues came in.  I took some
> suggestions from the copious notes in the Etherpad regarding pair
> programming for 80 minutes for upgrades and knocking out some reviews
> related only to upgrades.
>
> The agenda is here:
> https://etherpad.openstack.org/p/kolla-mitaka-midcycle
>
> Please *don't edit the agenda* – we can discuss in the morning and see if
> it needs fine tuning to fit folks schedules and come to a common agreement
> as a group.
>
> Folks on the west coast need to leave around 3:30-4:00PM (me included) on
> Wednesday to catch flights home which get them in at midnight ftl.  The
> midcycle can continue past this time, but please close up shop at 5pm and
> make copious notes for the folks that are on budget constraints.
>
> Ryan, if you're up for facilitating Thursday from 3:30 onward and you will
> be here, I think that makes sense :)
>

Surely you mean Wednesday. I am not sure that continuing the discussion and
making decisions without everyone there is a good idea. I would prefer a
working session where we could hammer out some code.

Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-06 Thread Sam Yaple
On Sat, Feb 6, 2016 at 5:12 PM, Duncan Thomas 
wrote:

> Actually, keeping track of changed blocks on cinder volumes would make the
> cinder incremental backup substantially more efficient... Something could
> push them into cinder at detach time, and an api call for cinder to pull
> them at live backup time, and cinder backup can do the rest... Not sure of
> the non-cinder bits of the architecture, but certainly an interesting
> idea. In the event something goes wrong, cinder can assume the whole
> device has changed or fall back to the current mechanism, so it is
> backward compatible from a tenant point of view...
>
The mechanism that nova could call right now has no libvirt counterpart, so
with Ekko I am talking to the qemu process through the monitor socket to
initiate these commands. QEMU spits out the data I need to back up. I am,
as of now, unsure whether the changed-block bitmap itself can be extracted
or if it is all internal tracking (I haven't looked into this).

I can see a future where, through Nova APIs, both Cinder and Ekko can
perform backups without the raw data having to traverse Nova itself, just
the metadata such as changed blocks and whatnot. This would be my preferred
solution, rather than having Nova itself run the backup and transfer of
data to the backing storage (not to mention scheduling and retention).
Additionally, it would mean that neither Ekko nor Cinder would need to talk
to the hypervisors directly while still utilizing that low-level info.
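For the curious, a minimal sketch of that monitor-socket conversation
(hypothetical socket path, node and bitmap names; asynchronous QMP events
are not handled, and in practice the first pass would be a sync=full backup
taken when the bitmap is added):

    import json
    import socket

    def qmp(path, *commands):
        # Bare-bones QMP client: read the greeting, negotiate capabilities,
        # then send each command and read one reply line back.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(path)
        stream = sock.makefile("rw")
        json.loads(stream.readline())  # server greeting banner
        for command in ({"execute": "qmp_capabilities"},) + commands:
            stream.write(json.dumps(command) + "\n")
            stream.flush()
            print(json.loads(stream.readline()))
        sock.close()

    # Start tracking writes to the disk (QEMU 2.4+).
    qmp("/var/run/qemu/instance-0001.monitor",
        {"execute": "block-dirty-bitmap-add",
         "arguments": {"node": "drive-virtio-disk0", "name": "ekko0"}})

    # Later: copy out only the sectors the bitmap marked dirty; on success
    # QEMU clears the bitmap so the next incremental starts fresh.
    qmp("/var/run/qemu/instance-0001.monitor",
        {"execute": "drive-backup",
         "arguments": {"device": "drive-virtio-disk0", "sync": "incremental",
                       "bitmap": "ekko0", "target": "/backups/inc0.qcow2",
                       "format": "qcow2"}})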


> On 6 Feb 2016 17:56, "Sam Yaple"  wrote:
>
>> On Sat, Feb 6, 2016 at 3:00 PM, Jeremy Stanley  wrote:
>>
>>> On 2016-02-05 16:38:19 +0000 (+0000), Sam Yaple wrote:
>>> > I always forget to qualify that statement don't I? Nova does not
>>> > have a mechanism for _incremental_ backups. Nor does Nova have
>>> > compression or encryption because AFAIK that api only creates a
>>> > snapshot. I would also point out again that snapshots != backups,
>>> > at least not for those who care about backups.
>>>
>>> And just to be clear, you assert that the Nova team would reject
>>> extending their existing backup implementation to support this, so
>>> the only real solution is to make another project.
>>>
>>
>> I don't know if Nova would reject it or not, but as discussed it could be
>> extended to Cinder. Should Nova ever back up Cinder volumes? Additionally,
>> why don't we combine networking into Nova? Or images? Or volumes? What I
>> do assert is that we have done a lot of work to strip out components from
>> Nova; backups don't seem like a good candidate to shove into Nova.
>>
>> Luckily, since Ekko and Nova (just like Ekko and Freezer) don't have any
>> conflicting operations, should Ekko be built separately and merged into
>> Nova it would be a fairly painless process, since there are no
>> overlapping services.
>>
>> Integration with Nova where Nova controls the hypervisor and Ekko
>> requests operations through the Nova API before doing the backup is
>> another question, and that is reasonable in my opinion. This is likely an
>> issue that can be addressed down the road rather than at this moment,
>> though.


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-06 Thread Sam Yaple
On Sat, Feb 6, 2016 at 3:00 PM, Jeremy Stanley  wrote:

> On 2016-02-05 16:38:19 +0000 (+0000), Sam Yaple wrote:
> > I always forget to qualify that statement don't I? Nova does not
> > have a mechanism for _incremental_ backups. Nor does Nova have
> > compression or encryption because AFAIK that api only creates a
> > snapshot. I would also point out again that snapshots != backups,
> > at least not for those who care about backups.
>
> And just to be clear, you assert that the Nova team would reject
> extending their existing backup implementation to support this, so
> the only real solution is to make another project.
>

I don't know if Nova would reject it or not, but as discussed it could be
extended to Cinder. Should Nova ever back up Cinder volumes? Additionally,
why don't we combine networking into Nova? Or images? Or volumes? What I do
assert is that we have done a lot of work to strip out components from
Nova; backups don't seem like a good candidate to shove into Nova.

Luckily, since Ekko and Nova (just like Ekko and Freezer) don't have any
conflicting operations, should Ekko be built separately and merged into
Nova it would be a fairly painless process, since there are no overlapping
services.

Integration with Nova where Nova controls the hypervisor and Ekko requests
operations through the Nova API before doing the backup is another
question, and that is reasonable in my opinion. This is likely an issue
that can be addressed down the road rather than at this moment, though.


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-05 Thread Sam Yaple
On Fri, Feb 5, 2016 at 3:31 PM, Jay Pipes  wrote:

> On 02/05/2016 09:58 AM, Sam Yaple wrote:
>
>> Since Nova has no backup mechanism this is clearly a gap and that was the
>> issue
>> Ekko wants to solve.
>>
>
> Nova has had backups for a long time:
>
> http://developer.openstack.org/api-ref-compute-v2.1.html#createBackup
>
I always forget to qualify that statement, don't I? Nova does not have a
mechanism for _incremental_ backups. Nor does Nova have compression or
encryption because AFAIK that api only creates a snapshot. I would also
point out again that snapshots != backups, at least not for those who care
about backups.

Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-05 Thread Sam Yaple
On Thu, Feb 4, 2016 at 2:23 PM, gordon chung  wrote:

>
>
> On 03/02/2016 10:38 AM, Sam Yaple wrote:
>
> On Wed, Feb 3, 2016 at 2:52 PM, Jeremy Stanley  wrote:
>
>> On 2016-02-03 14:32:36 +0000 (+0000), Sam Yaple wrote:
>> [...]
>> > Luckily, digging into it it appears cinder already has all the
>> > infrastructure in place to handle what we had talked about in a
>> > separate email thread Duncan. It is very possible Ekko can
>> > leverage the existing features to do it's backup with no change
>> > from Cinder.
>> [...]
>>
>> If Cinder's backup facilities already do most of
>> what you want from it and there's only a little bit of development
>> work required to add the missing feature, why jump to implementing
>> this feature in a completely separate project rather than
>> improving Cinder's existing solution so that people who have been
>> using that can benefit directly?
>>
>
> Backing up Cinder was never the initial goal, just a potential feature on
> the roadmap. Nova is the main goal.
>
> i'll extend fungi's question, are the backup framework/mechanisms common
> whether it be Nova or Cinder or anything else? or are they unique but only
> grouped together as a service because they backup something. it seems the
> problem is we've imagined the service as tackling a horizontal issue when
> really it is just a vertical story that appears across many silos.
>
>
The framework would not be common even between Ekko and Cinder, and there
is no Nova "backup". The main featureset that will be utilized is CBT
(changed-block tracking), which is needed for efficient incremental
backups; otherwise the entire block device must be read each time a backup
is performed. The design of Ekko solves the horizontal issue of scaling
backups; there is also this vertical integration that _can_ be done to make
the whole experience more pleasant and reliable. Since Nova has no backup
mechanism this is clearly a gap, and that is the issue Ekko wants to solve.
Also, backing up Cinder volumes is on the roadmap, but has not been a
stated priority.
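As a toy illustration of why CBT is the key piece (hypothetical 4 MiB
segment size; the dirty indexes would come from the hypervisor's tracking,
not from anything computed here):

    SEGMENT = 4 * 1024 * 1024  # hypothetical 4 MiB backup segment

    def changed_segments(device_path, dirty_indexes):
        # With CBT, only the segments marked dirty since the last backup
        # are read; without it, every segment of the device must be read.
        with open(device_path, "rb") as device:
            for index in sorted(dirty_indexes):
                device.seek(index * SEGMENT)
                yield index, device.read(SEGMENT)

For a multi-terabyte volume with a few gigabytes of churn, that is the
difference between reading the entire device and reading only the churn.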

Sam Yaple


> cheers,
>
> --
> gord
>
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 4:53 PM, Preston L. Bannister 
wrote:

> On Wed, Feb 3, 2016 at 6:32 AM, Sam Yaple  wrote:
>
>> [snip]
>>
> Full backups are costly in terms of IO, storage, bandwidth and time. A
>> full backup being required in a backup plan is a big problem for backups
>> when we talk about volumes that are terabytes large.
>>
>
> As an incidental note...
>
> You have to collect full backups, periodically. To do otherwise assumes 
> *absolutely
> no failures* anywhere in the entire software/hardware stack -- ever --
> and no failures in storage over time. (Which collectively is a tad
> optimistic, at scale.) Whether due to a rare software bug, a marginal piece
> of hardware, or a stray cosmic ray - an occasional bad block will slip
> through.
>

A new full can be triggered at any time should there be concern of a
problem (see my next point).

>
> More exactly, you need some means of doing occasional full end-to-end
> verification of stored backups. Periodic full backups are one
> safeguard. How you go about performing full verification, and how often is
> a subject for design and optimization. This is where things get a *bit*
> more complex. :)
>

Yes, an end-to-end verification of the backup would be easy to implement,
but costly to run. That's more on the user to decide, though. With a proper
scheduler this is less an issue for Ekko and more a backup policy issue.

>
> Or you just accept a higher error rate. (How high depends on the
> implementation.)
>

And it's not a full loss, it's just not a 100% valid backup. Luckily you've
only lost a single segment (a few thousand sectors); chances are the
critical stuff you want isn't there. The rest of the data can still be
recovered. And object storage with replication makes it very, very hard to
lose data when properly maintained (look at S3 and the data it has lost
over time). We have checksum/hash verification in place already, so the
underlying data must be valid or we don't restore. But your points are well
received.

>
> And "Yes", multi-terabyte volumes *are* a challenge.
>

And increasingly common...


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 3:58 PM, Duncan Thomas 
wrote:

> On 3 February 2016 at 17:52, Sam Yaple  wrote:
>
>
>> This is a very similar method to what Ekko is doing. The json mapping in
>> Ekko is a manifest file which is a sqlite database. The major difference
>> I see is Ekko is doing backup trees. If you launch 1000 instances from
>> the same glance image, you don't need 1000 fulls, you need 1 full and
>> 1000 incrementals. Doing that means you save a ton of space, time,
>> bandwidth, and IO, but it also means n number of backups can reference
>> the same chunk of data, and it makes deletion of that data much harder
>> than you describe in Cinder. When restoring a backup, you don't _need_ a
>> new full; you can start your backups based on the last restore point, and
>> the same point about saving applies. It also means that Ekko can provide
>> "backups can scale with OpenStack" in that sense. Your backups will only
>> ever be your changed data.
>>
>> I recognize that isn't probably a huge concern for Cinder, with volumes
>> typically being unique data and not duplicate data, but with nova I
>> would argue _most_ instances in an OpenStack deployment will be based on
>> the same small subset of images, and that's a lot of duplicate data to
>> consider backing up, especially at scale.
>>
>>
>
> So this sounds great. If your backup formats are similar enough, it is
> worth considering putting a backup export function in that spits out a
> cinder-backup compatible JSON file (it's a dead simple format) and perhaps
> an import for the same. That would allow cinder backup and Ekko to exchange
> data where desired. I'm not sure if this is possible, but I'd certainly
> suggest looking at it.
>
This is potentially possible. The issue I see would be around
compression/encryption of the different chunks (in Ekko we refer to them as
segments). But we will probably be able to work this out in time.


> Thanks for keeping the dialog open, it has definitely been useful.
>
I have enjoyed the exchange as well. I am a big fan of open source and
community.

>
> --
> Duncan Thomas
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 3:36 PM, Duncan Thomas 
wrote:

> On 3 February 2016 at 17:27, Sam Yaple  wrote:
>
>
>>
>> And here we get to the meat of the matter. Squashing backups is awful in
>> object storage. It requires you to pull both backups, merge them, then
>> reupload. This also has the downside of casting doubt on a backup, since
>> you are now modifying data after it has been backed up (though that doubt
>> is lessened with proper checksumming/hashing, which it looks like cinder
>> does). This is the issue Ekko can solve (and has solved over the past 2
>> years). Ekko can do this "squashing" in a non-traditional way, without
>> ever modifying content or merging anything. With deletions only. This
>> means we do not have to pull two backups, merge, and reupload to delete a
>> backup from the chain.
>>
>
> I'm sure we've lost most of the audience by this point, but I might as
> well reply here as anywhere else...
>

That's ok. We are talking, and that's important for featuresets that people
don't even know they want!

>
> In the cinder backup case, since the backup is chunked in object store,
> all that is required is to reference count the chunks that are required for
> the backups you want to keep, get rid of the rest, and re-upload the (very
> small) json mapping file. You can either upload over the old json, or
> create a new one. Either way, the bulk data does not need to be touched.
>

This is a very similar method to what Ekko is doing. The json mapping in
Ekko is a manifest file which is a sqlite database. The major difference I
see is Ekko is doing backup trees. If you launch 1000 instances from the
same glance image, you don't need 1000 fulls, you need 1 full and 1000
incrementals. Doing that means you save a ton of space, time, bandwidth,
and IO, but it also means n number of backups can reference the same chunk
of data, and it makes deletion of that data much harder than you describe
in Cinder. When restoring a backup, you don't _need_ a new full; you can
start your backups based on the last restore point, and the same point
about saving applies. It also means that Ekko can provide "backups can
scale with OpenStack" in that sense. Your backups will only ever be your
changed data.

I recognize that isn't probably a huge concern for Cinder, with volumes
typically being unique data and not duplicate data, but with nova I would
argue _most_ instances in an OpenStack deployment will be based on the same
small subset of images, and that's a lot of duplicate data to consider
backing up, especially at scale.
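To make that concrete, here is a sketch of the kind of manifest this
implies (an illustrative sqlite schema, not Ekko's actual manifest format):

    import sqlite3

    db = sqlite3.connect("manifest.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS backups (id INTEGER PRIMARY KEY,
                                            parent INTEGER);  -- backup tree
        CREATE TABLE IF NOT EXISTS refs    (backup INTEGER,   -- which backup
                                            segment TEXT);    -- object name
    """)

    def reference_counts():
        # 1000 instances booted from one glance image mean 1000 incremental
        # backups whose parent is the same full: each base segment is stored
        # once in object storage and referenced n times here.
        return db.execute(
            "SELECT segment, COUNT(*) FROM refs GROUP BY segment").fetchall()

Deletion then becomes dropping a backup's rows and removing only the
segments whose reference count reaches zero.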

I will have to understand a bit more about cinder-backup before I approach
that subject with Ekko (which right now is on the newton roadmap). What you
have told me absolutely justifies the cinder-backup name (rather than
cinder-snapshot) so thank you for correcting me on that point!


>
>
>
> --
> --
> Duncan Thomas
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 2:52 PM, Jeremy Stanley  wrote:

> On 2016-02-03 14:32:36 +0000 (+0000), Sam Yaple wrote:
> [...]
> > Luckily, digging into it, it appears cinder already has all the
> > infrastructure in place to handle what we had talked about in a
> > separate email thread, Duncan. It is very possible Ekko can
> > leverage the existing features to do its backup with no change
> > from Cinder.
> [...]
>
> If Cinder's backup facilities already do most of
> what you want from it and there's only a little bit of development
> work required to add the missing feature, why jump to implementing
> this feature in a completely separate project rather than
> improving Cinder's existing solution so that people who have been
> using that can benefit directly?
>

Backing up Cinder was never the initial goal, just a potential feature on
the roadmap. Nova is the main goal.


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 2:37 PM, Duncan Thomas 
wrote:

>
>
> On 3 February 2016 at 16:32, Sam Yaple  wrote:
>
>>
>> Looking into it, however, shows Cinder has no mechanism to delete backups
>> in the middle of a chain since you use dependent backups (please correct
>> me if I am wrong here). This means after a number of incremental backups
>> you _must_ take another full to ensure the chain doesn't get too long.
>> That is a problem Ekko is proposing to solve as well. Full backups are
>> costly in terms of IO, storage, bandwidth and time. A full backup being
>> required in a backup plan is a big problem for backups when we talk about
>> volumes that are terabytes large.
>>
>
> You're right that this is an issue currently. Cinder actually has enough
> info in theory to be able to trivially squash backups to be able to break
> the chain, it's only a bit of metadata ref counting and juggling, however
> nobody has yet written the code.
>
>
And here we get to the meat of the matter. Squashing backups is awful in
object storage. It requires you to pull both backups, merge them, then
reupload. This also has the downside of casting doubt on a backup, since
you are now modifying data after it has been backed up (though that doubt
is lessened with proper checksumming/hashing, which it looks like cinder
does). This is the issue Ekko can solve (and has solved over the past 2
years). Ekko can do this "squashing" in a non-traditional way, without ever
modifying content or merging anything. With deletions only. This means we
do not have to pull two backups, merge, and reupload to delete a backup
from the chain.
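A sketch of that deletion-only "squash" (using a hypothetical in-memory
stand-in for the per-backup manifests):

    def squash(manifests, delete_id):
        # manifests: backup_id -> set of segment object names it references.
        # Removing a backup from the middle of the chain never merges or
        # re-uploads anything: segments still referenced by any remaining
        # backup stay put; the rest are simply deleted from object storage.
        doomed = manifests.pop(delete_id)
        still_referenced = set().union(*manifests.values())
        return doomed - still_referenced  # object names safe to DELETE

    chain = {"full": {"s0", "s1", "s2"}, "inc1": {"s1x", "s2"}}
    print(squash(chain, "inc1"))  # {'s1x'}; 's2' survives, still in 'full'

Nothing already uploaded is ever rewritten, so the doubt described above
never arises.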


>> Luckily, digging into it, it appears cinder already has all the
>> infrastructure in place to handle what we had talked about in a separate
>> email thread, Duncan. It is very possible Ekko can leverage the existing
>> features to do its backup with no change from Cinder. This isn't the
>> initial priority for Ekko, though, but it is good information to have.
>> Thank you for your comments!
>>
>
>
> Always interested in better ways to solve backup.
>

Thats the plan!

>
> --
> Duncan Thomas
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 1:41 PM, Duncan Thomas 
wrote:

> On 2 February 2016 at 02:28, Sam Yaple  wrote:
>
>>
>> I disagree with this statement strongly, as I have stated before. Nova
>> has snapshots. Cinder has snapshots (though they do say cinder-backup).
>> Freezer wraps Nova and Cinder. Snapshots are not backups. They are
>> certainly not _incremental_ backups. They can have neither compression
>> nor encryption. With this in mind, Freezer does not have this "feature"
>> at all. It's not that it needs improvement; it simply does not exist in
>> Freezer. So a separate project dedicated to that one goal is not
>> unreasonable. The real question is whether it is practical to merge
>> Freezer and Ekko, and this is the question Ekko and the Freezer team are
>> attempting to answer.
>>
>
> You're misinformed of the cinder feature set there - cinder has both
> snapshots (usually fast COW thing on the same storage backend) and backups
> (copy to a different storage backend, usually swift but might be
> NFS/ceph/TSM) - the backups support incremental and compression.
> Encryption separate from the volume encryption is not yet supported or
> implemented, merely because nobody has written it yet. There's also live
> backup (internally via a snapshot) merged last cycle.
>
You are right, Duncan. I was working from outdated information that Cinder
does not have incremental backups. I apologize for the misstep there; we
haven't started on the Cinder planning yet, so I haven't looked into it in
great detail.

Looking into it, however, shows Cinder has no mechanism to delete backups
in the middle of a chain since you use dependent backups (please correct me
if I am wrong here). This means after a number of incremental backups you
_must_ take another full to ensure the chain doesn't get too long. That is
a problem Ekko is proposing to solve as well. Full backups are costly in
terms of IO, storage, bandwidth and time. A full backup being required in a
backup plan is a big problem for backups when we talk about volumes that
are terabytes large.

Luckily, digging into it, it appears cinder already has all the
infrastructure in place to handle what we had talked about in a separate
email thread, Duncan. It is very possible Ekko can leverage the existing
features to do its backup with no change from Cinder. This isn't the
initial priority for Ekko, though, but it is good information to have.
Thank you for your comments!


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Sam Yaple
On Tue, Feb 2, 2016 at 11:31 PM, Sean McGinnis 
wrote:

> On Tue, Feb 02, 2016 at 02:04:44PM -0800, Preston L. Bannister wrote:
> > I submitted for a presentation on "State of the Art for in-Cloud backup
> of
> > high-value Applications". Notion is to give context for the folk who need
> > this sort of backup. Something about where we have been, where we are,
> and
> > what might become possible. I think it would be great to pull in folk
> from
> > Freezer and Ekko. Jay Pipes seems to like to weigh in on the topic, and
> > could represent Nova? Will gladly add as speakers interested folk! (Of
> > course, the odds of winning a slot are pretty low, but worth a shot.)
>
> I'd love to see a presentation on Ekko, Freezer, Smaug, as well as the
> built in capabilities such as Cinder backup.
>

I submitted an Ekko talk, "The 'B' Word -- Backups in OpenStack". The title
seems all-inclusive, but in reality I am just talking about the block-based
side of backups. I am co-presenting with another Ekko dev and we do have a
brief slot in our outline for explaining Ekko's place in the OpenStack
ecosystem and its difference from nova-snapshot or cinder-backup.

Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-02 Thread Sam Yaple
On Feb 2, 2016 7:41 AM, "Preston L. Bannister"  wrote:
>
> Oh, for the other folk reading, in QEMU you want to look at:
>
> http://wiki.qemu.org/Features/IncrementalBackup
>
> The above page looks to be current. The QEMU wiki seems to have a number
> of stale pages that describe proposed function that was abandoned / never
> implemented. Originally, I ended up reading the QEMU mailing list and
> source code to figure out which bits were real. :)

Unfortunately the link you provided is old as well. Source code is the only
way to go at this point for valid information. But I do have a working
structure for a backup using QMP directly, if only manually, at this point.

>
> On Tue, Feb 2, 2016 at 4:04 AM, Preston L. Bannister  wrote:
>>
>> To be clear, I work for EMC, and we are building a backup product for
>> OpenStack (which at this point is very far along). The primary lack is a
>> good means to efficiently extract changed-block information from
>> OpenStack. About a year ago I worked through the entire
>> Nova/Cinder/libvirt/QEMU stack, to see what was possible. The changes to
>> QEMU (which have been in-flight since 2011) looked most promising, but
>> when they would land was unclear. They are starting to land. This is big
>> news. :)
>>
>> That is not the end of the problem. Unless the QEMU folk are perfect,
>> there are likely bugs to be found when the code is put into production.
>> (With more exercise, the sooner any problems can be identified and
>> addressed.) OpenStack uses libvirt to talk to QEMU, and libvirt is a
>> fairly thick abstraction. Likely there will want to be adjustments to
>> libvirt. Bypassing Nova and chatting with libvirt directly is a bit
>> suspect (but may be needed). There might be adjustments needed in Nova.
>>
>> To offer suggestions...
>>
>> Ekko is an opinionated approach to backup. This is not the only way to
>> solve the problem. I happen to very much like the approach, but as a
>> specific approach, it probably does not belong in Cinder or Nova. (I
>> believe it was Jay who offered a similar argument about backup more
>> generally.)
>>
>> (Keep in mind QEMU is not the only hypervisor supported by Nova, even if
>> it is the majority in use. Would you want to attempt a design that works
>> for all hypervisors? I would not!  ...at least at this point. Also, last
>> I checked the Cinder folk were a bit hung up on replication, as finding
>> common abstractions across storage was not easy. This problem looks
>> similar.)

Actually, now that you mention it, the approach Ekko takes _does_ work
against all hypervisors that support changed-block-tracking. At this point
that includes QEMU, VMWare, and HyperV (no Xen).

>> While wary of bypassing Nova/Cinder, my suggestion would be to be rude in
>> the beginning, with every intent of becoming civil in the end.
>>
>> Start by talking to libvirt directly. (There was a bypass mechanism in
>> libvirt that looked like it might be sufficient.) Break QEMU early, and
>> get it fixed. :)
>>
>> When QEMU usage is working, talk to the libvirt folk about proven needs,
>> and what is needed to become civil.
>>
>> When libvirt is updated (or not), talk to Nova folk about proven needs,
>> and what is needed to become civil. (Perhaps simply awareness, or a small
>> set of primitives.)

It's like you are a mind reader! These are my exact thoughts on the
approach as well.

>> It might take quite a while for the latest QEMU and libvirt to ripple
>> through into OpenStack distributions. Getting any fixes into QEMU early
>> (or addressing discovered gaps in needed function) seems like a good
>> thing.
>>
>> All the above is a sufficiently ambitious project, just by itself. To my
>> mind, that justifies Ekko as a unique, focused project.

Thank you for your input. This is the case in my mind as well, but if this
goal can be achieved _better_ with Freezer we absolutely should meld with
Freezer. That's the question we are trying to figure out.


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-01 Thread Sam Yaple
On Mon, Feb 1, 2016 at 10:32 PM, Fausto Marzi 
wrote:

> Hi Preston,
> Thank you. You saw Fabrizio in Vancouver, I'm Fausto, but it's all
> right :P
>
> The challenge is interesting. If we want to build a dedicated backup API
> service (which is always what we wanted to do), probably we need to:
>
>
>- Move the backup features out of Nova and Cinder, as it wouldn't
>make much sense to me to have a backup service and also have backups
>managed independently by Nova and Cinder.
>
>
> That said, I'm not a big fan of the following:
>
>- Interacting with the hypervisors and the volumes directly without
>passing through the Nova and Cinder API.
>
Passing through the API will be a huge issue for extracting data due to
the sheer volume of data needed (TB through the API is going to kill
everything!)

>
>- Adding any additional workload on the compute nodes or block storage
>nodes.
>- Computing incremental, compression, encryption is expensive. Having
>many simultaneous processes doing that may lead to bad behaviours on core
>services.
>
These are valid concerns, but the alternative is still shipping the raw
data elsewhere to do this work, and that has its own issue in terms of
bandwidth.

>
> My (flexible) thoughts are:
>
>- The feature is needed and is brilliant.
>- We should probably implement the newest feature provided by the
>hypervisor in Nova and export them from the Nova API.
>- Create a plugin that is integrated with Freezer to leverage that new
>features.
>- Same apply for Cinder.
>- The VMs and Volumes backup feature is already available by Nova,
>Cinder and Freezer. It needs to be improved for sure a lot, but do we need
>to create a new project for a feature that needs to be improved, rather
>than work with the existing Teams?
>
I disagree with this statement strongly, as I have stated before. Nova has
snapshots. Cinder has snapshots (though they do say cinder-backup). Freezer
wraps Nova and Cinder. Snapshots are not backups. They are certainly not
_incremental_ backups. They can have neither compression nor encryption.
With this in mind, Freezer does not have this "feature" at all. It's not
that it needs improvement; it simply does not exist in Freezer. So a
separate project dedicated to that one goal is not unreasonable. The real
question is whether it is practical to merge Freezer and Ekko, and this is
the question Ekko and the Freezer team are attempting to answer.

>
>- No one wants to block others. Sam's proposed solution is indeed
>remarkable, but this is OpenStack, we work in teams; why can't we do that
>and be less fragmented?
>
>
> Thanks,
> Fausto
>
>
Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-01 Thread Sam Yaple
On Mon, Feb 1, 2016 at 8:23 PM, Preston L. Bannister 
wrote:

> Hi Fausto,
>
> To be clear, I am not in any way critical of Freezer and the folk putting
> in work. (Please, I want to be *entirely* clear on this point! Also, saw
> your presentation in Vancouver.)
>
> That said, Freezer is a bit of a Swiss-Army-Knife set of combined backup
> functions. Sometimes it is better to focus on a single aspect (or few). The
> new features landing in QEMU present an opportunity. A project focused
> solely on that opportunity, to work through initial set of issues, makes a
> lot of sense to me. (Something like how KVM forked QEMU for a time, to
> build faster x86 emulation.)
>
> I do not see these as competing projects, and more as cooperative. The
> Ekko work should be able to plug into Freezer, cleanly.
>

From what I see this is certainly a possible future. Ekko may be able to
complement Freezer by running as a plugin, but could just as easily be a
standalone project that is fully compatible with Freezer without
conflicting in any way. As an operator, I am a fan of only deploying what
is needed, and Freezer needs infrastructure to do what it does that isn't
useful to the block-based backup proposed by Ekko. That may change, as we
have already started talking with the Freezer team and they have this idea
of a plugin-type system that may very well do the trick.


>
> Aspects of the problem, as I see it:
>
>1. Extracting efficient instance backups from OpenStack (via new QEMU
>function).
>2. Storing backups in efficient form (general implementation, and
>vendor-specific supersets).
>3. Offering an OpenStack backup-service API, with core and
>vendor-specific extensions.
>
>
> Vendors (like my employer, EMC) might be somewhat opinionated about (2),
> and for good reason. :)
>

On the note of point (2), it's not so much the storage as managing
retention in object storage (essentially a read-only medium). I would argue
(2) is much harder than (1). People have been doing (1) with VMWare for a
long time, and QEMU's steps won't be that different.


> The huge missing piece is (1), and a focused project seems to make a lot
> of sense.
>

And everyone please keep in mind, the only overlap I have seen so far is
the API, and _potentially_ the scheduler. So if a merge is needed, then
it should be fairly simple. None of this has to be decided right now. We
aren't going down a path that can't be reversed, and we likely never will
given how little these two projects overlap in their current forms.

Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Sam Yaple
 exist on the compute node does not
necessarily perform the upload from the compute node since the data may
exist on a backend like Ceph or NFS. But in the case of a local storage
there is no way to get around this. Further, current nova-snapshots would
do all the same things (minus the compression) and have the same overhead.
I am not sure what you are speaking about when talking about Ceilometer.
Ekko has full plans to have a plugin for this information as well since
Ekko is in control of all of this information. The hypervisor is doing
very, very little of "the work" and doing nothing that is intensive at all.

> -  There’s no documentation whatsoever provided with Ekko. I had
> to read the source code, have conversations directly with you and invest
> significant time on it. I think providing some documentation would be
> helpful, as the doc link in the openstack/ekko repo returns 404 Not Found.
>
This is true. We are a repo of only a few weeks. Give it time :). Ekko has
an informal midcycle planned, since all the core contributors as of now
will be at the Kolla midcycle in February. We plan on documenting and
presenting a roadmap at that time.

> Please let me know what your thoughts are on this.
>
> Thanks,
> Fausto
>

Hopefully these have answered your questions.

Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Sam Yaple
On Wed, Jan 27, 2016 at 7:06 PM, Flavio Percoco  wrote:
>
> FWIW, the current governance model does not prevent competition. That's
> not to be understood as we encourage it, but rather that there could be
> services with some level of overlap that are still worth being separate.
>
> What Jay is referring to is that regardless of whether the projects do
> similar things, the same things, or totally different things, we should
> strive to have different APIs. The APIs shouldn't overlap in terms of
> endpoints and the way they are exposed.
>
> With all that said, I'd like to encourage collaboration over competition,
> and I'm sure both teams will find a way to make this work.
>
> Cheers,
> Flavio


And to come full circle on this thread, I will point out once again that
there is no competition between Ekko and Freezer at this time. Freezer is
file-level backup, whereas Ekko is block-level backup. Anyone familiar with
backups knows these are drastically different types of backups. Those using
block-level backups typically won't be using file-level backups and
vice-versa. That said, even if there is no convergence of Freezer and Ekko,
they can still live side-by-side without any conflict at all.

As of now, Ekko and Freezer teams have started a dialogue and we will
continue to collaborate rather than compete in every way that is reasonable
for both projects.

Sam Yaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-26 Thread Sam Yaple
On Tue, Jan 26, 2016 at 9:57 PM, Jay Pipes  wrote:

>
> I am not suggesting you "share an API" at all. I am requesting that if you
> have a RESTful API planned for your "backup", then you do not use the same
> RESTful API resource endpoint names that Freezer does. Because if you do,
> then users of the OpenStack APIs will have two APIs that use identical
> resource endpoints for entirely different things. So the request is to not
> use Freezer's resource endpoints, which have /backups as its primary
> resource endpoint.
>
> I don't like the fact that Freezer's resource endpoint is /backups, since
> the OpenStack Volume API has a /{tenant_id}/backups resource endpoint, but
> I really, *really* do not want to see a set of OpenStack APIs one of which
> has /{tenant_id}/backups as a resource endpoint, another which has /backups
> as a top-level resource, and still another which has /backups as a
> top-level resource.
>
> It makes for a crappy user experience. Crappier than the crappy user
> experience that OpenStack API users already have because we have done a
> crappy job shepherding projects in order to make sure there isn't overlap
> between their APIs (yes, Ceilometer and Monasca, I'm looking directly at
> you).
>
>
That is a much, much clearer point. One that I will be happy to follow. I
understand and agree with what you are saying.

A more detailed conversation has been scheduled to determine if Ekko and
Freezer can co-exist together sharing resources in a plugin-type fashion.
It is not known if this is possible yet, but if it is not I will certainly
follow your suggestion, Jay. Thank you for your insight!

SamYaple


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-26 Thread Sam Yaple
Didn't hit the mailing list with the last reply. Forwarding to a wider
audience than just Dean.
-- Forwarded message --
From: "Sam Yaple" 
Date: Jan 26, 2016 12:00 PM
Subject: Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup
for OpenStack
To: "Dean Troyer" 
Cc:


On Jan 26, 2016 11:42 AM, "Dean Troyer"  wrote:
>
> On Tue, Jan 26, 2016 at 9:28 AM, Sam Yaple  wrote:
>>
>> On Tue, Jan 26, 2016 at 10:15 AM, Jay Pipes  wrote:
>>>
>>> My personal request is that the two contributor communities do
>>> everything in their power to ensure that the REST API endpoints are not
>>> overlapping. The last thing we need is to have two APIs for performing
>>> backups that are virtually identical to each other.
>>>
>>
>> The way I see this situation is the same as asking Ekko to integrate
>> with cinder-backup because they perform backups that are "virtually
>> identical" to each other. They aren't actually related at all, other than
>> perhaps an API
>
>
> But you see this is exactly where they are directly related to everyone
> who is not a developer of the back-end services.  Everything that wants to
> do a volume backup (users, other services, etc) should not have multiple
> choices to perform that backup, regardless of how that action is
> implemented.

You skipped over the important part: "Actual implementation and end results
are wildly different." They are not the same even if the API call is named
similarly, as I originally stated.

> Perhaps this is a problem whose time has come to address?

Absolutely! I would love to not have to worry about building and
maintaining an API. That said, Ekko isn't here to solve that issue.

> I would hate to see us keep duplicating user-facing stuff for the sake of
> back-end developer convenience.

I agree about not duplicating user-facing things. But it is a bit more than
"back-end developer convenience". I am both an operator and a developer, so
I can speak from both of those points of view when I say I do not want to
be forced to deploy services that I won't use for a feature unrelated to
them. For the sake of example, requiring Ceilometer to use Cinder would be
awful. Those wanting to use Ekko may have no interest in using Freezer and
vice versa. Forcing unrelated processes and services for the sake of one
API is not something I agree with.


Re: [openstack-dev] [kolla] 40 hour time commitment requested from core reviewers before Feb 9th/10th midcycle

2016-01-26 Thread Sam Yaple
On Mon, Jan 25, 2016 at 11:44 AM, Sean M. Collins 
wrote:

> Just an FYI for anyone taking the Neutron piece, please feel free to
> attend the upgrades subteam - we have a meeting today.
>
> https://wiki.openstack.org/wiki/Meetings/Neutron-Upgrades-Subteam
> --
> Sean M. Collins
>

Thanks Sean. I joined that meeting the other day and I will most likely
need to reach out again. Perhaps I can also get a +1 review from you and
your team once I have a patchset up for the upgrade of Neutron.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-26 Thread Sam Yaple
On Tue, Jan 26, 2016 at 10:15 AM, Jay Pipes  wrote:

> On 01/26/2016 02:47 AM, Sam Yaple wrote:
>
>> Hello Fausto,
>>
>> I am happy to have a conversation about this with you and the Freezer
>> team. I have a feeling the current direction of Ekko will add many
>> components that will not be needed for Freezer and vice-versa.
>> Nevertheless, I am all about community!
>>
>
> My personal request is that the two contributor communities do everything
> in their power to ensure that the REST API endpoints are not overlapping.
> The last thing we need is to have two APIs for performing backups that are
> virtually identical to each other.
>
>
The way I see this situation is the same as asking Ekko to integrate with
cinder-backup because they perform backups that are "virtually identical"
to each other. They aren't actually related at all, other than perhaps an
API call that says 'backup'. Actual implementation and end results are
wildly different. So my question would be, how would you go about solving
that situation? I could absolutely get on board with sharing an API and
even a scheduler, but Ekko and Freezer are two distinct projects solving
different issues with different infrastructure requirements, and I am not
aware of any way to share APIs between projects other than merging the
projects.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-25 Thread Sam Yaple
Hello Fausto,

I am happy to have a conversation about this with you and the Freezer team.
I have a feeling the current direction of Ekko will add many components
that will not be needed for Freezer and vice-versa. Nevertheless, I am all
about community!

Sam Yaple

On Tue, Jan 26, 2016 at 2:20 AM, Fausto Marzi 
wrote:

> Hi Sam,
> My opinion would be to converge, so as to have Ekko features exported from
> the freezer-api and horizon web interface. Also, the freezer-scheduler can
> be integrated, which would enable Ekko to execute backups synchronized over
> multiple nodes.
>
> By all means, this does not mean you have to; it's just how I feel about it.
>
> We are totally open, so please let us know if there's any interest from
> your side.
>
> Thanks,
> Fausto
>
> Sent from my iPhone
>
> On 25 Jan 2016, at 08:58, Sam Yaple  wrote:
>
> On Mon, Jan 25, 2016 at 8:45 AM, Thierry Carrez 
> wrote:
>
>> Sam Yaple wrote:
>>
>>> We would like to introduce you to a new community-driven OpenStack
>>> project called Ekko.
>>>
>>> The aim of Ekko is to provide incremental block-level backup and restore
>>> of Nova instances. We see backups as a key area that is missing in
>>> OpenStack. One issue that has previously prevented backups in OpenStack
>>> is the scalability of the storage backend. Object-storage is the answer
>>> to this scalability problem, but with block-based backups you often see
>>> large files that require POSIX operations to perform retention and
>>> deletions. These operations are not able to be performed in the
>>> traditional way in object storage, which has prevented leveraging
>>> object-storage to its full potential. With Ekko we can solve this issue
>>> allowing us to use storage that can scale with OpenStack.
>>>
>>
>> Hi!
>>
>> How does Ekko compare to / differ from Freezer, which is an official
>> OpenStack project targeted to the same problem space ? I suspect this is
>> more low-level ? Is there some potential for convergence between the two
>> projects ?
>>
>>
> Hello Thierry,
>
> These are good questions. The biggest difference is the one you already
> caught onto: Ekko is more low-level. Freezer is targeted at the filesystem
> and specific applications (like databases) directly.
>
> There are only two places with overlapping goals that I know of*. The
> first is backup of a Cinder volume, which is a future goal of Ekko and
> something Freezer can currently do for LVM-backed Cinder. The second is
> backup of a Nova instance. This isn't something Freezer does directly;
> instead it leverages nova-snapshot, which is very disruptive to the
> instance and will cause downtime for said instance. The current pursuit of
> Ekko is _live_ incremental block-level backup of a Nova instance, and in
> that regard there is no overlap with Freezer or any other project, for
> that matter.
>
> To answer the question of convergence between Ekko and Freezer, I would
> say it's possible. That being said, both projects are addressing different
> problems in different ways. As discussed above, there is little overlap
> between the two projects, and the areas where there is overlap of goals
> have drastically different implementations. I could put together a list of
> Ekko vs Freezer, but I think that would be comparing apples to oranges. To
> state this in terms of compatibility, Ekko and Freezer can run side-by-side
> without interfering with each other in any way.
>
> *Disclaimer: I am no expert on Freezer; I may be wrong in some statements
> and am open to correction.
>
> Sam Yaple
>
> --
>> Thierry Carrez (ttx)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-25 Thread Sam Yaple
On Mon, Jan 25, 2016 at 8:45 AM, Thierry Carrez 
wrote:

> Sam Yaple wrote:
>
>> We would like to introduce you to a new community-driven OpenStack
>> project called Ekko.
>>
>> The aim of Ekko is to provide incremental block-level backup and restore
>> of Nova instances. We see backups as a key area that is missing in
>> OpenStack. One issue that has previously prevented backups in OpenStack
>> is the scalability of the storage backend. Object-storage is the answer
>> to this scalability problem, but with block-based backups you often see
>> large files that require POSIX operations to perform retention and
>> deletions. These operations are not able to be performed in the
>> traditional way in object storage, which has prevented leveraging
>> object-storage to its full potential. With Ekko we can solve this issue
>> allowing us to use storage that can scale with OpenStack.
>>
>
> Hi!
>
> How does Ekko compare to / differ from Freezer, which is an official
> OpenStack project targeted to the same problem space ? I suspect this is
> more low-level ? Is there some potential for convergence between the two
> projects ?
>
>
Hello Thierry,

These are good questions. The biggest difference is the one you already
caught onto: Ekko is more low-level. Freezer is targeted at the filesystem
and specific applications (like databases) directly.

There are only two places with overlapping goals that I know of*. The first
is backup of a Cinder volume, which is a future goal of Ekko and something
Freezer can currently do for LVM-backed Cinder. The second is backup of a
Nova instance. This isn't something Freezer does directly; instead it
leverages nova-snapshot, which is very disruptive to the instance and will
cause downtime for said instance. The current pursuit of Ekko is _live_
incremental block-level backup of a Nova instance, and in that regard there
is no overlap with Freezer or any other project, for that matter.

To answer the question of convergence between Ekko and Freezer, I would say
it's possible. That being said, both projects are addressing different
problems in different ways. As discussed above, there is little overlap
between the two projects, and the areas where there is overlap of goals have
drastically different implementations. I could put together a list of Ekko
vs Freezer, but I think that would be comparing apples to oranges. To state
this in terms of compatibility, Ekko and Freezer can run side-by-side
without interfering with each other in any way.

*Disclaimer: I am no expert on Freezer; I may be wrong in some statements
and am open to correction.

Sam Yaple

-- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-22 Thread Sam Yaple
Dear community,

We would like to introduce you to a new community-driven OpenStack project
called Ekko.

The aim of Ekko is to provide incremental block-level backup and restore of
Nova instances. We see backups as a key area that is missing in OpenStack.
One issue that has previously prevented backups in OpenStack is the
scalability of the storage backend. Object-storage is the answer to this
scalability problem, but with block-based backups you often see large files
that require POSIX operations to perform retention and deletions. These
operations are not able to be performed in the traditional way in object
storage, which has prevented leveraging object-storage to its full
potential. With Ekko we can solve this issue allowing us to use storage
that can scale with OpenStack.
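
To make that concrete, here is a rough sketch (not Ekko's actual code) of
what an incremental pass against object storage could look like: only
changed blocks are uploaded, each as its own object, and a small manifest
ties a backup generation together, so retention becomes simple object
deletion instead of rewriting one large POSIX file. The swiftclient-style
connection, the container layout, and the source of the dirty-block list
are all assumptions here.

===
import hashlib
import json

BLOCK_SIZE = 4 * 1024 * 1024  # back up the disk in 4 MiB segments

def backup_pass(swift, container, disk, dirty_blocks, generation):
    # dirty_blocks: indexes of blocks changed since the previous pass,
    # e.g. as reported by a QEMU dirty bitmap (assumption)
    manifest = {"generation": generation, "blocks": {}}
    for index in dirty_blocks:
        disk.seek(index * BLOCK_SIZE)
        data = disk.read(BLOCK_SIZE)
        name = hashlib.sha256(data).hexdigest()
        # identical blocks hash to the same object name, giving free dedup
        swift.put_object(container, name, contents=data)
        manifest["blocks"][index] = name
    swift.put_object(container, "manifest-%08d" % generation,
                     contents=json.dumps(manifest))
    return manifest
===

Deleting an old generation is then just deleting whatever objects no newer
manifest references; nothing ever has to be rewritten in place.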

Based on previous projects [1] and research that has been progressing for
the past 2 years or so, Ekko is being written completely from scratch by
the community, incorporating those lessons. This research, combined with a
new feature available in QEMU 2.4 [2], will allow us to bring incremental
block-based backups to OpenStack.
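
For those curious about the QEMU side, the feature boils down to dirty
bitmaps plus an incremental sync mode for drive-backup. Over QMP it looks
roughly like this (the node, bitmap, and target names are made up for
illustration):

===
{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive-virtio-disk0", "name": "bitmap0" } }

{ "execute": "drive-backup",
  "arguments": { "device": "drive-virtio-disk0", "sync": "incremental",
                 "bitmap": "bitmap0", "target": "/backup/inc-001.qcow2",
                 "format": "qcow2" } }
===

The bitmap tracks writes from the moment it is created, and a successful
incremental drive-backup clears it, so each pass copies only the blocks
dirtied since the previous one.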

The project is in its beginning stages; however, we already have active
developers submitting code and collaborating via IRC and code review [3].
We welcome thoughts, questions, and feedback. You can join us on
#openstack-ekko, and be sure to keep an eye out for our talk at the summit
to learn more.

Best Regards,
The Ekko Team

[1] https://github.com/SamYaple/osdk
[2] http://wiki.qemu.org/Features/IncrementalBackup
[3] https://review.openstack.org/#/q/project:openstack/ekko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Sam Yaple
+1, another full-time Kolla dev. w00t.

Jeffrey has done some amazing reviews and great work for Kolla.

Sam Yaple

On Tue, Jan 19, 2016 at 3:30 PM, Michał Jastrzębski 
wrote:

> +1 :)
>
> On 19 January 2016 at 07:28, Ryan Hallisey  wrote:
> > +1 nice work!
> >
> > -Ryan
> >
> > - Original Message -
> > From: "Steven Dake (stdake)" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Tuesday, January 19, 2016 3:26:38 AM
> > Subject: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in
> English) - jeffrey4l on irc
> >
> > Hi folks,
> >
> > I would like to propose Lei Zhang for our core reviewer team. Count this
> proposal as a +1 vote from me. Lei has done a fantastic job in his reviews
> over the last 6 weeks and has managed to produce some really nice
> implementation work along the way. He participates in IRC regularly, and
> has a commitment from his management team at his employer to work full time
> 100% committed to Kolla for the foreseeable future (although things can
> always change in the future :)
> >
> > Please vote +1 if you approve of Lei for core reviewer, or –1 if you wish
> to veto his nomination. Remember, just one –1 vote is a complete veto, so if
> you're on the fence, another option is to abstain from voting.
> >
> > I would like to change from our 3 votes required, as our core team has
> grown, to requiring a simple majority of core reviewers with no veto votes.
> As we have 9 core reviewers, this means Lei requires 4 more +1 votes with
> no veto vote in the voting window to join the core reviewer team.
> >
> > I will leave the voting open for 1 week, as is the case with our other
> core reviewer nominations, until January 26th. If the vote is unanimous or
> there is a veto vote before January 26th, I will close voting. I'll make
> appropriate changes to gerrit permissions if Lei is voted into the core
> reviewer team.
> >
> > Thank you for your time in evaluating Lei for the core review team.
> >
> > Regards
> > -steve
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Sam Yaple
Here is why I am on board with this. As we have discovered, the logging
with the syslog plugin leaves a lot to be desired. It (to my understanding)
still can't save tracebacks/stacktraces to the log files for whatever
reason. stdout/stderr, however, works perfectly fine. That said, the Docker
log stuff has been a source of pain in the past, but it has gotten better.
It does have the limitation of only being able to log one output at a time.
This means, as an example, the neutron-dhcp-agent could send its logs to
stdout/err, but the dnsmasq process that it launches (which also has logs)
would have to mix its logs in with the neutron logs in stdout/err. Can Heka
handle this and separate them efficiently? Otherwise I see no choice but to
stick with something that can handle multiple logs from a single container.
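
To make the question concrete, what I would want is something along these
lines. This is only a sketch; I am guessing at the DockerLogInput options
and at the field names Heka attaches, so treat the decoder and matcher
below as hypothetical:

===
[DockerLogInput]
endpoint = "unix:///var/run/docker.sock"
decoder = "program_decoder"   # split the mixed stdout stream by program name

# hypothetical Lua sandbox decoder that tags each line with Fields[programname]
[program_decoder]
type = "SandboxDecoder"
filename = "lua_decoders/split_by_program.lua"

# route dnsmasq lines from the dhcp-agent container to their own file
[dnsmasq_output]
type = "FileOutput"
message_matcher = "Fields[ContainerName] == 'neutron_dhcp_agent' && Fields[programname] == 'dnsmasq'"
path = "/var/log/kolla/neutron/dnsmasq.log"
===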

Sam Yaple

On Mon, Jan 11, 2016 at 10:16 PM, Eric LEMOINE 
wrote:

>
> On 11 Jan 2016 at 18:45, "Michał Jastrzębski"  wrote:
> >
> > On 11 January 2016 at 10:55, Eric LEMOINE  wrote:
> > > Currently the services running in containers send their logs to
> > > rsyslog. And rsyslog stores the logs in local files, located in the
> > > host's /var/log directory.
> >
> > Yeah, however the plan was to teach rsyslog to forward logs to the central
> > logging stack once this thing is implemented.
>
> Yes. With the current ELK Change Request, Rsyslog sends logs to the
> central Logstash instance. If you read my design doc you'll understand that
> this is precisely what we're proposing to change.
>
> > > I know. Our plan is to rely on Docker. Basically: containers write
> > > their logs to stdout. The logs are collected by Docker Engine, which
> > > makes them available through the unix:///var/run/docker.sock socket.
> > > The socket is mounted into the Heka container, which uses the Docker
> > > Log Input plugin [*] to read the logs from that socket.
> > >
> > > [*] <
> http://hekad.readthedocs.org/en/latest/config/inputs/docker_log.html>
> >
> > So docker logs isn't the best thing there is; however, I'd suspect that's
> > mostly the console output's fault. If you can tap into stdout efficiently,
> > I'd say that's a pretty good option.
>
> I'm not following you. Could you please be more specific?
>
> > >> Seems to me we need an additional comparison of Heka vs rsyslog ;) Also
> > >> this would have to be hands down better, because rsyslog is already
> > >> implemented, working, and most operators know how to use it.
> > >
> > >
> > > We don't need to remove Rsyslog. Services running in containers can
> > > write their logs to both Rsyslog and stdout, which even is what they
> > > do today (at least for the OpenStack services).
> > >
> >
> > There is no point in that, IMHO. I don't want to have several systems
> > doing the same thing. Let's decide on one, but optimal, toolset.
> > Could you please describe, bottom up, what your logging stack
> > would look like? Heka listening on stdout, transferring stuff to
> > Elasticsearch, with Kibana on top of it?
>
> My plan is to provide details in the blueprint document, which I'll
> continue working on if the core developers agree with the principles of the
> proposed architecture and change.
>
> But here's our plan, as already described in my previous email: the Kolla
> services, which run in containers, write their logs to stdout. Logs are
> collected by the Docker engine. Heka's Docker Log Input plugin is used to
> read the container logs from the Docker endpoint (Unix socket). Since Heka
> will run in a container, a volume is necessary for accessing the Docker
> endpoint. The Docker Log Input plugin inserts the logs into the Heka
> pipeline, at the end of which an Elasticsearch Output plugin will send the
> log messages to Elasticsearch. Here's a blog post reporting on that
> approach: <
> http://www.ianneubert.com/wp/2015/03/03/how-to-use-heka-docker-and-tutum/>.
> We haven't tested that approach yet, but we plan to experiment with it as
> we work on the specs.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-11 Thread Sam Yaple
I like the idea of using Heka. You and I have discussed this on IRC before.
So my vote for this is +1. I can't think of any downside. I would like to
hear Alicja Kwasniewska's view on this as she has done the majority of work
with Logstash up until this point.

Sam Yaple

On Mon, Jan 11, 2016 at 3:16 PM, Eric LEMOINE  wrote:

> Hi
>
> As discussed on IRC the other day [1] we want to propose a distributed
> logs processing architecture based on Heka [2], built on Alicja
> Kwasniewska's ELK work with
> <https://review.openstack.org/#/c/252968/>.  Please take a look at the
> design document I've started working on [3].  The document is still
> work-in-progress, but the "Problem statement" and "Proposed change"
> sections should provide you with a good overview of the architecture
> we have in mind.
>
> In the proposed architecture each cluster node runs an instance of
> Heka for collecting and processing logs.  And instead of sending the
> processed logs to a centralized Logstash instance, logs are directly
> sent to Elasticsearch, which itself can be distributed across multiple
> nodes for high-availability and scaling.  The proposed architecture is
> based on Heka, and it doesn't use Logstash.
>
> That being said, it is important to note that the intent of this
> proposal is not strictly directed at replacing Logstash by Heka.  The
> intent is to propose a distributed architecture with Heka running on
> each cluster node rather than having Logstash run as a centralized
> logs processing component.  For such a distributed architecture we
> think that Heka is more appropriate, with a smaller memory footprint
> and better performances in general.  In addition, Heka is also more
> than a logs processing tool, as it's designed to process streams of
> any type of data, including events, logs and metrics.
>
> Some elements of comparison between Heka and Logstash:
>
> * Logstash was designed for logs processing.  Heka is "unified data
> processing" software, designed to process streams of any type of data.
> So Heka is about running one service on each box instead of many.
> Using a single service for processing different types of data also
> makes it possible to do correlations and derive metrics from logs and
> events.  See Rob Miller's presentation [4] for more details.
>
> * The virtual size of the Logstash Docker image is 447 MB, while the
> virtual size of an Heka image built from the same base image
> (debian:jessie) is 177 MB.  For comparison the virtual size of the
> Elasticsearch image is 345 MB.
>
> * Heka is written in Go and has no dependencies.  Go programs are
> compiled to native code.  This is in contrast to Logstash, which uses
> JRuby and as such requires running a Java Virtual Machine.  Besides
> this native versus interpreted code aspect, this also can raise the
> question of which JVM to use (Oracle, OpenJDK?) and which version
> (6,7,8?).
>
> * There are six types of Heka plugins: Inputs, Splitters, Decoders,
> Filters, Encoders, and Outputs.  Heka plugins are written in Go or
> Lua.  When written in Lua their executions are sandbox'ed, where
> misbehaving plugins may be shut down by Heka.  Lua plugins may also be
> dynamically added to Heka with no config changes or Heka restart. This
> is an important property on container environments such as Mesos,
> where workloads are changed dynamically.
>
> * To avoid losing logs under high load, it is often recommended to use
> Logstash together with Redis [5].  Redis plays the role of a buffer,
> where logs are queued when Logstash or Elasticsearch cannot keep up
> with the load.  Heka, as a "unified data processing" software,
> includes its own resilient message queue, making it unnecessary to use
> an external queue (Redis for example).
>
> * Heka is faster than Logstash for processing logs, and its memory
> footprint is smaller.  I ran tests where 3,400,000 log messages were
> read from 500 input files and then written to a single output file.
> Heka processed the 3,400,000 log messages in 12 seconds, consuming
> 500M of RAM.  Logstash processed the 3,400,000 log messages in 1min
> 35s, consuming 1.1G of RAM.  Adding a grok filter to parse and
> structure logs, Logstash processed the 3,400,000 log messages in 2min
> 15s, consuming 1.5G of RAM. Using an equivalent filtering plugin, Heka
> processed the 3,400,000 log messages in 27s, consuming 730M of RAM.
> See my GitHub repo [6] for more information about the test
> environment.
>
> Also, I want to say that our team has been using Heka in production
> for about a year, in clusters of up to 200 nodes.  Heka has proven to
> be very robust, efficient and flexible enough to addr

Re: [openstack-dev] [kolla] Adding Ubuntu Liberty to Kolla-Mitaka

2016-01-03 Thread Sam Yaple
> > I also want Debian implementations in our code base.  I'm sorry to say I
> > don't have any bandwidth to help actually make that happen on your end,
> > but once the Ubuntu work is done, it should be fairly straightforward to
> > port it to also work on Debian (since I think the package names are the
> > same).

> That's not entirely true. There are package name differences (and the
> services to which they attach) for Nova, Neutron and Horizon. All of
> which is handled by upstream puppet-openstack.

That's fine. The sentence should have been worded "mostly the same". We
already deal with small differences.

> Hopefully, this will happen soon. Though it would be best if it were done
> in upstream infra, which I'm currently stuck on because I don't know how
> to add a new Debian image. If this happens, then my project will go
further.

Luckily, that's not as big an issue for us. Specifically, we do everything in
Docker containers, so the only thing shared with the host is going to be the
kernel (though there may be some systemd issues until 16.04). The point
being, we can test fairly reliably even on an Ubuntu host. That said, if you
have a link to the project-config patch I will be more than happy to review
it and follow it.

Sam Yaple

On Sun, Jan 3, 2016 at 10:05 AM, Thomas Goirand  wrote:

> On 12/29/2015 10:36 PM, Steven Dake (stdake) wrote:
> > Thomas,
> >
> > I also want Debian implementations in our code base.  I'm sorry to say I
> > don't have any bandwidth to help actually make that happen on your end,
> > but once the Ubuntu work is done, it should be fairly straightforward to
> > port it to also work on Debian (since I think the package names are the
> > same).
>
> That's not entirely true. There are package name differences (and the
> services to which they attach) for Nova, Neutron and Horizon. All of
> which is handled by upstream puppet-openstack.
>
> > The beta releases are not enough for us, however.  What we need is, in
> > order of preference: 1) per-commit builds, 2) nightly package builds, or
> > 3) builds with a 3-5 day delta.  Anything else is not helpful.  I am not
> > criticizing you, I know you are only one dude and are not superman, just
> > explaining what we need :)
>
> Hopefully, this will happen soon. Though it would be best if it were done
> in upstream infra, which I'm currently stuck on because I don't know how
> to add a new Debian image. If this happens, then my project will go further.
>
> > The reason we need this is we want to gate on the latest software
> > OpenStack has to offer.  In this thread there has been discussion of
> > Delorean, which is Red Hat's RDO rebuilt from master every 2-3 days.
> While
> > imperfect and sometimes gate-blocking for several days (because it isn't
> > per-commit), it gives us a good sanity check of our software gates.
> >
> > The only way we can ever get to voting package gating is with
> per-commit
> > builds as well as OpenStack mirrors of the packaging.
>
> See above. Help me to get a patch on upstream project-config, to get a
> new Debian image added, then I'll be able to work further on this.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Adding Ubuntu Liberty to Kolla-Mitaka

2015-12-28 Thread Sam Yaple
>We could just state we won't block any activity on the Debian binary
release (this includes tagging, releasing, etc.)

I think that's key. If cloud-archive is available before we tag, then
fantastic, but I don't want to hold it up because of this. Additionally, we
could do experimental gates for this and potentially a non-voting binary
build gate. I doubt we could do a deploy gate, because it would always fail:
you can't run Liberty with our Mitaka code, and because of that the gate
would be broken almost immediately after a release.

Artur's patch is being broken up into several patches right now. At the
pace he is going, the code should be mergeable before the end of the week.

Sam Yaple

On Mon, Dec 28, 2015 at 4:28 PM, Michał Jastrzębski 
wrote:

> Hey,
>
> So one thing we need to consider there is that currently the 3 options we
> support (binary+source centos and source ubuntu) basically behave the
> same way - they install current master (or last night's master, which is
> close enough). This one will have fundamentally different behavior,
> and we need to make that clear. This might be a documentation issue, but
> I feel we need to make sure that it's there.
>
> Cheers,
> inc0
>
> On 28 December 2015 at 10:16, Steven Dake (stdake) 
> wrote:
> > Hey folks,
> >
> > I have received significant feedback that the lack of Ubuntu binary
> support
> > is a problem for Kolla adoption.  Still, we had nobody to do the work,
> so we
> > held off approving the blueprint.  There were other reasons such as:
> >
> > There is no delorean-style repository for Debian, meaning we would always
> be
> > installing Liberty with our Mitaka tree
> > Given the first problem, a gate may not be feasible – a voting gate would
> > never be feasible
> >
> >
> > Still, I think on balance the pain here is worth the gain.  We could
> just
> > state we won't block any activity on the Debian binary release (this
> includes
> > tagging, releasing, etc.) and mark it as "technical preview" until we sort
> > out the Liberty->Mitaka port in the stable branch.  I'd like other core
> > reviewers' thoughts.  I really, really want this feature, even if it's
> > tech preview and marked with all kinds of warnings.  Without it Kolla is
> > incomplete.
> >
> > This review is terrible (it needs to be broken up into separate patches)
> but
> > it's a huge step in the right direction IMO :)
> >
> > https://review.openstack.org/#/c/260069/2
> >
> > Comments and thoughts welcome.
> >
> > Regards
> > -steve
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Using existing ceph pools

2015-12-03 Thread Sam Yaple
This has been a planned feature from the beginning, but it is not yet
implemented.

I have done this locally, but I have not submitted a patchset. You can
expect it to be available and working by mitaka-2.
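
In the meantime, the values in question are the standard RBD options in each
service's config file. Roughly like this (the pool names, users, and the
secret UUID are examples, not Kolla defaults):

===
# nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = existing-vms
rbd_user = nova
rbd_secret_uuid = <your libvirt secret uuid>

# glance-api.conf
[glance_store]
default_store = rbd
rbd_store_pool = existing-images
rbd_store_user = glance

# cinder.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = existing-volumes
rbd_user = cinder
rbd_secret_uuid = <your libvirt secret uuid>
===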
On Dec 1, 2015 6:08 AM, "OpenStack Mailing List Archive" 
wrote:

> Link: https://openstack.nimeyo.com/67172/?show=67172#q67172
> From: kunciil 
>
> I am planning to use an existing Ceph pool. The question is how to avoid
> the [storage] group creating a new Ceph pool with 'enable_ceph: "yes"', and
> how to pass existing Ceph values (rbd_user, libvirt_images_rbd_pool, etc.)
> to the Nova, Glance and Cinder config files?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] docker module replacement

2015-11-25 Thread Sam Yaple
Point of clarification: Brian Coca (bcoca) confirmed to me the other day
that there _will_ be an Ansible 1.9.5, so we will be unblocked at some point
for Docker 1.8.2. Unfortunately, the module is broken with the v1 registry
past Docker 1.8.2.

That said, here are my issues:
* It has been 4 weeks since we fixed the bug upstream, and we still don't
have a consumable version of ansible with the fix.
* Waiting on new features means they must propagate through Docker and
docker-py; then we must wait on ansible to implement/merge a PR for the new
features, and THEN we must wait for a new ansible tag. At that point our new
minimum requirement is the latest version of ansible. Hopefully it doesn't
have any bugs.
* It has been said that because the v1 registry is deprecated, the ansible
Docker module won't do anything to make it work (and it is broken with the
upstream module). The local v2 registry is orders of magnitude slower than a
local v1. This is a big deal for us, and it is this type of issue that makes
me think we need our own module long term. We can't have another project
dictating such a crucial piece of our project and development.

In the case of Liberty, while we can unpin from 1.8.2, who is to say 1.9.3
won't be broken for the same reason? Requiring later versions of ansible for
fixes here won't work long term.

I'm open to solutions that solve my concerns above without requiring a
module in our control.
On Nov 25, 2015 5:55 PM, "Steven Dake (stdake)"  wrote:

> Hey folks,
>
> I understand there is some contention over whether we should make our own
> docker module to deal with the fact that upstream is continually busted.
>
> The short answer is yes, I fully support our own docker module with some
> caveats:
>
> The long answer is:
> I would like the module to be compatible from a docker module perspective
> as it relates to Ansible integration.
> We are not waiting until Ansible 2.0 to unpin from docker 1.8.2.
> I want the code quality to be good, so I would appreciate thoughtful
> reviews of the docker module Sam has started on.
> The code may NOT be based upon a fork of the existing code for licensing
> reasons (GPLV3 incompatibility).  It doesn't have to be cleanroom, but it
> does have to be our own body of work.
> If upstream Ansible + docker ever get their act together, we will go back
> to using upstream.  If not, not. :)
>
> I am not blaming anyone from Ansible or Docker for these problems.
> Software integration is the hardest job on the planet as it relates to
> engineering, which is why the world is swiftly moving to full-blown CI to
> resolve these problems.  I know this isn't entirely the upstream way.  We
> should be fixing these things in upstream.  And we actually do that!
> The problem is Ansible 1.9.4 is the last release of Ansible 1.9, and
> Ansible, being a 50 person company, can't maintain two individual versions
> of Ansible.  So we are really doing this as a pragmatic factor of the
> environment in which we operate.
>
> Hope that clears up my position.
>
> Regards,
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Michal Rostecki for Core Reviewer (nihilfer on IRC)

2015-11-13 Thread Sam Yaple
Michal does great patches and is very receptive to feedback. +1 from me.

We may need to call him Michal2, though. It might get confusing to have
multiple Michals on the team.

Sam Yaple

On Thu, Nov 12, 2015 at 9:23 AM, Paul Bourke  wrote:

> +1
>
>
> On 12/11/15 08:41, Steven Dake (stdake) wrote:
>
>> Hey folks,
>>
>> It's been a while since we have had a core reviewer nomination, but I
>> really feel like Michal has the right stuff.  If you look at the 30 day
>> stats for kolla[1], he is #3 in reviews (70 reviews) with 6 commits in
>> a 30 day period.  He is beating 2/3rds of our core reviewer team on all
>> stats.  I think his reviews, while they could use a little more depth, are
>> solid and well considered.  That said, he participates on the mailing
>> list more than others and has been very active in IRC.  He has come up
>> to speed on the code base in quick order, and I expect that if he keeps
>> pace, the top reviewers in Kolla will be challenged to maintain their
>> spots :)  Consider this proposal as a +1 vote from me.
>>
>> As a reminder, our core review process requires 3 core reviewer +1
>> votes, with no core reviewer –1 (veto) votes within a 1 week period.  If
>> you're on the fence, best to abstain :)  I'll close voting November 20th,
>> or sooner if there is a veto vote or a unanimous vote.
>>
>> Regards
>> -steve
>>
>> http://stackalytics.com/report/contribution/kolla-group/30
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Liberty release builds

2015-10-22 Thread Sam Yaple
No, you should not push with a 'latest' tag.

The initial push should have the tag '1.0.0-liberty-000'. After we validate
that it is a good build, we can add the additional tags '1.0.0' and
'1.0.0-liberty'.

If the build is not good, we will rebuild and repush with the tag
'1.0.0-liberty-001'. We should never overwrite a tag or push a 'latest' tag.
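
Concretely, the flow would look something like this once a build is
validated (the image name is just an example):

===
# initial push of the candidate build
docker push kolla/centos-source-nova-api:1.0.0-liberty-000

# after validation, layer on the stable tags and push them
docker tag kolla/centos-source-nova-api:1.0.0-liberty-000 \
           kolla/centos-source-nova-api:1.0.0-liberty
docker tag kolla/centos-source-nova-api:1.0.0-liberty-000 \
           kolla/centos-source-nova-api:1.0.0
docker push kolla/centos-source-nova-api:1.0.0-liberty
docker push kolla/centos-source-nova-api:1.0.0
===
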
On Oct 22, 2015 10:00 AM, "Paul Bourke"  wrote:

> OK, so now that liberty is tagged, we can nail down the process here. I
> suggest we push first with latest, then when happy add the following tags:
>
> * 1.0.0
> * 1.0.0-liberty
> * 1.0.0-liberty-release-001
>
> ===
> git clone https://github.com/openstack/kolla
> cd kolla
> git checkout tags/1.0.0-liberty
> tools/build.py \
> --type source \
> --base <...> \   # e.g. centos / ubuntu / oraclelinux / ...
> --no-cache \
> --push
> ===
>
> Please respond with whether you're happy or not with the above. Would be
> nice to get images pushed before the summit.
>
> Cheers,
> -Paul
>
> On 20/10/15 23:19, Steven Dake (stdake) wrote:
>
>>
>>
>> On 10/20/15, 9:06 AM, "Paul Bourke"  wrote:
>>
>> Kolla core team,
>>>
>>> Here's a link to an etherpad created just now for deciding the process
>>> on getting release builds to dockerhub:
>>>
>>> https://etherpad.openstack.org/p/kolla-release-builds
>>>
>>> Please review and revise. Steve, can you follow up to this mail when
>>> you'd like volunteers to kick off builds?
>>>
>>
>> Paul,
>>
>> Thanks for the initiative here!  We actually had one last-minute patch
>> that needed to go into the tree, related to Ceph occasionally failing to
>> build because of timers, so I decided not to push the tag until it
>> was resolved.  This was compounded by a node pool restart which broke
>> about 4 hours of CI testing of the patch that needs to go in to fix the
>> problem.
>>
>> Once the release is tagged, we can go ahead and build for direct
>> consumption from the docker hub.
>>
>> Regards,
>> -steve
>>
>>
>>
>>
>>> Cheers,
>>> -Paul
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Sam Yaple
On Thu, Oct 8, 2015 at 2:47 PM, Steven Dake (stdake) 
wrote:

> Kolla operators and developers,
>
> The general consensus of the Core Reviewer team for Kolla is that we
> should embrace a liberal backport policy for the Liberty release.  An
> example of liberal -> We add a new server service to Ansible, we would
> backport the feature to liberty.  This is in breaking with the typical
> OpenStack backports policy.  It also creates a whole bunch more work and
> has potential to introduce regressions in the Liberty release.
>
> Given these realities I want to put on hold any liberal backporting until
> after Summit.  I will schedule a fishbowl session for a backport policy
> discussion where we will decide as a community what type of backport policy
> we want.  The delivery required before we introduce any liberal backporting
> policy then should be a description of that backport policy discussion at
> Summit distilled into a RST file in our git repository.
>
> If you have any questions, comments, or concerns, please chime in on the
> thread.
>
> Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I am in favor of a very liberal backport policy. We have the potential to
have very little code difference between the N, N-1, and N-2 releases while
still deploying the different versions of OpenStack. However, I recognize it
is a big undertaking to backport all things, not to mention the testing
involved.

I would like to see two things before we truly embrace a liberal policy.
The first is better testing: a true gate that does upgrades and potentially
multinode (at least from a network perspective). The second is a bot
or automation of some kind to automatically propose non-conflicting patches
to the stable branches if they include the 'backport: xyz' tag in the
commit message. Cores would still need to confirm these changes with the
normal review process and could easily abandon them, but that would remove
a lot of the overhead of performing the actual backport.
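
As a sketch of what I mean by that automation (hypothetical tooling, nothing
that exists today; the tag format and branch naming are assumptions):

===
import re
import subprocess

def propose_backport(commit, stable_branch):
    # Read the commit message and look for a 'backport: <series>' tag.
    msg = subprocess.check_output(
        ["git", "log", "-1", "--format=%B", commit]).decode()
    match = re.search(r"^backport:\s*(\S+)", msg, re.MULTILINE)
    if not match or match.group(1) not in stable_branch:
        return  # commit was not tagged for this stable series
    subprocess.check_call(["git", "checkout", stable_branch])
    # -x records the original SHA; a conflict raises an error and the
    # commit is skipped, since only clean picks should be auto-proposed
    subprocess.check_call(["git", "cherry-pick", "-x", commit])
    # push to Gerrit for the normal review process; cores can still abandon
    subprocess.check_call(["git", "review", stable_branch])
===

Something that simple, run against each merged master commit, would cover
the mechanical part while leaving every decision with the core team.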

Since Kolla simply deploys OpenStack, it is a lot closer to a client or a
library than it is to Nova or Neutron. And given its mission, maybe it
should break from the "typical OpenStack backports policy" so we can give a
consistent deployment experience across all stable and supported versions of
OpenStack at any given time.

Those are my thoughts on the matter at least. I look forward to some
conversations about this in Tokyo.

Sam Yaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Sam Yaple
Also in favor, if it lands before Liberty. But I don't want to see a format
change go straight into Mitaka.
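
For anyone skimming the thread, the knock-on effect shows up in both the
override file and the templates. A sketch, assuming the usual Jinja2
variable references in our playbooks (the names are illustrative):

===
# globals.yml override, old flat style:
api_interface: 'eth0'

# globals.yml override, new dictionary style:
network:
  api_interface: 'eth0'

# in a template or task the reference changes accordingly:
#   {{ api_interface }}   ->   {{ network.api_interface }}
===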

Sam Yaple

On Wed, Sep 30, 2015 at 1:03 PM, Steven Dake (stdake) 
wrote:

> I am in favor of this work if it lands before Liberty.
>
> Regards
> -steve
>
>
> On 9/30/15, 10:54 AM, "Jeff Peeler"  wrote:
>
> >The patch I just submitted [1] modifies the syntax of all.yml to use
> >dictionaries, which changes how variables are referenced. The key
> >point is that in globals.yml, the overriding of a variable will change
> >from simply specifying the variable to using the dictionary value:
> >
> >old:
> >api_interface: 'eth0'
> >
> >new:
> >network:
> >api_interface: 'eth0'
> >
> >Preliminary feedback on IRC sounded positive, so I'll go ahead and
> >work on finishing the review immediately assuming that we'll go
> >forward. Please ping me if you hate this change so that I can stop the
> >work.
> >
> >[1] https://review.openstack.org/#/c/229535/
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-29 Thread Sam Yaple
+1. Michal will be a great addition to the core team.
On Sep 29, 2015 6:48 PM, "Martin André"  wrote:

>
>
> On Wed, Sep 30, 2015 at 7:20 AM, Steven Dake (stdake) 
> wrote:
>
>> Hi folks,
>>
>> I am proposing Michal for core reviewer.  Consider my proposal as a +1
>> vote.  Michal has done a fantastic job with rsyslog, has done a nice job
>> overall contributing to the project for the last cycle, and has really
>> improved his review quality and participation over the last several months.
>>
>> Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
>> uncertain, it is best to abstain :)  I will leave the voting open for 1
>> week, until Tuesday, October 6th, or until there is a unanimous decision
>> or a veto.
>>
>
> +1, without hesitation.
>
> Martin
>
>
>> Regards
>> -steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Followup to review in gerrit relating to RHOS + RDO types

2015-09-14 Thread Sam Yaple
On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke 
wrote:

>
>
> On 13/09/15 18:34, Steven Dake (stdake) wrote:
>
>> Response inline.
>>
>> From: Sam Yaple <sam...@yaple.net>
>> Reply-To: "s...@yaple.net" <s...@yaple.net>
>> Date: Sunday, September 13, 2015 at 1:35 AM
>> To: Steven Dake <std...@cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <std...@cisco.com> wrote:
>> Response inline.
>>
>> From: Sam Yaple <sam...@yaple.net>
>> Reply-To: "s...@yaple.net" <s...@yaple.net>
>> Date: Saturday, September 12, 2015 at 11:34 PM
>> To: Steven Dake <std...@cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>>
>>
>> Sam Yaple
>>
>> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <std...@cisco.com> wrote:
>>
>>
>> From: Sam Yaple <sam...@yaple.net>
>> Reply-To: "s...@yaple.net" <s...@yaple.net>
>> Date: Saturday, September 12, 2015 at 11:01 PM
>> To: Steven Dake <std...@cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>>
>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <std...@cisco.com> wrote:
>> Hey folks,
>>
>> Sam had asked a reasonable set of questions regarding a patchset:
>> https://review.openstack.org/#/c/222893/
>>
>> The purpose of the patchset is to enable both RDO and RHOS as binary
>> choices on RHEL platforms.  I suspect over time, from-source deployments
>> have the potential to become the norm, but the business logistics of such a
>> change are going to take some significant time to sort out.
>>
>> Red Hat has two distros of OpenStack neither of which are from source.
>> One is free called RDO and the other is paid called RHOS.  In order to
>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>> remains to be seen whether Red Hat will actively support Kolla deployments
>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>> will.  It is in Kolla’s best interest to implement this model and not make
>> it hard on Operators since many of them do indeed want Red Hat’s support
>> structure for their OpenStack deployments.
>>
>> Now to Sam’s questions:
>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do
>> we add? What's our policy on adding a new type?”
>>
>> I’m not immediately clear on how binary fits in.  We could make binary
>> synonymous with the community supported version (RDO) while still
>> implementing the binary RHOS version.  Note Kolla does not “support” any
>> distribution or deployment of OpenStack – Operators will have to look to
>> their vendors for support.
>>
>> If everything between centos+rdo and rhel+rhos is mostly the same, then I
>> would think it would make more sense to just use the base ('rhel' in this
>> case) to branch off any differences in the templates. This would also allow
>> for the least amount of change and the most generic implementation of this
>> vendor-specific packaging. This would also match what we do with
>> oraclelinux: we do not have a special type for that, and any specifics would
>> be handled by an if statement around 'oraclelinux' and not some special
>> type.
>>
>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also
>> runs on RHEL.  I want to enable Red Hat customers to make a choice to have
>> a supported  operating system but not a supported Cloud environment.  The
>> answer here is RHEL + RDO.  This leads to full support down the

Re: [openstack-dev] [kolla] Followup to review in gerrit relating to RHOS + RDO types

2015-09-13 Thread Sam Yaple
On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) 
wrote:

> Response inline.
>
> From: Sam Yaple 
> Reply-To: "s...@yaple.net" 
> Date: Saturday, September 12, 2015 at 11:34 PM
> To: Steven Dake 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
> types
>
>
>
> Sam Yaple
>
> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) 
> wrote:
>
>>
>>
>> From: Sam Yaple 
>> Reply-To: "s...@yaple.net" 
>> Date: Saturday, September 12, 2015 at 11:01 PM
>> To: Steven Dake 
>> Cc: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>>
>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) 
>> wrote:
>>
>>> Hey folks,
>>>
>>> Sam had asked a reasonable set of questions regarding a patchset:
>>> https://review.openstack.org/#/c/222893/
>>>
>>> The purpose of the patchset is to enable both RDO and RHOS as binary
>>> choices on RHEL platforms.  I suspect over time, from-source deployments
>>> have the potential to become the norm, but the business logistics of such a
>>> change are going to take some significant time to sort out.
>>>
>>> Red Hat has two distros of OpenStack neither of which are from source.
>>> One is free called RDO and the other is paid called RHOS.  In order to
>>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>>> remains to be seen whether Red Hat will actively support Kolla deployments
>>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>>> will.  It is in Kolla’s best interest to implement this model and not make
>>> it hard on Operators since many of them do indeed want Red Hat’s support
>>> structure for their OpenStack deployments.
>>>
>>> Now to Sam’s questions:
>>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more
>>> do we add? What's our policy on adding a new type?”
>>>
>>> I’m not immediately clear on how binary fits in.  We could make
>>> binary synonymous with the community supported version (RDO) while still
>>> implementing the binary RHOS version.  Note Kolla does not “support” any
>>> distribution or deployment of OpenStack – Operators will have to look to
>>> their vendors for support.
>>>
>>
>> If everything between centos+rdo and rhel+rhos is mostly the same, then I
>> would think it would make more sense to just use the base ('rhel' in this
>> case) to branch off any differences in the templates. This would also allow
>> for the least amount of change and the most generic implementation of this
>> vendor-specific packaging. This would also match what we do with
>> oraclelinux: we do not have a special type for that, and any specifics would
>> be handled by an if statement around 'oraclelinux' and not some special
>> type.
>>
>>
>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also
>> runs on RHEL.  I want to enable Red Hat customers to make a choice to have
>> a supported  operating system but not a supported Cloud environment.  The
>> answer here is RHEL + RDO.  This leads to full support down the road if the
>> Operator chooses to pay Red Hat for it by an easy transition to RHOS.
>>
>
> I am against including vendor-specific things like RHOS in Kolla outright
> like you are proposing. Suppose another vendor comes along with a new base
> and new packages. They are willing to maintain it, but it's something that
> no one but their customers with their licensing can use. This is not
> something that belongs in Kolla, and I am unsure that it is even
> appropriate for it to belong in OpenStack as a whole. Unless RHEL+RHOS can
> be used by those that do not have a license for it, I do not agree with
> adding it at all.
>
>
> Sam,
>
> Someone stepping up to maintain a completely independent set of docker
> images hasn’t happened.  To date nobody has done that.  If someone were to
> make that offer, and it was a significant change, I think the community as
> a whole would have to evaluate such a drastic change.  That would certainly
> increase our implementation and maintenance burden, which we d

Re: [openstack-dev] [kolla] Followup to review in gerrit relating to RHOS + RDO types

2015-09-12 Thread Sam Yaple
Sam Yaple

On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) 
wrote:

>
>
> From: Sam Yaple 
> Reply-To: "s...@yaple.net" 
> Date: Saturday, September 12, 2015 at 11:01 PM
> To: Steven Dake 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
> types
>
>
> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) 
> wrote:
>
>> Hey folks,
>>
>> Sam had asked a reasonable set of questions regarding a patchset:
>> https://review.openstack.org/#/c/222893/
>>
>> The purpose of the patchset is to enable both RDO and RHOS as binary
>> choices on RHEL platforms.  I suspect over time, from-source deployments
>> have the potential to become the norm, but the business logistics of such a
>> change are going to take some significant time to sort out.
>>
>> Red Hat has two distros of OpenStack neither of which are from source.
>> One is free called RDO and the other is paid called RHOS.  In order to
>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>> remains to be seen whether Red Hat will actively support Kolla deployments
>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>> will.  It is in Kolla’s best interest to implement this model and not make
>> it hard on Operators since many of them do indeed want Red Hat’s support
>> structure for their OpenStack deployments.
>>
>> Now to Sam’s questions:
>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do
>> we add? What's our policy on adding a new type?”
>>
>> I’m not immediately clear on how binary fits in.  We could make
>> binary synonymous with the community supported version (RDO) while still
>> implementing the binary RHOS version.  Note Kolla does not “support” any
>> distribution or deployment of OpenStack – Operators will have to look to
>> their vendors for support.
>>
>
> If everything between centos+rdo and rhel+rhos is mostly the same then I
> would think it would make more sense to just use the base ('rhel' in this
> case) to branch off any differences in the templates. This would also allow
> for the least amount of change and the most generic implementation of this
> vendor specific packaging. This would also match what we do with
> oraclelinux: we do not have a special type for that, and any specifics would
> be handled by an if statement around 'oraclelinux' and not some special
> type.
>
>
> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also
> runs on RHEL.  I want to enable Red Hat customers to make a choice to have
> a supported operating system but not a supported Cloud environment.  The
> answer here is RHEL + RDO.  This leads to full support down the road, via
> an easy transition to RHOS, if the Operator chooses to pay Red Hat for it.
>

I am against including vendor specific things like RHOS in Kolla outright
like you are proposing. Suppose another vendor comes along with a new base
and new packages. They are willing to maintain it, but it's something that
no one but their licensed customers can use. This is not
something that belongs in Kolla, and I am unsure that it is even appropriate
for OpenStack as a whole. Unless RHEL+RHOS can be used by those
that do not have a license for it, I do not agree with adding it at all.


> For oracle linux, I’d like to keep RDO for oracle linux and from source on
> oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the patch
> set needs some later work here to address this point in more detail, but as
> is, “binary” covers oracle linux.
>

> Perhaps what we should do is get rid of the binary type entirely.  Ubuntu
> doesn’t really have a binary type; they have a cloudarchive type, so binary
> doesn’t make a lot of sense.  Since Ubuntu, to my knowledge, doesn’t have two
> distributions of OpenStack, the same logic wouldn’t apply to providing a
> full support onramp for Ubuntu customers.  Oracle doesn’t provide a binary
> type either; their binary type is really RDO.
>

The binary packages for Ubuntu are _packaged_ by the cloudarchive team. But
when an OpenStack release coincides with an LTS release (Icehouse and 14.04
was the last one), you do not add a new repo because the packages are in the
main Ubuntu repo.

Debian provides its own packages as well. I do not want a type name per
distro; 'binary' catches everything a distro packages for OpenStack.


>
> FWIW I never liked the transit

Re: [openstack-dev] [kolla] Followup to review in gerrit relating to RHOS + RDO types

2015-09-12 Thread Sam Yaple
On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
> Sam had asked a reasonable set of questions regarding a patchset:
> https://review.openstack.org/#/c/222893/
>
> The purpose of the patchset is to enable both RDO and RHOS as binary
> choices on RHEL platforms.  I suspect over time, from-source deployments
> have the potential to become the norm, but the business logistics of such a
> change are going to take some significant time to sort out.
>
> Red Hat has two distros of OpenStack, neither of which is from source.
> One, RDO, is free; the other, RHOS, is paid.  In order to
> obtain support for RHEL VMs running in an OpenStack cloud, you must be
> running on RHOS RPM binaries.  You must also be running on RHEL.  It
> remains to be seen whether Red Hat will actively support Kolla deployments
> with a RHEL+RHOS set of packaging in containers, but my hunch says they
> will.  It is in Kolla’s best interest to implement this model and not make
> it hard on Operators, since many of them do indeed want Red Hat’s support
> structure for their OpenStack deployments.
>
> Now to Sam’s questions:
> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do
> we add? What's our policy on adding a new type?”
>
> I’m not immediately clear on how binary fits in.  We could make
> binary synonymous with the community supported version (RDO) while still
> implementing the binary RHOS version.  Note Kolla does not “support” any
> distribution or deployment of OpenStack – Operators will have to look to
> their vendors for support.
>

If everything between centos+rdo and rhel+rhos is mostly the same then I
would think it would make more sense to just use the base ('rhel' in this
case) to branch off any differences in the templates. This would also allow
for the least amount of change and the most generic implementation of this
vendor specific packaging. This would also match what we do with
oraclelinux: we do not have a special type for that, and any specifics would
be handled by an if statement around 'oraclelinux' and not some special
type.

Since we implement multiple bases, some of which are not RPM based, it
doesn't make much sense to me to have rhel and rdo as a type, which is why
we removed rdo in the first place in favor of the more generic 'binary'.


>
> As such, the implied second question “How many more do we add?” sort of
> sounds like “how many do we support?”.  The answer to the second question
> is none – again, the Kolla community does not support any deployment of
> OpenStack.  To the question as posed, how many we add, the answer is that it
> is really up to community members willing to implement and maintain the
> work.  In this case, I have personally stepped up to implement RHOS and
> maintain it going forward.
>
> Our policy on adding a new type could be simple or onerous.  I prefer
> simple.  If someone is willing to write the code and maintain it so that it
> stays in good working order, I see no harm in it remaining in tree.  I
> don’t suspect there will be a lot of people interested in adding multiple
> distributions for a particular operating system.  To my knowledge, and I
> could be incorrect, Red Hat is the only OpenStack company with a paid and
> community version available of OpenStack simultaneously and the paid
> version is only available on RHEL.  I think the risk of RPM based
> distributions plus their type count spiraling out of manageability is low.
> Even if the risk were high, I’d prefer to keep an open mind to facilitate
> an increase in diversity in our community (which is already fantastically
> diverse, btw ;)
>
> I am open to questions, comments or concerns.  Please feel free to voice
> them.
>
> Regards,
> -steve
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Swapnil Kulkarni (coolsvap) for kolla core reviewer team

2015-08-15 Thread Sam Yaple
+1 Swapnil has always been very helpful.
On Aug 14, 2015 2:21 PM, "Harm Weites"  wrote:

> great! +1 :)
>
> Op 14-08-15 om 15:38 schreef Paul Bourke:
>
>> +1, Swapnil has made a ton of useful contributions and continues to do so
>> :)
>>
>> On 14/08/15 14:29, Steven Dake (stdake) wrote:
>>
>>> Hi folks,
>>>
>>> Swapnil has done a bunch of great technical work, participates heavily
>>> in IRC, and has contributed enormously to the implementation of Kolla.  I’d
>>> like to see more reviews from Swapnil, but he has committed to doing more
>>> reviews and already has gone from something like 0 reviews to 90 reviews in
>>> about a month.  Count this proposal as a +1 from me.
>>>
>>> His 90 day stats are:
>>> http://stackalytics.com/report/contribution/kolla-group/90
>>>
>>> The process we use to select new core reviewers requires a
>>> minimum of 3 +1 votes within a 1-week voting window. A vote of –1 is a
>>> veto.  It is fine to abstain as well without any response to this email.
>>> If the vote is unanimous or there is a veto vote prior to the end of the
>>> voting window, I’ll close voting then.
>>>
>>> The voting is open until Friday August 21st.
>>>
>>> Thanks!
>>> -steve
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Developers please assign yourself 3 or more Docker templates

2015-08-11 Thread Sam Yaple
Sure thing, I can handle rabbitmq and keystone tonight. I wrote the keystone
one during the mid-cycle, so I will just throw it in a patch.

Sam Yaple

On Tue, Aug 11, 2015 at 10:54 AM, Steven Dake (stdake) 
wrote:

> Hi,
>
> Alicja has been heading up this blueprint:
> https://blueprints.launchpad.net/kolla/+spec/dockerfile-template
>
> Other projects like TripleO and downstreams depend on this change to
> happen as soon as possible.  It is a mountain of work – too much for one
> person to finish in a sprint.  I’d like to distribute the load to all our
> developers, so everyone understands how the templates work and how to make
> them for new containers.  After we have good confidence in the new
> container set, we will delete the docker directory as it exists today.
>
> The work is being coordinated by Alicja here:
> https://etherpad.openstack.org/p/kolla-dockerfile-template
>
> Please expand the etherpad with the various containers we have for the
> various services.  Then assign yourself 3 or more :)  I’d estimate we have
> about 50 containers or more, so if folks can pick up 4 or 5 it would be
> ideal.  Before beginning work I’d like to see a reference implementation of
> an infra container and a basic service container from either Alicja or Sam
> or a combination of both so we can copy the same style throughout the code
> base.  If you pick a service with multiple containers, try to do the whole
> thing.  If you already have experience with the containers in that area
> (for example Ryan with Cinder) it might be ideal to pick that container.  I
> picked the Heat containers for example because I have experience with them.
>
> Keep in mind we have a different blueprint to implement RHEL support in
> containers, and I’d really REALLY like to see that land for liberty-3,
> which means we can’t have this work land at the end of l3 and expect RHEL
> support to land in 1 day :)
>
> Lets pick rabbitmq and keystone as our reference containers.  Sam or
> Alicja please crank out some ref implementations of these as soon as
> possible (top priority) since everyone is blocked on that work.
>
> Regards,
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat

2015-08-05 Thread Sam Yaple
 On Wed, Aug 5, 2015 at 1:29 PM, Dan Prince  wrote:

> ...snip...
> -The external config file mechanism for Kolla containers only seems to
> support a single config file. Some services (Neutron) can have multiple
> files. Could we extend the external config support to use multiple
> files?
>

>
Yes! I would actually prefer a rework. We implemented that in a hurry, but
if you look at the initial commit messages we knew it was a stop-gap until
a better idea came along. We need some way to do this dynamically, or at
least in a more readable way. One thought I had was laying down a JSON file
describing which files to copy/move/change permissions on, and reading that
in. That way the config-external script never actually changes, and the
deploy tool can determine which files get pulled in by the way it lays down
that JSON file.

I rejected using a '*' match for security reasons but also because some
configs need to go to different places. In the case of neutron, the
neutron.conf will be in /etc/neutron and the ml2_conf.ini will be in
/etc/neutron/plugins/ml2. So I don't think '*' matching will work. We are
open to ideas!
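
To make that concrete, here is a minimal sketch of what a manifest-driven
config-external helper could look like (the manifest path, key names, and
layout below are assumptions for illustration, not code from any actual
review):

    #!/usr/bin/env python
    # Hypothetical helper: read a JSON manifest laid down by the deploy
    # tool and copy each config file into place with the right permissions.
    import json
    import os
    import shutil

    MANIFEST = '/var/lib/kolla/config_files/config.json'  # assumed path

    def main():
        with open(MANIFEST) as f:
            manifest = json.load(f)
        # Each entry names a source, a destination, and the permissions
        # to set, e.g. neutron.conf -> /etc/neutron and ml2_conf.ini ->
        # /etc/neutron/plugins/ml2, so no '*' matching is needed.
        for entry in manifest['config_files']:
            dest_dir = os.path.dirname(entry['dest'])
            if dest_dir and not os.path.isdir(dest_dir):
                os.makedirs(dest_dir)
            shutil.copy(entry['source'], entry['dest'])
            os.chmod(entry['dest'], int(entry['perm'], 8))

    if __name__ == '__main__':
        main()

The script itself never changes; only the JSON the deploy tool lays down
does, which is what keeps the config-external mechanism generic.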
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr][kolla] - Bringing Dockers networking to Neutron

2015-07-23 Thread Sam Yaple
This is great news! I can see our two projects working closely together,
since we have aligned but not entirely overlapping goals.

Antoni, please reach out on IRC at any time. #kolla is active almost 24/7
due to all of our different timezones. There are usually two cores talking
at any given time. I would love to help you add MidoNet in!

Sam Yaple

On Thu, Jul 23, 2015 at 1:16 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Thu, Jul 23, 2015 at 7:35 PM, Mohammad Banikazemi 
> wrote:
>
>> I'll let the creators of the project speak for themselves, but here is my
>> take on project Kuryr.
>>
>> The goal is not to containerize Neutron or other OpenStack services. The
>> main objective is to use Neutron as a networking backend option for Docker.
>> The original proposal was to do so in the context of using containers (for
>> different Neutron backends or vif types). While the main objective is
>> fundamental to the project, the latter (use of containers in this
>> particular way) seems to be a tactical choice we need to make. I see
>> several different options available to achieve the same goal in this regard.
>>
>
> Thanks Mohammad. It is as you say: the goal of Kuryr is to provide Docker
> with a new libnetwork remote
> driver that is powered by Neutron, not a containerization of Neutron.
> Kuryr deployments, as you point out,
> may opt to point to a Neutron that is containerized, and for that I was
> looking at using Kolla. However,
> that is just deployment, and I consider it to be up to the
> deployer (of course, we'll make Kuryr
> containerizable and part of Kolla :-) ).
>
> The design for interaction/configuration is not yet final, as I still have
> to push drafts for the blueprints and
> get comments, but my initial idea is that you will configure docker to
> pass the configuration of which
> device to take hold of for the overlay and where the Neutron APIs are, in
> the following way:
>
> $ docker -d --kv-store=consul:localhost:8500 \
>     --label=com.docker.network.driver.kuryr.bind_interface=eth0 \
>     --label=com.docker.network.driver.kuryr.neutron_api=10.10.10.10 \
>     --label=com.docker.network.driver.kuryr.token=AUTH_tk713d067336d21348bcea1ab220965485
>
> Another possibility is to pass those values as env variables or
> plain old configuration files.
>
>
>
>>
>> Now, there is another aspect of using containers in the context of this
>> project that is more interesting, at least to me (and I do not know if
>> others share this view or not), and that is the use of containers for
>> providing network services that are not available through libnetwork as of
>> now, in the near future, or ever. From the talks I have had with libnetwork
>> developers, the plan is to stay with the basic networking infrastructure and
>> leave additional features to be developed by the community, possibly by
>> using, what else, containers.
>>
>
>> So take the current features available in libnetwork. You mainly get
>> support for connectivity/isolation for multiple networks across multiple
>> hosts. Now if you want to route between these networks, you have to devise
>> a solution yourself. One possible solution would be having a router service
>> in a container that gets connected to, say, two Docker networks. Whether the
>> router service is implemented with the use of the current Neutron router
>> services or by some other solution is something to look into and discuss,
>> but this is a direction to which I think Kuryr (did I spell it right? ;))
>> can and should contribute.
>>
>
> You got that right. The idea is indeed to get the containers networked via
> libnetwork, which, as you point out,
> was intentionally left simple to be developed by the community; then we
> want to:
>
> a) have Kuryr connect the containers to networks that have been
> pre-configured with advanced networking (lb, sec groups, etc.),
> being able to perform changes on those networks via neutron after the fact
> as well. For example, the container
> orchestration software could create a Neutron network with a load balancer
> and a FIP, start containers on that network,
> and add them to the load balancer.
> b) via docker labels on `docker run`, have Kuryr
> implicitly set up Neutron networks/topologies.
>
> Yup, you spelled it well. In Czech it is Kurýr, but for project purposes I
> dropped the "´".
> Thanks a lot for contributing, and I'm very happy to see that you got a
> very good sense of the direction we are taking.
> I'm looking forward to meeting you all in the community meetings!
>
>>
>> Just my 2 cents on 

Re: [openstack-dev] [Neutron][Kuryr][Kolla] - Bringing Dockers networking to Neutron

2015-07-22 Thread Sam Yaple
Thanks for bringing this to my attention, Kevin! I might not have seen this
if my filter hadn't caught 'Kolla'.

I am very interested in this. In Kolla and related projects we containerized
all the neutron components near the beginning of the year. We too have no
wish to get into a situation where we are "re-inventing the wheel for each
different project".

Unfortunately, I am not seeing much documentation about this project. I
would love to read more about the current status and plans so we can
contribute to each other!

Sam Yaple

On Wed, Jul 22, 2015 at 2:04 PM, Fox, Kevin M  wrote:

>  Awesome. :)
>
> Dockerization of Neutron plugins is already in scope of the Kolla project.
> Might want to coordinate the effort with the Kolla team.
>
> Thanks,
> Kevin
>
> --
> *From:* Gal Sagie
> *Sent:* Wednesday, July 22, 2015 9:28:50 AM
> *To:* OpenStack Development Mailing List (not for usage questions); Eran
> Gampel; Antoni Segura Puimedon; Irena Berezovsky
> *Subject:* [openstack-dev] [Neutron][Kuryr] - Bringing Dockers networking
> to Neutron
>
>
>  Hello Everyone,
>
> Project Kuryr is now officially part of Neutron's big tent.
> Kuryr is aimed to be used as a generic docker remote driver that connects
> docker to Neutron APIs
> and provides containerised images for the common Neutron plugins.
> We also plan on providing common additional networking services APIs from
> other sub projects
> in the Neutron big tent.
>
>  We hope to get everyone on board with this project and leverage this
> joint effort for the many different solutions out there (instead of
> everyone re-inventing the wheel for each different project).
>
>  We want to start doing a weekly IRC meeting to coordinate the different
> requirements and
> tasks, so if you are interested in participating, please share your time
> preference
> and we will try to find the best time for the majority.
>
>  Remember we have people in Europe, Tokyo and the US, so we won't be able to
> find a time that fits
> everyone.
>
>  The currently proposed time is *Wednesday at 16:00 UTC*.
>
>  Please reply with your suggested time/day,
> Hope to see you all, we have an interesting and important project ahead of
> us
>
>  Thanks
> Gal.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][magnum] Removal of Daneyon Hansen from the Core Reviewer team for Kolla

2015-07-22 Thread Sam Yaple
Daneyon,

We haven't had much overlap here in Kolla, but our interactions have always
been pleasant and informative. You are clearly a very smart and driven guy.

Good luck with Magnum. Hopefully you will see me around Magnum more in the
future as well; I expect great things!

Sam Yaple

On Wed, Jul 22, 2015 at 3:47 PM, Steven Dake (stdake) 
wrote:

>  Fellow Kolla developers,
>
>  Daneyon has been instrumental in getting Kolla rolling and keeping our
> project alive.  He even found me a new job that would pay my mortgage and
> Panamera payment so I could continue performing as PTL for Kolla and get
> Magnum off the ground.  But Daneyon has conferred with me that he has a
> personal objective of getting highly involved in the Magnum project and
> leading the container networking initiative coming out of Magnum.  For a
> sample of his new personal mission:
>
>  https://review.openstack.org/#/c/204686/
>
>  I’m a bit sad to lose Daneyon to Magnum, but life is short and not sweet
> enough.  I personally feel people should do what makes them satisfied and
> happy professionally.  Daneyon will still be present at the Kolla midcycle
> and contribute to our talk (if selected by the community) in Tokyo.  I
> expect Daneyon will make a big impact in Magnum, just as he has with Kolla.
>
>  In the future if Daneyon decides he wishes to re-engage with the Kolla
> project, we will welcome him with open arms because Daneyon rocks and does
> super high quality work.
>
>  NB: Typically we would vote on removal of a core reviewer, unless they
> wish to be removed to focus on other projects.  Since that is the case
> here, there is no vote necessary.
>
>  Please wish Daneyon well in his adventures in Magnum territory and pray
> he comes back when he finishes the job on Magnum networking :)
>
>  Regards
> -steve
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for Paul Bourke for Kolla Core

2015-07-13 Thread Sam Yaple
+1 from me.

Paul's reviews are always helpful and easily on par in number with the
other core members' (108 reviews this cycle!). Additionally, he has been
helpful in testing the new Ansible pieces as well as pushing forward the
source installation, both areas where we currently need help.

Sam Yaple

On Mon, Jul 13, 2015 at 9:40 PM, Steven Dake (stdake) 
wrote:

>  Hey folks,
>
>  I am proposing Paul Bourke for the Kolla core team.  He did a fantastic
> job getting Kolla into shape to support multiple distros and from
> source/from binary installation.  His statistics are fantastic including
> both code and reviews.  His reviews are not only voluminous, but
> consistently good.  Paul is helping on many fronts and I feel would make a
> fantastic addition to our core reviewer team.
>
>  Consider my proposal to count as one +1 vote.
>
>  Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote is a
> veto for the candidate, so if you are on the fence, best to abstain :)  We
> require 3 core reviewer votes to approve a candidate.  I will leave the
> voting open until July 20th  UTC.  If the vote is unanimous prior to
> that time or a veto vote is received, I’ll close voting and make
> appropriate adjustments to the gerrit groups.
>
>  Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] Plans for using Pre-2.0 Ansible modules

2015-07-09 Thread Sam Yaple
Hey Kevin,

Thanks for the offer. We had originally discussed pulling in the module you
wrote for OSAD. The main blocker was that we had an internal requirement of
being able to use the Keystone v3 API. I have looked at the patch to make
OSAD v3, but it appears you are importing v3 directly rather than letting
the module decide whether to use v2.0 or v3. That may not work for our
community, but I will dig into it to determine if it conflicts with
anything for us. I will follow the patchset, and once it merges Kolla can
discuss using it, if it meets our needs, until the Ansible Shade modules
land.
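
For illustration, letting the module decide could look roughly like this (a
sketch under my own assumptions; the helper name and the URL-based version
check are not code from either repo):

    # Hypothetical helper: choose the keystoneclient version from the
    # auth URL instead of importing v3 directly, so one module can serve
    # both v2.0 and v3 deployments.
    from keystoneclient.v2_0 import client as client_v2
    from keystoneclient.v3 import client as client_v3

    def get_keystone_client(auth_url, token):
        # The deployer supplies a versioned endpoint, e.g.
        # http://host:35357/v2.0 or http://host:35357/v3.
        if auth_url.rstrip('/').endswith('/v3'):
            return client_v3.Client(token=token, endpoint=auth_url)
        return client_v2.Client(token=token, endpoint=auth_url)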

Additionally, I have left a review pointing you to an issue you may run
into with creating all of the endpoints with v3 directly, as it appears you
are doing in that patchset's playbooks.

Sam Yaple

On Wed, Jul 8, 2015 at 8:14 PM, Steven Dake (stdake) 
wrote:

>  Kevin,
>
>  Thanks for the offer.  I personally am not an expert in Ansible, so I am
> not in a position to judge whether this would be the appropriate path, or
> whether something small-footprint with less stuff would be more appropriate
> for our needs.  I think we can all agree these types of modules don’t offer
> a lot of value to either of our systems, and it doesn’t make a ton of sense
> to duplicate things when it offers no measurable value.
>
>  I’ll connect with Sam off list and we can make a decision as to a path
> forward.
>
>  Regards
> -steve
>
>
>   From: Kevin Carter 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, July 8, 2015 at 5:19 PM
> To: "s...@yaple.net" , "OpenStack Development Mailing List
> (not for usage questions)" 
> Cc: Greg DeKoenigsberg 
> Subject: Re: [openstack-dev] [kolla][tc] Plans for using Pre-2.0 Ansible
> modules
>
>   We have several Ansible modules that we've been carrying[0] which were
> created in support of managing OpenStack. We've had these modules for a
> while now and you're free to take / use all that you may need without
> running into request, license, owner, or governance issues. Like you,
> we're hoping to drop these modules in favor of the new Ansible V2 modules
> once released.
>
>
>  This may be a good first convergence point for our projects: we're both
> leveraging Ansible for deployments, we have similar needs in
> the space, the OSAD code base has been tracking liberty (which
> you guys are now working on) for a while, and we already have a bunch of
> modules that we use every day. In terms of support, we have an active review
> to add keystone v3 support to our keystone module[1], and while it may not
> fit your current syntax, it should be enough to keep things going until we
> both need to refactor some things to leverage all of the coming upstream
> goodness.
>
>  I hope this helps, and if you guys are interested in working on any of
> these things we'd love help in a collaborative effort.
>
>--
>
> Kevin Carter
> IRC: cloudnull
>
>   [0]
> https://github.com/stackforge/os-ansible-deployment/tree/master/playbooks/library
> ​
>
> [1] https://review.openstack.org/#/c/196943/
>
>
>
> --
> *From:* Steven Dake (stdake) 
> *Sent:* Wednesday, July 8, 2015 12:47 PM
> *To:* s...@yaple.net; OpenStack Development Mailing List (not for usage
> questions)
> *Cc:* Greg DeKoenigsberg
> *Subject:* Re: [openstack-dev] [kolla][tc] Plans for using Pre-2.0
> Ansible modules
>
>  That sounds like option #4, so then I guess we don’t need the TC to
> evaluate the “legalness” of this approach since it does not trigger GPL
> contamination.
>
>  TC apologies for the noise – Sam said option #4 was difficult to do :)
>
>  Regards
> -steve
>
>
>   From: Sam Yaple 
> Reply-To: "s...@yaple.net" , "OpenStack Development Mailing
> List (not for usage questions)" 
> Date: Wednesday, July 8, 2015 at 5:15 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Cc: Greg DeKoenigsberg 
> Subject: Re: [openstack-dev] [kolla][tc] Plans for using Pre-2.0 Ansible
> modules
>
>   All,
>
> I went ahead and wrote the temp module that will fill the gaps that the
> GPLv3 modules will eventually solve. It appears that upstream Shade still
> hasn't merged the capability to create roles, even though mordred has
> the reviews up. This means even if we solve the licensing issue, we will
> still be lacking role usage support until shade is updated upstream.
>
>  The review listed below has a 'kolla_keystone.py' module, as well as two
> modules that are licensed ASL2.0

Re: [openstack-dev] [kolla][tc] Plans for using Pre-2.0 Ansible modules

2015-07-08 Thread Sam Yaple
All,

I went ahead and wrote the temp module that will fill the gaps that the
GPLv3 modules will eventually solve. It appears that upstream Shade still
hasn't merged the capability to create roles, even though mordred has
the reviews up. This means even if we solve the licensing issue, we will
still be lacking role usage support until shade is updated upstream.

The review listed below has a 'kolla_keystone.py' module, as well as two
modules that are licensed ASL2.0 that I have permission from the author to
use in our repo (each module has a link with a git commit referencing where
it was pulled from, with the appropriate license).

https://review.openstack.org/199463
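
For a sense of the shape such a stop-gap module takes, here is a
hypothetical skeleton (the argument names and the stubbed ensure_user
helper are illustrative only, not the contents of that review):

    #!/usr/bin/python
    # Hypothetical skeleton of a temporary, ASL2.0-licensed keystone module.
    from ansible.module_utils.basic import AnsibleModule

    def ensure_user(params):
        # Stub: a real module would call python-keystoneclient here to
        # create the user/project/role grant idempotently, returning True
        # only when something actually changed.
        return False

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                user=dict(required=True),
                password=dict(required=True, no_log=True),
                project=dict(required=True),
                role=dict(required=True),
                auth_url=dict(required=True),
                token=dict(required=True, no_log=True),
            ),
        )
        try:
            module.exit_json(changed=ensure_user(module.params))
        except Exception as e:
            module.fail_json(msg=str(e))

    if __name__ == '__main__':
        main()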

Sam Yaple
864-901-0012

On Tue, Jul 7, 2015 at 9:21 PM, Steven Dake (stdake) 
wrote:

>
>
> On 7/7/15, 2:05 PM, "Robert Collins"  wrote:
>
> >On 4 July 2015 at 06:53, Steven Dake (stdake)  wrote:
> >> Kolla Devs as well as the Technical Committee,
> >>
> >> I wanted to get the TC's thoughts on this plan of action as we intend to
> >> apply for big tent once our Ansible code has completed implementation.
> >>If
> >> the approach outlined in this email seems like a blocker and we should
> >>just
> >> start with #4 instead, it would be immensely helpful to know now.
> >>
> >> The problem:
> >> A whole slew of OpenStack modules exist upstream in the Ansible core
> >> directory.  Kolla wants to use these modules.  These files are licensed
> >> under the GPLv3.  They will be released with Ansible 2.0 but Ansible
> >>2.0 is
> >> not yet available.  In the meantime we need these modules to execute our
> >> system.  The repo in question is:
> >
> >As I understand our current license situation, you won't be eligible
> >for big-tent if you depend on GPLv3 code.
> >
> >From the requirements "* Project must have no library dependencies
> >which effectively restrict
> >  how the project may be distributed or deployed
> >"
> >
> >So I'm also strongly inclined to recommend you speak to the legal list
> >about the implications here. Using a GPLv3 tool via the CLI is very
> >different (by the GPL's design) to using it as a library.
>
> Rob,
>
> I pinged legal-discuss on this matter.  I am hopeful the experts can
> provide guidance for the Technical Committee and our project as to how to
> proceed.
>
> Regards
> -steve
>
> >
> >-Rob
> >
> >
> >--
> >Robert Collins 
> >Distinguished Technologist
> >HP Converged Cloud
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] Plans for using Pre-2.0 Ansible modules

2015-07-07 Thread Sam Yaple
Steve,

We will need some of the modules from the "merge-it-all" branch of emonty's
fork[1]. If it is more convenient, you can just grab them all. The ones we
require to proceed without triggering Option #4 would be the keystone
related modules. That list is as follows:

os_auth.py
os_keystone_domain.py
os_keystone_endpoint.py
os_keystone_role.py
os_keystone_service.py
os_user.py
os_user_group.py
os_user_role.py

I would also suggest using git-submodule to add these directly to the
ansible/library folder for Kolla, assuming this doesn't somehow taint
the license or violate some stackforge rule. This too would only be
temporary until Ansible 2.0.

[1]
https://github.com/emonty/ansible-modules-core/tree/merge-it-all/cloud/openstack

Sam Yaple
864-901-0012

On Fri, Jul 3, 2015 at 1:56 PM, Greg DeKoenigsberg  wrote:

> Option 3 sounds fine to me. We hope to have 2.0 out in August, so one
> hopes you wouldn't have to carry them very long.
>
> --g
>
> On Fri, Jul 3, 2015 at 2:53 PM, Steven Dake (stdake) 
> wrote:
> > Kolla Devs as well as the Technical Committee,
> >
> > I wanted to get the TC’s thoughts on this plan of action as we intend to
> > apply for big tent once our Ansible code has completed implementation.
> If
> > the approach outlined in this email seems like a blocker and we should
> just
> > start with #4 instead, it would be immensely helpful to know now.
> >
> > The problem:
> > A whole slew of OpenStack modules exist upstream in the Ansible core
> > directory.  Kolla wants to use these modules.  These files are licensed
> > under the GPLv3.  They will be released with Ansible 2.0, but Ansible 2.0
> > is not yet available.  In the meantime we need these modules to execute
> > our system.  The repo in question is:
> >
> > https://github.com/ansible/ansible-modules-core
> >
> > The possible solutions:
> > 1. Mordred suggested just merging the code into our repo, but I thought
> > this might trigger license contamination, so I am not hot on this idea.
> > 2. Relicense the upstream modules under ASL short term.  Mordred tried
> > this but thinks it's not possible because of the varied contributors.
> > 3. Fork the repo in question, remove everything except the cloud/openstack
> > directory, and turn this into a pip installable library.
> > 4. Make a hacky solution that doesn’t use any upstream modules but gets
> > the job done.
> >
> > For the moment we have settled on #3, that is creating a repo here:
> >
> > https://github.com/sdake/kolla-pre-ansible-2-openstack/
> >
> > And installing these in the deployment system.  Once Ansible 2.0 is
> > available, we would deprecate this model, and rely on Ansible 2.0
> > exclusively.
> >
> > Thoughts or concerns on this approach?
> >
> > Thanks
> > -steve
>
>
>
> --
> Greg DeKoenigsberg
> Ansible Community Guy
>
> Find out why SD Times named Ansible
> their #1 Company to Watch in 2015:
> http://sdtimes.com/companies-watch-2015/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][release] Announcing Liberty-1 release of Kolla

2015-06-30 Thread Sam Yaple
Ian,

The most significant difference would be that Kolla uses image-based
deployment rather than building from source on each node at runtime,
allowing for a more consistent and repeatable deployment.

On Tue, Jun 30, 2015 at 2:28 PM, Ian Cordasco 
wrote:

>
>
> On 6/29/15, 23:59, "Steven Dake (stdake)"  wrote:
>
> >The Kolla community is pleased to announce the release of the
> >Kolla Liberty 1 milestone.  This release fixes 56 bugs and implements
> >14 blueprints!
> >
> >Our community developed the following notable features:
> >
> >* A start at source-based containers
>
> So how does this now compare to the stackforge/os-ansible-deployment (soon
> to be openstack/openstack-ansible) project?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-17 Thread Sam Yaple
1600 is late for me as it is; the earlier the better is my vote. I would
make 1630 work, but 1700 is too late.

Sam Yaple
864-901-0012

On Tue, Jun 16, 2015 at 2:09 PM, Harm Weites  wrote:

> I'm ok with moving to 16:30 UTC instead of staying at 16:00.
>
> I actually prefer it in my evening schedule :) Moving to 16:30 would
> already be a great improvement to the current schedule and should at least
> allow me to not miss everything.
>
> - harmw
>
> Op 12-06-15 om 15:44 schreef Steven Dake (stdake):
>
>> Even though 7am is not ideal for the west coast, I'd be willing to go back
>> that far.  That would put the meeting at the morning school rush for the
>> west coast folks though (although we are on summer break in the US and we
>> could renegotiate a time in 3 months when school starts up again if it's a
>> problem) - so creating a different set of problems for a different set of
>> people :)
>>
>> This would be a 1400 UTC meeting.
>>
>> While I wake up prior to 7am (usually around 5:30), I am not going to put
>> people through the torture of a 6am meeting in any timezone if I can help
>> it, so 1400 is the earliest we can go :)
>>
>> Regards
>> -steve
>>
>>
>> On 6/12/15, 4:37 AM, "Paul Bourke"  wrote:
>>
>>  I'm fairly easy on this but, if the issue is that the meeting is running
>>> into people's evening schedules (in EMEA), would it not make sense to
>>> push it back an hour or two into office hours, rather than forward?
>>>
>>> On 10/06/15 18:20, Ryan Hallisey wrote:
>>>
>>>> After some upstream discussion, moving the meeting from 1600 to 1700
>>>> UTC does not seem very popular.
>>>> It was brought up that changing the time to 16:30 UTC could accommodate
>>>> more people.
>>>>
>>>> For the people that attend the 1600 UTC meeting time slot, can you post
>>>> further feedback to address this?
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>> - Original Message -
>>>> From: "Jeff Peeler" 
>>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>>> 
>>>> Sent: Tuesday, June 9, 2015 2:19:00 PM
>>>> Subject: Re: [openstack-dev] [kolla] Proposal for changing 1600UTC
>>>> meeting to 1700 UTC
>>>>
>>>> On Mon, Jun 08, 2015 at 05:15:54PM +, Steven Dake (stdake) wrote:
>>>>
>>>>> Folks,
>>>>>
>>>>> Several people have messaged me from EMEA timezones that 1600UTC fits
>>>>> right into the middle of their family life (ferrying kids from school
>>>>> and what-not) and that 1700UTC, while not perfect, would be a better fit
>>>>> time-wise.
>>>>>
>>>>> For all people that intend to attend the 1600 UTC meeting, could I get
>>>>> your feedback on this thread on whether a change of the 1600UTC timeslot
>>>>> to 1700UTC would be acceptable?  If it wouldn't be acceptable, please
>>>>> chime in as well.
>>>>>
>>>> Both 1600 and 1700 UTC are fine for me.
>>>>
>>>> Jeff
>>>>
>>>>
>>>>
>>>> _
>>>> _
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>>
>>>> _
>>>> _
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-15 Thread Sam Yaple
+1 for me as well. Designate looks great
On Jun 15, 2015 6:43 AM, "Ryan Hallisey"  wrote:

> +1 Great job with Cinder.
>
> -Ryan
>
> - Original Message -
> From: "Steven Dake (stdake)" 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Sent: Sunday, June 14, 2015 1:48:48 PM
> Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites
>
> Hey folks,
>
> I am proposing Harm Waites for the Kolla core team. He did a fantastic job
> implementing Designate in a container[1], which I’m sure was incredibly
> difficult, and he never gave up even though there were 13 separate patch
> reviews :) Beyond Harm’s code contributions, he is responsible for 32% of
> the “independent” reviews[2], where independents compose 20% of our total
> reviewer output. I think we should judge core reviewers on more than
> output, and I knew Harm was core reviewer material with his fantastic
> review of the cinder container, where he picked out 26 specific things that
> could be broken that other core reviewers may have missed ;) [3]. His other
> reviews are also as thorough as this particular review was. Harm is active
> in IRC and in the meetings that fit his TZ. Finally, Harm has agreed
> to contribute to the ansible-multi implementation that we will finish in
> the liberty-2 cycle.
>
> Consider my proposal to count as one +1 vote.
>
> Any Kolla core is free to vote +1, abstain, or vote –1. A –1 vote is a
> veto for the candidate, so if you are on the fence, best to abstain :)
> Since our core team has grown a bit, I’d like 3 core reviewer +1 votes this
> time around (vs Sam’s 2 core reviewer votes). I will leave the voting open
> until June 21  UTC. If the vote is unanimous prior to that time or a
> veto vote is received, I’ll close voting and make appropriate adjustments
> to the gerrit groups.
>
> Regards
> -steve
>
> [1] https://review.openstack.org/#/c/182799/
> [2]
> http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
> [3] https://review.openstack.org/#/c/170965/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev