Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-11-07 Thread Dr. Jens Harbott (frickler)
2018-11-07 12:47 GMT+00:00 Mohammed Naser :
> On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann  wrote:
>>
>> Corey Bryant  writes:
>>
>> > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant 
>> > wrote:
>> >
>> > I'd like to start moving forward with enabling py37 unit tests for a subset
>> > of projects. Rather than putting too much load on infra by enabling 3 x py3
>> > unit tests for every project, this would just focus on enablement of py37
>> > unit tests for a subset of projects in the Stein cycle. And just to be
>> > clear, I would not be disabling any unit tests (such as py35). I'd just be
>> > enabling py37 unit tests.
>> >
>> > As some background, this ML thread originally led to updating the
>> > python3-first governance goal (https://review.openstack.org/#/c/610708/)
>> > but has now led back to this ML thread for a +1 rather than updating the
>> > governance goal.
>> >
>> > I'd like to get an official +1 here on the ML from parties such as the TC
>> > and infra in particular but anyone else's input would be welcomed too.
>> > Obviously individual projects would have the right to reject proposed
>> > changes that enable py37 unit tests. Hopefully they wouldn't, of course,
>> > but they could individually vote that way.
>> >
>> > Thanks,
>> > Corey
>>
>> This seems like a good way to start. It lets us make incremental
>> progress while we take the time to think about the python version
>> management question more broadly. We can come back to the other projects
>> to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out.
>
> What's the impact on the number of consumption in upstream CI node usage?

I think the relevant metric here will be nodes_used * time_used.
nodes_used will increase by one, and time_used for typical unit test
jobs seems to be < 10 minutes, so I'd expect the total increase in CI
usage to be negligible compared to full tempest or similar jobs that
take 1-2 hours.
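
For what it's worth, a quick back-of-the-envelope check using the rough
figures quoted in this thread (assumed, not measured):

# one extra ~10 minute unit test node versus a ~90 minute tempest job
awk 'BEGIN { unit = 10; tempest = 90;
             printf "extra usage per change: ~%.0f%% of one tempest run\n",
             100 * unit / tempest }'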



[openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04)

2018-11-06 Thread Dr. Jens Harbott (frickler)
Dear OpenStackers,

Earlier this year Ubuntu released their current LTS version 18.04,
codenamed "Bionic Beaver", and we are now facing the task of migrating
our devstack-based jobs to run on Bionic instead of the previous LTS
version 16.04 "Xenial Xerus".

The last time this happened was two years ago (the migration from
14.04 to 16.04), and back then the migration was mostly driven by the
Infra team (see [1]), largely because all of the job configuration was
still centrally hosted in a single repository
(openstack-infra/project-config). In the meantime, however, our CI
setup has been updated to Zuul v3, and one of the new features that
came with this change is the introduction of per-project job
definitions.

This new flexibility requires us to choose between the two possible
options we have for migrating jobs now:

1) Change the "devstack" base job to run on Bionic instances
instead of Xenial instances
2) Create new "devstack-bionic" and "tempest-full-bionic" base
jobs and migrate projects piecewise

Choosing option 1) would cause all projects that base their own jobs
on this job (possibly indirectly, e.g. by being based on the
"tempest-full" job) to switch automatically. So there would be the
possibility that some jobs break and need to be fixed before patches
can be merged again in the affected project(s). To mitigate that risk,
the QA team can give projects some time to test their jobs on Bionic
with WIP patches (QA can provide the Bionic base job as a WIP patch).
This option does not require any pre/post-migration changes to
projects' jobs.

Choosing option 2) would avoid this by letting projects switch at
their own pace, but would create the risk that some projects never
migrate. It would also mean that further migrations, like the one
expected when 20.04 is released, would either have to follow the same
scheme or re-introduce the unversioned base job. Other points to note
with this option are:
   - project job definitions need to change their parent job from
"devstack" to "devstack-bionic" or "tempest-full" to
"tempest-full-bionic" (see the sketch below)
   - QA needs to maintain both the existing jobs ("devstack",
"tempest-full") and the Bionic variants ("devstack-bionic",
"tempest-full-bionic")
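
To make the first point a bit more concrete, here is a rough sketch of
what such a per-project switch could look like; the "-bionic" job names
are only placeholders for jobs the QA team would first have to define:

# rough sketch, not an existing configuration
sed -i -e 's/tempest-full$/tempest-full-bionic/' \
       -e 's/parent: devstack$/parent: devstack-bionic/' .zuul.yaml
git commit -am "Switch devstack-based jobs to Bionic"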

In order to prepare the decision, we have created a set of patches
that test the Bionic jobs; you can find them under the common topic
"devstack-bionic" [2]. There is also an etherpad giving a summarized
view of the results of these tests [3].

Please respond to this mail if you want to promote either of the above
options or maybe want to propose an even better solution. You can also
find us for discussion in the #openstack-qa IRC channel on freenode.

The Infra team has tried both approaches before, during the
precise->trusty and trusty->xenial migrations [4].

Note that this mailing list itself will soon be migrated, too, so if
you haven't subscribed to the new list yet, this is a good time to act
and avoid missing the best parts [5].

Yours,
Jens (frickler@IRC)


[1] http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html
[2] https://review.openstack.org/#/q/topic:devstack-bionic
[3] https://etherpad.openstack.org/p/devstack-bionic
[4] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2018-11-01.log.html#t2018-11-01T12:40:22
[5] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html



Re: [openstack-dev] [openstack-community] DevStack Installation issue

2018-06-27 Thread Dr. Jens Harbott (frickler)
2018-06-27 16:58 GMT+02:00 Amy Marrich :
> Abhijit,
>
> I'm forwarding your issue to the OpenStack-dev list so that the right people
> might see your issue and respond.
>
> Thanks,
>
> Amy (spotz)
>
> -- Forwarded message --
> From: Abhijit Dutta 
> Date: Wed, Jun 27, 2018 at 5:23 AM
> Subject: [openstack-community] DevStack Installation issue
> To: "commun...@lists.openstack.org" 
>
>
> Hi,
>
>
> I am trying to install DevStack for the first time in a baremetal with
> Fedora 28 installed.  While executing the stack.sh I am getting the
> following error:
>
>
> No match for argument: Django
> Error: Unable to find a match
>
> Can anybody in the community help me out with this problem.

We are aware of some issues with deploying devstack on Fedora 28;
these are being worked on, see
https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:uwsgi-f28

If you want a quick solution, you could try deploying on Fedora 27 or
CentOS 7 instead.



Re: [openstack-dev] Questions about token scopes

2018-06-01 Thread Jens Harbott
2018-05-30 20:37 GMT+00:00 Matt Riedemann :
> On 5/30/2018 9:53 AM, Lance Bragstad wrote:
>>
>> While scope isn't explicitly denoted by an
>> attribute, it can be derived from the attributes of the token response.
>>
>
> Yeah, this was confusing to me, which is why I reported it as a bug in the
> API reference documentation:
>
> https://bugs.launchpad.net/keystone/+bug/1774229
>
>>> * It looks like python-openstackclient doesn't allow specifying a
>>> scope when issuing a token, is that going to be added?
>>
>> Yes, I have a patch up for it [6]. I wanted to get this in during
>> Queens, but it missed the boat. I believe this and a new release of
>> oslo.context are the only bits left in order for services to have
>> everything they need to easily consume system-scoped tokens.
>> Keystonemiddleware should know how to handle system-scoped tokens in
>> front of each service [7]. The oslo.context library should be smart
>> enough to handle system scope set by keystonemiddleware if context is
>> built from environment variables [8]. Both keystoneauth [9] and
>> python-keystoneclient [10] should have what they need to generate
>> system-scoped tokens.
>>
>> That should be enough to allow the service to pass a request environment
>> to oslo.context and use the context object to reason about the scope of
>> the request. As opposed to trying to understand different token scope
>> responses from keystone. We attempted to abstract that away in to the
>> context object.
>>
>> [6]https://review.openstack.org/#/c/524416/
>> [7]https://review.openstack.org/#/c/564072/
>> [8]https://review.openstack.org/#/c/530509/
>> [9]https://review.openstack.org/#/c/529665/
>> [10]https://review.openstack.org/#/c/524415/
>
>
> I think your reply in IRC was more what I was looking for:
>
> lbragstad   mriedem: if you install
> https://review.openstack.org/#/c/524416/5 locally with devstack and setup a
> clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin``
> should work 15:39
> lbragstad   http://paste.openstack.org/raw/722357/  15:39
>
> So users with the system role will need to create a token using that role to
> get the system-scoped token, as far as I understand. There is no --scope
> option on the 'openstack token issue' CLI.

IIUC there is no scope option for the "token issue" command because
that command creates a token just like any other OSC command would,
from the global authentication parameters specified either on the
command line, in the environment, or via a clouds.yaml file. The
"token issue" command simply outputs the token it receives, whereas
other commands use that token as authentication for the "real" action
they perform.

So the option to request a system scope would seem to be
"--os-system-scope all" or the corresponding env var OS_SYSTEM_SCOPE.
And if you do that, the resulting system-scoped token will directly be
used when you issue a command like "openstack server list".
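
For illustration, a minimal sketch of that usage, assuming the rest of
the credentials come from the environment or a clouds.yaml entry:

# illustrative only; keystone and the client need to be new enough to
# support system scope
export OS_SYSTEM_SCOPE=all
openstack token issue    # shows the resulting system-scoped token
openstack server list    # later commands authenticate with the same scope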

One thing to watch out for, however, is that that option seems to be
silently ignored if the credentials also specify either a project or a
domain. Maybe generating a warning or even an error in that situation
would be a cleaner solution.



Re: [openstack-dev] Thank you TryStack!!

2018-04-30 Thread Jens Harbott
2018-03-26 22:51 GMT+00:00 Jimmy Mcarthur :
> Hi everyone,
>
> We recently made the tough decision, in conjunction with the dedicated
> volunteers that run TryStack, to end the service as of March 29, 2018.  For
> those of you that used it, thank you for being part of the TryStack
> community.
>
> The good news is that you can find more resources to try OpenStack at
> http://www.openstack.org/start, including the Passport Program, where you
> can test on any participating public cloud. If you are looking to test
> different tools or application stacks with OpenStack clouds, you should
> check out Open Lab.
>
> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the
> many other volunteers who have managed this valuable service for the last
> several years!  Your contribution to OpenStack was noticed and appreciated
> by many in the community.

It seems it would be great if https://trystack.openstack.org/ were
updated with this information; according to comments in #openstack,
users are still landing on that page and trying to get a stack there
in vain.



Re: [openstack-dev] Is there any way to recheck only one job?

2018-04-30 Thread Jens Harbott
2018-04-30 7:12 GMT+00:00 Slawomir Kaplonski :
> Hi,
>
> I wonder if there is any way to recheck only one type of job instead of 
> rechecking everything.
> For example sometimes I have to debug some random failure in specific job 
> type, like „neutron-fullstack” and I want to collect some additional data or 
> test something. So in such case I push some „Do not merge” patch and waits 
> for job result - but I really don’t care about e.g. pep8 or UT results so 
> would be good is I could run (recheck) only job which I want. That could safe 
> some resources for other jobs and speed up my tests a little as I could be 
> able to recheck only my job faster :)
>
> Is there any way that I can do it with gerrit and zuul currently? Or maybe it 
> could be consider as a new feature to add? What do You think about it?

This is intentionally not implemented, as it could be used to get
patches that lead to unstable behaviour merged too easily, hiding
possible issues.

As an alternative, you could include a change to .zuul.yaml in your
test patch, removing all jobs except the one you are interested in.
This would still run the jobs defined in project-config, but may be
good enough for your scenario.
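
As a rough sketch of that workflow (the branch name and commit message
are just examples):

git checkout -b dnm-fullstack-debug
$EDITOR .zuul.yaml   # drop every job except e.g. neutron-fullstack
git commit -am "DNM: only run neutron-fullstack for debugging"
git review           # jobs defined in project-config will still run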



Re: [openstack-dev] [designate] Meeting Times - change to office hours?

2018-04-23 Thread Jens Harbott
2018-04-23 13:11 GMT+02:00 Graham Hayes :
> Hi All,
>
> We moved our meeting time to 14:00UTC on Wednesdays, but attendance
> has been low, and it is also the middle of the night for one of our
> cores.
>
> I would like to suggest we have an office hours style meeting, with
> one in the UTC evening and one in the UTC morning.
>
> If this seems reasonable - when and what frequency should we do
> them? What times suit the current set of contributors?

My preferred range would be 06:00UTC-14:00UTC, Mon-Thu, though
extending a couple of hours in either direction might be possible for
me, too.

If we do alternating times, then with the current amount of work
happening we could maybe make each of them monthly, so that we end up
with a roughly bi-weekly schedule.

I also have a slight preference for continuing to use one of the
meeting channels as opposed to meeting in the designate channel, if
that is what "office hours style meeting" is meant to imply.



Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Jens Harbott
2018-04-16 7:46 GMT+00:00 Ian Wienand :
> On 04/15/2018 09:32 PM, Gary Kotton wrote:
>>
>> The gate is currently broken with
>>  https://launchpad.net/bugs/1763966.
>> https://review.openstack.org/#/c/561427/
>>  Can unblock us in the short term. Any other ideas?
>
>
> I'm thinking this is probably along the lines of the best idea.  I
> left a fairly long comment on this in [1], but the root issue here is
> that if a system package is created using distutils (rather than
> setuptools) we end up with this problem with pip10.
>
> That means the problem occurs when we a) try to overwrite a system
> package and b) that package has been created using distutils.  This
> means it is a small(er) subset of packages that cause this problem.
> Ergo, our best option might be to see if we can avoid such packages on
> a one-by-one basis, like here.
>
> In some cases, we could just delete the .egg-info file, which is
> approximately what was happening before anyway.
>
> In this particular case, the psutils package is used by glance & the
> peakmem tracker.  Under USE_PYTHON3, devstack's pip_install_gr only
> installs the python3 library; however the peakmem tracker always uses
> python2 -- leaing to missing library the failures in [2].  I have two
> thoughts; either install for both python2 & 3 always [3] or make
> peakmem tracker obey USE_PYTHON3 [4].  We can discuss the approach in
> the reviews.
>
> The other option is to move everything to virtualenv's, so we never
> conflict with a system package, as suggested by clarkb [5] or
> pabelanger [6].  These are more invasive changes, but also arguably
> more correct.
>
> Note diskimage-builder, and hence our image generation for some
> platforms, is also broken.  Working on that in [7].

The cap in devstack has been merged in master and stable/queens; other
merges are being held up by unstable volume checks, or so it seems.

There is also another issue caused by pip 10 now treating some former
warnings as errors. I've tried to list all "global" (Infra+QA) related
issues in [8], feel free to amend as needed.

[8] https://etherpad.openstack.org/p/pip10-mitigation



Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-06 Thread Jens Harbott
2018-04-05 19:26 GMT+00:00 Matthew Thode :
> On 18-04-05 20:11:04, Graham Hayes wrote:
>> On 05/04/18 16:47, Matthew Thode wrote:
>> > eventlet-0.22.1 has been out for a while now, we should try and use it.
>> > Going to be fun times.
>> >
>> > I have a review projects can depend upon if they wish to test.
>> > https://review.openstack.org/533021
>>
>> It looks like we may have an issue with oslo.service -
>> https://review.openstack.org/#/c/559144/ is failing gates.
>>
>> Also - what is the dance for this to get merged? It doesn't look like we
>> can merge this while oslo.service has the old requirement restrictions.
>>
>
> The dance is as follows.
>
> 0. provide review for projects to test new eventlet version
>projects using eventlet should make backwards compat code changes at
>this time.

But this step is currently failing. Keystone doesn't even start when
eventlet-0.22.1 is installed, because loading oslo.service fails, as
its package definition still requires the capped eventlet:
http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482

So it looks like we need to have an uncapped release of oslo.service
before we can proceed here.



Re: [openstack-dev] [sdk] git repo rename and storyboard migration

2018-03-22 Thread Jens Harbott
2018-03-21 21:44 GMT+01:00 Monty Taylor :
> Hey everybody!
>
> This upcoming Friday we're scheduled to complete the transition from
> python-openstacksdk to openstacksdk. This was started a while back (Tue Jun
> 16 12:05:38 2015 to be exact) by changing the name of what gets published to
> PyPI. Renaming the repo is to get those two back inline (and remove a hack
> in devstack to deal with them not being the same)
>
> Since this is a repo rename, it means that local git remotes will need to be
> updated. This can be done either via changing urls in .git/config - or by
> just re-cloning.
>
> Once that's done, we'll be in a position to migrate to storyboard. shade is
> already over there, which means we're currently split between storyboard and
> launchpad for the openstacksdk team repos.
>
> diablo_rojo has done a test migration and we're good to go there - so I'm
> thinking either Friday post-repo rename - or sometime early next week. Any
> thoughts or opinions?
>
> This will migrate bugs from launchpad for python-openstacksdk and
> os-client-config.

IMO this list is still much too long [0], and I expect that dealing
with the long backlog will become even more tedious if the bugs are
moved. Also, there are lots of issues that intersect between sdk and
python-openstackclient, so moving both at the same time would seem
reasonable.

[0] https://storyboard.openstack.org/#!/story/list?status=active&tags=blocking-storyboard-migration



[openstack-dev] [neutron][stable] New release for Pike is overdue

2018-03-15 Thread Jens Harbott
The last neutron release for Pike was made in November; a lot of bug
fixes have made it into the stable/pike branch since then. Can we
please get a fresh release for it soon?



Re: [openstack-dev] Poll: S Release Naming

2018-03-14 Thread Jens Harbott
2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński :
> Hi,
>
> Are You sure this link is good? I just tried it and I got info that "Already 
> voted" which isn't true in fact :)

Compared with previous polls, these should be personalized links that
need to be sent out to each voter individually, so I agree that this
looks like a mistake.

>> Message written by Paul Belanger on 14.03.2018, at 00:58:
>>
>> Greetings all,
>>
>> It is time again to cast your vote for the naming of the S Release. This time
>> is little different as we've decided to use a public polling option over per
>> user private URLs for voting. This means, everybody should proceed to use the
>> following URL to cast their vote:
>>
>>  
>> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3
>>
>> Because this is a public poll, results will currently be only viewable by 
>> myself
>> until the poll closes. Once closed, I'll post the URL making the results
>> viewable to everybody. This was done to avoid everybody seeing the results 
>> while
>> the public poll is running.
>>
>> The poll will officially end on 2018-03-21 23:59:59[1], and results will be
>> posted shortly after.
>>
>> [1] 
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
>> ---
>>
>> According to the Release Naming Process, this poll is to determine the
>> community preferences for the name of the R release of OpenStack. It is
>> possible that the top choice is not viable for legal reasons, so the second 
>> or
>> later community preference could wind up being the name.
>>
>> Release Name Criteria
>>
>> Each release name must start with the letter of the ISO basic Latin alphabet
>> following the initial letter of the previous release, starting with the
>> initial release of "Austin". After "Z", the next name should start with
>> "A" again.
>>
>> The name must be composed only of the 26 characters of the ISO basic Latin
>> alphabet. Names which can be transliterated into this character set are also
>> acceptable.
>>
>> The name must refer to the physical or human geography of the region
>> encompassing the location of the OpenStack design summit for the
>> corresponding release. The exact boundaries of the geographic region under
>> consideration must be declared before the opening of nominations, as part of
>> the initiation of the selection process.
>>
>> The name must be a single word with a maximum of 10 characters. Words that
>> describe the feature should not be included, so "Foo City" or "Foo Peak"
>> would both be eligible as "Foo".
>>
>> Names which do not meet these criteria but otherwise sound really cool
>> should be added to a separate section of the wiki page and the TC may make
>> an exception for one or more of them to be considered in the Condorcet poll.
>> The naming official is responsible for presenting the list of exceptional
>> names for consideration to the TC before the poll opens.
>>
>> Exact Geographic Region
>>
>> The Geographic Region from where names for the S release will come is Berlin
>>
>> Proposed Names
>>
>> Spree (a river that flows through the Saxony, Brandenburg and Berlin states 
>> of
>>   Germany)
>>
>> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
>>
>> Spandau (One of the twelve boroughs of Berlin)
>>
>> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
>>   abbreviated as 🍺)
>>
>> Steglitz (a locality in the South Western part of the city)
>>
>> Springer (Berlin is headquarters of Axel Springer publishing house)
>>
>> Staaken (a locality within the Spandau borough)
>>
>> Schoenholz (A zone in the Niederschönhausen district of Berlin)
>>
>> Shellhaus (A famous office building)
>>
>> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
>>
>> Schiller (A park in the Mitte borough)
>>
>> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
>>   (The adjective form, Saatwinkler is also a really cool bridge but
>>   that form is too long)
>>
>> Sonne (Sonnenallee is the name of a large street in Berlin crossing the 
>> former
>>   wall, also translates as "sun")
>>
>> Savigny (Common place in City-West)
>>
>> Soorstreet (Street in Berlin restrict Charlottenburg)
>>
>> Solar (Skybar in Berlin)
>>
>> See (Seestraße or "See Street" in Berlin)
>>
>> Thanks,
>> Paul
>>

[openstack-dev] Pros and Cons of face-to-face meetings

2018-03-08 Thread Jens Harbott
With the current PTG just finished and discussions happening about the
format of the next one [0], the advantages of these events seem to be
pretty clear to most, so let me use the occasion to remind everyone of
the disadvantages.

Every meeting that happens excludes those contributors who cannot
attend it, and with that it violates the fourth Open principle [1]:
having a community that is open to everyone. If you are wondering whom
this affects, here's a non-exclusive (sic) list of valid reasons not
to attend physical meetings:

- Health issues
- Privilege issues (like not getting visa or travel permits)
- Caretaking responsibilities (children, other family, animals, plants)
- Environmental concerns

So when you are considering whether it is worth the money and effort
to organise PTGs or similar events, I'd like you to also consider
those being excluded by such activities. It is not without reason that
IRC and email have been settled upon as our preferred means of
communication. I'm not saying that physical meetings should be dropped
altogether, but maybe more effort can be put into providing means of
remote participation, which might at least reduce some of these
effects.

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html
[1] https://governance.openstack.org/tc/reference/opens.html



Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-22 Thread Jens Harbott
2017-12-22 9:18 GMT+00:00 Tobias Urdin :
> Follow up on Alex[1] point. The db sync upgrade for neutron fails here[0].
>
> [0] http://paste.openstack.org/show/629628/

This seems to be a known issue, see [2]. Also I think that this is a
red herring caused by the database migration being run by the Ubuntu
postinst before there is a proper configuration. Where did you find
that log? You are not trying to run neutron with sqlite for real, are
you?

[2] https://bugs.launchpad.net/neutron/+bug/1697881

> On 12/22/2017 04:57 AM, Alex Schultz wrote:
>>> Just a note, the queens repo is not currently synced in the infra so
>>> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
>>> queens to the infra configuration to resolve this:
>>> https://review.openstack.org/529670
>>>
>> As a follow up, the mirrors have landed and two of the four scenarios
>> now pass.  Scenario001 is failing on ceilometer-api which was removed
>> so I have a patch[0] to remove it. Scenario004 is having issues with
>> neutron and the db looks to be very unhappy[1].

The later errors seem to be coming from some issues with neutron-l2gw,
which IIUC is no longer a stadium project, so maybe you should factor
that out of your default testing scenario.

>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/529787
>> [1] 
>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338



Re: [openstack-dev] Removing internet access from unit test gates

2017-11-24 Thread Jens Harbott
2017-11-21 15:04 GMT+00:00 Jeremy Stanley :
> On 2017-11-21 09:28:20 +0100 (+0100), Thomas Goirand wrote:
> [...]
>> The only way that I see going forward, is having internet access
>> removed from unit tests in the gate, or probably just the above
>> variables set.
> [...]
...
> Removing network access from the machines running these jobs won't
> work, of course, because our job scheduling and execution service
> needs to reach them over the Internet to start jobs, monitor
> progress and collect results.

I have tested a variant that would accommodate this: run the tests in
a new network namespace that has no network configuration at all.
There are still some issues with this:

- One needs sudo access in order to run something similar to "ip netns
exec ns1 tox ...". This could still be set up in a way such that the
tox user/environment itself does not need sudo.
- I found some unit tests that do need to talk to localhost, so one
still has to set up lo with 127.0.0.1/32.
- The most important issue preventing me from successfully running tox
currently, though, is that even if I prepare the venv beforehand with
"tox -epy27 --notest", the next tox run will still want to reinstall
the project itself, and most projects have something like

install_command =
pip install -U
-c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
{opts} {packages}

in their tox.ini, which will obviously fail without network
connectivity. Running something like

sudo ip netns exec ns1 su -c ".tox/py27/bin/stestr run" $USER

does work rather well though. Does anyone have an idea how to force
tox to just run the tests without doing any installation steps? Then I
guess one could come up with a small wrapper to handle the other
steps.
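
For reference, a rough sketch of the full sequence described above,
assuming the venv has already been prepared with "tox -epy27 --notest":

sudo ip netns add ns1
sudo ip netns exec ns1 ip link set lo up
# some unit tests need to talk to localhost, so make sure the address
# is present (ignore the error if it was configured automatically)
sudo ip netns exec ns1 ip addr add 127.0.0.1/32 dev lo || true
sudo ip netns exec ns1 su -c ".tox/py27/bin/stestr run" $USER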



Re: [openstack-dev] [puppet][qa][ubuntu][neutron] Xenial Neutron Timeouts

2017-11-14 Thread Jens Harbott
2017-11-14 16:29 GMT+00:00 Mohammed Naser :
> Hi everyone,
>
> Thank you so much for the work on this, I'm sure we can progress with
> this together.  I have noticed that this only occurs in master and
> never in the stable branches.  Also, it only occurs under Ubuntu (so
> maybe something related to mod_wsgi version?)
>
> Given that we don't have any "master" built packages for Ubuntu, we
> test against the latest release which is the pike release.
>
> https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6-L10
>
> I've noticed the issue is not as present in older branches but much
> more visible in master.

So does the issue not happen at all for stable/pike, or just less
often? Either way, that would seem to indicate not an issue with the
Ubuntu packages, but with the way they are deployed.

If you look at [1] you can see that for pike you set up the nova-api
wsgi with workers=1 and threads=$::os_workers, which was swapped in
master, see [2]. I'd suggest testing a revert of that change.

[1] https://github.com/openstack/puppet-nova/blob/stable/pike/manifests/wsgi/apache_api.pp
[2] https://github.com/openstack/puppet-nova/commit/df638e2526d2d957318519dfcfb9098cb7726095
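
For reference, one way to try the suggested revert locally (the hash
is the commit referenced in [2]):

git clone https://github.com/openstack/puppet-nova
cd puppet-nova
git revert df638e2526d2d957318519dfcfb9098cb7726095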



Re: [openstack-dev] [puppet][qa][ubuntu][neutron] Xenial Neutron Timeouts

2017-11-14 Thread Jens Harbott
2017-11-14 8:24 GMT+00:00 Tobias Urdin :
> Trying to trace this, tempest calls the POST /servers//action
> API endpoint for the nova compute api.
>
> https://github.com/openstack/tempest/blob/master/tempest/lib/services/compute/floating_ips_client.py#L82
>
> Nova then takes the requests and tries to do this floating ip association
> using the neutron server api.
>
> http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/nova/nova-api.txt.gz
>
> 2017-10-29 23:12:35.521 17800 ERROR nova.api.openstack.compute.floating_ips
> [req-7f810cc7-a498-4bf4-b27e-8fc80d652785 42526a28b1a14c629b83908b2d75c647
> 2493426e6a3c4253a60c0b7eb35cfe19 - default default] Unable to associate
> floating IP 172.24.5.17 to fixed IP 10.100.0.8 for instance
> d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to
> https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
> timed out: ConnectTimeout: Request to
> https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
> timed out
>
> Checking that timestamp in the neutron-server logs:
> http://paste.openstack.org/show/626240/
>
> We can see that during this timestamp right before at 23:12:30.377 and then
> after 23:12:35.611 everything seems to be doing fine.
> So there is some connectivity issues to the neutron API from where the Nova
> API is running causing a timeout.
>
> Now some more questions would be:
>
> * Why is the return code 400? Are we being fooled or is it actually a
> connection timeout.
> * Is the Neutron API stuck causing the failed connection? All talk are done
> over loopback so chance of a problem there is very low.
> * Any firewall catching this? Not likely since the agent processes requests
> right before and after.
>
> I can't find anything interesting in the overall other system logs that
> could explain that.
> Back to the logs!

I'm pretty certain that this is a deadlock between nova and neutron,
though I cannot put my finger on the exact spot yet. But looking at
the neutron log that you extracted you can see that neutron indeed
tries to give a successful answer to the fip request just after nova
has given up waiting for it (seems the timeout is 30s here):

2017-10-29 23:12:35.932 18958 INFO neutron.wsgi
[req-e737b7dd-ed9c-46a7-911b-eb77efe11aa8
42526a28b1a14c629b83908b2d75c647 2493426e6a3c4253a60c0b7eb35cfe19 -
default default] 127.0.0.1 "PUT
/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c HTTP/1.1"
status: 200  len: 746 time: 30.4427412

Also, looking at
http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/apache_config/10-nova_api_wsgi.conf.txt.gz
it seems that nova-api is started with two processes and one thread;
I am not sure whether that means two processes with one thread each or
only one thread in total, but either way nova-api might be getting
stuck there.



Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 5:41 GMT+00:00 Ian Wienand :
> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>
>> I'm not aware of issues other than these at this time
>
>
> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
> also failing for unknown reasons.  Any debugging would be helpful,
> thanks.

It seems there are multiple issues with the multinode jobs:

a) post_failures due to an error in log collection, sample fix at
https://review.openstack.org/508473
b) jobs are being run as two identical tasks on primary and subnodes,
triggering https://bugs.launchpad.net/zun/+bug/1720240

Other issues:
- openstack-tox-py27 is being run on trusty nodes instead of xenial
- unit tests are missing in at least neutron gate runs
- some patches are not getting any results from zuul



Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
>>
>> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>>
>>> I'm not aware of issues other than these at this time
>>
>>
>> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
>> also failing for unknown reasons.  Any debugging would be helpful,
>> thanks.
>
>
> We also have our legacy-telemetry-dsvm-integration-ceilometer broken:
>
> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

That looks similar to what Ian fixed in [1]; it seems like your job
needs a corresponding patch.

[1] https://review.openstack.org/#/c/508396



Re: [openstack-dev] [neutron-dynamic-routing] 4-byte AS Numbers Support

2017-08-30 Thread Jens Harbott
2017-08-30 9:04 GMT+02:00 Hirofumi Ichihara :
> Hi team,
>
> Currently neutron-dynamic-routing supports 2 byte AS numbers only[1].
> Does team have a plan supporting 4 byte AS numbers?
>
> [1]:
> https://github.com/openstack/neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/common/constants.py#L27

There is no concrete plan, but there is at least an RFE bug [1]. I
lost track of it a bit, but I'm happy to help in case someone wants to
pursue it.

[1] https://bugs.launchpad.net/neutron/+bug/1573092
