[openstack-dev] [rally][release] How rally release version

2017-02-12 Thread Jeffrey Zhang
Hey guys,

I see that Rally has already released its 0.8.1 tag [0][1]. But
I found nothing in the openstack/releases project [2]. How does
Rally create its tags?

[0] http://tarballs.openstack.org/rally/
[1] https://github.com/openstack/rally/releases
[2]
https://github.com/openstack/releases/blob/master/deliverables/_independent/rally.yaml#L45
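
In case it helps anyone checking the same thing, the tag metadata can be
inspected directly (a minimal sketch):

    # Clone Rally and show who created the 0.8.1 tag and when
    git clone https://git.openstack.org/openstack/rally
    cd rally
    git show --no-patch 0.8.1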

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] Failed to upgrade monasca source code in devstack

2017-02-12 Thread An Qi YL Lu
Hi all
I failed to do the unstack and stack operations with devstack when I wanted to pull the latest Monasca code and upgrade the related projects.
I took following steps:
1. go to /opt/stack
2. go to the monasca project folder that I want to upgrade
3. run git pull
4. run unstack.sh
5. run stack.sh
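
(The same sequence as a shell sketch, using monasca-api as an example of the
project folder I upgraded; other repos are handled the same way:)

    cd /opt/stack/monasca-api    # the project folder I want to upgrade
    git pull
    cd ~/devstack
    ./unstack.sh
    ./stack.sh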
The full error trace is shown as below:
ubuntu@monasca-devstack:~/devstack$ bash stack.sh
+ unset GREP_OPTIONS
+ unset LANG
+ unset LANGUAGE
+ LC_ALL=C
+ export LC_ALL
+ umask 022
+ PATH=/home/ubuntu/.nvm/versions/node/v4.0.0/bin:/opt/monasca/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/sbin:/usr/sbin:/sbin
+++ dirname stack.sh
++ cd .
++ pwd
+ TOP_DIR=/home/ubuntu/devstack
+ NOUNSET=
+ [[ -n '' ]]
++ date +%s
+ DEVSTACK_START_TIME=1486958013
+ [[ -r /home/ubuntu/devstack/.stackenv ]]
+ rm /home/ubuntu/devstack/.stackenv
+ FILES=/home/ubuntu/devstack/files
+ '[' '!' -d /home/ubuntu/devstack/files ']'
+ '[' '!' -d /home/ubuntu/devstack/inc ']'
+ '[' '!' -d /home/ubuntu/devstack/lib ']'
+ [[ '' == \y ]]
+ [[ 1000 -eq 0 ]]
+ [[ -n /opt/monasca ]]
+ set +o xtrace
You appear to be running under a python virtualenv.
DevStack does not support this, as we may break the
virtualenv you are currently in by modifying
external system-level components the virtualenv relies on.
We recommend you use a separate virtual-machine if
you are worried about DevStack taking over your system.

I checked the $VIRTUAL_ENV variable, which pointed to /opt/monasca. So I simply ran rm -rf /opt/monasca*, thinking that this virtual environment workspace led to the failure. But after that, when I re-ran stack.sh, I was blocked by this error:
/home/ubuntu/devstack/functions-common: line 511: cd: /opt/stack/monasca-statsd: No such file or directory

Could you please shed some light on how to correctly upgrade a Monasca project in a devstack environment? Cheers :)
Best,
Anqi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca] Ideas to work on

2017-02-12 Thread An Qi YL Lu
Hi Roland
 
I am not sure whether you received my last email because I got a delivery failure notification. I am sending this again to ensure that you can see this email.
 
Best,
Anqi
 
- Original message -
From: An Qi YL Lu/China/IBM
To: roland.hochm...@hpe.com
Cc: openstack-dev@lists.openstack.org
Subject: Re: [monasca] Ideas to work on
Date: Fri, Feb 10, 2017 5:14 PM
Hi Roland
 
Thanks for your suggestions. The list you made is useful and helps me identify areas that I can work on. I spent some time investigating the blueprints that you introduced.
 
I am most interested in data retention and metric deletion.
 
Data retention: I had a quick look at the data retention policies of InfluxDB. It apparently supports different retention policies for different series. To my understanding, the whiteboard in this bp has a straightforward design for this feature. I didn't quite get what the complex point is. Could you please shed some light on where the complicated part lies?
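
For reference, the kind of InfluxDB 1.x commands I have in mind look like this
(the database and policy names are made up for illustration):

    # Create a 30-day retention policy on a hypothetical "mon" database
    influx -execute 'CREATE RETENTION POLICY "thirty_days" ON "mon" DURATION 30d REPLICATION 1'
    # List the retention policies that now exist on that database
    influx -execute 'SHOW RETENTION POLICIES ON "mon"'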
 
Metric deletion: InfluxDB 1.1 (or any version after 0.9) supports deleting series, though you cannot specify a time interval for this operation. It simply deletes all points of a series from a database. I think one of the tricky parts is deciding which data that depends on a metric should be deleted along with it, such as measurements and alarms. Please point it out if my understanding is not precise.
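
A minimal sketch of what I mean, again with a made-up database and metric name:

    # Drop every point of the matching series; no time range can be given
    influx -database mon -execute "DROP SERIES FROM \"cpu.idle_perc\" WHERE \"hostname\" = 'devstack'"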
 
I would like to look at log publishing as well, but unfortunately I did not find the monasca-log-api doc, which is supposed to be at https://github.com/openstack/monasca-log-api/tree/master/docs . I don't know how this log API works now. Please share a copy of the doc with me if you have one.
 
Best,
Anqi
 
- Original message -
From: "Hochmuth, Roland M"
To: OpenStack List, An Qi YL Lu/China/IBM@IBMCN
Cc:
Subject: [monasca] Ideas to work on
Date: Fri, Feb 10, 2017 11:13 AM
Hi Anqi, You had expressed a strong interest in working on Monasca the other day in our Weekly Monasca Team Meeting. I owed you a response, and the team asked me to keep them in the loop as well. Here is a list of work that I feel is interesting, neither trivial nor extremely complex (just right, hopefully), and that doesn't overlap with areas other developers are already working on, which would be difficult to coordinate in a limited time.
- RBAC: Currently, the Python API doesn't fully support Role Based Access Controls (RBAC). We've had discussions on this topic, but oddly, there isn't a blueprint written for it. Still, it would be very useful to implement RBAC in the APIs, similar to what other OpenStack projects support.
- Data retention: https://blueprints.launchpad.net/monasca/+spec/per-project-data-retention. We haven't completely reviewed or approved this blueprint, but it would be very useful to add support for per-project or per-metric data retention. This would involve understanding how data retention works in InfluxDB. We would also want some design discussion prior to proceeding, as it is probably more complex than described in the bp.
- Publish logs and/or metrics to topics selectively: https://blueprints.launchpad.net/monasca/+spec/publish-logs-to-topic-selectively. In the context of metrics, this would be useful for identifying specific metrics as metering as opposed to monitoring metrics, and publishing them to different Kafka topics as a result. The way this would be used is that the downstream Monasca Transform Engine would only receive the metrics that will actually be transformed and therefore wouldn't need to filter them, which would improve performance dramatically. For logging, it would help distinguish operational logs from audit logs. It could also be used to identify high-priority metrics so that they could be published to a high-priority metrics topic in Kafka. There are several more contexts in which this is useful.
- Delete metrics: https://blueprints.launchpad.net/monasca/+spec/delete-metrics. Basically, adding the ability to delete metrics using the Monasca API. Typically, time series databases are not very good at deletes. We haven't tried to do this with InfluxDB, and while it might seem an easy task, it is a lot more involved than issuing the obvious and straightforward DELETE command.
I hope this helps. Let me know if you want to discuss further or want more ideas.
 
Regards,
--Roland
 
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-12 Thread zhi
Hi, we are using L3 HA in our production environment now. Router instances
communicate with each other via the VRRP protocol. In my opinion, although VRRP
is a control-plane thing, the real VRRP traffic goes over a data-plane NIC, so
router namespaces sometimes cannot talk to each other when the data
plane is busy. If we used etcd (or something similar), would every router
instance register one "id" in etcd?


Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 11

2017-02-12 Thread Alex Xu
2017-02-11 0:10 GMT+08:00 Ed Leafe :

> Your regular reporter, Chris Dent, is on PTO today, so I'm filling in.
> I'll be brief.
>
> After the flurry of activity to get as much in before the Ocata RCs, this
> past week was relatively calm. Work continued on the patch to have Ironic
> resources tracked as, well, individual entities instead of pseudo-VMs, and
> with a little more clarity, should be ready to merge soon.
>
> https://review.openstack.org/#/c/404472/
>
> The patch series to add the concept of nested resource providers is moving
> forward a bit more slowly. Nested RPs allow for modeling of complex
> resources, such as a compute node that contains PCI devices, each of which
> has multiple physical and virtual functions. The series starts here:
>
> https://review.openstack.org/#/c/415920/
>
> We largely ignored traits, which represent the qualitative part of a
> resource, and focused on the quantitative side during Ocata. With Pike
> development now open, we look to begin discussing and developing the traits
> work in more detail. The spec for traits is here:
>
> https://review.openstack.org/#/c/345138/
>
> …and the series of POC code starts with:
>
> https://review.openstack.org/#/c/377381/9


Ed, thanks for the summary! I will add some comments about the PoC. I hope they
can help people understand what it is about more easily:

The first patch https://review.openstack.org/#/c/377381/9 is about the
os_traits library, which is a clone of
https://github.com/jaypipes/os-traits created by Jay. I just put it in the
nova tree to implement the PoC.

The Traits API starts from the second patch
https://review.openstack.org/#/c/376198/ and the last one is
https://review.openstack.org/#/c/376202. It is all about the data model and
the API implementation. The newly added API endpoints are '/traits' and
'/resource_providers/{rp_uuid}/traits'.

The client side of the Traits API is
https://review.openstack.org/#/c/417288. It makes the ResourceTracker report
the CPU features as traits to the placement service via the Traits API.

The last one, https://review.openstack.org/#/c/429364, is for getting a
filtered list of resource providers. The patch is based on Sylvain's patch
https://review.openstack.org/#/c/392569. It adds two new filters,
'required_traits' and 'preferred_traits'. Then you can get a list of RPs which
match the requested resources and traits with the request 'GET
/resource_providers?resources=...&required_traits=...&preferred_traits=...'.
(The API layer patch will be submitted soon.)
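
As a rough sketch of the request shape (the parameter names follow the PoC
patch and may still change; the values and endpoint URL are placeholders):

    # Hypothetical query combining resource amounts with a required trait
    curl -s -H "X-Auth-Token: $TOKEN" \
      "http://placement-api/resource_providers?resources=VCPU:1,MEMORY_MB:512&required_traits=HW_CPU_X86_AVX2"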

Thanks
Alex


> We've also begun planning for the discussions at the PTG around what our
> goals for Pike will be. I'm sure that there will be a summary of those
> discussions in one of these emails after the PTG.
>
>
> -- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-12 Thread Clint Byrum
Excerpts from Brian Rosmaita's message of 2017-02-10 12:39:11 -0500:
> I want to give all interested parties a heads up that I have scheduled a
> session in the Macon room from 9:30-10:30 a.m. on Thursday morning
> (February 23).
> 
> Here's what we need to discuss.  This is from my perspective as Glance
> PTL, so it's going to be Glance-centric.  This is a quick narrative
> description; please go to the session etherpad [0] to turn this into a
> specific set of discussion items.
> 
> Glance is the OpenStack image cataloging and delivery service.  A few
> cycles ago (Juno?), someone noticed that maybe Glance could be
> generalized so that instead of storing image metadata and image data,
> Glance could store arbitrary digital "stuff" along with metadata
> describing the "stuff".  Some people (like me) thought that this was an
> obvious direction for Glance to take, but others (maybe wiser, cooler
> heads) thought that Glance needed to focus on image cataloging and
> delivery and make sure it did a good job at that.  Anyway, the Glance
> mission statement was changed to include artifacts, but the Glance
> community never embraced them 100%, and in Newton, Glare split off as
> its own project (which made sense to me, there was too much unclarity in
> Glance about how Glare fit in, and we were holding back development, and
> besides we needed to focus on images), and the Glance mission statement
> was re-amended specifically to exclude artifacts and focus on images and
> metadata definitions.
> 
> OK, so the current situation is:
> - Glance "does" image cataloging and delivery and metadefs, and that's
> all it does.
> - Glare is an artifacts service (cataloging and delivery) that can also
> handle images.
> 
> You can see that there's quite a bit of overlap.  I gave you the history
> earlier because we did try to work as a single project, but it did not
> work out.
> 
> So, now we are in 2017.  The OpenStack development situation has been
> fragile since the second half of 2016, with several big OpenStack
> sponsors pulling way back on the amount of development resources being
> contributed to the community.  This has left Glare in the position where
> it cannot qualify as a Big Tent project, even though there is interest
> in artifacts.
> 
> Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
> of the Glance project again.  I will be completely honest, I am inclined
> to say "no".  I have enough problems just getting Glance stuff done (for
> example, image import missed Ocata).  But in addition to doing what's
> right for Glance, I want to do what's right for OpenStack.  And I look
> at the overlap and think ...
> 
> Well, what I think is that I don't want to go through the Juno-Newton
> cycles of argument again.  And we have to do what is right for our users.
> 
> The point of this session is to discuss:
> - What does the Glance community see as the future of Glance?
> - What does the wider OpenStack community (TC) see as the future of Glance?
> - Maybe, more importantly, what does the wider community see as the
> obligations of Glance?
> - Does Glare fit into this vision?
> - What kind of community support is there for Glare?
> 
> My reading of Glance history is that while some people were on board
> with artifacts as the future of Glance, there was not a sufficient
> critical mass of the Glance community that endorsed this direction and
> that's why things unravelled in Newton.  I don't want to see that happen
> again.  Further, I don't think the Glance community got the word out to
> the broader OpenStack community about the artifacts project, and we got
> a lot of pushback along the lines of "WTF? Glance needs to do images"
> variety.  And probably rightly so -- Glance needs to do images.  My
> point is that I don't want Glance to take Glare back unless it fits in
> with what the community sees as the appropriate direction for Glance.
> And I certainly don't want to take it back if the entire Glance
> community is not on board.
> 
> Anyway, that's what we're going to discuss.  I've booked one of the
> fishbowl rooms so we can get input from people beyond just the Glance
> and Glare projects.
> 

Does anybody else feel like this is deja vu of Neutron's inception?

While I understand sometimes there are just incompatibilities in groups,
I think we should probably try again. Unfortunately, it sounds like
Glare already did the Neutron thing of starting from scratch and sort
of overlapping in functionality, instead of the Cinder thing where you
forklift the code from one project into a new one and pay close attention
to the peaceful transition of users. But we've been here before and we
can do better. :)

Given that, I'm hoping some folks who helped with the Neutron transition
can attend this session and share what Neutron learned so that if Glare
does take over for images, we don't end up in a multi-cycle quagmire
where Glance is frozen and Glare is moving too fast for Glance devs to

Re: [openstack-dev] [tripleo] TripleO Ocata release blockers

2017-02-12 Thread Emilien Macchi
Quick updates:

- All FFE patches have been merged except
https://review.openstack.org/#/c/330050/ - so we're green on this
side.
- Upgrade team is still working on automation to upgrade from newton to ocata.
- ovb-updates (CI job with IPv6) doesn't pass -
https://bugs.launchpad.net/tripleo/+bug/1663187 - See
https://review.openstack.org/#/c/432761/ for a potential fix.
- Our CI is testing OpenStack from trunk; we have regular promotions.

At this stage, I believe we can release TripleO Ocata RC1 by Thursday the
17th, provided the Congress work lands, bug 1663187 is fixed, and no new
blockers show up in CI.

I think the upgrade folks will still have some patches after Thursday, but
I don't think it's a big deal; we'll just backport them to
stable/ocata.

Please let us know asap if you see more blockers for Ocata RC1.

On Mon, Feb 6, 2017 at 9:19 AM, Emilien Macchi  wrote:
> I found it useful to share the reasons why we didn't release TripleO RC1
> last week and what the remaining blockers are.
>
> - Nova from trunk doesn't work for TripleO. We have to pin Nova to
> ocata-3 if we want to continue the promotions in our CI.
>
> Some work is in progress to solve this problem, you can follow it here:
> https://review.openstack.org/#/q/status:open+topic:bug/1661360
>
> TL;DR: we're disabling Nova API in WSGI with Apache when deploying
> TripleO & Puppet OpenStack CI.
>
> - Fix updating images in promotion pipeline:
> https://review.openstack.org/#/c/429713
>
> - Upgrades from Newton to Ocata. The list of bugs / work in progress
> is still substantial; it doesn't make sense to release before we have
> them fixed. Composable upgrades are our essential blueprint for Ocata.
>
>
> I proposed https://review.openstack.org/#/c/428416/ to prepare TripleO
> RC1 but I won't update it before our blockers are solved.
>
> Any help, feedback or question is very welcome.
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-12 Thread Tim Bell

On 12 Feb 2017, at 12:13, Boris Bobrov wrote:

I would like to talk about it too.

On 02/10/2017 11:56 PM, Matt Riedemann wrote:
Operators want hierarchical quotas [1]. Nova doesn't have them yet and
we've been hesitant to invest scarce developer resources in them since
we've heard that the implementation for hierarchical quotas in Cinder
has some issues. But it's unclear to some (at least me) what those
issues are.

I don't know what the actual issue is, but from the keystone POV
the issue is that it basically replicates the project tree that is stored
in keystone. On top of the usual replication issues, there is another one --
it requires too many permissions. Basically, it requires the service user
to be cloud admin.

I have not closely followed the Cinder implementation since the CERN and BARC
Mumbai focus has been more around Nova.

The various feedback I have had was regarding how to handle overcommit in the
Cinder proposal. A significant share of the operator community would like to
allow:

- No overcommit for the ‘top level’ project (i.e. you can’t use more than you
are allocated)
- Sub-project overcommit is OK (i.e. promising your sub-projects more is OK, and
the sum of the commitments to sub-projects may exceed the project quota, but an
error should be raised if that overcommit is actually used)
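
As a small illustration of that rule (the numbers are made up):

    # Parent project quota and the quotas promised to its two sub-projects
    PARENT_QUOTA=100
    SUB_QUOTAS=(60 60)     # over-promising (60+60 > 100) is acceptable
    SUB_USAGE=(45 50)      # but combined real usage must stay within the parent quota
    TOTAL_USAGE=$(( SUB_USAGE[0] + SUB_USAGE[1] ))
    (( TOTAL_USAGE <= PARENT_QUOTA )) && echo "allowed" || echo "rejected: parent quota exceeded"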



Has anyone already planned on talking about hierarchical quotas at the
PTG, like the architecture work group?

I know there was a bunch of razzle dazzle before the Austin summit about
quotas, but I have no idea what any of that led to. Is there still a
group working on that and can provide some guidance here?

In my opinion, projects should not re-implement quotas every time.
I would like to have a common library for enforcing quotas (usages)
and a service for storing quotas (limits). We should also think of a
way to transfer the necessary project subtree from keystone to the quota
enforcer.

We could store quota limits in keystone and distribute them in the token
body, for example. Here is a POC that we did some time ago --
https://review.openstack.org/#/c/403588/ and
https://review.openstack.org/#/c/391072/
But it still has the issue with permissions.


There has been an extended discussion since the Boson proposal at the Hong Kong 
summit on how to handle quotas, where a full quota service was proposed.

A number of ideas have emerged since then

- Quota limits stored in Keystone with the project data
- An oslo library to support checking that a resource request would be OK

One Forum session at the summit is due to be on this topic.

Some of the academic use cases are described in 
https://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html
 but commercial reseller models are valid here where

- company A has valuable resources to re-sell (e.g. flood risk and associated 
models)
- company B signs an agreement with Company A (e.g. an insurance company wants 
to use flood risk data as factor in their cost models)

The natural way of delivering this is that ‘A’ gives a pricing model based on 
‘B’’s consumption of compute and storage resources.

Tim



[1]
http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-12 Thread Tim Bell
Although there has not been much discussion on this point on the mailing list, 
I feel we do need to find the right level of granularity for ‘mainstream’ 
projects:

For CERN, we look for the following before offering a project to our end users:

- Distro packaging (in our case RPMs through RDO)
- Puppet modules
- Openstack client support (which brings Kerberos/X.509 authentication)
- Install, admin and user docs
- Project diversity for long term sustainability

We have many use cases of ‘resellers’ where one project provides a deliverable 
for others to consume; some degree of community image sharing is arriving, and 
these are the same problems to be faced for artefacts and application catalogues 
(such as Heat and Magnum).

For me, which project provides this for images and/or artefacts is a choice for 
the technical community, but consistent semantics would be greatly appreciated. 
Discussions with our end users such as “I need a Heat template for X, but this 
needs community image Y, and the visibility rules mean that one needs to be 
shared in advance while the other I need to subscribe to” are difficult and 
discourage uptake.

A cloud user should be able to click on community offered ‘R-as-a-Service’ in 
the application catalog GUI, and that’s all.

Tim

On 10.02.17, 18:39, "Brian Rosmaita"  wrote:

I want to give all interested parties a heads up that I have scheduled a
session in the Macon room from 9:30-10:30 a.m. on Thursday morning
(February 23).

Here's what we need to discuss.  This is from my perspective as Glance
PTL, so it's going to be Glance-centric.  This is a quick narrative
description; please go to the session etherpad [0] to turn this into a
specific set of discussion items.

Glance is the OpenStack image cataloging and delivery service.  A few
cycles ago (Juno?), someone noticed that maybe Glance could be
generalized so that instead of storing image metadata and image data,
Glance could store arbitrary digital "stuff" along with metadata
describing the "stuff".  Some people (like me) thought that this was an
obvious direction for Glance to take, but others (maybe wiser, cooler
heads) thought that Glance needed to focus on image cataloging and
delivery and make sure it did a good job at that.  Anyway, the Glance
mission statement was changed to include artifacts, but the Glance
community never embraced them 100%, and in Newton, Glare split off as
its own project (which made sense to me, there was too much unclarity in
Glance about how Glare fit in, and we were holding back development, and
besides we needed to focus on images), and the Glance mission statement
was re-amended specifically to exclude artifacts and focus on images and
metadata definitions.

OK, so the current situation is:
- Glance "does" image cataloging and delivery and metadefs, and that's
all it does.
- Glare is an artifacts service (cataloging and delivery) that can also
handle images.

You can see that there's quite a bit of overlap.  I gave you the history
earlier because we did try to work as a single project, but it did not
work out.

So, now we are in 2017.  The OpenStack development situation has been
fragile since the second half of 2016, with several big OpenStack
sponsors pulling way back on the amount of development resources being
contributed to the community.  This has left Glare in the position where
it cannot qualify as a Big Tent project, even though there is interest
in artifacts.

Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
of the Glance project again.  I will be completely honest, I am inclined
to say "no".  I have enough problems just getting Glance stuff done (for
example, image import missed Ocata).  But in addition to doing what's
right for Glance, I want to do what's right for OpenStack.  And I look
at the overlap and think ...

Well, what I think is that I don't want to go through the Juno-Newton
cycles of argument again.  And we have to do what is right for our users.

The point of this session is to discuss:
- What does the Glance community see as the future of Glance?
- What does the wider OpenStack community (TC) see as the future of Glance?
- Maybe, more importantly, what does the wider community see as the
obligations of Glance?
- Does Glare fit into this vision?
- What kind of community support is there for Glare?

My reading of Glance history is that while some people were on board
with artifacts as the future of Glance, there was not a sufficient
critical mass of the Glance community that endorsed this direction and
that's why things unravelled in Newton.  I don't want to see that happen
again.  Further, I don't think the Glance community got the word out to
 

[openstack-dev] [tripleo] deploying Nova Placement API in IPv6

2017-02-12 Thread Emilien Macchi
I have been investigating this bug over the week-end:
https://bugs.launchpad.net/tripleo/+bug/1663187

The ovb-updates CI job has been red since we've unpinned Nova in TripleO CI.
Reminder: this job deploys TripleO with IPv6 activated on public endpoints.

I've been able to reproduce the issue locally and proposed a patch
upstream that changes the Placement public endpoint to use the internal
network on IPv4.
I haven't yet found why the compute node can't reach the public
endpoint (on IPv6) - but I hope someone from our team will have an
answer before I continue to dig.
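
For anyone trying to reproduce, a quick connectivity check from the compute
node could look like this (the address is a documentation placeholder;
Placement listens on port 8778 by default):

    # -g stops curl from globbing the brackets around the literal IPv6 address
    curl -g -v "http://[2001:db8::10]:8778/"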

This is the patch:
https://review.openstack.org/#/c/432761/

If you think that's a valid fix, or you think there is another way to
fix this thing, please let me know in the review.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder to use recheck rather then rebase to trigger a gate re-run: was Re: [kolla] unblocking the gate

2017-02-12 Thread Steven Dake (stdake)
Hey folks,

I’d like to remind kolla contributors of this email exchange with the infra
team from Feb 29th, 2016.

I originally requested kolla contributors to rebase all of their patches to 
unblock the gate.  This was an incorrect request.  The infrastructure team 
quickly corrected my misunderstanding of how the gate worked.  Both Sam Yaple 
and I verified the infrastructure team was correct, and I’ve been having to get 
other people in the kolla team to verify it ever since, since most folks think 
the gate jobs run against the patch as it was last committed.  Instead, as 
Clark and Andreas point out, they run against master in every case.
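
In practice that means (a quick sketch):

    # To re-run CI, leave a comment containing just "recheck" on the change in Gerrit.
    # A rebase and a new patchset are only needed when there is an actual merge conflict:
    git fetch origin
    git rebase origin/master
    git review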

Please re-read this thread for details on how this works.

Regards
-steve

-Original Message-
From: Clark Boylan 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, February 29, 2016 at 7:16 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [kolla] unblocking the gate

On Mon, Feb 29, 2016, at 01:38 PM, Sam Yaple wrote:
> On Mon, Feb 29, 2016 at 6:42 PM, Clark Boylan 
> wrote:
> 
> > On Mon, Feb 29, 2016, at 09:09 AM, Steven Dake (stdake) wrote:
> > >
> > >
> > > On 2/29/16, 12:26 AM, "Andreas Jaeger"  wrote:
> > > >This is not needed, the CI system always rebases if you run tests. To
> > > >get current tests, a simple "recheck" is enough.
> > > >
> > > >Also, we test in the gate before merging - again after rebasing to head.
> > > >That should take care of not merging anything broken. Running recheck
> > > >after a larger change will ensure that you have recent results.
> > >
> > > Andreas,
> > >
> > > Thanks for the recheck information.  I thought the gate ran against what
> > > it was submitted with as head.  We don't have any gate jobs at present
> > > (or many); they are mostly check jobs, so it's pre-merge checking that we
> > > need folks to do.
> > >
> > To clarify the check pipeline changes are merged into their target
> > branches before testing and the gate pipeline changes are merged into
> > the forward looking state of the world that Zuul generates as part of
> > gate testing. This means you do not need to rebase and create new
> > patchsets to get updated test results, just recheck.
> >
> >
> Unfortunately we do not have voting gates in Kolla that do building and
> deploying. This will change soon, but that would be where the confusion
> in
> the thread is coming from I believe. Our only indication is a "Green"
> check
> job at this time. This is why Steven is asking for a rebase to make the
> check gates green before we merge new patches.

I understand the situation with check vs gate tests, but my previous
statement (below) still holds. All you need to do is recheck. You only
need a rebase if you have run into a merge conflict.

> 
> You should only need to rebase and create a new patchset if a merge
> > conflict exists.

Clark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Support for non-x86_64 architectures

2017-02-12 Thread Steven Dake (stdake)
Marcin,

I think this work is fantastic!

The Ocata feature deadline has passed.  Reference:
https://releases.openstack.org/ocata/schedule.html

The feature freeze deadline was January 25th, 2017, which means this work will 
need to go into Pike instead of Ocata.  One of the reasons you haven’t seen much 
activity on the reviews is that the core reviewer team is busy finalizing Ocata.  
Once Jeffrey Zhang (our release liaison) branches ocata, this patch will be 
ready for review.  I would recommend filing a blueprint 
here:
https://blueprints.launchpad.net/kolla/+addspec

Then point out the blueprint in IRC and ask for a blueprint triage; any core 
reviewer can do that.  Blueprints are how some projects, but not all, track 
work in OpenStack.

The branch of Ocata should happen early this week.

Regards
-steve


-Original Message-
From: Marcin Juszkiewicz 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, February 10, 2017 at 11:21 AM
To: OpenStack Development Mailing List 
Cc: Gema Gomez 
Subject: [openstack-dev] [kolla] Support for non-x86_64 architectures

Hello

At Linaro I work on running OpenStack on the AArch64 (arm64, 64-bit Arm,
ARMv8-A) architecture. We built Cinder, Glance, Heat, Horizon, Keystone,
Neutron and Nova for our own use and deployed them several times.

But for the next release we decided to move to containers for delivering
components. This got me working on Kolla to get it running on our machines.

The problem is that Kolla targets only the x86-64 architecture. I was not
surprised when I saw that and do not blame anyone for it. That's quite
common behaviour nowadays, when there is no Alpha or Itanium on the market.

So I dug a bit and found patch [1], which added ppc64le architecture
support. I fetched it, reviewed it, and decided that it could be used as a base
for my work.

1. https://review.openstack.org/#/c/423239/6

I cut out all the stuff about repositories and other ppc64le/Ubuntu-specific
issues and then edited it to take care of aarch64 as well. Then I posted
it to Gerrit for review [2].

2. https://review.openstack.org/#/c/430940

Jenkins looks happy about it, I got some comments from a few developers
(both in review and on IRC) and handled them the proper way.

I tested the patch with "aarch64/ubuntu" and "aarch64/debian" images used as
a base. My targets are CentOS (waiting for an official image) and Debian.
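
For anyone who wants to try it, the builds look roughly like this on an aarch64
host (a sketch, assuming kolla-build's --base and --base-image options):

    # Build Debian-based images on top of an aarch64 base image
    kolla-build --base debian --base-image aarch64/debian
    # Same idea for Ubuntu
    kolla-build --base ubuntu --base-image aarch64/ubuntu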

Current state:

19:18 hrw@pinkiepie-centos:kolla$ docker images|grep kolla/ubuntu|wc -l
29
19:18 hrw@pinkiepie-centos:kolla$ docker images|grep kolla/debian|wc -l
124

During the weekend I will run more builds to check all possible images.

If someone has some spare time then I would love to see my patch
reviewed. There is one change affecting x86-64: the Debian/Ubuntu
repositories are split into base + architecture ones to allow for
architecture-specific repo configuration.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-12 Thread Boris Bobrov
I would like to talk about it too.

On 02/10/2017 11:56 PM, Matt Riedemann wrote:
> Operators want hierarchical quotas [1]. Nova doesn't have them yet and
> we've been hesitant to invest scarce developer resources in them since
> we've heard that the implementation for hierarchical quotas in Cinder
> has some issues. But it's unclear to some (at least me) what those
> issues are.

I don't know what the actual issue is, but from the keystone POV
the issue is that it basically replicates the project tree that is stored
in keystone. On top of the usual replication issues, there is another one --
it requires too many permissions. Basically, it requires the service user
to be cloud admin.

> Has anyone already planned on talking about hierarchical quotas at the
> PTG, like the architecture work group?
> 
> I know there was a bunch of razzle dazzle before the Austin summit about
> quotas, but I have no idea what any of that led to. Is there still a
> group working on that and can provide some guidance here?

In my opinion, projects should not re-implement quotas every time.
I would like to have a common library for enforcing quotas (usages)
and a service for storing quotas (limits). We should also think of a
way to transfer the necessary project subtree from keystone to the quota
enforcer.

We could store quota limits in keystone and distribute them in the token
body, for example. Here is a POC that we did some time ago --
https://review.openstack.org/#/c/403588/ and
https://review.openstack.org/#/c/391072/
But it still has the issue with permissions.

> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev