[Yahoo-eng-team] [Bug 1253497] Re: Replace uuidutils.generate_uuid() with str(uuid.uuid4())

2017-07-12 Thread Hiroaki Kobayashi
** Changed in: blazar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253497

Title:
  Replace uuidutils.generate_uuid() with str(uuid.uuid4())

Status in Barbican:
  Fix Released
Status in BillingStack:
  Invalid
Status in Blazar:
  Fix Released
Status in Cinder:
  Invalid
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Karbor:
  New
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in oslo-incubator:
  Fix Released
Status in Sahara:
  Fix Released
Status in staccato:
  Invalid
Status in taskflow:
  Invalid
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2013-November/018980.html

  
  > Hi all,
  >
  > We had a discussion of the modules that are incubated in Oslo.
  >
  > https://etherpad.openstack.org/p/icehouse-oslo-status
  >
  > One of the conclusions we came to was to deprecate/remove uuidutils in
  > this cycle.
  >
  > The first step into this change should be to remove generate_uuid() from
  > uuidutils.
  >
  > The reason is that 1) generating the UUID string seems trivial enough to
  > not need a function and 2) string representation of uuid4 is not what we
  > want in all projects.
  >
  > To address this, a patch is now on gerrit.
  > https://review.openstack.org/#/c/56152/
  >
  > Each project should directly use the standard uuid module or implement its
  > own helper function to generate uuids if this patch gets in.
  >
  > Any thoughts on this change? Thanks.
  >

  Unfortunately it looks like that change went through before I caught up on
  email. Shouldn't we have removed its use in the downstream projects (at
  least integrated projects) before removing it from Oslo?

  Doug
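  For projects doing the migration, the replacement is a one-liner against
  the standard library. A minimal sketch (the commented-out import path is
  the old oslo-incubator helper, shown only for contrast):

```python
import uuid

# Before (oslo-incubator helper, since removed):
#     from openstack.common import uuidutils
#     new_id = uuidutils.generate_uuid()

# After: call the standard library directly.
new_id = str(uuid.uuid4())
```

  As the thread notes, the string form of uuid4 is not what every project
  wants, which is why each project decides on its own representation.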

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1253497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704043] [NEW] Expose `sudo_file` parameter

2017-07-12 Thread Ivan Kurnosov
Public bug reported:

At the moment the

def write_sudo_rules(self, user, rules, sudo_file=None):

function accepts a custom `sudo_file` parameter, but it is invoked without
passing the third argument:

self.write_sudo_rules(name, kwargs['sudo'])

It would be great if `kwargs['sudo_file']` were passed there.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1704043

Title:
  Expose `sudo_file` parameter

Status in cloud-init:
  New

Bug description:
  At the moment the

  def write_sudo_rules(self, user, rules, sudo_file=None):

  function accepts a custom `sudo_file` parameter, but it is invoked
  without passing the third argument:

  self.write_sudo_rules(name, kwargs['sudo'])

  It would be great if `kwargs['sudo_file']` were passed there.
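  A minimal sketch of the report: the class below is a simplified stand-in
  for cloud-init's Distro (only the method and argument names come from the
  bug; the default path is an illustrative assumption):

```python
class FakeDistro:
    """Stand-in for the cloud-init Distro class (simplified assumption)."""

    def write_sudo_rules(self, user, rules, sudo_file=None):
        # Fall back to a default sudoers.d path when sudo_file is None;
        # the exact default shown here is illustrative.
        sudo_file = sudo_file or "/etc/sudoers.d/90-cloud-init-users"
        return user, rules, sudo_file

    def create_user(self, name, **kwargs):
        if "sudo" in kwargs:
            # Reported behaviour: self.write_sudo_rules(name, kwargs['sudo'])
            # Suggested behaviour: forward sudo_file when provided.
            return self.write_sudo_rules(
                name, kwargs["sudo"], kwargs.get("sudo_file"))
```

  With the suggested call, `create_user('bob', sudo=..., sudo_file='/etc/sudoers.d/bob')`
  would target the requested file instead of the default.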

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1704043/+subscriptions



[Yahoo-eng-team] [Bug 1662911] Re: v3 API create_user does not use default_project_id

2017-07-12 Thread Lance Bragstad
I'm going to mark this as invalid based on the security concerns
highlighted in comment #9. Please feel free to continue using the thread
for discussion as needed.

** Changed in: keystone
   Status: New => Opinion

** Changed in: keystone
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1662911

Title:
  v3 API create_user does not use default_project_id

Status in Designate:
  Triaged
Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  The v3 call to create a user doesn't use the default_project_id
  argument except to validate it.

  
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L918-L919

  This caused problems when updating grenade to allow the ocata->pike
  tests to run, because the user was not set up with a default role as
  it had been under V2.

  https://review.openstack.org/#/c/427916/1

  
http://logs.openstack.org/16/427916/1/check/gate-grenade-dsvm-neutron-ubuntu-xenial/56b7a7d/logs/apache/keystone.txt.gz?level=WARNING#_2017-02-08_13_48_57_247
  
http://logs.openstack.org/16/427916/1/check/gate-grenade-dsvm-neutron-ubuntu-xenial/56b7a7d/logs/grenade.sh.txt.gz#_2017-02-08_13_48_54_600
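  The behaviour boils down to this shape (a simplified sketch, not
  keystone's actual code; `resource_api` is a stand-in object):

```python
def create_user(user_ref, resource_api):
    """Sketch of the reported v3 behaviour (simplified assumption)."""
    default_project_id = user_ref.get("default_project_id")
    if default_project_id is not None:
        # The project id is validated...
        resource_api.get_project(default_project_id)
    # ...but, unlike v2, no role is granted on that project,
    # so the new user ends up with no assignment there.
    return user_ref
```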

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1662911/+subscriptions



[Yahoo-eng-team] [Bug 1681866] Re: Bad response code while validating token: 502

2017-07-12 Thread Lance Bragstad
After double checking the keystone source, I'm not seeing any places
where keystone raises a 502. I'm going to remove keystone from the
affected projects based on comment #4.

** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1681866

Title:
  Bad response code while validating token: 502

Status in devstack:
  New

Bug description:
  Found this while investigating a gate failure [1].

  Tempest logs say "2017-04-11 10:07:02,765 23082 INFO [tempest.lib.common.rest_client] Request (TestSecurityGroupsBasicOps:_run_cleanups): 503 DELETE https://198.72.124.138:8774/v2.1/servers/f736a878-2ac4-4c37-b6a8-e5cd8df5a7fd 0.018s"

  That 503 looks suspicious, so I went to the nova-api logs, which give:

  2017-04-11 10:07:02.762 32191 ERROR keystonemiddleware.auth_token [...] Bad response code while validating token: 502
  2017-04-11 10:07:02.763 32191 WARNING keystonemiddleware.auth_token [...] Identity response:

  502 Proxy Error

  Proxy Error
  The proxy server received an invalid response from an upstream server.
  The proxy server could not handle the request GET /identity_admin/v3/auth/tokens.
  Reason: Error reading from remote server

  Apache/2.4.18 (Ubuntu) Server at 198.72.124.138 Port 443

  2017-04-11 10:07:02.763 32191 CRITICAL keystonemiddleware.auth_token [...] Unable to validate token: Failed to fetch token data from identity server

  So Apache is complaining about a network connection issue related to
  proxying. So I open "logs/apache/tls-proxy_error.txt.gz" and find

  [Tue Apr 11 10:07:02.761420 2017] [proxy_http:error] [pid 7136:tid 140090189690624] (20014)Internal error (specific information not available): [client 198.72.124.138:38722] [frontend 198.72.124.138:443] AH01102: error reading status line from remote server 198.72.124.138:80
  [Tue Apr 11 10:07:02.761454 2017] [proxy:error] [pid 7136:tid 140090189690624] [client 198.72.124.138:38722] [frontend 198.72.124.138:443] AH00898: Error reading from remote server returned by /identity_admin/v3/auth/tokens

  Interesting. Google says that adding "proxy-initial-not-pooled" to the
  apache2 vhost config could help.
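  If one wanted to try that workaround, it would look roughly like this in
  the TLS proxy vhost (an untested assumption; `proxy-initial-not-pooled`
  is a mod_proxy_http environment variable that tells Apache not to reuse
  a pooled backend connection for the first request on a connection):

```apache
# Hypothetical fragment for the devstack tls-proxy vhost (untested):
<VirtualHost *:443>
    # ... existing SSL and ProxyPass directives ...
    # Open a fresh backend connection for the initial request
    # instead of reusing one from the connection pool.
    SetEnv proxy-initial-not-pooled 1
</VirtualHost>
```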

  Anyway, a good elasticsearch query for this is

  message:"Bad response code while validating token: 502"

  8 hits, no worries.


  [1] : http://logs.openstack.org/03/455303/2/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/aa8c7fd/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1681866/+subscriptions



[Yahoo-eng-team] [Bug 1697458] Re: Cannot deploy stable/ocata

2017-07-12 Thread Lance Bragstad
Marking this as invalid for now. If the issue resurfaces, please feel
free to reopen this issue.

** Changed in: keystone
   Status: New => Incomplete

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1697458

Title:
  Cannot deploy stable/ocata

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  I tried to deploy a stable/ocata environment in the following two ways
  on Ubuntu 16.04.2 LTS. Both failed to deploy. Am I missing something?

  Pattern A: using master devstack and following local.conf

REQUIREMENTS_BRANCH=stable/ocata
KEYSTONE_BRANCH=stable/ocata
NOVA_BRANCH=stable/ocata
NEUTRON_BRANCH=stable/ocata
GLANCE_BRANCH=stable/ocata
CINDER_BRANCH=stable/ocata
IRONIC_BRANCH=stable/ocata
SWIFT_BRANCH=stable/ocata

disable_service n-net
disable_service horizon
disable_service tempest
disable_service c-api
disable_service c-vol
disable_service c-sch
enable_service neutron
enable_plugin ironic https://git.openstack.org/openstack/ironic stable/ocata
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account
..(snip)...

  Pattern B: using stable/ocata devstack and same local.conf with above
  definition.

  
  [Error for Pattern A] /opt/stack/logs/stack.sh.log

   ...(snip)...
  2017-06-12 13:21:57.118 | ++lib/keystone:create_keystone_accounts:330  openstack project show admin -f value -c id
  2017-06-12 13:22:00.598 | You are not authorized to perform the requested action: identity:list_projects. (HTTP 403) (Request-ID: req-55f243e3-8720-4cc2-a63d-8c5dfcfa269d)

  I executed 'source devstack/openrc admin admin; openstack --debug endpoint list' and got an error:
  ...(snip)...
  REQ: curl -g -i -X GET http://192.168.122.198/identity/v3/auth/tokens -H "X-Subject-Token: {SHA1}23dde272ead75b0e520d229864a9fb9931aeabce" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}23dde272ead75b0e520d229864a9fb9931aeabce"
  Resetting dropped connection: 192.168.122.198 http://192.168.122.198:80 "GET /identity/v3/auth/tokens HTTP/1.1" 403 141
  RESP: [403] Date: Mon, 12 Jun 2017 13:22:54 GMT Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token Content-Type: application/json Content-Length: 141 x-openstack-request-id: req-bb143aa4-e31a-46f6-91e2-89984a512ad4 Connection: close
  RESP BODY: {"error": {"message": "You are not authorized to perform the requested action: identity:validate_token.", "code": 403, "title": "Forbidden"}}
  ...(snip)...

  [Error for Pattern B] /opt/stack/logs/stack.sh.log
  2017-06-12 13:52:53.474 | ++:: curl -g -k --noproxy '*' -s -o /dev/null -w '%{http_code}' http://192.168.122.198/identity/v3/
  2017-06-12 13:52:53.498 | +::[[ 503 == 503 ]]
  2017-06-12 13:52:53.505 | +:: sleep 1
  2017-06-12 13:52:54.517 | ++:: curl -g -k --noproxy '*' -s -o /dev/null -w '%{http_code}' http://192.168.122.198/identity/v3/
  2017-06-12 13:52:54.537 | +::[[ 503 == 503 ]]
  2017-06-12 13:52:54.544 | +:: sleep 1
  ...(snip)...
  2017-06-12 13:52:55.363 | [ERROR] /home/stack/devstack/lib/keystone:615 keystone did not start
  2017-06-12 13:52:56.371 | Error on exit

I also checked /var/log/apache2/error.log

  [Mon Jun 12 22:56:01.868120 2017] [proxy:error] [pid 32263:tid 140048708118272] (111)Connection refused: AH02454: uwsgi: attempt to connect to Unix domain socket /var/run/uwsgi/keystone-wsgi-public.socket (uwsgi-uds-keystone-wsgi-public) failed
  [Mon Jun 12 22:56:01.868214 2017] [proxy:error] [pid 32263:tid 140048708118272] AH00959: ap_proxy_connect_backend disabling worker for (uwsgi-uds-keystone-wsgi-public) for 0s
  [Mon Jun 12 22:56:01.868232 2017] [:error] [pid 32263:tid 140048708118272] [client 192.168.122.198:36640] failed to make connection to backend: httpd-UDS:0

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1697458/+subscriptions



[Yahoo-eng-team] [Bug 1587777] Re: Mitaka: dashboard performance

2017-07-12 Thread Lance Bragstad
Marking this as invalid since it's been a while without an update.
Please feel free to reopen this report if the issue resurfaces.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1587777

Title:
  Mitaka: dashboard performance

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Environment: Openstack Mitaka on top of Leap 42.1, 1 control node, 2
  compute nodes, 3-node-Ceph-cluster.

  Issue: Since switching to Mitaka, we're experiencing severe delays
  when accessing the dashboard - i.e. switching between "Compute -
  Overview" and "Compute - Instances" takes 15+ seconds, even after
  multiple invocations.

  Steps to reproduce:
  1. Install Openstack Mitaka, incl. dashboard & navigate through the dashboard.

  Expected result:
  Browsing through the dashboard with reasonable waiting times.

  Actual result:
  Refreshing the dashboard can take up to 30 secs; switching between views (e.g. volumes to instances) takes about 15 secs on average.

  Additional information:
  I've had a look at the requests, the Apache logs and our control node's stats, and noticed that it's a single call that's taking all the time... I see no indications of any error; it seems that once WSGI is invoked, that call simply takes its time. Intermediate curl requests are logged, so I see it's doing its work. Looking at "vmstat" I can see that it's user space taking all the load (Apache / mod_wsgi drives its CPU to 100%, while the other CPUs are idle - and no i/o wait, no system space, etc.).

  ---cut here---
  control1:/var/log # top
  top - 10:51:35 up 8 days, 18:16,  2 users,  load average: 2,17, 1,65, 1,48
  Tasks: 383 total,   2 running, 381 sleeping,   0 stopped,   0 zombie
  %Cpu0  : 31,7 us,  2,9 sy,  0,0 ni, 65,0 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu1  : 13,1 us,  0,7 sy,  0,0 ni, 86,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu2  : 17,2 us,  0,7 sy,  0,0 ni, 81,2 id,  1,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu3  : 69,4 us, 12,6 sy,  0,0 ni, 17,9 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu4  : 14,6 us,  1,0 sy,  0,0 ni, 84,4 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu5  : 16,9 us,  0,7 sy,  0,0 ni, 81,7 id,  0,7 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu6  : 17,3 us,  1,3 sy,  0,0 ni, 81,0 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu7  : 21,2 us,  1,3 sy,  0,0 ni, 77,5 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  KiB Mem:  65943260 total, 62907676 used,  3035584 free, 1708 buffers
  KiB Swap:  2103292 total, 0 used,  2103292 free. 53438560 cached Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   6776 wwwrun    20   0  565212 184504  13352 S 100,3 0,280   0:07.83 httpd-prefork
   1130 root      20   0  399456  35760  22508 S 5,980 0,054 818:13.17 X
   1558 sddm      20   0  922744 130440  72148 S 5,316 0,198 966:03.82 sddm-greeter
  20999 nova      20   0  285888 116292   5696 S 2,658 0,176 164:27.08 nova-conductor
  21030 nova      20   0  758752 182644  16512 S 2,658 0,277  58:20.40 nova-api
  18757 heat      20   0  273912  73740   4612 S 2,326 0,112  50:48.72 heat-engine
  18759 heat      20   0  273912  73688   4612 S 2,326 0,112   4:27.54 heat-engine
  20995 nova      20   0  286236 116644   5696 S 2,326 0,177 164:38.89 nova-conductor
  21027 nova      20   0  756204 180752  16980 S 2,326 0,274  58:20.09 nova-api
  21029 nova      20   0  756536 180644  16496 S 2,326 0,274 139:46.29 nova-api
  21031 nova      20   0  756888 180920  16512 S 2,326 0,274  58:36.37 nova-api
  24771 glance    20   0 2312152 139000  17360 S 2,326 0,211  24:47.83 glance-api
  24772 glance    20   0  631672 111248   4848 S 2,326 0,169  22:59.77 glance-api
  28424 cinder    20   0  720972 108536   4968 S 2,326 0,165  28:31.42 cinder-api
  28758 neutron   20   0  317708 101812   4472 S 2,326 0,154 153:45.55 neutron-server

  #

  control1:/var/log # vmstat 1
  procs ---memory-- ---swap-- -io -system-- --cpu-
   r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
   1  0  0 2253144   1708 5344047200 46044 11  1 88  0  0
   0  0  0 2255588   1708 5344047600 0   568 3063 7627 15  1 83  0  0
   1  0  0 2247596   1708 5344047600 0   144 3066 6803 14  2 83  0  0
   1  0  0 2156008   1708 5344047600 072 3474 7193 25  3 72  0  0
   2  0  0 2131968   1708 5344048400 0   652 3497 8565 28  2 70  0  0
   3  1  0 2134000   1708 5344051200 0 14340 3629 10644 25  2 71  2  0
   2  0  0 2136956   1708 5344058000 012 3483 10620 25  2 70  3  0
   9  1  0 2138164   1708 5344059600 0   248 3442 9980 27  1 72  0  0
   4  0  0 2105160   1708 53

[Yahoo-eng-team] [Bug 1699717] Re: Updating of firewall-rule while attached to firewall via non-admin user shows exception on Horizon

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/481008
Committed: https://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/commit/?id=a767cef2ad7973696b1723e17f518cc6435aaacc
Submitter: Jenkins
Branch: master

commit a767cef2ad7973696b1723e17f518cc6435aaacc
Author: Adit Sarfaty 
Date:   Thu Jul 6 15:09:07 2017 +0300

Fix FWaaS create/update rule with non-admin

Creating and updating a shared rule is forbidden for non admin user.

This patch makes sure the 'shared' attribute is disabled, and not added
to the request body of the update request, so the request will not fail
in neutron.

Change-Id: I439947198bd9b0a647640f3f663ba7029b2507b4
Closes-Bug: #1699717


** Changed in: neutron-fwaas-dashboard
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1699717

Title:
  Updating of firewall-rule while attached to firewall via non-admin
  user shows exception on Horizon

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Neutron FWaaS dashboard:
  Fix Released

Bug description:
  Created a non-admin user using the commands below:
  # openstack project create sam
  # openstack user create --password openstack --project acdc3b0348224a019878d628cc40681c sam-user
  # openstack role create user-role
  # openstack role add --project acdc3b0348224a019878d628cc40681c --user sam-user user-role

  Steps:-
  1) Created firewall-rule
  2) Created firewall policy and firewall-rule.
  3) Created firewall and added the firewall-policy to it
  4) Now try to update the firewall-rule using the non-admin user; it shows an exception.
  Error: Failed to update rule fire-rule-sam: {u'protocol': u'tcp', u'description': u'', 'attributes_to_update': [u'protocol', u'name', u'enabled', u'source_ip_address', u'destination_ip_address', u'action', u'source_port', u'shared', u'destination_port', u'ip_version', u'description'], u'source_port': None, u'source_ip_address': None, u'destination_ip_address': None, 'firewall_policy_id': u'ce84a478-3eaf-45ba-9d00-2f82b90916e4', u'destination_port': None, 'id': u'86850f40-6b26-4849-8eb9-f65b4136cf87', u'name': u'fire-rule-sam', 'tenant_id': u'acdc3b0348224a019878d628cc40681c', u'enabled': True, u'action': u'allow', 'shared': False, 'project_id': u'acdc3b0348224a019878d628cc40681c', u'ip_version': 4} is disallowed by policy rule (rule:update_firewall_rule and rule:update_firewall_rule:shared) with {'project_id': u'acdc3b0348224a019878d628cc40681c', 'domain': None, 'project_name': u'sam', 'user_id': u'2e4470864c674331bec8b9f25d546e04', 'roles': [u'user-role'], 'user_domain_id': None, 'service_project_id': None, 'project_domain': None, 'tenant_id': u'acdc3b0348224a019878d628cc40681c', 'service_user_domain_id': None, 'service_project_domain_id': None, 

  But the issue does not occur when using the CLI to update firewall rules as a non-admin user.
  Use the non-admin tenant's credentials, then run the command below:

  $ neutron firewall-rule-update 86850f40-6b26-4849-8eb9-f65b4136cf87 --protocol tcp --action reject
  Updated firewall_rule: 86850f40-6b26-4849-8eb9-f65b4136cf87

  So the command above runs fine via the CLI, but Horizon shows the
  error.
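  The idea behind the committed fix can be sketched like this
  (simplified; `build_update_body` is a hypothetical helper mirroring the
  fix's intent, not the dashboard's actual function):

```python
def build_update_body(attrs, is_admin):
    """Drop 'shared' from the update request body for non-admin users so
    the neutron policy check (rule:update_firewall_rule:shared) is not
    triggered. Hypothetical helper, not the dashboard's real code."""
    body = dict(attrs)
    if not is_admin:
        body.pop("shared", None)
    return body
```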

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1699717/+subscriptions



[Yahoo-eng-team] [Bug 1701996] Re: nova-api-os-compute can't start

2017-07-12 Thread Matt Riedemann
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701996

Title:
  nova-api-os-compute can't start

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  For testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701996/+subscriptions



[Yahoo-eng-team] [Bug 1703303] Re: compute.pp fails during installation on rhel 7.4

2017-07-12 Thread Matt Riedemann
This isn't a nova bug, you'd have to find the proper bug tracker for
packstack.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1703303

Title:
  compute.pp fails during installation on rhel 7.4

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I was trying to install packstack on rhel 7.4, but I am getting the following errors:
  Error: /Stage[main]/Packstack::Nova::Compute::Libvirt/File_line[libvirt-guests]: Could not evaluate: No such file or directory - /etc/sysconfig/libvirt-guests
  Error: /Stage[main]/Packstack::Nova::Compute::Libvirt/Exec[virsh-net-destroy-default]: Could not evaluate: Could not find command '/usr/bin/virsh'
  and compute.pp fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1703303/+subscriptions



[Yahoo-eng-team] [Bug 1703311] Re: gkk

2017-07-12 Thread Matt Riedemann
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1703311

Title:
  gkk

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  gkk test

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1703311/+subscriptions



[Yahoo-eng-team] [Bug 1703540] Re: Reschedule with libvirt exception leaves dangling neutron ports

2017-07-12 Thread Matt Riedemann
With a libvirtError coming up from the driver.spawn method, I think
you'd get here:

https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L1784

And since you have retries left, you wouldn't call
_cleanup_allocated_networks:

https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L1790

And since it's not the Ironic driver or an SR-IOV port you don't
deallocate here:

https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L1811

So we call self.network_api.cleanup_instance_network_on_host but that's
a noop for the neutron networking backend code in Nova:

https://github.com/openstack/nova/blob/stable/ocata/nova/network/neutronv2/api.py#L2335

So yeah, we don't clean up the ports anywhere if this happens.
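The branch structure described above can be condensed to the following
(a simplified sketch of the control flow, not nova's actual code; the
function and return names are illustrative):

```python
def network_cleanup_on_spawn_failure(retries_left, is_ironic, has_sriov_port):
    """Return which cleanup path the compute manager takes (sketch)."""
    if not retries_left:
        return "cleanup_allocated_networks"        # ports deallocated
    if is_ironic or has_sriov_port:
        return "deallocate_ports"                  # ports deallocated
    # Reschedule path with the neutron backend: this call is a no-op
    # in neutronv2/api.py, so the ports dangle.
    return "cleanup_instance_network_on_host"
```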

** Changed in: nova
   Status: New => Triaged

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1703540

Title:
  Reschedule with libvirt exception leaves dangling neutron ports

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  Triaged

Bug description:
  When an instance fails to spawn, for example with the exception:

  2017-07-11 04:39:56.942 ERROR nova.compute.manager [req-1e54a66a-6da5-4720-89cc-f65568dea131 ashok ashok] [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b] Instance failed to spawn
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b] Traceback (most recent call last):
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/compute/manager.py", line 2124, in _build_resources
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     yield resources
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/compute/manager.py", line 1930, in _build_and_run_instance
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     block_device_info=block_device_info)
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2714, in spawn
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     destroy_disks_on_failure=True)
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5130, in _create_domain_and_network
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     destroy_disks_on_failure)
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     self.force_reraise()
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     six.reraise(self.type_, self.value, self.tb)
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5102, in _create_domain_and_network
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     post_xml_callback=post_xml_callback)
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5020, in _create_domain
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     guest.launch(pause=pause)
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 145, in launch
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]     self._encoded_xml, errors='ignore')
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: d37e6882-8c94-47dc-8c2f-c9052a25b95b] sel

[Yahoo-eng-team] [Bug 1687479] Re: Evacuated instances that are deleted before the source host comes up causes cleanup not to happen

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/467774
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=42b1fa965028c12d6e78b70d2487d5dd49158176
Submitter: Jenkins
Branch: master

commit 42b1fa965028c12d6e78b70d2487d5dd49158176
Author: mdrabe 
Date:   Wed May 24 15:56:13 2017 -0500

Query deleted instance records during _destroy_evacuated_instances

_destroy_evacuated_instances is responsible for cleaning up the
remnants of instance evacuations from the source host. Currently
this method doesn't account for instances that have been deleted
after being evacuated.

Change-Id: Ib5f6b03189b7fc5cd0b226ea2dca74865fbef12a
Closes-Bug: #1687479


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687479

Title:
  Evacuated instances that are deleted before the source host comes up
  causes cleanup not to happen

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Description
  ===
  When an instance is evacuated to another host, the VM remains on the source 
host until it is brought back up and deleted by compute via 
_destroy_evacuated_instances.

  However if the VM that's created on the destination is deleted before
  the source host is brought back up, then _destroy_evacuated_instances
  won't reap the remains because it searches non-deleted records.

  Steps to reproduce
  ==
  1. Deploy a VM.
  2. Bring the host the VM is on down.
  3. Evacuate the VM to a different host.
  4. Delete the VM from the destination.
  5. Bring the source host back up.

  The source remnants from the evacuation will not be cleaned up, but
  they should be.

  Suspect code is in the nova compute manager in
  _destroy_evacuated_instances:

  1. MigrationList.get_by_filters doesn't appear to return deleted migration 
records.
  2. {'deleted': False} is currently passed as the filter to 
_get_instances_on_driver.
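  The suspect filtering can be illustrated with a toy query (a sketch with
  made-up record dicts; the function name and record shapes are
  assumptions, not nova's DB API):

```python
def find_evacuation_remnants(instances, include_deleted):
    """Toy version of the source-host sweep in
    _destroy_evacuated_instances (names and shapes are assumptions)."""
    return [i["uuid"] for i in instances
            if i["evacuated"] and (include_deleted or not i["deleted"])]

records = [
    {"uuid": "a", "evacuated": True,  "deleted": False},
    {"uuid": "b", "evacuated": True,  "deleted": True},   # deleted post-evac
    {"uuid": "c", "evacuated": False, "deleted": False},
]
```

  With the old {'deleted': False} filter, instance "b" is missed and its
  remnants stay on the source host; querying deleted records as well (as
  the committed fix does) picks it up.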

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687479/+subscriptions



[Yahoo-eng-team] [Bug 1693555] Re: document x-openstack-request-id in api-ref

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/474847
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9fbd7861dd5c8b49b3a9fe96c03d45234a55a6b6
Submitter: Jenkins
Branch: master

commit 9fbd7861dd5c8b49b3a9fe96c03d45234a55a6b6
Author: Takashi NATSUME 
Date:   Fri Jun 16 11:38:55 2017 +0900

api-ref: Add X-Openstack-Request-Id description

Add the description for the following items
in the API reference and the API guide.

* 'X-Openstack-Request-Id' header in request
* 'X-Openstack-Request-Id' header in response
* 'X-Compute-Request-Id' in response

Change-Id: Idd9181c1530eb9576da9941416b697a97c0cfb8d
Closes-Bug: #1693555


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1693555

Title:
  document x-openstack-request-id in api-ref

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  x-compute-request-id and x-openstack-request-id are not documented in
  our API ref, we should add those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1693555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699732] Re: api-ref: Incorrect parameter description in server-security-groups.inc

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/476434
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3f96ec6490c72d3646813f1a9d3ee38773ec1823
Submitter: Jenkins
Branch: master

commit 3f96ec6490c72d3646813f1a9d3ee38773ec1823
Author: Takashi NATSUME 
Date:   Thu Jun 22 17:40:47 2017 +0900

api-ref: Fix parameters in server-security-groups

Change-Id: Ie8dc3252603ce77910e1addb67cdc8844369dfca
Implements: blueprint api-ref-in-rst-pike
Closes-Bug: #1699732


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1699732

Title:
  api-ref: Incorrect parameter description in server-security-groups.inc

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://developer.openstack.org/api-ref/compute/?expanded=list-
  security-groups-by-server-detail#list-security-groups-by-server

  In "List Security Groups By Server" API, there are some incorrect
  descriptions.

  In Response,

  * 'security_groups' is marked optional, but it should be 'required'.

  * The description of 'security_groups' is "One or more security groups. 
Specify the name of the security group in the name attribute. If you omit this 
attribute, the API creates the server in the default security group."
It is not proper in the response.

  * The description of 'id' is: "The security group name or UUID."
    "name" is not proper here.

  * 'rules' is an array, so the description should be changed to clarify
  that it is a list of security group rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1699732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704014] [NEW] Instance resize and rebuild actions should be shown as destructive actions

2017-07-12 Thread Ying Zuo
Public bug reported:

The destructive actions which require shutting down the server are shown
in red in the action dropdown menu. Both instance resize and rebuild
actions require shutting down the server so they should be shown as
destructive actions.

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1704014

Title:
  Instance resize and rebuild actions should be shown as destructive
  actions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The destructive actions which require shutting down the server are
  shown in red in the action dropdown menu. Both instance resize and
  rebuild actions require shutting down the server so they should be
  shown as destructive actions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1704014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1689468] Re: odd keystone behavior when X-Auth-Token ends with carriage return

2017-07-12 Thread Gage Hugo
** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Changed in: keystonemiddleware
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1689468

Title:
  odd keystone behavior when X-Auth-Token ends with carriage return

Status in OpenStack Identity (keystone):
  In Progress
Status in keystonemiddleware:
  In Progress

Bug description:
  I had to root cause a very odd problem today, where a user complained
  that they had a token that worked with neutron but didn't work with
  keystone. E.g. they could list networks, but couldn't list projects. I
  thought there must be some mistake, but I was finally able to
  reproduce it and they were correct. Here's a script that shows the
  problem:

  OPENSTACK=
  AUTH_FILE=/root/auth.json

  TOKEN=`curl -s -1 -k -i -X POST https://$OPENSTACK:5000/v3/auth/tokens
  -H "Accept:application/json" -H "Content-Type: application/json" -d
  @${AUTH_FILE} | grep X-Subject-Token | awk '{FS=":"; print $2}'`

  echo 'neutron:'; curl -1 -k -X GET
  https://$OPENSTACK:9696/v2.0/networks -H "X-Auth-Token: $TOKEN" -H
  "Content-Type: application/json"; echo; echo

  echo 'keystone:'; curl -1 -k -X GET
  https://$OPENSTACK:5000/v3/projects -H "X-Auth-Token: $TOKEN" -H
  "Accept: application/json"; echo; echo

  
  With debug=True and insecure_debug=True and 
default_log_levels=keystonemiddleware=True, this yields something like:

  neutron:
  {"networks": []}

  keystone:
  {"error": {"message": "auth_context did not decode anything useful (Disable 
insecure_debug mode to suppress these details.)", "code": 401, "title": 
"Unauthorized"}}


  I was finally able to figure out why... the awk command used to parse
  the token out of the X-Subject-Token header was leaving a \r on the
  end of the $TOKEN value, and apparently that's handled fine when you
  make the request to neutron (and presumably any non-keystone service),
  but not when you are talking to keystone directly. That makes some
  sense, since keystone has to do its own token validation differently.

  Changing the following line in the script above (adding the gsub to
  trim the \r) fixed the issue:

  TOKEN=`curl -s -1 -k -i -X POST https://$OPENSTACK:5000/v3/auth/tokens
  -H "Accept:application/json" -H "Content-Type: application/json" -d
  @${AUTH_FILE} | grep X-Subject-Token | awk '{FS=":";
  gsub(/\r$/,"",$2); print $2}'`

  
  We should fix this to be consistent with non-keystone token validation, to 
save someone else the trouble debugging this if nothing else. Keystone was 
doing weird things, where the debug logs would show that the context knew the 
user and roles, but had no token... leaving one to wonder how it figured out 
the user and roles if it didn't have a token?!? Not a good user experience for 
someone trying to write a script to our APIs.
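
  A middleware-side fix could simply normalize the header value before
  validation. A minimal sketch (a hypothetical helper, not keystone's or
  keystonemiddleware's actual code):

```python
def normalize_token(raw):
    """Strip the trailing CR/LF that naive shell parsing of HTTP headers
    (like the awk pipeline above) leaves on the token value."""
    return raw.strip(" \t\r\n")

assert normalize_token("gAAAAABexampletoken\r") == "gAAAAABexampletoken"
assert normalize_token("gAAAAABexampletoken") == "gAAAAABexampletoken"
```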

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1689468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704012] [NEW] hw_video:ram_max_mb is tracked for quota but not compute node resource provider allocations

2017-07-12 Thread Matt Riedemann
Public bug reported:

This came up in discussion here:

https://review.openstack.org/#/c/416521/58/nova/compute/api.py@1904

There is a flavor extra spec called "hw_video:ram_max_mb" which is used
in the compute API code to count against a project's ram usage in
addition to the memory_mb in the flavor being used to create the
instance.

The hw_video:ram_max_mb flavor extra spec is only then used in the
libvirt driver, but not anywhere else.

The issue is that when doing claims in the resource tracker and
reporting allocations for the compute node resource provider to the
placement service, we don't account for the hw_video:ram_max_mb value in
addition to the flavor.memory_mb value, so we're really taking up more
MEMORY_MB inventory capacity on the compute node than what's reported to
Placement, which is a problem, probably. :)

This is a latent bug since that flavor extra spec was introduced.
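
The quota-side accounting the compute API performs can be sketched as follows
(illustrative only; ram_for_quota is a hypothetical helper, and the real
extra-spec handling lives in nova's compute API code):

```python
def ram_for_quota(flavor_memory_mb, extra_specs):
    """RAM counted against the project quota: flavor memory plus the
    hw_video:ram_max_mb cap when the flavor defines one."""
    video_mb = int(extra_specs.get("hw_video:ram_max_mb", 0))
    return flavor_memory_mb + video_mb

print(ram_for_quota(2048, {"hw_video:ram_max_mb": "64"}))  # 2112
print(ram_for_quota(2048, {}))                             # 2048
```

The bug is that the placement allocation uses only flavor.memory_mb (2048
here), while quota counts the larger value (2112), so MEMORY_MB inventory is
undersold.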

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704012

Title:
  hw_video:ram_max_mb is tracked for quota but not compute node resource
  provider allocations

Status in OpenStack Compute (nova):
  New

Bug description:
  This came up in discussion here:

  https://review.openstack.org/#/c/416521/58/nova/compute/api.py@1904

  There is a flavor extra spec called "hw_video:ram_max_mb" which is
  used in the compute API code to count against a project's ram usage in
  addition to the memory_mb in the flavor being used to create the
  instance.

  The hw_video:ram_max_mb flavor extra spec is only then used in the
  libvirt driver, but not anywhere else.

  The issue is that when doing claims in the resource tracker and
  reporting allocations for the compute node resource provider to the
  placement service, we don't account for the hw_video:ram_max_mb value
  in addition to the flavor.memory_mb value, so we're really taking up
  more MEMORY_MB inventory capacity on the compute node than what's
  reported to Placement, which is a problem, probably. :)

  This is a latent bug since that flavor extra spec was introduced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704010] [NEW] VMware: attach volume fails with AttributeError

2017-07-12 Thread Vipin Balachandran
Public bug reported:

Attaching/detaching volume with adapter type IDE fails with:
AttributeError: 'int' object has no attribute 'lower'

2017-07-11 23:20:11.876 ERROR nova.virt.block_device 
[req-15c66739-f62f-405d-ad71-e9e46dfeea88 demo admin] [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Driver failed to attach volume 
7f94ea59-510c-4c9e-bf5b-9accc59a7a54 at /dev/sdc: AttributeError: 'int' object 
has no attribute 'lower'
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Traceback (most recent call last):
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 389, in attach
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] device_type=self['device_type'], 
encryption=encryption)
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 328, in attach_volume
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] return 
self._volumeops.attach_volume(connection_info, instance)
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 381, in attach_volume
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 
self._attach_volume_vmdk(connection_info, instance, adapter_type)
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 335, in 
_attach_volume_vmdk
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] if state.lower() != 'poweredoff':
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] AttributeError: 'int' object has no 
attribute 'lower'
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 

2017-07-11 23:20:56.655 ERROR nova.virt.block_device 
[req-d985d896-119c-40e4-868e-677dbf461df1 demo admin] [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Failed to detach volume 
af84764d-dd81-4108-8d2f-b39cedeb9aa2 from /dev/sdb: AttributeError: 'int' 
object has no attribute 'lower'
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Traceback (most recent call last):
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 277, in driver_detach
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] encryption=encryption)
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 333, in detach_volume
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] return 
self._volumeops.detach_volume(connection_info, instance)
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 582, in detach_volume
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 
self._detach_volume_vmdk(connection_info, instance)
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 535, in 
_detach_volume_vmdk
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] if state.lower() != 'poweredoff':
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] AttributeError: 'int' object has no 
attribute 'lower'
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]
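
The failing comparison assumes the power state is a string ('poweredOff'),
but with adapter type IDE the vSphere binding hands back an int enum. A
defensive check might look like this (illustrative; the int code below is a
placeholder, not vSphere's actual enum value):

```python
POWERED_OFF_STRING = "poweredoff"
POWERED_OFF_INT = 1  # placeholder; not vSphere's actual enum value

def is_powered_off(state):
    """Accept either the string or the int form of the VM power state."""
    if isinstance(state, str):
        return state.lower() == POWERED_OFF_STRING
    return state == POWERED_OFF_INT

assert is_powered_off("poweredOff")
assert is_powered_off(1)
assert not is_powered_off("poweredOn")
```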

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704010

Title:
  VMware: attach volume fails with AttributeError

Status in OpenStack Compute (nova):
  New

Bug description:
  Attaching/detaching volume with adapter type IDE fails with:
  AttributeError: 'int' object has no attribute 'lower'

  2017-07-11 23:20:11.876 ERROR nova.virt.block_device 
[req-15c66739-f62f-405d-ad71-e9e4

[Yahoo-eng-team] [Bug 1704000] [NEW] Sometimes OVO unit tests clash on non-unique attributes

2017-07-12 Thread Ihar Hrachyshka
Public bug reported:

ft1.22: 
neutron.tests.unit.objects.test_l3agent.RouterL3AgentBindingDbObjTestCase.test_update_objects_StringException:
 Traceback (most recent call last):
  File "neutron/tests/base.py", line 118, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/objects/test_base.py", line 1848, in 
test_update_objects
self.context, new_values, **keys)
  File "neutron/objects/base.py", line 494, in update_objects
**cls.modify_fields_to_db(kwargs))
  File "neutron/objects/db/api.py", line 103, in update_objects
return q.update(values, synchronize_session=False)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 3345, in update
update_op.exec_()
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py",
 line 1179, in exec_
self._do_exec()
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py",
 line 1334, in _do_exec
mapper=self.mapper)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1139, in execute
bind, close_with_result=True).execute(clause, params or {})
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 945, in execute
return meth(self, multiparams, params)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1053, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1189, in _execute_context
context)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1398, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1182, in _execute_context
context)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 470, in do_execute
cursor.execute(statement, parameters)
oslo_db.exception.DBDuplicateEntry: (sqlite3.IntegrityError) UNIQUE constraint 
failed: routerl3agentbindings.router_id, routerl3agentbindings.binding_index 
[SQL: u'UPDATE routerl3agentbindings SET binding_index=? WHERE 
routerl3agentbindings.router_id IN (?) AND routerl3agentbindings.l3_agent_id IN 
(?)'] [parameters: (2, '2a0036c0-dfc8-4dee-b6e8-ae7039abb5e0', 
'98fa8bc4-d5be-422f-88a2-cfcd78a7f2d6')]


http://logs.openstack.org/73/304873/45/check/gate-neutron-python27
-ubuntu-xenial/c6512d6/testr_results.html.gz

This is because self.update_obj_fields, as used in several test classes, does
not update the unique_tracker in the base test class that ensures
get_random_object_fields generates no duplicate values for unique columns. We
probably need to make those two cooperate.
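
The cooperation being asked for is roughly: the random-field generator should
consult a shared registry of already-used unique values. A toy sketch (names
illustrative, not neutron's actual test helpers):

```python
import itertools

class UniqueTracker:
    """A shared tracker the random-field generator consults so the same
    'unique' column value is never produced twice."""
    def __init__(self):
        self.used = set()

    def get_unique(self, gen, attempts=100):
        for _ in range(attempts):
            value = gen()
            if value not in self.used:
                self.used.add(value)
                return value
        raise RuntimeError("could not generate a unique value")

# A generator that, like get_random_object_fields, may repeat values:
gen = itertools.cycle([1, 1, 2, 2, 3]).__next__
tracker = UniqueTracker()
assert [tracker.get_unique(gen) for _ in range(3)] == [1, 2, 3]
```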

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure unittest

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1704000

Title:
  Sometimes OVO unit tests clash on non-unique attributes

Status in neutron:
  Confirmed

Bug description:
  ft1.22: 
neutron.tests.unit.objects.test_l3agent.RouterL3AgentBindingDbObjTestCase.test_update_objects_StringException:
 Traceback (most recent call last):
File "neutron/tests/base.py", line 118, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/objects/test_base.py", line 1848, in 
test_update_objects
  self.context, new_values, **keys)
File "neutron/objects/base.py", line 494, in update_objects
  **cls.modify_fields_to_db(kwargs))
File "neutron/objects/db/api.py", line 103, in update_objects
  return q.update(values, sync

[Yahoo-eng-team] [Bug 1702573] Re: api-ref: GET /servers/{server_id}/os-instance-actions/{request_id} does not list 'events' response key as optional

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/480792
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d2d84eb102023d75911ca848c1d30a9f81e6f40f
Submitter: Jenkins
Branch: master

commit d2d84eb102023d75911ca848c1d30a9f81e6f40f
Author: Matt Riedemann 
Date:   Wed Jul 5 21:39:58 2017 -0400

api-ref: mark instance action events parameter as optional

For "GET /servers/{server_id}/os-instance-actions/{request_id}",
the "events" parameter in the response body is only included by
default policy for administrators. You can get details if you're
an admin or own the server, but the events are only returned for
admins by default.

This change does two things:

1. Fixes the description of the default policy since admin or
   owner can get action details for a particular request.
2. Fixes the "events" parameter description by pointing out it
   is optional and only returned by default for admins.

Change-Id: I6410a0aac223133d8d07fd65c268553ebb9e7e67
Closes-Bug: #1702573


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1702573

Title:
  api-ref: GET /servers/{server_id}/os-instance-actions/{request_id}
  does not list 'events' response key as optional

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The API reference for "GET /servers/{server_id}/os-instance-
  actions/{request_id}" does not list the "events" key in the response
  body as optional:

  https://developer.openstack.org/api-ref/compute/?expanded=show-server-
  action-details-detail,list-actions-for-server-detail

  However, based on the API code it is:

  
https://github.com/openstack/nova/blob/635e29433cdadd3d1b664ea2354f049125c393fe/nova/api/openstack/compute/instance_actions.py#L83

  By default policy rules, the event details are only shown for admin
  users:

  
https://github.com/openstack/nova/blob/635e29433cdadd3d1b664ea2354f049125c393fe/nova/policies/instance_actions.py#L25

  Using the os_compute_api:os-instance-actions:events policy rule.

  The API reference should be updated to reflect this.
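
  The policy-gated response shape can be sketched like this (a toy builder,
  not nova's actual view code; by default policy the check keys on admin):

```python
def build_action_response(action, is_admin):
    """'events' is only included when policy allows (admin-only by default)."""
    body = {"action": action["action"], "request_id": action["request_id"]}
    if is_admin:
        body["events"] = action.get("events", [])
    return body

action = {"action": "reboot", "request_id": "req-1",
          "events": [{"event": "compute_reboot_instance"}]}
assert "events" in build_action_response(action, is_admin=True)
assert "events" not in build_action_response(action, is_admin=False)
```

  This is exactly why the api-ref must mark "events" as optional: a non-admin
  owner gets a valid response without it.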

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1702573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615014] Re: Prevent --expand, --migrate, --contract from being run out of order

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/437441
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=6bab551cd8a523332b7c387c36c701cb90fd96bd
Submitter: Jenkins
Branch: master

commit 6bab551cd8a523332b7c387c36c701cb90fd96bd
Author: Richard Avelar 
Date:   Thu Feb 23 15:35:21 2017 +

Validate rolling upgrade is run in order

This patch addresses a bug that allows rolling upgrades to be run
out of order and without first checking if the previous command
has been run to a higher version before hand.

Change-Id: I55fa4f600d89f3a2fb14868f6886b52fd1ef6c6b
Closes-Bug: 1615014


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1615014

Title:
  Prevent --expand, --migrate, --contract from being run out of order

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently, keystone does nothing to prevent an operator from running
  each step of the rolling migration process out of order.
  Theoretically, most migrations will fail if the table they're looking
  to drop does not exist, etc, but that might not always be the case.

  The transition to rolling migrations introduces a few expectations
  between the migration repositories that we should be able to enforce
  rather easily (given that all 3 of the new migration repos should
  always contain the same number of migration steps).

  1. All legacy migrations need to be run before any --expand migrations
  are allowed to run.

  2. The version number of the --expand repo must be greater than or
  equal to the version number of the --migrate repo.

  3. The version number of the --migrate repo must be greater than or
  equal to the version number of the --contract repo.

  I'd expect each command to abort with an error message if there are
  outstanding steps from the previous repository that have not been run.
  As a bit of a special case, db_sync --expand could (continue to) run
  the legacy repository automatically, but only if the legacy repository
  is guaranteed to be additive-only, as non-additive migrations should
  never be run by --expand (perhaps we should find the version number of
  the last non-additive migration and check that the current state of
  the db will not cause that migration to be run accidentally).
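
  Expectations 2 and 3 reduce to a simple ordering invariant that each
  command could assert before running. A sketch (hypothetical version
  numbers, not keystone's actual migrate_repo API):

```python
def check_upgrade_order(expand_ver, migrate_ver, contract_ver):
    """Abort unless expand >= migrate >= contract, per the expectations above."""
    if not (expand_ver >= migrate_ver >= contract_ver):
        raise RuntimeError(
            "rolling upgrade run out of order: expand=%s migrate=%s contract=%s"
            % (expand_ver, migrate_ver, contract_ver))

check_upgrade_order(3, 2, 2)      # fine: expand is ahead, contract is behind
try:
    check_upgrade_order(1, 2, 0)  # --migrate ran past --expand
except RuntimeError as exc:
    print(exc)
```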

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1615014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686035] Re: [RFE] More detailed reporting of available QoS rules

2017-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/475260
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2cc547241c99b01e36fdc69a08c59f975b32c508
Submitter: Jenkins
Branch: master

commit 2cc547241c99b01e36fdc69a08c59f975b32c508
Author: Sławek Kapłoński 
Date:   Mon Jun 19 06:35:25 2017 +

New API call to get details of supported QoS rule type

This commit adds new API call that allows to discover
details about supported QoS rule type and its parameters
by each of loaded backend drivers.

DocImpact: New call to get details about supported
   rule_type for each loaded backend driver
ApiImpact

Change-Id: I2008e9d3e400dd717434fbdd2e693c9c5e34c3a4
Closes-Bug: #1686035


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686035

Title:
  [RFE] More detailed reporting of available QoS rules

Status in neutron:
  Fix Released

Bug description:
  Currently Neutron has the API call "qos-available-rule-types", which returns 
the subset of QoS rules supported by all loaded drivers (openvswitch, 
linuxbridge, etc.)
  After https://bugs.launchpad.net/neutron/+bug/1586056 was closed, this may 
sometimes not be enough.

  I would suggest adding a new API call to report details about the
  supported rules. It should return something like

  +--------+----------------------+----------------------------+
  | Driver | Supported rule       | Supported parameters       |
  +--------+----------------------+----------------------------+
  | ovs    | bandwidth_limit_rule | direction: egress          |
  |        |                      | max_kbps: ANY VALUE        |
  +--------+----------------------+----------------------------+
  | LB     | bandwidth_limit_rule | direction: egress, ingress |
  |        |                      | max_kbps: ANY VALUE        |
  +--------+----------------------+----------------------------+

  Thanks to that API call, operators will be able to discover exactly which 
rules, and with what values, can be applied to ports bound with a specific 
driver.
  As such a call can leak the driver names in use to users, it should be 
available only to admins.
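
  The difference between the existing call and the proposed one can be
  sketched as set operations over per-driver capabilities (the data and
  helper names are illustrative, not neutron's driver registry):

```python
drivers = {
    "ovs": {"bandwidth_limit_rule": {"direction": ["egress"]}},
    "linuxbridge": {"bandwidth_limit_rule": {"direction": ["egress", "ingress"]}},
}

def available_rule_types(drivers):
    """Existing call: only the rule types every loaded driver supports."""
    sets = [set(rules) for rules in drivers.values()]
    return set.intersection(*sets)

def rule_type_details(drivers, rule_type):
    """Proposed call: per-driver parameter details for one rule type."""
    return {name: rules[rule_type] for name, rules in drivers.items()
            if rule_type in rules}

print(available_rule_types(drivers))  # {'bandwidth_limit_rule'}
print(rule_type_details(drivers, "bandwidth_limit_rule"))
```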

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686035/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703976] [NEW] New API call to get details of supported QoS rule type

2017-07-12 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/475260
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 2cc547241c99b01e36fdc69a08c59f975b32c508
Author: Sławek Kapłoński 
Date:   Mon Jun 19 06:35:25 2017 +

New API call to get details of supported QoS rule type

This commit adds new API call that allows to discover
details about supported QoS rule type and its parameters
by each of loaded backend drivers.

DocImpact: New call to get details about supported
   rule_type for each loaded backend driver
ApiImpact

Change-Id: I2008e9d3e400dd717434fbdd2e693c9c5e34c3a4
Closes-Bug: #1686035

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703976

Title:
  New API call to get details of supported QoS rule type

Status in neutron:
  New

Bug description:
  https://review.openstack.org/475260
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2cc547241c99b01e36fdc69a08c59f975b32c508
  Author: Sławek Kapłoński 
  Date:   Mon Jun 19 06:35:25 2017 +

  New API call to get details of supported QoS rule type
  
  This commit adds new API call that allows to discover
  details about supported QoS rule type and its parameters
  by each of loaded backend drivers.
  
  DocImpact: New call to get details about supported
 rule_type for each loaded backend driver
  ApiImpact
  
  Change-Id: I2008e9d3e400dd717434fbdd2e693c9c5e34c3a4
  Closes-Bug: #1686035

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1703976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703954] [NEW] Attach/Detach encrypted volume problems with real paths

2017-07-12 Thread Gorka Eguileor
Public bug reported:

OS-Brick 1.14 and 1.15 return real paths instead of symbolic
links, which results in the encryption attach_volume call
replacing the real device with a link to the crypt dm.

The issue comes from the Nova flow when attaching an encrypted volume:

1- Attach volume
2- Generate libvirt configuration with path from step 1
3- Encrypt attach volume

Since step 2 has already generated the config with the path from step 1,
step 3 must preserve this path.

When step 1 returns a symbolic link, we just forcefully replace it with a
link to the crypt dm and everything is OK. But when we return a real
path, the same replacement turns, for example, /dev/sda into a symlink,
which then breaks the detach process and all future attachments.

If the flow order were changed to 1, 3, 2, then the encrypt attach volume
step could provide a different path to be used for the libvirt config
generation.
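
A standalone sketch (plain Python, not os-brick code; every path below is a
temp-file stand-in for the real device nodes) of why the forceful replacement
is harmless when attach returns a symlink but destructive when it returns the
real path:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
real_dev = os.path.join(tmp, "sda")         # stands in for /dev/sda
crypt_dm = os.path.join(tmp, "dm-crypt-0")  # stands in for the crypt dm node
open(real_dev, "w").close()
open(crypt_dm, "w").close()

# Case 1: attach returned a symlink -- replacing it leaves the device intact.
link = os.path.join(tmp, "by-id-link")
os.symlink(real_dev, link)
os.unlink(link)
os.symlink(crypt_dm, link)
print(os.path.exists(real_dev))   # the real device is untouched

# Case 2: attach returned the real path -- the same replacement destroys it.
os.unlink(real_dev)               # the actual device node is now gone
os.symlink(crypt_dm, real_dev)    # "/dev/sda" is now only a symlink to the dm
print(os.path.islink(real_dev))   # detach and all future attaches now break
```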

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: os-brick
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  OS-Brick on 1.14 and 1.15 returns real paths instead of returning
  symbolic links, which results in the encryption attach_volume call
  replacing the real device with a link to the crypt dm.
  
  The issue comes from the Nova flow when attaching an encrypted volume:
  
  1- Attach volume
  2- Generate libvirt configuration with path from step 1
  3- Encrypt attach volume
  
  Since step 2 has already generated the config with the path from step 1
  then step 3 must preserve this path.
  
  When step 1 returns a symbolic link we just forcefully replace it with a
  link to the crypt dm and everything is OK, but when we return a real
- path it does the same thing.
+ path it does the same thing, which means we'll be replacing for example
+ /dev/sda with a symlink, which will then break the detach process, and
+ all future attachments.
  
  If flow order was changed to be 1, 3, 2 then the encrypt attach volume
  could give a different path to be used for the libvirt config
  generation.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1703954

Title:
  Attach/Detach encrypted volume problems with real paths

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New

Bug description:
  OS-Brick 1.14 and 1.15 return real paths instead of symbolic
  links, which results in the encryption attach_volume call
  replacing the real device with a link to the crypt dm.

  The issue comes from the Nova flow when attaching an encrypted volume:

  1- Attach volume
  2- Generate libvirt configuration with path from step 1
  3- Encrypt attach volume

  Since step 2 has already generated the config with the path from step
  1, step 3 must preserve this path.

  When step 1 returns a symbolic link, we just forcefully replace it with
  a link to the crypt dm and everything is OK. But when we return a real
  path, the same replacement turns, for example, /dev/sda into a symlink,
  which then breaks the detach process and all future attachments.

  If the flow order were changed to 1, 3, 2, then the encrypt attach
  volume step could provide a different path to be used for the libvirt
  config generation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1703954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450067] Re: Server with ML2 & L3 service plugin exposes dvr extension even if OVS agent is unused

2017-07-12 Thread Ihar Hrachyshka
Now that we have a config knob for this, I believe we can claim it's
fixed on the neutron side. I understand there are some hard feelings about
the solution, and we can revisit the way we tackle it later, but in the
short term it seems we can close the bug.

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450067

Title:
  Server with ML2 & L3 service plugin exposes dvr extension even if OVS
  agent is unused

Status in neutron:
  Fix Released

Bug description:
  In a deployment using the L3 service plugin, the DVR extension is
  always declared as available, even if the ML2 OVS mech driver is not
  configured. Deployments could be using the LB mech driver or others.
  Not only is the extension declared, but DVR router creation is also not
  blocked. We should not rely only on documentation, but additionally
  provide the expected behavior (fail gracefully and do not expose the
  extension).
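
A hedged sketch of the config knob mentioned in the comment above (the option
name `enable_dvr` is my assumption from the fix discussion; verify the exact
name against your neutron release before relying on it):

```ini
# neutron.conf on the neutron-server node (assumed option name: enable_dvr)
[DEFAULT]
# When set to False, the server stops advertising the dvr extension and
# refuses to create distributed routers.
enable_dvr = False
```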

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703938] [NEW] AttributeError: 'PortContext' object has no attribute 'session' in l3_hamode_db

2017-07-12 Thread Ihar Hrachyshka
Public bug reported:

Jul 11 20:08:35.720679 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers [None 
req-13c07cf3-201f-4b86-9e92-8f51bd141c6c admin admin] Mechanism driver 
'l2population' failed in delete_port_postcommit: AttributeError: 'PortContext' 
object has no attribute 'session'
Jul 11 20:08:35.720775 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
Jul 11 20:08:35.720895 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/managers.py", line 426, in 
_call_on_drivers
Jul 11 20:08:35.720971 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
Jul 11 20:08:35.721056 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 
79, in delete_port_postcommit
Jul 11 20:08:35.721134 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers context, 
port['device_owner'], port['device_id']):
Jul 11 20:08:35.721206 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/db/l3_hamode_db.py", line 726, in 
is_ha_router_port
Jul 11 20:08:35.721283 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers context, 
router_id=router_id, ha=True)
Jul 11 20:08:35.721369 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/objects/base.py", line 712, in objects_exist
Jul 11 20:08:35.721447 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers context, 
cls.db_model, **cls.modify_fields_to_db(kwargs))
Jul 11 20:08:35.721526 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/objects/db/api.py", line 32, in get_object
Jul 11 20:08:35.721610 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers return 
_get_filter_query(context, model, **kwargs).first()
Jul 11 20:08:35.721725 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/objects/db/api.py", line 25, in 
_get_filter_query
Jul 11 20:08:35.721802 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers with 
context.session.begin(subtransactions=True):
Jul 11 20:08:35.721877 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers AttributeError: 
'PortContext' object has no attribute 'session'
Jul 11 20:08:35.721952 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers 

Example: http://logs.openstack.org/73/304873/44/check/gate-tempest-dsvm-
neutron-dvr-ha-multinode-full-ubuntu-xenial-
nv/586400d/logs/screen-q-svc.txt.gz?level=TRACE#_Jul_11_20_08_35_720679
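
A toy reproduction of the failure mode (the class names mimic the real ones,
but this is an illustrative sketch, not neutron code): the ml2 PortContext
wraps the plugin context rather than exposing .session itself, so object-layer
code handed the wrong context object fails exactly as in the trace.

```python
class PluginContext:
    """Stands in for the neutron request context, which has .session."""
    session = object()

class PortContext:
    """Stands in for the ml2 driver context: it wraps the plugin context."""
    def __init__(self, plugin_context):
        self._plugin_context = plugin_context

def objects_exist(context):
    # Object-layer code expects a plugin context exposing .session.
    return context.session is not None

port_ctx = PortContext(PluginContext())
try:
    objects_exist(port_ctx)       # reproduces the AttributeError in the trace
except AttributeError as exc:
    print(exc)                    # 'PortContext' object has no attribute 'session'

# Unwrapping to the plugin context before the DB call avoids the error.
print(objects_exist(port_ctx._plugin_context))
```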

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db l3-ha

** Tags added: db l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703938

Title:
  AttributeError: 'PortContext' object has no attribute 'session' in
  l3_hamode_db

Status in neutron:
  New

Bug description:
  Jul 11 20:08:35.720679 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers [None 
req-13c07cf3-201f-4b86-9e92-8f51bd141c6c admin admin] Mechanism driver 
'l2population' failed in delete_port_postcommit: AttributeError: 'PortContext' 
object has no attribute 'session'
  Jul 11 20:08:35.720775 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
  Jul 11 20:08:35.720895 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/managers.py", line 426, in 
_call_on_drivers
  Jul 11 20:08:35.720971 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  Jul 11 20:08:35.721056 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 
neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line

[Yahoo-eng-team] [Bug 1375625] Re: Problem in l3-agent tenant-network interface would cause split-brain in HA router

2017-07-12 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375625

Title:
  Problem in l3-agent tenant-network interface would cause split-brain
  in HA router

Status in neutron:
  Fix Released
Status in openstack-manuals:
  Confirmed

Bug description:
  Assuming l3-agents have one NIC (e.g. eth0) assigned to tenant-network 
(tunnel) traffic and another (e.g. eth1) assigned to the external network:
  Disconnecting eth0 would prevent keepalived reports and trigger one of the 
slaves to become master. However, since the failure is outside the router 
namespace, the original master is unaware of it and does not enter the "fault" 
state. Instead it continues to receive traffic on the still-active external 
network interface, eth1.
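
One common keepalived-level mitigation (a sketch of standard keepalived
configuration, not necessarily the fix neutron shipped) is to track the other
interface so that its failure also forces the instance into the FAULT state:

```
vrrp_instance VR_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    nopreempt
    # Illustrative: also track eth1, so losing the second NIC drops this
    # node out of MASTER instead of leaving it half-alive.
    track_interface {
        eth1
    }
}
```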

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1375625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433172] Re: L3 HA routers master state flapping between nodes after router updates or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

2017-07-12 Thread Ihar Hrachyshka
The bug is in keepalived, not neutron, so moving to Won't Fix.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433172

Title:
  L3 HA routers master state flapping between nodes after router updates
  or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

Status in neutron:
  Won't Fix
Status in openstack-ansible:
  Fix Released

Bug description:
  keepalived 1.2.14 introduced a regression when running it in no-preempt mode. 
More details here in a thread I started on the keepalived-devel list:
  http://sourceforge.net/p/keepalived/mailman/message/33604497/

  A fix was backported to 1.2.15-6, and is present in 1.2.16.

  Current status (Updated on the 30th of April, 2015):
  Fedora 20, 21 and 22 have 1.2.16.
  CentOS and RHEL are on 1.2.13

  Ubuntu is using 1.2.10 or older.
  Debian is using 1.2.13.

  In summary, as long as you're not using 1.2.14 or 1.2.15 (Excluding
  1.2.15-6), you're OK, which should be the case if you're using the
  latest keepalived packaged for your distro.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7

2017-07-12 Thread Ihar Hrachyshka
It's a keepalived issue, and supported platforms like CentOS/RHEL or
Xenial already ship fixed packages. We also documented the issue in the
networking guide. There seems to be nothing more we can do on the
neutron side, so moving the bug to Won't Fix.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272

Title:
  L3 HA: Unstable rescheduling time for keepalived v1.2.7

Status in neutron:
  Won't Fix
Status in openstack-ansible:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1 compute 
(Kilo) with this simple scenario:
  1) ping a VM by floating IP
  2) disable the master l3-agent (whose ha_state is active)
  3) wait for pings to continue and another agent to become active
  4) check the number of packets that were lost

  My results are the following:
  1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
  2) When max_l3_agents_per_router=3 or 0 (meaning the router will be scheduled 
on every agent), 10 to 70 packets were lost.

  I should mention that in both cases there was only one HA router.

  It is expected that fewer packets will be lost when
  max_l3_agents_per_router=3(0).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640701] Re: _notify_l3_agent_ha_port_update failed for stable/mitaka

2017-07-12 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640701

Title:
  _notify_l3_agent_ha_port_update failed for stable/mitaka

Status in neutron:
  Fix Released

Bug description:
  Backport https://review.openstack.org/#/c/364407/ bring
  _notify_l3_agent_ha_port_update to Mitaka code with several changes.
  This code if giving constant errors in neutron-server logs
  http://paste.openstack.org/show/588382/.

  Newton version and later are not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696094] Re: CI: ovb-ha promotion job fails with 504 gateway timeout, neutron-server create-subnet timing out

2017-07-12 Thread Ihar Hrachyshka
It was not a neutron bug but an eventlet/dns issue, so marking the bug
as Invalid for neutron.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696094

Title:
  CI: ovb-ha promotion job fails with 504 gateway timeout, neutron-
  server create-subnet timing out

Status in neutron:
  Invalid
Status in tripleo:
  Fix Released

Bug description:
  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-
  ha/2ea94ab/console.html#_2017-06-05_23_52_38_539282

  2017-06-05 23:50:34.148537 | 
+---+--+
  2017-06-05 23:50:35.545475 | neutron CLI is deprecated and will be removed in 
the future. Use openstack CLI instead.
  2017-06-05 23:52:38.539282 | 504 Gateway Time-out
  2017-06-05 23:52:38.539408 | The server didn't respond in time.
  2017-06-05 23:52:38.539437 | 

  It happens where subnet creation should be.
  I see an ovs-vsctl failure in the logs, but I'm not sure it isn't a red 
herring.

  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-ha/2ea94ab/logs/controller-1-tripleo-
  ci-b-bar/var/log/messages

  Jun  5 23:48:22 localhost ovs-vsctl: ovs|1|vsctl|INFO|Called as 
/bin/ovs-vsctl --timeout=5 --id=@manager -- create Manager 
"target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
  Jun  5 23:48:22 localhost ovs-vsctl: ovs|2|db_ctl_base|ERR|transaction 
error: {"details":"Transaction causes multiple rows in \"Manager\" table to 
have identical values (\"ptcp:6640:127.0.0.1\") for index on column \"target\". 
 First row, with UUID 7e2b866a-40d5-4f9c-9e08-0be3bb34b199, existed in the 
database before this transaction and was not modified by the transaction.  
Second row, with UUID 49488cff-271a-457a-b1e7-e6ca3da6f069, was inserted by 
this transaction.","error":"constraint violation"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703935] [NEW] GCE unit test tries to connect to the network

2017-07-12 Thread Joonas Kylmälä
Public bug reported:

The GCE unit test tries to connect to the network; what it should do
instead is mock the HTTP request it makes. See the attachment for tox
output that gives more info about the error.
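
A hedged sketch of the suggested fix (the function and URL below are
illustrative stand-ins, not cloud-init's actual GCE datasource code): patch
the HTTP call so the test never leaves the process.

```python
import unittest
from unittest import mock
import urllib.request

def read_metadata(url="http://metadata.google.internal/computeMetadata/v1/"):
    # Stand-in for the datasource call that the real test accidentally
    # lets reach the network.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

class TestGceMetadata(unittest.TestCase):
    @mock.patch("urllib.request.urlopen")
    def test_no_network_access(self, m_urlopen):
        # The mocked response object is used as a context manager.
        m_urlopen.return_value.__enter__.return_value.read.return_value = b"ok"
        self.assertEqual(read_metadata(), "ok")
        m_urlopen.assert_called_once()
```

Run with `python -m unittest`; the test passes with no network access because
the patched urlopen never executes a request.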

** Affects: cloud-init
 Importance: Medium
 Assignee: Chad Smith (chad.smith)
 Status: In Progress

** Attachment added: "tox output"
   https://bugs.launchpad.net/bugs/1703935/+attachment/4913507/+files/error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1703935

Title:
  GCE unit test tries to connect to the network

Status in cloud-init:
  In Progress

Bug description:
  The GCE unit test tries to connect to the network; what it should do
  instead is mock the HTTP request it makes. See the attachment for tox
  output that gives more info about the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1703935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703917] [NEW] Sometimes test_update_user_password fails with Unauthorized

2017-07-12 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/51/473751/11/gate/gate-tempest-dsvm-neutron-
dvr-ubuntu-xenial/aeb2743/console.html

2017-07-12 09:30:35.693828 | Traceback (most recent call last):
2017-07-12 09:30:35.693890 |   File 
"tempest/api/identity/admin/v3/test_users.py", line 89, in 
test_update_user_password
2017-07-12 09:30:35.693932 | password=new_password).response
2017-07-12 09:30:35.693989 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 132, in auth
2017-07-12 09:30:35.694037 | resp, body = self.post(self.auth_url, 
body=body)
2017-07-12 09:30:35.694088 |   File "tempest/lib/common/rest_client.py", 
line 270, in post
2017-07-12 09:30:35.694143 | return self.request('POST', url, 
extra_headers, headers, body, chunked)
2017-07-12 09:30:35.694201 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 161, in request
2017-07-12 09:30:35.694254 | raise 
exceptions.Unauthorized(resp_body['error']['message'])
2017-07-12 09:30:35.694298 | tempest.lib.exceptions.Unauthorized: 
Unauthorized
2017-07-12 09:30:35.694348 | Details: The request you have made requires 
authentication.

Logstash:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_update_user_password%5C%22

20 hits in 7 days.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1703917

Title:
  Sometimes test_update_user_password fails with Unauthorized

Status in OpenStack Identity (keystone):
  New

Bug description:
  http://logs.openstack.org/51/473751/11/gate/gate-tempest-dsvm-neutron-
  dvr-ubuntu-xenial/aeb2743/console.html

  2017-07-12 09:30:35.693828 | Traceback (most recent call last):
  2017-07-12 09:30:35.693890 |   File 
"tempest/api/identity/admin/v3/test_users.py", line 89, in 
test_update_user_password
  2017-07-12 09:30:35.693932 | password=new_password).response
  2017-07-12 09:30:35.693989 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 132, in auth
  2017-07-12 09:30:35.694037 | resp, body = self.post(self.auth_url, 
body=body)
  2017-07-12 09:30:35.694088 |   File "tempest/lib/common/rest_client.py", 
line 270, in post
  2017-07-12 09:30:35.694143 | return self.request('POST', url, 
extra_headers, headers, body, chunked)
  2017-07-12 09:30:35.694201 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 161, in request
  2017-07-12 09:30:35.694254 | raise 
exceptions.Unauthorized(resp_body['error']['message'])
  2017-07-12 09:30:35.694298 | tempest.lib.exceptions.Unauthorized: 
Unauthorized
  2017-07-12 09:30:35.694348 | Details: The request you have made requires 
authentication.

  Logstash:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_update_user_password%5C%22

  20 hits in 7 days.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1703917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703856] [NEW] 502 Bad gateway error on image-create

2017-07-12 Thread Ellen Batbouta
Public bug reported:


The glance code that I am using is from the upstream master branch (Pike); I 
pulled down the latest code this morning and can still reproduce this problem.

Up until about 2 weeks ago, I was able to upload my database image into
glance using this command:

glance image-create --name 'Db 12.1.0.2' --file
Oracle12201DBRAC_x86_64-xvdb.qcow2 --container-format bare --disk-format
qcow2

However, now it fails as follows:

 glance --debug  image-create --name 'Db 12.1.0.2' --file
Oracle12201DBRAC_x86_64-xvdb.qcow2 --container-format bare --disk-format
qcow2

DEBUG:keystoneauth.session:REQ: curl -g -i -X GET http://172.16.35.10/identity 
-H "Accept: application/json" -H "User-Agent: glance keystoneauth1/2.21.0 
python-requests/2.18.1 CPython/2.7.12"
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.16.35.10
DEBUG:urllib3.connectionpool:http://172.16.35.10:80 "GET /identity HTTP/1.1" 
300 606
DEBUG:keystoneauth.session:RESP: [300] Date: Wed, 12 Jul 2017 14:26:39 GMT 
Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token Content-Type: 
application/json Content-Length: 606 Connection: close 
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": 
"2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.8", "links": 
[{"href": "http://172.16.35.10/identity/v3/";, "rel": "self"}]}, {"status": 
"deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": 
"application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 
"id": "v2.0", "links": [{"href": "http://172.16.35.10/identity/v2.0/";, "rel": 
"self"}, {"href": "https://docs.openstack.org/";, "type": "text/html", "rel": 
"describedby"}]}]}}

DEBUG:keystoneauth.identity.v3.base:Making authentication request to 
http://172.16.35.10/identity/v3/auth/tokens
DEBUG:urllib3.connectionpool:Resetting dropped connection: 172.16.35.10
DEBUG:urllib3.connectionpool:http://172.16.35.10:80 "POST 
/identity/v3/auth/tokens HTTP/1.1" 201 4893
DEBUG:keystoneauth.identity.v3.base:{"token": {"is_domain": false, "methods": 
["password"], "roles": [{"id": "325205c52aba4b31801e2d71ec95483b", "name": 
"admin"}], "expires_at": "2017-07-12T15:26:40.00Z", "project": {"domain": 
{"id": "default", "name": "Default"}, "id": "4aa1233111e140b2a1e4ba170881f092", 
"name": "demo"}, "catalog": [{"endpoints": [{"url": 
"http://172.16.35.10/image";, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "0d10d85bc3ae4e13a49ed344fcf6f737"}], "type": 
"image", "id": "01c2acd1845d4dd28c5b69351fa0dbf3", "name": "glance"}, 
{"endpoints": [{"url": 
"http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"0fbba7f276e44921ba112edd1e157561"}, {"url": 
"http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"internal", "region": "RegionOne", "region_id": "RegionOne", "id": 
"72abdff47e2940f09db32720b709d01f"}, {"url": "http://172.1
 6.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092", "interface": "admin", 
"region": "RegionOne", "region_id": "RegionOne", "id": 
"d2789811c71342d69d69e45c09268ebc"}], "type": "orchestration", "id": 
"343101b65cba48afafb5b70fcbae5c3d", "name": "heat"}, {"endpoints": [{"url": 
"http://172.16.35.10/compute/v2/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d7fe183ce05d46d986c7ec7600b583a5"}], "type": "compute_legacy", "id": 
"3d75e8b88ed14f95b162b5398acfde82", "name": "nova_legacy"}, {"endpoints": 
[{"url": "http://172.16.35.10:8082";, "interface": "admin", "region": 
"RegionOne", "region_id": "RegionOne", "id": 
"65e5e92c5646468583f033cfb05ae0cb"}, {"url": "http://172.16.35.10:8082";, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"8cbae4cbce354314aa5f2b5e5c4e4592"}, {"url": "http://172.16.35.10:8082";, 
"interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d761a53278654ac6
 90fb56b42752c1a4"}], "type": "application-catalog", "id": 
"42038a7b5c744771842615613d21f2ba", "name": "murano"}, {"endpoints": [{"url": 
"http://172.16.35.10/identity";, "interface": "admin", "region": "RegionOne", 
"region_id": "RegionOne", "id": "4b5c6e820b9446f586be1f64da5ae2f6"}, {"url": 
"http://172.16.35.10/identity";, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "f6c18a74f19a4b728eeb5f3916dde7c1"}], "type": 
"identity", "id": "518a08b01ddf4c38ba2dfb0481aa196f", "name": "keystone"}, 
{"endpoints": [{"url": 
"http://172.16.35.10:8776/v1/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"26a34fe1d409488dbc1d4178642de6f1"}], "type": "volume", "id": 
"57e3dbcfe62741b1bbc735d137caf3c7", "name": "cinder"}, {"endpoints": [{"url": 
"http://172.16.35.10:9696/";, "interface": "public", "region"

[Yahoo-eng-team] [Bug 1703369] Re: get_identity_providers policy should be singular

2017-07-12 Thread Jeremy Stanley
Since Luke is running with the OSSN task confirmed, I'm going to take
that as agreement that this is class B2 and set our OSSA task to won't
fix. Thanks!

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Tags added: security

** Information type changed from Public Security to Public

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1703369

Title:
  get_identity_providers policy should be singular

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  New
Status in OpenStack Identity (keystone) ocata series:
  New
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Confirmed

Bug description:
  identity:get_identity_providers should be
  identity:get_identity_provider (singular) since a GET is targeted on a
  single provider and the code is set up to check for
  identity:get_identity_provider (singular). See
  
https://github.com/openstack/keystone/blob/c7e29560b7bf7a44e44722eea0645bf18ad56af3/keystone/federation/controllers.py#L112

  found in master (pike)

  The ocata default policy.json also has this problem. Unless someone
  manually overrode policy to specify identity:get_identity_provider
  (singular), the result would be that the default rule was actually
  used for that check instead of identity:get_identity_providers. We
  could go back and fix the default policy.json for past releases, but
  the default actually has the same value as
  identity:get_identity_providers, and if nobody has complained it's
  probably safer to just leave it. It is, after all, just defaults there
  and anyone can override by specifying the correct value.

  But we must fix in pike to go along with the shift of policy into
  code. Policy defaults in code definitely need to match up with what
  the code actually checks. There should no longer be any reliance on
  the default rule.
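
A minimal policy.json override illustrating the corrected (singular) rule name
(the rule value shown is a typical admin-only default, not necessarily your
deployment's):

```json
{
    "identity:get_identity_provider": "rule:admin_required"
}
```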

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1703369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703844] [NEW] api-ref for GET os-instance-actions does not list event's key(start_time etc) as optional

2017-07-12 Thread Ghanshyam Mann
Public bug reported:

GET os-instance-actions can return an 'events' key in the response if the 
corresponding policy permits it.
'events' in the response is a list of elements with the following keys:
{'event', 'start_time', 'finish_time', 'result', 'traceback'}

The API ref marks those elements of the 'events' response key as mandatory,
which is not the case.

https://developer.openstack.org/api-ref/compute/?expanded=show-server-
action-details-detail#show-server-action-details

The 'events' response key can be an empty list.

Events are created in the DB when event_start() is called from the
wrap_instance_event() decorator:

https://github.com/openstack/nova/blob/0ffe7b27892fde243fc1006f800f309c10d66028/nova/objects/instance_action.py#L169

https://github.com/openstack/nova/blob/0ffe7b27892fde243fc1006f800f309c10d66028/nova/db/sqlalchemy/api.py#L6159

and if no events in DB then API controller going to return 'events' as
empty list.

https://github.com/openstack/nova/blob/0ffe7b27892fde243fc1006f800f309c10d66028/nova/api/openstack/compute/instance_actions.py#L86
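Clients should therefore treat the per-event keys as optional. A minimal
defensive-parsing sketch (the response bodies below are hypothetical,
shaped like the api-ref examples):

```python
# Hypothetical os-instance-actions response bodies: 'events' may be an
# empty list, and per-event keys like start_time may be absent.
action_with_events = {
    "action": "reboot",
    "events": [{"event": "compute_reboot_instance", "result": "Success"}],
}
action_without_events = {"action": "reboot", "events": []}

def event_summaries(action):
    """Read events defensively, defaulting missing keys to None."""
    return [
        {
            "event": ev.get("event"),
            "start_time": ev.get("start_time"),
            "finish_time": ev.get("finish_time"),
            "result": ev.get("result"),
        }
        for ev in action.get("events", [])
    ]
```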

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1703844

Title:
  api-ref for GET os-instance-actions does not list event's keys
  (start_time etc.) as optional

Status in OpenStack Compute (nova):
  New

Bug description:
  GET os-instance-actions can return an 'events' key in the response if the
  corresponding policy permits it.
  Each entry in the 'events' list contains the following elements:
  {'event', 'start_time', 'finish_time', 'result', 'traceback'}

  The API ref lists these elements of the 'events' response key as
  mandatory, which is not the case.

  https://developer.openstack.org/api-ref/compute/?expanded=show-server-action-details-detail#show-server-action-details

  The 'events' response key can also be an empty list.

  Events are created in the DB when event_start() is called from the
  wrap_instance_event() decorator:

  https://github.com/openstack/nova/blob/0ffe7b27892fde243fc1006f800f309c10d66028/nova/objects/instance_action.py#L169

  https://github.com/openstack/nova/blob/0ffe7b27892fde243fc1006f800f309c10d66028/nova/db/sqlalchemy/api.py#L6159

  and if there are no events in the DB, the API controller returns 'events'
  as an empty list:

  https://github.com/openstack/nova/blob/0ffe7b27892fde243fc1006f800f309c10d66028/nova/api/openstack/compute/instance_actions.py#L86

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1703844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459042] Re: cloud-init fails to report IPv6 connectivity when booting

2017-07-12 Thread Dr. Jens Rosenboom
** Changed in: cirros
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1459042

Title:
  cloud-init fails to report IPv6 connectivity when booting

Status in CirrOS:
  Fix Committed
Status in cloud-init:
  Confirmed

Bug description:
  It would be convenient to see the IPv6 networking information printed
  at boot, similar to how the IPv4 networking information currently is.

  Output from the boot log:
  [   15.621085] cloud-init[1058]: Cloud-init v. 0.7.7 running 'init' at Tue, 14 Jun 2016 13:48:14 +. Up 6.71 seconds.
  [   15.622670] cloud-init[1058]: ci-info: +++Net device info+++
  [   15.624106] cloud-init[1058]: ci-info: +--------+------+------------+-------------+-------+-------------------+
  [   15.625516] cloud-init[1058]: ci-info: | Device |  Up  |  Address   |     Mask    | Scope |     Hw-Address    |
  [   15.627058] cloud-init[1058]: ci-info: +--------+------+------------+-------------+-------+-------------------+
  [   15.628504] cloud-init[1058]: ci-info: | ens3:  | True | 10.42.0.48 | 255.255.0.0 |   .   | fa:16:3e:f9:86:07 |
  [   15.629930] cloud-init[1058]: ci-info: | ens3:  | True |     .      |      .      |   d   | fa:16:3e:f9:86:07 |
  [   15.631334] cloud-init[1058]: ci-info: |  lo:   | True | 127.0.0.1  |  255.0.0.0  |   .   |         .         |
  [   15.632765] cloud-init[1058]: ci-info: |  lo:   | True |     .      |      .      |   d   |         .         |
  [   15.634221] cloud-init[1058]: ci-info: +--------+------+------------+-------------+-------+-------------------+
  [   15.635671] cloud-init[1058]: ci-info: +++Route IPv4 info+++
  [   15.637186] cloud-init[1058]: ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  [   15.638682] cloud-init[1058]: ci-info: | Route |   Destination   |  Gateway  |     Genmask     | Interface | Flags |
  [   15.640182] cloud-init[1058]: ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  [   15.641657] cloud-init[1058]: ci-info: |   0   |     0.0.0.0     | 10.42.0.1 |     0.0.0.0     |    ens3   |   UG  |
  [   15.643149] cloud-init[1058]: ci-info: |   1   |    10.42.0.0    |  0.0.0.0  |   255.255.0.0   |    ens3   |   U   |
  [   15.644661] cloud-init[1058]: ci-info: |   2   | 169.254.169.254 | 10.42.0.1 | 255.255.255.255 |    ens3   |  UGH  |
  [   15.646175] cloud-init[1058]: ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+

  Output from running system:
  ci-info: +++Net device info+++
  ci-info: +------------+-------+-----------------------------------------+---------------+--------+-------------------+
  ci-info: |   Device   |   Up  |                 Address                 |      Mask     | Scope  |     Hw-Address    |
  ci-info: +------------+-------+-----------------------------------------+---------------+--------+-------------------+
  ci-info: |    ens3    |  True |               10.42.0.44                |  255.255.0.0  |   .    | fa:16:3e:90:11:e0 |
  ci-info: |    ens3    |  True | 2a04:3b40:8010:1:f816:3eff:fe90:11e0/64 |       .       | global | fa:16:3e:90:11:e0 |
  ci-info: |     lo     |  True |                127.0.0.1                |   255.0.0.0   |   .    |         .         |
  ci-info: |     lo     |  True |                 ::1/128                 |       .       |  host  |         .         |
  ci-info: +------------+-------+-----------------------------------------+---------------+--------+-------------------+
  ci-info: +++Route IPv4 info+++
  ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  ci-info: | Route |   Destination   |  Gateway  |     Genmask     | Interface | Flags |
  ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  ci-info: |   0   |     0.0.0.0     | 10.42.0.1 |     0.0.0.0     |    ens3   |   UG  |
  ci-info: |   1   |    10.42.0.0    |  0.0.0.0  |   255.255.0.0   |    ens3   |   U   |
  ci-info: |   2   | 169.254.169.254 | 10.42.0.1 | 255.255.255.255 |    ens3   |  UGH  |
  ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+

  $ netstat -rn46
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
  0.0.0.0         10.42.0.1       0.0.0.0         UG        0 0          0 ens3
  10.42.0.0       0.0.0.0         255.255.0.0     U         0 0          0 ens3
  169.254.169.254 10.42.0.1       255.255.255.255 UGH       0 0          0 ens3
  192.168.122.0   0.0.0.0         255.255.255.0   U
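  The ci-info tables above follow a simple fixed-width layout; a minimal
  renderer in that style can be sketched as follows (ci_table is a
  hypothetical helper written for illustration, not cloud-init's actual
  code):

```python
def ci_table(headers, rows):
    """Render rows in the ci-info '+---+' table style shown in the logs."""
    # Width of each column: the widest cell, header included.
    widths = [max(len(str(c)) for c in [h] + [r[i] for r in rows])
              for i, h in enumerate(headers)]
    sep = "+" + "+".join("-" * (w + 2) for w in widths) + "+"

    def line(cells):
        return "| " + " | ".join(str(c).ljust(w) for c, w in zip(cells, widths)) + " |"

    return "\n".join([sep, line(headers), sep] + [line(r) for r in rows] + [sep])
```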

[Yahoo-eng-team] [Bug 1703789] [NEW] Disk setup example text only lists MBR as valid table_type

2017-07-12 Thread Sandor Zeestraten
Public bug reported:

The disk setup example in the docs mentions that only MBR table_type is
supported, while support for GPT was introduced in 0.7.7.

See here for disk setup example text: https://git.launchpad.net/cloud-init/tree/doc/examples/cloud-config-disk-setup.txt#n99
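For reference, a minimal cloud-config sketch using GPT (the device name
/dev/vdb is a placeholder chosen for illustration):

```yaml
#cloud-config
# Sketch only: /dev/vdb is an assumed device name for this example.
disk_setup:
  /dev/vdb:
    table_type: gpt   # 'gpt' accepted since cloud-init 0.7.7, alongside 'mbr'
    layout: true
    overwrite: false
```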

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1703789

Title:
  Disk setup example text only lists MBR as valid table_type

Status in cloud-init:
  New

Bug description:
  The disk setup example in the docs mentions that only MBR table_type
  is supported, while support for GPT was introduced in 0.7.7.

  See here for disk setup example text: https://git.launchpad.net/cloud-init/tree/doc/examples/cloud-config-disk-setup.txt#n99

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1703789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703760] [NEW] Can't update flavor access if base information is not changed.

2017-07-12 Thread Debo Zhang
Public bug reported:

In the update flavor form, I changed only the access and saved, and it failed.
When I later changed both the name and the access, it succeeded.
So changing access alone does not work if the base information is unchanged.
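The failure mode can be sketched in a few lines (the field names are
hypothetical stand-ins, not Horizon's actual form fields):

```python
# Toy sketch: a save path that only fires when base fields change will
# silently drop edits that touch the access list alone.
BASE_FIELDS = ("name", "vcpus", "ram")

def base_fields_changed(initial, cleaned):
    return any(initial[f] != cleaned[f] for f in BASE_FIELDS)

def form_changed(initial, cleaned):
    # The fix: compare the access list as well as the base fields.
    return base_fields_changed(initial, cleaned) or initial["access"] != cleaned["access"]
```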

** Affects: horizon
 Importance: Undecided
 Assignee: Debo Zhang (laun-zhangdebo)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Debo Zhang (laun-zhangdebo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1703760

Title:
  Can't update flavor access if base information is not changed.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the update flavor form, I changed only the access and saved, and it failed.
  When I later changed both the name and the access, it succeeded.
  So changing access alone does not work if the base information is unchanged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1703760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp