[Yahoo-eng-team] [Bug 1521193] Re: Filters do not support non-ASCII values

2017-02-01 Thread Bhagyashri Shewale
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Bhagyashri Shewale (bhagyashri-shewale)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521193

Title:
  Filters do not support non-ASCII values

Status in Glance:
  New
Status in Glance Client:
  Invalid

Bug description:
  Glanceclient does not encode the value of "--property-filter", which
  causes a 500 error on the WSGI server.
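
  A minimal sketch of the client-side fix this implies, assuming Python 2
  semantics (the helper name is hypothetical, not glanceclient's actual
  code): unicode filter values must be UTF-8 encoded before they reach
  urllib's query-string building.

    # -*- coding: utf-8 -*-
    # Hypothetical sketch: encode unicode filter values before building
    # the query string so non-ASCII input survives urlencode.
    import six
    from six.moves.urllib import parse

    def encode_filters(filters):
        """Return a copy of the filter dict with unicode values UTF-8 encoded."""
        encoded = {}
        for key, value in filters.items():
            if isinstance(value, six.text_type):
                value = value.encode('utf-8')
            encoded[key] = value
        return encoded

    filters = {'name': u'Привет', 'sort_key': 'name', 'sort_dir': 'asc'}
    print('/v2/images?' + parse.urlencode(encode_filters(filters)))
    # -> /v2/images?name=%D0%9F%D1%80%D0%B8%D0%B2%D0%B5%D1%82&...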

  STEPS TO REPRODUCE:
  glance --debug image-list --property-filter name=Привет

  ACTUAL RESULT:
  curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}e2f09d062a40c2c9dd36331412527ca29c01f2c7' -H 'Content-Type: application/octet-stream' http://172.18.66.81:9292/v2/images?limit=2000&name=%D0%9F%D1%80%D0%B8%D0%B2%D0%B5%D1%82&sort_key=name&sort_dir=asc
  Request returned failure status 500.
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/shell.py", line 709, in main
      args.func(client, args)
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/shell.py", line 182, in do_image_list
      utils.print_list(images, columns)
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/utils.py", line 183, in print_list
      for o in objs:
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 176, in list
      for image in paginate(url, page_size, limit):
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 108, in paginate
      resp, body = self.http_client.get(next_url)
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", line 280, in get
      return self._request('GET', url, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", line 272, in _request
      resp, body_iter = self._handle_response(resp)
    File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", line 93, in _handle_response
      raise exc.from_response(resp, resp.content)
  HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
  HTTPInternalServerError (HTTP 500)

  
  logs on glance-server:
  http://paste.openstack.org/show/480358/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521193/+subscriptions



[Yahoo-eng-team] [Bug 1656276] Re: Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2017-02-01 Thread Emilien Macchi
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New
Status in Packstack:
  New
Status in puppet-nova:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting the
  following errors:

  Debug: Executing: '/usr/bin/nova-manage cell_v2 simple_cell_setup --transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: /Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns: Sleeping for 5 seconds between tries
  Notice: /Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns: Cell0 is already setup.
  Notice: /Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns: No hosts found to map to cell, exiting.

  The issue seems to be that "nova-manage cell_v2 simple_cell_setup" runs
  as part of the nova database initialization, before any compute nodes
  have been created, and it returns 1 in that case [1]. However, note that
  the previous steps (cell0 mapping and schema migration) were run
  successfully.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need simple_cell_setup to return 0 when there are no
  compute nodes, or a different command.

  With the current behavior of nova-manage, the only working workflow is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Adding all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 simple_cell_setup".

  Am I right? Is there any better alternative?
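
  A minimal sketch of the orchestration this implies, in Python (the
  wrapper function is hypothetical; the commands and transport URL are the
  ones from the log above): treat exit code 1 from simple_cell_setup as
  "no hosts yet" rather than a hard failure.

    import subprocess

    def run(cmd, ok_codes=(0,)):
        """Run a bootstrap step, accepting a set of benign exit codes."""
        rc = subprocess.call(cmd)
        if rc not in ok_codes:
            raise RuntimeError('%s failed with exit code %d' % (cmd[0], rc))
        return rc

    run(['nova-manage', 'db', 'sync'])
    # Exit code 1 currently means "no hosts found to map", which is
    # expected before any compute node exists (step 3 above).
    run(['nova-manage', 'cell_v2', 'simple_cell_setup',
         '--transport-url', 'rabbit://guest:guest@172.19.2.159:5672/?ssl=0'],
        ok_codes=(0, 1))
    # After compute nodes are added (step 4), map them (step 5).
    run(['nova-manage', 'cell_v2', 'discover_hosts'])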

  
  [1] https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions



[Yahoo-eng-team] [Bug 1555384] Re: ML2: routers and multiple mechanism drivers

2017-02-01 Thread Armando Migliaccio
Happy to hear what others think, but I personally feel this is an invalid
neutron issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555384

Title:
  ML2: routers and multiple mechanism drivers

Status in networking-midonet:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  We have an ML2 environment with linuxbridge and midonet networks. For
  L3 we use the midonet driver.

  If a user tries to bind a linuxbridge port to a midonet router, it
  returns the following error:

  {"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-a1a4-9dfe3d22e62c.","code":404}

  However, the port is created (though the user can't see it), and a
  router show lists the router as having that interface.

  Ideally this should not be allowed and should error out gracefully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1555384/+subscriptions



[Yahoo-eng-team] [Bug 1658571] Re: Microversion 2.37 breaks 2.32 usage

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/424759
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e80e2511cf825671a479053cc8d41463aab1caaa
Submitter: Jenkins
Branch: master

commit e80e2511cf825671a479053cc8d41463aab1caaa
Author: Artom Lifshitz 
Date:   Tue Jan 24 12:27:15 2017 -0500

Fix tag attribute disappearing in 2.33 and 2.37

In the context of device tagging, bugs have caused the tag attribute
to disappear starting with version 2.33 for block_devices and starting
with version 2.37 for network interfaces. In other words, block
devices could only be tagged in 2.32 and network interfaces between
2.32 and 2.36 inclusively.

This patch documents this behaviour in api-ref and introduces
microversion 2.42, which re-adds the tag in all the right places.

Change-Id: Ia0869dc6f7f5bd347ccbd0930d1d668d37695a22
Closes-bug: 1658571
Implements: blueprint fix-tag-attribute-disappearing


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658571

Title:
  Microversion 2.37 breaks 2.32 usage

Status in OpenStack Compute (nova):
  Fix Released
Status in python-novaclient:
  New

Bug description:
  Device tagging support was added in microversion 2.32. For ports:
  http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/servers.py#n76
  but the later microversion 2.37 accidentally removed it:
  http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/servers.py#n82

  And for bdms: the schema is added by
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/block_device_mapping.py#L76
  and it only works at microversion 2.32.

  So this function is only usable from microversion 2.32 up to (but not
  including) 2.37 for ports, and only at 2.32 for bdms.

  We should fix it and backport to Newton.
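
  A minimal sketch of the schema-level shape of the problem, using
  illustrative JSON-schema-style dicts rather than nova's actual schemas:
  a new microversion's schema is typically a copy of the previous one, so
  failing to carry the 'tag' property over silently removes it.

    import copy

    network_v232 = {
        'type': 'object',
        'properties': {
            'uuid': {'type': 'string', 'format': 'uuid'},
            'port': {'type': 'string', 'format': 'uuid'},
            'tag': {'type': 'string'},  # added in 2.32
        },
    }

    # 2.37 rebuilt the schema and dropped 'tag', removing the feature.
    network_v237 = copy.deepcopy(network_v232)
    del network_v237['properties']['tag']

    # The fix (microversion 2.42 in the real change) re-adds the property.
    network_v242 = copy.deepcopy(network_v237)
    network_v242['properties']['tag'] = {'type': 'string'}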

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1658571/+subscriptions



[Yahoo-eng-team] [Bug 1661111] Re: Update bug triaging policy for documentation

2017-02-01 Thread Armando Migliaccio
This is a bot brainfart because of the spurious docimpact flag in the
commit message.

** Changed in: neutron
   Status: New => Invalid

** Tags removed: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661111

Title:
  Update bug triaging policy for documentation

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/427672
  Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.

  commit 792cc0fddc5f06f3439e634fdb4c7d7e00a7a814
  Author: John Davidge 
  Date:   Wed Feb 1 12:38:59 2017 +

  Update bug triaging policy for documentation
  
  Update the bug triage policy for neutron DocImpact bugs affecting
  openstack-manuals.
  
  The two policy additions are:
  
  1. Tagging the bug for the appropriate guide. This will assist the
     openstack-manuals team with their part of the triage process.
  2. Removing neutron from affected projects if the bug only affects
     openstack-manuals. This will help reduce clutter in the neutron bug queue.
  
  Change-Id: Iac2dd10f1476cde0843c356f0d161b5a59943c99

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661111/+subscriptions



[Yahoo-eng-team] [Bug 1549443] Re: Port Security does not consistently update nova iptables

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/421832
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a8b6a597b6aab7cd3b0a5d0c3baad75af395fe1d
Submitter: Jenkins
Branch: master

commit a8b6a597b6aab7cd3b0a5d0c3baad75af395fe1d
Author: Bernard Cafarelli 
Date:   Thu Jan 19 14:14:12 2017 +0100

Revert "Setup firewall filters only for required ports"

This reverts commit 75edc1ff28a460342a9b5e5b7d63c6f4fb59862d.

Ports with port security disabled require firewall entries in the
neutron-openvswi-FORWARD chain to work properly.
Ports created with no security groups will not get skipped with the
current code.
With the fixed security-groups check, these ports' security groups cannot
be updated after creation.

Change-Id: I95ddbe38d8ac8a927a860a98f54e41e17fb71d43
Closes-Bug: #1549443


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549443

Title:
  Port Security does not consistently update nova iptables

Status in neutron:
  Fix Released

Bug description:
  I have created a network with port security enabled. I have set
  --no-security-group and --port_security_enabled=False on the port;
  however, the iptables rules on the hypervisor are not consistently set.

  I have 2 VMs on this hypervisor:

  VM1: 
  tap0cc26c65-d1

  VM2: 
  tap672dbe42-10

  Dump of iptables-save:
  -A INPUT -j neutron-openvswi-INPUT
  -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A INPUT -p icmp -j ACCEPT
  -A INPUT -i lo -j ACCEPT
  -A INPUT -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-openvswi-FORWARD
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
  -A OUTPUT -j neutron-filter-top
  -A OUTPUT -j neutron-openvswi-OUTPUT
  -A OUTPUT -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
  -A neutron-filter-top -j neutron-openvswi-local
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tap85e24fb1-61 --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j neutron-openvswi-sg-chain
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tap85e24fb1-61 --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j neutron-openvswi-sg-chain
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tap1fe43774-ef --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j neutron-openvswi-sg-chain
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tap1fe43774-ef --physdev-is-bridged -m comment --comment "Direct traffic from the VM interface to the security group chain." -j neutron-openvswi-sg-chain
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tap0cc26c65-d1 --physdev-is-bridged -m comment --comment "Accept all packets when port security is disabled." -j ACCEPT
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tap0cc26c65-d1 --physdev-is-bridged -m comment --comment "Accept all packets when port security is disabled." -j ACCEPT
  -A neutron-openvswi-INPUT -m physdev --physdev-in tap85e24fb1-61 --physdev-is-bridged -m comment --comment "Direct incoming traffic from VM to the security group chain." -j neutron-openvswi-o85e24fb1-6
  -A neutron-openvswi-INPUT -m physdev --physdev-in tap1fe43774-ef --physdev-is-bridged -m comment --comment "Direct incoming traffic from VM to the security group chain." -j neutron-openvswi-o1fe43774-e
  -A neutron-openvswi-INPUT -m physdev --physdev-in tap0cc26c65-d1 --physdev-is-bridged -m comment --comment "Accept all packets when port security is disabled." -j ACCEPT
  -A neutron-openvswi-i1fe43774-e -m state --state RELATED,ESTABLISHED -m comment --comment "Direct packets associated with a known session to the RETURN chain." -j RETURN
  -A neutron-openvswi-i1fe43774-e -s 10.1.51.1/32 -p udp -m udp --sport 67 -m udp --dport 68 -j RETURN
  -A neutron-openvswi-i1fe43774-e -p tcp -m tcp -m multiport --dports 1:65535 -j RETURN
  -A neutron-openvswi-i1fe43774-e -p udp -m udp -m multiport --dports 1:65535 -j RETURN
  -A neutron-openvswi-i1fe43774-e -m set --match-set NIPv4a5bf8991-231c-43db-9dd0- src -j RETURN
  -A neutron-openvswi-i1fe43774-e -p icmp -j RETURN
  -A neutron-openvswi-i1fe43774-e -m state --state INVALID -m comment --comment "Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack." -j DROP
  -A neutron-openvswi-i1fe43774-e -m comment --comment "Send unmatched traffic to the fallback chain." -j neutron-openvswi-sg-fallback
  -A neutron-openvswi-i85e24fb1-6 -m state --state RELATED,ESTABLISHED -m comment --comment "Direct packets associated with a known session to the RETURN chain." -j RETURN
  -A neu

[Yahoo-eng-team] [Bug 1660317] Re: NotImplementedError for detach_interface in nova-compute during instance deletion

2017-02-01 Thread Matt Riedemann
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1660317

Title:
  NotImplementedError for detach_interface in nova-compute during
  instance deletion

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in ironic package in Ubuntu:
  New

Bug description:
  When a baremetal instance is deleted, there is a harmless but annoying
  traceback in the nova-compute output.

  nova.compute.manager[26553]: INFO [instance: e265be67-9e87-44ea-95b6-641fc2dcaad8] Terminating instance [req-5f1eba69-239a-4dd4-8677-f28542b190bc 5a08515f35d749068a6327e387ca04e2 7d450ecf00d64399aeb93bc122cb6dae - - -]
  nova.compute.resource_tracker[26553]: INFO Auditing locally available compute resources for node d02c7361-5e3a-4fdf-89b5-f29b3901f0fc [req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Final resource view: name=d02c7361-5e3a-4fdf-89b5-f29b3901f0fc phys_ram=0MB used_ram=8096MB phys_disk=0GB used_disk=480GB total_vcpus=0 used_vcpus=0 pci_stats=[] [req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Compute_service record updated for bare-compute1:d02c7361-5e3a-4fdf-89b5-f29b3901f0fc [req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.manager[26553]: INFO [instance: e265be67-9e87-44ea-95b6-641fc2dcaad8] Neutron deleted interface 6b563aa7-64d3-4105-9ed5-c764fee7b536; detaching it from the instance and deleting it from the info cache [req-fdfeee26-a860-40a5-b2e3-2505973ffa75 11b95cf353f74788938f580e13b652d8 93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: ERROR Exception during message handling [req-fdfeee26-a860-40a5-b2e3-2505973ffa75 11b95cf353f74788938f580e13b652d8 93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: TRACE Traceback (most recent call last):
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
  oslo_messaging.rpc.server[26553]: TRACE     res = self.dispatcher.dispatch(message)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch
  oslo_messaging.rpc.server[26553]: TRACE     return self._do_dispatch(endpoint, method, ctxt, args)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch
  oslo_messaging.rpc.server[26553]: TRACE     result = func(ctxt, **new_args)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 75, in wrapped
  oslo_messaging.rpc.server[26553]: TRACE     function_name, call_dict, binary)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  oslo_messaging.rpc.server[26553]: TRACE     self.force_reraise()
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  oslo_messaging.rpc.server[26553]: TRACE     six.reraise(self.type_, self.value, self.tb)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 66, in wrapped
  oslo_messaging.rpc.server[26553]: TRACE     return f(self, context, *args, **kw)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6691, in external_instance_event
  oslo_messaging.rpc.server[26553]: TRACE     event.tag)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6660, in _process_instance_vif_deleted_event
  oslo_messaging.rpc.server[26553]: TRACE     self.driver.detach_interface(instance, vif)
  oslo_messaging.rpc.server[26553]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 524, in detach_interface
  oslo_messaging.rpc.server[26553]: TRACE     raise NotImplementedError()
  oslo_messaging.rpc.server[26553]: TRACE NotImplementedError
  oslo_messaging.rpc.server[26553]: TRACE

  
  Affected version:
  nova 14.0.3
  neutron 6.0.0
  ironic 6.2.1

  configuration for nova-compute:
  compute_driver = ironic.IronicDriver

  Ironic is configured to use neutron networks with generic switch as the
  mechanism driver for the ML2 plugin.
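
  A minimal sketch of the kind of guard this implies, assuming the method
  names from the traceback above (the helper itself is hypothetical):
  treat NotImplementedError from a driver without interface-detach support
  as a loggable no-op instead of letting it escape the RPC handler.

    import logging

    LOG = logging.getLogger(__name__)

    def detach_interface_safely(driver, instance, vif):
        """Detach a VIF, tolerating drivers that do not implement it."""
        try:
            driver.detach_interface(instance, vif)
        except NotImplementedError:
            # The virt driver (the ironic driver here) cannot detach
            # interfaces; the port is already gone in Neutron, so log
            # and carry on.
            LOG.debug('detach_interface not supported by driver; skipping')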

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1660317/+subscriptions



[Yahoo-eng-team] [Bug 1660959] Re: placement resource provider filtering does not work with postgres

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427667
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=03eced19f5d6665724a4fa432401be742383f8cf
Submitter: Jenkins
Branch: master

commit 03eced19f5d6665724a4fa432401be742383f8cf
Author: Mehdi Abaakouk 
Date:   Wed Feb 1 13:31:38 2017 +0100

placement-api: fix ResourceProviderList query

The ResourceProviderList query uses groupby without all grouped columns.
This works on mysql (with unpredictable results), but doesn't for other RDBMSs.

For example, the postgresql gating dsvm jobs that use nova are currently
broken.

This change removes the unused consumer_id from the first query, and uses
the primary key 'id' instead of 'uuid' in the second groupby. (Because
groupby in postgresql requires a PK or all non-aggregated columns.)

The fix is tested by the gate-ceilometer-dsvm-tempest-plugin-postgresql-ubuntu-xenial
job here: https://review.openstack.org/#/c/427668/

closes-bug: #1660959
Change-Id: I6cc93ba0dd569d56696c9210d38dd2d77b4157c1


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660959

Title:
  placement resource provider filtering does not work with postgres

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Telemetry tests with postgres found a bug in the sql used to filter
  resource providers that is breaking their gate:

  http://logs.openstack.org/82/405682/8/check/gate-ceilometer-dsvm-tempest-plugin-postgresql-ubuntu-xenial/02f896f/logs/apache/placement-api.txt.gz?level=ERROR

  The fix appears to be adding to the group_by on the usage join:

     usage = usage.group_by(_ALLOC_TBL.c.resource_provider_id,
    -                       _ALLOC_TBL.c.resource_class_id)
    +                       _ALLOC_TBL.c.resource_class_id,
    +                       _ALLOC_TBL.c.consumer_id)

  Not sure about the ordering.
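
  A minimal sketch of the underlying rule, in SQLAlchemy 1.x Core with an
  illustrative stand-in table (not nova's actual schema): on PostgreSQL
  every selected non-aggregated column must appear in GROUP BY, while
  MySQL silently returns arbitrary values for the missing ones.

    from sqlalchemy import (Column, Integer, MetaData, Table,
                            create_engine, func, select)

    metadata = MetaData()
    alloc = Table('allocations', metadata,
                  Column('resource_provider_id', Integer),
                  Column('resource_class_id', Integer),
                  Column('used', Integer))

    # Every non-aggregated selected column is also listed in GROUP BY.
    usage = (select([alloc.c.resource_provider_id,
                     alloc.c.resource_class_id,
                     func.sum(alloc.c.used).label('used')])
             .group_by(alloc.c.resource_provider_id,
                       alloc.c.resource_class_id))

    engine = create_engine('sqlite://')
    metadata.create_all(engine)
    print(engine.execute(usage).fetchall())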

  (full log example below)




  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler [req-f0c425b6-bd71-44ae-ae33-46ce688d53dd service placement] Uncaught exception
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler Traceback (most recent call last):
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/handler.py", line 195, in __call__
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return dispatch(environ, start_response, self._map)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/handler.py", line 122, in dispatch
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return handler(environ, start_response)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     resp = self.call_func(req, *args, **self.kwargs)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return self.func(req, *args, **kwargs)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/util.py", line 55, in decorated_function
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return f(req)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/handlers/resource_provider.py", line 305, in list_resource_providers
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     context, filters)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/objects/resource_provider.py", line 695, in get_all_by_filters
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     resource_providers = cls._get_all_by_filters_from_db(context, filters)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 894, in wrapper
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return fn(*args, **kwargs)
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/objects/resource_provider.py", line 675, in _get_all_by_filters_from_db
  2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return query.all

[Yahoo-eng-team] [Bug 1660997] Re: instance cannot be created, always status ERROR

2017-02-01 Thread Matt Riedemann
There aren't any errors in the n-api logs, are there errors in the n-cpu
logs? And if you're an admin, you can see details on the server using
the 'nova show' command and that would show any faults associated with
the instance to tell you why it's in error state.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660997

Title:
  instance cannot be created, always status ERROR

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hello,
  I am running a two-node OpenStack Mitaka deployment on two Ubuntu 16.04 servers.
  As per the installation guide, http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-provider.html, I created a provider network.

  Then I tried to create a server. The command went well, but when I
  checked the list of servers, it showed ERROR status. Please see the
  commands.

   $openstack server create --flavor m1.nano --image cirros --nic net-id=67dfa549-856b-4884-9cc8-f3570d0cfdc5 --security-group default --key-name mykey testserver2

  $ openstack server list
  +--------------------------------------+-------------+--------+----------+------------+
  | ID                                   | Name        | Status | Networks | Image Name |
  +--------------------------------------+-------------+--------+----------+------------+
  | a84a8db3-e40f-4c84-8a26-81aaddfd8230 | testserver2 | ERROR  |          | cirros     |
  +--------------------------------------+-------------+--------+----------+------------+

  Why couldn't I start the instance? Please help if you know. Thanks.

  
  Log file from nova-api.log:
  In the log file there is an HTTP exception thrown: "Image not found"

  
  2017-02-01 15:03:27.404 2532 INFO nova.api.openstack.wsgi [req-cfd597c8-50e1-4fa9-ba4e-7cff96dd1b19 bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Image not found.
  2017-02-01 15:03:27.415 2532 INFO nova.osapi_compute.wsgi.server [req-cfd597c8-50e1-4fa9-ba4e-7cff96dd1b19 bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET /v2.1/8db9e72abf27483c9197048dc6407208/images/cirros HTTP/1.1" status: 404 len: 416 time: 0.3876941
  2017-02-01 15:03:27.456 2532 INFO nova.api.openstack.wsgi [req-af99d2cf-012b-4abc-932e-6403ed06f7de bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Image not found.
  2017-02-01 15:03:27.457 2532 INFO nova.osapi_compute.wsgi.server [req-af99d2cf-012b-4abc-932e-6403ed06f7de bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET /v2.1/8db9e72abf27483c9197048dc6407208/images/cirros HTTP/1.1" status: 404 len: 416 time: 0.0388799
  2017-02-01 15:03:27.588 2532 INFO nova.osapi_compute.wsgi.server [req-fe4da280-897d-4d40-adf8-1ea10b4f5416 bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET /v2.1/8db9e72abf27483c9197048dc6407208/images HTTP/1.1" status: 200 len: 830 time: 0.1273210
  2017-02-01 15:03:27.659 2532 INFO nova.osapi_compute.wsgi.server [req-f88f7566-627f-46dc--20ac376d7efb bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET /v2.1/8db9e72abf27483c9197048dc6407208/images/69d6443d-e783-43b9-b729-cfb5c6b6f0e3 HTTP/1.1" status: 200 len: 1011 time: 0.0668230
  2017-02-01 15:03:27.692 2532 INFO nova.api.openstack.wsgi [req-73298ede-548a-415f-8aa3-468df5e37e4d bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Flavor m1.nano could not be found.
  2017-02-01 15:03:27.693 2532 INFO nova.osapi_compute.wsgi.server [req-73298ede-548a-415f-8aa3-468df5e37e4d bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET /v2.1/8db9e72abf27483c9197048dc6407208/flavors/m1.nano HTTP/1.1" status: 404 len: 434 time: 0.0313950
  2017-02-01 15:03:27.721 2532 INFO nova.api.openstack.wsgi [req-a8cd25aa-03d4-4765-86d7-a17cf7e3819e bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Flavor m1.nano could not be found.
  2017-02-01 15:03:27.722 2532 INFO nova.osapi_compute.wsgi.server [req-a8cd25aa-03d4-4765-86d7-a17cf7e3819e bb198515208f4a94ba3738dc3ad544f0 8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a

[Yahoo-eng-team] [Bug 1661024] Re: schedule_and_build_instances short-circuits on all instances if one build request is already deleted

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427839
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f09d11269a76b04dacd0f1425ae637490d6ddca9
Submitter: Jenkins
Branch: master

commit f09d11269a76b04dacd0f1425ae637490d6ddca9
Author: Matt Riedemann 
Date:   Wed Feb 1 13:02:45 2017 -0500

Continue processing build requests even if one is gone already

There was a very subtle yet busted 'return' statement in the
except block when cleaning up an instance in the case that the
build request was already deleted. This return statement would
short-circuit the for loop it's in such that no other build
requests in the loop would get processed (so those instances
wouldn't get built).

This was probably missed because of how large the method is so
that when you're looking at that cleanup code, it's easy to miss
that you're in a for loop.

This change moves the build request cleanup block into a private
method so it's more self-contained, and fixes the issue with the
return statement by changing it to a 'continue' statement.

An existing test that deals with multiple instances already is
updated to show the bug and the fix (and also cleaned up a bit
in the process to avoid lots of copy/paste).

Change-Id: I399023ea705c514c33d07cc3613d79744cbf7a07
Closes-Bug: #1661024


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661024

Title:
  schedule_and_build_instances short-circuits on all instances if one
  build request is already deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The 'return' statement here should be a 'continue':

  
https://github.com/openstack/nova/blob/d308d1f70e7e840b1b0c8f4307998d89f9a5ddff/nova/conductor/manager.py#L950

  That's in a block of code that's cleaning up an instance recently
  created if the build request was already deleted by the time conductor
  tried to delete the build request, i.e. the user deleted the instance
  before it was created (which actually deleted the build request in
  nova-api).

  The return is wrong though since we're in a loop over build_requests,
  so if we hit that, and there are more build requests to process, those
  other instances won't get built.

  It's easy to miss this context because the method is so large. We
  should break the build request cleanup code into a separate private
  helper method.
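
  A minimal sketch of the control-flow difference, independent of nova's
  actual code (the exception and field names are illustrative): a 'return'
  in an except block inside a for loop abandons every remaining item,
  while 'continue' moves on to the next one.

    class BuildRequestNotFound(Exception):
        """Stand-in for the 'build request already deleted' condition."""

    def process_build_requests(build_requests):
        built = []
        for request in build_requests:
            try:
                if request.get('deleted'):
                    raise BuildRequestNotFound()
                built.append(request['name'])
            except BuildRequestNotFound:
                # Clean up the just-created instance here, then move on.
                # A 'return' here would silently skip every remaining
                # request -- the bug this change fixes.
                continue
        return built

    requests = [{'name': 'a'}, {'name': 'b', 'deleted': True}, {'name': 'c'}]
    print(process_build_requests(requests))  # ['a', 'c']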

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661024/+subscriptions



[Yahoo-eng-team] [Bug 1661113] [NEW] Filtering servers by terminated_at does not work

2017-02-01 Thread Matt Riedemann
Public bug reported:

I noticed this in review after the code was merged:

https://review.openstack.org/#/c/408571/41/nova/api/openstack/compute/schemas/servers.py@277

That field should be 'terminated_at' to match the DB field.
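
A minimal sketch of why the name matters, with an illustrative whitelist
rather than nova's actual JSON schema: the API layer only passes through
whitelisted query parameters, so a misspelled key means the real DB column
can never be filtered on.

  # Illustrative only: nova's real schema lists many more keys.
  VALID_FILTER_KEYS = {'created_at', 'terminated_at'}  # must match DB columns

  def build_db_filters(query_params):
      """Keep only whitelisted filter keys, as the schema check would."""
      return {k: v for k, v in query_params.items()
              if k in VALID_FILTER_KEYS}

  # If the whitelist carried a misspelled entry instead of
  # 'terminated_at', that filter would be silently dropped.
  print(build_db_filters({'terminated_at': '2017-02-01', 'bogus': 'x'}))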

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: api ocata-rc-potential

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Tags added: ocata-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661113

Title:
  Filtering servers by terminated_at does not work

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  I noticed this in review after the code was merged:

  
https://review.openstack.org/#/c/408571/41/nova/api/openstack/compute/schemas/servers.py@277

  That field should be 'terminated_at' to match the DB field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661113/+subscriptions



[Yahoo-eng-team] [Bug 1661111] [NEW] Update bug triaging policy for documentation

2017-02-01 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/427672
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.

commit 792cc0fddc5f06f3439e634fdb4c7d7e00a7a814
Author: John Davidge 
Date:   Wed Feb 1 12:38:59 2017 +

Update bug triaging policy for documentation

Update the bug triage policy for neutron DocImpact bugs affecting
openstack-manuals.

The two policy additions are:

1. Tagging the bug for the appropriate guide. This will assist the
   openstack-manuals team with their part of the triage process.
2. Removing neutron from affected projects if the bug only affects
   openstack-manuals. This will help reduce clutter in the neutron bug queue.

Change-Id: Iac2dd10f1476cde0843c356f0d161b5a59943c99

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661111

Title:
  Update bug triaging policy for documentation

Status in neutron:
  New

Bug description:
  https://review.openstack.org/427672
  Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to.

  commit 792cc0fddc5f06f3439e634fdb4c7d7e00a7a814
  Author: John Davidge 
  Date:   Wed Feb 1 12:38:59 2017 +

  Update bug triaging policy for documentation
  
  Update the bug triage policy for neutron DocImpact bugs affecting
  openstack-manuals.
  
  The two policy additions are:
  
  1. Tagging the bug for the appropriate guide. This will assist the
     openstack-manuals team with their part of the triage process.
  2. Removing neutron from affected projects if the bug only affects
     openstack-manuals. This will help reduce clutter in the neutron bug queue.
  
  Change-Id: Iac2dd10f1476cde0843c356f0d161b5a59943c99

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661111/+subscriptions



[Yahoo-eng-team] [Bug 1661106] [NEW] neutron ns proxy reading files from config dir

2017-02-01 Thread Dirk Mueller
Public bug reported:

files in /etc/neutron/neutron.conf.d/ (or any of the other config-file
dirs that oslo.config defaults to) are being read by
neutron-ns-metadata-proxy. This causes funny side effects, such as the
metadata endpoint suddenly speaking https behind port 80:

# curl -k https://169.254.169.254:80/
1.0

We should skip config_dirs as well.
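
A minimal sketch of the idea, assuming a reasonably recent oslo.config
whose ConfigOpts.__call__ accepts default_config_files and
default_config_dirs (the option and function names here are hypothetical,
not neutron's actual fix): load only an explicitly named file so the
default config-dir discovery never kicks in.

  from oslo_config import cfg

  proxy_opts = [
      cfg.StrOpt('metadata_proxy_socket',
                 default='/var/lib/neutron/metadata_proxy'),
  ]

  def load_proxy_config(config_file):
      conf = cfg.ConfigOpts()
      conf.register_opts(proxy_opts)
      # Passing these explicitly disables oslo.config's default
      # discovery of /etc/<project>/*.conf and *.conf.d directories.
      conf(args=[],
           default_config_files=[config_file],
           default_config_dirs=[])
      return conf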

** Affects: neutron
 Importance: Undecided
 Assignee: Dirk Mueller (dmllr)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661106

Title:
  neutron ns proxy reading files from config dir

Status in neutron:
  In Progress

Bug description:
  files in /etc/neutron/neutron.conf.d/ (or any of the other config-file
  dirs that oslo.config defaults to) are being read by
  neutron-ns-metadata-proxy. This causes funny side effects, such as the
  metadata endpoint suddenly speaking https behind port 80:

  # curl -k https://169.254.169.254:80/
  1.0

  We should skip config_dirs as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661106/+subscriptions



[Yahoo-eng-team] [Bug 1660160] Re: No host-to-cell mapping found for selected host

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427534
Committed: https://git.openstack.org/cgit/openstack/tripleo-common/commit/?id=f6c286dbe882111b8de3b4f53391f1e96ad2d120
Submitter: Jenkins
Branch: master

commit f6c286dbe882111b8de3b4f53391f1e96ad2d120
Author: Oliver Walsh 
Date:   Wed Feb 1 02:51:15 2017 +

Fix race in undercloud cell_v2 host discovery

Ensure that the ironic nodes have been picked up by the nova resource tracker
before running nova-manage cell_v2 host discovery.

Also adds logging of the verbose command output to mistral engine log.

Change-Id: I4cc67935df8f37cdb2d8b0bfd96cf90eb7a6ce25
Closes-Bug: #1660160


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660160

Title:
  No host-to-cell mapping found for selected host

Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  Fix Released

Bug description:
  This report is maybe not a bug, but I found it useful to share what
  happens in TripleO since this commit:
  https://review.openstack.org/#/c/319379/

  We are unable to deploy the overcloud nodes anymore (in other words,
  create servers with Nova / Ironic).

  Nova Conductor sends this message:
  "No host-to-cell mapping found for selected host"
  http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/undercloud/var/log/nova/nova-conductor.txt.gz#_2017-01-27_19_21_56_348

  And it sounds like the compute host is not registered:
  http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/undercloud/var/log/nova/nova-compute.txt.gz#_2017-01-27_18_56_56_543

  Nova Config is available here:
  http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/etc/nova/nova.conf.txt.gz

  That's all the details I have for now; feel free to ask for more details
  if needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660160/+subscriptions



[Yahoo-eng-team] [Bug 1660973] Re: "No hosts found to map to cell, exiting" / PlacementNotConfigured exception

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427747
Committed: https://git.openstack.org/cgit/openstack/networking-bgpvpn/commit/?id=48d9ed35b4b7a133737bc3a4e021b3e7705a9c9c
Submitter: Jenkins
Branch: master

commit 48d9ed35b4b7a133737bc3a4e021b3e7705a9c9c
Author: Thomas Morin 
Date:   Wed Feb 1 15:49:34 2017 +0100

devstack job config: add placement-api service

See 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/111295.html

Change-Id: Iede105d4c3d225eca2625afaa98ae007992abf53
Closes-Bug: 1660973


** Changed in: bgpvpn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660973

Title:
  "No hosts found to map to cell, exiting" / PlacementNotConfigured
  exception

Status in networking-bgpvpn:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-networking-bgpvpn-bagpipe-ubuntu-xenial/4e448d9/logs/screen-n-cpu.txt.gz?level=WARNING#_2017-02-01_10_39_41_532

  
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service [-] Error starting thread.
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service Traceback (most recent call last):
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 722, in run_service
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service     service.start()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File "/opt/stack/new/nova/nova/service.py", line 144, in start
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service     self.manager.init_host()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File "/opt/stack/new/nova/nova/compute/manager.py", line 1136, in init_host
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service     raise exception.PlacementNotConfigured()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service PlacementNotConfigured: This compute is not configured to talk to the placement service. Configure the [placement] section of nova.conf and restart the service.
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service

  
  causing a failure in devstack:

  http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-networking-bgpvpn-bagpipe-ubuntu-xenial/4e448d9/logs/devstacklog.txt.gz#_2017-02-01_10_40_30_836

  2017-02-01 10:40:30.836 | No hosts found to map to cell, exiting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1660973/+subscriptions



[Yahoo-eng-team] [Bug 1653430] Re: Launch Instance vm Starting up

2017-02-01 Thread Tim Serewicz
** Also affects: centos
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653430

Title:
   Launch Instance vm Starting up

Status in OpenStack Compute (nova):
  Confirmed
Status in CentOS:
  New

Bug description:
  [root@controller ~]# virsh list
   Id    Name                           State
  ----------------------------------------------------
   10    instance-001a                  running



  [root@controller ~]# cat /var/log/libvirt/qemu/instance-001a.log 
  2017-01-01 15:27:19.429+: starting up libvirt version: 2.0.0, package: 10.el7_3.2 (CentOS BuildSystem , 2016-12-06-19:53:38, c1bm.rdu2.centos.org), qemu version: 2.6.0 (qemu-kvm-ev-2.6.0-27.1.el7), hostname: controller
  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=instance-001a,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-10-instance-001a/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=tcg,usb=off -cpu Haswell-noTSX,+vme,+ds,+ss,+ht,+osxsave,+f16c,+rdrand,+hypervisor,+arat,+tsc_adjust,+xsaveopt,+pdpe1gb,+abm -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8cbaa40d-c061-47fc-83df-698f2455c7d9 -smbios 'type=1,manufacturer=RDO,product=OpenStack Compute,version=14.0.2-1.el7,serial=4803ff26-9107-4c8c-b9f9-83cda5553350,uuid=8cbaa40d-c061-47fc-83df-698f2455c7d9,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-10-instance-001a/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/8cbaa40d-c061-47fc-83df-698f2455c7d9/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:3e:38:f9,bus=pci.0,addr=0x3 -add-fd set=1,fd=29 -chardev file,id=charserial0,path=/dev/fdset/1,append=on -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.40.1.70:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
  char device redirected to /dev/pts/0 (label charserial1)
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.fma [bit 12]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.x2apic [bit 21]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.tsc-deadline [bit 24]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.osxsave [bit 27]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.avx [bit 28]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.f16c [bit 29]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.rdrand [bit 30]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.tsc_adjust [bit 1]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.avx2 [bit 5]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]


  
  error:
  Instance status: Starting up ...



  
  nova uses qemu

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653430/+subscriptions



[Yahoo-eng-team] [Bug 1661086] [NEW] Failed to plug VIF VIFBridge

2017-02-01 Thread Michael Johnson
Public bug reported:

I did a fresh restack/reclone this morning and can no longer boot up a
cirros instance.

Nova client returns:

| fault| {"message": "Failure running os_vif plugin plug method: Failed to plug VIF VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7", "code": 500, "details": "  File \"/opt/stack/nova/nova/compute/manager.py\", line 1780, in _do_build_and_run_instance |

pip list:
nova (15.0.0.0b4.dev77, /opt/stack/nova)
os-vif (1.4.0)

n-cpu.log shows:
2017-02-01 11:13:32.880 DEBUG nova.network.os_vif_util [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Converted object VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43') from (pid=69603) nova_to_osvif_vif /opt/stack/nova/nova/network/os_vif_util.py:425
2017-02-01 11:13:32.880 DEBUG os_vif [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Unplugging vif VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43') from (pid=69603) unplug /usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:112
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: request[139935485013840]: (3, b'vif_plug_ovs.linux_net.delete_bridge', ('qbrd3377ad5-43', b'qvbd3377ad5-43'), {}) from (pid=69603) out_of_band /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: Exception during request[139935485013840]: a bytes-like object is required, not 'str' from (pid=69603) out_of_band /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: reply[139935485013840]: (5, 'builtins.TypeError', ("a bytes-like object is required, not 'str'",)) from (pid=69603) out_of_band /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.882 ERROR os_vif [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Failed to unplug vif VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
2017-02-01 11:13:32.882 TRACE os_vif Traceback (most recent call last):
2017-02-01 11:13:32.882 TRACE os_vif   File "/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py", line 113, in unplug
2017-02-01 11:13:32.882 TRACE os_vif     plugin.unplug(vif, instance_info)
2017-02-01 11:13:32.882 TRACE os_vif   File "/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 216, in unplug
2017-02-01 11:13:32.882 TRACE os_vif     self._unplug_bridge(vif, instance_info)
2017-02-01 11:13:32.882 TRACE os_vif   File "/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 192, in _unplug_bridge
2017-02-01 11:13:32.882 TRACE os_vif     linux_net.delete_bridge(vif.bridge_name, v1_name)
2017-02-01 11:13:32.882 TRACE os_vif   File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 205, in _wrap
2017-02-01 11:13:32.882 TRACE os_vif     return self.channel.remote_call(name, args, kwargs)
2017-02-01 11:13:32.882 TRACE os_vif   File "/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 186, in remote_call
2017-02-01 11:13:32.882 TRACE os_vif     exc_type = importutils.import_class(result[1])
2017-02-01 11:13:32.882 TRACE os_vif   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in import_class
2017-02-01 11:13:32.882 TRACE os_vif     __import__(mod_str)
2017-02-01 11:13:32.882 TRACE os_vif ImportError: No module named builtins
2017-02-01 11:13:32.882 TRACE os_vif

Full n-cpu.log is attached.
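
A minimal sketch of the underlying mismatch, outside of os-vif itself (the
helper is hypothetical): the privsep request above carries a mix of a
native str ('qbrd3377ad5-43') and bytes (b'qvbd3377ad5-43'), and the
serialization layer between the caller and the privsep daemon refuses to
mix text and binary types; the follow-on ImportError happens because the
py3-style exception path 'builtins.TypeError' cannot be imported on
Python 2. Normalizing the arguments avoids the first failure.

  import six

  def to_bytes(value, encoding='utf-8'):
      """Normalize a device name to bytes before crossing the boundary."""
      if isinstance(value, six.text_type):
          return value.encode(encoding)
      return value

  args = tuple(to_bytes(name)
               for name in ('qbrd3377ad5-43', u'qvbd3377ad5-43'))
  print(args)  # (b'qbrd3377ad5-43', b'qvbd3377ad5-43') on Python 3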

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661086

Title:
  Failed to plug VIF VIFBridge

Status in OpenStack Compute (nova):
  New

Bug description:
  I did a fresh restack/reclone this morning and can no longer boot up a
  cirros instance.

  Nova client returns:

  | fault| {"message": "Failure running os_vif plugin plug method: Failed to plug VIF VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_n

[Yahoo-eng-team] [Bug 1633280] Re: [RFE]need a way to disable anti-spoofing rules and yet keep security groups

2017-02-01 Thread Armando Migliaccio
spec: https://review.openstack.org/#/c/388398/

** Changed in: neutron
   Status: Opinion => Confirmed

** Changed in: neutron
 Assignee: Rui Zang (rui-zang) => (unassigned)

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1633280

Title:
  [RFE]need a way to disable anti-spoofing rules and yet keep security
  groups

Status in neutron:
  Confirmed

Bug description:
  Basically all NFV use-cases would require this split. The current
  approach for NFV is to turn things off and have the VNFs protect
  themselves, rather than having the infrastructure provide security. Even
  in simple deployments, like cloud bursting, you'll need to be able to
  allow the customer to control their addressing. The customer might want
  to do so by having the router (which terminates the IPSEC tunnel) use
  either ICMP RA (in the case of v6/SLAAC) or DHCP (v4/v6) to control
  addressing, as opposed to having OpenStack control the addressing. In
  this case, the VNF only deals with addressing, but it has to protect
  itself without security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1633280/+subscriptions



[Yahoo-eng-team] [Bug 1654183] Re: Token based authentication in Client class does not work

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427515
Committed: https://git.openstack.org/cgit/openstack/instack-undercloud/commit/?id=0b02f1478abfad29d787174ca956e1ebdc9a385c
Submitter: Jenkins
Branch: master

commit 0b02f1478abfad29d787174ca956e1ebdc9a385c
Author: Andrey Kurilin 
Date:   Wed Feb 1 01:52:16 2017 +0200

Fix initialization of novaclient

The projectid argument of novaclient's (< 7.0) entry point had several
meanings in different cases. That is not user-friendly behaviour, so it
was fixed in 7.0. Now projectid means the project/tenant id in keystone
terms, as it should have from the beginning.

The tenant/project name should be transmitted via a project_name or
tenant_name keyword argument.

Closes-Bug: #1654183
Change-Id: I106ee603e0853bbc2da4b99724e83587de3cb4ba


** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1654183

Title:
  Token based authentication in Client class does not work

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in python-novaclient:
  Fix Released
Status in tripleo:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  With the newly released novaclient (7.0.0) it seems that token-based
  authentication does not work in novaclient.client.Client.

  I got back the following response from the Nova server:

  Malformed request URL: URL's project_id
  'e0beb44615f34d54b8a9a9203a3e5a1c' doesn't match Context's project_id
  'None' (HTTP 400)

  I just created the Nova client in the following way:
  Client(
      2,
      endpoint_type="public",
      service_type='compute',
      auth_token=auth_token,
      tenant_id="devel",
      region_name="RegionOne",
      auth_url=keystone_url,
      insecure=True,
      endpoint_override=nova_endpoint  # https://.../v2/e0beb44615f34d54b8a9a9203a3e5a1c
  )

  After that, novaclient performs a new token-based authentication
  without project_id (tenant_id), which causes the new token to not
  belong to any project. In any case, if we already have a token, why
  does novaclient request a new one from keystone? (Other clients, such
  as Heat and Neutron, do not request a token from keystone if one is
  already provided to the client class.)

  The bug was introduced by the following commit:
  
https://github.com/openstack/python-novaclient/commit/8409e006c5f362922baae9470f14c12e0443dd70

  +        if not auth and auth_token:
  +            auth = identity.Token(auth_url=auth_url,
  +                                  token=auth_token)

  When project_id is also passed into the Token authentication,
  everything works fine, and the newly requested token belongs to the
  right project/tenant.
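
  As a minimal sketch (assuming keystoneauth1's generic Token plugin,
  with keystone_url, auth_token and project_id supplied by the caller),
  scoping the plugin avoids the unscoped re-authentication described
  above:

    from keystoneauth1 import identity, session

    auth = identity.Token(
        auth_url=keystone_url,   # as in the snippet above
        token=auth_token,        # reuse the already-issued token
        project_id=project_id,   # scope it so the new token keeps the project
    )
    sess = session.Session(auth=auth)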

  Note: originally this problem appeared in the Mistral project of
  OpenStack, which uses the client classes directly from its actions
  with token-based authentication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1654183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653430] Re: Launch Instance vm Starting up

2017-02-01 Thread Tim Serewicz
** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653430

Title:
   Launch Instance vm Starting up

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  [root@controller ~]# virsh  list
   Id    Name                           State
  ----------------------------------------------------
   10    instance-001a                  running



  [root@controller ~]# cat /var/log/libvirt/qemu/instance-001a.log 
  2017-01-01 15:27:19.429+: starting up libvirt version: 2.0.0, package: 
10.el7_3.2 (CentOS BuildSystem , 2016-12-06-19:53:38, 
c1bm.rdu2.centos.org), qemu version: 2.6.0 (qemu-kvm-ev-2.6.0-27.1.el7), 
hostname: controller
  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin 
HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm 
-name guest=instance-001a,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-10-instance-001a/master-key.aes
 -machine pc-i440fx-rhel7.3.0,accel=tcg,usb=off -cpu 
Haswell-noTSX,+vme,+ds,+ss,+ht,+osxsave,+f16c,+rdrand,+hypervisor,+arat,+tsc_adjust,+xsaveopt,+pdpe1gb,+abm
 -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 
8cbaa40d-c061-47fc-83df-698f2455c7d9 -smbios 
'type=1,manufacturer=RDO,product=OpenStack 
Compute,version=14.0.2-1.el7,serial=4803ff26-9107-4c8c-b9f9-83cda5553350,uuid=8cbaa40d-c061-47fc-83df-698f2455c7d9,family=Virtual
 Machine' -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-10-instance-001a/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot strict=on
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=/var/lib/nova/instances/8cbaa40d-c061-47fc-83df-698f2455c7d9/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -netdev tap,fd=26,id=hostnet0 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:3e:38:f9,bus=pci.0,addr=0x3 
-add-fd set=1,fd=29 -chardev file,id=charserial0,path=/dev/fdset/1,append=on 
-device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 
-device isa-serial,chardev=charserial1,id=serial1 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.40.1.70:0 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
  char device redirected to /dev/pts/0 (label charserial1)
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.fma [bit 12]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.x2apic [bit 21]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.tsc-deadline 
[bit 24]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.osxsave [bit 27]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.avx [bit 28]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.f16c [bit 29]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.rdrand [bit 30]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.tsc_adjust [bit 
1]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.avx2 [bit 5]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]


  
  error:
  Instance status   Starting up ...



  
  nova uses qemu (TCG, per the warnings above)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659391] Re: Server list API does not show existing servers

2017-02-01 Thread Sujitha
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659391

Title:
  Server list API does not show existing servers

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  After the merge of commit [1], the command "nova list --all-" started
  returning an empty list even when servers exist. Reverting this change
  makes the API work again.

  Steps to reproduce:
  1) install latest nova that contains commit [1]
  2) create VM
  3) run any of following commands:
  $ nova list --all-
  $ openstack server list --all
  $ openstack server show %name-of-server%
  $ nova show %name-of-server%

  Expected: we see the data of the server created in step two.
  Actual: an empty list from the "list" commands, or a "NotFound" error from the "show" commands.

  [1] https://review.openstack.org/#/c/396775/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660893] Re: Neutron SFC port chain delete fails

2017-02-01 Thread John Davidge
** Project changed: neutron => networking-sfc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1660893

Title:
  Neutron SFC port chain delete fails

Status in networking-sfc:
  New

Bug description:
  I'm experimenting with Neutron SFC. The chain creation happens
  successfully, whereas the chain deletion doesn't always succeed. For
  the first couple of times I was able to delete the chains successfully;
  however, once it fails, it almost never succeeds again.
  When it failed for the first time, I had to manually delete the port
  chain entry from the DB. Even though successive chain creations happen
  successfully, the deletion continuously fails. Once I remove the port
  chain entry from the DB, other things like the flow classifier, port
  pairs and port pair groups can be removed through the CLI.

  Environment: Multi-node devstack with 1 controller and 2 computes where VMs 
are launched in the same compute.
  OS: Ubuntu 16.04
  Kernel: 4.4.0-59-generic
  OVS: 2.6.1
  Devstack and SFC: Newton
  All my neutron agents are alive.

  Steps to create chain
  openstack network create net11
  openstack subnet create --subnet-range 11.0.0.0/24 --network net11 sub11
  openstack network create net12
  openstack subnet create --subnet-range 12.0.0.0/24 --network net12 sub12

  openstack router create sfc-router
  openstack router add subnet sfc-router sub11
  openstack router add subnet sfc-router sub12

  openstack port create --network net11 p1
  openstack port create --network net12 p2
  openstack server create --nic port-id=p1 --nic port-id=p2 --flavor 3 --image vyos sf-vm

  sleep 5
  openstack port pair create --ingress p1 --egress p2 pp1
  openstack port pair group create --port-pair pp1 ppg1

  openstack flow classifier create --source-ip-prefix 11.0.0.0/24 \
    --destination-ip-prefix 12.0.0.0/24 --source-port 1:65535 \
    --destination-port 80:80 --protocol TCP \
    --logical-source-port $(neutron port-list | grep \"11.0.0.1\" | awk '{print $2}') fc1

  openstack port chain create --port-pair-group ppg1 --flow-classifier fc1 pc1

  Steps to delete chain
  openstack server delete sf-vm

  openstack port chain delete pc1
  openstack flow classifier delete fc1
  openstack port pair group delete ppg1
  openstack port pair delete pp1
  openstack port delete p1
  openstack port delete p2

  openstack router remove subnet sfc-router sub11
  openstack router remove subnet sfc-router sub12
  openstack subnet delete sub11
  openstack subnet delete sub12
  openstack router delete sfc-router
  openstack network delete net11
  openstack network delete net12

  Expected Output
  Successful completion of step "openstack port chain delete pc1" and other 
obvious success messages.

  Actual Output
  delete_port_chain failed.
  Neutron server returns request_ids: 
['req-eea26895-dcf5-4c31-a858-4da5ffe40b14']

  This is a blocker for me.

  Neutron Log: http://paste.openstack.org/show/597125/
  Controller OVS agent LOG: http://paste.openstack.org/show/597126/
  Compute OVS agent LOG: http://paste.openstack.org/show/597127/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1660893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660102] Re: Instance fails to boot with message "Starting Up..."

2017-02-01 Thread Tim Serewicz
*** This bug is a duplicate of bug 1653430 ***
https://bugs.launchpad.net/bugs/1653430

** This bug has been marked a duplicate of bug 1653430
Launch Instance vm Starting up

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660102

Title:
  Instance fails to boot with message "Starting Up..."

Status in OpenStack Compute (nova):
  New

Bug description:
  Environment:
  Using RDO Newton on an AWS node running CentOS 7.3.1611 (Core) via
  packstack. Storage reports as iscsi. Downloaded the software and ran
  yum update today. I have attached a file, within the compressed tar
  file, with rpm output of package versions. I'm using the default
  networking, which I believe is neutron with Open vSwitch (OVS).

  Problem:
  Using the BUI, nova or openstack commands the instance becomes active but 
never boots. The console only reports "Starting up ..." I cannot seem to find 
other errors or obvious output. Some times it seems there is no volume 
attached, others that there is. But the error - failure to boot is the same.

  One of the three ways this bug happens, output attached in a file:
  root#  nova --debug boot \
    --flavor smallfry \
    --image cirros  \
    --security-group web-ssh \
    --key-name finance-key \
    --nic net-id=46d5f53c-9d05-4cfb-b229-56d4d2ea9bd5 \
    try25

  I've tried this on a couple of different instances over the last
  couple weeks with the same issue. The very same system and install
  produced working instances in Liberty.

  Also in the tar file is output of sosreport for cinder and nova as
  well as software information and --debug output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567807] Re: nova delete doesn't work with EFI booted VMs

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/357190
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=539d381434ccadcdc3f5d58c2705c35558a3a065
Submitter: Jenkins
Branch:master

commit 539d381434ccadcdc3f5d58c2705c35558a3a065
Author: Kevin Zhao 
Date:   Thu Jan 5 21:32:41 2017 +

libvirt: fix nova can't delete the instance with nvram

Currently libvirt needs a flag when deleting a VM with an nvram file,
without which nova can't delete an instance booted with UEFI. Add a
deletion flag for NVRAM. Also add a test case.

Co-authored-by: Derek Higgins 
Change-Id: I46baa952b6c3a1a4c5cf2660931f317cafb5757d
Closes-Bug: #1567807


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567807

Title:
  nova delete doesn't work with EFI booted VMs

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Triaged

Bug description:
  I've been setting up a Mitaka OpenStack using the cloud archive
  running on Trusty, and am having problems working with EFI-enabled
  instances on ARM64.

  I've done some work with wgrant and gotten things to a stage where I
  can boot instances, using the aavmf images.

  However, when I tried to delete a VM booted like this, I get an error:

libvirtError: Requested operation is not valid: cannot delete
  inactive domain with nvram

  I've included the full traceback at
  https://paste.ubuntu.com/15682718/.

  Thanks to a suggestion from wgrant again, I got it working by editing 
nova/virt/libvirt/guest.py in delete_configuration() and replacing  
self._domain.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE) with 
self._domain.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE | 
libvirt.VIR_DOMAIN_UNDEFINE_NVRAM).
  I've attached a rough patch.
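
  A minimal sketch of that change (flag names from the libvirt-python
  bindings; the surrounding method is paraphrased, not nova's literal
  code):

    import libvirt

    def delete_configuration(domain, has_nvram=False):
        # VIR_DOMAIN_UNDEFINE_NVRAM asks libvirt to also remove the
        # per-domain NVRAM file that UEFI/AAVMF guests carry.
        flags = libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE
        if has_nvram:
            flags |= libvirt.VIR_DOMAIN_UNDEFINE_NVRAM
        domain.undefineFlags(flags)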

  Once that's applied and nova-compute restarted, I was able to delete
  the instance fine.

  Could someone please investigate this and see if its the correct fix,
  and look at getting it fixed in the archive?

  This was done on a updated trusty deployment using the cloud-archives
  for mitaka.

  $ dpkg-query -W python-nova
  python-nova 2:13.0.0~b2-0ubuntu1~cloud0

  Please let me know if you need any further information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600366] Re: Federated users cannot use heat

2017-02-01 Thread Boris Bobrov
** This bug is no longer a duplicate of bug 1642687
   Missing domain for federated users

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1600366

Title:
  Federated users cannot use heat

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  Federated users cannot create heat stacks.

  To reproduce:
  Enable heat,
  Sign into horizon using federation
  Create a heat stack (errors out here)

  My guess:
  This happens because federated users cannot perform trust delegation,
  since they do not have any real roles associated with them (although in
  other cases they somehow get the same roles as the group in the
  mapping, and the local user created after login is not part of the
  group).

  Work around:
  1. list the users and find the federated user uuid that was created locally 
on the service provider after signing in
  2. assign the heat_stack_owner role to the federated user uuid
  3. should work now.
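
  As a rough rendition of those steps on the CLI (IDs and names are
  placeholders):

    openstack user list   # step 1: find the locally-created federated user's uuid
    openstack role add --project <project> --user <federated-user-uuid> heat_stack_owner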

  It would be nice if it worked out of the box without having to do the
  work around.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1600366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661024] [NEW] schedule_and_build_instances short-circuits on all instances if one build request is already deleted

2017-02-01 Thread Matt Riedemann
Public bug reported:

The 'return' statement here should be a 'continue':

https://github.com/openstack/nova/blob/d308d1f70e7e840b1b0c8f4307998d89f9a5ddff/nova/conductor/manager.py#L950

That's in a block of code that's cleaning up an instance recently
created if the build request was already deleted by the time conductor
tried to delete the build request, i.e. the user deleted the instance
before it was created (which actually deleted the build request in nova-
api).

The return is wrong though since we're in a loop over build_requests, so
if we hit that, and there are more build requests to process, those
other instances won't get built.

It's easy to miss this context because the method is so large. We should
break the build request cleanup code into a separate private helper
method.
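
As a minimal sketch of the intended control flow (helper names here are
hypothetical, not nova's actual symbols):

    for build_request in build_requests:
        try:
            build_request.destroy()
        except exception.BuildRequestNotFound:
            # The user deleted the instance before it was created; clean
            # up the freshly-created instance and move on to the next
            # build request instead of bailing out of the loop.
            cleanup_instance(build_request)    # hypothetical helper
            continue                           # was 'return' in the bug
        schedule_and_build(build_request)      # hypothetical helper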

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: conductor ocata-rc-potential

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661024

Title:
  schedule_and_build_instances short-circuits on all instances if one
  build request is already deleted

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  The 'return' statement here should be a 'continue':

  
https://github.com/openstack/nova/blob/d308d1f70e7e840b1b0c8f4307998d89f9a5ddff/nova/conductor/manager.py#L950

  That's in a block of code that's cleaning up an instance recently
  created if the build request was already deleted by the time conductor
  tried to delete the build request, i.e. the user deleted the instance
  before it was created (which actually deleted the build request in
  nova-api).

  The return is wrong though since we're in a loop over build_requests,
  so if we hit that, and there are more build requests to process, those
  other instances won't get built.

  It's easy to miss this context because the method is so large. We
  should break the build request cleanup code into a separate private
  helper method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661014] Re: Multinode job fails with "Compute host X not found"

2017-02-01 Thread Vasyl Saienko
** Also affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  Example failure:
  
  http://logs.openstack.org/75/427675/2/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-agent_ipmitool-tinyipa-multinode-ubuntu-xenial-
  nv/3ff2401/console.html#_2017-02-01_14_55_05_875428
+ 
+ 
+ 2017-02-01 14:55:05.875428 | Details: {u'code': 500, u'message': 
u'Compute host 5 could not be found.\nTraceback (most recent call last):\n\n  
File "/opt/stack/new/nova/nova/conductor/manager.py", line 92, in 
_object_dispatch\nreturn getattr(target, method)(*args, **kwargs)\n\n  File 
"/usr/local/lib/python2.7/dist-packages', u'created': u'2017-02-01T14:44:56Z', 
u'details': u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1780, 
in _do_build_and_run_instance\nfilter_properties)\n  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2016, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n'}

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661014

Title:
  Multinode job fails with "Compute host X not found"

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Example failure:

  http://logs.openstack.org/75/427675/2/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-agent_ipmitool-tinyipa-multinode-ubuntu-xenial-
  nv/3ff2401/console.html#_2017-02-01_14_55_05_875428

  
  2017-02-01 14:55:05.875428 | Details: {u'code': 500, u'message': 
u'Compute host 5 could not be found.\nTraceback (most recent call last):\n\n  
File "/opt/stack/new/nova/nova/conductor/manager.py", line 92, in 
_object_dispatch\nreturn getattr(target, method)(*args, **kwargs)\n\n  File 
"/usr/local/lib/python2.7/dist-packages', u'created': u'2017-02-01T14:44:56Z', 
u'details': u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1780, 
in _do_build_and_run_instance\nfilter_properties)\n  File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2016, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1661014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661016] [NEW] Multiple attempts to swap volumes using volume-update fail

2017-02-01 Thread Lee Yarwood
Public bug reported:

Description
===
The second and any future attempts to swap volumes using volume-update fail due 
to a BDM lookup failure using the original volume id (see Logs & Configs for an 
example).

A previous attempt to fix this was made in bug#1490236 and reverted by
bug#1625660.

Steps to reproduce
==
- Boot an instance
- Create multiple volumes
- Attach a single volume
- Swap the attached volume with one that is unattached via nova volume-update.
- Swap the attached volume with one that is unattached via nova volume-update.

Expected result
===
The second attempt succeeds and the new volume is now attached to the instance.

Actual result
=
The second attempt fails looking up a BDM with the ID of the original volume.
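
A minimal sketch of the failure mode (object and exception names as in
nova at the time; the call site itself is paraphrased, not quoted):

    # After the first swap the BDM row stores the new volume id, so a
    # lookup still keyed on a stale volume id finds no row:
    try:
        bdm = objects.BlockDeviceMapping.get_by_volume_id(
            context, stale_volume_id)
    except exception.VolumeBDMNotFound:
        pass  # roughly where the second volume-update falls over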

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   $ pwd
   /opt/stack/nova
   $ git rev-parse HEAD
   dae6b760b9c40bbf3b72a0218dbf1dbc823f30e2

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   Libvirt + KVM

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   LVM/iSCSI

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   n/a

Logs & Configs
==
$ nova boot --image cirros-0.3.4-x86_64-uec --flavor 1 test-boot
$ cinder create 1 ; cinder create 1

$ nova volume-attach ef426f1e-32e4-4f8c-a3fc-b58080d38294 \
 23933e67-a4c0-4de9-b0dc-3da37bce1b78

$ nova volume-update ef426f1e-32e4-4f8c-a3fc-b58080d38294 \
 23933e67-a4c0-4de9-b0dc-3da37bce1b78 \
 cace165f-9c97-4d6d-a0e8-ea087fa80263

$ nova volume-update ef426f1e-32e4-4f8c-a3fc-b58080d38294 \
 cace165f-9c97-4d6d-a0e8-ea087fa80263 \ 
 23933e67-a4c0-4de9-b0dc-3da37bce1b78

n-cpu.log :

4448 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
4449 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 155, in 
_process_incoming
4450 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
4451 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 222, 
in dispatch
4452 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
4453 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 192, 
in _do_dispatch
4454 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
4455 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/exception_wrapper.py", line 75, in wrapped
4456 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server function_name, 
call_dict, binary)
4457 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
4458 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server 
self.force_reraise()
4459 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
4460 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
4461 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/exception_wrapper.py", line 66, in wrapped
4462 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
4463 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/compute/manager.py", line 188, in decorated_function
4464 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server 
LOG.warning(msg, e, instance=instance)
4465 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
4466 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server 
self.force_reraise()
4467 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
4468 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
4469 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/compute/manager.py", line 157, in decorated_function
4470 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
4471 2017-02-01 07:35:04.931 TRACE oslo_messaging.rpc.s

[Yahoo-eng-team] [Bug 1620587] Re: ml2_conf.ini contains oslo.log options

2017-02-01 Thread Ihar Hrachyshka
I don't believe this is a bug. Those options are duplicated, so that
users may override settings per-service.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620587

Title:
  ml2_conf.ini contains oslo.log options

Status in neutron:
  Won't Fix

Bug description:
  When running neutron-server or one of the agents, neutron.conf is
  usually included which already contains the oslo.log options in the
  [DEFAULT] section. There's no need to add the options again to the
  ml2_conf.ini

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661007] [NEW] On hosts without a domain in the hostname, cloud-init changes the hostname to localdomain

2017-02-01 Thread CesarJorge
Public bug reported:

On hosts whose hostname has no domain part, cloud-init changes the hostname to hostname.localdomain.

With Centos/Redhat 7 and cloud-init 0.7.5:

When we create a new server with simple hostname:
myhostname

When we start it, cloud-init changes the hostname to: myhostname.localdomain

But this is incorrect; the default CentOS/RedHat installation has this /etc/hosts:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

Name resolution then does not work correctly:
ping myhostname.localdomain
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.052 ms

If we do not change this hostname:
ping myhostname
PING myhostname (10.0.1.188) 56(84) bytes of data.
64 bytes from myhostname (10.0.1.188): icmp_seq=1 ttl=64 time=0.031 ms

Could you add an option to not change the hostname to
hostname.localdomain?

For now we change this manually, but this is hard to manage in OS images
when a new package becomes available and is updated:
https://launchpad.net/cloud-init/trunk/0.7.5/+download/cloud-init-0.7.5.tar.gz

# Remove cloud-init default domainname localdomain
sed -i -e 's/localdomain//g' 
/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py
rm -f /usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py[co]
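
A possible per-image or per-instance workaround (standard cloud-config
keys; the fqdn value is a placeholder) is to pin the FQDN in user-data
so the built-in localdomain default is never consulted:

    #cloud-config
    hostname: myhostname
    fqdn: myhostname.example.com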

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: change cloud-init hostname localdomain

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1661007

Title:
  On hosts without a domain in the hostname, cloud-init changes the hostname to localdomain

Status in cloud-init:
  New

Bug description:
  On hosts whose hostname has no domain part, cloud-init changes the hostname to hostname.localdomain.

  With Centos/Redhat 7 and cloud-init 0.7.5:

  When we create a new server with simple hostname:
  myhostname

  When we start it, cloud-init changes the hostname to:
  myhostname.localdomain

  But this is incorrect; the default CentOS/RedHat installation has this
  /etc/hosts:
  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

  Name resolution then does not work correctly:
  ping myhostname.localdomain
  PING localhost (127.0.0.1) 56(84) bytes of data.
  64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.031 ms
  64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.052 ms

  If we do not change this hostname:
  ping myhostname
  PING myhostname (10.0.1.188) 56(84) bytes of data.
  64 bytes from myhostname (10.0.1.188): icmp_seq=1 ttl=64 time=0.031 ms

  Could you add an option to not change the hostname to
  hostname.localdomain?

  For now we change this manually, but this is hard to manage in OS
  images when a new package becomes available and is updated:
  https://launchpad.net/cloud-init/trunk/0.7.5/+download/cloud-init-0.7.5.tar.gz

  # Remove cloud-init default domainname localdomain
  sed -i -e 's/localdomain//g' 
/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py
  rm -f /usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py[co]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1661007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639220] Re: [RFE] Introduce Network QoS policy "is_default" behaviour

2017-02-01 Thread Rodolfo Alonso
** Also affects: python-openstacksdk
   Importance: Undecided
   Status: New

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: python-openstackclient
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: python-openstacksdk
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639220

Title:
  [RFE] Introduce Network QoS policy "is_default" behaviour

Status in neutron:
  New
Status in python-openstackclient:
  New
Status in OpenStack SDK:
  New

Bug description:
  Introduce a new parameter in Network QoS policy: "is_default".

  If a new Network QoS policy is created/set with the parameter
  "is_default" equal to True, any new network created for that project
  will have this default QoS policy assigned.

  E.g.:
  - Create a new QoS policy
  openstack network qos policy create --is-default qos_1
  - Create a new network
  openstack network create net_1
  This new network, "net_1", will have "qos_1" as QoS policy.

  The parameter "is-default" can be set in the creation and the update
  commands.

  If a new Network QoS policy is created or updated with this flag and
  another Network QoS policy in the same project is set as the default
  policy, the new one won't be created or updated (see subnet-pool
  behaviour).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660973] Re: "No hosts found to map to cell, exiting" / PlacementNotConfigured exception

2017-02-01 Thread Matt Riedemann
This is intentional, see:

http://lists.openstack.org/pipermail/openstack-
dev/2017-January/111295.html

** Changed in: nova
   Status: New => Invalid

** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660973

Title:
  "No hosts found to map to cell, exiting" / PlacementNotConfigured
  exception

Status in networking-bgpvpn:
  New
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-
  networking-bgpvpn-bagpipe-ubuntu-
  xenial/4e448d9/logs/screen-n-cpu.txt.gz?level=WARNING#_2017-02-01_10_39_41_532

  
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service [-] Error starting 
thread.
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service Traceback (most 
recent call last):
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 722, in 
run_service
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service service.start()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/opt/stack/new/nova/nova/service.py", line 144, in start
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 
self.manager.init_host()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1136, in init_host
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service raise 
exception.PlacementNotConfigured()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 
PlacementNotConfigured: This compute is not configured to talk to the placement 
service. Configure the [placement] section of nova.conf and restart the service.
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 

  
  causing a failure in devstack:

  http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-
  networking-bgpvpn-bagpipe-ubuntu-
  xenial/4e448d9/logs/devstacklog.txt.gz#_2017-02-01_10_40_30_836

  2017-02-01 10:40:30.836 | No hosts found to map to cell, exiting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1660973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660997] [NEW] Instance cannot be created, always status ERROR

2017-02-01 Thread Sothy
Public bug reported:

Hello,
I am running a two-node OpenStack Mitaka deployment on two Ubuntu 16.04 servers.
As per the installation guide,
http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-provider.html,
I created the provider network.

Then I tried to create a server. The command went well, but when I
checked the list of servers, it showed ERROR status. Please see the
commands.

 $openstack server create --flavor m1.nano  --image cirros  --nic net-
id=67dfa549-856b-4884-9cc8-f3570d0cfdc5 --security-group default
--key-name mykey testserver2

$ openstack server list
+--------------------------------------+-------------+--------+----------+------------+
| ID                                   | Name        | Status | Networks | Image Name |
+--------------------------------------+-------------+--------+----------+------------+
| a84a8db3-e40f-4c84-8a26-81aaddfd8230 | testserver2 | ERROR  |          | cirros     |
+--------------------------------------+-------------+--------+----------+------------+
Why couldn't I start the instance? Please help if you know. Thanks.


Log file from nova-api.log:
In the log file there is an HTTP exception thrown: "Image not found".


2017-02-01 15:03:27.404 2532 INFO nova.api.openstack.wsgi 
[req-cfd597c8-50e1-4fa9-ba4e-7cff96dd1b19 bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Image not found.
2017-02-01 15:03:27.415 2532 INFO nova.osapi_compute.wsgi.server 
[req-cfd597c8-50e1-4fa9-ba4e-7cff96dd1b19 bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/images/cirros HTTP/1.1" status: 404 len: 
416 time: 0.3876941
2017-02-01 15:03:27.456 2532 INFO nova.api.openstack.wsgi 
[req-af99d2cf-012b-4abc-932e-6403ed06f7de bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Image not found.
2017-02-01 15:03:27.457 2532 INFO nova.osapi_compute.wsgi.server 
[req-af99d2cf-012b-4abc-932e-6403ed06f7de bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/images/cirros HTTP/1.1" status: 404 len: 
416 time: 0.0388799
2017-02-01 15:03:27.588 2532 INFO nova.osapi_compute.wsgi.server 
[req-fe4da280-897d-4d40-adf8-1ea10b4f5416 bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/images HTTP/1.1" status: 200 len: 830 
time: 0.1273210
2017-02-01 15:03:27.659 2532 INFO nova.osapi_compute.wsgi.server 
[req-f88f7566-627f-46dc--20ac376d7efb bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/images/69d6443d-e783-43b9-b729-cfb5c6b6f0e3
 HTTP/1.1" status: 200 len: 1011 time: 0.0668230
2017-02-01 15:03:27.692 2532 INFO nova.api.openstack.wsgi 
[req-73298ede-548a-415f-8aa3-468df5e37e4d bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Flavor m1.nano could 
not be found.
2017-02-01 15:03:27.693 2532 INFO nova.osapi_compute.wsgi.server 
[req-73298ede-548a-415f-8aa3-468df5e37e4d bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/flavors/m1.nano HTTP/1.1" status: 404 
len: 434 time: 0.0313950
2017-02-01 15:03:27.721 2532 INFO nova.api.openstack.wsgi 
[req-a8cd25aa-03d4-4765-86d7-a17cf7e3819e bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] HTTP exception thrown: Flavor m1.nano could 
not be found.
2017-02-01 15:03:27.722 2532 INFO nova.osapi_compute.wsgi.server 
[req-a8cd25aa-03d4-4765-86d7-a17cf7e3819e bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/flavors/m1.nano HTTP/1.1" status: 404 
len: 434 time: 0.0256779
2017-02-01 15:03:27.750 2532 INFO nova.osapi_compute.wsgi.server 
[req-f518192f-e915-45f8-badd-876f695eed17 bb198515208f4a94ba3738dc3ad544f0 
8db9e72abf27483c9197048dc6407208 - 1784ef9b4c4a45fbaa909915e606b695 
1784ef9b4c4a45fbaa909915e606b695] 192.168.1.31 "GET 
/v2.1/8db9e72abf27483c9197048dc6407208/flavors HTTP/1.1" status: 200 len: 586 
time: 0.0252788
2017-02-01 15:03:27.766 2532 INFO nova.osapi_compute.wsgi.server 
[req-e7d41a60-7e0a-4985-91ac-39e667b91cf6 bb198515208f4a9

[Yahoo-eng-team] [Bug 1660973] [NEW] "No hosts found to map to cell, exiting" / PlacementNotConfigured exception

2017-02-01 Thread Thomas Morin
Public bug reported:

http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-
networking-bgpvpn-bagpipe-ubuntu-
xenial/4e448d9/logs/screen-n-cpu.txt.gz?level=WARNING#_2017-02-01_10_39_41_532


2017-02-01 10:39:41.532 23909 ERROR oslo_service.service [-] Error starting 
thread.
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service Traceback (most recent 
call last):
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 722, in 
run_service
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service service.start()
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/opt/stack/new/nova/nova/service.py", line 144, in start
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 
self.manager.init_host()
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1136, in init_host
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service raise 
exception.PlacementNotConfigured()
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 
PlacementNotConfigured: This compute is not configured to talk to the placement 
service. Configure the [placement] section of nova.conf and restart the service.
2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 


causing a failure in devstack:

http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-
networking-bgpvpn-bagpipe-ubuntu-
xenial/4e448d9/logs/devstacklog.txt.gz#_2017-02-01_10_40_30_836

2017-02-01 10:40:30.836 | No hosts found to map to cell, exiting.
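
For reference, a minimal sketch of the [placement] section the error
asks for (option names as in Ocata-era nova; the values are placeholders
for a devstack-style setup):

    [placement]
    auth_type = password
    auth_url = http://controller/identity
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    username = placement
    password = secret
    os_region_name = RegionOne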

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660973

Title:
  "No hosts found to map to cell, exiting" / PlacementNotConfigured
  exception

Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-
  networking-bgpvpn-bagpipe-ubuntu-
  xenial/4e448d9/logs/screen-n-cpu.txt.gz?level=WARNING#_2017-02-01_10_39_41_532

  
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service [-] Error starting 
thread.
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service Traceback (most 
recent call last):
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 722, in 
run_service
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service service.start()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/opt/stack/new/nova/nova/service.py", line 144, in start
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 
self.manager.init_host()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1136, in init_host
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service raise 
exception.PlacementNotConfigured()
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 
PlacementNotConfigured: This compute is not configured to talk to the placement 
service. Configure the [placement] section of nova.conf and restart the service.
  2017-02-01 10:39:41.532 23909 ERROR oslo_service.service 

  
  causing a failure in devstack:

  http://logs.openstack.org/26/427626/1/check/gate-tempest-dsvm-
  networking-bgpvpn-bagpipe-ubuntu-
  xenial/4e448d9/logs/devstacklog.txt.gz#_2017-02-01_10_40_30_836

  2017-02-01 10:40:30.836 | No hosts found to map to cell, exiting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656010] Re: Incorrect notification to nova about ironic baremetall port (for nodes in 'cleaning' state)

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/424248
Committed: 
https://git.openstack.org/cgit/openstack/ironic/commit/?id=cbdf5076d37df61d2e9c46a0a73c7ad65652b866
Submitter: Jenkins
Branch:master

commit cbdf5076d37df61d2e9c46a0a73c7ad65652b866
Author: Sam Betts 
Date:   Mon Jan 23 17:08:35 2017 +

Don't override device_owner for tenant network ports

When a vif is passed to us from nova as a tenant port we shouldn't
change the device_owner or device_id because that is what links the port
to the nova instance. This enables the neutron nova notifier to trigger
the correct events in nova when the neutron port changes; e.g. deleting
the port triggers the detach-interface endpoint.

Change-Id: I43c3af9f424a65211ef5a39f13e4810072997339
Closes-Bug: #1656010
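
A minimal sketch of the behaviour the commit describes (predicate and
constant are illustrative, not ironic's literal code):

    def fill_port_identity(port, node, is_tenant_vif):
        # Tenant vifs arrive with nova's device_id/device_owner already
        # set; those values link the port to the instance, so keep them.
        if is_tenant_vif:
            return
        # Ports ironic created for itself get ironic's own identifiers.
        port['device_owner'] = 'baremetal:none'
        port['device_id'] = node.uuid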


** Changed in: ironic
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656010

Title:
  Incorrect notification to nova about ironic baremetall port (for nodes
  in 'cleaning' state)

Status in Ironic:
  Fix Released
Status in neutron:
  In Progress
Status in ironic package in Ubuntu:
  New
Status in neutron package in Ubuntu:
  New

Bug description:
  version: newton (2:9.0.0-0ubuntu1~cloud0)

  When neutron tries to bind a port for an Ironic baremetal node, it
  sends a wrong notification to nova saying the port is ready. Neutron
  sends it with 'device_id' == ironic-node-id, and nova rejects it as
  'not found' (there is no nova instance with such an id).

  Log:
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 completed by entity DHCP. 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:147
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153
  neutron.callbacks.manager[22265]: DEBUG Notify callbacks 
[('neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned--9223372036854150578',
 >)] for port, 
provisioning_complete [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] 
_notify_loop /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:142
  neutron.plugins.ml2.plugin[22265]: DEBUG Port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 cannot update to ACTIVE because it is not 
bound. [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _port_provisioned 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py:224
  oslo_messaging._drivers.amqpdriver[22265]: DEBUG sending reply msg_id: 
254703530cd3440584c980d72ed93011 reply queue: 
reply_8b6e70ad5191401a9512147c4e94ca71 time elapsed: 0.0452275519492s 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _send_reply 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
  neutron.notifiers.nova[22263]: DEBUG Sending events: [{'name': 
'network-changed', 'server_uuid': u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] 
send_events /usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:257
  novaclient.v2.client[22263]: DEBUG REQ: curl -g -i --insecure -X POST 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}592539c9fcd820d7e369ea58454ee17fe7084d5e" -d '{"events": [{"name": 
"network-changed", "server_uuid": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc"}]}' 
_http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:337
  novaclient.v2.client[22263]: DEBUG RESP: [404] Content-Type: 
application/json; charset=UTF-8 Content-Length: 78 X-Compute-Request-Id: 
req-a029af9e-e460-476f-9993-4551f3b210d6 Date: Thu, 12 Jan 2017 15:43:37 GMT 
Connection: keep-alive 
  RESP BODY: {"itemNotFound": {"message": "No instances found for any event", 
"code": 404}}
   _http_log_response 
/usr/lib/python2.7/dist-packages/keystoneauth1/session.py:366
  novaclient.v2.client[22263]: DEBUG POST call to compute for 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 used request id req-a029af9e-e460-476f-9993-4551f3b210d6 _log_request_id 
/usr/lib/python2.7/dist-packages/novaclient/client.py:85
  neutron.notifiers.nova[22263]: DEBUG Nova returned NotFound for event: 
[{'name': 'network-changed', 'server_uuid': 
u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] send_events 
/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:263
  oslo_messaging._drivers.amqpdriver[22265]: DEBUG received message msg_id: 
0bf04ac8fedd4234bd6cd6c04547beca reply to 
reply_8b6e70ad5191401a9512147c4e94ca71 __call__ 
/usr/lib/python2.7/dist-

[Yahoo-eng-team] [Bug 1660959] [NEW] placement resource provider filtering does not work with postgres

2017-02-01 Thread Chris Dent
Public bug reported:

Telemetry tests with postgres found a bug in the SQL used to filter
resource providers that is breaking their gate:

http://logs.openstack.org/82/405682/8/check/gate-ceilometer-dsvm-
tempest-plugin-postgresql-ubuntu-xenial/02f896f/logs/apache/placement-
api.txt.gz?level=ERROR

The fix appears to be adding the consumer id to the group_by on the usage join:

 usage = usage.group_by(_ALLOC_TBL.c.resource_provider_id,
-                       _ALLOC_TBL.c.resource_class_id)
+                       _ALLOC_TBL.c.resource_class_id,
+                       _ALLOC_TBL.c.consumer_id)

Not sure about the ordering.
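
That matches PostgreSQL's rule that every non-aggregated column in the
select list must also appear in GROUP BY (MySQL is laxer by default). A
minimal sketch of the corrected usage subquery (table and column names
follow the snippet above; the surrounding query is assumed):

    import sqlalchemy as sa

    usage = sa.select([
        _ALLOC_TBL.c.resource_provider_id,
        _ALLOC_TBL.c.resource_class_id,
        _ALLOC_TBL.c.consumer_id,                     # non-aggregated, so...
        sa.func.sum(_ALLOC_TBL.c.used).label('used'),
    ]).group_by(
        _ALLOC_TBL.c.resource_provider_id,
        _ALLOC_TBL.c.resource_class_id,
        _ALLOC_TBL.c.consumer_id,                     # ...it must be grouped
    )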

(full log example below)



2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler 
[req-f0c425b6-bd71-44ae-ae33-46ce688d53dd service placement] Uncaught exception
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler 
Traceback (most recent call last):
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/api/openstack/placement/handler.py", line 195, in 
__call__
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return dispatch(environ, start_response, self._map)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/handler.py", line 122, in dispatch
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return handler(environ, start_response)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     resp = self.call_func(req, *args, **self.kwargs)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return self.func(req, *args, **kwargs)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/util.py", line 55, in decorated_function
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return f(req)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/api/openstack/placement/handlers/resource_provider.py", line 305, in list_resource_providers
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     context, filters)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/objects/resource_provider.py", line 695, in get_all_by_filters
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     resource_providers = cls._get_all_by_filters_from_db(context, filters)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 894, in wrapper
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return fn(*args, **kwargs)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/opt/stack/new/nova/nova/objects/resource_provider.py", line 675, in _get_all_by_filters_from_db
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return query.all()
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2613, in all
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return list(self)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2761, in __iter__
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return self._execute_and_instances(context)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2776, in _execute_and_instances
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     result = conn.execute(querycontext.statement, self._params)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 914, in execute
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return meth(self, multiparams, params)
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
2017-02-01 10:20:33.543 8670 ERROR nova.api.openstack.placement.handler     return connection._execute_clauseelement(self, multiparams, params)

[Yahoo-eng-team] [Bug 1660747] Re: test_list_servers_filter_by_error_status intermittently fails with MismatchError on no servers in response

2017-02-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427394
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=826df45a40599490dfa2f2209a2cd9586fbd6c89
Submitter: Jenkins
Branch: master

commit 826df45a40599490dfa2f2209a2cd9586fbd6c89
Author: melanie witt 
Date:   Tue Jan 31 20:20:17 2017 +

Read instances from API cell for cells v1

Recent work on cells v2 has us reading instances from compute cells
via target_cell, but that breaks the existing behavior of cells v1
because of the syncing between API cell and compute cells. When
state changes are effected in the API cell, there is a slight delay
before they are reflected in the compute cell. So, reading from the
compute cell instead of the API cell results in behavior changes.

This adds a conditional in compute/api get_all to read instances
from the API cell if cells v1 is enabled. We are already doing this
for compute/api get, and get_all was missed.

Closes-Bug: #1660747

Change-Id: I7df5c4616ef386216c7bd7efea2be68173c61be0
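
For illustration, here is a minimal, self-contained Python sketch of the
conditional the commit message above describes. Every name in it
(is_cells_v1_enabled, query_api_cell_db, query_compute_cells, and the
dict-based context) is a hypothetical stand-in, not the actual nova
internals:

    def is_cells_v1_enabled():
        # Stand-in for a deployment config check (e.g. a cells v1 flag).
        return True

    def query_api_cell_db(context, filters):
        # Stand-in for a local DB query in the API (parent) cell.
        return [i for i in context["api_cell_instances"]
                if all(i.get(k) == v for k, v in filters.items())]

    def query_compute_cells(context, filters):
        # Stand-in for fanning out to each compute cell, the
        # target_cell-style path used for cells v2.
        results = []
        for cell in context["compute_cells"]:
            results.extend(i for i in cell
                           if all(i.get(k) == v for k, v in filters.items()))
        return results

    def get_all(context, filters=None):
        # Read from the API cell when cells v1 is enabled: state changes
        # made in the API cell reach compute cells only after a sync
        # delay, so reading the compute cells here would race. Otherwise
        # fan out to the compute cells.
        filters = filters or {}
        if is_cells_v1_enabled():
            return query_api_cell_db(context, filters)
        return query_compute_cells(context, filters)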


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660747

Title:
  test_list_servers_filter_by_error_status intermittently fails with
  MismatchError on no servers in response

Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  New

Bug description:
  Seen here:

  http://logs.openstack.org/59/424759/12/gate/gate-tempest-dsvm-cells-ubuntu-xenial/d7b1311/console.html#_2017-01-31_17_48_34_663273

  2017-01-31 17:48:34.663337 | Captured traceback:
  2017-01-31 17:48:34.663348 | ~~~
  2017-01-31 17:48:34.663363 | Traceback (most recent call last):
  2017-01-31 17:48:34.663393 |   File "tempest/api/compute/admin/test_servers.py", line 59, in test_list_servers_filter_by_error_status
  2017-01-31 17:48:34.663414 |     self.assertIn(self.s1_id, map(lambda x: x['id'], servers))
  2017-01-31 17:48:34.663448 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 417, in assertIn
  2017-01-31 17:48:34.663468 |     self.assertThat(haystack, Contains(needle), message)
  2017-01-31 17:48:34.663502 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2017-01-31 17:48:34.663515 |     raise mismatch_error
  2017-01-31 17:48:34.663542 | testtools.matchers._impl.MismatchError: u'108b4797-74fd-4a00-912a-b7fe0e142888' not in []

  This test resets the state on a server to ERROR:

  2017-01-31 17:48:34.663649 | 2017-01-31 17:28:39,375 504 INFO [tempest.lib.common.rest_client] Request (ServersAdminTestJSON:test_list_servers_filter_by_error_status): 202 POST http://10.23.154.32:8774/v2.1/servers/108b4797-74fd-4a00-912a-b7fe0e142888/action 0.142s
  2017-01-31 17:48:34.663695 | 2017-01-31 17:28:39,376 504 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': '<omitted>', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-01-31 17:48:34.663714 | Body: {"os-resetState": {"state": "error"}}

  Then tries to list servers by that status and expects to get that one
  back:

  2017-01-31 17:48:34.663883 | 2017-01-31 17:28:39,556 504 INFO [tempest.lib.common.rest_client] Request (ServersAdminTestJSON:test_list_servers_filter_by_error_status): 200 GET http://10.23.154.32:8774/v2.1/servers?status=error 0.179s
  2017-01-31 17:48:34.663955 | 2017-01-31 17:28:39,556 504 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': '<omitted>', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-01-31 17:48:34.663969 | Body: None
  2017-01-31 17:48:34.664078 | Response - Headers: {u'x-openstack-nova-api-version': '2.1', u'vary': 'X-OpenStack-Nova-API-Version', u'content-length': '15', 'status': '200', u'content-type': 'application/json', u'x-compute-request-id': 'req-91ef16ab-28c3-47c5-b823-6a321bde5c01', u'date': 'Tue, 31 Jan 2017 17:28:39 GMT', 'content-location': 'http://10.23.154.32:8774/v2.1/servers?status=error', u'openstack-api-version': 'compute 2.1', u'connection': 'close'}
  2017-01-31 17:48:34.664094 | Body: {"servers": []}

  And the list is coming back empty, intermittently, with cells v1. So
  there is probably some vm_state change race between the state change
  in the child cell and reporting that back up to the parent API cell.
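
  As an aside, a hedged sketch of the kind of test-side wait that would
  tolerate such a sync delay: poll the filtered list until the server
  shows up, instead of asserting on the first response. The names
  list_servers, wait_seconds, and poll_interval are illustrative, not
  tempest's actual API:

      import time

      def wait_for_server_in_status_list(list_servers, server_id,
                                         status="error", wait_seconds=30,
                                         poll_interval=1):
          # Poll GET /servers?status=<status> until server_id appears or
          # the deadline passes; True on success, False on timeout.
          deadline = time.time() + wait_seconds
          while time.time() < deadline:
              servers = list_servers(status=status)
              if any(s["id"] == server_id for s in servers):
                  return True
              # Give the child-cell -> API-cell sync a moment to catch up.
              time.sleep(poll_interval)
          return False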

  
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Body%3A%20%7B%5C%5C%5C%22servers%5C%5C%5C%22%3A%20%5B%5D%7D%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C%22gate-tempest-dsvm-cells-ubuntu-xenial%5C%22&from=7d

  16 hits in 7 days, in both the check and gate queues, all failures.