[Yahoo-eng-team] [Bug 1660899] [NEW] vpn agent stuck during sync

2017-01-31 Thread YAMAMOTO Takashi
Public bug reported:

For some reason, the vpn agent's event consumer thread got stuck after producing the
following line, while holding the 'vpn-agent' lock. Further updates can't be processed
because of the lock.

http://logs.openstack.org/01/419801/3/experimental/gate-neutron-dsvm-tempest-vpnaas-ubuntu-xenial/43c55dc/

2017-02-01 04:40:34.655 3790 DEBUG neutron.agent.linux.utils [req-8be2bfb5-2bf2-4b40-9121-cfa80e47ccac - tempest-Vpnaas-130233446] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qrouter-1cb07028-b795-43c6-beb0-c1ebf9e76056', 'neutron-vpn-netns-wrapper', '--mount_paths=/etc:/opt/stack/data/neutron/ipsec/1cb07028-b795-43c6-beb0-c1ebf9e76056/etc,/var/run:/opt/stack/data/neutron/ipsec/1cb07028-b795-43c6-beb0-c1ebf9e76056/var/run', '--cmd=ipsec,up,3227d225-92ae-4e9f-b524-85eecead6fef'] execute_rootwrap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:113
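A minimal stdlib sketch of the failure mode described above (illustrative only — the real agent serializes updates with oslo.concurrency's 'vpn-agent' lock; all names below are made up): one thread blocks forever while holding the lock, so every subsequent update handler stalls on it.

```python
import threading

vpn_agent_lock = threading.Lock()

def stuck_sync(started):
    with vpn_agent_lock:           # acquired, then never released
        started.set()
        threading.Event().wait()   # blocks forever, like the hung rootwrap call

started = threading.Event()
worker = threading.Thread(target=stuck_sync, args=(started,), daemon=True)
worker.start()
started.wait()

# Any later update handler now fails to make progress:
got_lock = vpn_agent_lock.acquire(timeout=0.1)
assert got_lock is False           # the 'vpn-agent' lock is still held
```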

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1660899

Title:
  vpn agent stuck during sync

Status in neutron:
  New

Bug description:
  For some reason, the vpn agent's event consumer thread got stuck after producing the
  following line, while holding the 'vpn-agent' lock. Further updates can't be processed
  because of the lock.

  http://logs.openstack.org/01/419801/3/experimental/gate-neutron-dsvm-tempest-vpnaas-ubuntu-xenial/43c55dc/

  2017-02-01 04:40:34.655 3790 DEBUG neutron.agent.linux.utils [req-8be2bfb5-2bf2-4b40-9121-cfa80e47ccac - tempest-Vpnaas-130233446] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 'qrouter-1cb07028-b795-43c6-beb0-c1ebf9e76056', 'neutron-vpn-netns-wrapper', '--mount_paths=/etc:/opt/stack/data/neutron/ipsec/1cb07028-b795-43c6-beb0-c1ebf9e76056/etc,/var/run:/opt/stack/data/neutron/ipsec/1cb07028-b795-43c6-beb0-c1ebf9e76056/var/run', '--cmd=ipsec,up,3227d225-92ae-4e9f-b524-85eecead6fef'] execute_rootwrap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:113

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1660899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660893] [NEW] Neutron SFC port chain delete fails

2017-01-31 Thread Deepak S
Public bug reported:

I'm experimenting with Neutron SFC. Chain creation always succeeds, but chain deletion
does not. The first couple of times I was able to delete the chains successfully;
however, once it failed, it never succeeded again.
When it failed for the first time, I had to manually delete the port chain entry
from the DB. Even though subsequent chain creations succeed, the deletion continuously
fails. Once I remove the port chain entry from the DB, the other objects (flow
classifier, port pairs, port pair groups) can be removed through the CLI.

Environment: Multi-node devstack with 1 controller and 2 computes, where the VMs are
launched on the same compute.
OS: Ubuntu 16.04
Kernel: 4.4.0-59-generic
OVS: 2.6.1
Devstack and SFC: Newton
All my neutron agents are alive.

Steps to create chain
openstack network create net11
openstack subnet create --subnet-range 11.0.0.0/24 --network net11 sub11
openstack network create net12
openstack subnet create --subnet-range 12.0.0.0/24 --network net12 sub12

openstack router create sfc-router
openstack router add subnet sfc-router sub11
openstack router add subnet sfc-router sub12

openstack port create --network net11 p1
openstack port create --network net12 p2
openstack server create --nic port-id=p1 --nic port-id=p2 --flavor 3 --image vyos sf-vm

sleep 5
openstack port pair create --ingress p1 --egress p2 pp1
openstack port pair group create --port-pair pp1 ppg1

openstack flow classifier create --source-ip-prefix 11.0.0.0/24 --destination-ip-prefix \
 12.0.0.0/24 --source-port 1:65535 --destination-port 80:80 --protocol TCP \
 --logical-source-port $(neutron port-list | grep \"11.0.0.1\" | awk '{print $2}') fc1

openstack port chain create --port-pair-group ppg1 --flow-classifier fc1 pc1

Steps to delete chain
openstack server delete sf-vm

openstack port chain delete pc1
openstack flow classifier delete fc1
openstack port pair group delete ppg1
openstack port pair delete pp1
openstack port delete p1
openstack port delete p2

openstack router remove subnet sfc-router sub11
openstack router remove subnet sfc-router sub12
openstack subnet delete sub11
openstack subnet delete sub12
openstack router delete sfc-router
openstack network delete net11
openstack network delete net12

Expected Output
Successful completion of step "openstack port chain delete pc1" and other 
obvious success messages.

Actual Output
delete_port_chain failed.
Neutron server returns request_ids: 
['req-eea26895-dcf5-4c31-a858-4da5ffe40b14']

This is a blocker for me.

Neutron Log: http://paste.openstack.org/show/597125/
Controller OVS agent LOG: http://paste.openstack.org/show/597126/
Compute OVS agent LOG: http://paste.openstack.org/show/597127/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: chaining ovs sfc


[Yahoo-eng-team] [Bug 1284317] Re: Not found router raises generic exception

2017-01-31 Thread Armando Migliaccio
** No longer affects: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1284317

Title:
  Not found router raises generic exception

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  As a horizon developer, I use the neutronclient package, which
  currently provides several exceptions that horizon can catch,
  including NetworkNotFoundClient, and PortNotFoundClient.
  Unfortunately, there is no RouterNotFoundClient exception.  When an
  api call is made to get a router_id that doesn't exist, a very generic
  NeutronClientException is raised which is significantly problematic.
  (Though it does have a status_code of 404, and an error message saying
  "Router  could not be found" which is good.)

  I'd like for a new exception to be thrown so we can catch it and treat
  it as a NOT_FOUND error instead of having to parse it and fight
  changing-error-messages and the like.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1284317/+subscriptions



[Yahoo-eng-team] [Bug 1285530] Re: exception message should use gettextutils

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285530

Title:
  exception message should use gettextutils

Status in oslo-incubator:
  Won't Fix
Status in oslo.messaging:
  Incomplete
Status in python-neutronclient:
  Incomplete
Status in OpenStack Object Storage (swift):
  In Progress
Status in tuskar:
  Won't Fix
Status in tuskar-ui:
  Won't Fix

Bug description:
  What To Translate

  At present the convention is to translate all user-facing strings.
  This means API messages, CLI responses, documentation, help text, etc.

  There has been a lack of consensus about the translation of log
  messages; the current ruling is that while it is not against policy to
  mark log messages for translation if your project feels strongly about
  it, translating log messages is not actively encouraged.

  Exception text should not be marked for translation, becuase if an
  exception occurs there is no guarantee that the translation machinery
  will be functional.
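  The convention above can be sketched in miniature (illustrative only — the dummy `_` below stands in for the real gettext/oslo i18n machinery):

```python
def _(msg):
    """Stand-in for the gettext translation function (illustrative)."""
    return msg  # a real deployment would look up the active locale here

# User-facing strings (API/CLI responses, help text) are marked for translation:
help_text = _("Lists all networks visible to the tenant.")

# Exception text is left unmarked: if an exception fires, the translation
# machinery itself may be broken, so the raw string must stand on its own.
class NetworkNotFound(Exception):
    message = "Network could not be found."

assert help_text == "Lists all networks visible to the tenant."
assert NetworkNotFound.message == "Network could not be found."
```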

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo-incubator/+bug/1285530/+subscriptions



[Yahoo-eng-team] [Bug 1247568] Re: NotFound neutron errors should have a unique base class

2017-01-31 Thread Armando Migliaccio
** Changed in: python-neutronclient
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1247568

Title:
  NotFound neutron errors should have a unique base class

Status in neutron:
  Expired
Status in python-neutronclient:
  Invalid
Status in tempest:
  Expired

Bug description:
  Most of the server-side exceptions are caught by the client and re-thrown
  as the generic NeutronError. The original message and type are still found
  in the message.

  404-{u'NeutronError': {u'message': u'Security group rule 6957532c-ae2c-4273-89c3-11a does not exist', u'type': u'SecurityGroupRuleNotFound', u'detail': u''}}

  this makes it impossible to catch unique error types such as NotFound.

  for instance: in tempest (tempest/scenario/manager:

   try:
       # OpenStack resources are assumed to have a delete()
       # method which destroys the resource...
       thing.delete()
   except Exception as e:
       # If the resource is already missing, mission accomplished.
       if e.__class__.__name__ == 'NotFound':
           continue
       raise

  This code should catch and dismiss all not-found resources; however,
  since neutronclient doesn't have a unique NotFound type, it fails to
  catch here.
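  Until a unique base class exists, callers can only dispatch on the generic exception's status_code. A sketch of that workaround (the NeutronClientException here is a local stand-in for illustration, not the real neutronclient import):

```python
class NeutronClientException(Exception):
    """Local stand-in for the client's generic exception (illustrative)."""
    def __init__(self, message, status_code=0):
        super().__init__(message)
        self.status_code = status_code

def delete_quietly(delete_fn):
    """Call delete_fn(), treating an HTTP 404 as 'already gone'."""
    try:
        delete_fn()
        return 'deleted'
    except NeutronClientException as e:
        if e.status_code == 404:   # resource already missing: mission accomplished
            return 'already-gone'
        raise                      # anything else is a real failure

def delete_missing_rule():
    raise NeutronClientException(
        'Security group rule does not exist', status_code=404)

assert delete_quietly(delete_missing_rule) == 'already-gone'
```

This avoids matching on class names or parsing ever-changing error messages, at the cost of relying on the status code alone.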

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1247568/+subscriptions



[Yahoo-eng-team] [Bug 1279683] Re: The problem generated by deleting dhcp port by mistake

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279683

Title:
  The problem generated by deleting dhcp port by mistake

Status in tempest:
  Fix Released

Bug description:
  In my environment, I deleted the dhcp port by mistake, and then found that the VM in
  the primary subnet could not get an IP. The reason is that when a dhcp port is deleted,
  neutron recreates the dhcp port automatically, but the VIF TAP is not deleted, so you
  will see one IP on two VIF TAPs:
  root@shz-dev:~# ip netns exec qdhcp-ab049276-3b7c-41a2-b62c-3f587a02b0a6 
ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:576 (576.0 B)  TX bytes:576 (576.0 B)

  tap4694b3c4-a6 Link encap:Ethernet  HWaddr fa:16:3e:41:f6:fc  
inet addr:50.50.50.2  Bcast:50.50.50.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe41:f6fc/64 Scope:Link
UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:796 (796.0 B)

  tapa546a666-31 Link encap:Ethernet  HWaddr fa:16:3e:98:dd:a7  
inet addr:50.50.50.2  Bcast:50.50.50.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe98:dda7/64 Scope:Link
UP BROADCAST RUNNING PROMISC  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:496 (496.0 B)

  Even if the problem is caused by an erroneous operation, I think the dhcp
  port should not be allowed to be deleted, just as the port on a router
  cannot be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1279683/+subscriptions



[Yahoo-eng-team] [Bug 1282842] Re: default nova+neutron setup cannot handle spawning 20 images concurrently

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282842

Title:
  default nova+neutron setup cannot handle spawning 20 images
  concurrently

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  This breaks any @scale use of a cloud.

  Symptoms include 500 errors from 'nova list' (which causes a heat
  stack failure) and errors like 'unknown auth strategy' from
  neutronclient when it's being called from the nova compute.manager.

  Sorry for the many-project-tasks here - it's not clear where the bug
  lies, nor whether it's bad defaults, or code handling errors, or perf
  tuning etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1282842/+subscriptions



[Yahoo-eng-team] [Bug 1285323] Re: Services fail to shut down on the old side of Grenade

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285323

Title:
  Services fail to shut down on the old side of Grenade

Status in grenade:
  In Progress

Bug description:
  During a grenade run, sometimes services aren't shutting down
  correctly. This shows up in 2 ways: one in the sanity check that
  catches a still-running service, and one in the sanity check that
  checks for the new services.

  The latest Logstash Query for this is:

  ((message:"+ echo \'The following services are still running:")
  OR (message:"Error: Service" AND message:"is not running"))
  AND filename:"console.html" AND NOT build_name:"check-grenade-dsvm-partial-ncpu"

  check-grenade-dsvm-partial-ncpu is excluded because it's still a work
  in progress.

  Example log

  http://logs.openstack.org/40/72040/5/gate/gate-grenade-dsvm/eef5df3/console.html#_2014-02-26_03_33_14_937

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1285323/+subscriptions



[Yahoo-eng-team] [Bug 1424970] Re: VMware: Router Type Extension Support

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424970

Title:
  VMware: Router Type Extension Support

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/157377
  commit a8ed4ad3d345187bb314c7c574a1476de1a16584
  Author: Gary Kotton 
  Date:   Thu Feb 19 05:55:14 2015 -0800

  VMware: Router Type Extension Support
  
  Provide a router type extension. This will enable defining the
  type of edge appliance. That is, an exclusive edge or a shared
  edge for routing services.
  
  DocImpact
  
  Change-Id: I1464d0390c0b3ee7658e7955e6433f2ac078a5fe

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424970/+subscriptions



[Yahoo-eng-team] [Bug 1463254] Re: all PortContexts trigger network segment lookup

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463254

Title:
  all PortContexts trigger network segment lookup

Status in neutron:
  Won't Fix

Bug description:
  Every time an ML2 port context is constructed, it builds a network
  context which results in a db lookup of all of the network segments.
  There are various places where the network is completely irrelevant
  (e.g. port status update) so this DB activity is just a waste of
  resources.
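  One fix direction would be to defer the segment lookup until something actually reads it, so contexts built for segment-irrelevant operations never touch the DB. A sketch of that lazy-loading idea (illustrative only, not neutron's actual ML2 code; the loader function stands in for the DB query):

```python
class NetworkContext:
    """Illustrative context that loads network segments lazily."""
    def __init__(self, network_id, segment_loader):
        self.network_id = network_id
        self._segment_loader = segment_loader  # e.g. a DB query (assumed)
        self._segments = None

    @property
    def segments(self):
        if self._segments is None:             # first access triggers the load
            self._segments = self._segment_loader(self.network_id)
        return self._segments

calls = []
def fake_db_lookup(net_id):
    calls.append(net_id)
    return [{'network_type': 'vlan', 'segmentation_id': 100}]

ctx = NetworkContext('net-1', fake_db_lookup)
assert calls == []            # building the context does no DB work
_ = ctx.segments
assert calls == ['net-1']     # the lookup happens only on first access
```

A port-status update that never reads ctx.segments would then skip the DB round trip entirely.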

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463254/+subscriptions



[Yahoo-eng-team] [Bug 1471032] Re: [api-ref]Support Basic Address Scope CRUD as extensions

2017-01-31 Thread Armando Migliaccio
I see [1]. I assume this is done.

[1] http://docs.openstack.org/newton/networking-guide/config-address-scopes.html

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471032

Title:
  [api-ref]Support Basic Address Scope CRUD as extensions

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/189741
  commit cbd95318ad6c44e72a3aa163f7a399353c8b4458
  Author: vikram.choudhary 
  Date:   Tue Jun 9 19:55:59 2015 +0530

  Support Basic Address Scope CRUD as extensions
  
  This patch adds the support for basic address scope CRUD.
  Subsequent patches will be added to use this address scope
  on subnet pools.
  
  DocImpact
  APIImpact
  
  Co-Authored-By: Ryan Tidwell 
  Co-Authored-By: Numan Siddique 
  Change-Id: Icabdd22577cfda0e1fbf6042e4b05b8080e54fdb
  Partially-implements:  blueprint address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471032/+subscriptions



[Yahoo-eng-team] [Bug 1482622] Re: functional test fails with "RuntimeError: Second simultaneous read on fileno 6 "

2017-01-31 Thread Armando Migliaccio
No longer happening.

** Tags added: gate-failure

** Tags added: functional-tests

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1482622

Title:
  functional test fails with "RuntimeError: Second simultaneous read on
  fileno 6 "

Status in neutron:
  Invalid

Bug description:
  Logstash query:
  message:"Second simultaneous read on fileno" AND tags:"console"
  Select 'all time'.


  Neutron functional test is failing with below error in gate

  2015-07-28 17:14:04.164 | 2015-07-28 17:14:04.154 | 2015-07-28 17:13:58.550 6878 ERROR neutron.callbacks.manager evtype, fileno, evtype, cb, bucket[fileno]))
  2015-07-28 17:14:04.166 | 2015-07-28 17:14:04.156 | 2015-07-28 17:13:58.550 6878 ERROR neutron.callbacks.manager RuntimeError: Second simultaneous read on fileno 6 detected.  Unless you really know what you're doing, make sure that only one greenthread can read any particular socket.  Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_prevent_multiple_readers(False) - MY THREAD=; THAT THREAD=FdListener('read', 6, , )

  For full log:
  http://logs.openstack.org/36/200636/4/check/gate-neutron-vpnaas-dsvm-functional-sswan/1dcd044/console.html.gz

  The functional test uses two l3 agents. During test cleanup, both the
  agents try to clean up the resources simultaneously, and failing with
  the above error.

  To avoid this issue I am using root_helper instead of root_helper_daemon in the test:
  https://review.openstack.org/#/c/200636/5/neutron_vpnaas/tests/functional/common/test_scenario.py
  Line 173

  To reproduce this issue, please enable root_helper_daemon in the
  test (disable line 173 in the above file).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1482622/+subscriptions



[Yahoo-eng-team] [Bug 1483873] Re: It should be possible to override the URL used in previous/next links

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483873

Title:
  It should be possible to override the URL used in previous/next links

Status in neutron:
  Won't Fix

Bug description:
  The code for fetching the pagination links is hard coded to use
  WebOb's Request path_url property. In setups where a proxy is used to
  forward requests to neutron nodes, the href shown to the requesting
  tenant doesn't match the one he/she made the request against.

  
https://github.com/openstack/neutron/blob/master/neutron/api/api_common.py#L56-L73

  Ideally, one should be able to define a configuration variable that is
  used in place of request.path_url when defined.
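  The proposed behaviour can be sketched as follows (illustrative only — the option name `pagination_base_url` and the helper are hypothetical, not neutron's actual code): prefer an operator-configured base URL over the WSGI request's path_url when building next/prev links.

```python
from urllib.parse import urlencode

def build_page_link(request_path_url, params, configured_base_url=None):
    """Return a pagination href, honouring an operator-configured base URL."""
    base = configured_base_url or request_path_url   # fall back to path_url
    return "%s?%s" % (base, urlencode(sorted(params.items())))

# Behind a proxy, path_url is the internal address...
internal = build_page_link("http://10.0.0.5:9696/v2.0/ports", {"marker": "abc"})
# ...but with the override set, tenants see the URL they actually requested.
external = build_page_link("http://10.0.0.5:9696/v2.0/ports", {"marker": "abc"},
                           configured_base_url="https://cloud.example.com/v2.0/ports")
assert internal.startswith("http://10.0.0.5:9696/")
assert external.startswith("https://cloud.example.com/")
```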

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483873/+subscriptions



[Yahoo-eng-team] [Bug 1295443] Re: Handle unicode exception messages

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295443

Title:
  Handle unicode exception messages

Status in heat:
  Fix Released
Status in python-neutronclient:
  Incomplete

Bug description:
  There are many places in the code that explicitly ask for byte
  strings when using exception messages. They should be able to work
  with unicode strings and use them in a way that is PY3 compatible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1295443/+subscriptions



[Yahoo-eng-team] [Bug 1660878] [NEW] test_reboot_deleted_server fails with 409 "Cannot 'reboot' instance while it is in vm_state building"

2017-01-31 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/91/426991/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/f218227/console.html#_2017-02-01_02_06_33_592237

2017-02-01 02:06:33.592237 | tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_deleted_server[id-581a397d-5eab-486f-9cf9-1014bbd4c984,negative]
2017-02-01 02:06:33.592305 | --
2017-02-01 02:06:33.592321 | 
2017-02-01 02:06:33.592340 | Captured traceback:
2017-02-01 02:06:33.592367 | ~~~
2017-02-01 02:06:33.592398 | Traceback (most recent call last):
2017-02-01 02:06:33.592453 |   File "tempest/api/compute/servers/test_servers_negative.py", line 190, in test_reboot_deleted_server
2017-02-01 02:06:33.593010 |     server['id'], type='SOFT')
2017-02-01 02:06:33.593072 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 485, in assertRaises
2017-02-01 02:06:33.593110 |     self.assertThat(our_callable, matcher)
2017-02-01 02:06:33.593162 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 496, in assertThat
2017-02-01 02:06:33.593205 |     mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
2017-02-01 02:06:33.593266 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 547, in _matchHelper
2017-02-01 02:06:33.593294 |     mismatch = matcher.match(matchee)
2017-02-01 02:06:33.593345 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
2017-02-01 02:06:33.593388 |     mismatch = self.exception_matcher.match(exc_info)
2017-02-01 02:06:33.593443 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
2017-02-01 02:06:33.593468 |     mismatch = matcher.match(matchee)
2017-02-01 02:06:33.593515 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 475, in match
2017-02-01 02:06:33.593544 |     reraise(*matchee)
2017-02-01 02:06:33.593597 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
2017-02-01 02:06:33.593618 |     result = matchee()
2017-02-01 02:06:33.593667 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py", line 1049, in __call__
2017-02-01 02:06:33.593699 |     return self._callable_object(*self._args, **self._kwargs)
2017-02-01 02:06:33.593736 |   File "tempest/lib/services/compute/servers_client.py", line 236, in reboot_server
2017-02-01 02:06:33.593777 |     return self.action(server_id, 'reboot', **kwargs)
2017-02-01 02:06:33.593814 |   File "tempest/lib/services/compute/servers_client.py", line 186, in action
2017-02-01 02:06:33.594170 |     post_body)
2017-02-01 02:06:33.594219 |   File "tempest/lib/common/rest_client.py", line 275, in post
2017-02-01 02:06:33.594261 |     return self.request('POST', url, extra_headers, headers, body, chunked)
2017-02-01 02:06:33.594298 |   File "tempest/lib/services/compute/base_compute_client.py", line 48, in request
2017-02-01 02:06:33.594328 |     method, url, extra_headers, headers, body, chunked)
2017-02-01 02:06:33.594360 |   File "tempest/lib/common/rest_client.py", line 663, in request
2017-02-01 02:06:33.594392 |     self._error_checker(resp, resp_body)
2017-02-01 02:06:33.594449 |   File "tempest/lib/common/rest_client.py", line 775, in _error_checker
2017-02-01 02:06:33.594496 |     raise exceptions.Conflict(resp_body, resp=resp)
2017-02-01 02:06:33.594555 | tempest.lib.exceptions.Conflict: An object with that identifier already exists
2017-02-01 02:06:33.594641 | Details: {u'code': 409, u'message': u"Cannot 'reboot' instance d9978a80-0349-4cb8-84fa-46aa0b8eaa16 while it is in vm_state building"}

http://logs.openstack.org/91/426991/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/f218227/logs/screen-n-api.txt.gz#_2017-02-01_01_25_02_367

The test creates a server, then deletes the server and waits for it to
be gone, and then tries to reboot it; it expects a 404 but gets a 409
because the instance is still building.

We're probably hitting a window of time between when we have a build
request and a real instance, due to the recent change that moved
instance creation from nova-api to nova-conductor.
Looks like this started on 1/27:


[Yahoo-eng-team] [Bug 1555384] Re: ML2: routers and multiple mechanism drivers

2017-01-31 Thread Sam Morrison
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555384

Title:
  ML2: routers and multiple mechanism drivers

Status in networking-midonet:
  Fix Released
Status in neutron:
  New

Bug description:
  We have an ML2 environment with linuxbridge and midonet networks. For
  L3 we use the midonet driver.

  If a user tries to bind a linuxbridge port to a midonet router it
  returns the following error:

  {"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-
  a1a4-9dfe3d22e62c.","code":404}

  However, the port is created (the user can't see it), and doing a router
  show shows the router as having that interface.

  Ideally this should not be allowed and should error out gracefully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1555384/+subscriptions



[Yahoo-eng-team] [Bug 1535514] Re: run_test virtual-env-path args not work

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535514

Title:
  run_test virtual-env-path args not work

Status in Glance:
  In Progress
Status in Manila:
  In Progress

Bug description:
  When running the unit tests with the virtual-env-path argument, the
  venv path does not point to the given one.

  Executing
  ./run_tests.sh --virtual-env-path /opt/stack

  the venv path is $(pwd), not /opt/stack.

  run_tests.sh exports the environment variables venv_path, tools_path,
  and venv_name, but install_venv.py does not read them, and with_venv.sh
  in neutron does not read them either, so VENV_PATH, TOOLS_PATH, and
  VENV_NAME get the wrong values.
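  The missing environment plumbing the reporter describes can be sketched
  as a small fallback helper. This is a hypothetical sketch; the variable
  names (venv_path, tools_path, venv_name) are taken from the report and
  are assumptions about what the scripts actually export:

```python
import os

def resolve_venv_paths(default_root):
    """Honor the env vars run_tests.sh exports, falling back to defaults.

    Sketch of the lookup the reporter says install_venv.py / with_venv.sh
    lack; the variable names are assumptions based on the report.
    """
    venv_path = os.environ.get("venv_path", default_root)
    tools_path = os.environ.get("tools_path", default_root)
    venv_name = os.environ.get("venv_name", ".venv")
    return venv_path, tools_path, os.path.join(venv_path, venv_name)
```

  With this, a user-supplied --virtual-env-path exported as venv_path would
  win over the current working directory.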

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1535514/+subscriptions



[Yahoo-eng-team] [Bug 1565105] Re: while spinning vms on large scale some vms get multiple ips and multiple ports get created

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
https://bugs.launchpad.net/bugs/1565105

Title:
  while spinning vms on large scale some vms get multiple ips and
  multiple ports get created

Status in kolla:
  Fix Released
Status in kolla mitaka series:
  Fix Committed

Bug description:
  Attached are logs from neutron-server.

  We are not getting any error logs in RabbitMQ.

  http://paste.openstack.org/show/492785/

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1565105/+subscriptions



[Yahoo-eng-team] [Bug 1555384] Re: ML2: routers and multiple mechanism drivers

2017-01-31 Thread Armando Migliaccio
** No longer affects: neutron

-- 
https://bugs.launchpad.net/bugs/1555384

Title:
  ML2: routers and multiple mechanism drivers

Status in networking-midonet:
  Fix Released
Status in neutron:
  New

Bug description:
  We have an ML2 environment with linuxbridge and midonet networks. For
  L3 we use the midonet driver.

  If a user tries to bind a linuxbridge port to a midonet router, it
  returns the following error:

  {"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-a1a4-9dfe3d22e62c.","code":404}

  However, the port is created (though the user can't see it), and a
  router-show lists the router as having that interface.

  Ideally this should not be allowed and should error out gracefully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1555384/+subscriptions



[Yahoo-eng-team] [Bug 1582185] Re: when vm detaches security group with remote_group_id, vm's ip address doesn't get deleted from ipset members.

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1582185

Title:
  when vm detaches security group with remote_group_id, vm's ip address
  doesn't get deleted from ipset members.

Status in neutron:
  Invalid

Bug description:
  There is a default security group that has been attached to two VMs;
  the security group is as below:

  | 204844ae-6939-44d3-a375-1999cd44c942 | default | egress, IPv4                                                                 |
  |                                      |         | egress, IPv4, 22/tcp, remote_group_id: 204844ae-6939-44d3-a375-1999cd44c942 |
  |                                      |         | egress, IPv6                                                                 |
  |                                      |         | ingress, IPv4, 22/tcp                                                        |
  |                                      |         | ingress, IPv4, 3389/tcp                                                      |
  |                                      |         | ingress, IPv4, icmp, remote_ip_prefix: 0.0.0.0/0                             |
  |                                      |         | ingress, IPv4, remote_group_id: 204844ae-6939-44d3-a375-1999cd44c942         |
  |                                      |         | ingress, IPv6, 22/tcp                                                        |
  |                                      |         | ingress, IPv6, 3389/tcp                                                      |
  |                                      |         | ingress, IPv6, icmp                                                          |
  |                                      |         | ingress, IPv6, remote_group_id: 204844ae-6939-44d3-a375-1999cd44c942         |

  [root@openstack ~(keystone_admin)]# nova list
  +--------------------------------------+------+--------+------------+-------------+-------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks          |
  +--------------------------------------+------+--------+------------+-------------+-------------------+
  | 4558881d-2784-40b8-a0fc-a8238196ca47 | vm1  | ACTIVE | -          | Running     | =172.16.0.9       |
  | e67ba1de-305d-4915-a2bc-bb24b0389546 | vm2  | ACTIVE | -          | Running     | test=192.168.12.6 |
  +--------------------------------------+------+--------+------------+-------------+-------------------+

  Reproduce:
  step 1: vm1 attaches the default security group
  step 2: vm2 attaches the default security group, we can see the ipset member:
  [root@openstack ~]# ipset list NETIPv4204844ae-6939-44d3-a
  Name: NETIPv4204844ae-6939-44d3-a
  Type: hash:net
  Revision: 3
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16880
  References: 6
  Members:
  192.168.12.6
  172.16.0.9
  step 3: vm2 detaches the default security group; now we can see "192.168.12.6" is still there:
  [root@openstack ~]# ipset list NETIPv4204844ae-6939-44d3-a
  Name: NETIPv4204844ae-6939-44d3-a
  Type: hash:net
  Revision: 3
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16880
  References: 5
  Members:
  192.168.12.6
  172.16.0.9

  Expected:
  "192.168.12.6" should be removed from ipset member.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582185/+subscriptions



[Yahoo-eng-team] [Bug 1582807] Re: neutron-rootwrap-daemon explodes because of misbehaving "ip rule"

2017-01-31 Thread Armando Migliaccio
I don't think we can reproduce this anymore

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1582807

Title:
  neutron-rootwrap-daemon explodes because of misbehaving "ip rule"

Status in neutron:
  Invalid
Status in oslo.rootwrap:
  New

Bug description:
  Somehow, one of my deployments (ubuntu trusty64 based)

  root@devstack:~# uname -a
  Linux devstack 3.19.0-37-generic #42~14.04.1-Ubuntu SMP Mon Nov 23 15:13:51 
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  results in never-ending output of "ip rule"

  # ip rule
  0:from all lookup local
  0:from all lookup local
  0:from all lookup local
  0:from all lookup local
  [...]
  runs forever... (apparently because of a kernel bug)

  As a result of that, when neutron-l3-agent calls "ip rule" via the
  rootwrap daemon, the daemon eventually consumes all of the VM's memory
  and explodes.

  I believe we should include some sort of limit on the rootwrap daemon's
  command-response collection, to avoid crashing the system under
  conditions like this.
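  The suggested limit could look something like a bounded read of the
  child's output, killing the command once the cap is hit. This is a
  hypothetical sketch of the idea; oslo.rootwrap exposes no such option,
  and the function name here is invented:

```python
import subprocess

def run_with_output_cap(cmd, max_bytes=1 << 20):
    """Collect at most max_bytes of stdout from cmd, then kill it.

    Hypothetical sketch of the response-size limit the reporter proposes
    for rootwrap-daemon; not oslo.rootwrap's actual API.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    data = b""
    while len(data) < max_bytes:
        chunk = proc.stdout.read(65536)
        if not chunk:  # EOF: the command finished on its own
            break
        data += chunk
    proc.kill()  # stop runaway commands like the looping "ip rule"
    proc.wait()
    return data[:max_bytes]
```

  A runaway "ip rule" would then cost at most max_bytes of memory instead
  of growing without bound.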

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582807/+subscriptions



[Yahoo-eng-team] [Bug 1579458] Re: dhcp port was deleted unexpectedly when running rally list_and_create_ports test

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1579458

Title:
  dhcp port was deleted unexpectedly when running rally
  list_and_create_ports test

Status in neutron:
  Invalid

Bug description:
  I tested this test in devstack master: (the content of json is not
  related to the bug, so I don't detail the content)

  rally --debug task start
  /opt/stack/rally/samples/tasks/scenarios/neutron/create_and_list_ports.json

  The test shows it passed successfully on the high level, however in
  the q-dhcp.log file, I found same errors:

  2016-05-08 09:26:48.918 ERROR neutron.agent.linux.utils [req-bc440ad1-097e-4854-ba9f-b3d6fa3ebc57 c_rally_aabb2955_PDaqOU6i 41c974f05789403894531cd2979bd870] Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "tap123bb3c5-ab"
  2016-05-08 09:26:48.919 ERROR neutron.agent.dhcp.agent [req-bc440ad1-097e-4854-ba9f-b3d6fa3ebc57 c_rally_aabb2955_PDaqOU6i 41c974f05789403894531cd2979bd870] Unable to reload_allocations dhcp for 1a671e08-78b5-45fa-8bfe-f9651444b242.
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 113, in call_driver
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 477, in reload_allocations
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1281, in update
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1015, in _set_default_route
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     gateway = device.route.get_gateway()
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 765, in get_gateway
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     route_list_lines = self._run(options, tuple(args)).split('\n')
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 355, in _run
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     return self._parent._run(options, self.COMMAND, args)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 79, in _run
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     return self._as_root(options, command, args)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 94, in _as_root
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     log_fail_as_error=self.log_fail_as_error)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 103, in _execute
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     log_fail_as_error=log_fail_as_error)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 137, in execute
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent     raise RuntimeError(msg)
  2016-05-08 09:26:48.919 TRACE neutron.agent.dhcp.agent RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "tap123bb3c5-ab"

  
  This is because the dhcp port was deleted before that:

  2016-05-08 09:26:48.579 DEBUG neutron.agent.dhcp.agent

[Yahoo-eng-team] [Bug 1579966] Re: dns_name is always wiped out on Neutron port-update

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1579966

Title:
  dns_name is always wiped out on Neutron port-update

Status in neutron:
  Invalid

Bug description:
  DESCRIPTION: Suppose a DNS name is set on a port, either during initial
  port creation or a subsequent port-update. Whenever a port update is
  invoked in neutron - whether through the CLI or API - to change some
  other attribute, the dns_name is wiped out. All other properties of a
  port are untouched on a port update. As a result, the dns_assignment is
  also changed from hostname to IP format. This can also happen when a VM
  gets rescheduled to another hypervisor (say, due to network or resource
  incompatibility, or other reasons): a port_update is called via Nova's
  neutron API to change the host ID binding, without dns_name in the
  request.

  I have root-caused this to the port_update handler in neutron's DB base
  plugin, which always sets dns_name to an empty '' if the request does
  not include it. This is not consistent with the behavior of other
  attributes, or with the intended behavior of an "UPDATE" operation.
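  The inconsistency described above can be contrasted in a few lines. This
  is a toy dict model for illustration; the real code lives in neutron's
  DB base plugin, and only the field name dns_name is taken from the
  report:

```python
def update_port_buggy(port, req):
    """Reported behavior: dns_name is reset whenever the request omits it."""
    updated = dict(port)
    updated.update(req)
    updated["dns_name"] = req.get("dns_name", "")  # wipes dns_name
    return updated

def update_port_expected(port, req):
    """Expected PATCH semantics: only attributes present in the request change."""
    updated = dict(port)
    updated.update(req)
    return updated
```

  Under the expected semantics, a port-update that only changes the name
  leaves dns_name (and hence dns_assignment) untouched.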

  REPRODUCTION STEPS:

  1. Create any new VM and attach to a tenant network, it will get
  allocated a new port/fixed IP, find this port's UUID. OR, attach a VM
  to an existing neutron port.

  2. For the given port, do a port-update and set the dns_name, verify
  in DB / via show CLI that dns_name and assignment are updated.

  3. Do a port-update and change any other attribute - for example the
  name, or admin state.

  4. Verify that the dns_name got cleared out. All other attributes
  untouched.

  OUTPUT:

  [root@ip-172-31-14-173 ~(admin_admin)]# neutron port-show 355e5876-c195-443e-bac7-3cf40e032c7c

  | device_owner    | compute:None                                                                         |
  | dns_assignment  | {"hostname": "vmnet2", "ip_address": "192.168.66.5", "fqdn": "vmnet2.example.net."} |
  | dns_name        | vmnet2                                                                               |
  | extra_dhcp_opts |                                                                                      |
  | fixed_ips       | {"subnet_id": "2b2486c7-2b90-47f1-9109-e93a8b54d073", "ip_address": "192.168.66.5"} |

  [root@ip-172-31-14-173 ~(admin_admin)]# neutron port-update 355e5876-c195-443e-bac7-3cf40e032c7c --name ARJUN
  Updated port: 355e5876-c195-443e-bac7-3cf40e032c7c

  [root@ip-172-31-14-173 ~(admin_admin)]# neutron port-show 355e5876-c195-443e-bac7-3cf40e032c7c

  | dns_assignment  | {"hostname": "host-192-168-66-5", "ip_address": "192.168.66.5", "fqdn": "host-192-168-66-5.example.net."} |
  | dns_name        |                                                                                      |
  | extra_dhcp_opts |                                                                                      |
  | fixed_ips       | {"subnet_id": "2b2486c7-2b90-47f1-9109-e93a8b54d073", "ip_address": "192.168.66.5"} |

  EXPECTED OUTPUT: dns_name not removed, dns_assignment's hostname does
  not get replaced with "host-IP" format.

  VERSION: Liberty, Centos 7.1, regular deployment with controller node,
  network host, multiple hypervisors

  Severity: High - DNS behavior and DHCP host entries can unexpectedly
  change, killing connectivity or any current traffic. As mentioned,
  port can get updated for multitude of reasons (one being Nova
  migrating the instance to a new host)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579966/+subscriptions



[Yahoo-eng-team] [Bug 1580381] Re: pecan failing with shimrequest attribute error

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1580381

Title:
  pecan failing with shimrequest attribute error

Status in neutron:
  Invalid

Bug description:
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation [req-f3a5850c-f611-4758-b195-d58cae88b82f admin -] An unexpected exception was caught: 'ShimRequest' object has no attribute 'GET'
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 678, in __call__
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     self.invoke_controller(controller, args, kwargs, state)
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 572, in invoke_controller
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     result = controller(*args, **kwargs)
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/new/neutron/neutron/pecan_wsgi/controllers/utils.py", line 98, in index
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     return self.controller_index(shim_request, **uri_identifiers)
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     ectxt.value = e.inner_exc
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 343, in index
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     return self._items(request, True, parent_id)
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 254, in _items
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     api_common.list_args(request, 'fields'))
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/new/neutron/neutron/api/api_common.py", line 130, in list_args
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation     return [v for v in request.GET.getall(arg) if v]
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation AttributeError: 'ShimRequest' object has no attribute 'GET'
  2016-05-10 21:26:48.980 19440 ERROR neutron.pecan_wsgi.hooks.translation

  http://logs.openstack.org/51/273951/24/experimental/gate-neutron-dsvm-api-pecan/452fb92/logs/screen-q-svc.txt.gz#_2016-05-10_21_26_48_980

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580381/+subscriptions



[Yahoo-eng-team] [Bug 1581320] Re: deprecation warnings for ICMPV6_ALLOWED_TYPES

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
https://bugs.launchpad.net/bugs/1581320

Title:
  deprecation warnings for ICMPV6_ALLOWED_TYPES

Status in neutron:
  Won't Fix

Bug description:
  neutron/agent/linux/openvswitch_firewall/firewall.py:371: DeprecationWarning: ICMPV6_ALLOWED_TYPES in version 'mitaka' and will be removed in version 'newton': moved to neutron_lib.constants
  2016-05-12 16:12:52.348 |   for icmp_type in constants.ICMPV6_ALLOWED_TYPES:
  2016-05-12 16:12:52.348 | neutron/agent/linux/openvswitch_firewall/firewall.py:550: DeprecationWarning: ICMPV6_ALLOWED_TYPES in version 'mitaka' and will be removed in version 'newton': moved to neutron_lib.constants
  2016-05-12 16:12:52.348 |   for icmp_type in constants.ICMPV6_ALLOWED_TYPES:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581320/+subscriptions



[Yahoo-eng-team] [Bug 1660682] Re: resource tracker sets wrong initial resource provider generation when creating a new resource provider

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/427319
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=29f3d8bae23750ec53b59a258eb71459e6972927
Submitter: Jenkins
Branch:master

commit 29f3d8bae23750ec53b59a258eb71459e6972927
Author: Chris Dent 
Date:   Tue Jan 31 17:25:11 2017 +

Fresh resource provider in RT must have generation 0

When the resource tracker creates a new resource provider it sets
the generation to 1. This is wrong, because on the server side the
default value for a generation, and thus the generation for a new
provider is 0. This fixes that situation and thus avoids causing the
first inventory write operation from the resource tracker to the
placement API being a 409 conflict.

Change-Id: Iaa9e786f33049799678f21b5dfaac417c6601ac8
Closes-Bug: #1660682


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660682

Title:
  resource tracker sets wrong initial resource provider generation when
  creating a new resource provider

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  When the resource tracker creates (and then effectively caches) a new
  resource provider for the compute node, it sets the generation to 1.
  This leads to the first PUT to set inventory resulting in a 409
  conflict because the generation is wrong.

  The default generation in the database is 0, so for any new resource
  provider this is what it will be. So in the resource tracker, in
  _create_resource_provider, the generation should also be 0.
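  The generation handshake this description relies on can be modeled in a
  few lines. This is a toy sketch of placement's optimistic-concurrency
  check, not nova's actual code:

```python
def put_inventory(provider, client_generation, inventory):
    """Placement-style optimistic concurrency: generation mismatch -> 409.

    Toy model showing why a client that assumes a fresh provider starts
    at generation 1 (server default: 0) gets a 409 on its first write.
    """
    if client_generation != provider["generation"]:
        return 409  # conflict: the client's view of the provider is stale
    provider["inventory"] = inventory
    provider["generation"] += 1  # each successful write bumps the generation
    return 200
```

  With the fix, the resource tracker starts from generation 0 and the
  first inventory write succeeds instead of conflicting.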

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660682/+subscriptions



[Yahoo-eng-team] [Bug 1597861] Re: Cannot install mac spoofing for allowed_address_pairs if port mac address is unavailable

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1597861

Title:
  Cannot install mac spoofing for allowed_address_pairs if port mac
  address is unavailable

Status in neutron:
  Invalid

Bug description:
  From
  https://github.com/openstack/neutron/blob/3e26a7851e10d0bae5625c042e8f5f0f74ed327e/neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py#L133-L134,
  if the port_details mac address is not present, I cannot install the
  allowed_address_pairs mac protection.

  Since mac address is not a required field in the port model, we need to
  add an extra check on port_details to see whether the mac_address field
  is present or not.
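  The requested guard can be sketched as follows. The function name,
  install_rule callback, and port_details layout here are assumptions for
  illustration, not neutron's actual arp_protect API:

```python
def setup_arp_protection(port_details, install_rule):
    """Install allowed-address-pairs MAC/ARP protection, guarding on MAC.

    Sketch of the extra check the reporter requests; install_rule and the
    port_details layout are assumptions, not neutron's real interface.
    """
    mac = port_details.get("mac_address")
    if not mac:
        # mac_address is optional in the port model; without it there is
        # nothing to key the spoofing rules on, so skip gracefully.
        return False
    for pair in port_details.get("allowed_address_pairs", []):
        install_rule(pair.get("mac_address") or mac, pair["ip_address"])
    return True
```

  Ports lacking a MAC are then skipped instead of raising.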

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597861/+subscriptions



[Yahoo-eng-team] [Bug 1599964] Re: neutron-ovs-cleanup fails in python 3

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Terry Wilson (otherwiseguy) => (unassigned)

-- 
https://bugs.launchpad.net/bugs/1599964

Title:
  neutron-ovs-cleanup fails in python 3

Status in neutron:
  Invalid

Bug description:
  The error is as follows.

  (py34) $ neutron-ovs-cleanup
  Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
  2016-07-07 18:56:12.516 33407 INFO neutron.common.config [-] Logging enabled!
  2016-07-07 18:56:12.516 33407 INFO neutron.common.config [-] /usr/local/bin/neutron-ovs-cleanup version 9.0.0.0b2.dev266
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands [-] Error executing command
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands Traceback (most recent call last):
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands   File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/commands.py", line 35, in execute
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands     txn.add(self)
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands   File "/opt/stack/new/neutron/neutron/agent/ovsdb/api.py", line 76, in __exit__
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands     self.result = self.commit()
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands   File "/opt/stack/new/neutron/neutron/agent/ovsdb/impl_idl.py", line 56, in commit
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands     self.ovsdb_connection.queue_txn(self)
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands   File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/connection.py", line 123, in queue_txn
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands     self.txns.put(txn)
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands   File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/connection.py", line 47, in put
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands     self.alertout.write('X')
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands TypeError: 'str' does not support the buffer interface
  2016-07-07 18:56:12.881 33407 ERROR neutron.agent.ovsdb.native.commands
  2016-07-07 18:56:12.884 33407 CRITICAL neutron [-] TypeError: 'str' does not support the buffer interface
  2016-07-07 18:56:12.884 33407 ERROR neutron Traceback (most recent call last):
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/usr/local/bin/neutron-ovs-cleanup", line 10, in <module>
  2016-07-07 18:56:12.884 33407 ERROR neutron     sys.exit(main())
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/opt/stack/new/neutron/neutron/cmd/ovs_cleanup.py", line 88, in main
  2016-07-07 18:56:12.884 33407 ERROR neutron     ovs_bridges = set(ovs.get_bridges())
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/opt/stack/new/neutron/neutron/agent/common/ovs_lib.py", line 142, in get_bridges
  2016-07-07 18:56:12.884 33407 ERROR neutron     return self.ovsdb.list_br().execute(check_error=True)
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/commands.py", line 42, in execute
  2016-07-07 18:56:12.884 33407 ERROR neutron     ctx.reraise = False
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/usr/local/lib/python3.4/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
  2016-07-07 18:56:12.884 33407 ERROR neutron     self.force_reraise()
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/usr/local/lib/python3.4/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
  2016-07-07 18:56:12.884 33407 ERROR neutron     six.reraise(self.type_, self.value, self.tb)
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/usr/local/lib/python3.4/dist-packages/six.py", line 686, in reraise
  2016-07-07 18:56:12.884 33407 ERROR neutron     raise value
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/opt/stack/new/neutron/neutron/agent/ovsdb/native/commands.py", line 35, in execute
  2016-07-07 18:56:12.884 33407 ERROR neutron     txn.add(self)
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/opt/stack/new/neutron/neutron/agent/ovsdb/api.py", line 76, in __exit__
  2016-07-07 18:56:12.884 33407 ERROR neutron     self.result = self.commit()
  2016-07-07 18:56:12.884 33407 ERROR neutron   File "/opt/stack/new/neutron/neutron/agent/ovsdb/impl_idl.py", line 56, in commit
  2016-07-07 18:56:12.884 33407 ERROR neutron     self.ovsdb_connection.queue_txn(self)
  2016-07-07 18:56:12.884 33407 ERROR neutron   File
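The TypeError in the trace is the classic Python 3 str-vs-bytes pitfall on
a binary stream: `self.alertout.write('X')` passes str where the
binary-mode pipe wants bytes. A minimal sketch of the fix this implies
(the helper name is hypothetical; the real code is the alert pipe in
neutron's ovsdb connection module):

```python
import io

def write_alert(stream, payload="X"):
    """Write the wake-up byte to a binary stream, py2/py3-safely.

    Hypothetical helper sketching the fix: a binary-mode file object in
    Python 3 accepts bytes, not str, so encode first.
    """
    if isinstance(payload, str):
        payload = payload.encode("ascii")
    stream.write(payload)

buf = io.BytesIO()
try:
    buf.write("X")  # writing str to a binary buffer raises TypeError on py3
except TypeError:
    pass
write_alert(buf, "X")  # the encoded form is accepted
```

Writing the literal `b'X'` at the call site would work equally well.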

[Yahoo-eng-team] [Bug 1622654] Re: Security Group doesn't work if the specific allowed-address-pairs value is set

2017-01-31 Thread Armando Migliaccio
As Kevin said.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
https://bugs.launchpad.net/bugs/1622654

Title:
  Security Group doesn't work if  the specific allowed-address-pairs
  value is set

Status in neutron:
  Invalid

Bug description:
  Summary:

  Security Group doesn't work if the specific allowed-address-pairs
  value is set on the port associated with it.

  High level description:

  An OpenStack user is allowed to specify arbitrary mac_address/ip_address
  pairs that are allowed to pass through a port. For practical reasons,
  users can specify huge subnets, and the CIDRs provided there are not
  sanitized. If the CIDR provided with 'allowed-address-pairs' for any
  single port associated with a security group overlaps with a subnet
  used by the VM, the VM is always accessible on any port and by any
  protocol, despite the fact that its security group denies all ingress
  traffic.
  Step-by-step reproduction process:

  1) Create a VM in OpenStack
  2) Check that there are no rules allowing icmp (for instance) in the security 
group associated with the VM
  3) perform:
  neutron port-update [any-port-associated-with-the-secgroup] --allowed-address-pairs type=dict list=true ip_address=[a-very-huge-cidr]

  If your VM uses a private IPv4 address from the 192.168.x or 172.16.x
  networks, then 128.0.0.0/1 will work as "a-very-huge-cidr"; if it uses
  a 10.x network, then 0.0.0.0/1 should.

  4) ping all the VMs in this secgroup successfully (from the router
  namespace, or from any host which is allowed to access floating IPs, if
  a floating IP is also assigned to the VM), and access them on any port
  and protocol on which the VM is listening.

  Version:

  All OpenStack releases up to Mitaka.

  Perceived severity:

  It's not a blocker as workarounds are pretty obvious, but it's a huge
  security bug: all the network security provided by Security Groups
  might be ruined easily, just by updating a single port in neutron.

  If we restrict the value of allowed-address-pairs in neutron to a
  single address (/32 or /128), might it break anything?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617864] Re: NoFilterMatched when kill metadata proxy process in external_process.disable method

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617864

Title:
  NoFilterMatched when kill metadata proxy process in
  external_process.disable method

Status in neutron:
  Invalid

Bug description:
  l3_agent.log

  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
[req-b879a237-49a6-4ae0-ae5a-b52fff1be64e - - - - -] Error during 
L3NATAgentWithStateReport.periodic_sync_routers_task
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task Traceback 
(most recent call last):
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task task(self, 
context)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 547, in 
periodic_sync_routers_task
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
self.fetch_and_sync_all_routers(context, ns_manager)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/namespace_manager.py", line 
90, in __exit__
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
self._cleanup(_ns_prefix, ns_id)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/namespace_manager.py", line 
140, in _cleanup
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
self.process_monitor, ns_id, self.agent_conf)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/metadata/driver.py", line 131, 
in destroy_monitored_metadata_proxy
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task pm.disable()
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 109, in disable
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
utils.execute(cmd, run_as_root=True)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 116, in 
execute
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
execute_rootwrap_daemon(cmd, process_input, addl_env))
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 102, in 
execute_rootwrap_daemon
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task return 
client.execute(cmd, process_input)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 128, in execute
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task res = 
proxy.run_one_command(cmd, stdin)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"", line 2, in run_one_command
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task File 
"/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task raise 
convert_to_error(kind, result)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task NoFilterMatched
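
  NoFilterMatched is raised by oslo.rootwrap when the command handed to
  the rootwrap daemon matches none of the configured command filters. A
  much-simplified sketch of that idea (hypothetical filter table and
  names, not oslo.rootwrap's actual implementation):

```python
class NoFilterMatched(Exception):
    """Raised when a command matches no configured filter."""

# Hypothetical filter table: executable name -> predicate over its arguments.
FILTERS = {
    "kill": lambda args: args[:1] in (["-9"], ["-15"]),  # only these signals
    "ip": lambda args: args[:1] == ["netns"],            # only 'ip netns ...'
}

def match_filter(cmd):
    """Return cmd if some filter allows it, otherwise raise NoFilterMatched."""
    name, args = cmd[0], cmd[1:]
    allowed = FILTERS.get(name)
    if allowed is None or not allowed(args):
        raise NoFilterMatched(cmd)
    return cmd

print(match_filter(["ip", "netns", "exec", "qrouter-x"]))  # allowed
try:
    match_filter(["kill", "-HUP", "1234"])  # no filter permits SIGHUP here
except NoFilterMatched:
    print("NoFilterMatched")
```

  In the trace above, the kill command issued by pm.disable() evidently
  fell through the deployed filter set in just this way.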

  dhcp_agent.log

  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher 
[req-17ab00ea-166d-402f-8233-01e4c6fdc840 be5b0cbb38af4b18b2e4fd26bbe832d8 
aef8e3f16eb549d39bb6585c68b84442 - - -] Exception during message handling:
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2016-08-25 

[Yahoo-eng-team] [Bug 1450604] Re: Fix DVR multinode upstream CI testing

2017-01-31 Thread Armando Migliaccio
We should probably file more tailored and up to date bug reports for
more recent failures.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450604

Title:
  Fix DVR multinode upstream CI testing

Status in neutron:
  Invalid

Bug description:
  This bug should capture any change required to get the DVR multi node
  job to run successfully and reliably.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554187] Re: log noise due to TooManyExternalNetworks exception

2017-01-31 Thread Armando Migliaccio
It looks like this went away since the spin-off of BGP.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554187

Title:
  log noise due to TooManyExternalNetworks exception

Status in neutron:
  Invalid

Bug description:
  This is mostly innocent, but it unveils a devstack single-node
  misconfiguration that we should deal with; that work is done in the
  larger context initiated by bug 1491668.

  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher 
[req-2c4909b8-5055-49bf-9a5d-7768225cac0c - -] Exception during message 
handling: More than one external network exists.
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _dispatch
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 213, in 
get_external_network_id
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher net_id 
= self.plugin.get_external_network_id(context)
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/external_net_db.py", line 195, in 
get_external_network_id
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher raise 
n_exc.TooManyExternalNetworks()
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher 
TooManyExternalNetworks: More than one external network exists.
  2016-03-07 18:11:47.922 18038 ERROR oslo_messaging.rpc.dispatcher

  http://logs.openstack.org/12/288212/2/check/gate-neutron-dsvm-
  api/082813d/logs/screen-q-svc.txt.gz?level=TRACE

  message:"TooManyExternalNetworks"
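
  The trace comes from get_external_network_id, which is only
  well-defined when at most one external network exists. A minimal sketch
  of that contract (illustrative names, not the neutron code itself):

```python
class TooManyExternalNetworks(Exception):
    """More than one external network exists."""

def get_external_network_id(external_net_ids):
    """Return the sole external network id, or raise if there are several."""
    if len(external_net_ids) > 1:
        raise TooManyExternalNetworks()
    return external_net_ids[0] if external_net_ids else None

assert get_external_network_id(["net-1"]) == "net-1"
try:
    get_external_network_id(["net-1", "net-2"])  # misconfigured devstack case
except TooManyExternalNetworks:
    print("TooManyExternalNetworks")
```

  With two external networks configured on a single node, every RPC call
  into this path logs the exception, hence the noise.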

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653882] Re: Creation of QOS policy name with too long name returns "Request Failed: internal server error while processing your request." instead of "Bad Request" error

2017-01-31 Thread Armando Migliaccio
*** This bug is a duplicate of bug 1610103 ***
https://bugs.launchpad.net/bugs/1610103

** This bug has been marked a duplicate of bug 1610103
   Creating qos with name that has more than 255 characters returns 500 error.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653882

Title:
  Creation of QOS policy name with too long name returns "Request
  Failed:  internal server error while processing your request." instead
  of "Bad Request" error

Status in neutron:
  Confirmed

Bug description:
  The QoS negative test [1] fails on top of the Newton release with the
  following error:

  "Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: 
['req-c8fadf15-dce8-4c2f-943a-3cedc67f']"

  Instead of "Bad Request" error.


  [1] 
neutron.tests.tempest.api.test_qos_negative.QosNegativeTestJSON.test_add_policy_with_too_long_name
  from 
https://github.com/openstack/neutron/blob/master/neutron/tests/tempest/api/test_qos_negative.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1653882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656403] Re: The py-modindex for neutron-dynamic-routing docs is a broken link

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/426770
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=9869b08da50c48922a6888c70ebd0d6a794a665a
Submitter: Jenkins
Branch:master

commit 9869b08da50c48922a6888c70ebd0d6a794a665a
Author: Boden R 
Date:   Mon Jan 30 07:06:38 2017 -0700

Fix broken doc links

This patch fixes broken doc links as follows:
- Removes the ref to generate the 'modindex' as this only
applies to source that provides module level documentation
with class refs. For example [1]. Today we don't have any such
module doc in this project; and thus this generates a dead link.
It can be added later (if needed) and the modindex ref can be
added back in.
- Fixes various internal links that are using '.rst'; this needs to be
'.html' in order to link to the generated doc html files.
- Fixes an incorrect link; missing a space and has an unneeded
leading underscore.

For more details please see the bug report associated with this
fix.

[1] https://github.com/openstack/neutron/blob/
master/neutron/neutron_plugin_base_v2.py#L19

Change-Id: I12e227ae1a3682a7268b75c844825d3f7464941b
Closes-Bug: #1656403


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656403

Title:
  The py-modindex for neutron-dynamic-routing docs is a broken link

Status in neutron:
  Fix Released

Bug description:
  There's a broken link for the Module Index on
  http://docs.openstack.org/developer/neutron-dynamic-routing/.

  Also, that doc set has these broken links:

  
  {"url": "http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/route-advertisement.rst", "status": 404, "referer": "http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
  {"url": "http://docs.openstack.org/developer/neutron-dynamic-routing/others/testing.rst", "status": 404, "referer": "http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
  {"url": "http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/dynamic-routing-agent.rst", "status": 404, "referer": "http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},
  {"url": "http://docs.openstack.org/developer/neutron-dynamic-routing/design/drivers.rst", "status": 404, "referer": "http://docs.openstack.org/developer/neutron-dynamic-routing/functionality/bgp-speaker.html"},

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658551] Re: neutron_dynamic_routing tempest plugin shadows neutron's

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/423873
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=5aa7d9ac58ad664c2c8c6948175d5046f9bc0425
Submitter: Jenkins
Branch:master

commit 5aa7d9ac58ad664c2c8c6948175d5046f9bc0425
Author: YAMAMOTO Takashi 
Date:   Mon Jan 23 09:54:02 2017 +0900

tempest plugin: Don't use the same name as neutron

Closes-Bug: #1658551
Change-Id: I4045ed7e114dc6ffb48563514eba2ed001a53dd9


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658551

Title:
  neutron_dynamic_routing tempest plugin shadows neutron's

Status in neutron:
  Fix Released

Bug description:
  neutron_dynamic_routing tempest plugin uses the same name as
  neutron's.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654210] Re: Get more detail about tag extension

2017-01-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654210

Title:
  Get more detail about tag extension

Status in neutron:
  Won't Fix

Bug description:
  Currently we only know that Neutron's tag extension is supported in some
  external systems, like Kuryr.
  This is insufficient; we need to know which network objects (network,
  subnet, or others) support tags.
  In the Kuryr project we need this check before mapping Neutron resources
  to Docker resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1654210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654183] Re: Token based authentication in Client class does not work

2017-01-31 Thread Andrey Kurilin
7.1.0 should fix an issue with horizon.

** Changed in: horizon
   Status: New => Incomplete

** Changed in: horizon
   Status: Incomplete => Invalid

** Changed in: trove
   Status: New => Fix Released

** Changed in: trove
 Assignee: (unassigned) => Andrey Kurilin (andreykurilin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1654183

Title:
  Token based authentication in Client class does not work

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in python-novaclient:
  Fix Released
Status in tripleo:
  In Progress
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  With the newly released novaclient (7.0.0) it seems that token-based
  authentication does not work in novaclient.client.Client.

  I get back the following response from the Nova server:

  Malformed request URL: URL's project_id
  'e0beb44615f34d54b8a9a9203a3e5a1c' doesn't match Context's project_id
  'None' (HTTP 400)

  I just created the Nova client in following way:
  Client(
  2,
  endpoint_type="public",
  service_type='compute',
  auth_token=auth_token,
  tenant_id="devel",
  region_name="RegionOne",
  auth_url=keystone_url,
  insecure=True,
  endpoint_override=nova_endpoint 
#https://.../v2/e0beb44615f34d54b8a9a9203a3e5a1c
  )

  After that, novaclient performs a new token-based authentication without
  the project_id (tenant_id), which causes the new token to not belong to
  any project. Anyway, if we already have a token, why does novaclient
  request a new one from keystone? (Other clients, like Heat and Neutron,
  do not request any token from keystone if one is already provided to the
  client class.)

  The bug was introduced by the following commit:
  
https://github.com/openstack/python-novaclient/commit/8409e006c5f362922baae9470f14c12e0443dd70

  +if not auth and auth_token:
  +auth = identity.Token(auth_url=auth_url,
  +  token=auth_token)

  When project_id is also passed into the Token authentication, everything
  works fine: the newly requested token belongs to the right
  project/tenant.
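
  The difference project_id makes can be sketched with a hypothetical
  helper that mimics how a scoped vs. unscoped token request differs.
  This is not keystoneauth's API, just an illustration of the scoping
  described above:

```python
def build_token_auth(auth_url, token, project_id=None):
    """Hypothetical sketch of the auth payload a Token plugin would send."""
    payload = {"auth_url": auth_url, "identity": {"token": token}}
    if project_id:
        # Scoped request: the new token is bound to this project.
        payload["scope"] = {"project": {"id": project_id}}
    return payload

unscoped = build_token_auth("https://keystone/v3", "tok")
scoped = build_token_auth("https://keystone/v3", "tok",
                          project_id="e0beb44615f34d54b8a9a9203a3e5a1c")

assert "scope" not in unscoped  # re-auth without project: token has no project
assert scoped["scope"]["project"]["id"] == "e0beb44615f34d54b8a9a9203a3e5a1c"
```

  Without the scope, the re-authenticated token matches no project, which
  is consistent with the "doesn't match Context's project_id 'None'" error
  quoted above.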

  Note: Originally this problem appeared in the Mistral project of
  OpenStack, which uses the client classes directly from its actions with
  token-based authentication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1654183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347309] Re: if multidomain enabled, login error message should change

2017-01-31 Thread David Lyle
The message was made more generic to be "Invalid Credentials" in an
unrelated refactor.

** Changed in: django-openstack-auth
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347309

Title:
  if multidomain enabled, login error message should change

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  To test this, update your local_settings.py to:

  OPENSTACK_API_VERSIONS = {
      "identity": 3
  }

  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  #OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

  After you make these changes you will see an additional 'Domain' field
  on the login screen.

  However, when you enter the domain name incorrectly, you are presented
  with the message 'Invalid user name or password.' It should be updated
  to say: 'Invalid domain, user name or password.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1347309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491117] Re: Error when logging back in after timeout

2017-01-31 Thread David Lyle
** Changed in: django-openstack-auth
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491117

Title:
  Error when logging back in after timeout

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Internal Server Error: /auth/login/
  Traceback (most recent call last):
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/views/decorators/debug.py",
 line 76, in sensitive_post_parameters_wrapper
  return view(request, *args, **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/utils/decorators.py",
 line 110, in _wrapped_view
  response = view_func(request, *args, **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/views/decorators/cache.py",
 line 57, in _wrapped_view_func
  response = view_func(request, *args, **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/openstack_auth/views.py",
 line 112, in login
  **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/views/decorators/debug.py",
 line 76, in sensitive_post_parameters_wrapper
  return view(request, *args, **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/utils/decorators.py",
 line 110, in _wrapped_view
  response = view_func(request, *args, **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/views/decorators/cache.py",
 line 57, in _wrapped_view_func
  response = view_func(request, *args, **kwargs)
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/contrib/auth/views.py",
 line 51, in login
  auth_login(request, form.get_user())
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/contrib/auth/__init__.py",
 line 102, in login
  if _get_user_session_key(request) != user.pk or (
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/contrib/auth/__init__.py",
 line 59, in _get_user_session_key
  return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
    File 
"/Users/robcresswell/horizon/.venv/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
 line 969, in to_python
  params={'value': value},
  ValidationError: [u"'63bb5e0a7a3c42049236df7438246e57' value must be an 
integer."]

  Django: 1.8.4
  D-O-A: 1.4.0

  My initial thoughts are: the move to Django 1.8 has introduced some
  different behaviour OR https://review.openstack.org/#/c/142481/ has
  caused shenanigans.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1491117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566455] Re: Using V3 Auth throws error TypeError at /auth/login/ __init__() got an unexpected keyword argument 'unscoped'

2017-01-31 Thread David Lyle
This bug is on a version that is no longer supported. If you face the
issue, try the attached patch, but we will not be fixing this on the
stable branch.

** Changed in: django-openstack-auth
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566455

Title:
  Using V3 Auth throws error TypeError at /auth/login/ __init__() got an
  unexpected keyword argument 'unscoped'

Status in django-openstack-auth:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The PasswordPlugin class creates a plugin with an 'unscoped' parameter,
  which throws the following error:

  TypeError at /auth/login/
  __init__() got an unexpected keyword argument 'unscoped'

  LOG.debug('Attempting to authenticate for %s', username)
  if utils.get_keystone_version() >= 3:
  return v3_auth.Password(auth_url=auth_url,
  username=username,
  password=password,
  user_domain_name=user_domain_name,
  unscoped=True) ---> 
Deleting this line removes the error and authenticates successfully.
  else:
  return v2_auth.Password(auth_url=auth_url,
  username=username,
  password=password) 

  I have V3 API and URL configured in Horizon settings. Using Horizon
  Kilo version.

  Is there some other setting that is needed?

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1566455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359774] Re: No way to specify initial services region during login

2017-01-31 Thread David Lyle
** Changed in: django-openstack-auth
   Status: In Progress => Incomplete

** Changed in: django-openstack-auth
   Status: Incomplete => Invalid

** Changed in: django-openstack-auth
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359774

Title:
  No way to specify initial services region during login

Status in django-openstack-auth:
  Invalid
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If keystone is set up with multiple service regions, the initial
  service region is selected for you.  This is done by searching the
  service catalog for the first non-identity service and then selecting
  the region for the first endpoint.  This is an inconvenience when the
  user knows exactly which service region they want to use first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1359774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612056] Re: Neutron-Vpnaas Rally run fails with error 'module' object has no attribute 'set'

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/353838
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=a6c697ad2915a8b1839b33f3b9a297d656499283
Submitter: Jenkins
Branch:master

commit a6c697ad2915a8b1839b33f3b9a297d656499283
Author: ashish-kumar-gupta 
Date:   Thu Aug 11 11:04:43 2016 +0530

Fix the types.set error for rally job run

Rally jobs run for neutron vpnaas fails
Closes-Bug: #1612056

Change-Id: Ia5356a466b7521f1a22903c176c696422553


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612056

Title:
  Neutron-Vpnaas Rally run fails with error 'module' object has no
  attribute 'set'

Status in neutron:
  Fix Released

Bug description:
  The Neutron-VPNaaS Rally run fails with the error:
  Failed to load module with plugins /opt/rally/plugins/test_vpn_status.py: 
'module' object has no attribute 'set'.

  Actual cause: per the latest Rally code for the types module:
  First, we will add a new function, ``types.convert()``, to replace 
  ``types.set()``. ``types.convert()`` will accept arguments differently

  Reference :
  
https://github.com/openstack/rally/blob/14726077d41b4b883637b808236d780f0d9c3c7b/doc/release_notes/archive/v0.3.3.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654736] Re: VM number summarize not match.

2017-01-31 Thread Sujitha
@pratik-bandarkar,

I agree with you. Also openstack usage list shows resource usage per
project based on start and end date range. Both the commands have
different usages. The behavior observed is expected. Marking this as
invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1654736

Title:
  VM number summarize not match.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  [root@controller1 nova]# openstack usage list
  Usage from 2016-12-10 to 2017-01-08: 
  
+--+-+--+---+---+
  | Project  | Servers | RAM MB-Hours | CPU Hours | 
Disk GB-Hours |
  
+--+-+--+---+---+
  | e98032abd959422bb6b38029fa1b2b36 |   9 |223970.37 |218.72 | 
  1093.61 |
  
+--+-+--+---+---+

  [root@controller1 nova]# nova list
  
+--+---+-++-++
  | ID   | Name  | Status  | Task State | 
Power State | Networks   |
  
+--+---+-++-++
  | 582eda06-d3b6-409d-84fc-5315751ecc9e | VM-User-1 | SHUTOFF | -  | 
Shutdown| VM-Net=192.168.201.110 |
  | 9fdb65d4-2eed-4251-9f63-822b19497bf7 | VM-User-2 | SHUTOFF | -  | 
Shutdown| VM-Net=192.168.201.103 |
  | d9b0c52b-bbd6-4f63-814f-cd244d090ad4 | test-1| ACTIVE  | -  | 
Running | VM-Net=192.168.201.108 |
  | a2ccbc21-363b-4b8a-93d1-923d58d067ed | test-2| SHUTOFF | -  | 
Shutdown| VM-Net=192.168.201.109 |
  
+--+---+-++-++
  The two commands report different numbers of VMs created.

  "openstack usage list" gives 9 VMs created.
  "nova list" gives 4 VMs created.

  The "nova list" command shows the correct result.
  I suspect "openstack usage list" dumps the database data directly.

  MariaDB [nova]> select id, display_name ,node from instances;
  ++--++
  | id | display_name | node   |
  ++--++
  |  1 | Linux| NULL   |
  |  2 | Linux| NULL   |
  |  3 | linux| NULL   |
  |  4 | linux| NULL   |
  |  5 | linux| NULL   |
  |  6 | Linux1   | NULL   |
  |  7 | linux| NULL   |
  |  8 | linux| NULL   |
  |  9 | linux| NULL   |
  | 10 | 123  | NULL   |
  | 11 | 123  | NULL   |
  | 12 | vm111| NULL   |
  | 13 | test | compute1   |
  | 14 | VM-Min1-1| compute1   |
  | 15 | VM-Min1-2| compute1   |
  | 16 | Min-1| compute1   |
  | 17 | Min-2| compute1   |
  | 18 | VM-User-1| compute1   |
  | 19 | VM-User-2| compute1   |
  | 20 | test-1   | Compute2.puppy.com |
  | 21 | test-2   | Compute2.puppy.com |
  ++--++
  In the instances table, there are 9 VMs.

  I am not sure what is causing the issue.
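  A plausible explanation for the mismatch above is that the usage report
  counts every instance that existed at any point in the reporting window,
  including since-deleted ones, while "nova list" counts only instances
  that still exist. A minimal sketch, using a hypothetical data model
  rather than nova's actual schema:

  ```python
  from datetime import datetime

  # Hypothetical instance records; "deleted" is None while the VM exists.
  instances = [
      {"name": "VM-User-1", "launched": datetime(2016, 12, 15), "deleted": None},
      {"name": "test-old", "launched": datetime(2016, 12, 12),
       "deleted": datetime(2016, 12, 20)},
  ]

  def usage_count(instances, start, end):
      # "openstack usage list" style: every instance whose lifetime
      # overlaps the reporting window, including since-deleted ones.
      return sum(1 for i in instances
                 if i["launched"] < end
                 and (i["deleted"] is None or i["deleted"] > start))

  def current_count(instances):
      # "nova list" style: only instances that still exist.
      return sum(1 for i in instances if i["deleted"] is None)

  start, end = datetime(2016, 12, 10), datetime(2017, 1, 8)
  print(usage_count(instances, start, end))  # 2
  print(current_count(instances))            # 1
  ```

  Under this model the two counts legitimately differ whenever instances
  were deleted during the window, which would match the 9-vs-4 result.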

  Jichen

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1654736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657260] Re: Established connections don't stop when rule is removed

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/426429
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=10bfa690885f06316ccec1fee39e51ca64058443
Submitter: Jenkins
Branch:master

commit 10bfa690885f06316ccec1fee39e51ca64058443
Author: Sławek Kapłoński 
Date:   Fri Jan 27 23:19:25 2017 +

Clear conntrack entries without zones if CT zones are not used

CT zones are used only in OVSHybridIptablesFirewallDriver.
Such zones are not set in the IptablesFirewallDriver class,
but even though the iptables driver was not using CT zones,
a zone was still used by the conntrack manager class when
deleting a conntrack entry.
This caused an issue where, for the Linuxbridge agent, an
established and active connection stayed active even after
the security group rule was deleted.
This patch changes the conntrack manager class so that it
will not use a CT zone (-w option) if no zone was assigned
to the port earlier.

Change-Id: Ib9c8d0a09d0858ff6f36db406c6b2a9191f304d1
Closes-bug: 1657260


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657260

Title:
  Established connections don't stop when rule is removed

Status in neutron:
  Fix Released

Bug description:
  If the iptables driver is used for security groups (e.g. in the Linuxbridge L2 
agent), there is an issue with updating rules. When you have a rule which allows 
some kind of traffic (ssh from some src IP address, for example) and you have an 
established, active connection matching this rule, the connection will stay 
active even if the rule is removed or changed.
  It is because the first rule in the iptables chain for each SG accepts packets 
with "state RELATED,ESTABLISHED".
  I'm not sure if it is in fact a bug, or maybe it's just a design decision to 
get better performance out of iptables.
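  The fix in the commit above can be sketched as follows. This is a
  hypothetical helper, not neutron's actual code: it builds a conntrack
  delete command and appends the CT zone option only when a zone was
  actually assigned to the port.

  ```python
  def conntrack_delete_cmd(remote_ip, zone=None):
      # Append the CT zone option (-w) only when a zone was assigned to
      # the port (OVS hybrid driver); the plain iptables driver used by
      # the Linuxbridge agent assigns no zone.
      cmd = ['conntrack', '-D', '-d', remote_ip]
      if zone is not None:
          cmd += ['-w', str(zone)]
      return cmd

  print(conntrack_delete_cmd('10.0.0.5'))          # Linuxbridge: no zone
  print(conntrack_delete_cmd('10.0.0.5', zone=4))  # OVS hybrid: zoned
  ```

  Before the fix, the zone option was emitted unconditionally, so the
  delete matched nothing for Linuxbridge ports and established
  connections survived rule removal.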

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633200] Re: Unable to create server with image that has hw_watchdog_action='disabled'

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/386221
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=98d93196cddab435f7a258bc1ef8beca6a7e2004
Submitter: Jenkins
Branch:master

commit 98d93196cddab435f7a258bc1ef8beca6a7e2004
Author: Matt Riedemann 
Date:   Tue Jan 3 17:27:40 2017 -0500

Add 'disabled' to WatchdogAction field

Image property hw_watchdog_action can have a 'disabled'
value which is actually defined in the glance metadefs so
someone using Horizon might pick that. Also, it's the default
behavior in the libvirt driver. However, if you try to create
a server with that set the create fails in nova-api because
the enum field didn't have 'disabled' as a valid value.

This adds it, bumps the ImageMetaProps version and adds tests.

Change-Id: I4cec3e8b8527b909cc60893db26732a19263220d
Closes-Bug: #1633200


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633200

Title:
  Unable to create server with image that has
  hw_watchdog_action='disabled'

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is from ocata devstack. Set the hw_watchdog_action image property
  with value 'disabled':

  stack@osc:~$ openstack image set --property
  hw_watchdog_action=disabled cirros-0.3.4-x86_64-uec

  stack@osc:~$ openstack image show -c properties cirros-0.3.4-x86_64-uec
  +------------+---------------------------------------------------+
  | Field      | Value                                             |
  +------------+---------------------------------------------------+
  | properties | hw_watchdog_action='disabled',                    |
  |            | kernel_id='08463073-3460-4b5f-92cc-ade974936e96', |
  |            | ramdisk_id='ff195fc4-c039-43b5-acca-501aba68aba2' |
  +------------+---------------------------------------------------+

  Then try to boot the server and it will fail:

  stack@osc:~$ nova boot --poll --image c8af19ff-cebc-4112-a237-78dcd19e588c 
--flavor 42 test-watchdog-disabled
  ERROR (BadRequest): Invalid image metadata. Error: Field value disabled is 
invalid (HTTP 400) (Request-ID: req-488be9ab-ebcb-473b-b238-968f91ed0f48)

  
  stack@osc:/opt/stack/nova$ git log -1
  commit 7a9eb10d0d15e5327aa73c72418d89afce11abef
  Merge: b796673 951dee3
  Author: Jenkins 
  Date:   Wed Oct 5 18:27:22 2016 +

  Merge "Fix periodic-nova-py{27,35}-with-oslo-master"

  
  The problem is the ImageMetaProps object in nova is using enums for the 
hw_watchdog_action field:

  
https://github.com/openstack/nova/blob/7a9eb10d0d15e5327aa73c72418d89afce11abef/nova/objects/fields.py#L383

  And that doesn't have 'disabled' as a value.

  However, if you look at the glance metadef it's an option, so someone
  using Horizon could set this:

  
https://github.com/openstack/glance/blob/d3e820724e1d578003b13e72e753d9b1d75173e1/etc/metadefs
  /compute-watchdog.json#L25

  And the libvirt driver actually defaults to 'disabled':

  
https://github.com/openstack/nova/blob/7a9eb10d0d15e5327aa73c72418d89afce11abef/nova/virt/libvirt/driver.py#L4536
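  The failure mode can be sketched with a plain-Python stand-in for the
  enum field check (the real enum is nova's WatchdogAction in
  nova/objects/fields.py; the value tuple here is an assumption for
  illustration):

  ```python
  # Hypothetical pre-fix set of valid values, lacking 'disabled'.
  WATCHDOG_ACTIONS = ('none', 'pause', 'poweroff', 'reset')

  def coerce_watchdog_action(value, valid=WATCHDOG_ACTIONS):
      # Sketch of the enum coercion that produced the HTTP 400 above.
      if value not in valid:
          raise ValueError("Field value %s is invalid" % value)
      return value

  # The fix simply extends the enum with the missing value:
  FIXED_ACTIONS = WATCHDOG_ACTIONS + ('disabled',)
  coerce_watchdog_action('disabled', FIXED_ACTIONS)  # accepted after the fix
  try:
      coerce_watchdog_action('disabled')             # pre-fix behavior
  except ValueError as e:
      print(e)  # Field value disabled is invalid
  ```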

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580648] Re: Two HA routers in master state during functional test

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/420693
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8d3f216e2421a01b54a4049c639bdb803df72510
Submitter: Jenkins
Branch:master

commit 8d3f216e2421a01b54a4049c639bdb803df72510
Author: Artur Korzeniewski 
Date:   Fri Jan 27 11:19:16 2017 +0100

Addressing L3 HA keepalived failures in functional tests

Current testing of Keepalived was not configuring the connectivity
between 2 agent namespaces.
Added setting up the veth pair.

Also, bridges external qg- and internal qr- were removed
from agent1 namespace and moved to agent2 namespace, because they had
the same name.
Added patching the qg and qr bridges name creation to be different for
functional tests.


Change-Id: I82b3218091da4feb39a9e820d0e54639ae27c97d
Closes-Bug: #1580648


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580648

Title:
  Two HA routers in master state during functional test

Status in neutron:
  Fix Released

Bug description:
  Scheduling HA routers ends with two routers in master state.
  The issue was discovered during the bug fix https://review.openstack.org/#/c/273546, 
after preparing a new functional test.

  The method _get_state_change_monitor_callback() in ha_router.py
  starts a neutron-keepalived-state-change process with the parameter
  --monitor-interface set to the ha_device (ha-xxx) and its IP address.

  That application monitors all changes in the namespace using
  "ip netns exec xxx ip -o monitor address".
  Each addition of that ha-xxx device produces a call to the neutron-server
  API saying that this router has become "master".
  This produces false results, because that event alone says nothing about
  whether the router is master or not.
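  The flawed decision described above can be sketched as follows (a
  hypothetical reduction of the state-change monitor's logic, not the
  actual neutron code):

  ```python
  def state_from_address_event(device, cidr, added, ha_device, vip_cidr):
      # Any addition of the monitored VIP on the ha- interface is taken
      # as "master" -- but the event alone doesn't prove this router won
      # the keepalived election, so two agents can both report master.
      if added and device == ha_device and cidr == vip_cidr:
          return 'master'
      return None

  print(state_from_address_event('ha-8aedf0c6-2a', '169.254.0.1/24', True,
                                 'ha-8aedf0c6-2a', '169.254.0.1/24'))  # master
  ```

  In the logs below, both agents observe such an event for their own
  ha- device and both report master.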

  Logs from
  test_ha_router.L3HATestFailover.test_ha_router_lost_gw_connection

  Agent2:
  2016-05-10 16:23:20.653 16067 DEBUG neutron.agent.linux.async_process [-] 
Launching async process [ip netns exec 
qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1@agent2 ip -o monitor 
address]. start /neutron/neutron/agent/linux/async_process.py:109
  2016-05-10 16:23:20.654 16067 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', 'netns', 'exec', 
'qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1@agent2', 'ip', '-o', 
'monitor', 'address'] create_process /neutron/neutron/agent/linux/utils.py:82
  2016-05-10 16:23:20.661 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Monitor: ha-8aedf0c6-2a, 169.254.0.1/24 run 
/neutron/neutron/agent/l3/keepalived_state_change.py:59
  2016-05-10 16:23:20.661 16067 INFO neutron.agent.linux.daemon [-] Process 
runs with uid/gid: 1000/1000
  2016-05-10 16:23:20.767 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: qr-88c93aa9-5a, fe80::c8fe:deff:fead:beef/64, False 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:20.901 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: qg-814d252d-26, fe80::c8fe:deff:fead:beee/64, False 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:21.324 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: ha-8aedf0c6-2a, fe80::2022:22ff:fe22:/64, True 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:29.807 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: ha-8aedf0c6-2a, 169.254.0.1/24, True parse_and_handle_event 
/neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:29.808 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Wrote router 962f19e6-f592-49f7-8bc4-add116c0b7a3 state master 
write_state_change /neutron/neutron/agent/l3/keepalived_state_change.py:87
  2016-05-10 16:23:29.808 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] State: master notify_agent 
/neutron/neutron/agent/l3/keepalived_state_change.py:93

  Agent1:
  2016-05-10 16:23:19.417 15906 DEBUG neutron.agent.linux.async_process [-] 
Launching async process [ip netns exec 
qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1 ip -o monitor address]. 
start /neutron/neutron/agent/linux/async_process.py:109
  2016-05-10 16:23:19.418 15906 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', 'netns', 'exec', 
'qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1', 'ip', '-o', 'monitor', 
'address'] create_process /neutron/neutron/agent/linux/utils.py:82
  2016-05-10 16:23:19.425 15906 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Monitor: ha-22a4d1e0-ad, 169.254.0.1/24 run 
/neutron/neutron/agent/l3/keepalived_state_change.py:59
  2016-05-10 16:23:19.426 15906 INFO neutron.agent.linux.daemon [-] Process 
runs with uid/gid: 1000/1000
  2016-05-10 16:23:19.525 15906 DEBUG 

[Yahoo-eng-team] [Bug 1645464] Re: InstanceNotFound stacktrace in _process_instance_vif_deleted_event

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/403925
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4fce45f83bd510202fd1cbcdb7694361c41e9400
Submitter: Jenkins
Branch:master

commit 4fce45f83bd510202fd1cbcdb7694361c41e9400
Author: Matt Riedemann 
Date:   Mon Nov 28 17:07:51 2016 -0500

Pre-load info_cache when handling external events and handle NotFound

Before change a5b920a197c70d2ae08a1e1335d979857f923b4f we'd join the
info_cache column when getting the instance in the API. Without
joining the info_cache column when getting the instance, that has
to be lazy-loaded on the compute when processing an external
neutron event, like network-vif-deleted. So this change pre-loads
the info_cache in the API again as an optimization.

There is also a race to contend with here when deleting an instance.
Neutron can send the network-vif-deleted event after we've already
marked the instance as deleted in the database, at which point
lazy-loading info_cache (or updating InstanceInfoCache for that
matter), can result in an Instance(InfoCache)NotFound error, so this
change handles that also.

Change-Id: Ia3b4288691b392d56324e9d13c92e8e0b0d81e76
Closes-Bug: #1645464
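The two-part pattern in the commit message -- pre-load the attribute
where cheap, and tolerate the instance disappearing mid-event -- can be
sketched like this (hypothetical helpers, not nova's actual code):

```python
class InstanceNotFound(Exception):
    pass

def process_vif_deleted_event(get_instance, instance_id):
    # Load the instance with info_cache joined up front (avoiding a
    # lazy-load on the compute), and treat a concurrent delete as a
    # no-op instead of letting InstanceNotFound escape as an RPC error.
    try:
        instance = get_instance(instance_id, join=['info_cache'])
    except InstanceNotFound:
        return None  # instance deleted while the event was in flight
    return instance['info_cache']

def fake_get(instance_id, join=()):
    # Simulates the race: the instance is already gone.
    raise InstanceNotFound(instance_id)

print(process_vif_deleted_event(fake_get, 'uuid-1'))  # None
```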


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1645464

Title:
  InstanceNotFound stacktrace in _process_instance_vif_deleted_event

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Seen in a CI run here:

  http://logs.openstack.org/16/393416/6/check/gate-tempest-dsvm-full-
  devstack-plugin-ceph-ubuntu-
  xenial/44f3329/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-28_17_34_36_535

  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server 
[req-8d784631-28c2-41b4-8175-7cfb59480838 nova service] Exception during 
message handling
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
225, in dispatch
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _do_dispatch
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 75, in wrapped
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 66, in wrapped
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6800, in 
external_instance_event
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server event.tag)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6755, in 
_process_instance_vif_deleted_event
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server network_info 
= instance.info_cache.network_info
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
67, in getter
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server 
self.obj_load_attr(name)
  2016-11-28 17:34:36.535 2361 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 1063, in obj_load_attr
  2016-11-28 

[Yahoo-eng-team] [Bug 1660436] Re: Federated users cannot log into horizon

2017-01-31 Thread Steve Martinelli
This was discussed at the keystone meeting today, the thinking is that
adding domain information to the fernet token formatter may help to
resolve the issues -- adding keystone as an affected project.

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
Milestone: None => ocata-rc1

** Changed in: keystone
 Assignee: (unassigned) => Colleen Murphy (krinkle)

** Changed in: keystone
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1660436

Title:
  Federated users cannot log into horizon

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Identity (keystone):
  New
Status in keystoneauth:
  New
Status in python-novaclient:
  New

Bug description:
  As of this bugfix in novaclient, federated users cannot log in to
  horizon:

  https://bugs.launchpad.net/python-novaclient/+bug/1658963

  Before this bugfix, horizon would attempt to list nova extensions
  using what was apparently the wrong class, and the error would be
  caught and quietly logged as such:

   Call to list supported extensions failed. This is likely due to a
  problem communicating with the Nova endpoint. Host Aggregates panel
  will not be displayed.

  The dashboard would display:

   Error: Unable to retrieve usage information.

  but at least the user was logged into the dashboard.

  The error that was being hidden was:

   __init__() takes at least 3 arguments (2 given)

  Now that that is fixed, horizon makes it further but fails to
  authenticate the federated user when attempting this request, giving
  the traceback here:

   http://paste.openstack.org/show/596929/

  The problem lies somewhere between keystoneauth, novaclient, and
  horizon.

  keystoneauth:

  When keystoneauth does version discovery, it first tries the Identity
  v2.0 API, and finding no domain information in the request, returns
  that API as the Identity endpoint. Modifying keystoneauth to not stop
  there and continue trying the v3 API, even though it lacks domain
  information, allows the user to successfully log in:

   http://paste.openstack.org/show/596930/

  I'm not really sure why that works or what would break with that
  change.

  novaclient:

  When creating a Token plugin the novaclient is aware of a project's
  domain but not of a domain on its own or of a default domain:

   http://git.openstack.org/cgit/openstack/python-
  novaclient/tree/novaclient/client.py#n137

  keystoneauth relies on having default_domain_(id|name),
  domain_(id|name), or project_domain(id|name) set, and novaclient isn't
  receiving information about the project_domain(id|name) and isn't
  capable of sending any other domain information when using the Token
  plugin, which it must for a federated user.

  horizon:

  For federated users novaclient is only set up to pass along domain
  info for the project, which horizon doesn't store in its user object:

  
http://git.openstack.org/cgit/openstack/django_openstack_auth/tree/openstack_auth/user.py#n202

  However things seem to just work if we fudge the user_domain_id as the
  project_domain_id, though that is obviously not a good solution:

   http://paste.openstack.org/show/596933/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1660436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251224] Re: ICMP security group rules should have type and code params instead of using "--port-range-min" and "--port-range-max"

2017-01-31 Thread Boden R
Moving to invalid/unassigned; it will time out in 60 days as-is.

If additional work is needed and in scope please reassign.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
   Importance: Medium => Undecided

** Changed in: neutron
 Assignee: Anthony Chow (vcloudernbeer) => (unassigned)

** Changed in: neutron
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251224

Title:
  ICMP security group rules should have type and code params instead
  of using "--port-range-min" and "--port-range-max"

Status in neutron:
  Incomplete

Bug description:
  Version
  ==
  Havana on rhel

  Description
  =
  I couldn't find a doc specifying whether icmp security group rules ignore the 
"--port-range-min" and "--port-range-max" params or use then as code and type 
as we know from nova security group rules.
  I think that it should be:

  i. Well documented.
  ii. prohibited for use of "--port-range-min" and "--port-range-max" in icmp 
rules context, new switches should be created for code and type.
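  The overloading being complained about can be sketched as follows.
  This is a hedged illustration of the common convention (min read as
  ICMP type, max as ICMP code when building an iptables match), not a
  quote of neutron's implementation:

  ```python
  def icmp_match_args(port_range_min, port_range_max):
      # For ICMP rules, --port-range-min is read as the ICMP type and
      # --port-range-max as the ICMP code.
      if port_range_min is None:
          return ['-p', 'icmp']  # no type given: match any ICMP
      icmp_type = str(port_range_min)
      if port_range_max is not None:
          icmp_type += '/%s' % port_range_max  # type/code
      return ['-p', 'icmp', '--icmp-type', icmp_type]

  print(icmp_match_args(8, None))  # echo-request: type 8, any code
  print(icmp_match_args(3, 1))     # dest unreachable / host unreachable
  ```

  Dedicated --icmp-type/--icmp-code switches, as the report suggests,
  would make this mapping explicit instead of implicit.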

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537315] Re: Make advertisement intervals for radvd configurable

2017-01-31 Thread Boden R
OpenStack manual's config reference already includes these [1].

The neutron patch contained a release note.

Best I can tell no neutron or other work is needed here.


[1] 
http://docs.openstack.org/newton/config-reference/networking/networking_options_reference.html

** Changed in: neutron
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537315

Title:
  Make advertisement intervals for radvd configurable

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/265451
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit d9e4d20da86f29f7ebdb3c6b07086924888edd39
  Author: Sean M. Collins 
  Date:   Fri Jan 8 13:32:14 2016 -0800

  Make advertisement intervals for radvd configurable
  
  Currently a global setting that is applied for all managed radvd
  processes. Per-process setting could be done in the future.
  
  For large clouds, it may be useful to increase the intervals, to reduce
  multicast storms.
  
  Co-Authored-By: Brian Haley 
  
  DocImpact Router advertisement intervals for radvd are now configurable
  Related-Bug: #1532338
  
  Change-Id: I6cc313599f0ee12f7d51d073a22321221fca263f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551368] Re: L7 capability extension implementation for lbaas v2

2017-01-31 Thread Boden R
Marking this as incomplete with the following reasoning:
- The openstack-manuals team marked it as invalid and thus doesn't plan to update 
the docs/manuals.
- Unless we have someone willing to create and drive the change into 
openstack-manuals, I don't see how this will get done.

If we have someone to work on it, please feel free to restore it from the
invalid state.

** Changed in: neutron
   Status: Confirmed => Incomplete

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551368

Title:
  L7 capability extension implementation for lbaas v2

Status in neutron:
  Won't Fix
Status in openstack-api-site:
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  https://review.openstack.org/148232
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 254190b3051453f823dbdf1826f1a6b984c57164
  Author: Evgeny Fedoruk 
  Date:   Mon Jan 19 03:47:39 2015 -0800

  L7 capability extension implementation for lbaas v2
  
  Including extension modifications
  Including db model modifications
  Including alembic migration
  Including new driver managers for l7 rules and policies
  Including logging_noop driver implementation
  Including unit testing
  Including Octavia driver updates
  
  DocImpact
  APIImpact
  Change-Id: I8f9535ebf28155adf43ebe046d2dbdfa86c4d81b
  Implements: blueprint lbaas-l7-rules
  Co-Authored-By: Evgeny Fedoruk 
  Co-Authored-By: Stephen Balukoff 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557119] Re: Services and Agents DevRef doesn't describe configuration separation

2017-01-31 Thread Armando Migliaccio
From the merged commit, it sounds like similar content would be required
in the OpenStack deployment guide and devstack to mark this issue complete.
However, as far as neutron goes, we should be all set.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
Milestone: ocata-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557119

Title:
  Services and Agents DevRef doesn't describe configuration separation

Status in neutron:
  Fix Released
Status in openstack-manuals:
  New

Bug description:
  Presently the services and agents DevRef doesn't describe how
  configuration options should be separated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660747] Re: test_list_servers_filter_by_error_status intermittently fails with MismatchError on no servers in response

2017-01-31 Thread Matt Riedemann
This is potentially a Tempest bug since the os-resetState API in Nova
returns a 202 response, so Tempest should probably be waiting for the
server to go into error state before doing the GET query with the status
filter.

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660747

Title:
  test_list_servers_filter_by_error_status intermittently fails with
  MismatchError on no servers in response

Status in OpenStack Compute (nova):
  Confirmed
Status in tempest:
  New

Bug description:
  Seen here:

  http://logs.openstack.org/59/424759/12/gate/gate-tempest-dsvm-cells-
  ubuntu-xenial/d7b1311/console.html#_2017-01-31_17_48_34_663273

  2017-01-31 17:48:34.663337 | Captured traceback:
  2017-01-31 17:48:34.663348 | ~~~
  2017-01-31 17:48:34.663363 | Traceback (most recent call last):
  2017-01-31 17:48:34.663393 |   File 
"tempest/api/compute/admin/test_servers.py", line 59, in 
test_list_servers_filter_by_error_status
  2017-01-31 17:48:34.663414 | self.assertIn(self.s1_id, map(lambda x: 
x['id'], servers))
  2017-01-31 17:48:34.663448 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
  2017-01-31 17:48:34.663468 | self.assertThat(haystack, 
Contains(needle), message)
  2017-01-31 17:48:34.663502 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2017-01-31 17:48:34.663515 | raise mismatch_error
  2017-01-31 17:48:34.663542 | testtools.matchers._impl.MismatchError: 
u'108b4797-74fd-4a00-912a-b7fe0e142888' not in []

  This test resets the state on a server to ERROR:

  2017-01-31 17:48:34.663649 | 2017-01-31 17:28:39,375 504 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_filter_by_error_status): 202 POST 
http://10.23.154.32:8774/v2.1/servers/108b4797-74fd-4a00-912a-b7fe0e142888/action
 0.142s
  2017-01-31 17:48:34.663695 | 2017-01-31 17:28:39,376 504 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-01-31 17:48:34.663714 | Body: {"os-resetState": {"state": 
"error"}}

  Then tries to list servers by that status and expects to get that one
  back:

  2017-01-31 17:48:34.663883 | 2017-01-31 17:28:39,556 504 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_filter_by_error_status): 200 GET 
http://10.23.154.32:8774/v2.1/servers?status=error 0.179s
  2017-01-31 17:48:34.663955 | 2017-01-31 17:28:39,556 504 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-01-31 17:48:34.663969 | Body: None
  2017-01-31 17:48:34.664078 | Response - Headers: 
{u'x-openstack-nova-api-version': '2.1', u'vary': 
'X-OpenStack-Nova-API-Version', u'content-length': '15', 'status': '200', 
u'content-type': 'application/json', u'x-compute-request-id': 
'req-91ef16ab-28c3-47c5-b823-6a321bde5c01', u'date': 'Tue, 31 Jan 2017 17:28:39 
GMT', 'content-location': 'http://10.23.154.32:8774/v2.1/servers?status=error', 
u'openstack-api-version': 'compute 2.1', u'connection': 'close'}
  2017-01-31 17:48:34.664094 | Body: {"servers": []}

  And the list is coming back empty, intermittently, with cells v1. So
  there is probably some vm_state change race between the state change
  in the child cell and reporting that back up to the parent API cell.

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Body%3A%20%7B%5C%5C%5C%22servers%5C%5C%5C%22%3A%20%5B%5D%7D%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C
  %22gate-tempest-dsvm-cells-ubuntu-xenial%5C%22=7d

  16 hits in 7 days, check and gate, all failures.
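  The fix suggested above -- waiting out the asynchronous os-resetState
  (a 202 response) before filtering -- can be sketched as a polling
  helper. This is a hypothetical stand-in, not tempest's actual waiter:

  ```python
  import time

  def wait_for_server_status(show_server, server_id, status,
                             timeout=60, interval=1):
      # Poll GET /servers/{id} until the server reaches the wanted
      # state; os-resetState returns 202, so the state change may lag.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if show_server(server_id)['status'] == status:
              return
          time.sleep(interval)
      raise AssertionError('server %s never reached %s' % (server_id, status))

  # Simulated API that reaches ERROR on the second poll:
  states = iter(['ACTIVE', 'ERROR'])
  wait_for_server_status(lambda _id: {'status': next(states)},
                         'uuid-1', 'ERROR', timeout=5, interval=0)
  print('reached ERROR')
  ```

  Only after this wait should the test issue the
  /servers?status=error list and assert on its contents.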

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443898] Re: TestVolumeBootPattern.test_volume_boot_pattern fail with libvirt-xen

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/402318
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=12485c74faf317dfe2c05ee0b4877412002893d6
Submitter: Jenkins
Branch:master

commit 12485c74faf317dfe2c05ee0b4877412002893d6
Author: Thomas Bechtold 
Date:   Thu Nov 24 17:04:37 2016 +0100

Fix root_device_name for Xen

When running the Tempest scenario test_volume_boot_pattern.\
 TestVolumeBootPattern.test_volume_boot_pattern with a
libvirt+Xen environment, the test fails because the
root_device_name is set to 'vda' which is invalid for Xen.
When getting the BDM for the root block device, check that
the root_device_name is compatible with the used disk bus and
if it is not, adjust the root_device_name.

Change-Id: I4c231b5ef646a067be1566653b6ee89ccbee60f3
Closes-Bug: #1443898
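The adjustment the commit describes can be sketched roughly as follows. This is illustrative Python only, not Nova's actual code: the mapping table and the helper name `adjust_root_device_name` are assumptions for the sake of the example.

```python
# Illustrative sketch: choose a root device name compatible with the disk
# bus in use, rewriting an incompatible one (e.g. 'vda' under Xen).
DISK_BUS_PREFIX = {
    "virtio": "vd",   # vda, vdb, ...
    "xen": "xvd",     # xvda, xvdb, ...
    "scsi": "sd",
    "ide": "hd",
}

def adjust_root_device_name(root_device_name, disk_bus):
    """Return a root device name valid for disk_bus.

    If the requested name does not carry the prefix the bus expects,
    rewrite it while keeping the trailing drive letter.
    """
    prefix = DISK_BUS_PREFIX.get(disk_bus, "vd")
    if root_device_name.startswith(prefix):
        return root_device_name
    letter = root_device_name[-1]   # keep the drive letter ('a', 'b', ...)
    return prefix + letter

print(adjust_root_device_name("vda", "xen"))     # -> xvda
print(adjust_root_device_name("vda", "virtio"))  # -> vda
```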


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443898

Title:
  TestVolumeBootPattern.test_volume_boot_pattern fail with libvirt-xen

Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Invalid

Bug description:
  The following test fails when run with a libvirt-xen nova compute node.
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern

  This is due to the function _boot_instance_from_volume, which creates a
  volume with the device name "vda", but this is not supported by Xen.

  Also Nova itself does not reject the device name.

  This is with devstack master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443898/+subscriptions



[Yahoo-eng-team] [Bug 1552077] Re: [api-ref]Use network RBAC feature for external access

2017-01-31 Thread Boden R
The original patch included a release note.

The networking guide is already updated to include documentation on
access_as_external [1].

That said, I don't see any work left here. Closing out as invalid.


[1] 
http://docs.openstack.org/newton/networking-guide/config-rbac.html#allowing-a-network-to-be-used-as-an-external-network

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552077

Title:
  [api-ref]Use network RBAC feature for external access

Status in neutron:
  Invalid
Status in openstack-api-site:
  Invalid

Bug description:
  https://review.openstack.org/282295
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 49b4dd3478d782aee4260033825aa6b47eaf644a
  Author: Kevin Benton 
  Date:   Fri Feb 19 03:34:27 2016 -0800

  Use network RBAC feature for external access
  
  This allows access to external networks to be controlled via the
  RBAC framework added during Liberty with a new 'access_as_external'
  action.
  
  A migration adds all current external networks to the RBAC policies
  table with a wildcard indicating that all tenants can access the network
  as RBAC.
  
  Unlike the conversion of shared networks to RBAC, the external table
  is left in the DB to avoid invasive changes throughout the codebase
  to calculate the flag relative to the caller. So the current 'external'
  flag is used throughout the code base as it previously was for wiring
  up floating IPs, router gateway ports, etc. Then the RBAC entries are
  only referenced when determining what networks to show the tenants.
  
  API Behavior:
   * Marking a network as 'external' will automatically create a wildcard
 entry that allows that network to be accessed by all tenants.
   * An external network may have all of its RBAC entries deleted and then
 only an admin will be able to attach to it.
   * An RBAC 'access_as_external' entry cannot be deleted if it is required
 for a tenant that currently has a router attached to that network.
   * Creating an 'access_as_external' RBAC entry will automatically convert
 the network into an external network. (This is to enable a workflow
 where a private external network is never visible to everyone.)
   * The default policy.json will prevent a non-admin from creating wildcard
 'access_as_external' RBAC entries to align with the current default 
policy
 we have on setting the 'external' field on the network to prevent 
polluting
 everyone else's network lists.
   * The default policy.json will allow a tenant to create an
 'access_as_external' RBAC entry to allow specific tenants
 (including itself) the ability to use its network as an external 
network.
  
  Closes-Bug: #1547985
  DocImpact: External networks can now have access restricted to small 
subsets
 of tenants
  APIImpact: 'access_as_external' will be allowed as an action in the RBAC
 API for networks
  Change-Id: I4d8ee78a9763c58884e4fd3d7b40133da659cd61
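The API behavior listed above can be modeled with a small toy example. This is a pure-Python sketch of the described semantics, not Neutron's implementation; the class and method names are invented for illustration.

```python
# Toy model of 'access_as_external' RBAC semantics: marking a network
# external creates a wildcard entry, and creating an entry for a specific
# tenant automatically converts the network into an external network.
WILDCARD = "*"

class Network:
    def __init__(self, name):
        self.name = name
        self.external = False
        self.rbac_entries = set()   # tenant ids allowed external access

    def set_external(self):
        self.external = True
        self.rbac_entries.add(WILDCARD)   # all tenants may use it

    def add_access_as_external(self, tenant_id):
        self.rbac_entries.add(tenant_id)
        self.external = True              # entry creation implies external

    def visible_to(self, tenant_id, is_admin=False):
        return (is_admin or WILDCARD in self.rbac_entries
                or tenant_id in self.rbac_entries)

net = Network("private-ext")
net.add_access_as_external("tenant-a")
print(net.external)                  # True: converted automatically
print(net.visible_to("tenant-a"))    # True
print(net.visible_to("tenant-b"))    # False: never shared with everyone
```

This matches the workflow called out in the commit message: a "private" external network that only specific tenants can ever see.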

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552077/+subscriptions



[Yahoo-eng-team] [Bug 1553451] Re: Add timestamp for neutron core resources

2017-01-31 Thread Boden R
Based on [1] this has already been fixed in the api-ref.
Nothing else to do from a neutron perspective, so closing out.


[1] 
https://developer.openstack.org/api-ref/networking/v2/?expanded=show-network-details-provider-network-detail,show-network-details-detail,show-port-details-detail

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553451

Title:
  Add timestamp for neutron core resources

Status in neutron:
  Invalid
Status in openstack-api-site:
  Fix Released

Bug description:
  https://review.openstack.org/213586
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4c2c983618ddb7a528c9005b0d7aaf5322bd198d
  Author: ZhaoBo 
  Date:   Thu Feb 18 13:28:58 2016 +0800

  Add timestamp for neutron core resources
  
  Currently, neutron core resources (like net, subnet, port and subnetpool)
  do not save timestamps upon their creation or update. This
  information can be critical for debugging purposes.
  
  This patch introduces a new extension called "timestamp", extending the
  existing neutron core resources to allow their creation and modification
  times to be recorded. It adds the resource schema and the functions
  which listen to db events in order to add the timestamp fields.
  
  APIImpact
  DocImpact: Neutron core resources now contain 'timestamp' fields like
 'created_at' and 'updated_at'
  
  Change-Id: I24114b464403435d9c1e1e123d2bc2f37c8fc6ea
  Partially-Implements: blueprint add-port-timestamp
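The mechanism the commit describes (timestamps written by create/update event hooks) can be sketched minimally as below. This is illustrative only; the real extension hooks SQLAlchemy db events, and these class and method names are assumptions.

```python
# Minimal sketch of created_at/updated_at maintained by event hooks,
# standing in for the db-event listeners the extension registers.
from datetime import datetime, timezone

class TimestampMixin:
    def on_create(self):
        now = datetime.now(timezone.utc)
        self.created_at = now
        self.updated_at = now

    def on_update(self):
        self.updated_at = datetime.now(timezone.utc)

class Port(TimestampMixin):
    def __init__(self, name):
        self.name = name
        self.on_create()      # would fire on the db "after insert" event

    def rename(self, name):
        self.name = name
        self.on_update()      # would fire on the db "after update" event

p = Port("p1")
assert p.created_at == p.updated_at
p.rename("p2")
assert p.updated_at >= p.created_at
```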

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553451/+subscriptions



[Yahoo-eng-team] [Bug 1660747] [NEW] test_list_servers_filter_by_error_status intermittently fails with MismatchError on no servers in response

2017-01-31 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/59/424759/12/gate/gate-tempest-dsvm-cells-
ubuntu-xenial/d7b1311/console.html#_2017-01-31_17_48_34_663273

2017-01-31 17:48:34.663337 | Captured traceback:
2017-01-31 17:48:34.663348 | ~~~
2017-01-31 17:48:34.663363 | Traceback (most recent call last):
2017-01-31 17:48:34.663393 |   File 
"tempest/api/compute/admin/test_servers.py", line 59, in 
test_list_servers_filter_by_error_status
2017-01-31 17:48:34.663414 | self.assertIn(self.s1_id, map(lambda x: 
x['id'], servers))
2017-01-31 17:48:34.663448 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
2017-01-31 17:48:34.663468 | self.assertThat(haystack, 
Contains(needle), message)
2017-01-31 17:48:34.663502 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
2017-01-31 17:48:34.663515 | raise mismatch_error
2017-01-31 17:48:34.663542 | testtools.matchers._impl.MismatchError: 
u'108b4797-74fd-4a00-912a-b7fe0e142888' not in []

This test resets the state on a server to ERROR:

2017-01-31 17:48:34.663649 | 2017-01-31 17:28:39,375 504 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_filter_by_error_status): 202 POST 
http://10.23.154.32:8774/v2.1/servers/108b4797-74fd-4a00-912a-b7fe0e142888/action
 0.142s
2017-01-31 17:48:34.663695 | 2017-01-31 17:28:39,376 504 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-01-31 17:48:34.663714 | Body: {"os-resetState": {"state": 
"error"}}

Then tries to list servers by that status and expects to get that one
back:

2017-01-31 17:48:34.663883 | 2017-01-31 17:28:39,556 504 INFO 
[tempest.lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_filter_by_error_status): 200 GET 
http://10.23.154.32:8774/v2.1/servers?status=error 0.179s
2017-01-31 17:48:34.663955 | 2017-01-31 17:28:39,556 504 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-01-31 17:48:34.663969 | Body: None
2017-01-31 17:48:34.664078 | Response - Headers: 
{u'x-openstack-nova-api-version': '2.1', u'vary': 
'X-OpenStack-Nova-API-Version', u'content-length': '15', 'status': '200', 
u'content-type': 'application/json', u'x-compute-request-id': 
'req-91ef16ab-28c3-47c5-b823-6a321bde5c01', u'date': 'Tue, 31 Jan 2017 17:28:39 
GMT', 'content-location': 'http://10.23.154.32:8774/v2.1/servers?status=error', 
u'openstack-api-version': 'compute 2.1', u'connection': 'close'}
2017-01-31 17:48:34.664094 | Body: {"servers": []}

And the list is coming back empty, intermittently, with cells v1. So
there is probably some vm_state change race between the state change in
the child cell and reporting that back up to the parent API cell.

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Body%3A%20%7B%5C%5C%5C%22servers%5C%5C%5C%22%3A%20%5B%5D%7D%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C
%22gate-tempest-dsvm-cells-ubuntu-xenial%5C%22=7d

16 hits in 7 days, check and gate, all failures.
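The suspected race can be illustrated with a toy model: the API cell's copy of vm_state lags the child cell, so an immediate status-filtered list comes back empty, and a retrying waiter (the usual tempest-side mitigation) eventually sees the server. All names below are invented for illustration; this is not cells code.

```python
# Toy reproduction of the parent/child cell sync race described above.
import time

class ChildCell:
    def __init__(self):
        self.state = "ACTIVE"

class ApiCell:
    """Parent cell whose copy of the state is synced asynchronously."""
    def __init__(self, child):
        self.child = child
        self.state = child.state
        self._pending_syncs = 2      # simulate replication lag: 2 polls late

    def list_servers(self, status):
        if self._pending_syncs > 0:  # child state not replicated up yet
            self._pending_syncs -= 1
        else:
            self.state = self.child.state
        return ["server-1"] if self.state == status else []

def wait_for_listed(api, status, retries=5):
    for _ in range(retries):
        servers = api.list_servers(status)
        if servers:
            return servers
        time.sleep(0)                # placeholder for a real backoff
    return []

child = ChildCell()
api = ApiCell(child)
child.state = "ERROR"                # os-resetState lands in the child cell
print(api.list_servers("ERROR"))     # [] - the race: parent not synced yet
print(wait_for_listed(api, "ERROR")) # ['server-1'] once the sync catches up
```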

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: cells testing v1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660747

Title:
  test_list_servers_filter_by_error_status intermittently fails with
  MismatchError on no servers in response

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/59/424759/12/gate/gate-tempest-dsvm-cells-
  ubuntu-xenial/d7b1311/console.html#_2017-01-31_17_48_34_663273

  2017-01-31 17:48:34.663337 | Captured traceback:
  2017-01-31 17:48:34.663348 | ~~~
  2017-01-31 17:48:34.663363 | Traceback (most recent call last):
  2017-01-31 17:48:34.663393 |   File 
"tempest/api/compute/admin/test_servers.py", line 59, in 
test_list_servers_filter_by_error_status
  2017-01-31 17:48:34.663414 | self.assertIn(self.s1_id, map(lambda x: 
x['id'], servers))
  2017-01-31 17:48:34.663448 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
  2017-01-31 17:48:34.663468 | self.assertThat(haystack, 
Contains(needle), message)
  2017-01-31 17:48:34.663502 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2017-01-31 17:48:34.663515 | raise mismatch_error
  2017-01-31 17:48:34.663542 | 

[Yahoo-eng-team] [Bug 1261893] Re: glance image-create --location doesn't fail with bad URLs

2017-01-31 Thread Ian Cordasco
Image creation with locations was asynchronous in v1. Given that
locations are disabled by default in v2 and no longer as ergonomic, I'm
marking this as won't fix.

** Changed in: python-glanceclient
 Assignee: Victor Morales (electrocucaracha) => (unassigned)

** Changed in: python-glanceclient
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1261893

Title:
  glance image-create --location doesn't fail with bad URLs

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in Glance Client:
  Won't Fix

Bug description:
  I ran the command below, then realized that the URL is bad, there's no
  server there. Instead of throwing an error, it blindly created another
  image of size ''. Didn't see anything in the recent commits for it,
  this is on Ubuntu 12.04.

   python-glanceclient  1:0.11.0-0ubuntu1~cloud0
  Client library for Openstack glance server.

  
  root@larry:~# glance image-create --name cirros --is-public true 
--container-format bare --disk-format qcow2 --location 
http://10.0.0.1:9630/isos/cirros-0.3.0-x86_64-disk.img
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | None                                 |
  | container_format | bare                                 |
  | created_at       | 2013-12-17T19:20:57                  |
  | deleted          | False                                |
  | deleted_at       | None                                 |
  | disk_format      | qcow2                                |
  | id               | 93ca9d12-f9c9-4ef1-a12a-192cc2251da3 |
  | is_public        | True                                 |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | cirros                               |
  | owner            | 9db5a2b06743410eb506384f19ae7db7     |
  | protected        | False                                |
  | size             | 0                                    |
  | status           | active                               |
  | updated_at       | 2013-12-17T19:20:57                  |
  +------------------+--------------------------------------+
  root@larry:~# glance image-list
  
+--++-+--+--++
  | ID   | Name   | Disk Format | Container 
Format | Size | Status |
  
+--++-+--+--++
  | 93ca9d12-f9c9-4ef1-a12a-192cc2251da3 | cirros | qcow2   | bare  
   |  | active |
  
+--++-+--+--++
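The missing client-side check the reporter expected can be sketched as below. This is illustrative only (the actual resolution was server-side, and locations were later disabled by default in v2); the helper name is an assumption.

```python
# Hedged sketch: reject a malformed image location URL before creating
# the image record, instead of silently registering a zero-size image.
from urllib.parse import urlparse

def validate_location(url):
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("unsupported scheme: %r" % parsed.scheme)
    if not parsed.netloc:
        raise ValueError("location URL has no host: %r" % url)
    # A stricter client could additionally issue a HEAD request here and
    # require a 2xx response with a non-zero Content-Length.
    return parsed

validate_location("http://10.0.0.1:9630/isos/cirros-0.3.0-x86_64-disk.img")
try:
    validate_location("not-a-url")
except ValueError as exc:
    print("rejected:", exc)
```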

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1261893/+subscriptions



[Yahoo-eng-team] [Bug 1658415] Re: The name of _get_vm_ref_from_uuid is not consistent with its implementation

2017-01-31 Thread Sujitha
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658415

Title:
  The name of _get_vm_ref_from_uuid is not consistent with its
  implementation

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Bug description:
The name _get_vm_ref_from_uuid implies that it gets a vm_ref by instance uuid,
but its implementation gets the vm_ref by instance name. The function should be
renamed to be consistent with its implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1658415/+subscriptions



[Yahoo-eng-team] [Bug 1659499] Re: API Access View Credentials form is semantically incorrect

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/425580
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=1064803a6ef5a7f3059bbed86424a40f05b3a11e
Submitter: Jenkins
Branch:master

commit 1064803a6ef5a7f3059bbed86424a40f05b3a11e
Author: Rob Cresswell 
Date:   Thu Jan 26 08:33:42 2017 +

Improve API Access Credentials template

- Match labels to inputs correctly
- Remove some redundant classes
- Fix indentation/make template readable

Change-Id: Ia98602fe3f602b315e4c16102bb593ae77f9be7b
Closes-Bug: 1659499


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1659499

Title:
  API Access View Credentials form is semantically incorrect

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There are some simple improvements we can make to UX/a11y on the View
  Credentials form on the API Access panel, like correctly labelling
  inputs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1659499/+subscriptions



[Yahoo-eng-team] [Bug 1570026] Re: Add an option for WSGI pool size

2017-01-31 Thread Boden R
The manual's config reference is already updated [1].

The original patch already included a release note [2].

Marking as invalid as I don't see any work left here.


[1] 
http://docs.openstack.org/newton/config-reference/networking/networking_options_reference.html#api
[2] https://review.openstack.org/#/c/278007/

** Changed in: neutron
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570026

Title:
  Add an option for WSGI pool size

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/278007
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 9d573387f1e33ce85269d3ed9be501717eed4807
  Author: Mike Bayer 
  Date:   Tue Feb 9 13:10:57 2016 -0500

  Add an option for WSGI pool size
  
  Neutron currently hardcodes the number of
  greenlets used to process requests in a process to 1000.
  As detailed in
  
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
  
  this can cause requests to wait within one process
  for available database connection while other processes
  remain available.
  
  By adding a wsgi_default_pool_size option functionally
  identical to that of Nova, we can lower the number of
  greenlets per process to be more in line with a typical
  max database connection pool size.
  
  DocImpact: a previously unused configuration value
 wsgi_default_pool_size is now used to affect
 the number of greenlets used by the server. The
 default number of greenlets also changes from 1000
 to 100.
  Change-Id: I94cd2f9262e0f330cf006b40bb3c0071086e5d71
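The sizing rationale in the commit message can be sketched as follows. The thread pool stands in for the eventlet green pool, and the names are illustrative: the point is simply that a per-process request pool larger than the DB connection pool lets requests queue inside one busy process while other processes sit idle.

```python
# Sketch: cap per-process request concurrency at the DB pool size, so
# excess requests queue in front of the process rather than inside it.
from concurrent.futures import ThreadPoolExecutor  # stand-in for a green pool

DB_MAX_POOL_SIZE = 100                       # typical DB connection ceiling
wsgi_default_pool_size = DB_MAX_POOL_SIZE    # was hardcoded to 1000

def make_request_pool(size=wsgi_default_pool_size):
    return ThreadPoolExecutor(max_workers=size)

pool = make_request_pool()
futures = [pool.submit(lambda n=n: n * n) for n in range(4)]
print([f.result() for f in futures])   # [0, 1, 4, 9]
pool.shutdown()
```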

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570026/+subscriptions



[Yahoo-eng-team] [Bug 1570892] Re: [api-ref]Openswan/Libreswan: support sha256 for auth algorithm

2017-01-31 Thread Boden R
sha256 has already been added to the API ref [1].
Marking as invalid.

[1] https://developer.openstack.org/api-ref/networking/v2/?expanded
=show-ipsec-policy-detail

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570892

Title:
  [api-ref]Openswan/Libreswan: support sha256 for auth algorithm

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/303684
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit b73e1002555cfa70ccfea8ffe685672c0b679212
  Author: nick.zhuyj 
  Date:   Fri Apr 8 23:48:33 2016 -0500

  Openswan/Libreswan: support sha256 for auth algorithm
  
  Add support for sha256, as it is a customer requirement.
  Currently there are no ike/esp fields in the strongswan ipsec.conf
  template, so sha256 is used by default. But for openswan the auth
  algorithm is taken from configuration, so only sha1 is supported.
  This patch enables Openswan/Libreswan to support sha256.
  
  Note: this change is DocImpact and APIImpact
  
  Change-Id: I02c80ec3494eb0edef2fdaa5d21ca0c3bbcacac2
  Closes-Bug: #1567846
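For reference, pinning sha256 in a strongswan connection definition looks roughly like the fragment below. The connection name and cipher/DH-group choices are examples only; adjust the proposals to your deployment.

```ini
# Illustrative strongswan ipsec.conf fragment with explicit sha256 proposals
conn peer-site
    ike=aes128-sha256-modp2048!
    esp=aes128-sha256!
```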

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570892/+subscriptions



[Yahoo-eng-team] [Bug 1577289] Re: Ensure floating IPs only use IPv4 addresses

2017-01-31 Thread Boden R
From a neutron perspective I'm not seeing anything needed.

However from an openstack-manuals POV it seems like we should note the
IPv4 requirement for floating IPs (e.g. can't create floating IPs on net
with no IPv4 subnets, etc. see commit message for more details). When
searching the openstack docs I didn't find anything current talking
about floating IPs, other than [1] (I think [1] is older).

Moving this over to openstack-manuals in hopes we can find somewhere to
mention this requirement for floating IPs.


[1] http://docs.openstack.org/admin-guide/cli-admin-manage-ip-addresses.html

** Changed in: neutron
   Status: Triaged => New

** Changed in: neutron
   Importance: Low => Undecided

** Changed in: neutron
 Assignee: Dustin Lundquist (dlundquist) => (unassigned)

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577289

Title:
  Ensure floating IPs only use IPv4 addresses

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/267891
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4858cd7cb97354ae54f8e7d47aeaaddad714c9dd
  Author: Dustin Lundquist 
  Date:   Mon Jul 6 13:53:46 2015 -0700

  Ensure floating IPs only use IPv4 addresses
  
  Description:
  Presently Neutron doesn't validate the address family of floating IP
  addresses or the internal addresses they are associated with. It merely
  associates the first IP of the floating IP's port with the first IP of
  the internal port, unless a specified fixed IP is specified. This can
  lead to incorrect or poorly defined behavior when IPv6 is present.
  
  The existing L3 agent implementation only manages IPv4 NAT rules. While
  IPv6 NAT and NAT protocol translation are possible, the existing
  implementation does not support these configurations.
  
  Presently a floating IP can be created on an IPv6 only external network
  or associated with an IPv6 fixed IP, but the L3 agent is unable to bind
  these configurations.
  
  Implementation:
  When creating and updating a floating IP, only consider IPv4 addresses
  on both the floating IP's port and the internal port the floating IP is
  associated with. Additionally, disallow creating floating IPs on networks
  without any IPv4 subnets, since these floating IPs could not be
  allocated an IPv4 address.
  
  DocImpact
  APIImpact
  
  Co-Authored-By: Bradley Jones 
  Change-Id: I79b28a304b38ecdafc17eddc41213df1c24ec202
  Related-Bug: #1437855
  Closes-Bug: #1323766
  Closes-Bug: #1469322
  (cherry picked from commit 4cdc71e7d0e5220a5f12ee2dfea1ff3db045c041)
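The selection rule in the implementation notes can be sketched with the stdlib `ipaddress` module. The helper name is an assumption for illustration, not Neutron's actual code.

```python
# Sketch: consider only IPv4 addresses on a port's fixed IPs when wiring
# a floating IP; a port with no IPv4 address cannot be associated.
import ipaddress

def first_ipv4(fixed_ips):
    """Return the first IPv4 address from a port's fixed IPs, or None."""
    for ip in fixed_ips:
        if ipaddress.ip_address(ip).version == 4:
            return ip
    return None

internal_port_ips = ["2001:db8::10", "192.0.2.10"]
print(first_ipv4(internal_port_ips))   # 192.0.2.10
print(first_ipv4(["2001:db8::10"]))    # None -> association is rejected
```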

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1577289/+subscriptions



[Yahoo-eng-team] [Bug 1660682] Re: resource tracker sets wrong initial resource provider generation when creating a new resource provider

2017-01-31 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660682

Title:
  resource tracker sets wrong initial resource provider generation when
  creating a new resource provider

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  When the resource tracker creates (and then effectively caches) a new
  resource provider for the compute node, it sets the generation to 1.
  This leads to the first PUT to set inventory resulting in a 409
  conflict because the generation is wrong.

  The default generation in the database is 0, so for any new resource
  provider this is what it will be. So in the resource tracker, in
  _create_resource_provider, the generation should also be 0.
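The conflict mechanism can be modeled in a few lines. This is a toy model of the optimistic-concurrency check, with invented names, not placement's implementation: a tracker that caches generation 1 while the server-side default is 0 is guaranteed to get a 409 on its first inventory PUT.

```python
# Toy model of the resource provider generation check described above.
class Conflict(Exception):
    pass

class ResourceProvider:
    def __init__(self):
        self.generation = 0       # DB default for a new provider
        self.inventory = {}

    def put_inventory(self, inventory, generation):
        if generation != self.generation:
            raise Conflict("409: generation mismatch")
        self.inventory = inventory
        self.generation += 1      # bumped on every successful update

rp = ResourceProvider()
try:
    rp.put_inventory({"VCPU": 8}, generation=1)   # buggy tracker: assumes 1
except Conflict as exc:
    print(exc)                                    # 409: generation mismatch
rp.put_inventory({"VCPU": 8}, generation=0)       # fixed tracker: assumes 0
print(rp.generation)                              # 1
```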

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660682/+subscriptions



[Yahoo-eng-team] [Bug 1595706] Re: Remove the deprecated config 'router_id'

2017-01-31 Thread Boden R
As per comment #2, a docs change already landed. I also confirmed in the
newton L3 agent config docs; the opt is not there.

Nothing left to do from a neutron POV so closing out.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595706

Title:
  Remove the deprecated config 'router_id'

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/332018
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 448bc8e5220d3633f9c9ee804de0a38c2d829d78
  Author: Hong Hui Xiao 
  Date:   Tue Jun 21 08:49:42 2016 +

  Remove the deprecated config 'router_id'
  
  It was deprecated at [1].
  Remove the deprecated config 'router_id' and its related tests.
  
  [1] https://review.openstack.org/#/c/248498
  
  DocImpact: All references of 'router_id' configuration option
  and its description should be removed from the docs.
  
  UpgradeImpact: Remove 'router_id' configuration option from the
  l3_agent.ini.
  
  Change-Id: Ic9420191e8c1a333e4dcc0b70411591b8573ec7c
  Closes-Bug: #1594711

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595706/+subscriptions



[Yahoo-eng-team] [Bug 1600405] Re: Remove the deprecated config "quota_items"

2017-01-31 Thread Boden R
quota_items was removed from the manuals with [1] and I couldn't find
any other refs to it in the newton docs (to confirm).

From a neutron POV I don't see anything left to do.


[1] https://review.openstack.org/#/c/344325/

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600405

Title:
  Remove the deprecated config "quota_items"

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/331591
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ff73054e8637f29737cc65dc84ef0f9aea9d5abd
  Author: Hong Hui Xiao 
  Date:   Mon Jun 20 09:46:21 2016 +

  Remove the deprecated config "quota_items"
  
  It was deprecated at [1], and resource quotas are now registered at
  initialization of the APIRouter, so the config is no longer needed.
  
  [1] https://review.openstack.org/#/c/181593
  
  DocImpact: All references of 'quota_items' configuration option
  and its description should be removed from the docs.
  
  UpgradeImpact: Remove 'quota_items' configuration option from
  neutron.conf file.
  
  Change-Id: I0698772a49f51d7c65f1f4cf1ea7660cd07113a0
  Closes-Bug: #1593772

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600405/+subscriptions



[Yahoo-eng-team] [Bug 1460060] Re: Glance v1 and v2 api returns 500 while passing --min-ram and --min-disk greater than 2^(31) max value

2017-01-31 Thread Ian Cordasco
This was fixed on the server-side and requires no additional work on the
client side.

** Changed in: python-glanceclient
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1460060

Title:
  Glance v1 and v2 api returns 500 while passing --min-ram and --min-
  disk greater than 2^(31) max value

Status in Glance:
  Fix Released
Status in Glance Client:
  Won't Fix

Bug description:
  glance image-create --name test --container-format bare --disk-format raw 
--file delete_images.py --min-disk 100
  HTTPInternalServerError (HTTP 500)

  glance image-create --name test --container-format bare --disk-format raw 
--file delete_images.py --min-ram 100
  HTTPInternalServerError (HTTP 500)
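The fix direction implied by the title can be sketched as a bound check. This is illustrative only (the helper name and error mapping are assumptions): values that do not fit the 32-bit integer columns backing min_ram/min_disk should be rejected with a 400, not passed through to fail in the DB layer as a 500.

```python
# Sketch: validate min_ram/min_disk against the signed 32-bit range
# before the value ever reaches the database.
INT32_MAX = 2**31 - 1

def validate_int32(name, value):
    if not (0 <= value <= INT32_MAX):
        raise ValueError("%s must be between 0 and %d" % (name, INT32_MAX))
    return value

print(validate_int32("min_disk", 100))   # 100: accepted
try:
    validate_int32("min_ram", 2**31)     # out of range -> map to HTTP 400
except ValueError as exc:
    print("400:", exc)
```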

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1460060/+subscriptions



[Yahoo-eng-team] [Bug 1605793] Re: Calculate MTU on every network fetch instead of on create

2017-01-31 Thread Boden R
Manuals have already been updated (see comment #2).

From a neutron POV, the initial patch [1] already contained a release
note. Marking as invalid from a neutron perspective since nothing else
appears to be needed.


[1] https://review.openstack.org/#/c/336805/

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605793

Title:
  Calculate MTU on every network fetch instead of on create

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/336805
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit a984f9554cdcbe93c840a1d8f5c04302e9331e79
  Author: Ihar Hrachyshka 
  Date:   Sat Jul 2 17:30:21 2016 +0200

  Calculate MTU on every network fetch instead of on create
  
  Today, existing networks may not reflect MTU configured for
  neutron-server, if they were created when neutron-server was using
  different MTU setup for its infrastructure, or when it was using bad
  default values for network MTUs (specifically, before Mitaka, all networks
  were getting MTU = 0 by default, disabling both advertisement and data
  path MTU size enforcement).
  
  This patch stops persisting MTU in the database on network create and
  instead calculates it on every network resource fetch.
  
  DocImpact Now changes to MTU configuration options immediately affect
existing network MTUs, not just new networks.
  
  UpgradeImpact Existing networks with invalid MTU persisted in database
may change their MTU values to reflect configuration.
  
  Change-Id: Iee4f5037bf10b73ba98464143b183aacb59c22f2
  Closes-Bug: #1556182

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605793/+subscriptions



[Yahoo-eng-team] [Bug 1506842] Re: Glanceclient + SSL - Show warnings in console

2017-01-31 Thread Ian Cordasco
Those warnings are not generated by python-glanceclient, nor should they
be suppressed by it.

** Changed in: python-glanceclient
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1506842

Title:
  Glanceclient + SSL - Show warnings in console

Status in Glance:
  Invalid
Status in Glance Client:
  Invalid

Bug description:
  If we use glanceclient with SSL, the console displays a few "extra"
  warnings like this:

  /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
  /usr/lib/python2.7/dist-packages/urllib3/connection.py:251: SecurityWarning: 
Certificate has no `subjectAltName`, falling back to check for a `commonName` 
for now. This feature is being removed by major browsers and deprecated by RFC 
2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
SecurityWarning

  or that:

  /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
  /usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:770: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)

  Affected: python-glanceclient (and CLI).

  Steps to reproduce:

  1. Deploy openstack with enabling services in HTTPS mode (using TLS).
  2. Try to use this command: glance image-list

  Actual result: Displays a list of images with some warnings.

  root@node-1:~# glance image-list
  /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
  /usr/lib/python2.7/dist-packages/urllib3/connection.py:251: SecurityWarning: 
Certificate has no `subjectAltName`, falling back to check for a `commonName` 
for now. This feature is being removed by major browsers and deprecated by RFC 
2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
SecurityWarning
  +--------------------------------------+--------+
  | ID                                   | Name   |
  +--------------------------------------+--------+
  | 43c99677-94b4-4356-b3ee-cd3690f26fdc | TestVM |
  +--------------------------------------+--------+

  Expected result: Displays a list of images without any warnings.

  root@node-1:~# glance image-list

  +--------------------------------------+--------+
  | ID                                   | Name   |
  +--------------------------------------+--------+
  | 43c99677-94b4-4356-b3ee-cd3690f26fdc | TestVM |
  +--------------------------------------+--------+
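  Since the resolution places these warnings outside glanceclient, an
  operator-side workaround is to install "ignore" filters for the
  offending warning categories before invoking the client. A hedged
  sketch with a stand-in category (in a real deployment one would import
  urllib3's InsecurePlatformWarning, SecurityWarning, and
  InsecureRequestWarning instead):

```python
import warnings

def silence(categories):
    """Install an 'ignore' filter for each given warning category."""
    for category in categories:
        warnings.simplefilter("ignore", category)

# Stand-in for the urllib3 warning classes quoted in this report.
class DemoSecurityWarning(UserWarning):
    pass
```

  Calling `silence([...])` before `glance image-list`-equivalent client
  code would suppress those categories for the rest of the process.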

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1506842/+subscriptions



[Yahoo-eng-team] [Bug 1660695] [NEW] memcached restart required after horizon upgrade

2017-01-31 Thread Corey Bryant
Public bug reported:

If horizon is being run with memcached, the memcached service must be
restarted after horizon is upgraded from one openstack release to
another.

If memcached isn't restarted, I'm noticing the horizon theme and
possibly static assets are not being applied correctly.

Restarting memcached isn't a huge issue if memcached is running locally,
but in a distributed deployment where memcached is on a different node
from horizon it gets harder to manage.

Ideally horizon would just work after upgrade without a memcached
restart.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1660695

Title:
  memcached restart required after horizon upgrade

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If horizon is being run with memcached, the memcached service must be
  restarted after horizon is upgraded from one openstack release to
  another.

  If memcached isn't restarted, I'm noticing the horizon theme and
  possibly static assets are not being applied correctly.

  Restarting memcached isn't a huge issue if memcached is running
  locally, but in a distributed deployment where memcached is on a
  different node from horizon it gets harder to manage.

  Ideally horizon would just work after upgrade without a memcached
  restart.
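  One mitigation (an assumption on my part, not a Horizon-endorsed fix)
  is to namespace the cache keys per release in local_settings.py, so an
  upgraded Horizon never reads entries written by the previous release.
  KEY_PREFIX is a standard Django cache setting; the version string below
  is illustrative:

```python
# Hypothetical local_settings.py fragment: bump KEY_PREFIX on each
# upgrade so stale memcached entries from the prior release are never
# read, making a memcached restart unnecessary.
HORIZON_RELEASE = "11.0.0"  # illustrative value, set per deployment

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "controller:11211",
        "KEY_PREFIX": "horizon-" + HORIZON_RELEASE,
    },
}
```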

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1660695/+subscriptions



[Yahoo-eng-team] [Bug 1221757] Re: Support adding multiple images/tenants as members

2017-01-31 Thread Ian Cordasco
** Changed in: python-glanceclient
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1221757

Title:
  Support adding multiple images/tenants as members

Status in Glance:
  Invalid
Status in Glance Client:
  Won't Fix

Bug description:
  I think we should add an option to the member-create/delete api's to
  add/delete multiple members to an image or to add/delete multiple
  images to one tenant.

  currently we can only run glance member-create  
  or glance member-delete  

  it would be very helpful if we could add --multiple-images and
  --multiple-tenants options that accept comma-separated values, or just
  have --image-id/--tenant-id accept multiple objects
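  Until such options exist, the fan-out can live client-side; a sketch
  using a hypothetical member_create(image_id, tenant_id) stand-in for
  the existing single-pair call:

```python
def add_members(member_create, image_ids, tenant_ids):
    """Apply the single-pair member_create call to every combination of
    comma-separated image IDs and tenant IDs."""
    for image_id in image_ids.split(","):
        for tenant_id in tenant_ids.split(","):
            member_create(image_id.strip(), tenant_id.strip())
```

  The same loop works for deletion by passing a member_delete callable.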

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1221757/+subscriptions



[Yahoo-eng-team] [Bug 1607724] Re: OVS Mech: Set hybrid plug based on agent config

2017-01-31 Thread Boden R
While a networking guide change did go out for this (see comment #3),
after looking at what we have in this regard I still feel the docs are
confusing. As a result I generated another openstack-manuals defect [1].

From a neutron perspective I'm not seeing other docs needed; if a devref
was required, it should've been included with the implementation patch
unless otherwise noted, so I'm closing out the neutron side.


[1] https://bugs.launchpad.net/openstack-manuals/+bug/1660687

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607724

Title:
  OVS Mech: Set hybrid plug based on agent config

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/327996
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 7a969d4b0a5ae902695b3204a858305b5122c5ff
  Author: Kevin Benton 
  Date:   Fri Apr 29 18:01:51 2016 -0700

  OVS Mech: Set hybrid plug based on agent config
  
  This adjusts the logic in the OVS mechanism driver to determine
  what the ovs_hybrid_plug value should be set to in the VIF details.
  Previously it was based purely on the firewall driver configured on
  the server side. This prevented a mixed environment where some agents
  might be running a native OVS firewall driver while others are still
  based on the IPTables hybrid driver.
  
  This patch has the OVS agents report back whether they want hybrid
  plugging in their configuration dictionary sent during report_state.
  The OVS agent sets this based on an explicit attribute on the firewall
  driver requesting OVS hybrid plugging.
  
  To maintain backward compat, if an agent doesn't report this, the old
  logic of basing it off of the server-side config is applied.
  
  DocImpact: The server no longer needs to be configured with a firewall
 driver for OVS. It will read config from agent state reports.
  Closes-Bug: #1560957
  Change-Id: Ie554c2d37ce036e7b51818048153b466eee02913
  (cherry picked from commit 2f17a30ba04082889f3a703aca1884b031767942)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607724/+subscriptions



[Yahoo-eng-team] [Bug 1660682] [NEW] resource tracker sets wrong initial resource provider generation when creating a new resource provider

2017-01-31 Thread Chris Dent
Public bug reported:

When the resource tracker creates (and then effectively caches) a new
resource provider for the compute node, it sets the generation to 1.
This leads to the first PUT to set inventory resulting in a 409 conflict
because the generation is wrong.

The default generation in the database is 0, so for any new resource
provider this is what it will be. So in the resource tracker, in
_create_resource_provider, the generation should also be 0.

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660682

Title:
  resource tracker sets wrong initial resource provider generation when
  creating a new resource provider

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  When the resource tracker creates (and then effectively caches) a new
  resource provider for the compute node, it sets the generation to 1.
  This leads to the first PUT to set inventory resulting in a 409
  conflict because the generation is wrong.

  The default generation in the database is 0, so for any new resource
  provider this is what it will be. So in the resource tracker, in
  _create_resource_provider, the generation should also be 0.
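  The generation handshake at issue can be sketched as follows
  (illustrative names, not nova's actual code):

```python
class ConflictError(Exception):
    """Stands in for the placement API's HTTP 409 response."""

class ResourceProvider:
    def __init__(self):
        self.generation = 0  # database default for a new provider

    def put_inventory(self, client_generation):
        # Placement rejects writes whose generation does not match the
        # provider's current generation, then bumps it on success.
        if client_generation != self.generation:
            raise ConflictError("409: generation mismatch")
        self.generation += 1
        return self.generation
```

  A tracker that caches generation 1 for a fresh provider fails its first
  PUT with a 409; caching 0, as the bug suggests, succeeds.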

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660682/+subscriptions



[Yahoo-eng-team] [Bug 1198813] Re: Duplicated glance image service

2017-01-31 Thread Ian Cordasco
This has been marked incomplete in glance for well over two years. I'm
closing this as invalid as there has been no activity other than other
projects cleaning up their own trackers.

** Changed in: python-glanceclient
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1198813

Title:
  Duplicated glance image service

Status in Cinder:
  Invalid
Status in Ironic:
  Expired
Status in OpenStack Compute (nova):
  Invalid
Status in Glance Client:
  Invalid

Bug description:
  This code is duplicated in nova, cinder and ironic. Should be removed
  and use the common version on python-glanceclient once the code lands.

  https://review.openstack.org/#/c/33327/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1198813/+subscriptions



[Yahoo-eng-team] [Bug 1406486] Re: Suspending an instance fails when using vnic_type=direct

2017-01-31 Thread Ian Cordasco
** No longer affects: python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406486

Title:
  Suspending an instance fails when using vnic_type=direct

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When launching an instance with a pre-created port with
  binding:vnic_type='direct', suspending the instance fails with the
  error: 'NoneType' object has no attribute 'encode'

  Nova compute log:
  http://paste.openstack.org/show/155141/

  Version
  ==
  openstack-nova-common-2014.2.1-3.el7ost.noarch
  openstack-nova-compute-2014.2.1-3.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  python-nova-2014.2.1-3.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  # nova suspend 
  # nova show 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406486/+subscriptions



[Yahoo-eng-team] [Bug 1256043] Re: Need to add Development environment files to ignore list

2017-01-31 Thread Ian Cordasco
** Changed in: python-glanceclient
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1256043

Title:
  Need to add Development environment files to ignore list

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in python-cinderclient:
  Fix Released
Status in Glance Client:
  Won't Fix
Status in python-keystoneclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-swiftclient:
  Won't Fix
Status in OpenStack Object Storage (swift):
  Won't Fix

Bug description:
  Following files generated by Eclipse development environment should be
  in ignore list to avoid their inclusion during a git push.

  .project
  .pydevproject

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1256043/+subscriptions



[Yahoo-eng-team] [Bug 1367564] Re: metadata definition property show should handle type specific prefix

2017-01-31 Thread Ian Cordasco
It's no longer apparent that this is a glanceclient issue since it has
been fixed in the Glance Metadef Service. Closing as invalid until it
becomes apparent that this is a client issue as well.

** Changed in: python-glanceclient
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367564

Title:
  metadata definition property show should handle type specific prefix

Status in Glance:
  Fix Released
Status in Glance Client:
  Invalid

Bug description:
  metadata definition property show should handle type specific prefix

  The metadata definitions API supports listing namespaces by resource
  type.  For example, you can list only namespaces applicable to images
  by specifying OS::Glance::Image

  The API also support showing namespace properties for a specific
  resource type.  The API will automatically prepend any prefix specific
  to that resource type.  For example, in the
  OS::Compute::VirtCPUTopology namespace, the properties will come back
  with "hw_" prepended.

  However, if you then ask the API to show the property with the prefix,
  it will return a "not found" error.   To actually see the details of
  the property, you have to know the base property (without the prefix).
  It would be nice if the API would attempt to auto-resolve any
  automatically prefixed properties when showing a property.

  This is evident from the command line.  If you look at the below
  interactions, you will see the namespaces listed, then limited to a
  particular resource type, then the properties shown for the namespace,
  and then a failure to show the property using the automatically
  prepended prefix.

  * Apologies for the formatting.

  $ glance --os-image-api-version 2 md-namespace-list
  +------------------------------------+
  | namespace                          |
  +------------------------------------+
  | OS::Compute::VMware                |
  | OS::Compute::XenAPI                |
  | OS::Compute::Quota                 |
  | OS::Compute::Libvirt               |
  | OS::Compute::Hypervisor            |
  | OS::Compute::Watchdog              |
  | OS::Compute::HostCapabilities      |
  | OS::Compute::Trust                 |
  | OS::Compute::VirtCPUTopology       |
  | OS::Glance:CommonImageProperties   |
  | OS::Compute::RandomNumberGenerator |
  +------------------------------------+

  $ glance --os-image-api-version 2 md-namespace-list --resource-type OS::Glance::Image
  +------------------------------+
  | namespace                    |
  +------------------------------+
  | OS::Compute::VMware          |
  | OS::Compute::XenAPI          |
  | OS::Compute::Libvirt         |
  | OS::Compute::Hypervisor      |
  | OS::Compute::Watchdog        |
  | OS::Compute::VirtCPUTopology |
  +------------------------------+

  $ glance --os-image-api-version 2 md-namespace-show OS::Compute::VirtCPUTopology --resource-type OS::Glance::Image
  +----------------------------+------------------------------------------------------------------+
  | Property                   | Value                                                            |
  +----------------------------+------------------------------------------------------------------+
  | created_at                 | 2014-09-10T02:55:40Z                                             |
  | description                | This provides the preferred socket/core/thread counts for the    |
  |                            | virtual CPU instance exposed to guests. This enables the ability |
  |                            | to avoid hitting limitations on vCPU topologies that OS vendors  |
  |                            | place on their products. See also:                               |
  |                            | http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/   |
  |                            | juno/virt-driver-vcpu-topology.rst                               |
  | display_name               | Virtual CPU Topology                                             |
  | namespace                  | OS::Compute::VirtCPUTopology                                     |
  | owner                      | admin                                                            |
  | properties                 | ["hw_cpu_cores", "hw_cpu_sockets", "hw_cpu_maxsockets",          |
  |                            | "hw_cpu_threads", "hw_cpu_maxcores", "hw_cpu_maxthreads"]        |
  | protected                  | True                                                             |
  | resource_type_associations | ["OS::Glance::Image", "OS::Cinder::Volume", "OS::Nova::Flavor"]  |
  | schema                     | /v2/schemas/metadefs/namespace                                   |

[Yahoo-eng-team] [Bug 1616114] Re: Flavors lack documentation

2017-01-31 Thread Boden R
Added openstack-manuals as an affected project to allow them to triage
updating the networking guide with "flavors" support. The spec linked
in the description of this defect has a nice overview of the
functionality.


Leaving the neutron bug as-is; we would like to also have a devref for
flavors (doesn't exist today) so the neutron bug can track that work
item (devref for flavors in neutron).

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1616114

Title:
  Flavors lack documentation

Status in neutron:
  Confirmed
Status in openstack-manuals:
  New

Bug description:
  Flavors [1] lack documentation, particularly for operators. Consider
  adding it to the networking guide.

  [1] http://specs.openstack.org/openstack/neutron-specs/specs/kilo
  /neutron-flavor-framework.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1616114/+subscriptions



[Yahoo-eng-team] [Bug 1660670] [NEW] ConnectionError: ('Connection aborted.', BadStatusLine("''", ))

2017-01-31 Thread Jason Hobbs
Public bug reported:

Description
===
Using python-novaclient, I call nova.servers.create() and then poll with 
nova.servers.get() until my new instance has a private IP assigned. 
Occasionally the client raises an error instead:

ConnectionError: ('Connection aborted.', BadStatusLine("''",))

I don't see any indication in the nova cloud controller logs as to what
the failure may be.


Steps to reproduce
==
* I used nova.servers.create() to create a server
* I used nova.servers.get() to check if the new server was ready to use.
* nova.servers.get() failed with an error:
ConnectionError: ('Connection aborted.', BadStatusLine("''",))

Expected result
===
I expected the server to come up in ACTIVE state and have a private IP assigned.

Actual result
=
The nova client throws a connection error.

Environment
===
I'm running nova on newton:

ii  nova-api-os-compute  2:14.0.1-0ubuntu1~cloud0   all 
 OpenStack Compute - OpenStack Compute API frontend
ii  nova-cert2:14.0.1-0ubuntu1~cloud0   all 
 OpenStack Compute - certificate management
ii  nova-common  2:14.0.1-0ubuntu1~cloud0   all 
 OpenStack Compute - common files
ii  nova-conductor   2:14.0.1-0ubuntu1~cloud0   all 
 OpenStack Compute - conductor service
ii  nova-scheduler   2:14.0.1-0ubuntu1~cloud0   all 
 OpenStack Compute - virtual machine scheduler
ii  python-nova  2:14.0.1-0ubuntu1~cloud0   all 
 OpenStack Compute Python libraries

2. Which hypervisor did you use?
libvirt + kvm
ii  libvirt-bin  1.3.1-1ubuntu10.6   
amd64programs for the libvirt library
ii  libvirt0:amd64   1.3.1-1ubuntu10.6   
amd64library for interfacing with different virtualization systems
ii  nova-compute-libvirt 2:14.0.1-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
ii  python-libvirt   1.3.1-1ubuntu1  
amd64libvirt Python bindings


3. Which storage type did you use?
ceph, but with no attached volumes - not really using it here.

4. Which networking type did you use?
neutron with openvswitch
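A client-side mitigation (an assumption, not a fix for whatever is
dropping the connection server-side) is to retry the poll on transient
connection failures; get_server below is a stand-in for the
nova.servers.get() call:

```python
import time

def poll_with_retry(get_server, attempts=3, delay=1.0,
                    retry_on=(ConnectionError,)):
    """Call get_server, retrying on transient connection errors."""
    for attempt in range(attempts):
        try:
            return get_server()
        except retry_on:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
```

This papers over an occasional BadStatusLine but leaves a persistent
failure visible by re-raising on the last attempt.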

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: oil oil-guestos-fail

** Attachment added: "sosreport-juju-e54bcf-1-lxd-1-20170131160047.tar.xz"
   
https://bugs.launchpad.net/bugs/1660670/+attachment/4811425/+files/sosreport-juju-e54bcf-1-lxd-1-20170131160047.tar.xz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660670

Title:
  ConnectionError: ('Connection aborted.', BadStatusLine("''",))

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Using python-novaclient, I call nova.servers.create() and then poll with 
nova.servers.get() until my new instance has a private IP assigned. 
Occasionally the client raises an error instead:

  ConnectionError: ('Connection aborted.', BadStatusLine("''",))

  I don't see any indication in the nova cloud controller logs as to
  what the failure may be.


  Steps to reproduce
  ==
  * I used nova.servers.create() to create a server
  * I used nova.servers.get() to check if the new server was ready to use.
  * nova.servers.get() failed with an error:
  ConnectionError: ('Connection aborted.', BadStatusLine("''",))

  Expected result
  ===
  I expected the server to come up in ACTIVE state and have a private IP 
assigned.

  Actual result
  =
  The nova client throws a connection error.

  Environment
  ===
  I'm running nova on newton:

  ii  nova-api-os-compute  2:14.0.1-0ubuntu1~cloud0   all   
   OpenStack Compute - OpenStack Compute API frontend
  ii  nova-cert2:14.0.1-0ubuntu1~cloud0   all   
   OpenStack Compute - certificate management
  ii  nova-common  2:14.0.1-0ubuntu1~cloud0   all   
   OpenStack Compute - common files
  ii  nova-conductor   2:14.0.1-0ubuntu1~cloud0   all   
   OpenStack Compute - conductor service
  ii  nova-scheduler   2:14.0.1-0ubuntu1~cloud0   all   
   OpenStack Compute - virtual machine scheduler
  ii  python-nova  2:14.0.1-0ubuntu1~cloud0   all   
   OpenStack Compute Python libraries

  2. Which hypervisor did you use?
  libvirt + kvm
  ii  libvirt-bin  1.3.1-1ubuntu10.6   
amd64programs for the libvirt library
  ii  libvirt0:amd64   1.3.1-1ubuntu10.6   
amd64library 

[Yahoo-eng-team] [Bug 1366911] Re: Nova does not ensure a valid token is available if snapshot process exceeds token lifetime

2017-01-31 Thread Ian Cordasco
This should be addressed with the availability of trusts in keystone. I
believe glanceclient supports the usage of trusts now so this should be
fixed.

** Changed in: python-glanceclient
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366911

Title:
  Nova does not ensure a valid token is available if snapshot process
  exceeds token lifetime

Status in OpenStack Compute (nova):
  Invalid
Status in Glance Client:
  Fix Released

Bug description:
  Recently we encountered the following issue due to the change in
  Icehouse for the default lifetime of a token before it expires. It's
  now 1 hour, while previously it was 8.

  If a snapshot process takes longer than an hour, when it goes to the
  next phase it will fail with a 401 Unauthorized error because it has
  an invalid token.

  In our specific example the following would take place:

  1. User would set a snapshot to begin and a token would be associated with 
this request.
  2. Snapshot would be created; compression would take about 55 minutes,
enough to push the snapshotting of this instance just over the 60-minute
mark.
  3. Upon Image Upload ("Uploading image data for image" in the logs) Nova 
would then return a 401 Unauthorized error stating "This server could not 
verify that you are authorized to access the document you requested. Either you 
supplied the wrong credentials (e.g., bad password), or your browser does not 
understand how to supply the credentials required."

  Icehouse 2014.1.2, KVM as the hypervisor.

  The workaround is to specify a longer token timeout - however limits
  the ability to set short token expirations.

  A possible solution may be to get a new/refresh the token if the time
  has exceeded the timeout.
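  The suggested solution can be sketched as a small holder that re-issues
  the token once its lifetime has elapsed (illustrative, not nova's
  implementation; the issue callable stands in for a keystone
  authentication request):

```python
import time

class TokenHolder:
    """Return a cached token, re-issuing it once the lifetime elapses."""

    def __init__(self, issue, lifetime):
        self.issue = issue          # callable returning a fresh token
        self.lifetime = lifetime    # seconds
        self._token = None
        self._issued_at = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now - self._issued_at >= self.lifetime:
            self._token = self.issue()
            self._issued_at = now
        return self._token
```

  Each phase of a long-running snapshot would call get() instead of
  reusing the token captured at the start of the request.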

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366911/+subscriptions



[Yahoo-eng-team] [Bug 1612907] Re: Add setting default max_burst value if not given by user

2017-01-31 Thread Boden R
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612907

Title:
  Add setting default max_burst value if not given by user

Status in neutron:
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  https://review.openstack.org/347017
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 8b5d04ff57223885a45dc34e7769a7fd424f7de5
  Author: Sławek Kapłoński 
  Date:   Sat Apr 30 00:53:22 2016 +0200

  Add setting default max_burst value if not given by user
  
  If the user does not provide a burst value for a bandwidth limit rule,
  an appropriate value needs to be set to ensure the bandwidth limit
  works correctly.
  To ensure at least proper limitation of TCP traffic we decided to set
  burst to 80% of the bw_limit value.
  The LinuxBridge agent already sets it like that. This patch adds the
  same behaviour to the QoS extension driver in the openvswitch L2 agent.
  
  DocImpact: Add setting default max_burst value in LB and OvS drivers
  
  Conflicts:
neutron/agent/linux/tc_lib.py
  
  Change-Id: Ib12a7cbf88cdffd10c8f6f8442582bf7377ca27d
  Closes-Bug: #1572670
  (cherry picked from commit f766fc71bedc58a0cff0c6dc6e5576f0ab5e2507)
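  The default described in the commit message amounts to the following
  (the helper name is illustrative, not neutron's actual function):

```python
def default_max_burst(bw_limit_kbps, max_burst_kbps=None):
    """Return the burst to apply: the user's value if given, otherwise
    80% of the bandwidth limit, as the commit message describes."""
    if max_burst_kbps:
        return max_burst_kbps
    return int(bw_limit_kbps * 0.8)
```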

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612907/+subscriptions



[Yahoo-eng-team] [Bug 1660667] [NEW] placement API policy handling uses deprecated fields

2017-01-31 Thread Chris Dent
Public bug reported:

When running tests of the placement API, the following deprecation
warnings are produced:

nova/api/openstack/placement/auth.py:74: DeprecationWarning: Property 'user' 
has moved to 'user_id' in version '2.6' and will be removed in version '3.0'
  if ctx.user is None:
nova/api/openstack/placement/policy.py:63: DeprecationWarning: Property 
'tenant' has moved to 'project_id' in version '2.6' and will be removed in 
version '3.0'
  target = {'project_id': context.tenant,
nova/api/openstack/placement/policy.py:64: DeprecationWarning: Property 'user' 
has moved to 'user_id' in version '2.6' and will be removed in version '3.0'
  'user_id': context.user}

We should fix this.
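
A minimal sketch of the likely fix, replacing the deprecated context attributes with the successors named in the warnings. The `RequestContext` class below is a stand-in for illustration, not nova's actual context class:

```python
# Stand-in context exposing the non-deprecated attribute names
# ('user_id', 'project_id') that the DeprecationWarnings point to.
class RequestContext:
    def __init__(self, user_id, project_id):
        self.user_id = user_id        # successor of deprecated 'user'
        self.project_id = project_id  # successor of deprecated 'tenant'


def policy_target(context):
    # The old code read context.tenant and context.user, triggering the
    # warnings quoted above; reading the new properties avoids them.
    return {'project_id': context.project_id, 'user_id': context.user_id}


print(policy_target(RequestContext('u1', 'p1')))
```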

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660667

Title:
  placement API policy handling uses deprecated fields

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  When running tests of the placement API, the following deprecation
  warnings are produced:

  nova/api/openstack/placement/auth.py:74: DeprecationWarning: Property 'user' 
has moved to 'user_id' in version '2.6' and will be removed in version '3.0'
if ctx.user is None:
  nova/api/openstack/placement/policy.py:63: DeprecationWarning: Property 
'tenant' has moved to 'project_id' in version '2.6' and will be removed in 
version '3.0'
target = {'project_id': context.tenant,
  nova/api/openstack/placement/policy.py:64: DeprecationWarning: Property 
'user' has moved to 'user_id' in version '2.6' and will be removed in version 
'3.0'
'user_id': context.user}

  We should fix this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660667/+subscriptions



[Yahoo-eng-team] [Bug 1297414] Re: Users can set arbitrary headers by adding newlines to header values

2017-01-31 Thread Ian Cordasco
The underlying HTTP transport for glanceclient no longer allows users to
send or receive headers like this. This is fixed in newer versions of
glanceclient, which rely on newer versions of the requests library.

** Changed in: python-glanceclient
   Status: New => Fix Released

** Changed in: glance
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1297414

Title:
  Users can set arbitrary headers by adding newlines to header values

Status in Glance:
  Won't Fix
Status in Glance Client:
  Fix Released

Bug description:
  Glance and the python-glanceclient (v1) do not armor/sanitize their
  inputs when assembling headers.  In particular, "x-image-meta-
  property-description" is exposed via interfaces like Horizon (which
  still uses v1) as a free-form text field, (Unicode, newlines, etc.
  allowed) and if users introduce newlines, the glanceclient will POST
  them to Glance verbatim without any extra encoding, which means
  maliciously/incompetently constructed Description: values can set
  header values that the client otherwise would not.
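
An illustrative sketch of the injection (not actual glanceclient code; the helper names are hypothetical): a CR/LF embedded in a header value smuggles an extra header line into the serialized request, and rejecting such values closes the hole.

```python
def render_headers(headers):
    # Naive serialization, as a vulnerable client might emit it on the wire.
    return ''.join('%s: %s\r\n' % (k, v) for k, v in headers.items())


def sanitize_header_value(value):
    # Reject values that would start a new header line.
    if '\r' in value or '\n' in value:
        raise ValueError('newline in header value')
    return value


malicious = 'my image\r\nX-Injected: evil'
raw = render_headers({'x-image-meta-property-description': malicious})
print('X-Injected: evil' in raw)  # True: the extra header was smuggled in

try:
    sanitize_header_value(malicious)
except ValueError:
    print('rejected')
```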

  I can't really see anything in the code that uses HTTP headers to set
  any sort of security context, but this could just be a lack of
  imagination on my part.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1297414/+subscriptions



[Yahoo-eng-team] [Bug 1660655] [NEW] Any instance that is boot from new volume adding up to local_gb_used

2017-01-31 Thread Amith Nayak
Public bug reported:

I have Mitaka deployed on a multi-host platform. All the hypervisors have only 
the minimal required disk. The VMs are stored on a separate cinder storage host 
with an LVM backend. Whenever I launch an instance with Boot from image (Create 
a new Volume), the new volume size is incorrectly added to "Local Disk Usage" 
("local_gb_used"). If the volume size exceeds the local storage available, the 
instance fails to launch with an error even though the cinder LVM backend has 
enough space. But if the volume size fits within "Local Disk", the instance 
launches successfully and the volume is created correctly on the LVM cinder 
backend.
The issue is that launching the instance counts the new volume against the 
hypervisor's local disk and adds it to its local consumption.

nova hypervisor-stats
+----------------------+--------+
| Property             | Value  |
+----------------------+--------+
| count                | 7      |
| current_workload     | 0      |
| disk_available_least | 98     |
| free_disk_gb         | 64     |
| free_ram_mb          | 71369  |
| local_gb             | 126    |
| local_gb_used        | 274    |
| memory_mb            | 112329 |
| memory_mb_used       | 40960  |
| running_vms          | 7      |
| vcpus                | 56     |
| vcpus_used           | 13     |
+----------------------+--------+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660655

Title:
  Any instance that is boot from new volume adding up to local_gb_used

Status in OpenStack Compute (nova):
  New

Bug description:
  I have Mitaka deployed on a multi-host platform. All the hypervisors have 
only the minimal required disk. The VMs are stored on a separate cinder storage 
host with an LVM backend. Whenever I launch an instance with Boot from image 
(Create a new Volume), the new volume size is incorrectly added to "Local Disk 
Usage" ("local_gb_used"). If the volume size exceeds the local storage 
available, the instance fails to launch with an error even though the cinder 
LVM backend has enough space. But if the volume size fits within "Local Disk", 
the instance launches successfully and the volume is created correctly on the 
LVM cinder backend.
  The issue is that launching the instance counts the new volume against the 
hypervisor's local disk and adds it to its local consumption.

  nova hypervisor-stats
  +----------------------+--------+
  | Property             | Value  |
  +----------------------+--------+
  | count                | 7      |
  | current_workload     | 0      |
  | disk_available_least | 98     |
  | free_disk_gb         | 64     |
  | free_ram_mb          | 71369  |
  | local_gb             | 126    |
  | local_gb_used        | 274    |
  | memory_mb            | 112329 |
  | memory_mb_used       | 40960  |
  | running_vms          | 7      |
  | vcpus                | 56     |
  | vcpus_used           | 13     |
  +----------------------+--------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660655/+subscriptions



[Yahoo-eng-team] [Bug 1660650] [NEW] Can't log in into Horizon when deploying Manila+Sahara

2017-01-31 Thread Daniel Mellado
Public bug reported:

When installing the latest devstack trunk along with the Manila and
Sahara/Sahara Dashboard plugins, I can't log in using Horizon, even
though the log says I logged in successfully.

Horizon gets stuck on the greeter, asking again for the username and
password.

local.conf extract:


# Enable Manila
enable_plugin manila https://github.com/openstack/manila

# Enable heat plugin
enable_plugin heat https://git.openstack.org/openstack/heat

# Enable Swift
enable_service s-proxy s-object s-container s-account
SWIFT_REPLICAS=1
SWIFT_HASH=$ADMIN_PASSWORD

# Enable Sahara
enable_plugin sahara git://git.openstack.org/openstack/sahara

# Enable sahara-dashboard
enable_plugin sahara-dashboard git://git.openstack.org/openstack/sahara-dashboard

** Affects: devstack
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: manila
 Importance: Undecided
 Status: New

** Affects: sahara
 Importance: Undecided
 Status: New

** Also affects: manila
   Importance: Undecided
   Status: New

** Also affects: sahara
   Importance: Undecided
   Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1660650

Title:
  Can't log in into Horizon when deploying Manila+Sahara

Status in devstack:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Manila:
  New
Status in Sahara:
  New

Bug description:
  When installing the latest devstack trunk along with the Manila and
  Sahara/Sahara Dashboard plugins, I can't log in using Horizon, even
  though the log says I logged in successfully.

  Horizon gets stuck on the greeter, asking again for the username and
  password.

  local.conf extract:

  
    # Enable Manila
    enable_plugin manila https://github.com/openstack/manila

[Yahoo-eng-team] [Bug 1660647] [NEW] _cleanup_failed_start aggressively removes local instance files when handling plug_vif failures

2017-01-31 Thread Lee Yarwood
Public bug reported:

Description
===
Iab5afdf1b5b8d107ea0e5895c24d50712e7dc7b1 [1] ensured that 
_cleanup_failed_start is always called if we encounter VIF plugging failures in 
_create_domain_and_network. However this currently leads to any local instance 
files being removed as cleanup is called with destroy_disks=True.

As such any failures when resuming or restarting an instance will lead
to these files being removed and the instance left in an unbootable
state. IMHO these files should only be removed when cleaning up after
errors hit while initially spawning an instance.
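
The behaviour argued for above can be sketched as follows (hypothetical names, not the actual nova code): destroy the local disks only when the failure happened during an initial spawn.

```python
def cleanup_args_on_failed_start(spawning):
    # With the current behaviour destroy_disks is effectively always True,
    # wiping the instance directory even when an existing instance merely
    # failed to restart; gating it on 'spawning' preserves those files.
    return {'destroy_disks': spawning}


print(cleanup_args_on_failed_start(spawning=False))  # keep disks on restart
```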

Steps to reproduce
==
- Boot an instance using local disks
- Stop the instance
- Start the instance, causing a timeout or other failure during plug_vifs
- Attempt to start the instance again

Expected result
===
The local instance files are left on the host if instances are rebooting or 
resuming.

Actual result
=
The local instance files are removed from the host if _cleanup_failed_start is 
called.

Environment
===
1. Exact version of OpenStack you are running. See the following
   list for all releases: http://docs.openstack.org/releases/

   $ pwd
   /opt/stack/nova
   $ git rev-parse HEAD
   4969a21ee28ef4a68bd5ab1ec8a12c4ad126


2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   Libvirt + KVM

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   N/A

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==

$ nova boot --image cirros-0.3.4-x86_64-uec --flavor 1 test-boot
[..]
$ nova stop test-boot
$ ll ../data/nova/instances/be6cb386-e005-4fb2-8332-7e0c375ee452/
total 18596
-rw-rw-r--. 1 root  root16699 Jan 31 09:30 console.log
-rw-r--r--. 1 root  root 10289152 Jan 31 09:30 disk
-rw-r--r--. 1 stack libvirtd  257 Jan 31 09:29 disk.info
-rw-rw-r--. 1 qemu  qemu  4979632 Jan 31 09:29 kernel
-rw-rw-r--. 1 qemu  qemu  3740163 Jan 31 09:29 ramdisk

I used the following change to artificially recreate an issue plugging
the VIFs :

$ git diff
diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
index 33e3157..248e960 100644
--- a/nova/virt/libvirt/driver.py
+++ b/nova/virt/libvirt/driver.py
@@ -5015,6 +5015,7 @@ class LibvirtDriver(driver.ComputeDriver):
 pause = bool(events)
 guest = None
 try:
+raise exception.VirtualInterfaceCreateException()
 with self.virtapi.wait_for_instance_event(
 instance, events, deadline=timeout,
 error_callback=self._neutron_failed_callback):

$ nova start test-boot
Request to start server test-boot has been accepted.
$ nova list
+--------------------------------------+-----------+---------+------------+-------------+--------------------------------+
| ID                                   | Name      | Status  | Task State | Power State | Networks                       |
+--------------------------------------+-----------+---------+------------+-------------+--------------------------------+
| be6cb386-e005-4fb2-8332-7e0c375ee452 | test-boot | SHUTOFF | -          | Shutdown    | public=172.24.4.8, 2001:db8::9 |
+--------------------------------------+-----------+---------+------------+-------------+--------------------------------+
$ ll ../data/nova/instances/be6cb386-e005-4fb2-8332-7e0c375ee452/
ls: cannot access 
'../data/nova/instances/be6cb386-e005-4fb2-8332-7e0c375ee452/': No such file or 
directory

Future attempts to start the instance will fail as a result :

$ nova start test-boot
Request to start server test-boot has been accepted.
$ vi ../logs/n-cpu.log
[..]
5353 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
5354 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 155, in 
_process_incoming
5355 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
5356 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 222, 
in dispatch
5357 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
5358 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 192, 
in _do_dispatch
5359 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
5360 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/exception_wrapper.py", line 75, in wrapped
5361 2017-01-31 09:35:59.117 TRACE oslo_messaging.rpc.server function_name, 
call_dict, 

[Yahoo-eng-team] [Bug 1660648] [NEW] Wrong timezone in settings page

2017-01-31 Thread Mateusz Kowalski
Public bug reported:

When a new user (i.e. with clean cookies) logs into Horizon, the
Settings page always shows "UTC", even if local_settings indicates
another timezone. At the same time all the dates are shown in the real
timezone (the one from local_settings).

Whole procedure to reproduce:

* set local_settings to TIME_ZONE = "Europe/Zurich" (=UTC+1)
* clear the cookies
* log into dashboard and check value (=label) is set to UTC
* check some random date in UI (like instance creation time)
* go to the settings and change value to UTC+1
* check the same date in UI and notice it hasn't changed
* go to the settings and change value back to UTC
* check the same date one more time and notice the value has now changed

It means only the initial value is wrong. As the timezone is stored in a
cookie, everything is shown correctly after the user changes it for the
first time in the current session (i.e. the cookie's lifecycle).
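
A minimal illustration of the suspected defect (the names are illustrative, not Horizon's actual code): the initial value falls back to a hard-coded 'UTC' instead of the configured TIME_ZONE when no cookie is present.

```python
CONFIGURED_TIME_ZONE = 'Europe/Zurich'  # TIME_ZONE from local_settings


def initial_timezone(cookies, buggy=True):
    if buggy:
        # Suspected behaviour: hard-coded fallback, ignoring local_settings.
        return cookies.get('timezone', 'UTC')
    # Expected behaviour: fall back to the configured timezone.
    return cookies.get('timezone', CONFIGURED_TIME_ZONE)


print(initial_timezone({}))               # what a fresh session sees: UTC
print(initial_timezone({}, buggy=False))  # expected: Europe/Zurich
```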

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1660648

Title:
  Wrong timezone in settings page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a new user (i.e. with clean cookies) logs into Horizon, the
  Settings page always shows "UTC", even if local_settings indicates
  another timezone. At the same time all the dates are shown in the real
  timezone (the one from local_settings).

  Whole procedure to reproduce:

  * set local_settings to TIME_ZONE = "Europe/Zurich" (=UTC+1)
  * clear the cookies
  * log into dashboard and check value (=label) is set to UTC
  * check some random date in UI (like instance creation time)
  * go to the settings and change value to UTC+1
  * check the same date in UI and notice it hasn't changed
  * go to the settings and change value back to UTC
  * check the same date one more time and notice the value has now changed

  It means only the initial value is wrong. As the timezone is stored in
  a cookie, everything is shown correctly after the user changes it for
  the first time in the current session (i.e. the cookie's lifecycle).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1660648/+subscriptions



[Yahoo-eng-team] [Bug 1660305] Re: DVR multinode job fails over 20 tests

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/426729
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4750112ea348690af25035f0ed73decad4a070a8
Submitter: Jenkins
Branch:master

commit 4750112ea348690af25035f0ed73decad4a070a8
Author: Oleg Bondarev 
Date:   Mon Jan 30 16:00:56 2017 +0400

Init privsep on l3 agent start

Should fix the problem with privsep call failures

Change-Id: I1150291eb6310677f4f4ab035a5cd275d21b7e39
Closes-Bug: #1660305


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1660305

Title:
  DVR multinode job fails over 20 tests

Status in neutron:
  Fix Released

Bug description:
  Example: http://logs.openstack.org/39/426339/2/check/gate-tempest-
  dsvm-neutron-dvr-multinode-full-ubuntu-xenial-
  nv/b31bdd2/logs/testr_results.html.gz

  Mostly  connectivity failures: cannot ssh to instance, floating IP not
  ACTIVE, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1660305/+subscriptions



[Yahoo-eng-team] [Bug 1628038] Re: Spelling mistakes in MTU selection and advertisement document

2017-01-31 Thread Boden R
Marking as invalid; see review comments - we don't fix typos in old
specs.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628038

Title:
  Spelling mistakes in MTU selection and advertisement document

Status in neutron:
  Invalid

Bug description:
  There are some spelling mistakes in the MTU selection and advertisement
  document. They are:
  "elemtns", "desination", "Aginst", "correcly".
  
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/mtu-selection-and-advertisement.html
  
https://github.com/openstack/neutron-specs/blob/006515248de2624c234b5fe054d7725a02540278/specs/kilo/mtu-selection-and-advertisement.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628038/+subscriptions



[Yahoo-eng-team] [Bug 1628564] Re: New option for num_threads for state change server

2017-01-31 Thread Boden R
As noted in the commit message; a new config option has been introduced.
Moving this to the manual team so they can consume.

Note this is for master and there are separate bugs for other branches.

** Changed in: neutron
   Status: Confirmed => New

** Changed in: neutron
   Importance: Low => Undecided

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628564

Title:
  New option for num_threads for state change server

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/317616
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently max number of client connections(i.e greenlets spawned at
  a time) opened at any time by the WSGI server is set to 100 with
  wsgi_default_pool_size[1].
  
  This configuration may be fine for the neutron API server. But with
  wsgi_default_pool_size (=100) requests, the state change server
  creates heavy CPU load on the agent.
  So this server (which runs on agents) needs a smaller value, i.e. it
  can be configured to half the number of CPUs on the agent.
  
  We use "ha_keepalived_state_change_server_threads" config option
  to configure number of threads in state change server instead of
  wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
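
For operators consuming this, the option would look like the following in the L3 agent's configuration file. The option name comes from the commit message above; the section placement and the example value are assumptions (the commit only suggests roughly half the agent's CPU count):

```ini
[DEFAULT]
# Assumed placement; example value per the commit's "half the number of
# CPUs on the agent" suggestion, here for an 8-core agent host.
ha_keepalived_state_change_server_threads = 4
```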

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1628564/+subscriptions



[Yahoo-eng-team] [Bug 1640512] Re: wrong format for novncproxy_base_url

2017-01-31 Thread Stephen Finucane
That configuration option should support all *valid* URIs, be they IP
addresses or domain names. The URI you provided - 'http://os-
controller:6080/vnc_auto.html' - is a valid one and I can validate it
locally using the same check that oslo.config (which nova uses to parse
config files) uses [1].

  $ pip install rfc3986
  Collecting rfc3986
Using cached rfc3986-0.4.1-py2.py3-none-any.whl
  Installing collected packages: rfc3986
  Successfully installed rfc3986-0.4.1
  $ python
  >>> from rfc3986 import is_valid_uri
  >>> x = 'http://os-controller:6080/vnc_auto.html'
  >>> is_valid_uri(x, require_scheme=True, require_authority=True)

This leads me to think the issue is either a misconfiguration or a bug
with either oslo.config or the dependent rfc3986 library. Please provide
the version info for nova, oslo.config and rfc3986 so we can attempt to
reproduce the issue and issue a fix/blacklist the broken package.

Alexandra: this means the documentation is actually correct, thus, I
have marked the openstack-manuals issue as invalid.

[1]
https://github.com/openstack/oslo.config/blob/3.22.0/oslo_config/types.py#L837-L839

** Changed in: openstack-manuals
   Status: Incomplete => Invalid

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

** Changed in: oslo.config
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1640512

Title:
  wrong format for novncproxy_base_url

Status in OpenStack Compute (nova):
  Incomplete
Status in openstack-manuals:
  Invalid
Status in oslo.config:
  Incomplete

Bug description:
  "In the [vnc] section, enable and configure remote console access:

  [vnc]
  ...
  enabled = True
  vncserver_listen = 0.0.0.0
  vncserver_proxyclient_address = $my_ip
  novncproxy_base_url = http://controller:6080/vnc_auto.html

  The server component listens on all IP addresses and the proxy
  component only listens on the management interface IP address of the
  compute node. The base URL indicates the location where you can use a
  web browser to access remote consoles of instances on this compute
  node."

  This looks like 'controller' is used as a domain name, but an IP address
  seems to be expected instead. With
  'novncproxy_base_url = http://os-controller:6080/vnc_auto.html'
  in my nova.conf the following error is logged:

  2016-11-09 16:02:02.475 23828 ERROR nova Traceback (most recent call last):
  2016-11-09 16:02:02.475 23828 ERROR nova   File "/usr/bin/nova-compute", line 
10, in 
  2016-11-09 16:02:02.475 23828 ERROR nova sys.exit(main())
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/compute.py", line 62, in main
  2016-11-09 16:02:02.475 23828 ERROR nova service.wait()
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 415, in wait
  2016-11-09 16:02:02.475 23828 ERROR nova _launcher.wait()
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 328, in wait
  2016-11-09 16:02:02.475 23828 ERROR nova status, signo = 
self._wait_for_exit_or_signal()
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 303, in 
_wait_for_exit_or_signal
  2016-11-09 16:02:02.475 23828 ERROR nova self.conf.log_opt_values(LOG, 
logging.DEBUG)
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  2016-11-09 16:02:02.475 23828 ERROR nova _sanitize(opt, 
getattr(group_attr, opt_name)))
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  2016-11-09 16:02:02.475 23828 ERROR nova return self._conf._get(name, 
self._group)
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  2016-11-09 16:02:02.475 23828 ERROR nova value = self._do_get(name, 
group, namespace)
  2016-11-09 16:02:02.475 23828 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  2016-11-09 16:02:02.475 23828 ERROR nova % (opt.name, str(ve)))
  2016-11-09 16:02:02.475 23828 ERROR nova ConfigFileValueError: Value for 
option novncproxy_base_url is not valid: invalid URI: 
'http://os-controller:6080/vnc_auto.html'

  With 
  'novncproxy_base_url = http://10.40.x.x:6080/vnc_auto.html'
  in my nova.conf the service starts successfully.

  My suggestion would be to either change the line to reflect that an IP
  address is expected, or to file a bug against nova.

  ---
  Release: 0.1 on 2016-11-09 11:38
  SHA: 8cac353f76b4ec663626d5111e6b10c3bb305b47
  Source: 

[Yahoo-eng-team] [Bug 1631375] Re: New option for num_threads for state change server

2017-01-31 Thread Boden R
As noted in commit message; this change adds a new config option.
However please note this defect is for Mitaka which is about 3 months from EOL 
and therefore may not make sense to add at this point in time.

Moving to manuals team so they can decide if they want to consume this
mitaka config option.

** Changed in: neutron
   Importance: Low => Undecided

** Changed in: neutron
   Status: Triaged => New

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631375

Title:
  New option for num_threads for state change server

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/379580
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 039ab16f4ea9308792d4168adb1d501310145346
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently max number of client connections(i.e greenlets spawned at
  a time) opened at any time by the WSGI server is set to 100 with
  wsgi_default_pool_size[1].
  
  This configuration may be fine for the neutron API server. But with
  wsgi_default_pool_size (=100) requests, the state change server
  creates heavy CPU load on the agent.
  So this server (which runs on agents) needs a smaller value, i.e. it
  can be configured to half the number of CPUs on the agent.
  
  We use "ha_keepalived_state_change_server_threads" config option
  to configure number of threads in state change server instead of
  wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
  (cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1631375/+subscriptions



[Yahoo-eng-team] [Bug 1631374] Re: New option for num_threads for state change server

2017-01-31 Thread Boden R
This change is for liberty which is already EOL; therefore I'm marking this as 
won't fix.
Note: there are separate defects for this in other releases.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631374

Title:
  New option for num_threads for state change server

Status in neutron:
  Won't Fix

Bug description:
  https://review.openstack.org/379582
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4387d4aedfa2ccf129eb957ee057ad9c9edac82d
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently max number of client connections(i.e greenlets spawned at
  a time) opened at any time by the WSGI server is set to 100 with
  wsgi_default_pool_size[1].
  
  This configuration may be fine for the neutron API server. But with
  wsgi_default_pool_size (=100) requests, the state change server
  creates heavy CPU load on the agent.
  So this server (which runs on agents) needs a smaller value, i.e. it
  can be configured to half the number of CPUs on the agent.
  
  We use "ha_keepalived_state_change_server_threads" config option
  to configure number of threads in state change server instead of
  wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
  (cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631374/+subscriptions



[Yahoo-eng-team] [Bug 1644535] Re: iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs

2017-01-31 Thread Boden R
As noted in the commit message of the fix, this one requires updates to
the install/upgrade/deploy guide. Also note this is for newton; a
separate bug is out there for master (ocata).

Moving to openstack-manuals to get the proper docs updated.

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644535

Title:
  iptables: fail to start ovs/linuxbridge agents on missing sysctl
  knobs

Status in openstack-manuals:
  Confirmed

Bug description:
  https://review.openstack.org/398817
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4371a4f5cdc6559955af9158c4c28851e77914da
  Author: Ihar Hrachyshka 
  Date:   Thu Sep 15 21:48:10 2016 +

  iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs
  
  For new kernels (3.18+), bridge module is split into two pieces: bridge
  and br_netfilter. The latter provides firewall support for bridged
  traffic, as well as the following sysctl knobs:
  
  * net.bridge.bridge-nf-call-arptables
  * net.bridge.bridge-nf-call-ip6tables
  * net.bridge.bridge-nf-call-iptables
  
  Before kernel 3.18, any brctl command was loading the 'bridge' module
  with the knobs, so at the moment where we reached iptables setup, they
  were always available.
  
  With new 3.18+ kernels, brctl still loads the 'bridge' module, but not
  br_netfilter. So bridge existence no longer guarantees the knobs'
  presence. If we reach _enable_netfilter_for_bridges before the new
  module is loaded, then the code will fail, triggering agent resync. It
  will also fail to enable bridge firewalling on systems where it's
  disabled by default (examples of those systems are most if not all Red
  Hat/Fedora based systems), making security groups completely
  ineffective.
  
  Systems that don't override default settings for those knobs would work
  fine except for this exception in the log file and agent resync. This is
  because the first attempt to add an iptables rule using the 'physdev' module
  (-m physdev) will trigger the kernel module loading. In theory, we could
  silently swallow missing knobs, and still operate correctly. But on
  second thought, it's quite fragile to rely on that implicit module
  loading. In the case where we can't detect whether firewall is enabled,
  it's better to fail than hope for the best.
  
  An alternative to the proposed path could be trying
  to fix broken deployment, meaning we would need to load the missing
  kernel module on agent startup. It's not even clear whether we can
  assume the operation would be available to us. Even with that, adding a
  rootwrap filter to allow loading code in the kernel sounds quite scary.
  If we would follow the path, we would also hit an issue of
  distinguishing between cases of built-in kernel module vs. modular one.
  A complexity that is probably beyond what Neutron should fix.
  
  The patch introduces a sanity check that would fail on missing
  configuration knobs.
  
  DocImpact: document the new deployment requirement in operations guide
  UpgradeImpact: deployers relying on agents fixing wrong sysctl defaults
 will need to make sure bridge firewalling is enabled.
 Also, the kernel module providing sysctl knobs must be
 loaded before starting the agent, otherwise it will fail
 to start.
  
  Changes made to this backport:
 neutron/agent/linux/iptables_firewall.py
 - removed deprecation warning when setting sysctl values to 1 as
   they are planned to be removed in Ocata
 neutron/cmd/sanity/checks.py
 - Re-implemented the flow to check only for presence of sysctl
   options instead of checking the values. Kernel options are set
   in runtime thus the values don't matter.
  
  Depends-On: Id6bfd9595f0772a63d1096ef83ebbb6cd630fafd
  Change-Id: I9137ea017624ac92a05f73863b77f9ee4681bbe7
  Related-Bug: #1622914
  (cherry picked from commit e83a44b96a8e3cd81b7cc684ac90486b283a3507)
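
The sanity check the commit introduces amounts to verifying that the sysctl
knobs are visible at all; a minimal sketch of that idea (not Neutron's actual
implementation — function and parameter names here are assumptions):

```python
import os.path

# sysctl knobs exposed by the br_netfilter module (kernel 3.18+);
# the check cares about their presence, not their values, since the
# values are set at runtime anyway.
KNOBS = (
    "net.bridge.bridge-nf-call-arptables",
    "net.bridge.bridge-nf-call-ip6tables",
    "net.bridge.bridge-nf-call-iptables",
)


def bridge_firewalling_knobs_present(proc_root="/proc/sys"):
    """Return the list of missing knobs (empty when all are present)."""
    missing = []
    for knob in KNOBS:
        # net.bridge.bridge-nf-call-iptables maps to
        # /proc/sys/net/bridge/bridge-nf-call-iptables
        path = os.path.join(proc_root, *knob.split("."))
        if not os.path.isfile(path):
            missing.append(knob)
    return missing
```

If the returned list is non-empty, br_netfilter is not loaded and the agent
should fail fast rather than hope for implicit module loading.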

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1644535/+subscriptions


[Yahoo-eng-team] [Bug 1635468] Re: iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs

2017-01-31 Thread Boden R
As noted in the commit message, this one impacts the
install/upgrade/deploy guide; so I'm moving it to openstack-manuals for
the appropriate updates.

Note this is for master; there's a separate bug for the newton changes.

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635468

Title:
  iptables: fail to start ovs/linuxbridge agents on missing sysctl
  knobs

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/371523
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit e83a44b96a8e3cd81b7cc684ac90486b283a3507
  Author: Ihar Hrachyshka 
  Date:   Thu Sep 15 21:48:10 2016 +

  iptables: fail to start ovs/linuxbridge agents on missing sysctl knobs
  
  For new kernels (3.18+), bridge module is split into two pieces: bridge
  and br_netfilter. The latter provides firewall support for bridged
  traffic, as well as the following sysctl knobs:
  
  * net.bridge.bridge-nf-call-arptables
  * net.bridge.bridge-nf-call-ip6tables
  * net.bridge.bridge-nf-call-iptables
  
  Before kernel 3.18, any brctl command was loading the 'bridge' module
  with the knobs, so at the moment where we reached iptables setup, they
  were always available.
  
  With new 3.18+ kernels, brctl still loads the 'bridge' module, but not
  br_netfilter. So bridge existence no longer guarantees the knobs'
  presence. If we reach _enable_netfilter_for_bridges before the new
  module is loaded, then the code will fail, triggering agent resync. It
  will also fail to enable bridge firewalling on systems where it's
  disabled by default (examples of those systems are most if not all Red
  Hat/Fedora based systems), making security groups completely
  ineffective.
  
  Systems that don't override default settings for those knobs would work
  fine except for this exception in the log file and agent resync. This is
  because the first attempt to add an iptables rule using the 'physdev' module
  (-m physdev) will trigger the kernel module loading. In theory, we could
  silently swallow missing knobs, and still operate correctly. But on
  second thought, it's quite fragile to rely on that implicit module
  loading. In the case where we can't detect whether firewall is enabled,
  it's better to fail than hope for the best.
  
  An alternative to the proposed path could be trying
  to fix broken deployment, meaning we would need to load the missing
  kernel module on agent startup. It's not even clear whether we can
  assume the operation would be available to us. Even with that, adding a
  rootwrap filter to allow loading code in the kernel sounds quite scary.
  If we would follow the path, we would also hit an issue of
  distinguishing between cases of built-in kernel module vs. modular one.
  A complexity that is probably beyond what Neutron should fix.
  
  The patch introduces a sanity check that would fail on missing
  configuration knobs.
  
  DocImpact: document the new deployment requirement in operations guide
  UpgradeImpact: deployers relying on agents fixing wrong sysctl defaults
 will need to make sure bridge firewalling is enabled.
 Also, the kernel module providing sysctl knobs must be
 loaded before starting the agent, otherwise it will fail
 to start.
  
  Depends-On: Id6bfd9595f0772a63d1096ef83ebbb6cd630fafd
  Change-Id: I9137ea017624ac92a05f73863b77f9ee4681bbe7
  Related-Bug: #1622914

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1635468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631384] Re: New option for num_threads for state change server

2017-01-31 Thread Boden R
As noted in the commit message, this introduces a new config option that
needs to be doc'd in the manuals; moving to openstack-manuals.

Note this is for newton; there's a separate bug for master.

** Changed in: neutron
   Status: Triaged => New

** Changed in: neutron
   Importance: Low => Undecided

** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631384

Title:
  New option for num_threads for state change server

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/379578
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5c1516e1fb07bb3e026a3569396567c3907b31c6
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently the max number of client connections (i.e. greenlets
  spawned at a time) opened at any time by the WSGI server is set
  to 100 via wsgi_default_pool_size [1].

  This configuration may be fine for the neutron API server, but
  with wsgi_default_pool_size (=100) requests the state change
  server puts heavy CPU load on the agent. This server (which runs
  on the agents) needs a smaller value, e.g. half the number of
  CPUs on the agent.

  We use the "ha_keepalived_state_change_server_threads" config
  option to configure the number of threads in the state change
  server instead of wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
  (cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1631384/+subscriptions



[Yahoo-eng-team] [Bug 1656355] Re: The py-modindex for networking-bgpvpn docs is a broken link

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/426781
Committed: 
https://git.openstack.org/cgit/openstack/networking-bgpvpn/commit/?id=119657dbb39450c21dd49a4a2663122441b6aa91
Submitter: Jenkins
Branch: master

commit 119657dbb39450c21dd49a4a2663122441b6aa91
Author: Boden R 
Date:   Mon Jan 30 07:32:32 2017 -0700

Remove doc modindex ref

Sphinx module index docs won't be generated unless a
module-level docstring is included (e.g. [1]) with class refs.
As we don't include such module-level refs in this project,
nothing is generated and thus no py-modindex.html
is created. This results in a dead link in our devref (see
the bug report).

This change removes the modindex ref from our devref's
index to account for this fact. In the future if we wish to
add module documentation to support generation of
modindex we can add the ref back into our index.

[1] https://github.com/openstack/neutron/blob/
master/neutron/neutron_plugin_base_v2.py#L19

Change-Id: I2c7f9a58d398033cf4a8029d16b430fe34fcc56e
Closes-Bug: #1656355
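
A module-level docstring of the kind the commit refers to looks like the
following (the module content and class name here are hypothetical, mirroring
the neutron example linked at [1]):

```python
"""Hypothetical driver module for networking-bgpvpn.

Sphinx only adds a module to py-modindex.html when the module carries a
module-level docstring like this one; without it, no module index page
is generated and a "Module Index" link in the devref 404s.
"""


class BGPVPNDriverStub(object):
    """Stub class; only the module-level docstring above matters here."""
```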


** Changed in: bgpvpn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656355

Title:
  The py-modindex for networking-bgpvpn docs is a broken link

Status in networking-bgpvpn:
  Fix Released
Status in neutron:
  In Progress

Bug description:
  The Module Index link on this page:

  http://docs.openstack.org/developer/networking-bgpvpn/

  Is giving a 404 error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1656355/+subscriptions



[Yahoo-eng-team] [Bug 1659995] Re: stop using per-user id settings in security_compliance

2017-01-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/424220
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9844fa1e264fe52276d9bd804b26b518ef3aaa24
Submitter: Jenkins
Branch: master

commit 9844fa1e264fe52276d9bd804b26b518ef3aaa24
Author: Morgan Fainberg 
Date:   Mon Jan 23 08:26:34 2017 -0800

Create user option `ignore_lockout_failure_attempts`

Rather than an option list in keystone.conf, leverage the
per-user options available via REST APIs, to define whether
a user is exempt from being disabled upon too many failed
password attempts.

Closes-Bug: 1659995

Co-Authored-By: "Steve Martinelli "
Change-Id: I6999343314a9e11a82664cd0db6e56047084fa76
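
The per-user options mentioned above are managed through the keystone v3
users API (`PATCH /v3/users/{user_id}`) rather than keystone.conf; a minimal
sketch of building the request body (the helper name is an assumption, the
payload shape follows the v3 user-options API):

```python
import json


def lockout_exemption_patch_body(exempt=True):
    """JSON body for PATCH /v3/users/{user_id} toggling the per-user option.

    With ignore_lockout_failure_attempts set, the user is exempt from
    being disabled after too many failed password attempts, replacing
    the old lockout_ignored_user_ids list in keystone.conf.
    """
    return json.dumps(
        {"user": {"options": {"ignore_lockout_failure_attempts": exempt}}}
    )
```

Because the option lives on the user resource, updating it takes effect
without restarting keystone, which was the core complaint in this bug.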


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1659995

Title:
  stop using per-user id settings in security_compliance

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  `password_expires_ignore_user_ids` is a list option in
  [security_compliance], but this is impractical since any update would
  require a restart of keystone.

  instead, let's use the per-resource "options" available in the identity
  database.

  also, `lockout_ignored_user_ids` is another config option we should
  remove, but no need to deprecate it since it was introduced in ocata
  (and will be removed in ocata)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1659995/+subscriptions



[Yahoo-eng-team] [Bug 1660612] [NEW] gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial times out on execution

2017-01-31 Thread Jakub Libosvar
Public bug reported:

It took a bit over an hour to run tempest with the linux bridge agent, and
then the job was terminated:

http://logs.openstack.org/47/416647/4/check/gate-tempest-dsvm-neutron-
linuxbridge-ubuntu-
xenial/7d1d4e5/console.html#_2017-01-30_17_51_50_343819

e-r-q:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name
%3Agate-tempest-dsvm-neutron-linuxbridge-ubuntu-
xenial%20AND%20message%3A%5C%22Killed%5C%22%20AND%20message%3A%5C%22timeout%20-s%209%5C%22%20AND%20tags%3Aconsole

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1660612

Title:
  gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial times out on
  execution

Status in neutron:
  New

Bug description:
  It took a bit over an hour to run tempest with the linux bridge agent, and
  then the job was terminated:

  http://logs.openstack.org/47/416647/4/check/gate-tempest-dsvm-neutron-
  linuxbridge-ubuntu-
  xenial/7d1d4e5/console.html#_2017-01-30_17_51_50_343819

  e-r-q:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name
  %3Agate-tempest-dsvm-neutron-linuxbridge-ubuntu-
  
xenial%20AND%20message%3A%5C%22Killed%5C%22%20AND%20message%3A%5C%22timeout%20-s%209%5C%22%20AND%20tags%3Aconsole

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1660612/+subscriptions



[Yahoo-eng-team] [Bug 1660596] Re: ValueError: Expecting property name enclosed in double quotes

2017-01-31 Thread A.Ojea
Fixed; it was a problem with the policy.json.

Once I replaced the modified file with the original one, it started to work.

https://raw.githubusercontent.com/openstack/keystone/stable/mitaka/etc/policy.json


The error should be handled better; it took me many hours to pinpoint the
problem.
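
The JSON decoder already reports the offending position (the traceback below
shows "line 202 column 1"); a quick way to validate a policy.json before
restarting keystone — a hedged sketch, not part of keystone itself:

```python
import json


def check_policy_json(text):
    """Return None if text is valid JSON, else 'line:column message'."""
    try:
        json.loads(text)
    except json.JSONDecodeError as exc:
        # JSONDecodeError carries the exact position of the syntax error,
        # e.g. a single-quoted or unquoted property name.
        return "%d:%d %s" % (exc.lineno, exc.colno, exc.msg)
    return None
```

Running this over a broken policy file points straight at the bad line
instead of surfacing only at RBAC enforcement time.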

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1660596

Title:
  ValueError: Expecting property name enclosed in double quotes

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  After creating some users with an script, keystone stopped to work
  because of the following error

  2017-01-31 20:29:11.111462 2017-01-31 20:29:11.110 39594 DEBUG 
keystone.middleware.auth [req-5ea06271-928b-4f50-a5d6-aa60ab642852 
ebaefe56b50f40e4a884e0ac713cfdc2 16c3015d618542c6bba0d1abd576b5b5 - default 
default] RBAC: auth_context: {'is_delegated_auth': False, 'access_token_id': 
None, 'user_id': u'ebaefe56b50f40e4a884e0ac713cfdc2', 'roles': [u'admin'], 
'user_domain_id': u'default', 'trustee_id': None, 'trustor_id': None, 
'consumer_id': None, 'token': , 'project_id': 
u'16c3015d618542c6bba0d1abd576b5b5', 'trust_id': None, 'project_domain_id': 
u'default'} process_request 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:221
  2017-01-31 20:29:11.114669 2017-01-31 20:29:11.114 39594 INFO 
keystone.common.wsgi [req-5ea06271-928b-4f50-a5d6-aa60ab642852 
ebaefe56b50f40e4a884e0ac713cfdc2 16c3015d618542c6bba0d1abd576b5b5 - default 
default] GET http://192.168.200.2:35357/v3/auth/tokens
  2017-01-31 20:29:11.115339 2017-01-31 20:29:11.114 39594 DEBUG 
keystone.common.controller [req-5ea06271-928b-4f50-a5d6-aa60ab642852 
ebaefe56b50f40e4a884e0ac713cfdc2 16c3015d618542c6bba0d1abd576b5b5 - default 
default] RBAC: Authorizing identity:validate_token() 
_build_policy_check_credentials 
/usr/lib/python2.7/dist-packages/keystone/common/controller.py:80
  2017-01-31 20:29:11.115886 2017-01-31 20:29:11.115 39594 DEBUG 
keystone.common.controller [req-5ea06271-928b-4f50-a5d6-aa60ab642852 
ebaefe56b50f40e4a884e0ac713cfdc2 16c3015d618542c6bba0d1abd576b5b5 - default 
default] RBAC: using auth context from the request environment 
_build_policy_check_credentials 
/usr/lib/python2.7/dist-packages/keystone/common/controller.py:85
  2017-01-31 20:29:11.127038 2017-01-31 20:29:11.126 39594 DEBUG 
keystone.policy.backends.rules [req-5ea06271-928b-4f50-a5d6-aa60ab642852 
ebaefe56b50f40e4a884e0ac713cfdc2 16c3015d618542c6bba0d1abd576b5b5 - default 
default] enforce identity:validate_token: {'is_delegated_auth': False, 
'access_token_id': None, 'user_id': u'ebaefe56b50f40e4a884e0ac713cfdc2', 
'roles': [u'admin'], 'user_domain_id': u'default', 'trustee_id': None, 
'trustor_id': None, 'consumer_id': None, 'token': , 'project_id': u'16c3015d618542c6bba0d1abd576b5b5', 'trust_id': 
None, 'project_domain_id': u'default'} enforce 
/usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py:76
  2017-01-31 20:29:11.129622 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi [req-5ea06271-928b-4f50-a5d6-aa60ab642852 
ebaefe56b50f40e4a884e0ac713cfdc2 16c3015d618542c6bba0d1abd576b5b5 - default 
default] Expecting property name enclosed in double quotes: line 202 column 1 
(char 9768)
  2017-01-31 20:29:11.129639 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi Traceback (most recent call last):
  2017-01-31 20:29:11.129645 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 249, in 
__call__
  2017-01-31 20:29:11.129650 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi result = method(context, **params)
  2017-01-31 20:29:11.129654 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 179, in 
inner
  2017-01-31 20:29:11.129659 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi utils.flatten_dict(policy_dict))
  2017-01-31 20:29:11.129663 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py", line 77, 
in enforce
  2017-01-31 20:29:11.129668 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi enforce(credentials, action, target)
  2017-01-31 20:29:11.129672 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/policy/backends/rules.py", line 69, 
in enforce
  2017-01-31 20:29:11.129677 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi return _ENFORCER.enforce(action, target, credentials, 
**extra)
  2017-01-31 20:29:11.129681 2017-01-31 20:29:11.127 39594 ERROR 
keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/oslo_policy/policy.py", line 540, in enforce
  2017-01-31 20:29:11.129685 
