[Yahoo-eng-team] [Bug 1551163] Re: confusion on security group rules on icmp setting

2016-02-29 Thread yalei wang
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551163

Title:
  confusion on security group rules on icmp setting

Status in neutron:
  Invalid

Bug description:
  ---
  Request body: {u'security_group_rule': {u'remote_group_id': None, 
u'direction': u'ingress', u'protocol': u'icmp', u'ethertype': u'IPv4', 
u'port_range_max': None, u'security_group_id': 
u'b2f57cd6-e466-4e92-9670-571334188394', u'port_range_min': None, 
u'remote_ip_prefix': u'0.0.0.0/0'}} from (pid=13486) prepare_request_body 
/opt/stack/neutron/neutron/api/v2/base.py:654

  will create rules

  0 0 RETURN icmp --  *  *   0.0.0.0/0   0.0.0.0/0   icmptype 0 code 0


  --

  Request body: {u'security_group_rule': {u'remote_group_id': None,
  u'direction': u'ingress', u'protocol': u'icmp', u'remote_ip_prefix':
  u'0.0.0.0/0', u'port_range_max': u'0', u'security_group_id': u
  '2a5124cf-527d-4812-8d1a-17b78dee0cea', u'port_range_min': u'0',
  u'ethertype': u'IPv4'}} from (pid=13486) prepare_request_body
  /opt/stack/neutron/neutron/api/v2/base.py:654

  will create rules
  184 RETURN icmp --  *  *   0.0.0.0/0   0.0.0.0/0

  
  The difference is the input argument in the request: 'port_range_max' is
  '0' in one case and None in the other. I guess it's a bug; we need a clear
  definition of this argument in the API.
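
  The ambiguity can be illustrated with a small sketch. This is an
  illustrative helper, not neutron's actual code; for ICMP rules neutron
  reuses port_range_min as the ICMP type and port_range_max as the ICMP
  code, and the bug is about whether 0 and None should mean the same thing:

```python
def icmp_match_args(port_range_min, port_range_max):
    """Map an ICMP security group rule's port range fields to iptables
    match arguments (illustrative helper, not neutron's actual code)."""
    if port_range_min is None and port_range_max is None:
        # No type/code given: the rule should match all ICMP traffic,
        # not "icmptype 0 code 0" as in the first request above.
        return []
    if port_range_min is None:
        # A code without a type is meaningless; the API should reject it.
        raise ValueError("ICMP code given without ICMP type")
    if port_range_max is None:
        # Type only: match every code of that type.
        return ['--icmp-type', str(port_range_min)]
    # Explicit type and code; 0/0 (echo reply) is a legitimate request.
    return ['--icmp-type', '%s/%s' % (port_range_min, port_range_max)]
```

  Under this reading, None/None and 0/0 are deliberately different rules,
  which is the kind of clear definition the report asks for.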

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551593] [NEW] functional test failures caused by failure to setup OVS bridge

2016-02-29 Thread Kevin Benton
Public bug reported:

Right now we get random functional test failures, and it seems to be
related to the fact that we only log an error and continue when a flow
fails to be inserted into a bridge:
http://logs.openstack.org/90/283790/10/check/gate-neutron-dsvm-
functional/ab54e0a/logs/neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_doesnt_block_ipv6_native_.log.txt.gz#_2016-03-01_01_47_17_903

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551593

Title:
  functional test failures caused by failure to setup OVS bridge

Status in neutron:
  New

Bug description:
  Right now we get random functional test failures, and it seems to be
  related to the fact that we only log an error and continue when a
  flow fails to be inserted into a bridge:
  http://logs.openstack.org/90/283790/10/check/gate-neutron-dsvm-
  
functional/ab54e0a/logs/neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_doesnt_block_ipv6_native_.log.txt.gz#_2016-03-01_01_47_17_903

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551593/+subscriptions



[Yahoo-eng-team] [Bug 1551583] [NEW] Updating the segmentation id of a network fails

2016-02-29 Thread Zhigang Li
Public bug reported:

Users have no way to change the segmentation id of a network whose
network_type is vlan once it has been created.

When trying to update the segmentation id of a network, such as "neutron
net-update --provider:segmentation_id=1020 my_network", a prompt tells us
that "Invalid input for operation: Plugin does not support updating
provider attributes."

That means if users change their networking scheme, they have to delete
all the VMs and ports and rebuild the network with a new segmentation id.
That is too hard for the user. In some cases the user is not allowed to
delete the VMs while they are in use.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551583

Title:
  Updating the segmentation id of a network fails

Status in neutron:
  New

Bug description:
  Users have no way to change the segmentation id of a network whose
  network_type is vlan once it has been created.

  When trying to update the segmentation id of a network, such as "neutron
  net-update --provider:segmentation_id=1020 my_network", a prompt tells
  us that "Invalid input for operation: Plugin does not support updating
  provider attributes."

  That means if users change their networking scheme, they have to delete
  all the VMs and ports and rebuild the network with a new segmentation
  id. That is too hard for the user. In some cases the user is not allowed
  to delete the VMs while they are in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551583/+subscriptions



[Yahoo-eng-team] [Bug 1551542] [NEW] need to check if a plugin's start rpc listeners is supported before trying to start it

2016-02-29 Thread Brandon Logan
Public bug reported:

This showed up on a new service plugin that did not need to start rpc
workers.  It did not implement the start_rpc_listeners method, so the
base class raises NotImplementedError.

The neutron.service.RpcWorker.start method should check if the plugin
supports it by using the plugin.rpc_workers_supported method.

https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L389-L402


2016-03-01 11:35:19.983 ERROR oslo_service.service [-] Error starting thread.
2016-03-01 11:35:19.983 TRACE oslo_service.service Traceback (most recent call last):
2016-03-01 11:35:19.983 TRACE oslo_service.service   File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 680, in run_service
2016-03-01 11:35:19.983 TRACE oslo_service.service     service.start()
2016-03-01 11:35:19.983 TRACE oslo_service.service   File "/opt/stack/neutron/neutron/service.py", line 142, in start
2016-03-01 11:35:19.983 TRACE oslo_service.service     servers = getattr(plugin, self.start_listeners_method)()
2016-03-01 11:35:19.983 TRACE oslo_service.service   File "/opt/stack/neutron/neutron/neutron_plugin_base_v2.py", line 377, in start_rpc_listeners
2016-03-01 11:35:19.983 TRACE oslo_service.service     raise NotImplementedError()
2016-03-01 11:35:19.983 TRACE oslo_service.service NotImplementedError
2016-03-01 11:35:19.983 TRACE oslo_service.service

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551542

Title:
  need to check if a plugin's start rpc listeners is supported before
  trying to start it

Status in neutron:
  New

Bug description:
  This showed up on a new service plugin that did not need to start rpc
  workers.  It did not implement the start_rpc_listeners method, so the
  base class raises NotImplementedError.

  The neutron.service.RpcWorker.start method should check if the plugin
  supports it by using the plugin.rpc_workers_supported method.

  
https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L389-L402

  
  2016-03-01 11:35:19.983 ERROR oslo_service.service [-] Error starting thread.
  2016-03-01 11:35:19.983 TRACE oslo_service.service Traceback (most recent call last):
  2016-03-01 11:35:19.983 TRACE oslo_service.service   File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 680, in run_service
  2016-03-01 11:35:19.983 TRACE oslo_service.service     service.start()
  2016-03-01 11:35:19.983 TRACE oslo_service.service   File "/opt/stack/neutron/neutron/service.py", line 142, in start
  2016-03-01 11:35:19.983 TRACE oslo_service.service     servers = getattr(plugin, self.start_listeners_method)()
  2016-03-01 11:35:19.983 TRACE oslo_service.service   File "/opt/stack/neutron/neutron/neutron_plugin_base_v2.py", line 377, in start_rpc_listeners
  2016-03-01 11:35:19.983 TRACE oslo_service.service     raise NotImplementedError()
  2016-03-01 11:35:19.983 TRACE oslo_service.service NotImplementedError
  2016-03-01 11:35:19.983 TRACE oslo_service.service
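
  The guard the report proposes could look roughly like this. A sketch
  only: the class and attribute names follow the report, and
  rpc_workers_supported is assumed to be a plugin method returning a
  boolean:

```python
class RpcWorker(object):
    """Sketch of neutron.service.RpcWorker with the proposed guard."""

    start_listeners_method = 'start_rpc_listeners'

    def __init__(self, plugin):
        self._plugin = plugin
        self._servers = []

    def start(self):
        # Skip plugins that do not support rpc workers instead of letting
        # NotImplementedError escape and kill the worker thread.
        supported = getattr(self._plugin, 'rpc_workers_supported', None)
        if supported is not None and not supported():
            return
        self._servers = getattr(self._plugin, self.start_listeners_method)()


class PluginWithoutRpc(object):
    """Stand-in for a service plugin that never starts rpc listeners."""

    def rpc_workers_supported(self):
        return False

    def start_rpc_listeners(self):
        raise NotImplementedError()
```

  With this check in place, `RpcWorker(PluginWithoutRpc()).start()` simply
  returns instead of raising the traceback above.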

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551542/+subscriptions



[Yahoo-eng-team] [Bug 1551530] [NEW] With snat disabled on a legacy router, pings to floating IPs are replied to with fixed IPs

2016-02-29 Thread Hong Hui Xiao
Public bug reported:

On my single-node devstack setup, there are 2 VMs. VM1 has no floating IP
assigned; VM2 has a floating IP assigned. From VM1, ping VM2 using the
floating IP. The ping output reports that the replies come from VM2's
fixed IP address. The reply should come from VM2's floating IP address.

VM1: 10.0.0.4
VM2: 10.0.0.3  floating ip:172.24.4.4

$ ping 172.24.4.4 -c 1 -W 1
PING 172.24.4.4 (172.24.4.4): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=3.440 ms

This only happens for a legacy router with snat disabled, and at the
same time VM1 and VM2 are in the same subnet.

Comparing the iptables rules, the following rule is missing when snat is
disabled.

Chain neutron-vpn-agen-snat (1 references)
 pkts bytes target prot opt in out source     destination
  184 SNAT   all  --  *  *   0.0.0.0/0   0.0.0.0/0   mark match ! 0x2/0x ctstate DNAT to:172.24.4.6

This rule SNATs internal traffic to the floating IP. Without this rule,
the packets VM2 sends in reply to VM1 are treated as traffic inside the
subnet, and that traffic does not go through the router. As a result, the
DNAT record in the router namespace does not apply to the reply packets.

The intended fix adds the mentioned iptables rule regardless of whether
snat is enabled. Then the packets VM2 sends in reply to VM1 are destined
to 172.24.4.6 and go through the router namespace, so the DNAT and SNAT
records apply and make things right.

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551530

Title:
  With snat disabled on a legacy router, pings to floating IPs are
  replied to with fixed IPs

Status in neutron:
  New

Bug description:
  On my single-node devstack setup, there are 2 VMs. VM1 has no floating
  IP assigned; VM2 has a floating IP assigned. From VM1, ping VM2 using
  the floating IP. The ping output reports that the replies come from
  VM2's fixed IP address. The reply should come from VM2's floating IP
  address.

  VM1: 10.0.0.4
  VM2: 10.0.0.3  floating ip:172.24.4.4

  $ ping 172.24.4.4 -c 1 -W 1
  PING 172.24.4.4 (172.24.4.4): 56 data bytes
  64 bytes from 10.0.0.3: seq=0 ttl=64 time=3.440 ms

  This only happens for a legacy router with snat disabled, and at the
  same time VM1 and VM2 are in the same subnet.

  Comparing the iptables rules, the following rule is missing when snat
  is disabled.

  Chain neutron-vpn-agen-snat (1 references)
   pkts bytes target prot opt in out source     destination
    184 SNAT   all  --  *  *   0.0.0.0/0   0.0.0.0/0   mark match ! 0x2/0x ctstate DNAT to:172.24.4.6

  This rule SNATs internal traffic to the floating IP. Without this
  rule, the packets VM2 sends in reply to VM1 are treated as traffic
  inside the subnet, and that traffic does not go through the router. As
  a result, the DNAT record in the router namespace does not apply to
  the reply packets.

  The intended fix adds the mentioned iptables rule regardless of
  whether snat is enabled. Then the packets VM2 sends in reply to VM1
  are destined to 172.24.4.6 and go through the router namespace, so the
  DNAT and SNAT records apply and make things right.
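
  For reference, the missing rule in iptables-save form would look
  roughly like the line below. The mark mask is truncated in the output
  above, so the 0xffff here is an assumption, as is the exact match
  syntax:

```
-A neutron-vpn-agen-snat -m mark ! --mark 0x2/0xffff -m conntrack --ctstate DNAT -j SNAT --to-source 172.24.4.6
```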

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551530/+subscriptions



[Yahoo-eng-team] [Bug 1551499] Re: many of fwaas tempest tests are failing

2016-02-29 Thread YAMAMOTO Takashi
Adding neutron because this turned out not to be midonet-specific.
E.g.
http://logs.openstack.org/59/286059/1/experimental/gate-neutron-fwaas-dsvm-tempest/419f22e/

** Summary changed:

- many of fwaas tests are failing
+ many of fwaas tempest tests are failing

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551499

Title:
  many of fwaas tempest tests are failing

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  an example of failure:
  
http://logs.openstack.org/87/199387/18/check/gate-tempest-dsvm-networking-midonet-v2/e3b8cdb/logs/testr_results.html.gz

  2016-02-29 09:03:15.262 | {1} tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_suspend_resume [48.037252s] ... ok
  2016-02-29 09:03:17.290 | {1} tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_create_ebs_image_and_check_boot ... SKIPPED: Skipped because the volume service is not available
  2016-02-29 09:03:17.298 | {1} tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern ... SKIPPED: Skipped because the volume service is not available
  2016-02-29 09:03:20.428 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_show_delete_firewall [1.430725s] ... FAILED
  2016-02-29 09:03:20.749 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_update_delete_firewall_policy [0.315712s] ... FAILED
  2016-02-29 09:03:20.947 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_update_delete_firewall_rule [0.204520s] ... ok
  2016-02-29 09:03:21.622 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_firewall_insertion_mode_add_remove_router [0.652992s] ... FAILED
  2016-02-29 09:03:22.052 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_firewall_insertion_mode_one_firewall_per_router [0.408611s] ... FAILED
  2016-02-29 09:03:22.730 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_firewall_rule_insertion_position_removal_rule_from_policy [0.700395s] ... ok
  2016-02-29 09:03:22.882 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_list_firewall_policies [0.144653s] ... ok
  2016-02-29 09:03:23.018 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_list_firewall_rules [0.126882s] ... ok
  2016-02-29 09:03:23.167 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_show_firewall_policy [0.149603s] ... ok
  2016-02-29 09:03:23.309 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_show_firewall_rule [0.139067s] ... ok
  2016-02-29 09:03:23.787 | {1} neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_update_firewall_policy_audited_attribute [0.476629s] ... ok
  2016-02-29 09:03:33.413 | {3} tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_update_instance_port_admin_state [40.272015s] ... ok

  2016-02-29 09:06:02.773 | {3} tempest.scenario.test_stamp_pattern.TestStampPattern.test_stamp_pattern ... SKIPPED: Skipped until Bug: 1205344 is resolved.
  2016-02-29 09:06:33.274 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_all_disabled_rules(without router insersion) [184.297478s] ... FAILED
  2016-02-29 09:08:28.515 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_all_disabled_rules(with router insersion) [115.168147s] ... FAILED
  2016-02-29 09:09:26.927 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_block_icmp(without router insersion) [58.389020s] ... FAILED
  2016-02-29 09:10:23.515 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_block_icmp(with router insersion) [56.544355s] ... FAILED
  2016-02-29 09:13:19.726 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_block_ip(without router insersion) [176.174231s] ... FAILED
  2016-02-29 09:15:15.579 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_block_ip(with router insersion) [115.822342s] ... FAILED
  2016-02-29 09:18:14.539 | {1} neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas.TestFWaaS.test_firewall_disable_rule(without router insersion) [178.934993s] ... ok
  2016-02-29 09:20:14.151 | {1} 

[Yahoo-eng-team] [Bug 1510234] Re: Heartbeats stop when time is changed

2016-02-29 Thread Davanum Srinivas (DIMS)
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510234

Title:
  Heartbeats stop when time is changed

Status in OpenStack Compute (nova):
  New
Status in oslo.service:
  Confirmed

Bug description:
  Heartbeats stop working when you mess with the system time. If a
  monotonic clock were used, they would continue to work when the system
  time was changed.

  Steps to reproduce:

  1. List the nova services ('nova-manage service list'). Note that the
  'State' for each services is a happy face ':-)'.

  2. Move the time ahead (for example 2 hours in the future), and then
  list the nova services again. Note that heartbeats continue to work
  and use the future time (see 'Updated_At').

  3. Revert back to the actual time, and list the nova services again.
  Note that all heartbeats stop, and have a 'State' of 'XXX'.

  4. The heartbeats will start again in 2 hours when the actual time
  catches up to the future time, or if you restart the services.

  5. You'll see a log message like the following when the heartbeats
  stop:

  2015-10-26 17:14:10.538 DEBUG nova.servicegroup.drivers.db [req-
  c41a2ad7-e5a5-4914-bdc8-6c1ca8b224c6 None None] Seems service is down.
  Last heartbeat was 2015-10-26 17:20:20. Elapsed time is -369.461679
  from (pid=13994) is_up
  /opt/stack/nova/nova/servicegroup/drivers/db.py:80

  Here's example output demonstrating the issue:

  http://paste.openstack.org/show/477404/

  See bug #1450438 for more context:

  https://bugs.launchpad.net/oslo.service/+bug/1450438

  Long story short: looping call is using the built-in time rather than
  a  monotonic clock for sleeps.

  
https://github.com/openstack/oslo.service/blob/3d79348dae4d36bcaf4e525153abf74ad4bd182a/oslo_service/loopingcall.py#L122

  Oslo Service: version 0.11
  Nova: master (commit 2c3f9c339cae24576fefb66a91995d6612bb4ab2)
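
  The suggested fix amounts to measuring elapsed time on a monotonic
  clock; a minimal sketch with illustrative names, not nova's actual
  is_up implementation:

```python
import time

def is_up(last_heartbeat, timeout, now=None):
    """Liveness check based on the monotonic clock (illustrative)."""
    if now is None:
        now = time.monotonic()
    # time.monotonic() is unaffected by setting the system time, so the
    # elapsed value can never go negative as in the log message above.
    return (now - last_heartbeat) <= timeout

beat = time.monotonic()
assert is_up(beat, timeout=60.0)
```

  Timestamps from time.monotonic() are only comparable within one
  process, which is fine for in-process heartbeat loops like this one.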

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1510234/+subscriptions



[Yahoo-eng-team] [Bug 1551513] [NEW] Limit multi-byte character input in the key values of extra

2016-02-29 Thread Ryosuke Mizuno
Public bug reported:

Data in the extra field is stored in the form of key=value.

If a key containing multi-byte characters is created using, for example,
the projects API or the users API, the projects and users that hold the
created data can no longer be displayed in the Horizon portal or the CLI.

Therefore, input of multi-byte characters in these keys should be
limited.

Example:

user@ubuntu1404-7065:~$ curl -s \
>  -H "X-Auth-Token: 6d6fe7eab11d45968c4ac1f8e4b559c5" \
>  -H "Content-Type: application/json" \
>  -d '{
> "project": {
> "name": "multibyte-test",
> "テスト" : "test"
> }
> }
> ' \
>  http://localhost:5000/v3/projects | python -mjson.tool

{
"project": {
"description": "",
"domain_id": "default",
"enabled": true,
"id": "d20df48e0f2343cf8c09dc9dbbc69f25",
"is_domain": false,
"links": {
"self": 
"http://localhost:5000/v3/projects/d20df48e0f2343cf8c09dc9dbbc69f25"
},
"name": "multibyte-test",
"parent_id": null,
"\u30c6\u30b9\u30c8": "test"
}
}
user@ubuntu1404-7065:~$ openstack
(openstack) project list
'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1551513

Title:
  Limit multi-byte character input in the key values of extra

Status in OpenStack Identity (keystone):
  New

Bug description:
  Data in the extra field is stored in the form of key=value.

  If a key containing multi-byte characters is created using, for
  example, the projects API or the users API, the projects and users
  that hold the created data can no longer be displayed in the Horizon
  portal or the CLI.

  Therefore, input of multi-byte characters in these keys should be
  limited.

  Example:

  user@ubuntu1404-7065:~$ curl -s \
  >  -H "X-Auth-Token: 6d6fe7eab11d45968c4ac1f8e4b559c5" \
  >  -H "Content-Type: application/json" \
  >  -d '{
  > "project": {
  > "name": "multibyte-test",
  > "テスト" : "test"
  > }
  > }
  > ' \
  >  http://localhost:5000/v3/projects | python -mjson.tool

  {
  "project": {
  "description": "",
  "domain_id": "default",
  "enabled": true,
  "id": "d20df48e0f2343cf8c09dc9dbbc69f25",
  "is_domain": false,
  "links": {
  "self": 
"http://localhost:5000/v3/projects/d20df48e0f2343cf8c09dc9dbbc69f25"
  },
  "name": "multibyte-test",
  "parent_id": null,
  "\u30c6\u30b9\u30c8": "test"
  }
  }
  user@ubuntu1404-7065:~$ openstack
  (openstack) project list
  'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
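
  The CLI failure is a plain encoding error and is easy to reproduce
  (minimal sketch; the key is the one from the request above):

```python
# Minimal reproduction of the CLI failure quoted above.  The extra key
# from the report ("テスト", i.e. U+30C6 U+30B9 U+30C8) cannot pass
# through any code path that encodes it with the ascii codec.
key = u"\u30c6\u30b9\u30c8"

try:
    key.encode("ascii")
    failed = False
except UnicodeEncodeError as exc:
    failed = True
    message = str(exc)

assert failed
assert "ordinal not in range(128)" in message
```

  Rejecting such keys at the API layer (or consistently handling them as
  UTF-8 in the clients) would avoid the crash either way.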

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1551513/+subscriptions



[Yahoo-eng-team] [Bug 1505256] Re: Potential user_id collision between Federated IdPs

2016-02-29 Thread Steve Martinelli
this is resolved via shadow users

** Changed in: keystone
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1505256

Title:
  Potential user_id collision between Federated IdPs

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  User Ids cannot be something specified entirely by the Federation
  providers.  If they are, there are a handful of potential problems:

  1. The userId specified will be too big for the column (varchar 64)
  2. Two different Identity Providers  can provide the same value for user_id

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505256/+subscriptions



[Yahoo-eng-team] [Bug 1551504] [NEW] I/O (PCIe) Based NUMA Scheduling can't really achieve pci numa binding in some cases.

2016-02-29 Thread jinquanni(ZTE)
Public bug reported:

1. version
kilo 2015.1.0, liberty


This bug is based on this BP:
https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling


In the current implementation scheme:


 def _filter_pools_for_numa_cells(pools, numa_cells):
     # Some systems don't report numa node info for pci devices, in
     # that case None is reported in pci_device.numa_node, by adding None
     # to numa_cells we allow assigning those devices to instances with
     # numa topology
     numa_cells = [None] + [cell.id for cell in numa_cells]

If some compute nodes don't report numa node info for pci devices, those
pci devices are regarded as belonging to every numa node by default.

This can lead to a problem: a pci device may not be on the numa node that
the CPU and memory are on. In this way, the real purpose of I/O (PCIe)
based NUMA scheduling is not achieved. Worse, the user will wrongly
believe the pci device is on the numa node that the CPU and memory are
on.

The truth is, there are still many systems that don't report numa node
info for pci devices, so I think this bug needs to be fixed.

** Affects: nova
 Importance: Undecided
 Assignee: jinquanni(ZTE) (ni-jinquan)
 Status: New


** Tags: numa pci

** Changed in: nova
 Assignee: (unassigned) => jinquanni(ZTE) (ni-jinquan)

** Tags added: numa pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551504

Title:
  I/O (PCIe) Based NUMA Scheduling  can't really achieve pci numa
  binding in some cases.

Status in OpenStack Compute (nova):
  New

Bug description:
  1. version
  kilo 2015.1.0, liberty

  
  This bug is based on this BP:
  https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling

  
  In the current implementation scheme:

  

   def _filter_pools_for_numa_cells(pools, numa_cells):
       # Some systems don't report numa node info for pci devices, in
       # that case None is reported in pci_device.numa_node, by adding None
       # to numa_cells we allow assigning those devices to instances with
       # numa topology
       numa_cells = [None] + [cell.id for cell in numa_cells]

  If some compute nodes don't report numa node info for pci devices,
  those pci devices are regarded as belonging to every numa node by
  default.

  This can lead to a problem: a pci device may not be on the numa node
  that the CPU and memory are on. In this way, the real purpose of I/O
  (PCIe) based NUMA scheduling is not achieved. Worse, the user will
  wrongly believe the pci device is on the numa node that the CPU and
  memory are on.

  The truth is, there are still many systems that don't report numa node
  info for pci devices, so I think this bug needs to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1551504/+subscriptions



[Yahoo-eng-team] [Bug 1551498] [NEW] Glance won't check location's data

2016-02-29 Thread Fei Long Wang
Public bug reported:

Currently, when adding new location for an image, Glance doesn't check
the checksum, so we can't grantee every location for an image is bit-
for-bit identical, which is incompatible with the idea of image
immutability.

** Affects: glance
 Importance: Undecided
 Assignee: Fei Long Wang (flwang)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Fei Long Wang (flwang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1551498

Title:
  Glance won't check location's data

Status in Glance:
  New

Bug description:
  Currently, when adding a new location for an image, Glance doesn't
  check the checksum, so we can't guarantee that every location for an
  image is bit-for-bit identical, which is incompatible with the idea of
  image immutability.
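
  The missing check amounts to hashing the bytes behind the new location
  and comparing against the image's recorded checksum (an md5 hex digest
  in Glance at the time of the report); a minimal sketch, not Glance's
  actual API:

```python
import hashlib

def location_matches_checksum(data, expected_checksum):
    """Return True if the location's bytes hash to the image's recorded
    checksum (illustrative; Glance's checksum field is an md5 digest)."""
    return hashlib.md5(data).hexdigest() == expected_checksum

image_bytes = b"fake image payload"
checksum = hashlib.md5(image_bytes).hexdigest()
assert location_matches_checksum(image_bytes, checksum)
assert not location_matches_checksum(b"tampered payload", checksum)
```

  In practice the data would be streamed and hashed chunk by chunk rather
  than read into memory, but the comparison is the same.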

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1551498/+subscriptions



[Yahoo-eng-team] [Bug 1551492] [NEW] Branding: Remove unnecessary Horizon variables

2016-02-29 Thread Diana Whitten
Public bug reported:

There are many variables inside of
openstack_dashboard/static/dashboard/scss/_variables.scss that are
unnecessary or actually prohibit themability.  These should be removed.

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551492

Title:
  Branding: Remove unnecessary Horizon variables

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There are many variables inside of
  openstack_dashboard/static/dashboard/scss/_variables.scss that are
  unnecessary or actually prohibit themability.  These should be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551492/+subscriptions



[Yahoo-eng-team] [Bug 1509500] Re: novaclient stats all files in /usr/bin

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280725
Committed: 
https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=c18ccb1bfae574b4b496c138e9192fc737ed9c20
Submitter: Jenkins
Branch: master

commit c18ccb1bfae574b4b496c138e9192fc737ed9c20
Author: Andrey Kurilin 
Date:   Tue Feb 16 15:35:57 2016 +0200

Add a way to discover only contrib extensions

Several OS projects(cinder, neutron, osc...) use
`novaclient.discover_extensions` for initialization novaclient.client.Client
with novaclient.v2.contrib extensions. In this case, it would be nice to
provide a way to not discover extension via python path an entry-point.

Change-Id: I030f4c55c2795c7f7973f5f12e54b9819c4a5578
Closes-Bug: #1509500


** Changed in: python-novaclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509500

Title:
  novaclient stats all files in /usr/bin

Status in neutron:
  New
Status in python-novaclient:
  Fix Released

Bug description:
  It appears that novaclient is searching Python's sys.path to find
  novaclient's own executable, and a side effect is that an operating
  system security package will log hundreds of errors each time this
  happens.  For example, this stack trace:

/usr/lib/python2.7/site-packages/neutron/manager.py(244)get_plugin()
  -> return weakref.proxy(cls.get_instance().plugin)
/usr/lib/python2.7/site-packages/neutron/manager.py(238)get_instance()
  -> cls._create_instance()
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py(252)inner()
  -> return f(*args, **kwargs)
/usr/lib/python2.7/site-packages/neutron/manager.py(224)_create_instance()
  -> cls._instance = cls()
/usr/lib/python2.7/site-packages/neutron/manager.py(120)__init__()
  -> plugin_provider)

/usr/lib/python2.7/site-packages/neutron/manager.py(157)_get_plugin_instance()
  -> return plugin_class()

/usr/lib/python2.7/site-packages/neutron/quota/resource_registry.py(121)wrapper()
  -> return f(*args, **kwargs)

/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py(145)__init__()
  -> super(Ml2Plugin, self).__init__()

/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py(103)__init__()
  -> self.nova_notifier = nova.Notifier()
/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py(98)__init__()
  -> ext for ext in nova_client.discover_extensions(NOVA_API_VERSION)
  > 
/usr/lib/python2.7/site-packages/novaclient/client.py(724)discover_extensions()
  -> _discover_via_contrib_path(version)

  This stack trace is during neutron server startup, a novaclient call
  is made which results in _discover_via_python_path() being invoked
  here: https://github.com/openstack/python-
  novaclient/blob/master/novaclient/client.py#L723

  This method uses pkgutil.iter_modules() which will search all of
  /usr/bin (among many other places).  An operating system security
  package such as SELinux on RedHat will log hundreds of errors like
  this to /var/log/audit/audit.log:

  type=AVC msg=audit(10/23/2015 15:41:08.766:368903) : avc:  denied  {
  getattr } for  pid=13716 comm=neutron-server path=/usr/bin/virsh
  dev="dm-5" ino=138258059 scontext=system_u:system_r:neutron_t:s0
  tcontext=system_u:object_r:virsh_exec_t:s0 tclass=file

  One error is logged for every searched file in /usr/bin, about 1,300
  messages each time neutron-server restarts on my test system.  This
  generates a huge amount of noise in audit.log.  I have not attempted
  to reproduce this with Ubuntu / AppArmor to verify if the issue is the
  same.

  Is this something the novaclient code would worry about?  Is there
  some way I could submit a patch to fix this?
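  The behavior is easy to reproduce with a small sketch: pkgutil.iter_modules()
  examines every file on each search path to decide whether it is an importable
  module, so pointing it at a directory full of non-Python executables (such as
  /usr/bin) touches every file there. A throwaway directory stands in for
  /usr/bin below.

```python
import os
import pkgutil
import tempfile

# pkgutil.iter_modules() stats every entry on the given paths to decide
# whether it is an importable module; non-module files like "virsh" are
# still examined, which is what triggers the SELinux getattr denials.
with tempfile.TemporaryDirectory() as scan_dir:
    open(os.path.join(scan_dir, "mymod.py"), "w").close()  # a real module
    open(os.path.join(scan_dir, "virsh"), "w").close()     # still stat()ed
    found = [name for _, name, _ in pkgutil.iter_modules([scan_dir])]

print(found)  # ['mymod']
```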

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509500/+subscriptions



[Yahoo-eng-team] [Bug 1551483] [NEW] Radware LBaaS driver sporadically failing unit test

2016-02-29 Thread Evgeny Fedoruk
Public bug reported:

Radware LBaaS driver at https://review.openstack.org/#/c/202147/ 
has a sporadically failing unit test, which has been disabled until it is fixed.

tests.unit.drivers.radware.test_v2_plugin_driver.TestLBaaSDriver.test_build_objects_with_l7

The assert_has_calls function sometimes does not find a call in the mock
calls list, even though it is there. The arrangement of the call's
parameter dict differs between runs, which mock's Call class is built to
handle; however, in this specific test the parameter dict is deeply
nested, which may be related to the problem. Test order is not an issue
since no configuration or globals are shared.

This bug is filed to fix that test.
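One workaround worth trying while the test is fixed: search mock_calls for an
equal call object rather than relying on assert_has_calls. The client/configure
names below are illustrative, not the Radware driver's actual API.

```python
from unittest import mock

# Record a call whose argument is a deeply nested dict.
client = mock.Mock()
client.configure({"pool": {"members": [{"ip": "10.0.0.1"},
                                       {"ip": "10.0.0.2"}]}})

# Build the expected call and look for it directly in mock_calls.
# Dict equality is key-order insensitive, so this comparison is stable.
expected = mock.call.configure({"pool": {"members": [{"ip": "10.0.0.1"},
                                                     {"ip": "10.0.0.2"}]}})
found = expected in client.mock_calls
print(found)  # True
```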

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551483

Title:
  Radware LBaaS driver sporadically failing unit test

Status in neutron:
  New

Bug description:
  Radware LBaaS driver at https://review.openstack.org/#/c/202147/ 
  has a sporadically failing unit test, which has been disabled until it is fixed.

  
tests.unit.drivers.radware.test_v2_plugin_driver.TestLBaaSDriver.test_build_objects_with_l7

  The assert_has_calls function sometimes does not find a call in the
  mock calls list, even though it is there. The arrangement of the call's
  parameter dict differs between runs, which mock's Call class is built
  to handle; however, in this specific test the parameter dict is deeply
  nested, which may be related to the problem. Test order is not an issue
  since no configuration or globals are shared.

  This bug is filed to fix that test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551483/+subscriptions



[Yahoo-eng-team] [Bug 1551473] [NEW] duplicate rbac policy internal server error

2016-02-29 Thread Kevin Benton
Public bug reported:

creating a duplicate RBAC policy results in an internal server error:


administrator@15:21:42:/opt/stack/neutron$ neutron rbac-create 
1da63901-c3b1-4bb0-9e83-d03a8016b627 --action access_as_shared --type network 
--target-tenant 44
Created a new rbac_policy:
+---+--+
| Field | Value|
+---+--+
| action| access_as_shared |
| id| 1a2e3faa-9e95-4e0c-8044-5d9845aef954 |
| object_id | 1da63901-c3b1-4bb0-9e83-d03a8016b627 |
| object_type   | network  |
| target_tenant | 44   |
| tenant_id | ac167722a61943d792bf7dc50c01dca6 |
+---+--+
administrator@15:21:53:/opt/stack/neutron$ neutron rbac-create 
1da63901-c3b1-4bb0-9e83-d03a8016b627 --action access_as_shared --type network 
--target-tenant 44
Request Failed: internal server error while processing your request.


LOGS:

2016-02-29 15:21:55.573 27994 ERROR neutron.api.v2.resource
DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry
'access_as_shared-1da63901-c3b1-4bb0-9e83-d03a8016b627-44' for key
'uniq_networkrbacs0tenant_target0object_id0action'") [SQL: u'INSERT INTO
networkrbacs (tenant_id, id, target_tenant, action, object_id) VALUES
(%(tenant_id)s, %(id)s, %(target_tenant)s, %(action)s, %(object_id)s)']
[parameters: {'action': u'access_as_shared', 'tenant_id':
u'ac167722a61943d792bf7dc50c01dca6', 'target_tenant': u'44', 'id':
'd0176c95-981a-4a35-bc3b-407bcf2d0de8', 'object_id':
u'1da63901-c3b1-4bb0-9e83-d03a8016b627'}]
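A likely shape of the fix is to catch the duplicate-key error and translate it
into a 409 Conflict instead of letting it bubble up as a 500. The sketch below
uses sqlite3 as a stand-in for the real database layer; the table and function
names are illustrative, not neutron's actual code.

```python
import sqlite3

# In-memory stand-in for the networkrbacs table and its unique constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE networkrbacs ("
             "object_id TEXT, target_tenant TEXT, action TEXT, "
             "UNIQUE (target_tenant, object_id, action))")

def create_rbac_policy(object_id, target_tenant, action):
    try:
        with conn:
            conn.execute("INSERT INTO networkrbacs VALUES (?, ?, ?)",
                         (object_id, target_tenant, action))
        return 201
    except sqlite3.IntegrityError:
        return 409  # duplicate policy: a client error, not a server error

print(create_rbac_policy("1da63901", "44", "access_as_shared"))  # 201
print(create_rbac_policy("1da63901", "44", "access_as_shared"))  # 409
```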

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551473

Title:
  duplicate rbac policy internal server error

Status in neutron:
  New

Bug description:
  creating a duplicate RBAC policy results in an internal server error:

  
  administrator@15:21:42:/opt/stack/neutron$ neutron rbac-create 
1da63901-c3b1-4bb0-9e83-d03a8016b627 --action access_as_shared --type network 
--target-tenant 44
  Created a new rbac_policy:
  +---+--+
  | Field | Value|
  +---+--+
  | action| access_as_shared |
  | id| 1a2e3faa-9e95-4e0c-8044-5d9845aef954 |
  | object_id | 1da63901-c3b1-4bb0-9e83-d03a8016b627 |
  | object_type   | network  |
  | target_tenant | 44   |
  | tenant_id | ac167722a61943d792bf7dc50c01dca6 |
  +---+--+
  administrator@15:21:53:/opt/stack/neutron$ neutron rbac-create 
1da63901-c3b1-4bb0-9e83-d03a8016b627 --action access_as_shared --type network 
--target-tenant 44
  Request Failed: internal server error while processing your request.


  
  LOGS:

  2016-02-29 15:21:55.573 27994 ERROR neutron.api.v2.resource
  DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate
  entry 'access_as_shared-1da63901-c3b1-4bb0-9e83-d03a8016b627-44' for
  key 'uniq_networkrbacs0tenant_target0object_id0action'") [SQL:
  u'INSERT INTO networkrbacs (tenant_id, id, target_tenant, action,
  object_id) VALUES (%(tenant_id)s, %(id)s, %(target_tenant)s,
  %(action)s, %(object_id)s)'] [parameters: {'action':
  u'access_as_shared', 'tenant_id': u'ac167722a61943d792bf7dc50c01dca6',
  'target_tenant': u'44', 'id': 'd0176c95-981a-4a35-bc3b-407bcf2d0de8',
  'object_id': u'1da63901-c3b1-4bb0-9e83-d03a8016b627'}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551473/+subscriptions



[Yahoo-eng-team] [Bug 1551455] [NEW] Branding: Horizon workflow cancel button doesn't inherit from theme

2016-02-29 Thread Diana Whitten
Public bug reported:

Branding: Horizon workflow cancel button doesn't inherit from theme

Cancel buttons' styles are largely hard-coded.

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
   Status: New => Confirmed

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

** Changed in: horizon
Milestone: None => mitaka-3

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551455

Title:
  Branding: Horizon workflow cancel button doesn't inherit from theme

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Branding: Horizon workflow cancel button doesn't inherit from theme

  Cancel buttons' styles are largely hard-coded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551455/+subscriptions



[Yahoo-eng-team] [Bug 1551189] Re: Tests don't work on leapdays

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/285987
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=7c6556ce18eb170fd8102168b33a50272b22ec5a
Submitter: Jenkins
Branch:master

commit 7c6556ce18eb170fd8102168b33a50272b22ec5a
Author: Sean Dague 
Date:   Mon Feb 29 06:31:40 2016 -0500

Make keystone tests work on leap years

A bunch of tests have to do with fast forwarding time to figure out
what happens with expirations. That is done with a naive .replace on
year. Which is fine, except on Feb 29th, because the python code does
care if something is a valid datetime. Previously 2030 and 2031 were
used as magic years. Neither is a leap year. Fast forwarding to that
year on Feb 29th is a fatal python error.

Narrowly fix this by using 2028 and 2032 instead. A better solution
should probably be implemented long-term, but until this is landed
keystone is blocked.

Change-Id: If13e53bd5d4ace2a53a245d74166e4f3b1c34d4a
Closes-Bug: #1551189


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1551189

Title:
  Tests don't work on leapdays

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  2016-02-29 08:57:08.024 | Captured traceback:
  2016-02-29 08:57:08.025 | ~~~
  2016-02-29 08:57:08.025 | Traceback (most recent call last):
  2016-02-29 08:57:08.042 |   File "keystone/tests/unit/test_backend.py", line , in test_list_trust_by_trustor
  2016-02-29 08:57:08.042 | self.create_sample_trust(uuid.uuid4().hex)
  2016-02-29 08:57:08.042 |   File "keystone/tests/unit/test_backend.py", line 5475, in create_sample_trust
  2016-02-29 08:57:08.042 | expires_at = datetime.datetime.utcnow().replace(year=2031)
  2016-02-29 08:57:08.042 | ValueError: day is out of range for month

  
  The keystone test code assumes all months have the same length in all
  years. That is not the case.
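  The failure can be reproduced deterministically with a fixed Feb 29 date:
  a naive .replace(year=...) is invalid when the target year is not a leap
  year, because Feb 29 does not exist there.

```python
import datetime

def fast_forward(now, year):
    # The naive year replacement the tests used.
    return now.replace(year=year)

leap_day = datetime.datetime(2016, 2, 29, 8, 57)
try:
    fast_forward(leap_day, 2031)            # 2031 is not a leap year
    raised = False
except ValueError:
    raised = True

print(raised)                               # True: Feb 29 missing in 2031
print(fast_forward(leap_day, 2032).year)    # 2032 is a leap year, so it works
```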

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1551189/+subscriptions



[Yahoo-eng-team] [Bug 1470437] Re: ImageCacheManager raises Permission denied error on nova compute in race condition

2016-02-29 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470437

Title:
  ImageCacheManager raises Permission denied error on nova compute in
  race condition

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  ImageCacheManager raises Permission denied error on nova compute in
  race condition

  While creating an instance snapshot nova calls guest.launch method
  from libvirt driver which changes the base file permissions and
  updates base file user from openstack to libvirt-qemu (in case of
  qcow2 image backend). In race condition when ImageCacheManager is
  trying to update last access time of this base file and guest.launch
  is called by instance snapshot just before updating the access time,
  ImageCacheManager raise Permission denied error in nova compute for
  os.utime().

  Steps to reproduce:
  1. Configure image_cache_manager_interval=120 in nova.conf and use qcow2 
image backend.
  2. Add a sleep for 60 sec in _handle_base_image method of libvirt.imagecache 
just before calling os.utime().
  3. Restart nova services.
  4. Create an instance using image.
  $ nova boot --image 5e1659aa-6d38-44e8-aaa3-4217337436c0 --flavor 1 instance-1
  5. Check that instance is in active state.
  6. Go to the n-cpu screen and check imagecache manager logs at the point it 
waits to execute sleep statement added in step #2.
  7. Send instance snapshot request when imagecache manger is waiting to 
execute sleep.
  $ nova image-create 19c7900b-73d5-4c2e-b129-5e2a6b13f396 instance-1-snap
  8. instance snapshot request updates the base file owner to libvirt-qemu by 
calling guest.launch method from libvirt driver.
  9. Now when imagecache manger comes out from sleep and executes os.utime it 
raise following Permission denied error in nova compute.

  2015-07-01 01:51:46.794 ERROR nova.openstack.common.periodic_task [req-a03fa45f-ffb9-48dd-8937-5b0414c6864b None None] Error during ComputeManager._run_image_cache_manager_pass
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/openstack/common/periodic_task.py", line 224, in run_periodic_tasks
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task task(self, context)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/compute/manager.py", line 6177, in _run_image_cache_manager_pass
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task self.driver.manage_image_cache(context, filtered_instances)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6252, in manage_image_cache
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task self.image_cache_manager.update(context, all_instances)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/imagecache.py", line 668, in update
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task self._age_and_verify_cached_images(context, all_instances, base_dir)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/imagecache.py", line 598, in _age_and_verify_cached_images
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task self._handle_base_image(img, base_file)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/imagecache.py", line 570, in _handle_base_image
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task os.utime(base_file, None)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task OSError: [Errno 13] Permission denied: '/opt/stack/data/nova/instances/_base/8d2c340dcce68e48a75457b1e91457feed27aef5'
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task

  Expected result: guest.launch should not update the base file
  permissions and owner to libvirt-qemu.  Base file owner should remain
  unchanged.

  Actual result: Libvirt is updating the base file owner which causes
  permission issues in nova.
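  One defensive pattern for the cache manager side (illustrative, not
  necessarily the merged fix) is to tolerate losing the race around
  os.utime rather than letting the whole periodic task die:

```python
import errno
import os

def touch_base_file(base_file):
    """Update the base file's access time, tolerating a racing
    permission/ownership change or removal of the file."""
    try:
        os.utime(base_file, None)
    except OSError as exc:
        if exc.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
            return False  # lost the race: skip this base file this pass
        raise
    return True

print(touch_base_file("/nonexistent/_base/deadbeef"))  # False, no traceback
```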

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470437/+subscriptions



[Yahoo-eng-team] [Bug 1256838] Re: Race between imagebackend and imagecache

2016-02-29 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1256838

Title:
  Race between imagebackend and imagecache

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  After ImageCacheManager judges a base image is not used recently and
  marks it as to be removed, there is some time before the image is
  actually removed. So if an instance using the image is launched during
  the time, the image will be removed unfortunately.
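  A common fix pattern for such a mark-then-remove race (illustrative, not
  necessarily what nova merged) is to re-check the image's modification
  time immediately before removal, so a launch in the window since marking
  keeps the base image alive:

```python
import os
import tempfile

def remove_if_still_unused(path, marked_at):
    # Re-check just before deletion: an instance launch since marking
    # bumps the file's mtime, so the image must be kept.
    if os.path.getmtime(path) > marked_at:
        return False  # image was used after being marked: keep it
    os.unlink(path)
    return True

fd, path = tempfile.mkstemp()
os.close(fd)
marked_at = os.path.getmtime(path) - 10   # pretend we marked it earlier...
os.utime(path, None)                      # ...then an instance touched it
print(remove_if_still_unused(path, marked_at))  # False: image survives
os.unlink(path)
```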

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1256838/+subscriptions



[Yahoo-eng-team] [Bug 1551439] [NEW] Auto-loaded service plugins defined in conf

2016-02-29 Thread Boden R
Public bug reported:

In [1] support was added to auto-start service plugins in neutron core.
As part of this change, the default auto-started plugins are defined in
the plugin's constants file [2]. While this works, it would be beneficial
to provide a neutron.conf property to specify the default auto-started
service plugins. A conf property allows admins to choose which plugins to
auto-start, which will ease operations when additional plugins are added
down the road.


[1] https://review.openstack.org/#/c/273439/
[2] 
https://review.openstack.org/#/c/273439/10/neutron/plugins/common/constants.py
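A minimal sketch of the proposal, using stdlib configparser in place of
oslo.config; the auto_service_plugins option name and values are assumptions,
not an existing neutron option:

```python
import configparser

# Today's hard-coded fallback list (illustrative).
HARDCODED_DEFAULT = ["tag"]

conf = configparser.ConfigParser()
conf.read_string("""
[DEFAULT]
auto_service_plugins = tag,timestamp
""")

# Read the plugins to auto-start from the conf, falling back to the
# hard-coded default when the option is absent.
raw = conf.get("DEFAULT", "auto_service_plugins",
               fallback=",".join(HARDCODED_DEFAULT))
plugins = [p.strip() for p in raw.split(",") if p.strip()]
print(plugins)  # ['tag', 'timestamp']
```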

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551439

Title:
  Auto-loaded service plugins defined in conf

Status in neutron:
  New

Bug description:
  In [1] support was added to auto-start service plugins in neutron
  core. As part of this change, the default auto-started plugins is
  defined in the plugin's constants file [2]. While this surely works,
  it would be beneficial to provide a neutron.conf property to specify
  the default auto-started service plugins. Providing a conf property
  allows admins to specify the default plugins to autostart which will
  ease ops when such additional plugins are added down the road.


  [1] https://review.openstack.org/#/c/273439/
  [2] 
https://review.openstack.org/#/c/273439/10/neutron/plugins/common/constants.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551439/+subscriptions



[Yahoo-eng-team] [Bug 1129748] Re: image files in _base should not be world-readable

2016-02-29 Thread Tristan Cacqueray
The /var/lib/nova/instances directory permissions are likely a packaging
issue. I don't know how the disk image mode bits are set, but at least the
disk info file is explicitly written as 644 by
nova/virt/libvirt/imagebackend.py.

Anyway, I closed the OSSA task since a multi-user system is not a realistic
threat model for an OpenStack deployment.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1129748

Title:
  image files in _base should not be world-readable

Status in OpenStack Compute (nova):
  Opinion
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Already public in https://bugzilla.redhat.com/show_bug.cgi?id=896085 ,
  so probably no point making this private.  But I checked the security
  vulnerability box anyway so someone else can decide.

  We create image files in /var/lib/nova/instances/_base with default
  permissions, usually 644.  It would be better to not make the image
  files world-readable, in case they contain private data.
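  A minimal sketch of the suggested behavior: create cached image files
  without the world-readable bit, instead of relying on the process umask
  (which typically yields 0644):

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()   # mkstemp already creates files as 0600
os.close(fd)
os.chmod(path, 0o640)           # owner rw, group r, world: nothing
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o640 -- no world-readable bit set
os.unlink(path)
```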

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1129748/+subscriptions



[Yahoo-eng-team] [Bug 1102897] Re: quantum port-create --fixed-ip ignores additional invalid items

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/253878
Committed: 
https://git.openstack.org/cgit/openstack/python-neutronclient/commit/?id=ed17ae60bd5fbcfffb94e91f270191f0b4d92970
Submitter: Jenkins
Branch:master

commit ed17ae60bd5fbcfffb94e91f270191f0b4d92970
Author: Akihiro Motoki 
Date:   Sun Dec 6 16:55:24 2015 +0900

Improve str2dict key validation to avoid wrong keys

This commit adds valid_keys and required_keys to str2dict
and define a new function which can be used as argparse type
validator.

By this function, we can declare what fields are valid and what
fields are required for dictionary option in option definition.

Change-Id: Ib7b233e082c15d0bd6e16a754b8acad52e413986
Closes-Bug: #1102897
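The merged change can be sketched roughly as follows; the valid_keys /
required_keys signature mirrors the commit message, but this body is
illustrative, not the actual python-neutronclient implementation:

```python
def str2dict(text, valid_keys=None, required_keys=None):
    """Parse "k1=v1,k2=v2" into a dict, rejecting unknown or missing keys
    instead of silently ignoring them."""
    result = dict(item.split("=", 1) for item in text.split(","))
    if valid_keys is not None:
        invalid = set(result) - set(valid_keys)
        if invalid:
            raise ValueError("invalid key(s): %s" % ", ".join(sorted(invalid)))
    if required_keys:
        missing = set(required_keys) - set(result)
        if missing:
            raise ValueError("missing key(s): %s" % ", ".join(sorted(missing)))
    return result

keys = ["subnet_id", "ip_address"]
print(str2dict("subnet_id=abc,ip_address=10.0.0.6", valid_keys=keys))
try:
    # The typo "ip-address" from the bug report now fails loudly.
    str2dict("subnet_id=abc,ip-address=10.0.0.6", valid_keys=keys)
except ValueError as exc:
    print(exc)  # invalid key(s): ip-address
```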


** Changed in: python-neutronclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1102897

Title:
  quantum port-create --fixed-ip ignores additional invalid items

Status in neutron:
  Invalid
Status in python-neutronclient:
  Fix Released

Bug description:
  
  When doing a port-create - and possibly other requests - you can add any
  old rubbish to the request, and provided the minimum required info is
  present, there is no error:

  ubuntu@az3devstackvm2:~/devstack$ quantum port-create --fixed-ip subnet_id=1972dfa7-5537-465b-b41d-d3fdcd1de7ce,RUBBISH=SOMETHING 20147087-5520-4bb0-81a1-43ccbf2d101a
  Created a new port:
  +----------------+----------------------------------------------------------------------------------+
  | Field          | Value                                                                            |
  +----------------+----------------------------------------------------------------------------------+
  | admin_state_up | True                                                                             |
  | device_id      |                                                                                  |
  | device_owner   |                                                                                  |
  | fixed_ips      | {"subnet_id": "1972dfa7-5537-465b-b41d-d3fdcd1de7ce", "ip_address": "10.0.0.6"}  |
  | id             | 78b68e9f-b414-40c8-af9e-0c537e5e45c8                                             |
  | mac_address    | fa:16:3e:a0:b6:82                                                                |
  | name           |                                                                                  |
  | network_id     | 20147087-5520-4bb0-81a1-43ccbf2d101a                                             |
  | status         | ACTIVE                                                                           |
  | tenant_id      | 5e0d8b02dd0a448da10749dd5d33b88c                                                 |
  +----------------+----------------------------------------------------------------------------------+
  ubuntu@az3devstackvm2:~/devstack$

  
  I have validated that the RUBBISH:SOMETHING is passed to the server, and is 
silently ignored.

  This means that e.g. typos in commands can generate something other
  than the user was expecting :

  ubuntu@az3devstackvm2:~/devstack$ q port-create --fixed-ip subnet_id=1972dfa7-5537-465b-b41d-d3fdcd1de7ce,ip-address=10.0.0.6 20147087-5520-4bb0-81a1-43ccbf2d101a
  Created a new port:
  +----------------+----------------------------------------------------------------------------------+
  | Field          | Value                                                                            |
  +----------------+----------------------------------------------------------------------------------+
  | admin_state_up | True                                                                             |
  | device_id      |                                                                                  |
  | device_owner   |                                                                                  |
  | fixed_ips      | {"subnet_id": "1972dfa7-5537-465b-b41d-d3fdcd1de7ce", "ip_address": "10.0.0.5"}  |
  | id             | 008336f8-6fbd-4821-ab67-f9c8f70d0ab2                                             |
  | mac_address    | fa:16:3e:c6:ca:05                                                                |
  | name           |                                                                                  |
  | network_id     | 20147087-5520-4bb0-81a1-43ccbf2d101a                                             |
  | status         | ACTIVE                                                                           |
  | tenant_id      | 5e0d8b02dd0a448da10749dd5d33b88c                                                 |
  +----------------+----------------------------------------------------------------------------------+

  
  I thought I was asking for 10.0.0.6, but I got 10.0.0.5

  Would a 400 error not be 

[Yahoo-eng-team] [Bug 1551368] Re: L7 capability extension implementation for lbaas v2

2016-02-29 Thread Armando Migliaccio
Is there any operator/user documentation to be provided?


** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551368

Title:
  L7 capability extension implementation for lbaas v2

Status in neutron:
  New
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/148232
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 254190b3051453f823dbdf1826f1a6b984c57164
  Author: Evgeny Fedoruk 
  Date:   Mon Jan 19 03:47:39 2015 -0800

  L7 capability extension implementation for lbaas v2
  
  Including extension modifications
  Including db model modifications
  Including alembic migration
  Including new driver managers for l7 rules and policies
  Including logging_noop driver implementation
  Including unit testing
  Including Octavia driver updates
  
  DocImpact
  APIImpact
  Change-Id: I8f9535ebf28155adf43ebe046d2dbdfa86c4d81b
  Implements: blueprint lbaas-l7-rules
  Co-Authored-By: Evgeny Fedoruk 
  Co-Authored-By: Stephen Balukoff 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551368/+subscriptions



[Yahoo-eng-team] [Bug 1548511] Re: Shared pools support

2016-02-29 Thread Armando Migliaccio
Is there any operator/user documentation to be provided?

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548511

Title:
  Shared pools support

Status in neutron:
  Confirmed
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/218560
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 4f3cf154829fcd69ecf3fa7f4e49f82d0104a4f0
  Author: Stephen Balukoff 
  Date:   Sat Aug 29 02:51:09 2015 -0700

  Shared pools support
  
  In preparation for L7 switching functionality, we need to
  reduce the rigidity of our model somewhat and allow pools
  to exist independent of listeners and be shared by 0 or
  more listeners. With this patch, pools are now associated
  with loadbalancers directly, and there is now a N:M
  relationship between listeners and pools.
  
  This patch does alter the Neutron LBaaS v2 API slightly,
  but all these changes are backward compatible. Nevertheless,
  since Neutron core dev team has asked that any API changes
  take place in an extension, that is what is being done in
  this patch.
  
  This patch also updates the reference namespace driver to
  render haproxy config templates correctly given the pool
  sharing functionality added with the patch.
  
  Finally, the nature of shared pools means that the usual
  workflow for tenants can be (but doesn't have to be)
  altered such that pools can be created before listeners
  independently, and assigned to listeners as a later step.
  
  DocImpact
  APIImpact
  Partially-Implements: blueprint lbaas-l7-rules
  Change-Id: Ia0974b01f1f02771dda545c4cfb5ff428a9327b4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548511/+subscriptions



[Yahoo-eng-team] [Bug 1551368] [NEW] L7 capability extension implementation for lbaas v2

2016-02-29 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/148232
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.

commit 254190b3051453f823dbdf1826f1a6b984c57164
Author: Evgeny Fedoruk 
Date:   Mon Jan 19 03:47:39 2015 -0800

L7 capability extension implementation for lbaas v2

Including extension modifications
Including db model modifications
Including alembic migration
Including new driver managers for l7 rules and policies
Including logging_noop driver implementation
Including unit testing
Including Octavia driver updates

DocImpact
APIImpact
Change-Id: I8f9535ebf28155adf43ebe046d2dbdfa86c4d81b
Implements: blueprint lbaas-l7-rules
Co-Authored-By: Evgeny Fedoruk 
Co-Authored-By: Stephen Balukoff 

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551368

Title:
  L7 capability extension implementation for lbaas v2

Status in neutron:
  New

Bug description:
  https://review.openstack.org/148232
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 254190b3051453f823dbdf1826f1a6b984c57164
  Author: Evgeny Fedoruk 
  Date:   Mon Jan 19 03:47:39 2015 -0800

  L7 capability extension implementation for lbaas v2
  
  Including extension modifications
  Including db model modifications
  Including alembic migration
  Including new driver managers for l7 rules and policies
  Including logging_noop driver implementation
  Including unit testing
  Including Octavia driver updates
  
  DocImpact
  APIImpact
  Change-Id: I8f9535ebf28155adf43ebe046d2dbdfa86c4d81b
  Implements: blueprint lbaas-l7-rules
  Co-Authored-By: Evgeny Fedoruk 
  Co-Authored-By: Stephen Balukoff 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551368/+subscriptions



[Yahoo-eng-team] [Bug 1538767] Re: Users cannot create extra-routes with nexthop on ext-net

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/273278
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3d5d378769f0715e3254ac00b6f091a6f9f6960b
Submitter: Jenkins
Branch:master

commit 3d5d378769f0715e3254ac00b6f091a6f9f6960b
Author: Cedric Brandily 
Date:   Wed Jan 27 23:58:18 2016 +0100

Allow non-admins to define "external" extra-routes

Currently non-admin users can create extra-routes when the nexthop is on
router-interfaces subnets but not on external-network subnet. Indeed
user permissions are used to get router ports in order to validate
nexthops BUT non-admin users don't "see" router port on its external
network.

This change uses an elevated context instead of user context to enable
non-admins to create "external" extra-routes.

APIImpact
Closes-Bug: #1538767
Change-Id: I08b1d8586a4cd241a3589e8cb7151b77ab679124


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1538767

Title:
  Users cannot create extra-routes with nexthop on ext-net

Status in neutron:
  Fix Released

Bug description:
  Non-admin users cannot create extra-routes on a router with a nexthop
  on ext-net subnet:

# With admin user
neutron net-create pub --router-:external
neutron subnet-create pub 192.168.0.0/16

# With non-admin user
neutron router-create router
neutron router-gateway-set router pub
neutron router-update router --routes 
nexthop=192.168.0.99,destination=10.10.10.0/24
>> Invalid format for routes: [{u'destination': u'10.10.10.0/24', 
u'nexthop': u'192.168.0.99'}], the nexthop is not connected with router

  But it succeeds with an admin user.

  nexthop validation gets all ports connected to the router to check if
  nexthop is on a subnet connected to the router BUT non-admin users are
  only allowed to get internal ports!
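  The permission asymmetry can be seen in a tiny, self-contained simulation
  (not real Neutron code; the Context class, port data, and helper names are
  made up for illustration): a non-admin context never sees the router's
  external gateway port, so a nexthop on the external subnet looks
  disconnected until the lookup is done with an elevated context.

```python
import ipaddress

class Context:
    """Toy stand-in for a request context with an elevated() helper."""
    def __init__(self, is_admin):
        self.is_admin = is_admin

    def elevated(self):
        return Context(is_admin=True)

# Ports attached to the router: one internal interface, one external gateway
ROUTER_PORTS = [
    {"subnet": "10.0.0.0/24", "external": False},
    {"subnet": "192.168.0.0/16", "external": True},  # gateway port on pub
]

def visible_ports(ctx):
    # Non-admins do not "see" the router port on the external network
    return [p for p in ROUTER_PORTS if ctx.is_admin or not p["external"]]

def nexthop_connected(ctx, nexthop):
    # A nexthop is valid only if it falls in a subnet of a visible port
    ip = ipaddress.ip_address(nexthop)
    return any(ip in ipaddress.ip_network(p["subnet"])
               for p in visible_ports(ctx))

user_ctx = Context(is_admin=False)
print(nexthop_connected(user_ctx, "192.168.0.99"))             # False: rejected
print(nexthop_connected(user_ctx.elevated(), "192.168.0.99"))  # True: accepted
```

  Using the elevated context only for this validation, as the merged change
  does, widens visibility for the nexthop check without granting the user any
  other admin capability.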

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1538767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542855] Fix merged to glance (master)

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281165
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=f89bbc5d879ab098134a1b3214669526e30a5d89
Submitter: Jenkins
Branch:master

commit f89bbc5d879ab098134a1b3214669526e30a5d89
Author: Stuart McLaren 
Date:   Wed Feb 17 09:57:11 2016 +

Add unit test for default number of workers

The default number of workers should match the cpu count.

Change-Id: Idfa1fea88e4a2e45ea2f888097bf82cb0005ab05
Partial-bug: #1542855


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1542855

Title:
  Glance generates a non-reproducible config file

Status in Glance:
  Fix Released

Bug description:
  When generating glance-api.conf and glance-registry.conf, the workers
  directive, even though commented, is populated with a value which
  depends on the number of CPU (or core?) of the machine generating the
  file. This means that the build of Glance isn't reproducible (ie:
  building the package on 2 different machine will not produce byte for
  byte identical packages).

  If you didn't hear about the Debian reproducible build effort, please read 
these wiki entries:
  https://wiki.debian.org/ReproducibleBuilds
  https://wiki.debian.org/ReproducibleBuilds/About

  and please consider removing non-deterministic bits when generating
  the configuration files.
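  A minimal illustration of the non-determinism (the render function below is
  a hypothetical stand-in for the sample-config generator, not the real oslo
  code): deriving the commented default from the build host's CPU count makes
  the generated file differ byte-for-byte between machines.

```python
import multiprocessing

def render_sample_config(cpu_count):
    # Hypothetical stand-in for the generator: the commented-out default
    # embeds a host-dependent value into the output.
    return "# workers = %d\n" % cpu_count

# Two build machines with different core counts produce different bytes:
print(render_sample_config(4) == render_sample_config(8))  # False

# On the machine actually generating the file, the embedded value would be:
print(render_sample_config(multiprocessing.cpu_count()))
```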

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1542855/+subscriptions



[Yahoo-eng-team] [Bug 1512858] Re: Default gateway in fip namespace not updated after a subnet-update

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/241357
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a0f7153f9512ce313f0c8762889fe3ef6b455a32
Submitter: Jenkins
Branch:master

commit a0f7153f9512ce313f0c8762889fe3ef6b455a32
Author: Rawlin Peters 
Date:   Tue Nov 3 13:09:27 2015 -0700

Update default gateway in the fip namespace after subnet-update

The gateway does not get updated after a subnet-update --gateway
because the router is already subscribed to the fip namespace.
On a restart, the fip namespace has no subscribers, which causes
the gateway to be created. You shouldn't have to restart the L3
agent in order to update the gateway.

In order for the default gateway in the fip namespace to be updated
after a subnet-update, the underlying code which sets the default
gateway needs to be called even if the router is already subscribed to
the fip namespace.

The bit of code which is responsible for updating the default route has
been refactored into its own method, update_gateway_port, which is then
called after a 'subnet-update --gateway ...'. If the gateway has
actually changed, the default route in the fip namespace will be
updated.

Change-Id: I0552c4a50ae09a6606e88395f7545179d02733dd
Closes-Bug: #1512858


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512858

Title:
  Default gateway in fip namespace not updated after a subnet-update

Status in neutron:
  Fix Released

Bug description:
  After running the following command:

  neutron subnet-update --gateway  

  the default gateway does not get set to  in the
  corresponding fip namespace until the l3-agent is restarted.

  The default gateway in the corresponding fip namespace should be
  updated right after running the "neutron subnet-update" command, just
  like it gets updated in the corresponding snat namespace.

  Current output:

  stack@stack:~$ sudo ip netns exec snat-7f799d98-66f5-47e9-a2d1-6fd21d41f181 
ip r | grep default
  default via 10.127.10.223 dev qg-aa91f11a-97
  stack@stack:~$ sudo ip netns exec fip-25f35519-87ee-4c73-86dc-e0441a75734f ip 
r | grep default
  default via 10.127.10.223 dev fg-d9c0442f-5b

  stack@stack:~$ neutron subnet-update --gateway 10.127.10.222 
09e385df-3bbf-474d-a0c1-bec63ad2069f
  Updated subnet: 09e385df-3bbf-474d-a0c1-bec63ad2069f

  stack@stack:~$ sudo ip netns exec snat-7f799d98-66f5-47e9-a2d1-6fd21d41f181 
ip r | grep default
  default via 10.127.10.222 dev qg-aa91f11a-97
  stack@stack:~$ sudo ip netns exec fip-25f35519-87ee-4c73-86dc-e0441a75734f ip 
r | grep default
  default via 10.127.10.223 dev fg-d9c0442f-5b

  *restart the l3 agent*

  stack@stack:~$ sudo ip netns exec fip-25f35519-87ee-4c73-86dc-e0441a75734f ip 
r | grep default
  default via 10.127.10.222 dev fg-d9c0442f-5b

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1512858/+subscriptions



[Yahoo-eng-team] [Bug 1550795] Re: live migration with block migration on Xen produces "cannot marshal" exception

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/285741
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=387329a959b4980d74541a5ddf6b54deab9fe957
Submitter: Jenkins
Branch:master

commit 387329a959b4980d74541a5ddf6b54deab9fe957
Author: Corey Wright 
Date:   Fri Feb 26 00:24:48 2016 -0600

Do not pass call_xenapi unmarshallable type

Convert xmlrpc-unmarshallable CoercedDict from DictOfStringsField
object field to marshallable built-in dict.

Upstream commit f710f709 changed migrate_send_data to a
DictOfStringsField in the XenapiLiveMigrateData object, which produces
a CoercedDict, and upstream commit 69e01758 changed the handling of
migrate_send_data in the xenapi virt driver, but didn't guard against
migrate_send_data from being passed to xmlrpc, by way of XenAPI, as
the unmarshallable CoercedDict, so convert to a standard dict.

Technically, migrate_send_data is a nullable object field and can
produce None, but that's invalid at this point in the code as the
XenAPI commands VM.assert_can_migrate and VM.migrate_send require it,
so throw an InvalidParameterValue exception and correct the unit tests
with incorrect test data (ie block_migration=True, but
migrate_send_data=None) that are accidentally triggering this
exception.

Change-Id: Ia28f9af8087dd45e9159264c3510637f6cf24be1
Closes-Bug: #1550795

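The "cannot marshal" failure can be reproduced with the standard library
alone (the CoercedDict below is a local stand-in for the object-field
mapping type, not the real class): xmlrpc refuses to marshal subclasses of
the basic types it knows, and converting to a plain built-in dict, as the
commit does, fixes it.

```python
import xmlrpc.client

class CoercedDict(dict):
    """Stand-in for the unmarshallable object-field mapping type."""

data = CoercedDict(host="dest-host", port="443")

try:
    # xmlrpc marshals dict, but not subclasses of dict
    xmlrpc.client.dumps((data,), "VM.migrate_send")
except TypeError as exc:
    print(exc)  # e.g. cannot marshal <class '__main__.CoercedDict'> objects

# The fix: pass a plain built-in dict to the XenAPI call instead
payload = xmlrpc.client.dumps((dict(data),), "VM.migrate_send")
print("methodCall" in payload)  # True
```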

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550795

Title:
  live migration with block migration on Xen produces "cannot marshal"
  exception

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. Nova version

  commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

  2. Deserialized traceback as captured in log as received by messaging

  Traceback (most recent call last):
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 143, in _dispatch_and_reply
  executor_callback))
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 189, in _dispatch
  executor_callback)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 130, in _do_dispatch
  result = func(ctxt, **new_args)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/exception.py",
 line 110, in wrapped
  payload)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/exception.py",
 line 89, in wrapped
  return f(self, context, *args, **kw)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/compute/manager.py",
 line 413, in decorated_function
  return function(self, context, *args, **kwargs)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/compute/manager.py",
 line 391, in decorated_function
  kwargs['instance'], e, sys.exc_info())
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/compute/manager.py",
 line 379, in decorated_function
  return function(self, context, *args, **kwargs)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/compute/manager.py",
 line 5281, in check_can_live_migrate_source
  block_device_info)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/virt/xenapi/driver.py",
 line 530, in check_can_live_migrate_source
  dest_check_data)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 2419, in check_can_live_migrate_source
  "VM.assert_can_migrate", vm_ref, dest_check_data)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py",
 line 2524, in _call_live_migrate_command
  vdi_map, vif_map, options)
File 
"/opt/rackstack/rackstack.532.2/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/session.py",
 line 210, in call_xenapi
  return session.xenapi_request(method, args)
File 

[Yahoo-eng-team] [Bug 1551312] [NEW] Python unit tests don't run in a clean 15.10 environment

2016-02-29 Thread Sean Dague
Public bug reported:

Attempting to test fixes for keystone but tox -e py27 doesn't work for
me:

ribos:~/code/openstack/keystone(master)> rm -rf .tox
ribos:~/code/openstack/keystone(master)> tox -e py27
py27 create: /home/sdague/code/openstack/keystone/.tox/py27
py27 installdeps: -r/home/sdague/code/openstack/keystone/test-requirements.txt, 
.[ldap,memcache,mongodb]
py27 develop-inst: /home/sdague/code/openstack/keystone
py27 installed: -f 
/home/sdague/.pip/wheelhouse,alembic==0.8.4,amqp==1.4.9,anyjson==0.3.3,appdirs==1.4.0,Babel==2.2.0,bashate==0.4.0,beautifulsoup4==4.4.1,cachetools==1.1.5,cffi==1.5.2,contextlib2==0.5.1,coverage==4.0.3,cryptography==1.2.2,debtcollector==1.3.0,decorator==4.0.9,docutils==0.12,dogpile.cache==0.5.7,dogpile.core==0.4.1,ecdsa==0.13,enum34==1.1.2,eventlet==0.18.4,extras==0.0.3,fasteners==0.14.1,fixtures==1.4.0,flake8==2.2.4,flake8-docstrings==0.2.1.post1,funcsigs==0.4,functools32==3.2.3.post2,futures==3.0.5,futurist==0.13.0,greenlet==0.4.9,hacking==0.10.2,httplib2==0.9.2,idna==2.0,ipaddress==1.0.16,iso8601==0.1.11,Jinja2==2.8,jsonschema==2.5.1,-e
git+https://github.com/openstack/keystone.git@428cbeec71aa063fe627ea89b23d77a1d4556763#egg=keystone,keystoneauth1==2.3.0,keystonemiddleware==4.3.0,kombu==3.0.33,ldappool==1.0,linecache2==1.0.0,lxml==3.5.0,Mako==1.0.3,MarkupSafe==0.23,mccabe==0.2.1,mock==1.3.0,monotonic==0.6,mox3==0.14.0,msgpack-python==0.4.7,netaddr==0.7.18,netifaces==0.10.4,oauthlib==1.0.3,os-client-config==1.16.0,os-testr==0.6.0,oslo.cache==1.4.0,oslo.concurrency==3.6.0,oslo.config==3.9.0,oslo.context==2.2.0,oslo.db==4.6.0,oslo.i18n==3.4.0,oslo.log==3.2.0,oslo.messaging==4.5.0,oslo.middleware==3.7.0,oslo.policy==1.5.0,oslo.serialization==2.4.0,oslo.service==1.7.0,oslo.utils==3.7.0,oslosphinx==4.3.0,oslotest==2.3.0,paramiko==1.16.0,passlib==1.6.5,Paste==2.0.2,PasteDeploy==1.5.2,pbr==1.8.1,pep257==0.7.0,pep8==1.5.7,pika==0.10.0,pika-pool==0.1.3,positional==1.0.1,prettytable==0.7.2,pyasn1==0.1.9,pycadf==2.1.0,pycparser==2.14,pycrypto==2.6.1,pyflakes==0.8.1,Pygments==2.1.2,pyinotify==0.9.6,pymongo==3.2.1,pyOpenSSL==0.15.1,pyrsistent==0.11.12,pysaml2==4.0.2,python-dateutil==2.5.0,python-editor==0.5,python-keystoneclient==2.3.1,python-ldap==2.4.25,python-memcached==1.57,python-mimeparse==1.5.1,python-subunit==1.2.0,pytz==2015.7,PyYAML==3.11,reno==1.5.0,repoze.lru==0.6,repoze.who==2.2,requests==2.9.1,requestsexceptions==1.1.3,retrying==1.3.3,Routes==2.2,six==1.10.0,Sphinx==1.2.3,SQLAlchemy==1.0.12,sqlalchemy-migrate==0.10.0,sqlparse==0.1.18,stevedore==1.12.0,tempest-lib==0.14.0,Tempita==0.5.2,testrepository==0.0.20,testscenarios==0.5.0,testtools==2.0.0,traceback2==1.4.0,unittest2==1.1.0,waitress==0.8.10,WebOb==1.5.1,WebTest==2.0.20,wheel==0.26.0,wrapt==1.10.6,zope.interface==4.1.3
py27 runtests: PYTHONHASHSEED='446975225'
py27 runtests: commands[0] | bash tools/pretty_tox.sh 
running testr
running
Non-zero exit code (2) from test listing.
error: testr failed (3)

OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
--- import errors ---
Failed to import test module: keystone.tests.unit.test_backend_ldap_pool
Traceback (most recent call last):
  File 
"/home/sdague/code/openstack/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/home/sdague/code/openstack/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "keystone/tests/unit/test_backend_ldap_pool.py", line 23, in 
from keystone.identity.backends import ldap
  File 
"/home/sdague/code/openstack/keystone/keystone/identity/backends/ldap/__init__.py",
 line 17, in 
ImportError: No module named core

Failed to import test module: keystone.tests.unit.test_ldap_livetest
Traceback (most recent call last):
  File 
"/home/sdague/code/openstack/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/home/sdague/code/openstack/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "keystone/tests/unit/test_ldap_livetest.py", line 23, in 
from keystone.identity.backends import ldap as identity_ldap
  File 
"/home/sdague/code/openstack/keystone/keystone/identity/backends/ldap/__init__.py",
 line 17, in 
ImportError: No module named core

Failed to import test module: keystone.tests.unit.test_ldap_pool_livetest
Traceback (most recent call last):
  File 
"/home/sdague/code/openstack/keystone/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 

[Yahoo-eng-team] [Bug 1416132] Re: _get_instance_disk_info fails to read files from NFS due to permissions

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/221162
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1345d0fe1cad5093d49a58b6f0b7f4cb650f61d8
Submitter: Jenkins
Branch:master

commit 1345d0fe1cad5093d49a58b6f0b7f4cb650f61d8
Author: Abhijeet Malawade 
Date:   Fri Sep 4 00:21:47 2015 -0700

Pass bdm info to _get_instance_disk_info method

If bdm info is not passed to the '_get_instance_disk_info' method,
then in case of processing qcow2 backing file, it raises permission
denied error provided the backing is belonging to an attached and
NFS-hosted Cinder volume.

Passed bdm info to the '_get_instance_disk_info' method if any
volumes are attached to the instance.

Co-Authored-By: Ankit Agrawal 
Closes-Bug: #1416132
Change-Id: I5d5c64ea4d304022282ec9ab121e295f3c9e0d03


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416132

Title:
  _get_instance_disk_info fails to read files from NFS due to
  permissions

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  LibvirtDriver's _get_instance_disk_info calls
  libvirt_utils.get_disk_backing_file() if processing a qcow2 backing
  file.  If this is a file belonging to an attached and NFS-hosted
  Cinder volume, it may be owned by qemu:qemu and  therefore not
  readable as the nova user.

  [update: am now aiming at a different solution]
  My proposed solution is to run the images.qemu_img_info() call as root in 
this case.

  Note that this requires a change to grenade to upgrade the rootwrap 
configuration for gating to pass.
  [/update]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416132/+subscriptions



[Yahoo-eng-team] [Bug 1551306] [NEW] Horizon could show invalid number of used instances

2016-02-29 Thread Paul Karikh
Public bug reported:

Steps to reproduce:
1) Set up Nova devstack to user NoopQuotaDriver by setting 
quota_driver=nova.quota.NoopQuotaDriver in the default section of 
/etc/nova/nova.conf. And restart nova-api, nova-conductor and nova-compute.
2) Launch an instance
3) Go to project/overview and see that pie-charts show that 0 instances are 
running.

Here Horizon takes values of "totalInstancesUsed", "maxTotalInstances" for pie 
charts (adding it into context['charts'] and rendering in 
horizon/templates/horizon/common/_limit_summary.html)
https://github.com/openstack/horizon/blob/master/openstack_dashboard/usage/views.py#L67

** Affects: horizon
 Importance: Medium
 Assignee: Paul Karikh (pkarikh)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Paul Karikh (pkarikh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551306

Title:
  Horizon could show invalid number of used instances

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1) Set up Nova devstack to use NoopQuotaDriver by setting 
  quota_driver=nova.quota.NoopQuotaDriver in the default section of 
/etc/nova/nova.conf. And restart nova-api, nova-conductor and nova-compute.
  2) Launch an instance
  3) Go to project/overview and see that pie-charts show that 0 instances are 
running.

  Here Horizon takes values of "totalInstancesUsed", "maxTotalInstances" for 
pie charts (adding it into context['charts'] and rendering in 
horizon/templates/horizon/common/_limit_summary.html)
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/usage/views.py#L67

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551306/+subscriptions



[Yahoo-eng-team] [Bug 1551290] [NEW] leap year bug - day is out of range for month

2016-02-29 Thread Bernhard M. Wiedemann
Public bug reported:

Horizon Dashboard -> Admin -> Settings -> Change Language -> Save (->
Trigger)

triggers a bug in
openstack_dashboard/dashboards/settings/user/forms.py : 33
which currently has
return datetime(now.year + 1, now.month, now.day, now.hour,
now.minute, now.second, now.microsecond, now.tzinfo)

ValueError: day is out of range for month

because there will not be a 2017-02-29

IMHO, we should just use the Epoch value and add 365*24*60*60 seconds

** Affects: horizon
 Importance: Undecided
 Status: New

** Project changed: python-openstackclient => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551290

Title:
  leap year bug - day is out of range for month

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon Dashboard -> Admin -> Settings -> Change Language -> Save (->
  Trigger)

  triggers a bug in
  openstack_dashboard/dashboards/settings/user/forms.py : 33
  which currently has
  return datetime(now.year + 1, now.month, now.day, now.hour,
  now.minute, now.second, now.microsecond, now.tzinfo)

  ValueError: day is out of range for month

  because there will not be a 2017-02-29

  IMHO, we should just use the Epoch value and add 365*24*60*60 seconds
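  The failure and the suggested fix, side by side (hedged sketch; `expires`
  is an illustrative name, not the variable used in forms.py):

```python
from datetime import datetime, timedelta

now = datetime(2016, 2, 29, 12, 0, 0)  # a leap day

# The current forms.py approach fails because 2017-02-29 does not exist
try:
    datetime(now.year + 1, now.month, now.day, now.hour,
             now.minute, now.second, now.microsecond, now.tzinfo)
except ValueError as exc:
    print(exc)  # day is out of range for month

# Suggested fix: add a fixed number of seconds instead of bumping the year
expires = now + timedelta(seconds=365 * 24 * 60 * 60)
print(expires)  # 2017-02-28 12:00:00
```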

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551290/+subscriptions



[Yahoo-eng-team] [Bug 1551288] [NEW] Fullstack native tests sometimes fail with an OVS agent failing to start with 'Address already in use' error

2016-02-29 Thread Assaf Muller
Public bug reported:

Example failure:
test_connectivity(VLANs,Native) fails with this error:

http://paste.openstack.org/show/488585/

wait_until_env_is_up is timing out, which typically means that the
expected number of agents failed to start. Indeed in this particular
example I saw this line being output repeatedly in neutron-server.log:

[29/Feb/2016 04:16:31] "GET /v2.0/agents.json HTTP/1.1" 200 1870
0.005458

Fullstack calls GET on agents to determine if the expected amount of
agents were started and are successfully reporting back to neutron-
server.

We then see that one of the three OVS agents crashed with this TRACE:
http://paste.openstack.org/show/488586/

This happens only with the native tests using the Ryu library.
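The 'Address already in use' failure mode itself is generic and easy to
reproduce outside of Ryu (minimal sketch, not agent code; in the fullstack
case the listener is the agent's native OpenFlow controller socket):

```python
import errno
import socket

def bind_twice(host="127.0.0.1"):
    """Return the errno raised by a second bind() to an already-bound port."""
    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    a.bind((host, 0))            # let the OS pick a free port
    a.listen(1)
    port = a.getsockname()[1]
    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        b.bind((host, port))     # second "agent" tries the same port
        return None
    except OSError as exc:
        return exc.errno
    finally:
        b.close()
        a.close()

print(bind_twice() == errno.EADDRINUSE)  # True
```

This suggests the colliding agents are being handed the same listen port;
randomizing or isolating the port per test process would avoid the clash.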

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack

** Tags added: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551288

Title:
  Fullstack native tests sometimes fail with an OVS agent failing to
  start with 'Address already in use' error

Status in neutron:
  New

Bug description:
  Example failure:
  test_connectivity(VLANs,Native) fails with this error:

  http://paste.openstack.org/show/488585/

  wait_until_env_is_up is timing out, which typically means that the
  expected number of agents failed to start. Indeed in this particular
  example I saw this line being output repeatedly in neutron-server.log:

  [29/Feb/2016 04:16:31] "GET /v2.0/agents.json HTTP/1.1" 200 1870
  0.005458

  Fullstack calls GET on agents to determine if the expected amount of
  agents were started and are successfully reporting back to neutron-
  server.

  We then see that one of the three OVS agents crashed with this TRACE:
  http://paste.openstack.org/show/488586/

  This happens only with the native tests using the Ryu library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551288/+subscriptions



[Yahoo-eng-team] [Bug 1551290] [NEW] leap year bug - day is out of range for month

2016-02-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Horizon Dashboard -> Admin -> Settings -> Change Language -> Save (->
Trigger)

triggers a bug in
openstack_dashboard/dashboards/settings/user/forms.py : 33
which currently has
return datetime(now.year + 1, now.month, now.day, now.hour,
now.minute, now.second, now.microsecond, now.tzinfo)

ValueError: day is out of range for month

because there will not be a 2017-02-29

IMHO, we should just use the Epoch value and add 365*24*60*60 seconds

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
leap year bug - day is out of range for month
https://bugs.launchpad.net/bugs/1551290
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).



[Yahoo-eng-team] [Bug 1551282] [NEW] devstack launches extra instance of lbaas agent

2016-02-29 Thread YAMAMOTO Takashi
Public bug reported:

when using lbaas devstack plugin, two lbaas agents will be launched.
one by devstack neutron-legacy, and another by neutron-lbaas devstack plugin.

enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
ENABLED_SERVICES+=,q-lbaas

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

** Tags added: lbaas

** Description changed:

  when using lbaas devstack plugin, two lbaas agents will be launched.
  one by devstack neutron-legacy, and another by neutron-lbaas devstack plugin.
+ 
+ enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
+ ENABLED_SERVICES+=,q-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551282

Title:
  devstack launches extra instance of lbaas agent

Status in neutron:
  New

Bug description:
  when using lbaas devstack plugin, two lbaas agents will be launched.
  one by devstack neutron-legacy, and another by neutron-lbaas devstack plugin.

  enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
  ENABLED_SERVICES+=,q-lbaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551282/+subscriptions



[Yahoo-eng-team] [Bug 1533441] Re: HA router can not be deleted in L3 agent after race between HA router creating and deleting

2016-02-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/285572
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=046be0b8f30291cd029e6e97a4c6c5a1717a8bd1
Submitter: Jenkins
Branch:master

commit 046be0b8f30291cd029e6e97a4c6c5a1717a8bd1
Author: Kevin Benton 
Date:   Wed Feb 24 13:30:24 2016 -0800

Filter HA routers without HA interface and state

This patch adjusts the sync method to exclude any HA
routers from the response that are missing necessary
HA fields (the HA interface and the HA state).

This prevents the agent from ever receiving a partially
formed router.

Co-Authored-By: Ann Kamyshnikova 

Related-Bug: #1499647
Closes-Bug: #1533441
Change-Id: Iadb5a69d4cbc2515fb112867c525676cadea002b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533441

Title:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

Status in neutron:
  Fix Released

Bug description:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

  Exception:
  1. Unable to process HA router %s without HA port (HA router initialize)

  2. AttributeError: 'NoneType' object has no attribute 'config' (HA
  router deleting procedure)

  
  With the newest neutron code, there is an infinite loop in _safe_router_removed.
  Consider an HA router without an HA port placed on the L3 agent,
  usually because of a race condition.

  Infinite loop steps:
  1. an RPC to delete the HA router arrives
  2. the L3 agent removes it
  3. the RouterInfo deletes the router namespace (self.router_namespace.delete())
  4. ha_router.delete() raises AttributeError: 'NoneType' (or some other error)
  5. _safe_router_removed returns False
  6. self._resync_router(update) schedules a retry
  7. the router namespace no longer exists, so a RuntimeError is raised; go to
  5, looping forever between 5 and 7
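
  The loop boils down to retrying an operation that can never succeed once
  the namespace is gone. A self-contained sketch (not real L3 agent code;
  the class and method names mirror the description above, not the actual
  implementation):

```python
class FakeAgent:
    """Toy model of the resync loop described in the steps above."""

    def __init__(self):
        self.namespace_exists = True
        self.resyncs = 0

    def _router_removed(self):
        if not self.namespace_exists:
            # step 7: namespace already deleted on a previous attempt
            raise RuntimeError("namespace already deleted")
        # step 3: the namespace is deleted first...
        self.namespace_exists = False
        # step 4: ...then HA-specific teardown blows up
        raise AttributeError("'NoneType' object has no attribute 'config'")

    def _safe_router_removed(self):
        try:
            self._router_removed()
            return True
        except Exception:
            return False   # step 5

    def process(self, max_attempts):
        for _ in range(max_attempts):
            if self._safe_router_removed():
                return True
            self.resyncs += 1   # step 6: _resync_router, try again later
        return False

agent = FakeAgent()
print(agent.process(5))   # False: every retry fails, resyncing forever
print(agent.resyncs)      # 5
```

  The merged fix sidesteps the loop by never handing the agent a partially
  formed HA router in the first place.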

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533441/+subscriptions



[Yahoo-eng-team] [Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..

2016-02-29 Thread Corey Bryant
Hui, I've uploaded your fix and it is now awaiting review by the sru
team for trusty-proposed:
https://launchpad.net/ubuntu/trusty/+queue?queue_state=1_text=

** Changed in: neutron (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: neutron (Ubuntu Trusty)
   Status: Confirmed => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393391

Title:
  neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-
  update_fanout..

Status in neutron:
  Confirmed
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Trusty:
  Fix Committed

Bug description:
  Under an HA deployment, neutron-openvswitch-agent can get stuck
  when receiving a close command on a fanout queue the agent is not subscribed 
to.

  It stops responding to any other messages, so it effectively stops
  working at all.

  2014-11-11 10:27:33.092 3027 INFO neutron.common.config [-] Logging enabled!
  2014-11-11 10:27:34.285 3027 INFO neutron.openstack.common.rpc.common 
[req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on 
vip-rabbitmq:5672
  2014-11-11 10:27:34.370 3027 INFO neutron.openstack.common.rpc.common 
[req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on 
vip-rabbitmq:5672
  2014-11-11 10:27:35.348 3027 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent initialized successfully, 
now running...
  2014-11-11 10:27:35.351 3027 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent out of sync with plugin!
  2014-11-11 10:27:35.401 3027 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent tunnel out of sync with 
plugin!
  2014-11-11 10:27:35.414 3027 INFO neutron.openstack.common.rpc.common 
[req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on 
vip-rabbitmq:5672
  2014-11-11 10:32:33.143 3027 INFO neutron.agent.securitygroups_rpc 
[req-22c7fa11-882d-4278-9f83-6dd56ab95ba4 None] Security group member updated 
[u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 10:58:11.916 3027 INFO neutron.agent.securitygroups_rpc 
[req-484fd71f-8f61-496c-aa8a-2d3abf8de365 None] Security group member updated 
[u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 10:59:43.954 3027 INFO neutron.agent.securitygroups_rpc 
[req-2c0bc777-04ed-470a-aec5-927a59100b89 None] Security group member updated 
[u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 11:00:22.500 3027 INFO neutron.agent.securitygroups_rpc 
[req-df447d01-d132-40f2-8528-1c1c4d57c0f5 None] Security group member updated 
[u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-12 01:27:35.662 3027 ERROR neutron.openstack.common.rpc.common [-] 
Failed to consume message from queue: Socket closed
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
Traceback (most recent call last):
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 579, in ensure
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
return method(*args, **kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 659, in _consume
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
return self.connection.drain_events(timeout=timeout)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 281, in 
drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
return self.transport.drain_events(self.connection, **kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 94, in 
drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
return connection.drain_events(**kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 266, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
chanmap, None, timeout=timeout,
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 328, in 
_wait_multiple
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common 
channel, method_sig, args, content = read_timeout(timeout)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 

[Yahoo-eng-team] [Bug 1551223] Re: Live migration (volume-backed) failed if launch instance from bootable volume with new blank block-devices

2016-02-29 Thread Yevgeniy Ovsyannikov
** Project changed: cinder => nova

** Description changed:

  Kilo RHOSP Version: 2015.1.2
  
  I'm trying to live-migrate instance which was created with blank block
  devices like:
  
  nova boot --flavor m1.large --block-device source=image,id=5524dc31
  -fabe-
  47b5-95e7-53d915034272,dest=volume,size=24,shutdown=remove,bootindex=0
  TEST --nic net-id=a31c345c-a7d8-4ae8-870d-6da30fc6c083 --block-device
  source=blank,dest=volume,size=10,shutdown=remove --block-device
  source=blank,dest=volume,size=1,format=swap,shutdown=remove
  
  
+--+--+++-+-+
  | ID  
 | Name | Status | Task State | Power State | Networks|
  
+--+--+++-+-+
  | 10b3414d-8a91-435d-9fe9-44b90837f519 | TEST | ACTIVE | -  | Running 
| public=172.24.4.228 |
  
+--+--+++-+-+
  
  Instance on compute1
  When Im trying migrate to compute2 I get this error:
  
  # nova live-migration TEST compute2
  ERROR (BadRequest): compute1 is not on shared storage: Live migration can not 
be used without shared storage. (HTTP 400) (Request-ID: 
req-5963c4c1-d486-4778-97dc-a6af73b0db0d)
  
- 
  I have found work flow, but volumes have to created manually:
  
  1. Create 2 basic volumes manually
  2. Perform this command:
  
  nova boot --flavor m1.large --block-device source=image,id=5524dc31
  -fabe-
  47b5-95e7-53d915034272,dest=volume,size=8,shutdown=remove,bootindex=0
  TEST --nic net-id=087a89bf-b864-4208-9ac3-c638bb1ad1cc --block-device
  source=volume,dest=volume,id=94e2c86a-b56d-
  4b06-b220-ebca182d01d3,shutdown=remove --block-device
  source=volume,dest=volume,id=362e13f6-4cc3-4bb5-a99c-
  0f65e37abe5c,shutdown=remove
  
- If create instance this way, lime-migration work proper
+ If create instance this way, live-migration work proper

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551223

Title:
  Live migration (volume-backed) failed if launch instance from bootable
  volume with new blank block-devices

Status in OpenStack Compute (nova):
  New

Bug description:
  Kilo RHOSP Version: 2015.1.2

  I'm trying to live-migrate instance which was created with blank block
  devices like:

  nova boot --flavor m1.large --block-device source=image,id=5524dc31
  -fabe-
  47b5-95e7-53d915034272,dest=volume,size=24,shutdown=remove,bootindex=0
  TEST --nic net-id=a31c345c-a7d8-4ae8-870d-6da30fc6c083 --block-device
  source=blank,dest=volume,size=10,shutdown=remove --block-device
  source=blank,dest=volume,size=1,format=swap,shutdown=remove

  
+--+--+++-+-+
  | ID  
 | Name | Status | Task State | Power State | Networks|
  
+--+--+++-+-+
  | 10b3414d-8a91-435d-9fe9-44b90837f519 | TEST | ACTIVE | -  | Running 
| public=172.24.4.228 |
  
+--+--+++-+-+

  Instance on compute1
  When I'm trying to migrate to compute2 I get this error:

  # nova live-migration TEST compute2
  ERROR (BadRequest): compute1 is not on shared storage: Live migration can not 
be used without shared storage. (HTTP 400) (Request-ID: 
req-5963c4c1-d486-4778-97dc-a6af73b0db0d)

  I have found a workflow, but the volumes have to be created manually:

  1. Create 2 basic volumes manually
  2. Perform this command:

  nova boot --flavor m1.large --block-device source=image,id=5524dc31
  -fabe-
  47b5-95e7-53d915034272,dest=volume,size=8,shutdown=remove,bootindex=0
  TEST --nic net-id=087a89bf-b864-4208-9ac3-c638bb1ad1cc --block-device
  source=volume,dest=volume,id=94e2c86a-b56d-
  4b06-b220-ebca182d01d3,shutdown=remove --block-device
  source=volume,dest=volume,id=362e13f6-4cc3-4bb5-a99c-
  0f65e37abe5c,shutdown=remove

  If the instance is created this way, live-migration works properly

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1551223/+subscriptions



[Yahoo-eng-team] [Bug 1551223] [NEW] Live migration (volume-backed) failed if launch instance from bootable volume with new blank block-devices

2016-02-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Kilo RHOSP Version: 2015.1.2

I'm trying to live-migrate instance which was created with blank block
devices like:

nova boot --flavor m1.large --block-device source=image,id=5524dc31
-fabe-
47b5-95e7-53d915034272,dest=volume,size=24,shutdown=remove,bootindex=0
TEST --nic net-id=a31c345c-a7d8-4ae8-870d-6da30fc6c083 --block-device
source=blank,dest=volume,size=10,shutdown=remove --block-device
source=blank,dest=volume,size=1,format=swap,shutdown=remove

+--+--+++-+-+
| ID
   | Name | Status | Task State | Power State | Networks|
+--+--+++-+-+
| 10b3414d-8a91-435d-9fe9-44b90837f519 | TEST | ACTIVE | -  | Running   
  | public=172.24.4.228 |
+--+--+++-+-+

Instance on compute1
When I'm trying to migrate to compute2 I get this error:

# nova live-migration TEST compute2
ERROR (BadRequest): compute1 is not on shared storage: Live migration can not 
be used without shared storage. (HTTP 400) (Request-ID: 
req-5963c4c1-d486-4778-97dc-a6af73b0db0d)


I have found a workflow, but the volumes have to be created manually:

1. Create 2 basic volumes manually
2. Perform this command:

nova boot --flavor m1.large --block-device source=image,id=5524dc31
-fabe-
47b5-95e7-53d915034272,dest=volume,size=8,shutdown=remove,bootindex=0
TEST --nic net-id=087a89bf-b864-4208-9ac3-c638bb1ad1cc --block-device
source=volume,dest=volume,id=94e2c86a-b56d-
4b06-b220-ebca182d01d3,shutdown=remove --block-device
source=volume,dest=volume,id=362e13f6-4cc3-4bb5-a99c-
0f65e37abe5c,shutdown=remove

If create instance this way, lime-migration work proper

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migration
-- 
Live migration (volume-backed) failed if launch instance from bootable volume 
with new blank block-devices
https://bugs.launchpad.net/bugs/1551223
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1548200] Re: 'image-show' get image info twice in glanceclient v1

2016-02-29 Thread Wenjun Wang
** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1548200

Title:
  'image-show' get image info twice in glanceclient v1

Status in python-glanceclient:
  In Progress

Bug description:
  the code:
  
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/shell.py#L149

  https://github.com/openstack/python-
  glanceclient/blob/master/glanceclient/common/utils.py#L215

  if we provide the image's id, find_resource() will return an object
  which contains the detailed info of this image.

  if we provide the image's name, find_resource() will return an object
  which contains only general info of this image; but the image's id is
  in this info, so we then use the id to get the detailed info.

  there is no need to fetch the info twice when the image's id is
  provided; we can optimize the method.
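
A hypothetical sketch of the optimization described above: when the caller
already passes a UUID, a single detailed lookup suffices. find_image,
get_by_id and find_by_name are illustrative stand-ins, not the real
glanceclient API:

```python
import uuid

def is_uuid_like(value):
    """True if value parses as a UUID (with or without dashes)."""
    try:
        uuid.UUID(value)
        return True
    except (TypeError, ValueError, AttributeError):
        return False

def find_image(get_by_id, find_by_name, name_or_id):
    if is_uuid_like(name_or_id):
        # ID given: one detailed request is enough.
        return get_by_id(name_or_id)
    # Name given: resolve the name to an ID, then fetch details.
    general = find_by_name(name_or_id)
    return get_by_id(general["id"])
```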

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1548200/+subscriptions



[Yahoo-eng-team] [Bug 1551219] [NEW] Remove hard coded "LinuxBridge" logs from common agent and make it variable

2016-02-29 Thread Andreas Scheuring
Public bug reported:

The Common Agent still has some hard coded "LinuxBridge" logs, although
it might be used by other agents (like macvtap agent) as well. Those
logs should reflect the agent type using the common agent.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551219

Title:
  Remove hard coded "LinuxBridge" logs from common agent and make it
  variable

Status in neutron:
  New

Bug description:
  The Common Agent still has some hard coded "LinuxBridge" logs,
  although it might be used by other agents (like macvtap agent) as
  well. Those logs should reflect the agent type using the common agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551219/+subscriptions



[Yahoo-eng-team] [Bug 1551214] [NEW] Projector selector icon is not helpful

2016-02-29 Thread Rob Cresswell
Public bug reported:

The project selector uses an ambiguous icon.  There is no shortage of
space in either the top nav or the collapsed nav menu, so there is no
reason for an icon.

We should just replace it with text.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Summary changed:

- Replace projector selector icon with text
+ Projector selector icon is not helpful

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551214

Title:
  Projector selector icon is not helpful

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The project selector uses an ambiguous icon.  There is no shortage of
  space in either the top nav or the collapsed nav menu, so there is no
  reason for an icon.

  We should just replace it with text.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551214/+subscriptions



[Yahoo-eng-team] [Bug 1548200] Re: 'image-show' get image info twice in glanceclient v1

2016-02-29 Thread Wenjun Wang
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
 Assignee: (unassigned) => Wenjun Wang (wangwenjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1548200

Title:
  'image-show' get image info twice in glanceclient v1

Status in Glance:
  In Progress
Status in python-glanceclient:
  In Progress

Bug description:
  the code:
  
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/shell.py#L149

  https://github.com/openstack/python-
  glanceclient/blob/master/glanceclient/common/utils.py#L215

  if we provide the image's id, find_resource() will return an object
  which contains the detailed info of this image.

  if we provide the image's name, find_resource() will return an object
  which contains only general info of this image; but the image's id is
  in this info, so we then use the id to get the detailed info.

  there is no need to fetch the info twice when the image's id is
  provided; we can optimize the method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1548200/+subscriptions



[Yahoo-eng-team] [Bug 1551189] [NEW] Tests don't work on leapdays

2016-02-29 Thread Sean Dague
Public bug reported:

2016-02-29 08:57:08.024 | Captured traceback:
2016-02-29 08:57:08.025 | ~~~
2016-02-29 08:57:08.025 | Traceback (most recent call last):
2016-02-29 08:57:08.042 |   File "keystone/tests/unit/test_backend.py", 
line , in test_list_trust_by_trustor
2016-02-29 08:57:08.042 | self.create_sample_trust(uuid.uuid4().hex)
2016-02-29 08:57:08.042 |   File "keystone/tests/unit/test_backend.py", 
line 5475, in create_sample_trust
2016-02-29 08:57:08.042 | expires_at = 
datetime.datetime.utcnow().replace(year=2031)
2016-02-29 08:57:08.042 | ValueError: day is out of range for month


The keystone test code assumes all months have the same length in all years. 
That is not the case.
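
One leap-safe way to move a datetime to another year is to clamp the day to
the target month's length. This is a general sketch of the idea, not
necessarily the patch keystone landed:

```python
import calendar
import datetime

def replace_year(dt, year):
    """Like dt.replace(year=year), but clamps Feb 29 to Feb 28 when the
    target year is not a leap year, instead of raising ValueError."""
    last_day = calendar.monthrange(year, dt.month)[1]
    return dt.replace(year=year, day=min(dt.day, last_day))

# The failing case from the traceback: Feb 29 moved to 2031 (not a leap year).
leap = datetime.datetime(2016, 2, 29, 8, 57, 8)
assert replace_year(leap, 2031) == datetime.datetime(2031, 2, 28, 8, 57, 8)
```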

** Affects: keystone
 Importance: Undecided
 Assignee: Sean Dague (sdague)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1551189

Title:
  Tests don't work on leapdays

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  2016-02-29 08:57:08.024 | Captured traceback:
  2016-02-29 08:57:08.025 | ~~~
  2016-02-29 08:57:08.025 | Traceback (most recent call last):
  2016-02-29 08:57:08.042 |   File "keystone/tests/unit/test_backend.py", 
line , in test_list_trust_by_trustor
  2016-02-29 08:57:08.042 | self.create_sample_trust(uuid.uuid4().hex)
  2016-02-29 08:57:08.042 |   File "keystone/tests/unit/test_backend.py", 
line 5475, in create_sample_trust
  2016-02-29 08:57:08.042 | expires_at = 
datetime.datetime.utcnow().replace(year=2031)
  2016-02-29 08:57:08.042 | ValueError: day is out of range for month

  
  The keystone test code assumes all months have the same length in all years. 
That is not the case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1551189/+subscriptions



[Yahoo-eng-team] [Bug 1551179] [NEW] poor network speed due to tso enabled

2016-02-29 Thread Rossella Sblendido
Public bug reported:

In some deployments we are experiencing low network speed. When
disabling tso on all virtual interfaces the problem is fixed. See also
[1].  I need to dig more into it, anyway I wonder if we should disable
TSO automatically every time Neutron creates a vif...


[1] 
http://askubuntu.com/questions/503863/poor-upload-speed-in-kvm-guest-with-virtio-eth-driver-in-openstack-on-3-14

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551179

Title:
  poor network speed due to tso enabled

Status in neutron:
  New

Bug description:
  In some deployments we are experiencing low network speed. When
  disabling tso on all virtual interfaces the problem is fixed. See also
  [1].  I need to dig more into it, anyway I wonder if we should disable
  TSO automatically every time Neutron creates a vif...

  
  [1] 
http://askubuntu.com/questions/503863/poor-upload-speed-in-kvm-guest-with-virtio-eth-driver-in-openstack-on-3-14

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551179/+subscriptions



[Yahoo-eng-team] [Bug 1525124] Re: attach port when VM is down,port status is active

2016-02-29 Thread Yan Songming
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525124

Title:
  attach port when VM is down,port status is active

Status in neutron:
  Invalid

Bug description:
  1 Create an OVS port.
  2 Attach this port to a VM which is shut down.
  3 Use nova interface-list; the status of this port is shown as 'ACTIVE' when 
in fact it should be 'DOWN'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525124/+subscriptions



[Yahoo-eng-team] [Bug 1551165] [NEW] delet a vm with macvtap port,port‘s binding:profile did not clean

2016-02-29 Thread Yan Songming
Public bug reported:

1 neutron  port-create b9ebf2d9-6823-472f-837f-405dc27d01cc  --name bandwidth 
--binding:vnic-type  macvtap
Created a new port:
+-+--+
| Field   | Value   
 |
+-+--+
| admin_state_up  | True
 |
| binding:host_id | 
 |
| binding:profile | {}  
 |
| binding:vif_details | {}  
 |
| binding:vif_type| unbound 
 |
| binding:vnic_type   | macvtap 
 |
| device_id   |   

2  nova boot --flavor m1.small --image
2a3e422a-0480-4a73-b6b1-2c15e06ed373 --nic port-id=71d9c5a3-aadc-
4b29-8e52-17d6028d1b0c instance3

3 neutron port-show  ff57dc93-bce5-4981-980b-5bd027ba4ce0
+-+--+
| Field   | Value   
 |
+-+--+
| admin_state_up  | True
 |
| binding:host_id | host179 
 |
| binding:profile | {"pci_slot": ":02:1f.4", "physical_network": 
"physnet2", "pci_vendor_info": "8086:10ed"} |
| binding:vif_details | {}  
 |

4  nova delete 70900b6f-5e4c-4830-8f89-4df4f0f14cc4
Request to delete server 70900b6f-5e4c-4830-8f89-4df4f0f14cc4 has been accepted.

5 neutron port-show 71d9c5a3-aadc-4b29-8e52-17d6028d1b0c
+-+--+
| Field   | Value   
 |
+-+--+
| admin_state_up  | True
 |
| binding:host_id | 
 |
| binding:profile | {"pci_slot": ":02:1f.4", "physical_network": 
"physnet2", "pci_vendor_info": "8086:10ed"} |
| binding:vif_details | {}  
 |
| binding:vif_type| unbound 
 |
| binding:vnic_type   | macvtap 
 |
| device_id   | 
 |

binding:profile has not been deleted.

** Affects: neutron
 Importance: Undecided
 Assignee: Yan Songming (songmingyan)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Yan Songming (songmingyan)

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551165

Title:
  delet a  vm with macvtap port,port‘s binding:profile did not clean

Status in neutron:
  New

Bug description:
  1 neutron  port-create b9ebf2d9-6823-472f-837f-405dc27d01cc  --name bandwidth 
--binding:vnic-type  macvtap
  Created a new port:
  
+-+--+
  | Field  

[Yahoo-eng-team] [Bug 1551154] [NEW] Snapshot of VM with SRIOV port fails if host PF is down

2016-02-29 Thread David Edery
Public bug reported:

Scenario:

1. Create a VM with SRIOV port --> the vNIC state should be UP
2. On the VM host, shutdown the PF (i.e. ifconfig ens1f1 down) --> The vNIC 
state should be DOWN
3. Create a snapshot of this suboptimal VM

Expected behavior:

Snapshot creation process begins, after a while the snapshot succeeds, VM 
should be up without the vNIC (if the PF is still down)

Actual behavior:
---
Snapshot creation process begins, after a while the snapshot fails and is 
deleted. The VM is up without the vNIC (since the PF is still down)

Analysis:
---
From nova-compute.log:

2016-02-29 07:21:51.578 34455 ERROR nova.compute.manager 
[req-00a47655-f6c8-4e53-913f-bef0c61100d0 6dc69ac7e48549e0a6f0577e870a096e 
7e5da331cb6342fca3688ea4b4df01f4 - - -] [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] Instance failed to spawn
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] Traceback (most recent call last):
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2480, in 
_build_resource
s
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] yield resources
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2352, in 
_build_and_run_
instance
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] block_device_info=block_device_info)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2463, in 
spawn
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] block_device_info=block_device_info)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4588, in 
_create_domain_and_network
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] power_on=power_on)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4519, in 
_create_domain
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] LOG.error(err)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] six.reraise(self.type_, self.value, 
self.tb)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4509, in 
_create_domain
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] domain.createWithFlags(launch_flags)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] rv = execute(f, *args, **kwargs)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] six.reraise(c, e, tb)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b] rv = meth(*args, **kwargs)
2016-02-29 07:21:51.578 34455 TRACE nova.compute.manager [instance: 
14a17ed9-8cbd-44f0-9cf6-a93b71de474b]   File 

[Yahoo-eng-team] [Bug 1551142] [NEW] User can't change locale in horizon on 29 february

2016-02-29 Thread Alexey Dushechkin
Public bug reported:

Steps to reproduce:

1. Wait for Feb 29
2. Log in to Horizon
3. Change locale in "Settings" / "User Settings" to something different than 
the current locale

Environment:

Icehouse (Ubuntu), Kilo (RDO), but it seems that trunk is affected too.

Expected result:

Locale changed successfully.

Actual result:

"Something went wrong" page, locale not changed.

Exception in log:

Internal Server Error: /horizon/settings/
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 
112, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 54, in dec
return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 
69, in view
return self.dispatch(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 
87, in dispatch
return handler(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/django/views/generic/edit.py", line 
171, in post
return self.form_valid(form)
  File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/views.py",
 line 40, in form_valid
return form.handle(self.request, form.cleaned_data)
  File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/forms.py",
 line 89, in handle
expires=_one_year())
  File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/forms.py",
 line 33, in _one_year
now.minute, now.second, now.microsecond, now.tzinfo)
ValueError: day is out of range for month
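
The `_one_year()` helper rebuilds a datetime with the same month and day in
the next year's calendar, which is invalid when today is Feb 29. A
timedelta-based sketch avoids the ValueError entirely (at the cost of the
expiry landing a day short after a leap day); this is an illustrative
alternative, not necessarily Horizon's actual fix:

```python
import datetime

def one_year_from(now):
    """Cookie expiry roughly one year out. Adding a timedelta never
    raises, unlike reconstructing datetime(now.year + 1, now.month,
    now.day, ...), which fails when `now` is Feb 29."""
    return now + datetime.timedelta(days=365)

# Feb 29 2016 plus 365 days lands on Feb 28 2017 without raising.
feb29 = datetime.datetime(2016, 2, 29, 12, 0, 0)
assert one_year_from(feb29) == datetime.datetime(2017, 2, 28, 12, 0, 0)
```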

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551142

Title:
  User can't change locale in horizon on 29 february

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1. Wait for Feb 29
  2. Log in to Horizon
  3. Change locale in "Settings" / "User Settings" to something different than 
the current locale

  Environment:

  Icehouse (Ubuntu), Kilo (RDO), but it seems that trunk is affected too.

  Expected result:

  Locale changed successfully.

  Actual result:

  "Something went wrong" page, locale not changed.

  Exception in log:

  Internal Server Error: /horizon/settings/
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 
112, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 54, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 
69, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 
87, in dispatch
  return handler(request, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/views/generic/edit.py", line 
171, in post
  return self.form_valid(form)
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/views.py",
 line 40, in form_valid
  return form.handle(self.request, form.cleaned_data)
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/forms.py",
 line 89, in handle
  expires=_one_year())
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/forms.py",
 line 33, in _one_year
  now.minute, now.second, now.microsecond, now.tzinfo)
  ValueError: day is out of range for month

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551142/+subscriptions
