[Yahoo-eng-team] [Bug 1491275] [NEW] Failed to create swarm bay

2015-09-02 Thread Eli Qiao
Public bug reported:

m-cond reports KeyError: 'number_of_minions' when creating a new swarm
bay

commit 97ad930147527fbb76acc320676d7c8a2a32e13e
Merge: 4089b1b 86ed292
Author: Jenkins 
Date:   Wed Sep 2 01:27:21 2015 +

Merge "Add roles to context"

taget@taget-ThinkStation-P300:~/nova$ magnum bay-create --name swarmbay  
--baymodel swarmmodel --node-count 2
+--------------------+--------------------------------------+
| Property           | Value                                |
+--------------------+--------------------------------------+
| status             | None                                 |
| uuid               | abb8213d-ceab-44af-afa2-fb7a99a24035 |
| status_reason      | None                                 |
| created_at         | 2015-09-02T07:57:32+00:00            |
| updated_at         | None                                 |
| bay_create_timeout | 0                                    |
| api_address        | None                                 |
| baymodel_id        | ec0647b3-9285-41fe-be10-edfd3e357c4b |
| node_count         | 2                                    |
| node_addresses     | None                                 |
| master_count       | 1                                    |
| discovery_url      | None                                 |
| name               | swarmbay                             |
+--------------------+--------------------------------------+


2015-09-02 15:57:33.492 ERROR oslo.service.loopingcall 
[req-bf80d962-74cd-4e82-89bb-09095578cb0e admin admin] Fixed interval looping 
call 'magnum.conductor.handlers.bay_conductor.HeatPoller.poll_and_check' failed
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall Traceback (most recent 
call last):
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 113, 
in _run_loop
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall result = 
func(*self.args, **self.kw)
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall   File 
"/opt/stack/magnum/magnum/conductor/handlers/bay_conductor.py", line 249, in 
poll_and_check
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall self.bay.node_count 
= stack.parameters['number_of_minions']
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall KeyError: 
'number_of_minions'
2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall
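The poller fails because it assumes the Heat stack exposes a 'number_of_minions' parameter, which the swarm template evidently does not. A minimal sketch of a more defensive lookup (any key name other than 'number_of_minions' below is an illustrative assumption, not taken from the actual templates):

```python
# Sketch of a defensive parameter lookup for HeatPoller.poll_and_check.
# 'number_of_nodes' is an assumed alternative key name for illustration.

def get_node_count(parameters):
    """Return the node count from Heat stack parameters, trying the
    candidate key names in order instead of assuming one of them."""
    for key in ('number_of_minions', 'number_of_nodes'):
        if key in parameters:
            return int(parameters[key])
    raise KeyError('no node-count parameter found in stack parameters')

# Usage (inside poll_and_check):
#   self.bay.node_count = get_node_count(stack.parameters)
```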

** Affects: magnum
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: Confirmed

** Description changed:

+ m-cond reports KeyError: 'number_of_minions' when creating a new swarm
+ bay
+ 
+ commit 97ad930147527fbb76acc320676d7c8a2a32e13e
+ Merge: 4089b1b 86ed292
+ Author: Jenkins 
+ Date:   Wed Sep 2 01:27:21 2015 +
+ 
+ Merge "Add roles to context"
+ 
+ taget@taget-ThinkStation-P300:~/nova$ magnum bay-create --name swarmbay  
--baymodel swarmmodel --node-count 2
+ +--------------------+--------------------------------------+
+ | Property           | Value                                |
+ +--------------------+--------------------------------------+
+ | status             | None                                 |
+ | uuid               | abb8213d-ceab-44af-afa2-fb7a99a24035 |
+ | status_reason      | None                                 |
+ | created_at         | 2015-09-02T07:57:32+00:00            |
+ | updated_at         | None                                 |
+ | bay_create_timeout | 0                                    |
+ | api_address        | None                                 |
+ | baymodel_id        | ec0647b3-9285-41fe-be10-edfd3e357c4b |
+ | node_count         | 2                                    |
+ | node_addresses     | None                                 |
+ | master_count       | 1                                    |
+ | discovery_url      | None                                 |
+ | name               | swarmbay                             |
+ +--------------------+--------------------------------------+
+ 
+ 
  2015-09-02 15:57:33.492 ERROR oslo.service.loopingcall 
[req-bf80d962-74cd-4e82-89bb-09095578cb0e admin admin] Fixed interval looping 
call 'magnum.conductor.handlers.bay_conductor.HeatPoller.poll_and_check' failed
  2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall Traceback (most recent 
call last):
  2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 113, 
in _run_loop
  2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall   File 
"/opt/stack/magnum/magnum/conductor/handlers/bay_conductor.py", line 249, in 
poll_and_check
  2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall 
self.bay.node_count = stack.parameters['number_of_minions']
  2015-09-02 15:57:33.492 TRACE oslo.service.loopingcall KeyError: 
'number_of_minions'
  

[Yahoo-eng-team] [Bug 1491282] Re: Logs from keystone python application are not reported in /var/log/keystone-all.log

2015-09-02 Thread Kovid Goyal
** Project changed: calibre => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1491282

Title:
  Logs from keystone python application are not reported in /var/log
  /keystone-all.log

Status in Keystone:
  New

Bug description:
  Now keystone is using apache to run. Keystone applications are located
  at two places:

  - /usr/lib/cgi-bin/keystone/admin
  - /usr/lib/cgi-bin/keystone/main

  
  Keystone is configured to use rsyslog for logging:

  === /etc/keystone/keystone.conf ===

  [...]
  use_syslog = True
  [...]
  syslog_log_facility = LOG_USER

  ===

  rsyslog is configured to catch logs with "keystone" word:

  root@node-3:/# cat /etc/rsyslog.d/20-keystone.conf 
  # managed by puppet

  :syslogtag, contains, "keystone" -/var/log/keystone-all.log
  & ~

  
  The problem is that the two scripts are named "admin" and "main", so
  rsyslog is not able to catch logs from them: if I understand correctly,
  they would need to be named "keystone-admin", for example, to be
  caught, because it is the application name that is used in the log.
  Here is an example of such a log line that is in /var/log/user.log but
  not in /var/log/keystone-all.log:

  === cat /var/log/user.log
  [...]
  Aug 28 09:00:51 node-3 admin 2015-08-28 09:00:51.735 18865 WARNING 
oslo_config.cfg [-] Option "admin_bind_host" from group "DEFAULT" is 
deprecated. Use option "admin_bind_host" from group "eventlet_server".
  [...]

  And that should also be reported in /var/log/keystone-all.log
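  One possible workaround on the rsyslog side (a sketch under the
  assumption that the WSGI scripts keep their current "admin"/"main"
  names; not a tested configuration) is to also match on those program
  names:

```
# /etc/rsyslog.d/20-keystone.conf (sketch; note that matching the bare
# names "admin"/"main" is broad and may catch unrelated programs)
:syslogtag, contains, "keystone"  -/var/log/keystone-all.log
& ~
:programname, isequal, "admin"    -/var/log/keystone-all.log
& ~
:programname, isequal, "main"     -/var/log/keystone-all.log
& ~
```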

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1491282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491282] Re: Logs from keystone python application are not reported in /var/log/keystone-all.log

2015-09-02 Thread guillaume thouvenin
Oops sorry I created this bug in the wrong project

** Project changed: keystone => fuel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1491282

Title:
  Logs from keystone python application are not reported in /var/log
  /keystone-all.log

Status in Fuel for OpenStack:
  New

Bug description:
  Now keystone is using apache to run. Keystone applications are located
  at two places:

  - /usr/lib/cgi-bin/keystone/admin
  - /usr/lib/cgi-bin/keystone/main

  
  Keystone is configured to use rsyslog for logging:

  === /etc/keystone/keystone.conf ===

  [...]
  use_syslog = True
  [...]
  syslog_log_facility = LOG_USER

  ===

  rsyslog is configured to catch logs with "keystone" word:

  root@node-3:/# cat /etc/rsyslog.d/20-keystone.conf 
  # managed by puppet

  :syslogtag, contains, "keystone" -/var/log/keystone-all.log
  & ~

  
  The problem is that the two scripts are named "admin" and "main", so
  rsyslog is not able to catch logs from them: if I understand correctly,
  they would need to be named "keystone-admin", for example, to be
  caught, because it is the application name that is used in the log.
  Here is an example of such a log line that is in /var/log/user.log but
  not in /var/log/keystone-all.log:

  === cat /var/log/user.log
  [...]
  Aug 28 09:00:51 node-3 admin 2015-08-28 09:00:51.735 18865 WARNING 
oslo_config.cfg [-] Option "admin_bind_host" from group "DEFAULT" is 
deprecated. Use option "admin_bind_host" from group "eventlet_server".
  [...]

  And that should also be reported in /var/log/keystone-all.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1491282/+subscriptions



[Yahoo-eng-team] [Bug 1491256] [NEW] VMware: nova log files overflow

2015-09-02 Thread Gary Kotton
Public bug reported:

The commit
https://review.openstack.org/#q,bcc002809894f39c84dca5c46034468ed0469c2b,n,z
removed the suds logging level. This causes the log files to overflow with
data. An example is http://208.91.1.172/logs/178652/3/1437935064/, where the
nova compute log file is 56M compared to all other files, which are a few K.
This makes debugging and troubleshooting terribly difficult. It also has an
upgrade impact for people going to L from any other version, which is a
degradation.

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491256

Title:
  VMware: nova log files overflow

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The commit
  https://review.openstack.org/#q,bcc002809894f39c84dca5c46034468ed0469c2b,n,z
  removed the suds logging level. This causes the log files to overflow
  with data. An example is http://208.91.1.172/logs/178652/3/1437935064/,
  where the nova compute log file is 56M compared to all other files,
  which are a few K. This makes debugging and troubleshooting terribly
  difficult. It also has an upgrade impact for people going to L from any
  other version, which is a degradation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491256/+subscriptions



[Yahoo-eng-team] [Bug 1491244] [NEW] server details view can include empty string image

2015-09-02 Thread Yasu Nakamura
Public bug reported:

When the image ref is not specified (for example, when booting from a
Cinder volume image), the server details view returns an empty string as
the image. The image value should be an empty dictionary.

Currently, clients with a strongly typed system cannot handle receiving
a string here correctly.

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/views/servers.py#n220
This line should "return {}" instead.
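A minimal sketch of the proposed fix (the function name and fields below are illustrative; the real view-builder method at the linked line differs):

```python
# Sketch: return an empty dict instead of an empty string when no image
# ref is set. Names below are illustrative, not nova's actual code.

def image_view(image_ref, bookmark_href):
    if not image_ref:
        return {}  # proposed behavior: {} rather than ""
    return {
        'id': image_ref,
        'links': [{'rel': 'bookmark', 'href': bookmark_href}],
    }
```

With this shape, strongly typed clients always receive an object for the image field, never a bare string.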

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491244

Title:
  server details view can include empty string image

Status in OpenStack Compute (nova):
  New

Bug description:
  When the image ref is not specified (for example, when booting from a
  Cinder volume image), the server details view returns an empty string
  as the image. The image value should be an empty dictionary.

  Currently, clients with a strongly typed system cannot handle
  receiving a string here correctly.

  http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/views/servers.py#n220
  This line should "return {}" instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491244/+subscriptions



[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-09-02 Thread Pavlo Shchelokovskyy
** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
   Importance: Undecided => Low

** Changed in: heat
 Assignee: (unassigned) => Pavlo Shchelokovskyy (pshchelo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in heat:
  In Progress
Status in Ironic:
  Fix Committed
Status in neutron:
  New

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the run of py27 precedes py34, and
  can be solved by erasing the .testrepository directory and running
  "tox -e py34" first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1489059/+subscriptions



[Yahoo-eng-team] [Bug 1491512] [NEW] Behavior change with latest nova paste config

2015-09-02 Thread Monty Taylor
Public bug reported:

http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-functional-
nova/1154770/console.html#_2015-09-02_12_10_56_113

This started failing about 12 hours ago. Looking at it with Sean, we
think it's because it actually never worked, but nova was failing
silently before. It's now throwing an error, which, while more correct
(you know you didn't delete something), is a behavior change.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491512

Title:
  Behavior change with latest nova paste config

Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-
  functional-nova/1154770/console.html#_2015-09-02_12_10_56_113

  This started failing about 12 hours ago. Looking at it with Sean, we
  think it's because it actually never worked, but nova was failing
  silently before. It's now throwing an error, which, while more correct
  (you know you didn't delete something), is a behavior change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491512/+subscriptions



[Yahoo-eng-team] [Bug 1491511] [NEW] Behavior change with latest nova paste config

2015-09-02 Thread Monty Taylor
Public bug reported:

http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-functional-
nova/1154770/console.html#_2015-09-02_12_10_56_113

This started failing about 12 hours ago. Looking at it with Sean, we
think it's because it actually never worked, but nova was failing
silently before. It's now throwing an error, which, while more correct
(you know you didn't delete something), is a behavior change.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491511

Title:
  Behavior change with latest nova paste config

Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-
  functional-nova/1154770/console.html#_2015-09-02_12_10_56_113

  This started failing about 12 hours ago. Looking at it with Sean, we
  think it's because it actually never worked, but nova was failing
  silently before. It's now throwing an error, which, while more correct
  (you know you didn't delete something), is a behavior change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491511/+subscriptions



[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-09-02 Thread yong sheng gong
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Ironic:
  Fix Committed
Status in neutron:
  New

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the run of py27 precedes py34, and
  can be solved by erasing the .testrepository directory and running
  "tox -e py34" first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1489059/+subscriptions



[Yahoo-eng-team] [Bug 1491469] [NEW] Contributing doc needs updating

2015-09-02 Thread Rob Cresswell
Public bug reported:

The "Contributing" document
(https://github.com/openstack/horizon/blob/master/doc/source/contributing.rst)
needs updating in line with modern Angular practices.

It also has several grammatical mistakes that can be fixed at the same
time.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
Milestone: None => liberty-3

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491469

Title:
  Contributing doc needs updating

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The "Contributing" document
  (https://github.com/openstack/horizon/blob/master/doc/source/contributing.rst)
  needs updating in line with modern Angular practices.

  It also has several grammatical mistakes that can be fixed at the same
  time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491469/+subscriptions



[Yahoo-eng-team] [Bug 1491662] [NEW] upon creating vpn-service-create, the status is hung in 'pending_create' even after waiting for 40-50 sec or so

2015-09-02 Thread Madhusudhan Kandadai
Public bug reported:

Clone master branch of vpnaas repo.

Create necessary two networks, routers, subnets, boot up two vms.

Create an ikepolicy, an ipsecpolicy, and a vpn-service.

neutron vpn-service-create --name myvpnA --description "My vpn serviceA"
 

It is created, but the status is still shown as 'PENDING_CREATE'.
Checking the status again after about 40 seconds shows no change.

docker@ubuntu:~/devstack$ neutron vpn-service-show myvpnA
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| description    | My vpnA service                      |
| external_v4_ip | 172.24.4.3                           |
| external_v6_ip | 2001:db8::3                          |
| id             | 2077e692-ceee-4ae0-8564-80a114a95e2d |
| name           | myvpnA                               |
| router_id      | 5686f412-6dcd-4ff9-9f03-e559c00ed0f5 |
| status         | PENDING_CREATE                       |
| subnet_id      | eae807fe-21a7-41eb-bbaa-ae229e4a271c |
| tenant_id      | 166bec794e52459589aa1749d195238e     |
+----------------+--------------------------------------+

This worked fine about 6-7 hours back; it looks like the latest patch
set broke it.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Summary changed:

- upon creating vpn-service-create, the status is hung in 'pending_create' 
status
+ upon creating vpn-service-create, the status is hung in 'pending_create'  
even after waiting for 40-50 sec or so

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491662

Title:
  upon creating vpn-service-create, the status is hung in
  'pending_create'  even after waiting for 40-50 sec or so

Status in neutron:
  New

Bug description:
  Clone master branch of vpnaas repo.

  Create necessary two networks, routers, subnets, boot up two vms.

  Create an ikepolicy, an ipsecpolicy, and a vpn-service.

  neutron vpn-service-create --name myvpnA --description "My vpn
  serviceA"  

  It is created, but the status is still shown as 'PENDING_CREATE'.
  Checking the status again after about 40 seconds shows no change.

  docker@ubuntu:~/devstack$ neutron vpn-service-show myvpnA
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | admin_state_up | True                                 |
  | description    | My vpnA service                      |
  | external_v4_ip | 172.24.4.3                           |
  | external_v6_ip | 2001:db8::3                          |
  | id             | 2077e692-ceee-4ae0-8564-80a114a95e2d |
  | name           | myvpnA                               |
  | router_id      | 5686f412-6dcd-4ff9-9f03-e559c00ed0f5 |
  | status         | PENDING_CREATE                       |
  | subnet_id      | eae807fe-21a7-41eb-bbaa-ae229e4a271c |
  | tenant_id      | 166bec794e52459589aa1749d195238e     |
  +----------------+--------------------------------------+

  This worked fine about 6-7 hours back; it looks like the latest patch
  set broke it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491662/+subscriptions



[Yahoo-eng-team] [Bug 1430365] Re: config drive table force_config_drive parameter values do not match Impl

2015-09-02 Thread Lana
** Changed in: openstack-manuals
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430365

Title:
  config drive table force_config_drive parameter  values do not match
  Impl

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  The config drive table [1] describes for the force_config_drive parameter the 
following options: 
  Default: None
  Other Options: always

  But looking at the code [2], only the following values are allowed:
  always, True, False

  What does the current default 'None' mean? Is it really a value? Or
  does it describe the absence of the parameter? I could not configure
  it for the force_config_drive option. The absence of the parameter is
  the same as 'False', so I guess that's the correct default, isn't it?

  The docs should be updated accordingly.

  
  [1] 
https://github.com/openstack/openstack-manuals/blob/master/doc/common/tables/nova-configdrive.xml
  [2] https://github.com/openstack/nova/blob/master/nova/virt/configdrive.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430365/+subscriptions



[Yahoo-eng-team] [Bug 1490796] Re: [SRU] cloud-init must check/format Azure ephemeral disks each boot

2015-09-02 Thread Chris J Arges
** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1490796

Title:
  [SRU] cloud-init must check/format Azure ephemeral disks each boot

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  In Progress
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Vivid:
  New

Bug description:
  
  [Impact]
  Sometimes when rebooting an instance in Azure, a different ephemeral disk 
will be presented. Azure ephemeral disks are presented as NTFS disks, but we 
reformat them to ext4.  With the last cloud-init upload, we shifted to using 
/dev/disk symlinks to mount these disks (in case they are not presented as the 
same physical device).  Unfortunately, the code that determines if we have a 
new ephemeral disk was not updated to handle symlinks, so never detects a new 
disk.
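The symlink comparison described in the [Impact] section can be sketched as follows (illustrative; not the actual cloud-init code):

```python
import os

def is_new_ephemeral_disk(configured_dev, last_used_dev):
    """Return True if the two paths refer to different block devices.

    When configured_dev is a /dev/disk/... symlink, comparing the raw
    path string against the previously recorded /dev/sdX name never
    matches, so a replaced disk is never detected. Dereferencing both
    sides with os.path.realpath makes the comparison meaningful.
    """
    return os.path.realpath(configured_dev) != os.path.realpath(last_used_dev)
```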

  [Test Case]
  1) Boot an Azure instance and install the new cloud-init.
  2) Change the size of the instance using the Azure web interface (as this 
near-guarantees that the ephemeral disk will be replaced with a new one). This 
will reboot the instance.
  3) Once the instance is rebooted, SSH in and confirm that:
   a) An ext4 ephemeral disk is mounted at /mnt, and
   b) cloud-init.log indicates that a fabric formatted ephemeral disk was found 
on this boot.

  [Regression Potential]
  Limited; two LOCs change, to dereference symlinks instead of using paths 
verbatim.

  [Original Report]
  Ubuntu 14.04.3 (20150805) on Azure with cloud-init package 0.7.5-0ubuntu1.8.

  On Azure cloud-init prepares the ephemeral device as ext4 for the
  first boot.  However, if the VM is ever moved to another Host for any
  reason, then a new ephemeral disk might be provided to the VM.  This
  ephemeral disk is NTFS formatted, so for subsequent reboots cloud-init
  must detect and reformat the new disk as ext4.  However, with cloud-
  init 0.7.5-0ubuntu1.8 subsequent boots may result in fuse mounted NTFS
  file system.

  This issue occurred in earlier versions of cloud-init, but was fixed
  with bug 1292648 (https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1292648).  So this appears to be a regression.

  Repro:
    - Create an Ubuntu 14.04.3 VM on Azure
    - Resize the VM to a larger size (this typically moves the VM)
    - Log in and run 'blkid' to show an ntfs formatted ephemeral disk:

  # blkid
  /dev/sdb1: LABEL="Temporary Storage" UUID="A43C43DD3C43A95E" TYPE="ntfs"

  Expected results:
    - After resizing the ephemeral disk should be formatted as ext4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1490796/+subscriptions



[Yahoo-eng-team] [Bug 1477741] [NEW] Execute _poll_shelved_instances only if shelved_offload_time is > 0

2015-09-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://review.openstack.org/201436
commit abf20cd57023039958091c39f6cdb775c912c9ac
Author: abhishekkekane 
Date:   Thu Jul 9 02:58:30 2015 -0700

Execute _poll_shelved_instances only if shelved_offload_time is > 0

shelved_offload_time -1 means never offload shelved instance. Currently,
the periodic task "_poll_shelved_instances" changes the vm state from
SHELVED to SHELVED_OFFLOAD even when the shelved_offload_time is set to -1.

Added check in _poll_shelved_instances to offload instances only if
shelved_offload_time is greater than 0.

DocImpact
Closes-bug: #1472946
Change-Id: I55368e961b65a5ca73718fdcd42823680f7cbfa4
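The guard described in the commit message amounts to a simple threshold check (a sketch; the real periodic task also takes a context and queries shelved instances):

```python
# -1 means "never offload"; 0 means instances are offloaded immediately
# at shelve time, so the periodic task has nothing to do. Only a
# positive shelved_offload_time requires polling.

def should_poll_shelved_instances(shelved_offload_time):
    return shelved_offload_time > 0
```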

** Affects: nova
 Importance: Medium
 Status: In Progress


** Tags: autogenerate-config-docs nova
-- 
Execute _poll_shelved_instances only if shelved_offload_time is > 0
https://bugs.launchpad.net/bugs/1477741
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1491662] Re: upon creating vpn-service-create, the status is hung in 'pending_create' even after waiting for 40-50 sec or so

2015-09-02 Thread Madhusudhan Kandadai
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491662

Title:
  upon creating vpn-service-create, the status is hung in
  'pending_create'  even after waiting for 40-50 sec or so

Status in neutron:
  Invalid

Bug description:
  Clone master branch of vpnaas repo.

  Create necessary two networks, routers, subnets, boot up two vms.

  Create an ikepolicy, an ipsecpolicy, and a vpn-service.

  neutron vpn-service-create --name myvpnA --description "My vpn
  serviceA"  

  It is created, but the status is still shown as 'PENDING_CREATE'.
  Checking the status again after about 40 seconds shows no change.

  docker@ubuntu:~/devstack$ neutron vpn-service-show myvpnA
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | admin_state_up | True                                 |
  | description    | My vpnA service                      |
  | external_v4_ip | 172.24.4.3                           |
  | external_v6_ip | 2001:db8::3                          |
  | id             | 2077e692-ceee-4ae0-8564-80a114a95e2d |
  | name           | myvpnA                               |
  | router_id      | 5686f412-6dcd-4ff9-9f03-e559c00ed0f5 |
  | status         | PENDING_CREATE                       |
  | subnet_id      | eae807fe-21a7-41eb-bbaa-ae229e4a271c |
  | tenant_id      | 166bec794e52459589aa1749d195238e     |
  +----------------+--------------------------------------+

  This worked fine about 6-7 hours back; it looks like the latest patch
  set broke it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491662/+subscriptions



[Yahoo-eng-team] [Bug 1491679] [NEW] ipsec-site-connection-create is not back to ACTIVE state after updating admin_state_up as True -> False -> True

2015-09-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

neutron vpn-service-create --name myvpnA --description "My vpnA service" 
routerA subA
neutron vpn-service-create --name myvpnB --description "My vpnB service" 
routerB subB

neutron ipsec-site-connection-create --name vpnconnectionA --vpnservice-id 
myvpnA \
--ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 
172.24.4.102 \
--peer-id 172.24.4.102 --peer-cidr 10.2.0.0/24 --psk secret

neutron ipsec-site-connection-create --name vpnconnectionB --vpnservice-id 
myvpnB \
--ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy --peer-address 
172.24.4.101 \
--peer-id 172.24.4.101 --peer-cidr 10.1.0.0/24 --psk secret

docker@ubuntu:~/devstack$ neutron ipsec-site-connection-list
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+
| id                                   | name           | peer_address | peer_cidrs    | route_mode | auth_mode | status |
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+
| 4ca689a5-180a-4661-bd5b-6182d9dad5e4 | vpnconnectionA | 172.24.4.102 | "10.2.0.0/24" | static     | psk       | ACTIVE |
| f65263bd-b5cf-46c5-a809-a376bec461d9 | vpnconnectionB | 172.24.4.101 | "10.1.0.0/24" | static     | psk       | ACTIVE |
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+

Now, set admin_state_up to False for the ipsec-site-connection
vpnconnectionA.

docker@ubuntu:~/devstack$ neutron ipsec-site-connection-list
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+
| id                                   | name           | peer_address | peer_cidrs    | route_mode | auth_mode | status |
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+
| 4ca689a5-180a-4661-bd5b-6182d9dad5e4 | vpnconnectionA | 172.24.4.102 | "10.2.0.0/24" | static     | psk       | DOWN   |
| f65263bd-b5cf-46c5-a809-a376bec461d9 | vpnconnectionB | 172.24.4.101 | "10.1.0.0/24" | static     | psk       | ACTIVE |
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+

Pinging from vm1 to vm2 does not work, and pinging from vm2 to vm1 does
not work (this is expected).

Change admin_state_up back to True: the status is still shown as DOWN,
and even the vpn-service for myvpnA is DOWN.

docker@ubuntu:~/devstack$ neutron ipsec-site-connection-show vpnconnectionA
+----------------+-----------------------------------------------------+
| Field          | Value                                               |
+----------------+-----------------------------------------------------+
| admin_state_up | True                                                |
| auth_mode      | psk                                                 |
| description    |                                                     |
| dpd            | {"action": "hold", "interval": 30, "timeout": 120}  |
| id             | 4ca689a5-180a-4661-bd5b-6182d9dad5e4                |
| ikepolicy_id   | d5112709-8909-4ce3-a7aa-99569474c812                |
| initiator      | bi-directional                                      |
| ipsecpolicy_id | 430ece29-8cf5-488a-b77d-798f0e7d455e                |
| mtu            | 1500                                                |
| name           | vpnconnectionA                                      |
| peer_address   | 172.24.4.102                                        |
| peer_cidrs     | 10.2.0.0/24                                         |
| peer_id        | 172.24.4.102                                        |
| psk            | secret                                              |
| route_mode     | static                                              |
| status         | DOWN                                                |
| tenant_id      | 7d0f12937859462bb7c1d5d012111dec                    |
| vpnservice_id  | 33311333-a6be-4b59-bedc-d3f1583459e7                |
+----------------+-----------------------------------------------------+

docker@ubuntu:~/devstack$ neutron vpn-service-show myvpnA
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| description    | My vpnA service                      |
| external_v4_ip | 172.24.4.101                         |
| external_v6_ip |                                      |
| id             | 33311333-a6be-4b59-bedc-d3f1583459e7 |
| name           | myvpnA                               |
| router_id      | 3b4fab84-6bac-4a29-8bf2-65378e342dc4 |
| status         | DOWN                                 |
| subnet_id      | 63d99342-c9e0-41a9-993d-3c2a6e0256ad |
| 

[Yahoo-eng-team] [Bug 1491668] [NEW] deprecate external_network_bridge option in L3 agent

2015-09-02 Thread Kevin Benton
Public bug reported:

The external_network_bridge option in the L3 agent allows the L3 agent
to plug directly into a bridge and skip all of the management by the L2
agent. This creates two ways to accomplish the same wiring, but it
results in differences that cause confusion and issues for debugging.

When the external_network_bridge option is used, all of the provider
properties (e.g. VLAN tags, VXLAN VNIs) of the external network are
ignored. So we end up with scenarios where users will create an external
network with a VLAN tag, attach a router to it, and then complain when
it's not sending the correct tagged traffic. It also means that features
added to the L2 agent will not apply to router ports (e.g. enhanced
debugging, QoS, port mirroring, etc).

The appropriate way to do this is to define a physnet for the external
network (e.g. 'external') and then create a bridge_mapping entry for it
on the L2 agent that maps it to the external bridge (e.g. 'external:br-
ex'). Then when the external Neutron network is created, it should be
created with the 'flat' provider type and the 'external' provider
physnet.

We should deprecate external_network_bridge in L and remove it in M to
migrate people to the more consistent approach with bridge_mappings.
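For illustration, the bridge_mappings wiring described above could look like the following; the bridge, physnet, and network names here are assumptions for the example, not values taken from this report:

```shell
# On the L2 (Open vSwitch) agent, map a physnet name to the external
# bridge, e.g. in the agent's ML2/OVS configuration file:
#
#   [ovs]
#   bridge_mappings = external:br-ex
#
# Then create the external Neutron network as a flat network on that
# physnet, so the L2 agent manages the router port like any other port:
neutron net-create ext-net --router:external \
  --provider:network_type flat \
  --provider:physical_network external
```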

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491668

Title:
  deprecate external_network_bridge option in L3 agent

Status in neutron:
  In Progress

Bug description:
  The external_network_bridge option in the L3 agent allows the L3 agent
  to plug directly into a bridge and skip all of the management by the
  L2 agent. This creates two ways to accomplish the same wiring, but it
  results in differences that cause confusion and issues for debugging.

  When the external_network_bridge option is used, all of the provider
  properties (e.g. VLAN tags, VXLAN VNIs) of the external network are
  ignored. So we end up with scenarios where users will create an
  external network with a VLAN tag, attach a router to it, and then
  complain when it's not sending the correct tagged traffic. It also
  means that features added to the L2 agent will not apply to router
  ports (e.g. enhanced debugging, QoS, port mirroring, etc).

  The appropriate way to do this is to define a physnet for the external
  network (e.g. 'external') and then create a bridge_mapping entry for
  it on the L2 agent that maps it to the external bridge (e.g.
  'external:br-ex'). Then when the external Neutron network is created, it should
  be created with the 'flat' provider type and the 'external' provider
  physnet.

  We should deprecate external_network_bridge in L and remove it in M to
  migrate people to the more consistent approach with bridge_mappings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477741] Re: Execute _poll_shelved_instances only if shelved_offload_time is > 0

2015-09-02 Thread Lana
** Project changed: openstack-manuals => nova

** Changed in: nova
Milestone: liberty => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477741

Title:
  Execute _poll_shelved_instances only if shelved_offload_time is >
  0

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  https://review.openstack.org/201436
  commit abf20cd57023039958091c39f6cdb775c912c9ac
  Author: abhishekkekane 
  Date:   Thu Jul 9 02:58:30 2015 -0700

  Execute _poll_shelved_instances only if shelved_offload_time is > 0
  
  shelved_offload_time -1 means never offload shelved instance. Currently,
  the periodic task "_poll_shelved_instances" changes the vm state from
   SHELVED to SHELVED_OFFLOAD even when the shelved_offload_time is set to -1.
  
  Added check in _poll_shelved_instances to offload instances only if
  shelved_offload_time is greater than 0.
  
  DocImpact
  Closes-bug: #1472946
  Change-Id: I55368e961b65a5ca73718fdcd42823680f7cbfa4
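The check described in the commit message can be sketched as a simple predicate. This is a simplified illustration, not the actual nova code, and the function name is hypothetical:

```python
def should_poll_offload(shelved_offload_time):
    """Whether the _poll_shelved_instances periodic task should offload.

    -1 means never offload a shelved instance, and 0 means the instance
    is offloaded immediately at shelve time (handled outside the
    periodic task), so only a strictly positive timeout should let the
    periodic task move instances from SHELVED to SHELVED_OFFLOADED.
    """
    return shelved_offload_time > 0
```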

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491679] [NEW] ipsec-site-connection-create is not back to ACTIVE state after updating admin_state_up as True -> False -> True

2015-09-02 Thread Madhusudhan Kandadai
Public bug reported:

neutron vpn-service-create --name myvpnA \
  --description "My vpnA service" routerA subA
neutron vpn-service-create --name myvpnB \
  --description "My vpnB service" routerB subB

neutron ipsec-site-connection-create --name vpnconnectionA \
  --vpnservice-id myvpnA \
  --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy \
  --peer-address 172.24.4.102 --peer-id 172.24.4.102 \
  --peer-cidr 10.2.0.0/24 --psk secret

neutron ipsec-site-connection-create --name vpnconnectionB \
  --vpnservice-id myvpnB \
  --ikepolicy-id ikepolicy --ipsecpolicy-id ipsecpolicy \
  --peer-address 172.24.4.101 --peer-id 172.24.4.101 \
  --peer-cidr 10.1.0.0/24 --psk secret

docker@ubuntu:~/devstack$ neutron ipsec-site-connection-list
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+
| id                                   | name           | peer_address | peer_cidrs    | route_mode | auth_mode | status |
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+
| 4ca689a5-180a-4661-bd5b-6182d9dad5e4 | vpnconnectionA | 172.24.4.102 | "10.2.0.0/24" | static     | psk       | ACTIVE |
| f65263bd-b5cf-46c5-a809-a376bec461d9 | vpnconnectionB | 172.24.4.101 | "10.1.0.0/24" | static     | psk       | ACTIVE |
+--------------------------------------+----------------+--------------+---------------+------------+-----------+--------+

Now, set admin_state_up to False for the ipsec-site-connection
vpnconnectionA.

docker@ubuntu:~/devstack$ neutron ipsec-site-connection-list
+--++--+---++---++
| id   | name   | peer_address | 
peer_cidrs| route_mode | auth_mode | status |
+--++--+---++---++
| 4ca689a5-180a-4661-bd5b-6182d9dad5e4 | vpnconnectionA | 172.24.4.102 | 
"10.2.0.0/24" | static | psk   | DOWN   |
| f65263bd-b5cf-46c5-a809-a376bec461d9 | vpnconnectionB | 172.24.4.101 | 
"10.1.0.0/24" | static | psk   | ACTIVE |
+--++--+---++---++

Pinging from vm1 to vm2 does not work, and pinging from vm2 to vm1 does
not work (this is expected).

Change admin_state_up back to True: the status is still shown as DOWN,
and even the vpn-service for myvpnA remains DOWN.

docker@ubuntu:~/devstack$ neutron ipsec-site-connection-show vpnconnectionA
+----------------+----------------------------------------------------+
| Field          | Value                                              |
+----------------+----------------------------------------------------+
| admin_state_up | True                                               |
| auth_mode      | psk                                                |
| description    |                                                    |
| dpd            | {"action": "hold", "interval": 30, "timeout": 120} |
| id             | 4ca689a5-180a-4661-bd5b-6182d9dad5e4               |
| ikepolicy_id   | d5112709-8909-4ce3-a7aa-99569474c812               |
| initiator      | bi-directional                                     |
| ipsecpolicy_id | 430ece29-8cf5-488a-b77d-798f0e7d455e               |
| mtu            | 1500                                               |
| name           | vpnconnectionA                                     |
| peer_address   | 172.24.4.102                                       |
| peer_cidrs     | 10.2.0.0/24                                        |
| peer_id        | 172.24.4.102                                       |
| psk            | secret                                             |
| route_mode     | static                                             |
| status         | DOWN                                               |
| tenant_id      | 7d0f12937859462bb7c1d5d012111dec                   |
| vpnservice_id  | 33311333-a6be-4b59-bedc-d3f1583459e7               |
+----------------+----------------------------------------------------+

docker@ubuntu:~/devstack$ neutron vpn-service-show myvpnA
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| description    | My vpnA service                      |
| external_v4_ip | 172.24.4.101                         |
| external_v6_ip |                                      |
| id             | 33311333-a6be-4b59-bedc-d3f1583459e7 |
| name           | myvpnA                               |
| router_id      | 3b4fab84-6bac-4a29-8bf2-65378e342dc4 |
| status         | DOWN                                 |
| subnet_id      | 63d99342-c9e0-41a9-993d-3c2a6e0256ad |
| tenant_id  | 

[Yahoo-eng-team] [Bug 1381017] [NEW] PCI passthrough_whitelist configuration option documentation

2015-09-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

pci_passthrough_whitelist can be specified in different ways, but the current
documentation of pci_passthrough_whitelist in nova.conf, captured in the
OpenStack Configuration Reference guide, covers only a limited set of options.
The available options need to be documented as defined in the
https://github.com/openstack/nova-specs/blob/master/specs/juno/implemented/pci-passthrough-sriov.rst
specification.

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: autogenerate-config-docs nova
-- 
PCI  passthrough_whitelist  configuration option documentation
https://bugs.launchpad.net/bugs/1381017
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491690] [NEW] IPv6 Address Resolution protection support in Neutron.

2015-09-02 Thread Sridhar Gaddam
Public bug reported:

Similar to IPv4 arp protection support (Bug#1274034), we would require
Neutron to add the necessary OVS rules to prevent ports attached to
agent from sending any icmpv6 Neighbor Advertisement messages that
contain an IPv6 address not belonging to the port.

For more details, please refer to "Figure 3. Attack against IPv6 Address 
Resolution"
http://www.cisco.com/web/about/security/intelligence/ipv6_first_hop.html
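By analogy with the existing ARP-protection flows in the OVS agent, the rules requested here might look roughly like the following; the bridge name, table number, port number, and IPv6 address are illustrative assumptions, not taken from the eventual implementation:

```shell
# Permit neighbor advertisements (ICMPv6 type 136) from port 10 only
# when they advertise the port's own address...
ovs-ofctl add-flow br-int \
  "table=24,priority=2,icmp6,icmp_type=136,in_port=10,nd_target=2001:db8::5,actions=normal"
# ...and drop any other neighbor advertisement coming from that port.
ovs-ofctl add-flow br-int \
  "table=24,priority=1,icmp6,icmp_type=136,in_port=10,actions=drop"
```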

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491690

Title:
  IPv6 Address Resolution protection support in Neutron.

Status in neutron:
  In Progress

Bug description:
  Similar to IPv4 arp protection support (Bug#1274034), we would require
  Neutron to add the necessary OVS rules to prevent ports attached to
  agent from sending any icmpv6 Neighbor Advertisement messages that
  contain an IPv6 address not belonging to the port.

  For more details, please refer to "Figure 3. Attack against IPv6 Address 
Resolution"
  http://www.cisco.com/web/about/security/intelligence/ipv6_first_hop.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381017] Re: PCI passthrough_whitelist configuration option documentation

2015-09-02 Thread Lana
** Project changed: openstack-manuals => nova

** Changed in: nova
Milestone: liberty => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381017

Title:
  PCI  passthrough_whitelist  configuration option documentation

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  pci_passthrough_whitelist can be specified in different ways, but the current
  documentation of pci_passthrough_whitelist in nova.conf, captured in the
  OpenStack Configuration Reference guide, covers only a limited set of options.
  The available options need to be documented as defined in the
  https://github.com/openstack/nova-specs/blob/master/specs/juno/implemented/pci-passthrough-sriov.rst
  specification.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400814] [NEW] Libvirt: SMB volume driver

2015-09-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://review.openstack.org/131734
commit 561f8afa5fbc7d94cd65616225597850585d909f
Author: Lucian Petrut 
Date:   Tue Oct 28 14:49:26 2014 +0200

Libvirt: SMB volume driver

Currently, there are Libvirt volume drivers that support
network-attached file systems such as Gluster or NFS. This patch
adds a new volume driver in order to support attaching volumes
hosted on SMB shares.

Co-Authored-By: Gabriel Samfira 

DocImpact

Change-Id: I1db3d2a6d8ee94932348c63cc03698fdefff0b5c
Implements: blueprint libvirt-smbfs-volume-support

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: nova
-- 
Libvirt: SMB volume driver
https://bugs.launchpad.net/bugs/1400814
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474204] Re: vlan id are invalid

2015-09-02 Thread Assaf Muller
Please try ask.openstack.org, thank you!

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474204

Title:
  vlan id are invalid

Status in neutron:
  Invalid

Bug description:
  I configured CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:2:4096, but the error
  "Error: vlan id are invalid. on node controller_251" appears, and I don't
  know what vlan_min and vlan_max are.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474204] [NEW] vlan id are invalid

2015-09-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I configured CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:2:4096, but the error
"Error: vlan id are invalid. on node controller_251" appears, and I don't
know what vlan_min and vlan_max are.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
vlan id are invalid
https://bugs.launchpad.net/bugs/1474204
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491579] Re: against all sanity, nova needs to work around broken public clouds

2015-09-02 Thread Matt Riedemann
** Changed in: nova
   Importance: Critical => High

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
   Status: New => Confirmed

** No longer affects: nova

** Changed in: python-novaclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491579

Title:
  against all sanity, nova needs to work around broken public clouds

Status in python-novaclient:
  Confirmed

Bug description:
  The version listing endpoint is clearly a giant security risk, so
  the intrepid operators of Rackspace, HP, Dreamhost, Auro and RunAbove
  have all decided, in their infinite wisdom and deference to magical
  ponies that BLOCKING it is a great idea.

  Luckily, a non-zero number of them have decided to block it by hanging
  connections rather than throwing a 401.

  http://paste.openstack.org/show/442384/

  In any case, the end result is that those clouds are unusable with the
  latest release of novaclient, which means it's probably on the nova
  team to figure out a workaround.

  After this bug is fixed, a defcore test should be added and it should
  be very non-optional.

  Also, everyone involved with blocking the VERSION LISTING endpoint
  should feel very ashamed of the stance against progress, joy and
  light.

  On the other hand, please everyone join me in rejoicing in the
  excellent work by the teams at vexxhost and UnitedStack who are
  running excellent clouds that do not exhibit this pox.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1491579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491579] [NEW] against all sanity, nova needs to work around broken public clouds

2015-09-02 Thread Monty Taylor
Public bug reported:

The version listing endpoint is clearly a giant security risk, so the
intrepid operators of Rackspace, HP, Dreamhost, Auro and RunAbove have
all decided, in their infinite wisdom and deference to magical ponies
that BLOCKING it is a great idea.

Luckily, a non-zero number of them have decided to block it by hanging
connections rather than throwing a 401.

http://paste.openstack.org/show/442384/

In any case, the end result is that those clouds are unusable with the
latest release of novaclient, which means it's probably on the nova team
to figure out a workaround.

After this bug is fixed, a defcore test should be added and it should be
very non-optional.

Also, everyone involved with blocking the VERSION LISTING endpoint
should feel very ashamed of the stance against progress, joy and light.

On the other hand, please everyone join me in rejoicing in the excellent
work by the teams at vexxhost and UnitedStack who are running excellent
clouds that do not exhibit this pox.

** Affects: python-novaclient
 Importance: Critical
 Status: Confirmed


** Tags: liberty-rc-blocker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491579

Title:
  against all sanity, nova needs to work around broken public clouds

Status in python-novaclient:
  Confirmed

Bug description:
  The version listing endpoint is clearly a giant security risk, so
  the intrepid operators of Rackspace, HP, Dreamhost, Auro and RunAbove
  have all decided, in their infinite wisdom and deference to magical
  ponies that BLOCKING it is a great idea.

  Luckily, a non-zero number of them have decided to block it by hanging
  connections rather than throwing a 401.

  http://paste.openstack.org/show/442384/

  In any case, the end result is that those clouds are unusable with the
  latest release of novaclient, which means it's probably on the nova
  team to figure out a workaround.

  After this bug is fixed, a defcore test should be added and it should
  be very non-optional.

  Also, everyone involved with blocking the VERSION LISTING endpoint
  should feel very ashamed of the stance against progress, joy and
  light.

  On the other hand, please everyone join me in rejoicing in the
  excellent work by the teams at vexxhost and UnitedStack who are
  running excellent clouds that do not exhibit this pox.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1491579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491581] [NEW] Some functional tests use 'sudo' alone without rootwrap

2015-09-02 Thread Assaf Muller
Public bug reported:

While looking at functional test console outputs, Ihar noticed that some
tests are executing 'sudo' without going through rootwrap.

Example:
http://logs.openstack.org/03/216603/4/check/gate-neutron-dsvm-functional/9ed19a4/console.html
 (CTRL-F for "['sudo'").
(Command: ['sudo', 'kill', '-15', '22920']).

Patch https://review.openstack.org/#/c/114717/ added a gate hook for the
Neutron functional job, and it's possible that since then we've been
allowing 'naked sudo' at the functional gate, because stripping the
'stack' user from 'sudo' was previously being done by the gate_hook in
the devstack_gate project.
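As a rough sketch of the intended fix, such commands should be prefixed with the configured root helper rather than bare sudo. The helper string shown is the conventional neutron one, and the function name is hypothetical:

```python
def with_root_helper(cmd,
                     root_helper='sudo neutron-rootwrap /etc/neutron/rootwrap.conf'):
    """Prefix a command list with the rootwrap helper instead of bare sudo,
    so the filters in rootwrap.conf decide what may run as root."""
    return root_helper.split() + list(cmd)

# e.g. ['kill', '-15', '22920'] becomes
# ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'kill', '-15', '22920']
```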

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491581

Title:
  Some functional tests use 'sudo' alone without rootwrap

Status in neutron:
  New

Bug description:
  While looking at functional test console outputs, Ihar noticed that
  some tests are executing 'sudo' without going through rootwrap.

  Example:
  
http://logs.openstack.org/03/216603/4/check/gate-neutron-dsvm-functional/9ed19a4/console.html
 (CTRL-F for "['sudo'").
  (Command: ['sudo', 'kill', '-15', '22920']).

  Patch https://review.openstack.org/#/c/114717/ added a gate hook for
  the Neutron functional job, and it's possible that since then we've
  been allowing 'naked sudo' at the functional gate, because stripping
  the 'stack' user from 'sudo' was previously being done by the
  gate_hook in the devstack_gate project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491586] [NEW] Boot from image (Creates a new volume) does not work

2015-09-02 Thread Erlon R. Cruz
Public bug reported:

The instance creation fails with:

Invalid input for field/attribute delete_on_termination. Value: 1. 1 is
not of type 'boolean', 'string' (HTTP 400)
(Request-ID: req-7cd57330-cbfc-40ab-9622-084edc2f4d57)
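The 400 above suggests the form is submitting the integer 1 where the API schema expects a boolean or boolean-like string. Assuming that reading of the message, a minimal client-side normalization (with a hypothetical function name) would be:

```python
def normalize_delete_on_termination(value):
    """Coerce form values such as 1/0 or 'on' into a real boolean, since
    the API rejects bare integers for delete_on_termination."""
    if isinstance(value, str):
        return value.strip().lower() in ('1', 'true', 'on', 'yes')
    return bool(value)
```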

** Affects: horizon
 Importance: Undecided
 Assignee: Erlon R. Cruz (sombrafam)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491586

Title:
  Boot from image (Creates a new volume) does not work

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The instance creation fails with:

  Invalid input for field/attribute delete_on_termination. Value: 1. 1
  is not of type 'boolean', 'string' (HTTP 400)
  (Request-ID: req-7cd57330-cbfc-40ab-9622-084edc2f4d57)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491563] [NEW] Failed to spawn lxc instance due to nbd mount in use or rootfs busy

2015-09-02 Thread Matt Riedemann
Public bug reported:

I have a devstack change up which runs tempest with nova + the lxc
backend and it fails some instances on boot because the nbd device is in
use or the rootfs mount is busy:

https://review.openstack.org/#/c/219448/

http://logs.openstack.org/48/219448/1/check/gate-tempest-dsvm-full/bc0fc2a/logs/screen-n-cpu.txt.gz?level=TRACE

It looks like we're leaking nbd devices and failing to tear down
instances properly, e.g.:

2015-09-01 22:10:50.141 ERROR nova.virt.disk.api [req-7e5384a8-34f3-4e3a-98cd-b4bb544804a7 tempest-ServersAdminTestJSON-276960936 tempest-ServersAdminTestJSON-718419646] Failed to mount container filesystem '' on '/opt/stack/data/nova/instances/a9317161-ad31-4948-9c2d-3d4899718a78/rootfs': --
Failed to mount filesystem: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd4 /opt/stack/data/nova/instances/a9317161-ad31-4948-9c2d-3d4899718a78/rootfs
Exit code: 32
Stdout: u''
Stderr: u'mount: /dev/nbd4 already mounted or /opt/stack/data/nova/instances/a9317161-ad31-4948-9c2d-3d4899718a78/rootfs busy\n'
2015-09-01 22:10:50.142 ERROR nova.compute.manager [req-7e5384a8-34f3-4e3a-98cd-b4bb544804a7 tempest-ServersAdminTestJSON-276960936 tempest-ServersAdminTestJSON-718419646] [instance: a9317161-ad31-4948-9c2d-3d4899718a78] Instance failed to spawn
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78] Traceback (most recent call last):
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2138, in _build_resources
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     yield resources
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2008, in _build_and_run_instance
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     block_device_info=block_device_info)
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2449, in spawn
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     block_device_info=block_device_info)
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4521, in _create_domain_and_network
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     block_device_info, disk_info):
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     return self.gen.next()
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4430, in _lxc_disk_handler
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     block_device_info, disk_info)
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4382, in _create_domain_setup_lxc
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     container_dir=container_dir)
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]   File "/opt/stack/new/nova/nova/virt/disk/api.py", line 452, in setup_container
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78]     raise exception.NovaException(img.errors)
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78] NovaException: 
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78] --
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78] Failed to mount filesystem: Unexpected error while running command.
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd4 /opt/stack/data/nova/instances/a9317161-ad31-4948-9c2d-3d4899718a78/rootfs
2015-09-01 22:10:50.142 4548 ERROR nova.compute.manager [instance: a9317161-ad31-4948-9c2d-3d4899718a78] Exit code: 32
2015-09-01 22:10:50.142 4548 ERROR 

[Yahoo-eng-team] [Bug 1474204] Re: vlan id are invalid

2015-09-02 Thread Matt Fischer
Not a puppet bug.

** Project changed: puppet-neutron => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474204

Title:
  vlan id are invalid

Status in neutron:
  New

Bug description:
  I configured CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:2:4096, but the error
  "Error: vlan id are invalid. on node controller_251" appears, and I don't
  know what vlan_min and vlan_max are.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490796] Re: [SRU] cloud-init must check/format Azure ephemeral disks each boot

2015-09-02 Thread Ben Howard
** Also affects: cloud-init (Ubuntu Precise)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1490796

Title:
  [SRU] cloud-init must check/format Azure ephemeral disks each boot

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  In Progress
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Vivid:
  New

Bug description:
  
  [Impact]
  Sometimes when rebooting an instance in Azure, a different ephemeral disk 
will be presented. Azure ephemeral disks are presented as NTFS disks, but we 
reformat them to ext4.  With the last cloud-init upload, we shifted to using 
/dev/disk symlinks to mount these disks (in case they are not presented as the 
same physical device).  Unfortunately, the code that determines if we have a 
new ephemeral disk was not updated to handle symlinks, so never detects a new 
disk.

  [Test Case]
  1) Boot an Azure instance and install the new cloud-init.
  2) Change the size of the instance using the Azure web interface (as this 
near-guarantees that the ephemeral disk will be replaced with a new one). This 
will reboot the instance.
  3) Once the instance is rebooted, SSH in and confirm that:
   a) An ext4 ephemeral disk is mounted at /mnt, and
   b) cloud-init.log indicates that a fabric formatted ephemeral disk was found 
on this boot.

  [Regression Potential]
  Limited; two LOCs change, to dereference symlinks instead of using paths 
verbatim.
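The two-LOC change described above amounts to dereferencing the /dev/disk symlink before comparing devices. A minimal sketch of that idea (the function name is illustrative, not cloud-init's actual code):

```python
import os

def resolve_device(path):
    # /dev/disk/by-* entries are symlinks; dereference them so that
    # "is this the same ephemeral disk as last boot?" compares the real
    # block device (e.g. /dev/sdb), not the symlink path.
    return os.path.realpath(path)
```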

  [Original Report]
  Ubuntu 14.04.3 (20150805) on Azure with cloud-init package 0.7.5-0ubuntu1.8.

  On Azure cloud-init prepares the ephemeral device as ext4 for the
  first boot.  However, if the VM is ever moved to another Host for any
  reason, then a new ephemeral disk might be provided to the VM.  This
  ephemeral disk is NTFS formatted, so for subsequent reboots cloud-init
  must detect and reformat the new disk as ext4.  However, with cloud-
  init 0.7.5-0ubuntu1.8 subsequent boots may result in fuse mounted NTFS
  file system.

  This issue occurred in earlier versions of cloud-init, but was fixed
  with bug 1292648 (https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1292648).  So this appears to be a regression.

  Repro:
    - Create an Ubuntu 14.04.3 VM on Azure
    - Resize the VM to a larger size (this typically moves the VM)
    - Log in and run 'blkid' to show an ntfs formatted ephemeral disk:

  # blkid
  /dev/sdb1: LABEL="Temporary Storage" UUID="A43C43DD3C43A95E" TYPE="ntfs"

  Expected results:
    - After resizing the ephemeral disk should be formatted as ext4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1490796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-09-02 Thread Travis McPeak
There isn't anything specific to OpenStack about this.  Since we don't
really have any good advice other than limiting connections, I'm closing
the OSSN for this.  Anybody interested should look into "slowloris
attacks".

** Changed in: ossn
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Won't Fix
Status in Sahara:
  Confirmed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep, check whether the client socket connection
          # is released on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if keepalive is
configured to True, which is the default.
  In order to close the client socket connection explicitly after the response
is sent and read successfully by the client, you simply have to set keepalive
to False when you create the wsgi server.

  Additional information: by default, eventlet sends "Connection: keep-alive"
when keepalive is set to True and a response is sent to the client, but it has
no capability to set the timeout and max parameters, for example:
  Keep-Alive: timeout=10, max=5

  Note: after keepalive is disabled in all the OpenStack API services that
  use the wsgi library, it might impact existing applications built on the
  assumption that OpenStack API services use persistent connections. They
  might need to modify their applications if reconnection logic is not in
  place, and they might also see slower performance, since the HTTP
  connection must be re-established for every request.
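The server-side change described above can be sketched as follows; the trivial WSGI app is illustrative, and the eventlet call shown in the comment is the standard eventlet.wsgi entry point with its keepalive parameter:

```python
def app(environ, start_response):
    # Trivial WSGI app used only to illustrate the server-side setting.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

# With eventlet's wsgi server, passing keepalive=False closes the client
# socket as soon as the response is written, so the green thread returns
# to the pool immediately instead of blocking until the client disconnects:
#
#   import eventlet
#   from eventlet import wsgi
#   wsgi.server(eventlet.listen(('127.0.0.1', 8774)), app, keepalive=False)
```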

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1491645] [NEW] Launch instance from Volumes Page opens LEGACY launch instance, even if LEGACY is set to False in local_settings

2015-09-02 Thread Amogh
Public bug reported:

Launch instance from Volumes Page opens LEGACY launch instance, even if LEGACY 
is set to False in local_settings
1. Create bootable volume.
2. Launch the instance from the volume; observe that the 'legacy' launch
instance dialog opens. local_settings contains the settings below:

LAUNCH_INSTANCE_NG_ENABLED = True
LAUNCH_INSTANCE_LEGACY_ENABLED = False

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491645

Title:
  Launch instance from Volumes Page opens LEGACY launch instance, even
  if LEGACY is set to False in local_settings

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Launch instance from Volumes Page opens LEGACY launch instance, even if 
LEGACY is set to False in local_settings
  1. Create bootable volume.
  2. Launch the instance from the volume; observe that the 'legacy' launch
instance dialog opens. local_settings contains the settings below:

  LAUNCH_INSTANCE_NG_ENABLED = True
  LAUNCH_INSTANCE_LEGACY_ENABLED = False

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1163569] Re: security groups don't work with vip and ovs plugin

2015-09-02 Thread Robert Clark
Two OSSN members have tried and failed to write an OSSN for this.

For now I've set the status to wont-fix. If there's someone from Neutron
who can help with the OSSN we'll re-open.

** Changed in: ossn
 Assignee: Steven Weston (steve.weston) => (unassigned)

** Changed in: ossn
   Status: In Progress => New

** Changed in: ossn
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1163569

Title:
  security groups don't work with vip and ovs plugin

Status in neutron:
  Confirmed
Status in OpenStack Security Notes:
  Won't Fix

Bug description:
  http://codepad.org/xU8G4s00

  I pinged nachi and he suggested to try using:

  interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver

  But after setting this it seems like the vip does not work at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1163569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491622] [NEW] Images "shared with me" misleading

2015-09-02 Thread David Medberry
Public bug reported:

The horizon project images tab shows three sub tabs: Project (count),
Shared with me (count), and Public (count)

This is somewhat misleading, as glance images are shared with a project
(formerly known as a tenant), not with users. Users see this and are
afraid other users on the same project cannot see the image since it is
shared with "me". They view me as user_id level scoping/association vs.
project/tenant level scoping/association. So users file tickets to go
"huh". Can we fix this please?

A couple of other possible labels for this:
* Just "shared"
* "Project-wide shared"
* STDs (implying others have already been intimately involved with this 
non-public image)
* "Cross-project Shared"

etc.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491622

Title:
  Images "shared with me" misleading

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The horizon project images tab shows three sub tabs: Project (count),
  Shared with me (count), and Public (count)

  This is somewhat misleading, as glance images are shared with a project
  (formerly known as a tenant), not with users. Users see this and are
  afraid other users on the same project cannot see the image since it
  is shared with "me". They view me as user_id level scoping/association
  vs. project/tenant level scoping/association. So users file tickets to
  go "huh". Can we fix this please?

  A couple of other possible labels for this:
  * Just "shared"
  * "Project-wide shared"
  * STDs (implying others have already been intimately involved with this 
non-public image)
  * "Cross-project Shared"

  etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491637] [NEW] Error when adding a new Firewall Rule

2015-09-02 Thread Lin Hua Cheng
Public bug reported:

Reported from openstack-dev ML.

When I try to add a new firewall rule, horizon breaks with a "'NoneType'
object has no attribute 'id'" error.
This was fine about 10 hours ago; it seems one of the latest commits broke it.

Traceback in horizon:

2015-09-02 16:15:35.337872 return nodelist.render(context)
2015-09-02 16:15:35.337877   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 903, in 
render
2015-09-02 16:15:35.337893 bit = self.render_node(node, context)
2015-09-02 16:15:35.337899   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 79, in 
render_node
2015-09-02 16:15:35.337903 return node.render(context)
2015-09-02 16:15:35.337908   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 89, in 
render
2015-09-02 16:15:35.337913 output = self.filter_expression.resolve(context)
2015-09-02 16:15:35.337917   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 647, in 
resolve
2015-09-02 16:15:35.337922 obj = self.var.resolve(context)
2015-09-02 16:15:35.337927   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 787, in 
resolve
2015-09-02 16:15:35.337931 value = self._resolve_lookup(context)
2015-09-02 16:15:35.337936   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 825, in 
_resolve_lookup
2015-09-02 16:15:35.337940 current = getattr(current, bit)
2015-09-02 16:15:35.337945   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 
59, in attr_string
2015-09-02 16:15:35.337950 return flatatt(self.get_final_attrs())
2015-09-02 16:15:35.337954   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 
42, in get_final_attrs
2015-09-02 16:15:35.337959 final_attrs['class'] = self.get_final_css()
2015-09-02 16:15:35.337964   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 
47, in get_final_css
2015-09-02 16:15:35.337981 default = " ".join(self.get_default_classes())
2015-09-02 16:15:35.337986   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", 
line 792, in get_default_classes
2015-09-02 16:15:35.337991 if not self.url:
2015-09-02 16:15:35.337995   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", 
line 756, in url
2015-09-02 16:15:35.338000 url = self.column.get_link_url(self.datum)
2015-09-02 16:15:35.338004   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", 
line 431, in get_link_url
2015-09-02 16:15:35.338009 return self.link(datum)
2015-09-02 16:15:35.338014   File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/firewalls/tables.py",
 line 261, in get_policy_link
2015-09-02 16:15:35.338019 kwargs={'policy_id': datum.policy.id})
2015-09-02 16:15:35.338023 AttributeError: 'NoneType' object has no attribute 
'id'
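The traceback ends in get_policy_link dereferencing datum.policy.id. A hedged sketch of the kind of guard that would avoid the crash (the URL string below is illustrative, not Horizon's actual reverse() call):

```python
def get_policy_link(datum):
    # A firewall rule may not yet be associated with a policy, in which
    # case datum.policy is None; render no link instead of crashing.
    if getattr(datum, 'policy', None) is None:
        return None
    return '/project/firewalls/policies/%s/' % datum.policy.id
```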

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491637

Title:
  Error when adding a new Firewall Rule

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Reported from openstack-dev ML.

  When I try to add a new firewall rule, horizon breaks with a "'NoneType'
object has no attribute 'id'" error.
  This was fine about 10 hours ago; it seems one of the latest commits broke
it.

  Traceback in horizon:

  2015-09-02 16:15:35.337872 return nodelist.render(context)
  2015-09-02 16:15:35.337877   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 903, in 
render
  2015-09-02 16:15:35.337893 bit = self.render_node(node, context)
  2015-09-02 16:15:35.337899   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 79, in 
render_node
  2015-09-02 16:15:35.337903 return node.render(context)
  2015-09-02 16:15:35.337908   File 
"/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 89, in 
render
  2015-09-02 16:15:35.337913 output = 
self.filter_expression.resolve(context)
  2015-09-02 16:15:35.337917   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 647, in 
resolve
  2015-09-02 16:15:35.337922 obj = self.var.resolve(context)
  2015-09-02 16:15:35.337927   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 787, in 
resolve
  2015-09-02 16:15:35.337931 value = self._resolve_lookup(context)
  2015-09-02 16:15:35.337936   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 825, in 
_resolve_lookup
  2015-09-02 16:15:35.337940 current = getattr(current, bit)
  2015-09-02 16:15:35.337945   File 

[Yahoo-eng-team] [Bug 1331882] Re: trustor_user_id not available in v2 trust token

2015-09-02 Thread Jamie Finnigan
Per commentary here and on https://review.openstack.org/#/c/101829/,
there is a minimal data gap but no security exposure.  Skipping the
OSSN, given this conclusion.

** Changed in: ossn
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1331882

Title:
  trustor_user_id not available in v2 trust token

Status in Keystone:
  Fix Released
Status in OpenStack Security Notes:
  Won't Fix

Bug description:
  The trust information in the v2 token is missing the trustor_user_id
  and impersonation values. This means you are unable to tell who gave
  you the trust.

  The following two examples were generated with the same information.
  (They are printed from client.auth_ref which is why they are missing
  some structure information)

  v2 Trust token:

  {u'metadata': {u'is_admin': 0,
 u'roles': [u'136bc06cef2f496f842a76644feaed03',
u'7d42773abeff45ea90fdb4067f6b3a9f']},
   u'serviceCatalog': [...],
   u'token': {u'expires': u'2014-06-19T02:41:19Z',
  u'id': u'4b8d23d9707a4c9f8a270759725dfcf8',
  u'issued_at': u'2014-06-19T01:41:19.811417',
  u'tenant': {u'description': u'Default Tenant',
  u'enabled': True,
  u'id': u'9029b226bc894fa3a23ec24fd9f4796c',
  u'name': u'demo'}},
   u'trust': {u'id': u'0b16de31a8c64fd5b0054054db468a00',
  u'trustee_user_id': u'f6cce259563e40acb3f841f5d89c6191'},
   u'user': {u'id': u'f6cce259563e40acb3f841f5d89c6191',
 u'name': u'bob',
 u'roles': [{u'name': u'can_create'}, {u'name': u'can_delete'}],
 u'roles_links': [],
 u'username': u'bob'}}

  
  v3 Trust token: 

  {u'OS-TRUST:trust': {u'id': u'0b16de31a8c64fd5b0054054db468a00',
   u'impersonation': False,
   u'trustee_user': {u'id': 
u'f6cce259563e40acb3f841f5d89c6191'},
   u'trustor_user': {u'id': 
u'5fcb10539aa646ea8b0fe3c80e15d33d'}},
   'auth_token': '0b8a2d2e081e4e6e8ae3ad5dfedcf9db',
   u'catalog': [...],
   u'expires_at': u'2014-06-19T02:41:19.935302Z',
   u'extras': {},
   u'issued_at': u'2014-06-19T01:41:19.935330Z',
   u'methods': [u'password'],
   u'project': {u'domain': {u'id': u'default', u'name': u'Default'},
u'id': u'9029b226bc894fa3a23ec24fd9f4796c',
u'name': u'demo'},
   u'roles': [{u'id': u'136bc06cef2f496f842a76644feaed03',
   u'name': u'can_create'},
  {u'id': u'7d42773abeff45ea90fdb4067f6b3a9f',
   u'name': u'can_delete'}],
   u'user': {u'domain': {u'id': u'default', u'name': u'Default'},
 u'id': u'f6cce259563e40acb3f841f5d89c6191',
 u'name': u'bob'}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1331882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491350] [NEW] /versions endpoint returns 300 every time

2015-09-02 Thread Flavio Percoco
Public bug reported:

The /versions endpoint returns a 300 status code even when the user
explicitly asks for the version list. This is confusing and not the
desired behavior since there's just 1 choice for the `/versions`
endpoint.

** Affects: glance
 Importance: Medium
 Assignee: Flavio Percoco (flaper87)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Changed in: glance
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1491350

Title:
  /versions endpoint returns 300 every time

Status in Glance:
  In Progress

Bug description:
  The /versions endpoint returns a 300 status code even when the user
  explicitly asks for the version list. This is confusing and not the
  desired behavior since there's just 1 choice for the `/versions`
  endpoint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1491350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491361] [NEW] Trace seen in L3 DVR agent during router interface removal

2015-09-02 Thread Eugene Nikanorov
Public bug reported:

The trace is from Kilo code, but the issue persists in Liberty code as of
now.

37749 ERROR neutron.agent.l3.dvr_router [-] DVR: removed snat failed
 37749 TRACE neutron.agent.l3.dvr_router Traceback (most recent call last):
 37749 TRACE neutron.agent.l3.dvr_router File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_router.py", line 260, in 
_snat_redirect_modify
 37749 TRACE neutron.agent.l3.dvr_router for gw_fixed_ip in 
gateway['fixed_ips']:
 37749 TRACE neutron.agent.l3.dvr_router TypeError: 'NoneType' object has no 
attribute '__getitem__'
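The TypeError comes from subscripting a gateway that is None during interface removal. A hedged sketch of the kind of guard that fits the trace (the helper name is hypothetical, not the actual Neutron code):

```python
def snat_gateway_ips(gateway):
    # The gateway port may already be gone (None) while the router
    # interface is being removed; guard before subscripting, which is
    # what raises the TypeError in the trace above.
    if not gateway:
        return []
    return [ip['ip_address'] for ip in gateway['fixed_ips']]
```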

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491361

Title:
  Trace seen in L3 DVR agent during router interface removal

Status in neutron:
  In Progress

Bug description:
  The trace is from Kilo code, but the issue persists in Liberty code as of
  now.

  37749 ERROR neutron.agent.l3.dvr_router [-] DVR: removed snat failed
   37749 TRACE neutron.agent.l3.dvr_router Traceback (most recent call last):
   37749 TRACE neutron.agent.l3.dvr_router File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_router.py", line 260, in 
_snat_redirect_modify
   37749 TRACE neutron.agent.l3.dvr_router for gw_fixed_ip in 
gateway['fixed_ips']:
   37749 TRACE neutron.agent.l3.dvr_router TypeError: 'NoneType' object has no 
attribute '__getitem__'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491317] [NEW] Add port forwarding extension to L3

2015-09-02 Thread Gal Sagie
Public bug reported:

I have searched and found many past efforts to implement port forwarding in 
Neutron.
I have found two incomplete blueprints [1], [2] and an abandoned patch [3].

There is even a project in Stackforge [4], [5] that claims
to implement this, but the L3 parts in it seem older than current master.

I have recently come across this requirement for various use cases; one of
them is providing feature compliance with Docker's port-mapping feature (for
Kuryr), and another is saving floating IP space.
There have been many discussions in the past that require this feature, so I
assume there is demand to make this formal; a few examples: [6], [7], [8], [9]

The idea in a nutshell is to support port forwarding (TCP/UDP ports) on the
external router leg from the public network to internal ports, so a user can
use one floating IP (the external gateway router interface IP) and reach
different internal ports depending on the port number.
This should happen on the network node (and can also be leveraged for security 
reasons).

I think that the POC implementation in the Stackforge project shows that this 
needs to be
implemented inside the L3 parts of the current reference implementation, it 
will be hard
to maintain something like that in an external repository.
(I also think that the API/DB extensions should be close to the current L3 
reference
implementation)

I would like to renew the efforts on this feature and propose a spec for it
in the next release.
Of course, anyone interested, or anyone who worked on this before, is more
than welcome to join the effort and comment.

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
[2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
[3] https://review.openstack.org/#/c/60512/
[4] https://github.com/stackforge/networking-portforwarding
[5] https://review.openstack.org/#/q/port+forwarding,n,z

[6] 
https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
[7] http://www.gossamer-threads.com/lists/openstack/dev/34307
[8] 
http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
[9] 
http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
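For illustration only, a network-node implementation of the idea above boils down to DNAT rules keyed on the gateway IP and external port; the helper and rule string below are a hypothetical sketch, not the proposed API:

```python
def build_dnat_rule(gateway_ip, proto, external_port, internal_ip, internal_port):
    # One floating/gateway IP, many internal targets: traffic arriving on
    # a given external TCP/UDP port is rewritten to an internal ip:port.
    return ('-A PREROUTING -d %s/32 -p %s --dport %d '
            '-j DNAT --to-destination %s:%d'
            % (gateway_ip, proto, external_port, internal_ip, internal_port))
```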

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Gal Sagie (gal-sagie)

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491317

Title:
  Add port forwarding extension to L3

Status in neutron:
  New

Bug description:
  I have searched and found many past efforts to implement port forwarding in 
Neutron.
  I have found two incomplete blueprints [1], [2] and an abandoned patch [3].

  There is even a project in Stackforge [4], [5] that claims
  to implement this, but the L3 parts in it seem older than current master.

  I have recently come across this requirement for various use cases; one of
them is providing feature compliance with Docker's port-mapping feature (for
Kuryr), and another is saving floating IP space.
  There have been many discussions in the past that require this feature, so
I assume there is demand to make this formal; a few examples: [6], [7], [8],
[9]

  The idea in a nutshell is to support port forwarding (TCP/UDP ports) on the
external router leg from the public network to internal ports, so a user can
use one floating IP (the external gateway router interface IP) and reach
different internal ports depending on the port number.
  This should happen on the network node (and can also be leveraged for 
security reasons).

  I think that the POC implementation in the Stackforge project shows that this 
needs to be
  implemented inside the L3 parts of the current reference implementation, it 
will be hard
  to maintain something like that in an external repository.
  (I also think that the API/DB extensions should be close to the current L3 
reference
  implementation)

  I would like to renew the efforts on this feature and propose a spec for it
in the next release.
  Of course, anyone interested, or anyone who worked on this before, is more
than welcome to join the effort and comment.

  [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
  [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
  [3] https://review.openstack.org/#/c/60512/
  [4] https://github.com/stackforge/networking-portforwarding
  [5] https://review.openstack.org/#/q/port+forwarding,n,z

  [6] 
https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
  [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
  [8] 

[Yahoo-eng-team] [Bug 1491414] [NEW] LibvirtConfigObject.to_xml() should not log the xml str - leave that to the caller

2015-09-02 Thread Matt Riedemann
Public bug reported:

There are a lot of places where LibvirtConfigObject.to_xml() is called
and then the results are logged in context, like in the libvirt driver
code:

http://logs.openstack.org/55/218355/5/check/gate-tempest-dsvm-neutron-
full/4dce003/logs/screen-n-cpu.txt.gz#_2015-09-01_19_07_16_539

It would actually be nicer to log those in the context of the instance
uuid we're working on, so we get the instance highlighting in the logs
for debugging.

So I think we should remove the LOG.debug call here:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/config.py#n82

And for any time that we do really want that xml logged, we can do it in
the calling code where we will usually also have the instance uuid in
context and can log that also.
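The proposal can be sketched as a call-site wrapper; the function name is hypothetical, and conf stands in for any LibvirtConfigObject whose to_xml() no longer logs internally:

```python
import logging

LOG = logging.getLogger(__name__)

def generate_xml(conf, instance_uuid):
    # Hedged sketch of the proposal: call to_xml() (with its internal
    # LOG.debug removed) and log at the call site, where the instance
    # uuid is available so the log line gets instance context.
    xml = conf.to_xml()
    LOG.debug('Generated XML for instance %s: %s', instance_uuid, xml)
    return xml
```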

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: libvirt serviceability

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491414

Title:
  LibvirtConfigObject.to_xml() should not log the xml str - leave that
  to the caller

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  There are a lot of places where LibvirtConfigObject.to_xml() is called
  and then the results are logged in context, like in the libvirt driver
  code:

  http://logs.openstack.org/55/218355/5/check/gate-tempest-dsvm-neutron-
  full/4dce003/logs/screen-n-cpu.txt.gz#_2015-09-01_19_07_16_539

  It would actually be nicer to log those in the context of the instance
  uuid we're working on, so we get the instance highlighting in the logs
  for debugging.

  So I think we should remove the LOG.debug call here:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/config.py#n82

  And for any time that we do really want that xml logged, we can do it
  in the calling code where we will usually also have the instance uuid
  in context and can log that also.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491401] [NEW] awkward wording for shutdown an instance

2015-09-02 Thread Matthias Runge
Public bug reported:

When selecting shutdown instance from the instances table:

You have selected "foo". Please confirm your selection. To power off a
specific instance.

That wording is awful.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491401

Title:
  awkward wording for shutdown an instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When selecting shutdown instance from the instances table:

  You have selected "foo". Please confirm your selection. To power off a
  specific instance.

  That wording is awful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491400] [NEW] Going to "instances" makes "Error: unable to connect to neutron" to appear

2015-09-02 Thread Alex Syafeyev
Public bug reported:

python-django-horizon-2014.1.5-1.el6ost.noarch

python-neutronclient-2.3.4-3.el6ost.noarch
openstack-neutron-2014.1.5-2.el6ost.noarch
python-neutron-2014.1.5-2.el6ost.noarch
openstack-neutron-ml2-2014.1.5-2.el6ost.noarch
openstack-neutron-openvswitch-2014.1.5-2.el6ost.noarch


We have a Cirros image and one VM.

Actions:
1. Install Icehouse on RHEL 6.5.
2. Create internal and external networks and subnets.
3. Create a router and attach ports.
4. Upload the Cirros image and bring up a VM.

Go to Horizon, open the project,
then go to Instances.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "printscreen"
   
https://bugs.launchpad.net/bugs/1491400/+attachment/4456147/+files/Screenshot%20from%202015-09-02%2015%3A59%3A01.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491400

Title:
  Going to "instances" makes "Error: unable to connect to neutron" to
  appear

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  python-django-horizon-2014.1.5-1.el6ost.noarch

  python-neutronclient-2.3.4-3.el6ost.noarch
  openstack-neutron-2014.1.5-2.el6ost.noarch
  python-neutron-2014.1.5-2.el6ost.noarch
  openstack-neutron-ml2-2014.1.5-2.el6ost.noarch
  openstack-neutron-openvswitch-2014.1.5-2.el6ost.noarch

  
  We have a Cirros image and one VM.

  Actions:
  1. Install Icehouse on RHEL 6.5.
  2. Create internal and external networks and subnets.
  3. Create a router and attach ports.
  4. Upload the Cirros image and bring up a VM.

  Go to Horizon, open the project,
  then go to Instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491400/+subscriptions



[Yahoo-eng-team] [Bug 1490764] Re: Wrong handling of domain_id passed as None

2015-09-02 Thread Henrique Truta
Marking as invalid, as this is a new behaviour and should be fixed in
this same patch.

** Changed in: keystone
   Status: Incomplete => Opinion

** Changed in: keystone
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490764

Title:
  Wrong handling of domain_id passed as None

Status in Keystone:
  Invalid

Bug description:
  Keystone does not handle a domain_id passed as None in the controller
  layer, as in:

  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L743

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490764/+subscriptions



[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-09-02 Thread James Page
** Changed in: glance (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Committed
Status in glance package in Ubuntu:
  Fix Released
Status in glance source package in Vivid:
  Fix Released

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - API
  ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0all 
 OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all 
 Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set /etc/glance/glance-api.conf to enable_v2_api=False
  1) nova boot --flavor m1.small --image base-image --key-name keypair 
--availability-zone nova --security-groups default snapshot-bug 
  2) nova image-create snapshot-bug snapshot-bug-instance 

  At this point the created image has no kernel_id (None) and no
  ramdisk_id (None).

  3) Enable_v2_api=True in glance-api.conf and restart.

  4) Run a os-image-api=2 client,

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow null, string
  values for both attributes.
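
  The change described above can be sketched as the equivalent Python dict
  (the exact layout of etc/schema-image.json is an assumption here): each
  property accepts either a string or null, so images snapshotted through
  the v1 API no longer break v2 listing.

```python
# Hedged sketch of the two properties v2 needs to tolerate. Allowing
# "null" alongside "string" means snapshots whose kernel_id/ramdisk_id
# were stored as None still validate against the image schema.
IMAGE_SCHEMA_EXTRA_PROPERTIES = {
    "kernel_id": {"type": ["null", "string"]},
    "ramdisk_id": {"type": ["null", "string"]},
}


def allows_null(prop):
    """Return True if a schema property accepts a JSON null."""
    return "null" in prop.get("type", [])
```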

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions



[Yahoo-eng-team] [Bug 1491343] [NEW] Unable to delete object from a pseudo folder under container if pseudo folder name contains backslash "\"

2015-09-02 Thread Ankur Jain
Public bug reported:

Hi all,

All the operations are performed from the Horizon dashboard (OpenStack
Kilo release).

Inside a container, if we create a pseudo folder whose name contains a
backslash (like "xyz\") and then upload an object inside that pseudo
folder, we cannot delete the object.

I installed openstack manually from http://docs.openstack.org/kilo
/install-guide/install/yum/content/ch_basic_environment.html

Thanks & Regards
Ankur Jain

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491343

Title:
  Unable to delete object from a pseudo folder under container if pseudo
  folder name contains backslash "\"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi all,

  All the operations are performed from the Horizon dashboard (OpenStack
  Kilo release).

  Inside a container, if we create a pseudo folder whose name contains a
  backslash (like "xyz\") and then upload an object inside that pseudo
  folder, we cannot delete the object.
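
  One plausible root cause is the object name reaching the Swift URL
  unescaped. A defensive sketch (this is an assumption about the failure,
  not Horizon's actual fix) quotes every path segment:

```python
from urllib.parse import quote


def swift_object_path(container, object_name):
    """Build an escaped /container/object path.

    quote() with safe="" percent-encodes backslashes too, so a pseudo
    folder named 'xyz\\' yields 'xyz%5C...' instead of a broken URL
    segment.
    """
    return "/%s/%s" % (quote(container, safe=""),
                       quote(object_name, safe=""))
```

  For example, swift_object_path("c1", "xyz\\obj.txt") produces
  "/c1/xyz%5Cobj.txt".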

  I installed openstack manually from http://docs.openstack.org/kilo
  /install-guide/install/yum/content/ch_basic_environment.html

  Thanks & Regards
  Ankur Jain

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491343/+subscriptions



[Yahoo-eng-team] [Bug 1491321] [NEW] Address source selector visible even when no subnetpools available

2015-09-02 Thread Frode Nordahl
Public bug reported:

An off-by-one error causes the "Address source" selector to be visible
even when there are no subnetpools available.

** Affects: horizon
 Importance: Undecided
 Assignee: Frode Nordahl (fnordahl)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491321

Title:
  Address source selector visible even when no subnetpools available

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  An off-by-one error causes the "Address source" selector to be visible
  even when there are no subnetpools available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491321/+subscriptions



[Yahoo-eng-team] [Bug 1491377] [NEW] traffics of unknown unicast or arp broadcast in the same compute node should not flooded out tunnels

2015-09-02 Thread yaobo
Public bug reported:

Unknown-unicast and ARP broadcast traffic within the same compute node
is flooded out the tunnels to all other compute nodes, which is
wasteful. It would be better to drop this traffic in the br-tun bridge.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491377

Title:
  traffics of unknown unicast or arp broadcast in the same compute node
  should not flooded out tunnels

Status in neutron:
  New

Bug description:
  Unknown-unicast and ARP broadcast traffic within the same compute node
  is flooded out the tunnels to all other compute nodes, which is
  wasteful. It would be better to drop this traffic in the br-tun bridge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491377/+subscriptions



[Yahoo-eng-team] [Bug 1490796] Re: [SRU] cloud-init must check/format Azure ephemeral disks each boot

2015-09-02 Thread Dan Watkins
** Patch added: "precise.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1490796/+attachment/4456140/+files/precise.debdiff

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Fix Committed

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1490796

Title:
  [SRU] cloud-init must check/format Azure ephemeral disks each boot

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  In Progress

Bug description:
  
  [Impact]
  Sometimes when rebooting an instance in Azure, a different ephemeral disk 
will be presented. Azure ephemeral disks are presented as NTFS disks, but we 
reformat them to ext4.  With the last cloud-init upload, we shifted to using 
/dev/disk symlinks to mount these disks (in case they are not presented as the 
same physical device).  Unfortunately, the code that determines if we have a 
new ephemeral disk was not updated to handle symlinks, so never detects a new 
disk.

  [Test Case]
  1) Boot an Azure instance and install the new cloud-init.
  2) Change the size of the instance using the Azure web interface (as this 
near-guarantees that the ephemeral disk will be replaced with a new one). This 
will reboot the instance.
  3) Once the instance is rebooted, SSH in and confirm that:
   a) An ext4 ephemeral disk is mounted at /mnt, and
   b) cloud-init.log indicates that a fabric formatted ephemeral disk was found 
on this boot.

  [Regression Potential]
  Limited; two LOCs change, to dereference symlinks instead of using paths 
verbatim.

  [Original Report]
  Ubuntu 14.04.3 (20150805) on Azure with cloud-init package 0.7.5-0ubuntu1.8.

  On Azure cloud-init prepares the ephemeral device as ext4 for the
  first boot.  However, if the VM is ever moved to another Host for any
  reason, then a new ephemeral disk might be provided to the VM.  This
  ephemeral disk is NTFS formatted, so for subsequent reboots cloud-init
  must detect and reformat the new disk as ext4. However, with cloud-init
  0.7.5-0ubuntu1.8, subsequent boots may leave a fuse-mounted NTFS file
  system in place.

  This issue occurred in earlier versions of cloud-init, but was fixed
  with bug 1292648 (https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1292648).  So this appears to be a regression.

  Repro:
    - Create an Ubuntu 14.04.3 VM on Azure
    - Resize the VM to a larger size (this typically moves the VM)
    - Log in and run 'blkid' to show an ntfs formatted ephemeral disk:

  # blkid
  /dev/sdb1: LABEL="Temporary Storage" UUID="A43C43DD3C43A95E" TYPE="ntfs"

  Expected results:
    - After resizing the ephemeral disk should be formatted as ext4.
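
  The two-line fix described under [Regression Potential] amounts to
  comparing real devices rather than symlink paths. A minimal sketch
  (the helper name and arguments are assumptions, not cloud-init's API):

```python
import os


def is_new_ephemeral_disk(configured_path, last_used_device):
    """Return True when the device behind `configured_path` changed.

    `configured_path` may be a /dev/disk/by-id or /dev/disk/by-path
    symlink; dereferencing both sides with os.path.realpath lets us
    notice that the symlink now points at a different physical device,
    which the pre-fix path-string comparison could never detect.
    """
    return (os.path.realpath(configured_path)
            != os.path.realpath(last_used_device))
```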

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1490796/+subscriptions



[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID >255 not recognized

2015-09-02 Thread Doug Hellmann
** Changed in: os-brick
   Status: Fix Committed => Fix Released

** Changed in: os-brick
Milestone: None => 0.4.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID >255 not recognized

Status in Cinder:
  New
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) kilo series:
  New
Status in os-brick:
  Fix Released

Bug description:
  (s390 architecture/System z Series only) FC LUNs with LUN ID >255 are
  recognized by neither Cinder nor Nova when trying to attach the volume.
  The issue is that Fibre Channel volumes need to be added using the
  unit_add command with a properly formatted LUN string. The string is
  set correctly for LUNs <=0xff, but not for LUN IDs in the range between
  0xff and 0x. Because of this, the volumes do not get properly added to
  the hypervisor configuration and the hypervisor does not find them.

  Note: The change for Liberty os-brick is ready. I would also like to
  patch it back to Kilo. Since os-brick has been integrated with
  Liberty, but was separate before, I need to release a patch for Nova,
  Cinder, and os-brick. Unfortunately there is no option on this page to
  nominate the patch for Kilo. Can somebody help? Thank you!
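
  The formatting issue can be sketched as follows (the exact zfcp
  unit_add encoding used here is an assumption, not the os-brick patch
  itself): IDs up to 0xffff go in the high 16 bits of the 64-bit LUN
  string, covering the previously broken range above 0xff.

```python
def format_fc_lun(lun_id):
    """Format a LUN ID as a 16-digit hex string for unit_add.

    Assumed encoding: peripheral-style addressing for IDs <= 0xffff
    (ID in the top 16 bits), flat 32-bit addressing for larger IDs.
    """
    if not 0 <= lun_id <= 0xffffffff:
        raise ValueError("LUN ID out of range: %r" % lun_id)
    if lun_id <= 0xffff:
        return "0x%04x000000000000" % lun_id
    return "0x%08x00000000" % lun_id
```

  With this, LUN 4 and LUN 300 both format consistently, instead of only
  IDs up to 255 producing a valid string.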

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions



[Yahoo-eng-team] [Bug 1479842] Re: os-brick needs to provide it's own rootwrap filters file

2015-09-02 Thread Doug Hellmann
** Changed in: os-brick
   Status: Fix Committed => Fix Released

** Changed in: os-brick
Milestone: None => 0.4.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479842

Title:
  os-brick needs to provide it's own rootwrap filters file

Status in Cinder:
  Confirmed
Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Confirmed
Status in os-brick:
  Fix Released

Bug description:
  This came up in the review for the scaleio libvirt volume driver in
  nova:

  https://review.openstack.org/#/c/194454/16/etc/nova/rootwrap.d/compute.filters

  Basically, having to define rootwrap filters in nova and cinder for
  things that are run in os-brick is kind of terrible since every time
  os-brick needs to add a new command to run as root, it has to be added
  to nova/cinder, and we have to deal with version compat issues (will
  the version of nova/cinder have the filters required for the version
  of os-brick that's running?).

  This was already introduced with multipathd to compute.filters in the
  os-brick integration change:

  https://review.openstack.org/#/c/175569/32/etc/nova/rootwrap.d/compute.filters

  Rather than revert the os-brick integration change to nova, we should
  work this as a bug so that os-brick can carry its own os-brick.filters
  file, which can then be dropped into /etc/nova/rootwrap.d and
  /etc/cinder/rootwrap.d.

  So we'll need os-brick changes, nova, cinder and devstack changes to
  land this.

  Also considered a release blocker for liberty for the nova team.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1479842/+subscriptions



[Yahoo-eng-team] [Bug 1370226] Re: LibvirtISCSIVolumeDriver cannot find volumes that include pci-* in the /dev/disk/by-path device

2015-09-02 Thread Doug Hellmann
** Changed in: os-brick
   Status: Fix Committed => Fix Released

** Changed in: os-brick
Milestone: None => 0.4.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370226

Title:
  LibvirtISCSIVolumeDriver cannot find volumes that include pci-* in the
  /dev/disk/by-path device

Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  I am currently unable to attach iSCSI volumes to our system because
  the path that is expected by the LibvirtISCSIVolumeDriver doesn't
  match what is being created in /dev/disk/by-path:

  2014-09-16 01:33:22.533 24304 DEBUG nova.openstack.common.lockutils 
[req-f466db73-0a7c-4e1f-85ad-473c688d0a68 None] Semaphore / lock released 
"connect_volume" inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:328
  2014-09-16 01:33:22.534 24304 ERROR nova.virt.block_device 
[req-f466db73-0a7c-4e1f-85ad-473c688d0a68 None] [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] Driver failed to attach volume 
97e38815-c934-48a7-b343-880c5a9bf4b8 at /dev/vdd
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] Traceback (most recent call last):
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 252, in 
attach
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] device_type=self['device_type'], 
encryption=encryption)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1283, in 
attach_volume
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] conf = 
self._connect_volume(connection_info, disk_info)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1237, in 
_connect_volume
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] return 
driver.connect_volume(connection_info, disk_info)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 
325, in inner
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] return f(*args, **kwargs)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/volume.py", line 295, in 
connect_volume
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] % (host_device))
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] NovaException: iSCSI device not found at 
/dev/disk/by-path/ip-10.90.50.10:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node2-lun-4

  The paths that are being created, however, are  of the following
  format:

  [root@abba-n09 rules.d]# ll /dev/disk/by-path/
  total 0
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-0
 -> ../../sdc
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-1
 -> ../../sdd
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-2
 -> ../../sde
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-3
 -> ../../sdf
  lrwxrwxrwx. 1 root root  9 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0 -> 
../../sda
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part1 
-> ../../sda1
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part2 
-> ../../sda2
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part3 
-> ../../sda3
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part4 
-> ../../sda4
  lrwxrwxrwx. 1 root root  9 Sep 10 18:46 pci-:16:00.0-scsi-0:2:1:0 -> 
../../sdb
  [root@abba-n09 rules.d]#

  When the devices are created the physical location of the HBA is being 
included:
  0c:00.2 Mass storage controller: Emulex Corporation OneConnect 10Gb iSCSI 
Initiator (be3) (rev 02)

  Looking at the code, I see that the LibvirtISERVolumeDriver actually
  does the check that 
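
  A hedged way to cope with both path layouts shown above, with and
  without the pci-* prefix, is to glob for the by-path suffix instead of
  building one fixed path (a sketch, not the actual nova/os-brick change):

```python
import glob
import os


def find_iscsi_device(portal, iqn, lun, by_path="/dev/disk/by-path"):
    """Locate an iSCSI device symlink regardless of any pci-* prefix.

    Matches both 'ip-<portal>-iscsi-<iqn>-lun-<n>' and
    'pci-<addr>-ip-<portal>-iscsi-<iqn>-lun-<n>' entries, since the
    leading '*' also matches the empty string.
    """
    pattern = os.path.join(
        by_path, "*ip-%s-iscsi-%s-lun-%s" % (portal, iqn, lun))
    matches = sorted(glob.glob(pattern))
    return matches[0] if matches else None
```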

[Yahoo-eng-team] [Bug 1470047] Re: CLI fails to report an error after creating a snapshot from instance

2015-09-02 Thread Doug Hellmann
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

** Changed in: python-novaclient
Milestone: None => 2.27.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470047

Title:
  CLI fails to report an error after creating a snapshot from instance

Status in OpenStack Compute (nova):
  In Progress
Status in python-novaclient:
  Fix Released

Bug description:
  Description of problem:
  The CLI fails to report an error and gets stuck at "Server snapshotting... 0"
  when a user tries to save a snapshot of an instance while their quota is
  too small.

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.17.0-2.el7ost.noarch
  python-glance-2015.1.0-6.el7ost.noarch
  python-glance-store-0.4.0-1.el7ost.noarch
  openstack-glance-2015.1.0-6.el7ost.noarch
  openstack-nova-api-2015.1.0-13.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Edit /etc/glance/glance-api.conf set user_storage_quota with low space for 
creating snapshot from instance 
  2. openstack-service restart glance
  3. Create a snapshot from instance via command line: 'nova image-create 
  --poll'

  Actual results:
  The CLI fails to report an error and gets stuck at "Server snapshotting... 0"

  Expected results:
  An ERROR should appear indicating that the quota is too small

  
  Additional info:
  log
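
  A sketch of what '--poll' arguably should do: treat error states as
  terminal instead of spinning forever. Here get_image is a hypothetical
  callable standing in for the glance client, and the status names are
  assumptions:

```python
import time


def wait_for_snapshot(get_image, image_id, interval=2.0, timeout=600.0):
    """Poll an image until it is active; raise on terminal failure.

    `get_image(image_id)` is assumed to return an object with a
    `status` attribute ('queued', 'saving', 'active', 'error', ...).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        image = get_image(image_id)
        if image.status == "active":
            return image
        if image.status in ("error", "killed", "deleted"):
            # Terminal state: surface it instead of polling forever.
            raise RuntimeError("snapshot %s failed: status=%s"
                               % (image_id, image.status))
        time.sleep(interval)
    raise RuntimeError("timed out waiting for snapshot %s" % image_id)
```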

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470047/+subscriptions



[Yahoo-eng-team] [Bug 1481210] Re: No tenant id displayed using server-group-list --all-projects

2015-09-02 Thread Doug Hellmann
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

** Changed in: python-novaclient
Milestone: None => 2.27.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481210

Title:
  No tenant id displayed using server-group-list --all-projects

Status in OpenStack Compute (nova):
  In Progress
Status in python-novaclient:
  Fix Released

Bug description:
  Currently, Admin can list all existing server groups using "nova 
server-group-list --all-projects".
  But the output does not contain project id information, so it is really
  difficult to identify which server group belongs to which project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481210/+subscriptions



[Yahoo-eng-team] [Bug 1491446] [NEW] monkey_patch config option has a wrong help message

2015-09-02 Thread Balazs Gibizer
Public bug reported:

The help text of the monkey_patch nova config option states that it turns
logging of monkey patching on and off. See:
https://github.com/openstack/nova/blob/master/nova/utils.py#L66

However, the code uses this config parameter to turn monkey patching itself
on and off. See:
https://github.com/openstack/nova/blob/master/nova/utils.py#L720

** Affects: nova
 Importance: Undecided
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491446

Title:
  monkey_patch config option has a wrong help message

Status in OpenStack Compute (nova):
  New

Bug description:
  The help text of the monkey_patch nova config option states that it turns
  logging of monkey patching on and off. See:
  https://github.com/openstack/nova/blob/master/nova/utils.py#L66

  However, the code uses this config parameter to turn monkey patching itself
  on and off. See:
  https://github.com/openstack/nova/blob/master/nova/utils.py#L720
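
  What the code actually does, per the report, is gate patching itself.
  A simplified sketch, with a plain boolean and a callable standing in
  for CONF.monkey_patch and nova's decorator machinery:

```python
def monkey_patch(enabled, module_names, patch_module):
    """Apply `patch_module` to each module name when `enabled` is True.

    `enabled` mirrors CONF.monkey_patch: despite the current help text,
    it controls whether patching happens at all, not whether the
    patching is logged. Returns the list of patched module names.
    """
    if not enabled:
        return []
    patched = []
    for name in module_names:
        patch_module(name)
        patched.append(name)
    return patched
```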

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491446/+subscriptions



[Yahoo-eng-team] [Bug 1473009] Re: HTTP external sources are not supported

2015-09-02 Thread Cindy Pallares
** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1473009

Title:
  HTTP external sources are not supported

Status in glance_store:
  New

Bug description:
  Trying to define a new image from an external source keeps failing with
  "400 Bad Request: External source are not supported".
  In my specific case I have tried to define an image via Heat:

  # heat stack-create image-test -f cirros.yaml

  cirros.yaml:
  heat_template_version: 2013-05-23

  description: Glance image provider

  parameters:
image_name:
  type: string
  default: heat_cirros

  resources:

glance_image:
  type: OS::Glance::Image
  properties:
container_format: bare
disk_format: qcow2
is_public: true
location: 
http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
min_disk: 1
min_ram: 64
name: {get_param: image_name}
protected: false

  Note that the URL is well formed and should be valid as an external
  source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1473009/+subscriptions



[Yahoo-eng-team] [Bug 1383722] Re: glance store initialization should be improved

2015-09-02 Thread Cindy Pallares
** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1383722

Title:
  glance store initialization should be improved

Status in glance_store:
  Confirmed

Bug description:

  Currently the glance store initialization is explicitly called in the
  api start script:

  
   cat glance/cmd/api.py
   .
   .
   .

  glance_store.register_opts(config.CONF)
  glance_store.create_stores(config.CONF)
  glance_store.verify_default_store()

  
  Other parameters are picked up for free if we perform a configuration
  reload on SIGHUP (e.g. sqlalchemy settings).
  (An exception is logging where a log.setup() needs to be called).

  It would be great, in order to fit cleanly with configuration reload,
  to have the glance_store initialization be called automatically,
  triggered by the eventlet.wsgi.server() call in glance.common.wsgi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1383722/+subscriptions
