[Yahoo-eng-team] [Bug 1420662] [NEW] VMware: InstanceList.get_by_host raises RPC timeout error

2015-02-11 Thread Rui Chen
Public bug reported:

I deployed OpenStack with the VMware driver: a single nova-compute
service connects to the VMware deployment, and there are about 3000 VMs
in that deployment. I use MySQL.

InstanceList.get_by_host raises an RPC timeout error when
ComputeManager.init_host() and the _sync_power_states periodic task
execute.

This looks like a performance issue. Currently, in the nova VMware
driver, one nova-compute host maps to the whole VMware deployment,
which may contain several clusters. When InstanceList.get_by_host
executes in ComputeManager, nova-compute makes an RPC call to
nova-conductor, and nova-conductor fetches every instance in the whole
VMware deployment at once; in my case, that is 3000 instances. The
long-running SQL query can cause the nova-conductor RPC call to time
out.

PS: 
vSphere 5.1 now allows 100 hosts and 3000 powered on VMs.
vSphere 6 now allows 1000 hosts and 10,000 powered on VMs.
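
One possible mitigation (a rough sketch, not the actual nova fix; the
batch size and helper name are assumptions) is to page through the
host's instances in smaller batches, so no single conductor call has to
serialize thousands of rows at once:

    # Hypothetical sketch: fetch instances for a host in batches.
    from nova import context as nova_context
    from nova import objects

    BATCH_SIZE = 200  # assumed value; tune per deployment


    def iter_instances_by_host(host):
        """Yield instances on `host` page by page using limit/marker."""
        ctxt = nova_context.get_admin_context()
        marker = None
        while True:
            # InstanceList.get_by_filters supports limit/marker pagination.
            instances = objects.InstanceList.get_by_filters(
                ctxt, {'host': host}, limit=BATCH_SIZE, marker=marker)
            if not instances:
                return
            for inst in instances:
                yield inst
            marker = instances[-1].uuid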

** Affects: nova
 Importance: Undecided
 Assignee: Rui Chen (kiwik-chenrui)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Rui Chen (kiwik-chenrui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420662

Title:
  VMware: InstanceList.get_by_host raises RPC timeout error

Status in OpenStack Compute (Nova):
  New

Bug description:
  I deployed OpenStack with the VMware driver: a single nova-compute
  service connects to the VMware deployment, and there are about 3000
  VMs in that deployment. I use MySQL.

  InstanceList.get_by_host raises an RPC timeout error when
  ComputeManager.init_host() and the _sync_power_states periodic task
  execute.

  This looks like a performance issue. Currently, in the nova VMware
  driver, one nova-compute host maps to the whole VMware deployment,
  which may contain several clusters. When InstanceList.get_by_host
  executes in ComputeManager, nova-compute makes an RPC call to
  nova-conductor, and nova-conductor fetches every instance in the whole
  VMware deployment at once; in my case, that is 3000 instances. The
  long-running SQL query can cause the nova-conductor RPC call to time
  out.

  PS: 
  vSphere 5.1 now allows 100 hosts and 3000 powered on VMs.
  vSphere 6 now allows 1000 hosts and 10,000 powered on VMs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420683] [NEW] status value of interface table is not translatable

2015-02-11 Thread Masco Kaliyamoorthy
Public bug reported:

In the interface table, the value of the status column is not
translatable.
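
A common way to make such status values translatable in Horizon (a
sketch of the usual pattern, not necessarily the exact fix used for
this bug) is to map raw status strings to translated labels with the
column's display_choices option:

    # Illustrative sketch: translating a status column in a Horizon table.
    from django.utils.translation import ugettext_lazy as _

    from horizon import tables

    STATUS_DISPLAY_CHOICES = (
        ("ACTIVE", _("Active")),
        ("DOWN", _("Down")),
        ("BUILD", _("Build")),
        ("ERROR", _("Error")),
    )


    class InterfacesTable(tables.DataTable):
        # display_choices renders a translated label for the raw value.
        status = tables.Column("status",
                               verbose_name=_("Status"),
                               display_choices=STATUS_DISPLAY_CHOICES)

        class Meta:
            name = "interfaces"
            verbose_name = _("Interfaces")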

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420683

Title:
  status value of interface table is not translatable

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the interface table, the value of the status column is not
  translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420688] [NEW] keystone notification context is empty

2015-02-11 Thread Jay Lau
Public bug reported:

When keystone sends a notification, the context is set to {}. This
causes problems when a related component such as nova tries to
deserialize the context:

2015-01-28 13:01:40.391 20150 ERROR oslo.messaging.notify.dispatcher [-] Exception during message handling
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher Traceback (most recent call last):
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 87, in _dispatch_and_handle_error
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return self._dispatch(incoming.ctxt, incoming.message)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 103, in _dispatch
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     ctxt = self.serializer.deserialize_context(ctxt)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 117, in deserialize_context
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return nova.context.RequestContext.from_dict(context)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/context.py", line 179, in from_dict
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return cls(**values)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher TypeError: __init__() takes at least 3 arguments (1 given)
2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher

We should at least set project_id and user_id in the context.

https://github.com/openstack/keystone/blob/master/keystone/notifications.py#L241
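
A minimal sketch of the suggested change (the helper name and context
attributes are assumptions for illustration, not keystone's actual
API): build a context dict carrying at least user_id and project_id
instead of sending {}:

    def build_notification_context(request_context):
        """Sketch: derive a minimal context dict for notifications.

        `request_context` stands in for keystone's per-request context;
        the attribute names here are assumptions.
        """
        return {
            'user_id': getattr(request_context, 'user_id', None),
            'project_id': getattr(request_context, 'project_id', None),
        }

    # The notifier would then be called with this dict instead of {},
    # so consumers like nova can rebuild a RequestContext from it.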

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420688

Title:
  keystone notification context is empty

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When keystone sends a notification, the context is set to {}. This
  causes problems when a related component such as nova tries to
  deserialize the context:

  2015-01-28 13:01:40.391 20150 ERROR oslo.messaging.notify.dispatcher [-] Exception during message handling
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher Traceback (most recent call last):
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 87, in _dispatch_and_handle_error
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return self._dispatch(incoming.ctxt, incoming.message)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/notify/dispatcher.py", line 103, in _dispatch
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     ctxt = self.serializer.deserialize_context(ctxt)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 117, in deserialize_context
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return nova.context.RequestContext.from_dict(context)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/nova/context.py", line 179, in from_dict
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher     return cls(**values)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher TypeError: __init__() takes at least 3 arguments (1 given)
  2015-01-28 13:01:40.391 20150 TRACE oslo.messaging.notify.dispatcher

  We should at least set project_id and user_id in the context.

  https://github.com/openstack/keystone/blob/master/keystone/notifications.py#L241

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420692] [NEW] Interface lost in Docker container after rebooting that container with nova reboot

2015-02-11 Thread ashok
Public bug reported:

Interfaces are lost in the Docker container after rebooting that
container with the nova reboot command.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420692

Title:
  Interface lost in Docker container after rebooting that container
  with nova reboot

Status in OpenStack Compute (Nova):
  New

Bug description:
  Interfaces are lost in the Docker container after rebooting that
  container with the nova reboot command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420707] [NEW] Update help for glance member-update v2 api

2015-02-11 Thread Yamini Sardana
Public bug reported:

The output for glance --os-image-api-version 2 help member-update
should display the options available for the MEMBER STATUS field
(Accepted, Rejected, Pending).

At the moment it does not display the available options.
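
For reference, the v2 invocation being described takes the member
status as a positional argument, roughly like this (the IDs are
placeholders):

    glance --os-image-api-version 2 member-update <IMAGE_ID> <MEMBER_ID> <MEMBER_STATUS>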

** Affects: glance
 Importance: Undecided
 Assignee: Yamini Sardana (yamini-sardana)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Yamini Sardana (yamini-sardana)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420707

Title:
  Update help for glance member-update v2 api

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The output for glance --os-image-api-version 2 help member-update
  should display the options available for the MEMBER STATUS field
  (Accepted, Rejected, Pending).

  At the moment it does not display the available options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372317] Re: Disable the "Gateway IP" field when "Disable gateway" is checked

2015-02-11 Thread Matthias Runge
** This bug is no longer a duplicate of bug 1372313
   create network - disable gateway checkbox

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372317

Title:
  Disable the "Gateway IP" field when "Disable gateway" is checked

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Does unchecking the Create Subnet checkbox mean that none of the
  information on this step is now required?

  We are still required to enter Gateway IPs in the proper format even
  though unchecking that box should make that unnecessary (same for when
  the "Disable Gateway" box is checked).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420731] [NEW] Icehouse nova quota-delete also erases currently used absolute limits

2015-02-11 Thread Xavier
Public bug reported:

Example :

nova absolute-limits --tenant test_fil
+-------------------------+-------+
| Name                    | Value |
+-------------------------+-------+
| maxServerMeta           | 128   |
| maxPersonality          | 5     |
| maxImageMeta            | 128   |
| maxPersonalitySize      | 10240 |
| maxTotalRAMSize         | 8192  |
| maxSecurityGroupRules   | 20    |
| maxTotalKeypairs        | 4     |
| totalRAMUsed            | 2048  |
| maxSecurityGroups       | 10    |
| totalFloatingIpsUsed    | 0     |
| totalInstancesUsed      | 1     |
| totalSecurityGroupsUsed | 1     |
| maxTotalFloatingIps     | 4     |
| maxTotalInstances       | 4     |
| totalCoresUsed          | 1     |
| maxTotalCores           | 4     |
+-------------------------+-------+

nova --debug quota-delete --tenant test_fil
REQ: curl -i 'http://controller1:8774/v2/admin/os-quota-sets/test_fil' -X 
DELETE -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-Auth-Token: ...

nova absolute-limits --tenant test_fil
+-------------------------+-------+
| Name                    | Value |
+-------------------------+-------+
| maxServerMeta           | 128   |
| maxPersonality          | 5     |
| maxImageMeta            | 128   |
| maxPersonalitySize      | 10240 |
| maxTotalRAMSize         | 8192  |
| maxSecurityGroupRules   | 20    |
| maxTotalKeypairs        | 4     |
| totalRAMUsed            | 0     |
| maxSecurityGroups       | 10    |
| totalFloatingIpsUsed    | 0     |
| totalInstancesUsed      | 0     |
| totalSecurityGroupsUsed | 0     |
| maxTotalFloatingIps     | 4     |
| maxTotalInstances       | 4     |
| totalCoresUsed          | 0     |
| maxTotalCores           | 4     |
+-------------------------+-------+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420731

Title:
  Icehouse nova quota-delete also erases currently used absolute limits

Status in OpenStack Compute (Nova):
  New

Bug description:
  Example :

  nova absolute-limits --tenant test_fil
  +-------------------------+-------+
  | Name                    | Value |
  +-------------------------+-------+
  | maxServerMeta           | 128   |
  | maxPersonality          | 5     |
  | maxImageMeta            | 128   |
  | maxPersonalitySize      | 10240 |
  | maxTotalRAMSize         | 8192  |
  | maxSecurityGroupRules   | 20    |
  | maxTotalKeypairs        | 4     |
  | totalRAMUsed            | 2048  |
  | maxSecurityGroups       | 10    |
  | totalFloatingIpsUsed    | 0     |
  | totalInstancesUsed      | 1     |
  | totalSecurityGroupsUsed | 1     |
  | maxTotalFloatingIps     | 4     |
  | maxTotalInstances       | 4     |
  | totalCoresUsed          | 1     |
  | maxTotalCores           | 4     |
  +-------------------------+-------+

  nova --debug quota-delete --tenant test_fil
  REQ: curl -i 'http://controller1:8774/v2/admin/os-quota-sets/test_fil' -X 
DELETE -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-Auth-Token: ...

  nova absolute-limits --tenant test_fil
  +-------------------------+-------+
  | Name                    | Value |
  +-------------------------+-------+
  | maxServerMeta           | 128   |
  | maxPersonality          | 5     |
  | maxImageMeta            | 128   |
  | maxPersonalitySize      | 10240 |
  | maxTotalRAMSize         | 8192  |
  | maxSecurityGroupRules   | 20    |
  | maxTotalKeypairs        | 4     |
  | totalRAMUsed            | 0     |
  | maxSecurityGroups       | 10    |
  | totalFloatingIpsUsed    | 0     |
  | totalInstancesUsed      | 0     |
  | totalSecurityGroupsUsed | 0     |
  | maxTotalFloatingIps     | 4     |
  | maxTotalInstances       | 4     |
  | totalCoresUsed          | 0     |
  | maxTotalCores           | 4     |
  +-------------------------+-------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413341] Re: keystone doesn't respond if cluster is actively used

2015-02-11 Thread Davanum Srinivas (DIMS)
Alexander, I can confirm @Roman's comment. keystone-all is *never* used
under Apache.

** No longer affects: keystone

** No longer affects: oslo.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1413341

Title:
  keystone doesn't respond if cluster is actively used

Status in Mirantis OpenStack:
  Confirmed

Bug description:
  [root@fuel ~]# fuel --fuel-version
  api: '1.0'
  astute_sha: f7cda2171b0b677dfaeb59693d980a2d3ee4c3e0
  auth_required: true
  build_id: 2015-01-20_08-49-23
  build_number: '38'
  feature_groups:
  - mirantis
  fuellib_sha: 9aa913096fb93ea4847ee14bfaf33597326886f3
  fuelmain_sha: 1ee1766a51bdb5bed75d5c2efdcaaa318118e439
  nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
  ostf_sha: 3d2f44dcfa32d6ce0372cc64695e9edcc1913ea7
  production: docker
  release: 6.0.1
  release_versions:
2014.2-6.0:
  VERSION:
api: '1.0'
astute_sha: f7cda2171b0b677dfaeb59693d980a2d3ee4c3e0
build_id: 2015-01-20_08-49-23
build_number: '38'
feature_groups:
- mirantis
fuellib_sha: 9aa913096fb93ea4847ee14bfaf33597326886f3
fuelmain_sha: 1ee1766a51bdb5bed75d5c2efdcaaa318118e439
nailgun_sha: 5f91157daa6798ff522ca9f6d34e7e135f150a90
ostf_sha: 3d2f44dcfa32d6ce0372cc64695e9edcc1913ea7
production: docker
release: 6.0.1

  Ubuntu, HA, neutron-gre, Ceilometer, debug, Ceph for 
volumes,images,ephemerals,objects
  controllers: 3
  computes: 97

  root@node-47:~# keystone --debug user-list
  DEBUG:keystoneclient.auth.identity.v2:Making authentication request to 
http://192.168.0.2:5000/v2.0/tokens
  INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.0.2
  DEBUG:urllib3.connectionpool:Setting read timeout to 600.0
  DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 4102
  DEBUG:keystoneclient.session:REQ: curl -i -X GET 
http://172.16.45.101:35357/v2.0/users -H "User-Agent: python-keystoneclient" -H 
"X-Auth-Token: TOKEN_REDACTED"
  INFO:urllib3.connectionpool:Starting new HTTP connection (1): 172.16.45.101
  DEBUG:urllib3.connectionpool:Setting read timeout to 600.0
  DEBUG:urllib3.connectionpool:"GET /v2.0/users HTTP/1.1" 503 None
  DEBUG:keystoneclient.session:RESP:
  DEBUG:keystoneclient.session:Request returned failure status: 503
  Service Unavailable (HTTP 503)

  root@node-47:~# crm status
  Last updated: Wed Jan 21 15:30:14 2015
  Last change: Tue Jan 20 20:09:03 2015 via cibadmin on node-47
  Stack: classic openais (with plugin)
  Current DC: node-72 - partition with quorum
  Version: 1.1.10-42f2063
  3 Nodes configured, 3 expected votes
  29 Resources configured

  
  Online: [ node-47 node-71 node-72 ]

   vip__public(ocf::mirantis:ns_IPaddr2): Started node-71
   Clone Set: clone_ping_vip__public [ping_vip__public]
   Started: [ node-47 node-71 node-72 ]
   vip__management(ocf::mirantis:ns_IPaddr2): Started node-72
   Clone Set: clone_p_heat-engine [p_heat-engine]
   Started: [ node-47 node-71 node-72 ]
   Master/Slave Set: master_p_rabbitmq-server [p_rabbitmq-server]
   Masters: [ node-71 ]
   Slaves: [ node-47 node-72 ]
   Clone Set: clone_p_neutron-plugin-openvswitch-agent 
[p_neutron-plugin-openvswitch-agent]
   Started: [ node-47 node-71 node-72 ]
   p_neutron-dhcp-agent   (ocf::mirantis:neutron-agent-dhcp): Started 
node-47
   Clone Set: clone_p_neutron-metadata-agent [p_neutron-metadata-agent]
   Started: [ node-47 node-71 node-72 ]
   Clone Set: clone_p_neutron-l3-agent [p_neutron-l3-agent]
   Started: [ node-47 node-71 node-72 ]
   Clone Set: clone_p_mysql [p_mysql]
   Started: [ node-47 node-71 node-72 ]
   Clone Set: clone_p_haproxy [p_haproxy]
   Started: [ node-47 node-71 node-72 ]

  diagnostic snapshot is here
  
https://drive.google.com/a/mirantis.com/file/d/0Bx4ptZV1Jt7hRTZlUWIzUk5PYmM/view?usp=sharing

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1413341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420744] [NEW] Improve and document the PageTitleMixin

2015-02-11 Thread Sam Betts
Public bug reported:

https://review.openstack.org/#/c/142802 adds new functionality to
reduce duplication in horizon templates; this bug tracks adding extra
documentation and readability improvements to those new features.

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420744

Title:
  Improve and document the PageTitleMixin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/#/c/142802 adds new functionality to
  reduce duplication in horizon templates; this bug tracks adding extra
  documentation and readability improvements to those new features.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420788] [NEW] Logging blocks on race condition under eventlet

2015-02-11 Thread Alexander Makarov
Public bug reported:

Wrong initialization order makes logging block on race condition under
eventlet.

bin/keystone-all launcher initialize logging first and after that does
eventlet patching leaving logging system with generic thred.lock in
critical sections what leads to infinite thread locks under high load.
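
The general pattern for fixing this (a sketch of the ordering rule, not
keystone's exact patch) is to monkey patch before anything that can
create locks is imported or configured:

    # Sketch: eventlet monkey patching must happen first, so that the
    # locks logging creates internally are green-thread aware.
    import eventlet
    eventlet.monkey_patch()

    # Only after patching is it safe to set up logging and the rest.
    import logging
    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger(__name__)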

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420788

Title:
  Logging blocks on race condition under eventlet

Status in OpenStack Identity (Keystone):
  New

Bug description:
  A wrong initialization order makes logging block on a race condition
  under eventlet.

  The bin/keystone-all launcher initializes logging first and only
  afterwards does the eventlet monkey patching, leaving the logging
  system with a generic thread.lock in its critical sections, which
  leads to threads blocking forever under high load.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420791] [NEW] python keystoneclient misreports connection error reason

2015-02-11 Thread David Kirchner
Public bug reported:

The python keystoneclient reports a whole set of different connection
errors as "Connection refused", whether or not the connection was
actually refused:

https://github.com/openstack/python-keystoneclient/blob/a333c56ac9a6b48b8e5d16e654251fb7d15bd89f/keystoneclient/session.py#L411

This can make tracking down authentication and connection problems
difficult without using a tool like strace.
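
A sketch of the kind of fix this suggests (the exception class and
message are illustrative, not the actual patch): include the underlying
error when wrapping connection failures instead of a hard-coded reason:

    # Illustrative sketch: preserve the real cause when wrapping errors.
    import requests


    class ConnectionFailure(Exception):
        """Stand-in for keystoneclient's connection error type."""


    def fetch(session, url, **kwargs):
        try:
            return session.get(url, **kwargs)
        except requests.exceptions.ConnectionError as e:
            # Report the underlying reason instead of labelling every
            # failure mode "Connection refused".
            raise ConnectionFailure(
                'Unable to establish connection to %s: %s' % (url, e))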

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420791

Title:
  python keystoneclient misreports connection error reason

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The python keystoneclient reports a whole set of different connection
  errors as "Connection refused", whether or not the connection was
  actually refused:

  https://github.com/openstack/python-keystoneclient/blob/a333c56ac9a6b48b8e5d16e654251fb7d15bd89f/keystoneclient/session.py#L411

  This can make tracking down authentication and connection problems
  difficult without using a tool like strace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410651] Re: wrong output when trying to delete a default security group of admin tenant

2015-02-11 Thread yanhe...@gmail.com
** Changed in: nova
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410651

Title:
  wrong output when trying to delete a default security group of admin
  tenant

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  When we try to delete the default security group from the admin
  tenant, the wrong output is displayed to the user.

  Steps to replicate :

  in admin tenant:

  1.  nova secgroup-list
  +--------------------------------------+---------+-------------+
  | Id                                   | Name    | Description |
  +--------------------------------------+---------+-------------+
  | 2d504a0f-b8c6-4ae5-b7f0-7184d43a998a | default | default     |
  +--------------------------------------+---------+-------------+
  2. nova secgroup-delete default

  +--------------------------------------+---------+-------------+
  | Id                                   | Name    | Description |
  +--------------------------------------+---------+-------------+
  | 2d504a0f-b8c6-4ae5-b7f0-7184d43a998a | default | default     |
  +--------------------------------------+---------+-------------+

  3. List the security groups again; you will find the same list:

  nova secgroup-list
  +--------------------------------------+---------+-------------+
  | Id                                   | Name    | Description |
  +--------------------------------------+---------+-------------+
  | 2d504a0f-b8c6-4ae5-b7f0-7184d43a998a | default | default     |
  +--------------------------------------+---------+-------------+

  The delete command runs successfully but does not delete the default
  security group.

  Expected result:

  Removing the default security group is not allowed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410651/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420809] [NEW] Glance image-create help in CLI is not correct

2015-02-11 Thread Eran Kuris
Public bug reported:

In the Glance CLI help we can see parameters that are described as
"optional" when they are actually mandatory.
I compared it with the horizon WebUI dashboard and saw which fields are
mandatory.
To reproduce the bug in the CLI, run the command:
#glance help image-create

and compare it with the WebUI.

Version 
[root@puma15 ~(keystone_eran1)]# rpm -qa |grep glance 
python-glanceclient-0.14.2-1.el7ost.noarch
openstack-glance-2014.2.1-3.el7ost.noarch
python-glance-2014.2.1-3.el7ost.noarch
python-glance-store-0.1.10-2.el7ost.noarch

[root@puma15 ~(keystone_eran1)]# rpm -qa |grep rhel
libreport-rhel-2.1.11-10.el7.x86_64


**Attached screenshot - compares the CLI & WebUI

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "glance_help.jpg"
   
https://bugs.launchpad.net/bugs/1420809/+attachment/4317165/+files/glance_help.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420809

Title:
  Glance image-create help in CLI is not correct

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In the Glance CLI help we can see parameters that are described as
  "optional" when they are actually mandatory.
  I compared it with the horizon WebUI dashboard and saw which fields
  are mandatory.
  To reproduce the bug in the CLI, run the command:
  #glance help image-create

  and compare it with the WebUI.

  Version 
  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep glance 
  python-glanceclient-0.14.2-1.el7ost.noarch
  openstack-glance-2014.2.1-3.el7ost.noarch
  python-glance-2014.2.1-3.el7ost.noarch
  python-glance-store-0.1.10-2.el7ost.noarch

  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep rhel
  libreport-rhel-2.1.11-10.el7.x86_64

  
  **Attached screenshot - compares the CLI & WebUI

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420692] Re: Interface lost in Docker container after rebooting that container with nova reboot

2015-02-11 Thread Davanum Srinivas (DIMS)
** Project changed: nova => nova-docker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420692

Title:
  Interface lost in Docker container after rebooting that container
  with nova reboot

Status in Nova Docker Driver:
  New

Bug description:
  Interfaces are lost in the Docker container after rebooting that
  container with the nova reboot command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-docker/+bug/1420692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420100] Re: security group remove from a vm reports error "ERROR: openstack Circular reference detected"

2015-02-11 Thread Davanum Srinivas (DIMS)
** Project changed: nova => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420100

Title:
  security group remove from a vm reports error "ERROR: openstack
  Circular reference detected"

Status in OpenStack Command Line Client:
  New

Bug description:
  We cannot remove a security group from a VM via
  python-openstackclient; the error message is "ERROR: openstack
  Circular reference detected".

  
  test log:
  (testuser_dx_px_du)(.venv)cetest@cer106n0035:~/weixiaol$ openstack server show ins1
  +--------------------------------------+----------------------------------------------------------+
  | Field                                | Value                                                    |
  +--------------------------------------+----------------------------------------------------------+
  | OS-EXT-AZ:availability_zone          | nova                                                     |
  | OS-EXT-STS:power_state               | 1                                                        |
  | OS-EXT-STS:task_state                | None                                                     |
  | OS-EXT-STS:vm_state                  | active                                                   |
  | OS-SRV-USG:launched_at               | 2015-01-27T08:25:25.00                                   |
  | OS-SRV-USG:terminated_at             | None                                                     |
  | accessIPv4                           |                                                          |
  | accessIPv6                           |                                                          |
  | addresses                            | private_net=192.168.1.6, 10.23.78.180                    |
  | config_drive                         |                                                          |
  | created                              | 2015-01-27T08:24:43Z                                     |
  | flavor                               | m1.small (2)                                             |
  | hostId                               | 320b949dde87a1c350a79b6ee2ea3dbfec000bc73d9a93acc1eaa1fb |
  | id                                   | 8a0b60ea-d425-4a82-ba17-ac3817a9bf1d                     |
  | image                                | Lily_fedora (31de5eb8-6f11-44f2-a94f-07df7e12552b)       |
  | key_name                             | key2                                                     |
  | name                                 | ins1                                                     |
  | os-extended-volumes:volumes_attached | []                                                       |
  | progress                             | 0                                                        |
  | properties                           |                                                          |
  | security_groups                      | [{u'name': u'default'}, {u'name': u'sec1'}]              |
  | status                               | ACTIVE                                                   |
  | tenant_id                            | 71a98a7595ef47d48dc3c093ed6f2428                         |
  | updated                              | 2015-01-27T08:25:25Z                                     |
  | user_id                              | 858c5046ad8b46ad8fe203db02282cf6                         |
  +--------------------------------------+----------------------------------------------------------+
  (testuser_dx_px_du)(.venv)cetest@cer106n0035:~/weixiaol$
  (testuser_dx_px_du)(.venv)cetest@cer106n0035:~/weixiaol$ openstack server remove security group ins1 sec1
  ERROR: openstack Circular reference detected

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1420100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420843] [NEW] Wrong skip_because message in integration tests

2015-02-11 Thread Martin Pavlásek
Public bug reported:

When you use the skip_because(bug=[1,2,3]) decorator, it produces the
wrong message output:
Skipped until Bugs: %s are resolved.1, 2, 3
instead of:
Skipped until Bugs: 1, 2, 3 are resolved.
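
The symptom is consistent with appending the joined bug list to the
message template instead of interpolating it; a minimal sketch of the
wrong and right patterns (variable names assumed):

    bugs = ", ".join(str(b) for b in [1, 2, 3])

    # Buggy pattern: concatenation leaves the %s placeholder untouched.
    message = "Skipped until Bugs: %s are resolved." + bugs

    # Fixed pattern: interpolate the list into the template.
    message = "Skipped until Bugs: %s are resolved." % bugs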

** Affects: horizon
 Importance: Undecided
 Assignee: Martin Pavlásek (mpavlase)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Martin Pavlásek (mpavlase)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420843

Title:
  Wrong skip_because message in integration tests

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When you use the skip_because(bug=[1,2,3]) decorator, it produces the
  wrong message output:
  Skipped until Bugs: %s are resolved.1, 2, 3
  instead of:
  Skipped until Bugs: 1, 2, 3 are resolved.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420848] [NEW] nova-compute service spuriously marked as "up" when disabled

2015-02-11 Thread Chris Friesen
Public bug reported:

I think our usage of the "updated_at" field to determine whether a
service is "up" or not is buggy.  Consider this scenario:

1) nova-compute is happily running and is up/enabled on compute-0
2) something causes nova-compute to stop (process crash, hardware fault, power 
failure, network isolation, etc.)
3) a minute later, the nova-compute service is reported as "down"
4) I run "nova service-disable compute-0 nova-compute"
5) nova-compute is now reported as "up" for the next minute

I wonder if it would make sense to have a separate "last status
timestamp" database field that would only get updated when we get a
service status update and not when we change any other fields.
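
A sketch of that direction (the column name and threshold are
assumptions, not nova's actual schema): judge liveness only from a
timestamp that the periodic report-state update touches:

    import datetime

    SERVICE_DOWN_TIME = 60  # seconds; assumed threshold


    def is_service_up(service, now=None):
        """Sketch: `service.last_heartbeat_at` is a hypothetical column
        updated only by report_state, never by admin actions such as
        disabling the service."""
        now = now or datetime.datetime.utcnow()
        elapsed = (now - service.last_heartbeat_at).total_seconds()
        return elapsed <= SERVICE_DOWN_TIME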

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420848

Title:
  nova-compute service spuriously marked as "up" when disabled

Status in OpenStack Compute (Nova):
  New

Bug description:
  I think our usage of the "updated_at" field to determine whether a
  service is "up" or not is buggy.  Consider this scenario:

  1) nova-compute is happily running and is up/enabled on compute-0
  2) something causes nova-compute to stop (process crash, hardware fault, 
power failure, network isolation, etc.)
  3) a minute later, the nova-compute service is reported as "down"
  4) I run "nova service-disable compute-0 nova-compute"
  5) nova-compute is now reported as "up" for the next minute

  I wonder if it would make sense to have a separate "last status
  timestamp" database field that would only get updated when we get a
  service status update and not when we change any other fields.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420862] [NEW] Missing comma in help string

2015-02-11 Thread Julie Pichon
Public bug reported:

When creating a new network, the help text reads as follows:

"Create a new network. In addition a subnet associated with the network
can be created in the next panel."

Folks proficient in the English language report that "In addition"
should be followed by a comma or it is wrong/doesn't read well.

String is in that file:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/networks/workflows.py

** Affects: horizon
 Importance: Low
 Status: Confirmed


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420862

Title:
  Missing comma in help string

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  When creating a new network, the help text reads as follows:

  "Create a new network. In addition a subnet associated with the
  network can be created in the next panel."

  Folks proficient in the English language report that "In addition"
  should be followed by a comma or it is wrong/doesn't read well.

  String is in that file:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/networks/workflows.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420863] [NEW] Default setting should be secure

2015-02-11 Thread Brant Knudson
Public bug reported:


Horizon has some instructions for setting it up in a secure way[1]. These 
settings should be the default.

[1] http://docs.openstack.org/developer/horizon/topics/deployment.html#secure-site-recommendations
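
The recommendations in [1] largely come down to Django settings like
the following (a sketch of a local_settings.py fragment; the exact
values depend on the deployment):

    # Serve session and CSRF cookies only over HTTPS, and keep the
    # session cookie out of reach of JavaScript.
    CSRF_COOKIE_SECURE = True
    SESSION_COOKIE_SECURE = True
    SESSION_COOKIE_HTTPONLY = True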

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420863

Title:
  Default setting should be secure

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Horizon has some instructions for setting it up in a secure way[1]. These 
settings should be the default.

  [1] http://docs.openstack.org/developer/horizon/topics/deployment.html#secure-site-recommendations

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362676] Re: Hyper-V agent doesn't create stateful security group rules

2015-02-11 Thread Claudiu Belu
** Project changed: neutron => networking-hyperv

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362676

Title:
  Hyper-V agent doesn't create stateful security group rules

Status in Hyper-V Networking Agent:
  In Progress
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hyper-V agent does not create stateful security group rules (ACLs),
  meaning it doesn't allow any response traffic to pass through.

  For example, the following security group rule:
  {"direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp", "port_range_max": 22, "port_range_min": 22, "ethertype": "IPv4"}
  allows inbound TCP traffic on port 22, but since the Hyper-V agent
  does not add this rule as stateful, the reply traffic is never
  received unless an egress security group rule is specifically added
  as well.
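
  As a workaround in the spirit of the description (a sketch; the rule
  dicts mirror the report, and the broad egress rule is an assumption),
  the ingress rule currently needs an egress counterpart for the reply
  traffic:

      # Ingress rule from the report, plus the egress counterpart that
      # is currently required for the SSH reply traffic to pass.
      ingress = {"direction": "ingress", "remote_ip_prefix": None,
                 "protocol": "tcp", "port_range_max": 22,
                 "port_range_min": 22, "ethertype": "IPv4"}
      egress = dict(ingress, direction="egress",
                    port_range_min=None, port_range_max=None)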

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1362676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420809] Re: Glance image-create help in CLI is not correct

2015-02-11 Thread Pasquale Porreca
Horizon is not a reliable source for verifying which parameters are
mandatory, since it is just another client (like python-glanceclient,
i.e. the glance CLI) for the glance API.

The glance image-create command doesn't actually require any parameter
(not even an image name!) to complete successfully; in that case,
however, the image is only "queued".

To get an image into the "active" state (i.e. actually created) it is
necessary to specify the source (--file for a local source or
--location for a URL), the disk format and the container format.

It may be debatable why Horizon doesn't allow merely queueing an image,
or why it requires a name for created images, differently from
python-glanceclient; in my opinion, though, these are implementation
choices rather than bugs.
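
For illustration, a minimal invocation that yields an "active" image
could look like this (the image name and file are placeholders):

    glance image-create --name cirros --disk-format qcow2 \
        --container-format bare --file cirros.qcow2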

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420809

Title:
  Glance image-create help in CLI is not correct

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  In the Glance CLI help we can see parameters that are described as
  "optional" when they are actually mandatory.
  I compared it with the horizon WebUI dashboard and saw which fields
  are mandatory.
  To reproduce the bug in the CLI, run the command:
  #glance help image-create

  and compare it with the WebUI.

  Version 
  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep glance 
  python-glanceclient-0.14.2-1.el7ost.noarch
  openstack-glance-2014.2.1-3.el7ost.noarch
  python-glance-2014.2.1-3.el7ost.noarch
  python-glance-store-0.1.10-2.el7ost.noarch

  [root@puma15 ~(keystone_eran1)]# rpm -qa |grep rhel
  libreport-rhel-2.1.11-10.el7.x86_64

  
  **Attached screenshot - compares the CLI & WebUI

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420067] Re: subunit-trace in pretty_tox.sh swallows import errors

2015-02-11 Thread Matt Riedemann
Does this fix it? https://review.openstack.org/#/c/154933/

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: nova

** Changed in: tempest
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420067

Title:
  subunit-trace in pretty_tox.sh swallows import errors

Status in Tempest:
  In Progress

Bug description:
  See example Nova run:
  
http://logs.openstack.org/60/154260/3/check/gate-nova-python27/eaed0fd/console.html

  When one import is wrong, say in nova/test.py, we don't get any
  information, since "python setup.py testr --slowest" prints the
  import error stack trace on stdout and subunit-trace consumes that
  stream entirely.

  Talking to @lifeless on irc, looks like something quick can be done in
  subunit-trace

  lifeless: dims__: righto so yes subunit-trace is not showing stdout non-test 
events
  lifeless: dims__: it needs something like the original subunit-trace code 
reinstated, in another leaf in the CopyStreamResult list

  thanks,
  dims

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1420067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420890] [NEW] Attribute resource_type_names is not always present in Metadata Defs

2015-02-11 Thread Rob Cresswell
Public bug reported:

The error message below is shown on loading the Metadata Definitions
table in DevStack. The table shows the resource_type_names column, but
this attribute is not always present on each value.

The attribute resource_type_names does not exist on .
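
A defensive sketch of the usual Horizon fix (the table and helper names
are assumed): resolve the attribute with a default so rows that lack it
do not raise:

    from horizon import tables


    def get_resource_type_names(datum):
        # Fall back to an empty list when the attribute is absent.
        return getattr(datum, 'resource_type_names', [])


    class MetadataDefinitionsTable(tables.DataTable):
        # A callable transform lets the column tolerate missing data.
        resource_types = tables.Column(get_resource_type_names,
                                       verbose_name="Resource Types")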

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420890

Title:
  Attribute resource_type_names is not always present in Metadata Defs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The error message below is shown on loading the Metadata Definitions
  table in DevStack. The table shows the resource_type_names column,
  but this attribute is not always present on each value.

  The attribute resource_type_names does not exist on .

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420863] Re: Default setting should be secure

2015-02-11 Thread Doug Fish
** Also affects: openstack-chef
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420863

Title:
  Default setting should be secure

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack + Chef:
  New

Bug description:
  
  Horizon has some instructions for setting it up in a secure way[1]. These 
settings should be the default.

  [1] http://docs.openstack.org/developer/horizon/topics/deployment.html
  #secure-site-recommendations

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1157261] Re: Performance issue on high concurrency

2015-02-11 Thread Dmitry Mescheryakov
The bug has been sitting in Incomplete for more than a month for the
MOS project, hence moving it to Invalid.

** Changed in: mos/6.0.x
   Status: Incomplete => Invalid

** Changed in: mos/6.1.x
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1157261

Title:
  Performance issue on high concurrency

Status in OpenStack Identity (Keystone):
  Triaged
Status in Mirantis OpenStack:
  Invalid
Status in Mirantis OpenStack 6.0.x series:
  Invalid
Status in Mirantis OpenStack 6.1.x series:
  Invalid

Bug description:
  I'm currently using Keystone as the auth server in Swift.
  We have done performance tests on our solution and found that the
  performance of keystone+swift is quite low, only 200 ops/s, while
  with the same setup the throughput can be improved to 7k ops/s by
  replacing keystone with swauth.
  We found the keystone process fully saturates one Sandy Bridge core.
  This looks like a scalability issue.
  It would be good if keystone could scale up well on a multi-core
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1157261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420894] [NEW] Filter in Project page works in only current page.

2015-02-11 Thread Amogh
Public bug reported:

1. As an admin, create multiple projects
2. Go to the Identity->Projects page

Try to filter for a project that is not listed on the current page.
Observe that the filter does not return any result.
The current filter won't help users when thousands of projects are
available and not displayed on one page.

It would be helpful if the filter matched projects listed on other
pages as well (not only on the current page).
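
A sketch of the direction this suggests (assuming python-keystoneclient
v3 and placeholder credentials; this is not Horizon's actual code):
push the filter down to the API, so matching happens across all
projects rather than only the page already fetched:

    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN',  # placeholder credentials
                       endpoint='http://keystone:35357/v3')

    # `name` becomes a query parameter, so the server filters over all
    # projects instead of the client filtering one page of results.
    projects = ks.projects.list(name='demo')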

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "projects_filter.PNG"
   
https://bugs.launchpad.net/bugs/1420894/+attachment/4317351/+files/projects_filter.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420894

Title:
  Filter in Project page works in only current page.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. As an admin, create multiple projects
  2. Go to the Identity->Projects page

  Try to filter for a project that is not listed on the current page.
  Observe that the filter does not return any result.
  The current filter won't help users when thousands of projects are
  available and not displayed on one page.

  It would be helpful if the filter matched projects listed on other
  pages as well (not only on the current page).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420937] [NEW] [data processing] Job binaries form has incorrect label

2015-02-11 Thread Chad Roberts
Public bug reported:

*low priority*

In the Data Processing -> Job Binaries -> Create form, the URL field has
a sub-label of "internal-db://".  That label is incorrect and should
read "swift://".

** Affects: horizon
 Importance: Undecided
 Assignee: Chad Roberts (croberts)
 Status: In Progress


** Tags: sahara

** Tags added: sahara

** Changed in: horizon
 Assignee: (unassigned) => Chad Roberts (croberts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420937

Title:
  [data processing] Job binaries form has incorrect label

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  *low priority*

  In the Data Processing -> Job Binaries -> Create form, the URL field
  has a sub-label of "internal-db://".  That label is incorrect and
  should read "swift://".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420944] [NEW] Display image virtual size

2015-02-11 Thread Chris St. Pierre
Public bug reported:

In Icehouse, Glance added a new field, 'virtual_size', to hold the
virtual size (as opposed to the on-disk size) of sparse or compressed
images: https://blueprints.launchpad.net/glance/+spec/split-image-size

In most cases, this is far more valuable to know than the on-disk size
of an image; for instance, the official Fedora 21 QCOW2 image is only
150 MB on disk, but is a 3 GB image. But Horizon displays the on-disk
size in both the image detail view, and in the launch instance modal.
The latter is particularly bad, since it can lead to confusion about
whether an image can be launched on a flavor of a given size.
(Displaying the on-disk size leads one to believe that the Fedora 21
image could be launched on an m1.tiny, for instance -- 150 MB < 1 GB,
after all -- but it can't, because the image's virtual size is greater
than 1 GB.)

When an image has a virtual size, it should be displayed in the image
detail and in the launch instance modal.
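
A sketch of the display logic implied here (the helper name is
assumed): prefer virtual_size and fall back to size when it is absent:

    def image_display_size(image):
        """Return the size to show for an image, in bytes.

        'virtual_size' may be None for images uploaded before Icehouse
        or from stores that cannot introspect it, so fall back to the
        on-disk size in that case.
        """
        return getattr(image, 'virtual_size', None) or image.size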

** Affects: horizon
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1420944

Title:
  Display image virtual size

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In Icehouse, Glance added a new field, 'virtual_size', to hold the
  virtual size (as opposed to the on-disk size) of sparse or compressed
  images: https://blueprints.launchpad.net/glance/+spec/split-image-size

  In most cases, this is far more valuable to know than the on-disk
  size of an image; for instance, the official Fedora 21 QCOW2 image is
  only 150 MB on disk, but is a 3 GB image. But Horizon displays the
  on-disk size in both the image detail view, and in the launch
  instance modal. The latter is particularly bad, since it can lead to
  confusion about whether an image can be launched on a flavor of a
  given size. (Displaying the on-disk size leads one to believe that
  the Fedora 21 image could be launched on an m1.tiny, for instance --
  150 MB < 1 GB, after all -- but it can't, because the image's virtual
  size is greater than 1 GB.)

  When an image has a virtual size, it should be displayed in the image
  detail and in the launch instance modal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1420944/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420791] Re: python keystoneclient misreports connection error reason

2015-02-11 Thread Rodrigo Duarte
Redirecting to python-keystoneclient.

** Project changed: keystone => python-keystoneclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1420791

Title:
  python keystoneclient misreports connection error reason

Status in Python client library for Keystone:
  New

Bug description:
  The python keystoneclient reports a set of connection errors as
  "Connection refused", except connection refused itself:

  https://github.com/openstack/python-
  
keystoneclient/blob/a333c56ac9a6b48b8e5d16e654251fb7d15bd89f/keystoneclient/session.py#L411

  This can make tracking down authentication and connection problems
  difficult without using a tool like strace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-keystoneclient/+bug/1420791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420960] [NEW] Pool's session persistence is deleted when HM added

2015-02-11 Thread Evgeny Fedoruk
Public bug reported:

On Neutron LBaaS v2:

Scenario:
Create a pool with session persistence data.
Add a health monitor to the pool - the session persistence data
disappears.

Following is a technical description of the bug:
The create_healthmonitor_on_pool method in the db layer
(db/loadbalancer/loadbalancer_dbv2.py) calls the self.update_pool
method with the HM data in it.
In turn, the update_pool method handles both the HM and session
persistence, and it deletes the session persistence data if it is not
passed.

In this case session persistence is not passed because the method was
called in the HM creation flow.

The result is the deletion of the session persistence data when adding
an HM to a pool.

Solution:
create_healthmonitor_on_pool should probably pass the current pool's
session persistence data to the update_pool method.
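
A minimal sketch of that suggested fix (the method body and field names
are inferred from the description, not taken from the actual patch):

    # Sketch: carry the existing session_persistence through the update
    # so update_pool does not treat its absence as a deletion request.
    def create_healthmonitor_on_pool(self, context, pool_id, health_monitor):
        pool = self.get_pool(context, pool_id)
        pool_update = {'healthmonitor': health_monitor}
        if pool.get('session_persistence'):
            pool_update['session_persistence'] = pool['session_persistence']
        return self.update_pool(context, pool_id, pool_update)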

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420960

Title:
  Pool's session persistence is deleted when HM added

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On Neutron LBaaS v2:

  Scenario:
  Create a pool with session persistence data.
  Add a health monitor to the pool - the session persistence data
  disappears.

  Following is a technical description of the bug:
  The create_healthmonitor_on_pool method in the db layer
  (db/loadbalancer/loadbalancer_dbv2.py) calls the self.update_pool
  method with the HM data in it.
  In turn, the update_pool method handles both the HM and session
  persistence, and it deletes the session persistence data if it is not
  passed.

  In this case session persistence is not passed because the method was
  called in the HM creation flow.

  The result is the deletion of the session persistence data when
  adding an HM to a pool.

  Solution:
  create_healthmonitor_on_pool should probably pass the current pool's
  session persistence data to the update_pool method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420971] [NEW] iscsi_transport parameter should be called iscsi_iface

2015-02-11 Thread Anish Bhatt
Public bug reported:

The change https://review.openstack.org/#/c/146233/ added a libvirt
parameter called iscsi_transport for open-iscsi transport support.

We actually need the transport_iface name, not the transport name, as the
param (iscsi_tcp & iser are exceptions to this, where the transport and
transport_iface names are one and the same), so the current name is
misleading. Giving the transport name instead of the transport_iface name
will also cause login to fail currently; it would be better to fall back to
iscsi_tcp (whose iface is also called "default") when the parameter is
incorrect (i.e. the corresponding transport_iface file is non-existent).
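
A hedged sketch of that fallback, assuming open-iscsi's usual layout where
each configured transport_iface has a file under /var/lib/iscsi/ifaces and
the iscsi_tcp iface is named "default"; the helper name is illustrative:

    import os

    IFACE_DIR = '/var/lib/iscsi/ifaces'

    def resolve_transport_iface(iscsi_iface):
        # 'default' (iscsi_tcp) and 'iser' need no iface file of their own.
        if iscsi_iface in ('default', 'iser'):
            return iscsi_iface
        if os.path.exists(os.path.join(IFACE_DIR, iscsi_iface)):
            return iscsi_iface
        # Fall back to iscsi_tcp instead of letting the login fail when the
        # named transport_iface file does not exist.
        return 'default'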

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420971

Title:
  iscsi_transport parameter should be called iscsi_iface

Status in OpenStack Compute (Nova):
  New

Bug description:
  The change https://review.openstack.org/#/c/146233/ added a libvirt
  parameter called iscsi_transport for open-iscsi transport support.

  We actually need the transport_iface name, not the transport name, as the
  param (iscsi_tcp & iser are exceptions to this, where the transport and
  transport_iface names are one and the same), so the current name is
  misleading. Giving the transport name instead of the transport_iface name
  will also cause login to fail currently; it would be better to fall back
  to iscsi_tcp (whose iface is also called "default") when the parameter is
  incorrect (i.e. the corresponding transport_iface file is non-existent).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420984] [NEW] [v1] User can change image owner to any other

2015-02-11 Thread Fei Long Wang
Public bug reported:

Only an admin should be able to change an image's owner, but currently
everybody can.

** Affects: glance
 Importance: Undecided
 Assignee: Fei Long Wang (flwang)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Fei Long Wang (flwang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420984

Title:
  [v1] User can change image owner to any other

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Only an admin should be able to change an image's owner, but currently
  everybody can.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420984/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421011] [NEW] Remove unused RPC methods from the L3_rpc

2015-02-11 Thread Swaminathan Vasudevan
Public bug reported:

Remove unused RPC methods from the L3_rpc.
The "get_snat_router_interface_ports" method is defined but not currently
used by any agents, so it needs to be cleaned up.

** Affects: neutron
 Importance: Undecided
 Assignee: Swaminathan Vasudevan (swaminathan-vasudevan)
 Status: In Progress


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Swaminathan Vasudevan (swaminathan-vasudevan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421011

Title:
  Remove unused RPC methods from the L3_rpc

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Remove unused RPC methods from the L3_rpc.
  The "get_snat_router_interface_ports" method is defined but not currently
  used by any agents, so it needs to be cleaned up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420984] Re: [v1] User can change image owner to any other

2015-02-11 Thread Fei Long Wang
Just verified again, and the problem has gone, so maybe it was an
environment issue. And I found this:
https://github.com/openstack/glance/blob/master/glance/registry/api/v1/images.py#L460
so I think this bug is invalid.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420984

Title:
  [v1] User can change image owner to any other

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Only admin can change image owner, but everybody.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420984/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421017] [NEW] Flavor/image size checks are insufficient and untested

2015-02-11 Thread Chris St. Pierre
Public bug reported:

When launching an instance, Horizon does some checks to ensure that the
flavor the user has chosen is large enough to support the image they
have chosen. Unfortunately:

1. These checks are broken. For disk size, they only check the
`min_disk` property of the image, not the image size or virtual size; as
a result, the image disk size is only checked in practice if the user
has bothered to set the `min_disk` property.

2. The unit tests for the checks are broken. They modify local versions
of the glance image data, but then run the tests by passing image IDs,
which means that the original unmodified test images are used. The tests
happen to pass, accidentally, but they don't test what we think they're
testing.
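
A hedged sketch of a fuller disk check, assuming the usual glance attributes
(size in bytes, min_disk in GB) and a flavor disk size in GB; the names are
illustrative, not Horizon's actual helper:

    GB = 1024 ** 3

    def flavor_disk_fits_image(flavor_disk_gb, image):
        # Use the largest of min_disk and the (virtual) image size, so the
        # check still works when min_disk was never set on the image.
        size_gb = -(-image.get('size', 0) // GB)  # ceiling division
        required_gb = max(image.get('min_disk', 0), size_gb)
        return flavor_disk_gb >= required_gb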

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421017

Title:
  Flavor/image size checks are insufficient and untested

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching an instance, Horizon does some checks to ensure that
  the flavor the user has chosen is large enough to support the image
  they have chosen. Unfortunately:

  1. These checks are broken. For disk size, they only check the
  `min_disk` property of the image, not the image size or virtual size;
  as a result, the image disk size is only checked in practice if the
  user has bothered to set the `min_disk` property.

  2. The unit tests for the checks are broken. They modify local
  versions of the glance image data, but then run the tests by passing
  image IDs, which means that the original unmodified test images are
  used. The tests happen to pass, accidentally, but they don't test what
  we think they're testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421037] [NEW] [metering agent] Failed to get any traffic data if the first chain is missing

2015-02-11 Thread Fei Long Wang
Public bug reported:

Based on the current implementation, if for any reason the chain of a
metering label is missing, the whole
iptables_driver.get_traffic_counters() function will be broken; see
https://github.com/openstack/neutron/blob/master/neutron/services/metering/drivers/iptables/iptables_driver.py#L275
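
A hedged sketch of a more tolerant counter loop; read_chain_counters stands
in for the iptables manager call, and RuntimeError on a missing chain is an
assumption about how the failure surfaces, not the driver's actual API:

    def get_traffic_counters(chains, read_chain_counters):
        """Sum counters per label chain, skipping chains that are missing."""
        totals = {}
        for chain in chains:
            try:
                counters = read_chain_counters(chain)
            except RuntimeError:
                # In the reported behaviour one missing chain aborts the
                # whole call; skipping it keeps the other labels metered.
                continue
            totals[chain] = {'pkts': counters.get('pkts', 0),
                             'bytes': counters.get('bytes', 0)}
        return totals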

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421037

Title:
  [metering agent] Failed to get any traffic data if the first chain is
  missing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Based on the current implementation, if for any reason the chain of a
  metering label is missing, the whole
  iptables_driver.get_traffic_counters() function will be broken; see
  https://github.com/openstack/neutron/blob/master/neutron/services/metering/drivers/iptables/iptables_driver.py#L275

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288210] Re: Failed to attach volume

2015-02-11 Thread Julio González Gil
I'm able to reproduce this problem with RDO installed with "packstack
--allinone"

OS: CentOS 7.0.1406
Nova: 2014.2.1

2015-02-12 02:26:29.131 4108 ERROR nova.virt.block_device 
[req-8c1bf3d1-955a-4910-99c8-293395f650df None] [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] Driver failed to attach volume 
910e1aab-fcaa-4b77-be12-a303c6216d04 at /dev/hda
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] Traceback (most recent call last):
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 252, in 
attach
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] device_type=self['device_type'], 
encryption=encryption)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1380, in 
attach_volume
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] conf = 
self._connect_volume(connection_info, disk_info)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1327, in 
_connect_volume
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] return 
driver.connect_volume(connection_info, disk_info)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 
272, in inner
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] return f(*args, **kwargs)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py", line 299, in 
connect_volume
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] self._run_iscsiadm(iscsi_properties, 
("--rescan",))
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py", line 236, in 
_run_iscsiadm
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] check_exit_code=check_exit_code)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 163, in execute
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] return processutils.execute(*cmd, 
**kwargs)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 
203, in execute
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] cmd=sanitized_cmd)
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] ProcessExecutionError: Unexpected error 
while running command.
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m node -T 
iqn.2010-10.org.openstack:volume-910e1aab-fcaa-4b77-be12-a303c6216d04 -p 
192.168.3.150:3260 --rescan
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] Exit code: 21
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] Stdout: u''
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] Stderr: u'iscsiadm: No session found.\n'
2015-02-12 02:26:29.131 4108 TRACE nova.virt.block_device [instance: 
3dc0e1ee-96a3-437c-bbe1-74454dc85e05] 


** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288210

Title:
  Failed to attach volume

Status in Fuel: OpenStack installer that works:
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  {"build_id": "2014-03-05_07-31-01", "mirantis": "yes", "build_number":
  "235", "nailgun_sha": "f58aad317829112913f364347b14f1f0518ad371",
  "ostf_sha": "dc54d99ddff2f497b131ad1a42362515f2a61afa",
  "fuelmain_sha": "16637e2ea0ae6fe9a773aceb9d76c6e3a75f6c3b

[Yahoo-eng-team] [Bug 1421042] [NEW] AttributeError during migration of a legacy router to a DVR router.

2015-02-11 Thread Xu Han Peng
Public bug reported:

After commit https://review.openstack.org/124879 was merged, when
migrating a legacy router to a distributed router, "AttributeError:
'LegacyRouter' object has no attribute 'dist_fip_count'" is reported,
because the ri object in the l3 agent is a LegacyRouter instead of a
DvrRouter, and LegacyRouter doesn't have the attribute dist_fip_count.
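
A hedged sketch of the failure and a defensive guard; the two classes stand
in for the l3-agent router classes named above, and the getattr guard is one
illustrative workaround rather than the eventual fix:

    class LegacyRouter(object):
        """Router info object held by the agent before DVR migration."""

    class DvrRouter(LegacyRouter):
        def __init__(self):
            self.dist_fip_count = 0

    def get_dist_fip_count(ri):
        # Accessing ri.dist_fip_count directly raises AttributeError while
        # the agent still holds a LegacyRouter for a migrated router; a
        # default avoids the crash until ri is recreated as a DvrRouter.
        return getattr(ri, 'dist_fip_count', 0)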

** Affects: neutron
 Importance: Undecided
 Assignee: Xu Han Peng (xuhanp)
 Status: In Progress


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Xu Han Peng (xuhanp)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421042

Title:
  AttributeError during migration of a legacy router to a DVR router.

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  After commit https://review.openstack.org/124879 was merged, when
  migrating a legacy router to a distributed router, "AttributeError:
  'LegacyRouter' object has no attribute 'dist_fip_count'" is reported,
  because the ri object in the l3 agent is a LegacyRouter instead of a
  DvrRouter, and LegacyRouter doesn't have the attribute dist_fip_count.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396854] Re: fail to create an instance with specific ip

2015-02-11 Thread Robert Collins
Fixed in https://review.openstack.org/#/c/149905/

** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396854

Title:
  fail to create an instance with specific ip

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  When I using below command to create an instance with specific ip, it
  failed.

  nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.nano --nic net-
  id=5b7930ae-ff24-4dcf-a429-e039cb7502dd,v4-fixed-ip=10.0.0.5 test

  My env is latest devstack on fedora20.



  Here is trace log from nova-compute.
  2014-11-27 11:15:09.565 ERROR nova.compute.manager [-] [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Instance failed to spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Traceback (most recent call last):
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2247, in _build_resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] yield resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2117, in _build_and_run_instance
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] instance_type=instance_type)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2634, in spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] admin_pass=admin_password)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3095, in _create_image
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] content=files, extra_md=extra_md, 
network_info=network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/metadata/base.py", line 167, in __init__
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] 
ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/ec2/ec2utils.py", line 152, in 
get_ip_info_for_instance_from_nw_info
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] fixed_ips = nw_info.fixed_ips()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 450, in _sync_wrapper
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 482, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self[:] = self._gt.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] return self._exit_event.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] current.throw(*self._exc)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] result = function(*args, **kwargs)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1647, in _allocate_network_async
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance

[Yahoo-eng-team] [Bug 1396854] Re: fail to create an instance with specific ip

2015-02-11 Thread Robert Collins
*** This bug is a duplicate of bug 1408529 ***
https://bugs.launchpad.net/bugs/1408529

** This bug has been marked a duplicate of bug 1408529
   nova boot vm with '--nic net-id=, v4-fixed-ip=xxx' failed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396854

Title:
  fail to create an instance with specific ip

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  When I using below command to create an instance with specific ip, it
  failed.

  nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.nano --nic net-
  id=5b7930ae-ff24-4dcf-a429-e039cb7502dd,v4-fixed-ip=10.0.0.5 test

  My env is latest devstack on fedora20.



  Here is trace log from nova-compute.
  2014-11-27 11:15:09.565 ERROR nova.compute.manager [-] [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Instance failed to spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Traceback (most recent call last):
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2247, in _build_resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] yield resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2117, in _build_and_run_instance
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] instance_type=instance_type)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2634, in spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] admin_pass=admin_password)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3095, in _create_image
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] content=files, extra_md=extra_md, 
network_info=network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/metadata/base.py", line 167, in __init__
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] 
ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/api/ec2/ec2utils.py", line 152, in 
get_ip_info_for_instance_from_nw_info
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] fixed_ips = nw_info.fixed_ips()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 450, in _sync_wrapper
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/network/model.py", line 482, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] self[:] = self._gt.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] return self._exit_event.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] current.throw(*self._exc)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9] result = function(*args, **kwargs)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1647, in _allocate_network_async
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 
1a8295a2-80b5-4f

[Yahoo-eng-team] [Bug 1421049] [NEW] Removing a dvr router interface consumes much time

2015-02-11 Thread shihanzhang
Public bug reported:

In my environment, I create a DVR router with only one subnet (cidr is
10.0.0.0/8) attached to this router, then I create a large number of ports
in this subnet. When I use 'router-interface-delete' to remove this subnet
from the router, it takes a long time to return. I analysed the reason,
which is below:
1. when 'remove_router_interface' runs, it notifies the l3 agent with 'routers_updated',
2. in this _notification, it schedules the routers again:

    def _notification(self, context, method, router_ids, operation,
                      shuffle_agents):
        """Notify all the agents that are hosting the routers."""
        plugin = manager.NeutronManager.get_service_plugins().get(
            service_constants.L3_ROUTER_NAT)
        if not plugin:
            LOG.error(_LE('No plugin for L3 routing registered. Cannot notify '
                          'agents with the message %s'), method)
            return
        if utils.is_extension_supported(
                plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
            adminContext = (context.is_admin and
                            context or context.elevated())
            plugin.schedule_routers(adminContext, router_ids)
            self._agent_notification(
                context, method, router_ids, operation, shuffle_agents)

3. in _schedule_router it gets the candidate l3 agents, but in
'get_l3_agent_candidates' it checks 'check_ports_exist_on_l3agent':

    if agent_mode in ('legacy', 'dvr_snat') and (
            not is_router_distributed):
        candidates.append(l3_agent)
    elif is_router_distributed and agent_mode.startswith('dvr') and (
            self.check_ports_exist_on_l3agent(
                context, l3_agent, sync_router['id'])):
        candidates.append(l3_agent)

4. but for 'remove_router_interface', the router interface has already been
deleted before the scheduling runs, so 'get_subnet_ids_on_router' will
return an empty list; that list is then used as the filter to get ports, and
if the number of ports is very large, this consumes much time:

    def check_ports_exist_on_l3agent(self, context, l3_agent, router_id):
        """
        This function checks for existence of dvr serviceable
        ports on the host, running the input l3agent.
        """
        subnet_ids = self.get_subnet_ids_on_router(context, router_id)

        core_plugin = manager.NeutronManager.get_plugin()
        filter = {'fixed_ips': {'subnet_id': subnet_ids}}
        ports = core_plugin.get_ports(context, filters=filter)

so I think 'remove_router_interface' should not reschedule the router.

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421049

Title:
  Removing a dvr router interface consumes much time

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In my environment, I create a DVR router with only one subnet (cidr is
  10.0.0.0/8) attached to this router, then I create a large number of ports
  in this subnet. When I use 'router-interface-delete' to remove this subnet
  from the router, it takes a long time to return. I analysed the reason,
  which is below:
  1. when 'remove_router_interface' runs, it notifies the l3 agent with 'routers_updated',
  2. in this _notification, it schedules the routers again:

      def _notification(self, context, method, router_ids, operation,
                        shuffle_agents):
          """Notify all the agents that are hosting the routers."""
          plugin = manager.NeutronManager.get_service_plugins().get(
              service_constants.L3_ROUTER_NAT)
          if not plugin:
              LOG.error(_LE('No plugin for L3 routing registered. Cannot notify '
                            'agents with the message %s'), method)
              return
          if utils.is_extension_supported(
                  plugin, constants.L3_AGENT_SCHEDULER_EXT_ALIAS):
              adminContext = (context.is_admin and
                              context or context.elevated())
              plugin.schedule_routers(adminContext, router_ids)
              self._agent_notification(
                  context, method, router_ids, operation, shuffle_agents)

  3. in _schedule_router it gets the candidate l3 agents, but in
  'get_l3_agent_candidates' it checks 'check_ports_exist_on_l3agent':

      if agent_mode in ('legacy', 'dvr_snat') and (
              not is_router_distributed):
          candidates.append(l3_agent)
      elif is_router_distributed and agent_mode.startswith('dvr') and (
              self.check_ports_exist_on_l3agent(
                  context, l3_agent, sync_router['id'])):
          candidates.append(l3_agent)

  4. but for

[Yahoo-eng-team] [Bug 1421055] [NEW] bulk create gre/vxlan network failed

2015-02-11 Thread shihanzhang
Public bug reported:

I bulk create 1000 vxlan networks and it always fails. I analysed the
reason, which is below:
no lock is taken when the segment id is allocated:

    def allocate_fully_specified_segment(self, session, **raw_segment):
        network_type = self.get_type()
        try:
            with session.begin(subtransactions=True):
                alloc = (session.query(self.model).filter_by(**raw_segment).
                         first())

    def allocate_partially_specified_segment(self, session, **filters):

        network_type = self.get_type()
        with session.begin(subtransactions=True):
            select = (session.query(self.model).
                      filter_by(allocated=False, **filters))

I think a lock should be taken when the segment id is allocated.
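
A hedged sketch of one way to serialize the allocation, assuming the
model/filters shape quoted above; with_for_update() (SELECT ... FOR UPDATE)
is an illustrative locking choice, not necessarily the fix that was merged:

    def allocate_partially_specified_segment(self, session, **filters):
        with session.begin(subtransactions=True):
            # Lock the candidate row so concurrent bulk creates cannot
            # both pick the same unallocated segment id.
            alloc = (session.query(self.model).
                     filter_by(allocated=False, **filters).
                     with_for_update().
                     first())
            if alloc:
                alloc.allocated = True
            return alloc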

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421055

Title:
  bulk create gre/vxlan network failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I bulk create 1000 vxlan networks and it always fails. I analysed the
  reason, which is below:
  no lock is taken when the segment id is allocated:

      def allocate_fully_specified_segment(self, session, **raw_segment):
          network_type = self.get_type()
          try:
              with session.begin(subtransactions=True):
                  alloc = (session.query(self.model).filter_by(**raw_segment).
                           first())

      def allocate_partially_specified_segment(self, session, **filters):

          network_type = self.get_type()
          with session.begin(subtransactions=True):
              select = (session.query(self.model).
                        filter_by(allocated=False, **filters))

  I think a lock should be taken when the segment id is allocated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421073] [NEW] can't add a new panel to admin dashboard

2015-02-11 Thread reachlin
Public bug reported:

I am trying to add a new panel to the admin dashboard, following the
sample in _50_admin_add_panel.py.example in the enabled folder. But
the server gives an error that a tuple has no "append" method.

Then I figured out that at line 23 of
openstack_dashboard/dashboards/admin/dashboard.py, panels is a tuple.
After I changed it into a list, it seems OK.

So please either fix the sample or the admin dashboard. Also, this way
of adding a new panel can't specify the location where the new panel appears
in the navigation bar on the left.
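
A hedged sketch of the reported workaround in
openstack_dashboard/dashboards/admin/dashboard.py; the panel slugs shown are
illustrative, and the point is simply that a list, unlike a tuple, supports
append():

    import horizon
    from django.utils.translation import ugettext_lazy as _

    class SystemPanels(horizon.PanelGroup):
        slug = "admin"
        name = _("System Panel")
        # A list instead of a tuple, so plugin enabled-files can append
        # their panel slugs without raising AttributeError.
        panels = ['overview', 'hypervisors', 'instances', 'flavors',
                  'images', 'networks', 'routers', 'info']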

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1421073

Title:
  can't add a new panel to admin dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I am trying to add a new panel to the admin dashboard, following the
  sample in _50_admin_add_panel.py.example in the enabled folder.
  But the server gives an error that a tuple has no "append" method.

  Then I figured out that at line 23 of
  openstack_dashboard/dashboards/admin/dashboard.py, panels is a tuple.
  After I changed it into a list, it seems OK.

  So please either fix the sample or the admin dashboard. Also, this way
  of adding a new panel can't specify the location where the new panel
  appears in the navigation bar on the left.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1421073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421084] [NEW] admin_password inject failed in kvm

2015-02-11 Thread Eric Xie
Public bug reported:

Inject "admin_password" info to instance's image with modifying the
content of image not config_drive.

Instance booted successfully, but i got the log below:
2015-02-12 14:11:28.119 2 WARNING nova.virt.disk.api 
[req-bebb329c-ee79-48d2-b800-e02e2e7ffbd5 189303f928574600af4f95a00f13c280 
94b44a817f9a4c40a21b136de7cefee7] Ignoring error injecting data into image 
(Error mounting 
/var/lib/nova/instances/eb030d0d-2f39-453b-9faa-4bffaef2ee9c/disk with 
libguestfs (/usr/bin/qemu-system-x86_64 exited with error status 1.

And login failed when I used the password above.

Reproduce:
1. Prepare the environment:
With libvirt and KVM.
In nova.conf:
inject_partition=-1
inject_password=true
2. Launch one instance with "admin_pass" configured.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421084

Title:
  admin_password inject failed in kvm

Status in OpenStack Compute (Nova):
  New

Bug description:
  Inject "admin_password" info to instance's image with modifying the
  content of image not config_drive.

  Instance booted successfully, but i got the log below:
  2015-02-12 14:11:28.119 2 WARNING nova.virt.disk.api 
[req-bebb329c-ee79-48d2-b800-e02e2e7ffbd5 189303f928574600af4f95a00f13c280 
94b44a817f9a4c40a21b136de7cefee7] Ignoring error injecting data into image 
(Error mounting 
/var/lib/nova/instances/eb030d0d-2f39-453b-9faa-4bffaef2ee9c/disk with 
libguestfs (/usr/bin/qemu-system-x86_64 exited with error status 1.

  And login failed when I used the password above.

  Reproduce:
  1. Prepare the environment:
  With libvirt and KVM.
  In nova.conf:
  inject_partition=-1
  inject_password=true
  2. Launch one instance with "admin_pass" configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421083] [NEW] admin_password inject failed in kvm

2015-02-11 Thread Eric Xie
Public bug reported:

Inject "admin_password" info to instance's image with modifying the
content of image not config_drive.

Instance booted successfully, but i got the log below:
2015-02-12 14:11:28.119 2 WARNING nova.virt.disk.api 
[req-bebb329c-ee79-48d2-b800-e02e2e7ffbd5 189303f928574600af4f95a00f13c280 
94b44a817f9a4c40a21b136de7cefee7] Ignoring error injecting data into image 
(Error mounting 
/var/lib/nova/instances/eb030d0d-2f39-453b-9faa-4bffaef2ee9c/disk with 
libguestfs (/usr/bin/qemu-system-x86_64 exited with error status 1.

And login failed when I used the password above.

Reproduce:
1. Prepare the environment:
With libvirt and KVM.
In nova.conf:
inject_partition=-1
inject_password=true
2. Launch one instance with "admin_pass" configured.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421083

Title:
  admin_password inject failed in kvm

Status in OpenStack Compute (Nova):
  New

Bug description:
  Inject "admin_password" info to instance's image with modifying the
  content of image not config_drive.

  Instance booted successfully, but i got the log below:
  2015-02-12 14:11:28.119 2 WARNING nova.virt.disk.api 
[req-bebb329c-ee79-48d2-b800-e02e2e7ffbd5 189303f928574600af4f95a00f13c280 
94b44a817f9a4c40a21b136de7cefee7] Ignoring error injecting data into image 
(Error mounting 
/var/lib/nova/instances/eb030d0d-2f39-453b-9faa-4bffaef2ee9c/disk with 
libguestfs (/usr/bin/qemu-system-x86_64 exited with error status 1.

  And login failed when I used the password above.

  Reproduce:
  1. Prepare the environment:
  With libvirt and KVM.
  In nova.conf:
  inject_partition=-1
  inject_password=true
  2. Launch one instance with "admin_pass" configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421086] [NEW] admin_password inject failed with config_drive

2015-02-11 Thread Eric Xie
Public bug reported:

Inject "admin_password" to instance with config_drive failed.

Reproduce:
1. Set "force_config_drive=always" in nova.conf
2. Launch one instance with "admin_pass";

I checked /dev/sr0 from the console of this instance,
and found the "meta_data.json" file that had the info {"admin_pass": "admin",……}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421086

Title:
  admin_password inject failed with config_drive

Status in OpenStack Compute (Nova):
  New

Bug description:
  Inject "admin_password" to instance with config_drive failed.

  Reproduce:
  1. Set "force_config_drive=always" in nova.conf
  2. Launch one instance with "admin_pass";

  I checked /dev/sr0 from the console of this instance,
  and found the "meta_data.json" file that had the info {"admin_pass": "admin",……}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421089] [NEW] no index for port DB sqlalchemy

2015-02-11 Thread shihanzhang
Public bug reported:

Now there is no index on the port DB sqlalchemy model, but when nova bulk
creates VMs it frequently selects ports according to 'tenant_id',
'network_id' and 'device_id'; with no index this consumes much time in db
operations, so I think it is better to add indexes on the port columns
'tenant_id', 'network_id' and 'device_id'.
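
A hedged sketch of an alembic migration adding the suggested indexes; the
index names are illustrative, not from an actual neutron migration:

    from alembic import op

    def upgrade():
        op.create_index('ix_ports_tenant_id', 'ports', ['tenant_id'])
        op.create_index('ix_ports_network_id', 'ports', ['network_id'])
        op.create_index('ix_ports_device_id', 'ports', ['device_id'])

    def downgrade():
        op.drop_index('ix_ports_device_id', 'ports')
        op.drop_index('ix_ports_network_id', 'ports')
        op.drop_index('ix_ports_tenant_id', 'ports')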

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421089

Title:
  no index for port DB sqlalchemy

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now there is no index on the port DB sqlalchemy model, but when nova bulk
  creates VMs it frequently selects ports according to 'tenant_id',
  'network_id' and 'device_id'; with no index this consumes much time in db
  operations, so I think it is better to add indexes on the port columns
  'tenant_id', 'network_id' and 'device_id'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1421089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421092] [NEW] assertTableColumns always passes

2015-02-11 Thread wanghong
Public bug reported:

The assertTableColumns method in tests/test_sql_upgrade.py is used to assert
that the table contains the expected set of columns. But it always passes,
because list.sort() sorts in place and returns None, so the assertion
compares None with None:
https://github.com/openstack/keystone/blob/master/keystone/tests/test_sql_upgrade.py#L265

    def assertTableColumns(self, table_name, expected_cols):
        """Asserts that the table contains the expected set of columns."""
        self.initialize_sql()
        table = self.select_table(table_name)
        actual_cols = [col.name for col in table.columns]
        # Check if the columns are equal, but allow for a different order,
        # which might occur after an upgrade followed by a downgrade
        self.assertEqual(expected_cols.sort(), actual_cols.sort(),
                         '%s table' % table_name)
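
A minimal sketch of the fix: sorted() returns a new list (unlike
list.sort(), which sorts in place and returns None), so the assertion
actually compares the column names:

    def assertTableColumns(self, table_name, expected_cols):
        """Asserts that the table contains the expected set of columns."""
        self.initialize_sql()
        table = self.select_table(table_name)
        actual_cols = [col.name for col in table.columns]
        # sorted() keeps the order-insensitive comparison but compares real
        # values instead of None with None.
        self.assertEqual(sorted(expected_cols), sorted(actual_cols),
                         '%s table' % table_name)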

** Affects: keystone
 Importance: Undecided
 Assignee: wanghong (w-wanghong)
 Status: New

** Description changed:

  assertTableColumns method in tests/test_sql_upgrade.py is used to asserts 
that the table contains the expected set of columns. But, it always passes 
because the return of sort method is None:
  
https://github.com/openstack/keystone/blob/master/keystone/tests/test_sql_upgrade.py#L265
- def assertTableColumns(self, table_name, expected_cols):
- """Asserts that the table contains the expected set of columns."""
- self.initialize_sql()
- table = self.select_table(table_name)
- actual_cols = [col.name for col in table.columns]
- # Check if the columns are equal, but allow for a different order,
- # which might occur after an upgrade followed by a downgrade
- self.assertEqual(expected_cols.sort(), actual_cols.sort(),
-  '%s table' % table_name)
+ def assertTableColumns(self, table_name, expected_cols):
+ """Asserts that the table contains the expected set of columns."""
+ self.initialize_sql()
+ table = self.select_table(table_name)
+ actual_cols = [col.name for col in table.columns]
+ # Check if the columns are equal, but allow for a different order,
+ # which might occur after an upgrade followed by a downgrade
+ self.assertEqual(expected_cols.sort(), actual_cols.sort(),
+  '%s table' % table_name)

** Changed in: keystone
 Assignee: (unassigned) => wanghong (w-wanghong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1421092

Title:
  assertTableColumns always passes

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The assertTableColumns method in tests/test_sql_upgrade.py is used to
  assert that the table contains the expected set of columns. But it always
  passes, because list.sort() sorts in place and returns None, so the
  assertion compares None with None:
  https://github.com/openstack/keystone/blob/master/keystone/tests/test_sql_upgrade.py#L265

      def assertTableColumns(self, table_name, expected_cols):
          """Asserts that the table contains the expected set of columns."""
          self.initialize_sql()
          table = self.select_table(table_name)
          actual_cols = [col.name for col in table.columns]
          # Check if the columns are equal, but allow for a different order,
          # which might occur after an upgrade followed by a downgrade
          self.assertEqual(expected_cols.sort(), actual_cols.sort(),
                           '%s table' % table_name)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1421092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp