[Yahoo-eng-team] [Bug 1301775] [NEW] Operations on the VM web page (not the first page) encounter strange behavior

2014-04-03 Thread Jing Yuan
Public bug reported:

I have 30 VMs displayed across two pages.

Operations on page 1 all work fine.
But when I perform an operation on page 2, such as starting a VM, Horizon
redirects back to page 1 and does not carry out the request.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  I have 30 VM displayed in two pages.
  
- When made operations in page 1, all things are ok.
- But when made operation in page 2, like start a VM, horizon redirected web 
page to page 1 and did not carry out the request.
+ When made operations in page 1, all things were ok.
+ But when made operation in page 2, like started a VM, horizon redirected web 
page to page 1 and did not carry out any request.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301775

Title:
  Operations on the VM web page (not the first page) encounter strange
  behavior

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have 30 VMs displayed across two pages.

  Operations on page 1 all work fine.
  But when I perform an operation on page 2, such as starting a VM, Horizon
  redirects back to page 1 and does not carry out the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244831] Re: Reason for Volume creation failures is not displayed

2014-04-03 Thread sandeep mane
** Changed in: cinder
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1244831

Title:
  Reason for Volume creation failures is not displayed

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  When creating a Volume from a Volume Snapshot, a volume size larger than
  the snapshot was specified.

  The error only displays "unable to create volume". It would be helpful if
  the exception message were included in the message. This way, the user can
  adjust the volume size to match the snapshot and fix the error.
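
  For illustration only (the helper below is a stand-in, not the actual
  Horizon code), the suggestion boils down to keeping str(exception) in the
  message shown to the user:

      def friendly_error(exc):
          # Keep the root cause from the caught exception in the user-facing text.
          return "Unable to create volume: %s" % exc

      try:
          # Stand-in for the failing API call described above.
          raise ValueError("Volume size must be equal to or larger than the snapshot size")
      except ValueError as exc:
          print(friendly_error(exc))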

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1244831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290286] Re: Pop new page when click instance console link

2014-04-03 Thread Matthias Runge
What about middle-clicking (e.g. in Firefox)? Personally, I don't like
opening new windows at all. By making a new window pop up as the default,
you take this choice away from the user. So I disagree that this is a
bug.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1290286

Title:
  Pop new page when click instance console link

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  On the instance detail page, when you click 'Click here to show only
  console', you must click the browser's back button to exit full-screen
  mode if you want to view another page. But if the link is opened with
  '_blank', you can view the instance console and still use the dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1290286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301822] [NEW] Fail to associate floating IP via modifying instance

2014-04-03 Thread jiajiali
Public bug reported:

Instances are initiated via a Heat template successfully; I want to
associate an instance with a floating IP.

1) Method 1

Dashboard GUI => Project => Manage Compute => Access & Security => Floating IPs:
associating and disassociating both succeed.

2) Method 2

Dashboard GUI => Project => Manage Compute => Instances => Actions => More =>
Associate/Disassociate Floating IP: no error is reported, but in fact the
action has no effect.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301822

Title:
  Fail to associate floating IP via modifying instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Instances are initiated via a Heat template successfully; I want to
  associate an instance with a floating IP.

  1) Method 1

  Dashboard GUI => Project => Manage Compute => Access & Security => Floating IPs:
  associating and disassociating both succeed.

  2) Method 2

  Dashboard GUI => Project => Manage Compute => Instances => Actions => More =>
  Associate/Disassociate Floating IP: no error is reported, but in fact the
  action has no effect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299349] Re: upstream-translation-update Jenkins job failing

2014-04-03 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299349

Title:
  upstream-translation-update Jenkins job failing

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Manuals:
  New

Bug description:
  Various upstream-translation-update jobs apparently succeed, but looking
  into the log files the errors get ignored - and there are errors uploading
  the files to the translation site, like:

  Error uploading file: There is a syntax error in your file.
  Line 1936: duplicate message definition...
  Line 7: ...this is the location of the first definition

  
  For keystone, see:
  
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

  This has been fixed on ironic with:
  https://review.openstack.org/#/c/83935
  See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1299349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301824] [NEW] Hypervisors only shows the disk size of /root

2014-04-03 Thread jiajiali
Public bug reported:

Dashboard => Admin => Hypervisors shows Storage total = 48G; however, the
system has ~300G and only /root is ~48G. It looks like the dashboard only
shows the disk size of /root rather than the whole system's disks.

[root@HPBlade1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_hpblade1-lv_root
   50G   35G   13G  74% /
tmpfs  32G   76K   32G   1% /dev/shm
/dev/sda1 485M   63M  397M  14% /boot
/dev/mapper/vg_hpblade1-lv_home
  198G   23G  165G  13% /home
[root@HPBlade1 ~]#
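
For context, a minimal sketch of why a single ~50G filesystem can show up as
the ~48G total (an assumption about where the number comes from: querying one
mount point, e.g. the one holding the instances directory, only reports that
filesystem's capacity):

    import os

    # statvfs of a single path reports only that filesystem, so a 50G root LV
    # yields roughly 48G usable even though other volumes exist on the host.
    st = os.statvfs('/')                      # path is illustrative
    total_gb = st.f_frsize * st.f_blocks / (1024.0 ** 3)
    print("reported storage total: %.1f GB" % total_gb)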

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301824

Title:
  Hypervisors only shows the disk size of /root

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Dashboard => Admin => Hypervisors shows Storage total = 48G; however, the
  system has ~300G and only /root is ~48G. It looks like the dashboard only
  shows the disk size of /root rather than the whole system's disks.

  [root@HPBlade1 ~]# df -h
  Filesystem            Size  Used Avail Use% Mounted on
  /dev/mapper/vg_hpblade1-lv_root
 50G   35G   13G  74% /
  tmpfs  32G   76K   32G   1% /dev/shm
  /dev/sda1 485M   63M  397M  14% /boot
  /dev/mapper/vg_hpblade1-lv_home
198G   23G  165G  13% /home
  [root@HPBlade1 ~]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301838] [NEW] SG rule should not allow an ICMP Policy when icmp-code alone is provided.

2014-04-03 Thread Sridhar Gaddam
Public bug reported:

When we add a Security Group ICMP rule with icmp-type/code, the rule
gets added properly and it translates to an appropriate firewall policy.

It was noticed that when adding a security group rule without providing the
icmp-type and only providing the icmp-code, there is no error.
But the iptables rule that gets added is a generic one.

Example:
neutron --debug security-group-rule-create 4b3a5866-8cdd-4e15-b51b-9523ede2f6f8 
--protocol icmp --direction ingress --ethertype ipv4 --port-range-max 4

translates to an iptables rule like
-A neutron-openvswi-i49e920d5-c -p icmp -j RETURN

The Security Group rules listing in Horizon/neutron-client displays the ICMP
rule with port-range as None-icmp-code.
This could be misleading as it is inconsistent.
It would be good if validation were done when --port-range-max is passed
without --port-range-min, so that SG rules are consistent with the iptables
rules that are added.
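
A minimal sketch of the kind of validation being suggested (parameter names
mirror the CLI flags above; this is not the actual Neutron code):

    def validate_icmp_rule(port_range_min, port_range_max):
        # For ICMP rules, port_range_min carries the ICMP type and
        # port_range_max the ICMP code.
        if port_range_max is not None and port_range_min is None:
            raise ValueError("ICMP code (--port-range-max) requires an "
                             "ICMP type (--port-range-min) to be set")

    validate_icmp_rule(port_range_min=None, port_range_max=4)  # raises ValueError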

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

** Description changed:

- When we add an Security Group ICMP Policy with icmp-type/code, the
- policy gets added properly and it translates to an appropriate firewall
- policy.
+ When we add an Security Group ICMP rule with icmp-type/code, the rule
+ gets added properly and it translates to an appropriate firewall policy.
  
- It was noticed that when adding a security group policy, without providing 
the icmp-type and only providing the icmp-code, there is no error. 
- But the iptables rule that gets added is a generic one. 
+ It was noticed that when adding a security group rule, without providing the 
icmp-type and only providing the icmp-code, there is no error.
+ But the iptables rule that gets added is a generic one.
  
  Example:
  neutron --debug security-group-rule-create 
4b3a5866-8cdd-4e15-b51b-9523ede2f6f8 --protocol icmp --direction ingress 
--ethertype ipv4 --port-range-max 4
  
- translates to a iptables rule like 
- -A neutron-openvswi-i49e920d5-c -p icmp -j RETURN 
+ translates to a iptables rule like
+ -A neutron-openvswi-i49e920d5-c -p icmp -j RETURN
  
- The Security Group rules listing in Horizon/neutron-client display the icmp 
rule with port-range as None-icmp-code. 
- This could be misleading as it is inconsistent. 
+ The Security Group rules listing in Horizon/neutron-client display the icmp 
rule with port-range as None-icmp-code.
+ This could be misleading as it is inconsistent.
  It would be good if validation is done when --port-range-max is passed 
without providing the --port-range-min so that SG Group rules are consistent 
with the iptable rules that are added.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301838

Title:
  SG rule should not allow an ICMP Policy when icmp-code alone is
  provided.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When we add a Security Group ICMP rule with icmp-type/code, the rule
  gets added properly and it translates to an appropriate firewall
  policy.

  It was noticed that when adding a security group rule without providing the
  icmp-type and only providing the icmp-code, there is no error.
  But the iptables rule that gets added is a generic one.

  Example:
  neutron --debug security-group-rule-create 
4b3a5866-8cdd-4e15-b51b-9523ede2f6f8 --protocol icmp --direction ingress 
--ethertype ipv4 --port-range-max 4

  translates to an iptables rule like
  -A neutron-openvswi-i49e920d5-c -p icmp -j RETURN

  The Security Group rules listing in Horizon/neutron-client displays the ICMP
  rule with port-range as None-icmp-code.
  This could be misleading as it is inconsistent.
  It would be good if validation were done when --port-range-max is passed
  without --port-range-min, so that SG rules are consistent with the iptables
  rules that are added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300274] Re: V3 Authentication Chaining - uniqueness of auth method names

2014-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/84735
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=ce6cedb30c5c4b4cf4db9380f09443de22414b39
Submitter: Jenkins
Branch:milestone-proposed

commit ce6cedb30c5c4b4cf4db9380f09443de22414b39
Author: Florent Flament florent.flament-...@cloudwatt.com
Date:   Tue Apr 1 12:48:22 2014 +

Sanitizes authentication methods received in requests.

When a user authenticates against Identity V3 API, he can specify
multiple authentication methods. This patch removes duplicates, which
could have been used to achieve DoS attacks.

Change-Id: Iec9a1875a4ff6e2fac0fb2c3db6f3ce34a5dfd1d
Closes-Bug: 1300274


** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1300274

Title:
  V3 Authentication Chaining - uniqueness of auth method names

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  In the V3.0 API, we can chain authentication methods. An attacker can
  place the same authentication method multiple times in the methods
  field. This will result in the same authentication method being checked
  over and over (a for loop in the code). Using this, an attacker can
  achieve some sort of Denial of Service. The methods field is not properly
  sanitized.

  {
      "auth": {
          "identity": {
              "methods": [
                  "password",
                  "password",
                  "password",
                  "password",
                  "password"
              ],
              "password": {
                  "user": {
                      "domain": {
                          "id": "default"
                      },
                      "name": "demo",
                      "password": "stack"
                  }
              }
          }
      }
  }
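
  The committed fix removes such duplicates; conceptually it amounts to an
  order-preserving de-duplication along these lines (a sketch, not the actual
  keystone code):

      def sanitize_methods(methods):
          # Drop repeated auth method names while keeping their original order.
          seen = set()
          return [m for m in methods if not (m in seen or seen.add(m))]

      print(sanitize_methods(["password"] * 5))   # ['password']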

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1300274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245892] Re: Radware LBaaS driver: NeutronException usage is faulty

2014-04-03 Thread Avishay Balderman
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245892

Title:
  Radware LBaaS driver: NeutronException usage is faulty

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Radware's LBaaS driver is instantiating NeutronException incorrectly.

  Example:
  raise n_exc.NeutronException(
      _('params must contain __ids__ field!'))

  NeutronException expects kwargs. Passing a plain string will cause an error.

  There are 4 places in the services/loadbalancer/drivers/radware/driver.py
  module that should be fixed.

  Another problem is the context usage:
  The operations-handling thread is using the same context, persistent from
  start-up time, which is wrong.
  Each operation should use its own context to complete the operation.
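
  For illustration, the usual pattern (a sketch reusing the n_exc and _ names
  from the example above; the subclass name is made up) is a message template
  formatted with keyword arguments rather than a positional string:

      class DriverParamsError(n_exc.NeutronException):
          # NeutronException-style classes format this template with the
          # kwargs passed when the exception is raised.
          message = _("params must contain %(field)s field!")

      raise DriverParamsError(field="__ids__")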

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245888] Re: Removing health monitor associated with a pool does not reflect the change to Radware vDirect system

2014-04-03 Thread Avishay Balderman
The function delete_health_monitor(...) no longer exists in plugin.py --
invalid.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245888

Title:
  Removing health monitor associated with a pool does not reflect the
  change to Radware vDirect system

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Radware is the LBaaS provider.
  A health monitor is associated with a pool.
  Remove the health monitor.
  Removing the health monitor succeeds, but the change is not reflected in the
  Radware vDirect system.

  I know that removing an HM which is associated with a pool might be forbidden
  via Horizon, but technically it's possible.
  The HM should be removed and all pools associated with it should be
  disconnected.

  The bug is in the LBaaS plugin, services/loadbalancer/plugin.py, in the
  delete_health_monitor() function.
  It should change the HM status to PENDING_DELETE before deleting the pool-HM
  association.

  Function after fix:
  def delete_health_monitor(self, context, id):
      with context.session.begin(subtransactions=True):
          hm = self.get_health_monitor(context, id)
          qry = context.session.query(
              ldb.PoolMonitorAssociation
          ).filter_by(monitor_id=id).join(ldb.Pool)
          for assoc in qry:
              driver = self._get_driver_for_pool(context, assoc['pool_id'])

              self.update_pool_health_monitor(context, id, assoc['pool_id'],
                                              constants.PENDING_DELETE)

              driver.delete_pool_health_monitor(context,
                                                hm,
                                                assoc['pool_id'])
          super(LoadBalancerPlugin, self).delete_health_monitor(context, id)

  I know that the health monitor design might be reviewed and changed during
  Icehouse.
  If so, this issue might not be relevant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301935] [NEW] Dashboard ignores the HORIZON_CONFIG['user_home'] setting

2014-04-03 Thread Radomir Dopieralski
Public bug reported:

Dashboard has its own splash view, with a hardcoded get_user_home that
ignores HORIZON_CONFIG. We need to make it use horizon.get_user_home,
which actually checks the user_home setting and, with the default
settings.py, uses the dashboard's get_user_home anyway, but allows
changing that default behavior.
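
For reference, a sketch of the setting in question as it would appear in
settings.py (the dotted path is an example value):

    # 'user_home' can be a callable or a dotted path; horizon.get_user_home()
    # consults it to decide where to send the user after login.
    HORIZON_CONFIG = {
        "user_home": "openstack_dashboard.views.get_user_home",
    }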

** Affects: horizon
 Importance: Undecided
 Assignee: Radomir Dopieralski (thesheep)
 Status: In Progress


** Tags: icehouse-rc-potential

** Changed in: horizon
 Assignee: (unassigned) => Radomir Dopieralski (thesheep)

** Tags added: icehouse-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301935

Title:
  Dashboard ignores the HORIZON_CONFIG['user_home'] setting

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Dashboard has its own splash view, with a hardcoded get_user_home that
  ignores HORIZON_CONFIG. We need to make it use horizon.get_user_home,
  which actually checks the user_home setting and, with the default
  settings.py, uses the dashboard's get_user_home anyway, but allows
  changing that default behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271706] Re: Misleading warning about MySQL TRADITIONAL mode not being set

2014-04-03 Thread Attila Fazekas
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271706

Title:
  Misleading warning about MySQL TRADITIONAL mode not being set

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  common.db.sqlalchemy.session logs a scary warning if create_engine is
  not being called with mysql_traditional_mode set to True:

  WARNING keystone.openstack.common.db.sqlalchemy.session [-] This
  application has not enabled MySQL traditional mode, which means silent
  data corruption may occur. Please encourage the application developers
  to enable this mode.

  That warning is problematic for several reasons:

  (1) It suggests the wrong mode. Arguably TRADITIONAL is better than the 
default, but STRICT_ALL_TABLES would actually be more useful.
  (2) The user has no way to fix the warning.
  (3) The warning does not take into account that a global sql-mode may in fact 
have been set via the server-side MySQL configuration, in which case the 
session *may* in fact be using TRADITIONAL mode all along, despite the warning 
saying otherwise. This makes (2) even worse.

  My suggested approach would be:
  - Remove the warning.
  - Make the SQL mode a config option.

  Patches forthcoming.
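
  A rough sketch of the second suggestion (the option name and default are
  assumptions here; the real change belongs in oslo's db code):

      from oslo.config import cfg   # assumed import path for the oslo.config library

      database_opts = [
          cfg.StrOpt('mysql_sql_mode',
                     default='TRADITIONAL',
                     help='SQL mode requested for MySQL sessions, e.g. '
                          'TRADITIONAL or STRICT_ALL_TABLES.'),
      ]

      cfg.CONF.register_opts(database_opts, group='database')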

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1271706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301982] [NEW] anti-affinity server boot failure shows stack trace

2014-04-03 Thread David Patterson
Public bug reported:

If the anti-affinity policy is set for two different servers and only one
host is available, a stack trace is shown instead of an error message.
Failure is expected, but a message about why it failed would be helpful.

On a single node devstack setup:

nova server-group-create --policy anti-affinity aagroup

nova boot --flavor m1.nano --image cirros-0.3.1-x86_64-uec --nic net-
id=ea784f2b-b262-451a-821a-5ee7f69b3d63 --hint group=0f2b71ea-3dc3-423d-
a9f4-41ac5d0fff07 server1

nova boot --flavor m1.nano --image cirros-0.3.1-x86_64-uec --nic net-
id=ea784f2b-b262-451a-821a-5ee7f69b3d63 --hint group=0f2b71ea-3dc3-423d-
a9f4-41ac5d0fff07 server2

server1 boots fine
server2 has Status Error and Power State No State

Horizon shows:

Fault
Message
unsupported operand type(s) for |: 'list' and 'set'
Code
500
Details
File /opt/stack/nova/nova/scheduler/manager.py, line 140, in run_instance 
legacy_bdm_in_spec) File /opt/stack/nova/nova/scheduler/filter_scheduler.py, 
line 86, in schedule_run_instance filter_properties, instance_uuids) File 
/opt/stack/nova/nova/scheduler/filter_scheduler.py, line 289, in _schedule 
filter_properties) File /opt/stack/nova/nova/scheduler/filter_scheduler.py, 
line 275, in _setup_instance_group filter_properties['group_hosts'] = 
user_hosts | group_hosts
Created
April 3, 2014, 1:58 p.m.
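
The fault above is plain Python behaviour: the | operator requires both
operands to be sets. A minimal reproduction (host names are made up):

    user_hosts = ['node-1']              # a list, like filter_properties['group_hosts']
    group_hosts = {'node-1', 'node-2'}   # a set

    try:
        user_hosts | group_hosts         # TypeError: unsupported operand type(s) for |
    except TypeError as exc:
        print(exc)

    merged = set(user_hosts) | group_hosts   # converting the list first works
    print(sorted(merged))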

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: Sorry for the large logs.  Look at end of screen-n-sch*
   
https://bugs.launchpad.net/bugs/1301982/+attachment/4063305/+files/logs.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301982

Title:
  anti-affinity server boot failure shows stack trace

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the anti-affinity policy is set for two different servers and only one
  host is available, a stack trace is shown instead of an error message.
  Failure is expected, but a message about why it failed would be
  helpful.

  On a single node devstack setup:

  nova server-group-create --policy anti-affinity aagroup

  nova boot --flavor m1.nano --image cirros-0.3.1-x86_64-uec --nic net-
  id=ea784f2b-b262-451a-821a-5ee7f69b3d63 --hint group=0f2b71ea-3dc3
  -423d-a9f4-41ac5d0fff07 server1

  nova boot --flavor m1.nano --image cirros-0.3.1-x86_64-uec --nic net-
  id=ea784f2b-b262-451a-821a-5ee7f69b3d63 --hint group=0f2b71ea-3dc3
  -423d-a9f4-41ac5d0fff07 server2

  server1 boots fine
  server2 has Status Error and Power State No State

  Horizon shows:

  Fault
  Message
  unsupported operand type(s) for |: 'list' and 'set'
  Code
  500
  Details
  File /opt/stack/nova/nova/scheduler/manager.py, line 140, in run_instance 
legacy_bdm_in_spec) File /opt/stack/nova/nova/scheduler/filter_scheduler.py, 
line 86, in schedule_run_instance filter_properties, instance_uuids) File 
/opt/stack/nova/nova/scheduler/filter_scheduler.py, line 289, in _schedule 
filter_properties) File /opt/stack/nova/nova/scheduler/filter_scheduler.py, 
line 275, in _setup_instance_group filter_properties['group_hosts'] = 
user_hosts | group_hosts
  Created
  April 3, 2014, 1:58 p.m.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297607] Re: Key length issue with UTF-8 database

2014-04-03 Thread Mark McClain
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297607

Title:
  Key length issue with UTF-8 database

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  On Icehouse-3, the neutron-server daemon generates the following error
  on startup when configured to use a UTF-8 database:

  2014-03-25 21:23:17.942 26195 INFO neutron.db.api [-] Database
  registration exception: (OperationalError) (1071, 'Specified key was
  too long; max key length is 1000 bytes') '\nCREATE TABLE agents
  (\n\tid VARCHAR(36) NOT NULL, \n\tagent_type VARCHAR(255) NOT NULL,
  \n\t`binary` VARCHAR(255) NOT NULL, \n\ttopic VARCHAR(255) NOT NULL,
  \n\thost VARCHAR(255) NOT NULL, \n\tadmin_state_up BOOL NOT NULL,
  \n\tcreated_at DATETIME NOT NULL, \n\tstarted_at DATETIME NOT NULL,
  \n\theartbeat_timestamp DATETIME NOT NULL, \n\tdescription
  VARCHAR(255), \n\tconfigurations VARCHAR(4095) NOT NULL, \n\tPRIMARY
  KEY (id), \n\tCONSTRAINT uniq_agents0agent_type0host UNIQUE
  (agent_type, host), \n\tCHECK (admin_state_up IN (0, 1))\n)\n\n' ()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295806] Re: XML responses still show quantum

2014-04-03 Thread Mark McClain
This is correct because we cannot change these values until the v2 XML
API is retired.

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
 Assignee: shihanzhang (shihanzhang) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295806

Title:
  XML responses still show quantum

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When I call the Neutron v2.0 API, quantum still shows up in the XML
  responses.

  For example - for the call

  curl -i 'http://166.78.46.130:9696/v2.0/networks/af374017-c9ae-4a1d-b799-ab73111476e2.xml' \
      -X PUT -H "X-Auth-Project-Id: admin" -H "Accept: application/xml" \
      -H "X-Auth-Token: $token" -T network-update.xml

  This response is returned, with a quantum namespace. Is that correct?

  <?xml version='1.0' encoding='UTF-8'?>
  <network xmlns="http://openstack.org/quantum/api/v2.0"
      xmlns:provider="http://docs.openstack.org/ext/provider/api/v1.0"
      xmlns:quantum="http://openstack.org/quantum/api/v2.0"
      xmlns:router="http://docs.openstack.org/ext/neutron/router/api/v1.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <status>ACTIVE</status>
      <subnets quantum:type="list"/>
      <name>sample-network-4-updated</name>
      <provider:physical_network xsi:nil="true"/>
      <admin_state_up quantum:type="bool">True</admin_state_up>
      <tenant_id>4fd44f30292945e481c7b8a0c8908869</tenant_id>
      <provider:network_type>local</provider:network_type>
      <router:external quantum:type="bool">False</router:external>
      <shared quantum:type="bool">False</shared>
      <id>af374017-c9ae-4a1d-b799-ab73111476e2</id>
      <provider:segmentation_id xsi:nil="true"/>
  </network>

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302007] [NEW] l3 agent uses method concatenating two constants

2014-04-03 Thread Jakub Libosvar
Public bug reported:

In the l3 agent there is a method ns_name() that concatenates the constant
NS_PREFIX with router.id, which doesn't change during the router's lifecycle.
It's inefficient; the namespace name can be an instance attribute.
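
A sketch of the suggested change (class and attribute names are illustrative,
not the actual l3-agent code):

    NS_PREFIX = 'qrouter-'

    class RouterInfo(object):
        def __init__(self, router_id):
            self.router_id = router_id
            # router_id never changes, so build the namespace name once
            # instead of concatenating NS_PREFIX on every ns_name() call.
            self.ns_name = NS_PREFIX + router_id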

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302007

Title:
  l3 agent uses method concatenating two constants

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In the l3 agent there is a method ns_name() that concatenates the constant
  NS_PREFIX with router.id, which doesn't change during the router's lifecycle.
  It's inefficient; the namespace name can be an instance attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301036] Re: openstack.common.db.sqlalchemy.migration utf8 table check issue on initial migration

2014-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/84870
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c15fac2b9a074de2d81b67b857bab38f1b1e3fb3
Submitter: Jenkins
Branch:milestone-proposed

commit c15fac2b9a074de2d81b67b857bab38f1b1e3fb3
Author: Morgan Fainberg m...@metacloud.com
Date:   Wed Apr 2 14:10:12 2014 -0700

Sync from oslo db.sqlalchemy.migration

Sync the updated migration module to prevent errors on non-utf8
migrate_version and alembic_version tables (not controlled by the
schema migration directory).

Change from oslo commits:
[2fd457bf2cc] Ignore migrate versioning tables in utf8 sanity check

Change-Id: I6aef126b8d09c7a44f3815c4143ef1d4e5463426
Closes-Bug: #1301036


** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1301036

Title:
  openstack.common.db.sqlalchemy.migration utf8 table check issue on
  initial migration

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  New
Status in Ironic (Bare Metal Provisioning):
  Invalid
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in “keystone” package in Ubuntu:
  Triaged
Status in “keystone” source package in Trusty:
  Triaged

Bug description:
  The code in openstack.common.db.sqlachemy.migration that does the
  sanity checking on the UTF8 table is a bit overzealous and checks the
  migration_version (and alembic equivalent) table for utf8 status. This
  will cause migrations to fail in any case where the db isn't forcing a
  default character set of utf8, the db engine isn't forced to utf8, or
  the migration_version table isn't fixed before migrations occur.

  This was duplicated by doing a clean Ubuntu 12.04 install with mysql,
  using the default latin1 character set and simply creating the DB with
  ``create database keystone;``

  The result is migrations fail at migration 0 unless the db sanity
  check is disabled (e.g. glance).

  root@precise64:~/keystone# keystone-manage --config-file /etc/keystone.conf 
db_sync
  2014-04-01 14:03:23.858 19840 CRITICAL keystone [-] ValueError: Tables 
migrate_version have non utf8 collation, please make sure all tables are 
CHARSET=utf8

  This is unaffected by the character set in the connection string.

  The solution is to explicitly ignore the migrate_version (and alembic
  equivalent) table in the sanity check.

  Code in question is here:

  
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/openstack/common/db/sqlalchemy/migration.py?id=51772e1addf8ab6f7483df44b7243fd7842eba4a#n200

  This will affect any project using this code for migrations when using
  the mysql engine.
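
  The proposed fix amounts to excluding the versioning bookkeeping tables from
  the charset check; a sketch (not the synced oslo code, and the query is
  simplified):

      EXCLUDED_TABLES = ('migrate_version', 'alembic_version')

      def non_utf8_tables(connection):
          rows = connection.execute(
              "SELECT table_name, table_collation FROM information_schema.tables "
              "WHERE table_schema = DATABASE()")
          return [name for name, collation in rows
                  if name not in EXCLUDED_TABLES
                  and not collation.startswith('utf8')]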

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1301036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299349] Re: upstream-translation-update Jenkins job failing

2014-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/84563
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=c773cf96a52924a7dc42f774c57463e8ed66c9ef
Submitter: Jenkins
Branch:milestone-proposed

commit c773cf96a52924a7dc42f774c57463e8ed66c9ef
Author: Andreas Jaeger a...@suse.de
Date:   Sat Mar 29 06:38:52 2014 +0100

Fix Jenkins translation jobs

The jobs cinder-propose-translation-update and
cinder-upstream-translation-update do not update from
transifex since our po files contain duplicate entries where
obsolete entries duplicate normal entries.

Remove all obsolete entries to fix the jobs.

Change-Id: I6d41dbdcc41646fcbd1ee84ce48cb0c461cd454c
Closes-Bug: #1299349
(cherry picked from commit c72cc4b2345e9e271205d1de00c9812d0a2684ec)


** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299349

Title:
  upstream-translation-update Jenkins job failing

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Manuals:
  New

Bug description:
  Various upstream-translation-update jobs apparently succeed, but looking
  into the log files the errors get ignored - and there are errors uploading
  the files to the translation site, like:

  Error uploading file: There is a syntax error in your file.
  Line 1936: duplicate message definition...
  Line 7: ...this is the location of the first definition

  
  For keystone, see:
  
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

  This has been fixed on ironic with:
  https://review.openstack.org/#/c/83935
  See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1299349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302075] [NEW] service catalog in token contains legacy_endpoint_id and enabled fields

2014-04-03 Thread Adam Young
Public bug reported:

regression of bug 1152635


Endpoints in the token look like this:

v3 [{"url": "http://192.168.187.14:8774/v2/5d15013cbebd4b1e95ad3b5785c866f7",
     "region": "RegionOne", "interface": "admin",
     "id": "486ce25c849b471a806d55304cd18191"},
    {"url": "http://192.168.187.14:8774/v2/5d15013cbebd4b1e95ad3b5785c866f7",
     "region": "RegionOne", "interface": "internal",
     "id": "63076e278a9e4560a08ce962e49c3618"},
    {"url": "http://192.168.187.14:8774/v2/5d15013cbebd4b1e95ad3b5785c866f7",
     "region": "RegionOne", "interface": "public",
     "id": "9b6e089b91734c508bbe117ea0865c76"}]

** Affects: keystone
 Importance: Critical
 Assignee: Adam Young (ayoung)
 Status: Incomplete

** Changed in: keystone
 Assignee: (unassigned) => Adam Young (ayoung)

** Changed in: keystone
 Milestone: None => icehouse-rc2

** Changed in: keystone
   Status: New => Incomplete

** Changed in: keystone
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1302075

Title:
  service catalog in token contains legacy_endpoint_id and enabled
  fields

Status in OpenStack Identity (Keystone):
  Incomplete

Bug description:
  regression of bug 1152635

  
  Endpoints in the token look like this:

  v3 [{"url": "http://192.168.187.14:8774/v2/5d15013cbebd4b1e95ad3b5785c866f7",
       "region": "RegionOne", "interface": "admin",
       "id": "486ce25c849b471a806d55304cd18191"},
      {"url": "http://192.168.187.14:8774/v2/5d15013cbebd4b1e95ad3b5785c866f7",
       "region": "RegionOne", "interface": "internal",
       "id": "63076e278a9e4560a08ce962e49c3618"},
      {"url": "http://192.168.187.14:8774/v2/5d15013cbebd4b1e95ad3b5785c866f7",
       "region": "RegionOne", "interface": "public",
       "id": "9b6e089b91734c508bbe117ea0865c76"}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1302075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302091] [NEW] Redundant SG rule create calls in unit tests

2014-04-03 Thread Sridhar Gaddam
Public bug reported:

The following test cases in TestSecurityGroups of 
test_extension_security_group.py file 
are making multiple calls to create a SG rule.
test_create_security_group_rule_min_port_greater_max()
test_create_security_group_rule_ports_but_no_protocol()
test_create_security_group_rule_port_range_min_only()
test_create_security_group_rule_port_range_max_only()
test_create_security_group_rule_icmp_type_too_big()
test_create_security_group_rule_icmp_code_too_big()

The redundant calls can be removed.

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302091

Title:
  Redundant SG rule create calls in unit tests

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The following test cases in TestSecurityGroups of 
test_extension_security_group.py file 
  are making multiple calls to create a SG rule.
  test_create_security_group_rule_min_port_greater_max()
  test_create_security_group_rule_ports_but_no_protocol()
  test_create_security_group_rule_port_range_min_only()
  test_create_security_group_rule_port_range_max_only()
  test_create_security_group_rule_icmp_type_too_big()
  test_create_security_group_rule_icmp_code_too_big()

  The redundant calls can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302097] [NEW] Unable to get images by image type

2014-04-03 Thread Santiago Baldassin
Public bug reported:

We should be able to get images by type. In Horizon, for example, in
order to populate the boot-from-image options, we currently retrieve
all images and filter them by type.
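
For context, a sketch of the client-side filtering described above (the
'image_type' property and v1-style .properties access are assumptions):

    def images_of_type(glance_client, wanted_type):
        # Fetch everything, then filter locally -- the extra payload is
        # exactly what a server-side filter would avoid.
        return [image for image in glance_client.images.list()
                if image.properties.get('image_type') == wanted_type]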

** Affects: glance
 Importance: Undecided
 Assignee: Santiago Baldassin (santiago-b-baldassin)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Santiago Baldassin (santiago-b-baldassin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1302097

Title:
  Unable to get images by image type

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  We should be able to get images by type. In Horizon, for example, in
  order to populate the boot-from-image options, we currently retrieve
  all images and filter them by type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1302097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302106] [NEW] LDAP non-URL safe characters cause auth failure

2014-04-03 Thread Kevin Stevens
Public bug reported:

An OpenStack user attempting to integrate Keystone with AD has reported
that when his user's full name contains a comma (full name CN='Doe, John'),
a 'Bad search filter' error is thrown. If the full name CN is instead 'John
Doe', authorization succeeds.

dpkg -l |grep keystone
ii  keystone 1:2013.2.2-0ubuntu1~cloud0 
 OpenStack identity service - Daemons
ii  python-keystone  1:2013.2.2-0ubuntu1~cloud0 
 OpenStack identity service - Python library
ii  python-keystoneclient1:0.3.2-0ubuntu1~cloud0
 Client library for OpenStack Identity API

Relevant error message:
Authorization Failed: An unexpected error prevented the server from fulfilling 
your request. {'desc': 'Bad search filter'} (HTTP 500)

Relevant stack trace:
2014-03-31 15:44:27.459 3018 ERROR keystone.common.wsgi [-] {'desc': 'Bad 
search filter'}
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 238, in 
__call__
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi result = 
method(context, **params)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 94, in 
authenticate
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi context, auth)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 272, in 
_authenticate_local
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi user_id, tenant_id)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 369, in 
_get_project_roles_and_ref
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi user_id, tenant_id)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 475, in 
get_roles_for_user_and_project
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi user_id, tenant_id)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/assignment/core.py, line 160, in 
get_roles_for_user_and_project
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi group_role_list = 
_get_group_project_roles(user_id, project_ref)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/assignment/core.py, line 111, in 
_get_group_project_roles
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi group_refs = 
self.identity_api.list_groups_for_user(user_id)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 177, in 
wrapper
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 425, in 
list_groups_for_user
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi group_list = 
driver.list_groups_for_user(user_id)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py, line 
154, in list_groups_for_user
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi return 
self.group.list_user_groups(user_dn)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py, line 
334, in list_user_groups
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi memberships = 
self.get_all(query)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py, line 388, in 
get_all
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi for x in 
self._ldap_get_all(filter)]
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py, line 364, in 
_ldap_get_all
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi 
self.attribute_mapping.values())
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py, line 571, in 
search_s
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi res = 
self.conn.search_s(dn, scope, query, attrlist)
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/ldap/ldapobject.py, line 502, in search_s
2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi return 
self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout)
2014-03-31 15:44:27.459 3018 TRACE 
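
The trace ends inside python-ldap's search_s(), which rejects the filter. A
likely remedy (a sketch, not the actual keystone patch) is to escape
filter-special characters -- including the backslash that precedes the comma
in an escaped DN -- before interpolating the value into a search filter
(RFC 4515):

    import ldap.filter

    user_dn = 'CN=Doe\\, John,OU=Users,DC=example,DC=com'   # example DN
    query = '(&(objectClass=groupOfNames)(member=%s))' % (
        ldap.filter.escape_filter_chars(user_dn))
    # escape_filter_chars turns '\' into '\5c', so the filter parses cleanly.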

[Yahoo-eng-team] [Bug 1174132] Re: Only one instance of a DHCP agent is run per network

2014-04-03 Thread James Slagle
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1174132

Title:
  Only one instance of a DHCP agent is run per network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  DHCP agents are stateless and can run in HA, but the scheduler only
  schedules one instance - when a network node fails (or is rebooted /
  has maintenance done on it) this results in user-visible downtime.

  See also bug 1174591 which talks about the administrative overhead of
  dealing with failed nodes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1174132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-04-03 Thread Adam Gandelman
** Changed in: cinder/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Released
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of connection-lost errors.
  This causes the reconnect loop to raise the error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  == scheduler.log ==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0) None None
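
  Conceptually the fix is to treat 2013 like the other connection-lost codes;
  a sketch of the check described above (the exact code list used by oslo is
  an assumption here):

      # MySQL codes for lost/failed connections; 2013 is the one the retry
      # loop currently misses.
      CONN_ERR_CODES = ('2002', '2003', '2006', '2013')

      def is_db_connection_error(args):
          # `args` is the stringified DBAPI error, e.g.
          # "(OperationalError) (2013, 'Lost connection to MySQL server ...')"
          return any(code in args for code in CONN_ERR_CODES)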

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244092] Re: db connection retrying doesn't work against db2

2014-04-03 Thread Adam Gandelman
** Changed in: cinder/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244092

Title:
  db connection retrying doesn't work against db2

Status in Cinder:
  Confirmed
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  When I start OpenStack following the steps below, the OpenStack services
  can't be started without a DB2 connection:
  1. Start the OpenStack services.
  2. Start the DB2 service.

  I checked the code in session.py under
  nova/openstack/common/db/sqlalchemy. The root cause is that the DB2
  connection error code -30081 isn't in conn_err_codes in the
  _is_db_connection_error function, so the connection-retry code is skipped
  for DB2. To enable connection retrying for DB2, we need to add DB2 support
  to the _is_db_connection_error function.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1244092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276371] Re: Flavor Access settings lost between edits

2014-04-03 Thread Adam Gandelman
** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1276371

Title:
  Flavor Access settings lost between edits

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Public / Flavor Access controls seem to be bugged.

  To reproduce:

  # Create a flavor, ignore the Flavor Access tab
  # Watch the flavor get created with Public access
  # Click Edit, go to Flavor Access, set it to a project or two
  # Click Save
  # Watch the access get switched to Not Public
  # Click Edit, go back to Flavor Access
  # See that all flavors have disappeared
  # Click Save, and watch the access go back to Public

  Expected Behaviour:

  Flavor Access / Public settings are preserved between edits

  Actual Behaviour:

  All Flavor Access settings are lost between edits, and unless
  explicitly preserved, an edit can inadvertently make a private flavor
  public.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1276371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268966] Re: glance requires pyOpenSSL>=0.11

2014-04-03 Thread Adam Gandelman
** Changed in: glance/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1268966

Title:
  glance requires pyOpenSSL>=0.11

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  While running unit tests with pyOpenSSL==0.10, I got the following exception:
  _StringException: Traceback (most recent call last):
File 
/usr/lib/python2.6/site-packages/glance/tests/unit/common/test_utils.py, line 
197, in test_validate_key_cert_key
  utils.validate_key_cert(keyfile, certfile)
File /usr/lib/python2.6/site-packages/glance/common/utils.py, line 507, 
in validate_key_cert
  out = crypto.sign(key, data, digest)
  AttributeError: 'module' object has no attribute 'sign'

  
  Since OpenSSL.crypto.sign() is new in version 0.11, we have to update the
  pyOpenSSL requirement to pyOpenSSL>=0.11.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1268966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279907] Re: Latest keystoneclient breaks tests

2014-04-03 Thread Adam Gandelman
** Changed in: heat/havana
   Status: Fix Committed => Fix Released

** Changed in: horizon/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279907

Title:
  Latest keystoneclient breaks tests

Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  The new release of keystoneclient (0.6.0) introduces some new
  metaclass magic which breaks our mocking in master :(

  We probably need to modify the test to use mock instead of mox, as the
  issue seems to be that mox misinterprets the class type due to the
  metaclass.

  Immediate workaround while we workout the solution is probably to
  temporarily cap keystoneclient to 0.5.1 which did not have this issue.

  Traceback (most recent call last):
File /home/shardy/git/heat/heat/tests/test_heatclient.py, line 449, in 
test_trust_init
  self._stubs_v3(method='trust')
File /home/shardy/git/heat/heat/tests/test_heatclient.py, line 83, in 
_stubs_v3
  self.m.StubOutClassWithMocks(kc_v3, Client)
File /usr/lib/python2.7/site-packages/mox.py, line 366, in 
StubOutClassWithMocks
  raise TypeError('Given attr is not a Class.  Use StubOutWithMock.')
  TypeError: Given attr is not a Class.  Use StubOutWithMock.
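
  A minimal sketch of the mock-based stubbing mentioned above (hypothetical
  test snippet, not the actual heat/horizon patch). Unlike mox's
  StubOutClassWithMocks, mock.patch.object does not require the target
  attribute to look like a plain class, so the metaclass does not get in the
  way:

    import mock
    from keystoneclient.v3 import client as kc_v3

    class FakeKeystoneClient(object):
        pass

    def test_trust_init_with_mock():
        with mock.patch.object(kc_v3, 'Client',
                               return_value=FakeKeystoneClient()) as patched:
            client = kc_v3.Client(auth_url='http://localhost:5000/v3')
            assert isinstance(client, FakeKeystoneClient)
            patched.assert_called_once_with(auth_url='http://localhost:5000/v3')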

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1279907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256821] Re: Cannot snapshot instances that are paused or suspended via Horizon

2014-04-03 Thread Adam Gandelman
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1256821

Title:
  Cannot snapshot instances that are paused or suspended via Horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Description of problem:
  ===
  Those options are available via CLI  API, (added to nova in bug 1100556) but 
it cannot be performed in Horizon.
  Once you set an instance to pause/suspend , the take snapshot option is 
absent from the menu.

  Version:
  
  horizon-2013.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1256821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247056] Re: Too many connections to nova-api (and not cleaning up)?

2014-04-03 Thread Adam Gandelman
** Changed in: horizon/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1247056

Title:
  Too many connections to nova-api (and not cleaning up)?

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  We hit this bug while doing a tripleo/tuskar provision against 7
  baremetal machines. After everything had been up and running for a while,
  while using the Horizon UI to view the active instances (the 7 baremetal
  machines that were provisioned as nova compute nodes), Horizon threw an
  error complaining about too many open files.

  
  [stack@ucl-control-live ~]$ sudo lsof -i :8774 | wc -l
  2073  

  Restarting openstack-nova-api closed them all  (put them all into
  FIN_WAIT2 / CLOSE_WAIT.)

  I was able to recreate this on a more 'standard' setup with devstack.
  To recreate:

  1. Run devstack
  2. Monitor connections to nova-api in a terminal: while true; do sudo lsof -i :8774; date; sleep 2; done
     At this point the output for me was steady at 10.

  3. Log into Horizon and Launch an instance, or two.
  4. In Horizon, alternate between Project--Overview and 
Project--Instances
  5. Watch the output from lsof. In a short time I got this up to 150+. Leaving 
it idle (doing nothing more anywhere), the connections hang around (i.e. all in 
ESTABLISHED state). In fact they hang around even after I log out of Horizon.

  Is this expected behaviour?

  thanks, marios

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1247056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245947] Re: [DB2] Glance image-list command failed when there are more than DEFAULT_PAGE_SIZE images

2014-04-03 Thread Adam Gandelman
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1245947

Title:
  [DB2] Glance image-list command failed when there are more than
  DEFAULT_PAGE_SIZE images

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  The glance image-list command will fail when there are more than
  DEFAULT_PAGE_SIZE images, and the user will run into the error below with the CLI.

  ERROR: The server has either erred or is incapable of performing the
  requested operation. (HTTP 500) (Request-ID: req-
  30c23185-8a84-4321-8fd0-155b25904405)

  It only happens on db2 backend after applying the patch
  https://review.openstack.org/#/c/35178/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1245947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238604] Re: Run into 500 error during delete image

2014-04-03 Thread Adam Gandelman
** Changed in: glance/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1238604

Title:
  Run into 500 error during delete image

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  Recreate steps:
  1. Enable delayed delete
  delayed_delete = True

  2. Create a new image by:
  glance image-create --name flwang_1 --container-format bare --disk-format 
qcow2 --is-public yes --location 
https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

  3. Delete image by:
  glance image-delete flwang_1

  You will see an error like the one below, even though the image has been
  deleted and is in the 'pending-delete' state.
  Request returned failure status.
  HTTPInternalServerError (HTTP 500): Unable to delete image 
86d0a3df-d140-4d41-aaae-f1c538591d3d

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1238604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301027] Re: Permission denied error in unit tests

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301027

Title:
  Permission denied error in unit tests

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  Unit tests are intermittently failing because of permission denied
  errors on /var/lib/neutron

  It seems tmpdir fixtures  are not being used.
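
  A minimal sketch of the kind of fixture usage that would keep tests away
  from /var/lib/neutron (assumes the standard testtools/fixtures pattern and
  the state_path option; not the actual patch):

    import fixtures
    import testtools
    from oslo.config import cfg

    class TestWithTmpStatePath(testtools.TestCase):
        def setUp(self):
            super(TestWithTmpStatePath, self).setUp()
            tmp_dir = self.useFixture(fixtures.TempDir()).path
            cfg.CONF.set_override('state_path', tmp_dir)
            self.addCleanup(cfg.CONF.clear_override, 'state_path')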

  Logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUGVybWlzc2lvbiBkZW5pZWQ6ICcvdmFyL2xpYi9uZXV0cm9uJ1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTYzODY2NDIyNzB9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286717] Re: Keystone unit tests fails with SQLAlchemy 0.9.3

2014-04-03 Thread Adam Gandelman
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1286717

Title:
  Keystone unit tests fails with SQLAlchemy 0.9.3

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released

Bug description:
  Keystone fails its unit tests when running with SQLAlchemy 0.9.3, as
  per the log below. It is important for Debian that Havana Keystone
  continues to work in Sid with SQLA 0.9.

  ==
  ERROR: keystone.tests.test_sql_upgrade.SqlUpgradeTests.test_upgrade_14_to_16
  --
  _StringException: traceback-1: {{{
  Traceback (most recent call last):
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 90, in tearDown
  self.downgrade(0)
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 125, in downgrade
  self._migrate(*args, downgrade=True, **kwargs)
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 139, in _migrate
  self.schema.runchange(ver, change, changeset.step)
File /usr/lib/python2.7/dist-packages/migrate/versioning/schema.py, line 
91, in runchange
  change.run(self.engine, step)
File /usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py, 
line 145, in run
  script_func(engine)
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/common/sql/migrate_repo/versions/016_normalize_domain_ids.py,
 line 430, in downgrade
  downgrade_user_table_with_copy(meta, migrate_engine, session)
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/common/sql/migrate_repo/versions/016_normalize_domain_ids.py,
 line 225, in downgrade_user_table_with_copy
  'extra': user.extra})
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 
978, in execute
  clause, params or {})
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
717, in execute
  return meth(self, multiparams, params)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py, line 
317, in _execute_on_connection
  return connection._execute_clauseelement(self, multiparams, params)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
814, in _execute_clauseelement
  compiled_sql, distilled_params
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
927, in _execute_context
  context)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
1076, in _handle_dbapi_exception
  exc_info
File /usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 
185, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
920, in _execute_context
  context)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
425, in do_execute
  cursor.execute(statement, parameters)
  IntegrityError: (IntegrityError) UNIQUE constraint failed: temp_user.name 
u'insert into temp_user (id, name, password, enabled, extra) values ( ?, ?, ?, 
?, ?);' (u'433e0e1c02ff436a9bf1829ee42790d1', 
u'6327d5d819064064a82bc10e0ef7fdca', u'5ef83255a7df4fb1a3ae1d6877719a1e', True, 
u'{}')
  }}}
   
  Traceback (most recent call last):
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 487, in test_upgrade_14_to_16
  self.check_uniqueness_constraints()
File 
/home/zigo/sources/openstack/havana/keystone/build-area/keystone-2013.2.2/keystone/tests/test_sql_upgrade.py,
 line 882, in check_uniqueness_constraints
  cmd = this_table.delete(id=user['id'])
File string, line 1, in lambda
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/selectable.py, line 
1237, in delete
  return dml.Delete(self, whereclause, **kwargs)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/dml.py, line 749, in 
__init__
  self._validate_dialect_kwargs(dialect_kw)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/base.py, line 132, 
in _validate_dialect_kwargs
  named dialectname_argument, got '%s' % k)
  TypeError: Additional arguments should be named dialectname_argument, got 
'id'
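
  For reference, SQLAlchemy 0.9 no longer accepts arbitrary keyword filters
  on delete(); a sketch of the 0.9-compatible form of the failing call in
  check_uniqueness_constraints() would be:

    # explicit WHERE clause instead of delete(id=...)
    cmd = this_table.delete().where(this_table.c.id == user['id'])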

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1286717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   

[Yahoo-eng-team] [Bug 1298053] Re: Subnets should be set as lazy='join'

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298053

Title:
  Subnets should be set as lazy='join'

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Currently, if one does a net-list, tons of queries are issued against
  the database because the default query mode is 'select', which performs a
  query when the field is actually accessed. In this patch I change the
  mode to 'joined' so subnets are loaded as the networks are loaded.
  Usually there are only 1 or 2 subnets on a network, so eager-loading this
  data shouldn't hurt. This patch, in my setup with 5000 networks, reduces
  the net-list call from 27 seconds to 7!
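
  Illustrative sketch only (not the exact neutron models): a joined
  relationship loads the child rows in the same SELECT as the parent, which
  is what removes the per-network query:

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Network(Base):
        __tablename__ = 'networks'
        id = sa.Column(sa.String(36), primary_key=True)

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = sa.Column(sa.String(36), primary_key=True)
        network_id = sa.Column(sa.String(36), sa.ForeignKey('networks.id'))
        # lazy='joined' on the backref makes Network.subnets eager-load
        network = orm.relationship(
            Network, backref=orm.backref('subnets', lazy='joined'))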

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282452] Re: gate-neutron-python27 fails: greenlet.GreenletExit and then WARNING: Unable to find to confirm results!

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282452

Title:
  gate-neutron-python27 fails: greenlet.GreenletExit and then
  WARNING: Unable to find  to confirm results!

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  gate-neutron-python27 fails with WARNING: Unable to find  to confirm 
results!.
  The original cause is greenlet.GreenletExit, after which testr aborted.

  message:greenlet.GreenletExit AND filename:console.html
  87 hits in the last 48 hours (84 in gate-neutron-python27 and 3 in
  gate-oslo-incubator-python27).

  I am not sure whether it is a bug in neutron or in the CI infrastructure,
  but the hit rate above suggests it is only related to neutron on Python 2.7.

  logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZ3JlZW5sZXQuR3JlZW5sZXRFeGl0XCIgQU5EIGZpbGVuYW1lOmNvbnNvbGUuaHRtbCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTAyLTE4VDA4OjA5OjQ3KzAwOjAwIiwidG8iOiIyMDE0LTAyLTIwVDA4OjA5OjQ3KzAwOjAwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwic3RhbXAiOjEzOTI4ODU1NDA1MjAsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281148] Re: QPID reconnection delay can't be configured

2014-04-03 Thread Adam Gandelman
** Changed in: ceilometer/havana
   Status: Fix Committed = Fix Released

** Changed in: ceilometer/havana
Milestone: None = 2013.2.3

** Changed in: cinder/havana
   Status: Fix Committed = Fix Released

** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

** Changed in: keystone/havana
Milestone: None = 2013.2.3

** Changed in: nova/havana
   Status: Fix Committed = Fix Released

** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

** Changed in: keystone
Milestone: 2013.2.3 = None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281148

Title:
  QPID reconnection delay can't  be configured

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Ceilometer havana series:
  Fix Released
Status in Cinder:
  Invalid
Status in Cinder havana series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  The current QPID reconnection delay can grow to 60s and is not
  configurable. This is unfortunate because 60s is a long time to wait for
  HA systems, which makes this issue a blocker for that kind of deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1281148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281772] Re: NSX: sync do not pass around model object

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281772

Title:
  NSX: sync do not pass around model object

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  The NSX sync backend previously passed around a sqlalchemy model object,
  which was nice because we did not need to query the database an
  additional time to update the status of an object. Unfortunately, this
  was all done within a db transaction that included a call to NSX, which
  could cause a deadlock, so it needed to be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278991] Re: NSX: Deadlock can occur in sync code

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1278991

Title:
  NSX: Deadlock can occur in sync code

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The sync code that syncs the operational status to the db calls out to
  nsx within a db transaction which can cause deadlock if another
  request comes in and eventlet context switches it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1278991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Committed
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Fix Released
Status in “python-neutronclient” source package in Trusty:
  Fix Released

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274361] Re: improve version validation for router operations with the nsx/nvp plugin

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274361

Title:
  improve version validation for router operations with the nsx/nvp
  plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  Improve the way we check if the controller is at a version that
  supports some router operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268460] Re: PLUMgrid plugin should report proper error messages

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268460

Title:
  PLUMgrid plugin should report proper error messages

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  PLUMgrid Director error messages should be reported at plugin level as
  well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269703] Re: list_projects_for_endpoint interface fail with 500 Internal Server Error

2014-04-03 Thread Adam Gandelman
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1269703

Title:
  list_projects_for_endpoint interface fail with 500 Internal Server
  Error

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released

Bug description:
  Hello All,
  
  When I test the endpoint filter feature, I send a GET request to
  '/v3/OS-EP-FILTER/endpoints/{endpoint_id}/projects' and get the following
  error:

  {
      "error": {
          "message": "An unexpected error prevented the server from fulfilling your request. 'EndpointFilter' object has no attribute 'list_project_endpoints'",
          "code": 500,
          "title": "Internal Server Error"
      }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1269703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267101] Re: nicira: Lock wait timeout exceeded on port operations

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267101

Title:
  nicira: Lock wait timeout exceeded on port operations

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  Stacktrace here:

  http://paste.openstack.org/show/60785/

  Occurence here:

  http://162.209.83.206/logs/58017/16/logs/screen-q-svc.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1267101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260080] Re: [OSSA 2014-006] Trustee token revocations with memcache backend (CVE-2014-2237)

2014-04-03 Thread Adam Gandelman
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260080

Title:
  [OSSA 2014-006] Trustee token revocations with memcache backend
  (CVE-2014-2237)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When using the memcache token backend, token revocations (bulk
  revocations) that rely on the trustee's token-index-list cannot be
  performed. This is because trustee tokens are added only to the
  trustor's token list.
  This appears to only be an issue for trusts with impersonation
  enabled.  This is most noticeable when the trustee user is disabled or
  the trustee changes a password.  I am sure there are other related
  scenarios.

  Example reproduction -

  Create the trust:

  curl -H "X-Auth-Token: $TOKEN" -X POST \
       http://127.0.0.1:35357/v3/OS-TRUST/trusts \
       -d '{"trust": {"impersonation": true,
                      "project_id": "9b2c18a72c8a477b891b48931c26cebe",
                      "roles": [{"name": "admin"}],
                      "trustee_user_id": "36240e5f6ed04857bf15d08a519b6e37",
                      "trustor_user_id": "46c9e36c50c14dfab36b6511d6ebedd4"}}' \
       -H 'Content-Type: application/json'

  RESPONSE:
  {
  trust: {
  expires_at: null,
  id: 0d2dc361043d4f9d8a84a7f253e20924,
  impersonation: true,
  links: {
  self: 
http://172.16.30.195:5000/v3/OS-TRUST/trusts/0d2dc361043d4f9d8a84a7f253e20924;
  },
  project_id: 9b2c18a72c8a477b891b48931c26cebe,
  roles: [
  {
  id: 0bd1d61badd742bebc044dc246a43513,
  links: {
  self: 
http://172.16.30.195:5000/v3/roles/0bd1d61badd742bebc044dc246a43513;
  },
  name: admin
  }
  ],
  roles_links: {
  next: null,
  previous: null,
  self: 
http://172.16.30.195:5000/v3/OS-TRUST/trusts/0d2dc361043d4f9d8a84a7f253e20924/roles;
  },
  trustee_user_id: 36240e5f6ed04857bf15d08a519b6e37,
  trustor_user_id: 46c9e36c50c14dfab36b6511d6ebedd4
  }
  }

  Consume the trust:

  vagrant@precise64:~/devstack$ cat trust_token.json
  {
      "auth": {
          "identity": {
              "methods": [
                  "token"
              ],
              "token": {
                  "id": "PKI TOKEN ID"
              }
          },
          "scope": {
              "OS-TRUST:trust": {
                  "id": "0d2dc361043d4f9d8a84a7f253e20924"
              }
          }
      }
  }

  curl -si -d @trust_token.json -H 'Content-Type: application/json' -X
  POST http://localhost:35357/v3/auth/tokens

  RESPONSE:

  {
  token: {
  OS-TRUST:trust: {
  id: 0d2dc361043d4f9d8a84a7f253e20924,
  impersonation: true,
  trustee_user: {
  id: 36240e5f6ed04857bf15d08a519b6e37
  },
  trustor_user: {
  id: 46c9e36c50c14dfab36b6511d6ebedd4
  }
  },
  catalog: [
     ... SNIP FOR BREVITY
  ],
  expires_at: 2013-12-12T20:06:00.239812Z,
  extras: {},
  issued_at: 2013-12-11T20:10:22.224381Z,
  methods: [
  token,
  password
  ],
  project: {
  domain: {
  id: default,
  name: Default
  },
  id: 9b2c18a72c8a477b891b48931c26cebe,
  name: admin
  },
  roles: [
  {
  id: 0bd1d61badd742bebc044dc246a43513,
  name: admin
  }
  ],
  user: {
  domain: {
  id: default,
  name: Default
  },
  id: 46c9e36c50c14dfab36b6511d6ebedd4,
  name: admin
  }
  }
  }

  Check the memcache token lists: the admin user should have 1 token
  (used to create the trust), and the demo user should have 2 tokens
  (the initial token and the trust token).

  ADMIN-ID: 46c9e36c50c14dfab36b6511d6ebedd4
  DEMO-ID: 36240e5f6ed04857bf15d08a519b6e37

  vagrant@precise64:~/devstack$ telnet localhost 11211
  Trying 127.0.0.1...
  Connected to localhost.
  Escape character is '^]'.
  get usertokens-46c9e36c50c14dfab36b6511d6ebedd4
  VALUE usertokens-46c9e36c50c14dfab36b6511d6ebedd4 0 104
  1bb6a8abc42e1d1a944e08a20de20a91,ee512379a66027733001648799083349
  END
  get usertokens-36240e5f6ed04857bf15d08a519b6e37
  VALUE usertokens-36240e5f6ed04857bf15d08a519b6e37 0 34
  36df008ec5b5d3dd6983c8fe84d407f6

  This does not affect the SQL or KVS backends.  This will affect Master
  (until KVS refactor is complete), Havana, and Grizzly.  

[Yahoo-eng-team] [Bug 1256036] Re: Metering iptables driver doesn't read the right root_helper param

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1256036

Title:
  Metering iptables driver doesn't read the right root_helper param

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The metering iptables driver doesn't read the root_helper param, thus
  it uses the default value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1256036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269722] Re: Help message of flag 'enable_isolated_metadata' is not correct

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269722

Title:
  Help message of flag 'enable_isolated_metadata' is not correct

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  To provide the metadata service on an isolated subnet, the DHCP agent sends a
  static route to the guest so it can reach the metadata IP. This functionality
  was available through the flag 'enable_isolated_metadata'.
  Thanks to the patch [1], the DHCP agent now determines that a subnet is
  isolated if it doesn't contain a router port.
  The help message of the flag wasn't updated to reflect this change.

  [1]
  
https://github.com/openstack/neutron/commit/c73b54e50b62c489f04432bdbc5bee678b18226e

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245435] Re: Unable to remove aws key as normal user

2014-04-03 Thread Adam Gandelman
** Changed in: keystone/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1245435

Title:
  Unable to remove aws key as normal user

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  Fix Released

Bug description:
  In devstack, as a normal user I'm able to create a bunch of AWS key pairs,
  but I'm unable to delete those key pairs as a normal user. Below are the
  commands:
  fabien@devstack-1:~$ . openrc demo demo
  fabien@devstack-1:~/devstack$ keystone ec2-credentials-create
  +---+--+
  |  Property |  Value   |
  +---+--+
  |   access  | 11fcd9628779482f9b7971ec0bc69359 |
  |   secret  | 4c7aa22f89ba49ce8de67512abf513df |
  | tenant_id | 53f4610540fd4be7938e65f4c9567e25 |
  |  user_id  | 970acc126501440b9bb60b5494b6460c |
  +---+--+
  fabien@devstack-1:~/devstack$ keystone ec2-credentials-get --access 
11fcd9628779482f9b7971ec0bc69359
  +---+--+
  |  Property |  Value   |
  +---+--+
  |   access  | 11fcd9628779482f9b7971ec0bc69359 |
  |   secret  | 4c7aa22f89ba49ce8de67512abf513df |
  | tenant_id | 53f4610540fd4be7938e65f4c9567e25 |
  |  user_id  | 970acc126501440b9bb60b5494b6460c |
  +---+--+
  fabien@devstack-1:~/devstack$ keystone ec2-credentials-delete --access 
11fcd9628779482f9b7971ec0bc69359
  Unable to delete credential: Could not find credential, 
11fcd9628779482f9b7971ec0bc69359. (HTTP 404)

  As the admin user, the deletion works as expected:
  fabien@devstack-1:~$ . openrc admin admin
  fabien@devstack-1:~/devstack$ . openrc admin admin
  fabien@devstack-1:~/devstack$ keystone ec2-credentials-delete --access 
11fcd9628779482f9b7971ec0bc69359
  Credential has been deleted.

  Is this the normal behavior ?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1245435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192786] Re: WARNING [QUANTUM.DB.AGENTSCHEDULERS_DB] FAIL SCHEDULING NETWORK

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1192786

Title:
  WARNING [QUANTUM.DB.AGENTSCHEDULERS_DB] FAIL SCHEDULING NETWORK

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  When I boot a new instance, I get an error message. Everything is running
  normally, but the error is still present:

  2013-06-20 02:41:39 WARNING [quantum.db.agentschedulers_db] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'9a1c3be6-4678-4cf7-85e8-494b49fa40b7'], 'name': u'vlan3001', 'provider:physical_network': u'physnet1', 'admin_state_up': True, 'tenant_id': u'71eedd4d59384d13b7c80cdfb4999bf0', 'provider:network_type': u'vlan', 'router:external': False, 'shared': True, 'id': u'8717d0e0-d17e-4262-ab3e-2a423d71bae4', 'provider:segmentation_id': 3001L}

  2013-06-20 02:41:40 WARNING [quantum.db.agentschedulers_db] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'5c8ac388-972d-4599-a0f9-ad4cc7fa720f'], 'name': u'vlan3501', 'provider:physical_network': u'physnet2', 'admin_state_up': True, 'tenant_id': u'71eedd4d59384d13b7c80cdfb4999bf0', 'provider:network_type': u'vlan', 'router:external': False, 'shared': True, 'id': u'5b18231f-8a3a-4c5a-b6a8-a4773f45c695', 'provider:segmentation_id': 3501L}

  If I check agent-list, the Open vSwitch agent on the compute node goes up
  and down every few seconds.

  date;quantum agent-list

  Thu Jun 20 02:40:02 UTC 2013
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | id                                   | agent_type         | host        | alive | admin_state_up |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | 31e422a4-833c-47fd-8be2-aa71471999a0 | L3 agent           | h172-16-0-2 | :-)   | True           |
  | 4a26d0b7-23aa-4b84-96e2-295bd9aff9db | DHCP agent         | h172-16-0-2 | :-)   | True           |
  | 67f74ab1-f5b1-4944-8acf-f4b9eefa88ac | Open vSwitch agent | h172-16-0-2 | :-)   | True           |
  | 99c9f8a2-587f-42ec-ac51-cc2bdc6cad5a | Open vSwitch agent | h172-16-0-3 | xxx   | True           |
  +--------------------------------------+--------------------+-------------+-------+----------------+

  date;quantum agent-list

  Thu Jun 20 02:40:04 UTC 2013
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | id                                   | agent_type         | host        | alive | admin_state_up |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | 31e422a4-833c-47fd-8be2-aa71471999a0 | L3 agent           | h172-16-0-2 | :-)   | True           |
  | 4a26d0b7-23aa-4b84-96e2-295bd9aff9db | DHCP agent         | h172-16-0-2 | :-)   | True           |
  | 67f74ab1-f5b1-4944-8acf-f4b9eefa88ac | Open vSwitch agent | h172-16-0-2 | :-)   | True           |
  | 99c9f8a2-587f-42ec-ac51-cc2bdc6cad5a | Open vSwitch agent | h172-16-0-3 | :-)   | True           |
  +--------------------------------------+--------------------+-------------+-------+----------------+

  date;quantum agent-list

  Thu Jun 20 02:40:09 UTC 2013
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | id                                   | agent_type         | host        | alive | admin_state_up |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | 31e422a4-833c-47fd-8be2-aa71471999a0 | L3 agent           | h172-16-0-2 | :-)   | True           |
  | 4a26d0b7-23aa-4b84-96e2-295bd9aff9db | DHCP agent         | h172-16-0-2 | :-)   | True           |
  | 67f74ab1-f5b1-4944-8acf-f4b9eefa88ac | Open vSwitch agent | h172-16-0-2 | :-)   | True           |
  | 99c9f8a2-587f-42ec-ac51-cc2bdc6cad5a | Open vSwitch agent | h172-16-0-3 | xxx   | True           |
  +--------------------------------------+--------------------+-------------+-------+----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1192786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243327] Re: [OSSA 2014-008] Routers can be cross plugged by other tenants (CVE-2014-0056)

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1243327

Title:
  [OSSA 2014-008] Routers can be cross plugged by other tenants
  (CVE-2014-0056)

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  In Progress
Status in neutron havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The l3-agent does not check tenant_id, which allows tenants to plug ports
  into other tenants' routers by setting device_id to another tenant's
  router.
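
  A hypothetical sketch of the missing check (names invented for
  illustration; not the actual patch):

    def check_router_port_ownership(context, port, router):
        """Reject router interface ports plugged into a foreign router."""
        if port['device_owner'] != 'network:router_interface':
            return
        if context.is_admin or port['tenant_id'] == router['tenant_id']:
            return
        raise ValueError('tenant %s may not attach ports to router %s'
                         % (port['tenant_id'], router['id']))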

  
  # become admin tenant
  arosen@arosen-desktop:~/devstack$ source openrc admin admin

  # Create router as admin: 
  arosen@arosen-desktop:~/devstack$ neutron  router-create admin-router
  Created a new router:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | external_gateway_info |  |
  | id| 80ffe19a-649c-4fc9-a0d9-2a3d67c5f600 |
  | name  | admin-router |
  | status| ACTIVE   |
  | tenant_id | 04e94acfe69f4960a69c6a78d39466c4 |
  +---+--+

  # Become demo tenant
  arosen@arosen-desktop:~/devstack$ source openrc demo demo 

  #create port with correct device_id and device_owner

  
  arosen@arosen-desktop:~/devstack$ neutron port-create private --device-id 80ffe19a-649c-4fc9-a0d9-2a3d67c5f600 --device-owner network:router_interface
  Created a new port:
  +-----------------------+----------------------------------------------------------------------------------+
  | Field                 | Value                                                                            |
  +-----------------------+----------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                             |
  | allowed_address_pairs |                                                                                  |
  | device_id             | 80ffe19a-649c-4fc9-a0d9-2a3d67c5f600                                             |
  | device_owner          | network:router_interface                                                         |
  | fixed_ips             | {"subnet_id": "5786a0a6-24c8-4156-b981-cc817011c6a7", "ip_address": "10.0.0.3"} |
  | id                    | 895cf428-4bfb-4c79-86c2-d40af9bf3587                                             |
  | mac_address           | fa:16:3e:21:33:6c                                                                |
  | name                  |                                                                                  |
  | network_id            | 4de8b4f6-ac11-4836-aefb-7ed4f49ab9a7                                             |
  | security_groups       |                                                                                  |
  | status                | DOWN                                                                             |
  | tenant_id             | ad069ea620614cce9c4b6f088d39d03e                                                 |
  +-----------------------+----------------------------------------------------------------------------------+

  Now when the l3-agent is restarted or enters its periodic sync state:

  arosen@arosen-desktop:~/devstack$ sudo ip netns exec  
qrouter-80ffe19a-649c-4fc9-a0d9-2a3d67c5f600 ifconfig 
  lo        Link encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  qr-895cf428-4b Link encap:Ethernet  HWaddr fa:16:3e:21:33:6c  
inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe21:336c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:300 (300.0 B)  TX bytes:398 (398.0 B)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1243327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : 

[Yahoo-eng-team] [Bug 1301436] Re: 'icehouse-compat' broken for nova-compute in Havana

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301436

Title:
  'icehouse-compat' broken for nova-compute in Havana

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The following havana-specific commit was merged a while ago:

  commit 2c63cae921d024f39f7d14459873d20caa5cf9e0
  Author: Russell Bryant rbry...@redhat.com
  Date:   Thu Dec 5 15:34:58 2013 -0500

  Allow resizes and migrates during live upgrade
  
  This patch completes the changes needed to havana's compute rpc api to
  allow live upgrades to icehouse.  Specifically, these changes are needed
  to allow resizes and migrates to work.  Before havana compute nodes get
  put in a mixed environment with icehouse compute nodes, they need to be
  speaking version 3.0 messages.
  
  Change-Id: If09bd38c9d8c3beb5b95107c497699dec47aa3ef
  Closes-bug: #1258253

  
  Unfortunately, it doesn't work completely as expected.  There are some 
messages that claim to be sending 3.0 messages, but have the data in the wrong 
format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288463] Re: neutron_metadata_proxy_shared_secret should not be written to log file

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288463

Title:
  neutron_metadata_proxy_shared_secret should not be written to log file

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  neutron_metadata_proxy_shared_secret should not be written to log file
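
  One way to do that with oslo.config is to declare the option with
  secret=True so its value is masked when the configuration is dumped to the
  log (sketch; option placement and help text are assumptions):

    from oslo.config import cfg

    metadata_opts = [
        cfg.StrOpt('neutron_metadata_proxy_shared_secret',
                   default='',
                   secret=True,   # masked as '****' in config logging
                   help='Shared secret used to sign instance-id requests'),
    ]
    cfg.CONF.register_opts(metadata_opts)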

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280140] Re: cleanup_running_deleted_instances periodic task failed with instance not found

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280140

Title:
  cleanup_running_deleted_instances periodic task failed with instance
  not found

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  This is because the db query does not include deleted instances when
  _delete_instance_files() is called in the libvirt driver.
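
  A sketch of the kind of lookup that would still find the soft-deleted row
  (an assumption about the approach, not necessarily the merged fix;
  instance_uuid is a placeholder):

    from nova import context as nova_context
    from nova.objects import instance as instance_obj

    ctxt = nova_context.get_admin_context(read_deleted='yes')
    inst = instance_obj.Instance.get_by_uuid(ctxt, instance_uuid)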

  I can reproduce this bug both in master and stable havana.
  reproduce steps:
  1. create an instance
  2. stop nova-compute
  3. wait for nova-manage service list to display xxx for nova-compute
  4. set running_deleted_instance_poll_interval = 60 and
  running_deleted_instance_action = reap, then start nova-compute and wait
  for the cleanup periodic task
  5. a warning will be given in the compute log:
  2014-02-14 16:22:25.917 WARNING nova.compute.manager [-] [instance: 
c32db267-21a0-41e7-9d50-931d8396d8cb] Periodic cleanup failed to delete 
instance: Instance c32db267-21a0-41e7-9d50-931d8396d8cb could not be found.

  the debug trace is:
  ipdb n
   /opt/stack/nova/nova/virt/libvirt/driver.py(1006)cleanup()
 1005 block_device_mapping = driver.block_device_info_get_mapping(
  - 1006 block_device_info)
 1007 for vol in block_device_mapping:

  ipdb n
   /opt/stack/nova/nova/virt/libvirt/driver.py(1007)cleanup()
 1006 block_device_info)
  - 1007 for vol in block_device_mapping:
 1008 connection_info = vol['connection_info']

  ipdb n
   /opt/stack/nova/nova/virt/libvirt/driver.py(1041)cleanup()
 1040 
  - 1041 if destroy_disks:
 1042 self._delete_instance_files(instance)

  ipdb n
   /opt/stack/nova/nova/virt/libvirt/driver.py(1042)cleanup()
 1041 if destroy_disks:
  - 1042 self._delete_instance_files(instance)
 1043 

  ipdb s
  --Call--
   /opt/stack/nova/nova/virt/libvirt/driver.py(4950)_delete_instance_files()
 4949 
  - 4950 def _delete_instance_files(self, instance):
 4951 # NOTE(mikal): a shim to handle this file not using instance objects

  
  ipdb n
   /opt/stack/nova/nova/virt/libvirt/driver.py(4953)_delete_instance_files()
 4952 # everywhere. Remove this when that conversion happens.

  - 4953 context = nova_context.get_admin_context()
 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])

  ipdb n
   /opt/stack/nova/nova/virt/libvirt/driver.py(4954)_delete_instance_files()
 4953 context = nova_context.get_admin_context()
  - 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])
 4955 

  ipdb n
  InstanceNotFound: Instance...found.',)
   /opt/stack/nova/nova/virt/libvirt/driver.py(4954)_delete_instance_files()
 4953 context = nova_context.get_admin_context()
  - 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])
 4955 

  ipdb n
  --Return--
  None
   /opt/stack/nova/nova/virt/libvirt/driver.py(4954)_delete_instance_files()
 4953 context = nova_context.get_admin_context()
  - 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])
 4955 

  ipdb n
  InstanceNotFound: Instance...found.',)
   /opt/stack/nova/nova/virt/libvirt/driver.py(1042)cleanup()
 1041 if destroy_disks:
  - 1042 self._delete_instance_files(instance)
 1043 

  ipdb n
  --Return--
  None
   /opt/stack/nova/nova/virt/libvirt/driver.py(1042)cleanup()
 1041 if destroy_disks:
  - 1042 self._delete_instance_files(instance)
 1043 

  ipdb n
  InstanceNotFound: Instance...found.',)
   /opt/stack/nova/nova/virt/libvirt/driver.py(931)destroy()
  930 self.cleanup(context, instance, network_info, block_device_info,
  -- 931 destroy_disks)
  932 

  ipdb n
  --Return--
  None
   /opt/stack/nova/nova/virt/libvirt/driver.py(931)destroy()
  930 self.cleanup(context, instance, network_info, block_device_info,
  -- 931 destroy_disks)
  932 

  ipdb n
  InstanceNotFound: Instance...found.',)
   /opt/stack/nova/nova/compute/manager.py(1905)_shutdown_instance()
 1904 self.driver.destroy(context, instance, network_info,
  - 1905 block_device_info)
 1906 except exception.InstancePowerOffFailure:

  ipdb n
   /opt/stack/nova/nova/compute/manager.py(1906)_shutdown_instance()
 1905 block_device_info)
  - 1906 except exception.InstancePowerOffFailure:
 1907 # if the instance can't power off, don't release the ip

  
  ipdb n
   /opt/stack/nova/nova/compute/manager.py(1910)_shutdown_instance()
 1909 pass
  - 1910 except Exception:
 1911 with excutils.save_and_reraise_exception():

  ipdb n
   /opt/stack/nova/nova/compute/manager.py(1911)_shutdown_instance()
 1910 except Exception:
  - 1911 with excutils.save_and_reraise_exception():
 1912 # deallocate ip and fail without proceeding to

  
  ipdb n
   

[Yahoo-eng-team] [Bug 1279497] Re: NSX plugin: punctual state synchronization might cause timeouts

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279497

Title:
  NSX plugin: punctual state synchronization might cause timeouts

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  When performing punctual state synchronization (*) the NSX plugin
  might trigger the infamous eventlet-mysql deadlock since a NSX API
  call is performed from within a DB transaction.

  This should be avoided, especially since the transaction is not needed
  in this case.

  
  (*) This can be enabled either via a configuration variable or by explicitly 
selecting the 'status' field in Neutron APIs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271780] Re: block live migration writes wrong libvirt.xml

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271780

Title:
  block live migration writes wrong libvirt.xml

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  release: stable/havana, 2013.2.1
  virt driver: libvirt
  volume driver: cinder.volume.san.HpSanISCSIDriver

  When executing a (live) block migration for a VM booted from a volume, the
  method post_live_migration_at_destination writes a libvirt.xml on the
  destination host, but it misses the block disk information, so the
  migrated libvirt.xml always has wrong disk information.

  example)
  $ cinder create --image-id  --display_name cirros_boot_volume 10
  it returns the volume id

  $ nova boot test_vm --flavor 1 --boot-volume 
  this creates the VM, and its libvirt.xml has different information from a common VM's:
  ...
    <devices>
      <disk type="block" device="disk">
        <driver name="qemu" type="raw" cache="none"/>
        <source dev="/dev/disk/by-path/ip-san_host:3260-iscsi-iqn.-739918ef-20aa-45ae-8c86-a923d755942a-lun-0"/>
        <target bus="virtio" dev="vda"/>
        <serial>739918ef-20aa-45ae-8c86-a923d755942a</serial>
      </disk>
  ...

  $ nova live-migration --block-migrate vm uuid destination host
  After the migration, the libvirt.xml on the destination host has the common (file-backed) disk information:
  ...
    <devices>
      <disk type="file" device="disk">
        <driver name="qemu" type="qcow2" cache="none"/>
        <source file="/var/lib/nova/instances/c59f0510-1549-4249-993c-0fb79cc2ccab/disk"/>
        <target bus="virtio" dev="vda"/>
      </disk>
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270008] Re: periodic tasks will be invalid if a qemu process becomes defunct

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270008

Title:
  periodic tasks will be invalid if a qemu process becomes defunct

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  I am using stable havana nova.
  I got this exception while deleting my kvm instance: the qemu process of
  the instance had become 'defunct' for some unknown reason (maybe a
  qemu/kvm bug), and the periodic task then stopped unexpectedly every time,
  so the resources of this compute node were never reported again. Because
  of the exception below, I think we should handle this exception while
  running the periodic task.
  2014-01-16 15:53:28.421 47954 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager.update_available_resource: cannot get CPU affinity 
of process 62279: No such process
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/openstack/common/periodic_task.py, 
line 180, in run_periodic_tasks
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 5617, in 
update_available_resource
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, 
line 246, in inner
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py, line 
281, in update_available_resource
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
resources = self.driver.get_available_resource(self.nodename)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4275, 
in get_available_resource
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
stats = self.host_state.get_host_stats(refresh=True)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 5350, 
in get_host_stats
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
self.update_status()
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 5386, 
in update_status
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
data[vcpus_used] = self.driver.get_vcpu_used()
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3949, 
in get_vcpu_used
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
vcpus = dom.vcpus()
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 179, in doit
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
result = proxy_call(self._autowrap, f, *args, **kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 139, in 
proxy_call
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
rv = execute(f,*args,**kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in tworker
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
rv = meth(*args,**kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/libvirt.py, line , in vcpus
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
if ret == -1: raise libvirtError ('virDomainGetVcpus() failed', dom=self)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
libvirtError: cannot get CPU affinity of process 62279: No such process
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task

  and the exception while I delete this instance:
  2014-01-16 15:13:26.640 47954 ERROR 

[Yahoo-eng-team] [Bug 1267097] Re: A glance error (image unavailable) during a snapshot leaves an instance in SNAPSHOTTING

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267097

Title:
  A glance error (image unavailable) during a snapshot leaves an
  instance in SNAPSHOTTING

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  If a snapshot is attempted when Glance can't find the underlying image
  (usually because it has been deleted), then ImageNotFound is raised to
  nova.compute.manager.ComputeManager._snapshot_instance. The error path for
  this leaves the instance in a snapshotting state, after which operator
  intervention is required.
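  For illustration, a sketch of the error handling that would avoid the stuck
  state (the names are illustrative, not the patch that was merged): catch
  ImageNotFound around the driver snapshot call and clear the task state.

    from nova import exception

    def snapshot_safely(driver, context, instance, image_id, update_task_state):
        # Sketch: if the backing image has been deleted, clear the task
        # state instead of leaving the instance in SNAPSHOTTING forever.
        try:
            driver.snapshot(context, instance, image_id, update_task_state)
        except exception.ImageNotFound:
            update_task_state(task_state=None)
            raise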

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270304] Re: error in guestfs driver setup causes orphaned libguestfs processes

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270304

Title:
  error in guestfs driver setup causes orphaned libguestfs processes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In the libguestfs driver for Nova VFS, certain errors in the setup
  method can cause a libguestfs process to remain running, even after
  the VM is terminated and/or the method that spawned the libguestfs VM
  has finished.  These processes become zombies when killed, and can
  only be destroyed by restarting nova-compute.

  In the particular issue encountered, when using gluster as a cinder
  backend and attempting to launch a VM before adding a keypair, the
  error would occur.  However, it is feasible that the error could occur
  elsewhere.
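  For illustration, the shape of the fix I would expect (a sketch using the
  standard libguestfs Python bindings; not the merged patch): a failed setup
  must still close the handle so that no appliance process is left behind.

    import guestfs

    def launch_guestfs(imgfile, imgfmt='raw'):
        handle = guestfs.GuestFS()
        try:
            handle.add_drive_opts(imgfile, format=imgfmt)
            handle.launch()
        except Exception:
            # Any failure once the appliance may have started has to tear
            # it down, otherwise the libguestfs/qemu process is orphaned.
            handle.close()
            raise
        return handle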

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274341] Re: reservation_commit can lead to deadlock

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274341

Title:
  reservation_commit can lead to deadlock

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  reservation_commit can deadlock under load:

  2014-01-29 18:51:00.470 ERROR nova.quota 
[req-659db2df-6124-4e6a-b86b-700aae1de805 183137d95c584efb84e773a21f2ef7a1 
d8d5b06deffc45e7b258eb65ea04017c] Failed to commit reservations 
['5bea6421-1648-4fe1-9d21-4232f536e031', 
'8b27edda-f40e-476c-a9b3-d007aa3f6aac', '58e357df-b45d-4008-9ef3-0da3d8daebbb']
  2014-01-29 18:51:00.470 4380 TRACE nova.quota Traceback (most recent call 
last):
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/nova/quota.py, line 982, in commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota 
self._driver.commit(context, reservations, project_id=project_id)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/nova/quota.py, line 370, in commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota 
db.reservation_commit(context, reservations, project_id=project_id)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/nova/db/api.py, line 970, in 
reservation_commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota project_id=project_id)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 114, in 
wrapper
  2014-01-29 18:51:00.470 4380 TRACE nova.quota return f(*args, **kwargs)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 2786, in 
reservation_commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota for reservation in 
reservation_query.all():
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2115, in all
  2014-01-29 18:51:00.470 4380 TRACE nova.quota return list(self)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2227, in 
__iter__
  2014-01-29 18:51:00.470 4380 TRACE nova.quota return 
self._execute_and_instances(context)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2242, in 
_execute_and_instances
  2014-01-29 18:51:00.470 4380 TRACE nova.quota result = 
conn.execute(querycontext.statement, self._params)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1449, in 
execute
  2014-01-29 18:51:00.470 4380 TRACE nova.quota params)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1584, in 
_execute_clauseelement
  2014-01-29 18:51:00.470 4380 TRACE nova.quota compiled_sql, 
distilled_params
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1698, in 
_execute_context
  2014-01-29 18:51:00.470 4380 TRACE nova.quota context)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1691, in 
_execute_context
  2014-01-29 18:51:00.470 4380 TRACE nova.quota context)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 331, in 
do_execute
  2014-01-29 18:51:00.470 4380 TRACE nova.quota cursor.execute(statement, 
parameters)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in execute
  2014-01-29 18:51:00.470 4380 TRACE nova.quota self.errorhandler(self, 
exc, value)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
/usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
  2014-01-29 18:51:00.470 4380 TRACE nova.quota raise errorclass, errorvalue
  2014-01-29 18:51:00.470 4380 TRACE nova.quota OperationalError: 
(OperationalError) (1213, 'Deadlock found when trying to get lock; try 
restarting transaction') 'SELECT reservations.created_at AS 
reservations_created_at, reservations.updated_at AS reservations_updated_at, 
reservations.deleted_at AS reservations_deleted_at, reservations.deleted AS 
reservations_deleted, reservations.id AS reservations_id, reservations.uuid AS 
reservations_uuid, reservations.usage_id AS reservations_usage_id, 
reservations.project_id AS reservations_project_id, reservations.resource AS 
reservations_resource, reservations.delta AS reservations_delta, 
reservations.expire AS reservations_expire \nFROM reservations \nWHERE 

[Yahoo-eng-team] [Bug 1266051] Re: Attach/detach volume iSCSI multipath doesn't work properly if there are different targets associated with different portals for a mulitpath device.

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266051

Title:
  Attach/detach volume iSCSI multipath doesn't work properly if there
  are different targets associated with different portals for a
  mulitpath device.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The connect_volume and disconnect_volume code in
  LibvirtISCSIVolumeDriver assumes that the targets for different
  portals are the same for the same multipath device. This is true for
  some arrays but not for others. When there are different targets
  associated with different portals for the same multipath device,
  multipath doesn't work properly during attach/detach volume
  operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261631] Re: Reconnect on failure for multiple servers always connects to first server

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261631

Title:
  Reconnect on failure for multiple servers always connects to first
  server

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  Triaged
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  In attempting to reconnect to an AMQP server when a communication
  failure occurs, both the qpid and rabbit drivers target the configured
  servers in the order in which they were provided.  If a connection to
  the first server had failed, the subsequent reconnection attempt would
  be made to that same server instead of trying one that had not yet
  failed.  This could increase the time to failover to a working server.

  A plausible workaround for qpid would be to decrease the value for
  qpid_timeout, but since the problem only occurs if the failed server
  is the first configured, the results of the workaround would depend on
  the order that the failed server appears in the configuration.
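  For illustration, one way to avoid always retrying the first server (a
  sketch, not the merged change): keep the configured brokers in a rotating
  structure so that each failure moves the next attempt to a different host.

    import collections

    class BrokerRotation(object):
        """Sketch: rotate through the configured AMQP hosts on failure."""

        def __init__(self, hosts):
            self._hosts = collections.deque(hosts)

        def current(self):
            return self._hosts[0]

        def mark_failed(self):
            # Push the failed host to the back so the next reconnect
            # attempt targets a server that has not just failed.
            self._hosts.rotate(-1)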

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1261631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265494] Re: Openstack Nova: Unpause after host reboot fails

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265494

Title:
  Openstack Nova: Unpause after host reboot fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Description of problem:
  Unpausing an instance fails if the host has rebooted.

  Version-Release number of selected component (if applicable):
  RHEL: release 6.5 (Santiago)
  openstack-nova-api-2013.2.1-1.el6ost.noarch
  openstack-nova-compute-2013.2.1-1.el6ost.noarch
  openstack-nova-scheduler-2013.2.1-1.el6ost.noarch
  openstack-nova-common-2013.2.1-1.el6ost.noarch
  openstack-nova-console-2013.2.1-1.el6ost.noarch
  openstack-nova-conductor-2013.2.1-1.el6ost.noarch
  openstack-nova-novncproxy-2013.2.1-1.el6ost.noarch
  openstack-nova-cert-2013.2.1-1.el6ost.noarch

  How reproducible:
  Every time 

  Steps to Reproduce:
  1. Boot an instance 
  2. Pause that instance
  3. Reboot host
  4. Unpause instance  

  Actual results:
  Can't unpause; the instance is stuck in status PAUSED with power state Shutdown.

  Expected results:
  Instance should unpause, return to running state

  Additional info:

  virsh list --all --managed-save
  The ID is missing for the paused instance (pausecirros); its state is shut off.

  [root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
   Id    Name                           State
  ----------------------------------------------------
   1 instance-0003  running
   2 instance-0002  running
   - instance-0001  shut off

  [root@orange-vdse ~(keystone_admin)]# nova list  (notice nova status paused)
  
+--+---+++-+-+
  | ID   | Name  | Status | Task State 
| Power State | Networks|
  
+--+---+++-+-+
  | ebe310c2-d715-45e5-83b6-32717af1ac90 | cirros| ACTIVE | None   
| Running | net=192.168.1.4 |
  | 3ef89feb-414f-4524-b806-f14044efdb14 | pausecirros   | PAUSED | None   
| Shutdown| net=192.168.1.5 |
  | 8bcae041-2f92-4ae2-a2c2-ee59b067ac76 | suspendcirros | ACTIVE | None   
| Running | net=192.168.1.2 |
  
+--+---+++-+-+

  
  Testing without rebooting the host, the ID/state (1/paused) of the instance
(cirros) are OK and it unpauses fine.

  [root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
   Id    Name                           State
  ----------------------------------------------------
   1 instance-0003  paused
   2 instance-0002  running
   - instance-0001  shut off
  
+--+---+++-+-+
  | ID   | Name  | Status | Task State 
| Power State | Networks|
  
+--+---+++-+-+
  | ebe310c2-d715-45e5-83b6-32717af1ac90 | cirros| PAUSED | None   
| Paused  | net=192.168.1.4 |
  | 3ef89feb-414f-4524-b806-f14044efdb14 | pausecirros   | PAUSED | None   
| Shutdown| net=192.168.1.5 |
  | 8bcae041-2f92-4ae2-a2c2-ee59b067ac76 | suspendcirros | ACTIVE | None   
| Running | net=192.168.1.2 |
  
+--+---+++-+-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263766] Re: Hyper-V agent fails when ceilometer ACLs are already applied

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263766

Title:
  Hyper-V agent fails when ceilometer ACLs are already applied

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in neutron havana series:
  Fix Released

Bug description:
  The Hyper-V agent fails when it is restarted after the ceilometer ACLs
  have already been applied.

  Stack trace: http://pastebin.com/xnuszXid

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260697] Re: network_device_mtu configuration does not apply on LibvirtHybridOVSBridgeDriver to OVS VIF ports and VETH pairs

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260697

Title:
  network_device_mtu configuration does not apply on
  LibvirtHybridOVSBridgeDriver to OVS VIF ports and VETH pairs

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Due to this missing functionality the MTU cannot be increased or adapted to
specific requirements, for example configuring compute node VIFs to make use
of jumbo frames.

  LinuxOVSInterfaceDriver and LinuxBridgeInterfaceDriver are using
  network_device_mtu to configure the MTU of some created VIFs.
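  For illustration, the kind of helper that is missing (a sketch; the name is
  illustrative and nova.utils.execute is assumed to be usable from the VIF
  driver): apply network_device_mtu to each device the hybrid driver creates.

    from nova import utils

    def _set_device_mtu(dev, mtu):
        """Sketch: set the MTU on a veth end or OVS port if configured."""
        if mtu:
            utils.execute('ip', 'link', 'set', dev, 'mtu', str(mtu),
                          run_as_root=True)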

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256041] Re: Typo in the metering agent when there is an exception when it invokes a driver method

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1256041

Title:
  Typo in the metering agent when there is an exception when it invokes
  a driver method

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  Typo in the metering agent when there is an exception when it invokes
  a driver method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1256041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257119] Re: PLUMgrid plugin is missing id in update_floatingip

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257119

Title:
  PLUMgrid plugin is missing id in update_floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  In the PLUMgrid plugin, update_floatingip is missing the id argument, as
  shown below:

  def update_floatingip(self, net_db, floating_ip):
      self.plumlib.update_floatingip(net_db, floating_ip, id)
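  Presumably the intended signature is something like the sketch below (the
  actual fix may differ):

      def update_floatingip(self, net_db, floating_ip, id):
          self.plumlib.update_floatingip(net_db, floating_ip, id)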

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258360] Re: Remove unneeded call to conductor in network interface

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258360

Title:
  Remove unneeded call to conductor in network interface

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Remove unneeded call to conductor in network interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255577] Re: Requests to Metadata API fail with 500 if Neutron network plugin is used

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255577

Title:
  Requests to Metadata API fail with 500 if Neutron network plugin  is
  used

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In the TripleO devtest story we are using Nova + Baremetal Driver +
  Neutron. The provisioned baremetal instance obtains its configuration
  from the Metadata API. Currently all requests to the Metadata API fail
  with error 500.

  In nova-api log I can see the following traceback:

  2013-11-27 11:44:01,423.423 5895 ERROR nova.api.metadata.handler 
[req-0d22f3c7-663e-452e-bfa9-747b728fc13b None None] Failed to get metadata for 
ip: 192.0.2.2
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler Traceback 
(most recent call last):
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py,
 line 136, in _handle_remote_ip_request
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler 
meta_data = self.get_metadata_by_remote_address(remote_address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py,
 line 78, in get_metadata_by_remote_address
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler data = 
base.get_metadata_by_address(self.conductor_api, address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py,
 line 466, in get_metadata_by_address
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler fixed_ip 
= network.API().get_fixed_ip_by_address(ctxt, address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py,
 line 680, in get_fixed_ip_by_address
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler 
uuid_maps = self._get_instance_uuids_by_ip(context, address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py,
 line 582, in _get_instance_uuids_by_ip
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler data = 
neutronv2.get_client(context).list_ports(**search_opts)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py,
 line 69, in get_client
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler raise 
exceptions.Unauthorized()
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler 
Unauthorized: Unauthorized: bad credentials

  Analyzing this issue we found that Metadata API stopped working since
  change https://review.openstack.org/#/c/56174/4 was merged (it seems
  that change of line 57 in
  https://review.openstack.org/#/c/56174/4/nova/network/neutronv2/__init__.py
  is the reason).

  The commit message looks pretty sane and that fix seems to be the right
  thing to do, because we don't want to make neutron requests on behalf of
  the neutron service user we have in the nova config, but rather on behalf
  of the admin user who made the original request to the nova api. So it
  seems that context.is_admin should be extended to make it possible to
  distinguish between those two cases of admin users: real admin users, and
  the cases when the nova api needs to talk to neutron.

  The problem is that all metadata queries are handled using default
  admin context (user and other vars are set to None while
  is_admin=True), so with https://review.openstack.org/#/c/56174/4
  applied, get_client() always raises an exception when Metadata API
  requests are handled.
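  For illustration, the distinction being asked for could look roughly like
  this (a sketch built around the _get_client helper in
  nova/network/neutronv2/__init__.py; not the merged patch): only fall back
  to the configured neutron admin credentials when the context carries no
  user token of its own.

    from neutronclient.common import exceptions

    def get_client(context, admin=False):
        # Sketch: a context that is_admin but has no auth_token is the
        # nova-internal case (e.g. the metadata service); a request from a
        # real admin user still carries its own token and should use it.
        if admin or (context.is_admin and not context.auth_token):
            return _get_client(token=None)
        if not context.auth_token:
            raise exceptions.Unauthorized()
        return _get_client(token=context.auth_token)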

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255609] Re: VMware: possible collision of VNC ports

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255609

Title:
  VMware: possible collision of VNC ports

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  We assign VNC ports to VM instances with the following method:

  def _get_vnc_port(vm_ref):
      """Return VNC port for an VM."""
      vm_id = int(vm_ref.value.replace('vm-', ''))
      port = CONF.vmware.vnc_port + vm_id % CONF.vmware.vnc_port_total
      return port

  the vm_id is a simple counter in vSphere which increments fast and
  there is a chance to get the same port number if the vm_ids are equal
  modulo vnc_port_total (1 by default).

  A report was received that if the port number is reused you may get
  access to the VNC console of another tenant. We need to fix the
  implementation to always choose a port number which is not taken or
  report an error if there are no free ports available.
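  For example, assuming the usual defaults (vnc_port = 5900, vnc_port_total =
  10000), any two vm_ids that differ by a multiple of 10000 land on the same
  port:

    vnc_port, vnc_port_total = 5900, 10000
    for vm_id in (123, 10123):
        # Both iterations print 6023: 123 % 10000 and 10123 % 10000 are equal.
        print(vnc_port + vm_id % vnc_port_total)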

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255355] Re: VMware: Instances stuck in spawning when insufficient disk space

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255355

Title:
  VMware: Instances stuck in spawning when insufficient disk space

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  When using the VC driver, booting an instance when there is insufficient
  disk space on the datastore causes the instance to get stuck in the
  SPAWNING state.

  The following expected error message is seen in the logs before the
  instance gets stuck in SPAWNING.

  2013-11-26 15:47:58.189 DEBUG nova.virt.vmwareapi.driver [-] Task 
[CopyVirtualDisk_Task] (returnval){
 value = task-8768
 _type = Task
   } status: success from (pid=22523) _poll_task 
/opt/stack/nova/nova/virt/vmwareapi/driver.py:916
  2013-11-26 15:47:58.189 DEBUG nova.virt.vmwareapi.vmops 
[req-bd4ed37e-e5ce-4dd5-a7b7-4a1340c64111 admin demo] [instance: 
02b1b31e-ffd3-48fe-b151-68395e80d1ce] Copied Virtual Disk of size 1048576 KB 
and type preallocated on the ESX host local store iscsi-1 from (pid=22523) 
_copy_virtual_disk /opt/stack/nova/nova/virt/vmwareapi/vmops.py:458
  2013-11-26 15:47:58.190 DEBUG nova.virt.vmwareapi.vmops 
[req-bd4ed37e-e5ce-4dd5-a7b7-4a1340c64111 admin demo] Extending root virtual 
disk to 20971520 from (pid=22523) _extend_virtual_disk 
/opt/stack/nova/nova/virt/vmwareapi/vmops.py:156
  2013-11-26 15:48:03.254 WARNING nova.virt.vmwareapi.driver [-] Task 
[ExtendVirtualDisk_Task] (returnval){
 value = task-8769
 _type = Task
   } status: error Insufficient disk space on datastore ''.
  Instance failed to spawn
  Traceback (most recent call last):
File /opt/stack/nova/nova/compute/manager.py, line 1407, in _spawn
  block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 629, in spawn
  admin_password, network_info, block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 544, in spawn
  root_vmdk_path, dc_map.ref)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 166, in 
_extend_virtual_disk
  vmdk_extend_task)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 898, in 
_wait_for_task
  ret_val = done.wait()
File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, 
in wait
  return hubs.get_hub().switch()
File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 
187, in switch
  return self.greenlet.switch()
  NovaException: Insufficient disk space on datastore ''.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253455] Re: Compute node stats update may lead to DBDeadlock

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253455

Title:
  Compute node stats update may lead to DBDeadlock

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  During a tempest run, when a compute node's usage stats are updated on
  the DB as part of resource claiming for an instance spawn, we hit a
  DBDeadlock exception:

  File .../nova/compute/manager.py, line 1002, in _build_instance
   with rt.instance_claim(context, instance, limits):
  File .../nova/openstack/common/lockutils.py, line 248, in inner
   return f(*args, **kwargs)
  File .../nova/compute/resource_tracker.py, line 126, in instance_claim
   self._update(elevated, self.compute_node)
  File .../nova/compute/resource_tracker.py, line 429, in _update
   context, self.compute_node, values, prune_stats)
  File .../nova/conductor/api.py, line 240, in compute_node_update
   prune_stats)
  File .../nova/conductor/rpcapi.py, line 363, in compute_node_update
   prune_stats=prune_stats)
  File .../nova/rpcclient.py, line 85, in call
   return self._invoke(self.proxy.call, ctxt, method, **kwargs)
  File .../nova/rpcclient.py, line 63, in _invoke
   return cast_or_call(ctxt, msg, **self.kwargs)
  File .../nova/openstack/common/rpc/proxy.py, line 126, in call
   result = rpc.call(context, real_topic, msg, timeout)
  File .../nova/openstack/common/rpc/__init__.py, line 139, in call
   return _get_impl().call(CONF, context, topic, msg, timeout)
  File .../nova/openstack/common/rpc/impl_kombu.py, line 816, in call
   rpc_amqp.get_connection_pool(conf, Connection))
  File .../nova/openstack/common/rpc/amqp.py, line 574, in call
   rv = list(rv)
  File .../nova/openstack/common/rpc/amqp.py, line 539, in __iter__
   raise result
    RemoteError: Remote error: DBDeadlock (OperationalError) (1213, 'Deadlock 
found when trying to get lock; try restarting transaction') 'UPDATE 
compute_nodes SET updated_at=%s, hypervisor_version=%s WHERE compute_nodes.id = 
%s' (datetime.datetime(2013, 11, 20, 18, 28, 19, 525920), u'5.1.0 1)

  (A more complete log is at http://paste.openstack.org/raw/53702/)

  Can someone characterize the conditions under which this type of error can
  occur?

  Perhaps sqlalchemy.api.compute_node_update() needs the @_retry_on_deadlock
  treatment?
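  For illustration, the general shape of such a decorator (a sketch; the real
  _retry_on_deadlock in nova/db/sqlalchemy/api.py may differ):

    import functools
    import time

    def retry_on_deadlock(exc_types, retries=5, delay=0.5):
        """Sketch: retry a DB call when the engine reports a deadlock."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except exc_types:
                        if attempt == retries - 1:
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator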

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251792] Re: infinite recursion when deleting an instance with no network interfaces

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251792

Title:
  infinite recursion when deleting an instance with no network
  interfaces

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In some situations when an instance has no network information (a
  phrase that I'm using loosely), deleting the instance results in
  infinite recursion. The stack looks like this:

  2013-11-15 18:50:28.995 DEBUG nova.network.neutronv2.api 
[req-28f48294-0877-4f09-bcc1-7595dbd4c15a demo demo]   File 
/usr/lib/python2.7/dist-packages/eventlet/greenpool.py, line 80, in 
_spawn_n_impl
  func(*args, **kwargs)
File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in 
_process_data
  **args)
File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 354, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/exception.py, line 73, in wrapped
  return f(self, context, *args, **kw)
File /opt/stack/nova/nova/compute/manager.py, line 230, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 295, in 
decorated_function
  function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 259, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1984, in 
terminate_instance
  do_terminate_instance(instance, bdms)
File /opt/stack/nova/nova/openstack/common/lockutils.py, line 248, in 
inner
  return f(*args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1976, in 
do_terminate_instance
  reservations=reservations)
File /opt/stack/nova/nova/hooks.py, line 105, in inner
  rv = f(*args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1919, in 
_delete_instance
  self._shutdown_instance(context, db_inst, bdms)
File /opt/stack/nova/nova/compute/manager.py, line 1829, in 
_shutdown_instance
  network_info = self._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/compute/manager.py, line 868, in 
_get_instance_nw_info
  instance)
File /opt/stack/nova/nova/network/neutronv2/api.py, line 449, in 
get_instance_nw_info
  result = self._get_instance_nw_info(context, instance, networks)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)

  RECURSION STARTS HERE

File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)

  ... REPEATS AD NAUSEUM ...

File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 64, in wrapper
  nw_info=res)
File /opt/stack/nova/nova/network/api.py, line 77, in 
update_instance_cache_with_nw_info
  nw_info = api._get_instance_nw_info(context, instance)
File /opt/stack/nova/nova/network/api.py, line 49, in wrapper
  res = f(self, context, *args, **kwargs)
File /opt/stack/nova/nova/network/neutronv2/api.py, line 459, in 
_get_instance_nw_info
  LOG.debug('%s', ''.join(traceback.format_stack()))

  Here's a step-by-step explanation of how the infinite recursion
  arises:

  1. somebody calls nova.network.neutronv2.api.API.get_instance_nw_info

  2. in the above call, the network info is successfully retrieved as
  result = self._get_instance_nw_info(context, instance, networks)

  3. however, since the instance has no network information, result is
  the empty list (i.e., [])

  4. the result is put in the cache by calling
  nova.network.api.update_instance_cache_with_nw_info

  5. update_instance_cache_with_nw_info is supposed to add the result to
  the cache, but due to a bug in update_instance_cache_with_nw_info, it
  recursively calls api.get_instance_nw_info, which brings us back to
  step 1. The bug is the check before the recursive call:

  if not nw_info:
      nw_info = api._get_instance_nw_info(context, instance)

  which erroneously equates [] and None. Hence the check should be if
  nw_info is None:
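  i.e. roughly (a sketch of the corrected helper, body elided):

    def update_instance_cache_with_nw_info(api, context, instance,
                                           nw_info=None):
        # [] is a valid result meaning "no ports"; only refresh when the
        # caller passed nothing at all.
        if nw_info is None:
            nw_info = api._get_instance_nw_info(context, instance)
        # ... store nw_info in the instance's info_cache as before ...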

  I should clarify that the instance _did_ have network information at
  some point (i.e., I booted it normally with a NIC), however, some time
  after I issued a nova delete request, the network information was
  gone (i.e., in 

[Yahoo-eng-team] [Bug 1250763] Re: Users with admin role in Nova should not re-authenticate with Neutron

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250763

Title:
  Users with admin role in Nova should not re-authenticate with Neutron

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  A recent change to the way Nova creates a Neutron client 
https://review.openstack.org/#/c/52954/4
  changed the conditions under which it re-authenticates using the neutron 
admin credentials from
  “if admin” to “if admin or context.is_admin”.

  This means that any user with the admin role in Nova now interacts with
  Neutron as a different tenant. Not only does this cause an unnecessary
  re-authentication (the user may/should also have an admin role in Neutron),
  it means that they can no longer allocate and assign a floating IP to their
  instance via Nova (as the floating IP will now always be allocated in the
  context of neutron_admin_tenant).

  The context_is_admin part of this change should be reverted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252849] Re: tempest fail due to neutron cache miss

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252849

Title:
  tempest fail due to neutron cache miss

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Sounds like a regression caused by the following commits:
    1. 
https://github.com/openstack/nova/commit/1957339df302e2da75e0dbe78b5d566194ab2c08
    2. 
https://github.com/openstack/nova/commit/651fac3d5d250d42e640c3ac113084bf0d2fa3b4

  The above patches cause two issues:
    1. tempest.api.compute.servers.test_server_actions tests fail, leaving the
       servers in ERROR state.
    2. Unable to delete those servers using nova delete.

  Tempest and compute traceback here:
http://pastebin.com/CVjG03eV

  The patch was to disable cache refresh in allocate_for_instance,
  allocate_port_for_instance and deallocate_port_for_instance methods.
  This is breaking the nova boot process when the servers are created
  using the above tempest test. However, we could create servers using
  nova boot api, manually. It is likely because the cache is disabled
  while allocating the instance.

  The servers created using the above test are left in ERROR state and we
  are unable to delete them. It is likely because the cache is disabled
  while deallocating the instance and/or port.

  NOTE:
  If we restore the @refresh_sync decorator on those methods and do not use
the decorator in _get_instance_nw_info, then the above tempest test is
successful. I have not tested it when deleting the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250580] Re: Excessive calls to keystone from neutron glue code

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250580

Title:
  Excessive calls to keystone from neutron glue code

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  This has been noticed before, it just came up again with comments in
  https://bugs.launchpad.net/neutron/+bug/1250168

  [12:23] dims salv-orlando, for 1250168, where are the keystone calls being 
made from? (you indicated a large percent of api calls are keystone in the last 
comment)
  [12:35] salv-mobile It looks like all calls in admin context trigger a POST 
to keystone as the admon token is not cachex
  [12:35] salv-mobile Cached
  [12:36] salv-mobile I think this was intentional even if I know do not 
recall the precise reason
  [12:40] dims salv-mobile, which file? in python-neutronclient?
  [12:41] salv-mobile I am afk i think it's either nova.network.neutronv2.api 
or nova.network.neutronv2
  [12:41] dims salv-mobile, thx, will take a peek
  [12:42] salv-mobile The latter would be __init_.py of course
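  For illustration, one way to cut the keystone traffic (a sketch, not the
  merged change; it assumes the _get_client helper in
  nova/network/neutronv2/__init__.py, which falls back to the
  neutron_admin_* credentials when no token is passed): cache the admin
  client instead of rebuilding it, and its token, on every call.

    _ADMIN_CLIENT = None

    def _get_admin_client():
        # Sketch: reuse one admin client so each admin-context call does
        # not trigger a fresh POST to keystone for a new token.
        global _ADMIN_CLIENT
        if _ADMIN_CLIENT is None:
            _ADMIN_CLIENT = _get_client(token=None)
        return _ADMIN_CLIENT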

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249065] Re: Nova throws 400 when attempting to add floating ip (No nw_info cache associated with instance)

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1249065

Title:
  Nova throws 400 when attempting to add floating ip (No nw_info cache
  associated with instance)

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Ran into this problem in check-tempest-devstack-vm-neutron

   Traceback (most recent call last):
 File tempest/scenario/test_snapshot_pattern.py, line 74, in 
test_snapshot_pattern
   self._set_floating_ip_to_server(server, fip_for_server)
 File tempest/scenario/test_snapshot_pattern.py, line 62, in 
_set_floating_ip_to_server
   server.add_floating_ip(floating_ip)
 File /opt/stack/new/python-novaclient/novaclient/v1_1/servers.py, line 
108, in add_floating_ip
   self.manager.add_floating_ip(self, address, fixed_address)
 File /opt/stack/new/python-novaclient/novaclient/v1_1/servers.py, line 
465, in add_floating_ip
   self._action('addFloatingIp', server, {'address': address})
 File /opt/stack/new/python-novaclient/novaclient/v1_1/servers.py, line 
993, in _action
   return self.api.client.post(url, body=body)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 234, in 
post
   return self._cs_request(url, 'POST', **kwargs)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 213, in 
_cs_request
   **kwargs)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 195, in 
_time_request
   resp, body = self.request(url, method, **kwargs)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 189, in 
request
   raise exceptions.from_response(resp, body, url, method)
   BadRequest: No nw_info cache associated with instance (HTTP 400) 
(Request-ID: req-9fea0363-4532-4ad1-af89-114cff68bd89)

  Full console logs here: http://logs.openstack.org/27/55327/3/check
  /check-tempest-devstack-vm-neutron/8d26d3c/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1249065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247296] Re: VMware: launching instance from instance snapshot fails

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247296

Title:
  VMware: launching instance from instance snapshot fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Committed

Bug description:
  branch: stable/havana

  When using the nova VMwareVCDriver, instances launched from instance
  snapshots end up in the ERROR state.

  Steps to Reproduce:
  (using Horizon)
  1. Launch an instance using a vmdk image
  2. Take a snapshot of the instance
  3. Launch an instance using the snapshot

  Expected result: Instance should boot up successfully
  Actual result: Instance goes to ERROR and the following error is found in
  n-cpu.log:

   Traceback (most recent call last):
     File /opt/stack/nova/nova/compute/manager.py, line 1407, in _spawn
   block_device_info)
     File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 623, in spawn
   admin_password, network_info, block_device_info)
     File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 449, in spawn
   uploaded_vmdk_path)
     File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 387, in
   self._session._wait_for_task(instance['uuid'], vmdk_copy_task)
     File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 900, in
   ret_val = done.wait()
     File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116,
   return hubs.get_hub().switch()
     File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line
   return self.greenlet.switch()
   NovaException: A specified parameter was not correct.
   fileType

  The error seems to occur during the CopyVirtualDisk_Task operation.
  Full log for context is available here:
  http://paste.openstack.org/show/50411/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1247296/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248002] Re: bandwidth metering - excluded option doesn't work

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1248002

Title:
  bandwidth metering - excluded option doesn't work

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  When using the 'meter-label-rule-create' command with neutronclient, the
  excluded option doesn't work. The option should exclude this CIDR from the
  label. The code below has to be fixed:
  neutron / neutron / services / metering / drivers / iptables / 
iptables_driver.py
  (line number is 157)

  def _process_metering_label_rules(self, rm, rules, label_chain,
rules_chain):
  im = rm.iptables_manager
  ext_dev = self.get_external_device_name(rm.router['gw_port_id'])
  if not ext_dev:
  return

  for rule in rules:
  remote_ip = rule['remote_ip_prefix']

  dir = '-i ' + ext_dev
  if rule['direction'] == 'egress':
  dir = '-o ' + ext_dev

  if rule['excluded'] == 'true':   <-- fix it: True (boolean type)
  ipt_rule = dir + ' -d ' + remote_ip + ' -j RETURN'
  im.ipv4['filter'].add_rule(rules_chain, ipt_rule, wrap=False,
 top=True)
  else:
  ipt_rule = dir + ' -d ' + remote_ip + ' -j ' + label_chain
  im.ipv4['filter'].add_rule(rules_chain, ipt_rule,
 wrap=False, top=False)
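  In other words (a sketch of the corrected test): the column holds a
  boolean, so compare against True or simply test it directly:

              if rule['excluded']:   # or: rule['excluded'] is True
                  ipt_rule = dir + ' -d ' + remote_ip + ' -j RETURN'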

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1248002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245746] Re: Grizzly to Havana Upgrade wipes out Nova quota_usages table

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245746

Title:
  Grizzly to Havana Upgrade wipes out Nova quota_usages table

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  In Grizzly there is no user_id in the quota_usages table, and the database's
  quota_usages table looks like this:

  mysql select * from quota_usages;
  
+-+-+++--+---++--+---+-+
  | created_at  | updated_at  | deleted_at | id | project_id
   | resource  | in_use | reserved | until_refresh | deleted |
  
+-+-+++--+---++--+---+-+
  | 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL   |  1 | 
9cb04bffbe784771bd28fa093d749804 | instances |  1 |0 |  
NULL |   0 |
  | 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL   |  2 | 
9cb04bffbe784771bd28fa093d749804 | ram   |512 |0 |  
NULL |   0 |
  | 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL   |  3 | 
9cb04bffbe784771bd28fa093d749804 | cores |  1 |0 |  
NULL |   0 |
  
+-+-+++--+---++--+---+-+

  The problem can be recreated througth the following steps:

  1. In the upgrade from Grizzly to Havana, migration script
  203_make_user_quotas_key_and_value.py adds a 'user_id' column to the
  quota_usages table and its shadow table.

  2. Migration script 216_sync_quota_usages.py will delete all
  instances/cores/ram/etc quota_usages rows without a user_id via
  delete_null_rows. Since this is a Grizzly to Havana upgrade and there is no
  user_id column in Grizzly (user_id is added by
  203_make_user_quotas_key_and_value.py in Havana), all the
  instances/cores/ram/etc resources in quota_usages will be deleted.

  3. Then script 216_sync_quota_usages.py will try to add new quota_usages
  entries based on a query of the resources left in the table. Remember from
  step 2 that they are already deleted, therefore no quota entry will be
  inserted or updated.

  The result is that the quota_usages entries from Grizzly are wiped out
  during the upgrade to Havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1245746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243193] Re: VMware: snapshot backs up wrong disk when instance is attached to volume

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243193

Title:
  VMware: snapshot backs up wrong disk when instance is attached to
  volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  When a volume is attached to an instance and we back up the instance or
  try to create an image from it, the volume's disk is backed up instead of
  the instance's primary disk.

  More info:
  https://communities.vmware.com/community/vmtn/openstack/blog/2013/08/28
  /introducing-vova-an-easy-way-to-try-out-openstack-on-
  vsphere#comment-29775

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243193/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247901] Re: Querying Windows via WMI intermittently fails in get_device_number_for_target

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247901

Title:
  Querying Windows via WMI intermittently fails in
  get_device_number_for_target

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The WMI query in basevolumeutils.get_device_number_for_target can
  incorrectly return no device_number during extended stress runs.  We
  saw that it would return incorrect data after about an hour.  It could
  also return an initiator_sessions object that was empty.

  By adding a check to make sure that devices wasn't empty and adding a
  retry loop in volumeops._get_mounted_disk_from_lun we could avoid
  hitting the case where it thought it couldn't get a mounted disk for
  the target_iqn.
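
  A rough sketch of the retry-loop idea, assuming a helper that returns None
  when the WMI query comes back empty (the function and argument names below
  are illustrative, not the exact nova code):

    import time

    def get_device_number_with_retry(volutils, target_iqn, target_lun,
                                     attempts=5, delay=2):
        # Re-run the WMI lookup a few times before giving up, since it can
        # transiently return no data under load.
        for _ in range(attempts):
            device_number = volutils.get_device_number_for_target(target_iqn,
                                                                  target_lun)
            if device_number is not None:
                return device_number
            time.sleep(delay)
        raise RuntimeError("No device number found for target %s" % target_iqn)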

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1247901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236585] Re: tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since fails sporadically

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236585

Title:
  
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since
  fails sporadically

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  See: http://logs.openstack.org/69/49169/2/check/check-tempest-
  devstack-vm-full/473539f/console.html

  2013-10-07 18:57:09.859 | 
==
  2013-10-07 18:57:09.859 | FAIL: 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since[gate]
  2013-10-07 18:57:09.860 | 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since[gate]
  2013-10-07 18:57:09.860 | 
--
  2013-10-07 18:57:09.860 | _StringException: Empty attachments:
  2013-10-07 18:57:09.861 |   stderr
  2013-10-07 18:57:09.861 |   stdout
  2013-10-07 18:57:09.861 | 
  2013-10-07 18:57:09.862 | pythonlogging:'': {{{
  2013-10-07 18:57:09.862 | 2013-10-07 18:38:33,059 Request: GET 
http://127.0.0.1:8774/v2/3b02395d4eec44958ffcc10ac2673fd2/servers?changes-since=2013-10-07T18%3A38%3A31.034080
  2013-10-07 18:57:09.862 | 2013-10-07 18:38:33,490 Response Status: 200
  2013-10-07 18:57:09.863 | 2013-10-07 18:38:33,490 Nova request id: 
req-906f2cf2-6508-4cd2-bf62-3595b18d223a
  2013-10-07 18:57:09.863 | }}}
  2013-10-07 18:57:09.863 | 
  2013-10-07 18:57:09.863 | Traceback (most recent call last):
  2013-10-07 18:57:09.864 |   File 
tempest/api/compute/servers/test_list_servers_negative.py, line 191, in 
test_list_servers_by_changes_since
  2013-10-07 18:57:09.864 | self.assertEqual(num_expected, 
len(body['servers']))
  2013-10-07 18:57:09.864 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 322, in 
assertEqual
  2013-10-07 18:57:09.865 | self.assertThat(observed, matcher, message)
  2013-10-07 18:57:09.865 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 417, in 
assertThat
  2013-10-07 18:57:09.865 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-07 18:57:09.865 | MismatchError: 3 != 2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1236585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241275] Re: Nova / Neutron Client failing upon re-authentication after token expiration

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241275

Title:
  Nova / Neutron Client failing upon re-authentication after token
  expiration

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Python client library for Neutron:
  Fix Committed

Bug description:
  By default, the token length for clients is 24 hours.  When that token
  expires (or is invalidated for any reason), nova should obtain a new
  token.

  Currently, when the token expires, it leads to the following fault:
  File /usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py, 
line 136, in _get_available_networks
nets = neutron.list_networks(**search_opts).get('networks', [])
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 108, in with_params
ret = self.function(instance, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 325, in list_networks
**_params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 1197, in list
for r in self._pagination(collection, path, **params):
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 1210, in _pagination
res = self.get(path, params=params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 1183, in get
headers=headers, params=params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 1168, in retry_request
headers=headers, params=params)
  File /usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, 
line 1103, in do_request
resp, replybody = self.httpclient.do_request(action, method, body=body)
  File /usr/lib/python2.6/site-packages/neutronclient/client.py, line 
188, in do_request
self.authenticate()
  File /usr/lib/python2.6/site-packages/neutronclient/client.py, line 
224, in authenticate
token_url = self.auth_url + /tokens
  TRACE nova.openstack.common.rpc.amqp TypeError: unsupported operand 
type(s) for +: 'NoneType' and 'str'

  This error is occurring because nova/network/neutronv2/__init__.py
  obtains a token for communication with neutron.  Nova is then
  authenticating the token (nova/network/neutronv2/__init__.py -
  _get_auth_token).  Upon authentication, it passes in the token into
  the neutron client (via the _get_client method).  It should be noted
  that the token is the main element passed into the neutron client
  (auth_url, username, password, etc... are not passed in as part of the
  request)

  Since nova is passing the token directly into the neutron client, nova
  does not validate whether or not the token is authenticated.

  After the 24 hour period of time, the token naturally expires.
  Therefore, when the neutron client goes to make a request, it catches
  an exceptions.Unauthorized block.  Upon catching this exception, the
  neutron client attempts to re-authenticate and then make the request
  again.

  The issue arises in the re-authentication of the token.  The neutron client's 
authenticate method requires that the following parameters are sent in from its 
users:
   - username
   - password
   - tenant_id or tenant_name
   - auth_url
   - auth_strategy

  Since the nova client is not passing these parameters in, the neutron
  client is failing with the exception above.

  Not all methods from the nova client are exposed to this.  Invocations
  to nova/network/neutronv2/__init__.py - get_client with an 'admin'
  value set to True will always get a new token.  However, the clients
  that invoke the get_client method without specifying the admin flag,
  or by explicitly setting it to False will be affected by this.  Note
  that the admin flag IS NOT determined based off the context's admin
  attribute.

  Methods from nova/network/neutronv2/api.py that are currently affected appear 
to be:
   - _get_available_networks
   - allocate_for_instance
   - deallocate_for_instance
   - deallocate_port_for_instance
   - list_ports
   - show_port
   - add_fixed_ip_to_instance
   - remove_fixed_ip_from_instance
   - validate_networks
   - _get_instance_uuids_by_ip
   - associate_floating_ip
   - get_all
   - get
   - get_floating_ip
   - get_floating_ip_pools
   - get_floating_ip_by_address
   - get_floating_ips_by_project
   - get_instance_id_by_floating_address
   - allocate_floating_ip
   - release_floating_ip
   - disassociate_floating_ip
   - _get_subnets_from_port

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1235435] Re: 'SubnetInUse: Unable to complete operation on subnet UUID. One or more ports have an IP allocation from this subnet.'

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235435

Title:
  'SubnetInUse: Unable to complete operation on subnet UUID. One or more
  ports have an IP allocation from this subnet.'

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  Occasional tempest failure:

  http://logs.openstack.org/86/49086/2/gate/gate-tempest-devstack-vm-
  neutron-isolated/ce14ceb/testr_results.html.gz

  ft3.1: tearDownClass 
(tempest.scenario.test_network_basic_ops.TestNetworkBasicOps)_StringException: 
Traceback (most recent call last):
File tempest/scenario/manager.py, line 239, in tearDownClass
  thing.delete()
File tempest/api/network/common.py, line 71, in delete
  self.client.delete_subnet(self.id)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 112, in with_params
  ret = self.function(instance, *args, **kwargs)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 380, in delete_subnet
  return self.delete(self.subnet_path % (subnet))
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1233, in delete
  headers=headers, params=params)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1222, in retry_request
  headers=headers, params=params)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1165, in do_request
  self._handle_fault_response(status_code, replybody)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, 
line 97, in exception_handler_v20
  message=msg)
  NeutronClientException: 409-{u'NeutronError': {u'message': u'Unable to 
complete operation on subnet 9e820b02-bfe2-47e3-b186-21c5644bc9cf. One or more 
ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', 
u'detail': u''}}

  
  logstash query:

  @message:One or more ports have an IP allocation from this subnet
  AND @fields.filename:logs/screen-q-svc.txt and @message:
  SubnetInUse: Unable to complete operation on subnet


  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiBhbmQgQG1lc3NhZ2U6XCIgU3VibmV0SW5Vc2U6IFVuYWJsZSB0byBjb21wbGV0ZSBvcGVyYXRpb24gb24gc3VibmV0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODA5MTY1NDUxODcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1195139] Re: vmware: error trying to store hypervisor_version as string in using postgresql

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1195139

Title:
  vmware: error trying to store hypervisor_version as string in using
  postgresql

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Released

Bug description:
  Hi,

  I am trying to use VMware hypervisor ESXi 5.0.0 as a compute resource
  in OpenStack 2013.1.1. The driver being used is
  vmwareapi.VMwareVCDriver. The vCenter contains a single cluster
  (two nodes) running ESXi version 5.0.

  In the compute node, I am seeing the below error

  2013-06-26 17:45:27.532 10253 AUDIT nova.compute.resource_tracker [-] Free 
ram (MB): 146933
  2013-06-26 17:45:27.532 10253 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 55808
  2013-06-26 17:45:27.533 10253 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 24
  2013-06-26 17:45:27.533 10253 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
  2013-06-26 17:45:27.534 10253 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID 
is ece63342e4254910be13ee92b948ace6 multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
  2013-06-26 17:45:27.534 10253 DEBUG nova.openstack.common.rpc.amqp [-] 
UNIQUE_ID is c85fb461be22468b846368a6b8608ac2. _add_unique_id 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
  2013-06-26 17:45:27.566 10253 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
  2013-06-26 17:45:27.567 10253 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID 
is 3f5d065d1c8a4989b4d36415d45abe8b multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
  2013-06-26 17:45:27.567 10253 DEBUG nova.openstack.common.rpc.amqp [-] 
UNIQUE_ID is 1421bebfd9744271be63dcef74551c93. _add_unique_id 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
  2013-06-26 17:45:27.597 10253 CRITICAL nova [-] Remote error: DBError 
(DataError) invalid input syntax for integer: 5.0.0
  LINE 1: ..., 7, 24, 147445, 55808, 0, 512, 0, 'VMware ESXi', '5.0.0', '...
   ^
   'INSERT INTO compute_nodes (created_at, updated_at, deleted_at, deleted, 
service_id, vcpus, memory_mb, local_gb, vcpus_used, memory_mb_used, 
local_gb_used, hypervisor_type, hypervisor_version, hypervisor_hostname, 
free_ram_mb, free_disk_gb, current_workload, running_vms, cpu_info, 
disk_available_least) VALUES (%(created_at)s, %(updated_at)s, %(deleted_at)s, 
%(deleted)s, %(service_id)s, %(vcpus)s, %(memory_mb)s, %(local_gb)s, 
%(vcpus_used)s, %(memory_mb_used)s, %(local_gb_used)s, %(hypervisor_type)s, 
%(hypervisor_version)s, %(hypervisor_hostname)s, %(free_ram_mb)s, 
%(free_disk_gb)s, %(current_workload)s, %(running_vms)s, %(cpu_info)s, 
%(disk_available_least)s) RETURNING compute_nodes.id' {'local_gb': 55808, 
'vcpus_used': 0, 'deleted': 0, 'hypervisor_type': u'VMware ESXi', 'created_at': 
datetime.datetime(2013, 6, 26, 12, 15, 34, 804731), 'local_gb_used': 0, 
'updated_at': None, 'hypervisor_hostname': u'10.100.10.42', 'memory_mb': 
147445, 'current_workload': 0, 'vcpus': 24, 'free_ram_mb': 146933, 
'running_vms': 0, 'free_disk_gb': 55808, 'service_id': 7, 'hypervisor_version': 
u'5.0.0', 'disk_available_least': None, 'deleted_at': None, 'cpu_info': 
u'{model: Intel(R) Xeon(R) CPU   L5640  @ 2.27GHz, vendor: HP, 
topology: {cores: 12, threads: 24, sockets: 2}}', 'memory_mb_used': 512}
  [u'Traceback (most recent call last):\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 430, 
in _process_data\nrval = self.proxy.dispatch(ctxt, version, method, 
**args)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 133, in dispatch\nreturn getattr(proxyobj, method)(ctxt, **kwargs)\n', 
u'  File /usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 
350, in compute_node_create\nresult = self.db.compute_node_create(context, 
values)\n', u'  File /usr/lib/python2.7/dist-packages/nova/db/api.py, line 
192, in compute_node_create\nreturn IMPL.compute_node_create(context, 
values)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 96, in 
wrapper\nreturn f(*args, **kwargs)\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 498, in 
compute_node_create\ncompute_node_ref.save()\n', u'  File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/models.py,
 line 54, in save\n
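
  One way to sidestep the DataError, sketched here only as an illustration
  rather than the committed nova fix, is to pack the dotted version string
  into a single integer before it is written to the integer
  hypervisor_version column:

    def version_string_to_int(version):
        # "5.0.0" -> 5000000, "5.1.2" -> 5001002
        major, minor, patch = (int(part) for part in version.split("."))
        return major * 1000000 + minor * 1000 + patch

    assert version_string_to_int("5.0.0") == 5000000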

[Yahoo-eng-team] [Bug 1233188] Re: Can't create VM with rbd backend enabled

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233188

Title:
  Can't create VM with rbd backend enabled

Status in Ubuntu Cloud Archive:
  Triaged
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  nova-compute.log:

  2013-09-30 15:52:18.897 12884 ERROR nova.compute.manager 
[req-d112a8fd-89c4-4b5b-b6c2-1896dcd0e4ab f70773b792354571a10d44260397fde1 
b9e4ccd38a794fee82dfb06a52ec3cfd] [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] Error: libvirt_info() takes exactly 6 
arguments (7 given)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] Traceback (most recent call last):
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1037, in 
_build_instance
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] set_access_ip=set_access_ip)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1410, in _spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1407, in _spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] block_device_info)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2069, in 
spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] write_to_disk=True)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3042, in 
to_xml
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] disk_info, rescue, block_device_info)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2922, in 
get_guest_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] inst_type):
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2699, in 
get_guest_storage_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] inst_type)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2662, in 
get_guest_disk_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] self.get_hypervisor_version())
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] TypeError: libvirt_info() takes exactly 6 
arguments (7 given)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1233188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221190] Re: [OSSA 2014-009] Image format not enforced when using rescue (CVE-2014-0134)

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1221190

Title:
  [OSSA 2014-009] Image format not enforced when using rescue
  (CVE-2014-0134)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Rescuing an instance seems to guess the image format at some point.
  This allows reading files from the compute host via the qcow2 backing
  file.

  Requirements:
  - instances spawned using libvirt
  - use_cow_images = False in the config

  To reproduce:
  1. Create a qcow2 file backed by the path you want to read from the compute 
host. (qemu-img create -f qcow2 -b /path/to/the/file $((1024*1024)) evil.qcow2)
  2. Spawn an instance, scp the file into it.
  3. Overwrite the disk inside the instance (dd if=evil.qcow2 of=/dev/vda)
  4. Shutdown the instance.
  5. Rescue the instance
  6. While in rescue mode, login and read /dev/vdb - beginning should be read 
from the qcow backing file

  Libvirt description of the rescued instance will contain the entry for
  the second disk with attribute type=qcow2, even though it should be
  raw - same as the original instance.

  Mitigating factors:
  - files have to be readable by libvirt/kvm
  - apparmor/selinux will limit the number of accessible files
  - only full blocks of the file are visible in the rescued instance, so short 
files will not be available at all and long files are going to be truncated

  Possible targets:
  - private snapshots with known uuids, or instances of other tenants are a 
good target for this attack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1221190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Released
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of lost-connection errors.
  This causes the reconnect loop to raise the error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  == scheduler.log ==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0) None None

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1228847] Re: VMware: VimException: Exception in __deepcopy__ Method not found

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1228847

Title:
  VMware: VimException: Exception in __deepcopy__ Method not found

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  When an exception occurs in the VMWare driver, for example when there
  are no more IP addresses available, then the following exception is
  returned:

  2013-09-22 05:26:22.522 ERROR nova.compute.manager 
[req-b29710eb-5cb9-4de1-adca-919119b10460 demo demo] [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] Error: Exception in __deepcopy__ Method 
not found: 'VimService.VimPort.__deepcopy__'
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] Traceback (most recent call last):
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
/opt/stack/nova/nova/compute/manager.py, line 1038, in _build_instance
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] set_access_ip=set_access_ip)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
/opt/stack/nova/nova/compute/claims.py, line 53, in __exit__
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] self.abort()
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
/opt/stack/nova/nova/compute/claims.py, line 107, in abort
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] LOG.debug(_(Aborting claim: %s) % 
self, instance=self.instance)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
/opt/stack/nova/nova/openstack/common/gettextutils.py, line 228, in __mod__
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] return copied._save_parameters(other)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
/opt/stack/nova/nova/openstack/common/gettextutils.py, line 186, in 
_save_parameters
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] self.params = copy.deepcopy(other)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
190, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = _reconstruct(x, rv, 1, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
334, in _reconstruct
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] state = deepcopy(state, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
163, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = copier(x, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
257, in _deepcopy_dict
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y[deepcopy(key, memo)] = 
deepcopy(value, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
190, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = _reconstruct(x, rv, 1, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
334, in _reconstruct
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] state = deepcopy(state, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
163, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = copier(x, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File /usr/lib/python2.7/copy.py, line 
257, in _deepcopy_dict
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]

[Yahoo-eng-team] [Bug 1263881] Re: ML2 l2-pop Mechanism driver fails when more than one port is added/deleted simultaneously

2014-04-03 Thread Adam Gandelman
** Changed in: neutron/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263881

Title:
  ML2 l2-pop Mechanism driver fails when more than one port is
  added/deleted simultaneously

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  If I create more than one VM at a time (with the nova boot option
  --num-instances), the flooding flows for broadcast, multicast and unknown
  unicast traffic are sometimes not added on certain l2 agents.

  Conversely, when I delete more than one VM at a time, the flooding
  rules are not purged on certain l2 agents when they should be.

  I made this test on the trunk version with OVS agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244092] Re: db connection retrying doesn't work against db2

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244092

Title:
  db connection retrying doesn't work against db2

Status in Cinder:
  Confirmed
Status in Cinder havana series:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  When I start OpenStack following the steps below, the OpenStack services
  cannot be started without a DB2 connection:
  1. start the OpenStack services;
  2. start the DB2 service.

  I checked the code in session.py under
  nova/openstack/common/db/sqlalchemy; the root cause is that the DB2
  connection error code -30081 isn't in conn_err_codes in the
  _is_db_connection_error function, so the connection-retry code is skipped
  for DB2. To enable connection retrying against DB2, we need to add DB2
  support to the _is_db_connection_error function.
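
  A hedged sketch of that kind of change (the surrounding oslo code is
  simplified here; only the idea of recognising the DB2 code matters):

    def _is_db_connection_error(args):
        """Return True if the exception text looks like a lost connection."""
        # MySQL connection-error codes plus DB2's communication error
        # SQL30081N (SQLCODE -30081).
        conn_err_codes = ('2002', '2003', '2006', '-30081')
        for err_code in conn_err_codes:
            if args.find(err_code) != -1:
                return True
        return False

    assert _is_db_connection_error("SQL30081N ... SQLCODE=-30081")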

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1244092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246848] Re: VMWare: AssertionError: Trying to re-send() an already-triggered event.

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246848

Title:
  VMWare: AssertionError: Trying to re-send() an already-triggered
  event.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  When an exception occurs in _wait_for_task and a failure occurs, for
  example when a file is requested and it does not exist, another
  exception is also thrown:

  2013-10-31 10:49:52.617 WARNING nova.virt.vmwareapi.driver [-] In 
vmwareapi:_poll_task, Got this error Trying to re-send() an already-triggered 
event.
  2013-10-31 10:49:52.618 ERROR nova.openstack.common.loopingcall [-] in fixed 
duration looping call
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall Traceback 
(most recent call last):
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
/opt/stack/nova/nova/openstack/common/loopingcall.py, line 78, in _inner
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 941, in _poll_task
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall 
done.send_exception(excep)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 208, in 
send_exception
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall return 
self.send(None, args)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 150, in send
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall assert 
self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall 
AssertionError: Trying to re-send() an already-triggered event.
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall
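
  The underlying constraint is that an eventlet Event may only be fired once,
  so the polling loop has to guard against a second send()/send_exception().
  A minimal, self-contained sketch of that guard (names are illustrative, not
  the committed fix):

    from eventlet import event

    def finish_once(done, result=None, error=None):
        # Skip the send entirely if the event has already been triggered;
        # re-sending is what raises the AssertionError shown above.
        if done.ready():
            return
        if error is not None:
            done.send_exception(error)
        else:
            done.send(result)

    done = event.Event()
    finish_once(done, result="task-finished")
    finish_once(done, error=RuntimeError("ignored, already triggered"))
    print(done.wait())  # prints: task-finished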

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239220] Re: boto version checking in test cases is not backwards compatible

2014-04-03 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239220

Title:
  boto version checking in test cases is not backwards compatible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  The changes in  commit:

  
https://github.com/openstack/nova/commit/7161c62c22ebe609ecaf7e01d2feae473d01495a

  Introduced version checking for boto 2.14; however string comparison
  is not effective for version checking as

  '2.9.6' >= '2.14' == True

  Resulting in older boto versions (as found in saucy) trying to test
  with the 2.14 code.
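
  A short illustration of the problem, and of a tuple-based comparison that
  gets it right:

    def version_tuple(version):
        return tuple(int(part) for part in version.split("."))

    # Lexical string comparison is wrong: '9' sorts after '1'.
    assert ("2.9.6" >= "2.14") is True
    # Numeric comparison of the components gives the intended answer.
    assert (version_tuple("2.9.6") >= version_tuple("2.14")) is False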

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302211] [NEW] RbdTestCase.test_cache_base_dir_exists is duplicated in test_imagebackend

2014-04-03 Thread Dmitry Borodaenko
Public bug reported:

RbdTestCase in nova/tests/virt/libvirt/test_imagebackend.py now has two 
slightly different versions of the same test case:
https://github.com/openstack/nova/blob/dc8de426066969a3f0624fdc2a7b29371a2d55bf/nova/tests/virt/libvirt/test_imagebackend.py#L759
https://github.com/openstack/nova/blob/dc8de426066969a3f0624fdc2a7b29371a2d55bf/nova/tests/virt/libvirt/test_imagebackend.py#L806

The redundant version was added in:
https://review.openstack.org/82840

I think it should be removed; it doesn't do anything the original test
case doesn't already have.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302211

Title:
  RbdTestCase.test_cache_base_dir_exists is duplicated in
  test_imagebackend

Status in OpenStack Compute (Nova):
  New

Bug description:
  RbdTestCase in nova/tests/virt/libvirt/test_imagebackend.py now has two 
slightly different versions of the same test case:
  
https://github.com/openstack/nova/blob/dc8de426066969a3f0624fdc2a7b29371a2d55bf/nova/tests/virt/libvirt/test_imagebackend.py#L759
  
https://github.com/openstack/nova/blob/dc8de426066969a3f0624fdc2a7b29371a2d55bf/nova/tests/virt/libvirt/test_imagebackend.py#L806

  The redundant version was added in:
  https://review.openstack.org/82840

  I think it should be removed; it doesn't do anything the original test
  case doesn't already have.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302238] [NEW] Enable ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter by default

2014-04-03 Thread Jay Lau
Public bug reported:

After https://blueprints.launchpad.net/nova/+spec/instance-group-api-
extension, nova has the feature of creating instance groups with an
affinity or anti-affinity policy and creating VM instances with an
affinity/anti-affinity group.

If ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter are not
enabled, the instance group will not be able to leverage
affinity/anti-affinity.

Take the following case:
1) Create a group with affinity
2) Create two VMs with this group
3) The result is that those two VMs were not created on the same host.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302238

Title:
  Enable ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter by
  default

Status in OpenStack Compute (Nova):
  New

Bug description:
  After https://blueprints.launchpad.net/nova/+spec/instance-group-api-
  extension, nova has the feature of creating instance groups with an
  affinity or anti-affinity policy and creating VM instances with an
  affinity/anti-affinity group.

  If ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter are not
  enabled, the instance group will not be able to leverage
  affinity/anti-affinity.

  Take the following case:
  1) Create a group with affinity
  2) Create two VMs with this group
  3) The result is that those two VMs were not created on the same host.
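
  One way to enable both filters is to list them explicitly in nova.conf on
  the scheduler node via the scheduler_default_filters option (the filter
  list below is only an example; keep whatever filters you already rely on
  and append the two server-group filters):

    [DEFAULT]
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter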

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302260] [NEW] Add documentation for ISO image support in Glance

2014-04-03 Thread Andy Dugas
Public bug reported:

There is no documentation for ISO image support in the Image Service
documentation.

** Affects: nova
 Importance: Undecided
 Assignee: Andy Dugas (adugas)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Andy Dugas (adugas)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302260

Title:
  Add documentation for ISO image support in Glance

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is no documentation for ISO image support in the Image Service
  documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301519] [NEW] nova.conf.sample missing from the 2014.1.rc1 tarball

2014-04-03 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

This patch [1] removed the nova.conf.sample because it's not gated but now we 
are left without the sample config file in the tarball.
We could generate the nova.conf.sample in setup.py (based on this comment [2]) 
and include it in the tarball for rc2.

[1] https://review.openstack.org/#/c/81588/
[2] https://bugs.launchpad.net/nova/+bug/1294774/comments/4

** Affects: nova
 Importance: High
 Assignee: Vladan Popovic (vpopovic)
 Status: Confirmed

-- 
nova.conf.sample missing from the 2014.1.rc1 tarball
https://bugs.launchpad.net/bugs/1301519
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301519] Re: nova.conf.sample missing from the 2014.1.rc1 tarball

2014-04-03 Thread Vladan Popovic
** Changed in: pbr
 Assignee: (unassigned) = Vladan Popovic (vpopovic)

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: nova

** Also affects: pbr
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301519

Title:
  nova.conf.sample missing from the 2014.1.rc1 tarball

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python Build Reasonableness:
  New

Bug description:
  This patch [1] removed the nova.conf.sample because it's not gated but now we 
are left without the sample config file in the tarball.
  We could generate the nova.conf.sample in setup.py (based on this comment 
[2]) and include it in the tarball for rc2.

  [1] https://review.openstack.org/#/c/81588/
  [2] https://bugs.launchpad.net/nova/+bug/1294774/comments/4

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302272] [NEW] neutron iptables manager is slow modifying a large amount of rules

2014-04-03 Thread Brian Haley
Public bug reported:

Sudhakar Gariganti has noticed that with a very large number of iptables
rules, _modify_rules() was taking so long to complete (140 seconds)
that VMs couldn't be reliably booted because the rules weren't getting
put in place before the initial DHCP requests had timed out.  With a
small change the update can be done much quicker, which also allows each
node to support a larger set of iptables rules.

I've included a snippet from the related bug for reference,
https://bugs.launchpad.net/neutron/+bug/1253993

We have done significant testing with this patch and want to share a few
results from our experiments.

We were basically trying to see how many VMs we could scale to with the OVS
agent in use. With the default security group (which has a remote security
group), beyond 250-300 VMs, VMs were not able to get DHCP IPs. We had 16 CNs,
with VMs uniformly distributed across them. The VM image had a wait period of
120 secs to receive the DHCP response.
By the time we had around 18-19 VMs on each CN (there were around 6k iptables
rules), each RPC loop was taking close to 140 seconds (if there was any
update). The reason VMs were not getting IPs was that the iptables rules
required for the VM to send out the DHCP request were not in place before the
120 secs wait period expired. Upon further investigation we discovered that
the for loop searching iptables rules in the _modify_rules method of
iptables_manager.py was eating a big chunk of the overall time spent.

After this patch, we saw that close to 680 VMs were able to get
IPs. The number of iptables rules at this point was close to 20K, with
around 40 VMs per CN.

To summarize, we were able to increase the processing capability of a
compute node from 6K iptables rules to 20K iptables rules, which helped
more VMs get a DHCP IP within the 120 sec wait period. You can imagine the
situation when the wait time is less than 120 secs.

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302272

Title:
  neutron iptables manager is slow modifying a large amount of rules

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Sudhakar Gariganti has noticed that with a very large number of
  iptables rules, _modify_rules() was taking so long to complete
  (140 seconds) that VMs couldn't be reliably booted because the rules
  weren't getting put in place before the initial DHCP requests had
  timed out.  With a small change the update can be done much quicker,
  which also allows each node to support a larger set of iptables rules.

  I've included a snippet from the related bug for reference,
  https://bugs.launchpad.net/neutron/+bug/1253993

  We have done significant testing with this patch and want to share
  a few results from our experiments.

  We were basically trying to see how many VMs we could scale to with the
  OVS agent in use. With the default security group (which has a remote
  security group), beyond 250-300 VMs, VMs were not able to get DHCP IPs.
  We had 16 CNs, with VMs uniformly distributed across them. The VM image
  had a wait period of 120 secs to receive the DHCP response.
  By the time we had around 18-19 VMs on each CN (there were around 6k
  iptables rules), each RPC loop was taking close to 140 seconds (if there
  was any update). The reason VMs were not getting IPs was that the iptables
  rules required for the VM to send out the DHCP request were not in place
  before the 120 secs wait period expired. Upon further investigation we
  discovered that the for loop searching iptables rules in the _modify_rules
  method of iptables_manager.py was eating a big chunk of the overall time
  spent.

  After this patch, we saw that close to 680 VMs were able to
  get IPs. The number of iptables rules at this point was close to 20K,
  with around 40 VMs per CN.

  To summarize, we were able to increase the processing capability of a
  compute node from 6K iptables rules to 20K iptables rules, which helped
  more VMs get a DHCP IP within the 120 sec wait period. You can imagine
  the situation when the wait time is less than 120 secs.
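
  The shape of the speed-up can be sketched independently of the actual
  neutron patch: keep a set of the rule strings already present so each
  lookup is O(1) instead of a scan over the whole list (illustrative code
  only, not the committed change):

    def merge_rules(current_lines, new_rules):
        seen = set(current_lines)
        merged = list(current_lines)
        for rule in new_rules:
            # Membership test against the set avoids re-scanning the list
            # for every rule, which is what dominated the slow RPC loops.
            if rule not in seen:
                merged.append(rule)
                seen.add(rule)
        return merged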

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302275] [NEW] DHCP service start failed

2014-04-03 Thread shihanzhang
Public bug reported:

In neutron.conf, I set 'force_gateway_on_subnet = False', then created a
subnet with a gateway IP outside the subnet's CIDR. I found an error in
dhcp.log; the log is below:

 [-] Unable to enable dhcp for af572283-3f32-4a12-9419-1154e9c386f9.
 Traceback (most recent call last):
   File /usr/lib64/python2.6/site-packages/neutron/agent/dhcp_agent.py, line 
128, in call_driver
 getattr(driver, action)(**action_kwargs)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py, line 
167, in enable
 reuse_existing=True)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py, line 
727, in setup
 self._set_default_route(network)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py, line 
615, in _set_default_route
 device.route.add_gateway(subnet.gateway_ip)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 368, in add_gateway
 self._as_root(*args)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 217, in _as_root
 kwargs.get('use_root_namespace', False))
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 70, in _as_root
 namespace)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 81, in _execute
 root_helper=root_helper)
   File /usr/lib64/python2.6/site-packages/neutron/agent/linux/utils.py, line 
75, in execute
 raise RuntimeError(m)
 RuntimeError:
 Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-af572283-3f32-4a12-9419-1154e9c386f9', 'ip', 
'route', 'replace', 'default', 'via', '80.80.1.1', 'dev', 'tapcb420785-8d']
 Exit code: 2
 Stdout: ''
 Stderr: 'RTNETLINK answers: No such process\n'
 TRACE neutron.agent.dhcp_agent

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302275

Title:
  DHCP service start failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron.conf, I set 'force_gateway_on_subnet = False', then created a
  subnet with a gateway IP outside the subnet's CIDR. I found an error in
  dhcp.log; the log is below:

   [-] Unable to enable dhcp for af572283-3f32-4a12-9419-1154e9c386f9.
   Traceback (most recent call last):
 File /usr/lib64/python2.6/site-packages/neutron/agent/dhcp_agent.py, 
line 128, in call_driver
   getattr(driver, action)(**action_kwargs)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py, 
line 167, in enable
   reuse_existing=True)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py, 
line 727, in setup
   self._set_default_route(network)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/dhcp.py, 
line 615, in _set_default_route
   device.route.add_gateway(subnet.gateway_ip)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 368, in add_gateway
   self._as_root(*args)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 217, in _as_root
   kwargs.get('use_root_namespace', False))
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 70, in _as_root
   namespace)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/ip_lib.py, 
line 81, in _execute
   root_helper=root_helper)
 File /usr/lib64/python2.6/site-packages/neutron/agent/linux/utils.py, 
line 75, in execute
   raise RuntimeError(m)
   RuntimeError:
   Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-af572283-3f32-4a12-9419-1154e9c386f9', 'ip', 
'route', 'replace', 'default', 'via', '80.80.1.1', 'dev', 'tapcb420785-8d']
   Exit code: 2
   Stdout: ''
   Stderr: 'RTNETLINK answers: No such process\n'
   TRACE neutron.agent.dhcp_agent
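
  With a gateway outside the subnet's CIDR, the kernel refuses a plain
  'default via' route because the next hop is not directly reachable. One
  manual workaround (the commands reuse the namespace and device names from
  the traceback above) is to add a host route for the gateway first:

    ip netns exec qdhcp-af572283-3f32-4a12-9419-1154e9c386f9 \
        ip route add 80.80.1.1/32 dev tapcb420785-8d
    ip netns exec qdhcp-af572283-3f32-4a12-9419-1154e9c386f9 \
        ip route replace default via 80.80.1.1 dev tapcb420785-8d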

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

