[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-11-19 Thread Roman Podoliaka
** Changed in: mos/4.1.x
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 4.1.x series:
  Won't Fix
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
calculate their available disk size by looking at the local disks.
  This is the call path they follow:

  * nova/compute/manager.py

  def update_available_resource(self, context):
      """See driver.get_available_resource()

      Periodic process that keeps that the compute host's understanding of
      resource availability and usage in sync with the underlying
      hypervisor.

      :param context: security context
      """
      new_resource_tracker_dict = {}
      nodenames = set(self.driver.get_available_nodes())
      for nodename in nodenames:
          rt = self._get_resource_tracker(nodename)
          rt.update_available_resource(context)
          new_resource_tracker_dict[nodename] = rt

  def _get_resource_tracker(self, nodename):
      rt = self._resource_tracker_dict.get(nodename)
      if not rt:
          if not self.driver.node_is_available(nodename):
              raise exception.NovaException(
                  _("%s is not a valid node managed by this "
                    "compute host.") % nodename)

          rt = resource_tracker.ResourceTracker(self.host,
                                                self.driver,
                                                nodename)
          self._resource_tracker_dict[nodename] = rt
      return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
      """Override in-memory calculations of compute node resource usage
      based on data audited from the hypervisor layer.

      Add in resource claims in progress to account for operations that
      have declared a need for resources, but not necessarily retrieved
      them from the hypervisor layer yet.
      """
      LOG.audit(_("Auditing locally available compute resources"))
      resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
      """Get local storage info of the compute node in GB.

      :returns: A dict containing:
           :total: How big the overall usable filesystem is (in gigabytes)
           :free: How much space is free (in gigabytes)
           :used: How much space is used (in gigabytes)
      """

      if CONF.libvirt_images_type == 'lvm':
          info = libvirt_utils.get_volume_group_info(
                               CONF.libvirt_images_volume_group)
      else:
          info = libvirt_utils.get_fs_info(CONF.instances_path)

      for (k, v) in info.iteritems():
          info[k] = v / (1024 ** 3)

      return info

  
  It would be nice to have something like libvirt_utils.get_rbd_info which 
could be used in case CONF.libvirt_images_type == 'rbd'
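
  As an illustration only, here is a minimal sketch of what such a helper
  might look like, using the python-rados bindings; the helper name comes
  from the report above, but the implementation and the CONF option used
  for the ceph.conf path are assumptions:

      import rados

      def get_rbd_info():
          # Hypothetical sketch; CONF.libvirt_images_rbd_ceph_conf is
          # assumed to point at the cluster's ceph.conf.
          cluster = rados.Rados(conffile=CONF.libvirt_images_rbd_ceph_conf)
          cluster.connect()
          try:
              stats = cluster.get_cluster_stats()
              # Cluster stats are reported in kB; convert to GB to match
              # get_fs_info() and get_volume_group_info().
              return {'total': stats['kb'] / (1024 ** 2),
                      'free': stats['kb_avail'] / (1024 ** 2),
                      'used': stats['kb_used'] / (1024 ** 2)}
          finally:
              cluster.shutdown()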

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300002] Re: neutron-db-manage does not work properly when using Metaplugin

2014-11-19 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300002

Title:
  neutron-db-manage does not work properly when using Metaplugin

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  neutron-db-manage does not create Neutron DB nor upgrade Neutron DB
  properly when using Metaplugin.

  The first cause of this problem is that the 'active_plugins' parameter
  passed to migration scripts includes only the metaplugin (i.e., it does
  not include the target plugins under the metaplugin).

  There are some problems even if the first cause is fixed.
  For example, there are multiple scripts which handle the same table (of 
course the target plugin for each script is different, but they may be used at 
the same time under Metaplugin).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394112] [NEW] Allocate gateway_ip automatically (if not specified) for IPv6 SLAAC subnet during subnet_creation.

2014-11-19 Thread Sridhar Gaddam
Public bug reported:

For a SLAAC subnet that is created without specifying the gateway_ip,
Neutron currently allocates the gateway_ip at a later stage (i.e.,
neutron router_interface_add). In order to keep the API consistent
between IPv4 and IPv6, it is recommended to allocate the gateway_ip
during the subnet_create stage itself.

Please refer to the following thread for more details.
https://review.openstack.org/#/c/134530/

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: New


** Tags: ipv6

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

** Tags added: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394112

Title:
  Allocate gateway_ip automatically (if not specified) for IPv6 SLAAC
  subnet during subnet_creation.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  For a SLAAC subnet that is created without specifying the gateway_ip,
  Neutron currently allocates the gateway_ip at a later stage (i.e.,
  neutron router_interface_add). In order to keep the API consistent
  between IPv4 and IPv6, it is recommended to allocate the gateway_ip
  during the subnet_create stage itself.

  Please refer to the following thread for more details.
  https://review.openstack.org/#/c/134530/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267362] Re: The meaning of Disk GB Hours is not really clear

2014-11-19 Thread Tom Fifield
As per comment #8, this isn't something that should be fixed in
documentation - removing the reference to manuals.

** No longer affects: openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1267362

Title:
  The meaning of Disk GB Hours is not really clear

Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  when visiting the admin overview page,
  http://localhost:8000/admin/

   the usage summary lists
   
  Disk GB Hours

  The same term "This Period's GB-Hours: 0.00" can be found e.g. here:
  
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_usage_summary.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1267362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394148] [NEW] subnet-create should not force me to supply cidr

2014-11-19 Thread Deepak Jadiya
Public bug reported:

When I create a subnet with DHCP disabled, it should not force me to
supply a mandatory CIDR.

Currently it is:
neutron subnet-create dhcp-net -- --enable_dhcp=False <cidr>

Expected should be:
neutron subnet-create dhcp-net -- --enable_dhcp=False

So that I can use this subnet flexibly with extra_dhcp_opts.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394148

Title:
  subnet-create should not force me to supply cidr

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I create a subnet with DHCP disabled, it should not force me to
  supply a mandatory CIDR.

  Currently it is:
  neutron subnet-create dhcp-net -- --enable_dhcp=False <cidr>

  Expected should be:
  neutron subnet-create dhcp-net -- --enable_dhcp=False

  So that I can use this subnet flexibly with extra_dhcp_opts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394165] [NEW] L3 agent hangs while processing a router

2014-11-19 Thread Eugene Nikanorov
Public bug reported:

In some cases when the L3 agent is restarted, it can't detect that the ns-
metadata-proxy process is already running in the qrouter namespace, so the
L3 agent spawns it again and again with each restart.

After some number of ns-metadata-proxy processes are running in a namespace, 
the L3 agent hangs on spawning an additional process.
The symptom is that the router processing loop gets stuck on one of the 
routers, so the remaining routers are not processed.

The workaround is to kill ns-metadata-proxy processes from that router's
namespace and restart L3 agent.

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394165

Title:
  L3 agent hangs while processing a router

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  In some cases when the L3 agent is restarted, it can't detect that the
  ns-metadata-proxy process is already running in the qrouter namespace, so
  the L3 agent spawns it again and again with each restart.

  After some number of ns-metadata-proxy processes are running in a namespace, 
the L3 agent hangs on spawning an additional process.
  The symptom is that the router processing loop gets stuck on one of the 
routers, so the remaining routers are not processed.

  The workaround is to kill ns-metadata-proxy processes from that
  router's namespace and restart L3 agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394192] [NEW] neutron.openstack.common.rpc.common -- AMQP Server is unreachable [Errno 113]

2014-11-19 Thread ISC2014
Public bug reported:

Hello world!
On the Neutron compute node, /var/log/neutron/openvswitch-agent.log shows this 
error:
neutron.openstack.common.rpc.common -- AMQP Server is unreachable [Errno 113]

We can ping the host where the AMQP server (rabbitmq) is installed.
Nova-compute can connect to the AMQP server.

In nova.conf:
rabbit_host = OUR_IP_ADDRESS

For nova-compute it works.

But... for /var/log/neutron/openvswitch-agent.log, with the same
parameters, the AMQP server is unreachable.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394192

Title:
  neutron.openstack.common.rpc.common -- AMQP Server is unreachable
  [Errno 113]

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hello world!
  On the Neutron compute node, /var/log/neutron/openvswitch-agent.log shows 
this error:
  neutron.openstack.common.rpc.common -- AMQP Server is unreachable [Errno 113]

  We can ping the host where the AMQP server (rabbitmq) is installed.
  Nova-compute can connect to the AMQP server.

  In nova.conf:
  rabbit_host = OUR_IP_ADDRESS

  For nova-compute it works.

  But... for /var/log/neutron/openvswitch-agent.log, with the same
  parameters, the AMQP server is unreachable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387870] Re: Minimum testtools version doesn't support message parameter in assertIn

2014-11-19 Thread Davanum Srinivas (DIMS)
Looks like this is already taken care of

test-requirements.txt:testtools>=0.9.36,!=1.2.0,!=1.4.0

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387870

Title:
  Minimum testtools version doesn't support message parameter in
  assertIn

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The current min version of testtools (0.3.4) doesn't have the
  `message` parameter in `assertIn`, so we need to bump up to at least
  0.3.6.
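
  For reference, a small illustrative use of the parameter in question
  (the container and message text here are made up):

      self.assertIn('member', ['member', 'admin'],
                    message='expected the default role to be present')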

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312133] Re: Cannot launch VMs with KVM since Icehouse upgrade

2014-11-19 Thread Daniel Berrange
> Since I upgraded my deployment from Havana to Icehouse, I can no
> longer launch VMs when the hypervisor is KVM. However, with QEMU it
> works perfectly.

> I can see the qemu-system-x86_64 process taking up 100% CPU. I can't
> find any relevant logs to further understand what's happening.

These are somewhat at odds with each other. The logs provided show that
'qemu-system-x86_64' corresponds to the TCG-based QEMU, while 'qemu-spice'
is the KVM-based QEMU.

Either way, AFAICT, there is nothing notable changed between Havana and
Icehouse that would affect the way KVM guests are launched by Nova, but
not affect TCG guests.

Possibly there is some non-deterministic bug in QEMU and/or the guest kernel
that is hitting you here.

I don't think there's enough evidence of a problem in Nova to justify
keeping this open.

If you see problems with current Juno-based OpenStack & up-to-date Ubuntu
KVM packages please do feel free to file a new bug.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312133

Title:
  Cannot launch VMs with KVM since Icehouse upgrade

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Since I upgraded my deployment from Havana to Icehouse, I can no
  longer launch VMs when the hypervisor is KVM. However, with QEMU it
  works perfectly. Back in Havana I've always used KVM.

  When I say KVM, I mean that virt_type=kvm is set at the compute's
  /etc/nova/nova-compute.conf [libvirt] section, and when I say QEMU
  it's when virt_type=qemu is set.

  The machines and operating systems are unchanged since Havana, and the
  OS is Ubuntu 12.04.4 LTS.

  I can see the qemu-system-x86_64 process taking up 100% CPU. I can't
  find any relevant logs to further understand what's happening.

  I am using Neutron for networking, which is also working properly.

  Here is the Compute node's nova.conf:
  http://paste.openstack.org/show/76662/

  Compute node's nova-compute.conf:
  http://paste.openstack.org/show/76663/

  Controller node's nova.conf: http://paste.openstack.org/show/76664/

  Compute's libvirt instance file: /etc/libvirt/qemu/instance-*.xml:
  http://paste.openstack.org/show/76674/

  And the commands that are run using and not using KVM:
  http://paste.openstack.org/show/76927/

  I notice that the QEMU machine type is now pc-i440fx-trusty, even
  though I'm still using precise. Anyway, without KVM and the same
  machine type it works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1312133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394205] [NEW] apt unattended-upgrades support

2014-11-19 Thread Nobuto MURATA
Public bug reported:

At this moment, I usually enable unattended-upgrades in Ubuntu by:

#cloud-config
debconf_selections: |
  unattended-upgrades unattended-upgrades/enable_auto_updates boolean true
runcmd:
 - [ dpkg-reconfigure, -fnoninteractive, unattended-upgrades ]


It would be nice if cloud-init supports this operation by one config value for 
example:

#cloud-config
apt_unattended_upgrade: true

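As a rough illustration, a cloud-init config module implementing the
proposed flag might look like the sketch below; the handle() signature
follows cloud-init's module convention, but the module itself and the
exact flag handling are hypothetical:

  from cloudinit import util

  def handle(name, cfg, cloud, log, args):
      # Hypothetical cc_apt_unattended_upgrade module sketch.
      if not util.get_cfg_option_bool(cfg, 'apt_unattended_upgrade', False):
          log.debug("Skipping module %s: flag not set", name)
          return
      # Preseed debconf and reconfigure, mirroring the manual steps above.
      util.subp(['debconf-set-selections'],
                data=('unattended-upgrades '
                      'unattended-upgrades/enable_auto_updates boolean true\n'))
      util.subp(['dpkg-reconfigure', '-fnoninteractive',
                 'unattended-upgrades'])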

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1394205

Title:
  apt unattended-upgrades support

Status in Init scripts for use on cloud images:
  New

Bug description:
  At this moment, I usually enable unattended-upgrades in Ubuntu by:
  
  #cloud-config
  debconf_selections: |
unattended-upgrades unattended-upgrades/enable_auto_updates boolean true
  runcmd:
   - [ dpkg-reconfigure, -fnoninteractive, unattended-upgrades ]
  

  It would be nice if cloud-init supports this operation by one config value 
for example:
  
  #cloud-config
  apt_unattended_upgrade: true
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1394205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376102] Re: Unnecessary to catch FloatingIpAssociated on addFloatingIp REST API

2014-11-19 Thread Davanum Srinivas (DIMS)
Marking this as won't fix, ok Ken? (since
https://review.openstack.org/#/c/125267 is abandoned)

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376102

Title:
  Unnecessary to catch FloatingIpAssociated on addFloatingIp REST API

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  FloatingIpAssociated exception happens on the following cases only:

  * nova/network/neutronv2/api.py: disassociate_floating_ip
  * nova/network/floating_ips.py: deallocate_floating_ip

  However, addFloatingIp REST API catches the exception now.

  Maybe this catch was added to handle associating an already-associated
  floating IP, but such an association now succeeds, as the following
  shows:

  $ nova list
  +--------------------------------------+------+--------+------------+-------------+------------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                     |
  +--------------------------------------+------+--------+------------+-------------+------------------------------+
  | 1c73fc7d-5918-43b1-94f4-4898adf47884 | vm01 | ACTIVE | -          | Running     | private=10.0.0.2             |
  | 7fd66909-eab1-4408-b831-2ef1210ac196 | vm02 | ACTIVE | -          | Running     | private=10.0.0.3, 172.24.4.1 |
  +--------------------------------------+------+--------+------------+-------------+------------------------------+
  $ nova floating-ip-associate vm01 172.24.4.1
  $ nova list
  +--------------------------------------+------+--------+------------+-------------+------------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                     |
  +--------------------------------------+------+--------+------------+-------------+------------------------------+
  | 1c73fc7d-5918-43b1-94f4-4898adf47884 | vm01 | ACTIVE | -          | Running     | private=10.0.0.2, 172.24.4.1 |
  | 7fd66909-eab1-4408-b831-2ef1210ac196 | vm02 | ACTIVE | -          | Running     | private=10.0.0.3             |
  +--------------------------------------+------+--------+------------+-------------+------------------------------+

  So now this catch seems unnecessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375688] Re: unit test failure in ShelveComputeManagerTestCase.test_unshelve

2014-11-19 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375688

Title:
  unit test failure in ShelveComputeManagerTestCase.test_unshelve

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Full logs here: http://logs.openstack.org/02/124402/3/check/gate-nova-
  python26/1d3512b/

  Seen:

  2014-09-26 15:20:46.795 | ExpectedMethodCallsError: Verify: Expected 
methods never called:
  2014-09-26 15:20:46.796 |   0.  
_notify_about_instance_usage.__call__(<nova.context.RequestContext object at 
0xcf5e990>, 
Instance(access_ip_v4=None,access_ip_v6=None,architecture='x86_64',auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive=None,created_at=2014-09-26T15:09:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description=None,display_name=None,ephemeral_gb=0,ephemeral_key_uuid=None,fault=?,host='fake-mini',hostname=None,id=1,image_ref='fake-image-ref',info_cache=?,instance_type_id=2,kernel_id=None,key_data=None,key_name=None,launch_index=None,launched_at=2014-09-26T15:09:39Z,launched_on=None,locked=False,locked_by=None,memory_mb=0,metadata={},node='fakenode1',numa_topology=?,os_type='Linux',pci_devices=?,power_state=123,progress=None,project_id='fake',ramdisk_id=None,reservation_id='r-fakeres',root_device_name=None,root_gb=0,scheduled_at=None,security_g
 
roups=?,shutdown_terminate=False,system_metadata={instance_type_ephemeral_gb='0',instance_type_flavorid='1',instance_type_id='2',instance_type_memory_mb='512',instance_type_name='m1.tiny',instance_type_root_gb='1',instance_type_rxtx_factor='1.0',instance_type_swap='0',instance_type_vcpu_weight=None,instance_type_vcpus='1'},task_state=None,terminated_at=None,updated_at=2014-09-26T15:09:38Z,user_data=None,user_id='fake',uuid=cb73da32-e73e-4f52-a332-f66e9752ac9d,vcpus=0,vm_mode=None,vm_state='active'),
 'unshelve.end') -> None

  and:

  2014-09-26 15:20:46.800 | UnexpectedMethodCallError: Unexpected
  method call
  instance_update_and_get_original.__call__(<nova.context.RequestContext
  object at 0xcf5e990>, 'cb73da32-e73e-4f52-a332-f66e9752ac9d',
  {'vm_state': u'active', 'expected_task_state': 'spawning', 'key_data':
  None, 'host': u'fake-mini', 'image_ref': u'fake-image-ref',
  'power_state': 123, 'auto_disk_config': False, 'task_state': None,
  'launched_at': datetime.datetime(2014, 9, 26, 15, 9, 39, 224533,
  tzinfo=<iso8601.iso8601.Utc object at 0xa8b48d0>)},
  columns_to_join=['metadata', 'system_metadata'], update_cells=False)
  -> None

  My initial reaction is that the mox error messages don't contain
  enough information to diagnose the problem, or at least they certainly
  don't make it obvious to the uninitiated, due to the missing expected
  values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394215] [NEW] Please document shelving in VM state diagram

2014-11-19 Thread Shaunak Kashyap
Public bug reported:

The diagram on
http://docs.openstack.org/developer/nova/devref/vmstates.html does not
currently show the SHELVED_OFFLOADED state and the associated
"shelve", "shelveOffload" and "unshelve" state transitions. Please
update the diagram appropriately. Thank you.

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: documentation

** Tags removed: docimpact
** Tags added: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394215

Title:
  Please document shelving in VM state diagram

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The diagram on
  http://docs.openstack.org/developer/nova/devref/vmstates.html does not
  currently show the SHELVED_OFFLOADED state and the associated
  "shelve", "shelveOffload" and "unshelve" state transitions. Please
  update the diagram appropriately. Thank you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269061] Re: Calls to get_vifs_by_instance are not implemented when using neutron

2014-11-19 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269061

Title:
  Calls to get_vifs_by_instance are not implemented when using neutron

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The get_vifs_by_instance() call in the nova.network.neutronv2.api
  module is not implemented.
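
  For context, a hypothetical sketch of what an implementation could look
  like, returning the instance's neutron ports as its VIFs; the client
  helper exists in this module, but the body below is an assumption, not
  the actual fix:

      def get_vifs_by_instance(self, context, instance):
          # Illustrative only: list the ports bound to this instance.
          client = neutronv2.get_client(context)
          return client.list_ports(device_id=instance['uuid'])['ports']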

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269062] Re: Call to get_vif_by_mac_address is not implemented when using neutron

2014-11-19 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269062

Title:
  Call to get_vif_by_mac_address is not implemented when using neutron

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The get_vif_by_mac_address() method in the nova.network.neutronv2.api
  module is not implemented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394219] [NEW] Failed to deploy a new instance whose server group is on a failed host

2014-11-19 Thread Alex Xu
Public bug reported:

When a host is down/disabled, scheduling fails for a new instance that
belongs to a server group with the affinity policy while other instances of
the same server group are on the down/disabled host, because the scheduler
tries to find that same host for the new instance.

This also matters once we honor the server group on evacuate: an instance
in a server group with the affinity policy can't be evacuated.

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394219

Title:
  Failed to deploy a new instance whose server group is on a failed host

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When a host is down/disabled, scheduling fails for a new instance that
  belongs to a server group with the affinity policy while other instances
  of the same server group are on the down/disabled host, because the
  scheduler tries to find that same host for the new instance.

  This also matters once we honor the server group on evacuate: an
  instance in a server group with the affinity policy can't be evacuated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272447] Re: Instances fail to boot properly with more than 5 cinder volumes attached

2014-11-19 Thread Daniel Berrange
Marking as Invalid since there's been no response from the reporter in > 6
months and others confirmed it works for them.

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: cinder
   Status: Incomplete => Opinion

** Changed in: cinder
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272447

Title:
  Instances fail to boot properly with more than 5 cinder volumes
  attached

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Instances will start, but will not boot into their respective
  operating systems with more than five cinder volumes attached to the
  instance. The instance sits at a "No bootable device" error.

  Openstack Version: OpenStack Havana (2013.2)

  uname -a:   Linux controller01 3.8.0-29-generic #42~precise1-Ubuntu
  SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

  Log files have been attached including cinder & nova logs from the
  controller and from the compute node where the instance resides. For
  reference, the instance name is:  'alex-test-volume' and volumes are
  named: 'alex-test-1' through 'alex-test-6'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1272447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394234] [NEW] Separate jasmine unit test for services and controllers

2014-11-19 Thread Maxime Vidori
Public bug reported:

In the current implementation of jasmine unit tests there is no
separation between services tests and controllers tests.

The reason to separate them is to avoid collisions during tests, avoid
big templates with all the scripts, and provide a better understanding of
how the tests are performed and how to add specific ones.

The second reason is that controller tests need to mock the entire horizon
application whereas services need only their dependencies, so we will
avoid a huge amount of useless scripts during the service testing phase
and have a dedicated file for dependencies during the controller testing
phase.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394234

Title:
  Separate jasmine unit test for services and controllers

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the current implementation of jasmine unit tests there is no
  separation between services tests and controllers tests.

  The reason to separate them is to avoid collisions during tests, avoid
  big templates with all the scripts, and provide a better understanding of
  how the tests are performed and how to add specific ones.

  The second reason is that controller tests need to mock the entire horizon
  application whereas services need only their dependencies, so we will
  avoid a huge amount of useless scripts during the service testing phase
  and have a dedicated file for dependencies during the controller
  testing phase.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394236] [NEW] glance client should fail if using a non-boolean string when a boolean is required

2014-11-19 Thread Cindy Pallares
Public bug reported:

Description of problem:
When the following command is executed, the command succeeds but reverts the 
--is-public IS_PUBLIC to false. This should fail with a wrong input error.

glance --debug image-create --name  CentOS-7-x86_64-GenericCloud --is-
public IS_PUBLIC --disk-format raw --container-format bare --file
CentOS-7-x86_64-GenericCloud-20140707_01.qcow2.1 --progress


Version-Release number of selected component (if applicable):
all

How reproducible:
Always

Steps to Reproduce:
1.glance --debug image-create --name  CentOS-7-x86_64-GenericCloud --is-public 
IS_PUBLIC --disk-format raw --container-format bare --file 
CentOS-7-x86_64-GenericCloud-20140707_01.qcow2.1 --progress 
2.
3.

Actual results:
sets is-public to false

Expected results:
an error

Additional info:
This looks to be because of strutils.bool_from_string, which appears to follow 
the rule of if it's not true then it's false.
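
A minimal sketch of the stricter parsing the report asks for, using the
strict flag that oslo's strutils.bool_from_string() provides; where and
how glanceclient would wire this in (including the helper name below) is
an assumption:

  from glanceclient.openstack.common import strutils

  def parse_bool_arg(name, value):
      # strict=True makes unrecognized values raise instead of
      # silently mapping to False.
      try:
          return strutils.bool_from_string(value, strict=True)
      except ValueError:
          raise SystemExit("Invalid value %r for %s: expected a boolean "
                           "such as true/false" % (value, name))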

** Affects: glance
 Importance: Undecided
 Assignee: Cindy Pallares (cindy-pallaresq)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Cindy Pallares (cindy-pallaresq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394236

Title:
  glance client should fail if using a non-boolean string when a boolean
  is required

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  When the following command is executed, the command succeeds but reverts 
the --is-public IS_PUBLIC to false. This should fail with a wrong input error.

  glance --debug image-create --name  CentOS-7-x86_64-GenericCloud --is-
  public IS_PUBLIC --disk-format raw --container-format bare --file
  CentOS-7-x86_64-GenericCloud-20140707_01.qcow2.1 --progress

  
  Version-Release number of selected component (if applicable):
  all

  How reproducible:
  Always

  Steps to Reproduce:
  1.glance --debug image-create --name  CentOS-7-x86_64-GenericCloud 
--is-public IS_PUBLIC --disk-format raw --container-format bare --file 
CentOS-7-x86_64-GenericCloud-20140707_01.qcow2.1 --progress 
  2.
  3.

  Actual results:
  sets is-public to false

  Expected results:
  an error

  Additional info:
  This looks to be because of strutils.bool_from_string, which appears to 
follow the rule of if it's not true then it's false.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1394236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238439] Re: admin can not delete External Network because floatingip

2014-11-19 Thread Salvatore Orlando
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1238439

Title:
  admin can not delete External Network because floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New
Status in neutron juno series:
  New

Bug description:
  Hi,

  In the admin role, I create an External Network and a router, and create
  tenant A and userA.

  Now userA logs in, creates a network and router, creates VM1 and assigns
  a Floating IP; access works perfectly.

  Now I try, in the admin role, to delete it all:

  1: delete userA, no problem
  2: delete tenantA, no problem
  3: delete vm1, no problem
  4: delete router, no problem
  5: delete External Network: reports an error; trying to delete the port in
the sub panel also fails.

  Checking the Neutron server log:

  TRACE neutron.api.v2.resource L3PortInUse: Port 2e5fa663-22e0-4c9e-
  87cc-e89c12eff955 has owner network:floatingip and therefore cannot be
  deleted directly via the port API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1238439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394227] Re: tempest.scenario.test_minimum_basic.TestMinimumBasicScenario fails with cellular devstack

2014-11-19 Thread Vineet Menon
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394227

Title:
  tempest.scenario.test_minimum_basic.TestMinimumBasicScenario fails
  with cellular devstack

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  A devstack installation with cells enabled fails the tempest
  'tempest.scenario.test_minimum_basic.TestMinimumBasicScenario'.

  The error log is as follows,

  traceback-1: {{{
  Traceback (most recent call last):
File /opt/stack/tempest/tempest/services/compute/json/servers_client.py, 
line 196, in wait_for_server_termination
  raise exceptions.BuildErrorException(server_id=server_id)
  BuildErrorException: Server 89933458-9c96-41ee-8082-a0567f573458 failed to 
build and is in ERROR status
  }}}

  traceback-2: {{{
  Traceback (most recent call last):
File /opt/stack/tempest/tempest/scenario/manager.py, line 160, in 
_wait_for_cleanups
  waiter_callable(**wait)
File /opt/stack/tempest/tempest/services/compute/json/servers_client.py, 
line 196, in wait_for_server_termination
  raise exceptions.BuildErrorException(server_id=server_id)
  BuildErrorException: Server 89933458-9c96-41ee-8082-a0567f573458 failed to 
build and is in ERROR status
  }}}

  Traceback (most recent call last):
File /opt/stack/tempest/tempest/test.py, line 113, in wrapper
  return f(self, *func_args, **func_kwargs)
File /opt/stack/tempest/tempest/scenario/test_minimum_basic.py, line 137, 
in test_minimum_basic_scenario
  self.nova_boot()
File /opt/stack/tempest/tempest/scenario/test_minimum_basic.py, line 53, 
in nova_boot
  create_kwargs=create_kwargs)
File /opt/stack/tempest/tempest/scenario/manager.py, line 210, in 
create_server
  status='ACTIVE')
File /opt/stack/tempest/tempest/services/compute/json/servers_client.py, 
line 183, in wait_for_server_status
  ready_wait=ready_wait)
File /opt/stack/tempest/tempest/common/waiters.py, line 77, in 
wait_for_server_status
  server_id=server_id)
  BuildErrorException: Server 89933458-9c96-41ee-8082-a0567f573458 failed to 
build and is in ERROR status
  Details: {u'message': u'No valid host was found. ', u'code': 500, u'created': 
u'2014-11-19T13:01:41Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394267] [NEW] The overview page collecting wrong info

2014-11-19 Thread Bishoy
Public bug reported:

The overview page in the dashboard has a floating IPs resource diagram
showing how many IPs are used. It should only show the floating IPs attached
to machines in use, not all of the allocated IPs!

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: floating horizon overview

** Description changed:

  The overview page in the dashboard has a floating ips resource diagram
  showing how many ips used. It should only show the floating ips attached
- to machines in used  not all of the allocated ips in use!
+ to machines in use not all of the allocated ips in use!

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394267

Title:
  The overview page collecting wrong info

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The overview page in the dashboard has a floating IPs resource diagram
  showing how many IPs are used. It should only show the floating IPs
  attached to machines in use, not all of the allocated IPs!

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394268] [NEW] wrong error message when no IP addresses are available

2014-11-19 Thread Edgar Magana
Public bug reported:

When a network subnet runs out of IP addresses, a request to create a VM on 
that network fails with the error message: "No valid host was found. There are 
not enough hosts available."
In the nova logs the error message is: "NoMoreFixedIps: No fixed IP addresses 
available for network."

Nova should propagate the right error message reported by Neutron.

** Affects: nova
 Importance: Undecided
 Assignee: Edgar Magana (emagana)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Edgar Magana (emagana)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394268

Title:
  wrong error message when no IP addresses are available

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a network subnet runs out of IP addresses, a request to create a VM on 
that network fails with the error message: "No valid host was found. There are 
not enough hosts available."
  In the nova logs the error message is: "NoMoreFixedIps: No fixed IP 
addresses available for network."

  Nova should propagate the right error message reported by Neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393365] Re: cross-manager use of config values for backward compatibility should have deprecation warnings

2014-11-19 Thread Dolph Mathews
I'd rather see support for this come out of oslo.config, automatically.
I believe there was a related mailing list discussion recently as well.

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1393365

Title:
  cross-manager use of config values for backward compatibility should
  have deprecation warnings

Status in OpenStack Identity (Keystone):
  Incomplete
Status in Oslo configuration management library:
  New

Bug description:
  There are a few cases where, for backward compatibility, we honor
  older config values to ensure that installations don't break on
  upgrade between releases.  A good example of this is the 'driver'
  config setting from when we split up the original identity
  manager/backend - as well as the config values around the new split of
  assignment.

  We should issue deprecation warnings when the new config values have
  not been set and the old one still are set (in which case we use the
  old values).  However, the current versionutils.deprecated class
  doesn't really support the logging of arbitrary objects (it supports
  just classes and functions).  This should be enhanced, and then places
  where we do provide this backward compatibility for config values
  should be so marked (The __init__ method in the manager class for
  resource/core.py and assignment/core.py are good places to start).
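
  As a rough sketch of the kind of warning being asked for (the option
  names and fallback below are illustrative placeholders, not keystone's
  actual config layout):

      from keystone.openstack.common import log, versionutils

      LOG = log.getLogger(__name__)

      def _resolve_assignment_driver(conf):
          # Honor the old option, but warn that doing so is deprecated.
          if conf.assignment.driver is None:
              versionutils.report_deprecated_feature(
                  LOG,
                  'Deriving the assignment driver from [identity] is '
                  'deprecated; set [assignment] driver explicitly.')
              return conf.identity.driver
          return conf.assignment.driver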

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1393365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394192] Re: neutron.openstack.common.rpc.common -- AMQP Server is unreachable [Errno 113]

2014-11-19 Thread Eugene Nikanorov
This looks more like a support request than a bug report. Please use
ask.openstack.org for this.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394192

Title:
  neutron.openstack.common.rpc.common -- AMQP Server is unreachable
  [Errno 113]

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hello world!
  On the Neutron compute node, /var/log/neutron/openvswitch-agent.log shows 
this error:
  neutron.openstack.common.rpc.common -- AMQP Server is unreachable [Errno 113]

  We can ping the host where the AMQP server (rabbitmq) is installed.
  Nova-compute can connect to the AMQP server.

  In nova.conf:
  rabbit_host = OUR_IP_ADDRESS

  For nova-compute it works.

  But... for /var/log/neutron/openvswitch-agent.log, with the same
  parameters, the AMQP server is unreachable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394299] [NEW] Shared image shows as private

2014-11-19 Thread Stuart McLaren
Public bug reported:


Image 281d576a-9e4b-4d11-94bb-8b1e89f62a71 is owned by this user and
correctly shows as 'private'; however, image
'795518ca-13a6-4493-b3a3-91519ad7c067' is not owned by this user, it is a
shared image.


 $ glance --os-image-api-version 2 image-list --visibility shared
 +--------------------------------------+-----------------+
 | ID                                   | Name            |
 +--------------------------------------+-----------------+
 | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image |  <-- correct, the 
shared image is shown
 +--------------------------------------+-----------------+


 $ glance --os-image-api-version 2 image-list --visibility private
 +--------------------------------------+-----------------+
 | ID                                   | Name            |
 +--------------------------------------+-----------------+
 | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image   |
 | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image |  <-- wrong, I 
think, this is shared, not private
 +--------------------------------------+-----------------+


 $ glance --os-image-api-version 2 image-show 
281d576a-9e4b-4d11-94bb-8b1e89f62a71
 +------------------+--------------------------------------+
 | Property         | Value                                |
 +------------------+--------------------------------------+
 | checksum         | 398759a311bf25c6f1d67e753bb24dae     |
 | container_format | bare                                 |
 | created_at       | 2014-11-18T11:16:33Z                 |
 | disk_format      | raw                                  |
 | id               | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 |
 | min_disk         | 0                                    |
 | min_ram          | 0                                    |
 | name             | private-image                        |
 | owner            | f68be3a5c2b14721a9e0ed2fcb750481     |
 | protected        | False                                |
 | size             | 106                                  |
 | status           | active                               |
 | tags             | []                                   |
 | updated_at       | 2014-11-18T15:51:35Z                 |
 | visibility       | private                              |  <-- correct
 +------------------+--------------------------------------+


 (py27)ubuntu in ~/git/python-glanceclient on master*
 $ glance --os-image-api-version 2 image-show 
795518ca-13a6-4493-b3a3-91519ad7c067
 +------------------+--------------------------------------+
 | Property         | Value                                |
 +------------------+--------------------------------------+
 | checksum         | 398759a311bf25c6f1d67e753bb24dae     |
 | container_format | bare                                 |
 | created_at       | 2014-11-18T11:14:58Z                 |
 | disk_format      | raw                                  |
 | id               | 795518ca-13a6-4493-b3a3-91519ad7c067 |
 | min_disk         | 0                                    |
 | min_ram          | 0                                    |
 | name             | accepted--image                      |
 | owner            | 2dcea26aa97a41fa9547a133f6c7f5b4     |  <-- different owner
 | protected        | False                                |
 | size             | 106                                  |
 | status           | active                               |
 | tags             | []                                   |
 | updated_at       | 2014-11-19T16:32:33Z                 |
 | visibility       | private                              |  <-- wrong, I think
 +------------------+--------------------------------------+

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394299

Title:
  Shared image shows as private

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:

  Image 281d576a-9e4b-4d11-94bb-8b1e89f62a71 is owned by this user and
  correctly shows as 'private'; however, image
  '795518ca-13a6-4493-b3a3-91519ad7c067' is not owned by this user, it is a
  shared image.


   $ glance --os-image-api-version 2 image-list --visibility shared
   +--------------------------------------+-----------------+
   | ID                                   | Name            |
   +--------------------------------------+-----------------+
   | 795518ca-13a6-4493-b3a3-91519ad7c067 | accepted--image |  <-- correct, the 
shared image is shown
   +--------------------------------------+-----------------+


   $ glance --os-image-api-version 2 image-list --visibility private
   +--------------------------------------+-----------------+
   | ID                                   | Name            |
   +--------------------------------------+-----------------+
   | 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | 

[Yahoo-eng-team] [Bug 1391116] Re: keystone user-password-update also accept blank password.

2014-11-19 Thread Dolph Mathews
Added keystone to this bug - is there any reason why keystone should
accept a falsey password for a user password update?

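A minimal sketch of the guard being discussed; where keystone would
enforce it, and the helper name below, are assumptions:

    def validate_password(password):
        # Reject falsey values such as None and '' outright.
        if not password:
            raise ValueError('Password must be a non-empty string.')
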
** Changed in: python-keystoneclient
   Importance: Undecided => Medium

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1391116

Title:
  keystone user-password-update also accept blank password.

Status in OpenStack Identity (Keystone):
  Incomplete
Status in Python client library for Keystone:
  In Progress

Bug description:
  If we enter a blank password for a user then it accepts it, and then the
  user can not log in using either the older password or the blank
  password. I reproduced it the following way:

  1) I entered the command keystone user-password-update username.
  It prompted for a new password, then I hit enter without giving any
  password. During confirmation I also hit enter. The command ran
  successfully without any error.

  2) I tried to log in using the blank password; I was not able to log in.

  3) I tried with the older password also; it did not work either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1391116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379798] Re: Glance returns HTTPInternalServerError 500 in case of image server is down

2014-11-19 Thread Louis Taylor
** Also affects: glance-store
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1379798

Title:
  Glance returns HTTPInternalServerError 500 in case of image server is
  down

Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in OpenStack Glance backend store-drivers library (glance_store):
  Triaged

Bug description:
  An HTTPInternalServerError 500 response is returned to the user when the 
image server is down while downloading the image.
  When the user tries to download the image from the remote location (image 
server) which is down, a "Connection refused" ECONNREFUSED error is raised on 
the glance server.

  Ideally it should return 503 HTTPServiceUnavailable response to the
  user.
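
  As an illustration of the suggested mapping (the helper below is
  hypothetical; only webob's HTTPServiceUnavailable is a real API):

      import errno
      import socket

      import webob.exc

      try:
          resp = _open_remote_image(url)  # hypothetical helper
      except socket.error as e:
          if e.errno == errno.ECONNREFUSED:
              # Surface a 503 instead of letting this bubble up as a 500.
              raise webob.exc.HTTPServiceUnavailable(
                  explanation='Image store is currently unreachable.')
          raise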

  Steps to reproduce:

  1. Create a file 'test_image.py' at any location. example: /home/openstack/
  2. Run Simple HTTP server using python -m SimpleHTTPServer 8050 from 
location mentioned in step 1.
  3. Create an image using location parameter.
  example: glance image-create --name myimage --disk-format=raw 
--container-format=bare --location http://10.69.4.178:8050/test_image.py
  4. Stop Simple HTTP server.
  5. Download image using  glance image-download image_id created in step 3.

  Please refer http://paste.openstack.org/show/120165/ for v1 and v2 api
  logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1379798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394315] [NEW] Replace the word Info with Information

2014-11-19 Thread Cindy Lu
Public bug reported:

"Info" should be replaced with "Information" on all the Detail Overview pages.

For example:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/images/templates/images/images/_detail_overview.html#L6

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit ux

** Tags added: low-hanging-fruit

** Tags added: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394315

Title:
  Replace the word Info with Information

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  "Info" should be replaced with "Information" on all the Detail Overview
  pages.

  For example:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/images/templates/images/images/_detail_overview.html#L6

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392964] Re: UnicodeDecodeError

2014-11-19 Thread Joe Gordon
nova is trying to execute a command, and oslo.concurrency (processutils)
is trying to look for passwords to mask them in the log and raising a
Unicode error. Is there a log line before the stacktrace saying "Mount
device on dir"? My hunch is something about that is unicode.

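A sketch of the kind of defensive decode that would avoid the crash shown
in the traceback below; it mirrors the failing six.text_type() call in
strutils.mask_password() but is illustrative, not the actual upstream fix:

    import six

    def to_text(message):
        # Replace undecodable bytes (e.g. 0xa5) instead of raising.
        if isinstance(message, six.binary_type):
            return message.decode('utf-8', 'replace')
        return six.text_type(message)
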
** Also affects: oslo.concurrency
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392964

Title:
  UnicodeDecodeError

Status in OpenStack Compute (Nova):
  Incomplete
Status in Oslo Concurrency Library:
  New

Bug description:
  I created an instance on nova, and I got these errors.

  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py, line 155, in extend
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] if not is_image_partitionless(image, 
use_cow):
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py, line 205, in 
is_image_partitionless
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] fs.setup()
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py, line 81, in 
setup
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] self.teardown()
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] six.reraise(self.type_, self.value, 
self.tb)
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py, line 75, in 
setup
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] if not mount.do_mount():
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py, line 218, in 
do_mount
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] status = self.get_dev() and 
self.map_dev() and self.mnt_dev()
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py, line 193, in 
mnt_dev
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] discard_warnings=True, 
run_as_root=True)
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/utils.py, line 172, in trycmd
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] return processutils.trycmd(*args, 
**kwargs)
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py, line 
225, in trycmd
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] out, err = execute(*args, **kwargs)
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py, line 
191, in execute
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] sanitized_stderr = 
strutils.mask_password(stderr)
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/strutils.py, line 274, 
in mask_password
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] message = six.text_type(message)
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281] UnicodeDecodeError: 'utf8' codec can't 
decode byte 0xa5 in position 7: invalid start byte
  2014-11-15 18:01:23.825 2176 TRACE nova.compute.manager [instance: 
52888037-bb00-449c-b38f-7f1c19017281]

  
  I created an instance via the nova CLI.

  # 
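
  A minimal sketch of the kind of guard that would avoid this crash,
  assuming the problem is undecodable bytes in the subprocess output
  (helper name is illustrative, not the actual oslo.concurrency fix):

      import six

      def safe_text(output):
          # Subprocess stderr may contain raw bytes in an arbitrary
          # locale (e.g. the 0xa5 byte above); decode defensively
          # instead of letting six.text_type() raise.
          if isinstance(output, six.binary_type):
              return output.decode('utf-8', 'replace')
          return six.text_type(output)

      # mask_password(safe_text(stderr)) would then never raise here.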

[Yahoo-eng-team] [Bug 1394316] [NEW] glance-manage does not display basic options if run without any arguments

2014-11-19 Thread Louis Taylor
Public bug reported:

glance-manage isn't very helpful if it is run without any arguments.
Compare against nova-manage and cinder-manage:

$ nova-manage   





usage: nova-manage [-h] [--config-dir DIR] [--config-file PATH] [--debug]
   [--log-config-append PATH] [--log-date-format DATE_FORMAT]
   [--log-dir LOG_DIR] [--log-file PATH] [--log-format FORMAT]
   [--nodebug] [--nouse-syslog] [--nouse-syslog-rfc-format]
   [--noverbose] [--syslog-log-facility SYSLOG_LOG_FACILITY]
   [--use-syslog] [--use-syslog-rfc-format] [--verbose]
   [--version] [--remote_debug-host REMOTE_DEBUG_HOST]
   [--remote_debug-port REMOTE_DEBUG_PORT]
   
{version,bash-completion,shell,logs,db,vm,agent,host,flavor,vpn,floating,project,account,network,service,cell,fixed}
   ...
nova-manage: error: too few arguments

$ cinder-manage

OpenStack Cinder version: 2014.2

/usr/local/bin/cinder-manage category action [args]
Available categories:
db
volume
host
shell
logs
service
config
version
backup


Glance-manage, on the other hand, shows very little extra information:

$ glance-manage
usage: glance-manage [options] cmd
glance-manage: error: too few arguments

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: low-hanging-fruit

** Changed in: glance
   Importance: Undecided => Wishlist

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394316

Title:
  glance-manage does not display basic options if run without any
  arguments

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  glance-manage isn't very helpful if it is run without any arguments.
  Compare against nova-manage and cinder-manage:

  $ nova-manage 




  
  usage: nova-manage [-h] [--config-dir DIR] [--config-file PATH] [--debug]
 [--log-config-append PATH] [--log-date-format DATE_FORMAT]
 [--log-dir LOG_DIR] [--log-file PATH] [--log-format FORMAT]
 [--nodebug] [--nouse-syslog] [--nouse-syslog-rfc-format]
 [--noverbose] [--syslog-log-facility SYSLOG_LOG_FACILITY]
 [--use-syslog] [--use-syslog-rfc-format] [--verbose]
 [--version] [--remote_debug-host REMOTE_DEBUG_HOST]
 [--remote_debug-port REMOTE_DEBUG_PORT]
 
{version,bash-completion,shell,logs,db,vm,agent,host,flavor,vpn,floating,project,account,network,service,cell,fixed}
 ...
  nova-manage: error: too few arguments

  $ cinder-manage

  OpenStack Cinder version: 2014.2

  /usr/local/bin/cinder-manage category action [args]
  Available categories:
  db
  volume
  host
  shell
  logs
  service
  config
  version
  backup

  
  Glance-manage, on the other hand, shows very little extra information:

  $ glance-manage
  usage: glance-manage [options] cmd
  glance-manage: error: too few arguments
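
  A rough sketch of the usual argparse pattern for this, with a
  hypothetical subcommand subset (glance's real entry point differs):

      import argparse
      import sys

      def main():
          parser = argparse.ArgumentParser(prog='glance-manage')
          subparsers = parser.add_subparsers(dest='category')
          for category in ('db', 'version'):  # illustrative subset
              subparsers.add_parser(category)
          if len(sys.argv) == 1:
              # Print the full help, including the subcommand list,
              # instead of the terse "too few arguments" error.
              parser.print_help()
              sys.exit(2)
          return parser.parse_args()

      if __name__ == '__main__':
          main()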

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1394316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387950] Re: libvirt: fail to delete VM due to libvirt timeout

2014-11-19 Thread Joe Gordon
Sounds like this is a bug upstream from nova. Closing, as it is not
clear what we can do in nova for this.

** Changed in: nova
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387950

Title:
  libvirt: fail to delete VM due to libvirt timeout

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  When I run a longevity test against Juno code, I notice that the delete
  VM operation occasionally fails. The stack trace is:

  File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2507, 
in _delete_instance
  self._shutdown_instance(context, instance, bdms)
File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2437, 
in _shutdown_instance
  requested_networks)
File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2426, 
in _shutdown_instance
  block_device_info)
File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
1054, in destroy
  self._destroy(instance)
File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
1010, in _destroy
  instance=instance)
File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
979, in _destroy
  virt_dom.destroy()
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 183, in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 141, in 
proxy_call
  rv = execute(f, *args, **kwargs)
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 122, in 
execute
  six.reraise(c, e, tb)
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 80, in 
tworker
  rv = meth(*args, **kwargs)
File /usr/lib64/python2.6/site-packages/libvirt.py, line 730, in destroy
  if ret == -1: raise libvirtError ('virDomainDestroy() failed', dom=self)

  
  Libvirt log is:

  2014-10-29 06:28:17.535+0000: 2025: warning : qemuProcessKill:4174 : Timed 
out waiting after SIGTERM to process 9972, sending SIGKILL
  2014-10-29 06:28:22.537+0000: 2025: warning : qemuProcessKill:4206 : Timed 
out waiting after SIGKILL to process 9972
  2014-10-29 06:28:22.537+0000: 2025: error : qemuDomainDestroyFlags:2098 : 
operation failed: failed to kill qemu process with SIGTERM
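
  One hedged way to soften this from the caller's side, assuming the
  standard libvirt-python binding (the retry policy is illustrative,
  not nova's actual behaviour):

      import time

      import libvirt

      def destroy_with_retry(virt_dom, attempts=3, wait=5):
          # qemu can briefly survive SIGTERM/SIGKILL (see the log
          # above), so give libvirt a few chances before giving up.
          for attempt in range(attempts):
              try:
                  virt_dom.destroy()
                  return
              except libvirt.libvirtError:
                  if attempt == attempts - 1:
                      raise
                  time.sleep(wait)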

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386147] Re: live-migration failed because of "Filter ComputeFilter returned 0 hosts", the instance's status is still migrating.

2014-11-19 Thread Joe Gordon
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1386147

Title:
  live-migration failed because of "Filter ComputeFilter returned 0
  hosts", the instance's status is still migrating.

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  I have three compute nodes, and one instance on host opencos179-24.

  A live-migration failed; the nova-scheduler.log shows "Filter
  ComputeFilter returned 0 hosts".

  But the instance's status is still migrating.

  I hope the instance could rollback.

  log is as follows:
  http://paste.openstack.org/show/125266/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1386147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394022] Re: v2.1 api sample tests taking a long time to run

2014-11-19 Thread Joe Gordon
tox -epy27 -- nova.tests.unit.integrated.test_api_samples


Run: 598 in 1936.127657 sec.


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394022

Title:
  v2.1 api sample tests taking a long time to run

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  There have been reports that the api sample tests for v2.1 are taking
  significantly longer to run than the v2 versions. This needs to be
  investigated and fixed. E.g., is it setup time (stevedore?), jsonschema
  input validation, or something else?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366694] Re: ironic: unnecessary instance.save called in the spawn method

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366694

Title:
  ironic: unnecessary instance.save called in the spawn method

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When an ephemeral disk is used there is an unnecessary call to save.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323475] Re: Losing network info cache sometimes

2014-11-19 Thread Joe Gordon
Havana is not supported anymore

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323475

Title:
  Losing network info cache sometimes

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  We are using stable/havana.

  For some inexplicable reason, some instances lost network information.
  The result looks like:

  
  $ nova list
  | a8f8a437-d203-4265-aca2-7bd35539c5d1 | test 
 | ACTIVE | -| Running |  

  $ neutron port-list --device-id a8f8a437-d203-4265-aca2-7bd35539c5d1
  
+--+--+---++
  | id   | name | mac_address   | fixed_ips 
 |
  
+--+--+---++
  | 6b042778-76bb-45ca-86a8-abfdb1ba1a62 |  | fa:16:3e:67:9a:88 | 
{subnet_id: 90b338d3-7711-48fd-a0f6-11a27388cb42, ip_address: 
10.162.82.2} |
  | 9800fd03-5e07-4a54-8568-28d501073c5f |  | fa:16:3e:d0:86:4a | 
{subnet_id: 9a1fc59d-aec1-4e3a-bd88-99ea558e8b29, ip_address: 
192.168.0.5} |
  
+--+--+---++

  neutron said there are two ports bound to the instance, but nova
  said the instance has no port.

  We dug through the logs and found something went wrong after running
  heal_instance_info_cache. One log line said the instance info_cache is
  [], but the previous log line said the instance info_cache was filled.
  From that time on, the info_cache was lost and could not self-heal.

  The simple logs pasted below, and full log here:
  http://paste.openstack.org/show/81605/

  
  
  2014-05-26 03:47:13.258 14884 DEBUG nova.network.api [-] Updating cache with 
info: [VIF({'ovs_interfaceid': u'5953e098-e131-48eb-b53c-5eb095f3bfee', 
'network': Network({'bridge': 'br-int', 'subne
  ts': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 
'floating_ips': [], 'address': u'10.162.81.4'})], 'version': 4, 'meta': 
{'dhcp_server': u'10.162.81.3'}, 'dns': [], 'rout
  es': [], 'cidr': u'10.162.81.0/28', 'gateway': IP({'meta': {}, 'version': 
None, 'type': 'gateway', 'address': None})})], 'meta': {'injected': False, 
'tenant_id': u'c10373fb5d234e31af4d5d56527994f
  c'}, 'id': u'b0bb08c1-dc05-4e17-a021-f3b850a823ba', 'label': 
u'idc_c10373fb5d234e31af4d5d56527994fc'}), 'devname': u'tap5953e098-e1', 
'qbh_params': None, 'meta': {}, 'address': u'fa:16:3e:40:34:4
  c', 'type': u'ovs', 'id': u'5953e098-e131-48eb-b53c-5eb095f3bfee', 
'qbg_params': None})] update_instance_cache_with_nw_info 
/usr/lib/python2.7/dist-packages/nova/network/api.py:71
  2014-05-26 03:47:13.263 14884 DEBUG nova.compute.manager [-] [instance: 
49a806a9-986e-4ce3-ae9f-d3c4317255a3] Updated the info_cache for instance 
_heal_instance_info_cache /usr/lib/python2.7/dist
  -packages/nova/compute/manager.py:5146
  .
  2014-05-26 03:52:14.255 14884 DEBUG nova.network.api [-] Updating cache with 
info: [] update_instance_cache_with_nw_info 
/usr/lib/python2.7/dist-packages/nova/network/api.py:71
  .

  
  I tried hard but could not reproduce the bug manually. The key problem here 
is why the info_cache is not showing up. But on the other hand, we had better 
give nova the ability to self-heal in this case.
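
  One possible self-healing guard, sketched with illustrative names
  (this is not nova's actual code path):

      def safe_update_cache(instance, nw_info, update_fn, log):
          # Refuse to overwrite a populated info_cache with an empty
          # result; an empty nw_info here is more likely a transient
          # lookup failure than a real detachment of every port.
          if not nw_info and instance.get('info_cache'):
              log.warning('Skipping empty nw_info for instance %s; '
                          'keeping the existing cache' % instance['uuid'])
              return
          update_fn(instance, nw_info)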

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 814469] Re: OSAPI reports ACTIVE when server built from bad image

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/814469

Title:
  OSAPI reports ACTIVE when server built from bad image

Status in OpenStack Compute (Nova):
  Opinion

Bug description:

  Nova shows the instance as ACTIVE when it definitely is not.

  To reproduce
  1. upload a bad image to glance:
  glance add name=bogus is_public=True < barbie.jpg

  2. boot it:
  nova boot --flavor 1 --image <image id> 'bogus-server'

  3. list it:
  $ nova list
  +-+--++---++
  |  ID | Name | Status | Public IP | Private IP |
  +-+--++---++
  | 667 | bogus-server | ACTIVE |   | 10.0.0.7   |
  +-+--++---++

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/814469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394227] Re: tempest.scenario.test_minimum_basic.TestMinimumBasicScenario fails with cellular devstack

2014-11-19 Thread Matt Riedemann
It also fails in tree:

http://logs.openstack.org/30/133530/2/experimental/check-tempest-dsvm-
cells/dde0c62/console.html#_2014-11-18_20_59_51_300

The instance uuid in this case is 009ba63a-f4e6-41b1-9f02-2830c284e9e7.

It fails here:

http://logs.openstack.org/30/133530/2/experimental/check-tempest-dsvm-
cells/dde0c62/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-11-18_20_52_50_764

It's the flavors issue that alaski is trying to fix:

2014-11-18 20:52:50.764 30949 ERROR nova.compute.manager [-] [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] Instance failed to spawn
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] Traceback (most recent call last):
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2244, in _build_resources
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] yield resources
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2114, in 
_build_and_run_instance
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] instance_type=instance_type)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2639, in spawn
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] write_to_disk=True)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 4222, in _get_guest_xml
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] context)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3968, in 
_get_guest_config
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] instance['instance_type_id'])
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/objects/base.py, line 154, in wrapper
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] args, kwargs)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/conductor/rpcapi.py, line 359, in object_class_action
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] objver=objver, args=args, 
kwargs=kwargs)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, line 
152, in call
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] retry=self.retry)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 90, 
in _send
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] timeout=timeout, retry=retry)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 408, in send
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] retry=retry)
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 399, in _send
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] raise result
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] FlavorNotFound_Remote: Flavor 6 could not 
be found.
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] Traceback (most recent call last):
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7] 
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager [instance: 
009ba63a-f4e6-41b1-9f02-2830c284e9e7]   File 
/opt/stack/new/nova/nova/conductor/manager.py, line 400, in _object_dispatch
2014-11-18 20:52:50.764 30949 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1054225] Re: Aggregates Extension XML and JSON return different format timestamp

2014-11-19 Thread Joe Gordon
we are dropping support for XML in Juno

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1054225

Title:
  Aggregates Extension XML and JSON return different format timestamp

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The response to aggregate create returns different format timestamps
  for XML and JSON requests.

  For XML a sample response is:
  <?xml version='1.0' encoding='UTF-8'?>
  <aggregate>
    <name>name</name>
    <availability_zone>nova</availability_zone>
    <deleted>False</deleted>
    <created_at>2012-09-21 16:49:02.265059</created_at>
    <updated_at>None</updated_at>
    <deleted_at>None</deleted_at>
    <id>1</id>
  </aggregate>

  and for JSON:
  {
  aggregate: {
  availability_zone: nova,
  created_at: 2012-09-21T15:49:27.263099,
  deleted: false,
  deleted_at: null,
  id: 1,
  name: name,
  updated_at: null
  }
  }

  The 'created_at' field is using 2 different formats:

  2012-09-21T15:49:27.263099
  vs
  2012-09-21 16:49:02.265059
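
  A sketch of normalizing both serializers onto the ISO 8601 form the
  JSON response already uses (assuming a plain datetime object):

      import datetime

      def isotime(dt):
          # Render timestamps the way the JSON serializer already
          # does, so XML and JSON agree.
          return dt.strftime('%Y-%m-%dT%H:%M:%S.%f')

      print(isotime(datetime.datetime(2012, 9, 21, 16, 49, 2, 265059)))
      # -> 2012-09-21T16:49:02.265059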

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1054225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 938314] Re: Atom links should respect Accept header

2014-11-19 Thread Joe Gordon
we are dropping XML support in nova in Juno

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/938314

Title:
  Atom links should respect Accept header

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The atom links have a huge overhead.  In a quick experiment with an
  instance listing, the response doubled in size versus when the atom
  links were not present.  As the atom links seem to carry no extra
  information versus the IDs, they should be optional.

  Further, per the Atom RFC (http://www.ietf.org/rfc/rfc4287), the
  correct Content-Type for atom links is application/atom+xml.  As such,
  we should return atom links only if the user sends an Accept header
  including the content type application/atom+xml or
  application/atom+json.

  If the client does not send an atom content type, but instead
  specifies e.g. Accept: application/json, then we must not return the
  Atom formatted information.
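
  A minimal sketch of the negotiation this asks for (helper name is
  illustrative):

      def wants_atom(accept_header):
          # Only emit atom links when the client explicitly accepts
          # an atom content type, per RFC 4287.
          accepted = [t.split(';')[0].strip()
                      for t in (accept_header or '').split(',')]
          return ('application/atom+xml' in accepted or
                  'application/atom+json' in accepted)

      assert wants_atom('application/atom+xml, application/json')
      assert not wants_atom('application/json')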

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/938314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161208] Re: make AggregateAPI work in cells

2014-11-19 Thread Joe Gordon
Now that we are planning on replacing the entire cells design with
something else, and don't want to spend time fixing something we are
about to remove, closing this.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161208

Title:
  make AggregateAPI work in cells

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Make nova.compute.AggregateAPI work in child cells:

  def create_aggregate(self, context, aggregate_name, availability_zone):
  def get_aggregate(self, context, aggregate_id):
  def get_aggregate_list(self, context):
  def update_aggregate(self, context, aggregate_id, values):
  def update_aggregate_metadata(self, context, aggregate_id, metadata):
  def delete_aggregate(self, context, aggregate_id):
  def add_host_to_aggregate(self, context, aggregate_id, host_name):
  def remove_host_from_aggregate(self, context, aggregate_id, host_name):

  Should all work through cells.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 989339] Re: Incorrect usage data on delete.

2014-11-19 Thread Joe Gordon
very old bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/989339

Title:
  Incorrect usage data on delete.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  On deletes, there is an extra compute.instance.exists event being
  generated. This was a relic of a very old initial design for
  notifications where periodic exists events were not emitted for
  instances deleted in the current audit period, and was missed when
  that design was changed.

  As it stands this event results in instance uptime being
  miscalculated. (These extra 'backstop' events should only be emitted
  on events like resize and rebuild that reset launched_at)

  Also, the notification is not reporting deleted_at time correctly, as
  it is blank, instead of showing the delete time for the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/989339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244561] Re: Delete vm when the host shutdown.

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244561

Title:
  Delete vm when the host shutdown.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The code version:
  The lastest version of master.

  The API version:
  V2

  Details:
  Now, I have two hosts. The one named A runs all the openstack services, and 
the other one named B only runs the nova-compute and quantum-agent services.
  1. Create a vm on host B.
  2. Shut down host B.
  3. Delete the vm on host B.
  4. Start host B; I find that the data of the deleted vm is still in the 
/opt/stack/data/nova/instances directory. And using the "virsh list --all" 
command, I find the vm in shutoff state.
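
  The reconciliation idea behind cleaning up after step 4 could look
  roughly like this (a sketch only; destroy_by_name is a hypothetical
  helper, and nova's own periodic cleanup differs):

      def reap_running_deleted(driver, db_instances, log):
          # Compare what the hypervisor still runs against what the
          # DB says is deleted, and remove the leftovers.
          alive = set(driver.list_instances())
          deleted = set(i['name'] for i in db_instances if i['deleted'])
          for name in alive & deleted:
              log.info('Reaping leftover domain %s' % name)
              driver.destroy_by_name(name)  # hypothetical helper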

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244561/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209345] Re: Migration tests fail with sqlalchemy 0.8

2014-11-19 Thread Matt Riedemann
Given stable/icehouse supports newer sqlalchemy:

https://github.com/openstack/nova/blob/stable/icehouse/requirements.txt#L2

I think we can close this; there would be nothing to backport in the
community since havana is end of life.

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1209345

Title:
  Migration tests fail with sqlalchemy 0.8

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
   File 
/«BUILDDIR»/nova-2013.2+git201308071233~saucy/nova/db/sqlalchemy/migrate_repo/versions/206_add_instance_cleaned.py,
 line 47, in downgrade
  instances.columns.cleaned.drop()
File /usr/lib/python2.7/dist-packages/migrate/changeset/schema.py, line 
549, in drop
  engine._run_visitor(visitorcallable, self, connection, **kwargs)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
1479, in _run_visitor
  conn._run_visitor(visitorcallable, element, **kwargs)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
1122, in _run_visitor
  **kwargs).traverse_single(element)
File /usr/lib/python2.7/dist-packages/migrate/changeset/ansisql.py, line 
53, in traverse_single
  ret = super(AlterTableVisitor, self).traverse_single(elem)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py, line 
111, in traverse_single
  return meth(obj, **kw)
File 
/usr/lib/python2.7/dist-packages/migrate/changeset/databases/sqlite.py, line 
90, in visit_column
  super(SQLiteColumnDropper,self).visit_column(column)
File 
/usr/lib/python2.7/dist-packages/migrate/changeset/databases/sqlite.py, line 
53, in visit_column
  self.recreate_table(table,column,delta)
File 
/usr/lib/python2.7/dist-packages/migrate/changeset/databases/sqlite.py, line 
40, in recreate_table
  table.create(bind=self.connection)
File /usr/lib/python2.7/dist-packages/sqlalchemy/schema.py, line 614, in 
create
  checkfirst=checkfirst)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
1122, in _run_visitor
  **kwargs).traverse_single(element)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py, line 
111, in traverse_single
  return meth(obj, **kw)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py, line 93, 
in visit_table
  self.traverse_single(index)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py, line 
111, in traverse_single
  return meth(obj, **kw)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py, line 105, 
in visit_index
  self.connection.execute(schema.CreateIndex(index))
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
662, in execute
  params)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
720, in _execute_ddl
  compiled
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
874, in _execute_context
  context)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
1024, in _handle_dbapi_exception
  exc_info
File /usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 
195, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 
867, in _execute_context
  context)
File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
324, in do_execute
  cursor.execute(statement, parameters)
  OperationalError: (OperationalError) table instances has no column named 
cleaned u'CREATE INDEX instances_host_deleted_cleaned_idx ON instances (host, 
deleted, cleaned)' ()
  ==
  FAIL: process-returncode
  tags: worker-0
  --

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1209345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1091780] Re: nova-network - iptables-restore v1.4.12: host/network `None' not found

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1091780

Title:
  nova-network - iptables-restore v1.4.12: host/network `None' not
  found

Status in OpenStack Compute (Nova):
  Invalid
Status in “nova” package in Ubuntu:
  Expired

Bug description:
  1- In Precise nova-network crashes because it cannot apply iptables
  rules when trying to apply vpn rules. nova-network tries to set VPN
  iptables rules for openvpn access:

  2012-12-17 07:17:24 TRACE nova Stderr: iptables-restore v1.4.12:
  host/network `None' not found\nError occurred at line: 23\nTry
  `iptables-restore -h' or 'iptables-restore --help' for more
  information.\n

  2- How reproducible?

  Not clear. The configuration I used with juju seems to create an
  environment that causes this problem. When this problem is present the
  issue reproduces every time.

  3- How to reproduce:

  When the issue is present just starting up nova-network causes the
  problem to reproduce. Nova-network exits in the end and dies because
  of the error on iptables-restore

  4- I added debugging in nova.conf with --debug=true and added extra
  debugging in

  /usr/lib/python2.7/dist-packages/nova/utils.py

  which showed the full iptables rules that were to be restored by
  iptables-restore:

  2012-12-17 07:17:24 DEBUG nova.utils 
[req-391688fd-3b99-4b1c-8b46-fb4f64e64246 None None] process input: 
  # Generated by iptables-save v1.4.12 on Mon Dec 17 07:17:21 2012
  *filter
  :INPUT ACCEPT [0:0]
  :FORWARD ACCEPT [0:0]
  :OUTPUT ACCEPT [0:0]
  :nova-api-FORWARD - [0:0]
  :nova-api-INPUT - [0:0]
  :nova-api-OUTPUT - [0:0]
  :nova-api-local - [0:0]
  :nova-network-FORWARD - [0:0]
  :nova-network-INPUT - [0:0]
  :nova-network-local - [0:0]
  :nova-network-OUTPUT - [0:0]
  :nova-filter-top - [0:0]
  -A FORWARD -j nova-filter-top
  -A OUTPUT -j nova-filter-top
  -A nova-filter-top -j nova-network-local
  -A INPUT -j nova-network-INPUT
  -A OUTPUT -j nova-network-OUTPUT
  -A FORWARD -j nova-network-FORWARD
  -A nova-network-FORWARD --in-interface br100 -j ACCEPT
  -A nova-network-FORWARD --out-interface br100 -j ACCEPT
  -A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT
  -A INPUT -j nova-api-INPUT
  -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
  -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
  -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
  -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
  -A FORWARD -j nova-api-FORWARD
  -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED 
-j ACCEPT
  -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
  -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
  -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
  -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
  -A OUTPUT -j nova-api-OUTPUT
  -A nova-api-INPUT -d 192.168.124.150/32 -p tcp -m tcp --dport 8775 -j ACCEPT
  -A nova-filter-top -j nova-api-local
  COMMIT

  
  4.1- Among the rules above we have:

  -A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT

  which is responsible for the fault in iptables-restore.
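
  One way to fail fast instead of rendering the missing address as the
  literal string 'None' (a sketch; the function name is illustrative):

      def vpn_forward_rule(vpn_address, port=1194):
          # Refuse to build the rule when the VPN address is unset,
          # rather than emitting '-d None' and breaking the whole
          # iptables-restore run.
          if not vpn_address or vpn_address == 'None':
              raise ValueError('VPN address is unset; cannot build '
                               'nova-network-FORWARD rule')
          return ('-A nova-network-FORWARD -d %s/32 -p udp '
                  '--dport %d -j ACCEPT' % (vpn_address, port))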

  5- These are the error messages:

  2012-12-17 07:17:24 DEBUG nova.utils 
[req-391688fd-3b99-4b1c-8b46-fb4f64e64246 None None] Result was 2 from 
(pid=14699) execute /usr/lib/python2.7/dist-packages/nova/utils.py:237
  2012-12-17 07:17:24 CRITICAL nova [-] Unexpected error while running command.
  Command: sudo nova-rootwrap iptables-restore
  Exit code: 2
  Stdout: ''

  Stderr: iptables-restore v1.4.12: host/network `None' not found\nError 
occurred at line: 23\nTry `iptables-restore -h' or 'iptables-restore --help' 
for more information.\n
  2012-12-17 07:17:24 TRACE nova Traceback (most recent call last):
  2012-12-17 07:17:24 TRACE nova   File /usr/bin/nova-network, line 49, in 
<module>
  2012-12-17 07:17:24 TRACE nova service.wait()
  2012-12-17 07:17:24 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 413, in wait
  2012-12-17 07:17:24 TRACE nova _launcher.wait()
  2012-12-17 07:17:24 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 131, in wait
  2012-12-17 07:17:24 TRACE nova service.wait()
  2012-12-17 07:17:24 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 166, in wait
  2012-12-17 07:17:24 TRACE nova return self._exit_event.wait()
  2012-12-17 07:17:24 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2012-12-17 07:17:24 TRACE nova return hubs.get_hub().switch()
  2012-12-17 07:17:24 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 177, in switch
  2012-12-17 07:17:24 TRACE nova return self.greenlet.switch()
  2012-12-17 07:17:24 TRACE nova   File 

[Yahoo-eng-team] [Bug 1166927] Re: Migrating multiple instances causes 'u'\'NoneType\' object is unsubscriptable ' Error

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1166927

Title:
  Migrating multiple instances causes 'u'\'NoneType\' object is
  unsubscriptable ' Error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  * When I do a resize-confirm on instances that have been migrated I
  get an unsubscriptable error.

  (1).  Boot multiple instances

  nova boot --image 9792609f-7b42-424b-9826-ae31c426f0bd --flavor 1 MIGRATE1
  nova boot --image 9792609f-7b42-424b-9826-ae31c426f0bd --flavor 1 MIGRATE2
  nova boot --image 9792609f-7b42-424b-9826-ae31c426f0bd --flavor 1 MIGRATE3

  MIGRATE1:::vs33x-instance-0053:::olyblade02
  MIGRATE2:::vs33x-instance-0054:::olyblade02
  MIGRATE3:::vs33x-instance-0055:::olyblade02

  [root@vs339 ~]# nova list
  
+--+--++---+
  | ID   | Name | Status | Networks 
 |
  
+--+--++---+
  | 01761a76-1fe9-40ae-9f31-ebd019c7c8c6 | MIGRATE1 | ACTIVE | 
network1=10.0.1.4 |
  | 3dfe852f-e0e5-4511-a16d-3ff7b3f278a2 | MIGRATE2 | ACTIVE | 
network1=10.0.1.3 |
  | d800eab4-f91e-48f9-b208-e5821f4da6b1 | MIGRATE3 | ACTIVE | 
network1=10.0.1.6 |
  
+--+--++---+

  (2).  Migrate the instances

  * They all came back to verify_resize state:

  
+--+--+---+---+
  | ID   | Name | Status| Networks  
|
  
+--+--+---+---+
  | 01761a76-1fe9-40ae-9f31-ebd019c7c8c6 | MIGRATE1 | VERIFY_RESIZE | 
network1=10.0.1.4 |
  | 3dfe852f-e0e5-4511-a16d-3ff7b3f278a2 | MIGRATE2 | VERIFY_RESIZE | 
network1=10.0.1.3 |
  | d800eab4-f91e-48f9-b208-e5821f4da6b1 | MIGRATE3 | VERIFY_RESIZE | 
network1=10.0.1.6 |
  
+--+--+---+---+

  (3).  Resize-confirm

  * Attempt a resize-confirm on the instances

  - My instance went to ERROR
  [root@vs339 ~]# nova list
  
+--+--++---+
  | ID   | Name | Status | Networks 
 |
  
+--+--++---+
  | 01761a76-1fe9-40ae-9f31-ebd019c7c8c6 | MIGRATE1 | ERROR  | 
network1=10.0.1.4 |
  | 3dfe852f-e0e5-4511-a16d-3ff7b3f278a2 | MIGRATE2 | ACTIVE | 
network1=10.0.1.3 |
  | d800eab4-f91e-48f9-b208-e5821f4da6b1 | MIGRATE3 | ACTIVE | 
network1=10.0.1.6 |
  
+--+--++---+

  [root@vs339 ~]# nova show MIGRATE1
  
+-++
  | Property| Value 

 |
  
+-++
  | status  | ERROR 

 |
  | updated | 2013-04-09T16:20:47Z  

 |
  | OS-EXT-STS:task_state   | None  

 |
  | OS-EXT-SRV-ATTR:host| vs339.rch.kstart.ibm.com  

 |
  | key_name| None  

 |
  | image   | Rhel6MasterFile 
(9792609f-7b42-424b-9826-ae31c426f0bd)  
   |
  | network1 network| 10.0.1.4  

 |
  | hostId  | 
d9d08dd10c89d762c75d081740e855c61bf3bf5c1083e4b73738dfdb
   |
  | OS-EXT-STS:vm_state | error 

 |
  | 

[Yahoo-eng-team] [Bug 1378233] Re: Provide an option to ignore suspended VMs in the resource count

2014-11-19 Thread Joe Gordon
Although this may be fine for your cloud, I don't think this is
desirable in the general case.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378233

Title:
  Provide an option to ignore suspended VMs in the resource count

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  It would be very useful for our case scenario to be able to have an
  option that enables not counting suspended machines as consuming
  resources. The use case is having little memory available and still
  being able to launch new VMs when old VMs are in suspended mode. We
  understand that once the compute node's memory is full we won't be
  able to resume these machines, but that is OK with the way we're using
  our cloud.

  For example a compute node with 8G of RAM, launch 1 VM with 4G and
  another with 2G, then suspend them both, one could then launch a new
  VM with 4G of RAM (the actual memory on the compute node is free).

  On essex we had the following patch for this scenario to work :

  Index: nova/nova/scheduler/host_manager.py
  ===
  --- nova.orig/nova/scheduler/host_manager.py
  +++ nova/nova/scheduler/host_manager.py
  @@ -337,6 +337,8 @@ class HostManager(object):
   if not host:
   continue
   host_state = host_state_map.get(host, None)
  +if instance.get('power_state', 1) != 1: # power_state.RUNNING
  +continue
   if not host_state:
   continue
   host_state.consume_from_instance(instance)

  We're looking into patching icehouse for the same behaviour but would
  like to add it as an option this time.
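
  A sketch of what the option-gated version might look like (the
  option name is hypothetical):

      from oslo.config import cfg

      CONF = cfg.CONF
      CONF.register_opt(cfg.BoolOpt(
          'ignore_suspended_instances', default=False,
          help='Do not count non-running instances against host RAM'))

      RUNNING = 1  # power_state.RUNNING

      def consume(host_state, instance):
          # Skip non-running instances only when the operator opted in.
          if (CONF.ignore_suspended_instances and
                  instance.get('power_state', RUNNING) != RUNNING):
              return
          host_state.consume_from_instance(instance)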

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1153774] Re: exception handling logic questionably assumes the availability of database service

2014-11-19 Thread Joe Gordon
this is very old and a lot has changed since this was filed.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1153774

Title:
  exception handling logic questionably assumes the availability of
  database service

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  version: essex

  In function _run_instance of class ComputeManager in
  nova/compute/manager.py, it seems that the exception handling logic
  assumes the availability of the database service, which is problematic
  because an exception may be triggered by the unavailability of the
  database service.

  For example, if the database service (mysqld in our case) crashes (and
  then restarts e.g. in a few seconds) when the _spawn is executing,
  as shown in the snippet below, then the three-fold exception handling
  logic of _run_instance would all fail.

  ==
  (wrapper)
  try:
(inside _run_instance)
try:
try:
_spawn
except:
_deallocate_network
except:
_set_instance_error_state
  except:
add_instance_fault_from_exc
  ==

  Specifically, _deallocate_network would not be able to reset the ip
  to the state allocated=0 in db nova.fixed_ips (assuming the use of
  fixed ip). _set_instance_error_state would not be able to set
  vm_state to ERROR in db nova.instances. And
  add_instance_fault_from_exc would not be able to add the fault
  information into db nova.instance_faults.
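
  The hardening this implies could be sketched like so (illustrative
  only; the real cleanup steps take more arguments):

      def cleanup_after_spawn_failure(deallocate_fn, set_error_fn, log):
          # Each unwinding step may itself hit the dead database; log
          # and continue so one failed step does not mask the others.
          for step in (deallocate_fn, set_error_fn):
              try:
                  step()
              except Exception:
                  log.exception('Cleanup step %s failed; the database '
                                'may be unavailable' % step.__name__)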

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1153774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1153751] Re: invoking nova/utils.py's execute with check_exit_code=False causes problem in nova-network

2014-11-19 Thread Joe Gordon
This bug is very old and the code has changed significantly since this
was filed. If this is still valid in Icehouse or later please reopen.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1153751

Title:
  invoking nova/utils.py's execute with check_exit_code=False causes
  problem in nova-network

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  version: essex

  In nova/network/linux_net.py, there are six invocations of
  nova/utils.py's execute function with check_exit_code=False. Our
  experiences show that this pattern is problematic. Specifically, the
  process executing a given command may exit unexpectedly.

  All examples below are related to nova/network/linux_net.py.

  Example 1:
  In function ensure_bridge of class LinuxBridgeInterfaceDriver, a command 
brctl addif is executed without checking the exit code. The parent process 
calling _execute cannot know whether the execution of brctl addif succeeds. 
This can cause a bridge and a NIC to remain dissociated. 

  Example 2:
  In function _device_exists, if the execution of ip link show dev [device] 
fails unexpectedly, then neither stdout nor stderr contains any data. According 
to the logic in _device_exists, such a fault would lead the caller of 
_device_exists to think that the device in question exists, which may not be 
the case.

  Example 3:
  In function restart_dhcp, if the execution of cat /proc/[pid]/cmdline 
fails unexpectedly, then there may be no data in the stdout, which in turn will 
cause the caller to believe that a dnsmasq instance has crashed and a new 
instance should be started, while in fact the existing dnsmasq instance works 
properly. 

  The remaining occurrences of check_exit_code=False raise similar issues.
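
  A sketch of the safer calling pattern for Example 2, assuming the
  ProcessExecutionError that nova's execute raised in this era (the
  stderr matching is illustrative):

      from nova import exception
      from nova import utils

      def device_exists(device):
          # Let a non-zero exit raise, then distinguish "no such
          # device" from an unexpected failure, instead of treating
          # every empty output as "the device exists".
          try:
              utils.execute('ip', 'link', 'show', 'dev', device)
              return True
          except exception.ProcessExecutionError as exc:
              if 'does not exist' in str(exc):
                  return False
              raise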

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1153751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1054042] Re: Nova-network bridges traffic between tenant VLANs by default

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1054042

Title:
  Nova-network bridges traffic between tenant VLANs by default

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  We're running Openstack Essex (openstack-nova-2012.1-7.el6.noarch) on
  Centos 6.3. We use VLANs (nova.network.manager.VlanManager) to
  separete the tenants from each other.

  We noticed that the nova-network host that acts as a gateway for the
  virtual machines bridges all traffic between the VLANs. This means
  that any tenant has access to any other tenant's network, and other
  internal networks that happen to be available. It seems that the
  problem is these firewall rules:

  -A nova-network-FORWARD -i br100 -j ACCEPT 
  -A nova-network-FORWARD -o br100 -j ACCEPT 
  -A nova-network-FORWARD -d 192.168.100.2/32 -p udp -m udp --dport 1194 -j 
ACCEPT 
  -A nova-network-FORWARD -i br101 -j ACCEPT 
  -A nova-network-FORWARD -o br101 -j ACCEPT 
  -A nova-network-FORWARD -d 192.168.101.2/32 -p udp -m udp --dport 1194 -j 
ACCEPT 

  Nova-network should definitely not forward all traffic from the bridges since 
it ends up in the other tenants' networks too. It should be something like:
  -A nova-network-FORWARD -i br100 -o $external_interface -j ACCEPT 
  -A nova-network-FORWARD -i br100  -j DROP 

  Other services (metadata) should however be considered, so that
  traffic isn't dropped. The output rule is also way too liberal, since
  it is processed before the next bridge's input rule.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1054042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1017379] Re: Auto-assigned floating IPs not included in floating IPs listing

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1017379

Title:
  Auto-assigned floating IPs not included in floating IPs listing

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  I have enabled automatic assignment of floating IPs to newly created
  instances. These IPs get allocated and assigned to the instances -- I
  can connect to the instances using the IPs and they show up in the
  horizon dashboard when I view the instance information details.

  However, the Access & Security tab of the dashboard neither shows
  the IPs as allocated nor assigned. The nova floating-ip-list also
  fails to list them in any way.

  Attempting to remove such an automatically allocated IP using nova
  floating-ip-delete results in the following error message:

  ERROR: ('Floating ip %s not found.', 'THE_IP_IN_QUESTION')

  nova-manage floating list lists these IPs, however.

  When the instance is terminated, the automatically allocated IP winds
  up as still allocated, but to a missing instance (possibly connected
  to bug #997763).

  The effect of all this is that only the admin has command-line read-
  only access to automatically assigned floating IPs, and the only way
  I've figured out to clean up after the leak caused by terminating
  instances is to remove the block of floating IPs and then add them
  again using the nova-manage command.

  Operating System: Ubuntu 12.04 LTS
  OpenStack version: Essex (installed from Ubuntu packages)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1017379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1086464] Re: Deleted fixed network causes vm failure

2014-11-19 Thread Joe Gordon
Essex is not supported anymore.

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1086464

Title:
  Deleted fixed network causes vm failure

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Using Essex 2012.1.3

  1) Created fixed ip network 192.168.7.0/24
  This created an entry in both fixed_ips and networks.
   
  2) deleted using nova-manage network delete 192.168.7.0/27
   
  3)  again created the same network 
   
  4) When I launch a VM, I am seeing a "Network 1 not found" error.
  In the database networks table, there is no network entry with id=1, 
and all the entries in fixed_ips for network id=1 are set to deleted=t.

  
  I had to manually delete the network and all the entries in fixed_ips in the 
database. After that I had to recreate the network. Then it worked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1086464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1003934] Re: Additional fixed ip address assigned to vm after nova-network restart

2014-11-19 Thread Joe Gordon
Essex was end-of-lifed a while ago; is this still valid?

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1003934

Title:
  Additional fixed ip address assigned to vm after nova-network restart

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  So I'm running into a problem when I restart nova-network on a compute node in 
a multi_host setup.  Each time I 
  restart nova-network on one of the compute hosts, multiple guests on the 
other host are allocated additional 
  fixed ip addresses in addition to the one they already have and a stack trace 
is thrown in network.log. The 
  nova-br100.conf is updated to include both addresses on the same 
mac/hostname.  I'm still able to access the 
vm just fine on their original ip address.
   
  Recreating the problem is very consistent:
  1) From a fresh install, bring up 3 virtual machines and take down the first 
one that came up (typically 
 instance_id 1 in the networks table).
   
  2) Stop nova-network on either of the compute hosts.  This will produce a 
stack trace in the network.log on 
 the other compute node complaining about 'InstanceNotFound: Instance 1 
could not be found'

  3) Look at horizon, dnsmasq's config (nova-br100.conf) or the networks table 
in the database and you will 
 see an additional ip address allocated to the vm that was running on the 
host that did not have nova-network 
 taken down.  The ec2 compatibility api only reports the new ip address for 
the instance.


  My setup:
  I'm running 2 hosts for testing, one is the compute host running nova-api, 
nova-compute and nova-network.  The 
  other is the controller running nova-api, nova-compute, nova-network as well 
as the other needed services 
  (keystone, glance, scheduler).  I'm running a fixed network only 
(192.168.97.0/24), no floating, in a multi_host 
  setup with the routing handled by an external router (setup with dhcp flags 
for dnsmasq) with eth1 being the vm 
  network.  Basically nova-network and nova-api are there only for dhcp and 
meta-data.

  I'm using essex packages from EPEL on Centos 6.2.  On the compute nodes:
  python-novaclient-2012.1-1.el6.noarch
  python-nova-2012.1-4.el6.noarch
  openstack-nova-2012.1-4.el6.noarch

  dhcp-options.conf (flags for dnsmasq):
  dhcp-option-force=3,192.168.97.254
  #dhcp-option-force=6,192.168.95.10,192.168.95.11
  dhcp-option-force=15,openstack.internal

  nova-br100.conf after the problem:
  fa:16:3e:32:80:69,server-2.openstack.internal,192.168.97.4
  fa:16:3e:32:80:69,server-2.openstack.internal,192.168.97.6

  View of the networks table before the problem:
  http://pastebin.com/KSDgxAkk

  View of the networks table after the problem:
  http://pastebin.com/mxQWLb7q

  My nova.conf (it's the same on both hosts except for my_ip and the console 
addresses):
  http://pastebin.com/Xt7fyim9

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1003934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1185902] Re: nova-manage flavor create produces the wrong help

2014-11-19 Thread Joe Gordon
** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1185902

Title:
  nova-manage flavor create produces the wrong help

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  $ nova-manage flavor create
  Creates instance types / flavors.
  usage: nova-manage [-h] [--version] [--debug] [--nodebug] [--verbose]
 [--noverbose] [--use-syslog] [--nouse-syslog]
 [--config-file PATH] [--log-config PATH]
 [--log-format FORMAT] [--log-date-format DATE_FORMAT]
 [--log-file PATH] [--log-dir LOG_DIR]
 [--syslog-log-facility SYSLOG_LOG_FACILITY]
 [--config-dir DIR]
 
 
{version,bash-completion,project,account,shell,logs,service,db,vm,agent,cell,instance_type,host,flavor,fixed,vpn,floating,network}
 ...

  optional arguments:
-h, --helpshow this help message and exit
--version show program's version number and exit
--debug, -d   Print debugging output (set logging level to DEBUG
  instead of default WARNING level).
--nodebug The inverse of --debug
--verbose, -v Print more verbose output (set logging level to INFO
  instead of default WARNING level).
--noverbose   The inverse of --verbose
--use-syslog  Use syslog for logging.
--nouse-syslogThe inverse of --use-syslog
--config-file PATHPath to a config file to use. Multiple config files
  can be specified, with values in later files taking
  precedence. The default files used are:
  ['/etc/nova/nova.conf']
--log-config PATH If this option is specified, the logging configuration
  file specified is used and overrides any other logging
  options specified. Please see the Python logging
  module documentation for details on logging
  configuration files.
--log-format FORMAT   A logging.Formatter log message format string which
  may use any of the available logging.LogRecord
  attributes. This option is deprecated. Please use
  logging_context_format_string and
  logging_default_format_string instead.
--log-date-format DATE_FORMAT
  Format string for %(asctime)s in log records. Default:
  None
--log-file PATH, --logfile PATH
  (Optional) Name of log file to output to. If no
  default is set, logging will go to stdout.
--log-dir LOG_DIR, --logdir LOG_DIR
  (Optional) The base directory used for relative --log-
  file paths
--syslog-log-facility SYSLOG_LOG_FACILITY
  syslog facility to receive log lines
--config-dir DIR  Path to a config directory to pull *.conf files from.
  This file set is sorted, so as to provide a
  predictable parse order if individual options are
  over-ridden. The set is parsed after the file(s), if
  any, specified via --config-file, hence over-ridden
  options in the directory take precedence.

  Command categories:

{version,bash-completion,project,account,shell,logs,service,db,vm,agent,cell,instance_type,host,flavor,fixed,vpn,floating,network}
  Available categories
  4 arguments are missing
  $

  This does not help me much. I need the help for 'flavor create', like
  when I do --help:

  $ nova-manage flavor create --help
  usage: nova-manage flavor create [-h] [--name name] [--memory memory size]
   [--cpu num cores] [--root_gb root_gb]
   [--ephemeral_gb ephemeral_gb]
   [--flavor flavor  id] [--swap swap]
   [--rxtx_factor rxtx_factor]
   [--is_public is_public]
   [action_args [action_args ...]]

  positional arguments:
action_args

  optional arguments:
-h, --helpshow this help message and exit
--name name Name of instance type/flavor
--memory memory size
  Memory size
--cpu num cores Number cpus
--root_gb root_gb   Root disk size
--ephemeral_gb ephemeral_gb
  Ephemeral disk size
--flavor flavor  id
   

[Yahoo-eng-team] [Bug 1379373] Re: baremetal-node-show command shows missing table in db

2014-11-19 Thread melanie witt
Paul is right, novaclient can't remove commands because of backward
compatibility. Similarly, I'm not sure nova api can remove the baremetal
nodes api extensions.

So I'm going to file this under nova api; the extensions should work by
proxying requests to ironic. We could have better handling for scenarios
where the user hasn't set up ironic (a good error message), or maybe disable
the baremetal nodes extensions by default.
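A hypothetical sketch of the "good error message" option (stand-in names
throughout, not nova's actual extension code):

  import webob.exc

  class IronicUnavailable(Exception):
      """Raised by the (hypothetical) client factory when no Ironic
      endpoint is registered in the service catalog."""

  def get_ironic_client(context):
      # Stand-in for real endpoint discovery via the service catalog.
      raise IronicUnavailable()

  def show_baremetal_node(context, node_id):
      try:
          client = get_ironic_client(context)
          return client.node.get(node_id)
      except IronicUnavailable:
          # Fail with a clear message instead of a trace about the
          # long-gone bm_nodes table.
          raise webob.exc.HTTPNotImplemented(
              explanation="nova baremetal-* calls are proxied to Ironic, "
                          "but no Ironic endpoint is available.")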

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: python-novaclient

** Tags added: api baremetal

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1379373

Title:
  baremetal-node-show command shows missing table in db

Status in OpenStack Compute (Nova):
  New

Bug description:
  I ran nova baremetal-node-show and got a missing-table error:
  2014-10-09 17:27:13.897 22211 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/engine/default.py, line 324, in 
do_execute
  2014-10-09 17:27:13.897 22211 TRACE nova.api.openstack 
cursor.execute(statement, parameters)
  2014-10-09 17:27:13.897 22211 TRACE nova.api.openstack OperationalError: 
(OperationalError) no such table: bm_nodes u'SELECT bm_nodes.created_at AS 
bm_nodes_created_at, bm_nodes.updated_at AS bm_nodes_updated_at, 
bm_nodes.deleted_at AS bm_nodes_deleted_at, bm_nodes.id AS bm_nodes_id, 
bm_nodes.deleted AS bm_nodes_deleted, bm_nodes.uuid AS bm_nodes_uuid, 
bm_nodes.service_host AS bm_nodes_service_host, bm_nodes.instance_uuid AS 
bm_nodes_instance_uuid, bm_nodes.instance_name AS bm_nodes_instance_name, 
bm_nodes.cpus AS bm_nodes_cpus, bm_nodes.memory_mb AS bm_nodes_memory_mb, 
bm_nodes.local_gb AS bm_nodes_local_gb, bm_nodes.preserve_ephemeral AS 
bm_nodes_preserve_ephemeral, bm_nodes.pm_address AS bm_nodes_pm_address, 
bm_nodes.pm_user AS bm_nodes_pm_user, bm_nodes.pm_password AS 
bm_nodes_pm_password, bm_nodes.task_state AS bm_nodes_task_state, 
bm_nodes.terminal_port AS bm_nodes_terminal_port, bm_nodes.image_path AS 
bm_nodes_image_path, bm_nodes.pxe_config_path AS bm_nodes_pxe_config_path,
  bm_nodes.deploy_key AS bm_nodes_deploy_key, bm_nodes.root_mb AS 
bm_nodes_root_mb, bm_nodes.swap_mb AS bm_nodes_swap_mb, bm_nodes.ephemeral_mb 
AS bm_nodes_ephemeral_mb \nFROM bm_nodes \nWHERE bm_nodes.deleted = 0' ()
  2014-10-09 17:27:13.897 22211 TRACE nova.api.openstack 

  to reproduce:

  nova baremetal-node-show node

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1379373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394351] [NEW] deadlock when delete port

2014-11-19 Thread Zengfa Gao
Public bug reported:

netdemoid=$(neutron net-list | awk '{if($4=="demo-net"){print $2;}}')
subnetdemoid=$(neutron subnet-list | awk '{if($4=="demo-subnet"){print $2;}}')

exnetid=$(neutron net-list | awk '{if($4=="ext-net"){print $2;}}')
for i in `seq 1 10`; do
#boot vm, and create floating ip
nova boot --image cirros --flavor m1.tiny --nic net-id=$netdemoid cirrosdemo${i}
cirrosdemoid[i]=$(nova list | awk '{if($4=="cirrosdemo'${i}'"){print $2;}}')
output=$(neutron floatingip-create $exnetid)
echo $output
floatipid[i]=$(echo $output | awk '{if($2=="id"){print $4;}}')
floatip[i]=$(echo $output | awk '{if($2=="floating_ip_address"){print $4;}}')
done

# Setup router
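# NB: routerdemoid is used below but never set above; it is assumed to have
# been captured when demo-router was created, e.g. from the output of
# neutron router-create demo-router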
neutron router-gateway-set $routerdemoid $exnetid
neutron router-interface-add demo-router $subnetdemoid
#wait for VM to be running
sleep 30

for i in `seq 1 10`; do
cirrosfix=$(nova list | awk '{if($4=="cirrosdemo'${i}'"){print $12;}}')
cirrosfixip=${cirrosfix#*=}
output=$(neutron port-list | grep ${cirrosfixip})
echo $output
portid=$(echo $output | awk '{print $2;}')
neutron floatingip-associate --fixed-ip-address $cirrosfixip ${floatipid[i]} $portid
neutron floatingip-delete ${floatipid[i]}
nova delete ${cirrosdemoid[i]}
done


After several tries, I have one instance in ERROR state:
2014-11-19 19:41:02.670 8659 DEBUG neutron.context 
[req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Arguments dropped when creating 
context: {u'project_name': None, u'tenant': None} __init__ 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/context.py:83
2014-11-19 19:41:02.671 8659 DEBUG neutron.plugins.ml2.rpc 
[req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Device 
498e7a54-22dd-4e5b-a8db-d6bffb8edd25 details requested by agent 
ovs-agent-overcloud-controller0-d5wwhbhhtlmp with host 
overcloud-controller0-d5wwhbhhtlmp get_device_details 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:90
2014-11-19 19:41:02.707 8659 DEBUG neutron.openstack.common.lockutils 
[req-3ff9aed1-e5fb-4388-b26d-e35bb7fc25f7 None] Got semaphore db-access lock 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py:168
2014-11-19 19:41:04.061 8658 ERROR oslo.messaging.rpc.dispatcher 
[req-4303cd41-c87c-44aa-b78a-549fb914ac9c ] Exception during message handling: 
(OperationalError) (1213, 'Deadlock found when trying to get lock; try 
restarting transaction') None None
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py,
 line 220, in report_state
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.create_or_update_agent(context, agent_state)
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py,
 line 180, in create_or_update_agent
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher return 
self._create_or_update_agent(context, agent)
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/agents_db.py,
 line 174, in _create_or_update_agent
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
greenthread.sleep(0)
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py,
 line 470, in __exit__
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
self.rollback()
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py,
 line 60, in __exit__
2014-11-19 19:41:04.061 8658 TRACE oslo.messaging.rpc.dispatcher 
compat.reraise(exc_type, exc_value, 
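The trace shows report_state losing a deadlock race (MySQL error 1213).
Deadlock victims can safely replay their transaction, so the usual
server-side remedy is a retry wrapper around the DB operation; a generic
sketch (not the actual neutron fix):

  # Generic retry-on-deadlock sketch; func must encapsulate the whole
  # transaction so a replay starts from a clean slate.
  import time

  from sqlalchemy.exc import OperationalError

  MYSQL_DEADLOCK = 1213

  def run_with_deadlock_retry(func, retries=5, backoff=0.1):
      for attempt in range(retries):
          try:
              return func()
          except OperationalError as err:
              if err.orig.args[0] != MYSQL_DEADLOCK or attempt == retries - 1:
                  raise
              # Back off briefly, then replay the transaction.
              time.sleep(backoff * (attempt + 1))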

[Yahoo-eng-team] [Bug 1394358] [NEW] check-tempest-dsvm-neutron-aiopcpu causes neutron failure

2014-11-19 Thread Kyle Mestery
Public bug reported:

The nova error is as follows:

2014-11-19 05:31:24.097 ERROR nova.compute.manager 
[req-6c02bdd7-448a-4c68-9296-70c41f28b53e ServerActionsTestXML-263356571 
ServerActionsTestXML-1715780796] [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] Setting instance vm_state to ERROR
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] Traceback (most recent call last):
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 6109, in 
_error_out_instance_on_exception
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] yield
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3569, in 
finish_revert_resize
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] block_device_info, power_on)
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 6176, in 
finish_revert_migration
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] block_device_info, power_on)
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 4500, in 
_create_domain_and_network
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] raise 
exception.VirtualInterfaceCreateException()
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] VirtualInterfaceCreateException: Virtual 
Interface creation failed
2014-11-19 05:31:24.097 13050 TRACE nova.compute.manager [instance: 
e6cba227-74ea-4592-8978-2646ca26e35f] 
2014-11-19 05:31:24.400 ERROR oslo.messaging.rpc.dispatcher 
[req-6c02bdd7-448a-4c68-9296-70c41f28b53e ServerActionsTestXML-263356571 
ServerActionsTestXML-1715780796] Exception during message handling: Virtual 
Interface creation failed

The neutron agent error is as follows:

2014-11-19 05:13:50.339 5051 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-3667b9d8-1b2a-4a4a-b6f1-f2b1810801a1 None] Error while processing VIF ports
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1421, in rpc_loop
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1229, in process_network_ports
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent devices_added_updated, 
ovs_restarted)
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1125, in treat_devices_added_or_updated
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1024, in treat_vif_port
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent fixed_ips, 
device_owner, ovs_restarted)
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 637, in port_bound
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent lvm = 
self.local_vlan_map[net_uuid]
2014-11-19 05:13:50.339 5051 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 
u'b13d216f-4713-4b1d-8353-26423dbf5bdc'

It appears that network b13d216f-4713-4b1d-8353-26423dbf5bdc is deleted
on the server while the agent is processing ports for this network.

See the logs here:

http://logs.openstack.org/39/134639/16/experimental/check-tempest-dsvm-
neutron-aiopcpu/7aac738/
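A defensive agent-side mitigation is to treat a missing local_vlan_map entry
as "network already deleted" and skip the port, instead of letting the
KeyError abort the whole rpc_loop iteration; a standalone sketch of the
pattern (not the merged patch):

  import logging

  LOG = logging.getLogger(__name__)

  def lookup_local_vlan(local_vlan_map, net_uuid):
      # local_vlan_map is the agent's dict keyed by network uuid, as in
      # the KeyError above.
      lvm = local_vlan_map.get(net_uuid)
      if lvm is None:
          # The network was deleted server-side while the agent was still
          # processing its ports; skip rather than raise.
          LOG.info("Network %s no longer exists; skipping its ports.",
                   net_uuid)
      return lvm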

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394358


[Yahoo-eng-team] [Bug 1392270] Re: POST /v2/images succeeds without any POST data

2014-11-19 Thread nikhil komawar
This is by design: http://docs.openstack.org/api/openstack-image-
service/2.0/content/upload-image-file.html

** Changed in: glance
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1392270

Title:
  POST /v2/images succeeds without any POST data

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  The exact request is:

  curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-Type:
  application/json' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*'
  -H 'X-Auth-Token: {SHA1}2d5b305e1ea690482d697a6971130f1efd04fbc2' -d
  '{}' http://127.0.0.1:9292/v2/images

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1392270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392270] Re: POST /v2/images succeeds without any POST data

2014-11-19 Thread Ian Cordasco
Ajaya, could you confirm that performing the curl request returns a 204?
If so, I believe this is certainly a bug. Glance should reject an empty
request body like this. Thanks in advance!

** Changed in: glance
   Status: Won't Fix => Incomplete

** Changed in: glance
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1392270

Title:
  POST /v2/images succeeds without any POST data

Status in OpenStack Image Registry and Delivery Service (Glance):
  Incomplete

Bug description:
  The exact request is:

  curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-Type:
  application/json' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*'
  -H 'X-Auth-Token: {SHA1}2d5b305e1ea690482d697a6971130f1efd04fbc2' -d
  '{}' http://127.0.0.1:9292/v2/images

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1392270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394377] [NEW] Launch Instance button gives infinite spinner when non-English language is selected

2014-11-19 Thread Ahmed Rahal
Public bug reported:

This happens in Icehouse Horizon.
When logging in as a user and switching the language to a non-English one (I
tried French, German and Spanish), the Launch Instance button starts the
waiting spinner and never shows the instance creation dialog.
The content of the dialog actually reaches the browser (it is visible in the
browser's debug tools), but something prevents the browser from rendering it.
One can manually change the language from within Horizon once logged in.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394377

Title:
  Launch Instance button gives infinite spinner when non-English
  language is selected

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This happens in Icehouse Horizon.
  When logging in as a user and switching the language to a non-English one (I
tried French, German and Spanish), the Launch Instance button starts the
waiting spinner and never shows the instance creation dialog.
  The content of the dialog actually reaches the browser (it is visible in the
browser's debug tools), but something prevents the browser from rendering it.
  One can manually change the language from within Horizon once logged in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392270] Re: POST /v2/images succeeds without any POST data

2014-11-19 Thread Erno Kuvaja
Ian,

This returns an empty image record in 'queued' status, exactly as
designed. Nothing wrong with that.
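For context, the documented v2 flow is create-then-upload: the POST merely
registers an image record, which sits in 'queued' status until the data is
supplied by a second request. Roughly (token and image id illustrative):

  # Step 1: create the record; an empty JSON body is legal.
  curl -i -X POST -H "Content-Type: application/json" \
   -H "X-Auth-Token: $TOKEN" -d '{}' http://127.0.0.1:9292/v2/images
  # Step 2: upload the bits; only now does the image leave 'queued'.
  curl -i -X PUT -H "Content-Type: application/octet-stream" \
   -H "X-Auth-Token: $TOKEN" --data-binary @image.img \
   http://127.0.0.1:9292/v2/images/<image-id>/file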

** Changed in: glance
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1392270

Title:
  POST /v2/images succeeds without any POST data

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  The exact request is:

  curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-Type:
  application/json' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*'
  -H 'X-Auth-Token: {SHA1}2d5b305e1ea690482d697a6971130f1efd04fbc2' -d
  '{}' http://127.0.0.1:9292/v2/images

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1392270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1157261] Re: performance issue on high concurrency

2014-11-19 Thread Aleksandr Shaposhnikov
** Also affects: mos
   Importance: Undecided
   Status: New

** Changed in: mos
   Importance: Undecided => Critical

** Changed in: mos
Milestone: None => 6.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1157261

Title:
  performance issue on high concurrency

Status in OpenStack Identity (Keystone):
  Triaged
Status in Mirantis OpenStack:
  New

Bug description:
  I'm currently using Keystone as the auth server for Swift.
  We have done performance tests on our solution and found that the
performance of keystone+swift is quite low, only 200 ops/s, while with the
same setup the throughput can be improved to 7k ops/s by replacing keystone
with swauth.
  We found that the keystone process fully saturates one Sandy Bridge core,
which looks like a scalability issue.
  It would be good if keystone could scale up well on a multi-core server.
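  One common mitigation at the time was to run keystone behind Apache with
  several WSGI processes instead of the single-process eventlet server; a
  minimal mod_wsgi sketch (the alias, script path, user and process count
  below are illustrative assumptions, not a documented setup):

    WSGIDaemonProcess keystone processes=4 threads=1 user=keystone
    WSGIProcessGroup keystone
    WSGIScriptAlias /identity /var/www/cgi-bin/keystone/main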

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1157261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394236] Re: glance client should fail if using a non-boolean string when a boolean is required

2014-11-19 Thread Erno Kuvaja
Moved to python-glanceclient as this seems to be client issue.

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394236

Title:
  glance client should fail if using a non-boolean string when a boolean
  is required

Status in Python client library for Glance:
  New

Bug description:
  Description of problem:
  When the following command is executed, the command succeeds but silently
coerces --is-public IS_PUBLIC to false. This should fail with an
invalid-input error instead.

  glance --debug image-create --name  CentOS-7-x86_64-GenericCloud --is-
  public IS_PUBLIC --disk-format raw --container-format bare --file
  CentOS-7-x86_64-GenericCloud-20140707_01.qcow2.1 --progress

  
  Version-Release number of selected component (if applicable):
  all

  How reproducible:
  Always

  Steps to Reproduce:
  1.glance --debug image-create --name  CentOS-7-x86_64-GenericCloud 
--is-public IS_PUBLIC --disk-format raw --container-format bare --file 
CentOS-7-x86_64-GenericCloud-20140707_01.qcow2.1 --progress 
  2.
  3.

  Actual results:
  sets is-public to false

  Expected results:
  an error

  Additional info:
  This looks to be because of strutils.bool_from_string, which follows the
rule that anything not recognized as true is treated as false.
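  The permissive and strict behaviours are easy to compare side by side; a
  minimal sketch, assuming a modern oslo.utils (at the time the client
  carried the same helper vendored under openstack.common):

    # Assumes oslo.utils is installed; shows the silent coercion and the
    # strict mode that would produce the expected error.
    from oslo_utils import strutils

    strutils.bool_from_string('IS_PUBLIC')               # False, silently
    strutils.bool_from_string('IS_PUBLIC', strict=True)  # raises ValueError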

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1394236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1176978] Re: Can't re-use an ID from a previously deleted image

2014-11-19 Thread Erno Kuvaja
As there has been no progress and there are valid concerns regarding the
UUID concept, closing this as Won't Fix. It works as designed, and IMO if
there is a need for this behaviour it should come through a spec, not a bug.

** Changed in: glance
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1176978

Title:
  Can't re-use an ID from a previously deleted image

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  Hi all,

  Steps to reproduce it:

  
  $ glance image-create --name foo
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | None |
  | created_at   | 2013-05-06T16:08:34  |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | None |
  | id   | c3313eda-f6dd-4062-85c8-7ab1bb168cd1 |
  | is_public| False|
  | min_disk | 0|
  | min_ram  | 0|
  | name | foo  |
  | owner| 19292b3b597b4ecc9a41103cc312a42f |
  | protected| False|
  | size | 0|
  | status   | queued   |
  | updated_at   | 2013-05-06T16:08:34  |
  +--+--+

  
  $ glance image-delete c3313eda-f6dd-4062-85c8-7ab1bb168cd1

  
  $ glance image-create --name foo --id c3313eda-f6dd-4062-85c8-7ab1bb168cd1
  Request returned failure status.
  409 Conflict
  An image with identifier c3313eda-f6dd-4062-85c8-7ab1bb168cd1 already exists
  (HTTP 409)

  I use the grizzly stable branch from Ubuntu cloud archive, same
  behavior on the debian packages.

  Cheers

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1176978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213179] Re: swiftclient ClientException: Container HEAD failed

2014-11-19 Thread Erno Kuvaja
Moved to glance-store. Needs to be re-evaluated if the issue is still
valid.

** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1213179

Title:
  swiftclient ClientException: Container HEAD failed

Status in OpenStack Glance backend store-drivers library (glance_store):
  New

Bug description:
  When using Glance Havana with the latest version of swiftclient I get
  the following stack trace in my Glance API server log on startup:

  2013-08-16 15:46:25.279 28346 ERROR swiftclient [-] Container HEAD failed: 
http://nova1:8080/v1/AUTH_85e1d06769e4448a9a11d9924297a206/glance 404 Not Found
  2013-08-16 15:46:25.279 28346 TRACE swiftclient Traceback (most recent call 
last):
  2013-08-16 15:46:25.279 28346 TRACE swiftclient   File 
/usr/lib/python2.7/site-packages/swiftclient/client.py, line 1095, in _retry
  2013-08-16 15:46:25.279 28346 TRACE swiftclient rv = func(self.url, 
self.token, *args, **kwargs)
  2013-08-16 15:46:25.279 28346 TRACE swiftclient   File 
/usr/lib/python2.7/site-packages/swiftclient/client.py, line 566, in 
head_container
  2013-08-16 15:46:25.279 28346 TRACE swiftclient 
http_response_content=body)
  2013-08-16 15:46:25.279 28346 TRACE swiftclient ClientException: Container 
HEAD failed: http://nova1:8080/v1/AUTH_85e1d06769e4448a9a11d9924297a206/glance 
404 Not Found
  2013-08-16 15:46:25.279 28346 TRACE swiftclient 

  -

  I'm using swift_store_create_container_on_put = True, and the issue
  seems to get logged when the _create_container_if_missing function in the
  Glance swift backend checks for the swift container. The Glance code
  looks totally fine to me; one of our logging configs seems to cause the
  swiftclient library to log this exception regardless.

  We should not be logging an ERROR when creating the initial swift
  container...
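  The usual pattern is to treat the 404 from head_container as an expected
  first-run outcome rather than an error worth a stack trace; a minimal
  sketch of the idea (not the exact glance code):

    # Sketch only; glance's real helper is _create_container_if_missing.
    from swiftclient.client import ClientException

    def ensure_container(conn, container):
        try:
            conn.head_container(container)
        except ClientException as err:
            if err.http_status != 404:
                raise
            # A missing container is expected on first use; create it
            # quietly instead of logging an ERROR.
            conn.put_container(container)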

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1213179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297465] Re: ChunkReader has no len() Swiftclient + Glance

2014-11-19 Thread Erno Kuvaja
Moved to glance-store as the swift driver lives there now.

** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1297465

Title:
  ChunkReader has no len() Swiftclient + Glance

Status in OpenStack Glance backend store-drivers library (glance_store):
  New
Status in Python client library for Swift:
  New

Bug description:
  On CentOS 6.5

  Name: openstack-glance
  Arch: noarch
  Version : 2013.2.2
  Release : 2.el6
  Size: 52 k
  Repo: installed
  From repo   : openstack-havana

  When uploading an image that is larger than swift's chunk size I
  receive a ChunkReader error:

  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils
  TypeError: object of type 'ChunkReader' has no len()

  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift TypeError:
  object of type 'ChunkReader' has no len()

  Full Traceback:

  2014-03-25 18:29:16.916 21092 ERROR glance.store.swift 
[34633528-d86d-4055-bdc9-1bcdf872fc2b OpenStack Admin 
57813631e9e5420589216b33925ef6a3] Error during chunked upload to backend, 
deleting stale chunks
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift Traceback (most recent 
call last):
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/glance/store/swift.py, line 384, in add
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
content_length=content_length)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/swiftclient/client.py, line 1318, in 
put_object
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
response_dict=response_dict)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/swiftclient/client.py, line 1192, in _retry
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift rv = 
func(self.url, self.token, *args, **kwargs)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/swiftclient/client.py, line 943, in 
put_object
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
conn.putrequest(path, headers=headers, data=contents)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/swiftclient/client.py, line 197, in 
putrequest
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift return 
self.request('PUT', full_path, data, headers, files)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/swiftclient/client.py, line 187, in request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift files=files, 
**self.requests_args)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/swiftclient/client.py, line 176, in _request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift return 
requests.request(*arg, **kwarg)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/requests/api.py, line 44, in request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift return 
session.request(method=method, url=url, **kwargs)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/requests/sessions.py, line 276, in request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift prep = 
req.prepare()
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/requests/models.py, line 224, in prepare
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
p.prepare_body(self.data, self.files)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
/usr/lib/python2.6/site-packages/requests/models.py, line 384, in prepare_body
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
self.headers['Content-Length'] = str(len(body))
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift TypeError: object of 
type 'ChunkReader' has no len()
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
  2014-03-25 18:29:16.977 21092 ERROR glance.api.v1.upload_utils 
[34633528-d86d-4055-bdc9-1bcdf872fc2b OpenStack Admin 
57813631e9e5420589216b33925ef6a3] Failed to upload image 
375a5784-8911-433a-a2c2-56ea0c621eda
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils Traceback 
(most recent call last):
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.6/site-packages/glance/api/v1/upload_utils.py, line 101, in 
upload_data_to_store
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils store)
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.6/site-packages/glance/store/__init__.py, line 333, in 
store_add_to_backend
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils
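  The failing step is requests calling len(body) to compute Content-Length
  on a file-like reader that has no __len__. A generic workaround is to wrap
  a reader whose size is known so it can report its length (a sketch, not
  the eventual swiftclient fix):

    # Generic sketch: give a sized file-like object a __len__ so callers
    # that do len(body) for Content-Length keep working.
    class SizedReader(object):
        def __init__(self, reader, length):
            self._reader = reader
            self._length = length

        def read(self, size=-1):
            return self._reader.read(size)

        def __len__(self):
            return self._length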

[Yahoo-eng-team] [Bug 1235386] Re: Failed to add object to Swift. causes glance to return 503

2014-11-19 Thread Erno Kuvaja
** Project changed: glance => glance-store

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1235386

Title:
   Failed to add object to Swift. causes glance to return 503

Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress

Bug description:
  http://logs.openstack.org/23/49623/3/check/check-tempest-devstack-vm-
  neutron/8a1a42d/logs/screen-g-api.txt.gz?#_2013-10-04_16_31_15_067

  2013-10-04 16:31:15.067 32574 ERROR glance.store.swift 
[7ba13d89-a2f1-4ed6-96f9-013286e74edc cafcbf642bd4498b9ca6aa7f387af5ee 
03102de58497423db8082bd91188d60a] Failed to add object to Swift.
  Got error from Swift: put_object('glance', 
'6d938c1c-3295-4ac4-81af-630b6b906f46', ...) failure and no ability to reset 
contents for reupload.
  2013-10-04 16:31:15.067 32574 ERROR glance.api.v1.upload_utils 
[7ba13d89-a2f1-4ed6-96f9-013286e74edc cafcbf642bd4498b9ca6aa7f387af5ee 
03102de58497423db8082bd91188d60a] Failed to upload image 
6d938c1c-3295-4ac4-81af-630b6b906f46
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils Traceback 
(most recent call last):
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils   File 
/opt/stack/new/glance/glance/api/v1/upload_utils.py, line 101, in 
upload_data_to_store
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils store)
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils   File 
/opt/stack/new/glance/glance/store/__init__.py, line 333, in 
store_add_to_backend
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils (location, 
size, checksum, metadata) = store.add(image_id, data, size)
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils   File 
/opt/stack/new/glance/glance/store/swift.py, line 447, in add
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils raise 
glance.store.BackendException(msg)
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils 
BackendException: Failed to add object to Swift.
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils Got error from 
Swift: put_object('glance', '6d938c1c-3295-4ac4-81af-630b6b906f46', ...) 
failure and no ability to reset contents for reupload.
  2013-10-04 16:31:15.067 32574 TRACE glance.api.v1.upload_utils 

  
  logstash.openstack.org shows at least 4 cases of this in the past week

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIkdvdCBlcnJvciBmcm9tIFN3aWZ0OiBwdXRfb2JqZWN0XCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MDkwOTQ1MDAwMCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1235386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394423] [NEW] unused action_present and action_past in openstack_dashboard/dashboards/admin/hypervisors/compute/tables.py

2014-11-19 Thread Liyingjun
Public bug reported:

in
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/hypervisors/compute/tables.py#L23

'action_past' and 'action_present' are not needed for LinkAction.
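For illustration, a LinkAction only needs its link-related attributes;
roughly (the class name and url below are illustrative, not the exact patch):

  from django.utils.translation import ugettext_lazy as _

  from horizon import tables

  class EvacuateHost(tables.LinkAction):
      name = "evacuate"
      verbose_name = _("Evacuate Host")
      url = "horizon:admin:hypervisors:compute:evacuate_host"
      classes = ("ajax-modal",)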

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394423

Title:
  unused action_present and action_past in
  openstack_dashboard/dashboards/admin/hypervisors/compute/tables.py

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  in
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/hypervisors/compute/tables.py#L23

  'action_past' and 'action_present' are not needed for LinkAction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298803] Re: rpc worker makes neutron-server crash

2014-11-19 Thread Li Ma
The related fix has merged.

https://review.openstack.org/#/c/126914/5

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298803

Title:
  rpc worker makes neutron-server crash

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  After upgrading from icehouse b2 to b3, neutron-server using zmq
  crashed silently. No core dump, no exception, no errors, no warnings.
  After I switched to rabbitmq, everything went well. It really drove me
  crazy.

  Then I tried to debug; it was really hard, because there were no useful
  messages provided.

  Then I compared the source code of b2 and b3, and finally found that the
  main difference between the two versions is that b3 enables a new
  rpc_worker mechanism.

  I reverted all the related code back to the way b2 works.

  Then the problem is immediately solved.

  My guess is that rpc_worker uses a greenpool that conflicts with the
  thread pool from zeromq. I suggest documenting in neutron.conf that it
  is incompatible with zeromq, and when rpc_worker == 0, just disabling
  it.
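  Until such a guard exists, the workaround is simply to keep RPC handling
  in the main process when using zeromq (a minimal neutron.conf sketch; the
  option is spelled rpc_workers in the Icehouse tree):

    [DEFAULT]
    rpc_workers = 0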

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361108] Re: novnc failed to start due to unexpected keyword argument

2014-11-19 Thread Launchpad Bug Tracker
[Expired for devstack because there has been no activity for 60 days.]

** Changed in: devstack
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361108

Title:
  novnc failed to start due to unexpected keyword argument

Status in devstack - openstack dev environments:
  Expired
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  Hi everybody,

  I'm new to DevStack, and during my installation of icehouse devstack an
error arose, like this:
  
  ...
  ls /opt/stack/status/stack/n-novnc.failure
  2014-08-25 07:49:55.739 | + failures=/opt/stack/status/stack/n-novnc.failure
  2014-08-25 07:49:55.739 | + for service in '$failures'
  2014-08-25 07:49:55.740 | ++ basename /opt/stack/status/stack/n-novnc.failure
  2014-08-25 07:49:55.740 | + service=n-novnc.failure
  2014-08-25 07:49:55.741 | + service=n-novnc
  2014-08-25 07:49:55.741 | + echo 'Error: Service n-novnc is not running'
  2014-08-25 07:49:55.741 | Error: Service n-novnc is not running
  2014-08-25 07:49:55.741 | + '[' -n /opt/stack/status/stack/n-novnc.failure ']'
  2014-08-25 07:49:55.741 | + die 1316 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
  2014-08-25 07:49:55.741 | + local exitcode=0
  2014-08-25 07:49:55.741 | [Call Trace]
  2014-08-25 07:49:55.741 | ./stack.sh:1371:service_check
  2014-08-25 07:49:55.741 | /home/darren/devstack/functions-common:1316:die
  2014-08-25 07:49:55.743 | [ERROR] /home/darren/devstack/functions-common:1316 
More details about the above errors can be found with screen, with 
./rejoin-stack.sh
  2014-08-25 07:49:56.747 | Error on exit
  

  Then I went to the corresponding screen to check what was wrong:

  
  cd /opt/stack/nova && /usr/local/bin/nova-novncproxy --config-file 
/etc/nova/nova.conf --web /opt/stack/noVNC & echo $! 
>/opt/stack/status/stack/n-novnc.pid; fg || echo "n-novnc failed to start" | 
tee "/opt/stack/status/stack/n-novnc.failure"
  [1] 2881
  cd /opt/stack/nova && /usr/local/bin/nova-novncproxy --config-file 
/etc/nova/nova.conf --web /opt/stack/noVNC

  Traceback (most recent call last):
File /usr/local/bin/nova-novncproxy, line 10, in module
  sys.exit(main())
File /opt/stack/nova/nova/cmd/novncproxy.py, line 87, in main
  wrap_cmd=None)
File /opt/stack/nova/nova/console/websocketproxy.py, line 38, in __init__
  ssl_target=None, *args, **kwargs)
File /usr/local/lib/python2.7/dist-packages/websockify/websocketproxy.py, 
line 231, in __init__
  websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, 
**kwargs)
  TypeError: __init__() got an unexpected keyword argument 'no_parent'
  n-novnc failed to start
  
  I don't know what's going on or how to fix it; anyone have any idea?
  THX!
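  This TypeError is the classic symptom of a nova/websockify version
  mismatch: nova's websocketproxy passes keyword arguments that the
  installed websockify release does not accept. A first step is to compare
  the installed version against what nova expects (commands illustrative):

    pip show websockify
    grep -i websockify /opt/stack/nova/requirements.txt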

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1361108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394456] [NEW] Deleting floating ip also disassociates the floating ip on active instances without a warning

2014-11-19 Thread Prasoon Telang
Public bug reported:

I had associated a floating IP with an instance and then I deleted the
floating IPs under the Access & Security panel in the dashboard.

I don't know if it is by design, but I was expecting some sort of
warning saying that the floating IP is in use, with the deletion not
allowed, similar to how a subnet cannot be deleted if there are active
instances.

If this is not a valid bug, I would like to know why it isn't.

- Prasoon

** Affects: neutron
 Importance: Undecided
 Assignee: Prasoon Telang (prasoontelang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Prasoon Telang (prasoontelang)

** Summary changed:

- Deleting floating ip also disassociates the floating ip on active instances
+ Deleting floating ip also disassociates the floating ip on active instances 
without a warning

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394456

Title:
  Deleting floating ip also disassociates the floating ip on active
  instances without a warning

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I had associated a floating IP with an instance and then I deleted the
  floating IPs under the Access & Security panel in the dashboard.

  I don't know if it is by design, but I was expecting some sort of
  warning saying that the floating IP is in use, with the deletion not
  allowed, similar to how a subnet cannot be deleted if there are active
  instances.

  If this is not a valid bug, I would like to know why it isn't.

  - Prasoon

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394462] [NEW] nova show output should display sec-group id instead of name

2014-11-19 Thread rampradeep
Public bug reported:


As nova security-group-create allows more than one group with the same
name, showing the security-group name in nova show is not correct.

I have two security groups named default.

ram@ubuntu:~$ neutron security-group-list
+--+-+-+
| id   | name| description |
+--+-+-+
| 2d3d1914-32d1-451f-b4bc-ed5eeda398ee | default | default |
| c2d19dea-0863-40d5-872c-543f97b00bd4 | default | default |
+--+-+-+

In this case, I did not specify any security group while spawning the
instance. I do not know how nova boot picks the default security group,
but nova show displays default in the security_groups field.

ram@ubuntu:~$ nova show first
+--++
| Property | Value  
|
+--++
| OS-DCF:diskConfig| MANUAL 
|
| OS-EXT-AZ:availability_zone  | nova   
|
| OS-EXT-SRV-ATTR:host | ubuntu 
|
| OS-EXT-SRV-ATTR:hypervisor_hostname  | ubuntu 
|
| OS-EXT-SRV-ATTR:instance_name| instance-0001  
|
| OS-EXT-STS:power_state   | 1  
|
| OS-EXT-STS:task_state| -  
|
| OS-EXT-STS:vm_state  | active 
|
| OS-SRV-USG:launched_at   | 2014-11-19T18:42:00.00 
|
| OS-SRV-USG:terminated_at | -  
|
| accessIPv4   |
|
| accessIPv6   |
|
| config_drive |
|
| created  | 2014-11-19T18:41:15Z   
|
| flavor   | m1.tiny (1)
|
| hostId   | 
4ac39bb970bb90f0aca2efaca1f43cc2997f6550a1449f08ade677af   |
| id   | 8f7eb319-8f5d-46c2-bb1f-6a16838ef7b1   
|
| image| cirros-0.3.2-x86_64-uec 
(90aa74a4-138a-4a1a-a530-aa1cd4ee5e05) |
| key_name | -  
|
| metadata | {} 
|
| name | first  
|
| os-extended-volumes:volumes_attached | [] 
|
| private network  | 10.0.0.2   
|
| progress | 0  
|
| security_groups  | default
| <--- security group 
| status   | ACTIVE 
|
| tenant_id| e747bc1a96ea4d88a0ddf7b2df8e0ad8   
|
| updated  | 2014-11-19T18:42:01Z   
|
| user_id  | 31a22dcf6b0a437294cb6c10f2996e08   
|
+--++


This makes it confusing to determine which security group id the instance is actually using.
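In the meantime, the unambiguous way to find the group actually attached is
to go through the instance's port rather than the name (ids taken from the
listing above; filter syntax for the 2014-era neutron CLI):

  neutron port-list -- --device_id=8f7eb319-8f5d-46c2-bb1f-6a16838ef7b1
  neutron port-show <port-id> | grep security_groups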

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394462

Title:
  nova show output should display sec-group id instead of name

Status in OpenStack Compute (Nova):
  New

Bug description:

  As nova security-group-create allows more than one group with the
  same name, showing 

[Yahoo-eng-team] [Bug 1238439] Re: admin can not delete External Network because floatingip

2014-11-19 Thread Yair Fried
The bug was fixed in the Juno release. I don't think it needs backporting
to Juno.

** Changed in: neutron/juno
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1238439

Title:
  admin can not delete External Network because floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  In Progress
Status in neutron juno series:
  Invalid

Bug description:
  Hi,

  In the admin role, I create an external network and a router, and create
  tenant A and userA.

  Now userA logs in, creates a network and a router, creates VM1 and assigns
  a floating IP; access works perfectly.

  Now, as admin, I try to delete it all:

  1: delete userA - no problem
  2: delete tenantA - no problem
  3: delete vm1 - no problem
  4: delete router - no problem
  5: delete the external network - this reports an error. I also try to
  delete the port in the sub panel, which fails as well.
  Checking the Neutron server log:

  TRACE neutron.api.v2.resource L3PortInUse: Port 2e5fa663-22e0-4c9e-
  87cc-e89c12eff955 has owner network:floatingip and therefore cannot be
  deleted directly via the port API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1238439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394472] [NEW] Metadata Widget doesn't support context specific title and help text

2014-11-19 Thread Travis Tripp
Public bug reported:

Basically these are items we want to be able to customize:

1.   The top description text

2.   The bottom description text

3.   Filter box name for the left table

4.   Filter box name for the right table

5.   Message when there’s no item in the left table (currently
showing “No available metadata”)

6.   Message when there’s no item in the right table


From Julie Gravel (HP)
As I mentioned in the last email, when I changed “Available Metadata” to 
“Available Extra Specs” the filter box was misaligned. Looks like we also need 
the width of the tables to be a little wider (probably a good reason to do that 
for translations as well).
As for the usage of Metadata vs. Extra Specs, I believe Cinder uses both of 
them and they are different. If I try to use Metadata instead of Extra Specs in 
this case I’m sure there’ll be an uproar! For other modules, I can see that 
they use Metadata and Extra Specs interchangeably.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394472

Title:
  Metadata Widget doesn't support context specific title and help text

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Basically these are items we want to be able to customize:

  1.   The top description text

  2.   The bottom description text

  3.   Filter box name for the left table

  4.   Filter box name for the right table

  5.   Message when there’s no item in the left table (currently
  showing “No available metadata”)

  6.   Message when there’s no item in the right table

  
  From Julie Gravel (HP)
  As I mentioned in the last email, when I changed “Available Metadata” to 
“Available Extra Specs” the filter box was misaligned. Looks like we also need 
the width of the tables to be a little wider (probably a good reason to do that 
for translations as well).
  As for the usage of Metadata vs. Extra Specs, I believe Cinder uses both of 
them and they are different. If I try to use Metadata instead of Extra Specs in 
this case I’m sure there’ll be an uproar! For other modules, I can see that 
they use Metadata and Extra Specs interchangeably.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391714] Re: ryu plugin removal

2014-11-19 Thread YAMAMOTO Takashi
relevant reviews
https://review.openstack.org/#/q/status:open+branch:master+topic:remove-ryu-plugin,n,z

** Also affects: devstack
   Importance: Undecided
   Status: New

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Also affects: driverlog
   Importance: Undecided
   Status: New

** Changed in: driverlog
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Changed in: devstack
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Changed in: openstack-manuals
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391714

Title:
  ryu plugin removal

Status in devstack - openstack dev environments:
  New
Status in Vendor drivers for OpenStack:
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Manuals:
  New

Bug description:
  ryu plugin was marked deprecated in Juno.  this bug is to track the
  removal for Kilo.

  relevant neutron meeting log:
  
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html

  we (the ryu team) recommend users migrate to ofagent, on which we aim to
concentrate our development resources after this removal.  however, it isn't
functionally equivalent, and no mechanical upgrade path will be provided.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1391714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp