[Yahoo-eng-team] [Bug 1453666] [NEW] libvirt: guestfs api makes nova-compute hang

2015-05-11 Thread Qin Zhao
Public bug reported:

Latest Kilo code.

In inspect_capabilities() of nova/virt/disk/vfs/guestfs.py, the guestfs
API, which is a C extension, can hang the nova-compute process when it
is invoked. This problem results in message-queue timeout errors and
instance boot failures.
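One common mitigation for this class of hang is to run the blocking C-extension call on a native worker thread so the service's event loop stays responsive. A minimal, hypothetical sketch using the standard library (Nova itself would go through eventlet's tpool, not concurrent.futures):

```python
import concurrent.futures
import time

# Hypothetical helper: run a blocking C-extension call (such as the
# guestfs appliance launch) on a worker thread so the caller's event
# loop is not blocked while it waits.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def run_blocking(func, *args, **kwargs):
    """Submit a blocking call to a native thread and wait for its result."""
    return _pool.submit(func, *args, **kwargs).result()

def fake_inspect_capabilities():
    # Stand-in for the guestfs call that hangs nova-compute.
    time.sleep(0.01)
    return "ok"

print(run_blocking(fake_inspect_capabilities))  # ok
```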

An example of this problem is:

2015-05-09 17:07:08.393 4449 DEBUG nova.virt.disk.vfs.api 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Using primary VFSGuestFS 
instance_for_image /usr/lib/python2.7/site-packages/nova/virt/disk/vfs/api.py:50
2015-05-09 17:08:35.443 4449 DEBUG nova.virt.disk.vfs.guestfs 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Setting up appliance for 
/var/lib/nova/instances/0517e2a9-469c-43f4-a129-f489fc1c8356/disk qcow2 setup 
/usr/lib/python2.7/site-packages/nova/virt/disk/vfs/guestfs.py:169
2015-05-09 17:08:35.457 4449 DEBUG nova.openstack.common.periodic_task 
[req-bb78b74b-bed7-450f-bd40-19686aab2c3e - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:219
2015-05-09 17:08:35.461 4449 INFO oslo_messaging._drivers.impl_rabbit 
[req-bb78b74b-bed7-450f-bd40-19686aab2c3e - - - - -] Connecting to AMQP server 
on 127.0.0.1:5671
2015-05-09 17:08:35.472 4449 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager Traceback (most recent 
call last):
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1783, in 
_allocate_network_async
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager 
system_metadata=sys_meta)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 739, in 
_instance_update
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager **kwargs)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/conductor/api.py, line 308, in 
instance_update
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager updates, 
'conductor')
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py, line 194, in 
instance_update
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager service=service)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py, line 156, in 
call
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager retry=self.retry)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/transport.py, line 90, in 
_send
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
350, in send
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager retry=retry)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
339, in _send
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
243, in wait
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout=timeout)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
149, in get
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager 'to message ID %s' 
% msg_id)
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager MessagingTimeout: Timed 
out waiting for a reply to message ID 8ff07520ea8743c997b5017f6638a0df
2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453666

Title:
  libvirt: guestfs api makes nova-compute hang

Status in OpenStack Compute (Nova):
  New

Bug description:
  Latest Kilo code.

  In inspect_capabilities() of nova/virt/disk/vfs/guestfs.py, the
  guestfs API, which is a C extension, can hang the nova-compute
  process when it is invoked. This problem results in message-queue
  timeout errors and instance boot failures.

  An example of this problem is:

  2015-05-09 17:07:08.393 4449 DEBUG nova.virt.disk.vfs.api 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Using primary VFSGuestFS 
instance_for_image /usr/lib/python2.7/site-packages/nova/virt/disk/vfs/api.py:50
  2015-05-09 17:08:35.443 4449 DEBUG 

[Yahoo-eng-team] [Bug 1453676] [NEW] --port-security-enabled=False/True option does not exist in network creation / update

2015-05-11 Thread Eran Kuris
Public bug reported:

According to RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1167496
we need to add support for --port-security-enabled=False/True when
creating and updating a network.

Version:
# rpm -qa |grep horizon
python-django-horizon-2015.1.0-2.el7.noarch

This flag is also relevant to port creation, but that is covered by
this bug: https://bugs.launchpad.net/horizon/+bug/1432373
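For reference, the request body the dashboard would need to send to Neutron to honour such an option looks like the sketch below; the field name comes from the port-security extension, while the helper itself is hypothetical:

```python
# Hypothetical helper: build the Neutron network-create request body,
# including the port-security extension's field.
def build_network_body(name, port_security_enabled=True):
    return {
        "network": {
            "name": name,
            "port_security_enabled": bool(port_security_enabled),
        }
    }

print(build_network_body("int_net", port_security_enabled=False))
```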

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453676

Title:
  --port-security-enabled=False/True option does not exist in network
  creation / update

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  According to RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1167496
  We need to add support of port-security-enabled=False/True when creating and 
updating network.
  version :
  ]# rpm -qa |grep horizon
  python-django-horizon-2015.1.0-2.el7.noarch


  this flag relevant to port creation  but it related to this bug :
  https://bugs.launchpad.net/horizon/+bug/1432373

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453667] [NEW] Changing --port_security_enabled=False in network does not propagated to already existing ports

2015-05-11 Thread Eran Kuris
Public bug reported:

According to RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1167496
A port that was already created on a network with
--port_security_enabled=True will not be updated to False when we
update the network to --port_security_enabled=False.
Version:
# rpm -qa |grep neutron
python-neutronclient-2.3.11-1.el7.noarch
openstack-neutron-2015.1.0-1.el7.noarch
openstack-neutron-ml2-2015.1.0-1.el7.noarch
openstack-neutron-lbaas-2015.1.0-1.el7.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7.noarch
python-neutron-2015.1.0-1.el7.noarch
openstack-neutron-common-2015.1.0-1.el7.noarch
python-neutron-lbaas-2015.1.0-1.el7.noarch

Edit plugin.ini and enable the port_security extension:
[root@puma15]# vi /etc/neutron/plugin.ini
extension_drivers = port_security
* You have to restart the neutron server service:
# openstack-service restart neutron-server
1. Create an internal network and subnet:
# neutron net-create int_net
# neutron net-show int_net | grep port_security_enabled
# neutron subnet-create <net-id> 192.168.1.0/24 --name ipv4_subnet
--ip-version 4 --dns_nameservers list=true 10.35.28.28
2. Create a neutron router:
# neutron router-create Router_eNet
3. Create an interface for the internal network on the router:
# neutron router-interface-add Router_eNet ipv4_subnet
4. Create a gateway for the router:
# neutron router-gateway-set Router_eNet <ext-net-id>
5. Launch 2 instances.
6. # neutron net-update int_net --port-security-enabled=False
7. Check the port of an existing VM; it is still True.
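The behaviour the reporter expected in steps 6 and 7 can be sketched as follows, with plain dicts standing in for Neutron's database objects (this is an illustration of the expected propagation, not Neutron's actual ML2 code):

```python
# Toy model of the expected propagation: when the network-level flag
# changes, existing ports on that network are updated too.
def update_network_port_security(network, ports, enabled):
    network["port_security_enabled"] = enabled
    for port in ports:
        if port["network_id"] == network["id"]:
            port["port_security_enabled"] = enabled

net = {"id": "int_net", "port_security_enabled": True}
ports = [{"id": "p1", "network_id": "int_net", "port_security_enabled": True}]
update_network_port_security(net, ports, False)
print(ports[0]["port_security_enabled"])  # False
```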

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453667

Title:
  Changing --port_security_enabled=False in network does not propagated
  to already existing ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  According to RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1167496
  A port that was already created on a network with
  --port_security_enabled=True will not be updated to False when we
  update the network to --port_security_enabled=False.
  Version:
  # rpm -qa |grep neutron
  python-neutronclient-2.3.11-1.el7.noarch
  openstack-neutron-2015.1.0-1.el7.noarch
  openstack-neutron-ml2-2015.1.0-1.el7.noarch
  openstack-neutron-lbaas-2015.1.0-1.el7.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7.noarch
  python-neutron-2015.1.0-1.el7.noarch
  openstack-neutron-common-2015.1.0-1.el7.noarch
  python-neutron-lbaas-2015.1.0-1.el7.noarch

  Edit plugin.ini and enable the port_security extension:
  [root@puma15]# vi /etc/neutron/plugin.ini
  extension_drivers = port_security
  * You have to restart the neutron server service:
  # openstack-service restart neutron-server
  1. Create an internal network and subnet:
  # neutron net-create int_net
  # neutron net-show int_net | grep port_security_enabled
  # neutron subnet-create <net-id> 192.168.1.0/24 --name ipv4_subnet
  --ip-version 4 --dns_nameservers list=true 10.35.28.28
  2. Create a neutron router:
  # neutron router-create Router_eNet
  3. Create an interface for the internal network on the router:
  # neutron router-interface-add Router_eNet ipv4_subnet
  4. Create a gateway for the router:
  # neutron router-gateway-set Router_eNet <ext-net-id>
  5. Launch 2 instances.
  6. # neutron net-update int_net --port-security-enabled=False
  7. Check the port of an existing VM; it is still True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453667/+subscriptions



[Yahoo-eng-team] [Bug 1364133] Re: [Heat] Neutron LBaaS vip invisible in dashboard

2015-05-11 Thread Darshan
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1364133

Title:
  [Heat] Neutron LBaaS vip invisible in dashboard

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I have a Heat template with an output like this:

pool_ip_address:
  value: {get_attr: [pool, vip, address]}
  description: The IP address of the load balancing pool

  For that output, the value shows up in command line output (`heat
  stack-show`) but in the dashboard the value is invisible; the output
  name and description appear in both.

  Here is the template source for the LB and pool:

pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    monitors: [{get_resource: monitor}]
    subnet_id: {get_param: subnet_id}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: 80
lb:
  type: OS::Neutron::LoadBalancer
  properties:
    protocol_port: 80
    pool_id: {get_resource: pool}
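For reference, the {get_attr: [pool, vip, address]} intrinsic in the output above walks the resource's attribute path; a toy resolver over plain dicts (not Heat's real engine) behaves like:

```python
# Toy resolver for the {get_attr: [resource, attr, key]} intrinsic used
# in the template above; plain dicts stand in for Heat's resource model.
def get_attr(resources, path):
    value = resources[path[0]]
    for key in path[1:]:
        value = value[key]
    return value

resources = {"pool": {"vip": {"address": "10.0.0.21"}}}
print(get_attr(resources, ["pool", "vip", "address"]))  # 10.0.0.21
```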

  
  Here is the relevant part of the `heat stack-show` output:

  |  | output_value: 10.0.0.21 <spaces snipped/> |
  |  | description: The IP address of the load balancing pool <spaces snipped/> |
  |  | output_key: pool_ip_address <spaces snipped/> |


  This is from an install by DevStack today.  Here are the versions I am
  running:

  ubuntu@mjs-dstk-901a:/opt/stack/horizon$ git branch -v
  * master e0abdfa Merge "Port details template missing some translation"

  ubuntu@mjs-dstk-901a:/opt/stack/horizon$ cd ../neutron/

  ubuntu@mjs-dstk-901a:/opt/stack/neutron$ git branch -v
  * master 4a91073 Merge "Remove old policies from policy.json"

  ubuntu@mjs-dstk-901a:/opt/stack/neutron$ cd ../python-heatclient/

  ubuntu@mjs-dstk-901a:/opt/stack/python-heatclient$ git branch -v
  * master 4bc53ac Merge "Handle upper cased endpoints"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1364133/+subscriptions



[Yahoo-eng-team] [Bug 1453675] [NEW] Live migration fails

2015-05-11 Thread Mika Saari
Public bug reported:

1: Exact version (latest apt-get dist-upgrade with Kilo repositories for Ubuntu
14.04.2)
ii  nova-api1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - API frontend
ii  nova-cert   1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - certificate management
ii  nova-common 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - common files
ii  nova-conductor  1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - conductor service
ii  nova-consoleauth1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler  1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - virtual machine scheduler
ii  python-nova 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute Python libraries
ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

2: Log files
2015-05-11 09:26:05.515 25372 DEBUG nova.compute.api 
[req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 
3d5adc9afa334a2097fc4374fe3c96e1 - - -] [instance: 
9cf946cf-8e0a-4e4b-8651-514251f7c2de] Going to try to live migrate instance to 
compute2 live_migrate /usr/lib/python2.7/dist-packages/nova/compute/api.py:3224
2015-05-11 09:26:05.607 25372 INFO oslo_messaging._drivers.impl_rabbit 
[req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 
3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connecting to AMQP server on 
controller:5672
2015-05-11 09:26:05.619 25372 INFO oslo_messaging._drivers.impl_rabbit 
[req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 
3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connected to AMQP server on 
controller:5672
2015-05-11 09:26:05.623 25372 INFO oslo_messaging._drivers.impl_rabbit 
[req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 
3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connecting to AMQP server on 
controller:5672
2015-05-11 09:26:05.636 25372 INFO oslo_messaging._drivers.impl_rabbit 
[req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 
3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connected to AMQP server on 
controller:5672
2015-05-11 09:26:05.776 25372 ERROR 
nova.api.openstack.compute.contrib.admin_actions 
[req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 
3d5adc9afa334a2097fc4374fe3c96e1 - - -] Live migration of instance 
9cf946cf-8e0a-4e4b-8651-514251f7c2de to host compute2 failed
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions Traceback (most recent call 
last):
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/admin_actions.py,
 line 331, in _migrate_live
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions disk_over_commit, host)
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
/usr/lib/python2.7/dist-packages/nova/compute/api.py, line 219, in inner
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions return function(self, 
context, instance, *args, **kwargs)
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
/usr/lib/python2.7/dist-packages/nova/compute/api.py, line 247, in _wrapped
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions return fn(self, context, 
instance, *args, **kwargs)
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
/usr/lib/python2.7/dist-packages/nova/compute/api.py, line 200, in inner
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions return f(self, context, 
instance, *args, **kw)
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
/usr/lib/python2.7/dist-packages/nova/compute/api.py, line 3234, in 
live_migrate
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions 
disk_over_commit=disk_over_commit)
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
/usr/lib/python2.7/dist-packages/nova/conductor/api.py, line 333, in 
live_migrate_instance
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions block_migration, 
disk_over_commit, None)
2015-05-11 09:26:05.776 25372 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 

[Yahoo-eng-team] [Bug 1453671] [NEW] --port_security_enabled flag does not exist in neutron net/port-create/update help

2015-05-11 Thread Eran Kuris
Public bug reported:

When printing the help of neutron net-create / neutron net-update /
neutron port-create / neutron port-update, there is no explanation of
the new flag --port_security_enabled.

version:

# rpm -qa |grep neutron
python-neutronclient-2.3.11-1.el7.noarch
openstack-neutron-2015.1.0-1.el7.noarch
openstack-neutron-ml2-2015.1.0-1.el7.noarch
openstack-neutron-lbaas-2015.1.0-1.el7.noarch
openstack-neutron-openvswitch-2015.1.0-1.el7.noarch
python-neutron-2015.1.0-1.el7.noarch
openstack-neutron-common-2015.1.0-1.el7.noarch
python-neutron-lbaas-2015.1.0-1.el7.noarch


Example of help output (it is relevant to all of the commands mentioned above):
# neutron help net-create 
usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
  [--max-width integer] [--prefix PREFIX]
  [--request-format {json,xml}]
  [--tenant-id TENANT_ID] [--admin-state-down]
  [--shared] [--router:external]
  [--provider:network_type network_type]
  [--provider:physical_network physical_network_name]
  [--provider:segmentation_id segmentation_id]
  NAME

Create a network for a given tenant.

positional arguments:
  NAME  Name of network to create.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
The XML or JSON request format.
  --tenant-id TENANT_ID
The owner tenant ID.
  --admin-state-down    Set admin state up to false.
  --shared  Set the network as shared.
  --router:external Set network as external, it is only available for
admin
  --provider:network_type network_type
The physical mechanism by which the virtual network is
implemented.
  --provider:physical_network physical_network_name
Name of the physical network over which the virtual
network is implemented.
  --provider:segmentation_id segmentation_id
VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN
networks.

output formatters:
  output formatter options

  -f {shell,table,value}, --format {shell,table,value}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width integer
Maximum display width, 0 to disable

shell formatter:
  a format a UNIX shell can parse (variable=value)

  --prefix PREFIX   add a prefix to all variable names
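The missing option would typically be registered with an explicit True/False choice; a minimal argparse sketch of what the help entry could look like (this is not neutronclient's actual parser code):

```python
import argparse

# Minimal sketch of registering the flag with explicit True/False
# values, mirroring how neutron CLI booleans are usually exposed.
parser = argparse.ArgumentParser(prog="neutron net-create")
parser.add_argument(
    "--port-security-enabled",
    metavar="{True,False}",
    choices=["True", "False"],
    help="Enable or disable port security on the network.")
parser.add_argument("name", metavar="NAME",
                    help="Name of network to create.")

args = parser.parse_args(["--port-security-enabled", "False", "int_net"])
print(args.port_security_enabled)  # False (as a string)
```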

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453671

Title:
  --port_security_enabled flag  does not exist in neutron net/port-
  create/update help

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When printing  help of  neutron net-create  / neutron net-update / neutron 
port-create / neutron port-update 
  there is no explanation of new flag : --port_security_enabled.

  version:

  # rpm -qa |grep neutron
  python-neutronclient-2.3.11-1.el7.noarch
  openstack-neutron-2015.1.0-1.el7.noarch
  openstack-neutron-ml2-2015.1.0-1.el7.noarch
  openstack-neutron-lbaas-2015.1.0-1.el7.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7.noarch
  python-neutron-2015.1.0-1.el7.noarch
  openstack-neutron-common-2015.1.0-1.el7.noarch
  python-neutron-lbaas-2015.1.0-1.el7.noarch

  
  Example of help output (it is relevant to all of the commands mentioned above):
  # neutron help net-create 
  usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
[--max-width integer] [--prefix PREFIX]
[--request-format {json,xml}]
[--tenant-id TENANT_ID] [--admin-state-down]
[--shared] [--router:external]
[--provider:network_type network_type]
[--provider:physical_network 
physical_network_name]
[--provider:segmentation_id segmentation_id]
NAME

  Create a network for a given tenant.

  positional arguments:
NAME  Name of network to create.

  optional arguments:
    -h, --help            show this help message and exit
--request-format {json,xml}
  The XML or JSON request format.
--tenant-id TENANT_ID
  The owner tenant ID.
    --admin-state-down    Set admin state up to false.
--shared  Set the network as shared.
--router:external Set network as external, it 

[Yahoo-eng-team] [Bug 1453708] [NEW] Copyright text in vendor code should refer to Brocade instead of OpenStack foundation

2015-05-11 Thread vishwanath jayaraman
Public bug reported:

The Brocade firewall vendor code files below refer to the OpenStack
Foundation instead of Brocade in their copyright sections and should be
fixed:
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/vyatta/vyatta_utils.py
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/drivers/vyatta/vyatta_fwaas.py

** Affects: neutron
 Importance: Undecided
 Assignee: vishwanath jayaraman (vishwanathj)
 Status: New


** Tags: neutron-fwaas

** Changed in: neutron
 Assignee: (unassigned) => vishwanath jayaraman (vishwanathj)

** Changed in: neutron
 Assignee: vishwanath jayaraman (vishwanathj) => (unassigned)

** Changed in: neutron
 Assignee: (unassigned) => vishwanath jayaraman (vishwanathj)

** Description changed:

  The below Brocade Firewall vendor code related files refers to OpenStack
- Foundation instead of Brocade and should be fixed
+ Foundation instead of Brocade in copyright section and should be fixed
  
  
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/vyatta/vyatta_utils.py
  
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/drivers/vyatta/vyatta_fwaas.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453708

Title:
  Copyright text in vendor code should refer to Brocade instead of
  OpenStack foundation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Brocade firewall vendor code files below refer to the OpenStack
  Foundation instead of Brocade in their copyright sections and should
  be fixed:

  
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/vyatta/vyatta_utils.py
  
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/drivers/vyatta/vyatta_fwaas.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453708/+subscriptions



[Yahoo-eng-team] [Bug 1453715] [NEW] ml2 plugin can't update port 'binding:host_id' be None

2015-05-11 Thread shihanzhang
Public bug reported:

Now with the neutron ML2 plugin, if we want to update a port's
'binding:host_id' to None, we must set 'binding:host_id' to an empty
string (binding:host_id=''). There is a problem when nova deletes a VM:
https://bugs.launchpad.net/nova/+bug/1441419

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453715

Title:
  ml2 plugin can't update port 'binding:host_id' be None

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now with the neutron ML2 plugin, if we want to update a port's
  'binding:host_id' to None, we must set 'binding:host_id' to an empty
  string (binding:host_id=''). There is a problem when nova deletes a
  VM: https://bugs.launchpad.net/nova/+bug/1441419

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453715/+subscriptions



[Yahoo-eng-team] [Bug 1453754] [NEW] cannot login after automatic logout

2015-05-11 Thread Giovanni
Public bug reported:

After being logged out due to inactivity, the next login attempt fails
with a "Something went wrong" error. This happens with any user.

First successful login:
2015-05-11 01:56:11,986 2327 INFO openstack_auth.forms Login successful for 
user gtirloni.


Second unsuccessful login:
2015-05-11 10:24:30,717 2328 INFO openstack_auth.forms Login successful for 
user gtirloni.
2015-05-11 10:24:30,718 2328 ERROR django.request Internal Server Error: 
/dashboard/auth/login/
Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File /usr/lib/python2.7/site-packages/django/views/decorators/debug.py, 
line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/utils/decorators.py, line 110, 
in _wrapped_view
response = view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/decorators/cache.py, 
line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/openstack_auth/views.py, line 111, in 
login
**kwargs)
  File /usr/lib/python2.7/site-packages/django/views/decorators/debug.py, 
line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/utils/decorators.py, line 110, 
in _wrapped_view
response = view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/decorators/cache.py, 
line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/contrib/auth/views.py, line 
51, in login
auth_login(request, form.get_user())
  File /usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py, line 
102, in login
if _get_user_session_key(request) != user.pk or (
  File /usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py, line 
59, in _get_user_session_key
return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
  File /usr/lib/python2.7/site-packages/django/db/models/fields/__init__.py, 
line 969, in to_python
params={'value': value},
ValidationError: [u'1550aa46bac146a0ac76e4801a66e065' value must be an 
integer.]

The workaround is to clean all cookies and start again, or possibly use
an incognito window.
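The traceback suggests a stale session key: the session still holds a Keystone-style UUID string while the configured user model expects an integer primary key. A hedged sketch of the defensive check implied here, with a plain dict standing in for Django's session store:

```python
# Sketch of the defensive check implied by the traceback: if the stored
# session key cannot be parsed as the user model's primary-key type,
# discard it instead of raising.  The key name matches Django's
# django.contrib.auth.SESSION_KEY; the helper itself is hypothetical.
SESSION_KEY = "_auth_user_id"

def drop_stale_session_key(session, pk_type=int):
    value = session.get(SESSION_KEY)
    if value is None:
        return
    try:
        pk_type(value)
    except (TypeError, ValueError):
        # Stale key written by a different user model (e.g. a UUID).
        del session[SESSION_KEY]

session = {SESSION_KEY: "1550aa46bac146a0ac76e4801a66e065"}
drop_stale_session_key(session)
print(SESSION_KEY in session)  # False
```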

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453754

Title:
  cannot login after automatic logout

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After being logged out due to inactivity, the next login attempt fails
  with a "Something went wrong" error. This happens with any user.

  First successful login:
  2015-05-11 01:56:11,986 2327 INFO openstack_auth.forms Login successful for 
user gtirloni.

  
  Second unsuccessful login:
  2015-05-11 10:24:30,717 2328 INFO openstack_auth.forms Login successful for 
user gtirloni.
  2015-05-11 10:24:30,718 2328 ERROR django.request Internal Server Error: 
/dashboard/auth/login/
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.7/site-packages/django/views/decorators/debug.py, 
line 76, in sensitive_post_parameters_wrapper
  return view(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/utils/decorators.py, line 
110, in _wrapped_view
  response = view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/decorators/cache.py, 
line 57, in _wrapped_view_func
  response = view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/openstack_auth/views.py, line 111, 
in login
  **kwargs)
File /usr/lib/python2.7/site-packages/django/views/decorators/debug.py, 
line 76, in sensitive_post_parameters_wrapper
  return view(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/utils/decorators.py, line 
110, in _wrapped_view
  response = view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/decorators/cache.py, 
line 57, in _wrapped_view_func
  response = view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/contrib/auth/views.py, line 
51, in login
  auth_login(request, form.get_user())
File /usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py, 
line 102, in login
  if _get_user_session_key(request) != user.pk or (
File /usr/lib/python2.7/site-packages/django/contrib/auth/__init__.py, 
line 59, in 

[Yahoo-eng-team] [Bug 1452205] Re: VPNaaS: ipsec addconn failed

2015-05-11 Thread venkata anil
As Wei Hu explained, please enable the libreswan driver:
https://github.com/openstack/neutron-vpnaas/blob/master/etc/vpn_agent.ini#L16
Please see this patch:
https://review.openstack.org//#/c/174299
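A sketch of the corresponding vpn_agent.ini setting; verify the exact driver path against the sample file linked above for your installed neutron-vpnaas version:

```ini
# /etc/neutron/vpn_agent.ini -- sketch only; the driver class path is
# taken from the Kilo-era sample file and should be double-checked.
[vpnagent]
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver
```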

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452205

Title:
  VPNaaS: ipsec addconn failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When create an ipsec-connection

  2015-05-05 14:06:41.875 4555 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-a9e53c63-23fa-4544-9ad4-cdaa480eb5de', 'ipsec', 
'addconn', '--ctlbase', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/var/run/pluto.ctl',
 '--defaultroutenexthop', '10.62.72.1', '--config', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/etc/ipsec.conf', 
'94a916ff-375f-46e8-8c58-8231ce0eea1c'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:46
  2015-05-05 14:06:41.973 4555 ERROR neutron.agent.linux.utils [-] 
  2015-05-05 14:06:41.974 4555 ERROR neutron.services.vpn.device_drivers.ipsec 
[-] Failed to enable vpn process on router a9e53c63-23fa-4544-9ad4-cdaa480eb5de
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Traceback (most recent call last):
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 242, in enable
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   self.restart()
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 342, in restart
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   self.start()
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 395, in start
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   ipsec_site_conn['id']
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 314, in _execute
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   check_exit_code=check_exit_code)
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 
550, in execute
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 84, 
in execute
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   raise RuntimeError(m)
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
RuntimeError: 
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-a9e53c63-23fa-4544-9ad4-cdaa480eb5de', 'ipsec', 
'addconn', '--ctlbase', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/var/run/pluto.ctl',
 '--defaultroutenexthop', '10.62.72.1', '--config', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/etc/ipsec.conf', 
'94a916ff-375f-46e8-8c58-8231ce0eea1c']
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Exit code: 255
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Stdout: ''
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Stderr: 'connect(pluto_ctl) failed: No such file or directory\n'
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec
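
The "connect(pluto_ctl) failed: No such file or directory" stderr above means addconn ran before pluto had created its control socket. A minimal stdlib sketch of one mitigation, waiting for the socket to appear before invoking addconn; the function name and timeout values are hypothetical, not the actual driver fix:

```python
import os
import time

def wait_for_ctl_socket(path, timeout=10.0, interval=0.5):
    """Poll until the pluto control socket exists, or raise.

    Illustrative only: the real fix is ordering 'ipsec pluto' startup
    before 'ipsec addconn'; this merely shows the race being closed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    raise RuntimeError("pluto control socket %s never appeared" % path)
```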

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453766] [NEW] LBaaS - can't associate monitor to pool

2015-05-11 Thread Roey Dekel
Public bug reported:

Created monitor is not shown at Associate Monitor selection at the
pool (image attached).

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2015-05-11 14:16:59.png"
   https://bugs.launchpad.net/bugs/1453766/+attachment/4395424/+files/Screenshot%20from%202015-05-11%2014%3A16%3A59.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453766

Title:
  LBaaS - can't associate monitor to pool

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Created monitor is not shown at Associate Monitor selection at the
  pool (image attached).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453769] Re: Domain name update breaks IDP configuration

2015-05-11 Thread Dolph Mathews
I completely agree, the current design directly results in the fragility
you described (I pushed for naming domain-specific configuration files
using their immutable, system-defined domain IDs instead, but lost that
argument... I think on the basis of deployer experience? I'll let Henry
Nash comment further).

As a workaround, you could set the identity:update_domain policy to be
more restrictive (limited to users that understand the impact of such a
change), or disallow the operation completely.

I'm leaving this as Won't Fix, as the only alternative solution I can
think of is introducing a new configuration option that determines
whether configuration files are named using domain names or IDs, which
doesn't quite seem worth it (just to provide backwards compatibility...
unless someone has a better idea? if so, please change the status
accordingly).

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1453769

Title:
  Domain name update breaks IDP configuration

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  The configuration file for an identity provider, e.g. LDAP, is generally 
named keystone.domain_name.conf. 
  Since Keystone allows a user to update a domain name, any domain name update 
makes the file for that domain irrelevant. The file is not automatically 
renamed by Keystone, and from the documentation this appears to be the only way 
to configure an LDAP IDP. Manually renaming all such config files for domains 
seems like an overhead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1453769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453769] [NEW] Domain name update breaks IDP configuration

2015-05-11 Thread Prateek Jassal
Public bug reported:

The configuration file for an identity provider, e.g. LDAP, is generally named 
keystone.domain_name.conf. 
Since Keystone allows a user to update a domain name, any domain name update 
makes the file for that domain irrelevant. The file is not automatically 
renamed by Keystone, and from the documentation this appears to be the only way 
to configure an LDAP IDP. Manually renaming all such config files for domains 
seems like an overhead.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1453769

Title:
  Domain name update breaks IDP configuration

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The configuration file for an identity provider, e.g. LDAP, is generally 
named keystone.domain_name.conf. 
  Since Keystone allows a user to update a domain name, any domain name update 
makes the file for that domain irrelevant. The file is not automatically 
renamed by Keystone, and from the documentation this appears to be the only way 
to configure an LDAP IDP. Manually renaming all such config files for domains 
seems like an overhead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1453769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453779] [NEW] Performing rescue operation on a volume backed instance fails.

2015-05-11 Thread Ahmad Faheem
Public bug reported:

When performing a rescue operation on an instance booted from volume, it gives 
the error "Cannot rescue a volume-backed instance", code 400. 
Steps to reproduce
1. Boot a VM from volume
curl -g -i -X POST 
https://10.0.0.5:8774/v2/ee61323896a34bea9c9a5623fbb6f239/os-volumes_boot -H 
"X-Auth-Token: omitted" -d '{"server": {"name": "TestVm", "imageRef": "", 
"block_device_mapping_v2": [{"boot_index": 0, "uuid": 
"5d246189-a666-470c-8cee-36ee489cbd9e", "volume_size": 6, "source_type": 
"image", "destination_type": "volume", "delete_on_termination": 1}], 
"flavorRef": "da9ba7b5-be67-4a62-bb35-a362e05ba2f2", "max_count": 1, 
"min_count": 1, "networks": [{"uuid": 
"b5220eb2-e105-4ae0-8fc7-75a7cd468a40"}]}}'

{"server": {"security_groups": [{"name": "default"}],
"OS-DCF:diskConfig": "MANUAL", "id": "e436453d-5164-4f36-a7b0-617b63718759",
"links": [{"href":
"http://127.0.0.1:18774/v2/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759",
"rel": "self"}, {"href":
"http://127.0.0.1:18774/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759",
"rel": "bookmark"}], "adminPass": "6zGefA3nzNiv"}}


2. Run rescue operation on this instance.
curl -i 
'https://10.0.0.5:8774/v2/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759/action'
 -X POST -H 'X-Auth-Token: omitted' -d '{"rescue": {"adminPass": 
"p8uQwFZ8qQan"}}'
HTTP/1.1 400 Bad Request
Date: Mon, 11 May 2015 05:20:57 GMT
Server: Apache/2.4.7 (Ubuntu)
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
Access-Control-Expose-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
Access-Control-Allow-Methods: GET POST OPTIONS PUT DELETE PATCH
Content-Length: 147
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-6d671d9d-475c-41a3-894e-3e72676e1144
Via: 1.1 10.0.05:8774
Connection: close

{"badRequest": {"message": "Instance
e436453d-5164-4f36-a7b0-617b63718759 cannot be rescued: Cannot rescue a
volume-backed instance", "code": 400}}

The above issue is observed in Icehouse.
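
Nova rejects the request because the instance is volume-backed. A rough stdlib approximation of that test, using the block_device_mapping_v2 dict shape from the request above; this is a sketch, not Nova's actual is_volume_backed_instance helper:

```python
def is_volume_backed(image_ref, block_device_mappings):
    """Approximate Nova's volume-backed check: no image reference, and a
    boot-index-0 mapping whose destination is a volume.

    'block_device_mappings' follows the block_device_mapping_v2 request
    shape shown above; the real helper operates on BDM objects.
    """
    if image_ref:
        return False
    return any(
        bdm.get("boot_index") == 0 and bdm.get("destination_type") == "volume"
        for bdm in block_device_mappings
    )
```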

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453779

Title:
  Performing rescue operation on a volume backed instance fails.

Status in OpenStack Compute (Nova):
  New

Bug description:
  When performing a rescue operation on an instance booted from volume, it 
gives the error "Cannot rescue a volume-backed instance", code 400. 
  Steps to reproduce
  1. Boot a VM from volume
  curl -g -i -X POST 
https://10.0.0.5:8774/v2/ee61323896a34bea9c9a5623fbb6f239/os-volumes_boot -H 
"X-Auth-Token: omitted" -d '{"server": {"name": "TestVm", "imageRef": "", 
"block_device_mapping_v2": [{"boot_index": 0, "uuid": 
"5d246189-a666-470c-8cee-36ee489cbd9e", "volume_size": 6, "source_type": 
"image", "destination_type": "volume", "delete_on_termination": 1}], 
"flavorRef": "da9ba7b5-be67-4a62-bb35-a362e05ba2f2", "max_count": 1, 
"min_count": 1, "networks": [{"uuid": 
"b5220eb2-e105-4ae0-8fc7-75a7cd468a40"}]}}'

  {"server": {"security_groups": [{"name": "default"}],
  "OS-DCF:diskConfig": "MANUAL", "id":
  "e436453d-5164-4f36-a7b0-617b63718759", "links": [{"href":
  "http://127.0.0.1:18774/v2/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759",
  "rel": "self"}, {"href":
  "http://127.0.0.1:18774/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759",
  "rel": "bookmark"}], "adminPass": "6zGefA3nzNiv"}}

  
  2. Run rescue operation on this instance.
  curl -i 
'https://10.0.0.5:8774/v2/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759/action'
 -X POST -H 'X-Auth-Token: omitted' -d '{"rescue": {"adminPass": 
"p8uQwFZ8qQan"}}'
  HTTP/1.1 400 Bad Request
  Date: Mon, 11 May 2015 05:20:57 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Access-Control-Allow-Origin: *
  Access-Control-Allow-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
  Access-Control-Expose-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
  Access-Control-Allow-Methods: GET POST OPTIONS PUT DELETE PATCH
  Content-Length: 147
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-6d671d9d-475c-41a3-894e-3e72676e1144
  Via: 1.1 10.0.05:8774
  Connection: close

  {"badRequest": {"message": "Instance
  e436453d-5164-4f36-a7b0-617b63718759 cannot be rescued: Cannot rescue
  a volume-backed instance", "code": 400}}

  The above issue is observed in Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452298] Re: Fails to filter domains by id

2015-05-11 Thread Dolph Mathews
Ready for the punt return? :)

Because ?id=default is not a query parameter documented or supported in
any collection API, the client is not actually making a valid API
request. And because 'id' is also a documented API convention, I'd
suggest that, to provide the expected user experience, the client should
simply know to do the right thing: call get(id=x) and return the result
wrapped in a list, as callers of .list(id=x) would expect.
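
The suggestion above can be sketched as a thin client-side shim; 'manager' here is a hypothetical stand-in for a keystoneclient CRUD manager with .get() and .list() methods, not the actual client code:

```python
def list_with_id_fallback(manager, **filters):
    """Sketch of the suggested client behavior: since '?id=...' is not a
    server-side filter, translate list(id=x) into get(x) and wrap the
    result in a list, so callers of .list(id=x) get what they expect.
    """
    if "id" in filters:
        return [manager.get(filters["id"])]
    return manager.list(**filters)
```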

** Tags added: user-experience

** Changed in: python-keystoneclient
   Status: Invalid => Triaged

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452298

Title:
  Fails to filter domains by id

Status in OpenStack Identity (Keystone):
  Invalid
Status in Python client library for Keystone:
  Triaged

Bug description:
  V3 client fails to filter domains by id. The following code should list
  only the 'default' domain, but a list of all domains is returned instead:

  >>> import keystoneclient.v3.client as ksclient_v3
  >>> client = ksclient_v3.Client(endpoint='http://192.0.2.5:35357/v3', 
token='153c6ee5a6486e7db131ada9a464ab0f12f3f4cb')
  >>> default_domain = client.domains.list(id='default')[0]
  >>> default_domain
  <Domain description=Contains users and projects created by heat, 
enabled=True, id=29f4f3f567f943eb9769329352753b89, links={u'self': 
u'http://192.0.2.5:35357/v3/domains/29f4f3f567f943eb9769329352753b89'}, 
name=heat_stack>
  >>> client.domains.list(id='default')
  [<Domain description=Contains users and projects created by heat, 
enabled=True, id=29f4f3f567f943eb9769329352753b89, links={u'self': 
u'http://192.0.2.5:35357/v3/domains/29f4f3f567f943eb9769329352753b89'}, 
name=heat_stack>, <Domain description=Owns users and tenants (i.e. projects) 
available on Identity API v2., enabled=True, id=default, links={u'self': 
u'http://192.0.2.5:35357/v3/domains/default'}, name=Default>]

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435153] Re: ironic hypervisor is still available when the node is in maintenance status.

2015-05-11 Thread Ruby Loo
Setting this to Invalid because some of us don't think this is a bug,
i.e. that nodes in maintenance should still be counted/shown in nova.
Their resources show up as unavailable, so it seems fine.

See comments in https://review.openstack.org/177575.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1435153

Title:
  ironic hypervisor is still available when the node is in maintenance
  status.

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In my env, we have two ironic nodes, so in hypervisor-stats, show
  count=2, after I set one of them to maintenance state, the hypervisor-
  stats still show count=2, I understand for the node which is in
  maintenance status, it should not be counted into nova hypervisor
  stats.

  [root@rhel7-osee ~]# nova hypervisor-stats
  +----------------------+-------+
  | Property             | Value |
  +----------------------+-------+
  | count                | 2     |
  | current_workload     | 0     |
  | disk_available_least | 0     |
  | free_disk_gb         | 0     |
  | free_ram_mb          | 0     |
  | local_gb             | 40    |
  | local_gb_used        | 40    |
  | memory_mb            | 2048  |
  | memory_mb_used       | 2048  |
  | running_vms          | 2     |
  | vcpus                | 2     |
  | vcpus_used           | 2     |
  +----------------------+-------+
  [root@rhel7-osee ~]# ironic node-set-maintenance d4edf8c7-ae8d-40ed-b3a6-c5600ff09287 on
  [root@rhel7-osee ~]# nova hypervisor-stats
  +----------------------+-------+
  | Property             | Value |
  +----------------------+-------+
  | count                | 2     |
  | current_workload     | 0     |
  | disk_available_least | 0     |
  | free_disk_gb         | 0     |
  | free_ram_mb          | 0     |
  | local_gb             | 40    |
  | local_gb_used        | 40    |
  | memory_mb            | 2048  |
  | memory_mb_used       | 2048  |
  | running_vms          | 2     |
  | vcpus                | 2     |
  | vcpus_used           | 2     |
  +----------------------+-------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1435153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444232] Re: Only using huge page may filter the usable host

2015-05-11 Thread zhangtralon
yeah, Daniel and Nikola, you are right. 
Earlier I found no NUMA nodes on a host via numactl --hardware, so I thought 
there might be no NUMA topology.
I made the following test, and found that libvirt returns a NUMA cell even when 
numactl reports zero nodes.
thanks.

root@tralon-Vostro-1400:~# virsh --version
1.2.2
root@tralon-Vostro-1400:~# numactl --hardware
available: 0 nodes ()
root@tralon-Vostro-1400:~# virsh capabilities
<capabilities>

  <host>
    <uuid>44454c4c-3500-1039-8038-b4c04f433258</uuid>
    <cpu>
      <arch>i686</arch>
      <model>n270</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='2' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='lm'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='pse36'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>3106852</memory>
          <cpus num='2'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='1'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>apparmor</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type='kvm'>+117:+126</baselabel>
      <baselabel type='qemu'>+117:+126</baselabel>
    </secmodel>
  </host>

</capabilities>
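
The cell count can be read straight out of that capabilities XML; a stdlib sketch, where the embedded XML is an abridged copy of the output above and the XPath mirrors what nova's libvirt driver effectively inspects (this is not the driver's actual code):

```python
import xml.etree.ElementTree as ET

# Abridged reconstruction of the virsh capabilities output above.
CAPS_XML = """
<capabilities>
  <host>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>3106852</memory>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>
"""

def numa_cell_count(caps_xml):
    # Count <cell> elements under host/topology/cells; one cell is
    # reported here even though numactl printed "available: 0 nodes".
    root = ET.fromstring(caps_xml)
    return len(root.findall("./host/topology/cells/cell"))
```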


** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444232

Title:
  Only using huge page may filter the usable host

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When creating VMs using only the huge page parameter, without NUMA, the
  current code generates an instance NUMA topology with a NUMA cell.

  As a result, hosts that can provide huge pages but have no NUMA
  topology are filtered out.

  I think that binding huge pages so closely to NUMA is
  unreasonable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2015-05-11 Thread Ihar Hrachyshka
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Logging configuration library for OpenStack:
  In Progress

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims
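
For context, the shim being phased out is just a file-like adapter whose write() forwards to a real logger; with eventlet >= 0.17.2 a logging.Logger can be passed directly. A stdlib-only re-creation of the adapter (mirroring, not importing, oslo_log/loggers.py) to show what becomes unnecessary:

```python
import logging

class WritableLogger(object):
    """File-like object that forwards write() calls to a real logger.

    Stdlib re-creation of the oslo.log shim for illustration; with
    eventlet >= 0.17.2 the wsgi server accepts a logging.Logger
    directly, so this adapter can be dropped.
    """
    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level

    def write(self, msg):
        # Strip the trailing newline eventlet appends to log lines.
        self.logger.log(self.level, msg.rstrip())
```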

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453835] [NEW] Hyper-V: Nova cold resize / migration fails

2015-05-11 Thread Claudiu Belu
Public bug reported:

Commit https://review.openstack.org/#/c/162999/ changed where the
Hyper-V VM configuration files are stored. The files are being stored in
the same folder as the instance. Performing a cold resize / migration
will cause an os.rename call on the instance's folder, which fails as
long as there are configuration files used by Hyper-V in that folder,
thus resulting in a failed migration and the instance ending up in ERROR
state.

Logs: http://paste.openstack.org/show/219887/
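
One way to avoid the failing whole-directory rename is to move the instance's files individually and leave behind anything the hypervisor may still hold open. A hedged sketch of that idea; the suffixes and function name are illustrative guesses, and the actual fix relocated the configuration files instead:

```python
import os
import shutil

def move_instance_dir(src, dst, skip_suffixes=(".vmcx", ".vmrs")):
    """Move an instance directory file-by-file, skipping files Hyper-V
    may still hold open (suffixes here are illustrative guesses).

    Unlike os.rename(src, dst), a locked file only blocks itself
    instead of failing the entire migration.
    """
    os.makedirs(dst, exist_ok=True)
    leftovers = []
    for name in os.listdir(src):
        if name.endswith(skip_suffixes):
            leftovers.append(name)
            continue
        shutil.move(os.path.join(src, name), os.path.join(dst, name))
    return leftovers
```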

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v juno-backport-potential kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453835

Title:
  Hyper-V: Nova cold resize / migration fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  Commit https://review.openstack.org/#/c/162999/ changed where the
  Hyper-V VM configuration files are stored. The files are being stored
  in the same folder as the instance. Performing a cold resize /
  migration will cause an os.rename call on the instance's folder, which
  fails as long as there are configuration files used by Hyper-V in that
  folder, thus resulting in a failed migration and the instance ending
  up in ERROR state.

  Logs: http://paste.openstack.org/show/219887/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453855] [NEW] HA routers may fail to send out GARPs when node boots

2015-05-11 Thread Assaf Muller
Public bug reported:

When a node boots, it starts the OVS and L3 agents. As an example, in
RDO systemd unit files, these services have no dependency. This means
that the L3 agent can start before the OVS agent. It can start
configuring routers before the OVS agent has finished syncing with the
server and started processing ovsdb monitor updates. The result is that
when the L3 agent finishes configuring an HA router, it starts up
keepalived, which under certain conditions will transition to master and
send out gratuitous ARPs before the OVS agent finishes plugging its
ports. This means that the gratuitous ARPs will be lost, but with the
router acting as master, this can cause black holes.

Possible solutions:
* Introduce systemd dependencies, but this has its set of intricacies and it's 
hard to solve the above problem comprehensively just with this approach.
* Regardless, it's a good idea to use new keepalived flags:
garp_master_repeat INTEGER   # how often the gratuitous ARP after MASTER
                             #  state transition should be repeated?
garp_master_refresh INTEGER  # Periodic delay in seconds sending
                             #  gratuitous ARP while in MASTER state
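
A hedged sketch of how those flags would appear in the keepalived.conf the L3 agent renders; the instance name and values below are illustrative, not the agent's actual template:

```
vrrp_instance VR_1 {
    state BACKUP
    # ... interface, virtual_router_id, virtual_ipaddress, etc. ...

    # Re-send the GARP burst after a MASTER transition, and keep
    # refreshing it periodically, so an ARP lost while the OVS agent
    # was still plugging ports is eventually re-sent.
    garp_master_repeat 5
    garp_master_refresh 10
}
```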

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453855

Title:
  HA routers may fail to send out GARPs when node boots

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a node boots, it starts the OVS and L3 agents. As an example, in
  RDO systemd unit files, these services have no dependency. This means
  that the L3 agent can start before the OVS agent. It can start
  configuring routers before the OVS agent has finished syncing with the
  server and started processing ovsdb monitor updates. The result is that
  when the L3 agent finishes configuring an HA router, it starts up
  keepalived, which under certain conditions will transition to master
  and send out gratuitous ARPs before the OVS agent finishes plugging
  its ports. This means that the gratuitous ARPs will be lost, but with
  the router acting as master, this can cause black holes.

  Possible solutions:
  * Introduce systemd dependencies, but this has its set of intricacies and 
it's hard to solve the above problem comprehensively just with this approach.
  * Regardless, it's a good idea to use new keepalived flags:
  garp_master_repeat INTEGER   # how often the gratuitous ARP after MASTER
                               #  state transition should be repeated?
  garp_master_refresh INTEGER  # Periodic delay in seconds sending
                               #  gratuitous ARP while in MASTER state

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453858] [NEW] Fix existing JSCS errors

2015-05-11 Thread Matt Borland
Public bug reported:

When running ./run_tests.sh --jscs there are a few errors (trailing
whitespace).

These should be cleaned up.
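
The mechanical part of that cleanup can be scripted; a small stdlib sketch that strips the trailing whitespace JSCS complains about (file iteration and in-place writing omitted for brevity):

```python
def strip_trailing_whitespace(text):
    """Remove trailing spaces and tabs from every line -- the kind of
    mechanical fix the JSCS disallowTrailingWhitespace rule demands."""
    return "\n".join(line.rstrip(" \t") for line in text.split("\n"))
```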

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Matt Borland (palecrow)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453858

Title:
  Fix existing JSCS errors

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When running ./run_tests.sh --jscs there are a few errors (trailing
  whitespace).

  These should be cleaned up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453857] [NEW] 5/11 gate-nova-pip-missing-reqs failing on master

2015-05-11 Thread Matt Riedemann
Public bug reported:

https://jenkins07.openstack.org/job/gate-nova-pip-missing-
reqs/167/console

2015-05-11 15:11:26.399 | Missing requirements:
2015-05-11 15:11:26.399 | nova/test.py:43 dist=testtools module=testtools
2015-05-11 15:11:26.399 | nova/scheduler/filters/trusted_filter.py:50 
dist=requests module=requests
2015-05-11 15:11:26.399 | nova/api/ec2/__init__.py:30 dist=requests 
module=requests
2015-05-11 15:11:26.399 | nova/openstack/common/versionutils.py:26 
dist=setuptools module=pkg_resources
2015-05-11 15:11:26.400 | nova/virt/xenapi/image/bittorrent.py:18 
dist=setuptools module=pkg_resources
2015-05-11 15:11:26.400 | nova/openstack/common/cliutils.py:29 dist=prettytable 
module=prettytable
2015-05-11 15:11:26.400 | nova/context.py:22 dist=python-keystoneclient 
module=keystoneclient.auth
2015-05-11 15:11:26.401 | nova/network/neutronv2/api.py:21 
dist=python-keystoneclient module=keystoneclient.auth
2015-05-11 15:11:26.401 | nova/compute/manager.py:42 dist=python-keystoneclient 
module=keystoneclient.exceptions
2015-05-11 15:11:26.401 | nova/volume/cinder.py:27 dist=python-keystoneclient 
module=keystoneclient.exceptions
2015-05-11 15:11:26.401 | nova/context.py:23 dist=python-keystoneclient 
module=keystoneclient.service_catalog
2015-05-11 15:11:26.401 | nova/network/neutronv2/api.py:23 
dist=python-keystoneclient module=keystoneclient.auth.token_endpoint
2015-05-11 15:11:26.402 | nova/network/neutronv2/api.py:24 
dist=python-keystoneclient module=keystoneclient.session
2015-05-11 15:11:26.402 | nova/volume/cinder.py:28 dist=python-keystoneclient 
module=keystoneclient.session
2015-05-11 15:11:26.402 | nova/keymgr/barbican.py:25 dist=python-keystoneclient 
module=keystoneclient.session
2015-05-11 15:11:26.402 | nova/network/neutronv2/api.py:22 
dist=python-keystoneclient module=keystoneclient.auth.identity.v2
2015-05-11 15:11:26.402 | nova/compute/utils.py:21 dist=netifaces 
module=netifaces
2015-05-11 15:11:26.403 | nova/test.py:33 dist=fixtures module=fixtures
2015-05-11 15:11:26.403 | nova/test.py:30 dist=mock module=mock

For the test ones like testtools, mock, fixtures, etc, those are in
test-requirements.txt so it seems the job is busted in that regard and
we can ignore those.

But the runtime requirements like python-keystoneclient, prettytable,
netifaces, etc we should add.
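
A sketch of the corresponding requirements.txt additions; the version pins below are illustrative, and the real minimums would come from global-requirements:

```
python-keystoneclient>=1.3.0
prettytable>=0.7
netifaces>=0.10.4
```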

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453857

Title:
  5/11 gate-nova-pip-missing-reqs failing on master

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  https://jenkins07.openstack.org/job/gate-nova-pip-missing-
  reqs/167/console

  2015-05-11 15:11:26.399 | Missing requirements:
  2015-05-11 15:11:26.399 | nova/test.py:43 dist=testtools module=testtools
  2015-05-11 15:11:26.399 | nova/scheduler/filters/trusted_filter.py:50 
dist=requests module=requests
  2015-05-11 15:11:26.399 | nova/api/ec2/__init__.py:30 dist=requests 
module=requests
  2015-05-11 15:11:26.399 | nova/openstack/common/versionutils.py:26 
dist=setuptools module=pkg_resources
  2015-05-11 15:11:26.400 | nova/virt/xenapi/image/bittorrent.py:18 
dist=setuptools module=pkg_resources
  2015-05-11 15:11:26.400 | nova/openstack/common/cliutils.py:29 
dist=prettytable module=prettytable
  2015-05-11 15:11:26.400 | nova/context.py:22 dist=python-keystoneclient 
module=keystoneclient.auth
  2015-05-11 15:11:26.401 | nova/network/neutronv2/api.py:21 
dist=python-keystoneclient module=keystoneclient.auth
  2015-05-11 15:11:26.401 | nova/compute/manager.py:42 
dist=python-keystoneclient module=keystoneclient.exceptions
  2015-05-11 15:11:26.401 | nova/volume/cinder.py:27 dist=python-keystoneclient 
module=keystoneclient.exceptions
  2015-05-11 15:11:26.401 | nova/context.py:23 dist=python-keystoneclient 
module=keystoneclient.service_catalog
  2015-05-11 15:11:26.401 | nova/network/neutronv2/api.py:23 
dist=python-keystoneclient module=keystoneclient.auth.token_endpoint
  2015-05-11 15:11:26.402 | nova/network/neutronv2/api.py:24 
dist=python-keystoneclient module=keystoneclient.session
  2015-05-11 15:11:26.402 | nova/volume/cinder.py:28 dist=python-keystoneclient 
module=keystoneclient.session
  2015-05-11 15:11:26.402 | nova/keymgr/barbican.py:25 
dist=python-keystoneclient module=keystoneclient.session
  2015-05-11 15:11:26.402 | nova/network/neutronv2/api.py:22 
dist=python-keystoneclient module=keystoneclient.auth.identity.v2
  2015-05-11 15:11:26.402 | nova/compute/utils.py:21 dist=netifaces 
module=netifaces
  2015-05-11 15:11:26.403 | nova/test.py:33 dist=fixtures module=fixtures
  2015-05-11 

[Yahoo-eng-team] [Bug 1453888] [NEW] Fullstack doesn't clean resources if environment fails to start

2015-05-11 Thread John Schwarz
Public bug reported:

As the title says, in case fullstack_fixtures.EnvironmentFixture fails
to start because 'wait_until_env_is_up' didn't return successfully (for
example, there was a problem with one of the agents), cleanUp isn't
called. This causes all the resources of the fixtures that are used in
the environment (processes, configurations, namespaces...) not to be
cleaned.
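
One common remedy is to run already-registered cleanups when start-up
raises. The following is a self-contained sketch of that pattern with
made-up names, not the actual fullstack fixtures code (which builds on
the fixtures library):

```python
class EnvironmentFixture:
    """Made-up minimal fixture illustrating failure-safe cleanup."""

    def __init__(self):
        self._cleanups = []
        self.cleaned = []

    def add_cleanup(self, fn):
        self._cleanups.append(fn)

    def clean_up(self):
        for fn in reversed(self._cleanups):
            fn()

    def wait_until_env_is_up(self):
        # Stand-in for the real readiness check; here it always fails,
        # as when one of the agents has a problem.
        raise RuntimeError('agent failed to start')

    def set_up(self):
        # Register the cleanup *before* the step that can fail, and run
        # cleanups if start-up raises -- otherwise processes, configs
        # and namespaces leak, exactly as described in this bug.
        self.add_cleanup(lambda: self.cleaned.append('processes'))
        try:
            self.wait_until_env_is_up()
        except Exception:
            self.clean_up()
            raise

env = EnvironmentFixture()
try:
    env.set_up()
except RuntimeError:
    pass
print(env.cleaned)  # ['processes'] -- resources cleaned despite failure
```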

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453888

Title:
  Fullstack doesn't clean resources if environment fails to start

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As the title says, in case fullstack_fixtures.EnvironmentFixture fails
  to start because 'wait_until_env_is_up' didn't return successfully
  (for example, there was a problem with one of the agents), cleanUp
  isn't called. This causes all the resources of the fixtures that are
  used in the environment (processes, configurations, namespaces...) not
  to be cleaned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453909] [NEW] xenapi: BittorrentStore tries to load entry points that don't exist in tree

2015-05-11 Thread Matt Riedemann
Public bug reported:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/xenapi/image/bittorrent.py#n75

There are no entry points in nova's setup.cfg for this:

matches = [ep for ep in
   pkg_resources.iter_entry_points('nova.virt.xenapi.vm_utils')
   if ep.name == 'torrent_url']

For anyone using torrents with the xenapi driver, they should set the
config option CONF.xenserver.torrent_base_url.
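
The lookup can be exercised directly. A minimal sketch follows; the
helper name and the None fallback are assumptions for illustration, not
nova's actual code path:

```python
import pkg_resources

def find_torrent_url_fn():
    # Same query bittorrent.py performs; with stock nova setup.cfg no
    # entry point is registered under this group, so matches is empty.
    matches = [ep for ep in
               pkg_resources.iter_entry_points('nova.virt.xenapi.vm_utils')
               if ep.name == 'torrent_url']
    if not matches:
        # Caller would fall back to CONF.xenserver.torrent_base_url.
        return None
    return matches[0].load()

print(find_torrent_url_fn())  # None unless a plugin registers the entry point
```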

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: xenserver

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453909

Title:
  xenapi: BittorrentStore tries to load entry points that don't exist in
  tree

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/xenapi/image/bittorrent.py#n75

  There are no entry points in nova's setup.cfg for this:

  matches = [ep for ep in
 
pkg_resources.iter_entry_points('nova.virt.xenapi.vm_utils')
 if ep.name == 'torrent_url']

  For anyone using torrents with the xenapi driver, they should set the
  config option CONF.xenserver.torrent_base_url.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453909/+subscriptions



[Yahoo-eng-team] [Bug 1453925] [NEW] BGP Dynamic Routing

2015-05-11 Thread Carl Baldwin
Public bug reported:

We propose to create a new dr-agent which speaks BGP on behalf of Neutron
to external routers.  It will only announce routes on an external
network and will not yet learn routes from the external system.

These routes will include floating IPs in IPv4 and IPv6 subnets for
IPv6.  The address scopes blueprint is related and helps determine which
subnets in IPv6 should be announced.

Described in blueprint bgp-dynamic-routing

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453925

Title:
  BGP Dynamic Routing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We propose to create a new dr-agent which speaks BGP on behalf of
  Neutron to external routers.  It will only announce routes on an
  external network and will not yet learn routes from the external
  system.

  These routes will include floating IPs in IPv4 and IPv6 subnets for
  IPv6.  The address scopes blueprint is related and helps determine
  which subnets in IPv6 should be announced.

  Described in blueprint bgp-dynamic-routing

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453925/+subscriptions



[Yahoo-eng-team] [Bug 1327473] Re: Don't use mutables as default args

2015-05-11 Thread Steve Baker
** Changed in: python-heatclient
   Importance: Undecided => Low

** Changed in: python-heatclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327473

Title:
  Don't use mutables as default args

Status in Cinder:
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for heat:
  Fix Released

Bug description:
  
  Passing mutable objects as default args is a known Python pitfall.
  We'd better avoid this.

  This is an example showing the pitfall:
  http://docs.python-guide.org/en/latest/writing/gotchas/
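
  A minimal illustration of the gotcha (not code from any of the
  affected projects):

```python
def append_bad(item, bucket=[]):
    # The default list is created once, at function definition time,
    # and shared by every call that omits the argument.
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # Idiomatic fix: use None as a sentinel and build a fresh list
    # on each call.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad('a'), append_bad('b'))    # both show ['a', 'b'] -- shared state
print(append_good('a'), append_good('b'))  # ['a'] ['b']
```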

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1327473/+subscriptions



[Yahoo-eng-team] [Bug 1453965] [NEW] extension UT not run anymore

2015-05-11 Thread Mathieu Rohon
Public bug reported:

In VPNaaS, the command:

$ python -m subunit.run discover -t ./ ./neutron_vpnaas/tests/unit
--list | grep test_ikepolicy_list

doesn't return any result, while there is a test named
test_ikepolicy_list:

https://github.com/openstack/neutron-
vpnaas/blob/master/neutron_vpnaas/tests/unit/extensions/test_vpnaas.py#L74

This kind of extension test is not run anymore by the gate, as we can
see here, for instance:

http://logs.openstack.org/42/181842/1/check/gate-neutron-vpnaas-
python27/b7b2311/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453965

Title:
  extension UT not run anymore

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In VPNaaS, the command:

  $ python -m subunit.run discover -t ./ ./neutron_vpnaas/tests/unit
  --list | grep test_ikepolicy_list

  doesn't return any result, while there is a test named
  test_ikepolicy_list:

  https://github.com/openstack/neutron-
  vpnaas/blob/master/neutron_vpnaas/tests/unit/extensions/test_vpnaas.py#L74

  This kind of extension test is not run anymore by the gate, as we
  can see here, for instance:

  http://logs.openstack.org/42/181842/1/check/gate-neutron-vpnaas-
  python27/b7b2311/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453965/+subscriptions



[Yahoo-eng-team] [Bug 1453906] [NEW] Implement Routing Networks in Neutron

2015-05-11 Thread Carl Baldwin
Public bug reported:

This feature request proposes to allow using private subnets and public
subnets together on the same physical network. The private network will
be used for router next-hops and other router communication.

This will also allow having an L3-only routed network which spans L2
networks. This will depend on dynamic routing integration with Neutron.

https://blueprints.launchpad.net/neutron/+spec/routing-networks

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453906

Title:
  Implement Routing Networks in Neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This feature request proposes to allow using private subnets and
  public subnets together on the same physical network. The private
  network will be used for router next-hops and other router
  communication.

  This will also allow having an L3-only routed network which spans L2
  networks. This will depend on dynamic routing integration with
  Neutron.

  https://blueprints.launchpad.net/neutron/+spec/routing-networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453906/+subscriptions



[Yahoo-eng-team] [Bug 1453921] [NEW] Implement Address Scopes

2015-05-11 Thread Carl Baldwin
Public bug reported:

Make address scopes a first class thing in Neutron and make Neutron
routers aware of them.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453921

Title:
  Implement Address Scopes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Make address scopes a first class thing in Neutron and make Neutron
  routers aware of them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453921/+subscriptions



[Yahoo-eng-team] [Bug 1453915] [NEW] incorrect cinder_catalog_info option in warning message

2015-05-11 Thread Matt Riedemann
Public bug reported:

http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/cinder.py#n125

if version == '1' and not _V1_ERROR_RAISED:
msg = _LW('Cinder V1 API is deprecated as of the Juno '
  'release, and Nova is still configured to use it. '
  'Enable the V2 API in Cinder and set '
  'cinder_catalog_info in nova.conf to use it.')

The cinder options were moved from the DEFAULT group in nova.conf to the
[cinder] group, but the warning message wasn't updated, so that should
be cinder.catalog_info now.

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: low-hanging-fruit volumes

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Tags added: low-hanging-fruit volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453915

Title:
  incorrect cinder_catalog_info option in warning message

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  http://git.openstack.org/cgit/openstack/nova/tree/nova/volume/cinder.py#n125

  if version == '1' and not _V1_ERROR_RAISED:
  msg = _LW('Cinder V1 API is deprecated as of the Juno '
'release, and Nova is still configured to use it. '
'Enable the V2 API in Cinder and set '
'cinder_catalog_info in nova.conf to use it.')

  The cinder options were moved from the DEFAULT group in nova.conf to
  the [cinder] group, but the warning message wasn't updated, so that
  should be cinder.catalog_info now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453915/+subscriptions



[Yahoo-eng-team] [Bug 1453943] [NEW] ML2 DVR port binding not covered by unit tests

2015-05-11 Thread Robert Kukura
Public bug reported:

While working on cleaning up the duplicated DB schema and logic
introduced to support DVR's distributed bindings (bug 1367391), I
discovered that much of the ML2 port binding support for DVR was not
covered by existing unit tests. Ideally, tests covering this would be
written and merged before the fix for bug 1367391 to ensure that the
cleanup work does not make unexpected behavioral changes.

** Affects: neutron
 Importance: Medium
 Assignee: Robert Kukura (rkukura)
 Status: New


** Tags: l3-dvr-backlog ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453943

Title:
  ML2 DVR port binding not covered by unit tests

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  While working on cleaning up the duplicated DB schema and logic
  introduced to support DVR's distributed bindings (bug 1367391), I
  discovered that much of the ML2 port binding support for DVR was not
  covered by existing unit tests. Ideally, tests covering this would be
  written and merged before the fix for bug 1367391 to ensure that the
  cleanup work does not make unexpected behavioral changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453943/+subscriptions



[Yahoo-eng-team] [Bug 1453953] [NEW] [data processing] Unable to upload job binaries

2015-05-11 Thread Chad Roberts
Public bug reported:

This bug was originally written against Sahara, but it appears to be a
Horizon issue instead, so I'm reporting it here.

When trying to upload the spark-example.jar from the Sahara edp-
examples, it fails with the message "Danger: There was an error
submitting the form. Please try again."

In the logs, the stack trace looks like this:

Internal Server Error: /project/data_processing/job_binaries/create-job-binary
Traceback (most recent call last):
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py,
 line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File /home/croberts/src/horizon/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /home/croberts/src/horizon/horizon/decorators.py, line 52, in dec
return view_func(request, *args, **kwargs)
  File /home/croberts/src/horizon/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /home/croberts/src/horizon/horizon/decorators.py, line 84, in dec
return view_func(request, *args, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py,
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py,
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/django/views/generic/edit.py,
 line 173, in post
return self.form_valid(form)
  File /home/croberts/src/horizon/horizon/forms/views.py, line 173, in 
form_valid
exceptions.handle(self.request)
  File /home/croberts/src/horizon/horizon/exceptions.py, line 364, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File /home/croberts/src/horizon/horizon/forms/views.py, line 170, in 
form_valid
handled = form.handle(self.request, form.cleaned_data)
  File 
/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py,
 line 183, in handle
_(Unable to create job binary))
  File /home/croberts/src/horizon/horizon/exceptions.py, line 364, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py,
 line 169, in handle
bin_url = self.handle_internal(request, context)
  File 
/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py,
 line 216, in handle_internal
_(Unable to upload job binary))
  File /home/croberts/src/horizon/horizon/exceptions.py, line 364, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py,
 line 212, in handle_internal
request.FILES[job_binary_file].read())
  File /home/croberts/src/horizon/openstack_dashboard/api/sahara.py, line 
332, in job_binary_internal_create
data=data)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/saharaclient/api/job_binary_internals.py,
 line 31, in create
'job_binary_internal', dump_json=False)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/saharaclient/api/base.py,
 line 110, in _update
resp = self.api.put(url, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 179, in put
return self.request(url, 'PUT', **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/saharaclient/api/client.py,
 line 46, in request
return super(HTTPClient, self).request(*args, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 95, in request
return self.session.request(url, method, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/utils.py,
 line 318, in inner
return func(*args, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/session.py,
 line 371, in request
logger=logger)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/utils.py,
 line 318, in inner
return func(*args, **kwargs)
  File 
/home/croberts/src/horizon/.venv/lib/python2.7/site-packages/keystoneclient/session.py,
 line 195, in _http_log_request
logger.debug(' '.join(string_parts))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xfd in position 14: 
ordinal not in range(128)
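
The root cause is Python 2's implicit ASCII decoding when byte strings
and unicode are mixed (here, raw upload bytes leaking into the request
log). A standalone sketch of the failure class follows; the payload
bytes are made up for illustration:

```python
# A binary upload payload containing a non-ASCII byte (0xfd, as in the
# traceback above) cannot be decoded with the default 'ascii' codec.
raw = b'PK\x03\x04' + b'\xfd' * 4   # made-up jar-like bytes

try:
    raw.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc.reason)  # ordinal not in range(128)
```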

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453953

Title:
  [data processing]  Unable to upload job 

[Yahoo-eng-team] [Bug 1453955] [NEW] ML2 mechanism driver DVR PortContext info incomplete/inconsistent

2015-05-11 Thread Robert Kukura
Public bug reported:

While extending the existing ML2 port binding unit tests to cover DVR
distributed port binding (bug 1453943), I ran into a number of issues
where the PortContext passed to ML2 mechanism drivers does not provide
the information needed for distributed ports or provides it
inconsistently. This is likely to prevent existing mechanism drivers for
ToR switches from working with DVR ports, and tests have reportedly
shown this to be the case.

When DVR was introduced, the host, original_host, status, and
original_status attributes were added to PortContext to provide drivers
with the host-specific details of distributed (or normal) ports. But the
current and previous port dictionary attributes also contain some host-
specific information, such as the VIF type, for distributed ports. New
attributes need to be added for host-specific  current and previous VIF
type and details, and the current and previous port dictionaries should
contain only the host-independent information that is returned from REST
operations.

Also, the existing original_status and original_host PortContext
attributes should return None when in the context of create or delete
operations, and original_host should reflect the host for which a
distributed port operation is being performed.

** Affects: neutron
 Importance: High
 Assignee: Robert Kukura (rkukura)
 Status: In Progress


** Tags: l3-dvr-backlog ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453955

Title:
  ML2 mechanism driver DVR PortContext info incomplete/inconsistent

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  While extending the existing ML2 port binding unit tests to cover DVR
  distributed port binding (bug 1453943), I ran into a number of issues
  where the PortContext passed to ML2 mechanism drivers does not provide
  the information needed for distributed ports or provides it
  inconsistently. This is likely to prevent existing mechanism drivers
  for ToR switches from working with DVR ports, and tests have
  reportedly shown this to be the case.

  When DVR was introduced, the host, original_host, status, and
  original_status attributes were added to PortContext to provide
  drivers with the host-specific details of distributed (or normal)
  ports. But the current and previous port dictionary attributes also
  contain some host-specific information, such as the VIF type, for
  distributed ports. New attributes need to be added for host-specific
  current and previous VIF type and details, and the current and
  previous port dictionaries should contain only the host-independent
  information that is returned from REST operations.

  Also, the existing original_status and original_host PortContext
  attributes should return None when in the context of create or delete
  operations, and original_host should reflect the host for which a
  distributed port operation is being performed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453955/+subscriptions



[Yahoo-eng-team] [Bug 1422699] Re: glance api doesn't abort start up on Store configuration errors

2015-05-11 Thread nikhil komawar
** Changed in: glance-store
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1422699

Title:
  glance api doesn't abort start up on Store configuration errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in OpenStack Glance backend store-drivers library (glance_store):
  Fix Released

Bug description:
  Glance api service does not abort start up when errors in glance-api.cfg file 
are encountered.
  It would make sense to abort service start up when a BadStoreConfiguration 
exception is encountered, instead of just sending the error to the logs and 
disabling adding images to that Store.

  For example if a Filesystem Storage Backend with multiple store is configured 
with a duplicate directory:
  filesystem_store_datadirs=/mnt/nfs1/images/:200
  filesystem_store_datadirs=/mnt/nfs1/images/:100

  Logs will have the error:
  ERROR glance_store._drivers.filesystem [-] Directory /mnt/nfs1/image 
specified multiple times in filesystem_store_datadirs option of filesystem 
configuration
  TRACE glance_store._drivers.filesystem None
  TRACE glance_store._drivers.filesystem
  WARNING glance_store.driver [-] Failed to configure store correctly: None 
Disabling add method.

  Service will start and when client tries to add an image he will
  receive a 410 Gone error saying: Error in store configuration. Adding
  images to store is disabled.

  This affects not only the filesystem storage backend but all glance-
  storage drivers that encounter an error in the configuration and raise
  a BadStoreConfiguration exception.

  How reproducible:
  Every time

  Steps to Reproduce:
  1. Configure Glance to use  Filesystem Storage Backend with multiple store 
and duplicate a filesystem_storage_datadirs.
  2. Run glance api

  Expected behavior:
  Glance api service should not have started and should have reported that the 
directory was specified multiple times.
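
  A sketch of the expected fail-fast behaviour (hypothetical helper
  names, not glance_store's actual code):

```python
class BadStoreConfiguration(Exception):
    """Raised when a store option is invalid; start-up should abort."""

def parse_filesystem_datadirs(datadirs):
    # Each entry is 'path:priority'; a duplicated path is treated as a
    # fatal configuration error rather than a logged warning.
    seen = set()
    for entry in datadirs:
        path = entry.rsplit(':', 1)[0]
        if path in seen:
            raise BadStoreConfiguration(
                'Directory %s specified multiple times' % path)
        seen.add(path)
    return seen

try:
    parse_filesystem_datadirs(['/mnt/nfs1/images/:200',
                               '/mnt/nfs1/images/:100'])
except BadStoreConfiguration as exc:
    print(exc)  # Directory /mnt/nfs1/images/ specified multiple times
```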

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1422699/+subscriptions



[Yahoo-eng-team] [Bug 1449639] Re: RBD: On image creation error, image is not deleted

2015-05-11 Thread nikhil komawar
** Changed in: glance-store
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1449639

Title:
  RBD: On image creation error, image is not deleted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in OpenStack Glance backend store-drivers library (glance_store):
  Fix Released

Bug description:
  When an exception rises while adding/creating an image, and the image
  has been created, this new image is not properly deleted.

  The fault lies in the `_delete_image` call of the Store.add method
  that is providing incorrect arguments.

  This also affects Glance (Icehouse), since back then glance_store
  functionality was included there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1449639/+subscriptions



[Yahoo-eng-team] [Bug 1422333] Re: instance resize fail when changing between flavor with ephemeral disk to a flavor without ephemeral disk

2015-05-11 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422333

Title:
  instance resize fail when changing between flavor with ephemeral disk
  to a flavor without ephemeral disk

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  Description of problem:

  The resize process fails and moves the instance to an 'error' state.
  The instance was created with the flavor: 
  m2.small  | 2048  | 10   | 0 |  | 1 | 1.0 | True
  and was resized to: 
  m3.small  | 2048  | 10   | 10| 2048 | 1 | 1.0 | True

  the Horizon error message:

  Error: Failed to launch instance cirros: Please try again later
  [Error: Unexpected error while running command. Command: ssh <compute
  node IP> mkdir -p
  /var/lib/nova/instances/b54a62ea-b739-4b44-a394-a92a89dfa759 Exit
  code: 255 Stdout: u'' Stderr: u'Host key verification failed.\r\n'].

  Version-Release number of selected component (if applicable):
  openstack-nova-console-2014.2.2-2.el7ost.noarch
  openstack-nova-novncproxy-2014.2.2-2.el7ost.noarch
  openstack-nova-common-2014.2.2-2.el7ost.noarch
  openstack-nova-compute-2014.2.2-2.el7ost.noarch
  openstack-nova-cert-2014.2.2-2.el7ost.noarch
  python-nova-2014.2.2-2.el7ost.noarch
  openstack-nova-scheduler-2014.2.2-2.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  openstack-nova-api-2014.2.2-2.el7ost.noarch
  openstack-nova-conductor-2014.2.2-2.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Launch an instance with the small flavor
  2. create a flavor with ephemeral disk
  3. resize the instance to the new flavor

  Actual results:
  The resize fails; the instance moves to an error state

  Expected results:
  the instance should be resized to the new flavor

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422333/+subscriptions



[Yahoo-eng-team] [Bug 1429093] Re: nova allows to boot images with virtual size > root_gb specified in flavor

2015-05-11 Thread Tristan Cacqueray
I've marked the OSSA task as won't fix as it's not considered a
vulnerability per se.

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429093

Title:
  nova allows to boot images with virtual size > root_gb specified in
  flavor

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  It's currently possible to boot an instance from a QCOW2 image, which
  has the virtual size larger than root_gb size specified in the given
  flavor.

  Steps to reproduce:

  1. Download a QCOW2 image (e.g. Cirros -
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

  2. Resize the image to a reasonable size:

  qemu-img resize cirros-0.3.0-i386-disk.img +9G

  3. Upload the image to Glance:

  glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-
  10GB --is-public True --progress --container-format bare --disk-format
  qcow2

  4. Boot the first VM using a 'correct' flavor (root_gb > virtual size
  of the Cirros image), e.g. m1.small (root_gb = 20)

  nova boot --image cirros-10GB --flavor m1.small demo-ok

  5. Wait until the VM boots.

  6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
  size of the Cirros image), e.g. m1.tiny (root_gb = 1):

  nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

  7. Wait until the VM boots.

  Expected result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

  Actual result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ACTIVE state
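The missing check is straightforward to express. Below is a hedged sketch of the validation the reporter expects nova to perform before boot; the helper names `qcow2_virtual_size` and `check_flavor_disk` are illustrative, not nova's actual code (the real fix lives in nova's image handling):

```python
# Illustrative sketch only: these helpers are not nova's actual code.
import json
import subprocess


def qcow2_virtual_size(image_path):
    """Ask qemu-img for an image's virtual size in bytes."""
    out = subprocess.check_output(
        ["qemu-img", "info", "--output=json", image_path])
    return json.loads(out)["virtual-size"]


def check_flavor_disk(virtual_size_bytes, root_gb):
    """Reject an image whose virtual size exceeds the flavor's root disk,
    mirroring the FlavorDiskTooSmall failure expected in step 6."""
    if virtual_size_bytes > root_gb * 1024 ** 3:
        raise ValueError(
            "FlavorDiskTooSmall: image needs %d bytes, flavor allows %d GB"
            % (virtual_size_bytes, root_gb))


# The 10 GB Cirros image fits m1.small (root_gb=20)...
check_flavor_disk(10 * 1024 ** 3, root_gb=20)
# ...but should be rejected for m1.tiny (root_gb=1).
try:
    check_flavor_disk(10 * 1024 ** 3, root_gb=1)
except ValueError:
    pass  # demo-should-fail would go to ERROR here
```

Without such a check, the undersized root disk is only discovered (or silently tolerated) at the hypervisor level, which is why demo-should-fail reaches ACTIVE.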

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454041] [NEW] misunderstanding caused by uuid token and pki token in install guide

2015-05-11 Thread brenda
Public bug reported:

In the released install guide, we can see the step to set the token provider
to uuid, as follows:
[token]
provider = keystone.token.providers.uuid.Provider

but there are further steps to set up pki tokens, as follows:
# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
# chown -R keystone:keystone /var/log/keystone
# chown -R keystone:keystone /etc/keystone/ssl
# chmod -R o-rwx /etc/keystone/ssl

I think pki tokens were introduced in Grizzly, and the installation guide
should use the pki token provider, like below:
[token]
provider = keystone.token.providers.pki.Provider

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454041

Title:
  misunderstanding caused by uuid token and pki token in install guide

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the released install guide, we can see the step to set the token
  provider to uuid, as follows:
  [token]
  provider = keystone.token.providers.uuid.Provider

  but there are further steps to set up pki tokens, as follows:
  # keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
  # chown -R keystone:keystone /var/log/keystone
  # chown -R keystone:keystone /etc/keystone/ssl
  # chmod -R o-rwx /etc/keystone/ssl

  I think pki tokens were introduced in Grizzly, and the installation
  guide should use the pki token provider, like below:
  [token]
  provider = keystone.token.providers.pki.Provider
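A quick way to see the reported inconsistency: the pki_setup steps only apply when the configured provider is the PKI one. The sketch below illustrates that rule; the helper name and config text are made up for the example, not taken from the guide:

```python
# Illustrative only: this helper is not part of keystone or the guide.
import configparser

UUID_PROVIDER = "keystone.token.providers.uuid.Provider"
PKI_PROVIDER = "keystone.token.providers.pki.Provider"


def pki_setup_required(conf_text):
    """Return True only when keystone.conf selects the PKI provider,
    i.e. only then do the `keystone-manage pki_setup` steps apply."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    provider = parser.get("token", "provider", fallback=UUID_PROVIDER)
    return provider == PKI_PROVIDER


# With the [token] section exactly as the guide writes it, the pki_setup
# steps that follow in the guide do not apply:
guide_conf = "[token]\nprovider = %s\n" % UUID_PROVIDER
```

So the guide should either switch the provider to PKI (as the reporter suggests) or drop the pki_setup steps, but not mix the two.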

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453358] Re: Using volume v2 client instead of v1 to get local volume

2015-05-11 Thread Jerry Cai
This is a powervc-driver issue, and I've already fixed it, thank you.
@mzoeller

** Project changed: nova => powervc-driver

** Changed in: powervc-driver
 Assignee: (unassigned) => Jerry Cai (caimin)

** Changed in: powervc-driver
   Importance: Undecided => Medium

** Changed in: powervc-driver
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453358

Title:
  Using volume v2 client instead of v1 to get local volume

Status in IBM PowerVC Driver for OpenStack:
  Fix Released

Bug description:
  There is something wrong with the volume v1 API when getting volume
  detail info; we need to use the volume v2 client instead of v1 to get
  the local volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/powervc-driver/+bug/1453358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447678] Re: session ID does not respect the principle of Same Origin Policy

2015-05-11 Thread Lin Hua Cheng
** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447678

Title:
  session ID does not respect the principle of Same Origin Policy

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added as to
  the bug as attachments.

  Reported via private E-mail from Anass ANNOUR:

  For the Horizon service, the session ID does [editor's note: did you
  mean does not here?] respect the principle of SOP (Same Origin
  Policy): anyone who can open a port on the Horizon server (> 1024)
  with lower privileges can get the session ID of a victim whom they
  have convinced to visit that port, and can then replay the session ID.

  The screenshot vuln2 presents the session ID captured on a port ,
  after I convinced an authenticated victim (tester) that the service is
  faster on port .
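The underlying behavior here is that browser cookies are host-scoped, not port-scoped (RFC 6265), so the "same origin" port boundary does not partition the session cookie. A stdlib sketch illustrating this; the hostname, port, and cookie value are invented for the demonstration:

```python
# Hostname, port, and cookie value are invented for this demonstration.
import email
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()


class FakeResponse:
    """Minimal response object accepted by CookieJar.extract_cookies()."""
    def __init__(self, raw_headers):
        self._msg = email.message_from_string(raw_headers)

    def info(self):
        return self._msg


# Horizon (port 80) sets the session cookie at login...
login = urllib.request.Request("http://horizon.example/auth/login/")
jar.extract_cookies(
    FakeResponse("Set-Cookie: sessionid=s3cret; Path=/\n\n"), login)

# ...and a request to a low-privilege service on another port of the same
# host gets that cookie attached, because cookie scoping ignores the port.
evil = urllib.request.Request("http://horizon.example:9999/")
jar.add_cookie_header(evil)
print(evil.get_header("Cookie"))  # sessionid=s3cret
```

This is standard cookie behavior rather than a Horizon defect, which is consistent with the Won't Fix resolution; the usual mitigations are not co-hosting untrusted services on the dashboard host and serving Horizon with Secure/HttpOnly cookies over TLS.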

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452955] Re: Client does not catch exceptions when making a token authentication request

2015-05-11 Thread Lin Hua Cheng
python-keystoneclient does raise a ConnectionRefused exception if it could
not connect to the Keystone endpoint.

The horizon component that invokes keystone does actually catch this error
and log a debug msg. I can see this message in the log:
Unable to establish connection to http://some_bad_url:5000/v3/auth/tokens

Horizon code that handles the exception from python-keystoneclient:

except (keystone_exceptions.ClientException,
        keystone_exceptions.AuthorizationFailure) as exc:
    msg = _("An error occurred authenticating. "
            "Please try again later.")
    LOG.debug(str(exc))
    raise exceptions.KeystoneAuthException(msg)

I think horizon should perform a LOG.error() for connection refused
instead of LOG.debug(), since it is a configuration error.
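The suggested logging split could look like the sketch below. The exception classes are local stand-ins for keystoneclient's (whose real names and hierarchy vary between releases), and `authenticate` is a hypothetical wrapper, not django-openstack-auth's actual code:

```python
# Stand-in classes and helper for illustration; not the real
# keystoneclient exceptions or django-openstack-auth code.
import logging

LOG = logging.getLogger(__name__)


class ClientException(Exception):
    """Stand-in for keystone_exceptions.ClientException."""


class ConnectionRefused(ClientException):
    """Stand-in for keystoneclient's connection-refused error."""


def authenticate(do_post):
    """Run the token request, logging connection failures loudly."""
    try:
        return do_post()
    except ConnectionRefused as exc:
        # A misconfigured endpoint or DNS error: surface it at ERROR so
        # the operator sees it without enabling debug logging.
        LOG.error("Unable to reach the Keystone endpoint: %s", exc)
        raise
    except ClientException as exc:
        # Ordinary auth failures stay at DEBUG, as horizon does today.
        LOG.debug(str(exc))
        raise
```

The key design point is catching the connection failure before the broad ClientException handler, so configuration errors are no longer hidden behind the generic "Please try again later" message.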


** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
   Status: New => Invalid

** Changed in: django-openstack-auth
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

** Tags added: kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452955

Title:
  Client does not catch exceptions when making a token authentication
  request

Status in Django OpenStack Auth:
  New
Status in OpenStack Identity (Keystone):
  Invalid
Status in Python client library for Keystone:
  Invalid

Bug description:
  keystoneclient.auth.identity.v3.token.TokenMethod does a
  session.post() without catching exceptions.

  In my case, I had a misconfigured DNS which meant that this post()
  never succeeded, however the error that ends up going back to Horizon
  is a simplified:

  Login failed: An error occurred authenticating. Please try again
  later.

  which makes no mention of the underlying cause, nor do the keystone
  logs. This caused me an enormous amount of wasted time debugging; the
  error could certainly be improved here!

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1452955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp