[Yahoo-eng-team] [Bug 1322597] Re: Unable to update image members

2014-08-20 Thread Gary W. Smith
Marking invalid since this is already being tracked via the above
blueprint

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1322597

Title:
  Unable to update image members

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The Glance API lets us update image members; we should expose that
  functionality in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1322597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328468] Re: Admin instances filter is not working in server side.

2014-08-20 Thread Gary W. Smith
Marking as invalid since this is already being handled by the above
blueprint.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1328468

Title:
  Admin instances filter is not working in server side.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  It would be better if the admin instances filter worked server-side
  for project, name, or host, so that instances could be filtered
  across pages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1328468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359072] [NEW] Incorrect logic of _modify_rules() in IptablesManager

2014-08-20 Thread Qin Zhao
Public bug reported:

The logic of _modify_rules() seems incorrect. For instance, assume
that we have an in-memory table like this:

:bn-chain001 - [0:0]
:chain002 - [0:0]
[0:0] -A bn-chain001 rule001
[0:0] -A chain002 rule002

and iptables-save output like this:

# Generated by zhaoqin on mars
*zhaoqin
:bn-chain001 - [0:0]
[0:0] -A bn-chain001 rule001
[0:0] -A chain002 rule002
COMMIT
# Completed on moon

The current code of _modify_rules() will generate the following result:

# Generated by zhaoqin on mars
:chain002 - [0:0]
:bn-chain001 - [0:0]
[0:0] -A bn-chain001 rule001
[0:0] -A chain002 rule002
*zhaoqin
COMMIT
# Completed on moon

The root cause is that when the rule '[0:0] -A chain002 rule002' in
the new_filter list is removed, the current code does
'rules_index -= 1'. That is incorrect. 'rules_index -= 1' should only
be done when a chain entry in the new_filter list is removed, because
the chain list sits above the rule list.
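The corrected adjustment can be sketched as follows (a minimal illustration, not Neutron's actual _modify_rules() code; the helper name and structure are made up):

```python
def dedup_and_fix_index(new_filter, rules_index):
    """Remove duplicate lines and adjust the rule insertion point.

    Only a removed *chain* definition (a line starting with ':')
    shifts rules_index, because chains are listed above the rules.
    """
    seen = set()
    deduped = []
    for line in new_filter:
        if line in seen:
            # Duplicate removed: only a chain entry moves the rule
            # insertion point, since rules sit below all chains.
            if line.startswith(':'):
                rules_index -= 1
            continue
        seen.add(line)
        deduped.append(line)
    return deduped, rules_index
```

With the tables above, removing a duplicate rule line leaves rules_index untouched, while removing a duplicate chain line such as ':chain002 - [0:0]' decrements it.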

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359072

Title:
  Incorrect logic of _modify_rules() in IptablesManager

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The logic of _modify_rules() seems incorrect. For instance, assume
  that we have an in-memory table like this:

  :bn-chain001 - [0:0]
  :chain002 - [0:0]
  [0:0] -A bn-chain001 rule001
  [0:0] -A chain002 rule002

  and iptables-save output like this:

  # Generated by zhaoqin on mars
  *zhaoqin
  :bn-chain001 - [0:0]
  [0:0] -A bn-chain001 rule001
  [0:0] -A chain002 rule002
  COMMIT
  # Completed on moon

  The current code of _modify_rules() will generate the following
  result:

  # Generated by zhaoqin on mars
  :chain002 - [0:0]
  :bn-chain001 - [0:0]
  [0:0] -A bn-chain001 rule001
  [0:0] -A chain002 rule002
  *zhaoqin
  COMMIT
  # Completed on moon

  The root cause is that when the rule '[0:0] -A chain002 rule002' in
  the new_filter list is removed, the current code does
  'rules_index -= 1'. That is incorrect. 'rules_index -= 1' should
  only be done when a chain entry in the new_filter list is removed,
  because the chain list sits above the rule list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359092] [NEW] During snapshot create using nova CLI, unable to use optional arguments

2014-08-20 Thread Mh Raies
Public bug reported:

I am creating a VM snapshot using the nova CLI.

The set of operations and errors is below:


1. nova cli help ===>

[raies@localhost cli_services]$ nova help image-create
usage: nova image-create [--show] [--poll] <server> <name>

Create a new image by taking a snapshot of a running server.

Positional arguments:
  <server>  Name or ID of server.
  <name>    Name of snapshot.

Optional arguments:
  --show    Print image info.
  --poll    Report the snapshot progress and poll until image creation is
            complete.
[raies@localhost cli_services]$


2. 
[raies@localhost cli_services]$ nova list
+--------------------------------------+-------------+--------+------------+-------------+-------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks          |
+--------------------------------------+-------------+--------+------------+-------------+-------------------+
| 238eb266-5837-4544-a626-287cb3ca98c3 | test-server | ACTIVE | -          | Running     | public=172.24.4.3 |
+--------------------------------------+-------------+--------+------------+-------------+-------------------+

3. 
when used optional argument (--poll) - 

[raies@localhost cli_services]$ nova --poll image-create 238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot
usage: nova [--version] [--debug] [--os-cache] [--timings]
[--timeout ] [--os-auth-token OS_AUTH_TOKEN]
[--os-username ] [--os-user-id ]
[--os-password ]
[--os-tenant-name ]
[--os-tenant-id ] [--os-auth-url ]
[--os-region-name ] [--os-auth-system ]
[--service-type ] [--service-name ]
[--volume-service-name ]
[--endpoint-type ]
[--os-compute-api-version ]
[--os-cacert ] [--insecure]
[--bypass-url ]
 ...
error: unrecognized arguments: --poll


4. 
when used optional argument --show - 


[raies@localhost cli_services]$ nova --show image-create 238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot
usage: nova [--version] [--debug] [--os-cache] [--timings]
[--timeout ] [--os-auth-token OS_AUTH_TOKEN]
[--os-username ] [--os-user-id ]
[--os-password ]
[--os-tenant-name ]
[--os-tenant-id ] [--os-auth-url ]
[--os-region-name ] [--os-auth-system ]
[--service-type ] [--service-name ]
[--volume-service-name ]
[--endpoint-type ]
[--os-compute-api-version ]
[--os-cacert ] [--insecure]
[--bypass-url ]
 ...
error: unrecognized arguments: --show
Try 'nova help ' for more information.

5. 
When tried with no optional argument - 

[raies@localhost cli_services]$ nova image-create 238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot

[raies@localhost cli_services]$ 
[raies@localhost cli_services]$ 
[raies@localhost cli_services]$ 
[raies@localhost cli_services]$ nova image-list
+--------------------------------------+---------------------------------+--------+--------------------------------------+
| ID                                   | Name                            | Status | Server                               |
+--------------------------------------+---------------------------------+--------+--------------------------------------+
| 5f785211-590c-48aa-b64c-c9a6a9b60aad | Fedora-x86_64-20-20140618-sda   | ACTIVE |                                      |
| 4c5838f1-fce5-422d-a411-401d79f6250a | cirros-0.3.2-x86_64-uec         | ACTIVE |                                      |
| b17aa5bf-eed2-4792-8ecb-5e161d0d3aa2 | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |                                      |
| 238742ef-7739-4e0b-9f58-d609f9994ff5 | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |                                      |
| c0b1cff7-9251-4b91-a121-2d3c69d8a0fc | test-snapshot                   | SAVING | 238eb266-5837-4544-a626-287cb3ca98c3 |
+--------------------------------------+---------------------------------+--------+--------------------------------------+

i.e., the snapshot creation is successful.


So the problem is that we cannot create a snapshot when using optional
arguments with the nova image-create CLI.
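A minimal argparse sketch (a hypothetical parser, not python-novaclient's code) reproduces the behavior: options defined on a subcommand such as --poll are only recognized after the subcommand name, while anything placed before it must be a top-level option.

```python
import argparse

parser = argparse.ArgumentParser(prog='nova')
subparsers = parser.add_subparsers(dest='cmd')
image_create = subparsers.add_parser('image-create')
image_create.add_argument('--poll', action='store_true')
image_create.add_argument('server')
image_create.add_argument('name')

# Accepted: the option follows the subcommand.
args = parser.parse_args(['image-create', '--poll', 'vm-1', 'snap-1'])

# Rejected: the option precedes the subcommand, so the top-level
# parser sees an unknown flag and exits with "unrecognized arguments".
try:
    parser.parse_args(['--poll', 'image-create', 'vm-1', 'snap-1'])
    rejected = False
except SystemExit:
    rejected = True
```

This matches the transcripts above: `nova image-create --poll <server> <name>` would be the accepted form, while `nova --poll image-create ...` fails.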

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359092

Title:
  During snapshot create using nova CLI, unable to use optional
  arguments

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am creating a VM snapshot using the nova CLI.

  The set of operations and errors is below:

  
  1. nova cli help ===>

  [raies@localhost cli_services]$ nova help image-create
  usage: nova image-create [--show] [--poll] <server> <name>

  Create a new image by taking a snapshot of a running server.

  Positional arguments:
    <server>  Name or ID of server.
    <name>    Name of snapshot.

[Yahoo-eng-team] [Bug 1359113] [NEW] Typos in l3_router_plugin's comments

2014-08-20 Thread Wei Wang
Public bug reported:

In the services.l3_router.L3RouterPlugin.create_floatingip(), we have
a docstring to help us understand this function, but there are a few
nits inside it:

  :param floatingip: data fo the floating IP being created
  :returns: A floating IP object on success

  AS the l3 router plugin aysnchrounously creates floating IPs
  leveraging tehe l3 agent, the initial status fro the floating
  IP object will be DOWN.

data fo the floating IP --> data of the floating IP
aysnchrounously creates --> asynchronously creates
status fro the floating IP --> status for the floating IP
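Applying the three fixes, the docstring would read as below (a sketch showing only the corrected text; the signature and body here are placeholders):

```python
def create_floatingip(context, floatingip):
    """Create floating IP.

    :param floatingip: data of the floating IP being created
    :returns: A floating IP object on success

    As the l3 router plugin asynchronously creates floating IPs
    leveraging the l3 agent, the initial status for the floating
    IP object will be DOWN.
    """
    # Body omitted in this sketch.
```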

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359113

Title:
  Typos in l3_router_plugin's comments

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the services.l3_router.L3RouterPlugin.create_floatingip(), we
  have a docstring to help us understand this function, but there are
  a few nits inside it:

:param floatingip: data fo the floating IP being created
:returns: A floating IP object on success

AS the l3 router plugin aysnchrounously creates floating IPs
leveraging tehe l3 agent, the initial status fro the floating
IP object will be DOWN.

  data fo the floating IP --> data of the floating IP
  aysnchrounously creates --> asynchronously creates
  status fro the floating IP --> status for the floating IP

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359134] [NEW] Broken translations in object storage page

2014-08-20 Thread Ambroise CHRISTEA
Public bug reported:

A call to the unicode function is missing for some update status
messages in the object storage pages. This prevents some messages from
being translated.
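The bug class can be illustrated in plain Python (this is an analogy, not Horizon's actual code; in Horizon the fix is a unicode() call on a Django lazy-translation object):

```python
class LazyMessage(object):
    """Stand-in for a lazy translation proxy such as ugettext_lazy."""

    def __init__(self, msgid, catalog):
        self.msgid = msgid
        self.catalog = catalog

    def force_text(self, locale):
        # Resolve the message in the active locale; this is the step
        # the buggy status messages were skipping.
        return self.catalog.get(locale, {}).get(self.msgid, self.msgid)


catalog = {'fr': {'Uploading': 'Televersement'}}
message = LazyMessage('Uploading', catalog)

broken = '%s...' % message.msgid            # always the English msgid
fixed = '%s...' % message.force_text('fr')  # translated for the user
```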

** Affects: horizon
 Importance: Undecided
 Assignee: Ambroise CHRISTEA (ambroise-christea)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ambroise CHRISTEA (ambroise-christea)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359134

Title:
  Broken translations in object storage page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A call to the unicode function is missing for some update status
  messages in the object storage pages. This prevents some messages
  from being translated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359136] [NEW] vmware: display incorrect host_ip for hypervisor show

2014-08-20 Thread zhu zhu
Public bug reported:

With the vCenter driver, the compute node is configured to be
cluster01, and the vCenter IP is 10.9.1.43. But the hypervisor-show
command displays the host IP of the controller node. It should point
to the real vCenter IP.
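For reference, the cluster's vCenter address comes from the compute node's configuration; a hypothetical nova.conf fragment for this setup would be (option names follow the vCenter driver's [vmware] section and should be treated as assumptions):

```ini
[vmware]
# Address of the vCenter server this nova-compute manages; the fix
# would report this value as host_ip instead of the controller's IP.
host_ip = 10.9.1.43
cluster_name = cluster01
```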

[root@dhcp-10-9-3-83 ~]# nova hypervisor-show 29
+---------------------------+------------------------------------------------------------------------------------------+
| Property                  | Value                                                                                    |
+---------------------------+------------------------------------------------------------------------------------------+
| cpu_info_model            | ["Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz", "Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz"] |
| cpu_info_topology_cores   | 24                                                                                       |
| cpu_info_topology_threads | 48                                                                                       |
| cpu_info_vendor           | ["IBM", "IBM"]                                                                           |
| current_workload          | 0                                                                                        |
| disk_available_least      | -                                                                                        |
| free_disk_gb              | 763                                                                                      |
| free_ram_mb               | 354728                                                                                   |
| host_ip                   | 10.9.3.83                                                                                |
| hypervisor_hostname       | domain-c17(cluster01)                                                                    |
| hypervisor_type           | VMware vCenter Server                                                                    |
| hypervisor_version        | 5005000                                                                                  |
| id                        | 29                                                                                       |
| local_gb                  | 794                                                                                      |
| local_gb_used             | 31                                                                                       |
| memory_mb                 | 362336                                                                                   |
| memory_mb_used            | 7608                                                                                     |
| running_vms               | 3                                                                                        |
| service_host              | dhcp-10-9-3-83.sce.cn.ibm.com                                                            |
| service_id                | 6                                                                                        |
| vcpus                     | 48                                                                                       |
| vcpus_used                | 5                                                                                        |
+---------------------------+------------------------------------------------------------------------------------------+

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359136

Title:
  vmware: display incorrect host_ip for hypervisor show

Status in OpenStack Compute (Nova):
  New

Bug description:
  With the vCenter driver, the compute node is configured to be
  cluster01, and the vCenter IP is 10.9.1.43. But the hypervisor-show
  command displays the host IP of the controller node. It should point
  to the real vCenter IP.

  [root@dhcp-10-9-3-83 ~]# nova hypervisor-show 29
  
  +---------------------------+------------------------------------------------------------------------------------------+
  | Property                  | Value                                                                                    |
  +---------------------------+------------------------------------------------------------------------------------------+
  | cpu_info_model            | ["Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz", "Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz"] |
  | cpu_info_topology_cores   | 24                                                                                       |
  | cpu_info_topology_threads | 48                                                                                       |
  | cpu_info_vendor           | ["IBM", "IBM"]                                                                           |
[Yahoo-eng-team] [Bug 1359140] [NEW] NotImplementedError during nova network-create

2014-08-20 Thread Mh Raies
Public bug reported:

I am using a devstack development environment.

My localrc file is as follows:

ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
DEST=/opt/stack
disable_service n-net
enable_service tempest
API_RATE_LIMIT=False
VOLUME_BACKING_FILE_SIZE=4G
VIRT_DRIVER=libvirt
SWIFT_REPLICAS=1
export OS_NO_CACHE=True
SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
SKIP_EXERCISES=boot_from_volume,client-env
ROOTSLEEP=0
ACTIVE_TIMEOUT=60
Q_USE_SECGROUP=True
BOOT_TIMEOUT=90
ASSOCIATE_TIMEOUT=60
ADMIN_PASSWORD=Password
MYSQL_PASSWORD=Password
RABBIT_PASSWORD=Password
SERVICE_PASSWORD=Password
SERVICE_TOKEN=tokentoken
SWIFT_HASH=Password

I am trying to create a network using the nova network-create CLI.

I tried the following command:

[raies@localhost devstack]$ nova network-create --fixed-range-v4 192.168.1.0/8 test-net-1
ERROR (ClientException): The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-da64fd03-3637-4a80-9b6f-28e6d477e60d)
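The failure pattern behind the 500 can be sketched as follows (a simplified illustration, not nova's actual code; with n-net disabled, the os-networks extension delegates to a Neutron-backed network API whose create operation is not implemented):

```python
class NeutronBackedNetworkAPI(object):
    """Simplified stand-in for nova's Neutron-backed network API."""

    def create(self, context, **kwargs):
        # nova-network style network creation is not supported here.
        raise NotImplementedError()


def handle_create(api, label, cidr):
    try:
        return api.create(None, label=label, cidr=cidr), 200
    except NotImplementedError:
        # The WSGI layer turns the unhandled error into an HTTP 500.
        return None, 500


body, status = handle_create(NeutronBackedNetworkAPI(),
                             'test-net-1', '192.168.1.0/8')
```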


Traces from the n-api logs are as follows:


 _http_log_response /opt/stack/python-keystoneclient/keystoneclient/session.py:196
2014-08-20 15:37:57.212 DEBUG nova.api.openstack.wsgi [req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Action: 'create', body: {"network": {"cidr": "192.168.1.0/8", "label": "test-net-1"}} _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:931
2014-08-20 15:37:57.212 DEBUG nova.api.openstack.wsgi [req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Calling method '>' (Content-type='application/json', Accept='application/json') _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:936
2014-08-20 15:37:57.213 DEBUG nova.api.openstack.compute.contrib.os_networks [req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Creating network with label test-net-1 create /opt/stack/nova/nova/api/openstack/compute/contrib/os_networks.py:129
2014-08-20 15:37:57.213 ERROR nova.api.openstack [req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Caught error:
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack Traceback (most recent call last):
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 124, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return req.get_response(self.application)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     application, catch_exc_info=False)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in call_application
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return resp(environ, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 565, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return self._app(env, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return resp(environ, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return resp(environ, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/routes/middleware.py", line 131, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     response = self.app(environ, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return resp(environ, start_response)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 908, in __call__
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack     content_type, body, accept)
2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   Fi

[Yahoo-eng-team] [Bug 1342507] Re: healing migration doesn't work for ryu CI

2014-08-20 Thread Ann Kamyshnikova
There is a problem with the version of alembic. With alembic>=0.6.4,
the heal script worked normally for all plugins. I tested locally and
got this error only with alembic versions 0.6.2 and older. Global
requirements have already been updated to alembic>=0.6.4, so the
problem should be fixed by the alembic version upgrade.
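A small helper shows the constraint involved (plain version-tuple comparison, which is sufficient for these release-only version strings; real requirement checks use pkg_resources):

```python
def satisfies_min(version, minimum='0.6.4'):
    """Return True if a dotted release version meets the minimum."""
    as_tuple = lambda v: tuple(int(part) for part in v.split('.'))
    return as_tuple(version) >= as_tuple(minimum)


# The failing environments ran alembic 0.6.2 or older; the updated
# global requirement (alembic>=0.6.4) rules those out.
assert not satisfies_min('0.6.2')
assert satisfies_min('0.6.4')
```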


** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342507

Title:
  healing migration doesn't work for ryu CI

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  ryu CI started failing after healing migration change
  (https://review.openstack.org/#/c/96438/)

  http://180.37.183.32/ryuci/38/96438/41/check/check-tempest-dsvm-
  ryuplugin/e457d80/logs/devstacklog.txt.gz

  2014-07-15 11:27:55.722 | Traceback (most recent call last):
  2014-07-15 11:27:55.722 |   File "/usr/local/bin/neutron-db-manage", line 10, in 
  2014-07-15 11:27:55.722 |     sys.exit(main())
  2014-07-15 11:27:55.722 |   File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 171, in main
  2014-07-15 11:27:55.722 |     CONF.command.func(config, CONF.command.name)
  2014-07-15 11:27:55.722 |   File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 85, in do_upgrade_downgrade
  2014-07-15 11:27:55.722 |     do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  2014-07-15 11:27:55.722 |   File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 63, in do_alembic_command
  2014-07-15 11:27:55.723 |     getattr(alembic_command, cmd)(config, *args, **kwargs)
  2014-07-15 11:27:55.723 |   File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in upgrade
  2014-07-15 11:27:55.733 |     script.run_env()
  2014-07-15 11:27:55.734 |   File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 199, in run_env
  2014-07-15 11:27:55.736 |     util.load_python_file(self.dir, 'env.py')
  2014-07-15 11:27:55.737 |   File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 205, in load_python_file
  2014-07-15 11:27:55.737 |     module = load_module_py(module_id, path)
  2014-07-15 11:27:55.738 |   File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 58, in load_module_py
  2014-07-15 11:27:55.738 |     mod = imp.load_source(module_id, path, fp)
  2014-07-15 11:27:55.738 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 106, in 
  2014-07-15 11:27:55.738 |     run_migrations_online()
  2014-07-15 11:27:55.738 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 90, in run_migrations_online
  2014-07-15 11:27:55.738 |     options=build_options())
  2014-07-15 11:27:55.738 |   File "", line 7, in run_migrations
  2014-07-15 11:27:55.739 |   File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 681, in run_migrations
  2014-07-15 11:27:55.740 |     self.get_context().run_migrations(**kw)
  2014-07-15 11:27:55.740 |   File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 225, in run_migrations
  2014-07-15 11:27:55.741 |     change(**kw)
  2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py", line 32, in upgrade
  2014-07-15 11:27:55.741 |     heal_script.heal()
  2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py", line 78, in heal
  2014-07-15 11:27:55.741 |     execute_alembic_command(el)
  2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py", line 93, in execute_alembic_command
  2014-07-15 11:27:55.741 |     parse_modify_command(command)
  2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py", line 126, in parse_modify_command
  2014-07-15 11:27:55.741 |     op.alter_column(table, column, **kwargs)
  2014-07-15 11:27:55.741 |   File "", line 7, in alter_column
  2014-07-15 11:27:55.742 |   File "", line 1, in 
  2014-07-15 11:27:55.742 |   File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 322, in go
  2014-07-15 11:27:55.742 |     return fn(*arg, **kw)
  2014-07-15 11:27:55.742 |   File "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 300, in alter_column
  2014-07-15 11:27:55.743 |     existing_autoincrement=existing_autoincrement
  2014-07-15 11:27:55.743 |   File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py", line 42, in alter_column
  2014-07-15 11:27:55.743 |     else existing_autoincrement
  2014-07-15 11:27:55.743 |   File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in _exec
  2014-07-15 11:27:55.744 |     conn.execute(construct, *multiparams, **params)
  2014-07-15 11

[Yahoo-eng-team] [Bug 1322597] Re: Unable to update image members

2014-08-20 Thread Santiago Baldassin
The bug is not invalid; it will be fixed by the blueprint, but it is
still valid.

** Changed in: horizon
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1322597

Title:
  Unable to update image members

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Glance API lets us update image members; we should expose that
  functionality in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1322597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358805] Re: Incorrect API in the add Tenant Access to private flavor action

2014-08-20 Thread KaiLin
** Project changed: nova => openstack-api-site

** Description changed:

  when I give a specified tenant access to the specified private flavor, I
  use it like this :
  
  POST /v2/{tenant_id}/flavors/{flavor_id}/action
  {
  "addTenantAccess": {
  "tenant": "fake_tenant"
  }
  }
  tenant: The name of the tenant to which to give access.
  
  The response is:
  {
  "flavor_access": [
  {
  "flavor_id": "10",
  #here is "tenant_id"
  "tenant_id": "fake_tenant"
  },
  {
  "flavor_id": "10",
  "tenant_id": "openstack"
  }
  ]
  }
  
  when I use the private flavor to create VM in the specified tenant,it
  failed. But if I add the tenant access by using the tenant id,it can
  create VM successfully in the specified tenant .
  
  POST /v2/{tenant_id}/flavors/{flavor_id}/action
  {
  "addTenantAccess": {
  "tenant": "{tenant_id}"
  }
  }
+ I check the code, It also uses the "tenant" information as the "project id".
  
- I check the code, It also uses the "tenant" information as the "project id".
- So we should change "tenant" to "tenant_id" in the API of  the add Tenant 
Access to private flavor action.
+ So we should change the api guides in the add Tenant Access to private flavor 
action,
+ change "tenant: The name of the tenant to which to give access." to 
+ "tenant: The uuid of the tenant to which to give access."

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358805

Title:
  Incorrect API in the add Tenant Access to private flavor action

Status in OpenStack API documentation site:
  New

Bug description:
  when I give a specified tenant access to the specified private flavor,
  I use it like this :

  POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
  {
  "addTenantAccess": {
  "tenant": "fake_tenant"
  }
  }
  tenant: The name of the tenant to which to give access.

  The response is:
  {
  "flavor_access": [
  {
  "flavor_id": "10",
  #here is "tenant_id"
  "tenant_id": "fake_tenant"
  },
  {
  "flavor_id": "10",
  "tenant_id": "openstack"
  }
  ]
  }

  When I use the private flavor to create a VM in the specified
  tenant, it fails. But if I add the tenant access using the tenant
  ID, the VM can be created successfully in the specified tenant.

  POST /v2/{tenant_id}/flavors/{flavor_id}/action
  {
  "addTenantAccess": {
  "tenant": "{tenant_id}"
  }
  }
  I checked the code; it also uses the "tenant" value as the project ID.

  So we should change the API guide for the "add tenant access to
  private flavor" action: change "tenant: The name of the tenant to
  which to give access." to "tenant: The UUID of the tenant to which
  to give access."

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1358805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359215] [NEW] Issue in viewing floating IP association with Virtual IP of Load balancer in Horizon

2014-08-20 Thread Arun prasath S
Public bug reported:

When users want to see floating IP associations, they can view them
under the Access and Security -> Floating IPs tab.

But when a floating IP is associated with the virtual IP of a load
balancer, only '-' is displayed in the Instance column of the Floating
IPs tab. If we click on the '-' sign, it throws an error 'Unable to
find instance <id>'.

Actually, the <id> displayed in the error is a port ID of the load
balancer (we can find it under 'Networks'). But I believe that when we
click on '-', the <id> is searched for in the 'Instances' table; that
is why the error is thrown.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359215

Title:
  Issue in viewing floating IP association with Virtual IP of Load
  balancer in Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When users want to see floating IP associations, they can view them
  under the Access and Security -> Floating IPs tab.

  But when a floating IP is associated with the virtual IP of a load
  balancer, only '-' is displayed in the Instance column of the
  Floating IPs tab. If we click on the '-' sign, it throws an error
  'Unable to find instance <id>'.

  Actually, the <id> displayed in the error is a port ID of the load
  balancer (we can find it under 'Networks'). But I believe that when
  we click on '-', the <id> is searched for in the 'Instances' table;
  that is why the error is thrown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359092] Re: During snapshot create using nova CLI, unable to use optional arguments

2014-08-20 Thread Davanum Srinivas (DIMS)
I can't seem to run "nova --all flavor-list" or "nova --extra-specs
flavor-list" either

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359092

Title:
  During snapshot create using nova CLI, unable to use optional
  arguments

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Nova:
  Confirmed

Bug description:
  I am creating a VM snapshot using the nova CLI.

  The set of operations and errors is below:

  
  1. nova cli help ===>

  [raies@localhost cli_services]$ nova help image-create
  usage: nova image-create [--show] [--poll] <server> <name>

  Create a new image by taking a snapshot of a running server.

  Positional arguments:
    <server>  Name or ID of server.
    <name>    Name of snapshot.

  Optional arguments:
    --show    Print image info.
    --poll    Report the snapshot progress and poll until image creation is
              complete.
  [raies@localhost cli_services]$


  2. 
  [raies@localhost cli_services]$ nova list
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks          |
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+
  | 238eb266-5837-4544-a626-287cb3ca98c3 | test-server | ACTIVE | -          | Running     | public=172.24.4.3 |
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+

  3. 
  when used optional argument (--poll) - 

  [raies@localhost cli_services]$ nova --poll image-create 
238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot
  usage: nova [--version] [--debug] [--os-cache] [--timings]
  [--timeout <seconds>] [--os-auth-token OS_AUTH_TOKEN]
  [--os-username <auth-user-name>] [--os-user-id <auth-user-id>]
  [--os-password <auth-password>]
  [--os-tenant-name <auth-tenant-name>]
  [--os-tenant-id <auth-tenant-id>] [--os-auth-url <auth-url>]
  [--os-region-name <region-name>] [--os-auth-system <auth-system>]
  [--service-type <service-type>] [--service-name <service-name>]
  [--volume-service-name <volume-service-name>]
  [--endpoint-type <endpoint-type>]
  [--os-compute-api-version <compute-api-ver>]
  [--os-cacert <ca-certificate>] [--insecure]
  [--bypass-url <bypass-url>]
  <subcommand> ...
  error: unrecognized arguments: --poll

  
  4. 
  when used optional argument --show - 

  
  [raies@localhost cli_services]$ nova --show image-create 
238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot
  usage: nova [--version] [--debug] [--os-cache] [--timings]
  [--timeout <seconds>] [--os-auth-token OS_AUTH_TOKEN]
  [--os-username <auth-user-name>] [--os-user-id <auth-user-id>]
  [--os-password <auth-password>]
  [--os-tenant-name <auth-tenant-name>]
  [--os-tenant-id <auth-tenant-id>] [--os-auth-url <auth-url>]
  [--os-region-name <region-name>] [--os-auth-system <auth-system>]
  [--service-type <service-type>] [--service-name <service-name>]
  [--volume-service-name <volume-service-name>]
  [--endpoint-type <endpoint-type>]
  [--os-compute-api-version <compute-api-ver>]
  [--os-cacert <ca-certificate>] [--insecure]
  [--bypass-url <bypass-url>]
  <subcommand> ...
  error: unrecognized arguments: --show
  Try 'nova help <subcommand>' for more information.

  5. 
  When tried with no optional argument - 

  [raies@localhost cli_services]$ nova image-create
  238eb266-5837-4544-a626-287cb3ca98c3 test-snapshot

  [raies@localhost cli_services]$ 
  [raies@localhost cli_services]$ 
  [raies@localhost cli_services]$ 
  [raies@localhost cli_services]$ nova image-list
  
  +--------------------------------------+---------------------------------+--------+--------------------------------------+
  | ID                                   | Name                            | Status | Server                               |
  +--------------------------------------+---------------------------------+--------+--------------------------------------+
  | 5f785211-590c-48aa-b64c-c9a6a9b60aad | Fedora-x86_64-20-20140618-sda   | ACTIVE |                                      |
  | 4c5838f1-fce5-422d-a411-401d79f6250a | cirros-0.3.2-x86_64-uec         | ACTIVE |                                      |
  | b17aa5bf-eed2-4792-8ecb-5e161d0d3aa2 | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |                                      |
  | 238742ef-7739-4e0b-9f58-d609f9994ff5 | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |                                      |
  | c0b1cff7-9251-4b91-a121-2d3c69d8a0fc | test-snapshot                   | SAVING | 238eb266-5837-4544-a626-287cb3ca98c3 |
  +--------------------------------------+---------------------------------+--------+--------------------------------------+
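The errors in steps 3 and 4 follow from how argparse scopes options: `--poll` and `--show` are defined on the `image-create` subcommand, not on the top-level `nova` parser, so they are only recognized when placed after the subcommand (`nova image-create --poll <server> <name>`). A minimal sketch of that behavior (parser and argument names here are illustrative, not novaclient's actual code):

```python
import argparse

# The top-level parser knows only global options; each subcommand
# declares its own options on its own subparser.
parser = argparse.ArgumentParser(prog="nova")
parser.add_argument("--debug", action="store_true")  # a global option

subparsers = parser.add_subparsers(dest="subcommand")
image_create = subparsers.add_parser("image-create")
image_create.add_argument("--poll", action="store_true")  # subcommand-level
image_create.add_argument("server")
image_create.add_argument("name")

# Correct placement: the option comes after the subcommand.
args = parser.parse_args(["image-create", "--poll", "srv-1", "snap-1"])
print(args.poll)  # True

# Wrong placement ("nova --poll image-create ...") fails, because the
# top-level parser has no --poll option.
try:
    parser.parse_args(["--poll", "image-create", "srv-1", "snap-1"])
except SystemExit:
    pass  # argparse reports: error: unrecognized arguments: --poll
```

Placing the option after the subcommand would therefore be expected to work, consistent with the help output in step 1.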

[Yahoo-eng-team] [Bug 1359231] [NEW] List role assignments filters performance

2014-08-20 Thread Samuel de Medeiros Queiroz
Public bug reported:

When listing role assignments, we have the option to filter them by actor, 
target and role.
As Henry Nash pointed out at [1], the current implementation uses the standard 
filtering in V3.wrap_collection. Given the large number of individual 
assignments, this is pretty inefficient. 

The controller must pass the filters into the driver call, so that the list 
size is kept to a minimum.
For example, if the user wants to filter role assignments by project X, the 
driver will query the DB only selecting the role assignments where target_id = 
X.id, instead of querying everything and returning the result to be filtered by 
the controller.

[1]
https://github.com/openstack/keystone/blob/master/keystone/assignment/controllers.py#L860

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1359231

Title:
  List role assignments filters performance

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When listing role assignments, we have the option to filter them by actor, 
target and role.
  As Henry Nash pointed out at [1], the current implementation uses the 
standard filtering in V3.wrap_collection. Given the large number of individual 
assignments, this is pretty inefficient. 

  The controller must pass the filters into the driver call, so that the list 
size is kept to a minimum.
  For example, if the user wants to filter role assignments by project X, the 
driver will query the DB only selecting the role assignments where target_id = 
X.id, instead of querying everything and returning the result to be filtered by 
the controller.

  [1]
  
https://github.com/openstack/keystone/blob/master/keystone/assignment/controllers.py#L860
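  The proposed change can be sketched with an in-memory stand-in for the
  assignment table (the `ASSIGNMENTS` data and filter keys below are
  hypothetical; a real driver would translate the filters into a WHERE
  clause rather than a Python list comprehension):

```python
# Hypothetical stand-in for the role-assignment table.
ASSIGNMENTS = [
    {"actor_id": "user1", "target_id": "projX", "role_id": "admin"},
    {"actor_id": "user2", "target_id": "projY", "role_id": "member"},
    {"actor_id": "user3", "target_id": "projX", "role_id": "member"},
]

def list_role_assignments_unfiltered():
    # Current behavior: the driver returns every assignment and the
    # controller filters afterwards (wrap_collection-style).
    return list(ASSIGNMENTS)

def list_role_assignments(filters=None):
    # Proposed behavior: the driver applies the filters itself, so only
    # matching rows are materialized (with a SQL backend this would be
    # e.g. WHERE target_id = :target_id).
    filters = filters or {}
    return [a for a in ASSIGNMENTS
            if all(a.get(k) == v for k, v in filters.items())]

# Filtering by project X returns 2 rows instead of the full table.
result = list_role_assignments({"target_id": "projX"})
```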

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1359231/+subscriptions



[Yahoo-eng-team] [Bug 1359238] [NEW] pagination info at top and bottom of any horizon table

2014-08-20 Thread David Medberry
Public bug reported:

Hello dear OpenStack dashboard folk:

Pagination info at top and bottom of any horizon table. For instance, by
default, you typically limit table size to 20 but only display that
there are more than 20 returned entities at the bottom of a page. GOOD
UX DESIGN PRINCIPLES REQUIRE THAT YOU DISPLAY THIS AT TOP AND BOTTOM OF
A PAGE. And that you be able to go to the next "n" entries from the top
or bottom via a navigator button.

*(Yes, I yelled.)*

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359238

Title:
  pagination info at top and bottom of any horizon table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hello dear OpenStack dashboard folk:

  Pagination info at top and bottom of any horizon table. For instance,
  by default, you typically limit table size to 20 but only display that
  there are more than 20 returned entities at the bottom of a page. GOOD
  UX DESIGN PRINCIPLES REQUIRE THAT YOU DISPLAY THIS AT TOP AND BOTTOM
  OF A PAGE. And that you be able to go to the next "n" entries from the
  top or bottom via a navigator button.

  *(Yes, I yelled.)*

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359238/+subscriptions



[Yahoo-eng-team] [Bug 1359241] [NEW] Default sort order is braindead broken in User and Project lists

2014-08-20 Thread David Medberry
Public bug reported:

The default returned search order in the Admin panel for Users and
Projects return them in "id" order, not name order. The ID order is, by
design, opaque and useless so it is tantamount to idiocy to use that as
the default sort order. (It's like saying, "Return things in random
order on purpose", said with a diabolical peal of laughter)

Ie, the "admin" user (typically the first user created) appears on the
third page, when either sorting by date created (which likely isn't
available) or alphabetically would put it much much much higher up the
list.

This is another UX bug.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359241

Title:
  Default sort order is braindead broken in User and Project lists

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The default returned search order in the Admin panel for Users and
  Projects return them in "id" order, not name order. The ID order is,
  by design, opaque and useless so it is tantamount to idiocy to use
  that as the default sort order. (It's like saying, "Return things in
  random order on purpose", said with a diabolical peal of laughter)

  Ie, the "admin" user (typically the first user created) appears on the
  third page, when either sorting by date created (which likely isn't
  available) or alphabetically would put it much much much higher up the
  list.

  This is another UX bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359241/+subscriptions



[Yahoo-eng-team] [Bug 1359245] [NEW] Middleware creates tuple on every iteration of a loop

2014-08-20 Thread Ian Cordasco
Public bug reported:

In this generator [1] (nested in a for-loop) the middleware constantly
recreates the same tuple. This would perform better if two changes were
made:

1. Use a set instead of a tuple
2. Assign the set once outside of the for-loop saving several object creations. 

[1]:
https://github.com/openstack/horizon/blob/0f3192f84c872b1060ac6b7bbe1a3b34bbaf1b36/horizon/middleware.py#L179..L180

** Affects: horizon
 Importance: Undecided
 Assignee: Ian Cordasco (icordasc)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ian Cordasco (icordasc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359245

Title:
  Middleware creates tuple on every iteration of a loop

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In this generator [1] (nested in a for-loop) the middleware constantly
  recreates the same tuple. This would perform better if two changes
  were made:

  1. Use a set instead of a tuple
  2. Assign the set once outside of the for-loop saving several object 
creations. 

  [1]:
  
https://github.com/openstack/horizon/blob/0f3192f84c872b1060ac6b7bbe1a3b34bbaf1b36/horizon/middleware.py#L179..L180
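  The shape of the fix can be sketched as follows (simplified, not the
  actual middleware code; the report states the real code rebuilds the
  container on every pass of its loop):

```python
# Before (simplified): the membership container is written inside the
# loop body, recreated per iteration per the bug report.
def find_matches_before(items):
    matches = []
    for item in items:
        if item in ("login", "logout", "unauthorized"):
            matches.append(item)
    return matches

# After: build a set once, outside the loop; set membership tests are
# O(1) and the object is created a single time.
_SENTINELS = {"login", "logout", "unauthorized"}

def find_matches_after(items):
    return [item for item in items if item in _SENTINELS]

items = ["index", "login", "detail", "logout"]
```

Both versions return the same result; only the allocation behavior differs.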

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359245/+subscriptions



[Yahoo-eng-team] [Bug 1359246] [NEW] Documentation should accurately list supported browser levels

2014-08-20 Thread Doug Fish
Public bug reported:

Based on conversation in IRC 20 Aug 2014, our browser support is:
the latest Firefox, the latest Chrome, and IE 9+, with some lower-priority 
support for the latest Safari and the latest Opera.

Maybe this could be restated as:
Horizon is primarily tested and supported on the latest version of Firefox, the 
latest version of Chrome, and IE9+.  Issues related to Safari and Opera will 
also be considered.

This should be added to someplace prominent in the documentation.  Maybe
https://github.com/openstack/horizon/blob/master/doc/source/faq.rst.

The statement should be corrected on
/doc/source/contributing.rst

Statement can be refined as info is recorded in
https://wiki.openstack.org/wiki/Horizon/BrowserSupport

** Affects: horizon
 Importance: Medium
 Status: Triaged


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359246

Title:
  Documentation should accurately list supported browser levels

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  Based on conversation in IRC 20 Aug 2014, our browser support is:
  the latest Firefox, the latest Chrome, and IE 9+, with some lower-priority 
support for the latest Safari and the latest Opera.

  Maybe this could be restated as:
  Horizon is primarily tested and supported on the latest version of Firefox, 
the latest version of Chrome, and IE9+.  Issues related to Safari and Opera 
will also be considered.

  This should be added to someplace prominent in the documentation.
  Maybe
  https://github.com/openstack/horizon/blob/master/doc/source/faq.rst.

  The statement should be corrected on
  /doc/source/contributing.rst

  Statement can be refined as info is recorded in
  https://wiki.openstack.org/wiki/Horizon/BrowserSupport

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359246/+subscriptions



[Yahoo-eng-team] [Bug 1349491] Re: [OSSA 2014-027] Persistent XSS in the Host Aggregates interface (CVE-2014-3594)

2014-08-20 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1349491

Title:
  [OSSA 2014-027] Persistent XSS in the Host Aggregates interface
  (CVE-2014-3594)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Received 2014-07-28 18:08:47 +0200 via encrypted E-mail from "Dennis
  Felsch":

  Hi everyone,

  We spotted an issue with Horizon in OpenStack Icehouse and the current
  development version of Juno (older versions not tested):

  The interface for Host Aggregates is vulnerable to persistent XSS.

  Steps to reproduce the issue:

   * Log into Horizon as admin
   * Go to "Host Aggregates"
   * Create a new host aggregate
   * Enter some name and an availability zone like this: 
   * Save
   * See alert pop up

  Because we are researchers, we are happy to help you, whenever we can.
  However, from the research point of view, it would be really nice to get
  some acknowledgment on your site about this issue. Is something
  like this possible?

  The people working on this are:
  Dennis Felsch (me), dennis.fel...@ruhr-uni-bochum.de
  Mario Heiderich, mario.heider...@cure53.de

  Please let me know if you need more info.

  Greetings,
  Dennis
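  The general fix for this class of bug is to HTML-escape user-supplied
  values before rendering them (in Horizon/Django terms, not marking
  untrusted input as safe). The core mechanism is shown here with
  Python's stdlib `html.escape` rather than Django's escaping helpers,
  and with a stand-in payload since the reporters' exact payload is
  elided above:

```python
import html

# A hostile availability-zone name (illustrative payload).
user_input = '<script>alert(1)</script>'

# Escaping turns markup metacharacters into entities, so the browser
# renders the text literally instead of executing it.
escaped = html.escape(user_input)
print(escaped)  # &lt;script&gt;alert(1)&lt;/script&gt;
```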

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349491/+subscriptions



[Yahoo-eng-team] [Bug 1353602] Re: ServerSecurityGroupController doesn't handle case where there are no security groups

2014-08-20 Thread Brent Eagles
*** This bug is a duplicate of bug 1291489 ***
https://bugs.launchpad.net/bugs/1291489

It was originally reported against Havana. I would agree that the patch
and the backport should fix the issue, so I'll close this BZ. Thanks,
cyeoh, for spotting this (and lcostantino for fixing :)).

** This bug has been marked a duplicate of bug 1291489
   list-secgroup fail if no secgroups defined for server

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353602

Title:
  ServerSecurityGroupController doesn't handle case where there are no
  security groups

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  In nova/api/openstack/compute/contrib/security_groups.py, the
  ServerSecurityGroupController.index(...) method does not handle the
  situation where security_group_api.get_instance_security_groups()
  returns None (as might be the case when neutron is the security group
  impl) resulting in an exception when attempting to iterate on a
  NoneType.
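  The defect and the usual guard reduce to the following sketch (method
  names follow the report; the bodies are illustrative, not Nova's actual
  code):

```python
def get_instance_security_groups():
    # With the neutron security-group backend this may return None
    # rather than an empty list.
    return None

def index_broken():
    groups = get_instance_security_groups()
    # Iterating raises TypeError when groups is None.
    return [g["name"] for g in groups]

def index_fixed():
    # Normalize None to an empty list before iterating.
    groups = get_instance_security_groups() or []
    return [g["name"] for g in groups]
```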

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353602/+subscriptions



[Yahoo-eng-team] [Bug 1359326] [NEW] DVR routers should not be auto scheduled according to router_auto_schedule flag

2014-08-20 Thread Armando Migliaccio
Public bug reported:

DVR routers have specific rules regarding when and where the placement
of their namespaces (SNAT and IR namespaces) occurs; therefore, auto-scheduling
these routers should be avoided, as it may conflict with those rules.

We should also document inline what these rules are.

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359326

Title:
  DVR routers should not be auto scheduled according to
  router_auto_schedule flag

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  DVR routers have specific rules regarding when and where the placement
  of their namespaces (SNAT and IR namespaces) occurs; therefore, auto-
  scheduling these routers should be avoided, as it may conflict with
  those rules.

  We should also document inline what these rules are.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359326/+subscriptions



[Yahoo-eng-team] [Bug 1359376] [NEW] KeyError in GroupNotFound error path

2014-08-20 Thread Matthew Edmonds
Public bug reported:

Hit the following exception:

2014-07-23 04:31:08.206 4449 ERROR keystone.common.wsgi [-] 'group_id'
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in 
__call__
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi result = 
method(context, **params)
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/controller.py", line 196, in 
wrapper
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi return f(self, 
context, filters, **kwargs)
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 
876, in list_role_assignments
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi formatted_refs)
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 
811, in _
expand_indirect_assignments
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi members = 
_get_group_members(r)
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 
696, in _get_group_members
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi 'group': 
ref['group_id'], 'target': target,
2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi KeyError: 'group_id'

It appears that the dictionary format was changed and the error path
code was not updated to expect the new format.

** Affects: keystone
 Importance: Undecided
 Assignee: Matthew Edmonds (edmondsw)
 Status: In Progress

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
 Assignee: (unassigned) => Matthew Edmonds (edmondsw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1359376

Title:
  KeyError in GroupNotFound error path

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Hit the following exception:

  2014-07-23 04:31:08.206 4449 ERROR keystone.common.wsgi [-] 'group_id'
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in 
__call__
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/controller.py", line 196, in 
wrapper
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi return f(self, 
context, filters, **kwargs)
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 
876, in list_role_assignments
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi formatted_refs)
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 
811, in _
  expand_indirect_assignments
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi members = 
_get_group_members(r)
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 
696, in _get_group_members
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi 'group': 
ref['group_id'], 'target': target,
  2014-07-23 04:31:08.206 4449 TRACE keystone.common.wsgi KeyError: 'group_id'

  It appears that the dictionary format was changed and the error path
  code was not updated to expect the new format.
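  The failure mode reduces to error-path code indexing a dict whose key
  layout changed. A sketch of the pattern (the key layouts here are
  illustrative, not Keystone's exact formats):

```python
# Old format used a flat 'group_id' key; suppose the new format nests
# the id under a 'group' sub-dict.
new_style_ref = {"group": {"id": "g1"}, "target_id": "projX"}

def group_id_broken(ref):
    # The error path still assumes the old format.
    return ref["group_id"]  # KeyError against the new format

def group_id_fixed(ref):
    # Tolerate both layouts; the real fix is to update the error path
    # to the new format alongside the main path.
    if "group_id" in ref:
        return ref["group_id"]
    return ref["group"]["id"]
```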

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1359376/+subscriptions



[Yahoo-eng-team] [Bug 1359395] [NEW] JShint non-voting gate errors

2014-08-20 Thread Thai Tran
Public bug reported:

We are getting common JShint errors. Example links below:

http://logs.openstack.org/88/115588/1/check/gate-horizon-jshint/760a733/console.html
http://logs.openstack.org/78/109278/18/check/gate-horizon-jshint/9883c1a/console.html
http://logs.openstack.org/36/115736/1/check/gate-horizon-jshint/a993a33/console.html

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress

** Description changed:

  We are getting common JShint errors. Example link below:
- http://logs.openstack.org/88/115588/1/check/gate-horizon-
- jshint/760a733/console.html
+ 
+ 
http://logs.openstack.org/88/115588/1/check/gate-horizon-jshint/760a733/console.html
+ 
http://logs.openstack.org/78/109278/18/check/gate-horizon-jshint/9883c1a/console.html
+ 
http://logs.openstack.org/36/115736/1/check/gate-horizon-jshint/a993a33/console.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359395

Title:
  JShint non-voting gate errors

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We are getting common JShint errors. Example links below:

  
http://logs.openstack.org/88/115588/1/check/gate-horizon-jshint/760a733/console.html
  
http://logs.openstack.org/78/109278/18/check/gate-horizon-jshint/9883c1a/console.html
  
http://logs.openstack.org/36/115736/1/check/gate-horizon-jshint/a993a33/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359395/+subscriptions



[Yahoo-eng-team] [Bug 1359399] [NEW] [Bootstrap] Broken table inline edits

2014-08-20 Thread Thai Tran
Public bug reported:

After the bootstrap patch, inline table edit is missing a couple of
features:

1. The cell used to turn green with a check mark before fading out when the 
edit was successful
2. The error icon no longer shows up in the text area if the server returns a 400

I am sure more bugs will surface with more testing.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359399

Title:
  [Bootstrap] Broken table inline edits

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After the bootstrap patch, inline table edit is missing a couple of
  features:

  1. The cell used to turn green with a check mark before fading out when the 
edit was successful
  2. The error icon no longer shows up in the text area if the server returns a 
400

  I am sure more bugs will surface with more testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359399/+subscriptions



[Yahoo-eng-team] [Bug 1359407] [NEW] tempest errors in logs

2014-08-20 Thread Brant Knudson
Public bug reported:


This is regarding a tempest run on a Keystone change, here's the log: 
http://logs.openstack.org/73/111573/3/check/check-tempest-dsvm-full/f4e3313/console.html

All the tempest tests ran successfully. Then it runs the log checker and
there are several errors in the logs.

- Log File Has Errors: n-cond

nova.quota - Failed to commit reservations ...

- Log File Has Errors: n-cpu

There's several errors here:

glanceclient.common.http -- Request returned failure status 404.
(there's several of these)

oslo.messaging.rpc.dispatcher -- Exception during message handling: Unexpected 
task state: expecting (u'powering-off',) but the actual state is None
(this generates a lot of logs and there are several of them)

- Log File Has Errors: n-api

glanceclient.common.http - Request returned failure status 404.
(there's several of these)

- Log File Has Errors: g-api

glance.store.sheepdog [-] Error in store configuration: [Errno 2] No such file 
or directory
swiftclient [-] Container HEAD failed: 
http://127.0.0.1:8080/v1/AUTH_3c05c27e027f451b9837e04c9d8ae1e5/glance 404 Not 
Found

- Log File Has Errors: c-api

ERROR cinder.volume.api Volume status must be available to reserve
(There's 4 of these)

- Log File Has Errors: ceilometer-alarm-evaluator

ceilometer.alarm.service [-] alarm evaluation cycle failed
(several of these)

- Log File Has Errors: ceilometer-acentral

ERROR ceilometer.neutron_client [-] publicURL endpoint for network service not 
found
(there's several errors related to endpoints)

- Log File Has Errors: ceilometer-acompute

ceilometer.compute.pollsters.disk [-] Ignoring instance instance-0087: 
internal error: cannot find statistics for device 'virtio-disk1'
(there's a few of these)

** Affects: ceilometer
 Importance: Undecided
 Status: New

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359407

Title:
  tempest errors in logs

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  
  This is regarding a tempest run on a Keystone change, here's the log: 
http://logs.openstack.org/73/111573/3/check/check-tempest-dsvm-full/f4e3313/console.html

  All the tempest tests ran successfully. Then it runs the log checker
  and there are several errors in the logs.

  - Log File Has Errors: n-cond

  nova.quota - Failed to commit reservations ...

  - Log File Has Errors: n-cpu

  There's several errors here:

  glanceclient.common.http -- Request returned failure status 404.
  (there's several of these)

  oslo.messaging.rpc.dispatcher -- Exception during message handling: 
Unexpected task state: expecting (u'powering-off',) but the actual state is None
  (this generates a lot of logs and there are several of them)

  - Log File Has Errors: n-api

  glanceclient.common.http - Request returned failure status 404.
  (there's several of these)

  - Log File Has Errors: g-api

  glance.store.sheepdog [-] Error in store configuration: [Errno 2] No such 
file or directory
  swiftclient [-] Container HEAD failed: 
http://127.0.0.1:8080/v1/AUTH_3c05c27e027f451b9837e04c9d8ae1e5/glance 404 Not 
Found

  - Log File Has Errors: c-api

  ERROR cinder.volume.api Volume status must be available to reserve
  (There's 4 of these)

  - Log File Has Errors: ceilometer-alarm-evaluator

  ceilometer.alarm.service [-] alarm evaluation cycle failed
  (several of these)

  - Log File Has Errors: ceilometer-acentral

  ERROR ceilometer.neutron_client [-] publicURL endpoint for network service 
not found
  (there's several errors related to endpoints)

  - Log File Has Errors: ceilometer-acompute

  ceilometer.compute.pollsters.disk [-] Ignoring instance instance-0087: 
internal error: cannot find statistics for device 'virtio-disk1'
  (there's a few of these)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1359407/+subscriptions



[Yahoo-eng-team] [Bug 1359417] [NEW] Arista ML2 driver does not pass admin tenant name to EOS

2014-08-20 Thread Shashank Hegde
Public bug reported:

When the Arista ML2 driver registers itself with Arista EOS, it sends
the admin credentials to EOS, but it does not send the admin tenant name,
which causes Keystone authentication to fail.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359417

Title:
  Arista ML2 driver does not pass admin tenant name to EOS

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the Arista ML2 driver registers itself with Arista EOS, it sends
  the admin credentials to EOS, but it does not send the admin tenant
  name, which causes Keystone authentication to fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359417/+subscriptions



[Yahoo-eng-team] [Bug 1359416] [NEW] RPC callback should not use mixin approach

2014-08-20 Thread Akihiro Motoki
Public bug reported:

RPC is versioned per feature, but all of the callback classes are implemented 
as mixin classes and are inherited into a single class.
This prevents the RPC callback classes from being versioned properly.

This kind of chaos leads to bugs like bug 1353309.

This bug focuses on fixes of the server side of this problem.

# The agent-side implementation uses neutron.manager.Manager, and
# Manager requires callbacks as top-level methods. This leads to the mixin approach.

The following are the current status of the server side.

[l3-agent RPC callback]

- 1.1: NEC, Cisco N1KV, oneconvergence, ryu
- 1.2: Brocade, Hyper-V, LinuxBridge, MLNX, OVS
 version is bumped due to L2-plugin-agent RPC callback change
- 1.3: L3 Router plugin

All plugins above use l3-agent and RPC version should match.

[dhcp-agent RPC callback]

DHCP RPC version itself is 1.1.

The versions of all plugins which use dhcp-agent are:

- 1.1: nec, bigswitch, cisco n1kv, midonet, oneconvergence, vmware nsx
- 1.2: brocade, hyper-v, linuxbridge, mlnx, ovs
 the version is bumped due to L2 plugin agent RPC callback change
- 1.3: ml2
the version is bumped due to RPC callback changes in L2 plugin agent and 
DVR support.

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359416

Title:
  RPC callback should not use mixin approach

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  RPC is versioned per feature, but all of the callback classes are implemented
  as mixin classes and inherited into a single class.
  This prevents the RPC callback classes from being versioned properly.

  This kind of chaos leads to bugs like bug 1353309.

  This bug focuses on fixes of the server side of this problem.

  # Agent side implementation uses neutron.manager.Manager and
  # Manager requires callback as a top level method. It leads to mixin approach.

  The following are the current status of the server side.

  [l3-agent RPC callback]

  - 1.1: NEC, Cisco N1KV, oneconvergence, ryu
  - 1.2: Brocade, Hyper-V, LinuxBridge, MLNX, OVS
   version is bumped due to L2-plugin-agent RPC callback change
  - 1.3: L3 Router plugin

  All plugins above use l3-agent and RPC version should match.

  [dhcp-agent RPC callback]

  DHCP RPC version itself is 1.1.

  The versions of all plugins which use dhcp-agent are:

  - 1.1: nec, bigswitch, cisco n1kv, midonet, oneconvergence, vmware nsx
  - 1.2: brocade, hyper-v, linuxbridge, mlnx, ovs
   the version is bumped due to L2 plugin agent RPC callback change
  - 1.3: ml2
  the version is bumped due to RPC callback changes in L2 plugin agent and 
DVR support.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359416/+subscriptions



[Yahoo-eng-team] [Bug 1359427] [NEW] Potential codec errors during rescheduling operations

2014-08-20 Thread Joe Cropper
Public bug reported:

In the compute manager's 'retry' logic (i.e., handling
RescheduledException) in _build_and_run_instance, it has a catch-all
case for all exceptions and sets reason=str(e).  This works well in most
cases, but for cases in which lower level code may generate locale-
specific messages, it's possible to see the "UnicodeEncodeError: 'ascii'
codec can't encode character..." error, which ultimately masks the
message in the compute logs, etc.

It also means the instance won't be rescheduled as it fails with an
error similar to...

WARNING nova.compute.manager [req-7fe662b1-c947-4303-b43f-114fbdafe875
None] [instance: 333474eb-dd07-4e05-b920-787da0fd5b32] Unexpected build
failure, not rescheduling build.
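A sketch of the pattern that avoids this, assuming the fix is simply to convert the exception message to text explicitly rather than via `str(e)` (the helper name is hypothetical; on Python 2, `str()` on a non-ASCII message raises the `UnicodeEncodeError` described above):

```python
def exception_to_text(exc):
    """Convert an exception's message to text without assuming ASCII."""
    msg = exc.args[0] if exc.args else ""
    if isinstance(msg, bytes):
        # An undecoded byte message: decode leniently instead of blowing up.
        return msg.decode("utf-8", "replace")
    return str(msg)

# A locale-specific message survives intact instead of masking the real error:
e = ValueError("disque plein \u2013 r\u00e9essayez")
assert exception_to_text(e) == "disque plein \u2013 r\u00e9essayez"
```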

** Affects: nova
 Importance: Undecided
 Assignee: Joe Cropper (jwcroppe)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Joe Cropper (jwcroppe)

** Description changed:

  In the compute manager's 'retry' logic (i.e., handling
  RescheduledException) in _build_and_run_instance, it has a catch-all
  case for all exceptions and sets reason=str(e).  This works well in most
  cases, but for cases in which lower level code may generate locale-
  specific messages, it's possible to see the "UnicodeEncodeError: 'ascii'
  codec can't encode character..." error, which ultimately masks the
  message in the compute logs, etc.
+ 
+ It also means the instance won't be rescheduled as it fails with an
+ error similar to...
+ 
+ WARNING nova.compute.manager [req-7fe662b1-c947-4303-b43f-114fbdafe875
+ None] [instance: 333474eb-dd07-4e05-b920-787da0fd5b32] Unexpected build
+ failure, not rescheduling build.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359427

Title:
  Potential codec errors during rescheduling operations

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the compute manager's 'retry' logic (i.e., handling
  RescheduledException) in _build_and_run_instance, it has a catch-all
  case for all exceptions and sets reason=str(e).  This works well in
  most cases, but for cases in which lower level code may generate
  locale-specific messages, it's possible to see the
  "UnicodeEncodeError: 'ascii' codec can't encode character..." error,
  which ultimately masks the message in the compute logs, etc.

  It also means the instance won't be rescheduled as it fails with an
  error similar to...

  WARNING nova.compute.manager [req-7fe662b1-c947-4303-b43f-114fbdafe875
  None] [instance: 333474eb-dd07-4e05-b920-787da0fd5b32] Unexpected
  build failure, not rescheduling build.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359427/+subscriptions



[Yahoo-eng-team] [Bug 1359460] [NEW] l3 agent rescheduling query is not joining tables

2014-08-20 Thread Kevin Benton
Public bug reported:

The code to reschedule the L3 agents cross-references the heartbeat in
the agents table but does not join the records between the tables so
incorrect results are returned.
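A minimal, self-contained illustration of the failure mode using a hypothetical schema (sqlite3 stands in for Neutron's database): selecting from two tables without a join condition produces a cross product, so a "dead agent" filter matches every router, not just the routers bound to the dead agent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE agents (id INTEGER, alive INTEGER);
    CREATE TABLE bindings (router TEXT, agent_id INTEGER);
    INSERT INTO agents VALUES (1, 0), (2, 1);        -- agent 1 is dead
    INSERT INTO bindings VALUES ('r1', 1), ('r2', 2);
""")

# Missing join condition: every binding pairs with the dead agent's row.
bad = conn.execute(
    "SELECT router FROM bindings, agents WHERE agents.alive = 0").fetchall()

# Correct: join the tables on the agent id before filtering.
good = conn.execute(
    "SELECT router FROM bindings JOIN agents ON bindings.agent_id = agents.id "
    "WHERE agents.alive = 0").fetchall()
```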

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359460

Title:
  l3 agent rescheduling query is not joining tables

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The code to reschedule the L3 agents cross-references the heartbeat in
  the agents table but does not join the records between the tables so
  incorrect results are returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359460/+subscriptions



[Yahoo-eng-team] [Bug 1359475] [NEW] AttributeError: 'module' object has no attribute 'VIR_MIGRATE_LIVE'

2014-08-20 Thread Matt Riedemann
Public bug reported:

This commit added some new default flags for migration in the libvirt
driver:

https://github.com/openstack/nova/commit/26504d71ceaecf22f135d8321769db801290c405

However those new flags weren't added to
nova.tests.virt.libvirt.fakelibvirt, so if you're not running with a
real libvirt you're going to get this:

FAIL: 
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_live_migration_uses_migrateToURI_without_dest_listen_addrs
tags: worker-1
--
Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
INFO [nova.network.driver] Loading network driver 'nova.network.linux_net'
INFO [nova.virt.driver] Loading compute driver 'nova.virt.fake.FakeDriver'
WARNING [nova.virt.libvirt.firewall] Libvirt module could not be loaded. 
NWFilterFirewall will not work correctly.
ERROR [nova.virt.libvirt.driver] Live Migration failure: 'module' object has no 
attribute 'VIR_MIGRATE_LIVE'
}}}

traceback-1: {{{
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 112, in cleanUp
return self._cleanups(raise_errors=raise_first)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 88, in __call__
reraise(error[0], error[1], error[2])
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 82, in __call__
cleanup(*args, **kwargs)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/mox.py",
 line 286, in VerifyAll
mock_obj._Verify()
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/mox.py",
 line 506, in _Verify
raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
  0.  Stub for Domain.migrateToURI() -> None.__call__('qemu+tcp://dest/system', 
, None, 0) -> None
}}}

Traceback (most recent call last):
  File "nova/tests/virt/libvirt/test_driver.py", line 4607, in 
test_live_migration_uses_migrateToURI_without_dest_listen_addrs
migrate_data=migrate_data)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 393, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 404, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 454, in _matchHelper
mismatch = matcher.match(matchee)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
mismatch = matcher.match(matchee)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 385, in match
reraise(*matchee)
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
result = matchee()
  File 
"/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 902, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File "nova/virt/libvirt/driver.py", line 4798, in _live_migration
recover_method(context, instance, dest, block_migration)
  File "nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "nova/virt/libvirt/driver.py", line 4764, in _live_migration
flagvals = [getattr(libvirt, x.strip()) for x in flaglist]
AttributeError: 'module' object has no attribute 'VIR_MIGRATE_LIVE'

That is in basically anything that goes through the _live_migrate
method.
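A small sketch of the mechanism, assuming the driver resolves flag names with `getattr(libvirt, name)` as shown in the traceback (the constant values mirror the real libvirt bindings; the fake module here is illustrative): any constant missing from the fake module makes the whole lookup loop raise `AttributeError`.

```python
import types

# A stand-in for nova.tests.virt.libvirt.fakelibvirt missing one constant.
fakelibvirt = types.ModuleType("fakelibvirt")
fakelibvirt.VIR_MIGRATE_UNDEFINE_SOURCE = 16

flaglist = ["VIR_MIGRATE_UNDEFINE_SOURCE", "VIR_MIGRATE_LIVE"]
try:
    [getattr(fakelibvirt, f) for f in flaglist]
    resolved = True
except AttributeError:            # VIR_MIGRATE_LIVE missing from the fake
    resolved = False

# Adding the missing constant to the fake module fixes the lookup.
fakelibvirt.VIR_MIGRATE_LIVE = 1
flags = sum(getattr(fakelibvirt, f) for f in flaglist)
```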

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359475

Title:
  AttributeError: 'module' object has no attribute 'VIR_MIGRATE_LIVE'

Status in OpenStack Compute (Nova):
  New

Bug description:
  This commit added some new default flags for migration in the libvirt
  driver:

  
https://github.com/openstack/nova/commit/26504d71ceaecf22f135d8321769db801290c405

  However those new flags weren't added to
  nova.tests.virt.libvirt.fakelibvirt, so if you're not running with a
  

[Yahoo-eng-team] [Bug 1359504] [NEW] encoding error when delete role with windows 2012 ad identity backend

2014-08-20 Thread Yaguang Tang
Public bug reported:

yaguang@yaguang-ThinkPad-X230:~$ keystone --os-token beyond630 --os-endpoint 
http://localhost:35357/v2.0 role-delete 1c8551ebd5e341d7888fa2b31ce592eb
An unexpected error prevented the server from fulfilling your request: 'utf8' 
codec can't decode byte 0xfb in position 2: invalid start byte (Disable debug 
mode to suppress these details.) (HTTP 500)


2014-08-21 10:38:08.241 9132 DEBUG keystone.common.ldap.core [-] LDAP search: 
base=ou=Roles,dc=ubuntu,dc=com scope=2 
filterstr=(&(cn=1c8551ebd5e341d7888fa2b31ce592eb)(objectclass=organizationalRole))
 attrs=None attrsonly=0 search_s 
/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py:911
2014-08-21 10:38:08.695 9132 DEBUG keystone.common.ldap.core [-] LDAP unbind 
unbind_s 
/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py:884
2014-08-21 10:38:08.696 9132 DEBUG keystone.common.ldap.core [-] LDAP unbind 
unbind_s 
/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py:884
2014-08-21 10:38:08.696 9132 ERROR keystone.common.wsgi [-] 'utf8' codec can't 
decode byte 0xfb in position 2: invalid start byte
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/wsgi.py", line 214, 
in __call__
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi result = 
method(context, **params)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/assignment/controllers.py", 
line 232, in delete_role
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi 
self.assignment_api.delete_role(role_id)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/notifications.py", line 75, 
in wrapper
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi result = f(*args, 
**kwargs)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/assignment/core.py", line 
484, in delete_role
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi 
self.driver.delete_role(role_id)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/assignment/backends/ldap.py",
 line 227, in delete_role
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi return 
self.role.delete(role_id, self.project.tree_dn)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/assignment/backends/ldap.py",
 line 651, in delete
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi super(RoleApi, 
self).delete(role_id)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
1494, in delete
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi 
conn.delete_s(self._id_to_dn(object_id))
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
1264, in _id_to_dn
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi 'objclass': 
self.object_class})
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
926, in search_s
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi py_result = 
convert_ldap_result(ldap_result)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
154, in convert_ldap_result
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi for kind, values in 
six.iteritems(attrs
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
154, in 
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi for kind, values in 
six.iteritems(attrs
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
123, in ldap2py
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi return 
utf8_decode(val)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/home/yaguang/working/openstack/keystone/keystone/common/ldap/core.py", line 
84, in utf8_decode
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi return 
_utf8_decoder(value)[0]
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi return 
codecs.utf_8_decode(input, errors, True)
2014-08-21 10:38:08.696 9132 TRACE keystone.common.wsgi UnicodeDecodeError: 
'utf8' codec can't decode byte 0xfb in position 2: invalid start byte
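A hedged sketch of one possible fix (the helper name is hypothetical, not keystone's actual API): Active Directory returns some attributes, such as objectGUID and objectSid, as raw binary, so unconditionally UTF-8 decoding every LDAP value, as `utf8_decode` does in the trace above, raises `UnicodeDecodeError`; a tolerant converter can keep binary values as bytes.

```python
def ldap_value_to_py(value):
    """Decode an LDAP attribute value, keeping binary values as bytes."""
    if not isinstance(value, bytes):
        return value
    try:
        return value.decode("utf-8")
    except UnicodeDecodeError:
        return value              # leave binary attributes (e.g. GUIDs) intact

assert ldap_value_to_py(b"Member") == "Member"
guid = b"\x10\xfb\x02\x00"        # 0xfb is an invalid UTF-8 start byte
assert ldap_value_to_py(guid) == guid
```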

[Yahoo-eng-team] [Bug 1359510] [NEW] Network deallocation can fail with use_single_default_gateway

2014-08-20 Thread Vish Ishaya
Public bug reported:

There is a race condition in linux_net.get_dhcp_opts when
use_single_default_gateway is set. Because it makes multiple queries, if
an instance is deleted while iterating through the list of fixed ips, it
can cause an exception like the following:

2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1043, in 
update_dhcp
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
restart_dhcp(context, dev, network_ref)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1096, in 
restart_dhcp
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
write_to_file(optsfile, get_dhcp_opts(context, network_ref))
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1016, in 
get_dhcp_opts
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher context, 
instance_uuid)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 110, in wrapper
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher args, 
kwargs)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 425, in 
object_class_action
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
objver=objver, args=args, kwargs=kwargs)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 150, in 
call
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
412, in send
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
405, in _send
2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher raise 
result

While the query could be fixed to catch the exception, this code is also
horribly inefficient, so the logic needs to be fixed to only make one db
query.
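A sketch of the suggested single-query shape (names are illustrative, not the actual linux_net code): fetch all instances for the network once, index them by uuid, and skip fixed IPs whose instance has vanished instead of raising per-IP lookup errors.

```python
def get_dhcp_opts(fixed_ips, instances_by_uuid):
    """Build dnsmasq opts from one pre-fetched instance mapping.

    fixed_ips: list of dicts with 'instance_uuid' and 'address'.
    instances_by_uuid: {uuid: instance dict}, from a single bulk db query.
    """
    opts = []
    for ip in fixed_ips:
        inst = instances_by_uuid.get(ip["instance_uuid"])
        if inst is None:
            continue              # instance deleted concurrently: no exception
        opts.append("tag:%s,%s" % (inst["name"], ip["address"]))
    return opts
```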

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359510

Title:
  Network deallocation can fail with use_single_default_gateway

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a race condition in linux_net.get_dhcp_opts when
  use_single_default_gateway is set. Because it makes multiple queries,
  if an instance is deleted while iterating through the list of fixed
  ips, it can cause an exception like the following:

  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1043, in 
update_dhcp
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
restart_dhcp(context, dev, network_ref)
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1096, in 
restart_dhcp
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
write_to_file(optsfile, get_dhcp_opts(context, network_ref))
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1016, in 
get_dhcp_opts
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher 
context, instance_uuid)
  2014-08-20 19:44:31.371 10867 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 110, in wrappe

[Yahoo-eng-team] [Bug 1296148] Re: neutronclient should raise generic per-code exceptions

2014-08-20 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296148

Title:
  neutronclient should raise generic per-code exceptions

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in Python client library for Neutron:
  Fix Released

Bug description:
  Neutronclient now raises known exceptions (such as NetworkNotFound, 
PortNotFound,...) or generic NeutronClientException, and library users (like 
Horizon) cannot catch generic NotFound exception without checking the detail of 
NeutronClientException.
  Obviously neutronclient should raise individual exceptions based on response 
status code (400, 404, 409 and so on).

  By doing so, Horizon exception handling will be much simpler and I believe it 
brings the similar merits to other library users.
  In Horizon it is targeted to Icehouse release.

  It is related to bug 1284317, which is specific to Router Not Found
  exception, and this bug is intended to address the issue in more
  generic way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296148/+subscriptions



[Yahoo-eng-team] [Bug 1297309] Re: Unauthorized: Unknown auth strategy

2014-08-20 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297309

Title:
  Unauthorized: Unknown auth strategy

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  I have seen this error occasionally in various (Nova) logs:

  2014-03-25 10:42:58.182 31770 TRACE nova.api.openstack raise 
exceptions.Unauthorized(message=_('Unknown auth strategy'))
  2014-03-25 10:42:58.182 31770 TRACE nova.api.openstack Unauthorized: Unknown 
auth strategy

  Full stacktrace can be found with this logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hdXRob3JpemVkOiBVbmtub3duIGF1dGggc3RyYXRlZ3lcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk1NzU2MzQxNTc3fQ==

  As a first step, it would be nice for neutronclient to say what auth
  strategy it is unable to handle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297309/+subscriptions



[Yahoo-eng-team] [Bug 1335733] Re: Failed to boot instance with admin role

2014-08-20 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1335733

Title:
  Failed to boot instance with admin role

Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Neutron:
  Fix Released

Bug description:
  Sometimes we need to create instances with the admin role on behalf of
  other tenants. But if we boot the instance with a security group whose
  name is duplicated in Neutron, the user runs into an error and, from
  the GUI, cannot tell what happened.

  The root cause of this issue is that the Neutron client does not
  honour the tenant id, but fetches all the security group ids with the
  given name. So we need to fix this on both the Nova and the Neutron
  client side.
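A sketch of the client-side idea described above (field and function names are illustrative): when resolving a security group name, restrict the match to the given tenant instead of matching any group with that name cloud-wide.

```python
def find_security_group(groups, name, tenant_id):
    """Resolve a security group name within one tenant, or fail loudly."""
    matches = [g for g in groups
               if g["name"] == name and g["tenant_id"] == tenant_id]
    if len(matches) != 1:
        raise LookupError("expected exactly one group named %r in tenant "
                          "%s, got %d" % (name, tenant_id, len(matches)))
    return matches[0]
```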

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1335733/+subscriptions



[Yahoo-eng-team] [Bug 1241275] Re: Nova / Neutron Client failing upon re-authentication after token expiration

2014-08-20 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241275

Title:
  Nova / Neutron Client failing upon re-authentication after token
  expiration

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  By default, the token length for clients is 24 hours.  When that token
  expires (or is invalidated for any reason), nova should obtain a new
  token.

  Currently, when the token expires, it leads to the following fault:
  File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", 
line 136, in _get_available_networks
nets = neutron.list_networks(**search_opts).get('networks', [])
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 108, in with_params
ret = self.function(instance, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 325, in list_networks
**_params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1197, in list
for r in self._pagination(collection, path, **params):
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1210, in _pagination
res = self.get(path, params=params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1183, in get
headers=headers, params=params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1168, in retry_request
headers=headers, params=params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1103, in do_request
resp, replybody = self.httpclient.do_request(action, method, body=body)
  File "/usr/lib/python2.6/site-packages/neutronclient/client.py", line 
188, in do_request
self.authenticate()
  File "/usr/lib/python2.6/site-packages/neutronclient/client.py", line 
224, in authenticate
token_url = self.auth_url + "/tokens"
  TRACE nova.openstack.common.rpc.amqp TypeError: unsupported operand 
type(s) for +: 'NoneType' and 'str'

  This error is occurring because nova/network/neutronv2/__init__.py
  obtains a token for communication with neutron.  Nova is then
  authenticating the token (nova/network/neutronv2/__init__.py -
  _get_auth_token).  Upon authentication, it passes in the token into
  the neutron client (via the _get_client method).  It should be noted
  that the token is the main element passed into the neutron client
  (auth_url, username, password, etc... are not passed in as part of the
  request)

  Since nova is passing the token directly into the neutron client, nova
  does not validate whether or not the token is authenticated.

  After the 24 hour period of time, the token naturally expires.
  Therefore, when the neutron client goes to make a request, it catches
  an exceptions.Unauthorized block.  Upon catching this exception, the
  neutron client attempts to re-authenticate and then make the request
  again.

  The issue arises in the re-authentication of the token.  The neutron client's 
authenticate method requires that the following parameters are sent in from its 
users:
   - username
   - password
   - tenant_id or tenant_name
   - auth_url
   - auth_strategy

  Since the nova client is not passing these parameters in, the neutron
  client is failing with the exception above.

  Not all methods from the nova client are exposed to this.  Invocations
  to nova/network/neutronv2/__init__.py - get_client with an 'admin'
  value set to True will always get a new token.  However, the clients
  that invoke the get_client method without specifying the admin flag,
  or by explicitly setting it to False will be affected by this.  Note
  that the admin flag IS NOT determined based off the context's admin
  attribute.

  Methods from nova/network/neutronv2/api.py that are currently affected appear 
to be:
   - _get_available_networks
   - allocate_for_instance
   - deallocate_for_instance
   - deallocate_port_for_instance
   - list_ports
   - show_port
   - add_fixed_ip_to_instance
   - remove_fixed_ip_from_instance
   - validate_networks
   - _get_instance_uuids_by_ip
   - associate_floating_ip
   - get_all
   - get
   - get_floating_ip
   - get_floating_ip_pools
   - get_floating_ip_by_address
   - get_floating_ips_by_project
   - get_instance_id_by_floating_address
   - allocate_floating_ip
   - release_floating_ip
   - disassociate_floating_ip
   - _get_subnets_from_port

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241275/+subscriptions


[Yahoo-eng-team] [Bug 1359136] Re: vmware: display incorrect host_ip for hypervisor show

2014-08-20 Thread Thang Pham
The updated link is actually: http://docs.openstack.org/trunk/config-reference/content/vmware.html#VMwareVCDriver_details

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359136

Title:
  vmware: display incorrect host_ip for hypervisor show

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  With the vCenter driver, the compute node is configured to be
  cluster01, and the vCenter IP is 10.9.1.43.  However, the
  hypervisor-show command displays the host IP of the controller node;
  it should point to the real vCenter IP.

  [root@dhcp-10-9-3-83 ~]# nova hypervisor-show 29
  +---------------------------+------------------------------------------------------------------------------------------+
  | Property                  | Value                                                                                    |
  +---------------------------+------------------------------------------------------------------------------------------+
  | cpu_info_model            | ["Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz", "Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz"] |
  | cpu_info_topology_cores   | 24                                                                                       |
  | cpu_info_topology_threads | 48                                                                                       |
  | cpu_info_vendor           | ["IBM", "IBM"]                                                                           |
  | current_workload          | 0                                                                                        |
  | disk_available_least      | -                                                                                        |
  | free_disk_gb              | 763                                                                                      |
  | free_ram_mb               | 354728                                                                                   |
  | host_ip                   | 10.9.3.83                                                                                |
  | hypervisor_hostname       | domain-c17(cluster01)                                                                    |
  | hypervisor_type           | VMware vCenter Server                                                                    |
  | hypervisor_version        | 5005000                                                                                  |
  | id                        | 29                                                                                       |
  | local_gb                  | 794                                                                                      |
  | local_gb_used             | 31                                                                                       |
  | memory_mb                 | 362336                                                                                   |
  | memory_mb_used            | 7608                                                                                     |
  | running_vms               | 3                                                                                        |
  | service_host              | dhcp-10-9-3-83.sce.cn.ibm.com                                                            |
  | service_id                | 6                                                                                        |
  | vcpus                     | 48                                                                                       |
  | vcpus_used                | 5                                                                                        |
  +---------------------------+------------------------------------------------------------------------------------------+
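
  A hedged sketch of the mismatch, assuming the driver fills host_ip
  from the node's own my_ip setting rather than the configured vCenter
  endpoint.  The Conf class and function names below are illustrative
  only (they loosely mirror nova's my_ip and vmware.host_ip options),
  not the actual driver code:

```python
class Conf:
    # Values taken from the report above.
    my_ip = "10.9.3.83"             # controller/compute node address
    vmware_host_ip = "10.9.1.43"    # configured vCenter server address


def reported_host_ip(conf):
    # What hypervisor-show currently displays: the node's own IP.
    return conf.my_ip


def expected_host_ip(conf):
    # What the report argues it should display: the vCenter IP.
    return conf.vmware_host_ip
```

  With the values above the two functions disagree, which is exactly
  the discrepancy visible in the host_ip row of the table.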

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359136/+subscriptions



[Yahoo-eng-team] [Bug 1359017] Re: Returning unprecise error message when you create image with long name

2014-08-20 Thread Jin Liu
I tried this case on the latest Juno master, and it is OK now:

linux:˜/source> glance image-create --name="cirros-0.3.2-x86_64-" --disk-format=qcow2 \
>   --container-format=bare --is-public=true --min-disk=133766616 \
>   --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Request returned failure status 400.

400 Bad Request
Image name too long: 820
(HTTP 400)

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1359017

Title:
  Returning unprecise error message when you create image with long name

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Image creation fails when the image name is > 255 chars, but the error
  message does not indicate that name length is the issue.  Note that
  double byte names work if they do not exceed the Glance length limit.
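
  A minimal sketch of the limit being discussed, assuming the
  255-character cap applies to the character count rather than the byte
  count (which is why short double-byte names pass).  The function name
  and error text below are illustrative, not Glance's actual code:

```python
MAX_NAME_LEN = 255  # assumed Glance image-name length limit


def validate_image_name(name):
    # Counts characters, not bytes, so a double-byte (e.g. CJK) name
    # passes as long as it stays at or under 255 characters.
    if len(name) > MAX_NAME_LEN:
        raise ValueError("Image name too long: %d" % len(name))
    return name
```

  An 820-character name, as in the reply above, is rejected with a
  message stating the offending length, while a 255-character
  double-byte name is accepted.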

  Upon clicking the Import button in the GUI, the console displayed the
  500 internal error message below:
  ==
  Error
  An error occurred while creating image Test to see if text counter can
  handle double byte.Test to see if text counter can handle double byte..

  Explanation: The server encountered an unexpected error: 500 (Internal
  Server Error).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1359017/+subscriptions



[Yahoo-eng-team] [Bug 1359523] [NEW] Conntrack causes security group rule fails

2014-08-20 Thread Zhiyuan Cai
Public bug reported:

The following steps happen on the same host machine.

1. tenant1 creates vm1 with network net1 and IP 199.168.1.2
2. tenant1 creates vm2 with network net1 and IP 199.168.1.4
3. configure the security groups of vm1 and vm2 so that they can communicate over a TCP connection
4. tenant2 creates vm3 with network net2 and IP 199.168.1.2
5. tenant2 creates vm4 with network net2 and IP 199.168.1.4
6. configure the security groups of vm3 and vm4 so that they cannot communicate over a TCP connection
7. create a TCP connection between vm1 and vm2: it succeeds
8. create a TCP connection between vm3 and vm4 while vm1 and vm2 are still connected: it succeeds, although failure is expected

This problem occurs because the two connections share the same
5-tuple, so conntrack lets the packets between vm3 and vm4 pass.
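
The collision can be illustrated with a toy 5-tuple, assuming both
connections also happen to reuse the same source port (the port
numbers below are made up for the example):

```python
def five_tuple(proto, src_ip, src_port, dst_ip, dst_port):
    # The key conntrack uses to match packets to a tracked connection.
    return (proto, src_ip, src_port, dst_ip, dst_port)


# tenant1: vm1 -> vm2 on net1
flow_net1 = five_tuple("tcp", "199.168.1.2", 34567, "199.168.1.4", 80)
# tenant2: vm3 -> vm4 on net2, reusing the same overlapping addresses
flow_net2 = five_tuple("tcp", "199.168.1.2", 34567, "199.168.1.4", 80)

# conntrack cannot tell the two tenants' flows apart.
assert flow_net1 == flow_net2
```

Because the tuples are identical, packets of the tenant2 flow match
the already-ESTABLISHED tenant1 entry and bypass the security group
rules that should have blocked them.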

** Affects: neutron
 Importance: Undecided
 Assignee: Zhiyuan Cai (luckyvega-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Zhiyuan Cai (luckyvega-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359523

Title:
  Conntrack causes security group rule fails

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following steps happen in the same host machine.

  1. tenant1 create vm1 with network net1 and ip 199.168.1.2
  2. tenant1 create vm2 with network net1 and ip 199.168.1.4
  3. configure security group of vm1 and vm2 so they can communicate with tcp 
connetion
  4. tenant2 create vm3 with network net2 and ip 199.168.1.2
  5. tenant2 create vm4 with network net2 and ip 199.168.1.4
  6. configure security group of vm3 and vm4 so they can't communicate with tcp 
connetion
  7. create tcp connetion between vm1 and vm2, success
  8. create tcp connetion betwwen vm3 and vm4 when vm1 and vm2 are still 
connecting, success, which failure is expected

  This problem is caused since these two connections share the same
  5-tuple, so conntrack let the packets between vm3 and vm4 pass.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359523/+subscriptions
