[Yahoo-eng-team] [Bug 1451681] [NEW] new launch instance doesn't honor webroot setting

2015-05-04 Thread Matthias Runge
Public bug reported:

I moved my dashboard to /dashboard.

With the new launch instance workflow: after hitting "Launch Instance" the screen stays white, and Firebug reports:
"NetworkError: 404 Not Found - http:///static/angular/wizard/wizard.html"

Note that "/dashboard" is missing in front of "/static".
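
For reference, a minimal sketch (assuming the usual Horizon local_settings.py layout, where static paths are derived from WEBROOT when not set explicitly) of what the settings look like once the dashboard is moved; the Angular wizard template should then be requested under the same prefix:

# local_settings.py sketch -- assumes Horizon derives STATIC_URL from WEBROOT
# when STATIC_URL is not set explicitly.
WEBROOT = '/dashboard/'
# Static assets (including angular/wizard/wizard.html) should then be served
# from /dashboard/static/..., not from /static/ at the site root.
STATIC_URL = WEBROOT + 'static/'
LOGIN_URL = WEBROOT + 'auth/login/'
LOGOUT_URL = WEBROOT + 'auth/logout/'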

** Affects: horizon
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451681

Title:
  new launch instance doesn't honor webroot setting

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I moved my dashboard to /dashboard.

  With the new launch instance workflow: after hitting "Launch Instance" the screen stays white, and Firebug reports:
  "NetworkError: 404 Not Found - http:///static/angular/wizard/wizard.html"

  Note that "/dashboard" is missing in front of "/static".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451659] [NEW] cannot change domain for project creation

2015-05-04 Thread Steve Martinelli
Public bug reported:

When using identity v3 federation through websso, project creation appears to behave a bit differently.
When attempting to create a new project, the new project's domain name and ID are greyed out. I suspect this was intended to restrict users to creating projects only in their own domain.

There are possibly two issues here:
  - If the user has admin credentials, project creation shouldn't be limited to that domain.
  - If the user is a 'federated' user, they don't have a domain and won't be able to create a project. (Maybe this is a good thing.)

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "greyed_out_projects.png"
   
https://bugs.launchpad.net/bugs/1451659/+attachment/4391009/+files/greyed_out_projects.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451659

Title:
  cannot change domain for project creation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using identity v3 federation through websso, project creation appears to behave a bit differently.
  When attempting to create a new project, the new project's domain name and ID are greyed out. I suspect this was intended to restrict users to creating projects only in their own domain.

  There are possibly two issues here:
    - If the user has admin credentials, project creation shouldn't be limited to that domain.
    - If the user is a 'federated' user, they don't have a domain and won't be able to create a project. (Maybe this is a good thing.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451659/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451630] [NEW] The "PING" monitor in load balancing appears to work as a "TCP" monitor

2015-05-04 Thread haoliang
Public bug reported:

1. Under the test tenant, create network net1 with subnet subnet1, network address 192.168.1.0/24, and keep the other settings at their defaults.
2. Create router R1, attach an inner interface of R1 to subnet1, and set the external network for R1.
   Create VM1-1 (port 80 is denied by iptables) on subnet1, with the default security group and a CentOS image.
   Create VM1-2 (port 80 is denied by iptables) on subnet1, with the default security group and a CentOS image.
3. Create resource pool "FZ1" on subnet1, protocol HTTP, load balancing method ROUND_ROBIN.
   Add a VIP for "FZ1" with address 192.168.1.16, protocol port 80, and keep the other settings at their defaults.
4. Add members VM1-1 and VM1-2 to "FZ1", with weight 1 and protocol port 80.
5. Add a "PING" monitor (delay 1, timeout 1, max retries 1) associated with FZ1.
-> the status of VM1-1 and VM1-2 is INACTIVE but should be ACTIVE
6. Stop the iptables service in VM1-1 and VM1-2.
-> the status of VM1-1 and VM1-2 becomes ACTIVE
In summary, the "PING" monitor appears to work as a "TCP" monitor (a small sketch contrasting the two check types follows below).
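
For context, a minimal Python sketch (an illustration only, not the LBaaS/haproxy health-check implementation) contrasting the two monitor types; with port 80 blocked by iptables, a real PING check should still pass while a TCP connect check to port 80 fails:

import socket
import subprocess

def ping_check(host, timeout=1):
    # ICMP echo via the system ping binary: unaffected by an iptables rule
    # that only blocks TCP port 80.
    cmd = ['ping', '-c', '1', '-W', str(timeout), host]
    return subprocess.call(cmd) == 0

def tcp_check(host, port=80, timeout=1):
    # TCP connect check: fails while port 80 is filtered by iptables.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        sock.close()

# Expected for a real PING monitor while port 80 is blocked on a member:
#   ping_check(member_ip) -> True, tcp_check(member_ip, 80) -> False
# The behaviour reported above instead tracks tcp_check().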

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451630

Title:
  The "PING" monitor in load balance appear to work as "TCP" monitor

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Under the test tenant, create network net1 with subnet subnet1, network address 192.168.1.0/24, and keep the other settings at their defaults.
  2. Create router R1, attach an inner interface of R1 to subnet1, and set the external network for R1.
     Create VM1-1 (port 80 is denied by iptables) on subnet1, with the default security group and a CentOS image.
     Create VM1-2 (port 80 is denied by iptables) on subnet1, with the default security group and a CentOS image.
  3. Create resource pool "FZ1" on subnet1, protocol HTTP, load balancing method ROUND_ROBIN.
     Add a VIP for "FZ1" with address 192.168.1.16, protocol port 80, and keep the other settings at their defaults.
  4. Add members VM1-1 and VM1-2 to "FZ1", with weight 1 and protocol port 80.
  5. Add a "PING" monitor (delay 1, timeout 1, max retries 1) associated with FZ1.
  -> the status of VM1-1 and VM1-2 is INACTIVE but should be ACTIVE
  6. Stop the iptables service in VM1-1 and VM1-2.
  -> the status of VM1-1 and VM1-2 becomes ACTIVE
  In summary, the "PING" monitor appears to work as a "TCP" monitor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451621] [NEW] creating resources with an empty tenant_id succeeds when using noop driver

2015-05-04 Thread Aishwarya Thangappa
Public bug reported:

Please take a look at "test_create_listener_empty_tenant_id" method in
test_listeners_admin.py in the patch
https://review.openstack.org/#/c/173423 to reproduce the error.


Ongoing discussion:

In the case of the noop driver, when we try to create a resource (loadbalancer/listener/member/pool) with an empty tenant id, it gets created (as there is no validation). But in the case of a backend driver with validation for an empty/invalid tenant-id, it raises BadRequest, which is translated as a "driver-internal error" at the plugin layer.
-Santosh Sharma

I think the question would be whether the API should treat an empty-string tenant id the same as a tenant id which is None.
-Franklin Naval

I think this falls under the same discussion about tenant_id being validated for all admin requests. Neutron as a whole currently does not do this, and since we are an extension to neutron and don't control the wsgi layer, we usually just go with what they do. However, we could specifically validate that the string is non-empty in our plugin, but that would be a temporary hack until the larger neutron discussion completes.
-Brandon Logan
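
As a rough illustration of the temporary plugin-side check mentioned above (a hypothetical helper, not existing neutron-lbaas code):

# Hypothetical plugin-side guard: reject an empty (or whitespace-only)
# tenant_id before the request reaches any driver, so the noop driver
# cannot silently create the resource.
class InvalidTenantId(ValueError):
    pass

def validate_tenant_id(resource):
    tenant_id = resource.get('tenant_id')
    if not tenant_id or not tenant_id.strip():
        raise InvalidTenantId("tenant_id must be a non-empty string")
    return tenant_id

# Called from create_loadbalancer/create_listener/... before invoking the
# driver:
validate_tenant_id({'tenant_id': 'abc123'})   # passes
# validate_tenant_id({'tenant_id': ''})       # raises InvalidTenantId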

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-lbaas

** Tags added: neutron-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451621

Title:
  creating resources with an empty tenant_id succeeds when using noop
  driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Please take a look at "test_create_listener_empty_tenant_id" method in
  test_listeners_admin.py in the patch
  https://review.openstack.org/#/c/173423 to reproduce the error.

  
  Ongoing discussion:
  
  In the case of the noop driver, when we try to create a resource (loadbalancer/listener/member/pool) with an empty tenant id, it gets created (as there is no validation). But in the case of a backend driver with validation for an empty/invalid tenant-id, it raises BadRequest, which is translated as a "driver-internal error" at the plugin layer.
  -Santosh Sharma

  I think the question would be whether the API should treat an empty-string tenant id the same as a tenant id which is None.
  -Franklin Naval

  I think this falls under the same discussion about tenant_id being validated for all admin requests. Neutron as a whole currently does not do this, and since we are an extension to neutron and don't control the wsgi layer, we usually just go with what they do. However, we could specifically validate that the string is non-empty in our plugin, but that would be a temporary hack until the larger neutron discussion completes.
  -Brandon Logan

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451605] [NEW] Ironic admin_auth_token option should be deprecated

2015-05-04 Thread Eric Brown
Public bug reported:

The ironic driver has config options for admin_username, admin_password,
admin_auth_token so that the ironic client can authenticate using the
keystoneclient.

From nova/virt/ironic/driver.py:

cfg.StrOpt('admin_auth_token',
           help='Ironic keystone auth token.'),


The keystoneclient has deprecated admin_auth_token since Icehouse (at least),
and thus the ironic driver option should similarly be deprecated. The keystone
admin token is intended only for bootstrapping keystone, not for other services
to use.

https://github.com/openstack/python-keystoneclient/blob/stable/icehouse/keystoneclient/middleware/auth_token.py#L244

cfg.StrOpt('admin_token',
           secret=True,
           help='This option is deprecated and may be removed in a future'
                ' release. Single shared secret with the Keystone configuration'
                ' used for bootstrapping a Keystone installation, or otherwise'
                ' bypassing the normal authentication process. This option'
                ' should not be used, use `admin_user` and `admin_password`'
                ' instead.'),
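
One possible shape for the corresponding nova-side change, simply mirroring the keystoneclient help text quoted above (a sketch only; the actual patch may instead use oslo.config's deprecation machinery):

from oslo_config import cfg

# Sketch: flag the ironic driver option as deprecated, following the
# keystoneclient wording quoted above.
opts = [
    cfg.StrOpt('admin_auth_token',
               secret=True,
               help='DEPRECATED: Ironic keystone auth token. This option '
                    'should not be used; set admin_username and '
                    'admin_password instead.'),
]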

** Affects: nova
 Importance: Low
 Assignee: Eric Brown (ericwb)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Eric Brown (ericwb)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451605

Title:
  Ironic admin_auth_token option should be deprecated

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The ironic driver has config options for admin_username,
  admin_password, admin_auth_token so that the ironic client can
  authenticate using the keystoneclient.

  From nova/virt/ironic/driver.py:

  cfg.StrOpt('admin_auth_token',
             help='Ironic keystone auth token.'),

  
  The keystoneclient has deprecated admin_auth_token since Icehouse (at least),
  and thus the ironic driver option should similarly be deprecated. The keystone
  admin token is intended only for bootstrapping keystone, not for other services
  to use.

  https://github.com/openstack/python-keystoneclient/blob/stable/icehouse/keystoneclient/middleware/auth_token.py#L244

  cfg.StrOpt('admin_token',
             secret=True,
             help='This option is deprecated and may be removed in a future'
                  ' release. Single shared secret with the Keystone configuration'
                  ' used for bootstrapping a Keystone installation, or otherwise'
                  ' bypassing the normal authentication process. This option'
                  ' should not be used, use `admin_user` and `admin_password`'
                  ' instead.'),

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451595] [NEW] lbaas screen should explain modes / methods

2015-05-04 Thread Eric Peterson
Public bug reported:

The add pool screen has the various lbaas methods listed in a drop down.
It would be nice to explain the modes, as documented at:

http://docs.openstack.org/admin-guide-cloud/content/section_lbaas-overview.html

** Affects: horizon
 Importance: Undecided
 Assignee: Eric Peterson (ericpeterson-l)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Eric Peterson (ericpeterson-l)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451595

Title:
  lbaas screen should explain modes / methods

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The add pool screen has the various lbaas methods listed in a drop
  down.  It would be nice to explain the modes, as documented at:

  http://docs.openstack.org/admin-guide-cloud/content/section_lbaas-overview.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449531] Re: remote stack fails on autoscaling - redelegation depth

2015-05-04 Thread Steve Baker
Given my comment in bug 1451475, you may end up restructuring your
templates in a way which avoids this problem.

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449531

Title:
  remote stack fails on autoscaling - redelegation depth

Status in Orchestration API (Heat):
  Incomplete
Status in OpenStack Identity (Keystone):
  New

Bug description:
  On an autoscaling event (via heat-cfn-api), where the autoscaling group contains a resource of type OS::Heat::Stack, we get the error
  "HTTPInternalServerError: ERROR: Remote error: Forbidden Remaining redelegation depth of 0 out of allowed range of [0..3]".
  We tried with the default config and also with the following keystone.conf:
  [trust]
  allow_redelegation = true
  max_redelegation_count = 3
  enabled = true

  Logs and an example template are attached; we scale out by posting to the
  scale-up URL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1449531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451576] [NEW] subnetpool allocation not ensuring non-overlapping cidrs

2015-05-04 Thread Cedric Brandily
Public bug reported:

_get_allocated_cidrs[1] locks only the already-allocated subnets in a
subnetpool (with mysql/postgresql at least), which ensures we won't
allocate a cidr overlapping with existing cidrs, but nothing prevents a
concurrent subnet allocation from creating an overlapping subnet in the
same subnetpool.

[1]:
https://github.com/openstack/neutron/blob/5962d825a6c98225c51bc6dd304b5c1ac89035ef/neutron/ipam/subnet_alloc.py#L40-L44
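
For illustration, the pattern in question looks roughly like the sketch below (paraphrasing the linked code, not a verbatim copy): the row lock only covers subnets that already exist in the pool, so two concurrent transactions can read the same set of allocated cidrs and then each insert a new, overlapping subnet.

# Sketch paraphrasing the linked _get_allocated_cidrs: SELECT ... FOR UPDATE
# locks only existing rows, so it cannot block a concurrent transaction from
# inserting another subnet into the same pool.
def get_allocated_cidrs(session, subnet_model, subnetpool_id):
    query = session.query(subnet_model).with_lockmode('update')
    subnets = query.filter_by(subnetpool_id=subnetpool_id)
    return [subnet.cidr for subnet in subnets]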

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451576

Title:
  subnetpool allocation not ensuring non-overlapping cidrs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  _get_allocated_cidrs[1] locks only the already-allocated subnets in a
  subnetpool (with mysql/postgresql at least), which ensures we won't
  allocate a cidr overlapping with existing cidrs, but nothing prevents a
  concurrent subnet allocation from creating an overlapping subnet in the
  same subnetpool.

  [1]:
  https://github.com/openstack/neutron/blob/5962d825a6c98225c51bc6dd304b5c1ac89035ef/neutron/ipam/subnet_alloc.py#L40-L44

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451567] [NEW] XStatic-Magic-Search pypi package has errors

2015-05-04 Thread Drew Fisher
Public bug reported:

Two issues:

1:  The extracted source directory from the tarball is wrong

$ ls
XStatic-Magic-Search-0.2.0.1.tar.gz

$ tar xzf XStatic-Magic-Search-0.2.0.1.tar.gz

$ ls
xstatic-magic-search/  XStatic-Magic-Search-0.2.0.1.tar.gz

It should be Xstatic-Magic-Search instead of xstatic-magic-search

2:  The .git, .gitignore, and .gitreview files/directories are in the
tarball and should probably be removed.

$ cd xstatic-magic-search

$ ls -a
./ .gitignore  MANIFEST.in*  xstatic/
../.gitreview  README.txt*   XStatic_Magic_Search.egg-info/
.git/  dist/   setup.py*

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451567

Title:
  XStatic-Magic-Search pypi package has errors

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Two issues:

  1:  The extracted source directory from the tarball is wrong

  $ ls
  XStatic-Magic-Search-0.2.0.1.tar.gz

  $ tar xzf XStatic-Magic-Search-0.2.0.1.tar.gz

  $ ls
  xstatic-magic-search/  XStatic-Magic-Search-0.2.0.1.tar.gz

  It should be Xstatic-Magic-Search instead of xstatic-magic-search

  2:  The .git, .gitignore, and .gitreview files/directories are in the
  tarball and should probably be removed.

  $ cd xstatic-magic-search

  $ ls -a
  ./ .gitignore  MANIFEST.in*  xstatic/
  ../.gitreview  README.txt*   XStatic_Magic_Search.egg-info/
  .git/  dist/   setup.py*

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419613] Re: Fails to run unit test in red hat 6.5 with tox

2015-05-04 Thread Eugene Nikanorov
I'm not sure this bug should be filed against upstream neutron. Marking
as Invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419613

Title:
  Fails to run unit test in red hat 6.5 with tox

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Did the following steps to run the neutron unit tests on Red Hat 6.5 and failed
  to run tox:

  # Fresh installation of Red Hat 6.5
  # Installed the dependencies mentioned in the doc
  http://docs.openstack.org/developer/nova/devref/development.environment.html

   "sudo yum install python-devel openssl-devel python-pip git gcc libxslt-devel mysql-devel postgresql-devel libffi-devel libvirt-devel graphviz sqlite-devel"
   "pip install tox"

  # cd neutron
  # tox 
  Failed with the following error:
  [root@localhost neutron]# tox -v neutron.tests.unit.api
  Traceback (most recent call last):
    File "/usr/bin/tox", line 5, in <module>
      from pkg_resources import load_entry_point
    File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2655, in <module>
      working_set.require(__requires__)
    File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 648, in require
      needed = self.resolve(parse_requirements(requirements))
    File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve
      raise DistributionNotFound(req)
  pkg_resources.DistributionNotFound: argparse
  [root@localhost neutron]# 

  
  Even though the argparse module is installed successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451559] [NEW] subnetpool allocation not working with multiples subnets on a unique network

2015-05-04 Thread Cedric Brandily
Public bug reported:

The following scenario is not working:

#$ neutron subnetpool-create pool --pool-prefix 10.0.0.0/8 --default-prefixlen 24
#$ neutron net-create net
#$ neutron subnet-create net --name subnet0 10.0.0.0/24
#$ neutron subnet-create net --name subnet1 --subnetpool pool
>>> returns a 409

The last command fails because neutron tries to allocate to subnet1 the
first unallocated cidr from the 10.0.0.0/8 pool => 10.0.0.0/24, but
network net already has 10.0.0.0/24 as its cidr (subnet0), and overlapping
cidrs are disallowed on the same network! (A sketch of the expected
allocation behaviour follows below.)
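
For illustration, a small sketch (using netaddr, not the actual neutron allocator) of the allocation behaviour one would expect: skip pool prefixes that already overlap a subnet present on the target network.

from netaddr import IPNetwork, IPSet

def first_free_prefix(pool_cidr, prefixlen, existing_cidrs):
    # Walk candidate prefixes in the pool and return the first one that
    # does not overlap any subnet already present on the network.
    used = IPSet(existing_cidrs)
    for candidate in IPNetwork(pool_cidr).subnet(prefixlen):
        if not (used & IPSet([candidate])):
            return str(candidate)
    return None

# With subnet0 = 10.0.0.0/24 already on the network, subnet1 would be
# expected to get the next free /24:
print(first_free_prefix('10.0.0.0/8', 24, ['10.0.0.0/24']))  # 10.0.1.0/24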

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451559

Title:
  subnetpool allocation not working with multiples subnets on a unique
  network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following scenario is not working:

  #$ neutron subnetpool-create pool --pool-prefix 10.0.0.0/8 --default-prefixlen 24
  #$ neutron net-create net
  #$ neutron subnet-create net --name subnet0 10.0.0.0/24
  #$ neutron subnet-create net --name subnet1 --subnetpool pool
  >>> returns a 409

  The last command fails because neutron tries to allocate to subnet1 the
  first unallocated cidr from the 10.0.0.0/8 pool => 10.0.0.0/24, but
  network net already has 10.0.0.0/24 as its cidr (subnet0), and overlapping
  cidrs are disallowed on the same network!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450006] Re: Create port in both subnets ipv4 and ipv6 when specified only one subnet

2015-05-04 Thread Eugene Nikanorov
This behavior is a result of commit
https://review.openstack.org/#/c/113339/

It should have had DocImpact in the commit message.
Anyway, marking the bug as Invalid since the behavior is by design.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450006

Title:
  Create port in both subnets ipv4 and ipv6 when specified only one
  subnet

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When I try to create a port on a fresh devstack in a specific subnet:

  neutron port-create private --fixed-ip subnet_id=3f80621c-4b88-48bd-8794-2372ef56485c

  The port is created in both the ipv4 and ipv6 subnets:

  Created a new port:
  
  +-----------------------+--------------------------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                                        |
  +-----------------------+--------------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                                         |
  | allowed_address_pairs |                                                                                                              |
  | binding:vnic_type     | normal                                                                                                       |
  | device_id             |                                                                                                              |
  | device_owner          |                                                                                                              |
  | fixed_ips             | {"subnet_id": "3f80621c-4b88-48bd-8794-2372ef56485c", "ip_address": "10.0.0.18"}                            |
  |                       | {"subnet_id": "f7166154-5064-4e85-9ab2-10877ca4bfad", "ip_address": "fd67:d76a:29be:0:f816:3eff:feda:83a0"} |
  | id                    | b9811513-9a6c-4f2d-b7d4-583f32bfd8e5                                                                         |
  | mac_address           | fa:16:3e:da:83:a0                                                                                            |
  | name                  |                                                                                                              |
  | network_id            | 73c8160b-647a-4cbe-81a4-0adab9ac67c8                                                                         |
  | security_groups       | f5915ed5-67e6-425b-913d-38193ca16e22                                                                         |
  | status                | DOWN                                                                                                         |
  | tenant_id             | 9574611fa7c04fab87617412a409f606                                                                             |
  +-----------------------+--------------------------------------------------------------------------------------------------------------+

  skr ~ $ neutron port-list
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                                                    |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------------------+
  | b9811513-9a6c-4f2d-b7d4-583f32bfd8e5 |      | fa:16:3e:da:83:a0 | {"subnet_id": "3f80621c-4b88-48bd-8794-2372ef56485c", "ip_address": "10.0.0.18"}                            |
  |                                      |      |                   | {"subnet_id": "f7166154-5064-4e85-9ab2-10877ca4bfad", "ip_address": "fd67:d76a:29be:0:f816:3eff:feda:83a0"} |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451558] [NEW] subnetpool allocation not working with postgresql

2015-05-04 Thread Cedric Brandily
Public bug reported:

The following works with mysql but not with postgresql:


#$ neutron subnetpool-create pool --pool-prefix 10.0.0.0/8 --default-prefixlen 24
#$ neutron net-create net
#$ neutron subnet-create net --name subnet --subnetpool pool


The last command raises a 501 with postgresql, with the stacktrace [2] in
neutron-server, because _get_allocated_cidrs [1] performs a SELECT FOR UPDATE
with a JOIN on an empty select (allowed with mysql, not with postgresql).


[1]: 
https://github.com/openstack/neutron/blob/5962d825a6c98225c51bc6dd304b5c1ac89035ef/neutron/ipam/subnet_alloc.py#L40-L44
  query = session.query(models_v2.Subnet).with_lockmode('update')
  subnets = query.filter_by(subnetpool_id=self._subnetpool['id'])


[2]: neutron-server stacktrace
2015-05-04 21:47:01.939 ERROR neutron.api.v2.resource 
[req-a6c14f61-bdb2-4273-a231-df0a85fb33d8 demo 
b532b7a9302c45b18f06f68b41869ffa] create failed
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 461, in create
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 804, in create_subnet
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result, mech_context 
= self._create_subnet_db(context, subnet)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 795, in 
_create_subnet_db
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result = 
super(Ml2Plugin, self).create_subnet(context, subnet)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1389, in 
create_subnet
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource subnetpool_id)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1283, in 
_create_subnet_from_pool
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource ipam_subnet = 
allocator.allocate_subnet(context.session, req)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 141, in allocate_subnet
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return 
self._allocate_any_subnet(session, request)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 93, in 
_allocate_any_subnet
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource prefix_pool = 
self._get_available_prefix_list(session)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 48, in 
_get_available_prefix_list
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource allocations = 
self._get_allocated_cidrs(session)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 44, in 
_get_allocated_cidrs
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return (x.cidr for x 
in subnets)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2441, in 
__iter__
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2456, in 
_execute_and_instances
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 841, 
in execute
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return meth(self, 
multiparams, params)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 322, 
in _execute_on_connection
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return 
connection._execute_clauseelement(self, multiparams, params)
2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 938, 
in _execute_clauseelement
2015-05-0

[Yahoo-eng-team] [Bug 1449344] Re: When VM security group is not choosen, the packets are still blocked by default

2015-05-04 Thread Eugene Nikanorov
It's by-design behavior, marking the bug as Invalid.

** Summary changed:

- When VM security group is not choose,the packets is still block by security 
group
+ When VM security group is not choosen, the packets are still blocked by 
default

** Summary changed:

- When VM security group is not choosen, the packets are still blocked by 
default
+ When VM security group is not chosen, the packets are still blocked by default

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449344

Title:
  When VM security group is not chosen, the packets are still blocked by
  default

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  1.1 Under the test tenant, create network net1 with subnet subnet1, network address 192.168.1.0/24, and keep the other settings at their defaults.
  1.2 Create router R1, attach an inner interface of R1 to subnet1, and set the external network for R1.
  1.3 Create VM1-1 on subnet1 with the default security group; the firewall inside the VM is disabled.
  1.4 Edit the security groups of VM1-1 and remove the default security group, so VM1-1 now has no security group.
  1.5 VM1-1 fails to ping the subnet1 gateway 192.168.1.1.

  Capturing on the tap.xxx interface of the Linux bridge connected to VM1-1, we can see ICMP request packets going from VM1-1 to 192.168.1.1.
  Capturing on qvb.xxx, we can't see any packets. Therefore the packets are dropped by the security group, even though no security group is chosen for VM1-1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449546] Re: Neutron-LB Health monitor association not listed in Horizon Dashboard

2015-05-04 Thread Eugene Nikanorov
** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449546

Title:
  Neutron-LB Health monitor association not listed in Horizon Dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the LB Pool section of the Horizon dashboard,
  --> LB Pool --> Edit Pool --> Associate Monitor,

  it is expected that all the available health monitors be listed.
  But the list box is empty.

  Please find the attached screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449544] Re: Neutron-LB Health monitor association mismatch in horizon and CLI

2015-05-04 Thread Eugene Nikanorov
** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449544

Title:
  Neutron-LB Health monitor association mismatch in horizon and CLI

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a new pool is created, all the available health monitors are shown
  in the LB pool information in the Horizon dashboard.

  But in the CLI,

  neutron lb-pool-show <pool> shows no monitors associated with the newly created pool.
  Please refer to the attached "LB_HM_default_assoc_UI" and "LB_HM_default_assoc_CLI".

  After associating any health monitor with the pool via the CLI, the correct information is displayed in both the Horizon dashboard and the CLI.
  So it is only right after creating a new pool that the Horizon dashboard lists all the health monitors, which is wrong and needs to be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450414] Re: can't get authentication with os-token and os-url

2015-05-04 Thread Eugene Nikanorov
what auth_url are you providing when making the call?

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450414

Title:
  can't get authentication with os-token and os-url

Status in Python client library for Neutron:
  Incomplete

Bug description:
  Hi, I can't get authentication with os-token and os-url on the Juno
  python-neutronclient.

  On Icehouse, with os-token and os-url, we can get authentication.
  [root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 net-list
  +--------------------------------------+---------+------------------------------------------------------+
  | id                                   | name    | subnets                                              |
  +--------------------------------------+---------+------------------------------------------------------+
  | 06c5d426-ec2c-4a19-a5c9-cfd21cfb5a0c | ext-net | 38d87619-9c76-481f-bfe8-b301e05693d9 193.160.15.0/24 |
  +--------------------------------------+---------+------------------------------------------------------+

  But on Juno, it fails. The details:
  [root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 net-list --debug
  ERROR: neutronclient.shell Unable to determine the Keystone version to 
authenticate with using the given auth_url. Identity service may not support 
API version discovery. Please provide a versioned auth_url instead. 
  Traceback (most recent call last):
 File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 666, 
in run 
   self.initialize_app(remainder)
 File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 808, 
in initialize_app
   self.authenticate_user()
 File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 761, 
in authenticate_user
   auth_session = self._get_keystone_session()
 File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 904, 
in _get_keystone_session
   auth_url=self.options.os_auth_url)
 File "/usr/lib/python2.6/site-packages/neutronclient/shell.py", line 889, 
in _discover_auth_versions
   raise exc.CommandError(msg)
   CommandError: Unable to determine the Keystone version to authenticate with 
using the given auth_url. Identity service may not support API version 
discovery. Please provide a versioned auth_url instead. 
  Unable to determine the Keystone version to authenticate with using the given 
auth_url. Identity service may not support API version discovery. Please 
provide a versioned auth_url instead. 

  
  My solution is this:
  In /usr/lib/python2.6/site-packages/neutronclient/shell.py, modify the authenticate_user(self) method.

  Original:
  auth_session = self._get_keystone_session()

  Modified:
  auth_session = None
  auth = None
  if not self.options.os_token:
      auth_session = self._get_keystone_session()
      auth = auth_session.auth

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1450414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450625] Re: common service chaining driver API

2015-05-04 Thread Eugene Nikanorov
Usually new features are tracked via blueprints, not bugs, and require a
spec to be merged.

Is there a blueprint regarding this feature?

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450625

Title:
  common service chaining driver API

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  This feature/bug is related to bug #1450617 (Neutron extension to
  support service chaining)

  Bug #1450617 is to add a neutron port chaining API and an associated "neutron port chain manager" to support service chaining functionality. Between the "neutron port chain manager" and the underlying service chain drivers, a common service chain driver API shim layer is needed to allow different types of service chain drivers (e.g. an OVS driver or different SDN controller drivers) to be integrated into Neutron. Different service chain drivers may have different ways of constructing the service chain path and use different data path encapsulation and transport to steer the flow through the chain path. With one common interface between the Neutron service chain manager and the various vendor-specific drivers, the driver design/implementation can be changed without changing the Neutron service chain manager and the interface APIs.

  This interface should include the following entities (a hypothetical sketch of such an interface follows below):

   * An ordered list of service function instance clusters. Each service instance cluster represents a group of like service function instances which can be used for load distribution.
   * Traffic flow classification rules, consisting of a set of flow descriptors.
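
To make the intent more concrete, a purely hypothetical sketch of what such a driver-facing shim could look like (the names are illustrative only, not an existing Neutron interface):

import abc

class ServiceChainDriverBase(object):
    """Hypothetical driver API sketch for the shim layer described above."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def create_port_chain(self, context, instance_clusters, flow_classifiers):
        # instance_clusters: ordered list, one entry per hop; each entry is
        # a group of equivalent service function instances usable for load
        # distribution.
        # flow_classifiers: flow descriptors selecting the traffic to steer
        # through the chain.
        pass

    @abc.abstractmethod
    def delete_port_chain(self, context, chain_id):
        # Tear down a previously created chain path.
        pass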

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451506] [NEW] spawn failed with "libvirtError: internal error: received hangup / error event on socket" in the gate

2015-05-04 Thread Matt Riedemann
Public bug reported:

Looks like libvirt was temporarily disconnected which caused the spawn
failure:

http://logs.openstack.org/80/170780/11/gate/gate-tempest-dsvm-postgres-
full/b2e3fd4/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-05-04_16_49_01_948

2015-05-04 16:49:01.948 ERROR nova.compute.manager 
[req-e9676551-d3ed-4f85-ab2d-34ce9e1b1446 TestServerAdvancedOps-1668477466 
TestServerAdvancedOps-559926363] [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] Instance failed to spawn
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] Traceback (most recent call last):
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2475, in _build_resources
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] yield resources
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2347, in 
_build_and_run_instance
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] block_device_info=block_device_info)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2355, in spawn
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] block_device_info=block_device_info)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4393, in 
_create_domain_and_network
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] power_on=power_on)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4324, in _create_domain
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] LOG.error(err)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] six.reraise(self.type_, self.value, 
self.tb)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4308, in _create_domain
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] domain = self._conn.defineXML(xml)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] rv = execute(f, *args, **kwargs)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] six.reraise(c, e, tb)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] rv = meth(*args, **kwargs)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 3263, in defineXML
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] if ret is None:raise 
libvirtError('virDomainDefineXML() failed', conn=self)
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] libvirtError: internal error: received 
hangup / error event on socket
2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 
2d8249a5-df5a-

[Yahoo-eng-team] [Bug 1451399] Re: subnet-create arguments order is too strict (not backward compatibility)

2015-05-04 Thread Eugene Nikanorov
I believe the cidr should go last per the command help, shouldn't it?

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451399

Title:
  subnet-create arguments order is too strict (not backward
  compatibility)

Status in Python client library for Neutron:
  Incomplete

Bug description:
  Changing the argument order causes a CLI error:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create public --gateway 
10.35.178.254  10.35.178.0/24 --name public_subnet
  Invalid values_specs 10.35.178.0/24

  Changing order:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create --gateway 
10.35.178.254 --name public_subnet public 10.35.178.0/24
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | {"start": "10.35.178.1", "end": "10.35.178.253"} |
  | cidr  | 10.35.178.0/24   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 10.35.178.254|
  | host_routes   |  |
  | id| 19593f99-b13c-4624-9755-983d7406cb47 |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | public_subnet|
  | network_id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | subnetpool_id |  |
  | tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1451399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451391] Re: --router:external=True syntax is invalid

2015-05-04 Thread Eugene Nikanorov
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451391

Title:
  --router:external=True syntax is invalid

Status in Python client library for Neutron:
  New

Bug description:
  The Kilo syntax is not backward compatible:

  [root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external=True
  usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
[--max-width ] [--prefix PREFIX]
[--request-format {json,xml}]
[--tenant-id TENANT_ID] [--admin-state-down]
[--shared] [--router:external]
[--provider:network_type ]
[--provider:physical_network 
]
[--provider:segmentation_id ]
[--vlan-transparent {True,False}]
NAME
  neutron net-create: error: argument --router:external: ignored explicit 
argument u'True'

  Current syntax supports:
  [root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | mtu   | 0|
  | name  | public   |
  | provider:network_type | vlan |
  | provider:physical_network | physnet  |
  | provider:segmentation_id  | 193  |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1451391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451498] [NEW] lbaas add vip screen should show subnet name

2015-05-04 Thread Eric Peterson
Public bug reported:

The lbaas add vip screen has a dropdown of subnets to select.  Right now
only the cidr is displayed, which is not always helpful.

I'd like to see the subnet name displayed as well.
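
For illustration, a tiny sketch of the kind of label I have in mind (a hypothetical helper, not existing Horizon code):

def subnet_choice_label(name, cidr):
    # Show both pieces of information in the dropdown, falling back to the
    # cidr alone when the subnet has no name.
    return "%s (%s)" % (name, cidr) if name else cidr

print(subnet_choice_label("backend-subnet", "10.0.1.0/24"))  # backend-subnet (10.0.1.0/24)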

** Affects: horizon
 Importance: Undecided
 Assignee: Eric Peterson (ericpeterson-l)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Eric Peterson (ericpeterson-l)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451498

Title:
  lbaas add vip screen should show subnet name

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The lbaas add vip screen has a dropdown of subnets to select.  Right
  now only the cidr is displayed, which is not always helpful.

  I'd like to see the subnet name displayed as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451492] [NEW] DELETE /v2.0/ports/uuid.json causes SQL error in log

2015-05-04 Thread jason bishop
Public bug reported:


I'm running 20 tempest test_server_basic_ops scenario tests at the same time,
and after a few iterations one fails during teardown with this stack:

2015-05-04 08:57:25.769 20904 ERROR neutron.api.v2.resource 
[req-15065032-6513-4433-81b8-89bc53ea8c6a None] delete failed
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 476, in delete
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 1019, in 
delete_port
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource context, id, 
do_notify=False)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_dvr_db.py", line 203, in 
disassociate_floatingips
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
do_notify=do_notify)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 1185, in 
disassociate_floatingips
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource context, 
port_id)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 915, in 
disassociate_floatingips
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 'router_id': 
None})
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 470, in 
__exit__
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource self.rollback()
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in 
__exit__
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 467, in 
__exit__
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource self.commit()
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 377, in 
commit
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self._prepare_impl()
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 357, in 
_prepare_impl
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self.session.flush()
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1919, in 
flush
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self._flush(objects)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2037, in 
_flush
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in 
__exit__
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2001, in 
_flush
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
flush_context.execute()
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
rec.execute(self)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 526, in 
execute
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource uow
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 60, in 
save_obj
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource mapper, table, 
update)
2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 536, in 
_emit_update_stateme

[Yahoo-eng-team] [Bug 1451465] Re: listener's operating status is not 'ONLINE' when lb's admin_state_up is false and listener's admin_state_up is true

2015-05-04 Thread Aishwarya Thangappa
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451465

Title:
  listener's operating status is not 'ONLINE' when lb's admin_state_up
  is false and listener's admin_state_up is true

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  On testing the following scenario,

  1.  Create load balancer with valid attributes and admin_state_up is set to 
False.
  2.  Create a listener with valid attributes and admin_state_up is set to True.
  3.  Verify that the status tree for operating status is set to ONLINE for 
listener and DISABLED for load balancer.
  4.  Verify all entities attributes are correct.

  I could see that the operating status for the listener is also DISABLED,
  while we expect it to be ONLINE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450658] Re: VolumeBackendAPIException during _shutdown_instance are not handled

2015-05-04 Thread jichenjc
** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450658

Title:
  VolumeBackendAPIException during _shutdown_instance are not handled

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Cinder:
  New

Bug description:
  Used Rally for a Cinder boot-from-volume test. After the test completed, many 
VMs failed to delete and were left in ERROR status; retrying the delete manually 
a few times did not help.
  [root@abba-n04 home]#  nova list --all-tenants
  
+--+---+--+++-+--+
  | ID   | Name  | 
Tenant ID| Status | Task State | Power State | Networks 
|
  
+--+---+--+++-+--+
  | 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89 | rally_novaserver_aftvfqejftusyflf | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  | 888aa512-07f0-42ad-ae5e-255bbca6fe34 | rally_novaserver_ajqhjxjxnjojelgm | 
7e05339543e743c3b023cee8128e | ERROR  | -  | NOSTATE |  
|
  | 84b32483-2997-4a2a-8d75-236395b4ef2f | rally_novaserver_arycquunqovvyxnl | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  | 535f1ce1-63fe-4681-b215-e21d38f77fbb | rally_novaserver_arzjesynagiqfjgt | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  | 6ef3fa37-e5a9-4a21-8996-d291070c246a | rally_novaserver_aufevkgzvynumwwo | 
d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  
|
  | e5d2dc5f-6e86-43e2-8f1f-1a3f6d4951bc | rally_novaserver_ayzjeqcplouwcaht | 
d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  
|
  | 2dcb294a-e2cc-4989-9e87-74081f1567db | rally_novaserver_bbphsjrexkcgcjtt | 
7e05339543e743c3b023cee8128e | ERROR  | -  | NOSTATE |  
|
  | 88053991-2fab-4442-86c7-7825ed47ff0a | rally_novaserver_beveqwdokixwdbgi | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  | 1e109862-34ea-4686-a099-28f5840244cf | rally_novaserver_bimlwsfkndjbrczv | 
d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  
|
  | 6143bbb2-d3eb-4635-9554-85b30fbb5aa5 | rally_novaserver_bmycsptyoicymdmb | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  | 5e9308ae-9736-485f-92b7-e6783c613ba1 | rally_novaserver_bsvcnsqeyawcnbgp | 
d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  
|
  | 13fb2413-15b6-41a3-a8a0-a0b775747261 | rally_novaserver_bvfyfnnixwgisbkk | 
d37e8219a3de4e5b985828a0b959f1d6 | ERROR  | -  | NOSTATE |  
|
  | d5ce6fa3-99b5-4d61-ac7c-4b5bc13d1c27 | rally_novaserver_cpbevhulxylepvql | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  | 23ce9007-ab43-4b57-8686-4aa1ce336f09 | rally_novaserver_cswudaopawepnmms | 
7e05339543e743c3b023cee8128e | ERROR  | -  | NOSTATE |  
|

  [root@abba-n04 home]# nova delete 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89
  Request to delete server 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89 has been 
accepted.
  [root@abba-n04 home]# nova  list --all | grep  
335f2ca0-a86d-45a5-b4a6-4d4ea1930e89
  | 335f2ca0-a86d-45a5-b4a6-4d4ea1930e89 | rally_novaserver_aftvfqejftusyflf | 
3e87ac6d12264286a10ac68eb913dacb | ERROR  | -  | NOSTATE |  
|
  [root@abba-n04 home]#

  The system is using local LVM volumes attached via iSCSI.  It appears
  that something is going wrong when the attempt to detach the volume is
  being made:

  2015-04-29 07:26:02.680 21775 DEBUG oslo_concurrency.processutils 
[req-fec5ff89-5465-4e76-b573-6a5afc0d4ee2 a9d0160b91f2412d84972a615b7547dc 
828348052e494d76a401f669f85829f3 - - -] Running cmd (subprocess): sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator 
iqn.2010-10.org.openstack:volume-a2907017-dda6-4243-bc50-85fe2164f05f 
iqn.1994-05.com.redhat:734fb33285d0 execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
  2015-04-29 07:26:02.813 21775 DEBUG oslo_concurrency.processutils 
[req-fec5ff89-5465-4e76-b573-6a5afc0d4ee2 a9d0160b91f2412d84972a615b7547dc 
828348052e494d76a401f669f85829f3 - - -] CMD "sudo cinder-rootwrap 
/etc/cinder/rootwrap.conf cinder-rtstool delete-initiator 
iqn.2010-10.org.openstack:volume-a2907017-dda6-4243-bc50-85fe2164f05f 
iqn.1994-05.com.redhat:734fb33285d0" returned: 1 in 0.133s execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processuti
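
For illustration, a minimal sketch of the fix direction: treat the detach as
best effort so a failing volume backend cannot leave the instance undeletable.
The helper name, the volume_api/bdms parameters and the exception scope are
assumptions, not Nova's actual code.

    from cinderclient import exceptions as cinder_exc


    def detach_volumes(volume_api, context, bdms, instance, log):
        """Best-effort detach so a broken backend cannot block the delete."""
        for bdm in bdms:
            try:
                volume_api.terminate_connection(context, bdm.volume_id,
                                                connector={})
                volume_api.detach(context, bdm.volume_id)
            except cinder_exc.ClientException:
                # Log and keep going: the instance delete should still finish
                # even if the iSCSI target cleanup on the Cinder side failed.
                log.exception("Failed to detach volume %s from instance %s",
                              bdm.volume_id, instance.uuid)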

[Yahoo-eng-team] [Bug 1451468] [NEW] support_matrix.py does not render URLs in notes

2015-05-04 Thread Markus Zoeller
Public bug reported:

The sphinx extension which renders the support_matrix.ini file into HTML [1] 
does not recognize URLs in the notes and does not render them as HTML anchor 
elements.
I tried the Sphinx backquote notation as well as a direct HTML anchor element, 
but neither is rendered as a link.

See comment in patch set 6 of https://review.openstack.org/#/c/172391/

I'm not sure if this is a bug; it may be more of a "wishlist" item.

[1]
https://github.com/openstack/nova/blob/master/doc/ext/support_matrix.py#L414
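
For illustration, a minimal sketch (not the extension's actual code) of one
way bare URLs in a note string could be turned into clickable docutils
reference nodes instead of plain text:

    import re

    from docutils import nodes

    URL_RE = re.compile(r'(https?://\S+)')


    def note_to_nodes(text):
        """Wrap any bare URL found in a note in a docutils reference node."""
        result = []
        for part in URL_RE.split(text):
            if URL_RE.match(part):
                result.append(nodes.reference(part, part, refuri=part))
            else:
                result.append(nodes.Text(part))
        return result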

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451468

Title:
  support_matrix.py does not render URLs in notes

Status in OpenStack Compute (Nova):
  New

Bug description:
  The sphinx extension which renders the support_matrix.ini file into HTML [1] 
does not recognize URLs in the notes and does not render them as HTML anchor 
elements.
  I tried the Sphinx backquote notation as well as a direct HTML anchor 
element, but neither is rendered as a link.

  See comment in patch set 6 of https://review.openstack.org/#/c/172391/

  I'm not sure if this is a bug; it may be more of a "wishlist" item.

  [1]
  https://github.com/openstack/nova/blob/master/doc/ext/support_matrix.py#L414

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451465] [NEW] listener's operating status is not 'ONLINE' when lb's admin_state_up is false and listener's admin_state_up is true

2015-05-04 Thread Aishwarya Thangappa
Public bug reported:

On testing the following scenario,

1.  Create load balancer with valid attributes and admin_state_up is set to 
False.
2.  Create a listener with valid attributes and admin_state_up is set to True.
3.  Verify that the status tree for operating status is set to ONLINE for 
listener and DISABLED for load balancer.
4.  Verify all entities attributes are correct.

I could see that the operating status for the listener is also DISABLED,
while we expect it to be ONLINE.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451465

Title:
  listener's operating status is not 'ONLINE' when lb's admin_state_up
  is false and listener's admin_state_up is true

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On testing the following scenario,

  1.  Create load balancer with valid attributes and admin_state_up is set to 
False.
  2.  Create a listener with valid attributes and admin_state_up is set to True.
  3.  Verify that the status tree for operating status is set to ONLINE for 
listener and DISABLED for load balancer.
  4.  Verify all entities attributes are correct.

  I could see that the operating status for the listener is also DISABLED,
  while we expect it to be ONLINE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2015-05-04 Thread Davanum Srinivas (DIMS)
We need to use self._logger directly in Nova and deprecate
WritableLogger in oslo.log
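
For illustration, a minimal sketch (assumed function shape, not the actual
Nova patch) of what "use self._logger directly" means once eventlet >= 0.17.2
is the minimum version:

    import logging

    import eventlet.wsgi

    LOG = logging.getLogger(__name__)


    def serve(sock, app):
        # Before: eventlet.wsgi.server(sock, app, log=loggers.WritableLogger(LOG))
        # eventlet >= 0.17.2 accepts a standard logging.Logger directly, so the
        # WritableLogger wrapper is no longer needed.
        eventlet.wsgi.server(sock, app, log=LOG)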

** Summary changed:

- Deprecate WritableLogger as eventlet has a real logger interface in 0.17.2
+ Remove WritableLogger as eventlet has a real logger interface in 0.17.2

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in OpenStack Compute (Nova):
  In Progress
Status in Logging configuration library for OpenStack:
  New

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447122] Re: Username is not at the top in the user_create_form

2015-05-04 Thread utsav dusad
This is working fine now.

** Attachment added: "capture.png"
   
https://bugs.launchpad.net/horizon/+bug/1447122/+attachment/4390692/+files/capture.png

** Changed in: horizon
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447122

Title:
  Username is not at the top in the user_create_form

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the latest upstream Identity -> Users -> Create User form,

  the top field of the user_create_form should be User Name*,

  but now it is Password.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443798] Re: Restrict netmask of CIDR to avoid DHCP resync

2015-05-04 Thread Thierry Carrez
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Incomplete

** Changed in: neutron/juno
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443798

Title:
  Restrict netmask of CIDR to avoid DHCP resync

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Incomplete
Status in neutron juno series:
  Incomplete
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisories:
  Confirmed

Bug description:
  If any tenant creates an IPv4 subnet with a netmask of /31 or /32,
  the network's IP addresses will fail to be generated, and that
  will cause constant resyncs and neutron-dhcp-agent malfunction.

  [Example operation 1]
   - Create subnet from CLI, with CIDR /31 (CIDR /32 has the same result).

  $ neutron subnet-create net 192.168.0.0/31 --name sub
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 192.168.0.0/31   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 192.168.0.1  |
  | host_routes   |  |
  | id| 42a91f59-1c2d-4e33-9033-4691069c5e4b |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | sub  |
  | network_id| 65cc6b46-17ec-41a8-9fe4-5bf93fc25d1e |
  | subnetpool_id |  |
  | tenant_id | 4ffb89e718d346b48fdce2ac61537bce |
  +---+--+

  [Example operation 2]
   - Create subnet from API, with cidr /32 (CIDR /31 has the same result).

  $ curl -i -X POST -H "content-type:application/json" -d '{"subnet": { "name": 
"badsub", "cidr" : "192.168.0.0/32", "ip_version": 4, "network_id": "8
  8143cda-5fe7-45b6-9245-b1e8b75d28d8"}}' -H "x-auth-token:$TOKEN" 
http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-4e7e74c0-0190-4a69-a9eb-93d545e8aeef
  Date: Thu, 16 Apr 2015 19:21:20 GMT

  {"subnet": {"name": "badsub", "enable_dhcp": true, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8", "tenant_id":
  "4ffb89e718d346b48fdce2ac61537bce", "dns_nameservers": [],
  "gateway_ip": "192.168.0.1", "ipv6_ra_mode": null, "allocation_pools":
  [], "host_routes": [], "ip_version": 4, "ipv6_address_mode": null,
  "cidr": "192.168.0.0/32", "id": "d210d5fd-8b3b-4c0e-b5ad-
  41798bd47d97", "subnetpool_id": null}}

  [Example operation 3]
   - Create subnet from API, with empty allocation_pools.

  $ curl -i -X POST -H "content-type:application/json" -d '{"subnet": { "name": 
"badsub", "cidr" : "192.168.0.0/24", "allocation_pools": [], "ip_version": 4, 
"network_id": "88143cda-5fe7-45b6-9245-b1e8b75d28d8"}}' -H 
"x-auth-token:$TOKEN" http://192.168.122.130:9696/v2.0/subnets
  HTTP/1.1 201 Created
  Content-Type: application/json; charset=UTF-8
  Content-Length: 410
  X-Openstack-Request-Id: req-54ce81db-b586-4887-b60b-8776a2ebdb4e
  Date: Thu, 16 Apr 2015 19:18:21 GMT

  {"subnet": {"name": "badsub", "enable_dhcp": true, "network_id":
  "88143cda-5fe7-45b6-9245-b1e8b75d28d8", "tenant_id":
  "4ffb89e718d346b48fdce2ac61537bce", "dns_nameservers": [],
  "gateway_ip": "192.168.0.1", "ipv6_ra_mode": null, "allocation_pools":
  [], "host_routes": [], "ip_version": 4, "ipv6_address_mode": null,
  "cidr": "192.168.0.0/24", "id": "abc2dca4-bf8b-46f5-af1a-
  0a1049309854", "subnetpool_id": null}}

  [Trace log]
  2015-04-17 04:23:27.907 16641 DEBUG oslo_messaging._drivers.amqp [-] 
UNIQUE_ID is e0a6a81a005d4aa0b40130506afa0267. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-17 04:23:27.979 16641 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 88143cda-5fe7-45b6-9245-b1e8b75d28d8.
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 112, in call_driver
  2015-04-17 04:23:27.979 16641 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-04-17 0
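
For illustration, a minimal sketch (not the committed Neutron patch) of the
kind of server-side check that avoids the resync loop, using netaddr as
Neutron already does:

    import netaddr


    def validate_ipv4_cidr(cidr):
        """Reject IPv4 prefixes that leave no room for an allocation pool."""
        net = netaddr.IPNetwork(cidr)
        if net.version == 4 and net.prefixlen > 30:
            # A /31 or /32 network has no usable allocation pool, which is
            # what drives the DHCP agent into the constant resyncs above.
            raise ValueError("IPv4 prefix must be /30 or shorter: %s" % cidr)
        return net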

[Yahoo-eng-team] [Bug 1451447] [NEW] lbaas pool creation should show subnet name

2015-05-04 Thread Eric Peterson
Public bug reported:

The pool creation UI has a selection for choosing the subnet.  What is
shown is the IP range, something like:

192.168.1.0/24  (whatever)

The problem with this is that I could use multiple subnets with the same
or very similar IP ranges.  The UI is not providing any clue as to which
subnet is which.
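
For illustration, a minimal sketch (hypothetical helper, assuming dict-like
subnet objects) of a choice label that includes the subnet name so identical
CIDRs can be told apart:

    def subnet_choice_label(subnet):
        """Build a pool-creation choice label that disambiguates subnets."""
        name = subnet.get('name') or subnet['id'][:8]
        return "%s (%s)" % (subnet['cidr'], name)


    # Example: {'cidr': '192.168.1.0/24', 'name': 'backend-a', 'id': '42ab...'}
    # renders as "192.168.1.0/24 (backend-a)" instead of just the CIDR.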

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451447

Title:
  lbaas pool creation should show subnet name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The pool creation UI has a selection for choosing the subnet.  What is
  shown is the IP range, something like:

  192.168.1.0/24  (whatever)

  The problem with this is that I could use multiple subnets with the
  same or very similar IP ranges.  The UI is not providing any clue as
  to which subnet is which.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451440] [NEW] form validation not done while creating IPsec Site connection

2015-05-04 Thread Masco Kaliyamoorthy
Public bug reported:

While creating an IPsec Site Connection, the form should validate that
interval < timeout.
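
For illustration, a minimal Django-form sketch of the missing cross-field
check; the field names are hypothetical and Horizon's workflow classes wrap
this differently:

    from django import forms
    from django.utils.translation import ugettext_lazy as _


    class IPSecSiteConnectionForm(forms.Form):
        interval = forms.IntegerField(min_value=1, label=_("DPD interval"))
        timeout = forms.IntegerField(min_value=1, label=_("DPD timeout"))

        def clean(self):
            cleaned = super(IPSecSiteConnectionForm, self).clean()
            interval = cleaned.get("interval")
            timeout = cleaned.get("timeout")
            if interval is not None and timeout is not None \
                    and interval >= timeout:
                raise forms.ValidationError(
                    _("DPD interval must be less than the DPD timeout."))
            return cleaned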

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451440

Title:
  form validation not done while creating IPsec Site connection

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While creating an IPsec Site Connection, the form should validate that
  interval < timeout.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451429] [NEW] Kilo: I/O error uploading image

2015-05-04 Thread Tom Verdaat
Public bug reported:

Using a Ceph backend. Same configuration works just fine on Juno.

Glance image creation from file using the API works. Horizon image
creation from URL works too, but using file upload does not.

Apache error.log:

Unhandled exception in thread started by 
Traceback (most recent call last):
  File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py",
 line 112, in image_update
exceptions.handle(request, ignore=True)
  File "/usr/lib/python2.7/dist-packages/horizon/exceptions.py", line 364, in 
handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py",
 line 110, in image_update
image = glanceclient(request).images.update(image_id, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 329, 
in update
resp, body = self.client.put(url, headers=hdrs, data=image_data)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
265, in put
return self._request('PUT', url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
206, in _request
**kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in 
request
resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in 
send
r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 350, in 
send
for i in request.body:
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
170, in chunk_body
chunk = body.read(CHUNKSIZE)
ValueError: I/O operation on closed file


Horizon log:

[req-0e69bfd9-c6ab-4131-b445-aa57c1a455f7 87d1da7fba6f4f5a9d4e7f78da344e91 
ba35660ba55b4a5283c691a4c6d99f23 - - -] Failed to upload image 
90a30bfb-946c-489d-9a04-5f601af0f821
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 
113, in upload_data_to_store
context=req.context)
  File "/usr/lib/python2.7/dist-packages/glance_store/backend.py", line 339, in 
store_add_to_backend
context=context)
  File "/usr/lib/python2.7/dist-packages/glance_store/capabilities.py", line 
226, in op_checker
return store_op_fun(store, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glance_store/_drivers/rbd.py", line 
384, in add
self._delete_image(loc.image, loc.snapshot)
  File "/usr/lib/python2.7/dist-packages/glance_store/_drivers/rbd.py", line 
290, in _delete_image
with conn.open_ioctx(target_pool) as ioctx:
  File "/usr/lib/python2.7/dist-packages/rados.py", line 667, in open_ioctx
raise make_ex(ret, "error opening pool '%s'" % ioctx_name)
ObjectNotFound: error opening pool '90a30bfb-946c-489d-9a04-5f601af0f821'

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451429

Title:
  Kilo: I/O error uploading image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using a Ceph backend. Same configuration works just fine on Juno.

  Glance image creation from file using the API works. Horizon image
  creation from URL works too, but using file upload does not.

  Apache error.log:

  Unhandled exception in thread started by 
  Traceback (most recent call last):
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py",
 line 112, in image_update
  exceptions.handle(request, ignore=True)
File "/usr/lib/python2.7/dist-packages/horizon/exceptions.py", line 364, in 
handle
  six.reraise(exc_type, exc_value, exc_traceback)
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py",
 line 110, in image_update
  image = glanceclient(request).images.update(image_id, **kwargs)
File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 
329, in update
  resp, body = self.client.put(url, headers=hdrs, data=image_data)
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
265, in put
  return self._request('PUT', url, **kwargs)
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
206, in _request
  **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in 
request
  resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in 
send
  r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 350, in 
send
  for i in request.body:
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
170, in chunk_body
  chunk = body.re

[Yahoo-eng-team] [Bug 1340661] Re: de-stub test_http and use mock

2015-05-04 Thread Kamil Rykowski
Mox has been replaced by mock by Jamie Lennox (thanks!).

Reference:
https://review.openstack.org/gitweb?p=openstack%2Fpython-glanceclient.git;a=commitdiff;h=98d8266a5055dcf4c8fe059ae618de44f601438a

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1340661

Title:
  de-stub test_http and use mock

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Modify test_http.py to use mock instead of the FakeResponse stub.
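
For illustration, the general direction of that change as a minimal sketch
(not the merged patch): patch the HTTP transport with mock and shape the
canned response on the mock instead of a hand-written FakeResponse class.

    import unittest

    import mock


    class TestHttpClient(unittest.TestCase):
        @mock.patch('requests.Session.request')
        def test_get_uses_canned_response(self, mock_request):
            mock_request.return_value = mock.Mock(status_code=200,
                                                  headers={},
                                                  content=b'{}')
            # ... exercise the client under test here ...
            self.assertEqual(200, mock_request.return_value.status_code)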

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1340661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451413] [NEW] Add kernel/ramdisk to create image

2015-05-04 Thread Ana Krivokapić
Public bug reported:

When creating a glance image, a user should be able to specify a
kernel/ramdisk for the image.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451413

Title:
  Add kernel/ramdisk to create image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a glance image, a user should be able to specify a
  kernel/ramdisk for the image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451402] [NEW] v3 - pagination in GET services does not work

2015-05-04 Thread Jamie Hannaford
Public bug reported:

When I execute

GET http://xxx.xxx.xxx.xxx:35357/v3/services?page=2&per_page=5

I expect to see a paginated collection returned (5 resources in total),
however this is not the case. Instead I get back the full collection -
so it seems like the query params are being ignored.

The next and prev links don't reference the next page in the collection
either.

Do I need to configure the conf file or something? If so, I couldn't
find it documented anywhere.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1451402

Title:
  v3 - pagination in GET services does not work

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When I execute

  GET http://xxx.xxx.xxx.xxx:35357/v3/services?page=2&per_page=5

  I expect to see a paginated collection returned (5 resources in
  total), however this is not the case. Instead I get back the full
  collection - so it seems like the query params are being ignored.

  The next and prev links don't reference the next page in the
  collection either.

  Do I need to configure the conf file or something? If so, I couldn't
  find it documented anywhere.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1451402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451399] [NEW] subnet-create argument order is too strict (not backward compatible)

2015-05-04 Thread Roey Dekel
Public bug reported:

Changing the argument order causes a CLI error:

[root@puma14 ~(keystone_admin)]# neutron subnet-create public --gateway 
10.35.178.254  10.35.178.0/24 --name public_subnet
Invalid values_specs 10.35.178.0/24

Changing order:

[root@puma14 ~(keystone_admin)]# neutron subnet-create --gateway 10.35.178.254 
--name public_subnet public 10.35.178.0/24
Created a new subnet:
+---+--+
| Field | Value|
+---+--+
| allocation_pools  | {"start": "10.35.178.1", "end": "10.35.178.253"} |
| cidr  | 10.35.178.0/24   |
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 10.35.178.254|
| host_routes   |  |
| id| 19593f99-b13c-4624-9755-983d7406cb47 |
| ip_version| 4|
| ipv6_address_mode |  |
| ipv6_ra_mode  |  |
| name  | public_subnet|
| network_id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
| subnetpool_id |  |
| tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
+---+--+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451399

Title:
  subnet-create argument order is too strict (not backward
  compatible)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Changing the argument order causes a CLI error:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create public --gateway 
10.35.178.254  10.35.178.0/24 --name public_subnet
  Invalid values_specs 10.35.178.0/24

  Changing order:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create --gateway 
10.35.178.254 --name public_subnet public 10.35.178.0/24
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | {"start": "10.35.178.1", "end": "10.35.178.253"} |
  | cidr  | 10.35.178.0/24   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 10.35.178.254|
  | host_routes   |  |
  | id| 19593f99-b13c-4624-9755-983d7406cb47 |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | public_subnet|
  | network_id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | subnetpool_id |  |
  | tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451391] [NEW] --router:external=True syntax is invalid

2015-05-04 Thread Roey Dekel
Public bug reported:

Kilo syntax is not backward compatible:

[root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external=True
usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
  [--max-width ] [--prefix PREFIX]
  [--request-format {json,xml}]
  [--tenant-id TENANT_ID] [--admin-state-down]
  [--shared] [--router:external]
  [--provider:network_type ]
  [--provider:physical_network ]
  [--provider:segmentation_id ]
  [--vlan-transparent {True,False}]
  NAME
neutron net-create: error: argument --router:external: ignored explicit 
argument u'True'

Current syntax supports:
[root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
| mtu   | 0|
| name  | public   |
| provider:network_type | vlan |
| provider:physical_network | physnet  |
| provider:segmentation_id  | 193  |
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
+---+--+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451391

Title:
  --router:external=True syntax is invalid

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Kilo syntax is not backward compatible:

  [root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external=True
  usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
[--max-width ] [--prefix PREFIX]
[--request-format {json,xml}]
[--tenant-id TENANT_ID] [--admin-state-down]
[--shared] [--router:external]
[--provider:network_type ]
[--provider:physical_network 
]
[--provider:segmentation_id ]
[--vlan-transparent {True,False}]
NAME
  neutron net-create: error: argument --router:external: ignored explicit 
argument u'True'

  Current syntax supports:
  [root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | mtu   | 0|
  | name  | public   |
  | provider:network_type | vlan |
  | provider:physical_network | physnet  |
  | provider:segmentation_id  | 193  |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451391/+subscriptions

-- 
Mailing list

[Yahoo-eng-team] [Bug 1451389] [NEW] Nova gate broke due to failed unit test

2015-05-04 Thread Gary Kotton
Public bug reported:


ft1.13172: 
nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read_StringException:
 Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
return func(*args, **keywargs)
  File "nova/tests/unit/virt/vmwareapi/test_read_write_util.py", line 49, in 
test_ipv6_host_read
verify=False)
  File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 846, in assert_called_once_with
return self.assert_called_with(*args, **kwargs)
  File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 835, in assert_called_with
raise AssertionError(msg)
AssertionError: Expected call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, verify=False)
Actual call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, params=None, verify=False)
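
For illustration, a minimal self-contained sketch (not the actual Nova patch)
of the fix direction: the expected call has to account for the new params
keyword, either literally or via mock.ANY.

    import mock


    def test_request_assertion_tolerates_params():
        session = mock.Mock()
        # What the production code now sends (note the extra params=None):
        session.request('get', 'https://host/folder/fake.txt',
                        stream=True, allow_redirects=True,
                        params=None, verify=False)
        # The expected call must include params as well, or use mock.ANY so a
        # harmless pass-through keyword does not break the gate again.
        session.request.assert_called_once_with(
            'get', 'https://host/folder/fake.txt',
            stream=True, allow_redirects=True,
            params=mock.ANY, verify=False)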

** Affects: nova
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova
   Importance: Undecided => Critical

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451389

Title:
  Nova gate broke due to failed unit test

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  
  ft1.13172: 
nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/virt/vmwareapi/test_read_write_util.py", line 49, in 
test_ipv6_host_read
  verify=False)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 846, in assert_called_once_with
  return self.assert_called_with(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 835, in assert_called_with
  raise AssertionError(msg)
  AssertionError: Expected call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, verify=False)
  Actual call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, params=None, verify=False)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451388] [NEW] OVS neutron agent missing ports on Hyper-V

2015-05-04 Thread Adelina Tuvenie
Public bug reported:

In order for networking to work on Hyper-V, two ports, "external.1" and
"internal", along with their associated flows, must be added to br-tun. If the
OVS neutron agent restarts, these two ports and flows get deleted when setting
up the tunnel bridge and need to be added again.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451388

Title:
  OVS neutron agent missing ports on Hyper-V

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In order for networking to work on Hyper-V, two ports, "external.1" and
  "internal", along with their associated flows, must be added to br-tun. If
  the OVS neutron agent restarts, these two ports and flows get deleted when
  setting up the tunnel bridge and need to be added again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451383] [NEW] in vpn forms all the fields, including optional ones, have the asterisk symbol

2015-05-04 Thread Masco Kaliyamoorthy
Public bug reported:

In all the VPN forms (create and edit), almost all fields carry the
mandatory mark (asterisk). Only mandatory fields should have this symbol;
for optional fields it should be removed.
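
For illustration, a minimal sketch (hypothetical fields) of the underlying
mechanism: Horizon renders the asterisk from Django's required flag, so
optional inputs need an explicit required=False.

    from django import forms


    class IKEPolicyForm(forms.Form):
        # Mandatory field: keeps the "*" marker.
        name = forms.CharField(label="Name")
        # Optional field: declaring required=False drops the "*" marker.
        description = forms.CharField(label="Description", required=False)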

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451383

Title:
  in vpn forms all the fields, including optional ones, have the asterisk
  symbol

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In all the VPN forms (create and edit), almost all fields carry the
  mandatory mark (asterisk). Only mandatory fields should have this symbol;
  for optional fields it should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 901802] Re: get default quota with non-existent tenant didn't return 404 Error

2015-05-04 Thread Michal Rostecki
*** This bug is a duplicate of bug 1118066 ***
https://bugs.launchpad.net/bugs/1118066

** This bug is no longer a duplicate of bug 897326
   Nova should not return quota information for non-existent projects
** This bug has been marked a duplicate of bug 1118066
   Nova should confirm quota requests against Keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/901802

Title:
  get default quota with non-existent tenant didn't return 404 Error

Status in OpenStack Compute (Nova):
  New
Status in OpenStack QA:
  New

Bug description:
  Request(GET):
  URL: /v1.1/unknown/os-quota-sets/defaults

  
  Response(200 OK):
  {u'quota_set': {u'cores': 20,
    u'floating_ips': 10,
    u'gigabytes': 1000,
    u'id': u'defaults',
    u'injected_file_content_bytes': 10240,
    u'injected_files': 5,
    u'instances': 10,
    u'metadata_items': 128,
    u'ram': 51200,
    u'volumes': 10}}

  This must be 404 Not Found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/901802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276207] Re: vmware driver does not validate server certificates

2015-05-04 Thread Vipin Balachandran
** Changed in: cinder
   Status: Fix Released => Confirmed

** Changed in: cinder
 Assignee: Johnson koil raj (jjohnsonkoilraj) => Vipin Balachandran (vbala)

** Changed in: cinder
   Importance: Low => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276207

Title:
  vmware driver does not validate server certificates

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  The VMware driver establishes connections to vCenter over HTTPS, yet
  the vCenter server certificate is not verified as part of the
  connection process.  I know this because my vCenter server is using a
  self-signed certificate which always fails certificate verification.
  As a result, someone could use a man-in-the-middle attack to spoof the
  vcenter host to nova.

  The vmware driver has a dependency on Suds, which I believe also does
  not validate certificates because hartsock and I noticed it uses
  urllib.

  For reference, here is a link on secure connections in OpenStack:
  https://wiki.openstack.org/wiki/SecureClientConnections

  Assuming Suds is fixed to provide an option for certificate
  verification, next step would be to modify the vmware driver to
  provide an option to override invalid certificates (such as self-
  signed).  In other parts of OpenStack, there are options to bypass the
  certificate check with an "insecure" option set, or you could put the
  server's certificate in the CA store.
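
For illustration, a minimal sketch (assumed option names, shown with requests
rather than Suds) of the usual "insecure"/CA-file pattern the driver could
expose:

    import requests


    def vcenter_session(ca_file=None, insecure=False):
        """Return an HTTPS session with the usual OpenStack TLS knobs."""
        session = requests.Session()
        if insecure:
            # Reproduces today's behaviour: no certificate validation.
            session.verify = False
        else:
            # Validate against a specific CA bundle, or the system store.
            session.verify = ca_file or True
        return session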

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276207] Re: vmware driver does not validate server certificates

2015-05-04 Thread Radoslav Gerganov
** Changed in: nova
   Status: Fix Released => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Radoslav Gerganov (rgerganov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276207

Title:
  vmware driver does not validate server certificates

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  The VMware driver establishes connections to vCenter over HTTPS, yet
  the vCenter server certificate is not verified as part of the
  connection process.  I know this because my vCenter server is using a
  self-signed certificate which always fails certificate verification.
  As a result, someone could use a man-in-the-middle attack to spoof the
  vcenter host to nova.

  The vmware driver has a dependency on Suds, which I believe also does
  not validate certificates because hartsock and I noticed it uses
  urllib.

  For reference, here is a link on secure connections in OpenStack:
  https://wiki.openstack.org/wiki/SecureClientConnections

  Assuming Suds is fixed to provide an option for certificate
  verification, next step would be to modify the vmware driver to
  provide an option to override invalid certificates (such as self-
  signed).  In other parts of OpenStack, there are options to bypass the
  certificate check with an "insecure" option set, or you could put the
  server's certificate in the CA store.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451358] [NEW] admin project sees packstack-created demo network under network-topology

2015-05-04 Thread Roey Dekel
Public bug reported:

Using Kilo, I noticed that I can see the demo project's private network via
Horizon while logged in under the admin project.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2015-05-04 11:24:48.png"
   
https://bugs.launchpad.net/bugs/1451358/+attachment/4390552/+files/Screenshot%20from%202015-05-04%2011%3A24%3A48.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451358

Title:
  admin project sees packstack-created demo network under network-
  topology

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using Kilo, I noticed that I can see the demo project's private network via
  Horizon while logged in under the admin project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp