[Yahoo-eng-team] [Bug 1517503] Re: Cinder v2 - Volume type resource attributes not documented

2015-11-18 Thread Steve Martinelli
?? how is this a keystone issue?

are you referring to some online document?

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517503

Title:
  Cinder v2 - Volume type resource attributes not documented

Status in Cinder:
  New
Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When listing volume types, you get back this JSON:

  {
      "volume_types": [
          {
              "extra_specs": {
                  "capabilities": "gpu"
              },
              "id": "6685584b-1eac-4da6-b5c3-555430cf68ff",
              "name": "SSD"
          },
          {
              "extra_specs": {},
              "id": "8eb69a46-df97-4e41-9586-9a40a7533803",
              "name": "SATA"
          }
      ]
  }

  But none of those attributes are documented. The same is the case for
  retrieving a specific volume type.
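
  As a quick illustration (not part of the original report; credentials
  and endpoint below are placeholders), the same attributes are visible
  through python-cinderclient:

  from cinderclient.v2 import client

  cinder = client.Client('admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

  for vt in cinder.volume_types.list():
      # Each entry carries exactly the attributes shown in the JSON
      # above: a UUID 'id', a display 'name' and an 'extra_specs' dict.
      print(vt.id, vt.name, vt.extra_specs)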

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1517503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282089] Re: keystone client is leaving hanging connections to the server

2015-11-18 Thread Doug Fish
I spent some time trying to reproduce this. I don’t believe the problem
is still occurring.

Here’s what I did:

I used devstack to set up an environment on an ubuntu image.
I edited /etc/apache2/sites-available/keystone.conf and changed processes=5 to 
processes=1 for both virtualhosts (to reduce the number of processes I needed 
to watch)
and restarted the apache service.

I used ps aux | grep keystone and noted the PIDs for the processes named
(wsgi:keystone-pu -k start
and
(wsgi:keystone-ad -k start
After my keystone.conf http config edit there is only one of each of these
processes.

I opened two windows and monitored each process with a loop like:
while true; do lsof -p <pid> | wc -l; sleep 2; done

Then I opened Horizon. I launched 10 instances, terminated them and
launched 10 again. The output from my loops did not change at all during
this time.

** Changed in: django-openstack-auth
   Status: Confirmed => Invalid

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1282089

Title:
  keystone client is leaving hanging connections to the server

Status in django-openstack-auth:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in python-keystoneclient:
  Fix Released

Bug description:
  This is remarkably noticeable from Horizon, which uses keystoneclient
  to connect to the keystone server; on each request the connection is
  left hanging, which consumes resources on the keystone server, and at
  some point the keystone server process will exceed the limit of
  connections it is allowed to handle (the ulimit on open files).

  ## How to check:

  If you have horizon installed, just keep using it normally (creating
  instances, etc.) while keeping an eye on the server's number of open
  files with "lsof -p <pid>"; you can see that the number increments
  pretty quickly.

  To reproduce this bug very quickly, try launching 40 instances at the
  same time, for example using the "Instance Count" field.

  ## Why:

  This is because keystoneclient doesn't reuse its HTTP connection pool,
  so in a long-running service (e.g. horizon) the effect is a new
  connection created for each request, with no connection reuse.

  Patch coming soon with more details.
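
  As a minimal sketch of the idea behind the fix (an illustration, not
  the merged patch; the endpoint is a placeholder), a long-running
  service can hold one requests.Session so connections are pooled and
  reused instead of being opened per call:

  import requests

  KEYSTONE_URL = 'http://controller:5000/v2.0'

  session = requests.Session()  # owns a reusable connection pool

  def list_extensions():
      # Every call through the shared session reuses a pooled TCP
      # connection, so the server-side open-file count stays flat.
      return session.get(KEYSTONE_URL + '/extensions').json()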

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1282089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517461] Re: Create a 'stable/liberty' branch for networking-vsphere subproject of neutron

2015-11-18 Thread Kyle Mestery
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kyle Mestery (mestery)

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517461

Title:
  Create a 'stable/liberty' branch for networking-vsphere subproject of
  neutron

Status in networking-vsphere:
  Confirmed
Status in neutron:
  Confirmed

Bug description:
  As per the guidelines for subprojects in neutron here:
  
https://review.openstack.org/#/c/240800/1/doc/source/stadium/sub_project_guidelines.rst

  We are requesting the neutron release team to create 'stable/liberty'
  branch for networking-vsphere.

  In addition, since we are not part of the release-team, we request you to 
push the tag on 
  that stable/liberty branch of networking-vsphere.

  The SHA to use is HEAD, which is enclosed here:

  396e4de19a81c5b33ed09462f4dfc7c5f4d02ac2

  Thanks in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1517461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517579] [NEW] neutron.tests.unit.debug.test_shell.ShellTest.test_endpoint_option fails due to neutronclient change

2015-11-18 Thread Amir Sadoughi
Public bug reported:

This failure is a result of https://review.openstack.org/#/c/236325/
renaming the endpoint option default from publicURL to public. However
this hasn't affected upstream CI yet because this code is newer than the
3.1.0 release.

==
FAIL: neutron.tests.unit.debug.test_shell.ShellTest.test_endpoint_option
neutron.tests.unit.debug.test_shell.ShellTest.test_endpoint_option
--
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/unit/debug/test_shell.py", line 94, in 
test_endpoint_option
self.assertEqual('publicURL', namespace.os_endpoint_type)
  File 
"/home/jenkins/workspace/Merge-neutron-Ply/virtualenv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/jenkins/workspace/Merge-neutron-Ply/virtualenv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'publicURL' != 'public'
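
For illustration, a self-contained sketch (hypothetical, not the actual
fix) of an assertion that tolerates both defaults: 'publicURL' from
neutronclient <= 3.1.0 and 'public' after change 236325:

import argparse
import unittest


class EndpointOptionTest(unittest.TestCase):

    def _parse(self):
        # Stand-in for the options neutron-debug wires up through
        # neutronclient; this default mimics a post-236325 client.
        parser = argparse.ArgumentParser()
        parser.add_argument('--os-endpoint-type', default='public')
        return parser.parse_args([])

    def test_endpoint_option(self):
        namespace = self._parse()
        self.assertIn(namespace.os_endpoint_type, ('publicURL', 'public'))


if __name__ == '__main__':
    unittest.main()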

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517579

Title:
  neutron.tests.unit.debug.test_shell.ShellTest.test_endpoint_option
  fails due to neutronclient change

Status in neutron:
  New

Bug description:
  This failure is a result of https://review.openstack.org/#/c/236325/
  renaming the endpoint option default from publicURL to public. However
  this hasn't affected upstream CI yet because this code is newer than
  the 3.1.0 release.

  ==
  FAIL: neutron.tests.unit.debug.test_shell.ShellTest.test_endpoint_option
  neutron.tests.unit.debug.test_shell.ShellTest.test_endpoint_option
  --
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/unit/debug/test_shell.py", line 94, in 
test_endpoint_option
  self.assertEqual('publicURL', namespace.os_endpoint_type)
File 
"/home/jenkins/workspace/Merge-neutron-Ply/virtualenv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/Merge-neutron-Ply/virtualenv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 'publicURL' != 'public'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1517579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353939] Re: Rescue fails with 'Failed to terminate process: Device or resource busy' in the n-cpu log

2015-11-18 Thread Chuck Short
** Also affects: nova/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353939

Title:
  Rescue fails with 'Failed to terminate process: Device or resource
  busy' in the n-cpu log

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  New
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  [Impact]

   * Users may sometimes fail to shut down an instance if the associated qemu
     process is in uninterruptible sleep (typically IO).

  [Test Case]

   * 1. create some IO load in a VM
     2. look at the associated qemu process; make sure it has STAT D in ps output
     3. shut down the instance
     4. with the patch in place, nova will retry calling libvirt to shut down
        the instance 3 times, waiting for the signal to be delivered to the
        qemu process (see the sketch after the [Regression Potential] section)

  [Regression Potential]

   * None
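
  A short illustrative sketch of the retry described in the test case
  (hypothetical names, not nova's actual code):

  import time

  SIGKILL_RETRIES = 3  # matches the "3 times" in the test case

  def destroy_with_retry(domain):
      # If the qemu process is in uninterruptible sleep (STAT D),
      # destroy() can fail with 'Device or resource busy'; retrying
      # gives the kernel time to deliver SIGKILL once the IO completes.
      for attempt in range(SIGKILL_RETRIES):
          try:
              domain.destroy()  # e.g. a libvirt virDomainDestroy wrapper
              return
          except Exception as exc:
              if ('Device or resource busy' not in str(exc)
                      or attempt == SIGKILL_RETRIES - 1):
                  raise
              time.sleep(1)  # back off before asking libvirt again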


  message: "Failed to terminate process" AND
  message:'InstanceNotRescuable' AND message: 'Exception during message
  handling' AND tags:"screen-n-cpu.txt"

  The above logstash query reports back only the failed jobs; the
  'Failed to terminate process' message also appears close to other
  failed rescue tests, but tempest does not always report them as an
  error at the end.

  message: "Failed to terminate process" AND tags:"screen-n-cpu.txt"

  Usual console log:
  Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state "None" within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.

  http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
  full/90726cb/console.html#_2014-08-07_03_50_26_520

  Usual n-cpu exception:
  
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855

  2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 408, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 292, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE 

[Yahoo-eng-team] [Bug 1517503] Re: Cinder v2 - Volume type resource attributes not documented

2015-11-18 Thread Jamie Hannaford
Sorry, I selected the wrong project. It should be `openstack-api-site`.

** Project changed: cinder => openstack-api-site

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517503

Title:
  Cinder v2 - Volume type resource attributes not documented

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-api-site:
  New

Bug description:
  When listing volume types, you get back this JSON:

  {
      "volume_types": [
          {
              "extra_specs": {
                  "capabilities": "gpu"
              },
              "id": "6685584b-1eac-4da6-b5c3-555430cf68ff",
              "name": "SSD"
          },
          {
              "extra_specs": {},
              "id": "8eb69a46-df97-4e41-9586-9a40a7533803",
              "name": "SATA"
          }
      ]
  }

  But none of those attributes are documented. The same is the case for
  retrieving a specific volume type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1517503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457757] Re: Glance- Failed validating deactivated in schema

2015-11-18 Thread Abhishek Kekane
*** This bug is a duplicate of bug 1505218 ***
https://bugs.launchpad.net/bugs/1505218

Already fixed in master by commit
135a946a2d1a74dda67fcc35ab31c9d83d4d0c40, committed by Mike on 12 Oct.


** This bug has been marked a duplicate of bug 1505218
   Image schema doesn't contain 'deactivated' status

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1457757

Title:
  Glance- Failed validating deactivated in schema

Status in Glance:
  New

Bug description:
  Hi,
  I am working on a new test for glance image deactivation, on the kilo
  release. During the tests, running self.admin_client.image_list() got a
  validation error.

  In tempest they query glance for the schema, so I guess the new image
  state "deactivated" should be added to the image status enum.
  How can we update the schema? Please help.
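
  A minimal sketch of one way to do it (my illustration, not glance's
  actual fix, which landed separately): extend the status enum with
  'deactivated', after which validation like the failure below passes:

  import jsonschema

  # Hypothetical fragment of the image status schema with
  # 'deactivated' appended to the enum.
  status_schema = {
      'description': 'Status of the image (READ-ONLY)',
      'type': 'string',
      'enum': ['queued', 'saving', 'active', 'killed',
               'deleted', 'pending_delete', 'deactivated'],
  }

  jsonschema.validate('deactivated', status_schema)  # no longer raises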


  
  Error
  _StringException: Empty attachments:
pythonlogging:''

  Traceback (most recent call last):
File 
"/home/bkopilov/Automation/tempest/tempest/api/image/admin/v2/test_images.py", 
line 51, in test_deactivate_image
  images = self.admin_client.image_list()
File 
"/home/bkopilov/Automation/tempest/tempest/services/image/v2/json/image_client.py",
 line 126, in image_list
  self._validate_schema(body, type='images')
File 
"/home/bkopilov/Automation/tempest/tempest/services/image/v2/json/image_client.py",
 line 59, in _validate_schema
  jsonschema.validate(body, schema)
File 
"/home/bkopilov/.local/lib/python2.7/site-packages/jsonschema/validators.py", 
line 432, in validate
  cls(schema, *args, **kwargs).validate(instance)
File 
"/home/bkopilov/.local/lib/python2.7/site-packages/jsonschema/validators.py", 
line 117, in validate
  raise error
  ValidationError: u'deactivated' is not one of [u'queued', u'saving', 
u'active', u'killed', u'deleted', u'pending_delete']

  Failed validating u'enum' in 
schema[u'properties'][u'images'][u'items'][u'properties'][u'status']:
  {u'description': u'Status of the image (READ-ONLY)',
   u'enum': [u'queued',
 u'saving',
 u'active',
 u'killed',
 u'deleted',
 u'pending_delete'],
   u'type': u'string'}

  On instance[u'images'][0][u'status']:
  u'deactivated'


  Process finished with exit code 0

  Thanks,
  Benny

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1457757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515670] Re: VPNaaS: Modify neutron-client to allow Horizon to detect multiple local subnet feature

2015-11-18 Thread Akihiro Motoki
The original description was a bit confusing.
It is not directly related to neutronclient; it relates to neutron-vpnaas itself.
I will update the description.

** Summary changed:

- VPNaaS: Modify neutron-client to allow Horizon to detect multiple local 
subnet feature
+ VPNaaS: Modify neutron API users to detect multiple local subnet feature

** Description changed:

  In review of 231133, Akihiro mentioned follow up work for the neutron
- client, so that Horizon can detect whether or not the new multiple local
- subnet feature, with endpoint groups, is available.
+ API consumers, so that Horizon can detect whether or not the new
+ multiple local subnet feature, with endpoint groups, is available.
+ 
+ At the moment, the multiple local subnet feature has been implemented
+ in the VPNaaS API, but API consumers have to try a VPNaaS API call to
+ detect whether the feature is available or not. It is better to detect
+ the feature without having to call the VPNaaS API.
+ 
+ The suggested approach is to add an extension which represents this
+ feature.
  
  Placeholder for that work.

** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515670

Title:
  VPNaaS: Modify neutron API users to detect multiple local subnet
  feature

Status in neutron:
  New

Bug description:
  In review of 231133, Akihiro mentioned follow up work for the neutron
  API consumers, so that Horizon can detect whether or not the new
  multiple local subnet feature, with endpoint groups, is available.

  At the moment, the multiple local subnet feature has been implemented
  in the VPNaaS API, but API consumers have to try a VPNaaS API call to
  detect whether the feature is available or not. It is better to detect
  the feature without having to call the VPNaaS API.

  The suggested approach is to add an extension which represents this
  feature.
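
  As an illustrative sketch (the extension alias is an assumption, not a
  committed name; credentials are placeholders), a consumer such as
  Horizon could then check for the feature without calling the VPNaaS
  API itself:

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  aliases = [ext['alias']
             for ext in neutron.list_extensions()['extensions']]
  # Hypothetical alias for the multiple-local-subnet extension.
  has_endpoint_groups = 'vpn-endpoint-groups' in aliases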

  Placeholder for that work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461047] Re: description column is missing in firewall tables

2015-11-18 Thread David Lyle
** Changed in: horizon
   Status: In Progress => Fix Released

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461047

Title:
  description column is missing in firewall tables

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  in all the firewall tables 'description' column is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256043] Re: Need to add Development environment files to ignore list

2015-11-18 Thread David Lyle
Horizon has decided in the past not to merge IDE-specific files and to
let developers manage their own .gitignore.

** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1256043

Title:
  Need to add Development environment files to ignore list

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-keystoneclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-swiftclient:
  Won't Fix
Status in OpenStack Object Storage (swift):
  Won't Fix

Bug description:
  The following files generated by the Eclipse development environment
  should be in the ignore list to avoid their inclusion during a git push.

  .project
  .pydevproject

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1256043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447459] Re: stable/kilo fetches master translations

2015-11-18 Thread Akihiro Motoki
https://review.openstack.org/#/c/175122/ was merged into stable/kilo horizon
and the problem is now addressed.

** Changed in: openstack-i18n
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447459

Title:
  stable/kilo fetches master translations

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in openstack i18n:
  Fix Released

Bug description:
  Stable kilo fetches master (or latest) translations instead of the
  *-kilo resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353344] Re: create network success , tips are error

2015-11-18 Thread Akihiro Motoki
I believe the fix was shipped along with Horizon Liberty.
We can close the bug as openstack-i18n.

** Changed in: openstack-i18n
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1353344

Title:
  create network success ,tips are error

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in openstack i18n:
  Fix Released

Bug description:
  When I create a network named net04_share successfully, the tips
  are: 成功:成果创建 net04_share 网络.

  Maybe there is an error in the dashboard translation
  file: \openstack\horizon\openstack_dashboard\locale\zh_CN\LC_MESSAGES\django.po

  It should be changed to: 成功:成功创建 net04_share 网络. (成果创建 reads
  roughly as "achievement created"; the intended wording is 成功创建,
  "successfully created".)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1353344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1155075] Re: Horizon shows stack trace while creating vip

2015-11-18 Thread David Lyle
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1155075

Title:
  Horizon shows stack trace while creating vip

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Steps to reproduce:
  1. Create pool.
  2. Create vip using some IP address.
  3. Delete vip.
  4. Create vip using the same IP address as previous vip. The following stack 
trace page is displayed in Horizon:

  Traceback:
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
    111. response = callback(request, *callback_args, 
**callback_kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
    38. return view_func(request, *args, **kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
    54. return view_func(request, *args, **kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
    38. return view_func(request, *args, **kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
    86. return view_func(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
view
    48. return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
dispatch
    69. return handler(request, *args, **kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/workflows/views.py" 
in post
    139. exceptions.handle(request)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/workflows/views.py" 
in post
    136. success = workflow.finalize()
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/workflows/base.py" 
in finalize
    779. if not self.handle(self.request, self.context):
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/loadbalancers/workflows.py"
 in handle
    244.   self.failure_message)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/loadbalancers/workflows.py"
 in handle
    240. api.lbaas.vip_create(request, **context)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/api/lbaas.py"
 in vip_create
    136. vip = quantumclient(request).create_vip(body).get('vip')
  File "/opt/stack/python-quantumclient/quantumclient/v2_0/client.py" in 
with_params
    107. ret = self.function(instance, *args, **kwargs)
  File "/opt/stack/python-quantumclient/quantumclient/v2_0/client.py" in 
create_vip
    547. return self.post(self.vips_path, body=body)
  File "/opt/stack/python-quantumclient/quantumclient/v2_0/client.py" in post
    987.headers=headers, params=params)
  File "/opt/stack/python-quantumclient/quantumclient/v2_0/client.py" in 
do_request
    912. self._handle_fault_response(status_code, replybody)
  File "/opt/stack/python-quantumclient/quantumclient/v2_0/client.py" in 
_handle_fault_response
    893. exception_handler_v20(status_code, des_error_body)
  File "/opt/stack/python-quantumclient/quantumclient/v2_0/client.py" in 
exception_handler_v20
    80. message=error_dict)

  Exception Type: QuantumClientException at 
/project/loadbalancers/addvip/096308f7-0183-48a7-a1d7-f22cd02e1300/
  Exception Value: Unable to complete operation for network 
fe2e5fcb-6531-40d1-bad8-a7103a3105c2. The IP address 10.0.0.5 is in use.

  There should be a user-friendly error message with an explanation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1155075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461018] Re: description column is missing in vpn tables

2015-11-18 Thread Rob Cresswell
** Changed in: horizon
   Status: In Progress => Fix Released

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461018

Title:
  description column is missing in vpn tables

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  description column is not present in vpn tables.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515866] Re: two lbaas agent instances are run for gate-neutron-lbaasv1-dsvm-api

2015-11-18 Thread Gary Kotton
I do not think that this is relevant anymore as the V1 is deprecated.

** Tags added: lbaas

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515866

Title:
  two lbaas agent instances are run for gate-neutron-lbaasv1-dsvm-api

Status in neutron:
  Won't Fix

Bug description:
  On the gate, two instances of the lbaas agent are executed:
  one by neutron-legacy, the other by the neutron-lbaas devstack plugin.

  for example:

  http://logs.openstack.org/34/243934/4/check/gate-neutron-lbaasv1-dsvm-
  api/3b43406/logs/devstacklog.txt.gz#_2015-11-12_22_53_07_383

  http://logs.openstack.org/34/243934/4/check/gate-neutron-lbaasv1-dsvm-
  api/3b43406/logs/devstacklog.txt.gz#_2015-11-12_22_55_57_529

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515508] Re: Kilo:No command to configure LB session persistence in openstack CLI.

2015-11-18 Thread Gary Kotton
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Importance: Undecided => Low

** Changed in: python-neutronclient
   Status: New => Confirmed

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515508

Title:
  Kilo:No command to configure LB session persistence in openstack CLI.

Status in python-neutronclient:
  Confirmed

Bug description:
  The openstack env is kilo on ubuntu 14.04; we can't configure load
  balancer session persistence in the openstack CLI. Is there any way to
  configure it via the CLI?

  [root@nsj1 ~]# neutron lb-vip-update 748b4c56-7a3b-41d4-b591-ecd9014730ed 
--session_persistence type=HTTP_COOKIE
  name 'HTTP_COOKIE' is not defined

  [root@nsj1 ~]# neutron lb-vip-update 748b4c56-7a3b-41d4-b591-ecd9014730ed 
--session_persistence type=http_cookie
  name 'http_cookie' is not defined

  
  [root@nsj1 ~]# neutron lb-vip-update 748b4c56-7a3b-41d4-b591-ecd9014730ed 
--session_persistence type=app_cookie
  name 'app_cookie' is not defined


  [root@nsj1 ~]# neutron lb-vip-create
  usage: neutron lb-vip-create [-h] [-f {shell,table,value}] [-c COLUMN]
   [--max-width ] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--address ADDRESS]
   [--admin-state-down]
   [--connection-limit CONNECTION_LIMIT]
   [--description DESCRIPTION] --name NAME
   --protocol-port PROTOCOL_PORT --protocol
   {TCP,HTTP,HTTPS} --subnet-id SUBNET
   POOL
  neutron lb-vip-create: error: too few arguments
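
  As a possible workaround sketch (unverified; credentials and the
  endpoint are placeholders), session persistence can be set through the
  Python bindings, where the value is passed as plain data instead of
  being evaluated by the CLI:

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  body = {'vip': {'session_persistence': {'type': 'HTTP_COOKIE'}}}
  neutron.update_vip('748b4c56-7a3b-41d4-b591-ecd9014730ed', body)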

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1515508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515670] [NEW] VPNaaS: Modify neutron API users to detect multiple local subnet feature

2015-11-18 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In review of 231133, Akihiro mentioned follow up work for the neutron
API consumers, so that Horizon can detect whether or not the new
multiple local subnet feature, with endpoint groups, is available.

At the moment, the multiple local subnet feature has been implemented in
the VPNaaS API, but API consumers have to try a VPNaaS API call to detect
whether the feature is available or not. It is better to detect the
feature without having to call the VPNaaS API.

The suggested approach is to add an extension which represents this
feature.

Placeholder for that work.

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: vpnaas
-- 
VPNaaS: Modify neutron API users to detect multiple local subnet feature
https://bugs.launchpad.net/bugs/1515670
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516791] Re: LBaaS v2 doc for show loadbalancer has incorrect status

2015-11-18 Thread Gary Kotton
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516791

Title:
  LBaaS v2 doc for show loadbalancer has incorrect status

Status in openstack-manuals:
  New

Bug description:
  http://developer.openstack.org/api-ref-
  networking-v2-ext.html#showLoadBalancerv2 displays the output of a
  show loadbalancer call. One of the keys shown is 'status'.

  Based on the code (https://github.com/openstack/neutron-
  lbaas/blob/master/neutron_lbaas/services/loadbalancer/data_models.py#L499)
  I would expect to see 2 statuses here, a provisioning_status and an
  operating_status.

  This appears to be an error in the docs, since a Loadbalancer object
  would contain both of these attributes.
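
  As a quick illustration (placeholder credentials and UUID, assuming a
  python-neutronclient new enough to expose the LBaaS v2 bindings), both
  statuses are visible on the returned object:

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  lb = neutron.show_loadbalancer('LB_UUID')['loadbalancer']
  print(lb['provisioning_status'])  # e.g. ACTIVE, PENDING_UPDATE, ERROR
  print(lb['operating_status'])     # e.g. ONLINE, OFFLINE, DEGRADED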

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1516791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516634] Re: Openstack typo

2015-11-18 Thread Shuquan Huang
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Shuquan Huang (shuquan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1516634

Title:
  Openstack typo

Status in Glance:
  New
Status in Manila:
  In Progress

Bug description:
  According to the word choice convention in
  http://docs.openstack.org/contributor-guide/writing-style/word-choice.html,
  we should use OpenStack instead of Openstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1516634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517367] [NEW] unable to boot instance after image reactivate with xenapi citrix

2015-11-18 Thread Benny Kopilov
Public bug reported:

Hi,
I am trying to submit a patch to upstream tempest.
https://review.openstack.org/#/c/245519/

The test steps:
#1 deactivate image
#2 activate 
#3 boot an instance from this image

Booting an instance fails only on the Citrix server; it passes on all other
setups in tempest.
Citrix runs another test for image deactivation successfully.

Logs :
http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/19/245519/4/16056/logs/index.html


Could you please check?

Thanks, 
Benny

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1517367

Title:
  unable to boot instance after image reactivate with xenapi citrix

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,
  I am trying to submit a patch to upstream tempest.
  https://review.openstack.org/#/c/245519/

  The test steps:
  #1 deactivate image
  #2 activate 
  #3 boot an instance from this image

  Booting an instance fails only on the Citrix server; it passes on all
  other setups in tempest.
  Citrix runs another test for image deactivation successfully.

  Logs :
  
http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/19/245519/4/16056/logs/index.html

  
  Could you please check?

  Thanks, 
  Benny

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1517367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515670] Re: VPNaaS: Modify neutron-client to allow Horizon to detect multiple local subnet feature

2015-11-18 Thread Gary Kotton
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Changed in: python-neutronclient
   Importance: Undecided => High

** Changed in: python-neutronclient
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515670

Title:
  VPNaaS: Modify neutron-client to allow Horizon to detect multiple
  local subnet feature

Status in python-neutronclient:
  New

Bug description:
  In review of 231133, Akihiro mentioned follow up work for the neutron
  client, so that Horizon can detect whether or not the new multiple
  local subnet feature, with endpoint groups, is available.

  Placeholder for that work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1515670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475093] Re: l3_db updates port db without calling l2 plugin

2015-11-18 Thread Wim De Clercq
bgpvpn is also having this issue for
https://blueprints.launchpad.net/bgpvpn/+spec/router-bgpvpn-association

Inside _add_interface_by_port the port update happens without going through
ml2, so there is no way to be notified of this type of router-interface-add.
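
A sketch of the direction a fix could take (illustrative names, not
neutron's actual code): route the ownership change through the core
plugin's update_port() so mechanism drivers and other subscribers get
notified, instead of writing the port row directly:

ROUTER_INTERFACE_OWNER = 'network:router_interface'

def attach_port_to_router(core_plugin, context, port_id, router_id):
    # update_port() runs the full plugin pipeline, including ML2
    # mechanism-driver precommit/postcommit callbacks, which a raw
    # DB update of the port row bypasses.
    port_data = {'port': {'device_id': router_id,
                          'device_owner': ROUTER_INTERFACE_OWNER}}
    return core_plugin.update_port(context, port_id, port_data)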

** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475093

Title:
  l3_db updates port db without calling l2 plugin

Status in bgpvpn:
  New
Status in neutron:
  In Progress

Bug description:
  l3_db updates port::owner directly, without calling the l2 plugin, when
  adding an interface to the router. So the ML2 mechanism driver gets
  confused, resulting in an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1475093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517503] [NEW] Cinder v2 - Volume type resource attributes not documented

2015-11-18 Thread Jamie Hannaford
Public bug reported:

When listing volume types, you get back this JSON:

{
    "volume_types": [
        {
            "extra_specs": {
                "capabilities": "gpu"
            },
            "id": "6685584b-1eac-4da6-b5c3-555430cf68ff",
            "name": "SSD"
        },
        {
            "extra_specs": {},
            "id": "8eb69a46-df97-4e41-9586-9a40a7533803",
            "name": "SATA"
        }
    ]
}

But none of those attributes are documented. The same is the case for
retrieving a specific volume type.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517503

Title:
  Cinder v2 - Volume type resource attributes not documented

Status in OpenStack Identity (keystone):
  New

Bug description:
  When listing volume types, you get back this JSON:

  {
      "volume_types": [
          {
              "extra_specs": {
                  "capabilities": "gpu"
              },
              "id": "6685584b-1eac-4da6-b5c3-555430cf68ff",
              "name": "SSD"
          },
          {
              "extra_specs": {},
              "id": "8eb69a46-df97-4e41-9586-9a40a7533803",
              "name": "SATA"
          }
      ]
  }

  But none of those attributes are documented. The same is the case for
  retrieving a specific volume type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1517503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517670] [NEW] update rule name failing in the test

2015-11-18 Thread Lin Hua Cheng
Public bug reported:

Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
- function.__call__(, 
'h0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=u'ICMP', shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
+ function.__call__(mox.IsA() , 
'h0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol='ICMP', shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> 
..Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
- function.__call__(, 
'f0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=u'ICMP', shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
+ function.__call__(mox.IsA() , 
'f0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol='ICMP', shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> , , 'tenant_id': 
'1', 'enabled': True, 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'deny', 'position': 2, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'c6298a93-850f-4f64-b78a-959fd4f1e5df', 'name': ''}>], 'tenant_id': '1', 'i
 d': 'abcdef-c3eb-4fee-9763-12de3338041e', 'shared': True, 'audited': True, 
'name': 'policy1'}>, 'tenant_id': '1', 'enabled': True, 'rule_id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'ALLOW', 'position': 1, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'name': 'rule1'}>
.Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
- function.__call__(, 
'f0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=None, shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
+ function.__call__(mox.IsA() , 
'f0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol=None, shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> , , 'tenant_id': 
'1', 'enabled': True, 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'deny', 'position': 2, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'c6298a93-850f-4f64-b78a-959fd4f1e5df', 'name': ''}>], 'tenant_id': '1', 'id'
 : 'abcdef-c3eb-4fee-9763-12de3338041e', 'shared': True, 'audited': True, 
'name': 'policy1'}>, 'tenant_id': '1', 'enabled': True, 'rule_id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'ALLOW', 'position': 1, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'name': 'rule1'}>

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1517670

Title:
  update rule name failing in the test

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
  - function.__call__(, 
'h0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=u'ICMP', shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
  + function.__call__(mox.IsA() , 
'h0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol='ICMP', shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> 
  ..Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
  - function.__call__(, 
'f0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', 

[Yahoo-eng-team] [Bug 1517694] [NEW] delete project fail using ldap backend identity driver

2015-11-18 Thread lumeihong
Public bug reported:

Deleting a project fails using the LDAP backend identity driver.

1. In the [identity] section of keystone.conf, replace driver =
keystone.identity.backends.sql.Identity with driver =
keystone.identity.backends.ldap.Identity.
2. Update the [ldap] section to reflect the LDAP server configuration, as follows:
[ldap]
url = ldap://localhost  
user = cn=Manager,dc=my-domain,dc=com
password = 123456
suffix = dc=my-domain,dc=com
user_tree_dn = ou=users,dc=my-domain,dc=com  
user_objectclass = inetOrgPerson
tenant_tree_dn = ou=projects,dc=my-domain,dc=com 
tenant_objectclass=groupOfNames
role_tree_dn = ou=roles,dc=my-domain,dc=com
role_objectclass=organizationalRole
group_tree_dn = ou=groups,dc=my-domain,dc=com 
use_dumb_member = True  
allow_subtree_delete = True


3. Restart keystone.
4. Create default data such as users (e.g. admin), a project (e.g. the admin
project) and roles (e.g. the admin or member role).
5. Delete the project; it fails as follows:
# keystone --debug tenant-delete test
DEBUG:keystoneclient.session:REQ: curl -g -i -X DELETE 
http://10.43.211.108:35357/v2.0/tenants/a4d874fa21f048cc830ef83296c04e29 -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}fcf0bdd1b74b11623c46762555379ed7a1dc80f4"
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
10.43.211.108
DEBUG:requests.packages.urllib3.connectionpool:"DELETE 
/v2.0/tenants/a4d874fa21f048cc830ef83296c04e29 HTTP/1.1" 404 114
DEBUG:keystoneclient.session:RESP:
DEBUG:keystoneclient.session:Request returned failure status: 404
Could not find role: a4d874fa21f048cc830ef83296c04e29 (HTTP 404) (Request-ID: 
req-bf6ec82f-27b8-42fe-a020-af94373a620c)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517694

Title:
  delete project fail using ldap backend identity driver

Status in OpenStack Identity (keystone):
  New

Bug description:
  Deleting a project fails using the LDAP backend identity driver.

  1. In the [identity] section of keystone.conf, replace driver =
  keystone.identity.backends.sql.Identity with driver =
  keystone.identity.backends.ldap.Identity.
  2. Update the [ldap] section to reflect the LDAP server configuration, as follows:
  [ldap]
  url = ldap://localhost  
  user = cn=Manager,dc=my-domain,dc=com
  password = 123456
  suffix = dc=my-domain,dc=com
  user_tree_dn = ou=users,dc=my-domain,dc=com  
  user_objectclass = inetOrgPerson
  tenant_tree_dn = ou=projects,dc=my-domain,dc=com 
  tenant_objectclass=groupOfNames
  role_tree_dn = ou=roles,dc=my-domain,dc=com
  role_objectclass=organizationalRole
  group_tree_dn = ou=groups,dc=my-domain,dc=com 
  use_dumb_member = True  
  allow_subtree_delete = True

  
  3. Restart keystone.
  4. Create default data such as users (e.g. admin), a project (e.g. the admin
  project) and roles (e.g. the admin or member role).
  5. Delete the project; it fails as follows:
  # keystone --debug tenant-delete test
  DEBUG:keystoneclient.session:REQ: curl -g -i -X DELETE 
http://10.43.211.108:35357/v2.0/tenants/a4d874fa21f048cc830ef83296c04e29 -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}fcf0bdd1b74b11623c46762555379ed7a1dc80f4"
  INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection 
(1): 10.43.211.108
  DEBUG:requests.packages.urllib3.connectionpool:"DELETE 
/v2.0/tenants/a4d874fa21f048cc830ef83296c04e29 HTTP/1.1" 404 114
  DEBUG:keystoneclient.session:RESP:
  DEBUG:keystoneclient.session:Request returned failure status: 404
  Could not find role: a4d874fa21f048cc830ef83296c04e29 (HTTP 404) (Request-ID: 
req-bf6ec82f-27b8-42fe-a020-af94373a620c)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1517694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517702] [NEW] create a rbac policy target_tenant_id=self, can not delete this policy

2015-11-18 Thread zhaobo
Public bug reported:


I created a network as admin, giving just the network name. I wanted to make
this network shared just with myself, so I created an RBAC policy for it. But
after a while, I wanted to share this network with other tenants, or delete
this policy, and neither works.

Repro steps

1. neutron net-create test1withadmin tenant A
2. neutron rbac-create test --type network --action access_as_shared 
--target-tenant admin_tenant
3. neutron rbac-delete policy_id  --> hit error
4. neutron rbac-update policy_id  --target-tenant demo_tenant --> hit error

So this policy cannot be deleted.

err_details
-
2015-11-19 02:46:57.687 ERROR neutron.callbacks.manager 
[req-5300e9fd-518d-46d8-b168-4ff3ea8e11bc admin 
5d73438ed76a4399b8d2996a699146c5] Error during notification for 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change 
rbac-policy, before_update
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager Traceback (most recent 
call last):
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/callbacks/manager.py", line 141, in _notify_loop
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager callback(resource, 
event, trigger, **kwargs)
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 151, in 
validate_network_rbac_policy_change
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager tenant_to_check)
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 157, in 
ensure_no_tenant_ports_on_network
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager ctx_admin = 
ctx.get_admin_context()
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager InvalidSharedSetting: 
Unable to reconfigure sharing settings for network 
d207350c-6d19-45fc-a3a4-2c70bf35a933. Multiple tenants are using it.
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager 
2015-11-19 02:46:57.687 ERROR neutron.callbacks.manager 
[req-5300e9fd-518d-46d8-b168-4ff3ea8e11bc admin 
5d73438ed76a4399b8d2996a699146c5] Error during notification for 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change 
rbac-policy, before_update
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager Traceback (most recent 
call last):
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/callbacks/manager.py", line 141, in _notify_loop
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager callback(resource, 
event, trigger, **kwargs)
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 151, in 
validate_network_rbac_policy_change
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager tenant_to_check)
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 157, in 
ensure_no_tenant_ports_on_network
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager ctx_admin = 
ctx.get_admin_context()
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager InvalidSharedSetting: 
Unable to reconfigure sharing settings for network 
d207350c-6d19-45fc-a3a4-2c70bf35a933. Multiple tenants are using it.
2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517702

Title:
  create a rbac policy target_tenant_id=self, can not delete this policy

Status in neutron:
  New

Bug description:
  
  I created a network as admin, giving just the network name. I wanted to
  make this network shared just with myself, so I created an RBAC policy
  for it. But after a while, I wanted to share this network with other
  tenants, or delete this policy, and neither works.

  Repro steps
  
  1. neutron net-create test1withadmin tenant A
  2. neutron rbac-create test --type network --action access_as_shared 
--target-tenant admin_tenant
  3. neutron rbac-delete policy_id  --> hit error
  4. neutron rbac-update policy_id  --target-tenant demo_tenant --> hit error

  So the policy can be neither updated nor deleted.
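
  From the traceback (err_details below), the failure comes from neutron's
before_update callback for RBAC policies: before a sharing policy is narrowed
or removed, validate_network_rbac_policy_change calls
ensure_no_tenant_ports_on_network, which raises InvalidSharedSetting if the
network is still in use. A minimal Python sketch of that guard (simplified,
with hypothetical names; the real implementation lives in
neutron/db/db_base_plugin_v2.py):

    class InvalidSharedSetting(Exception):
        pass

    def ensure_no_tenant_ports_on_network(ports, network_id,
                                          net_tenant_id, policy_tenant_id):
        """Refuse the RBAC change while other tenants still use the network.

        `ports` stands in for a DB query of the ports on `network_id`.
        """
        for port in ports:
            if port['tenant_id'] not in (net_tenant_id, policy_tenant_id):
                raise InvalidSharedSetting(
                    "Unable to reconfigure sharing settings for network "
                    "%s. Multiple tenants are using it." % network_id)

    try:
        ensure_no_tenant_ports_on_network(
            ports=[{'tenant_id': 'tenant-b'}],  # another tenant has a port
            network_id='d207350c-6d19-45fc-a3a4-2c70bf35a933',
            net_tenant_id='tenant-a', policy_tenant_id='tenant-a')
    except InvalidSharedSetting as e:
        print(e)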

  err_details
  -----------
  2015-11-19 02:46:57.687 ERROR neutron.callbacks.manager 
[req-5300e9fd-518d-46d8-b168-4ff3ea8e11bc admin 
5d73438ed76a4399b8d2996a699146c5] Error during notification for 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change 
rbac-policy, before_update
  2015-11-19 02:46:57.687 TRACE neutron.callbacks.manager Traceback (most 
recent call last):
  2015-11-19 02:46:57.687 TRACE 

[Yahoo-eng-team] [Bug 1517653] [NEW] subnet tests generating error in tests

2015-11-18 Thread Lin Hua Cheng
Public bug reported:

- function.__call__(, 
'h0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=u'ICMP', shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
+ function.__call__(mox.IsA() , 
'h0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol='ICMP', shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> 
..Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
- function.__call__(, 
'f0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=u'ICMP', shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
+ function.__call__(mox.IsA() , 
'f0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol='ICMP', shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> , , 'tenant_id': 
'1', 'enabled': True, 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'deny', 'position': 2, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'c6298a93-850f-4f64-b78a-959fd4f1e5df', 'name': ''}>], 'tenant_id': '1', 'i
 d': 'abcdef-c3eb-4fee-9763-12de3338041e', 'shared': True, 'audited': True, 
'name': 'policy1'}>, 'tenant_id': '1', 'enabled': True, 'rule_id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'ALLOW', 'position': 1, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'name': 'rule1'}>
.Failed to update rule new name: Unexpected method call.  unexpected:-  
expected:+
- function.__call__(, 
'f0881d38-c3eb-4fee-9763-12de3338041d', action=u'ALLOW', description=u'new 
desc', destination_ip_address=None, destination_port=u'1:65535', enabled=True, 
ip_version=u'', name=u'new name', protocol=None, shared=False, 
source_ip_address='1.2.3.0/24', source_port=None) -> None
+ function.__call__(mox.IsA() , 
'f0881d38-c3eb-4fee-9763-12de3338041d', action='ALLOW', description='new desc', 
destination_ip_address=None, destination_port='1:65535', enabled=True, 
name='new name', protocol=None, shared=False, source_ip_address='1.2.3.0/24', 
source_port=None) -> , , 'tenant_id': 
'1', 'enabled': True, 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'deny', 'position': 2, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'c6298a93-850f-4f64-b78a-959fd4f1e5df', 'name': ''}>], 'tenant_id': '1', 'id'
 : 'abcdef-c3eb-4fee-9763-12de3338041e', 'shared': True, 'audited': True, 
'name': 'policy1'}>, 'tenant_id': '1', 'enabled': True, 'rule_id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'source_ip_address': '1.2.3.0/24', 
'destination_ip_address': '4.5.6.7/32', 'firewall_policy_id': 
'abcdef-c3eb-4fee-9763-12de3338041e', 'action': 'ALLOW', 'position': 1, 
'source_port': '80', 'shared': True, 'destination_port': '1:65535', 'id': 
'f0881d38-c3eb-4fee-9763-12de3338041d', 'name': 'rule1'}>
.Error
 while checking action permissions.
Traceback (most recent call last):
  File 
"/home/lin-hua-cheng/Documents/workspace/horizon/horizon/tables/base.py", line 
1270, in _filter_action
return action._allowed(request, datum) and row_matched
  File 
"/home/lin-hua-cheng/Documents/workspace/horizon/horizon/tables/actions.py", 
line 136, in _allowed
return self.allowed(request, datum)
  File 
"/home/lin-hua-cheng/Documents/workspace/horizon/openstack_dashboard/dashboards/project/networks/subnets/tables.py",
 line 40, in allowed
network = self.table._get_network()
  File 
"/home/lin-hua-cheng/Documents/workspace/horizon/horizon/utils/memoized.py", 
line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File 
"/home/lin-hua-cheng/Documents/workspace/horizon/openstack_dashboard/dashboards/project/networks/subnets/tables.py",
 line 145, in _get_network
exceptions.handle(self.request, msg, redirect=self.failure_url)
  File "/home/lin-hua-cheng/Documents/workspace/horizon/horizon/exceptions.py", 
line 368, in handle
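
The log above is truncated by the digest, but the failure it shows is a
standard mox mismatch: the recorded expectation still carries an ip_version
kwarg (with unicode values) that the replayed call no longer passes, so mox
raises UnexpectedMethodCall and prints the "unexpected:-  expected:+" diff.
A self-contained sketch of the same mismatch (the API class and names are
illustrative, not Horizon's actual test code):

    import mox

    class FirewallApi(object):
        def rule_update(self, request, rule_id, **kwargs):
            pass

    m = mox.Mox()
    api = m.CreateMock(FirewallApi)
    # The recorded expectation includes an ip_version kwarg ...
    api.rule_update(mox.IgnoreArg(), 'rule-1', name='new name',
                    ip_version=u'')
    m.ReplayAll()
    try:
        # ... but the code under test stopped passing it, so replay fails.
        api.rule_update(None, 'rule-1', name='new name')
    except mox.UnexpectedMethodCall as e:
        print(e)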

[Yahoo-eng-team] [Bug 1473567] Re: Fernet tokens fail tempest runs

2015-11-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/231191
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=a2c4ebc4fac75c0889489e4bed5a0aa89f8193f1
Submitter: Jenkins
Branch: master

commit a2c4ebc4fac75c0889489e4bed5a0aa89f8193f1
Author: Lance Bragstad 
Date:   Mon Oct 5 20:34:39 2015 +

Fix race condition when changing passwords

This patch makes it so that there is a one second wait when changing a 
password
with Keystone. This is done because when we lose sub-second precision with
Fernet tokens there is the possibility of a token being issued and revoked
within the same second. Keystone will err on the side of security and 
return a
404 NotFound when validating a token that was issued in the same second as a
revocation event.

For example, it is possible for a revocation event to happen at .01, 
but it
will be stored in MySQL as .00 because of sub-second truncation. A 
token can
be created at .02, but the creation time of that token, according to
Fernet, will be .00, because Fernet tokens don't have sub-second 
precision.
When that token is validated, it will appear invalid even though it was 
created
*after* the revocation event.

Change-Id: Ied83448de8af1b0da9afdfe6ce9431438215bfe0
Closes-Bug: 1473567
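
The race is easy to reproduce in isolation. A minimal pure-Python sketch of
the truncation described above (timestamps and the validity check are
illustrative, not Keystone's actual code):

    from datetime import datetime

    def truncate(ts):
        # Both the MySQL column and the Fernet payload drop sub-second
        # precision, so distinct instants collapse onto the same second.
        return ts.replace(microsecond=0)

    revoked_at = truncate(datetime(2015, 10, 5, 20, 34, 39, 10000))  # .01
    issued_at = truncate(datetime(2015, 10, 5, 20, 34, 39, 20000))   # .02

    # Erring on the side of security: a token issued at or before a
    # revocation event is treated as revoked, so after truncation this
    # token looks invalid even though it was really created afterwards.
    print(issued_at > revoked_at)  # False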


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1473567

Title:
  Fernet tokens fail tempest runs

Status in OpenStack Identity (keystone):
  In Progress
Status in tempest:
  Fix Released

Bug description:
  It seems testing an OpenStack instance that was deployed with Fernet tokens 
fails on some of the tempest tests.  In my case these tests failed:
  http://paste.openstack.org/show/363017/

  bknudson also found similar in a test patch:
 https://review.openstack.org/#/c/195780

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517652] [NEW] Cannot handle glanceclient CommunicationError in Horizon

2015-11-18 Thread Jingjing Ren
Public bug reported:

When clicking the "Launch Instance" button, if the glance client throws a
CommunicationError, the UI still shows the generic message "Danger: An error
occurred. Please try again later." rather than the error message from the
exception-handling code, even though there is code that tries to handle the
exception.
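
One hedged sketch of a fix: force the paginated generator inside the try
block, so the CommunicationError is raised where the handler can see it
rather than later during template rendering (the helper below is
illustrative, not Horizon's actual API):

    from glanceclient import exc as glance_exc

    def get_public_images(glance):
        """Return (images, error_message); `glance` is an authenticated
        glanceclient Client (a stand-in for Horizon's API wrapper)."""
        try:
            # list() drains the paginated generator now, inside the try
            # block, so the error cannot escape later during rendering.
            images = list(glance.images.list(filters={'is_public': True}))
            return images, None
        except glance_exc.CommunicationError as e:
            return [], "Unable to retrieve public images: %s" % e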

The Horizon log shows the following traceback when the glance client throws
a CommunicationError.

2015-09-30 12:57:00,240 3549 ERROR django.request Internal Server Error: 
/horizon/project/instances/launch
Traceback (most recent call last):
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 137, in get_response
response = response.render()
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/response.py",
 line 105, in render
self.content = self.rendered_content
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/response.py",
 line 82, in rendered_content
content = template.render(context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/base.py", 
line 140, in render
return self._render(context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/base.py", 
line 134, in _render
return self.nodelist.render(context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/base.py", 
line 840, in render
bit = self.render_node(node, context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/debug.py", 
line 78, in render_node
return node.render(context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/defaulttags.py",
 line 504, in render
six.iteritems(self.extra_context)])
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/base.py", 
line 585, in resolve
obj = self.var.resolve(context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/base.py", 
line 735, in resolve
value = self._resolve_lookup(context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/template/base.py", 
line 789, in _resolve_lookup
current = current()
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/workflows/base.py",
 line 717, in get_entry_point
step._verify_contributions(self.context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/workflows/base.py",
 line 392, in _verify_contributions
field = self.action.fields.get(key, None)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/workflows/base.py",
 line 368, in action
context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 147, in __init__
request, context, *args, **kwargs)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/workflows/base.py",
 line 138, in __init__
self._populate_choices(request, context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/workflows/base.py",
 line 151, in _populate_choices
bound_field.choices = meth(request, context)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 446, in populate_instance_snapshot_id_choices
self._images_cache)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/images/utils.py",
 line 44, in get_available_images
_("Unable to retrieve public images."))
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/exceptions.py",
 line 364, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/images/utils.py",
 line 39, in get_available_images
request, filters=public)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/api/glance.py",
 line 104, in image_list_detailed
images = list(images_iter)
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../glanceclient/v1/images.py",
 line 249, in list
for image in paginate(params, return_request_id):
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../glanceclient/v1/images.py",
 line 233, in paginate
images, resp = self._list(url, "images")
  File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../glanceclient/v1/images.py",
 line 63, in _list
resp, body = self.client.get(url)
  

[Yahoo-eng-team] [Bug 1517741] [NEW] "Update default quota" always displays success messages even if an error occurred.

2015-11-18 Thread Kenji Ishii
Public bug reported:

"Update default quota" execute two apis (nova.default_quota_update and
cinder.default_quota_update).

At the moment,  "Update default quota" always return "True" even if both of 
their apis failed. 
Then success message is displayed in spite of failure.

In this case, I think we should display only error message.
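
A minimal sketch of the suggested behaviour (the updater mapping and the
error reporter are hypothetical stand-ins for Horizon's API wrappers and
messages.error()): run both updates and return False, with only an error
message, if either of them fails.

    def update_default_quotas(request, data, updaters, report_error):
        """Run each backend updater; succeed only if all of them do."""
        failed = []
        for label, update in updaters.items():
            try:
                update(request, **data)
            except Exception as e:
                failed.append("%s (%s)" % (label, e))
        if failed:
            # Display only the error message, as the report suggests.
            report_error(request, "Unable to update default quotas: " +
                         "; ".join(failed))
            return False
        return True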

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1517741

Title:
  "Update default quota" is always displayed successful messages even if
  error occurred.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  "Update default quota" execute two apis (nova.default_quota_update and
  cinder.default_quota_update).

  At the moment,  "Update default quota" always return "True" even if both of 
their apis failed. 
  Then success message is displayed in spite of failure.

  In this case, I think we should display only error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1517741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513279] Re: ping6 works even in absence of security rules v6 routing with legacy router

2015-11-18 Thread Sridhar Gaddam
Marking the bug as Invalid as per the discussion. If any issue is
identified, @Ritesh would open a new bug.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513279

Title:
  ping6 works even in absence of security rules v6 routing with legacy
  router

Status in neutron:
  Invalid

Bug description:
  Not able to ping the v6 address of a VM on a different network, using a
legacy router. The setup has one controller/network node and two compute nodes.

  Steps:
  0. Add security rules to allow ping traffic. 
  neutron security-group-rule-create --protocol icmp --direction ingress 
94d41516-dab5-413c-9349-7c9bc3a09e75
  1. create two networks.
  2. create ipv4 subnet on each (for accessing vm).
  3. create ipv6 subnet on each with dhcpv6-stateful addressing.
   neutron subnet-create dnet1 :1::1/64 --name d6sub1 --enable-dhcp 
--ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode 
dhcpv6-stateful
  4. create a router (not distributed).
  5. add interface to router on each of the four subnets.
  6. boot a vm on both networks.
  7. Log into the guest VM and configure the interface to receive an inet6 
DHCP address; use dhclient to get a v6 address.
  8. Ping the v6 address of the other guest VM. It fails!

  
  ubuntu@dvm11:~$ ping6 :2::4
  PING :2::4(:2::4) 56 data bytes
  From :1::1 icmp_seq=1 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=2 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=3 Destination unreachable: Address unreachable

  
  Note: As we need to modify interface settings and use dhclient, an Ubuntu 
cloud image was used. One may need to set the MTU to 1400 for communicating 
with the Ubuntu cloud image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517704] [NEW] Test run still passes even with test failures

2015-11-18 Thread Lin Hua Cheng
Public bug reported:

Tests still seem to pass even when there are test failures.

There are two test failures in the code, and the test run still reports
that it passes:

https://bugs.launchpad.net/horizon/+bug/1517670
https://bugs.launchpad.net/horizon/+bug/1517653

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1517704

Title:
  Test run still passes even with test failures

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Tests still seem to pass even when there are test failures.

  There are two test failures in the code, and the test run still reports
  that it passes:

  https://bugs.launchpad.net/horizon/+bug/1517670
  https://bugs.launchpad.net/horizon/+bug/1517653

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1517704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517643] [NEW] VMware: metadata definition for QoS resource allocation

2015-11-18 Thread Giridhar Jayavelu
Public bug reported:

Nova VCDriver supports QoS resource allocation for memory, disk and vif in 
addition to cpu [1]. 
Resource allocation can be expressed using shares, limit, and reservation. 
Administrators can configure these properties using flavor extra specs or image 
metadata.

This lite-spec is for adding metadata definition for both flavor and
images in Glance.

The list of metadata properties to be added:
disk_io_reservation, disk_io_limit, disk_io_shares_share,
disk_io_shares_level, vif_reservation, vif_limit, vif_shares_share,
vif_shares_level, memory_reservation, memory_limit, memory_shares_share,
memory_shares_level, cpu_reservation, cpu_limit, cpu_shares_share,
cpu_shares_level

[1] http://specs.openstack.org/openstack/nova-
specs/specs/mitaka/approved/vmware-limits.html
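
For context, these properties typically surface to operators as nova flavor
extra specs under the "quota:" prefix (following the vmware-limits spec in
[1]); the metadata definitions would let Horizon offer them for both flavors
and images. A hedged sketch of a representative subset, with illustrative
values:

    # Representative flavor extra specs for VMware QoS; values are
    # illustrative, and the "quota:" prefix follows the spec in [1].
    vmware_qos_extra_specs = {
        "quota:cpu_reservation": "1000",     # guaranteed CPU, in MHz
        "quota:cpu_limit": "2000",           # CPU cap, in MHz
        "quota:cpu_shares_level": "custom",  # low / normal / high / custom
        "quota:cpu_shares_share": "1500",    # used when level is "custom"
        "quota:memory_reservation": "512",   # guaranteed memory, in MB
        "quota:disk_io_limit": "500",        # disk I/O cap, in IOPS
        "quota:vif_limit": "10000",          # network bandwidth cap
    }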

** Affects: glance
 Importance: Undecided
 Assignee: Giridhar Jayavelu (gjayavelu)
 Status: New


** Tags: spec-lite wishlist

** Changed in: glance
 Assignee: (unassigned) => Giridhar Jayavelu (gjayavelu)

** Summary changed:

- VMware: metadata definition for VMware QoS 
+ VMware: metadata definition for QoS resource allocation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1517643

Title:
  VMware: metadata definition for QoS resource allocation

Status in Glance:
  New

Bug description:
  Nova VCDriver supports QoS resource allocation for memory, disk and vif in 
addition to cpu [1]. 
  Resource allocation can be expressed using shares, limit, and reservation. 
Administrators can configure these properties using flavor extra specs or image 
metadata.

  This lite-spec is for adding metadata definition for both flavor and
  images in Glance.

  The list of metadata properties to be added:
  disk_io_reservation, disk_io_limit, disk_io_shares_share,
  disk_io_shares_level, vif_reservation, vif_limit, vif_shares_share,
  vif_shares_level, memory_reservation, memory_limit, memory_shares_share,
  memory_shares_level, cpu_reservation, cpu_limit, cpu_shares_share,
  cpu_shares_level

  [1] http://specs.openstack.org/openstack/nova-
  specs/specs/mitaka/approved/vmware-limits.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1517643/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp