[Yahoo-eng-team] [Bug 1467560] Re: RFE: add instance uuid field to nova.quota_usages table

2015-10-07 Thread Stephen Gordon
** Changed in: nova
   Status: Won't Fix => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467560

Title:
  RFE: add instance uuid field to nova.quota_usages table

Status in OpenStack Compute (nova):
  New

Bug description:
  In Icehouse, the nova.quota_usages table frequently gets out of sync
  with the currently active/stopped instances in a tenant/project.
  Specifically, there are times when the instance will be set to
  terminated/deleted in the instances table while the quota_usages
  table retains the data, counting against the tenant's total quota.
  As far as I can tell there is no way to correlate instances.uuid with
  the records in nova.quota_usages.

  I propose adding an instance uuid column to make future cleanup of
  this table easier.

  I also propose a housecleaning task that does this clean up
  automatically.

  Thanks,
  Dan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351001] [NEW] nova force-delete does not delete BUILD state instances

2014-07-31 Thread Stephen Gordon
Public bug reported:

Description of problem:

Using nova force-delete $instance-id fails when an instance is in status
BUILD with OS-EXT-STS:task_state 'deleting'.  However, nova delete
does seem to work after several tries.

Version-Release number of selected component (if applicable):

2013.2 (Havana)

How reproducible:


Steps to Reproduce:
1. find a seemingly hung instance
2. fire off nova-delete
3. watch it complain

Actual results:

[root@host02 ~(keystone_admin)]$ nova force-delete 3a83b712-4667-44c1-a83d-ada164ff78d1
ERROR: Cannot 'forceDelete' while instance is in vm_state building (HTTP 409) (Request-ID: req-22737c83-32f4-4c6d-ae9c-09a542556907)


Expected results:

do the needful.


Additional info:

Here are some logs obtained from this behavior, this is on RHOS4 /
RHEL6.5:

--snip--

[root@host02 ~(keystone_admin)]$ nova force-delete 3a83b712-4667-44c1-a83d-ada164ff78d1
ERROR: Cannot 'forceDelete' while instance is in vm_state building (HTTP 409) (Request-ID: req-22737c83-32f4-4c6d-ae9c-09a542556907)
[root@host02 ~(keystone_admin)]$ nova list --all-tenants | grep 3a83b712-4667-44c1-a83d-ada164ff78d1
| 3a83b712-4667-44c1-a83d-ada164ff78d1 | bcrochet-foreman | BUILD | deleting | NOSTATE | default=192.168.87.7; foreman_int=192.168.200.6; foreman_ext=192.168.201.6 |

[root@host02 ~(keystone_admin)]$ nova show 3a83b712-4667-44c1-a83d-ada164ff78d1
+--------------------------------------+----------------------------------------------------------------------------+
| Property                             | Value                                                                      |
+--------------------------------------+----------------------------------------------------------------------------+
| status                               | BUILD                                                                      |
| updated                              | 2014-04-16T20:56:44Z                                                       |
| OS-EXT-STS:task_state                | deleting                                                                   |
| OS-EXT-SRV-ATTR:host                 | host08.oslab.priv                                                          |
| foreman_ext network                  | 192.168.201.6                                                              |
| key_name                             | foreman-ci                                                                 |
| image                                | rhel-guest-image-6-6.5-20140116.1-1 (253354e7-8d65-4d95-b134-6b423d125579) |
| hostId                               | 4b98ba395063916c15f5b96a791683fa5d116109987c6a6b0b8de2f1                   |
| OS-EXT-STS:vm_state                  | building                                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-99e9                                                              |
| foreman_int network                  | 192.168.200.6                                                              |
| OS-SRV-USG:launched_at               | None                                                                       |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | host08.oslab.priv                                                          |
| flavor                               | m1.large (4)                                                               |
| id                                   | 3a83b712-4667-44c1-a83d-ada164ff78d1                                       |
| security_groups                      | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}]      |
| OS-SRV-USG:terminated_at             | None                                                                       |
| user_id                              | 13090770bacc46ccb8fb7f5e13e5de98                                           |
| name                                 | bcrochet-foreman                                                           |
| created                              | 2014-04-16T20:27:51Z                                                       |
| tenant_id                            | f8e6ba11caa94ea98d24ec819eb746fd                                           |
| OS-DCF:diskConfig                    | MANUAL                                                                     |
| metadata                             | {}                                                                         |
| os-extended-volumes:volumes_attached | []                                                                         |
| accessIPv4                           |                                                                            |
| accessIPv6                           |                                                                            |
| progress                             | 0                                                                          |

(output truncated)

[Yahoo-eng-team] [Bug 1330132] Re: Creation of Member role is no longer required

2014-07-25 Thread Stephen Gordon
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1330132

Title:
  Creation of Member role is no longer required

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  In Progress

Bug description:
  Since Grizzly the Keystone service's SQL creation/migration scripts
  automatically create a role named _member_ for use as the default
  member role. Since Icehouse (backported to Havana) Horizon uses this
  as the default member role.

  Devstack still creates a Member role, as was previously required:

  # The Member role is used by Horizon and Swift so we need to keep it:
  MEMBER_ROLE=$(openstack role create \
  Member \
  | grep " id " | get_field 2)

  As noted above, Horizon no longer uses such a role in the default
  configuration and on investigation the Swift dependency appears to be
  introduced by the way devstack configures Swift.

  As such it should now be possible to stop creating this role (with
  corresponding changes to the Swift setup in devstack) and use _member_
  instead, avoiding the creation (and confusion) of having two member
  roles with different names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1330132/+subscriptions



[Yahoo-eng-team] [Bug 1278028] Re: VMware: update the default 'task_poll_interval' time

2014-07-03 Thread Stephen Gordon
** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1278028

Title:
  VMware: update the default 'task_poll_interval' time

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Manuals:
  In Progress

Bug description:
  https://review.openstack.org/70079

  Dear documentation bug triager. This bug was created here because we
  did not know how to map the project name openstack/nova to a
  launchpad project name. This indicates that the notify_impact config
  needs tweaks. You can ask the OpenStack infra team (#openstack-infra
  on freenode) for help if you need to.

  commit 73c87a280e77e03d228d34ab4781ca2e3b02e40e
  Author: Gary Kotton gkot...@vmware.com
  Date:   Thu Jan 30 01:44:10 2014 -0800

  VMware: update the default 'task_poll_interval' time
  
  The original default means that each operation against the backend
  takes at least 5 seconds. The default is updated to 0.5 seconds.
  
  DocImpact
  Updated default value for task_poll_interval from 5 seconds to
  0.5 seconds
  
  Change-Id: I867b913f52b67fa9d655f58a2e316b8fd1624426
  Closes-bug: #1274439
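
  For deployments that want a different interval, the option can still
  be set explicitly in nova.conf. A hedged sketch (the option group
  placement varies by release; the [vmware] section is assumed here,
  so check your release's configuration reference):

```ini
[vmware]
# Poll outstanding vCenter tasks every 0.5s instead of the old 5s default
task_poll_interval = 0.5
```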

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1278028/+subscriptions



[Yahoo-eng-team] [Bug 1326965] [NEW] instance doesn't work after a failed migration

2014-06-05 Thread Stephen Gordon
Public bug reported:

Description of problem:
While trying to run instance migration on an instance, the migration
failed because of incorrect SSH key authentication between the nova
compute nodes. After the failed migration:
1. The instance failed to run.
2. It couldn't be deleted.
3. Its volumes couldn't be detached, snapshotted, or copied from.

Thus none of the instance's consistent data was available.

Version-Release number of selected component (if applicable):
python-nova-2014.1-0.11.b2.fc21.noarch
openstack-nova-api-2014.1-0.11.b2.fc21.noarch
openstack-nova-cert-2014.1-0.11.b2.fc21.noarch
openstack-nova-conductor-2014.1-0.11.b2.fc21.noarch
openstack-nova-scheduler-2014.1-0.11.b2.fc21.noarch
openstack-nova-compute-2014.1-0.11.b2.fc21.noarch
python-novaclient-2.15.0-1.fc20.noarch
openstack-nova-common-2014.1-0.11.b2.fc21.noarch
openstack-nova-console-2014.1-0.11.b2.fc21.noarch
openstack-nova-novncproxy-2014.1-0.11.b2.fc21.noarch

How reproducible:
100%

Steps to Reproduce:
1. launch an instance
2. try to migrate it. 
3. see instance status.

Actual results:
the instance is not responding to any action.

Expected results:
If the migration fails before doing anything to the instance's files,
the instance should be up and running and an error about the failed
migration should appear.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326965

Title:
  instance doesn't work after a failed migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  While trying to run instance migration on an instance, the migration
  failed because of incorrect SSH key authentication between the nova
  compute nodes. After the failed migration:
  1. The instance failed to run.
  2. It couldn't be deleted.
  3. Its volumes couldn't be detached, snapshotted, or copied from.

  Thus none of the instance's consistent data was available.

  Version-Release number of selected component (if applicable):
  python-nova-2014.1-0.11.b2.fc21.noarch
  openstack-nova-api-2014.1-0.11.b2.fc21.noarch
  openstack-nova-cert-2014.1-0.11.b2.fc21.noarch
  openstack-nova-conductor-2014.1-0.11.b2.fc21.noarch
  openstack-nova-scheduler-2014.1-0.11.b2.fc21.noarch
  openstack-nova-compute-2014.1-0.11.b2.fc21.noarch
  python-novaclient-2.15.0-1.fc20.noarch
  openstack-nova-common-2014.1-0.11.b2.fc21.noarch
  openstack-nova-console-2014.1-0.11.b2.fc21.noarch
  openstack-nova-novncproxy-2014.1-0.11.b2.fc21.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance
  2. try to migrate it. 
  3. see instance status.

  Actual results:
  the instance is not responding to any action.

  Expected results:
  If the migration fails before doing anything to the instance's files,
  the instance should be up and running and an error about the failed
  migration should appear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326965/+subscriptions



[Yahoo-eng-team] [Bug 1314756] Re: Default RedHat Install of IceHouse fails

2014-05-15 Thread Stephen Gordon
** Changed in: openstack-manuals
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314756

Title:
  Default RedHat Install of IceHouse fails

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Manuals:
  Invalid

Bug description:
  When following the deployment guide for the Red Hat Icehouse install,
  the nova compute layer fails to launch a VM via the dashboard.

  I am running CentOS 6.4.

  The resulting error from the api.log is:

  2014-04-29 16:16:49.723 2252 ERROR nova.api.openstack [req-a5503226-ad62-47fd-9ce2-9bd0b3c79d54 ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] Caught error: Timed out waiting for a reply to message ID 1c8a38a6a601410981cc630ddefa6346
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack Traceback (most recent call last):
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py", line 125, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return req.get_response(self.application)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/request.py", line 1296, in send
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     application, catch_exc_info=False)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/request.py", line 1260, in call_application
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 615, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return self.app(env, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py", line 131, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     response = self.app(environ, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 917, in __call__
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     content_type, body, accept)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 983, in _process_stack
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     action_result = self.dispatch(meth, request, action_args)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1070, in dispatch
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return method(req=request, **action_args)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/compute/servers.py", line 956, in create
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     legacy_bdm=legacy_bdm)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/hooks.py", line 103, in inner
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     rv = f(*args, **kwargs)
  2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 1341, in create

[Yahoo-eng-team] [Bug 1315793] [NEW] Host aggregate screen does not allow provision of metadata

2014-05-03 Thread Stephen Gordon
Public bug reported:

A tab has been added for creating Host Aggregates; optionally they may
be exposed as Availability Zones, making them explicitly targetable by
users.

If they are not exposed as Availability Zones, they are instead
targeted implicitly by matching flavor extra specifications against
Host Aggregate metadata. The Host Aggregate creation and edit screens
don't provide any way to set this metadata, however.
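
For context, the implicit targeting described above boils down to
comparing flavor extra specs against each aggregate's metadata. A
simplified, illustrative sketch of that matching (not the actual nova
filter code; the real AggregateInstanceExtraSpecsFilter also handles
scoped keys and comparison operators):

```python
def aggregate_matches(flavor_extra_specs, aggregate_metadata):
    """Return True if every flavor extra spec is satisfied by the
    aggregate's metadata (simplified exact-match semantics)."""
    return all(aggregate_metadata.get(key) == value
               for key, value in flavor_extra_specs.items())

# Hypothetical example: a flavor asking for SSD-backed hosts, and two
# aggregates, only one of which is tagged accordingly.
flavor = {"ssd": "true"}
agg_ssd = {"ssd": "true", "ram_gb": "256"}
agg_hdd = {"ram_gb": "128"}

print(aggregate_matches(flavor, agg_ssd))  # True
print(aggregate_matches(flavor, agg_hdd))  # False
```

This is why the missing metadata fields in the Horizon screens matter:
without a way to set the aggregate side of this comparison, implicit
targeting cannot be configured from the dashboard at all.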

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1315793

Title:
  Host aggregate screen does not allow provision of metadata

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A tab has been added for creating Host Aggregates; optionally they
  may be exposed as Availability Zones, making them explicitly
  targetable by users.

  If they are not exposed as Availability Zones, they are instead
  targeted implicitly by matching flavor extra specifications against
  Host Aggregate metadata. The Host Aggregate creation and edit screens
  don't provide any way to set this metadata, however.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1315793/+subscriptions



[Yahoo-eng-team] [Bug 1306597] [NEW] default floating ip pool should be visible and editable via nova CLI

2014-04-11 Thread Stephen Gordon
Public bug reported:

default floating ip pool is configured in default_floating_pool=
(default is named: nova  )

user can't see the default under nova floating-ip-pool-list
user can't change/edit the default pool using the CLI

1. create few floating ip pools with names 
2. try to check which one is default 
nova floating-ip-pool-list
+---+
| name  |
+---+
| Pool1 |
| Pool2 |
| Pool3 |
| nova  |
|   |
+---+

3. try to change the default without edit nova conf and restart nova

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306597

Title:
  default floating ip pool should be visible and editable via nova CLI

Status in OpenStack Compute (Nova):
  New

Bug description:
  default floating ip pool is configured in default_floating_pool=
  (default is named: nova  )

  user can't see the default under nova floating-ip-pool-list
  user can't change/edit the default pool using the CLI

  1. create few floating ip pools with names 
  2. try to check which one is default 
  nova floating-ip-pool-list
  +---+
  | name  |
  +---+
  | Pool1 |
  | Pool2 |
  | Pool3 |
  | nova  |
  |   |
  +---+

  3. try to change the default without edit nova conf and restart nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306597/+subscriptions



[Yahoo-eng-team] [Bug 1287722] [NEW] get_vcpu_total does not account for cpu_allocation_ratio

2014-03-04 Thread Stephen Gordon
Public bug reported:

When retrieving the vCPU total via the Nova API, the returned value does
not account for cpu_allocation_ratio changes. As a result external
consumers, such as Horizon and the command line client, can report
over-utilization when a cpu_allocation_ratio > 1 is in use even when the
number of vCPUs available has not been exhausted.

For example:

If a compute node has 8 cores and the overcommit ratio is 16, that means
128 vCPUs are available from the scheduler's point of view. The API call
will however continue to return 8.

Obviously it's probably not possible to change the vCPU total behaviour
in the current field, but it would be nice for external consumers to be
able to determine (possibly via an additional field or extension) how
many vCPUs the host is actually exposing for scheduling.
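
To make the arithmetic above concrete, the scheduler-side count is
simply the physical vCPU count multiplied by the allocation ratio; a
sketch of the value such a hypothetical additional field or extension
could expose (function name is illustrative, not a nova API):

```python
def schedulable_vcpus(physical_vcpus, cpu_allocation_ratio):
    """vCPUs the scheduler considers available, as opposed to the raw
    physical count the API currently returns."""
    return int(physical_vcpus * cpu_allocation_ratio)

# The example from this report: 8 cores with a 16x overcommit ratio.
print(schedulable_vcpus(8, 16.0))  # 128
# With no overcommit, the value matches what the API returns today.
print(schedulable_vcpus(8, 1.0))   # 8
```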

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287722

Title:
  get_vcpu_total does not account for cpu_allocation_ratio

Status in OpenStack Compute (Nova):
  New

Bug description:
  When retrieving the vCPU total via the Nova API, the returned value
  does not account for cpu_allocation_ratio changes. As a result
  external consumers, such as Horizon and the command line client, can
  report over-utilization when a cpu_allocation_ratio > 1 is in use
  even when the number of vCPUs available has not been exhausted.

  For example:

  If a compute node has 8 cores and the overcommit ratio is 16, that
  means 128 vCPUs are available from the scheduler's point of view. The
  API call will however continue to return 8.

  Obviously it's probably not possible to change the vCPU total
  behaviour in the current field, but it would be nice for external
  consumers to be able to determine (possibly via an additional field or
  extension) how many vCPUs the host is actually exposing for
  scheduling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287722/+subscriptions



[Yahoo-eng-team] [Bug 1266391] Re: Install the dashboard in OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora  - havana

2014-01-10 Thread Stephen Gordon
3) Just noticed that review is only for puppet/packstack stuff - so
there is still the need to get the default in the horizon project fixed
up.

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1266391

Title:
  Install the dashboard in OpenStack Installation Guide for Red Hat
  Enterprise Linux, CentOS, and Fedora  - havana

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Manuals:
  Confirmed

Bug description:
  OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member" in the config file should
  be set to OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"

  
  ---
  Built: 2014-01-03T13:40:41+00:00
  git SHA: 9ad96c1083e1bed1a0582d54c7bc99dc84208fa4
  URL: 
http://docs.openstack.org/havana/install-guide/install/yum/content/install_dashboard.html
  source File: 
file:/home/jenkins/workspace/openstack-install-deploy-guide-fedora/doc/install-guide/section_dashboard-install.xml
  xml:id: install_dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1266391/+subscriptions



[Yahoo-eng-team] [Bug 1199577] Re: Dev guide has outdated links in CI w/ Jenkins section

2013-11-19 Thread Stephen Gordon
** Changed in: openstack-manuals
Milestone: None => havana

** Changed in: openstack-manuals
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199577

Title:
  Dev guide has outdated links in CI w/ Jenkins section

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Manuals:
  Fix Released

Bug description:
  http://docs.openstack.org/developer/nova/devref/jenkins.html

  
  The links associated with all but one of the tasks (nova-tarball
  works) are broken. Example link:
  gate-nova-unittests --
  https://jenkins.openstack.org/view/Nova/job/gate-nova-unittests, which is a 404

  It's hard to be a good citizen and write/update unit tests when you
  add/change code on the project, if you can't find how/where/whatever
  to do it;-)

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1199577/+subscriptions
