[Yahoo-eng-team] [Bug 1513981] [NEW] Extend IPAM tables to store more information helpful for IPAM drivers

2015-11-06 Thread Shraddha Pandhe
Public bug reported:

We (Yahoo) have some use cases where it would be great if the following
information were associated with the subnet db table:

1. Rack switch info
2. Backplane info
3. DHCP ip helpers
4. Option to tag allocation pools inside subnets
5. Multiple gateway addresses

We also want to store some information about the backplanes locally, so
a different table might be useful.

In a way, this information is not specific to our company. It's generic
information that ought to go with the subnets. Different companies can
use this information differently in their IPAM drivers, but the
information needs to be made available to justify the flexibility of
IPAM. In fact, a few companies are already interested in having some of
these attributes in the database.

Considering that the community is against arbitrary JSON blobs, I think
we can expand the database to incorporate such use cases. I would prefer
to avoid having our own database, to make sure that our use cases are
always shared with the community.
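
To make the idea concrete, here is a minimal sketch in plain SQLAlchemy
of the kind of side table this implies; the table and column names are
hypothetical, not a proposed final schema:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class SubnetInfraInfo(Base):
    # Hypothetical per-subnet infrastructure hints for IPAM drivers.
    __tablename__ = 'subnet_infra_info'

    subnet_id = sa.Column(sa.String(36),
                          sa.ForeignKey('subnets.id', ondelete='CASCADE'),
                          primary_key=True)
    rack_switch = sa.Column(sa.String(255))    # 1. rack switch info
    backplane = sa.Column(sa.String(255))      # 2. backplane info
    dhcp_ip_helper = sa.Column(sa.String(64))  # 3. DHCP IP helper

Allocation-pool tags (4) and multiple gateways (5) would similarly hang
off the allocation_pools and subnets tables as typed columns or
association tables rather than JSON blobs.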

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513981

Title:
  Extend IPAM tables to store more information helpful for IPAM drivers

Status in neutron:
  New

Bug description:
  We (Yahoo) have some use cases where it would be great if the
  following information were associated with the subnet db table:

  1. Rack switch info
  2. Backplane info
  3. DHCP ip helpers
  4. Option to tag allocation pools inside subnets
  5. Multiple gateway addresses

  We also want to store some information about the backplanes locally,
  so a different table might be useful.

  In a way, this information is not specific to our company. It's
  generic information that ought to go with the subnets. Different
  companies can use this information differently in their IPAM drivers,
  but the information needs to be made available to justify the
  flexibility of IPAM. In fact, a few companies are already interested
  in having some of these attributes in the database.

  Considering that the community is against arbitrary JSON blobs, I
  think we can expand the database to incorporate such use cases. I
  would prefer to avoid having our own database, to make sure that our
  use cases are always shared with the community.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513645] [NEW] Inconsistent use of admin/non-admin context in neutron network api while preparing network info

2015-11-05 Thread Shraddha Pandhe
Public bug reported:

In allocate_for_instance, the Neutron network API calls Neutron to
create the port(s) [1]. Once the port is created, it formats the network
info into the network info model before returning it to the Compute
Manager [2]. To build the network info, the API makes several calls to
Neutron:

1. List ports for device id [3]
2. Get associated floating IPs [4]
3. Get subnets from port [5] & [6]

Notice that in [3] and [4] the API uses an admin context to talk to
Neutron, whereas in [6] it does not.

This is inconsistent. Is this intentional?


[1] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L716
[2] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L739-L741
[3] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1671-L1676
[4] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1555-L1566
[5] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1568-L1573
[6] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1755-L1756
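
For illustration, a minimal sketch of the two patterns, assuming the
get_client() helper in nova.network.neutronv2 (admin=True yields an
elevated client) and standard neutronclient list_* calls:

from nova.network import neutronv2


def show_context_usage(context, instance):
    # [3], [4]: port and floating-IP lookups elevate to an admin client
    admin_client = neutronv2.get_client(context, admin=True)
    ports = admin_client.list_ports(device_id=instance['uuid'])['ports']
    admin_client.list_floatingips(port_id=ports[0]['id'])

    # [6]: the subnet lookup reuses the caller's own, possibly
    # non-admin, context
    client = neutronv2.get_client(context)
    return client.list_subnets(
        id=[ip['subnet_id'] for p in ports for ip in p['fixed_ips']])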

** Affects: nova
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513645

Title:
  Inconsistent use of admin/non-admin context in neutron network api
  while preparing network info

Status in OpenStack Compute (nova):
  New

Bug description:
  In allocate_for_instance, the Neutron network API calls Neutron to
  create the port(s) [1]. Once the port is created, it formats the
  network info into the network info model before returning it to the
  Compute Manager [2]. To build the network info, the API makes several
  calls to Neutron:

  1. List ports for device id [3]
  2. Get associated floating IPs [4]
  3. Get subnets from port [5] & [6]

  Notice that in [3] and [4] the API uses an admin context to talk to
  Neutron, whereas in [6] it does not.

  This is inconsistent. Is this intentional?


  [1] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L716
  [2] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L739-L741
  [3] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1671-L1676
  [4] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1555-L1566
  [5] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1568-L1573
  [6] 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1755-L1756

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1513645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506242] [NEW] If instance spawn fails and shutdown_instance also fails, a new exception is raised, masking original spawn failure

2015-10-14 Thread Shraddha Pandhe
Public bug reported:

When nova-compute, while building and running an instance, calls spawn
on the virt driver, spawn can fail for several reasons.

For example, for Ironic the spawn call can fail if the deploy callback
times out.

If this call fails, nova-compute catches the exception, saves it for
re-raising, and calls shutdown_instance in a try-except block [1]. The
problem is that if this shutdown_instance call also fails, a new
exception, 'BuildAbortException', is raised, masking the original spawn
failure.

This can cause problems for Ironic: if the deployment failed due to a
timeout, there is a good chance that shutdown_instance will also fail
for the same reason, since it involves zapping etc., so the original
deployment failure will not be propagated back as the instance fault.


[1] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
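
A minimal sketch of one way to preserve the original error, assuming
the surrounding compute-manager structure (names and signatures are
simplified): save the spawn exception info before attempting cleanup,
and re-raise it even if shutdown_instance blows up:

import logging
import sys

import six

LOG = logging.getLogger(__name__)


def build_and_cleanup(manager, context, instance):
    try:
        manager.driver.spawn(context, instance)
    except Exception:
        exc_info = sys.exc_info()  # remember the spawn failure
        try:
            manager._shutdown_instance(context, instance)
        except Exception:
            LOG.exception('shutdown_instance also failed')
        # propagate the *original* spawn failure, not BuildAbortException
        six.reraise(*exc_info)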

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506242

Title:
  If instance spawn fails and shutdown_instance also fails, a new
  exception is raised, masking original spawn failure

Status in OpenStack Compute (nova):
  New

Bug description:
  When nova-compute, while building and running an instance, calls spawn
  on the virt driver, spawn can fail for several reasons.

  For example, for Ironic the spawn call can fail if the deploy callback
  times out.

  If this call fails, nova-compute catches the exception, saves it for
  re-raising, and calls shutdown_instance in a try-except block [1]. The
  problem is that if this shutdown_instance call also fails, a new
  exception, 'BuildAbortException', is raised, masking the original
  spawn failure.

  This can cause problems for Ironic: if the deployment failed due to a
  timeout, there is a good chance that shutdown_instance will also fail
  for the same reason, since it involves zapping etc., so the original
  deployment failure will not be propagated back as the instance fault.

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506234] [NEW] Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

2015-10-14 Thread Shraddha Pandhe
Public bug reported:

To give some context, calling destroy [5] was added as a bug fix [1]. It
was required back then because Nova compute was not calling destroy when
catching the exception [2]. But now Nova compute catches all exceptions
that happen during spawn and calls destroy (_shutdown_instance) [3].

Since Nova compute already takes care of destroying the instance before
rescheduling, we shouldn't have to call destroy separately in the
driver. I confirmed in the logs that destroy gets called twice if
anything fails during _wait_for_active() [4] or a timeout happens [5].


[1] https://review.openstack.org/#/c/99519/
[2] 
https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
[3] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
[4] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
[5] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836
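
A hedged sketch of the proposed change (simplified from the spawn path
in [5]; the timer is the looping call that drives _wait_for_active):
keep the error logging but drop the driver-level destroy, since the
compute manager's _shutdown_instance already calls driver.destroy [3]:

import logging

from oslo_utils import excutils

LOG = logging.getLogger(__name__)


def wait_for_deploy(timer, instance, node, interval=2):
    try:
        timer.start(interval=interval).wait()
    except Exception:
        with excutils.save_and_reraise_exception():
            LOG.error('Error deploying instance %s on baremetal node %s',
                      instance.uuid, node.uuid)
            # removed: self.destroy(context, instance, network_info)
            # the compute manager already destroys on spawn failure [3]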

** Affects: nova
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New


** Tags: ironic

** Project changed: nova-hyper => nova

** Changed in: nova
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506234

Title:
  Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

Status in OpenStack Compute (nova):
  New

Bug description:
  To give some context, calling destroy [5] was added as a bug fix [1].
  It was required back then because Nova compute was not calling destroy
  when catching the exception [2]. But now Nova compute catches all
  exceptions that happen during spawn and calls destroy
  (_shutdown_instance) [3].

  Since Nova compute already takes care of destroying the instance
  before rescheduling, we shouldn't have to call destroy separately in
  the driver. I confirmed in the logs that destroy gets called twice if
  anything fails during _wait_for_active() [4] or a timeout happens [5].


  [1] https://review.openstack.org/#/c/99519/
  [2] 
https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
  [3] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
  [4] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
  [5] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470612] [NEW] neutron-dhcp-agent does not recover known ports cache after restart

2015-07-01 Thread Shraddha Pandhe
Public bug reported:

When the agent restarts, it loses its previous network cache. As soon as
the agent starts, as part of __init__, it rebuilds that cache [1], but
it does not put the ports back in [2].

In sync_state, the agent tries to enable/disable networks by checking
the diff between Neutron's state and the network cache it just built
[3]. It enables any NEW networks and disables any DELETED networks, but
it does nothing for PREVIOUSLY KNOWN networks, so their subnets and
ports remain empty lists.

Now, if such a port is deleted, [4] will return None and the port will
never get deleted from the config.

Filing this bug based on my conversation with Kevin Benton on IRC [5]

[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L68
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L79-L86
[3] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L154-L171
[4] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349
[5] 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-07-01.log.html
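
A minimal sketch of one possible fix, assuming the agent's existing
NetworkCache.put()/get_network_ids() methods and the plugin RPC
get_network_info() call: after rebuilding the cache, re-fetch each
known network so its subnets and ports are cached too:

def repopulate_cache(agent):
    # For every previously known network, pull the full network object
    # (including subnets and ports) from the plugin and re-cache it, so
    # a later port_delete_end notification can find the port.
    for network_id in agent.cache.get_network_ids():
        network = agent.plugin_rpc.get_network_info(network_id)
        if network:
            agent.cache.put(network)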

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470612

Title:
  neutron-dhcp-agent does not recover known ports cache after restart

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the agent restarts, it loses its previous network cache. As soon
  as the agent starts, as part of __init__, it rebuilds that cache [1],
  but it does not put the ports back in [2].

  In sync_state, the agent tries to enable/disable networks by checking
  the diff between Neutron's state and the network cache it just built
  [3]. It enables any NEW networks and disables any DELETED networks,
  but it does nothing for PREVIOUSLY KNOWN networks, so their subnets
  and ports remain empty lists.

  Now, if such a port is deleted, [4] will return None and the port will
  never get deleted from the config.

  Filing this bug based on my conversation with Kevin Benton on IRC [5]

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L68
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L79-L86
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L154-L171
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349
  [5] 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-07-01.log.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470612/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464793] [NEW] Add a driver for isc-dhcpd

2015-06-12 Thread Shraddha Pandhe
Public bug reported:

Currently, Neutron's dhcp agent only supports Dnsmasq. Dnsmasq has a
very serious limitation: it needs a reload any time the hosts, opts, or
addn_hosts configs are updated. According to a blog post [1], this can
take up to 4 minutes with 65535 static leases.

The typical industry standard is to use isc-dhcpd [2] instead of
Dnsmasq. In general, isc-dhcpd has more features than Dnsmasq. It also
supports OMAPI, which allows updating host or lease objects at runtime.
There are two ways to use OMAPI: omshell [3] and the pypureomapi Python
library [4], [5]. With OMAPI, it is very easy to selectively add/delete
a host/lease in the configuration (as opposed to rewriting the full
config on every port create/update).

Considering that most of the industry uses isc-dhcpd, I think it will be
a great addition to OpenStack neutron.

[1] https://www.mirantis.com/blog/improving-dhcp-performance-openstack/
[2] https://www.isc.org/downloads/dhcp/
[3] http://linux.die.net/man/1/omshell
[4] https://pypi.python.org/pypi/pypureomapi
[5] https://github.com/CygnusNetworks/pypureomapi
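
For illustration, a short sketch using pypureomapi (the server address,
key name, and secret are placeholders) to add and remove a single host
entry at runtime, instead of rewriting the whole hosts file and
reloading:

import pypureomapi

KEYNAME = 'defomapi'                  # placeholder OMAPI key name
SECRET = 'base64-encoded-secret=='    # placeholder shared secret

omapi = pypureomapi.Omapi('127.0.0.1', 7911, KEYNAME, SECRET)
omapi.add_host('192.0.2.10', '52:54:00:12:34:56')  # selective add
omapi.del_host('52:54:00:12:34:56')                # selective delete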

** Affects: neutron
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New


** Tags: rfe

** Tags added: rfe

** Changed in: neutron
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464793

Title:
  Add a driver for isc-dhcpd

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, Neutron's dhcp agent only supports Dnsmasq. Dnsmasq has a
  very serious limitation: it needs a reload any time the hosts, opts,
  or addn_hosts configs are updated. According to a blog post [1], this
  can take up to 4 minutes with 65535 static leases.

  The typical industry standard is to use isc-dhcpd [2] instead of
  Dnsmasq. In general, isc-dhcpd has more features than Dnsmasq. It also
  supports OMAPI, which allows updating host or lease objects at
  runtime. There are two ways to use OMAPI: omshell [3] and the
  pypureomapi Python library [4], [5]. With OMAPI, it is very easy to
  selectively add/delete a host/lease in the configuration (as opposed
  to rewriting the full config on every port create/update).

  Considering that most of the industry uses isc-dhcpd, I think it will
  be a great addition to OpenStack neutron.

  [1] https://www.mirantis.com/blog/improving-dhcp-performance-openstack/
  [2] https://www.isc.org/downloads/dhcp/
  [3] http://linux.die.net/man/1/omshell
  [4] https://pypi.python.org/pypi/pypureomapi
  [5] https://github.com/CygnusNetworks/pypureomapi

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464361] [NEW] Support for multiple gateways in Neutron subnets in provider networks

2015-06-11 Thread Shraddha Pandhe
Public bug reported:

Currently, subnets in Neutron support only one gateway. For provider
networks in large data centers, the architecture quite often has
multiple gateways configured for the subnets. These gateways are
typically spread across backplanes so that production traffic can be
load-balanced between the backplanes.

This is just my use case for supporting multiple gateways, but other
folks might have more use cases as well.

I want to open up a discussion on this topic and figure out the best way
to handle this. Should it be done the same way as dns-nameserver, with a
separate table with two columns, gateway_ip and subnet_id?
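
If it is done like dns-nameserver, a minimal sketch of the model (plain
SQLAlchemy; the table name mirrors the existing dnsnameservers
convention but is hypothetical) could look like:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class SubnetGateway(Base):
    # One row per (gateway, subnet) pair, like dnsnameservers.
    __tablename__ = 'subnetgateways'

    gateway_ip = sa.Column(sa.String(64), primary_key=True)
    subnet_id = sa.Column(sa.String(36),
                          sa.ForeignKey('subnets.id', ondelete='CASCADE'),
                          primary_key=True)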

** Affects: neutron
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464361

Title:
  Support for multiple gateways in Neutron subnets in provider networks

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, subnets in Neutron support only one gateway. For provider
  networks in large data centers, the architecture quite often has
  multiple gateways configured for the subnets. These gateways are
  typically spread across backplanes so that production traffic can be
  load-balanced between the backplanes.

  This is just my use case for supporting multiple gateways, but other
  folks might have more use cases as well.

  I want to open up a discussion on this topic and figure out the best
  way to handle this. Should it be done the same way as dns-nameserver,
  with a separate table with two columns, gateway_ip and subnet_id?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325905] Re: Cleaning on failed spawn in Ironic driver may override original exception

2015-01-22 Thread Shraddha Pandhe
The latest code on master already uses save_and_reraise_exception:

https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L641-L656

Marking this bug as Invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1325905

Title:
  Cleaning on failed spawn in Ironic driver may override original
  exception

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In the Nova driver we have the following code:

      # trigger the node deploy
      try:
          icli.call("node.set_provision_state", node_uuid,
                    ironic_states.ACTIVE)
      except (exception.NovaException,            # Retry failed
              ironic_exception.InternalServerError,  # Validations
              ironic_exception.BadRequest) as e:      # Maintenance
          msg = (_("Failed to request Ironic to provision instance "
                   "%(inst)s: %(reason)s") % {'inst': instance['uuid'],
                                              'reason': str(e)})
          LOG.error(msg)
          self._cleanup_deploy(node, instance, network_info)
          raise exception.InstanceDeployFailure(msg)

  If an exception happens inside _cleanup_deploy, it will hide the
  original one. excutils.save_and_reraise_exception() should be used
  here.
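
  For reference, a minimal sketch of that pattern applied to the snippet
  above (excutils as found in oslo; if _cleanup_deploy succeeds, the
  original error is re-raised afterwards, and if it raises too, the
  original is logged rather than silently lost):

      from oslo_utils import excutils

      try:
          icli.call("node.set_provision_state", node_uuid,
                    ironic_states.ACTIVE)
      except (exception.NovaException,
              ironic_exception.InternalServerError,
              ironic_exception.BadRequest):
          with excutils.save_and_reraise_exception():
              self._cleanup_deploy(node, instance, network_info)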

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1325905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391695] [NEW] IPv6 network info translation support for rhel

2014-11-11 Thread Shraddha Pandhe
Public bug reported:

Cloud-init currently doesn’t parse IPv6 network information from
OpenStack Nova's disk.config. Here’s the code that does the parsing:
https://github.com/number5/cloud-init/blob/master/cloudinit/distros/rhel.py#L65-L98

The expected format of an IPv6 network-scripts/ifcfg file is described
here:
http://www.cyberciti.biz/faq/rhel-redhat-fedora-centos-ipv6-network-configuration/

The conversion is missing a lot of options.
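
As a hedged illustration (addresses are placeholders), an ifcfg file
configured for IPv6 per the linked reference carries keys such as:

IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1

Options of this kind are among those the current translation misses.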

** Affects: cloud-init
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1391695

Title:
  IPv6 network info translation support for rhel

Status in Init scripts for use on cloud images:
  New

Bug description:
  Cloud-init currently doesn’t parse IPv6 network information from
  OpenStack Nova's disk.config. Here’s the code that does the parsing:
  https://github.com/number5/cloud-init/blob/master/cloudinit/distros/rhel.py#L65-L98

  The expected format of an IPv6 network-scripts/ifcfg file is described
  here:
  http://www.cyberciti.biz/faq/rhel-redhat-fedora-centos-ipv6-network-configuration/

  The conversion is missing a lot of options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1391695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334024] [NEW] Nova rescue fails for libvirt driver with config drive

2014-06-24 Thread Shraddha Pandhe
':
No such file or directory], module: manager, filename: manager.py,
levelno: 40, msecs: 728.74093055725098, pathname:
/usr/lib/python2.6/site-packages/nova/compute/manager.py, lineno: 2986,
asctime: 2014-06-20 23:00:52,728, msg: Error trying to Rescue Instance,
message: Error trying to Rescue Instance, funcname: rescue_instance,
levelname: ERROR}

** Affects: nova
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334024

Title:
  Nova rescue fails for libvirt driver with config drive

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am using a config drive to boot VMs. In Icehouse, I observed that
  nova rescue fails and leaves the VM in the SHUTOFF state.

  Short error log: 
  
/home/y/var/nova/instances/270e299b-90b2-46d5-bf9a-e7f6efe3742e/disk.config.rescue':
 No such file or directory

  Difference between the Havana and Icehouse code paths:

  # Havana
  # Config drive
  if configdrive.required_by(instance):
      LOG.info(_('Using config drive'), instance=instance)
      extra_md = {}
      if admin_pass:
          extra_md['admin_pass'] = admin_pass

      for f in ('user_name', 'project_name'):
          if hasattr(context, f):
              extra_md[f] = getattr(context, f, None)
      inst_md = instance_metadata.InstanceMetadata(instance,
          content=files, extra_md=extra_md, network_info=network_info)
      with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb:
          configdrive_path = basepath(fname='disk.config')
          LOG.info(_('Creating config drive at %(path)s'),
                   {'path': configdrive_path}, instance=instance)


  def basepath(fname='', suffix=suffix):
      # Adds suffix .rescue to disk.config
      return os.path.join(libvirt_utils.get_instance_path(instance),
                          fname + suffix)

  
  # Icehouse:

  # Config drive
  if configdrive.required_by(instance):
      LOG.info(_('Using config drive'), instance=instance)
      extra_md = {}
      if admin_pass:
          extra_md['admin_pass'] = admin_pass

      for f in ('user_name', 'project_name'):
          if hasattr(context, f):
              extra_md[f] = getattr(context, f, None)
      inst_md = instance_metadata.InstanceMetadata(instance,
          content=files, extra_md=extra_md, network_info=network_info)
      with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb:
          configdrive_path = self._get_disk_config_path(instance)
          LOG.info(_('Creating config drive at %(path)s'),
                   {'path': configdrive_path}, instance=instance)

  @staticmethod
  def _get_disk_config_path(instance):
      return os.path.join(libvirt_utils.get_instance_path(instance),
                          'disk.config')

  The suffix '.rescue' is missing here, so the original disk.config is
  overwritten.

  The following change fixed the issue for me:

      configdrive_path = self._get_disk_config_path(instance, suffix)

      @staticmethod
      def _get_disk_config_path(instance, suffix=''):
          return os.path.join(libvirt_utils.get_instance_path(instance),
                              'disk.config' + suffix)

  
  Complete Error log:

  {extra: {project_name: admin, timestamp: 2014-06-20T23:00:50.269421,
  auth_token: 17fcde000c3040f9981e1804cdaf94fe, remote_address:
  10.220.4.45, quota_class: null, is_admin: true, user:
  dfac8c9e704a4312b0447b26b57a12da, service_catalog: [], tenant:
  13b05759646b41c9a51660a1e653b146, user_id:
  dfac8c9e704a4312b0447b26b57a12da, roles: [admin], project:
  nova.compute.manager, instance: [instance:
  270e299b-90b2-46d5-bf9a-e7f6efe3742e] , version: unknown, read_deleted:
  no, request_id: req-a7f12cd9-dd23-48fa-9479-ab4e7825ae1e,
  instance_lock_checked: false, project_id:
  13b05759646b41c9a51660a1e653b146, user_name: spandhe}, thread_name:
  GreenThread-31, process_name: MainProcess, name:
  nova.compute.manager, thread: 82908816, created: 1403305252.7287409,
  process: 26178, relative_created: 114349619.65203285, args: [],
  traceback: [Traceback (most recent call last):,   File
  \/usr/lib/python2.6/site-packages/nova/compute/manager.py\, line 2983, in
  rescue_instance, rescue_image_meta, admin_password),   File
  \/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py\, line 2205, 
in
  rescue, self._create_domain(xml),   File
  \/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py\, line 3562, 
in
  _create_domain

[Yahoo-eng-team] [Bug 1326974] [NEW] nova-clear-rabbit-queues removed from Juno (master branch of nova). Make appropriate changes to nova-spec file

2014-06-05 Thread Shraddha Pandhe
Public bug reported:

The current nova spec file expects the file nova-clear-rabbit-queues in
the bindir:

https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L562

Until icehouse, nova was generating that file:
https://github.com/openstack/nova/blob/stable/icehouse/setup.cfg#L42

But the file is gone in Juno:
https://github.com/openstack/nova/blob/master/setup.cfg

Need to update the Anvil spec file to something like:

#if $older_than('2014.1')
%{_bindir}/nova-clear-rabbit-queues
#end if

** Affects: anvil
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: anvil
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1326974

Title:
  nova-clear-rabbit-queues removed from Juno (master branch of nova).
  Make appropriate changes to nova-spec file

Status in ANVIL for forging OpenStack.:
  New

Bug description:
  The current nova spec file expects the file nova-clear-rabbit-queues
  in the bindir:

  https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-nova.spec#L562

  Until icehouse, nova was generating that file:
  https://github.com/openstack/nova/blob/stable/icehouse/setup.cfg#L42

  But the file is gone in Juno:
  https://github.com/openstack/nova/blob/master/setup.cfg

  Need to update the Anvil spec file to something like:

  #if $older_than('2014.1')
  %{_bindir}/nova-clear-rabbit-queues
  #end if

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1326974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258619] [NEW] New debug module that prints some information about the instance

2013-12-06 Thread Shraddha Pandhe
Public bug reported:

This is a new debug module that is supposed to print out some
information about the instance being created. The module can be included
at any stage of the process: init/config/final.
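
As a hedged sketch, a cloud-init config module of this kind would follow
the standard handler convention (the module name and printed fields are
illustrative only):

def handle(name, cfg, cloud, log, args):
    # Print a few basic facts about the instance being created.
    log.info('%s: instance-id=%s', name, cloud.get_instance_id())
    log.info('%s: hostname=%s', name, cloud.get_hostname())
    log.info('%s: datasource=%s', name, cloud.datasource)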

** Affects: cloud-init
 Importance: Undecided
 Assignee: Shraddha Pandhe (shraddha-pandhe)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1258619

Title:
  New debug module that prints some information about the instance

Status in Init scripts for use on cloud images:
  New

Bug description:
  This is a new debug module that is supposed to print out some
  information about the instance being created. The module can be
  included at any stage of the process: init/config/final.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1258619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp