[Yahoo-eng-team] [Bug 1709115] [NEW] IPV6 only network DHCP Namespace missing default route

2017-08-07 Thread sean redmond
Public bug reported:

I noticed that if I create an IPv6-only network using DHCPv6 stateful, the
qdhcp namespace does not contain a default route, so the recursive DNS
requests that need to reach external networks are not possible.

The below shows what I see in the IPv6 routing table of the qdhcp
namespace:

# ip -6 ro
::9:9::/64 dev ns-e3832e4a-bd  proto kernel  metric 256  pref medium
fe80::/64 dev ns-e3832e4a-bd  proto kernel  metric 256  pref medium
#

As we can see there is no default route, so it is not possible for
dnsmasq to perform lookups against external resolvers:

# ping6 2001:4860:4860::8844
connect: Network is unreachable

From within an instance on the same network/subnet, if I set
resolv.conf to use the above external (Google) IPv6 resolver, I am able
to reach it and resolve names as expected, because the route is set
correctly in the instance.

If we contrast this with IPv4, I can see a default route inside the
qdhcp namespace, as shown below:

# ip -4 ro
default via 172.16.0.1 dev ns-e3832e4a-bd
172.16.0.0/24 dev ns-e3832e4a-bd  proto kernel  scope link  src 172.16.0.11
#

I find it very strange that this default gateway is missing for IPv6 in
the qdhcp namespace.
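
As a reference only, a minimal workaround sketch (the network UUID and IPv6
gateway below are placeholders, not values captured from this environment)
would be to add the missing route by hand inside the namespace:

# hypothetical workaround sketch -- <network-uuid> and <ipv6-gateway> are placeholders
ip netns exec qdhcp-<network-uuid> ip -6 route add default via <ipv6-gateway> dev ns-e3832e4a-bd
ip netns exec qdhcp-<network-uuid> ping6 -c 1 2001:4860:4860::8844

This only papers over the symptom; presumably the DHCP agent should program
the subnet gateway into the namespace the same way it does for IPv4.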

I did some further testing and found this is also the case in the LBaaS
(v2) namespace: an IPv6-only request would route into the LBaaS
namespace and haproxy would proxy it on to the instance, but when the
instance responded to haproxy the connection would drop, because the
LBaaS namespace does not contain a default route.

This is on the Newton release of Neutron; if you need any further
details just let me know and I will be happy to help.

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1709115



[Yahoo-eng-team] [Bug 1709104] [NEW] cloud-init with only IPv6 fails to POST encrypted password

2017-08-07 Thread sean redmond
Public bug reported:

Release = newton

When launching a Windows instance on an IPv6-only network, cloud-init
fails to POST the encrypted (generated) password to the metadata
service, most likely because the metadata service is not available on
networks that are not dual-stack/IPv4.
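
For context, in this release the metadata service is only published on the
IPv4 link-local address, so an IPv6-only guest has no path to it at all; an
illustrative check from inside such a guest (not captured from this
environment) would simply time out:

# illustrative only -- expected to time out on an IPv6-only network,
# since the metadata endpoint is IPv4 link-local
curl -m 5 http://169.254.169.254/openstack/latest/meta_data.json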

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- Release = neutron
+ Release = newton
  
  When launching a Windows instance utilizing an IPv6-only network,
  cloudinit fails to post the encrypted (generated) password to the
  metadata service due to the absence of the metadata service on non-
  dual-stack/IPv4 probably

https://bugs.launchpad.net/bugs/1709104



[Yahoo-eng-team] [Bug 1708458] [NEW] Expose instance system_metadata in compute API

2017-08-03 Thread sean redmond
Public bug reported:

Via the nova compute API I can see it is possible to GET metadata for an
instance, but it does not seem to be possible to GET system_metadata. It
is useful to be able to GET the system metadata for an instance if you
want to query the point-in-time properties that were inherited from an
image during the launch.
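
To illustrate the asymmetry (a sketch; $TOKEN, $COMPUTE_ENDPOINT and
<server-uuid> are placeholders), user metadata is already reachable through
the API while system metadata is not:

# illustrative sketch -- $TOKEN, $COMPUTE_ENDPOINT and <server-uuid> are placeholders
curl -s -H "X-Auth-Token: $TOKEN" $COMPUTE_ENDPOINT/servers/<server-uuid>/metadata
# there is no equivalent .../servers/<server-uuid>/system_metadata resource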

I noticed in change
https://review.openstack.org/#/c/7045/5/nova/db/api.py there is a DB API
definition to read this from the database, but this does not seem to be
exposed in the compute API.

I assume something like the below in compute/api.py would be a starting
point to expose this.

def get_instance_system_metadata(self, context, instance):
    """Get all system metadata associated with an instance."""
    return self.db.instance_system_metadata_get(context, instance.uuid)

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1708458



[Yahoo-eng-team] [Bug 1692007] [NEW] Dual Stack IPV4/6 ARP bleed

2017-05-19 Thread sean redmond
Public bug reported:

Version = newton (2:9.2.0-0ubuntu1~cloud0)
DVR (HA mode) with ipv4 and ipv6
l2_population = True

I noticed that when using dual-stack IPv4 and IPv6, Linux guests work
fine but Windows guests seem to have a problem with their IPv4
connectivity. Upon investigation I found an ARP issue: both the IPv4 and
the IPv6 interface of the virtual router respond to ARP requests. The
below shows a capture from the tap interface of the guest when the ARP
table in the guest is flushed.

12:04:58.273446 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.0.1 tell 10.0.0.23, length 28
12:04:58.273776 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.0.0.1 is-at fa:16:3e:43:9d:58, length 28
12:04:58.273790 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.0.0.1 is-at fa:16:3e:19:96:e6, length 28

If I look at the active router interfaces I can see that the MAC ending
9d:58 is the IPv4 interface and the MAC ending 96:e6 is the IPv6
interface, as shown below:

# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: qr-12e41bc2-68@if1021:  mtu 8950 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:43:9d:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.1/8 brd 10.255.255.255 scope global qr-12e41bc2-68
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe43:9d58/64 scope link
       valid_lft forever preferred_lft forever
6: qr-bd91567c-81@if1143:  mtu 8950 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:19:96:e6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 :9:1::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe19:96e6/64 scope link
       valid_lft forever preferred_lft forever
7: qr-03c27e46-4b@if1159:  mtu 8950 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:8e:f7:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 :9:2::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe8e:f7bd/64 scope link
       valid_lft forever preferred_lft forever


We should not see two ARP responses here; we should only see one, from
fa:16:3e:43:9d:58. It turns out that Linux guests populate the ARP table
from the first response and Windows guests from the latest response,
which explains why Windows guests fail while Linux guests work: the
first ARP response is valid and the second one is not. But I think we
have a bigger issue here, as I should not be getting an ARP response
from the IPv6 interface at all.
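
As a possible mitigation to test (a sketch only, not a verified fix, and it
assumes the replies come from the kernel in the router namespace rather than
from OVS flows; the namespace name is a placeholder), the kernel can be told
to answer ARP only on the interface that actually carries the requested
address:

# hypothetical mitigation sketch -- qrouter-<router-uuid> is a placeholder
ip netns exec qrouter-<router-uuid> sysctl -w net.ipv4.conf.all.arp_ignore=1

Even if that silences the second reply, the underlying behaviour still looks
wrong, as the IPv6-only qr- interface should never answer ARP for 10.0.0.1.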

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1692007


[Yahoo-eng-team] [Bug 1691427] [NEW] AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'

2017-05-17 Thread sean redmond
Public bug reported:

I have noticed the below in the neutron L3 agent log on compute nodes
running in DVR mode. It seems to log the same message every 60 seconds
or so. Restarting the agent will stop the messages until logrotate runs
the next day, and then they return in the same fashion.

2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task [req-5c48816b-0705-438a-b823-eca41cfe6c35 - - - - -] Error during L3NATAgentWithStateReport.periodic_sync_routers_task
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task Traceback (most recent call last):
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 220, in run_periodic_tasks
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     task(self, context)
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 552, in periodic_sync_routers_task
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     self.fetch_and_sync_all_routers(context, ns_manager)
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     return f(*args, **kwargs)
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 586, in fetch_and_sync_all_routers
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     r['id'], r.get(l3_constants.HA_ROUTER_STATE_KEY))
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 120, in check_ha_state_for_router
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     if ri and current_state != TRANSLATION_MAP[ri.ha_state]:
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task [req-5c48816b-0705-438a-b823-eca41cfe6c35 - - - - -] Error during L3NATAgentWithStateReport.periodic_sync_routers_task
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task Traceback (most recent call last):
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 220, in run_periodic_tasks
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     task(self, context)
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 552, in periodic_sync_routers_task
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     self.fetch_and_sync_all_routers(context, ns_manager)
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     return f(*args, **kwargs)
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 586, in fetch_and_sync_all_routers
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     r['id'], r.get(l3_constants.HA_ROUTER_STATE_KEY))
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 120, in check_ha_state_for_router
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     if ri and current_state != TRANSLATION_MAP[ri.ha_state]:
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task
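
For what it's worth, a quick way to confirm the agent mode and how often this
recurs (a sketch; the config and log paths assume the Ubuntu packages and may
differ):

# illustrative sketch -- paths assume the Ubuntu neutron packages
grep -E '^agent_mode' /etc/neutron/l3_agent.ini
grep -c "has no attribute 'ha_state'" /var/log/neutron/neutron-l3-agent.log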

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1691427


[Yahoo-eng-team] [Bug 1686729] [NEW] Creating object storage container causes user to be logged out

2017-04-27 Thread sean redmond
Public bug reported:

Version = openstack-dashboard3:11.0.1-0ubuntu1~cloud0
Ceph version = 10.2.7

When using the Ceph RGW Swift interface for OpenStack and the OpenStack
dashboard version above to create a Swift container, the dashboard makes
a number of curl requests to check whether the bucket name already
exists, to prevent the user from trying to create a bucket with the same
name as an existing bucket.

In most cases this works as expected; however, if I try to create a
bucket whose name starts with the name of an existing bucket that has
its ACL set to private, I am unexpectedly logged out of the dashboard.

In my tests I have OpenStack user 'paul' and project 'paul', which owns
a private Swift bucket called 'paul'.

Then, as a second user 'sean' in project 'sean', I try to create a
Swift container called 'paul1'. This results in me getting logged out of
the dashboard. The below shows the log for when I try to create this
bucket:

``
REQ: curl -i https://rgw.domain.com/swift/v1/p/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 400 Bad Request
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:01 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: InvalidBucketName
REQ: curl -i https://rgw.domain.com/swift/v1/pa/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 400 Bad Request
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:02 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: InvalidBucketName
REQ: curl -i https://rgw.domain.com/swift/v1/pau/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 404 Not Found
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:04 GMT', u'Content-Length': u'12', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: NoSuchBucket
REQ: curl -i https://rgw.domain.com/swift/v1/paul/ -X GET -H "X-Auth-Token: {hidden}"
RESP STATUS: 401 Unauthorized
RESP HEADERS: {u'Date': u'Thu, 27 Apr 2017 13:22:04 GMT', u'Content-Length': u'12', u'Content-Type': u'text/plain; charset=utf-8', u'Accept-Ranges': u'bytes', u'X-Trans-Id': u'{hidden}'}
RESP BODY: AccessDenied
Logging out user "sean
``

As you can see, this works until Horizon receives the 401 from the RGW
when checking bucket 'paul'. I believe this is because the ACL of the
bucket 'paul' (created by user 'paul') is set to private: I don't have
the same issue when the ACL is set to public, or when the ACL is private
and I try to create the bucket 'paul1' as the user 'paul'.

** Affects: horizon
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1686729


[Yahoo-eng-team] [Bug 1633405] [NEW] Lock/Unlock status not visible in dashboard or action log

2016-10-14 Thread sean redmond
Public bug reported:

It appears that when the Lock Instance / Unlock Instance options are
used on a compute instance, nothing is indicated within the dashboard or
the action log to show that the instance is locked.

Steps to repeat:

Log into the dashboard
Click Instances
From the drop-down on the right of a given instance, select Lock Instance
Try to power on/off/restart the instance

You will get the error: Error: Unable to shut off instance: , with no indication of the reason.

Click into the instance and open the action log: there is no record
there of the instance being locked. This makes it difficult for an end
user to determine why they cannot interact with their instance.
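
For comparison, the state is at least visible from the CLI (a sketch;
<instance-uuid> is a placeholder, and the locked field is only returned with
a sufficiently recent compute API microversion), even though the dashboard
never surfaces it:

# illustrative sketch -- <instance-uuid> is a placeholder
openstack server show <instance-uuid> -c locked
nova lock <instance-uuid>
nova unlock <instance-uuid>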

Version info:

ii  python-django-horizon             2:9.1.0-0ubuntu1  all  Django module providing web based interaction with OpenStack
ii  openstack-dashboard-ubuntu-theme  2:9.1.0-0ubuntu1  all  Ubuntu theme for the OpenStack dashboard

** Affects: horizon
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1633405



[Yahoo-eng-team] [Bug 1528877] [NEW] Libvirt can't honour user-supplied dev names

2015-12-23 Thread sean redmond
Public bug reported:

OpenStack liberty (Ubuntu 14.04)

When creating a new instance via the dashboard and selecting 'boot from
image (creates new volume)', the below is logged in nova-compute.log on
the host the instance is deployed to:

"Ignoring supplied device name: /dev/vda. Libvirt can't honour user-
supplied dev names"

In the virsh XML for the instance I can see the target dev name is sda;
this seems to have changed from how Kilo behaved.
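
For reference, the chosen target device names can be confirmed straight from
the libvirt domain XML (a sketch; instance-00000001 is a placeholder domain
name):

# illustrative check -- instance-00000001 is a placeholder domain name
virsh dumpxml instance-00000001 | grep -A 4 '<disk'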

I also noticed that if I then try to add an extra volume to this
instance, the below error appears in nova-compute.log and the volume is
not attached.

caf420922780362 - - -] Exception during message handling: internal error: 
unable to execute QEMU command 'device_add': Duplicate ID 'scsi0-0-0-0' for 
device
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, 
in _do_dispatch
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher payload)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 72, in wrapped
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 378, in 
decorated_function
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 366, in 
decorated_function
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4609, in 
attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
do_attach_volume(context, instance, driver_bdm)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 254, in 
inner
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4607, in 
do_attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
bdm.destroy()
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4604, in 
do_attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher return 
self._attach_volume(context, instance, driver_bdm)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4627, in 
_attach_volume
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher 
self.volume_api.unreserve_volume(context, bdm.volume_id)
2015-12-23 15:21:55.698 4621 ERROR oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1528877] Re: Libvirt can't honour user-supplied dev names

2015-12-23 Thread sean redmond
** Also affects: ubuntu
   Importance: Undecided
   Status: New

** No longer affects: ubuntu

https://bugs.launchpad.net/bugs/1528877


[Yahoo-eng-team] [Bug 1502876] [NEW] Error: You are not allowed to delete volume: $X

2015-10-05 Thread sean redmond
Public bug reported:

When using Ceph as the back-end storage, a snapshot of a volume is
dependent upon the original volume, meaning it is an illegal operation
to delete the volume without first deleting the snapshot.

Should you try to delete such a volume, Horizon (Kilo) reports back the
error "Error: You are not allowed to delete volume:". This error message
is confusing and leaves the end user unsure why the volume cannot be
removed.

I think we should handle this by checking what the Cinder API reports
back and displaying a more user-friendly error message.

The below is an example of the error that comes back from Cinder;
passing this up in Horizon would be very helpful.

$ cinder delete 34feea60-fb50-4fbe-8136-885f5553a8a8
Delete for volume 34feea60-fb50-4fbe-8136-885f5553a8a8 failed: Invalid volume: Volume still has 1 dependent snapshots. (HTTP 400) (Request-ID: req-320be24f-ce7a-4a79-9d6f-9166821c42e9)
ERROR: Unable to delete any of specified volumes.
$
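
For completeness, the sequence an end user currently has to discover by hand
looks roughly like the below (a sketch; <snapshot-uuid> is a placeholder),
and it is exactly the hint Horizon could surface instead of the misleading
message:

# hypothetical workaround sketch -- <snapshot-uuid> is a placeholder
cinder snapshot-list | grep 34feea60-fb50-4fbe-8136-885f5553a8a8
cinder snapshot-delete <snapshot-uuid>
cinder delete 34feea60-fb50-4fbe-8136-885f5553a8a8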

** Affects: horizon
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1502876
