[Yahoo-eng-team] [Bug 1459042] Re: cloud-init fails to report IPv6 connectivity when booting

2017-07-12 Thread Dr. Jens Rosenboom
** Changed in: cirros
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1459042

Title:
  cloud-init fails to report IPv6 connectivity when booting

Status in CirrOS:
  Fix Committed
Status in cloud-init:
  Confirmed

Bug description:
  It would be convenient to see the IPv6 networking information printed
  at boot, similar to how the IPv4 networking information is printed
  today.

  Output from the boot log:
  [   15.621085] cloud-init[1058]: Cloud-init v. 0.7.7 running 'init' at Tue, 14 Jun 2016 13:48:14 +. Up 6.71 seconds.
  [   15.622670] cloud-init[1058]: ci-info: Net device info+
  [   15.624106] cloud-init[1058]: ci-info: ++--++-+---+---+
  [   15.625516] cloud-init[1058]: ci-info: | Device |  Up  |  Address   | Mask| Scope | Hw-Address|
  [   15.627058] cloud-init[1058]: ci-info: ++--++-+---+---+
  [   15.628504] cloud-init[1058]: ci-info: | ens3:  | True | 10.42.0.48 | 255.255.0.0 |   .   | fa:16:3e:f9:86:07 |
  [   15.629930] cloud-init[1058]: ci-info: | ens3:  | True | .  |  .  |   d   | fa:16:3e:f9:86:07 |
  [   15.631334] cloud-init[1058]: ci-info: |  lo:   | True | 127.0.0.1  |  255.0.0.0  |   .   | . |
  [   15.632765] cloud-init[1058]: ci-info: |  lo:   | True | .  |  .  |   d   | . |
  [   15.634221] cloud-init[1058]: ci-info: ++--++-+---+---+
  [   15.635671] cloud-init[1058]: ci-info: +++Route IPv4 info+++
  [   15.637186] cloud-init[1058]: ci-info: +---+-+---+-+---+---+
  [   15.638682] cloud-init[1058]: ci-info: | Route |   Destination   |  Gateway  | Genmask | Interface | Flags |
  [   15.640182] cloud-init[1058]: ci-info: +---+-+---+-+---+---+
  [   15.641657] cloud-init[1058]: ci-info: |   0   | 0.0.0.0 | 10.42.0.1 | 0.0.0.0 |ens3   |   UG  |
  [   15.643149] cloud-init[1058]: ci-info: |   1   |10.42.0.0|  0.0.0.0  |   255.255.0.0   |ens3   |   U   |
  [   15.644661] cloud-init[1058]: ci-info: |   2   | 169.254.169.254 | 10.42.0.1 | 255.255.255.255 |ens3   |  UGH  |
  [   15.646175] cloud-init[1058]: ci-info: +---+-+---+-+---+---+

  Output from running system:
  ci-info: +++Net device info+++
  ci-info: ++---+-+---++---+
  ci-info: |   Device   |   Up  | Address |  Mask | Scope  | Hw-Address|
  ci-info: ++---+-+---++---+
  ci-info: |ens3|  True |10.42.0.44   |  255.255.0.0  |   .| fa:16:3e:90:11:e0 |
  ci-info: |ens3|  True | 2a04:3b40:8010:1:f816:3eff:fe90:11e0/64 |   .   | global | fa:16:3e:90:11:e0 |
  ci-info: | lo |  True |127.0.0.1|   255.0.0.0   |   .| . |
  ci-info: | lo |  True | ::1/128 |   .   |  host  | . |
  ci-info: ++---+-+---++---+
  ci-info: +++Route IPv4 info+++
  ci-info: +---+-+---+-+---+---+
  ci-info: | Route |   Destination   |  Gateway  | Genmask | Interface | Flags |
  ci-info: +---+-+---+-+---+---+
  ci-info: |   0   | 0.0.0.0 | 10.42.0.1 | 0.0.0.0 |ens3   |   UG  |
  ci-info: |   1   |10.42.0.0|  0.0.0.0  |   255.255.0.0   |ens3   |   U   |
  ci-info: |   2   | 169.254.169.254 | 10.42.0.1 | 255.255.255.255 |ens3   |  UGH  |
  ci-info: +---+-+---+-+---+---+

  $ netstat -rn46
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 10.42.0.1   0.0.0.0 UG0 0  0 ens3
  10.42.0.0   0.0.0.0 255.255.0.0 U 0 0  0 ens3
  169.254.169.254 10.42.0.1   255.255.255.255 UGH   0 0  0 ens3
  192.168.122.0   0.0.0.0 255.255.255.0   U 
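
  For completeness, gathering the missing IPv6 information is straightforward;
  below is a minimal sketch (not cloud-init's actual implementation) that
  collects it from iproute2 using only the Python stdlib:

  import subprocess

  def ipv6_netdev_info():
      # One line per address, e.g.
      # "2: ens3    inet6 2a04:.../64 scope global ..."
      out = subprocess.check_output(['ip', '-6', '-o', 'addr', 'show'])
      rows = []
      for line in out.decode().splitlines():
          fields = line.split()
          dev, addr = fields[1], fields[3]
          scope = fields[fields.index('scope') + 1] if 'scope' in fields else '.'
          rows.append((dev, addr, scope))
      return rows

  def ipv6_route_info():
      # Raw IPv6 routing table, one route per line.
      out = subprocess.check_output(['ip', '-6', 'route', 'show'])
      return out.decode().splitlines()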

[Yahoo-eng-team] [Bug 1703360] [NEW] Don't show IPv6 addresses when selecting port for floating IP

2017-07-10 Thread Dr. Jens Rosenboom
Public bug reported:

A floating IP can only be associated with an IPv4 address, trying to use
an IPv6 address leads to an error like this:

Error: Bad floatingip request: Cannot process floating IP association
with fdb7:6005:3c07:0:f816:3eff:fe94:35d4, since that is not an IPv4
address. Neutron server returns request_ids: ['req-1ffbc2fb-5e84-44ac-
990e-62be46801e77']

So the selector should only show the IPv4 addresses of an instance to
choose from in the "Associate Floating IP" action.

Tested with current stable/ocata.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1703360

Title:
  Don't show IPv6 addresses when selecting port for floating IP

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A floating IP can only be associated with an IPv4 address, trying to
  use an IPv6 address leads to an error like this:

  Error: Bad floatingip request: Cannot process floating IP association
  with fdb7:6005:3c07:0:f816:3eff:fe94:35d4, since that is not an IPv4
  address. Neutron server returns request_ids: ['req-1ffbc2fb-5e84-44ac-
  990e-62be46801e77']

  So the selector should only show the IPv4 addresses of an instance to
  choose from in the "Associate Floating IP" action.

  Tested with current stable/ocata.
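
  A minimal sketch of the filtering the selector could apply, assuming
  netaddr is available (a common OpenStack dependency); the 'ip_address'
  attribute on the target objects is an assumption for illustration, not
  the actual Horizon code:

  import netaddr

  def ipv4_targets(targets):
      """Keep only targets whose address is IPv4."""
      return [t for t in targets
              if netaddr.IPAddress(t.ip_address).version == 4]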

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1703360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1693771] [NEW] Instance fails to receive DHCPv6 responses

2017-05-26 Thread Dr. Jens Rosenboom
Public bug reported:

When running an instance on a network with ipv6-address-mode
dhcpv6-stateful, the instance fails to receive responses from the DHCP
server. The workaround is to add an explicit security group rule
allowing

--proto udp --dst-port 546 --ethertype IPv6

and then the DHCP handshake succeeds.

See https://review.openstack.org/#/c/386525/ and the failures at e.g.
http://logs.openstack.org/25/386525/8/check/gate-tempest-dsvm-neutron-full-ssh/b4c7bf0/.
The same setup worked fine in December, so this seems to be a regression
introduced since then.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1693771

Title:
  Instance fails to receive DHCPv6 responses

Status in neutron:
  New

Bug description:
  When running an instance on a network with ipv6-address-mode
  dhcpv6-stateful, the instance fails to receive responses from the DHCP
  server. The workaround is to add an explicit security group rule
  allowing

  --proto udp --dst-port 546 --ethertype IPv6

  and then the DHCP handshake succeeds.

  See https://review.openstack.org/#/c/386525/ and the failures at e.g.
  http://logs.openstack.org/25/386525/8/check/gate-tempest-dsvm-neutron-full-ssh/b4c7bf0/.
  The same setup worked fine in December, so this seems to be a regression
  introduced since then.
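
  For reference, the workaround rule can be added with the neutron CLI of
  that era like this (the security group name "default" is only an
  example):

  $ neutron security-group-rule-create --direction ingress --ethertype IPv6 \
      --protocol udp --port-range-min 546 --port-range-max 546 default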

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1693771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685773] [NEW] Delete instance popup shows "." as selected instance name

2017-04-24 Thread Dr. Jens Rosenboom
Public bug reported:

When deleting an instance from the detail page for that instance, the
popup says

"You have selected: . Please confirm your selection. Deleted instances
are not recoverable."

Same behaviour for stable/ocata and current master.

Expected behaviour: Show the name of the instance that is to be deleted.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1685773

Title:
  Delete instance popup shows "." as selected instance name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When deleting an instance from the detail page for that instance, the
  popup says

  "You have selected: . Please confirm your selection. Deleted instances
  are not recoverable."

  Same behaviour for stable/ocata and current master.

  Expected behaviour: Show the name of the instance that is to be
  deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1685773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684682] [NEW] DHCP namespace doesn't have IPv6 default route

2017-04-20 Thread Dr. Jens Rosenboom
Public bug reported:

This is a regression in Ocata; things are working fine in Newton. But
if I create an IPv6 subnet in Ocata, the DHCP namespace gets configured
with an IPv6 address but is lacking a default route, so dnsmasq fails
to resolve any DNS queries except for the local OpenStack instances.

I think there have been some changes in the way the namespace is set
up (no longer listening to RAs and instead doing static configuration)
that may have caused this.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684682

Title:
  DHCP namespace doesn't have IPv6 default route

Status in neutron:
  New

Bug description:
  This is a regression in Ocata; things are working fine in Newton. But
  if I create an IPv6 subnet in Ocata, the DHCP namespace gets configured
  with an IPv6 address but is lacking a default route, so dnsmasq fails
  to resolve any DNS queries except for the local OpenStack instances.

  I think there have been some changes in the way the namespace is set
  up (no longer listening to RAs and instead doing static configuration)
  that may have caused this.
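
  For anyone hitting this, the state can be checked and worked around by
  hand; the namespace and next-hop values below are placeholders:

  $ sudo ip netns exec qdhcp-<network-id> ip -6 route show
  # if no "default via ..." entry is present, add one manually so that
  # dnsmasq can reach external DNS servers again:
  $ sudo ip netns exec qdhcp-<network-id> ip -6 route add default via <router-address> dev <tap-interface>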

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660682] Re: resource tracker sets wrong initial resource provider generation when creating a new resource provider

2017-04-18 Thread Dr. Jens Rosenboom
** Changed in: nova/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660682

Title:
  resource tracker sets wrong initial resource provider generation when
  creating a new resource provider

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Released

Bug description:
  When the resource tracker creates (and then effectively caches) a new
  resource provider for the compute node, it sets the generation to 1.
  This causes the first PUT that sets the inventory to fail with a 409
  conflict because the generation is wrong.

  The default generation in the database is 0, so for any new resource
  provider this is what it will be. So in the resource tracker, in
  _create_resource_provider, the generation should also be 0.
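
  To illustrate the intended use of the generation (a minimal sketch, not
  nova's actual code; the endpoint and field names reflect the placement
  API as I recall it and should be treated as assumptions):

  import requests

  def put_inventory(placement_url, token, rp_uuid, inventories):
      # A freshly created resource provider starts at generation 0 (the
      # database default), so the first write must send 0, not 1.
      body = {'resource_provider_generation': 0,
              'inventories': inventories}
      resp = requests.put(
          '%s/resource_providers/%s/inventories' % (placement_url, rp_uuid),
          json=body, headers={'X-Auth-Token': token})
      if resp.status_code == 409:
          # Generation mismatch: re-read the provider and retry instead of
          # caching a guessed value.
          raise RuntimeError('generation conflict: %s' % resp.text)
      return resp.json()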

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1676363] [NEW] The network metadata should be more useful

2017-03-27 Thread Dr. Jens Rosenboom
Public bug reported:

There are two issues affecting the usability of the network information
presented to an instance via the metadata API:

1. For networks using DHCP, the IP address information is omitted. There
are however use cases where an instance would want to use static address
configuration even when DHCP is available. So adding the information
would make deploying such an instance easier.

2. For IPv6 subnets, the type is always "ipv6_dhcp", regardless of
whether the subnet has mode "slaac", "dhcpv6-stateless" or
"dhcpv6-stateful". This makes is impossible for an instance to decide
whether it should use DHCPv6 for address and/or additional
configuration.

Here is the current output for an instance with one network for IPv4 and
one for IPv6:

{
  "services": [

  ],
  "networks": [
{
  "network_id": "fb1ca77c-624d-42ab-9102-16f21313a6cb",
  "link": "tap92b3d1dd-12",
  "type": "ipv4_dhcp",
  "id": "network0"
},
{
  "network_id": "6179a9e5-e370-4ee4-8ff6-d83f118b08fd",
  "link": "tap2fa5e368-de",
  "type": "ipv6_dhcp",
  "id": "network1"
}
  ],
  "links": [
{
  "ethernet_mac_address": "fa:16:3e:e0:b3:ad",
  "mtu": 1500,
  "type": "ovs",
  "id": "tap92b3d1dd-12",
  "vif_id": "92b3d1dd-12c2-49cd-82a5-298c071896fd"
},
{
  "ethernet_mac_address": "fa:16:3e:aa:71:95",
  "mtu": 1500,
  "type": "ovs",
  "id": "tap2fa5e368-de",
  "vif_id": "2fa5e368-de18-416f-b6f7-063687d4b9e5"
}
  ]
}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1676363

Title:
  The network metadata should be more useful

Status in OpenStack Compute (nova):
  New

Bug description:
  There are two issues affecting the usability of the network
  information presented to an instance via the metadata API:

  1. For networks using DHCP, the IP address information is omitted.
  There are however use cases where an instance would want to use static
  address configuration even when DHCP is available. So adding the
  information would make deploying such an instance easier.

  2. For IPv6 subnets, the type is always "ipv6_dhcp", regardless of
  whether the subnet has mode "slaac", "dhcpv6-stateless" or
  "dhcpv6-stateful". This makes is impossible for an instance to decide
  whether it should use DHCPv6 for address and/or additional
  configuration.

  Here is the current output for an instance with one network for IPv4
  and one for IPv6:

  {
"services": [
  
],
"networks": [
  {
"network_id": "fb1ca77c-624d-42ab-9102-16f21313a6cb",
"link": "tap92b3d1dd-12",
"type": "ipv4_dhcp",
"id": "network0"
  },
  {
"network_id": "6179a9e5-e370-4ee4-8ff6-d83f118b08fd",
"link": "tap2fa5e368-de",
"type": "ipv6_dhcp",
"id": "network1"
  }
],
"links": [
  {
"ethernet_mac_address": "fa:16:3e:e0:b3:ad",
"mtu": 1500,
"type": "ovs",
"id": "tap92b3d1dd-12",
"vif_id": "92b3d1dd-12c2-49cd-82a5-298c071896fd"
  },
  {
"ethernet_mac_address": "fa:16:3e:aa:71:95",
"mtu": 1500,
"type": "ovs",
"id": "tap2fa5e368-de",
"vif_id": "2fa5e368-de18-416f-b6f7-063687d4b9e5"
  }
]
  }
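
  To illustrate why this is not enough: a rough sketch of what a guest-side
  consumer of this metadata has to do today (my own illustration, not code
  from any project); with only "ipv6_dhcp" it cannot tell whether to start
  a DHCPv6 client at all:

  import json

  def configure_from_metadata(path='network_data.json'):
      with open(path) as f:
          data = json.load(f)
      for net in data['networks']:
          if net['type'] == 'ipv4_dhcp':
              print('%s: run a DHCPv4 client' % net['link'])
          elif net['type'] == 'ipv6_dhcp':
              # Ambiguous: could be slaac, dhcpv6-stateless or
              # dhcpv6-stateful, and no address is included either.
              print('%s: IPv6, but the address mode is unknown' % net['link'])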

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1676363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675351] Re: glance: creating a public image fails during installation

2017-03-23 Thread Dr. Jens Rosenboom
It turns out that the default glance-api.conf file is misleading; it
shows:

[paste_deploy]
#flavor = keystone

implying that "keystone" is the default value for paste_deploy.flavor,
when in fact the default is None.
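
In other words, to get the behaviour the install guide assumes, the
flavor has to be set explicitly, e.g.:

[paste_deploy]
flavor = keystone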

** Changed in: openstack-manuals
   Status: New => Invalid

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1675351

Title:
  glance: creating a public image fails during installation

Status in Glance:
  New
Status in openstack-manuals:
  Invalid

Bug description:
  Following the instructions at
  https://docs.openstack.org/ocata/install-guide-ubuntu/glance-verify.html
  I am getting this error when trying to upload the test image:

  $ openstack image create "cirros2" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  403 Forbidden
  You are not authorized to complete publicize_image action.
  (HTTP 403)
  $

  Creating the image without the "--public" option works fine. I did
  verify that I do have the admin role as specified in
  /etc/glance/policy.json:

  "publicize_image": "role:admin",

  $ openstack role assignment list --na
  +---++---+-++---+
  | Role  | User   | Group | Project | Domain | Inherited |
  +---++---+-++---+
  | user  | demo@Default   |   | demo@Default|| False |
  | admin | admin@Default  |   | admin@Default   || False |
  | admin | glance@Default |   | service@Default || False |
  +---++---+-++---+
  $

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1675351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668223] Re: subnet create is not working with --use-default-subnet-pool

2017-03-10 Thread Dr. Jens Rosenboom
** Changed in: python-openstackclient
   Status: In Progress => Fix Released

** Changed in: python-openstacksdk
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned)

** Changed in: python-openstackclient
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668223

Title:
  subnet create is not working with --use-default-subnet-pool

Status in neutron:
  Confirmed
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released

Bug description:
  Seems like the option is not passed properly to the API:

  $ openstack --debug subnet create --ip-version 6 --use-default-subnet-pool 
--ipv6-address-mode slaac --ipv6-ra-mode slaac --network mynet mysubnet
  ...
  REQ: curl -g -i -X POST http://10.42.1.126:9696/v2.0/subnets -H "User-Agent: 
openstacksdk/0.9.13 keystoneauth1/2.18.0 python-requests/2.12.5 CPython/2.7.12" 
-H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}c61b74f7c026e385b8953576101f854d86cb0d48" -d '{"subnet": {"network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "name": "mysubnet", "ipv6_address_mode": "slaac"}}'
  http://10.42.1.126:9696 "POST /v2.0/subnets HTTP/1.1" 400 146
  RESP: [400] Content-Type: application/json Content-Length: 146 
X-Openstack-Request-Id: req-299d7f52-d5ce-4873-979d-f63626f380ab Date: Mon, 27 
Feb 2017 10:30:31 GMT Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.", "type": "BadRequest", "detail": 
""}}
  ...
  HttpException: HttpException: Bad Request, Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.

  END return value: 1
  $

  Running the same command using the neutron CLI is working fine:

  $ neutron --debug subnet-create --name subnet6 --ip_version 6 
--use-default-subnetpool --ipv6-address-mode slaac --ipv6-ra-mode slaac mynet
  ...
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.42.1.126:9696/v2.0/subnets.json -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}dd78a0939a35913dad4181aaf5f68e0c9277d74e" -d '{"sub
  net": {"use_default_subnetpool": true, "network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "ipv6_address_mode": "slaac", "name": "subnet6"}}'
  DEBUG: keystoneauth.session RESP: [201] Content-Type: application/json 
Content-Length: 735 X-Openstack-Request-Id: 
req-13f0bc2a-e4e5-43dc-b048-e63acef8f131 Date: Mon, 27 Feb 2017 10:46:32 GMT 
Connection: keep-alive 
  RESP BODY: {"subnet": {"service_types": [], "description": "", "enable_dhcp": 
true, "tags": [], "network_id": "1f20da97-ddd4-40f8-b8d3-6321de8671a0", 
"tenant_id": "6de6f29dcf904ab8a12e8ca558f532e9", "created_at": 
"2017-02-27T10:46:31Z", "dns_nameservers": [], "updated_at":
   "2017-02-27T10:46:31Z", "gateway_ip": "2001:db8:1234::1", "ipv6_ra_mode": 
"slaac", "allocation_pools": [{"start": "2001:db8:1234::2", "end": 
"2001:db8:1234:0::::"}], "host_routes": [], "revision_number": 
2, "ip_version": 6, "ipv6_address_mode": "slaac", "c
  idr": "2001:db8:1234::/64", "project_id": "6de6f29dcf904ab8a12e8ca558f532e9", 
"id": "ebb07be0-3f8d-4219-afbd-f81ca352954d", "subnetpool_id": 
"4c1661ba-b24c-4fda-8815-3f1fd29281af", "name": "subnet6"}}
  ...

  So somehow this attribute is missing from the OSC's request:

  "use_default_subnetpool": true

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668223] Re: subnet create is not working with --use-default-subnet-pool

2017-03-08 Thread Dr. Jens Rosenboom
** Changed in: python-openstacksdk
   Status: In Progress => Fix Released

** Changed in: neutron
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668223

Title:
  subnet create is not working with --use-default-subnet-pool

Status in neutron:
  Confirmed
Status in python-openstackclient:
  In Progress
Status in OpenStack SDK:
  Fix Released

Bug description:
  Seems like the option is not passed properly to the API:

  $ openstack --debug subnet create --ip-version 6 --use-default-subnet-pool 
--ipv6-address-mode slaac --ipv6-ra-mode slaac --network mynet mysubnet
  ...
  REQ: curl -g -i -X POST http://10.42.1.126:9696/v2.0/subnets -H "User-Agent: 
openstacksdk/0.9.13 keystoneauth1/2.18.0 python-requests/2.12.5 CPython/2.7.12" 
-H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}c61b74f7c026e385b8953576101f854d86cb0d48" -d '{"subnet": {"network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "name": "mysubnet", "ipv6_address_mode": "slaac"}}'
  http://10.42.1.126:9696 "POST /v2.0/subnets HTTP/1.1" 400 146
  RESP: [400] Content-Type: application/json Content-Length: 146 
X-Openstack-Request-Id: req-299d7f52-d5ce-4873-979d-f63626f380ab Date: Mon, 27 
Feb 2017 10:30:31 GMT Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.", "type": "BadRequest", "detail": 
""}}
  ...
  HttpException: HttpException: Bad Request, Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.

  END return value: 1
  $

  Running the same command using the neutron CLI is working fine:

  $ neutron --debug subnet-create --name subnet6 --ip_version 6 
--use-default-subnetpool --ipv6-address-mode slaac --ipv6-ra-mode slaac mynet
  ...
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.42.1.126:9696/v2.0/subnets.json -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}dd78a0939a35913dad4181aaf5f68e0c9277d74e" -d '{"sub
  net": {"use_default_subnetpool": true, "network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "ipv6_address_mode": "slaac", "name": "subnet6"}}'
  DEBUG: keystoneauth.session RESP: [201] Content-Type: application/json 
Content-Length: 735 X-Openstack-Request-Id: 
req-13f0bc2a-e4e5-43dc-b048-e63acef8f131 Date: Mon, 27 Feb 2017 10:46:32 GMT 
Connection: keep-alive 
  RESP BODY: {"subnet": {"service_types": [], "description": "", "enable_dhcp": 
true, "tags": [], "network_id": "1f20da97-ddd4-40f8-b8d3-6321de8671a0", 
"tenant_id": "6de6f29dcf904ab8a12e8ca558f532e9", "created_at": 
"2017-02-27T10:46:31Z", "dns_nameservers": [], "updated_at":
   "2017-02-27T10:46:31Z", "gateway_ip": "2001:db8:1234::1", "ipv6_ra_mode": 
"slaac", "allocation_pools": [{"start": "2001:db8:1234::2", "end": 
"2001:db8:1234:0::::"}], "host_routes": [], "revision_number": 
2, "ip_version": 6, "ipv6_address_mode": "slaac", "c
  idr": "2001:db8:1234::/64", "project_id": "6de6f29dcf904ab8a12e8ca558f532e9", 
"id": "ebb07be0-3f8d-4219-afbd-f81ca352954d", "subnetpool_id": 
"4c1661ba-b24c-4fda-8815-3f1fd29281af", "name": "subnet6"}}
  ...

  So somehow this attribute is missing from the OSC's request:

  "use_default_subnetpool": true

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668542] [NEW] nova.conf - az configuration options in Configuration Reference

2017-02-28 Thread Dr. Jens Rosenboom
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [X] This doc is inaccurate in this way: __see below
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode



The descriptions of default_availability_zone and default_schedule_zone
are confusing; they seem to serve the same purpose and it is not clear
how they differ.

Looking at the code a bit, the text for default_schedule_zone is even
wrong: it does not affect the scheduler (at least not directly), but is
used in the "create server" API call in case the original request did
not specify an availability_zone.

default_availability_zone, in contrast, seems to be used to determine
what the AZ for a compute host will be if it is not set by other means.

It would be nice if someone from the Nova team could confirm this before
we start updating the docs.
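
For illustration, this is how the two options would typically appear in
nova.conf; the values are examples and the comments reflect the behaviour
described above:

[DEFAULT]
# AZ reported for a compute host that is not placed into an AZ by other
# means (e.g. via a host aggregate).
default_availability_zone = nova
# AZ filled into a "create server" request that does not specify one.
default_schedule_zone = nova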

---
Release: 0.9 on 2017-02-28 05:45
SHA: f8b8c1c2f797d927274c6b005dffb4acb18b3a6e
Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-reference/source/compute/config-options.rst
URL: 
https://docs.openstack.org/draft/config-reference/compute/config-options.html

** Affects: nova
 Importance: Medium
 Status: Confirmed

** Affects: openstack-manuals
 Importance: Undecided
 Status: New


** Tags: config-reference

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1668542

Title:
  nova.conf - az configuration options in Configuration Reference

Status in OpenStack Compute (nova):
  Confirmed
Status in openstack-manuals:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: __see below
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  

  The descriptions of default_availability_zone and
  default_schedule_zone are confusing; they seem to serve the same
  purpose and it is not clear how they differ.

  Looking at the code a bit, the text for default_schedule_zone is even
  wrong: it does not affect the scheduler (at least not directly), but
  is used in the "create server" API call in case the original request
  did not specify an availability_zone.

  default_availability_zone, in contrast, seems to be used to determine
  what the AZ for a compute host will be if it is not set by other
  means.

  It would be nice if someone from the Nova team could confirm this
  before we start updating the docs.

  ---
  Release: 0.9 on 2017-02-28 05:45
  SHA: f8b8c1c2f797d927274c6b005dffb4acb18b3a6e
  Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-reference/source/compute/config-options.rst
  URL: 
https://docs.openstack.org/draft/config-reference/compute/config-options.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1668542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668223] Re: subnet create is not working with --use-default-subnet-pool

2017-02-27 Thread Dr. Jens Rosenboom
** Changed in: python-openstackclient
   Status: Invalid => In Progress

** Changed in: python-openstackclient
 Assignee: (unassigned) => Dr. Jens Rosenboom (j-rosenboom-j)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668223

Title:
  subnet create is not working with --use-default-subnet-pool

Status in neutron:
  New
Status in python-openstackclient:
  In Progress
Status in OpenStack SDK:
  In Progress

Bug description:
  Seems like the option is not passed properly to the API:

  $ openstack --debug subnet create --ip-version 6 --use-default-subnet-pool 
--ipv6-address-mode slaac --ipv6-ra-mode slaac --network mynet mysubnet
  ...
  REQ: curl -g -i -X POST http://10.42.1.126:9696/v2.0/subnets -H "User-Agent: 
openstacksdk/0.9.13 keystoneauth1/2.18.0 python-requests/2.12.5 CPython/2.7.12" 
-H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}c61b74f7c026e385b8953576101f854d86cb0d48" -d '{"subnet": {"network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "name": "mysubnet", "ipv6_address_mode": "slaac"}}'
  http://10.42.1.126:9696 "POST /v2.0/subnets HTTP/1.1" 400 146
  RESP: [400] Content-Type: application/json Content-Length: 146 
X-Openstack-Request-Id: req-299d7f52-d5ce-4873-979d-f63626f380ab Date: Mon, 27 
Feb 2017 10:30:31 GMT Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.", "type": "BadRequest", "detail": 
""}}
  ...
  HttpException: HttpException: Bad Request, Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.

  END return value: 1
  $

  Running the same command using the neutron CLI is working fine:

  $ neutron --debug subnet-create --name subnet6 --ip_version 6 
--use-default-subnetpool --ipv6-address-mode slaac --ipv6-ra-mode slaac mynet
  ...
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.42.1.126:9696/v2.0/subnets.json -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}dd78a0939a35913dad4181aaf5f68e0c9277d74e" -d '{"sub
  net": {"use_default_subnetpool": true, "network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "ipv6_address_mode": "slaac", "name": "subnet6"}}'
  DEBUG: keystoneauth.session RESP: [201] Content-Type: application/json 
Content-Length: 735 X-Openstack-Request-Id: 
req-13f0bc2a-e4e5-43dc-b048-e63acef8f131 Date: Mon, 27 Feb 2017 10:46:32 GMT 
Connection: keep-alive 
  RESP BODY: {"subnet": {"service_types": [], "description": "", "enable_dhcp": 
true, "tags": [], "network_id": "1f20da97-ddd4-40f8-b8d3-6321de8671a0", 
"tenant_id": "6de6f29dcf904ab8a12e8ca558f532e9", "created_at": 
"2017-02-27T10:46:31Z", "dns_nameservers": [], "updated_at":
   "2017-02-27T10:46:31Z", "gateway_ip": "2001:db8:1234::1", "ipv6_ra_mode": 
"slaac", "allocation_pools": [{"start": "2001:db8:1234::2", "end": 
"2001:db8:1234:0::::"}], "host_routes": [], "revision_number": 
2, "ip_version": 6, "ipv6_address_mode": "slaac", "c
  idr": "2001:db8:1234::/64", "project_id": "6de6f29dcf904ab8a12e8ca558f532e9", 
"id": "ebb07be0-3f8d-4219-afbd-f81ca352954d", "subnetpool_id": 
"4c1661ba-b24c-4fda-8815-3f1fd29281af", "name": "subnet6"}}
  ...

  So somehow this attribute is missing from the OSC's request:

  "use_default_subnetpool": true

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668223] Re: subnet create is not working with --use-default-subnet-pool

2017-02-27 Thread Dr. Jens Rosenboom
The attribute is also missing in the Neutron api-ref at

https://developer.openstack.org/api-ref/networking/v2/?expanded=create-subnet-detail

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: python-openstackclient
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned)

** Changed in: neutron
 Assignee: (unassigned) => Dr. Jens Rosenboom (j-rosenboom-j)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668223

Title:
  subnet create is not working with --use-default-subnet-pool

Status in neutron:
  New
Status in python-openstackclient:
  Invalid
Status in OpenStack SDK:
  In Progress

Bug description:
  Seems like the option is not passed properly to the API:

  $ openstack --debug subnet create --ip-version 6 --use-default-subnet-pool 
--ipv6-address-mode slaac --ipv6-ra-mode slaac --network mynet mysubnet
  ...
  REQ: curl -g -i -X POST http://10.42.1.126:9696/v2.0/subnets -H "User-Agent: 
openstacksdk/0.9.13 keystoneauth1/2.18.0 python-requests/2.12.5 CPython/2.7.12" 
-H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}c61b74f7c026e385b8953576101f854d86cb0d48" -d '{"subnet": {"network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "name": "mysubnet", "ipv6_address_mode": "slaac"}}'
  http://10.42.1.126:9696 "POST /v2.0/subnets HTTP/1.1" 400 146
  RESP: [400] Content-Type: application/json Content-Length: 146 
X-Openstack-Request-Id: req-299d7f52-d5ce-4873-979d-f63626f380ab Date: Mon, 27 
Feb 2017 10:30:31 GMT Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.", "type": "BadRequest", "detail": 
""}}
  ...
  HttpException: HttpException: Bad Request, Bad subnets request: a subnetpool 
must be specified in the absence of a cidr.

  END return value: 1
  $

  Running the same command using the neutron CLI is working fine:

  $ neutron --debug subnet-create --name subnet6 --ip_version 6 
--use-default-subnetpool --ipv6-address-mode slaac --ipv6-ra-mode slaac mynet
  ...
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.42.1.126:9696/v2.0/subnets.json -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}dd78a0939a35913dad4181aaf5f68e0c9277d74e" -d '{"sub
  net": {"use_default_subnetpool": true, "network_id": 
"1f20da97-ddd4-40f8-b8d3-6321de8671a0", "ipv6_ra_mode": "slaac", "ip_version": 
6, "ipv6_address_mode": "slaac", "name": "subnet6"}}'
  DEBUG: keystoneauth.session RESP: [201] Content-Type: application/json 
Content-Length: 735 X-Openstack-Request-Id: 
req-13f0bc2a-e4e5-43dc-b048-e63acef8f131 Date: Mon, 27 Feb 2017 10:46:32 GMT 
Connection: keep-alive 
  RESP BODY: {"subnet": {"service_types": [], "description": "", "enable_dhcp": 
true, "tags": [], "network_id": "1f20da97-ddd4-40f8-b8d3-6321de8671a0", 
"tenant_id": "6de6f29dcf904ab8a12e8ca558f532e9", "created_at": 
"2017-02-27T10:46:31Z", "dns_nameservers": [], "updated_at":
   "2017-02-27T10:46:31Z", "gateway_ip": "2001:db8:1234::1", "ipv6_ra_mode": 
"slaac", "allocation_pools": [{"start": "2001:db8:1234::2", "end": 
"2001:db8:1234:0::::"}], "host_routes": [], "revision_number": 
2, "ip_version": 6, "ipv6_address_mode": "slaac", "c
  idr": "2001:db8:1234::/64", "project_id": "6de6f29dcf904ab8a12e8ca558f532e9", 
"id": "ebb07be0-3f8d-4219-afbd-f81ca352954d", "subnetpool_id": 
"4c1661ba-b24c-4fda-8815-3f1fd29281af", "name": "subnet6"}}
  ...

  So somehow this attribute is missing from the OSC's request:

  "use_default_subnetpool": true

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654183] Re: Token based authentication in Client class does not work

2017-02-09 Thread Dr. Jens Rosenboom
This issue is still present for Horizon in Newton; any chance to
backport the Horizon fixes there?

To reproduce:

- Install devstack from stable/newton
- Run stack.sh
- sudo -H pip install -U python-openstackclient
- sudo systemctl restart apache2

=> All nova-related tabs like Admin/Flavors log the session out with an
auth failure.

Doing "sudo -H pip install python-novaclient==6.0.0 -U" and apache2
restart resolves the issue.

** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1654183

Title:
  Token based authentication in Client class does not work

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in python-novaclient:
  Fix Released
Status in tripleo:
  Fix Released
Status in tripleo-quickstart:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  With the newly released novaclient (7.0.0) it seems that token based
  authentication does not work in novaclient.client.Client.

  I get back the following response from the Nova server:

  Malformed request URL: URL's project_id
  'e0beb44615f34d54b8a9a9203a3e5a1c' doesn't match Context's project_id
  'None' (HTTP 400)

  I just created the Nova client in the following way:
  Client(
  2,
  endpoint_type="public",
  service_type='compute',
  auth_token=auth_token,
  tenant_id="devel",
  region_name="RegionOne",
  auth_url=keystone_url,
  insecure=True,
  endpoint_override=nova_endpoint 
#https://.../v2/e0beb44615f34d54b8a9a9203a3e5a1c
  )

  After that, novaclient performs a new token based authentication
  without project_id (tenant_id), which causes the new token to not
  belong to any project. Besides, if we already have a token, why does
  novaclient request a new one from keystone? (Other clients, like Heat
  and Neutron for example, do not request any token from keystone if one
  is already provided to the client class.)

  The bug is introduced by the following commit:
  
https://github.com/openstack/python-novaclient/commit/8409e006c5f362922baae9470f14c12e0443dd70

  +    if not auth and auth_token:
  +        auth = identity.Token(auth_url=auth_url,
  +                              token=auth_token)

  When project_id is also passed into the Token authentication then
  everything works fine, so the newly requested token belongs to the
  right project/tenant.

  Note: Originally this problem appears in Mistral project of OpenStack,
  which is using the client classes directly from their actions with
  token based authentication.
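
  For reference, a minimal sketch of token based authentication that keeps
  the project scope, using keystoneauth1 directly (all values are
  placeholders):

  from keystoneauth1 import session
  from keystoneauth1.identity import v3
  from novaclient import client as nova_client

  auth = v3.Token(auth_url='https://keystone.example.com/v3',
                  token='gAAAA...',          # existing token
                  project_id='e0beb44615f34d54b8a9a9203a3e5a1c')
  sess = session.Session(auth=auth)
  nova = nova_client.Client('2', session=sess,
                            region_name='RegionOne',
                            endpoint_type='public')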

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1654183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1637072] Re: Unable to talk to nova properly

2017-01-25 Thread Dr. Jens Rosenboom
*** This bug is a duplicate of bug 1654183 ***
https://bugs.launchpad.net/bugs/1654183

So it turns out that this is another duplicate of 1654183, fixed in
python-novaclient git master currently.

For the Chef deployment, there was a global pip based python-
openstackclient installation that unfortunately installed the latest
python-novaclient, too, and made it leak into other services.

** This bug has been marked a duplicate of bug 1654183
   Token based authentication in Client class does not work

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1637072

Title:
  Unable to talk to nova properly

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  To reproduce just run devstack with default settings:

  Most interactions involving nova are failing after that.

  Original description:

  When we try to create flavor using dashboard it is giving error :
  Danger: There was an error submitting the form. Please try again

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1637072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259760] Re: Spice console isn't working when ssl_only=True is set

2017-01-24 Thread Dr. Jens Rosenboom
Thanks for the hint, Mike; I must admit that I never looked at that. In
fact it seems this was already fixed in spice-html5 0.1.5, dated more
than two years ago, but Ubuntu still ships 0.1.4.

** Also affects: spice-html5 (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259760

Title:
  Spice console isn't working when ssl_only=True is set

Status in OpenStack Compute (nova):
  In Progress
Status in spice-html5 package in Ubuntu:
  New

Bug description:
  OpenStack installation: 2013.2
  OS: Ubuntu 13.10
  Repo: standard Ubuntu repository

  
  When using ssl_only in nova.conf, the browser gets this error: 
  [Exception... "The operation is insecure." code: "18" nsresult: "0x80530012 
(SecurityError)" location: "https://api.region.domain.tld:6082/spiceconn.js 
Line: 34"]

  Problem: the client tries to connect using the ws:// scheme, not wss://.

  Temporarily fixed by changing scheme = "wss://" at line 82 of
  /usr/share/spice-html5/spice_auto.html.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658074] [NEW] openvswitch-agent spawning infinite number of ovsdb-client processes

2017-01-20 Thread Dr. Jens Rosenboom
Public bug reported:

After installing neutron on Ubuntu Xenial from the Newton UCA
(2:9.0.0-0ubuntu1.16.10.2~cloud0), I noticed these processes:

neutron  11222  2.9  0.8 262628 108712 ?   Ss   10:59   1:36 
/usr/bin/python /usr/bin/neutron-openvswitch-agent 
--config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/plugins/ml2/openvswitch_agent.ini 
--log-file=/var/log/neutron/neutron-openvswitch-agent.lo
root 11686  0.0  0.0  54112  3256 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
root 11688  0.0  0.3  83336 45444 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 11828  0.0  0.0  20056  2976 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 13426  0.0  0.0  54112  3204 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
root 13430  0.0  0.3  83336 45480 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 13490  0.0  0.0  20056  3052 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 14775  0.0  0.0  54112  3256 ?S11:01   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
root 14779  0.0  0.3  83336 45272 ?S11:01   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
root 14821  0.0  0.0  20056  2944 ?S11:01   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json

with another set being spawned every 30 seconds. In
/var/log/neutron/neutron-openvswitch-agent.log I see these errors:

2017-01-20 11:00:39.804 11222 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor Interface name,ofport,external_ids --format=json]: sudo: unable to resolve host jr-ansi02
2017-01-20 11:00:39.805 11222 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor Interface name,ofport,external_ids --format=json] dies due to the error: sudo: unable to resolve host jr-ansi02

Now of course one can argue that properly setting up sudo (or rather
/etc/hosts) will solve this issue, but still the ovs-agent process
should properly clean up its children and not assume that they are dead
as soon as there is any output on stderr (see _read_stderr() in
neutron/agent/linux/async_process.py).
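
To illustrate the point, a generic sketch (not the neutron code): output
on stderr alone does not mean the child has exited, so the exit status
should be checked, and an old child should be killed before a new one is
spawned:

import subprocess

proc = subprocess.Popen(
    ['ovsdb-client', 'monitor', 'Interface',
     'name,ofport,external_ids', '--format=json'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

line = proc.stderr.readline()
if line:
    # A warning such as "sudo: unable to resolve host ..." lands here even
    # though the process keeps running.
    if proc.poll() is not None:
        pass  # really dead: safe to respawn
    else:
        pass  # still alive: log the message, do not spawn another copy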

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658074

Title:
  openvswitch-agent spawning infinite number of ovsdb-client processes

Status in neutron:
  New

Bug description:
  After installing neutron on Ubuntu Xenial from the Newton UCA
  (2:9.0.0-0ubuntu1.16.10.2~cloud0), I noticed these processes:

  neutron  11222  2.9  0.8 262628 108712 ?   Ss   10:59   1:36 
/usr/bin/python /usr/bin/neutron-openvswitch-agent 
--config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/plugins/ml2/openvswitch_agent.ini 
--log-file=/var/log/neutron/neutron-openvswitch-agent.lo
  root 11686  0.0  0.0  54112  3256 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
  root 11688  0.0  0.3  83336 45444 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 11828  0.0  0.0  20056  2976 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 13426  0.0  0.0  54112  3204 ?S11:00   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface 
name,ofport,external_ids --format=json
  root 13430  0.0  0.3  83336 45480 ?S11:00   0:00  |   \_ 
/usr/bin/python /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 13490  0.0  0.0  20056  3052 ?S11:00   0:00  |   \_ 
/usr/bin/ovsdb-client monitor Interface name,ofport,external_ids --format=json
  root 14775  0.0  0.0  54112  3256 ?S11:01   0:00  \_ sudo 
neutron-rootwrap /etc/neutron/rootwrap.conf

[Yahoo-eng-team] [Bug 1624791] Re: Horizon randomly fails to connect to the service APIs

2016-11-22 Thread Dr. Jens Rosenboom
We get the same issue when deploying Newton via Ubuntu UCA. Affected
package versions are:

python-openssl=16.1.0-1~cloud0
python-cryptography=1.5-2~cloud0

The error goes away if, similar to the Ansible workaround, we downgrade
to the packages from Xenial:

python-openssl=0.15.1-2build1
python-cryptography=1.2.3-1

Still not sure whether this issue could/should have a workaround in
Horizon or whether it is only a packaging issue.
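
For reference, the downgrade we used as a workaround (same package
versions as listed above):

$ sudo apt-get install python-openssl=0.15.1-2build1 python-cryptography=1.2.3-1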

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624791

Title:
  Horizon randomly fails to connect to the service APIs

Status in OpenStack Dashboard (Horizon):
  New
Status in openstack-ansible:
  Won't Fix

Bug description:
  This started occurring after upgrading to RC1. Before that, I was
  using Horizon's e7b4bdfe5d576766b34bf00cea3dcbcb42436420 plus cherry-
  picked changesets I2db4218e7351e0017a7a74114be6ac7af803476c and
  Idb58cebefab747f204e54ea6350db0852aec60f5.

  Running nova, cinder, etc. from the CLI seems to work perfectly.

  However Horizon randomly fails with "bad handshake: SysCallError(0,
  None)" when connecting to other services, including Keystone.

  The symptoms are:

  * Login is intermittent, it randomly succeeds or fails.

  * After being lucky enough to login, dashboards would randomly fail to
  display information.

  I tried to isolate and see if some of my 3 infra nodes was the
  culprit, without success.

  I also tried to destroy and reconstruct the horizon and keystone
  containers multiple times without success.

  I didn't try yet to downgrade Horizon to the
  e7b4bdfe5d576766b34bf00cea3dcbcb42436420 commit, I will try it ASAP
  and report the results here.

  
  [Sun Sep 18 01:42:52.285253 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
Traceback (most recent call last):
  [Sun Sep 18 01:42:52.285256 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/tables.py",
 line 389, in allowed
  [Sun Sep 18 01:42:52.285259 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
limits = api.nova.tenant_absolute_limits(request, reserved=True)
  [Sun Sep 18 01:42:52.285261 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py",
 line 949, in tenant_absolute_limits
  [Sun Sep 18 01:42:52.285264 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
limits = novaclient(request).limits.get(reserved=reserved).absolute
  [Sun Sep 18 01:42:52.285266 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/v2/limits.py",
 line 100, in get
  [Sun Sep 18 01:42:52.285269 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
return self._get("/limits%s" % query_string, "limits")
  [Sun Sep 18 01:42:52.285271 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/base.py",
 line 346, in _get
  [Sun Sep 18 01:42:52.285273 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
resp, body = self.api.client.get(url)
  [Sun Sep 18 01:42:52.285276 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py",
 line 480, in get
  [Sun Sep 18 01:42:52.285278 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
return self._cs_request(url, 'GET', **kwargs)
  [Sun Sep 18 01:42:52.285280 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py",
 line 458, in _cs_request
  [Sun Sep 18 01:42:52.285282 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
resp, body = self._time_request(url, method, **kwargs)
  [Sun Sep 18 01:42:52.285285 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py",
 line 431, in _time_request
  [Sun Sep 18 01:42:52.285287 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
resp, body = self.request(url, method, **kwargs)
  [Sun Sep 18 01:42:52.285289 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/openstack/venvs/horizon-14.0.0/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../novaclient/client.py",
 line 396, in request
  [Sun Sep 18 01:42:52.285299 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
**kwargs)
  [Sun Sep 18 01:42:52.285303 2016] [wsgi:error] [pid 6470:tid 139771996858112] 
  File 
"/op

[Yahoo-eng-team] [Bug 1639220] [NEW] [RFE] Default action for RBAC

2016-11-04 Thread Dr. Jens Rosenboom
Public bug reported:

Introduce a new type of action in RBAC for use with QoS (and potentially
useful for other stuff).

The action would be default:

neutron rbac-create  --type qos-policy --target-tenant
 --action default

That would mean:
   Any created network for that tenant would be assigned to the specific policy 
id by default.

Maybe "network_default" is a more appropriate name in such case. One
important use case would be an operator that wants to rate-limit tenant
ports by default.

This has been proposed in https://bugs.launchpad.net/bugs/1512587 , but
it was decided that this should be implemented as a separate feature
after the basic work is done.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639220

Title:
  [RFE] Default action for RBAC

Status in neutron:
  New

Bug description:
  Introduce a new type of action in RBAC for use with QoS (and
  potentially useful for other stuff).

  The action would be default:

  neutron rbac-create  --type qos-policy --target-tenant
   --action default

  That would mean:
 Any created network for that tenant would be assigned to the specific 
policy id by default.

  Maybe "network_default" is a more appropriate name in such case. One
  important use case would be an operator that wants to rate-limit
  tenant ports by default.

  This has been proposed in https://bugs.launchpad.net/bugs/1512587 ,
  but it was decided that this should be implemented as a separate
  feature after the basic work is done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509004] Re: "test_dualnet_dhcp6_stateless_from_os" failures seen in the gate

2016-10-17 Thread Dr. Jens Rosenboom
54 hits in the last 7 days. Checked a couple and they all have an error
like this:

http://logs.openstack.org/96/384696/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/c19ea11/logs/screen-q-svc.txt.gz?level=ERROR#_2016-10-10_20_45_30_546

/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py:68: 
SAWarning: An exception has occurred during handling of a previous exception.  
The previous exception is:
  (pymysql.err.InternalError) (1213, 
u'Deadlock found when trying to get lock; try restarting transaction') [SQL: 
u'INSERT INTO ipallocations (port_id, ip_address, subnet_id, network_id) VALUES 
(%(port_id)s, %(ip_address)s, %(subnet_id)s, %(network_id)s)'] [parameters: 
{'network_id': u'e345189c-5ab3-4696-a860-51874e215e0a', 'subnet_id': 
'0d3d3a6a-5957-4cd1-bfa3-009a45957589', 'port_id': 
u'84722cc3-107b-49bb-b422-e858a8fa7b79', 'ip_address': 
'2003::f816:3eff:feb9:b9a1'}]
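
For reference, transient deadlocks like this are usually handled by retrying
the transaction; a minimal sketch using oslo.db (the max_retries value and
the function body are illustrative only):

from oslo_db import api as oslo_db_api

@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def allocate_ip(session, port_id, ip_address, subnet_id, network_id):
    # the INSERT INTO ipallocations from the error above would happen here
    pass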


** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509004

Title:
  "test_dualnet_dhcp6_stateless_from_os" failures seen in the gate

Status in neutron:
  Confirmed

Bug description:
  "test_dualnet_dhcp6_stateless_from_os" - This test fails in the gate
  randomly both with DVR and non-DVR routers.

  http://logs.openstack.org/79/230079/27/check/gate-tempest-dsvm-
  neutron-full/1caed8b/logs/testr_results.html.gz

  http://logs.openstack.org/85/238485/1/check/gate-tempest-dsvm-neutron-
  dvr/1059e22/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622684] Re: Keycode error using novnc and Horizon console

2016-09-30 Thread Dr. Jens Rosenboom
** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622684

Title:
  Keycode error using novnc and Horizon console

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When using Newton or Mitaka versions of OpenStack Horizon, I am unable
  to talk to the VM in the Horizon console window. I am using noVNC and
  I see the following in the console whenever pressing any key on the
  keyboard:

  
  atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.750245] atkbd serio0: Unknown key released (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.750591] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.815590] atkbd serio0: Unknown key pressed (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.816087] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.945017] atkbd serio0: Unknown key released (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.945848] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   42.393227] atkbd serio0: Unknown key pressed (translated set 2,
  code 0x0 on isa0060/serio0).

  This appears to be related to recent code changes in noVNC. If I
  revert noVNC to the sha 4e0c36dda708628836dc6f5d68fc40d05c7716d9,
  everything works. The commit date of this sha is August 26, 2016.

  Phil

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628883] [NEW] Minimum requirements too low on oslo.log for keystone

2016-09-29 Thread Dr. Jens Rosenboom
Public bug reported:

After upgrading keystone from mitaka to newton-rc1 on Xenial I am
getting this error:


$ keystone-manage db_sync
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 6, in 
from keystone.cmd.manage import main
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in 

from keystone.cmd import cli
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in 

from keystone.cmd import doctor
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", line 
13, in 
from keystone.cmd.doctor import caching
  File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", line 
13, in 
import keystone.conf
  File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, 
in 
from keystone.conf import default
  File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, 
in 
deprecated_since=versionutils.deprecated.NEWTON,
AttributeError: type object 'deprecated' has no attribute 'NEWTON'

This seems to be due to the fact that the installed version of oslo.log is
not updated properly:

python-oslo.log:
  Installed: 3.2.0-2
  Candidate: 3.16.0-0ubuntu1~cloud0
  Version table:
 3.16.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main amd64 Packages
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main i386 Packages
 *** 3.2.0-2 500
500 http://mirror/ubuntu xenial/main amd64 Packages
100 /var/lib/dpkg/status

But looking at the requirements.txt in stable/newton, even
oslo.log>=1.14.0 is claimed to work.
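
A quick way to confirm the mismatch on the affected node (the distribution
name used here is the PyPI one, 'oslo.log'):

import pkg_resources
from oslo_log import versionutils

print(pkg_resources.get_distribution('oslo.log').version)
# False with the 3.2.0 package installed, which matches the traceback above
print(hasattr(versionutils.deprecated, 'NEWTON'))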

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: keystone (Ubuntu)
 Importance: Undecided
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628883

Title:
  Minimum requirements too low on oslo.log for keystone

Status in OpenStack Identity (keystone):
  New
Status in keystone package in Ubuntu:
  Triaged

Bug description:
  After upgrading keystone from mitaka to newton-rc1 on Xenial I am
  getting this error:

  
  $ keystone-manage db_sync
  Traceback (most recent call last):
File "/usr/bin/keystone-manage", line 6, in 
  from keystone.cmd.manage import main
File "/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 32, in 

  from keystone.cmd import cli
File "/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 28, in 

  from keystone.cmd import doctor
File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/__init__.py", 
line 13, in 
  from keystone.cmd.doctor import caching
File "/usr/lib/python2.7/dist-packages/keystone/cmd/doctor/caching.py", 
line 13, in 
  import keystone.conf
File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 26, 
in 
  from keystone.conf import default
File "/usr/lib/python2.7/dist-packages/keystone/conf/default.py", line 180, 
in 
  deprecated_since=versionutils.deprecated.NEWTON,
  AttributeError: type object 'deprecated' has no attribute 'NEWTON'

  This seems to be due to the fact that the installed version of oslo.log
  is not updated properly:

  python-oslo.log:
Installed: 3.2.0-2
Candidate: 3.16.0-0ubuntu1~cloud0
Version table:
   3.16.0-0ubuntu1~cloud0 500
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main amd64 Packages
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/newton/main i386 Packages
   *** 3.2.0-2 500
  500 http://mirror/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  But looking at the requirements.txt in stable/newton, even
  oslo.log>=1.14.0 is claimed to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628549] [NEW] DB migration is broken with two unassigned floating IPs

2016-09-28 Thread Dr. Jens Rosenboom
Public bug reported:

The error looks like this:

STDERR: INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Traceback (most recent call last):
  File "/usr/bin/neutron-db-manage", line 10, in 
sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
686, in main
return_val |= bool(CONF.command.func(config, CONF.command.name))
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
205, in do_upgrade
run_sanity_checks(config, revision)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
670, in run_sanity_checks
script_dir.run_env()
  File "/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in 
load_python_file
module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in 
load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in 
run_migrations_online()
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 114, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File "/usr/lib/python2.7/dist-packages/alembic/runtime/environment.py", line 
797, in run_migrations
self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 
303, in run_migrations
for step in self._migrations_fn(heads, self):
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
663, in check_sanity
script.module.check_sanity(context.connection)
  File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/newton/expand/6b461a21bcfc_uniq_floatingips0floating_network_.py",
 line 59, in check_sanity
raise DuplicateFloatingIPforOneFixedIP(fixed_ip_address=",".join(res))
TypeError: sequence item 0: expected string, NoneType found
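
For reference, the TypeError comes from joining the fixed_ip_address column
values, which are NULL (None) for unassigned floating IPs; a minimal
reproduction plus an obvious guard (illustration only, not necessarily the
proper fix):

res = [None, None]
try:
    ",".join(res)
except TypeError as exc:
    print(exc)  # sequence item 0: expected string, NoneType found

print(",".join(str(ip) for ip in res if ip is not None))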

The database contains the following entries:
mysql> select * from floatingips;
+--+--+-+--+--+--+--+--+--++--+
| tenant_id| id   | 
floating_ip_address | floating_network_id  | floating_port_id   
  | fixed_port_id| fixed_ip_address | 
router_id| last_known_router_id | 
status | standard_attr_id |
+--+--+-+--+--+--+--+--+--++--+
| f0081f0762a443cbbad09b186557721b | 52c6641f-fcd9-4d7a-bcab-54739353b122 | 
xxx.29   | 768ae947-9346-4373-b9bf-ebf6c82a7187 | 
f4c857a1-27ca-4a59-931b-379569eb8f72 | NULL | 
NULL | NULL | NULL  
   | DOWN   |  132 |
| f0081f0762a443cbbad09b186557721b | 57abe18d-d886-4532-b82f-40e93b07b1d7 | 
xxx.52   | 768ae947-9346-4373-b9bf-ebf6c82a7187 | 
54e44a59-13b1-4926-ac67-8c90e7dd41a6 | 3f862ab3-ea76-4b52-8618-7ce2f5a9b13d | 
10.11.0.30   | 3865b2e8-0764-4084-a565-db1b790011a8 | NULL  
   | ACTIVE | 3062 |
| f0081f0762a443cbbad09b186557721b | 67c32788-5d79-410f-9c6a-de6caa40cf29 | 
xxx.31   | 768ae947-9346-4373-b9bf-ebf6c82a7187 | 
7bba70b8-3e56-4854-9af5-38efd0703597 | 59365d06-13e2-4c99-b167-1455caa032ca | 
10.11.0.34   | 3865b2e8-0764-4084-a565-db1b790011a8 | 
3865b2e8-0764-4084-a565-db1b790011a8 | ACTIVE |  183 |
| f0081f0762a443cbbad09b186557721b | 85043a0c-e294-4d61-a54f-2e5171898c70 | 
xxx.53   | 768ae947-9346-4373-b9bf-ebf6c82a7187 | 
fa689c0f-168a-4a86-8011-870711ebcaf5 | NULL | 
NULL | NULL | NULL  
   | DOWN   | 3089 |
| f0081f0762a443cbbad09b186557721b | 88f11e58-cd78-4c6d-a2a6-b35aa32c7a00 | 
xxx.51   | 768ae947-9346-4373-b9bf-ebf6c82a7187 | 
58a7f431-0d78-4916-ac73-b8592dfea018 | ef7ec32a-b968-4d73-9f91-24a441e9b59a | 
10.11.0.31   | 3865b2e8-0764-4084-a565-db1b790011a8 | NULL  
   | ACTIVE | 3

[Yahoo-eng-team] [Bug 1622684] Re: Keycode error using novnc and Horizon console

2016-09-23 Thread Dr. Jens Rosenboom
Note that this is not a horizon bug; I get the same behaviour when
accessing the URL returned from "nova get-vnc-console" directly.

** Changed in: horizon
   Status: New => Invalid

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Dr. Jens Rosenboom (j-rosenboom-j)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622684

Title:
  Keycode error using novnc and Horizon console

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When using Newton or Mitaka versions of OpenStack Horizon, I am unable
  to talk to the VM in the Horizon console window. I am using noVNC and
  I see the following in the console whenever pressing any key on the
  keyboard:

  
  atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.750245] atkbd serio0: Unknown key released (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.750591] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.815590] atkbd serio0: Unknown key pressed (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.816087] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.945017] atkbd serio0: Unknown key released (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.945848] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   42.393227] atkbd serio0: Unknown key pressed (translated set 2,
  code 0x0 on isa0060/serio0).

  This appears to be related to recent code changes in noVNC. If I
  revert noVNC to the sha 4e0c36dda708628836dc6f5d68fc40d05c7716d9,
  everything works. The commit date of this sha is August 26, 2016.

  Phil

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622684] Re: Keycode error using novnc and Horizon console

2016-09-23 Thread Dr. Jens Rosenboom
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622684

Title:
  Keycode error using novnc and Horizon console

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When using Newton or Mitaka versions of OpenStack Horizon, I am unable
  to talk to the VM in the Horizon console window. I am using noVNC and
  I see the following in the console whenever pressing any key on the
  keyboard:

  
  atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.750245] atkbd serio0: Unknown key released (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.750591] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.815590] atkbd serio0: Unknown key pressed (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.816087] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   41.945017] atkbd serio0: Unknown key released (translated set 2,
  code 0x0 on isa0060/serio0).
  [   41.945848] atkbd serio0: Use 'setkeycodes 00 ' to make it known.
  [   42.393227] atkbd serio0: Unknown key pressed (translated set 2,
  code 0x0 on isa0060/serio0).

  This appears to be related to recent code changes in noVNC. If I
  revert noVNC to the sha 4e0c36dda708628836dc6f5d68fc40d05c7716d9,
  everything works. The commit date of this sha is August 26, 2016.

  Phil

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619554] Re: [KVM] VM console throwing error atkbd serio0: Unknown key pressed

2016-09-23 Thread Dr. Jens Rosenboom
*** This bug is a duplicate of bug 1622684 ***
https://bugs.launchpad.net/bugs/1622684

** This bug is no longer a duplicate of bug 1621257
   VNC console keeps reporting "setkeycodes 00" exception
** This bug has been marked a duplicate of bug 1622684
   Keycode error using novnc and Horizon console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619554

Title:
  [KVM] VM console throwing error atkbd serio0: Unknown key pressed

Status in devstack:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  With the latest master code, the console of cirros images on KVM is
  throwing errors.

  root@runner:~/tools# glance image-show e63f91c4-3326-4a88-ad3a-aea819d64df9
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | eb9139e4942121f22bbc2afc0400b2a4 |
  | container_format | ami  |
  | created_at   | 2016-08-31T09:29:20Z |
  | disk_format  | ami  |
  | hypervisor_type  | qemu |
  | id   | e63f91c4-3326-4a88-ad3a-aea819d64df9 |
  | kernel_id| e5f5bafd-a998-463a-9a29-03c9812ec948 |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros-0.3.4-x86_64-uec   |
  | owner| 1b64a0ec2d8e481abc938bd44197c40c |
  | protected| False|
  | ramdisk_id   | 643e5ddb-bd94-4ddb-9656-84987a2ef917 |
  | size | 25165824 |
  | status   | active   |
  | tags | []   |
  | updated_at   | 2016-09-02T06:38:01Z |
  | virtual_size | None |
  | visibility   | public   |
  +--+--+
  root@runner:~/tools# glance image-list | grep 
e63f91c4-3326-4a88-ad3a-aea819d64df9
  | e63f91c4-3326-4a88-ad3a-aea819d64df9 | cirros-0.3.4-x86_64-uec  
  |
  root@runner:~/tools# 

  stack@controller:~/nsbu_cqe_openstack/devstack$ git log -2
  commit e6b7e7ff3f5c1b1afdae1c3f9c35754d11c0a6aa
  Author: Gary Kotton 
  Date:   Sun Aug 14 06:55:42 2016 -0700

  Enable neutron to work in a multi node setup
  
  On the controller node where devstack is being run should create
  the neutron network. The compute node should not.
  
  The the case that we want to run a multi-node neutron setup we need
  to configure the following (in the case that a plugin does not
  have any agents running on the compute node):
  ENABLED_SERVICES=n-cpu,neutron
  
  In addition to this the code did not enable decomposed plugins to
  configure their nova configurations if necessary.
  
  This patch ensure that the multi-node support works.
  
  Change-Id: I8e80edd453a1106ca666d6c531b2433be631bce4
  Closes-bug: #1613069

  commit 79722563a67d941a808b02aeccb3c6d4f1af0c41
  Merge: 434035e 4d60175
  Author: Jenkins 
  Date:   Tue Aug 30 19:52:15 2016 +

  Merge "Add support for placement API to devstack"
  stack@controller:~/nsbu_cqe_openstack/devstack$ 


  Console logs:

  
  [0.00] Initializing cgroup subsys cpuset
  [0.00] Initializing cgroup subsys cpu
  [0.00] Linux version 3.2.0-80-virtual (buildd@batsu) (gcc version 
4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 
2015 (Ubuntu 3.2.0-80.116-virtual 3.2.68)
  [0.00] Command line: root=/dev/vda console=tty0 console=ttyS0 
no_timer_check
  [0.00] KERNEL supported cpus:
  [0.00]   Intel GenuineIntel
  [0.00]   AMD AuthenticAMD
  [0.00]   Centaur CentaurHauls
  [0.00] BIOS-provided physical RAM map:
  [0.00]  BIOS-e820:  - 0009fc00 (usable)
  [0.00]  BIOS-e820: 0009fc00 - 000a (reserved)
  [0.00]  BIOS-e820: 000f - 0010 (reserved)
  [0.00]  BIOS-e820: 0010 - 1fffe000 (usable)
  [0.00]  BIOS-e820: 1fffe000 - 2000 (reserved)
  [0.00]  BIOS-e820: fffc - 0001 (reserved)
  [0.00] NX (Execute Disable) protection: active
  [0.00] SMBIOS 2.4 present.
  [0.00] No AGP bridge found
  [0.00] last_pfn = 0x1fffe max_arch_pfn = 0x4
  [0.00] x86 PAT enabled: cpu 0, old 0x7040600070406, new 
0x7010600070106
  [0.00] found SMP MP-table at [880f0b00] f0b00
 

[Yahoo-eng-team] [Bug 1623813] Re: IPv6 network not shown in metadata

2016-09-15 Thread Dr. Jens Rosenboom
Gah, forgot there is a knob that needs turning in order to enable this.
Might be worth considering changing the default for this nowadays.

** Changed in: nova
   Status: New => Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623813

Title:
  IPv6 network not shown in metadata

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Steps to reproduce:

  - Set up devstack with default settings from master
  - Start an instance using e.g. the default cirros-0.3.4 image
  - The instance will receive both an IPv4 and IPv6 address, but this isn't 
shown in the metadata, either on the configdrive or via http:

  stack@jr-t5:~/devstack$ nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | Networks   |
  
+--+---+++-++
  | dfc9021e-b9d6-4b47-8bcf-d59d6b886a67 | test1 | ACTIVE | -  | 
Running | private=10.1.0.6, fdc0:b675:211f:0:f816:3eff:fe03:5402 |
  
+--+---+++-++

  $ curl 169.254.169.254/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
  59dc4d-d04b-4a18-aab4-57b763c100af"}]} 
  $ cat /mnt/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
  59dc4d-d04b-4a18-aab4-57b763c100af"}]}

  Expected result: The presence of an IPv6 network should be shown in
  the metadata, in order to allow the instance to enable or disable IPv6
  processing accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623813] [NEW] IPv6 address not shown in metadata

2016-09-15 Thread Dr. Jens Rosenboom
Public bug reported:

Steps to reproduce:

- Set up devstack with default settings from master
- Start an instance using e.g. the default cirros-0.3.4 image
- The instance will receive both an IPv4 and IPv6 address, but this isn't shown 
in the metadata, either on the configdrive or via http:

stack@jr-t5:~/devstack$ nova list
+--+---+++-++
| ID   | Name  | Status | Task State | Power 
State | Networks   |
+--+---+++-++
| dfc9021e-b9d6-4b47-8bcf-d59d6b886a67 | test1 | ACTIVE | -  | Running  
   | private=10.1.0.6, fdc0:b675:211f:0:f816:3eff:fe03:5402 |
+--+---+++-++

$ curl 169.254.169.254/openstack/latest/network_data.json;echo
{"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
59dc4d-d04b-4a18-aab4-57b763c100af"}]} 
$ cat /mnt/openstack/latest/network_data.json;echo
{"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
59dc4d-d04b-4a18-aab4-57b763c100af"}]}

Expected result: The presence of an IPv6 network should be shown in the
metadata, in order to allow the instance to enable or disable IPv6
processing accordingly.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Description changed:

  Steps to reproduce:
  
  - Set up devstack with default settings from master
  - Start an instance using e.g. the default cirros-0.3.4 image
  - The instance will receive both an IPv4 and IPv6 address, but this isn't 
shown in the metadata, either on the configdrive or via http:
  
+ ```
  stack@jr-t5:~/devstack$ nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | Networks   |
  
+--+---+++-++
  | dfc9021e-b9d6-4b47-8bcf-d59d6b886a67 | test1 | ACTIVE | -  | 
Running | private=10.1.0.6, fdc0:b675:211f:0:f816:3eff:fe03:5402 |
  
+--+---+++-++
  
  $ curl 169.254.169.254/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
- 59dc4d-d04b-4a18-aab4-57b763c100af"}]} $ cat 
/mnt/openstack/latest/network_data.json;echo
+ 59dc4d-d04b-4a18-aab4-57b763c100af"}]} 
+ $ cat /mnt/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
  59dc4d-d04b-4a18-aab4-57b763c100af"}]}
+ ```
  
  Expected result: The presence of an IPv6 network should be shown in the
  metadata, in order to allow the instance to enable or disable IPv6
  processing accordingly.

** Description changed:

  Steps to reproduce:
  
  - Set up devstack with default settings from master
  - Start an instance using e.g. the default cirros-0.3.4 image
  - The instance will receive both an IPv4 and IPv6 address, but this isn't 
shown in the metadata, either on the configdrive or via http:
  
- ```
  stack@jr-t5:~/devstack$ nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | Networks   |
  
+--+---+++--

[Yahoo-eng-team] [Bug 1459042] Re: Cirros should report IPv6 connectivity when booting

2016-07-15 Thread Dr. Jens Rosenboom
The output in question comes from cloud-init, so I have added that package.
After doing a bit of debugging, there seem to be two issues here:

1. The output is created too early when the IPv6 network is using slaac,
so this means that there is no IPv6 address listed because it takes a
bit longer to show up. If I call cloudinit.netinfo.debug_info() on a
running system after booting has finished, the IPv6 address information
is shown correctly.

2. Routes for IPv6 are never shown correctly. cloud-init uses a call to
"netstat -rn", which only outputs IPv4 routes at least on Xenial.
Running "netstat -rn46" shows routes for both address families, but that
output doesn't seem to get parsed properly.
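
For reference, the check mentioned above can be reproduced manually on a
running instance (module path as in cloud-init 0.7.x):

from cloudinit import netinfo
print(netinfo.debug_info())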

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Summary changed:

- Cirros should report IPv6 connectivity when booting
+ cloud-init fails to report IPv6 connectivity when booting

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cirros
   Status: Confirmed => Invalid

** Description changed:

  It would be convenient to see the IPv6 networking information printed at
  boot, similar to the IPv4 networking information currently is.
+ 
+ Output from the boot log:
+ [   15.621085] cloud-init[1058]: Cloud-init v. 0.7.7 running 'init' at Tue, 
14 Jun 2016 13:48:14 +. Up 6.71 seconds.
+ [   15.622670] cloud-init[1058]: ci-info: Net 
device info+
+ [   15.624106] cloud-init[1058]: ci-info: 
++--++-+---+---+
+ [   15.625516] cloud-init[1058]: ci-info: | Device |  Up  |  Address   | 
Mask| Scope | Hw-Address|
+ [   15.627058] cloud-init[1058]: ci-info: 
++--++-+---+---+
+ [   15.628504] cloud-init[1058]: ci-info: | ens3:  | True | 10.42.0.48 | 
255.255.0.0 |   .   | fa:16:3e:f9:86:07 |
+ [   15.629930] cloud-init[1058]: ci-info: | ens3:  | True | .  |  
.  |   d   | fa:16:3e:f9:86:07 |
+ [   15.631334] cloud-init[1058]: ci-info: |  lo:   | True | 127.0.0.1  |  
255.0.0.0  |   .   | . |
+ [   15.632765] cloud-init[1058]: ci-info: |  lo:   | True | .  |  
.  |   d   | . |
+ [   15.634221] cloud-init[1058]: ci-info: 
++--++-+---+---+
+ [   15.635671] cloud-init[1058]: ci-info: 
+++Route IPv4 info+++
+ [   15.637186] cloud-init[1058]: ci-info: 
+---+-+---+-+---+---+
+ [   15.638682] cloud-init[1058]: ci-info: | Route |   Destination   |  
Gateway  | Genmask | Interface | Flags |
+ [   15.640182] cloud-init[1058]: ci-info: 
+---+-+---+-+---+---+
+ [   15.641657] cloud-init[1058]: ci-info: |   0   | 0.0.0.0 | 
10.42.0.1 | 0.0.0.0 |ens3   |   UG  |
+ [   15.643149] cloud-init[1058]: ci-info: |   1   |10.42.0.0|  
0.0.0.0  |   255.255.0.0   |ens3   |   U   |
+ [   15.644661] cloud-init[1058]: ci-info: |   2   | 169.254.169.254 | 
10.42.0.1 | 255.255.255.255 |ens3   |  UGH  |
+ [   15.646175] cloud-init[1058]: ci-info: 
+---+-+---+-+---+---+
+ 
+ Output from running system:
+ ci-info: +++Net device 
info+++
+ ci-info: 
++---+-+---++---+
+ ci-info: |   Device   |   Up  | Address | 
 Mask | Scope  | Hw-Address|
+ ci-info: 
++---+-+---++---+
+ ci-info: |ens3|  True |10.42.0.44   |  
255.255.0.0  |   .| fa:16:3e:90:11:e0 |
+ ci-info: |ens3|  True | 2a04:3b40:8010:1:f816:3eff:fe90:11e0/64 | 
  .   | global | fa:16:3e:90:11:e0 |
+ ci-info: | lo |  True |127.0.0.1|   
255.0.0.0   |   .| . |
+ ci-info: | lo |  True | ::1/128 | 
  .   |  host  | . |
+ ci-info: 
++---+-+---++---+
+ ci-info: +++Route IPv4 
info+++
+ ci-info: 
+---+-+---+-+---+---+
+ ci-info: | Route |   Destination   |  Gateway  | Genmask | Interface 
| Flags |
+ ci-info: 
+---+-+---+-+---+---+
+ ci-info: |   0   | 0.0.0.0 | 10.42.0.1 | 0.0.0.0 |ens3   
|   UG  |
+ ci-info: |   1   |10.4

[Yahoo-eng-team] [Bug 1576713] Re: Network metadata fails to state correct mtu

2016-07-11 Thread Dr. Jens Rosenboom
** Changed in: nova/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576713

Title:
  Network metadata fails to state correct mtu

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Released

Bug description:
  Scenario:

  Instance is booted on Neutron tenant network with ML2 OVS driver and
  encapsulation. The MTU for that network is automatically calculated as
  1450. Instance has --config-drive=true set.

  Result:

  In /openstack/latest/network_data.json we get:

   "links": [{"ethernet_mac_address": "fa:16:3e:36:96:c8", "mtu": null,
  "type": "ovs", "id": "tapb989c3aa-5c", "vif_id": "b989c3aa-5c1f-
  4d2b-8711-b96c66604902"}]

  Expected:

  Have "mtu": "1450" instead.

  Environment:

  OpenStack Mitaka on Ubuntu 16.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1576713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387812] Re: Hypervisor summary shows incorrect total storage (Ceph)

2016-07-05 Thread Dr. Jens Rosenboom
CONFIRMED FOR: MITAKA

** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387812

Title:
  Hypervisor summary shows incorrect total storage (Ceph)

Status in OpenStack Compute (nova):
  New

Bug description:
  On Horizon UI in Admin/Hypervisors, Disk Usage shows incorrect value.

  Since Ceph is used for ephemeral storage, the summary adds up the Ceph
  capacity seen by each storage node rather than using the real amount of
  Ceph storage.
  When Ceph is used, the sum of the storage sizes should be divided by the
  replication factor of the Ceph storage. (The replication factor is the
  number of times data in the Ceph storage is duplicated.)
  For example, with 3 nodes of 60 GB storage each and a replication factor
  of 2, the total storage is 60 * 3 / 2 = 90 GB.
  But currently the total is calculated as 60 + 60 + 60 = 180 GB. See the
  screenshot (the real size of the storage is 207 TB).
  So if the storage type is Ceph, the information about the storage size
  should be requested directly from Ceph.
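
  The arithmetic from the example above, spelled out (values taken from the
  description):

  node_sizes_gb = [60, 60, 60]
  replication_factor = 2
  reported = sum(node_sizes_gb)                       # 180, shown today
  expected = sum(node_sizes_gb) / replication_factor  # 90, what it should be
  print(reported, expected)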

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598062] [NEW] Unit test fails on python3.5

2016-07-01 Thread Dr. Jens Rosenboom
Public bug reported:

This is similar to https://launchpad.net/bugs/1559191 but in this case
it looks like the embedded error message comes from the jsonschema
library:

==
Failed 1 tests - output below:
==

nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
---

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/home/ubuntu/src/nova/nova/api/validation/validators.py", line 
258, in validate'
b'self.validator.validate(*args, **kwargs)'
b'  File 
"/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/validators.py",
 line 122, in validate'
b'for error in self.iter_errors(*args, **kwargs):'
b'  File 
"/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/validators.py",
 line 98, in iter_errors'
b'for error in errors:'
b'  File 
"/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/_validators.py",
 line 25, in additionalProperties'
b'extras = set(_utils.find_additional_properties(instance, schema))'
b'  File 
"/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/_utils.py",
 line 100, in find_additional_properties'
b'if patterns and re.search(patterns, property):'
b'  File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/re.py", line 173, 
in search'
b'return _compile(pattern, flags).search(string)'
b'TypeError: expected string or bytes-like object'
b''
b'During handling of the above exception, another exception occurred:'
b''
b'Traceback (most recent call last):'
b'  File "/home/ubuntu/src/nova/nova/tests/unit/test_api_validation.py", 
line 101, in check_validation_error'
b'method(body=body, req=req,)'
b'  File "/home/ubuntu/src/nova/nova/api/validation/__init__.py", line 71, 
in wrapper'
b"schema_validator.validate(kwargs['body'])"
b'  File "/home/ubuntu/src/nova/nova/api/validation/validators.py", line 
277, in validate'
b'raise exception.ValidationError(detail=detail)'
b'nova.exception.ValidationError: expected string or bytes-like object'
b''
b'During handling of the above exception, another exception occurred:'
b''
b'Traceback (most recent call last):'
b'  File "/home/ubuntu/src/nova/nova/tests/unit/test_api_validation.py", 
line 359, in test_validate_patternProperties_fails'
b'expected_detail=detail)'
b'  File "/home/ubuntu/src/nova/nova/tests/unit/test_api_validation.py", 
line 106, in check_validation_error'
b"'Exception details did not match expected')"
b'  File 
"/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 411, in assertEqual'
b'self.assertThat(observed, matcher, message)'
b'  File 
"/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat'
b'raise mismatch_error'
b"testtools.matchers._impl.MismatchError: 'expected string or buffer' != 
'expected string or bytes-like object': Exception details did not match 
expected"
b''
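
The failure is only about the wording of the TypeError raised by re.search()
on non-string input, which differs between interpreters; a sketch of a
tolerant check (illustration only, the actual fix may look different):

import re

try:
    re.search("pattern", 1)
except TypeError as exc:
    detail = str(exc)

assert any(detail.startswith(msg) for msg in (
    "expected string or buffer",               # Python 2.7
    "expected string or bytes-like object"))   # Python 3.5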

** Affects: nova
 Importance: Undecided
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dr. Jens Rosenboom (j-rosenboom-j)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598062

Title:
  Unit test fails on python3.5

Status in OpenStack Compute (nova):
  New

Bug description:
  This is similar to https://launchpad.net/bugs/1559191 but in this case
  it looks like the embedded error message comes from the jsonschema
  library:

  ==
  Failed 1 tests - output below:
  ==

  
nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails
  
---

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File "/home/ubuntu/src/nova/nova/api/validation/validators.py", line 
2

[Yahoo-eng-team] [Bug 1594371] [NEW] Docs for keystone recommend deprecated memcache backend

2016-06-20 Thread Dr. Jens Rosenboom
Public bug reported:

At http://docs.openstack.org/developer/keystone/configuration.html
#cache-configuration-section there is a recommendation to use

backend = keystone.cache.memcache_pool

however this seems to be deprecated in the code:

WARNING oslo_log.versionutils [-] Deprecated:
keystone.cache.memcache_pool backend is deprecated as of Mitaka in favor
of oslo_cache.memcache_pool backend and may be removed in N.
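
Until the docs are updated, the non-deprecated setting would presumably be
(section name as in the linked cache configuration docs):

[cache]
backend = oslo_cache.memcache_pool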

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1594371

Title:
  Docs for keystone recommend deprecated memcache backend

Status in OpenStack Identity (keystone):
  New

Bug description:
  At http://docs.openstack.org/developer/keystone/configuration.html
  #cache-configuration-section there is a recommendation to use

  backend = keystone.cache.memcache_pool

  however this seems to be deprecated in the code:

  WARNING oslo_log.versionutils [-] Deprecated:
  keystone.cache.memcache_pool backend is deprecated as of Mitaka in
  favor of oslo_cache.memcache_pool backend and may be removed in N.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1594371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592017] [NEW] ML2 uses global MTU for encapsulation calculation

2016-06-13 Thread Dr. Jens Rosenboom
Public bug reported:

My goal is to achieve an MTU of 1500 for both Vlan and Vxlan based
tenant networks.

So I set the path_mtu to 1550, while leaving global_physnet_mtu at the
default value of 1500.

However, the MTU for my Vxlan networks is still calculated as 1450,
because the underlying MTU is calculated as min(1500,1550)=1500 and then
the encapsulation overhead is subtracted from that.

IMHO the correct calculation would be to subtract the encapsulation
overhead only from the path_mtu if it is specified and possibly take the
minimum of that and global_physnet_mtu.
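
The arithmetic, spelled out (the 50 byte vxlan overhead is implied by the
1500 -> 1450 default):

global_physnet_mtu = 1500
path_mtu = 1550
VXLAN_OVERHEAD = 50

current = min(global_physnet_mtu, path_mtu) - VXLAN_OVERHEAD   # 1450 today
proposed = min(global_physnet_mtu, path_mtu - VXLAN_OVERHEAD)  # 1500 desired
print(current, proposed)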

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592017

Title:
  ML2 uses global MTU for encapsulation calculation

Status in neutron:
  New

Bug description:
  My goal is to achieve an MTU of 1500 for both Vlan and Vxlan based
  tenant networks.

  So I set the path_mtu to 1550, while leaving global_physnet_mtu at the
  default value of 1500.

  However, the MTU for my Vxlan networks is still calculated as 1450,
  because the underlying MTU is calculated as min(1500,1550)=1500 and
  then the encapsulation overhead is subtracted from that.

  IMHO the correct calculation would be to subtract the encapsulation
  overhead only from the path_mtu if it is specified and possibly take
  the minimum of that and global_physnet_mtu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590696] [NEW] neutron-lbaas: Devstack doesn't start agent properly

2016-06-09 Thread Dr. Jens Rosenboom
Public bug reported:

If I run devstack with

enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
ENABLED_SERVICES+=,q-lbaasv1

then the service gets configured, but not started. Using

ENABLED_SERVICES+=,q-lbaas

instead works fine. The reason is that neutron-lbaas/devstack/plugin.sh
does a

run_process q-lbaas ...

and within that function there is another check for "is_enabled q-lbaas"
which is false at that point in the first case.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590696

Title:
  neutron-lbaas: Devstack doesn't start agent properly

Status in neutron:
  New

Bug description:
  If I run devstack with

  enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
  ENABLED_SERVICES+=,q-lbaasv1

  then the service gets configured, but not started. Using

  ENABLED_SERVICES+=,q-lbaas

  instead works fine. The reason is that neutron-
  lbaas/devstack/plugin.sh does a

  run_process q-lbaas ...

  and within that function there is another check for "is_enabled
  q-lbaas" which is false at that point in the first case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590397] [NEW] MTU setting too low when mixing Vlan and Vxlan

2016-06-08 Thread Dr. Jens Rosenboom
Public bug reported:

When booting an instance on a network with encapsulation type vxlan (and
thus an MTU automatically set to 1450), this will also lower the MTU of the
integration bridge to that value:

$ ip link show br-int
6: br-int:  mtu 1450 qdisc noqueue state UNKNOWN mode 
DEFAULT group default 
link/ether 7e:39:32:87:27:44 brd ff:ff:ff:ff:ff:ff

In turn, all newly created ports for routers, DHCP agents and the like,
will also be created with MTU 1450, as they are forked off br-int.

However, this will be incorrect if there are other network types in use
at the same time. So if I boot an instance on a network with type vlan,
the instance interface will still have MTU 1500, but the interface for
its router will have MTU 1450, leading to possible errors.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590397

Title:
  MTU setting too low when mixing Vlan and Vxlan

Status in neutron:
  New

Bug description:
  When booting an instance on a network with encapsulation type vxlan
  (and thus an MTU automatically set to 1450), this will also lower the MTU
  of the integration bridge to that value:

  $ ip link show br-int
  6: br-int:  mtu 1450 qdisc noqueue state UNKNOWN mode 
DEFAULT group default 
  link/ether 7e:39:32:87:27:44 brd ff:ff:ff:ff:ff:ff

  In turn, all newly created ports for routers, DHCP agents and the
  like, will also be created with MTU 1450, as they are forked off br-
  int.

  However, this will be incorrect if there are other network types in
  use at the same time. So if I boot an instance on a network with type
  vlan, the instance interface will still have MTU 1500, but the
  interface for its router will have MTU 1450, leading to possible
  errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532171] Re: lb: hard reboot or destroy of vm can lead to error log and agent resync

2016-06-06 Thread Dr. Jens Rosenboom
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1532171

Title:
  lb: hard reboot or destroy of vm can lead to error log and agent
  resync

Status in neutron:
  Fix Released

Bug description:
  A tap device can suddenly disappear e.g. due to instance destroy. If
the agent is in the midst of processing a device update for this tap
  device (e.g. due to a security group update), the agent logs the
  following errors:

  2016-01-07 17:43:52.225 DEBUG neutron.agent.linux.utils 
[req-07a4fb1d-88fe-40d7-b0fa-f93d1bac8a34 None None] Running command: ['ip', 
'-o', 'link', 'show', 'tapa0084edd-d4'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:84
  2016-01-07 17:43:52.230 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-07a4fb1d-88fe-40d7-b0fa-f93d1bac8a34 None None] Tap device: tapa0084edd-d4 
does not exist on this host, skipped add_tap_interface 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:409
  2016-01-07 17:43:52.230 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-07a4fb1d-88fe-40d7-b0fa-f93d1bac8a34 None None] Setting admin_state_up to 
True for port a0084edd-d437-4ff0-b2e7-7cfd93ea34c4 ensure_port_admin_state 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:686
  2016-01-07 17:43:52.231 DEBUG neutron.agent.linux.utils 
[req-07a4fb1d-88fe-40d7-b0fa-f93d1bac8a34 None None] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'tapa0084edd-d4', 'up'] execute_rootwrap_daemon 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:100
  2016-01-07 17:43:52.263 ERROR neutron.agent.linux.utils 
[req-07a4fb1d-88fe-40d7-b0fa-f93d1bac8a34 None None] Exit code: 1; Stdin: ; 
Stdout: ; Stderr: Cannot find device "tapa0084edd-d4"

  2016-01-07 17:43:52.263 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-07a4fb1d-88fe-40d7-b0fa-f93d1bac8a34 None None] Error in agent loop. 
Devices info: {'current': set(['tap3bbccdeb-0d', 'tap2cbadddb-48', 
'tap2ff01acc-16', 'tap92ccd364-e1', 'tap1b585b2d-f7', 'tap6838b208-7e', 
'tapf03a19db-48', 'tap294b5031-17', 'tapa0084edd-d4', 'tap6457a7f6-65', 
'tap91c29239-c1']), 'removed': set([]), 'added': set([]), 'updated': 
set([u'tapa0084edd-d4'])}
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
Traceback (most recent call last):
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 1191, in daemon_loop
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
sync = self.process_network_devices(device_info)
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 994, in process_network_devices
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
resync_a = self.treat_devices_added_updated(devices_added_updated)
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 1070, in treat_devices_added_updated
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
device_details['admin_state_up'])
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 689, in ensure_port_admin_state
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
ip_lib.IPDevice(tap_name).link.set_up()
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 461, in set_up
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
return self._as_root([], ('set', self.name, 'up'))
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 321, in _as_root
  2016-01-07 17:43:52.263 23166 ERROR 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_

[Yahoo-eng-team] [Bug 1583503] Re: L3 HA broken after network node crash

2016-05-19 Thread Dr. Jens Rosenboom
It seems that this is mainly a bug in how keepalived handles an empty or
otherwise broken pid file. According to the upstream change log, the
issue has been fixed in 1.2.20.

@Ubuntu: Can you look into updating the package or backporting the fix?

** Also affects: keepalived (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- L3 HA broken after network node crash
+ keepalived fails to start when PID file is empty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583503

Title:
  keepalived fails to start when PID file is empty

Status in neutron:
  New
Status in keepalived package in Ubuntu:
  New

Bug description:
  After a crash of a network node, we were left with empty PID files for
  some keepalived processes:

   root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  -rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

  This causes the L3 agent to log the following errors repeating every
  minute:

  2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
  2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process 
[-] Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

  and the keepalived process fails to start. As a result, the routers
  hosted by this agent are non-functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583503] [NEW] L3 HA broken after network node crash

2016-05-19 Thread Dr. Jens Rosenboom
Public bug reported:

After a crash of a network node, we were left with empty PID files for
some keepalived processes:

 root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
-rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

This causes the L3 agent to log the following errors repeating every
minute:

2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process [-] 
Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

and the keepalived process fails to start. As a result, the routers
hosted by this agent are non-functional.
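
For illustration, the "Unable to convert value" error corresponds to the int
conversion of an empty file failing; a tolerant read would look roughly like
this (sketch only, not the actual agent code):

def read_pid(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        # empty or garbled PID file left over after the crash
        return None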

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: keepalived (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583503

Title:
  L3 HA broken after network node crash

Status in neutron:
  New
Status in keepalived package in Ubuntu:
  New

Bug description:
  After a crash of a network node, we were left with empty PID files for
  some keepalived processes:

   root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  -rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

  This causes the L3 agent to log the following errors repeating every
  minute:

  2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
  2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process 
[-] Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

  and the keepalived process fails to start. As a result, the routers
  hosted by this agent are non-functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576713] [NEW] Network metadata fails to state correct mtu

2016-04-29 Thread Dr. Jens Rosenboom
Public bug reported:

Scenario:

Instance is booted on Neutron tenant network with ML2 OVS driver and
encapsulation. The MTU for that network is automatically calculated as
1450. Instance has --config-drive=true set.

Result:

In /openstack/latest/network_data.json we get:

 "links": [{"ethernet_mac_address": "fa:16:3e:36:96:c8", "mtu": null,
"type": "ovs", "id": "tapb989c3aa-5c", "vif_id": "b989c3aa-5c1f-
4d2b-8711-b96c66604902"}]

Expected:

Have "mtu": "1450" instead.

Environment:

OpenStack Mitaka on Ubuntu 16.04
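
A quick way to check the value on the config drive (the /mnt mount point is
an assumption, the file path is as above):

import json

with open('/mnt/openstack/latest/network_data.json') as f:
    links = json.load(f)['links']
print([link.get('mtu') for link in links])  # currently [None], expected [1450]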

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576713

Title:
  Network metadata fails to state correct mtu

Status in OpenStack Compute (nova):
  New

Bug description:
  Scenario:

  Instance is booted on Neutron tenant network with ML2 OVS driver and
  encapsulation. The MTU for that network is automatically calculated as
  1450. Instance has --config-drive=true set.

  Result:

  In /openstack/latest/network_data.json we get:

   "links": [{"ethernet_mac_address": "fa:16:3e:36:96:c8", "mtu": null,
  "type": "ovs", "id": "tapb989c3aa-5c", "vif_id": "b989c3aa-5c1f-
  4d2b-8711-b96c66604902"}]

  Expected:

  Have "mtu": "1450" instead.

  Environment:

  OpenStack Mitaka on Ubuntu 16.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1576713/+subscriptions



[Yahoo-eng-team] [Bug 1573092] [NEW] neutron and python-neutronclient should allow for 32bit ASN

2016-04-21 Thread Dr. Jens Rosenboom
Public bug reported:

Currently there is a limit hardcoded in Neutron that only allows 16-bit
ASNs to be used in the configuration of BGP speakers and peers, i.e. a
range of [1..65535]. But with https://tools.ietf.org/html/rfc6793 it is
possible for BGP implementations to use 32-bit AS numbers, and in fact
some RIRs have already run out of 16-bit ASNs and are only handing out
new ASNs above 65535.

So although the ryu-based reference implementation does not support
this, there may be other agents, e.g. based on ExaBGP, that do support
32-bit ASNs, and it doesn't seem sensible for Neutron to prevent this
upfront.
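
A range check along these lines would cover 4-byte AS numbers as well
(sketch only, not the actual neutron validator):

    # RFC 6793 allows AS numbers up to 2**32 - 1 instead of the current
    # 16-bit cap of 65535.
    ASN_MIN = 1
    ASN_MAX = 2 ** 32 - 1

    def validate_asn(value):
        try:
            asn = int(value)
        except (TypeError, ValueError):
            return False
        return ASN_MIN <= asn <= ASN_MAX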

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573092

Title:
  neutron and python-neutronclient should allow for 32bit ASN

Status in neutron:
  New

Bug description:
  Currently there is a limit hardcoded in Neutron that only allows 16-bit
  ASNs to be used in the configuration of BGP speakers and peers, i.e. a
  range of [1..65535]. But with https://tools.ietf.org/html/rfc6793 it is
  possible for BGP implementations to use 32-bit AS numbers, and in fact
  some RIRs have already run out of 16-bit ASNs and are only handing out
  new ASNs above 65535.

  So although the ryu-based reference implementation does not support
  this, there may be other agents, e.g. based on ExaBGP, that do support
  32-bit ASNs, and it doesn't seem sensible for Neutron to prevent this
  upfront.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573092/+subscriptions



[Yahoo-eng-team] [Bug 1572062] [NEW] nova-consoleauth doesn't play well with memcached

2016-04-19 Thread Dr. Jens Rosenboom
oslo.serialization  2.4.0-2 all
  utilities for serialization, especially JSON - Python 2.x
ii  python-oslo.service  1.8.0-1ubuntu1  all
  library for running OpenStack services - Python 2.x
ii  python-oslo.utils  3.8.0-2 all
  set of utility functions for OpenStack - Python 2.x
ii  python-oslo.versionedobjects 1.8.0-1 all
  deals with DB schema versions and code expectations - Python 2.x
ii  python-oslo.vmware   2.5.0-2 all
  VMware library for OpenStack projects - Python 2.7

** Affects: nova
 Importance: Undecided
 Assignee: Dr. Jens Rosenboom (j-rosenboom-j)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Dr. Jens Rosenboom (j-rosenboom-j)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572062

Title:
  nova-consoleauth doesn't play well with memcached

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When running with the Ubuntu Xenial Mitaka packages, I'm seeing the
  following behaviour:

  Following the release notes at
  http://docs.openstack.org/releasenotes/nova/mitaka.html I remove the
  old option "memcached_servers" and set up a cache section with

  [cache]
  enabled = true
  memcache_servers = host1:11211,host2:11211,host3:11211

  The result is that there are errors logged when creating a token:

  2016-04-19 10:17:33.501 15952 WARNING nova.consoleauth.manager 
[req-ad043de9-616d-4cbb-a93f-fc5e345f1f53 1b0106d4f339406bb7012a83162ba5f2 
e3c253d3e8344a8796e70bc4f96b6166 - - -] Token: 
dd0e7bbc-c593-421f-a56b-52578aaec20e failed to save into memcached.
  2016-04-19 10:17:33.503 15952 WARNING nova.consoleauth.manager 
[req-ad043de9-616d-4cbb-a93f-fc5e345f1f53 1b0106d4f339406bb7012a83162ba5f2 
e3c253d3e8344a8796e70bc4f96b6166 - - -] Instance: 
d3267abb-258d-4b5d-a22e-8e3b0a39905c failed to save into memcached

  Only after a lot of debugging did it turn out that the default backend
  for oslo_cache is dogpile.cache.null, implying that no values get
  cached and token validation always fails. Only after adding

  [cache]
  backend = oslo_cache.memcache_pool

  does console authentication start working. Strangely though, the above
  warning messages are still being logged in the working setup, which
  made debugging this even more difficult.
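
  The behaviour of the null backend is easy to see with dogpile.cache
  directly (illustration only, not nova code):

    from dogpile.cache import make_region

    region = make_region().configure('dogpile.cache.null')
    region.set('token', 'abc')
    print(region.get('token'))  # NO_VALUE: the null backend never stores anything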

  So I suggest the following fixes:

  1. Change the text of the warnings from "failed to save into memcached" to
"failed to save into cache", as with the change to using oslo_cache there may
be other backends in use instead of memcached.
  2. Either override the default of using the null backend, refuse to run
with it, or at the very least give a big fat warning that the configuration
cannot work.
  3. Stop generating the warning messages when the data did in fact get saved
into the cache properly.

  Package versions for reference:
  # dpkg -l | grep nova
  ii  nova-api-metadata  2:13.0.0-0ubuntu2   all  
OpenStack Compute - metadata API frontend
  ii  nova-api-os-compute  2:13.0.0-0ubuntu2   all  
OpenStack Compute - OpenStack Compute API frontend
  ii  nova-cert  2:13.0.0-0ubuntu2   all  
OpenStack Compute - certificate management
  ii  nova-common  2:13.0.0-0ubuntu2   all  
OpenStack Compute - common files
  ii  nova-conductor   2:13.0.0-0ubuntu2   all  
OpenStack Compute - conductor service
  ii  nova-consoleauth 2:13.0.0-0ubuntu2   all  
OpenStack Compute - Console Authenticator
  ii  nova-scheduler   2:13.0.0-0ubuntu2   all  
OpenStack Compute - virtual machine scheduler
  ii  nova-spiceproxy  2:13.0.0-0ubuntu2   all  
OpenStack Compute - spice html5 proxy
  ii  python-nova  2:13.0.0-0ubuntu2   all  
OpenStack Compute Python libraries
  ii  python-novaclient  2:3.3.1-2   all  
client library for OpenStack Compute API - Python 2.7
  # dpkg -l | grep oslo
  ii  python-oslo.cache  1.6.0-2 all  
cache storage for Openstack projects - Python 2.7
  ii  python-oslo.concurrency  3.7.0-2 all  
concurrency and locks for OpenStack projects - Python 2.x
  ii  python-oslo.config   1:3.9.0-3   all  
Common code for Openstack Projects (configuration API) - Python 2.x
  ii  python-os

[Yahoo-eng-team] [Bug 1513894] Re: Discrepancy between API and CLI on how to enable PD

2016-04-18 Thread Dr. Jens Rosenboom
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513894

Title:
  Discrepancy between API and CLI on how to enable PD

Status in neutron:
  Confirmed
Status in python-neutronclient:
  New

Bug description:
  Checking api/v2/attributes.py, a user can provide "prefix_delegation" as
  the subnetpool_id to enable PD, see
  https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py#L376.
  However, the neutron CLI doesn't allow this, since it now retrieves the
  subnetpool before submitting the subnet create, and the lookup of a
  subnetpool id for "prefix_delegation" will not return a valid id. This
  causes a discrepancy between the API and the CLI on how to enable PD.
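
  For reference, the API-side behaviour boils down to a subnet-create request
  body along these lines (sketch; values other than the magic subnetpool_id
  are placeholders):

    subnet_body = {
        "subnet": {
            "network_id": "<network-uuid>",        # placeholder
            "ip_version": 6,
            "subnetpool_id": "prefix_delegation",  # special value enabling PD
        }
    }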

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513894/+subscriptions



[Yahoo-eng-team] [Bug 1567923] Re: Neutron advertises too high MTU for vxlan

2016-04-08 Thread Dr. Jens Rosenboom
Gah, turns out that chef installed a file /etc/neutron/dnsmasq.conf
containing

+dhcp-option=26,1454

which overrides all other options. Sorry to the Neutron folks for the
false alarm, I'll go and fix our cookbook.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567923

Title:
  Neutron advertises too high MTU for vxlan

Status in neutron:
  Invalid

Bug description:
  When creating a tenant network with type vxlan, the MTU is
  automatically set to 1450:

  # neutron net-show net2
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones| nova |
  | created_at| 2016-04-08T11:11:42  |
  | description   |  |
  | id| f44e9e2c-8a60-46c7-98bb-3f1824fc09e9 |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | net2 |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 65633|
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 288021c1-7073-41c5-a233-529226971dd3 |
  | tags  |  |
  | tenant_id | e3c253d3e8344a8796e70bc4f96b6166 |
  | updated_at| 2016-04-08T11:11:42  |
  +---+--+

  This is the maximum Ethernet MTU possible for the tenant assuming the
  encapsulated packet has to fit into an IP MTU of 1500 on the tunnel
  network.

  Now neutron tells DHCP to set --dhcp-option-force=option:mtu,1450 but
  the DHCP option refers to the IP layer MTU, see
  https://tools.ietf.org/html/rfc2132 section 5.1. So a correctly
  behaving client will set its interface (Ethernet) MTU to 1454,
  implying an IP MTU of 1450.

  But now it will send Ethernet frames of size 1454, which encapsulated
  have size 1504, and thus get dropped on the tunnel network.

  The correct behaviour here would be to advertise an MTU reduced by 4
  via DHCP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567923/+subscriptions



[Yahoo-eng-team] [Bug 1567923] [NEW] Neutron advertises too high MTU for vxlan

2016-04-08 Thread Dr. Jens Rosenboom
Public bug reported:

When creating a tenant network with type vxlan, the MTU is automatically
set to 1450:

# neutron net-show net2
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2016-04-08T11:11:42  |
| description   |  |
| id| f44e9e2c-8a60-46c7-98bb-3f1824fc09e9 |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | net2 |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 65633|
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | 288021c1-7073-41c5-a233-529226971dd3 |
| tags  |  |
| tenant_id | e3c253d3e8344a8796e70bc4f96b6166 |
| updated_at| 2016-04-08T11:11:42  |
+---+--+

This is the maximum Ethernet MTU possible for the tenant assuming the
encapsulated packet has to fit into an IP MTU of 1500 on the tunnel
network.

Now neutron tells DHCP to set --dhcp-option-force=option:mtu,1450 but
the DHCP option refers to the IP layer MTU, see
https://tools.ietf.org/html/rfc2132 section 5.1. So a correctly behaving
client will set its interface (Ethernet) MTU to 1454, implying an IP MTU
of 1450.

But now it will send Ethernet frames of size 1454, which encapsulated
have size 1504, and thus get dropped on the tunnel network.

The correct behaviour here would be to advertise an MTU reduced by 4 via
DHCP.
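
A rough check of the framing described above, assuming the usual 50 bytes of
VXLAN-over-IPv4 encapsulation overhead:

    VXLAN_OVERHEAD = 50      # assumed encapsulation overhead, matching the numbers above
    UNDERLAY_IP_MTU = 1500

    for advertised_mtu in (1450, 1454):
        encapsulated = advertised_mtu + VXLAN_OVERHEAD
        status = 'fits' if encapsulated <= UNDERLAY_IP_MTU else 'dropped'
        print(advertised_mtu, '->', encapsulated, status)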

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567923

Title:
  Neutron advertises too high MTU for vxlan

Status in neutron:
  New

Bug description:
  When creating a tenant network with type vxlan, the MTU is
  automatically set to 1450:

  # neutron net-show net2
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones| nova |
  | created_at| 2016-04-08T11:11:42  |
  | description   |  |
  | id| f44e9e2c-8a60-46c7-98bb-3f1824fc09e9 |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | net2 |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 65633|
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 288021c1-7073-41c5-a233-529226971dd3 |
  | tags  |  |
  | tenant_id | e3c253d3e8344a8796e70bc4f96b6166 |
  | updated_at| 2016-04-08T11:11:42  |
  +---+--+

  This is the maximum Ethernet MTU possible for the tenant assuming the
  encapsulated packet has to fit into an IP MTU of 1500 on the tunnel
  network.

  Now neutron tells DHCP to set --dhcp-option-force=option:mtu,1450 but
  the DHCP option refers to the IP layer MTU, see
  https://tools.ietf.org/html/rfc2132 section 5.1. So a correctly
  behaving client will set its interface (Ethernet) MTU to 1454,
  implying an IP M

[Yahoo-eng-team] [Bug 1555042] [NEW] Neutron HA should allow min_l3_agents_per_router to equal one

2016-03-09 Thread Dr. Jens Rosenboom
Public bug reported:

As an operator, when I am running a setup with two network nodes, the
idea of running L3 HA is that an outage of one of the network nodes
should have minimum customer impact. With the current code, existing
setups will indeed have little to no impact, but customers will not be
able to create new routers during the outage.

If neutron allowed setting min_l3_agents_per_router=1, new routers
could be created even when just one agent is available, which is
certainly not optimal but at least fulfills the customer request. Once
the second network node recovers, the second router instance will be
added and redundancy thus restored.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555042

Title:
  Neutron HA should allow min_l3_agents_per_router to equal one

Status in neutron:
  New

Bug description:
  As an operator, when I am running a setup with two network nodes, the
  idea of running L3 HA is that an outage of one of the network nodes
  should have minimum customer impact. With the current code, existing
  setups will indeed have little to no impact, but customers will not be
  able to create new routers during the outage.

  If neutron allowed setting min_l3_agents_per_router=1, new routers
  could be created even when just one agent is available, which is
  certainly not optimal but at least fulfills the customer request. Once
  the second network node recovers, the second router instance will be
  added and redundancy thus restored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555042/+subscriptions



[Yahoo-eng-team] [Bug 1423165] Re: https: client can cause nova/cinder to leak sockets for 'get' 'show' 'delete' 'update'

2015-03-13 Thread Dr. Jens Rosenboom
Nova stable/juno is still affected by this issue, since the fix is not
available there currently due to the version cap on python-glanceclient.

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423165

Title:
  https: client can cause nova/cinder to leak sockets for 'get' 'show'
  'delete' 'update'

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Glance:
  Fix Released

Bug description:
  
  Other OpenStack services which instantiate an 'https' glanceclient using
  ssl_compression=False and insecure=False (e.g. Nova, Cinder) are leaking
  sockets because glanceclient does not close the connection to the Glance
  server.
  
  This can happen for a subset of calls, e.g. 'show', 'delete', 'update'.
  
  netstat -nopd would show the sockets would hang around forever:
  
  ... 127.0.0.1:9292  ESTABLISHED 9552/python  off (0.00/0/0)
  
  urllib's ConnectionPool relies on the garbage collector to tear down
  sockets which are no longer in use. The 'verify_callback' function used to
  validate SSL certs was holding a reference to the VerifiedHTTPSConnection
  instance, which prevented the sockets from being torn down.
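
  A toy illustration (not glanceclient code) of how a callback that closes
  over the connection object keeps it alive until the cyclic collector runs:

    import gc
    import weakref

    class Connection(object):
        def __init__(self):
            # The callback closes over self: connection -> callback -> connection.
            self.verify_callback = lambda cert: self.verify(cert)

        def verify(self, cert):
            return True

    conn = Connection()
    ref = weakref.ref(conn)
    del conn
    print(ref() is not None)  # True: refcounting alone cannot free the cycle
    gc.collect()
    print(ref() is None)      # True: only the cyclic collector reclaims it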

  
  --

  to reproduce, set up devstack with nova talking to glance over https (must be 
performing full cert verification) and
  perform a nova operation such as:

  
   $ nova image-meta 53854ea3-23ed-4682-abf7-8415f2d6b7d9 set foo=bar

  you will see connections from nova to glance which have no timeout
  (off):

   $ netstat -nopd | grep 9292

   tcp0  0 127.0.0.1:34204 127.0.0.1:9292
  ESTABLISHED 9552/python  off (0.00/0/0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1423165/+subscriptions



[Yahoo-eng-team] [Bug 1407685] [NEW] New eventlet library breaks nova-manage

2015-01-05 Thread Dr. Jens Rosenboom
Public bug reported:

This only affects stable/juno and stable/icehouse, which still use the
deprecated eventlet.util module:

~# nova-manage service list
2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] Could not load 
'file': cannot import name util
2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] cannot import name 
util
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension Traceback (most recent 
call last):
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py",
 line 162, in _load_plugins
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension verify_requirements,
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py",
 line 178, in _load_one_plugin
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension plugin = 
ep.load(require=verify_requirements)
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2306, in load
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension return self._load()
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2309, in _load
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension module = 
__import__(self.module_name, fromlist=['__name__'], level=0)
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/image/download/file.py",
 line 23, in 
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension import 
nova.virt.libvirt.utils as lv_utils
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py",
 line 15, in 
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from 
nova.virt.libvirt import driver
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 59, in 
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from eventlet 
import util as eventlet_util
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension ImportError: cannot 
import name util
2015-01-05 13:13:11.202 29016 TRACE stevedore.extension
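
A possible stop-gap on the nova side would be to guard the deprecated import
(sketch only; newer releases no longer import eventlet.util at all):

    try:
        from eventlet import util as eventlet_util  # removed in newer eventlet
    except ImportError:
        eventlet_util = None  # callers must cope with the module being absent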

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407685

Title:
  New eventlet library breaks nova-manage

Status in OpenStack Compute (Nova):
  New

Bug description:
  This only affects stable/juno and stable/icehouse, which still use the
  deprecated eventlet.util module:

  ~# nova-manage service list
  2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] Could not load 
'file': cannot import name util
  2015-01-05 13:13:11.202 29016 ERROR stevedore.extension [-] cannot import 
name util
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension Traceback (most 
recent call last):
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py",
 line 162, in _load_plugins
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension 
verify_requirements,
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/stevedore/extension.py",
 line 178, in _load_one_plugin
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension plugin = 
ep.load(require=verify_requirements)
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2306, in load
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension return 
self._load()
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/pkg_resources/__init__.py",
 line 2309, in _load
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension module = 
__import__(self.module_name, fromlist=['__name__'], level=0)
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/image/download/file.py",
 line 23, in 
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension import 
nova.virt.libvirt.utils as lv_utils
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension   File 
"/opt/cloudbau/nova-virtualenv/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py",
 line 15, in 
  2015-01-05 13:13:11.202 29016 TRACE stevedore.extension from 
nova.virt.libvirt import driver
  2015-01-05 13:13:11.202 29016 TRACE stevedore.exten

[Yahoo-eng-team] [Bug 1360139] Re: Live migration hosts dropbox should contain service host

2014-11-29 Thread Dr. Jens Rosenboom
*** This bug is a duplicate of bug 1335999 ***
https://bugs.launchpad.net/bugs/1335999

** This bug has been marked a duplicate of bug 1335999
   live migration choosing wrong host names

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1360139

Title:
  Live migration hosts dropbox should contain service host

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  To populate the available hosts in the live migration form, Horizon
  uses the hypervisor hostname, whereas the Nova API wants the service
  host, which is not the same thing.

  
  openstack_dashboard/dashboards/admin/instances/forms.py:

    def populate_host_choices(self, request, initial):
        hosts = initial.get('hosts')
        current_host = initial.get('current_host')
        host_list = [(host.hypervisor_hostname,
                      host.hypervisor_hostname)
                     for host in hosts
                     if host.service['host'] != current_host]
        ...
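
  A sketch of the change the description implies, using the service host for
  both the value and the label (attribute names as in the snippet above):

    host_list = [(host.service['host'],
                  host.service['host'])
                 for host in hosts
                 if host.service['host'] != current_host]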

  More details:
  =============

  When using libvirt, the latter, when asked for its hostname, i.e. "$ virsh
  hostname", will always return an FQDN
  (https://github.com/c4milo/libvirt/blob/0eac9d1e90fc3388030c6109aeb1f4860f108054/src/libvirt.c#L1610),
  which is what nova will set as hypervisor_hostname
  (https://github.com/openstack/nova/blob/f0a43555b79fe3161933bc01d6bb79f1e622bec2/nova/virt/libvirt/driver.py#L4008)
  in the NovaCompute table
  (https://github.com/openstack/nova/blob/f0a43555b79fe3161933bc01d6bb79f1e622bec2/nova/db/sqlalchemy/models.py#L97).

  On the other hand, the service (nova-manage service list) will return the
  host as the hostname of the compute node, i.e. "$ hostname"
  (https://github.com/openstack/nova/blob/f0a43555b79fe3161933bc01d6bb79f1e622bec2/nova/db/sqlalchemy/models.py#L67).

  On platforms that make a difference between hostname and FQDN, i.e. where
  "$ hostname" and "$ hostname -f" return different things, the two fields
  will differ: service.host != compute_node.hypervisor_hostname.

  As for the Nova API, it accepts the service.host; at least that is how it
  validates the destination host passed
  (https://github.com/openstack/nova/blob/f0a43555b79fe3161933bc01d6bb79f1e622bec2/nova/conductor/tasks/live_migrate.py#L85
  https://github.com/openstack/nova/blob/f0a43555b79fe3161933bc01d6bb79f1e622bec2/nova/db/sqlalchemy/api.py#L488)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1360139/+subscriptions



[Yahoo-eng-team] [Bug 1388143] [NEW] [sahara] Quickstart guide is missing definition for SAHARA_URL

2014-10-31 Thread Dr. Jens Rosenboom
Public bug reported:

The quickstart document
http://docs.openstack.org/developer/sahara/devref/quickstart.html uses
$SAHARA_URL without defining it. From looking at what the cli client
does, it should be set to:

SAHARA_URL=http://127.0.0.1:8386/v1.0/$TENANT_ID

and the installation for me seems to work after that.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388143

Title:
  [sahara] Quickstart guide is missing definition for SAHARA_URL

Status in OpenStack Compute (Nova):
  New

Bug description:
  The quickstart document
  http://docs.openstack.org/developer/sahara/devref/quickstart.html uses
  $SAHARA_URL without defining it. From looking at what the cli client
  does, it should be set to:

  SAHARA_URL=http://127.0.0.1:8386/v1.0/$TENANT_ID

  and the installation for me seems to work after that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388143/+subscriptions



[Yahoo-eng-team] [Bug 1388132] [NEW] [compute] Ceph client key missing in libvirt apparmor profile

2014-10-31 Thread Dr. Jens Rosenboom
Public bug reported:

This happens when booting an instance while nova has ceph backend
enabled:

Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770442] type=1400 
audit(1414764419.818:29): apparmor="DENIED" operation="open" 
profile="libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c" name="/tmp/" pid=25660 
comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=112 ouid=0
Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770454] type=1400 
audit(1414764419.818:30): apparmor="DENIED" operation="open" 
profile="libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c" name="/var/tmp/" 
pid=25660 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=112 
ouid=0
Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.776679] type=1400 
audit(1414764419.826:31): apparmor="DENIED" operation="open" 
profile="libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c" 
name="/etc/ceph/ceph.client.cindy.keyring" pid=25660 comm="qemu-system-x86" 
requested_mask="r" denied_mask="r" fsuid=112 ouid=1000

The keyring should not be used at all, since the secret is defined as
virsh secret.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388132

Title:
  [compute] Ceph client key missing in libvirt apparmor profile

Status in OpenStack Compute (Nova):
  New

Bug description:
  This happens when booting an instance while nova has ceph backend
  enabled:

  Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770442] type=1400 
audit(1414764419.818:29): apparmor="DENIED" operation="open" 
profile="libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c" name="/tmp/" pid=25660 
comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=112 ouid=0
  Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770454] type=1400 
audit(1414764419.818:30): apparmor="DENIED" operation="open" 
profile="libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c" name="/var/tmp/" 
pid=25660 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=112 
ouid=0
  Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.776679] type=1400 
audit(1414764419.826:31): apparmor="DENIED" operation="open" 
profile="libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c" 
name="/etc/ceph/ceph.client.cindy.keyring" pid=25660 comm="qemu-system-x86" 
requested_mask="r" denied_mask="r" fsuid=112 ouid=1000

  The keyring should not be used at all, since the secret is defined as
  virsh secret.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388132/+subscriptions



[Yahoo-eng-team] [Bug 1387808] [NEW] Documentation for installation lists [glance_store] section for wrong config file

2014-10-30 Thread Dr. Jens Rosenboom
Public bug reported:

In the installation guide at http://docs.openstack.org/juno/install-
guide/install/apt/content/glance-install.html section 3.c asks for
[glance_store] parameters to be added to /etc/glance/glance-
registry.conf. These really belong to glance-api.conf however, i.e. that
should become section 2.c.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1387808

Title:
  Documentation for installation lists [glance_store] section for wrong
  config file

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In the installation guide at http://docs.openstack.org/juno/install-
  guide/install/apt/content/glance-install.html section 3.c asks for
  [glance_store] parameters to be added to /etc/glance/glance-
  registry.conf. These really belong to glance-api.conf however, i.e.
  that should become section 2.c.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1387808/+subscriptions



[Yahoo-eng-team] [Bug 1361840] [NEW] nova boot fails with rbd backend

2014-08-26 Thread Jens Rosenboom
Public bug reported:

Booting a VM in a plain devstack setup with ceph enabled, I get an error
like:

libvirtError: internal error: process exited while connecting to
monitor: qemu-system-x86_64: -drive file=rbd:vmz/27dcd57f-948f-410c-
830f-
48d8fda0d968_disk.config:id=cindy:key=AQA00PxTiFa0MBAAQ9Uq9IVtBwl/pD8Fd9MWZw==:auth_supported=cephx\;none:mon_host=192.168.122.76\:6789,if=none,id
=drive-ide0-1-1,readonly=on,format=raw,cache=writeback: error reading
header from 27dcd57f-948f-410c-830f-48d8fda0d968_disk.config

even though config_drive is set to false.

This seems to be related to https://review.openstack.org/#/c/112014/, if
I revert ecce888c469c62374a3cc43e3cede11d8aa1e799 everything works fine.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: rbd

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361840

Title:
  nova boot fails with rbd backend

Status in OpenStack Compute (Nova):
  New

Bug description:
  Booting a VM in a plain devstack setup with ceph enabled, I get an
  error like:

  libvirtError: internal error: process exited while connecting to
  monitor: qemu-system-x86_64: -drive file=rbd:vmz/27dcd57f-948f-410c-
  830f-
  
48d8fda0d968_disk.config:id=cindy:key=AQA00PxTiFa0MBAAQ9Uq9IVtBwl/pD8Fd9MWZw==:auth_supported=cephx\;none:mon_host=192.168.122.76\:6789,if=none,id
  =drive-ide0-1-1,readonly=on,format=raw,cache=writeback: error reading
  header from 27dcd57f-948f-410c-830f-48d8fda0d968_disk.config

  even though config_drive is set to false.

  This seems to be related to https://review.openstack.org/#/c/112014/,
  if I revert ecce888c469c62374a3cc43e3cede11d8aa1e799 everything works
  fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361840/+subscriptions



[Yahoo-eng-team] [Bug 1352595] [NEW] nova boot fails when using rbd backend

2014-08-04 Thread Jens Rosenboom
Public bug reported:

Trace ends with:

TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3]  
File "/opt/stack/nova/nova/virt/libvirt/rbd.py", line 238, in exists
TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] 
except rbd.ImageNotFound:
TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] 
AttributeError: 'module' object has no attribute 'ImageNotFound'

It looks like the above module tries to do a "import rbd" and ends up
importing itself again instead of the global library module.

A quick fix would be renaming the file to rbd2.py and changing the
references in driver.py and imagebackend.py, but maybe there is a better
solution?
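
One such alternative (sketch, for the Python 2 module layout described
above) is to force absolute imports, so that "import rbd" resolves to the
librbd binding rather than to the module importing itself:

    # At the top of nova/virt/libvirt/rbd.py:
    from __future__ import absolute_import

    import rbd  # now the system librbd binding, not this module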

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352595

Title:
  nova boot fails when using rbd backend

Status in OpenStack Compute (Nova):
  New

Bug description:
  Trace ends with:

  TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3]  
File "/opt/stack/nova/nova/virt/libvirt/rbd.py", line 238, in exists
  TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3]   
  except rbd.ImageNotFound:
  TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] 
AttributeError: 'module' object has no attribute 'ImageNotFound'

  It looks like the above module tries to do a "import rbd" and ends up
  importing itself again instead of the global library module.

  A quick fix would be renaming the file to rbd2.py and changing the
  references in driver.py and imagebackend.py, but maybe there is a
  better solution?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352595/+subscriptions
