[Yahoo-eng-team] [Bug 1838438] Re: Rename custom theme on drop down menu

2019-09-22 Thread Xav Paice
Added the Juju openstack-dashboard charm.

The help text for the theme options needs some clarity. Currently
default-theme says "this setting is mutually exclusive to ubuntu-theme",
ubuntu-theme doesn't mention custom-theme or default-theme at all, and
custom-theme says "this setting is mutually exclusive to ubuntu-theme
and default-theme".

How should we set a dashboard to use the custom theme and not have
options for other themes?

** Also affects: charm-openstack-dashboard
   Importance: Undecided
   Status: New

** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1838438

Title:
  Rename custom theme on drop down menu

Status in OpenStack openstack-dashboard charm:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When the dashboard has a custom theme, it is shown on the drop down
  menu as "custom".

  Is it possible to change the name of the theme from "custom", or can
  this feature be added?
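
  For reference, in recent Horizon releases the label shown in the drop
  down comes from the AVAILABLE_THEMES setting, where each entry is a
  (name, label, path) tuple.  A minimal sketch, assuming the charm
  templates this into local_settings.py ("My Company" is a hypothetical
  label):

    AVAILABLE_THEMES = [
        ('default', 'Default', 'themes/default'),
        ('material', 'Material', 'themes/material'),
        # Hypothetical: relabel the installed custom theme so the drop
        # down shows "My Company" instead of "custom".
        ('custom', 'My Company', 'themes/custom'),
    ]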

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1838438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818671] Re: Openstack usage list not showing all projects

2019-09-01 Thread Xav Paice
Seeing this in two different clouds, when operating as the admin user.

Client versions:  Tested with both 3.14.0 and 3.18.0

Nova is version 17.0.9 (package 17.0.9-0ubuntu1~cloud0) and 17.0.5
(package 17.0.5-0ubuntu1~cloud0).

Note that we get a bunch of results, but not a complete list - it's as
if there's some pagination, but no message to tell us, nor a logical
point at which the list splits.  At one site we get 15 results, at
another 508.

One of these clouds has two sites, with the same projects in each
site.  One site returns 24 rows, the other 15.  The site that returns
more rows runs Nova 17.0.7 and client 3.16.2 (although I've tried this
client elsewhere and it doesn't seem to make any difference).

** Changed in: nova
   Status: Expired => New

** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818671

Title:
  Openstack usage list not showing all projects

Status in OpenStack Compute (nova):
  New

Bug description:
  In a customer environment running nova 2:17.0.5-0ubuntu1~cloud0,
  when querying the project usage list, the most recent projects are
  not listed in the reply.

  Example:

  $ openstack usage list --print-empty --start 2019-01-01 --end 2019-02-01

  Not showing any information about project
  a897ea83f01c436e82e13a4306fa5ef0

  But querying for the usage of the specific project, we can retrieve
  the results:

  openstack usage show --project a897ea83f01c436e82e13a4306fa5ef0 --start 2019-01-01 --end 2019-02-01
  Usage from 2019-01-01 to 2019-02-01 on project a897ea83f01c436e82e13a4306fa5ef0:
  +---------------+------------+
  | Field         | Value      |
  +---------------+------------+
  | CPU Hours     | 528.3      |
  | Disk GB-Hours | 10566.07   |
  | RAM MB-Hours  | 2163930.45 |
  | Servers       | 43         |
  +---------------+------------+

  As a workaround we are able to get the project UUIDs like this:
  projects_uuid=$(openstack project list | grep -v ID | awk '{print $2}')

  And iterate over them to get each project's usage individually:

  for prog in $projects_uuid; do openstack project show $prog; openstack usage show --project $prog --start 2019-01-01 --end 2019-02-01; done
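
  The same comparison can be scripted end to end.  A minimal sketch
  using novaclient and keystoneclient (assumes admin credentials in the
  usual OS_* environment variables; names and structure are
  illustrative, not the nova implementation):

    import datetime
    import os

    from keystoneauth1 import loading, session
    from keystoneclient.v3 import client as ks_client
    from novaclient import client as nova_client

    # Build a session from the usual OS_* environment variables.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url=os.environ['OS_AUTH_URL'],
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
        project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
    sess = session.Session(auth=auth)

    keystone = ks_client.Client(session=sess)
    nova = nova_client.Client('2.1', session=sess)

    start = datetime.datetime(2019, 1, 1)
    end = datetime.datetime(2019, 2, 1)

    # Project IDs that appear in the aggregate usage listing...
    listed = {u.tenant_id for u in nova.usage.list(start, end, detailed=True)}

    # ...compared with a per-project query for every project keystone
    # knows about, to spot projects the listing silently dropped.
    for project in keystone.projects.list():
        if project.id not in listed:
            usage = nova.usage.get(project.id, start, end)
            if getattr(usage, 'server_usages', None):
                print('missing from usage list: %s (%s)'
                      % (project.id, project.name))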

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784342] Re: AttributeError: 'Subnet' object has no attribute '_obj_network_id'

2019-05-02 Thread Xav Paice
Subscribed field-high and added the Ubuntu Neutron package, since this
has occurred in multiple production sites.

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784342

Title:
  AttributeError: 'Subnet' object has no attribute '_obj_network_id'

Status in neutron:
  Confirmed
Status in neutron package in Ubuntu:
  New

Bug description:
  Running Rally caused subnets to be created without a network_id,
  causing this AttributeError.

  OpenStack Queens RDO packages
  [root@controller1 ~]# rpm -qa | grep -i neutron
  python-neutron-12.0.2-1.el7.noarch
  openstack-neutron-12.0.2-1.el7.noarch
  python2-neutron-dynamic-routing-12.0.1-1.el7.noarch
  python2-neutron-lib-1.13.0-1.el7.noarch
  openstack-neutron-dynamic-routing-common-12.0.1-1.el7.noarch
  python2-neutronclient-6.7.0-1.el7.noarch
  openstack-neutron-bgp-dragent-12.0.1-1.el7.noarch
  openstack-neutron-common-12.0.2-1.el7.noarch
  openstack-neutron-ml2-12.0.2-1.el7.noarch

  
  MariaDB [neutron]> select project_id, id, name, network_id, cidr from subnets where network_id is null;
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  | project_id                       | id                                   | name                      | network_id | cidr        |
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  | b80468629bc5410ca2c53a7cfbf002b3 | 7a23c72b-3df8-4641-a494-af7642563c8e | s_rally_1e4bebf1_1s3IN6mo | NULL       | 1.9.13.0/24 |
  | b80468629bc5410ca2c53a7cfbf002b3 | f7a57946-4814-477a-9649-cc475fb4e7b2 | s_rally_1e4bebf1_qWSFSMs9 | NULL       | 1.5.20.0/24 |
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+

  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation [req-c921b9fb-499b-41c1-9103-93e71a70820c b6b96932bbef41fdbf957c2dc01776aa 050c556faa5944a8953126c867313770 - default default] GET failed.: AttributeError: 'Subnet' object has no attribute '_obj_network_id'
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/pecan/core.py", line 678, in __call__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     self.invoke_controller(controller, args, kwargs, state)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/pecan/core.py", line 569, in invoke_controller
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     result = controller(*args, **kwargs)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 91, in wrapped
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     setattr(e, '_RETRY_EXCEEDED', True)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 87, in wrapped
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     ectxt.value = e.inner_exc
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File

[Yahoo-eng-team] [Bug 1731595] [NEW] L3 HA: multiple agents are active at the same time

2017-11-10 Thread Xav Paice
Public bug reported:

OS: Xenial, Ocata from Ubuntu Cloud Archive
We have three neutron-gateway hosts, with L3 HA enabled and a min of 2, max of 
3.  There are approx. 400 routers defined.

At some point (we weren't monitoring exactly when) a number of the
routers changed from having one agent active and one or more on
standby, to more than one active.  Each of the 'active' namespaces had
the same IP addresses allocated, which caused traffic problems reaching
instances.

Removing the routers from all but one agent, and re-adding, resolved the
issue.  Restarting one l3 agent also appeared to resolve the issue, but
very slowly, to the point where we needed the system alive again faster
and reverted to removing/re-adding.

At the same time, a number of routers were listed without any agents
active at all.  This situation appears to have been resolved by adding
routers to agents, after several minutes of downtime.

I'm finding it very difficult to find relevant keepalived messages to
indicate what's going on, but what I do notice is that all the agents
have equal priority and are configured as 'backup'.

I am trying to figure out a way to get a reproducer of this; it might
be that we need a large number of routers configured on a small number
of gateways.
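
A quick way to spot affected routers is to ask neutron which agents
host each router and what HA state they report; with L3 HA exactly one
agent per router should report 'active'.  A minimal sketch using
python-neutronclient (assumes admin credentials in the usual OS_*
environment variables):

    import os

    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client as neutron_client

    # Build a session from the usual OS_* environment variables.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url=os.environ['OS_AUTH_URL'],
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
        project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    # Flag any router where the number of 'active' agents is not 1:
    # more than one matches the duplicate-IP symptom, zero matches the
    # no-active-agent symptom.
    for router in neutron.list_routers()['routers']:
        agents = neutron.list_l3_agent_hosting_routers(router['id'])['agents']
        active = [a['host'] for a in agents if a.get('ha_state') == 'active']
        if len(active) != 1:
            print('%s (%s): active on %s'
                  % (router['id'], router['name'], active or 'no agent'))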

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1731595

Title:
  L3 HA: multiple agents are active at the same time

Status in neutron:
  New

Bug description:
  OS: Xenial, Ocata from Ubuntu Cloud Archive
  We have three neutron-gateway hosts, with L3 HA enabled and a min of 2, max 
of 3.  There are approx. 400 routers defined.

  At some point (we weren't monitoring exactly when) a number of the
  routers changed from having one agent active and one or more on
  standby, to more than one active.  Each of the 'active' namespaces
  had the same IP addresses allocated, which caused traffic problems
  reaching instances.

  Removing the routers from all but one agent, and re-adding, resolved
  the issue.  Restarting one l3 agent also appeared to resolve the
  issue, but very slowly, to the point where we needed the system alive
  again faster and reverted to removing/re-adding.

  At the same time, a number of routers were listed without any agents
  active at all.  This situation appears to have been resolved by adding
  routers to agents, after several minutes of downtime.

  I'm finding it very difficult to find relevant keepalived messages to
  indicate what's going on, but what I do notice is that all the agents
  have equal priority and are configured as 'backup'.

  I am trying to figure out a way to get a reproducer of this; it might
  be that we need a large number of routers configured on a small
  number of gateways.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1731595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1722293] Re: Keystone not removing mapping between deleted LDAP user and Openstack

2017-10-10 Thread Xav Paice
Adding the charm, because there may be a more unique field we can use
than uid, given this behaviour with re-use of UIDs.

** Project changed: keystone => charm-keystone-ldap

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1722293

Title:
  Keystone not removing mapping between deleted LDAP user and Openstack

Status in OpenStack Keystone LDAP integration:
  New
Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone not removing mapping between deleted LDAP user and Openstack

  The client is using LDAP for authentication and has used uid as the
  key for user_id_attribute. The client created an LDAP user, say ABC
  with UID=100; this user is associated with an OpenStack user ABC. The
  relationship is recorded in the id_mapping table within the keystone
  database.

  Now when the client deletes the LDAP user ABC, the entry is not
  deleted from the id_mapping table. Thus when the client creates a new
  LDAP user XYZ which gets the same UID=100, the stale record in
  id_mapping prevents the new user XYZ from authenticating and
  successfully logging on to OpenStack.

  Note: there is no record for XYZ within the id_mapping table.

  Details of domain config:

  # User supplied configuration flags
  user_filter = (memberof=cn=xxx,ou=Group,dc=xxx,dc=xxx)
  user_id_attribute = uidNumber
  user_name_attribute = uid
  user_objectclass = posixAccount
  user_tree_dn = ou=x,dc=xxx,dc=xx
  [identity]
  driver = ldap

  Table Description

  mysql> desc id_mapping;
  +-------------+----------------------+------+-----+---------+-------+
  | Field       | Type                 | Null | Key | Default | Extra |
  +-------------+----------------------+------+-----+---------+-------+
  | public_id   | varchar(64)          | NO   | PRI | NULL    |       |
  | domain_id   | varchar(64)          | NO   | MUL | NULL    |       |
  | local_id    | varchar(64)          | NO   |     | NULL    |       |
  | entity_type | enum('user','group') | NO   |     | NULL    |       |
  +-------------+----------------------+------+-----+---------+-------+
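
  For illustration: with keystone's default id generator, the public_id
  is a deterministic hash of the mapping, which is why a re-used
  uidNumber collides with the stale row.  A rough sketch of the idea
  (field ordering here is illustrative, not the exact implementation):

    import hashlib

    def generate_public_id(domain_id, local_id, entity_type='user'):
        # The public_id is derived from the mapping fields, so a new
        # LDAP user that inherits UID=100 hashes to the same value,
        # and the stale id_mapping row keeps resolving to the old user.
        m = hashlib.sha256()
        for value in (domain_id, entity_type, local_id):
            m.update(value.encode('utf-8'))
        return m.hexdigest()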

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-keystone-ldap/+bug/1722293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648206] Re: sriov agent report_state is slow

2017-01-09 Thread Xav Paice
Added Ubuntu Cloud Archive to get this fix ported into the current
packages - 8.3.0-0ubuntu1.1 doesn't have this patch.

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648206

Title:
  sriov agent report_state is slow

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released

Bug description:
  On a system with lots of VFs and PFs we get these logs:

  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 29.67 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 45.43 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 47.64 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 23.89 sec
  WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state' run outlasted interval by 30.20 sec

  
  Depending on the agent_down_time configuration, this can cause the
  Neutron server to think the agent has died.

  This appears to be caused by blocking on the eswitch manager every
  time to get a device count to include in the state report.
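
  A minimal sketch of the pattern behind the fix (hypothetical names;
  the idea is to move the expensive eswitch scan out of the
  report_state loop and reuse a cached device count):

    import time

    class DeviceCountCache(object):
        """Cache a slow device count so _report_state stays fast."""

        TTL = 30  # seconds; refresh the count at most this often

        def __init__(self, scan_devices):
            self._scan_devices = scan_devices  # the slow eswitch query
            self._count = None
            self._stamp = 0.0

        def get(self):
            now = time.monotonic()
            if self._count is None or now - self._stamp > self.TTL:
                self._count = self._scan_devices()
                self._stamp = now
            return self._count

  The agent's _report_state callback can then read the cached value
  instead of walking every VF and PF on each loop iteration.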

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1648206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458471] [NEW] get_router_for_floatingip does not check if router is default gateway

2015-05-25 Thread Xav Paice
Public bug reported:

In the function db._get_router_for_floatingip() we check that the router
has a suitable gateway IP, but we do not check whether that router is the
default gateway for that subnet.  When multiple routers exist with IP
addresses in both the instance's subnet and the gateway subnet, the
function returns the first router it finds rather than the one that
instances are using as their default gateway.

Can we check a condition such as router_gw_qry.floating_ip ==
subnet_db['gateway_ip'] in addition to checking has_gw_port?
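
A minimal sketch of the suggested check (hypothetical names, shaped
after db._get_router_for_floatingip; candidate_routers pairs each
router with its interface IP on the internal subnet):

    def pick_gateway_router(internal_subnet, candidate_routers):
        # Prefer the router whose interface on the internal subnet
        # holds the subnet's gateway_ip, i.e. the router instances
        # actually use as their default gateway.
        for router, interface_ip in candidate_routers:
            if interface_ip == internal_subnet['gateway_ip']:
                return router
        # Fall back to the current first-match behaviour.
        return candidate_routers[0][0] if candidate_routers else None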

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458471

Title:
  get_router_for_floatingip does not check if router is default gateway

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the function db._get_router_for_floatingip() we check that the
  router has a suitable gateway IP, but we do not check whether that
  router is the default gateway for that subnet.  When multiple routers
  exist with IP addresses in both the instance's subnet and the gateway
  subnet, the function returns the first router it finds rather than
  the one that instances are using as their default gateway.

  Can we check a condition such as router_gw_qry.floating_ip ==
  subnet_db['gateway_ip'] in addition to checking has_gw_port?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp