[Yahoo-eng-team] [Bug 1424571] [NEW] firewall table status not updating

2015-02-23 Thread Masco Kaliyamoorthy
Public bug reported:

While creating a firewall, the corresponding table row should update
when the firewall's status changes, but it stays stuck in the
PENDING_CREATE status.
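
A minimal sketch of Horizon's ajax row-update pattern, which is the
usual fix for a table stuck on a transient status. The api.fwaas call
and the exact status strings are assumptions here, not taken from this
report:

from django.utils.translation import ugettext_lazy as _

from horizon import tables
from openstack_dashboard import api


class UpdateRow(tables.Row):
    ajax = True  # lets the table poll this row until the status settles

    def get_data(self, request, firewall_id):
        # re-fetch the firewall so the row re-renders with fresh status
        return api.fwaas.firewall_get(request, firewall_id)


class FirewallsTable(tables.DataTable):
    name = tables.Column("name", verbose_name=_("Name"))
    # a None mapping means "still transitioning", so polling continues
    status = tables.Column("status", verbose_name=_("Status"),
                           status=True,
                           status_choices=(("active", True),
                                           ("error", False),
                                           ("pending_create", None)))

    class Meta:
        name = "firewalls"
        status_columns = ["status"]
        row_class = UpdateRow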

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424571

Title:
  firewall table status not updating

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While creating a firewall, the corresponding table row should update
  when the firewall's status changes, but it stays stuck in the
  PENDING_CREATE status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424597] [NEW] Obscure 'No valid hosts found' if no free fixed IPs left in the network

2015-02-23 Thread George Shuklin
Public bug reported:

If a network has no free fixed IPs, new instances fail with 'No valid
hosts found' and no proper explanation.

Example:

nova boot foobar --flavor SSD.1 --image cirros --nic net-id=f3f2802a-
c2a1-4d8b-9f43-cf24d0dc8233

(There is no free IP left in network f3f2802a-c2a1-4d8b-
9f43-cf24d0dc8233)

nova show fb4552e5-50cb-4701-a095-c006e4545c04
...
| status | BUILD |

(few seconds later)

| fault  | {"message": "No valid host was found. Exceeded max scheduling attempts 2 for instance fb4552e5-50cb-4701-a095-c006e4545c04. Last exception: [u'Traceback (most recent call last):\", u'  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2036, in _do", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 612, in build_instances |
|        |     instances[0].uuid)                                              |
|        |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 161, in populate_retry |
|        |     raise exception.NoValidHost(reason=msg)                         |
| status | ERROR                                                               |


Expected behaviour: complain about 'No free IP' before attempting to
schedule the instance.

See https://bugs.launchpad.net/nova/+bug/1424594 for similar behaviour.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424597

Title:
  Obscure 'No valid hosts found' if no free fixed IPs left in the
  network

Status in OpenStack Compute (Nova):
  New

Bug description:
  If a network has no free fixed IPs, new instances fail with 'No valid
  hosts found' and no proper explanation.

  Example:

  nova boot foobar --flavor SSD.1 --image cirros --nic net-id=f3f2802a-
  c2a1-4d8b-9f43-cf24d0dc8233

  (There is no free IP left in network f3f2802a-c2a1-4d8b-
  9f43-cf24d0dc8233)

  nova show fb4552e5-50cb-4701-a095-c006e4545c04
  ...
  | status | BUILD |

  (few seconds later)

  | fault  | {"message": "No valid host was found. Exceeded max scheduling attempts 2 for instance fb4552e5-50cb-4701-a095-c006e4545c04. Last exception: [u'Traceback (most recent call last):\", u'  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2036, in _do", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 612, in build_instances |
  |        |     instances[0].uuid)                                              |
  |        |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 161, in populate_retry |
  |        |     raise exception.NoValidHost(reason=msg)                         |
  | status | ERROR                                                               |

  
  Expected behaviour: complain about 'No free IP' before attempting to
  schedule the instance.

  See https://bugs.launchpad.net/nova/+bug/1424594 for similar
  behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424597/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1424595] [NEW] Create network with no name shows ID in the name column

2015-02-23 Thread Yamini Sardana
Public bug reported:

From Horizon, when we create a network with no name, the table shows
the starting bits of the network ID in brackets as the network name,
which is confusing.

The network ID should not appear in the network Name column when the
user has not specified a name for the network.

Instead, the Networks table should display a Network ID column (as the
CLI output does) with the network ID shown under it. That way the ID
is still displayed for networks without a name, and the table stays
consistent with the CLI output.
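
A hedged sketch of the requested change using Horizon's table API; the
column definitions are illustrative, not the panel's actual code:

from django.utils.translation import ugettext_lazy as _

from horizon import tables


class NetworksTable(tables.DataTable):
    # show the name exactly as stored (possibly empty) and expose the
    # ID in its own column, mirroring the CLI output
    name = tables.Column("name", verbose_name=_("Name"))
    id = tables.Column("id", verbose_name=_("Network ID"))

    class Meta:
        name = "networks"
        verbose_name = _("Networks")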

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424595

Title:
  Create network with no name shows ID in the name column

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From Horizon, when we create a network with no name, the table shows
  the starting bits of the network ID in brackets as the network name,
  which is confusing.

  The network ID should not appear in the network Name column when the
  user has not specified a name for the network.

  Instead, the Networks table should display a Network ID column (as
  the CLI output does) with the network ID shown under it. That way the
  ID is still displayed for networks without a name, and the table
  stays consistent with the CLI output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424594] [NEW] 500 error and 2 traces if no free fixed IP left in the neutron network

2015-02-23 Thread George Shuklin
Public bug reported:

If nova receives a 404 from neutron due to a lack of free fixed IPs, it
logs ugly tracebacks and returns a 500 error to the user.

Steps to reproduce:
0. Set up nova and neutron; create a network and a subnet
1. Consume all IPs from that network
2. Try to attach an interface to that network (nova interface-attach
--net-id <NET-UUID> <SERVER-UUID>)

Actual behaviour:

ERROR (ClientException): The server has either erred or is incapable of
performing the requested operation. (HTTP 500) (Request-ID: req-
99ec-6a69-428d-9c16-c58d685553dd)

... and traces (see below)

Expected behaviour:

A proper complaint about the lack of IPs (NoMoreFixedIps) and a proper
HTTP error code.
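
A sketch of the kind of API-layer handling this implies, mapping the
exhaustion error to a client error instead of a 500 (the wrapper shape
is an assumption, not nova's exact code):

import webob.exc

from nova import exception


def attach_interface_or_400(compute_api, context, instance,
                            network_id, port_id, requested_ip):
    try:
        return compute_api.attach_interface(
            context, instance, network_id, port_id, requested_ip)
    except exception.NoMoreFixedIps as e:
        # a 400 names the real cause instead of burying it in a 500
        raise webob.exc.HTTPBadRequest(explanation=e.format_message())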

Traces (nova-api):

nova.api.openstack.wsgi[26783]: DEBUG Action: 'create', calling method: <bound
method InterfaceAttachmentController.create of
<nova.api.openstack.compute.contrib.attach_interfaces.InterfaceAttachmentController
object at 0x7f098e01b390>>, body: {"interfaceAttachment": {"net_id":
"f3f2802a-c2a1-4d8b-9f43-cf24d0dc8233"}}
[req-57f4e821-a968-48cd-8358-f73fa16b4ff7 4aac5cb61b1741b2a32067619555ecc1
78ea359977584bcc9feceef2553dbe57] _process_stack
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:934
nova.api.openstack.compute.contrib.attach_interfaces[26783]: AUDIT [instance: 
5f1e84cb-1766-45e1-899b-9de1e535309b] Attach interface 
[req-57f4e821-a968-48cd-8358-f73fa16b4ff7 4aac5cb61b1741b2a32067619555ecc1 
78ea359977584bcc9feceef2553dbe57]
nova.api.openstack[26783]: ERROR Caught error: Zero fixed ips available.
Traceback (most recent call last):

  File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 134, in _dispatch_and_reply
incoming.message))

  File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 177, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)

  File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 123, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 419, in 
decorated_function
return function(self, context, *args, **kwargs)

  File /usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in wrapped
payload)

  File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File /usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in wrapped
return f(self, context, *args, **kw)

  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 303, in 
decorated_function
pass

  File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 289, in 
decorated_function
return function(self, context, *args, **kwargs)

  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 331, in 
decorated_function
kwargs['instance'], e, sys.exc_info())

  File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 319, in 
decorated_function
return function(self, context, *args, **kwargs)

  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 4787, 
in attach_interface
context, instance, port_id, network_id, requested_ip)

  File /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py, line 
569, in allocate_port_for_instance
requested_networks=requested_networks)

  File /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py, line 
443, in allocate_for_instance
self._delete_ports(neutron, instance, created_port_ids)

  File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py, line 
423, in allocate_for_instance
security_group_ids, available_macs, dhcp_opts)

  File /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py, line 
226, in _create_port
raise exception.NoMoreFixedIps()

NoMoreFixedIps: Zero fixed ips available.
 [req-57f4e821-a968-48cd-8358-f73fa16b4ff7 4aac5cb61b1741b2a32067619555ecc1 
78ea359977584bcc9feceef2553dbe57]
nova.api.openstack[26783]: TRACE Traceback (most recent call last):
nova.api.openstack[26783]: TRACE   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 124, in 
__call__
nova.api.openstack[26783]: TRACE return req.get_response(self.application)
nova.api.openstack[26783]: TRACE   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
nova.api.openstack[26783]: TRACE application, catch_exc_info=False)
nova.api.openstack[26783]: TRACE   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 

[Yahoo-eng-team] [Bug 1424593] [NEW] ObjectDeleted error when network already removed during rescheduling

2015-02-23 Thread Eugene Nikanorov
Public bug reported:

In some cases when concurrent rescheduling occurs, the following trace
is observed:

ERROR neutron.openstack.common.loopingcall [-] in fixed duration looping call
TRACE neutron.openstack.common.loopingcall Traceback (most recent call last):
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py, 
line 76, in _inner
TRACE neutron.openstack.common.loopingcall self.f(*self.args, **self.kw)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 269, 
in remove_networks_from_down_agents
TRACE neutron.openstack.common.loopingcall {'net': binding.network_id,
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
TRACE neutron.openstack.common.loopingcall return 
self.impl.get(instance_state(instance), dict_)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
TRACE neutron.openstack.common.loopingcall value = callable_(state, passive)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
TRACE neutron.openstack.common.loopingcall 
self.manager.deferred_scalar_loader(self, toload)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line 614, in 
load_scalar_attributes
TRACE neutron.openstack.common.loopingcall raise 
orm_exc.ObjectDeletedError(state)
TRACE neutron.openstack.common.loopingcall ObjectDeletedError: Instance
'<NetworkDhcpAgentBinding at 0x52b1850>' has been deleted, or its row is
otherwise not present.

We need to avoid accessing a DB object after it has been deleted from the DB,
since attribute access may trigger this exception.
This issue terminates the periodic task that reschedules networks.
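
A minimal sketch of that guard (field names follow the
NetworkDhcpAgentBinding model; the loop shape is illustrative):

import logging

from sqlalchemy.orm import exc as orm_exc

LOG = logging.getLogger(__name__)


def remove_bindings_safely(bindings):
    for binding in bindings:
        try:
            # copy the attributes out while the row still exists; lazy
            # attribute access on a deleted row raises ObjectDeletedError
            net_id = binding.network_id
            agent_id = binding.dhcp_agent_id
        except orm_exc.ObjectDeletedError:
            continue  # already removed by a concurrent worker
        LOG.info("Removing network %(net)s from agent %(agent)s",
                 {'net': net_id, 'agent': agent_id})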

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424593

Title:
  ObjectDeleted error when network already removed during rescheduling

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some cases when concurrent rescheduling occurs, the following trace
  is observed:

  ERROR neutron.openstack.common.loopingcall [-] in fixed duration looping call
  TRACE neutron.openstack.common.loopingcall Traceback (most recent call last):
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py, 
line 76, in _inner
  TRACE neutron.openstack.common.loopingcall self.f(*self.args, **self.kw)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 269, 
in remove_networks_from_down_agents
  TRACE neutron.openstack.common.loopingcall {'net': binding.network_id,
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
  TRACE neutron.openstack.common.loopingcall return 
self.impl.get(instance_state(instance), dict_)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
  TRACE neutron.openstack.common.loopingcall value = callable_(state, 
passive)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
  TRACE neutron.openstack.common.loopingcall 
self.manager.deferred_scalar_loader(self, toload)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line 614, in 
load_scalar_attributes
  TRACE neutron.openstack.common.loopingcall raise 
orm_exc.ObjectDeletedError(state)
  TRACE neutron.openstack.common.loopingcall ObjectDeletedError: Instance
'<NetworkDhcpAgentBinding at 0x52b1850>' has been deleted, or its row is
otherwise not present.

  We need to avoid accessing a DB object after it has been deleted from the
DB, since attribute access may trigger this exception.
  This issue terminates the periodic task that reschedules networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424596] [NEW] Create network with no name shows ID in the name column

2015-02-23 Thread Yamini Sardana
Public bug reported:

From Horizon, when we create a network with no name, the table shows
the starting bits of the network ID in brackets as the network name,
which is confusing.

The network ID should not appear in the network Name column when the
user has not specified a name for the network.

Instead, the Networks table should display a Network ID column (as the
CLI output does) with the network ID shown under it. That way the ID
is still displayed for networks without a name, and the table stays
consistent with the CLI output.

** Affects: horizon
 Importance: Undecided
 Assignee: Yamini Sardana (yamini-sardana)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Yamini Sardana (yamini-sardana)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424596

Title:
  Create network with no name shows ID in the name column

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From Horizon, when we create a network with no name, the table shows
  the starting bits of the network ID in brackets as the network name,
  which is confusing.

  The network ID should not appear in the network Name column when the
  user has not specified a name for the network.

  Instead, the Networks table should display a Network ID column (as
  the CLI output does) with the network ID shown under it. That way the
  ID is still displayed for networks without a name, and the table
  stays consistent with the CLI output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424578] [NEW] DetachedInstanceError when binding network to agent

2015-02-23 Thread Eugene Nikanorov
Public bug reported:

TRACE neutron.db.agentschedulers_db Traceback (most recent call last):
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 192, 
in _schedule_network
TRACE neutron.db.agentschedulers_db agents = self.schedule_network(context, 
network)
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 400, 
in schedule_network
TRACE neutron.db.agentschedulers_db self, context, created_network)
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 91, in schedule
TRACE neutron.db.agentschedulers_db self._schedule_bind_network(context, 
chosen_agents, network['id'])
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 51, in _schedule_bind_network
TRACE neutron.db.agentschedulers_db LOG.info(_('Agent %s already present'), 
agent)
...
TRACE neutron.db.agentschedulers_db DetachedInstanceError: Instance <Agent at
0x5ff1710> is not bound to a Session; attribute refresh operation cannot proceed
2015-02-21 14:45:15.927 1417 TRACE neutron.db.agentschedulers_db

The fix is to log the saved agent_id instead of the DB object.
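
A sketch of that change (the surrounding scheduler code is simplified
away; only the logging pattern matters):

import logging

LOG = logging.getLogger(__name__)


def log_existing_agent(agent):
    # read the id while the Agent row is still bound to its session;
    # interpolating the ORM object itself forces a lazy refresh, which
    # raises DetachedInstanceError once the session is gone
    agent_id = agent['id']
    LOG.info('Agent %s already present', agent_id)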

** Affects: neutron
 Importance: Low
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424578

Title:
  DetachedInstanceError when binding network to agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  TRACE neutron.db.agentschedulers_db Traceback (most recent call last):
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 192, 
in _schedule_network
  TRACE neutron.db.agentschedulers_db agents = 
self.schedule_network(context, network)
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 400, 
in schedule_network
  TRACE neutron.db.agentschedulers_db self, context, created_network)
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 91, in schedule
  TRACE neutron.db.agentschedulers_db self._schedule_bind_network(context, 
chosen_agents, network['id'])
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 51, in _schedule_bind_network
  TRACE neutron.db.agentschedulers_db LOG.info(_('Agent %s already 
present'), agent)
  ...
  TRACE neutron.db.agentschedulers_db DetachedInstanceError: Instance <Agent at
0x5ff1710> is not bound to a Session; attribute refresh operation cannot proceed
  2015-02-21 14:45:15.927 1417 TRACE neutron.db.agentschedulers_db

  The fix is to log the saved agent_id instead of the DB object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424576] [NEW] RuntimeError: Unable to find group for option fatal_deprecations, maybe it's defined twice in the same group?

2015-02-23 Thread Christian Berendt
Public bug reported:

I tried to generate a nova.conf configuration file with the current
state of the Nova repository (master) and got the following exception
message:

% tox -e genconfig
genconfig create: /home/berendt/Repositories/nova/.tox/genconfig
genconfig installdeps: -r/home/berendt/Repositories/nova/requirements.txt, 
-r/home/berendt/Repositories/nova/test-requirements.txt
genconfig develop-inst: /home/berendt/Repositories/nova
genconfig runtests: PYTHONHASHSEED='0'
genconfig runtests: commands[0] | bash tools/config/generate_sample.sh -b . -p 
nova -o etc/nova
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 303, in <module>
    main()
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 300, in main
    generate(sys.argv[1:])
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 128, in generate
    for group, opts in _list_opts(mod_obj):
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 192, in _list_opts
    ret.setdefault(_guess_groups(opt, obj), []).append(opt)
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 172, in _guess_groups
    % opt.name
RuntimeError: Unable to find group for option fatal_deprecations, maybe it's
defined twice in the same group?

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- I tried to generate a nova.conf configuration file with the current stat
- of the Nova repository (master) and got the following exception message:
+ I tried to generate a nova.conf configuration file with the current
+ state of the Nova repository (master) and got the following exception
+ message:
  
- % tox -e genconfig   
+ % tox -e genconfig
  genconfig create: /home/berendt/Repositories/nova/.tox/genconfig
  genconfig installdeps: -r/home/berendt/Repositories/nova/requirements.txt, 
-r/home/berendt/Repositories/nova/test-requirements.txt
  genconfig develop-inst: /home/berendt/Repositories/nova
  genconfig runtests: PYTHONHASHSEED='0'
  genconfig runtests: commands[0] | bash tools/config/generate_sample.sh -b . 
-p nova -o etc/nova
  Traceback (most recent call last):
-   File /usr/lib64/python2.7/runpy.py, line 162, in _run_module_as_main
- __main__, fname, loader, pkg_name)
-   File /usr/lib64/python2.7/runpy.py, line 72, in _run_code
- exec code in run_globals
-   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 303, in module
- main()
-   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 300, in main
- generate(sys.argv[1:])
-   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 128, in generate
- for group, opts in _list_opts(mod_obj):
-   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 192, in _list_opts
- ret.setdefault(_guess_groups(opt, obj), []).append(opt)
-   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 172, in _guess_groups
- % opt.name
+   File /usr/lib64/python2.7/runpy.py, line 162, in _run_module_as_main
+ __main__, fname, loader, pkg_name)
+   File /usr/lib64/python2.7/runpy.py, line 72, in _run_code
+ exec code in run_globals
+   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 303, in module
+ main()
+   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 300, in main
+ generate(sys.argv[1:])
+   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 128, in generate
+ for group, opts in _list_opts(mod_obj):
+   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 192, in _list_opts
+ ret.setdefault(_guess_groups(opt, obj), []).append(opt)
+   File 
/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py, 
line 172, in _guess_groups
+ % opt.name
  RuntimeError: Unable to find group for option fatal_deprecations, maybe it's 
defined twice in the same group?

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424576

Title:
  RuntimeError: Unable to find group for option fatal_deprecations,
  maybe it's defined twice in the same group?

Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried to generate a nova.conf configuration file with the current
  state of the Nova repository (master) and got the following exception
  message:

  % tox -e genconfig
  genconfig 

[Yahoo-eng-team] [Bug 1424587] [NEW] ML2 plugin should use accessors instead of accessing private attributes of PortContext members

2015-02-23 Thread Rossella Sblendido
Public bug reported:

In the ML2 plugin, instead of accessing the private members of
PortContext directly, use accessors. For example:

orig_context._network_context._network

should be:

orig_context.network.current


port = mech_context._port

should be:

port = mech_context.current

and so on...
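
For illustration, a simplified, hypothetical shape of the accessors
involved (the real classes live in
neutron/plugins/ml2/driver_context.py):

class NetworkContext(object):
    def __init__(self, network):
        self._network = network  # private: free to change

    @property
    def current(self):
        return self._network     # public, stable accessor


class PortContext(object):
    def __init__(self, port, network_context):
        self._port = port
        self._network_context = network_context

    @property
    def current(self):
        return self._port

    @property
    def network(self):
        return self._network_context

Callers then write context.current and context.network.current rather
than reaching into the underscore-prefixed attributes.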

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit ml2

** Tags added: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424587

Title:
  ML2 plugin should use accessors instead of accessing private
  attributes of PortContext members

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the ML2 plugin, instead of accessing the private members of
  PortContext directly, use accessors. For example:

  orig_context._network_context._network

  should be:

  orig_context.network.current

  
  port = mech_context._port

  should be:

  port = mech_context.current

  and so on...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424605] [NEW] Security groups whose names start with '-' should be deletable

2015-02-23 Thread Amandeep
Public bug reported:

While deleting a security group whose name starts with '-' followed by
a letter, the error message is not user friendly.

Steps to reproduce:

$ nova secgroup-list

+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 6f491527-0405-479d-80da-c9b81547bf39 | -hello  | hello       |
| 93b1f5b2-094b-4599-afc6-0ffc5518c0e9 | default | default     |
+--------------------------------------+---------+-------------+

nova secgroup-delete -hello

Expected Result:
The security group should be deleted.
Actual Result:
usage: nova secgroup-delete <secgroup>
error: too few arguments
Try 'nova help secgroup-delete' for more information.
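
For illustration, a plain-argparse reproduction of why '-hello' is
swallowed, plus the standard '--' end-of-options marker that may work
as a workaround (assuming novaclient passes arguments straight through
argparse):

import argparse

parser = argparse.ArgumentParser(prog='nova secgroup-delete')
parser.add_argument('secgroup')

# parser.parse_args(['-hello'])  # argparse reads this as an option
#                                # flag -> "error: too few arguments"
args = parser.parse_args(['--', '-hello'])  # '--' ends option parsing
print(args.secgroup)                        # prints: -hello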

** Affects: python-novaclient
 Importance: Undecided
 Assignee: Amandeep (rattenpal-amandeep)
 Status: New

** Project changed: nova => python-novaclient

** Summary changed:

- Error message not user friendly while deleting a security group
+ Security groups whose names start with '-' should be deletable

** Changed in: python-novaclient
 Assignee: (unassigned) => Amandeep (rattenpal-amandeep)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424605

Title:
  Security groups whose names start with '-' should be deletable

Status in Python client library for Nova:
  New

Bug description:
  While deleting a security group whose name starts with '-' followed
  by a letter, the error message is not user friendly.

  Steps to reproduce:

  $ nova secgroup-list

  +--------------------------------------+---------+-------------+
  | Id                                   | Name    | Description |
  +--------------------------------------+---------+-------------+
  | 6f491527-0405-479d-80da-c9b81547bf39 | -hello  | hello       |
  | 93b1f5b2-094b-4599-afc6-0ffc5518c0e9 | default | default     |
  +--------------------------------------+---------+-------------+

  nova secgroup-delete -hello

  Expected Result:
  The security group should be deleted.
  Actual Result:
  usage: nova secgroup-delete <secgroup>
  error: too few arguments
  Try 'nova help secgroup-delete' for more information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1424605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424616] [NEW] A timeout on Chrome 40/ Ubuntu 14.10 causes the user to be stuck at Log In

2015-02-23 Thread Rob Cresswell
Public bug reported:

If you time out of Horizon, clicking "Sign In" redirects you back to
the "Log In" page without any warning or error message. This continues
until the sessionid cookie is manually removed.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424616

Title:
  A timeout on Chrome 40/ Ubuntu 14.10 causes the user to be stuck at
  Log In

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you time out of Horizon, clicking "Sign In" redirects you back to
  the "Log In" page without any warning or error message. This
  continues until the sessionid cookie is manually removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421863] Re: Can not find policy directory: policy.d spams the logs

2015-02-23 Thread Attila Fazekas
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421863

Title:
  Can not find policy directory: policy.d spams the logs

Status in OpenStack Telemetry (Ceilometer):
  New
Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in The Oslo library incubator:
  Triaged
Status in Oslo Policy:
  Fix Released

Bug description:
  This hits over 118 million times in 24 hours in Jenkins runs:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2FuIG5vdCBmaW5kIHBvbGljeSBkaXJlY3Rvcnk6IHBvbGljeS5kXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzg2Njk0MTcxOH0=

  We can probably just change something in devstack to avoid this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1421863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424647] [NEW] Allow configuring proxy_host and proxy_port in nova.conf

2015-02-23 Thread Sridhar Gaddam
Public bug reported:

Following patch I2d46b926f1c895aba412d84b4ee059fda3df9011,
proxy_host/proxy_port configured in nova.conf or passed via the
command line do not take effect for novncproxy, spicehtmlproxy,
and the serial proxy.

** Affects: nova
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424647

Title:
  Allow configuring proxy_host and proxy_port in nova.conf

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Following patch I2d46b926f1c895aba412d84b4ee059fda3df9011,
  proxy_host/proxy_port configured in nova.conf or passed via the
  command line do not take effect for novncproxy, spicehtmlproxy,
  and the serial proxy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424698] [NEW] Backend filter testing could be more comprehensive

2015-02-23 Thread Henry Nash
Public bug reported:

The current filter testing for backends covers some of the filtering
combinations (such as startswith), but not all of them. These should
be expanded to provide better coverage (especially as filtering is now
supported by the SQL and LDAP backends).
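
A sketch of the sort of extra coverage suggested, in the style of the
existing backend filter tests (names and the mixin shape are
illustrative):

from keystone.common import driver_hints


class FilterTests(object):
    def test_list_users_filtered_by_name_startswith(self):
        # exercise the 'startswith' comparator against the live backend
        hints = driver_hints.Hints()
        hints.add_filter('name', 'test_', comparator='startswith')
        users = self.identity_api.list_users(hints=hints)
        for user in users:
            self.assertTrue(user['name'].startswith('test_'))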

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: test-improvement

** Tags added: test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1424698

Title:
  Backend filter testing could be more comprehensive

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The current filter testing for backends covers some of the filtering
  combinations (such as startswith), but not all of them. These should
  be expanded to provide better coverage (especially as filtering is
  now supported by the SQL and LDAP backends).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1424698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424710] [NEW] CentOS 7: update-hostname always overwrites hostname

2015-02-23 Thread Brian Rak
Public bug reported:

On CentOS 7 (systemd based), with cloud-init 0.7.5, cc_update_hostname
will *always* overwrite the hostname, even if it shouldn't.

This is pretty easy to reproduce:

1) Add a 'hostname: metadata.example.com' to your user-data
2) Run 'hostnamectl set-hostname test.example.com'
3) Reboot the server
4) Check the hostname, it'll still be 'metadata.example.com'
5) Repeat 2-4, hostname will still be set

Looking at the code, this is because rhel.py doesn't populate the
previous-hostname file correctly:

def _write_hostname(self, hostname, out_fn):
if self._dist_uses_systemd():
util.subp(['hostnamectl', 'set-hostname', str(hostname)])
else:
host_cfg = {
'HOSTNAME': hostname,
}
rhel_util.update_sysconfig_file(out_fn, host_cfg)

So, Distros::update_hostname calls _write_hostname to try and save the
previous hostname to /var/lib/cloud/data/previous-hostname; however,
all this does is run the hostnamectl command, which doesn't actually
update this file.  This means that on the next cloud-init run,
/var/lib/cloud/data/previous-hostname doesn't exist, so the hostname
gets blindly updated again.

The fix here seems pretty simple: we just need to update previous-
hostname as well when running hostnamectl.  I attached a patch that
does this, though I don't know the best way to implement it; my patch
fails if cloud-init's data path is changed.
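
For reference, a minimal sketch of that idea (not the attached patch;
it reuses cloud-init's own util helpers):

from cloudinit import util
from cloudinit.distros import rhel_util


def _write_hostname(self, hostname, out_fn):
    if self._dist_uses_systemd():
        util.subp(['hostnamectl', 'set-hostname', str(hostname)])
        # hostnamectl does not touch out_fn (e.g.
        # /var/lib/cloud/data/previous-hostname), so persist it here
        # too for the next run's comparison
        util.write_file(out_fn, "%s\n" % hostname)
    else:
        host_cfg = {
            'HOSTNAME': hostname,
        }
        rhel_util.update_sysconfig_file(out_fn, host_cfg)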

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: centos-7-hostname.patch
   
https://bugs.launchpad.net/bugs/1424710/+attachment/4325548/+files/centos-7-hostname.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1424710

Title:
  CentOS 7: update-hostname always overwrites hostname

Status in Init scripts for use on cloud images:
  New

Bug description:
  On CentOS 7 (systemd based), with cloud-init 0.7.5, cc_update_hostname
  will *always* overwrite the hostname, even if it shouldn't.

  This is pretty easy to reproduce:

  1) Add a 'hostname: metadata.example.com' to your user-data
  2) Run 'hostnamectl set-hostname test.example.com'
  3) Reboot the server
  4) Check the hostname, it'll still be 'metadata.example.com'
  5) Repeat 2-4, hostname will still be set

  Looking at the code, this is because rhel.py doesn't populate the
  previous-hostname file correctly:

  def _write_hostname(self, hostname, out_fn):
  if self._dist_uses_systemd():
  util.subp(['hostnamectl', 'set-hostname', str(hostname)])
  else:
  host_cfg = {
  'HOSTNAME': hostname,
  }
  rhel_util.update_sysconfig_file(out_fn, host_cfg)

  So, Distros::update_hostname calls _write_hostname to try and save the
  previous hostname to /var/lib/cloud/data/previous-hostname; however,
  all this does is run the hostnamectl command, which doesn't actually
  update this file.  This means that on the next cloud-init run,
  /var/lib/cloud/data/previous-hostname doesn't exist, so the hostname
  gets blindly updated again.

  The fix here seems pretty simple: we just need to update previous-
  hostname as well when running hostnamectl.  I attached a patch that
  does this, though I don't know the best way to implement it; my
  patch fails if cloud-init's data path is changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1424710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424722] [NEW] neutron should instantiate an oslo.messaging transport after fork

2015-02-23 Thread Ihar Hrachyshka
Public bug reported:

As per
http://docs.openstack.org/developer/oslo.messaging/transport.html,

"oslo.messaging can’t ensure that forking a process that shares the same
transport object is safe for the library consumer, because it relies on
different 3rd party libraries that don’t ensure that. In certain cases,
with some drivers, it does work"

In neutron, we initialize the transport object before forking workers.
We do it by calling neutron.common.rpc:init, which is called from
neutron.common.config:init, which is called BEFORE we call
neutron.service:serve_rpc, which actually forks the workers.

Note that in the neutron case, it's not simply a matter of moving the
initialization a bit lower, due to the way plugins are instantiated.
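
A minimal sketch of the safe ordering (not neutron's service code;
topic, endpoints, and worker count are placeholders):

import os

from oslo_config import cfg
import oslo_messaging as messaging


def run_rpc_worker():
    # the transport is created *after* fork, so no driver state is
    # shared with the parent or sibling processes
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='q-plugin',
                              server='worker-%d' % os.getpid())
    server = messaging.get_rpc_server(transport, target, endpoints=[],
                                      executor='blocking')
    server.start()
    server.wait()


for _ in range(2):  # worker count is illustrative
    if os.fork() == 0:
        run_rpc_worker()
        os._exit(0)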

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424722

Title:
  neutron should instantiate an oslo.messaging transport after fork

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As per
  http://docs.openstack.org/developer/oslo.messaging/transport.html,

  "oslo.messaging can’t ensure that forking a process that shares the
  same transport object is safe for the library consumer, because it
  relies on different 3rd party libraries that don’t ensure that. In
  certain cases, with some drivers, it does work"

  In neutron, we initialize the transport object before forking
  workers. We do it by calling neutron.common.rpc:init, which is called
  from neutron.common.config:init, which is called BEFORE we call
  neutron.service:serve_rpc, which actually forks the workers.

  Note that in the neutron case, it's not simply a matter of moving the
  initialization a bit lower, due to the way plugins are instantiated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422716] Re: Image data remains in backend after deleting the image (created using task api, import-from) when it is in saving state

2015-02-23 Thread Thierry Carrez
*** This bug is a duplicate of bug 1371118 ***
https://bugs.launchpad.net/bugs/1371118

** This bug has been marked a duplicate of bug 1371118
   Image file stays in store if image has been deleted during upload 
(CVE-2014-9684)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1422716

Title:
  Image data remains in backend after deleting the image (created using
  task api, import-from) when it is in saving state

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance juno series:
  In Progress
Status in OpenStack Security Advisories:
  In Progress

Bug description:
  When deleting an image created using the task api (import-from) while
  it is in 'saving' state, the image gets deleted from the database, but
  the image data remains in the backend.

  Steps to reproduce:
  1. Create image using task api

  $ curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-
  Type: application/json' -H 'Accept-Encoding: gzip, deflate, compress'
  -H 'Accept: */*' -H 'X-Auth-Token: 35a9e49237b74eddbe5057eb434b3f9e'
  -d '{"type": "import", "input": {"import_from":
  "http://releases.ubuntu.com/14.10/ubuntu-14.10-server-i386.iso",
  "import_from_format": "raw", "image_properties": {"disk_format":
  "raw", "container_format": "bare", "name": "task_image"}}}'
  http://10.69.4.176:9292/v2/tasks

  2. Delete the image before it becomes active.
     $ glance image-delete <image-id>

  3. Verify image-list does not show deleted image
     $ glance image-list

  Image gets deleted from the database but image data presents in the
  backend.

  Problem:
  In stable/juno, if the image is deleted while the upload is in progress
  (the image is in 'saving' state), image_repo.get() raises a NotFound
  exception which is not caught, so the image data is never deleted from
  the backend.

  This issue is reproducible only in stable/juno.
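
  A hypothetical sketch of the missing handling (the store_api helper
  name is an assumption, not glance's exact juno code):

  from glance.common import exception


  def get_image_or_cleanup(image_repo, image_id, uri, store_api, context):
      try:
          return image_repo.get(image_id)
      except exception.NotFound:
          # the image was deleted mid-upload; remove the orphaned data
          # from the backend before re-raising
          store_api.delete_from_backend(uri, context=context)
          raise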

  Note: You need to replace the auth_token in the above curl command,
  otherwise it will raise an authentication failure error.
  (Use the 'keystone token-get' command to generate a new token.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1422716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424570] [NEW] Unfortunate wording: Edit volume button

2015-02-23 Thread Matthias Runge
Public bug reported:

Description of problem:
The 'save' button on the 'Edit Volume' window is currently 'Edit Volume' which 
makes the customer think that they are then going to edit something else. 
Please re-label to 'Save'.

** Affects: horizon
 Importance: Low
 Assignee: Matthias Runge (mrunge)
 Status: In Progress


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424570

Title:
  Unfortunate wording: Edit volume button

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Description of problem:
  The 'save' button on the 'Edit Volume' window is currently 'Edit Volume' 
which makes the customer think that they are then going to edit something else. 
Please re-label to 'Save'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420696] Re: [OSSA 2015-004] Image data remains in backend after deleting the image created using task api (import-from) (CVE-2015-1881)

2015-02-23 Thread Tristan Cacqueray
** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420696

Title:
  [OSSA 2015-004] Image data remains in backend after deleting the image
  created using task api (import-from) (CVE-2015-1881)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance icehouse series:
  Invalid
Status in Glance juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  --
  This issue is being treated as a potential security risk under embargo. 
Please do not make any public mention of embargoed (private) security 
vulnerabilities before their coordinated publication by the OpenStack 
Vulnerability Management Team in the form of an official OpenStack Security 
Advisory. This includes discussion of the bug or associated fixes in public 
forums such as mailing lists, code review systems and bug trackers. Please also 
avoid private disclosure to other individuals not already approved for access 
to this information, and provide this same reminder to those who are made aware 
of the issue prior to publication. All discussion should remain confined to 
this private bug report, and any proposed fixes should be added as to the bug 
as attachments.
  --

  When deleting an image created using the task api (import-from), the
  image gets deleted from the database, but the image data remains in
  the backend.

  Steps to reproduce:
  1. Create image using task api

  $ curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-
  Type: application/json' -H 'Accept-Encoding: gzip, deflate, compress'
  -H 'Accept: */*' -H 'X-Auth-Token: 35a9e49237b74eddbe5057eb434b3f9e'
  -d '{"type": "import", "input": {"import_from":
  "http://releases.ubuntu.com/14.10/ubuntu-14.10-server-i386.iso",
  "import_from_format": "raw", "image_properties": {"disk_format":
  "raw", "container_format": "bare", "name": "task_image"}}}'
  http://10.69.4.176:9292/v2/tasks

  2. wait until image becomes active.
  3. Confirm image is in active state.
     $ glance image-list
  4. Delete the image
     $ glance image-delete <image-id>
  5. Verify image-list does not show deleted image
     $ glance image-list

  Image gets deleted from the database but image data presents in the
  backend.

  Problem:
  The import task does not update the location of the image; it remains
  None even after the image becomes active.
  The location entry is not added to the image_locations table in the
  database.

  While deleting the image, it checks whether a location is present for
  the image [1][2], and only then deletes the image data from that
  location.

  [1] v1: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1066
  [2] v2: 
https://github.com/openstack/glance/blob/master/glance/location.py#L361
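
  A hypothetical sketch of the missing step (repository and field names
  follow glance's domain model; this is not the actual fix):

  def record_import_location(image_repo, image_id, uri):
      # store the location once the import task finishes, so a later
      # delete can find and remove the backend data
      image = image_repo.get(image_id)
      image.locations = [{'url': uri, 'metadata': {}, 'status': 'active'}]
      image_repo.save(image)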

  This issue is reproducible in stable/juno as well as in current
  master.

  Note: You need to replace the auth_token in the above curl command,
  otherwise it will raise an authentication failure error.
  (Use the 'keystone token-get' command to generate a new token.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424760] [NEW] SLAAC/DHCPv6-stateless subnets can be deleted with router ports still in-use

2015-02-23 Thread Andrew Boik
Public bug reported:

SLAAC and DHCPv6-stateless subnets can be deleted using the subnet-
delete command even when they still have associated internal router
ports. This causes the subnet to be deleted from the Neutron database,
yet in reality the subnet still exists with radvd continuing to
advertise the prefix to clients on the network. Calling subnet-delete on
a subnet that still has internal router ports should result in an error.
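
A sketch of the missing guard (the exception and device_owner constant
follow neutron's conventions; the exact hook point in delete_subnet is
an assumption):

from neutron.common import exceptions as n_exc

ROUTER_INTERFACE_OWNERS = ['network:router_interface']


def ensure_no_router_ports(plugin, context, subnet_id):
    ports = plugin.get_ports(
        context,
        filters={'device_owner': ROUTER_INTERFACE_OWNERS,
                 'fixed_ips': {'subnet_id': [subnet_id]}})
    if ports:
        # refuse the delete while router interfaces still use the subnet
        raise n_exc.SubnetInUse(subnet_id=subnet_id)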

Steps to reproduce:

1. Create a slaac or dhcpv6-stateless subnet

dboik@dboik-VirtualBox:/opt/stack/neutron/neutron$ neutron subnet-create 
--ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac --name subv6 
private cafe::/64
Created a new subnet:
+---+--+
| Field | Value|
+---+--+
| allocation_pools  | {"start": "cafe::2", "end": "cafe::ffff:ffff:ffff:fffe"} |
| cidr  | cafe::/64|
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| cafe::1  |
| host_routes   |  |
| id| f878a81c-3fdf-46f1-9719-fdbdb314d822 |
| ip_version| 6|
| ipv6_address_mode | slaac|
| ipv6_ra_mode  | slaac|
| name  | subv6|
| network_id| 77b850fd-8f87-4001-aa2e-6375a87b9598 |
| tenant_id | dc748d64a2fc4ec798e9a16d5f6cb444 |
+---+--+


2. Create a router interface using this subnet


dboik@dboik-VirtualBox:/opt/stack/neutron/neutron$ neutron router-interface-add 
router1 subv6
Added interface e86154dd-fee6-435d-8065-55cf4b2ae860 to router router1.

dboik@dboik-VirtualBox:/opt/stack/neutron/neutron$ neutron router-port-list router1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                                    |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------------+
| 31640bf5-5533-4ca4-b04c-d9808be385b2 |      | fa:16:3e:96:41:b7 | {"subnet_id": "46659d0b-230f-49a0-8fea-2156a67f099f", "ip_address": "2001:420:2c50:200a::1"} |
| e86154dd-fee6-435d-8065-55cf4b2ae860 |      | fa:16:3e:c3:5a:3e | {"subnet_id": "f878a81c-3fdf-46f1-9719-fdbdb314d822", "ip_address": "cafe::1"}               |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------------+

3. Delete the subnet

dboik@dboik-VirtualBox:/opt/stack/neutron/neutron$ neutron subnet-delete subv6
Deleted subnet: subv6

dboik@dboik-VirtualBox:/opt/stack/neutron/neutron$ neutron router-port-list router1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                                    |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------------+
| 31640bf5-5533-4ca4-b04c-d9808be385b2 |      | fa:16:3e:96:41:b7 | {"subnet_id": "46659d0b-230f-49a0-8fea-2156a67f099f", "ip_address": "2001:420:2c50:200a::1"} |
| e86154dd-fee6-435d-8065-55cf4b2ae860 |      | fa:16:3e:c3:5a:3e |                                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------------+

The subnet is removed from the router port in Neutron, but it still
exists in the router namespace:
dboik@dboik-VirtualBox:/opt/stack/neutron/neutron$ sudo ip netns exec 

[Yahoo-eng-team] [Bug 1316137] Re: glance --location images handling is hiding HTTP errors and uses incorrect HTTP codes...

2015-02-23 Thread nikhil komawar
** Changed in: glance-store
   Importance: Undecided => Medium

** Changed in: glance-store
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1316137

Title:
  glance --location images handling is hiding HTTP errors and uses
  incorrect HTTP codes...

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Glance backend store-drivers library (glance_store):
  Fix Released

Bug description:
  1) Glance hides error messages when used with --location. If the
  URL returns HTTP 404, glance just ignores it.

  2) Reply to GET request to http://CENSORED:9292/v1/images/2e34a168
  -62ca-412d-84bb-852fcaf2a391 contains header:

     location: http://CENSORED:9292/v1/images/2e34a168-62ca-412d-84bb-
  852fcaf2a391

  which refers to the same URL. This feels weird.

  3) It also replies with HTTP 200 in the case above, so why does the
  reply contain the location header? How can it reply 200 when it
  couldn't have succeeded in downloading the image? (See the sketch
  below.)
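
  For illustration, a hedged sketch of the up-front validation points
  1-3 imply, using plain requests (glance's http store has its own
  client code; this is not it):

  import requests


  def validate_location(url):
      resp = requests.head(url, allow_redirects=True, timeout=10)
      if resp.status_code != 200:
          raise ValueError('location %s returned HTTP %s'
                           % (url, resp.status_code))
      if int(resp.headers.get('content-length') or 0) == 0:
          raise ValueError('location %s reports no content' % url)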

  [root@cloudimg ~(keystone_admin)]# glance image-create --location 
http://www.google.com/glance_handles_404_and_other_errors_incorrectly.html 
--is-public False --container-format bare --disk-format qcow2 --name 404.html; 
echo exit code: $?
  /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
    _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | None                                 |
  | container_format | bare                                 |
  | created_at       | 2014-05-05T12:42:17                  |
  | deleted          | False                                |
  | deleted_at       | None                                 |
  | disk_format      | qcow2                                |
  | id               | 2e34a168-62ca-412d-84bb-852fcaf2a391 |
  | is_public        | False                                |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | 404.html                             |
  | owner            | 56be9abf0ca24193908472465157112f     |
  | protected        | False                                |
  | size             | 0                                    |
  | status           | active                               |
  | updated_at       | 2014-05-05T12:42:17                  |
  +------------------+--------------------------------------+
  exit code: 0

  [root@cloudimg ~(keystone_admin)]# glance --debug image-download --file 
404.html 404.html; echo exit code: $?
  /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
    _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
  curl -i -X GET -H 'X-Auth-Token: 
MIIO3wYJKoZIhvcNAQcCoIIO0DC...EcKIsfbUbm5A74Gg==' -H 'Content-Type: 
application/json' -H 'User-Agent: python-glanceclient' 
  http://CENSORED:9292/v1/images/detail?limit=20&name=404.html

  HTTP/1.1 200 OK
  date: Mon, 05 May 2014 12:44:06 GMT
  content-length: 425
  content-type: application/json; charset=UTF-8
  x-openstack-request-id: req-d1bb2f3a-31ea-4da8-b1a5-e8a021f6e122

  {"images": [{"status": "active", "name": "404.html", "deleted": false,
  "container_format": "bare", "created_at": "2014-05-05T12:42:17",
  "disk_format": "qcow2", "updated_at": "2014-05-05T12:42:17",
  "min_disk": 0, "protected": false, "id": "2e34a168-62ca-412d-84bb-
  852fcaf2a391", "min_ram": 0, "checksum": null, "owner":
  "56be9abf0ca24193908472465157112f", "is_public": false, "deleted_at":
  null, "properties": {}, "size": 0}]}

  curl -i -X GET -H 'X-Auth-Token:
  MIIO3wYJKoZIhvcNAQcCo...EcKIsfbUbm5A74Gg==' -H 'Content-Type:
  application/octet-stream' -H 'User-Agent: python-glanceclient'
  http://CENSORED:9292/v1/images/2e34a168-62ca-412d-84bb-852fcaf2a391

  HTTP/1.1 200 OK
  content-length: 0
  x-image-meta-id: 2e34a168-62ca-412d-84bb-852fcaf2a391
  date: Mon, 05 May 2014 12:44:07 GMT
  x-image-meta-deleted: False
  x-image-meta-container_format: bare
  x-image-meta-protected: False
  x-image-meta-min_disk: 0
  x-image-meta-created_at: 2014-05-05T12:42:17
  x-image-meta-size: 0
  x-image-meta-status: active
  location: http://CENSORED:9292/v1/images/2e34a168-62ca-412d-84bb-852fcaf2a391
  x-image-meta-is_public: False
  x-image-meta-min_ram: 0
  x-image-meta-owner: 56be9abf0ca24193908472465157112f
  

[Yahoo-eng-team] [Bug 1371118] Re: [OSSA 2015-004] Image file stays in store if image has been deleted during upload (CVE-2014-9684)

2015-02-23 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371118

Title:
  [OSSA 2015-004] Image file stays in store if image has been deleted
  during upload (CVE-2014-9684)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Invalid
Status in Glance juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When I create a new task in v2 to upload an image, it creates the
  image record in the db, sets the status to saving and then begins
  uploading.

  If the image is deleted by the appropriate API call while its content
  is still being uploaded, an exception is raised and is not handled in
  the API code. As a result, the uploaded image file stays in the store
  and clogs it.

  File "/opt/stack/glance/glance/common/scripts/image_import/main.py", line 62, in _execute
    uri)
  File "/opt/stack/glance/glance/common/scripts/image_import/main.py", line 95, in import_image
    new_image = image_repo.get(image_id)
  File "/opt/stack/glance/glance/api/authorization.py", line 106, in get
    image = self.image_repo.get(image_id)
  File "/opt/stack/glance/glance/domain/proxy.py", line 86, in get
    return self.helper.proxy(self.base.get(item_id))
  File "/opt/stack/glance/glance/api/policy.py", line 179, in get
    return super(ImageRepoProxy, self).get(image_id)
  File "/opt/stack/glance/glance/domain/proxy.py", line 86, in get
    return self.helper.proxy(self.base.get(item_id))
  File "/opt/stack/glance/glance/domain/proxy.py", line 86, in get
    return self.helper.proxy(self.base.get(item_id))
  File "/opt/stack/glance/glance/domain/proxy.py", line 86, in get
    return self.helper.proxy(self.base.get(item_id))
  File "/opt/stack/glance/glance/db/__init__.py", line 72, in get
    raise exception.NotFound(msg)
  NotFound: No image found with ID e2285448-a56f-45b1-9e6e-216d2b304967
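
  A rough sketch of the handling this calls for — catch the NotFound and
  remove the freshly stored data so it cannot clog the backend (names are
  illustrative, not the actual patch):

      from glance.common import exception

      def finish_upload(image_repo, image_id, store, location):
          try:
              # raises NotFound if the image was deleted mid-upload
              image = image_repo.get(image_id)
              image.locations.append(location)
              image_repo.save(image)
          except exception.NotFound:
              store.delete(location)  # clean up the orphaned file
              raise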

  This bug is very similar to
  https://bugs.launchpad.net/glance/+bug/1188532, but it relates to the
  task mechanism in v2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415336] Re: type: parameter should be replaced by healthmonitor_type in loadbalancer

2015-02-23 Thread Amandeep
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415336

Title:
  type: parameter should be replaced by healthmonitor_type in
  loadbalancer

Status in OpenStack API documentation site:
  New
Status in Python client library for Neutron:
  In Progress

Bug description:
  As per the v2 API specification, the load balancer health monitor has a 
parameter named "type", which cannot be parsed by the JSON parser.
  So it must be replaced by "healthmonitor_type", as per the OpenDaylight bug 
(https://bugs.opendaylight.org/show_bug.cgi?id=1674).

  Further information related to the lbaas health monitor can be found here:
  
http://docs.openstack.org/api/openstack-network/2.0/content/POST_createHealthMonitor__v2.0_healthmonitors_lbaas_ext_ops_health_monitor.html
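
  For illustration, a create request body after the proposed rename might
  look like this (a sketch only; the wrapper key and the fields other than
  the renamed one are assumptions based on the docs cited above, and the
  values are placeholders):

      health_monitor = {
          "health_monitor": {
              "healthmonitor_type": "HTTP",  # previously "type"
              "delay": 5,
              "timeout": 3,
              "max_retries": 2,
              "pool_id": "POOL_ID",          # placeholder
          }
      }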

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1415336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424548] [NEW] neutron allows creating a network with a false physical network

2015-02-23 Thread Roey Dekel
Public bug reported:

Trying to create a new network with a false --provider:physical_network (a 
typo mistake), Neutron allows the mistake, which was hard to find.
It could check whether the physical network is valid (e.g. against 
/etc/neutron/plugin.ini ).
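
A sketch of the kind of validation being asked for — reject a provider
network whose physical_network is not among the plugin's configured mappings
(hypothetical helper; names are not Neutron's actual code):

    def check_physical_network(physical_network, configured_physnets):
        """configured_physnets would come from /etc/neutron/plugin.ini,
        e.g. the flat_networks / network_vlan_ranges options."""
        if physical_network not in configured_physnets:
            raise ValueError(
                "provider:physical_network '%s' is unknown; configured: %s"
                % (physical_network, ", ".join(sorted(configured_physnets))))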

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424548

Title:
  neutron allows creating a network with a false physical network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Trying to create a new network with a false --provider:physical_network (a 
typo mistake), Neutron allows the mistake, which was hard to find.
  It could check whether the physical network is valid (e.g. against 
/etc/neutron/plugin.ini ).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424557] [NEW] value of action string is not translatable in firewall rules table

2015-02-23 Thread Masco Kaliyamoorthy
Public bug reported:

In firewall rules table, value of action column is not translatable
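
A plausible shape for the fix — route the action value through translatable
display choices on the table column (sketch only; the actual Horizon column
definition may differ):

    from django.utils.translation import ugettext_lazy as _
    from horizon import tables

    ACTION_DISPLAY_CHOICES = (
        ("allow", _("ALLOW")),
        ("deny", _("DENY")),
    )

    class RulesTable(tables.DataTable):
        action = tables.Column("action",
                               verbose_name=_("Action"),
                               display_choices=ACTION_DISPLAY_CHOICES)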

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424557

Title:
  value of action string is not translatable in firewall rules table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In firewall rules table, value of action column is not translatable

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1194688] Re: Devstack uses keystone.middleware.s3_token in swift pipeline

2015-02-23 Thread Christian Schwede
I think we can close this bug for all projects. s3_token has been included
in python-keystoneclient for a while now, and the s3 middleware itself is
available on stackforge: https://github.com/stackforge/swift3

- Closing bug.

** Changed in: swift
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1194688

Title:
  Devstack uses keystone.middleware.s3_token in swift pipeline

Status in devstack - openstack dev environments:
  Confirmed
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Object Storage (Swift):
  Fix Released

Bug description:
  Coming from https://bugs.launchpad.net/keystone/+bug/1193112 devstack
  will use keystone's keystone.middleware.s3token when setting up swift.

  If it is being used externally, then it shouldn't be in
  keystone.middleware, and there are two main options for moving it:

  1. s3token middleware should be moved into python-keystoneclient. 
  This makes it available to other components in the same way that the 
auth_token middleware was moved.

  2. s3token middleware should be moved into swift. 
  There is already a dependency on swift.common.utils in the s3token middleware, 
and it appears that it calls keystone via HTTP, so it should be fine to move. 
Note: this is the first time I've paid much attention to this middleware, so 
please tell me if it needs to be a keystone thing.
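
  For context, a swift proxy pipeline consuming s3token from
  python-keystoneclient looks roughly like this (sketch of a
  proxy-server.conf; the exact filter list varies by deployment):

      [pipeline:main]
      pipeline = catch_errors cache swift3 s3token authtoken keystoneauth proxy-server

      [filter:s3token]
      paste.filter_factory = keystoneclient.middleware.s3_token:filter_factory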

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1194688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424930] [NEW] Compressing incorrect scss files

2015-02-23 Thread Kahou Lei
Public bug reported:

I was trying to compress the css file with some local changes. Since it
is a local change, I made the change under the static folder
(horizon/static) for testing.

But I noticed that the changes are not being picked up at all.

Here are the steps to reproduce.

1. Run python manage.py collectstatic --noinput
2. Under horizon/static folder, open up 
horizon/static/bootstrap/scss/bootstrap/_variables.scss
3. Look for $brand-primary declaration, change the hex value to #00
4. Run python manage.py compress
5. Under horizon/static/dashboard/css, open up the compressed css file.
6. Look for btn-primary; you will notice that it is still using the old value, 
not #00

I spent some time debugging it, and it turns out the compressor is still
importing the scss file from the xstatic installation package under the
.venv folder.

I tried it on devstack and on my local computer, and the issue is the
same.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424930

Title:
  Compressing incorrect scss files

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I was trying to compress the css file with some local changes. Since
  it is a local change, I made the change under the static folder
  (horizon/static) for testing.

  But I noticed that the changes are not being picked up at all.

  Here are the steps to reproduce.

  1. Run python manage.py collectstatic --noinput
  2. Under horizon/static folder, open up 
horizon/static/bootstrap/scss/bootstrap/_variables.scss
  3. Look for $brand-primary declaration, change the hex value to #00
  4. Run python manage.py compress
  5. Under horizon/static/dashboard/css, open up the compressed css file.
  6. Look for btn-primary; you will notice that it is still using the old 
value, not #00

  I spent some time debugging it, and it turns out the compressor is
  still importing the scss file from the xstatic installation package
  under the .venv folder.

  I tried it on devstack and on my local computer, and the issue is the
  same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424930] Re: Compressing incorrect scss files

2015-02-23 Thread Kahou Lei
Turns out the scss compiler will do a glob if we set DEBUG = True.
Therefore, it will never get the file from the static folder.

Once DEBUG is set to False, everything works fine.
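
In other words, make sure the settings used when running manage.py compress
contain something like this (a sketch; DEBUG is the stock Django setting,
and the glob behaviour described above is the scss compiler's):

    # local_settings.py
    DEBUG = False  # with DEBUG = True the scss imports resolve to the
                   # xstatic packages under .venv, not horizon/static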

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424930

Title:
  Compressing incorrect scss files

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I was trying to compress the css file with some local changes. Since
  it is a local change, I made the change under the static folder
  (horizon/static) for testing.

  But I noticed that the changes are not being picked up at all.

  Here are the steps to reproduce.

  1. Run python manage.py collectstatic --noinput
  2. Under horizon/static folder, open up 
horizon/static/bootstrap/scss/bootstrap/_variables.scss
  3. Look for $brand-primary declaration, change the hex value to #00
  4. Run python manage.py compress
  5. Under horizon/static/dashboard/css, open up the compressed css file.
  6. Look for btn-primary; you will notice that it is still using the old 
value, not #00

  I spent some time debugging it, and it turns out the compressor is
  still importing the scss file from the xstatic installation package
  under the .venv folder.

  I tried it on devstack and on my local computer, and the issue is the
  same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424825] Re: Parsing of service catalog should be less error prone

2015-02-23 Thread Lin Hua Cheng
jamielennox: good call, I didn't notice that auth plugins also expose
the get_endpoint() method. We should move to that instead.

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424825

Title:
  Parsing of service catalog should be less error prone

Status in Django OpenStack Auth:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently, parsing of the service catalog is hard-coded and dependent
  on the data structure of the service catalog.

  For example:

  in user.py

  @property
  def available_services_regions(self):
      """Returns list of unique region name values in service catalog."""
      regions = []
      if self.service_catalog:
          for service in self.service_catalog:
              if service['type'] == 'identity':
                  continue
              for endpoint in service['endpoints']:
                  if endpoint['region'] not in regions:
                      regions.append(endpoint['region'])
      return regions

  This code is prone to breakage if the structure of the service catalog
  changes; it should use the public interfaces of the ServiceCatalog
  object from KSC when parsing, rather than directly accessing the
  service_catalog dictionary.
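
  A sketch of the suggested direction, leaning on the catalog's public
  interface instead of raw dicts (the method name follows
  python-keystoneclient's ServiceCatalog; treat the exact signature as an
  assumption):

      def available_services_regions(service_catalog):
          """Collect unique region names via the public interface."""
          regions = set()
          # get_endpoints() returns {service_type: [endpoint_dict, ...]}
          for service_type, endpoints in service_catalog.get_endpoints().items():
              if service_type == 'identity':
                  continue
              for endpoint in endpoints:
                  region = endpoint.get('region')
                  if region:
                      regions.add(region)
          return sorted(regions)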

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1424825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1222900] Re: verbose logging with standard devstack settings

2015-02-23 Thread gordon chung
** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: Confirmed => Invalid

** Changed in: devstack
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1222900

Title:
  verbose logging with standard devstack settings

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Using a standard devstack setup, there seems to be a lot of (invalid)
  error logging. Not sure if this is expected behaviour, but it visually
  seems odd that it logs with [error] on every call even though the
  response seems to be OK:

  [Mon Sep 09 16:01:26 2013] [error] DEBUG:ceilometerclient.common.http:
  [Mon Sep 09 16:01:26 2013] [error] HTTP/1.0 200 OK
  [Mon Sep 09 16:01:26 2013] [error] date: Mon, 09 Sep 2013 16:01:26 GMT
  [Mon Sep 09 16:01:26 2013] [error] content-length: 2
  [Mon Sep 09 16:01:26 2013] [error] content-type: application/json; 
charset=UTF-8
  [Mon Sep 09 16:01:26 2013] [error] server: WSGIServer/0.1 Python/2.7.3
  [Mon Sep 09 16:01:26 2013] [error]
  [Mon Sep 09 16:01:26 2013] [error] []
  [Mon Sep 09 16:01:26 2013] [error]
  [Mon Sep 09 16:01:26 2013] [error] DEBUG:ceilometerclient.common.http:curl -i 
-X GET -H 'X-Auth-Token: 4ccec28a944671cf9d1246de45084742' -H 'Content-Type: 
application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-ceilometerclient' 
http://10.0.2.15:8777/v2/meters/disk.read.requests/statistics?q.op=eq&q.op=ge&q.op=le&q.value=55cae1bb63dd491ca7504624c413a3c2&q.value=2013-08-10+16%3A01%3A20.911959&q.value=2013-09-09+16%3A01%3A20.911972&q.field=project_id&q.field=timestamp&q.field=timestamp
  [Mon Sep 09 16:01:26 2013] [error] DEBUG:ceilometerclient.common.http:
  [Mon Sep 09 16:01:26 2013] [error] HTTP/1.0 200 OK
  [Mon Sep 09 16:01:26 2013] [error] date: Mon, 09 Sep 2013 16:01:26 GMT
  [Mon Sep 09 16:01:26 2013] [error] content-length: 2
  [Mon Sep 09 16:01:26 2013] [error] content-type: application/json; 
charset=UTF-8
  [Mon Sep 09 16:01:26 2013] [error] server: WSGIServer/0.1 Python/2.7.3
  [Mon Sep 09 16:01:26 2013] [error]
  [Mon Sep 09 16:01:26 2013] [error] []
  [Mon Sep 09 16:01:26 2013] [error]
  [Mon Sep 09 16:01:26 2013] [error] DEBUG:ceilometerclient.common.http:curl -i 
-X GET -H 'X-Auth-Token: 4ccec28a944671cf9d1246de45084742' -H 'Content-Type: 
application/json' -H 'Accept: application/json' -H 'User-Agent: 
python-ceilometerclient' 
http://10.0.2.15:8777/v2/meters/disk.write.bytes/statistics?q.op=eq&q.op=ge&q.op=le&q.value=55cae1bb63dd491ca7504624c413a3c2&q.value=2013-08-10+16%3A01%3A20.911959&q.value=2013-09-09+16%3A01%3A20.911972&q.field=project_id&q.field=timestamp&q.field=timestamp
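
  The [error] tag is an artifact of running under mod_wsgi: anything the app
  writes to stderr lands in Apache's error log at error level, so the
  client's DEBUG output gets mislabelled. One way to quiet it from Django
  settings — a sketch using stock logging config keys, with the logger name
  taken from the output above:

      LOGGING = {
          'version': 1,
          'disable_existing_loggers': False,
          'handlers': {
              'null': {'class': 'logging.NullHandler'},
          },
          'loggers': {
              'ceilometerclient.common.http': {
                  'handlers': ['null'],
                  'level': 'INFO',
                  'propagate': False,
              },
          },
      }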

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1222900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp