[Yahoo-eng-team] [Bug 1361030] Re: new dnsmasq requirement exceeds available version on ubuntu 12.04

2014-08-25 Thread Yaguang Tang
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361030

Title:
  new dnsmasq requirement exceeds available version on ubuntu 12.04

Status in OpenStack Neutron (virtual network service):
  New
Status in “neutron” package in Ubuntu:
  New

Bug description:
  The recent commit requiring a dnsmasq version >=2.63
  (https://review.openstack.org/#/c/106299/) means that Juno cannot be
  used with the dnsmasq version available in Ubuntu 12.04, so neutron
  DHCP agents will not work.

  I'm not sure if this was already discussed and agreed upon somewhere,
  but I thought I would create a bug for people to find if they try to
  use 12.04 with Juno.
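
  For illustration, a hedged sketch of what such a version gate amounts
  to (the code below is illustrative, not neutron's actual
  implementation; Ubuntu 12.04 ships dnsmasq 2.59, which would fail it):

  import re
  import subprocess

  MINIMUM_DNSMASQ_VERSION = (2, 63)

  def get_dnsmasq_version():
      # 'dnsmasq --version' prints e.g. "Dnsmasq version 2.59 Copyright ..."
      output = subprocess.check_output(['dnsmasq', '--version'])
      match = re.search(r'version (\d+)\.(\d+)', output.decode())
      if not match:
          raise RuntimeError('unable to parse dnsmasq version')
      return tuple(int(part) for part in match.groups())

  if get_dnsmasq_version() < MINIMUM_DNSMASQ_VERSION:
      raise SystemExit('neutron DHCP agent requires dnsmasq >= 2.63')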

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361044] [NEW] Show/Update/Delete IPSec connection APIs return 500 rather than 404 when resource not found

2014-08-25 Thread Ma, Tianxiao
Public bug reported:

Show/Update/Delete IPSec connection APIs return 500 rather than 404 when
resource not found.

The following APIs are affected:

GET /v2.0/vpn/ipsec-siteconnections/{connection-id}
PUT /v2.0/vpn/ipsec-siteconnections/{connection-id}
DELETE /v2.0/vpn/ipsec-siteconnections/{connection-id}

Issuing the above APIs with a nonexistent {connection-id} returns a 500
Internal Server Error rather than a 404 Not Found.

Please check it, thank you!
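
For reference, the usual fix for this class of bug is to translate the DB
layer's "no row" result into a typed NotFound exception that the API layer
maps to 404; an unhandled lookup error escaping the plugin is what surfaces
as a 500. A minimal sketch with illustrative names (not the actual neutron
VPNaaS code):

class IPsecSiteConnectionNotFound(Exception):
    """Stands in for neutron's NotFound, which the API maps to HTTP 404."""

def get_ipsec_site_connection(connections, conn_id):
    # 'connections' stands in for the DB session/query; a missing row must
    # become the typed exception above instead of escaping as a generic
    # error, which the WSGI layer reports as a 500.
    try:
        return connections[conn_id]
    except KeyError:
        raise IPsecSiteConnectionNotFound('no such connection: %s' % conn_id)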

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361044

Title:
  Show/Update/Delete IPSec connection APIs return 500 rather than 404
  when resource not found

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Show/Update/Delete IPSec connection APIs return 500 rather than 404
  when resource not found.

  The following APIs are affected:

  GET /v2.0/vpn/ipsec-siteconnections/{connection-id}
  PUT /v2.0/vpn/ipsec-siteconnections/{connection-id}
  DELETE /v2.0/vpn/ipsec-siteconnections/{connection-id}

  Issuing the above APIs with a nonexistent {connection-id} returns a 500
  Internal Server Error rather than a 404 Not Found.

  Please check it, thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361081] [NEW] v1&v2 creation interface prompts differences when Image name exceeds limit

2014-08-25 Thread Jin Liu
Public bug reported:

The v1 API gives a clear message, as below:

linux:~/source> glance image-create --name="cirros-0.3.2-x86_64-" --disk-format=qcow2 \
>   --container-format=bare --is-public=true --min-disk=133766616 \
>   --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Request returned failure status 400.
400 Bad Request
Image name too long: 820
(HTTP 400)

The v2 API gives a vague message:
# glance --os-image-api-version 2 image-create --name Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong. --disk-format=qcow2 --container-format=bare < cirros-0.3.2-x86_64-disk.img

Unable to set 'name' to
'Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.'
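
For context, the v1 message comes from an explicit length check. A minimal
sketch of such a check (the 255-character limit matches glance's image name
column; everything else here is illustrative, not glance's actual code):

MAX_IMAGE_NAME_LEN = 255

def validate_image_name(name):
    if len(name) > MAX_IMAGE_NAME_LEN:
        # The v1 style of message; v2 should report the limit the same way
        # instead of the bare "Unable to set 'name'" text.
        raise ValueError('Image name too long: %d' % len(name))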

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361081

Title:
  v1&v2 creation interface prompts differences when Image name exceeds
  limit

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The v1 API gives a clear message, as below:

  linux:~/source> glance image-create --name="cirros-0.3.2-x86_64-" --disk-format=qcow2 \
  >   --container-format=bare --is-public=true --min-disk=133766616 \
  >   --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
  Request returned failure status 400.
  400 Bad Request
  Image name too long: 820
  (HTTP 400)

  The v2 API gives a vague message:
  # glance --os-image-api-version 2 image-create --name Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong. --disk-format=qcow2 --container-format=bare < cirros-0.3.2-x86_64-disk.img

  Unable to set 'name' to
  
'Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenameistoolong.Thisimagenam

[Yahoo-eng-team] [Bug 1191884] Re: Error: Service n-novnc is not running

2014-08-25 Thread DWang
** Also affects: ubuntu
   Importance: Undecided
   Status: New

** Package changed: ubuntu => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1191884

Title:
  Error: Service n-novnc is not running

Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  From a clean install of Fedora 16 (updated) in a VMware VM, I attempt
  to install OpenStack using the DevStack quick installation. After
  running ./stack.sh, I receive the error message: Service n-novnc is
  not running.

  Neither the Horizon URL nor the Keystone URL works. :(

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1191884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361088] [NEW] Get VM metadata information by l3 agent, resource temporarily unavailable

2014-08-25 Thread Dongcan Ye
Public bug reported:

When booting a VM with a name and password assigned, I hit a runtime
error. In the L3 agent configuration file I have enabled
enable_metadata_proxy.

Trace info from l3-agent.log:

2014-08-18 16:56:11.971 3281 ERROR neutron.agent.linux.utils 
[req-3c9892ce-0d64-4cdd-ac27-dd8736076c18 None]
Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-2123c965-410d-4dc0-ab3c-240c0969b525', 'neutron-ns-metadata-proxy', 
'--pid_file=/var/lib/neutron/external/pids/2123c965-410d-4dc0-ab3c-240c0969b525.pid',
 '--metadata_proxy_socket=/var/lib/neutron/metadata_proxy', 
'--router_id=2123c965-410d-4dc0-ab3c-240c0969b525', 
'--state_path=/var/lib/neutron', '--metadata_port=9697', '--verbose', 
'--log-file=neutron-ns-metadata-proxy-2123c965-410d-4dc0-ab3c-240c0969b525.log',
 '--log-dir=/var/log/neutron']
Exit code: 1
Stdout: ''
Stderr: '2014-08-18 16:56:11.908 3861 INFO neutron.common.config [-] 
Logging enabled!\n2014-08-18 16:56:11.916 3861 ERROR neutron.agent.linux.daemon 
[-] Error while handling pidfile: 
/var/lib/neutron/external/pids/2123c965-410d-4dc0-ab3c-240c0969b525.pid\n2014-08-18
 16:56:11.916 3861 TRACE neutron.agent.linux.daemon Traceback (most recent call 
last):\n2014-08-18 16:56:11.916 3861 TRACE neutron.agent.linux.daemon   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/daemon.py", line 37, in 
__init__\n2014-08-18 16:56:11.916 3861 TRACE neutron.agent.linux.daemon 
fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\n2014-08-18 16:56:11.916 
3861 TRACE neutron.agent.linux.daemon IOError: [Errno 11] Resource temporarily 
unavailable\n2014-08-18 16:56:11.916 3861 TRACE neutron.agent.linux.daemon \n'
2014-08-18 16:56:11.972 3281 ERROR neutron.agent.l3_agent 
[req-3c9892ce-0d64-4cdd-ac27-dd8736076c18 None] Failed synchronizing routers
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent Traceback (most 
recent call last):
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py", line 879, in 
_sync_routers_task
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
self._process_routers(routers, all_routers=True)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py", line 812, in 
_process_routers
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
self._router_added(r['id'], r)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py", line 368, in 
_router_added
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
self._spawn_metadata_proxy(ri.router_id, ri.ns_name)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/l3_agent.py", line 409, in 
_spawn_metadata_proxy
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
pm.enable(callback)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/external_process.py", 
line 54, in enable
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
ip_wrapper.netns.execute(cmd)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 466, in 
execute
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent 
check_exit_code=check_exit_code)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent   File 
"/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py", line 78, in 
execute
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
2014-08-18 16:56:11.972 3281 TRACE neutron.agent.l3_agent RuntimeError:

When spawning neutron-ns-metadata-proxy, taking the file lock on the
pidfile (named after the router id) fails. The router already exists
when neutron-ns-metadata-proxy starts.
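
The failing call in neutron.agent.linux.daemon (per the traceback) is a
non-blocking exclusive flock on the pidfile. A minimal illustration of the
locking pattern; the path is made up, and the except branch fires when
another process, e.g. a metadata proxy already running for the same router,
holds the lock:

import fcntl

fd = open('/tmp/router-test.pid', 'a+')  # illustrative pidfile path
try:
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError as err:
    # errno 11 (EAGAIN) is the "Resource temporarily unavailable" seen
    # in the trace: the lock is held by another process.
    print('pidfile already locked: %s' % err)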

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Get VM metadata infomation by l3 agent, resource  temporarily unavailable
+ Get VM metadata information by l3 agent, resource  temporarily unavailable

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361088

Title:
  Get VM metadata information by l3 agent, resource  temporarily
  unavailable

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When booting a VM with a name and password assigned, I hit a runtime
  error. In the L3 agent configuration file I have enabled
  enable_metadata_proxy.

  Trace info from l3-agent.log:

  2014-08-18 16:56:11.971 3281 ERROR neutron.agent.linux.utils 
[req-3c9892ce-0d64-4cdd-ac27-dd8736076c18 None]
  Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-2123c965-410d-4dc0-ab3c-240c0969b525', 'neutron-ns-metadata-proxy', 
'--pid_file=/var/lib/neutron/external/pids/2123c965-410d-4dc0-ab3c-

[Yahoo-eng-team] [Bug 1361097] [NEW] Compute exception text never present when max sched attempt reached

2014-08-25 Thread Joe Cropper
Public bug reported:

When scheduling VMs and the retry logic kicks in, the failed compute
exception text is saved to be displayed for triaging purposes in the
conductor/scheduler logs.  When the conductor tries to display the
exception text when the maximum scheduling attempts have been reached,
the exception always shows 'None' for the exception text.

Snippet from scheduler_utils.py...

 msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
          'for instance %(instance_uuid)s. '
          'Last exception: %(exc)s.')
        % {'max_attempts': max_attempts,
           'instance_uuid': instance_uuid,
           'exc': exc})

That is, 'exc' is erroneously ALWAYS None in this case.
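
A hedged sketch of the likely shape of the fix: read the saved exception
back out of the retry metadata (filter_properties['retry']) rather than
relying on a local 'exc' that is never repopulated. The 'exc' key name is
an assumption here:

def max_attempts_message(max_attempts, instance_uuid, retry):
    # 'retry' mirrors filter_properties['retry']; the key holding the
    # last failure text is assumed to be 'exc'.
    exc = retry.get('exc')
    return ('Exceeded max scheduling attempts %(max_attempts)d '
            'for instance %(instance_uuid)s. '
            'Last exception: %(exc)s.'
            % {'max_attempts': max_attempts,
               'instance_uuid': instance_uuid,
               'exc': exc})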

** Affects: nova
 Importance: Undecided
 Assignee: Joe Cropper (jwcroppe)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Joe Cropper (jwcroppe)

** Description changed:

  When scheduling VMs and the retry logic kicks in, the failed compute
  exception text is saved to be displayed for triaging purposes in the
  conductor/scheduler logs.  When the conductor tries to display the
  exception text when the maximum scheduling attempts have been reached,
  the exception always shows 'None' for the exception text.
+ 
+  msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
+ 'for instance %(instance_uuid)s. '
+ 'Last exception: %(exc)s.')
+ % {'max_attempts': max_attempts,
+ 'instance_uuid': instance_uuid,
+ 'exc': exc})
+ 
+ That is, 'exc' is erroneously ALWAYS None in this case.

** Description changed:

  When scheduling VMs and the retry logic kicks in, the failed compute
  exception text is saved to be displayed for triaging purposes in the
  conductor/scheduler logs.  When the conductor tries to display the
  exception text when the maximum scheduling attempts have been reached,
  the exception always shows 'None' for the exception text.
  
-  msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
+ Snippet from scheduler_utils.py...
+ 
+  msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
  'for instance %(instance_uuid)s. '
  'Last exception: %(exc)s.')
  % {'max_attempts': max_attempts,
  'instance_uuid': instance_uuid,
  'exc': exc})
  
  That is, 'exc' is erroneously ALWAYS None in this case.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361097

Title:
  Compute exception text never present when max sched attempt reached

Status in OpenStack Compute (Nova):
  New

Bug description:
  When scheduling VMs and the retry logic kicks in, the failed compute
  exception text is saved to be displayed for triaging purposes in the
  conductor/scheduler logs.  When the conductor tries to display the
  exception text when the maximum scheduling attempts have been reached,
  the exception always shows 'None' for the exception text.

  Snippet from scheduler_utils.py...

   msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
            'for instance %(instance_uuid)s. '
            'Last exception: %(exc)s.')
          % {'max_attempts': max_attempts,
             'instance_uuid': instance_uuid,
             'exc': exc})

  That is, 'exc' is erroneously ALWAYS None in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1191884] Re: Error: Service n-novnc is not running

2014-08-25 Thread Ian Wienand
** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1191884

Title:
  Error: Service n-novnc is not running

Status in devstack - openstack dev environments:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  From a clean install of Fedora 16 (updated) in a VMware VM, I attempt
  to install OpenStack using the DevStack quick installation. After
  running ./stack.sh, I receive the error message: Service n-novnc is
  not running.

  Neither the Horizon URL nor the Keystone URL works. :(

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1191884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361108] [NEW] novnc failed to start due to unexpected keyword argument

2014-08-25 Thread DWang
Public bug reported:

Hi everybody,

I'm new to DevStack, and during my installation of Icehouse DevStack an
error arose like this:
"
...
ls /opt/stack/status/stack/n-novnc.failure
2014-08-25 07:49:55.739 | + failures=/opt/stack/status/stack/n-novnc.failure
2014-08-25 07:49:55.739 | + for service in '$failures'
2014-08-25 07:49:55.740 | ++ basename /opt/stack/status/stack/n-novnc.failure
2014-08-25 07:49:55.740 | + service=n-novnc.failure
2014-08-25 07:49:55.741 | + service=n-novnc
2014-08-25 07:49:55.741 | + echo 'Error: Service n-novnc is not running'
2014-08-25 07:49:55.741 | Error: Service n-novnc is not running
2014-08-25 07:49:55.741 | + '[' -n /opt/stack/status/stack/n-novnc.failure ']'
2014-08-25 07:49:55.741 | + die 1316 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
2014-08-25 07:49:55.741 | + local exitcode=0
2014-08-25 07:49:55.741 | [Call Trace]
2014-08-25 07:49:55.741 | ./stack.sh:1371:service_check
2014-08-25 07:49:55.741 | /home/darren/devstack/functions-common:1316:die
2014-08-25 07:49:55.743 | [ERROR] /home/darren/devstack/functions-common:1316 
More details about the above errors can be found with screen, with 
./rejoin-stack.sh
2014-08-25 07:49:56.747 | Error on exit
"

Then I went to the corresponding screen to check what was wrong:

"
cd /opt/stack/nova && /usr/local/bin/nova-novncproxy --config-file 
/etc/nova/nova.conf --web /opt/stack/noVNC & echo $! 
>/opt/stack/status/stack/n-novnc.pid; fg || echo "n-novnc failed to start" | 
tee "/opt/stack/status/stack/n-novnc.failure"
[1] 2881
cd /opt/stack/nova && /usr/local/bin/nova-novncproxy --config-file 
/etc/nova/nova.conf --web /opt/stack/noVNC

Traceback (most recent call last):
  File "/usr/local/bin/nova-novncproxy", line 10, in 
sys.exit(main())
  File "/opt/stack/nova/nova/cmd/novncproxy.py", line 87, in main
wrap_cmd=None)
  File "/opt/stack/nova/nova/console/websocketproxy.py", line 38, in __init__
ssl_target=None, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/websockify/websocketproxy.py", 
line 231, in __init__
websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, 
**kwargs)
TypeError: __init__() got an unexpected keyword argument 'no_parent'
n-novnc failed to start
"
I don't know what's going on or how to fix it. Does anyone have any idea?
Thanks!
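
For what it's worth, the traceback is a plain version mismatch: nova's
websocketproxy wrapper passes keyword arguments (here 'no_parent') that the
installed websockify release does not accept. A self-contained illustration
of the failure and one defensive pattern; this is not nova's actual fix,
and installing the websockify version nova expects is the real remedy:

import inspect

class Base(object):  # stands in for websockify's WebSocketServer
    def __init__(self, handler, ssl_target=None):
        self.handler = handler
        self.ssl_target = ssl_target

class Proxy(Base):  # stands in for nova's proxy wrapper
    def __init__(self, handler, **kwargs):
        # Drop keywords the parent does not understand, e.g. 'no_parent'.
        accepted = inspect.getargspec(Base.__init__).args
        kwargs = dict((k, v) for k, v in kwargs.items() if k in accepted)
        super(Proxy, self).__init__(handler, **kwargs)

Proxy('handler', no_parent=True, ssl_target=None)  # no TypeError raised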

** Affects: devstack
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: devstack

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361108

Title:
  novnc failed to start due to unexpected keyword argument

Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi everybody,

  I'm new to DevStack, and during my installation of Icehouse DevStack an
  error arose like this:
  "
  ...
  ls /opt/stack/status/stack/n-novnc.failure
  2014-08-25 07:49:55.739 | + failures=/opt/stack/status/stack/n-novnc.failure
  2014-08-25 07:49:55.739 | + for service in '$failures'
  2014-08-25 07:49:55.740 | ++ basename /opt/stack/status/stack/n-novnc.failure
  2014-08-25 07:49:55.740 | + service=n-novnc.failure
  2014-08-25 07:49:55.741 | + service=n-novnc
  2014-08-25 07:49:55.741 | + echo 'Error: Service n-novnc is not running'
  2014-08-25 07:49:55.741 | Error: Service n-novnc is not running
  2014-08-25 07:49:55.741 | + '[' -n /opt/stack/status/stack/n-novnc.failure ']'
  2014-08-25 07:49:55.741 | + die 1316 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
  2014-08-25 07:49:55.741 | + local exitcode=0
  2014-08-25 07:49:55.741 | [Call Trace]
  2014-08-25 07:49:55.741 | ./stack.sh:1371:service_check
  2014-08-25 07:49:55.741 | /home/darren/devstack/functions-common:1316:die
  2014-08-25 07:49:55.743 | [ERROR] /home/darren/devstack/functions-common:1316 
More details about the above errors can be found with screen, with 
./rejoin-stack.sh
  2014-08-25 07:49:56.747 | Error on exit
  "

  Then I went to the corresponding screen to check what was wrong:

  "
  cd /opt/stack/nova && /usr/local/bin/nova-novncproxy --config-file 
/etc/nova/nova.conf --web /opt/stack/noVNC & echo $! 
>/opt/stack/status/stack/n-novnc.pid; fg || echo "n-novnc failed to start" | 
tee "/opt/stack/status/stack/n-novnc.failure"
  [1] 2881
  cd /opt/stack/nova && /usr/local/bin/nova-novncproxy --config-file 
/etc/nova/nova.conf --web /opt/stack/noVNC

  Traceback (most recent call last):
File "/usr/local/bin/nova-novncproxy", line 10, in 
  sys.exit(main())
File "/opt/stack/nova/nova/cmd/novncproxy.py", line 87, in main
  wrap_cmd=None)
File "/opt/stack/nova/nova/console/websocketproxy.py", line 38, in __init__
  ssl_target=None, *args, **kwargs)
File "/usr/local/l

[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-08-25 Thread Thierry Carrez
** Changed in: swift
   Status: Fix Committed => Fix Released

** Changed in: swift
Milestone: None => 2.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Key Management (Barbican):
  Fix Committed
Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Designate:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in OpenStack Object Storage (Swift):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Fix Committed

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.
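
  For most projects the code change is essentially a one-line import move
  (paste.ini entries change analogously). A hedged sketch of a
  transitional import; both module paths are real:

  try:
      # New, maintained location.
      from keystonemiddleware import auth_token
  except ImportError:
      # Deprecated location; only receives security updates.
      from keystoneclient.middleware import auth_token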

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1342274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361125] [NEW] keystone install failed, meet error 'got an unexpected keyword argument 'namedtuple_as_object''

2014-08-25 Thread Qiu Hua Qiao
Public bug reported:

My env:
OS: RHEL 6.5 x86-64
Python: 2.6.6
Source branch: master

error log:
2014-08-25 07:30:34.256 | 5298 DEBUG migrate.versioning.repository [-] 
Repository /opt/stack/keystone/keystone/common/sql/migrate_repo loaded 
successfully __init__ 
/usr/lib/python2.6/site-packages/migrate/versioning/repository.py:82
2014-08-25 07:30:34.256 | 5298 DEBUG migrate.versioning.repository [-] Config: 
{'db_settings': {'__name__': 'db_settings', 'use_timestamp_numbering': 'False', 
'required_dbs': '[]', 'version_table': 'migrate_version', 'repository_id': 
'keystone'}} __init__ 
/usr/lib/python2.6/site-packages/migrate/versioning/repository.py:83
2014-08-25 07:30:34.256 | 5298 INFO migrate.versioning.api [-] 35 -> 36...
2014-08-25 07:30:35.142 | 5298 CRITICAL keystone [-] TypeError: __init__() got 
an unexpected keyword argument 'namedtuple_as_object'
2014-08-25 07:30:35.142 | 5298 TRACE keystone Traceback (most recent call last):
2014-08-25 07:30:35.142 | 5298 TRACE keystone   File 
"/opt/stack/keystone/bin/keystone-manage", line 44, in 
2014-08-25 07:30:35.142 | 5298 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/cli.py", line 292, in main
2014-08-25 07:30:35.143 | 5298 TRACE keystone CONF.command.cmd_class.main()
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/cli.py", line 74, in main
2014-08-25 07:30:35.143 | 5298 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/common/sql/migration_helpers.py", line 204, in 
sync_database_to_version
2014-08-25 07:30:35.143 | 5298 TRACE keystone _sync_common_repo(version)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/common/sql/migration_helpers.py", line 160, in 
_sync_common_repo
2014-08-25 07:30:35.143 | 5298 TRACE keystone init_version=init_version)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/migration.py", line 79, in 
db_sync
2014-08-25 07:30:35.143 | 5298 TRACE keystone return 
versioning_api.upgrade(engine, repository, version)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 186, in 
upgrade
2014-08-25 07:30:35.143 | 5298 TRACE keystone return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File "<string>", line 2, in 
_migrate
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 
160, in with_engine
2014-08-25 07:30:35.143 | 5298 TRACE keystone return f(*a, **kw)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 366, in 
_migrate
2014-08-25 07:30:35.143 | 5298 TRACE keystone schema.runchange(ver, change, 
changeset.step)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib/python2.6/site-packages/migrate/versioning/schema.py", line 93, in 
runchange
2014-08-25 07:30:35.143 | 5298 TRACE keystone change.run(self.engine, step)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib/python2.6/site-packages/migrate/versioning/script/py.py", line 148, 
in run
2014-08-25 07:30:35.143 | 5298 TRACE keystone script_func(engine)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/common/sql/migrate_repo/versions/036_havana.py", 
line 283, in upgrade
2014-08-25 07:30:35.143 | 5298 TRACE keystone 
domain.insert(migration_helpers.get_default_domain()).execute()
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/common/sql/migration_helpers.py", line 47, in 
get_default_domain
2014-08-25 07:30:35.143 | 5298 TRACE keystone 'extra': 
jsonutils.dumps({'description': 'Owns users and tenants '
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/opt/stack/keystone/keystone/openstack/common/jsonutils.py", line 172, in dumps
2014-08-25 07:30:35.143 | 5298 TRACE keystone return json.dumps(value, 
default=default, **kwargs)
2014-08-25 07:30:35.143 | 5298 TRACE keystone   File 
"/usr/lib64/python2.6/site-packages/simplejson/__init__.py", line 237, in dumps
2014-08-25 07:30:35.143 | 5298 TRACE keystone **kw).encode(obj)
2014-08-25 07:30:35.143 | 5298 TRACE keystone TypeError: __init__() got an 
unexpected keyword argument 'namedtuple_as_object'
2014-08-25 07:30:35.143 | 5298 TRACE keystone
2014-08-25 07:30:35.209 | + exit_trap
2014-08-25 07:30:35.209 | + local r=1
2014-08-25 07:30:35.210 | ++ jobs -p
2014-08-25 07:30:35.210 | + jobs=
2014-08-25 07:30:35.210 | + [[ -n '' ]]
2014-08-25 07:30:35.210 | + kill_spinner
2014-08-25 07:30:35.210 | + '[' '!' -z '' ']'
2014-08-25 07:30:35.210 | + [[ 1 -ne 0 ]]
2014-08-25
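
The root cause appears to be RHEL 6's old simplejson, whose JSONEncoder
predates the 'namedtuple_as_object' option, so passing the keyword raises
exactly this TypeError. A hedged sketch of the feature-detection style of
workaround (illustrative only; newer oslo jsonutils does its own
detection):

import json  # stands in for the simplejson that jsonutils prefers

def safe_dumps(value, **kwargs):
    try:
        return json.dumps(value, namedtuple_as_object=False, **kwargs)
    except TypeError:
        # Old encoder: retry without the unsupported keyword.
        return json.dumps(value, **kwargs)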

[Yahoo-eng-team] [Bug 1361176] [NEW] DB: Some tables still explicitly set mysql_engine

2014-08-25 Thread Salvatore Orlando
Public bug reported:

After commit 466e89970f11918a809aafe8a048d138d4664299, migrations
should no longer explicitly specify the engine used for MySQL.
There are still some migrations which do that, and they should be
amended.
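
An illustrative migration snippet of the requested cleanup, written against
the alembic API neutron migrations use (the table is hypothetical):

import sqlalchemy as sa
from alembic import op

def upgrade():
    op.create_table(
        'example_table',  # hypothetical table name
        sa.Column('id', sa.String(36), primary_key=True),
        sa.Column('tenant_id', sa.String(255)),
        # mysql_engine='InnoDB',  # per the commit above, drop this line
    )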

** Affects: neutron
 Importance: Low
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361176

Title:
  DB: Some tables still explicitly set mysql_engine

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  After commit 466e89970f11918a809aafe8a048d138d4664299, migrations
  should no longer explicitly specify the engine used for MySQL.

  There are still some migrations which do that, and they should be
  amended.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361180] [NEW] nova service disable/enable returns 500 on cell environment

2014-08-25 Thread Rajesh Tailor
Public bug reported:

nova service disable/enable returns 500 in a cells environment. The
actual enable/disable appears to be processed correctly.

It also throws the following error in the nova-api service:
ValueError: invalid literal for int() with base 10: 'region!child@5'
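
The 500 reduces to this: in a cells setup the service id is a composite
string, which the API's integer cast rejects. A two-line reproduction of
the logged error:

cell_service_id = 'region!child@5'
int(cell_service_id)  # ValueError: invalid literal for int() with base 10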

How to reproduce:

$ nova --os-username admin service-list

Output:
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| Id             | Binary           | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:17:36.00 | -               |
| region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:17:29.00 | -               |
| region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:17:30.00 | -               |
| region!child@5 | nova-compute     | region!child@ubuntu | nova     | enabled  | up    | 2014-08-18T06:17:31.00 | -               |
| region@1       | nova-cells       | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:17:29.00 | -               |
| region@2       | nova-cert        | region@ubuntu       | internal | enabled  | down  | 2014-08-18T06:08:28.00 | -               |
| region@3       | nova-consoleauth | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:17:37.00 | -               |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+

$ nova --os-username admin service-disable 'region!child@ubuntu' nova-compute

The above command gives the following error:
ERROR (ClientException): Unknown Error (HTTP 500)

$ nova --os-username admin service-list

Output:
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| Id             | Binary           | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:19:06.00 | -               |
| region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:19:09.00 | -               |
| region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:19:10.00 | -               |
| region!child@5 | nova-compute     | region!child@ubuntu | nova     | disabled | up    | 2014-08-18T06:19:11.00 | -               |
| region@1       | nova-cells       | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:19:09.00 | -               |
| region@2       | nova-cert        | region@ubuntu       | internal | enabled  | down  | 2014-08-18T06:08:28.00 | -               |
| region@3       | nova-consoleauth | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:19:07.00 | -               |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+

$ nova --os-username admin service-enable 'region!child@ubuntu' nova-compute

The above command gives the following error:
ERROR (ClientException): Unknown Error (HTTP 500)

$ nova --os-username admin service-list
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| Id             | Binary           | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:52:37.00 | -               |
| region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:52:40.00 | -               |
| region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:52:41.00 | -               |
| region!child@5 | nova-compute     | region!child@ubuntu | nova     | enabled  | up    | 2014-08-18T06:52:42.00 | -               |
| region@1       | nova-cells       | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:52:40.00 | -               |
| region@2       | nova-cert        | region@ubuntu       | internal | enabled  | down  | 2014-08-18T06:08:28.00 | -               |

[Yahoo-eng-team] [Bug 1361183] [NEW] Shouldn't pass instance.uuid when call _set_instance_error_state

2014-08-25 Thread Alex Xu
Public bug reported:

This was found by code review.

We already changed _set_instance_error_state to take an instance object,
but
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1942
still passes the uuid.

When an exception is raised there, we get the error below:
2014-08-25 19:57:31.784 ERROR oslo.messaging.rpc.dispatcher 
[req-164071ff-f94e-4cb4-999b-2d595c6e77c6 admin admin] Exception during message 
handling: string in
dices must be integers, not str
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dis
patch_and_reply
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dis
patch
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_
dispatch
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher payload)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 296, in decorated_function
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher pass
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 282, in decorated_function
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 346, in decorated_function
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 324, in decorated_function
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 312, in decorated_function
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 1984, in build_and_run_instance
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher node, limits)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/lockutils.py", line 325, in inner
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher return f(*args, 
**kwargs)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 1942, in 
do_build_and_run_instance
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher 
self._set_instance_error_state(context, instance.uuid)
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 644, in 
_set_instance_error_state
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher instance_uuid = 
instance['uuid']
2014-08-25 19:57:31.784 TRACE oslo.messaging.rpc.dispatcher TypeError: string 
indices must be integers, not str
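
A minimal reproduction of the TypeError in the trace, since
_set_instance_error_state indexes its argument as a dict-like instance; the
fix is simply to pass the instance object instead of instance.uuid:

instance_uuid = '00000000-0000-0000-0000-000000000000'  # any uuid string
instance_uuid['uuid']  # TypeError: string indices must be integers, not str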

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 

[Yahoo-eng-team] [Bug 1361184] [NEW] Race condition in imagebackend.Image.cache downloads image several times

2014-08-25 Thread Alvaro Lopez
Public bug reported:

There's a race condition in imagebackend.Image.cache that makes nova
download an image N times when N requests requiring the same image are
scheduled on the same host during the image fetch.

The imagebackend.Image.cache method only synchronizes the image
fetching function, but the whole function should be synchronized (or the
create_image function). When several requests using the same image are
scheduled at the same time, there is no synchronization when nova checks
whether an image already exists, so several requests may conclude that
the image does not exist and each start a download (the actual download
will be synchronized, but the image will be downloaded several times,
once per request).

This can be seen requesting several instances into the same host:

nova boot --image  --flavor  --num-instances=4
--availability-zone :

In the host we can see:

-rw-r--r-- 1 nova nova 5.0G Aug 25 14:21 
243eccfbc52469947665a506145d798670e3fc88
-rw-r--r-- 1 nova nova 1.2G Aug 25 14:22 
243eccfbc52469947665a506145d798670e3fc88.part
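
A hedged sketch of the proposed fix: hold the per-image lock around the
exists-check as well as the fetch, so concurrent boots cannot all observe
"missing" and each start their own download. Nova uses lockutils-based
file locks; the threading.Lock here only keeps the sketch self-contained:

import os
import threading

_image_locks = {}

def cached_fetch(image_id, path, fetch_func):
    lock = _image_locks.setdefault(image_id, threading.Lock())
    with lock:
        # Both the existence check and the fetch happen under the lock;
        # synchronizing only fetch_func reintroduces the race.
        if not os.path.exists(path):
            fetch_func(image_id, path)
    return path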

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361184

Title:
  Race condition in imagebackend.Image.cache downloads image several
  times

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There's a race condition in imagebackend.Image.cache that makes nova
  download an image N times when N requests requiring the same image are
  scheduled on the same host during the image fetch.

  The imagebackend.Image.cache method only synchronizes the image
  fetching function, but the whole function should be synchronized (or
  the create_image function). When several requests using the same image
  are scheduled at the same time, there is no synchronization when nova
  checks whether an image already exists, so several requests may
  conclude that the image does not exist and each start a download (the
  actual download will be synchronized, but the image will be downloaded
  several times, once per request).

  This can be seen requesting several instances into the same host:

  nova boot --image  --flavor  --num-instances=4
  --availability-zone :

  In the host we can see:

  -rw-r--r-- 1 nova nova 5.0G Aug 25 14:21 
243eccfbc52469947665a506145d798670e3fc88
  -rw-r--r-- 1 nova nova 1.2G Aug 25 14:22 
243eccfbc52469947665a506145d798670e3fc88.part

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361186] [NEW] nova service-delete fails for services on non-child (top) cell

2014-08-25 Thread Rajesh Tailor
Public bug reported:

Nova service-delete fails for services on non-child (top) cell.

How to reproduce:

$ nova --os-username admin service-list

+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| Id             | Binary           | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:06:56.00 | -               |
| region!child@2 | nova-compute     | region!child@ubuntu | nova     | enabled  | up    | 2014-08-18T06:06:55.00 | -               |
| region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:06:59.00 | -               |
| region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:06:50.00 | -               |
| region@1       | nova-cells       | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:06:59.00 | -               |
| region@2       | nova-cert        | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:06:58.00 | -               |
| region@3       | nova-consoleauth | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:06:57.00 | -               |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+

Stop one of the services on top cell (e.g. nova-cert).

$ nova --os-username admin service-list

+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| Id             | Binary           | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:09:26.00 | -               |
| region!child@2 | nova-compute     | region!child@ubuntu | nova     | enabled  | up    | 2014-08-18T06:09:25.00 | -               |
| region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:09:19.00 | -               |
| region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled  | up    | 2014-08-18T06:09:20.00 | -               |
| region@1       | nova-cells       | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:09:19.00 | -               |
| region@2       | nova-cert        | region@ubuntu       | internal | enabled  | down  | 2014-08-18T06:08:28.00 | -               |
| region@3       | nova-consoleauth | region@ubuntu       | internal | enabled  | up    | 2014-08-18T06:09:27.00 | -               |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+

Nova service-delete:
$ nova --os-username admin service-delete 'region@2'

Check the request id from nova-api.log:

2014-08-18 15:10:23.491 INFO nova.osapi_compute.wsgi.server [req-
e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] 192.168.101.31 "DELETE
/v2/d66804d2e78549cd8f5efcedd0abecb2/os-services/region@2 HTTP/1.1"
status: 204 len: 179 time: 0.1334069

Error log in n-cell-region service:

2014-08-18 15:10:23.464 ERROR nova.cells.messaging 
[req-e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] Error locating next hop 
for message: 'NoneType' object has no attribute 'count'
2014-08-18 15:10:23.464 TRACE nova.cells.messaging Traceback (most recent call 
last):
2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 406, in process
2014-08-18 15:10:23.464 TRACE nova.cells.messaging next_hop = 
self._get_next_hop()
2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 361, in _get_next_hop
2014-08-18 15:10:23.464 TRACE nova.cells.messaging dest_hops = 
target_cell.count(_PATH_CELL_SEP)
2014-08-18 15:10:23.464 TRACE nova.cells.messaging AttributeError: 'NoneType' 
object has no attribute 'count'
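
The trace reduces to calling a string method on a cell routing path that
was never resolved, because a top-cell service's host has no child-cell
route:

target_cell = None       # no cell path resolved for 'region@2'
target_cell.count('!')   # AttributeError: 'NoneType' object has no attribute 'count'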


Appendix:
In the case of services in a child cell, there are no issues.

$ nova --os-username admin service-list

+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| Id             | Binary           | Host                | Zone     | Status   | State | Updated_at             | Disabled Reason |
+----------------+------------------+---------------------+----------+----------+-------+------------------------+-----------------+
| region!chi

[Yahoo-eng-team] [Bug 1361190] [NEW] Too huge space reserved for tenant_id in the database

2014-08-25 Thread Attila Fazekas
Public bug reported:

Keystone defines the project/user/domain IDs as varchar(64), but neutron
uses varchar(255) on every resource.

The tenant id actually generated by keystone is 32 characters.

Please change the tenant id length to between 32 and 64 characters
(>=32, <=64).

The record size has an impact on memory usage and on db/disk caching
efficiency.
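
An illustration of the requested change on a neutron-style model; the table
is hypothetical and only the column length is the point:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ExampleResource(Base):  # hypothetical resource table
    __tablename__ = 'example_resource'
    id = sa.Column(sa.String(36), primary_key=True)
    # Was sa.String(255); 64 matches keystone's own id columns while still
    # leaving headroom over the 32-char ids keystone actually generates.
    tenant_id = sa.Column(sa.String(64), index=True)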

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361190

Title:
  Too huge space reserved for tenant_id in the database

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Keystone defines the project/user/domain IDs as varchar(64), but
  neutron uses varchar(255) on every resource.

  The tenant id actually generated by keystone is 32 characters.

  Please change the tenant id length to between 32 and 64 characters
  (>=32, <=64).

  The record size has an impact on memory usage and on db/disk caching
  efficiency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361197] [NEW] Glance image-upload truncates the image.

2014-08-25 Thread Jaroslav Henner
Public bug reported:

This may be a DUP of #1240355 but I am not sure.

I have three hosts which are all connected to the same shared NFS
datastore, and glance is configured to use it. I upload an image and then
try to download it, but the download returns an empty string and there is
an ERROR in glance/api.log:


2014-08-25 09:11:01.438 2724 ERROR glance.api.common [893f9ace-0176-42b1
-947f-21b8875547be cffc8c555ebe44bb97b48baabd92e606
94a68b099a674d55986f4ce15fbb946b - - -] Backend storage for image
46b9b487-9c49-47a4-87aa-a11d0b17b6ff disconnected after writing only 0
bytes


The reproducer:

# echo 123456 | glance -d image-create --name foo --disk-format raw --container-format bare
curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: ***' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: raw' -H 'x-image-meta-name: foo' -d '<open file '<stdin>', mode 'r' at 0x7f38eea620c0>' http://172.16.40.19:9292/v1/images

HTTP/1.1 201 Created
content-length: 467
etag: f447b20a7fcbf53a5d5be013ea0b15af
location: 
http://172.16.40.19:9292/v1/images/46b9b487-9c49-47a4-87aa-a11d0b17b6ff
date: Mon, 25 Aug 2014 13:10:38 GMT
content-type: application/json
x-openstack-request-id: req-c63d01a6-6c84-4867-8944-f9113497546c

{"image": {"status": "active", "deleted": false, "container_format":
"bare", "min_ram": 0, "updated_at": "2014-08-25T13:10:30", "owner":
"94a68b099a674d55986f4ce15fbb946b", "min_disk": 0, "is_public": false,
"deleted_at": null, "id": "46b9b487-9c49-47a4-87aa-a11d0b17b6ff",
"size": 7, "virtual_size": null, "name": "foo", "checksum":
"f447b20a7fcbf53a5d5be013ea0b15af", "created_at": "2014-08-25T13:10:20",
"disk_format": "raw", "properties": {}, "protected": false}}

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | f447b20a7fcbf53a5d5be013ea0b15af     |
| container_format | bare                                 |
| created_at       | 2014-08-25T13:10:20                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | 46b9b487-9c49-47a4-87aa-a11d0b17b6ff |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | foo                                  |
| owner            | 94a68b099a674d55986f4ce15fbb946b     |
| protected        | False                                |
| size             | 7                                    |
| status           | active                               |
| updated_at       | 2014-08-25T13:10:30                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@incomplete-read ~(keystone_admin)]# glance -d image-download foo
curl -i -X GET -H 'X-Auth-Token: ***' -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' http://172.16.40.19:9292/v1/images/detail?limit=20&name=foo

HTTP/1.1 200 OK
date: Mon, 25 Aug 2014 13:10:52 GMT
content-length: 470
content-type: application/json; charset=UTF-8
x-openstack-request-id: req-b8f1c595-baf4-4a15-b9ae-407e7db3899a

{"images": [{"status": "active", "deleted_at": null, "name": "foo",
"deleted": false, "container_format": "bare", "created_at":
"2014-08-25T13:10:20", "disk_format": "raw", "updated_at":
"2014-08-25T13:10:30", "min_disk": 0, "protected": false, "id":
"46b9b487-9c49-47a4-87aa-a11d0b17b6ff", "min_ram": 0, "checksum":
"f447b20a7fcbf53a5d5be013ea0b15af", "owner":
"94a68b099a674d55986f4ce15fbb946b", "is_public": false, "virtual_size":
null, "properties": {}, "size": 7}]}

curl -i -X GET -H 'X-Auth-Token: ***' -H 'Content-Type: application/octet-stream' -H 'User-Agent: python-glanceclient' http://172.16.40.19:9292/v1/images/46b9b487-9c49-47a4-87aa-a11d0b17b6ff
''
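
Since the v1 API returns a 200 with an empty body here, a hedged
client-side guard is to compare the bytes received against the image's
advertised size and checksum; the endpoint, token, and function below are
placeholders, not glanceclient API:

import hashlib
import urllib2

def verify_download(url, token, expected_size, expected_md5):
    req = urllib2.Request(url, headers={'X-Auth-Token': token})
    data = urllib2.urlopen(req).read()
    if len(data) != expected_size:
        raise IOError('truncated download: got %d of %d bytes'
                      % (len(data), expected_size))
    if hashlib.md5(data).hexdigest() != expected_md5:
        raise IOError('checksum mismatch')
    return data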

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

** Attachment added: "log"
   https://bugs.launchpad.net/bugs/1361197/+attachment/4186642/+files/log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361197

Title:
  Glance image-upload truncates the image.

Status in OpenStack Compute (Nova):
  New

Bug description:
  This may be a DUP of #1240355 but I am not sure.

  I have three hosts which all are connected to the same shared NFS
  datastore and glance configured to use it. I am uploading the an and
  then try to download, but download returns empty string and there is
  an ERROR in the glance/api.log:


  2014-08-25 09:11:01.438 2724 ERROR glance

[Yahoo-eng-team] [Bug 1155092] Re: Namespace doesn't get deleted on vip/pool removal

2014-08-25 Thread OpenStack Infra
** Changed in: neutron
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1155092

Title:
  Namespace doesn't get deleted on vip/pool removal

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Steps to reproduce (in Horizon):
  1. Create one pool.
  2. Create vip for it.
  3. Create another pool.
  4. Create vip for it.
  5. Delete one vip.
  6. Delete another vip.
  7. Delete two pools at once.
  8. Check qlbaas- namespaces. One namespace of corresponding pool doesn't get 
deleted and is unusable because of cleared permissions in /var/run/netns/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1155092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1173060] Re: RFE: Missing parameter --sql_connection for nova-manage

2014-08-25 Thread Davanum Srinivas (DIMS)
Both cinder-manage and glance-manage no longer support this parameter.
Closing as Won't Fix.

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1173060

Title:
   RFE: Missing parameter --sql_connection for nova-manage

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Command nova-manage is missing --sql_connection parameter as
  (cinder|glance)-manage has. This is useful when you need to "db sync",
  but you don't want to modify config file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1173060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361211] [NEW] Hyper-V agent does not add new VLAN ids to the external port's trunked list on Hyper-V 2008 R2

2014-08-25 Thread Alessandro Pilotti
Public bug reported:

This issue affects Hyper-V 2008 R2 and does not affect Hyper-V 2012 and
above.

The Hyper-V agent is correctly setting the VLAN ID and access mode
settings on the vmswitch ports associated with a VM, but not on the
trunked list associated with an external port. This is a required
configuration.

A workaround consists of setting the external port trunked list to
contain all possible VLAN ids expected to be used in neutron's network
configuration, as provided by the following script:

https://github.com/cloudbase/devstack-hyperv-incubator/blob/master/trunked_vlans_workaround_2008r2.ps1

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: hyper-v

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361211

Title:
  Hyper-V agent does not add new VLAN ids to the external port's trunked
  list on Hyper-V 2008 R2

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  This issue affects Hyper-V 2008 R2 and does not affect Hyper-V 2012
  and above.

  The Hyper-V agent is correctly setting the VLAN ID and access mode
  settings on the vmswitch ports associated with a VM, but not on the
  trunked list associated with an external port. This is a required
  configuration.

  A workaround consists of setting the external port trunked list to
  contain all possible VLAN ids expected to be used in neutron's network
  configuration, as provided by the following script:

  https://github.com/cloudbase/devstack-hyperv-incubator/blob/master/trunked_vlans_workaround_2008r2.ps1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361197] Re: Glance image-upload truncates the image.

2014-08-25 Thread Flavio Percoco
@Arnaud, mind looking at this? I think you know the vmware driver better
than anyone else here :D

** Project changed: nova => glance

** Changed in: glance
 Assignee: (unassigned) => Arnaud Legendre (arnaudleg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361197

Title:
  Glance image-upload truncates the image.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  This may be a DUP of #1240355 but I am not sure.

  I have three hosts which are all connected to the same shared NFS
  datastore, with glance configured to use it. I am uploading an image
  and then try to download it, but the download returns an empty string
  and there is an ERROR in the glance/api.log:


  2014-08-25 09:11:01.438 2724 ERROR glance.api.common
  [893f9ace-0176-42b1-947f-21b8875547be cffc8c555ebe44bb97b48baabd92e606
  94a68b099a674d55986f4ce15fbb946b - - -] Backend storage for image
  46b9b487-9c49-47a4-87aa-a11d0b17b6ff disconnected after writing only 0
  bytes

  
  The reproducer:

  # echo 123456 | glance -d image-create --name foo --disk-format raw 
--container-format bare
  curl -i -X POST -H 'x-image-meta-container_format: bare' -H 
'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 
'x-image-meta-is_public: False' -H 'X-Auth-Token: ***' -H 'Content-Type: 
application/octet-stream' -H 'x-image-meta-disk_format: raw' -H 
'x-image-meta-name: foo' -d '<open file '<stdin>', mode 'r' at 0x7f38eea620c0>' 
http://172.16.40.19:9292/v1/images

  HTTP/1.1 201 Created
  content-length: 467
  etag: f447b20a7fcbf53a5d5be013ea0b15af
  location: 
http://172.16.40.19:9292/v1/images/46b9b487-9c49-47a4-87aa-a11d0b17b6ff
  date: Mon, 25 Aug 2014 13:10:38 GMT
  content-type: application/json
  x-openstack-request-id: req-c63d01a6-6c84-4867-8944-f9113497546c

  {"image": {"status": "active", "deleted": false, "container_format":
  "bare", "min_ram": 0, "updated_at": "2014-08-25T13:10:30", "owner":
  "94a68b099a674d55986f4ce15fbb946b", "min_disk": 0, "is_public": false,
  "deleted_at": null, "id": "46b9b487-9c49-47a4-87aa-a11d0b17b6ff",
  "size": 7, "virtual_size": null, "name": "foo", "checksum":
  "f447b20a7fcbf53a5d5be013ea0b15af", "created_at":
  "2014-08-25T13:10:20", "disk_format": "raw", "properties": {},
  "protected": false}}

  +--+--+
  | Property | Value|
  +--+--+
  | checksum | f447b20a7fcbf53a5d5be013ea0b15af |
  | container_format | bare |
  | created_at   | 2014-08-25T13:10:20  |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | raw  |
  | id   | 46b9b487-9c49-47a4-87aa-a11d0b17b6ff |
  | is_public| False|
  | min_disk | 0|
  | min_ram  | 0|
  | name | foo  |
  | owner| 94a68b099a674d55986f4ce15fbb946b |
  | protected| False|
  | size | 7|
  | status   | active   |
  | updated_at   | 2014-08-25T13:10:30  |
  | virtual_size | None |
  +--+--+
  [root@incomplete-read ~(keystone_admin)]# glance -d image-download foo
  curl -i -X GET -H 'X-Auth-Token: ***' -H 'Content-Type: application/json' -H 
'User-Agent: python-glanceclient' 
http://172.16.40.19:9292/v1/images/detail?limit=20&name=foo

  HTTP/1.1 200 OK
  date: Mon, 25 Aug 2014 13:10:52 GMT
  content-length: 470
  content-type: application/json; charset=UTF-8
  x-openstack-request-id: req-b8f1c595-baf4-4a15-b9ae-407e7db3899a

  {"images": [{"status": "active", "deleted_at": null, "name": "foo",
  "deleted": false, "container_format": "bare", "created_at":
  "2014-08-25T13:10:20", "disk_format": "raw", "updated_at":
  "2014-08-25T13:10:30", "min_disk": 0, "protected": false, "id":
  "46b9b487-9c49-47a4-87aa-a11d0b17b6ff", "min_ram": 0, "checksum":
  "f447b20a7fcbf53a5d5be013ea0b15af", "owner":
  "94a68b099a674d55986f4ce15fbb946b", "is_public": false,
  "virtual_size": null, "properties": {}, "size": 7}]}

  curl -i -X GET -H 'X-Auth-Token: ***' -H 'Content-Type: 
application/octet-stream' -H 'User-Agent: python-glanceclient' 
http://172.16.40.19:9292/v1/images/46b9b487-9c49-47a4-87aa-a11d0b17b6ff
  ''

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1361197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-

[Yahoo-eng-team] [Bug 1361134] Re: run "nova evacuate" command, 'can't be encoded'error

2014-08-25 Thread Tristan Cacqueray
** Also affects: ossa
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361134

Title:
  run "nova evacuate" command, 'can't be encoded'error

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Security Advisories:
  New

Bug description:
  I hit some trouble when running the 'nova evacuate' command.
  Details:
  1. I have two hosts: host1 runs the controller and compute services,
  host2 runs only the compute service.
  2. Stop host2's compute service.
  3. Run the 'nova evacuate' command on host1; it returns an error.
  host1's compute log:

  | created  | 2014-08-17T10:41:50Z 

   |
  | fault| {"message": " can't 
be encoded", "code": 500, "details": "  File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 305, in 
decorated_function |
  |  | return function(self, context, 
*args, **kwargs)
 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2498, in 
rebuild_instance
 |
  |  | image_meta = 
_get_image_meta(context, image_ref) 
   |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 432, in 
_get_image_meta 
  |
  |  | return 
image_service.show(context, image_id)   

 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/image/glance.py\", line 270, in show

 |
  |  | image = 
self._client.call(context, 1, 'get', image_id)  

|
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/image/glance.py\", line 270, in show

 |
  |  | image = 
self._client.call(context, 1, 'get', image_id)  

|
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/image/glance.py\", line 209, in call

 |
  |  | return getattr(client.images, 
method)(*args, **kwargs)
  |
  |  |   File 
\"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py\", line 114, in 
get 
|
  |  | % urllib.quote(str(image_id)))   

   |
  |  |   File 
\"/usr/lib/python2.7/site-packages/glanceclient/common/http.py\", line 289, in 
raw_request 
  |
  |  | return self._http_request(url, 
method, **kwargs)   
 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/glanceclient/common/http.py\", line 191, in 
_http_request   
  |
  |  | kwargs['headers'] = 
self.encode_headers(kwargs['headers'])  
|
  |  |   File 
\"

[Yahoo-eng-team] [Bug 1361230] [NEW] ad248f6 jsonutils sync breaks if simplejson < 2.2.0

2014-08-25 Thread Matt Riedemann
Public bug reported:

This keystone sync:

https://github.com/openstack/keystone/commit/94efafd6d6066f63a9226a6b943d0e86699e7edd

Pulled in this change to jsonutils:

https://review.openstack.org/#/c/113760/

That uses a flag in json.dumps which is only in simplejson >= 2.2.0.  If
you don't have a new enough simplejson the keystone database migrations
fail.

Keystone doesn't even list simplejson as a requirement and oslo-
incubator lists simplejson >= 2.0.9 as a test-requirement since it's
optional in the code.
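
A minimal compatibility sketch (hedged: assuming the flag in question is
simplejson's namedtuple_as_object, added in 2.2.0 — treat the name as
illustrative): fall back to a plain dumps call when the installed
library rejects the keyword.

    import json

    try:
        import simplejson as json_impl  # optional; the flag needs >= 2.2.0
    except ImportError:
        json_impl = json

    def dumps_compat(obj, **kwargs):
        # Older simplejson (and the stdlib json) raise TypeError for
        # the unknown keyword, so retry without it.
        try:
            return json_impl.dumps(obj, namedtuple_as_object=False, **kwargs)
        except TypeError:
            return json_impl.dumps(obj, **kwargs)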

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New


** Tags: oslo

** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361230

Title:
  ad248f6 jsonutils sync breaks if simplejson < 2.2.0

Status in OpenStack Identity (Keystone):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  This keystone sync:

  
https://github.com/openstack/keystone/commit/94efafd6d6066f63a9226a6b943d0e86699e7edd

  Pulled in this change to jsonutils:

  https://review.openstack.org/#/c/113760/

  That uses a flag in json.dumps which is only in simplejson >= 2.2.0.
  If you don't have a new enough simplejson the keystone database
  migrations fail.

  Keystone doesn't even list simplejson as a requirement and oslo-
  incubator lists simplejson >= 2.0.9 as a test-requirement since it's
  optional in the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357379] Re: policy admin_only rules not enforced when changing value to default

2014-08-25 Thread Thierry Carrez
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357379

Title:
  policy admin_only rules not enforced when changing value to default

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron havana series:
  New
Status in neutron icehouse series:
  New
Status in OpenStack Security Advisories:
  Confirmed

Bug description:
  If a non-admin user tries to update an attribute, which should be
  updated only by admin, from a non-default value to default,  the
  update is successfully performed and PolicyNotAuthorized exception is
  not raised.

  The reason is that when a rule to match for a given action is built
  there is a verification that each attribute in a body of the resource
  is present and has a non-default value. Thus, if we try to change some
  attribute's value to default, it is not considered to be explicitly
  set and a corresponding rule is not enforced.
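
  A minimal sketch of the flawed check (illustrative names, not
  neutron's actual code): an attribute whose value equals its default
  is treated as "not provided", so the admin_only rule for it is never
  matched.

      def is_attribute_explicitly_set(attr, body, default):
          # Buggy logic described above: resetting an admin-only
          # attribute to its default slips past policy enforcement,
          # because the value comparison filters it out.
          return attr in body and body[attr] != default

      # A safer check would only test for presence in the request body:
      def is_attribute_present(attr, body):
          return attr in body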

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361238] [NEW] Too huge space reserved for tenant_id/user_id/domain_id in the database

2014-08-25 Thread Attila Fazekas
Public bug reported:

Keystone uses a 32 byte/character domain/user/project id, which contains
a hexadecimal representation of a 128 bit (16 byte) integer.

Please reduce the field size, at minimum to a 32 byte varchar; it helps
the db use its caches (disk / memory / records per physical sector...)
more efficiently.
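
For illustration (not keystone code): the ids in question are
uuid4().hex values, which are exactly 32 hex characters, so a 32-byte
column is enough to hold them.

    import uuid

    generated_id = uuid.uuid4().hex   # keystone-style id
    assert len(generated_id) == 32    # 128 bits as 32 hex characters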

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361238

Title:
  Too huge space reserved for tenant_id/user_id/domain_id in the
  database

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Keystone uses a 32 byte/character domain/user/project id, which
  contains a hexadecimal representation of a 128 bit (16 byte) integer.

  Please reduce the field size, at minimum to a 32 byte varchar; it
  helps the db use its caches (disk / memory / records per physical
  sector...) more efficiently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361230] Re: ad248f6 jsonutils sync breaks if simplejson < 2.2.0 (under python 2.6)

2014-08-25 Thread Matt Riedemann
Since keystone doesn't list simplejson as a requirement (it's optional
in the jsonutils code), maybe this is an invalid bug for keystone.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361230

Title:
  ad248f6 jsonutils sync breaks if simplejson < 2.2.0 (under python 2.6)

Status in OpenStack Identity (Keystone):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Triaged

Bug description:
  This keystone sync:

  
https://github.com/openstack/keystone/commit/94efafd6d6066f63a9226a6b943d0e86699e7edd

  Pulled in this change to jsonutils:

  https://review.openstack.org/#/c/113760/

  That uses a flag in json.dumps which is only in simplejson >= 2.2.0.
  If you don't have a new enough simplejson the keystone database
  migrations fail.

  Keystone doesn't even list simplejson as a requirement and oslo-
  incubator lists simplejson >= 2.0.9 as a test-requirement since it's
  optional in the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361190] Re: Too huge space reserved for tenant_id in the database

2014-08-25 Thread Eugene Nikanorov
That would be a backward-incompatible schema change, relying on the fact
that all resources use keystone tenant_ids only.

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: db

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361190

Title:
  Too huge space reserved for tenant_id in the database

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Keystone defines the project/user/domain IDs as varchar(64), but
  neutron uses varchar(255) on every resource.

  But the tenant id actually generated by keystone is 32 characters.

  Please change the tenant id length to <=64 and >=32.

  The record size has an impact on memory usage and on db/disk caching
  efficiency.
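
  A hedged sketch of the kind of (backward-incompatible) column shrink
  being requested — the table and column names here are illustrative,
  not an actual neutron migration:

      import sqlalchemy as sa
      from alembic import op

      def upgrade():
          # Shrink tenant_id from varchar(255) to varchar(64); safe only
          # if no existing row exceeds 64 characters.
          op.alter_column('networks', 'tenant_id',
                          type_=sa.String(64),
                          existing_type=sa.String(255))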

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361134] Re: run "nova evacuate" command, 'can't be encoded'error

2014-08-25 Thread Thierry Carrez
** Changed in: ossa
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361134

Title:
  run "nova evacuate" command, 'can't be encoded'error

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I hit some trouble when running the 'nova evacuate' command.
  Details:
  1. I have two hosts: host1 runs the controller and compute services,
  host2 runs only the compute service.
  2. Stop host2's compute service.
  3. Run the 'nova evacuate' command on host1; it returns an error.
  host1's compute log:

  | created  | 2014-08-17T10:41:50Z 

   |
  | fault| {"message": " can't 
be encoded", "code": 500, "details": "  File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 305, in 
decorated_function |
  |  | return function(self, context, 
*args, **kwargs)
 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2498, in 
rebuild_instance
 |
  |  | image_meta = 
_get_image_meta(context, image_ref) 
   |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 432, in 
_get_image_meta 
  |
  |  | return 
image_service.show(context, image_id)   

 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/image/glance.py\", line 270, in show

 |
  |  | image = 
self._client.call(context, 1, 'get', image_id)  

|
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/image/glance.py\", line 270, in show

 |
  |  | image = 
self._client.call(context, 1, 'get', image_id)  

|
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/image/glance.py\", line 209, in call

 |
  |  | return getattr(client.images, 
method)(*args, **kwargs)
  |
  |  |   File 
\"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py\", line 114, in 
get 
|
  |  | % urllib.quote(str(image_id)))   

   |
  |  |   File 
\"/usr/lib/python2.7/site-packages/glanceclient/common/http.py\", line 289, in 
raw_request 
  |
  |  | return self._http_request(url, 
method, **kwargs)   
 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/glanceclient/common/http.py\", line 191, in 
_http_request   
  |
  |  | kwargs['headers'] = 
self.encode_headers(kwargs['headers'])  
|
  |  |   File 
\"/usr/lib

[Yahoo-eng-team] [Bug 1361256] [NEW] TypeError when instance fails to build

2014-08-25 Thread Adelina Tuvenie
Public bug reported:

In some cases when an instance fails to build and is set to the ERROR
state by the manager, we get the following error: "TypeError: string
indices must be integers, not str". This happens because the method
_set_instance_error_state is wrongly called with instance.uuid instead
of the instance object in manager.py.

Error trace: http://paste.openstack.org/show/99978/
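
A minimal reproduction of the failure mode (simplified; not nova's
actual code): indexing a str with a string key raises exactly this
TypeError.

    def _set_instance_error_state(context, instance):
        # Expects an instance object/dict and reads fields from it:
        print('setting %s to ERROR' % instance['uuid'])

    # The buggy call site passes instance.uuid (a plain str), so the
    # lookup above becomes 'some-uuid-string'['uuid'] and raises:
    #   TypeError: string indices must be integers, not str
    _set_instance_error_state(None, 'some-uuid-string')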

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361256

Title:
  TypeError when instance fails to build

Status in OpenStack Compute (Nova):
  New

Bug description:
  In some cases when an instance fails to build and is set to the ERROR
  state by the manager, we get the following error: "TypeError: string
  indices must be integers, not str". This happens because the method
  _set_instance_error_state is wrongly called with instance.uuid instead
  of the instance object in manager.py.

  Error trace: http://paste.openstack.org/show/99978/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361264] [NEW] Neutron metering do not check overlap ip range

2014-08-25 Thread Liping Mao
Public bug reported:

I used the neutron metering api to create two identical rules and did
not get any error. I think I should get the "MeteringLabelRuleOverlaps"
error defined in neutron/extensions/metering.py.

BTW,
we define the following error to inherit "NotFound"; I think "Conflict"
is better:

In neutron/extensions/metering.py:

class MeteringLabelRuleOverlaps(qexception.NotFound):
    message = _("Metering label rule with remote_ip_prefix "
                "%(remote_ip_prefix)s overlaps another")
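
A minimal sketch of the missing validation, using netaddr, which
neutron already depends on (function and variable names are
illustrative, not the actual fix):

    import netaddr

    def rule_overlaps(new_prefix, existing_prefixes):
        # Two CIDRs overlap when one contains the other (identical
        # prefixes contain each other, so duplicates are caught too).
        new = netaddr.IPNetwork(new_prefix)
        return any(new in netaddr.IPNetwork(old) or
                   netaddr.IPNetwork(old) in new
                   for old in existing_prefixes)

    # rule_overlaps('10.0.0.0/24', ['10.0.0.0/24'])  -> True
    # rule_overlaps('10.0.1.0/24', ['10.0.0.0/16'])  -> True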

** Affects: neutron
 Importance: Undecided
 Assignee: Liping Mao (limao)
 Status: In Progress

** Description changed:

- I use neutron metering api to create two same rule, I did not get any
+ I use neutron metering api to create two same rules, I did not get any
  error. I think that I should get the error  "MeteringLabelRuleOverlaps"
  defined in neutron/extensions/metering.py
  
  BTW,
  We define the following error inherit "NotFound", I think "Conflict" is 
better:
  
  In neutron/extensions/metering.py
  
  class MeteringLabelRuleOverlaps(qexception.NotFound):
- message = _("Metering label rule with remote_ip_prefix "
- "%(remote_ip_prefix)s overlaps another")
+ message = _("Metering label rule with remote_ip_prefix "
+ "%(remote_ip_prefix)s overlaps another")

** Summary changed:

- Neutron metering overlap ip range
+ Neutron metering do not overlap ip range

** Summary changed:

- Neutron metering do not overlap ip range
+ Neutron metering do not check overlap ip range

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361264

Title:
  Neutron metering do not check overlap ip range

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  I used the neutron metering api to create two identical rules and did
  not get any error. I think I should get the
  "MeteringLabelRuleOverlaps" error defined in
  neutron/extensions/metering.py.

  BTW,
  we define the following error to inherit "NotFound"; I think
  "Conflict" is better:

  In neutron/extensions/metering.py:

  class MeteringLabelRuleOverlaps(qexception.NotFound):
      message = _("Metering label rule with remote_ip_prefix "
                  "%(remote_ip_prefix)s overlaps another")

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361271] [NEW] ipv6 routers don't define a default route

2014-08-25 Thread Emmanuel THIERRY
Public bug reported:

When a router is created between two IPv6 networks with a gateway on one
of these networks, the router fails to set up a default route to the
gateway. This causes IPv6 traffic not to be forwarded to external
networks.

The problem is characterized by the fact that the function
external_gateway_added doesn't handle the IPv6 case. This may be solved
by backporting the commit 9e1c61b93ab3523bc1b5510775c1ee3331097f21 to
Icehouse.

Combined with bugfixing #1355195, this enables functional IPv6 routers.
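
A hedged sketch of the missing behavior (simplified; the real fix is
the backported commit above): install a default route matching the
gateway's address family instead of assuming IPv4.

    import subprocess

    def add_default_route(gateway_ip, device):
        family = '-6' if ':' in gateway_ip else '-4'
        subprocess.check_call(['ip', family, 'route', 'replace',
                               'default', 'via', gateway_ip,
                               'dev', device])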

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361271

Title:
  ipv6 routers don't define a default route

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a router is created between two IPv6 networks with a gateway on
  one of these networks, the router fails to set up a default route to
  the gateway. This causes IPv6 traffic not to be forwarded to external
  networks.

  The problem is characterized by the fact that the function
  external_gateway_added doesn't handle the IPv6 case. This may be
  solved by backporting the commit
  9e1c61b93ab3523bc1b5510775c1ee3331097f21 to Icehouse.

  Combined with bugfixing #1355195, this enables functional IPv6
  routers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361298] [NEW] Can not load instances page if glance service is not configured for region

2014-08-25 Thread Justin Pomeroy
Public bug reported:

If the current region does not have a glance service endpoint configured
then loading up the Instances page will result in being directed to the
error page. The dashboard should be tolerant of this situation.
Tolerance for missing services and unreachable endpoints has been added
recently in another patch and this should match that behavior.
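
A hedged sketch of the tolerance being asked for (the handler and
helper names are illustrative, not horizon's exact API):

    def get_images(self):
        try:
            return api.glance.image_list_detailed(self.request)[0]
        except Exception:
            # Endpoint missing or unreachable: degrade gracefully
            # instead of redirecting to the error page.
            exceptions.handle(self.request,
                              _('Unable to retrieve images.'))
            return []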

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361298

Title:
  Can not load instances page if glance service is not configured for
  region

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If the current region does not have a glance service endpoint
  configured then loading up the Instances page will result in being
  directed to the error page. The dashboard should be tolerant of this
  situation. Tolerance for missing services and unreachable endpoints
  has been added recently in another patch and this should match that
  behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361305] [NEW] Replace hard-coded date formats with Django formats

2014-08-25 Thread Thai Tran
Public bug reported:

We are currently using hard-coded date formats in various D3 charts. We
see things like "%Y-%m-%dT%H:%M:%S" scattered throughout the code. We
should really be using Django's date formats. Here are a few of the
formats available:

 django.formats = {
"DATETIME_FORMAT": "N j, Y, P", 
"DATETIME_INPUT_FORMATS": [
  "%Y-%m-%d %H:%M:%S", 
  "%Y-%m-%d %H:%M:%S.%f", 
  "%Y-%m-%d %H:%M", 
  "%Y-%m-%d", 
  "%m/%d/%Y %H:%M:%S", 
  "%m/%d/%Y %H:%M:%S.%f", 
  "%m/%d/%Y %H:%M", 
  "%m/%d/%Y", 
  "%m/%d/%y %H:%M:%S", 
  "%m/%d/%y %H:%M:%S.%f", 
  "%m/%d/%y %H:%M", 
  "%m/%d/%y"
]

As you can see, the hard-coded format is very similar to
django.formats.DATETIME_INPUT_FORMATS[0]. Why do we want to do this?
Django handles internationalization for us, so it makes sense to take
advantage of this. It will also centralize the hard-coded date formats
into a single place.
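
A small sketch of the proposed replacement — look the format up from
Django instead of hard-coding it (django.utils.formats performs the
locale-aware lookup):

    from django.utils import formats

    # Locale-aware lookup instead of a hard-coded "%Y-%m-%dT%H:%M:%S":
    input_formats = formats.get_format('DATETIME_INPUT_FORMATS')
    parse_format = input_formats[0]   # e.g. "%Y-%m-%d %H:%M:%S"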

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Thai Tran (tqtran)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361305

Title:
  Replace hard-coded date formats with Django formats

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We are currently using hard-coded date formats in various D3 charts.
  We see things like "%Y-%m-%dT%H:%M:%S" scattered throughout the code.
  We should really be using Django's date formats. Here are a few of the
  formats available:

   django.formats = {
  "DATETIME_FORMAT": "N j, Y, P", 
  "DATETIME_INPUT_FORMATS": [
"%Y-%m-%d %H:%M:%S", 
"%Y-%m-%d %H:%M:%S.%f", 
"%Y-%m-%d %H:%M", 
"%Y-%m-%d", 
"%m/%d/%Y %H:%M:%S", 
"%m/%d/%Y %H:%M:%S.%f", 
"%m/%d/%Y %H:%M", 
"%m/%d/%Y", 
"%m/%d/%y %H:%M:%S", 
"%m/%d/%y %H:%M:%S.%f", 
"%m/%d/%y %H:%M", 
"%m/%d/%y"
  ]

  As you can see, the hard-coded format is very similar to
  django.formats.DATETIME_INPUT_FORMATS[0]. Why do we want to do this?
  Django handles internationalization for us, so it makes sense to take
  advantage of this. It will also centralize the hard-coded date formats
  into a single place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361306] [NEW] Keystone doesn't handle user_attribute_id mapping

2014-08-25 Thread Haneef Ali
Public bug reported:

By default keystone gets the id from the first field of the DN. It
doesn't use the user_id_attribute mapping from keystone.conf.

In the following code, the "id" attribute is always the first element
of the DN.
---Relevant code---

    @staticmethod
    def _dn_to_id(dn):
        return utf8_decode(ldap.dn.str2dn(utf8_encode(dn))[0][0][1])

    def _ldap_res_to_model(self, res):
        obj = self.model(id=self._dn_to_id(res[0]))
        # LDAP attribute names may be returned in a different case than
        # they are defined in the mapping, so we need to check for keys
        # in a case-insensitive way.  We use the case specified in the
        # mapping for the model to ensure we have a predictable way of
        # retrieving values later.
        lower_res = dict((k.lower(), v) for k, v in six.iteritems(res[1]))
        for k in obj.known_keys:
            if k in self.attribute_ignore:
                continue

            try:
                v = lower_res[self.attribute_mapping.get(k, k).lower()]
            except KeyError:
                pass
            else:
                try:
                    obj[k] = v[0]
                except IndexError:
                    obj[k] = None

        return obj
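
A hedged sketch of a fix (illustrative only, not the actual keystone
patch): search the parsed DN for the configured id attribute rather
than blindly taking the first RDN.

    import ldap.dn

    def dn_to_id(dn, id_attr):
        # str2dn returns a list of RDNs, each a list of
        # (attr_type, attr_value, flags) tuples.
        for rdn in ldap.dn.str2dn(dn):
            for attr_type, attr_value, _flags in rdn:
                if attr_type.lower() == id_attr.lower():
                    return attr_value
        raise ValueError('%s not found in %s' % (id_attr, dn))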

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361306

Title:
  Keystone doesn't handle user_attribute_id mapping

Status in OpenStack Identity (Keystone):
  New

Bug description:
  By default keystone gets the id from the first field of the DN. It
  doesn't use the user_id_attribute mapping from keystone.conf.

  In the following code, the "id" attribute is always the first element
  of the DN.
  ---Relevant code---

  @staticmethod
  def _dn_to_id(dn):
      return utf8_decode(ldap.dn.str2dn(utf8_encode(dn))[0][0][1])

  def _ldap_res_to_model(self, res):
      obj = self.model(id=self._dn_to_id(res[0]))
      # LDAP attribute names may be returned in a different case than
      # they are defined in the mapping, so we need to check for keys
      # in a case-insensitive way.  We use the case specified in the
      # mapping for the model to ensure we have a predictable way of
      # retrieving values later.
      lower_res = dict((k.lower(), v) for k, v in six.iteritems(res[1]))
      for k in obj.known_keys:
          if k in self.attribute_ignore:
              continue

          try:
              v = lower_res[self.attribute_mapping.get(k, k).lower()]
          except KeyError:
              pass
          else:
              try:
                  obj[k] = v[0]
              except IndexError:
                  obj[k] = None

      return obj

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361307] [NEW] Please port Certificate apis to V3

2014-08-25 Thread Haneef Ali
Public bug reported:

This is a wish-list item.

We need a certificates API to get the PKI certificates in the services.
If we deprecate the v2.0 api, it will be odd if the services rely on the
v2.0 api to fetch certificates.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361307

Title:
  Please port Certificate apis to V3

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is a wish-list item.

  We need a certificates API to get the PKI certificates in the
  services. If we deprecate the v2.0 api, it will be odd if the services
  rely on the v2.0 api to fetch certificates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316475] Re: [SRU] CloudSigma DS for causes hangs when serial console present

2014-08-25 Thread Ben Howard
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1316475

Title:
  [SRU] CloudSigma DS for causes hangs when serial console present

Status in Init scripts for use on cloud images:
  Fix Released
Status in Openstack disk image builder:
  Fix Released
Status in tripleo - openstack on openstack:
  Invalid
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Trusty:
  Fix Released

Bug description:
  SRU Justification

  Impact: The CloudSigma Datasource reads and writes to /dev/ttyS1 if
  present; the Datasource does not have a timeout. On non-CloudSigma
  clouds or systems w/ /dev/ttyS1, cloud-init will block pending a
  response, which may never come. Further, it is dangerous for a default
  datasource to write blindly on a serial console, as other control
  plane software and clouds use /dev/ttyS1 for communication.

  Fix: The patch queries the BIOS to see if the instance is running on
  CloudSigma before querying /dev/ttys1.

  Verification: On both a CloudSigma instance and non-CloudSigma instance with 
/dev/ttys1:
  1. Install new cloud-init
  2. Purge existing cloud-init data (rm -rf /var/lib/cloud)
  3. Run "cloud-init --debug init"
  4. Confirm that CloudSigma provisioned while CloudSigma datasource skipped 
non-CloudSigma instance

  Regression: The risk is low, as this change further restricts where
  the CloudSigma Datasource can run.

  [Original Report]
  DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x7e777c23)
  DHCPREQUEST of 10.22.157.186 on eth2 to 255.255.255.255 port 67 
(xid=0x7e777c23)
  DHCPOFFER of 10.22.157.186 from 10.22.157.149
  DHCPACK of 10.22.157.186 from 10.22.157.149
  bound to 10.22.157.186 -- renewal in 39589 seconds.
   * Starting Mount network filesystems[ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping Mount network filesystems[ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]

  And it stops there.

  I see this on about 10% of deploys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1316475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316475] Re: [SRU] CloudSigma DS for causes hangs when serial console present

2014-08-25 Thread Scott Moser
** Changed in: cloud-init
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1316475

Title:
  [SRU] CloudSigma DS for causes hangs when serial console present

Status in Init scripts for use on cloud images:
  Fix Committed
Status in Openstack disk image builder:
  Fix Released
Status in tripleo - openstack on openstack:
  Invalid
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Trusty:
  Fix Released

Bug description:
  SRU Justification

  Impact: The CloudSigma Datasource reads and writes to /dev/ttyS1 if
  present; the Datasource does not have a timeout. On non-CloudSigma
  clouds or systems w/ /dev/ttyS1, cloud-init will block pending a
  response, which may never come. Further, it is dangerous for a default
  datasource to write blindly on a serial console, as other control
  plane software and clouds use /dev/ttyS1 for communication.

  Fix: The patch queries the BIOS to see if the instance is running on
  CloudSigma before querying /dev/ttys1.

  Verification: On both a CloudSigma instance and non-CloudSigma instance with 
/dev/ttys1:
  1. Install new cloud-init
  2. Purge existing cloud-init data (rm -rf /var/lib/cloud)
  3. Run "cloud-init --debug init"
  4. Confirm that CloudSigma provisioned while CloudSigma datasource skipped 
non-CloudSigma instance

  Regression: The risk is low, as this change further restricts where
  the CloudSigma Datasource can run.

  [Original Report]
  DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x7e777c23)
  DHCPREQUEST of 10.22.157.186 on eth2 to 255.255.255.255 port 67 
(xid=0x7e777c23)
  DHCPOFFER of 10.22.157.186 from 10.22.157.149
  DHCPACK of 10.22.157.186 from 10.22.157.149
  bound to 10.22.157.186 -- renewal in 39589 seconds.
   * Starting Mount network filesystems[ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping Mount network filesystems[ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]

  And it stops there.

  I see this on about 10% of deploys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1316475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361315] [NEW] Navigation causes undefined error when clicked on twice

2014-08-25 Thread Thai Tran
Public bug reported:

Steps to reproduce:
1. Open up your browser console.
2. Click on the project navigation item
3. Click on it again.
4. Uncaught TypeError: undefined is not a function

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: ui

** Tags added: ui

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361315

Title:
  Navigation causes undefined error when clicked on twice

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps to reproduce:
  1. Open up your browser console.
  2. Click on the project navigation item
  3. Click on it again.
  4. Uncaught TypeError: undefined is not a function

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361317] [NEW] Nova creates volume images of wrong size in Ceph RBD

2014-08-25 Thread Craig
Public bug reported:

When spawning a new instance and using the Boot from Image option, nova
will create images of the wrong size on ceph RBD.  This problem shows up
as the image being several orders of magnitude larger than the requested
size in ceph (i.e. 1TB in size instead of 1GB in size).

Configuration notes:
- nova version 2.17.0
- ceph version 0.80.1

This can be verified using the rbd command line tool after spawning an instance.
1) Start an instance of cirros using 1GB boot disk (m1.tiny)
2) rbd -p vms ls  ( get the disk image name)
3) rbd -p vms info <image-name>

The bug comes from a size conversion error in
nova/virt/libvirt/imagebackend.py (lines 657-658, patch file attached)

def _resize(self, volume_name, size):
    size = int(size) * units.Ki

Should be:
def _resize(self, volume_name, size):
    size = int(size)
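
A worked example of the inflation, assuming the caller already passes
the size in bytes (units.Ki == 1024):

    size_gb = 1
    size_bytes = size_gb * 1024 ** 3   # what _resize already receives
    wrong = size_bytes * 1024          # the extra "* units.Ki"
    assert wrong == 1024 ** 4          # 1 TiB instead of the wanted 1 GiB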

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: "Patch for nova - ceph RBD image size bug"
   
https://bugs.launchpad.net/bugs/1361317/+attachment/4186839/+files/imagebackend.py.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361317

Title:
  Nova creates volume images of wrong size in Ceph RBD

Status in OpenStack Compute (Nova):
  New

Bug description:
  When spawning a new instance and using the Boot from Image option,
  nova will create images of the wrong size on ceph RBD.  This problem
  shows up as the image being several orders of magnitude larger than
  the requested size in ceph (i.e. 1TB in size instead of 1GB in size).

  Configuration notes:
  - nova version 2.17.0
  - ceph version 0.80.1

  This can be verified using the rbd command line tool after spawning an 
instance.
  1) Start an instance of cirros using 1GB boot disk (m1.tiny)
  2) rbd -p vms ls  ( get the disk image name)
  3) rbd -p vms info <image-name>

  The bug comes from a size conversion error in
  nova/virt/libvirt/imagebackend.py (lines 657-658, patch file attached)

  def _resize(self, volume_name, size):
      size = int(size) * units.Ki

  Should be:
  def _resize(self, volume_name, size):
      size = int(size)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350949] Re: Tables in tabs show incorrect number of row count

2014-08-25 Thread Thai Tran
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1350949

Title:
  Tables in tabs show incorrect number of row count

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  A few tables in tabs show an incorrect row count. This behavior
  happens after the merge of the Bootstrap changes
  (https://review.openstack.org/107042).

  This behavior is seen in 
  - Project Volumes panel's Volumes Snapshots table  
  - Project Access & Security panel's Key Pairs table, Floating IPs table, API 
Access table

  Empty tables show '-1' for the row count. Non-empty tables show '0'
  for the row count.

  Note: The tables in the first tabs for these two panel are fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1350949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249125] Re: smartos datasource should use meta-data for server identification

2014-08-25 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1249125

Title:
  smartos datasource should use meta-data for server identification

Status in Init scripts for use on cloud images:
  New
Status in “cloud-init” package in Ubuntu:
  Confirmed

Bug description:
  New SmartOS documentation shows that the instance ID should be
  detected through the meta-data.

  http://us-east.manta.joyent.com/jmc/public/mdata/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1249125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356208] Re: groups syntax broken

2014-08-25 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1356208

Title:
  groups syntax broken

Status in Init scripts for use on cloud images:
  Triaged
Status in “cloud-init” package in Ubuntu:
  New

Bug description:
  When adding a user, the 'groups' keyword only works if multiple groups
  are given in useradd syntax, i.e. as a single string without spaces

  groups: adm,sudo,cdrom

  I'm guessing the value is passed directly to the --groups command line option 
of useradd. 
  Any of the more intuitive syntax forms (including the one in the 
documentation at 
http://cloudinit.readthedocs.org/en/latest/topics/examples.html) do not seem to 
work. I've tried

  groups: [adm, sudo, cdrom]

  groups: adm, sudo, cdrom

  groups:
- adm
- sudo

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1356208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355343] Re: cloud-init writes sources.list without newline at end of file

2014-08-25 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Low

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1355343

Title:
  cloud-init writes sources.list without newline at end of file

Status in Init scripts for use on cloud images:
  Confirmed
Status in “cloud-init” package in Ubuntu:
  Confirmed

Bug description:
  This happens on Utopic. Trusty is fine. Steps to reproduce:

  sudo lxc-create -t ubuntu-cloud -n utopic -- -F -s daily -r utopic
  sudo lxc-start-ephemeral -o trusty -n test -d
  sudo lxc-attach -n test -- login -f root

  Examine /etc/apt/sources.list inside the host. For example, "cat
  /etc/apt/sources.list" shows the subsequent prompt at the end of the
  last line instead of on a fresh line. "vim /etc/apt/sources.list" says
  "noeol".

  Expected: newline at end of file, following Unix convention.
  Actual: no newline at end of file.

  Impact: messes up my local script that does trivial manipulations
  (adds a local repository).

  I've examined the sources.list shipped with the image, and it doesn't
  have this problem and the sources.list I see after startup looks
  radically different (matching the template in /etc/cloud/...). So it
  seems to me that the templating mechanism inside cloud-init is causing
  this.
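
  A quick way to check the reported symptom (illustrative snippet, not
  part of cloud-init):

      with open('/etc/apt/sources.list', 'rb') as f:
          f.seek(-1, 2)               # seek to the last byte
          print('newline at EOF:', f.read() == '\n')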

  ProblemType: Bug
  DistroRelease: Ubuntu 14.10
  Package: cloud-init 0.7.6~bzr992-0ubuntu1
  ProcVersionSignature: Ubuntu 3.13.0-7.26-generic 3.13.1
  Uname: Linux 3.13.0-7-generic x86_64
  NonfreeKernelModules: veth xt_conntrack ipt_REJECT ip6table_filter ip6_tables 
ebtable_nat ebtables overlayfs xt_CHECKSUM iptable_mangle ipt_MASQUERADE 
iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack 
xt_tcpudp bridge stp llc iptable_filter ip_tables x_tables dm_crypt kvm_intel 
kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel microcode psmouse 
serio_raw aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper cryptd 
floppy
  ApportVersion: 2.14.5-0ubuntu4
  Architecture: amd64
  Date: Mon Aug 11 18:00:26 2014
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1355343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361328] [NEW] trove not supporting multiple regions

2014-08-25 Thread David Lyle
Public bug reported:

A refactor of the trove code in Horizon broke multi-region support.

** Affects: horizon
 Importance: High
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: trove

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
 Assignee: (unassigned) => David Lyle (david-lyle)

** Changed in: horizon
Milestone: None => juno-3

** Tags added: trov

** Tags removed: trov
** Tags added: trove

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361328

Title:
  trove not supporting multiple regions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A refactor of the trove code in Horizon broke multi-region support.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361328/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350195] Re: TestDashboardBasicOps.test_basic_scenario fails with "KeyError: .btn'>, )>, )>, 0 props>"

2014-08-25 Thread Gary W. Smith
This error is coming from the scss compiler and is likely due to an
error in one of horizon's scss files. The scss files have undergone a
number of changes since this bug was reported. Closing this bug since it
is no longer occurring. Please reopen if it happens again.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1350195

Title:
  TestDashboardBasicOps.test_basic_scenario fails with "KeyError:
  .btn'>,)>,)>, 0 props>"

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  http://logs.openstack.org/56/94556/15/check/check-tempest-dsvm-
  full/fa2d731/console.html

  http://logs.openstack.org/56/94556/15/check/check-tempest-dsvm-
  full/fa2d731/logs/horizon_error.txt.gz

  [Wed Jul 30 00:22:45.269512 2014] [:error] [pid 22945:tid 140269069489920] 
WARNING:py.warnings:RuntimeWarning: Scanning acceleration disabled (_speedups 
not found)!
  [Wed Jul 30 00:22:49.444192 2014] [:error] [pid 22945:tid 140269069489920] 
Internal Server Error: /
  [Wed Jul 30 00:22:49.444263 2014] [:error] [pid 22945:tid 140269069489920] 
Traceback (most recent call last):
  [Wed Jul 30 00:22:49.444393 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 112, in get_response
  [Wed Jul 30 00:22:49.444508 2014] [:error] [pid 22945:tid 140269069489920]
 response = wrapped_callback(request, *callback_args, **callback_kwargs)
  [Wed Jul 30 00:22:49.444541 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py", 
line 36, in inner_func
  [Wed Jul 30 00:22:49.444634 2014] [:error] [pid 22945:tid 140269069489920]
 response = func(*args, **kwargs)
  [Wed Jul 30 00:22:49.444646 2014] [:error] [pid 22945:tid 140269069489920]   
File 
"/opt/stack/new/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py",
 line 45, in splash
  [Wed Jul 30 00:22:49.444758 2014] [:error] [pid 22945:tid 140269069489920]
 return shortcuts.render(request, 'splash.html', {'form': form})
  [Wed Jul 30 00:22:49.444788 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/shortcuts/__init__.py", 
line 53, in render
  [Wed Jul 30 00:22:49.445016 2014] [:error] [pid 22945:tid 140269069489920]
 return HttpResponse(loader.render_to_string(*args, **kwargs),
  [Wed Jul 30 00:22:49.445114 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 
169, in render_to_string
  [Wed Jul 30 00:22:49.445206 2014] [:error] [pid 22945:tid 140269069489920]
 return t.render(context_instance)
  [Wed Jul 30 00:22:49.445218 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
140, in render
  [Wed Jul 30 00:22:49.445225 2014] [:error] [pid 22945:tid 140269069489920]
 return self._render(context)
  [Wed Jul 30 00:22:49.445397 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
134, in _render
  [Wed Jul 30 00:22:49.445409 2014] [:error] [pid 22945:tid 140269069489920]
 return self.nodelist.render(context)
  [Wed Jul 30 00:22:49.445414 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
840, in render
  [Wed Jul 30 00:22:49.445509 2014] [:error] [pid 22945:tid 140269069489920]
 bit = self.render_node(node, context)
  [Wed Jul 30 00:22:49.445519 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 
78, in render_node
  [Wed Jul 30 00:22:49.445650 2014] [:error] [pid 22945:tid 140269069489920]
 return node.render(context)
  [Wed Jul 30 00:22:49.445659 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", 
line 155, in render
  [Wed Jul 30 00:22:49.445870 2014] [:error] [pid 22945:tid 140269069489920]
 return self.render_template(self.template, context)
  [Wed Jul 30 00:22:49.445886 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", 
line 137, in render_template
  [Wed Jul 30 00:22:49.445893 2014] [:error] [pid 22945:tid 140269069489920]
 output = template.render(context)
  [Wed Jul 30 00:22:49.446077 2014] [:error] [pid 22945:tid 140269069489920]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
140, in render
  [Wed Jul 30 00:22:49.446087 2014] [:error] [pid 22945:tid 140269069489920]
 return self._render(context)
  [Wed Jul 30 00:22:49.446095 2014] [:error] [pid 22945:tid 140269069489920]   
File 

[Yahoo-eng-team] [Bug 1345955] Re: KeyError: , 0 props> Failure

2014-08-25 Thread Gary W. Smith
This error is coming from the scss compiler and is likely due to an
error in one of horizon's scss files. The scss files have undergone
substantial changes since this bug was reported, including the bootstrap
update (https://review.openstack.org/#/c/107042/). Closing this bug
since it is likely no longer relevant. Please reopen if it happens
again.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1345955

Title:
  KeyError: , 0 props> Failure

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Seeing a new gate failure that causes tempest to fail:

  http://logs.openstack.org/49/107549/11/check/check-tempest-dsvm-
  postgres-full/2c0808c/logs/testr_results.html.gz

  [Mon Jul 21 04:24:17.823759 2014] [:error] [pid 20477:tid 140176873875200] 
Internal Server Error: /
  [Mon Jul 21 04:24:17.823869 2014] [:error] [pid 20477:tid 140176873875200] 
Traceback (most recent call last):
  [Mon Jul 21 04:24:17.823909 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 112, in get_response
  [Mon Jul 21 04:24:17.823971 2014] [:error] [pid 20477:tid 140176873875200]
 response = wrapped_callback(request, *callback_args, **callback_kwargs)
  [Mon Jul 21 04:24:17.824036 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py", 
line 36, in inner_func
  [Mon Jul 21 04:24:17.824071 2014] [:error] [pid 20477:tid 140176873875200]
 response = func(*args, **kwargs)
  [Mon Jul 21 04:24:17.824131 2014] [:error] [pid 20477:tid 140176873875200]   
File 
"/opt/stack/new/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py",
 line 45, in splash
  [Mon Jul 21 04:24:17.824208 2014] [:error] [pid 20477:tid 140176873875200]
 return shortcuts.render(request, 'splash.html', {'form': form})
  [Mon Jul 21 04:24:17.824446 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/shortcuts/__init__.py", 
line 53, in render
  [Mon Jul 21 04:24:17.824611 2014] [:error] [pid 20477:tid 140176873875200]
 return HttpResponse(loader.render_to_string(*args, **kwargs),
  [Mon Jul 21 04:24:17.824776 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 
169, in render_to_string
  [Mon Jul 21 04:24:17.824807 2014] [:error] [pid 20477:tid 140176873875200]
 return t.render(context_instance)
  [Mon Jul 21 04:24:17.824891 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
140, in render
  [Mon Jul 21 04:24:17.824924 2014] [:error] [pid 20477:tid 140176873875200]
 return self._render(context)
  [Mon Jul 21 04:24:17.824956 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
134, in _render
  [Mon Jul 21 04:24:17.825061 2014] [:error] [pid 20477:tid 140176873875200]
 return self.nodelist.render(context)
  [Mon Jul 21 04:24:17.825091 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
840, in render
  [Mon Jul 21 04:24:17.825124 2014] [:error] [pid 20477:tid 140176873875200]
 bit = self.render_node(node, context)
  [Mon Jul 21 04:24:17.825212 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 
78, in render_node
  [Mon Jul 21 04:24:17.825272 2014] [:error] [pid 20477:tid 140176873875200]
 return node.render(context)
  [Mon Jul 21 04:24:17.825304 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", 
line 155, in render
  [Mon Jul 21 04:24:17.825431 2014] [:error] [pid 20477:tid 140176873875200]
 return self.render_template(self.template, context)
  [Mon Jul 21 04:24:17.825554 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", 
line 137, in render_template
  [Mon Jul 21 04:24:17.825589 2014] [:error] [pid 20477:tid 140176873875200]
 output = template.render(context)
  [Mon Jul 21 04:24:17.825756 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
140, in render
  [Mon Jul 21 04:24:17.825784 2014] [:error] [pid 20477:tid 140176873875200]
 return self._render(context)
  [Mon Jul 21 04:24:17.825838 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
134, in _render
  [Mon Jul 21 04:24:17.825895 2014] [:error] [pid 20477:tid 140176873875200]
 ret

[Yahoo-eng-team] [Bug 1353008] Re: MAAS Provider: LXC did not get DHCP address, stuck in "pending"

2014-08-25 Thread David Britton
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1353008

Title:
  MAAS Provider: LXC did not get DHCP address, stuck in "pending"

Status in Init scripts for use on cloud images:
  New
Status in juju-core:
  Triaged
Status in juju-core 1.20 series:
  Triaged

Bug description:
  Note that after I went onto the system, it *did* have an IP address.

    0/lxc/3:
  agent-state: pending
  instance-id: juju-machine-0-lxc-3
  series: trusty
  hardware: arch=amd64

  cloud-init-output.log snip:

  Cloud-init v. 0.7.5 running 'init' at Mon, 04 Aug 2014 23:57:12 +. Up 
572.29 seconds.
  ci-info: +++Net device info+++
  ci-info: ++--+---+---+---+
  ci-info: | Device |  Up  |  Address  |Mask   | Hw-Address|
  ci-info: ++--+---+---+---+
  ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 | . |
  ci-info: |  eth0  | True | . | . | 00:16:3e:34:aa:57 |
  ci-info: ++--+---+---+---+
  ci-info: !!!Route info 
failed
  Cloud-init v. 0.7.5 running 'modules:config' at Mon, 04 Aug 2014 23:57:12 
+. Up 572.99 seconds.
  Cloud-init v. 0.7.5 running 'modules:final' at Mon, 04 Aug 2014 23:57:14 
+. Up 574.42 seconds.
  Cloud-init v. 0.7.5 finished at Mon, 04 Aug 2014 23:57:14 +. Datasource 
DataSourceNoCloudNet [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net].  Up 
574.54 seconds

  syslog on system, showing DHCPACK 1 second later:

  root@juju-machine-0-lxc-3:/home/ubuntu# grep DHCP /var/log/syslog
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 255.255.255.255 port 67 (xid=0x1687c544)
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPOFFER of 10.96.3.173 from 
10.96.0.10
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10
  Aug  5 05:28:15 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 10.96.0.10 port 67 (xid=0x1687c544)
  Aug  5 05:28:15 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10
  Aug  5 11:15:00 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 10.96.0.10 port 67 (xid=0x1687c544)
  Aug  5 11:15:00 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10

  It appears that in every case, cloud-init init-local has failed very early,
  as visible in the juju logs /var/lib/juju/containers//console.log:

  Traceback (most recent call last):
File "/usr/bin/cloud-init", line 618, in <module>
  sys.exit(main())
File "/usr/bin/cloud-init", line 614, in main
  get_uptime=True, func=functor, args=(name, args))
File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 1875, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/bin/cloud-init", line 491, in status_wrapper
  force=True)
File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 1402, in 
sym_link
  os.symlink(source, link)
  OSError: [Errno 2] No such file or directory
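
The OSError above is exactly what os.symlink raises when the directory
that should contain the link does not exist yet. A minimal sketch of the
defensive behaviour that would avoid the crash (a sketch only, assuming a
makedirs-style guard; cloud-init's own util helpers may differ):

    import os

    def sym_link(source, link):
        # ENOENT from os.symlink usually means the parent directory of
        # `link` is missing, so create it before creating the link.
        parent = os.path.dirname(link)
        if parent and not os.path.isdir(parent):
            os.makedirs(parent)
        os.symlink(source, link)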

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1353008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333106] Re: Tempest:Running test_network_basic_ops scenario in tempest results is failing with internal server error

2014-08-25 Thread Elena Ezhova
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333106

Title:
  Tempest:Running test_network_basic_ops scenario in tempest results is
  failing with internal server error

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Tested on build: 2014.2.dev543.g8bdc649

  Pre-requisite : External network exist.
  Both the instances are created successfully with internal and external 
network  connectivity passed.

  
  neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48 GMT', 
'status': '204', 'content-length': '0', 'x-openstack-request-id': 
'req-c75f44c1-42e1-41ac-a163-8821d78ecddc'}

  tempest.scenario.manager: DEBUG: Deleting {u'status': u'ACTIVE', u'subnets': 
[], u'name': u'network-smoke--1921748135', u'provider:physical_network': None, 
u'admin_state_up': True, u'tenant_id': u'b61abe9a4c8e4e439603941040610d90', 
u'provider:network_type': u'vxlan', u'shared': False, u'id': 
u'9a32273d-fc1c-4b9e-90cc-44702236b173', u'provider:segmentation_id': 1003} 
from shared resources of TestNetworkBasicOps
  neutronclient.client: DEBUG:
  REQ: curl -i 
http://192.0.2.26:9696//v2.0/networks/9a32273d-fc1c-4b9e-90cc-44702236b173.json 
-X DELETE -H "X-Auth-Token: b345c371a7364c8ba4d4e9269c9db9b9" -H "Content-Type: 
application/json" -H "Accept: application/json" -H "User-Agent: 
python-neutronclient"

  neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48
  GMT', 'status': '500', 'content-length': '88', 'content-type':
  'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-
  7b8270d5-6346-4525-a6b4-19a58f400e78'} {"NeutronError": "Request
  Failed: internal server error while processing your request."}

  neutronclient.v2_0.client: DEBUG: Error message: {"NeutronError": "Request 
Failed: internal server error while processing your request."}
  tempest.scenario.manager: DEBUG: Deleting {u'tenant_id': 
u'b61abe9a4c8e4e439603941040610d90', u'name': u'secgroup-smoke--1142035513', 
u'description': u'secgroup-smoke--1142035513 description', 
u'security_group_rules': [{u'remote_group_id': None, u'direction': u'egress', 
u'remote_ip_prefix': None, u'protocol': None, u'tenant_id': 
u'b61abe9a4c8e4e439603941040610d90', u'port_range_max': None, 
u'security_group_id': u'6737784c-ba3a-4c2b-805c-97a69c6ccf4b', 
u'port_range_min': None, u'ethertype': u'IPv4', u'id': 
u'92a73bea-0c43-4490-9470-da2b23013760'}, {u'remote_group_id': None, 
u'direction': u'egress', u'remote_ip_prefix': None, u'protocol': None, 
u'tenant_id': u'b61abe9a4c8e4e439603941040610d90', u'port_range_max': None, 
u'security_group_id': u'6737784c-ba3a-4c2b-805c-97a69c6ccf4b', 
u'port_range_min': None, u'ethertype': u'IPv6', u'id': 
u'406eb703-0d20-4a71-b13f-15ecbe832fbd'}], u'id': 
u'6737784c-ba3a-4c2b-805c-97a69c6ccf4b'} from shared resources of 
TestNetworkBasicOps
  neutronclient.client: DEBUG:
  REQ: curl -i 
http://192.0.2.26:9696//v2.0/security-groups/6737784c-ba3a-4c2b-805c-97a69c6ccf4b.json
 -X DELETE -H "X-Auth-Token: b345c371a7364c8ba4d4e9269c9db9b9" -H 
"Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: 
python-neutronclient"

  neutronclient.client: DEBUG: RESP:{'date': 'Mon, 23 Jun 2014 02:09:48
  GMT', 'status': '204', 'content-length': '0', 'x-openstack-request-
  id': 'req-876d170a-448c-4bb2-b358-adca177d3bd9'}

  - >> end captured logging << -

  --
  Ran 2 tests in 142.676s

  FAILED (errors=2)

  Actual Result :  Just after deleting the subnet , deletion of  network
  throws internal server error. (Attached is the server log of the
  controller error are logged in that file also)

  Adding a small time delay before the network deletion in
/tempest/tempest/api/network/common.py will get rid of this issue:
  class DeletableNetwork(DeletableResource):

      def delete(self):
          time.sleep(3)
          self.client.delete_network(self.id)
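
A fixed sleep only papers over the race, so a bounded retry may be the
more robust workaround; a sketch, with illustrative names, assuming
python-neutronclient's generic exception class:

    import time

    from neutronclient.common import exceptions

    def delete_network_with_retry(client, network_id, attempts=5, delay=3):
        # Retry the delete instead of sleeping unconditionally; the 500
        # only shows up in the window right after the subnet delete.
        for attempt in range(attempts):
            try:
                return client.delete_network(network_id)
            except exceptions.NeutronClientException:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)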

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346932] Re: delete floating ip via neutron port-delete

2014-08-25 Thread Elena Ezhova
Joe, since there was no evidence that the bug exists, I am marking it as
invalid. Please leave a comment if the issue still persists.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346932

Title:
  delete floating ip via neutron port-delete

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When running neutron port-delete  I get a traceback
  referencing:

  2014-07-21 16:34:28.769 31455 TRACE neutron.api.v2.resource DBError:
  (IntegrityError) (1451, 'Cannot delete or update a parent row: a
  foreign key constraint fails (`neutron`.`floatingips`, CONSTRAINT
  `floatingips_ibfk_2` FOREIGN KEY (`floating_port_id`) REFERENCES
  `ports` (`id`))') 'DELETE FROM ports WHERE ports.id = %s'
  ('25c9a306-6f5f-4630-99ec-78893b1e766a',)

  Instead of dumping an unhelpful trace to the logs, shouldn't there be a
  message to the user that they should use the right command to remove
  the floating IP port?
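
A sketch of the friendlier behaviour suggested above, refusing the delete
up front; the mixin, the lookup helper and the use of ValueError are all
illustrative, not neutron's actual classes:

    class FriendlyPortDelete(object):
        """Hypothetical mixin for a plugin with a delete_port method."""

        def delete_port(self, context, port_id):
            # Fail early with a clear message instead of letting the
            # foreign-key constraint surface as a DBError traceback.
            if self._port_backs_floating_ip(context, port_id):  # hypothetical
                raise ValueError(  # stands in for a neutron Conflict error
                    "Port %s backs a floating IP; disassociate or delete "
                    "the floating IP instead of deleting the port." % port_id)
            return super(FriendlyPortDelete, self).delete_port(context, port_id)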

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1345955] Re: KeyError: , 0 props> Failure

2014-08-25 Thread Gary W. Smith
** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1345955

Title:
  KeyError: , 0 props> Failure

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Seeing a new gate failure that causes tempest to fail:

  http://logs.openstack.org/49/107549/11/check/check-tempest-dsvm-
  postgres-full/2c0808c/logs/testr_results.html.gz

  [Mon Jul 21 04:24:17.823759 2014] [:error] [pid 20477:tid 140176873875200] 
Internal Server Error: /
  [Mon Jul 21 04:24:17.823869 2014] [:error] [pid 20477:tid 140176873875200] 
Traceback (most recent call last):
  [Mon Jul 21 04:24:17.823909 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 112, in get_response
  [Mon Jul 21 04:24:17.823971 2014] [:error] [pid 20477:tid 140176873875200]
 response = wrapped_callback(request, *callback_args, **callback_kwargs)
  [Mon Jul 21 04:24:17.824036 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py", 
line 36, in inner_func
  [Mon Jul 21 04:24:17.824071 2014] [:error] [pid 20477:tid 140176873875200]
 response = func(*args, **kwargs)
  [Mon Jul 21 04:24:17.824131 2014] [:error] [pid 20477:tid 140176873875200]   
File 
"/opt/stack/new/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py",
 line 45, in splash
  [Mon Jul 21 04:24:17.824208 2014] [:error] [pid 20477:tid 140176873875200]
 return shortcuts.render(request, 'splash.html', {'form': form})
  [Mon Jul 21 04:24:17.824446 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/shortcuts/__init__.py", 
line 53, in render
  [Mon Jul 21 04:24:17.824611 2014] [:error] [pid 20477:tid 140176873875200]
 return HttpResponse(loader.render_to_string(*args, **kwargs),
  [Mon Jul 21 04:24:17.824776 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 
169, in render_to_string
  [Mon Jul 21 04:24:17.824807 2014] [:error] [pid 20477:tid 140176873875200]
 return t.render(context_instance)
  [Mon Jul 21 04:24:17.824891 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
140, in render
  [Mon Jul 21 04:24:17.824924 2014] [:error] [pid 20477:tid 140176873875200]
 return self._render(context)
  [Mon Jul 21 04:24:17.824956 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
134, in _render
  [Mon Jul 21 04:24:17.825061 2014] [:error] [pid 20477:tid 140176873875200]
 return self.nodelist.render(context)
  [Mon Jul 21 04:24:17.825091 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
840, in render
  [Mon Jul 21 04:24:17.825124 2014] [:error] [pid 20477:tid 140176873875200]
 bit = self.render_node(node, context)
  [Mon Jul 21 04:24:17.825212 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 
78, in render_node
  [Mon Jul 21 04:24:17.825272 2014] [:error] [pid 20477:tid 140176873875200]
 return node.render(context)
  [Mon Jul 21 04:24:17.825304 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", 
line 155, in render
  [Mon Jul 21 04:24:17.825431 2014] [:error] [pid 20477:tid 140176873875200]
 return self.render_template(self.template, context)
  [Mon Jul 21 04:24:17.825554 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", 
line 137, in render_template
  [Mon Jul 21 04:24:17.825589 2014] [:error] [pid 20477:tid 140176873875200]
 output = template.render(context)
  [Mon Jul 21 04:24:17.825756 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
140, in render
  [Mon Jul 21 04:24:17.825784 2014] [:error] [pid 20477:tid 140176873875200]
 return self._render(context)
  [Mon Jul 21 04:24:17.825838 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
134, in _render
  [Mon Jul 21 04:24:17.825895 2014] [:error] [pid 20477:tid 140176873875200]
 return self.nodelist.render(context)
  [Mon Jul 21 04:24:17.825987 2014] [:error] [pid 20477:tid 140176873875200]   
File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 
840, in render
  [Mon Jul 21 04:24:17.826034 2014] [:error] [pid 20477:tid 140176873875200]
 bit = self.render_node(node, context)
  [Mon Jul 21 04:2

[Yahoo-eng-team] [Bug 1361337] [NEW] keystone.tests.test_serializer.XmlSerializerTestCase.test_collection_member random fails; lxml hashseed?

2014-08-25 Thread Matt Riedemann
Public bug reported:

This is in the gate:

http://logs.openstack.org/19/111519/4/gate/gate-keystone-
python26/7003102/console.html.gz#_2014-08-22_05_00_00_019

2014-08-22 05:00:00.019 | FAIL: 
keystone.tests.test_serializer.XmlSerializerTestCase.test_collection_member
2014-08-22 05:00:00.019 | tags: worker-0
2014-08-22 05:00:00.019 | 
--
2014-08-22 05:00:00.019 | pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}
2014-08-22 05:00:00.019 | 
2014-08-22 05:00:00.019 | Traceback (most recent call last):
2014-08-22 05:00:00.019 |   File "keystone/tests/test_serializer.py", line 253, 
in test_collection_member
2014-08-22 05:00:00.019 | self.assertSerializeDeserialize(d, xml)
2014-08-22 05:00:00.019 |   File "keystone/tests/test_serializer.py", line 37, 
in assertSerializeDeserialize
2014-08-22 05:00:00.019 | ksmatchers.XMLEquals(xml))
2014-08-22 05:00:00.020 |   File 
"/home/jenkins/workspace/gate-keystone-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 423, in assertThat
2014-08-22 05:00:00.020 | raise mismatch_error
2014-08-22 05:00:00.020 | MismatchError: expected = http://docs.openstack.org/identity/api/v2.0"; attribute="value">
2014-08-22 05:00:00.020 |   
2014-08-22 05:00:00.020 | http://localhost:5000/v3/objects/abc123def"; rel="self"/>
2014-08-22 05:00:00.020 | http://localhost:5000/v3/anotherobjs/123"; rel="anotherobj"/>
2014-08-22 05:00:00.020 |   
2014-08-22 05:00:00.020 | 
2014-08-22 05:00:00.020 | 
2014-08-22 05:00:00.020 | actual = http://docs.openstack.org/identity/api/v2.0"; attribute="value">
2014-08-22 05:00:00.020 |   
2014-08-22 05:00:00.020 | http://localhost:5000/v3/anotherobjs/123"; rel="anotherobj"/>
2014-08-22 05:00:00.021 | http://localhost:5000/v3/objects/abc123def"; rel="self"/>
2014-08-22 05:00:00.021 |   
2014-08-22 05:00:00.021 | 

This is probably due to running with the latest tox and tests using lxml,
which is not hash-seed safe, so the unit tests need to be updated to account
for randomly ordered results.  Tempest had a similar problem last week.
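
One way to make the comparison order-insensitive is to sort sibling
elements before comparing; a sketch only, not the actual
ksmatchers.XMLEquals implementation:

    from lxml import etree

    def normalized(xml_text):
        # Sort every element's children by tag and attributes so the
        # result no longer depends on dict/hash ordering.
        root = etree.fromstring(xml_text)
        for element in root.iter():
            element[:] = sorted(
                element, key=lambda e: (e.tag, sorted(e.attrib.items())))
        return etree.tostring(root)

    # normalized(expected) == normalized(actual) then holds for both of
    # the orderings shown in the failure above.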

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: testing xml

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361337

Title:
  keystone.tests.test_serializer.XmlSerializerTestCase.test_collection_member
  random fails; lxml hashseed?

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is in the gate:

  http://logs.openstack.org/19/111519/4/gate/gate-keystone-
  python26/7003102/console.html.gz#_2014-08-22_05_00_00_019

  2014-08-22 05:00:00.019 | FAIL: 
keystone.tests.test_serializer.XmlSerializerTestCase.test_collection_member
  2014-08-22 05:00:00.019 | tags: worker-0
  2014-08-22 05:00:00.019 | 
--
  2014-08-22 05:00:00.019 | pythonlogging:'': {{{Adding cache-proxy 
'keystone.tests.test_cache.CacheIsolatingProxy' to backend.}}}
  2014-08-22 05:00:00.019 | 
  2014-08-22 05:00:00.019 | Traceback (most recent call last):
  2014-08-22 05:00:00.019 |   File "keystone/tests/test_serializer.py", line 
253, in test_collection_member
  2014-08-22 05:00:00.019 | self.assertSerializeDeserialize(d, xml)
  2014-08-22 05:00:00.019 |   File "keystone/tests/test_serializer.py", line 
37, in assertSerializeDeserialize
  2014-08-22 05:00:00.019 | ksmatchers.XMLEquals(xml))
  2014-08-22 05:00:00.020 |   File 
"/home/jenkins/workspace/gate-keystone-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 423, in assertThat
  2014-08-22 05:00:00.020 | raise mismatch_error
  2014-08-22 05:00:00.020 | MismatchError: expected = http://docs.openstack.org/identity/api/v2.0"; attribute="value">
  2014-08-22 05:00:00.020 |   
  2014-08-22 05:00:00.020 | http://localhost:5000/v3/objects/abc123def"; rel="self"/>
  2014-08-22 05:00:00.020 | http://localhost:5000/v3/anotherobjs/123"; rel="anotherobj"/>
  2014-08-22 05:00:00.020 |   
  2014-08-22 05:00:00.020 | 
  2014-08-22 05:00:00.020 | 
  2014-08-22 05:00:00.020 | actual = http://docs.openstack.org/identity/api/v2.0"; attribute="value">
  2014-08-22 05:00:00.020 |   
  2014-08-22 05:00:00.020 | http://localhost:5000/v3/anotherobjs/123"; rel="anotherobj"/>
  2014-08-22 05:00:00.021 | http://localhost:5000/v3/objects/abc123def"; rel="self"/>
  2014-08-22 05:00:00.021 |   
  2014-08-22 05:00:00.021 | 

  This is probably due to running with latest tox and tests using lxml
  which is not hash safe, so the unit tests need to be updated to
  account for random order results.  Tempest had a similar problem last
  week.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-te

[Yahoo-eng-team] [Bug 1361357] [NEW] metadata service performance regression ~100x

2014-08-25 Thread Scott Moser
Public bug reported:

The fix for bug 1325128 [1] included changes to the metadata service
seen at:
   trunk[2]: 4a60c6a655006b2882331844664fac5cf67c5f34
   icehouse [3]: 9f59ca751f1a392ef24d8ab73a7bf5ce9655017e

The new code is around 100x slower.  That slowdown causes
excessive load on the metadata server if an instance crawls the metadata
service (which generates dozens of HTTP requests).

Both cloud-init and CirrOS's boot code crawl that meta-data service.  The
end result is that Linux instances' boot time is dramatically affected.  A
typical Ubuntu boot takes < 10 seconds to reach user code, but
after this regression, I'm seeing boots in the range of 30 to 45 seconds.

--
[1] http://pad.lv/1325128
[2] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4a60c6a655006b2882331844664fac5cf67c5f34
[3] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9f59ca751f1a392ef24d8ab73a7bf5ce9655017e
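
A rough way to quantify the slowdown from inside a guest, assuming the
usual metadata endpoint is reachable (Python 2 era, hence urllib2):

    import time
    import urllib2

    BASE = 'http://169.254.169.254/latest/meta-data/'
    paths = ['', 'instance-id', 'local-ipv4', 'public-keys/']

    start = time.time()
    for path in paths:
        # each request hits the nova metadata service once
        urllib2.urlopen(BASE + path).read()
    print('%d requests in %.2fs' % (len(paths), time.time() - start))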

** Affects: cloud-archive
 Importance: Medium
 Status: Confirmed

** Affects: cloud-archive/icehouse
 Importance: Medium
 Status: Confirmed

** Affects: cloud-archive/juno
 Importance: Undecided
 Status: Confirmed

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: Medium
 Status: New

** Affects: nova (Ubuntu Trusty)
 Importance: Medium
 Status: Confirmed

** Affects: nova (Ubuntu Utopic)
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361357

Title:
  metadata service performance regression ~100x

Status in Ubuntu Cloud Archive:
  Confirmed
Status in ubuntu-cloud-archive icehouse series:
  Confirmed
Status in ubuntu-cloud-archive juno series:
  Confirmed
Status in OpenStack Compute (Nova):
  New
Status in “nova” package in Ubuntu:
  New
Status in “nova” source package in Trusty:
  Confirmed
Status in “nova” source package in Utopic:
  New

Bug description:
  The fix for bug 1325128 [1] included changes to the metadata service
  seen at:
 trunk[2]: 4a60c6a655006b2882331844664fac5cf67c5f34
 icehouse [3]: 9f59ca751f1a392ef24d8ab73a7bf5ce9655017e

  The new code is around 100x slower.  That slowdown causes
  excessive load on the metadata server if an instance crawls the metadata
  service (which generates dozens of HTTP requests).

  Both cloud-init and CirrOS's boot code crawl that meta-data service.  The
  end result is that Linux instances' boot time is dramatically affected.  A
  typical Ubuntu boot takes < 10 seconds to reach user code, but
  after this regression, I'm seeing boots in the range of 30 to 45 seconds.

  --
  [1] http://pad.lv/1325128
  [2] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4a60c6a655006b2882331844664fac5cf67c5f34
  [3] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9f59ca751f1a392ef24d8ab73a7bf5ce9655017e

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1361357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361366] [NEW] all one convergence IPv6 tests have to be skipped explicitly

2014-08-25 Thread Kevin Benton
Public bug reported:

The One Convergence plugin doesn't currently support IPv6, so every new
IPv6 test has to be explicitly skipped in the plugin's tests. This is a
burden on IPv6 developers. As an interim measure until v6 support is added,
a way to skip IPv6 tests by default should be added.
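
A minimal sketch of such a default skip, assuming a mixin shared by the
plugin's test cases (names are illustrative; skipTest comes from the
standard unittest.TestCase the mixin is combined with):

    class SkipIPv6TestsMixin(object):
        """Skip IPv6 tests in one place instead of one-by-one."""

        def _skip_if_ipv6(self, ip_version):
            if ip_version == 6:
                self.skipTest('IPv6 is not yet supported by the '
                              'One Convergence plugin')

    # the plugin's shared subnet/port helpers would then call
    # self._skip_if_ipv6(subnet['ip_version']) before doing any work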

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361366

Title:
  all one convergence IPv6 tests have to be skipped explicitly

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The One Convergence plugin doesn't currently support IPv6, so every new
  IPv6 test has to be explicitly skipped in the plugin's tests. This is
  a burden on IPv6 developers. As an interim measure until v6 support is
  added, a way to skip IPv6 tests by default should be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361357] Re: metadata service performance regression ~100x

2014-08-25 Thread Scott Moser
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/icehouse
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/juno
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/icehouse
   Status: New => Confirmed

** Changed in: cloud-archive/juno
   Status: New => Confirmed

** Changed in: nova (Ubuntu Trusty)
   Status: New => Confirmed

** Changed in: nova (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Utopic)
   Importance: Undecided => Medium

** Changed in: cloud-archive/icehouse
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Utopic)
   Status: New => Confirmed

** Changed in: cloud-archive/juno
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361357

Title:
  metadata service performance regression ~100x

Status in Ubuntu Cloud Archive:
  Confirmed
Status in ubuntu-cloud-archive icehouse series:
  Confirmed
Status in ubuntu-cloud-archive juno series:
  Confirmed
Status in OpenStack Compute (Nova):
  New
Status in “nova” package in Ubuntu:
  Confirmed
Status in “nova” source package in Trusty:
  Confirmed
Status in “nova” source package in Utopic:
  Confirmed

Bug description:
  The fix for bug 1325128 [1] included changes to the metadata service
  seen at:
 trunk[2]: 4a60c6a655006b2882331844664fac5cf67c5f34
 icehouse [3]: 9f59ca751f1a392ef24d8ab73a7bf5ce9655017e

  The new code is around 100x slower.  That slowdown causes
  excessive load on the metadata server if an instance crawls the metadata
  service (which generates dozens of HTTP requests).

  Both cloud-init and CirrOS's boot code crawl that meta-data service.  The
  end result is that Linux instances' boot time is dramatically affected.  A
  typical Ubuntu boot takes < 10 seconds to reach user code, but
  after this regression, I'm seeing boots in the range of 30 to 45 seconds.

  --
  [1] http://pad.lv/1325128
  [2] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4a60c6a655006b2882331844664fac5cf67c5f34
  [3] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9f59ca751f1a392ef24d8ab73a7bf5ce9655017e

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1361357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361378] [NEW] "MySQL server has gone away" again

2014-08-25 Thread Dolph Mathews
Public bug reported:

This is a regression of an old issue, which I thought was resolved by
the "SELECT 1;" hack, but perhaps recently reintroduced with oslo.db?

[Mon Aug 25 14:30:54.403538 2014] [:error] [pid 25778:tid 139886259214080] 
25778 ERROR keystone.common.wsgi [-] (OperationalError) (2003, "Can't connect 
to MySQL server on '127.0.0.1' (111)") None None
[Mon Aug 25 14:30:54.403562 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi Traceback (most recent call last):
[Mon Aug 25 14:30:54.403570 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/common/wsgi.py", line 214, in __call__
[Mon Aug 25 14:30:54.403575 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi result = method(context, **params)
[Mon Aug 25 14:30:54.403581 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/token/controllers.py", line 99, in 
authenticate
[Mon Aug 25 14:30:54.403589 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi context, auth)
[Mon Aug 25 14:30:54.403594 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/token/controllers.py", line 308, in 
_authenticate_local
[Mon Aug 25 14:30:54.403600 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi username, CONF.identity.default_domain_id)
[Mon Aug 25 14:30:54.403606 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/core.py", line 182, in wrapper
[Mon Aug 25 14:30:54.403612 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return f(self, *args, **kwargs)
[Mon Aug 25 14:30:54.403618 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/core.py", line 193, in wrapper
[Mon Aug 25 14:30:54.403624 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return f(self, *args, **kwargs)
[Mon Aug 25 14:30:54.403630 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/core.py", line 579, in 
get_user_by_name
[Mon Aug 25 14:30:54.403637 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi ref = driver.get_user_by_name(user_name, 
domain_id)
[Mon Aug 25 14:30:54.403644 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/opt/stack/new/keystone/keystone/identity/backends/sql.py", line 140, in 
get_user_by_name
[Mon Aug 25 14:30:54.403650 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi user_ref = query.one()
[Mon Aug 25 14:30:54.403656 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2310, in one
[Mon Aug 25 14:30:54.403662 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi ret = list(self)
[Mon Aug 25 14:30:54.403667 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__
[Mon Aug 25 14:30:54.403673 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return self._execute_and_instances(context)
[Mon Aug 25 14:30:54.403680 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2366, in 
_execute_and_instances
[Mon Aug 25 14:30:54.403731 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi close_with_result=True)
[Mon Aug 25 14:30:54.403740 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2357, in 
_connection_from_session
[Mon Aug 25 14:30:54.403746 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi **kw)
[Mon Aug 25 14:30:54.403752 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 799, in 
connection
[Mon Aug 25 14:30:54.403757 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi close_with_result=close_with_result)
[Mon Aug 25 14:30:54.403763 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 805, in 
_connection_for_bind
[Mon Aug 25 14:30:54.403769 2014] [:error] [pid 25778:tid 139886259214080] 
25778 TRACE keystone.common.wsgi return engine.contextu
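
If the old "SELECT 1;" ping really was lost in the move to oslo.db, the
standard SQLAlchemy pessimistic-disconnect recipe is one way to restore
it; a sketch with an illustrative connection URL, not keystone's actual
wiring:

    from sqlalchemy import create_engine, event, exc, select

    engine = create_engine('mysql://keystone:secret@127.0.0.1/keystone',
                           pool_recycle=3600)

    @event.listens_for(engine, 'engine_connect')
    def ping_connection(connection, branch):
        if branch:
            # a sub-connection of an already-pinged connection
            return
        try:
            connection.scalar(select([1]))  # the classic "SELECT 1" probe
        except exc.DBAPIError as err:
            if err.connection_invalidated:
                # the pool has discarded the dead connection; the retry
                # runs on a freshly established one
                connection.scalar(select([1]))
            else:
                raise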

[Yahoo-eng-team] [Bug 1325143] Re: Eliminate use of with_lockmode('update')

2014-08-25 Thread Morgan Fainberg
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Status: New => In Progress

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1325143

Title:
  Eliminate use of with_lockmode('update')

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  In Progress

Bug description:
  As discussed here: 
http://lists.openstack.org/pipermail/openstack-dev/2014-May/035264.html
   the use of "with_lockmode('update')" can cause a number of issues when run 
on top of MySQL+Galera because galera does not support the 'SELECT ... FOR 
UPDATE' SQL call.

  We currently only use with_lockmode('update') for coordinating
  consuming trusts (limited use trusts).

  We should eliminate this and handle the coordination of consumption to
  ensure only the specified number of tokens can be issued from a trust.
  Unfortunately, this is not as straightforward as it could be, we need
  to handle the following deployment scenarios:

  * Eventlet
  * Multiple Keystone Processes (same physical server) [same issue as mod_wsgi]
  * Multiple Keystone Processes (different physical servers)

  The first and second ones could be handled with the lockutils
  (external file-based) locking decorator. The last scenario will take
  more thought.
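
For the per-database part of this, one lock-free option is an optimistic
compare-and-swap UPDATE, which Galera replicates safely; a sketch with
hypothetical model and exception names:

    def consume_trust_use(session, trust_id):
        trust = session.query(TrustModel).filter_by(id=trust_id).one()
        remaining = trust.remaining_uses
        if remaining is None:
            return  # unlimited-use trust, nothing to decrement
        if remaining <= 0:
            raise TrustUseLimitReached(trust_id)  # hypothetical
        # guarded UPDATE: it only succeeds if nobody consumed a use
        # since we read the row, so no SELECT ... FOR UPDATE is needed
        rows = (session.query(TrustModel)
                .filter_by(id=trust_id, remaining_uses=remaining)
                .update({'remaining_uses': remaining - 1},
                        synchronize_session=False))
        if not rows:
            raise Conflict('trust consumed concurrently; retry')  # hypothetical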

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1325143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361386] [NEW] [Sahara] No description for Storage location on Node group template page

2014-08-25 Thread Andrew Lazarev
Public bug reported:

Description for Storage location on Node group template page contains
just word "Storage" (see screenshot). It should be more descriptive.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2014-08-25 at 2.43.13 PM.png"
   
https://bugs.launchpad.net/bugs/1361386/+attachment/4186934/+files/Screen%20Shot%202014-08-25%20at%202.43.13%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361386

Title:
  [Sahara] No description for Storage location on Node group template
  page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description for Storage location on Node group template page contains
  just word "Storage" (see screenshot). It should be more descriptive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1361386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361413] [NEW] LBaaS documentation is outdated , shows listeners instead of VIPs

2014-08-25 Thread Diogo Monteiro
Public bug reported:

The documentation for the LBaaS REST API endpoints listed on the official docs 
website does not match the REST API exposed by neutron.
Documentation URL: 
http://developer.openstack.org/api-ref-networking-v2.html#lbaas

In the API docs there is a reference to /listeners. However, neutron
doesn't have an API for /listeners, it only has an API for /vips

Below are curl commands demonstrating the issue:
Listing VIPs: *WORKS
curl -i http://infracont.rnd.cloud:9696/v2.0/lb/vips -X GET -H "X-Auth-Token: 
5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "User-Agent: python-neutronclient"

Listing Listeners: *FAILS
curl -i http://infracont.rnd.cloud:9696/v2.0/lb/listeners -X GET -H 
"X-Auth-Token: 5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: 
application/json" -H "Accept: application/json" -H "User-Agent: 
python-neutronclient"


Openstack icehouse deployment.
Running neutron version 2.3.4
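
The same mismatch is visible through python-neutronclient; a sketch with
illustrative credentials:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://infracont.rnd.cloud:5000/v2.0')
    print(neutron.list_vips())  # LBaaS v1 call, present in Icehouse
    # there is no list_listeners() here; /listeners only arrives with
    # the LBaaS v2 API that the current docs describe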

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361413

Title:
  LBaaS documentation is outdated , shows listeners instead of VIPs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The documentation for the LBaaS REST API endpoints listed on the official docs 
website does not match the REST API exposed by neutron.
  Documentation URL: 
http://developer.openstack.org/api-ref-networking-v2.html#lbaas

  In the API docs there is a reference to /listeners. However, neutron
  doesn't have an API for /listeners, it only has an API for /vips

  Below are curl commands demonstrating the issue:
  Listing VIPs: *WORKS
  curl -i http://infracont.rnd.cloud:9696/v2.0/lb/vips -X GET -H "X-Auth-Token: 
5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "User-Agent: python-neutronclient"

  Listing Listeners: *FAILS
  curl -i http://infracont.rnd.cloud:9696/v2.0/lb/listeners -X GET -H 
"X-Auth-Token: 5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: 
application/json" -H "Accept: application/json" -H "User-Agent: 
python-neutronclient"

  
  Openstack icehouse deployment.
  Running neutron version 2.3.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361419] [NEW] Hyper-V driver should provide a more detailed exception in case block storage volumes cannot be mounted due to an invalid SAN policy

2014-08-25 Thread Alessandro Pilotti
Public bug reported:

On some versions of Windows Server 2008 R2 the SAN policy is set by
default to Online All, bringing online any disk, local or shared,
attached to the host.

Since only offline disks can be attached as passthrough disks to a
Hyper-V VM, this prevents Cinder volumes from being attached to
instances, resulting in an exception:

NotFound: Unable to find a mounted disk for target_iqn:
iqn.2010-10.org.openstack:volume-d8904a90-d189-4fc8-a7b4-4fcdc7309166

Since this can be an issue not easy to troubleshoot without knowing the
specific context, it'd be useful to include a reference to the SAN
policy in the exception message.
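
A sketch of the more detailed message; RuntimeError stands in for nova's
actual NotFound exception and the function name is illustrative:

    def ensure_mounted_disk(mounted_disk_path, target_iqn):
        # Include the probable root cause (the SAN policy) instead of
        # the bare "Unable to find a mounted disk" message.
        if mounted_disk_path is None:
            raise RuntimeError(
                'Unable to find a mounted disk for target_iqn: %s. '
                'Check that the host SAN policy is not "Online All": '
                'only offline disks can be attached to instances as '
                'passthrough disks.' % target_iqn)
        return mounted_disk_path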

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361419

Title:
  Hyper-V driver should provide a more detailed exception in case block
  storage volumes cannot be mounted due to an invalid SAN policy

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  On some versions of Windows Server 2008 R2 the SAN policy is set by
  default to Online All, bringing online any disk, local or shared,
  attached to the host.

  Since only offline disks can be attached as passthrough disks to a
  Hyper-V VM, this prevents Cinder volumes from being attached to
  instances, resulting in an exception:

  NotFound: Unable to find a mounted disk for target_iqn:
  iqn.2010-10.org.openstack:volume-d8904a90-d189-4fc8-a7b4-4fcdc7309166

  Since this can be an issue not easy to troubleshoot without knowing
  the specific context, it'd be useful to include a reference to the SAN
  policy in the exception message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361424] [NEW] When using Keystone API v3, catalog won't be returned

2014-08-25 Thread David Hill
Public bug reported:

Warning:  I don't know if that should be working or not, but heat
2014.1.2 doesn't seem to get a catalog, whereas heat 2013.2.3 seems to
be getting along pretty well.  I downgraded the packages, read
everything that had to be read, patched the code and the verdict is
always the same.  It appears that keystone v3 doesn't return the catalog
and heat depends on it (well it's complaining about it so I guess it
needs it)


Hi guys,

It appears that in Icehouse (well in my setup and probably the 
setup of some other guys too) the catalog won’t be returned when the keystone 
v3 api is being used….
What am I missing?

[root@labctrl ~]# keystone catalog
'NoneType' object has no attribute 'has_service_catalog'


Catalog:
catalog.RegionOne.identity.publicURL = http://IP:$(public_port)s/v3
catalog.RegionOne.identity.adminURL = http://IP:$(admin_port)s/v3
catalog.RegionOne.identity.internalURL = http://IP:$(public_port)s/v3
catalog.RegionOne.identity.name = Identity Service


Keystone-paste.ini
[pipeline:api_v3]
pipeline = sizelimit url_normalize build_auth_context token_auth 
admin_token_auth xml_body_v3 json_body ec2_extension_v3 s3_extension 
simple_cert_extension service_v3

Thanks,

Dave


From: David Hill 
Sent: 25-Aug-14 4:11 PM
To: openstack
Subject: Re: [Openstack] Heat: 2014.1.2-0 vs Keystone

Hi guys,

This is what heat-engine gets back :
RESP BODY: {"token": {"methods": ["token"], "roles": [{"id": 
"59bd5c58fe344eeab3bc3443b82155a0", "name": "Member"}, {"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": 
"c119300b61bb4bfeafdf9ccc8ea3efae", "name": "Admin"}, {"id": 
"e80ca12406714be799fc9066d5978dbb", "name": "Owner"}], "expires_at": 
"2014-08-26T20:07:11.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "85bcc32e66b54c8bb52f28cb58319758", "name": "monitoring"}, 
"catalog": {}, "extras": {}, "user": {"domain": {"id": "default", "name": 
"Default"}, "id": "ccba454033204a7ba96b67ddaaacf00a", "name": "monitoring"}, 
"issued_at": "2014-08-25T20:07:12.589937Z"}}
_send_request /usr/lib/python2.6/site-packages/keystoneclient/session.py:297

Notice the “catalog”: {} ?  I’m not sure but… shouldn’t it contain the
actual catalog?

Dave

From: David Hill 
Sent: 25-Aug-14 4:41 AM
To: 'openstack'
Subject: Heat: 2014.1.2-0 vs Keystone

Hi guys,

  I’m trying to get Heat to work … but every time I try to create a stack, 
the engine will fail at getting the catalog.
Since everything is working fine (ceilometer,nova,cinder,glance), am I 
forgetting something?

StackValidationFailed_Remote: Property error : WikiDatabase: ImageId The
service catalog is empty.


Here is the catalog:
catalog.RegionOne.identity.publicURL = http://IP:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://IP:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://IP:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service
catalog.RegionOne.compute.publicURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service
catalog.RegionOne.volume.publicURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service
catalog.RegionOne.ec2.publicURL = http://IP:8773/services/Cloud
catalog.RegionOne.ec2.adminURL = http://IP:8773/services/Admin
catalog.RegionOne.ec2.internalURL = http://IP:8773/services/Cloud
catalog.RegionOne.ec2.name = EC2 Service
catalog.RegionOne.image.publicURL = http://IP:9292/
catalog.RegionOne.image.adminURL = http://IP:9292/
catalog.RegionOne.image.internalURL = http://IP:9292/
catalog.RegionOne.image.name = Image Service
catalog.RegionOne.object_store.publicURL = http://IP:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.adminURL = http://IP:8080/
catalog.RegionOne.object_store.internalURL = 
http://IP:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.name = Swift Service
catalog.RegionOne.cloudformation.publicURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.adminURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.internalURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.name = Heat CloudFormation API
catalog.RegionOne.heat.publicURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.adminURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.internalURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.name = Heat API
catalog.RegionOne.orchestration.publicURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.adminURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.internalURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.name = Heat API
catalog.RegionOne.ceilometer.publicURL = http://IP:
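
A quick way to confirm what the token actually carries, independent of
heat and the keystone CLI; a sketch with illustrative credentials (note
that POSTing to /v3/auth/tokens?nocatalog would suppress the catalog on
purpose, so the plain URL is used here):

    import json
    import urllib2

    body = {'auth': {
        'identity': {'methods': ['password'],
                     'password': {'user': {'name': 'monitoring',
                                           'domain': {'id': 'default'},
                                           'password': 'secret'}}},
        'scope': {'project': {'name': 'monitoring',
                              'domain': {'id': 'default'}}}}}
    req = urllib2.Request('http://IP:5000/v3/auth/tokens',
                          json.dumps(body),
                          {'Content-Type': 'application/json'})
    token = json.loads(urllib2.urlopen(req).read())['token']
    print(token.get('catalog'))  # an empty value reproduces the bug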

[Yahoo-eng-team] [Bug 1361422] [NEW] When using Keystone API v3, catalog won't be returned

2014-08-25 Thread David Hill
Public bug reported:

Warning:  I don't know if that should be working or not, but heat
2014.1.2 doesn't seem to get a catalog, whereas heat 2013.2.3 seems to
be getting along pretty well.  I downgraded the packages, read
everything that had to be read, patched the code and the verdict is
always the same.  It appears that keystone v3 doesn't return the catalog
and heat depends on it (well it's complaining about it so I guess it
needs it)


Hi guys,

It appears that in Icehouse (well in my setup and probably the 
setup of some other guys too) the catalog won’t be returned when the keystone 
v3 api is being used….
What am I missing?

[root@labctrl ~]# keystone catalog
'NoneType' object has no attribute 'has_service_catalog'


Catalog:
catalog.RegionOne.identity.publicURL = http://IP:$(public_port)s/v3
catalog.RegionOne.identity.adminURL = http://IP:$(admin_port)s/v3
catalog.RegionOne.identity.internalURL = http://IP:$(public_port)s/v3
catalog.RegionOne.identity.name = Identity Service


Keystone-paste.ini
[pipeline:api_v3]
pipeline = sizelimit url_normalize build_auth_context token_auth 
admin_token_auth xml_body_v3 json_body ec2_extension_v3 s3_extension 
simple_cert_extension service_v3

Thanks,

Dave


From: David Hill 
Sent: 25-Aug-14 4:11 PM
To: openstack
Subject: Re: [Openstack] Heat: 2014.1.2-0 vs Keystone

Hi guys,

This is what heat-engine gets back :
RESP BODY: {"token": {"methods": ["token"], "roles": [{"id": 
"59bd5c58fe344eeab3bc3443b82155a0", "name": "Member"}, {"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": 
"c119300b61bb4bfeafdf9ccc8ea3efae", "name": "Admin"}, {"id": 
"e80ca12406714be799fc9066d5978dbb", "name": "Owner"}], "expires_at": 
"2014-08-26T20:07:11.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "85bcc32e66b54c8bb52f28cb58319758", "name": "monitoring"}, 
"catalog": {}, "extras": {}, "user": {"domain": {"id": "default", "name": 
"Default"}, "id": "ccba454033204a7ba96b67ddaaacf00a", "name": "monitoring"}, 
"issued_at": "2014-08-25T20:07:12.589937Z"}}
_send_request /usr/lib/python2.6/site-packages/keystoneclient/session.py:297

Notice the “catalog”: {} ?  I’m not sure but… shouldn’t it contain the
actual catalog?

Dave

From: David Hill 
Sent: 25-Aug-14 4:41 AM
To: 'openstack'
Subject: Heat: 2014.1.2-0 vs Keystone

Hi guys,

  I’m trying to get Heat to work … but every time I try to create a stack, 
the engine will fail at getting the catalog.
Since everything is working fine (ceilometer,nova,cinder,glance), am I 
forgetting something?

StackValidationFailed_Remote: Property error : WikiDatabase: ImageId The
service catalog is empty.


Here is the catalog:
catalog.RegionOne.identity.publicURL = http://IP:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://IP:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://IP:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service
catalog.RegionOne.compute.publicURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service
catalog.RegionOne.volume.publicURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service
catalog.RegionOne.ec2.publicURL = http://IP:8773/services/Cloud
catalog.RegionOne.ec2.adminURL = http://IP:8773/services/Admin
catalog.RegionOne.ec2.internalURL = http://IP:8773/services/Cloud
catalog.RegionOne.ec2.name = EC2 Service
catalog.RegionOne.image.publicURL = http://IP:9292/
catalog.RegionOne.image.adminURL = http://IP:9292/
catalog.RegionOne.image.internalURL = http://IP:9292/
catalog.RegionOne.image.name = Image Service
catalog.RegionOne.object_store.publicURL = http://IP:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.adminURL = http://IP:8080/
catalog.RegionOne.object_store.internalURL = 
http://IP:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.name = Swift Service
catalog.RegionOne.cloudformation.publicURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.adminURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.internalURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.name = Heat CloudFormation API
catalog.RegionOne.heat.publicURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.adminURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.internalURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.name = Heat API
catalog.RegionOne.orchestration.publicURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.adminURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.internalURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.name = Heat API
catalog.RegionOne.ceilometer.publicURL = http://IP:

[Yahoo-eng-team] [Bug 1361423] [NEW] When using Keystone API v3, catalog won't be returned

2014-08-25 Thread David Hill
Public bug reported:

Warning:  I don't know if that should be working or not, but heat
2014.1.2 doesn't seem to get a catalog, whereas heat 2013.2.3 seems to
be getting along pretty well.  I downgraded the packages, read
everything that had to be read, patched the code and the verdict is
always the same.  It appears that keystone v3 doesn't return the catalog
and heat depends on it (well it's complaining about it so I guess it
needs it)


Hi guys,

It appears that in Icehouse (well in my setup and probably the 
setup of some other guys too) the catalog won’t be returned when the keystone 
v3 api is being used….
What am I missing?

[root@labctrl ~]# keystone catalog
'NoneType' object has no attribute 'has_service_catalog'


Catalog:
catalog.RegionOne.identity.publicURL = http://IP:$(public_port)s/v3
catalog.RegionOne.identity.adminURL = http://IP:$(admin_port)s/v3
catalog.RegionOne.identity.internalURL = http://IP:$(public_port)s/v3
catalog.RegionOne.identity.name = Identity Service


Keystone-paste.ini
[pipeline:api_v3]
pipeline = sizelimit url_normalize build_auth_context token_auth 
admin_token_auth xml_body_v3 json_body ec2_extension_v3 s3_extension 
simple_cert_extension service_v3

Thanks,

Dave


From: David Hill 
Sent: 25-Aug-14 4:11 PM
To: openstack
Subject: Re: [Openstack] Heat: 2014.1.2-0 vs Keystone

Hi guys,

This is what heat-engine gets back :
RESP BODY: {"token": {"methods": ["token"], "roles": [{"id": 
"59bd5c58fe344eeab3bc3443b82155a0", "name": "Member"}, {"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": 
"c119300b61bb4bfeafdf9ccc8ea3efae", "name": "Admin"}, {"id": 
"e80ca12406714be799fc9066d5978dbb", "name": "Owner"}], "expires_at": 
"2014-08-26T20:07:11.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "85bcc32e66b54c8bb52f28cb58319758", "name": "monitoring"}, 
"catalog": {}, "extras": {}, "user": {"domain": {"id": "default", "name": 
"Default"}, "id": "ccba454033204a7ba96b67ddaaacf00a", "name": "monitoring"}, 
"issued_at": "2014-08-25T20:07:12.589937Z"}}
_send_request /usr/lib/python2.6/site-packages/keystoneclient/session.py:297

Notice the “catalog”: {} ?  I’m not sure but… shouldn’t it contain the
actual catalog?

Dave

From: David Hill 
Sent: 25-Aug-14 4:41 AM
To: 'openstack'
Subject: Heat: 2014.1.2-0 vs Keystone

Hi guys,

  I’m trying to get Heat to work … but every time I try to create a stack, 
the engine will fail at getting the catalog.
Since everything is working fine (ceilometer,nova,cinder,glance), am I 
forgetting something?

StackValidationFailed_Remote: Property error : WikiDatabase: ImageId The
service catalog is empty.


Here is the catalog:
catalog.RegionOne.identity.publicURL = http://IP:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://IP:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://IP:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service
catalog.RegionOne.compute.publicURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://IP:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service
catalog.RegionOne.volume.publicURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://IP:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service
catalog.RegionOne.ec2.publicURL = http://IP:8773/services/Cloud
catalog.RegionOne.ec2.adminURL = http://IP:8773/services/Admin
catalog.RegionOne.ec2.internalURL = http://IP:8773/services/Cloud
catalog.RegionOne.ec2.name = EC2 Service
catalog.RegionOne.image.publicURL = http://IP:9292/
catalog.RegionOne.image.adminURL = http://IP:9292/
catalog.RegionOne.image.internalURL = http://IP:9292/
catalog.RegionOne.image.name = Image Service
catalog.RegionOne.object_store.publicURL = http://IP:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.adminURL = http://IP:8080/
catalog.RegionOne.object_store.internalURL = 
http://IP:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.name = Swift Service
catalog.RegionOne.cloudformation.publicURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.adminURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.internalURL = http://IP:8000/v1
catalog.RegionOne.cloudformation.name = Heat CloudFormation API
catalog.RegionOne.heat.publicURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.adminURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.internalURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.heat.name = Heat API
catalog.RegionOne.orchestration.publicURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.adminURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.internalURL = http://IP:8004/v1/$(tenant_id)s
catalog.RegionOne.orchestration.name = Heat API
catalog.RegionOne.ceilometer.publicURL = http://IP:

[Yahoo-eng-team] [Bug 1319926] Re: incorrect error when failing to launch on No space left on device\n

2014-08-25 Thread Gary W. Smith
This is the generic error that nova returns when it is unable to launch
the instance. Changing nova to provide a more precise message is
covered in the bug you created
(https://bugs.launchpad.net/nova/+bug/1319920).

It is my understanding that it is not feasible in general for the UI to
know in advance which node the nova scheduler will eventually create the
instance on, calculate the remaining space on that node, and thus
prevent the user from attempting it in the first place. If that
understanding is incorrect, then feel free to update this bug.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1319926

Title:
  incorrect error when failing to launch on No space left on device\n

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When we configure nova to launch instances with a preallocated disk and an
instance fails to spawn because there is not enough disk space, horizon
reports "no valid host found":
  Error: Failed to launch instance "dafna": Please try again later [Error: No
valid host was found. ].

  It would be good if we could calculate the amount of space left before
launching the instance from horizon (I opened a bug against nova to see if we
can add a check in nova before launch -
https://bugs.launchpad.net/nova/+bug/1319920).
  It would also be good to report that there is not enough disk space to
launch the instance (as reported in the compute log), which would be much
clearer than "no valid hosts", which sends the user searching for issues on
the host.
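
  A rough sketch of the kind of pre-flight check meant here (assumes GNU
coreutils df; size and path are taken from the log excerpt below, and the
instance UUID is a placeholder):

    # hypothetical pre-check before preallocating the instance disk
    REQUIRED=171798691840
    INSTANCES=/var/lib/nova/instances
    AVAIL=$(df --output=avail -B1 "$INSTANCES" | tail -n 1)
    if [ "$AVAIL" -lt "$REQUIRED" ]; then
        echo "not enough space on $INSTANCES: need $REQUIRED, have $AVAIL" >&2
        exit 1
    fi
    fallocate -n -l "$REQUIRED" "$INSTANCES/<instance-uuid>/disk"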

  2014-05-15 19:12:57.878 23617 ERROR nova.compute.manager 
[req-1607bb0f-88a3-4888-b751-dab00e24f824 c9062d562d9f41e4a1fdce36a4f176f6 
4ad766166539403189f2caca1ba306aa] [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] Instance failed to spawn
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] Traceback (most recent call last):
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] block_device_info)
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2246, in 
spawn
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] admin_pass=admin_password)
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2615, in 
_create_image
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] project_id=instance['project_id'])
  2014-05-15 19:12:57.878 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 196, 
in cache
  2014-05-15 19:12:57.878 23617 TRACE n
  2014-05-15 19:12:58.471 23617 INFO nova.virt.libvirt.driver [req-1607bb0f-88a3-4888-b751-dab00e24f824 c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] [instance: c1ad975d-6974-4539-9a1d-c050a3abd339] Deletion of /var/lib/nova/instances/c1ad975d-6974-4539-9a1d-c050a3abd339 complete
  2014-05-15 19:12:59.093 23617 ERROR nova.compute.manager 
[req-1607bb0f-88a3-4888-b751-dab00e24f824 c9062d562d9f41e4a1fdce36a4f176f6 
4ad766166539403189f2caca1ba306aa] [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] Error: Unexpected error while running command.
  Command: fallocate -n -l 171798691840 
/var/lib/nova/instances/c1ad975d-6974-4539-9a1d-c050a3abd339/disk
  Exit code: 1
  Stdout: ''
  Stderr: 'fallocate: 
/var/lib/nova/instances/c1ad975d-6974-4539-9a1d-c050a3abd339/disk: fallocate 
failed: No space left on device\n'
  2014-05-15 19:12:59.093 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] Traceback (most recent call last):
  2014-05-15 19:12:59.093 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in 
_build_instance
  2014-05-15 19:12:59.093 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339] set_access_ip=set_access_ip)
  2014-05-15 19:12:59.093 23617 TRACE nova.compute.manager [instance: 
c1ad975d-6974-4539-9a1d-c050a3abd339]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in 
decorated_function
  2014-05-15 19:12:59.093 23617 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1361441] [NEW] better handling for expired signing_cert.pem

2014-08-25 Thread Steve Heyman
Public bug reported:

While working on Barbican, I noted failing user authentications even
though I have a valid token.  I had to debug the openssl calls to see
that the root cause was an expired signing_cert.pem file.

Tracked this down to my keystone server, but had a hard time finding out
how to resolve this situation.  Asked on IRC and a launchpad bug was
suggested, so here it is.

I think there are actually 2 issues here:

1) some doc on how to handle expired certs - maybe just a paragraph in
troubleshooting about using keystone-manage and also cleaning up client
caches.

2) better ffdc (first failure data capture) so that the user (Barbican
in this case) will see that the root cause was an expired cert rather
than just a failed authentication.
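
For 1), a minimal sketch of the troubleshooting steps (paths assume the
default Keystone PKI layout; the cache each client service keeps lives in
its configured signing_dir):

  # has the token-signing cert expired? -checkend 0 exits non-zero if so
  openssl x509 -noout -enddate -in /etc/keystone/ssl/certs/signing_cert.pem
  openssl x509 -noout -checkend 0 -in /etc/keystone/ssl/certs/signing_cert.pem \
      || echo "signing cert has expired"

  # regenerate the PKI material, then restart keystone and clear each
  # client's signing_dir cache
  keystone-manage pki_setup --keystone-user keystone --keystone-group keystone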


I also found this (slightly) related question in ask.openstack:

https://ask.openstack.org/en/question/6402/keystone-ssl-certificate-expires-after-one-year/

and

http://www.blackmesh.com/blog/openstack-refusing-authentication-psh

Thanks!!

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361441

Title:
  better handling for expired signing_cert.pem

Status in OpenStack Identity (Keystone):
  New

Bug description:
  While working on Barbican, I noted failing user authentications even
  though I have a valid token.  I had to debug the openssl calls to see
  that the root cause was an expired signing_cert.pem file.

  Tracked this down to my keystone server, but had a hard time finding
  out how to resolve this situation.  Asked on IRC and a launchpad bug
  was suggested, so here it is.

  I think there are actually 2 issues here:

  1) some doc on how to handle expired certs - maybe just a paragraph in
  troubleshooting about using keystone-manage and also cleaning up
  client caches.

  2) better ffdc (first failure data capture) so that the user (Barbican
  in this case) will see that the root cause was an expired cert rather
  than just a failed authentication.

  
  I also found this (slightly) related question in ask.openstack:

  https://ask.openstack.org/en/question/6402/keystone-ssl-certificate-expires-after-one-year/

  and

  http://www.blackmesh.com/blog/openstack-refusing-authentication-psh

  Thanks!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1361441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322597] Re: Unable to update image members

2014-08-25 Thread Gary W. Smith
Sorry, I didn't mean to suggest that the bug itself is invalid. The
deficiency indicated by the bug is clearly valid, as it is part of a
blueprint that has been approved. But it really doesn't make sense to
track the same change as both a bug and a blueprint. The blueprint is
more fitting, as it is introducing new functionality into horizon. So
I'm moving it to the closest status there is to "already handled by an
existing blueprint". Thanks for working on this change.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1322597

Title:
  Unable to update image members

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The Glance API lets us update image members; we should expose that
  functionality in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1322597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361476] [NEW] flavor access create should check public/private first

2014-08-25 Thread jichenjc
Public bug reported:

jichen@cloudcontroller:~$ nova flavor-access-add 1 2
+-----------+-----------+
| Flavor_ID | Tenant_ID |
+-----------+-----------+
| 1         | 2         |
+-----------+-----------+

jichen@cloudcontroller:~$ nova flavor-access-list --flavor 1
ERROR (CommandError): Failed to get access list for public flavor type.

we should check whether the flavor is public or private before adding access
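
A sketch of the expected behaviour after such a check, using the IDs from
the transcript above (the is_public flag is visible via flavor-show):

  nova flavor-show 1 | grep 'os-flavor-access:is_public'  # True means public
  nova flavor-access-add 1 2   # should fail fast for a public flavor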

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361476

Title:
  flavor access create should check public/private first

Status in OpenStack Compute (Nova):
  New

Bug description:
  jichen@cloudcontroller:~$ nova flavor-access-add 1 2
  +-----------+-----------+
  | Flavor_ID | Tenant_ID |
  +-----------+-----------+
  | 1         | 2         |
  +-----------+-----------+

  jichen@cloudcontroller:~$ nova flavor-access-list --flavor 1
  ERROR (CommandError): Failed to get access list for public flavor type.

  we should check whether the flavor is public or private before adding
  access

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361490] [NEW] param check for backup rotatetype is needed

2014-08-25 Thread jichenjc
Public bug reported:

jichen@cloudcontroller:~$ nova backup jitest1 jiback1  2
jichen@cloudcontroller:~$ nova list
+--------------------------------------+---------+--------+------------+-------------+--------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks           |
+--------------------------------------+---------+--------+------------+-------------+--------------------+
| cb7c6742-7b7a-44de-ad5a-8570ee520f9e | jitest1 | ACTIVE | -          | Running     | private=10.0.0.2   |
| 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 | jitest3 | PAUSED | -          | Paused      | private=10.0.0.200 |
+--------------------------------------+---------+--------+------------+-------------+--------------------+


we should not allow an arbitrary string as the backup-type option
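
The offending value appears to have been stripped from the transcript above
(it probably resembled an HTML tag); a hypothetical reconstruction of the
problem and the expected rejection:

  # usage: nova backup <server> <name> <backup-type> <rotation>
  nova backup jitest1 jiback1 daily 2      # sensible value, accepted
  nova backup jitest1 jiback1 '<junk>' 2   # accepted today; should be rejected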

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361490

Title:
  param check for backup rotatetype is needed

Status in OpenStack Compute (Nova):
  New

Bug description:
  jichen@cloudcontroller:~$ nova backup jitest1 jiback1  2
  jichen@cloudcontroller:~$ nova list
  
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks           |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | cb7c6742-7b7a-44de-ad5a-8570ee520f9e | jitest1 | ACTIVE | -          | Running     | private=10.0.0.2   |
  | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 | jitest3 | PAUSED | -          | Paused      | private=10.0.0.200 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+

  
  we should not allow an arbitrary string as the backup-type option

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361487] [NEW] backup operation can be done in pause and suspend state

2014-08-25 Thread jichenjc
Public bug reported:


jichen@cloudcontroller:~$ nova backup jitest3 jiback1 daily 2
ERROR (Conflict): Cannot 'createBackup' while instance is in vm_state paused 
(HTTP 409) (Request-ID: req-7554dea8-92aa-480c-a1f4-e3d7e479c6b3)
jichen@cloudcontroller:~$ nova list
+--------------------------------------+---------+--------+------------+-------------+--------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks           |
+--------------------------------------+---------+--------+------------+-------------+--------------------+
| cb7c6742-7b7a-44de-ad5a-8570ee520f9e | jitest1 | ACTIVE | -          | Running     | private=10.0.0.2   |
| 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 | jitest3 | PAUSED | -          | Paused      | private=10.0.0.200 |
+--------------------------------------+---------+--------+------------+-------------+--------------------+


jichen@cloudcontroller:~$ nova image-create  --show jitest3 test3image1
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size                | 0                                    |
| created                             | 2014-08-26T04:06:41Z                 |
| id                                  | 96a5284c-5feb-4231-8b01-9a522a7c5aab |
| metadata base_image_ref             | 94e061fb-e628-4deb-901c-9d44c059ecd9 |
| metadata clean_attempts             | 2                                    |
| metadata image_type                 | snapshot                             |
| metadata instance_type_ephemeral_gb | 0                                    |
| metadata instance_type_flavorid     | 1                                    |
| metadata instance_type_id           | 2                                    |
| metadata instance_type_memory_mb    | 512                                  |
| metadata instance_type_name         | m1.tiny                              |
| metadata instance_type_root_gb      | 1                                    |
| metadata instance_type_rxtx_factor  | 1.0                                  |
| metadata instance_type_swap         | 0                                    |
| metadata instance_type_vcpus        | 1                                    |
| metadata instance_uuid              | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 |
| metadata kernel_id                  | 20be8b63-5a84-4440-a0bd-8f69898d5965 |
| metadata ramdisk_id                 | 07f6f85f-c1dc-4790-98b5-14ab86f21b59 |
| metadata user_id                    | 256dc6db4b5c45ae90fee8132cbaad7c     |
| minDisk                             | 1                                    |
| minRam                              | 0                                    |
| name                                | test3image1                          |
| progress                            | 25                                   |
| server                              | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 |
| status                              | SAVING                               |
| updated                             | 2014-08-26T04:06:41Z                 |
+-------------------------------------+--------------------------------------+
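
The inconsistency in two commands, with the values from the transcript
above:

  # while jitest3 is PAUSED:
  nova image-create --show jitest3 test3image1   # snapshot: accepted
  nova backup jitest3 jiback1 daily 2            # backup: 409 Conflict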

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361487

Title:
  backup operation can be done in pause and suspend state

Status in OpenStack Compute (Nova):
  New

Bug description:

  jichen@cloudcontroller:~$ nova backup jitest3 jiback1 daily 2
  ERROR (Conflict): Cannot 'createBackup' while instance is in vm_state paused 
(HTTP 409) (Request-ID: req-7554dea8-92aa-480c-a1f4-e3d7e479c6b3)
  jichen@cloudcontroller:~$ nova list
  
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks           |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | cb7c6742-7b7a-44de-ad5a-8570ee520f9e | jitest1 | ACTIVE | -          | Running     | private=10.0.0.2   |
  | 702d1d2b-f72d-4759-8f13-9ffbcc0ca934 | jitest3 | PAUSED | -          | Paused      | private=10.0.0.200 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+

  
  jichen@cloudcontroller:~$ nova image-create  --show jitest3 test3image1
  +-+--+
  | Property| Value|
  

[Yahoo-eng-team] [Bug 1213126] Re: attaching volume to instance fails with IO error

2014-08-25 Thread Kashyap Chamarthy
Closing this bug per comment #3.

Please reopen it (with more verbose details) if you encounter it again.
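
If it does recur: the final AttributeError in the trace below ('NoneType'
object has no attribute 'numOfDomains') means self._conn is None, i.e.
nova-compute lost its libvirt connection, so a first sanity check is
(service manager name varies by distro):

  # is libvirtd alive and reachable the same way nova-compute connects?
  service libvirtd status   # or: systemctl status libvirtd
  virsh -c qemu:///system list --all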

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213126

Title:
  attaching volume to instance fails with IO error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2013-08-16 10:06:05.315 ERROR root [-] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1038, in attach_volume\n    virt_dom.attachDeviceFlags(conf.to_xml(), flags)\n', '  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit\n    result = proxy_call(self._autowrap, f, *args, **kwargs)\n', '  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call\n    rv = execute(f,*args,**kwargs)\n', '  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker\n    rv = meth(*args,**kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 420, in attachDeviceFlags\n    if ret == -1: raise libvirtError (\'virDomainAttachDeviceFlags() failed\', dom=self)\n', 'libvirtError: End of file while reading data: Input/output error\n']
  2013-08-16 10:06:05.316 ERROR nova.compute.manager [req-db59ccfd-b546-40fd-8447-03d546725caa admin demo] [instance: f1f87ce0-0833-4476-8f71-74870b5e7068] Failed to attach volume 079b6295-8433-444f-bf8f-c013d65ae634 at /dev/vdc
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068] Traceback (most recent call last):
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/compute/manager.py", line 3465, in _attach_volume
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     mountpoint)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1051, in attach_volume
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     disk_dev)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 992, in volume_driver_method
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     return method(connection_info, *args, **kwargs)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in inner
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     return f(*args, **kwargs)
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/volume.py", line 308, in disconnect_volume
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     devices = self.connection.get_all_block_devices()
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2666, in get_all_block_devices
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     for dom_id in self.list_instance_ids():
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 694, in list_instance_ids
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]     if self._conn.numOfDomains() == 0:
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068] AttributeError: 'NoneType' object has no attribute 'numOfDomains'
  2013-08-16 10:06:05.316 TRACE nova.compute.manager [instance: f1f87ce0-0833-4476-8f71-74870b5e7068]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213126/+subscriptions

-- 
Mailing list: https://launchpad.net/~ya

[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-08-25 Thread Nikhil Manchanda
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
Milestone: None => juno-3

** Changed in: trove
 Assignee: (unassigned) => Nikhil Manchanda (slicknik)

** Changed in: trove
   Importance: Undecided => Critical

** Changed in: trove
   Importance: Critical => High

** Changed in: trove
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  Triaged
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Committed
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Neutron:
  In Progress
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in Openstack Database (Trove):
  Triaged
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
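
  A minimal illustration of the failure class: set iteration order depends
  on the (now randomized) string hash, so the same command typically prints
  different orderings under different seeds:

    PYTHONHASHSEED=1 python -c "print(list({'alpha', 'beta', 'gamma'}))"
    PYTHONHASHSEED=2 python -c "print(list({'alpha', 'beta', 'gamma'}))"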

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp