[Yahoo-eng-team] [Bug 1779139] [NEW] metadata api detection runs before network-online

2018-06-28 Thread Bernhard M. Wiedemann
Public bug reported:

Version: 18.2

originally filed at
https://bugzilla.opensuse.org/show_bug.cgi?id=1097388

I found that cloud-init did not import authorized keys: it fell back to
DataSourceNone because the metadata API detection ran before the VM's
network was up.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: "0001-Run-metadata-detection-after-network-online.patch"
   
https://bugs.launchpad.net/bugs/1779139/+attachment/5157420/+files/0001-Run-metadata-detection-after-network-online.patch
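
For illustration only (the attached patch is authoritative and may differ), the
fix presumably boils down to a systemd ordering so that datasource/metadata
detection only starts once the network is up, e.g. a drop-in like:

# hypothetical path: /etc/systemd/system/cloud-init.service.d/network-online.conf
[Unit]
After=network-online.target
Wants=network-online.target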



[Yahoo-eng-team] [Bug 1746000] [NEW] dnsmasq does not fallback on SERVFAIL

2018-01-29 Thread Bernhard M. Wiedemann
Public bug reported:

In our cloud deployment, we configured neutron-dhcp-agent
to use multiple external DNS servers.

However, the last server on the list happened to be misconfigured (by others),
so it returned SERVFAIL instead of correct responses, and strace showed that
dnsmasq only ever asked that last server for a name.
My testing shows that dropping the --strict-order parameter helped with this problem.
The parameter was introduced in commit 43960ee448 without a stated reason.

** Affects: neutron
 Importance: Undecided
 Assignee: Dirk Mueller (dmllr)
 Status: In Progress



[Yahoo-eng-team] [Bug 1714240] [NEW] glance re-spawns a child when terminating

2017-08-31 Thread Bernhard M. Wiedemann
Public bug reported:

When sending a SIGTERM to the main glance-api process, api.log shows:

2017-08-31 13:10:30.996 10618 INFO glance.common.wsgi [-] Removed dead child 10628
2017-08-31 13:10:31.004 10618 INFO glance.common.wsgi [-] Started child 10642
2017-08-31 13:10:31.006 10642 INFO eventlet.wsgi.server [-] (10642) wsgi starting up on https://10.162.184.83:5510
2017-08-31 13:10:31.008 10642 INFO eventlet.wsgi.server [-] (10642) wsgi exited, is_accepting=True
2017-08-31 13:10:31.009 10642 INFO glance.common.wsgi [-] Child 10642 exiting normally

This is because kill_children sends a SIGTERM to all children,
and wait_on_children restarts a child as soon as it notices a dead one.
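
Illustrative only (not Glance's actual code): the pattern that avoids the
respawn is to mark the shutdown before signalling the children, so that the
reaper loop stops respawning them, roughly like this:

import os
import signal
import time

class Server(object):
    """Toy parent/child manager sketching the shutdown/respawn race."""

    def __init__(self, nchildren=2):
        self.running = True
        self.children = set()
        signal.signal(signal.SIGTERM, self.kill_children)
        for _ in range(nchildren):
            self.children.add(self.spawn_child())

    def spawn_child(self):
        pid = os.fork()
        if pid == 0:            # child: idle until terminated
            time.sleep(3600)
            os._exit(0)
        return pid

    def kill_children(self, *args):
        self.running = False    # mark shutdown *before* signalling the children
        for pid in self.children:
            os.kill(pid, signal.SIGTERM)

    def wait_on_children(self):
        while self.children:
            pid, _status = os.wait()
            self.children.discard(pid)
            if self.running:    # without this guard, a dead child is respawned
                self.children.add(self.spawn_child())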

We noticed this because it triggered fencing in our cloud's Pacemaker
setup: systemd appears to have a race condition in the cgroup code that
should detect that all related services have terminated.


# systemctl status openstack-glance-api
● openstack-glance-api.service - OpenStack Image Service API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; disabled; vendor preset: disabled)
   Active: deactivating (final-sigterm) since Thu 2017-08-31 10:13:48 UTC; 1min 14s ago
 Main PID: 25077 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 512)
   CGroup: /system.slice/openstack-glance-api.service
Aug 31 10:13:48 d08-9e-01-b4-9e-42 systemd[1]: Stopping OpenStack Image Service API server...
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: State 'stop-final-sigterm' timed out. Killing.
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: Stopped OpenStack Image Service API server.
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: Unit entered failed state.
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: Failed with result 'timeout'.

** Affects: glance
 Importance: Undecided
 Status: New



[Yahoo-eng-team] [Bug 1551290] [NEW] leap year bug - day is out of range for month

2016-02-29 Thread Bernhard M. Wiedemann
Public bug reported:

Horizon Dashboard -> Admin -> Settings -> Change Language -> Save (->
Trigger)

triggers a bug in
openstack_dashboard/dashboards/settings/user/forms.py:33
which currently has
return datetime(now.year + 1, now.month, now.day, now.hour, now.minute, now.second, now.microsecond, now.tzinfo)

ValueError: day is out of range for month

because there will not be a 2017-02-29

IMHO, we should just use the Epoch value and add 365*24*60*60 seconds
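
A minimal sketch of that suggestion (illustrative only; now stands for the
timezone-aware datetime already used in the Horizon code):

from datetime import timedelta

def one_year_from(now):
    # 365*24*60*60 seconds == timedelta(days=365); unlike replacing the year,
    # this can never produce an invalid date such as 2017-02-29.
    return now + timedelta(days=365)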

** Affects: horizon
 Importance: Undecided
 Status: New

** Project changed: python-openstackclient => horizon



[Yahoo-eng-team] [Bug 1547061] [NEW] openrc v2.sh contains /v3 url

2016-02-18 Thread Bernhard M. Wiedemann
Public bug reported:

bug 1460150 introduced separate openrc downloads for v2 and v3,
but when backporting this to Liberty, we found that the v2 openrc download
still contained a /v3 URL, which would then cause client failures.

The v3 codepath does
context['auth_url'] = utils.fix_auth_url_version(context['auth_url'])

and IMHO for the v2 codepath we would need the inverse replacement. e.g.

 def download_rc_file_v2(request):
     template = 'project/access_and_security/api_access/openrc_v2.sh.template'
     context = _get_openrc_credentials(request)
+    context['auth_url'] = context['auth_url'].replace('/v3', '/v2.0')
     return _download_rc_file_for_template(request, context, template)

** Affects: horizon
 Importance: Undecided
 Status: New



[Yahoo-eng-team] [Bug 1519750] [NEW] stable tarball contains master version

2015-11-25 Thread Bernhard M. Wiedemann
Public bug reported:

somewhat similar to https://bugs.launchpad.net/aodh/+bug/1515507

curl -s http://tarballs.openstack.org/neutron/neutron-stable-kilo.tar.gz | tar vzt | head -5
drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/
drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/releasenotes/
drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/releasenotes/source/
-rw-rw-r-- jenkins/jenkins 146 2015-11-24 22:44 neutron-8.0.0.dev486/releasenotes/source/liberty.rst
drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/releasenotes/source/_static/

yesterday it still contained the correct neutron-2015.1.3.dev52

** Affects: neutron
 Importance: Undecided
 Status: New



[Yahoo-eng-team] [Bug 1368251] [NEW] migrate_to_ml2 accessing boolean as int fails on postgresql

2014-09-11 Thread Bernhard M. Wiedemann
Public bug reported:

The allocated variable used in migrate_to_ml2 was defined to be a boolean
type, and in PostgreSQL this type is enforced, while in MySQL it just maps
to TINYINT and accepts both numbers and bools.

Thus the migrate_to_ml2 script breaks on PostgreSQL.
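
A minimal illustration of the mismatch (hypothetical table and query, not the
actual migrate_to_ml2 code):

import sqlalchemy as sa

metadata = sa.MetaData()
# stand-in for the allocation table touched by migrate_to_ml2
allocations = sa.Table(
    'allocations', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('allocated', sa.Boolean, nullable=False),
)

# Accepted by MySQL (BOOLEAN is just TINYINT there), but rejected by PostgreSQL
# with "operator does not exist: boolean = integer":
broken = sa.select(allocations.c.id).where(sa.text("allocated = 0"))

# Portable form: compare the boolean column against a boolean expression.
fixed = sa.select(allocations.c.id).where(allocations.c.allocated == sa.false())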

** Affects: neutron
 Importance: Undecided
 Assignee: Bernhard M. Wiedemann (ubuntubmw)
 Status: In Progress



[Yahoo-eng-team] [Bug 1361542] [NEW] neutron-l3-agent does not start without IPv6

2014-08-26 Thread Bernhard M. Wiedemann
Public bug reported:

When testing on a one-node cloud that had IPv6 blacklisted, I found that
neutron-l3-agent does not start, because it errors out when it tries to access
/proc/sys/net/ipv6/conf/default/disable_ipv6:

2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/service.py", line 269, in create
2014-08-26 10:12:57.987 29609 TRACE neutron     periodic_fuzzy_delay=periodic_fuzzy_delay)
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/service.py", line 202, in __init__
2014-08-26 10:12:57.987 29609 TRACE neutron     self.manager = manager_class(host=host, *args, **kwargs)
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py", line 916, in __init__
2014-08-26 10:12:57.987 29609 TRACE neutron     super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf)
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/agent/l3_agent.py", line 230, in __init__
2014-08-26 10:12:57.987 29609 TRACE neutron     self.use_ipv6 = ipv6_utils.is_enabled()
2014-08-26 10:12:57.987 29609 TRACE neutron   File "/usr/lib64/python2.6/site-packages/neutron/common/ipv6_utils.py", line 50, in is_enabled
2014-08-26 10:12:57.987 29609 TRACE neutron     with open(disabled_ipv6_path, 'r') as f:
2014-08-26 10:12:57.987 29609 TRACE neutron IOError: [Errno 2] No such file or directory: '/proc/sys/net/ipv6/conf/default/disable_ipv6'
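
A minimal sketch of a more tolerant check (illustrative; not necessarily the
fix that was merged): treat a missing /proc entry as IPv6 being unavailable
instead of letting the IOError propagate.

import os

def ipv6_enabled(path='/proc/sys/net/ipv6/conf/default/disable_ipv6'):
    # When the ipv6 module is blacklisted, this sysctl tree does not exist at all.
    if not os.path.exists(path):
        return False
    with open(path) as f:
        return f.read().strip() == '0'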

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6



[Yahoo-eng-team] [Bug 1289346] [NEW] images from unreachable HTTP source are silently discarded

2014-03-07 Thread Bernhard M. Wiedemann
Public bug reported:

When trying to import an image from an HTTP URL that happens to be
unresolvable or unreachable, glance reports it as queued with an exit
code of 0 (success), but a following glance image-list shows that the
new image is completely missing.

# glance image-create --name=foobar --disk-format=qcow2 --container-format=bare --copy-from http://unreachable.example.org/yourimage.qcow2

...
| size | 0|
| status   | queued   |
# echo $?
0

an explicit
# glance image-show $ID
shows status killed

** Affects: glance
 Importance: Undecided
 Status: New



[Yahoo-eng-team] [Bug 1278342] [NEW] novncproxy accepts un-masked client websocket frames

2014-02-10 Thread Bernhard M. Wiedemann
Public bug reported:

Using Havana nova with python-websockify-0.5.1,
I found that the server is not picky enough:
it accepts WebSocket frames with the mask bit unset,
although the relevant standard,
https://tools.ietf.org/html/rfc6455#section-5.1,
says:
"The server MUST close the connection upon receiving a frame that is not masked."
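
For context, the MASK bit is the top bit of the second frame-header byte. A
rough sketch of the two frame variants (payloads shorter than 126 bytes assumed):

import os

payload = b"hello"

# Non-compliant client frame: MASK bit (0x80) clear; per RFC 6455 the server
# MUST close the connection when it receives this from a client.
unmasked_frame = bytes([0x81, len(payload)]) + payload   # 0x81 = FIN + text opcode

# Compliant client frame: MASK bit set, 4-byte masking key, payload XORed with the key.
key = os.urandom(4)
masked_payload = bytes(b ^ key[i % 4] for i, b in enumerate(payload))
masked_frame = bytes([0x81, 0x80 | len(payload)]) + key + masked_payload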


For testing this behaviour, you can use my code without this fix
https://github.com/bmwiedemann/connectionproxy/commit/1ece2024090cfbacc003f66c036c2fe550fd488a

it is used like this:

git clone https://github.com/bmwiedemann/connectionproxy.git
$INSTALL perl-Protocol-WebSocket
nova get-vnc-console $YOURINSTANCE novnc
perl wsconnectionproxy.pl --port 5942 --to http://cloud.example.com:6080/vnc_auto.html?token=xxx
gvncviewer localhost:42

** Affects: nova
 Importance: Undecided
 Status: New
