prometheanfire    15/01/13 03:59:07

  Added:                CVE-2014-8153-juno.patch
  Removed:              neutron-confd-2 neutron-sudoers neutron-initd
                        neutron-initd-2 sphinx_mapping.patch nicira.patch
                        neutron-confd
  Log:
  bunch of fixes

  removing old files that are no longer needed
  removing vulnerable files for CVE-2014-7821
  adding the upstream fix for CVE-2014-8153
  
  (Portage version: 2.2.14/cvs/Linux x86_64, signed Manifest commit with key 0x33ED3FD25AFC78BA)

Revision  Changes    Path
1.1                  sys-cluster/neutron/files/CVE-2014-8153-juno.patch

file : http://sources.gentoo.org/viewvc.cgi/gentoo-x86/sys-cluster/neutron/files/CVE-2014-8153-juno.patch?rev=1.1&view=markup
plain: http://sources.gentoo.org/viewvc.cgi/gentoo-x86/sys-cluster/neutron/files/CVE-2014-8153-juno.patch?rev=1.1&content-type=text/plain

Index: CVE-2014-8153-juno.patch
===================================================================
From 6c490a984af690e732f8b96a18493ac1afef892f Mon Sep 17 00:00:00 2001
From: Ihar Hrachyshka <ihrac...@redhat.com>
Date: Wed, 3 Dec 2014 12:44:57 +0100
Subject: [PATCH] radvd: pass -m syslog to avoid thread lock for radvd 2.0+

Since radvd 2.0, the daemon no longer uses the daemon_fork() function
from libdaemon, but instead calls the Linux daemon() function directly.
It passes the arguments (1, 1) when the logging method (-m) is either
stderr (the default) or stderr_syslog. The second argument being 1 means
that stderr is not closed and is left open for (some) log messages.

For neutron, this means that the corresponding execute() call that
spawns radvd and expects the invoked process to close stderr never
completes. The thread that spawned radvd stays blocked waiting for radvd
to exit, which never happens unless the process crashes or receives a
signal.
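
To illustrate the blocking pattern, here is a minimal sketch; it
simplifies neutron's utils.execute() down to plain subprocess, and the
config/pid paths are placeholders only:

    # Minimal sketch, not neutron's actual utils.execute(): reading a
    # pipe only returns EOF once every writer has closed it.  With
    # radvd 2.0+ and -m stderr, the daemonized process inherits and
    # keeps stderr open, so communicate() never returns.
    import subprocess

    proc = subprocess.Popen(
        ['radvd', '-C', '/tmp/radvd.conf', '-p', '/tmp/radvd.pid'],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # Blocks here: waits for EOF on both pipes before reaping the child.
    out, err = proc.communicate()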

Since the L3 agent gives exclusive access to each router's update queue
to a single processing thread, the thread that happens to serve a
radvd-powered subnet stalls and never processes new ports or other
changes to that router.

Passing -m syslog makes radvd 2.0+ close stderr and return control to
the execute() caller, so router update processing can proceed. The same
arguments should also work for old (pre-2.0) versions of radvd, so they
are passed unconditionally.

We could instead use -m logfile and pass an appropriate -l <logfile>
argument so that radvd logs to a file located in the router's namespace
storage path. However, that would not match what dnsmasq processes
currently do for the DHCP agent, where all messages go to syslog, so we
stick to syslog for radvd for consistency.
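
For comparison, the alternative not adopted here would have looked
roughly like the following in the spawn callback; log_path is a
hypothetical name for a per-router file under the namespace storage
path:

    # Hypothetical alternative, not what this patch does: log to a
    # per-router file instead of syslog.
    radvd_cmd = ['radvd',
                 '-C', radvd_conf,
                 '-p', pid_file,
                 '-m', 'logfile',
                 '-l', log_path]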

Juno changes:
- tests: we need to construct RouterInfo to get router namespace name.

Change-Id: I131db0639bc46d332ed48faa2bbe68a214264062
Closes-Bug: #1398779
(cherry picked from commit 72d41174765540bb7672b545c336fb7aaad075e8)
---
 neutron/agent/linux/ra.py           |  8 +++++++-
 neutron/tests/unit/test_l3_agent.py | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/neutron/agent/linux/ra.py b/neutron/agent/linux/ra.py
index 66fa012..6bdfea5 100644
--- a/neutron/agent/linux/ra.py
+++ b/neutron/agent/linux/ra.py
@@ -82,9 +82,15 @@ def _generate_radvd_conf(router_id, router_ports, dev_name_helper):
 
 def _spawn_radvd(router_id, radvd_conf, router_ns, root_helper):
     def callback(pid_file):
+        # we need to use -m syslog and f.e. not -m stderr (the default)
+        # or -m stderr_syslog so that radvd 2.0+ will close stderr and
+        # exit after daemonization; otherwise, the current thread will
+        # be locked waiting for result from radvd that won't ever come
+        # until the process dies
         radvd_cmd = ['radvd',
                      '-C', '%s' % radvd_conf,
-                     '-p', '%s' % pid_file]
+                     '-p', '%s' % pid_file,
+                     '-m', 'syslog']
         return radvd_cmd
 
     radvd = external_process.ProcessManager(cfg.CONF,
diff --git a/neutron/tests/unit/test_l3_agent.py b/neutron/tests/unit/test_l3_agent.py
index 6fda182..f5b2ca4 100644
--- a/neutron/tests/unit/test_l3_agent.py
+++ b/neutron/tests/unit/test_l3_agent.py
@@ -27,6 +27,7 @@ from neutron.agent.common import config as agent_config
 from neutron.agent import l3_agent
 from neutron.agent import l3_ha_agent
 from neutron.agent.linux import interface
+from neutron.agent.linux import ra
 from neutron.common import config as base_config
 from neutron.common import constants as l3_constants
 from neutron.common import exceptions as n_exc
@@ -2350,6 +2351,39 @@ vrrp_instance VR_1 {
         self.assertFalse(nat.add_rule.called)
         nat.clear_rules_by_tag.assert_called_once_with('floating_ip')
 
+    def test_spawn_radvd(self):
+        router = prepare_router_data()
+        ri = l3_agent.RouterInfo(router['id'], self.conf.root_helper,
+                                 self.conf.use_namespaces, router=router)
+
+        conffile = '/fake/radvd.conf'
+        pidfile = '/fake/radvd.pid'
+
+        # we don't want the whole process manager to be mocked to be
+        # able to catch execute() calls
+        self.external_process_p.stop()
+        self.ip_cls_p.stop()
+
+        get_pid_file_name = ('neutron.agent.linux.external_process.'
+                             'ProcessManager.get_pid_file_name')
+        with mock.patch('neutron.agent.linux.utils.execute') as execute:
+            with mock.patch(get_pid_file_name) as get_pid:
+                get_pid.return_value = pidfile
+                ra._spawn_radvd(router['id'],
+                                conffile,
+                                ri.ns_name,
+                                self.conf.root_helper)
+            cmd = execute.call_args[0][0]
+
+        self.assertIn('radvd', cmd)
+
+        _join = lambda *args: ' '.join(args)
+
+        cmd = _join(*cmd)
+        self.assertIn(_join('-C', conffile), cmd)
+        self.assertIn(_join('-p', pidfile), cmd)
+        self.assertIn(_join('-m', 'syslog'), cmd)
+
 
 class TestL3AgentEventHandler(base.BaseTestCase):
 
-- 
2.0.5