Hi,
The new kernel caused
PANIC early exception 0d 10 ..... error 0 rc2
on a KVM SL 6.9 x86_64 guest. The host is an AMD server; all the other guests, which run SL 7.5, are running fine on their new kernel.

Reverting to the previous SL 6.9 kernel gave me back my guest machine.
Cheers,
Bill
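For anyone who needs to do the same revert: below is a minimal sketch of pinning the older kernel as the default boot entry on an EL6 guest, which uses legacy GRUB (`/boot/grub/grub.conf`). The commands run against a throwaway sample file so they are safe to try as-is; the kernel versions in the sample, including the 696.28.1 entry, are illustrative placeholders for whatever is actually installed.

```shell
# Sample of what /boot/grub/grub.conf might look like on SL6 after the
# update (legacy GRUB; entry numbering starts at 0). Kernel versions
# here are placeholders.
cat > /tmp/grub.conf <<'EOF'
default=0
timeout=5
title Scientific Linux (2.6.32-696.30.1.el6.x86_64)
title Scientific Linux (2.6.32-696.28.1.el6.x86_64)
EOF

# List the entries, then point "default" at the second (older) one:
grep '^title' /tmp/grub.conf
sed -i 's/^default=.*/default=1/' /tmp/grub.conf
grep '^default' /tmp/grub.conf    # prints: default=1
```

On a real system the same `sed` edit (or a hand edit) would go against `/boot/grub/grub.conf` itself, after backing it up.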
 
 
-----Original message-----
> From:Scott Reid <svr...@fnal.gov>
> Sent: Wednesday 23rd May 2018 4:33
> To: scientific-linux-err...@listserv.fnal.gov
> Subject: Security ERRATA Important: kernel on SL6.x i386/x86_64
> 
> Synopsis:          Important: kernel security and bug fix update
> Advisory ID:       SLSA-2018:1651-1
> Issue Date:        2018-05-22
> CVE Numbers:       CVE-2018-3639
> --
> 
> Security Fix(es):
> 
> * An industry-wide issue was found in the way many modern microprocessor
> designs have implemented speculative execution of Load & Store
> instructions (a commonly used performance optimization). It relies on the
> presence of a precisely defined instruction sequence in privileged code,
> as well as the fact that a memory read from an address to which a recent
> memory write has occurred may see an older value and subsequently cause an
> update of the microprocessor's data cache, even for speculatively
> executed instructions that never actually commit (retire). As a result, an
> unprivileged attacker could use this flaw to read privileged memory by
> conducting targeted cache side-channel attacks. (CVE-2018-3639)
> 
> Note: This issue is present in hardware and cannot be fully fixed via a
> software update. The updated kernel packages provide the software side of
> the mitigation for this hardware issue. To be fully effective, the
> mitigation also requires up-to-date CPU microcode applied on the system.
> 
> This update provides mitigations for the x86 architecture (both 32- and
> 64-bit).
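[As a quick sanity check after installing the update: kernels that carry the fix expose the Speculative Store Bypass mitigation state in sysfs. This is only a sketch; the file simply does not exist on kernels without the backport, and the exact wording of the status string varies by kernel and microcode.]

```shell
# Read the CVE-2018-3639 mitigation status; fall back to a message on
# kernels that predate the fix and therefore lack the sysfs entry.
ssb=$(cat /sys/devices/system/cpu/vulnerabilities/spec_store_bypass \
      2>/dev/null || echo "not reported by this kernel")
echo "$ssb"
```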
> 
> Bug Fix(es):
> 
> * Previously, erroneous code in the x86 kexec system call path caused
> memory corruption. As a consequence, the system became unresponsive with
> the following kernel stack trace:
> 
> 'WARNING: CPU: 13 PID: 36409 at lib/list_debug.c:59
> __list_del_entry+0xa1/0xd0 list_del corruption. prev->next should be
> ffffdd03fddeeca0, but was (null)'
> 
> This update ensures that the code does not corrupt memory. As a result,
> the operating system no longer hangs.
> --
> 
> SL6
>   x86_64
>     kernel-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-debug-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-debug-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debug-debuginfo-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-debug-devel-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debug-devel-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debuginfo-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-debuginfo-common-i686-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debuginfo-common-x86_64-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-devel-2.6.32-696.30.1.el6.x86_64.rpm
>     kernel-headers-2.6.32-696.30.1.el6.x86_64.rpm
>     perf-2.6.32-696.30.1.el6.x86_64.rpm
>     perf-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     perf-debuginfo-2.6.32-696.30.1.el6.x86_64.rpm
>     python-perf-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     python-perf-debuginfo-2.6.32-696.30.1.el6.x86_64.rpm
>     python-perf-2.6.32-696.30.1.el6.x86_64.rpm
>   i386
>     kernel-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debug-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debug-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debug-devel-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     kernel-debuginfo-common-i686-2.6.32-696.30.1.el6.i686.rpm
>     kernel-devel-2.6.32-696.30.1.el6.i686.rpm
>     kernel-headers-2.6.32-696.30.1.el6.i686.rpm
>     perf-2.6.32-696.30.1.el6.i686.rpm
>     perf-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     python-perf-debuginfo-2.6.32-696.30.1.el6.i686.rpm
>     python-perf-2.6.32-696.30.1.el6.i686.rpm
>   noarch
>     kernel-abi-whitelists-2.6.32-696.30.1.el6.noarch.rpm
>     kernel-doc-2.6.32-696.30.1.el6.noarch.rpm
>     kernel-firmware-2.6.32-696.30.1.el6.noarch.rpm
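[A sketch of applying and verifying this erratum on an SL6 box: the `yum` step needs root and a reboot, so it is shown as a comment; the version check below is just plain string matching against the fixed release named in the advisory.]

```shell
# To apply the erratum (as root, then reboot):
#   yum update kernel kernel-firmware
# After rebooting, compare the running kernel to the fixed release:
fixed="2.6.32-696.30.1.el6"
running=$(uname -r)
case "$running" in
  "$fixed".*) echo "running the fixed kernel: $running" ;;
  *)          echo "running $running; fixed release is $fixed" ;;
esac
```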
> 
> - Scientific Linux Development Team
> 
> 
