Re: Current state of SVS and Meltdown

2018-01-30 Thread Mateusz Kocielski
On Sun, Jan 21, 2018 at 02:00:45PM +0100, Maxime Villard wrote: > I committed this morning the last part needed to completely mitigate Meltdown > on NetBSD-amd64. As I said in the commit message, we still need to change a > few things for KASLR - there is some address leakage, we need to

Current state of SVS and Meltdown

2018-01-21 Thread Maxime Villard
I committed this morning the last part needed to completely mitigate Meltdown on NetBSD-amd64. As I said in the commit message, we still need to change a few things for KASLR - there is some address leakage, we need to hide one instruction -, but otherwise the implementation should be perfectly

Re: meltdown

2018-01-06 Thread Michael
Hello, On Sat, 6 Jan 2018 07:33:50 + m...@netbsd.org wrote: > Loongson-2 had an issue where from branch prediction it would prefetch > instructions from the I/O area and deadlock. > > This happened in normal usage so we build the kernel with a binutils > flag to output different jumps and fl

Re: meltdown

2018-01-06 Thread Paul.Koning
speculative execution that leaves observable side effects (such as the existence of cache entries) after the speculative path is abandoned. And in the case of Meltdown (though not Spectre) it also requires having the speculative load issue omit the access permission check. CDC 6600 has memory relo

Re: meltdown

2018-01-06 Thread Mouse
this wouldn't do anything about covert channels other than the cache. But it'd stop anything using the cache for a covert channel between spec ex and mainline code cold (meltdown and some variants of spectre). It's only a partial fix, but, for most purposes, that's better than n

Re: meltdown

2018-01-05 Thread maya
On Sat, Jan 06, 2018 at 01:41:38AM -0500, Michael wrote: > R10k had all sorts of weirdo speculative execution related problems > ( see hardware workarounds in the O2 ), and I doubt it's the first to > implement it. Loongson-2 had an issue where from branch prediction it would prefetch instructions

Re: meltdown

2018-01-05 Thread Michael
Hello, On Fri, 5 Jan 2018 20:55:19 -0500 Thor Lancelot Simon wrote: > On Thu, Jan 04, 2018 at 04:58:30PM -0500, Mouse wrote: > > > As I understand it, on intel cpus and possibly more, we'll need to > > > unmap the kernel on userret, or else userland can read arbitrary > > > kernel memory. > >

Re: meltdown

2018-01-05 Thread Mouse
less the cost of aborting work in progress and more the (performance) cost of not keeping silicon busy all the time. > (I'm not one, so I don't really know). Me neither. But it seems passing obvious to me that these hardware bugs were at least partially driven by customer demand fo

Re: meltdown

2018-01-05 Thread Thor Lancelot Simon
On Thu, Jan 04, 2018 at 04:58:30PM -0500, Mouse wrote: > > As I understand it, on intel cpus and possibly more, we'll need to > > unmap the kernel on userret, or else userland can read arbitrary > > kernel memory. > > "Possibly more"? Anything that does speculative execution needs a good > hard l

Re: meltdown

2018-01-05 Thread Ted Lemon
On Jan 5, 2018, at 8:52 AM, wrote: > so the illegal read is also speculative, and is voided (exception > and all) when the wrong branch prediction is sorted out. But it > looks like the paper is saying that refinement has not been > demonstrated, though such branch prediction hacks have been show

RE: meltdown

2018-01-05 Thread Terry Moore
> I think you are confusing spectre and meltdown. Yes, my apologies. --Terry

Re: meltdown

2018-01-05 Thread Paul.Koning
values the currently executing process > should not be able to access; timing access to data that cache-collides > with the cache lines of interest reveals the leaked bit(s). > > Nowhere in there is a SEGV generated. > > That's the meltdown stuff. Spectre targets other things (

Re: meltdown

2018-01-05 Thread Mouse
> If there's anything this issue showed, it's that we definitely need > fewer people independently considering the issue and openly > discussing their own (occasionally wrong) suggestions. Actually, it seems to me we need more. More minds looking at it, more discussion of the various ramifications a

Re: meltdown

2018-01-05 Thread Piotr Meyer
On Fri, Jan 05, 2018 at 02:48:11AM -0600, Dave Huang wrote: > On Jan 4, 2018, at 15:22, Phil Nelson wrote: > > How about turning on the workaround for any process that ignores > > or catches SEGV. Any process that is terminated by a SEGV should > > be safe, shouldn't it? > > Isn't there a sugg

Re: meltdown

2018-01-05 Thread maya
If there's anything this issue showed, it's that we definitely need fewer people independently considering the issue and openly discussing their own (occasionally wrong) suggestions. It was just a suggestion, I'm not a source of authority.

Re: meltdown

2018-01-05 Thread Dave Huang
On Jan 4, 2018, at 15:22, Phil Nelson wrote: > How about turning on the workaround for any process that ignores > or catches SEGV. Any process that is terminated by a SEGV should > be safe, shouldn't it? Isn't there a suggested mitigation? Seems to me NetBSD should implement it as suggested,

Re: meltdown

2018-01-05 Thread Phil Nelson
On Thursday 04 January 2018 12:49:22 m...@netbsd.org wrote: > I wonder if we can count the number of SEGVs and if we get a few, turn > on the workaround? How about turning on the workaround for any process that ignores or catches SEGV. Any process that is terminated by a SEGV should be safe, s

Re: meltdown

2018-01-05 Thread Warner Losh
access to data that cache-collides > with the cache lines of interest reveals the leaked bit(s). > > Nowhere in there is a SEGV generated. > > That's the meltdown stuff. Spectre targets other things (I've seen > branch prediction mentioned) to leak information around protection

RE: meltdown

2018-01-04 Thread Terry Moore
suggest sticking to the original papers which are at meltdownattack.com: https://meltdownattack.com/meltdown.pdf and https://spectreattack.com/spectre.pdf The problems are fairly subtle. They demonstrated (in the Meltdown paper) that you can use JIT compiled code (they used JavaScript in Chrome) to

Re: meltdown

2018-01-04 Thread Paul.Koning
access to data that cache-collides > with the cache lines of interest reveals the leaked bit(s). > > Nowhere in there is a SEGV generated. That depends. The straightforward case of Meltdown starts with an illegal load, which the CPU will execute anyway speculatively, resulting in

Re: meltdown

2018-01-04 Thread Mouse
before.". This means that things like cache line loads can occur based on values the currently executing process should not be able to access; timing access to data that cache-collides with the cache lines of interest reveals the leaked bit(s). Nowhere in there is a SEGV generated. That's
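The reconstruction step described above (timing accesses that cache-collide with the lines of interest) can be sketched as follows. This is an illustrative fragment only, not code from the thread: the `latencies` array stands in for real per-line timing measurements (e.g. via a cycle counter), and all names are invented.

```c
#include <stdint.h>

/*
 * Sketch of the final step of a Flush+Reload style probe: after the
 * speculative access has touched one line of a 256-entry probe array
 * (one line per candidate byte value), the attacker times a load from
 * every candidate line.  The line that loads fastest is the one that
 * is cached, which reveals the leaked byte.  Here the latencies are
 * supplied as data rather than measured, so the logic is deterministic.
 */
#define CANDIDATES 256

static int recover_byte(const uint64_t latencies[CANDIDATES])
{
    int best = 0;
    for (int i = 1; i < CANDIDATES; i++) {
        if (latencies[i] < latencies[best])
            best = i;
    }
    return best;   /* index of the cached (fast) probe line */
}
```

Note that nothing in this step faults or raises SEGV, which is the point being made above: the signal is purely a timing side effect.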

Re: meltdown

2018-01-04 Thread maya
On Thu, Jan 04, 2018 at 10:01:34PM +0100, Kamil Rytarowski wrote: > We have: PaX Segvguard. Can we mitigate it with this feature? That's what gave me the idea, but I think segvguard is per-binary, and I could just make new binaries to keep on attacking the kernel.

Re: meltdown

2018-01-04 Thread Kamil Rytarowski
On 04.01.2018 21:49, m...@netbsd.org wrote: > Also, I understand that to exploit this, one has to attempt to access > kernel memory a lot, and SEGV at least once per bit. > > I wonder if we can count the number of SEGVs and if we get a few, turn > on the workaround? that would at least spare us th
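The heuristic floated here (count SEGVs per process, enable the costly workaround once a threshold is crossed) could be sketched roughly as below. This is a hypothetical illustration, not NetBSD code; the struct, function, and threshold are all invented, and as the reply notes, a per-binary or per-process counter is easy to dodge by spawning fresh binaries.

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of the SEGV-counting heuristic discussed in this
 * thread: keep a per-process fault counter and switch on the expensive
 * kernel-unmap (SVS) mitigation once it exceeds a threshold, sparing
 * well-behaved processes the cost.
 */
#define SEGV_THRESHOLD 8

struct proc_mitigation {
    unsigned segv_count;
    bool     svs_enabled;   /* kernel unmapped on return to userland? */
};

/* Called from a (hypothetical) SIGSEGV delivery path. */
static void note_segv(struct proc_mitigation *p)
{
    if (!p->svs_enabled && ++p->segv_count >= SEGV_THRESHOLD)
        p->svs_enabled = true;
}
```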

meltdown

2018-01-04 Thread maya
Yo. As I understand it, on intel cpus and possibly more, we'll need to unmap the kernel on userret, or else userland can read arbitrary kernel memory. People seem to be mentioning a 50% performance penalty and we might do worse (we don't have vDSOs...) Also, I understand that to exploit this, on
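The mitigation described here (unmap the kernel on userret, which NetBSD later shipped as SVS) can be modeled as maintaining two page-table views per process and switching to the reduced one before returning to userland, so speculative user-mode loads find no kernel translations at all. The sketch below is a toy model with invented names, not the real pmap/SVS code.

```c
#include <stdbool.h>

/*
 * Toy model of kernel unmapping on userret: the "kernel view" maps
 * both halves of the address space, the "user view" maps only
 * userland.  Switching views on every kernel entry/exit is what
 * produces the performance penalty mentioned above.
 */
struct aspace_view {
    bool user_mapped;
    bool kernel_mapped;
};

static const struct aspace_view kernel_view = { true, true  };
static const struct aspace_view user_view   = { true, false };

/* On return to userland: drop to the view with no kernel mappings. */
static const struct aspace_view *userret(void)
{
    return &user_view;
}

/* On trap or syscall entry: restore the full view. */
static const struct aspace_view *kernel_entry(void)
{
    return &kernel_view;
}
```

In real hardware the switch is a page-table-base reload (e.g. writing CR3 on amd64), which flushes or partitions the TLB and is where most of the cost comes from.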