On Wed, 23 Sep 2020 15:45:27 -0700
Sean Christopherson wrote:
> This series introduces a concept we've discussed a few times in x86 land.
> The crux of the problem is that x86 has a few cases where KVM could
> theoretically encounter a software or hardware bug deep in a call stack
> without any sane way to propagate the error out to userspace.
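
For concreteness, here is a minimal sketch of the kind of escape hatch being described, using assumed names (vm_bugged, kvm_vm_bugged(), KVM_REQ_VM_BUGGED, KVM_BUG_ON()) rather than the code actually posted: instead of threading an error code back up through every caller, the offending path flags the whole VM and kicks every vcpu.

/*
 * Minimal sketch of the idea, not the code from the patches.  The names
 * below -- vm_bugged, KVM_REQ_VM_BUGGED, kvm_vm_bugged(), KVM_BUG_ON() --
 * are assumptions made for illustration only.
 */
static void kvm_vm_bugged(struct kvm *kvm)
{
        /* vm_bugged is assumed to be a new field in struct kvm. */
        WRITE_ONCE(kvm->vm_bugged, true);

        /*
         * Kick every vcpu out of the guest; KVM_REQ_VM_BUGGED is assumed
         * to be a new vcpu request bit.  The vcpus then observe vm_bugged
         * before they can re-enter the guest (see the check further down).
         */
        kvm_make_all_cpus_request(kvm, KVM_REQ_VM_BUGGED);
}

/*
 * Usable deep in a call stack where returning an error code through every
 * caller is impractical: warn once, flag the VM, and let the caller decide
 * how to unwind locally.
 */
#define KVM_BUG_ON(cond, kvm)                                   \
({                                                              \
        bool __ret = !!(cond);                                  \
                                                                \
        if (unlikely(__ret) && !READ_ONCE((kvm)->vm_bugged)) {  \
                WARN_ON_ONCE(__ret);                            \
                kvm_vm_bugged(kvm);                             \
        }                                                       \
        __ret;                                                  \
})
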
On 25/09/20 18:32, Marc Zyngier wrote:
> I quite like the idea. However, I wonder whether preventing the
> vcpus from re-entering the guest is enough. When something goes really
> wrong, is it safe to allow the userspace process to terminate normally
> and free the associated memory? And is it
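
The other half of such a scheme, again only a sketch with the same assumed names as above, is the check that keeps a flagged VM from being re-entered, so the failure finally reaches userspace as an error from the vcpu ioctls (KVM_RUN included) rather than as silently corrupted guest state:

/* Sketch only; -EIO as the user-visible error is an assumption. */
static long kvm_vcpu_ioctl(struct file *filp, unsigned int ioctl,
                           unsigned long arg)
{
        struct kvm_vcpu *vcpu = filp->private_data;

        /* Refuse to touch a bugged VM, in particular via KVM_RUN. */
        if (READ_ONCE(vcpu->kvm->vm_bugged))
                return -EIO;

        /* ... existing ioctl handling ... */
        return 0;
}

Note that nothing in this sketch short-circuits VM teardown: the process can still exit and the VM's memory is still freed through the usual paths, which is precisely the part Marc is questioning above.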