On Thu, Jan 5, 2017 at 6:34 PM, Andy Lutomirski <l...@amacapital.net> wrote:
> On Thu, Jan 5, 2017 at 3:05 PM, Linus Torvalds
> <torva...@linux-foundation.org> wrote:
>> On Thu, Jan 5, 2017 at 12:18 PM, Andy Lutomirski <l...@kernel.org> wrote:
>>>
>>> Hmm.  I bet that if we preset the accessed bits in all the segments
>>> then we don't need it to be writable in general.
>>
>> I'm not sure that this is architecturally safe.
>>
>
> Hmm.  Last time I looked, I couldn't find *anything* in the SDM
> explaining what happened if a GDT access resulted in a page fault.  I
> did discover that Xen intentionally (!) lazily populates and maps LDT
> pages.  An attempt to access a not-present page results in #PF with
> the error code indicating kernel access even if the access came from
> user mode.
>
> SDM volume 3 7.2.2 says "Pages corresponding to the previous task’s
> TSS, the current task’s TSS, and the descriptor table entries for
> each all should be marked as read/write."  But I don't see how a CPU
> implementation could possibly care what the page table for the TSS
> descriptor table entries says after LTR is done because the CPU isn't
> even supposed to *read* that memory.
>
> OTOH a valid implementation could easily require that the page table
> says that the page is writable merely to load a segment, especially in
> weird cases (IRET?).  That being said, this is all quite easy to test.
>
> Also, Thomas, why are you creating a new memory region?  I don't see
> any benefit to randomizing the GDT address.  How about just putting it
> in the fixmap?  This would be NR_CPUS * 4 pages if we do my limit=0xffff
> idea.  I'm not sure if the fixmap code knows how to handle this much
> space.
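
On the accessed-bit idea: as I understand it, presetting just means
building each descriptor with type bit 0 (bit 40 of the descriptor)
already set, so the CPU has nothing to write back on a segment load.
A rough, untested sketch of what that might look like (the names here
are made up for illustration, not actual kernel helpers):

	#include <stdint.h>

	/* "accessed" bit: bit 0 of the type field, i.e. bit 40
	 * of the 8-byte descriptor. */
	#define DESC_ACCESSED	(1ULL << 40)

	/* Build a descriptor with the accessed bit preset, so
	 * loading a segment from it should never require a write
	 * to the GDT page. */
	static inline uint64_t gdt_entry_preset_accessed(uint64_t desc)
	{
		return desc | DESC_ACCESSED;
	}

Whether that is actually enough to keep the GDT on a read-only mapping
in all the weird cases (IRET, far returns) is exactly the part that
would need testing.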

When I looked at the fixmap, the space you need had to be defined
ahead of time, and I am not sure there is enough of it, as you said.
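
To illustrate what I mean: fixmap slots are a compile-time enum, so
reserving the space would look roughly like this (rough sketch; the
slot names are made up, not existing kernel symbols):

	enum fixed_addresses {
		/* ... existing slots ... */
		FIX_GDT_BEGIN,
		FIX_GDT_END = FIX_GDT_BEGIN + NR_CPUS * 4 - 1,
		/* ... */
		__end_of_fixed_addresses
	};

	/* Each CPU's GDT pages would then be mapped with
	 * something like: */
	static void map_cpu_gdt(int cpu, phys_addr_t gdt_phys)
	{
		int i;

		for (i = 0; i < 4; i++)
			__set_fixmap(FIX_GDT_BEGIN + cpu * 4 + i,
				     gdt_phys + i * PAGE_SIZE,
				     PAGE_KERNEL_RO);
	}

Whether NR_CPUS * 4 slots of that kind actually fit in the fixmap
area is what I am unsure about.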

-- 
Thomas
