Re: [coreboot] more smm questions

2017-07-20 Thread ron minnich
OK, I have it working. For the Q35 qemu mainboard, I can direct SMI to the
kernel. The final issue was that the existing linux trampoline can't work
at present if you have enabled NX and set the top bit of a PTE to 1, since
the trampoline doesn't enable NX correctly. Easy fix: add nonx=off to the
command line. That's not a typo, even though one might expect it to be
nonx=on.

So, it's possible to have your kernel handle SMIs and run code that
otherwise would be in ring -2.

We've been advised that the best thing to do with SMI is disable it totally
(I agree -- that's what we did in linuxbios, 1999-2006), and so we'll
probably pursue that path instead. But it's good to know that this is
possible.

For more, see https://github.com/rminnich/linux/tree/monitor

The test is simple: outb to 0xb2 (IIRC) and you'll see the SMI handler in
the kernel print something.
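
For reference, a minimal user-space sketch of that test (assuming the APM
command port really is 0xB2 on this board, and that x86 Linux's
ioperm()/outb() are available; run as root):

/* Sketch: trigger a software SMI by writing to the APM command port
 * (0xB2 on typical Intel chipsets -- verify for your board).
 * Build with: gcc -O2 -o smitest smitest.c */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>     /* ioperm(), outb() -- x86 Linux only */

int main(void)
{
    /* Request access to I/O ports 0xB2 and 0xB3 (APM_CNT / APM_STS). */
    if (ioperm(0xb2, 2, 1) < 0) {
        perror("ioperm");
        return EXIT_FAILURE;
    }

    /* Any command byte works for a smoke test; the firmware (or here,
     * the kernel handler) decides what to do with it. */
    outb(0x01, 0xb2);

    printf("wrote to 0xb2; check dmesg for the kernel SMI handler output\n");
    return EXIT_SUCCESS;
}
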
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] more smm questions

2017-07-03 Thread ron minnich
Well, more later, but I now seem to be fighting a bug in qemu. Man, I hate
it when that happens.

On Mon, Jul 3, 2017 at 2:10 PM Zoran Stojsavljevic <
zoran.stojsavlje...@gmail.com> wrote:

> Let me also enter this discussion, to somehow clear up my ignorance in this
> area, which is significant. The last few days I was refreshing my mind about
> what I know about SMM and SMI, and the following came out of this research
> (in view of the Original Post).
>
> So this is what I understood about SMM in general, and about HSEG and TSEG.
> Let me put forward some assumptions, after I read what I read... ;-)
> [1] SMM (to the best of my understanding) always runs in real (16-bit) mode;
> [2] For HSEG to be visible (ONLY to SMM mode), the *system management RAM
> control register* must be programmed in BIOS (register belongs to PCH);
> as such the memory around FEEA_h-FEEB_h will be hardcoded to;
> [3] For TSEG, the same *system management RAM control register* is
> programmed, with some amount (up to 8MB) of memory beneath the PCIe devices'
> DRAM mapping, and this memory is NOT visible to the OS (nor is HSEG);
> [4] Once SMI is entered, there are very complex mechanisms of HW shadowing
> executed in the background, not visible to SW guys; in a nutshell, the
> following will happen...
> [A] HSEG will be remapped to A - B by HW;
> [B] Parts of TSEG will be remapped beneath A by HW (where real
> mode memory resides);
>
> Now, the following happens: the SMI handler will save the current core
> context in SMRAM/TSEG, using the SMBASE value. Then, all cores except one (I
> have no idea how this core is delegated - probably the BSP core) will enter
> a sleep state. The lowest core, core 0, will have SMBASE 3000(0)h + the
> constant 8000, where the current core 0 context will be saved (actually by
> remapping/shadowing to TSEG), and core 1 will probably have the same SMBASE
> + constant, but with the index 1, so its context will be saved in another
> region of TSEG... etc. It is some HW magic not completely clear to me!?
>
> More or less, this is the theory. This is how I understood it, people.
>
> The floor is all yours. Please, continue the discussion.
>
> Zoran
> On Mon, Jul 3, 2017 at 7:01 PM, ron minnich  wrote:
>
>> I've got a question right at this code:
>>
>> https://github.com/coreboot/coreboot/blob/fec0328c5f653233859d4aec7dae0b94acb67e97/src/cpu/x86/smm/smmrelocate.S#L101
>>
>> /* Check revision to see if AMD64 style SMM_BASE
>> *   Intel Core Solo/Duo:  0x30007
>> *   Intel Core2 Solo/Duo: 0x30100
>> *   Intel SandyBridge:    0x30101
>> *   AMD64:                0x3XX64
>> * This check does not make much sense, unless someone ports
>> * SMI handling to AMD64 CPUs.
>> */
>>
>> mov $0x38000 + 0x7efc, %ebx
>> addr32 mov (%ebx), %al
>> cmp $0x64, %al
>> je 1f
>>
>> mov $0x38000 + 0x7ef8, %ebx
>> jmp smm_relocate
>> 1:
>> mov $0x38000 + 0x7f00, %ebx
>>
>> As I read it, it tests for %al being 0x64 and, if so, it assumes the
>> offset is at 7f00. As I read the intel x86 docs, this is wrong, or qemu is
>> wrong. As I read the docs and Xeno's writeups, the offset is at 0x7ef8 on
>> 64-bit processors. But the ich9 version at least in qemu is 0x20064, and
>> that would mean coreboot thinks the register is at 7f00, which it does not
>> appear to be.
>>
>> So: am I missing something? does this work on amd64 today? where is the
>> offset on modern em64t CPUs? And why does it work on coreboot with q35,
>> qemu, and multiple cores if this offset is wrong?
>>
>> Also, a different question. The offset at 7ef8 is a 32-bit number on
>> 64-bit systems. It seems to me this implies that the save state can be
>> located anywhere in the low 4G memory on a per-core basis. I'm a bit lost
>> on the need for the large contiguous SMM save state area.
>>
>> So, for example, it seems to me we could leave a very small SMM stub at
>> 0xa, and as long as it had a simple way to set its offset at 7ef8 it
>> could put its save state at any convenient location. Why do we need the
>> giant contiguous memory area for save state if this is the case?
>>
>> The main motivation for the TSEG seems to be the requirement of the large
>> contiguous save state area for SMM, but I don't see anything that says it
>> has to be physically contiguous, given the existence of the 32-bit offset.
>>
>> Current status btw is that linux is able to set up the relocation area
>> and I'm able to run SMIs from the command line and the code Linux sets up
>> gets run.
>>
>> It seems to me we ought to be able to break a lot of these fixed address
>> issues and as a result reduce attack surface, but we'll see. I've got lots
>> of ignorance, little knowledge, and this can be good or bad :-)
>>
>> ron
>>
>>
>>
>>
>
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] more smm questions

2017-07-03 Thread Zoran Stojsavljevic
Let me also enter this discussion, to somehow clear up my ignorance in this
area, which is significant. The last few days I was refreshing my mind about
what I know about SMM and SMI, and the following came out of this research
(in view of the Original Post).

So this is what I understood about SMM in general, and about HSEG and TSEG.
Let me put forward some assumptions, after I read what I read... ;-)
[1] SMM (to the best of my understanding) always runs in real (16-bit) mode;
[2] For HSEG to be visible (ONLY to SMM mode), the *system management RAM
control register* must be programmed in BIOS (register belongs to PCH); as
such the memory around FEEA_h-FEEB_h will be hardcoded to;
[3] For TSEG, the same *system management RAM control register* is
programmed, with some amount (up to 8MB) of memory beneath the PCIe devices'
DRAM mapping, and this memory is NOT visible to the OS (nor is HSEG);
[4] Once SMI is entered, there are very complex mechanisms of HW shadowing
executed in the background, not visible to SW guys; in a nutshell, the
following will happen...
[A] HSEG will be remapped to A - B by HW;
[B] Parts of TSEG will be remapped beneath A by HW (where real mode
memory resides);

Now, the following happens: the SMI handler will save the current core
context in SMRAM/TSEG, using the SMBASE value. Then, all cores except one (I
have no idea how this core is delegated - probably the BSP core) will enter a
sleep state. The lowest core, core 0, will have SMBASE 3000(0)h + the
constant 8000, where the current core 0 context will be saved (actually by
remapping/shadowing to TSEG), and core 1 will probably have the same SMBASE
+ constant, but with the index 1, so its context will be saved in another
region of TSEG... etc. It is some HW magic not completely clear to me!?

More or less, this is the theory. This is how I understood it, people.

The floor is all yours. Please, continue the discussion.

Zoran

On Mon, Jul 3, 2017 at 7:01 PM, ron minnich  wrote:

> I've got a question right at this code:
> https://github.com/coreboot/coreboot/blob/fec0328c5f653233859d4aec7dae0b94acb67e97/src/cpu/x86/smm/smmrelocate.S#L101
>
> /* Check revision to see if AMD64 style SMM_BASE
> *   Intel Core Solo/Duo:  0x30007
> *   Intel Core2 Solo/Duo: 0x30100
> *   Intel SandyBridge:    0x30101
> *   AMD64:                0x3XX64
> * This check does not make much sense, unless someone ports
> * SMI handling to AMD64 CPUs.
> */
>
> mov $0x38000 + 0x7efc, %ebx
> addr32 mov (%ebx), %al
> cmp $0x64, %al
> je 1f
>
> mov $0x38000 + 0x7ef8, %ebx
> jmp smm_relocate
> 1:
> mov $0x38000 + 0x7f00, %ebx
>
> As I read it, it tests for %al being 0x64 and, if so, it assumes the
> offset is at 7f00. As I read the intel x86 docs, this is wrong, or qemu is
> wrong. As I read the docs and Xeno's writeups, the offset is at 0x7ef8 on
> 64-bit processors. But the ich9 version at least in qemu is 0x20064, and
> that would mean coreboot thinks the register is at 7f00, which it does not
> appear to be.
>
> So: am I missing something? does this work on amd64 today? where is the
> offset on modern em64t CPUs? And why does it work on coreboot with q35,
> qemu, and multiple cores if this offset is wrong?
>
> Also, a different question. The offset at 7ef8 is a 32-bit number on
> 64-bit systems. It seems to me this implies that the save state can be
> located anywhere in the low 4G memory on a per-core basis. I'm a bit lost
> on the need for the large contiguous SMM save state area.
>
> So, for example, it seems to me we could leave a very small SMM stub at
> 0xa, and as long as it had a simple way to set its offset at 7ef8 it
> could put its save state at any convenient location. Why do we need the
> giant contiguous memory area for save state if this is the case?
>
> The main motivation for the TSEG seems to be the requirement of the large
> contiguous save state area for SMM, but I don't see anything that says it
> has to be physically contiguous, given the existence of the 32-bit offset.
>
> Current status btw is that linux is able to set up the relocation area and
> I'm able to run SMIs from the command line and the code Linux sets up gets
> run.
>
> It seems to me we ought to be able to break a lot of these fixed address
> issues and as a result reduce attack surface, but we'll see. I've got lots
> of ignorance, little knowledge, and this can be good or bad :-)
>
> ron
>
>
>
>
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] more smm questions

2017-07-03 Thread Stefan Reinauer



On 03-Jul-17 10:01, ron minnich wrote:

> I've got a question right at this code:
> https://github.com/coreboot/coreboot/blob/fec0328c5f653233859d4aec7dae0b94acb67e97/src/cpu/x86/smm/smmrelocate.S#L101
>
> /* Check revision to see if AMD64 style SMM_BASE
> *   Intel Core Solo/Duo:  0x30007
> *   Intel Core2 Solo/Duo: 0x30100
> *   Intel SandyBridge:    0x30101
> *   AMD64:                0x3XX64
> * This check does not make much sense, unless someone ports
> * SMI handling to AMD64 CPUs.
> */
>
> mov $0x38000 + 0x7efc, %ebx
> addr32 mov (%ebx), %al
> cmp $0x64, %al
> je 1f
>
> mov $0x38000 + 0x7ef8, %ebx
> jmp smm_relocate
> 1:
> mov $0x38000 + 0x7f00, %ebx
>
> As I read it, it tests for %al being 0x64 and, if so, it assumes the
> offset is at 7f00. As I read the intel x86 docs, this is wrong, or qemu
> is wrong. As I read the docs and Xeno's writeups, the offset is at
> 0x7ef8 on 64-bit processors. But the ich9 version at least in qemu is
> 0x20064, and that would mean coreboot thinks the register is at 7f00,
> which it does not appear to be.
>
> So: am I missing something? does this work on amd64 today? where is the
> offset on modern em64t CPUs? And why does it work on coreboot with q35,
> qemu, and multiple cores if this offset is wrong?


Not sure what the issue you are seeing is, but you can assume that Qemu 
is getting that part of the hardware emulation wrong.


What chipset emulations are you trying? Q35? Or ICH9? Do they behave the
same? The ICH9 (southbridge) should have very little to do with this
particular piece of the code, because it's the southbridge; the code
depends on the CPU you emulate.


Last time I checked, Qemu could not emulate SMM properly.




> Also, a different question. The offset at 7ef8 is a 32-bit number on
> 64-bit systems. It seems to me this implies that the save state can be
> located anywhere in the low 4G memory on a per-core basis. I'm a bit
> lost on the need for the large contiguous SMM save state area.


This is the code that is used to move the initial SMM offset from
0x38000 to some other place. The save state is always located at a fixed
offset from SMM_BASE.
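
A rough sketch of that arithmetic, assuming the defaults documented in the
Intel SDM (default SMBASE 0x30000, handler entry at SMBASE + 0x8000, save
state in the top 512 bytes of the 64 KiB SMRAM image):

/* Sketch only: where the fields read by smmrelocate.S sit relative to
 * SMBASE, assuming the documented Intel defaults. */
#define DEFAULT_SMBASE    0x30000u
#define SMM_ENTRY_OFFSET  0x8000u   /* handler entry: SMBASE + 0x8000 */
#define SMM_REVISION_OFF  0xFEFCu   /* SMRAM revision identifier      */
#define SMBASE_FIELD_OFF  0xFEF8u   /* Intel-style SMBASE slot        */

/* The assembly's 0x38000 + 0x7efc and 0x38000 + 0x7ef8 are these bytes: */
_Static_assert(DEFAULT_SMBASE + SMM_ENTRY_OFFSET + 0x7efcu ==
               DEFAULT_SMBASE + SMM_REVISION_OFF, "revision id address");
_Static_assert(DEFAULT_SMBASE + SMM_ENTRY_OFFSET + 0x7ef8u ==
               DEFAULT_SMBASE + SMBASE_FIELD_OFF, "smbase field address");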




> So, for example, it seems to me we could leave a very small SMM stub
> at 0xa, and as long as it had a simple way to set its offset at 7ef8
> it could put its save state at any convenient location. Why do we need
> the giant contiguous memory area for save state if this is the case?


There is no giant contiguous memory involved. It's only a few hundred
bytes for the state. The way we laid it out is to save maximum space
when you have a lot of CPU cores. Read the documentation graphics in one
of those files. The code (besides the trampoline) is also shared between
all cores.
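
A rough sketch of the space-saving idea (the stride and base below are
made-up values for illustration, not coreboot's actual constants): each
core gets its own SMBASE, staggered so the per-core save states pack back
to back while a single copy of the handler code is shared.

/* Illustrative sketch only: per-core SMBASE staggering. The stride and
 * base SMBASE are assumptions for the demo, not coreboot's constants. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SAVE_STATE_STRIDE 0x400u            /* assumed per-core stride */

static uint32_t smbase_for_cpu(uint32_t base_smbase, unsigned int cpu)
{
    /* Each core's SMBASE moves down by one stride, so core N's save
     * state (at SMBASE + 0xFE00..0xFFFF) sits just below core N-1's. */
    return base_smbase - cpu * SAVE_STATE_STRIDE;
}

int main(void)
{
    uint32_t base_smbase = 0x7f000000u;     /* made-up base for the demo */
    for (unsigned int cpu = 0; cpu < 4; cpu++)
        printf("cpu %u: SMBASE 0x%08" PRIx32 ", save state at 0x%08" PRIx32 "\n",
               cpu, smbase_for_cpu(base_smbase, cpu),
               smbase_for_cpu(base_smbase, cpu) + 0xfe00u);
    return 0;
}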





> The main motivation for the TSEG seems to be the requirement of the
> large contiguous save state area for SMM, but I don't see anything
> that says it has to be physically contiguous, given the existence of
> the 32-bit offset.


Traditionally you want to protect the SMM handler from being overwritten 
by the OS. That protection requires a defined (thus contiguous) piece of 
memory.




> Current status btw is that linux is able to set up the relocation area
> and I'm able to run SMIs from the command line and the code Linux sets
> up gets run.
>
> It seems to me we ought to be able to break a lot of these fixed
> address issues and as a result reduce attack surface, but we'll see.
> I've got lots of ignorance, little knowledge, and this can be good or
> bad :-)


There really are no fixed address issues, except for the initial setup
(and those will be hard to fix unless you change the silicon).


Stefan
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

[coreboot] more smm questions

2017-07-03 Thread ron minnich
I've got a question right at this code:
https://github.com/coreboot/coreboot/blob/fec0328c5f653233859d4aec7dae0b94acb67e97/src/cpu/x86/smm/smmrelocate.S#L101

/* Check revision to see if AMD64 style SMM_BASE
*   Intel Core Solo/Duo:  0x30007
*   Intel Core2 Solo/Duo: 0x30100
*   Intel SandyBridge:    0x30101
*   AMD64:                0x3XX64
* This check does not make much sense, unless someone ports
* SMI handling to AMD64 CPUs.
*/

mov $0x38000 + 0x7efc, %ebx
addr32 mov (%ebx), %al
cmp $0x64, %al
je 1f

mov $0x38000 + 0x7ef8, %ebx
jmp smm_relocate
1:
mov $0x38000 + 0x7f00, %ebx

As I read it, it tests for %al being 0x64 and, if so, it assumes the offset
is at 7f00. As I read the intel x86 docs, this is wrong, or qemu is wrong.
As I read the docs and Xeno's writeups, the offset is at 0x7ef8 on 64-bit
processors. But the ich9 version at least in qemu is 0x20064, and that
would mean coreboot thinks the register is at 7f00, which it does not
appear to be.
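
For what it's worth, here is a C paraphrase of what that assembly does, as
a sketch that assumes execution inside SMM with the default 0x30000 SMBASE
(so the handler entry is at 0x38000); whether 0x7f00 is the right slot for
a given revision is exactly the open question:

/* Sketch, not build code: a C paraphrase of the revision check above,
 * assuming execution in SMM with the default SMBASE of 0x30000. */
#include <stdint.h>

static uintptr_t smbase_field_address(void)
{
    /* Low byte of the SMRAM revision identifier (SMBASE + 0xFEFC). */
    volatile uint8_t *rev = (volatile uint8_t *)(uintptr_t)(0x38000 + 0x7efc);

    if (*rev == 0x64)               /* "AMD64-style" save state        */
        return 0x38000 + 0x7f00;    /* SMBASE slot per the code above  */
    return 0x38000 + 0x7ef8;        /* Intel-style SMBASE slot         */
}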

So: am I missing something? does this work on amd64 today? where is the
offset on modern em64t CPUs? And why does it work on coreboot with q35,
qemu, and multiple cores if this offset is wrong?

Also, a different question. The offset at 7ef8 is a 32-bit number on 64-bit
systems. It seems to me this implies that the save state can be located
anywhere in the low 4G memory on a per-core basis. I'm a bit lost on the
need for the large contiguous SMM save state area.

So, for example, it seems to me we could leave a very small SMM stub at
0xa, and as long as it had a simple way to set its offset at 7ef8 it
could put its save state at any convenient location. Why do we need the
giant contiguous memory area for save state if this is the case?
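
A sketch of the idea, assuming the SDM-documented behavior that a value
written into the SMBASE slot of the save state only takes effect at the
next SMI, after RSM (offsets as above; new_base would be whatever location
is convenient):

/* Sketch only: relocate this core's SMBASE from inside the SMI handler
 * by rewriting the SMBASE slot in the save state. The new value is used
 * starting with the next SMI, after RSM. Offsets assume the Intel-style
 * save state and the default 0x30000 SMBASE. */
#include <stdint.h>

static void set_new_smbase(uint32_t new_base)
{
    volatile uint32_t *smbase_slot =
            (volatile uint32_t *)(uintptr_t)(0x38000 + 0x7ef8);

    *smbase_slot = new_base;        /* takes effect on the next SMI entry */
}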

The main motivation for the TSEG seems to be the requirement of the large
contiguous save state area for SMM, but I don't see anything that says it
has to be physically contiguous, given the existence of the 32-bit offset.

Current status btw is that linux is able to set up the relocation area and
I'm able to run SMIs from the command line and the code Linux sets up gets
run.

It seems to me we ought to be able to break a lot of these fixed address
issues and as a result reduce attack surface, but we'll see. I've got lots
of ignorance, little knowledge, and this can be good or bad :-)

ron
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot