Re: [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device

2018-08-27 Thread Marcel Apfelbaum

Hi Gerd

On 08/28/2018 07:12 AM, Zihan Yang wrote:

Gerd Hoffmann wrote on Mon, Aug 27, 2018 at 7:04 AM:

   Hi,


   However, QEMU only binds ports 0xcf8 and 0xcfc to
bus pcie.0. To avoid bus conflicts, we should use other port pairs for
buses under new domains.

I would skip support for IO based configuration and use only MMCONFIG
for extra root buses.

The question remains: how do we assign MMCONFIG space for
each PCI domain.


Thanks for your comments!


Allocation-wise it would be easiest to place them above 4G.  Right after
memory, or after etc/reserved-memory-end (if that fw_cfg file is
present), where the 64bit pci bars would have been placed.  Move the pci
bars up in address space to make room.

Only problem is that seabios (which runs in 32-bit mode) wouldn't be
able to access mmconfig then.

Placing them below 4G would work at least for a few pci domains.  q35
mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
historical reasons.  Old qemu versions had 2.75G low memory on q35 (up
to 0xafffffff), and I think old machine types still have that for live
migration compatibility reasons.  Modern qemu uses 2G only, to make
gigabyte alignment work.

32bit pci bars are placed above 0xc0000000.  The address space from 2G
to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
Enough room for three additional mmconfig bars (full size), so four
pci domains total if you add the q35 one.
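
A minimal sketch of that arithmetic, assuming full-size windows (1 MiB
per bus x 256 buses = 256 MiB per domain); the names are hypothetical,
not actual SeaBIOS code:

    #include <stdint.h>

    #define EXTRA_MMCONFIG_START 0x80000000u  /* 2G: start of the unused gap     */
    #define EXTRA_MMCONFIG_END   0xb0000000u  /* 2.75G: q35 mmconfig begins here */
    #define MMCONFIG_WINDOW_SIZE 0x10000000u  /* 256 MiB covers all 256 buses    */

    /* MMCONFIG base for extra PCI domain n (n = 1, 2, ...); 0 if it won't fit. */
    static uint32_t extra_domain_mmconfig_base(unsigned n)
    {
        unsigned max = (EXTRA_MMCONFIG_END - EXTRA_MMCONFIG_START)
                       / MMCONFIG_WINDOW_SIZE;   /* three full-size windows fit */
        if (n < 1 || n > max)
            return 0;
        /* n = 1..3 -> 0x80000000, 0x90000000, 0xa0000000 */
        return EXTRA_MMCONFIG_START + (n - 1) * MMCONFIG_WINDOW_SIZE;
    }

Domains 1 to 3 land below the q35 window at 0xb0000000, giving the four
domains counted above.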

Maybe we can support 4 domains first, before we come up
with a better solution. But I'm not sure whether four domains are
enough for those who want a very large number of devices.


(Adding Michael)

Since we will not use all 256 buses of an extra PCI domain,
I think this space will allow us to support more PCI domains:
MMCONFIG needs 1 MiB per bus, so a domain limited to, say, 32
buses needs only a 32 MiB window instead of the full 256 MiB.

How would the flow look?

1. QEMU passes to SeaBIOS information about how many extra
   PCI domains are needed, and how many buses per domain.
   How will it pass this info? A vendor-specific capability,
   some PCI registers, or a modified extra-pci-roots fw_cfg file?

2. SeaBIOS assigns the MMCFG address for each PCI domain and
   returns the information to QEMU (see the sketch after this
   list). How will it do that? Some pxb-pcie registers? Or do
   we model the MMCFG like a PCI BAR?

3. Once QEMU gets the MMCFG addresses, it can answer
   mmio configuration cycles.

4. SeaBIOS queries the devices on all PCI domains, then computes
   and assigns IO/MEM resources (for PCI domains > 0 it will use
   MMCFG, as sketched below, to configure the PCI devices).

5. QEMU uses the IO/MEM information to create the CRS for each
   extra PCI host bridge.

6. SeaBIOS gets the ACPI tables from QEMU and passes them to the
   guest OS.
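
To make steps 2 and 4 concrete, a rough sketch under the assumptions
above (full-size windows in the 2G -> 2.75G gap, standard ECAM layout
with the bus number in address bits 27:20, device/function in 19:12,
and the register offset in 11:0); these helpers are hypothetical, not
existing SeaBIOS or QEMU interfaces:

    #include <stdint.h>

    /* Step 2 (sketch): pick an MMCFG base per extra domain, reusing the
     * window arithmetic from earlier in the thread. */
    static uint32_t assign_domain_mmconfig_base(unsigned domain)
    {
        return 0x80000000u + (domain - 1) * 0x10000000u;
    }

    /* Step 4 (sketch): a config space dword read that goes through MMCFG
     * instead of the 0xcf8/0xcfc ports. */
    static uint32_t mmconfig_readl(uint32_t mmcfg_base, uint8_t bus,
                                   uint8_t devfn, uint16_t offset)
    {
        volatile uint32_t *addr = (volatile uint32_t *)(uintptr_t)
            (mmcfg_base + ((uint32_t)bus << 20) + ((uint32_t)devfn << 12)
             + (offset & 0xffc));
        return *addr;
    }

Whether the base is programmed through pxb-pcie registers or modeled as
a PCI BAR (the open question in step 2) only changes who writes the
address, not this access path.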

Thanks,
Marcel






cheers,
   Gerd





[SeaBIOS] Fwd: [RFC v2 0/3] Support multiple pci domains in pci_device

2018-08-27 Thread Zihan Yang
CCed to the wrong mailing list... resend here

---------- Forwarded message ---------
From: Zihan Yang 
Date: Tue, Aug 28, 2018 at 4:12 AM
Subject: Re: [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device
To: Gerd Hoffmann 
Cc: , Marcel Apfelbaum 


Gerd Hoffmann wrote on Mon, Aug 27, 2018 at 7:04 AM:
>
>   Hi,
>
> > >   However, QEMU only binds ports 0xcf8 and 0xcfc to
> > > bus pcie.0. To avoid bus conflicts, we should use other port pairs for
> > > buses under new domains.
> >
> > I would skip support for IO based configuration and use only MMCONFIG
> > for extra root buses.
> >
> > The question remains: how do we assign MMCONFIG space for
> > each PCI domain.
>
> Allocation-wise it would be easiest to place them above 4G.  Right after
> memory, or after etc/reserved-memory-end (if that fw_cfg file is
> present), where the 64bit pci bars would have been placed.  Move the pci
> bars up in address space to make room.
>
> Only problem is that seabios (which runs in 32-bit mode) wouldn't be
> able to access mmconfig then.
>
> Placing them below 4G would work at least for a few pci domains.  q35
> mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
> historical reasons.  Old qemu versions had 2.75G low memory on q35 (up
> to 0xafffffff), and I think old machine types still have that for live
> migration compatibility reasons.  Modern qemu uses 2G only, to make
> gigabyte alignment work.
>
> 32bit pci bars are placed above 0xc0000000.  The address space from 2G
> to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
> Enough room for three additional mmconfig bars (full size), so four
> pci domains total if you add the q35 one.

Maybe we can support 4 domains first, before we come up
with a better solution. But I'm not sure whether four domains are
enough for those who want a very large number of devices.

> cheers,
>   Gerd
>


Re: [SeaBIOS] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 virtual machine with seabios

2018-08-27 Thread 汤福
Excuse me, are there any cases of successful attempts on Windows 10? Could you 
provide me with some technical docs? Thanks!


> ----- Original Message -----
> From: "Marc-André Lureau" 
> Sent: 2018-08-23 16:37:36 (Thursday)
> To: tan...@gohighsec.com
> Cc: "Kevin O'Connor" , seabios@seabios.org
> Subject: Re: [SeaBIOS] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 virtual 
> machine with seabios
> 
> Hi
> 
> On Thu, Aug 23, 2018 at 9:29 AM 汤福  wrote:
> >
> > Hi,
> >    I am sorry to bother you. It is still the vTPM 2.0 on Win 10 problem. I
> > downloaded the latest qemu source from git, the version is V3.0.50. I think
> > this is the latest code of qemu upstream. I also downloaded seabios
> > upstream and built it with tpm2 support.  Unfortunately, I tried both
> > passthrough and emulator, and I didn't get the expected results.
> >
> >    For the emulator, I did it like this:
> >    #mkdir /tmp/mytpm2/
> >    #chown tss:root /tmp/mytpm2
> >    #swtpm_setup --tpmstate /tmp/mytpm2 --create-ek-cert --create-platform-cert --allow-signing --tpm2
> >    #swtpm socket --tpmstate dir=/tmp/mytpm2 --ctrl type=unixio,path=/tmp/mytpm2/swtpm-sock --log level=20 --tpm2
> >
> >    No errors occurred, suggesting that the certificates were also generated
> > successfully. Then I created a blank img file named win10.img, and installed
> > a win10 virtual machine as follows:
> >    #qemu-system-x86_64 -display sdl -enable-kvm -cdrom win10.iso -serial stdio -m 2048 -boot d -bios bios.bin -boot menu=on -chardev socket,id=chrtpm,path=/tmp/mytpm2/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-crb,tpmdev=tpm0 win10-ovmf.img
> >    Entering the system once it was successfully installed, I found that the
> > TPM 2.0 device was not listed in the system Device Manager. If I replace
> > -device tpm-crb with -device tpm-tis and reboot the system, the TPM device
> > can be found in the Device Manager, but the vTPM 2.0 is recognized as vTPM
> > 1.2.
> >
> >    I also tried passthrough mode; the result is the same as with the
> > emulator. So, what could be the problem?
> >
> 
> Try with OVMF. According to some technical docs, it seems Windows
> requires UEFI & CRB for TPM 2. That's also what testing suggests.
> We are able to pass most WLK TPM tests with this setup.
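
For reference, an OVMF invocation consistent with that advice might look
like the following; the pflash firmware paths are assumptions and vary
by distribution:

    #qemu-system-x86_64 -enable-kvm -machine q35 -m 2048 \
        -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
        -drive if=pflash,format=raw,file=OVMF_VARS_copy.fd \
        -chardev socket,id=chrtpm,path=/tmp/mytpm2/swtpm-sock \
        -tpmdev emulator,id=tpm0,chardev=chrtpm \
        -device tpm-crb,tpmdev=tpm0 win10.img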
> 
> >
> >
> > > ----- Original Message -----
> > > From: "Kevin O'Connor" 
> > > Sent: 2018-08-21 12:08:59 (Tuesday)
> > > To: "汤福" 
> > > Cc: seabios@seabios.org
> > > Subject: Re: [SeaBIOS] vTPM 2.0 is recognized as vTPM 1.2 on the Win 10 
> > > virtual machine with seabios
> > >
> > > On Mon, Aug 13, 2018 at 04:45:43PM +0800, 汤福 wrote:
> > > > Hi,
> > > >
> > > > I want to use the vTPM in a qemu Windows image. Unfortunately, it 
> > > > didn't work.
> > > > First, the equipment:
> > > > TPM 2.0 hardware
> > > > CentOS 7.2
> > > > Qemu v2.10.2
> > > > SeaBIOS 1.11.0
> > > > libtpm and so on
> > >
> > > If you retry with the latest SeaBIOS code from the master branch, does
> > > the problem still exist?
> > >
> > > See:
> > > https://mail.coreboot.org/pipermail/seabios/2018-August/012384.html
> > >
> > > -Kevin
> 
> 
> 
> -- 
> Marc-André Lureau

Re: [SeaBIOS] [PATCH v3 3/3] pci: recognize RH PCI legacy bridge resource reservation capability

2018-08-27 Thread Marcel Apfelbaum



On 08/27/2018 05:22 AM, Liu, Jing2 wrote:

Hi Marcel,

On 8/25/2018 11:59 PM, Marcel Apfelbaum wrote:



On 08/24/2018 11:53 AM, Jing Liu wrote:

Enable the firmware to recognize the Red Hat legacy PCI bridge device ID,
so QEMU can reserve additional PCI bridge resources through the resource
reservation capability. Also lower the debug level to 3 when the bridge
is not a QEMU bridge.

Signed-off-by: Jing Liu 
---
  src/fw/pciinit.c | 50 +++++++++++++++++++++++++++++---------------------
  src/hw/pci_ids.h |  1 +
  2 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/src/fw/pciinit.c b/src/fw/pciinit.c
index 62a32f1..c0634bc 100644
--- a/src/fw/pciinit.c
+++ b/src/fw/pciinit.c
@@ -525,30 +525,38 @@ static void pci_bios_init_platform(void)
 static u8 pci_find_resource_reserve_capability(u16 bdf)
 {
-    if (pci_config_readw(bdf, PCI_VENDOR_ID) == PCI_VENDOR_ID_REDHAT &&
-        pci_config_readw(bdf, PCI_DEVICE_ID) ==
-            PCI_DEVICE_ID_REDHAT_ROOT_PORT) {
-        u8 cap = 0;
-        do {
-            cap = pci_find_capability(bdf, PCI_CAP_ID_VNDR, cap);
-        } while (cap &&
-                 pci_config_readb(bdf, cap + PCI_CAP_REDHAT_TYPE_OFFSET) !=
-                     REDHAT_CAP_RESOURCE_RESERVE);
-        if (cap) {
-            u8 cap_len = pci_config_readb(bdf, cap + PCI_CAP_FLAGS);
-            if (cap_len < RES_RESERVE_CAP_SIZE) {
-                dprintf(1, "PCI: QEMU resource reserve cap length %d is invalid\n",
-                        cap_len);
-                return 0;
-            }
-        } else {
-            dprintf(1, "PCI: QEMU resource reserve cap not found\n");
+    u16 device_id;
+
+    if (pci_config_readw(bdf, PCI_VENDOR_ID) != PCI_VENDOR_ID_REDHAT) {
+        dprintf(3, "PCI: This is non-QEMU bridge.\n");
+        return 0;
+    }
+
+    device_id = pci_config_readw(bdf, PCI_DEVICE_ID);
+
+    if (device_id != PCI_DEVICE_ID_REDHAT_ROOT_PORT &&
+        device_id != PCI_DEVICE_ID_REDHAT_BRIDGE) {
+        dprintf(1, "PCI: QEMU resource reserve cap device ID doesn't match.\n");
+        return 0;
+    }
+
+    u8 cap = 0;
+
+    do {
+        cap = pci_find_capability(bdf, PCI_CAP_ID_VNDR, cap);
+    } while (cap &&
+             pci_config_readb(bdf, cap + PCI_CAP_REDHAT_TYPE_OFFSET) !=
+                 REDHAT_CAP_RESOURCE_RESERVE);
+    if (cap) {
+        u8 cap_len = pci_config_readb(bdf, cap + PCI_CAP_FLAGS);
+        if (cap_len < RES_RESERVE_CAP_SIZE) {
+            dprintf(1, "PCI: QEMU resource reserve cap length %d is invalid\n",
+                    cap_len);
+            return 0;
         }
-        return cap;
     } else {
-        dprintf(1, "PCI: QEMU resource reserve cap VID or DID doesn't match.\n");
-        return 0;


I am sorry for the late review.
Did you drop the above line on purpose?


Thanks for the review!

I replaced the above report with the following logic, which
checks the vendor ID and device ID separately.

+    if (pci_config_readw(bdf, PCI_VENDOR_ID) != PCI_VENDOR_ID_REDHAT) {
+        dprintf(3, "PCI: This is non-QEMU bridge.\n");
+        return 0;
+    }
+
+    device_id = pci_config_readw(bdf, PCI_DEVICE_ID);
+
+    if (device_id != PCI_DEVICE_ID_REDHAT_ROOT_PORT &&
+        device_id != PCI_DEVICE_ID_REDHAT_BRIDGE) {
+        dprintf(1, "PCI: QEMU resource reserve cap device ID doesn't match.\n");
+        return 0;
+    }



I understand.

Reviewed-by: Marcel Apfelbaum


Thanks,
Marcel


Thanks,
Jing


Thanks,
Marcel




Re: [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device

2018-08-27 Thread Gerd Hoffmann
  Hi,

> >   However, QEMU only binds ports 0xcf8 and 0xcfc to
> > bus pcie.0. To avoid bus conflicts, we should use other port pairs for
> > buses under new domains.
> 
> I would skip support for IO based configuration and use only MMCONFIG
> for extra root buses.
> 
> The question remains: how do we assign MMCONFIG space for
> each PCI domain.

Allocation-wise it would be easiest to place them above 4G.  Right after
memory, or after etc/reserved-memory-end (if that fw_cfg file is
present), where the 64bit pci bars would have been placed.  Move the pci
bars up in address space to make room.

Only problem is that seabios (which runs in 32-bit mode) wouldn't be
able to access mmconfig then.

Placing them below 4G would work at least for a few pci domains.  q35
mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
historical reasons.  Old qemu versions had 2.75G low memory on q35 (up
to 0xafffffff), and I think old machine types still have that for live
migration compatibility reasons.  Modern qemu uses 2G only, to make
gigabyte alignment work.

32bit pci bars are placed above 0xc0000000.  The address space from 2G
to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
Enough room for three additional mmconfig bars (full size), so four
pci domains total if you add the q35 one.

cheers,
  Gerd

