Hi Nico,

The concept of multiple host bridges was partially implemented in a PoC project for an upcoming SoC when I joined this project some time ago.

There were many issues, mainly in resource allocation (maybe because of the early prototype implementation and its incompatibility with the previous v3 resource allocator).

It also added an additional layer (host bridges) and complexity.

This was finally abandoned in favor of the multidomain concept (https://review.coreboot.org/c/coreboot/+/56263), which reuses the existing coreboot domain, but uses multiple of them: a separate one for each stack.

The question was why introduce new structures when we could reuse the domain to split the available memory/IO ranges between stacks/domains. The domain-level read_resources function then reports the ranges preallocated to its stack instead of the full range it would report in the single-domain case.

It worked with only minimal changes to the coreboot core/resource allocator.
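As a rough sketch of that idea (not the actual patch; struct stack_window and get_stack_window_for_domain() are made-up names just for illustration), the domain-level read_resources could look roughly like this:

#include <types.h>
#include <device/device.h>

/* Hypothetical per-stack window, presumably filled from the platform's
   stack description. */
struct stack_window {
	resource_t io_base, io_limit;
	resource_t mem_base, mem_limit;
};

const struct stack_window *get_stack_window_for_domain(const struct device *dev);

static void stack_domain_read_resources(struct device *dev)
{
	const struct stack_window *win = get_stack_window_for_domain(dev);
	struct resource *res;
	int index = 0;

	/* Report only the I/O range preallocated to this stack/domain... */
	res = new_resource(dev, index++);
	res->base = win->io_base;
	res->limit = win->io_limit;
	res->flags = IORESOURCE_IO | IORESOURCE_ASSIGNED;

	/* ...and only its MMIO range, instead of the full address space. */
	res = new_resource(dev, index++);
	res->base = win->mem_base;
	res->limit = win->mem_limit;
	res->flags = IORESOURCE_MEM | IORESOURCE_ASSIGNED;
}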

Just my experience. It can always be done better ;-)

Mariusz

On 17.03.2022 19:50, Nico Huber wrote:
Hi Arthur,

On 17.03.22 19:03, Arthur Heymans wrote:
Now my question is the following:
On some Stacks there are multiple root busses, but the resources need to be
allocated on the same window. My initial idea was to add those root busses
as separate struct bus in the domain->link_list. However, currently the
allocator assumes only one bus on domains (and bridges).
In the code you'll see a lot of things like

for (child = domain->link_list->children; child; child = child->sibling)
       ....
this is correct, we often (if not always by now) ignore that `link_list`
is a list itself and only walk the children of the first entry.

This is fine if there is only one bus on the domain.
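For illustration, honoring the whole list would just mean nesting the loop, something like:

struct bus *link;
struct device *child;

for (link = domain->link_list; link; link = link->next)
	for (child = link->children; child; child = child->sibling)
	       ....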
Looping over the link_list->next struct buses is certainly an option here,
but I was told that having only one bus here was a design decision on the
allocator v4 rewrite. I'm not sure how common that assumption is in the
tree, so things could be broken in awkward ways.
I wouldn't say it was a design choice, probably rather a convenience
choice. The old concepts around multiple buses directly downstream of
a single device seemed inconsistent, AFAICT. And at the time the
allocator v4 was written it seemed unnecessary to keep compatibility around.

That doesn't mean we can't bring it back, of course. There is at least
one alternative, though.

The currently common case looks like this:


           PCI bus 0
              |
              v

   domain 0 --.
              |-- PCI 00:00.0
              |
              |-- PCI 00:02.0
              |
              :


Now we could have multiple PCI buses directly below the domain. But
instead of modelling this with the `link_list`, we could also model
it with an abstract "host" bus below the domain device and another
layer of "host bridge" devices in between:

           host bus
              |
              v

   domain 0 --.
              |-- PCI host bridge 0 --.
              |                       |-- PCI 00:00.0
              |                       |
              |                       `-- PCI 00:02.0
              |
              |
              |-- PCI host bridge 1 --.
              |                       |-- PCI 16:00.0
              |                       |
              |                       :
              :


I guess this would reduce complexity in generic code at the expense of
more data structures (devices) to manage. OTOH, if we'd make a final
decision for such a model, we could also get rid of the `link_list`.
Basically, setting in stone that we only allow one bus downstream of
any device node.
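Just to sketch what that could mean for the data structures (the field
names here are made up, not actual coreboot code), each device would
then carry at most one downstream bus instead of a list:

struct device {
	struct bus *bus;        /* the single bus this device sits on */
	struct bus *downstream; /* the single bus below it, NULL for leaf devices */
	struct device *sibling;
	/* ... */
};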

I'm not fully familiar with the hierarchy on Xeon-SP systems. Would
this be an adequate solution? Also, does the term `stack` map to our
`domain` 1:1 or are there differences?

Nico
_______________________________________________
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org