[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-07 Thread Julius Werner
> But currently, when selecting Google Asurada in Kconfig (`make
> menuconfig`), display initialization cannot be disabled, and I do not
> see a way to disable USB init either.

That's a fair point; I think that's just not implemented because
nobody has needed it yet. Display init is already globally guarded by
the display_init_required() function in src/lib/bootmode.c, so if
anyone wants to add a Kconfig in there, that's easy to do. (Maybe it
can be tied to the NO_GFX_INIT option if we untangle how that
interacts with MAINBOARD_FORCE_NATIVE_VGA_INIT a bit.)
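
A minimal sketch of what such a guard could look like, assuming a new,
hypothetical Kconfig symbol MAINBOARD_SKIP_DISPLAY_INIT and coreboot's
CONFIG() macro; the real change would still have to be reconciled with
the policy that is already in that function:

    /* src/lib/bootmode.c (sketch only, not the actual implementation) */
    int display_init_required(void)
    {
        /* Hypothetical knob letting a board opt out of display init. */
        if (CONFIG(MAINBOARD_SKIP_DISPLAY_INIT))
            return 0;

        /* ...the existing policy checks would remain below... */
        return 1;
    }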

For USB, I think it usually just doesn't take any notable amount of
time, so nobody has bothered to make it optional yet. But that could
certainly be done too if there were sufficient interest.
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-07 Thread Paul Menzel

Dear Julius,


On 06.12.21 at 23:42, Julius Werner wrote:

> > If I remember correctly, coreboot’s goal to only do minimal hardware
> > initialization originally meant that the payload/OS does PCI
> > initialization.
>
> FWIW, coreboot does do device initialization for things that are only
> needed by the payload in other cases too: we've been doing display and
> USB initialization that way for years. This only works in cases where
> you need to do a lot of very platform-specific stuff to turn something
> on, but after that the device presents a very simple, generic API
> (like a framebuffer or a standardized host controller interface); I
> think PCI also falls into that area. I think it's useful so that
> payloads don't all need to implement that super SoC-specific stuff
> individually.


It’s hard to draw a clear line. But let’s keep in mind that the Linux
kernel already has working drivers.



> In general, I don't think we should be too strict about what coreboot
> should or shouldn't be in cases where someone just wants to add an
> optional feature that doesn't introduce a huge maintenance burden on
> the core framework. If someone doesn't like it they can just disable
> the Kconfig and do PCI init in the payload / the kernel / via node.js
> or whatever instead. This has clearly been useful on x86 platforms for
> years, so I don't see why Arm platforms shouldn't be allowed to do it
> as well.


I do not want to block or forbid anything. Your point about making it
configurable is very good. I think disabling PCI initialization is
currently not possible for x86 with `make menuconfig`; it’d be great
if it were possible for ARM as well. There is the use case of having
coreboot and a payload with all drivers (a Linux kernel) in the flash
ROM chip, so a build without PCI, graphics and USB init should be
easily configurable.


But currently, when selecting Google Asurada in Kconfig (`make
menuconfig`), display initialization cannot be disabled, and I do not
see a way to disable USB init either.



Kind regards,

Paul
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-06 Thread Julius Werner
> If I remember correctly, coreboot’s goal to only do minimal hardware
> initialization originally meant that the payload/OS does PCI
> initialization.

FWIW, coreboot does do device initialization for things that are only
needed by the payload in other cases too: we've been doing display and
USB initialization that way for years. This only works in cases where
you need to do a lot of very platform-specific stuff to turn something
on, but after that the device presents a very simple, generic API
(like a framebuffer or a standardized host controller interface); I
think PCI also falls into that area. I think it's useful so that
payloads don't all need to implement that super SoC-specific stuff
individually.
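
As an illustration of the "simple generic API" point: once coreboot has
done the platform-specific bring-up, a payload only needs the
framebuffer descriptor that coreboot publishes in its tables in order
to draw. A rough, self-contained sketch; the struct below is a
simplified stand-in for coreboot's framebuffer record, so treat the
exact field names as an assumption:

    #include <stdint.h>

    /* Simplified shape of the framebuffer entry coreboot reports to the
       payload (the authoritative definition lives in coreboot's
       coreboot_tables.h). */
    struct fb_info {
        uint64_t physical_address;
        uint32_t x_resolution;
        uint32_t y_resolution;
        uint32_t bytes_per_line;
        uint8_t  bits_per_pixel;
    };

    /* Fill the screen with one color; no SoC-specific knowledge needed. */
    static void fill_screen(const struct fb_info *fb, uint32_t color)
    {
        uint8_t *base = (uint8_t *)(uintptr_t)fb->physical_address;

        for (uint32_t y = 0; y < fb->y_resolution; y++) {
            uint32_t *row = (uint32_t *)(base + y * fb->bytes_per_line);
            for (uint32_t x = 0; x < fb->x_resolution; x++)
                row[x] = color; /* assumes a 32 bpp mode */
        }
    }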

In general, I don't think we should be too strict about what coreboot
should or shouldn't be in cases where someone just wants to add an
optional feature that doesn't introduce a huge maintenance burden on
the core framework. If someone doesn't like it they can just disable
the Kconfig and do PCI init in the payload / the kernel / via node.js
or whatever instead. This has clearly been useful on x86 platforms for
years, so I don't see why Arm platforms shouldn't be allowed to do it
as well.
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-06 Thread Keith Emery
On a semi-related topic: a significant issue is the absence of a
one-stop-shop debug option that works universally on all x86
platforms. The lack of universality means that people can only
realistically work on boards for which they can get reliable debug
output, which massively narrows the number of boards that can be
worked on easily. Very few platforms these days support serial, and I
think PCI will be gone soon enough. I'm mulling over a potential
solution in the form of a coreboot-specific POST card. The device
would display and log POST codes from 0x80, but also log the coreboot
console via some other I/O (please feel free to make suggestions as to
what that should be). Since I know PCIe has pin breakouts for JTAG, we
might as well log that as well.


All of this is to say that I don't see any of this continuing to be an
option if PCI drivers end up being pulled out of coreboot. Although
that depends on a few things that perhaps somebody can clarify:


1.) Do we actually need these drivers to access 0x80 via PCI/PCIe, or
is that handled as a kind of hardware pass-through, like with JTAG?


2.) Also, are we talking only about traditional PCI drivers here, or
PCIe as well?




On 6/12/21 9:59 pm, Angel Pons wrote:
> Hi list,
>
> On Mon, Dec 6, 2021 at 7:37 AM Jianjun Wang  wrote:
> >
> > On Sat, 2021-12-04 at 20:53 +0800, Hung-Te Lin wrote:
> > > On Wed, Dec 1, 2021 at 10:08 PM Patrick Georgi  wrote:
> > > > 1 December 2021 12:06, "Paul Menzel"  wrote:
> > > > > If I remember correctly, coreboot’s goal to only do minimal
> > > > > hardware initialization originally meant that the payload/OS
> > > > > does PCI initialization.
> > > >
> > > > The original idea was to boot into Linux (hence LinuxBIOS, back
> > > > in the day). coreboot is very different from this scheme, see
> > > > the presence of payloads that aren't Linux.
> > > >
> > > > > Should PCI support be added to coreboot for ARM, so it’s
> > > > > aligned with x86?
> > > > > Should coreboot stay minimal on ARM, for example PCI code adds
> > > > > 100 ms delay [4]?
>
> Paul, coreboot would "stay minimal" if that PCI code was moved into
> depthcharge as-is, but the 100ms delay would still be there and other
> payloads would be missing PCI init. I'm pretty sure this isn't what
> you want, though.
>
> > >   Need to check with MTK folks, but I'd assume the 100ms will be
> > > eliminated in the end, or re-implemented as early-init (and do the
> > > rest in depthcharge).
>
> I don't think any of the PCI code belongs in depthcharge. I'm pretty
> sure it can be integrated better to leverage the existing coreboot
> infrastructure, e.g. the resource allocator and the devicetree.
>
> > Agreed, this 100ms is defined by the PCI specification; removing it
> > directly will cause some compatibility issues, but I think we can
> > put this flow in the early stage to reduce its impact.
>
> As I understand the spec, 100ms is the *minimum* delay before PERST#
> de-assertion, so it's possible to use a longer delay while doing
> something else in the meantime. One way to implement this would be to
> assert PERST# early and do some other initialisation in the meantime,
> e.g. memory init. To ensure that at least 100ms have elapsed, a
> stopwatch from `src/include/timer.h` is very convenient.
>
> > > > > PCI drivers then have to be added to the payloads, which
> > > > > could be a minimal Linux kernel, so that booting from drives
> > > > > connected over PCI is possible?
> > > >
> > > > The only option I see for getting rid of PCI support on ARM is
> > > > to remodel the relationships between coreboot, the payload and
> > > > the OS. Reminds me that I wanted to build a proof-of-concept for
> > > > chainloaded payloads, a concept that might help with such a
> > > > redesign because we could move things out of coreboot to
> > > > "elsewhere" (wherever that might be) piece by piece.
> > > >
> > > > But as is, if there are PCI(e) devices that need early init,
> > > > coreboot is the place to put these drivers.
> > >
> > >   Agree with Patrick - many eMMC devices do need early init, so in
> > > the end we still have to put some eMMC code in Coreboot, and I'd
> > > assume that will be the same situation for PCI-e (NVMe) and UFS.
>
> I don't know what this "early init" consists of, but it sounds like
> something that should be done in coreboot. It could possibly be done
> in a passthrough/chainloaded payload (which would do late ramstage
> init), but it wouldn't really make much of a difference. The idea is
> to start abstracting the hardware so that regular (non-passthrough)
> payloads don't need to know hardware-specific details.
>
> Best regards,
> Angel
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-06 Thread Angel Pons
Hi list,

On Mon, Dec 6, 2021 at 7:37 AM Jianjun Wang  wrote:
>
> On Sat, 2021-12-04 at 20:53 +0800, Hung-Te Lin wrote:
> > On Wed, Dec 1, 2021 at 10:08 PM Patrick Georgi 
> > wrote:
> > > 1 December 2021 12:06, "Paul Menzel"  wrote:
> > > > If I remember correctly, coreboot’s goal to only do minimal
> > > > hardware initialization originally meant that the payload/OS
> > > > does PCI initialization.
> > >
> > > The original idea was to boot into Linux (hence LinuxBIOS, back in
> > > the day). coreboot is very different from this scheme, see the
> > > presence of payloads that aren't Linux.
> > >
> > > > Should PCI support be added to coreboot for ARM, so it’s aligned
> > > > with
> > > > x86?
> > > > Should coreboot stay minimal on ARM, for example PCI code adds
> > > > 100 ms delay [4]?

Paul, coreboot would "stay minimal" if that PCI code was moved into
depthcharge as-is, but the 100ms delay would still be there and other
payloads would be missing PCI init. I'm pretty sure this isn't what
you want, though.

> >   Need to check with MTK folks, but I'd assume the 100ms will be
> > eliminated in the end, or re-implemented as early-init (and do the
> > rest in depthcharge).

I don't think any of the PCI code belongs in depthcharge. I'm pretty
sure it can be integrated better to leverage the existing coreboot
infrastructure, e.g. the resource allocator and the devicetree.

> Agreed, this 100ms is defined by the PCI specification; removing it
> directly will cause some compatibility issues, but I think we can put
> this flow in the early stage to reduce its impact.

As I understand the spec, 100ms is the *minimum* delay before PERST#
de-assertion, so it's possible to use a longer delay while doing
something else in the meantime. One way to implement this would be to
assert PERST# early and do some other initialisation in the meantime,
e.g. memory init. To ensure that at least 100ms have elapsed, a
stopwatch from `src/include/timer.h` is very convenient.
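
A rough sketch of that pattern, staying within a single stage and
assuming hypothetical platform hooks for driving PERST# plus the
stopwatch helpers from `src/include/timer.h` (carrying the deadline
across stages, e.g. from romstage into ramstage, would additionally
need the start timestamp to be stashed somewhere):

    #include <timer.h>

    /* Hypothetical platform hooks; the names are placeholders. */
    void plat_pcie_assert_perst(void);
    void plat_pcie_deassert_perst(void);
    void plat_pcie_setup_clocks_and_phy(void);

    static void pcie_reset_with_overlapped_delay(void)
    {
        struct stopwatch sw;

        plat_pcie_assert_perst();
        /* At least 100 ms must pass before PERST# is de-asserted. */
        stopwatch_init_msecs_expire(&sw, 100);

        /* Do other useful work while the clock is running... */
        plat_pcie_setup_clocks_and_phy();

        /* ...then only block for whatever is left of the 100 ms. */
        stopwatch_wait_until_expired(&sw);
        plat_pcie_deassert_perst();
    }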

> > > > PCI drivers then have to be added to the payloads, which
> > > > could be a minimal Linux kernel, so that booting from drives
> > > > connected
> > > > over PCI is possible?
> > >
> > > The only option I see for getting rid of PCI support on ARM is to
> > > remodel the relationships between coreboot, the payload and the OS.
> > > Reminds me that I wanted to build a proof-of-concept for
> > > chainloaded payloads, a concept that might help with such a
> > > redesign because we could move things out of coreboot to
> > > "elsewhere" (wherever that might be) piece by piece.
> > >
> > > But as is, if there are PCI(e) devices that need early init,
> > > coreboot is the place to put these drivers.
> >
> >   Agree with Patrick - many eMMC devices do need early init, so in
> > the
> > end we still have to put some eMMC code in Coreboot,
> >   and I'd assume that will be the same situation for PCI-e (NVMe) and
> > UFS.

I don't know what this "early init" consists of, but it sounds like
something that should be done in coreboot. It could possibly be done
in a passthrough/chainloaded payload (which would do late ramstage
init), but it wouldn't really make much of a difference. The idea is
to start abstracting the hardware so that regular (non-passthrough)
payloads don't need to know hardware-specific details.

Best regards,
Angel
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-05 Thread Jianjun Wang
On Sat, 2021-12-04 at 20:53 +0800, Hung-Te Lin wrote:
> On Wed, Dec 1, 2021 at 10:08 PM Patrick Georgi 
> wrote:
> > 1 December 2021 12:06, "Paul Menzel"  wrote:
> > > If I remember correctly, coreboot’s goal to only do minimal
> > > hardware initialization originally meant that the payload/OS does
> > > PCI initialization.
> > 
> > The original idea was to boot into Linux (hence LinuxBIOS, back in
> > the day). coreboot is very different from this scheme, see the
> > presence of payloads that aren't Linux.
> > 
> > > Should PCI support be added to coreboot for ARM, so it’s aligned
> > > with
> > > x86?
> > > Should coreboot stay minimal on ARM, for example PCI code adds
> > > 100 ms delay [4]?
> 
>   Need to check with MTK folks, but I'd assume the 100ms will be
> eliminated in the end, or re-implemented as early-init (and do the
> rest in depthcharge).

Agreed, this 100ms is defined by the PCI specification; removing it
directly will cause some compatibility issues, but I think we can put
this flow in the early stage to reduce its impact.
> 
> > > PCI drivers then have to be added to the payloads, which
> > > could be a minimal Linux kernel, so that booting from drives
> > > connected
> > > over PCI is possible?
> > 
> > The only option I see for getting rid of PCI support on ARM is to
> > remodel the relationships between coreboot, the payload and the OS.
> > Reminds me that I wanted to build a proof-of-concept for
> > chainloaded payloads, a concept that might help with such a
> > redesign because we could move things out of coreboot to
> > "elsewhere" (wherever that might be) piece by piece.
> > 
> > But as is, if there are PCI(e) devices that need early init,
> > coreboot is the place to put these drivers.
> 
>   Agree with Patrick - many eMMC devices do need early init, so in
> the
> end we still have to put some eMMC code in Coreboot,
>   and I'd assume that will be the same situation for PCI-e (NVMe) and
> UFS.
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-04 Thread Hung-Te Lin
On Wed, Dec 1, 2021 at 10:08 PM Patrick Georgi  wrote:
> 1 December 2021 12:06, "Paul Menzel"  wrote:
> > If I remember correctly, coreboot’s goal to only do minimal hardware
> > initialization originally meant that the payload/OS does PCI
> > initialization.
> The original idea was to boot into Linux (hence LinuxBIOS, back in the day). 
> coreboot is very different from this scheme, see the presence of payloads 
> that aren't Linux.
>
> > Should PCI support be added to coreboot for ARM, so it’s aligned with
> > x86?
> > Should coreboot stay minimal on ARM, for example PCI code adds 100 ms delay 
> > [4]?

  Need to check with MTK folks, but I'd assume the 100ms will be
eliminated in the end, or re-implemented as early-init (and do the
rest in depthcharge).

> > PCI drivers then have to be added to the payloads, which
> > could be a minimal Linux kernel, so that booting from drives connected
> > over PCI is possible?
> The only option I see for getting rid of PCI support on ARM is to remodel the 
> relationships between coreboot, the payload and the OS. Reminds me that I 
> wanted to build a proof-of-concept for chainloaded payloads, a concept that 
> might help with such a redesign because we could move things out of coreboot 
> to "elsewhere" (wherever that might be) piece by piece.
>
> But as is, if there are PCI(e) devices that need early init, coreboot is the 
> place to put these drivers.

  Agree with Patrick - many eMMC devices do need early init, so in the
end we still have to put some eMMC code in Coreboot,
  and I'd assume that will be the same situation for PCI-e (NVMe) and UFS.
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Does PCI driver code belong in coreboot (ARM)?

2021-12-01 Thread Patrick Georgi via coreboot
1 December 2021 12:06, "Paul Menzel"  wrote:
> If I remember correctly, coreboot’s goal to only do minimal hardware
> initialization originally meant that the payload/OS does PCI
> initialization.
The original idea was to boot into Linux (hence LinuxBIOS, back in the day). 
coreboot is very different from this scheme, see the presence of payloads that 
aren't Linux.

> Should PCI support be added to coreboot for ARM, so it’s aligned with 
> x86? Should coreboot stay minimal on ARM, for example PCI code adds 100 
> ms delay [4]? PCI drivers then have to be added to the payloads, which 
> could be a minimal Linux kernel, so that booting from drives connected 
> over PCI is possible?
The only option I see for getting rid of PCI support on ARM is to remodel the 
relationships between coreboot, the payload and the OS. Reminds me that I 
wanted to build a proof-of-concept for chainloaded payloads, a concept that 
might help with such a redesign because we could move things out of coreboot to 
"elsewhere" (wherever that might be) piece by piece.

But as is, if there are PCI(e) devices that need early init, coreboot is the 
place to put these drivers.


Regards,
Patrick
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org