[coreboot] Project departure announcement: Arthur Heymans

2024-10-04 Thread Arthur Heymans
Hi

In light of a recent decision by the coreboot leadership, I would like to
announce my departure from the project. I believe the one-year ban of Nico
Huber is unjust and a poor decision for the community. Unless professional
reasons compel me otherwise, I will be leaving the project for at least the
same amount of time.

Most of what I did this year was done in my free time. Out of the 104
patches I submitted to main since January, Nico reviewed 49. From a
practical standpoint, losing him as a reviewer is a disaster.

I do want to end this chapter on a positive note.
I learned so much from this community and have had an incredible experience
being part of it, so thank you a million times! Interacting with this
community has been a blast!

Learning from people and giving back to others is one of the most rewarding
things one can do in life. That's why I believe the future of firmware must
be open source. The communities that form around such software align so
well with our deep human nature.

I think open-source firmware has a really bright future ahead, and I hope
to be part of it somehow. I have a cool firmware project in the pipeline
that I look forward to sharing one day.

Thanks a lot and good luck!

Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Recent disciplinary action taken

2024-10-04 Thread Arthur Heymans
Hi


>
>1.
>
>This particular issue was not brought to the attention of coreboot
>leadership by anybody at Google or Intel. Someone in coreboot's small
>business ecosystem asked us to look into CB:84356
><https://review.coreboot.org/c/coreboot/+/84356>, which later spawned
>other patches. The subtext is that upstream development is very difficult
>and spending days squabbling over a tiny part of a spec that we don't
>control (FSP in this case, but the same is true of SBI, TFA, UEFI, etc) is
>counterproductive. What caught the leadership team's attention was the
>introduction of personal insults into the mix which made a heated debate
>between two individuals much worse. We expect better from everyone,
>especially senior members of the community.
>
A spec, like any text, needs interpretation, which means working with it can
require discussion.
How to implement a spec certainly requires discussion.
Specs also have flaws, and discussing them is useful to catch those flaws.
Tiny parts of a spec can have important implications, so things can get
heated...
So I disagree with this statement. (Not sure why, but my email editor does
not quote the number of the argument correctly. Sorry for that.)

The personal insult is basically "did you read the spec". I don't think
"RTFM" is particularly insulting and certainly not worth banning someone
for a year.


>1.
>
>We acknowledge that Nico has a certain communication style, and like
>others in this thread we've each been on the receiving end of it and have
>rationalized brushing it off for one reason or another. However, this does
>not work in aggregate within a community or organization where many people
>can take it many different ways, especially given that we're a global
>organization with people of varying levels of language proficiency.
>
As indicated by this thread, a lot of people really like working with Nico.
Do we really want the lowest common denominator of what everyone can stomach
communication-wise to determine who can participate in our community?
I don't think so.


>1.
>
>One can create a hostile environment even without overt actions such
>as hitting someone, yelling profanity, inappropriate contact, etc. To put
>this in another context, imagine the storm that would ensue if your
>company's HR department responded to complaints of sexual harassment by a
>guy named Bob in the sales department by saying "We've known for years that
>Bob likes to flirt with his coworkers and we have asked him to tone it
>down. Some have told us that they don't mind too much, and those who
>complain probably just misunderstand his communication style. Besides, a
>lot of people like Bob and he is a really great salesman!" Eventually it
>comes crashing down with more and more collateral damage the longer it's
>left unchecked.
>
Overzealous HR departments can do at least as much damage as leaving
alleged bad actors unchecked. This is precisely what is happening right now.

I think these leadership decisions to ban the most competent people from
our project are poor ones (it does not seem to be the first time) and
have a chilling effect on others.
The cost-benefit ratio is not good here.

Arthur Heymans

On Fri, Oct 4, 2024 at 8:09 AM David Hendricks via coreboot <
coreboot@coreboot.org> wrote:

> Hi everyone,
> Thanks for the feedback, both public and private. As with similar
> situations in the past this was not an easy decision, and there are
> arguments on both sides. It's always hard to lose a valued member of the
> community, even temporarily, but sometimes it becomes necessary. I'll try
> to elaborate on a few points and respond to the above questions in
> aggregate below (even then this got really lengthy):
>
>1.
>
>Contact info for the leadership team can be found at
>https://coreboot.org/leadership.html. We also have an arbitration team
>composed of people other than the leadership who you can reach out to for
>help resolving problems like the ones mentioned in my earlier e-mail.
>2.
>
>This particular issue was not brought to the attention of coreboot
>leadership by anybody at Google or Intel. Someone in coreboot's small
>business ecosystem asked us to look into CB:84356
><https://review.coreboot.org/c/coreboot/+/84356>, which later spawned
>other patches. The subtext is that upstream development is very difficult
>and spending days squabbling over a tiny part of a spec that we don't
>control (FSP in this case, but the same is true of SBI, TFA, UEFI, etc) is
>  

[coreboot] Re: Recent disciplinary action taken

2024-10-02 Thread Arthur Heymans
Hi

I am not happy with this decision.
We are a community of passionate individuals who differ in their
communication styles.
Differences in communication styles cause friction; there is no way around
it.
Commenting beyond the purely technical, on how we treat each other in a
community, is appropriate and not a violation of respectful conduct.
On the contrary, sometimes a more heated discussion is preferable to being
"nice" all the time.

I remember that on one of my first patches in 2016, Nico commented that the
code looked so bad he wanted to cry.
It's not "nice", but it really was bad code, and I have learned a lot since
then, thanks to the honest and truthful communication of the community.
More direct communication is preferred by a lot of people, including myself.

I believe Nico is a good actor in our community and a one-year ban does more
harm than good.
I personally thoroughly enjoy having him as a reviewer.

I ask the leadership to revisit this decision. Coreboot is a hard project
to get into and driving the most competent people away is not a smart move.

Arthur

On Wed, Oct 2, 2024 at 9:40 AM David Hendricks via coreboot <
coreboot@coreboot.org> wrote:

> Dear coreboot community members,
>
> Recently there was some unpleasant activity on Gerrit which violated our
> community’s guidelines regarding respectful conduct. In this case the
> coreboot leadership team determined that the behavior in question fit a
> long pattern about which the individual had been previously warned. As a
> result we have decided to remove Nico from our community for a period of 1
> year. We hope this will be a sufficient cooling off period and that we will
> not need to take more drastic steps in the future.
>
> As we've said in the past, we trust that developers in our community are
> acting in good faith and can generally resolve issues on their own. In
> cases where two sides cannot reach an agreement, for example in a code
> review, we expect all engagement to be respectful and to help drive toward
> a solution. For technical matters this often means starting a mailing list
> discussion, bringing an issue up during the coreboot leadership meeting,
> starting a task force to tackle a large problem, or other means of
> gathering input and collaborating.
>
> Personal matters should be brought to the leadership team directly. We'll
> listen to any complaints or frustrations, but cannot tolerate personal
> attacks made on Gerrit, the mailing list, or other forums. It is always
> required that we treat others in a professional manner and communicate with
> respect, regardless of how strongly we may feel about a particular issue.
>
> If anybody feels that a discussion has become too heated, or that somebody
> is not being treated respectfully, or are simply unsure of how to proceed
> in a difficult situation, please reach out to the coreboot leadership and
> we will chart a path forward together.
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Announcement: Introducing UEFI Capsule Update Integration for coreboot with EDK II

2024-09-13 Thread Arthur Heymans
>
> No EFI_CAPSULE_LOADER needed, not Linux specific. EFI-aware OS'
> are also aware of the EFI system partition, FWIW. And with the
> Boot Loader Specification[3], we already try to make use of it
> beyond UEFI.
>
> Modeling things with files is a good choice by the Linux kernel
> developers. But the overall design seems to miss that the boot-
> loaders are file-aware too! We already have a ubiquitous mecha-
> nism to pass files (usually the kernel) to the firmware.  Often
> also to validate their authenticity (secure boot).


So basically, instead of a runtime service, getting a file from a disk at
boot time to update the flash?
I think this is indeed dramatically simpler on both the firmware side and
the OS side.
I suppose this only works when there is a boot disk, but then again servers
tend to have a BMC which takes care of firmware updates anyway.
So this seems like a good trade-off.

> Not sure about exact considerations which went into the decision other than
> on-disk capsules already working in EDK2, but use of in-RAM capsules looks
> like a cleaner design to me:
>  - no file-system writes by an OS
>  - no file-system writes by firmware to remove a processed capsule (not
>sure I want to trust EDK2 drivers doing that)
>  - the capsule can be verified at the moment it's offered to the
>firmware, not in the middle of a boot

- File system writes are way simpler than supporting a UEFI runtime
service.
- Does the firmware need to remove the capsule? Can't fwupd do that once it
detects a successful update? What does the spec say about this?
- Doing things at runtime is against the coreboot philosophy. If doing
something at boot time is a reasonable option to avoid needing a runtime
service, I think this is the way to go.

I generally like Nico's proposal. No new coreboot-payload interaction
(depends on the design). Less runtime. Reusing existing spec and tooling.
This is the way to go in my opinion.

Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Stable RO/RW ABI discussion

2024-08-21 Thread Arthur Heymans
Hi

I fully agree with Julius. It's not reasonable to attempt to maintain
compatibility between RO and future RW builds in the main branch.
Too many things can go wrong, and the complexity of mitigating this problem
is too high: you basically need to replace linker script symbols with
finding where things are at runtime, which is a mess (remember CAR
relocation), and at the same time keep track of what version the data at
that location is, since that can also be incompatible.
The strain on both developers and reviewers to achieve compatibility won't
be pleasant.

Maybe someone can create a tool that parses the RO and RW ELFs to detect
some incompatibilities, to help downstream maintainers?
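
As a very rough sketch of what such a tool could look like (my own idea, not
an existing utility), one could compare the allocated sections of the RO and
RW ELFs by name and flag any address or size mismatch. The snippet below
assumes 64-bit ELF headers and that the interesting regions show up as
sections; the real pre-RAM buffers (CBFS_MCACHE, FMAP_CACHE, the stack)
would probably have to be matched by symbol or by parsing car.ld instead:

```c
/* Sketch: flag sections whose address or size differ between RO and RW. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct section { char name[64]; unsigned long addr, size; };

static int read_sections(const char *path, struct section *out, int max)
{
	FILE *f = fopen(path, "rb");
	if (!f) { perror(path); exit(1); }

	Elf64_Ehdr eh;
	if (fread(&eh, sizeof(eh), 1, f) != 1) exit(1);

	Elf64_Shdr *sh = calloc(eh.e_shnum, sizeof(*sh));
	fseek(f, eh.e_shoff, SEEK_SET);
	if (fread(sh, sizeof(*sh), eh.e_shnum, f) != eh.e_shnum) exit(1);

	/* Section name string table */
	char *strtab = malloc(sh[eh.e_shstrndx].sh_size);
	fseek(f, sh[eh.e_shstrndx].sh_offset, SEEK_SET);
	if (fread(strtab, 1, sh[eh.e_shstrndx].sh_size, f) !=
	    sh[eh.e_shstrndx].sh_size) exit(1);

	int n = 0;
	for (int i = 0; i < eh.e_shnum && n < max; i++) {
		if (!(sh[i].sh_flags & SHF_ALLOC))
			continue;
		strncpy(out[n].name, strtab + sh[i].sh_name, sizeof(out[n].name) - 1);
		out[n].addr = sh[i].sh_addr;
		out[n].size = sh[i].sh_size;
		n++;
	}
	free(sh); free(strtab); fclose(f);
	return n;
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s ro.elf rw.elf\n", argv[0]);
		return 1;
	}
	struct section ro[128] = {0}, rw[128] = {0};
	int nro = read_sections(argv[1], ro, 128);
	int nrw = read_sections(argv[2], rw, 128);

	for (int i = 0; i < nro; i++)
		for (int j = 0; j < nrw; j++)
			if (!strcmp(ro[i].name, rw[j].name) &&
			    (ro[i].addr != rw[j].addr || ro[i].size != rw[j].size))
				printf("MISMATCH %s: RO %#lx+%#lx vs RW %#lx+%#lx\n",
				       ro[i].name, ro[i].addr, ro[i].size,
				       rw[j].addr, rw[j].size);
	return 0;
}
```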

Arthur


On Thu, Aug 22, 2024 at 1:30 AM Julius Werner  wrote:

> [resent with correct sender address]
>
> Hi,
>
> I wanted to pick up the discussion from the leadership meeting again
> here because I think it's easier to show my point with a few examples.
> If you want an old coreboot RO build to work together with a ToT RW
> build, then you don't just care about the vboot workbuffer matching...
> *everything* that is shared in any way between bootblock/verstage and
> romstage/ramstage/payload needs to be kept in sync or risks
> introducing dangerous and not necessarily immediately obvious errors.
>
> One major issue is the pre-RAM memory layout. Take for example this
> simple one-line CL: https://review.coreboot.org/83680 . Would most
> reviewers expect that it makes RW firmware built after that point
> incompatible with RO firmware built before? Well, it does, because it
> increases the size of a memory region here
>
> https://review.coreboot.org/plugins/gitiles/coreboot/+/refs/heads/main/src/arch/x86/car.ld#34
> which then moves the offsets of all subsequent memory regions
> downwards accordingly. Some of these regions contain buffers that were
> passed by the bootblock to later stages (e.g. CBFS_MCACHE,
> FMAP_CACHE), and when the romstage tries to reuse those buffers it
> suddenly finds different data at the same offsets.
>
> We actually had to figure out how to support this for an RW update to
> one of our shipped Chromebooks, and it looks like this:
>
> https://chromium-review.googlesource.com/c/chromiumos/third_party/coreboot/+/5732763/3/src/arch/x86/assembly_entry.S
> . It needs an ugly, complicated assembly hack to copy bytes around
> from the locations where RO firmware left them into the locations
> where RW firmware expects them. Since one of these regions is the
> stack, you can't even do that in C code because you might already be
> trampling over the data you're still trying to save once you enter a C
> function. And this patch is just tailored to this one particular case
> (and platform), it's not a universal solution — if we were resizing a
> different region, or swapping the order of two regions, that would
> require a completely different (maybe impossible) workaround.
>
> Now, we could try to replace our simple memory region management with
> something more complicated that comes with a built-in tagged directory
> (kinda like a second CBMEM in pre-RAM). Then, if every single access
> to one of those regions went through that directory, it would
> automatically find the right location. We would need to write the code
> to parse that directory in assembly for every architecture, since the
> stack setup is one of the things that uses them. But even then, that
> doesn't solve the resize problem: if we e.g. find a rare stack
> overflow in the FSP that requires us to have a bigger romstage stack,
> then we can't just reuse the directory the bootblock has written, we
> need to actually change it to fix that bug and move all those regions
> around to fit the new requirements. And what if migrating from the old
> to the new layout on the fly is not possible? If we flip the order of
> two sections, there might not be enough free scratch space to
> temporarily save the contents of one section while we move the
> contents of the other into its place. (Note that car.ld is only the
> shared pre-RAM layout for Intel platforms. Every Arm and (recent) AMD
> platform has their own separate layout like this, and some of them
> tend to be very tight. We regularly have to land patches like
> https://review.coreboot.org/78970 to make some sections larger or
> smaller so that things still fit on certain platforms. Each of those
> changes would lead to one of these situations where suddenly the
> romstage needs to move everything into a different place from where
> the bootblock left it because it needs space for new things that the
> older code didn't account for.)
>
> There are plenty of other dependencies between RO and RW stages, many
> of them very abstract and platform specific. For example, on Arm SoCs
> we often have a clock tree where a few shared top-level PLLs get
> multiplexed onto many peripheral devices that each have their own
> pre-divisors. The PLLs are usually set up by bootblock code (because
> some of them are needed very ea

[coreboot] Re: Enforcing coreboot as lowercase

2024-07-04 Thread Arthur Heymans
Hi

Thanks for the reply.

Are you proposing to give up trying to defend the spelling of the
> project's name because too many people write it wrong and educating
> them is too much effort? If so, I think this is a self-defeating
> attitude and I completely disagree with it.


Language is not a set-in-stone thing. There are default grammatical rules
for how to write things, and sometimes it is worth overriding those rules,
as I explained.
It's basically a trade-off.
There is no right and wrong here, except maybe from a trademark
perspective, which most people are unaware of.
Later I make the case that even from a trademark perspective I don't think
it matters.
I'm making the case that enforcing the lowercase spelling of "coreboot" has
more downsides than upsides, which is why I propose to allow "Coreboot" at
the start of a sentence.
Personally I think educating people about a trademark detail is superfluous
work.
Also, in my personal communication it's a conundrum.
For instance, if I write a blog post I don't want to look like I'm making
silly grammatical mistakes to those who haven't looked into the trademark
registry (which almost no one does).
At the same time I don't want to explain the trademark either, as I think it
blunts communicative efficiency.
Or is it that the trademark only covers the all-lowercase "coreboot"
> spelling, so one can use a name like "CoReboot" (e.g. for something
> unrelated) without infringing the "coreboot" trademark? In that case,
> making the trademark case-insensitive makes sense.
>

So currently the only reason lowercase "coreboot" is enforced is that this
is how the trademark was obtained.
I'm making the argument that trademark interpretation is typically broad and
allows for an uppercase letter at the start of a sentence, since that's what
grammatical rules call for.
So I think "Coreboot" is very much covered by the "coreboot" trademark.

Arthur Heymans


On Thu, Jul 4, 2024 at 7:19 PM Angel Pons  wrote:

> Hi,
>
> On Thu, Jul 4, 2024 at 3:48 PM Arthur Heymans  wrote:
> >
> > Hi
> >
> > The coreboot trademark is registered as lowercase.
> > We enforce this in for instance commits, even when normal grammar would
> dictate uppercase at the start of a sentence.
> >
> > This makes sense for very well known brands, companies and products like
> "eBay", "iPhone" and "AMD". They are all very well known trademarks and they
> have uppercase letters in atypical places. For these words, grammar
> exceptions seem reasonable.
> >
> > Coreboot is reasonably well known as a project, but few people know
> about the specificity of the trademark. This often causes confusion for
> people reading "coreboot" at the start of a sentence, where it looks
> grammatically wrong, making it even look unprofessional in the eyes of
> some. This is because there is no other uppercase letter inside coreboot
> that would make it a typical exception to regular grammar rules.
> >
> > People getting into the project who make the mistake at the start of a
> sentence might get the wrong impression of too many idiosyncrasies. On top
> of that, it takes a non-zero amount of effort from people in the project to
> educate others on this trademark detail.
> >
> > Also, trademarks are typically a bit broader than exactly how they are
> registered. I cannot start a company called iNTel or aMD that makes chips.
> I cannot put a product on the market called "IPHoNE". I think the same
> applies to "coreboot".
> >
> > So my question is: can we relax the lowercase trademark enforcement?
> I would suggest simply allowing both ways.
>
> I am not sure if I understood you correctly.
>
> Are you proposing to give up trying to defend the spelling of the
> project's name because too many people write it wrong and educating
> them is too much effort? If so, I think this is a self-defeating
> attitude and I completely disagree with it.
>
> Or is it that the trademark only covers the all-lowercase "coreboot"
> spelling, so one can use a name like "CoReboot" (e.g. for something
> unrelated) without infringing the "coreboot" trademark? In that case,
> making the trademark case-insensitive makes sense.
>
> Or is it something else? Then... *confused noises*
>
> > Arthur Heymans
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
>
> Best regards,
> Angel
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Enforcing coreboot as lowercase

2024-07-04 Thread Arthur Heymans
Hi

The coreboot trademark is registered as lowercase.
We enforce this in for instance commits, even when normal grammar would
dictate uppercase at the start of a sentence.

This makes sense for very well known brands, companies and products like
"eBay", "iPhone" and "AMD". They are all very well known trademarks and they
have uppercase letters in atypical places. For these words, grammar
exceptions seem reasonable.

Coreboot is reasonably well known as a project, but few people know
about the specificity of the trademark. This often causes confusion for
people reading "coreboot" at the start of a sentence, where it looks
grammatically wrong, making it even look unprofessional in the eyes of
some. This is because there is no other uppercase letter inside coreboot
that would make it a typical exception to regular grammar rules.

People getting into the project who make the mistake at the start of a
sentence might get the wrong impression of too many idiosyncrasies. On top
of that, it takes a non-zero amount of effort from people in the project to
educate others on this trademark detail.

Also, trademarks are typically a bit broader than exactly how they are
registered. I cannot start a company called iNTel or aMD that makes chips.
I cannot put a product on the market called "IPHoNE". I think the same
applies to "coreboot".

So my question is: can we relax the lowercase trademark enforcement? I
would suggest simply allowing both ways.

Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #276] SPI flash console output causes `SMM Handler caused a stack overflow`

2024-06-21 Thread Arthur Heymans
Issue #276 has been updated by Arthur Heymans.


config SMM_MODULE_STACK_SIZE
	hex
	default 0x800 if ARCH_RAMSTAGE_X86_64
	default 0x400

So just bumping this to always use 0x800 might be a solution


Bug #276: SPI flash console output causes `SMM Handler caused a stack overflow`
https://ticket.coreboot.org/issues/276#change-1864

* Author: Paul Menzel
* Status: New
* Priority: Normal
* Start date: 2020-07-19

On the Lenovo T60 (Type 2007 with dedicated ATI/AMD graphics device), coreboot 
built with *SPI Flash console output* (`CONFIG_CONSOLE_SPI_FLASH=y`) fails to 
boot due to a stack overflow:

```
FMAP: area COREBOOT found @ 60200 (1703424 bytes)
CBFS: Locating 'fallback/dsdt.aml'
CBFS: Found @ offset 39300 size 3138
FMAP: area COREBOOT found @ 60200 (1703424 bytes)
CBFS: Locating 'fallback/slic'
CBFS: 'fallback/slic' not found.
ACPI: Writing ACPI tables at bfb51000.
ACPI:* FACS
ACPI:* DSDT
FMAP: area CONSOLE found @ 0 (131072 bytes)


coreboot-4.12-1529-gaba8103093 Sun Jul 19 07:24:18 UTC 2020 smm starting (log 
level: 7)...
canary 0x0 != 0xbfeffc00
SMM Handler caused a stack overflow
```



-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: AMD64 & X86S payload handover

2024-04-23 Thread Arthur Heymans
Hi

So to play devil's advocate on why having a 64-bit payload handoff/entry
point on AMD64 might be desirable:
- speed (how long does the mode transition take?)
- flexibility: loading and jumping to it above 4G

However, I'm not convinced by either argument and I agree with your proposal.
It keeps things compatible and simple on the coreboot side (no change
or runtime detection) and requires only a bit more logic on the payload
side.

https://review.coreboot.org/c/coreboot/+/82016 adds that 32-bit entry point
for 64-bit libpayload. I'll try to make it work in the next few days.
As this requires no changes in the coreboot handoff and still provides
the benefits of a 64-bit payload (except for the location of the entry
point), I think it's a solid way forward.
Unless, that is, the arguments I listed above turn out to matter after all.

Kind regards
Arthur


On Fri, Apr 19, 2024 at 2:01 PM Nico Huber via coreboot <
coreboot@coreboot.org> wrote:

> Hi coreboot fellows,
>
> yesterday some patches[1] surfaced on Gerrit about a handover for 64-bit
> x86 payloads.  Strictly speaking,  we don't have to do anything for this
> right now, as the original, protected-mode handover would work too. How-
> ever, there is X86S on the horizon.  If you don't know about it, here is
> the short version: X86S was announced by Intel last year,  it's supposed
> to be a simplified version of AMD64, without real nor protected mode, so
> 64-bit long mode only. So we have:
>
> ++---+--+
> || AMD64 | X86S |
> ++---+--+
> |  real mode |   X   |  |
> ++---+--+
> | protected mode |   X   |  |
> ++---+--+
> |  long mode |   X   |  X   |
> ++---+--+
>
> After a night's sleep, I'm convinced we should keep things as simple as
> possible on the coreboot side, and hence propose the following:
>
> 1. AMD64: Keep the current, 32-bit protected mode handover
> 2.  X86S: Hand over in long mode with
>   a) the pointer to cbtables in RDI (like the first parameter
>  in the System V ABI),
>   b) the guarantee that the payload and cbtables are identity
>  mapped in the current page tables.
>
> Rationale:
> * 1. Allows us to keep compatibility where it's possible.  X86S breaks
>   compatibility on purpose but we don't have to break compatibility in
>   the AMD64 case.  There is one exception: A future X86S payload could
>   potentially run on AMD64 and vice versa. Though, compatibility could
>   be ensured on the payload end (e.g. having two entry points like the
>   Linux kernel has(*)).
> * Keeping the current handover where possible would allow to use a new
>   64-bit payload with a coreboot build from one of the older branches,
>   for instance,  without having to modify them all.  Existing coreboot
>   binaries for AMD64 systems would stay compatible as well.
> * 1. requires a 64-bit payload to switch (back) to long mode by itself.
>   This should be straight-forward, though, and can be done with rather
>   few instructions. The necessary page-table setup could be kept small,
>   as long mode supports 1GiB pages.  Having to set up its own page
>   tables also avoids problems with assumptions about the prior setup.
> * Generally, we can't control what downstream does. However, by adding
>   a long-mode handover as late as possible (i.e. the first X86S port),
>   we would encourage everybody to stay compatible.  Once the long-mode
>   handover is implemented upstream, it will be easier to create a pay-
>   load that works with some x86 coreboot builds,  but not all.  Making
>   it X86S only, will limit the room for incompatibility.
> * 2. a) is probably what people would expect.
> * 2. b) allows for more flexibility in coreboot, without having to set
>   up much (ideally nothing) special for the payload. If we'd make more
>   guarantees, e.g. a whole 4GiB space identity mapped, it would become
>   more likely that we have to change the mapping for the handover. For
>   instance, if we'd ever decide to add a continuous mapping for a >16M
>   flash chip. That would likely still be compatible with 2. b), though
>   might not be with more elaborate guarantees.
>
> So, please share your thoughts :)
>
> Best regards,
> Nico
>
> [1] https://review.coreboot.org/c/coreboot/+/81960
> https://review.coreboot.org/c/coreboot/+/81964
> https://review.coreboot.org/c/coreboot/+/81968
>
> (*) All payloads (builds) until now will be incompatible to X86S. But
> if we'd encourage to give all 64-bit payloads from now on two en-
> try points (32- and 64-bit), we could increase the number of pay-
> loads that are both compatible to X86S  and all prior AMD64 core-
> boots.
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot

[coreboot] Re: QEMU x86 i440fx/piix4 build fails for >= 32MB ROMs - Assertion IS_HOST_SPACE_ADDRESS(host_space_address) failed

2024-02-20 Thread Arthur Heymans
Hi Mike

Same thing as on that AMD platform: QEMU does not support it. In general,
x86 requires a special memory map for a boot medium larger than 16M, as
below 4G - 16M there are other default MMIO ranges that will conflict, like
the LAPIC base. The capability to deal with larger flash is only present
on fairly recent hardware, and in those cases coreboot often supports it.

Kind regards.

On Tue, Feb 20, 2024 at 4:42 PM Mike Banon  wrote:

> Dear friends, thank you very much for all your replies - especially to
> Felix Held for his research on the possibility of >16 MB SPI flash on
> these AMD boards.
>
> > I guess what I’m thinking is I’m not sure it’s worth the effort to make
> a build work for something that is physically impossible
>
> Hmmm... there could be newer boards with 4-byte SPI Flash controller,
> and by the way there is no physical impossibility for QEMU - which is
> also failing.
>
> I just tried the latest coreboot (so without any git reverts of
> restore_agesa.sh that restore the opensource AGESA boards) - picked a
> virtual BOARD_EMULATION_QEMU_X86_I440FX board with almost-default
> coreboot config (only set CONFIG_COREBOOT_ROMSIZE_KB_65536 and
> CONFIG_CBFS_SIZE=0x0400 , everything else is unchanged) - but have
> exactly the same error:
>
> ...
> CC bootblock/mainboard/emulation/qemu-i440fx/bootmode.o
> CC bootblock/mainboard/emulation/qemu-i440fx/fw_cfg.o
> CC bootblock/southbridge/intel/common/reset.o
> CC bootblock/southbridge/intel/common/rtc.o
> CC bootblock/southbridge/intel/i82371eb/bootblock.o
> LINK   cbfs/fallback/bootblock.debug
> /my-path-to/coreboot-new/util/crossgcc/xgcc/bin/i386-elf-ld.bfd:
> warning: build/cbfs/fallback/bootblock.debug has a LOAD segment with
> RWX permissions
> OBJCOPYcbfs/fallback/bootblock.elf
> OBJCOPYbootblock.raw.elf
> OBJCOPYbootblock.raw.bin
> Created CBFS (capacity = 67108324 bytes)
> BOOTBLOCK
> CBFS   cbfs_master_header
> CBFS   fallback/romstage
> Image SIZE 67108864
> cbfstool: /my-path-to/coreboot-new/util/cbfstool/cbfstool.c:1186:
> cbfstool_convert_mkstage: Assertion
> `IS_HOST_SPACE_ADDRESS(host_space_address)' failed.
> Aborted (core dumped)
> make: *** [Makefile.mk:1211: build/coreboot.pre] Error 13
>
> On Mon, Feb 19, 2024 at 9:24 PM ron minnich  wrote:
> >
> > I guess what I’m thinking is I’m not sure it’s worth the effort to make
> a build work for something that is physically impossible
> >
> > On Mon, Feb 19, 2024 at 12:11 Felix Held 
> wrote:
> >>
> >> Hi Mike,
> >>
> >> SPI NOR flash chips with more than 16MByte use 4 byte addresses while
> >> ones with up to 16MBytes use 3 byte addresses. The SPI flash controllers
> >> on older systems often only support the 3 byte address mode. Also
> >> typically only up to 16 MBytes worth of SPI flash contents can be mapped
> >> right below the 4GB boundary, since the 16MByte below that contain the
> >> MMIO of for example LAPIC and IOAPIC.
> >> Had a quick look at the BKDG for family 16h model 30h, which is newer
> >> than the chip used on G505S or A88XM-E, and it didn't have the registers
> >> in the SPI controller that I'd expect to be present if it supports the 4
> >> byte address mode.
> >>
> >> Regards,
> >> Felix
> >>
> >> On 19/02/2024 19:55, Mike Banon wrote:
> >> > Theoretically - yes, if someone finds & solders there a 32 MB (256
> >> > megabit) SPI Flash chip with 8 pins. Hopefully, as the proprietary
> >> > UEFIs become more & more bloated, these large capacity chips will
> >> > become more widely available in the near future. And, since a coreboot
> >> > itself consumes less than 1MB on these "opensource AGESA" AMD systems
> >> > such as G505S and A88XM-E, all this room will allow some very
> >> > interesting experiments! If even 3 MB is enough for me to put 9 of 10
> >> > floppies of the collection described here (thanks to LZMA compression)
> >> > -
> http://dangerousprototypes.com/docs/Lenovo_G505S_hacking#Useful_floppies
> >> > , guess what wonders we can do with 31 MB... ;-)
> >> >
> >> > On Mon, Feb 19, 2024 at 7:17 PM ron minnich 
> wrote:
> >> >>
> >> >> Can the system you are discussing actually use larger than 16 MB rom?
> >> >>
> >> >>   I am wondering about your use of the phrase “out of curiosity”
> >> >>
> >> >> On Mon, Feb 19, 2024 at 07:05 Mike Banon  wrote:
> >> >>>
> >> >>> Small bump, I am still having this error while (out of curiosity)
> >> >>> trying to build the Lenovo G505S ROM for 32 MB or 64 MB spi flash:
> >> >>>
> >> >>>  OBJCOPYbootblock.raw.bin
> >> >>> Created CBFS (capacity = 33488356 bytes)
> >> >>>  BOOTBLOCK
> >> >>>  CBFS   cbfs_master_header
> >> >>>  CBFS   fallback/romstage
> >> >>> Image SIZE 33554432
> >> >>> cbfstool:
> /media/mint/2183183a-158f-476a-81af-b42534a68706/shared/core/coreboot/util/cbfstool/cbfstool.c:1186:
> >> >>> cbfstool_convert_mkstage: Assertion
> >> >>> `IS_HOST_SPACE_ADDRES

[coreboot] Re: src/soc/intel/xeon_sp/Kconfig:95:warning: config symbol defined without type

2024-02-18 Thread Arthur Heymans
Hi

I'm not seeing this and the type of the option is set in
src/device/Kconfig. Do you have some local changes that affect that?

Arthur

On Sun, Feb 18, 2024 at 6:33 PM Mike Banon  wrote:

> After this commit - https://review.coreboot.org/c/coreboot/+/79058 -
> now I always have this warning while doing a "make menuconfig" :
>
> src/soc/intel/xeon_sp/Kconfig:95:warning: config symbol defined without
> type
>
> To fix this, please add the "bool" line before "default y" to specify
> the config symbol type.
>
> P.S. wrote about this problem under the change about ~1 month ago, but
> maybe the notifications are not working for the merged changes - so
> posting it here just in case
> --
> Best regards, Mike Banon
> Open Source Community Manager of 3mdeb - https://3mdeb.com/
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: RFC: Post-build control of serial console

2023-09-11 Thread Arthur Heymans
Hi Simon

I agree that neither CBFS nor FMAP, for that matter, is a practical option,
as on some SoCs the boot medium is not memory mapped and requires some
hardware init first.

We have other instances where cbfstool updates the bootblock code post
compilation: CBFS verification.
See src/lib/metadata_hash.c, which has a magic set of characters the tooling
can find:

__attribute__((used, section(".metadata_hash_anchor")))
static struct metadata_hash_anchor metadata_hash_anchor = {
	/* This is the only place in all of coreboot where we actually need
	   to use this. */
	.magic = DO_NOT_USE_METADATA_HASH_ANCHOR_MAGIC_DO_NOT_USE,
	.cbfs_hash = { .algo = CONFIG_CBFS_HASH_ALGO }
};

I guess CCB would have a similar way of working: a magic char sequence for
each setting, because one struct containing all the options will not work
well when the struct or the semantics of the entries need updating?
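
Purely as an illustration of that idea (the struct layout, magic string and
names below are invented, not existing coreboot or cbfstool code), each
setting could carry its own anchor that cbfstool searches for and patches in
place:

```c
#include <stdint.h>

/* Hypothetical CCB setting anchor; nothing like this exists upstream. */
struct ccb_setting {
	char magic[16];		/* unique per setting, e.g. "CCB:serial" */
	uint32_t value;		/* patched in place by cbfstool */
} __attribute__((packed));

__attribute__((used, section(".ccb")))
static const struct ccb_setting ccb_serial = {
	.magic = "CCB:serial",
	.value = 1,		/* default: serial console enabled */
};

static int ccb_serial_enabled(void)
{
	/* Plain struct access, usable before CBFS is available. */
	return ccb_serial.value != 0;
}
```

Because each setting has its own magic, adding or reordering settings would
not break older tooling the way a single versioned struct might.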

Sidenote: there is already an option framework in place, get_uint_option(),
which is hooked up to the serial console verbosity.
Maybe CCB could be used as a backend for that option framework so no calling
code needs to change?
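
For example (a hedged sketch: get_uint_option() with this signature is what I
recall from the option API, and ccb_find_setting() is a hypothetical lookup
over the anchors sketched above, so this builds on that previous snippet):

```c
/* Hypothetical: walk the CCB anchors in the bootblock and return the one
 * whose magic matches the requested option name. */
const struct ccb_setting *ccb_find_setting(const char *name);

/* Same signature as the existing option API, so callers such as the console
 * code would not need to change. */
unsigned int get_uint_option(const char *name, const unsigned int fallback)
{
	const struct ccb_setting *s = ccb_find_setting(name);

	return s ? s->value : fallback;
}
```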

Similar to updating metadata_hash_anchor, there are some difficulties if the
bootblock is checksummed somewhere, in which case the checksum needs to be
updated too.
It also makes root-of-trust technologies like Intel TXT, Boot Guard and CBnT
very hard to use, as the digest of the bootblock is in a signed manifest
which you would then have to regenerate. So it's probably best to make this
feature optional?

About Firmware Handoff: currently linker script globals (the linker scripts
are the same for each stage) are used to pass data between stages.
It has the same advantage as the CCB solution you propose: the code has a
pointer to the data without needing to fetch or decode it.
The disadvantage is that you need to maintain this in every linker script.
Maybe this deserves another discussion, as others have stumbled on the need
to pass data between stages too: https://review.coreboot.org/c/coreboot/+/76393

Arthur



On Tue, Sep 12, 2023 at 12:04 AM Simon Glass  wrote:

> Hi,
>
> RFC: Post-build control of serial console
>
> It is annoying to have to create and maintain two completely
> different builds of coreboot just to enable or disable the console.
> It would be much more convenient to have a 'silent' flag in the
> image, which can be updated as needed, without needing to rebuild
> coreboot. For example, if something goes wrong and coreboot hangs,
> it would be nice to be able to enable serial on the same image, then
> boot it again to see how far it gets.
>
> I propose a  'Coreboot Control Block' (CCB) which can hold a small
> number of such settings.
>
> It is designed to be available very early in bootblock, before CBFS
> is ready. It is able to control the output of the very first bootblock
> banner. early silicon-init, etc. It is built as part of the bootblock
> image,
> so can be accessed simply as a static C struct within the bootblock
> code. That means that the code overhead is very low and we could
> perhaps enable it by default.
>
> The bootblock can have a CBFS file attribute that indicates that it
> contains a
> CCB and the offset where it is stored. Other coreboot stages can read
> this as well, or it could be duplicated in a separate file.
>
> We can provide options in cbfstool to get and set settings in the CCB.
> This makes it easy to use this feature, e.g. to enable silent:
>
>cbfstool coreboot.rom ccb-set -n serial -V silent
>
> Why not use a separate CBFS file?
> - Boards typically read the entire bootblock and start execution from
> it. The console is started early so the settings are needed before
> CBFS is available. By putting the CCB inside the bootblock, it can
> control things from an early stage.
>
> Why not CMOS RAM / VVRAM?
> - If we allocate some space in CMOS for console / logging settings,
> then it would allow a similar feature. But it involves changing
> settings on the device. Each board would need to provide some CMOS
> options for this feature as part of the layout file. It would not be
> possible to enable console output without running some code on the
> device to update the CMOS RAM.
>
> Why not use Firmware Handoff to pass the CCB to following stages?
> - We could do that, particularly if CCB attracts some additional features,
> such as logging.
>
> Regards,
> Simon
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: 2023-08-23 - coreboot Leadership meeting minutes

2023-09-07 Thread Arthur Heymans
Hi Hannah

So the only reason for including uGOP as a blob is failed planning at
Intel and nothing technical?

I agree with Patrick here. This is really an Intel problem and not
something coreboot should have to put up with.

Arthur

On Thu, Sep 7, 2023, 21:50 Williams, Hannah 
wrote:

> It is not possible to open source uGOP today without re-writing it. We do
> not have time to re-write considering our product timeline and hence the
> request to allow to use binary now. We acknowledge that we will make effort
> to open source uGOP for future SOC by working internally with the other
> teams in Intel like i915 team. We have to see how to write common code
> between the two so that we can open source at the same time.
>
> Hannah
>
>
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: 2023-08-23 - coreboot Leadership meeting minutes

2023-09-07 Thread Arthur Heymans
Hi Hannah

It looks like your information is not up to date.

https://www.phoronix.com/news/Intel-HDMI-2.1-FRL-Linux so HDMI 2.1
upstreaming in Linux began 10 months ago?

Both Linux and
https://gitlab.freedesktop.org/drm/igt-gpu-tools/-/commits/master/ have
support for VBT, which should suffice without a spec. Even coreboot has
limited support for VBT.

It looks like there is no good reason for this blob to be closed source, or
did I miss something? The only difference with i915, which has functional
display support, is that uGOP uses VGA legacy mode instead of a linear
framebuffer. VGA legacy mode is very old. I think you'll have a hard time
convincing this community that this mode is sensitive, NDA-only IP that
justifies a blob.

Do you have other technical reasons to justify uGOP being closed source
while Linux has support pretty much a year before the silicon is released?
Maybe we or a search engine can help you out on those too?

Arthur

On Wed, Sep 6, 2023, 17:58 Williams, Hannah 
wrote:

> Here are the reasons why we cannot open source Meteor Lake uGOP:
> - It has licensed code for HDMI and other industry specifications (i915
> also cannot open source HDMI 2.1)
> - VBT spec is not open sourced
> There will have to be a re-design of the uGOP component so that we can
> work around above issues and still open source. This is being considered
> for future SOCs.
> Hannah
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #508] Dojo fails to boot from NVMe with CONFIG_RESOURCE_ALLOCATION_TOP_DOWN enabled

2023-08-31 Thread Arthur Heymans
Issue #508 has been updated by Arthur Heymans.


Yu-Ping Wu wrote:
> Similar to #499, after https://review.coreboot.org/c/coreboot/+/75012, Dojo 
> fails to boot.
> Disabling CONFIG_RESOURCE_ALLOCATION_TOP_DOWN fixes the problem.
> However I'm not sure how to fix it from MediaTek's PCIe functions or settings 
> (for example mtk_pcie_domain_read_resources).

write32p(table, mmio_res->cpu_addr |
 PCIE_ATR_SIZE(__fls(mmio_res->size)));


Bug #508: Dojo fails to boot from NVMe with CONFIG_RESOURCE_ALLOCATION_TOP_DOWN 
enabled
https://ticket.coreboot.org/issues/508#change-1647

* Author: Yu-Ping Wu
* Status: New
* Priority: Normal
* Assignee: Nico Huber
* Target version: none
* Start date: 2023-08-31
* Affected versions: 4.21

Similar to #499, after https://review.coreboot.org/c/coreboot/+/75012, Dojo 
fails to boot.
Disabling CONFIG_RESOURCE_ALLOCATION_TOP_DOWN fixes the problem.
However I'm not sure how to fix it from MediaTek's PCIe functions or settings 
(for example mtk_pcie_domain_read_resources).

---Files
ap-bad.log (32.8 KB)


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] Pre-Memory Sign-of-Life using Intel uGOP

2023-08-23 Thread Arthur Heymans
>> ReportStatusCode() to report debug information which coreboot prints
>> using printk.
> can you point me to the code integrating this? I could find this
> identifier only in vendorcode/ headers. Is it for debugging?

I meant code calling back to the coreboot console in general for debugging.
A few examples:
1)
https://review.coreboot.org/plugins/gitiles/coreboot/+/refs/heads/master/src/northbridge/intel/sandybridge/raminit_mrc.c#153
2)
https://review.coreboot.org/plugins/gitiles/coreboot/+/refs/heads/master/src/drivers/intel/fsp2_0/fsp_debug_event.c#20

> I thought this was kept optional--one of the many things dumped into our
> repo that didn't take off.
> Just checked and it's enabled by default oO, but I could disable it and
> coreboot built.
> Does anybody use this PPI "feature" in a product?

I thought the CPU PPI was enabled by default and necessary(?) on all Intel
products (except xeon-sp) since Intel Icelake.
If the PPI is not provided the FSP will do the whole CPU init on its own:

> /** Offset 0x06B0 - CpuMpPpi
>   Optional pointer to the boot loader's implementation of
> EFI_PEI_MP_SERVICES_PPI.
>   If not NULL, FSP will use the boot loader's implementation of
> multiprocessing.
>


On Wed, Aug 23, 2023 at 11:14 AM Nico Huber  wrote:

> Hi Arthur,
>
> On 23.08.23 10:41, Arthur Heymans wrote:
> > We already have code similar to ReportStatusCode
>
> can you point me to the code integrating this? I could find this
> identifier only in vendorcode/ headers. Is it for debugging?
>
> > and ramstage PPI so maybe
> > it's not a problem.
>
> I thought this was kept optional--one of the many things dumped
> into our repo that didn't take off. Just checked and it's enabled
> by default oO, but I could disable it and coreboot built. Does
> anybody use this PPI "feature" in a product?
>
> Nico
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] Pre-Memory Sign-of-Life using Intel uGOP

2023-08-23 Thread Arthur Heymans
Hi

> We propose to take advantage of a proprietary driver Intel already
supports, validates and includes in FSP silicon: the Intel Graphics PEIM
(Pre-EFI Initialization Module) driver also known as the GOP (Graphical
Output Protocol) driver.

Usually the reasoning for using a binary is that the hardware cannot be
publicly documented (e.g. the DRAM controller) or that the binary is
cryptographically signed.
That is however not the case for Intel display controllers, as typically
both code (the Intel i915 driver) and documentation exist.
Maybe it's just marketing, but I was under the impression that Intel is
actively promoting open source on the graphics side with initiatives such as
oneAPI.

I think allowing binary-only PEI modules just because they exist and are
supported by the vendor is a very slippery slope.
The same argument could be applied to pretty much everything (just include
your code doing X as a PEI module), which goes against coreboot's goal of
being an open-source project.
This is of course not part of your proposal, but I'm cautious of it.

To nuance this: other vendors only provide a proprietary VBIOS as an option
for graphics init, so it's not particularly worse.

> 2.1. PEI services

> µGOP depends on a limited set of PEI services:

   1. InstallPpi() to install the PEIM Graphics PPI
   2. LocatePpi() to access PEIM-to-PEIM Interface (PPI) Dependencies
   3. AllocatePool() to dynamically allocate memory to handle internal data
   structure such as display information …
   4. GetHobList() and CreateHob() to access Hand Off Blocks (HOB) holding
   runtime data
   5. ReportStatusCode() to report debug information which coreboot prints
   using printk.

> Those services implemented in coreboot are pretty straightforward and fit
in less than 300 lines of code.

That looks like some form of linking, which might result in legal troubles
as the GPL does not allow it.
We already have code similar to ReportStatusCode and ramstage PPI so maybe
it's not a problem.

On a technical note: only coreboot's ramstage has a heap. Romstage has fewer
resources available, so we have avoided using a heap there so far.
AllocatePool would break that tradition. How does one know how much heap is
needed? It's best to avoid memory allocations at runtime.
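
If AllocatePool has to be provided anyway, one way to keep it predictable is
a bump allocator over a fixed static buffer. This is only a sketch under
assumptions of mine (the pool size is a guess, the function name is made up,
and the real PEI service signature also takes an EFI_PEI_SERVICES pointer
that I left out):

```c
#include <stddef.h>
#include <stdint.h>

/* Fixed arena; 16 KiB is an arbitrary guess at what the PEIM would need. */
static uint8_t pool[16 * 1024];
static size_t pool_used;

/* Bump allocator, no free(): allocations live until the stage ends, so the
 * worst-case memory use is visible at build time instead of at runtime. */
static int allocate_pool(size_t size, void **buffer)
{
	/* Keep 8-byte alignment for the returned pointer. */
	size = (size + 7) & ~(size_t)7;

	if (pool_used + size > sizeof(pool))
		return -1;	/* out of resources */

	*buffer = &pool[pool_used];
	pool_used += size;
	return 0;
}
```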


> We also noticed that µGOP is faster to bring-up graphics than libgfxinit.
Indeed, according to previously captured numbers on Raptor Lake compared to
some number of µGOP on Meteor Lake, µGOP is three times faster to bring up
graphics than libgfxinit on an eDP panel (119 ms vs 373 ms).

Is this relevant? I was under the impression that the display is only used
to notify the user of a very long DRAM init, while regular boots use cached
results. If the DRAM init is orders of magnitude longer than the display
init, this performance difference is meaningless.

> it is compatible with our software convergence goals

Can you elaborate on this?


Thanks for taking the time to clearly present your arguments!

Arthur

On Tue, Aug 22, 2023 at 3:35 PM Nico Huber  wrote:

> Hi Jeremy,
>
> On 14.08.23 22:52, Compostella, Jeremy wrote:
> > We propose to take advantage of a proprietary driver Intel already
> supports, validates and includes in FSP silicon: the Intel Graphics PEIM
> (Pre-EFI Initialization Module) driver also known as the GOP (Graphical
> Output Protocol) driver.
>
> just to make sure nobody makes wrong assumptions: Will the uGOP be
> open-source or proprietary as well? I first thought the latter. But
> your proposed code-flow looks like some sort of dynamic linking with
> coreboot.
>
> > We intend to keep providing such a binary base solution on the long run
> as it addresses our software convergence goals and is compatible with early
> platform development stage constraints. [libgfxinit] supports can always be
> added later by the open-source community once the Graphics Programmer
> Reference Manuals are published.
>
> Sad to hear about this decision. It seems Intel is forgetting about
> non-consumer products (e.g. embedded market) where the code isn't
> needed years ahead of a platform launch.
>
> > We also noticed that microGOP is faster to bring-up graphics than
> libgfxinit. Indeed, according to previously captured numbers on Raptor Lake
> compared to some number of microGOP on Meteor Lake, microGOP is three times
> faster to bring up graphics than libgfxinit on an eDP panel (119 ms vs 373
> ms).
>
> Configuring the hardware and bringing up the eDP link should take
> about 20~30ms mostly depending on how long it takes to read the
> EDID. The longer delays are likely about panel power sequencing.
> IIRC, libgfxinit falls back to hardcoded default values if the
> sequencer is unconfigured, while the GOP just leaves it like that.
> Chromebooks often skip the configuration[1] in firmware and leave
> it to the OS driver. Using wrong delays probably doesn't hurt on
> a rare interactive boot. However, I guess doing this on regular
> boots might not be the best idea.
>
> Nico
>
> [1] 

[coreboot] Re: Getting onboard NIC recognized as enoX instead of enpXsX

2023-03-06 Thread Arthur Heymans
Hi

The NIC device needs to be in the devicetree.cb for on_board to be set to 1
(done by sconfig).

On the GA-G41M-ES2L you see the onboard NIC as a child device below a PCIe
port:

device pci 1c.1 on # PCIe 2 (NIC)
	device pci 00.0 on # PCI 10ec:8168
		subsystemid 0x1458 0xe000
	end
end

If you add the proper child device below the right PCIe port in the
devicetree, an SMBIOS type 41 entry will be added and the naming will be
persistent.

Kind regards

Arthur

On Mon, Mar 6, 2023 at 9:43 PM Kevin Keijzer via coreboot <
coreboot@coreboot.org> wrote:

> Op 06-03-2023 om 19:50 schreef Jonathan A. Kollasch:
> > I think you simply need to explicitly list the device in the device
> tree, and a
> > SMBIOS type 41 entry will be generated automatically.
>
> The NIC is defined in the device tree like this:
>
>
> https://github.com/coreboot/coreboot/blob/master/src/mainboard/asrock/b75m-itx/devicetree.cb#L57-L59
>
> I don't see anything wildly different to other boards I have, which do
> show their onboard NIC as eno0 instead of enpXsX.
>
>
> > Seems with CONFIG_SMBIOS_TYPE41_PROVIDED_BY_DEVTREE you can even control
> > which NIC gets what index if you have more than one.
>
> But this does not seem to be set for any board by default. So I don't
> think it's a requirement for this to work; more like a hard override.
>
> As stated, on my X230, P8Z77-V and GA-G41M-ES2L, the onboard NIC is
> called eno0 (as it should be). On my B75M-ITX it's called enp3s0, and on
> my B75 Pro3-M it's called enp4s0.
>
> So coreboot doesn't seem to pass the "this device is onboard and its
> index is 0" message properly.
>
> I can see `if (!dev->on_mainboard)` in smbios.c, but I can't find where
> that flag is set.
>
> --
> With kind regards,
>
> Kevin Keijzer
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #457] Haswell (t440p): CAR memory region should not conflict with CBFS_SIZE > 8mb

2023-02-08 Thread Arthur Heymans
Issue #457 has been updated by Arthur Heymans.


It is quite hard to relocate a binary (mrc.bin). This was done for
Sandy Bridge using a hex editor, but that is quite error-prone.
Native init could use a different address, but it is not as complete as
mrc.bin.
For instance, I think DRAM clocks will not run as high, some power-reducing
trainings are skipped, and S3 resume is not implemented.


Bug #457: Haswell (t440p): CAR memory region should not conflict with CBFS_SIZE 
> 8mb
https://ticket.coreboot.org/issues/457#change-1402

* Author: Thierry Laurion
* Status: New
* Priority: Normal
* Target version: none
* Start date: 2023-02-08
* Affected versions: 4.19, master

When neutering the ME to pass the freed space to the IFD BIOS region (and
having CBFS_SIZE match the maximized IFD region), booting of the platform was
reported to take an additional 20 seconds.

A quick review at FOSDEM with a coreboot dev inspecting current Haswell code 
suggested that fixing DCACHE_RAM_BASE might fix the issue under 
src/northbridge/intel/haswell/Kconfig:

0xff7c -> 0xfe7c


-

Unfortunately, I have no access to a t440p to test the fix.

It was also suggested that mrc.bin might need to be patched as well. 
But 4.19 is bringing native raminit, so that might not be an issue?

Attached is the suggested change to be tested.

---Files
haswell_car_20230205233221.patch (369 Bytes)


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] devicetree driven reference code / blob configuration

2023-01-12 Thread Arthur Heymans
Hi all

On most modern x86 systems a lot of the silicon init happens as part of a
blob (FSP, binaryPI) or some reference code (AGESA). Very often that silicon
init enables or hides PCI devices. This means that this code needs to run
before coreboot's device enumeration code.
With coreboot's current ramstage state machine, running this binary can only
happen as part of chip_ops->init or with a bootstate hook.
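
For reference, a bootstate hook of that kind could look roughly like the
sketch below (fsp_silicon_init_wrapper() is a placeholder name, not an
existing function; BOOT_STATE_INIT_ENTRY and BS_ON_ENTRY are the existing
bootstate machinery as far as I recall):

```c
#include <bootstate.h>
#include <console/console.h>

/* Placeholder for whatever loads, configures and calls the reference code. */
static void fsp_silicon_init_wrapper(void)
{
	printk(BIOS_DEBUG, "calling silicon init blob here\n");
}

/* Run the blob right before coreboot starts enumerating devices, so that
 * any PCI devices it enables or hides are visible to the bus scan. */
static void run_reference_code(void *unused)
{
	fsp_silicon_init_wrapper();
}

BOOT_STATE_INIT_ENTRY(BS_DEV_ENUMERATE, BS_ON_ENTRY, run_reference_code, NULL);
```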

As a quick reminder about the coreboot ramstage state machine:

  start
    |
  BS_PRE_DEVICE
    |
  BS_DEV_INIT_CHIPS
    |
  BS_DEV_ENUMERATE
    |
  the rest (device resource allocation, device init, ...)

BS_DEV_INIT_CHIPS:
- Call the .init from the chip_ops of each "chip" once.
  FSP-S loading, configuring and calling is currently set up as one
  monolithic call inside such a chip_ops->init. It has a lot of callbacks
  for the SoC and mainboard and consumes the devicetree in one big
  SoC-specific function, with little reuse between SoCs.
  All the silicon configuration can only be put into one monolithic SoC
  chip_info and is translated into FSP UPDs.

BS_DEV_ENUMERATE:
- Call the static chip_ops->enable_dev on each static device in the
  devicetree. On older platforms that don't use reference code, enabling or
  hiding PCI devices is done here.
- Scan the actual hardware (e.g. the PCI bus) for what is there. What will
  be found depends on the parameters passed to the reference code, which
  will hide/enable devices.

Both the monolithic configuration (one big chip_info) and the code flow
don't follow coreboot's state machine philosophy, where the devicetree nodes
drive the configuration process and code flow.
It would be nicer if device enabling and other configuration were structured
on a per-node basis.
So, for instance, all the USB configuration options would sit below the xHCI
device node and not be globally available to all nodes.

Based on a discussion on IRC there are a few options, which are generally not
mutually exclusive and share similar ideas:
1) Split up DEV_ENUMERATE into a static devicetree part and a scan part.
  In combination with calling FSP between those two parts, this would allow
  chip_ops->enable_dev to be called on each device so that UPD setup could be
  modularised. This is still rather monolithic, as enable_dev is the same
  function callback for each device, but it could be made more modular because
  the struct device is passed as an argument.
  So something like the following is possible:

  switch (dev->path.pci.devfn) {
  case PCI_DEV(0, 0):
          do_something();
          break;
  case PCI_DEV(0x1f, 0):
          do_something_else();
          break;
  }
  This at least removes all the current callbacks.

2) Move devices below separate chips and split up FSP loading, configuring and
calling.
  The devicetree then looks like this:

  chip soc/intel/.../usb
     device ref xhci on end
     register "config_option_1" = "A"
     register "config_option_2" = "B"
  end

  If the FSP loading is separated from the calling, for instance by moving it
  earlier into a cbmem init hook or a bootstate ENTRY hook, then each chip_ops
  .init could configure FSP UPDs based on its registers. The calling of FSP can
  then happen as part of a bootstate EXIT hook.
  A disadvantage here is that this would add a lot of "chips", which is not
  recommended for things that are not a separate physical chip.

3) Allow for a per-device configuration.
  https://review.coreboot.org/c/coreboot/+/41745 implements this.
  This would easily allow per-device configuration without introducing new
  chips.

4) Since it is now possible to hook up device ops directly in the devicetree,
  we could add a new device-specific entry to ops that a new bootstate executes
  before scanning buses: an enable_dev, but at the device level rather than the
  chip_ops level (see the sketch after this list).
  One thing to note is that scan_bus will not give default PCI ops to devices
  anymore if ops is !NULL. This can be worked around by setting ops to NULL
  after calling this new ops on devices where the default PCI ops are
  desirable. Another way would be to allow more fine-grained control, when
  writing the devicetree, over which ops can be set directly in the
  devicetree.
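
To make option 4 more concrete, here is a rough sketch of the shape of such a
per-device hook. None of this exists in the tree today; the struct and the
names are purely illustrative.

#include <device/device.h>

/* Illustrative only: a per-device "early enable" hook as proposed in
 * option 4. Neither this ops table nor the bootstate that would walk it
 * exists in coreboot; it just shows the shape of the idea. */
struct early_dev_ops {
	void (*enable_dev)(struct device *dev);
};

static void xhci_configure_upds(struct device *dev)
{
	/* Translate this node's devicetree settings into FSP UPDs here,
	 * instead of doing it in one monolithic soc-wide chip_ops->init. */
}

static const struct early_dev_ops xhci_early_ops = {
	.enable_dev = xhci_configure_upds,
};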

It looks like 3 + 4 would be the cleanest, but that should maybe not block
trying out 1 and 2. Splitting out device-specific configuration is possible in
1 and 2, and migrating to solutions 3 and 4 afterwards would be rather trivial.

Any thoughts on this?

Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Feature #420] Use standard format of TPM event log

2022-10-14 Thread Arthur Heymans
Issue #420 has been updated by Arthur Heymans.


https://review.coreboot.org/c/coreboot/+/51710 implements the TCG one. The 
coreboot implementation is not a 'proprietary' format: that would imply there 
is a license restriction on using it, which there is not.
A lot of the TCG spec simply does not make much sense for coreboot, which is 
why it's not implemented.


Feature #420: Use standard format of TPM event log
https://ticket.coreboot.org/issues/420#change-1159

* Author: Krystian Hebel
* Status: New
* Priority: Normal
* Target version: none
* Start date: 2022-10-12
* Related links: [1] 
https://trustedcomputinggroup.org/wp-content/uploads/TCG_PCClientImplementation_1-21_1_00.pdf
[2] 
https://trustedcomputinggroup.org/wp-content/uploads/TCG_PCClient_PFP_r1p05_v23_pub.pdf

Request to admin or someone with permissions to add as subtasks:
- https://ticket.coreboot.org/issues/421
- https://ticket.coreboot.org/issues/422
- https://ticket.coreboot.org/issues/423
- https://ticket.coreboot.org/issues/424
- https://ticket.coreboot.org/issues/425
- https://ticket.coreboot.org/issues/426

Currently coreboot uses a proprietary format for the TPM event log. TCG has 
standardized log formats, different for TPM 1.2 (aka legacy or SHA1) [1] and 
TPM 2.0 (aka crypto agile) [2], both of which can be parsed by the Linux kernel 
and exposed in sysfs. I don't know of any tool outside of cbmem which can parse 
the coreboot format; this includes payloads which may be interested in 
continuing the chain of trust started by coreboot.
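
For reference, the TPM 1.2 (legacy/SHA1) log entry layout from [1] is roughly
the following; the struct is paraphrased from the spec, not taken from coreboot
or kernel headers:

#include <stdint.h>

/* Rough sketch of a TCG 1.2 ("SHA1"/legacy) event log entry as described
 * in [1]; field names are paraphrased for illustration. */
struct tcg_pcr_event {
	uint32_t pcr_index;
	uint32_t event_type;
	uint8_t  digest[20];     /* SHA-1 of the measured data */
	uint32_t event_size;
	uint8_t  event[];        /* event_size bytes of event data follow */
} __attribute__((packed));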

Another incompatibility is caused by vboot's assignment of PCRs. The roles of 
PCRs are roughly specified by TCG in both of the mentioned documents; they are 
more or less compatible with each other, but not with current coreboot code.

These changes could break assumptions made by existing platforms, so they 
should be made as Kconfig options.

This is a tracking issue to collect subtasks that need to be done in order to 
support standard event log formats.



-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: FSP 2.4: runtime blobs!

2022-09-30 Thread Arthur Heymans
Hi Laurie

I created a github issue about this
https://github.com/UniversalScalableFirmware/documentation/issues/37 almost
11 months ago with no reply.
I also don't see any discussions about FSP-I in particular in that
community but I posted my github issue again on that mailing list.

Arthur

On Fri, Sep 30, 2022 at 6:45 PM Jarlstrom, Laurie <
laurie.jarlst...@intel.com> wrote:

> Hi,
>
> I would like to invite you to the Universal Scalable Firmware (USF)
> Community where Intel® FSP is an ingredient to a full system firmware
> solution and incorporates multiple bootloaders including coreboot.
>
> There have been discussions on the updates to Intel® FSP within this
> community
>
> https://universalscalablefirmware.groups.io/g/discussion
>
>
>
> What is USF?  Links to  USF training videos:
> https://www.youtube.com/playlist?list=PLehYIRQs6PR5N73cbW8CPvU_stDTAG_j5
>
>
>
> Thanks,
>
> *Laurie*
>
>
>
> laurie.jarlst...@intel.com,
>
> System Firmware Products
>
> Firmware Ecosystem & Business Dev.
>
> (503) 880 5648 Mobile
>
>
>
> *From:* ron minnich 
> *Sent:* Friday, September 30, 2022 9:00 AM
> *To:* Nico Huber 
> *Cc:* Arthur Heymans ; coreboot <
> coreboot@coreboot.org>
> *Subject:* [coreboot] Re: FSP 2.4: runtime blobs!
>
>
>
>  note that I am having this exact same problem in the RISC-V community:
> https://github.com/riscv-non-isa/riscv-sbi-doc/issues/102
>
>
>
> People just like their SMM. It's hard to kill.
>
>
>
> I fear that you're not going to get much luck with Intel, which is why I
> try to work with non-Intel CPUs as much as I can nowadays.
>
>
>
> On Fri, Sep 30, 2022 at 5:58 AM Nico Huber  wrote:
>
> Hi Arthur, coreboot fellows,
>
> On 30.09.22 13:53, Arthur Heymans wrote:
> > What are your thoughts?
>
> printing, bonfire...
>
> > Do we take a stance against FSP-I integration in coreboot?
>
> I think we already do. From coreboot.org:
>
>   "coreboot is an extended firmware platform that delivers a lightning
>fast and secure boot experience on modern computers and embedded
>systems. As an Open Source project it provides auditability and
>maximum control over technology."
>
> FSP-I means exactly the opposite of most of the above points. It's
> inherently incompatible.
>
> IMO, unless we discuss if we want to change how we define coreboot
> first, there can't be a discussion about integrating FSP-I nor any
> action in that direction.
>
> > Are there precedents where blobs runtimes are installed on the main CPU,
> > that I don't know of which could justify FSP-I?
>
> There's something for the main CPU but definitely not the same: I was
> told AMD's binary pi can provide runtime ACPI code. But running ACPI is
> an opt-in for the OS, whilst FSP-I wouldn't even allow an opt-out, I
> guess.
>
> >
> > P.S. It's quite sad to see this happen after an open letter 361 people
> > signed for a more open FSP.
> >
> https://openletter.earth/adopting-open-source-firmware-approach-for-intel-fsp-59d7a0c6
>
> Sad, but not unexpected. I believe this is part of a more than a
> decade old strategy. It seems to me Intel never really supported
> open-source OS drivers for their server platforms. They just hid
> everything in SMM with a nice open-source facade for Linux. We
> turned a blind eye to that. Now it seems that the ecosystem around
> Intel servers is rather unprepared for open source. Even if they'd
> open up their SMM code, it would just be wrong to keep the code in
> SMM, IMO. Proper OS drivers should be written instead.
>
> Nico
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #401] edk2 hangs indefinitely

2022-07-13 Thread Arthur Heymans
Issue #401 has been updated by Arthur Heymans.


Christian Walter wrote in #note-4:
> Is this a problem within coreboot - or do we rather need to fix up EDKII ?

The problem is inside EDKII. Those reverts would create problems for other 
payloads and Linux would even complain about incoherent MTRR settings I think.


Bug #401: edk2 hangs indefinitely 
https://ticket.coreboot.org/issues/401#change-1044

* Author: Sean Rhodes
* Status: New
* Priority: Normal
* Assignee: Arthur Heymans
* Category: board support
* Target version: none
* Start date: 2022-07-08
* Affected versions: master
* Affected hardware: Everything
* Affected OS: Doesn't matter

Since CB:63555, edk2 will no longer boot and hangs indefinitely

Various forks disable MTRR programming in edk2 (such as 
https://github.com/MrChromebox/edk2/commit/d641ea6920737fd9b9a94210e9a2e7636bfb3cdc)
 but this shouldn't be done as it breaks spec.

Workarounds are to revert CB:64804, CB:63550, CB:64803 and CB:63555.

---Files
with_avph_patch.txt (65.8 KB)
with_avph_patch_reverted.txt (64.6 KB)
master.txt (91.9 KB)
master_w_revert.txt (197 KB)


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #392] coreboot 4.16 & 4.17 - SeaBIOS Windows 10 BSOD "ACPI Error"

2022-06-20 Thread Arthur Heymans
Issue #392 has been updated by Arthur Heymans.


Can you check if https://review.coreboot.org/c/coreboot/+/65250 fixes it?


Bug #392: coreboot 4.16 & 4.17 - SeaBIOS Windows 10 BSOD "ACPI Error"
https://ticket.coreboot.org/issues/392#change-981

* Author: Pawel Radomychelski
* Status: New
* Priority: Normal
* Category: board support
* Target version: 4.17
* Start date: 2022-06-18
* Affected versions: 4.16, 4.17, master
* Related links: https://ticket.coreboot.org/issues/327
* Affected hardware: Lenovo ThinkPad X230 Tablet
* Affected OS: Windows 10

Since coreboot 4.16 my Windows 10 can't boot from SeaBIOS; I get a BSOD with 
"ACPI Error" very early.

I don't know which commit exactly breaks ACPI, but I can say that with my 
coreboot 4.15 image from 11/09/2021 Windows 10 boots just fine from SeaBIOS.
Some time later, under coreboot 4.16, I saw that Windows was BSODing. I tried 
again yesterday with coreboot 4.17 and it's still broken.

I think that since coreboot 4.16 something is broken in ACPI. As I read it, the 
problem is that ACPI reserves some memory area which in Windows is reserved for 
the system.

This [[https://ticket.coreboot.org/issues/327]] seems to be a similar problem, 
but that user is using TianoCore instead of SeaBIOS.
Changing the line
OperationRegion (OPRG, SystemMemory, ASLS, 0x2000)
to
OperationRegion (OPRG, SystemMemory, ASLS, 0x1000)
doesn't fix it for SeaBIOS, but does fix it for TianoCore.

---Files
dmesg_cb415.txt (78.9 KB)
dmesg_cb417.txt (79.9 KB)
.config (19.7 KB)


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: denverton: crash when using coreboot 4.17

2022-06-14 Thread Arthur Heymans
> So basically Temp RAM Exit is broken, since we need to use
> NO_FSP_TEMP_RAM_EXIT which selects
> "./src/soc/intel/common/block/cpu/car/exit_car.S" which is doing nothing
> besides disabling MTRRs (as all INTEL_CAR_* defines are unset) and then
> returning. Therefore, we have no CAR teardown.

That is why I'm recommending selecting INTEL_CAR_NEM_ENHANCED, as it will do
the right CAR teardown.

I doubt that it's failing to reach the main() call; rather, I suspect something
bad happens when calling the TempRamExit API...
Is it worth investigating whether we can make the native coreboot CAR teardown
work?

Arthur

On Tue, Jun 14, 2022 at 8:11 PM Sumo  wrote:

> Hi Arthur,
>
> So basically Temp RAM Exit is broken, since we need to use
> NO_FSP_TEMP_RAM_EXIT which selects
> "./src/soc/intel/common/block/cpu/car/exit_car.S" which is doing nothing
> besides disabling MTRRs (as all INTEL_CAR_* defines are unset) and then
> returning. Therefore, we have no CAR teardown.
>
> Without NO_FSP_TEMP_RAM_EXIT, we are using
> "./src/soc/intel/common/block/cpu/car/exit_car_fsp.S" which it is crashing
> without reaching the main() function to invoke the TempRamExit API. This
> was working before the commit #5315e96abf - we have the following changes
> in exit_cat_fsp.S:
>
> diff --git a/src/soc/intel/common/block/cpu/car/exit_car_fsp.S
> b/src/soc/intel/common/block/cpu/car/exit_car_fsp.S
> index 4b906280e6..4d35447a56 100644
> --- a/src/soc/intel/common/block/cpu/car/exit_car_fsp.S
> +++ b/src/soc/intel/common/block/cpu/car/exit_car_fsp.S
> @@ -10,16 +10,16 @@
>   * to tear down the CAR and set up caching which can be overwritten
>   * after the API call.  More info can be found in the FSP Integration
>   * Guide included with the FSP binary.
>   */
>
>  .text
>  .global chipset_teardown_car
>  chipset_teardown_car:
>
> /* Set up new stack. */
> -   mov post_car_stack_top, %esp
> +   mov _estack, %esp
> /* Align the stack. */
> andl    $0xfff0, %esp
>
> /* Call C code */
>     call    main
>
> The question is why we can't reach main() anymore. Any clues?
>
> Kind regards,
> Sumo
>
>
>
>
> On Tue, Jun 14, 2022 at 6:27 AM Arthur Heymans 
> wrote:
>
>> Hi
>>
>> Adding NO_FSP_TEMP_RAM_EXIT in src/soc/intel/denverton_ns/Kconfig fixes
>>> the issue, which seems pretty odd since I haven't enabled
>>> INTEL_CAR_NEM_ENHANCED. According to your explanation
>>> INTEL_CAR_NEM_ENHANCED is required right?
>>>
>> That's right. So FSP-T still sets up the cache as ram but I have no idea
>> how it does that (it could be similar to INTEL_CAR_NEM or
>> INTEL_CAR_NEM_ENHANCED).
>> Selecting INTEL_CAR_NEM_ENHANCED in this case only makes sure that the
>> enhanced NEM MSRs are cleared.
>> Even if FSP-T does not use that, it's fine.
>>
>> By configuring coreboot this way, the Temp RAM FSP is not used? So for
>>> the coreboot latest Temp RAM FSP support is broken right?
>>>
>> Actually the native coreboot CAR init is broken... I sent a patch to
>> remove it:
>> https://review.coreboot.org/c/coreboot/+/55519
>>
>> On Mon, Jun 13, 2022 at 9:55 PM Sumo  wrote:
>>
>>> Hi Arthur,
>>>
>>> Adding NO_FSP_TEMP_RAM_EXIT in src/soc/intel/denverton_ns/Kconfig fixes
>>> the issue, which seems pretty odd since I haven't enabled
>>> INTEL_CAR_NEM_ENHANCED. According to your explanation
>>> INTEL_CAR_NEM_ENHANCED is required right?
>>>
>>> But I have also tried adding INTEL_CAR_NEM_ENHANCED which worked as
>>> well. However when selecting CONFIG_USE_DENVERTON_NS_CAR_NEM_ENHANCED the
>>> makefile complained about the FSP_CAR dependency which I have then enabled
>>> in Kconfig also.
>>> (INTEL_CAR_NEM_ENHANCED is enabled only
>>> when USE_DENVERTON_NS_CAR_NEM_ENHANCED is set)
>>>
>>> diff --git a/src/soc/intel/denverton_ns/Kconfig
>>> b/src/soc/intel/denverton_ns/Kconfig
>>> index 92fc065a..cd5e13b8 100644
>>> --- a/src/soc/intel/denverton_ns/Kconfig
>>> +++ b/src/soc/intel/denverton_ns/Kconfig
>>> @@ -20,6 +20,7 @@ config CPU_SPECIFIC_OPTIONS
>>> select CPU_INTEL_FIRMWARE_INTERFACE_TABLE
>>> select CPU_SUPPORTS_PM_TIMER_EMULATION
>>> select DEBUG_GPIO
>>> +   select NO_FSP_TEMP_RAM_EXIT
>>> select FSP_M_XIP
>>> select FSP_T_XIP if FSP_CAR
>>> select HAVE_INTEL_FSP_REPO
>>> @@ -163,6 +164,7 @@ config USE_DENVERTON_NS_CAR_NEM_ENHANCED
>>>

[coreboot] Re: Open letter regarding a more open FSP codebase

2022-06-08 Thread Arthur Heymans
Hi

Awesome initiative!

I think more openness around FSP-S is indeed the best way to start
improving things for both developers and users on Intel hardware.
It's not uncommon that you have to tell a customer that you can't fix a
problem because the cause is inside the FSP.
Even if you have the source code and can fix the issue yourself, getting
that inside the officially redistributable FSP is hard.

Feature overlap is another place where integrating the opaque FSP can get
awkward and potentially insecure: it is not always clear which project owns
which hardware configuration.
You need to have good knowledge of the hardware
AND coreboot AND FSP to be able to make a good judgement on this, which is
just more painful than it needs to be.

Last but not least, having a real community around a firmware project and
hardware is simply a very good idea to improve the experience for everyone.
I think openness with regard to the code is not enough for that, but it is
a non-starter without that.

I really hope this moves forward, and thanks to those starting this effort!

Kind regards
Arthur


On Fri, Jun 3, 2022 at 9:54 PM coreboot org  wrote:

> Hi everyone,
>
> Subrata has written a fantastic proposal for a plan to reduce the work
> done by the Intel FSP, and transition that work into open source
> implementations.[1] This would be a good initial step towards what the
> open source firmware communities would like to see, which is, of
> course, to have all the firmware to be completely open and well
> documented.
>
> In looking towards that goal, a group of people including myself have
> drafted an open letter to intel, asking that they consider Subrata’s
> proposal and work with us to achieve the slimming, then replacement of
> the FSP-S functionality with open source code [2]. The drafters of
> this open letter agree that this will benefit all open source firmware
> communities along with Intel, and ultimately their customers.
>
> I understand that some will see this as an underwhelming change, but
> please look at it instead as at least a step in the direction that
> we’d like to see.  I believe that these small steps, if shown to be
> beneficial to Intel and their customers, will lead to larger changes,
> pushing the boundaries between open and proprietary codebases even
> further in favor of openness.
>
> If you agree with our thoughts, please take the time to read through
> Subrata’s proposal and the open letter to Intel, and consider adding
> your support behind these by signing onto the letter [2].
>
> Thanks very much for your time, and take care.
>
> Martin
>
> [1]: https://blog.osfw.foundation/osf-intel-reduce-fsp-boundary/
> [2]:
> https://openletter.earth/adopting-open-source-firmware-approach-for-intel-fsp-59d7a0c6
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Another day, another SMM loader vulnerability

2022-05-23 Thread Arthur Heymans
Hi

It looks like this bug is biting us now.
https://review.coreboot.org/c/coreboot/+/64521 removed the heap from SMM
(because it's not needed and a bad idea).
Now that the heap is gone, the FX_SAVE area is actually overwriting the
handler. So this vulnerability is not hypothetical anymore:
it breaks the smihandler on all SSE platforms.

Please review the fix in https://review.coreboot.org/c/coreboot/+/63475 or
alternatively https://review.coreboot.org/c/coreboot/+/63560 (rewrite using
regions to check for overlaps at runtime).

Kind regards,
Arthur

On Tue, Apr 12, 2022 at 10:46 AM Arthur Heymans  wrote:

> Hi
>
> The obvious easy solution is to not use SMM but that's a different topic.
>
> I think it should also be doable to write unit tests that would do a setup
> for 1 (1 is often a special case) and many cpus and see that things like
> stubs, stack, save state, permanent handler, ... don't overlap.
> I played around with our unit test framework and I'm impressed. We should
> use it way more ;-)
> https://review.coreboot.org/c/coreboot/+/63560 is where I did some very
> very WIP unit test(s) on SMM loading and I'm already able to load specially
> crafted relocatable modules for both stubs and permanent handler.
> I think it should be possible to test all kinds of good and bad
> configurations and make this code more robust and future proof.
>
> Kind regards
> Arthur
>
>
> On Mon, Apr 11, 2022 at 6:11 PM ron minnich  wrote:
>
>> arthur, what might we do with either the build process or startup to
>> avoid this problem in future? Do you think we could find a way to
>> catch this programmatically soon, rather than humanly too late?
>>
>> On Mon, Apr 11, 2022 at 2:48 AM Arthur Heymans 
>> wrote:
>> >
>> > Hi
>> >
>> > After last week's SMM loader problem on all but the BSP, I noticed
>> another problem in the SMM setup.
>> > The permanent smihandler is currently built as a relocatable module
>> such that coreboot
>> > can place it wherever it thinks it's a good idea. (TSEG is not known at
>> buildtime).
>> > These relocatable modules have an alignment requirement.
>> >
>> > It looks however that the code to deal with the alignment requirement
>> is also wrong
>> > and aligns the handler upwards instead of downwards which makes it
>> encroach either an SSE2
>> > FX_SAVE area or an SMM register save state. It's hard to know whether
>> this is easily exploitable.
>> > I would think that with a carefully crafted SMM save state on the right AP,
>> arbitrary code execution might be possible. On the other hand I noticed
>> last week that launching SMM on APs is broken too so this is likely a
>> lesser problem.
>> >
>> > Anyway the fix is in https://review.coreboot.org/c/coreboot/+/63475
>> > (It has a comment indicating what code was causing this problem)
>> > Please review and update your coreboot code!
>> >
>> > Kind regards
>> > Arthur
>> > ___
>> > coreboot mailing list -- coreboot@coreboot.org
>> > To unsubscribe send an email to coreboot-le...@coreboot.org
>>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] #pragma once

2022-05-17 Thread Arthur Heymans
Hi

Those arguments not to use #pragma once make a lot of sense. Thanks Martin!

I've made some good progress on getting boards to build with clang (each
x86 board now builds). Clang at least warns about #ifndef and #define lines
not being equal so we'd have that check covered.

Kind regards
Arthur

On Tue, 17 May 2022, 15:28 Felix Held,  wrote:

> Hi Martin,
>
> > To support #pragma once, the compiler tries to identify duplicate
> encounters with the same file, but the check gcc actually performs to
> establish the identity of the file is weak. Here's someone who made two
> copies of the same header with different names, each with a #pragma once,
> and it screwed up his build.
>
> Ouch, that isn't what I expected here; especially since multiple files
> with the same timestamp are expected when doing a fresh repo checkout.
> With this info I agree that we should keep the include guard; definitely
> learned something new today. It would be helpful to have this documented
> and possibly have some check to make sure that the include guards aren't
> broken in some files.
>
> Regards,
> Felix
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Please test and review AGESA code

2022-05-17 Thread Arthur Heymans
Hi

I have patches that improve those platforms, and I wrote code that should make
some of the AGESA platforms easier to transition to newer, soon-to-be-mandated
codepaths; I did the same for past codepath mandates, with both code and
review.

My message about moving code from the master branch was more or less tongue in
cheek; I'm sorry that it was perceived differently. I'm not advocating for
removal, just referring to past discussions.
It is, however, a problem to get code changes tested. The (admittedly cynical)
joke was just to get some attention to patches I would like to get reviewed, as
that work tends to be related to general codebase improvements.

> Instead of phrasing it this way, maybe say something like, "Thanks AMD
for releasing this, now community, let's get together and review and
improve things."

It's likely that AMD does not care about the AGESA platforms supported in
coreboot, since they don't produce that hardware anymore?
Just making assumptions here. Anyway, improving that code is my intention
given that I write patches for it ;-)

Arthur

On Tue, May 17, 2022 at 7:40 PM Martin Roth  wrote:

> Arthur, you are not making an argument that any vendor should release
> their source code as opensource.  I agree that all of this code should be
> reviewed, but if we complain about code quality and lack of testing for
> open sourced code, but don't for closed source, that's an argument against
> any company opening their codebases.
>
> Instead of phrasing it this way, maybe say something like, "Thanks AMD for
> releasing this, now community, let's get together and review and improve
> things."
>
> Just my opinion, and I'm intentionally replying off list.  But I'll say
> that I'm going to fight *very* hard to keep the AGESA codebases in coreboot
> for as long as there are people testing it.  Doing otherwise is again, a
> disincentive to companies for opening their sourcecode.
>
> Take care.
> Martin
>
>
> May 17, 2022, 08:47 by art...@aheymans.xyz:
>
> > Hi
> > We spend more time debating whether to keep AGESA in the master branch
> than actually reviewing code to maintain it.
> > Here are some patches series I would like to be tested & reviewed:
> > Agesa was never properly linked and relied on default linker behavior to
> append unmatched data. Here is the fix: >
> https://review.coreboot.org/q/topic:AGESA_DATA
> > Use MRC cache for non volatile data  >
> https://review.coreboot.org/q/topic:AGESA_MRC_CACHE
> > Use CLFLUSH to make sure code hits DRAM and incidently avoid
> inconsistent MTRRs (bonus is compressed postcar stage): >
> https://review.coreboot.org/q/topic:compress_postcar
> > Kind regards
> > Arthur
> >
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #343] Fails to load GRUB payload: CPU Index 0 - APIC 0 Unexpected Exception:6 @ 10:00009016 - Halting

2022-05-10 Thread Arthur Heymans
Issue #343 has been updated by Arthur Heymans.





Bisected. 4ea7bae51f97e49c84dc67ea30b466ca8633b9f6 is the culprit in grub.





Bug #343: Fails to load GRUB payload: CPU Index 0 - APIC 0 Unexpected Exception:6 @ 10:00009016 - Halting

https://ticket.coreboot.org/issues/343#change-923



* Author: Paul Menzel

* Status: New

* Priority: Normal

* Start date: 2022-03-30



coreboot emulation/qemu-i440fx is unable to load GRUB payload. It works with 
SeaBIOS.



```

FMAP REGION: COREBOOT

Name   Offset Type   Size   Comp

cbfs master header 0x0cbfs header32 none

fallback/romstage  0x80   stage   21944 none

fallback/ramstage  0x56c0 stage   62048 LZMA (139536 
decompressed)

config 0x14980raw   428 none

revision   0x14b80raw   716 none

build_info 0x14e80raw   112 none

fallback/dsdt.aml  0x14f40raw  4044 none

cmos_layout.bin0x15f40cmos_layout   640 none

fallback/postcar   0x16200stage   23328 none

fallback/payload   0x1bd80simple elf 460906 none

(empty)0x8c640null  3603556 none

bootblock  0x3fc2c0   bootblock   15104 none

```



```

[…]

[NOTE ]  coreboot-4.16-415-ga84c00c9ce Wed Mar 30 01:18:14 UTC 2022 ramstage 
starting (log level: 7)...

[…]

[INFO ]  Timestamp - selfboot jump: 965162845

[EMERG]  CPU Index 0 - APIC 0 Unexpected Exception:6 @ 10:9016 - Halting

[EMERG]  Code: 0 eflags: 00010002 cr2: 

[EMERG]  eax:  ebx: 3ffbdc18 ecx: 0001ac27 edx: 00ff

[EMERG]  edi: 3ffbdc00 esi: 3ffbdbf0 ebp: 01e4 esp: 3ffc3fbc



[EMERG]  0x8fd0:00 00 00 00 00 00 00 00 

[EMERG]  0x8fd8:00 00 00 00 00 00 00 00 

[EMERG]  0x8fe0:00 00 00 00 00 00 00 00 

[EMERG]  0x8fe8:00 00 00 00 00 00 00 00 

[EMERG]  0x8ff0:00 00 00 00 00 00 00 00 

[EMERG]  0x8ff8:00 00 00 00 00 00 00 00 

[EMERG]  0x9000:52 52 68 27 ac 01 00 6a 

[EMERG]  0x9008:0b e8 5e cd 00 00 83 c4 

[EMERG]  0x9010:10 a0 00 00 00 00 0f 0b 

[EMERG]  0x9018:51 51 68 6a a2 01 00 6a 

[EMERG]  0x9020:0b e8 46 cd 00 00 83 c4 

[EMERG]  0x9028:10 eb e6 90 bc f0 ff 07 

[EMERG]  0x9030:00 e9 c7 d7 00 00 66 90 

[EMERG]  0x9038:02 b0 ad 1b 02 00 00 00 

[EMERG]  0x9040:fc 4f 52 e4 55 57 56 53 

[EMERG]  0x9048:83 ec 1c 8b 5c 24 30 8b 

[EMERG]  0x3ffc4038:0x0002

[EMERG]  0x3ffc4034:0x0003

[EMERG]  0x3ffc4030:0x

[EMERG]  0x3ffc402c:0x

[EMERG]  0x3ffc4028:0x

[EMERG]  0x3ffc4024:0x

[EMERG]  0x3ffc4020:0x

[EMERG]  0x3ffc401c:0x

[EMERG]  0x3ffc4018:0x

[EMERG]  0x3ffc4014:0x

[EMERG]  0x3ffc4010:0x

[EMERG]  0x3ffc400c:0x

[EMERG]  0x3ffc4008:0x

[EMERG]  0x3ffc4004:0x

[EMERG]  0x3ffc4000:0x

[EMERG]  0x3ffc3ffc:0x

[EMERG]  0x3ffc3ff8:0x3ffbd4e8

[EMERG]  0x3ffc3ff4:0xdeadbeef

[EMERG]  0x3ffc3ff0:0xdeadbeef

[EMERG]  0x3ffc3fec:0x3ff9e061

[EMERG]  0x3ffc3fe8:0x3ffa1f19

[EMERG]  0x3ffc3fe4:0x00018f20

[EMERG]  0x3ffc3fe0:0x3ffdcfd4

[EMERG]  0x3ffc3fdc:0x3ffc4000

[EMERG]  0x3ffc3fd8:0x3ffd63ac

[EMERG]  0x3ffc3fd4:0x

[EMERG]  0x3ffc3fd0:0x3ffa1fe5

[EMERG]  0x3ffc3fcc:0x3ffa2139

[EMERG]  0x3ffc3fc8:0x3ffbdc24

[EMERG]  0x3ffc3fc4:0x3ffa4303

[EMERG]  0x3ffc3fc0:0x3ff95000

[EMERG]  0x3ffc3fbc:0x3ffac8e4 <-esp

```



---Files

coreboot.log (17.2 KB)





-- 

You have received this notification because you have either subscribed to it, 
or are involved in it.

To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account

___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [coreboot - Bug #322] (Resolved) Image fails to boot on Thinkpad X201 if built with GCC 11

2022-05-10 Thread Arthur Heymans
Issue #322 has been updated by Arthur Heymans.

Status changed from New to Resolved

Fixed by https://review.coreboot.org/c/coreboot/+/61938


Bug #322: Image fails to boot on Thinkpad X201 if built with GCC 11
https://ticket.coreboot.org/issues/322#change-922

* Author: Stefan Ott
* Status: Resolved
* Priority: Normal
* Start date: 2021-11-17

I can't seem to build a bootable image for the Thinkpad X201 with the new 
GCC-11-based toolchain. Rather than booting up, the machine just flashes a 
bunch of LEDs and then resets itself. If I use the old toolchain with GCC 8, 
the image works just fine.

Unfortunately I don't have a console on this machine so I don't have any logs. 
I don't know whether it is relevant or not, but cbfstool reports different 
layouts for the images:

$ util/cbfstool/cbfstool coreboot-gcc-8.rom print
FMAP REGION: COREBOOT
Name   Offset Type   Size   Comp
cbfs master header 0x0cbfs header32 none
fallback/romstage  0x80   stage   65328 none
cpu_microcode_blob.bin 0x10040microcode   13312 none
fallback/ramstage  0x13480stage  106502 LZMA 
(227804 decompressed)
vgaroms/seavgabios.bin 0x2d500raw 28160 none
config  0x34340raw   643 none
revision   0x34600raw   716 none
build_info 0x34900raw   101 none
fallback/dsdt.aml  0x349c0raw 14119 none
cmos_layout.bin0x38140cmos_layout  1612 none
fallback/postcar   0x387c0stage   20024 none
fallback/payload   0x3d640simple elf  69197 none
payload_config  0x4e4c0raw  1728 none
payload_revision   0x4ebc0raw   237 none
etc/ps2-keyboard-spinup0x4ed00raw 8 none
(empty)0x4ed40null   641380 none
bootblock  0xeb6c0bootblock   18176 none

$ util/cbfstool/cbfstool coreboot-gcc-11.rom print
FMAP REGION: COREBOOT
Name   Offset Type   Size   Comp
cbfs master header 0x0cbfs header32 
fallback/romstage  0x80   stage   63664 none
cpu_microcode_blob.bin 0xf9c0 microcode   13312 none
fallback/ramstage  0x12e00stage  106836 LZMA 
(228284 decompressed)
vgaroms/seavgabios.bin 0x2cfc0raw 28160 none
config  0x33e00raw   643 none
revision   0x340c0raw   716 none
build_info 0x343c0raw   101 none
fallback/dsdt.aml  0x34480raw 14119 none
cmos_layout.bin0x37c00cmos_layout  1612 none
fallback/postcar   0x38280stage   19976 none
fallback/payload   0x3d100simple elf  69050 none
payload_config  0x4df00raw  1728 none
payload_revision   0x4e600raw   236 none
etc/ps2-keyboard-spinup0x4e740raw 8 none
(empty)0x4e780null   642916 none
bootblock  0xeb700bootblock   18112 none




-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: RFC: Clarifying use of GCC extensions in the coding style

2022-04-18 Thread Arthur Heymans
Hi Julius

Sounds good to me.
Btw a similar discussion happened on the LKML.
https://lore.kernel.org/lkml/CAHk-=whFKYMrF6euVvziW+drw7-yi1pYdf=uccnzj8k09do...@mail.gmail.com/
is the upshot,
which is complete agreement with what you said.
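
A typical instance of the double-evaluation-safe macros discussed below, which
only statement expressions make possible (illustrative, not a coreboot macro):

/* Each argument is evaluated exactly once, so MIN(x++, y) behaves sanely,
 * which a plain ISO C macro cannot guarantee. */
#define MIN(a, b) ({			\
	__typeof__(a) _a = (a);		\
	__typeof__(b) _b = (b);		\
	_a < _b ? _a : _b;		\
})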

Kind regards
Arthur

On Fri, Apr 15, 2022 at 11:30 PM Julius Werner  wrote:

> Hi,
>
> We occasionally get into discussions in code reviews when code uses a
> GCC extension, and a reviewer asks that it be rewritten to be C
> standard compliant instead. A recent example was on the use of `void
> *` arithmetic in this patch
>
> https://review.coreboot.org/c/coreboot/+/56791/9/src/soc/mediatek/common/pcie.c#109
> , and there have been others in the past. I would like to come to a
> consensus on this topic here and add some blurb to the coding style
> document so we don't have to rehash the same discussion over and over
> again.
>
> In my opinion, GCC extensions are absolutely necessary for coreboot
> development and there should be no reason to reject them. Inline
> assembly is the most obvious example -- without it we would have to
> convert a ton of static inline functions that wrap special
> instructions into full linker-level functions in a separate assembly
> file instead, and eat all the unnecessary function call overhead that
> comes with that. Others enable such important features that it would
> become much more dangerous and cumbersome to develop without them --
> most importantly statement expressions
> (https://gcc.gnu.org/onlinedocs/gcc-11.2.0/gcc/Statement-Exprs.html)
> which are necessary to write things like double-evaluation safe
> macros, expression-assertions like dead_code_t() and simple
> convenience shortcuts like wait_us(). And some extensions just offer a
> small bit of convenience to reduce boilerplate -- for example, `void
> *` arithmetic just tends to be useful to prevent cluttering the code
> with a bunch of unnecessary casts to other types that don't add any
> additional meaning to the data (e.g. for an unspecified buffer of
> opaque data I think `void *` is a much more appropriate type than
> `uint8_t *`, even if I want to add a byte offset to it), and I've
> never seen a case where I think it would have actually been unclear to
> anyone what step width the pointer arithmetic was done at (there's no
> reason to assume anything other than bytewise for a `void *`).
>
> If we need some extensions anyway, and coreboot will never be "fully C
> standards compliant" (not that that would actually be very useful for
> anything in practice), I don't see a reason why we should still avoid
> some extensions when we're using others. I think if an (official,
> long-term supported) extension exists and it allows us to write better
> code than standard C, there should be no reason not to use it. (Note
> that for those people who are trying to get coreboot working with
> clang, it generally supports all the same extensions as GCC.) I've
> written a draft CL to add a section for this to the coding style,
> please let me know what you think:
> https://review.coreboot.org/c/coreboot/+/63660
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] lb_serial: drop 'uart_pci_addr' entry

2022-04-17 Thread Arthur Heymans
Hi

In 2016 'uart_pci_addr' was added to the coreboot table entry for serial
devices.
(https://review.coreboot.org/c/coreboot/+/14609)
It was done for the Intel Quark platform, which has its UART on a PCI device
like other Intel hardware. Right now only Quark sets this to a non-zero value,
using an awkwardly defined Kconfig parameter: CONFIG_UART_PCI_ADDR. It looks
like only Tianocore uses this, and there it's pretty much a NOOP used only to
get the VID/DID of the PCI device.

Should we update Tianocore and just drop this from the lb_table?
Most other payloads haven't even updated their copy of the struct to contain
this entry...
Right now our codebase has awkward code with "serial.uart_pci_addr =
CONFIG_UART_PCI_ADDR;" on a lot of platforms that don't even feature PCI, and
there is no real use case as far as I can tell.
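
For context, this is (roughly, from memory) the coreboot table entry in
question; see commonlib/include/commonlib/coreboot_tables.h for the
authoritative definition:

#include <stdint.h>

struct lb_serial {
	uint32_t tag;
	uint32_t size;
	uint32_t type;          /* I/O mapped or memory mapped */
	uint32_t baseaddr;
	uint32_t baud;
	uint32_t regwidth;
	uint32_t input_hertz;
	uint32_t uart_pci_addr; /* the field in question; only Quark fills it */
};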

Do any of your payloads use this in a meaningful way?
If not, can we just drop it?

Kind regards

Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-13 Thread Arthur Heymans
Hi

When platforms stand in the way of improving the general code base, I think
it's not controversial to ask people to either step up and do the necessary
maintenance or move the platform to a branch. Past examples of that would be
things like dropping the romcc bootblock, CAR globals, ...

When a platform is pretty much dead, mainly because it's old or unused, but not
standing in the way of general improvements, I don't think there is much to
gain from moving it to a branch. For instance with the i440BX platform, which
is over 20 years old (I do hope no one has to really use those!), we had
someone maintain it a few years back. I think it was done for the fun of it,
and I think there is value in that. So maybe someone will have fun with Quark
in a few years, fix some problems and make it work again, with the upshot of
having a new community member?

Old platforms should not drag down development on the master branch. If you
need to make a change to soc/quark and
no one is there to test it, then so be it... That does mean that we can't
guarantee that things work in the
master branch, which as I see it is already the case to a large extent on a
lot of platforms.

Not breaking platforms in the master branch is an orthogonal problem that
can only be solved with automated
hardware testing. It is a *hard* and therefore expensive problem to solve...

Now what should we do when something old is suddenly tested to be broken in
the master branch? That's a different
problem where moving things out of the master branch might make more sense.

Kind regards

On Wed, Apr 13, 2022 at 2:42 PM Peter Stuge  wrote:

> Michael Niewöhner wrote:
> > > But once code is moved off master reuse of changes on master will
> > > eventually become impossible and there's no good path to recover from
> > > that situation, so it should be important to avoid such dead ends for
> > > any code we want to stay usable - IMO all code.
> >
> > How would you "reuse [] changes on master" on a platform, where these
> > changes can't be tested? o.O
>
> By reuse I don't mean that code runs, I mean that a commit benefits
> also platforms without test coverage.
>
> There are many ways to determine whether a commit benefits a platform
> or not, testing is just one way and testing alone is a weak indicator.
>
> That's perhaps foreign to someone with a "test-driven" mindset. I
> don't hate on testing at all, I just want to preserve value also
> where there's no coverage when that's possible without much detriment
> to other parts of the code.
>
> I don't think it's reasonable nor is it current practice to require
> every commit to be tested on every affected platform. That would
> obviously be nice data points to have but that has not been coreboot
> reality in the past 20 years and I predict that it will also not be
> so in the next 20 years. I think that's fine.
>
>
> I hope you can understand that my ask is simply to not erase what
> might be working well based only on a lack of information.
>
> I'm obviously grateful that the leadership meeting settled on keeping
> quark at least as long as it causes no problems. Thanks for that!
>
>
> Kind regards
>
> //Peter
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Security notice: SMM can be hijacked by the OS on APs

2022-04-12 Thread Arthur Heymans
Hi

Would it make sense to backport your fix to old releases and bump
> those release numbers to a .1 on the end?
>

Some see releases as mere synchronization tags & nice PR.
Some releases are also branches in gerrit but there are none affected by
this (latest is 4.12 and it was introduced in 4.13).
There is a precedent where 4.8 was bumped to 4.8.1 because all boards were
broken.

I don't have a strong opinion on this.
Do people really use the releases in production or are most using git
anyway?
It's a bit weird to have releases that you'd have to advertise as *don't
use*, but I've seen us do that in the past (because issues are quite often
just fixed in master).

Kind regards
Arthur

On Tue, Apr 12, 2022 at 12:52 AM Peter Stuge  wrote:

> Arthur Heymans wrote:
> > I think this issue might affect a lot more systems than I initially
> thought.
>
> Would it make sense to backport your fix to old releases and bump
> those release numbers to a .1 on the end?
>
>
> //Peter
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Another day, another SMM loader vulnerability

2022-04-12 Thread Arthur Heymans
Hi

The obvious easy solution is to not use SMM but that's a different topic.

I think it should also be doable to write unit tests that would do a setup
for 1 (1 is often a special case) and many cpus and see that things like
stubs, stack, save state, permanent handler, ... don't overlap.
I played around with our unit test framework and I'm impressed. We should
use it way more ;-)
https://review.coreboot.org/c/coreboot/+/63560 is where I did some very
very WIP unit test(s) on SMM loading and I'm already able to load specially
crafted relocatable modules for both stubs and permanent handler.
I think it should be possible to test all kinds of good and bad
configurations and make this code more robust and future proof.
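
The core of such an overlap check is tiny; a minimal sketch (illustrative
names, not the actual code in CB:63560):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Model each SMRAM allocation (stub, stack, save states, permanent handler)
 * as a simple range and assert that no two ranges intersect. */
struct smm_alloc {
	const char *name;
	size_t base;
	size_t size;
};

static bool overlaps(const struct smm_alloc *a, const struct smm_alloc *b)
{
	return a->base < b->base + b->size && b->base < a->base + a->size;
}

static bool layout_is_sane(const struct smm_alloc *allocs, size_t n)
{
	for (size_t i = 0; i < n; i++)
		for (size_t j = i + 1; j < n; j++)
			if (overlaps(&allocs[i], &allocs[j])) {
				printf("%s overlaps %s\n",
				       allocs[i].name, allocs[j].name);
				return false;
			}
	return true;
}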

Kind regards
Arthur


On Mon, Apr 11, 2022 at 6:11 PM ron minnich  wrote:

> arthur, what might we do with either the build process or startup to
> avoid this problem in future? Do you think we could find a way to
> catch this programmatically soon, rather than humanly too late?
>
> On Mon, Apr 11, 2022 at 2:48 AM Arthur Heymans 
> wrote:
> >
> > Hi
> >
> > After last week's SMM loader problem on all but the BSP, I noticed
> another problem in the SMM setup.
> > The permanent smihandler is currently built as a relocatable module such
> that coreboot
> > can place it wherever it thinks it's a good idea. (TSEG is not known at
> buildtime).
> > These relocatable modules have an alignment requirement.
> >
> > It looks however that the code to deal with the alignment requirement is
> also wrong
> > and aligns the handler upwards instead of downwards which makes it
> encroach either an SSE2
> > FX_SAVE area or an SMM register save state. It's hard to know whether
> this is easily exploitable.
> > I would think that with a carefully crafted SMM save state on the right AP,
> arbitrary code execution might be possible. On the other hand I noticed
> last week that launching SMM on APs is broken too so this is likely a
> lesser problem.
> >
> > Anyway the fix is in https://review.coreboot.org/c/coreboot/+/63475
> > (It has a comment indicating what code was causing this problem)
> > Please review and update your coreboot code!
> >
> > Kind regards
> > Arthur
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Another day, another SMM loader vulnerability

2022-04-11 Thread Arthur Heymans
Hi

After last week's SMM loader problem on all but the BSP, I noticed another
problem in the SMM setup.
The permanent smihandler is currently built as a relocatable module such
that coreboot
can place it wherever it thinks it's a good idea. (TSEG is not known at
buildtime).
These relocatable modules have an alignment requirement.

It looks however that the code to deal with the alignment requirement is
also wrong
and aligns the handler upwards instead of downwards which makes it encroach
either an SSE2
FX_SAVE area or an SMM register save state. It's hard to know whether this
is easily exploitable.
I would think that a carefully crafted SMM save state on the right AP
arbitrary code executing might be possible. On the other hand I noticed
last week that launching SMM on APs is broken too so this is likely a
lesser problem.

Anyway the fix is in https://review.coreboot.org/c/coreboot/+/63475
(It has a comment indicating what code was causing this problem)
Please review and update your coreboot code!

Kind regards
Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Security notice: SMM can be hijacked by the OS on APs

2022-04-08 Thread Arthur Heymans
Hi

I did some testing on real hardware with an Intel Coffeelake system on
whether
vectoring out of TSEG is prohibited by the hardware, which I assumed would
be the case.
It's *not* the case! Vectoring out of TSEG does succeed so this issue
really affects modern hardware.
So I think this issue might affect a lot more systems than I initially
thought.

Kind regards
Arthur

On Fri, Apr 8, 2022 at 12:43 AM Arthur Heymans  wrote:

> Hi
>
> When refactoring the coreboot SMM setup I noticed that there is a security
> vulnerability in our SMM setup code.
>
> It boils down to this: except on the BSP the smihandler code will execute
> code at a random location, but most likely at offset 0. With some carefully
> crafted code a bootloader or the OS could place some code at that offset,
> generate an SMI on an AP and get control over SMM. More recent silicon has
> hardware mechanisms to avoid executing code outside the designated SMM area
> (TSEG) so those would not be affected.
>
> The commit introducing this problem is
> https://review.coreboot.org/c/coreboot/+/43684.
> Roughly it affects most x86 builds from end 2020/ beginning 2021 till now.
>
> https://review.coreboot.org/c/coreboot/+/63478 fixes the problem. (Feel
> free to review the rest of that series as it makes the smm setup much more
> readable ;-))
>
> Kind regards
> Arthur
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Security notice: SMM can be hijacked by the OS on APs

2022-04-07 Thread Arthur Heymans
Hi

When refactoring the coreboot SMM setup I noticed that there is a security
vulnerability in our SMM setup code.

It boils down to this: except on the BSP the smihandler code will execute
code at a random location, but most likely at offset 0. With some carefully
crafted code a bootloader or the OS could place some code at that offset,
generate an SMI on an AP and get control over SMM. More recent silicon has
hardware mechanisms to avoid executing code outside the designated SMM area
(TSEG) so those would not be affected.

The commit introducing this problem is
https://review.coreboot.org/c/coreboot/+/43684.
Roughly it affects most x86 builds from end 2020/ beginning 2021 till now.

https://review.coreboot.org/c/coreboot/+/63478 fixes the problem. (Feel
free to review the rest of that series as it makes the smm setup much more
readable ;-))

Kind regards
Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: 2022-03-29 - coreboot UEFI working group meeting minutes

2022-03-30 Thread Arthur Heymans
Hi

I would like to add a few notes to the meeting notes to clarify things a
bit better.

  * In the coreboot meeting, it was suggested that we push to just use
>   coreboot tables as they’re already supported in a number of
>   payloads.  This really isn’t practical however.  Intel would need
>   to be able to modify the list of known cbmem IDs and they would
>   need to be able to change the description format to something
>   that’s more self-describing.
>

CBMEM is an internal in-memory database that coreboot uses.
Payloads don't need to know about cbmem and most actually don't.
The handoff structure to payloads is 'coreboot tables'. These are far older
than cbmem; inside the coreboot tree they are even still called 'lb_table',
dating back to when the project was called LinuxBIOS.

'Really isn't practical' is a very relative term here...
On Monday I asked which existing firmware payloads (fwiw this is really a
coreboot concept to begin with) don't support coreboot tables and would benefit
from having a 'universal' self-describing handoff. The answer is none: all
existing firmware payloads support coreboot tables. So the only thing not
'practical' here is that the UEFI teams don't have control over the handoff
structure format that is inside coreboot and is used by coreboot payloads
(coreboot tables). The proposed solution is a new format that all payloads and
coreboot ought to support. Needless to say, this is a lot of work (adapting
both coreboot and all existing payloads) with very little benefit for coreboot.

The current payload handoff method has a number of flaws that
>   they’d like to fix, such as the address for stack being
>   hardcoded.
>

Normally payloads set up their own stack very early on, so this is not a
problem. The context here was that I voiced some practical concerns about using
CBOR as a handoff structure. LinuxBIOS/coreboot tables were carefully designed
to be very easy to parse. In fact so easy to parse that Linux payloads on x86
are loaded the following way:
- cbfstool has an assembly-written trampoline (~150 LOC) that parses the
coreboot tables and fills in the zero page of Linux
- This trampoline is position-independent and stackless for maximum flexibility
- cbfstool appends this trampoline to Linux payloads inside cbfs
- This is done so that coreboot's runtime only knows how to load the 'SELF'
format (simple ELF), which abstracts all the complexities of payload formats
away at buildtime (cbfstool). This is also how other formats are handled (ELF,
raw binary, FV, ...)

My objection to a new format like CBOR was that it is likely very hard to parse
using the same trampoline scheme. It is likely possible to write a trampoline
using a stack in C, but then again that just complicates things a lot,
needlessly, just to adopt a new format with probably little to gain.
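
To give an idea of how little is needed to consume coreboot tables, a minimal
walker looks roughly like this (simplified from memory; the authoritative
definitions live in commonlib/include/commonlib/coreboot_tables.h):

#include <stdint.h>

struct lb_header {
	uint8_t  signature[4];   /* "LBIO" */
	uint32_t header_bytes;
	uint32_t header_checksum;
	uint32_t table_bytes;
	uint32_t table_checksum;
	uint32_t table_entries;
};

struct lb_record {
	uint32_t tag;
	uint32_t size;
};

/* Walk all records: a handful of instructions, no heap, barely any stack. */
static void walk_lb_table(const struct lb_header *hdr,
			  void (*cb)(const struct lb_record *))
{
	const uint8_t *p = (const uint8_t *)hdr + hdr->header_bytes;

	for (uint32_t i = 0; i < hdr->table_entries; i++) {
		const struct lb_record *rec = (const struct lb_record *)p;
		cb(rec);
		p += rec->size;
	}
}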

* The coreboot project can, however, encapsulate a CBOR-based
>   handoff-structure into cbmem, similar to what we currently do with
>   ACPI tables.
>

I think this is about supporting both a CBOR-based handoff and coreboot tables
at the same time.
My concern here is that it requires some synchronization between both codepaths
and just increases maintenance in general. Introducing multiple codepaths to do
roughly the same thing is an error we get bitten by way too often. I think we
should be careful about this...

Additionally Intel was willing to look at using CBOR structures as
>   input and output to the FSP, so we could get rid of both the UPDs
>   & HOBs.
>

This seems like the real positive upshot of that conversation!

Kind regards
Arthur

On Tue, Mar 29, 2022 at 9:03 PM coreboot org  wrote:

> # 2022-03-29 - coreboot UEFI working group
>   Next meeting: 2022-04-26
>
> ## Attendees:
>Jay, Werner, Martin, Sheng, Ron, David
>
> ## Minutes:
>
> * In the coreboot meeting last week, we discussed a proposal from Intel
>   to change the handoff mechanism to payloads [1].  After that
>   discussion, the decision was made to move further discussion of that
>   topic over to the UEFI working group meeting.  There was another
>   meeting on Monday, March 28th where we got a much better understanding
>   of what’s going on and the reasons behind the proposal.
> * Intel wants to look at modifying the payload handoff as a part of
>   their universal-scalable-firmware [2] initiative.
> * The current payload handoff method has a number of flaws that
>   they’d like to fix, such as the address for stack being
>   hardcoded.
> * This likely also gives coreboot an opportunity to address
>   other parts of the USF spec that cause issues or
>   incompatibilities for us.
> * The coreboot project got involved when Sheng suggested to Intel
>   that it would be good to align the payload-handoff format with
>   other projects such as coreboot.
> * Thanks Sheng!
> * Using CBOR is the current proposal, but Intel was open to
>   discussing different formats than CBOR if anyone ha

[coreboot] Re: Multi domain PCI resource allocation: How to deal with multiple root busses on one domain

2022-03-22 Thread Arthur Heymans
>
>
> So it can be handled as you proposed in CB:59395 or we can define weak
> function e.g. get_max_subordinate(int current) which return 0xff by
> default and can be overriden in soc code to return real allowed max
> subordinate no.
>
> int __weak get_max_subordinate(int current) { return 0xff;};
>
> and in src/device/pci_device.c
>
> subordinate = get_max_subordinate(primary); // instead of subordinate =
> 0xff; /* MAX PCI_BUS number here */


I chose to have it directly in the devicetree rather than in weak functions,
as the soc-specific override function would essentially be a loop over the
devicetree struct, which seems more fragile when things are being appended to
it (scan_bus).


On Tue, Mar 22, 2022 at 2:11 PM Mariusz Szafrański via coreboot <
coreboot@coreboot.org> wrote:

> On 22.03.2022 at 12:38, Arthur Heymans wrote:
> > sidenote: it also looks like the hardware really does not like to have
> > PCI bridges on a IIO stack set a subordinate
> > value larger than the IIO stack 'MaxBus' (basically a stack-level
> > subordinate bus?). So scanning PCI busses needs some care.
> > See https://review.coreboot.org/c/coreboot/+/59395
>
> Each stack can have preassigned PCI bus range. window from busbase (pci
> bus no of first root bus on stack) to IIO stack 'MaxBus' inclusive. If
> MaxBus
> So you can logically (and with big simplification) imagine this as there
> exists preconfigured 'virtual bridge' between CPU and stack PCI root
> buses with secondary set to busbase and subordinate set to 'MaxBus'
> (same for io window/mem below 4G window/mem above 4G - one of each type
> per each stack)
>
> There can also exists stacks marked as disabled or reserved with or
> without defined pci bus ranges. PCI bus no defined in disabled or
> reserved stacks should not be used/accessed. Access can cause
> hang/lookup or very long delays. So only bus ranges defined in "enabled"
> stacks should be used.
>
> So it can be handled as you proposed in CB:59395 or we can define weak
> function e.g. get_max_subordinate(int current) which return 0xff by
> default and can be overriden in soc code to return real allowed max
> subordinate no.
>
> int __weak get_max_subordinate(int current) { return 0xff;};
>
> and in src/device/pci_device.c
>
> subordinate = get_max_subordinate(primary); // instead of subordinate =
> 0xff; /* MAX PCI_BUS number here */
>
> Mariusz
>
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Multi domain PCI resource allocation: How to deal with multiple root busses on one domain

2022-03-22 Thread Arthur Heymans
Hi

Hi
>
>
>> e.g. if we got from HOB info that physical stack x has preallocated PCI
>> buses 0x20..0x2f, io form 0x2000..0x2fff, mem 0xd000..0xdfff, mem
>> 0x100...0x1ff and there are 2 root buses 0x20 and 0x28
>> instead of adding one domain with "physical" stack we added two domains
>> with "virtual" stacks:
>> stack x1 with virtually preallocated PCI buses 0x20..0x27, ip form
>> 0x2000..0x27ff, mem 0xd000..0xd7ff, mem
>> 0x100...0x17f
>> stack x2 with virtually preallocated PCI buses 0x28..0x2f, ip form
>> 0x2800..0x2fff, mem 0xd700..0xdfff, mem
>> 0x180...0x1ff
>>
>> Each one with only one root bus without this link_list->next "complexity"
>>
> This only works if the downstream resources fit in this split virtual
> allocation, which you can't know before reading all downstream resources.
> Especially for mem32 resources the resource allocation is already tight so
> I think this can get ugly.
>
> Today most resources are mem64 "ready" and the above-4G window is big
> enough, so using a "prefer 64bit" strategy practically eliminates this
> tightness in the mem32 range
>

I've seen FSP allocating just enough mem32 space for built-in endpoint PCI
32bit-only resources spread over multiple root busses on the same stack (on
a DINO stack? No idea what that really means).
Just splitting the mem32 resources in 'half' like you suggest would break
allocation, so no. The "prefer 64bit" strategy is certainly needed but not
sufficient.

On 22.03.22 09:57, Mariusz Szafrański via coreboot wrote:
> > e.g. if we got from HOB info that physical stack x has preallocated PCI
> > buses 0x20..0x2f, io form 0x2000..0x2fff, mem 0xd000..0xdfff,
> > mem 0x100...0x1ff and there are 2 root buses 0x20 and
> > 0x28 instead of adding one domain with "physical" stack we added two
> > domains with "virtual" stacks:
>
> I'm still trying to learn what a "stack" comprises. I'm pretty sure most
> of the problems solve themselves if we map the Intel terms to standard
> and coreboot terms.
>
> Would the following be a correct statement about stacks? A "stack"
> always has dedicated I/O port and memory ranges (that don't overlap
> with anything else, especially not with the ranges of other stacks)
> and has one or more PCI root buses.
>
> If so, are the PCI bus numbers separate from those of other stacks?
> Or do all stacks share a single range of 0..255 PCI buses? In standard
> terms, do they share a single PCI segment group?


So that is actually configurable in the hardware, but currently all stacks
consume a set of PCI busses on a single 0..255 PCI segment.
The PCI busses allocated to a stack are then consumed by endpoint devices
directly on the stack or by 'regular' PCI bridges behind it.

sidenote: it also looks like the hardware really does not like PCI
bridges on an IIO stack setting a subordinate
value larger than the IIO stack 'MaxBus' (basically a stack-level
subordinate bus?). So scanning PCI busses needs some care.
See https://review.coreboot.org/c/coreboot/+/59395

On Tue, Mar 22, 2022 at 12:21 PM Nico Huber  wrote:

> On 22.03.22 09:57, Mariusz Szafrański via coreboot wrote:
> > e.g. if we got from HOB info that physical stack x has preallocated PCI
> > buses 0x20..0x2f, io form 0x2000..0x2fff, mem 0xd000..0xdfff,
> > mem 0x100...0x1ff and there are 2 root buses 0x20 and
> > 0x28 instead of adding one domain with "physical" stack we added two
> > domains with "virtual" stacks:
>
> I'm still trying to learn what a "stack" comprises. I'm pretty sure most
> of the problems solve themselves if we map the Intel terms to standard
> and coreboot terms.
>
> Would the following be a correct statement about stacks? A "stack"
> always has dedicated I/O port and memory ranges (that don't overlap
> with anything else, especially not with the ranges of other stacks)
> and has one or more PCI root buses.
>
> If so, are the PCI bus numbers separate from those of other stacks?
> Or do all stacks share a single range of 0..255 PCI buses? In standard
> terms, do they share a single PCI segment group?
>
> Nico
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Multi domain PCI resource allocation: How to deal with multiple root busses on one domain

2022-03-22 Thread Arthur Heymans
Hi


> e.g. if we got from HOB info that physical stack x has preallocated PCI
> buses 0x20..0x2f, io form 0x2000..0x2fff, mem 0xd000..0xdfff, mem
> 0x100...0x1ff and there are 2 root buses 0x20 and 0x28
> instead of adding one domain with "physical" stack we added two domains
> with "virtual" stacks:
> stack x1 with virtually preallocated PCI buses 0x20..0x27, ip form
> 0x2000..0x27ff, mem 0xd000..0xd7ff, mem
> 0x100...0x17f
> stack x2 with virtually preallocated PCI buses 0x28..0x2f, ip form
> 0x2800..0x2fff, mem 0xd700..0xdfff, mem
> 0x180...0x1ff
>
> Each one with only one root bus without this link_list->next "complexity"
>
This only works if the downstream resources fit in this split virtual
allocation, which you can't know before reading all downstream resources.
Especially for mem32 resources the resource allocation is already tight so
I think this can get ugly.

At some point of time I was thinking about something called "subdomains"
> concept to cover this multiple root buses in one domain case so to make
> something like:
> domain 0  //domain
> domain 1 //subdomain
>  first root bus from stack x and its downstream devices
> end
> domain 2 //subdomain
> second root bus from stack x and its downstream devices
> end
> end
> domain ...
> ...
> end
> ...
>
>
The way I understood it, domains are a set of resource windows to be
constrained and then distributed over children, and in this case the
children sit on multiple PCI root busses.
I have some doubts that subdomains map the situation correctly/efficiently,
because they have essentially the same problem: knowing how to split the
resources between domains correctly.

OTOH, does it even make sense to map this in the devicetree? The way FSP
reports stacks is generated at runtime and differs depending on the
hardware configuration.
So having a static structure mapping it may not be that interesting?

Arthur


On Tue, Mar 22, 2022 at 9:58 AM Mariusz Szafrański via coreboot <
coreboot@coreboot.org> wrote:

> Hi Artur,
>
> Multiple PCI root buses per domain give us more control over resource
> allocation for downstream devices from the pool preallocated by FSP for the
> stack, but add this link_list->next looping complexity (maybe not a big deal
> to handle that, as you stated). Additional work will be needed for statically
> defining them in the devicetree.
>
> For us it was easiest to split them "virtually"
>
> e.g. if we got from HOB info that physical stack x has preallocated PCI
> buses 0x20..0x2f, io form 0x2000..0x2fff, mem 0xd000..0xdfff, mem
> 0x100...0x1ff and there are 2 root buses 0x20 and 0x28
> instead of adding one domain with "physical" stack we added two domains
> with "virtual" stacks:
> stack x1 with virtually preallocated PCI buses 0x20..0x27, ip form
> 0x2000..0x27ff, mem 0xd000..0xd7ff, mem
> 0x100...0x17f
> stack x2 with virtually preallocated PCI buses 0x28..0x2f, ip form
> 0x2800..0x2fff, mem 0xd700..0xdfff, mem
> 0x180...0x1ff
>
> Each one with only one root bus without this link_list->next "complexity"
>
> It was just a "shortcut" for now.
>
> At some point of time I was thinking about something called "subdomains"
> concept to cover this multiple root buses in one domain case so to make
> something like:
> domain 0  //domain
> domain 1 //subdomain
>  first root bus from stack x and its downstream devices
> end
> domain 2 //subdomain
> second root bus from stack x and its downstream devices
> end
> end
> domain ...
> ...
> end
> ...
>
> But in the end I didn't try to implement it.
>
> An additional dirty "overflow" trick that I used for some time was to use
> something like:
> domain 0
> //first root bus
> pci 0:0.1 end
> 
> //second root bus
>     pci 0x20:0 end  //overflow at 0x20 pci bus number boundary
> 
> end
>
> And dynamically update 0x20 -> the real bus number at runtime.
>
> All of the above is what we have tried, but none of it is a final solution.
> Maybe it will give someone else a hint to find a better/easier way to handle
> this hw in coreboot.
>
> Mariusz
> W dniu 22.03.2022 o 08:29, Arthur Heymans pisze:
>
> Hi Mariusz
>
> I was inspired by the multi domain approach doc and got quite far already.
> I decided to allocate and attach domains at runtime for the mom

[coreboot] Re: Multi domain PCI resource allocation: How to deal with multiple root busses on one domain

2022-03-22 Thread Arthur Heymans
Hi Mariusz

I was inspired by the multi domain approach doc and got quite far already.
I decided to allocate and attach domains at runtime for the moment
instead of statically via the devicetree. In the future I think having
devicetree structures makes a lot of sense, e.g. to provide stack specific
configuration or derive the IIO bifurcation configuration from it.
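To sketch what that looks like (hypothetical code; the ops name is made up and
the HOB parsing is elided), attaching one extra domain per reported stack
boils down to something like:

/* Hypothetical sketch: create a domain device per stack from the FSP HOB
 * instead of declaring them statically in the devicetree. */
static void add_stack_domain(unsigned int domain_id)
{
	struct device_path path = {
		.type = DEVICE_PATH_DOMAIN,
		.domain = { .domain = domain_id },
	};
	/* dev_root's downstream bus is the parent of all domains. */
	struct device *domain = alloc_find_dev(dev_root.link_list, &path);

	domain->ops = &xeonsp_pci_domain_ops; /* made-up ops name */
	/* bus number, IO and MEM windows from the HOB get attached here */
}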

I see that FSP sometimes does this virtual splitting, but not always, and
just reports multiple PCI roots on one stack (via a HOB).
I don't think splitting up a reported stack into multiple domains in coreboot
is a good idea. It means you need to be aware of
the downstream resources when constraining the domain resources to the
ones reported to be allocated to the stack.
That again means bypassing or redoing a lot of the coreboot resource
allocation, so I doubt this approach makes sense.
I'm currently thinking that the multiple PCI root bus per domain approach
is the easiest. It already works with some minor allocator changes.

Kind regards

Arthur

On Tue, Mar 22, 2022 at 8:15 AM Mariusz Szafrański via coreboot <
coreboot@coreboot.org> wrote:

> Hi Arthur,
>
> In our multidomain based PoC in this situation (multiple root busses on
> one stack) we "virtually" splitted this stack and its resource window to
> two or more virtual stacks and later handled as separate stacks.
>
> Mariusz
>
> W dniu 17.03.2022 o 19:03, Arthur Heymans pisze:
> > Hi
> >
> > I've recently tried to improve the soc/intel/xeon_sp codebase.
> > I want to make it use more native coreboot structures and codeflows
> > instead of parsing the FSP HOB again and again to do things. Ideally
> > the HOB is parsed only once in ramstage, parsed into adequate native
> > coreboot structures (struct device, struct bus, chip_info, ...) and
> > used later on.
> >
> > The lowest hanging fruit in that effort is resource allocation.
> > Currently the coreboot allocator is sort of hijacked by the soc code
> > and done over again.
> > The reason for this is that xeon_sp platforms operate a bit
> > differently than most AMD and Intel Client hardware: there are
> > multiple root busses. This means that there are PCI busses that are in
> > use, but are not downstream from PCI bus 0. In hardware terminology
> > those are the IIO and other type of Stacks.
> >
> > Each Stack has its own range of usable PCI Bus numbers and decoded IO
> > and MEM spaces below and above 4G. I tried to map these hardware
> > concepts to the existing coreboot 'domain' structure. Each domain has
> > resource windows that are used to allocate children devices on, which
> > would be the PCI devices on the stacks.
> > The allocator needs some tweaks to allow for multiple resources of a
> > type (MEM or IO), but nothing major. See
> > https://review.coreboot.org/c/coreboot/+/62353/ and
> > https://review.coreboot.org/c/coreboot/+/62865 (allocator
> > rewrite/improvement based on Nico's excellent unmerged v4.5 work) This
> > seems to work really well and arguably even better than how it is now
> > with more elegant handling of above and below 4G resources.
> >
> > Now my question is the following:
> > On some Stacks there are multiple root busses, but the resources need
> > to be allocated on the same window. My initial idea was to add those
> > root busses as separate struct bus in the domain->link_list. However
> > currently the allocator assumes only one bus on domains (and bridges).
> > In the code you'll see a lot of things like
> >
> > for (child = domain->link_list->children; child; child = child->sibling)
> >   
> >
> > This is fine if there is only one bus on the domain.
> > Looping over link_list->next, struct bus'ses is certainly an option
> > here, but I was told that having only one bus here was a design
> > decision on the allocator v4 rewrite. I'm not sure how common that
> > assumption is in the tree, so things could be broken in awkward ways.
> >
> > Do you have any suggestions to move forward?
> >
> > Kind regards
> >
> > Arthur Heymans
> >
> >
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
>
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Multi domain PCI resource allocation: How to deal with multiple root busses on one domain

2022-03-21 Thread Arthur Heymans
Hi all

Thanks a lot for the input.

I looked a bit further into this and it looks like only the resource
allocation part assumes one downstream bus under link_list.
The rest of coreboot seems to properly account for sibling busses, so maybe
making the allocator loop over ->next in busses is
not so bad after all. https://review.coreboot.org/c/coreboot/+/62967
implements this.

OTOH I'm under the impression that the sconfig tool currently does
not easily allow for statically defining multibus domains.

Kind regards

Arthur


On Fri, Mar 18, 2022 at 3:20 PM Nico Huber  wrote:

> Hi Lance,
>
> On 18.03.22 05:06, Lance Zhao wrote:
> > Stack idea is from
> >
> https://www.intel.com/content/www/us/en/developer/articles/technical/utilizing-the-intel-xeon-processor-scalable-family-iio-performance-monitoring-events.html
>
> thank you very much! The diagrams are enlightening. I always assumed
> Intel calls these "stacks" because there are multiple components invol-
> ved that matter for software/firmware development. Turns out these
> stacks are rather black boxes to us and we don't need to know what
> components compose a stack, is that right?
>
> Looking at these diagrams, I'd say the IIO stacks are PCI host bridges
> from our point of view.
>
> > In linux, sometimes domain is same as "segment", I am not sure current
> > coreboot on xeon_sp already cover the case of multiple segment yet.
>
> These terms are highly ambiguous. We always need to be careful to not
> confuse them, e.g. "domain" in one project can mean something very dif-
> ferent than our "domain device".
>
> Not sure if you are referring to "PCI bus segments". These are very dif-
> ferent from our "domain" term. I assume coreboot supports multiple
> PCI bus segments. At least it looks like one just needs to initialize
> `.secondary` and `.subordinate` of the downstream link of a PCI host
> bridge accordingly.
>
> There is also the term "PCI segment group". This refers to PCI bus
> segments that share a space of 256 buses, e.g. one PCI bus segment
> could occupy buses 0..15 and another 16..31 in the same group. Multiple
> PCI segment groups are currently not explicitly supported. Might work,
> though, if the platform has a single, consecutive ECAM/MMCONF region to
> access more than the first group.
>
> Nico
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Multi domain PCI resource allocation: How to deal with multiple root busses on one domain

2022-03-17 Thread Arthur Heymans
Hi

I've recently tried to improve the soc/intel/xeon_sp codebase.
I want to make it use more native coreboot structures and codeflows instead
of parsing the FSP HOB again and again to do things. Ideally the HOB is
parsed only once in ramstage, parsed into adequate native coreboot
structures (struct device, struct bus, chip_info, ...) and used later on.

The lowest hanging fruit in that effort is resource allocation.
Currently the coreboot allocator is sort of hijacked by the soc code and
done over again.
The reason for this is that xeon_sp platforms operate a bit differently
than most AMD and Intel Client hardware: there are multiple root busses.
This means that there are PCI busses that are in use, but are not
downstream from PCI bus 0. In hardware terminology those are the IIO and
other type of Stacks.

Each Stack has its own range of usable PCI Bus numbers and decoded IO and
MEM spaces below and above 4G. I tried to map these hardware concepts to
the existing coreboot 'domain' structure. Each domain has resource windows
that are used to allocate children devices on, which would be the PCI
devices on the stacks.
The allocator needs some tweaks to allow for multiple resources of a type
(MEM or IO), but nothing major. See
https://review.coreboot.org/c/coreboot/+/62353/ and
https://review.coreboot.org/c/coreboot/+/62865 (allocator
rewrite/improvement based on Nico's excellent unmerged v4.5 work) This
seems to work really well and arguably even better than how it is now with
more elegant handling of above and below 4G resources.

Now my question is the following:
On some Stacks there are multiple root busses, but the resources need to be
allocated on the same window. My initial idea was to add those root busses
as separate struct bus in the domain->link_list. However currently the
allocator assumes only one bus on domains (and bridges).
In the code you'll see a lot of things like

for (child = domain->link_list->children; child; child = child->sibling)
  

This is fine if there is only one bus on the domain.
Looping over link_list->next, struct bus'ses is certainly an option here,
but I was told that having only one bus here was a design decision on the
allocator v4 rewrite. I'm not sure how common that assumption is in the
tree, so things could be broken in awkward ways.
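For reference, the change I have in mind is conceptually just one extra loop
over the sibling busses (a rough sketch of the idea, not actual allocator
code):

	struct bus *bus;
	struct device *child;

	/* Walk every bus hanging off the domain, not only the first one. */
	for (bus = domain->link_list; bus; bus = bus->next) {
		for (child = bus->children; child; child = child->sibling) {
			/* constrain/allocate resources for 'child' as before */
		}
	}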

Do you have any suggestions to move forward?

Kind regards

Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Proposition to revert GCC 11

2022-02-14 Thread Arthur Heymans
Hi

It looks like alignment on a packed struct being cast twice to different
integer types was the root cause on the ironlake platform.
https://review.coreboot.org/c/coreboot/+/61938 fixed the issue.
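For context, a minimal made-up illustration of the kind of pattern involved
(the actual ironlake code and fix are in the CL above):

#include <stdint.h>

struct __attribute__((packed)) example {
	uint8_t  tag;
	uint32_t value; /* not naturally aligned because of 'tag' */
};

uint32_t read_value(struct example *e)
{
	/* Taking the address of a packed member and casting it to two
	 * differently-typed integer pointers lets the compiler assume an
	 * alignment the data doesn't have; newer GCC optimizes more
	 * aggressively around that assumption, so code that happened to
	 * work with GCC 8 can misbehave with GCC 11. */
	uint16_t *lo  = (uint16_t *)&e->value;
	uint32_t *all = (uint32_t *)&e->value;
	return (uint32_t)*lo | *all;
}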

Kind regards

On Mon, Feb 14, 2022 at 9:12 AM Arthur Heymans  wrote:

> Hi
>
> There have been some reports of GCC11 not booting on some AGESA fam15
> platforms and on Intel ironlake.
>
> I confirmed that on my X201 (ironlake). The system hangs at an endless
> loop during heci init.
> The code generated doing that part is not wrong, which makes me think that
> other code in that
> 5k+ LOC file is incorrectly generated. Figuring out what is going on:
> whether that gcc revision is broken or our code is going to take some time.
> Clang (with some patches to get it to build) does result in a booting image
> btw. Also my fedora35 gcc11 has the same issue.
>
> My initial approach for figuring out what function is incorrectly
> generated is to move them over in a separate file and replace the .o file
> with a gcc8 generated .o file. That approach will take some time for sure
> as there are a lot of functions in that 5K LOC file.
>
> Another suggestion was to bisect gcc, which is more straightforward.
>
> Any suggestions for a faster/better approach?
>
> I'm suggesting reverting gcc while this issue is being sorted out.
> One issue with this however is that older GCC don't build anymore with my
> fedora gcc11 toolchain.
> However this can likely easily be fixed using docker.
> GCC 8.3 was the previous revision we used but there have been a lot of
> releases in between so those could be considered too.
>
> Kind regards
>
> Arthur
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Proposition to revert GCC 11

2022-02-14 Thread Arthur Heymans
Hi

There have been some reports of coreboot built with GCC 11 not booting on
some AGESA fam15 platforms and on Intel ironlake.

I confirmed that on my X201 (ironlake). The system hangs in an endless loop
during heci init.
The code generated for that part is not wrong, which makes me think that
other code in that
5k+ LOC file is incorrectly generated. Figuring out what is going on, i.e.
whether that gcc revision is broken or our code is, is going to take some time.
Clang (with some patches to get it to build) does result in a booting image
btw. Also my fedora35 gcc11 has the same issue.

My initial approach for figuring out which function is incorrectly generated
is to move functions over into a separate file and replace the .o file with a
gcc8-generated .o file. That approach will take some time for sure, as there
are a lot of functions in that 5K LOC file.

Another suggestion was to bisect gcc, which is more straightforward.

Any suggestions for a faster/better approach?

I'm suggesting reverting gcc while this issue is being sorted out.
One issue with this, however, is that older GCC versions don't build anymore
with my fedora gcc11 toolchain.
This can likely easily be fixed using docker.
GCC 8.3 was the previous revision we used, but there have been a lot of
releases in between, so those could be considered too.

Kind regards

Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-30 Thread Arthur Heymans
Hi Keith

Thanks a lot for testing! It looks like the newer parallel MP code uses
"mfence", which is probably not supported by your CPU.
I updated the code to reflect that.
I'd appreciate it if you could test the latest version of
https://review.coreboot.org/c/coreboot/+/59693/
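Roughly, the idea of the fix is just to gate the fence (a sketch under the
assumption that SSE2 availability is the right condition, not the literal
patch):

static inline void ap_sync_barrier(void)
{
	/* Older P6-class parts like the Pentium III don't have MFENCE. */
	if (CONFIG(SSE2))
		asm volatile ("mfence" ::: "memory");
	else
		asm volatile ("" ::: "memory"); /* plain compiler barrier */
}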

Kind regards

On Tue, Nov 30, 2021 at 8:13 PM Keith Hui  wrote:

> Hi everyone,
>
> Thanks for your efforts to keep a computing legend alive. :)
>
> I suffered an unexpected exception after applying the patch train.
> Serial log at the end of this email. I probably could leave out
> bootblock/romstage/postcar, but it's here for completeness. Next:
> bisect.
>
> I do still have a P2B-DS on hand, but all my Pentium 3 CPUs are
> singles, and Pentium III-S 1400MHz (the best CPU money can buy for
> this board) are running ~$85 apiece on ebay. On the other hand, I
> think one of my two P2B-LS may have died.
>
> (Branden - and a P3B-F board too. ;-)
>
> Meanwhile, I should have pushed harder to get P8Z77-M into the tree.
>
> Keith
>
> On Tue, 30 Nov 2021 at 05:32, Angel Pons  wrote:
> >
> > Hi Branden,
> >
> > On Mon, Nov 29, 2021 at 9:18 PM Branden Waldner 
> wrote:
> > >
> > > I wasn't really sure that I wanted to comment on this, but seeing as
> > > how I have some of the affected boards I guess I should.
> >
> > Thank you very much.
> >
> > >  Angel Pons wrote:
> > > > Besides AMD AGESA boards, the other boards that need to be updated
> are AOpen DXPL
> > > > Plus-U (a dual-socket server board that uses Netburst Xeons, no
> other board in the tree uses
> > > > the same chipset code) and various Asus P2B boards (which support
> Pentium 2/3 CPUs, these
> > > > boards are older than me). Even though I only know two people who
> still have some of these
> > > > boards (and they don't have the same boards), they're still
> supported because the code has
> > > > been maintained so far.
> > >
> > > I am one of the two with Asus P2B boards, with Keith Hui being the
> > > other. I've got a P2B and a P2-99 and I believe Keith Hui has a
> > > P2B-LS.
> > > So far there have not been very many changes and Keith Hui and others
> > > have worked on them, all I've done is test master and relevant patch
> > > sets every once in a while.
> > > I know I have not been uploading board_status results and I have not
> > > gotten around to fixing the variant set up for the P2-99 so I'm not
> > > uploading results that are uncertain about which board they are for.
> > > Not really relevant, but I think it is pretty neat to be running
> > > coreboot on boards older than some of the contributors.
> > >
> > >  Mike Banon wrote:
> > > > I am often build-testing my boards (didn't notice a
> > > > https://review.coreboot.org/c/coreboot/+/59636 problem for a while,
> but only because I've been
> > > > re-using the previously built toolchains to save time). Also, I am
> actively tech-supporting all the
> > > > people who would like to build coreboot for AMD boards from this
> list, even right now I am in an
> > > > active message exchange with >10 people who are switching to these
> boards to run coreboot
> > > > on them - and any user may give back to the project one day.
> > >
> > > I actually have a few AMD boards and laptops that might be viable for
> > > porting to, but I've never looked in to it much because of the state
> > > support is in coreboot and the fact most of the hardware was actively
> > > being used.
> > >
> > >  Arthur Heymans wrote:
> > > > The first one I'd like to deprecate is LEGACY_SMP_INIT. This also
> includes the codepath for
> > > > SMM_ASEG. This code is used to start APs and do some feature
> programming on each AP, but
> > > > also set up SMM. This has largely been superseded by PARALLEL_MP,
> which should be able
> > > > to cover all use cases of LEGACY_SMP_INIT, with little code changes.
> The reason for
> > > > deprecation is that having 2 codepaths to do the virtually the same
> increases maintenance
> > > > burden on the community a lot, while also being rather confusing.
> > > >
> > > > A few things are lacking in PARALLEL_MP init: - Support for
> !CONFIG_SMP on single core
> > > > systems. It's likely easy to extend PARALLEL_MP or write some code
> that just does CPU
> > > > detection on the BSP CPU. - Support 

[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-25 Thread Arthur Heymans
> Do you remember from where you got these magic values? Suspect I'm going
> to need similar. Will investigate soc/amd/¨* too.

> /* QEMU-specific register */
> #define EXT_TSEG_MBYTES 0x50
> +#define SMRAMC 0x9d
> +#define C_BASE_SEG ((0 << 2) | (1 << 1) | (0 << 0))
> +#define G_SMRAME   (1 << 3)
> +#define D_LCK  (1 << 4)
> +#define D_CLS  (1 << 5)
> +#define D_OPEN (1 << 6)
> +#define ESMRAMC0x9e
> +#define T_EN   (1 << 0)
> +#define TSEG_SZ_MASK   (3 << 1)
> +#define H_SMRAME   (1 << 7)

Those are northbridge-specific registers that control the SMM window
(SMRAM). The BKDG should have something similar.
TSEG is also an interesting search term.
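As a rough illustration of how those bits get used (QEMU-style northbridge at
D0:F0, made-up helper names; the AMD registers differ, so check the BKDG for
the equivalents):

#include <device/pci_ops.h>

/* Open SMRAM at the legacy A-segment so the SMI handler can be copied in. */
static void aseg_open(void)
{
	pci_s_write_config8(PCI_DEV(0, 0, 0), SMRAMC,
			    D_OPEN | G_SMRAME | C_BASE_SEG);
}

/* Close and lock it; once D_LCK is set, SMRAMC is read-only until reset. */
static void aseg_lock(void)
{
	pci_s_write_config8(PCI_DEV(0, 0, 0), SMRAMC,
			    D_LCK | G_SMRAME | C_BASE_SEG);
}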

On Thu, Nov 25, 2021 at 7:39 PM awokd via coreboot 
wrote:

> Arthur Heymans:
>
> > https://review.coreboot.org/c/coreboot/+/48210 and
> > https://review.coreboot.org/c/coreboot/+/48262/ provided the
> implementation
> > for PARALLEL_MP on qemu.
> > Notice that modern AMD CPUs (soc/amd/¨*) also use PARALLEL_MP and can be
> > used as an example for AMD AGESA platforms too.
> >
> > Good luck!
>
> Thank you, going to need it! Would be nice if that AMD open source rep.
> wanted to own and deliver on global (to AMD AGESA) changes like this to
> "demonstrate a renewed commitment to the community" in corpospeak, but
> will see what I can do.
>
> Do you remember from where you got these magic values? Suspect I'm going
> to need similar. Will investigate soc/amd/¨* too.
>
> /* QEMU-specific register */
> #define EXT_TSEG_MBYTES 0x50
> +#define SMRAMC 0x9d
> +#define C_BASE_SEG ((0 << 2) | (1 << 1) | (0 << 0))
> +#define G_SMRAME   (1 << 3)
> +#define D_LCK  (1 << 4)
> +#define D_CLS  (1 << 5)
> +#define D_OPEN (1 << 6)
> +#define ESMRAMC0x9e
> +#define T_EN   (1 << 0)
> +#define TSEG_SZ_MASK   (3 << 1)
> +#define H_SMRAME   (1 << 7)
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-25 Thread Arthur Heymans
> To address the OP, it seems like there is some activity on getting an
> AGESA RESOURCE_ALLOCATOR_V4 working, but is an AGESA PARALLEL_MP init
> also needed (and is there any activity or something I can do to help?)
> Realize resources may not exist to spoon feed problem definitions to a
> level my brain can handle.

https://review.coreboot.org/c/coreboot/+/48210 and
https://review.coreboot.org/c/coreboot/+/48262/ provided the implementation
for PARALLEL_MP on qemu.
Notice that modern AMD CPUs (soc/amd/*) also use PARALLEL_MP and can be
used as an example for AMD AGESA platforms too.

Good luck!

Arthur

On Thu, Nov 25, 2021 at 6:30 PM awokd via coreboot 
wrote:

> Patrick Georgi via coreboot:
> > On 25.11.21 17:04, Mike Banon  wrote:
> >> 2. It's not just the loss of boards - it's also the loss of coreboot
> >> users/contributors who only have these boards and don't want to switch
> > These users didn't contribute fixes to their boards (or even just
> > feedback that things needs to be done and testing when others provide
> > patches) - are they even contributors?
> >
> > It's easy to argue in favor of "lots of users" (or contributors if you
> > want), but if they're all but invisible, do they even exist?
>
> I contributed a number of changes to address Coverity warnings in the
> AMD families, and am actively using a corebooted G505s and PC Engines
> APU. I lurk in the mailing list most of the time, and have a hard time
> tracking what needs to be done to keep boards alive. On this thread, for
> example, the updated requirements feel like they are (but are likely
> not) coming out of the blue.
>
> To address the OP, it seems like there is some activity on getting an
> AGESA RESOURCE_ALLOCATOR_V4 working, but is an AGESA PARALLEL_MP init
> also needed (and is there any activity or something I can do to help?)
> Realize resources may not exist to spoon feed problem definitions to a
> level my brain can handle.
>
> The rest of the reply might be due to communication losses of non-face
> to face transmission medium.
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-25 Thread Arthur Heymans
me). Also, I am actively tech-supporting all the people who
> would like to build coreboot for AMD boards from this list, even right
> now I am in an active message exchange with >10 people who are
> switching to these boards to run coreboot on them - and any user may
> give back to the project one day.
>
> Hopefully my post above explains why I think "dropping 50 boards is a
> bad idea", although I agree that it would be nice to get a resource
> allocator v4 working on them.
>
>
>
>
>
> On Thu, Nov 25, 2021 at 12:46 AM ron minnich  wrote:
> >
> > The word 'drop' has ominous connotations, but it's not a deletion. A
> > board is never really gone. It's git. I can still find the Alpha
> > boards in there if I go back far enough. It's just that active
> > development ends, as no one is working to keep them up to date.
> >
> > Would it be ok with you to drop the board, and bring it back when it
> > is working again?
> >
> > There is a cost to keeping boards too long when there is no one
> > maintaining them. They may still build, but they can stop working.
> > That's happened and in my view it's best not to let it happen. People
> > should be able to count on a board working if they build an image.
> >
> > Thanks
> >
> > ron
> >
> >
> > On Wed, Nov 24, 2021 at 12:16 PM Mike Banon  wrote:
> > >
> > > With all due respect, dropping support for the majority of AMD boards
> > > - with a quite significant community around them! - doesn't seem like
> > > a wise decision, if we still care about the coreboot marketshare on
> > > the worldwide-available consumer PCs. Small improvement in the common
> > > source, but a huge loss of boards? (almost 50!). For the sake of the
> > > bright future of the coreboot project, this must be prevented at all
> > > costs...
> > >
> > > Some time ago I did https://review.coreboot.org/c/coreboot/+/41431
> > > change where tried to get a resource allocator V4 working for these
> > > AGESA boards, and despite a tiny size (less than 20 lines) - it almost
> > > worked, judging by that fam15h A88XM-E booted fine (although there
> > > might have been some other problems undercover). I wonder if it could
> > > help and will be happy to test the new changes related to this.
> > >
> > >
> > > On Wed, Nov 24, 2021 at 8:52 PM Arthur Heymans 
> wrote:
> > > >
> > > > > We could announce this deprecation in the 4.16 release notes, then
> deprecate after 4.18 (8.5 months from now).  At that point, we'd create a
> branch and set up a verification builder so that any deprecated platforms
> could be continued in the 4.18 branch.
> > > >
> > > > That timeline of 8.5 months does sound fair. I just found this
> updated release schedule in the meeting minutes.
> > > > If we are going to release every 3 months then I guess that's a good
> way to go.
> > > >
> > > > I started a CL: https://review.coreboot.org/c/coreboot/+/59618 .
> I'll update it to reflect that schedule if it can be agreed upon.
> > > >
> > > > On Wed, Nov 24, 2021 at 6:07 PM Martin Roth 
> wrote:
> > > >>
> > > >> Hey Arthur,
> > > >>
> > > >> Nov 24, 2021, 05:50 by art...@aheymans.xyz:
> > > >>
> > > >> > Hi
> > > >> > I would like to suggest to deprecate some legacy codepaths inside
> the coreboot tree and therefore make some newer ones mandatory.
> > > >> > ... snip ...>  About the timeline of deprecations. Is deprecating
> non conforming platforms from the master branch after the 4.16 release in 6
> months a reasonable proposal?
> > > >> >
> > > >> I have no strong opinion about the platform deprecations, although
> I suspect that PC Engines might be unhappy if it's platforms were removed
> from the ToT codebase.
> > > >>
> > > >>  My preference would be to announce deprecations in the release
> notes.  We just missed the 4.15 release, but we're switching to a 3 month
> release cadence, so the next release will be in early February, 2.5 months
> from now.
> > > >>
> > > >> We could announce this deprecation in the 4.16 release notes, then
> deprecate after 4.18 (8.5 months from now).  At that point, we'd create a
> branch and set up a verification builder so that any deprecated platforms
> could be continued in the 4.18 branch.
> > > >>
> > > >> Would this schedule work?
> > > >>
> > > >> Martin
> > > >>
> > > > ___
> > > > coreboot mailing list -- coreboot@coreboot.org
> > > > To unsubscribe send an email to coreboot-le...@coreboot.org
> > >
> > >
> > >
> > > --
> > > Best regards, Mike Banon
> > > Open Source Community Manager of 3mdeb - https://3mdeb.com/
>
>
>
> --
> Best regards, Mike Banon
> Open Source Community Manager of 3mdeb - https://3mdeb.com/
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-24 Thread Arthur Heymans
> With all due respect, dropping support for the majority of AMD boards
> - with a quite significant community around them! - doesn't seem like
> a wise decision, if we still care about the coreboot marketshare on
> the worldwide-available consumer PCs. Small improvement in the common
> source, but a huge loss of boards? (almost 50!). For the sake of the
> bright future of the coreboot project, this must be prevented at all
> costs...

If there is such a community around those boards, there must be someone
willing to invest either time
or money to implement the proposed improvements. Having boards or platforms
inside the tree
is much cheaper than paying AMI for a crappy closed-source BIOS/UEFI. It's
still not a free endeavor, and
code requires maintenance from time to time so that development on the
master branch can remain
simple and sensible. Not maintaining code or enforcing some feature
requirements from time to time is probably far worse
for the project.

> Some time ago I did https://review.coreboot.org/c/coreboot/+/41431
> change where tried to get a resource allocator V4 working for these
> AGESA boards, and despite a tiny size (less than 20 lines) - it almost
> worked, judging by that fam15h A88XM-E booted fine (although there
> might have been some other problems undercover). I wonder if it could
> help and will be happy to test the new changes related to this.

The proposed change requires the code to know what memory regions are used
and must be reserved.
Angel already reviewed the patch it seems, so that's probably a good start.

On Wed, Nov 24, 2021 at 9:16 PM Mike Banon  wrote:

> With all due respect, dropping support for the majority of AMD boards
> - with a quite significant community around them! - doesn't seem like
> a wise decision, if we still care about the coreboot marketshare on
> the worldwide-available consumer PCs. Small improvement in the common
> source, but a huge loss of boards? (almost 50!). For the sake of the
> bright future of the coreboot project, this must be prevented at all
> costs...
>
> Some time ago I did https://review.coreboot.org/c/coreboot/+/41431
> change where tried to get a resource allocator V4 working for these
> AGESA boards, and despite a tiny size (less than 20 lines) - it almost
> worked, judging by that fam15h A88XM-E booted fine (although there
> might have been some other problems undercover). I wonder if it could
> help and will be happy to test the new changes related to this.
>
>
> On Wed, Nov 24, 2021 at 8:52 PM Arthur Heymans 
> wrote:
> >
> > > We could announce this deprecation in the 4.16 release notes, then
> deprecate after 4.18 (8.5 months from now).  At that point, we'd create a
> branch and set up a verification builder so that any deprecated platforms
> could be continued in the 4.18 branch.
> >
> > That timeline of 8.5 months does sound fair. I just found this updated
> release schedule in the meeting minutes.
> > If we are going to release every 3 months then I guess that's a good way
> to go.
> >
> > I started a CL: https://review.coreboot.org/c/coreboot/+/59618 . I'll
> update it to reflect that schedule if it can be agreed upon.
> >
> > On Wed, Nov 24, 2021 at 6:07 PM Martin Roth 
> wrote:
> >>
> >> Hey Arthur,
> >>
> >> Nov 24, 2021, 05:50 by art...@aheymans.xyz:
> >>
> >> > Hi
> >> > I would like to suggest to deprecate some legacy codepaths inside the
> coreboot tree and therefore make some newer ones mandatory.
> >> > ... snip ...>  About the timeline of deprecations. Is deprecating non
> conforming platforms from the master branch after the 4.16 release in 6
> months a reasonable proposal?
> >> >
> >> I have no strong opinion about the platform deprecations, although I
> suspect that PC Engines might be unhappy if it's platforms were removed
> from the ToT codebase.
> >>
> >>  My preference would be to announce deprecations in the release notes.
> We just missed the 4.15 release, but we're switching to a 3 month release
> cadence, so the next release will be in early February, 2.5 months from now.
> >>
> >> We could announce this deprecation in the 4.16 release notes, then
> deprecate after 4.18 (8.5 months from now).  At that point, we'd create a
> branch and set up a verification builder so that any deprecated platforms
> could be continued in the 4.18 branch.
> >>
> >> Would this schedule work?
> >>
> >> Martin
> >>
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
>
>
>
> --
> Best regards, Mike Banon
> Open Source Community Manager of 3mdeb - https://3mdeb.com/
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-24 Thread Arthur Heymans
> We could announce this deprecation in the 4.16 release notes, then
deprecate after 4.18 (8.5 months from now).  At that point, we'd create a
branch and set up a verification builder so that any deprecated platforms
could be continued in the 4.18 branch.

That timeline of 8.5 months does sound fair. I just found this updated
release schedule in the meeting minutes.
If we are going to release every 3 months then I guess that's a good way to
go.

I started a CL: https://review.coreboot.org/c/coreboot/+/59618 . I'll
update it to reflect that schedule if it can be agreed upon.

On Wed, Nov 24, 2021 at 6:07 PM Martin Roth  wrote:

> Hey Arthur,
>
> Nov 24, 2021, 05:50 by art...@aheymans.xyz:
>
> > Hi
> > I would like to suggest to deprecate some legacy codepaths inside the
> coreboot tree and therefore make some newer ones mandatory.
> > ... snip ...>  About the timeline of deprecations. Is deprecating non
> conforming platforms from the master branch after the 4.16 release in 6
> months a reasonable proposal?
> >
> I have no strong opinion about the platform deprecations, although I
> suspect that PC Engines might be unhappy if it's platforms were removed
> from the ToT codebase.
>
>  My preference would be to announce deprecations in the release notes.  We
> just missed the 4.15 release, but we're switching to a 3 month release
> cadence, so the next release will be in early February, 2.5 months from now.
>
> We could announce this deprecation in the 4.16 release notes, then
> deprecate after 4.18 (8.5 months from now).  At that point, we'd create a
> branch and set up a verification builder so that any deprecated platforms
> could be continued in the 4.18 branch.
>
> Would this schedule work?
>
> Martin
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-24 Thread Arthur Heymans
Hi

I would like to suggest deprecating some legacy codepaths inside the
coreboot tree and therefore making some newer ones mandatory.

The first one I'd like to deprecate is LEGACY_SMP_INIT. This also includes
the codepath for SMM_ASEG. This code is used to start APs and do some
feature programming on each AP, but also to set up SMM. It has largely been
superseded by PARALLEL_MP, which should be able to cover all use cases of
LEGACY_SMP_INIT with few code changes. The reason for deprecation is that
having 2 codepaths to do virtually the same thing increases the maintenance
burden on the community a lot, while also being rather confusing.

A few things are lacking in PARALLEL_MP init:
- Support for !CONFIG_SMP on single core systems. It's likely easy to
extend PARALLEL_MP or write some code that just does CPU detection on the
BSP CPU.
- Support for SMM in the legacy ASEG (0xa0000 - 0xbffff) region. A POC showed
that it's not that hard to do with PARALLEL_MP:
https://review.coreboot.org/c/coreboot/+/58700

No platforms in the tree have any hardware limitations that would block
migrating to PARALLEL_MP / a simple !CONFIG_SMP codebase.

The second codepath that I'd like to propose for deprecation is
RESOURCE_ALLOCATOR_V3.
V4 was introduced more than a year ago, and with minor changes most
platforms were able to work just fine with it. A major difference is that
V3 uses just one contiguous region below 4G to allocate all PCI memory
BARs. V4 uses all available space below 4G and, if asked to, above 4G as
well. This makes it important that SoC code properly reports all fixed
resources.
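To make the "report fixed resources" point concrete, here is a rough sketch
(indices and addresses made up) of what an SoC's read_resources() callback has
to do:

static void soc_read_resources(struct device *dev)
{
	int idx = 0;

	/* RAM the allocator may hand out is reported as regular RAM... */
	ram_resource(dev, idx++, 0, 640);	/* base and size in KiB */

	/* ...while anything the hardware already decodes has to be carved
	 * out, otherwise V4 will happily place BARs on top of it. */
	reserved_ram_resource(dev, idx++, 0xa0000 / KiB, 0x60000 / KiB);
	mmio_resource(dev, idx++, 0xfed00000 / KiB, 0x100000 / KiB);
}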

Currently only AGESA platforms have issues with it. On gerrit, both attempts
to fix the AMD AGESA codebases to use V4 and compatibility modes inside the V4
allocator have been proposed, but both efforts seem stalled. See the (not
yet merged) documentation https://review.coreboot.org/c/coreboot/+/43603 for
details. It looks like properly reporting all fixed resources is the
culprit.

About the timeline of deprecations: is deprecating non-conforming platforms
from the master branch after the 4.16 release, in 6 months, a reasonable
proposal?

The affected boards currently are:
AMD_INAGUA
AMD_OLIVEHILL
AMD_PARMER
AMD_SOUTHSTATION
AOPEN_DXPLPLUSU
AMD_PERSIMMON
AMD_THATCHER
AMD_UNIONSTATION
ASROCK_E350M1
ASUS_A88XM_E
ASROCK_IMB_A180
ASUS_AM1I_A
ASUS_F2A85_M
ASUS_F2A85_M_PRO
ASUS_F2A85_M_LE
ASUS_P2B_RAMDEBUG
ASUS_P2B_LS
ASUS_P2B_F
ASUS_P2B_D
ASUS_P2B_DS
ASUS_P3B_F
ASUS_P2B
ODE_E20XX
BIOSTAR_AM1ML
BIOSTAR_A68N5200
ELMEX_PCM205400
ELMEX_PCM205401
GIZMOSPHERE_GIZMO2
GIZMOSPHERE_GIZMO
HP_ABM
HP_PAVILION_M6_1035DX
JETWAY_NF81_T56N_LF
LENOVO_G505S
LIPPERT_FRONTRUNNER_AF
LIPPERT_TOUCAN_AF
MSI_MS7721
PCENGINES_APU1_
PCENGINES_APU2_
PCENGINES_APU3_
PCENGINES_APU4_
PCENGINES_APU5_
PCENGINES_APU1
PCENGINES_APU2
PCENGINES_APU3
PCENGINES_APU4
PCENGINES_APU5

sidenote: Qemu platforms support both LEGACY_SMP_INIT and PARALLEL_MP init
so I did not list them here.

Let me know your thoughts.

Arthur
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: TigerLake RVP TCSS init failure

2021-10-25 Thread Arthur Heymans
Hi

That MTRR setup looks suboptimal for sure, but not fatally flawed.
What's located at 0x7700 till 0x8000? I suspect it's just DRAM, but it may be
allocated for different purposes like TSEG, GFX stolen memory, etc.
If you mark it as such during resource allocation, the MTRR solution will be
more optimised (see soc/intel/common/block/systemagent/systemagent.c).
Kind regards

Arthur

On Mon, Oct 25, 2021 at 5:05 PM Samek, Jan  wrote:

> Hello coreboot Community,
>
> After a long time, there's an update to this Tiger Lake issue:
>
> For now, the masks in mca_configure() are used as a workaround to ignore
> the MCEs:
>
> --- a/src/soc/intel/common/block/cpu/cpulib.c
> +++ b/src/soc/intel/common/block/cpu/cpulib.c
> @@ -346,7 +346,7 @@ void mca_configure(void)
> for (i = 0; i < num_banks; i++) {
> /* Initialize machine checks */
> wrmsr(IA32_MC_CTL(i),
> -   (msr_t) {.lo = 0xffffffff, .hi = 0xffffffff});
> +   (msr_t) {.lo = 0, .hi = 0});  /* FIXME: MCEs
> temp. disabled */
> }
>
> It was found by Werner that these MCEs are set by FSP-M, with the possible
> causes being wrong FSP parameters, SPD data, etc. There was also a need to
> disable MCE checking in the FSP-S UPD to get through the silicon init.
>
> Nevertheless, after discussion with Intel and Werner, what seems to be
> the root cause might be the MTRR setup. From what I see in the logs, the
> values indeed look somewhat strange to me. Sorry, I have no clue yet how to
> set up MTRRs correctly or what they should look like.
>
> ...
> BS: BS_WRITE_TABLES run times (exec / console): 7 / 307 ms
> MTRR: Physical address space:
> 0x - 0x000a size 0x000a type 6
> 0x000a - 0x000c size 0x0002 type 0
> 0x000c - 0x7700 size 0x76f4 type 6
> 0x7700 - 0x8000 size 0x0900 type 0
> 0x8000 - 0x9000 size 0x1000 type 1
> 0x9000 - 0x0001 size 0x7000 type 0
> 0x0001 - 0x00048040 size 0x38040 type 6
> MTRR: Fixed MSR 0x250 0x0606060606060606
> MTRR: Fixed MSR 0x258 0x0606060606060606
> MTRR: Fixed MSR 0x259 0x
> MTRR: Fixed MSR 0x268 0x0606060606060606
> MTRR: Fixed MSR 0x269 0x0606060606060606
> MTRR: Fixed MSR 0x26a 0x0606060606060606
> MTRR: Fixed MSR 0x26b 0x0606060606060606
> MTRR: Fixed MSR 0x26c 0x0606060606060606
> MTRR: Fixed MSR 0x26d 0x0606060606060606
> MTRR: Fixed MSR 0x26e 0x0606060606060606
> MTRR: Fixed MSR 0x26f 0x0606060606060606
> call enable_fixed_mtrr()
> CPU physical address size: 39 bits
> MTRR: default type WB/UC MTRR counts: 6/7.
> MTRR: WB selected as default type.
> MTRR: 0 base 0x7700 mask 0x007fff00 type 0
> MTRR: 1 base 0x7800 mask 0x007ff800 type 0
> MTRR: 2 base 0x8000 mask 0x007ff000 type 1
> MTRR: 3 base 0x9000 mask 0x007ff000 type 0
> MTRR: 4 base 0xa000 mask 0x007fe000 type 0
> MTRR: 5 base 0xc000 mask 0x007fc000 type 0
> MTRR: Fixed MSR 0x250 0x0606060606060606
> MTRR: Fixed MSR 0x258 0x0606060606060606
> MTRR: Fixed MSR 0x259 0x
> MTRR: Fixed MSR 0x268 0x0606060606060606
> MTRR: Fixed MSR 0x269 0x0606060606060606
> MTRR: Fixed MSR 0x26a 0x0606060606060606
> MTRR: Fixed MSR 0x26b 0x0606060606060606
> MTRR: Fixed MSR 0x26c 0x0606060606060606
> MTRR: Fixed MSR 0x26d 0x0606060606060606
> MTRR: Fixed MSR 0x26e 0x0606060606060606
> MTRR: Fixed MSR 0x26f 0x0606060606060606
> MTRR: Fixed MSR 0x250 0x0606060606060606
> MTRR: Fixed MSR 0x250 0x0606060606060606
> MTRR: Fixed MSR 0x258 0x0606060606060606
> MTRR: Fixed MSR 0x259 0x
> MTRR: Fixed MSR 0x268 0x0606060606060606
> MTRR: Fixed MSR 0x269 0x0606060606060606
> MTRR: Fixed MSR 0x26a 0x0606060606060606
> MTRR: Fixed MSR 0x26b 0x0606060606060606
> MTRR: Fixed MSR 0x26c 0x0606060606060606
> MTRR: Fixed MSR 0x26d 0x0606060606060606
> MTRR: Fixed MSR 0x26e 0x0606060606060606
> MTRR: Fixed MSR 0x26f 0x0606060606060606
> MTRR: Fixed MSR 0x258 0x0606060606060606
> call enable_fixed_mtrr()
> MTRR: Fixed MSR 0x259 0x
> MTRR: Fixed MSR 0x268 0x0606060606060606
> MTRR: Fixed MSR 0x269 0x0606060606060606
> MTRR: Fixed MSR 0x26a 0x0606060606060606
> MTRR: Fixed MSR 0x26b 0x0606060606060606
> MTRR: Fixed MSR 0x26c 0x0606060606060606
> MTRR: Fixed MSR 0x26d 0x0606060606060606
> MTRR: Fixed MSR 0x26e 0x0606060606060606
> MTRR: Fixed MSR 0x26f 0x0606060606060606
> CPU physical address size: 39 bits
> call enable_fixed_mtrr()
> MTRR: Fixed MSR 0x250 0x060

[coreboot] Re: There is a python in our toolchain?!?

2021-09-30 Thread Arthur Heymans
>
> As a rule of thumb, any project involving a substantial amount of Python
> always ends up needing a Docker container to build. So I'm in the "no" camp
> for making Python a dependency, however I think it's fine to keep things
> as-is where it can be used for helper scripts and utilities for specific
> purposes such that they aren't critical to building the tree.
>

I'm on the same side here. Building the documentation with python sphinx is
a pain and I ended up needing docker.
The same can be said about edk2/tianocore which also uses a lot of python
in critical parts of its build system.

At a minimum, I think we should consider introducing Python on an optional
> basis (i.e., the C Kconfig implementation only gets used if a Python
> interpreter is unavailable), but making it required would be even better.
>

No. I don't like 'optional' things which do more or less the same as
existing code. It's a great way to make the maintenance cost explode.
This is true in general (not just python tooling): adding alternative
codepaths to do more or less the same as existing code creates a
combinatorial explosion.
We have (had) 2 different bootblock codepaths, multiple different
cache-as-ram teardown mechanisms, 2 different resource allocators, 2 MP init
codepaths, 3 SMM init codepaths, ...
Tooling is not the place you want to add new optional codepaths (maybe
toolchains can be an exception as different compilers catch different
errors which improves code quality).
'Optional' things under the pretext of not breaking existing things look like
a nice idea at first sight, but it's a very bad strategy in the long run.
I'd much rather stick to 1 thing, even
if it breaks the master branch, quickly fix the things that broke, and move on.
I don't want to hijack the thread, which is about python, but 'optional'
python is worse than either no
python or mandatory python.

On Thu, Sep 30, 2021 at 7:01 AM David Hendricks 
wrote:

> As a rule of thumb, any project involving a substantial amount of Python
> always ends up needing a Docker container to build. So I'm in the "no" camp
> for making Python a dependency, however I think it's fine to keep things
> as-is where it can be used for helper scripts and utilities for specific
> purposes such that they aren't critical to building the tree.
>
> On Wed, Sep 29, 2021 at 2:58 AM Patrick Georgi via coreboot <
> coreboot@coreboot.org> wrote:
>
>> That said, python makes its way back into the tree every now and then
>> (typically as small snippets to compute and add hashes to binaries as
>> needed by ARM SoCs). Uncanny, but typically not a big deal.
>>
> ...
>> To avoid these scenarios, could we possibly nail down the policy on
>> python in coreboot?
>>
>
> The policy should be simple: The CI system (Jenkins) must be able to build
> every target in its default configuration.
>
> If we introduce Python as a dependency, then all Python in the tree must
> be compatible with whatever version Jenkins uses. And if we're going to
> impose the burden of fixing Python on everyone, then all developers must
> have the ability to install a compatible version in their OS. Given the
> experiences many of us in this thread have had and how widely distros vary
> in Python support, I don't see this as tenable.
>
> Another thing to keep in mind is that we have these sorts of helper
> scripts from multiple vendors/parties over several years, and we'll likely
> see more in the future. Pushing them all to use whatever version(s) of
> Python we decide to build with does not seem realistic.
>
> All that said, I'm fine with Python being used for helper scripts and such
> as we've done in the past. It gives developers/vendors/etc. freedom to use
> whatever works for their purposes without imposing a huge burden on
> everyone else.
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: coreboot not starting on denverton_ns

2021-06-10 Thread Arthur Heymans
Hi

Try with https://review.coreboot.org/c/coreboot/+/55389 applied.

Kind regards
Arthur

On Thu, Jun 10, 2021 at 3:16 PM Sumo  wrote:

> Hi,
>
> Coreboot is not starting on denverton due to this commit:
> * 0f068a600e drivers/intel/fsp2_0: Fix the FSP-T position
> (found this by doing a manual git bisect using an old commit as reference,
> everything was working good around november 2020)
> Basically it crashes, nothing is shown in the console output.
> Any advice?
>
> Kind regards,
> Sumo
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: asus/p2b - not enough space in cbfs for default build since i440bx bootblock console enable

2021-05-20 Thread Arthur Heymans
Hi

Thanks for sharing your findings. The flash is 256K big, which is quite
small these days.
When building coreboot with default settings but without a payload, I find
that there is 69K of empty space left for payloads.

Some future developments I have been working on might give a bit more
breathing space:
- I want to make romstage optional and include its sources in the
bootblock: that should shave off roughly 10K of romstage.
- I have compressing postcar working (maybe you can also disable the
postcar console to reduce size). That's another 2-3K in size gains,
likely at the cost of a tiny bit of boot performance on this platform.
- I also have some WIP code to merge postcar into ramstage, which would save
15K.

Maybe on coreboot release 4.15 you will have a better time building a fully
working image with the default configuration.

Kind regards

Arthur Heymans

On Fri, May 21, 2021 at 7:08 AM Paul Menzel  wrote:

> Dear Branden,
>
>
> Am 21.05.21 um 05:36 schrieb Branden Waldner:
> > When testing the latest coreboot code before the 4.14 release, I found
> > I couldn't build a working image with the default (or what I usually
> > use) config for the asus/p2b. I figured out that it failed to build
> > with an error of not enough space in cbfs after the merge to enable
> > bootblock console for intel 440bx.
> > Following this, I just disabled microcode firmware to free up space
> > and it worked fine, even without the microcode update. Specifically
> > selecting the microcode for the cpu I'm using would probably be better
> > though.
> > I'm just commenting on my findings, not really expecting anything. I
> > had intended on trying to obtain some larger flash chips yet, though I
> > never got around to it. It would still leave a broken default build
> > config though with the standard rom size.
>
> Thank you for sharing your findings. All default configurations are
> tested – without a payload though I believe –, so please attach your
> configuration, `defconfig` created by `make savedefconfig`, and your
> payload and size.
>
>
> Kind regards,
>
> Paul
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Dropping the "cbfs master header" file

2021-04-30 Thread Arthur Heymans
Hi Werner

Sounds good.

I got rid of the SeaBIOS dependency on the CBFS master header:
https://mail.coreboot.org/hyperkitty/list/seab...@seabios.org/thread/PSLZAMCG7C5IU6TLEGXWZCESXHPYUS76/
Maybe that can be of use for you?

Arthur

On Fri, Apr 30, 2021 at 12:38 PM Zeh, Werner  wrote:

> Hi Patrick, Arthur.
>
> We do have a use case in our self-crafted linux where the CBFS master
> header is used.
> I need to dig into the code and find out what needs to be done there in
> order to get rid of this dependency while still not break it for older
> builds.
>
> Werner
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Dropping the "cbfs master header" file

2021-04-27 Thread Arthur Heymans
Hi

Currently the "COREBOOT" FMAP cbfs region has a file named "cbfs master
header" at the bottom of this FMAP region, and the x86 bootblock has a
pointer to it at 0xFFFFFFFC. Other architectures have a "header pointer"
file at the top of that FMAP region pointing to it.

Currently this file is only used as an anchor point to use cbfs with
walkcbfs_asm on X86 to access cbfs in assembly (before any C code). There
are 2 uses for this at the moment:
1) updating microcode on Intel systems that don't feature FIT before
setting up CAR
2) finding FSP-T (if FSP_CAR is used) before jumping to it
Both the cbfstool and the C coreboot code don't rely on it anymore, so it
is a legacy feature. Other cbfs FMAP regions like FW_MAIN_A/B in a VBOOT
setup don't feature it.

Accessing cbfs with walkcbfs_asm breaks hardware-based root of trust
security mechanisms like Intel Bootguard/TXT/CBnT, because no verification
or measurement whatsoever happens on either the "cbfs master header" or the
"fsp-t" file. So for instance even if TXT/Bootguard measured or verified
FSP-T as an IBB so that it is trusted, an attacker could insert a new cbfs
file with the same name, "fsp-t", at a lower address and coreboot would run
it anyway. So a static pointer to fsp-t is required. Sidenote/rant: FSP-T
continuously causes such integration problems... Blobs that set up the
execution environment are just a very bad idea.

So I propose to drop the legacy "cbfs master header" file and adapt the 2
current use cases in the following way:
- Reuse the Intel FIT table and implement FIT microcode updates in
assembly/software. (I had this working at some point, before I decided to
use walkcbfs_asm.)
- Either fix the location of FSP-T via, for instance, a Kconfig option, or
add a pointer to "fsp-t" at a fixed location in the bootblock and have the
tooling update the pointer during the build process. I think the Kconfig
option is the least amount of work, and cbfstool is already overloaded with
options and flags, so my preference goes to the former.

Let me know what you think.

Kind regards

Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Intel CBnT tooling and dealing with NDA

2021-02-09 Thread Arthur Heymans

Hi

Thanks for your input!

Peter Stuge  writes:

> Arthur Heymans wrote:
>> To make Intel CBnT (Converged Bootguard and TXT) useful in coreboot
some
>> tooling is required to generate both a Key Manifest (A signed
binary, that
>> is checked against a key fused into the ME, holding keys that OEM
can use
>> to sign the BPM) and a Boot Policy Manifest (signed binary, has a
digest
>> of IBBs, Initial Boot Blocks).
>
> This seems like something that could be put together even by a shell
> script calling openssl the right way.
>
>

Shell scripts are a very bad way to construct binary data structures.

>> 9elements has written some open source tooling (BSD-3 clause) to
generate
>> both KM and BPM. The code for this tool is not yet public
>
> You can't pretend that something is open source until you've
published it.
>

Being open source does not require it to be public. FWIW, we couldn't
publish our tool as a binary only if it were not licensed as it currently
is (BSD-3 clause). I'm not claiming the binary is open source of course,
but we are committed to publishing the code ASAP.

>> Intel is currently reviewing this to allow us to make it public, but
this
>> takes time.
> ..
>> My question to the community is if it would be ok to allow for the
build
>> system integration code for KM and BPM generation to be integrated
into
>> the master branch before the code to the tooling is made public.
>
> I don't think that is at all acceptable, even if it may be
technically OK.
>
> If you push this problem upstream you would surrender part of your
> responsibility for delivering an open source solution to your
customer,
> and push the burden of the blobbyness you have created (for your
customer)
> onto the community.
>
> I don't think that's a very good move.
>

I don't like this very much either, but it looks like it is the best move
available.
Silicon vendors being overprotective of their IP is a recurring issue.
Developing everything on a private branch and doing a big code dump once
the silicon vendor gives a green light is an even bigger disservice to
the community:
- Big code dumps are a huge strain on the community and have proved to
cause problems in the past.
- The rate at which new silicon and platforms are pushed is staggering.
If one has to wait for the silicon vendor's green light before publishing
code, the platform might already be too old to be very interesting.
We saw this issue in the past when AGESA scrubbing took so long that the
platforms were not interesting anymore. Intel also publishes FSP glue
code before the FSP itself is published. Relying on things not yet
published seems to be a necessary compromise to have upstream development
in the master branch for new silicon.

One of the things the coreboot project and community does very well at
the moment is to develop everything in one master branch, compared to
the UEFI world where everything lives in a branch and where there is
pretty much zero community. To keep this upstream development model in
line with new silicon development, it looks like some compromises are
necessary.

>
>> we propose to add a binary tool (it's written in go so it's
>> automatically build as a static binary) to the blobs repo under a
>> licence similar to the one used for Intel FSP and MCU (allows
>> redistribution). We hope to remove it ASAP from there and build it
>> from source from 3rdparty/intel-sec-tools.
>
> Especially given that you seem to have good support for pushing Intel
> forward on this, it doesn't seem urgent to accept what is essentially
> your problem into upstream.
>

If we always have to wait for the silicon vendor before we can upstream,
the platform would already be in production and upstreaming becomes less
interesting for both the vendor and the community. So yes, in a sense
it's urgent, and this is how a lot of upstream development is going on
anyway at the moment.

>
>> We'd like to develop as close as possible to the coreboot master
branch,
>> so we hope that this is an acceptable solution to the community.
>
> While it is honorable that you want to work as close to master as
possible,
> I do think you should have considered that a lot earlier in this
process,
> before getting tangled up in Intel's net of NDA nonsense.
>

So no upstream Intel support in coreboot anymore? I'll put it on the
agenda tomorrow ;-)

I wish we had enough leverage as a community to change the silicon
vendors way, but that is just not the case. I feel however that things
improve in the right direction. OSF on server hardware is becoming a
thing again, which was not the case at all a few years back.

>
> Sorry, x86 sucks.
>

On a philosophical note, it's always possible to define goals and
desires
against which e

[coreboot] Re: Intel CBnT tooling and dealing with NDA

2021-02-09 Thread Arthur Heymans

Patrick Georgi via coreboot  writes:

> Am Di., 9. Feb. 2021 um 11:34 Uhr schrieb Arthur Heymans
:
>
>  So TL;DR:
>  - Is (temporarily) adding a tool to the blobs repo ok?
>
> If it matches the requirements of the blobs repo wrt. license terms
and documentation, I don't see why not from a formal perspective.
> It's telling though (in the sense of a Freudian slip) that you put
the "temporarily" in parentheses already: interim solutions like these
tend to survive
> their best due date ;-)

I was told this typically takes Intel a few months.

>  - Is integrating an (optional) not yet open tool into the build
system ok?
>
> This one is IMHO the bigger issue: that tool will only run on
Linux/x86(-64?), and probably only with a select set of libc
implementations. While we
> have portability issues every now and then, they're always
accidentally introduced because our testing isn't good enough while
adding this to the
> build flow deliberately makes all other platforms second tier build
hosts.
>

Good point!
Tools built with Go include the Go runtime, so they should be fully
standalone. Adding a different ARCH + OS combination is very easy with Go,
so support for other build hosts can be added very quickly if desired.

Kind regards

-- 
==
Arthur Heymans

___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Intel CBnT tooling and dealing with NDA

2021-02-09 Thread Arthur Heymans
Hi

To make Intel CBnT (Converged Bootguard and TXT) useful in coreboot, some
tooling is required to generate both a Key Manifest (a signed binary that
is checked against a key fused into the ME, holding keys that the OEM can
use to sign the BPM) and a Boot Policy Manifest (a signed binary that
contains a digest of the IBBs, Initial Boot Blocks).
At the moment these are included as binaries by the build system.

Obviously this only works if the IBB hasn't changed. If it changed, you'd
need to regenerate the BPM. 9elements has written some open source tooling
(BSD-3 clause) to generate both KM and BPM. The code for this tool is not yet
public as it was written using NDA documentation. Intel is currently reviewing
this to allow us to make it public, but this takes time. It will be
part of the 3rdparty/intel-sec-tools
submodule.

My question to the community is whether it would be OK to allow the build
system integration code for KM and BPM generation to be integrated into
the master branch before the code of the tooling is made public.
CBnT is an optional feature on Intel hardware and is implemented as an
optional feature in
coreboot. The tool is standalone and coreboot can still be built fine
without it.

At the moment coreboot has code for xeon_sp in the master
branch without a public FSP too, with the promise that it will be
publicly released later
on by Intel. Compared to that the situation would be a little better:
we propose to add a binary tool (it's written in Go so it's
automatically built as a static binary) to the blobs repo under a
licence similar to the one used for Intel FSP and MCU (allows
redistribution). We hope to remove it ASAP from there and build it
from source from 3rdparty/intel-sec-tools.

We'd like to develop as close as possible to the coreboot master
branch, so we hope that this is an acceptable solution to the
community.

So TL;DR:
- Is (temporarily) adding a tool to the blobs repo ok?
- Is integrating an (optional) not yet open tool into the build system ok?

Let me know what you think.

Kind regards.

Arthur Heymans



9elements GmbH, Kortumstraße 19-21, 44787 Bochum, Germany
Email:  arthur.heym...@9elements.com
Phone:  +49 234 68 94 188
Mobile:  +32 478499445

Sitz der Gesellschaft: Bochum
Handelsregister: Amtsgericht Bochum, HRB 17519
Geschäftsführung: Sebastian Deutsch, Eray Basar

Datenschutzhinweise nach Art. 13 DSGVO
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Gotta look for the next target after Asus P2B family and i440BX

2019-11-27 Thread Arthur Heymans
Keith Hui  writes:

> Hi guys,
>
> You'll know me as one of the driving forces keeping our i440BX port alive. I 
> did see the two latest patches trying to modernize it and I swear I'll get to 
> it soon, because to be honest, that P2B-LS board has a
> special place in my mind and is not going away, although it is not seeing 
> much use anymore in practice, which brings me to my next step and question.
>

Thanks for that.
The C_ENVIRONMENT_BOOTBLOCK for i440BX has been merged.
The following patches are still pending:
https://review.coreboot.org/c/coreboot/+/36775/ (i440bx console in bootblock)
https://review.coreboot.org/c/coreboot/+/37164/ (i440bx run romstage cached)

> Lenovo ThinkPad X230 Tablet. This is now my daily machine. Due to some snafu 
> on my part I bought a second unit with only Wacom pen but no touch. The 
> non-tablet X230 is in the tree and there were
> some notes on tablet hardware. I need to buy a test clip to access the flash 
> chip. Now I wonder if this guy is "already" supported.
>

It looks like the tablet variant is not explicitly supported. It
probably boots with a x230 build, but I suspect there is not support for
the wacom digitizer.

> Asus M4A785TD-M EVO. AM3, family 10h, closest cousin was m4a785t-m, but I 
> don't know where it went now.
>

AMDFAM10 support was dropped after the 4.11 release. It lives on in the
4.11 branch.

> Asus P8Z77-M. Closest cousin is p8z77-m_pro. Mine is not pro, but I don't 
> know off my head what the difference is. And I get to deal with ME.
>

Best way to deal with ME is to not deal with it at all and just flash
the BIOS region ;). Some possible differences are enabled/disabled pci
devices and gpio's. You should be able to add the board as a variant.

Kind regards

-- 
==
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: System hangs during pci init

2019-11-26 Thread Arthur Heymans
Hi

The devicetree seems incomplete with regard to what PCI devices are
discovered. Later on it tries to get the chip_info from a node that
is not in the devicetree (00:01.0), which fails for this reason. I
suggest you complete the devicetree.

> This Email may contain confidential or privileged information for the 
> intended recipient (s). If you are not the intended recipient, please do not 
> use or disseminate the
> information, notify the sender and delete it from your system.
>

You are sending this to a public mailing list...

> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>

Kind regards

-- 
==
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: T440P: Unable to initialize memory speeds above 1600 MT/s

2019-11-17 Thread Arthur Heymans
ad-...@mailbox.org writes:

> I just bought a pair of HyperX DDR3L-2133 CL11 memory. They are supported and 
> run fine at 2133 MT/s with the original Lenovo firmware.
>

> However Coreboot only registers them at 1600 MT/s. 
>
> Changing the value of .max_ddr3_freq to 2133 in 
> src/mainboard/lenovo/t440p/romstage.c leads to a bricked notebook. A beeping 
> sound occurs, regardless of the installed memory type (1600 or 2133).
>

Looking at src/northbridge/intel/haswell/pei_data.h, the only valid
values for the mrc.bin are 800, 1067, 1333 and 1600. The sandybridge code,
with a similar PEI/mrc.bin, is better at guarding against invalid
values. Maybe that code should be adapted to haswell.

1600 MT/s is what the memory controller officially supports. Anything
faster is overclocking, which the mrc.bin likely does not support.
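
A minimal sketch of the kind of guard the haswell code could grow (the
struct and field are the ones from pei_data.h mentioned above; where
exactly to call this is an assumption):

  /* Clamp the requested frequency to what the mrc.bin accepts:
     800, 1067, 1333 or 1600. */
  static void clamp_max_ddr3_freq(struct pei_data *pei_data)
  {
          if (pei_data->max_ddr3_freq > 1600) {
                  printk(BIOS_WARNING,
                         "max_ddr3_freq %d unsupported by mrc.bin, using 1600\n",
                         pei_data->max_ddr3_freq);
                  pei_data->max_ddr3_freq = 1600;
          }
  }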

-- 
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] Using relocatable module program parameters to pass cbmem_top() on X86

2019-10-21 Thread Arthur Heymans
Aaron Durbin via coreboot  writes:

>
> That's the same thing as effectively providing a register ABI between the 
> stages passing information to it. I'm not sure EBDA is necessarily bad in 
> practice. It's typically reserved. What specific issues are
> you concerned with EBDA?
>

EBDA typically also holds the RSDP. SeaBIOS sometimes relocates the
RSDP when it needs more space. Rewriting the EBDA during S3 resume can
(and did in the past) overwrite the relocated RSDP and break S3.

I found that passing cbmem_top via the function argument mechanism (on the
stack on x86, in registers on arm, aarch64 and x86_64) of the program
loader could be a universal method of handing off the cbmem_top
pointer. With very little code at the start of those stages, only one
cbmem_top implementation is needed for all architectures.

See some working POC on arm, x86, x86_64 here:
https://review.coreboot.org/c/coreboot/+/36143/
https://review.coreboot.org/c/coreboot/+/36144
https://review.coreboot.org/c/coreboot/+/36145/
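
To illustrate the idea, a rough sketch (names are illustrative and not the
code in the reviews above): the loading stage hands cbmem_top over as the
program argument and the generic implementation just returns it.

  #include <stdint.h>

  void main(void);

  /* Saved from the argument the previous stage passed when loading us. */
  static uintptr_t _cbmem_top;

  /* Hypothetical entry wrapper: the program loader puts the argument on
     the stack on x86 and in a register on arm/aarch64/x86_64. */
  void stage_entry(uintptr_t cbmem_top_arg)
  {
          _cbmem_top = cbmem_top_arg;
          main();
  }

  /* One cbmem_top() for all architectures in postcar/ramstage. */
  void *cbmem_top(void)
  {
          return (void *)_cbmem_top;
  }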


>  Any thoughts?
>
>  Arthur Heymans
>  ___
>  coreboot mailing list -- coreboot@coreboot.org
>  To unsubscribe send an email to coreboot-le...@coreboot.org
>
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>

-- 
==
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] [RFC] Using relocatable module program parameters to pass cbmem_top() on X86

2019-10-19 Thread Arthur Heymans
Hi

Currently all stages that need cbmem need an implementation of a
cbmem_top function. On platforms with fully open source coreboot code it
is generally not a problem to link into all stages the code that reads the
hardware registers to figure out where the top of lower memory is. On FSP
platforms this proves to be painful, and using the value provided by the
FSP-M HOB is preferred. Later stages don't have access to this
variable since CAR is gone, so EBDA is used to pass it on to them.

The problem with this is that the EBDA also needs to be written on S3
resume, as one cannot assume it is still there. Writing things on S3
resume is always fragile, as it could overwrite other things set up by
something else.

One possible solution is to back up the area it's going to write to in
romstage in cbmem and restoring it on the ramstage, but writing to cbmem
on S3 resume is just moving the problem to a different place and likely
makes things even worse...

One other idea I could come up with is to make the cbmem_top pointer an
argument to the relocatable stages needing it (postcar and ramstage).
This would unify all cbmem_top implementations in those stages and should
be more robust on S3 resume, as the ramstage is then typically
fetched from RAM (cbmem or the TSEG stage cache).

Any thoughts?

Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Shubhendra - GSoC proposal Inquiry about "Port payloads to RISC-V"

2019-04-09 Thread Arthur Heymans
"taii...@gmx.com"  writes:

>
> Yeah porting the main payloads to RISC-V (also consider OpenPOWER) is a
> great plan,
> GRUB2 and SeaBIOS are the main ones - SeaBIOS is the default and most
> people use that as it "just works" as a traditional "BIOS" firmware "F12"
> loader where the user is presented with a screen and can pick various
> such as booting from hdds, dvd drives, Option ROM etc.

This is pretty ill-informed advice. SeaBIOS implements a legacy,
x86-specific interface. This is not something you'd ever want to implement
on non-x86 hardware.

> I personally suggest that while you are skilled before you dive in neck
> deep and start porting you should purchase an affordable coreboot device
> and install it such as the KCMA-D8 which is a great owner controlled
> open source firmware example of coreboot.

People can contribute to coreboot without even owning a physical device
on which coreboot runs, let alone *your* favorite board. The board you
suggest is not a good example at all. The code supporting that board has
some serious quality issues.


Kind regards

-- 
==
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [EC] ACPI DSDT not complete

2019-02-10 Thread Arthur Heymans
Johnny Sorocil  writes:

> First thank you all for good work, is pretty impressive to have (as it is 
> possible to be) open source version of firmware for modern x86 black box.
> Not sure if this is right mailing list, if not, please redirect me to the 
> correct one.
>
> I am using CoreBoot with SeaBIOS as a payload for ThinkPad T430s to boot 
> FreeBSD (but with Linux problem is same).
> DSDT provided by CoreBoot is missing battery charge start (BCTG, BCCS) and 
> stop (BCSG, BCSS) thresholds.
>

https://review.coreboot.org/c/coreboot/+/23178 (needs a rebase)
implements the things you are asking for. Just wondering: is there a
mainline driver for this in FreeBSD? Linux got one quite recently.

-- 
==
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: getting rid of CAR_GLOBAL and early_variables.h

2019-02-10 Thread Arthur Heymans
Arthur Heymans  writes:

> Now moving forward it would be a nice goal to set for the October
> release 2020 to have NO_CAR_GLOBAL_MIGRATION as a mandatory feature?
> This was already discussed in [2], without a decisive conclusion.

I meant October 2019, that is, 9 months from now, which seems like a
reasonable delay.

> [1]https://review.coreboot.org/q/topic:%2522no_CAR_GLOBAL%2522+
> [2]https://mail.coreboot.org/hyperkitty/list/coreboot@coreboot.org/message/VJ34MNXVZRO4VUZAK2YXUMTBRFWNF7NM/
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org

-- 
==
Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] getting rid of CAR_GLOBAL and early_variables.h

2019-02-10 Thread Arthur Heymans
Hi

Currently most x86 platforms have CONFIG_NO_CAR_GLOBAL_MIGRATION set by
implementing POSTCAR_STAGE. This means that global variables used during
CAR stages don't need to be migrated to cbmem when initializing cbmem, as
stages are cleanly separated programs (in other words, you don't tear
down CAR while running code in CAR). Previously we had a CAR_GLOBAL
macro that would put global variables in a 'special' place in CAR. With
NO_CAR_GLOBAL_MIGRATION this is not needed anymore.

I propose to remove all those CAR_GLOBAL references on platforms already
implementing POSTCAR_STAGE; see [1]. That way, future platforms that tend
to copy a lot of this code don't needlessly end up using this now
meaningless macro.
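
For illustration, this is roughly what goes away at the use sites (the
variable is made up; car_get_var()/car_set_var() are the accessors that
the macro drags along):

  /* Before: the variable lives in a special CAR section and is accessed
     through helpers so it can still be found after migration to cbmem. */
  static int boot_count CAR_GLOBAL;

  int get_boot_count(void)
  {
          return car_get_var(boot_count);
  }

  /* After, with NO_CAR_GLOBAL_MIGRATION guaranteed by POSTCAR_STAGE, a
     plain C global is enough: */
  static int boot_count;

  int get_boot_count(void)
  {
          return boot_count;
  }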

Moving forward, would it be a good goal to make NO_CAR_GLOBAL_MIGRATION a
mandatory feature for the October 2020 release? This was already discussed
in [2], without a decisive conclusion.

[1]https://review.coreboot.org/q/topic:%2522no_CAR_GLOBAL%2522+
[2]https://mail.coreboot.org/hyperkitty/list/coreboot@coreboot.org/message/VJ34MNXVZRO4VUZAK2YXUMTBRFWNF7NM/
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Fallback mechanisms on x86 with C_ENVIRONMENT_BOOTBLOCK

2019-01-22 Thread Arthur Heymans
Hi

As more and more x86 platforms are moving to C_ENVIRONMENT_BOOTBLOCK
and therefore no longer use a romcc-compiled bootblock, a certain
question arises: with the romcc bootblock there was a normal/fallback
mechanism.

It works the following way:
it uses RTC CMOS to select between the normal and the fallback
bootpaths. Depending on that bit, the bootblock selected either
normal/romstage or fallback/romstage, which in turn load the postcar
stage, ramstage and payload with the same prefix. There is also a
reboot counter which makes sure that boot actually gets to the point
where it can load the payload, and depending on
CONFIG_SKIP_MAX_REBOOT_CNT_CLEAR it resets that counter.
This mechanism is not very robust and is intended more for testing things
without needing a hardware programmer to reflash in case something goes
wrong. I use it, for instance, to test changes on laptops which take a
long time to disassemble.

Currently C_ENVIRONMENT_BOOTBLOCK lacks such a generic mechanism on x86
platforms. At first sight it looks like VBOOT, with verstage running
after the bootblock, might be able to achieve a similar boot
scheme. VBOOT seems to lack documentation and, while not that hard to get
working, it looks like it does not fall back when there is a problem on a
RW_A/B boot path (I called die() somewhere in the ramstage to
test). Also the tools around vboot (crossystem) are quite Chrome OS
specific, requiring Chrome OS-specific ACPI code exposing the VBNV
variables and also a Linux kernel exposing those ACPI methods via
sysfs.

My understanding of VBOOT might be incorrect or incomplete, so it would
be great if someone more knowledgeable could fill in here.

So at the moment it looks like VBOOT does not fit the bill to be able to
quickly test things while having a fallback mechanism.

Being able to run GCC-compiled code in the bootblock does have the
advantage of allowing much more flexibility than romcc-compiled code.
So it is possible to simply reimplement the same behavior with different
prefixes for bootpaths, but it would also be possible to do something
similar to what vboot does, namely use separate FMAP regions for the boot
paths. This would require a simple cbfs_locator. Upstream flashrom
master now supports using FMAP as a layout, so it would be rather easy to
use.

Using FMAP requires a little bit more work (generating a proper default
FMAP, populating the CBFS FMAP regions, implementing a cbfs_locator) but
does allow for nice features like locking the fallback CBFS region to
make sure the fallback can't be erased by accident; a rough sketch follows.
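
A rough sketch of the cbfs_locator part (the region names and the fallback
test are made up; I would expect fmap_locate_area_as_rdev() to do the heavy
lifting, but treat the exact API as an assumption):

  /* Select the FMAP CBFS region to boot from. */
  static const char *boot_region_name(void)
  {
          if (fallback_requested())  /* e.g. CMOS bit or reboot counter */
                  return "FW_FALLBACK";
          return "FW_NORMAL";
  }

  static int locate_boot_region(struct region_device *rdev)
  {
          return fmap_locate_area_as_rdev(boot_region_name(), rdev);
  }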

Any thought or suggestions?

Kind regards

Arthur Heymans
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Deprecating FSP1.1 on soc/intel/quark along with util/checklist

2019-01-09 Thread Arthur Heymans
Dear community

soc/intel/quark has 2 FSP versions hooked up, both FSP1.1 and FSP2.0.
Maintaining the code for both versions increases the maintenance load for
developers.

So my question is: can we remove the older FSP1.1 implementation?
Is someone using this board in a setup that cannot be obtained with
FSP2.0?

There is only one board that has both FSP versions hooked up and
selectable in menuconfig, so we wouldn't be dropping a board.

The Intel Galileo (the only board using soc/intel/quark) with FSP1.1 is
also the only board that uses util/checklist. This utility allows checking
whether certain linker symbols are present. Supposedly it lets board
porters gradually work through a checklist of things to do, while
in reality it's just an extra burden to maintain a list of symbols.

So can we remove util/checklist? [1] does that.

Kind regards

Arthur Heymans

[1] https://review.coreboot.org/c/coreboot/+/30691
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Testing modernizing patches on ASUS KGPE-D16. (was Re: Asus KGPE-D16 with latest coreboot errors with Unsupported Hardware on Qubes 4 install missing IOMMU?)

2018-12-17 Thread Arthur Heymans
Felix Held  writes:

> Hi!
>
>
>> Things should settle down some after Christmas, so I'll see what I can
>> do to pull the old D16 dev platform back out at that time and start
>> testing / merging patches.  Are there any others that I should also help
>> take a look at?
>
> Arthur pushed a new version of your patch #19820 and since it does what your
> original patch does, I'd say that it should be safe for merging to have that
> problem fixed in the upcoming release. Of course unless someone else tests the
> patch and finds a (very unlikely) regression before tomorrow. There are two 
> more
> patches from that series, but I haven't reviewed them, since they add features
> and don't just fix problems.
>
> I'm not sure if Arthur has some other patches that are relevant to the 
> KGPE-D16;
> IIRC he also looked into some improvements on the AMD side.
>

https://review.coreboot.org/c/coreboot/+/30063 implements relocatable
ramstage on AMDFAM10. It would be interesting to see whether it works on
amdfam15 hardware and whether S3 resume still works fine.

https://review.coreboot.org/c/coreboot/+/30064/4 implements postcar
stage on amdfam10 and drops the backup of the RAMBASE..RAMTOP region.
It would allow dropping a whole lot of legacy code used for S3 resume, so
it is also definitely worth testing.

Both patches worked on a Gigabyte m57-sli4 board with an amdfam10 CPU, so
I expect that booting to the OS still works fine. S3 resume might be a
little different. (The review for the board port is:
https://review.coreboot.org/c/coreboot/+/27618)

Something that would seem nice, and would make CAR setup more
unified with how we do things on Intel hardware, is to use variable
MTRRs for CAR instead of the fixed ones. A difficulty is that the
location needs to be carefully chosen, as it needs to be below the
TOP_MEM MTRR while also not hindering the raminit (I suppose)...

Another thing I'm working on is using C_ENVIRONMENT_BOOTBLOCK on all
x86 platforms (dropping the use of romcc). Bringing that feature to
amdfam10 won't be practical on my Gigabyte m57sli board due to it using
LPC flash (I expect tons of reflash/test cycles ;). If someone is willing
to donate a supported board (I am not really interested in doing a board
port due to the deplorable state of AMD code in general) featuring
amdfam10 hardware, preferably with DDR3 (implements S3 resume) and an SPI
boot medium, that would be fun.

It does not look like this has anything to do with the original thread
title, so sorry for hijacking it a little...

> Regards
>
> Felix

Kind regards

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] [flashrom] [LinuxBoot] FOSDEM 2019 deadline today

2018-12-17 Thread Arthur Heymans
Carl-Daniel Hailfinger  writes:

> Hi,
> * FOSDEM 2-3 February 2019
> * We have a coreboot/LinuxBoot/flashrom stand! Need people for the stand
> (2 days, 1 table).

I live in Antwerp, so reasonably close by, so I will certainly be there
and can lend a hand if needed. I can also bring some hardware to showcase,
but I don't own shiny new stuff. Maybe my setup using Felix's qspimux,
which I use to quickly test ROMs on hardware, could be interesting to
showcase?

Kind regards

-- 
======
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


[coreboot] AMD fam10 relocatable ramstage

2018-12-09 Thread Arthur Heymans
Hi

I have been playing around with the AMD fam10 codebase as of late.
It is quite different from Intel hardware in that AMD hardware starts up
the other AP cores, whereas on Intel hardware typically only the BSP runs
code.

As far as I can tell, the APs are started early on. They then start
running from the reset vector. The code then updates their microcode
and sets a 'reasonable' TOP_MEM MTRR such that when the APs are started
later on during ramstage, it won't result in any problems, given that the
stack will be in that 'reasonable' range (below CONFIG_RAMTOP). After
that, they are put to rest and the BSP does the rest of the work
during romstage (raminit for instance).

Now cbmem_top depends on the value the BSP sets in TOP_MEM, but the APs
have their own TOP_MEM (it is not a shared register), which needs to be
synced with the BSP's value. This syncing only happens during AP init.

To implement relocatable ramstage this syncing needs to happen earlier,
preferably during romstage...

At the moment with relocatable ramstage enabled it hangs when starting
the APs as it puts their stack somewhere in CBMEM well above the
'reasonable' default value of TOP_MEM which is CONFIG_RAMTOP.
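
For reference, the sync itself would be tiny; the hard part is getting it
to run on every AP during romstage. A hedged sketch (the MSR number is the
AMD TOP_MEM register from the BKDG; the helper and how it gets scheduled
on the APs are assumptions):

  #include <cpu/x86/msr.h>
  #include <stdint.h>

  #define TOP_MEM_MSR 0xc001001a

  /* To be executed on each AP with the value the BSP programmed. */
  static void sync_top_mem(uint32_t bsp_top_mem)
  {
          msr_t msr;

          msr.lo = bsp_top_mem;
          msr.hi = 0;
          wrmsr(TOP_MEM_MSR, msr);
  }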

Now my questions:

- Is this analysis correct?
- If so, how do we sync TOP_MEM in romstage (is there already an example of
  an implementation somewhere?), and is there an easy way to run code on
  all APs during romstage?
- Are workarounds possible/better? Like figuring out TOP_MEM, saving it in
  nvram, resetting, and programming that value on the APs.

Kind regards

Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


[coreboot] Testing asked, implementing POSTCAR stage

2018-12-05 Thread Arthur Heymans
Hi

I'm trying to implement a few features on x86 platforms to improve
coreboot. Currently I'm focusing on unifying the bootflow of x86
platforms a little better. An important aspect of that is to make sure
program boundaries within stages are respected, which is mostly an issue
when romstage destroys the stack+environment in which it is running.
Currently many platforms work around it by having code to fetch the
global variables which are either still in the CAR (before CAR tear
down) or somewhere relocated in cbmem after cbmem has been set up.
A better solution is to have CAR torn down in a separate stage,
which means that romstage can always access global variables where the
linker initially puts them. We call this stage the postcar stage.

I have an implementation ready for the following platforms:
* CPU_AMD_MODEL_10XXX or in mainboard terms:
amd/serengeti_cheetah_fam10
asus/kfsn4-dre
hp/dl165_g6_fam10
msi/ms9652_fam10
supermicro/h8dmr_fam10
supermicro/h8qme_fam10
tyan/s2912_fam10
amd/mahogany_fam10
gigabyte/ma78gm
iei/kino-780am2-fam10
jetway/pa78vm5
amd/tilapia_fam10
asus/m4a78-em
asus/m4a785-m
asus/m4a785t-m
asus/m5a88-v
gigabyte/ma785gm
gigabyte/ma785gmt
avalue/eax-785e
amd/bimini_fam10
advansus/a785e-i
asus/kcma-d8
supermicro/h8scm_fam10
asus/kgpe-d16

It would be great if the following patches could be tested (i.e. does it
still boot):
https://review.coreboot.org/c/coreboot/+/30063/2
https://review.coreboot.org/c/coreboot/+/30064/2
preferably on a board on which ACPI S3 resume is implemented (select 
HAVE_ACPI_RESUME).

* NORTHBRIDGE_VIA_VX900 on the VIA EPIA-M850 board
https://review.coreboot.org/c/coreboot/+/30057/1
https://review.coreboot.org/c/coreboot/+/30058/3

The only remaining targets that need to be addressed before the special
logic for CAR globals can be dropped are FSP1.0 platforms and geode_lx.
geode_lx still has to implement EARLY_CBMEM (requirement for 4.7 and 4.9 is
coming soon)...

Kind regards

Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Further coreboot releases, setting new standards

2018-11-30 Thread Arthur Heymans
FSP 2.0 compliant FSPs for these 
>> platforms, which I highly doubt is going to happen.
>> 
>> I would argue that making anything mandatory that alienates platforms that 
>> are still popular and actively being used is not the right answer
>> here.
>> 
>> > Once these are sorted out, Jay's chipsets are off the hook!
>> >
>> > We can easily make the support for Jay's boards in coreboot keep on
>> > building - we can't easily test that we're just carrying along 
>> > non-functional bits.
>> >
>> > That's where the "remove boards from master" movement is coming from:
>> > Truth in advertising, in that instead of claiming that we support 200 
>> > boards of which 180 were built with a tree from 3 years ago, we have a
>> rather good idea what does.
>> > Both by taking the board-status system into account, and by dropping code 
>> > paths that nobody-who's-testing uses anymore.
>> >
>> 
>> I fully support removing code that nobody uses anymore - if you can ensure 
>> that nobody is actually using it. I don't support removing code
>> for platforms that are still popular and actively being used just because 
>> you want to pretty up the codebase and make everything
>> conformant to one individual's proposal that isn't necessarily applicable to 
>> all members of the coreboot community. Coreboot is used for a
>> very diverse range of applications, from Chromebooks and laptops to IoT 
>> devices to banks of servers in a server farm to industrial control
>> systems, and even to military applications. That's why I chimed into this 
>> discussion... to give a voice to those other members of the
>> community with applications that use the platforms that would otherwise be 
>> eliminated from master per this proposal. And I know I'm not
>> alone here.
>> 
>> > Right now you're just reiterating that us spending work on keeping boards 
>> > in the tree is a nice service to Jay. Thanks, but we're well
>> aware.
>> > Can you also convince us that it's a good service to the users of
>> > Jay's boards who expect master (and any future release) to work, given 
>> > that there's code for boards of that specific name?
>> >
>> 
>> First, it's more than just me. I know for a fact that we aren't the only 
>> ones developing coreboot-based solutions based on both Broadwell-DE
>> and Bay Trail that would like to see support continued, not arbitrarily 
>> removed because it didn't conform to the proposal. So please don't
>> belittle it down to being just a "nice service to Jay". This is much bigger 
>> than just me, my company, or our clients.
>> 
>> >
>> > (Jay, sorry for singling you out like that)
>> >
>> > Regards,
>> > Patrick
>> > --
>> > Google Germany GmbH, ABC-Str. 19, 20354 Hamburg Registergericht und
>> > -nummer: Hamburg, HRB 86891, Sitz der Gesellschaft: Hamburg
>> > Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
>> 
>> 
>> --
>> coreboot mailing list: coreboot@coreboot.org 
>> https://mail.coreboot.org/mailman/listinfo/coreboot

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


[coreboot] GSOC submission

2018-11-29 Thread Arthur Heymans
Hi

It has been a few years since coreboot (or flashrom) applied for Google
Summer of Code. In 2019 the applications for organizations open in
January, and student applications on March 25.

I think it would be great if the coreboot project could apply in 2019,
as doing so has been very valuable for the project in the past.

I don't really know the full set of requirements and procedures, but I
think it could be worthwhile to start thinking about project ideas.

A few ideas were already suggested on IRC on freenode #coreboot:
- 64bit x86 ramstage (hard)
- documented microcode update methods and write a tool that generates a
webpage which microcodes are included in coreboot (easy)
- nvidea optimus support (medium)
- QEMU power9 support / initial openpower support (hard I guess?)
- Rework device resource allocation to support 64bit BAR (relatively
hard)

Any ideas or suggestions?

Kind regards

Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Further coreboot releases, setting new standards

2018-11-28 Thread Arthur Heymans
"Jay Talbott"  writes:

> I know I don't post much here, but I feel like I need to chime in on this 
> thread... Perhaps it's time that SysPro becomes a louder voice in the 
> community.
>
> Bay Trail and Broadwell DE are both still very popular platforms, yet neither 
> one of them meets the cut for any of the three criteria. So I caution against 
> removing the support for either of them too hastily.
>

I looked into that FSP 1.0 integration code a little. It would seem to
me that relocatable ramstage and C_ENVIRONMENT_BOOTBLOCK are possible.
NO_CAR_GLOBAL_MIGRATION however seems rather impossible, as the FSP has
total control over the environment and destroys the CAR environment
itself. Since I proposed the standards, I could offer some help to reach
them.

It looks like FSP 1.0 will be dragging coreboot down for some time.
Maybe we can agree not to integrate such monsters into coreboot in the
future? BTW Bay Trail has a non-FSP port that is likely in better shape.
BTW baytrail has a non FSP port that will likely be in better shape.

Kind regards

Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Further coreboot releases, setting new standards

2018-11-25 Thread Arthur Heymans
"Jay Talbott"  writes:

> I know I don't post much here, but I feel like I need to chime in on this 
> thread... Perhaps it's time that SysPro becomes a louder voice in the 
> community.
>
> Bay Trail and Broadwell DE are both still very popular platforms, yet
> neither one of them meets the cut for any of the three criteria. So I
> caution against removing the support for either of them too hastily.

Could you test with "select NO_RELOCATABLE_RAMSTAGE"?

>
> Yes, it can be a pain to keep maintaining old platforms, and certainly 
> support for platforms that are old enough that they are no longer being used 
> by anybody are good candidates for cleanup and
> removal.

It's not about old or new. For instance, the Intel i440bx (20 years old)
is still supported by coreboot and uses many recent features like
POSTCAR_STAGE and relocatable ramstage, so it would not be flagged for
cleanup and removal.


> But support for platforms that are still popular and still actively being 
> used by people shouldn't be stripped out of the coreboot code base.
>

If they are still popular and actively used, that would mean someone has
an interest in getting them to meet the new coreboot standards, no?
Pushing standards is not really about active use or not, but about
improving the code base.

> My $0.02.
>
> - Jay
>
> Jay Talbott
> SysPro Consulting, LLC
> 3057 E. Muirfield St.
> Gilbert, AZ 85297
> (480) 704-8045
> (480) 445-9895 (FAX)
> jaytalb...@sysproconsulting.com
> http://www.sysproconsulting.com
>
>  ---- Original Message 
>  Subject: Re: [coreboot] Further coreboot releases, setting new standards
>  From: Arthur Heymans 
>  Date: Fri, November 23, 2018 8:32 am
>  To: Patrick Georgi via coreboot 
>  Cc: Patrick Georgi 
>
>  Patrick Georgi via coreboot  writes:
>
>  > Am Fr., 23. Nov. 2018 um 14:43 Uhr schrieb Arthur Heymans 
> :
>  >
>  > I'd argue for requiring the following:
>  >
>  > In which time frame? The next release, ie May 2019? In two releases,
>  > November 2019?
>  >
>  That is indeed worthy item of discussion.
>
>  NO_RELOCATABLE_RAMSTAGE on x86 is only selected by:
>  NORTHBRIDGE_AMD_AMDFAM10,
>  NORTHBRIDGE_AMD_LX,
>  NORTHBRIDGE_VIA_VX900,
>  SOC_INTEL_FSP_BAYTRAIL,
>  SOC_INTEL_FSP_BROADWELL_DE
>
>  POSTCAR_STAGE is selected by:
>  cpu/amd/agesa
>  cpu/amd/pi
>  mainboard/intel/galileo
>  northbridge/intel/i440bx
>  northbridge/intel/i945
>  northbridge/intel/e7505
>  northbridge/intel/gm45
>  northbridge/intel/haswell
>  northbridge/intel/nehalem
>  northbridge/intel/pineview
>  northbridge/intel/sandybridge
>  northbridge/intel/sandybridge
>  northbridge/intel/x4x
>  soc/amd/stoneyridge
>  soc/intel/apollolake
>  soc/intel/cannonlake
>  soc/intel/denverton_ns
>  soc/intel/skylake
>  soc/intel/icelake
>  so all other x86 targets don't implement it and therefore lack
>  NO_CAR_GLOBAL_MIGRATION.
>
>  C_ENVIRONMENT_BOOTBLOCK is even less used since it is a relatively new
>  feature (was introduced with INTEL_APOLLOLAKE and INTEL_SKYLAKE) so most
>  x86 targets don't implement it but there are already many patches for it 
> lying
>  around for review (like most targets in northbridge/intel/*). It is
>  however a very useful feature to have.
>
>  So it would seem reasonable to drop NO_RELOCATABLE_RAMSTAGE in may 2019
>  and mandate NO_CAR_GLOBAL_MIGRATION and C_ENVIRONMENT_BOOTBLOCK in
>  november 2019? Any thoughts on this?
>
>  Nico also suggested to set the timeframe 2 weeks before the release, to
>  avoid last minute WIP patches attempting to tackle the issue right
>  before the release.
>
>  -- 
>  ==
>  Arthur Heymans
>
>  -- 
>  coreboot mailing list: coreboot@coreboot.org
>  https://mail.coreboot.org/mailman/listinfo/coreboot

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Supported Mainboard in coreboot is missing because of using LATE_CBMEM_INIT

2018-11-25 Thread Arthur Heymans
j44...@goat.si writes:

> Hello. I got a MSI MS6178 mainboard for coreboot based on the official wiki 
> page
> https://www.coreboot.org/Board:msi/ms6178
>
> I have followed the wiki to build coreboot but the board is missing in make
> menuconfig here: https://www.coreboot.org/Build_HOWTO
>
> Then i find https://review.coreboot.org/c/coreboot/+/23300
>
> Thus i run 'git checkout tags/4.7'
>
> But when trying to build a image i got following error:
>
> Cloning into 'seabios'...
> fatal: https://review.coreboot.org/p/seabios.git/info/refs not valid: is this 
> a
> git repository?
> Makefile:18: recipe for target 'seabios' failed

You can try to build the payload separately.

>
> Could someone fix the MSI MS6178 mainboard so that it can be used with the
> recent coreboot?

That would require the platform (i810) to have early cbmem init.

> Thanks!

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Coreboots Board Status have privacy issues for contributors

2018-11-25 Thread Arthur Heymans
j44...@goat.si writes:

>
> I was thinking of contributing to the Board Status but i dont want to release
> any private data and wont contribute now. What is the usage of the world to 
> know
> what mac address the people are using?
>
Feel free to edit the kernel log.

> Please fix this to:
> 1) Remove kernel log and replace it with "uname -r" to just know the kernel
> version.

The kernel log does contain other useful information, so dropping it
would make the board status repo less useful.

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Further coreboot releases, setting new standards

2018-11-23 Thread Arthur Heymans
Patrick Georgi via coreboot  writes:

> Am Fr., 23. Nov. 2018 um 14:43 Uhr schrieb Arthur Heymans 
> :
>
>  I'd argue for requiring the following:
>
> In which time frame? The next release, ie May 2019? In two releases,
> November 2019?
>
That is indeed a worthy item of discussion.

NO_RELOCATABLE_RAMSTAGE on x86 is only selected by:
NORTHBRIDGE_AMD_AMDFAM10,
NORTHBRIDGE_AMD_LX,
NORTHBRIDGE_VIA_VX900,
SOC_INTEL_FSP_BAYTRAIL,
SOC_INTEL_FSP_BROADWELL_DE

POSTCAR_STAGE is selected by:
cpu/amd/agesa
cpu/amd/pi
mainboard/intel/galileo
northbridge/intel/i440bx
northbridge/intel/i945
northbridge/intel/e7505
northbridge/intel/gm45
northbridge/intel/haswell
northbridge/intel/nehalem
northbridge/intel/pineview
northbridge/intel/sandybridge
northbridge/intel/x4x
soc/amd/stoneyridge
soc/intel/apollolake
soc/intel/cannonlake
soc/intel/denverton_ns
soc/intel/skylake
soc/intel/icelake
so all other x86 targets don't implement it and therefore lack
NO_CAR_GLOBAL_MIGRATION.


C_ENVIRONMENT_BOOTBLOCK is even less used since it is a relatively new
feature (was introduced with INTEL_APOLLOLAKE and INTEL_SKYLAKE) so most
x86 targets don't implement it but there are already many patches for it lying
around for review (like most targets in northbridge/intel/*). It is
however a very useful feature to have.

So it would seem reasonable to drop NO_RELOCATABLE_RAMSTAGE in May 2019
and mandate NO_CAR_GLOBAL_MIGRATION and C_ENVIRONMENT_BOOTBLOCK in
November 2019? Any thoughts on this?

Nico also suggested to set the timeframe 2 weeks before the release, to
avoid last minute WIP patches attempting to tackle the issue right
before the release.

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


[coreboot] Further coreboot releases, setting new standards

2018-11-23 Thread Arthur Heymans
Dear coreboot community

While the next coreboot release is due in November 2018, I think it
is worthwhile to think about further releases and the standards we want to
set.

In the past coreboot adopted numerous general improvements, which were not
always ported to all coreboot targets. Keeping those platforms and their
respective codepaths then typically becomes a burden, often accompanied
by regressions. The reasonable decision to drop such targets was then
made. A few examples of this were dropping targets that had a romcc
compiled romstage (in favor of a GCC-compiled romstage running in Cache As
Ram) and dropping targets without early_cbmem.

Coreboot hasn't stood still, and it might be time to set some new
standards again to which platforms have to conform.
Since I mostly know x86 the following ideas will be quite x86 centric.
I'd argue for requiring the following:

- getting rid of NO_RELOCATABLE_RAMSTAGE on x86

This allows the ramstage to be relocated in a place out of the way of
the OS such that copying the memory is unnecessary during S3 resume.

- config NO_CAR_GLOBAL_MIGRATION on all x86 targets

This is now achieved using postcar stage. This would mean that all x86
targets have a common way to set up and get rid of the CAR environment
and car globals.

- config C_ENVIRONMENT_BOOTBLOCK on all x86 targets

This means that the bootblock is responsible for setting up CAR, which
means that the rest of the bootblock can be compiled with GCC.
This effectively makes ROMCC bootblocks obsolete.
Having a bootblock with access to a working stack greatly increases
bootflow flexibility. This shows, for instance, when using
VBOOT, which can then verify all stages starting from the romstage,
hence also allowing a fallback with regard to RAM initialization (vs.
having the romstage only in the RO_WP region). It would be an important
step in making vboot useful and usable, and maybe the default on all
targets.


Any suggestions, reflections, ideas, remarks?

Kind regards

Arthur


-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] FSP 1.0 Sandy-/Ivybridge removal

2018-11-12 Thread Arthur Heymans
Zaolin  writes:

> Hey coreboot folks,
>
> I want to remove the old Sandy-/Ivybridge FSP 1.0 implementation from
> the tree:
>
> https://review.coreboot.org/#/c/coreboot/+/29402/
>
> We already have an Open Source replacement for it ( under src/{nb,sb} )
> which can replace the legacy FSP integration.
>

To make this statement more accurate: there are 3 bootpaths for
sandy-/ivybridge.

1. Fully open source (including raminit)
2. raminit done by an mrc.bin developed by Google engineers, with the
mrc.bin in the blobs repo; this is for the most part interchangeable
with the fully open source raminit
3. FSP 1.0 bootpath

Bootpaths 1 and 2 have 43 boards in the coreboot tree (based on 'chip
northbridge/intel/sandybridge' in devicetree.cb).
The FSP bootpath has 2 boards that are likely not obtainable.

> Does anyone use the FSP stuff or has complains about the current plan?
>
>
> BR, Zaolin

Given the unpopularity, the lack of maintenance, the availability of a
much more popular bootpath, and the actual hindrance to moving the common
codebase forward (for instance, when implementing parallel MP init for
i945 through sandybridge, some code still has to be left in place to keep
that FSP bootpath happy), I fully endorse the removal of this code.


-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] [AMD/fam15h] coreboots update_microcode is NOT working, and I know why

2018-08-18 Thread Arthur Heymans
Mike Banon  writes:

> Before borrowing update_microcode.c for our family15tn, we need to
> somehow figure out if its even working for that "
> family_10h-family_15h ". If you go to the root directory with coreboot
> sources, and issue ' find . -type f -print0 | xargs -0 grep
> "cpu_microcode_bins" ' command, you'll see that cpu_microcode_bins
> variable is defined ONLY for various Intel CPUs, and even one VIA CPU,
> but never for the AMD CPUs !
>
The AMD microcode is included differently: it is added directly as a
CBFS file (or files). See cpu/amd/family_10h-family_15h/Makefile.inc.

> So I have no idea if this update_microcode.c for AMD has been ever
> tested, because there is no "cpu_microcode_bins" variable defined even
> for "10h/15h" family, and its essential for this update mechanism to
> work : if "cpu_microcode_bins" isnt defined, microcode_amd_fam15h.bin
> isnt included, and update_microcode.c is either never launched or
> gives an error
>
It is tested, and the update is done quite early on to make sure the CPU
operates properly before any other initialization is done.

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] wiki backup

2018-06-12 Thread Arthur Heymans
Hi

On Tue, 2018-06-12 at 19:15 +0100, Leah Rowe wrote:
> Hi,
> 
> Now that the wiki is being retired, is there a backup of the wiki
> (database, files etc) so that someone else can host it elsewhere? For
> archival purposes.
> 
The wiki is still reachable at coreboot.org/wiki. It is however read-
only.

Kind regards

-- 
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] Looking for a volunteer to add Fam15h spectre MSR to coreboot

2018-04-11 Thread Arthur Heymans
Mike Banon  writes:

> Line 869 - "const int amd_erratum_319[] =" --- is this code really
> against the Spectre, or its more like against the erratas in general?
> Also, What if someone would like to use either a Linux distro which
> hasnt been upgraded to the latest kernels, or maybe some alternative
> OS like FreeDOS or Kolibri? I think Taiidan has a good point: the
> availability of protection from this vulnerability should not depend
> on your OS and the version of your Linux kernel.
>

I disagree. The OS and the system's proper operation should depend as
little as possible on the firmware, and coreboot generally follows the
philosophy of doing as little as possible. Note that a lot of other
errata already get fixed in the kernel as well. Depending on firmware
for safe operation of an outdated or legacy OS seems silly to me...

OTOH there already is some overlap between coreboot and the OS with
things like updating microcode, which is not always needed...

> Are there any existing MSR writes inside the coreboot code, so that
> they could be copied and modified into the MSR of Taiidan's interest?
> (MSR C001_1029[1]=1) Maybe that MSR write would even be a C code
> 1-liner?
>
Yes, that is quite easy to do, but there is other functionality in that
MSR that is needed, for instance, when setting up CAR, so care needs to be
taken about where this happens. I haven't looked it up, but it could also
be a per-AP MSR, in which case it needs to be programmed on each AP...
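
For reference, the write itself is indeed almost a one-liner (the MSR
number comes from the quoted AMD mitigation text; the macro names and
where to hook this in are assumptions, and per the caveat above it may
need to run on every AP):

  #include <cpu/x86/msr.h>

  #define MSR_AMD_DE_CFG          0xc0011029  /* name is an assumption */
  #define DE_CFG_LFENCE_SERIALIZE (1 << 1)    /* MSR C001_1029[1] */

  static void enable_lfence_dispatch_serializing(void)
  {
          msr_t msr = rdmsr(MSR_AMD_DE_CFG);

          msr.lo |= DE_CFG_LFENCE_SERIALIZE;
          wrmsr(MSR_AMD_DE_CFG, msr);
  }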

Kind regards

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Looking for a volunteer to add Fam15h spectre MSR to coreboot

2018-04-09 Thread Arthur Heymans
Hi

Linux already does that for you: (v4.16) arch/x86/kernel/cpu/amd.c line 869.


Kind regards

On 5 April 2018 00:51:30 GMT+02:00, "taii...@gmx.com"  wrote:
>As I am not a programmer I do not know how to do this (thanks for the
>heads-up rmarek) nor am I permitted to add to the repos.
>
>MITIGATION G-2                                       
>Description: Set an MSR in the processor so that LFENCE is a dispatch
>serializing instruction and then
>use LFENCE in code streams to serialize dispatch (LFENCE is faster than
>RDTSCP which is also dispatch
>serializing).
>
>This mode of LFENCE may be enabled by setting MSR C001_1029[1]=1.
>
>This is important and covers a variety of boards such as the KGPE-D16,
>KCMA-D8 and G505s (all the last and best owner controlled x86_64
>systems)

-- 
Arthur Heymans
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

[coreboot] How to handle vbt.bin

2018-04-05 Thread Arthur Heymans
Hi

Recently there has been some development towards better handling of the
VBT (Video BIOS Table) on Intel targets within coreboot. [1],[2](and a
whole lot of patches that hook up this code).

The VBT is usually a part of the option ROM for the Intel integrated
graphics device. It is pure configuration data in binary format, has a
variable length and is fully documented. The OS typically needs to know
a few things about the hardware, like on which I2C address to talk to
SDVO devices, or which I2C pins to use for the VGA port (standalone or
shared with a DVI-I port). Linux can typically work to some extent
without it because it assumes some defaults, but the Windows driver
fails to work without it. I heard it is also a hard requirement for the
GOP (pre-OS driver).

Usually this VBT data is passed on via the option ROM address for video
devices (0xc0000), but it can also be passed on via a pointer in ACPI,
which coreboot can do (either by extracting it from an option ROM in
CBFS or via a CBFS file named vbt.bin).
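
As a rough sketch (assuming the cbfs_boot_map_with_leak() helper and the
CBFS_TYPE_RAW type of the current tree; the function name is made up and
this is not the actual implementation), fetching such a vbt.bin file
from CBFS so its address can later be exposed via ACPI could look like:

/*
 * Illustrative sketch only: map a raw "vbt.bin" file from CBFS. The
 * real lookup and the ACPI plumbing live in the Intel graphics code.
 */
#include <cbfs.h>

static void *locate_vbt_from_cbfs(size_t *vbt_size)
{
	return cbfs_boot_map_with_leak("vbt.bin", CBFS_TYPE_RAW, vbt_size);
}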

Now the real question.

Since this is purely configuration data that is also fully documented,
it holds no copyright and can be included in the coreboot project without
legal trouble. In principle one could generate this binary file, but due
to its modularity and variable length such a generator would be tedious
to create. So currently the options are to include the binary either in
the blobs repo or in the main repo.

An argument was made that it belongs in the main repo in the
src/mainboard/*/*/ dir, as we already have binary configuration data in
there, namely SPD (serial presence detect) data for soldered DRAM. Also,
having a policy of allowing (and encouraging) such binaries in the main
repo could make it easier for users to run their self-built coreboot
ROMs on devices from big coreboot vendors (like Google).

So what are your thoughts on this?



Kind regards

Arthur Heymans


--

[1] https://review.coreboot.org/#/c/coreboot/+/18902/
[2] https://review.coreboot.org/#/c/coreboot/+/19905/

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Are multiple GPUs with lots of memory (ex: 8GB+8GB) supported in coreboot?

2018-02-10 Thread Arthur Heymans
"taii...@gmx.com"  writes:

> I would like to add this information to the wiki so I am wondering if
> anyone has successfully used for instance dual 8GB graphics cards with
> coreboot.
>
I don't think that this GPU memory is mapped into the linear memory
space. Such a GPU will typically have a PCI memory resource BAR that is
(only) 256M large (it could be 512M or more these days).

> I am not sure if this would be an issue due to coreboot only having
> 32bit MMIO space.
>
It depends a bit on how things are configured, but with a fairly common
2G MMIO space below 4G I think it is unlikely for 2 external GPUs to be
an issue.

It would of course be nice to have 64-bit BAR support, but that would
require substantial changes in the allocator and, more importantly, a lot
of time spent on a sane and thoughtful design...

> Thanks!

-- 
==
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


[coreboot] coreboot community meeting January 5th, 2017 report

2018-01-05 Thread Arthur Heymans
Dear coreboot community

This is the report of yesterday's community meeting:

General coreboot news & Discussion
==

There is a patch [1] up for review that implements LinuxBoot as a payload.
  - Currently with u-root in the initrd, which is a userspace entirely
  written in Go;
  - Currently only as a payload, but in the future it could replace
  ramstage on some targets;
  - Currently it is still WIP and doesn't work all that well with qemu
  targets;
  - It needs some Kconfig options for some simple example configurations;
  - In the future we also want to integrate the HEADS [2] userspace
  and possibly also a userspace featuring the petitboot kexec
  GUI [3];

The 4.7 release is imminent, and Martin will hopefully be able to finish
up the release notes this weekend.
The 4.6 release announced that after the 4.7 release, platforms lacking
the early cbmem feature will be removed from the master branch. Those
platforms can then live in the 4.7 branch or be reintegrated when early
cbmem support is implemented. The details about how and when this will
happen have yet to be decided/discussed.

Development
===

There was some discussion about how to move forward with the new ACPI
ASL 2.0 syntax, which is considered more readable. There was previously
a discussion on this topic on the mailing list [4].
Currently it is possible with iasl to compile and decompile our ASL
sources to the new syntax, but this would lose all our comments, so
ideally a new tool would need to be written to handle that transition.
It was also suggested that, given that coreboot has reproducible,
timeless builds, one would be able to tell if anything changed in the
resulting binary, which it shouldn't, since the resulting bytecode ought
to be the same.

In the meantime we still need to have a discussion about what to do
with, for instance, new ASL source files: do we still want the old
syntax, or is the new syntax OK for new files? Mixing syntaxes in
existing files seemed like a bad idea.

So what are your ideas and opinions on this?

Infrastructure
==

The pre-commit hook doesn't return failures on 'make lint-stable' but
this seems to be fixed in [5].

Documentation
=

Currently there is an effort going on to have the documentation
accessible in one place on the web. The current idea is to use Hugo for
static webpage generation, see [6]. Some concerns were however raised
that this particular theme works rather poorly without JavaScript, so it
might be desirable to find a better lightweight theme.
Related to this is the ongoing effort to convert our current
documentation to Markdown and move those files to a different directory
that starts with a lowercase letter for consistency, with a separation
of a content and a static folder.
The idea to use Netlify to push it to the production server was also
suggested. (Philipp might be able to say something in more detail about
this.)

Flashrom 1.0 was released!
This new release has some nice new features that make handling blobs
like the IFD/GBE/ME/... on Intel systems much easier. With the --ifd
flag one can now fetch the flash layout from the flash itself and use it
to read/write/erase those regions. Also, a --noverify-all or -N flag was
introduced which skips verifying regions that were not touched at all,
which can greatly speed up flashing if one is only writing to a small
region of a large flash. Our guides, however, need to be adapted to use
these features. One example where this has already been done is [7],
which still has the old instructions for reference.
In the future flashrom will support Linux MTD (memory technology
device), which can remove the need to boot Linux with relaxed MMIO
checking (currently flashrom needs the iomem=relaxed boot parameter to
use the internal programmer).



I hope you can join us next time!

Kind regards

Arthur


[1] https://review.coreboot.org/#/c/coreboot/+/23071/
[2] https://trmm.net/Heads
[3] https://github.com/ArthurHeymans/petitboot_for_coreboot
[4]
https://mail.coreboot.org/pipermail/coreboot/2016-September/082050.html
[5] https://review.coreboot.org/#/c/coreboot/+/23130/
[6] https://www.coreboot.org/Documentation/
[7] https://www.coreboot.org/Board:lenovo/x200#Flashing_your_coreboot_ROM_image

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] 16 GPUs on one board

2018-01-05 Thread Arthur Heymans
Adam Talbot  writes:
> Arthur: Thanks for the details. I have a board that with give me a "missing 
> memory" beep code with more then 6 GPUs. Now I understand why!
>
> How can I track down how much system DRAM a GPU is using? These are
> all the newest Nvidia Pascal based cards. Mostly GTX 1070's.

The answer is none, unless it is an onboard GPU, which uses some.

What you are interested in is how much linear address space it uses and
how much you can fit in the PCI MMIO (memory-mapped input/output) region.

You can use 'lspci -vvv' for that e.g.

00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset 
Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller])
Subsystem: Lenovo Mobile 4 Series Chipset Integrated Graphics Controller
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- SERR- 
> On an interesting note, one of my oldest motherboards, a Gigabyte GA-970A-UD3 
> will boot with all 8 cards, but gives me the no VGA beep code. Serial console 
> for the win!
>
> Is this just a BIOS level issue? Or is there some hardware component I should 
> be aware of?
>
> Thanks for the help.
> -Adam
>

-- 
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] 16 GPUs on one board

2018-01-04 Thread Arthur Heymans
Hi

What target are you on?

Coreboot tries to locate all PCI BARs below 4G in the PCI MMIO region
and above the lower DRAM limit (the rest of the DRAM is mapped above
4G). Typically a GPU takes around 256M, but I guess that could be more
nowadays. If that doesn't fit in the PCI MMIO region, it will have
trouble and probably not boot.

The real fix would be to have coreboot locate BARs above 4G too.

At least that is what I think is going on here...

(Sorry for top posting; it felt like the answer was better in one block.)
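
As a back-of-the-envelope illustration (assuming 256M per GPU BAR and a
2G MMIO window below 4G, and ignoring the space other devices also need,
which lowers the practical limit further):

/* Sketch, not coreboot code: how many GPUs with an assumed 256 MiB BAR
 * each fit in an assumed 2 GiB MMIO window below 4G. */
#include <stdio.h>

int main(void)
{
	const unsigned long long bar_size = 256ULL << 20;  /* 256 MiB per GPU (assumption) */
	const unsigned long long mmio_window = 2ULL << 30; /* 2 GiB below 4G (assumption) */

	for (int gpus = 1; gpus <= 16; gpus++) {
		unsigned long long need = (unsigned long long)gpus * bar_size;
		printf("%2d GPUs need %4llu MiB -> %s\n", gpus, need >> 20,
		       need <= mmio_window ? "fits" : "does not fit below 4G");
	}
	return 0;
}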

Adam Talbot  writes:

> -Coreboot
> I am totally off the deep end and don't know where else to turn for 
> help/advice. I am trying to get 16 GPU's on one motherboard. Whenever I 
> attach more then 3~5 GPU's to a single motherboard, it fails to post. To make 
> matters worse, my post
> code reader(s) don't seem to give me any good error codes. Or at least 
> nothing I can go on.
>
> I am using PLX PEX8614 chips (PCIe 12X switch) to take 4 lanes and pass them 
> to 8 GPU's, 1 lane per GPU. Bandwidth is not an issues as all my code runs 
> native on the GPUs. Depending on the motherboard, I can get up to 5 GPU's to 
> post. After
> many hours of debugging, googling, and trouble shooting, I am out of ideas. 
>
> At this point I have no clue. I think there is a hardware, and a BIOS 
> component? Can you help me understand the post process and where the hang up 
> is occurring? Do you think Coreboot will get around this hangup and, if so, 
> can you advise a
> motherboard for me to test with? 
>
> Its been a long time sense I last compiled linuxbios. ;-)
>
> Thanks
> -Adam

Kind regards

-- 
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Problems building coreboot 4.6

2017-09-11 Thread Arthur Heymans
Julius Werner  writes:

>> The error is fixed by commit
>> 54fd92bc (util/cbfstool/lz4frame.c: Add comment to fall through).
>
> Looks like I'm too late for that commit, but in general, please do not
> just hack around in the LZ4 files. That code was pulled in verbatim
> from the upstream source -- if there are issues with it, please
> instead send a patch to https://github.com/lz4/lz4 and resync our code
> base to there once it has landed.

[1] tried to update the lz4 files but needed the same 'fix' for gcc7
(the comment was 'pass-through' instead of 'fall-through'), so it
was dropped in favor of [2]. It looks like version 1.8.0 has the proper
"falls through" comment to make gcc7 happy...

[1] https://review.coreboot.org/#/c/20011/ "util/cbfstool: Update lz4 to
1.7.5" 
[2] https://review.coreboot.org/#/c/20036/ "util/cbfstool/lz4frame.c:
Add comment to fall through" 
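
For context, a minimal standalone example (not coreboot's lz4 code) of
the kind of comment gcc7's -Wimplicit-fallthrough accepts:

/* Illustrative only: gcc7's -Wimplicit-fallthrough is silenced by a
 * comment matching its fall-through regex, such as "fall through";
 * a spelling like "pass-through" does not match and still warns. */
#include <stdio.h>

static void classify(int c)
{
	switch (c) {
	case 'a':
		printf("got 'a', which also counts as... ");
		/* fall through */
	case 'b':
		printf("'a' or 'b'\n");
		break;
	default:
		printf("something else\n");
		break;
	}
}

int main(void)
{
	classify('a');
	classify('x');
	return 0;
}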

Kind regards
-- 
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] coreboot community meeting: August 17, 2017

2017-08-22 Thread Arthur Heymans
> Development
>
> * Tianocore patches merged, and working for at least a subset of
> boards.  Some boards still need some work.  Need to test on 32-bit
> machine.
>
I tested a 32-bit build on my 32-bit-only Thinkpad X60, and while I did
not test whether it was able to boot anything (there is no EFI bootloader
on it), it seemed to work fine and greeted me with the Tianocore loading
screen.

-- 
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Some updated instructions to flash coreboot to the Thinkpad X60 from vendor bios

2017-08-14 Thread Arthur Heymans
Peter Stuge  writes:

> Arthur Heymans wrote:
>> https://gist.github.com/ArthurHeymans/c5ef494ada01af372735f237f6c6adbe
>
> I note these differences from what I wrote up in the wiki:
> (that may no longer be there though)
>
It's still there for the most part and was a good source of inspiration.

> * CONFIG_BUCTS_BOOTBLOCK is now a thing (great! very neat!)
> * updated flashrom patches
Those are probably still the same.
> * use flashrom image file to not touch last 64k (great idea!)
> * added nvramtool step
>
This step is probably good advice for every flash coming from a vendor
BIOS.

> Is that accurate? Nice improvements for sure. Thanks.
>
>
> //Peter

Yes, that's about it. I also extended the bucts utility to support some
newer chipsets and to report whether or not it can be set, but that's not
really related to this.

Sadly those instructions did not lead to a successfully flashed Thinkpad
for one user, so I'll investigate where the issue resides soon(ish).

Kind regards
-- 
Arthur Heymans

-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

