[coreboot] Re: Security notice: SMM can be hijacked by the OS on APs
On 4/12/22 10:17, Nico Huber wrote:

Hello Insurgo,

On 12.04.22 16:01, Insurgo Technologies Libres / Open Technologies wrote: On April 12, 2022 8:55:56 AM UTC, Arthur Heymans wrote: Would it make sense to backport your fix to old releases and bump those release numbers to a .1 on the end? Some see releases as mere synchronization tags & nice PR. Some releases are also branches in gerrit, but none of those is affected by this (the latest is 4.12 and the issue was introduced in 4.13). As you may know, coreboot distributions (talking of Heads specifically here) take release tarballs and apply patches on top where needed. In the present case, Heads currently depends on coreboot 4.11, 4.13 and 4.15 for its supported boards. I quickly attempted to backport the relevant patches to the 4.13 tarball release, unsuccessfully.

Have you checked whether the SMM module loader v2 was used in your 4.13 builds? AIUI, it was optional and only enabled on user request.

Thanks Nico for that pointer. Community-maintained Heads boards are mostly based on coreboot 4.13 as of now: "# CONFIG_X86_SMM_LOADER_VERSION2 is not set" was hidden in the savedefconfig format stored under the Heads repositories for the boards depending on coreboot 4.13. Expanding the saved configuration confirms non-usage of the optional SMM v2 loader, so those boards are not considered vulnerable per the reported vulnerability. I would highly doubt other coreboot-based distributions have activated this explicitly, but it will depend on the new defaults pushed in upstream coreboot releases. Let's see. 4.15 and 4.16 removed that optional configuration setting (default configuration) and seem to have switched to the v2 loader by default. None of the coreboot 4.14, 4.15 or 4.16 release notes explicitly noted the change to the v2 loader, whereas the 4.13 notes announced it. Not sure users are following coreboot discussions, but I hope coreboot distribution maintainers are.
Consequently, all downstream coreboot-based distributions, and their users, may stay vulnerable until 4.17 is released, if no new 4.15.1/4.16.1 releases are made, from my understanding. A quick exploration of other coreboot distributions I am aware of:
- Skulls uses coreboot master git at time of release (1.0.4 is based on 74d2218cc7, as of December 2021); configs are also saved in savedefconfig format and would consequently need to be expanded to verify as well. https://github.com/merge/skulls/releases/tag/1.0.4
- Not so familiar with the osboot build system. They store configs in expanded full format. A sampled config for the x220 was updated last month and seems to be based on coreboot 4.14+ (probably 4.16), which is deemed to be vulnerable as well. https://notabug.org/osboot/osbmk/src/master/resources/coreboot/x220_8mb
- Not so familiar with the recent libreboot build system either. A sampled configuration for the x200 shows the coreboot config being last updated 4 months ago, making it depend on coreboot 4.14+, which no longer shows the 4.13-era optional loader setting and also seems to default to the v2 loader. Hence all their boards (outside of the kgpe-d16, kcma-d8, etc., which depend on the older 4.11) are vulnerable as well: https://notabug.org/libreboot/lbmk/src/master/resources/coreboot/x200_8mb/config/libgfxinit_corebootfb

As per my previous e-mail, I believe all coreboot-based distributions (maintainers and their project users) would be grateful for releases backporting this patchset (4.14?, 4.15, 4.16), to properly support their coreboot users. They could then do point releases of their own without each of them having to point to a random commit in between coreboot releases, or trying to manually cherry-pick the relevant commits and deploy patches (if their build systems permit that).

Nico

Thierry

___ coreboot mailing list -- coreboot@coreboot.org To unsubscribe send an email to coreboot-le...@coreboot.org
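For other maintainers wanting to run the same check, a minimal sketch (the helper name and the demo config file are my own illustration, not from the thread): expand a savedefconfig into a full .config first (e.g. `make olddefconfig` in the matching coreboot tree), then grep for the loader option.

```shell
# Sketch: report whether an expanded coreboot .config enables the SMM
# module loader v2 (the code path affected by the advisory).
# The function name and demo file are illustrative assumptions.
check_smm_v2() {
  if grep -q '^CONFIG_X86_SMM_LOADER_VERSION2=y' "$1"; then
    echo "check: SMM loader v2 enabled"
  else
    echo "check: SMM loader v2 not enabled"
  fi
}

# Demo against a fabricated one-line config, matching what the 4.13
# Heads boards show after expansion:
printf '# CONFIG_X86_SMM_LOADER_VERSION2 is not set\n' > demo.config
check_smm_v2 demo.config
```

Note that grepping a raw savedefconfig is not enough: an unset bool is simply absent there, so the config must be expanded before checking.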
[coreboot] Re: Security notice: SMM can be hijacked by the OS on APs
On April 12, 2022 8:55:56 AM UTC, Arthur Heymans wrote:
>Hi
>
>> Would it make sense to backport your fix to old releases and bump
>> those release numbers to a .1 on the end?
>
>Some see releases as mere synchronization tags & nice PR.
>Some releases are also branches in gerrit but there are none affected by
>this (latest is 4.12 and it was introduced in 4.13).

As you may know, coreboot distributions (talking of Heads specifically here) take release tarballs and apply patches on top where needed. In the present case, Heads currently depends on coreboot 4.11, 4.13 and 4.15 for its supported boards. I quickly attempted to backport the relevant patches to the 4.13 tarball release, unsuccessfully. The alternative would be to move all 4.13+ coreboot versions to a specific commit in the middle of the current 4.16 release cycle, which doesn't seem attractive from a stability perspective, since it turns production board owners into testers of the upcoming 4.17 release.

The relevant patchset is on top of 4.16, and I haven't found a proper merging strategy to backport a working patchset onto 4.13. I tried to find all patches touching the new SMM v2 handler introduced in 4.13, but conflicts arose when cherry-picking those commits in reverse order in an attempt to create a patch that would apply successfully on top of the extracted 4.13 tarball.

I also believe that new tarballs for 4.13.1 and newer should have the patchset backported and included, with the corresponding branches/tarballs (.1) released, so that all coreboot-based distributions can simply point to those new tarballs without each of them suddenly having to point to a specific commit in between releases; 4.16, having just been released, also contains the vulnerability.

>There is a precedent where 4.8 was bumped to 4.8.1 because all boards were
>broken.
>
>I don't have a strong opinion on this.
>Do people really use the releases in production or are most using git
>anyway?
For coreboot release-based distributions, and to properly support products/solutions, I am not aware of many projects that are based on coreboot's rolling release (git). If reproducibility of coreboot distribution builds (ROMs) is also a goal for those projects, pointing to a git commit is only good for testing newly introduced platforms; it is not optimal for creating stable releases for already supported platforms, since the projects' own codebases also change in between coreboot upstream releases.

>It's a bit weird to have releases that you'd have to advertise as *don't
>use*, but I've seen us do that in the past (because issues are quite often
>just fixed in master).

In an ideal world, each of those projects would have time to fully test rolling releases. But the reality is different: Heads/Skulls/osboot/libreboot rely on coreboot releases to build ROMs for their supported platforms and users.

>
>Kind regards
>Arthur
>
>On Tue, Apr 12, 2022 at 12:52 AM Peter Stuge wrote:
>
>> Arthur Heymans wrote:
>> > I think this issue might affect a lot more systems than I initially
>> thought.
>>
>> Would it make sense to backport your fix to old releases and bump
>> those release numbers to a .1 on the end?
>>
>> //Peter
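The cherry-pick approach described above can be illustrated with a toy repository (every file name, commit message and tag here is a fabricated stand-in for the real coreboot tree; a real backport of the SMM patchset conflicts, as reported):

```shell
# Toy demonstration of backporting a fix commit onto an older release tag.
# All names are fabricated for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Simulated 4.13 release, then a later fix on master:
echo 'v1 loader' > smm_loader.c
git add . && git commit -qm 'add smm loader' && git tag 4.13
echo 'fixed loader' > smm_loader.c
git commit -qam 'fix smm loader'
fix=$(git rev-parse HEAD)

# Branch from the release tag and pick the fix back onto it:
git checkout -q -b backport-4.13 4.13
git cherry-pick "$fix" >/dev/null   # the real backport conflicts here
grep -q 'fixed loader' smm_loader.c && echo 'backport applied'
```

When intermediate commits have reshaped the touched files (as happened to the SMM loader between 4.13 and 4.16), each cherry-pick stops with conflicts that must be resolved by hand, which is exactly the difficulty described above.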
[coreboot] Re: OpenBMC on KGPE-D16, somebody has it working?
OpenBMC for the KGPE-D16 is here and "works": https://www.raptorengineering.com/coreboot/kgpe-d16-bmc-port-status.php

This is stale, PoC-quality:
- The SSH password is hardcoded to 0penBMC
- The TTY gets corrupted after a while and OpenBMC needs to be restarted to fix the issue (asus_power.sh reset)
- Thermal management is for a server, not a workstation. That needs to be hacked to not run all fans at full speed all the time, since the AST2050 controls thermal management when powered up.
- The REST API is exposed

If you want the OpenBMC bitbake binary produced, let me know. u-bmc would be awesome. I opened this ticket for u-bmc's KGPE-D16: https://github.com/u-root/u-bmc/issues/133

Regards, Thierry/Insurgo

On 10/13/19 1:11 AM, ron minnich wrote:
> If you like running systemd on your bmc, the minimum 60 seconds
> openbmc takes to boot, the complex, fragile, and long time it takes to
> build from source, and the openbmc stack's need for giant memory
> footprint and lots of nvme, stop reading.
>
> IF, OTOH, you like the idea of a very lightweight stack, which builds
> in minutes not hours, and needs maybe 32M of memory to run, and boots
> much faster, well, you might want to check out u-bmc.
>
> https://github.com/u-root/u-bmc
>
> On Sat, Oct 12, 2019 at 2:00 PM Kinky Nekoboi wrote:
>> Debian 10.
>>
>> That's perfect, I have the necessary flashing tools.
>>
>> On 12.10.19 at 22:57, Merlin Büge wrote:
>>> Hi,
>>>
>>>> Due to lib dependency hell I am not able to build openbmc myself atm.
>>>>
>>> Are you building on Debian (9 or 10)?
>>>
>>>> Does somebody have openbmc working for their kgpe-d16 system and can
>>>> maybe send me a rom?
>>> I'll send you a separate email.
>>>
>>>> Can I flash the module with a programmer and testclip or only internally?
>>> Yes, a SOIC16 testclip and an SPI programmer like the CH341a will work.
>>> Merlin
[coreboot] Re: DIY debug Dongle
A note on EHCI debug: the screwdriver on the BeagleBone Black would need some love, but works: https://github.com/osresearch/heads/issues/541#issuecomment-481513900

On 10/14/19 10:53 AM, Nico Huber wrote:
> On 13.10.19 16:40, Tom Hiller wrote:
>> The wiki briefly mentions "usb_debug_io.c" with a link to an old trac
>> ticket. Could a second device connected via usb only read the logs? I
>> am curious on the feasibility of attaching an Android phone/tablet with
>> an app that could easily read coreboot debug information.
> For EHCI debug, the basic requirements are that it's a high-speed
> capable device and that you can configure and use it with <= 8 byte
> transfers.
>
> Usually this applies to Android devices and some have been used
> already for debugging. You need the `g_dbgp` gadget driver. This
> driver is reported to not always work, however... a workaround
> could be to use `g_serial` and add some support code to the debug
> target.
>
> Nico
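For reference, a sketch of the gadget setup Nico describes (these are target-device commands, not runnable elsewhere; they assume a rooted Linux device whose kernel was built with the dbgp gadget, which is an assumption about your hardware):

```shell
# On the debugging (gadget-side) device: expose its USB device port as an
# EHCI debug device. Requires a kernel built with the g_dbgp gadget module.
modprobe g_dbgp

# Fallback mentioned above if g_dbgp misbehaves: the serial gadget instead,
# which additionally needs support code on the debug target.
# modprobe g_serial
```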
[coreboot] Re: KGPE-D16 maintainership
On 9/23/19 4:42 AM, Arthur Heymans wrote:
> Hi
>
> Thanks for wanting to maintain this platform!
>
> There are a few issues with the amdfam10 codebase that could be improved
> upon. I'll try to list a few of them, to give an idea of what
> maintaining this code in 2019 could mean.
>
> So first and foremost:
> 1) The amdfam10 codebase (terminology-wise this always means non-agesa
> in this context) suffers from a romcc romstage past. This code is
> derived from code that used to be compiled with romcc instead of running
> in CAR. The result of that is a large amount of romstage boilerplate code
> and lots of #include *.c in the mainboard code that is often not
> mainboard specific. To give an example, romstage spinlocks and memory
> tests are implemented in the kgpe-d16 romstage.c file while not being
> mainboard specific at all. These practices result in an unusually large
> amount of unmaintained code in mainboard dirs with a fragile inclusion
> order of headers (and *.c files!).
>
> Other issues pertain to coreboot wanting to move forward by mandating a
> few features (relocatable ramstage, postcar
> stage/no_car_global_migration, C environment bootblock):
> 2) relocatable ramstage: This is just a Kconfig switch to build the
> ramstage as a relocatable stage instead of statically linking it to
> execute at CONFIG_RAM_BASE. Typically you want to set up some caching of
> where cbmem is going to be to speed up ramstage, but on AMD hardware
> it's a bit more complicated. Part of ramstage is to initialize APs, and APs
> will eventually jump to ramstage code. On AMD hardware however there is
> a TOP_MEM MTRR that creates a boundary between RAM and MMIO below 4G. If
> this is configured to be below cbmem, APs won't execute code. I see a
> few proper solutions:
> - APs are also active during the romstage -> try to sync MTRRs at the
>   end of romstage.
> - Use the parallel mp init code and modify the relocatable SIPI vector
>   stub to have the MTRRs as arguments instead of a pointer to the BSP
>   MTRR settings, in order to copy them.
>
> https://review.coreboot.org/c/coreboot/+/30063 is an attempt to make it
> work, but by setting TOP_MEM to something sufficiently large...
>
> 3) Postcar stage: This development goes hand in hand with relocatable
> ramstage, as it easily allows programming MTRRs to set up caching of cbmem.
> If you tear down CAR during romstage, you need to take special care
> about where your globals are (before and after CAR). Tearing down CAR in
> a separate stage, hence having a romstage with a clean program boundary,
> allows getting rid of a lot of code dealing with globals.
>
> https://review.coreboot.org/c/coreboot/+/30064/ is an attempt to
> implement it on amdfam10
>
> 4) C_ENVIRONMENT_BOOTBLOCK: romcc is an unmaintained, 20k+-line,
> single-file C program that imposes some restrictions and workarounds
> in C code. On older platforms it is used to compile a bootblock that
> often does nothing more than loading {normal,fallback}/romstage. The CAR
> setup happens at the start of romstage. The alternative is to move the CAR
> init to the bootblock and compile the bootblock with gcc. One big
> functional advantage is the ability to have a separate verstage running
> in CAR before romstage. This broadens the scope of what can be placed in
> the RW_A/B slots in a VBOOT setup.
>
> 5) a few minor things: coreboot often has multiple methods of achieving
> more or less the same things, with newer methods being better in some
> aspects. It would be great if:
> - for saving the raminit timings, the drivers/mrc_cache code was used
>   instead of the custom implementation in
>   northbridge/amd/amdmct/mct_ddr3/s3utils.c
> - for AP init, CONFIG_PARALLEL_MP were used instead of the old
>   cpu/x86/lapic/lapic_cpu_init.c code.
> - A stage cache were set up in TSEG to ensure the OS can't trash the
>   ramstage in cbmem on S3 resume.
>
> I hope this list provides some useful pointers to where the code can be
> improved. Feel free to add me on reviews.

Thanks Arthur, awesome inputs.

>
> Piotr Król writes:
>
>> Hi all,
>> we see a lot of attention around KGPE-D16 maintainership problems.
>> After discussion with Thierry Laurion (Insurgo) at OSFC2019, 3mdeb
>> decided to help in maintaining that platform by organizing a
>> crowdfunding campaign or getting funds in other ways (direct sponsors).
>>
>> Since we are based in Poland there is a chance that even with small
>> contributions from the community we would be able to cover the costs.
>>
>> Ideal plan would be to have a structure similar to what we maintain for
>> PC Engines:
>> https://
[coreboot] Re: KGPE-D16 maintainership
On September 23, 2019 8:42:04 AM UTC, Arthur Heymans wrote: >Hi > >Thanks for wanting to maintain this platform! > >There are a issues with the amdfam10 codebase that could be improved >upon. I'll try to list a few of them, to give an idea of what >maintaining this code in 2019 could mean. Awesome points, Arthur. Thanks a bunch! > >So first and foremost: >1) The amdfam10 codebase (terminology-wise this always means non-agesa >in this context) suffers from a romcc romstage past. This code is >derived from code that used to be compiled with romcc instead of >running >in CAR. The result of that is a big amount of romstage boilerplate code >and lots of #include *.c in the mainboard code, that is often not >mainboard specific. To give an example romstage spinlocks and memory >tests are implemented in the kgpe-d16 romstage.c file while not being >mainboard specific at all. These practices result in an unusually large >amount of unmaintained code in mainboard dirs with a fragile inclusion >order of headers (and *.c files!). > >Other issues pertain to coreboot wanting to move forward by mandating a >few features (relocatable ramstage, postcar >stage/no_car_global_migration, c environment bootblock): >2) relocatable ramstage: This is just a Kconfig switch to build the >ramstage as a relocatable stage instead of statically linking it to >execute at CONFIG_RAM_BASE. Typically you want to set up some caching >of >where cbmem is going to be to speed up ramstage, but on amd hardware >it's bit more complicated. Part of ramstage is to initialize AP's and >AP's >will eventually jump to ramstage code. On AMD hardware however there is >a TOP_MEM MTRR that creates a boundary between ram and mmio below 4G. >If >this is configured to be below cbmem AP's won't execute code. I see a >few proper solutions: >- AP's are also active during the romstage -> try to sync MTRR's at the > end of romstage. 
> - Use the parallel mp init code and modify the relocatable SIPI vector
>   stub to have the MTRR's as arguments instead of a pointer to the BSP
>   MTRR settings, in order to copy them.
>
>https://review.coreboot.org/c/coreboot/+/30063 is an attempt to make it
>work but by setting TOP_MEM to something sufficiently large...

On my side, I'm committing to spreading the word about the need for support in Libreboot communities, open source forums and the security trainings I do for organisations with self-hosting needs. I also commit to giving a margin of Insurgo profits, following needs, to cover part of the maintenance fees.

Sorry for not having jumped in previously; I'm crazy busy with Insurgo tasks, while looking for collaboration to build a cooperative business platform and to change things so that I can be completely replaceable in the whole production chain for the PrivacyBeast X230, both from a Heads perspective and from a QubesOS perspective.

Let me know what I can do to help on a community basis and what costs need to be covered. I'll do my best. Thanks a bunch for showing interest in keeping that platform alive.

Thierry/Insurgo
--
Sent from /e/ Mail
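Arthur's TOP_MEM point above can be sketched numerically (all addresses here are made up for illustration): the relocated ramstage in cbmem is only RAM-backed, and thus executable by the APs, if it sits below the DRAM/MMIO boundary that TOP_MEM defines.

```shell
# Toy model of the TOP_MEM constraint on AMD fam10h hardware.
# TOP_MEM splits the sub-4G space into DRAM (below) and MMIO (above);
# APs can only execute the relocated ramstage if cbmem lies below it.
# The boundary and cbmem addresses below are fabricated examples.
top_mem=$((0xC0000000))            # assumed 3 GiB DRAM/MMIO split

aps_can_run_ramstage() {
  if [ "$1" -le "$top_mem" ]; then echo "ok"; else echo "broken"; fi
}

aps_can_run_ramstage $((0xBF000000))   # cbmem below TOP_MEM: RAM-backed
aps_can_run_ramstage $((0xC8000000))   # cbmem above TOP_MEM: APs see MMIO
```

This is why the patch linked above resorts to setting TOP_MEM "sufficiently large": it forces cbmem under the boundary rather than syncing the AP MTRRs.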
[coreboot] Re: Final calls for S3 suspend support on amdfam10-15
I could provide such resume logs for the kgpe-d16. How do I produce them?

On August 21, 2019 2:43:44 AM UTC, Matt B wrote:
>Is a lack of C bootblock support common to all family 15h platforms?
>(Including the G505s and any others?)
>In other words, would it only need to be implemented once for all of
>these systems?
>
>Sincerely,
>-Matthew Bradley
>
>On Thu, Aug 15, 2019 at 11:50 AM Kyösti Mälkki wrote:
>
>> Hi
>>
>> The decisions are out on 4.11 release requirements. Unless anyone acts
>> on it, amdfam10-15 boards will hit deprecation due to:
>> * RELOCATABLE_RAMSTAGE=n
>> * CAR_GLOBAL_MIGRATION=y
>> * C_ENVIRONMENT_BOOTBLOCK=n
>>
>> To smooth down the path, should anyone want to attempt fixing
>> these, I have pushed a patchset [1] that removes S3 suspend support
>> from said platforms. Depending on what the response is, I hope to have
>> that submitted already before the 4.11 release.
>>
>> The latest info [2] I have is asus/kcma-d8 not working and
>> asus/kgpe-d16 working in 4.6 but very slowly, for S3 resume boot, that
>> is. No resume logs have been brought to my knowledge for 12 months. I
>> also had some suspicions that test results were mistakenly from
>> suspend-to-disk (S4).
>>
>> Affected boards are asus/kcma-d8 and asus/kgpe-d16.
>>
>> [1] https://review.coreboot.org/c/coreboot/+/15474
>> [2] https://mail.coreboot.org/pipermail/coreboot/2018-June/086906.html
>>
>> Kind regards,
>> Kyösti
--
Sent from /e/ Mail
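One common way to capture such logs (my suggestion, not stated in the thread; these are target-machine commands that assume a coreboot build with the CBMEM console enabled) is to dump the in-memory coreboot console after a suspend/resume cycle:

```shell
# On the running kgpe-d16, as root: suspend, wake the machine back up,
# then read the CBMEM console, which then includes the S3 resume path.
echo mem > /sys/power/state   # enter S3; wake via keyboard/power button
cbmem -c                      # dump the coreboot console log from memory
```

A serial console capture during the resume itself works as well and also catches output from before RAM-backed logging is available.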