[coreboot] Re: Howdy!
Felix, Thanks very much! The machine I'm currently using has Git installed. Can you or another member point me at the source I need to download? A Google search with linuxbios as the target was not productive. I came across a nice slide presentation, but it said to use cache as RAM. Eli D.

From: Felix Held Sent: Thursday, November 7, 2019 7:55 PM To: coreboot@coreboot.org Subject: [coreboot] Re: Howdy!

> Using an old (pre romcc-romstage removal) coreboot version or even linuxbios (not to be confused with linuxboot) is probably your best bet here. Regards Felix

___ coreboot mailing list -- coreboot@coreboot.org To unsubscribe send an email to coreboot-le...@coreboot.org
[coreboot] Re: Howdy!
Hi!

> You need either to use cache as RAM

On those very old processors and chipsets it's rather unlikely that you'll get cache as RAM working, since they typically lack some rather essential functionality for it; most importantly, they don't have MTRRs. Using romcc would probably be an option, but the romcc romstage was dropped a few years ago, so it isn't useful for porting the device to a current coreboot version. Using an old (pre romcc-romstage removal) coreboot version or even linuxbios (not to be confused with linuxboot) is probably your best bet here.

Regards Felix
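[Editor's note] Felix's point about missing MTRRs can be checked on a running Linux system by looking at the CPU flags. A minimal sketch (Python; the helper name and sample string are mine, and on a real machine you would feed it the contents of /proc/cpuinfo):

```python
def has_mtrr(cpuinfo_text: str) -> bool:
    """Check a /proc/cpuinfo-style dump for the 'mtrr' CPU flag.

    Cache-as-RAM setup relies on MTRRs to pin a cache range, so a CPU
    whose flags lack 'mtrr' cannot use the usual CAR approach.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            flags = line.split(":", 1)[1].split()
            if "mtrr" in flags:
                return True
    return False

# Hypothetical flags line for an old pre-P6-class CPU without MTRRs
sample = "processor\t: 0\nflags\t\t: fpu vme de pse tsc msr pae mce\n"
print(has_mtrr(sample))  # → False
```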
[coreboot] Re: Howdy!
Rudolf, That you took time to respond to a NOOB at all is highly appreciated. It's definitely C 101 time for me. I'm an assembly language programmer, and COBOL is number 2. I've previously looked into C but (now unfortunately) developed a distaste for it, given its claims of "universality" and its incompatibility with COBOL's data structures. However, you definitely don't want to hear my gripes. XYZ for Dummies will be the starting point. Would disassembling the contents of the installed BIOS ROM make sense? That code, obviously, gets RAM squared away. While unpleasant, I've dealt with machine language patches. Eli D.

From: Rudolf Marek Sent: Thursday, November 7, 2019 5:19 PM To: Eli Duttman ; coreboot@coreboot.org Subject: Re: [coreboot] Howdy! Hi Eli, Unfortunately I'm very busy so I can't help much, although I like the blast-from-the-past idea. Thanks, Rudolf
[coreboot] Re: Copy-first platform additions (was: Re: Re: Proposal to add teeth to our Gerrit guidelines)
On 07.11.19 12:05, Nico Huber wrote:
> 1. Few people seem to take the might of Git into account. We have a
>    tool that can significantly increase the efficiency of development
>    and can help to preempt bugs. But many people would accept a process
>    that breaks Git benefits. Especially with the growth of the coreboot
>    community, I believe this gains importance.

Patrick already commented on two competing approaches to putting the changes for a new chip into commits. I'd like to look closer at both and explain how I work with Git and why copy-first additions make it much, much harder (at least for me) to work with the result.

What I call the copy-first approach
---
1. The first commit copies existing code (with name changes only).
n. Some commits change parts of the copied code that were identified to differ between the old and new chips.
(n+1. Not a practice yet, but it was suggested to have a last commit documenting things that aren't covered by the n changing commits; e.g. what didn't change but was tested.)

The alternative step-by-step approach
---
Each commit adds a chunk of code in a state that is supposed to be compatible with the new chip.

Generally, smaller, cohesive commits are easier to review, of course. But there is no strict rule about what the steps should be. The major difference between the two approaches lies in what we know about the code chunks that stay the same for old and new chips. With the copy-first approach, we know basically nothing beside what we find in the history of an older chip. We don't know if the code was left unchanged intentionally or if differences between the chips were overlooked. Also, nobody had an opportunity to review whether it applies to the new chip. The n+1 commit might mitigate the documentation issue, but it's too late for review. Sometimes people tell me that the copy-first approach makes it more visible what the author thinks changed between the old and new chips. I don't yet see how we benefit from that (please help me).
With the step-by-step approach, we can easily document intentions in the commit messages, everything can be reviewed, and it's less likely that changes between old and new chips are overlooked. So much for the initial addition of a new chip. But how does it look later, when the first (non-reference) boards using the chip are added, or during maintenance? No matter which approach was applied, the code won't be perfect. And whenever somebody hits a bug or an incompatibility, or maybe just wonders why some code exists (e.g. when a datasheet doesn't mention anything related), people will have to investigate the reasons behind the code. Personally, I make a lot of use of the `git blame` command. It shows, for each line of a file, which commit touched that line last. And very often, the commit message provides some reasoning about the code. The step-by-step approach gives us the opportunity to document reasons, assumptions made, and what was tested in the commit messages. With the copy-first approach, OTOH, this information is scattered. I'm not saying the reasons or test coverage would be different, just much harder to find, if documented at all. So for code that didn't change between old and new chips, in the best case, the copy-first approach with a documenting n+1 commit still increases the cost of finding the information maybe 5x. In the worst case, much more. Patrick asked us to be aware of costs. Here is my view on this: The step-by-step approach is more expensive for the initial addition. But these are one-time costs. The copy-first approach is cheaper but more error-prone due to the lack of a thorough review. It not only creates further costs due to more bugs to fix, but also makes it harder to reason about the code, so bug-fixing costs increase further.
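[Editor's note] The `git blame` workflow described above can be scripted: `git blame --line-porcelain` repeats the full commit headers (including the `summary` line) for every line of the file. A sketch of extracting the per-line commit summaries from that output (the function name and the sample porcelain text are mine):

```python
def blame_summaries(porcelain: str) -> dict[int, str]:
    """Map each final line number to the summary of the commit that
    last touched it, given `git blame --line-porcelain` output."""
    result, lineno = {}, 0
    for line in porcelain.splitlines():
        parts = line.split()
        # Header lines look like: <40-hex sha> <orig-line> <final-line> [<count>]
        if len(parts) >= 3 and len(parts[0]) == 40 and \
                all(c in "0123456789abcdef" for c in parts[0]):
            lineno = int(parts[2])
        elif line.startswith("summary "):
            result[lineno] = line[len("summary "):]
    return result

# Handcrafted sample of --line-porcelain output (abridged headers)
sample = """\
1111111111111111111111111111111111111111 1 1 1
author Alice
summary nb: Add DRAM init
filename src/nb.c
\tvoid dram_init(void) {
2222222222222222222222222222222222222222 1 2 1
author Bob
summary nb: Fix refresh timing for new chip
filename src/nb.c
\t\tset_refresh();
"""
print(blame_summaries(sample))
# → {1: 'nb: Add DRAM init', 2: 'nb: Fix refresh timing for new chip'}
```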
Assuming the copy-first approach costs X and another X per year of maintenance, and we'd want to maintain a chip for 3 years (IIRC, Intel sometimes sells chips for 5~7 years), we could spend 4*X instead on the step-by-step approach without losing anything. Nico
[coreboot] Re: Copy-first platform additions (was: Re: Re: Proposal to add teeth to our Gerrit guidelines)
Hi Patrick, thank you for your response. I really appreciate that somebody helps to clear things up.

On 07.11.19 15:58, Patrick Georgi wrote:
> On Thu, Nov 07, 2019 at 12:05:44PM +0100, Nico Huber wrote:
>> 1. Few people seem to take the might of Git into account. We have a
> Git is rather limited in some respects: For example it has no notion
> of a copy (except by using interesting merge hacks: create a branch
> where you rename, merge the two branches, keeping both copies in conflict
> resolution and you'll get a copy operation that git understands).

Yeah, Git isn't perfect. But that doesn't mean we can't use what it provides, does it?

>> tool that can significantly increase the efficiency of development
>> and can help to preempt bugs.
> Interesting claim, but no: in the end it's up to developers. Git
> serves the function of a (rather stupid) librarian who keeps track
> of things so they don't get lost.

What are we bikeshedding here? That I said "[it] can" and should have said "[it] can be used to"?

>> But many people would accept a process
>> that breaks Git benefits. Especially with the growth of the coreboot
>> community, I believe this gains importance.
> As Stefan explained yesterday, the copy & adapt flow was established in
> part in response to issues with high velocity development overwhelming
> the project's review capacity.
>
> There are two approaches, each with their upsides and downsides,
> but from my point of view the review capacity issue is a real issue,
> while "git can't track copies" is not: it's easier to improve git
> (I'd expect they accept patches) than to ramp up on reviewers (that
> aren't in the next step accused of rubber stamping commits).

Is review capacity still an issue? From what you write it seems that it was back in 2012/2013, before I got really involved with the community. But during the last 3~4 years I witnessed the opposite. There were several occasions where I begged people to let me do a review and I was pushed away.
At least in one case, I didn't even ask to break a big commit up. It was just decided that a review would be too much to bear for the submitter. So it would seem the capacity issue is on the other side: Do we lack the capacity to write reviewable patches? If that is already the case, I fear it will turn into a race to the bottom. Fewer reviews, or less thorough reviews => more bugs => less time to write decent commits.

> One approach is to bring up a new chipset feature by feature. This
> might happen by bringing over code from another chipset that is
> sufficiently similar, editing it and only then putting it up for
> review.
>
> This means that every instance of the code will look slightly different
> (because devs actively have to take apart .c files and reassemble them
> as they reassemble the various features in the chipset), and every
> line of code will have to be reviewed. That's where things broke down.
>
> On top of that, since every instance of the code will look
> oh-so-slightly different, it becomes harder to port fixes around
> along the lineage of a chipset.

I agree, that is an issue I've seen, too. Though, without any guideline to keep older copies up to speed, we have that in any case (currently, everything is growing uncontrolled).

> The other approach is to replicate what the chip designers (probably)
> did: Take the previous generation, make a copy and modify, under the
> assumption that the old stuff isn't the worst place to start from.
>
> This means that the first code drop for chip(X+1) in coreboot likely
> is more suitable for initializing chip(X) instead of chip(X+1),
> as that's what the later follow-up commits provide.
>
> The main risk here seems to be that old stuff that isn't needed
> anymore isn't cut out of the code.

That is one issue, but I see a much bigger one...

> On the upside, when applying
> transitive trust (that is: the assumption that the original chip(X+1)
> code isn't worse than the chip(X) code it came from), ...
transitive trust is a fallacy. While the copied code isn't worse in terms of coding style and whatever poison the programming language has to offer, it can be worse for the new chip. If we don't keep a record and review what has to be changed for the new chip, more differences between old and new will be overlooked. I can't prove it. But I assume that the effort to find overlooked details afterwards exceeds what we saved during the initial review. > the main concern > is looking what changed between the generations, reducing the load > on the project during review. > > You assert that the first approach I listed is superior to the other, > but you ignore the issues we had with it. I'm not ignoring the past, sorry if I made it look like that. It just came as a surprise to me because I didn't witness any discussion of the review-capacity trouble. I have no idea, though, how to measure if we still have that problem. If reviewing turns out too slow, it's
[coreboot] Re: coreboot and linuxboot
Hi Jorge,

On Wed, Nov 6, 2019 at 5:09 AM Jorge Fernandez Monteagudo wrote:
> Hi all,
>
> I've built coreboot with the LinuxBoot payload. The kernel is the stable 5.3.8 version and the initramfs is the u-root master version. All with the default choices. I'm using coreboot 4.10.
>
> Once the system boots I get a framebuffer console without USB keyboard support and the u-root welcome message. How can I boot my Debian system on a SATA or a USB disk? Maybe I first have to add SATA and USB storage support? Is there some way to look for the GRUB configuration automatically?

Yes, if you're using LinuxBoot then you need to include all the drivers you wish to use in your LinuxBoot kernel, including SATA and USB. `localboot -grub` will systematically mount partitions looking for a GRUB config file, parse it, and kexec the kernel it finds. You can select a particular entry using the `-config` command-line parameter.

> Sorry for my noob questions, I've just arrived at LinuxBoot...

Welcome!
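[Editor's note] The GRUB-scanning behaviour described above can be sketched in a few lines: walk a mounted partition looking for a grub.cfg in the usual places, then pull the `menuentry` titles out of it. This is an illustrative approximation, not u-root's actual code; the search paths and helper names are assumptions:

```python
import os
import re
import tempfile

# Common grub.cfg locations relative to a mounted partition (an assumption;
# the list that u-root's localboot actually searches may differ).
GRUB_PATHS = ("boot/grub/grub.cfg", "grub/grub.cfg", "boot/grub2/grub.cfg")

def find_grub_cfg(mountpoint: str):
    """Return the path of the first grub.cfg found under a mountpoint, or None."""
    for rel in GRUB_PATHS:
        path = os.path.join(mountpoint, rel)
        if os.path.isfile(path):
            return path
    return None

def menu_entries(cfg_text: str):
    """Extract menuentry titles from a GRUB config."""
    return re.findall(r"^\s*menuentry\s+['\"]([^'\"]+)", cfg_text, re.M)

# Demo against a fake mounted partition in a temp directory
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "boot/grub"))
with open(os.path.join(root, "boot/grub/grub.cfg"), "w") as f:
    f.write("menuentry 'Debian GNU/Linux' {\n  linux /vmlinuz\n}\n")
print(menu_entries(open(find_grub_cfg(root)).read()))  # → ['Debian GNU/Linux']
```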
[coreboot] Re: Copy-first platform additions (was: Re: Re: Proposal to add teeth to our Gerrit guidelines)
On Thu, Nov 07, 2019 at 12:05:44PM +0100, Nico Huber wrote:
> 1. Few people seem to take the might of Git into account. We have a

Git is rather limited in some respects: For example, it has no notion of a copy (except by using interesting merge hacks: create a branch where you rename, merge the two branches, keep both copies in conflict resolution, and you'll get a copy operation that git understands).

>    tool that can significantly increase the efficiency of development
>    and can help to preempt bugs.

Interesting claim, but no: in the end it's up to developers. Git serves the function of a (rather stupid) librarian who keeps track of things so they don't get lost.

>    But many people would accept a process
>    that breaks Git benefits. Especially with the growth of the coreboot
>    community, I believe this gains importance.

As Stefan explained yesterday, the copy & adapt flow was established in part in response to issues with high-velocity development overwhelming the project's review capacity.

There are two approaches, each with their upsides and downsides, but from my point of view the review capacity issue is a real issue, while "git can't track copies" is not: it's easier to improve git (I'd expect they accept patches) than to ramp up on reviewers (who aren't in the next step accused of rubber-stamping commits).

One approach is to bring up a new chipset feature by feature. This might happen by bringing over code from another chipset that is sufficiently similar, editing it, and only then putting it up for review.

This means that every instance of the code will look slightly different (because devs actively have to take apart .c files and reassemble them as they reassemble the various features in the chipset), and every line of code will have to be reviewed. That's where things broke down.

On top of that, since every instance of the code will look oh-so-slightly different, it becomes harder to port fixes around along the lineage of a chipset.
The other approach is to replicate what the chip designers (probably) did: Take the previous generation, make a copy, and modify it, under the assumption that the old stuff isn't the worst place to start from.

This means that the first code drop for chip(X+1) in coreboot is likely more suitable for initializing chip(X) than chip(X+1), as the latter is what the follow-up commits provide.

The main risk here seems to be that old stuff that isn't needed anymore isn't cut out of the code. On the upside, when applying transitive trust (that is: the assumption that the original chip(X+1) code isn't worse than the chip(X) code it came from), the main concern is looking at what changed between the generations, reducing the load on the project during review.

You assert that the first approach I listed is superior to the other, but you ignore the issues we had with it. And on the project management side it's a hard sell (to put it mildly) when you expect everybody to follow your preferred scheme (without acknowledging its downsides) while claiming that it's project policy (it's not: we don't have a policy on that). We can decide to put up a policy, but we should be aware of the costs of whatever approach we mandate.

Patrick
[coreboot] Re: Copy-first platform additions (was: Re: Re: Proposal to add teeth to our Gerrit guidelines)
Hi again all, I took part in (half of) a leadership meeting last night, and I was impressed by two things that I didn't expect:

1. Few people seem to take the might of Git into account. We have a tool that can significantly increase the efficiency of development and can help to preempt bugs. But many people would accept a process that breaks Git benefits. Especially with the growth of the coreboot community, I believe this gains importance.

2. The general acceptance of unreviewed code. Yes, one can try to argue that copy-pasted code was already reviewed. But in a different context and at a different time. Such argumentation also seems to assume that reviews are mostly about coding style and bikeshedding.

On a few occasions, I've already commented about these things on Gerrit. I'll now try to take the time to detail my concerns with the above points in separate emails. I really hope that discussing this will achieve something. I believe there are some $100k to save for the active coreboot community (albeit partly virtual $, as much of the work is done by volunteers).

On 06.11.19 at 19:35, Patrick Georgi wrote:
> On Wed, Nov 06, 2019 at 12:39:59PM +0100, Nico Huber wrote:
>>> Some of the mega patches are copies of a predecessor chip (with the
>>> minimum amount of changes to integrate it in the build), that are
>>> then modified to fit the new chip.
>>
>> Ack. I think that is a problem. If this procedure is intended, I think
>> we should update our guidelines to reflect that.
> I guess first we should get on the same page with regard to
> strategy. There's a bit of flip-flopping between extremes (code
> duplication vs. silently breaking stuff).

I don't think this is about code duplication; at least to me that's a separate concern.

Nico
[coreboot] Re: LAVA support
Hello Patrick and Alexander, I am sorry for the late reply to your answers. We are looking into this as well. From what I have seen so far, I got the feeling LAVA is a bit "heavy" given the tasks that need to be performed. Please note that I am new to the LAVA system. Best Regards, Wim Vervoorn

-----Original Message-----
From: Patrick Rudolph [mailto:s...@das-labor.org] Sent: Friday, November 1, 2019 7:34 PM To: Alexander 'lynxis' Couzens Cc: Wim Vervoorn ; coreboot@coreboot.org Subject: Re: [coreboot] Re: LAVA support

Hi Wim, LAVA as the QA system is still being worked on, but currently has the lowest priority. While some hardware has been automated for firmware testing, some glue code is still missing. Some details can be found here: https://9esec.io/blog/bios-test-station/ Regards, Patrick

On 2019-11-01 04:58 PM, Alexander 'lynxis' Couzens wrote:
> Hi Wim,
>
>> The coreboot site refers to the LAVA site as the QA system. It looks
>> like some attempts have been made and then abandoned.
>
> That's so far true. I set up a LAVA environment some years ago,
> but didn't have the time to continue and maintain it.
>
>> Does anyone know the current status? Did this not work out or is
>> there another reason not to add additional platforms to the
>> environment?
>
> It worked out to test coreboot. Patrick was also looking into it this
> year. (CC: Patrick/siro). Maybe it's time to do another run of
> integrating it into the coreboot CI?
>
> Best,
> lynxis
[coreboot] Re: Call to FSP_FSP_INIT never returns back
Hello Naveen, The longer time is expected on the first boot. After that it should not take that long. If every boot takes that long, please check whether the MRC information is preserved in flash and properly restored before making the FSP call. Best Regards, Wim Vervoorn

From: Naveen Chaudhary [mailto:naveenchaudhary2...@hotmail.com] Sent: Thursday, November 7, 2019 7:02 AM To: werner@siemens.com Cc: coreboot@coreboot.org Subject: [coreboot] Re: Call to FSP_FSP_INIT never returns back

Hi fellow engineers, I got that RAM stick working when I changed the fourth byte of the SPD from mini-UDIMM (0x06) to SO-DIMM (0x03), and it's working fine... But I am still curious why mini-UDIMM didn't work out. Perhaps this form factor is not supported by the FSP? Another thing I observed is that the call to FSP_FSP_INIT takes 7-8 seconds to return. Is that expected? What could be possible ways to optimize it (even an inflexible solution is welcome)? The total boot time from power-on to loading the payload is around 20 seconds with logs disabled. Regards, Naveen
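[Editor's note] The byte Naveen patched is DDR3 SPD byte 3, the "Key Byte / Module Type" (0x02 = UDIMM, 0x03 = SO-DIMM, 0x06 = Mini-UDIMM per JEDEC). One caveat worth adding: the DDR3 SPD carries a CRC16 at bytes 126/127 (polynomial 0x1021, init 0; byte 0 bit 7 selects whether the CRC covers bytes 0-116 or 0-125), and some firmware rejects an SPD whose CRC no longer matches, so a patch should recompute it. A sketch under those assumptions (helper names are mine):

```python
def spd_crc16(data: bytes) -> int:
    """CRC-16 (poly 0x1021, init 0) as used for the DDR3 SPD checksum."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def patch_module_type(spd: bytes, new_type: int) -> bytes:
    """Rewrite DDR3 SPD byte 3 (module type) and fix the CRC at bytes 126/127."""
    b = bytearray(spd)
    b[3] = new_type  # e.g. 0x06 Mini-UDIMM -> 0x03 SO-DIMM
    # Byte 0 bit 7 selects CRC coverage: bytes 0-116 if set, else 0-125.
    crc = spd_crc16(bytes(b[:117] if b[0] & 0x80 else b[:126]))
    b[126], b[127] = crc & 0xFF, crc >> 8
    return bytes(b)

raw = bytearray(256)   # dummy 256-byte DDR3 SPD image, not a real module dump
raw[0] = 0x92          # byte 0 with bit 7 set: CRC covers bytes 0-116
raw[3] = 0x06          # Mini-UDIMM
fixed = patch_module_type(bytes(raw), 0x03)
print(hex(fixed[3]))  # → 0x3
```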