[coreboot] Xeon SP code structure

2020-02-28 Thread Andrey Petrov
Dear coreboot folks,

As some of you know, we at OSF are working on enabling Xeons in coreboot. We have
recently uploaded Skylake-SP support, which lives in src/soc/intel/skylake_sp. At the
same time we are working on enabling the next-generation SP processor. I was
wondering what a good way to structure the code might be. It feels wrong to just
throw code into src/soc/intel, especially for systems with a discrete PCH.

I'd like to hear opinions and discuss what may be a good way to structure and
organize the code. Here is what we want to achieve:

  * Make the code modular
Certain things are common to all -SP variants, and it makes sense to share
that code rather than copy-paste it.
  * Allow the same motherboard to host different CPUs (and potentially different PCHs)
The practical motivation is that some server boards support two generations
of pin-compatible CPUs whose chip code nevertheless differs. The PCH may be
the same or different.
  * Did I already say eliminate/decrease copy-pasta?

Here is the structure I have come up with so far (patch stack ending with 39017):

cpu/xeon_sp/
   ├─ Kconfig  # baseline config
   ├─ include/ # common headers
   ├─ common/  # truly common code, such as the IIO stack code and ACPI tables
   ├─ cpu/skylake-sp/
   │   ├─ include/ # cpu/northbridge defines specific to the given model
   │   ├─ Kconfig  # whatever overrides from common we need
   │   └─ *.c      # code that implements platform-specific bits
   └─ cpu/nextlake-sp/
       ├─ include/ # same
       ├─ Kconfig  # same
       └─ *.c      # same

Now then, the "common" xeon_sp code could be placed in src/northbridge. We
should probably add the Lewisburg C62x code in src/southbridge as well. Thoughts?

Alternatively, we could place everything in soc/intel/ and put the Xeon server common
code in soc/intel/common/block/ or similar. This may be the easiest way,
but it feels messy.

Are there other options, and what are their pros and cons?

thanks
Andrey
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


Re: [coreboot] x86: best approach to debug consumer hardware?

2017-07-05 Thread Andrey Petrov



On 07/05/2017 10:01 AM, Andrey Korolyov wrote:


The fourth/fifth points very likely mean that regular kernel debugging
would not help at all, and I can hardly imagine spending a few more days
getting a FireWire memory 'sniffer' to work, though this method has the
highest potential for success among the approaches, excluding the memory
interceptor (unavailable due to the pricing of the counterpart LA). What
would you suggest as the path of least effort at this point?


So you are after memory contents? Freeze the DIMMs, turn off memory 
scrambling and flash firmware that dumps the memory contents. In essence, 
a cold boot attack.
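
For illustration, a minimal sketch of such a dump routine for an early
firmware stage is below. The console call (printk), the 16-byte row width
and the idea of streaming hex over serial are my assumptions, not something
from a particular tree; the range to walk is whatever region you froze.

#include <stdint.h>
#include <console/console.h>

/* Hypothetical sketch: hexdump a physical address range over the
 * firmware console so it can be captured on the serial port. */
static void dump_memory(uintptr_t start, uintptr_t end)
{
	for (uintptr_t addr = start; addr < end; addr += 16) {
		const volatile uint8_t *p = (const volatile uint8_t *)addr;
		printk(BIOS_INFO, "%08lx:", (unsigned long)addr);
		for (int i = 0; i < 16; i++)
			printk(BIOS_INFO, " %02x", p[i]);
		printk(BIOS_INFO, "\n");
	}
}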


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] DediProg EM100Pro with Apollo Lake

2017-05-31 Thread Andrey Petrov

Hi,

On 05/31/2017 01:54 AM, Urs Ritzmann wrote:


What flash type were you emulating? I require a 1.8 Volt device with >= 128Mbit 
size.


W25Q128FW, also 1.8V. What application do you use to drive the em100? 
The official one from Dediprog or the open Linux one? Also, which em100 
firmware version do you have? Perhaps yours is old.


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] DediProg EM100Pro with Apollo Lake

2017-05-30 Thread Andrey Petrov

Hi

On 05/30/2017 12:36 AM, Urs Ritzmann wrote:


Is there a way to disable Quad IO Read(0xEB) from the flash descriptor region?


Please try clearing bits 3 (Quad I/O Read Enable) and 2 (Quad Output 
Read Enable) at offset 0x108.
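
In case it helps, a rough sketch of patching those two bits directly in an
image dump follows. Only the offset (0x108) and the bit positions (2 and 3)
come from the description above; the rest is generic, untested file plumbing,
and it assumes a little-endian host, which matches the descriptor layout.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: clear Quad I/O Read Enable (bit 3) and Quad Output
 * Read Enable (bit 2) in the 32-bit word at offset 0x108 of an image dump. */
int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <image>\n", argv[0]);
		return 1;
	}
	FILE *f = fopen(argv[1], "r+b");
	if (!f)
		return 1;
	uint32_t word;
	fseek(f, 0x108, SEEK_SET);
	if (fread(&word, sizeof(word), 1, f) != 1) {
		fclose(f);
		return 1;
	}
	word &= ~((1u << 3) | (1u << 2));	/* clear the two read-enable bits */
	fseek(f, 0x108, SEEK_SET);
	fwrite(&word, sizeof(word), 1, f);
	fclose(f);
	return 0;
}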


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] DediProg EM100Pro with Apollo Lake

2017-05-30 Thread Andrey Petrov

Hi,

On 05/29/2017 06:29 AM, Urs Ritzmann wrote:


Are there any known quirks required to use the DediProg EM100-Pro flash 
emulator with Apollo Lake?


The only quirk I know of is that the emulated part must support SFDP. The ROM 
boot code is very sensitive to SFDP and will halt if there is no response 
to SFDP commands.


Just adding --em100 worked for me. What does the em100 show in trace mode?

Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Easier accessibility for coreboot/flashrom wikipedias? What do you think?

2017-05-22 Thread Andrey Petrov

Hi,

On 05/19/2017 08:17 AM, Peter Stuge wrote:


I think a more workable and sustainable solution is to enable more
people to grant write access. Another project uses an IRC bot for
this task, so that a group of trusted users on the IRC channel(s)
can grant write access immediately. It works really well. However,
it requires some programming to implement. Please volunteer to work
on creating this solution if you can. I guess that it's about a week
of R&D.


Just put the wiki contents in git and make updates go through Gerrit? I 
know at least one open-source project that does that, and it seems to work.


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] SPI Flash does not work on Intel LeafHill CRB

2017-03-29 Thread Andrey Petrov

Hi,

On 03/29/2017 07:52 PM, Toan Le manh wrote:

Hi Andrey,

Even when I tried flashing the IFWI .bin file released by Intel, the board
still doesn't boot.


Ah, I think I misunderstood you. You are saying that even the stock 
Intel-provided image didn't work. If that is the case then yes, flashing is 
probably the issue. Can you read the flash back right after flashing and 
compare it to the file you wanted to flash? That way you can tell whether 
the flashing works.


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] SPI Flash does not work on Intel LeafHill CRB

2017-03-29 Thread Andrey Petrov

Hi,

On 03/29/2017 06:48 PM, Toan Le manh wrote:

@Andrey: The flashing is OK, but the board doesn't boot anything. The
POST code is always "0".
Even when I tried flashing the IFWI .bin file released by Intel, the board
still doesn't boot.
Where can I select "Use IFWI stitching"?


Run 'make nconfig' and select the board; there will be an option to "use IFWI 
stitching". You will then need to provide the path to the IFWI file.


Best
Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] SPI Flash does not work on Intel LeafHill CRB

2017-03-29 Thread Andrey Petrov

Hi,

On 03/29/2017 04:29 AM, Toan Le manh wrote:

I got the LeafHill CRB from Intel and tried flashing the SPI chip (Winbond
W25Q128FW) using a BeeProg2C.
However, nothing worked. The status code remained "0".


Are you saying that the flashing didn't work, or that the board doesn't boot afterwards?

If the latter, did you select "Use IFWI stitching"?

Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] [PATCH] nb/intel/nehalem/raminit.c: Add timeouts when waiting for heci.

2017-03-27 Thread Andrey Petrov

Hi,

On 03/27/2017 01:05 PM, Denis 'GNUtoo' Carikli wrote:

Since, until now, the code running on the management engine is:
- signed by its manufacturer
- proprietary software, without corresponding source code
it can be desirable to run the least possible amount of such
code, which is what me_cleaner[1] enables.

It does so by removing partitions of the management engine
firmware; however, when doing so, the HECI interface might
not be present anymore.

So it is desirable not to have the RAM initialisation code
wait forever for the HECI interface to appear.


I do not know how me_cleaner operates, but I believe the security engine may 
be going into "recovery mode". This means it may never indicate a ready 
status. However, the fact that it is in recovery mode can be determined 
programmatically from one of the FWSTS registers. So you could check whether 
the security engine is in recovery and simply skip the wait altogether. Try 
looking at the "Current state" bits or the "OP mode" bits; I suspect one of 
them will change after me_cleaner. FWSTS sits in the ME PCI device config 
space and should be easily accessible. Typically the FWSTS registers sit at 
offsets 0x40, 0x48, 0x60 and so on. Please try comparing them before and 
after me_cleaner.
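
A rough sketch of what such a check could look like follows. This is untested;
the device address (0:16.0), the exact header names, and the FWSTS1 bit layout
(current state in bits 3:0, operation mode in bits 19:16) are my assumptions
for this generation and should be double-checked against the chipset docs and
against the before/after values suggested above.

#include <stdint.h>
#include <console/console.h>
#include <device/pci_ops.h>

#define ME_DEV		PCI_DEV(0, 0x16, 0)
#define ME_FWSTS1	0x40

/* Sketch only: read FWSTS1 and report whether the ME looks like it is in a
 * recovery/non-normal mode, in which case the HECI wait could be skipped. */
static int me_looks_like_recovery(void)
{
	uint32_t fwsts1 = pci_read_config32(ME_DEV, ME_FWSTS1);
	unsigned int current_state = fwsts1 & 0xf;	/* "Current state" bits (assumed 3:0) */
	unsigned int op_mode = (fwsts1 >> 16) & 0xf;	/* "OP mode" bits (assumed 19:16) */

	printk(BIOS_DEBUG, "ME: FWSTS1=0x%08x state=%u opmode=%u\n",
	       fwsts1, current_state, op_mode);

	/* Compare these fields before and after me_cleaner and key off
	 * whichever one changes; treating any non-zero OP mode as "do not
	 * wait" is just a placeholder policy. */
	return op_mode != 0;
}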


Best,
Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Question about PK location

2017-03-16 Thread Andrey Petrov

Hi,

On 03/16/2017 07:44 AM, Rafael Machado wrote:


/"Intel Boot Guard is intended to protect against this scenario. When
your CPU starts up, it reads some code out of flash and executes it.
With Intel Boot Guard, the CPU verifies a signature on that code before
executing it[1]. The hash of the public half of the*_signing key is
flashed into fuses on the CPU_*. It is the system vendor that owns this
key and chooses to flash it into the CPU, not Intel.  "/
/
/
/
/
I would just like to know if some intel spec or something similar has
more details about the place this key can be stored.
Does anyone here have this information?


I believe that is stored in FPF (Field Programmable Fuses).
There are some details here:
https://embedded.communities.intel.com/thread/8670

Best,
Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Intel Leafhill : Linux SATA driver fails when used with coreboot+grub

2017-03-05 Thread Andrey Petrov



On 03/05/2017 10:58 AM, Gailu Singh wrote:

Hi Again,

I tried to find out the details for following error

ata1: SATA link down (SStatus 4 SControl 300)

As per status register description

SStatus 4 : Phy in offline mode as a result of the interface being
disabled or running in a BIST loopback mode

Is there any chance that coreboot/grub/Linux is putting SATA into BIST
loopback mode?

I am trying to understand who is responsible for the SATA SStatus of 4, and
the possible candidates are
a) coreboot
b) grub
c) Linux


You already mentioned that coreboot+Tianocore worked fine under Linux. I 
think it is safe to assume the problem does not originate in the kernel. 
Why don't you check what Tianocore does with SATA? I suspect Tianocore may 
be injecting some ACPI tables for the SATA controller which the kernel 
picks up. Perhaps you could dump, decompile and compare the ASL from 
coreboot+Tianocore and coreboot+grub and see what is different?


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Coreboot at Apollo Lake Oxbohill CRB

2017-02-25 Thread Andrey Petrov



On 02/25/2017 11:52 AM, Gailu Singh wrote:

Thank you once again for your help and support.

I managed to build the 16MB coreboot image with the FSP, ifwi and
descriptor.bin and flashed it to the board. When I power on the board, I see
that the red LED (DS3B1) is ON, which seems to indicate some error. The user
guide only provides descriptions for 4 LEDs (DS6B1, DS2C1, DS6B2, D5L1), so I
assume the red LED is indicating some error condition.

Currently:
DS6B1 : Green ON
DS6B2 : Green ON
DS2C1 : OFF
DS3B1 : RED ON

No Output on serial console or HDMI.


A red light is never a good sign. Do you have a way to know whether PLTRST# 
has been de-asserted?


Another thing is that when you stitch with FIT you need to turn off Boot 
Guard completely. I suspect you have it on, and that makes the CSE check 
signatures. Please check that under Platform Protection -> Boot Guard 
Configuration it is set to "Boot Guard profile 0 - legacy". Then try 
re-stitching the FIT image and rebuilding coreboot again.


- Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Coreboot at Apollo Lake Oxbohill CRB

2017-02-25 Thread Andrey Petrov

On 02/25/2017 11:05 AM, Gailu Singh wrote:

Thanks Andrey,

I managed to extract the required blob with SplitFspBin.py. I have a prebuilt 
IFWI binary; the only missing part is to find/generate the correct 
descriptor.bin.



Just run: dd if=fitimage.bin bs=4096 count=1 of=descriptor.bin
where fitimage.bin is the output from the FIT tool.

-Andrey
-- 
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] Coreboot at Apollo Lake Oxbohill CRB

2017-02-25 Thread Andrey Petrov


On 02/25/2017 02:48 AM, Gailu Singh wrote:

>>you need a bunch of blobs (of course), most importantly fitimage.bin and fsp.

>>Please use https://review.coreboot.org/#/c/18479/3 as a starting point.
>>That is for Leafhill. But once you apply that patch, select mainboard
>>intel/leafhill in 'make nconfig', put the sacred blobs in the designated
>>location and 'make' should give you flashable coreboot.rom.
I pulled the Leafhill patches and yes, I get options to specify the FSP when 
leafhill is selected. However, I am not clear about the difference between FSP-M.fv and 
FSP-S.fv. I have FSP.bsf and FSP.fd files for the FSP. Can you please let me know 
how to create the required FSP blobs from the FSP.bsf and FSP.fd files?

You need to use a script to break the big blob into smaller blobs:
https://github.com/tianocore/edk2/blob/master/IntelFsp2Pkg/Tools/SplitFspBin.py

$ SplitFspBin.py split FSP.fd

Here is a video on FSP 2.0 that explains which blob does what:

https://www.youtube.com/watch?v=uzfiTiP9dEM

Here is the formal FSP 2.0 spec:

http://www.intel.com/content/www/us/en/embedded/software/fsp/fsp-architecture-spec-v2.html

Best,
Andrey
-- 
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] Coreboot at Apollo Lake Oxbohill CRB

2017-02-24 Thread Andrey Petrov

Hi,

On 02/24/2017 09:19 PM, Gailu Singh wrote:

Hi Experts,

I have built a coreboot image for Apollo Lake and am trying to boot the
Oxbohill CRB, but there is no console output or display on the HDMI port.


You need a bunch of blobs (of course), most importantly fitimage.bin and 
the FSP.


Please use https://review.coreboot.org/#/c/18479/3 as a starting point. 
That is for Leafhill, but once you apply that patch, select mainboard 
intel/leafhill in 'make nconfig', put the sacred blobs in the designated 
location, and 'make' should give you a flashable coreboot.rom.


Best
Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Add coreboot storage driver

2017-02-14 Thread Andrey Petrov

Hi,

On 02/13/2017 11:16 AM, Nico Huber wrote:

On 13.02.2017 08:19, Andrey Petrov wrote:

For example Apollolake is struggling to finish firmware boot with all
the whistles and bells (vboot, tpm and our friendly, ever-vigilant TXE)
under one second.

Can you provide exhaustive figures, which part of this system's boot
process takes how long? That would make it easier to reason about where
"parallelism" would provide a benefit.


Such data is available. Here is a boot chart I drew a few months back:
http://imgur.com/a/huyPQ

I color-coded the different work types. (Some blocks are coded incorrectly, 
please bear with me.)


So what we can see is that everything is serial and there is a great deal 
of waiting. For that specific SDHCI case you can see "Storage device 
initialization" happening in depthcharge. That is the CMD1 that you need 
to keep sending to the controller. As you can see, it completes in 130ms. 
Unfortunately you can't just send CMD1 once and go about your business: you 
need to poll the readiness status and keep sending CMD1 again and again. 
Also, it is not always 130ms; it tends to vary, and the worst case we have 
seen was over 300ms. Another one is "kernel read", which is pure IO and 
takes 132ms. If you invest some 300ms in training the link to HS400 (which 
has to happen on every boot on every board) you can read it in just 10ms. 
Naturally you can't see HS400 in the picture, because enabling it that late 
in the boot flow would be counterproductive.
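
For context, the CMD1 dance is essentially a busy poll against a deadline,
roughly like the sketch below. It uses coreboot's stopwatch helpers, but
emmc_send_cmd1()/emmc_is_ready() are hypothetical stand-ins for the real
SDHCI driver calls, and the 300ms budget and 10ms resend interval are just
the numbers discussed in this thread.

#include <delay.h>
#include <timer.h>

/* Hypothetical driver hooks standing in for the real SDHCI code. */
void emmc_send_cmd1(void);
int emmc_is_ready(void);

/* Why CMD1 ties up the CPU: the device must be re-asked until it reports
 * ready, which took 130-300ms on the parts we measured. */
static int emmc_wait_ready(void)
{
	struct stopwatch sw;

	stopwatch_init_msecs_expire(&sw, 300);	/* worst case observed */
	do {
		emmc_send_cmd1();
		if (emmc_is_ready())
			return 0;
		mdelay(10);			/* resend interval */
	} while (!stopwatch_expired(&sw));

	return -1;				/* never became ready */
}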


That is essentially the motivation for looking into starting this CMD1 
polling and HS400 link training as early as possible. However, fixing this 
particular issue would just be a per-platform fix. I was hoping we could 
come up with a model that adds parallelism as a generic, reusable feature, 
not just a quick hack.


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Add coreboot storage driver

2017-02-13 Thread Andrey Petrov

Hi,

On 02/13/2017 12:31 PM, ron minnich wrote:

Another idea just popped up: Performing "background" tasks in udelay()
/ mdelay() implementations ;)


that is adurbin's threading model. I really like it.

A lot of times, concurrency will get you just as far as ||ism without
the nastiness.


But how do you guarantee that code will get a slice of execution time when 
it needs it? For example, for eMMC link training you need to issue certain 
commands at certain time intervals, let's say every 10ms. How do you make 
sure that happens? You can keep track of time and see when the next piece 
of work needs to be scheduled, but how do you guarantee that you enter the 
udelay code often enough?
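
To make that concrete: the deadline-tracking part is easy enough; it is the
"who calls this often enough" part that is the open question. A hypothetical
sketch, assuming the usual mono_time helpers from timer.h and reusing the
made-up emmc_send_cmd1() from the earlier sketch:

#include <timer.h>

void emmc_send_cmd1(void);	/* hypothetical, as before */

static struct mono_time next_deadline;

/* Run one slice of background work if its deadline has passed. The hard
 * part is not this function but guaranteeing that something (udelay,
 * mdelay, an explicit poll point) actually calls it every ~10ms. */
static void run_pending_work(void)
{
	struct mono_time now;

	timer_monotonic_get(&now);
	if (mono_time_before(&now, &next_deadline))
		return;				/* not due yet */

	emmc_send_cmd1();			/* placeholder periodic step */

	next_deadline = now;
	mono_time_add_msecs(&next_deadline, 10);
}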


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Add coreboot storage driver

2017-02-13 Thread Andrey Petrov

Hi,

On 02/13/2017 10:22 AM, Timothy Pearson wrote:




For [2] we have been working on prototype for Apollolake that does
pre-memory MPinit. We've got to a stage where we can run C code on
another core before DRAM is up (please do not try that at home, because
you'd need custom experimental ucode).


In addition to the very valid points raised by others on this list, this
note in particular is concerning.  Whenever we start talking about
microcode, we're talking about yet another magic black box that coreboot
has no control over and cannot maintain.  Adding global functionality
that is so system specific in practice as to rely on microcode feature
support is not something I ever want to see, unless perhaps the relevant
portions of the microcode are open and maintainable by the coreboot project.


I am just talking about BIOS shadowing. This is a pretty standard feature, 
it's just that not every SoC implements it by default. Naturally, we would 
only be adding new code if it became publicly available. I believe shadowing 
works on many existing CPUs, so no, it is not a case of "use this custom 
NDA-only ucode" to get things working.


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Add coreboot storage driver

2017-02-13 Thread Andrey Petrov

Hi,

On 02/13/2017 12:21 AM, Zoran Stojsavljevic wrote:


IBVs can work on this proposal, and see how BIOS boot-up time will improve (by 
this parallelism)


There is no need to wait for anybody in order to see real-world benefits.

The original patch, which trains the eMMC link, already saves some 50ms. 
However, MP init kicks in very late; that is a limitation of the current 
approach, where MP init depends on DRAM being available. If you move MP 
init earlier, you can get roughly 200ms of savings. On Apollolake we have a 
prototype where MP init happens in the bootblock, and that already reduces 
boot time by some 200ms.



Since, very soon, you'll run into shared HW resources, and then you'll need
to implement semaphores, atomic operations and God knows what!?


Fortunately, divine powers have nothing to do with it. Atomic operations 
are already implemented, and spinlocks are in as well.


What other major issues do you see, Zoran?

thanks
Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Add coreboot storage driver

2017-02-13 Thread Andrey Petrov

Hi,

On 02/13/2017 06:05 AM, Peter Stuge wrote:

Andrey Petrov wrote:



Nowadays we see firmware getting more complicated.


Sorry, but that's nonsense. Indeed MSFT is pushing more and more
complicated requirements into the EFI/UEFI ecosystem, but that's
their problem, not a universal one.


I wish it were only MSFT. Chrome OS systems do a lot of work early on that 
is CPU intensive, and there is waiting on secure hardware as well. Then 
there is the IO problem that the original patch tries to address.



Your colleague wants to speed up boot time by moving storage driver
code from the payload into coreboot proper, but in fact this goes
directly against the design goals of coreboot, so here's a refresh:

* coreboot has *minimal* platform (think buses, not peripherals)
  initialization code

* A payload does everything further.


This is a nice and clean design, no doubt about it. However, it is serial.

Another design goal of coreboot is to be fast. Do "be fast" and "be 
parallel" conflict?



For example Apollolake is struggling to finish firmware boot with all
the whistles and bells (vboot, tpm and our friendly, ever-vigilant TXE)
under one second. Interestingly, great deal of tasks that needs to be
done are not even computation-bound. They are IO bound.


How much of that time is spent in the FSP?


FSP is about 250ms grand total. However, that is not all that great if you 
compare it to the IO needed to load the kernel over SDHCI (130ms) and to 
initialize the eMMC device itself (100-300ms). Not to mention other IO-bound 
tasks that could very well be started in parallel early.



how to create infrastructure to run code in parallel in such early stage


I think you are going in completely the wrong direction.

You want a scheduler, but that very clearly does not belong in coreboot.


Actually I am just interested in getting things to boot faster. It can 
be scheduling or parallel execution on secondary HW threads.



Shall we just add "run this (mini) stage on this core" concept?
Or shall we add tasklet/worklet structures


Neither. The correct engineering solution is very simple - adapt FSP
to fit into coreboot, instead of trying to do things the other way
around.


The FSP definitely needs a lot of love to be more usable, I couldn't agree 
more. But if hardware needs to be waited on and your initialization process 
is serial, you will end up wasting time on polling while you could be 
doing something else.



This means that your scheduler lives in the payload. There is already
precedent - SeaBIOS also already implements multitasking.


Unfortunately, that is way too late to make even a dent in the overall boot time.

Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Add coreboot storage driver

2017-02-12 Thread Andrey Petrov

Hi there,

tl;dr:
We are considering adding early parallel code execution in coreboot. We 
need to discuss how this can be done.


Nowadays we see firmware getting more complicated. At the same time, CPUs 
do not necessarily keep up. Furthermore, recent increases in performance can 
largely be attributed to parallelism and stuffing more cores on the die 
rather than sheer per-core computing power. However, firmware typically runs 
on just one CPU and is effectively barred from all the parallelism goodies 
available to OS software.


For example, Apollolake is struggling to finish firmware boot with all the 
whistles and bells (vboot, TPM and our friendly, ever-vigilant TXE) in under 
one second. Interestingly, a great deal of the tasks that need to be done 
are not even computation-bound; they are IO-bound. In the SDHCI case below, 
it is possible to train the eMMC link to switch from the default 
low-frequency single-data-rate mode (SDR50) to a high-frequency 
dual-data-rate mode (HS400). This link training increases eMMC throughput by 
a factor of 15-20. As a result, the time it takes to load the kernel in 
depthcharge goes down from 130ms to 10ms. However, the training sequence 
requires constant, frequent CPU attention. As a result, it doesn't make 
sense to turn on the higher-frequency modes, because you don't get any net 
win. We also experimented with starting the work in the current MP init 
code. Unfortunately, it starts pretty late in the game and we do not have 
enough parallel time to reap a meaningful benefit.


In order to address this problem we can do the following things:
1. Add a scheduler, early or not
2. Add early MP init code

For [1], I am aware of one scheduler discussion in 2013, but that was a 
long time ago and things may have moved on a bit. I do not want to be a 
necromancer and reanimate an old discussion, but does anybody see it as a 
useful/viable thing to do?


For [2], we have been working on a prototype for Apollolake that does 
pre-memory MP init. We've got to a stage where we can run C code on 
another core before DRAM is up (please do not try that at home, because 
you'd need custom experimental ucode). However, there are many questions 
about which model to use and how to create the infrastructure to run code 
in parallel at such an early stage. Shall we just add a "run this (mini) 
stage on this core" concept? Or shall we add tasklet/worklet structures 
that would allow code to run and, when the migration to DRAM happens, have 
the infrastructure take care of managing the context and potentially 
resuming it? One problem is that code running with CAR needs to stop by the 
time the system is ready to tear down CAR and migrate to DRAM. We don't 
want to delay that by waiting on such a task to complete. At the same time, 
certain tasks may have widely fluctuating run times, so you would want to 
be able to continue them. It may actually be possible to do just that if we 
use the same address space for CAR and DRAM. But come to think of it, this 
is just the tip of the iceberg, and there are packs of other issues we 
would need to deal with.
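
To make the tasklet/worklet idea a bit more concrete, one possible shape for
such a structure is sketched below. This is purely a strawman for discussion;
nothing like it exists in the tree, and all names are made up.

#include <stddef.h>

/* Strawman of a CAR-safe worklet: it is re-entered one bounded slice at a
 * time, so a core can drop it at CAR teardown and, if the address spaces
 * match, resume it from DRAM later. */
enum worklet_status {
	WORKLET_DONE,
	WORKLET_AGAIN,		/* still has work, call me again */
};

struct worklet {
	const char *name;
	enum worklet_status (*step)(void *ctx);	/* one bounded slice of work */
	void *ctx;		/* state that must survive the CAR-to-DRAM migration */
	int cpu;		/* which core should run this worklet */
};

/* Run one slice of each registered worklet; callable from any core, and
 * from either the CAR or DRAM environment. */
enum worklet_status worklet_step_all(struct worklet *worklets, size_t count);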


Does any of that make sense? Perhaps somebody has thought about this before? 
Let's see what other ways there may be to deal with this challenge.


thanks
Andrey


On 01/25/2017 03:16 PM, Guvendik, Bora wrote:

Port the sdhci and mmc drivers from depthcharge to coreboot. The purpose is
to speed up boot time by starting storage initialization on another CPU in
parallel. On the Apollolake systems we checked, we found that the CPU can
take up to 300ms sending CMD1s to the hardware, so we can avoid this delay
by parallelizing.

- Why not add this parallelization in the payload instead?
  There is potentially more time to parallelize things in coreboot.
  Payload execution is much faster, so we don't get much parallel
  execution time.

- Why not send CMD1 once in coreboot to trigger power-up and let the HW
  initialize using only 1 CPU?
  The JEDEC spec requires the CPU to keep sending CMD1s while the hardware
  is busy (section 6.4.3). We tested with real-world hardware, and it
  indeed didn't work with a single CMD1.

- Why did you port the driver from depthcharge?
  I wanted to use a driver that is proven, to avoid bugs. It is also
  easier to apply patches back and forth.

https://review.coreboot.org/#/c/18105



Thanks

Bora







--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Flash ROM access during boot

2017-01-21 Thread Andrey Petrov

Hi,

On 01/21/2017 02:30 PM, Paul Menzel via coreboot wrote:

Dear coreboot folks,


Playing around with the trace feature of the Dediprog EM100Pro, I
noticed several flash ROM accesses until the payload is loaded.

Are there ways or strategies to preload the whole flash ROM chip
content into memory for faster access right after RAM is set up for
example? What does that depend on? Does that make any sense at all?


Preloading the whole flash is a bad idea, because you have to pay the IO 
cost of a whole-flash read up front, and then most of it is going to be 
wasted anyway, because you likely need only parts of the flash at that 
point.


On Apollolake at least, the SPI hardware sequencer has some internal cache, 
and combined with the regular CPU cache (just set MTRRs to cover the 
memory-mapped SPI flash) it seems to work effectively. There was an issue we 
found recently where ramstage never cached the memory-mapped BIOS area, but 
that was addressed swiftly.


What you could do is pre-populate the cache with flash data right before it 
is going to be used. You could read just one byte from each page of the 
memory-mapped payload and have the SPI hardware read the whole page in the 
'background', courtesy of the prefetchers. This may be useful if you are in 
the last stages of ramstage, doing PCI device IO and waiting/spinning: by 
the time you want to load the payload, it is already "preloaded" into the 
cache.


However, on Apollolake the grand total for IO is less than 100ms, and even 
less for the payload alone, so I suspect the benefits from such a hack would 
be pretty small.


Andrey

--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot


Re: [coreboot] Newbie question about motherboard support

2016-11-03 Thread Andrey Petrov

Hi,

On 11/03/2016 01:17 PM, Vasiliy Tolstov wrote:


> I don't know enough about Intel to tell you whether your board is
> using BootGuard or how you would find that out, though. If it does,
> you're probably out of luck. (If it doesn't, it's true that you still
> need blobs... but you can usually extract these from your vendor
> firmware and work them into a coreboot image.)

I'm still waiting for the Intel guys; I think they can say more about 
this.


On Braswell you'll need blobs (of course). You need the ME (Management 
Engine) and FSP (Firmware Support Package) blobs.
I am not sure whether you can legally obtain the blobs from Intel, but you 
can extract them from Chrome OS images.


Best,
Andrey
-- 
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] How to obtain FSP v2.0 for ApolloLake?

2016-06-21 Thread Andrey Petrov

Hi Rolf,

I think the blobs haven't been uploaded yet because ApolloLake has not been 
officially announced.
You should try contacting Intel to get the blobs. I know Intel does 
offer blobs to certain customers who use FSP-based solutions.


Andrey

On 06/21/2016 06:25 AM, Rolf Evers-Fischer wrote:

Coreboot for Intel ApolloLake needs FSP v2.0 blobs. Unfortunately FSP v2.0 is
not offered at www.intel.com/fsp.
Do you know how or where I could get it?

Kind regards,
  Rolf



--
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot