On 04/27/2016 08:50 AM, Simon Glass wrote:
Hi Stephen,

On 25 April 2016 at 13:25, Stephen Warren <swar...@wwwdotorg.org> wrote:
On 04/23/2016 11:14 AM, Simon Glass wrote:

Hi Stephen,

On 19 April 2016 at 14:59, Stephen Warren <swar...@wwwdotorg.org> wrote:

From: Stephen Warren <swar...@nvidia.com>

U-Boot is compiled for a single board, which in turn uses a specific SoC.
There's no need to make runtime decisions based on SoC ID. While there's
certainly an argument for making the code support different SoCs at
run-time, the Tegra code is so far from that possible ideal that the
existing runtime code is an anomaly. If this changes in the future, all
runtime decisions should likely be based on DT anyway.

Signed-off-by: Stephen Warren <swar...@nvidia.com>
---
   arch/arm/mach-tegra/ap.c               | 106 ++++++++++-----------------------
   arch/arm/mach-tegra/cache.c            |  20 +++----
   arch/arm/mach-tegra/cpu.c              |  16 ++---
   arch/arm/mach-tegra/cpu.h              |   6 --
   arch/arm/mach-tegra/tegra20/warmboot.c |  20 ++-----
   5 files changed, 51 insertions(+), 117 deletions(-)


What exactly is missing to prevent multi-arch support?

In a word: everything :-)

Pretty much all decisions in core architecture code, core Tegra code,
drivers, and even board files are currently made at compile time. For
example, consider drivers where the register layouts are different between
different SoCs; not just new fields added, but existing fields moved to
different offsets. Right now, we handle this by changing the register struct
definition at compile time. To support multiple chips, we'd have to either
(a) link in n copies of the driver, one per register layout, or (b) rework
the driver to use #defines and runtime calculations for register offsets,
like the Linux kernel drivers do. Tegra USB is one example. The pinmux and
clock drivers have a significantly different sets of pins/clocks/resets/...
per SoC, and enums/tables describing those sets are currently configured at
compile time. Some PMIC constants (e.g. vdd_cpu voltage) are configured at
compile-time, and even differ per board.
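To make the contrast concrete, here is a rough sketch of the two options (illustrative only; the layout and offsets below are not the real Tegra USB registers):

/* Option (a): layout fixed at compile time, one definition per SoC */
struct usb_ctlr {
#ifdef CONFIG_TEGRA20
	u32 susp_ctrl;		/* field sits at one offset on this SoC */
#else
	u32 reserved[4];
	u32 susp_ctrl;		/* same field, different offset elsewhere */
#endif
};

/* Option (b): Linux-style runtime offsets, one table per SoC */
struct usb_soc_info {
	unsigned int susp_ctrl_offset;
};

static u32 usb_read_susp_ctrl(void __iomem *base,
			      const struct usb_soc_info *info)
{
	return readl(base + info->susp_ctrl_offset);
}

Option (b) is what the kernel drivers do; converting the U-Boot drivers to it is mostly mechanical, but it touches every register access.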

I wonder how far we would get by converting clock, pinctrl, reset to
driver model drivers?

Well, I expect we'll find out soon. The next SoC has radically different clock/reset mechanisms, so we'll need to switch to standardized clock/reset APIs on Tegra to isolate drivers from those differences, and I imagine that work would also involve converting to DM, since any standard APIs probably assume use of DM. I haven't investigated this in detail yet, though.
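For what it's worth, the shape of a DM clock driver would be roughly the following (a minimal sketch only: the driver name, compatible string and stubbed-out ops are invented, and a real driver would fill them in from SoC-specific tables/registers):

#include <dm.h>
#include <clk-uclass.h>

static ulong tegra_next_clk_get_rate(struct clk *clk)
{
	/* Look up the rate of the clock identified by clk->id */
	return 0;
}

static int tegra_next_clk_enable(struct clk *clk)
{
	/* Ungate the clock identified by clk->id */
	return 0;
}

static const struct clk_ops tegra_next_clk_ops = {
	.get_rate	= tegra_next_clk_get_rate,
	.enable		= tegra_next_clk_enable,
};

static const struct udevice_id tegra_next_clk_ids[] = {
	{ .compatible = "nvidia,tegra-next-car" },	/* invented */
	{ }
};

U_BOOT_DRIVER(clk_tegra_next) = {
	.name		= "clk_tegra_next",
	.id		= UCLASS_CLK,
	.of_match	= tegra_next_clk_ids,
	.ops		= &tegra_next_clk_ops,
};

Once clients hold a struct clk obtained from the DT rather than calling SoC-specific functions directly, supporting a new SoC's mechanism is largely a matter of binding a different driver.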

Shouldn't we head towards that rather than making it harder?

I don't see any need for that, no.

U-Boot is built for a specific board (or in some cases a set of extremely
closely related boards, such as the RPi A/B/A+/B+). There's no need
to determine almost anything at run-time since almost all information is
known at compile time, with exceptions such as standardized enumerable buses
like USB and PCIe. If we support multiple HW in a single binary, it gets
bloated with code that simply isn't going to be used, since all the extra
code is either for a platform that the build won't be installed on (e.g.
clock/pinmux tables), or is overhead to add runtime detection of which block
of code to use, which simply isn't needed in the current model.

It's not so much that. Presumably a build for a particular board would
not include support for an SoC it doesn't have. But it is still
useful to build the code. For example, it would be nice to have an
overall Tegra build that enables all SoCs to avoid building every
board.

So it is a serious question. I suspect the main impediment may be
moving the clock and other core stuff to driver model.

Yes, everything is a bit too tightly coupled at the moment, and in many cases each SoC-specific implementation exposes the same global symbols which clients use. DM or similar conversions may well solve a lot of this.
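To illustrate the kind of coupling I mean (the prototype below is representative rather than exact):

/*
 * One prototype in a shared arch header, with a separate definition in each
 * of tegra20/clock.c, tegra114/clock.c, ..., so exactly one SoC's version
 * can be linked in and every caller binds to it at link time.
 */
unsigned long clock_get_periph_rate(int periph_id);

With DM, callers would instead dispatch through a uclass ops table bound to a DT node, so which implementation gets used is no longer baked in at link time.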

In my opinion, firmware/bootloaders run on a single specific board, whereas
full-featured operating systems support multiple systems.

Except when the boards are pretty similar. Also, doesn't barebox have
only one build for Tegra?

I haven't looked at Barebox much. IIRC it only supports Tegra20 and not later SoCs, which could simplify things. Besides, I'm not arguing that it's impossible to make a unified binary, simply that I don't see any need to do so, except perhaps for your compile-coverage suggestion.

As an aside, I've wondered whether U-Boot should be split into multiple
parts; one HW-specific binary providing various drivers (e.g. via DM-related
APIs?) and the other containing just high-level user-interface code such as
the shell, high-level USB/... protocols, which would only call into those
APIs. Still, I don't think we're anywhere close to that, and I'm not aware
that it's a goal of the project at the moment.

Well it gets built as one binary, but there's a pretty clear
separation in the code, at least with driver model. What's the purpose
of this?

It would allow the HW-agnostic portion to be compiled once (or once per CPU ISA) and re-used with any of the HW-/board-specific "driver" blobs. It'd get us to a "single binary" for the generic stuff, but without requiring that for the HW-specific code. Perhaps the generic portion could even run on top of other driver stacks if they implemented the API it needed! However, this does ignore potential feature differences in the common binary, e.g. someone might want the dfu/ums commands, but someone else might not need them and hence consider them bloat. Still, those configurations would be differentiated by feature more than by HW, so it might still be useful.
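Purely as a thought experiment, the boundary might look something like this (entirely hypothetical; none of these names exist in U-Boot today):

/* Entry points a HW-specific "driver blob" would export to the generic,
 * HW-agnostic front end (shell, commands, high-level protocols). */
struct hw_services {
	int (*serial_putc)(char c);
	int (*serial_getc)(void);
	int (*block_read)(unsigned int devnum, unsigned long start,
			  unsigned long blkcnt, void *buffer);
	int (*net_send)(const void *packet, int length);
};

/* The generic binary is handed this table at startup and never touches
 * hardware registers itself. */
int generic_main(const struct hw_services *hw);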