On Thursday 24 April 2014 13:53:47 Kukjin Kim wrote:
> Arnd Bergmann wrote:
> >
> > On Wednesday 23 April 2014 15:23:16 Liviu Dudau wrote:
> > > > Unfortunately we are in a tricky situation on arm64 because we have
> > > > to support both server-type SoCs and embedded-type SoCs. In an
> > > > embedded system, you can't trust the boot loader to do a proper
> > > > setup of all the hardware, so the kernel needs full control over
> > > > all the initialization. In a server, the initialization is the
> > > > responsibility of the firmware, and we don't want the kernel to
> > > > even know about those registers.
>
> BTW, actually we can trust the boot loader to do the required things
> on mobile too ;-)
Not really. Those boot loaders do the bare minimum to get one kernel
running. There is no testing done with distro kernels (usually because
they won't work anyway), and they wouldn't touch (or describe in DT)
any hardware that isn't actually used by the kernel that initially
ships with the device. If we could trust the boot loader to do all the
necessary setup, we would get rid of a lot of kernel code.

> > > > My hope is that all server chips use an SBSA compliant PCIe
> > > > implementation, but we already have X-Gene, which is doing server
> > > > workloads with a nonstandard PCIe, and it's possible that there
> > > > will also be server-like systems with a DesignWare PCIe block
> > > > instead of an SBSA compliant one. We can still support those, but
> > > > I don't want to see more than one implementation of dw-pcie
> > > > on servers. Just like we have the generic PCIe support that Will
> > > > is doing for virtual machines and SBSA compliant systems, we
> > > > would do one dw-pcie variant for all systems that come with a
> > > > host firmware and rely on it being set up already.
>
> OK, and I think just one device driver would be nice, whether for
> embedded or server.

The runtime parts (e.g. config space access) should definitely be
shared, and we should also share any peripheral drivers. However, basic
infrastructure like PCI on servers should just work, and you really
shouldn't require any driver code for it. SBSA gets this part right by
defining the config space layout, so we can have a very trivial PCI
host driver for all SBSA systems, even if the same hardware needs a SoC
specific driver for embedded systems that don't initialize the PCI host
at boot time.

There is also nothing wrong with embedded systems doing it the same way
as servers and initializing everything before Linux starts. We just
need to be prepared to add fixups when someone gets it wrong.

> > I think we should treat DW-PCIe in the same way if anyone attempts
> > to use that in a server, e.g. in SBSA level 0. As you can see here,
>
> Agreed. BTW, how about GICv2m for level 1? Can it be supported the
> same way, in one DW-PCIe driver?

I don't think anybody has done a DT binding for GICv2m or submitted a
patch yet, but I'm pretty sure it can be done. We just need to come up
with a proper DT representation to pick which MSI controller is used by
default. Hardware-wise you should be able to mix any combination of MSI
controllers, but I would suspect that if there is a GICv2m or higher,
we would always want to use that for performance reasons. The MSI
controller in the dw-pcie block just sends a normal interrupt to the
GIC, which means you lose all the benefits of using MSI.

> > even when reusing hardware between Exynos and GH7, you can't just
> > use the same init code, so it has to be in firmware to be any good.
> > On a real server platform, you can't require a kernel upgrade every
> > time a new SoC comes out; any basic functionality like PCI just has
> > to work with existing OS images.
>
> OK, when Will's driver is ready, we will test it on GH7 with the setup
> for PCIe included in firmware. Anyway I hope we can use the driver in
> 3.16 :-)

Ok, sounds good.

	Arnd
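
To make the "very trivial PCI host driver" point above concrete: with
the SBSA-mandated ECAM layout, config space access is plain arithmetic
on a flat MMIO window, so there is nothing SoC specific left to drive.
Below is a minimal sketch, not code from this thread; ecam_base is a
hypothetical pointer to an already-mapped ECAM region, but the
bus/devfn/offset encoding is the standard PCIe ECAM one.

#include <stdint.h>

/* Hypothetical pointer to the mapped ECAM window, set up elsewhere. */
static volatile uint32_t *ecam_base;

static uint32_t ecam_read32(uint8_t bus, uint8_t devfn, uint16_t offset)
{
	/*
	 * Standard ECAM encoding: each function owns a 4 KiB config
	 * region, indexed as bus[27:20], devfn[19:12], register[11:0].
	 */
	uintptr_t addr = ((uintptr_t)bus << 20) |
			 ((uintptr_t)devfn << 12) |
			 (offset & 0xffcu);

	return ecam_base[addr / sizeof(uint32_t)];
}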
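
On picking the MSI controller, one plausible DT representation is a
phandle on the host bridge node pointing at the preferred MSI block,
letting a driver prefer an external GICv2m over the built-in dw-pcie
MSI glue. The sketch below assumes an "msi-parent"-style property; no
such GICv2m binding existed at the time of this thread, so the property
name and the helper are illustrative only.

#include <linux/of.h>

static struct device_node *pick_msi_parent(struct device_node *np)
{
	struct device_node *msi_np;

	/* An explicit phandle in DT wins: use the external MSI block. */
	msi_np = of_parse_phandle(np, "msi-parent", 0);
	if (msi_np)
		return msi_np;

	/*
	 * Fall back to the MSI glue inside the dw-pcie block itself;
	 * it only funnels all MSIs into a single GIC interrupt, which
	 * loses the benefits of real MSI mentioned above.
	 */
	return of_node_get(np);
}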