Re: [fedora-arm] ARM summit at Plumbers 2011
David:

On Wed, Aug 24, 2011 at 6:55 PM, da...@lang.hm wrote:
> ARM is currently in worse shape than the PC market ever was in this
> aspect, but in this case it's less a matter of getting the hardware
> guys to change what they do than it is to get better documentation of
> what the hardware is really doing, and not duplicating drivers for
> cases where the right answer is just replacing a constant with a
> variable (just as an example of the very common case where the same
> component is wired to a different address).

I agree. Maybe Linaro or an equivalent organization could provide an ARM kernel janitor service to the community, refactoring existing ARM platform/driver code to make more of it common. This is something that's difficult for a single person with experience in only one or two SoCs to do, but it would be pretty straightforward work for a team of three or four people with broad coverage of the SoC devices the kernel supports now.

As such refactoring consolidated larger and larger chunks of kernel code, new designs would gravitate towards those consolidated implementations because they would be the dominant references.

b.g.

--
Bill Gatliff
b...@billgatliff.com

--
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel
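[Editor's note: the "replacing a constant with a variable" point above can be sketched in plain C. This is an illustrative model, not code from any real kernel driver; the names and addresses are hypothetical. The idea is that when the same IP block appears at different addresses on different SoCs, the base address should be per-instance state supplied at init time, not a per-SoC copy of the driver with a different hard-coded constant.]

```c
/* Hypothetical sketch: one driver, many SoCs.  The MMIO base address
 * is the only thing that differs, so it becomes a variable passed in
 * by platform data instead of a compile-time constant baked into a
 * duplicated driver.  All names and addresses are illustrative. */
#include <stdint.h>

/* The fragmented style would hard-code one of these per driver copy. */
#define SOC_A_UART_BASE 0x10009000u
#define SOC_B_UART_BASE 0x44e09000u

/* The consolidated style: per-instance state. */
struct uart_port {
    uintptr_t base;   /* MMIO base, supplied at probe/init time */
};

static void uart_init(struct uart_port *p, uintptr_t base)
{
    p->base = base;   /* the "constant" is now a variable */
}
```

With this shape, adding support for a new SoC that wires the same UART to a new address is one line of platform data, not a new driver.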
Re: [fedora-arm] ARM summit at Plumbers 2011
On Fri, Aug 26, 2011 at 11:11:41AM -0500, Bill Gatliff wrote:
> As such refactoring consolidated larger and larger chunks of kernel
> code, new designs would gravitate towards those consolidated
> implementations because they would be the dominant references.

Don't bet on it. That's not how it works (unfortunately). Just look at the many serial port inventions dreamt up by SoC designers - every one is different from the others. Now consider: why didn't they use a well-established standard like the 16550A or a later design? Also consider why ARM Ltd designed the PL010 and PL011 primecells, which are different from the 16550A.

This need to be different is so heavily embedded in the mindset of the hardware people that I doubt providing consolidated implementations will make a blind bit of difference. I doubt that the hardware people coming up with these abominations even care one bit about what's in the kernel.
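[Editor's note: a concrete illustration of the 16550A-vs-PL011 divergence mentioned above. The register offsets and bits below are from the public 16550 and PL011 documentation; the helper functions are an illustrative sketch, not kernel code. Even the simple question "can I transmit a byte?" is answered through different registers with opposite sense on the two parts, which is why one driver cannot trivially cover both.]

```c
/* "TX ready" on a 16550A vs an ARM PL011: different register, different
 * width, inverted sense.  A hedged model using in-memory register maps. */
#include <stdint.h>
#include <stdbool.h>

/* 16550A: byte-wide Line Status Register at offset 5;
 * THRE (bit 5) SET means the transmit holding register is empty. */
#define NS16550_LSR      5
#define NS16550_LSR_THRE 0x20u

/* PL011: 32-bit Flag Register at byte offset 0x18;
 * TXFF (bit 5) SET means the transmit FIFO is *full*. */
#define PL011_FR         0x18u
#define PL011_FR_TXFF    (1u << 5)

static bool ns16550_tx_ready(const volatile uint8_t *base)
{
    return (base[NS16550_LSR] & NS16550_LSR_THRE) != 0;
}

static bool pl011_tx_ready(const volatile uint32_t *base)
{
    /* index by 32-bit word: byte offset 0x18 is word 6 */
    return (base[PL011_FR / 4] & PL011_FR_TXFF) == 0;
}
```

Note the inverted polarity: the 16550A bit says "ready", the PL011 bit says "full". Multiply this by every register in the block and the case for separate drivers, or a shared abstraction layer, becomes clear.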
Re: [fedora-arm] ARM summit at Plumbers 2011
Russell:

On Fri, Aug 26, 2011 at 11:35 AM, Russell King - ARM Linux li...@arm.linux.org.uk wrote:
> On Fri, Aug 26, 2011 at 11:11:41AM -0500, Bill Gatliff wrote:
>> As such refactoring consolidated larger and larger chunks of kernel
>> code, new designs would gravitate towards those consolidated
>> implementations because they would be the dominant references.
>
> Don't bet on it. That's not how it works (unfortunately.)

I wasn't being clear. The Linux community isn't large enough to dictate to ARM SoC designers how their hardware should work--- mostly because the Linux community doesn't buy chips, corporations do. And it has been my experience that the parts of corporations that negotiate deals for the hardware aren't populated with the developers of the drivers for said hardware.

What I meant was that as new hardware becomes available, if we have strong driver models then driver authors will adopt those APIs rather than inventing their own. I'm thinking about GPIO before gpiolib, for example. Or the current state of PWM.

> This need to be different is so heavily embedded in the mindset of the
> hardware people that I doubt providing consolidated implementations
> will make a blind bit of difference. I doubt that hardware people
> coming up with these abominations even care one bit about what's in
> the kernel.

I don't routinely see a need to be different existing strictly for its own sake, even among the hardware guys. Rather, I see a lot of developers (hardware and software) who are so consumed with their own requirements and deadlines that they never get the chance to step back and see the bigger picture. The resulting fragmentation is a symptom, not the disease itself.

And honestly, some of the fragmentation is a really good thing. I love how Atmel does their GPIO controllers on the SAM-series parts, for example. The SODR and CODR registers are a godsend for concurrent code. We wouldn't have such treats if everybody did things the same way.
So I'm generally ambivalent about the hardware situation. But that doesn't mean the software has to be equally fragmented. In fact, I think the hardware situation necessitates that we pay particular attention to NOT fragmenting the drivers for said hardware. Gpiolib proves that it's possible, something I didn't think I would find myself saying when David Brownell started his effort. I'm glad he proved me wrong.

b.g.

--
Bill Gatliff
b...@billgatliff.com
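[Editor's note: the SODR/CODR point above is worth making concrete. Atmel's PIO controllers expose separate write-only "set output data" (SODR) and "clear output data" (CODR) registers, so changing one pin is a single store; a design with only a combined output-data register forces a read-modify-write, which races with concurrent writers. The sketch below models that idea in plain C and is illustrative only, not real Atmel driver code.]

```c
/* Why separate set/clear registers help concurrent code.  We model the
 * pin state in a single variable; on real hardware the SODR/CODR
 * writes go to distinct register addresses and the controller applies
 * them atomically, with no read of the current state required. */
#include <stdint.h>

static volatile uint32_t odsr;  /* models the output data status register */

/* Racy pattern forced by a single data register: read-modify-write.
 * Two threads doing this to different pins can lose each other's update. */
static void gpio_set_rmw(uint32_t pin)   { odsr |=  (1u << pin); }
static void gpio_clear_rmw(uint32_t pin) { odsr &= ~(1u << pin); }

/* SODR/CODR pattern: the caller issues one store of a mask; the
 * hardware merges it, so there is no read and nothing to race with.
 * (The |=/&= here only model the hardware's effect on pin state.) */
static void gpio_sodr_write(uint32_t mask) { odsr |=  mask; }  /* set pins   */
static void gpio_codr_write(uint32_t mask) { odsr &= ~mask; }  /* clear pins */
```

The design win is that the atomicity lives in the hardware: drivers need no spinlock around a plain pin toggle, which is exactly the property praised above.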
Re: [fedora-arm] ARM summit at Plumbers 2011
russell, good to hear from you.

can i recommend that, although this is a really wide set of cross-posting on a discussion that underpins pretty much everything (except gnu/hurd and minix) because it's the linux kernel, we keep this, just as steve kindly advised, to e.g. cross-dis...@lists.linaro.org? i'll be doing that from now on [after this], perhaps including arm-netbooks as well, but will be taking off all the distros.

so - folks, let's be clear: please move this discussion to cross-dis...@lists.linaro.org, and, if it's worthwhile discussing in person, please do contact steve, so he can keep the slot open at the Plumbers 2011 summit.

On Fri, Aug 26, 2011 at 5:35 PM, Russell King - ARM Linux li...@arm.linux.org.uk wrote:
> On Fri, Aug 26, 2011 at 11:11:41AM -0500, Bill Gatliff wrote:
>> As such refactoring consolidated larger and larger chunks of kernel
>> code, new designs would gravitate towards those consolidated
>> implementations because they would be the dominant references.
>
> Don't bet on it. That's not how it works (unfortunately.) Just look
> at the many serial port inventions dreamt up by SoC designers - every
> one is different from the others. Now consider: why didn't they use a
> well-established standard 16550A or later design?

*sigh* because they wanted to save power. or pins. or... just be bloody-minded.

> This need to be different is so heavily embedded in the mindset of the
> hardware people that I doubt providing consolidated implementations
> will make a blind bit of difference.

i think... russell... after they are told, repeatedly, no, you can't have that pile of junk in the mainline linux kernel, Get With The Programme, you'd think that, cumulatively, if they end up having to maintain a 6mb patch full of such shit, they _might_ get with the programme? and if they don't, well, who honestly cares? if they don't, it's not *your* problem, is it?
_they_ pay their employees to continue to maintain a pile of junk, instead of sponging off of _your_ time (and linus's, and everyone else's in the Free Software Community).

> I doubt that hardware people coming up with these abominations even
> care one bit about what's in the kernel.

then don't f**g make it _your_ problem, or anyone else's, upstream!! :)

this is the core of the proposal that i have been advocating: if it's selfish - i.e., as bill and many many others clearly agree, if the bang-per-buck ratio is on the low side - then keep it *out* of the mainline linux kernel... and that really is the end of the matter.

the sensible people that i've been talking to about this are truly puzzled as to why the principles of cooperation and collaboration behind free software are just being... completely ignored, in something as vital as The Linux Kernel, and they feel that it's really blindingly obvious that the bang-per-buck ratio of patches to the mainline linux kernel needs to go up.

so the core of the proposed selfish-vs-cooperation patch policy is quite simple: if the patch has _some_ evidence of collaboration, cooperation, refactoring, sharing - *anything* that increases the bang-per-buck ratio with respect to the core fundamental principles of Free Software - it goes to the next phase [which is technical evaluation etc. etc.]. otherwise, it's absolutely out, regardless of its technical correctness, and that's the end of it.

the linux kernel mainline source tree should *not* be a dumping-ground for a bunch of selfish, self-centred, pathological, profit-mongering corporations whose employees end up apologising in sheer embarrassment as they submit time-pressured, absolutely shit, non-cooperative and impossible-to-maintain code. you're not the only one, russell, who is pissed off at having to tidy up SoC vendors' patches.
there's another ARM-Linux guy - i forget his name - who specialises in samsung: two years ago he said that he was getting fed up with receiving yet another pile of rushed junk... and that's *just* him, specialising in samsung ARM SoCs! we're just stunned that you, the recipient of _multiple_ SoC vendors' piles of shite, have tolerated this for so long!

anyway - i've endeavoured to put together some examples, in case that's not clear. i admit it's quite hard to create clear examples, and would greatly appreciate help doing so. i've had some very much appreciated help from one of the openwrt developers (thanks!), clarifying by creating another example similar to one which wasn't clear. http://lkcl.net/linux/linux-selfish.vs.cooperation.html

this should be _fun_, guys. it shouldn't be a chore. if you're not enjoying it, and not being paid, tell the people who are clearly taking the piss to f*** off!

but - i also would like to underscore this with another idea: lead by example (which is why i've kept the large cross-distro list). we - the free software community - are seeing tons of nice lovely android tablets, tons of nice lovely expensive bits of big iron and/or x86 laptops, and
Re: [fedora-arm] ARM summit at Plumbers 2011
On Wed, 24 Aug 2011, Bill Gatliff wrote:
> I have observed all the hand-wringing regarding the state of ARM
> Linux, and it's obvious to everyone that there is still work to be
> done. ARM isn't like PCs, and that's obviously inconvenient for Linus
> but it's an essential part of ARM's success.

I think that the thing being disputed isn't that ARM is different from PCs, but rather the issue that different ARM SoCs do the same thing in different ways.

in the early days of the PC we had the same issues - those who were around will remember the 'almost' PC-compatible machines, from some of the biggest names in the business. they all thought that they had good reasons to do things differently, but over time they all changed to hide the differences from the system.

ARM is currently in worse shape than the PC market ever was in this aspect, but in this case it's less a matter of getting the hardware guys to change what they do than it is to get better documentation of what the hardware is really doing, and not duplicating drivers for cases where the right answer is just replacing a constant with a variable (just as an example of the very common case where the same component is wired to a different address).

David Lang
Re: [fedora-arm] ARM summit at Plumbers 2011
On Tue, Aug 09, 2011 at 07:15:34PM +0100, Steve McIntyre wrote:
> Hi folks,
>
> Following on from the founding of the cross-distro ARM mailing list,
> I'd like to propose an ARM summit at this year's Linux Plumbers
> conference [1]. I'm hoping for a slot on Thursday evening, but this
> remains to be confirmed at this point.
>
> We had some lively discussion about the state of ARM Linux distros at
> the Linaro Connect [2] event in Cambridge last week. It rapidly became
> clear that some of the topics we discussed deserve a wider audience,
> so we're suggesting a meetup at Plumbers for that bigger discussion.

ok. allow me to give some perspective and background as to why i believe that a bigger discussion is important, and to whom that discussion is important.

a few years ago i read what seems like a silly book, called The Strategy-Focussed Organisation. sounds trite, but i was advised to read it when i proposed some ideas and was confronted with the very valid question: why should i [a lowly developer] _care_ about this 'strategy' that you are proposing? (fortunately the person who asked the question was the same one who advised me to read this silly book.)

it's a tough one, isn't it? why should any of us - as free software developers - _care_ about the state of ARM Linux? you're getting on with the truly crucial task of managing the distro that you're committed to. it's a focussed job: it's a vital role, and you should not let anyone tell you otherwise.

yet... and this is the bit that this silly book explained: it's just as important to know where *your* role fits in with what else is going on. linaro, for example, as you no doubt well know, is tasked (by its subscribers, who pay $1m / year) with sorting out vital underlying infrastructure that ties what *you* are doing in with the subscribers' ARM CPUs. you're doing the user-facing stuff; they're doing the CPU-facing stuff.
that's *their* strategic role: in concrete terms it means sorting out gcc with ARM optimisations, and it means seeking out and/or increasing the number of areas of shared and refactored code across as many places as possible, in order to reduce the software development effort required of their subscribers. linux kernel. device tree. LSB. (and, it has to be said, _if_ the stupid, stupid 3D GPU companies got the picture, linaro could well take gallium3d, for example, under its wing too.)

so the key question is: if linaro is taking care of this aspect, because that's linaro's role, then why _should_ any distro maintainer care? yes, they should be aware of what's happening, but there's no real incentive to get pro-actively involved, is there? all that's required is passive acceptance of the work filtering down from linaro... and this perhaps explains the lack of response to the proposed meetup, steve.

[the other reason is that yes, although _discussion_ can take place about 3D GPUs, we as free software developers feel powerless to act in the face of so much money. despite the fact (which personally makes me extremely angry) that without our overall contribution these companies simply would not have a gnu/linux distro or a linux kernel on which to make that money.]

so, the important question to ask, then, is: what *is* good motivation to take action? if, indeed, any action need be taken at all, which is a perfectly reasonable conclusion to reach. not that i personally agree with that, but i can live with it :)

and, to answer that question, i feel it's important to take into account some context and background. many of these things you will already be aware of, but let me put them all together, here. take a deep breath...
* with the rise of android, Matthew Garrett (mjg59) shows us an empirical glimpse into the blatant state of GPL violations by OEMs taking place on the Linux Kernel and more: http://www.codon.org.uk/~mjg59/android_tablets/

* many android vendors have lost the right to use linux kernel source code. this article is the most insightful and non-aggrandising one i've yet found on the GPL violations situation and its consequences: http://fosspatents.blogspot.com/2011/08/most-android-vendors-lost-their-linux.html

* Our Linus declared in april that he was getting fed up with the state of the ARM Linux Kernel. my take on this is that there is an overwhelming amount of selfishness creeping into Linux Kernel development. Our Linus has also recently stated that his passion is actually low-level device driver development. http://thread.gmane.org/gmane.linux.kernel/1114495/focus=112007

* Russell King, the ARM maintainer, has completely lost all motivation to work on the task of merging ARM Linux patches. with the amount of selfishness that has been going on for so many years, i am surprised he's tolerated it this long. http://article.gmane.org/gmane.linux.kernel/1121096

* I've seen proposed solutions and many many descriptions of the problems caused by the rise of ARM Linux, but none of them look at this from an overview
Re: ARM 3D support was Re: [fedora-arm] ARM summit at Plumbers 2011
On Wed, 24 Aug 2011 12:00:43 +0100, Luke Kenneth Casson Leighton l...@lkcl.net wrote:
> [ok i'm going to do another cross-post in a bit which will give some
> background and also perhaps some other topics for discussion, but i
> wanted to cover this first. apologies to people for whom this is just
> noise]
>
> On Tue, Aug 23, 2011 at 7:01 PM, omall...@msu.edu wrote:
>>> the xilinx zynq-7000 or similar (dual core Cortex A9 + FPGA). The
>>> idea is to have an OGP GPU in firmware in FPGA. In terms of the
>>> power budget, it seems to work relatively sanely considering what it
>>> is, and it is as ideal as it gets as far as openness and flexibility
>>> goes. I just thought it's worthy of a mention.
>>
>> It does seem outlandish, but it is kind of cool. Is it going to give
>> enough 3d speed? The next gen tegra is supposed to have a 24 core
>> GPU.
>
> if nvidia have a published announcement of their plans to release a
> fully free-software-compliant 3D driver to match the proprietary
> hardware, then that would be brilliant news [about their next gen
> GPU].
>
> about the zynq idea: it actually doesn't matter if it's enough. the
> very fact that free software developers - and people who want to be
> free software developers - around the world could even _remotely_
> consider buying one of these for an affordable price, instead of $750
> for the present OGP card, means that more people can at least begin
> to try to address the unbelievably wide and very discouraging gap
> between us and proprietary 3D hardware.
>
> the NREs on producing a set of masks are _only_ $250,000 if you are a
> taiwanese company asking TSMC, but for everyone else they're at least
> $2 million. the development costs, if you use off-the-shelf tools,
> before you even _get_ to the point where you can ask a fab to produce
> those masks, spiral out of control (Mentor Graphics charges something
> like $250,000 per month - or maybe per week - per user; NREs for
> peripheral hard macros can be $50k to $100k each, etc.
> etc.), taking the total development costs in many cases to well above
> $USD 30 million. and that's excluding all that proprietary software,
> which of course is utterly useless without the corresponding hardware
> but, because of USA Accountancy Rules, where IP can be added to the
> books to increase the value of a company, there's a strong financial
> disincentive to consider just givvin it aww away 4 fwee.
>
> and here we are with a CPU which could well be around the $25 - $30
> mark in large enough volumes, presented with the possibility to say
> u all, you proprietary GPU companies and your greed, fear, patent
> warfare and lack of willingness to collaborate and cooperate. ok
> maybe not those exact words but you know what i mean :)

I quite like the wording, actually. :)

Gordan
Re: [fedora-arm] ARM summit at Plumbers 2011
Luke:

Step back from the keyboard just a bit. :)

It's true that the glass isn't completely full--- but it's pretty darned full! And we wouldn't be discussing the various GPL and other violations that you cite were it not for the overwhelming successes of Free Software, ARM, Linux, and Android. We are well past debating the merits of Free Software et al., which itself is a huge milestone that we need to recognize.

Now it's time to let the lawyers do their jobs. And they will, because there are tremendous sums of money at play. Money that wouldn't be there if it weren't for us developers. But we need to stay out of their way, while at the same time taking care to continue producing tangible things that are worth fighting over. As developers, we've won. Deal with it. Revel in it. And then get over it.

I have observed all the hand-wringing regarding the state of ARM Linux, and it's obvious to everyone that there is still work to be done. ARM isn't like PCs, and that's obviously inconvenient for Linus, but it's an essential part of ARM's success. Russell King has been overworked for a decade or more, attempting through sheer force of human/developer will to keep ARM Linux from running off the rails.

As far as ARM Linux is concerned, I think we're dangerously close to being smothered by our own success. We have to learn to work smarter, because we can't work any harder. And I applaud Linaro and the countless others for recognizing this problem and looking for ways to resolve it.

I for one would love to participate in the ARM Summit, but I'm a sole proprietor without an expense account to charge the travel costs to, and they are too large for me to carry personally. I suspect I'm not the only one in that situation. The fact that there has been little response to the ARM Summit doesn't mean that nobody cares or that the problems seem too large to solve. It just means that we're going to have to find a different way to get this work done.

b.g.
--
Bill Gatliff
b...@billgatliff.com
Re: [fedora-arm] ARM summit at Plumbers 2011
On Tue, 23 Aug 2011 17:11:34 +0100, Steve McIntyre steve.mcint...@linaro.org wrote:
> On Tue, Aug 09, 2011 at 07:15:34PM +0100, Steve McIntyre wrote:
>> Hi folks,
>>
>> Following on from the founding of the cross-distro ARM mailing list,
>> I'd like to propose an ARM summit at this year's Linux Plumbers
>> conference [1]. I'm hoping for a slot on Thursday evening, but this
>> remains to be confirmed at this point.
>>
>> We had some lively discussion about the state of ARM Linux distros
>> at the Linaro Connect [2] event in Cambridge last week. It rapidly
>> became clear that some of the topics we discussed deserve a wider
>> audience, so we're suggesting a meetup at Plumbers for that bigger
>> discussion. The initial proposed agenda is:
>>
>> * ARM hard-float
>>   + What is it and why does it matter?
>>   + How can distributions keep compatible (i.e. gcc triplet to
>>     describe the port)?
>> * Adding support for ARM as an architecture to the Linux Standard
>>   Base (LSB)
>>   + Does it matter?
>>   + What's needed?
>> * FHS - multi-arch coming soon, how do we proceed?
>> * 3D support on ARM platforms
>>   + Open GL vs. GLES - which is appropriate?
>>
>> but I'm sure that other people will think of more issues they'd like
>> to discuss. :-) If you wish to attend, please reply to the
>> cross-distro list and let us know to expect you. Make sure you're
>> registered to attend Plumbers Conf, and get your travel and
>> accommodation organised ASAP.
>>
>> [1] http://www.linuxplumbersconf.org/2011/
>> [2] http://connect.linaro.org/
>
> UPDATE: we've not had many people confirm interest in this event yet,
> which is a shame. If you would like to join us for this session,
> please reply and let me know. If we don't get enough interest by the
> end of Sunday (28th August), then we'll have to cancel the meeting.

Unfortunately there is no way I could make it, but on the subject of 3D support on ARM, Luke recently mentioned something that initially seemed outlandish but upon closer examination doesn't seem like a bad idea.
As we all know, the state of openness of specifications of commonly used ARM 3D GPUs is at best dire. What has been proposed is a bit radical, but it doesn't actually seem that implausible. Specifically: combining the Open Graphics Project (http://wiki.opengraphics.org/tiki-index.php) and the xilinx zynq-7000 or similar (dual core Cortex A9 + FPGA). The idea is to have an OGP GPU in firmware in FPGA. In terms of the power budget, it seems to work relatively sanely considering what it is, and it is as ideal as it gets as far as openness and flexibility goes. I just thought it's worthy of a mention.

Gordan
Re: ARM 3D support was Re: [fedora-arm] ARM summit at Plumbers 2011
On 08/23/2011 07:01 PM, omall...@msu.edu wrote:
> Quoting Gordan Bobic gor...@bobich.net:
>> Unfortunately there is no way I could make it, but on the subject of
>> 3D support on ARM, Luke recently mentioned something that initially
>> seemed outlandish but upon closer examination doesn't seem like a
>> bad idea. As we all know, the state of openness of specifications of
>> commonly used ARM 3D GPUs is at best dire. What has been proposed is
>> a bit radical, but it doesn't actually seem that implausible.
>> Specifically, combining Open Graphics Project
>> (http://wiki.opengraphics.org/tiki-index.php) and the xilinx
>> zynq-7000 or similar (dual core Cortex A9 + FPGA). The idea is to
>> have an OGP GPU in firmware in FPGA. In terms of the power budget,
>> it seems to work relatively sanely considering what it is, and it is
>> as ideal as it gets as far as openness and flexibility goes. I just
>> thought it's worthy of a mention.
>
> It does seem outlandish, but it is kind of cool. Is it going to give
> enough 3d speed? The next gen tegra is supposed to have a 24 core GPU.

If you can quantify what enough 3D speed means, then perhaps that can be assessed. There really aren't many applications around at the moment to make this an issue. I'd be more interested in its ability to decode 1080p. Then again - it's an FPGA! You can load a different firmware depending on whether you need 1080p decoding or 3D rendering, or some other kind of specialized DSP offload with only bare minimal VGA. :)

Personally, I think OGP would be worth it even if just for the fact that we would no longer have to beg (in vain) the vendors for decent drivers or published specs. The added flexibility on top is just a free extra. :)

Gordan