I think you may find that a less stringent goal - doing "outsourcing" - may result in an intermediate, usable compromise that would keep most people happy, or at least a whole damn lot happier than they are at the moment.
My take: the number of people who face 50+-wide things is very small. Doing compiler work is expensive (more expensive than just learning and coding assembler). An architecture is effectively uncompilable if so few people are using it that no compiler supports it. When there are enough people using it, then it becomes worth spending the energy (bucks) to make the compiler support it.
Look at ia64, for example: it seemed like a good idea to someone, and yet, even with large amounts of $ thrown at it, it isn't capturing hearts and minds, in part seemingly due to inadequate tools.
The compiler is _part_ of an architecture's costs, and if the people behind it won't pay that cost, they will wind up with something they force users to code in assembly for - and, as you say, with training that can cost six months of time. In the end, the companies promoting the architecture will have to make those valuations and fund the work, or not, and live and die by those decisions.
And the key reason why all these things are a pain in the arse is that you cannot "abstract" them out to C++
I'm not familiar with that proof. I've seen a great many odd things done with C++, and it seems to me that it would be possible to abstract it out.
- you _have_ to go to assembler, you _have_ to make use of macros (which don't mix with C++ templates).
? I don't know that. I'd prefer you went to comp.lang.c++ and challenged them with "I bet this can't be done," and had someone sketch out how to do it, or do it. Certainly we designed C++ in part with the idea that one could have exotic hardware behind some of the classes (valarray) and get the speed.
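To make that concrete, here's a minimal valarray sketch (mine, not from any particular vendor's implementation); the point is just that the element-wise operations are the natural hook for a vector unit:

  #include <valarray>

  // A dot product written against std::valarray. The abstraction point:
  // an implementation is free to back these element-wise operations with
  // whatever vector hardware it has; the user code never mentions the
  // machine.
  double dot(const std::valarray<double>& a, const std::valarray<double>& b)
  {
      return (a * b).sum();  // element-wise multiply, then reduce
  }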
See the altivec intrinsics or the mmx/sse/sse2/sse3 intrinsics for a sketch of an existence proof that one never needs assembler to do anything.
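To sketch what that existence proof looks like in practice (my example; `vec4f` is a made-up name, the intrinsics are the real sse ones from <xmmintrin.h>):

  #include <xmmintrin.h>   // sse intrinsics (gcc, icc, and msvc all ship this)

  // vec4f: four packed floats behind a class. Every operation lowers to
  // a single sse instruction via an intrinsic, so there is no assembler
  // and no preprocessor macro anywhere.
  struct vec4f {
      __m128 v;
      explicit vec4f(__m128 x) : v(x) {}
      static vec4f load(const float* p) { return vec4f(_mm_loadu_ps(p)); }
      void store(float* p) const { _mm_storeu_ps(p, v); }
  };

  inline vec4f operator+(vec4f a, vec4f b) { return vec4f(_mm_add_ps(a.v, b.v)); }
  inline vec4f operator*(vec4f a, vec4f b) { return vec4f(_mm_mul_ps(a.v, b.v)); }

  // a*x + y, four floats at a time, in plain C++:
  void axpy4(const float* x, const float* y, float* out, vec4f a)
  {
      (a * vec4f::load(x) + vec4f::load(y)).store(out);
  }

No assembler, no macros, and it templatizes cleanly if you want other element types or widths; altivec.h gives you the same shape on PowerPC.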
Well, I do, but it's many, _many_ steps removed from becoming a reality - funding, ...
No, funding is a _primary_ concern. Either the people who want the architecture to succeed pay for it, or they convince grad students that it would be a good research area, or, well, you write in assembler. This list is a poor place when it comes to funding issues.
... You can tell that I really loved the processor design and the opportunity to work with something that radical, though.
There have been neat architectures that I liked but that failed the market-reality test case. In the end, it isn't about being neat; it is about price/performance and giving customers what they want.
It's because I am looking to recommend to a company that is doing a parallel processor design that they also provide a vector processing unit.
:-) If they only support constructs that can easily be used by gcc, and if their customers need it, and if there is a large market for the architecture in which they can beat ia32 and AMD-64 now and over the next 3 years, then they might have a chance of not failing. Otherwise, well, you can do anything you want, because they will fail; just be sure to get your money up front - don't work for free or for stock... :-)
If gcc doesn't make the grade as a viable vector-processing-aware compiler, or as part of a vector-processing-aware toolchain, the recommendation ain't gonna happen - I've seen what happens when you don't have a good enough development toolchain.
Companies fail.
Yup.
So, while it may seem like you can escape out to Perl and code up a compiler assist in Perl, our best sense tells us to dissuade you from that idea. Better to recognize the code (autovec) or use tagging techniques (OpenMP/altivec/sse) and then do up that support directly in the compiler. Plus, nowadays, I would recommend doing up a template library that makes the architecture rock; to make that work, it also needs to be ported to all the market-relevant architectures and have a massive user following. That part is beyond the scope of this list.
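For the autovec side, the kind of code we want the compiler to recognize looks like this (my sketch; whether a given gcc picks it up depends on the version and on flags like -ftree-vectorize):

  // The recognizable shape: a counted loop over contiguous arrays, with
  // no aliasing surprises (__restrict__ is the gcc spelling) and no
  // cross-iteration dependences.
  void scale_add(float* __restrict__ out,
                 const float* __restrict__ x,
                 const float* __restrict__ y,
                 float a, int n)
  {
      for (int i = 0; i < n; ++i)
          out[i] = a * x[i] + y[i];   // autovec can turn this into vector ops
  }

The tagging approaches (OpenMP pragmas, altivec/sse intrinsics) exist for the cases where recognition fails; the template-library route is essentially the vec4f sketch above, grown up and ported to every architecture that matters.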