The worst part of this sort of design is the memory limitation. For instance, a modern GPU can read from GPU memory at about 80GB/sec, but only under very specific conditions. That is, if you have 1024 stream processors, they all must be reading memory in the same pattern (coalesced access). They can't all be reading from random memory locations; doing so drastically reduces memory throughput.
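To make the "drastically reduces" claim concrete, here's a rough back-of-envelope sketch. The numbers are assumptions for illustration (the 80 GB/sec figure from above, plus 128-byte DRAM transactions and 4-byte loads, which are typical for GPUs of this era), not measurements of any particular card:

```python
# Back-of-envelope: coalesced vs. random GPU memory access.
# All figures are illustrative assumptions, not vendor specs.
PEAK_BW_GB_S = 80          # quoted peak bandwidth, coalesced access
TRANSACTION_BYTES = 128    # size of one DRAM transaction
LOAD_BYTES = 4             # one 4-byte value per thread

# Coalesced: 32 adjacent threads share one transaction; all 128 bytes are used.
coalesced_efficiency = 1.0

# Fully random: each 4-byte load still pulls in a full 128-byte transaction,
# so only 4 of every 128 bytes fetched are useful.
random_efficiency = LOAD_BYTES / TRANSACTION_BYTES  # 1/32

effective_random_bw = PEAK_BW_GB_S * random_efficiency
print(f"coalesced: {PEAK_BW_GB_S * coalesced_efficiency:.1f} GB/s")
print(f"random:    {effective_random_bw:.1f} GB/s")
```

Under these assumptions, random access delivers only about 2.5 GB/s of useful data — a 32x penalty — which is the kind of cliff the access-pattern rules exist to avoid.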
The problem I see with Parallella<http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone> is that you only have 1GB of memory for 64 processors. So either each processor gets a small amount of local memory (< 16MB) and access will be fast, or the read speed from shared memory is going to be very slow.

Interesting? Yes. Fun to play with? Yes. Capable of actually performing much real work? Not likely. Let's remember, a modern i7 chip will probably beat the pants off this thing GFLOPs-wise. And if you need OpenCL and cheap massive parallelism...why aren't you buying a GeForce 620/630? Not to mention that marketing it as a "13Ghz cpu" says something about their grasp of the problem domain.

Timothy Baldridge

--
You received this message because you are subscribed to the Google Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
To unsubscribe from this group, send email to clojure+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/clojure?hl=en
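As an addendum, the arithmetic behind the 1GB-for-64-cores worry is simple enough to write down. This is just the division implied by the post, nothing more:

```python
# Arithmetic behind the memory concern: 1GB of DRAM split evenly
# across 64 cores. Figures come straight from the discussion above.
TOTAL_MEM_MB = 1024
CORES = 64

per_core_mb = TOTAL_MEM_MB / CORES
print(f"memory per core if partitioned: {per_core_mb:.0f} MB")
```

That 16MB-per-core ceiling is the best case; any core that needs more than its even share has to contend for the same shared DRAM, which is where the slow-shared-read scenario comes from.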