I wrote:

> Even today, in 2016, it’s not possible to spawn a new thread in J, not even
> manually, without leaving the language proper. Forget about high-level and
> productive frameworks for farming out work and collecting results from a
> heterogeneous cluster of compute nodes.
Raul responded:

> But threads are not really about highly parallel computing.

Yes, my point was more-or-less that threads are “not really about highly
parallel computing”. Threads are nowhere near sufficient in the contemporary
grid-computing world, and yet we haven’t even implemented that minor piece of
parallelism in J (which has been around for 26 years at this point)!

Raul wrote:

> Meanwhile, it's relatively easy to work with parallel instances of J:
> it can make sense to use one J process per cpu, and to farm work out
> to multiple CPUs.

I guess I’m challenging the assertion that it is “relatively easy to work
with parallel instances of J”. It is true in theory, of course, but I take
the very existence of frameworks like Hadoop and its myriad competitors as
proof (or at least very compelling evidence) that practice differs from that
theory.

All applications are, in theory, capable of being run in a highly parallel
fashion (just spin up multiple instances and connect them together, through
the file system if you have to). But if it were actually that easy, you
wouldn’t need giant companies to produce sophisticated frameworks consisting
of hundreds of thousands to millions of lines of code, systems for which
large companies happily pay top dollar.

So my assertion stands: J has not really taken advantage of its native
notational advantage of explicitly avoiding describing the “how” (Guy
Steele’s principal point), and all production applications for the last
three decades have been, and still are, fundamentally serial.

-Dan

PS: The question of “who foots the bill” for the hardware is a distraction,
at best. That’s a commercial decision driven by the value to the bill-footer
of processing his data rapidly and efficiently. If the benefit outweighs the
cost, he will happily foot the bill. If it doesn’t, that’s not the
language’s problem.
The language’s problem is, currently, that there is no simple, standard, and
straightforward way to create a grid or cloud bill in the first place
(because J offers no simple, standard, and straightforward way to take
advantage of a grid or cloud to process huge quantities of data, though its
notation was explicitly designed to permit that). And so far as I know,
there are no current initiatives designed to address that problem (including
zero initiatives led by me, the complainant).

Let me wrap up with a quote from Roger Hui in comp.lang.apl:
https://groups.google.com/d/msg/comp.lang.apl/6C5N0lbHtv8/kv2dsGbaGXIJ

> A careful reading of the dictionary indicates that evaluation
> order within an operator is unspecified. Everything points to
> an unspecified order; nothing points to a particular order. …
> Regardless of the application, regardless of how nice it would be,
> regardless of how many times or how strongly you wish it, regardless
> of how many messages are sent to comp.lang.apl, regardless of
> whatever reason, ... it would be unwise to depend on a particular
> order of evaluation within an operator.

This statement was made in May of 1996, just short of 20 years ago.

Today, in practical terms, I have absolutely no qualms or fears about
relying on the order of execution of (deterministic) primitives. I’d happily
rely on it in production code (if I still wrote production code). But
throughout my J career I’ve been very careful — very careful — to explicitly
*not* rely on it, to construct my code in such a way that it could run in
parallel without my knowing. Because I was always afraid that some day, it
would.

I guess what I’m saying is: I hope, one day soon, my fears are justified.

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
