Re: [FRIAM] [sfx: Discuss] The Go Programming Language
Thanks, good .. er .. pointers. :)

-- Owen

On Jul 23, 2010, at 9:16 PM, Roger Critchlow wrote:

Oh, I see Pike gave two other talks at OSCON, no video but PDFs of the slides:

Go
http://www.oscon.com/oscon2010/public/schedule/detail/15464

Another Go at Language Design
http://www.oscon.com/oscon2010/public/schedule/detail/14760

On Fri, Jul 23, 2010 at 9:08 PM, Roger Critchlow r...@elf.org wrote:

Rob Pike made a presentation at the O'Reilly Open Source Conference yesterday slamming Java and C++ as part of explaining how the Go language came about.

http://infoworld.com/d/developer-world/google-executive-frustrated-java-c-complexity-375

Ah, here's a link to a PDF for the talk and a 12:30 YouTube video.

http://www.oscon.com/oscon2010/public/schedule/detail/13423

-- rec --

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Re: [FRIAM] [sfx: Discuss] The Go Programming Language
I thought they were good references. ;-] ;-]

--Doug

On Sat, Jul 24, 2010 at 10:59 AM, Owen Densmore o...@backspaces.net wrote:

Thanks, good .. er .. pointers. :) -- Owen
Re: [FRIAM] [sfx: Discuss] The Go Programming Language
It's interesting to see that Go is already in the language shootout:

http://shootout.alioth.debian.org/u32/which-programming-languages-are-fastest.php#table

That means it's pretty serious .. lots of work to get the benchmark programs written. But they have a long way to go: above them are C/C++, Java, Scala, Lua, Pascal, Ada, Haskell, Fortran, F#/C#, OCaml, Lisp/Scheme. BUT all of these (plus JavaScript V8) come in with a median of less than 10. Python = 31, Ruby = 38, PHP = 81!!

Listening to the talk, Pike clearly distinguished between dynamic/interpreted and ease of use. Sad to lose the Python dynamics and sophistication, but at its level of development to have JavaScript crush it at over 4x in speed, I gotta say V8 JS looks pretty good!

Interesting that both Go and JS use a different solution for classes/objects. Alan Kay must be happy!

So I gotta wonder: is Go going to replace C/C++ over time @ Google? And possibly more interesting, will it somehow affect the JS V8 effort?

-- Owen
Re: [FRIAM] [sfx: Discuss] The Go Programming Language
Pike actually says in a few places that there could be pointers where his examples use arrays. Declaring an array argument doesn't automatically get you a pointer to the contents of the array, but you can get one if it's needed. So pointers and references.

I found it even more apparent on this pass through that the language is very well built for the kind of parallel programming that I've become comfortable with in erlang. That is, go makes it very easy to spin off a new thread/process/goroutine and establish communications using channels. This is a matter of being able to easily instantiate the appropriate graph of communicating sequential processes for a computational task, receive the result of the computation when it finishes or fails, and know that all the cruft got cleaned up. So go is a good fit if your computation can be pipelined or fanned out onto multiple cores.

That it's all statically typed and compiled and compiles fast is an appreciable advantage. I've been beating my head against the erlang dialyzer lately, which does static type analysis as an appendix to the main language, involving type specifications with a syntax and predefined vocabulary that extends base erlang. That would be tolerable, but the dialyzer takes a coffee break to grovel through its computations, and I can't drink enough coffee to keep myself amused while it grovels.

-- rec --
Re: [FRIAM] [sfx: Discuss] The Go Programming Language
Roger Critchlow wrote:

That is, go makes it very easy to spin off a new thread/process/goroutine and establish communications using channels. This is a matter of being able to easily instantiate the appropriate graph of communicating sequential processes for a computational task, receive the result of the computation when it finishes or fails, and know that all the cruft got cleaned up.

I can see that goroutines and channels are appealing programming abstractions, but have a hard time believing they could scale. Seems like the more goroutines you have, the more CPU cycles will be absorbed in switching amongst them. I could see how distributed Erlang would scale with lots of high-latency _network_ messages in flight -- the amount of time for switching would be small compared to the latency of the message. That wouldn't seem to be the case with Google Go, which would all be in core.

Marcus
Re: [FRIAM] [sfx: Discuss] The Go Programming Language
On Sat, Jul 24, 2010 at 12:46 PM, Marcus Daniels mar...@snoutfarm.com wrote:

I can see that goroutines and channels are appealing programming abstractions, but have a hard time believing they could scale. Seems like the more goroutines you have, the more CPU cycles will be absorbed in switching amongst them.

Right, but is that a Google Go problem or is it our failure to build useful multi-core processors? All my Erlang programs are running on one machine, but that doesn't make the factoring into communicating processes any less pleasing to my sense of algorithmic correctness. If I am comfortable correctly expressing the parallel granularity of a computation, then a compiler can transform it to any equivalent sequential form, up to simply simulating the parallelism I wrote on a single core. But if I can't express the parallel granularity, then who will ever know what I was trying to do?

Erlang can scale with distribution, but it can also discover that processes which cooperated when locally hosted fail when distributed, or vice versa.
Every receive in an Erlang program has a timeout which typically reports what failed to happen in the expected time and dies. Which is why Erlang comes bundled with the uselessly misnamed OTP (Open Telecom Platform) libraries, so you can monitor process deaths, specify how much of the system needs to be torn down and restarted when part of it chokes, give up when it chokes repeatedly, and write logs of stultifying detail about what happened. At which point you open up the logs, see who repeatedly timed out, and tweak the timeout until it gets happy again.

You can, in general, tune things to work at different scales, but not all things and not at all scales. Locally hosted Erlang programs can scale linearly in performance with the number of cores, but they will probably run into the same problem that you anticipate for Google Go at some point.

-- rec --
Re: [FRIAM] [sfx: Discuss] The Go Programming Language
Roger Critchlow wrote:

Right, but is that a Google Go problem or is it our failure to build useful multi-core processors?

I don't think it's a processor design issue so much as a network and memory subsystem design issue. Given:

1) Concurrency = Bandwidth * Latency
2) Latency can only be minimized so far
3) Bandwidth can always be increased by adding wires.

By being limited to SMP-type systems, Go is assuming latency is already minimized. But the way you really get a lot of concurrency is by allowing for higher-latency communication (e.g. long wires between many processors). Go does not provide a programming model where memory can be accessed across cores. Even if the operating system did that for you, the Go scheduler would only know about spinning threads for pending channels, not for pending memory.

To my mind, what would be preferable is to have all memory be channels (i.e. as the Cray XMT implements in hardware). Alternatively, keep a small number of channels (compared to the number of memory addresses) but constrain the use of memory to named (typically local) address spaces, i.e. Sequoia or OpenCL.

Marcus