Re: [fonc] CodeSpells. Learn how to program Java by writing spells for a 3D environment.
One of the fundamentals we are all still grasping at is how to teach programming. These are links to people attempting to contribute something meaningful in that direction rather than posting derisive comments and blatant cult-related wing-nuttery which, in fact, have nothing to do with computing. Good day sir!

On Fri, Apr 12, 2013 at 2:12 PM, John Pratt jpra...@gmail.com wrote: Fine, but what does that have to do with setting the fundamentals of new computing? Is this just a mailing list for computer scientists to jerk off?

On Apr 12, 2013, at 1:00 PM, Josh Grams wrote: On 2013-04-12 11:11AM, David Barbour wrote: I've occasionally contemplated developing such a game: program the behavior of your team of goblins (who may have different strengths, capabilities, and some behavioral habits/quirks) to get through a series of puzzles, with players building/managing a library as they go.

Forth Warrior? :) https://github.com/JohnEarnest/Mako/tree/master/games/Warrior2 --Josh

___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Natural Language Wins
I am now convinced we are on some sort of mailing-list version of Candid Camera. Or maybe these messages are the product of some strange Markovian email generator programmed to create dissonance by combining fringe comp-sci theories with offensive social commentary normally reserved for talk radio.

On Fri, Apr 5, 2013 at 2:19 PM, Kirk Fraser overcomer@gmail.com wrote: Gath, So what language do you normally think in? You have stated you don't live in America. Obviously you haven't listened to Rush Limbaugh long enough to know what liberal is. Why comment on things you know so little about? Tune in to Rush via iHeart radio and listen for about 6 weeks and you'll have more clarity on what liberal actually means in America. I don't know many details about Israel but I suspect they don't misuse their words as often as liberals do here. That in itself makes Hebrew more efficient. Of course, as you wrote, certain environmental factors may contribute to overachieving, but I would argue that is impossible. It is impossible to overachieve. But that would be me exercising liberalism by going off topic - inefficient.

On Fri, Apr 5, 2013 at 12:44 PM, Gath-Gealaich gath.na.geala...@gmail.com wrote: On Fri, Apr 5, 2013 at 7:59 PM, Kirk Fraser overcomer@gmail.com wrote: I was pointing out that innovation for its own sake is worthless, then was agreeing with the view that not all the world's inventions come from people who think in English, yet pointing out that communicating in English is best for worldwide distribution. I don't really know how many Jews who won Nobel Prizes thought in Hebrew, English, or even Russian. But it is, as you wrote, possible that Hebrew is more efficient.

No, it's not. Whorfianism has been all but refuted. The only area in which the idea holds water, quite ironically, is formal/computer/programming languages (or so Paul Graham says, but he's right, as far as I can tell).

Something about their culture tends to be productive compared to others.
Perhaps it's their orientation toward God, which is defined as absolute spiritual perfection. That in itself would tend to produce more efficient thought.

They've been oppressed by intellectually impoverished Christians for two millennia, denied the right to work in the fields of agriculture and crafts, and were forced to work in knowledge-oriented professions such as medicine or finance. Of course this nurtures a specific culture, and with the (most likely involuntary) need to become as indispensable to others as possible in order to avoid getting killed by hilt-happy Easter celebrators, they were virtually forced into what is usually referred to as overachievement (although here I have to admit, despite my former point, that you English people have the weirdest notions in your language).

English has a property that unfortunately allows it to be redefined with liberal definitions which are inefficient. ^^^ This is a thoroughly nonsensical and meaningless statement.

Computers need smarter software to exceed the performance of Watson and OpenCyc to create worthwhile innovations automatically. I think working to automate Bible analysis is an efficient way to produce smarter software. But based on the failures of automatic translators, computers may be slow to think flawlessly.

Again, you're completely ignoring the actual nature of speech, demonstrated in such phenomena as the existence of idiolects, referential indeterminacy, diachronic shifts, etc. Language is what it is because there's a common-sense component to its processing in our brains, and once you have that, you've successfully replicated a human being in silicon. Until that happens, all bets are off. (I'm tempted to wager that the inverse also holds: has_human_intelligence(X) :- understands_language(X).
Although the fact that an average human being picked from your general population often fails at simple logical reasoning sort of suggests that the intelligence is of a slightly different kind than what we usually mean by saying he's intelligent/he's a genius.) - Gath

___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc

-- Kirk W. Fraser http://freetom.info/TrueChurch - Replace the fraud churches with the true church. http://congressionalbiblestudy.org - Fix America by first fixing its Christian foundation. http://freetom.info - Example of False Justice common in America
Re: [fonc] POLs for rules. Fuzz testing nile
It seems this is not possible without a fitness function. A declarative approach to defining the what of your program would probably go some way towards making the validation of the how possible. BDD (behavior-driven development) is the closest approach I have found to this - however, the silver bullet of automatic validation from your declared behaviors is far from having been found.

On Fri, Feb 15, 2013 at 12:33 PM, John Carlson yottz...@gmail.com wrote: I guess what I am asking for is a critic service. For both POLs and uses of POLs. Can POLs be designed such that uses of POLs ensure good design? Good architecture? I am way beyond my technical knowledge here.

On Feb 15, 2013 1:19 PM, John Carlson yottz...@gmail.com wrote: I know of a few sites/tools which criticise your website... is there one for CSS?

On Feb 15, 2013 1:02 PM, John Carlson yottz...@gmail.com wrote: Sorry we got into a big discussion about the web. I really want to discuss POLs for rules, CSS being one of them. And in particular, once we have a good POL, how to test it, and author with it -- how to create a great POL program? But what about probabilistic rules? Can we design an ultimate website w/o a designer? Can we use statistics to create a great solitaire player -- I have a pretty good stochastic solitaire player for one version of solitaire... how about others? How does one create a great set of rules? One can create great rule POLs, but where are the authors? Something like Cameron Browne's thesis seems great for grid games. He is quite prolific. Can we apply the same logic to card games? Web sites? We have The Nature of Order by C. Alexander. Are there Nile designers or fuzz testers/genetic algorithms for Nile? Is fuzz testing a by-product of Nile design... should it be? If you want to check out the state of the art for Dungeons & Dragons POLs check out Fantasy Grounds... XML hell. We can do better.
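The "fitness function" point can be made concrete with a toy generate-and-test loop: mutate a candidate rule set, score it by playing, and keep the winner. Everything below is an illustrative stand-in — the weight-vector "ruleset" and the coin-flip "game" are invented for the sketch, not an actual solitaire or Nile POL:

```python
# Hill-climbing sketch of rule search driven by a fitness function.
# A "ruleset" here is just a list of weights; "playing" is 10 weighted
# coin flips per weight. Both are placeholder assumptions.
import random

def play(ruleset, rng):
    """Fitness: count of successful flips over all weights (noisy)."""
    return sum(rng.random() < w for w in ruleset for _ in range(10))

def search(generations=50, size=8, seed=1):
    rng = random.Random(seed)
    best = [rng.random() for _ in range(size)]
    best_score = play(best, rng)
    for _ in range(generations):
        # Mutate each weight slightly, clipped to [0, 1].
        cand = [min(1.0, max(0.0, w + rng.gauss(0, 0.1))) for w in best]
        score = play(cand, rng)
        if score > best_score:
            best, best_score = cand, score
    return best_score

print(search())
```

The critic service John asks for is exactly the `play` function here: without some scorer of "good design," the loop has nothing to climb toward.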
___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
Here is my current attempt at replicating the diagram alongside the code laid out as in appendix III: http://order-of-no.posterous.com/st71-one-pager. This was created in Inkscape and uses Andale Mono for the font. There are definitely some parts of the diagram I had to guess at, particularly in the during frames. After further consideration I am thinking that in the original the character between e.SENDER etc. is an interpunct - would that be correct? -Shaun

On Thu, Mar 15, 2012 at 10:28 PM, shaun gilchrist shaunxc...@gmail.com wrote: Alan, Regarding these quotes from the early history of Smalltalk re ST 71: I had originally made the boast because McCarthy’s self-describing LISP interpreter was written in itself. It was about “a page”, and as far as power goes, LISP was the whole nine yards for functional languages. I was quite sure I could do the same for object-oriented languages *plus* be able to do a reasonable syntax for the code *à la* some of the FLEX machine techniques.

Once I read that I HAD to see it. I finally tracked down a PDF version of the history of Smalltalk which contained the mythical appendix III and thus the one-pager. The scan is pretty bad so I have attempted to transcribe it into a plain text file, which I am attaching here. I was wondering if you could take a look and see if it is roughly correctly transcribed? For the sake of completeness I am working on SVG versions of the diagrams. My main questions specifically relating to the one-pager are: 1. Does my use of . instead of the other symbol (e.g. e.MSG vs. e{other-char}MSG) present an issue? 2. I had a hard time making out the single-quoted characters on lines 23 and 24 following the e.MSG.PC; my best guess was a comma and a period (which relates to my first question: is the character actually the same as {other-char})? 3. Regarding line 54 and the etc... any idea what that ellipsis would be once expanded, haha?
Would it just be a full definition of escapes or would it be further definitions relating to the interpreter? 4. Line 62: where I put {?}, what is that character meant to be? I believe it is the same as what is on line 70 and also marked as {?}. 5. Is it implied that things like quote, set (-), Table, null, atom, notlist, escape, goto, if-then-else, select-case, +, etc. would exist as primitives? Any other insight you could provide would be much appreciated. Thanks -Shaun

On Thu, Mar 15, 2012 at 6:40 PM, Andre van Delft andre.vande...@gmail.com wrote: The theory Algebra of Communicating Processes (ACP) offers non-determinism (as in Meta II) plus concurrency. I will present a paper on extending Scala with ACP next month at Scala Days 2012. For an abstract, see http://days2012.scala-lang.org/node/92 A non-final version of the paper is at http://code.google.com/p/subscript/downloads/detail?name=SubScript-TR2012.pdf André

On 15 Mar 2012, at 03:03, Alan Kay wrote: Well, it was very much a mythical beast even on paper -- and you really have to implement programming languages and make a lot of things with them to be able to assess them. But -- basically -- since meeting Seymour and starting to think about children and programming, there were eight systems that I thought were really nifty and cried out to be unified somehow: 1. Joss 2. Lisp 3. Logo -- which was originally a unification of Joss and Lisp, but I thought more could be done in this direction. 4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for logic programming and pattern-directed inference, both forward and backwards, with backtracking. 5. Meta II -- a super simple meta parser and compiler done by Val Schorre at UCLA ca. 1963 6. IMP -- perhaps the first real extensible language that worked well -- by Ned Irons (CACM, Jan 1970) 7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al., with some design ideas by me 8.
The object and pattern-directed extension stuff I'd been doing previously with the Flex Machine and afterwards at SAIL (that also was influenced by Meta II). One of the schemes was to really make the pattern matching parts of this work for everything that eventually required invocations and binding. This was doable semantically but was a bear syntactically, because of the different senses of what kinds of matching and binding were intended for different problems. This messed up the readability and the desire that simple things should be simple. Examples I wanted to cover included simple translations of languages (English to Pig Latin, English to French, etc.; some of these had been done in Logo), the Winograd robot block stacking and other examples done with Planner, the making of the language the child was using, message sending and receiving, extensions to Smalltalk-71, and so forth. I think today the way to try to do this would be with a much more graphical UI than with text -- one could
Re: [fonc] OT? Polish syntax
This looks interesting: https://code.google.com/p/ambi/ - instead of supporting infix it supports both Polish and reverse Polish. Can you give some examples of what your ideal syntax would look like which illustrate the spoken-language aspect you touched on? -Shaun

On Thu, Mar 15, 2012 at 10:21 AM, Martin Baldan martino...@gmail.com wrote: I have a little off-topic question. Why are there so few programming languages with true Polish syntax? I mean, prefix notation, fixed arity, no parens (except, maybe, for lists, sequences or similar). And of course, higher-order functions. The only example I can think of is REBOL, but it has other features I don't like so much, or at least are not essential to the idea. Now there are some open-source clones, such as Boron, and now Red, but what about very different languages with the same concept? I like pure Polish notation because it seems as conceptually elegant as Lisp notation, but much closer to the way spoken language works. Why is it that this simple idea is so often conflated with ugly or superfluous features such as native support for infix notation, or a complex type system?

___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
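Martin's three properties — prefix notation, fixed arity, no parens — are enough to drive a complete evaluator, since known arities make parentheses unnecessary. A minimal sketch (the operator names and arity table are illustrative assumptions, not taken from REBOL, Boron, or Red):

```python
# Minimal Polish-notation (prefix) evaluator with fixed arity and no
# parentheses. Each operator's arity is looked up, so the expression
# boundary is recovered without any grouping syntax.
OPS = {
    "add": (2, lambda a, b: a + b),
    "mul": (2, lambda a, b: a * b),
    "neg": (1, lambda a: -a),
}

def eval_prefix(tokens):
    """Consume exactly one expression from the front of the token list."""
    tok = tokens.pop(0)
    if tok in OPS:
        arity, fn = OPS[tok]
        args = [eval_prefix(tokens) for _ in range(arity)]
        return fn(*args)
    return float(tok)

# "add mul 2 3 neg 4" reads as add(mul(2, 3), neg(4)) = (2 * 3) + (-4)
print(eval_prefix("add mul 2 3 neg 4".split()))
```

Note how close the token stream is to speech ("add the product of 2 and 3 to negative 4"), which is the spoken-language appeal Martin mentions; variable arity or infix support would break this single-pass reading.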
Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
Alan, Regarding these quotes from the early history of Smalltalk re ST 71: I had originally made the boast because McCarthy’s self-describing LISP interpreter was written in itself. It was about “a page”, and as far as power goes, LISP was the whole nine yards for functional languages. I was quite sure I could do the same for object-oriented languages *plus* be able to do a reasonable syntax for the code *à la* some of the FLEX machine techniques.

Once I read that I HAD to see it. I finally tracked down a PDF version of the history of Smalltalk which contained the mythical appendix III and thus the one-pager. The scan is pretty bad so I have attempted to transcribe it into a plain text file, which I am attaching here. I was wondering if you could take a look and see if it is roughly correctly transcribed? For the sake of completeness I am working on SVG versions of the diagrams. My main questions specifically relating to the one-pager are: 1. Does my use of . instead of the other symbol (e.g. e.MSG vs. e{other-char}MSG) present an issue? 2. I had a hard time making out the single-quoted characters on lines 23 and 24 following the e.MSG.PC; my best guess was a comma and a period (which relates to my first question: is the character actually the same as {other-char})? 3. Regarding line 54 and the etc... any idea what that ellipsis would be once expanded, haha? Would it just be a full definition of escapes or would it be further definitions relating to the interpreter? 4. Line 62: where I put {?}, what is that character meant to be? I believe it is the same as what is on line 70 and also marked as {?}. 5. Is it implied that things like quote, set (-), Table, null, atom, notlist, escape, goto, if-then-else, select-case, +, etc. would exist as primitives? Any other insight you could provide would be much appreciated.
Thanks -Shaun

On Thu, Mar 15, 2012 at 6:40 PM, Andre van Delft andre.vande...@gmail.com wrote: The theory Algebra of Communicating Processes (ACP) offers non-determinism (as in Meta II) plus concurrency. I will present a paper on extending Scala with ACP next month at Scala Days 2012. For an abstract, see http://days2012.scala-lang.org/node/92 A non-final version of the paper is at http://code.google.com/p/subscript/downloads/detail?name=SubScript-TR2012.pdf André

On 15 Mar 2012, at 03:03, Alan Kay wrote: Well, it was very much a mythical beast even on paper -- and you really have to implement programming languages and make a lot of things with them to be able to assess them. But -- basically -- since meeting Seymour and starting to think about children and programming, there were eight systems that I thought were really nifty and cried out to be unified somehow: 1. Joss 2. Lisp 3. Logo -- which was originally a unification of Joss and Lisp, but I thought more could be done in this direction. 4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for logic programming and pattern-directed inference, both forward and backwards, with backtracking. 5. Meta II -- a super simple meta parser and compiler done by Val Schorre at UCLA ca. 1963 6. IMP -- perhaps the first real extensible language that worked well -- by Ned Irons (CACM, Jan 1970) 7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al., with some design ideas by me 8. The object and pattern-directed extension stuff I'd been doing previously with the Flex Machine and afterwards at SAIL (that also was influenced by Meta II). One of the schemes was to really make the pattern matching parts of this work for everything that eventually required invocations and binding. This was doable semantically but was a bear syntactically, because of the different senses of what kinds of matching and binding were intended for different problems.
This messed up the readability and the desire that simple things should be simple. Examples I wanted to cover included simple translations of languages (English to Pig Latin, English to French, etc.; some of these had been done in Logo), the Winograd robot block stacking and other examples done with Planner, the making of the language the child was using, message sending and receiving, extensions to Smalltalk-71, and so forth. I think today the way to try to do this would be with a much more graphical UI than with text -- one could imagine tiles that would specify what to match, and the details of the match could be submerged a bit. More recently, both OMeta and several of Ian's matchers can handle multiple kinds of matching with binding and do backtracking, etc., so one could imagine a more general language that could be based on this. On the other hand, trying to stuff 8 kinds of language ideas into one new language in a graceful way could be a siren's song of a goal. Still... Cheers, Alan
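The "matching with binding and backtracking" thread running through Planner, the Lisp-70 matcher, and OMeta can be illustrated with a toy matcher over nested tuples — a hypothetical sketch for flavor, not a reconstruction of any of those systems:

```python
# Toy pattern matcher with binding and backtracking. Patterns are
# tuples; strings starting with '?' are variables. Generators make the
# backtracking explicit: each yield is one consistent binding
# environment, and callers can ask for alternatives.
def match(pattern, datum, bindings):
    if isinstance(pattern, str) and pattern.startswith('?'):
        # Variable: bind it, or check consistency with an earlier binding.
        if pattern in bindings:
            if bindings[pattern] == datum:
                yield bindings
        else:
            yield {**bindings, pattern: datum}
    elif isinstance(pattern, tuple) and isinstance(datum, tuple):
        if len(pattern) == len(datum):
            # Match element-wise, threading environments through.
            envs = [bindings]
            for p, d in zip(pattern, datum):
                envs = [e2 for e in envs for e2 in match(p, d, e)]
            for e in envs:
                yield e
    elif pattern == datum:
        yield bindings  # Literal match, no new bindings.

print(list(match(('on', '?x', '?y'), ('on', 'block1', 'table'), {})))
```

The syntactic problem Alan describes shows up even here: the same `?x` notation would have to serve parsing, block-world inference, and message dispatch, each wanting slightly different matching semantics.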
Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
Alan, I would go way back to the never implemented Smalltalk-71. Is there a formal specification of what 71 should have been? I have only ever read about it in passing reference in the various histories of Smalltalk as a step on the way to 72, 76, and finally 80. I am very intrigued as to what sets 71 apart so dramatically. -Shaun

On Wed, Mar 14, 2012 at 12:29 PM, Alan Kay alan.n...@yahoo.com wrote: Hi Scott -- 1. I will see if I can get one of these scanned for you. Moore tended to publish in journals and there is very little of his stuff available on line. 2.a. if (a < b) { ... } is easier to read than if a < b then ...? There is no hint of the former being tweaked for decades to make it easier to read. Several experiments from the past cast doubt on the rest of the idea. At Disney we did a variety of code display generators to see what kinds of transformations we could do to the underlying Smalltalk (including syntactic) to make it something that could be subsetted as a growable path from Etoys. We got some good results from this (and this is what I'd do with Javascript in both directions -- Alex Warth's OMeta is in Javascript and is quite complete and could do this). However, the showstopper was all the parentheses that had to be rendered in tiles. Mike Travers at MIT had done one of the first tile-based editors for a version of Lisp that he used, and this was even worse. More recently, Jens Moenig (who did SNAP) also did a direct renderer and editor for Squeak Smalltalk (this can be tried out) and it really seemed to be much too cluttered. One argument for some of this is: well, teach the kids a subset that doesn't use so many parens. This could be a solution. However, in the end, I don't think Javascript semantics is particularly good for kids. For example, one of the features of Etoys that turned out to be very powerful for children and other Etoy programmers is the easy/trivial parallel methods execution.
And there are others in Etoys and yet others in Scratch that are non-standard in regular programming languages but are very powerful for children (and some of them are better than standard CS language ideas). I'm encouraging you to do something better (that would be ideal). Or at least as workable. Giving kids less just because that's what an existing language for adults has is not a good tactic. 2.c. Ditto 2.a. above 2.d. Ditto above. Cheers, Alan

-- *From:* C. Scott Ananian csc...@laptop.org *To:* Alan Kay alan.n...@yahoo.com *Cc:* IAEP SugarLabs i...@lists.sugarlabs.org; Fundamentals of New Computing fonc@vpri.org; Viewpoints Research a...@vpri.org *Sent:* Wednesday, March 14, 2012 10:25 AM *Subject:* Re: [IAEP] [fonc] Barbarians at the gate! (Project Nell)

On Wed, Mar 14, 2012 at 12:54 PM, Alan Kay alan.n...@yahoo.com wrote: The many papers from this work greatly influenced the thinking about personal computing at Xerox PARC in the 70s. Here are a couple: -- O. K. Moore, Autotelic Responsive Environments and Exceptional Children, Experience, Structure and Adaptability (ed. Harvey), Springer, 1966 -- Anderson and Moore, Autotelic Folk Models, Sociological Quarterly, 1959

Thank you for these references. I will chase them down and learn as much as I can.

2. Separating out some of the programming ideas here: a. Simplest one is that the most important users of this system are the children, so it would be a better idea to make the tile scripting look as easy for them as possible. I don't agree with the rationalization in the paper about preserving the code-reading skills of existing programmers.

I probably need to clarify the reasoning in the paper for this point. Traditional text-based programming languages have been tweaked over decades to be easy to read -- for both small examples and large systems.
It's somewhat of a heresy, but I thought it would be interesting to explore a tile-based system that *didn't* throw away the traditional text structure, and tried simply to make the structure of the traditional text easier to visualize and manipulate. So it's not really skills of existing programmers I'm interested in -- I should reword that. It's that I feel we have an existence proof that the traditional textual form of a program is easy to read, even for very complicated programs. So I'm trying to scale down the thing that works, instead of trying to invent something new which proves unwieldy at scale. b. Good idea to go all the way to the bottom with the children's language. c. Figure 2 introduces another -- at least equally important language -- in my opinion, this one should be made kid usable and programmable -- and I would try to see how it could fit with the TS language in some way. This language is JSON, which is just the object-definition subset of JavaScript. So it can in fact be expressed with TurtleScript tiles. (Although I haven't yet tackled
Re: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)
For anyone else who was captivated by: Denis came up with a nice little language that had a bit of an APL feeling for humans to program this system in - here is a link to the 184-page paper describing DCPL: http://content.lib.utah.edu/cdm4/item_viewer.php?CISOROOT=/ir-main&CISOPTR=60083 -shaun

On Fri, Sep 2, 2011 at 9:23 AM, Alan Kay alan.n...@yahoo.com wrote: Hi Jecel, I think both these sections were reactions to some of the current hardware, and hardware problems, of the time. I remember the second one better than the first. In those days cutting dies out of the wafers often damaged them, and the general yield was not great even before cutting the die out. This idea was to lay down regions of memory on the wafer, run bus metalization over them, test them, and zap a few bits if they didn't work. The key here was that the working ones would each have a register that had its name (actually its base address and range) and all could look at the bus to see if their address came up. If it did, it would seize the bus and do whatever. So this was a kind of distributed small-pages-and-MMUs scheme. And the yield would be much higher because the wafers remained intact. I don't think any of these tradeoffs obtain today, though one could imagine other kinds of schemes for distributed memory and memory management that would be more sensible than current schemes.

The first one I really don't remember. But it probably was partially the result of the head-per-track small disk that the FLEX machine used -- and probably was influenced by Paul Rovner's scheme at Lincoln Labs for doing Jerry Feldman's software associative triples memory. I think this was not about Denis Seror's later and really interesting thesis (under Barton) to make a lambda calculus machine -- really a combinator machine (to replace variables by paths) and to have the computation on the disk and just pull in and reduce as possible as the disk zoomed by. All was done in parallel and eventually all would be reduced.
Denis came up with a nice little language that had a bit of an APL feeling for humans to program this system in. He (and his wife) wound up making an animated movie to show people who didn't know about lambda expressions and combinators (which was pretty much everyone in CS in those days) what they were and how they reduced. There's no question that Bob Taylor was the prime key for PARC (and he also had paid for most of our PhDs in the 60s when he was an ARPA funder). Cheers, Alan

From: Jecel Assumpcao Jr. je...@merlintec.com To: Alan Kay alan.n...@yahoo.com Cc: Fundamentals of New Computing fonc@vpri.org Sent: Thursday, September 1, 2011 3:17 PM Subject: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

Alan, The Flex Machine was the omelet you have to throw away to clean the pan, so I haven't put any effort into saving that history. Fair enough! Having the table of contents but not the text made me think that perhaps the sections B.6.b.ii The Disk as a Serial Associative Memory and B.6.c An Associatively Mapped LSI Memory might be interesting in light of Ian's latest paper. Or the first part might be more related to OOZE instead.

But there were 4 or 5 pretty good things and 4 or 5 really bad things that helped the Alto-Smalltalk effort a few years later. Was being able to input drawings one of the good things? There was one Lisp GUI that put a lot of effort into allowing you to input objects instead of just text. It did that by outputting text but keeping track of where it came from. So if you pointed to the text generated by listing the contents of a disk directory while there was some program waiting for input, that program would read the actual entry object. It is frustrating for me that while the Squeak VM could easily handle an expression like myView add: yellowEllipseMorph copy. I have no way of typing that. I can't use any object as a literal nor as input.
In Etoys I can get close enough by getting a tile representing the yellowEllipseMorph from its halo and use that in expressions. In Self I could add a constant slot with some easy-to-type value, like 0, and then drag the arrow from that slot to point to the object I really wanted. It was a bit indirect but it worked and I used this a lot. The nice thing about having something like this is that you never need global variables again.

I'd say that the huge factors after having tried to do one of these were two geniuses: Chuck Thacker (who was an infinitely better hardware designer and builder than I was), and Dan Ingalls (who was infinitely better at most phases of software design and implementation than I was). True. You were lucky to have them, though perhaps we might say Bob Taylor had built that luck into PARC. -- Jecel

___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Beats
I really liked the idea mentioned on Hacker News of using a numeric value in place of the X to indicate velocity. I am going to mess around with a really simple web interface for this over the weekend.

On 5/17/11, Casey Ransberger casey.obrie...@gmail.com wrote: Cool! I've been hoping to see some more multimedia stuff happen for Ruby, and I actually like the little DSL they've got going there: it's very visual, and a grid is perfect when what you're emulating is a drum machine, which usually has a grid interface or some such, and doesn't know about inexact timing like a drummer does. It looks like fun... too many shiny distractions :)

On Mon, May 16, 2011 at 8:21 PM, Josh McDonald j...@joshmcdonald.info wrote: Thought you guys would get a kick out of this YAML-WAV sequencer written in Ruby: https://github.com/jstrait/beats -- Therefore, send not to know For whom the bell tolls. It tolls for thee. Josh 'G-Funk' McDonald - j...@joshmcdonald.info - http://twitter.com/sophistifunk - http://flex.joshmcdonald.info/

___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc

-- Casey Ransberger -- Sent from my mobile device
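The velocity idea is easy to prototype: keep the '.'-for-rest convention from a Beats-style pattern string, but let a digit scale the hit strength. A hedged sketch — the digit-to-amplitude mapping and the "X means full strength" rule are assumptions for illustration, not Beats' actual file format:

```python
# Parse a Beats-style step pattern into per-step amplitudes.
# '.' = rest, 'X' = full-strength hit, digit 1-9 = scaled velocity.
def parse_pattern(pattern):
    """Return one amplitude in [0.0, 1.0] per step."""
    amps = []
    for ch in pattern:
        if ch == '.':
            amps.append(0.0)          # rest
        elif ch == 'X':
            amps.append(1.0)          # full-strength hit
        elif ch.isdigit() and ch != '0':
            amps.append(int(ch) / 9.0)  # scaled hit: '9' == 'X'
        else:
            raise ValueError(f"bad step character: {ch!r}")
    return amps

print(parse_pattern("X.4.X.9."))
```

A sequencer front end would multiply each sample buffer by its step's amplitude, so existing X/. patterns keep working unchanged.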
Re: [fonc] Evaluating Expressions, part 6 – Actor Primitives
On 11/20/10, Dale Schumacher dale.schumac...@gmail.com wrote: Implementing language features using Actors has reached the point where the actor primitives themselves can be described meta-circularly. http://bit.ly/9KfHLI ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Sent from my mobile device