[fonc] Call for GSoC students on Scala ACP related projects

2013-04-12 Thread Andre van Delft
The Scala team at EPFL in Lausanne is once again a mentoring organization for 
the Google Summer of Code, a global program that offers students stipends to 
write code for open source projects. Following a presentation I gave there a 
month ago, the Scala team has included two GSoC projects related to my 
language extension named SubScript. More information is available at the Scala 
site and below.

Students can apply this month. Interested and qualified? You may contact EPFL, 
or me at andre dot vandelft at gmail dot com.

SubScript

SubScript is a way to extend common programming languages, aimed at easing event 
handling and concurrency. Typical application areas are GUI controllers, text 
processing applications and discrete event simulations. SubScript is based on a 
mathematical concurrency theory named Algebra of Communicating Processes (ACP).

You can regard ACP as an extension of Boolean algebra with ‘things that can 
happen’. These items are glued together with operations such as alternative, 
sequential and parallel composition. This way ACP combines the essence of 
grammar specification languages with notions of parallelism.

Adding ACP to a common programming language yields a lightweight alternative 
to thread-based concurrency. It also brings the 50-year-old but still magical 
expressiveness of languages for parser generators and compiler-compilers, so 
that SubScript suits language processing. The nondeterministic style combined 
with concurrency support happens to be very useful for programming GUI 
controllers. Surprisingly, ACP with a few extras even enables data-flow-style 
programming, as with pipes in the Unix shell.
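
To make the algebra concrete, here is a minimal, hypothetical sketch of 
ACP-style process terms in plain Scala. It is illustrative only: the 
constructor names are invented, and the interpreter runs just one sequential 
trace rather than the true interleaving semantics of ACP or the actual 
SubScript implementation.

sealed trait Process
case class Action(name: String, body: () => Unit)    extends Process
case class Choice  (p: Process, q: Process)          extends Process // alternative composition: p + q
case class Sequence(p: Process, q: Process)          extends Process // sequential composition:  p . q
case class Parallel(p: Process, q: Process)          extends Process // parallel composition:    p || q
case object Epsilon  extends Process // "1": the process that can only terminate
case object Deadlock extends Process // "0": the process that cannot do anything

// Naive interpreter: picks one possible trace at random; real ACP would
// interleave the branches of Parallel instead of running them in order.
def run(p: Process): Unit = p match {
  case Action(_, body)  => body()
  case Choice(p1, p2)   => run(if (scala.util.Random.nextBoolean()) p1 else p2)
  case Sequence(p1, p2) => run(p1); run(p2)
  case Parallel(p1, p2) => run(p1); run(p2) // simplification; see comment above
  case Epsilon          => ()
  case Deadlock         => sys.error("deadlock")
}

The analogy with Boolean algebra is direct: Choice behaves like disjunction, 
Deadlock like false and Epsilon like true, but the "values" are things that 
can happen.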

Currently a SubScript extension to Scala is available; see 
http://subscript-lang.org. It comes with a branch of the Scala compiler, a 
runtime library, support for the Scala-Swing reactive framework and example 
programs. The communication part of ACP (the “C”) is not yet supported.

Investigate SubScript on top of JavaScript

SubScript might just as well extend languages other than Scala. An interesting 
starter would be JavaScript. The good thing is that as from April 2013 (?) 
Scala translates into JavaScript. Therefore a single code base of the SubScript 
VM, which is written in Scala, may also work for JavaScript.
The project would involve some of the following tasks:
* develop use cases, both for client-side and server-side applications
* create a translator for SubScript into JavaScript
* extend an existing JavaScript interpreter to understand SubScript
* define a strategy to send over SubScript in HTML pages and have it translated
* provide a translator for the SubScript VM source code into JavaScript
* JavaScript does not support explicit multithreading; develop an alternative

Enhance Akka using SubScript

Akka is the Scala actor implementation, very useful for distributed applications. 
Typically an actor operates a state machine, which is programmed using state 
variables. This is relatively inconvenient to program and read; a plain-Akka 
sketch follows the task list below. SubScript may provide a better alternative 
for programming actor internals. This project would involve:
* develop typical actors in two versions: just Scala and SubScript
* compare these versions in terms of clarity and conciseness
* measure the performance of these versions
* make a tutorial
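
For reference, here is a minimal sketch of the state-machine style mentioned 
above, written against the plain Akka API of that time; the Door protocol is 
invented for illustration.

import akka.actor.{Actor, ActorSystem, Props}

// A plain-Scala Akka actor programmed as a state machine via context.become;
// this is the style that a SubScript alternative would aim to simplify.
class Door extends Actor {
  def receive = closed
  def closed: Receive = {
    case "open"  => context.become(open)
    case "close" => () // already closed; ignore
  }
  def open: Receive = {
    case "close" => context.become(closed)
    case "open"  => () // already open; ignore
  }
}

object Demo extends App {
  val system = ActorSystem("demo")
  val door   = system.actorOf(Props[Door], "door")
  door ! "open"
  door ! "close"
  system.shutdown() // Akka 2.x API of that era
}

Akka's FSM trait offers a more structured variant of the same style; the 
comparison in this project would be against SubScript scripts replacing the 
become/unbecome bookkeeping.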
More information on SubScriptActors is available at 
http://subscript-lang.org/subscript-actors/.




Re: [fonc] Binary Lambda Calculus

2013-03-27 Thread Andre van Delft
On this issue:
* How does parallel processing fit into the picture?
the following may be useful:

In 1989 Henk Goeman combined Lambda Calculus directly with concepts from 
concurrency, in his paper Towards a Theory of (Self) Applicative Communicating 
Processes: a Short Note. The PDF is available at 
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.8907

On 25 Mar 2013, at 14:40, Jan Wedekind j...@wedesoft.de wrote:

 Hi,
  After reading John Tromp's paper [1] (also see De Bruijn Index [2]) I got 
 very interested in Binary Lambda Calculus (BLC). In my little spare time I 
 implemented a BLC interpreter in C [3, 4].
 
  I thought it would be interesting to share it on this mailing list since the 
 motivation behind BLC is similar to the one behind VPRI's Nile [5] and Maru 
 [6]. The test suite [7] gives a rough overview on how BLC works:
 
 * 10, 110, 1110, ... are variables. 10 refers to the top of the stack (the 
 local environment), 110 refers to the next variable in the stack and so on.
 * 00M, where M is a term, defines a lambda expression expecting one value on 
 the stack. E.g. 0010 is the identity function which in Scheme would be 
 (lambda (x) x).
 * 01MN, where M and N are terms, is a function application. E.g. 0100100010 
 is the same as ((lambda (x) x) (lambda (x) x)) in Scheme.
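 
 A minimal Scala sketch of this encoding (illustrative only, not the C 
 implementation; malformed input throws a MatchError):
 
 sealed trait Term
 case class Var(index: Int)       extends Term // n 1s then 0 encode de Bruijn index n (10 = top of stack)
 case class Lam(body: Term)       extends Term // 00 M
 case class App(f: Term, a: Term) extends Term // 01 M N
 
 def parse(bits: List[Char]): (Term, List[Char]) = bits match {
   case '0' :: '0' :: rest => val (m, r) = parse(rest); (Lam(m), r)
   case '0' :: '1' :: rest =>
     val (m, r1) = parse(rest); val (n, r2) = parse(r1); (App(m, n), r2)
   case '1' :: _ =>
     val ones = bits.takeWhile(_ == '1').length
     (Var(ones), bits.drop(ones + 1)) // drop the 1s and the closing 0
 }
 
 // parse("0010".toList)       == (Lam(Var(1)), Nil)  -- the identity function
 // parse("0100100010".toList) == (App(Lam(Var(1)), Lam(Var(1))), Nil)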
 
  John Tromp's paper furthermore shows how boolean values are represented 
 using 110 and 10 (i.e. functions taking two arguments and returning 
 the first or second argument). Scheme's cons is represented by a function 
 accepting a boolean and returning the head or the tail of the list. An empty 
 list is represented by 10 (false). The paper shows a meta-circular 
 interpreter implemented in 210 bits!
 
  I'm still undecided about how to approach some things:
 * How does parallel processing fit into the picture?
 * What's the best way to implement a DSL including booleans, integers, 
 floating point numbers, strings, .. on top of BLC?
 * Is it possible to describe the electronics of integer addition and 
 multiplication in BLC and still have it jit-compiled to run efficiently on 
 x86 hardware?
 * How to implement I/O (maybe use monadic objects)? Could the I/O 
 implementation provide equivalent functions to Scheme's quote and eval, too?
 
 Any comments and suggestions are welcome :)
 
 Regards Jan
 
 BTW I am looking forward to the VPRI report on the STEPS project and I hope 
 there will be future work to make computing suck less.
 
 [1] http://homepages.cwi.nl/~tromp/cl/LC.pdf
 [2] http://en.wikipedia.org/wiki/De_Bruijn_index
 [3] https://github.com/wedesoft/blc/
 [4] https://github.com/wedesoft/blc/blob/master/blc.c
 [5] https://github.com/damelang/nile/
 [6] http://piumarta.com/software/maru/
 [7] https://github.com/wedesoft/blc/blob/blc/spec_blc.c#L94


Re: [fonc] Binary Lambda Calculus

2013-03-25 Thread Andre van Delft
Maybe this is not really what you are looking for, but I would recommend 
looking at Program Algebra (PGA) [1,2,3,4,5], by Jan Bergstra and others at the 
University of Amsterdam, adjacent to Tromp's CWI. BTW, Bergstra taught me lambda 
calculus 32 years ago in Leiden, in a course on Mathematical Logic.

From the web site [1]:
Program Algebra is an algebraic framework for sequential programming. Its 
purpose is to contribute to a better understanding of sequential programming. A 
very simple program notation is used as basis for development of other program 
notations. Program Algebra as presented here is used in both research and 
education. A Toolset is available freely for educational purposes.  

PGA is very different from BLC of course, but both are simple linear 
notations. PGA starts with jump instructions, and it has step-by-step 
extensions for variables, control structures, semaphores etc. It has been used 
for a projection of Ruby's OO constructs. 
So I hope PGA could give you some ideas.

Cheers,

André

[1] http://www.science.uva.nl/research/prog/projects/pga/
[2] http://www.science.uva.nl/research/prog/projects/pga/publist.html
[3] http://staff.science.uva.nl/~janb/pga/whypga.html
[4] http://staff.science.uva.nl/~janb/pga/whynotpga.html
[5] http://staff.science.uva.nl/~janb/pga/progic.html

On 25 Mar 2013, at 14:40, Jan Wedekind j...@wedesoft.de wrote:

 [Jan's message, quoted here in full in the original, appears above and is 
 elided.]



Re: [fonc] Binary Lambda Calculus

2013-03-25 Thread Andre van Delft

On 25 Mar 2013, at 17:35, John Tromp john.tr...@gmail.com wrote:

 PGA is very different from BLC of course, but both are simple linear
 notations. PGA starts with jump instructions, and it has step-by-step
 extensions for variables, control structures, semaphores etc. It has been
 used as a projection of Ruby's OO constructs.
 So I hope PGA could give you some ideas.
 
 PGA looks like a notation for control flow as a basis for an imperative
 language. BLC is a self-contained pure functional language aimed at
 minimal programs. This seems a case of comparing apples and oranges:-)

Apples and oranges look far more similar than PGA and BLC. I would say it is 
more a case of apples and apple trees. Very different in appearance, but deep 
inside they share DNA, and the one form gives birth to the other form.


Re: [fonc] Binary Lambda Calculus

2013-03-25 Thread Andre van Delft
John Tromp john.tr...@gmail.com wrote to me:

 dear Andre,
 You may want to include my entire message, since my response to the fonc list
 bounced (unsurprisingly, as I'm not a member), and was only seen by
 you and Jan...
 
 Apples and oranges look far more similar than PGA and BLC.
 I would say it is more a case of apples and apple trees.
 
 Let's say that comparing PGA and BLC is like comparing oranges and
 apple trees:-)

Pears and apple trees then, OK?

Below I quote John's entire first message (and repeat my remark).

 John Tromp john.tr...@gmail.com wrote:
 
 dear Andre/Jan,
 
 PGA is very different from BLC of course, but both are simple linear
 notations. PGA starts with jump instructions, and it has step-by-step
 extensions for variables, control structures, semaphores etc. It has been
 used as a projection of Ruby's OO constructs.
 So I hope PGA could give you some ideas.
 
 PGA looks like a notation for control flow as a basis for an imperative
 language. BLC is a self-contained pure functional language aimed at
 minimal programs. This seems a case of comparing apples and oranges:-)

Apples and oranges look far more similar than PGA and BLC. I would say it is 
more a case of apples and apple trees. Very different in appearance, but deep 
inside they share DNA, and the one form gives birth to the other form.

 
 After reading John Tromp's paper [1] (also see De Bruijn Index [2]) I got
 very interested in Binary Lambda Calculus (BLC). In my little spare time I
 implemented a BLC interpreter in C [3, 4].
 
 Cool! You can also find a simple interpreter on my webpage,
 from which I derived the IOCCC entry at
  http://www.ioccc.org/2012/tromp/tromp.c
  http://www.ioccc.org/2012/tromp/hint.html
 that uses several interesting sample BLC programs,
 which you may want to try as additional tests for your interpreter.
 My C implementations still trail my Haskell implementation in
 flexibility, robustness and performance though...
 
 I'm still undecided about how to approach some things:
 * How does parallel processing fit into the picture?
 * What's the best way to implement a DSL including booleans, integers,
 floating point numbers, strings, .. on top of BLC?
 * Is it possible to describe the electronics of integer addition and
 multiplication in BLC and still have it jit-compiled to run efficiently on
 x86 hardware?
 * How to implement I/O (maybe use monadic objects)?
 
 All these features are somewhat beyond the scope of BLC, and
 most either require, or benefit greatly from, a type system, which is
 purposely lacking in BLC. I think you should look at a language
 like Haskell for achieving such goals.
 
 Could the I/O
 implementation provide equivalent functions to Scheme's quote and eval, too?
 
 The binary encoding of terms in BLC serves as the quoted form,
 while the 210 bit interpreter is your eval. You can write a quote function,
 but it needs an already quoted form as input (and gives you a double
 quoted form as output).
 
 regards,
 -John


[fonc] SubScript website gone live: programming with Process Algebra

2013-01-01 Thread Andre van Delft
Please allow me to blurb the following, which is related to several 
discussions at FONC:

Our web site http://subscript-lang.org went officially live last Saturday. 
SubScript is a way to extend common programming languages, aimed at easing event 
handling and concurrency. Typical application areas are GUI controllers, text 
processing applications and discrete event simulations. SubScript is based on a 
mathematical concurrency theory named Algebra of Communicating Processes (ACP).

ACP is a 30-year-old branch of mathematics, as solid as numerical algebra and 
Boolean algebra. In fact, you can regard ACP as an extension of Boolean algebra 
with 'things that can happen'. These items are glued together with operations 
such as alternative, sequential and parallel composition. This way ACP combines 
the essence of compiler-compilers with notions of parallelism.

Adding ACP to a common programming language yields a lightweight alternative 
to thread-based concurrency. It also brings the 50-year-old but still magical 
expressiveness of languages for parser generators and compiler-compilers, so 
that SubScript suits language processing. The nondeterministic style combined 
with concurrency support happens to be very useful for programming GUI 
controllers. Surprisingly, ACP with a few extras even enables data-flow-style 
programming, as with pipes in the Unix shell.

For instance, programming a GUI controller for a simple search application takes 
about 15 lines of code in Java or Scala, if you do the threading well. In 
SubScript it is only 5 lines; see 
http://subscript-lang.org/examples/a-simple-gui-application/
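
To give the flavor, here is a hedged reconstruction of such a controller in 
SubScript-style notation, loosely based on the example behind the link; the 
exact syntax and the widget names (searchButton, searchField, textArea) may 
differ from the published example.

 live              = searchSequence...
 searchSequence    = searchCommand; showSearchingText; searchInDatabase; showSearchResults
 searchCommand     = searchButton
 showSearchingText = @gui: {textArea.text = "Searching: " + searchField.text}
 searchInDatabase  = {* Thread.sleep(2000) *}  // stands in for a real database query
 showSearchResults = @gui: {textArea.text = "Search ready"}

The point of the example: the script reads like a grammar rule for the allowed 
event sequences, while annotations such as @gui: and {* *} move code fragments 
to the appropriate threads.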

At the moment SubScript is being implemented as an extension to the programming 
language Scala; other languages, such as C, C++, C#, Java and JavaScript, would 
be possible too. The current state of the implementation is mature enough for 
experimentation by language researchers, but not yet for real application 
development. If you have the Eclipse environment with the Scala plugin 
installed, it is easy to get SubScript running with the example applications 
from our Google Code project.

We hope this announcement will raise interest from programming language 
researchers, and that some developers will come aboard on the project.

In the second half of February 2013 we will very probably give a presentation 
and a hands-on workshop at EPFL in Lausanne, the place where Scala is 
developed. We hope to have a SubScript compiler ready by then, branched from the 
Scala compiler scalac. A more detailed announcement will follow by the end of 
January on our site.




Re: [fonc] Obviously Kogge-Stone

2012-12-04 Thread Andre van Delft
Lately I have been wondering whether we could design hardware inspired by 
Program Algebra (PGA) and Maurer computers.

PGA is an algebraic framework for sequential programming. PGA's creator Jan 
Bergstra writes in Why PGA?:

 We have spotted Maurer's 1967 JACM paper on 'A theory of computer 
 instructions' as the best possible basis for a theory of machines (Maurer 
 calls them computers) that can be used in combination with PGA-style program 
 algebra. Maurer's model is particularly suitable to analyze the multitude of 
 conceivable machine models between what we call the split program machine 
 model (in computer architecture loosely indicated as the ISA architecture), 
 and the stored program running on a modern out-of-order pipelined 
 multi-processor chip. Here we are looking for quantitative results concerning 
 the fraction of conceivable functionalities that can actually be programmed 
 and stored on a given Maurer computer.
http://staff.science.uva.nl/~janb/pga/whypga.html

Bergstra also worked on Thread Algebra, which he uses in his papers on Maurer 
computers.

There is a lot to read here: 
http://staff.science.uva.nl/~mbz/Projects/TASI/tasi.html

André

On 4 Dec 2012, at 23:55, Casey Ransberger casey.obrie...@gmail.com wrote:

 Oh, I was mostly fishing to see if anyone was doing anything fun with 
 hardware description. 
 
 The present time sees a bigger trade between performance and power 
 consumption, but that's all in the realm of optimization. 
 
 A mentor of mine in the software world once said something to me to the 
 effect of "Don't look under the ISA; you don't want to see what's down 
 there." Of course, that only encouraged me to look. 
 
 After having done so, I'm seeing a lot of the same stuff there that motivated 
 FONC. Cruft built up over generations of deadlines and back compat, etc. I'm 
 pretty sure one could really put the STEPS treatment to hardware design, but 
 there's a catch. If you wanted to run Frank over a machine designed to be 
 understandable (rather than fast and compatible) and Frank hadn't seen 
 anything more in the way of optimization than what the researchers working on 
 it had to overcome with optimizations in order to complete their research, 
 one might end up with a whole system too slow to experiment with. 
 
 That's conjecture (obviously) given that I don't have Frank to play out over 
 my FPGA rig, but I tend to trust my gut. Hence the post :)
 
 So I got to thinking, while I was doing my little armchair exploration of 
 Verilog, what if we could declare/specify behavior with some concrete 
 algebra, and (I know, I know, I added an and) find a way to do optimization 
 in an automated way. 
 
 Thought experiment: what if I designed a behavior-specifying language, which 
 could be reduced to S-expressions, and then tried to find a set of fitness 
 functions around performance? Would it be possible to converge on a 
 performance goal by ripping off nature? If it was possible, how much compute 
 would I need to converge on an acceptable solution?
 
 Probably crazy expensive compute because I'm doing two things that don't want 
 to sleep in the same room... verifying correctness (think a lot of tests in 
 some test language) and optimization (convergence on some set of perf 
 metrics.) 
 
 Not just gen algs but gen programming. 
 
 This popped into my head after thumbing through this thing:
 
 http://www.amazon.com/gp/aw/d/0262111888
 
 Of course, I'm skeptical (of both the content of that book and of my own 
 crazy ideas!) While a performance metric (in some continuum of flops and 
 watts) *does* seem like something GP might be able to optimize, correctness 
 *absolutely* does not. Hence the question about language. 
 
 Has anyone here read that? I'm quite sure I haven't understood all of the 
 content, and with as obscure as it seems to be, it could be full of bunk. Of 
 course, with regard to obscurity, one might say the same thing of other stuff 
 folks around here know, understand well, and love. 
 
 I could go into specific ideas I've had, but I won't, because I haven't 
 really a framework for vetting them. Instead, I'm going to ship the Wouldn't 
 It Be Cool If and let the good people of the list slaughter my little thought 
 experiment with the raw unrelenting debunking power of a group of people who 
 like science.
 
 Bonus points if anyone can name the Dylan song title that I spoofed for the 
 thread, but that's way OT so please reply direct!
 
 Casey
 
 On Nov 30, 2012, at 3:35 PM, David Barbour dmbarb...@gmail.com wrote:
 
 Could you clarify what you're thinking about? Is your question about 
 metaprogramming of Verilog (with an implicit assumption that Verilog will 
 save battery life)?
 
 I've spent much time thinking about language and protocol design to extend 
 battery resources. I happen to think the real wins are at higher levels - 
 avoiding unnecessary work, amortizing work over time, linear logics, 
 graceful degradation 

Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Andre van Delft
Fascinating.
How did Iverson do division?

On 15 Jun 2012, at 23:08, David Leibs wrote:

 Speaking of multiplication.  Ken Iverson teaches us to do multiplication by 
 using an outer product to build a times table for the digits involved:
 
 +--+-----------+
 |  |  3   6   6|
 +--+-----------+
 | 3|  9  18  18|
 | 6| 18  36  36|
 | 5| 15  30  30|
 +--+-----------+
 
 Now you sum each diagonal:
 
    (9) (18+18) (18+36+15) (36+30) (30)
     9     36       69       66    30
 And just normalize as usual:
 
      9  36  69  66  30
      9  36  69  69   0
      9  36  75   9   0
      9  43   5   9   0
     13   3   5   9   0
   1  3   3   5   9   0
 
 The multiplication table is easy and just continued practice for your 
 multiplication facts.
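 
 A hedged Scala sketch of the same procedure (outer product of digit vectors, 
 anti-diagonal sums, then carry normalization; illustrative only):
 
 def times(a: Vector[Int], b: Vector[Int]): Vector[Int] = {
   // outer product: pair each digit product with its anti-diagonal index i+j
   val products  = for (i <- a.indices; j <- b.indices) yield (i + j, a(i) * b(j))
   val diagonals = products.groupBy(_._1).toVector.sortBy(_._1).map(_._2.map(_._2).sum)
   // normalize: propagate carries from right to left
   val (carry, digits) = diagonals.foldRight((0, List.empty[Int])) {
     case (d, (c, acc)) => val s = d + c; (s / 10, (s % 10) :: acc)
   }
   val head = if (carry == 0) Nil else carry.toString.map(_.asDigit).toList
   (head ++ digits).toVector
 }
 
 // times(Vector(3, 6, 5), Vector(3, 6, 6)) == Vector(1, 3, 3, 5, 9, 0)  -- 365 * 366 = 133590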
 
 You don't need much more machinery before you have the kids doing Cannon's 
 order n systolic array algorithm for matrix multiply, on the gym floor, with 
 their bodies.  This assumes that the dance teacher is coordinating with the 
 algorithms teacher. Of course if there isn't something relevant going on that 
 warrants matrix multiply then all is lost. I guess that's a job for the 
 motivation teacher. :-)
 
 -David Leibs
 
 On Jun 15, 2012, at 12:57 PM, Pascal J. Bourguignon wrote:
 
 David Leibs david.le...@oracle.com writes:
 
 I have kinda lost track of this thread so forgive me if I wander off
 in a perpendicular direction.
 
 I believe that things do not have to continually get more and more
 complex.  The way out for me is to go back to the beginning and start
 over (which is what this mailing list is all about).  I constantly go
 back to the beginnings in math and/or physics and try to re-understand
 from first principles.  Of course every time I do this I get less and
 less further along the material continuum because the beginnings are
 so darn interesting.
 
 Let me give an example from arithmetic which I learned from Ken
 Iverson's writings years ago.
 
 As children we spend a lot of time practicing adding up
 numbers. Humans are very bad at this if you measure making a silly
 error as bad. Take for example:
 
     365
  +  366
  ------
 
 this requires you to add 5 and 6, write down 1 and carry 1 to the next
 column, then add 6, 6, and that carried 1, and write down 2 and carry a
 1 to the next column; finally add 3, 3 and the carried 1 and write down
 7. This gives you 721, oops, the wrong answer.  In step 2 I made a
 totally dyslexic mistake and should have written down a 3.
 
 Ken proposed learning to see things a bit differently and remember the
 digits are a vector times another vector of powers.  Ken would have
 you see this as a two step problem with the digits spread out.
 
     3   6   5
  +  3   6   6
  ------------
 
 Then you just add the digits. Don't think about the carries.
 
     3   6   5
  +  3   6   6
  ------------
     6  12  11
 
 Now we normalize by dealing with the carry part, moving from right
 to left in fine APL style. You can almost see the implied loop using
 residue and n-residue.
 
  6  12  11
  6  13   1
  7   3   1
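 
 A hedged Scala sketch of the two-stage addition (add the digit columns, 
 then normalize carries; assumes equal-length inputs, illustrative only):
 
 def plus(a: Vector[Int], b: Vector[Int]): Vector[Int] = {
   val sums = a.zip(b).map { case (x, y) => x + y } // stage 1: 3 6 5 + 3 6 6 -> 6 12 11
   val (carry, digits) = sums.foldRight((0, List.empty[Int])) { // stage 2: carries
     case (d, (c, acc)) => val s = d + c; (s / 10, (s % 10) :: acc)
   }
   if (carry == 0) digits.toVector else (carry :: digits).toVector
 }
 
 // plus(Vector(3, 6, 5), Vector(3, 6, 6)) == Vector(7, 3, 1)  -- 365 + 366 = 731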
 
 Ken believed that this two-stage technique was much easier for people
 to get right.  I adopted it for when I do addition by hand and it works
 very well for me. What would it be like if we changed the education
 establishment and used this technique?  One could argue that this sort
 of hand-adding of columns of numbers is also dated. Let's not go
 there; I am just using this as an example of going back and looking at
 a beginning that is hard to see because it is just too darn
 fundamental. 
 
 It's a nice way to do additions indeed.
 
 When doing additions mentally, I tend to do them from right to left,
 predicting whether we need a carry or not by looking ahead the next
 column.  Usually carries don't carry over more than one column, but
 even if it does, you only have to remember a single digit at a time.
 
 There are several ways to do additions :-)
 
 
 Your way works as well for substractions:
 
     3  6  5
  -  3  7  1
  ----------
     0 -1  4
     0  -10 + 4 = -6
 
     3  7  1
  -  3  6  5
  ----------
     0  1 -4
        10 - 4 = 6
 
 and of course, it's already how we do multiplications too.
 
 
 
 We need to reduce complexity at all levels and that includes the
 culture we swim in.
 
 Otherwise, you can always apply the KISS principle 
 (Keep It Simple Stupid).
 
 
 -- 
 __Pascal Bourguignon__ http://www.informatimago.com/
 A bad day in () is better than a good day in {}.


Re: [fonc] LightTable UI

2012-04-24 Thread Andre van Delft
FYI: at last week's Scala Days there was a talk about Asymmetric Lenses in 
Scala; these are unidirectional.
http://skillsmatter.com/podcast/scala/asymmetric-lenses-scala
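
For readers new to lenses, here is a minimal Scala sketch of an asymmetric 
(get/set) lens and its composition; the types and names are invented for 
illustration, not the API from the talk.

// An asymmetric lens: a view onto part of a source, plus a way to write back.
case class Lens[S, A](get: S => A, set: (S, A) => S) {
  def andThen[B](inner: Lens[A, B]): Lens[S, B] =
    Lens[S, B](s => inner.get(get(s)), (s, b) => set(s, inner.set(get(s), b)))
}

case class Address(city: String)
case class Person(name: String, address: Address)

val addressL    = Lens[Person, Address](_.address, (p, a) => p.copy(address = a))
val cityL       = Lens[Address, String](_.city,    (a, c) => a.copy(city = c))
val personCityL = addressL andThen cityL

// personCityL.get(Person("Ann", Address("Leiden")))          == "Leiden"
// personCityL.set(Person("Ann", Address("Leiden")), "Delft") == Person("Ann", Address("Delft"))

A lens is "asymmetric" in the sense that the view cannot reconstruct the 
source on its own; the set direction needs the original source, unlike the 
bidirectional transformations discussed below.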

On 24 Apr 2012, at 18:48, Toby Schachman wrote:

 Benjamin Pierce et al did some work on bidirectional computation. The
 premise is to work with bidirectional transformations (which they call
 lenses) rather than (unidirectional) functions. They took a stab at
 identifying some primitives, and showing how they would work in some
 applications. Of course we can do all the composition tricks with
 lenses that we can do with functions :)
 http://www.seas.upenn.edu/~harmony/
 
 
 See also Gerald Sussman's essay Building Robust Systems,
 http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/robust-systems.pdf
 
 In particular, he has a section called Constraints Generalize
 Procedures. He gives an example of a system as a constraint solver
 (two-way information flow) contrasted with the system as a procedure
 (one-way flow).
 
 
 Also I submitted a paper for Onward 2012 which discusses this topic
 among other things,
 http://totem.cc/onward2012/onward.pdf
 
 My own interest is in programming interfaces for artists. I am
 interested in these causally agnostic programming ideas because I
 think they could support a more non-linear, improvisational approach
 to programming.
 
 
 Toby
 
 
 2012/4/24 Jarek Rzeszótko jrzeszo...@gmail.com:
 On the other hand, Those who cannot remember the past are condemned to
 repeat it.
 
 Also, please excuse me (especially Julian Leviston) for maybe sounding too
 pessimistic and too offensive, the idea surely is exciting, my point is just
 that it excited me and probably many other persons before Bret Victor or
 Chris Granger did (very interesting) demos of it and what would _really_
 excite me now is any glimpse of any idea whatsoever on how to make such
 things work in a general enough domain. Maybe they have or will have such
 idea, that would be cool, but until that time I think it's not unreasonable
 to restrain a bit, especially those ideas are relatively easy to realize in
 special domains and very hard to generalize to the wide scope of software
 people create.
 
 I would actually also love to hear from someone more knowledgeable about
 interesting historic attempts at doing such things, e.g. reversible
 computations, because there certainly were some: for one I remember a few
 years ago back in time debugging was quite a fashionable topic of talks
 (just google the phrase for a sampling), from a more hardware/physical
 standpoint there is http://en.wikipedia.org/wiki/Reversible_computing etc.
 
 Cheers,
 Jarosław Rzeszótko
 
 
 2012/4/24 David Nolen dnolen.li...@gmail.com
 
 The best way to predict the future is to invent it
 
 On Tue, Apr 24, 2012 at 3:50 AM, Jarek Rzeszótko jrzeszo...@gmail.com
 wrote:
 
 You make it sound a bit like this was a working solution already, while
 it seems to be a prototype at best, they are collecting funding right now:
 http://www.kickstarter.com/projects/306316578/light-table.
 
 I would love to be proven wrong, but I think given the state of the
 project, many people overexcite over it: some of the things proposed aren't
 new, just wrapped into a nice modern design (you could try to create a new
 skin or UI toolkit for some Smalltalk IDE for a similiar effect), while
 for the ones that would be new like the real-time evaluation or
 visualisation there is too little detail to say whether they are onto
 something or not - I am sure many people thought of such things in the 
 past,
 but it is highly questionable to what extent those are actually doable,
 especially in an existing language like Clojure or JavaScript. I am not
 convinced that dropping $200,000 at the thing will help with coming up with a
 solution if there is no decent set of ideas to begin with. I would
 personally be much more enthusiastic if the people behind the project at
 least outlined possible approaches they might take, before trying to 
 collect
 money. Currently it sounds like they just plan to hack it until it 
 handles
 a reasonable number of special cases, but tools that work only some of the
 time are favoured by few. I think we need good theoretical approaches to
 problems like this before we can make any progress in how the actual real
 tools work like.
 
 Cheers,
 Jarosław Rzeszótko
 
 
 2012/4/24 Julian Leviston jul...@leviston.net
 
 Thought this is worth a look as a next step after Brett Victor's work
 (http://vimeo.com/36579366) on UI for programmers...
 
 http://www.kickstarter.com/projects/ibdknox/light-table
 
 We're still not quite there yet IMHO, but that's getting towards the
 general direction... tie that in with a tile-script like language, and I
 think we might have something really useful.
 
 Julian
 
 
 
 

[fonc] Quiet and light weight devices

2012-04-21 Thread Andre van Delft
TechCrunch has an interview with Linus Torvalds. He uses a MacBook Air (iOS, 
BTW):

[Start of Quote]
I have to admit being a bit baffled by how nobody else seems to have done 
what Apple did with the Macbook Air – even several years after the first 
release, the other notebook vendors continue to push those ugly and *clunky* 
things. Yes, there are vendors that have tried to emulate it, but usually 
pretty badly. I don’t think I’m unusual in preferring my laptop to be thin and 
light.

Btw, even when it comes to Apple, it’s really just the Air that I think is 
special. The other apple laptops may be good-looking, but they are still the 
same old clunky hardware, just in a pretty dress.

I’m personally just hoping that I’m ahead of the curve in my strict requirement 
for “small and silent”. It’s not just laptops, btw – Intel sometimes gives me 
pre-release hardware, and the people inside Intel I work with have learnt that 
being whisper-quiet is one of my primary requirements for desktops too. I am 
sometimes surprised at what leaf-blowers some people seem to put up with under 
their desks.

I want my office to be quiet. The loudest thing in the room – by far – should 
be the occasional purring of the cat. And when I travel, I want to travel 
light. A notebook that weighs more than a kilo is simply not a good thing 
(yeah, I’m using the smaller 11″ macbook air, and I think weight could still be 
improved on, but at least it’s very close to the magical 1kg limit).

[End of Quote]
http://techcrunch.com/2012/04/19/an-interview-with-millenium-technology-prize-finalist-linus-torvalds/

I agree with Linus, especially on the importance of silence (I don't travel 
that much yet). I intend never to buy a noisy Windows PC again. My 1-year-old 
4GB iMac is pretty silent, but with every tab I open in Chrome I hear some 
soft rumble that irritates me heavily. My iPad is nicely quiet when it should 
be.



Re: [fonc] Scala Days 2012 and ACP

2012-04-20 Thread Andre van Delft
Indeed I missed the link to the video of the 12-year-old Shadaj Laddad.
Here it is:
http://skillsmatter.com/podcast/scala/making-games-and-solving-puzzles-in-scala

An overview of the videos is here:
http://skillsmatter.com/event/scala/scala-days-2012
Many videos are still not online.

Anne Veling gave the talk that got the most enthusiastic tweets; it is not yet
on the list.

Simon Peyton-Jones (Microsoft Research) started his keynote with the
manifesto for teaching computer science in the 21st century, that was
launched two weeks earlier in the UK. He said that current ICT education
comes down to teaching Microsoft Office. I have observed the same with my
two children, in the Netherlands. French and German attendees of Scala Days
told me that ICT education in their countries is like that.
http://www.guardian.co.uk/education/itforschools



On Fri, Apr 20, 2012 at 4:09 PM, John Nilsson j...@milsson.nu wrote:

 I think one of the video links is wrong; should they both be the same?
 BR,
 John
 On 20 Apr 2012, at 01:57, Andre van Delft andre.vande...@gmail.com wrote:

 Scala Days 2012 was held this week in London; 400 passionate developers;
 many presentations on DSLs, parallelism, concurrency, FP, compiler
 technology and much other stuff. http://days2012.scala-lang.org/ Enthusiastic
 tweets: https://twitter.com/search/scaladays

 The keynotes were by Guy Steele, Simon Peyton-Jones, Anthony Rose (
 http://zeebox.com/) and Martin Odersky; I warmly recommend these, but
 right now the videos are not yet online.

 Twelve-year-old Shadaj Laddad gave an awesome talk; he is a really good
 programmer, and maybe an even better teacher. The video is here:

 http://skillsmatter.com/podcast/scala/subscript-extending-scala-with-the-algebra-of-communicating-processes

 I presented my language extension based on the Algebra of Communicating
 Processes; I have mentioned this theory a couple of times here on the Fonc
 list. ACP may be viewed as extending Boolean Algebra with actions, and from
 there parallelism and communication. With some syntactic sugar added, it
 has much of the power of BNF, CCS, Linda, and pipes in the Unix command shell.
 Implementation as a language extension turns out to be fairly easy. I think
 ACP deserves much more attention than it currently gets; it might IMO
 become as important as the object-oriented and functional paradigms.

 The video of my talk is here:

 http://skillsmatter.com/podcast/scala/subscript-extending-scala-with-the-algebra-of-communicating-processes
 The sheets and accompanying paper are at
 http://code.google.com/p/subscript/downloads/list

 André






Re: [fonc] Scala Days 2012 and ACP

2012-04-20 Thread Andre van Delft
I also recommend reading Reflections on Scala Days 2012 by Erik Bakker:
http://www.lunatech-research.com/archives/2012/04/20/reflections-scala-days-2012

Note that there were 4 parallel sessions, so the picture is still far from 
complete.

Martin Odersky built Sun's Java compiler javac, version 1.3; thereafter he 
created Scala. Martin has gathered other excellent compiler specialists in 
Lausanne, such as Paul Phillips and Eugene Burmako. They are now finishing 
project Kepler = compile-time metaprogramming = support for macros; this 
yields the power of .NET's LINQ. 

WRT ACP:

I was very happy to see and hear that Eugene plans to take my DSL for the 
Algebra of Communicating Processes as a use case for the macros (see sheet 14 
of 
http://www.slideshare.net/jamesskillsmatter/project-kepler-compile-time-metaprogramming-for-scala).

IMO ACP is a kind of Maxwell theory of processes. It has extensions for time, 
space and money. See:

Real time process algebra
http://alexandria.tue.nl/repository/freearticles/589158.pdf

Real space process algebra
http://alexandria.tue.nl/repository/books/588986.pdf

Parallel Processes with Implicit Computational Capital 
http://alexandria.tue.nl/extra1/wskrap/publichtml/200635.pdf

Efficient Production Process Algebra
http://www.is-frankfurt.de/uploads/efficient_algebra.pdf

On 20 Apr 2012, at 21:38, John Nilsson wrote:
 Thanks for the summary. I've been looking forward to seeing what's happening 
 in the Scala world.
 
 Btw, for the lurkers of this list who haven't checked out Scala yet, do have a 
 look. The parser combinators in the standard library are more or less an OMeta 
 implementation. Combined with Kojo for editing, it should provide an 
 environment very much suited to experimenting with the ideas on this list.
 
 BR, 
 John
 
 On 20 Apr 2012, at 16:59, Andre van Delft andre.vande...@gmail.com wrote:
 
 [Andre's message and the earlier thread, quoted here in full in the original, 
 appear above and are elided.]
 



[fonc] Article Lisp as the Maxwell’s equations of software

2012-04-12 Thread Andre van Delft
FYI: Michael Nielsen wrote a long article, Lisp as the Maxwell’s equations of 
software, about the famous page 13 of the LISP 1.5 Programmer’s Manual; see 
http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/

The article is discussed on Reddit: 
http://www.reddit.com/r/programming/comments/s5jzt/lisp_as_the_maxwells_equations_of_software/
 



Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-15 Thread Andre van Delft
The theory Algebra of Communicating Processes (ACP) 
offers non-determinism (as in Meta II) plus concurrency.
I will present a paper on extending Scala with ACP 
next month at Scala Days 2012. For an abstract, see
http://days2012.scala-lang.org/node/92

A non-final version of the paper is at 
http://code.google.com/p/subscript/downloads/detail?name=SubScript-TR2012.pdf

André

On 15 Mar 2012, at 03:03, Alan Kay wrote:

 Well, it was very much a mythical beast even on paper -- and you really 
 have to implement programming languages and make a lot of things with them to 
 be able to assess them 
 
 But -- basically -- since meeting Seymour and starting to think about 
 children and programming, there were eight systems that I thought were really 
 nifty and cried out to be unified somehow:
   1. Joss
   2. Lisp
   3. Logo -- which was originally a unification of Joss and Lisp, but I 
 thought more could be done in this direction.
   4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for 
 logic programming and pattern directed inference both forward and backwards 
 with backtracking)
   5. Meta II -- a super simple meta parser and compiler done by Val Schorre 
 at UCLA ca 1963
   6. IMP -- perhaps the first real extensible language that worked well -- by 
 Ned Irons (CACM, Jan 1970)
   7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al, with some 
 design ideas by me
   8. The object and pattern directed extension stuff I'd been doing 
 previously with the Flex Machine and afterwards at SAIL (that also was 
 influenced by Meta II)
 
 One of the schemes was to really make the pattern matching parts of this 
 work for everything that eventually required invocations and binding. 
 This was doable semantically but was a bear syntactically because of the 
 different senses of what kinds of matching and binding were intended for 
 different problems. This messed up the readability and the desired "simple 
 things should be simple".
 
 Examples I wanted to cover included simple translations of languages (English 
 to Pig Latin, English to French, etc. some of these had been done in Logo), 
 the Winograd robot block stacking and other examples done with Planner, the 
 making of the language the child was using, message sending and receiving, 
 extensions to Smalltalk-71, and so forth.
 
 I think today the way to try to do this would be with a much more graphical 
 UI than with text -- one could imagine tiles that would specify what to 
 match, and the details of the match could be submerged a bit.
 
 More recently, both OMeta and several of Ian's matchers can handle multiple 
 kinds of matching with binding and do backtracking, etc., so one could 
 imagine a more general language that could be based on this.
 
 On the other hand, trying to stuff 8 kinds of language ideas into one new 
 language in a graceful way could be a siren's song of a goal.
 
 Still 
 
 Cheers,
 
 Alan
 
 From: shaun gilchrist shaunxc...@gmail.com
 To: fonc@vpri.org 
 Sent: Wednesday, March 14, 2012 11:38 AM
 Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
 
 Alan, 
 
 I would go way back to the never implemented Smalltalk-71
 
 Is there a formal specification of what 71 should have been? I have only ever 
 read about it in passing reference in the various histories of smalltalk as a 
 step on the way to 72, 76, and finally 80. 
 
 I am very intrigued as to what sets 71 apart so dramatically. -Shaun
 
 On Wed, Mar 14, 2012 at 12:29 PM, Alan Kay alan.n...@yahoo.com wrote:
 Hi Scott --
 
 1. I will see if I can get one of these scanned for you. Moore tended to 
 publish in journals and there is very little of his stuff available on line.
 
 2.a. if (a<b) { ... } is easier to read than if a<b then ...? There is no 
 hint of the former being tweaked for decades to make it easier to read.
 
 Several experiments from the past cast doubt on the rest of the idea. At 
 Disney we did a variety of code display generators to see what kinds of 
 transformations we could do to the underlying Smalltalk (including syntactic) 
 to make it something that could be subsetted as a growable path from Etoys. 
 
 We got some good results from this (and this is what I'd do with Javascript 
 in both directions -- Alex Warth's OMeta is in Javascript and is quite 
 complete and could do this).
 
 However, the showstopper was all the parentheses that had to be rendered in 
 tiles. Mike Travers at MIT had done one of the first tile based editors for a 
 version of Lisp that he used, and this was even worse.
 
 More recently, Jens Moenig (who did SNAP) also did a direct renderer and 
 editor for Squeak Smalltalk (this can be tried out) and it really seemed to 
 be much too cluttered.
 
 One argument for some of this is: well, teach the kids a subset that doesn't 
 use so many parens ... This could be a solution.
 
 However, in the end, I don't think Javascript semantics is particularly good 
 for 

Re: [fonc] History of computing talks at SJSU

2011-12-15 Thread Andre van Delft

On 15 Dec 2011, at 08:02, Casey Ransberger wrote:

 Hypothesis: Mainstream software slows down at a rate slightly less than 
 mainstream
 hardware speeds up. It's an almost-but-not-quite-inverse Moore's Law. 
 Unless someone else has called this out directly, I'm calling it Joe's Law,
 because I don't want to deal with the backlash!
 

You are an optimist. Wirth's law says: 
Software is getting slower more rapidly than hardware becomes faster.
And Gates' law: 
The speed of software halves every 18 months.
See http://en.wikipedia.org/wiki/Wirth's_law


[fonc] Fexpr the Ultimate Lambda

2011-11-26 Thread Andre van Delft
This discussion mentions concurrency, the Meta II parser-compiler, Planner, and 
Linda, which apply such concepts as non-determinism, parallelism and 
communication. These concepts are well described by the Algebra of 
Communicating Processes (ACP, also known as process algebra). This theory was 
initially developed in 1982 by Jan Bergstra and Jan Willem Klop. I find it very 
elegant, and it deserves much more attention than it currently gets. ACP treats 
input events conveniently on the same level as internal actions and output 
actions, something that only very few programming languages match.

There are two links between ACP and lambda calculus: 

Stephen Cole Kleene did important work on lambda calculus in the 1930s; in 1956 
he wrote the first paper on finite automata and regular expressions.
http://www.dlsi.ua.es/~mlf/nnafmc/papers/kleene56representation.pdf
http://www.cs.nmsu.edu/historical-projects/Projects/kleene.9.16.10.pdf
I think the paper Brief History of Process Algebra should have mentioned 
Kleene's work.
http://www.informatik.hs-mannheim.de/~schramm/CSP/files/historyProcessAlgebra.pdf

In 1989 Henk Goeman wrote Towards a Theory of (Self) Applicative Communicating 
Processes: a Short Note. This vastly under-quoted paper unifies lambda 
calculus with a mixture of ACP and its close relative CCS.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.8907 (free download)
Shortly thereafter CCS creator Robin Milner et al. presented the pi-calculus, 
which does something similar.
http://en.wikipedia.org/wiki/%CE%A0-calculus

I am myself working on SubScript, a language extension to Scala based on ACP. I 
claim (blurb) that this eases programming GUI controllers, parsers and 
simulations. The current implementation is as a DSL, with about 1700 lines of 
code; process communication is not supported yet. An executor gets parse tree 
information from scripts, and uses that to maintain a call graph.
I hope to raise interest here, so that some of you will join the further 
development. Code, examples and documentation are at 
http://code.google.com/p/subscript/


