Re: [fonc] Error trying to compile COLA

2012-03-14 Thread David Barbour
On Mon, Mar 12, 2012 at 10:24 AM, Martin Baldan martino...@gmail.com wrote:

 And that's how you get a huge software stack. Redundancy can be
 avoided in centralized systems, but in distributed systems with
 competing standards that's the normal state. It's not that programmers
 are dumb, it's that they can't agree on pretty much anything, and they
 can't even keep track of each other's ideas because the community is
 so huge.



I've been interested in how to make systems that work together despite
these challenges. A major part of my answer is seeking data model
independence:
http://awelonblue.wordpress.com/2011/06/15/data-model-independence/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-14 Thread Robin Barooah

On Mar 14, 2012, at 2:22 AM, Max Orhai wrote:

 But, that's exactly the cause for concern! Aside from the fact of Smalltalk's 
 obsolescence (which isn't really the point), the Squeak plugin could never be 
 approved by a 'responsible' sysadmin, because it can run arbitrary user code! 
 Squeak's not in the app store for exactly that reason. You'll notice how 
 crippled the allowed 'programming apps' are. This is simple strong-arm bully 
 tactics on the part of Apple; technical problems "solved" by heavy-handed 
 legal means. Make no mistake, the iPad is the anti-Dynabook.

To my mind writing Apple's solution off as 'strong-arm bully tactics' obscures 
very real issues. Code expresses human intentions. Not all humans have good 
intentions, and so not all code is well intentioned.  Work at HP Labs in the 
'90s showed that it's impossible, even if you have full control of the virtual 
machine and can freeze and inspect memory, to mechanically prove with certainty 
that a random software agent is benign. So when code is exchanged publicly, 
provenance becomes important.

Apple's solution is as much technical as it is legal.  They use code signing to 
control the provenance of code that is allowed to execute, and yes, they have a 
quasi-legal apparatus for determining what code gets signed.  As it stands, 
they have established themselves as the sole arbiter of provenance.
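
To make the mechanism concrete, here is a toy sketch of provenance via code 
signing, using an Ed25519 keypair and Python's 'cryptography' package (the 
flow and names are mine, for illustration only; this is not Apple's actual 
implementation):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The arbiter of provenance holds the private key and signs vetted code.
arbiter_key = Ed25519PrivateKey.generate()
public_key = arbiter_key.public_key()       # shipped with every device

code = b"print('hello from a vetted app')"
signature = arbiter_key.sign(code)          # issued only after review

def run_if_signed(blob, sig):
    # A device executes only code whose provenance verifies.
    try:
        public_key.verify(sig, blob)        # raises if unsigned or tampered with
    except InvalidSignature:
        raise SystemExit("refusing to run: unknown provenance")
    exec(blob)

run_if_signed(code, signature)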

I think one can easily argue that as the first mover, they have set things up 
to greatly advantage themselves as a commercial entity (as they do in other 
areas like the supply chain), and that it would be generally better if there 
were freedom about who to trust as the arbiter of provenance.  However, I don't 
see a future in which non-trivial unsigned code is generally exchanged.  This 
is the beginning of a necessary trend.  I'd love to hear how I'm wrong about 
this.

My suspicion is that for the most part, Apple's current setup is as locked 
down as it's ever going to be, and that over time the signing system will be 
extended to allow finer-grained human relationships to be expressed.  

For example at the moment, as an iOS developer, I can allow different apps that 
I write to access the same shared data via iCloud.  That makes sense because I 
am solely responsible for making sure that the apps share a common 
understanding of the meaning of the data, and Apple's APIs permit multiple 
independent processes to coordinate access to the same file.  

I am curious to see how Apple plans to make it possible for different 
developers to share data.  Will this be done by a network of cryptographic 
permissions between apps?

 
 -- Max
 
 On Tue, Mar 13, 2012 at 9:28 AM, Mack m...@mackenzieresearch.com wrote:
 For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
 to rectify this via the Terms and Conditions route.
 
 It's been announced that both Windows 8 and OSX Mountain Lion will require 
 applications to be installed via download thru their respective App Stores 
 in order to obtain certification required for the OS to allow them access to 
 features (like an installed camera, or the network) that are outside the 
 default application sandbox.  
 
 The acceptance of the App Store model for the iPhone/iPad has persuaded them 
 that this will be (commercially) viable as a model for general public 
 distribution of trustable software.
 
 In that world, the Squeak plugin could be certified as safe to download in a 
 way that System Admins might believe.
 
 
 On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:
 
 Windows (especially) is so porous that SysAdmins (especially in school 
 districts) will not allow teachers to download .exe files. This wipes out 
 the Squeak plugin that provides all the functionality.
 
 But there is still the browser and Javascript. But Javascript isn't fast 
 enough to do the particle system. But why can't we just download the 
 particle system and run it in a safe address space? The browser people don't 
 yet understand that this is what they should have allowed in the first 
 place. So right now there is only one route for this (and a few years ago 
 there were none) -- and that is Native Client on Google Chrome. 
 
  But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
 don't like NaCl. Google Chrome is an .exe file so teachers can't 
 download it (and if they could, they could download the Etoys plugin).
 
 
 

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-14 Thread Alan Kay
As I've mentioned a few times on this list and in the long ago past, I think 
that the way to go is to make a hardware/software system that assumes no piece 
of code is completely benign. This was the strategy of the B5000 long ago, and 
of several later OS designs (and of the Internet itself). Many of these ideas 
were heavily influenced by the work of Butler Lampson over the years.


The issue becomes: you can get perfect safety by perfect confinement, but how 
do you still get things to work together and make progress? For example, the 
Internet TCP/IP mechanism only gets packets from one place to another -- this 
mechanism cannot command a receiver computer to do anything. (Stupid software 
done by someone else inside a computer could decide to do what someone on the 
outside says -- but the whole object of design here is to retain confinement 
and avoid the idea of commands at every level.)

In theory, real objects are confined virtual computers that cannot be 
commanded. But most use of objects today is as extensions of data ideas. Once 
you make a setter message, you have converted the object into a data structure 
that is now vulnerable to imperative mischief.
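
A tiny sketch of the contrast (in Python, with invented names; not any 
particular system's API):

class Thermostat:                  # object as extension of a data idea
    def __init__(self):
        self.target = 20
    def set_target(self, value):   # a setter: any caller can command the state
        self.target = value

class ConfinedThermostat:          # object as a confined virtual computer
    def __init__(self):
        self._target = 20          # state never leaves the object
    def request_target(self, value):
        # the receiver decides; the outside can only request
        if 5 <= value <= 30:
            self._target = value
            return True
        return False               # request refused, invariants preserved

t = Thermostat()
t.set_target(10_000)               # imperative mischief: nothing stops this

c = ConfinedThermostat()
assert c.request_target(10_000) is False   # the receiver stays in control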

In between we have hardware-based processes that are supposed to be HW 
protected virtual computers. These have gotten confused with storage swapping 
mechanisms, and the results are that most CPUs cannot set up enough of them 
(and, for different reasons, the interprocess communication is too slow for 
many purposes).

A hardware vendor with huge volumes (like Apple) should be able to get a CPU 
vendor to make HW that offers real protection, and at a granularity that makes 
more systems sense.

In the present case (where they haven't done the right thing), they still do 
have ways to confine potentially non-benign software in the existing gross 
process mechanisms. Apple et al. already do this for running the web browser 
that can download Javascript programs that have not been vetted by the Apple 
systems people. NaCl in the Chrome browser extends this to allow the 
downloading of machine code that is run safely in its own sandbox. 


It should be crystal clear that Apple's restrictions have no substance in the 
large -- e.g. they could just run non-vetted systems as in the browser and 
NaCl. If you want more and Apple doesn't want to fix their OS, then maybe 
allowing them to vet makes some sense if you are in business and want to use 
their platform. 


But the main point here is that there are no technical reasons why a child 
should be restricted from making an Etoys or Scratch project and sharing it 
with another child on an iPad.

No matter what Apple says, the reasons clearly stem from strategies and tactics 
of economic exclusion.

So I agree with Max that the iPad at present is really the anti-Dynabook.


Cheers,

Alan





 From: Robin Barooah ro...@sublime.org
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, March 14, 2012 3:38 AM
Subject: Re: [fonc] Error trying to compile COLA
 



On Mar 14, 2012, at 2:22 AM, Max Orhai wrote:

But, that's exactly the cause for concern! Aside from the fact of 
Smalltalk's obsolescence (which isn't really the point), the Squeak plugin 
could never be approved by a 'responsible' sysadmin, because it can run 
arbitrary user code! Squeak's not in the app store for exactly that reason. 
You'll notice how crippled the allowed 'programming apps' are. This is simple 
strong-arm bully tactics on the part of Apple; technical problems "solved" by 
heavy-handed legal means. Make no mistake, the iPad is the anti-Dynabook.



To my mind writing Apple's solution off as 'strong-arm bully tactics' obscures 
very real issues. Code expresses human intentions. Not all humans have good 
intentions, and so not all code is well intentioned.  Work at HP Labs in the 
'90s showed that it's impossible, even if you have full control of the 
machine and can freeze and inspect memory, to mechanically prove with 
certainty that a random software agent is benign. So when code is exchanged 
publicly, provenance becomes important.


Apple's solution is as much technical as it is legal.  They use code signing 
to control the provenance of code that is allowed to execute, and yes, they 
have a quasi-legal apparatus for determining what code gets signed.  As it 
stands, they have established themselves as the sole arbiter of provenance.


I think one can easily argue that as the first mover, they have set things up 
to greatly advantage themselves as a commercial entity (as they do in other 
areas like the supply chain), and that it would be generally better if there 
were freedom about who to trust as the arbiter of provenance.  However, I don't 
see a future in which non-trivial unsigned code is generally exchanged.  This 
is the beginning of a necessary trend.  I'd love to hear how I'm wrong about 
this.


My suspicion is that for the most part, Apple's current setup is as locked 
down

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Martin Baldan


 this is possible, but it assumes, essentially, that one doesn't run into
 such a limit.

 if one gets to a point where every fundamental concept is only ever
 expressed once, and everything is built from preceding fundamental concepts,
 then this is a limit, short of dropping fundamental concepts.

Yes, but I don't think any theoretical framework can tell us a priori
how close we are to that limit. The fact that we run out of ideas
doesn't mean there are no more new ideas waiting to be discovered.
Maybe if we change our choice of fundamental concepts, we can further
simplify our systems.

For instance, it was assumed that the holy grail of Lisp would be to
get to the essence of lambda calculus, and then John Shutt did away
with lambda as a fundamental concept: he derived it from vau, doing
away with macros and special forms in the process. I don't know
whether Kernel will live up to its promise, but in any case it was an
innovative line of inquiry.
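
To make the vau idea concrete, here is a toy sketch (Python; only loosely in
the spirit of Kernel, not Shutt's actual semantics): an operative receives
its operands unevaluated, together with the caller's environment, so
quotation can be defined rather than built in.

class Operative:
    def __init__(self, params, env_param, body, env):
        self.params, self.env_param = params, env_param
        self.body, self.env = body, env
    def __call__(self, operands, caller_env):
        local = dict(self.env)
        local.update(zip(self.params, operands))  # operands bound unevaluated
        local[self.env_param] = caller_env        # caller's environment reified
        return evaluate(self.body, local)

def evaluate(expr, env):
    if isinstance(expr, str):                     # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):                # literal: self-evaluating
        return expr
    op = evaluate(expr[0], env)
    if isinstance(op, Operative):                 # operative: skip evaluation
        return op(expr[1:], env)
    args = [evaluate(e, env) for e in expr[1:]]   # applicative: evaluate first
    return op(*args)

env = {"+": lambda a, b: a + b}
env["$quote"] = Operative(["x"], "denv", "x", env)

print(evaluate(["+", 1, 2], env))                 # -> 3
print(evaluate(["$quote", ["+", 1, 2]], env))     # -> ['+', 1, 2], unevaluated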


 theoretically, about the only way to really do much better would be using a
 static schema (say, where the sender and receiver have a predefined set of
 message symbols, predefined layout templates, ...). personally though, I
 really don't like these sorts of compressors (they are very brittle,
 inflexible, and prone to version issues).

 this is essentially what write a tic-tac-toe player in Scheme implies:
 both the sender and receiver of the message need to have a common notion of
 both tic-tac-toe player and Scheme. otherwise, the message can't be
 decoded.

But nothing prevents you from reaching this common notion via previous
messages. So I don't see why this protocol would have to be any more
brittle than a more verbose one.


 a more general strategy is basically to build a model from the ground up,
 where the sender and receiver have only basic knowledge of basic concepts
 (the basic compression format), and most everything else is built on the fly
 based on the data which has been seen thus far (old data is used to build
 new data, ...).

Yes, but, as I said, old data are used to build new data; there's
no need to repeat old data over and over again. When two people
communicate with each other, they don't introduce themselves and share
their personal details again and again at the beginning of each
conversation.




 and, of course, such a system would likely be, itself, absurdly complex...


The system wouldn't have to be complex. Instead, it would *represent*
complexity through first-class data structures. The aim would be to
make the implicit complexity explicit, so that this simple system can
reason about it. More concretely, the implicit complexity is the
actual use of competing, redundant standards, and the explicit
complexity is an ontology describing those standards, so that a
reasoner can transform, translate and find duplications with
dramatically less human attention. Developing such an ontology is by
no means trivial; it's hard work, but in the end I think it would be
very much worth the trouble.




 and this is also partly why making everything smaller (while keeping its
 features intact) would likely end up looking a fair amount like data
 compression (it is compression of code and semantic space).


Maybe, but I prefer to think of it in terms of machine translation.
There are many different human languages, some of them more expressive
than others (for instance, with a larger lexicon, or a more
fine-grained tense system). If you want to develop an interlingua for
machine translation, you have to take a superset of all features of
the supported languages, and a convenient grammar to encode it (in GF
it would be an abstract syntax). Of course, it may be tricky to
support translation from any language to any other, because you may
need neologisms or long clarifications to express some ideas in the
least expressive languages, but let's leave that aside for the moment.
My point is that, once you do that, you can feed a reasoner with
literature in any language, and the reasoner doesn't have to
understand them all; it only has to understand the interlingua, which
may well be easier to parse than any of the target languages. You
didn't eliminate the complexity of human languages, but now it's
tidily packaged in an ontology, where it doesn't get in the reasoner's
way.
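
A degenerate sketch of what I mean (Python; the node type and renderers are
invented, and only loosely GF-flavored): the reasoner manipulates one
abstract syntax, and each concrete language is a mechanical rendering step.

from dataclasses import dataclass

@dataclass
class Greet:                 # abstract syntax: language-neutral meaning
    person: str

def to_english(node):
    return f"hello, {node.person}"

def to_spanish(node):
    return f"hola, {node.person}"

meaning = Greet("Ada")       # a reasoner only ever touches Greet nodes
print(to_english(meaning))   # hello, Ada
print(to_spanish(meaning))   # hola, Ada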



 some of this is also what makes my VM sub-project as complex as it is: it
 deals with a variety of problem cases, and each adds a little complexity,
 and all this adds up. likewise, some things, such as interfacing (directly)
 with C code and data, add more complexity than others (simpler and cleaner
 FFI makes the VM itself much more complex).

Maybe that's because you are trying to support everything by hand,
with all this knowledge and complexity embedded in your code. On the
other hand, it seems that the VPRI team is trying to develop new,
powerful standards with all the combined features of the existing
ones while actually supporting a very small subset of those 

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread David Barbour
This has been an interesting conversation. I don't like how it's hidden
under the innocent-looking subject `Error trying to compile COLA`.

On Tue, Mar 13, 2012 at 8:08 AM, Martin Baldan martino...@gmail.com wrote:

 
 
  this is possible, but it assumes, essentially, that one doesn't run into
  such a limit.
 
  if one gets to a point where every fundamental concept is only ever
  expressed once, and everything is built from preceding fundamental
 concepts,
  then this is a limit, short of dropping fundamental concepts.

 Yes, but I don't think any theoretical framework can tell us a priori
 how close we are to that limit. The fact that we run out of ideas
 doesn't mean there are no more new ideas waiting to be discovered.
 Maybe if we change our choice of fundamental concepts, we can further
 simplify our systems.

 For instance, it was assumed that the holy grail of Lisp would be to
 get to the essence of lambda calculus, and then John Shutt did away
 with lambda as a fundamental concept: he derived it from vau, doing
 away with macros and special forms in the process. I don't know
 whether Kernel will live up to its promise, but in any case it was an
 innovative line of inquiry.


  theoretically, about the only way to really do much better would be
 using a
  static schema (say, where the sender and receiver have a predefined set
 of
  message symbols, predefined layout templates, ...). personally though, I
  really don't like these sorts of compressors (they are very brittle,
  inflexible, and prone to version issues).
 
  this is essentially what write a tic-tac-toe player in Scheme implies:
  both the sender and receiver of the message need to have a common notion
 of
  both tic-tac-toe player and Scheme. otherwise, the message can't be
  decoded.

 But nothing prevents you from reaching this common notion via previous
 messages. So I don't see why this protocol would have to be any more
 brittle than a more verbose one.

 
  a more general strategy is basically to build a model from the ground
 up,
  where the sender and receiver have only basic knowledge of basic concepts
  (the basic compression format), and most everything else is built on the
 fly
  based on the data which has been seen thus far (old data is used to build
  new data, ...).

 Yes, but, as I said, old data are used to build new data; there's
 no need to repeat old data over and over again. When two people
 communicate with each other, they don't introduce themselves and share
 their personal details again and again at the beginning of each
 conversation.



 
  and, of course, such a system would likely be, itself, absurdly
 complex...
 

 The system wouldn't have to be complex. Instead, it would *represent*
 complexity through first-class data structures. The aim would be to
 make the implicit complexity explicit, so that this simple system can
 reason about it. More concretely, the implicit complexity is the
 actual use of competing, redundant standards, and the explicit
 complexity is an ontology describing those standards, so that a
 reasoner can transform, translate and find duplications with
 dramatically less human attention. Developing such an ontology is by
 no means trivial; it's hard work, but in the end I think it would be
 very much worth the trouble.


 
 
  and this is also partly why making everything smaller (while keeping its
  features intact) would likely end up looking a fair amount like data
  compression (it is compression of code and semantic space).
 

 Maybe, but I prefer to think of it in terms of machine translation.
 There are many different human languages, some of them more expressive
 than others (for instance, with a larger lexicon, or a more
 fine-grained tense system). If you want to develop an interlingua for
 machine translation, you have to take a superset of all features of
 the supported languages, and a convenient grammar to encode it (in GF
 it would be an abstract syntax). Of course, it may be tricky to
 support translation from any language to any other, because you may
 need neologisms or long clarifications to express some ideas in the
 least expressive languages, but let's leave that aside for the moment.
 My point is that, once you do that, you can feed a reasoner with
 literature in any language, and the reasoner doesn't have to
 understand them all; it only has to understand the interlingua, which
 may well be easier to parse than any of the target languages. You
 didn't eliminate the complexity of human languages, but now it's
 tidily packaged in an ontology, where it doesn't get in the reasoner's
 way.


 
  some of this is also what makes my VM sub-project as complex as it is: it
  deals with a variety of problem cases, and each adds a little complexity,
  and all this adds up. likewise, some things, such as interfacing
 (directly)
  with C code and data, add more complexity than others (simpler and
 cleaner
  FFI makes the VM itself much more complex).

 Maybe that's because 

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Alan Kay
But we haven't wanted to program in Smalltalk for a long time.

This is a crazy non-solution (and is so on the iPad already)

No one should have to work around someone else's bad designs and 
implementations ...


Cheers,

Alan





 From: Mack m...@mackenzieresearch.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, March 13, 2012 9:28 AM
Subject: Re: [fonc] Error trying to compile COLA
 

For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
to rectify this via the Terms and Conditions route.


It's been announced that both Windows 8 and OSX Mountain Lion will require 
applications to be installed via download thru their respective App Stores 
in order to obtain certification required for the OS to allow them access to 
features (like an installed camera, or the network) that are outside the 
default application sandbox.  


The acceptance of the App Store model for the iPhone/iPad has persuaded them 
that this will be (commercially) viable as a model for general public 
distribution of trustable software.


In that world, the Squeak plugin could be certified as safe to download in a 
way that System Admins might believe.



On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:

Windows (especially) is so porous that SysAdmins (especially in school 
districts) will not allow teachers to download .exe files. This wipes out the 
Squeak plugin that provides all the functionality.


But there is still the browser and Javascript. But Javascript isn't fast 
enough to do the particle system. But why can't we just download the particle 
system and run it in a safe address space? The browser people don't yet 
understand that this is what they should have allowed in the first place. So 
right now there is only one route for this (and a few years ago there were 
none) -- and that is Native Client on Google Chrome. 



 But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
don't like NaCl. Google Chrome is an .exe file so teachers can't download 
it (and if they could, they could download the Etoys plugin).


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Max Orhai
But, that's exactly the cause for concern! Aside from the fact of
Smalltalk's obsolescence (which isn't really the point), the Squeak plugin
could never be approved by a 'responsible' sysadmin, *because it can run
arbitrary user code*! Squeak's not in the app store for exactly that
reason. You'll notice how crippled the allowed 'programming apps' are. This
is simple strong-arm bully tactics on the part of Apple; technical problems
"solved" by heavy-handed legal means. Make no mistake, the iPad is the
anti-Dynabook.

-- Max

On Tue, Mar 13, 2012 at 9:28 AM, Mack m...@mackenzieresearch.com wrote:

 For better or worse, both Apple and Microsoft (via Windows 8) are
 attempting to rectify this via the Terms and Conditions route.

 It's been announced that both Windows 8 and OSX Mountain Lion will require
 applications to be installed via download thru their respective App
 Stores in order to obtain certification required for the OS to allow them
 access to features (like an installed camera, or the network) that are
 outside the default application sandbox.

 The acceptance of the App Store model for the iPhone/iPad has persuaded
 them that this will be (commercially) viable as a model for general public
 distribution of trustable software.

 In that world, the Squeak plugin could be certified as safe to download in
 a way that System Admins might believe.


 On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:

 Windows (especially) is so porous that SysAdmins (especially in school
 districts) will not allow teachers to download .exe files. This wipes out
 the Squeak plugin that provides all the functionality.

 But there is still the browser and Javascript. But Javascript isn't fast
 enough to do the particle system. But why can't we just download the
 particle system and run it in a safe address space? The browser people
 don't yet understand that this is what they should have allowed in the
 first place. So right now there is only one route for this (and a few years
 ago there were none) -- and that is Native Client on Google Chrome.

  But Google Chrome is only 13% penetrated, and the other browser fiefdoms
 don't like NaCl. Google Chrome is an .exe file so teachers can't
 download it (and if they could, they could download the Etoys plugin).





___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-12 Thread Attila Lendvai
 Is that the case? I'm a bit confused. I've read the fascinating reports
 about Frank, and I was wondering what's the closest thing one can download
 and run right now. Could you guys please clear it up for me?

i +1 this, with the addition that writing up anything remotely
resembling an official answer would be much better done at the FONC
wiki, which is where i looked recently to try to find it out.

the mailing list is a long string of events obsoleting each other,
while a wiki is (or could be) a much better representation of the
current state of affairs.

http://vpri.org/fonc_wiki/index.php/Main_Page

-- 
 attila

Notice the erosion of your (digital) freedom, and do something about it!

PGP: 2FA1 A9DC 9C1E BA25 A59C  963F 5D5F 45C7 DFCD 0A39
OTR XMPP: 8647EEAC EA30FEEF E1B55146 573E52EE 21B1FF06
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-12 Thread Martin Baldan


 that is a description of random data, which granted, doesn't apply to most
 (compressible) data.
 that wasn't really the point though.

I thought the original point was that there's a clear-cut limit to how
much redundancy can be eliminated from computing environments, and
that thousand-fold (and beyond) reductions in code size per feature
don't seem realistic. Then the analogy from data compression was used.
I think it's a pretty good analogy, but I don't think there's a
clear-cut limit we can estimate in advance, because meaningful data
and computations are not random to begin with. Indeed, there are
islands of stability where you've cut all the visible cruft and you
need new theoretical insights and new powerful techniques to reduce
the code size further.



 for example, I was able to devise a compression scheme which reduced
 S-Expressions to only 5% of their original size. now what if I want 3%, or 1%?
 this is not an easy problem. it is much easier to get from 10% to 5% than to
 get from 5% to 3%.

I don't know, but there may be ways to reduce it much further if you
know more about the sexprs themselves. Or maybe you can abstract away
the very fact that you are using sexprs. For instance, if those sexprs
are a Scheme program for a tic-tac-toe player, you can say "write a
tic-tac-toe player in Scheme" and you capture the essence.

I expect much of future progress in code reduction to come from
automated integration of different systems, languages and paradigms,
and this integration to come from widespread development and usage of
ontologies and reasoners. That way, for instance, you could write a
program in BASIC, and then some reasoner would ask you questions such
as "I see you used a GOTO to build a loop. Is that correct?" or "this
array is called 'clients'; do you mean it as in server/client
architecture or in the business sense?". After a few questions like
that, the system would have a highly descriptive model of what your
program is supposed to do and how it is supposed to do it. Then it
would be able to write an equivalent program in any other programming
language. Of course, once you have such a system, there would be much
more powerful user interfaces than some primitive programming
language. Probably you would speak in natural language (or very close)
and use your hands to point at things. I know it sounds like full-on
AI, but I just mean an expert system for programmers.



 although many current programs are, arguably, huge, the vast majority of the
 code is likely still there for a reason, and is unlikely the result of
 programmers just endlessly writing the same stuff over and over again, or
 resulting from other simple patterns. rather, it is more likely piles of
 special case logic and optimizations and similar.


I think one problem is that "not writing the same stuff over and over
again" is easier said than done. To begin with, other people's code
may not even be available (or not under a free license). But even if
it is, it may have used different names, different coding patterns,
different third-party libraries and so on, while still being basically
the same. And this happens even within the same programming language
and environment. Not to speak of all the plethora of competing
platforms, layering schemes, communication protocols, programming
languages, programming paradigms, programming frameworks and so on.
Everyone says "let's do it my way", and then "my system can host
yours", "same here", "let's make a standard", "let's extend the
standard", "let's make a cleaner standard, now for real", "let's be
realistic and use the available standards", "let's not reinvent the
wheel", "we need backwards compatibility", "backwards compatibility is a
drag", "let's reinvent the wheel". Half-baked standards become somewhat
popular, and then they have to be supported.

And that's how you get a huge software stack. Redundancy can be
avoided in centralized systems, but in distributed systems with
competing standards that's the normal state. It's not that programmers
are dumb, it's that they can't agree on pretty much anything, and they
can't even keep track of each other's ideas because the community is
so huge.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-12 Thread BGB

On 3/12/2012 10:24 AM, Martin Baldan wrote:


that is a description of random data, which granted, doesn't apply to most
(compressible) data.
that wasn't really the point though.

I thought the original point was that there's a clear-cut limit to how
much redundancy can be eliminated from computing environments, and
that thousand-fold (and beyond) reductions in code size per feature
don't seem realistic. Then the analogy from data compression was used.
I think it's a pretty good analogy, but I don't think there's a
clear-cut limit we can estimate in advance, because meaningful data
and computations are not random to begin with. Indeed, there are
islands of stability where you've cut all the visible cruft and you
need new theoretical insights and new powerful techniques to reduce
the code size further.


this is possible, but it assumes, essentially, that one doesn't run into 
such a limit.


if one gets to a point where every fundamental concept is only ever 
expressed once, and everything is built from preceding fundamental 
concepts, then this is a limit, short of dropping fundamental concepts.




for example, I was able to devise a compression scheme which reduced
S-Expressions to only 5% of their original size. now what if I want 3%, or 1%?
this is not an easy problem. it is much easier to get from 10% to 5% than to
get from 5% to 3%.

I don't know, but there may be ways to reduce it much further if you
know more about the sexprs themselves. Or maybe you can abstract away
the very fact that you are using sexprs. For instance, if those sexprs
are a Scheme program for a tic-tac-toe player, you can say "write a
tic-tac-toe player in Scheme" and you capture the essence.


the sexprs were mostly related to scene-graph delta messages (one could 
compress a Scheme program, but this isn't really what it is needed for).


each expression basically tells about what is going on in the world at 
that moment (objects appearing and moving around, lights turning on/off, 
...). so, basically, a semi-constant message stream.


the specialized compressor was doing better than Deflate, but was also 
exploiting a lot more knowledge about the expressions as well: what the 
basic types are, how things fit together, ...


theoretically, about the only way to really do much better would be 
using a static schema (say, where the sender and receiver have a 
predefined set of message symbols, predefined layout templates, ...). 
personally though, I really don't like these sorts of compressors (they 
are very brittle, inflexible, and prone to version issues).


this is essentially what "write a tic-tac-toe player in Scheme" implies:
both the sender and receiver of the message need to have a common notion 
of both "tic-tac-toe player" and "Scheme". otherwise, the message can't 
be decoded.


a more general strategy is basically to build a model from the ground 
up, where the sender and receiver have only basic knowledge of basic 
concepts (the basic compression format), and most everything else is 
built on the fly based on the data which has been seen thus far (old 
data is used to build new data, ...).


in LZ77 based algos (Deflate: ZIP/GZ/PNG; LZMA: 7zip; ...), this takes 
the form of a sliding window, where any recently seen character 
sequence is simply reused (via an offset/length run).
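
a minimal sketch of that core idea (Python; a toy token stream, nowhere near 
Deflate's actual bit-level format):

def lz77_compress(data, window=4096, min_run=3):
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):        # candidate match starts
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1                                # overlap past i is fine
            if k > best_len:
                best_off, best_len = i - j, k
        if best_len >= min_run:
            out.append(("run", best_off, best_len))   # offset/length run
            i += best_len
        else:
            out.append(("lit", data[i]))              # literal byte
            i += 1
    return out

def lz77_decompress(tokens):
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):                   # byte-wise copy handles overlap
                out.append(out[-off])
    return bytes(out)

assert lz77_decompress(lz77_compress(b"abcabcabcabc")) == b"abcabcabcabc"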


in my case, it is built from primitive types (lists, symbols, strings, 
fixnums, flonums, ...).




I expect much of future progress in code reduction to come from
automated integration of different systems, languages and paradigms,
and this integration to come from widespread development and usage of
ontologies and reasoners. That way, for instance, you could write a
program in BASIC, and then some reasoner would ask you questions such
as "I see you used a GOTO to build a loop. Is that correct?" or "this
array is called 'clients'; do you mean it as in server/client
architecture or in the business sense?". After a few questions like
that, the system would have a highly descriptive model of what your
program is supposed to do and how it is supposed to do it. Then it
would be able to write an equivalent program in any other programming
language. Of course, once you have such a system, there would be much
more powerful user interfaces than some primitive programming
language. Probably you would speak in natural language (or very close)
and use your hands to point at things. I know it sounds like full-on
AI, but I just mean an expert system for programmers.


and, of course, such a system would likely be, itself, absurdly complex...

this is partly the power of information entropy though:
it can't really be created or destroyed, only really moved around from 
one place to another.



so, one can express things simply to a system, and it gives powerful 
outputs, but likely the system itself is very complex. one can express 
things to a simple system, but generally this act of expression tends to 
be much more complex. in either case, the complexity is 

Re: [fonc] Error trying to compile COLA

2012-03-11 Thread Jakub Piotr Cłapa

On 28.02.12 06:42, BGB wrote:

but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Shannon's theory applies to lossless transmission. I doubt anybody here 
wants to reproduce everything down to the timings and bugs of the 
original software. Information theory is not thermodynamics.


--
regards,
Jakub Piotr Cłapa
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-11 Thread BGB

On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:

On 28.02.12 06:42, BGB wrote:

but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Shannon's theory applies to lossless transmission. I doubt anybody 
here wants to reproduce everything down to the timings and bugs of the 
original software. Information theory is not thermodynamics.




Shannon's theory also applies, in part, to lossy transmission: the 
rate-distortion branch of it likewise sets a lower bound on the size of the 
data as expressed with a certain degree of loss.


this is why, for example, with JPEGs or MP3s, getting a smaller size 
tends to result in reduced quality. the higher quality can't be 
expressed in a smaller size.
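
for the lossless case the bound is easy to compute (a sketch, treating the 
source as i.i.d. over byte frequencies, which real data of course is not):

from collections import Counter
from math import log2

def entropy_bits_per_symbol(data):
    counts, n = Counter(data), len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

msg = b"aaaaaaaabbbbccdd"               # skewed distribution, compresses well
h = entropy_bits_per_symbol(msg)        # 1.75 bits/symbol here
print(f"no lossless code can beat ~{h * len(msg):.0f} bits for this message")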



I had originally figured that the assumption would have been to try to 
recreate everything in a reasonably feature-complete way.



this means such things in the OS as:
an OpenGL implementation;
a command-line interface, probably implementing ANSI / VT100 style 
control-codes (even in my 3D engine, my in-program console currently 
implements a subset of these codes);

a loader for program binaries (ELF or PE/COFF);
POSIX or some other similar OS APIs;
probably a C compiler, assembler, linker, run-time libraries, ...;
network stack, probably a web-browser, ...;
...

then it would be a question of how small one could get everything while 
still implementing a reasonably complete (if basic) feature-set, using 
any DSLs/... one could think up to shave off lines of code.


one could probably shave off OS-specific features which few people use 
anyways (for example, no need to implement support for things like GDI 
or the X11 protocol). a simple solution being that OpenGL largely is 
the interface for the GUI subsystem (probably with a widget toolkit 
built on this, and some calls for things not directly supported by 
OpenGL, like managing mouse/keyboard/windows/...).


also, potentially, a vast amount of what would be standalone tools, 
could be reimplemented as library code and merged (say, one has the 
shell as a kernel module, which directly implements nearly all of the 
basic command-line tools, like ls/cp/sed/grep/...).


the result of such an effort, under my estimates, would likely still end 
up in the Mloc range, but maybe one could get from say, 200 Mloc (for a 
Linux-like configuration) down to maybe about 10-15 Mloc, or if one 
tried really hard, maybe closer to 1 Mloc, and much smaller is fairly 
unlikely.



apparently this wasn't the plan though, rather the intent was to 
substitute something entirely different in its place, but this sort of 
implies that it isn't really feature-complete per-se (and it would be a 
bit difficult trying to port existing software to it).


someone asks: "hey, how can I build Quake 3 Arena for your OS?", and 
gets back a response roughly along the lines of "you will need to 
largely rewrite it from the ground up."


much nicer and simpler would be if it could be reduced to maybe a few 
patches and modifying some of the OS glue stubs or something.



(tangent time):

but, alas, there seems to be a bit of a philosophical split here.

I tend to be a bit more conservative, even if some of this stuff is put 
together in dubious ways. one adds features, but often ends up 
jerry-rigging things, and using bits of functionality in different 
contexts: like, for example, an in-program command-entry console is not 
normally where one expects ANSI codes, but at the time, it seemed a sane 
enough strategy (adding ANSI codes was a fairly straightforward way to 
support things like embedding color information in console message 
strings, ...). so, the basic idea still works, and so was applied in a 
new context (a console in a 3D engine, vs a terminal window in the OS).


side note: internally, the console is represented as a 2D array of 
characters, and another 2D array to store color and modifier flags 
(underline, strikeout, blink, italic, ...).
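
in rough Python terms, something like this (illustrative only; the flag 
values and API are made up, not the engine's actual code):

BLINK, UNDERLINE = 0x01, 0x02                 # modifier bits (illustrative)

class Console:
    def __init__(self, cols=80, rows=25):
        self.chars = [[" "] * cols for _ in range(rows)]
        self.attrs = [[0] * cols for _ in range(rows)]   # color/modifier flags

    def put(self, row, col, text, attr=0):
        for i, ch in enumerate(text):         # parallel updates keep the
            self.chars[row][col + i] = ch     # two arrays in sync
            self.attrs[row][col + i] = attr

    def render(self):
        return "\n".join("".join(line) for line in self.chars)

con = Console(cols=20, rows=2)
con.put(0, 0, "hello, console", attr=UNDERLINE)
print(con.render())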


the console can be used both for program-related commands, accessing 
cvars, and for evaluating script fragments (sadly, limited to what can 
be reasonably typed into a console command, which can be a little 
limiting for much more than "make that thing over there explode" or 
similar). functionally, the console is less advanced than something like 
bash or similar.


I have also considered the possibility of supporting multiple consoles, 
and maybe a console-integrated text-editor, but have yet to decide on 
the specifics (I am torn between a specialized text-editor interface and 
making the text editor be a console command which hijacks the console 
and probably does most of its user-interface via ANSI codes or similar...).


but, it is not obvious what is the best way to integrate a text-editor 
into the UI for a 3D engine, hence why I have had this idea floating 
around for months, but haven't really acted on it (out of humor, it 
could be given a VIM-like user-interface... ok, probably not, I was 
imagining more like 

Re: [fonc] Error trying to compile COLA

2012-03-11 Thread Martin Baldan
I won't pretend I really know what I'm talking about; I'm just
guessing here, but don't you think the requirement for independent
and identically-distributed random variable data in Shannon's source
coding theorem may not be applicable to pictures, sounds or frame
sequences normally handled by compression algorithms? I mean, many
compression techniques rely on domain knowledge about the things to be
compressed. For instance, a complex picture or video sequence may
consist of a well-known background with a few characters from a
well-known inventory in well-known positions. If you know those facts,
you can increase the compression dramatically. A practical example may
be Xtranormal stories, where you get a cute 3-D animated dialogue from
a small script.
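
Sketching that numerically (Python; every ID and field layout here is
invented): if sender and receiver already share the background and character
inventory, a whole scene costs a handful of bytes instead of a rendered frame.

import struct

def encode_scene(background_id, actors):
    # 1 byte background, 1 byte actor count, then (id, x, y) per actor
    msg = struct.pack("<BB", background_id, len(actors))
    for actor_id, x, y in actors:
        msg += struct.pack("<HHH", actor_id, x, y)
    return msg

scene = encode_scene(7, [(1, 120, 300), (2, 500, 310)])
print(len(scene), "bytes instead of a full frame")   # 14 bytes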

Best,

-Martin

On Sun, Mar 11, 2012 at 7:53 PM, BGB cr88...@gmail.com wrote:
 On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:

 On 28.02.12 06:42, BGB wrote:

 but, anyways, here is a link to another article:
 http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


 Shannon's theory applies to lossless transmission. I doubt anybody here
 wants to reproduce everything down to the timings and bugs of the original
 software. Information theory is not thermodynamics.


 Shannon's theory also applies, in part, to lossy transmission: the
 rate-distortion branch of it likewise sets a lower bound on the size of the
 data as expressed with a certain degree of loss.

 this is why, for example, with JPEGs or MP3s, getting a smaller size tends
 to result in reduced quality. the higher quality can't be expressed in a
 smaller size.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Anatoly Levenchuk
The same things happen at the Very High Level of computing (beyond the
application boundary at the enterprise-wide level, and especially beyond the
enterprise boundary at the business eco-system level). There are BPMN engines,
issue trackers, project management systems, document management/workflow
systems, etc. And when you try to perform workflow/process/case execution of
something that needs to be executed on all of these engines, you have huge
problems: too many high-level execution paradigms (project, process, case
management, complex event management, etc.), and too few good architectures
and tools to do this.

 

I think that scalability should go not only from hardware up to the
application (desktop publishing) level, but also up to support of enterprise
architecture and beyond (business eco-system architecture with federated
enterprises). SOA ideas are definitely not helpful here in their current
enterprise-bus state.

 

I consider programming, modeling and ontologizing from CPU hardware up to the
business eco-system level to be the same discipline, and regard the transfer
from programming/modeling/ontologizing-in-the-small to the same-in-the-large
as one of the urgent needs. We should generalize the concept of external
execution to preserve its meaning from the hardware CPU core, to the
OS/browser/distributed application level, to the extended enterprise (a
network of hundreds of enterprises that perform complex industrial projects
like nuclear power station design and construction).

 

Best regards,

Anatoly Levenchuk

 

From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Alan
Kay
Sent: Thursday, March 01, 2012 3:10 AM
To: Duncan Mak; Fundamentals of New Computing
Subject: Re: [fonc] Error trying to compile COLA

 

Hi Duncan

 

The short answers to these questions have already been given a few times on
this list. But let me try another direction to approach this.

 

The first thing to notice about the overlapping windows interface personal
computer experience is that it is logically independent of the
code/processes running underneath. This means (a) you don't have to have a
single religion down below; (b) the different kinds of things that might be
running can be protected from each other using the address space mechanisms
of the CPU(s); and (c) you can think about allowing outsiders to do pretty
much what they want to create a really scalable, really expandable WWW.

 

If you are going to put a browser app on an OS, then the browser has
to be a mini-OS, not an app. 

 

But "standard apps" are a bad idea (we thought we'd gotten rid of them in
the 70s) because what you really want to do is to integrate functionality
visually and operationally using the overlapping windows interface, which
can safely get images from the processes and composite them on the screen.
(Everything is now kind of super-desktop-publishing.) An "app" is now just
a kind of integration.

 

But the route that was actually taken with the WWW and the browser was in
the face of what was already being done.

 

Hypercard existed, and showed what a WYSIWYG authoring system for end-users
could do. This was ignored.

 

Postscript existed, and showed that a small interpreter could be moved
easily from machine to machine while retaining meaning. This was ignored.

 

And so forth.

 

19 years later we see various attempts at inventing things that were already
around when the WWW was tacked together.

 

But the thing that is amazing to me is that in spite of the almost universal
deployment of it, it still can't do what you can do on any of the machines
it runs on. And there have been very few complaints about this from the
mostly naive end-users (and what seem to be mostly naive computer folks who
deal with it).

 

Some of the blame should go to Apple and MS for not making real OSs for
personal computers -- or better, going the distance to make something better
than the old OS model. In either case both companies blew doing basic
protections between processes. 

 

On the other hand, the WWW and first browsers were originally done on
workstations that had stronger systems underneath -- so why were they so
blind?

 

As an aside I should mention that there have been a number of attempts to do
something about OS bloat. Unix was always too little too late but its
one outstanding feature early on was its tiny kernel with a design that
wanted everything else to be done in user-mode-code. Many good things
could have come from the later programmers of this system realizing that
being careful about dependencies is a top priority. (And you especially do
not want to have your dependencies handled by a central monolith, etc.)

 

So, this gradually turned into an awful mess. But Linus went back to square
one and redefined a tiny kernel again -- the realization here is that you do
have to arbitrate basic resources of memory and process management, but you
should allow everyone else to make the systems they need. This really can
work well if processes can be small and interprocess communication fast

Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Reuben Thomas
On 1 March 2012 02:26, Igor Stasenko siguc...@gmail.com wrote:
 wonderful. so, in 5 years (put less if you want) i can be sure that my
 app can run on every machine on any browser,
 and i don't have to put update your browser warning.

No, because in 5 years' time you will be wanting to do something
different, and there will be a different immature technology to do it
that hasn't yet been implemented on every machine. And of course
there's a serious chance the browser will be on its way out by then
(see how things are done on mobile platforms).

10 years ago we'd've been having the same conversation about Java.
Today Java is still very much alive, and lots of people are getting
things done with it.

5 years ago Flash might've been mentioned. Ditto.

 As to me, this language is not good enough to serve at such a level.
 From this point, it is inherently not complete and never will be, and
 will always stand between you and your goals.

If you're sufficiently determined, you'll manage to get nothing done
whatever the technology on offer.

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Igor Stasenko
On 1 March 2012 12:30, Reuben Thomas r...@sc3d.org wrote:
 On 1 March 2012 02:26, Igor Stasenko siguc...@gmail.com wrote:
 wonderful. so, in 5 years (put less if you want) i can be sure that my
 app can run on every machine on any browser,
 and i don't have to put update your browser warning.

 No, because in 5 years' time you will be wanting to do something
 different, and there will be a different immature technology to do it
 that hasn't yet been implemented on every machine. And of course
 there's a serious chance the browser will be on its way out by then
 (see how things are done on mobile platforms).

 10 years ago we'd've been having the same conversation about Java.
 Today Java is still very much alive, and lots of people are getting
 things done with it.

 5 years ago Flash might've been mentioned. Ditto.

Yeah... all of the above resemble the same cycle:
  initial stage - small, good and promising;
  becoming mainstream - growing, trying to fit everyone's needs, until eventually
  turning into a walking zombie - buried under tons of requirements and
standards and extensions.
And i bet that JavaScript will not be an exception.

Now if you take things like tcp/ip: how many changes/extensions have
you seen over the years since its first deployment?
The only noticeable one i know of is the introduction of IPv6.

 As to me, this language is not good enough to serve at such a level.
 From this point, it is inherently not complete and never will be, and
 will always stand between you and your goals.

 If you're sufficiently determined, you'll manage to get nothing done
 whatever the technology on offer.

Oh, i am not arguing that we have to rely on what is available.
Just wanted to indicate that if the www had been based on simpler design
principles at the very beginning,
we would not have had to wait 27 years for javascript to be mature
enough to simulate linux on it.

 --
 http://rrt.sc3d.org



-- 
Best regards,
Igor Stasenko.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Reuben Thomas
On 1 March 2012 12:00, Igor Stasenko siguc...@gmail.com wrote:
 Now if you take things like tcp/ip: how many changes/extensions have
 you seen over the years since its first deployment?
 The only noticeable one i know of is the introduction of IPv6.

Yes, but you can say the same of HTTP. You're comparing apples with orchards.

 Just wanted to indicate that if the www had been based on simpler design
 principles at the very beginning,
 we would not have had to wait 27 years for javascript to be mature
 enough to simulate linux on it.

The reason HTTP/HTML won is precisely because they were extremely
simple to start with. The reason that Smalltalk, Hypercard et al.
didn't is because their inventors didn't take account of what actually
makes systems successful socially, rather than popular with
individuals.

And I fail to see what bemoaning the current state of affairs (or even
the tendency of history to repeat itself) achieves. Noticing what goes
wrong and trying to fix it is a positive step.

The biggest challenge for FONC will not be to achieve good technical
results, as it is stuffed with people who have a history of doing
great work, and its results to date are already exciting, but to get
those results into widespread use; I've seen no evidence that the
principals have considered how and why they failed to do this in the
past, nor that they've any ideas on how to avoid it this time around.

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Julian Leviston
Is this one of the aims?

Julian

On 01/03/2012, at 11:42 PM, Reuben Thomas wrote:

 The biggest challenge for FONC will not be to achieve good technical
 results, as it is stuffed with people who have a history of doing
 great work, and its results to date are already exciting, but to get
 those results into widespread use; I've seen no evidence that the
 principals have considered how and why they failed to do this in the
 past, nor that they've any ideas on how to avoid it this time around.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 8:04 AM, Reuben Thomas wrote:

On 1 March 2012 15:02, Julian Leviston jul...@leviston.net wrote:

Is this one of the aims?

It doesn't seem to be, which is sad, because however brilliant the
ideas, you can't rely on other people to get them out for you.


this is part of why I am personally trying to work more on developing 
products than on doing pure research, and focusing more on trying to 
improve the situation (by hopefully increasing the number of viable 
options) rather than remaking the world.


there is also, at this point, a reasonable lack of industrial strength 
scripting languages.
there are a few major industrial strength languages (C, C++, Java, C#, 
etc...), and a number of scripting languages (Python, Lua, JavaScript, 
...), but not generally anything to bridge the gap (combining the 
relative dynamic aspects and ease of use of a scripting language, with 
the power to get stuff done as in a more traditional language).


a partial reason I suspect:
many script languages don't scale well (WRT either performance or 
usability);
many script languages have jokingly bad FFIs, combined with a lack of 
good native libraries;

...


or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Loup Vaillant

BGB wrote:

there is also, at this point, a reasonable lack of industrial strength
scripting languages.
there are a few major industrial strength languages (C, C++, Java, C#,
etc...), and a number of scripting languages (Python, Lua, JavaScript,
...), but not generally anything to bridge the gap (combining the
relative dynamic aspects and ease of use of a scripting language, with
the power to get stuff done as in a more traditional language).


What could you possibly mean by "industrial strength scripting language"?

When I hear about an industrial strength tool, I mostly infer that the 
tool:

 - spurs low-level code (instead of high-level meaning),
 - is moderately difficult to learn (or even use),
 - is extremely difficult to implement,
 - has paid-for support.

If you meant something more positive, I think Lua is a good candidate:
 - Small (and hopefully reliable) tools.
 - Fast implementations.
 - Widely used in the gaming industry.
 - Good C FFI.
 - Spurs quite higher-level meaning.

Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 10:12 AM, Loup Vaillant wrote:

BGB wrote:

there is also, at this point, a reasonable lack of industrial strength
scripting languages.
there are a few major industrial strength languages (C, C++, Java, C#,
etc...), and a number of scripting languages (Python, Lua, JavaScript,
...), but not generally anything to bridge the gap (combining the
relative dynamic aspects and ease of use of a scripting language, with
the power to get stuff done as in a more traditional language).


What could you possibly mean by "industrial strength scripting language"?

When I hear about an "industrial strength" tool, I mostly infer that 
the tool:

 - spurs low-level code (instead of high-level meaning),
 - is moderately difficult to learn (or even use),
 - is extremely difficult to implement,
 - has paid-for support.



expressiveness is a priority (I borrow many features from scripting 
languages, like JavaScript, Scheme, ...). the language aims to have a 
high level of dynamic abilities in most areas as well (it supports 
dynamic types, prototype OO, lexical closures, scope delegation, ...).



learning curve and implementation complexity were not huge concerns 
(the main concession I make to learning curve is that the language is 
in many regards fairly similar to current mainstream languages, so a 
person who knows C++ or C# or similar will probably understand most of 
it easily enough).


the main target audience is generally people who already know C and C++ 
(and who will probably keep using them as well). so, the language is 
mostly intended to be used mixed with C and C++ codebases. the default 
syntax is more ActionScript-like, but Java/C# style declaration syntax 
may also be used (the only significant syntax differences are those 
related to the language's JavaScript heritage, and the use of "as" and 
"as!" operators for casts in place of C-style cast syntax).
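
For comparison, TypeScript's "as" is the unchecked half of such a pair, 
while a checked "as!" additionally has to test the value at runtime. A 
minimal sketch (TypeScript as a stand-in; BGBScript itself is only 
described informally above):

// "as"  : unchecked, compile-time-only cast -- the compiler trusts you.
// "as!" : a checked cast would verify the value at runtime; here the
//         explicit tag test plays that role.
interface Shape { kind: string }
interface Circle extends Shape { kind: "circle"; radius: number }

function area(s: Shape): number {
  if (s.kind !== "circle") throw new TypeError("not a Circle"); // what "as!" implies
  const c = s as Circle; // plain "as": no runtime check at all
  return Math.PI * c.radius * c.radius;
}

const unit: Circle = { kind: "circle", radius: 2 };
console.log(area(unit)); // 12.566370614359172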


generally, its basic design is intended to be a bit less obtuse than C 
or C++ though (the core syntax is more like that in Java and 
ActionScript in most regards, and more advanced features are intended 
mostly for special cases).



the VM is intended to be free, and I currently have it under the MIT 
license, but I don't currently have any explicit plans for support. it 
is more of a "use it if you want" proposition, provided as-is, and so on.


it is currently given on request via email, mostly due to my server 
being offline and probably will be for a while (it is currently 1600 
miles away, and my parents don't know how to fix it...).



but, what I mostly meant was that it is designed in such a way to 
hopefully deal acceptably well with writing largish code-bases (like, 
supporting packages/namespaces and importing and so on), and should 
hopefully be competitive performance-wise with similar-class languages 
(still needs a bit more work on this front, namely to try to get 
performance to be more like Java, C#, or C++ and less like Python).


as-is, performance is less of a critical failing though, since one can 
put most performance critical code in C land and work around the weak 
performance somewhat (and, also, my projects are currently more bound by 
video-card performance than CPU performance as well).



in a few cases, things were done which favored performance over strict 
ECMA-262 conformance though (most notably, there are currently 
differences regarding default floating-point precision and similar, due 
mostly to the VM presently needing to box doubles, and generally double 
precision being unnecessary, ... however, the VM will use double 
precision if it is used explicitly).
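
(For a feel of the precision difference, JavaScript/TypeScript's 
Math.fround rounds a number to the nearest 32-bit float, which is what 
a float-by-default VM would hand back:)

// Double vs. single precision: Math.fround simulates a VM that defaults
// to 32-bit floats unless double precision is asked for explicitly.
const d = 0.1;                 // 64-bit double
const f = Math.fround(0.1);    // nearest 32-bit float
console.log(d === f);          // false: the float carries less precision
console.log(f);                // 0.10000000149011612
console.log(Math.fround(0.5)); // 0.5 (exactly representable either way)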




If you meant something more positive, I think Lua is a good candidate:
 - Small (and hopefully reliable) tools.
 - Fast implementations.
 - Widely used in the gaming industry.
 - Good C FFI.
 - Spurs quite higher-level meaning.



Lua is small, and fairly fast (by scripting-language standards).

its use in the gaming industry is moderate (it still faces competition 
against several other languages, namely Python, Scheme, and various 
engine-specific languages).


not everyone (myself included) is entirely fond of its Pascal-ish syntax 
though.


I also have doubts about how well it will hold up in large-scale codebases, though.


its native C FFI is moderate (in that it could be a lot worse), but 
AFAIK most of its ease of use here comes from the common use of SWIG 
(since SWIG shaves away most need for manually-written boilerplate).


the SWIG strategy though is itself a tradeoff IMO, since it requires 
some special treatment on the part of the headers, and works by 
producing intermediate glue code.


similarly, it doesn't address the matter of potential semantic 
mismatches between the languages (so the interfaces tend to be fairly 
basic).



in my case, a similar system to SWIG is directly supported by the VM, 
does not generally require boilerplate code (but does require any 
symbols to be DLL exports on Windows), and the FFI is much more tightly 

Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Casey Ransberger
Below. 

On Feb 29, 2012, at 5:43 AM, Loup Vaillant l...@loup-vaillant.fr wrote:

 Yes, I'm aware of that limitation.  I have the feeling however that
 IDEs and debuggers are overrated.

When I'm Squeaking, sometimes I find myself modeling classes with the browser 
but leaving method bodies to 'self break' and then write all of the actual code 
in the debugger. Doesn't work so well for hacking on the GUI, but, well. 

I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems 
odd. Most people I've encountered who don't use the debugger haven't learned 
one yet. 

At one company (I'd love to tell you which but I signed a non-disparagement 
agreement) when I asked why the standard dev build of the product didn't 
include the debugger module, I was told "you don't need it." When I went to 
install it, I was told not to. 

I don't work there any more...


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 2:58 PM, Casey Ransberger wrote:

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillantl...@loup-vaillant.fr  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.

When I'm Squeaking, sometimes I find myself modeling classes with the browser 
but leaving method bodies to 'self break' and then write all of the actual code 
in the debugger. Doesn't work so well for hacking on the GUI, but, well.

I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems 
odd. Most people I've encountered who don't use the debugger haven't learned 
one yet.


agreed.

the main reason I can think of why one wouldn't use a debugger is 
that none are available.
however, otherwise, debuggers are a fairly useful piece of software 
(generally used in combination with debug-logs and unit-tests and similar).


sadly, I don't yet have a good debugger in place for my scripting 
language, as mostly I am currently using the Visual-Studio debugger 
(which, granted, can't really see into script code). granted, this is 
less of an immediate issue as most of the project is plain C.




At one company (I'd love to tell you which but I signed a non-disparagement agreement) 
when I asked why the standard dev build of the product didn't include the debugger 
module, I was told you don't need it. When I went to install it, I was told 
not to.

I don't work there any more...


makes sense.




Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Loup Vaillant

On 01/03/2012 22:58, Casey Ransberger wrote:

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillantl...@loup-vaillant.fr  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.


When I'm Squeaking, sometimes I find myself modeling classes with the browser 
but leaving method bodies to 'self break' and then write all of the actual code 
in the debugger. Doesn't work so well for hacking on the GUI, but, well.


Okay I take it back. Your use case sounds positively awesome.



I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems 
odd. Most people I've encountered who don't use the debugger haven't learned 
one yet.



Spot on.  The only debugger I have used up until now was a semi-broken
version of gdb (it tended to miss stack frames).

Loup.


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 3:56 PM, Loup Vaillant wrote:

On 01/03/2012 22:58, Casey Ransberger wrote:

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillantl...@loup-vaillant.fr  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.


When I'm Squeaking, sometimes I find myself modeling classes with the 
browser but leaving method bodies to 'self break' and then write all 
of the actual code in the debugger. Doesn't work so well for hacking 
on the GUI, but, well.


Okay I take it back. Your use case sounds positively awesome.


I'm curious about 'debuggers are overrated' and 'you shouldn't need 
one.' Seems odd. Most people I've encountered who don't use the 
debugger haven't learned one yet.



Spot on.  The only debugger I have used up until now was a semi-broken
version of gdb (it tended to miss stack frames).



yeah...

sadly, the Visual Studio debugger will also miss stack frames, since it 
apparently often doesn't know how to back-trace through code in areas 
it doesn't have debugging information for, even though presumably 
pretty much everything is using the EBP-chain convention for 32-bit code 
(one gets the address, followed by question marks, and the little 
message "stack frames beyond this point may be invalid").



a lot of time this happens in my case in stack frames where the crash 
has occurred in code which has a call-path going through the BGBScript 
VM (and the debugger apparently isn't really sure how to back-trace 
through the generated code).


note: although I don't currently have a full/proper JIT, some amount of 
the execution path often does end up being through generated code (often 
through piece-wise generate code fragments).


ironically, in AMD CodeAnalyst this apparently shows up as "unknown 
module", and often accounts for more of the total running time than does 
the interpreter proper (although typically still only 5-10%, as the bulk 
of the running time tends to be in my renderer and also in nvogl32.dll 
and kernel.exe and similar...).



or such...



Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Casey Ransberger
Inline.

On Thu, Mar 1, 2012 at 2:56 PM, Loup Vaillant l...@loup-vaillant.fr wrote:

 On 01/03/2012 22:58, Casey Ransberger wrote:

  Below.

 On Feb 29, 2012, at 5:43 AM, Loup Vaillantl...@loup-vaillant.fr  wrote:

  Yes, I'm aware of that limitation.  I have the feeling however that
 IDEs and debuggers are overrated.


 When I'm Squeaking, sometimes I find myself modeling classes with the
 browser but leaving method bodies to 'self break' and then write all of the
 actual code in the debugger. Doesn't work so well for hacking on the GUI,
 but, well.


 Okay I take it back. Your use case sounds positively awesome.


It's fun :)


  I'm curious about 'debuggers are overrated' and 'you shouldn't need one.'
 Seems odd. Most people I've encountered who don't use the debugger haven't
 learned one yet.



 Spot on.  The only debugger I have used up until now was a semi-broken
 version of gdb (it tended to miss stack frames).


Oh, ouch. Missed frames. I hate it when things are ill-framed.

I can't say I blame you. GDB is very *NIXy. Not really very friendly to
newcomers. Crack open a Squeak image and break something. It's a whole
different experience. "Where is this nil value coming from?" is a question
that I can answer more easily in an ST-80 debugger than in any other
that I've tried (with the possible exception of Self). The button UI on the
thing could probably use a bit of modern design love (I'm sure I'm going to
be trampled for saying so!) but otherwise I think it's a great study for what
the baseline debugging experience ought to be for an HLL (why deal with less
awesome when there's more awesome available under the MIT license as a
model to work from?)

Of course, I'm saying *baseline.* Which is to say that we can probably go a
whole lot further with these things in the future. I'm still waiting on
that magical OmniDebugger that Alessandro Warth mentioned would be able to
deal with multiple OMeta-implemented languages ;)

Loup.




-- 
Casey Ransberger


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread Julian Leviston
What if the aim that superseded this was to make it available to the next set 
of people, who can do something about real fundamental change around this?

Perhaps what is needed is to ACTUALLY clear out the cruft. Maybe it's not easy 
or possible through the old channels... too much work to convince too many 
people, who have so much history, of the merits of tearing down the existing 
systems. 

Just a thought.
Julian

On 02/03/2012, at 2:04 AM, Reuben Thomas wrote:

 On 1 March 2012 15:02, Julian Leviston jul...@leviston.net wrote:
 Is this one of the aims?
 
 It doesn't seem to be, which is sad, because however brilliant the
 ideas, you can't rely on other people to get them out for you.
 
 On 01/03/2012, at 11:42 PM, Reuben Thomas wrote:
 
 The biggest challenge for FONC will not be to achieve good technical
 results, as it is stuffed with people who have a history of doing
 great work, and its results to date are already exciting; it will be
 to get those results into widespread use. I've seen no evidence that
 the principals have considered how and why they failed to do this in
 the past, nor that they have any ideas on how to avoid it this time around.

 -- 
 http://rrt.sc3d.org



Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Loup Vaillant

Alan Kay wrote:

Hi Loup

Very good question -- and tell your Boss he should support you!


Cool, thank you for your support.



[…] One general argument is
that non-machine-code languages are POLs of a weak sort, but are more
effective than writing machine code for most problems. (This was quite
controversial 50 years ago -- and lots of bosses forbade using any
higher level language.)


I hadn't thought about this historical perspective. I'll keep that in
mind, thanks.



Companies (and programmers within) are rarely rewarded for saving costs
over the real lifetime of a piece of software […]


I think my company is.  We make custom software, and most of the time
also get to maintain it.  Of course, we charge for both.  So, when we
manage to keep the maintenance cheap (fewer bugs, simpler code…), we win.

However, we barely acknowledge it: much code I see is a technical debt
waiting to be paid, because the original implementer wasn't given the
time to do even a simple cleanup.



An argument that resonates with some bosses is the "debuggable
requirements/specifications: ship the prototype and improve it" approach,
whose benefits show up early on.


But of course.  I should have thought about it, thanks.



[…] one of the most important POLs to be worked on are
the ones that are for making POLs quickly.


This is why I am totally thrilled by Ometa and Maru. I use them to point
out that programming languages can be much cheaper to implement than
most think they are.  It is difficult however to get past the idea that
implementing a language (even a small, specialized one) is by default a
huge undertaking.
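
To make that concrete: below is a complete little language (integers,
+, *, parentheses) in roughly 25 lines of TypeScript. It is a sketch
of my own, not OMeta or Maru, but it shows the scale a small,
specialized language can come in at:

// A recursive-descent evaluator for expressions like "2+3*(4+1)".
function evalExpr(src: string): number {
  let i = 0;
  const peek = () => src[i];
  const eat = (c: string) => { if (src[i] !== c) throw new Error("expected " + c); i++; };
  function atom(): number {                       // number | '(' expr ')'
    if (peek() === "(") { eat("("); const v = expr(); eat(")"); return v; }
    let j = i;
    while (j < src.length && /[0-9]/.test(src[j])) j++;
    if (j === i) throw new Error("expected number");
    const v = parseInt(src.slice(i, j), 10); i = j; return v;
  }
  function term(): number {                       // atom ('*' atom)*
    let v = atom();
    while (peek() === "*") { eat("*"); v *= atom(); }
    return v;
  }
  function expr(): number {                       // term ('+' term)*
    let v = term();
    while (peek() === "+") { eat("+"); v += term(); }
    return v;
  }
  const v = expr();
  if (i !== src.length) throw new Error("trailing input");
  return v;
}

console.log(evalExpr("2+3*(4+1)")); // 17

With a parser generator in the OMeta family, the grammar above shrinks
to a handful of rules.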

Cheers,
Loup.


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Loup Vaillant

Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.  Sure, when dealing with a complex
program in a complex language (say, tens of thousands of lines of C++),
IDEs and debuggers are a must.  But I'm not sure their
absence outweighs the simplicity potentially achieved with POLs. (I
mean, I really don't know.  It could even be domain-dependent.)

I agree however that having both (POLs + tools) would be much better,
and is definitely worth pursuing.  I'll think about it.

Loup.



Alan Kay wrote:

With regard to your last point -- making POLs -- I don't think we are
there yet. It is most definitely a lot easier to make really powerful
POLs fairly quickly than it used to be, but we still don't have a nice
methodology and tools to automatically supply the IDE, debuggers, etc.
that need to be there for industrial-strength use.

Cheers,

Alan

*From:* Loup Vaillant l...@loup-vaillant.fr
*To:* fonc@vpri.org
*Sent:* Wednesday, February 29, 2012 1:27 AM
*Subject:* Re: [fonc] Error trying to compile COLA

Alan Kay wrote:
  Hi Loup
 
  Very good question -- and tell your Boss he should support you!

Cool, thank you for your support.


  […] One general argument is
  that non-machine-code languages are POLs of a weak sort, but
are more
  effective than writing machine code for most problems. (This was
quite
  controversial 50 years ago -- and lots of bosses forbade using any
  higher level language.)

I hadn't thought about this historical perspective. I'll keep that in
mind, thanks.


  Companies (and programmers within) are rarely rewarded for saving
costs
  over the real lifetime of a piece of software […]

I think my company is. We make custom software, and most of the time
also get to maintain it. Of course, we charge for both. So, when we
manage to keep the maintenance cheap (fewer bugs, simpler code…), we win.

However, we barely acknowledge it: much code I see is a technical debt
waiting to be paid, because the original implementer wasn't given the
time to do even a simple cleanup.


  An argument that resonates with some bosses is the "debuggable
  requirements/specifications: ship the prototype and improve it" approach,
  whose benefits show up early on.

But of course. I should have thought about it, thanks.


  […] one of the most important POLs to be worked on are
  the ones that are for making POLs quickly.

This is why I am totally thrilled by Ometa and Maru. I use them to point
out that programming languages can be much cheaper to implement than
most think they are. It is difficult however to get past the idea that
implementing a language (even a small, specialized one) is by default a
huge undertaking.

Cheers,
Loup.








Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay
I think it is domain-dependent -- for example, it is very helpful to have a 
debugger of some kind for a parser, but less so for a projection language like 
Nile. On the other hand, debuggers for making both of these systems are very 
helpful. Etoys doesn't have a debugger because the important state is mostly 
visible in the form of graphical objects. OTOH, having a capturing tracer (a la 
EXDAMS) could be nice both for reviewing and understanding complex interactions 
and for dealing with unrepeatable events.

The topic of going from an idea for a useful POL to an actually mission-usable 
POL is prime thesis territory.


Cheers,

Alan





 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Wednesday, February 29, 2012 5:43 AM
Subject: Re: [fonc] Error trying to compile COLA
 
Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.  Sure, when dealing with a complex
program in a complex language (say, tens of thousands of lines of C++),
IDEs and debuggers are a must.  But I'm not sure their
absence outweighs the simplicity potentially achieved with POLs. (I
mean, I really don't know.  It could even be domain-dependent.)

I agree however that having both (POLs + tools) would be much better,
and is definitely worth pursuing.  I'll think about it.

Loup.



Alan Kay wrote:
 With regard to your last point -- making POLs -- I don't think we are
 there yet. It is most definitely a lot easier to make really powerful
 POLs fairly quickly than it used to be, but we still don't have a nice
 methodology and tools to automatically supply the IDE, debuggers, etc.
 that need to be there for industrial-strength use.

 Cheers,

 Alan

     *From:* Loup Vaillant l...@loup-vaillant.fr
     *To:* fonc@vpri.org
     *Sent:* Wednesday, February 29, 2012 1:27 AM
     *Subject:* Re: [fonc] Error trying to compile COLA

     Alan Kay wrote:
       Hi Loup
      
       Very good question -- and tell your Boss he should support you!

     Cool, thank you for your support.


       […] One general argument is
       that non-machine-code languages are POLs of a weak sort, but
     are more
       effective than writing machine code for most problems. (This was
     quite
       controversial 50 years ago -- and lots of bosses forbade using any
       higher level language.)

      I hadn't thought about this historical perspective. I'll keep that in
     mind, thanks.


       Companies (and programmers within) are rarely rewarded for saving
     costs
       over the real lifetime of a piece of software […]

     I think my company is. We make custom software, and most of the time
     also get to maintain it. Of course, we charge for both. So, when we
      manage to keep the maintenance cheap (fewer bugs, simpler code…), we win.

     However, we barely acknowledge it: much code I see is a technical debt
     waiting to be paid, because the original implementer wasn't given the
     time to do even a simple cleanup.


        An argument that resonates with some bosses is the "debuggable
        requirements/specifications: ship the prototype and improve it" approach,
        whose benefits show up early on.

     But of course. I should have thought about it, thanks.


       […] one of the most important POLs to be worked on are
       the ones that are for making POLs quickly.

      This is why I am totally thrilled by Ometa and Maru. I use them to point
     out that programming languages can be much cheaper to implement than
     most think they are. It is difficult however to get past the idea that
     implementing a language (even a small, specialized one) is by default a
     huge undertaking.

     Cheers,
     Loup.









Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Duncan Mak
Hello Alan,

On Tue, Feb 28, 2012 at 4:30 PM, Alan Kay alan.n...@yahoo.com wrote:

 For example, one of the many current day standards that was dismissed
 immediately is the WWW (one could hardly imagine more of a mess).


I was talking to a friend the other day about the conversations going on in
this mailing list - my friend firmly believes that the Web (HTTP) is one of
the most important innovations in recent decades.

One thing he cites as innovative is a point that I think TimBL mentions
often: that the Web was successful (and not prior hypertext systems)
because it allowed for broken links.

Is that really a good architectural choice? If not, is there a reason why
the Web succeeded, where previous hypertext systems failed? Is it only
because of pop culture?

What are the architectural flaws of the current Web? Is there anything that
could be done to make it better, in light of these flaws?

-- 
Duncan.


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread BGB
 disagree), since often all that is 
needed is to create something in the form of the standard (and its 
common/expected behaviors), and everything will work as expected.


so, the essence is in the form, and in the behaviors, and not in the 
particular artifacts which are used to manifest it. so, one implements 
something according to a standard, but the standard doesn't really care 
whose code was used to implement it (or often, how things actually work 
internally, which is potentially different from one implementation to 
another).


sometimes, there are reasons to not just chase after the cult of "there 
is a library for that", but, annoyingly, many people start raving at the 
mere mention of doing some task without using whatever library/tool/... 
they think should be used in performing said task.



granted, in a few places I have ended up resorting to relational-style 
systems instead (because, sadly, not every problem maps cleanly to a 
tree structure). these are typically rarer, but are more common in my 
3D engine and elsewhere; essentially, my 3D engine amounts to a large 
and elaborate system of relational queries (and, no, without using an 
RDBMS), with only a tangential part actually sending things out to the 
video card. there are pros and cons here.
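
As a sketch of what "relational queries without an RDBMS" can mean in
practice (TypeScript; the entities and fields are invented for
illustration, not taken from BGB's engine):

// In-memory "relations" are just arrays of records; queries are filters,
// joins, and projections over them.
interface Entity { id: number; kind: string; region: string }
interface Light  { entityId: number; radius: number }

const entities: Entity[] = [
  { id: 1, kind: "lamp",    region: "cave" },
  { id: 2, kind: "monster", region: "cave" },
  { id: 3, kind: "lamp",    region: "town" },
];
const lights: Light[] = [{ entityId: 1, radius: 8 }, { entityId: 3, radius: 5 }];

// Roughly: SELECT l.radius FROM entities e JOIN lights l ON l.entityId = e.id
//          WHERE e.region = 'cave'
const caveLightRadii = entities
  .filter(e => e.region === "cave")
  .flatMap(e => lights.filter(l => l.entityId === e.id))
  .map(l => l.radius);

console.log(caveLightRadii); // [8]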



or such...



Cheers,

Alan


*From:* Loup Vaillant l...@loup-vaillant.fr
*To:* fonc@vpri.org
*Sent:* Wednesday, February 29, 2012 1:27 AM
*Subject:* Re: [fonc] Error trying to compile COLA

Alan Kay wrote:
 Hi Loup

 Very good question -- and tell your Boss he should support you!

Cool, thank you for your support.


 [...] One general argument is
 that non-machine-code languages are POLs of a weak sort, but
are more
 effective than writing machine code for most problems. (This was
quite
 controversial 50 years ago -- and lots of bosses forbade using any
 higher level language.)

I hadn't thought about this historical perspective. I'll keep that in
mind, thanks.


 Companies (and programmers within) are rarely rewarded for
saving costs
 over the real lifetime of a piece of software [...]

I think my company is.  We make custom software, and most of the time
also get to maintain it.  Of course, we charge for both.  So, when we
manage to keep the maintenance cheap (fewer bugs, simpler code...),
we win.

However, we barely acknowledge it: much code I see is a technical debt
waiting to be paid, because the original implementer wasn't given the
time to do even a simple cleanup.


 An argument that resonates with some bosses is the "debuggable
 requirements/specifications: ship the prototype and improve it" approach,
 whose benefits show up early on.

But of course.  I should have thought about it, thanks.


 [...] one of the most important POLs to be worked on are
 the ones that are for making POLs quickly.

This is why I am totally thrilled by Ometa and Maru. I use them to point
out that programming languages can be much cheaper to implement than
most think they are.  It is difficult however to get past the idea
that
implementing a language (even a small, specialized one) is by
default a
huge undertaking.

Cheers,
Loup.








Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay

Hello Alan,


On Tue, Feb 28, 2012 at 4:30 PM, Alan Kay alan.n...@yahoo.com wrote:

For example, one of the many current day standards that was dismissed 
immediately is the WWW (one could hardly imagine more of a mess). 

I was talking to a friend the other day about the conversations going on in 
this mailing list - my friend firmly believes that the Web (HTTP) is one of 
the most important innovations in recent decades.



One thing he cites as innovative is a point that I think TimBL mentions often: 
that the Web was successful (and not prior hypertext systems) because it 
allowed for broken links.


Is that really a good architectural choice? If not, is there a reason why the 
Web succeeded, where previous hypertext systems failed? Is it only because of 
pop culture?


What are the architectural flaws of the current Web? Is there anything that 
could be done to make it better, in light of these flaws?

-- 
Duncan.




Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Max Orhai
 teachers to download .exe files. This wipes out
 the Squeak plugin that provides all the functionality.

 But there is still the browser and Javascript. But Javascript isn't fast
 enough to do the particle system. But why can't we just download the
 particle system and run it in a safe address space? The browser people
 don't yet understand that this is what they should have allowed in the
 first place. So right now there is only one route for this (and a few years
 ago there were none) -- and that is Native Client on Google Chrome.

  But Google Chrome is only 13% penetrated, and the other browser fiefdoms
 don't like NaCl. Google Chrome is an .exe file so teachers can't
 download it (and if they could, they could download the Etoys plugin).

 Just in from browserland ... there is now -- 19 years later -- an allowed
 route to put samples in your machine's sound buffer that works on some of
 the browsers.

 Holy cow folks!

 Alan



   --
 *From:* Duncan Mak duncan...@gmail.com
 *To:* Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
 fonc@vpri.org
 *Sent:* Wednesday, February 29, 2012 11:50 AM

 *Subject:* Re: [fonc] Error trying to compile COLA

 Hello Alan,

 On Tue, Feb 28, 2012 at 4:30 PM, Alan Kay alan.n...@yahoo.com wrote:

 For example, one of the many current day standards that was dismissed
 immediately is the WWW (one could hardly imagine more of a mess).


 I was talking to a friend the other day about the conversations going on
 in this mailing list - my friend firmly believes that the Web (HTTP) is one
 of the most important innovations in recent decades.

 One thing he cites as innovative is a point that I think TimBL mentions
 often: that the Web was successful (and not prior hypertext systems)
 because it allowed for broken links.

 Is that really a good architectural choice? If not, is there a reason why
 the Web succeeded, where previous hypertext systems failed? Is it only
 because of pop culture?

 What are the architectural flaws of the current Web? Is there anything
 that could be done to make it better, in light of these flaws?

 --
 Duncan.







Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Reuben Thomas
On 29 February 2012 23:09, Alan Kay alan.n...@yahoo.com wrote:

[Recapitulation snipped]

 So, this gradually turned into an awful mess. But Linus went back to square
 one

Not really, it was just a reimplementation of the same thing on cheap
modern hardware.

 But there is still the browser and Javascript. But Javascript isn't fast
 enough to do the particle system.

Javascript is plenty fast enough to run a simulated x86 booting Linux
and running applications:

http://bellard.org/jslinux/

Granted, it shouldn't be necessary to go via Javascript.

  But Google Chrome is only 13% penetrated,

~40% these days. Things move fast.

The degree to which the mess we're in is avoidable (or, I'd go
further, even undesirable) is also exaggerated.

-- 
http://rrt.sc3d.org


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Igor Stasenko
On 1 March 2012 02:46, Reuben Thomas r...@sc3d.org wrote:
 On 29 February 2012 23:09, Alan Kay alan.n...@yahoo.com wrote:

 [Recapitulation snipped]

 So, this gradually turned into an awful mess. But Linus went back to square
 one

 Not really, it was just a reimplementation of the same thing on cheap
 modern hardware.

 But there is still the browser and Javascript. But Javascript isn't fast
 enough to do the particle system.

 Javascript is plenty fast enough to run a simulated x86 booting Linux
 and running applications:

 http://bellard.org/jslinux/

 Granted, it shouldn't be necessary to go via Javascript.

  But Google Chrome is only 13% penetrated,

 ~40% these days. Things move fast.

 The degree to which the mess we're in is avoidable (or, I'd go
 further, even undesirable) is also exaggerated.


But this is not about penetration. It is about the wrong approach.
The first application I saw that could do 'virtual boxing' was
DESQview (http://en.wikipedia.org/wiki/DESQview), which ran multiple
instances of MS-DOS on a 386-based machine,
and I was quite amazed at the time that you could actually run two
different instances of an operating system
(and of course arbitrary software on each of them) without either even
realizing that they were sharing a single hardware box: CPU, memory, etc.

Now, it took 2011 - 1984 = 27 years for the industry to realize that
virtualization can be used for something other than boxing a whole OS
to be able to run Linux on Windows or Windows on Linux...
What strikes me is that things like NaCl were doable from the very
beginning, back in 1995.
Instead, people invented JavaScript and invested a lot into it (and
keep investing) to make it faster and more secure, and to let you play
sounds/videos/3D graphics.
Desktop-based apps were able to do that from the beginning!

It is clear to me that NaCl is the only way to make the web a better
place to live.
You say JavaScript is fast?
Then can you tell me how I can run and manage multiple parallel threads in it?
Can you do it at all? And if not, shall we wait 10 more years until
people implement it for us?


 --
 http://rrt.sc3d.org



-- 
Best regards,
Igor Stasenko.


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Reuben Thomas
On 1 March 2012 01:40, Igor Stasenko siguc...@gmail.com wrote:
 On 1 March 2012 02:46, Reuben Thomas r...@sc3d.org wrote:
 On 29 February 2012 23:09, Alan Kay alan.n...@yahoo.com wrote:

 [Recapitulation snipped]

 So, this gradually turned into an awful mess. But Linus went back to square
 one

 Not really, it was just a reimplementation of the same thing on cheap
 modern hardware.

 But there is still the browser and Javascript. But Javascript isn't fast
 enough to do the particle system.

 Javascript is plenty fast enough to run a simulated x86 booting Linux
 and running applications:

 http://bellard.org/jslinux/

 Granted, it shouldn't be necessary to go via Javascript.

  But Google Chrome is only 13% penetrated,

 ~40% these days. Things move fast.

 The degree to which the mess we're in is avoidable (or, I'd go
 further, even undesirable) is also exaggerated.


 But this is not about penetration.

Alan seemed to think penetration mattered; I had some good news.

 It is clear, as to me, that NaCl is the only way to make web better
 living place.
 You say Javascript is fast?
 Now can you tell me, how i can run and manage multiple parallel threads in it?

Like this:

http://www.sitepoint.com/javascript-threading-html5-web-workers/
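
In outline, the model the link describes (a minimal sketch; file names
are hypothetical, and a browser environment is assumed):

// main.ts -- spawn a worker and communicate purely by message passing.
const worker = new Worker("worker.js");
worker.onmessage = (e: MessageEvent) => console.log("sum:", e.data);
worker.postMessage([1, 2, 3, 4]);

// worker.ts (compiled to worker.js) -- runs concurrently with the page.
onmessage = (e: MessageEvent) => {
  const nums = e.data as number[];
  postMessage(nums.reduce((a, b) => a + b, 0));
};

Note that workers are share-nothing: no locks, no shared mutable state,
only messages -- which is arguably a different thing from the threads
Igor is asking for.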

Don't complain that the future is late arriving. The future is already
here, perhaps just not quite in the shape we hoped. And the big news
is that it never will be quite in the shape we'd like; there will be
no complete solution.

Rousseau got this right over 200 years ago: his book The Social
Contract, which tried to do for the organisation of society what FONC
is trying to do for computing, took for its starting point people as
they are and laws as they might be. Most visionaries would do well to
copy that first part.

-- 
http://rrt.sc3d.org


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Igor Stasenko
On 1 March 2012 03:59, Reuben Thomas r...@sc3d.org wrote:
 On 1 March 2012 01:40, Igor Stasenko siguc...@gmail.com wrote:
 On 1 March 2012 02:46, Reuben Thomas r...@sc3d.org wrote:
 On 29 February 2012 23:09, Alan Kay alan.n...@yahoo.com wrote:

 [Recapitulation snipped]

 So, this gradually turned into an awful mess. But Linus went back to square
 one

 Not really, it was just a reimplementation of the same thing on cheap
 modern hardware.

 But there is still the browser and Javascript. But Javascript isn't fast
 enough to do the particle system.

 Javascript is plenty fast enough to run a simulated x86 booting Linux
 and running applications:

 http://bellard.org/jslinux/

 Granted, it shouldn't be necessary to go via Javascript.

  But Google Chrome is only 13% penetrated,

 ~40% these days. Things move fast.

 The degree to which the mess we're in is avoidable (or, I'd go
 further, even undesirable) is also exaggerated.


 But this is not about penetration.

 Alan seemed to think penetration mattered; I had some good news.

 It is clear, as to me, that NaCl is the only way to make web better
 living place.
 You say Javascript is fast?
 Now can you tell me, how i can run and manage multiple parallel threads in 
 it?

 Like this:

 http://www.sitepoint.com/javascript-threading-html5-web-workers/

Wonderful. So in 5 years (make it fewer if you want) I can be sure that my
app will run on every machine in any browser,
and I won't have to put up an "update your browser" warning.
What a relief!
(end of sarcasm)

 Don't complain that the future is late arriving. The future is already
 here, perhaps just not quite in the shape we hoped. And the big news
 is that it never will be quite in the shape we'd like; there will be
 no complete solution.

The big news is that the mileage can be much shorter if you go the right way.
The once half-baked JavaScript has now evolved into a full-fledged virtual machine.
I am happy about that. But the problem is that it is not a "put your
favorite language here" virtual machine.
And not just a virtual machine.
Can I ask why I am forced to use JavaScript as the assembly language of the web?
To me, this language is not good enough to serve at that level.
From this standpoint, it is inherently incomplete and always will be, and
will always stand between you and your goals.

 Rousseau got this right over 200 years ago: his book The Social
 Contract, which tried to do for the organisation of society what FONC
 is trying to do for computing, took for its starting point people as
 they are and laws as they might be. Most visionaries would do well to
 copy that first part.

 --
 http://rrt.sc3d.org



-- 
Best regards,
Igor Stasenko.


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread David Smith
. Google Chrome is an .exe file so teachers can't
 download it (and if they could, they could download the Etoys plugin).

 Just in from browserland ... there is now -- 19 years later -- an allowed
 route to put samples in your machine's sound buffer that works on some of
 the browsers.

 Holy cow folks!

 Alan



   --
 *From:* Duncan Mak duncan...@gmail.com
 *To:* Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
 fonc@vpri.org
 *Sent:* Wednesday, February 29, 2012 11:50 AM

 *Subject:* Re: [fonc] Error trying to compile COLA

 Hello Alan,

 On Tue, Feb 28, 2012 at 4:30 PM, Alan Kay alan.n...@yahoo.com wrote:

 For example, one of the many current day standards that was dismissed
 immediately is the WWW (one could hardly imagine more of a mess).


 I was talking to a friend the other day about the conversations going on
 in this mailing list - my friend firmly believes that the Web (HTTP) is one
 of the most important innovations in recent decades.

 One thing he cites as innovative is a point that I think TimBL mentions
 often: that the Web was successful (and not prior hypertext systems)
 because it allowed for broken links.

 Is that really a good architectural choice? If not, is there a reason why
 the Web succeeded, where previous hypertext systems failed? Is it only
 because of pop culture?

 What are the architectural flaws of the current Web? Is there anything
 that could be done to make it better, in light of these flaws?

 --
 Duncan.







Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Loup Vaillant

Originally, the VPRI claim is to be able to build a system that's 10,000×
smaller than our current bloatware.  That's going from roughly 200
million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
to a single book.) That's 4 orders of magnitude.

From the report, I made a rough breakdown of the causes for code
reduction.  It seems that

 - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most features
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.

 - 1 order of magnitude is gained by mere good engineering principles.
   In Frank for instance, there is _one_ drawing system, which is used
   everywhere.  Systematic code reuse can go a long way.
   Another example is the code I work with.  I routinely find
   portions whose volume I can divide by 2 merely by rewriting a couple
   of functions.  I fully expect to be able to do much better if I
   could refactor the whole program.  Not because I'm a rock star (I'm
   definitely not).  Very far from that.  Just because the code I
   maintain is sufficiently abysmal.

 - 2 orders of magnitude are gained through the use of Problem Oriented
   Languages (instead of C or C++).  As examples, I can readily recall:
     + Gezira vs Cairo    (÷95)
     + OMeta  vs Lex+Yacc (÷75)
     + TCP/IP             (÷93)
   So I think this is not exaggerated (the arithmetic is spelled out
   just below).
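
Spelling the arithmetic out (my own summary of the figures above, not a
quote from the report):

\[ 10^{1} \times 10^{1} \times 10^{2} = 10^{4}, \qquad
   \frac{2 \times 10^{8}\ \text{lines}}{10^{4}} = 2 \times 10^{4}\ \text{lines}, \]

which is exactly the 200-million-to-20,000 claim.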

Looked at it this way, it doesn't seem so impossible any more.  I
don't expect you to suddenly agree with the 4-orders-of-magnitude claim
(it still defies my intuition), but you probably disagree specifically
with one of my three points above.  Possible objections I can think of
are:

 - Features matter more than I think they do.
 - One may not expect the user to write his own features, even though
   it would be relatively simple.
 - Current systems may be not as badly written as I think they are.
 - Code reuse could be harder than I think.
 - The two orders of magnitude that seem to come from problem oriented
   languages may not come from _only_ those.  It could come from the
   removal of features, as well as better engineering principles,
   meaning I'm counting some causes twice.

Loup.


BGB wrote:

On 2/27/2012 10:08 PM, Julian Leviston wrote:

Structural optimisation is not compression. Lurk more.


probably will drop this, as arguing about all this is likely pointless
and counter-productive.

but, is there any particular reason why similar rules and
restrictions wouldn't apply?

(I personally suspect that something similar applies to nearly all forms
of communication, including written and spoken natural language, and a
claim that some X can be expressed in Y units does seem a fair amount
like a compression-style claim).


but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Julian

On 28/02/2012, at 3:38 PM, BGB wrote:


granted, I remain a little skeptical.

I think there is a bit of a difference though between, say, a log
table and a typical piece of software.
a log table is, essentially, almost pure redundancy, which is why it can
be regenerated on demand.

a typical application is, instead, a big pile of logic code for a
wide range of behaviors and for dealing with a wide range of special
cases.


"executable math" could very well be functionally equivalent to a
highly compressed program, but note in this case that one needs to
count both the size of the compressed program, and also the size of
the program needed to decompress it (so, the size of the system
would also need to account for the compiler and runtime).

although there is a fair amount of redundancy in typical program code
(logic that is often repeated, duplicated effort between programs,
...), eliminating this redundancy would still yield only a bounded
reduction in total size.

increasing abstraction is likely to, again, be ultimately bounded
(and, often, abstraction differs primarily in form, rather than in
essence, from that of moving more of the system functionality into
library code).


much like with data compression, the concept commonly known as the
Shannon limit may well still apply (itself setting an upper limit
to how much is expressible within a given volume of code).
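
For reference, the theorem being alluded to: for a source X, no uniquely
decodable code can have an expected codeword length below the entropy,

\[ \mathbb{E}[\ell] = \sum_{x} p(x)\,\ell(x) \;\ge\; H(X) = -\sum_{x} p(x) \log_2 p(x). \]

Applying this to program size, as above, is an analogy rather than a
formal result, since "the set of programs expressing a system" comes
with no agreed-upon probability model.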







Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Reuben Thomas
On 28 February 2012 16:41, BGB cr88...@gmail.com wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


 this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at the very best.
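
The arithmetic behind "a factor of 2": if a fraction d of the code can
be dropped, the remaining size is S(1-d), so the shrink factor is

\[ \text{factor} = \frac{S}{S(1-d)} = \frac{1}{1-d}, \qquad
   d \approx 0.5 \;\Rightarrow\; \text{factor} \approx 2. \]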

-- 
http://rrt.sc3d.org


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

Very good question -- and tell your Boss he should support you!

If your boss has a math or science background, this will be an easy sell 
because there are many nice analogies that hold, and also some good examples in 
computing itself.

The POL approach is generally good, but for a particular problem area could be 
as difficult as any other approach. One general argument is that 
non-machine-code languages are POLs of a weak sort, but are more effective 
than writing machine code for most problems. (This was quite controversial 50 
years ago -- and lots of bosses forbade using any higher level language.)

Four arguments against POLs are the difficulties of (a) designing them, (b) 
making them, (c) creating IDE etc tools for them, and (d) learning them. (These 
are similar to the arguments about using math and science in engineering, but 
are not completely bogus for a small subset of problems ...).

Companies (and the programmers within) are rarely rewarded for saving costs over 
the real lifetime of a piece of software (similar problems exist in the climate 
problems we are facing). These are social problems, but part of real 
engineering. However, at some point life-cycle costs and savings will become 
something that is accounted for and rewarded-or-dinged. 

An argument that resonates with some bosses is the "debuggable 
requirements/specifications: ship the prototype and improve it" approach, whose 
benefits show up early on. However, these quicker-track processes will often be 
stressed for time to do a new POL.

This suggests that one of the most important POLs to be worked on are the ones 
that are for making POLs quickly. I think this is a huge important area and 
much needs to be done here (also a very good area for new PhD theses!).


Taking all these factors (and there are more), I think the POL and extensible 
language approach works best for really difficult problems that small numbers 
of really good people are hooked up to solve (could be in a company, and very 
often in one of many research venues) -- and especially if the requirements 
will need to change quite a bit, both from learning curve and quick response to 
the outside world conditions.

Here's where a factor of 100 or 1000 (sometimes even a factor of 10) less code 
will be qualitatively powerful.

Right now I draw a line at *100. If you can get this or more, it is worth 
surmounting the four difficulties listed above. If you can get *1000, you are 
in a completely new world of development and thinking.


Cheers,

Alan






 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Tuesday, February 28, 2012 8:17 AM
Subject: Re: [fonc] Error trying to compile COLA
 
Alan Kay wrote:
 Hi Loup

 As I've said and written over the years about this project, it is not
 possible to compare features in a direct way here.

Yes, I'm aware of that.  The problem arises when I do advocacy.  A
response I often get is "but with only 20,000 lines, they gotta
leave features out!".  It is not easy to explain that a point-by-point
comparison is either unfair or flatly impossible.


 Our estimate so far is that we are getting our best results from the
 consolidated redesign (folding features into each other) and then from
 the POLs. We are still doing many approaches where we thought we'd have
 the most problems with LOCs, namely at the bottom.

If I got it right, what you call "consolidated redesign" encompasses what
I called "feature creep" and "good engineering principles" (I understand
now that they can't be easily separated). I originally estimated that:

- you manage to gain 4 orders of magnitude compared to current OSes,
- consolidated redesign gives you roughly 2 of those (from 200M to 2M),
- problem-oriented languages give you the remaining 2 (from 2M to 20K).

Did I…
- overstate the power of problem-oriented languages?
- understate the benefits of consolidated redesign?
- forget something else?

(Sorry to bother you with those details, but I'm currently trying to
  convince my Boss to pay me for a PhD on the grounds that PoLs are
  totally amazing, so I'd better know real fast if I'm being
  over-confident.)

Thanks,
Loup.



 Cheers,

 Alan


     *From:* Loup Vaillant l...@loup-vaillant.fr
     *To:* fonc@vpri.org
     *Sent:* Tuesday, February 28, 2012 2:21 AM
     *Subject:* Re: [fonc] Error trying to compile COLA

      Originally, the VPRI claim is to be able to build a system that's 10,000×
      smaller than our current bloatware. That's going from roughly 200
     million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
     to a single book.) That's 4 orders of magnitude.

      From the report, I made a rough break down of the causes for code
     reduction. It seems that

     - 1 order of magnitude is gained by removing feature creep. I agree
      feature creep can be important. But I also believe most features
      belong to a long tail, where each is needed by a minority of users.
     It does matter

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Reuben

Yep. One of the many finesses in the STEPS project was to point out that 
requiring OSs to have drivers for everything misses what being networked is all 
about. In a nicer distributed systems design (such as Popek's LOCUS), one would 
get drivers from the devices automatically, and they would not be part of any 
OS code count. Apple even did this in the early days of the Mac for its own 
devices, but couldn't get enough other vendors to see why this was a really big 
idea.

Eventually the OS melts away to almost nothing (as it did at PARC in the 70s).

Then the question starts to become how much code has to be written to make the 
various functional parts that will be semi-automatically integrated to make up 
'vanilla personal computing'?


Cheers,

Alan





 From: Reuben Thomas r...@sc3d.org
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 28, 2012 9:33 AM
Subject: Re: [fonc] Error trying to compile COLA
 
On 28 February 2012 16:41, BGB cr88...@gmail.com wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


 this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.

-- 
http://rrt.sc3d.org




Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Jakob Praher
Dear Alan,

Am 28.02.12 14:54, schrieb Alan Kay:
 Hi Ryan

 Check out Smalltalk-71, which was a design to do just what you suggest
 -- it was basically an attempt to combine some of my favorite
 languages of the time -- Logo and Lisp, Carl Hewitt's Planner, Lisp
 70, etc.
do you have detailed documentation of Smalltalk-71 somewhere?
Something like a "Smalltalk-71 for Smalltalk-80 programmers" :-)
In The Early History of Smalltalk you mention it as

 It was a kind of parser with object-attachment that executed tokens
directly.

From the examples I think that do 'expr' evaluates expr by using a
previous to 'ident' :arg1 .. :argN body definition.

As an example, do 'factorial 3' should evaluate to 6, given:

to 'factorial' 0 is 1
to 'factorial' :n do 'n*factorial n-1'

What about arithmetic and precedence: what part of the language was built
into the system?
- :var denotes a variable, whereas var denotes the instantiated value of
:var in the expr, e.g. :n vs 'n-1'
- quoted strings ('...') denote simple tokens (in the head) as well as
expressions (in the body)?
- to and do are keywords
- () can be used for precedence

You described evaluation as "straightforward pattern-matching".
It somehow reminds me of a term rewriting system, e.g. 'hd' ('cons' :a
:b) '←' :c is a structured term.
I know rewriting systems which first parse into an abstract
representation (e.g. prefix form) and transform on the abstract syntax,
whereas in Smalltalk-71 the concrete syntax seems to be used in the rules.

Also it seems redundant to both have:
to 'hd' ('cons' :a :b) do 'a'
and
to 'hd' ('cons' :a :b) '←' :c do 'a ← c'

Is this made to make sure that the left-hand side of ← has to be a hd
('cons' :a :b) expression?

Best,
Jakob


 This never got implemented because of a bet that turned into
 Smalltalk-72, which also did what you suggest, but in a less
 comprehensive way -- think of each object as a Lisp closure that could
 be sent a pointer to the message and could then parse-and-eval that. 

 A key to scaling -- that we didn't try to do -- is semantic typing
 (which I think is discussed in some of the STEPS material) -- that is:
 to be able to characterize the meaning of what is needed and produced
 in terms of a description rather than a label. Looks like we won't get
 to that idea this time either.

 Cheers,

 Alan

 
 *From:* Ryan Mitchley ryan.mitch...@gmail.com
 *To:* fonc@vpri.org
 *Sent:* Tuesday, February 28, 2012 12:57 AM
 *Subject:* Re: [fonc] Error trying to compile COLA

 On 27/02/2012 19:48, Tony Garnock-Jones wrote:

 My interest in it came out of thinking about integrating pub/sub
 (multi- and broadcast) messaging into the heart of a language.
 What would a Smalltalk look like if, instead of a strict unicast
 model with multi- and broadcast constructed atop (via
 Observer/Observable), it had a messaging model capable of
 natively expressing unicast, anycast, multicast, and broadcast
 patterns? 


 I've wondered if pattern matching shouldn't be a foundation of
 method resolution (akin to binding with backtracking in Prolog) -
 if a multicast message matches, the method is invoked (with much
 less specificity than traditional method resolution by
 name/token). This is maybe closer to the biological model of a
 cell surface receptor.

 Of course, complexity is an issue with this approach (potentially
 NP-complete).

 Maybe this has been done and I've missed it.
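
 A minimal sketch of what such receptor-style dispatch could look like
 (TypeScript; all names invented, and the matching here is a linear
 scan rather than the Prolog-style backtracking described above):

 // Handlers fire whenever a message satisfies their predicate, so a single
 // message can bind to many receivers (multicast), unlike one-method dispatch.
 type Msg = Record<string, unknown>;
 type Receptor = { accepts: (m: Msg) => boolean; receive: (m: Msg) => void };

 const receptors: Receptor[] = [
   { accepts: m => m.topic === "collision", receive: m => console.log("physics saw", m) },
   { accepts: m => "damage" in m,           receive: m => console.log("health saw", m) },
 ];

 function broadcast(m: Msg): void {
   receptors.filter(r => r.accepts(m)).forEach(r => r.receive(m)); // every match fires
 }

 broadcast({ topic: "collision", damage: 5 }); // both receptors fire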






 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
As I mentioned, Smalltalk-71 was never implemented -- and rarely mentioned (but 
it was part of the history of Smalltalk so I put in a few words about it).

If we had implemented it, we probably would have cleaned up the look of it, and 
also some of the conventions. 

You are right that part of it is like a term rewriting system, and part of it 
has state (object state).

to ... do ... is an operation. The match is on everything between to and do.

For example, the first line with cons in it does the car operation (which 
here is hd).

The second line with cons in it does replaca. The value of hd is being 
replaced by the value of c. 

One of the struggles with this design was to try to make something almost as 
simple as LOGO, but that could do language extensions, simple AI backward 
chaining inferencing (like Winograd's block stacking problem), etc.

The turgid punctuations (as I mentioned in the history) were attempts to find 
ways to do many different kinds of matching.

So we were probably lucky that Smalltalk-72 came along. Its pattern 
matching was less general, but quite a bit could be done as far as driving an 
extensible interpreter with it.

However, some of these ideas were done better later. I think by Leler, and 
certainly by Joe Goguen, and others.

Cheers,

Alan



 From: Jakob Praher ja...@praher.info
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Tuesday, February 28, 2012 12:52 PM
Subject: Re: [fonc] Error trying to compile COLA
 

Dear Alan,

Am 28.02.12 14:54, schrieb Alan Kay: 
Hi Ryan


Check out Smalltalk-71, which was a design to do just what you suggest -- it 
was basically an attempt to combine some of my favorite languages of the time 
-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.
do you have detailed documentation of Smalltalk-71 somewhere? Something like 
a Smalltalk-71 for Smalltalk-80 programmers :-)
In the early history of Smalltalk you mention it as 
 It was a kind of parser with object-attachment that executed tokens 
 directly.
From the examples I think that do 'expr' evaluates expr by using previous 
to 'ident' :arg1..:argN body definitions.

As an example, do 'factorial 3' should evaluate to 6 considering:

to 'factorial' 0 is 1
to 'factorial' :n do 'n*factorial n-1'

What about arithmetic and precedence: what part of the language was built 
into the system? 
- :var denotes variables, whereas var denotes the instantiated value
of :var in the expr, e.g. :n vs 'n-1'
- do the '...' quotes denote simple tokens (in the head) as well as
expressions (in the body)?
- to, do are keywords
- () can be used for precedence

You described evaluation as straightforward pattern-matching.
It somehow reminds me of a term rewriting system - e.g. 'hd' ('cons'
:a :b) '←' :c is a structured term.
I know rewriting systems which first parse into an abstract
representation (e.g. prefix form) and transform on the abstract
syntax - whereas in Smalltalk-71 the concrete syntax seems to be
used in the rules.

Also it seems redundant to both have:
to 'hd' ('cons' :a :b) do 'a' 
and
to 'hd' ('cons' :a :b) '←' :c do 'a ← c'

Is this made to make sure that the left-hand side of ← has to be
a hd ('cons' :a :b) expression?

Best,
Jakob




This never got implemented because of a bet that turned into Smalltalk-72, 
which also did what you suggest, but in a less comprehensive way -- think of 
each object as a Lisp closure that could be sent a pointer to the message and 
could then parse-and-eval that. 


A key to scaling -- that we didn't try to do -- is semantic typing (which I 
think is discussed in some of the STEPS material) -- that is: to be able to 
characterize the meaning of what is needed and produced in terms of a 
description rather than a label. Looks like we won't get to that idea this 
time either.


Cheers,


Alan





 From: Ryan Mitchley ryan.mitch...@gmail.com
To: fonc@vpri.org 
Sent: Tuesday, February 28, 2012 12:57 AM
Subject: Re: [fonc] Error trying to compile COLA
 

 
On 27/02/2012 19:48, Tony Garnock-Jones wrote:


My interest in it came out of thinking about
  integrating pub/sub (multi- and broadcast)
  messaging into the heart of a language. What
  would a Smalltalk look like if, instead of a
  strict unicast model with multi- and broadcast
  constructed atop (via Observer/Observable), it
  had a messaging model capable of natively
  expressing unicast, anycast, multicast, and
  broadcast patterns? 

I've wondered if pattern matching shouldn't be a
foundation of method resolution (akin to binding
with backtracking in Prolog) - if a multicast

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 10:33 AM, Reuben Thomas wrote:

On 28 February 2012 16:41, BGBcr88...@gmail.com  wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most features
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.



yeah, kind of the issue here.

one can shave code, reduce redundancy, increase abstraction, ... but 
this will only buy so much.


then one can start dropping features, but how many can one drop and 
still have everything work?...


one can be like, well, maybe we will make something like MS-DOS, but in 
long-mode? (IOW: single-big address space, with no user/kernel 
separation, or conventional processes, and all kernel functionality is 
essentially library functionality).



ok, how small can this be made?
maybe 50 kloc, assuming one is sparing with the drivers.


I once wrote an OS kernel (long-dead project, ended nearly a decade 
ago), going and running a line counter on the whole project, I get about 
84 kloc. further investigation: 44 kloc of this was due to a copy of 
NASM sitting in the apps directory (I tried to port NASM to my OS at the 
time, but it didn't really work correctly, very possibly due to a 
quickly kludged-together C library...).



so, a 40 kloc OS kernel, itself at the time bordering on barely worked.

what sorts of HW drivers did it have: ATA / IDE, console, floppy, VESA, 
serial mouse, RS232, RTL8139. how much code as drivers: 11 kloc.


how about VFS: 5 kloc, which included (FS drivers): BSHFS (IIRC, a 
TFTP-like shared filesystem), FAT (12/16/32), RAMFS.


another 5 kloc goes into the network code, which included TCP/IP, ARP, 
PPP, and an HTTP client+server.


boot loader was 288 lines (ASM), setup was 792 lines (ASM).

boot loader: copies boot files (setup.bin and kernel.sys) into RAM 
(in the low 640K). seems hard-coded for FAT12.


setup was mostly responsible for setting up the kernel (copying it to 
the desired address) and entering protected mode (jumping into the 
kernel). this is commonly called a second-stage loader, partly because 
it does a lot of stuff which is too bulky to do in the boot loader 
(which is limited to 512 bytes, whereas setup can be up to 32kB).


setup magic: Enable A20, load GDT, enter big-real mode, check for MZ 
and PE markers (kernel was PE/COFF it seems), copies kernel image to 
VMA base, pushes kernel entry point to stack, remaps IRQs, executes 
32-bit return (jumps into protected mode).


around 1/2 of the setup file is code for jumping between real and 
protected mode and for interfacing with VESA.


note: I was using PE/COFF for apps and libraries as well.
IIRC, I was using a naive process-based model at the time.


could a better HLL have made the kernel drastically smaller? I have my 
doubts...



add the need for maybe a compiler, ... and the line count is sure to get 
larger quickly.


based on my own code, one could probably get a basically functional C 
compiler in around 100 kloc, but maybe smaller could be possible (this 
would include the compiler+assembler+linker).


apps/... would be a bit harder.


in my case, the most direct path would be just dumping all of my 
existing libraries on top of my original OS project, and maybe dropping 
the 3D renderer (since it would be sort of useless without GPU support, 
OpenGL, ...). this would likely weigh in at around 750-800 kloc (but 
could likely be made into a self-hosting OS, since a C compiler would be 
included, and as well there is a C runtime floating around).


this is all still a bit over the stated limit.


maybe, one could try to get a functional GUI framework and some basic 
applications and similar in place (probably maybe 100-200 kloc more, at 
least).


probably, by this point, one is looking at something like a Windows 3.x 
style disk footprint (maybe using 10-15 MB of HDD space or so for all 
the binaries...).



granted, in my case, the vast majority of the code would be C, probably 
with a smaller portion of the OS and applications being written in 
BGBScript or similar...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 5:36 PM, Julian Leviston wrote:


On 29/02/2012, at 10:29 AM, BGB wrote:


On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid the 
current day world.


For example, one of the many current day standards that was 
dismissed immediately is the WWW (one could hardly imagine more of a 
mess).




I don't think the web is entirely horrible:
HTTP basically works, and XML is ok IMO, and an XHTML variant could 
be ok.


Hypertext as a structure is not beautiful nor is it incredibly useful. 
Google exists because of how incredibly flawed the web is and if 
you look at their process for organising it, you start to find 
yourself laughing a lot. The general computing experience these days 
is an absolute shambles and completely crap. Computers are very very 
hard to use. Perhaps you don't see it - perhaps you're in the trees - 
you can't see the forest... but it's intensely bad.




I am not saying it is particularly good, just that it is ok and not 
completely horrible


it is, as are most things in life, generally adequate for what it does...

it could be better, and it could probably also be worse...


It's like someone crapped their pants and google came around and said 
hey you can wear gas masks if you like... when what we really need to 
do is clean up the crap and make sure there's a toilet nearby so that 
people don't crap their pants any more.




IMO, this is more when one gets into the SOAP / WSDL area...


granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many shiny new technologies which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system 
lacking support for this is likely to be rejected outright.


You mean like email? A system that doesn't have anything to do with 
the WWW per se that is used daily by millions upon millions of people? 
:P I disagree intensely. In exactly the same way that facebook was 
taken up because it was a slightly less crappy version of myspace, 
something better than the web would be taken up in a heartbeat if it 
was accessible and obviously better.


You could, if you chose to, view this mailing group as a type of 
living document where you can peruse its contents through your email 
program... depending on what you see the web as being... maybe if you 
squint your eyes just the right way, you could envisage the web as 
simply being a means of sharing information to other humans... and 
this mailing group could simply be a different kind of web...


I'd hardly say that email hasn't been a great success... in fact, I 
think email, even though it, too is fairly crappy, has been more of a 
success than the world wide web.




I don't think email and the WWW are mutually exclusive, by any means.

yes, one probably needs email as well, as well as probably a small 
mountain of other things, to make a viable end-user OS...



however, technically, many people do use email via webmail interfaces 
and similar.
nevermind that many people use things like Microsoft Outlook Web 
Access and similar.


so, it is at least conceivable that a future exists where people read 
their email via webmail and access usenet almost entirely via Google 
Groups and similar...


not that it would be necessarily a good thing though...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Dale Schumacher
On Mon, Feb 27, 2012 at 5:23 PM, Charles Perkins ch...@memetech.com wrote:
 I think of the code size reduction like this:

 A book of logarithm tables may be hundreds of pages in length and yet the 
 equation producing the numbers can fit on one line.

 VPRI is exploring runnable math and is seeking key equations from which the 
 functionality of those 1MLOC, 10MLOC, 14MLOC can be automatically produced.

 It's not about code compression, its about functionality expansion.

This reminds me of Gregory Chaitin's concept of algorithmic
complexity, leading to his results relating to compression, logical
irreducibility and understanding [1].

[1] G. Chaitin.  Meta Math! The Quest for Omega, Vintage Books 2006.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Martin Baldan
David,

Thanks for the link. Indeed, now I see how to run eval with .l example
files. There are also .k files, though I don't know how they differ from
those, except that .k files are called with ./eval filename.k while
.l files are called with ./eval repl.l filename.l, where filename is
the name of the file. Both kinds seem to be made of Maru code.

I still don't know how to go from here to a Frank-like GUI. I'm reading
other replies which seem to point that way. All tips are welcome ;)

-Martin


On Mon, Feb 27, 2012 at 3:54 AM, David Girle davidgi...@gmail.com wrote:

 Take a look at the page:

 http://piumarta.com/software/maru/

 it has the original version you have + current.
 There is a short readme in the current version with some examples that
 will get you going.

 David

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Reuben Thomas
On 27 February 2012 14:01, Martin Baldan martino...@gmail.com wrote:

 I still don't know how to go from here to a Frank-like GUI. I'm reading
 other replies which seem to point that way. All tips are welcome ;)

And indeed, maybe any discoveries could be written up at one of the Wikis:

http://vpri.org/fonc_wiki/index.php/Main_Page
http://www.vpri.org/vp_wiki/index.php/Main_Page

? There's a lot of exciting stuff to learn about here, but the tedious
details of how to build it are not among them!

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Alan Kay
Hi Julian

I should probably comment on this, since it seems that the STEPS reports 
haven't made it clear enough.

STEPS is a science experiment not an engineering project. 


It is not at all about making and distributing an operating system etc., but 
about trying to investigate the tradeoffs between problem oriented languages 
that are highly fitted to problem spaces vs. what it takes to design them, 
learn them, make them, integrate them, add pragmatics, etc.

Part of the process is trying many variations in interesting (or annoying) 
areas. Some of these have been rather standalone, and some have had some 
integration from the start.

As mentioned in the reports, we made Frank -- tacking together some of the POLs 
that were done as satellites -- to try to get a better handle on what an 
integration language might be like that is much better than the current use of 
Squeak. It has been very helpful to get something that is evocative of the 
whole system working in a practical enough matter to use it (instead of PPT 
etc) to give talks that involve dynamic demos. We got some good ideas from this.


But this project is really about following our noses, partly via getting 
interested in one facet or another (since there are really too many for just a 
few people to cover all of them). 


For example, we've been thinking for some time that the pretty workable DBjr 
system that is used for visible things - documents, UI, etc. -- should be 
almost constructable by hand if we had a better constraint system. This would 
be the third working DBjr made by us ...


And -- this year is the 50th anniversary of Sketchpad, which has also got us 
re-thinking about some favorite old topics, etc.

This has led us to start putting constraint engines into STEPS, thinking about 
how to automatically organize various solvers, what kinds of POLs would be nice 
to make constraint systems with, UIs for same, and so forth. Intellectually 
this is kind of interesting because there are important overlaps between the 
functions + time stamps approach of many of our POLs and with constraints and 
solvers.

This looks very fruitful at this point!


As you said at the end of your email: this is not an engineering project, but a 
series of experiments.


One thought we had about this list is that it might lead others to conduct 
similar experiments. Just to pick one example: Reuben Thomas' thesis Mite (ca 
2000) has many good ideas that apply here. To quote from the opening: Mite is a 
virtual machine intended to provide fast language and machine-neutral 
just-in-time
translation of binary-portable object code into high quality native code, with 
a formal foundation. So one interesting project could be to try going from 
Nile down to a CPU via Mite. Nile is described in OMeta, so this could be a 
graceful transition, etc.

In any case, we spend most of our time trying to come up with ideas that might 
be powerful for systems design and ways to implement them. We occasionally 
write a paper or an NSF report. We sometimes put out code so people can see 
what we are doing. But what we will put out at the end of this period will be 
very different -- especially in the center -- than what we did for the center 
last year.

Cheers and best wishes,

Alan






 From: Julian Leviston jul...@leviston.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Saturday, February 25, 2012 6:48 PM
Subject: Re: [fonc] Error trying to compile COLA
 

As I understand it, Frank is an experiment that is an extended version of DBJr 
that sits atop lesserphic, which sits atop gezira which sits atop nile, which 
sits atop maru, all of which utilise ometa and the worlds idea.


If you look at the http://vpri.org/html/writings.php page you can see a 
pattern of progression that has emerged to the point where Frank exists. From 
what I understand, maru is the finalisation of what began as pepsi and coke. 
Maru is a simple s-expression language, in the same way that pepsi and coke 
were. In fact, it looks to have the same syntax. Nothing is the layer 
underneath that is essentially a symbolic computer - sitting between maru and 
the actual machine code (sort of like an LLVM assembler if I've understood it 
correctly).


They've hidden Frank in plain sight. He's a patch-together of all their 
experiments so far... which I'm sure you could do if you took the time to 
understand each of them and had the inclination. They've been publishing as 
much as they could all along. The point, though, is you have to understand 
each part. It's no good if you don't understand it.


If you know anything about Alan & VPRI's work, you'd know that their focus is 
on getting this stuff in front of as many children as possible, because 
they have so much more ability to connect to the heart of a problem than 
adults. (Nothing to do with age - talking about minds, not bodies here). 
Adults usually get in the way with their stuff

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Steve Wart
Just to zero in on one idea here


  Anyway I digress... have you had a look at this file?:

  http://piumarta.com/software/maru/maru-2.1/test-pepsi.l

  Just read the whole thing - I found it fairly interesting :) He's built
 pepsi on maru there... that's pretty fascinating, right? Built a micro
 smalltalk on top of the S-expression language... and then does a Fast
 Fourier Transform test using it...

   my case: looked some, but not entirely sure how it works though.


See the comment at the top:

./eval repl.l test-pepsi.l

eval.c is written in C, it's pretty clean code and very cool. Then eval.l
does the same thing in a lisp-like language.

Was playing with the Little Schemer with my son this weekend - if you fire
up the repl, cons, car, cdr stuff all work as expected.

Optionally check out the wikipedia article on PEGs and look at the COLA
paper if you can find it.

Anyhow, it's all self-contained, so if you can read C code and understand a
bit of Lisp, you can watch how the syntax morphs into Smalltalk. Or any
other language you feel like writing a parser for.

This is fantastic stuff.

Steve
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread David Harris
Alan ---

I appreciate both your explanation and what you are doing.  Of course
jealousy comes into it, because you guys appear to be having a lot of fun
mixed in with your hard work, and I would love to be part of that.  I know
where I would be breaking down the doors if I was starting a masters or
doctorate.   However, I have made my choices, a long time ago, and so will
have to live vicariously through your reports.  The constraint system, a la
Sketchpad, is a laudable experiment and I would love to see a
hand-constructible DBjr.  You seem to be approaching a much more
understandable and malleable system, and achieving more of the promise of
computers as imagined in the sixties and seventies, rather than what seems
to be the more mundane and opaque conglomerate that is generally the case
now.

Keep up the excellent work,
David


On Monday, February 27, 2012, Alan Kay alan.n...@yahoo.com wrote:
 Hi Julian
 I should probably comment on this, since it seems that the STEPS reports
haven't made it clear enough.
 STEPS is a science experiment not an engineering project.

 It is not at all about making and distributing an operating system
etc., but about trying to investigate the tradeoffs between problem
oriented languages that are highly fitted to problem spaces vs. what it
takes to design them, learn them, make them, integrate them, add
pragmatics, etc.
 Part of the process is trying many variations in interesting (or
annoying) areas. Some of these have been rather standalone, and some have
had some integration from the start.
 As mentioned in the reports, we made Frank -- tacking together some of
the POLs that were done as satellites -- to try to get a better handle on
what an integration language might be like that is much better than the
current use of Squeak. It has been very helpful to get something that is
evocative of the whole system working in a practical enough manner to use
it (instead of PPT etc) to give talks that involve dynamic demos. We got
some good ideas from this.

 But this project is really about following our noses, partly via getting
interested in one facet or another (since there are really too many for
just a few people to cover all of them).

 For example, we've been thinking for some time that the pretty workable
DBjr system that is used for visible things - documents, UI, etc. -- should
be almost constructable by hand if we had a better constraint system. This
would be the third working DBjr made by us ...

 And -- this year is the 50th anniversary of Sketchpad, which has also got
us re-thinking about some favorite old topics, etc.
 This has led us to start putting constraint engines into STEPS, thinking
about how to automatically organize various solvers, what kinds of POLs
would be nice to make constraint systems with, UIs for same, and so forth.
Intellectually this is kind of interesting because there are important
overlaps between the functions + time stamps approach of many of our POLs
and with constraints and solvers.
 This looks very fruitful at this point!

 As you said at the end of your email: this is not an engineering project,
but a series of experiments.

 One thought we had about this list is that it might lead others to
conduct similar experiments. Just to pick one example: Reuben Thomas'
thesis Mite (ca 2000) has many good ideas that apply here. To quote from
the opening: Mite is a virtual machine intended to provide fast language
and machine-neutral just-in-time translation of binary-portable object code
into high quality native code, with a formal foundation. So one
interesting project could be to try going from Nile down to a CPU via Mite.
Nile is described in OMeta, so this could be a graceful transition, etc.
 In any case, we spend most of our time trying to come up with ideas that
might be powerful for systems design and ways to implement them. We
occasionally write a paper or an NSF report. We sometimes put out code so
people can see what we are doing. But what we will put out at the end of
this period will be very different -- especially in the center -- than what
we did for the center last year.
 Cheers and best wishes,
 Alan


 
 From: Julian Leviston jul...@leviston.net
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Saturday, February 25, 2012 6:48 PM
 Subject: Re: [fonc] Error trying to compile COLA

 As I understand it, Frank is an experiment that is an extended version of
DBJr that sits atop lesserphic, which sits atop gezira which sits atop
nile, which sits atop maru all of which which utilise ometa and the
worlds idea.
 If you look at the http://vpri.org/html/writings.php page you can see a
pattern of progression that has emerged to the point where Frank exists.
From what I understand, maru is the finalisation of what began as pepsi and
coke. Maru is a simple s-expression language, in the same way that pepsi
and coke were. In fact, it looks to have the same syntax. Nothing is the
layer underneath

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Tony Garnock-Jones
Hi Alan,

On 27 February 2012 11:32, Alan Kay alan.n...@yahoo.com wrote:

 [...] a better constraint system. [...] This has led us to start putting
 constraint engines into STEPS, thinking about how to automatically organize
 various solvers, what kinds of POLs would be nice to make constraint
 systems with, UIs for same, and so forth.


Have you looked into the Propagators of Radul and Sussman? For example,
http://dspace.mit.edu/handle/1721.1/44215. His approach is closely related
to dataflow, with a lattice defined at each node in the graph for
integrating the messages that are sent to it. He's built FRP systems, type
checkers, type inferencers, abstract interpretation systems and lots of
other fun things in a nice, simple way, out of this core construct that
he's placed near the heart of his language's semantics.

My interest in it came out of thinking about integrating pub/sub (multi-
and broadcast) messaging into the heart of a language. What would a
Smalltalk look like if, instead of a strict unicast model with multi- and
broadcast constructed atop (via Observer/Observable), it had a messaging
model capable of natively expressing unicast, anycast, multicast, and
broadcast patterns? Objects would be able to collaborate on responding to
requests... anycast could be used to provide contextual responses to
requests... concurrency would be smoothly integrable... more research to be
done :-)
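
As a crude sketch of the dispatch I mean (a toy message bus, nothing from
any actual system): receivers subscribe with a predicate over the message,
and the delivery mode decides how many matching receivers see it:

import random

class Bus:
    def __init__(self):
        self.handlers = []                   # (predicate, callback) pairs
    def subscribe(self, predicate, callback):
        self.handlers.append((predicate, callback))
    def publish(self, message, mode='multicast'):
        matched = [cb for p, cb in self.handlers if p(message)]
        if not matched:
            return
        if mode == 'anycast':                # any one matching receiver
            random.choice(matched)(message)
        else:                                # multicast/broadcast: all of them
            for cb in matched:
                cb(message)

bus = Bus()
bus.subscribe(lambda m: m.get('op') == 'ping', lambda m: print('pong'))
bus.subscribe(lambda m: 'op' in m, lambda m: print('logged', m['op']))
bus.publish({'op': 'ping'})                  # both receivers match

Unicast is then just the degenerate case where exactly one predicate
matches.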

Regards,
  Tony
-- 
Tony Garnock-Jones
tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 10:30 AM, Steve Wart wrote:

Just to zero in on one idea here



Anyway I digress... have you had a look at this file?:

http://piumarta.com/software/maru/maru-2.1/test-pepsi.l

Just read the whole thing - I found it fairly interesting :) He's
build pepsi on maru there... that's pretty fascinating, right?
Built a micro smalltalk on top of the S-expression language...
and then does a Fast Fourier Transform test using it...


my case: looked some, but not entirely sure how it works though.


See the comment at the top:
./eval repl.l test-pepsi.l
eval.c is written in C, it's pretty clean code and very cool. Then 
eval.l does the same thing in a lisp-like language.


Was playing with the Little Schemer with my son this weekend - if you 
fire up the repl, cons, car, cdr stuff all work as expected.




realized I could rip the filename off the end of the URL to get the 
directory, got the C file.


initial/quick observations:
apparently uses Boehm;
type system works a bit differently than my stuff, but seems to expose a 
vaguely similar interface (except I tend to put 'dy' on the front of 
everything here, so dycar(), dycdr(), dycaddr(), and most 
predicates have names like dyconsp() and similar, and often I 
type-check using strings rather than an enum, ...);
the parser works a bit differently than my S-Expression parser (mine 
tend to be a bit more, if/else, and read characters typically either 
from strings or stream objects);

ANSI codes with raw escape characters (text editor not entirely happy);
mixed tabs and spaces not leading to very good formatting;
simplistic interpreter, albeit it is not entirely clear how the built-in 
functions get dispatched;

...

a much more significant difference:
in my code, this sort of functionality is spread over many different 
areas (over several different DLLs), so one wouldn't find all of it in 
the same place.


will likely require more looking to figure out how the parser or syntax 
changing works (none of my parsers do this, most are fixed-form and 
typically shun context dependent parsing).



some of my earlier/simpler interpreters were like this though, vs newer 
ones which tend to have a longer multiple-stage translation pipeline, 
and which make use of bytecode.



Optionally check out the wikipedia article on PEGs and look at the 
COLA paper if you can find it.




PEGs: apparently I may have been using them informally already (thinking 
they were EBNF), although I haven't used them for directly driving a 
parser. typically, they have been used occasionally for describing 
elements of the syntax (in documentation and similar), at least when not 
using the lazier system of syntax via tables of examples.


may require more looking to try to better clarify the difference between 
a PEG and EBNF...
(the only difference I saw listed was that PEGs are ordered, but I would 
have assumed that a parser based on EBNF would have been implicitly 
ordered anyways, hmm...).
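
A quick way to see the difference: PEG's / is a prioritized choice that
commits to the first alternative that succeeds. A toy combinator sketch
in Python (hand-rolled, not from any library):

# Parsers take the input string, return (value, rest) or None on failure.
def lit(s):
    return lambda inp: (s, inp[len(s):]) if inp.startswith(s) else None

def choice(*alts):                     # PEG '/': first success wins
    def parse(inp):
        for alt in alts:
            r = alt(inp)
            if r is not None:
                return r
        return None
    return parse

# With 'if' listed first, 'ifelse' can never match -- order is semantics:
kw = choice(lit('if'), lit('ifelse'))
print(kw('ifelse x'))                  # = ('if', 'else x')

An unordered EBNF alternation would make this grammar ambiguous; the PEG
just silently picks 'if', which is exactly the ordered part.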



Anyhow, it's all self-contained, so if you can read C code and 
understand a bit of Lisp, you can watch how the syntax morphs into 
Smalltalk. Or any other language you feel like writing a parser for.


This is fantastic stuff.



following the skim and some more looking, I think I have a better idea 
how it works.



I will infer:
top Lisp-like code defines behavior;
syntax in middle defines syntax (as comment says);
(somehow) the parser invokes the new syntax, internally converting it 
into the Lisp-like form, which is what gets executed.
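
Roughly this shape, as a toy (my own illustration, not Maru's actual
mechanism): a surface-syntax rule rewrites text into the list form,
which a tiny evaluator then runs:

import operator, re

def read_infix(src):
    # syntax rule: 'A op B' is rewritten into the s-expression [op, A, B]
    m = re.match(r'\s*(\d+)\s*([+\-*])\s*(\d+)\s*$', src)
    if not m:
        raise SyntaxError(src)
    a, op, b = m.groups()
    return [op, int(a), int(b)]

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def eval_sexpr(x):                           # the form that gets executed
    if isinstance(x, list):
        return OPS[x[0]](*map(eval_sexpr, x[1:]))
    return x

print(read_infix('2 + 3'))                   # = ['+', 2, 3]
print(eval_sexpr(read_infix('2 + 3')))       # = 5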



so, seems interesting enough...


if so, my VM is vaguely similar, albeit without the syntax definition or 
variable parser (the parser for my script language is fixed-form and 
written in C, but does parse to a Scheme-like AST system).


the assumption would have been that if someone wanted a parser for a new 
language, they would write one, assuming the semantics mapped tolerably 
to the underlying VM (exactly matching the semantics of each language 
would be a little harder though).


theoretically, nothing would really prevent writing a parser in the 
scripting language, just I had never really considered doing so (or, for 
that matter, even supporting user-defined syntax elements in the main 
parser).



the most notable difference between my ASTs and Lisp or Scheme:
all forms are special forms, and function calls need to be made via a 
special form (this was mostly to help better detect problems);

operators were also moved to special forms, for similar reasons;
there are lots more special forms, most mapping to HLL constructs (for, 
while, break, continue, ...);

...

as-is, there are also a large-number of bytecode operations, many 
related to common special cases.


for example, a recent addition called jmp_cond_sweq reduces several 
instructions related to switch into a single operation, partly 
intended for micro-optimizing (why 3 opcodes when one only needs 1?), 
and also partly intended to be used as a VM 

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Alan Kay
Hi Tony

Yes, I've seen it. As Gerry says, it is an extension of Guy Steele's thesis. 
When I read this, I wished for a more interesting, comprehensive and 
wider-ranging and -scaling example to help think with. 


One reason to put up with some of the problems of defining things using 
constraints is that if you can organize things well enough, you get super 
clarity and simplicity and power.

They definitely need a driving example that has these traits. There is a 
certain tinge of the Turing Tarpit to this paper.

With regard to objects, my current prejudice is that objects should be able to 
receive messages, but should not have to send to explicit receivers. This is a 
kind of multi-cast I guess (but I think of it more like publish/subscribe).


Cheers,

Alan






 From: Tony Garnock-Jones tonygarnockjo...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Monday, February 27, 2012 9:48 AM
Subject: Re: [fonc] Error trying to compile COLA
 

Hi Alan,


On 27 February 2012 11:32, Alan Kay alan.n...@yahoo.com wrote:

[...] a better constraint system. [...] This has led us to start putting 
constraint engines into STEPS, thinking about how to automatically organize 
various solvers, what kinds of POLs would be nice to make constraint systems 
with, UIs for same, and so forth.

Have you looked into the Propagators of Radul and Sussman? For example, 
http://dspace.mit.edu/handle/1721.1/44215. His approach is closely related to 
dataflow, with a lattice defined at each node in the graph for integrating the 
messages that are sent to it. He's built FRP systems, type checkers, type 
inferencers, abstract interpretation systems and lots of other fun things in a 
nice, simple way, out of this core construct that he's placed near the heart 
of his language's semantics.

My interest in it came out of thinking about integrating pub/sub (multi- and 
broadcast) messaging into the heart of a language. What would a Smalltalk look 
like if, instead of a strict unicast model with multi- and broadcast 
constructed atop (via Observer/Observable), it had a messaging model capable 
of natively expressing unicast, anycast, multicast, and broadcast patterns? 
Objects would be able to collaborate on responding to requests... anycast 
could be used to provide contextual responses to requests... concurrency would 
be smoothly integrable... more research to be done :-)

Regards,
  Tony
-- 
Tony Garnock-Jones
tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread David Girle
I am interested in the embedded uses of Maru, so I cannot comment on
how to get from here to a Frank-like GUI.  I have no idea how many
others on this list are interested in the Internet of Things (IoT),
but I expect parts of Frank will be useful in that space.  Maybe 5kLOC
will bring up a connected, smart sensor system, rather than the 20kLOC
target VPRI have in mind for a programming system.

David

On Mon, Feb 27, 2012 at 7:01 AM, Martin Baldan martino...@gmail.com wrote:
 David,

 Thanks for the link. Indeed, now I see how to run  eval with .l example
 files. There are also .k  files, which I don't know how they differ from
 those, except that .k files are called with ./eval filename.k while .l
 files are called with ./eval repl.l filename.l where filename is the
 name of the file. Both kinds seem to be made of Maru code.

 I still don't know how to go from here to a Frank-like GUI. I'm reading
 other replies which seem to point that way. All tips are welcome ;)

 -Martin


 On Mon, Feb 27, 2012 at 3:54 AM, David Girle davidgi...@gmail.com wrote:

 Take a look at the page:

 http://piumarta.com/software/maru/

 it has the original version you have + current.
 There is a short readme in the current version with some examples that
 will get you going.

 David



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB
 Subject: Re: [fonc] Error trying to compile COLA

 As I understand it, Frank is an experiment that is an extended 
version of DBJr that sits atop lesserphic, which sits atop gezira 
which sits atop nile, which sits atop maru, all of which utilise 
ometa and the worlds idea.
 If you look at the http://vpri.org/html/writings.php page you can 
see a pattern of progression that has emerged to the point where Frank 
exists. From what I understand, maru is the finalisation of what began 
as pepsi and coke. Maru is a simple s-expression language, in the same 
way that pepsi and coke were. In fact, it looks to have the same 
syntax. Nothing is the layer underneath that is essentially a symbolic 
computer - sitting between maru and the actual machine code (sort of 
like an LLVM assembler if I've understood it correctly).
 They've hidden Frank in plain sight. He's a patch-together of all 
their experiments so far... which I'm sure you could do if you took 
the time to understand each of them and had the inclination. They've 
been publishing as much as they could all along. The point, though, is 
you have to understand each part. It's no good if you don't understand it.
 If you know anything about Alan & VPRI's work, you'd know that their 
focus is on getting this stuff in front of as many children as 
possible, because they have so much more ability to connect to the 
heart of a problem than adults. (Nothing to do with age - talking 
about minds, not bodies here). Adults usually get in the way with 
their stuff - their knowledge sits like a kind of a filter, 
denying them the ability to see things clearly and directly connect to 
them unless they've had special training in relaxing that filter. We 
don't know how to be simple and direct any more - not to say that it's 
impossible. We need children to teach us meta-stuff, mostly this 
direct way of experiencing and looking, and this project's main aim 
appears to be to provide them (and us, of course, but not as 
importantly) with the tools to do that. Adults will come secondarily - 
to the degree they can't embrace new stuff ;-). This is what we need 
as an entire populace - to increase our general understanding - to 
reach breakthroughs previously not thought possible, and fast. Rather 
than changing the world, they're providing the seed for children to 
change the world themselves.
 This is only as I understand it from my observation. Don't take it 
as gospel or even correct, but maybe you could use it to investigate 
the parts of frank a little more and with in-depth openness :) The 
entire project is an experiment... and that's why they're not coming 
out and saying hey guys this is the product of our work - it's not a 
linear building process, but an intensively creative process, and most 
of that happens within oneself before any results are seen (rather 
like boiling a kettle).

 http://www.vpri.org/vp_wiki/index.php/Main_Page
 On the bottom of that page, you'll see a link to the tinlizzie site 
that references experiment and the URL has dbjr in it... as far as I 
understand it, this is as much frank as we've been shown.

 http://tinlizzie.org/dbjr/
 :)
 Julian
 On 26/02/2012, at 9:41 AM, Martin Baldan wrote:

 Is that the case? I'm a bit confused. I've read the fascinating 
reports about Frank, and I was wondering what's the closest thing one 
can download and run right now. Could you guys please clear it up for me?


 Best,

 Martin

 On Sat, Feb 25, 2012 at 5:23 PM, Julian Leviston 
jul...@leviston.net mailto:jul...@leviston.net wrote:


 Isn't the cola basically irrelevant now? aren't they using maru 
instead? (or rather isn't maru the renamed version of coke?)


 Julian


 On 26/02/2012, at 2:52 AM, Martin Baldan wrote:

 Michael,

 Thanks for your reply. I'm looking into it.

 Best,

  Martin
 ___
 fonc mailing list
 fonc@vpri.org mailto:fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 1:27 PM, David Girle wrote:

I am interested in the embedded uses of Maru, so I cannot comment on
how to get from here to a Frank-like GUI.  I have no idea how many
others on this list are interested in the Internet of Things (IoT),
but I expect parts of Frank will be useful in that space.  Maybe 5kLOC
will bring up a connected, smart sensor system, rather than the 20kLOC
target VPRI have in mind for a programming system.

David


IoT: had to look it up, but it sounds like something which could easily 
turn very cyber-punky or end up being abused in some sort of dystopic 
future scenario. accidentally touch some random object and suddenly the 
person has a price on their head and police jumping in through their 
window armed with automatic weapons or something...


and escape is difficult as doors will automatically lock on their 
approach, and random objects will fly into their path as they try to 
make a run for it, ... (because reality itself has something akin to the 
Radiant AI system from Oblivion or Fallout 3).


(well, ok, not that I expect something like this would necessarily 
happen... or that the idea is necessarily a bad idea...).



granted, as for kloc:
code has to go somewhere, I don't think 5 kloc is going to work.

looking at the Maru stuff from earlier, I would have to check, but I 
suspect it may already go over that budget (by quickly looking at a few 
files and adding up the line counts).



admittedly, I don't as much believe in the tiny kloc goal, since as-is, 
getting a complete modern computing system down into a few Mloc would 
already be a bit of an achievement (vs, say, a 14 Mloc kernel running a 
4 Mloc web browser, on a probably 10 Mloc GUI framework, all being 
compiled by a 5 Mloc C compiler, add another 1 Mloc if one wants a 3D 
engine, ...).



yes, one can make systems much smaller, but typically at a cost in terms 
of functionality, like one has a small OS kernel that only run on a 
single hardware configuration, a compiler that only supports a single 
target, a web browser which only supports very minimal functionality, ...


absent a clearly different strategy (what the VPRI people seem to be 
pursuing), the above outcome would not be desirable, and it is generally 
much more desirable to have a feature-rich system, even if potentially 
the LOC counts are far beyond the ability of any given person to 
understand (and if the total LOC for the system, is likely, *huge*...).


very coarse estimates:
a Linux installation DVD is 3.5 GB;
assume for a moment that nearly all of this is (likely) compressed 
program-binary code, and assuming that code tends to compress to approx 
1/4 its original size with Deflate;

so, probably 14GB of binary code;
my approx 1Mloc app compiles to about 16.5 MB of DLLs;
assuming everything else holds (and the basic assumptions are correct), 
this would work out to ~ 849 Mloc.


(a more realistic estimate would need to find how much is program code 
vs data files, and maybe find a better estimate of the binary-size to 
source-LOC mapping).
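
(the arithmetic itself, spelled out, with the same guessed inputs:)

# Back-of-envelope from above; every input is a guess stated earlier.
dvd_bytes      = 3.5e9      # assume ~all of the DVD is compressed code
binary_bytes   = dvd_bytes * 4        # Deflate assumed ~4:1, so ~14 GB
bytes_per_mloc = 16.5e6     # observed: ~1 Mloc compiled to ~16.5 MB of DLLs

print(binary_bytes / bytes_per_mloc)  # = ~848.5, i.e. roughly 849 Mloc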



granted, there is probably a lot of redundancy which could likely be 
eliminated, and if one assumes it is a layered tower strategy (a large 
number of rings, with each layer factoring out most of what resides 
above it), then likely a significant reduction would be possible.


the problem is, one is still likely to have an initially fairly large 
wind-up time, so ultimately the resulting system is still likely to 
be pretty damn large (assuming it can do everything a modern OS does, 
and more, it is still likely to be probably well into the Mloc range).



but, I could always be wrong here...



On Mon, Feb 27, 2012 at 7:01 AM, Martin Baldanmartino...@gmail.com  wrote:

David,

Thanks for the link. Indeed, now I see how to run  eval with .l example
files. There are also .k  files, which I don't know how they differ from
those, except that .k files are called with ./eval filename.k while .l
files are called with ./eval repl.l filename.l where filename is the
name of the file. Both kinds seem to be made of Maru code.

I still don't know how to go from here to a Frank-like GUI. I'm reading
other replies which seem to point that way. All tips are welcome ;)

-Martin


On Mon, Feb 27, 2012 at 3:54 AM, David Girledavidgi...@gmail.com  wrote:

Take a look at the page:

http://piumarta.com/software/maru/

it has the original version you have + current.
There is a short readme in the current version with some examples that
will get you going.

David



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Alan Kay
Hi Tony

I like what the BOOM/BLOOM people are doing quite a bit. Their version of 
Datalog + Time is definitely in accord with lots of our prejudices ...

Cheers,

Alan





 From: Tony Garnock-Jones tonygarnockjo...@gmail.com
To: Alan Kay alan.n...@yahoo.com 
Cc: Fundamentals of New Computing fonc@vpri.org 
Sent: Monday, February 27, 2012 1:44 PM
Subject: Re: [fonc] Error trying to compile COLA
 

On 27 February 2012 15:09, Alan Kay alan.n...@yahoo.com wrote:

Yes, I've seen it. As Gerry says, it is an extension of Guy Steele's thesis. 
When I read this, I wished for a more interesting, comprehensive and 
wider-ranging and -scaling example to help think with.

For me, the moment of enlightenment was when I realized that by using a 
lattice at each node, they'd abstracted out the essence of 
iterate-to-fixpoint that's disguised within a number of the examples I 
mentioned in my previous message. (Particularly the frameworks of abstract 
interpretation.)

I'm also really keen to try to relate propagators to Joe Hellerstein's recent 
work on BOOM/BLOOM. That team has been able to implement the Chord DHT in 
fewer than 50 lines of code. The underlying fact-propagation system of their 
language integrates with a Datalog-based reasoner to permit terse, dense 
reasoning about distributed state.
 
One reason to put up with some of the problems of defining things using 
constraints is that if you can organize things well enough, you get super 
clarity and simplicity and power.

Absolutely. I think Hellerstein's Chord example shows that very well. So I 
wish it had been an example in Radul's thesis :-)
 
With regard to objects, my current prejudice is that objects should be able 
to receive messages, but should not have to send to explicit receivers. This 
is a kind of multi-cast I guess (but I think of it more like 
publish/subscribe).


I'm nearing the point where I can write up the results of a chunk of my 
current research. We have been using a pub/sub-based virtual machine for 
actor-like entities, and have found a few cool uses of non-point-to-point 
message passing that simplify implementation of complex protocols like DNS and 
SSH.

Regards,
  Tony
-- 
Tony Garnock-Jones
tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Julian Leviston
Structural optimisation is not compression. Lurk more.

Julian

On 28/02/2012, at 3:38 PM, BGB wrote:

 granted, I remain a little skeptical.
 
 I think there is a bit of a difference though between, say, a log table, and 
 a typical piece of software.
 a log table is, essentially, almost pure redundancy, hence why it can be 
 regenerated on demand.
 
 a typical application is, instead, a big pile of logic code for a wide range 
 of behaviors and for dealing with a wide range of special cases.
 
 
 executable math could very well be functionally equivalent to a highly 
 compressed program, but note in this case that one needs to count both the 
 size of the compressed program, and also the size of the program needed to 
 decompress it (so, the size of the system would also need to account for 
 the compiler and runtime).
 
 although there is a fair amount of redundancy in typical program code (logic 
 that is often repeated,  duplicated effort between programs, ...), 
 eliminating this redundancy would still have a bounded reduction in total 
 size.
 
 increasing abstraction is likely to, again, be ultimately bounded (and, 
 often, abstraction differs primarily in form, rather than in essence, from 
 that of moving more of the system functionality into library code).
 
 
 much like with data compression, the concept commonly known as the Shannon 
 limit may well still apply (itself setting an upper limit to how much is 
 expressible within a given volume of code).

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 10:08 PM, Julian Leviston wrote:

Structural optimisation is not compression. Lurk more.


probably will drop this, as arguing about all this is likely pointless 
and counter-productive.


but, is there any particular reason for why similar rules and 
restrictions wouldn't apply?


(I personally suspect that similar applies to nearly all forms of 
communication, including written and spoken natural language, and a 
claim that some X can be expressed in Y units does seem a fair amount 
like a compression-style claim).



but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
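
the bound is easy to compute for a concrete message: under an i.i.d.
symbol model, the empirical entropy is the minimum average bits per
symbol any lossless coder can achieve:

from collections import Counter
from math import log2

def entropy_bits_per_symbol(text):
    # H = -sum(p * log2(p)) over the observed symbol frequencies
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

msg = "the quick brown fox jumps over the lazy dog"
h = entropy_bits_per_symbol(msg)
print(h, h * len(msg) / 8)   # bits/symbol, and minimum bytes for the message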


Julian

On 28/02/2012, at 3:38 PM, BGB wrote:


granted, I remain a little skeptical.

I think there is a bit of a difference though between, say, a log table, and a 
typical piece of software.
a log table is, essentially, almost pure redundancy, hence why it can be 
regenerated on demand.

a typical application is, instead, a big pile of logic code for a wide range of 
behaviors and for dealing with a wide range of special cases.


executable math could very well be functionally equivalent to a highly compressed program, but 
note in this case that one needs to count both the size of the compressed program, and also the size of the 
program needed to decompress it (so, the size of the system would also need to account for the compiler and 
runtime).

although there is a fair amount of redundancy in typical program code (logic 
that is often repeated,  duplicated effort between programs, ...), eliminating 
this redundancy would still have a bounded reduction in total size.

increasing abstraction is likely to, again, be ultimately bounded (and, often, 
abstraction differs primarily in form, rather than in essence, from that of 
moving more of the system functionality into library code).


much like with data compression, the concept commonly known as the Shannon 
limit may well still apply (itself setting an upper limit to how much is 
expressible within a given volume of code).

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/25/2012 7:48 PM, Julian Leviston wrote:
As I understand it, Frank is an experiment that is an extended version 
of DBJr that sits atop lesserphic, which sits atop gezira which sits 
atop nile, which sits atop maru, all of which utilise ometa and 
the worlds idea.


If you look at the http://vpri.org/html/writings.php page you can see 
a pattern of progression that has emerged to the point where Frank 
exists. From what I understand, maru is the finalisation of what began 
as pepsi and coke. Maru is a simple s-expression language, in the same 
way that pepsi and coke were. In fact, it looks to have the same 
syntax. Nothing is the layer underneath that is essentially a symbolic 
computer - sitting between maru and the actual machine code (sort of 
like an LLVM assembler if I've understood it correctly).




yes, S-Expressions can be nifty.
often, they aren't really something one advertises, or uses as a 
front-end syntax (much like Prototype-OO and delegation: it is a 
powerful model, but people also like their classes).


so, one ends up building something with a C-like syntax and 
Class/Instance OO, even if much of the structure internally is built 
using lists and Prototype-OO. if something is too strange, it may not be 
received well though (like people may see it and be like just what the 
hell is this?). better then if everything is just as could be expected.



in my case, they are often printed out in debugging messages though, as 
a lot of my stuff internally is built using lists (I ended up recently 
devising a specialized network protocol for, among other things, sending 
compressed list-based messages over a TCP socket).


probably not wanting to go too deeply into it, but:
it directly serializes/parses the lists from a bitstream;
a vaguely JPEG-like escape-tag system is used;
messages are Huffman-coded, and make use of both a value MRU/MTF and 
LZ77 compression (many parts of the coding also borrow from Deflate);
currently, I am (in my uses) getting ~60% additional compression vs 
S-Expressions+Deflate, and approximately 97% compression vs plaintext 
(plain Deflate got around 90% for this data).


the above was mostly used for sending scene-graph updates and similar in 
my 3D engine, and is maybe overkill, but whatever (although, luckily, it 
means I can send a lot more data while staying within a reasonable 
bandwidth budget, as my target was 96-128 kbps, and I am currently using 
around 8 kbps, vs closer to the 300-400 kbps needed for plaintext).
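
the Deflate baseline, at least, is trivial to reproduce with zlib (the
specialized coder itself is not sketched here, and the sample message
below is made up):

import zlib

# a made-up, repetitive scene-update message, as such traffic tends to be
msg = b"(scene-update (entity 42 (pos 1.0 2.0 3.0) (vel 0.1 0.0 0.0)))" * 50
packed = zlib.compress(msg, 9)

print(len(msg), len(packed))          # plaintext size vs Deflate size
print(1.0 - len(packed) / len(msg))   # fraction saved; ~0.9+ on text like this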



They've hidden Frank in plain sight. He's a patch-together of all 
their experiments so far... which I'm sure you could do if you took 
the time to understand each of them and had the inclination. They've 
been publishing as much as they could all along. The point, though, is 
you have to understand each part. It's no good if you don't understand it.




possibly, I don't understand a lot of it, but I guess part of it may be 
knowing what to read.
there were a few nifty things to read here and there, but I wasn't 
really seeing the larger whole I guess.



If you know anything about Alan & VPRI's work, you'd know that their 
focus is on getting this stuff in front of as many children as 
possible, because they have so much more ability to connect to the 
heart of a problem than adults. (Nothing to do with age - talking 
about minds, not bodies here). Adults usually get in the way with 
their stuff - their knowledge sits like a kind of a filter, 
denying them the ability to see things clearly and directly connect to 
them unless they've had special training in relaxing that filter. We 
don't know how to be simple and direct any more - not to say that it's 
impossible. We need children to teach us meta-stuff, mostly this 
direct way of experiencing and looking, and this project's main aim 
appears to be to provide them (and us, of course, but not as 
importantly) with the tools to do that. Adults will come secondarily - 
to the degree they can't embrace new stuff ;-). This is what we need 
as an entire populace - to increase our general understanding - to 
reach breakthroughs previously not thought possible, and fast. Rather 
than changing the world, they're providing the seed for children to 
change the world themselves.


there are merits and drawbacks here.

(what follows here is merely my opinion at the moment, as stated at a 
time when I am somewhat in need of going to sleep... ).



granted, yes, children learning stuff is probably good, but the risk is 
also that children (unlike adults) are much more likely to play things 
much more fast and loose regarding the law, and might show little 
respect for existing copyrights and patents, and may risk creating 
liability issues, and maybe bringing lawsuits upon their parents (like, 
some company decides to sue the parents because little Johnny just 
went and infringed on several of their patents, or used some of their IP 
in a personal project, ...).


( and, in my case, I learned 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Julian Leviston
What does any of what you just said have to do with the original question about 
COLA?

Julian

On 26/02/2012, at 9:25 PM, BGB wrote:

 On 2/25/2012 7:48 PM, Julian Leviston wrote:
 
 As I understand it, Frank is an experiment that is an extended version of 
 DBJr that sits atop lesserphic, which sits atop gezira, which sits atop 
 nile, which sits atop maru, all of which utilise ometa and the worlds idea.
 
 If you look at the http://vpri.org/html/writings.php page you can see a 
 pattern of progression that has emerged to the point where Frank exists. 
 From what I understand, maru is the finalisation of what began as pepsi and 
 coke. Maru is a simple s-expression language, in the same way that pepsi and 
 coke were. In fact, it looks to have the same syntax. Nothing is the layer 
 underneath that is essentially a symbolic computer - sitting between maru 
 and the actual machine code (sort of like an LLVM assembler if I've 
 understood it correctly).
 
 
 yes, S-Expressions can be nifty.
 often, they aren't really something one advertises, or uses as a front-end 
 syntax (much like Prototype-OO and delegation: it is a powerful model, but 
 people also like their classes).
 
 so, one ends up building something with a C-like syntax and Class/Instance 
 OO, even if much of the structure internally is built using lists and 
 Prototype-OO. if something is too strange, it may not be received well though 
 (people may see it and be like "just what the hell is this?"). better, 
 then, if everything is just as expected.
 
 
 in my case, they are often printed out in debugging messages though, as a lot 
 of my stuff internally is built using lists (I ended up recently devising a 
 specialized network protocol for, among other things, sending compressed 
 list-based messages over a TCP socket).
 
 probably not wanting to go too deeply into it, but:
 it directly serializes/parses the lists from a bitstream;
 a vaguely JPEG-like escape-tag system is used;
 messages are Huffman-coded, and make use of both a value MRU/MTF and LZ77 
 compression (many parts of the coding also borrow from Deflate);
 currently, I am (in my uses) getting ~60% additional compression vs 
 S-Expressions+Deflate, and approximately 97% compression vs plaintext (plain 
 Deflate got around 90% for this data).
 
 the above was mostly used for sending scene-graph updates and similar in my 
 3D engine, and is maybe overkill, but whatever (although, luckily, it means I 
 can send a lot more data while staying within a reasonable bandwidth budget, 
 as my target was 96-128 kbps, and I am currently using around 8 kbps, vs 
 closer to the 300-400 kbps needed for plaintext).
 
 
 They've hidden Frank in plain sight. He's a patch-together of all their 
 experiments so far... which I'm sure you could do if you took the time to 
 understand each of them and had the inclination. They've been publishing as 
 much as they could all along. The point, though, is you have to understand 
 each part. It's no good if you don't understand it.
 
 
 possibly, I don't understand a lot of it, but I guess part of it may be 
 knowing what to read.
 there were a few nifty things to read here and there, but I wasn't really 
 seeing the larger whole I guess.
 
 
 If you know anything about Alan & VPRI's work, you'd know that their focus 
 is on getting this stuff in front of as many children as possible, 
 because they have so much more ability to connect to the heart of a problem 
 than adults. (Nothing to do with age - talking about minds, not bodies 
 here). Adults usually get in the way with their stuff - their knowledge 
 sits like a kind of a filter, denying them the ability to see things clearly 
 and directly connect to them unless they've had special training in relaxing 
 that filter. We don't know how to be simple and direct any more - not to say 
 that it's impossible. We need children to teach us meta-stuff, mostly this 
 direct way of experiencing and looking, and this project's main aim appears 
 to be to provide them (and us, of course, but not as importantly) with the 
 tools to do that. Adults will come secondarily - to the degree they can't 
 embrace new stuff ;-). This is what we need as an entire populace - to 
 increase our general understanding - to reach breakthroughs previously not 
 thought possible, and fast. Rather than changing the world, they're 
 providing the seed for children to change the world themselves.
 
 there are merits and drawbacks here.
 
 (what follows here is merely my opinion at the moment, as stated at a time 
 when I am somewhat in need of going to sleep... ).
 
 
 granted, yes, children learning stuff is probably good, but the risk is also 
 that children (unlike adults) are much more likely to play things much more 
 fast and loose regarding the law, and might show little respect for 
 existing copyrights and patents, and may risk creating liability issues, and 
 maybe bringing lawsuits upon their parents 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/26/2012 3:53 AM, Julian Leviston wrote:
What does any of what you just said have to do with the original 
question about COLA?




sorry, I am really not good at staying on topic; I was just trying to respond to 
what was there, but it was 2AM...

(hmm, maybe I should have waited until morning? oh well...).

as for getting COLA to compile, I have little idea, which is why I did not 
comment on this.
it seemed to be going off in the direction of motivations, ... which I 
can comment on.


likewise, I can comment on Prototype OO and S-Expressions, since I have 
a lot more experience using these, ... (both, as it just so happens, are things 
that seem to be seen very negatively by average programmers, vs say 
Class/Instance and XML, ...). however, both continue to be useful, so 
they don't just go away (like, Lists/S-Exps are easier to work with than 
XML via DOM or similar, ...).



but, yes, maybe I will go back into lurk mode now...


Julian

On 26/02/2012, at 9:25 PM, BGB wrote:


On 2/25/2012 7:48 PM, Julian Leviston wrote:
As I understand it, Frank is an experiment that is an extended 
version of DBJr that sits atop lesserphic, which sits atop gezira, 
which sits atop nile, which sits atop maru, all of which 
utilise ometa and the worlds idea.


If you look at the http://vpri.org/html/writings.php page you can 
see a pattern of progression that has emerged to the point where 
Frank exists. From what I understand, maru is the finalisation of 
what began as pepsi and coke. Maru is a simple s-expression 
language, in the same way that pepsi and coke were. In fact, it 
looks to have the same syntax. Nothing is the layer underneath that 
is essentially a symbolic computer - sitting between maru and the 
actual machine code (sort of like an LLVM assembler if I've 
understood it correctly).




yes, S-Expressions can be nifty.
often, they aren't really something one advertises, or uses as a 
front-end syntax (much like Prototype-OO and delegation: it is a 
powerful model, but people also like their classes).


so, one ends up building something with a C-like syntax and 
Class/Instance OO, even if much of the structure internally is built 
using lists and Prototype-OO. if something is too strange, it may not 
be received well though (people may see it and be like "just 
what the hell is this?"). better, then, if everything is just as 
expected.



in my case, they are often printed out in debugging messages though, 
as a lot of my stuff internally is built using lists (I ended up 
recently devising a specialized network protocol for, among other 
things, sending compressed list-based messages over a TCP socket).


probably not wanting to go too deeply into it, but:
it directly serializes/parses the lists from a bitstream;
a vaguely JPEG-like escape-tag system is used;
messages are Huffman-coded, and make use of both a value MRU/MTF and 
LZ77 compression (many parts of the coding also borrow from Deflate);
currently, I am (in my uses) getting ~60% additional compression vs 
S-Expressions+Deflate, and approximately 97% compression vs plaintext 
(plain Deflate got around 90% for this data).


the above was mostly used for sending scene-graph updates and similar 
in my 3D engine, and is maybe overkill, but whatever (although, 
luckily, it means I can send a lot more data while staying within a 
reasonable bandwidth budget, as my target was 96-128 kbps, and I am 
currently using around 8 kbps, vs closer to the 300-400 kbps needed 
for plaintext).



They've hidden Frank in plain sight. He's a patch-together of all 
their experiments so far... which I'm sure you could do if you took 
the time to understand each of them and had the inclination. They've 
been publishing as much as they could all along. The point, though, 
is you have to understand each part. It's no good if you don't 
understand it.




possibly, I don't understand a lot of it, but I guess part of it may 
be knowing what to read.
there were a few nifty things to read here and there, but I wasn't 
really seeing the larger whole I guess.



If you know anything about Alan & VPRI's work, you'd know that their 
focus is on getting this stuff in front of as many children as 
possible, because they have so much more ability to connect to the 
heart of a problem than adults. (Nothing to do with age - talking 
about minds, not bodies here). Adults usually get in the way with 
their stuff - their knowledge sits like a kind of a filter, 
denying them the ability to see things clearly and directly connect 
to them unless they've had special training in relaxing that filter. 
We don't know how to be simple and direct any more - not to say that 
it's impossible. We need children to teach us meta-stuff, mostly 
this direct way of experiencing and looking, and this project's main 
aim appears to be to provide them (and us, of course, but not as 
importantly) with the tools to do that. Adults will come secondarily 
- to the degree they can't embrace new stuff 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Martin Baldan
Guys, I find these off-topic comments (as in not strictly about my idst
compilation problem) really interesting. Maybe I should start a new
thread? Something like «how can a newbie start playing with this
technology?». Thanks!
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Martin Baldan
Julian,

Thanks, now I have a much better picture of the overall situation, although
I still have a lot of reading to do. I already had read a couple of Frank
progress reports, and some stuff about worlds, in the publications link you
mention. So I thought, "this sounds great, how can I try this?" Then I went
to the wiki, and there was a section called "Fundamental new computing
technologies", so I said "this is the thing!". But no, the real thing was,
as you said, hidden in plain sight, under inconspicuous titles such as
"Other prototypes and projects related to our work" and "experiment". I
wonder, is that some kind of prank for the uninitiated? hehe. By the way,
I've played a little with Squeak, Croquet and other great projects by Alan
and the other wonderful Smalltalk people, so I did have a sense of their
focus on children. I must confess I was a bit annoyed with what seemed to
me like Jedi elitism (as in "He is too old. Yes, too old to begin the
training.") but hey, their project, their code, their rules.

So, to get back on topic,

I've downloaded Maru. The contents are:

boot-eval.c  boot.l  emit.l  eval.l  Makefile

So, the .l files are

So this is the file extension for Maru's implementation language (does it
have a name?).

Sure enough, the very first line of eval.l reads:

;;; -*- coke -*-

This made me smile. Well, actually it was a mad laughter.

It compiles beautifully. Yay!

Now there are some .s files. They look like assembler code. I thought it
was Nothing code, but the Maru webpage explains it's just ia-32. Oh, well.
I don't know yet where Nothing enters the picture.

So, this is compiled to .o files and linked to build the eval
executable, which can take .l files and make a new eval
 executable, and so on. So far so good.

But what else can I do with it? Should I use it to run the examples at 
http://tinlizzie.org/dbjr/ ? All I see are files with a .lbox file
extension. What are those? Apparently, there are no READMEs. Could you
please give me an example of how to try one of those experiments?

Thanks for your tips and patience ;)




On Sun, Feb 26, 2012 at 3:48 AM, Julian Leviston jul...@leviston.net wrote:

 As I understand it, Frank is an experiment that is an extended version of
 DBJr that sits atop lesserphic, which sits atop gezira, which sits atop
 nile, which sits atop maru, all of which utilise ometa and the
 worlds idea.

 If you look at the http://vpri.org/html/writings.php page you can see a
 pattern of progression that has emerged to the point where Frank exists.
 From what I understand, maru is the finalisation of what began as pepsi and
 coke. Maru is a simple s-expression language, in the same way that pepsi
 and coke were. In fact, it looks to have the same syntax. Nothing is the
 layer underneath that is essentially a symbolic computer - sitting between
 maru and the actual machine code (sort of like an LLVM assembler if I've
 understood it correctly).

 They've hidden Frank in plain sight. He's a patch-together of all their
 experiments so far... which I'm sure you could do if you took the time to
 understand each of them and had the inclination. They've been publishing as
 much as they could all along. The point, though, is you have to understand
 each part. It's no good if you don't understand it.

 If you know anything about Alan & VPRI's work, you'd know that their focus
 is on getting this stuff in front of as many children as possible,
 because they have so much more ability to connect to the heart of a problem
 than adults. (Nothing to do with age - talking about minds, not bodies
 here). Adults usually get in the way with their stuff - their knowledge
 sits like a kind of a filter, denying them the ability to see things
 clearly and directly connect to them unless they've had special training in
 relaxing that filter. We don't know how to be simple and direct any more -
 not to say that it's impossible. We need children to teach us meta-stuff,
 mostly this direct way of experiencing and looking, and this project's main
 aim appears to be to provide them (and us, of course, but not as
 importantly) with the tools to do that. Adults will come secondarily - to
 the degree they can't embrace new stuff ;-). This is what we need as an
 entire populace - to increase our general understanding - to reach
 breakthroughs previously not thought possible, and fast. Rather than
 changing the world, they're providing the seed for children to change the
 world themselves.

 This is only as I understand it from my observation. Don't take it as
 gospel or even correct, but maybe you could use it to investigate the parts
 of frank a little more and with in-depth openness :) The entire project is
 an experiment... and that's why they're not coming out and saying "hey guys,
 this is the product of our work" - it's not a linear building process, but
 an intensively creative process, and most of that happens within oneself
 before any results are seen (rather like boiling a 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread David Girle
Take a look at the page:

http://piumarta.com/software/maru/

It has the original version you have, plus the current one.
There is a short readme in the current version with some examples that
will get you going.

David

On Sun, Feb 26, 2012 at 6:14 PM, Martin Baldan martino...@gmail.com wrote:
 Julian,

 Thanks, now I have a much better picture of the overall situation, although
 I still have a lot of reading to do. I already had read a couple of Frank
 progress reports, and some stuff about worlds, in the publications link you
 mention. So I thought, "this sounds great, how can I try this?" Then I went to
 the wiki, and there was a section called "Fundamental new computing
 technologies", so I said "this is the thing!". But no, the real thing was,
 as you said, hidden in plain sight, under inconspicuous titles such as
 "Other prototypes and projects related to our work" and "experiment". I
 wonder, is that some kind of prank for the uninitiated? hehe. By the way,
 I've played a little with Squeak, Croquet and other great projects by Alan
 and the other wonderful Smalltalk people, so I did have a sense of their
 focus on children. I must confess I was a bit annoyed with what seemed to me
 like Jedi elitism (as in "He is too old. Yes, too old to begin the
 training.") but hey, their project, their code, their rules.

 So, to get back on topic,

 I've downloaded Maru. The contents are:

 boot-eval.c  boot.l  emit.l  eval.l  Makefile

 So, the .l files are

 So this is the file extension for Maru's implementation language (does it
 have a name?).

 Sure enough, the very first line of eval.l reads:

 ;;; -*- coke -*-

 This made me smile. Well, actually it was a mad laughter.

 It compiles beautifully. Yay!

 Now there are some .s files. They look like assembler code. I thought it
 was Nothing code, but the Maru webpage explains it's just ia-32. Oh, well. I
 don't know yet where Nothing enters the picture.

 So, this is compiled to .o files and linked to build the eval
 executable, which can take .l files and make a new eval
  executable, and so on. So far so good.

 But what else can I do with it? Should I use it to run the examples at
 http://tinlizzie.org/dbjr/ ? All I see are files with a .lbox file
 extension. What are those? Apparently, there are no READMEs. Could you
 please give me an example of how to try one of those experiments?

 Thanks for your tips and patience ;)




 On Sun, Feb 26, 2012 at 3:48 AM, Julian Leviston jul...@leviston.net
 wrote:

 As I understand it, Frank is an experiment that is an extended version of
 DBJr that sits atop lesserphic, which sits atop gezira, which sits atop
 nile, which sits atop maru, all of which utilise ometa and the worlds idea.

 If you look at the http://vpri.org/html/writings.php page you can see a
 pattern of progression that has emerged to the point where Frank exists.
 From what I understand, maru is the finalisation of what began as pepsi and
 coke. Maru is a simple s-expression language, in the same way that pepsi and
 coke were. In fact, it looks to have the same syntax. Nothing is the layer
 underneath that is essentially a symbolic computer - sitting between maru
 and the actual machine code (sort of like an LLVM assembler if I've
 understood it correctly).

 They've hidden Frank in plain sight. He's a patch-together of all their
 experiments so far... which I'm sure you could do if you took the time to
 understand each of them and had the inclination. They've been publishing as
 much as they could all along. The point, though, is you have to understand
 each part. It's no good if you don't understand it.

 If you know anything about Alan & VPRI's work, you'd know that their focus
 is on getting this stuff in front of as many children as possible,
 because they have so much more ability to connect to the heart of a problem
 than adults. (Nothing to do with age - talking about minds, not bodies
 here). Adults usually get in the way with their stuff - their knowledge
 sits like a kind of a filter, denying them the ability to see things clearly
 and directly connect to them unless they've had special training in relaxing
 that filter. We don't know how to be simple and direct any more - not to say
 that it's impossible. We need children to teach us meta-stuff, mostly this
 direct way of experiencing and looking, and this project's main aim appears
 to be to provide them (and us, of course, but not as importantly) with the
 tools to do that. Adults will come secondarily - to the degree they can't
 embrace new stuff ;-). This is what we need as an entire populace - to
 increase our general understanding - to reach breakthroughs previously not
 thought possible, and fast. Rather than changing the world, they're
 providing the seed for children to change the world themselves.

 This is only as I understand it from my observation. Don't take it as
 gospel or even correct, but maybe you could use it to investigate the parts
 of frank a little more and with 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Josh Grams
On 2012-02-27 02:14AM, Martin Baldan wrote:
But what else can I do with it? Should I use it to run the examples at
http://tinlizzie.org/dbjr/ ? All I see are files with a .lbox file
extension. What are those? Apparently, there are no READMEs. Could you
please give me an example of how to try one of those experiments?

DBJr seems to be a Squeak thing.  Each of those .lbox directories has a
SISS file which seems to be an S-expression serialization of Smalltalk
objects.  Sounds like probably what you need is the stuff under "Text
Field Spec for LObjects" on the VPRI wiki page.
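
(As a rough illustration of what reading such a serialization
involves -- a generic sketch, not the actual SISS format, with
made-up example data:)

[code]
/* Generic S-expression reader sketch (NOT the SISS format):
 * atoms and parenthesized lists, printed as an indented tree. */
#include <stdio.h>
#include <ctype.h>

static const char *p;  /* cursor into the input text */

static void skip(void) { while (isspace((unsigned char)*p)) p++; }

static void parse(int depth) {
    skip();
    if (*p == '(') {                      /* list: children until ')' */
        p++;
        printf("%*slist:\n", depth * 2, "");
        for (skip(); *p && *p != ')'; skip())
            parse(depth + 1);
        if (*p == ')') p++;
    } else {                              /* atom: run of non-delimiters */
        const char *start = p;
        while (*p && !isspace((unsigned char)*p) && *p != '(' && *p != ')')
            p++;
        printf("%*satom: %.*s\n", depth * 2, "", (int)(p - start), start);
    }
}

int main(void) {
    p = "(morph (position 10 20) (color red))";  /* invented data */
    parse(0);
    return 0;
}
[/code]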

Not that I know *anything* about this whatsoever...

--Josh
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Julian Leviston
I'm afraid that I am in no way a teacher of this. I'm in no way professing to 
know what I'm talking about - I've simply given you my observations. Perhaps we 
can help each other, because I'm intensely interested, too... I want to 
understand this stuff because it is chock full of intensely powerful ideas.

The elitism isn't true... I've misrepresented what I was meaning to say - I 
simply meant that people who aren't fascinated enough to understand won't have 
the drive to understand it... until it gets to a kind of point where enough 
people care to explain it to the people who take longer to understand... This 
makes sense. It's how it has always been. Sorry for making it sound elitist. 
It's not, I promise you. When your time is limited, though (as the VPRI guys' 
time is), one needs to focus on truly expounding it to as many people as you 
can, so one can teach more teachers first... one teaches the people who can 
understand the quickest first, and then they can propagate and so on... I hope 
this is clear.

I don't think it was a prank. It's not really hidden at all. If you pay 
attention, all the components of Frank are there... like I said. It's obviously 
missing certain things like Nothing, and other optimisations, but for the most 
part, all the tech is present.

My major stumbling block at the moment is understanding OMeta fully. This is 
possibly the most amazing piece of work I've seen in a long, long time, and 
there's no easy explanation of it, and no really simple explanation of the 
syntax, either. There are the papers, and source code and the sandboxes, but 
I'm still trying to understand how to use it. It's kind of huge. I think 
perhaps I need to get a grounding in PEGs before I start on OMeta because there 
seems to be a lot of assumed knowledge there. Mostly I'm having trouble with 
the absolute, complete basics.
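
From what I've gathered so far, the core of PEGs is fairly small: each 
rule is a function of an input position that either consumes input and 
succeeds or fails, repetition is greedy, and choice is ordered, 
backtracking to a saved position when an alternative fails. A 
hand-written sketch of that idea (a toy grammar I made up -- nothing 
to do with OMeta's actual implementation):

[code]
/* PEG-style matcher for an assumed toy grammar:
 *   sum <- num ('+' num)* ;  num <- [0-9]+
 * Each rule returns the new input position on success, or -1 on
 * failure; the caller restores its saved position on failure. */
#include <stdio.h>
#include <ctype.h>

static const char *in;

static int num(int p) {                  /* num <- [0-9]+ */
    int q = p;
    while (isdigit((unsigned char)in[q])) q++;
    return q > p ? q : -1;               /* must consume >= 1 digit */
}

static int sum(int p) {                  /* sum <- num ('+' num)* */
    p = num(p);
    if (p < 0) return -1;
    for (;;) {
        int save = p;                    /* save position: backtrack point */
        if (in[p] != '+') return save;
        p = num(p + 1);
        if (p < 0) return save;          /* group failed: restore */
    }
}

int main(void) {
    in = "1+20+3";
    printf("matched %d chars of \"%s\"\n", sum(0), in); /* matched 6 */
    return 0;
}
[/code]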

Anyway I digress... have you had a look at this file?:

http://piumarta.com/software/maru/maru-2.1/test-pepsi.l

Just read the whole thing - I found it fairly interesting :) He's built pepsi 
on maru there... that's pretty fascinating, right? Built a micro Smalltalk on 
top of the S-expression language... and then does a Fast Fourier Transform test 
using it...

I'm not really sure how it relates to this, tho: 

I actually have no idea about how to run one of the experiments you're talking 
about - the .lbox files... from what I've read about STEPS, though, I think the 
.lbox files are Frank documents... and I think Frank is kinda DBJr... at least, 
if you go to this page and look at the bottom, pay some careful attention to 
the video that appears there demonstrating some of the patchwork doll that is 
frank (if you haven't seen it already)...

http://www.vpri.org/vp_wiki/index.php/Gezira
http://tinlizzie.org/~bert/Gezira.mp4

In the tinlizzie.org/updates/exploratory/packages you'll find Monticello 
packages that contain some of the experiments, I'm fairly sure, one of which is: 
(yep, you guessed it)

FrankVersion-yo.16.mcz

However, having not tried this, I'm not sure of what it may be ;-) (if I were 
you, I'd take a squizz around those packages)

You probably want the LObjects stuff and the doc editor, I'm guessing... :-S

Fairly patchy at best, but that's the point - it's called Frank, as in 
Frankenstein's monster - as in... it's a patchy mess, but it's alive... this 
stuff is a fair way off having a full stack that operates beautifully... but 
it's a start... (it seems).

Julian




On 27/02/2012, at 12:14 PM, Martin Baldan wrote:

 Julian,
 
 Thanks, now I have a much better picture of the overall situation, although I 
 still have a lot of reading to do. I already had read a couple of Frank 
 progress reports, and some stuff about worlds, in the publications link you 
 mention. So I thought, this sounds great, how can I try this? Then I went to 
 the wiki, and there was a section called Fundamental new computing 
 technologies, so I said this is the thing!.  But no, the real thing was, 
 as you said, hidden in plain sight, under the unconspicuous titles such as 
 Other prototypes and projects related to our work and experiment. I 
 wonder, is that some kind of prank for the uninitiated? hehe. By the way, 
 I've played a little with Squeak, Croquet and other great projects by Alan 
 and the other wonderful Smalltalk people, so I did have a sense of their 
 focus on children. I must confess I was a bit annoyed with what seemed to me 
 like Jedi elitism (as in He is too old. Yes, too old to begin the training. 
 ) but hey, their project, their code, their rules.
 
 So, to get back on topic,
 
 I've downloaded Maru, The contents are:
 
 boot-eval.c  boot.l  emit.l  eval.l  Makefile
 
 So, the .l files are  
 
 So this is the file extension for Maru's implementation language (does it 
 have a name?).
 
 Sure enough, the very first line of eval.l reads:
 
 ;;; -*- coke -*-
 
 This made me smile. Well, actually it was a mad laughter.
 
 It compiles beautifully. Yay!

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/26/2012 8:23 PM, Julian Leviston wrote:
I'm afraid that I am in no way a teacher of this. I'm in no way 
professing to know what I'm talking about - I've simply given you my 
observations. Perhaps we can help each other, because I'm intensely 
interested, too... I want to understand this stuff because it is chock 
full of intensely powerful ideas.




yep, generally agreed.

I generally look for interesting or useful ideas, but admittedly have a 
harder time understanding a lot of what is going on or being talked 
about here (despite being, I think, generally fairly knowledgeable 
about most things programming-related).


there may be a domain mismatch or something though.


admittedly, I have not generally gotten as far as being really able to 
understand Smalltalk code either, nor for that matter languages too much 
different from vaguely C-like Procedural/OO languages, except maybe 
ASM, which I personally found not particularly difficult to 
learn/understand or read/write; the main drawbacks of ASM are its 
verbosity and portability issues.


granted, this may be partly a matter of familiarity or similar, since I 
encountered both C and ASM (along with BASIC) at fairly young age (and 
actually partly came to understand C originally to some degree by 
looking at the compiler's ASM output, getting a feel for how the 
constructs mapped to ASM operations, ...).



The elitism isn't true... I've misrepresented what I was meaning to 
say - I simply meant that people who aren't fascinated enough to 
understand won't have the drive to understand it... until it gets to a 
kind of point where enough people care to explain it to the people who 
take longer to understand... This makes sense. It's how it has always 
been. Sorry for making it sound elitist. It's not, I promise you. When 
your time is limited, though (as the VPRI guys' time is), one needs to 
focus on truly expounding it to as many people as you can, so one can 
teach more teachers first... one teaches the people who can understand 
the quickest first, and then they can propagate and so on... I hope 
this is clear.




similarly, I was not meaning to imply that children having knowledge is 
a bad thing, but sadly, it seems to run contrary to common cultural 
expectations.


granted, it is possibly the case that some aspects of culture are broken 
in some ways, namely, that children are kept in the dark, and there is 
this notion that everyone should be some sort of unthinking and passive 
consumer - except, of course, for the content producers, which would 
generally include both the media industry (TV, movies, music, ...) as a 
whole and to a lesser extent the software industry, with a sometimes 
questionable set of Intellectual Property laws in an attempt to uphold 
the status quo (not that IP is necessarily bad, but it could be better).


I guess this is partly why things like FOSS exist.
but, FOSS isn't necessarily entirely perfect either.


but, yes, both giving knowledge and creating a kind of safe haven seem 
like reasonable goals, where one can be free to tinker around with 
things with less risk from some overzealous legal department somewhere.


also nice would be if people were less likely to accuse all of one's 
efforts of being useless, but sadly, this probably isn't going to happen 
either.



this is not to imply that I personally necessarily have much to offer; 
beating against a wall may make one fairly well aware of just how far 
there is left to go, and relevance is at times a rather difficult goal 
to reach.


admittedly, I am maybe a bit dense as well. I have never really been 
very good with abstract concepts (nor am I particularly intelligent 
in the strict sense). but, I am no one besides myself (and have no one 
besides myself to fall back on), so I have to make do (and hopefully 
try to avoid annoying people too much, and causing them to despise me, ...).


like, "the only way out is through" and similar.


I don't think it was a prank. It's not really hidden at all. If you 
pay attention, all the components of Frank are there... like I said. 
It's obviously missing certain things like Nothing, and other 
optimisations, but for the most part, all the tech is present.


sorry for asking, but is there any sort of dense-people-friendly 
version, like maybe a description on the Wiki or something?...


like, so people can get a better idea of what things are about and how 
they all work and fit together?... (like, in the top-down description 
kind of way?).



My major stumbling block at the moment is understanding OMeta fully. 
This is possibly the most amazing piece of work I've seen in a long, 
long time, and there's no easy explanation of it, and no really simple 
explanation of the syntax, either. There are the papers, and source 
code and the sandboxes, but I'm still trying to understand how to use 
it. It's kind of huge. I think perhaps I need to get a grounding in 
PEGs before I start on OMeta because 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Julian Leviston
Hi,

Comments inline...

On 27/02/2012, at 5:33 PM, BGB wrote:

 
 I don't think it was a prank. It's not really hidden at all. If you pay 
 attention, all the components of Frank are there... like I said. It's 
 obviously missing certain things like Nothing, and other optimisations, but 
 for the most part, all the tech is present.
 
 sorry for asking, but is there any sort of dense-people-friendly version, 
 like maybe a description on the Wiki or something?...
 
 like, so people can get a better idea of what things are about and how they 
 all work and fit together?... (like, in the top-down description kind of 
 way?).
 

I don't think this is for people who aren't prepared to roll up their sleeves 
and try things out. For a start, learn Smalltalk. It's not hard. Go check out 
Squeak. There are lots of resources to learn Smalltalk.


 
 My major stumbling block at the moment is understanding OMeta fully. This is 
 possibly the most amazing piece of work I've seen in a long, long time, and 
 there's no easy explanation of it, and no really simple explanation of the 
 syntax, either. There are the papers, and source code and the sandboxes, but 
 I'm still trying to understand how to use it. It's kind of huge. I think 
 perhaps I need to get a grounding in PEGs before I start on OMeta because 
 there seems to be a lot of assumed knowledge there. Mostly I'm having 
 trouble with the absolute, complete basics.
 
 Anyway I digress... have you had a look at this file?:
 
 http://piumarta.com/software/maru/maru-2.1/test-pepsi.l
 
 Just read the whole thing - I found it fairly interesting :) He's built 
 pepsi on maru there... that's pretty fascinating, right? Built a micro 
 Smalltalk on top of the S-expression language... and then does a Fast 
 Fourier Transform test using it...
 
 
 my case: looked some, but not entirely sure how it works though.
 

You could do what I've done, and read the papers and then re-read them and 
re-read them and re-read them... and research all references you find (the 
whole site is totally full of references to the entirety of programming 
history). I personally think knowing Lisp and Smalltalk, some assembler, C, 
Self, JavaScript and other things is going to be incredibly helpful. Also, math is 
the most helpful! :)

Julian

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/26/2012 11:43 PM, Julian Leviston wrote:

Hi,

Comments inline...

On 27/02/2012, at 5:33 PM, BGB wrote:



I don't think it was a prank. It's not really hidden at all. If you 
pay attention, all the components of Frank are there... like I said. 
It's obviously missing certain things like Nothing, and other 
optimisations, but for the most part, all the tech is present.


sorry for asking, but is there any sort of dense-people-friendly 
version, like maybe a description on the Wiki or something?...


like, so people can get a better idea of what things are about and 
how they all work and fit together?... (like, in the top-down 
description kind of way?).




I don't think this is for people who aren't prepared to roll up their 
sleeves and try things out. For a start, learn Smalltalk. It's not 
hard. Go check out Squeak. There are lots of resources to learn Smalltalk.




could be.

I messed with Squeak some before, but at the time got 
confused/discouraged and gave up after a little while.







My major stumbling block at the moment is understanding OMeta fully. 
This is possibly the most amazing piece of work I've seen in a long, 
long time, and there's no easy explanation of it, and no really 
simple explanation of the syntax, either. There are the papers, and 
source code and the sandboxes, but I'm still trying to understand 
how to use it. It's kind of huge. I think perhaps I need to get a 
grounding in PEGs before I start on OMeta because there seems to be 
a lot of assumed knowledge there. Mostly I'm having trouble with the 
absolute, complete basics.


Anyway I digress... have you had a look at this file?:

http://piumarta.com/software/maru/maru-2.1/test-pepsi.l

Just read the whole thing - I found it fairly interesting :) He's 
built pepsi on maru there... that's pretty fascinating, right? Built 
a micro Smalltalk on top of the S-expression language... and then 
does a Fast Fourier Transform test using it...




my case: looked some, but not entirely sure how it works though.



You could do what I've done, and read the papers and then re-read them 
and re-read them and re-read them... and research all references you 
find (the whole site is totally full of references to the entirety of 
programming history). I personally think knowing Lisp and Smalltalk, 
some assembler, C, Self, JavaScript and other things is going to be 
incredibly helpful. Also, math is the most helpful! :)




ASM and C are fairly well known to me (I currently have a little over 1 
Mloc of C code to my name, so I can probably fairly safely say I know 
C...).



I used Scheme before, but eventually gave up on it, mostly due to:
problems with the Scheme community (seemed to be very fragmented and 
filled with elitism);
I generally never really could get over the look of S-Expression 
syntax (and also the issue that no one was happy unless the code had 
Emacs formatting, but I could never really get over Emacs either);
I much preferred C-style control flow (which makes arbitrary 
continues/breaks easier), whereas changing flow through a loop in Scheme 
often meant seriously reorganizing it;

...

Scheme remains a notable technical influence though (and exposure to 
Scheme probably had a fairly notable impact on much of my subsequent 
coding practices).



JavaScript I know acceptably, given my own scripting language is partly 
based on it.
however, I have fairly little experience using it in its original 
context: for fiddling around with web-page layouts (and never really got 
into the whole AJAX thing).


I messed around with Self once before, but couldn't get much 
interesting from it; I did find the language spec and some papers on 
the VM fairly interesting, though, so I scavenged a bunch of ideas from there.


the main things I remember:
graph of objects, each object being a bag of slots, with the ability 
to delegate to any number of other objects, and the ability to handle 
cyclic delegation loops;

using a big hash-table for lookups and similar;
...

many of those ideas were incorporated into my own language/VM (with 
tweaks and extensions: such as my VM has lexical scoping, and I later 
added delegation support to the lexical environment as well, ...). (I 
chose free-form / arbitrary delegation rather than the single-delegation 
of JavaScript, due to personally finding it more useful and interesting).


I had noted, however, that my model differs in a few significant ways 
from the description of Lieberman Prototypes on the site (my clone 
operation directly copies objects, rather than creating a new empty 
object with copy-on-write style semantics).
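
(a rough sketch of the lookup side of such a model -- not my actual VM 
code, and using a linear slot list where the real thing uses hash 
tables -- the visit mark is what makes cyclic delegation safe:)

[code]
/* Prototype objects as bags of slots with free-form delegation.
 * Lookup is a depth-first search over delegates; a visit mark
 * makes cyclic delegation loops terminate. */
#include <stdio.h>
#include <string.h>

#define MAXSLOTS 8
#define MAXDELEG 4

typedef struct Object Object;
struct Object {
    const char *names[MAXSLOTS];   /* slot names */
    int         values[MAXSLOTS];  /* slot values (ints, for brevity) */
    int         nslots;
    Object     *delegates[MAXDELEG];
    int         ndeleg;
    int         mark;              /* visit flag for cycle handling */
};

static int lookup(Object *o, const char *name, int *out) {
    if (o->mark) return 0;         /* already visited: break the cycle */
    o->mark = 1;
    for (int i = 0; i < o->nslots; i++)
        if (!strcmp(o->names[i], name)) {
            *out = o->values[i]; o->mark = 0; return 1;
        }
    for (int i = 0; i < o->ndeleg; i++)
        if (lookup(o->delegates[i], name, out)) {
            o->mark = 0; return 1;
        }
    o->mark = 0;
    return 0;
}

int main(void) {
    Object a = { {"x"}, {42}, 1, {0}, 0, 0 };
    Object b = { {0}, {0}, 0, {&a}, 1, 0 };
    a.delegates[0] = &b; a.ndeleg = 1;  /* deliberate delegation cycle */
    int v;
    if (lookup(&b, "x", &v)) printf("x = %d\n", v);    /* x = 42 */
    if (!lookup(&b, "y", &v)) printf("y not found\n"); /* terminates */
    return 0;
}
[/code]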



the current beast looks like a C/JS/AS mix on the surface, 
but internally may have a bit more in common with Scheme and Self than 
it does with other C-like languages (much past the syntax, the 
similarities start to fall away).


but, yet, I can't really understand Smalltalk code...


granted, math is a big weak area of mine:
apparently, my effective 

Re: [fonc] Error trying to compile COLA

2012-02-25 Thread Martin Baldan
Michael,

Thanks for your reply. I'm looking into it.

Best,

 Martin
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-25 Thread Julian Leviston
Isn't the cola basically irrelevant now? aren't they using maru instead? (or 
rather isn't maru the renamed version of coke?)

Julian


On 26/02/2012, at 2:52 AM, Martin Baldan wrote:

 Michael,
 
 Thanks for your reply. I'm looking into it.
 
 Best,
 
  Martin
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-25 Thread Martin Baldan
Is that the case? I'm a bit confused. I've read the fascinating reports
about Frank, and I was wondering what's the closest thing one can download
and run right now. Could you guys please clear it up for me?

Best,

Martin

On Sat, Feb 25, 2012 at 5:23 PM, Julian Leviston jul...@leviston.net wrote:

 Isn't the cola basically irrelevant now? aren't they using maru instead?
 (or rather isn't maru the renamed version of coke?)

 Julian


 On 26/02/2012, at 2:52 AM, Martin Baldan wrote:

  Michael,
 
  Thanks for your reply. I'm looking into it.
 
  Best,
 
   Martin
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-25 Thread Julian Leviston
As I understand it, Frank is an experiment that is an extended version of DBJr 
that sits atop lesserphic, which sits atop gezira, which sits atop nile, which 
sits atop maru, all of which utilise ometa and the worlds idea.

If you look at the http://vpri.org/html/writings.php page you can see a pattern 
of progression that has emerged to the point where Frank exists. From what I 
understand, maru is the finalisation of what began as pepsi and coke. Maru is a 
simple s-expression language, in the same way that pepsi and coke were. In 
fact, it looks to have the same syntax. Nothing is the layer underneath that is 
essentially a symbolic computer - sitting between maru and the actual machine 
code (sort of like an LLVM assembler if I've understood it correctly).

They've hidden Frank in plain sight. He's a patch-together of all their 
experiments so far... which I'm sure you could do if you took the time to 
understand each of them and had the inclination. They've been publishing as 
much as they could all along. The point, though, is you have to understand each 
part. It's no good if you don't understand it.

If you know anything about Alan & VPRI's work, you'd know that their focus is 
on getting this stuff in front of as many children as possible, because 
they have so much more ability to connect to the heart of a problem than 
adults. (Nothing to do with age - talking about minds, not bodies here). Adults 
usually get in the way with their stuff - their knowledge sits like a kind 
of a filter, denying them the ability to see things clearly and directly 
connect to them unless they've had special training in relaxing that filter. We 
don't know how to be simple and direct any more - not to say that it's 
impossible. We need children to teach us meta-stuff, mostly this direct way of 
experiencing and looking, and this project's main aim appears to be to provide 
them (and us, of course, but not as importantly) with the tools to do that. 
Adults will come secondarily - to the degree they can't embrace new stuff ;-). 
This is what we need as an entire populace - to increase our general 
understanding - to reach breakthroughs previously not thought possible, and 
fast. Rather than changing the world, they're providing the seed for children 
to change the world themselves.

This is only as I understand it from my observation. Don't take it as gospel or 
even correct, but maybe you could use it to investigate the parts of frank a 
little more and with in-depth openness :) The entire project is an 
experiment... and that's why they're not coming out and saying "hey guys, this 
is the product of our work" - it's not a linear building process, but an 
intensively creative process, and most of that happens within oneself before 
any results are seen (rather like boiling a kettle).

http://www.vpri.org/vp_wiki/index.php/Main_Page

On the bottom of that page, you'll see a link to the tinlizzie site that 
references "experiment" and the URL has "dbjr" in it... as far as I understand 
it, this is as much Frank as we've been shown.

http://tinlizzie.org/dbjr/

:)
Julian

On 26/02/2012, at 9:41 AM, Martin Baldan wrote:

 Is that the case? I'm a bit confused. I've read the fascinating reports about 
 Frank, and I was wondering what's the closest thing one can download and run 
 right now. Could you guys please clear it up for me?
 
 Best,
 
 Martin
 
 On Sat, Feb 25, 2012 at 5:23 PM, Julian Leviston jul...@leviston.net wrote:
 Isn't the cola basically irrelevant now? aren't they using maru instead? (or 
 rather isn't maru the renamed version of coke?)
 
 Julian
 
 
 On 26/02/2012, at 2:52 AM, Martin Baldan wrote:
 
  Michael,
 
  Thanks for your reply. I'm looking into it.
 
  Best,
 
   Martin
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-23 Thread Michael Haupt
Martin,

On 23.02.2012, at 11:32, Martin Baldan wrote:
 Here's where I think the compilation went wrong:
 
 [code]
 
 make[2]: Entering directory 
 `/home/martin/Escritorio/otras_cosas/desastre/programming/vpri/cola/idst/stable/fonc-stable/object/id'
 /bin/sh -ec 'mkdir ../stage1/./include; \
 mkdir ../stage1/./include/id;  cp -pr ../id/*.h 
 ../stage1/./include/id; \
 cp -pr ../gc6.7/include ../stage1/./include/gc;  find 
 ../stage1/./include/gc -name .svn -exec rm -rf {} \;'
 find: `../stage1/./include/gc/.svn': No such file or directory
 find: `../stage1/./include/gc/private/.svn': No such file or directory
 make[2]: [../stage1/./include] Error 1 (ignored)
 [/code]

no, that's ok; it even says "ignored".

 [code]
 
 make[2]: Leaving directory 
 `/home/martin/Escritorio/otras_cosas/desastre/programming/vpri/cola/idst/stable/fonc-stable/object/id'
 st80
 make[2]: Entering directory 
 `/home/martin/Escritorio/otras_cosas/desastre/programming/vpri/cola/idst/stable/fonc-stable/object/st80'
 ../boot/idc -B../boot/ -O  -k -c _object.st -o ../stage1/_object.o
 
 import: st80.so: No such file or directory

It attempts to find the st80.so that has apparently been successfully created 
(looking at the output you posted earlier). I vaguely recall something like 
this happened to me as well; try creating a symbolic link (ln -s) to the st80.so 
that was generated earlier. I don't recall from which directory. Try a bit. :-)

Best,

Michael

-- 


Dr. Michael Haupt | Principal Member of Technical Staff
Phone: +49 331 200 7277 | Fax: +49 331 200 7561
Oracle Labs 
Oracle Deutschland B.V. & Co. KG, Schiffbauergasse 14 | 14467 Potsdam, Germany
Oracle is committed to developing practices and products that help 
protect the environment

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc