Re: [fonc] Describing Semantics

2010-11-22 Thread Ondřej Bílka
Hello,
Can you explain the benefits of operational semantics?
I skimmed through a few papers and it looks like they define yet another
language with its own abstract syntax tree.
On Sun, Nov 21, 2010 at 04:55:45PM -0800, Alan Kay wrote:
>Hi Casey,
> 
>You might enjoy looking at "operational semantics" -- whose idea has been
>around longer than the term. There is also "denotational semantics", which
>goes back to Christopher Strachey in the 60s.
> 
>The latter is more amenable to automated reasoning processes, and the
>former is more amenable to lower level translations that create working
>machinery.
> 
>We are more interested in the former, and doing something about it is on
>our list, but this is not likely to happen for quite a few months.
> 
>So it would be great for folks on the fonc list to weigh in. For example,
>Nile is currently translated by OMeta into a variety of targets, including
>Javascript, Squeak, and C -- and soon into a STEPS lower level language
>(which could be thought of as a kind of "operational semantics").
> 
>It would be nice to have more schemes for semantics which allow both
>reasoning and translation into efficient lower level code.
> 
>Cheers,
> 
>Alan
> 
>--
> 
>From: Casey Ransberger 
>To: Fundamentals of New Computing ; "om...@vpri.org"
>
>Sent: Sun, November 21, 2010 3:40:31 PM
>Subject: [fonc] Describing Semantics
>So I was in a heated debate with a good friend about Ometa. He pointed
>out, inadvertently, that matters of syntax and grammar are only the easy
>part of the problem.
> 
>Ometa, I think, is the most elegant language we've seen for binding
>syntactic/grammatical constructs to semantics implemented in a host
>language, but the host language, as cool as it's gotten (e.g., JavaScript
>has some decent semantics, once one can work around the syntax) is still
>resistant, and Ometa leaves it up to us to express the semantics we want
>in whatever host language we've chosen.
> 
>How many hoops do we want to jump through in order to express our
>semantics? Obviously JavaScript isn't ideal for that.
> 
>I imagine I'm not the only person thinking about this. Ometa seems to
>*want* to be embedded in a language that expresses diverse semantics
>compactly.
> 
>What language is that?
> 
>My friend presented me with a construct that looked very much like a
>dictionary but where the keys and values could be only of certain types in
>the target language (this in an argument about the value of source to
>source translation.)
> 
>I wonder if the way to describe those requirements and relationships
>doesn't look like a Prolog.
> 
>If anyone on the list has some experience in this area, I'd love pointers
>to interesting research.
> 
>Thanks in advance, and sorry if the topic has come up before!
> 
>C
>___
>fonc mailing list
>[1]f...@vpri.org
>[2]http://vpri.org/mailman/listinfo/fonc
> 

> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc


-- 

Computers under water due to SYN flooding.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Describing Semantics

2010-11-25 Thread Ondřej Bílka
What surprises me is how much semantics definitions are influenced by the creator's
favourite language:
denotational semantics looks like a functional language resembling lambda calculus,
operational semantics looks like a proof system,
axiomatic semantics, ditto,
and the kernel languages of Humus and Oz are a mix of functional and declarative.

It looks like formalizing and writing about semantics is most natural for the mindset of a
logician, and logicians are highly correlated with the functional guys.

Reduction to a kernel language is implicitly used in almost all languages. A common
approach is macros, as in the Fortress language:
http://projectfortress.sun.com/Projects/Community
I have a similar idea which I will write up on the OMeta list.
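
To make the idea concrete, here is a minimal sketch of reduction to a kernel language
(the node names and the choice of kernel forms are mine, purely for illustration, not
Fortress or OMeta):

# Tiny AST: Node.new(:for, [init, cond, step, body]), Node.new(:while, [cond, body]), ...
Node = Struct.new(:kind, :args)

# Rewrite surface forms into kernel forms (:seq, :while, :if, :not, :nop);
# only the kernel forms then need a formal semantics.
def desugar(n)
  return n unless n.is_a?(Node)
  args = n.args.map { |a| desugar(a) }
  case n.kind
  when :for     # for(init; cond; step) body  =>  init; while (cond) { body; step }
    init, cond, step, body = args
    Node.new(:seq, [init, Node.new(:while, [cond, Node.new(:seq, [body, step])])])
  when :unless  # unless (cond) body  =>  if (!cond) body else nop
    cond, body = args
    Node.new(:if, [Node.new(:not, [cond]), body, Node.new(:nop, [])])
  else
    Node.new(n.kind, args)
  end
end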

On Tue, Nov 23, 2010 at 08:38:54AM -0600, Dale Schumacher wrote:
> Another way to define semantics is the "kernel language" approach
> where the kernel language has a formal (possibly Operational)
> semantics.  Van Roy and Haridi used this in CTM [1] for Oz/Mozart.
> 
> My Humus language [2] is a sort of "kernel language" based on a pure
> implementation of the Actor model.  This language has been used to
> explore the semantics of a variety of other computational models
> including Object-Oriented Method Invocation (ala COLA) [3] and
> pure-functional Lambda Calculus [4].
> 
> My strategy in using Humus has been to create "live" objects (actors,
> really) which represent various software constructs (variables,
> patterns, expressions, statements, etc.).  These actors respond to
> events which initiate computation.  They generate events which cause
> effects, modeling the "state" of the system.
> 
> A prototype implementation of Humus generates a graph of actors from
> the AST resulting from parsing language text.  The graph of actors
> represents the "program".  Execution occurs when an "eval/exec"
> message (which carries the context/environment for execution) is sent
> to the entry-point of the actor graph.
> 
> [1] http://c2.com/cgi/wiki?ConceptsTechniquesAndModelsOfComputerProgramming
> [2] http://www.dalnefre.com/wp/humus/
> [3] 
> http://www.dalnefre.com/wp/2010/07/message-passing-part-2-object-oriented-method-invocation/
> [4] 
> http://www.dalnefre.com/wp/2010/08/evaluating-expressions-part-1-core-lambda-calculus/
> 
> On Mon, Nov 22, 2010 at 8:33 AM, Alan Kay  wrote:
> > Actually, I think it would be a good exercise for you to ask "what is
> > semantics?", especially with regard to computing, and some of the ways it
> > might be useful. and report back to this list. It will make a good topic
> > for discussion.
> >
> > This is easier than it was 50 years ago because there are more examples and
> > more formalisms. So, in what ways are e.g. "operational semantics" and
> > "denotational semantics" similar, and how are they different? What do you
> > lose or gain going from one to the other?
> >
> > Cheers,
> >
> > Alan
> >
> >
> > 
> > From: Ondřej Bílka 
> > To: Fundamentals of New Computing 
> > Sent: Mon, November 22, 2010 6:21:30 AM
> > Subject: Re: [fonc] Describing Semantics
> >
> > Hello
> > Can you explain benefits of operational semantics.
> > I skimmed through few papers  and it looks that they define yet another
> > language with its abstract syntax tree.
> > On Sun, Nov 21, 2010 at 04:55:45PM -0800, Alan Kay wrote:
> >>    Hi Casey,
> >>
> >>    You might enjoy looking at "operational semantics" -- whose idea has
> >> been
> >>    around longer than the term. There is also "denotational semantics",
> >> which
> >>    goes back to Christopher Strachey in the 60s.
> >>
> >>    The latter is more amenable to automated reasoning processes, and the
> >>    former is more amenable to lower level translations that create working
> >>    machinery.
> >>
> >>    We are more interested in the former, and doing something about it is
> >> on
> >>    our list, but this is not likely to happen for quite a few months.
> >>
> >>    So it would be great for folks on the fonc list to weigh in. For
> >> example,
> >>    Nile is currently translated by OMeta into a variety of targets,
> >> including
> >>    Javascript, Squeak, and C -- and soon into a STEPS lower level language
> >>    (which could be thought of as a kind of "operational semantics").
> >>
> >>    It would be nice to have more schemes for semantics which allow both
> >>    reasoning and translation into efficient lower level c

Re: [fonc] Show Us The Code!

2010-12-18 Thread Ondřej Bílka
On Sun, Dec 19, 2010 at 12:14:57PM +1000, Steve Taylor wrote:
> Reuben Thomas wrote:
> 
> >1. You prefer to release only polished artefacts. This is just egotistical.
> 
> Demanding that people show you their work before it's ready can come
> across as pretty egotistical too.
> 
> Yes - I'd love to see a lot more FONC stuff released - but I don't
> think we've got a right to demand it.
> 
You don't have to, but typically waiting to release leads to the discovery that
somebody in the meantime wrote a program that does it better.
> 
> 
> 
> Steve
> 
> 
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

-- 

We already sent around a notice about that.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Show Us The Code!

2010-12-20 Thread Ondřej Bílka
On Mon, Dec 20, 2010 at 09:42:28PM +0800, Brian Gilman wrote:
> > 
> > No, I do not accept this. I do not think it is in the project's best
> > interests, I do not think it is in computer science's best interests,
> > and I do not think it is in the public interest. That is why I am
> > "banging on the door" (nice phrase) and trying to persuade them
> > otherwise. (Note: not "complaining".)
> > 
> 
> You aren't banging on the door, or persuading anyone of anything, you are 
> coming off like an abrasive person with the social skills of a computer 
> engineer.
> 
> Just because you believe that "Release early, release often" is the best 
> release strategy, doesn't mean that everyone at VPRI does.  I work in video 
> game development, and it's a pretty much suicidal strategy for releasing 
> games.  "A delayed game is eventually good, a bad game is bad forever." — 
> Shigeru Miyamoto
As good as Duke Nukem Forever?
> 
> It's very hard to shake bad first impressions,  and there are times when you 
> don't want people to see something until it's polished, or you have something 
> cool to show.  Otherwise the bad first impression will color the public's 
> perception of your project for the rest of its lifetime. 
> 
> I'm skeptical that releasing a bunch of source code for something that has 
> been described as being on "life-support", announcing to the world that there 
> has been a revolution in computing, and then have it not work on a majority 
> of machines, is really the optimal strategy for success. 
> 
> VPRI is getting public funding, but $5 million usd isn't a heck of a lot.  To 
> put that into context, these days, that isn't even enough to make a bad video 
> game. That means that they need to make good use of the resources that they 
> have, which means keeping focus.  Which means avoiding distractions, like 
> having to answer a zillion questions and unreasonable demands on mailing 
> lists. 
> 
> 
> 
>   
> 
> 
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

-- 

We had to turn off that service to comply with the CDA Bill.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alternative Web programming models?

2011-06-14 Thread Ondřej Bílka
On Tue, Jun 14, 2011 at 01:04:20PM -0700, BGB wrote:
> On 6/14/2011 12:14 PM, Michael FIG wrote:
> >Hi,
> >
> >John Nilsson  writes:
> >
> >>So my fix is to make the separation a hidden thing, which means the
> >>program needs to be represented in something that allows such hidden
> >>things (and I don't think Unicode control characters is the way to go
> >>here).
> >Why not crib a hack from JavaDoc and make your nested syntax embedded in
> >comments in the host language?
> >
> 
> or, like in my languages (generally for other things):
> use a keyword...
> 
> reusing prior example:
> 
> public int totalAmmount(InvoiceNo invoceNo)
> {
>   return SQLExpr {
> SELECT SUM(ammount) FROM Invoices WHERE invoiceNo = :InvoiceNo;
>   }
> }
> 
>
Well, you can embed languages easily using the fact that in any sane language
parentheses match
unless you are inside a string.
You can get a pretty strong preprocessor in a few lines, like the preprocessor
I attach.

It has one command,
register(name,command)
which replaces all subsequent occurrences of name(args){properly parenthesized text}
with the output of command, invoked with the given args and with the braced text
pasted in as its input.
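
For example (the sql2c translator command here is made up, just to show the shape of it), after

register(sql,sql2c)

a later occurrence of

sql(invoices.db){ SELECT SUM(amount) FROM Invoices }

would be replaced by whatever `sql2c invoices.db` prints when given the braced text on its
standard input.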
-- 

Big to little endian conversion error
# Reconstruction of the attached preprocessor (the archive mangled everything
# between '<' characters, so anything beyond the visible skeleton is a guess).
$reg={"register"=>true}
# Return [text between matching l..r delimiters starting at s[i], next index];
# delimiters inside double-quoted strings do not affect the nesting count.
def subp(s,i,l,r)
  return ["",i] if s[i,1]!=l
  o=""; pc=0
  begin
    pc+=1 if s[i,1]==l
    pc-=1 if s[i,1]==r
    if s[i,1]=='"'
      o<<s[i,1]; i+=1
      (o<<s[i,1]; i+=1) while s[i,1]!='"' || s[i-1,1]=='\\'
    end
    o<<s[i,1]; i+=1
  end while pc>0
  [process(o[1,o.size-2]),i]
end
def process(s)
  o=""; i=0
  while i<s.size
    name=$reg.keys.find{|n| s[i,n.size+1]==n+"("}
    if !name
      o<<s[i,1]; i+=1
    elsif name=="register"                # register(name,command)
      args,i=subp(s,i+name.size,"(",")")
      new_name,cmd=args.split(",",2)
      $reg[new_name.strip]=cmd.strip
    else                                  # name(args){text} -> output of command run on text
      args,i=subp(s,i+name.size,"(",")")
      body,i=subp(s,i,"{","}")
      File.open("tmp","w"){|f| f<<body}
      o<<`#{$reg[name]} #{args} <tmp`
    end
  end
  o
end
print process(ARGF.read)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] John Shutt's "Primacy of syntax" blog post - a good read

2011-07-11 Thread Ondřej Bílka
Reminds me of the Io language.
On Mon, Jul 11, 2011 at 03:41:06PM -0400, John Zabroski wrote:
>John Shutt has a new blog, sharing his thoughts on programming language
>design.  He has written a blog post titled "Primacy of Syntax".
>[1]http://fexpr.blogspot.com/2011/06/primacy-of-syntax.html
> 
>Shutt's work on fexprs came to my attention mostly due to a curious
>statement by Alan Kay in his Early History of Smalltalk paper:
> 
>"I could hardly believe how beautiful and wonderful the idea of LISP was.
>I say it this way because LISP had not only been around enough to get some
>honest barnacles, but worse, there were deep flaws in its logical
>foundations. By this, I mean that the pure language was supposed to be
>based on functions, but its most important components -- such as lambda
>expressions, quotes, and conds -- were not functions at all, and instead
>were called special forms. Landin and others had been able to get quotes
>and conds in terms of lambda by tricks that were variously clever and
>useful, but the flaw remained in the jewel. In the practical language
>things were better. There were not just EXPRs (which evaluated their
>arguments), but FEXPRs (which did not). My next question was, why on
>Earth call it a functional language? Why not just base everything on
>FEXPRs and force evaluation on the receiving side when needed?
> 
>I could never get a good answer, but the question was very helpful when it
>came time to invent Smalltalk, because this started a line of thought that
>said 'take the hardest and most profound thing you need to do, make it
>great, and then build every easier thing out of it.'"
> 
> Alan Kay,
>  The Early History of Smalltalk.,
>   in: Bergin, Jr., T.J., and R.G. Gibson.
>History of Programming Languages - II,
>   ACM Press, New York NY, and
>Addison-Wesley Publ. Co., Reading MA 1996,
>   pp. 511-578
>Since then, I have been really interested in what Shutt has to say about
>language design. Shutt has in essence given Alan his "good answer",
>although whether Alan likes Shutt's Ph.D. thesis is unknown to me.  It
>does seem like there is a lot of overlap with John's ideas and VPRI's
>ideas, but each use different terminology.  For example, VPRI uses "chains
>of meaning", whereas John talks about the output of a language being
>another language rather than a function returning a value that explains
>the meaning of a language.
> 
>Cheers,
>Z-Bo
> 

> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc


-- 

runaway cat on system.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Last programming language

2011-07-19 Thread Ondřej Bílka
On Tue, Jul 19, 2011 at 05:16:24AM -0700, Casey Ransberger wrote:
>Even if it were possible to have a last language, it would be double plus
>ungood.
> 
>On Mon, Jul 18, 2011 at 8:58 AM, Paul Homer <[1]paul_ho...@yahoo.ca>
>wrote:
> 
> Realistically, I think Godel's Incompleteness Theorem implies that there can 
> be no
> 'last' programming language (formal system).
> 
> But I think it is possible for a fundamentally different paradigm make a huge 
> leap in
> our ability to build complex systems. My thinking from a couple of years back:
> 
> [2]http://theprogrammersparadox.blogspot.com/2009/04/end-of-coding-as-we-know-it.html

Sorry, but it is very similar to the "XML will make everything interoperable" articles.


---

Trojan horse ran out of hay

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Physics and Types

2011-08-05 Thread Ondřej Bílka
On Fri, Aug 05, 2011 at 03:43:04AM -0700, BGB wrote:
>On 8/4/2011 6:19 PM, Alan Kay wrote:
> 
>  Here's the link to the paper
>  [1]http://www.vpri.org/pdf/rn2005001_learning.pdf
> 
>inference:
>it is not that basic math and physics are fundamentally so difficult to
>understand...
>but that many classes portray them as such a confusing and incoherent mess
>of notation and gobbledygook that no one can really make sense of it...
> 
>old stale/dead rant follows:
> 
>it is like, one year, with the help of a physics book,
>google+wikipedia+mathworld, and good old trial and error, I proceed to
>write a (basically functional, but not particularly "good") rigid body
>physics engine.
> 
>several years later, I took a physics class, with a teacher that comes off
>like Q (calling everyone stupid, comparing the students with dogs, ...)
>and writes out esoteric mathematical gobbledygook beyond my abilities to
>make much sense of (filled with set-notation and other unrecognized
>symbols and notations, some in common with first-order logic, like the
>inverted A and backwards E, ..., and others unknown...).
> 
... 
>granted, I have also seen in introductory programming classes just how
>poorly many of the students seem to grasp some of the basics of
>programming (struggling with things like variable declarations, loops,
>understanding why never-called functions fail to do anything, ...), so I
>guess ultimately it is kind of similar (in an almost sad way, programming
>really doesn't seem like it should be all that difficult from the POV of
>someone with a fair amount of experience with it).
> 
>but, at the same time, there would also be nothing good to be gained by
>belittling or being condescending towards newbies...
> 
Well, I faced the opposite problem: in classes people unnecessarily
complicate things by trying to make them accessible for newbies.
One of my experiences is that high school physics could be three times
easier and simpler if students learned differential equations.

-- 

Pentium FDIV bug

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-09 Thread Ondřej Bílka
On Wed, Nov 09, 2011 at 09:55:22PM +0530, K. K. Subramaniam wrote:
> On Wednesday 09 Nov 2011 12:43:00 PM Dan Amelang wrote:
> > "Input prefixing" is what I call this pushing of data onto the input
> > stream, though I'm not set on that term. You used the term "pushback",
> > which I like, but the problem is that we're pushing onto the front
> > of the input stream, and "pushfront" just doesn't have the same ring
> I thought 'pushback' means 'push back into', so if you have a input stream,
> abcdefgh >
> and you read in 'h' and then 'g', you could push 'g' back into the input 
> stream for later processing. I suppose one could also 'putback' g into the 
> input stream.
> 
> But then many computer terms border on the weird :-). I can understand 
> 'output' and something that one 'puts out' but then what is 'input'?  If it 
> is 
> something one 'gets in', shouldn't it have been 'inget' ;-) ?
You should use Russian terminology: they use the words Putin and getout.
> 
> Subbu
> 
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

-- 

clock speed

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Magic Ink and Killing Math

2012-03-10 Thread Ondřej Bílka
On Sat, Mar 10, 2012 at 01:21:42AM -0800, Wesley Smith wrote:
> > most notable thing I did recently (besides some fiddling with getting a new
> > JIT written), was adding a syntax for block-strings. I used <[[ ... ]]>
> > rather than triple-quotes (like in Python), mostly as this syntax is more
> > friendly to nesting, and is also fairly unlikely to appear by accident, and
> > couldn't come up with much "obviously better" at the moment, "<{{ ... }}>"
> > was another considered option (but is IIRC already used for something), as
> > was the option of just using triple-quote (would work, but isn't readily
> > nestable).
> 
> 
> You should have a look at Lua's long string syntax if you haven't already:
>
Better to be consistent with the rest of the scripting languages (bash, ruby, perl)
and use heredocs.
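
For example, in ruby (just a sketch of the syntax; the delimiter words are arbitrary):

# A heredoc: the closing word is chosen per string, so "nesting" another
# block-string inside it is just a matter of picking a different word.
query = <<SQL
  SELECT SUM(amount) FROM invoices WHERE invoice_no = ?
SQL

outer = <<OUTER
  this text may itself mention a heredoc ending in SQL,
  because only a line consisting of OUTER terminates this string
OUTER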
-- 

Your packets were eaten by the terminator
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Amethyst (was [IAEP] Barbarians at the gate! (Project Nell))

2012-03-15 Thread Ondřej Bílka
My pattern matching language, which I call Amethyst, is starting to
come close to being a generic tool for pattern matching.

For example, it is easy to write a generic highlighter, as I did for
Amethyst:
http://kam.mff.cuni.cz/~ondra/peridot/parser_highlight.ame.html

On Thu, Mar 15, 2012 at 05:20:52AM -0700, Alan Kay wrote:
>Alex Warth did both a standard Prolog and an English based language one
>using OMeta in both Javascript, and in Smalltalk.
>Again, why just go with something that happens to be around? Why not try
>to make a language that fits to the users and the goals?
>A stronger version of this kind of language is Datalog, especially the
>"Datalog + Time" language -- called Daedalus -- used in the BOOM project
>at Berkeley.
>Cheers,
>Alan
> 
>--
> 
>  From: Ryan Mitchley 
>  To: Fundamentals of New Computing 
>  Sent: Thursday, March 15, 2012 4:01 AM
>  Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
>  I wonder if micro-PROLOG isn't worth revisiting by someone:
> 
>  
> [1]ftp://ftp.worldofspectrum.org/pub/sinclair/games-info/m/Micro-PROLOGPrimer.pdf
> 
>  You get pattern matching, backtracking and a "nicer" syntax than Prolog.
>  It's easy enough to extend with IsA and notions of classes of objects.
>  It still doesn't fit well with a procedural model, in common with
>  Prolog, though.
> 
>  ___
>  fonc mailing list
>  [2]fonc@vpri.org
>  http://vpri.org/mailman/listinfo/fonc



-- 

My pony-tail hit the on/off switch on the power strip.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] LightTable UI

2012-04-24 Thread Ondřej Bílka
Reversibility is quite an old idea; Prolog is the typical example.

In applications, trying to make a function invertible complicates problems
more than it helps.

Typically you only want an adjunction, since you want to omit whatever the
function abstracted away.

On Tue, Apr 24, 2012 at 07:47:02PM +0200, Jarek Rzeszótko wrote:
>Many thanks everyone, one more good resource I found is this paper by
>David Eppstein attempting to do automatic inversion of simple Lisp
>functions:
> 
>[1]http://www.ics.uci.edu/~eppstein/pubs/Epp-IJCAI-85.pdf
> 
>Gives a good overview of other work as well and sheds some light on the
>difficulties involved in the problem.
> 
>Cheers,
>Jarosław Rzeszótko
> 
>W dniu 24 kwietnia 2012 18:48 użytkownik Toby Schachman
><[2]t...@alum.mit.edu> napisał:
> 
>  Benjamin Pierce et al did some work on bidirectional computation. The
>  premise is to work with bidirectional transformations (which they call
>  "lenses") rather than (unidirectional) functions. They took a stab at
>  identifying some primitives, and showing how they would work in some
>  applications. Of course we can do all the composition tricks with
>  lenses that we can do with functions :)
>  [3]http://www.seas.upenn.edu/~harmony/
> 
>  See also Gerald Sussman's essay Building Robust Systems,
>  
> [4]http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/robust-systems.pdf
> 
>  In particular, he has a section called "Constraints Generalize
>  Procedures". He gives an example of a system as a constraint solver
>  (two-way information flow) contrasted with the system as a procedure
>  (one-way flow).
> 
>  Also I submitted a paper for Onward 2012 which discusses this topic
>  among other things,
>  [5]http://totem.cc/onward2012/onward.pdf
> 
>  My own interest is in programming interfaces for artists. I am
>  interested in these "causally agnostic" programming ideas because I
>  think they could support a more non-linear, improvisational approach
>  to programming.
> 
>  Toby
> 
>  2012/4/24 Jarek Rzeszótko <[6]jrzeszo...@gmail.com>:
>  > On the other hand, Those who cannot remember the past are condemned to
>  > repeat it.
>  >
>  > Also, please excuse me (especially Julian Leviston) for maybe sounding
>  too
>  > pessimistic and too offensive, the idea surely is exciting, my point
>  is just
>  > that it excited me and probably many other persons before Bret Victor
>  or
>  > Chris Granger did (very interesting) demos of it and what would
>  _really_
>  > excite me now is any glimpse of any idea whatsoever on how to make
>  such
>  > things work in a general enough domain. Maybe they have or will have
>  such
>  > idea, that would be cool, but until that time I think it's not
>  unreasonable
>  > to restrain a bit, especially those ideas are relatively easy to
>  realize in
>  > special domains and very hard to generalize to the wide scope of
>  software
>  > people create.
>  >
>  > I would actually also love to hear from someone more knowledgeable
>  about
>  > interesting historic attempts at doing such things, e.g. reversible
>  > computations, because there certainly were some: for one I remember a
>  few
>  > years ago "back in time debugging" was quite a fashionable topic of
>  talks
>  > (just google the phrase for a sampling), from a more hardware/physical
>  > standpoint there is
>  [7]http://en.wikipedia.org/wiki/Reversible_computing etc.
>  >
>  > Cheers,
>  > Jarosław Rzeszótko
>  >
>  >
>  > 2012/4/24 David Nolen <[8]dnolen.li...@gmail.com>
>  >>
>  >> "The best way to predict the future is to invent it"
>  >>
>  >> On Tue, Apr 24, 2012 at 3:50 AM, Jarek Rzeszótko
>  <[9]jrzeszo...@gmail.com>
>  >> wrote:
>  >>>
>  >>> You make it sound a bit like this was a working solution already,
>  while
>  >>> it seems to be a prototype at best, they are collecting funding
>  right now:
>  >>> [10]http://www.kickstarter.com/projects/306316578/light-table.
>  >>>
>  >>> I would love to be proven wrong, but I think given the state of the
>  >>> project, many people overexcite over it: some of the things proposed
>  aren't
>  >>> new, just wrapped into a nice modern design (you could try to create
>  a new
>  >>> "skin" or UI toolkit for some Smalltalk IDE for a similiar effect),
>  while
>  >>> for the ones that would be new like the real-time evaluation or
>  >>> visualisation there is too little detail to say whether they are
>  onto
>  >>> something or not - I am sure many people thought of such things in
>  the past,
>  >>> but it is highly questionable to what extent those are actually
>  doable,
>  >>> especially in an existing language like Cloju

Re: [fonc] Incentives and Metrics for Infrastructure vs. Functionality

2013-01-01 Thread Ondřej Bílka
On Tue, Jan 01, 2013 at 09:12:07PM +0100, Loup Vaillant-David wrote:
> On Mon, Dec 31, 2012 at 04:36:09PM -0700, Marcus G. Daniels wrote:
> > On 12/31/12 2:58 PM, Paul D. Fernhout wrote:
> > 2. The programmer has a belief or preference that the code is easier
> > to work with if it isn't abstracted. […]
This depends a lot on context. On one end you have a pile of copy-pasted Visual
Basic code that could easily be refactored into a tenth of its size.
On the opposite end of the spectrum you have a piece of Haskell code where
everything is abstracted and each abstraction is wrong in some way or
another.

The main reason for the latter is functional fixedness. A Haskell programmer will see a
structure as a monad but then does not see more appropriate abstractions.

This is mainly problematic when there are portions of code that are very
similar, but only by chance, and each requires different treatment. You
merge them into one function, and after some time this function ends up with
ten parameters.

> 
> I have evidence for this poisonous belief.  Here is some production
> C++ code I saw:
> 
>   if (condition1)
>   {
> if (condition2)
> {
>   // some code
> }
>   }
> 
> instead of
> 
>   if (condition1 &&
>   condition2)
>   {
> // some code
>   }
> 
> -
> 
>   void latin1_to_utf8(std::string & s);
> 
Let me guess: they do it to save the cycles caused by allocating a new
string.
> instead of
> 
>   std::string utf8_of_latin1(std::string s)
> or
>   std::string utf8_of_latin1(const std::string & s)
> 
> -
> 
> (this one is more controversial)
> 
>   Foo foo;
>   if (condition)
> foo = bar;
>   else
> foo = baz;
> 
> instead of
> 
>   Foo foo = condition
>   ? bar
>   : baz;
> 
> I think the root cause of those three examples can be called "step by
> step thinking".  Some people just can't deal with abstractions at all,
> not even functions.  They can only make procedures, which do their
> thing step by step, and rely on global state.  (Yes, global state,
> though they do have the courtesy to fool themselves by putting it in a
> long lived object instead of the toplevel.)  The result is effectively
> a monster of mostly linear code, which is cut at obvious boundaries
> whenever `main()` becomes too long ("too long" generally being a
> couple hundred lines).  Each line of such code _is_ highly legible,
> I'll give them that.  The whole however would frighten even Cthulhu.
> 
> Loup.
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

-- 

The electricity substation in the car park blew up.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Incentives and Metrics for Infrastructure vs. Functionality

2013-01-02 Thread Ondřej Bílka
On Tue, Jan 01, 2013 at 04:57:05PM -0700, Marcus G. Daniels wrote:
> On 1/1/13 3:18 PM, Ondřej Bílka wrote:
> >On opposite end of spectrum you have piece of haskell code where
> >everything is abstracted and each abstraction is wrong in some way
> >or another. Main reason of later is functional fixedness. A
> >haskell programmer will see a structure as a monad but then does
> >not see more apropriate abstractions. This is mainly problematic
> >when there are portions of code that are very similar but only by
> >chance and each requires different treatment. You merge them into
> >one function and after some time this function ends with ten
> >parameters.

A better example: you have C code where, in several places, there is code
for inserting an element into a sorted array and then using that array.
What should you do?
A CS course taught us to use a red-black tree there. Right?

Well, not exactly. When one looks at how this code is used, one discovers that
the first occurrence is a shortest-path problem, so a radix heap is appropriate.
The second does not use the ordering, so a hash table is more appropriate.
The third periodically generates a web page from the sorted data, so keeping new data
in a separate buffer and calling sort from the generating routine looks best.
Finally the fourth, the original, contains a comment:

/* Used to find the closest value to a given value. Profiling showed this
accounted for 13% of the time. As updates are rare (not changed in the previous
year), using more sophisticated structures than binary search is
counterproductive. */

> Hmm, yeah.  I've done that, but I've also recognized it and undone
> it, or added parameters to types and/or reworked the combinators to
> deal with both treatments.  That one is even looking for combinators
> seems to me to show that the Haskell programmer is inclined to
> resist functional fixedness...
That depends. Combinators also make you think in terms of combinators;
you will get the same problem.

In general, the important point here is to have as a basis a set of primitives
that are quite unlikely to get split into parts. But then you might as well
reinvent Forth.
> 
> Marcus
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

-- 

Just type 'mv * /dev/null'.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Ondřej Bílka
On Sat, Apr 06, 2013 at 09:00:26PM -0700, David Barbour wrote:
>On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston <[1]jul...@leviston.net>
>wrote:
> 
>  LISP is "perfectly" precise. It's completely unambiguous. Of course,
>  this makes it incredibly difficult to use or understand sometimes.
> 
>Ambiguity isn't necessarily a bad thing, mind. One can consider it an
>opportunity: For live coding or conversational programming, ambiguity
>enables a rich form of iterative refinement and conversational programming
>styles, where the compiler/interpreter fills the gaps with something that
>seems reasonable then the programmer edits if the results aren't quite
>those desired. For mobile code, or portable code, ambiguity can provide
>some flexibility for a program to adapt to its environment. One can
>consider it a form of contextual abstraction. Ambiguity could even make a
>decent target for machine-learning, e.g. to find optimal results or
>improve system stability [1].
>[1] [2]http://awelonblue.wordpress.com/2012/03/14/stability-without-state/
>

IMO unambiguity is a property that looks good only on paper.

If you look for the perfect solution, you will get a perfect solution to the wrong
problem.

The purpose of a language is to convey how to solve problems. You need to look for a
robust solution. You must deal with the fact that the real world is imprecise: just
transforming a problem into words causes inaccuracy. And when you tell something to
many parties, each of them wants to optimize something different. You again
need flexibility.


This is the problem with the logicians: they did not go in this direction,
but in a direction that makes their results more and more brittle.
Until one can answer the questions above, along with how to choose what is more
important between contradictory data, there is no chance of getting decent AI.

What is important is the cost of knowledge. It has several important
properties, for example that in 99% of cases it is negative.

You can easily roll dice 50 times and make 50 statements about them that 
are completely unambiguous and completely useless.



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Ondřej Bílka
On Sun, Apr 07, 2013 at 08:03:54AM -0500, Tristan Slominski wrote:
>  A purpose of language is to convey how to solve problems. You need to
>  look for
>  robust solution. You must deal with that real world is inprecise. Just
>  transforming
>  problem to words causes inaccuracy. when you tell something to many
>  parties each of them wants to optimize something different. You again
>  need flexibility.
> 
>Ondrej, have you come across Nassim Nicholas Taleb's Antifragility
>concept? The reason I ask, is because we seem to agree on what's important
>in solving problems. However, robustness is a limited goal, and
>antifragility seems a much more worthy one.

I had not.
Yes, that is almost exactly what I meant. I did not have a word that would
fit exactly, so I described it as robustness, which was the closest up to now.

>In short, the concept can be expressed in opposition of how we usually
>think of fragility. And the opposite of fragility is not robustness.
>Nassim argues that we really didn't have a name for the concept, so he
>called it antifragility.
>fragility - quality of being easily damaged or destroyed.
>robust - 1. Strong and healthy; vigorous. 2. Sturdy in construction.
>Nassim argues that the opposite of easily damaged or destroyed [in face of
>variability] is actually getting better [in face of variability], not just
>remaining robust and unchanging. This "getting better" is what he called
>antifragility.
>Below is a short summary of what antifragility is. (I would also encourage
>reading Nassim Taleb directly, a lot of people, perhaps myself included,
>tend to misunderstand and misrepresent this concept)
>
> [1]http://www.edge.org/conversation/understanding-is-a-poor-substitute-for-convexity-antifragility
> 
>On Sun, Apr 7, 2013 at 4:25 AM, Ondřej Bílka <[2]nel...@seznam.cz> wrote:
> 
>  On Sat, Apr 06, 2013 at 09:00:26PM -0700, David Barbour wrote:
>  >    On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston
>  <[1][3]jul...@leviston.net>
>  >    wrote:
>  >
>  >      LISP is "perfectly" precise. It's completely unambiguous. Of
>  course,
>  >      this makes it incredibly difficult to use or understand
>  sometimes.
>  >
>  >    Ambiguity isn't necessarily a bad thing, mind. One can consider it
>  an
>  >    opportunity: For live coding or conversational programming,
>  ambiguity
>  >    enables a rich form of iterative refinement and conversational
>  programming
>  >    styles, where the compiler/interpreter fills the gaps with
>  something that
>  >    seems reasonable then the programmer edits if the results aren't
>  quite
>  >    those desired. For mobile code, or portable code, ambiguity can
>  provide
>  >    some flexibility for a program to adapt to its environment. One can
>  >    consider it a form of contextual abstraction. Ambiguity could even
>  make a
>  >    decent target for machine-learning, e.g. to find optimal results or
>  >    improve system stability [1].
>  >    [1]
>  [2][4]http://awelonblue.wordpress.com/2012/03/14/stability-without-state/
>  >
> 
>  IMO unambiguity is property that looks good only in the paper.
> 
>  When you look to perfect solution you will get perfect solution for
>  wrong problem.
> 
>  A purpose of language is to convey how to solve problems. You need to
>  look for
>  robust solution. You must deal with that real world is inprecise. Just
>  transforming
>  problem to words causes inaccuracy. when you tell something to many
>  parties each of them wants to optimize something different. You again
>  need flexibility.
> 
>  This is problem of logicians that they did not go into this direction
>  but direction that makes their results more and more brittle.
>  Until one can answer questions above along with how to choose between
>  contradictrary data what is more important there is no chance to get
>  decent AI.
> 
>  What is important is cost of knowledge. It has several important
>  properties, for example that in 99% of cases it is negative.
> 
>  You can easily roll dice 50 times and make 50 statements about them that
>  are completely unambiguous and completely useless.
> 
>  ___
>  fonc mailing list
>  [5]fonc@vpri.org
>  [6]http://vpri.org/mailman/listinfo/fonc
> 

Re: [fonc] FONC: The Fanboy Mailing List With No Productivity

2013-04-13 Thread Ondřej Bílka
On Sat, Apr 13, 2013 at 12:48:53AM +0200, Igor Stasenko wrote:
> On 12 April 2013 22:18, John Pratt  wrote:
> >
> > This is just like open source software.  A bunch of feelgood people
> > hangin' out and messin' around, not ever doing anything, but pretending
> > they are getting somewhere by indulging themselves.  No one on here
> > is probably working on the Fundamentals of New Computing.
> >
> 
> I know that i going against the rule to not feed the troll, but sorry
> cannot resist.
> Replied using Firefox open-source software.
> 
> > This is just a trash bin for people who don't want to do anything.
> > The real work is probably on noise-free mailing list.  This is the
> > fanboy list for Alan Kay.

Also cannot resist.

Well, if you are not satisfied you can establish a new list. Then we will
have:

fonc - renamed, per your suggestion, to friends of naive computing
focn - same topics but with an elite mentality; to be more elite,
conversations are in French (fondamentaux of l'computateour nouvelle)

The elders hang out on the #fnoc IRC channel and criticize postings from fcon as
having been solved 20 years ago in a Hungarian academy proceedings article.

Meanwhile the real work is done on the @fonnc (fundamentals of new new computing)
twitter channel.
However, do not confuse it with the nfonc (new fundamentals of new computing)
facebook group. There they mostly suggest the same ideas as on fonc, but
with a 5 year lag, and fail on the same problems.

> > ___
> > fonc mailing list
> > fonc@vpri.org
> > http://vpri.org/mailman/listinfo/fonc
> 
> 
> 
> -- 
> Best regards,
> Igor Stasenko.
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

-- 

Borg implants are failing
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Compiler Passes

2013-04-16 Thread Ondřej Bílka
On Sun, Apr 14, 2013 at 04:20:41PM -0700, David Barbour wrote:
>(Forwarded from Layers thread)
>On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
><[1]gath.na.geala...@gmail.com> wrote:
> 
>  Isn't one of the points of idst/COLA/Frank/whatever-it-is-called-today
>  to simplify the development of domain-specific models to such an extent
>  that their casual application becomes conceivable, and indeed even
>  practical, as opposed to designing a new one-size-fits-all language
>  every decade or so?
> 
>   
> 
>  I had another idea the other day that could profit from a
>  domain-specific model: a model for compiler passes. I stumbled upon the
>  nanopass approach [1] to compiler construction some time ago and found
>  that I like it. Then it occurred to me that if one could express the
>  passes in some sort of a domain-specific language, the total compilation
>  pipeline could be assembled from the individual passes in a much more
>  efficient way than would be the case if the passes were written in
>  something like C++.

I do that in Amethyst. I extended OMeta to support dataflow analysis and
to match arbitrary objects. Then I use optimizations to optimize Amethyst
like in a compiler.

You will do better when you abandon the concept of a pass. You have
transformations, and conditions under which you can apply them. Ideally all of
them simplify the code (according to some metric like the size of the binary), and you
want to apply them until none is applicable.

Theorists call this a term rewriting system.
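
A minimal sketch of what I mean (the rules and the size metric are only illustrative):
simplify subterms first, then apply any rule that shrinks the term, and stop when
nothing applies any more.

# Terms are nested arrays, e.g. [:add, [:mul, :x, 1], 0].
RULES = [
  ->(t) { t[2] if t[0] == :add && t[1] == 0 },   # 0 + e  =>  e
  ->(t) { t[1] if t[0] == :add && t[2] == 0 },   # e + 0  =>  e
  ->(t) { t[1] if t[0] == :mul && t[2] == 1 },   # e * 1  =>  e
  ->(t) { 0    if t[0] == :mul && t[2] == 0 },   # e * 0  =>  0
]

# The metric that every applied rule must decrease.
def size(t)
  t.is_a?(Array) ? 1 + t.drop(1).map { |s| size(s) }.sum : 1
end

def rewrite(t)
  return t unless t.is_a?(Array)
  t = [t[0]] + t.drop(1).map { |s| rewrite(s) }        # rewrite subterms first
  RULES.each do |rule|
    s = rule.call(t)
    return rewrite(s) if !s.nil? && size(s) < size(t)  # apply only if it shrinks the term
  end
  t
end

p rewrite([:add, [:mul, :x, 1], 0])   # => :x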

Here the biggest problem is determining the profitability of a transformation. Well,
gcc uses an ordering that empirically looks fastest.

This idea is nothing new; a similar idea is here:
cseweb.ucsd.edu/~lerner/UW-CSE-01-11-01.pdf
I did not check whether this paper is used in practice.

You need to split transformations when you start getting cycles. A typical
example of this is CSE.

>  In order to do that, however, no matter what the intermediate values in
>  the pipeline would be (trees? annotated graphs?), the individual passes
>  would have to be analyzable in some way. For example, two passes may or

Why do you need that?

>  may not interfere with each other, and therefore may or may not be
>  commutative, associative, and/or fusable (in the same respect that, say,
>  Haskell maps over lists are fusable). I can't imagine that C++ code
>  would be analyzable in this way, unless one were to use some severely
>  restricted subset of C++ code. It would be ugly anyway.
>

>  Composing the passes by fusing the traversals and transformations would
>  decrease the number of memory accesses, speed up the compilation
>  process, and encourage the compiler writer to write more fine-grained

I tried that; it does not work. When fusing you transform sequential memory access
into random access,
which is slower. In gcc a lot of effort is placed on cutting a pass short once it
starts producing reasonable results. This is hard to determine with
fusion.

The real reason for doing it is that a combination of two analyses is stronger than
any sequential composition of them. The canonical example is a pair of optimizations X
and Y, where X knows the value of A on known constants and Y knows the value of B on
known constants. Then try to simplify the following:

x = 0;
y = 0;
while (1) {
    if (A(x))
        y = 1;
    if (B(y))
        x = 1;
}

>  passes, in the same sense that deep inlining in modern language
>  implementations encourages the programmer to write small and reusable
>  routines, even higher-order ones, without severe performance penalties.
>  Lowering the barrier to implementing such a problem-specific language
>  seems to make such an approach viable, perhaps even desirable, given how
>  convoluted most "production compilers" seem to be.
> 
>  (If I've just written something that amounts to complete gibberish,
>  please shoot me. I just felt like writing down an idea that occurred to
>  me recently and bouncing it off somebody.)
> 
>  - Gath
> 
>  [1] Kent Dybvig, A nanopass framework for compiler education (2005),
>  [2]http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.72.5578
>   
>  ___
>  fonc mailing list
>  [3]fonc@vpri.org
>  [4]http://vpri.org/mailman/listinfo/fonc
> 

> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc