Re: [fonc] Unsolved problem in computer science? Fixing shortcuts.

2014-10-05 Thread Tristan Slominski
One thing that comes to mind is copying garbage collectors, which need to
keep track of references while moving objects around. Looking into how that
is solved will probably provide some insight.
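
(A minimal sketch, in plain JavaScript with invented names, of the
forwarding-pointer trick a copying collector uses to keep references
consistent while it moves objects; this illustrates the general idea, not
any particular collector:)

function copyObject(obj, toSpace) {
  if (obj.forwardedTo) return obj.forwardedTo;  // already moved: follow the forwarding pointer
  var copy = { fields: obj.fields.slice() };    // shallow copy into to-space
  toSpace.push(copy);
  obj.forwardedTo = copy;                       // breadcrumb left behind in from-space
  return copy;
}

function collect(roots) {
  var toSpace = [];
  var newRoots = roots.map(function (r) { return copyObject(r, toSpace); });
  for (var i = 0; i < toSpace.length; i++) {    // Cheney-style scan of the copies
    toSpace[i].fields = toSpace[i].fields.map(function (f) {
      return f && f.fields ? copyObject(f, toSpace) : f;  // repoint references to the new copies
    });
  }
  return newRoots;
}

// Two objects referencing each other survive the move with references intact:
var a = { fields: [] }, b = { fields: [a] };
a.fields.push(b);
var roots = collect([a]);
console.log(roots[0].fields[0].fields[0] === roots[0]);   // true

The analogy to shortcuts would be: the old location knows where the object
went, so anything still pointing at the old location gets repointed on the
next visit.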

On Sun, Oct 5, 2014 at 12:35 PM, John Carlson yottz...@gmail.com wrote:

 Not obvious to me.  Are you saying a folder of shortcuts?   A shortcut to
 a folder?   A shortcut to a shortcut to a folder?  Instead of using
 indirect addressing, can you put it in terms of folders and shortcuts, or
 do we need a third type of object?  And how does this apply to a general
 graph problem?   Are you speaking of URNs?  A directory of hard links?
 That seems to make the most sense to me, and would bring in the third type
 of object.  Can you really make a hard link to a directory, and expect it
 to work?  I'm not thinking of something with two levels, I am thinking of a
 multilevel problem, where the shortcuts go really deep, like from a desktop
 to somewhere into program files.  If I rename a program files folder, what
 happens to my shortcuts?  If you like I can put this into Linux/BSD terms
 which I am more comfortable with.  I am trying to address it to a larger
 audience than that though.

 On Sun, Oct 5, 2014 at 8:49 AM, Miles Fidelman mfidel...@meetinghouse.net
  wrote:

 Isn't the obvious answer to use indirect addressing via a directory?

 John Carlson wrote:

 To put the problem in entirely file-system terminology: what happens to
 a folder with shortcuts into it when you move the folder?   How does one
 automatically repoint the shortcuts?  Has this problem been solved in
 computer science?   On Linux, the shortcuts would be symbolic links.

 I had a dream about smallstar when I was thinking about this.  The
 author was essentially asking me how to fix it.  He was showing me a
 hierarchy, then he moved part of the hierarchy into a subfolder and asked
 me how to automate it--especially the links to the original hierarchy.

 In language terms, this would be the equivalent of refactoring a class
 that gets dropped down into an inner class.  This might already be solved;
 I'm not sure.

 This would be a great problem to solve on the web as well...does Xanadu
 do this?

 I think the solution is to maintain non-persistent nodes which are
 computed at access time, but I'm not entirely clear.
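
(A minimal sketch of that access-time idea in plain JavaScript; the
directory, ids, and paths are invented for illustration and follow Miles's
"indirect addressing via a directory" suggestion:)

var directory = { 'id-42': '/home/me/program files/app' };  // stable id -> current path
var shortcut  = { target: 'id-42' };                        // the link stores an id, never a path

function resolve(link) {
  return directory[link.target];   // the path is computed at access time, not persisted in the link
}

function move(id, newPath) {
  directory[id] = newPath;         // one update repoints every shortcut at once
}

move('id-42', '/home/me/apps/app');
console.log(resolve(shortcut));    // -> '/home/me/apps/app'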

 I have no idea why I am posting this to cap-talk.   There may be some
 capability issues that I haven't thought of yet. Or perhaps the capability
 folks have already solved this.

 For your consideration,

 John Carlson


 --
 In theory, there is no difference between theory and practice.
 In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Tristan Slominski
Hey Alan,

With regard to burning issues and better directions, I want to highlight
the communicating-with-aliens problem as worthy of remembering: machines
figuring out, on their own, a protocol and goals for communication. This
might relate to the cooperating-solvers aspect of your work.

Cheers,

Tristan


On Tue, Sep 3, 2013 at 6:48 AM, Alan Kay alan.n...@yahoo.com wrote:

 Hi Jonathan

 We are not soliciting proposals, but we like to hear the opinions of
 others on burning issues and better directions in computing.

 Cheers,

 Alan

   --
  *From:* Jonathan Edwards edwa...@csail.mit.edu
 *To:* fonc@vpri.org
 *Sent:* Tuesday, September 3, 2013 4:44 AM

 *Subject:* Re: [fonc] Final STEP progress report abandoned?

 That's great news! We desperately need fresh air. As you know, the way a
 problem is framed bounds its solutions. Do you already know what problems
 to work on or are you soliciting proposals?

 Jonathan


 From: Alan Kay alan.n...@yahoo.com
 To: Fundamentals of New Computing fonc@vpri.org
 Cc:
 Date: Mon, 2 Sep 2013 10:45:50 -0700 (PDT)
 Subject: Re: [fonc] Final STEP progress report abandoned?
 Hi Dan

 It actually got written and given to NSF and approved, etc., a while ago,
 but needs a little more work before posting on the VPRI site.

 Meanwhile we've been consumed by setting up a number of additional, and
 wider scale, research projects, and this has occupied pretty much all of my
 time for the last 5-6 months.

 Cheers,

 Alan

   --
  *From:* Dan Melchione dm.f...@melchione.com
 *To:* fonc@vpri.org
 *Sent:* Monday, September 2, 2013 10:40 AM
 *Subject:* [fonc] Final STEP progress report abandoned?

 Haven't seen much regarding this for a while.  Has it been abandoned
 or put at such a low priority that it is effectively abandoned?


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Macros, JSON

2013-07-21 Thread Tristan Slominski
All this talk of macros and quotes reminds me that there is the Kernel
language, where they are unnecessary (if I understand it correctly);
operative and applicative combiners are used explicitly instead:
http://www.wpi.edu/Pubs/ETD/Available/etd-090110-124904/unrestricted/jshutt.pdf
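
(A minimal sketch, in plain JavaScript with invented names, of the
operative/applicative distinction as I understand it from the thesis;
Kernel itself is of course much richer than this:)

function evaluate(form, env) {
  if (Array.isArray(form)) {                       // a combination: (combiner operand...)
    var combiner = evaluate(form[0], env);
    var operands = form.slice(1);
    if (combiner.operative) {
      return combiner.fn(operands, env);           // operands arrive unevaluated, with the caller's env
    }
    var args = operands.map(function (o) { return evaluate(o, env); });
    return combiner.fn(args, env);                 // applicative: arguments evaluated first
  }
  if (typeof form === 'string') return env[form];  // strings double as symbols in this toy
  return form;                                     // self-evaluating literal
}

// $if never evaluates the branch not taken, and no quoting is needed anywhere.
var $if = { operative: true, fn: function (ops, env) {
  return evaluate(ops[0], env) ? evaluate(ops[1], env) : evaluate(ops[2], env);
} };
var add = { operative: false, fn: function (args) { return args[0] + args[1]; } };

var env = { '$if': $if, '+': add, x: 1 };
console.log(evaluate(['$if', 'x', ['+', 'x', 1], 0], env));   // ($if x (+ x 1) 0) -> 2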


On Sun, Jul 21, 2013 at 5:06 PM, Casey Ransberger
casey.obrie...@gmail.com wrote:

 Lisp is such a joy to implement. FORTH is fun too.

 I'm working on a scheme-alike on and off. The idea is to take the message
 passing and delegation from Self, expose it in Lisp, and then map all of
 that to JavaScript.

 One idea I had when I was messing around with OMetaJS was that it might
 have some kind of escape syntax like

 (let ((x 1))
   #{x++; }#
 )

 Would basically mean

 (let ((x 1))
   (+ x 1))

 ...which would make doing primitives feel pretty smooth, and also give
 you the nice JSON syntax.

 The rule is simple too: '#{' followed by anything:a, up until '}#' -
 eval(a).

 Only problem is relating environment context between the two languages,
 which I haven't bothered to figure out yet. The JS eval() in this case is
 insufficient.

 (Sorry about the pseudocode, on a phone and don't keep OMeta syntax in my
 head...)
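
(A minimal sketch of Casey's '#{' ... '}#' rule in plain JavaScript rather
than OMeta; the names are invented, and the environment-sharing problem he
mentions is left open:)

function extractEscapes(src, onEscape) {
  // everything between '#{' and '}#' is handed to the callback (eval in the example above)
  return src.replace(/#\{([\s\S]*?)\}#/g, function (_, body) {
    return onEscape(body);
  });
}

console.log(extractEscapes('(let ((x 1)) #{x + 1;}# )', function (b) { return b.trim(); }));
// -> '(let ((x 1)) x + 1; )'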

 On Jul 21, 2013, at 1:15 PM, Alan Moore kahunamo...@closedsource.com
 wrote:

 JSON is all well and good as far as lowest common denominators go.
 However, you might want to consider EDN:

 https://github.com/edn-format/edn

 On the other hand, if you are doing that then you might as well go *all*
 the way and re-invent half of Common Lisp :-)

 http://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

 Alan Moore


 On Sun, Jul 21, 2013 at 10:28 AM, John Carlson yottz...@gmail.com wrote:

 Hmm.  I've been thinking about creating a macro language written in JSON
 that operates on JSON structures.  Has someone done similar work?  Should I
 just create a JavaScript AST in JSON? Or should I create an AST
 specifically for JSON manipulation?

 Thanks,

 John
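
(A minimal sketch, in plain JavaScript with an invented "$macro"
convention, of the kind of JSON-to-JSON macro expander John is asking
about; it is not an existing library:)

var macros = {
  // {"$macro": "unless", "cond": c, "then": t} rewrites to an if-not form
  unless: function (node) {
    return { 'if': { not: node.cond }, then: node.then };
  }
};

function expand(node) {
  if (Array.isArray(node)) return node.map(expand);
  if (node && typeof node === 'object') {
    if (node.$macro && macros[node.$macro]) return expand(macros[node.$macro](node));
    var out = {};
    Object.keys(node).forEach(function (k) { out[k] = expand(node[k]); });
    return out;
  }
  return node;                                   // literals pass through unchanged
}

console.log(JSON.stringify(expand({ $macro: 'unless', cond: 'ready', then: 'wait' })));
// -> {"if":{"not":"ready"},"then":"wait"}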

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Why Mind Uploading could be horrible

2013-04-23 Thread Tristan Slominski
With great trepidation, I will try to keep this to computing :D

It may revolve around the meaning of uploading, but my problem with the
uploading approach is that it makes a copy. Whether a copy is the same as
the real thing is, I feel, beyond the scope of a computing discussion in
this particular sense. I assert that I am not interested in a copy of Me
(in legal style, I will use capitals for defined terms).

The next thing is the definition of Me. For the purpose of this, Me is
defined as the pattern of interaction of physical processes that happens
within the volume bound by my skin. I will further refine to a concept of
Sensory Me, which I will define as the pattern of interaction of physical
processes that happens within my nervous system. I will further refine to a
concept of the Conscious Me, which I will define as the pattern of
interaction from the definition of Sensory Me, and it is separate from the
physical processes of the same.

With the definition of Conscious Me in place, what I am interested in is
preserving the Conscious Me whether in its original form (i.e. implemented
on top of original physical processes, that is embodied in a human body),
or over a different substrate.

Side note: if you disagree with my definitions, then please don't argue the
conclusions using your own definitions. I consider it axiomatic that from
different definitions we'll likely arrive at something different, so no
argument is to be had really.

It seems to me that it should be possible to replace, one by one, various
physical processes with processes of a different type that would still
support the same pattern of interaction (Conscious Me). The distinction I
am making is that I am interested in continuing the existing pattern
(Conscious Me), hot-swapping, so to speak, the physical processes
implementing it. This is the best illustration of why I feel uploading,
which to me implies a copy, would be wrong and horrible: the existing
pattern would be discontinued while the uploaded pattern was permitted to
endure.

More on computation...

There is ample evidence, which I will sort of assume and handwave, that our
Conscious Mes are capable of great flexibility and plasticity. For example,
when I drive a car, my concept of me incorporates the machine I am
operating. This effect is even more pronounced when piloting an aircraft.
Or consider our ability to train our brains to see with our tongues:
http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues

I am very interested in the Hierarchical Temporal Memory (HTM) model
(https://www.numenta.com/technology.html#cla-whitepaper) of how the human
neocortex computes, and a lot of my views about Conscious Me are informed
by the HTM model. HTM proposes one algorithm, implemented on a certain
physical architecture, that can give rise to the Metaphors We Live By
(http://www.amazon.com/Metaphors-We-Live-By-ebook/dp/B006KYECYA/ref=tmm_kin_title_0)
types of thinking that human beings seem to have.

The reason I am very interested in dynamic objects all the way down (the
types of systems VPRI is building) is that I am looking at them through the
lens of preserving the Conscious Me. Fully dynamic objects running on
hardware seem promising in this regard. The Actor Model also helps to frame
some of these things through a slightly different lens, hence my interest
in it. Both seem to allow emergent behavior for processes that may in the
future support Conscious Me.

Admittedly, the interface between the two physical processes remains as a
subject for future research.



On Tue, Apr 23, 2013 at 10:11 AM, Loup Vaillant-David 
l...@loup-vaillant.fr wrote:

 On Tue, Apr 23, 2013 at 04:01:20PM +0200, Eugen Leitl wrote:
  On Fri, Apr 19, 2013 at 02:05:07PM -0500, Tristan Slominski wrote:
 
   That alone seems to me to dismiss the concern that mind uploading
 would not
   be possible (despite that I think it's a wrong and a horrible idea
   personally :D)

 Personally, I can think of 2 objections:

  1. It may turn out that mind uploading doesn't actually transfer your
 mind in a new environment, but actually makes a *copy* of you,
 which will behave the same, but isn't actually you.  From the
 outside, it would make virtually no difference, but from the
 inside, you wouldn't get to live in the Matrix.

  2. There's those cool things called privacy, and free will that
 can get seriously compromised if anyone but a saint ever get root
 access to the Matrix you live in.  And we have plenty of reasons
 to abuse such a system.  Like:

 - Boost productivity with happy slaves.  Just copy your best
   slaves, and kill the rest.  Or make them work 24/7 by killing
   them every 8 hours, and restarting a saved state. (I got the
   idea from Robin Hanson.)

   Combined with point (1), this is a killer: we will probably get
   to a point where meatbags are not competitive enough to feed
   themselves.  So, everyone dies soon, and Earth

Re: [fonc] Why Mind Uploading could be horrible

2013-04-23 Thread Tristan Slominski

 It seems to me they only differ by the size of the part replaced.


I agree. And that seems to be a subject for future research as well: the
whole "engineering to get stuff within tolerance" side of things. :D

What I find interesting is if, instead of replacing a part, we want to add
parts. Extra sensory module, extra neocortex-type module, etc. Then there
seems to be some engineering involved to determine when the Conscious Me
assimilates the module and spreads itself out over the assimilated
module.



On Tue, Apr 23, 2013 at 12:42 PM, John Nilsson j...@milsson.nu wrote:

 It's not so much if a copy is the same as the real thing, but rather how
 do you define the difference between an all-at-once copy with a
 simultaneous destruction of the original and a piece by piece replacement
 of the parts?
 It seems to me they only differ by the size of the part replaced.
 BR
 John
  On 23 Apr 2013, at 18:13, Tristan Slominski
  tristan.slomin...@gmail.com wrote:

 With great trepidation, I will try to keep this to computing :D

 It may revolve around the meaning of uploading, but my problem with the
 uploading approach, is that it makes a copy. Whether a copy is the same as
 the real thing I feel is beyond the scope of a computing discussion in this
 particular sense. I assert, that I am not interested in a copy of Me (in
 legal style, I will use capitals for defined terms).

 The next thing is the definition of Me. For the purpose of this, Me is
 defined as the pattern of interaction of physical processes that happens
 within the volume bound by my skin. I will further refine to a concept of
 Sensory Me, which I will define as the pattern of interaction of physical
 processes that happens within my nervous system. I will further refine to a
 concept of the Conscious Me, which I will define as the pattern of
 interaction from the definition of Sensory Me, and it is separate from the
 physical processes of the same.

 With the definition of Conscious Me in place, what I am interested in is
 preserving the Conscious Me whether in its original form (i.e. implemented
 on top of original physical processes, that is embodied in a human body),
 or over a different substrate.

 Side note: if you disagree with my definitions, then please don't argue
 the conclusions using your own definitions. I consider it axiomatic that
 from different definitions we'll likely arrive at something different, so
 no argument is to be had really.

 It seems to me to be possible to one by one replace various physical
 processes with a different type that would result in supporting the same
 pattern of interaction (Conscious Me).  The distinction I am making, is
 that I am interested in continuing the existing pattern (Conscious Me),
 hot-swapping, so to speak, the physical processes implementing it. This is
 the best illustration of why I feel uploading, which to me implies a
 copy, would be wrong and horrible. Because the existing pattern would then
 be discontinued as the uploaded pattern would be permitted to endure.

 More on computation...

 There is ample evidence, that I will sort of assume and handwave, that
 our Conscious Me's are capable of great flexibility and plasticity. For
 example, when I drive a car, my concept of me incorporates the machine I
 am operating. This effect is even more pronounced when piloting an
 aircraft. Or our ability to train our brains to see with our tongues:
 http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues

 I am very interested in the Hierarchical Temporal Memory (HTM) model
 (https://www.numenta.com/technology.html#cla-whitepaper) of how the human
 neocortex computes, and a lot of my views about Conscious Me are informed
 by the HTM model. HTM proposes one algorithm, implemented on a certain
 physical architecture, that can give rise to the Metaphors We Live By
 (http://www.amazon.com/Metaphors-We-Live-By-ebook/dp/B006KYECYA/ref=tmm_kin_title_0)
 types of thinking that human beings seem to have.

 The reason I am very interested in dynamic objects all the way down
 (types of systems VPRI is building) is because I am looking at them through
 the lens of preserving the Conscious Me. Fully dynamic objects running on
 hardware seem promising in this regard. The Actor Model also helps to frame
 some of the things through a slightly different lens, and hence my interest
 in it. Both seem to allow emergent behavior for processes that may in the
 future support Conscious Me.

 Admittedly, the interface between the two physical processes remains as a
 subject for future research.



 On Tue, Apr 23, 2013 at 10:11 AM, Loup Vaillant-David l...@loup-vaillant.fr
  wrote:

 On Tue, Apr 23, 2013 at 04:01:20PM +0200, Eugen Leitl wrote:
  On Fri, Apr 19, 2013 at 02:05:07PM -0500, Tristan Slominski wrote:
 
   That alone seems to me to dismiss the concern that mind uploading
 would not
   be possible (despite that I think it's a wrong and a horrible idea
   personally :D)

 Personally, I

[fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)

2013-04-14 Thread Tristan Slominski

 I believe our world is 'synchronous' in the sense of things happening at
 the same time in different places...


 It seems to me that you are describing a privileged frame of reference.


How is it privileged?
 Would you consider your car mechanic to have a 'privileged' frame of
 reference on our universe because he can look down at your vehicle's engine
 and recognize when components are in or out of synch? Is it not obviously
 the case that, even while out of synch, the different components are still
 doing things at the same time?
 Is there any practical or scientific merit for your claim? I believe there
 is abundant scientific and practical merit to models and technologies
 involving multiple entities or components moving and acting at the same
 time.


A mechanic is a poor example because the frame of reference is almost
irrelevant in the Newtonian view of physics. Things that are obvious in the
Newtonian view become very wrong in the Einsteinian take on physics once we
get to extremely large masses or extremely fast speeds. In my opinion, the
pattern of information distribution in actor systems via messages resembles
the Einsteinian view much more closely than the Newtonian one. When an
actor sends messages, there is an information light cone that spreads from
that actor to whatever actors it will reach. The Newtonian view is not
helpful in this environment.

Within an actor system, after its creation event, an actor is limited to
knowing the world through the messages it receives. This seems to me to be
purely empirical knowledge (i.e. knowledge coming only from sensory
experience).

This goes back to what you highlighted about my point of view:

That only matters to people who want as close to the Universe as
 possible.


So yes, you're right, I agree. I would probably remove "only" from the
above statement, but otherwise I accept your assertion.

On Sat, Apr 13, 2013 at 1:29 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sat, Apr 13, 2013 at 9:01 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think we don't know whether time exists in the first place.


 That only matters to people who want as close to the Universe as
 possible.

 To the rare scientist who is not also a philosopher, it only matters
 whether time is effective for describing and predicting behavior about the
 universe, and the same is true for notions of particles, waves, energy,
 entropy, etc..

 I believe our world is 'synchronous' in the sense of things happening at
 the same time in different places...


 It seems to me that you are describing a privileged frame of reference.


 How is it privileged?

 Would you consider your car mechanic to have a 'privileged' frame of
 reference on our universe because he can look down at your vehicle's engine
 and recognize when components are in or out of synch? Is it not obviously
 the case that, even while out of synch, the different components are still
 doing things at the same time?

 Is there any practical or scientific merit for your claim? I believe there
 is abundant scientific and practical merit to models and technologies
 involving multiple entities or components moving and acting at the same
 time.



 I've built a system that does what you mention is difficult above. It
 incorporates autopoietic and allopoietic properties, enables object
 capability security and has hints of antifragility, all guided by the actor
 model of computation.


 Impressive.  But with Turing complete models, the ability to build a
 system is not a good measure of distance. How much discipline (best
 practices, boiler-plate, self-constraint) and foresight (or up-front
 design) would it take to develop and use your system directly from a pure
 actors model?



 I don't want programming to be easier than physics. Why? First, this
 implies that physics is somehow difficult, and that there ought to be a
 better way.


 Physics is difficult. More precisely: setting up physical systems to
 compute a value or accomplish a task is very difficult. Measurements are
 noisy. There are many non-obvious interactions (e.g. heat, vibration,
 covert channels). There are severe spatial constraints, locality
 constraints, energy constraints. It is very easy for things to 'go wrong'.

 Programming should be easier than physics so it can handle higher levels
 of complexity. I'm not suggesting that programming should violate physics,
 but programs shouldn't be subject to the same noise and overhead. If we had
 to think about adding fans and radiators to our actor configurations to
 keep them cool, we'd hardly get anything done.

 I hope you aren't so hypocritical as to claim that 'programming shouldn't
 be easier than physics' in one breath then preach 'use actors' in another.
 Actors are already an enormous simplification from physics. It even
 simplifies away the media for communication.



 Whatever happened to the pursuit of Maxwell's equations for Computer
 Science? Simple is not the same as easy.


 Simple is also not the same

Re: [fonc] Meta-Reasoning in Actor Systems (was: Layering, Thinking and Computing)

2013-04-14 Thread Tristan Slominski
fair enough :D


On Sun, Apr 14, 2013 at 4:49 PM, David Barbour dmbarb...@gmail.com wrote:

 I always miss a few when making such lists. The easiest way to find new
 good questions is to try finding models that address the existing
 questions, then figuring out why you should be disappointed with it. :D

 On Sun, Apr 14, 2013 at 9:55 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 Impressive.  But with Turing complete models, the ability to build a
 system is not a good measure of distance. How much discipline (best
 practices, boiler-plate, self-constraint) and foresight (or up-front
 design) would it take to develop and use your system directly from a pure
 actors model?


 I don't know the answer to that yet. You've highlighted really good
 questions that a pure actor model system would have to answer (and I
 added a few). I believe they were:

 - composition
 - decomposition
 - consistency
 - discovery
 - persistence
 - runtime update
 - garbage collection
 - process control
 - configuration partitioning
 - partial failure
 - inlining? (optimization)
 - mirroring? (optimization)
 - interactions
 - safety
 - security
 - progress
 - extensibility
 - antifragility
 - message reliability
 - actor persistence

 Did I miss any?

 On Sat, Apr 13, 2013 at 1:29 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sat, Apr 13, 2013 at 9:01 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think we don't know whether time exists in the first place.


 That only matters to people who want as close to the Universe as
 possible.

 To the rare scientist who is not also a philosopher, it only matters
 whether time is effective for describing and predicting behavior about the
 universe, and the same is true for notions of particles, waves, energy,
 entropy, etc..

 I believe our world is 'synchronous' in the sense of things happening at
 the same time in different places...


 It seems to me that you are describing a privileged frame of reference.


 How is it privileged?

 Would you consider your car mechanic to have a 'privileged' frame of
 reference on our universe because he can look down at your vehicle's engine
 and recognize when components are in or out of synch? Is it not obviously
 the case that, even while out of synch, the different components are still
 doing things at the same time?

 Is there any practical or scientific merit for your claim? I believe
 there is abundant scientific and practical merit to models and technologies
 involving multiple entities or components moving and acting at the same
 time.



 I've built a system that does what you mention is difficult above. It
 incorporates autopoietic and allopoietic properties, enables object
 capability security and has hints of antifragility, all guided by the actor
 model of computation.


 Impressive.  But with Turing complete models, the ability to build a
 system is not a good measure of distance. How much discipline (best
 practices, boiler-plate, self-constraint) and foresight (or up-front
 design) would it take to develop and use your system directly from a pure
 actors model?



 I don't want programming to be easier than physics. Why? First, this
 implies that physics is somehow difficult, and that there ought to be a
 better way.


 Physics is difficult. More precisely: setting up physical systems to
 compute a value or accomplish a task is very difficult. Measurements are
 noisy. There are many non-obvious interactions (e.g. heat, vibration,
 covert channels). There are severe spatial constraints, locality
 constraints, energy constraints. It is very easy for things to 'go wrong'.

 Programming should be easier than physics so it can handle higher levels
 of complexity. I'm not suggesting that programming should violate physics,
 but programs shouldn't be subject to the same noise and overhead. If we had
 to think about adding fans and radiators to our actor configurations to
 keep them cool, we'd hardly get anything done.

 I hope you aren't so hypocritical as to claim that 'programming
 shouldn't be easier than physics' in one breath then preach 'use actors' in
 another. Actors are already an enormous simplification from physics. It
 even simplifies away the media for communication.



 Whatever happened to the pursuit of Maxwell's equations for Computer
 Science? Simple is not the same as easy.


 Simple is also not the same as physics.

 Maxwell's equations are a metaphor that we might apply to a specific
 model or semantics. Maxwell's equations describe a set of invariants and
 relationships between properties. If you want such equations, you'll
 generally need to design your model to achieve them.

 On this forum, 'Nile' is sometimes proffered as an example of the power
 of equational reasoning, but is a domain specific model.



 if we (literally, you and I in our bodies communicating via the
 Internet) did not get here through composition, integration, open extension
 and abstraction, then I don't know how

Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread Tristan Slominski
I had this long response drafted criticizing Bloom/CALM and Lightweight
Time Warps, when I realized that we are probably again not aligned as to
which meta level we're discussing.

(My main criticism of Bloom/CALM was its assumption of timesteps, which is
an indicator of a meta-framework relying on something else to implement it
within reality; my criticism of Lightweight Time Warps was that it is a
protocol for message-driven simulation, which also needs an implementor
that touches reality; and synchronous reactive programming has the word
synchronous right in it.) Hence my assertion that these are more meta-level
than actors.

I think you and I personally care about different things. I want a
computational model that is as close to how the Universe works as possible,
with a minimalistic set of constructs from which everything else can be
built. Hence my references to cellular automata and Wolfram's hobby of
searching for the Universe. Anything that starts out as synchronous cannot
be minimalistic, because that's not what we observe in the world; our world
is asynchronous, and if we disagree on this axiom, then so much for that :D

But actors model fails with regards to extensibility(*) and reasoning


Those are concerns of an imperator, are they not? Again, I'm not saying
you're wrong, I'm trying to highlight that our goals differ.

But, without invasive code changes or some other form of cheating (e.g.
 global reflection) it can be difficult to obtain the name of an actor that
 is part of an actor configuration.


Again, this is ignorance of the power of Object Capability and the Actor
Model itself. The above is forbidden in the actor model unless the
configuration explicitly sends you an address in the message. My earlier
comment about Akka refers to this same mistake.
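
(A minimal sketch, in plain JavaScript with invented names, of the
discipline being described: an actor can only message addresses it created
itself or received in a message; there is no global registry to reach into:)

function makeActor(behavior) {
  var queue = [], busy = false;
  var address = { send: function (msg) { queue.push(msg); pump(); } };
  function pump() {
    if (busy) return;
    busy = true;
    while (queue.length) behavior(queue.shift(), address);  // one message at a time
    busy = false;
  }
  return address;                          // the address itself is the capability
}

var logger = makeActor(function (msg) { console.log('log:', msg.text); });
var worker = makeActor(function (msg) {
  // worker can reach the logger only because its address arrived in the message
  if (msg.replyTo) msg.replyTo.send({ text: 'done: ' + msg.job });
});

worker.send({ job: 'index files', replyTo: logger });   // log: done: index files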

However, you do bring up interesting meta-level reasoning complaints
against the actor model. I'm not trying to dismiss them away or anything.
As I mentioned before, that list is a good guide as to what meta-level
programmers care about when writing programs. It would be great if actors
could make it easier... and I'm probably starting to get lost here between
the meta-levels again :/

Which brings me to a question. Am I the only one who loses track of which
meta-level I'm reasoning at, or is this a common occurrence? Bringing it
back to the topic somewhat: how do people handle reasoning about all the
different layers (meta-levels) when thinking about computing?


On Wed, Apr 10, 2013 at 12:21 PM, David Barbour dmbarb...@gmail.com wrote:

 On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


 I'm confused. Why would you be pessimistic about non-actor models when
 your argument is essentially that very simple, deterministic, non-actor
 models can be both Turing complete and address unbounded scalability?

 Hmm. Perhaps what you're really arguing is pessimistic about procedural
 - which today is the mainstream paradigm of choice. The imperial nature of
 procedures makes it difficult to compose or integrate them in any
 extensional or collaborative manner - imperative works best when there is
 exactly one imperator (emperor). I can agree with that pessimism.

 In practice, the limits of scalability are very often limits of reasoning
 (too hard to reason about the interactions, safety, security, consistency,
 progress, process control, partial failure) or limits of extensibility (to
 inject or integrate new behaviors with existing systems requires invasive
 changes that are inconvenient or unauthorized). If either of those limits
 exist, scaling will stall. E.g. pure functional programming fails to scale
 for extensibility reasons, even though it admits a lot of natural
 parallelism.

 Of course, scalable performance is sometimes the issue, especially in
 models that have global 'instantaneous' relationships (e.g. ad-hoc
 non-modular logic programming) or global maintenance issues (like garbage
 collection). Unbounded scalability requires a consideration for locality of
 computation, and that it takes time for information to propagate.

 Actors model is one (of many) models that provides some of the
 considerations necessary for unbounded performance scalability. But actors
 model fails with regards to extensibility(*) and reasoning. So do most of
 the other models you mention - e.g. cellular automatons are even less
 extensible than actors (cells only talk to a fixed set of immediate
 neighbors), though one can address that with a notion of visitors (mobile
 agents).

 From what you say, I get the impression that you aren't very aware of
 other models that might compete with actors, that attempt to address not
 only unbounded performance scalability but some of the other limiting

Re: [fonc] Layering, Thinking and Computing

2013-04-10 Thread Tristan Slominski
 I did not specify that there is only one bridge, nor that you finish
 processing a message from a bridge before we start processing another next.
 If you model the island as a single actor, you would fail to represent many
 of the non-deterministic interactions possible in the 'island as a set' of
 actors.


Ok, I think I see the distinction you're drawing here from a meta
perspective of reasoning about an actor system. I keep jumping back into
the message-only perspective, where the difference is (it seems)
unknowable. But with meta reasoning about the system, which is what I think
you've been trying to get me to see, the difference matters and complicates
reasoning about the thing as a whole.

I cannot fathom your optimism.


I think it's more of a pessimism about other models that leads me to be
non-pessimistic about actors :D. I have some specific goals I want to
achieve with computation, and actors are the only things right now that
seem to fit.

What we can say of a model is often specific to how we implemented it, the
 main exceptions being compositional properties (which are trivially a
 superset of invariants). Ad-hoc reasoning easily grows intractable and
 ambiguous to the extent the number of possibilities increases or depends on
 deep implementation details. And actors model seems to go out of its way to
 make reasoning difficult - pervasive state, pervasive non-determinism,
 negligible ability to make consistent observations or decisions involving
 the states of two or more actors.
 I think any goal to lower those comprehension barriers will lead to
 development of a new models. Of course, they might first resolve as
 frameworks or design patterns that get used pervasively (~ global
 transformation done by hand, ugh). Before RDP, there were reactive design
 patterns I had developed in the actors model while pursuing greater
 consistency and resilience.


I think we're back to different reference points, and different goals. What
follows is not a comment on what you said but my attempt to communicate why
I'm going about it the way I am and continue to resist what I'm sure are
sound software meta-reasoning practices.

My non-pessimism about actors is linked to Wolfram's cellular-automaton
Turing machine (
http://blog.wolfram.com/2007/10/24/the-prize-is-won-the-simplest-universal-turing-machine-is-proved/).
My continuing non-pessimism about interesting computation being possible in
actors comes from his search for our universe (
http://blog.wolfram.com/2007/09/11/my-hobby-hunting-for-our-universe/).
Cellular automata are not actors, I get that, but these to me are the
hints. Another hint is the structure of HTMs and the algorithm reverse
engineered from the human neocortex (
https://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf).
Another hint is what we call mesh networks. An overwhelming consideration
across all those hints is unbounded scalability.

Cheers,

Tristan

On Tue, Apr 9, 2013 at 6:25 PM, David Barbour dmbarb...@gmail.com wrote:

 On Tue, Apr 9, 2013 at 12:44 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 popular implementations (like Akka, for example) give up things such as
 Object Capability for nothing.. it's depressing.


 Indeed. Though, frameworks shouldn't rail too much against their hosts.



 I still prefer to model them as in every message is delivered. It wasn't
 I who challenged this original guaranteed-delivery condition but Carl
 Hewitt himself.


 It is guaranteed in the original formalism, and even Hewitt can't change
 that. But you can model loss of messages (e.g. by explicitly modeling a
 lossy network).


 You've described composing actors into actor configurations :D, from the
 outside world, your island looks like a single actor.


 I did not specify that there is only one bridge, nor that you finish
 processing a message from a bridge before we start processing another next.
 If you model the island as a single actor, you would fail to represent many
 of the non-deterministic interactions possible in the 'island as a set' of
 actors.


 I don't think we have created enough tooling or understanding to fully
 grok the consequences of the actor model yet. Where's our math for emergent
 properties and swarm dynamics of actor systems? [..] Where is our reasoning
 about symbiotic autopoietic and allopoietic systems? This is, in my view,
  where the actor systems will shine


 I cannot fathom your optimism.

 What we can say of a model is often specific to how we implemented it, the
 main exceptions being compositional properties (which are trivially a
 superset of invariants). Ad-hoc reasoning easily grows intractable and
 ambiguous to the extent the number of possibilities increases or depends on
 deep implementation details. And actors model seems to go out of its way to
 make reasoning difficult - pervasive state, pervasive non-determinism,
 negligible ability to make consistent observations or decisions involving

Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread Tristan Slominski
 in tackling. It should be interesting. This discussion has certainly
started to frame problems and challenges that I will need to address in
order to create an actor system that would meet your usability (for lack of
a better word for all of the above) criteria.

For such properties we *must* reason in an external language/system,
 since Goedel showed that such loops cannot be closed without producing
 inconsistency (or the analogous 'bad' outcome).


Thank you Chris, your highlights tremendously helped to anchor my mind at
the correct layer of the conversation.


On Mon, Apr 8, 2013 at 11:37 PM, David Barbour dmbarb...@gmail.com wrote:


 On Mon, Apr 8, 2013 at 6:29 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 The problem with this, that I see, is that [..] in my physics view of
 actors [..] Messages could be lost.


 Understanding computational physics is a good thing. More people should do
 it. A couple times each year I end up in discussions with people who think
 software and information aren't bound by physical laws, who have never
 heard of Landauer's principle, who seem to mistakenly believe that
 distinction of concepts (like information and representation, or mind and
 body) implies they are separable.

 However, it is not correct to impose physical law on the actors model.
 Actors is its own model, apart from physics.

 A good question to ask is: can I correctly and efficiently implement
 actors model, given these physical constraints? One might explore the
 limitations of scalability in the naive model. Another good question to ask
 is: is there a not-quite actors model suitable for a more
 scalable/efficient/etc. implementation. (But note that the not-quite
 actors model will never quite be the actors model.) Actors makes a
 guarantee that every message is delivered (along with a nigh uselessly weak
 fairness property), but for obvious reasons guaranteed delivery is
 difficult to scale to distributed systems. And it seems you're entertaining
 whether *ad-hoc message loss* is suitable.

 That doesn't mean ad-hoc message-loss is a good choice, of course. I've
 certainly entertained that same thought, as have other, but we can't trust
 every fool thought that enters our heads.

 Consider an alternative: explicitly model islands (within which no message
 loss occurs) and serialized connections (bridges) between them. Disruption
 and message loss could then occur in a controlled manner: a particular
 bridge is lost, with all of the messages beyond a certain point falling
 into the ether. Compared to ad-hoc message loss, the bridged islands design
 is much more effective for reasoning about and recovering from partial
 failure.
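
(A minimal sketch, in plain JavaScript with invented names, of the
bridged-islands idea quoted above: delivery inside an island is reliable,
and loss happens only at the granularity of a whole bridge:)

function makeBridge(remoteIsland) {
  var up = true, queue = [];
  return {
    send:    function (msg) { if (up) queue.push(msg); },   // if the bridge is down, the message is gone
    flush:   function () { while (up && queue.length) remoteIsland.deliver(queue.shift()); },
    disrupt: function () { up = false; queue = []; }        // one failure unit, easy to reason about
  };
}

var islandB = { deliver: function (msg) { console.log('B got', msg); } };
var bridge  = makeBridge(islandB);
bridge.send('hello');
bridge.flush();        // -> B got hello
bridge.disrupt();
bridge.send('lost');   // dropped: losing the bridge is the only way to lose a message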

 One could *implement* either of those loss models within actors model,
 perhaps requiring some global transforms. But, as we discussed earlier
 regarding composition, the implementation is not relevant while reasoning
 with abstractions.

 Reason about the properties of each abstraction or model. Separately,
 reason about whether the abstraction can be correctly (and easily,
 efficiently, scalably) implemented. This is 'layering' at its finest.


 This is another hint that we might have a different mental model. I don't
 find concurrency within an actor interesting. Actors can only process one
 message at a time. So concurrency is only relevant in that sending messages
 to other actors happens in parallel. That's not an interesting property.


 I find actors can only process one message at a time is an interesting
 constraint on concurrency, and certainly a useful one for reasoning. And
 it's certainly relevant with respect to composition (ability to treat an
 actor configuration as an actor) and decomposition (ability to divide an
 actor into an actor configuration).

 Do you also think zero and one are uninteresting numbers? Well, de
 gustibus non est disputandum.



 Actor behavior is a mapping function from a message that was received to
 creation of finite number of actors, sending finite number of messages, and
 changing own behavior to process the next message. This could be a simple
 dictionary lookup in the degenerate case. What's there to reason about in
 here?


 Exactly what you said: finite, finite, sequential - useful axioms from
 which we can derive theorems and knowledge.



 A fact is that programming is NOT like physics,


 This is a description


 Indeed. See how easily we can create straw-man arguments with which we can
 casually agree or disagree by stupidly taking sentence fragments out of
 context? :-)



 I like actors precisely because I CAN make programming look like physics.


 I am fond of linear logic and stream processing for similar reasons. I
 certainly approve, in a general sense, of developing models designed to
 operate within physical constraints. But focusing on the aspects I enjoy,
 or disregarding those I find uninteresting, would certainly put me at risk
 of reasoning about an idealized model a few handwaves removed from

Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread Tristan Slominski

 Therefore, with respect to this property, you cannot (in general) reason
 about or treat groups of two actors as though they were a single actor.


This is incorrect; well, it's based on a false premise. This part is
incorrect/invalid (an appropriate word escapes me):

But two actors can easily (by passing messages in circles) send out an
 infinite number of messages to other actors upon receiving a single message.


I see it as the equivalent of saying: "I can write an infinite loop,
therefore I cannot reason about functions."
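
(A minimal sketch, in plain JavaScript with invented names, of the
"messages in circles" point: a single external message makes two actors
message each other indefinitely; the circle is capped here only so the
demo halts:)

function makeActor(name, behavior) {
  return { name: name, send: function (msg) { setTimeout(function () { behavior(msg); }, 0); } };
}

var rounds = 0;
var ping = makeActor('ping', function (msg) {
  if (rounds++ < 5) pong.send({ from: ping });   // unbounded in principle; capped for the demo
});
var pong = makeActor('pong', function (msg) {
  console.log('pong heard from', msg.from.name);
  ping.send({ from: pong });
});

ping.send({ from: null });   // one external message starts the circle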

As you note, actors are not unique in their non-termination. But that
 misses the point. The issue was our ability to reason about actors
 compositionally, not whether termination is a good property.


The above statement, in my mind, sort of misunderstands reasoning about
actors. What does it mean for an actor to terminate? The _only_ way you
will know is if the actor sends you a message that it's done. Any reasoning
about actors and their compositionality must be done in terms of messages
sent and received. Reasoning in other ways does not make sense in the actor
model (as far as I understand). This is how I model it in my head:

It's sort of the analog of asking what happened before the Big Bang. Well,
there was no time before the Big Bang, so asking about "before" doesn't
make sense. In a similar way, reasoning about actor systems with anything
except messages doesn't make sense. To use another physics analogy, there
is no privileged frame of reference in actors; you only get messages. It's
actually a really well abstracted system that requires no other
abstractions. Actors and actor configurations (groupings of actors) become
indistinguishable, because they are logically equivalent for reasoning
purposes. The only way to interact with either is to send it a message and
to receive a message. Whether it's millions of actors or just one doesn't
matter, because *you can't tell the difference* (remember, there's no
privileged frame of reference). To instrument an actor configuration, you
need to put actors in front of it. But to the user of such an instrumented
configuration, they won't be able to tell the difference. And so on and so
forth: It's Actors All The Way Down.
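
(A minimal sketch, in plain JavaScript with invented names, of why an actor
and an actor configuration are indistinguishable by messages alone:)

function singleDoubler(msg) { msg.replyTo.send({ value: msg.value * 2 }); }

function makeConfiguration() {             // two internal actors behind one address
  var second = { send: function (m) { m.replyTo.send({ value: m.value + m.value }); } };
  return { send: function (m) { second.send(m); } };   // facade forwards; callers can't tell
}

var printer = { send: function (m) { console.log('got', m.value); } };
var a = { send: singleDoubler };
var b = makeConfiguration();

a.send({ value: 21, replyTo: printer });   // got 42
b.send({ value: 21, replyTo: printer });   // got 42 -- same observable behavior, by messages alone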

...

I think we found common ground/understanding on other things.


On Sun, Apr 7, 2013 at 6:40 PM, David Barbour dmbarb...@gmail.com wrote:

 On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 stability is not necessarily the goal. Perhaps I'm more in the biomimetic
 camp than I think.


 Just keep in mind that the real world has quintillions of bugs. In
 software, humans are probably still under a trillion.  :)


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread Tristan Slominski
 make a clear distinction about
expressiveness and properties of a language, vs. those of an actor model of
computation. Perhaps you're describing problems with the platonic model
that are hard to express in an implementing language, and I haven't
grokked this point until now? I'm very much interested in hearing about the
systemic problems of actors because I'd like to figure out any solutions
for such from my physics perspective on the problem.

As a side note, I think this still falls into Layering, Thinking and
Computing. But perhaps if we take this further it ought to be a different
thread?


On Mon, Apr 8, 2013 at 6:51 PM, David Barbour dmbarb...@gmail.com wrote:

 On Mon, Apr 8, 2013 at 2:52 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 This is incorrect, well, it's based on a false premise.. this part is
 incorrect/invalid?


 A valid argument with a false premise is called an 'unsound' argument. (
 http://en.wikipedia.org/wiki/Validity#Validity_and_soundness)



 What does it mean for an actor to terminate. The _only_ way you will
 know, is if the actor sends you a message that it's done.


 That is incorrect. One can also know things via static or global knowledge
 - e.g. type systems, symbolic analysis, proofs, definitions. Actors happen
 to be defined in such a manner to guarantee progress and certain forms of
 fairness at the message-passing level. From their definition, I can know
 that a single actor will terminate (i.e. finish processing a message),
 without ever receiving a response. If it doesn't terminate, then it isn't
 an actor.

 In any case, non-termination (and our ability or inability to reason about
 it) was never the point. Composition is the point. If individual actors
 were allowed to send an infinite number of messages in response to a single
 message (thus obviating any fairness properties), then they could easily be
 compositional with respect to that property.

 Unfortunately, they would still fail to be compositional with respect to
 other relevant properties, such as serializable state updates, or message
 structure.



 Any reasoning about actors and their compositionality must be done in
 terms of messages sent and received. Reasoning in other ways does not make
 sense in the actor model (as far as I understand).


 Carl Hewitt was careful to include certain fairness and progress
 properties in the model, in order to support a few forms of system-level
 reasoning. Similarly, the notion that actor-state effectively serializes
 messages (i.e. each message defines the behavior for processing the next
 message) is important for safe concurrency within an actor. Do you really
 avoid all such reasoning? Or is such reasoning simply at a level that you
 no longer think about it consciously?



 there is no privileged frame of reference in actors, you only get messages


 I'm curious what your IDE looks like. :-)

 A fact is that programming is NOT like physics, in that we do have a
 privileged frame of reference that is only compromised at certain
 boundaries for open systems programming. It is this frame of reference that
 supports abstraction, refactoring, static typing, maintenance,
 optimizations, orthogonal persistence, process control (e.g. kill,
 restart), live coding, and the like.

  If you want an analogy, it's like having a 3D view of a 2D world. As
  developers, we often use our privilege to examine our systems from frames
  that no actor can achieve within our model.

 This special frame of reference isn't just for humans, of course. It's
 just as useful for metaprogramming, e.g. for those 'layered' languages with
 which Julian opened this topic.


 Actors and actor configurations (groupings of actors)
 become indistinguishable, because they are logically equivalent for
 reasoning purposes. The only way to interact with either is to send it a
 message and to receive a message.


 It is true that, from within the actor system, we cannot distinguish an
 actor from an actor configuration.



  It's Actors All The Way Down.


 Actors don't have clear phase separation or staging. There is no down,
 just an ad-hoc graph. Also, individual actors often aren't decomposable
 into actor configurations. A phrase I favored while developing actors
 systems (before realizing their systemic problems) was It's actors all the
 way out.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski
Thanks for the book reference, I'll check it out

I guess my question mostly relates to whether or not learning more
 languages than one, (perhaps when one gets to about three different
 languages to some level of proficiency and deep study), causes one to form
 a pre/post-linguistic awareness as I referenced in my original post.


Hmm... I probably fit the multiple-languages criterion. I tried to expose
myself to different language families, so I have Slavic, Germanic, Asiatic,
and Arabic familiarity to various degrees (sorry if those aren't correct
groupings :D). I'm fluent in only two, but both of those were learned as a
kid.

I haven't thought about what you're describing. Perhaps it was the
fish-thinking-about-water phenomenon? I assumed that everyone thinks in
free form and then solidifies it into language when necessary for clarity
or communication. However, I do recall my surprise at experiencing first
hand all the different grammar structures that still allow people to
communicate well. From what I can tell about my thoughts, there's
definitely some abstraction going on, where the thought comes before the
words, and then the word shuffle needs to happen to fit the grammar
structure of the language being used.

So there appear to be at least two modes I think in. One being almost
intuitive and without form, the other being expressed through language.
(I've written poetry that other people liked, so I'm ok at both). However,
I thought that the free form thought had more to do with being good at
mathematics and the very abstract thought that promotes. I didn't link it
to knowing multiple languages.

So what about mathematical thinking? It seems to do more for my abstract
thinking than multiple languages do. Trying to imagine mathematical
structures and concepts that are often impossible to present in the
solidity of the real world did more to loosen my thought boundaries and
free me from language structure than any language I learned, as far as I
can tell.

Mathematics can also be considered a language, and there are different
mathematical languages as well. I experienced this first hand. Perhaps I'm
not as smart as some people, but the biggest mental challenge, and one I
had to give up on to maintain my sanity (literally), was learning physics
and computer science at the same time and for the first time. It was
overwhelming. Where I studied, physics math was all continuous mathematics.
In contrast, Computer Science math was all discrete mathematics. On a
physics quiz, my professor added a little extra credit question and asked
the students to describe how they would put together a model of the solar
system. Even though the quiz was anonymous, he knew exactly who I was,
because I was the only one to describe an algorithm for a computer
simulation. The others described a mechanical model. There was also
something very weird happening to my brain at the time. The cognitive
dissonance in switching between the discrete and continuous math paradigms
was overwhelming, to the point where I ended up picking the
discrete/computer-science path and gave up on physics, at least while an
undergrad.

I don't think knowing only one language is bad. It's sort of like saying,
oh, you're only a doctor, and you do nothing else. However, there appears
to be something to knowing multiple languages.

Part of the reason why I mentioned the metaphor stuff in the first place,
is because it resonates with me in what I understand about how the human
neocortex works. The most compelling explanation for the neocortex that I
found is Jeff Hawkins' Hierarchical Temporal Memory model. A similar
concept also came up in David Gelernter's Mirror Worlds. And that is, in
very general terms, that our neocortex is a pattern recognition machine
working on sequences of spatial and temporal neuron activation. There is
nothing in it at birth, and over our lifetimes, it fills up with memories
of more and more complex and abstract sequences. Relating this back to our
language discussion, with that as the background, it seems intuitive that
knowing another language, i.e. memorizing a different set of sequences,
will enable different patterns of thought, as well as more modes of
expression.

As to building a series of tiny LISPs: I see that as similar to arguing
for knowing only one family of languages. We would be missing entire
structures and modes of expression by concentrating only on LISP variants,
would we not? The Actor Model resonates deeply with me, and sometimes I
have trouble explaining some obvious things that arise from thinking in
Actors to people unfamiliar with that model of computation. I believe part
of the reason is that a lot of the computation happens as an emergent
property of the invisible web of message traffic, and not in the procedural
actor behavior. How would one program a flock in LISP?

On Sun, Apr 7, 2013 at 3:47 AM, Julian Leviston jul...@leviston.net wrote:


 On 07/04/2013, at 1:48 PM, Tristan Slominski

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski
Very interesting, David. I'm subscribed to the RSS feed, but I don't think
I've read that one yet.

I agree that, largely, we can use more work on languages, but it seems that
making the programming language responsible for solving all programming
problems is somewhat narrow.

A friend of mine, Janelle Klein, is in the process of publishing a book
called The Idea Flow Method: Solving the Human Factor in Software
Development (https://leanpub.com/ideaflow). The method she arrived at after
leading software teams for a while, to my mind, ended up mapping a software
organization onto how a human neocortex works (as opposed to the typical
Lean methodology of mapping a software organization onto a factory).

The Idea Flow Method, in my poor summary, focuses on what matters when
building software. And what appears to matter cannot be determined at the
moment of writing the software, but only after multiple iterations of
working with the software. So, for example, imagine that I write a really
crappy piece of code that works, in a corner of the program that nobody
ever ends up looking in, nobody understands it, and it just works. If
nobody ever has to touch it, and no bugs appear that have to be dealt with,
then as far as the broader organization is concerned, it doesn't matter how
beautiful that code is, or which level of Dante's Inferno it hails from. On
the other hand, if I write a really crappy piece of code that breaks in
ambiguous ways, and people have to spend a lot of time understanding it and
debugging, then it's really important how understandable that code is, and
time should probably be put into making it good. (Janelle's method
provides a tangible way of tracking this type of code importance).
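(To be clear, what follows is not Janelle's method, just a hypothetical sketch, in JavaScript, of the general idea: record where troubleshooting pain accumulates, and let that measurement, after the fact, tell you which code is important enough to clean up.)

// Hypothetical sketch: accumulate "friction" (minutes spent confused or
// debugging) per file, and rank files by it afterwards. Importance is
// measured from experience, not guessed at the moment of writing.
const friction = new Map(); // file -> accumulated minutes

function logFriction(file, minutes) {
  friction.set(file, (friction.get(file) || 0) + minutes);
}

function hotspots() {
  return [...friction.entries()].sort((a, b) => b[1] - a[1]);
}

logFriction('src/billing.js', 90);
logFriction('src/crusty-corner.js', 0); // ugly, but nobody ever touches it
logFriction('src/billing.js', 45);

console.log(hotspots()); // [ [ 'src/billing.js', 135 ], [ 'src/crusty-corner.js', 0 ] ]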

Of course, I can only defend the "deal with it if it breaks" strategy so
far. Every component that is built shapes its surface area, and other
components need to mold themselves to it. Thus, if one of them is wrong, it
gets non-linearly worse the more things are shaped to the wrong component,
and then to those, etc. We then end up thinking about protocols,
objects, actors, and so on... and I end up agreeing with you that
composition becomes the most desirable feature of a software system. I
think in terms of actors/messages first, so no argument there :D

As far as applying metaphor to programming... from the book I referenced,
it appears that the crucial thing about metaphor is the ability to pick and
choose pieces from different metaphors to describe a new concept. Depending
on what we want to compute/communicate we can attribute to ideas the
properties of commodities, resources, money, plants, products, cutting
instruments. To me, the most striking thing about this is the absence of
any strict hierarchy, i.e., no strict hierarchical inheritance. The
ability to mix and match various attributes together as needed seems to
most closely resemble how we think. That's composition again, yes?
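A tiny JavaScript sketch of that kind of composition (the attribute names are invented): a concept borrows whichever attributes fit, with no single parent in sight.

// Mix and match attributes from several "metaphors" without a strict hierarchy.
const asCommodity = { trade(other) { return 'traded with ' + other; } };
const asResource  = { consume(amount) { this.remaining -= amount; } };
const asPlant     = { grow() { this.size = (this.size || 1) * 2; } };

// "idea" picks up properties of a commodity, a resource, and a plant at once.
const idea = Object.assign({ remaining: 10, size: 1 }, asCommodity, asResource, asPlant);

idea.consume(3);
idea.grow();
console.log(idea.trade('another idea'), idea.remaining, idea.size);
// -> 'traded with another idea' 7 2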

On Sat, Apr 6, 2013 at 11:04 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sat, Apr 6, 2013 at 8:48 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 a lot of people seem to have the opinion the language a person
 communicates in locks them into a certain way of thinking.


 There is an entire book on the subject, Metaphors We Live By, which
 profoundly changed how I think about thinking and what role metaphor plays
 in my thoughts. Below is a link to what looks like an article by the same
 title from the same authors.

 http://www.soc.washington.edu/users/brines/lakoff.pdf


 I'm certainly interested in how metaphor might be applied to programming.
 I write, regarding 'natural language' programming [1] that metaphor and
 analogy might be addressed with a paraconsistent logic - i.e. enabling
 developers to apply wrong functions but still extract some useful meaning
 from them.

 [1]
 http://awelonblue.wordpress.com/2012/08/01/natural-programming-language/


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski

 A purpose of language is to convey how to solve problems. You need to look
 for
 robust solution. You must deal with that real world is inprecise. Just
 transforming
 problem to words causes inaccuracy. when you tell something to many
 parties each of them wants to optimize something different. You again
 need flexibility.


Ondrej, have you come across Nassim Nicholas Taleb's Antifragility concept?
The reason I ask is that we seem to agree on what's important in
solving problems. However, robustness is a limited goal, and antifragility
seems a much more worthy one.

In short, the concept can be expressed in opposition to how we usually
think of fragility. And the opposite of fragility is not robustness. Nassim
argues that we really didn't have a name for the concept, so he called it
antifragility.

fragility - quality of being easily damaged or destroyed.
robust - 1. Strong and healthy; vigorous. 2. Sturdy in construction.

Nassim argues that the opposite of easily damaged or destroyed [in the face
of variability] is actually getting better [in the face of variability], not
just remaining robust and unchanging. This getting better is what he called
antifragility.

Below is a short summary of what antifragility is. (I would also encourage
reading Nassim Taleb directly; a lot of people, perhaps myself included,
tend to misunderstand and misrepresent this concept.)

http://www.edge.org/conversation/understanding-is-a-poor-substitute-for-convexity-antifragility
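For a toy numeric illustration of the convexity point (my sketch, not Taleb's definition): with a convex payoff, the average outcome under a variable input beats the outcome at the average input, so variability is a net gain.

// Jensen's inequality in one breath: mean(f(x)) > f(mean(x)) for convex f.
const convexPayoff = x => x * x;                 // gains accelerate
const inputs = [1, 2, 3, 4, 5];                  // a variable environment
const mean = a => a.reduce((s, v) => s + v, 0) / a.length;

const averageOfPayoffs = mean(inputs.map(convexPayoff)); // 11
const payoffAtAverage  = convexPayoff(mean(inputs));     // 9

console.log(averageOfPayoffs > payoffAtAverage); // true: this payoff gains from variability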





On Sun, Apr 7, 2013 at 4:25 AM, Ondřej Bílka nel...@seznam.cz wrote:

 On Sat, Apr 06, 2013 at 09:00:26PM -0700, David Barbour wrote:
 On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston [1]
 jul...@leviston.net
 wrote:
 
   LISP is perfectly precise. It's completely unambiguous. Of course,
   this makes it incredibly difficult to use or understand sometimes.
 
 Ambiguity isn't necessarily a bad thing, mind. One can consider it an
 opportunity: For live coding or conversational programming, ambiguity
 enables a rich form of iterative refinement and conversational
 programming
 styles, where the compiler/interpreter fills the gaps with something
 that
 seems reasonable then the programmer edits if the results aren't quite
 those desired. For mobile code, or portable code, ambiguity can
 provide
 some flexibility for a program to adapt to its environment. One can
 consider it a form of contextual abstraction. Ambiguity could even
 make a
 decent target for machine-learning, e.g. to find optimal results or
 improve system stability [1].
 [1] [2]
 http://awelonblue.wordpress.com/2012/03/14/stability-without-state/
 

 IMO unambiguity is property that looks good only in the paper.

 When you look to perfect solution you will get perfect solution for
 wrong problem.

 A purpose of language is to convey how to solve problems. You need to look
 for
 robust solution. You must deal with that real world is inprecise. Just
 transforming
 problem to words causes inaccuracy. when you tell something to many
 parties each of them wants to optimize something different. You again
 need flexibility.


 This is problem of logicians that they did not go into this direction
 but direction that makes their results more and more brittle.
 Until one can answer questions above along with how to choose between
 contradictrary data what is more important there is no chance to get
 decent AI.

 What is important is cost of knowledge. It has several important
 properties, for example that in 99% of cases it is negative.

 You can easily roll dice 50 times and make 50 statements about them that
 are completely unambiguous and completely useless.



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski
 that 'hit back' when
 attacked, at least as a default policy.


The adaptation does not have to be so drastic :D. Think of a read caching
system. As long as it has the capacity, the more requests are executed, the
more of them are cached, and the better the system responds overall. That's
an example of getting better in the face of variability and demand. That,
along with hot-spot detection, horizontal scaling in response, and load
balancing across the new instances, is an example of a system improving when
stressed. (Although, on second thought, in precise antifragile terms these
examples might not qualify, as I think they may lack the convexity Nassim
describes.)
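A minimal sketch of the caching example in JavaScript (the backing lookup is a stand-in I made up):

// Read-through cache: the more a key is requested, the more likely it is
// already cached, so behaviour under demand improves (up to capacity).
function createReadCache(fetchFromBackend, capacity) {
  const cache = new Map();
  return async function read(key) {
    if (cache.has(key)) return cache.get(key);   // hit: cheap
    const value = await fetchFromBackend(key);   // miss: expensive
    if (cache.size >= capacity) {
      cache.delete(cache.keys().next().value);   // evict the oldest entry
    }
    cache.set(key, value);
    return value;
  };
}

// Usage: wrap a slow lookup; repeated reads of hot keys stop hitting the backend.
const slowLookup = async key => 'value-of-' + key; // pretend this is a network call
const read = createReadCache(slowLookup, 100);
read('user:42').then(v => console.log(v));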

Lastly lastly :D ... The Robust Systems paper was a good read:

We are taught that the "correctness" of software is paramount, and that
 correctness is to be achieved by establishing formal specification of
 components and systems of components and by providing proofs that the
 specifications of a combination of components are met by the specifications
 of the components and the pattern by which they are combined. I assert that
 this discipline enhances the brittleness of systems. In fact, to make truly
 robust systems we must discard such a tight discipline


I often find myself deliberately disregarding correctness, and thinking
about how correctness can become an emergent property of the system. As in
biological forms, the resulting system is likely to be incredibly complex
and hard to understand, but at that point, perhaps we are in the essential
complexity domain, instead of the accidental complexity domain.

It is interesting that the paper reads almost as danger ... danger ...
danger... this is dangerous... danger.. risk.. danger. I love it. Here be
dragons - that's usually where the interesting things are.

Also, this was very interesting (from the paper) and related to the topic
at hand:

Dynamically configured interfaces



How can entities talk when they don't share a common language? A
 computational experiment by Simon Kirby has given us an inkling of how
 language may have evolved. In particular, Kirby [16] showed, in a very
 simplified situation, that if we have a community of agents that share a few
 semantic structures (perhaps by having common perceptual experiences) and
 that try to make and use rules to parse each other's utterances about
 experiences they have in common, then the community eventually converges so
 that the members share compatible rules. While Kirby's experiment is very
 primitive, it does give us an idea about how to make a general mechanism to
 get disparate modules to cooperate.

 Jacob Beal [5] extended and generalized the work of Kirby. He built and
 demonstrated a system that allowed computational agents to learn to
 communicate with each other through a sparse but uncontrolled communication
 medium. The medium has many redundant channels, but the agents do not have
 an ordering on the channels, or even an ability to name them. Nevertheless,
 employing a coding scheme reminiscent of Calvin Mooers's Zatocoding (an
 early kind of hash coding), where descriptors of the information to be
 retrieved are represented in the distribution of notches on the edge of a
 card, Mr. Beal exchanges the sparseness and redundancy of the medium for
 reliable and reconfigurable communications of arbitrary complexity. Beal's
 scheme allows multiple messages to be communicated at once, by
 superposition, because the probability of collision is small. Beal has shown
 us new insights into this problem, and the results may be widely applicable
 to engineering problems.
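(A loose illustration of that Zatocoding/superposition idea in JavaScript, not Beal's actual mechanism: several descriptors are superposed as set bits on one wide, redundant medium, and a receiver can still test for each one because the code stays sparse.)

// Superpose hashed descriptors onto a wide bit-vector "medium"; membership
// tests stay reliable because accidental collisions are improbable.
const crypto = require('crypto');
const WIDTH = 4096; // redundant channels
const K = 8;        // positions used per descriptor

function positions(descriptor) {
  const out = [];
  for (let i = 0; i < K; i++) {
    const h = crypto.createHash('sha256').update(descriptor + ':' + i).digest();
    out.push(h.readUInt32BE(0) % WIDTH);
  }
  return out;
}

function superpose(medium, descriptor) {
  for (const p of positions(descriptor)) medium[p] = 1;
}

function probablyPresent(medium, descriptor) {
  return positions(descriptor).every(p => medium[p] === 1);
}

const medium = new Uint8Array(WIDTH);
['temperature:high', 'valve:open', 'alarm:off'].forEach(d => superpose(medium, d));

console.log(probablyPresent(medium, 'valve:open'));   // true
console.log(probablyPresent(medium, 'valve:closed')); // almost certainly false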


Another inspiration for me in similar fashion was the Conscientious
Software paper:
http://pleiad.dcc.uchile.cl/_media/bic2007/papers/conscientioussoftwarecc.pdf


On Sun, Apr 7, 2013 at 10:50 AM, David Barbour dmbarb...@gmail.com wrote:


 On Sun, Apr 7, 2013 at 5:44 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I agree that, largely, we can use more work on languages, but it seems
 that making the programming language responsible for solving all
 programming problems is somewhat narrow.


 I believe each generation of languages should address a few more of the
 cross-cutting problems relative to their predecessors, else why the new
 language?

 But to address a problem is not necessarily to automate the solution, just
 to push solutions below the level of conscious thought, e.g. into a path of
 least resistance, or into simple disciplines that (after a little
 education) come as easily and habitually (no matter how unnaturally) as
 driving a car or looking both ways before crossing a street.


 imagine that I write a really crappy piece of code that works, in a
 corner of the program that nobody ever ends up looking in, nobody
 understands it, and it just works. If nobody ever has to touch it, and no
 bugs appear that have to be dealt with, then as far as the broader
 organization is concerned, it doesn't matter how beautiful that code is, or
 which level

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski

 I believe you imagine an actor simply demuxing or unzipping messages to
 two or more other actors. But such a design does not result in the same
 concurrency, consistency, or termination properties as a single actor,
 which is why you cannot (correctly) treat the grouping as a single actor.


Well... composing multiple functions does not result in the same
termination properties as a single function either, does it? Especially
when we are composing nondeterministic computations? (real question, not
rhetorical) I'm having difficulty seeing how this is unique to actors.


 but it is my understanding that most actors languages and implementations
 are developed with an assumption of ambient authority


Well, yes, and it's a bad idea if you want object capability security. I
implemented object capability security in a system that hosted JavaScript
actors on Node.js. As long as you have a vm to interpret code in, you can
control what it can and cannot access. I'm looking toward working on
lower-level concepts in that direction. So far I've only addressed this
problem from a distributed-systems perspective. I've considered it (but
haven't implemented it) at the language/operating-system layer yet, so I may
be lacking a perspective that you already have. I'll know more in the future.
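For a concrete flavor of the kind of pattern involved, here is a minimal sketch of a revocable forwarder in JavaScript (hypothetical names; not the actual system I described above): the holder only ever gets the proxy, and the granter can cut access later without the holder's cooperation.

// Revocable forwarder: hand out `forwarder` as the capability, keep `revoke`.
function makeRevocable(target) {
  let current = target;
  const forwarder = new Proxy({}, {
    get(_obj, prop) {
      if (current === null) throw new Error('capability revoked');
      const value = current[prop];
      return typeof value === 'function' ? value.bind(current) : value;
    },
  });
  return { forwarder, revoke() { current = null; } };
}

const file = { read: () => 'secret contents' };
const { forwarder, revoke } = makeRevocable(file);
console.log(forwarder.read()); // 'secret contents'
revoke();
// forwarder.read() would now throw 'capability revoked'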

You can thus speak of composing lambdas with functional composition, or
 composing diagrams with graphics composition operations. But neither says
 nothing at all about whether actors, or Kernel, or C or whatever they're
 implemented in is composable. Composition is a property of abstractions.


That's fair. You're right.

 But how do you weigh freedom to make choices for the task at hand even
 if they're bad choices for the tasks NOT immediately at hand (such as
 integration, maintenance)?


For this, I think all of us fall back on heuristics as to what's a good
idea. But those heuristics come from past experience. This ties back
somewhat to what I mentioned about The Idea Flow method, and the difficulty
of being able to determine some of those things ahead of time. My personal
heuristic would argue that integration and maintenance should be given
greater consideration. But I learned that by getting hit in the face with
maintenance and integration problems. There are certainly approaches that
are better than a random walk of choices, but which one of those approaches
is best (Agile, XP, Lean, Lean Startup, Idea Flow Method, etc.) seems to
still be an open question.

At the low level of TCP/IP, there is no *generic* way to re-establish a
 broken connection or recover from missing datagrams. Each application or
 service has its own ad-hoc, inconsistent solution. The World Wide Web that
 you might consider resilient is so primarily due to representational state
 transfer (REST) styles and disciplines, which is effectively about giving
 the finger to events. (Events involves communicating and effecting
 *changes* in state, whereas REST involves communicating full
 *representations* of state.) Also, by many metrics (dead links, behavior
 under disruption, persistent inconsistency) neither the web nor the
 internet is especially resilient (though it does degrade gracefully).


Hmm. Based on your response, I think that we define event systems
differently. I'm not saying I'm right, but it feels as though I might be
picking and choosing the levels of abstraction where events occur. The
system as a whole seems to me to be resilient, and I don't see how the lack
of a generic way to re-establish a connection at the TCP/IP level degrades
that resiliency. Multiple layers are at play here, and they come together;
no one layer would be successful by itself. I still think this is my failure
to communicate my view of the Internet, and not a failing in the system
itself. I'll have to do some more thinking on how to express it better to
address what you've presented.
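Maybe a small sketch helps show what I mean by the layers coming together (JavaScript; the flaky call is a stand-in I invented): the transport offers no generic recovery, so a thin application-layer wrapper supplies its own ad-hoc retry, and the resilience we observe is the sum of such layers.

// Application-layer retry with exponential backoff over an unreliable call.
async function withRetry(unreliableCall, attempts = 5, delayMs = 100) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await unreliableCall();
    } catch (err) {
      if (i === attempts - 1) throw err;                        // eventually give up
      await new Promise(r => setTimeout(r, delayMs * 2 ** i));  // back off and retry
    }
  }
}

// Usage (hypothetical endpoint): each application wraps its own calls;
// nothing generic exists at the transport level.
// withRetry(() => fetchOverFlakyNetwork('http://example.org/resource'))
//   .then(console.log, console.error);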

People who focus on the latter often use phrases such as 'convergence',
 'stability', 'asymptotic', 'bounded error', 'eventual consistency'. It's
 all very interesting. But it is historically a mistake to disregard
 correctness then *handwave* about emergent properties; you really need a
 mathematical justification even for weak correctness properties.
 Biologically inspired models aren't always stable (there are all sorts of
 predator-prey cycles, extinctions, etc.) and the transient stability we
 observe is often anthropocentric.


Agreed that disregarding correctness and *handwaving* emergent properties
is a bad idea. It was more a comment on my starting state of mind when
approaching a problem than a rigorous approach to solving it. Although,
stability is not necessarily the goal. Perhaps I'm more in the biomimetic
camp than I think.

On Sun, Apr 7, 2013 at 3:47 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sun, Apr 7, 2013 at 10:40 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:


 Consider: You can't treat a group of two actors as a single actor.


 You can

Re: [fonc] Natural Language Wins

2013-04-04 Thread Tristan Slominski
Thus a major improvement for world computing would be careful adherence to
a world wide natural language

That seems to be contrary to how the world works. We can't even agree
whether to read bytes from right to left or left to right (
http://en.wikipedia.org/wiki/Endianness).

http://xkcd.com/927/



On Thu, Apr 4, 2013 at 3:26 PM, John Carlson yottz...@gmail.com wrote:

 I didn't see lojban mentioned.  http://en.m.wikipedia.org/wiki/Lojban
 On Apr 4, 2013 3:19 PM, Kirk Fraser overcomer@gmail.com wrote:

 The main source of invention is not math wins as described on
 http://www.vpri.org/html/work/ifnct.htm since the world would be
 speaking math if it were really the source of inspiring more inventions
 that improve the world's standard of living.  Math helps add precision to
 tasks that involve counting.  Attempting to move from counting to logic
 such as in statistics sometimes leads to false conclusions, especially if
 logic is not given priority over the tools of math.  For human value,
 readability is required, so computer language improvements must focus on
 natural language.

 Human language itself has problems seen in large projects such as Ubuntu
 where contributors from around the world write in their own language and
 tag their code with favorite names which mean nothing to the average reader
 instead of words which best explain the application.  Thus a major
 improvement for world computing would be careful adherence to a world wide
 natural language.  We know cobbling together a variety of languages as in
 Esperanto fails.  While English is the world standard language for
 business, Hebrew might be more inspiring.  In any case the use of whole
 words with common sense is more readable than acronyms.

 The first math language Fortran was soon displaced in business by more
 readable code afforded by Cobol's longer variable names.  In Smalltalk one
 can write unreadable math as easily as readable code but Smalltalk may have
 a few legacy bugs which nobody has yet fixed, possibly due to having
 metaphor or polymorphism design errors, where the code looks good to
 multiple programmers but fails to perform as truly desired in all
 circumstances.  Further reluctance to use commonsense whole words on some
 objects such as BltBlk present a barrier to learning directly from the
 code.

 One way to reduce these errors is to develop a set of executable rules
 that produce Smalltalk, including checking method reuse implications.  Then
 one could make changes to a few rules and the rules would totally
 reengineer Smalltalk accordingly, without forgetting or overlooking
 anything that the programmer hasn't overlooked in the rules.  There is also
 room for a more efficient and more natural language.  Smalltalk is
 supposed to be 3 times faster to code than C and Expert systems are
 supposed to be 10 times faster to code in than C.  So a better language
 needs development in two directions, easy to understand Expert rules using
 common sense whole words and a built in library which enables Star Trek's
 Computer or Iron Man's Computer level of hands free or at least keyboard
 free function.

 There are three basic statements in any computer language: assignment, If
 then else, and loop.  Beyond that a computer language should provide rapid
 access to all common peripherals.  Expert systems tend to have a built in
 loop which executes everything until there are no more changes.  Some
 industrial process controllers put a strict time limit on the loop.
  Examining published rules of simple expert systems, it appears that random
 rule order makes them easier to create while brainstorming, it is possible
 to organize rules in a sequential order which eliminates the repeat until
 no changes loop.  Rule ordering can be automated to retain freedom of human
 input order.

 Several years ago I worked with a Standford student to develop a language
 we call Lt which introduces a concept of Object Strings which can make
 rules a little easier.  Unfortunately the project was written in VBasic
 instead of Smalltalk so I've had insufficient ability to work on it since
 the project ended.  Soon I'll be working on converting it to Smalltalk then
 reengineering it since it has a few design errors and needs a few more
 development cycles educated by co-developing an NLP application.

 Here's a simple Lt method which is very similar to Smalltalk

 game
 example Lt code
 | bird player rock noise |
'objects
 rock exists.  player clumsy.
 'facts
 player trips : [player {clumsy unlucky}, rock exists].
 'a if x w or x y and z
 noise exists; is loud : (player trips, player noisy).
  'a and b if x or y
 bird frightened : noise is loud.
   'a if x
 (bird ~player has : bird frightened.
 'case:  if b then not a else a.
 bird player has.).

 ^
'answer rock exists, player clumsy,
 player trips, noise exists, noise is loud


Re: [fonc] Natural Language Wins

2013-04-04 Thread Tristan Slominski
It appears you are successfully working with English as do most people
[**citation needed**] who communicate internationally.  Not to say English
best but it is what most people know [**citation needed**] and using it in
programs would make them readable by more people [**no evidence for this
hypothesis**] until people adopt [**no evidence for this hypothesis**] a
purer language [**citation needed**] like Hebrew [**citation needed**].


On Thu, Apr 4, 2013 at 3:47 PM, John Carlson yottz...@gmail.com wrote:

 Esperanto was intended to be a human understandable language.  Lojban is
 intended to be a computer and human understandable language...huge
 difference.
 On Apr 4, 2013 3:39 PM, Kirk Fraser overcomer@gmail.com wrote:

 On Thu, Apr 4, 2013 at 1:26 PM, John Carlson yottz...@gmail.com wrote:

 I didn't see lojban mentioned.  http://en.m.wikipedia.org/wiki/Lojban

 Consider it equal to Esperanto in context of my argument.

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] misc: code security model

2011-08-11 Thread Tristan Slominski
I feel obligated to comment on usage of MD5 for any security purpose:

http://www.codeproject.com/KB/security/HackingMd5.aspx
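If a fingerprint is needed at all, something along these lines with SHA-256 (Node.js crypto; a minimal sketch, not a full trust model) avoids the practical collision attacks that exist against MD5:

// Fingerprint code with SHA-256 instead of MD5; compare against a set of
// hashes the user has previously confirmed as trusted.
const crypto = require('crypto');

function fingerprint(codeText) {
  return crypto.createHash('sha256').update(codeText, 'utf8').digest('hex');
}

const trusted = new Set(); // hashes the user has confirmed
const code = 'function add(a, b) { return a + b; }';

trusted.add(fingerprint(code));
console.log(trusted.has(fingerprint(code)));                    // true
console.log(trusted.has(fingerprint(code + '/* tampered */'))); // false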

On Thu, Aug 11, 2011 at 19:06, BGB cr88...@gmail.com wrote:

  On 8/11/2011 12:55 PM, David Barbour wrote:

 On Wed, Aug 10, 2011 at 7:35 PM, BGB cr88...@gmail.com wrote:

 not all code may be from trusted sources.
 consider, say, code comes from the internet.

 what is a good way of enforcing security in such a case?


  Object capability security is probably the very best approach available
 today - in terms of a wide variety of criterion such as flexibility,
 performance, precision, visibility, awareness, simplicity, and usability.

  In this model, ability to send a message to an object is sufficient proof
 that you have rights to use it - there are no passwords, no permissions
 checks, etc. The security discipline involves controlling whom has access to
 which objects - i.e. there are a number of patterns, such as 'revocable
 forwarders', where you'll provide an intermediate object that allows you to
 audit and control access to another object. You can read about several of
 these patterns on the erights wiki [1].


 the big problem though:
 to try to implement this as a sole security model, and expecting it to be
 effective, would likely impact language design and programming strategy, and
 possibly lead to a fair amount of effort WRT hole plugging in an existing
 project.

 granted, code will probably not use logins/passwords for authority, as this
 would likely be horridly ineffective for code (about as soon as a piece of
 malware knows the login used by a piece of trusted code, it can spoof as
 the code and do whatever it wants).

 digital signing is another possible strategy, but poses a similar
 problem:
 how to effectively prevent spoofing (say, one manages to extract the key
 from a trusted app, and then signs a piece of malware with it).

 AFAICT, the usual strategy used with SSL certificates is that they may
 expire and are checked against a certificate authority. although maybe
 reasonably effective for the internet, this seems to be a fairly complex and
 heavy-weight approach (not ideal for software, especially not FOSS, as most
 such authorities want money and require signing individual binaries, ...).

 my current thinking is roughly along the line that each piece of code will
 be given a fingerprint (possibly an MD5 or SHA hash), and this fingerprint
 is either known good to the VM itself (for example, its own code, or code
 that is part of the host application), or may be confirmed as trusted by
 the user (if it requires special access, ...).

 it is a little harder to spoof a hash, and tampering with a piece of code
 will change its hash (although with simpler hashes, such as checksums and
 CRC's, it is often possible to use a glob of garbage bytes to trick the
 checksum algorithm into giving the desired value).

 yes, there is still always the risk of a naive user confirming a piece of
 malware, but this is their own problem at this point.



  Access to FFI and such would be regulated through objects. This leaves
 the issue of deciding: how do we decide which objects untrusted code should
 get access to? Disabling all of FFI is often too extreme.


 potentially.
 my current thinking is, granted, that it will disable access to the FFI
 access object (internally called ctop in my VM), which would disable the
 ability to fetch new functions/... from the FFI (or perform native import
 operations with the current implementation).

 however, if retrieved functions are still accessible, it might be possible
 to retrieve them indirectly and then make them visible this way.

 as noted in another message:

 native import C.math;
 var mathobj={sin: sin, cos: cos, tan: tan, ...};

 giving access to mathobj will still allow access to these functions,
 without necessarily giving access to the entire C toplevel, which poses a
 much bigger security risk.

 sadly, there is no real good way to safely streamline this in the current
 implementation.



  My current design: FFI is a network of registries. Plugins and services
 publish FFI objects (modules) to these registries. Different registries are
 associated with different security levels, and there might be connections
 between them based on relative trust and security. A single FFI plugin
 might provide similar objects at multiple security levels - e.g. access to
 HTTP service might be provided at a low security level for remote addresses,
 but at a high security level that allows for local (127, 192.168, 10.0.0,
 etc.) addresses. One reason to favor plugin-based FFI is that it is easy to
 develop security policy for high-level features compared to low-level
 capabilities. (E.g. access to generic 'local storage' is lower security
 level than access to 'filesystem'.)


 my FFI is based on bulk importing the contents of C headers.

 although fairly powerful and convenient, securing such a beast is likely
 to be a bit of a 

[fonc] OMetaJS + NodeJS

2011-06-26 Thread Tristan Slominski
In anticipation of future work, I needed a command-line tool (instead of the
Workspace) that would execute OMetaJS. The project provides for passing
OMetaJS grammars, interpreters, or compilers/code emitters via the command
line and chaining them together.

The project is up on github: https://github.com/tristanls/ometa-js-node

The README contains some detailed examples showcasing the current functionality.

I'm sharing this in hopes it may be useful for some of you. I do anticipate
modifications and improvements as I adapt it to my use-case.

Cheers,

Tristan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Age and Language (was Re: [fonc] Alternative Web programming models?)

2011-06-15 Thread Tristan Slominski

 but, yeah... being young, time seems to go by very slowly, and just sitting
 around fiddling with something, one accomplishes a lot of stuff in a
 relatively short period of time.

 as one gets older though, time goes by ever faster, and one can observe
 that less and less happens in a given period of time. then one sees many
 older people, and what one sees doesn't look all that promising.

 sadly, as is seemingly the case that a lot of potentially ones' potentially
 most productive years are squandered away doing things like schoolwork and
 similar, and by the time one generally gets done with all this, ones' mind
 has already become somewhat dulled due to age.


From the insights into how the human neocortex learns, gained from research
(not mine) on the Hierarchical Temporal Memory (HTM) machine learning model,
I would contest the idea that less happens in a given period of time as you
get older.

If you explore HTM, it illustrates that as inputs are generated, they are
evaluated in the context of the predictions that the network is making at
any given time.

In a young person, the neocortex knows nothing, so its predictions are most
often wrong. When there is dissonance between prediction and input, a sort of
*interrupt* happens (which gives us awareness of time) and learning of a new
pattern eventually occurs. As we get older, less and less of the world is
*novel*, so our neocortical predictions are more and more often correct,
generating fewer and fewer interrupts. One could argue that our perception of
time is the number of these interrupts occurring in an interval of proper
time. The concept of *flow*, for example, when one loses track of time while
fully immersed in a problem, could simply be the fact that you are so engaged
and knowledgeable in an area that your mind is able to isolate itself from
interrupt-generating input and process information at peak efficiency.
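A toy sketch of that prediction/interrupt idea in JavaScript (nothing like real HTM, just the shape of the argument): a predictor that has seen a sequence many times stops being surprised by it, so the interrupts per pass drop as the pattern is learned.

// Count "interrupts" (failed predictions) over repeated passes of a sequence.
function createPredictor() {
  const next = new Map(); // symbol -> predicted following symbol
  return function observe(prev, current) {
    const surprised = next.get(prev) !== current; // prediction failed: interrupt
    next.set(prev, current);                      // learn/overwrite the transition
    return surprised;
  };
}

const observe = createPredictor();
const day = ['wake', 'coffee', 'work', 'lunch', 'work', 'sleep'];

for (let pass = 1; pass <= 3; pass++) {
  let interrupts = 0;
  for (let i = 1; i < day.length; i++) {
    if (observe(day[i - 1], day[i])) interrupts++;
  }
  console.log('pass ' + pass + ': ' + interrupts + ' interrupts');
}
// prints 5, then 2, then 2: only the ambiguous 'work' transitions stay novel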

I would even go as far as saying that the productivity ascribed to
productive years is more a matter of exposure to novel input than of actual
productivity.
Perhaps another way of stating what you're perceiving is that you have more
and more *friction* from learned patterns to overcome as you try to adapt to
novel input as you get older.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alternative Web programming models?

2011-06-14 Thread Tristan Slominski


 By parsing limits I mean the fact that the language grammar usually
 has to be more verbose than is required by a human to resolve
 ambiguity and other issues. This is mainly a problem if you start
 thinking of how to mix languages. To integrates say Java, SQL and
 regular expressions in one grammar. Sure it can be done by careful
 attention to the grammar, like PL/SQL f.ex. but how do you do it in a
 generic way such that DSLs can be created as libraries by application
 programmers?

 BR,
 John


This looks like a job for OMeta ( http://tinlizzie.org/ometa/ )
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alternative Web programming models?

2011-06-14 Thread Tristan Slominski

 I had some thoughts about how to approach the issue. I was thinking that
  you could represent the language in a more semantically rich form such as a
  RAG stored in a graph database. Then languages would be composed by
  declaring lenses between them.

  As long as there is a lens to an editor DSL you could edit the language in
  that editor. If you had a lens from SQL to Java (for example via JDBC) you
  could embed SQL expressions in Java code. Given transitive lenses it would
  also be a system supporting much reuse. A new DSL could then leverage the
  semantic editing support already created for other languages.

 BR,
 John

Just for completeness, the lenses you describe here remind me of OMeta's
foreign rule invocation:

from http://www.vpri.org/pdf/tr2008003_experimenting.pdf

see 2.3.4 Foreign Rule Invocation p. 27 of paper, p. 46 of pdf

So, if you don't like the PEG roots of OMeta, perhaps it's a good reference
that already works?

Cheers,

Tristan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc