Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-19 Thread Paul D. Fernhout

On 10/15/10 11:52 AM, John Zabroski wrote:

> If you want great Design Principles for the Web, read (a) M.A. Padlipsky's
> book The Elements of Networking Style [2], (b) Radia Perlman's book
> Interconnections [3], and (c) Roy Fielding's Ph.D. thesis [4]


While not exactly about the web, I just saw this video yesterday (video link 
is the "View Webinar" button to the right):

  "Less is More: Redefining the “I” of the IDE (W-JAX Keynote)"
  http://live.eclipse.org/node/676
"Not long ago the notion of a tool that hides more of the program than it 
shows sounded crazy. To some it probably still does. But as Mylyn continues 
its rapid adoption, hundreds of thousands of developers are already part of 
the next big step in the evolution of the IDE. Tasks are more important than 
files, focus is more important than features, and an explicit context is the 
biggest productivity boost since code completion. This talk discusses how 
Java, OSGi, Eclipse, Mylyn, and a combination of open source frameworks and 
commercial extensions like Tasktop have enabled this transformation. It then 
reviews lessons learned for the next generation of tool innovations, and 
looks ahead at how we are redefining the “I” of the IDE."


But, the funny thing about that video is that it is, essentially, about how 
the Eclipse and Java communities have reinvented the Smalltalk-80 work 
environment without admitting it or even recognizing it. :-)


Even "tasks" were represented in Smalltalk in the 1980s and 1990s as 
"projects" able to enter worlds of windows (and to a lesser extent, 
workspaces as a manual collection of related evaluable commands).


I have to admit things now are "bigger" and "better" in various ways 
(including security sandboxing, the spread of these ideas to cheap hardware, 
and what Mylyn does at the desktop level with the TaskTop extensions), so I 
don't want to take that away from recent innovations or the presenter's 
ongoing work. But it is all so surreal to someone who has been using 
computers for about 30 years and knows about Smalltalk. :-)


By the way, on what Tim Berners-Lee may be missing about network design:
  "Meshworks, Hierarchies, and Interfaces" by Manuel De Landa
  http://www.t0.or.at/delanda/meshwork.htm
"Indeed, one must resist the temptation to make hierarchies into villains 
and meshworks into heroes, not only because, as I said, they are constantly 
turning into one another, but because in real life we find only mixtures and 
hybrids, and the properties of these cannot be established through theory 
alone but demand concrete experimentation."


So, that interweaves with an idea like the principle of least power. Still, I 
think Tim Berners-Lee makes a lot of good points. But how design patterns 
combine in practice is a complex issue. :-) Manuel De Landa's point there is 
so insightful, though, because it says that sometimes, yes, there is some 
value to a hierarchy or a standardization, but that value interacts with the 
value of meshworks in potentially unexpected ways that require experiment to 
work through.


--Paul Fernhout
http://www.pdfernhout.net/

The biggest challenge of the 21st century is the irony of technologies of 
abundance in the hands of those thinking in terms of scarcity.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-16 Thread Paul D. Fernhout

On 10/14/10 10:18 PM, John Zabroski wrote:

> AWT was also built in one month, so that Sun could demo applets on the
> Internet.  The perception among Sun management at the time was that
> they only had a short period of time to enter the Internet market ahead of
> other competitors, and that some code now was better than good code six
> months, or a year or two, later.


Just goes to show that the biggest "bug" in computer software is probably in 
the current scarcity-paradigm socio-economic system that drives how so many 
programs, libraries, and standards are written, maintained, and transformed: :-)


http://knol.google.com/k/paul-d-fernhout/beyond-a-jobless-recovery#Four_long%282D%29term_heterodox_alternatives

--Paul Fernhout
http://www.pdfernhout.net/

The biggest challenge of the 21st century is the irony of technologies of 
abundance in the hands of those thinking in terms of scarcity.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-15 Thread John Zabroski
On Sun, Oct 10, 2010 at 9:01 AM, Leo Richard Comerford <leocomerf...@gmail.com> wrote:

>
> On 10 October 2010 01:44, John Zabroski  wrote:
>
> > To be fair, Tim had the right idea with a Uri.
>
> He also had a right idea with the Principle of Least Power. Thesis,
> antithesis...
>


You're referring to [1], Tim Berners-Lee's Axioms of Web architecture.

This is such a shallow document: it mixes basic principles up in horrible
ways, and he doesn't mathematically formalize things that need to be
formalized mathematically to be truly understood.  He was NOT right.  If
anything, this document sounds like he is parroting somebody else without
understanding it.

For example, his section on Tolerance is parroting The Robustness Principle,
which is generally considered to be Wrong.  There is also a better way to
achieve Tolerance in an object-oriented system: Request-based Mediated
Execution, or Feature-Oriented Programming where clients and servers
negotiate the feature set dynamically.  His example of the HTML data format
is ridiculous and stupid, and doesn't make sense to a mathematician:
allowing implicit conversions between documents written for HTML4 Strict and
HTML4 Transitional is just one of the many stupid ideas the W3C has had.
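
(To sketch what such dynamic feature negotiation might look like -- a toy
Python sketch; the feature names and the protocol here are hypothetical, not
any real system's API:)

# Hedged sketch of request-based feature negotiation: client and server
# agree on an explicit feature set up front, instead of each side silently
# "tolerating" whatever the other sends.

SERVER_FEATURES = {"tables", "forms", "inline-math"}  # hypothetical names

def negotiate(client_features):
    """Return the feature set both sides support, or fail loudly."""
    agreed = SERVER_FEATURES & set(client_features)
    if not agreed:
        raise ValueError("no common features; refusing to guess")
    return agreed

def render(document, features):
    # Only emit constructs the client has explicitly agreed to handle.
    return [part for part in document if part["needs"] in features]

doc = [{"needs": "tables", "body": "..."},
       {"needs": "inline-math", "body": "..."}]
print(render(doc, negotiate({"tables", "forms"})))  # the math part is dropped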

What about his strange assertion that SQL is Turing Complete?  What is Tim
even saying?  He also says Java is "unashamedly procedural" despite the
fact that, as I quoted earlier in this thread, he admitted at JavaOne to not
knowing much about Java.  Did he suddenly learn it in the year between
giving that talk and writing this document?  I suppose that's possible.

He also writes "Computer Science in the 1960s to 80s spent a lot of effort
making languages which were as powerful as possible. Nowadays we have to
appreciate the reasons for picking not the most powerful solution but the
least powerful.  Nowadays we have to appreciate the reasons for picking not
the most powerful solution but the least powerful. The reason for this is
that the less powerful the language, the more you can do with the data
stored in that language."  I don't think Tim actually understands the issues
at play here, judging by his complete non-understanding of what Turing-power
is (cf. SQL having Turing-power, ANSI SQL didn't even get transitive
closures until SQL-99).  There is more to data abstraction that data
interchange formats.  Even SQL expert CJ Date has said so: stuff like XML
simply solves a many-to-many versioning problem among data interchange
formats by standardizing on a single format, thus allowing higher-level
issues like semantics to override syntactic issues.  This is basic compiler
design: Create a uniform IR (Intermediate Representation) and map things
into that IR.  Tim's verbose explanations with technical inaccuracies only
confuse the issue.
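
(As a concrete aside on that SQL-99 point, here is a small illustration of a
recursive common table expression computing a transitive closure, via
Python's sqlite3; modern SQLite, 3.8.3 or later, supports WITH RECURSIVE,
and the table and column names here are made up:)

# Transitive closure of an edge relation with a SQL-99-style recursive CTE.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])

closure = con.execute("""
    WITH RECURSIVE reach(src, dst) AS (
        SELECT src, dst FROM edge
        UNION
        SELECT r.src, e.dst FROM reach r JOIN edge e ON r.dst = e.src
    )
    SELECT src, dst FROM reach ORDER BY src, dst
""").fetchall()
print(closure)  # includes derived pairs like ('a', 'c') and ('a', 'd')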

Besides, as far as the Principle of Least Power goes, the best example he
could give is the one that Roy Fielding provides about the design of HTTP:
Hypermedia As The Engine of Application State.
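
(To make the HATEOAS idea concrete, a toy Python sketch: the client knows
only an entry point and the link relations it understands, and every next
step comes from the representation itself. The "server" is just a dict
standing in for HTTP resources; all URIs and field names are invented:)

RESOURCES = {  # hypothetical resources; keys stand in for URIs
    "/orders": {"items": ["/orders/1"], "links": {"create": "/orders/new"}},
    "/orders/1": {"status": "open", "links": {"cancel": "/orders/1/cancel"}},
    "/orders/1/cancel": {"status": "cancelled", "links": {}},
}

def get(uri):
    return RESOURCES[uri]  # stands in for an HTTP GET

def follow(representation, rel):
    """Advance application state by following a named link, if present."""
    uri = representation["links"].get(rel)
    return get(uri) if uri else None

order = get(get("/orders")["items"][0])   # discover the order via the listing
print(follow(order, "cancel")["status"])  # -> cancelled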

If you want great Design Principles for the Web, read (a) M.A. Padlipsky's
book The Elements of Networking Style [2], (b) Radia Perlman's book
Interconnections [3], and (c) Roy Fielding's Ph.D. thesis [4].

Mike Padlipsky actually helped build the ARPANET (supposedly Dijkstra even
gave Mike permission to use GOTO if he deemed it necessary in a networking
protocol implementation), Radia Perlman is considered to have written one
of the best dissertations on distributed systems, and Roy Fielding defined
the REST architectural style.

[1] http://www.w3.org/DesignIssues/Principles.html
[2] http://www.amazon.com/Elements-Networking-Style-Animadversions-Intercomputer/dp/0595088791/
[3] http://www.amazon.com/Interconnections-Bridges-Switches-Internetworking-Protocols/dp/0201634481/
[4] http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
By the way, at the JavaOne where Swing was announced, Sun told everyone that
backward compatibility with AWT was a must for customers.  I thought I
stated that directly earlier, but apparently left it out.  If you can find
the old Swing presentations from the second JavaOne, then you'll see what I
mean.  The slides explicitly stated the need for backwards compatibility
with AWT, for customers.  This seemed weird to many, since nobody
actually seemed to be using AWT or to like it.

AWT was also built in one month, so that Sun could demo applets on the
Internet.  The perception among Sun management at the time was that
they only had a short period of time to enter the Internet market ahead of
other competitors, and that some code now was better than good code six
months, or a year or two, later.

One of the saddest Java books ever (among many sad ones), by the way, was
Java Platform Performance [1].  It talks about the authors' experience
tuning Swing applications.  They said that NOBODY on the Swing team ever
performance-tested code by profiling; they just kept writing code!  The
authors' job was then to clean up the performance disaster of Swing.  Swing
was notoriously slow for many years, and it wasn't until 1.6 that they
finally added support for antialiased text rendering!

Interestingly, Palantir Technologies uses Java and Swing for its
"distributed intelligence" visualizations, according to a convo I had on
Facebook w/ Ari Gesher.

[1] http://www.amazon.co.uk/Java-Platform-Performance-Strategies-Tactics/dp/0201709694/

On Thu, Oct 14, 2010 at 7:53 PM, Duncan Mak  wrote:

> On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski wrote:
>
>> That being said, I have no idea why people think Smalltalk-80 would have
>> been uniformly better than Java.  I am not saying this to be negative.  In
>> my view, many of the biggest mistakes with Java involved requiring insane
>> legacy compatibility, and doing it in really bad ways.  Swing should never
>> have been forced to reuse AWT, for example.  And AWT should never have had a
>> concrete component model, thus "forcing" Swing to inherit it (dropping the
>> rabbit ears, because I see no good explanation for why it had to inherit
>> AWT's component model via "implementation inheritance").  It's hard for me
>> to even gauge whether the Swing developers were good programmers or not, given
>> that ridiculously stupid constraint.  It's not like Swing even supported
>> phones; it was never in J2ME.  The best I can conclude is that they were not
>> domain experts, but who really was at the time?
>
>
> I started programming Swing a year ago and spent a little time learning its
> history. I was able to gather a few anecdotes, and they
> have fascinated me.
>
> There were two working next-generation Java GUI toolkits at the time of
> Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
> toolkits were developed by ex-NeXT developers and borrowed heavily from
> AppKit's design. IFC even had a design tool that mimicked Interface Builder
> (which still lives on today in Cocoa).
>
> Sun first acquired Lighthouse Design, then decided to join forces with
> Netscape - with two proven(?) toolkits, the politics worked out such that
> all the AWT people at Sun ended up leading the newly-joined team, the
> working code from the other parties was discarded, and from this, Swing was
> born.
>
> http://talblog.info/archives/2007/01/sundown.html
> http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/
>
> --
> Duncan.
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
Wow!  Thanks for that amazing nugget of Internet history.

Fun fact: Tony Duarte wrote the book Writing NeXT Programs under the
pseudonym Ann Weintz, supposedly because Steve Jobs was so secretive that he
told employees not to write books about the ideas in NeXT's GUI.  See:
http://www.amazon.com/Writing-Next-Programs-Introduction-Nextstep/dp/0963190105/
where Tony comments on it.



On Thu, Oct 14, 2010 at 7:53 PM, Duncan Mak  wrote:

> On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski wrote:
>
>> That being said, I have no idea why people think Smalltalk-80 would have
>> been uniformly better than Java.  I am not saying this to be negative.  In
>> my view, many of the biggest mistakes with Java involved requiring insane
>> legacy compatibility, and doing it in really bad ways.  Swing should never
>> have been forced to reuse AWT, for example.  And AWT should never have had a
>> concrete component model, thus "forcing" Swing to inherit it (dropping the
>> rabbit ears, because I see no good explanation for why it had to inherit
>> AWT's component model via "implementation inheritance").  It's hard for me
>> to even gauge whether the Swing developers were good programmers or not, given
>> that ridiculously stupid constraint.  It's not like Swing even supported
>> phones; it was never in J2ME.  The best I can conclude is that they were not
>> domain experts, but who really was at the time?
>
>
> I started programming Swing a year ago and spent a little time learning its
> history. I was able to gather a few anecdotes, and they
> have fascinated me.
>
> There were two working next-generation Java GUI toolkits at the time of
> Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
> toolkits were developed by ex-NeXT developers and borrowed heavily from
> AppKit's design. IFC even had a design tool that mimicked Interface Builder
> (which still lives on today in Cocoa).
>
> Sun first acquired Lighthouse Design, then decided to join forces with
> Netscape - with two proven(?) toolkits, the politics worked out such that
> all the AWT people at Sun ended up leading the newly-joined team, the
> working code from the other parties was discarded, and from this, Swing was
> born.
>
> http://talblog.info/archives/2007/01/sundown.html
> http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/
>
> --
> Duncan.
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Jecel Assumpcao Jr.
Pascal J. Bourguignon wrote:
> No idea, but since they invented Java, they could have at a much lower  
> cost written their own implementation of Smalltalk.

Or two (Self and Strongtalk), which in fact they did.

Of course, Self had to be killed in favor of Java, since Java ran in just
a few kilobytes while Self needed a 24MB workstation and most of Sun's
clients still had only 8MB (PC users were even worse off, at 4MB and under).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Duncan Mak
On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski wrote:

> That being said, I have no idea why people think Smalltalk-80 would have
> been uniformly better than Java.  I am not saying this to be negative.  In
> my view, many of the biggest mistakes with Java involved requiring insane
> legacy compatibility, and doing it in really bad ways.  Swing should never
> have been forced to reuse AWT, for example.  And AWT should never have had a
> concrete component model, thus "forcing" Swing to inherit it (dropping the
> rabbit ears, because I see no good explanation for why it had to inherit
> AWT's component model via "implementation inheritance").  It's hard for me
> to even gauge whether the Swing developers were good programmers or not, given
> that ridiculously stupid constraint.  It's not like Swing even supported
> phones; it was never in J2ME.  The best I can conclude is that they were not
> domain experts, but who really was at the time?


I started programming Swing a year ago and spent a little time learning its
history. I was able to gather a few anecdotes, and they
have fascinated me.

There were two working next-generation Java GUI toolkits at the time of
Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
toolkits were developed by ex-NeXT developers and borrowed heavily from
AppKit's design. IFC even had a design tool that mimicked Interface Builder
(which still lives on today in Cocoa).

Sun first acquired Lighthouse Design, then decided to join forces with
Netscape - with two proven(?) toolkits, the politics worked out such that
all the AWT people at Sun ended up leading the newly-joined team, the
working code from the other parties was discarded, and from this, Swing was
born.

http://talblog.info/archives/2007/01/sundown.html
http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/

-- 
Duncan.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Pascal J. Bourguignon


On 2010/10/15, at 00:14 , Steve Dekorte wrote:



> I have to wonder how things might be different if someone had made a
> tiny, free, scriptable Smalltalk for unix before Perl appeared...


There has been GNU Smalltalk for a long time, AFAIR since before Perl, and
it was quite well adapted to the unix environment.

It would certainly qualify as tiny, since it lacked any big GUI framework;
it is obviously free in all meanings of the word; and it is at its best in
writing scripts.



My point is that it hasn't changed anything and nothing else would have.


> BTW, there were rumors that Sun considered using Smalltalk in
> browsers instead of Java but the license fees from the vendors were
> too high. Anyone know if that's true?


No idea, but since they invented Java, they could have at a much lower  
cost written their own implementation of Smalltalk.


--
__Pascal Bourguignon__
http://www.informatimago.com




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
I saw Paul Fernhout mention this once on /.
http://developers.slashdot.org/comments.pl?sid=1578224&cid=31429692

He linked to: http://fargoagile.com/joomla/content/view/15/26/

which references:

http://lists.squeakfoundation.org/pipermail/squeak-dev/2006-December/112337.html

which states:

> When I became V.P. of Development at ParcPlace-Digitalk in 1996, Bill
> Lyons (then CEO) told me the same story about Sun and VW. According
> to Bill, at some point in the early '90's when Adele was still CEO,
> Sun approached ParcPlace for a license to use VW (probably
> ObjectWorks at the time) in some set top box project they were
> working on. Sun wanted to use a commercially viable OO language with
> a proven track record. At the time ParcPlace was licensing Smalltalk
> for >$100 a copy. Given the volume that Sun was quoting, PP gave Sun
> a firm quote on the order of $100/copy. Sun was willing to pay at
> most $9-10/copy for the Smalltalk licenses. Sun was not willing to go
> higher and PP was unwilling to go lower, so nothing ever happened and
> Sun went its own way with its own internally developed language
> (Oak...Java). The initial development of Oak might well have predated
> the discussions between Sun and PP, but it was PP's unwillingness to
> go lower on the price of Smalltalk that gave Oak its green light
> within Sun (according to Bill anyway). Bill went on to lament that
> had PP played its cards right, Smalltalk would have been the language
> used by Sun and the language that would have ruled the Internet.
> Obviously, you can take that with a grain of salt. I don't know if
> Bill's story to me was true (he certainly seemed to think it was),
> but it might be confirmable by Adele. If it is true, it is merely
> another sad story of what might have been and how close Smalltalk
> might have come to universal acceptance.
>
> -Eric Clayberg
>
>
That being said, I have no idea why people think Smalltalk-80 would have
been uniformly better than Java.  I am not saying this to be negative.  In
my view, many of the biggest mistakes with Java involved requiring insane
legacy compatibility, and doing it in really bad ways.  Swing should never
have been forced to reuse AWT, for example.  And AWT should never have had a
concrete component model, thus "forcing" Swing to inherit it (dropping the
rabbit ears, because I see no good explanation for why it had to inherit
AWT's component model via "implementation inheritance").  It's hard for me
to even gauge whether the Swing developers were good programmers or not, given
that ridiculously stupid constraint.  It's not like Swing even supported
phones; it was never in J2ME.  The best I can conclude is that they were not
domain experts, but who really was at the time?

On Thu, Oct 14, 2010 at 6:14 PM, Steve Dekorte  wrote:

>
> I have to wonder how things might be different if someone had made a tiny,
> free, scriptable Smalltalk for unix before Perl appeared...
>
> BTW, there were rumors that Sun considered using Smalltalk in browsers
> instead of Java but the license fees from the vendors were too high. Anyone
> know if that's true?
>
> On 2010-10-08 Fri, at 11:28 AM, John Zabroski wrote:
> > Why are we stuck with such poor architecture?
>
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Steve Dekorte

Amen.

On 2010-10-08 Fri, at 07:57 PM, Casey Ransberger wrote:
> I think "type" is a foundationaly bad idea. What matters is that the object 
> in question can respond intelligently to the message you're passing it... 
> Asking for an object's class reduces the power of polymorphism in your 
> program. 



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Steve Dekorte

I have to wonder how things might be different if someone had made a tiny, 
free, scriptable Smalltalk for unix before Perl appeared...

BTW, there were rumors that Sun considered using Smalltalk in browsers instead 
of Java but the license fees from the vendors were too high. Anyone know if 
that's true?

On 2010-10-08 Fri, at 11:28 AM, John Zabroski wrote:
> Why are we stuck with such poor architecture?


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-11 Thread Ryan Mitchley

It seems that a logic-programming-inspired take on types might be useful:
e.g. ForAll X such that X DoThis is defined, X DoThis

or maybe, ForAll X such that X HasMethodReturning Y and Y DoThis is 
defined, Y DoThis

Or, how about pattern matching on message reception? Allow "free 
variables" in the method prototype so that inexact matching is possible. 
Send a message to a field of objects, and all interpret the message as 
it binds to their "receptors"...
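
(A toy Python sketch of that last idea; the pattern scheme -- None as a
free variable -- and all the names here are invented for illustration:)

def matches(pattern, message):
    """Inexact match: None in the pattern binds anything."""
    return len(pattern) == len(message) and all(
        p is None or p == m for p, m in zip(pattern, message))

class Cell:
    def __init__(self, name, receptors):
        self.name, self.receptors = name, receptors  # pattern -> handler

    def receive(self, message):
        for pattern, handler in self.receptors.items():
            if matches(pattern, message):
                handler(self, message)

field = [
    Cell("a", {("move", None, None): lambda s, m: print(s.name, "moves to", m[1:])}),
    Cell("b", {("paint", None): lambda s, m: print(s.name, "painted", m[1])}),
]

for cell in field:                # broadcast to the whole field of objects
    cell.receive(("move", 3, 4))  # binds only to a's "receptor"; b ignores it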



On 09/10/2010 04:57, Casey Ransberger wrote:

I think "type" is a foundationaly bad idea. What matters is that the object in 
question can respond intelligently to the message you're passing it. Or at least, that's 
what I think right now, anyway. It seems like type specification (and as such, early 
binding) have a very limited real use in the domain of 
really-actually-for-real-and-seriously mission critical systems, like those that guide 
missiles or passenger planes.

   



Disclaimer: http://www.peralex.com/disclaimer.html



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Paul D. Fernhout

On 10/10/10 1:18 PM, Julian Leviston wrote:
> On 11/10/2010, at 2:39 AM, Paul D. Fernhout wrote:
>> Software is never "done". :-) Especially because the world keeps
>> changing around it. :-) And especially when it is "research" and doing
>> basic research looking for new ideas. :-)
>

> My answer can be best expressed simply and deeply thus:
>
> "I don't see the unix command 'ls' being rewritten every day or even
> every year."


http://git.busybox.net/busybox/log

:-)


> Do you understand what I'm trying to get at? It's possible to use an 'ls'
> replacement if I so choose, but that's simply my preference. 'ls' itself
> hasn't been touched much in a long time. In the same way, the ADD assembler
> instruction is pretty similar across platforms. Get my drift?


http://en.wikipedia.org/wiki/Cover_Flow

:-)

And by the way, there's an ongoing lawsuit about that, with a roughly US$250 
million judgment being appealed.


I really dislike software patents. :-( Or being essentially forced to accede 
to them as a condition of employment:

  http://www.freepatentsonline.com/6513009.html

From:

http://developer.yahoo.com/yui/theater/video.php?v=crockford-yuiconf2009-state
"Douglas Crockford: So one of the lessons is that patents and open systems 
are not compatible. I think the solution to that incompatibility is to close 
the Patent Office [applause]"



> Part of the role of a language meta-description is implementation of
> every translatable artefact.  Thus if some source item requires some
> widget, that widget comes with it along for the ride as part of the
> source language (and framework) meta-description.


Well, licenses may get in the way, as they did for my translation of Delphi 
to Java and Python. Often code we have control over or responsibility for is 
only a small part of a large set of interdependent modules.


Also, you may call a service and that service may need to be reimplemented 
or rethought with an entire chain of conceptual dependencies...


One issue is, what are the boundaries of the system?

Do they even include things like the documentation and culture surrounding 
the artifact?


From:
  http://en.wikipedia.org/wiki/Social_constructivism
"Social constructivism is a sociological theory of knowledge that applies 
the general philosophical constructionism  into social settings, wherein 
groups construct knowledge for one another, collaboratively creating a small 
culture of shared artifacts with shared meanings. When one is immersed 
within a culture of this sort, one is learning all the time about how to be 
a part of that culture on many levels. Its origins are largely attributed to 
Lev Vygotsky. ... Social constructivism is closely related to social 
constructionism in the sense that people are working together to construct 
artifacts. However, there is an important difference: social constructionism 
focuses on the artifacts that are created through the social interactions of 
a group, while social constructivism focuses on an individual's learning 
that takes place because of their interactions in a group. ... Vygotsky's 
contributions reside in Mind in Society (1930, 1978) and Thought and 
Language (1934, 1986). [2] Vygotsky independently came to the same 
conclusions as Piaget regarding the constructive nature of development. ... 
An instructional strategy grounded in social constructivism that is an area 
of active research is computer-supported collaborative learning  (CSCL). 
This strategy gives students opportunities to practice 21st-century skills 
in communication, knowledge sharing, critical thinking and use of relevant 
technologies found in the workplace."


So, can a truly useful translation system function outside that social 
context, including arbitrary legal constructs?


So, what are the boundaries of the translation task? They may be fuzzier 
than they appear at first (or even more sharp and arbitrary, like above, due 
to legal issues or social risk assessments or even limited time).



> I'm possibly missing something, but I don't see the future as being a
> simple extension of the past... it should not be that we simply create
> "bigger worlds" as we've done in the past (think virtual machines) but
> rather look for ways to adapt things from worlds to integrate with each
> other. Thus, I should not be looking for a better IDE, or programming
> environment, but rather take the things I like out of what exists... some
> people like to type their code, others like to drag 'n drop it. I see no
> reason why we can't stop trying to re-create entire universes inside the
> machines we use and simply split things at a component-level. We're
> surely smarter than reinventing the same pattern again and again.


Despite what I wrote above, I basically agree with your main point here. :-)

Still, Chuck Moore rewrote Forth over and over again. He liked doing it, and 
it slowly improved. And as I read elsewhere, it would really upset the 
people at the company he worked with that he would rewrite Forth every time 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Julian Leviston
My answer can be best expressed simply and deeply thus:

"I don't see the unix command 'ls' being rewritten every day or even every 
year."

Do you understand what I'm trying to get at? It's possible to use an 'ls' 
replacement if I so choose, but that's simply my preference. 'ls' itself hasn't 
been touched much in a long time. In the same way, the ADD assembler instruction is 
pretty similar across platforms. Get my drift?

Part of the role of a language meta-description is implementation of every 
translatable artefact.  Thus if some source item requires some widget, that 
widget comes with it along for the ride as part of the source language (and 
framework) meta-description.

I'm possibly missing something, but I don't see the future as being a simple 
extension of the past... it should not be that we simply create "bigger worlds" 
as we've done in the past (think virtual machines) but rather look for ways to 
adapt things from worlds to integrate with each other. Thus, I should not be 
looking for a better IDE, or programming environment, but rather take the 
things I like out of what exists... some people like to type their code, others 
like to drag 'n drop it. I see no reason why we can't stop trying to re-create 
entire universes inside the machines we use and simply split things at a 
component-level. We're surely smarter than reinventing the same pattern again 
and again.

Julian.

On 11/10/2010, at 2:39 AM, Paul D. Fernhout wrote:

> Software is never "done". :-) Especially because the world keeps changing 
> around it. :-) And especially when it is "research" and doing basic research 
> looking for new ideas. :-)


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Paul D. Fernhout

On 10/10/10 10:52 AM, Julian Leviston wrote:

> I'm not entirely sure why the idea of pattern expressions and
> meta-translators wasn't an awesome idea.


Maybe we're still missing a lot of general purpose tools to work with all that?


> If expressing an idea cleanly in a language is possible, and expressing
> that language in another language clearly and cleanly is possible, why is
> it not possible to write a tool which will re-express that original idea
> in the second language, or any other target language for that matter?


Well, I can wonder if it is just not possible to express an idea outside of a 
social community? (That's a bit philosophical though. :-)

  http://en.wikipedia.org/wiki/Social_constructivism
  http://en.wikipedia.org/wiki/Social_constructionism
"Although both social constructionism and social constructivism  deal with 
ways in which social phenomena develop, they are distinct. Social 
constructionism refers to the development of phenomena relative to social 
contexts while social constructivism refers to an individual's making 
meaning of knowledge within a social context (Vygotsky 1978). For this 
reason, social constructionism is typically described as a sociological 
concept whereas social constructivism  is typically described as a 
psychological concept. However, while distinct, they are also complementary 
aspects of a single process through which humans in society create their 
worlds and, thereby, themselves."


But in practice, having written some ad hoc translators from one programming 
language and library set to another, I can say there are idiomatic and 
paradigm issues that come up, ones that would really take some sort of fancy 
bordering-on-AI-ish capacity to deal with. It's doable (in theory, as many 
AI-ish solutions are), but it's not usually trivial in practice because of 
the details involved in thinking about ways that things map to each other, 
how to reason about all that, and how to involve humans in that loop.


So, say, if you are translating a GUI application that uses a drop-down 
combo box and the target platform does not have such a widget, what do you 
do? Do you change the GUI to have a button and a popup window with a list, do 
you synthesize the functionality by generating a new widget, do you put in a 
stub for the programmer to fill in, or do you involve a person in realtime 
to make that choice (or create a widget) and then keep a note of it 
somewhere for future reference?
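
(One policy for that choice, as a hedged Python sketch -- try a direct
widget mapping, fall back to a synthesized composite, else emit a stub and
record the gap for a human; all widget names here are hypothetical:)

DIRECT = {"button": "TargetButton", "textfield": "TargetTextField"}
COMPOSITE = {"combobox": ["TargetButton", "TargetPopupList"]}  # synthesized

decision_log = []  # the "note of it somewhere for future reference"

def translate_widget(kind):
    if kind in DIRECT:
        return [DIRECT[kind]]
    if kind in COMPOSITE:
        decision_log.append((kind, "synthesized", COMPOSITE[kind]))
        return COMPOSITE[kind]
    decision_log.append((kind, "stub", None))
    return ["TODO_" + kind]  # stub for the programmer to fill in

print(translate_widget("combobox"))  # ['TargetButton', 'TargetPopupList']
print(translate_widget("treeview"))  # ['TODO_treeview'], logged for review
print(decision_log)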



> I thought this development of a meta-translator was not only one of the
> FoNC goals, but one that had for the most part been at least completed?


Software is never "done". :-) Especially because the world keeps changing 
around it. :-) And especially when it is "research" and doing basic research 
looking for new ideas. :-)


Although I'm still a bit confused about what OMeta does that Antlr (and 
ANTLRWorks) can't do. And Antlr, while really neat as a system, is just a 
platform on which to build translators, with a lot of hard work.

  http://tinlizzie.org/ometa/
  http://www.antlr.org/

So, I would expect, likewise, that OMeta is also a platform on top of which 
people can put in a lot of hard work? But I have not used OMeta, 
so I can't speak to all the differences, even as I've seen the limits of 
other parsing systems I've used, like Antlr (as well as their powers, which 
can take a while to appreciate).


Anyway, if someone could clarify the differences in either goals or 
implementation between OMeta and Antlr, that might be helpful in 
understanding what one could (or could not) do with it, given that Antlr has not 
magically given us a higher level of semantic abstraction for dealing with 
programming systems (even as Antlr is a great system for transforming 
hierarchies encoded in some textual way). Antlr is just the beginning, like, 
say, Smalltalk is just the beginning. So, I'd imagine OMeta is just a 
beginning in that sense.
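
(For flavor, here is a toy Python sketch of the rule-based
match-and-rewrite core that such systems provide far more powerfully; this
is not OMeta's or Antlr's actual API, and the rules and tree shapes are
invented:)

def rewrite(tree, rules):
    """Apply the first matching rule bottom-up; leave other nodes alone."""
    if isinstance(tree, tuple):
        tree = tuple(rewrite(child, rules) for child in tree)
        for match, action in rules:
            if match(tree):
                return action(tree)
    return tree

# One rule: fold constant additions, ('+', 1, 2) -> 3.
rules = [(lambda t: t[0] == "+" and all(isinstance(x, int) for x in t[1:]),
          lambda t: t[1] + t[2])]

print(rewrite(("*", ("+", 1, 2), ("+", 3, 4)), rules))  # ('*', 3, 7)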


But I have a feeling this will bring us back to my point in the previous 
section, that knowledge is in a sense social, and a translator (or language) 
by itself is only part of the puzzle.


And, btw, in that sense, "personal computing" by definition cannot exist, 
at least, not if it involves humans dealing in "socially constructed 
knowledge". :-) And thus the value in telescopes/microscopes to let 
individuals look out on that social context or the artifacts it presents us 
with.


--Paul Fernhout
http://www.pdfernhout.net/

The biggest challenge of the 21st century is the irony of technologies of
abundance in the hands of those thinking in terms of scarcity.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Leo Richard Comerford
On 10 October 2010 14:01, Leo Richard Comerford  wrote:

>  You still need things similar
> to (say) a HTML renderer, but you don't need the browser vendor's
> choice of monolithic HTML renderer riveted into a fixed position in
> every browser tab's runtime.

Let me rephrase that a bit for clarity, to "Those applications still
need things similar to HTML renderers (and scripting languages), but
they don't need the browser vendor's choice of monolithic HTML
renderer riveted into a fixed position in their runtime"

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Julian Leviston
I'm not entirely sure why the idea of pattern expressions and meta-translators 
wasn't an awesome idea.

If expressing an idea cleanly in a language is possible, and expressing that 
language in another language clearly and cleanly is possible, why is it not 
possible to write a tool which will re-express that original idea in the second 
language, or any other target language for that matter?

I thought this development of a meta-translator was not only one of the FoNC 
goals, but one that had for the most part been at least completed?

Julian.

On 11/10/2010, at 1:38 AM, Paul D. Fernhout wrote:

> Anyway, so I do have hope that we may be able to develop platforms that let 
> us work at a higher level of abstraction like for programming or semantic web 
> knowledge representation, but we should still accept that (de facto social) 
> standards like JavaScript and so on have a role to play in all that, and we 
> need to craft our tools and abstractions with such things in mind (even if we 
> might in some cases just use them as a VM until we have something better). 
> That has always been the power of, say, the Lisp paradigm, even as 
> Smalltalk's message passing paradigm has a lot going for it as a unifying and 
> probably more scalable abstraction. What can be frustrating is when our 
> "bosses" say "write in Fortran" instead of saying "write on top of Fortran", 
> same as if they said, "Write in assembler" instead of "Write on top of 
> assembler or JavaScript or whatever". I think the work VPRI is doing 
> through COLA to get the best of both worlds is a wonderful 
> aspiration, kind of like trying to understand the particle/wave duality 
> mystery in physics (which it turns out is potentially explainable by a 
> many-worlds hypothesis, btw). But it might help to have better tools to do 
> that -- and tools that linked somehow with the semantic web and social 
> computing and so on.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Paul D. Fernhout

On 10/9/10 8:44 PM, John Zabroski wrote:

> From experience, most people don't want to
> discuss this because they're happy with Good Enough and scared of testing
> something better.  They are always male, probably 40-ish, probably have a
> wife and two kids.  We're on two different planets, so I understand
> different priorities.


Well, by coincidence, that is probably pretty close to describing me. :-)

But, as much as I agree with the general thrust of your arguments about 
design issues, including that the network is the computer that needs debugging 
(and better design), I think there is another aspect of this that relates to 
Manuel De Landa's point on design and meshworks, hierarchies, and interfaces.


See:
  "Meshwork, Hierarchy, and Interfaces"
  http://www.t0.or.at/delanda/meshwork.htm
"To make things worse, the solution to this is not simply to begin adding 
meshwork components to the mix. Indeed, one must resist the temptation to 
make hierarchies into villains and meshworks into heroes, not only because, 
as I said, they are constantly turning into one another, but because in real 
life we find only mixtures and hybrids, and the properties of these cannot 
be established through theory alone but demand concrete experimentation. 
Certain standardizations [of interfaces], say, of electric outlet designs or 
of data-structures traveling through the Internet, may actually turn out to 
promote heterogenization at another level, in terms of the appliances that 
may be designed around the standard outlet, or of the services that a common 
data-structure may make possible. On the other hand, the mere presence of 
increased heterogeneity is no guarantee that a better state for society has 
been achieved. After all, the territory occupied by former Yugoslavia is 
more heterogeneous now than it was ten years ago, but the lack of uniformity 
at one level simply hides an increase of homogeneity at the level of the 
warring ethnic communities. But even if we managed to promote not only 
heterogeneity, but diversity articulated into a meshwork, that still would 
not be a perfect solution. After all, meshworks grow by drift and they may 
drift to places where we do not want to go. The goal-directedness of 
hierarchies is the kind of property that we may desire to keep at least for 
certain institutions. Hence, demonizing centralization and glorifying 
decentralization as the solution to all our problems would be wrong. An open 
and experimental attitude towards the question of different hybrids and 
mixtures is what the complexity of reality itself seems to call for. To 
paraphrase Deleuze and Guattari, never believe that a meshwork will suffice 
to save us. {11}"


So, that's where the thrust of your point gets parried :-) even if I may 
agree with your technical points about what is going to make good software. 
And even, given, as suggested in my previous note, that a bug in our 
socio-economic paradigm has led to the adoption of software that is buggy 
and sub-optimal in all sorts of ways. (I feel that having more people learn 
about the implications of cheap computing is part of fixing that 
socio-economic bug related to scarcity thinking in an abundant world.)


Building on Manuel De Landa's point, JavaScript is the ubiquitous standard 
local VM in our lives, one that potentially allows a diversity of things built 
on a layer above it (like millions of educational web pages about a variety 
of things). JavaScript/ECMAScript may be a language with various warts, it 
may be 100X slower than it needs to be as a VM, it may not be 
message-oriented, it may have all other issues including a standardization 
project that seems bent on removing all the reflection from it that allows 
people to write great tools for it, and so on, but it is what we have now as 
a sort-of "social hierarchy defined" standard for content that people can 
interact with using a web browser (or an email system that is extended in 
it, like Thunderbird). And so people are building a lot of goodness on top 
of JavaScript, like with Firebug and its extensions.

  http://getfirebug.com/wiki/index.php/Firebug_Extensions
One may rightfully say JavaScript has all sorts of issues (as a sort of 
hastily cobbled together "hierarchy" of common functionality), but all those 
extensions show what people in practice can do with it (through the power of 
a social meshwork).


And that syncs with a deeper implication of your point that, in Sun's 
language, the network is the computer, and thus we need to adjust our 
paradigms (and tools) to deal with that. But whereas you are talking more 
about the technical side of that, there is another social side of that as 
well. Both the technical side and social side (and especially their 
interaction in the context of competition and commercial pressures and 
haste) can lead to unfortunate compromises in various ways. Still, it is 
what we have right now, and we can think about where that momentum is going 
and how t

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Leo Richard Comerford
On 10 October 2010 07:31, Dirk Pranke  wrote:

> But there's always NativeClient (as someone else linked to) if you
> need raw speed :)
>
> -- Dirk

Not just Native Client: a Native Client app including a GUI toolkit.
(A toolkit which will soon be talking to an OpenGL-derived interface,
or is it already?) At this point you're almost ready to rip out most
of the surface area of the runtime interface presented by the browser
to the application running in each tab. You still need things similar
to (say) a HTML renderer, but you don't need the browser vendor's
choice of monolithic HTML renderer riveted into a fixed position in
every browser tab's runtime.

(That link again:
http://labs.qt.nokia.com/2010/06/25/qt-for-google-native-client-preview/#comment-7893
)

On 10 October 2010 01:44, John Zabroski  wrote:

> To be fair, Tim had the right idea with a Uri.

He also had a right idea with the Principle of Least Power. Thesis,
antithesis...

Leo.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-10 Thread Paul D. Fernhout

On 10/10/10 2:25 AM, Dirk Pranke wrote:

> On Sat, Oct 9, 2010 at 8:50 PM, Paul D. Fernhout wrote:
>> On 10/9/10 3:45 PM, Dirk Pranke wrote:
>>> C++ is a significant security concern; and it is reasonable to want a
>>> browser written in a memory-safe language.
>>>
>>> Unfortunately, web browsers are large, extremely
>>> performance-sensitive, legacy applications. All of the major browsers
>>> are written in some combination of C, C++, and Objective-C (and
>>> undoubtedly assembly in isolated areas like the JITs), and it's
>>> unclear if one can reasonably hope to see a web browser written from
>>> scratch in a new language to ever hope to render the majority of the
>>> current web correctly; the effort may simply be too large. I was not
>>> aware of Lobo; it looks interesting but currently idle, and is a fine
>>> example of this problem.
>>>
>>> I continue to hope, but I may be unreasonable :)
>>
>> Yes, that seems like a good description of the problem.
>>
>> How about this as a possibility towards a solution ...
>
> I think I'd rather try to write a browser from scratch than
> debug/maintain your solution ;)


Sure, with today's tools "debugging" a solution developed at a higher level 
of abstraction would be hard. So, sure, this is why no one does what I 
proposed with parsing C++ into an abstraction, working with the abstraction, 
and then regenerating C++ as an assembly language, and then trying to debug 
the C++ and change the abstraction and having round-trip problems with all 
that. Still, I bet people said that about Fortran -- how can you possibly 
debug a Fortran program when what you care about are the assembler instructions?


But like I implied at the start, by the (imagined) standards of, say, 2050, 
we don't have any "debuggers" worth anything. :-) To steal an idea from 
Marshall Brain:

  http://sadtech.blogspot.com/2005/01/premise-of-sadtech.html
"Have you ever talked with a senior citizen and heard the stories? Senior 
citizens love to tell about how they did things "way back when." For 
example, I know people who, when they were kids, lived in shacks, pulled 
their drinking water out of the well with a bucket, had an outhouse in the 
back yard and plowed the fields using a mule  and a hand plow. These people 
are still alive and kicking -- it was not that long ago that lots of people 
in the United States routinely lived that way. ... When we look at this kind 
of stuff from today's perspective, it is so sad. The whole idea of spending 
200 man-hours to create a single shirt is sad. The idea of typing a program 
one line at a time onto punch cards is sad, and Lord help you if you ever 
dropped the deck. The idea of pulling drinking water up from the well by the 
bucketful or crapping in a dark outhouse on a frigid winter night is sad. 
Even the thought of using the original IBM PC in 1982, with its 4.77 MHz 
processor, single-sided floppy disk and 16 KB of RAM is sad when you look at 
it just 20 years later. Now we can buy machines that are 1,000 times faster 
and have a million times more disk space for less than $1,000. But think 
about it -- the people who used these technologies at the time thought that 
they were on the cutting edge. They looked upon themselves as cool, hip, 
high-tech people: ..."


So, people fire up GDB on C++ (or whatever) and think they are cool and hip. 
:-) Or, for a more realistic example given C++ is a couple decades old, 
people fire up Firebug on their JavaScript and think they are cool and hip. 
:-) And, by today's standards, they are:

  http://getfirebug.com/
But by the standards of the future of new computing in 2050, Firebug, as 
awesome as it is now, lacks things that one might think would be common in 
2050, like:

  * monitoring a simulated end user's cognitive load;
  * monitoring what is going on at hundreds of network processing nodes;
  * integration with to-do lists and workflow management;
  * really useful conversational suggestions about things to think about in 
regard to potential requirements, functionality, algorithmic, 
implementation, networking, testing, social, paradigm, security, and 
stupidity bugs as well as all the other types of possible bugs we maintain 
in an entomological catalog in a semantic web (a few specimens of which are 
listed here:

  http://weblogs.asp.net/fbouma/archive/2003/08/01/22211.aspx );
  * easy archiving of the traces of sessions and an ability to run things 
backwards or in a parallelized many-worlds environment; and
  * an ability to help you debug an application's multiple abstractions 
(even if it is cool that it can help you debug what is going on with the 
server a bit: http://getfirebug.com/wiki/index.php/Firebug_Extensions )


So, to add something to a "to do" list for a universal debugger, it should 
be able to transparently deal with the interface between different levels of 
abstraction. :-) As well as all those other things.


Essentially, we need a debugger who is an entomologist. Or, to take it one 
step further, we need a debugger that likes bugs and cheris

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Dirk Pranke
On Sat, Oct 9, 2010 at 9:21 PM, John Zabroski  wrote:
>
>
> On Sat, Oct 9, 2010 at 3:45 PM, Dirk Pranke  wrote:
>>
>> On Fri, Oct 8, 2010 at 11:09 PM, Paul D. Fernhout
>>  wrote:
>> > Yes, there are similarities, you are right. I'm not familiar in detail
>> > because I have not used Chrome or looked at the code, but to my
>> > understanding Chrome does each tab as a separate process. And typically
>> > (not
>> > being an expert on Chrome) that process would run a rendering engine (or
>> > maybe not?), JavaScript (presumably?), and/or whatever downloaded
>> > plugins
>> > are relevant to that page (certainly?).
>> >
>>
>> Yes, each tab is roughly a separate process (real algorithm is more
>> complicated, as the Wikipedia article says). Rendering and JS are in
>> the same process, but plugins run in separate sandboxed processes.
>>
>> C++ is a significant security concern; and it is reasonable to want a
>> browser written in a memory-safe language.
>>
>> Unfortunately, web browsers are large, extremely
>> performance-sensitive, legacy applications. All of the major browsers
>> are written in some combination of  C, C++, and Objective-C (and
>> undoubtedly assembly in isolated areas like the JITs), and it's
>> unclear if one can reasonably hope to see a web browser written from
>> scratch in a new language to ever hope to render the majority of the
>> current web correctly; the effort may simply be too large. I was not
>> aware of Lobo; it looks interesting but currently idle, and is a fine
>> example of this problem.
>>
>> I continue to hope, but I may be unreasonable :)
>>
>>
>
> Most major browsers are doing a much better job making sure they understand
> performance, how to evaluate it and how to improve it.  C, C++, Objective-C,
> it doesn't matter.  The key is really domain-specific knowledge and knowing
> what to tune for.  You need a huge benchmark suite to understand what users
> are actually doing.
>
> Big advances in browsers are being made thanks to research into parallelizing
> browser rendering.  But there are a host of other bottlenecks, as
> Microsoft's IE team pointed out.  Ultimately when you are tuning at this
> scale everything looks like a compiler design issue, not a C, C++,
> Objective-C issue.  These all just become high level assembly languages.  A
> garbage JavaScript engine in C is no good.  A fast JavaScript engine in C
> written with extensive performance tuning and benchmarks to ensure no
> performance regressions is good.
>
> Silverlight (a managed runtime) blows away even hand-tuned
> JavaScript apps like the ones Google writes, by the way... unfortunately it
> uses no hardware accelerated rendering that I'm aware of.  Current browser
> vendors have version branches w/ hardware accelerated rendering that
> parallelizes perfectly.  Silverlight has its own problems, of course.

True. There is a video floating around [1] with Lars Bak (team lead
for the V8 engine) talking to Erik Meijer about how JavaScript VMs
compare to the JVM and the CLR and how the latter two will always be
significantly faster due to the differences in language designs and
the optimizations they enable.

But there's always NativeClient (as someone else linked to) if you
need raw speed :)

-- Dirk

[1] http://channel9.msdn.com/Shows/Going+Deep/Expert-to-Expert-Erik-Meijer-and-Lars-Bak-Inside-V8-A-Javascript-Virtual-Machine

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Dirk Pranke
On Sat, Oct 9, 2010 at 8:50 PM, Paul D. Fernhout
 wrote:
> On 10/9/10 3:45 PM, Dirk Pranke wrote:
>>
>> C++ is a significant security concern; and it is reasonable to want a
>> browser written in a memory-safe language.
>>
>> Unfortunately, web browsers are large, extremely
>> performance-sensitive, legacy applications. All of the major browsers
>> are written in some combination of  C, C++, and Objective-C (and
>> undoubtedly assembly in isolated areas like the JITs), and it's
>> unclear if one can reasonably hope to see a web browser written from
>> scratch in a new language to ever hope to render the majority of the
>> current web correctly; the effort may simply be too large. I was not
>> aware of Lobo; it looks interesting but currently idle, and is a fine
>> example of this problem.
>>
>> I continue to hope, but I may be unreasonable :)
>
> Yes, that seems like a good description of the problem.
>
> How about this as a possibility towards a solution ...

I think I'd rather try to write a browser from scratch than
debug/maintain your solution ;)

Also, re: running Chrome in VirtualBox, are you aware that the page renderers
are already run in sandboxed execution environments? It's unclear how much
more protection each page in a separate VM would really buy you.

-- Dirk

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread John Zabroski
On Sat, Oct 9, 2010 at 3:45 PM, Dirk Pranke  wrote:

> On Fri, Oct 8, 2010 at 11:09 PM, Paul D. Fernhout
>  wrote:
> > Yes, there are similarities, you are right. I'm not familiar in detail
> > because I have not used Chrome or looked at the code, but to my
> > understanding Chrome does each tab as a separate process. And typically
> (not
> > being an expert on Chrome) that process would run a rendering engine (or
> > maybe not?), JavaScript (presumably?), and/or whatever downloaded plugins
> > are relevant to that page (certainly?).
> >
>
> Yes, each tab is roughly a separate process (real algorithm is more
> complicated, as the Wikipedia article says). Rendering and JS are in
> the same process, but plugins run in separate sandboxed processes.
>
> C++ is a significant security concern; and it is reasonable to want a
> browser written in a memory-safe language.
>
> Unfortunately, web browsers are large, extremely
> performance-sensitive, legacy applications. All of the major browsers
> are written in some combination of C, C++, and Objective-C (and
> undoubtedly assembly in isolated areas like the JITs), and it's
> unclear whether a web browser written from scratch in a new language
> could ever hope to render the majority of the current web correctly;
> the effort may simply be too large. I was not aware of Lobo; it looks
> interesting but currently idle, and is a fine example of this problem.
>
> I continue to hope, but I may be unreasonable :)
>
>
>
Most major browser teams are doing a much better job of making sure they
understand performance: how to evaluate it and how to improve it.  C, C++,
Objective-C, it doesn't matter.  The key is really domain-specific knowledge
and knowing what to tune for.  You need a huge benchmark suite to understand
what users are actually doing.

Big advances in browsers are being made thanks to research into parallelizing
browser rendering.  But there are a host of other bottlenecks, as
Microsoft's IE team pointed out.  Ultimately, when you are tuning at this
scale, everything looks like a compiler design issue, not a C, C++,
Objective-C issue.  These all just become high-level assembly languages.  A
garbage JavaScript engine in C is no good.  A fast JavaScript engine in C
written with extensive performance tuning and benchmarks to ensure no
performance regressions is good.
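
By "tuning and benchmarks" I mean something mechanical that runs on every
build.  Here is a toy sketch of such a regression gate in Python (the file
name, tolerance, and benchmark are all made up for illustration):

  import json, os, time

  BASELINE_FILE = "bench_baseline.json"   # hypothetical baseline store
  TOLERANCE = 1.10                        # fail the build if >10% slower

  def time_one(fn):
      start = time.perf_counter()
      fn()
      return time.perf_counter() - start

  def measure(fn, repeats=5):
      # Best-of-N wall-clock time, to damp scheduling noise a little.
      return min(time_one(fn) for _ in range(repeats))

  def gate(name, fn):
      elapsed = measure(fn)
      baseline = {}
      if os.path.exists(BASELINE_FILE):
          baseline = json.load(open(BASELINE_FILE))
      if name in baseline and elapsed > baseline[name] * TOLERANCE:
          raise SystemExit("perf regression in %s: %.4fs vs %.4fs baseline"
                           % (name, elapsed, baseline[name]))
      baseline[name] = min(elapsed, baseline.get(name, elapsed))
      json.dump(baseline, open(BASELINE_FILE, "w"))

  gate("string_build", lambda: "".join(str(i) for i in range(100000)))

A real suite would of course run thousands of such gates, derived from what
users actually do.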

Silverlight (a managed runtime) blows away pretty much any JavaScript app,
even hand-tuned ones like those Google writes, by the way... unfortunately it
uses no hardware-accelerated rendering that I'm aware of.  Current browser
vendors have version branches with hardware-accelerated rendering that
parallelizes perfectly.  Silverlight has its own problems, of course.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Paul D. Fernhout

On 10/9/10 3:45 PM, Dirk Pranke wrote:

C++ is a significant security concern; and it is reasonable to want a
browser written in a memory-safe language.

Unfortunately, web browsers are large, extremely
performance-sensitive, legacy applications. All of the major browsers
are written in some combination of C, C++, and Objective-C (and
undoubtedly assembly in isolated areas like the JITs), and it's
unclear whether a web browser written from scratch in a new language
could ever hope to render the majority of the current web correctly;
the effort may simply be too large. I was not
aware of Lobo; it looks interesting but currently idle, and is a fine
example of this problem.

I continue to hope, but I may be unreasonable :)


Yes, that seems like a good description of the problem.

How about this as a possibility towards a solution. Use OMeta (or Antlr or 
whatever :-) to parse all the C++ code and output some kind of semantic 
representation formatted in Lisp, RDF, Cola, OCaml, JavaScript, pure 
prototype objects, or whatever. Then write AI-ish code that analyzes that 
abstraction and can write out code again in C++ (or JavaScript, Smalltalk, 
Assembly, Lisp, OCaml, or whatever) but having done some analysis so that it 
can be proved there are no possible buffer overruns or misused pointers or 
memory leaks (and if there are grey areas or halting problem issues, then 
stick in range checks, etc.). And maybe optimize some other stuff while it 
is at it, too. So, treat C++ like assembly language and try to write the 
browser at a higher level of abstraction. :-) Ideally, the end result of 
this round-trip will look *identical* to what it read in (but I'm happy if 
the formatting is a little different or a while loop gets changed to a for 
loop or whatever. :-)
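
To sketch the round-trip in miniature (a toy in Python, with a made-up
one-statement "grammar" standing in for OMeta/Antlr, and N standing in for
a hypothetical buffer bound the analysis could not prove):

  import re

  # Toy "grammar": the only statement we understand is dst[i] = src[j];
  ASSIGN = re.compile(r"(\w+)\[(\w+)\]\s*=\s*(\w+)\[(\w+)\];")

  def lift(line):
      """Parse one statement into a small semantic form (the abstraction)."""
      m = ASSIGN.match(line.strip())
      if not m:
          raise ValueError("toy parser only knows indexed assignment")
      dst, i, src, j = m.groups()
      return ("indexed-store", dst, i, ("indexed-load", src, j))

  def analyze(node, proven_in_bounds=()):
      """Decide which index variables still need a runtime range check."""
      _, _, i, (_, _, j) = node
      return [v for v in dict.fromkeys((i, j)) if v not in proven_in_bounds]

  def emit(node, checks):
      """Write C back out, inserting checks where safety wasn't proved."""
      _, dst, i, (_, src, j) = node
      lines = ["assert(%s >= 0 && %s < N);  /* inserted check */" % (v, v)
               for v in checks]
      lines.append("%s[%s] = %s[%s];" % (dst, i, src, j))
      return "\n".join(lines)

  node = lift("buf[i] = input[i];")
  print(emit(node, analyze(node)))

A real version would need a full C++ grammar and much smarter analysis, but
the shape of the pipeline -- lift, analyze, emit -- is the same.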


I've written some stuff that works like this (just a tiny bit, but no real 
AI) for reading our old Delphi code (using Antlr and Python) and creating an 
internal representation and then outputting code either in Java or 
Python/Jython. Now, libraries can be a mismatch problem (and are in that 
case, though I do a bit of the heavy lifting and then the programmer has to 
do a bunch of the rest), as can other semantic issues (and again, the 
conversion I did had some issues, but was better than rewriting by hand). 
But in theory, Delphi to Java/Jython/Swing should be doable as I outlined 
above -- with a bit more sophistication in the analysis, and at worst, with 
some additional tools to make whatever human input was required easier to do 
at a semantic level. :-)


Ideally, all the development and debugging would be done using the higher 
level abstraction, but if someone still modified the C++ code, presumably 
you could read the changes back into the higher level and integrate them 
back into the abstraction.


So, I guess my objection to C++ isn't so much that it is used in compiling 
browsers as that people are coding in it and not treating it as an 
intermediate language, and that it is not undergoing some kind of 
sophisticated automated analysis to check for problems every single time a 
build is made. For example, Squeak has its VM written in Squeak Smalltalk, 
but translates it to C. Now, that's not exactly what I am talking about, 
but it is closer (coding in Smalltalk is awkward in some ways, and while I 
have not generated a Squeak VM in a long time, I expect there could 
potentially be bugs in the Smalltalk that might lead to buffer problems in 
the C -- because there is no AI-ish programming nanny checking what is being 
done). I'm proposing a more complex abstraction, one perhaps encoded in 
semantic triples or some other fancy AI-ish representation, even if you 
might interact with that abstraction in more than one way (editing textual 
files or moving GUI items around or whatever).
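
As a tiny illustration of what "encoded in semantic triples" might look like 
(all the facts and predicate names below are invented for the example):

  # Program knowledge as (subject, predicate, object) triples, so an
  # editor, an analyzer, and a code generator can all query one store.
  facts = {
      ("copyPixels",  "is-a",       "function"),
      ("copyPixels",  "writes-to",  "frameBuffer"),
      ("frameBuffer", "has-length", "checked"),
      ("parseHeader", "is-a",       "function"),
      ("parseHeader", "writes-to",  "headerBuf"),
      # note: no ("headerBuf", "has-length", "checked") fact
  }

  def query(s=None, p=None, o=None):
      """Return all triples matching the pattern (None = wildcard)."""
      return [(a, b, c) for (a, b, c) in facts
              if s in (None, a) and p in (None, b) and o in (None, c)]

  # The "programming nanny" pass: every buffer written to needs a
  # checked length, or we complain.
  for func, _, buf in query(p="writes-to"):
      if not query(s=buf, p="has-length", o="checked"):
          print("warning: %s writes %s without a proven bound" % (func, buf))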


So, this way all that complexity about how to parse and render quirky HTML 
that is there can be preserved, but then it can be operated on by more 
sophisticated analysis tools, more sophisticated collaboration tools, and 
more sophisticated output tools.


Now, ideally, this is what FONC stuff should be able to support, but I don't 
know how far along OMeta etc. are to the point where they could make this 
easy? I get the feeling there is a whole other AI-ish layer of knowledge 
representation to reason about programs that would be needed above OMeta 
(perhaps like John was getting at with his points on formal analysis?).


Although, on the other hand, XHTML is coming along, and to render older 
pages I guess I could just do all my web browsing in a Debian installation 
in VirtualBox to sandbox potentially buggy C++ -- maybe even one VirtualBox 
install per loaded page? :-) Though two gigabytes here, two gigabytes there, 
and sooner or later we're talking real amounts of disk space for all those 
VirtualBox installs :-) But there is probably some way to get all the 
VirtualBoxes to share most of their virtual hard disks. 

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread John Zabroski
On Fri, Oct 8, 2010 at 9:17 PM, Dirk Pranke  wrote:

> On Fri, Oct 8, 2010 at 11:28 AM, John Zabroski 
> wrote:
> > JavaScript also doesn't support true delegation, as in the Actors Model
> of
> > computation.
> >
> > Also, Sencha Ext Designer is an abomination.  It is a fundamental
> > misunderstanding of the Web and how to glue together chunks of text via
> > "hyperlinks".  It is the same story for any number of technologies that
> > claim to "fix" the Web, including GWT... they are all not quite up to
> par,
> > at least by my standards.
> >
> > The fundamental problem with the Web is the Browser.  This is the
> > monstrous bug.
> >
> > The fundamental problem with Sencha Ext is that the quality of the code
> > isn't that great (many JavaScript programmers compound the flaws of the
> > Browser by not understanding how to effectively program against the
> Browser
> > model), and it also misunderstands distributed computing.  It encourages
> > writing applications as if they were still single-tier IBM computers from
> > the 1970s/80s costing thousands of dollars.
> >
> > Why are we stuck with such poor architecture?
> >
>
> Apologies if you have posted this before, but have you talked anywhere
> in more detail about what the "monstrous bug" is (specifically),


I have not discussed this much on the FONC list.  My initial opinions were
formed from watching Alan Kay's 1997 OOPSLA speech, The Computer Revolution
Hasn't Happened Yet (available on Google Video [1]).  One of Alan Kay's key
criticisms from that talk was that "You don't need a Browser."  I've
mentioned this quote and its surrounding context on the FONC list in the
past.  The only comment I've made on this list in the past, related to this
topic, is that the View Source feature in the modern Browser is an
abomination and completely non-object-oriented.  View Source is a property
of the network, not the document.  The document is just exposing a
representation.  If you want to build a debugger, then it needs to be based
on the network, not tied to some Browser that has to know how to interpret
its formats.  What we have right now with View Source is not a true Source.
It's some barf the Browser gives you because you don't know enough to
demand something better, richer.  Sure, it's possible if View Source is a
property of the network that a subnet can always refuse your question.
That's to be expected.  When that happens, you can fallback to the kludgy
View Source you have today.

99% of the people in this world to this point have been happy with it,
because they just haven't thought about what something better should do.
All they care about is whether they can steal somebody else's site design and
JavaScript image rollover effect, because editing and stealing live code is
even easier than googling and finding a site with these "Goodies". And the
site they would've googled probably just used the same approach of seeing
some cool effect and using View Source.  The only difference is the sites
about DHTML design decorated the rip with an article explaining how to use
it and common pitfalls the author encountered in trying to steal it.

For some context, to understand Alan's criticisms, you have to know Alan's
research and his research groups.  For example, in the talk I linked above,
Alan makes an overt reference to Tim Berners-Lee's complete
non-understanding of how to build complex systems (Alan didn't call out Tim
directly, but he said the Web is what happens when physicists play with
computers).  Why did Alan say such harsh things?  Because he can back it
up.  Some of his work at Apple on the Vivarium project was focused on
something much better than Tim's "Browser".  Tim won because people didn't
understand the difference and didn't really care for it (Worse Is Better).
To be fair, Tim had the right idea with a URI.  Roy Fielding's Ph.D. thesis
explains this (and hopefully if you're working on Chromium you've read that
important thesis, since it is probably as widely read as Claude Shannon's on
communication).  And both Alan and Tim understood languages needed good
resource structure.  See Tim's talk at the first-ever JavaOne in 1997 [2]
and Tim's criticism of Java (where he admits never having seen the language,
or used it, just stating what he thinks the most important thing about a VM
is [3]). But that's all Tim got right at first.  Tim got distracted by his
Semantic Web vision.  Sometimes having too huge a vision prevents you from
working on important small problems.  Compare Tim's quote in [3] to Alan's
comments about Java in [1] where Alan talks about meta-systems and
portability.


> or
> how programming for the web misunderstands distributed computing?
>

I've written on my blog a few criticisms, such as a somewhat incoherent
critique of what some developers called "SOFEA" architecture. For an
introduction to SOFEA, read [4].  My critique, which again was just me
rambling about its weaknesses and not meant for widespread consumption, ca

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Dethe Elza
On 2010-10-09, at 12:45 PM, Dirk Pranke wrote:
> [...] it's
> unclear whether a web browser written from scratch in a new language
> could ever hope to render the majority of the current web correctly;
> the effort may simply be too large. I was not
> aware of Lobo; it looks interesting but currently idle, and is a fine
> example of this problem.
> 
> I continue to hope, but I may be unreasonable :)

The Mozilla Foundation is creating the Rust language explicitly to have an 
alternative to C++ for building a web browser, so it may not be that 
unreasonable, in the medium term. Progress on Google's Go language as an 
alternative to C, and the addition of garbage collection to Objective-C, show 
there is a widespread need for alternatives to C/C++ among the folks who 
create browsers.

Rust:
http://github.com/graydon/rust/wiki/Project-FAQ

--Dethe

http://livingcode.org/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Dirk Pranke
On Fri, Oct 8, 2010 at 11:09 PM, Paul D. Fernhout
 wrote:
> Yes, there are similarities, you are right. I'm not familiar in detail
> because I have not used Chrome or looked at the code, but to my
> understanding Chrome does each tab as a separate process. And typically (not
> being an expert on Chrome) that process would run a rendering engine (or
> maybe not?), JavaScript (presumably?), and/or whatever downloaded plugins
> are relevant to that page (certainly?).
>

Yes, each tab is roughly a separate process (real algorithm is more
complicated, as the wikipedia article says). rendering and JS are in
the same process, but plugins run in separate sandboxed processes.

C++ is a significant security concern; and it is reasonable to want a
browser written in a memory-safe language.

Unfortunately, web browsers are large, extremely
performance-sensitive, legacy applications. All of the major browsers
are written in some combination of C, C++, and Objective-C (and
undoubtedly assembly in isolated areas like the JITs), and it's
unclear whether a web browser written from scratch in a new language
could ever hope to render the majority of the current web correctly;
the effort may simply be too large. I was not
aware of Lobo; it looks interesting but currently idle, and is a fine
example of this problem.

I continue to hope, but I may be unreasonable :)

-- Dirk

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-09 Thread Leo Richard Comerford
I have mentioned this before, but:
http://labs.qt.nokia.com/2010/06/25/qt-for-google-native-client-preview/#comment-7893

Leo.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Paul D. Fernhout

On 10/8/10 9:21 PM, Dirk Pranke wrote:

On Fri, Oct 8, 2010 at 2:04 PM, Paul D. Fernhout
  wrote:

It's totally stupid to use JavaScript as a VM for "world peace" since it
would be a lot better if every web page ran in its own well-designed VM and
you could create content that just compiled to the VM, and the VMs had some
sensible and secure way to talk to each other and respect each other's
security zones in an intrinsically and mutually secure way. :-)
  "Stating the 'bleeding' obvious (security is cultural)"
  http://groups.google.com/group/diaspora-dev/msg/17cf35b6ca8aeb00


You are describing something that is not far from Chrome's actual
design. It appears that the other browser vendors are moving in
similar directions. Are you familiar with it? Do you care to elaborate
(off-list, if you like) on what the differences between what Chrome
does and what you'd like are (apart from the JavaScript VM being not
particularly designed for anything other than JavaScript)?


Yes, there are similarities, you are right. I'm not familiar in detail 
because I have not used Chrome or looked at the code, but to my 
understanding Chrome does each tab as a separate process. And typically (not 
being an expert on Chrome) that process would run a rendering engine (or 
maybe not?), JavaScript (presumably?), and/or whatever downloaded plugins 
are relevant to that page (certainly?).


From:
  http://en.wikipedia.org/wiki/Google_Chrome
"Chrome will typically allocate each tab to fit into its own process  to 
"prevent malware from installing itself" and prevent what happens in one tab 
from affecting what happens in another, however, the actual 
process-allocation model is more complex.[49] Following the principle of 
least privilege, each process is stripped of its rights and can compute, but 
cannot write files or read from sensitive areas (e.g. documents, 
desktop)—this is similar to the "Protected Mode" used by Internet Explorer 
on Windows Vista and Windows 7. The Sandbox Team is said to have "taken this 
existing process boundary and made it into a jail";[50] for example, 
malicious software running in one tab is supposed to be unable to sniff 
credit card numbers entered in another tab, interact with mouse inputs, or 
tell Windows to "run an executable on start-up" and it will be terminated 
when the tab is closed.[15] This enforces a simple computer security model 
whereby there are two levels of multilevel security (user and sandbox) and 
the sandbox can only respond to communication requests initiated by the 
user.[51] Typically, plugins such as Adobe Flash Player are not standardized 
and as such, cannot be sandboxed as tabs can be. These often need to run at, 
or above, the security level of the browser itself. To reduce exposure to 
attack, plugins are run in separate processes that communicate with the 
renderer, itself operating at "very low privileges" in dedicated per-tab 
processes. Plugins will need to be modified to operate within this software 
architecture while following the principle of least privilege.[15] Chrome 
supports the Netscape Plugin Application Programming Interface (NPAPI),[52] 
but does not support the embedding of ActiveX controls.[52] On 30 March 2010 
Google announced that the latest development version of Chrome will include 
Adobe Flash as an integral part of the browser, eliminating the need to 
download and install it separately. Flash will be kept up to date as part of 
Chrome's own updates.[53] Java applet support is available in Chrome with 
Java 6 update 12 and above[54]. Support for Java under Mac OS X was provided 
by a Java Update released on May 18, 2010.[55]"


So, yes, especially from a security aspect, there is a lot of overlap.
  http://www.google.com/chrome/intl/en/more/index.html
  http://www.google.com/chrome/intl/en/more/security.html
"Google Chrome includes features to help protect you and your computer from 
malicious websites as you browse the web. Chrome uses technologies such as 
Safe Browsing, sandboxing, and auto-updates to help protect you against 
phishing and malware attacks."


Of course, we use Firefox and NoScript to do some of that just with 
JavaScript. I haven't installed Chrome in part just from concern that it is 
too much power for Google to have over my life. :-) But I've thought about it 
precisely for that feature of separate processes. But Firefox has been 
moving to run plugins in separate processes, so that is a step in that 
direction, too.


But, that's still not the same as having a common Virtual Machine for each 
tab, where you can expect to have, say, a JVM available to run the code for 
that page that meets some standards (even as you could have one installed in 
many situations essentially as a plugin, but 30% coverage or whatever is not 
100% coverage). And you'd need a VM optimized so there is essentially very 
little cost in time or memory to create and dispose of hundreds of these VMs 
(which is probably not yet the case for the

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Paul D. Fernhout

On 10/8/10 5:29 PM, John Zabroski wrote:

On Fri, Oct 8, 2010 at 5:04 PM, Paul D. Fernhout<
pdfernh...@kurtz-fernhout.com>  wrote:

But, the big picture issue I wanted to raise isn't about prototypes. It is
about more general issues -- like how do we have general tools that let us
look at all sorts of computing abstractions?

In biology, while it's true there are now several different types of
microscopes (optical, electron, STM, etc.) in general, we don't have a
special microscope developed for every different type of organism we want to
look at, which is the case now with, say, debuggers and debugging processes.

So, what would the general tools look like to let us debug anything? And
I'd suggest that would not be "gdb", as useful as that might be.


Computer scientists stink at studying living systems.  Most computer
scientists have absolutely zero experience studying and programming living
systems.  When I worked at BNL, I would have lunch with a biologist who was
named after William Tecumseh Sherman, who wrote his Ph.D. at NYU about
making nano-organisms dance.  That's the level of understanding and
practical experience I am talking about.


Was that a biologist or computer scientist?

I was in a PhD program in ecology and evolution for a time (as was my wife; 
that's where we met) and I can indeed agree there is a big difference in how people 
think about some things when they have an Ecology and Evolution background, 
because those fields are taught so badly (or not at all) in US schools 
(especially evolution). So, if you want to understand issues like 
complexity, our best models connect to ecology and evolution, but since so 
many CS types don't have that "soft and squishy" background, they just have 
no good metaphors to use for that. But one might make similar arguments 
about the value of the humanities and narrative in understanding complexity. 
Math itself is valuable, but it is also limited in many ways.


See also:
  "Studying Those Who Study Us: An Anthropologist in the World of 
Artificial Intelligence" by Diane Forsythe

  http://www.amazon.com/Studying-Those-Who-Study-Anthropologist/dp/0804742030
"[For a medical example] To build effective online health systems for 
end-users one must combine the knowledge of a medical professional, the 
skills of a programmer/developer, the perspective of a medical 
anthropologist, and the wisdom of Solomon. And since Solomon is not 
currently available, an insightful social scientist like Diana-who can help 
us see our current healthcare practices from a 'man-from-mars' 
perspective-can offer invaluable insights. ... Both builders and users of 
[CHI] systems tend to think of them simply as technical tools or 
problem-solving aids, assuming them to be value-free. However, observation 
of the system-building process reveals that this is not the case: the 
reasoning embedded in such systems reflects cultural values and disciplinary 
assumptions, including assumptions about the everyday world of medicine."


So, one may ask, what "values" are built into so many of the tools we use?


As for making things debuggable, distributed systems have a huge need for
compression of communication, and thus you can't expect humans to debug
compressed media.  You need a way to formally prove that when you uncompress
the media, you can just jump right in and debug it.


Oh, I'm not interested in proving anything in a CS sense. :-)

And debugging is only part of the issue. Part of it is also testing and 
learning and probing (which could be done as long as you have some 
consistent way of talking to the system under study -- consistent on your 
end as a user, even if you might, as Michael implies, need specialized 
backends for each system).


And, I'd add, all the stuff that surrounds that process, like making a 
hypothesis and testing it, as people do all the time when debugging, could 
be common.


There have been 
advances in compiler architecture geared towards this sort of thinking, such 
as the logic of bunched implications vis-à-vis Separation Logic, and even
more practical ideas in this direction, such as Xavier Leroy's
now famous compiler architecture for proving optimizing compiler
correctness.  The sorts of transformations that an application compiler like
GWT makes are pretty fancy, and if you want to look at the same GWT
application without compression today and just study what went wrong with
it, you can't.


I don't know about GWT, but Google's Closure claims to have a Firebug module 
that lets you debug somewhat optimized code:

  http://googlecode.blogspot.com/2009/11/introducing-closure-tools.html
"You can use the compiler with Closure Inspector, a Firebug extension that 
makes debugging the obfuscated code almost as easy as debugging the 
human-readable source."


Squeak had a mode where, saving only some variable names, it could 
essentially decompress compiled code into fairly readable source.


In general, to make systems multilingua

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Richard Karpinski
But wait. I think we need more complex types than are even allowed. When we
actually compute something on the back of an envelope, we have been taught
to carry all the units along explicitly, but when we set it up for a really
stupid computer to do it automatically, we are forbidden, almost always,
from even mentioning the units. This seems quite reckless to me.
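
And it takes remarkably little to do better. Here is a sketch in Python (a
toy for illustration, not a library recommendation), where a dimension is a
(meters, kilograms, seconds) exponent tuple carried along with the value:

  class Q:
      """A value carrying its units as (m, kg, s) exponents."""
      def __init__(self, value, dim):
          self.value, self.dim = value, dim

      def __add__(self, other):
          # Adding demands identical units, exactly as on the envelope.
          if self.dim != other.dim:
              raise TypeError("unit mismatch: %s vs %s" % (self.dim, other.dim))
          return Q(self.value + other.value, self.dim)

      def __truediv__(self, other):
          # Dividing subtracts unit exponents.
          return Q(self.value / other.value,
                   tuple(a - b for a, b in zip(self.dim, other.dim)))

      def __repr__(self):
          return "%g m^%d kg^%d s^%d" % ((self.value,) + self.dim)

  distance = Q(3.0, (1, 0, 0))    # 3 meters
  duration = Q(2.0, (0, 0, 1))    # 2 seconds

  print(distance / duration)      # 1.5 m^1 kg^0 s^-1, a speed
  print(distance + duration)      # raises TypeError: unit mismatch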

Why does it seem that no one cares that there are vast flaws in virtually
every project?

Why is no one concerned, in the most modern development techniques, with
testing, step by step, what actually works with the people who have to use
the resulting system?

Why does the whole industry accept as normal that most large projects fail,
and even those said to have succeeded are filled with cruft that never gets
used?

Aside from W. Edwards Deming, Tom Gilb, Jef Raskin, and me, I guess
everybody thinks that as long as they get paid, everything is fine.

Richard

On Fri, Oct 8, 2010 at 7:57 PM, Casey Ransberger
wrote:

> I think "type" is a foundationaly bad idea. What matters is that the object
> in question can respond intelligently to the message you're passing it. Or
> at least, that's what I think right now, anyway. It seems like type
> specification (and as such, early binding) have a very limited real use in
> the domain of really-actually-for-real-and-seriously mission critical
> systems, like those that guide missiles or passenger planes.
>
> In the large though, it really seems like specifying type is a lot of
> ceremonial overhead if what you need to say is really just some arguments to
> some function, or pass a message to some object.
>
> It might help if you explained what you meant by type. If you're thinking
> of using "class" as type, I expect you'll fail. Asking for an object's class
> in any case where one is not employing reflection to implement a tool for
> programmers reduces the power of polymorphism in your program. It can be
> argued easily that you shouldn't have to worry about type: you should be
> able to expect that your method's argument is something which sensibly
> implements a protocol that includes the message you're sending it. If you're
> talking about primitive types, e.g. a hardware integer/word, or a string as
> a series of bytes, then I suppose the conversation is different, right?
> Because if we're talking about machine primitives, we really aren't talking
> about objects at all, are we?
>
> On Oct 8, 2010, at 3:23 PM, spir  wrote:
>
> > On Fri, 8 Oct 2010 19:51:32 +0200
> > Waldemar Kornewald  wrote:
> >
> >> Hi,
> >>
> >> On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
> >>  wrote:
> >>> The PataPata project (by me) attempted to bring some ideas for Squeak
> and
> >>> Self to Python about five years ago. A post mortem critique on it from
> four
> >>> years ago:
> >>>  "PataPata critique: the good, the bad, the ugly"
> >>>  http://patapata.sourceforge.net/critique.html
> >>
> >> In that critique you basically say that prototypes *maybe* aren't
> >> better than classes, after all. On the other hand, it seems like most
> >> problems with prototypes weren't related to prototypes per se, but the
> >> (ugly?) implementation in Jython which isn't a real prototype-based
> >> language. So, did you have a fundamental problem with prototypes or
> >> was it more about your particular implementation?
> >
> > I have played with the design (and a half-way implementation) of a toy
> > prototype-based language and ended up thinking there is a semantic flaw
> > in this paradigm. Namely, the models we need to express in programs
> > constantly hold the notion of "kinds" of similar elements, which are
> > often held in collections; collections and types play together, in my
> > view. In other words, type is a fundamental modelling concept that
> > should be a core feature of any language.
> > Indeed, there are many ways to realise it concretely. In my view, the
> > notion of prototype (at least in the sense of Self or Io) is too weak and
> > vague. For instance, cloning does not help much in practice: programmers
> > constantly reinvent constructors, or even separate object creation from
> > initialisation. Having such features built in is conceptually helpful and
> > practically safer, but most importantly brings them in as the "common
> > wealth" of the programming community (a decisive argument for builtin
> > features, imo).
> > Conversely, class-based languages miss the notion of, and the freedom to
> > create, individual objects. Forcing the programmer to create a class for
> > a chess board is simply stupid to me, and worse: semantically wrong. It
> > prevents the program from mirroring the model.
> >
> >> Bye,
> >> Waldemar
> >
> >
> > Denis
> > -- -- -- -- -- -- --
> > vit esse estrany ☣
> >
> > spir.wikidot.com
> >
> >
> > ___
> > fonc mailing list
> > fonc@vpri.org
> > http://vpri.org/mailman/listinfo/fonc
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>



-- 
Richard Karpinski

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Casey Ransberger
I think "type" is a foundationaly bad idea. What matters is that the object in 
question can respond intelligently to the message you're passing it. Or at 
least, that's what I think right now, anyway. It seems like type specification 
(and as such, early binding) have a very limited real use in the domain of 
really-actually-for-real-and-seriously mission critical systems, like those 
that guide missiles or passenger planes. 

In the large though, it really seems like specifying type is a lot of 
ceremonial overhead if what you need to say is really just some arguments to 
some function, or pass a message to some object.  

It might help if you explained what you meant by type. If you're thinking of 
using "class" as type, I expect you'll fail. Asking for an object's class in 
any case where one is not employing reflection to implement a tool for 
programmers reduces the power of polymorphism in your program. It can be argued 
easily that you shouldn't have to worry about type: you should be able to 
expect that your method's argument is something which sensibly implements a 
protocol that includes the message you're sending it. If you're talking about 
primitive types, e.g. a hardware integer/word, or a string as a series of 
bytes, then I suppose the conversation is different, right? Because if we're 
talking about machine primitives, we really aren't talking about objects at 
all, are we?
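
For example, in Python (an illustrative sketch; the class names are made
up), the protocol-based check keeps the polymorphism that a class-as-type
check would throw away:

  class WavFile:
      def play(self): print("playing wav")

  class Synthesizer:               # no common base class with WavFile
      def play(self): print("playing synth patch")

  def audition(source):
      # Class-as-type would be: if not isinstance(source, WavFile): raise
      # ...which wrongly rejects Synthesizer. Ask about the message instead:
      if not callable(getattr(source, "play", None)):
          raise TypeError("source does not respond to 'play'")
      source.play()

  audition(WavFile())        # playing wav
  audition(Synthesizer())    # playing synth patch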

On Oct 8, 2010, at 3:23 PM, spir  wrote:

> On Fri, 8 Oct 2010 19:51:32 +0200
> Waldemar Kornewald  wrote:
> 
>> Hi,
>> 
>> On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
>>  wrote:
>>> The PataPata project (by me) attempted to bring some ideas for Squeak and
>>> Self to Python about five years ago. A post mortem critique on it from four
>>> years ago:
>>>  "PataPata critique: the good, the bad, the ugly"
>>>  http://patapata.sourceforge.net/critique.html
>> 
>> In that critique you basically say that prototypes *maybe* aren't
>> better than classes, after all. On the other hand, it seems like most
>> problems with prototypes weren't related to prototypes per se, but the
>> (ugly?) implementation in Jython which isn't a real prototype-based
>> language. So, did you have a fundamental problem with prototypes or
>> was it more about your particular implementation?
> 
> I have played with the design (and a half-way implementation) of a toy 
> prototype-based language and ended up thinking there is a semantic flaw in 
> this paradigm. Namely, the models we need to express in programs constantly 
> hold the notion of "kinds" of similar elements, which are often held in 
> collections; collections and types play together, in my view. In other 
> words, type is a fundamental modelling concept that should be a core 
> feature of any language.
> Indeed, there are many ways to realise it concretely. In my view, the 
> notion of prototype (at least in the sense of Self or Io) is too weak and 
> vague. For instance, cloning does not help much in practice: programmers 
> constantly reinvent constructors, or even separate object creation from 
> initialisation. Having such features built in is conceptually helpful and 
> practically safer, but most importantly brings them in as the "common 
> wealth" of the programming community (a decisive argument for builtin 
> features, imo).
> Conversely, class-based languages miss the notion of, and the freedom to 
> create, individual objects. Forcing the programmer to create a class for a 
> chess board is simply stupid to me, and worse: semantically wrong. It 
> prevents the program from mirroring the model.
> 
>> Bye,
>> Waldemar
> 
> 
> Denis
> -- -- -- -- -- -- --
> vit esse estrany ☣
> 
> spir.wikidot.com
> 
> 
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Dirk Pranke
On Fri, Oct 8, 2010 at 2:04 PM, Paul D. Fernhout
 wrote:
> It's totally stupid to use JavaScript as a VM for "world peace" since it
> would be a lot better if every web page ran in its own well-designed VM and
> you could create content that just compiled to the VM, and the VMs had some
> sensible and secure way to talk to each other and respect each other's
> security zones in an intrinsically and mutually secure way. :-)
>  "Stating the 'bleeding' obvious (security is cultural)"
>  http://groups.google.com/group/diaspora-dev/msg/17cf35b6ca8aeb00
>

You are describing something that is not far from Chrome's actual
design. It appears that the other browser vendors are moving in
similar directions. Are you familiar with it? Do you care to elaborate
(off-list, if you like) on what the differences between what Chrome
does and what you'd like are (apart from the JavaScript VM being not
particularly designed for anything other than JavaScript)?

-- Dirk

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Dirk Pranke
On Fri, Oct 8, 2010 at 11:28 AM, John Zabroski  wrote:
> JavaScript also doesn't support true delegation, as in the Actors Model of
> computation.
>
> Also, Sencha Ext Designer is an abomination.  It is a fundamental
> misunderstanding of the Web and how to glue together chunks of text via
> "hyperlinks".  It is the same story for any number of technologies that
> claim to "fix" the Web, including GWT... they are all not quite up to par,
> at least by my standards.
>
> The fundamental problem with the Web is the Browser.  This is the monstrous
> bug.
>
> The fundamental problem with Sencha Ext is that the quality of the code
> isn't that great (many JavaScript programmers compound the flaws of the
> Browser by not understanding how to effectively program against the Browser
> model), and it also misunderstands distributed computing.  It encourages
> writing applications as if they were still single-tier IBM computers from
> the 1970s/80s costing thousands of dollars.
>
> Why are we stuck with such poor architecture?
>

Apologies if you have posted this before, but have you talked anywhere
> in more detail about what the "monstrous bug" is (specifically), or
how programming for the web misunderstands distributed computing?

-- Dirk

> Cheers,
> Z-Bo
>
> On Fri, Oct 8, 2010 at 1:51 PM, Waldemar Kornewald 
> wrote:
>>
>>
>> > I am wondering if there is some value in reviving the idea for
>> > JavaScript?
>> >
>> > Firebug shows what is possible as a sort of computing microscope for
>> > JavaScript and HTML and CSS. Sencha Ext Designer shows what is possible
>> > as
>> > far as interactive GUI design.
>>
>> What exactly does JavaScript give you that you don't get with Python?
>>
>> If you want to have prototypes then JavaScript is probably the worst
>> language you can pick. You can't specify multiple delegates and you
>> can't change the delegates at runtime (unless your browser supports
>> __proto__, but even then you can only have one delegate). Also, as a
>> language JavaScript is just not as powerful as Python. If all you want
>> is a prototypes implementation that doesn't require modifications to
>> the interpreter then you can get that with Python, too (*without*
>> JavaScript's delegation limitations).
>>
>> Bye,
>> Waldemar
>>
>> --
>> Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
>> http://www.allbuttonspressed.com/
>>
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
>

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread spir
On Fri, 8 Oct 2010 19:51:32 +0200
Waldemar Kornewald  wrote:

> Hi,
> 
> On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
>  wrote:
> > The PataPata project (by me) attempted to bring some ideas for Squeak and
> > Self to Python about five years ago. A post mortem critique on it from four
> > years ago:
> >  "PataPata critique: the good, the bad, the ugly"
> >  http://patapata.sourceforge.net/critique.html
> 
> In that critique you basically say that prototypes *maybe* aren't
> better than classes, after all. On the other hand, it seems like most
> problems with prototypes weren't related to prototypes per se, but the
> (ugly?) implementation in Jython which isn't a real prototype-based
> language. So, did you have a fundamental problem with prototypes or
> was it more about your particular implementation?

I have played with the design (and a half-way implementation) of a toy 
prototype-based language and ended up thinking there is a semantic flaw in 
this paradigm. Namely, the models we need to express in programs constantly 
hold the notion of "kinds" of similar elements, which are often held in 
collections; collections and types play together, in my view. In other words, 
type is a fundamental modelling concept that should be a core feature of any 
language.
Indeed, there are many ways to realise it concretely. In my view, the notion 
of prototype (at least in the sense of Self or Io) is too weak and vague. For 
instance, cloning does not help much in practice: programmers constantly 
reinvent constructors, or even separate object creation from initialisation. 
Having such features built in is conceptually helpful and practically safer, 
but most importantly brings them in as the "common wealth" of the programming 
community (a decisive argument for builtin features, imo).
Conversely, class-based languages miss the notion of, and the freedom to 
create, individual objects. Forcing the programmer to create a class for a 
chess board is simply stupid to me, and worse: semantically wrong. It 
prevents the program from mirroring the model.

> Bye,
> Waldemar


Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread John Zabroski
On Fri, Oct 8, 2010 at 5:04 PM, Paul D. Fernhout <
pdfernh...@kurtz-fernhout.com> wrote:

>
> But, the big picture issue I wanted to raise isn't about prototypes. It is
> about more general issues -- like how do we have general tools that let us
> look at all sorts of computing abstractions?
>
> In biology, while it's true there are now several different types of
> microscopes (optical, electron, STM, etc.) in general, we don't have a
> special microscope developed for every different type of organism we want to
> look at, which is the case now with, say, debuggers and debugging processes.
>
> So, what would the general tools look like to let us debug anything? And
> I'd suggest that would not be "gdb", as useful as that might be.
>
>

Computer scientists stink at studying living systems.  Most computer
scientists have absolutely zero experience studying and programming living
systems.  When I worked at BNL, I would have lunch with a biologist who was
named after William Tecumseh Sherman, who wrote his Ph.D. at NYU about
making nano-organisms dance.  That's the level of understanding and
practical experience I am talking about.

As for making things debuggable, distributed systems have a huge need for
compression of communication, and thus you can't expect humans to debug
compressed media.  You need a way to formally prove that when you uncompress
the media, you can just jump right in and debug it.  There have been
advances in compiler architecture geared towards this sort of thinking, such
as the logic of bunched implications vis-à-vis Separation Logic, and even
more practical ideas in this direction, such as Xavier Leroy's
now famous compiler architecture for proving optimizing compiler
correctness.  The sorts of transformations that an application compiler like
GWT makes are pretty fancy, and if you want to look at the same GWT
application without compression today and just study what went wrong with
it, you can't.  What you need to do, in my humble opinion, is focus on
proving that mappings between representations are isomorphic and non-lossy,
even if one representation needs hidden embeddings (interpreted as no-ops by
a syntax-directed compiler) to map back to the other.
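
As a trivial sketch of the flavor I mean (illustrative Python, nothing like
GWT's actual machinery): a "minifier" that emits, alongside its output, the
exact table needed to invert it, so the mapping between the two
representations is non-lossy by construction:

  import re

  IDENT = re.compile(r"[A-Za-z_][A-Za-z_0-9]*")

  def minify(source):
      table = {}
      def rename(m):
          name = m.group(0)
          if name not in table:
              table[name] = "v%d" % len(table)   # v0, v1, v2, ...
          return table[name]
      out = IDENT.sub(rename, source)
      return out, {v: k for k, v in table.items()}   # output + inverse map

  def unminify(minified, inverse):
      # With the inverse map, a "debugger" recovers the original exactly.
      return IDENT.sub(lambda m: inverse[m.group(0)], minified)

  src = "total_price = unit_price * quantity"
  out, inverse = minify(src)
  print(out)                              # v0 = v1 * v2
  assert unminify(out, inverse) == src    # the round-trip is exact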

There are also other fancy techniques being developed in programming
language theory (PLT) right now.  Phil Wadler and Jeremy Siek's Blame
Calculus is a good illustration of how to study a living system in a
creative way (but does not provide a complete picture, akin to not knowing
you need to stain a slide before putting it under the microscope), and so is
Carl Hewitt's ActorScript and Direct Logic.  These are the only efforts I am
aware of that try to provide some information on why something happened at
runtime.


> I can usefully point the same microscope at a feather, a rock, a leaf, and
> pond water. So, why can't I point the same debugger at a Smalltalk image, a
> web page with JavaScript served by a Python CGI script, a VirtualBox
> emulated Debian installation, and a semantic web trying to understand a
> "jobless recovery"?
>
> I know that may sound ludicrous, but that's my point. :-)
>


What the examples I gave above have in common is that there are certain
limitations on how general you can make this, just as Oliver Heaviside
suggested we discard the balsa wood ship models for engineering equations
derived from Maxwell.


>
> But when you think about it, there might be a lot of similarities at some
> level in thinking about those four things in terms of displaying
> information, moving between conceptual levels, maintaining to-do lists,
> doing experiments, recording results, communicating progress, looking at
> dependencies, reasoning about complex topics, and so on. But right now, I
> can't point one debugger at all those things, and even suggesting that we
> could sounds absurd. Of course, most things that sound absurd really are
> absurd, but still: "If at first, the idea is not absurd, then there is no
> hope for it (Albert Einstein)"
>
> In March, John Zabroski wrote: "I am going to take a break from the
> previous thread of discussion.  Instead, it seems like most people need a
> tutorial in how to think BIG."
>
> And that's what I'm trying to do here.
>

Thanks for the kind words.  I have shared my present thoughts with you here.
 But a tutorial in my eyes is more about providing people with a site where
they can go to and just be engulfed in big, powerful ideas.  The FONC wiki
is certainly not that.  Most of the interesting details in the project are
buried and not presented in exciting ways, or if they are, they are still
buried and require somebody to dig them up.  That is a huge bug.  In short,
the FONC wiki is not even a wiki.  It is a chalkboard with one chalk stick,
and it is locked away in some teacher's desk.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Paul D. Fernhout

On 10/8/10 1:51 PM, Waldemar Kornewald wrote:

On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
  wrote:

The PataPata project (by me) attempted to bring some ideas for Squeak and
Self to Python about five years ago. A post mortem critique on it from four
years ago:
  "PataPata critique: the good, the bad, the ugly"
  http://patapata.sourceforge.net/critique.html


In that critique you basically say that prototypes *maybe* aren't
better than classes, after all. On the other hand, it seems like most
problems with prototypes weren't related to prototypes per se, but the
(ugly?) implementation in Jython which isn't a real prototype-based
language. So, did you have a fundamental problem with prototypes or
was it more about your particular implementation?


Waldemar-

Thanks for the comments.

A main practical concern was the issue of managing complexity and 
documenting intent in a prototype system, especially in a rough-edges 
environment that mixed a couple layers (leading to confusing error messages).


Still, to some extent, saying "complexity is a problem" is kind of like 
blaming disease on "bad vapors". We need to be specific about things to do, 
like suggesting that you usually have to name things before you can share 
them and talking about naming systems or ways of reconciling naming 
conflicts, which is the equivalent of understanding that there is some 
relation between many diseases and bacteria, even if that does not tell you 
exactly what you should be doing about bacteria (given that people could not 
survive without them).


I think, despite what I said, the success of JavaScript does show that 
prototypes can do the job -- even though, as Alan Kay said, what is most 
important about the magic of Smalltalk is message passing, not objects (or 
classes, or for that matter prototypes). I think the way prototype languages 
work in practice, without message passing, and with a confusion between 
communicating with them through slots versus functions, is problematical; 
so I'd still rather be working in Smalltalk, which ultimately I think has a 
paradigm that can scale better than either imperative or functional languages.


But, the big picture issue I wanted to raise isn't about prototypes. It is 
about more general issues -- like how do we have general tools that let us 
look at all sorts of computing abstractions?


In biology, while it's true there are now several different types of 
microscopes (optical, electron, STM, etc.) in general, we don't have a 
special microscope developed for every different type of organism we want to 
look at, which is the case now with, say, debuggers and debugging processes.


So, what would the general tools look like to let us debug anything? And I'd 
suggest that would not be "gdb", as useful as that might be.


I can usefully point the same microscope at a feather, a rock, a leaf, and 
pond water. So, why can't I point the same debugger at a Smalltalk image, a 
web page with JavaScript served by a Python CGI script, a VirtualBox 
emulated Debian installation, and a semantic web trying to understand a 
"jobless recovery"?


I know that may sound ludicrous, but that's my point. :-)

But when you think about it, there might be a lot of similarities at some 
level in thinking about those four things in terms of displaying 
information, moving between conceptual levels, maintaining to-do lists, 
doing experiments, recording results, communicating progress, looking at 
dependencies, reasoning about complex topics, and so on. But right now, I 
can't point one debugger at all those things, and even suggesting that we 
could sounds absurd. Of course, most things that sound absurd really are 
absurd, but still: "If at first, the idea is not absurd, then there is no 
hope for it (Albert Einstein)"


In March, John Zabroski wrote: "I am going to take a break from the previous 
thread of discussion.  Instead, it seems like most people need a tutorial in 
how to think BIG."


And that's what I'm trying to do here. There are billions of computers out 
there running JavaScript, HTML, and CSS (and some other stuff, powered by 
CGI stuff). How can we think big about that overall global message passing 
system? Billions of computers connected closely to humans supported with 
millions of customized tiny applications (each a web page) interacting as a 
global dynamic semantic space are just going to be more interesting than a 
few thousand computers running some fancy new kernel with some fancy new 
programming language. But, should not "new computing" take into account this 
reality somehow?


I've also got eight cores on my desktop, most of them idle most of the time, 
and I have electric heat so most of the year it does not cost me anything to 
run them. So, raw performance is not so important as it used to be. What is 
important are the conceptual abstractions as well as the practical 
connection with what people are willing to easily try.


Now, people made the same argument to me t

Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread John Zabroski
Even modern technology like Windows Phone 7 encourages, as part of its App
Store submission guidelines, apps to hardwire support for two screen
resolutions.  This is bizarre considering the underlying graphics
implementation is resolution-independent.

These bad choices add up.  As Gerry Weinberg wrote in Secrets of Consulting,
*Things are the way they are because they got that way ... one logical step
at a time*.

But bad choices keep us employed in our current roles (as consultants, as
in-house IT, etc.).

Cheers,
Z-Bo

On Fri, Oct 8, 2010 at 2:38 PM, Waldemar Kornewald wrote:

> On Fri, Oct 8, 2010 at 8:28 PM, John Zabroski 
> wrote:
> > Why are we stuck with such poor architecture?
>
> A bad language attracts bad code. ;)
>
> Bye,
> Waldemar
>
> --
> Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
> http://www.allbuttonspressed.com/blog/django
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Waldemar Kornewald
On Fri, Oct 8, 2010 at 8:28 PM, John Zabroski  wrote:
> Why are we stuck with such poor architecture?

A bad language attracts bad code. ;)

Bye,
Waldemar

-- 
Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
http://www.allbuttonspressed.com/blog/django

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread John Zabroski
JavaScript also doesn't support true delegation, as in the Actors Model of
computation.

Also, Sencha Ext Designer is an abomination.  It is a fundamental
misunderstanding of the Web and how to glue together chunks of text via
"hyperlinks".  It is the same story for any number of technologies that
claim to "fix" the Web, including GWT... they are all not quite up to par,
at least by my standards.

The fundamental problem with the Web is the Browser.  This is the monstrous
bug.

The fundamental problem with Sencha Ext is that the quality of the code
isn't that great (many JavaScript programmers compound the flaws of the
Browser by not understanding how to effectively program against the Browser
model), and it also misunderstands distributed computing.  It encourages
writing applications as if they were still single-tier IBM computers from
the 1970s/80s costing thousands of dollars.

Why are we stuck with such poor architecture?

Cheers,
Z-Bo

On Fri, Oct 8, 2010 at 1:51 PM, Waldemar Kornewald wrote:

>
>
> > I am wondering if there is some value in reviving the idea for
> JavaScript?
> >
> > Firebug shows what is possible as a sort of computing microscope for
> > JavaScript and HTML and CSS. Sencha Ext Designer shows what is possible
> as
> > far as interactive GUI design.
>
> What exactly does JavaScript give you that you don't get with Python?
>
> If you want to have prototypes then JavaScript is probably the worst
> language you can pick. You can't specify multiple delegates and you
> can't change the delegates at runtime (unless your browser supports
> __proto__, but even then you can only have one delegate). Also, as a
> language JavaScript is just not as powerful as Python. If all you want
> is a prototypes implementation that doesn't require modifications to
> the interpreter then you can get that with Python, too (*without*
> JavaScript's delegation limitations).
>
> Bye,
> Waldemar
>
> --
> Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
> http://www.allbuttonspressed.com/
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-08 Thread Waldemar Kornewald
Hi,

On Fri, Oct 8, 2010 at 5:20 PM, Paul D. Fernhout
 wrote:
> The PataPata project (by me) attempted to bring some ideas for Squeak and
> Self to Python about five years ago. A post mortem critique on it from four
> years ago:
>  "PataPata critique: the good, the bad, the ugly"
>  http://patapata.sourceforge.net/critique.html

In that critique you basically say that prototypes *maybe* aren't
better than classes, after all. On the other hand, it seems like most
problems with prototypes weren't related to prototypes per se, but the
(ugly?) implementation in Jython which isn't a real prototype-based
language. So, did you have a fundamental problem with prototypes or
was it more about your particular implementation?

> I am wondering if there is some value in reviving the idea for JavaScript?
>
> Firebug shows what is possible as a sort of computing microscope for
> JavaScript and HTML and CSS. Sencha Ext Designer shows what is possible as
> far as interactive GUI design.

What exactly does JavaScript give you that you don't get with Python?

If you want to have prototypes then JavaScript is probably the worst
language you can pick. You can't specify multiple delegates and you
can't change the delegates at runtime (unless your browser supports
__proto__, but even then you can only have one delegate). Also, as a
language JavaScript is just not as powerful as Python. If all you want
is a prototypes implementation that doesn't require modifications to
the interpreter then you can get that with Python, too (*without*
JavaScript's delegation limitations).
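
For example, here is a minimal sketch (not a finished library) of prototype
objects in plain Python, with the two things you can't get from JavaScript's
__proto__: multiple delegates, changeable at runtime:

  class Proto:
      def __init__(self, *delegates, **slots):
          self.__dict__["delegates"] = list(delegates)
          self.__dict__.update(slots)

      def __getattr__(self, name):
          # Called only when 'name' is not a local slot: search the
          # delegates in order, so lookup order is the delegate list order.
          for d in self.__dict__["delegates"]:
              try:
                  return getattr(d, name)
              except AttributeError:
                  pass
          raise AttributeError(name)

  walker  = Proto(move=lambda: "walks")
  swimmer = Proto(move=lambda: "swims", dive=lambda: "dives")

  duck = Proto(walker, swimmer, name="duck")
  print(duck.name, duck.move(), duck.dive())   # duck walks dives

  duck.delegates.remove(walker)                # re-delegate at runtime
  print(duck.move())                           # swims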

Bye,
Waldemar

--
Django on App Engine, MongoDB, ...? Browser-side Python? It's open-source:
http://www.allbuttonspressed.com/

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc